
The Science Journal is graduating from Google — coming to Arduino this fall!

via Arduino Blog

This post was written by Valentina Chinnici, Arduino Product Manager.

Arduino and Google are excited to announce that the Science Journal app will be transferring from Google to Arduino this September! Arduino’s existing experience with the Science Journal and its long-standing commitment to open source and hands-on science have been crucial to the decision to transfer ownership of the open source project over to Arduino.

Google’s versions of the app will officially stop receiving support and updates on December 11th, 2020, with Arduino continuing all support and app development moving forward, including a brand new Arduino integration for iOS.

Arduino Science Journal will include support for the Arduino Nano 33 BLE Sense board, as well as the Arduino Science Kit, with students able to document science experiments and record observations using their own Android or iOS device. The Science Journal actively encourages students to learn outside of the classroom, delivering accessible resources that support both teachers and students in remote or in-person activities. For developers, the Arduino version will remain open, with code, APIs, and firmware available to help them create innovative new projects.

“Arduino’s heritage in both education and open source makes us the ideal partner to take on and develop the great work started by Google with the Science Journal,” commented Fabio Violante, Arduino CEO. “After all, Arduino has been enabling hands-on learning experiences for students and hobbyists since it was founded in 2005. Our mission is to shape the future of the next generation of STEAM leaders, and to give them more equitable and affordable access to complete, hands-on, and engaging learning experiences, in line with the UN Sustainable Development Goal of Quality Education.”

In 2019, we released the Arduino Science Kit, an Arduino-based physics lab that’s fully compatible with the Science Journal. Moving forward, all new updates to the app will take place through Arduino’s new version of the Science Journal, available in September. 

The new Arduino version of the app will still be free and open to let users measure the world around them using the capabilities built into their phone, tablet, and Chromebook. Furthermore, Arduino will be providing better integration between the Science Journal and existing Arduino products and education programs. 

Stay tuned for Arduino’s version of the Science Journal, coming to iOS and Android in September 2020!

The three pillars of the Arduino CLI

via Arduino Blog

The Arduino CLI is an open source command line application written in Golang that can be used from a terminal to compile, verify and upload sketches to Arduino boards, and that’s capable of managing all the software and tools needed in the process. But don’t be fooled by its name: the Arduino CLI can do much more than the average console application, as shown by the Pro IDE and Arduino Create, which rely on it for similar purposes, yet each in a completely different way.

In this article, we introduce the three pillars of the Arduino CLI, explaining how we designed the software so that it can be effectively leveraged under different scenarios.

The first pillar: command line interface

Console applications for humans

As you might expect, the first way to use the Arduino CLI is from a terminal and by a human, and user experience plays a key role here. The UX is under a continuous improvement process as we want the tool to be powerful without being too complicated. We heavily rely on sub-commands to provide a rich set of different operations logically grouped together, so that users can easily explore the interface while getting very specific contextual help.

Console applications for robots

Humans are not the only type of customer we want to support, and the Arduino CLI was also designed to be used programmatically — think of automation pipelines or a CI/CD system.

There are some niceties to observe when you write software that’s supposed to be easy to run unattended, and one in particular is the ability to run without a configuration file. This is possible because every configuration option you find in the arduino-cli.yaml configuration file can be provided either through a command line flag or by setting an environment variable. To give an example, the following commands are equivalent and will fetch the unstable package index that can be used to work with experimental versions of cores:
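(The invocations below are a hedged reconstruction of that example; the additional-URLs option name and the staging index URL are assumptions, so check the configuration documentation for the exact form.)

    # option passed as a command line flag
    arduino-cli core update-index \
      --additional-urls https://downloads.arduino.cc/packages/package_staging_index.json

    # the same option set through an environment variable
    ARDUINO_BOARD_MANAGER_ADDITIONAL_URLS=https://downloads.arduino.cc/packages/package_staging_index.json \
      arduino-cli core update-index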

See the documentation for details about Arduino CLI’s configuration system.

Consistent with the previous paragraph, when it comes to producing output the Arduino CLI aims to be user friendly and therefore slightly verbose, something that doesn’t play well with robots. This is why we added an option to produce output that’s easy to parse. For example, this is what getting the software version in JSON format looks like:
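(A hedged illustration; the field names and values below are placeholders and may differ between releases.)

    arduino-cli version --format json
    {"Application":"arduino-cli","VersionString":"0.13.0","Commit":"0000000","Date":"2020-09-01T00:00:00Z"}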

Even if not related to software design, one last feature that’s worth mentioning is the availability of a one-line installation script that can be used to make the latest version of the Arduino CLI available on most systems with an HTTP client like curl or wget and a shell like bash.
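At the time of writing, that one-liner looks like the following (verify it against the installation page of the documentation, since the script location may change):

    curl -fsSL https://raw.githubusercontent.com/arduino/arduino-cli/master/install.sh | sh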

The second pillar: gRPC interface

gRPC is a high-performance RPC framework that can efficiently connect client and server applications. The Arduino CLI can act as a gRPC server (we call it daemon mode), exposing a set of procedures that implements the very same features as the command line interface, and waiting for clients to connect and use them. To give an idea, the following is some Golang code capable of retrieving the version number of a remote running Arduino CLI server instance:
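(A minimal, hedged sketch of such a client. The generated package path, type names and the default daemon address are assumptions that vary across arduino-cli releases, so refer to the gRPC reference in the documentation.)

    // Hedged sketch: ask a running Arduino CLI daemon ("arduino-cli daemon")
    // for its version over gRPC. Import path and generated type names below
    // are assumptions and may differ in the release you are using.
    package main

    import (
        "context"
        "fmt"
        "log"

        rpc "github.com/arduino/arduino-cli/rpc/commands" // assumed import path
        "google.golang.org/grpc"
    )

    func main() {
        // Connect to the daemon on its assumed default address.
        conn, err := grpc.Dial("localhost:50051", grpc.WithInsecure())
        if err != nil {
            log.Fatalf("cannot connect to the daemon: %v", err)
        }
        defer conn.Close()

        // Call the Version procedure exposed by the daemon.
        client := rpc.NewArduinoCoreClient(conn)
        resp, err := client.Version(context.Background(), &rpc.VersionReq{})
        if err != nil {
            log.Fatalf("Version call failed: %v", err)
        }
        fmt.Println("Arduino CLI version:", resp.GetVersion())
    }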

gRPC is language-agnostic: even if the example is written in Golang, the client can be written in Python, JavaScript or any of the many supported languages, leading to a variety of possible scenarios. The new Arduino Pro IDE is a good example of how to leverage the daemon mode of the Arduino CLI with a clean separation of concerns: the Pro IDE knows nothing about how to download a core, compile a sketch or talk to an Arduino board, and it delegates all of these features to a running Arduino CLI instance. Conversely, the Arduino CLI doesn’t even know that the connected client is the Pro IDE, and neither does it care.

The third pillar: embedding

The Arduino CLI is written in Golang, and the code is organized in a way that makes it easy to use as a library by including the modules you need in another Golang application at compile time. Both the first and second pillars rely on a common Golang API: a set of functions that abstracts all the functionality offered by the Arduino CLI, so that when we provide a fix or a new feature it is automatically available to both the command line and gRPC interfaces.

The source modules implementing this API can be imported in other Golang programs to embed a full-fledged Arduino CLI. For example, this is how some backend services powering Arduino Create can compile sketches and manage libraries. Just to give you a taste of what it means to embed the Arduino CLI, here is how to search for a core using the API:
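(What follows is only a hedged sketch of the idea: the package layout, request types and function signatures of the Go API have changed across releases, so every name used here is an assumption rather than the definitive API; check the commands package for the release you depend on.)

    // Hedged sketch: embed the Arduino CLI as a Go library and search the
    // package index for cores matching "samd". All import paths and
    // signatures are assumptions.
    package main

    import (
        "fmt"
        "log"

        "github.com/arduino/arduino-cli/cli/instance"  // assumed import path
        "github.com/arduino/arduino-cli/commands/core" // assumed import path
        rpc "github.com/arduino/arduino-cli/rpc/commands"
    )

    func main() {
        // Create a CLI instance: this loads configuration and package indexes.
        inst, err := instance.CreateInstance()
        if err != nil {
            log.Fatalf("cannot create the CLI instance: %v", err)
        }

        // Search the index for platforms whose name matches "samd".
        resp, err := core.PlatformSearch(&rpc.PlatformSearchReq{
            Instance:   inst,
            SearchArgs: "samd",
        })
        if err != nil {
            log.Fatalf("platform search failed: %v", err)
        }
        for _, platform := range resp.GetSearchOutput() {
            fmt.Println(platform.GetId(), platform.GetLatest())
        }
    }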

Embedding the Arduino CLI is limited to Golang applications and requires a deep knowledge of its internals. For the average use case, the gRPC interface might be a better alternative; nevertheless this remains a valid option that we use and provide support for.

Conclusion

You can start playing with the Arduino CLI right away. The code is open source and we provide extensive documentation. The repo contains example code showing how to implement a gRPC client, and if you’re curious about how we designed the low-level API, have a look at the commands package. Don’t hesitate to leave feedback on the issue tracker if you’ve got a use case that doesn’t fit one of the three pillars.

Arduino Security Primer

via Arduino Blog

SSL/TLS stack and HW secure element

At Arduino, we are hard at work to keep improving the security of our hardware and software products, and we would like to run you through how our IoT Cloud service works.

The Arduino IoT Cloud’s security is based on three key elements:

  • The open-source library ArduinoBearSSL for implementing TLS protocol on Arduino boards;
  • A hardware secure element (Microchip ATECCX08A) to guarantee authenticity and confidentiality during communication;
  • A device certificate provisioning process to allow client authentication during MQTT sessions.

ArduinoBearSSL

In the past, it has been challenging to create a complete SSL/TLS library implementation on embedded (constrained) devices with very limited resources. 

An Arduino MKR WiFi 1010, for instance, only has 32KB of RAM while the standard SSL/TLS protocol implementations were designed for more powerful devices with ~256MB of RAM.

As of today, a lot of embedded devices still do not properly implement the full SSL/TLS stack and fail to deliver good security because they misuse the library or strip functionality from it. For example, we found that a lot of off-brand boards use code that does not actually validate the server’s certificate, making them an easy target for server impersonation and man-in-the-middle attacks.

Security is paramount to us, and we do not want to make compromises in this regard when it comes to our offering in both hardware and software. We are therefore always looking at “safe by default” settings and implementations. 

Particularly in the IoT era, operating without specific security measures in place puts customers and their data at risk.

This is why we wanted to make sure the security standards adopted nowadays in high-performance settings are ported to microcontrollers (MCUs) and embedded devices.

Back in 2017, while looking at different SSL/TLS libraries supporting TLS 1.2 and modern cryptography (something that could work with very little RAM/ROM footprint, have no OS dependency, and be compatible with the embedded C world), we decided to give BearSSL a try.

BearSSL: What is it?

BearSSL provides an implementation of the SSL/TLS protocol (RFC 5246) written in C and developed by Thomas Pornin.

Optimized for constrained devices, BearSSL aims for a small code footprint and low RAM usage. As per its guiding rules, it tries to find a reasonable trade-off between several partly conflicting goals:

  • Security: defaults should be robust, and using patently insecure algorithms or protocols should be made difficult in the API, or simply not possible;
  • Interoperability with existing SSL/TLS servers;
  • Support for lightweight algorithms on CPU-challenged platforms;
  • Extensibility, with strong and efficient implementations on big systems where code footprint is less important.

BearSSL and Arduino

Our development team picked it as an excellent starting point and set out to make BearSSL fit our Arduino boards, focusing on both security and performance.

The firmware development team worked hard on porting BearSSL to Arduino, bundling it into a clean, open-source library: ArduinoBearSSL.
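As a minimal, hedged sketch of what using the port looks like (assuming a WiFiNINA-based board such as the MKR WiFi 1010; the network credentials and server name are placeholders), a TLS connection is opened by wrapping a plain network client in a BearSSLClient:

    // Hedged sketch: open a TLS connection with ArduinoBearSSL on a board
    // that uses WiFiNINA. Credentials and server name are placeholders.
    #include <ArduinoBearSSL.h>
    #include <WiFiNINA.h>

    const char ssid[] = "my-network";   // placeholder
    const char pass[] = "my-password";  // placeholder

    WiFiClient wifiClient;               // plain TCP client
    BearSSLClient sslClient(wifiClient); // TLS layer on top of it

    unsigned long getTime() {
      return WiFi.getTime(); // certificate validation needs the current time
    }

    void setup() {
      Serial.begin(9600);
      while (!Serial);

      while (WiFi.begin(ssid, pass) != WL_CONNECTED) {
        delay(5000);
      }

      // Tell the TLS stack how to obtain the current time.
      ArduinoBearSSL.onGetTime(getTime);

      if (sslClient.connect("example.org", 443)) {
        Serial.println("TLS connection established");
        sslClient.stop();
      } else {
        Serial.println("TLS connection failed");
      }
    }

    void loop() {}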

Because the computational effort of performing cryptographic algorithms is high, we decided to offload part of this task to hardware, using a secure element (we often call it a “crypto chip”). Its advantages are:

  • It makes the computation of cryptographic operations faster;
  • It frees you from dedicating a large share of your device’s RAM to these demanding tasks;
  • It allows private keys to be stored securely (more on this later);
  • It provides a true random number generator (TRNG).

How does the TLS protocol work?

TLS uses both asymmetric and symmetric encryption. Asymmetric encryption is used during the TLS handshake between the client and the server to exchange the shared session key used to encrypt the communication. The algorithms commonly used in this phase are based on Rivest-Shamir-Adleman (RSA) or Diffie-Hellman key exchange.

TLS 1.2 Handshake flow

After the TLS handshake, the client and the server both have a session key for symmetric encryption (e.g. algorithms AES 128 or AES 256).

The TLS protocol is an important part of our IoT Cloud security model because it guarantees an encrypted communication between the IoT devices and our servers.

The secure element

In order to save memory and improve security, our development team chose to introduce a hardware secure element to offload part of the computational load of the cryptographic algorithms, as well as to generate, store, and manage certificates. For this reason, on the Arduino MKR family, the Arduino Nano 33 IoT and the Arduino Uno WiFi Rev2, you will find the secure element ATECC508A or ATECC608A manufactured by Microchip.

How do we use the secure element?

A secure element is an advanced hardware component able to perform cryptographic functions. We decided to include one on our boards to guarantee two fundamental security properties in IoT communication:

  • Authenticity: You can trust who you are communicating with;
  • Confidentiality: You can be sure the communication is private.

Moreover, the secure element is used during the provisioning process to configure the Arduino board for Arduino IoT Cloud. In order to connect to the Arduino IoT Cloud MQTT broker, our boards don’t use standard credential authentication (a username/password pair). Instead, we opted to implement a higher-level form of authentication known as client certificate authentication.

How does the Arduino provisioning work?

The whole process is possible thanks to an API, which exposes an endpoint a client can interact with.

In the provisioning flow, the client first requests to register a new device on Arduino IoT Cloud via the API, and the server (API) returns a UUID (Universally Unique IDentifier). At this point, the user can upload the Provisioning.ino sketch to the target board. This code is responsible for several tasks:

  • Generating a private key using the ATECCX08A and storing it in a secure slot that can only be read by the secure element;
  • Generating a CSR (Certificate Signing Request) that uses the device UUID as Common Name (CN) and is signed with the generated private key;
  • Storing the certificate signed by Arduino, which acts as the certificate authority.

After the CSR generation, the user sends it via the API to the server and the server returns a certificate signed by Arduino. This certificate is stored, in a compressed format, in a slot of the secure element (usually in slot 10) and it is used to authenticate the device to the Arduino IoT Cloud.
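As a hedged sketch of the key-generation and CSR steps, based on the CSR utility shipped with the ArduinoECCX08 library (helper names may differ between library versions, and the device UUID below is a placeholder):

    // Hedged sketch: generate a private key inside the secure element and
    // produce a CSR whose Common Name is the device UUID. Names follow the
    // ArduinoECCX08 library's CSR utility; verify against your library version.
    #include <ArduinoECCX08.h>
    #include <utility/ECCX08CSR.h>

    const int keySlot = 0; // the private key never leaves this slot
    const char deviceUuid[] = "00000000-0000-0000-0000-000000000000"; // placeholder UUID

    void setup() {
      Serial.begin(9600);
      while (!Serial);

      if (!ECCX08.begin()) {
        Serial.println("No ECCX08 secure element present!");
        while (1);
      }

      // Start a CSR, generating a brand new private key in keySlot.
      if (!ECCX08CSR.begin(keySlot, true)) {
        Serial.println("Error starting CSR generation!");
        while (1);
      }
      ECCX08CSR.setCommonName(deviceUuid);

      // Sign the CSR with the in-chip private key and print it in PEM format,
      // ready to be sent to the API in exchange for an Arduino-signed certificate.
      String csr = ECCX08CSR.end();
      Serial.println(csr);
    }

    void loop() {}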

Machine vision with low cost camera modules

via Arduino Blog

If you’re interested in embedded machine learning (TinyML) on the Arduino Nano 33 BLE Sense, you’ll have found a ton of on-board sensors — digital microphone, accelerometer, gyro, magnetometer, light, proximity, temperature, humidity and color — but realized that for vision you need to attach an external camera.

In this article, we will show you how to get image data from a low-cost VGA camera module. We’ll be using the Arduino_OV767X library to make the software side of things simpler.

Hardware setup

To get started, you will need:

  • An Arduino Nano 33 BLE Sense board (with headers)
  • An OV7670 camera module
  • A breadboard and jumper wires

You can of course get a board without headers and solder instead, if that’s your preference.

The one downside to this setup is that (in module form) there are a lot of jumpers to connect. It’s not hard, but you need to take care to connect the right wires at either end. You can use tape to secure the wires once everything is working, lest one come loose.

You need to connect the wires as follows:

Software setup

First, install the Arduino IDE or register for Arduino Create tools. Once you install and open your environment, the camera library is available in the library manager.

  • Install the Arduino IDE or register for Arduino Create
  • Tools > Manage Libraries and search for the OV767 library
  • Press the Install button

Now, we will use the example sketch to test the cables are connected correctly:

  • Examples > Arduino_OV767X > CameraCaptureRawBytes
  • Uncomment (remove the //) from line 48 to display a test pattern
Camera.testPattern();
  • Compile and upload to your board

Your Arduino is now outputting raw image binary over serial. To view this as an image, we’ve included a special application, written in Processing, that renders the image output from the camera.

Processing is a simple programming environment that was created by graduate students at MIT Media Lab to make it easier to develop visually oriented applications with an emphasis on animation and providing users with instant feedback through interaction.

  • Install and open Processing 
  • Paste the CameraVisualizerRawBytes code into a Processing sketch
  • Edit lines 31-37 to match the machine and serial port your Arduino is connected to
  • Hit the play button in Processing and you should see a test pattern (image update takes a couple of seconds):

If all goes well, you should see the striped test pattern!

Next we will go back to the Arduino IDE and edit the sketch so the Arduino sends a live image from the camera in the Processing viewer: 

  • Return to the Arduino IDE
  • Comment out line 48 of the Arduino sketch
// We've disabled the test pattern and will display a live image
// Camera.testPattern();
  • Compile and upload to the board
  • Once the sketch is uploaded hit the play button in Processing again
  • After a few seconds you should now have a live image:

Considerations for TinyML

The full VGA (640×480 resolution) output from our little camera is way too big for current TinyML applications. uTensor runs handwriting detection with MNIST using 28×28 images. The person detection example in TensorFlow Lite for Microcontrollers uses 96×96 images, which is more than enough. Even state-of-the-art ‘Big ML’ applications often only use 320×320 images (see the TinyML book). Also consider that an 8-bit grayscale VGA image occupies 300KB uncompressed, while the Nano 33 BLE Sense has 256KB of RAM. We have to do something to reduce the image size!

Camera format options

The OV7670 module supports lower resolutions through configuration options. The options modify the image data before it reaches the Arduino. The configurations currently available via the library are:

  • VGA – 640 x 480
  • CIF – 352 x 240
  • QVGA – 320 x 240
  • QCIF – 176 x 144

This is a good start as it reduces the amount of time it takes to send an image from the camera to the Arduino. It reduces the size of the image data array required in your Arduino sketch as well. You select the resolution by changing the value in Camera.begin. Don’t forget to change the size of your array too.

Camera.begin(QVGA, RGB565, 1)

The camera library also offers different color formats: YUV422, RGB444 and RGB565. These define how the color values are encoded, and all of them occupy 2 bytes per pixel in our image data. We’re using the RGB565 format, which has 5 bits for red, 6 bits for green, and 5 bits for blue.
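As a hedged illustration of keeping the pixel buffer in sync with the Camera.begin() configuration (QCIF is used here so the frame fits comfortably in RAM: 176 × 144 × 2 bytes ≈ 50KB):

    // Hedged sketch: the pixel buffer must match the resolution and color
    // format passed to Camera.begin(). RGB565 uses 2 bytes (one unsigned
    // short) per pixel, so a QCIF frame needs 176 * 144 * 2 bytes of RAM.
    #include <Arduino_OV767X.h>

    unsigned short pixels[176 * 144]; // QCIF frame in RGB565

    void setup() {
      Serial.begin(9600);
      while (!Serial);

      if (!Camera.begin(QCIF, RGB565, 1)) {
        Serial.println("Failed to initialize camera!");
        while (1);
      }
    }

    void loop() {
      Camera.readFrame(pixels);                       // grab one frame
      Serial.write((uint8_t*)pixels, sizeof(pixels)); // stream it over serial
    }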

Converting the 2-byte RGB565 pixel to individual red, green, and blue values in your sketch can be accomplished as follows:

    // Convert from RGB565 to 24-bit RGB

    uint16_t pixel = (high << 8) | low;

    int red   = ((pixel >> 11) & 0x1f) << 3;
    int green = ((pixel >> 5) & 0x3f) << 2;
    int blue  = ((pixel >> 0) & 0x1f) << 3;

Resizing the image on the Arduino

Once we get our image data onto the Arduino, we can then reduce the size of the image further. Just removing pixels will give us a jagged (aliased) image. To do this more smoothly, we need a downsampling algorithm that can interpolate pixel values and use them to create a smaller image.

The techniques used to resample images are an interesting topic in themselves. We found that the simple downsampling example from Eloquent Arduino works fine with the Arduino_OV767X camera library output.
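To make the idea concrete, here is a hedged sketch of block averaging (not the Eloquent Arduino code itself): every 2×2 block of source pixels is averaged into a single output pixel, halving the width and height of an 8-bit grayscale image.

    // Hedged sketch: downsample an 8-bit grayscale image by averaging 2x2
    // blocks. Buffer sizes are illustrative; adapt them to your resolution.
    const int srcW = 176, srcH = 144;           // e.g. a QCIF frame
    const int dstW = srcW / 2, dstH = srcH / 2;

    void downsample2x2(const uint8_t src[], uint8_t dst[]) {
      for (int y = 0; y < dstH; y++) {
        for (int x = 0; x < dstW; x++) {
          // Sum the four source pixels covered by this destination pixel...
          int sum = src[(2 * y) * srcW + (2 * x)]
                  + src[(2 * y) * srcW + (2 * x + 1)]
                  + src[(2 * y + 1) * srcW + (2 * x)]
                  + src[(2 * y + 1) * srcW + (2 * x + 1)];
          // ...and store their average.
          dst[y * dstW + x] = sum / 4;
        }
      }
    }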

Applications like the TensorFlow Lite Micro person detection example that use CNN-based models on Arduino for machine vision may not need any further preprocessing of the image — other than averaging the RGB values to produce 8-bit grayscale data for each pixel.
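Building on the RGB565 conversion shown earlier, a hedged one-liner for that grayscale step might look like this:

    // Hedged sketch: collapse the red, green and blue values from the RGB565
    // conversion above into a single 8-bit grayscale value by averaging.
    byte gray = (red + green + blue) / 3;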

However, if you do want to perform normalization, iterating across pixels using the Arduino max and min functions is a convenient way to obtain the upper and lower bounds of input pixel values. You can then use map to scale the output pixel values to a 0-255 range.

byte pixelOut = map(input[y][x][c], lower, upper, 0, 255); 
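Putting the whole normalization together, a hedged sketch for a single-channel grayscale buffer (the dimensions and the gray array are placeholders for your own image data) could be:

    // Hedged sketch: stretch an 8-bit grayscale image so its pixel values
    // span the full 0-255 range. WIDTH, HEIGHT and gray[][] are placeholders.
    const int WIDTH = 96, HEIGHT = 96;
    byte gray[HEIGHT][WIDTH];

    void normalize() {
      byte lower = 255;
      byte upper = 0;

      // First pass: find the darkest and brightest pixel values.
      for (int y = 0; y < HEIGHT; y++) {
        for (int x = 0; x < WIDTH; x++) {
          lower = min(lower, gray[y][x]);
          upper = max(upper, gray[y][x]);
        }
      }

      // Second pass: scale every pixel (guard against a flat image first).
      if (upper > lower) {
        for (int y = 0; y < HEIGHT; y++) {
          for (int x = 0; x < WIDTH; x++) {
            gray[y][x] = map(gray[y][x], lower, upper, 0, 255);
          }
        }
      }
    }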

Conclusion

This was an introduction to how to connect an OV7670 camera module to the Arduino Nano 33 BLE Sense and some considerations for obtaining data from the camera for TinyML applications. There’s a lot more to explore on the topic of machine vision on Arduino — this is just a start!

Two-factor authentication on Arduino

via Arduino Blog

Today, we’re announcing a new security feature for our community: two-factor authentication (2FA) on Arduino web services. We have implemented two-step verification for logging in to arduino.cc, so our users can better protect their accounts.

When enabled, two-factor authentication adds an extra security layer to the user’s account, giving better protection to the IoT devices connected to Arduino IoT Cloud. We encourage our users to enable 2FA to improve their online safety.

How to enable two-factor authentication

Arduino supports two-factor authentication via authenticator software such as Authy or Google Authenticator. To enable 2FA on your account:

1. Go to id.arduino.cc and click on Activate in the Security frame of your account:

2. Scan the QR code using your own authenticator app (e.g. Authy, Google Authenticator, Microsoft Authenticator, etc.)

3. Your authenticator app will now show a six-digit code that changes every 30 seconds: copy it into the text field and click Verify.

4. Important: Save your Recovery code in a safe place and do not lose it. If you lose your 2FA codes (e.g. you misplace or break your phone), you can still restore your account using the recovery code. If you lose both 2FA and recovery codes, you will no longer be able to access your account.

5. Great! You now have two-factor authentication enabled on your Arduino account.

Exploring open-source ventilators: Apollo Ventilator

via Arduino Blog

This article was written by César Garcia, researcher at La Hora Maker.

This week, we will be exploring the Apollo Ventilator in detail! The project emerged at Makespace Madrid two months ago, in response to the first news of an expected shortage of ventilators in Spain caused by COVID-19.

Several members of the space decided to explore this problem. They joined Telegram groups and started participating in the coronavirus maker forum. There, they stumbled upon an initial design shared by a doctor, which would serve as the starting point for the ventilator project.

Credits: Apollo Ventilator (Photo by Apollo Ventilator Team)

To advance the project, a small but active group met daily at “Makespace Virtual,” a virtual space running the open-source video conferencing software Jitsi. Each of the eight core members contributed their expertise in design, engineering, coding, and more. Due to the confinement measures in place, access to the physical space was quite limited, so everyone decided to work from home and a single person would merge all the advances at the makespace itself. A few weeks later, doctors from La Paz Hospital in Madrid got in touch with the Apollo team, looking for ways to work together on the ventilator.

One of the hardest challenges to overcome was the lack of medical materials, as global demand has disrupted supply chains everywhere! The team had to improvise with the means at their disposal. To regulate the flow of gases, they created a 3D-printed pinch valve that collapses a medical-grade silicone tube at the input. This mechanism is controlled using the same electronics used in 3D printers: an Arduino Mega 2560 board with a RAMPS shield!

Credits: 3D-printed valve pinch (Photo by Apollo Ventilator Team)

As for sensors, they decided to go for certified versions that could be sterilized in an autoclave. They looked everywhere without success. A few days later, they got support from a large electronics supplier, which provided an equivalent model suitable for children or adults up to 80 kg.

They decided to work on a shared repository to coordinate all the distributed efforts. This attracted new members and talent, doubling the team in size and sparking new lines of development. The Apollo Ventilator is an open-source project, meaning that newcomers can learn from it and create new features together.

Based on their experience sourcing components, they wanted Apollo to be flexible. Most other certified ventilators are too specific. Instead, they want to become “the Marlin for ventilators!” Marlin is one of the most widely used 3D printer firmware projects in the world; it can manage all kinds of boards and adapt to different configurations easily.

In the case of the Apollo Ventilator, the initial setup runs on a single Arduino Mega board and uses the attached computer as the display. The current code can also be configured to use a secondary Arduino board, connected over a serial port, as a display. As for the interface, there are several alternatives using GTK and Qt. It’s also possible to publish the data over MQTT, so readings from many ventilators can be centralized. Other alternative builds have even used regular snorkeling gear! The Apollo Ventilator aspires to serve as the basis for new projects and initiatives where off-the-shelf solutions are not available. Another potential outcome would be low-cost ventilators for veterinary practice or education.

Credits: Apollo Ventilator made out of snorkeling equipment (Photo by Apollo Ventilator Team)

The Apollo Ventilator is currently under development. Right now, the team plans to expand testing on lung simulators. Next steps would involve working with hospitals and veterinary schools; they will tackle these phases once the medical services are less overwhelmed.

The Apollo Ventilator takes its name from the famous Apollo missions to the Moon, which overcame every obstacle to take us where humanity had never been before. This project shares the same spirit in regard to open-source ventilators: the team is trying to help overcome one of the biggest contemporary challenges, the COVID-19 pandemic.

To learn more about the Apollo Ventilator, you can check out its repository. At the same link you can also find an interview (in Spanish) with Javi, the Apollo Ventilator’s project lead.

If you’d like to know more about Makespace Madrid, visit their website.

Arduino staff and Arduino community are strongly committed to support projects aimed at fighting and lessening the impact of COVID-19. Arduino products are essential for both R&D and manufacturing purposes related to the global response to Covid-19, in building digital medical devices and manufacturing processes for medical equipment and PPE. However, all prototypes and projects aimed to fight COVID-19 using Arduino open-source electronics and digital fabrication do not create any liability to Arduino (company, community and Arduino staff members). Neither Arduino nor Arduino board, staff members and community will be responsible in any form and to any extent for losses or damages of whatever nature (direct, indirect, consequential, or other) which may arise related to Arduino prototypes, Arduino electronic equipment for critical medical devices, research operations, forum and blog discussions and in general Covid-19 Arduino-based pilot and non pilot projects, independently of the Arduino control on progress or involvement in the research, development, manufacturing and in general implementation phases.