Monthly Archives: June 2020

Machine vision with low cost camera modules

via Arduino Blog

If you’re interested in embedded machine learning (TinyML) on the Arduino Nano 33 BLE Sense, you’ll have found a ton of on-board sensors — digital microphone, accelerometer, gyro, magnetometer, light, proximity, temperature, humidity and color — but realized that for vision you need to attach an external camera.

In this article, we will show you how to get image data from a low-cost VGA camera module. We'll be using the Arduino_OV767X library to make the software side of things simpler.

Hardware setup

To get started, you will need:

  • An Arduino Nano 33 BLE Sense board, with headers
  • An OV7670 camera module
  • A breadboard and jumper wires
  • A micro USB cable to program the board

You can of course get a board without headers and solder instead, if that’s your preference.

The one downside to this setup is that (in module form) there are a lot of jumpers to connect. It’s not hard, but you need to take care to connect the right cables at either end. Once everything is working, you can use tape to secure the wires so none come loose.

You need to connect power (3.3V and GND), the camera’s SIOC/SIOD configuration pins (to the Nano’s A5/A4 I2C pins), the clock and sync lines (XCLK, PCLK, VSYNC, HREF), and the eight data pins D0-D7; the full pin-to-pin mapping is listed in the Arduino_OV767X library documentation.

Software setup

First, install the Arduino IDE or register for the Arduino Create tools. Once you’ve installed and opened your environment, the camera library is available in the library manager.

  • Install the Arduino IDE or register for Arduino Create
  • Tools > Manage Libraries and search for the OV767X library
  • Press the Install button

Now, we will use the example sketch to test that the cables are connected correctly:

  • Examples > Arduino_OV767X > CameraCaptureRawBytes
  • Uncomment line 48 (remove the //) to display a test pattern:
Camera.testPattern();
  • Compile and upload to your board

Your Arduino is now outputting raw image data over serial. To view it as an image, we’ve included a small application, built with Processing, that displays the image output from the camera.

Processing is a simple programming environment that was created by graduate students at MIT Media Lab to make it easier to develop visually oriented applications with an emphasis on animation and providing users with instant feedback through interaction.

  • Install and open Processing 
  • Paste the CameraVisualizerRawBytes code into a Processing sketch
  • Edit lines 31-37 to match the machine and serial port your Arduino is connected to
  • Hit the play button in Processing and you should see a test pattern (image update takes a couple of seconds)

If all goes well, you should see the striped test pattern!

Next, we will go back to the Arduino IDE and edit the sketch so the Arduino sends a live image from the camera to the Processing viewer:

  • Return to the Arduino IDE
  • Comment out line 48 of the Arduino sketch
// We've disabled the test pattern and will display a live image
// Camera.testPattern();
  • Compile and upload to the board
  • Once the sketch is uploaded hit the play button in Processing again
  • After a few seconds you should have a live image

Considerations for TinyML

The full VGA (640×480) output from our little camera is way too big for current TinyML applications. uTensor runs handwriting detection on MNIST using 28×28 images. The person detection example in TensorFlow Lite for Microcontrollers uses 96×96, which is more than enough. Even state-of-the-art ‘Big ML’ applications often use only 320×320 images (see the TinyML book). Also consider that an uncompressed 8-bit grayscale VGA image occupies 300KB (640 × 480 × 1 byte = 307,200 bytes), while the Nano 33 BLE Sense has only 256KB of RAM. We have to do something to reduce the image size!

Camera format options

The OV7670 module supports lower resolutions through configuration options, which modify the image data before it reaches the Arduino. The configurations currently available via the library are:

  • VGA – 640 x 480
  • CIF – 352 x 240
  • QVGA – 320 x 240
  • QCIF – 176 x 144

This is a good start, as it reduces the amount of time it takes to send an image from the camera to the Arduino, and it reduces the size of the image data array required in your Arduino sketch. You select the resolution by changing the value in Camera.begin. Don’t forget to change the size of your array too.

Camera.begin(QVGA, RGB565, 1)
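
Pulling this together, a minimal QVGA capture sketch (modeled on the CameraCaptureRawBytes example, with the buffer sized to match the resolution) might look like this:

    #include <Arduino_OV767X.h>

    // QVGA is 320x240 and RGB565 uses 2 bytes per pixel
    unsigned short pixels[320 * 240];

    void setup() {
      Serial.begin(9600);
      while (!Serial);

      if (!Camera.begin(QVGA, RGB565, 1)) {
        Serial.println("Failed to initialize camera!");
        while (1);
      }
    }

    void loop() {
      // Read one frame into the buffer and stream it out over serial
      Camera.readFrame(pixels);
      Serial.write((uint8_t*) pixels, sizeof(pixels));
    }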

The camera library also offers different color formats: YUV422, RGB444 and RGB565. These define how the color values are encoded, and all occupy 2 bytes per pixel in our image data. We’re using the RGB565 format, which has 5 bits for red, 6 bits for green, and 5 bits for blue.

Converting the 2-byte RGB565 pixel to individual red, green, and blue values in your sketch can be accomplished as follows:

    // Convert a 2-byte RGB565 pixel to individual 8-bit red, green and
    // blue values; 'high' and 'low' are the two bytes read for one pixel
    uint16_t pixel = (high << 8) | low;

    // Mask off each color field, then shift it up to fill an 8-bit range
    int red   = ((pixel >> 11) & 0x1f) << 3;
    int green = ((pixel >> 5)  & 0x3f) << 2;
    int blue  = ((pixel >> 0)  & 0x1f) << 3;

Resizing the image on the Arduino

Once we get our image data onto the Arduino, we can then reduce the size of the image further. Just removing pixels will give us a jagged (aliased) image. To do this more smoothly, we need a downsampling algorithm that can interpolate pixel values and use them to create a smaller image.

The techniques used to resample images are an interesting topic in themselves. We found that the simple downsampling example from Eloquent Arduino works fine with the Arduino_OV767X camera library output.
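
As a sketch of the idea (a minimal 2×2 block average, not the Eloquent Arduino code itself; it assumes an 8-bit grayscale buffer, and the names are illustrative):

    // Halve the width and height of an 8-bit grayscale image by
    // averaging each 2x2 block of source pixels into one output pixel.
    // srcW and srcH must be even.
    void downsample2x(const uint8_t* src, int srcW, int srcH, uint8_t* dst) {
      int dstW = srcW / 2;
      for (int y = 0; y < srcH / 2; y++) {
        for (int x = 0; x < dstW; x++) {
          int sum = src[(2 * y) * srcW + (2 * x)]
                  + src[(2 * y) * srcW + (2 * x + 1)]
                  + src[(2 * y + 1) * srcW + (2 * x)]
                  + src[(2 * y + 1) * srcW + (2 * x + 1)];
          dst[y * dstW + x] = sum / 4; // average of the four source pixels
        }
      }
    }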

Applications like the TensorFlow Lite Micro person detection example that use CNN-based models on Arduino for machine vision may not need any further preprocessing of the image, other than averaging the RGB values to produce 8-bit grayscale data for each pixel.
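
For instance, reusing the red, green and blue values recovered from the RGB565 conversion above (a simple unweighted average; weighted luma is another option):

    // Collapse the three color channels into one 8-bit grayscale value
    uint8_t gray = (red + green + blue) / 3;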

However, if you do want to perform normalization, iterating across pixels using the Arduino max and min functions is a convenient way to obtain the upper and lower bounds of input pixel values. You can then use map to scale the output pixel values to a 0-255 range.

byte pixelOut = map(input[y][x][c], lower, upper, 0, 255); 
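
A full normalization pass might look like this (a sketch only; input, output, WIDTH, HEIGHT and CHANNELS are placeholder names for your own buffers and dimensions):

    // Find the global minimum and maximum pixel values...
    int lower = 255;
    int upper = 0;

    for (int y = 0; y < HEIGHT; y++) {
      for (int x = 0; x < WIDTH; x++) {
        for (int c = 0; c < CHANNELS; c++) {
          lower = min(lower, (int) input[y][x][c]);
          upper = max(upper, (int) input[y][x][c]);
        }
      }
    }

    // ...then stretch every pixel to the full 0-255 range
    for (int y = 0; y < HEIGHT; y++) {
      for (int x = 0; x < WIDTH; x++) {
        for (int c = 0; c < CHANNELS; c++) {
          output[y][x][c] = map(input[y][x][c], lower, upper, 0, 255);
        }
      }
    }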

Conclusion

This was an introduction to how to connect an OV7670 camera module to the Arduino Nano 33 BLE Sense and some considerations for obtaining data from the camera for TinyML applications. There’s a lot more to explore on the topic of machine vision on Arduino — this is just a start!

Capture the flag with a GPS, RFID and LoRa twist

via Arduino Blog

Capture the flag can be fun, but Karel Bousson has put a new spin on the game that allows you to compete against neighbors over who can keep a single item — a modified tool case — in their possession the longest.

The box contains an Arduino Mega that interfaces with an RFID reader to enable the current owner to scan in, plus a GPS module for location data. Additionally, an LDR sensor is used to set the brightness of an LED matrix on the outside.

Data is passed along to a Raspberry Pi for time-of-possession tracking via LoRa and The Things Network. The Pi also runs a server that shows game info to the other players, meaning that you’ll have to be very careful to keep the case around!

Code for the project is available on GitHub.

This Arduino-controlled robot leaves messages in the sand

via Arduino Blog

Ivan Miranda has come up with a novel method for drawing messages in the sand, using a tread assembly that prints as it travels along the beach.

The robot uses a length of square tubing to connect a pair of half tanks, with 50 SG90 micro servos spaced out along the bottom. As it pulls itself along, the servos, controlled by a total of three Arduino Mega boards, intermittently extend into the sand. This creates lines that combine to form individual letters.

You can see the build process in the video below, including his initial trial at around the 11:00 mark. This is actually Miranda’s second attempt at a “beach drawer”; his first version, which uses a much different technique, can be seen here.

Two-factor authentication on Arduino

via Arduino Blog

Today, we’re announcing a new security feature for our community: two-factor authentication (2FA) on Arduino web services. We have implemented two-step verification for logins to arduino.cc, giving our users an extra safeguard for their accounts.

When enabled, two-factor authentication adds an extra security layer to your account, providing better protection for IoT devices connected to the Arduino IoT Cloud. We encourage all our users to enable 2FA to improve their online safety.

How to enable two-factor authentication

Arduino supports two-factor authentication via authenticator apps such as Authy or Google Authenticator. To enable 2FA on your account:

1. Go to id.arduino.cc and click Activate in the Security section of your account.

2. Scan the QR code using your own authenticator app (e.g. Authy, Google Authenticator, Microsoft Authenticator, etc.)

3. Your authenticator app will now display a six-digit code that changes every 30 seconds: enter it in the text field and click Verify.

4. Important: Save your recovery code in a safe place and do not lose it. If you lose your 2FA codes (e.g. you misplace or break your phone), you can still restore your account using the recovery code. If you lose both your 2FA and recovery codes, you will no longer be able to access your account.

5. Great! Two-factor authentication is now enabled on your Arduino account.

Building an Arduino-based bipedal bot

via Arduino Blog

If you’d like to build a walking biped robot, this 3D-printed design by Technovation looks like a fantastic place to start. Each leg features three servos that actuate it at the hip, knee, and ankle for a total of six degrees of freedom.

Control is handled by an Arduino Uno board that rides on top of the legs, along with a perfboard that connects directly to the servos.

Movements are calculated via inverse kinematics, meaning one simply has to input the x and z positions and the Arduino calculates the proper servo angles. The bot is even able to take steps of between 2 and 10 centimeters without falling over.
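
As an illustration of the idea (not Technovation’s actual code; the link lengths, names and angle conventions here are assumptions), the hip and knee angles for a two-link leg can be found with the law of cosines:

    #include <math.h>

    const float L1 = 55.0; // thigh length in mm (assumed)
    const float L2 = 55.0; // shin length in mm (assumed)

    // Given a foot target (x, z) relative to the hip, with z measured
    // straight down, compute hip and knee servo angles in degrees.
    bool legIK(float x, float z, float& hipDeg, float& kneeDeg) {
      float d = sqrt(x * x + z * z); // hip-to-foot distance
      if (d > L1 + L2 || d < fabs(L1 - L2)) {
        return false; // target is out of reach
      }
      // Interior knee angle (180 degrees = fully straight leg)
      float knee = acos((L1 * L1 + L2 * L2 - d * d) / (2 * L1 * L2));
      // Angle between the thigh and the hip-to-foot line
      float a = acos((L1 * L1 + d * d - L2 * L2) / (2 * L1 * d));
      // Hip angle measured from straight down
      float hip = atan2(x, z) + a;
      hipDeg  = hip  * 180.0 / M_PI;
      kneeDeg = knee * 180.0 / M_PI;
      return true;
    }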

Bike signal display keeps riders safe with machine learning

via Arduino Blog

Cycling can be fun, not to mention great exercise, but it can also be dangerous at times. In order to promote safety and harmony between road users on his hour-plus bike commute in Marseille, France, Maltek created his own LED backpack signaling setup.

The device uses a hand-mounted Arduino Nano 33 BLE Sense to record movement via its onboard IMU, and runs a TinyML gesture recognition model to translate this into actual road signals. Left and right rotations of the wrist are passed along to the backpack unit over BLE, which shows the corresponding turn signal on its LED panel.

Other gestures include a backward twist for stop and a forward twist to say “merci,” while a green forward-scrolling arrow is displayed as the default state.

More details on the project can be found in Maltek’s write-up here.