Tag Archives: Nano 33 BLE Sense

Customizable artificial intelligence and gesture recognition

via Arduino Blog

In many respects we think of artificial intelligence as all-encompassing: one AI will do any task we ask of it. But in reality, even when AI reaches the advanced levels we envision, it won’t automatically be able to do everything. The Fraunhofer Institute for Microelectronic Circuits and Systems has been giving this a lot of thought.

AI gesture training

Okay, so you’ve got an AI. Now you need it to learn the tasks you want it to perform. Even today this isn’t an uncommon exercise. But the challenge that Fraunhofer IMS set itself was training an AI without any additional computers.

As a test case, an Arduino Nano 33 BLE Sense was employed to build a demonstration device. Using only the onboard 9-axis motion sensor, the team built an untethered gesture recognition controller. When a button is pressed, the user draws a number in the air, and the corresponding command is wirelessly sent to a peripheral, in this case a robotic arm.

Embedded intelligence

At first glance this might not seem overly advanced. But consider that it’s running entirely on the device, with just a small amount of memory and an Arduino Nano. Fraunhofer IMS calls this “embedded intelligence,” as it’s not the robot arm that’s clever, but the controller itself.

This is achieved by training the device with a “feature extraction” algorithm. When a gesture is executed, the artificial neural network (ANN) picks out only the relevant information. This allows for impressive data reduction and a very efficient, compact AI.
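Fraunhofer IMS hasn’t published its exact algorithm in this post, but the general idea is easy to illustrate. A hypothetical sketch of feature extraction on a Nano 33 BLE Sense might condense a window of raw accelerometer samples into a handful of summary statistics per axis, so the network sees a dozen numbers instead of hundreds of readings (the window length, feature choice and function below are ours, purely for illustration):

// Hypothetical feature extraction: condense a window of accelerometer samples
// into four summary features (mean, variance, min, max) per axis.
const int WINDOW = 100;   // samples per gesture window (assumed)

void extractFeatures(const float samples[][3], float features[12]) {
  for (int axis = 0; axis < 3; axis++) {
    float sum = 0, minV = samples[0][axis], maxV = samples[0][axis];
    for (int i = 0; i < WINDOW; i++) {
      float v = samples[i][axis];
      sum += v;
      if (v < minV) minV = v;
      if (v > maxV) maxV = v;
    }
    float mean = sum / WINDOW;
    float variance = 0;
    for (int i = 0; i < WINDOW; i++) {
      float d = samples[i][axis] - mean;
      variance += d * d;
    }
    features[axis * 4 + 0] = mean;
    features[axis * 4 + 1] = variance / WINDOW;
    features[axis * 4 + 2] = minV;
    features[axis * 4 + 3] = maxV;
  }
}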

Fraunhofer IMS Arduino Nano with Gesture Recognition

Obviously this is just an example use case, but it’s easy to see the massive potential that this kind of compact, learning AI could have, whether in edge control, industrial applications, wearables or maker projects. If you can train a device to do the job you want, it can offer amazing embedded intelligence with very few resources.


Upgrading an inexpensive exercise bike with a Nano 33 BLE Sense

via Arduino Blog

After purchasing a basic foldable exercise bike, Thomas Schucker wondered if he could get a bit more out of it, perhaps even using it with virtual riding apps like Zwift and RGT. By default, this piece of equipment is set up to output cadence info via a simple headphone jack, using a demagnetized portion of the flywheel for sensing.

Taking this a step further, Schucker found that the magnetic field amplitude actually changes with the resistance input, allowing him to correlate the two with an analog sensor built into the Arduino Nano 33 BLE Sense.

The Nano is attached near the flywheel, and sends data over BLE, enabling him to use this rather cheap indoor bike in a much more involved way than it was likely ever intended. Code for the project is available on GitHub, while a demo of it controlling Zwift can be seen in the video below.
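Schucker’s code on GitHub handles the details of making the bike speak to apps like Zwift; as a much simpler illustration of the analog-sensing-plus-BLE idea, a sketch along these lines (using the ArduinoBLE library, with made-up UUIDs and an assumed A0 sensor pin) could stream raw amplitude readings to any subscribed client:

#include <ArduinoBLE.h>

// Illustrative custom service/characteristic; the real project exposes the
// standard cycling data that apps like Zwift expect.
BLEService bikeService("19B10000-E8F2-537E-4F6C-D104768A1214");
BLEUnsignedIntCharacteristic amplitudeChar("19B10001-E8F2-537E-4F6C-D104768A1214",
                                           BLERead | BLENotify);

void setup() {
  Serial.begin(9600);
  if (!BLE.begin()) {
    while (1);                     // BLE radio failed to start
  }
  BLE.setLocalName("BikeSensor");
  BLE.setAdvertisedService(bikeService);
  bikeService.addCharacteristic(amplitudeChar);
  BLE.addService(bikeService);
  BLE.advertise();
}

void loop() {
  BLEDevice central = BLE.central();
  if (central && central.connected()) {
    // Sample the flywheel sensor signal and notify the subscribed client.
    amplitudeChar.writeValue(analogRead(A0));
    delay(100);
  }
}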

Machine vision with low-cost camera modules

via Arduino Blog

If you’re interested in embedded machine learning (TinyML) on the Arduino Nano 33 BLE Sense, you’ll have found a ton of on-board sensors — digital microphone, accelerometer, gyro, magnetometer, light, proximity, temperature, humidity and color — but realized that for vision you need to attach an external camera.

In this article, we will show you how to get image data from a low-cost VGA camera module. We’ll be using the Arduino_OV767X library to make the software side of things simpler.

Hardware setup

To get started, you will need:

  • An Arduino Nano 33 BLE Sense (with headers)
  • An OV7670 camera module
  • Jumper wires to connect the two

You can of course get a board without headers and solder instead, if that’s your preference.

The one downside to this setup is that (in module form) there are a lot of jumpers to connect. It’s not hard, but you need to take care to connect the right cables at either end. You can use tape to secure the wires once everything is working, lest one come loose.

You need to connect the wires as follows:

Software setup

First, install the Arduino IDE or register for Arduino Create tools. Once you have installed and opened your environment, the camera library is available in the Library Manager.

  • Install the Arduino IDE or register for Arduino Create
  • Tools > Manage Libraries and search for the OV767X library
  • Press the Install button

Now, we will use the example sketch to test that the cables are connected correctly:

  • Examples > Arduino_OV767X > CameraCaptureRawBytes
  • Uncomment (remove the //) from line 48 to display a test pattern
Camera.testPattern();
  • Compile and upload to your board

Your Arduino is now outputting raw image binary over serial. To view this as an image, we’ve included a Processing sketch that displays the output from the camera.

Processing is a simple programming environment that was created by graduate students at MIT Media Lab to make it easier to develop visually oriented applications with an emphasis on animation and providing users with instant feedback through interaction.

  • Install and open Processing 
  • Paste the CameraVisualizerRawBytes code into a Processing sketch
  • Edit lines 31-37 to match the machine and serial port your Arduino is connected to
  • Hit the play button in Processing and you should see a test pattern (image update takes a couple of seconds):

If all goes well, you should see the striped test pattern!

Next we will go back to the Arduino IDE and edit the sketch so the Arduino sends a live image from the camera to the Processing viewer:

  • Return to the Arduino IDE
  • Comment out line 48 of the Arduino sketch
// We've disabled the test pattern and will display a live image
// Camera.testPattern();
  • Compile and upload to the board
  • Once the sketch is uploaded hit the play button in Processing again
  • After a few seconds you should now have a live image:

Considerations for TinyML

The full VGA (640×480 resolution) output from our little camera is way too big for current TinyML applications. uTensor runs handwriting detection on MNIST using 28×28 images. The person detection example in TensorFlow Lite for Microcontrollers uses 96×96 images, which is more than enough. Even state-of-the-art ‘Big ML’ applications often only use 320×320 images (see the TinyML book). Also consider that an 8-bit grayscale VGA image occupies about 300KB uncompressed, while the Nano 33 BLE Sense has 256KB of RAM. We have to do something to reduce the image size!

Camera format options

The OV7670 module supports lower resolutions through configuration options. The options modify the image data before it reaches the Arduino. The configurations currently available via the library are:

  • VGA – 640 x 480
  • CIF – 352 x 288
  • QVGA – 320 x 240
  • QCIF – 176 x 144

This is a good start as it reduces the amount of time it takes to send an image from the camera to the Arduino. It reduces the size of the image data array required in your Arduino sketch as well. You select the resolution by changing the value in Camera.begin. Don’t forget to change the size of your array too.

Camera.begin(QVGA, RGB565, 1);
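The sizing follows directly from the resolution and the 2 bytes per pixel these formats use: QVGA needs 320 × 240 × 2 = 153,600 bytes, while QCIF needs only 176 × 144 × 2 = 50,688 bytes. A minimal fragment in the spirit of the library’s CameraCaptureRawBytes example (the buffer name and the choice of QCIF here are ours) might look like this:

#include <Arduino_OV767X.h>

// QCIF at 2 bytes per pixel: 176 * 144 * 2 = 50,688 bytes, a comfortable fit
// in the Nano 33 BLE Sense's 256KB of RAM. Resize this if you change formats.
byte image[176 * 144 * 2];

void setup() {
  Serial.begin(9600);
  while (!Serial);

  if (!Camera.begin(QCIF, RGB565, 1)) {   // resolution, color format, frame rate
    Serial.println("Failed to initialize camera!");
    while (1);
  }
}

void loop() {
  Camera.readFrame(image);   // fill the buffer with one frame from the camera
  // ... send or process the image data here ...
}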

The camera library also offers different color formats: YUV422, RGB444 and RGB565. These define how the color values are encoded and all occupy 2 bytes per pixel in our image data. We’re using the RGB565 format which has 5 bits for red, 6 bits for green, and 5 bits for blue:

Converting the 2-byte RGB565 pixel to individual red, green, and blue values in your sketch can be accomplished as follows:

    // Convert one RGB565 pixel (two bytes from the camera buffer) into
    // separate 8-bit red, green and blue values. 'high' and 'low' hold the
    // two bytes of the pixel as read from the buffer.
    uint16_t pixel = (high << 8) | low;

    // RGB565 packs red in bits 15-11, green in bits 10-5 and blue in bits 4-0;
    // the final left shifts scale each channel back up to the 0-255 range.
    int red   = ((pixel >> 11) & 0x1f) << 3;
    int green = ((pixel >> 5)  & 0x3f) << 2;
    int blue  = ((pixel >> 0)  & 0x1f) << 3;

Resizing the image on the Arduino

Once we get our image data onto the Arduino, we can then reduce the size of the image further. Just removing pixels will give us a jagged (aliased) image. To do this more smoothly, we need a downsampling algorithm that can interpolate pixel values and use them to create a smaller image.

The techniques used to resample images are an interesting topic in themselves. We found that the simple downsampling example from Eloquent Arduino works fine with the Arduino_OV767X camera library output.
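As a minimal sketch of the idea (this is not the Eloquent Arduino code, just a simple box-filter average over 2×2 blocks of an 8-bit grayscale image):

// Halve the width and height of an 8-bit grayscale image by averaging each
// 2x2 block of source pixels into a single destination pixel.
void downsampleByTwo(const byte* src, int srcW, int srcH, byte* dst) {
  int dstW = srcW / 2;
  int dstH = srcH / 2;
  for (int y = 0; y < dstH; y++) {
    for (int x = 0; x < dstW; x++) {
      int sum = src[(2 * y) * srcW + (2 * x)]
              + src[(2 * y) * srcW + (2 * x + 1)]
              + src[(2 * y + 1) * srcW + (2 * x)]
              + src[(2 * y + 1) * srcW + (2 * x + 1)];
      dst[y * dstW + x] = sum / 4;   // average of the four source pixels
    }
  }
}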

Applications like the TensorFlow Lite Micro person detection example, which use CNN-based models on Arduino for machine vision, may not need any further preprocessing of the image other than averaging the RGB values of each pixel down to 8-bit grayscale.
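As a rough sketch of that grayscale step, reusing the RGB565 unpacking shown earlier (the two-bytes-per-pixel layout and byte order are assumed to match that snippet):

// Convert an RGB565 frame (2 bytes per pixel) to 8-bit grayscale by averaging
// the red, green and blue channels of each pixel.
void rgb565ToGray(const byte* rgb, int numPixels, byte* gray) {
  for (int i = 0; i < numPixels; i++) {
    uint16_t pixel = (rgb[2 * i] << 8) | rgb[2 * i + 1];
    int red   = ((pixel >> 11) & 0x1f) << 3;
    int green = ((pixel >> 5)  & 0x3f) << 2;
    int blue  = ((pixel >> 0)  & 0x1f) << 3;
    gray[i] = (red + green + blue) / 3;
  }
}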

However, if you do want to perform normalization, iterating across pixels using the Arduino max and min functions is a convenient way to obtain the upper and lower bounds of input pixel values. You can then use map to scale the output pixel values to a 0-255 range.

byte pixelOut = map(input[y][x][c], lower, upper, 0, 255); 
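A sketch of that two-pass approach, assuming an 8-bit grayscale image stored in a flat array (the function name and flat layout are ours; the indexed input[y][x][c] form above works the same way):

// Stretch an 8-bit grayscale image so its pixel values span the full 0-255 range.
void normalizeImage(const byte* input, byte* output, int imageSize) {
  // Pass 1: find the darkest and brightest pixel values with min and max.
  byte lower = 255, upper = 0;
  for (int i = 0; i < imageSize; i++) {
    lower = min(lower, input[i]);
    upper = max(upper, input[i]);
  }
  // Pass 2: remap every pixel with map(). Guard against a flat image
  // (upper == lower), which would make map() divide by zero.
  if (upper == lower) return;
  for (int i = 0; i < imageSize; i++) {
    output[i] = map(input[i], lower, upper, 0, 255);
  }
}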

Conclusion

This was an introduction to how to connect an OV7670 camera module to the Arduino Nano 33 BLE Sense and some considerations for obtaining data from the camera for TinyML applications. There’s a lot more to explore on the topic of machine vision on Arduino — this is just a start!

Bike signal display keeps riders safe with machine learning

via Arduino Blog

Cycling can be fun, not to mention great exercise, but it is also dangerous at times. In order to facilitate safety and harmony between road users on his hour-plus bike commute in Marseille, France, Maltek created his own LED backpack signaling setup.

The device uses a hand-mounted Arduino Nano 33 BLE Sense to record movement via its onboard IMU and runs a TinyML gesture recognition model to translate this into actual road signals. Left and right rotations of the wrist are passed along to the backpack unit over BLE, which shows the corresponding turn signal on its LED panel.

Other gestures include a backward twist for stop and a forward twist to say “merci,” while a green forward-scrolling arrow is displayed as the default state.

More details on the project can be found in Maltek’s write-up here.

Edge Impulse makes TinyML available to millions of Arduino developers

via Arduino Blog

This post is written by Jan Jongboom and Dominic Pajak.

Running machine learning (ML) on microcontrollers is one of the most exciting developments of the past few years, allowing small battery-powered devices to detect complex motions, recognize sounds, or find anomalies in sensor data. To make building and deploying these models accessible to every embedded developer, we’re launching first-class support for the Arduino Nano 33 BLE Sense and other 32-bit Arduino boards in Edge Impulse.

The trend to run ML on microcontrollers is called Embedded ML or Tiny ML. It means devices can make smart decisions without needing to send data to the cloud – great from an efficiency and privacy perspective. Even powerful deep learning models (based on artificial neural networks) are now reaching microcontrollers. This past year great strides were made in making deep learning models smaller, faster and runnable on embedded hardware through projects like TensorFlow Lite Micro, uTensor and Arm’s CMSIS-NN; but building a quality dataset, extracting the right features, training and deploying these models is still complicated.

Using Edge Impulse you can now quickly collect real-world sensor data, train ML models on this data in the cloud, and then deploy the model back to your Arduino device. From there you can integrate the model into your Arduino sketches with a single function call. Your sensors are then a whole lot smarter, being able to make sense of complex events in the real world. The built-in examples allow you to collect data from the accelerometer and the microphone, but it’s easy to integrate other sensors with a few lines of code. 

Excited? This is how you build your first deep learning model with the Arduino Nano 33 BLE Sense (there’s also a video tutorial here: setting up the Arduino Nano 33 BLE Sense with Edge Impulse):

  • Download the Arduino Nano 33 BLE Sense firmware — this is a special firmware package (source code) that contains all code to quickly gather data from its sensors. Launch the flash script for your platform to flash the firmware.
  • Launch the Edge Impulse daemon to connect your board to Edge Impulse. Open a terminal or command prompt and run:
$ npm install edge-impulse-cli -g
$ edge-impulse-daemon
  • Your device now shows in the Edge Impulse studio on the Devices tab, ready for you to collect some data and build a model.
  • Once you’re done you can deploy your model back to the Arduino Nano 33 BLE Sense. Either as a binary which includes your full ML model, or as an Arduino library which you can integrate in any sketch.
Deploying to Arduino from Edge Impulse
  • Your machine learning model is now running on the Arduino board. Open the serial monitor and run `AT+RUNIMPULSE` to start classifying real world data!
Keyword spotting on the Arduino Nano 33 BLE Sense

Integrates with your favorite Arduino platform

We’ve launched with the Arduino Nano 33 BLE Sense, but you can also integrate Edge Impulse with your favorite Arduino platform. You can easily collect data from any sensor and development board using the Data forwarder. This is a small application that reads data over serial and sends it to Edge Impulse. All you need is a few lines of code in your sketch (here’s an example).
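The linked example isn’t reproduced here, but the Data forwarder essentially expects sensor values printed over serial as comma-separated lines at a steady rate. A minimal sketch along those lines for the Nano 33 BLE Sense’s accelerometer (using the Arduino_LSM9DS1 library; the 115200 baud rate and ~100Hz sample rate are assumptions) might look like this:

#include <Arduino_LSM9DS1.h>   // on-board IMU of the Nano 33 BLE Sense

void setup() {
  Serial.begin(115200);
  if (!IMU.begin()) {
    Serial.println("Failed to initialize IMU!");
    while (1);
  }
}

void loop() {
  float x, y, z;
  if (IMU.accelerationAvailable()) {
    IMU.readAcceleration(x, y, z);
    // One sample per line, values separated by commas; this is the format
    // the Data forwarder parses and relays to Edge Impulse.
    Serial.print(x); Serial.print(',');
    Serial.print(y); Serial.print(',');
    Serial.println(z);
  }
  delay(10);   // roughly 100Hz; match the frequency configured in the forwarder
}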

After you’ve built a model you can easily export it as an Arduino library. This library will run on any Arm-based Arduino platform, including the Arduino MKR family or Arduino Nano 33 IoT, provided it has enough RAM to run your model. You can now include your ML model in any Arduino sketch with just a few lines of code. After you’ve added the library to the Arduino IDE you can find an example on integrating the model under Files > Examples > Your project – Edge Impulse > static_buffer.
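From memory, the generated static_buffer example follows roughly the pattern below. The header name is specific to your project and the feature values are pasted in from the studio, so treat this as an approximation of the generated code rather than the code itself:

#include <your_project_inferencing.h>   // header name depends on your Edge Impulse project
#include <string.h>

// Raw feature values for one sample window, copied from the studio.
static const float features[] = {
  // Paste the raw feature values from the studio here, e.g.:
  0.0f, 0.0f, 0.0f
};

// Callback the SDK uses to pull feature data into the classifier.
int raw_feature_get_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

void setup() {
  Serial.begin(115200);
}

void loop() {
  signal_t features_signal;
  features_signal.total_length = sizeof(features) / sizeof(features[0]);
  features_signal.get_data = &raw_feature_get_data;

  ei_impulse_result_t result = { 0 };
  if (run_classifier(&features_signal, &result, false) == EI_IMPULSE_OK) {
    // Print the confidence score for each label the model was trained on.
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
      Serial.print(result.classification[i].label);
      Serial.print(": ");
      Serial.println(result.classification[i].value);
    }
  }
  delay(1000);
}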

To run your models as fast and energy-efficiently as possible we automatically leverage the hardware capabilities of your Arduino board – for example the signal processing extensions available on the Arm Cortex-M4 based Arduino Nano 33 BLE Sense or the more powerful Arm Cortex-M7 based Arduino Portenta H7. We also leverage the optimized neural network kernels that Arm provides in CMSIS-NN.

A path to production

This release is the first step in a really exciting collaboration. We believe that many embedded applications can benefit from ML today, whether it’s for predictive maintenance (‘this machine is starting to behave abnormally’), to help with worker safety (‘fall detected’), or in health care (‘detected early signs of a potential infection’). Using Edge Impulse with the Arduino MKR family you can already quickly deploy simple ML based applications combined with LoRa, NB-IoT cellular, or WiFi connectivity. Over the next months we’ll also add integrations for the Arduino Portenta H7 on Edge Impulse, making higher performance industrial applications possible.

On a related note: if you have ideas on how TinyML can help to slow down or detect the COVID-19 virus, then join the UNDP COVID-19 Detect and Protect Challenge. For inspiration, see Kartik Thakore’s blog post on cough detection with the Arduino Nano 33 BLE Sense and Edge Impulse.

We can’t wait to see what you’ll build!

Jan Jongboom is the CTO and co-founder of Edge Impulse. He built his first IoT projects using the Arduino Starter Kit.

Dominic Pajak is VP Business Development at Arduino.

Designing a two-axis gesture-controlled platform for DSLR cameras

via Arduino Blog

Holding your phone up to take an occasional picture is no big deal, but for professional photographers who often need to manipulate heavier gear for hours on end, this can actually be quite tiring. With this in mind, Cornell University students Kunpeng Huang, Xinyi Yang, and Siqi Qian designed a two-axis gesture-controlled camera platform for their ECE 4760 final project.

Their device mounts a 3.6kg (~8lb) DSLR camera in an acrylic turret, allowing it to look up and down (pitch) as well as left and right (yaw) under the control of two servo motors. The platform is powered by a PIC32 microcontroller, while user control is provided via a gamepad-style SparkFun Joystick Shield or an Arduino Nano 33 BLE Sense.

When in Nano mode, the setup leverages the board’s IMU to move the camera along with the user’s hand gestures, while its built-in light and proximity sensing abilities are used to activate the camera itself.

Our 2-DOF gesture-controlled platform can point the camera in any direction within a hemisphere based on spherical coordinates. It is capable of rotating continuously in the horizontal direction and traversing close to 180 degrees in the vertical direction. It is able to support a relatively large camera system (more than 3kg in total weight and 40cm in length), orient the camera accurately (error less than 3 degrees), and respond quickly to user input (traversing 180 degrees in less than 3 seconds). In addition to orienting the camera, the system also has simple control functionality, such as allowing the user to auto-focus and take photos remotely, which is achieved through the DSLR’s peripheral connections.

At a high level, our design supports three user input modes: the first uses a joystick, while the other two use an inertial measurement unit (IMU). In the first mode, the x- and y-axes of a joystick are mapped to the velocities in the yaw and pitch directions of the camera. In the second mode, the roll and pitch angles of the user’s hand are mapped to the velocities of the camera in the yaw and pitch directions, while the third mode maps the angles to the angular position of the camera.
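The students’ code runs on a PIC32, but the velocity-mode mapping described above is easy to illustrate in Arduino-style C++ (the pin choices, dead zone and maximum rate below are ours, purely for illustration):

const float MAX_RATE_DEG_PER_S = 60.0;   // assumed maximum slew rate
const int   DEAD_ZONE = 20;              // ignore small movements around center

// Map a raw 10-bit joystick reading (0-1023, centered near 512) to a signed
// angular velocity for one axis of the platform.
float axisToVelocity(int raw) {
  int centered = raw - 512;
  if (abs(centered) < DEAD_ZONE) return 0.0;
  return (centered / 512.0) * MAX_RATE_DEG_PER_S;
}

void setup() {
}

void loop() {
  float yawRate   = axisToVelocity(analogRead(A0));  // joystick x -> yaw velocity
  float pitchRate = axisToVelocity(analogRead(A1));  // joystick y -> pitch velocity
  // A real control loop would integrate these rates into servo target angles here.
}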