Tag Archives: camera

Machine vision with low cost camera modules

via Arduino Blog

If you’re interested in embedded machine learning (TinyML) on the Arduino Nano 33 BLE Sense, you’ll have found a ton of on-board sensors — digital microphone, accelerometer, gyro, magnetometer, light, proximity, temperature, humidity and color — but realized that for vision you need to attach an external camera.

In this article, we will show you how to get image data from a low-cost VGA camera module. We’ll be using the Arduino_OV767X library to make the software side of things simpler.

Hardware setup

To get started, you will need:

  • An Arduino Nano 33 BLE Sense board (with headers)
  • An OV7670 camera module
  • Jumper wires (and, optionally, a breadboard)

You can of course get a board without headers and solder instead, if that’s your preference.

The one downside to this setup is that (in module form) there are a lot of jumpers to connect. It’s not hard, but you need to take care to connect the right cables at either end. Once everything is wired up, you can use tape to secure the wires so none of them come loose.

You need to connect the wires as follows:

Software setup

First, install the Arduino IDE or register for the Arduino Create tools. Once your environment is installed and open, the camera library is available in the Library Manager.

  • Install the Arduino IDE or register for Arduino Create
  • Tools > Manage Libraries and search for the OV767 library
  • Press the Install button

Now, we will use the example sketch to test the cables are connected correctly:

  • Examples > Arduino_OV767X > CameraCaptureRawBytes
  • Uncomment (remove the //) from line 48 to display a test pattern
Camera.testPattern();
  • Compile and upload to your board

Your Arduino is now outputting raw image data over serial. To view it as an image, we’ve included a viewer application written in Processing.
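
In case you’re curious what the Arduino side boils down to, here is a simplified capture-and-stream sketch. This is a sketch of the idea rather than the exact example code; the QCIF resolution, the buffer name and the baud rate are assumptions:

    #include <Arduino_OV767X.h>

    // QCIF RGB565: 176 x 144 pixels x 2 bytes per pixel
    byte data[176 * 144 * 2];

    void setup() {
      Serial.begin(9600);
      if (!Camera.begin(QCIF, RGB565, 1)) {
        Serial.println("Failed to initialize camera!");
        while (1);
      }
    }

    void loop() {
      Camera.readFrame(data);           // copy one frame into the buffer
      Serial.write(data, sizeof(data)); // stream the raw bytes over serial
    }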

Processing is a simple programming environment that was created by graduate students at MIT Media Lab to make it easier to develop visually oriented applications with an emphasis on animation and providing users with instant feedback through interaction.

  • Install and open Processing 
  • Paste the CameraVisualizerRawBytes code into a Processing sketch
  • Edit lines 31-37 to match your machine and the serial port your Arduino is connected to
  • Hit the play button in Processing and you should see a test pattern appear (the image takes a couple of seconds to update)

If all goes well, you should see the striped test pattern!

Next, we will go back to the Arduino IDE and edit the sketch so the Arduino sends a live image from the camera to the Processing viewer:

  • Return to the Arduino IDE
  • Comment out line 48 of the Arduino sketch
// We've disabled the test pattern and will display a live image
// Camera.testPattern();
  • Compile and upload to the board
  • Once the sketch is uploaded hit the play button in Processing again
  • After a few seconds you should see a live image from the camera

Considerations for TinyML

The full VGA (640×480) output from our little camera is way too big for current TinyML applications. uTensor runs handwriting recognition on MNIST, which uses 28×28 images. The person detection example in TensorFlow Lite for Microcontrollers uses 96×96 images, which is more than enough. Even state-of-the-art ‘Big ML’ applications often only use 320×320 images (see the TinyML book). Also consider that an uncompressed 8-bit grayscale VGA image occupies roughly 300KB (640 × 480 = 307,200 bytes), while the Nano 33 BLE Sense has only 256KB of RAM. We have to do something to reduce the image size!

Camera format options

The OV7670 module supports lower resolutions through configuration options. The options modify the image data before it reaches the Arduino. The configurations currently available via the library are:

  • VGA – 640 x 480
  • CIF – 352 x 288
  • QVGA – 320 x 240
  • QCIF – 176 x 144

This is a good start as it reduces the amount of time it takes to send an image from the camera to the Arduino. It reduces the size of the image data array required in your Arduino sketch as well. You select the resolution by changing the value in Camera.begin. Don’t forget to change the size of your array too.

Camera.begin(QVGA, RGB565, 1);
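
For instance, a QVGA RGB565 configuration would pair the begin call with a buffer sized to match (the buffer name here is just illustrative):

    // QVGA RGB565: 320 x 240 pixels x 2 bytes per pixel = 153,600 bytes
    byte data[320 * 240 * 2];

    Camera.begin(QVGA, RGB565, 1);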

The camera library also offers different color formats: YUV422, RGB444 and RGB565. These define how the color values are encoded and all occupy 2 bytes per pixel in our image data. We’re using the RGB565 format, which has 5 bits for red, 6 bits for green, and 5 bits for blue.

Converting the 2-byte RGB565 pixel to individual red, green, and blue values in your sketch can be accomplished as follows:

    // Convert from RGB565 to 24-bit RGB.
    // 'high' and 'low' are the two bytes that make up one RGB565 pixel
    // in the frame buffer.
    uint16_t pixel = (high << 8) | low;

    int red   = ((pixel >> 11) & 0x1f) << 3;
    int green = ((pixel >> 5)  & 0x3f) << 2;
    int blue  = ((pixel >> 0)  & 0x1f) << 3;

Resizing the image on the Arduino

Once we get our image data onto the Arduino, we can then reduce the size of the image further. Just removing pixels will give us a jagged (aliased) image. To do this more smoothly, we need a downsampling algorithm that can interpolate pixel values and use them to create a smaller image.

The techniques used to resample images are an interesting topic in their own right. We found that the simple downsampling example from Eloquent Arduino works fine with the Arduino_OV767X camera library output.
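
If you’d rather roll your own, a minimal block-averaging downsampler is only a few lines. The sketch below is illustrative rather than the Eloquent Arduino code; it assumes an 8-bit grayscale input whose dimensions are exact multiples of the scale factor, and the array names are placeholders:

    // Downsample a grayscale image by averaging scale x scale blocks.
    // inWidth and inHeight must be exact multiples of scale.
    void downsample(const byte *in, int inWidth, int inHeight,
                    byte *out, int scale) {
      int outWidth  = inWidth / scale;
      int outHeight = inHeight / scale;
      for (int y = 0; y < outHeight; y++) {
        for (int x = 0; x < outWidth; x++) {
          unsigned int sum = 0;
          for (int dy = 0; dy < scale; dy++) {
            for (int dx = 0; dx < scale; dx++) {
              sum += in[(y * scale + dy) * inWidth + (x * scale + dx)];
            }
          }
          out[y * outWidth + x] = sum / (scale * scale); // block average
        }
      }
    }

For example, calling downsample(qcifGray, 176, 144, smallGray, 4) would turn a 176×144 grayscale frame into a 44×36 one.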

Applications like the TensorFlow Lite Micro person detection example, which runs CNN-based models on Arduino for machine vision, may not need any further preprocessing of the image other than averaging the RGB values to produce 8-bit grayscale data per pixel.
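
Continuing from the RGB565 conversion earlier, that grayscale step is just an average of the three channel values:

    // Average the R, G and B channels to get one 8-bit grayscale value
    byte gray = (red + green + blue) / 3;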

However, if you do want to perform normalization, iterating across pixels using the Arduino max and min functions is a convenient way to obtain the upper and lower bounds of input pixel values. You can then use map to scale the output pixel values to a 0-255 range.

byte pixelOut = map(input[y][x][c], lower, upper, 0, 255); 
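
Putting that together for a single-channel image, a full normalization pass might look like this (gray, width and height are assumed names carried over from the earlier steps):

    // Find the actual range of pixel values in the image...
    byte lower = 255;
    byte upper = 0;
    for (int i = 0; i < width * height; i++) {
      lower = min(lower, gray[i]);
      upper = max(upper, gray[i]);
    }

    // ...then stretch that range out to 0-255
    if (upper > lower) {
      for (int i = 0; i < width * height; i++) {
        gray[i] = map(gray[i], lower, upper, 0, 255);
      }
    }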

Conclusion

This was an introduction to how to connect an OV7670 camera module to the Arduino Nano 33 BLE Sense and some considerations for obtaining data from the camera for TinyML applications. There’s a lot more to explore on the topic of machine vision on Arduino — this is just a start!

Using Raspberry Pi for deeper learning in education

via Raspberry Pi

Using deeper learning as a framework for transformative educational experiences, Brent Richardson outlines the case for a pedagogical approach that challenges students using a Raspberry Pi. From the latest issue of Hello World magazine — out today!

A benefit of completing school and entering the workforce is being able to kiss standardised tests goodbye. That is, if you don’t count those occasional ‘prove you watched the webinar’ quizzes some supervisors require.

In the real world, assessments often happen on the fly and are based on each employee’s ability to successfully complete tasks and solve problems. It is often obvious to an employer when their staff members are unprepared.

Formal education continues to focus on accountability tools that measure base-level proficiencies instead of more complex skills like problem-solving and communication.

One of the main reasons the U.S. education system is criticised for its reliance on standardised tests is that this method of assessing a student’s comprehension of a subject can hinder their ability to transfer knowledge from an existing situation to a new situation. The effect leaves students ill-prepared for higher education and the workforce.

A study conducted by the National Association of Colleges and Employers found a significant gap between how students felt about their abilities and their employers’ observations. In seven out of eight categories, students rated their skills much higher than their prospective employers had.

Some people believe that this gap continues to widen because teaching within the confines of a standardised test encourages teachers to narrow their instruction. The focus becomes preparing students with a limited scope of learning that is beneficial for testing.

With this approach to learning, it is possible that students can excel at test-taking and still struggle with applying knowledge in new ways. Educators need to have the support to not only prepare students for tests but also to develop ways that will help their students connect to the material in a meaningful manner.

In an effort to boost the U.S. education system’s ability to increase the knowledge and skills of students, many private corporations and nonprofits directly support public education. In 2010, the Hewlett Foundation went so far as to develop a framework called ‘deeper learning’ to help guide its education partners in preparing learners for success.

The principles of deeper learning

Deeper learning focuses on six key competencies:

    1. Master core academic content
    2. Think critically and solve complex problems
    3. Work collaboratively
    4. Communicate effectively
    5. Learn how to learn
    6. Develop academic mindsets

This framework ensures that learners are active participants in their education. Students are immersed in a challenging curriculum that requires them to seek out and acquire new information, apply what they have learned, and build upon that to create new knowledge.

While deeper learning experiences are important for all students, research shows that schools that engage students from low-income families and students of colour in deeper learning have stronger academic outcomes, better attendance and behaviour, and lower dropout rates. This results in higher graduation rates, and higher rates of college attendance and perseverance than comparison schools serving similar students. This pedagogical approach is one we strive to embed in all our work at Fab Lab Houston.

A deeper learning timelapse project

The importance of deeper learning was undeniable when a group of students I worked with in Houston built a solar-powered time-lapse camera. Through this collaborative project, we quickly found ourselves moving beyond classroom pedagogy to a ‘hero’s journey’ — where students’ learning paths echo a centuries-old narrative arc in which a protagonist goes on an adventure, makes new friends, encounters roadblocks, overcomes adversity, and returns home a changed person.

In this spirit, we challenged the students with a simple objective: ‘Make a device to document the construction of Fab Lab Houston’. In just one sentence, participants understood enough to know where the finish line was without being told exactly how to get there. This shift in approach pushed students to ask questions as they attempted to understand constraints and potential approaches.

Students shared ideas ranging from drone video to photography robots. Together, everyone began to break down these big ideas into smaller parts and better define the project we would tackle. To my surprise, even the students who typically refused to do most things were excited to poke holes in unrealistic ideas. It was decided, among other things, that drones would be too expensive, robots might not be waterproof, and time was always a concern.

The decision was made to move forward with the stationary time-lapse camera, because although the students didn’t know how to accomplish all the aspects of the project, they could at least understand the project enough to break it down into doable parts and develop a ballpark budget. Students formed three teams and picked one aspect of the project to tackle. The three subgroups focused on taking photos and converting them to video, developing a remote power solution, and building weatherproof housing.

A group of students found sample code for Raspberry Pi that could be repurposed to take photos and store them sequentially on a USB drive. After quick success, a few ambitious learners started working to automate the image post-processing into video. Eventually, after attempting multiple ways to program the computer to dynamically turn images into video, one team member discovered a new approach: since the photos were stored with a sequential numbering system, thousands of photos could be loaded into Adobe Premiere Pro straight off the USB drive with its ‘Automate to Sequence’ tool.

A great deal of time was spent measuring power consumption and calculating solar panel and battery size. Since the project would be placed on a pole in the middle of a construction site for six months, the students were challenged with making their solar-powered time-lapse camera as efficient as possible.

Waking the device after it was put into sleep mode proved to be more difficult than anticipated, so a hardware solution was tested. The Raspberry Pi computer was programmed to boot up when receiving power, take a picture, and then shut itself down. With the Raspberry Pi safely shut down, a timer relay cut power for ten minutes before returning power and starting the cycle again.

Finally, a waterproof container had to be built to house the electronics and battery. To avoid overcomplicating the process, the group sourced a plastic weatherproof ammunition storage box to modify. Students operated a 3D printer to create custom parts for the box.

After cutting a hole for the camera, a small piece of glass was attached to a 3D-printed hood, ensuring no water entered the box. On the rear of the box, they printed a part to hold and seal the cable from the solar panel where it entered the box. It only took a few sessions before the group produced a functioning prototype. The project was then placed outside for a day to test the capability of the device.

The test appeared successful when the students checked the USB drive. The drive was full of high-quality images captured every ten minutes. When the drive was connected back to Raspberry Pi, a student noticed that all the parts inside the case had moved. The high temperature on the day of the test had melted the glue used to attach everything. This unexpected problem challenged students to research a better alternative and reattach the pieces.

Once the students felt confident in their device’s functionality, it was handed over to the construction crew, who installed the camera on a twenty-foot pole. The installation went smoothly and the students anxiously waited to see the results.

Less than a week after the camera went up, Houston was hit hard with the rains brought on by Hurricane Harvey. The group was nervous to see whether the project they had constructed would survive. However, when they saw that their camera had survived and was working, they felt a great sense of pride.

They recognised that it was the collaborative effort of the group to problem-solve possible challenges that allowed their camera to not only survive but to capture a spectacular series of photos showing the impact of the hurricane in the location it was placed.

The resulting time-lapse, “BakerRipleyTimeLapse2” by Brent Richardson, can be viewed on Vimeo.

A worthwhile risk

Overcoming many hiccups throughout the project was a great illustration of how the students learned how to learn and to develop an academic mindset; a setback that at the beginning of the project might have seemed insurmountable was laughable in the end.

Throughout my experience as a classroom teacher, a museum educator, and now a director of a digital makerspace, I’ve seen countless students struggle to understand the relevance of learning, and this has led me to develop a strong desire to expand the use of deeper learning.

Sometimes it feels like a risk to facilitate learning rather than impart knowledge, but seeing a student’s development into a changed person, ready to help someone else learn, makes it worth the effort. Let’s challenge ourselves as educators to help students acquire knowledge and use it.

Get your FREE copy of Hello World today

Issue 12 of Hello World is available now as a FREE PDF download. UK-based educators can also subscribe to receive Hello World directly to their door in all its shiny printed goodness. Visit the Hello World website for more information.


Raspberry Pi 3 baby monitor | Hackspace magazine #26

via Raspberry Pi

You might have a baby/dog/hamster that you want to keep an eye on when you’re not there. We understand: they’re lovely, especially hamsters. Here’s how HackSpace magazine contributor Dr Andrew Lewis built a Raspberry Pi baby cam to watch over his small creatures…

When a project is going to be used in the home, it pays to take a little bit of extra time on appearance

Wireless baby monitors

You can get wireless baby monitors that have a whole range of great features for making sure your little ones are safe, sound, and sleeping happily, but they come with a hefty price tag.

In this article, you’ll find out how to make a Raspberry Pi-powered streaming camera, and combine it with a built-in I2C sensor pack that monitors temperature, pressure, and humidity. You’ll also see how you can use the GPIO pins on Raspberry Pi to turn an LED night light on and off using a web interface.

The hardware for this project is quite simple, and involves minimal soldering, but the first thing you need to do is to install Raspbian onto a microSD card for your Raspberry Pi. If you’re planning on doing a headless install, you’ll also need to enable SSH by creating an empty file called ssh in the root of the boot partition, along with a file containing your wireless LAN details called wpa_supplicant.conf.
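
If you haven’t written one before, a minimal wpa_supplicant.conf looks something like this (substitute your own country code, network name, and passphrase):

    country=GB
    ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
    update_config=1

    network={
        ssid="YourNetworkName"
        psk="YourNetworkPassword"
    }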

You can download the code for this as well as the 3D-printable files from our GitHub. You’ll need to transfer the code to the Raspberry Pi. Next, connect the camera, the BME280 board, and the LEDs to the Raspberry Pi, as shown in the circuit diagram.

The BME280 module uses the I2C connection on pins 3 and 5 of the GPIO, taking power from pins 1 and 9. The LEDs connect directly to pins 19 and 20, and the camera cable fits into the camera connector.

Insert the microSD card into the Raspberry Pi and boot up. If everything is working OK, you should be able to see the IP address for your device listed on your hub or router, and you should be able to connect to it via SSH. If you don’t see the Raspberry Pi listed, check your wireless connection details and make sure your adapter is supplying enough power. It’s worth taking the time to assign your Raspberry Pi with a static IP address on your network, so it can’t change its IP address unexpectedly.

Smile for Picamera

Use the raspi-config application to enable the camera interface and the I2C interface. If you’re planning on modifying the code yourself, we recommend enabling VNC access as well, because it will make editing and debugging the code once the device is put together much easier. All that remains on the software side is to update APT, download the babycam.py script, install any dependencies with PIP, and set the script to run automatically. The main dependencies for the babycam.py script are the RPi.bme280 module, Flask, PyAudio, picamera, and NumPy. Chances are that these are already installed on your system by default, with the exception of RPi.bme280, which can be installed by typing sudo pip3 install RPi.bme280 from the terminal. Once all of the dependencies are present, load up the script and give it a test run, and point your web browser at port 8000 on the Raspberry Pi. You should see a webpage with a camera image, controls for the LED lights, and a read-out of the temperature, pressure, and humidity of the room.

Finishing a 3D print by applying a thin layer of car body filler and sanding back will give a much smoother surface. This isn’t always necessary, but if your filament is damp or your nozzle is worn, it can make a model look much better when it’s painted

The easiest way to get the babycam.py script to run on boot is to add a line to the rc.local file. Assuming that the babycam.py file is located in your home directory, you should add the line python3 /home/pi/babycam.py & to the rc.local file, just before the line that reads exit 0. It’s very important that you include the ampersand at the end of the line; otherwise the Python script will not run in a separate process, the rc.local file will never complete, and your Raspberry Pi will never finish booting.
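
With that change in place, the end of /etc/rc.local should look something like this (assuming babycam.py lives in /home/pi, as above):

    # Start the baby monitor in the background so rc.local can finish
    python3 /home/pi/babycam.py &

    exit 0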

Tinned Raspberry Pi

With the software and hardware working, you can start putting the case together. You might need to scale the 3D models to suit the tin can you have before you print them out, so measure your tin before you click Print. You’ll also want to remove any inner lip from the top of the can using a can opener, and make a small hole in the side of the can near the bottom for the USB power cable. Next, make a hole in the bottom of the can for the LED cables to pass through.

If you want to add more than a couple of LEDs (or want to use brighter LEDs), you should connect your LEDs to the power input, and use a transistor on the GPIO to trigger them

If you haven’t already done so, solder appropriate leads to your LEDs, and don’t forget to put a 330 Ω resistor in-line on the positive side. The neck of the camera is supported by two lengths of aluminium armature wire. Push the wire up through each of the printed neck pieces, and use a clean soldering iron to weld the pieces together in the middle. Push the neck into the printed top section, and weld into place with a soldering iron from underneath. Be careful not to block the narrow slot with plastic, as this is where the camera cable passes up through the neck and into the camera.

You need to mount the BME280 so that the sensor is exposed to the air in the room. Do this by drilling a small hole in the 3D-printed top piece and hot-gluing the sensor into position. If you’re going to use the optional microphone, you can add an extra hole and glue the mic into place in the same way. A short USB port extender will give you enough cable to plug the USB microphone into the socket on your Raspberry Pi.

Paint the tin can and the 3D-printed parts. We found that spray blackboard paint gives a good effect on 3D-printed parts, and PlastiKote stone effect paint made the tin can look a little more tactile than a flat colour. Once the paint is dry, pass the camera cable up through the slot in the neck, and then apply the heat-shrink tubing to cover the neck with a small gap at the top and bottom. Connect the camera to the top of the cable, and push the front piece on to hold it into place. Glue shouldn’t be necessary, but a little hot glue might help if the front parts don’t hold together well.

Push the power cable through the hole in the case, and secure it with a knot and some hot glue. Leave enough cable free to easily remove the top section from the can in future without stressing the wires.

If you’re having trouble getting the armature wire through the 3D-printed parts, try using a drill to help twist the wire through

This is getting heavy

Glue the bottom section onto the can with hot glue, and hot-glue the LEDs into place on the bottom, feeding the cable up through the hole and into the GPIO header. This is a good time to hot-glue a weight into the bottom of the can to improve its stability. I used an old weight from some kitchen scales, but any small weight should be fine. Finally, fix the Raspberry Pi into place on the top piece by either drilling or gluing, then reconnect the rest of the cables, and push the 3D-printed top section into the tin can. If the top section is too loose, you can add a little bit of hot glue to hold things together once you know everything is working.

With the right type of paint, even old tin cans make a good-looking enclosure for a project

That should be all of the steps complete. Plug in the USB and check the camera from a web browser. The babycam.py script includes video, sensors, and light control. If you are using the optional USB microphone, you can expand the functionality of the app to include audio streaming, use cry detection to activate the LEDs (don’t make the LEDs too stimulating or you’ll never get a night’s sleep again), or maybe even add a Bluetooth speaker and integrate a home assistant.

HackSpace magazine issue 26

HackSpace magazine is out now, available in print from your local newsagent, the Raspberry Pi Store in Cambridge, and online from Raspberry Pi Press.

If you love HackSpace magazine as much as we do, why not have a look at the subscription offers available, including the 12-month deal that comes with a free Adafruit Circuit Playground!

And, as always, you can download the free PDF here.
