Tag Archives: Raspberry Pi 4

Raspberry Pi touchscreen music streamer

via Raspberry Pi

If you liked the look of yesterday’s Raspberry Pi Roon Endpoint Music Streamer but thought: “Hey, you know what would be great? If it had a touchscreen,” then look no further. Home Theater Fanatics has built something using the same RoPieee software, but with the added feature of a screen, for those who need one.

Subscribe to Home Theater Fanatics on YouTube for more great builds like this one

The build cost for this is a little higher than the $150 estimate for recreating yesterday’s project, given the inclusion of a fancier digital-to-analogue converter (DAC) and the touchscreen itself.

Hardware

connecting raspberry pi to touchscreen
It really is a super user-friendly walkthrough video

The brilliant Home Theater Fanatics show you how to put all of this together from this point in the build video, before moving on to the software install. They take care to go through all of the basics of the hardware in case you’re not familiar with things like ribbon cables or fans. It’s a really nice bird’s-eye view walkthrough, so beginners aren’t likely to have any problems following along.

ribbon attaching to raspberry pi
See – close-ups of how to connect your ribbon cables and everything

Software

Same as yesterday’s build:

At this point in the build video, Home Theater Fanatics go through the three steps you need to take to get the RoPieee and Roon software sorted out, then connect the DAC. Again, it’s a really clear, comprehensive on-screen walkthrough that beginners can be comfortable with.

Why do I need a touchscreen music streamer?

touchscreen music player
Get all your album track info right in your face

Aside from letting you see the album artwork for the music you’re currently listening to, this touchscreen solution makes for easy song switching during home workouts. It’s also a much snazzier-looking tabletop alternative to a plugged-in phone spouting a Spotify playlist.

The post Raspberry Pi touchscreen music streamer appeared first on Raspberry Pi.

How to build a Raspberry Pi Roon Endpoint Music Streamer

via Raspberry Pi

Our friend Mike Perez at Audio Arkitekts is back to show you how to build PiFi, a Raspberry Pi-powered Roon Endpoint Music Streamer. The whole build costs around $150, which is pretty good going for such a sleek-looking Roon-ready end product.

Roon ready

Roon is a platform for all the music in your home, and Roon Core (which works with this build) manages all your music files and streaming content. The idea behind Roon is to bring all your music together, so you don’t have to worry about where it’s stored, what format it’s in, or where you stream it from. You can start a free trial if you’re not already a user.

Parts list

Raspberry Pi 4 Model B
HiFiBerry DAC2 Pro
Sleek HiFiBerry case
SD card

Simple to put together

Fix the HiFiBerry DAC2 Pro into the top of the case with the line and headphone outputs poking out. A Raspberry Pi 4 Model B is the brains of the operation, and slots nicely onto the HiFiBerry. The HiFiBerry HAT is compatible with all Raspberry Pi models that have a 40-pin GPIO connector, and just clicks right onto the GPIO pins. It is also powered directly by the Raspberry Pi, so no additional power supply is needed.

Raspberry Pi 4 connected to HiFiBerry HAT inside the top half of the case (before the bottom half is screwed on)

Next, secure the bottom half of the case, making sure all the Raspberry Pi ports line up with the case’s ready-made holes. Mike did the whole thing by hand with just a little help from a screwdriver right at the end.

Software

Flash the latest RoPieee image onto your SD card, then slot the card back into your Raspberry Pi to turn it into a Roon Ready endpoint. Now you have a good-looking, affordable audio output ready to connect to your Roon Core.

And that’s it. See – told you it was easy. Don’t forget, Audio Arkitekts’ YouTube channel is a must-follow for all audiophiles.

The post How to build a Raspberry Pi Roon Endpoint Music Streamer appeared first on Raspberry Pi.

Edge Impulse and TinyML on Raspberry Pi

via Raspberry Pi

Raspberry Pi is probably the most affordable way to get started with embedded machine learning. The inferencing performance we see with Raspberry Pi 4 is comparable to or better than some of the new accelerator hardware, but your overall hardware cost is just that much lower.

Raspberry Pi 4 Model B

However, training custom models on Raspberry Pi — or any edge platform, come to that — is still problematic. This is why today’s announcement from Edge Impulse is a big step, and makes machine learning at the edge that much more accessible. With full support for Raspberry Pi, you can now collect data, train against it in the cloud on the Edge Impulse platform, and then deploy the newly trained model back to your Raspberry Pi.

Today’s announcement includes new SDKs for Python, Node.js, Go, and C++, which allow you to integrate machine learning models directly into your own applications. There is also support for object detection, exclusively on Raspberry Pi: you can train a custom object detection model using camera data captured on your own Raspberry Pi, then deploy and use that custom model rather than relying on a pretrained stock image classification model.

To test it out, we’re going to train a very simple model that can tell the difference between a banana 🍌 and an apple 🍎, because the importance of bananas to machine learning researchers cannot be overstated.

Getting started

If you don’t already have an Edge Impulse account, open up a browser on your laptop and create an account, along with a test project. I’m going to call mine “Object detection”.

Creating a new project in Edge Impulse

We’re going to be building an image classification project, one that can tell the difference between a banana 🍌 and an apple 🍎, but Edge Impulse will also let you build an object detection project, one that will identify multiple objects in an image.

Building an object detection rather than an image classification system? This video is for you!

After creating your project, you should see something like this:

My new object detection project open in Edge Impulse

Now log in to your Raspberry Pi, open up a Terminal window, and type

$ curl -sL https://deb.nodesource.com/setup_12.x | sudo bash -
$ sudo apt install -y gcc g++ make build-essential nodejs sox gstreamer1.0-tools gstreamer1.0-plugins-good gstreamer1.0-plugins-base gstreamer1.0-plugins-base-apps
$ sudo npm install edge-impulse-linux -g --unsafe-perm

to install the local toolchain. Then type

$ edge-impulse-linux
Edge Impulse Linux client v1.1.5
? What is your user name or e-mail address (edgeimpulse.com)? alasdair
? What is your password? [hidden]
This is a development preview.
Edge Impulse does not offer support on edge-impulse-linux at the moment.


? To which project do you want to connect this device? Alasdair Allan / Object detection
? Select a microphone USB-Audio - Razer Kiyo
[SER] Using microphone hw:1,0
? Select a camera Razer Kiyo
[SER] Using camera Razer Kiyo starting...
[SER] Connected to camera
[WS ] Connecting to wss://remote-mgmt.edgeimpulse.com
[WS ] Connected to wss://remote-mgmt.edgeimpulse.com
? What name do you want to give this device? raspberrypi
[WS ] Device "raspberrypi" is now connected to project "Object detection"
[WS ] Go to https://studio.edgeimpulse.com/studio/XXXXX/acquisition/training to build your machine learning model!

and log in to your Edge Impulse account. You’ll then be asked to choose a project, and finally to select a microphone and camera to connect to the project. I’ve got a Razer Kiyo connected to my own Raspberry Pi so I’m going to use that.

Raspberry Pi has connected to Edge Impulse

If you still have your project open in a browser you might see a notification telling you that your Raspberry Pi is connected. Otherwise you can click on “Devices” in the left-hand menu for a list of devices connected to that project. You should see an entry for your Raspberry Pi.

The list of devices connected to your project

Taking training data

If you look in your Terminal window on your Raspberry Pi you’ll see a URL that will take you to the “Data acquisition” page of your project. Alternatively you can just click on “Data acquisition” in the left-hand menu.

Getting ready to collect training data

Go ahead and select your Raspberry Pi if it isn’t already selected, and then select the Camera as the sensor. You should see a live thumbnail from your camera appear on the right-hand side. If you want to follow along, position your fruit (I’m starting with the banana 🍌), add a text label in the “Label” box, and hit the “Start sampling” button. This will capture an image and save it to the cloud. Reposition the banana between shots until you have ten images, then do it all again with the apple 🍎.

Ten labelled images each of the banana 🍌 and the apple 🍎

Since we’re building an incredibly simplistic model, and we’re going to leverage transfer learning, we probably now have enough training data with just these twenty images. So let’s go and create a model.

Creating a model

Click on “Impulse design” in the left-hand menu. Start by clicking on the “Add an input block” box and click on the “Add” button next to the “Images” entry. Next click on the “Add a processing block” box. Then click on the “Add” button next to the “Image” block to add a processing block that will normalise the image data and reduce colour depth. Then click on the “Add a learning block” box and select the “Transfer Learning (images)” block to grab a pretrained model intended for image classification, on which we will perform transfer learning to tune it for our banana 🍌 and apple 🍎 recognition task. You should see the “Output features” block update to show 2 output features. Now hit the “Save Impulse” button.

Our configured Impulse
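For the curious, the “Transfer Learning (images)” block works along the lines of the following sketch: take a small convolutional network pretrained on a large image dataset, freeze its feature extractor, and train a new two-class head on our banana 🍌 and apple 🍎 images. This is a minimal Keras illustration of the general technique, assuming a MobileNetV2 backbone and 96×96 inputs; it is not Edge Impulse’s exact configuration.

# A minimal transfer-learning sketch in Keras; Edge Impulse's actual block may differ.
import tensorflow as tf

# Pretrained feature extractor (96x96 RGB input, ImageNet weights), frozen.
base = tf.keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), alpha=0.35, include_top=False, weights="imagenet"
)
base.trainable = False

# New classification head for our two classes: apple and banana.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.1),
    tf.keras.layers.Dense(2, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])

# 'train_ds' would be a tf.data.Dataset of our twenty labelled 96x96 images,
# batched and normalised; uncomment to train.
# model.fit(train_ds, epochs=20)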

Next click on the “Images” sub-item under the “Impulse design” menu item, switch to the “Generate features” tab, and then hit the green “Generate features” button.

Generating model features

Finally, click on the “Transfer learning” sub-item under the “Impulse design” menu item, and hit the green “Start training” button at the bottom of the page. Training the model will take some time. Go get some coffee ☕.

A trained model

Testing our model

We can now test our trained model against the world. Click on the “Live classification” entry in the left-hand menu, then hit the green “Start sampling” button to take a live picture with your camera.

Live classification to test your model

You might want to go fetch a different banana 🍌, just for testing purposes.

A live test of the model

If you want to do multiple tests, just scroll up and hit the “Start sampling” button again to take another image.

Deploying to your Raspberry Pi

Now we’ve (sort of) tested our model, we can deploy it back to our Raspberry Pi. Go to the Terminal window where the edge-impulse-linux command connecting your Raspberry Pi to Edge Impulse is running, and hit Control-C to stop it. Afterwards we can do a quick evaluation deployment using the edge-impulse-linux-runner command.

$ edge-impulse-linux-runner
This is a development preview.
Edge Impulse does not offer support on edge-impulse-linux-runner at the moment.

Edge Impulse Linux runner v1.1.5

[RUN] Already have model /home/pi/.ei-linux-runner/models/24217/v2/model.eim not downloading...
[RUN] Starting the image classifier for Alasdair Allan / Object detection (v2)
[RUN] Parameters image size 96x96 px (3 channels) classes [ 'apple', 'banana' ]
[RUN] Using camera Razer Kiyo starting...
[RUN] Connected to camera

Want to see a feed of the camera and live classification in your browser? Go to http://XXX.XXX.XXX.XXX:XXXX

classifyRes 31ms. { apple: '0.0097', banana: '0.9903' }
classifyRes 29ms. { apple: '0.0082', banana: '0.9918' }
 .
 .
 .
classifyRes 23ms. { apple: '0.0078', banana: '0.9922' }

This will connect to the Edge Impulse cloud, download your trained model, and start up an application that will take the video stream coming from your camera and look for bananas 🍌 and apples 🍎. The results of the model inferencing will be shown frame by frame in the Terminal window. When the runner application starts up you’ll also see a URL: copy and paste this into a browser, and you’ll see the view from the camera in real time along with the inferencing results.

Deployed model running locally on your Raspberry Pi

Success! We’ve taken our training data and trained a model in the cloud, and we’re now running that model locally on our Raspberry Pi. Because we’re running the model locally, we no longer need network access. No data needs to leave the Raspberry Pi. This is a huge privacy advantage for edge computing compared to cloud-connected devices.

Wrapping up?

While we’re currently running our model inside Edge Impulse’s “quick look” application, we can deploy the exact same model into our own applications, as today’s announcement includes new SDKs for Python, Node.js, Go, and C++. These SDKs let us build standalone applications that collect data not just from our camera and microphone, but from other sensors like accelerometers, magnetometers, or anything else you can connect to a Raspberry Pi.
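As a taster, classifying a single image with the Python SDK looks roughly like the sketch below. The model path is the one the runner downloaded above (yours will differ), the test photo is a placeholder, and the method names follow the SDK’s published image-classification example, so treat it as a guide rather than gospel.

# Rough sketch of one-shot image classification with the edge_impulse_linux Python SDK
# (pip3 install edge_impulse_linux). Paths and file names are placeholders.
import cv2
from edge_impulse_linux.image import ImageImpulseRunner

MODEL = "/home/pi/.ei-linux-runner/models/24217/v2/model.eim"  # downloaded by the runner

with ImageImpulseRunner(MODEL) as runner:
    runner.init()                                   # load the model
    img = cv2.imread("test-banana.jpg")             # any test photo
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)      # the SDK expects RGB
    features, cropped = runner.get_features_from_image(img)
    result = runner.classify(features)
    print(result["result"]["classification"])       # e.g. {'apple': 0.01, 'banana': 0.99}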

Performance metrics for Edge Impulse are promising, although still somewhat below what we’ve seen using TensorFlow Lite directly on Raspberry Pi 4 with similar models. That said, it’s really hard to compare performance across even very similar models, as so much depends on the exact situation you’re in and the data you’re dealing with, so your mileage may vary quite a lot here.

However, the new Edge Impulse announcement offers two vital things: a cradle-to-grave framework for collecting data, training models, and then deploying those custom models at the edge, together with a layer of abstraction. Increasingly we’re seeing deep learning eating software as part of a general trend towards increasing abstraction in software, sometimes termed lithification. That sounds intimidating, but it means we can all do more, with less effort. Which isn’t a bad thing at all.

The post Edge Impulse and TinyML on Raspberry Pi appeared first on Raspberry Pi.

Raspberry Pi thermal camera

via Raspberry Pi

It has been a cold winter for Tom Shaffner, and since he is working from home and leaving the heating on all day, he decided it was finally time to see where his house’s insulation could be improved.

camera attached to raspberry pi in a case
Tom’s setup inside a case with a cooling fan; the camera is taped on bottom right

An affordable solution

His first thought was to get a thermal IR (infrared) camera, but he found that prices haven’t yet come down as much as he’d hoped: they range from several thousand dollars down to a few hundred, with a $50 option just to rent one from a hardware store for 24 hours.

When he saw the $50 option, he realised he could just buy the $60 (£54) MLX90640 Thermal Camera from Pimoroni and attach it to a Raspberry Pi. Tom used a Raspberry Pi 4 for this project. Problem affordably solved.

A joint open source effort

Once Tom’s hardware arrived, he took advantage of the opportunity to combine elements of several other projects that had caught his eye into a single, consolidated Python library that can be downloaded via pip and run both locally and as a web server. Tom thanks Валерий Курышев, Joshua Hrisko, and Adrian Rosebrock for their work, on which this solution was partly based.

heat map image showing laptop and computer screen in red with surroundings in blue
The heat image on the right shows that Tom’s computer and laptop screens are the hottest parts of the room

Tom has also published everything on GitHub for further open source development by any enterprising individuals who are interested in taking this even further.

Quality images

The big question, though, was whether the image quality would be good enough to be of real use. A few years back, the best cheap thermal IR camera had only an 8×8 resolution – not great. The magic of the MLX90640 Thermal Camera is that for the same price the resolution jumps to 24×32, giving each frame 768 different temperature readings.
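To give a flavour of what the sensor hands you, here’s a minimal sketch that reads one of those 768-value frames over I2C using Adafruit’s MLX90640 library. This isn’t Tom’s library, and the I2C frequency and refresh rate are assumptions for illustration.

# Minimal MLX90640 frame read using the Adafruit CircuitPython library
# (pip3 install adafruit-circuitpython-mlx90640). Not Tom's consolidated library.
import board
import busio
import adafruit_mlx90640

i2c = busio.I2C(board.SCL, board.SDA, frequency=800000)   # the sensor likes a fast bus
mlx = adafruit_mlx90640.MLX90640(i2c)
mlx.refresh_rate = adafruit_mlx90640.RefreshRate.REFRESH_2_HZ

frame = [0.0] * 768            # 24 x 32 = 768 temperature readings, in degrees C
mlx.getFrame(frame)            # blocks until a complete frame has been read
print(min(frame), max(frame))  # coldest and hottest spots in view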

heat map image showing window in blue and lamp in red
Thermal image showing heat generated by a ceiling lamp but lost through windows

Add a bit of interpolation and image enlargement and the end result gets the job done nicely. Stream the video over your local wireless network, and you can hold the camera in one hand and your phone in the other to use as a screen.
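The “bit of interpolation and image enlargement” can be as simple as the sketch below, which reshapes a frame to 24×32, upscales it with SciPy, and renders it with a thermal colour map. The zoom factor and colour map are arbitrary choices for illustration, not Tom’s settings.

# Turn a 768-value frame into a viewable heat map: reshape, interpolate, colourise.
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt

# 'frame' would come from the sensor sketch above; random data stands in here
# so the snippet runs on its own.
frame = np.random.uniform(18.0, 35.0, 768).reshape(24, 32)

smooth = ndimage.zoom(frame, 10, order=3)     # 240 x 320 pixels, cubic interpolation
plt.imshow(smooth, cmap="inferno")            # brighter = hotter
plt.colorbar(label="Temperature (C)")
plt.savefig("heatmap.png")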

Bonus security feature

Bonus: If you leave the web server running when you’re finished thermal imaging, you’ve got yourself an affordable infrared security camera.

video showing the thermal camera cycling through interpolation and color modes and varying view
Live camera cycling through interpolation and colour modes and varying view

Documentation on the setup, installation, and results is all available on Tom’s GitHub, along with more pictures of what you can expect.

And you can connect with Tom on LinkedIn if you’d like to learn more about this “technically savvy mathematical modeller”.

The post Raspberry Pi thermal camera appeared first on Raspberry Pi.

Machine learning and depth estimation using Raspberry Pi

via Raspberry Pi

One of our engineers, David Plowman, describes machine learning and shares news of a Raspberry Pi depth estimation challenge run by ETH Zürich (Swiss Federal Institute of Technology).

Spoiler alert – it’s all happening virtually, so you can definitely make the trip and attend, or maybe even enter yourself.

What is Machine Learning?

Machine Learning (ML) and Artificial Intelligence (AI) are some of the top engineering-related buzzwords of the moment, and foremost among current ML paradigms is probably the Artificial Neural Network (ANN).

ANNs involve millions of tiny calculations, merged together in a giant biologically inspired network – hence the name. These networks typically have millions of parameters controlling each calculation, and those parameters must be optimised for every different task at hand.

This process of optimising the parameters so that a given set of inputs correctly produces a known set of outputs is known as training, and is what gives rise to the sense that the network is “learning”.

A popular type of ANN used for processing images is the Convolutional Neural Network. Many small calculations are performed on groups of input pixels to produce each output pixel

Machine Learning frameworks

A number of well known companies produce free ML frameworks that you can download and use on your own computer. The network training procedure runs best on machines with powerful CPUs and GPUs, but even using one of these pre-trained networks (known as inference) can be quite expensive.

One of the most popular frameworks is Google’s TensorFlow (TF), and since this is rather resource intensive, they also produce a cut-down version optimised for less powerful platforms. This is TensorFlow Lite (TFLite), which can be run effectively on Raspberry Pi.
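Running inference with a TFLite model on Raspberry Pi boils down to a few lines with the tflite_runtime interpreter. In this sketch the model file name is a placeholder and the input is dummy data shaped to whatever the model expects.

# Bare-bones TFLite inference on Raspberry Pi (pip3 install tflite-runtime).
# 'model.tflite' is a placeholder for your own converted network.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input matching the shape and dtype the model expects.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()

output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)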

Depth estimation

ANNs have proven very adept at a wide variety of image processing tasks, most notably object classification and detection, but also depth estimation. This is the process of taking one or more images and working out how far away every part of the scene is from the camera, producing a depth map.

Here’s an example:

Depth estimation example using a truck

The image on the right shows, by the brightness of each pixel, how far away the objects in the original (left-hand) image are from the camera (darker = nearer).

We distinguish between stereo depth estimation, which starts with a stereo pair of images (taken from marginally different viewpoints; here, parallax can be used to inform the algorithm), and monocular depth estimation, working from just a single image.
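To get a feel for the stereo case, OpenCV’s classic block matcher turns a rectified stereo pair into a rough disparity map (from which depth follows) in a handful of lines. The file names and matcher parameters below are placeholder choices, not anything from the challenge.

# Quick stereo disparity sketch with OpenCV block matching.
# 'left.png' and 'right.png' stand in for a rectified stereo pair.
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right)   # larger disparity = nearer the camera

# Normalise for display; nearer objects come out brighter in this map.
disparity = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX).astype("uint8")
cv2.imwrite("disparity.png", disparity)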

The applications of such techniques should be clear, ranging from robots that need to understand and navigate their environments, to the fake bokeh effects beloved of many modern smartphone cameras.

Depth Estimation Challenge

CVPR conference logo with dark blue background and the edge of the earth covered in scattered orange lights connected by white lines

We were very interested then to learn that, as part of the CVPR (Computer Vision and Pattern Recognition) 2021 conference, Andrey Ignatov and Radu Timofte of ETH Zürich were planning to run a Monocular Depth Estimation Challenge. They are specifically targeting the Raspberry Pi 4 platform running TFLite, and we are delighted to support this effort.

For more information, or indeed if any technically minded readers are interested in entering the challenge, please visit the challenge website.

The conference and workshops are all taking place virtually in June, and we’ll be sure to update our blog with some of the results and models produced for Raspberry Pi 4 by the competing teams. We wish them all good luck!

The post Machine learning and depth estimation using Raspberry Pi appeared first on Raspberry Pi.

Smart Fairy Tale

via Raspberry Pi

This is creepy, and we love it. OK, it’s not REALLY creepy, it’s just that some people have an aversion to dolls that appear to move of their own accord, due to a disturbing childhood experience — but enough about me.

Smart Fairy Tale is a whimsical, unique community project created by Berlin-based installation artist Niklas Roy and interaction designer Felix Fisgus.

Using a smartphone app, viewers determine which way a ball travels through transparent pipes, and depending on which light barriers the ball interrupts on its journey, various toys are animated to tell different stories.

The server of the installation is a Raspberry Pi 4. Via its GPIO pins, it controls the track switches and releases the ball.
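We don’t have the artists’ code, but the kind of GPIO control involved might look something like this gpiozero sketch. The pin numbers, the servo-driven track switch, and the ball-release output are all assumptions for illustration, not the installation’s actual wiring.

# Illustrative GPIO sketch with gpiozero; pins and mechanisms are assumptions.
from time import sleep
from gpiozero import Servo, OutputDevice

track_switch = Servo(17)          # a servo on GPIO 17 points the track switch
ball_release = OutputDevice(27)   # GPIO 27 drives a ball-release mechanism

def send_ball(left: bool) -> None:
    """Point the track switch, then release a ball onto the chosen route."""
    track_switch.value = -1 if left else 1   # full deflection one way or the other
    sleep(0.5)                               # give the servo time to move
    ball_release.on()
    sleep(0.2)
    ball_release.off()

send_ball(left=True)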

Raspberry Pi 4 mounted onto plastic with the installation's servo and all the microcontrollers
Raspberry Pi 4 tucked in the top right-hand corner, mounted together with the router. Photo courtesy of Niklas’ project page

The apparatus is full of toys donated by residents of Wolfsburg, Germany. The artists wanted local people to not only be able to operate the mechanical piece, but also to have a hand in creating it. Each animatronic toy is made as a separate module, controlled by its own Arduino Nano.

Smart Fairy Tale can be remotely controlled by viewers who want to check in on the toys they gifted to the installation, and by any other curious people elsewhere in the world.

A phone using the app to control the installation. The installation is out of focus in the background
The app in action. Photo from Felix’s project page.

Better yet, the stories the toys tell were devised by local school students. The artists showed the gifted toys to a few elementary school classes, and the students drew several stories featuring toys they liked. The makers then programmed the toys to match what the drawings said they could do. A servo here, a couple of LEDs there, and the students’ stories were brought to life.

Some drawings local children made suggesting storylines for each of the gifted toys
Some of the storylines drawn by local children. Photo courtesy of Felix’s project page.

So what kind of stories did Wolfsburg’s finest come up with? One of the creators explains:

“There were a lot of scenes to interpret, like the blow-up love story, the chemtrail conspiracy, and the fossil fuel disaster, which culminates in a major traffic jam. The latter one even involved a laboratory for breeding synthetic dinosaurs by the use of renewable energies.”

Felix Fisgus

We LOVE it. Don’t tell me this isn’t creepy though…

WHY DO YOU HAUNT MY DREAMS???

You’ll find tonnes of extra technical specs and images in the project posts on both Felix’s and Niklas’ websites.

The post Smart Fairy Tale appeared first on Raspberry Pi.