Tag Archives: camera board

An image-processing robot for RoboCup Junior

via Raspberry Pi

Helen: Today we’re delighted to have a guest post from 17-year-old student Arne Baeyens, aka Robotanicus, who has form in designing prize-winning robots. His latest, designed for the line-following challenge of a local competition, is rather impressive. Over to Arne…

Two months ago, on the 24th of May, I participated in the RoboCup Junior Flanders competition, in the ‘Advanced Rescue’ category. With a Raspberry Pi, of course – I used a Model B running Raspbian. Instead of using reflectance sensors to determine the position of the line, I used the Pi Camera to capture a video stream and applied computer vision algorithms to follow the line. My robot wasn’t the fastest, but I took third place.

A short video of the robot in action:

In this category of the RCJ competition the robot has to follow a black line while avoiding obstacles. T-junctions are marked with green fields indicating the shortest trajectory. The final goal is to push a can out of the green field.


This is not my first robot for the RCJ competition. In 2013 I won the competition with a robot built around the Dwengo board as control unit. It used reflectance and home-made colour sensors. The Dwengo board uses the popular PIC18F4550 microcontroller and has, amongst other things, a motor driver, a 2×16-character screen and a big 40-pin extension connector. The Dwengo board is, like the RPi, designed for educational purposes, with projects in Argentina and India.

As the Dwengo board is a good companion for the Raspberry Pi, I decided to combine both boards in my new robot. While the Pi does high-level image processing, the microcontroller controls the robot.

The Raspberry Pi was programmed in C++ using the OpenCV libraries, the wiringPi library (from Gordon Henderson) and the RaspiCam OpenCV interface library (from Pierre Raufast, improved by Emil Valkov). I overclocked the Pi to 1GHz to get a frame rate of 12 to 14 fps.

Using a camera has some big advantages. First of all, you don’t have a bunch of sensors mounted close to the ground, interfering with obstacles and disturbed by irregularities. The second benefit is that you can see what is in front of the robot without having to build a swinging sensor arm. So you have information not only about the robot’s actual position above the line, but also about the position of the line ahead, which allows you to calculate the line’s curvature. In short, following the line is much more controllable. And by using edge detection rather than greyscale thresholding, the program is virtually immune to shadows and grey zones in the image.

If the line had had fewer hairpin bends, and I had had a bit more time, I would have implemented a speed-regulating algorithm based on the curvature of the line. This would surely have improved the robot’s performance.

I also used the camera to detect and track the green direction fields at a T-junction where the robot has to take the right direction. I used a simple colour blob tracking algorithm for this.

A short video of what the robot thinks:

Please note that in reality the robot goes a little bit slower following the line.

Different steps of the image processing

Image acquired by the camera (with some lines and points already added):
Image acquired by the camera

The RPi converts the colour image to a greyscale image. Then the pixel values along a horizontal line in the image are extracted and put into an array. This array is visualised by plotting the values in a graph (also with OpenCV):
Visualizing pixel values along a line

From the first array, a second is calculated by taking the difference of each pair of successive values. In other words, we calculate the derivative:
Calculating the derivative

A loop then searches for the highest and lowest values in the array. To get the horizontal position of the line, the two position values—on the horizontal x axis of the graphed image—are averaged. The position is kept in memory for the next horizontal scan on a new image. This means the scan line does not have to span the whole image, only about a third of it; it moves horizontally, keeping its centre roughly above the line.
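The scan-and-derivative steps above can be sketched in a few lines of Python (a simplified, pure-Python stand-in for the robot's actual C++/OpenCV code; the function and the synthetic scan line are my own):

```python
# A simplified, pure-Python stand-in for the scan-line step
# (the real robot uses C++ and OpenCV; this function is my own).

def find_line_centre(row, start, width):
    """Locate a dark line in one horizontal scan of greyscale pixel values.

    Only the window [start, start + width) is examined, mirroring the
    robot's trick of scanning about a third of the image, centred
    roughly above the previous line position.
    """
    window = row[start:start + width]
    # The derivative: difference of each pair of successive values.
    deriv = [b - a for a, b in zip(window, window[1:])]
    # A dark line on a light floor shows up as a strong negative edge
    # (lowest value) followed by a strong positive edge (highest value).
    falling = min(range(len(deriv)), key=lambda i: deriv[i])
    rising = max(range(len(deriv)), key=lambda i: deriv[i])
    # Averaging the two edge positions gives the centre of the line.
    return start + (falling + rising) / 2

# Synthetic scan line: light floor (200) with a dark line (20) at x = 10..13.
row = [200] * 10 + [20] * 4 + [200] * 10
print(find_line_centre(row, 0, len(row)))  # → 11.0
```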

But this is not enough for accurate tracking. From the calculated line position, circles following the line are constructed, each using the same method (but with considerably more trigonometry, as the scan lines are curved). For the second circle, not only the line position but also the line angle is used. Thanks to using functions, adding a circle is a matter of two short lines of code.

The colour tracking is done by converting to HSV, thresholding, and then blob tracking, as explained in this excellent video. The colour tracking slows the line following down by a few fps, but this is acceptable.
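A toy version of the threshold-and-track idea, again as a pure-Python stand-in (the real pipeline uses OpenCV; the HSV ranges below are illustrative only):

```python
# A toy, pure-Python version of the threshold-and-track step (the real
# pipeline uses OpenCV; the HSV ranges below are illustrative only).

def threshold_hsv(pixels, lo, hi):
    """Mark pixels whose (h, s, v) values all fall inside the [lo, hi] box."""
    return [[all(l <= c <= u for c, l, u in zip(p, lo, hi)) for p in row]
            for row in pixels]

def blob_centroid(mask):
    """Centroid (x, y) of all marked pixels, or None if the mask is empty."""
    pts = [(x, y) for y, row in enumerate(mask)
           for x, hit in enumerate(row) if hit]
    if not pts:
        return None
    return (sum(x for x, _ in pts) / len(pts),
            sum(y for _, y in pts) / len(pts))

# A 3x3 image with one green-ish pixel (hue ~60 on OpenCV's 0-179 scale).
grey, green = (0, 0, 128), (60, 200, 200)
img = [[grey, grey, grey],
       [grey, green, grey],
       [grey, grey, grey]]
mask = threshold_hsv(img, lo=(45, 100, 100), hi=(75, 255, 255))
print(blob_centroid(mask))  # → (1.0, 1.0)
```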

HSV image Thresholded image

As seen in the video, afterwards all the scan lines and some info points are plotted on the input image so we can see what the robot ‘thinks’.

And then?

After the Raspberry Pi has found the line, it sends the position data and commands at 115.2 kbit/s over the hardware serial port to the Dwengo microcontroller board. The Dwengo board does some additional calculations, like taking the square root of the proportional error and squaring the ‘integral error’ (the curvature of the line). I also used a serial interrupt and made the serial port as bug-free as possible by receiving each character separately, so the program does not wait for the next character while in the serial interrupt.
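The error shaping can be illustrated in Python (an illustration only: the real calculation runs in C on the PIC, and the sign handling here is my own assumption):

```python
# An illustration of the error shaping (the real calculation runs in C on
# the PIC; the sign handling here is my own assumption).
import math

def shape_errors(p_err, i_err):
    """Square-root the proportional error, square the 'integral' error.

    The square root compresses large line-position errors; squaring
    emphasises strong curvature. Each error keeps its original sign.
    """
    p = math.copysign(math.sqrt(abs(p_err)), p_err)
    i = math.copysign(i_err * i_err, i_err)
    return p, i

print(shape_errors(9.0, -2.0))  # → (3.0, -4.0)
```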

The Dwengo board sends an answer character to control the data stream. The microcontroller also takes the analogue input of the SHARP IR long range sensor to detect the obstacles and scan for the container.

In short, the microcontroller controls the robot, and the Raspberry Pi does an excellent job of running the CPU-intensive line-following program.

There’s a post on the forum with a more detailed technical explanation – but you will find the most important steps below.

Electrical wiring
Both devices are interconnected by two small boards—one attaches to the RPi and the other to the Dwengo board—joined by a right-angle header. The first converts the logic levels with a few resistors (the Dwengo board runs at 5V); the second also has two DC jacks, with diodes in parallel, for power input to the RPi. To regulate the power to the Pi, I used a Turnigy UBEC that delivers a stable 5.25V, fed into the Pi through the micro USB connector. This gives a bit more protection to the sensitive Pi. As the camera image was a bit distorted, I added a 470µF capacitor to smooth things out. This helped. Even though the whole Pi got hot, the UBEC stayed cold. The power input was between 600 and 700mA at around 8.2 volts.

Last year, I almost missed first place as the robot only just pushed the can out of the field. Not a second time! With this in mind, I constructed two 14cm-long arms that could be swung open by two 9g servos. With the two grippers opened, the robot spans almost 40 centimetres. Despite this, the robot managed—to everyone’s annoyance—‘to take its time before doing its job’, as can be seen in the video.

Building the robot platform
To build the robot platform I used the same technique as the year before (link, in Dutch). I made a design in SketchUp, converted it to a 2D vector drawing, and finally laser-cut it at FabLab Leuven. However, the new robot platform is completely different in design. Last year, I made a ‘riding box’ by taking almost the maximum dimensions and mounting the electronics somewhere on or in it.

This time, I took a different approach. Instead of using an outer shell (like insects have), I made a design that supports and covers the parts only where necessary. The result of this is not only that the robot looks much better, but also that the different components are much easier to mount and that there is more space for extensions and extra sensors. The design files can be found here: Robot RoboCup Junior – FabLab Leuven.

3D renders in SketchUp:


On the day of the RCJ competition I had some bad luck, as there wasn’t enough light in the competition room. The shutter time of the camera became much longer, and as a consequence the robot had much more difficulty following sharp bends in the line. However, this problem did not affect the final outcome of the competition.

Maybe I should have mounted some LEDs to illuminate the line…

Vectors from coarse motion estimation

via Raspberry Pi

Liz: Gordon Hollingworth, our Director of Software, has been pointing the camera board at things, looking at dots on a screen, and cackling a lot over the last couple of weeks. We asked him what he was doing, so he wrote this for me. Thanks Gordon!

The Raspberry Pi is based on a BCM2835 System on a Chip (SoC), which was originally developed to do lots of media acceleration for mobile phones. Mobile phone media systems tend to follow behind desktop systems, but are far more energy efficient. You can see this efficiency at work in your Raspberry Pi: to decode H264 video on a standard Intel desktop processor requires GHz of processing capability, and many (30-40) Watts of power; whereas the BCM2835 on your Raspberry Pi can decode full 1080p30 video at a clock rate of 250MHz, and only burn 200mW.


This amazing hardware enables us to do things like video encode and decode in real time without doing much work at all on the processor (all the work is done on the GPU, leaving the ARM free to shuffle bits around!). It also means we have access to very interesting parts of the encode pipeline that you’d otherwise not be able to look at.

One of the most interesting of these parts is the motion estimation block in the H264 encoder. To encode video, one of the things the hardware does is to compare the current frame with the previous (or a fixed) reference frame, and work out where the current macroblock (16×16 pixels) best matches the reference frame. It then outputs a set of vectors which tell you where the block came from – i.e. a measure of the motion in the image.

In general, this is the mechanism used by the application ‘motion’. It compares the image on the screen with the previous image (or a long-term reference), and uses the information to trigger events, like recording the video, writing an image to disk, or sounding an alarm. Unfortunately, at this resolution it takes a huge amount of processing to achieve this in the pixel domain, which is silly if the hardware has already done all the hard work for you!

So over the last few weeks I’ve been trying to get the vectors out of the video encoder for you, and the attached animated gif shows the results of that work. What you are seeing is the magnitude of the vector for each 16×16 macroblock – equivalent to the speed at which it is moving! The information comes out of the encoder as side information (it can be enabled in raspivid with the -x flag). It is one integer per macroblock, and is (mb_width + 1) × mb_height × 4 bytes per frame, so for 1080p30 that is 121 × 68 × 4 ≈ 32 kbytes per frame. And here are the results. (If you think you can guess what the movement you’re looking at here represents, let us know in the comments.)
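Here is a sketch of how that side information could be unpacked (an assumption on my part: the common interpretation of each macroblock's 4 bytes is two signed 8-bit vector components plus a 16-bit SAD score – check this against your firmware version):

```python
# Unpacking one frame of the side information (assumption: each
# macroblock's 4 bytes are two signed 8-bit vector components plus a
# 16-bit SAD score - check this against your firmware version).
import struct

MB_W, MB_H = 120 + 1, 68  # (mb_width + 1) x mb_height for 1080p

def vector_magnitudes(buf):
    """Turn one frame of raspivid -x output into per-block speeds."""
    out = []
    for off in range(0, len(buf), 4):
        x, y, _sad = struct.unpack_from('<bbH', buf, off)
        out.append((x * x + y * y) ** 0.5)
    return out

# Two fake macroblocks: one still, one moving by (3, 4) -> magnitude 5.
frame = struct.pack('<bbH', 0, 0, 0) + struct.pack('<bbH', 3, 4, 10)
print(vector_magnitudes(frame))  # → [0.0, 5.0]
```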


Since this represents such a small amount of data, it can be processed very easily which should lead to 30fps motion identification and object tracking with very little actual work!

Go forth and track your motion!

Books, the digitising and text-to-speechifying thereof

via Raspberry Pi

A couple of books projects for you today. One is simple, practical and of great use to the visually-impaired. The other is over-complicated, and a little bit nuts; nonetheless, we think it’s rather wonderful; and actually kind of useful if you’ve got a lot of patience.

We’ll start with the simple and practical one first: Kolibre is a Finnish non-profit making open-source audiobook software so you can build a reader with very simple controls. This is Vadelma, an internet-enabled audio e-reader. It’s very easy to put together at home with a Raspberry Pi: you can find full instructions and discussion of the project at Kolibre’s website.

The overriding problem with automated audio e-readers is always the quality of the text-to-speech voice; it’s the reason that books recorded by real, live actors are currently so much more popular. But those are expensive, and it’s likely we’ll see innovations in text-to-speech as natural language processing research progresses (it’s challenging: people have been hammering away at this problem for half a century), and as this stuff becomes easier to automate and more widespread.

How easy is automation? Well, the good people at Dexter Industries decided that what the Pi community (which, you’ll have noticed, has a distinct crossover with the LEGO community) really needed was a robot that could use optical character recognition (OCR) to digitise the text of a book, Google Books style. They got that up and running with a Pi and a camera module pretty quickly, using the text on a Kindle as proof of concept.

But if you’re that far along, why stop there? The Dexter team went on to add Lego features, until they ended up with a robot capable of wrangling real paper books, down to turning pages with one of those rubber wheels when the device has finished scanning the current text.

So there you have it: a Google Books project you can make at home, and a machine you can make to read the books to you when you’re done. If you want to read more about what Dexter Industries did, they’ve made a comprehensive writeup available at Makezine. Let us know how you get on if you decide to reduce your own library to bits.

Timelapse tutorial from Carrie Anne’s Geek Gurl Diaries

via Raspberry Pi

Even though Carrie Anne Philbin is working here at Pi Towers now, she’s still carrying on with the Geek Gurl Diaries YouTube channel that she set up before she joined us – for which we’re all profoundly grateful, because her videos are some of the best tutorials we’ve seen.

Here’s the latest from Carrie Anne: a tutorial on setting up the camera board, making timelapse video, and creating animations.

Are you a primary or secondary teacher in the UK? Would you like two days of free CPD from Carrie Anne and the rest of our superstar education team? You’ll get to come here to Pi Towers, meet all of us, and learn about the many ways you can use the Raspberry Pi in the classroom. Apply here - we’d love to hear from you.

New camera mode released

via Raspberry Pi

Liz: you’ll notice that this post has no pictures or video. That’s because we’d like you to make some for us, using the new camera mode. Take some 90fps video using our camera board and the information below, slow it down to 30fps and send us a link: if yours is particularly splendid, we’ll feature it here and on the front page. Over to JamesH!

I asked for video, you were forthcoming. Here are two which arrived in the few hours after we posted this: the first video, with the juggling clubs, is from Tobias Huebner; the second, with the bouncing balls, is from JBAnovling. JamesH’s discussion about what’s going on happens after the pretty. Enjoy!

When the Raspberry Pi camera was released, the eagle-eyed among you noticed that the camera hardware itself can support various high frame rate modes, but that the software could ‘only’ manage 30 frames per second in its high-definition video mode.

There is no hardware limitation in the Raspberry Pi itself: it’s quite capable of handling these high frame rate modes, but it does require a certain amount of effort to work out these new ‘modes’ inside the camera software. At the original release of the camera, two modes were provided: a stills capture mode, which offers the full resolution of the sensor (2592×1944), and a 1080p video mode (1920×1080). Those same eagle-eyed people will see that these modes have different aspect ratios – the ratio of width to height. Stills outputs 4:3 (like old-school TV), video 16:9 (widescreen).

This creates a problem when previewing stills captures, since the preview uses the video mode so it can run at 30 frames per second (fps) – not only is the aspect ratio of the preview different, but because the video mode ‘crops’ the sensor (i.e. takes a 1920×1080 window from the centre), the field of view in preview mode is very different from the actual capture.

We had some work to do to develop new modes for high frame rates, and also to fix the stills preview mode so that it matches the capture mode.

So now, finally, some very helpful chaps at Broadcom, with some help from Omnivision, the sensor manufacturer, have found some spare time to sort out these modes, and not just that but to add some extra goodness while they were at it. (Liz interjects: The Raspberry Pi Foundation is not part of Broadcom – we’re a customer of theirs – but we’ve got a good relationship and the Foundation’s really grateful for the volunteer help that some of the people at Broadcom offer us from time to time. You guys rock: thank you!)

The result is that we now have a set of modes as follows:

  • 2592×1944 1-15fps, video or stills mode, full sensor, full FOV, default stills capture
  • 1920×1080 1-30fps, video mode, 1080p30, cropped
  • 1296×972 1-42fps, video mode, 4:3 aspect, binned, full FOV; used for stills preview in raspistill
  • 1296×730 1-49fps, video mode, 16:9 aspect, binned, full FOV (width); used for 720p
  • 640×480 42.1-60fps, video mode, up to VGAp60, binned
  • 640×480 60.1-90fps, video mode, up to VGAp90, binned

I’ve introduced a new word in that list: binned. This is how we get high frame rates. Binning means combining pixels from the sensor together in a ‘bin’, in the analogue domain. As well as reducing the amount of data, this can also improve low-light performance, as it averages out sensor ‘noise’ without the quantisation noise introduced by the analogue-to-digital converters (ADCs), which are the bits of electronics in the sensor that convert the analogue information created by incoming photons into digital numbers.

So if we do a 2×2 ‘bin’ on the sensor, it only sends a quarter (2×2 = 4 pixels merged into one = one quarter!) of the amount of data per frame to the Raspberry Pi. This means we can approximately quadruple the frame rate for the same amount of data (there are some other issues at play). So a simple 2×2 bin theoretically means quadruple the frame rate, but at half the X and Y resolution. This is how the 1296×972 mode works – it’s exactly a 2×2 binned mode, so it’s still 4:3 ratio, uses the whole sensor field of view, and makes a perfect preview mode for stills capture.
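The arithmetic of a 2×2 bin is easy to see in a toy example (plain averaging here; a real sensor combines the charge in the analogue domain, before the ADCs):

```python
# A toy illustration of 2x2 binning (plain averaging here; a real sensor
# combines charge in the analogue domain, before the ADCs).

def bin2x2(img):
    """Average each 2x2 block into one pixel, halving both dimensions."""
    return [[(img[y][x] + img[y][x + 1] +
              img[y + 1][x] + img[y + 1][x + 1]) / 4
             for x in range(0, len(img[0]), 2)]
            for y in range(0, len(img), 2)]

# 4x4 input -> 2x2 output: a quarter of the data per frame.
img = [[10, 30, 50, 70],
       [20, 40, 60, 80],
       [ 5,  5,  9,  9],
       [ 5,  5,  9,  9]]
print(bin2x2(img))  # → [[25.0, 65.0], [5.0, 9.0]]
```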

We also have a very similar mode, which is 1296×730. This is used for 720p video recording (the sensor image is scaled by the GPU to 1280×720). This is a 2×2 binned mode with an additional crop, which also means a slightly increased frame rate as there is less data to transfer.

Now by reducing the resolution output by the sensor even further and by using ‘skipping’ of pixels in combination with binning, we can get even higher frame rates, and this is how the high speed 640×480 VGA modes work. So, the fastest mode is now VGA resolution at 90 frames per second – three times the frame rate of 1080p30.

So, how do we use these new modes?

The demo applications raspistill and raspivid already work with the new modes: specify the resolution and frame rate you need, and the correct mode will be chosen. You will need the newest GPU firmware, which contains all these shiny new modes; get it with sudo rpi-update.

One thing to note: the system will always try to run at the frame rate specified in preference to resolution. Therefore if you specify a high rate at a resolution it cannot manage, it will use a low resolution mode to achieve the frame rate and upscale to the requested size – upscaling rarely looks good. It may also be too fast for the video encoder, so some of the extra frames may be skipped. So always ensure the resolution you specify can achieve the required frame rate to get the best results.

So, a quick example: to record a 10-second VGA clip at 90fps:

raspivid -w 640 -h 480 -fps 90 -t 10000 -o test90fps.h264

There have also been minor changes to the V4L2 driver to support these new modes. These are included when you run rpi-update to get the new GPU firmware.

The V4L2 driver supports the new modes too. Just using the normal requests, you can now ask for up to 90fps. So doing the same streaming of VGA at 90fps to H264 would be the following set of v4l2-ctl commands:

v4l2-ctl -p 90
v4l2-ctl -v width=640,height=480,pixelformat=H264
v4l2-ctl --stream-mmap=3 --stream-count=900 --stream-to=test90fps.h264

There are a few provisos that you will need to consider when using the faster modes, especially with the V4L driver.

  • They will increase the load on the ARM quite significantly, as there will be more callbacks per second. This may have unpredictable effects on V4L applications, which may not be able to keep up.
  • The MJPEG codec doesn’t cope above about 720p40 – it will start dropping frames, and above 45fps it can lock things up solid. You have been warned.
  • H264 will keep up quite happily up to 720p49, or VGA at 90fps.

That said, most people should find no problem with these new features, so a big thank you must go to Dave Stevenson and Naush Patuck at Broadcom for finding the time to implement them! Also, thanks to Omnivision for their continued support.

GPS-tracking helmet cam

via Raspberry Pi

Martin O’Hanlon’s a familiar name in these parts, especially for fans of Minecraft: his repository of Pi Minecraft tricks and tutorials is one of our favourite resources. But Martin’s not all about magicking Menger-Sierpinski Sponges into the Minecraft universe: he does wonderful stuff with hardware and the Raspberry Pi too. Here’s some footage from his latest:

What you’re looking at here is something we haven’t seen before: camera footage with a GPS overlay, showing the route Martin has skied and his current speed. (Gordon, who has his own helmet cam hack, is quivering with envy.) Martin’s setup, like all the best Raspberry Pi hacks, also involves tupperware. It’s a one-button, one-LED design, so it’s as easy as possible to use when you’re wearing ski gloves.

Work in progress

You can find everything you’ll need to construct your own at Martin’s Stuff about Code; he’s also done a very detailed writeup of the design process and included plenty of construction tips, along with the usual code and parts list. Thanks Martin!

Touchscreen point-and-shoot, from Adafruit

via Raspberry Pi

LadyAda from Adafruit is one of my very favourite people. We have a tradition of spending at least one evening eating Korean barbecue whenever I visit New York. We have told each other many secrets over bowls of fizzy fermented rice beverage, posed for photographs in front of plastic meats, been filmed pointing at electronics for the New York Times, and behaved very badly together in Pinkberry in September. LadyAda is the perfect combination of super-smart hacker, pink hair and business ninja; her cat Mosfet likes to Skype transatlantically with the Raspberry Pi cat, Mooncake (at least I think that their intense ignoring of each other constitutes “liking”); and we are incredibly fortunate that she saw the Pi and instantly understood what we were trying to do back in 2011. Here she is on the cover of the MagPi. (Click the image to visit the MagPi website, where you can download the issue for free.)

Her business, Adafruit, which employs an army of hackers and makers, does wonderful things with the Pi. They’ve been incredibly helpful to us in getting the word about Raspberry Pi and our educational mission out in North America. Adafruit not only stocks the Raspberry Pi and a whole warehouse-full of compatible electronics; the team also creates some amazing Raspberry Pi add-ons, along with projects and tutorials.

This is Adafruit’s latest Pi project, and it blew our minds.

All the parts you’ll need to create your own point-and-shoot camera using the Raspberry Pi, a Raspberry Pi camera board, and a little touch-screen TFT add-on board that Adafruit have made especially for the Pi, are available from Adafruit (they ship worldwide and are super-friendly). You can also find out how to send your photos to another computer over WiFi, or using Dropbox. As the Adafruit team says:

This isn’t likely to replace your digital camera (or even phone-cam) anytime soon…it’s a simplistic learning exercise and not a polished consumer item…but as the code is open source, you or others might customize it into something your regular camera can’t do.

As always, full instructions on making your own are on the learning section of Adafruit’s website, with a parts list, comprehensive setup instructions, and much more.

Adafruit have been especially prolific this week: we’ll have another project from them to show you in a few days. Thanks to LadyAda, PT, and especially to Phillip Burgess, who engineered this camera project.

Twitter-triggered photobooth

via Raspberry Pi

A guest post today: I’m just off a plane and can barely string a sentence together. Thanks so much to all the progressive maths teachers we met at the Wolfram conference in New York this week; we’re looking forward to finding out what your pupils do with Mathematica from now on!

Over to Adam Kemény, from photobot.co in Hove, where he spends the day making robotic photobooths.

This summer Photobot.Co Ltd built what we believe to be the world’s first Twitter-triggered photobooth. Its first outing, at London Fashion Week for The Body Shop, allowed the fashion world to create unique portraits of themselves which were then delivered straight to their own mobile devices.

In February we took our talking robotic photobooth, Photobot, to London Fashion Week for The Body Shop, to entertain the media and VIPs. We saw that almost every photostrip Photobot printed was quickly snapped with smartphones and shared to Twitter and Facebook, and that resonated with an idea we’d had for a photobooth that used Twitter as a trigger, rather than buttons or coins.

When The Body Shop approached us to create something new and fun for September’s Fashion Week we pitched a ‘Magic Mirror’ photobooth concept that would allow a Fashion Week attendee to quickly share their personal style.

The first Magic Mirror design, in Sketchup

When an attendee simply sent a tweet to the booth’s Twitter account, the Magic Mirror would greet them via a hidden display before taking their photo. The resulting image would then be tweeted back to them, as well as being shared to a curated gallery.

The Magic Mirror, ready for action

Exploring the concept a little further led us to realise that space constraints meant that, in order to capture a full-length portrait, we’d need a multi-camera setup. We decided to take inspiration from the fragmented portrait concept that Kevin Meredith (aka Lomokev) developed for his work in Brighton Source magazine, and began experimenting with an increasing number of cameras. Raspberry Pis with camera modules soon emerged as good candidates, due to their image quality, ease of networked control and price.

Test rig, firing fifteen cameras

We decided to give the booth a more impactful presence by growing it into a winged dressing mirror with three panels. This angled arrangement would allow the subject to see themselves from several perspectives at once and, as each mirrored panel would contain a five-camera array, they’d be photographed by a total of 15 RPi cameras simultaneously. The booth would then composite the 15 photographs into one stylised portrait before tweeting it back to the subject.

Final hardware

As The Body Shop was using the Magic Mirror to promote a new range of makeup called Colour Crush, graphics on the booth asked the question “What is the main #colour that you are wearing today?”. When a Fashion Week attendee replied to this question (via Twitter), the Magic Mirror would be triggered. Our software scanned tweets to the booth for a mention of any of a hundred colours and sent a personalised reply tweet based on the colour the attendee was wearing. For example, attendees tweeting that they wore the colour ‘black’ would have their photos taken before receiving a tweet from the booth that said “Pucker up for a Moonlight Kiss mwah! Love, @TheBodyShopUK #ColourCrush”, along with their composited portrait.
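The colour-scanning step might have looked something like this (a sketch: the colour list and reply templates here are invented, not Photobot.Co's actual code):

```python
# A sketch of the colour-scanning idea (the colour list and reply
# templates here are invented, not Photobot.Co's actual code).
import re

REPLIES = {
    'black': 'Pucker up for a Moonlight Kiss mwah!',
    'red': 'Red hot!',
    'blue': 'Feeling blue? Never!',
}

def reply_for(tweet):
    """Return a personalised reply for the first known colour mentioned."""
    for word in re.findall(r'[a-z]+', tweet.lower()):
        if word in REPLIES:
            return REPLIES[word] + ' Love, @TheBodyShopUK #ColourCrush'
    return None  # no known colour: don't trigger the booth

print(reply_for('Mostly wearing BLACK today!'))
# → 'Pucker up for a Moonlight Kiss mwah! Love, @TheBodyShopUK #ColourCrush'
```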

The final, eye-searing result!

The Magic Mirror was a challenge to build, mainly due to the complications of getting so many tiny cameras to align in a useful way, but our wonderful developer created a GUI that made arranging the 15 photos onto the composited final image far easier, saving the day and my sanity. The booth was a hit, and we’re always on the lookout for other creative uses for it, so we’d welcome contact from potential collaborators or clients.


What’s that blue thing doing here?

via Raspberry Pi

So your new Pi NoIR has plopped through the letterbox, and you’ve unpacked it. There’s a little square of blue gel in there. What’s it for?

Thanks to Andrew Back for the picture! Click to read more about his adventures with Pi NoIR on the Designspark blog.

If you’ve been keenly reading our posts about why we developed an infrared camera board, you’ll have noticed that we mentioned a lot of interest from botanists, who use infrared photography to monitor the health of trees. We started to read up about the work, found it absolutely fascinating, and thought you’d like to get in on it too.

A short biology lesson follows.

Photosynthesis involves chlorophyll absorbing light and using the energy to drive a charge separation process which ultimately (via a vast range of hacked together bits and pieces – if you believe in intelligent design, you won’t after you’ve read the Wikipedia page on chlorophyll) generates oxygen and carbohydrate. Here’s a nice picture of the absorption spectrum of two sorts of chlorophyll, swiped from Wikipedia:

Notice that both kinds of chlorophyll absorb blue and red light, but not green or infrared.

So: why are trees green? The graph above shows you that it’s because green is what’s left once the chlorophyll has grabbed all the long wavelength (red) and short wavelength (blue) light.

Let’s say you’re a biologist, and you want to measure how much photosynthesis is going on. One way to do this would be to look for greenness, but it turns out an even better method is to look for infrared and not blue – this is what the filter lets us do. Bright areas in a picture filtered like this mean that lots of photosynthesis is happening in those spots.
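In code, the ‘infrared and not blue’ comparison is a normalised difference index, in the style of Public Lab's Infragram (an assumption in this sketch: with the blue filter fitted, the camera's red channel records mostly near-infrared light):

```python
# The 'infrared and not blue' comparison as a normalised difference index,
# in the style of Public Lab's Infragram (assumption: with the blue filter
# fitted, the camera's red channel records mostly near-infrared light).

def infrablue_index(nir, blue):
    """Index in [-1, 1]; high values mean lots of NIR and little blue."""
    if nir + blue == 0:
        return 0.0
    return (nir - blue) / (nir + blue)

# A leaf reflects NIR strongly and absorbs blue; tarmac does neither.
print(infrablue_index(200, 40))  # leaf: ~0.67 (bright = photosynthesis)
print(infrablue_index(90, 80))   # tarmac: ~0.06 (dark)
```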

There’s a long history of doing this stuff from space (the Landsat vehicles, for example, look at the Earth across a very broad spectrum), and Public Lab have done loads of research as part of their Infragram project (and associated Kickstarter) to find ways of modifying cameras, and to find cheap alternatives to expensive optical bandpass filters. Our friend Roscolux #2007 Storaro Blue (that’s the blue thing’s full name) turns out to be a great example – we buy it on giant reels, and the guys at the factory in Wales where we make the Raspberry Pi and both kinds of camera board cut it up into little squares for you to use. It’s not very expensive at all for us to provide you with a little square of blue, and it adds a lot of extra functionality to the camera that we hope you’ll enjoy playing with.

The work of the folk at Public Lab has been absolutely vital in helping us understand all this, and we’re very grateful to them for their work on finding suitable filters at low prices, and especially on image processing. We strongly recommend that you visit Public Lab’s Infragram to process your own images. We’re talking to Public Lab at the moment about working together on developing some educational activities around Pi NoIR. We’ll let you know what we come up with right here.

We sent Matthew Lippincott from Public Lab an early Pi NoIR (and a blue thing), and he sent it up on a quadcopter to take some shots of the tree canopy, which he’s processed using Infragram, to show you what’s possible.

Click to visit a Flickr set with many more Pi NoIR + Infragram images.

We still have some work to do in getting images taken with the filter absolutely perfect (notably in white balance calibration), but we hope that what you can do with the filter already gives you a feel for the potential of an infrared camera. In a way, it’s a shame we’re launching this in the autumn: there’s less photosynthesising going on out there than there might be. But you’ll still get some really interesting results if you go outside today and start snapping.

Pi NoIR infrared camera: now available!

via Raspberry Pi

Pi NoIR, the infrared version of our camera board, is available to purchase for $25 plus tax from today. You’ll find it at all the usual suspects: RS Components, Premier Farnell and their subsidiaries; and at Adafruit. Other stores will be getting stock soon.

Pictures courtesy of Adafruit, who, unlike us, actually have a studio for doing this stuff in – thanks guys!

Back view

What’s that mysterious square of stuff, you ask? I’ll let you know tomorrow.



Creeptacular face-tracking Halloween portrait

via Raspberry Pi

You’ve got a week to build this portrait, whose eyes follow you around the room, for Halloween.

Adafruit have produced a tutorial, courtesy of Tony DiCola, which uses OpenCV and openFrameworks with your Raspberry Pi and camera board to create a picture of pullulating panic. It’s haunted hardware of horripilating hideousness.

You’ll also find instructions on making your own frame in the tutorial: we recommend making one large enough to drill a hole in, so you can conceal the camera board inside before using this to scare your loved ones. It’s elegant and spooky; plus, you can keep it for the rest of the year and use it for another OpenCV project like the Magic Mirror.
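Tony's tutorial does the heavy lifting with OpenCV and openFrameworks; at its heart, steering the portrait's pupils reduces to mapping the detected face's position in the camera frame to a pupil offset. A toy sketch of that mapping (the function name, frame width and offset range are our own illustrative assumptions, not values from the tutorial):

```python
def pupil_offset(face_x, frame_width, max_offset=10):
    """Map the horizontal centre of a detected face to a pupil offset
    in pixels: a face at the left edge pulls the pupils fully left,
    one dead centre leaves them centred."""
    # Normalise the face position to [-1, 1] about the frame centre.
    norm = (face_x - frame_width / 2) / (frame_width / 2)
    return round(norm * max_offset)
```

The same offset can be applied vertically to make the eyes follow viewers around the whole room.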


Infrared camera – you asked us, so we’re making them!

via Raspberry Pi

You may have heard rumours about something we’re calling Pi NoIR (Pi, no infrared) – it’s been a very badly kept secret. Some months ago we featured some work that was being done at Reading Hackspace, where members were removing the infrared filter to use the camera to sense infrared signals, and for low-light work, especially with wildlife. The Reading camera boards ended up going to the Horniman Museum in London, where they’re currently being used to track the activity of corals at night.

A lot of you are interested in wildlife monitoring and photography. London Zoo mentioned to us that the infrared filter on the standard Pi camera board is a barrier to using it in projects like the Kenyan rhino-tracking project they’re running: although the Pi is the base of the project and does all the computational tasks required, they started out having to use a more expensive and more power-hungry camera than the Pi camera board, because the IR filter meant it wasn’t usable at night.

Once the news from Reading Hackspace and the Horniman got out, we were inundated by emails from you, along with comments here on the blog and on the forums, asking for a camera variant with no IR filter. You wanted it for camera effects, for instances where you wanted to be able to see IR beams from remote controls and the like, for low-light photography illuminated by IR, and especially for wildlife photography. Archeologists wanted to take aerial photographs of fields with an IR camera to better see traces of lost buildings and settlements. Some botanists got in touch too: apparently some health problems in trees can be detected early with an IR camera.

Initially we thought it wasn’t going to be something we could do: Sunny, who make the sensor, filter and lens package that’s at the heart of our camera board, did not offer a package without the filter at all. Removing it would mean an extra production line would have to be set up just for us – and they had other worries when we started to talk to them about adding an infrared camera option. They told us they were particularly concerned that users would try to use a camera board without a filter for regular daytime photography, and would be upset at the image quality. (There’s a reason that camera products usually integrate an infrared filter – the world looks a little odd to our eyes with an extra colour added to the visible spectrum.)

We convinced them that you Pi users are a pragmatic and sensible lot, and would not try to replace a regular camera board with a Pi NoIR – the Pi NoIR is a piece of equipment for special circumstances. So Sunny set up an extra line just for us, to produce the Pi NoIR as a special variant. We will be launching Real Soon Now – modules are on their way and we’re aiming for early November – so keep an eye out here for news about release.

RS Components have got their hands on an early prototype, and Andrew Back produced a blog post about using it in timelapse wildlife photography at night, with infrared illumination. You can read it at DesignSpark, RS’s community hub.

Andrew’s garden is a paradise of slugs.

Jon and JamesH would like you to be aware that the red flashes are most likely due to not letting the camera “warm up” sufficiently before taking each picture. Jon says: “Raspistill defaults to 5 second previews before capture which should be enough. If using the “-t” parameter then don’t set it below 2000. The “-tl” parameter is for timelapse, which doesn’t shut the camera down between picture grabs.” (We’re checking the white balance before release all the same, though.)
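Jon’s advice boils down to a couple of command-line flags. As an illustration, here’s a small Python helper that assembles a raspistill timelapse invocation and guards against a too-short `-t`; the helper function and its default filename are our own, not part of raspistill:

```python
def timelapse_command(interval_ms, duration_ms, outfile="img%04d.jpg"):
    """Build a raspistill argv list for timelapse capture.

    Follows Jon's advice above: -tl keeps the camera warm between
    grabs, and -t (total run time) should not be set below 2000 ms,
    or frames may come out with a red cast.
    """
    if duration_ms < 2000:
        raise ValueError("-t below 2000 ms risks red-tinted frames")
    return ["raspistill", "-t", str(duration_ms),
            "-tl", str(interval_ms), "-o", outfile]
```

Pass the result to `subprocess.run()` on the Pi itself; numbered frames can then be stitched into a video with your encoder of choice.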

Let us know if you’re in the market for a Pi NoIR in the comments. We’d love to hear your plans for one! We’re planning to sell it for $25, the same price as a regular camera module. Check back here: we’ll tell you as soon as it’s released.

Turn your Pi into a low-cost HD surveillance cam

via Raspberry Pi

Local government CCTV is awful, and it’s everywhere in the UK. But I’m much happier about surveillance in the hands of private people – it’s a matter of quis custodiet ipsos custodes? (Who watches the watchmen?) – and I’m pleased to see the Raspberry Pi bring the price of networked, motion-sensitive HD surveillance cameras down to a level consumers can afford. Off the shelf, you’re looking at prices in the hundreds of pounds. Use a Pi to make your own HD system, and your setup should come in at under £50 with a bit of shopping around. This is a great use case for the value bundle our distributors are offering at the moment, where a camera board and a Model A Raspberry Pi with an SD card come in at $45.

Christoph Buenger has used a Pi and a camera board as the guts of his project, and, in a stroke of sheer genius, has waterproofed the kit by housing it in one of those fake CCTV shells you can buy to fool burglars. The fakes are head-scratchingly cheap – I just found one that looks pretty convincing on Amazon for £6.24.

Christoph’s camera, snug inside its housing

Christoph has made build instructions and code available so you can set your own camera up. It does more than just film what’s in front of it: he’s added some motion-detection capability to run in the background, so if the camera spots something moving, it’ll start recording for a set period.
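Christoph’s own scripts are linked in his instructions; as a sketch of the general idea, background motion detection by frame differencing fits in a few lines of Python (numpy only – the threshold here is an arbitrary assumption of ours, not a value from Christoph’s setup):

```python
import numpy as np

def motion_detected(prev_frame, frame, threshold=8.0):
    """Return True if the mean absolute pixel difference between two
    greyscale frames (0-255 scale) exceeds `threshold`.

    Cast to a signed type first so the subtraction can't wrap around.
    """
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(diff.mean()) > threshold
```

In a capture loop, you’d compare each new frame against the previous one and start recording for a set period whenever this returns True.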

At the moment, Christoph saves video to a Windows shared folder (you can, of course, save it wherever you like if you’re not a Windows person). The live stream is also available to be viewed online if you configure your local network.

A live (and topical) frame from the camera’s video feed

Christoph’s looking at adding more functionality to the setup. He says:

There are a thousand things you can do with such a surveillance cam basic setup now. How about sending Growl notifications when some motion was detected? This guide explains how to add this functionality easily.

Or you could easily add a temperature-sensor to the cam. It’s only a few bucks and can be integrated very easily.

We’re currently working on integrating the live stream into MediaPortal server so that we can switch to a TV channel to see the live stream from the cam in our office.

If you want extra security, you could also add a battery pack to the camera. Be sure to buy one that is able to charge simultaneously while powering the Raspberry. This would enable you to detect if some bad guy cuts the power strips of your camera and send some alert messages to you (i.e. SMS or email) including the video of the disturber.

Let us know if you set up your own security camera with the Pi. We’d love to see what adaptations you come up with!

Get a Model A and a camera board for $40

via Raspberry Pi

We’ve talked before about how the camera board and the Model A are natural bedfellows. Whether you’re shooting a time lapse video or hollowing out a sweet, innocent teddy bear, the 256MB of RAM on the Model A is easily sufficient to run raspistill and raspivid, and the much lower power consumption gives you a lot more battery life for mobile applications. To allow more of you to have a play with this combination, we’ve got together with our partners to offer the two together for the bargain price of $40.

Model A and camera board – best of friends

UK customers can visit element14 or RS Components (who are also offering a $45 bundle with a 4GB SD card); international customers should be able to find the same bundles on their respective national sites.

Feeder Tweeter

via Raspberry Pi

This is hands down the best bird feeder project we’ve seen yet. I got an email from the folks at Manifold, a creative design agency in San Francisco, this week. One of their developers works from Denver, Colorado, and has been spending some time building the ultimate bird table. It’s autonomous, it’s solar-powered, it feeds, it photographs, it tweets images when a bird comes to feed, and it’s open source.

A PIR (passive infra-red) sensor detects when a bird lands at the table to feed, and triggers the camera. Photographs are then uploaded to Twitter. PIR’s a great choice here because it only responds to warm-body heat; if a leaf blows in front of the assembly, nothing will trigger, but if a toasty-warm little bird stops by for some seed, the sensor will detect it, and set off the camera.

This was not a trivial build. Issues like waterproofing, power constraints, and all those fiddly annoyances you find with outdoor projects had to be dealt with. The prototype (built from the ground up out of bits of wood: no pre-made bird feeders for these guys) took around 25 hours to put together. Here’s a time-lapse video of what happened in the workshop.

The first iteration of the Feeder Tweeter had a few bugs: the webcam in the assembly didn’t offer high enough resolution for decent pictures of the birds, and was swapped out for a Raspberry Pi camera board. But the camera board’s focal depth wasn’t right for this project, so an additional lens was put into the assembly – and then all the camera code had to be changed to reflect the switch, with cracking results: here’s a before-and-after picture.

The PIR sensor was getting false positives from changes in temperature due to the sun on the feeder: an additional motion sensor was added to iron those out. A light sensor found its way into the assembly to stop the camera triggering when there wasn’t enough ambient light for a reasonable photograph. The solar panel positioning wasn’t optimal. And so on and so on – but the bugs have all been stomped now, and the end result is a thing of beauty.
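After all that debugging, the trigger logic amounts to three sensors agreeing before the shutter fires. A sketch of that decision (the function, argument names and light threshold are ours, invented for illustration – the Feeder Tweeter’s actual code is on GitHub):

```python
def should_photograph(pir_active, motion_active, light_level, min_light=30):
    """Decide whether to trigger the camera.

    Mirrors the fixes described above: the PIR alone gave false
    positives from sun-warmed surfaces, so the second motion sensor
    must agree, and the light sensor vetoes shots taken when there
    isn't enough ambient light for a reasonable photograph.
    `min_light` is an arbitrary placeholder on a 0-100 scale.
    """
    return pir_active and motion_active and light_level >= min_light
```

Requiring all three conditions is what stamped out the false triggers from temperature swings and dusk-time noise.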

Read Chad’s account of what they were up to on Manifold’s blog (which has a ton of information on the development of the Feeder Tweeter), and then head to the Feeder Tweeter site itself, where there is an area for developers with a hardware list, wiring diagrams, links to all the code you’ll need on GitHub and much more. And let us know if you decide to make or adapt the Feeder Tweeter for your own use – we’d like to see what you come up with!