Tag Archives: camera board

Watching an endangered Tuatara hatch

via Raspberry Pi

As I type this, Emma is hugging herself and shouting “LOOK AT THE LOVELY BABY!” We believe that every office environment is enriched by biologists.

The little guy/gal in the video above is a Tuatara – and I didn’t have to go to Wikipedia to learn more about them, because Emma is amazingly well-versed in New Zealand’s endemic reptiles. One of her friends works on Tuatara conservation in New Zealand, where the species is endangered. (Emma says, sadly, that we can’t have one of these in the office – they live for more than a hundred years, and we won’t be around to feed it forever.)

tuatara

This video was filmed in a specially prepared, laser-cut incubator, with a Pi NoIR camera, hacked together with a DSLR lens. Over at Hackaday, Warren (what’s your surname, Warren? Let us know, and we’ll add it here), the maker, has put together a detailed how-to. He says:

Ok, so a few weeks ago I was asked to film the hatching of an endangered species of reptile called a Tuatara. It isn’t often that you get a chance to actually be involved in this sort of project, so needless to say I said yup, I am happy to do it. Then I was told what the restrictions were… first problem was they are in an incubator. I was thinking incubator as in the sort we see on TV, with windows and such, so it would be easy to pop a camera on the side, focus through a window and ta-dah, video footage complete, job done – but no! No windows and no light. The space is a temperature and humidity controlled space. So, time to think laterally. I had a friend who used Raspberry Pis and had raved about how cool they were. I had been tinkering with the idea of getting one for home and having a bit of a play.

The results speak for themselves. Thanks Warren; we love it.

Rendering camera images in Minecraft in real time

via Raspberry Pi

Ferran Fabregas worked out a couple of months ago how to render .jpg images in the Minecraft world using Minecraft Pi Edition. Our logo seemed an obvious place to start.

pilogominecraft

And Ferran has just made that good idea an absolutely fantastic idea, by adapting it to render images captured in real time from a Pi Camera.

Here’s his face, in all its pixellated, Minecrafty glory.

minecraftface

This is a really, really simple project to replicate at home – all the code you’ll need is on Ferran’s website, so if you’ve got a Raspberry Pi camera, you’re ready to go. Links to screenshots of your own in the comments below, please!
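
If you’d like to tinker before reading Ferran’s code, the general idea is easy to sketch: grab a frame from the camera, shrink it right down, and set one Minecraft block per pixel, choosing whichever wool colour is closest. Here’s a rough, cut-down Python illustration (not Ferran’s implementation – the tiny wool palette and the in-world coordinates are ours), assuming Minecraft Pi Edition is running and the picamera, Pillow and mcpi libraries are installed:

# Rough sketch: render one Pi Camera snapshot as wool blocks in Minecraft Pi Edition.
# Not Ferran's code -- a cut-down illustration of the same idea.
from picamera import PiCamera
from PIL import Image
from mcpi.minecraft import Minecraft

WIDTH, HEIGHT = 60, 45     # size of the in-game "screen", in blocks
WOOL = 35                  # block id for wool

# A tiny palette of wool data values and rough RGB equivalents
# (white, orange, yellow, green, blue, black).
PALETTE = {0: (235, 235, 235), 1: (230, 125, 55), 4: (195, 180, 30),
           5: (60, 190, 50), 11: (40, 55, 150), 15: (25, 25, 25)}

def nearest_wool(rgb):
    # Return the wool data value whose palette colour is closest to rgb.
    return min(PALETTE, key=lambda d: sum((a - b) ** 2 for a, b in zip(rgb, PALETTE[d])))

camera = PiCamera()
camera.capture('/tmp/frame.jpg')
camera.close()

img = Image.open('/tmp/frame.jpg').convert('RGB').resize((WIDTH, HEIGHT))

mc = Minecraft.create()
pos = mc.player.getTilePos()
for x in range(WIDTH):
    for y in range(HEIGHT):
        colour = img.getpixel((x, HEIGHT - 1 - y))   # flip vertically so the image appears upright
        mc.setBlock(pos.x + x + 2, pos.y + y, pos.z, WOOL, nearest_wool(colour))

Run something like this in a loop, with a lower capture resolution, and you’re most of the way to the real-time version.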

Slimline point-and-shoot camera from Ben Heck

via Raspberry Pi

Ben Heck, King of the Makers, has made the prettiest point-and-shoot camera build we’ve seen done with a Raspberry Pi. The secret is a bit of desoldering: he’s depopulated the Pi he uses to slim down the profile of the board, yanking nearly everything except the SoC – the processor-and-memory package in the middle of the Pi. (If we were you, Ben, we’d have used a Model B+ so you didn’t have that SD card sticking out.)

Our own Ben Nuttall, who affects to be totally unimpressed by everything, was overheard saying: “That’s a very cool camera.” There is no higher praise.

There’s no writeup, but the video is very thorough and walks you through everything you need to know, including a parts list (that nice little TFT touchscreen, which is the thing I anticipate most of you being interested in, is from Adafruit). Let us know what you’d do differently, and if you plan on making something similar yourself, in the comments!

A digital making community for wildlife: Naturebytes camera traps

via Raspberry Pi

Start-up Naturebytes hopes their 3D printed Raspberry Pi camera trap (a camera triggered by the presence of animals) will be the beginning of a very special community of makers.

Supported by the Raspberry Pi Foundation’s Education Fund and Nesta, Naturebytes aims to establish a digital making community for wildlife with a very important purpose. Their gadgets, creations and maker kits (and, hopefully, those of others who get involved) will be put to use collecting real data for conservation and wildlife research projects – and to kick it all off, they took their prototype 3D printed birdbox-style camera trap kit to family festival Camp Bestival to see what everyone thought.

NatureBytes camera trap prototype

If you were one of the lucky bunch to enjoy this year’s Camp Bestival, you’d have seen them over in the Science Tent with a colourful collection of their camera trap enclosures. The enclosure provides a snug home for a Raspberry Pi, Pi camera module, passive infrared sensor (PIR sensor), UBEC (a device used to regulate the power) and battery bank (they have plans to add external power capabilities, including solar, but for now they’re using eight trusty AA batteries to power the trap).

A colourful collection of camera trap enclosures

The PIR sensor does the job of detecting any wildlife passing by, and they’re using Python to control the camera module, which in turn snaps photos to the SD card. If you’re looking for nocturnal animals then the Pi NoIR could be used instead, with a bank of infrared LEDs to provide illumination.
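
Naturebytes’ own code isn’t shown here, but the PIR-plus-Python-plus-camera pattern they describe boils down to something like this minimal sketch (the pin number and file path are placeholders for illustration), using the RPi.GPIO and picamera libraries:

# Minimal PIR-triggered camera trap sketch (not Naturebytes' actual code).
# Assumes a PIR sensor output wired to BCM pin 17 -- adjust for your own wiring.
import time
import RPi.GPIO as GPIO
from picamera import PiCamera

PIR_PIN = 17

GPIO.setmode(GPIO.BCM)
GPIO.setup(PIR_PIN, GPIO.IN)

camera = PiCamera()

try:
    while True:
        if GPIO.input(PIR_PIN):                        # movement detected
            filename = time.strftime('/home/pi/trap-%Y%m%d-%H%M%S.jpg')
            camera.capture(filename)                   # photo lands on the SD card
            print('Captured', filename)
            time.sleep(5)                              # crude debounce so we don't fill the card
        time.sleep(0.1)
finally:
    GPIO.cleanup()
    camera.close()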

Naturebytes says:

When you’re aiming to create maker kits for all manner of ages, it’s useful to try out your masterpiece with actual users to see how they found the challenge.

Explaining how the camera trap enclosures are printed

Camp Bestival festival-goers assembling camera traps

With screwdrivers at the ready, teams of festival-goers first took a look at one of our camera enclosures being printed on an Ultimaker before everyone sat down to assemble their own trap ready for a Blue Peter-style “Here’s one I made earlier” photo opportunity (we duct-taped a working camera trap to the back of a large TV so everyone could be captured in an image).

In fact, using the cam.start_preview() Python function we could output a few seconds of video when the PIR sensor was triggered, so everyone could watch.

Naturebytes duct-taped a working camera trap to the back of a large TV so everyone could see a camera trap in action

Our grand plan is to support the upcoming Naturebytes community of digital makers by accepting images from thousands of Naturebytes camera traps out in gardens, schools or wildlife reserves to the Naturebytes website, so we can share them with active conservation projects. We could, for example, be looking for hedgehogs to monitor their decline, and push the images you’ve taken of hedgehogs visiting your garden directly to wildlife groups on the ground who want the cold hard facts as to how many can be found in certain areas.

Job done, Camp Bestival!

Keep your eyes peeled – Naturebytes is powering up for launch very soon!

An image-processing robot for RoboCup Junior

via Raspberry Pi

Helen: Today we’re delighted to have a guest post from 17-year-old student Arne Baeyens, aka Robotanicus, who has form in designing prize-winning robots. His latest, designed for the line-following challenge of a local competition, is rather impressive. Over to Arne…

Two months ago, on the 24th of May, I participated in the RoboCup Junior Flanders competition, in the ‘Advanced Rescue’ category. With a Raspberry Pi, of course – I used a Model B running Raspbian. Instead of using reflectance sensors to determine the position of the line, I used the Pi Camera to capture a video stream and applied computer vision algorithms to follow the line. My robot wasn’t the fastest, but I obtained third place.

A short video of the robot in action:

In this category of the RCJ competition the robot has to follow a black line and avoid obstacles. The T-junctions are marked by green fields to indicate the shortest trajectory. The final goal is to push a can out of the green field.


This is not my first robot for the RCJ competition. In 2013 I won the competition with a robot that used the Dwengo board as its control unit, with reflectance and home-made colour sensors. The Dwengo board uses the popular PIC18F4550 microcontroller and has, amongst other functionality, a motor driver, a 2×16-character screen and a big 40-pin extension connector. The Dwengo board is, like the RPi, designed for educational purposes, with projects in Argentina and India.

As the Dwengo board is a good companion for the Raspberry Pi, I decided to combine both boards in my new robot. While the Pi does high-level image processing, the microcontroller controls the robot.

The Raspberry Pi was programmed in C++ using the OpenCV libraries, the wiringPi library (from Gordon Henderson) and the RaspiCam OpenCV interface library (from Pierre Raufast, improved by Emil Valkov). I overclocked the Pi to 1GHz to get a frame rate of 12 to 14 fps.

Using a camera has some big advantages: first of all, you don’t have a bunch of sensors mounted close to the ground, where they interfere with obstacles and are disturbed by irregularities. The second benefit is that you can see what is in front of the robot without having to build a swinging sensor arm. So you have information not only about the actual position of the robot above the line, but also about the position of the line in front, which allows you to calculate the curvature of the line. In short, following the line is much more controllable. And by using edge detection rather than greyscale thresholding, the program is virtually immune to shadows and grey zones in the image.

If the line had had fewer hairpin bends, and I had had a bit more time, I would have implemented a speed-regulating algorithm based on the curvature of the line. This is surely something that would improve the performance of the robot.

I also used the camera to detect and track the green direction fields at a T-junction where the robot has to take the right direction. I used a simple colour blob tracking algorithm for this.

A short video of what the robot thinks:

Please note that in reality the robot follows the line a little more slowly.

Different steps of the image processing

Image acquired by the camera (with some lines and points already added):
Image acquired by the camera

The RPi converts the colour image to a greyscale image. Then the pixel values on a horizontal line in the image are extracted and put into an array. This array is visualized by putting the values in a graph (also with OpenCV):
Visualizing pixel values along a line

From the first array, a second is calculated by taking the difference between successive values. In other words, we calculate the derivative:
Calculating the derivative

A loop then searches for the highest and lowest values in the array. To obtain the relative horizontal position of the line, the two position values—on the horizontal x axis in the graphed image—are averaged. The position is kept in memory for the next horizontal scan with a new image. This means that the scan line does not have to span the whole image, but only about a third of it; the scan line moves horizontally, with its centre roughly above the line.
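
Arne’s implementation is in C++, but the scan-line trick reads nicely in a few lines of Python with OpenCV and NumPy. This is an illustrative sketch of the method described above, not his code; the file name and the choice of scan row are arbitrary:

# Sketch of the scan-line method described above (the robot itself runs C++; this is illustrative Python).
import cv2
import numpy as np

frame = cv2.imread('frame.jpg')                 # stand-in for a frame from the Pi camera
grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

row_y = int(grey.shape[0] * 0.75)               # a horizontal scan line near the bottom of the image
row = grey[row_y, :].astype(np.int16)           # pixel values along the line

diff = np.diff(row)                             # derivative: light-to-dark and dark-to-light edges
left_edge = int(np.argmin(diff))                # strongest light-to-dark transition (entering the black line)
right_edge = int(np.argmax(diff))               # strongest dark-to-light transition (leaving it)

line_x = (left_edge + right_edge) / 2.0         # relative horizontal position of the black line
print('Line centre at x =', line_x)

Remembering the previous centre and only differencing a window of the row around it, as described above, is just a matter of slicing row before calling np.diff().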

But this is not enough for accurate tracking. From the calculated line position, circles following the line are constructed, each using the same method (but with many more trigonometry calculations, as the scan lines are curved). For the second circle, not only the line position but also the line angle is used. Thanks to using functions, adding a circle is a matter of two short lines of code.

The colour tracking is done by colour conversion to HSV, thresholding and then blob tracking, as explained in this excellent video. The colour tracking slows the line following down by a few fps, but this is acceptable.
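
For the green direction fields, an equivalent Python/OpenCV sketch might look like this (illustrative only – the HSV bounds for ‘green’ are guesses that would need tuning to the competition lighting):

# Sketch of HSV thresholding plus simple blob tracking for the green direction fields.
import cv2
import numpy as np

frame = cv2.imread('frame.jpg')
hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

# Rough green range on OpenCV's 0-179 hue scale -- tune for your own lighting.
mask = cv2.inRange(hsv, np.array([40, 80, 80]), np.array([80, 255, 255]))

# Use image moments of the mask as a cheap blob tracker: the centroid of all green pixels.
m = cv2.moments(mask)
if m['m00'] > 0:
    cx, cy = m['m10'] / m['m00'], m['m01'] / m['m00']
    print('Green field centred at', (int(cx), int(cy)))
else:
    print('No green field in view')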

The HSV image and the thresholded image

As seen in the video, all the scan lines and some info points are then plotted on the input image, so we can see what the robot ‘thinks’.

And then?

After the Raspberry Pi has found the line, it sends the position data and commands at 115.2 kbps over the hardware serial port to the Dwengo microcontroller board. The Dwengo board does some additional calculations, like taking the square root of the proportional error and squaring the ‘integral error’ (curvature of the line). I also used a serial interrupt and made the serial port as bug-free as possible by receiving each character separately. Thus, the program does not wait for the next character while in the serial interrupt.

The Dwengo board sends an answer character to control the data stream. The microcontroller also takes the analogue input of the SHARP IR long range sensor to detect the obstacles and scan for the container.

In short, the microcontroller is controlling the robot and the Raspberry Pi does an excellent job by running the CPU intensive line following program.
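
To give a flavour of the serial link (the real code is C++ on both sides; the ‘P<position>’ message format and the ‘A’ answer character below are invented purely for illustration), the Pi end could be sketched with pyserial like this:

# Illustrative sketch of the Pi-to-Dwengo serial link: send a line position, then wait
# for the board's answer character before sending the next one. The message format is made up.
import serial

link = serial.Serial('/dev/ttyAMA0', 115200, timeout=0.1)   # hardware UART on the original Pi

def send_position(line_x):
    link.write(('P%d\n' % int(line_x)).encode('ascii'))
    ack = link.read(1)                                       # flow control: one answer character
    return ack == b'A'

if send_position(123):
    print('Dwengo board acknowledged')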

There’s a post on the forum with a more detailed technical explanation – but you will find the most important steps below.

Electrical wiring
Both devices are interconnected by two small boards—one attaches to the RPi and the other to the Dwengo board—joined by a right-angle header connection. The first board does the logic-level conversion with a few resistors (the Dwengo board runs on 5V); the latter also has two DC jacks with diodes in parallel for power input to the RPi. To regulate the power to the Pi, I used a Turnigy UBEC that delivers a stable 5.25V and feeds it into the Pi through the micro USB connector. This gives a bit more protection to the sensitive Pi. As the camera image was a bit distorted, I added a 470uF capacitor to smooth things out; this helped. Even though the whole Pi got hot, the UBEC stayed cold. The power input was between 600 and 700mA at around 8.2 volts.

Grippers
Last year, I almost missed first place, as the robot only just pushed the can out of the field. Not a second time! With this in mind, I constructed two 14cm-long arms that can be swung open by two 9g servos. With the two grippers open, the robot spans almost 40 centimetres. Despite this, the robot managed—to everyone’s annoyance—‘to take its time before doing its job’, as can be seen in the video.

Building the robot platform
To build the robot platform I followed the same approach as the year before (link, in Dutch). I made a design in SketchUp, then converted it to a 2D vector drawing and finally laser-cut it at FabLab Leuven. However, the new robot platform is completely different in design. Last year, I made a ‘riding box’ by taking almost the maximum dimensions and mounting the electronics somewhere on or in it.

This time, I took a different approach. Instead of using an outer shell (like insects have), I made a design that supports and covers the parts only where necessary. The result of this is not only that the robot looks much better, but also that the different components are much easier to mount and that there is more space for extensions and extra sensors. The design files can be found here: Robot RoboCup Junior – FabLab Leuven.

3D renders in SketchUp:

RCJ_Robot_2014_render3 RCJ_Robot_2014_render5

On the day of the RCJ competition I had some bad luck, as there wasn’t enough light in the competition room. The shutter time of the camera became much longer. As a consequence, the robot had much more difficulty following sharp bends in the line. However, this problem did not affect the final outcome of the competition.

Maybe I should have mounted some LEDs to illuminate the line…

Vectors from coarse motion estimation

via Raspberry Pi

Liz: Gordon Hollingworth, our Director of Software, has been pointing the camera board at things, looking at dots on a screen, and cackling a lot over the last couple of weeks. We asked him what he was doing, so he wrote this for me. Thanks Gordon!

The Raspberry Pi is based on a BCM2835 System on a Chip (SoC), which was originally developed to do lots of media acceleration for mobile phones. Mobile phone media systems tend to follow behind desktop systems, but are far more energy efficient. You can see this efficiency at work in your Raspberry Pi: to decode H264 video on a standard Intel desktop processor requires GHz of processing capability, and many (30-40) Watts of power; whereas the BCM2835 on your Raspberry Pi can decode full 1080p30 video at a clock rate of 250MHz, and only burn 200mW.

Gordon

This amazing hardware enables us to do things like video encoding and decoding in real time without doing much work at all on the processor (all the work is done on the GPU, leaving the ARM free to shuffle bits around!). It also means we have access to very interesting bits of the encode pipeline that you’d otherwise not be able to look at.

One of the most interesting of these parts is the motion estimation block in the H264 encoder. To encode video, one of the things the hardware does is to compare the current frame with the previous (or a fixed) reference frame, and work out where the current macroblock (16×16 pixels) best matches the reference frame. It then outputs a set of vectors which tell you where the block came from – i.e. a measure of the motion in the image.

In general, this is the mechanism used within the application motion. It compares the image on the screen with the previous image (or a long-term reference), and uses the information to trigger events, like recording the video, writing an image to disk, or triggering an alarm. Unfortunately, at this resolution it takes a huge amount of processing to achieve this in the pixel domain – which is silly if the hardware has already done all the hard work for you!

So over the last few weeks I’ve been trying to get the vectors out of the video encoder for you, and the attached animated gif shows you the results of that work. What you are seeing is the magnitude of the vector for each 16×16 macroblock – equivalent to the speed at which it is moving! The information comes out of the encoder as side information (it can be enabled in raspivid with the -x flag). It is one four-byte record per macroblock, i.e. ((mb_width+1) × mb_height) × 4 bytes per frame, so for 1080p30 that is 121 × 68 × 4 ≈ 32 kilobytes per frame. And here are the results. (If you think you can guess what the movement you’re looking at here represents, let us know in the comments.)

blamenuttall
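
If you fancy poking at the vectors yourself, you can dump them to a file with something like raspivid -o video.h264 -x motion.dat -t 10000 and pick them apart offline. Here’s a rough NumPy sketch, assuming a 1080p recording and the four-bytes-per-macroblock layout described above (a signed byte for each of the x and y components plus a 16-bit SAD value, as documented for the picamera library):

# Rough sketch: read raspivid's motion-vector side data and print the mean vector
# magnitude per frame. Assumes 1920x1080 video, i.e. (120+1) x 68 macroblock records per frame.
import numpy as np

MB_W, MB_H = 120 + 1, 68                 # macroblock columns (plus one padding column) and rows at 1080p
record = np.dtype([('x', 'i1'), ('y', 'i1'), ('sad', 'u2')])

data = np.fromfile('motion.dat', dtype=record)
frames = data.reshape(-1, MB_H, MB_W)    # one (68, 121) grid of vectors per frame

for i, frame in enumerate(frames):
    magnitude = np.sqrt(frame['x'].astype(np.float32) ** 2 + frame['y'].astype(np.float32) ** 2)
    print('frame %d: mean motion %.2f' % (i, magnitude.mean()))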

Since this represents such a small amount of data, it can be processed very easily, which should lead to 30fps motion identification and object tracking with very little actual work!

Go forth and track your motion!

Books, the digitising and text-to-speechifying thereof

via Raspberry Pi

A couple of books projects for you today. One is simple, practical and of great use to the visually impaired. The other is over-complicated and a little bit nuts; nonetheless, we think it’s rather wonderful, and actually kind of useful if you’ve got a lot of patience.

We’ll start with the simple and practical one first: Kolibre is a Finnish non-profit making open-source audiobook software so you can build a reader with very simple controls. This is Vadelma, an internet-enabled audio e-reader. It’s very easy to put together at home with a Raspberry Pi: you can find full instructions and discussion of the project at Kolibre’s website.

The overriding problem with automated audio e-readers is always the quality of the text-to-speech voice; it’s the reason that books recorded with real, live actors reading them are currently so much more popular. But those are expensive, and it’s likely we’ll see innovations in text-to-speech as natural language processing research progresses (it’s challenging: people have been hammering away at this problem for half a century), and as this stuff becomes easier to automate and more widespread.

How easy is automation? Well, the good people at Dexter Industries decided that what the Pi community (which, you’ll have noticed, has a distinct crossover with the LEGO community) really needed was a robot that could use optical character recognition (OCR) to digitise the text of a book, Google Books style. They got that up and running pretty quickly with a Pi and a camera module, using the text on a Kindle as a proof of concept.
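
The OCR proof of concept needs surprisingly little code if you lean on existing tools. Here’s a bare-bones sketch (not Dexter Industries’ code) that snaps a page with the camera module and hands it to the Tesseract OCR engine – assuming the picamera and pytesseract libraries and the tesseract-ocr package are installed:

# Bare-bones page digitiser sketch: snap a page with the camera module and OCR it with Tesseract.
from picamera import PiCamera
from PIL import Image
import pytesseract

camera = PiCamera()
camera.resolution = (2592, 1944)            # full stills resolution, so the text stays legible
camera.capture('/tmp/page.jpg')
camera.close()

text = pytesseract.image_to_string(Image.open('/tmp/page.jpg'))
print(text)
# Pipe the result into a speech synthesiser such as espeak and you have the reading half, too.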

But if you’re that far along, why stop there? The Dexter team went on to add LEGO features, until they ended up with a robot capable of wrangling real paper books, down to turning pages with one of those rubber wheels when the device has finished scanning the current text.

So there you have it: a Google Books project you can make at home, and a machine you can make to read the books to you when you’re done. If you want to read more about what Dexter Industries did, they’ve made a comprehensive writeup available at Makezine. Let us know how you get on if you decide to reduce your own library to bits.

Timelapse tutorial from Carrie Anne’s Geek Gurl Diaries

via Raspberry Pi

Even though Carrie Anne Philbin is working here at Pi Towers now, she’s still carrying on with the Geek Gurl Diaries YouTube channel that she set up before she joined us – for which we’re all profoundly grateful, because her videos are some of the best tutorials we’ve seen.

Here’s the latest from Carrie Anne: a tutorial on setting up the camera board, making timelapse video, and creating animations.
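
If you want to experiment alongside the video, the timelapse part can be done in a handful of lines of Python with the picamera library. A minimal sketch (the interval, frame count and output path are arbitrary, and the output directory must already exist):

# Minimal timelapse sketch: one frame every ten seconds, 360 frames in total.
import time
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (1920, 1080)

for i, filename in enumerate(
        camera.capture_continuous('/home/pi/timelapse/frame{counter:04d}.jpg')):
    print('Captured', filename)
    if i >= 359:                 # stop after 360 frames
        break
    time.sleep(10)               # one frame every ten seconds

camera.close()

The captured frames can then be stitched into a video with a tool such as ffmpeg or avconv.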

Are you a primary or secondary teacher in the UK? Would you like two days of free CPD from Carrie Anne and the rest of our superstar education team? You’ll get to come here to Pi Towers, meet all of us, and learn about the many ways you can use the Raspberry Pi in the classroom. Apply here - we’d love to hear from you.

New camera mode released

via Raspberry Pi

Liz: you’ll notice that this post has no pictures or video. That’s because we’d like you to make some for us, using the new camera mode. Take some 90fps video using our camera board and the information below, slow it down to 30fps and send us a link: if yours is particularly splendid, we’ll feature it here and on the front page. Over to JamesH!

I asked for video, you were forthcoming. Here are two which arrived in the few hours after we posted this: the first video, with the juggling clubs, is from Tobias Huebner; the second, with the bouncing balls, is from JBAnovling. JamesH’s discussion about what’s going on happens after the pretty. Enjoy!

When the Raspberry Pi camera was released, the eagle-eyed among you noticed that the camera hardware itself can support various high frame rate modes, but that the software could ‘only’ manage 30 frames per second in its high-definition video mode.

There is no hardware limitation in the Raspberry Pi itself: it’s quite capable of handling these high frame rate modes, but it does require a certain amount of effort to work out these new ‘modes’ inside the camera software. At the original release of the camera, two modes were provided: a stills capture mode, which offers the full resolution of the sensor (2592×1944), and a 1080p video mode (1920×1080). Those same eagle-eyed people will see that these modes have different aspect ratios – the ratio of width to height. Stills output is 4:3 (like old-school TV), video is 16:9 (widescreen).

This creates a problem when previewing stills captures, since the preview uses the video mode so it can run at 30 frames per second (fps) – not only is the aspect ratio of the preview different, but because the video mode ‘crops’ the sensor (i.e. takes a 1920×1080 window from the centre), the field of view in preview mode is very different from the actual capture.

We had some work to do to develop new modes for high frame rates, and also to fix the stills preview mode so that it matches the capture mode.

So now, finally, some very helpful chaps at Broadcom, with some help from Omnivision, the sensor manufacturer, have found some spare time to sort out these modes, and not just that but to add some extra goodness while they were at it. (Liz interjects: The Raspberry Pi Foundation is not part of Broadcom – we’re a customer of theirs – but we’ve got a good relationship and the Foundation’s really grateful for the volunteer help that some of the people at Broadcom offer us from time to time. You guys rock: thank you!)

The result is that we now have a set of modes, as follows:

  • 2592×1944 1-15fps, video or stills mode, Full sensor full FOV, default stills capture
  • 1920×1080 1-30fps, video mode, 1080p30 cropped
  • 1296×972 1-42fps, video mode, 4:3 aspect binned full FOV. Used for stills preview in raspistill.
  • 1296×730 1-49fps, video mode, 16:9 aspect, binned, full FOV (width), used for 720p
  • 640×480 42.1-60fps, video mode, up to VGAp60 binned
  • 640×480 60.1-90fps, video mode, up to VGAp90 binned

I’ve introduced a new word in that list: binned. This is how we can get high frame rates. Binning means combining pixels from the sensor together in a ‘bin’ in the analogue domain. As well as reducing the amount of data, this can also improve low-light performance, as it averages out sensor ‘noise’ in the absence of the quantisation noise introduced by the analogue-to-digital converters (ADCs), which are the bits of electronics in the sensor that convert the analogue information created by incoming photons into digital numbers.

So if we do a 2×2 ‘bin’ on the sensor, it only sends a quarter (2×2 = 4 pixels merged into one = one quarter!) of the amount of data per frame to the Raspberry Pi. This means we can quadruple (approximately – there are some other issues at play) the frame rate for the same amount of data! So a simple 2×2 bin theoretically means quadruple the frame rate, but at half the X and Y resolution. This is how the 1296×972 mode works – it’s exactly a 2×2 binned mode, so it’s still 4:3 ratio, uses the whole sensor field of view, and makes a perfect preview mode for stills capture.

We also have a very similar mode, which is 1296×730. This is used for 720p video recording (the sensor image is scaled by the GPU to 1280×720). This is a 2×2 binned mode with an additional crop, which also means a slightly increased frame rate as there is less data to transfer.

Now by reducing the resolution output by the sensor even further and by using ‘skipping’ of pixels in combination with binning, we can get even higher frame rates, and this is how the high speed 640×480 VGA modes work. So, the fastest mode is now VGA resolution at 90 frames per second – three times the frame rate of 1080p30.

So, how do we use these new modes?

The demo applications raspistill and raspivid will already work with the new modes. You can specify the resolution you need and the frame rate, and the correct mode will be chosen. You will need to get the newest GPU firmware using sudo rpi-update, which contains all these shiny new modes.

One thing to note: the system will always try to run at the frame rate specified in preference to resolution. Therefore if you specify a high rate at a resolution it cannot manage, it will use a low resolution mode to achieve the frame rate and upscale to the requested size – upscaling rarely looks good. It may also be too fast for the video encoder, so some of the extra frames may be skipped. So always ensure the resolution you specify can achieve the required frame rate to get the best results.

So, a quick example: to record a 10-second VGA clip at 90fps:

raspivid -w 640 -h 480 -fps 90 -t 10000 -o test90fps.h264
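
If you’d rather drive it from Python, a recent build of the picamera library can do the same thing (assuming you’ve run rpi-update and updated picamera so it knows about the new modes); a quick sketch:

# The same 10-second, 90fps VGA recording, driven from Python with the picamera library.
from picamera import PiCamera

camera = PiCamera()
camera.resolution = (640, 480)
camera.framerate = 90
camera.start_recording('test90fps.h264')
camera.wait_recording(10)     # record for ten seconds, raising an error if recording fails
camera.stop_recording()
camera.close()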

There have also been minor changes to the V4L2 driver to support these new modes. These should be included when you do the rpi-update to get the new GPU firmware.

The V4L2 driver supports the new modes too. Just using the normal requests, you can now ask for up to 90fps. So doing the same streaming of VGA at 90fps to H264 would be the following set of v4l2-ctl commands:

v4l2-ctl -p 90
v4l2-ctl -v width=640,height=480,pixelformat=H264
v4l2-ctl --stream-mmap=3 --stream-count=900 --stream-to=test90fps.h264

There are a few provisos that you will need to consider when using the faster modes, especially with the V4L driver.

  • They will increase the load on the ARM quite significantly, as there will be more callbacks per second. This may have unpredictable effects on V4L applications, which may not be able to keep up.
  • The MJPEG codec doesn’t cope above about 720p40 – it will start dropping frames, and above 45fps it can lock things up solid. You have been warned.
  • H264 will keep up quite happily up to 720p49, or VGA at 90fps.

That said, most people should find no problem with these new features, so a big thank you must go to Dave Stevenson and Naush Patuck at Broadcom for finding the time to implement them! Also, thanks to Omnivision for their continued support.

GPS-tracking helmet cam

via Raspberry Pi

Martin O’Hanlon’s a familiar name in these parts, especially for fans of Minecraft: his repository of Pi Minecraft tricks and tutorials is one of our favourite resources. But Martin’s not all about magicking Menger-Sierpinski Sponges into the Minecraft universe: he does wonderful stuff with hardware and the Raspberry Pi too. Here’s some footage from his latest:

What you’re looking at here is something we haven’t seen before: camera footage with a GPS overlay, showing the route Martin has skied and his current speed. (Gordon, who has his own helmet cam hack, is quivering with envy.) Martin’s setup, like all the best Raspberry Pi hacks, also involves Tupperware. It’s a one-button, one-LED design, so it’s as easy as possible to use when you’re wearing ski gloves.

Work in progress

You can find everything you’ll need to construct your own at Martin’s Stuff about Code; he’s also done a very detailed writeup of the design process and included plenty of construction tips, along with the usual code and parts list. Thanks Martin!
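
If you just want a taste of the GPS-overlay trick before digging into Martin’s write-up, here’s a rough sketch of one way to do it (not Martin’s code): read NMEA sentences from a serial GPS and stamp the current speed onto the recording with picamera’s annotate_text. The device path, baud rate and resolution are assumptions you’d adjust for your own hardware.

# Rough sketch: overlay current GPS speed on a recording using picamera's text annotation.
# Assumes a serial NMEA GPS on /dev/ttyUSB0; parses the $GPRMC sentence for speed in knots.
import serial
from picamera import PiCamera

gps = serial.Serial('/dev/ttyUSB0', 9600, timeout=1)

camera = PiCamera()
camera.resolution = (1280, 720)
camera.start_recording('run.h264')

try:
    while True:
        sentence = gps.readline().decode('ascii', errors='ignore').strip()
        if sentence.startswith('$GPRMC'):
            fields = sentence.split(',')
            if fields[2] == 'A' and fields[7]:            # 'A' means a valid fix
                kmh = float(fields[7]) * 1.852            # knots to km/h
                camera.annotate_text = '%.1f km/h' % kmh
        camera.wait_recording(0.2)
finally:
    camera.stop_recording()
    camera.close()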

Touchscreen point-and-shoot, from Adafruit

via Raspberry Pi

LadyAda from Adafruit is one of my very favourite people. We have a tradition of spending at least one evening eating Korean barbecue whenever I visit New York. We have told each other many secrets over bowls of fizzy fermented rice beverage, posed for photographs in front of plastic meats, been filmed pointing at electronics for the New York Times, and behaved very badly together in Pinkberry in September. LadyAda is the perfect combination of super-smart hacker, pink hair and business ninja; her cat Mosfet likes to Skype transatlantically with the Raspberry Pi cat, Mooncake (at least I think that their intense ignoring of each other constitutes “liking”); and we are incredibly fortunate that she saw the Pi and instantly understood what we were trying to do back in 2011. Here she is on the cover of the MagPi. (Click the image to visit the MagPi website, where you can download the issue for free.)

Her business, Adafruit, which employs an army of hackers and makers, does wonderful things with the Pi. They’ve been incredibly helpful to us in getting the word about Raspberry Pi and our educational mission out in North America. Adafruit not only stocks the Raspberry Pi and a whole warehouse-full of compatible electronics; the team also creates some amazing Raspberry Pi add-ons, along with projects and tutorials.

This is Adafruit’s latest Pi project, and it blew our minds.

All the parts you’ll need to create your own point-and-shoot camera using the Raspberry Pi, a Raspberry Pi camera board, and a little touch-screen TFT add-on board that Adafruit have made especially for the Pi are available from Adafruit (they ship worldwide and are super-friendly). You can also find out how to send your photos to another computer over WiFi, or using Dropbox. As the Adafruit team says:

This isn’t likely to replace your digital camera (or even phone-cam) anytime soon…it’s a simplistic learning exercise and not a polished consumer item…but as the code is open source, you or others might customize it into something your regular camera can’t do.

As always, full instructions on making your own are on the learning section of Adafruit’s website, with a parts list, comprehensive setup instructions, and much more.

Adafruit have been especially prolific this week: we’ll have another project from them to show you in a few days. Thanks to LadyAda, PT, and especially to Phillip Burgess, who engineered this camera project.

Twitter-triggered photobooth

via Raspberry Pi

A guest post today: I’m just off a plane and can barely string a sentence together. Thanks so much to all the progressive maths teachers we met at the Wolfram conference in New York this week; we’re looking forward to finding out what your pupils do with Mathematica from now on!

Over to Adam Kemény, from photobot.co in Hove, where he spends the day making robotic photobooths.

This summer Photobot.Co Ltd built what we believe to be the world’s first Twitter-triggered photobooth. Its first outing, at London Fashion Week for The Body Shop, allowed the fashion world to create unique portraits of themselves which were then delivered straight to their own mobile devices.

In February we took our talking robotic photobooth, Photobot, to London Fashion Week for The Body Shop, to entertain the media and VIPs. We saw that almost every photostrip Photobot printed was quickly snapped with smartphones before being shared to Twitter and Facebook, and that resonated with an idea we’d had for a photobooth that used Twitter as a trigger, rather than buttons or coins.

When The Body Shop approached us to create something new and fun for September’s Fashion Week we pitched a ‘Magic Mirror’ photobooth concept that would allow a Fashion Week attendee to quickly share their personal style.

The first Magic Mirror design, in SketchUp

By simply sending a tweet to the booth’s Twitter account, an attendee would prompt the Magic Mirror to greet them via a hidden display before taking their photo. The resulting image would then be tweeted back to them as well as being shared to a curated gallery.

The Magic Mirror, ready for action

Exploring the concept a little further led us to realise that space constraints would mean that, in order to capture a full-length portrait, we’d need to look at a multi-camera setup. We decided to take inspiration from the fragmented portrait concept that Kevin Meredith (aka Lomokev) developed for his work in Brighton Source magazine – and began experimenting with an increasing number of cameras. Raspberry Pis, with their camera module, soon emerged as a good candidate for the cameras due to their image quality, ease of networked control and price.

Test rig, firing fifteen cameras

We decided to give the booth a more impactful presence by growing it into a winged dressing mirror with three panels. This angled arrangement would allow the subject to see themselves from several perspectives at once and, as each mirrored panel would contain a five camera array, they’d be photographed by a total of 15 RPi cameras simultaneously. The booth would then composite the 15 photographs into one stylised portrait before tweeting it back to the subject.

Final hardware

As The Body Shop was using the Magic Mirror to promote a new range of makeup called Colour Crush, graphics on the booth asked the question “What is the main #colour that you are wearing today?”. When a Fashion Week attendee replied to this question (via Twitter), the Magic Mirror would be triggered. Our software scanned tweets to the booth for a mention of any of a hundred colours and sent a personalised reply tweet based on what colour the attendee was wearing. For example, attendees tweeting that they wore the colour ‘black’ would have their photos taken before receiving a tweet from the booth that said “Pucker up for a Moonlight Kiss mwah! Love, @TheBodyShopUK #ColourCrush” along with their composited portrait.
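
Photobot.Co haven’t published their code, but the trigger mechanism can be approximated with a Twitter library such as tweepy. The sketch below is a guess at the general shape using tweepy’s 3.x-era API – the credentials, the colour-to-product mapping and the polling interval are all made up for illustration:

# Illustrative Twitter-triggered camera sketch (not Photobot.Co's code).
# Assumes tweepy 3.x and picamera; fill in your own API keys.
import time
import tweepy
from picamera import PiCamera

COLOURS = {'black': 'Moonlight Kiss', 'red': 'Red Siren', 'pink': 'Pink Crush'}  # invented mapping

auth = tweepy.OAuthHandler('CONSUMER_KEY', 'CONSUMER_SECRET')
auth.set_access_token('ACCESS_TOKEN', 'ACCESS_SECRET')
api = tweepy.API(auth)

camera = PiCamera()
last_seen = None

while True:
    kwargs = {'since_id': last_seen} if last_seen else {}
    for tweet in api.mentions_timeline(**kwargs):
        last_seen = max(last_seen or tweet.id, tweet.id)
        colour = next((c for c in COLOURS if c in tweet.text.lower()), None)
        if colour:
            camera.capture('/tmp/portrait.jpg')
            reply = '@%s Pucker up for a %s mwah! #ColourCrush' % (
                tweet.user.screen_name, COLOURS[colour])
            api.update_with_media('/tmp/portrait.jpg', status=reply,
                                  in_reply_to_status_id=tweet.id)
    time.sleep(15)               # poll for new mentions every 15 seconds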

The final, eye-searing result!

The Magic Mirror was a challenge to build, mainly due to the complications of getting so many tiny cameras to align in a useful way, but our wonderful developer created a GUI that allowed far easier configuration of the 15 photos onto the composited final image, saving the day and my sanity. The booth was a hit, and we’re always on the lookout for other creative uses for it, so we’d welcome any contact from potential collaborators or clients.

 

What’s that blue thing doing here?

via Raspberry Pi

So your new Pi NoIR has plopped through the letterbox, and you’ve unpacked it. There’s a little square of blue gel in there. What’s it for?

Thanks to Andrew Back for the picture! Click to read more about his adventures with Pi NoIR on the Designspark blog.

If you’ve been keenly reading our posts about why we developed an infrared camera board, you’ll have noticed that we mentioned a lot of interest from botanists, who use infrared photography to study the health of trees. We started to read up about the work, found it absolutely fascinating, and thought you’d like to get in on it too.

A short biology lesson follows.

Photosynthesis involves chlorophyll absorbing light and using the energy to drive a charge separation process which ultimately (via a vast range of hacked together bits and pieces – if you believe in intelligent design, you won’t after you’ve read the Wikipedia page on chlorophyll) generates oxygen and carbohydrate. Here’s a nice picture of the absorption spectrum of two sorts of chlorophyll, swiped from Wikipedia:

Notice that both kinds of chlorophyll absorb blue and red light, but not green or infrared.

So: why are trees green? The graph above shows you that it’s because green is what’s left once the chlorophyll has grabbed all the long wavelength (red) and short wavelength (blue) light.

Let’s say you’re a biologist, and you want to measure how much photosynthesis is going on. One way to do this would be to look for greenness, but it turns out an even better method is to look for infrared and not blue – this is what the filter lets us do. Bright areas in a picture filtered like this mean that lots of photosynthesis is happening in those spots.
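
Public Lab’s Infragram site will do the processing for you, but the core calculation is simple enough to try at home: with the blue filter in front of a Pi NoIR, the red channel is dominated by near-infrared and the blue channel by visible light, so a rough NDVI-style index is just their normalised difference. A quick NumPy sketch (our own illustration, not Public Lab’s code; the file names are placeholders):

# Rough NDVI-style index from a Pi NoIR photo taken through the blue filter.
# With this filter the red channel approximates near-infrared and the blue channel visible light.
import numpy as np
from PIL import Image

img = np.asarray(Image.open('noir_photo.jpg'), dtype=np.float32)
nir, vis = img[:, :, 0], img[:, :, 2]          # red channel ~ NIR, blue channel ~ visible

ndvi = (nir - vis) / (nir + vis + 1e-6)        # ranges roughly from -1 to 1
ndvi_image = ((ndvi + 1) / 2 * 255).astype(np.uint8)
Image.fromarray(ndvi_image).save('ndvi.png')   # brighter = more photosynthesis, roughly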

There’s a long history of doing this stuff from space (the Landsat vehicles, for example, look at the Earth across a very broad spectrum), and Public Lab have done loads of research as part of their Infragram project (and associated Kickstarter) to find ways of modifying cameras, and to find cheap alternatives to expensive optical bandgap filters. Our friend Roscolux #2007 Storaro Blue (that’s the blue thing’s full name) turns out to be a great example – we buy it on giant reels and the guys at the factory in Wales where we make the Raspberry Pi and both kinds of camera board cut it up into little squares for you to use. It’s not very expensive at all for us to provide you with a little square of blue, and it adds a lot of extra functionality to the camera that we hope you’ll enjoy playing with.

The work of the folk at Public Lab has been absolutely vital in helping us understand all this, and we’re very grateful to them for their work on finding suitable filters at low prices, and especially on image processing. We strongly recommend that you visit Public Lab’s Infragram to process your own images. We’re talking to Public Lab at the moment about working together on developing some educational activities around Pi NoIR. We’ll let you know what we come up with right here.

We sent Matthew Lippincott from Public Lab an early Pi NoIR (and a blue thing), and he sent it up on a quadcopter to take some shots of the tree canopy, which he’s processed using Infragram, to show you what’s possible.

Click to visit a Flickr set with many more Pi NoIR + Infragram images.

We still have some work to do in getting images taken with the filter absolutely perfect (notably in white balance calibration), but we hope that what you can do with the filter already gives you a feel for the potential of an infrared camera. In a way, it’s a shame we’re launching this in the autumn: there’s less photosynthesising going on out there than there might be. But you’ll still get some really interesting results if you go outside today and start snapping.

Pi NoIR infrared camera: now available!

via Raspberry Pi

Pi NoIR, the infrared version of our camera board, is available to purchase for $25 plus tax from today. You’ll find it at all the usual suspects: RS Components, Premier Farnell and their subsidiaries; and at Adafruit. Other stores will be getting stock soon.

Pictures courtesy of Adafruit, who, unlike us, actually have a studio for doing this stuff in – thanks guys!

Back view

What’s that mysterious square of stuff, you ask? I’ll let you know tomorrow.

 

 

Creeptacular face-tracking Halloween portrait

via Raspberry Pi

You’ve got a week to build this portrait, whose eyes follow you around the room, for Halloween.

Adafruit have produced a tutorial, courtesy of Tony DiCola, which uses OpenCV and openFrameworks with your Raspberry Pi and camera board to create a picture of pullulating panic. It’s haunted hardware of horripilating hideousness.

You’ll also find instructions on making your own frame in the tutorial: we recommend making one large enough to drill a hole in, so you can conceal the camera board inside before using this to scare your loved ones. It’s elegant and spooky; plus, you can keep it for the rest of the year and use it for another OpenCV project like the Magic Mirror.
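
If you’d like a feel for the face-tracking part before you commit to the full openFrameworks build, here’s a stripped-down Python sketch of the detection step using OpenCV’s bundled Haar cascade (our own illustration, not Tony’s tutorial code; it works on a still image rather than a live camera feed):

# Stripped-down face tracking sketch: find a face and work out which way the portrait's
# eyes should look. Uses OpenCV's bundled Haar cascade (available in opencv-python 3.3 or later).
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + 'haarcascade_frontalface_default.xml')

frame = cv2.imread('room.jpg')                   # stand-in for a frame from the Pi camera
grey = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(grey, scaleFactor=1.2, minNeighbors=5)

for (x, y, w, h) in faces:
    centre_x = x + w / 2.0
    offset = (centre_x / frame.shape[1]) - 0.5   # -0.5 = far left of frame, +0.5 = far right
    print('Face found; look %s (offset %.2f)' % ('left' if offset < 0 else 'right', offset))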