Our friends at DesignSpark have produced a really beautiful time-lapse video with one of our new camera boards. It doesn’t start very beautifully, because it was filmed on a day whose start can best be described as “sodden”, but by afternoon the clouds parted and England started to look exceptionally green and pleasant. If you want to skip the rain, fast-forward to 1m46. (There’s a guest appearance from a double rainbow later on, too.)
The camera boards are now available for order! You can buy one from RS Components or from Premier Farnell/Element14. We’ve been very grateful for your patience as we’ve tweaked and refined things; it’d have been good to get the camera board out to you last month, but we wanted your experience to be as good as possible, and we’ve been working on the software right up until last night. Thank you to Gordon and Rob at Raspberry Pi and to Dom Cobley for their work on the firmware (Rob also worked on the documentation); to JamesH for his work on the software; to the Broadcom Cambridge ISP team, particularly David Plowman and Naush Patuck, for volunteering to help with tuning; to Bruce Gentles at Broadcom for volunteering to help with some of the initial bring-up; to James Adams at Raspberry Pi for running the hardware project; and to everybody at Sony Pencoed for making it happen.
Tehzeeb Gunza at OmniVision coordinated things from their end, and helped us with sensor selection. Thanks also to Gert van Loo and Rob Gwynne for their work on the hardware design. (And thank you to Broadcom for letting us take advantage of your team’s willingness to volunteer for us!) This, for the curious, is the camera lab we’ve been borrowing from Broadcom for testing. The mannequin’s name is Veronica. She’s lousy company. The room gives us a calibrated and fixed target to use during tuning; it’s designed to be filled with examples of the sorts of things people tend to take pictures of. Which makes it a kind of creepy place to hang out. Between this and anechoic chambers, we’re getting the full range of testing chambers that give us the shivers.
You might be interested to learn that this was snapped with a Nokia N8, which uses an earlier version of the imaging core that’s in the Raspberry Pi (but a different sensor and optics).
For such a small device, this has been an enormous project, and a year-long effort for everybody involved. We’re pretty proud of it: we hope you like it!
How to set up the camera hardware
Please note that the camera can be damaged by static electricity. Before removing the camera from its grey anti-static bag, please make sure you have discharged yourself by touching an earthed object (e.g. a radiator or water tap).
The flex cable inserts into the connector situated between the Ethernet and HDMI ports, with the silver connectors facing the HDMI port. The flex cable connector should be opened by pulling the tabs on the top of the connector upwards then towards the Ethernet port. The flex cable should be inserted firmly into the connector, with care taken not to bend the flex at too acute an angle. The top part of the connector should then be pushed towards the HDMI connector and down, while the flex cable is held in place. (Please view the video above to watch the cable being inserted.)
The camera may come with a small piece of translucent blue plastic film covering the lens. This is only present to protect the lens while it is being mailed to you, and needs to be removed by gently peeling it off.
How to enable camera support in Raspbian
Boot up the Pi and log in. The default username is pi, and the default password is raspberry. (Note: if you have changed these from the default then you will need to supply your own user/password details).
Run the following commands in a terminal to upgrade the Raspberry Pi firmware to the latest version:
sudo apt-get update
sudo apt-get upgrade
Access the configuration settings for the Pi by running the following command:
sudo raspi-config
Navigate to “camera” and select “enable”.
Select “Finish” and reboot.
How to use the Raspberry Pi camera software
raspivid is a command line application that allows you to capture video with the camera module, while the application raspistill allows you to capture images.
-o or --output specifies the output filename, and -t or --timeout specifies how long the preview will be displayed, in milliseconds. Note that this is set to 5s by default, and that raspistill will capture the final frame of the preview period.
-d or --demo runs the demo mode, which cycles through the various image effects that are available.
Capture an image in JPEG format:
raspistill -o image.jpg
Capture a 5s video in H.264 format:
raspivid -o video.h264
Capture a 10s video:
raspivid -o video.h264 -t 10000
Capture a 10s video in demo mode:
raspivid -o video.h264 -t 10000 -d
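And since this post opened with a time-lapse: raspistill has a built-in time-lapse mode too (run raspistill with no arguments to confirm the exact flags your firmware supports). For example, to save a numbered frame every 2s for a minute (the %04d numbers the output files):
raspistill -o frame%04d.jpg -tl 2000 -t 60000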
To see the full list of options for raspivid or raspistill, run either application with no arguments; piping the output through less makes it easier to read:
raspivid | less
raspistill | less
Here’s a guest post from our friend Pete Wood at RS Components’ community arm, DesignSpark. Pete is one of the organisers of the Oxford Raspberry Jams. This post was first published at www.designspark.com.
Raspberry Jams are now being held all over the world; I’ve been trying to go to about one a month, and am lucky enough to be in Tokyo for some press and meetings while the Tokyo Jam is on later this month. There’s a list of events in each month’s MagPi, and if you’re looking for something near you, it’s worth checking the events page on our forums. If you can’t find a Jam near your home, why not look into setting one up? There’s information on how to get started at the Raspberry Jam website, which Alan O’Donohoe tells me will be getting a redesign in the coming months.
Over to Pete!
This month’s Jam held at DesignSpark HQ in Oxford UK was our biggest turnout yet, with over 30 Pi Geeks crammed into the room!
Raspberry Pi Camera
I kicked off the event by showing the new Raspberry Pi camera module, which will be available from RS Components later in May. In the picture is a pre-production module; the production version is a couple of millimetres taller. The camera gives stunning HD video from a 5MP sensor at 30 FPS.
Next up was one of my RS colleagues, Pete Milne, who showed us his Digital Signage application. Pete has connected a network of Raspberry Pis to flat-screen TVs here at the RS Oxford offices and at our main facility in Corby, Northamptonshire. The Pis run a LibreOffice slideshow in a continuous loop, displaying Health and Safety messages for RS employees. They’ve been running continuously for over 8 weeks without a reboot, so the setup is very robust. The Pis run without a keyboard or mouse, and the content can be updated remotely over the network.
If you want to create your own Digital Signage Application, Pete has shared how to do it on GitHub. Just follow the INSTALL file for setup details.
Wii Controller Car
Oxford Raspberry Jam regular Alex Eames presented another cool little project using a Wii controller and Nunchuck. This one controls a remote-control car with an on-board Raspberry Pi and Bluetooth dongle. It also handles the brake lights, headlights and indicators, and drives an aircraft propeller. Alex plans to build all this into the car itself, which would need to accommodate the Pi, the electronics hanging off the GPIO, some model aircraft batteries, and the motor and fan. Alex, I think you need a bigger car… how about a Monster Truck?
Our next demo was featured on the Raspberry Pi site a few weeks ago: a Raspberry Pi-powered video wall. Alex and Colin from the Culham Centre for Fusion Energy (CCFE) built the system in C and some Python. It has clever features like bezel compensation to accommodate different styles of screen. They showed a four-screen setup, but have also run a 9+4 configuration, and the software scales to any size or shape. Each screen needs a Pi, and one separate Pi is used as the master. It’s a classic example of building your own video wall for a fraction of the price of a commercial solution. Chaps, I can see a business opportunity here for screening big sporting events on a budget down my local pub. ;0) They expect to license the software/design at some point; more details are available on their website.
Motion Detected Camera
Another Oxford Jam regular, Dave R, showed his Pi running a webcam-based motion detection system linked to a DSLR. Dave created this for his bird table, to capture pictures of birds when they land on it. I think I need to build a similar solution to stop my kids from stealing my Haribos…
Touch Screen Display
Paul had two projects to show. The first was a simple touch screen for the Pi, allowing control and display; Paul was using it to read and display temperatures. The screens are semi-intelligent: they store screen images and have a sound output available. The images are loaded via a Windows app and a USB connection, and the Pi then controls which of those images is displayed.
Sky Remote Controlled LED Lighting
The second demonstration was a programmable LED strip and infrared receiver, controlled by a Sky TV remote control. A simple Python script reads the codes from the remote control, which Paul could then use to flash the LEDs in various patterns and colours. The LEDs are driven over SPI and can be daisy-chained, up to 1024 LEDs.
ChiPi Messaging Box
Paul M and Annierei L showed us their ChiPhone box. ChiPi is an electronic messaging system for children, allowing them to send and receive voice messages. They have designed a child-friendly box with large buttons and a microphone; with simple record and ‘To/play’ buttons, it makes for an easy messaging system, connected to the internet via WiFi. You can find out more about their project on their website.
Pi Keyword Cruncher
Pi Jam regular and Data Geek John finished off our live demos by showing us his Pi-based RSS feed collector and keyword analysis tool. The Pi collects data from various RSS feeds every 30 minutes and stores the results in a MySQL database. The data is then used to monitor trends in keywords, which over time show either peaks of activity or trends of ‘chatter’ about specific topics. The advantage of using a Raspberry Pi instead of John’s 50W laptop is that the Pi draws only 2W and can be left on all the time. It also frees up his laptop for other tasks.
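For anyone who fancies building something similar, here’s a minimal sketch of the collection half. The feed list and schema are my own invention, and I’ve used sqlite3 to keep the sketch self-contained where John used MySQL:

import sqlite3
import time
import feedparser  # third-party: pip install feedparser

FEEDS = ["http://example.com/feed.rss"]  # placeholder: your RSS feed URLs

def collect(db_path="keywords.db"):
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS items "
               "(fetched INTEGER, feed TEXT, title TEXT)")
    for url in FEEDS:
        for entry in feedparser.parse(url).entries:
            db.execute("INSERT INTO items VALUES (?, ?, ?)",
                       (int(time.time()), url, entry.title))
    db.commit()
    db.close()

while True:  # poll every 30 minutes, as John does
    collect()
    time.sleep(30 * 60)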
RaspBMC Toddler In-Car Entertainment System
The final presentation of the evening came from one of my Jam co-hosts, Alex Gibson, who in true Hollywood award-winner style couldn’t attend in person, so sent a video message! Alex’s video featured his Pi-based RaspBMC in-car toddler entertainment system. One of the most impressive bits was a headrest bracket he had printed on his Raspberry Pi-based 3D printer.
Thanks to all those who showed their projects. Looking forward to the next event!
Liz: here’s the second and final part of David Plowman’s walk-through of the development of the Raspberry Pi camera board, which will be available to purchase in April. Before you go ahead and read this, check out David’s first post.
The Eye of the Beholder
That’s where beauty lies, so the saying goes. And for all the test charts, metrics and objective measurements that imaging engineers like to throw at their pictures, it’s perhaps sobering that the human eye – what people actually like – is the final arbiter of Image Quality (IQ). There has been much discussion, and no little research, on what makes for “good IQ”, but the consensus probably has it that while the micro aspects of IQ, such as sharpness, noise and detail, are very important, your eye turns first to the macro (in the sense of large-scale) image features – exposure and contrast, colours and colour balance.
We live in a grey world…
All camera modules respond differently to red, green and blue stimuli. Of itself this isn’t so problematic as the behaviour can be measured, calibrated and transformations applied to map the camera’s RGB response (which you saw in the final ugly image of my previous post!) onto our canonical (or standard) notion of RGB. It’s in coping with the different kinds of illumination that things get a little tricky. Let me explain.
Imagine you’re looking at a sheet of white paper. That’s just the thing – it’s always white. If you’re outside on a sunny day, it’s white, and if you’re indoors in gloomy artificial lighting, it’s still white. Yet if you were objectively to measure the colour of the paper with your handy spectrometer, you’d find it wasn’t the same at all. In the first case your spectrometer will tell you the paper is quite blue, and in the second, that it’s very orange. The Human Visual System has adapted itself brilliantly over millions of years simply not to notice any difference, a phenomenon known as colour constancy.
No such luck with digital images, though. Here we have to correct for the ambient illumination to make the colours look “right”. Take a look at the two images below. (You’ll find it easier to judge the “right”-ness if you scroll so only one image is on the screen at a time.)
It’s a scene taken in the Science Park in Cambridge, outside the Broadcom offices. The top one looks fine, but the bottom one has a strong blue cast. This is precisely because the top one has been (in the jargon) white-balanced for an outdoor illuminant and the bottom one for an indoor illuminant. But how do we find the right white balance?
The simplest assumption that camera systems can make is that every scene is, on average, grey – and it works surprisingly well. It has some clear limitations too, of course. With the scene above, a “grey world” white balance would actually give a noticeable yellow cast, because the preponderance of blue sky skews the average. So in reality more sophisticated algorithms are generally employed, which constrain the candidate illuminants to a known set (predominantly those a physicist would describe as black-body radiators, which includes sunlight and incandescent bulbs) and key on colours other than merely grey (often specific memory colours, such as blue sky or skin tones).
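For the curious, here’s what the grey-world assumption boils down to in a few lines of numpy. This is a toy sketch of the idea, not the algorithm the ISP actually runs:

import numpy as np

def grey_world_gains(rgb):
    # rgb: float array of shape (H, W, 3), linear light, values in [0, 1]
    means = rgb.reshape(-1, 3).mean(axis=0)  # average R, G and B over the scene
    return means.mean() / means              # per-channel gains pulling the average towards grey

def apply_white_balance(rgb, gains):
    return np.clip(rgb * gains, 0.0, 1.0)

# An image with a blue cast gets red boosted and blue pulled down:
img = np.random.rand(480, 640, 3) * np.array([0.6, 0.8, 1.0])
balanced = apply_white_balance(img, grey_world_gains(img))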
The devil is in the details…
With our colours sorted out, we need to look at the micro aspects of our image tuning. On the Pi, fortunately, we don’t have to worry about focusing, which leaves the noise and sharpening filters within the ISP. Note that some amount of sharpening is essential, really, because of the inherent softening effect of the Bayer mosaic that we saw last time.
When it comes to tuning noise and detail, there are generally two camps. The first camp regards noise as ugly and tries very hard to eliminate it. The second camp thinks a certain amount of noise is tolerable (it can look a bit like film “grain”) in return for better details and a more natural (less processed) look to the image.
To see what I mean, take a look at the following three images. It’s a small crop from a picture of some objects on a mantelpiece, taken in very gloomy orange lighting, and the walls are even a murky pinkish colour too. Pretty challenging for any digital camera!
The top one has had practically no noise filtering applied to it at all. Actually it shows bags of detail, but I think most people would regard the noise as pretty heinous. The second image demonstrates the opposite approach. The noise has been exterminated with extreme prejudice, but out with the bathwater goes the baby – detail and a “natural” looking result. Though my examples are deliberately extreme, you can find the influence of both camps at work in mobile imaging devices today!
The final image shows where we’ve settled with the Pi – a happy medium, I hope, but it does remain, ultimately, a matter of taste. And de gustibus non est disputandum, after all!
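If you’d like to play with the trade-off yourself, here’s a toy sketch of the two operations every tuning juggles: a Gaussian blur standing in for the ISP’s (much smarter) noise filter, followed by an unsharp mask for sharpening. Turn denoise_sigma up and detail goes out with the noise; turn sharpen_amount up and the noise comes back amplified:

import numpy as np
from scipy.ndimage import gaussian_filter

def tune(img, denoise_sigma=1.0, sharpen_amount=0.5, sharpen_sigma=1.0):
    # img: greyscale float array (H, W), values in [0, 1]
    denoised = gaussian_filter(img, sigma=denoise_sigma)    # crude "noise filter"
    blurred = gaussian_filter(denoised, sigma=sharpen_sigma)
    detail = denoised - blurred                             # the high frequencies
    return np.clip(denoised + sharpen_amount * detail, 0.0, 1.0)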
I’ve only grazed the surface of this subject – there are many more niggles and wrinkles that an imaging system has to iron out – but I’m hoping I’ve given you some sense of why a proper camera integration represents a significant commitment of time and effort. Whilst you’re all waiting for the boards finally to become available I’ll stick around on this website to answer any questions that I can.
My deep thanks, as ever, go to those clever engineers at Broadcom who actually make this stuff work.
Liz: We’re very close to being able to release the $25 add-on camera board for the Raspberry Pi now. David Plowman has been doing a lot of the work on imaging and tuning. He’s very kindly agreed to write a couple of guest posts for us explaining some more for the uninitiated about the process of engineering the camera module. Here’s the first – I hope you’ll find it as fascinating as I did. Thanks David!
Lights! Camera! … Action?
So you’ve probably all been wondering how it can take quite so long to get a little camera board working with the Raspberry Pi. Shouldn’t it be like plugging in a USB webcam, all plug’n’play? Alas, it’s not as straightforward as you might think. Bear with me for this – and a subsequent – blog posting and I’ll try and explain all.
The Nature of the Beast
The camera we’re attaching to the Raspberry Pi is a 5MP (2592×1944 pixels) OmniVision OV5647 sensor in a fixed-focus module. This is very typical of the kinds of units you’d see in some mid-range camera phones (you might argue the lack of autofocus is a touch low-end, but it does mean less work for us and you get your camera boards sooner!). Besides power, clock signals and so forth, we have two principal connections (or data buses, in electronics parlance) between our processor (the BCM2835 on the Pi) and the camera.
The first is the I2C (“eye-squared-cee”) bus which is a relatively low bandwidth link that carries commands and configuration information from the processor to the image sensor. This is used to do things like start and stop the sensor, change the resolution it will output, and, crucially, to adjust the exposure time and gain applied to the image that the sensor is producing.
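Just to make that concrete, here’s roughly what driving a register over I2C looks like from Python with the smbus module. (On the Pi the camera’s I2C is actually handled by the GPU firmware, not from Linux userland, and the bus number and register below are purely illustrative.)

import smbus

bus = smbus.SMBus(0)      # an I2C bus number (illustrative)
SENSOR_ADDR = 0x36        # a 7-bit device address (illustrative)

# Write a value to a register, then read it back. Real sensor registers
# are usually 16-bit addressed, so a real driver splits the address across
# two bytes; smbus's 8-bit "command" byte is enough to show the idea.
bus.write_byte_data(SENSOR_ADDR, 0x00, 0x01)
value = bus.read_byte_data(SENSOR_ADDR, 0x00)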
The second connection is the CSI bus, a much higher bandwidth link which carries pixel data from the camera back to the processor. Both of these buses travel along the ribbon cable that attaches the camera board to your Pi. The astute amongst you will notice that there aren’t all that many lines in the ribbon cable – and indeed both I2C and CSI are serial protocols for just this reason.
The pixels produced are 10 bits wide rather than the 8 bits you’re more used to seeing in your JPEGs. That’s because we’re ultimately going to adjust some parts of the dynamic range and we don’t want “gaps” (which would become visible as “banding”) to open up where the pixel values are stretched out. At 15fps (frames per second) that’s a maximum of 2592x1944x10x15 bits per second (approximately 750Mbps). Actually many higher-end cameras will give you frames larger than this at up to 30fps, but still, this is no slouch!
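If you’d like to sanity-check that arithmetic, it’s two lines of Python:

width, height, bits, fps = 2592, 1944, 10, 15
print(width * height * bits * fps / 1e6)  # 755.8 -- roughly 750Mbps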
Show me some pictures!
So, armed with our camera modules and adapter board, the next job we have is to write a device driver to translate our camera stack’s view of the camera (“use this resolution”, “start the camera” and so forth) into I2C commands that are meaningful to the image sensor itself. The driver has to play nicely with the camera stack’s AEC/AGC (auto-exposure/auto-gain) algorithm whose job it is to drive the exposure of the image to the “Goldilocks” level – not too dark, not too bright. Perhaps some of you remember seeing one of Dom’s early camera videos where there were clear “winks” and “wobbles” in brightness. These were caused by the driver not synchronising the requested exposure changes correctly with the firmware algorithms… you’ll be glad to hear this is pretty much the first thing we fixed!
With a working driver, we can now capture pixels from the camera. These pixels, however, do not constitute a beautiful picture postcard image. We get a raw pixel stream, even more raw, in fact, than in a DSLR’s so-called raw image where certain processing has often already been applied. Here’s a tiny crop from a raw image, greatly magnified to show the individual pixels.
Surprised? To make sense of this vast amount of strange pixel data the Broadcom GPU contains a special purpose Image Signal Processor (ISP), a very deep hardware pipeline tasked with the job of turning these raw numbers into something that actually looks nice. To accomplish this, the ISP will crunch tens of billions of calculations every second.
What do you mean, two-thirds of my pixels are made up?
Yes, it is imaging’s inconvenient truth that fully two-thirds of the colour values in an RGB image have been, well, we engineers prefer to use the word interpolated. An image sensor is a two-dimensional array of photosites, and each photosite can sample only one number – either a red, a green or a blue value, but not all three. It was the idea of Bryce Bayer, working for Kodak back in 1976, to add a mosaic of tiny colour filters over the top, so that each photosite measures a different colour channel. The arrangement of reds, greens and blues that you see in the crop above is now referred to as a “Bayer pattern”, and a special algorithm, often called a “demosaic algorithm”, is used to create the fully-sampled RGB image. Notice how there are twice as many greens as reds or blues, because our eyes are far more sensitive to green light than to red or blue.
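Here’s a toy bilinear demosaic in numpy – vastly simpler than what the ISP does, but it makes the “two-thirds made up” point concrete. It assumes an RGGB layout, i.e. red in the top-left corner:

import numpy as np
from scipy.ndimage import convolve

def demosaic_bilinear(bayer):
    # bayer: (H, W) float array; R at (even, even) sites, B at (odd, odd),
    # G on the remaining checkerboard of sites
    h, w = bayer.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask                # twice as many green sites

    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # bilinear kernel for the 2x2 R/B grids
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # bilinear kernel for the G checkerboard

    r = convolve(bayer * r_mask, k_rb)          # fill in the missing samples
    g = convolve(bayer * g_mask, k_g)
    b = convolve(bayer * b_mask, k_rb)
    return np.dstack([r, g, b])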
The Bayer pattern is not without its problems, of course. Most obviously, a large part of the incoming light is simply filtered out, meaning these sensors perform poorly in dark conditions. Mathematicians may also mutter darkly about “aliasing”, which makes a faithful reconstruction of the original colours very difficult where there is high detail – but nonetheless the absolutely overwhelming majority of sensors in use today are of the Bayer variety.
Now finally for today’s posting, here’s what the whole image looks like once it has been “demosaicked”.
My eyes! They burn with the ugly!
It’s recognisable, that’s about the kindest thing you can say, but hardly lovely – we would still seem to have some way to go. In my next posting, then, I’ll initiate you into the arcane world of camera tuning…
Liz: if you haven’t entered our contest to win a pre-production camera board, have a look at the post explaining what you’ll need to do. And if you’re looking for inspiration, here’s a guest post from Gordon, our Head of Software, about a mini-HD camera project he worked on at home using the prototype boards we showed the BBC back in 2011.
I may have mentioned that Gordon does a lot of cycling. He bodged up a 3D helmet cam a couple of years ago: here’s how he did it. (He has also made me include some 2D video because he likes showing off.)
Careful with the last video, which is in 3D – if you’re using bi-coloured 3D glasses to view it, as I did, you are liable to feel VERY motion sick if you’re susceptible to that sort of thing. Over to Gordon!
A few years ago I really wanted to play around with a helmet-mounted camera for my mountain biking. There were quite a few out in the market, but they were quite expensive, and it’s always difficult getting toys past my wife! Because I was working at Broadcom, I was able to get my hands on what we called the MicroDB (the thing David and Eben first showed to the BBC as the Raspberry Pi), and since I had all the software and a bit of competence, I decided to try doing a bit of HD helmet recording.
The hardware I used was based on the same BCM2835 chip that we all know and love. The hardware also had a PMU chip (power supply), which meant you could power it directly from a lithium ion battery and record 720p HD video for about an hour.
So I rigged up some properly engineered mounting. I used a rubber from my daughter’s pencil case (Americans, breathe easy – this is the UK word for what you call an eraser), a couple of cable ties, and a USB socket! I set out on a voyage of discovery… apologies in advance for the lycra-clad arses, but it’s something you’ll just have to put up with!
Liz interjects: that’s not the half of it. Eben and Gordon have a regular date on Wednesdays where they take an hour and a half over lunch to go cycling and have a software meeting at the same time. This means a certain amount of strutting sweatily around the office dressed in lycra at the end of the ride. This week, Jack turned up, tutted and said: “You two do realise there are showers downstairs, don’t you.” The rest of us cheered.
This is an example of the helmet cam being used in a chain gang, which is a fast-moving (we’re doing around 26mph average for the whole of the clip) club ride, where you continuously rotate who’s cycling at the front, making it a very efficient way of travelling at speed!
This is another clip from the helmet cam, at the start of a mountain bike race held by a good friend of mine who’s an elite rider.
When I took these videos, I expected to experience the same feeling of speed as when you’re riding for real, but it doesn’t quite make it. The main issue is that the feeling of speed you get is a product of the full 3D stereoscopic experience that the 2D camera throws away. It’s there and it’s fun, but it doesn’t actually feel real; you don’t quite get the full-force feeling of what it’s like to tear down that trail!
I was missing a dimension, so I had to go find it again! OK, now you ask: surely it’s going to cost a lot of money to buy a proper 3D camera? And you’d be right – if you didn’t have a whole bunch of little camera boards kicking around in the office. I realised that all I needed was two of them, and a spot of work to synchronise the pictures: then Bob’s your uncle!
I took two MicroDBs and connected them together (actually I used a USB-to-USB connector, which I then cable-tied to my bike helmet with a rubber/eraser to give it something soft to sink into). What you get out is two videos (each 720p30). To get the images working together, you need to do some processing, which presents a number of problems:
1) The two cameras are not aligned and therefore you have to rotate and translate the images.
2) You also need to invert one of the images.
3) You need to hand-synchronise the two videos (and keep them synchronised during the video).
So I wrote a bit of software based on FFmpeg and SDL, with lots of handcrafted fun code, to take the two videos and output them as one in a number of formats, including interleaved line (odd lines are the left image, even lines the right), horizontal half-resolution and vertical half-resolution (because we had a number of different 3D televisions to play with!). Application of Bresenham’s algorithm is so much fun!
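As a flavour of that final combining step (assuming the alignment, inversion and synchronisation of steps 1–3 are already done), here’s a little numpy sketch of two of the output formats; the function names are mine, invented for illustration:

import numpy as np

def anaglyph(left, right):
    # left, right: (H, W, 3) RGB frames. Red channel from the left eye,
    # green and blue from the right -- the red/cyan format mentioned below.
    out = right.copy()
    out[..., 0] = left[..., 0]
    return out

def interleave_lines(left, right):
    # Alternate scan lines between the two eyes; swap the operands if your
    # television expects the opposite convention.
    out = left.copy()
    out[1::2, :, :] = right[1::2, :, :]
    return out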
I then went and did a 24-hour mountain bike race in a team of five (we came third that year) and recorded the first half of one of the laps in glorious 3D. You are either going to need a proper 3D television to watch this, or some red/green (actually cyan is closer) glasses (the kind you get in breakfast cereals!) – otherwise you can just hold two bits of suitably coloured filter against your face.
Liz again: editing this post, I have realised that the next video gives me motion sickness even without Gordon’s 3D glasses. Proceed with caution. Gordon, I can’t believe you kept this stuff up without sleeping for 24 hours.
Why am I showing you this? Well mostly because I had so much fun doing it, and it really shows how the real 3D helmet cameras can make the experience of home video just so much better if you’re doing something fast and aggressive. I hope you agree!
Finally, of course, the Raspberry Pi camera (now in production and being released next month) is very closely related to this one – although it’s actually higher quality; the images we’ve been seeing in test are looking fantastic. This project gives you an impression of the kind of thing you’ll be able to do with it with a bit of extra coding – and of the sort of extra legwork we’re looking for from people entering the competition to win a pre-production camera board.
I’ve been looking for where I put the video manipulation code; if I can find it, I’ll put it into GitHub somewhere so you can have a play yourself (if anyone is remotely interested)!
Finally – really finally – you have to think about the fact that the Raspberry Pi has two CSI interfaces, meaning there’s a potential to add two camera boards. Does that mean it would be possible to do all this completely on a single Raspberry Pi? We haven’t experimented with the idea yet – only the future can tell…
We’ve sent the first camera boards to production, and we’re expecting to be able to start selling them some time in April. And we’ve now got several pre-production cameras in the office that we’re testing and tweaking and tuning so the software will be absolutely tickety-boo when you come to buy one.
Gordon is in charge of things camera, and he’s got ten boards to give away. There is, however, a catch.
The reason we’re giving these cameras away is that we want you to help us to do extra-hard testing. We want the people we send these boards to to do something computationally difficult and imaginative with them, so that the cameras are pushed hard in the sort of bonkers scheme that we’ve seen so many of you come up with here before with your Pis, and so that we can learn how they perform (and make adjustments if necessary). The community here always seems to come up with applications for the stuff we do that we wouldn’t have thought of in a million years; we thought we should take advantage of that.
So we want you to apply for a camera, letting us know what you’re planning to do with it (and if you don’t do the thing you promise, we’ll send Clive around on his motorbike to rough you up). We want you to try to get the camera doing something imaginative. Think about playing around with facial recognition; or hooking two of them up and modging the images together to create some 3D output; or getting the camera to recognise when something enters the frame that shouldn’t be there, and doing something to the image as a result. We are not looking for entries from people who just want to take pictures, however pretty they are. (Dave Akerman: we’ve got one bagged up for you anyway, because the stuff you’re taking pictures of is cool enough to earn an exemption here. Everybody else, see Dave’s latest Pi in Space here. He’s put it in a tiny TARDIS.)
So if you have a magnificent, imaginative, computationally interesting thing you’d like to do with a Raspberry Pi camera board, email email@example.com. In your mail you’ll need to explain exactly what you plan to do; and Gordon, who is old-school, is likely to take your application all the more seriously if you can point to other stuff you’ve done in the past (with or without cameras), GitHub code or other examples of your fierce prowess. (He suggested I ask for your CVs, but I think we’ll draw the line there.) We will also need your postal address. The competition is open worldwide until March 12. We’re looking forward to seeing what you come up with!
Of course, Model A is not the only new bit of hardware we’re releasing in 2013. JamesH just sent me these pictures of the forthcoming camera board to whet your appetite. This is the final hardware; we’ve been working on tuning (Gert tells me that picture quality is “pretty good” at the moment, but we’re hoping to get it to “bleedin’ marvellous” before we release the hardware), and there is some work to do on the drivers, but everything’s looking pretty peachy for the moment. I don’t have a release date for you yet, but we’re probably at least a month away (and possibly more) from being able to sell these at the moment.
Meanwhile, Model A boards are already starting to appear in the wild. Alex from Raspi.TV, a fan site, has what I think is the first blog post about the Model A from someone who’s bought one. (The blue splodge he mentions, which can be removed with meths, is an artefact of the testing process. At certain times of day, when the production line is relatively quiet, that splodge is added as a visual demonstration that the Pi has been through the whole battery of factory tests.)
He also has some video of the board. Plug it in, Alex!
Dom’s been streaming 1080p video from the camera board (this time using a blu-tack mount – you’ll be pleased to hear that proper camera mounts will be available so you won’t have to use whatever sticky stuff you have in the kitchen drawers), and he’s filmed the streaming process to show you how well it works.
He’s used a camera to film what he’s been doing which is rather less good than the RasPiCam itself – so he’s also sent me the raw video output so you can see what it looks like straight from the Pi. You can find the .mkv file here - it’s well worth a look if you’re curious about what you’ll get from RasPiCam. While you’re busy downloading it, you can see Dom’s wobbly cameraphone output below, with an explanation of what you’re looking at.
Pete Wood at RS sent me this video yesterday. He’s been at Electronica 2012 in Germany with Rob Bishop, where RS have been demoing the Raspberry Pi (the large wall-slapping game you can see being played in the video is driven by a Pi) and, most interestingly for you guys, the camera board.
The camera has a 5 megapixel sensor, and can record 1080p H.264 video at 30 frames per second. This board will plug into the currently unused CSI pins on the Pi, using I²C for control. We’re also working on a display board, which will come to market after the camera board.
Pete has, in the tradition of makers and hackers everywhere, employed sellotape and what appears to be a broom handle in his demo. We’ll be making a little mount for the production camera, so sellotape will not be necessary. Broom handles, however, are almost always useful for something or other.
This camera board is a prototype of the production model; we’ve a (very) little way to go before we’re able to send it out to manufacture. We’ve got some testing chamber time booked in December; we need to be sure that that big ribbon cable doesn’t emit any forbidden electromagnetic radiation. We’re hoping to get these ready for sale in the new year, all being well, at a price of $25. Keep watching this space!
Do you have plans for the camera add-on? Let us know what they are in the comments.