Tag Archives: Embedded

Raspberry Pi – Translator

via Wolf Paulus » Embedded

Recently, I described how to perform speech recognition on a Raspberry Pi, using the on-device sphinxbase / pocketsphinx open source speech recognition toolkit. This approach works reasonably well, but achieves high accuracy only for a relatively small dictionary of words.

As that article showed, pocketsphinx works great on a Raspberry Pi for keyword spotting, for instance to launch an application by voice. General-purpose speech recognition, however, is still best performed using one of the prominent web services.

Speech Recognition via Google Service

Google’s speech recognition and related services used to be accessible and easy to integrate. Recently, however, they got much more restrictive and (hard to believe, I know) Microsoft is now the place to start when looking for decent speech-related services. Still, let’s start with Google’s Speech Recognition Service, which requires a FLAC (Free Lossless Audio Codec) encoded voice sound file.

Google API Key

Accessing Google’s speech recognition service requires an API key, available through the Google Developers Console.
I followed these instructions for Chromium Developers and while the process is a little involved, even intimidating, it’s manageable.
I created a project, named it TranslatorPi, and selected the following APIs for this project:

Google Developers Console: Enabled APIs for this project

The important part is to create an API key for public access. On the left side menu, select APIs & auth / Credentials. Here you can create the API key, a 40-character alphanumeric string.

Installing tools and required libraries

Back on the Raspberry Pi, only a few more libraries are needed, in addition to what was installed in the above-mentioned on-device recognition project.

sudo apt-get install flac
sudo apt-get install python-requests
sudo apt-get install python-pycurl
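A quick sanity check – my own addition, not from the original post – to verify the Python modules are importable before moving on:

python -c "import requests, pycurl; print 'ok'"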

Testing Google’s Recognition Service from a Raspberry Pi

I have the same audio setup as previously described, now allowing me to capture a FLAC encoded test recording like so:
arecord -D plughw:0,0 -f cd -c 1 -t wav -d 0 -q -r 16000 | flac - -s -f --best --sample-rate 16000 -o test.flac
.. which makes a high-quality WAV recording and pipes it into the FLAC encoder, which outputs ./test.flac

The following bash script will send the flac encoded voice sound to Google’s recognition service and display the received JSON response:

#!/bin/bash
# parameter 1 : file name, containing the flac encoded voice recording
key='[Google API Key]'
url="https://www.google.com/speech-api/v2/recognize?output=json&lang=en-us&key=$key"
echo "Sending FLAC encoded Sound File to Google:"
curl -i -X POST -H "Content-Type: audio/x-flac; rate=16000" --data-binary @"$1" "$url"
echo '..all done'
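Saved as, say, speech2text.sh (the file name is arbitrary) and made executable, the script takes the FLAC recording as its only parameter:

chmod +x speech2text.sh
./speech2text.sh test.flac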
Speech Recognition via Google Service – JSON Response

{
  "result": [
    {
      "alternative": [
        { "transcript": "today is Sunday", "confidence": 0.98650438 },
        { "transcript": "today it's Sunday" },
        { "transcript": "today is Sundy" },
        { "transcript": "today it is Sundy" },
        { "transcript": "today he is Sundy" }
      ],
      "final": true
    }
  ],
  "result_index": 0
}

More details about accessing the Google Speech API can be found here: https://github.com/gillesdemey/google-speech-v2

Building a Translator

Encoding doesn’t take long and the Google Speech Recognizer is the fastest in the industry, i.e. the transcription is available swiftly and we can send it for translation to yet another web service.

Microsoft Azure Marketplace

Creating an account at the Azure Marketplace is a little easier, and the My Data section shows that I have subscribed to the free translation service, which provides 2,000,000 characters/month. Again, I named my project TranslatorPi. On the ‘Developers‘ page, under ‘Registered Applications‘, take note of the Client ID and Client secret; both are required for the next step.



With the speech recognition from Google and text translation from Microsoft, the strategy to build the translator looks like this:

  • Record voice sound, FLAC encode it, and send it to Google for transcription
  • Use Google’s Speech Synthesizer and synthesize the recognized utterance.
  • Use Microsoft’s translation service to translate the transcription into the target language.
  • Use Google’s Speech Synthesizer again, to synthesize the translation in the target language.

For my taste, that’s a little too much for a shell script, so I use the following Python program instead:

# -*- coding: utf-8 -*-
import json
import requests
import urllib
import subprocess
import argparse
import pycurl
import StringIO
import os.path
def speak_text(language, phrase):
    # percent-encode the phrase so it is safe inside the URL
    tts_url = "http://translate.google.com/translate_tts?tl=" + language + "&q=" + urllib.quote(phrase)
    subprocess.call(["mplayer", tts_url], shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
def transcribe():
    key = '[Google API Key]'
    stt_url = 'https://www.google.com/speech-api/v2/recognize?output=json&lang=en-us&key=' + key
    filename = 'test.flac'
    print "listening .."
    # record a short utterance and pipe it into the flac encoder
    os.system(
        'arecord -D plughw:0,0 -f cd -c 1 -t wav -d 0 -q -r 16000 -d 3 | flac - -s -f --best --sample-rate 16000 -o ' + filename)
    print "interpreting .."
    # send the file to google speech api
    c = pycurl.Curl()
    c.setopt(pycurl.VERBOSE, 0)
    c.setopt(pycurl.URL, stt_url)
    fout = StringIO.StringIO()
    c.setopt(pycurl.WRITEFUNCTION, fout.write)
    c.setopt(pycurl.POST, 1)
    c.setopt(pycurl.HTTPHEADER, ['Content-Type: audio/x-flac; rate=16000'])
    file_size = os.path.getsize(filename)
    c.setopt(pycurl.POSTFIELDSIZE, file_size)
    fin = open(filename, 'rb')
    c.setopt(pycurl.READFUNCTION, fin.read)
    c.perform()  # POST the flac encoded file and collect the response
    c.close()
    fin.close()
    response_data = fout.getvalue()
    # naive extraction of the first transcript from the JSON response
    start_loc = response_data.find("transcript")
    temp_str = response_data[start_loc + 13:]
    end_loc = temp_str.find('"')
    final_result = temp_str[:end_loc]
    return final_result
class Translator(object):
    oauth_url = 'https://datamarket.accesscontrol.windows.net/v2/OAuth2-13'
    translation_url = 'http://api.microsofttranslator.com/V2/Ajax.svc/Translate?'
    def __init__(self):
        oauth_args = {
            'client_id': 'TranslatorPI',
            'client_secret': '[Microsoft Client Secret]',
            'scope': 'http://api.microsofttranslator.com',
            'grant_type': 'client_credentials'
        }
        oauth_junk = json.loads(requests.post(Translator.oauth_url, data=urllib.urlencode(oauth_args)).content)
        self.headers = {'Authorization': 'Bearer ' + oauth_junk['access_token']}
    def translate(self, origin_language, destination_language, text):
        # transliterate German umlauts so the TTS URL stays plain ASCII
        german_umlauts = {
            ord(u'ä'): u'ae',
            ord(u'ö'): u'oe',
            ord(u'ü'): u'ue',
            ord(u'ß'): None,
        }
        translation_args = {
            'text': text,
            'to': destination_language,
            'from': origin_language
        }
        translation_result = requests.get(Translator.translation_url + urllib.urlencode(translation_args),
                                          headers=self.headers)
        # the Ajax endpoint returns a BOM followed by a quoted string; strip both
        translation = translation_result.text[2:-1]
        if destination_language == 'DE':
            translation = translation.translate(german_umlauts)
        print "Translation: ", translation
        speak_text(origin_language, 'Translating ' + text)
        speak_text(destination_language, translation)
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Raspberry Pi - Translator.')
    parser.add_argument('-o', '--origin_language', help='Origin Language', required=True)
    parser.add_argument('-d', '--destination_language', help='Destination Language', required=True)
    args = parser.parse_args()
    while True:
        Translator().translate(args.origin_language, args.destination_language, transcribe())
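Running the program (assuming it was saved as translator.py – a file name of my choosing, not from the original post) takes the origin and destination language codes as arguments; the loop keeps listening, translating, and speaking until stopped with Ctrl-C:

python translator.py -o en -d es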

Testing the $35 Universal Translator

So here are a few test sentences for our translator app, using English to Spanish or English to German:

  • How are you today?
  • What would you recommend on this menu?
  • Where is the nearest train station?
  • Thanks for listening.

Live Demo

This video shows the Raspberry Pi running the translator, using web services from Google and Microsoft for speech recognition, speech synthesis, and translation.

Raspberry Pi 2 – Speech Recognition on device

via Wolf Paulus » Embedded

This is a lengthy post and very dry, but it provides detailed instructions for how to build and install SphinxBase and PocketSphinx and how to generate a pronunciation dictionary and a language model, all so that speech recognition can be run directly on the Raspberry Pi, without network access. Don’t expect it to be as fast as Google’s recognizer, though …

Creating the RASPBIAN boot MicroSD

Starting with the current RASPBIAN (Debian Wheezy) image, the creation of a bootable MicroSD Card is a well understood and well documented process.

Uncompressing the zip (again, there is no better tool than The Unarchiver, if you are on a Mac) reveals the 2015-02-16-raspbian-wheezy.img

With the MicroSD (inside an SD-Card adapter – no less than 8GB) inserted into the Mac, I run the df -h command in Terminal to find out how to address the card. Today, it showed up as /dev/disk4s1 56Mi 14Mi 42Mi 26% 512 0 100% /Volumes/boot, which means I run something like this to put the boot image onto the MicroSD:

sudo diskutil unmount /dev/disk4s1
sudo dd bs=1m if=/Users/wolf/Downloads/2015-02-16-raspbian-wheezy.img of=/dev/rdisk4

… after a few minutes, once the 3.28 GB have been written onto the card, I execute:

sudo diskutil eject /dev/rdisk4

Customizing the OS

Once booted, sudo raspi-config allows customization of the OS: time-zone, keyboard, and other settings are adjusted to closely match its environment.
I usually start (the Pi is already connected to the internet via Ethernet cable) with

  • updating the raspi-config
  • expanding the filesystem
  • internationalization: un-check en-GB, check en-US.UTF-8 UTF-8
  • internationalization: timezone ..
  • internationalization: keyboard: change to English US
  • setting the hostname to translator, since there are too many Raspberry Pis on my home network to leave it at the default
  • make sure SSH is enabled
  • force audio out on the 3.5mm headphone jack


Given the sparse analog-to-digital support provided by the Raspberry Pi, probably the best and easiest way to connect a decent mic to the device is using a USB microphone. I happen to have an older Logitech USB mic, which works perfectly fine with the Pi.

After a reboot and now with the microphone connected, let’s get started ..
ssh pi@translator with the default password ‘raspberry’ gets me in from everywhere on my local network
cat /proc/asound/cards

0 [ALSA ]: bcm2835 - bcm2835 ALSA
bcm2835 ALSA
1 [AK5370 ]: USB-Audio - AK5370
AKM AK5370 at usb-bcm2708_usb-1.2, full speed

showing that the microphone is visible, attached to the USB bus.
Next, I edit alsa-base.conf so that the USB audio device becomes the primary sound card:
sudo nano /etc/modprobe.d/alsa-base.conf
and change the line
options snd-usb-audio index=-2
to
options snd-usb-audio index=0
and after a sudo reboot, cat /proc/asound/cards
looks like this

0 [AK5370 ]: USB-Audio - AK5370
AKM AK5370 at usb-bcm2708_usb-1.2, full speed
1 [ALSA ]: bcm2835 - bcm2835 ALSA
bcm2835 ALSA

Recording – Playback – Test

Before worrying about Speech Recognition and Speech Synthesis, let’s make sure that the basic recording and audio playback works.
Again, I have a USB microphone connected to the Pi, as well as a speaker, using the 3.5mm audio plug.


sudo nano /etc/asound.conf and enter something like this:

pcm.usb {
    type hw
    card AK5370
}

pcm.internal {
    type hw
    card ALSA
}

pcm.!default {
    type asym
    playback.pcm {
        type plug
        slave.pcm "internal"
    }
    capture.pcm {
        type plug
        slave.pcm "usb"
    }
}

ctl.!default {
    type asym
    playback.pcm {
        type plug
        slave.pcm "internal"
    }
    capture.pcm {
        type plug
        slave.pcm "usb"
    }
}


The current recording settings can be looked at with:
amixer -c 0 sget 'Mic',0
and for me that looks something like this:

  Simple mixer control 'Mic',0
  Capabilities: cvolume cvolume-joined cswitch cswitch-joined penum
  Capture channels: Mono
  Limits: Capture 0 - 78
  Mono: Capture 68 [87%] [10.00dB] [on]

alsamixer -c 0 can be used to increase the capture levels.


The current playback settings can be looked at with:
amixer -c 1
and alsamixer -c 1 can be used to increase the volume. After an increase, amixer -c 1 shows something like this:

  Simple mixer control 'PCM',0
  Capabilities: pvolume pvolume-joined pswitch pswitch-joined penum
  Playback channels: Mono
  Limits: Playback -10239 - 400
  Mono: Playback -685 [90%] [-6.85dB] [on]

Test Recording and Playback

With the mic switched on ..
arecord -D plughw:0,0 -f cd ./test.wav .. use Control-C to stop the recording.
aplay ./test.wav
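If the defaults from /etc/asound.conf are not picked up, both devices can also be addressed explicitly; assuming the card order shown above (USB mic as card 0, bcm2835 as card 1):

arecord -D plughw:0,0 -f cd ./test.wav
aplay -D plughw:1,0 ./test.wav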

With recording and playback working, let’s get into the really cool stuff, on-device speech recognition.

Speech Recognition Toolkit

CMU Sphinx a.k.a. PocketSphinx
Currently, pocketsphinx 5 pre-alpha (2015-02-15) is the most recent version. However, there are a few prerequisites that need to be installed first ..

Installing build tools and required libraries

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install bison
sudo apt-get install libasound2-dev
sudo apt-get install swig
sudo apt-get install python-dev
sudo apt-get install mplayer

Building Sphinxbase

cd ~/
wget http://sourceforge.net/projects/cmusphinx/files/sphinxbase/5prealpha/sphinxbase-5prealpha.tar.gz
tar -zxvf ./sphinxbase-5prealpha.tar.gz
cd ./sphinxbase-5prealpha
./configure --enable-fixed
make clean all
make check
sudo make install

Building PocketSphinx

cd ~/
wget http://sourceforge.net/projects/cmusphinx/files/pocketsphinx/5prealpha/pocketsphinx-5prealpha.tar.gz
tar -zxvf pocketsphinx-5prealpha.tar.gz
cd ./pocketsphinx-5prealpha
./configure
make clean all
make check
sudo make install

Creating a Language Model

Create a text file containing a list of the words/sentences we want to be recognized.

For instance ..

Okay Pi
Open Garage
Start Translator
Shutdown
What is the weather in Ramona
What is the time

Upload the text file here: http://www.speech.cs.cmu.edu/tools/lmtool-new.html
and then download the generated Pronunciation Dictionary and Language Model

For the text file mentioned above, this is what the tool generates:

Pronunciation Dictionary
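The generated .dic file maps each word of the corpus to its phoneme sequence. A few illustrative entries (the authoritative phoneme sets come from the lmtool output itself and may differ slightly):

GARAGE	G ER AA ZH
OKAY	OW K EY
PI	P AY
SHUTDOWN	SH AH T D AW N
TRANSLATOR	T R AE N Z L EY T ER
WEATHER	W EH DH ER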


Language Model

Language model created by QuickLM on Thu Mar 26 00:23:34 EDT 2015
Copyright (c) 1996-2010 Carnegie Mellon University and Alexander I. Rudnicky

The model is in standard ARPA format, designed by Doug Paul while he was at MITRE.

The code that was used to produce this language model is available in Open Source.
Please visit http://www.speech.cs.cmu.edu/tools/ for more information

The (fixed) discount mass is 0.5. The backoffs are computed using the ratio method.
This model based on a corpus of 6 sentences and 16 words

\data\
ngram 1=16
ngram 2=20
ngram 3=15

\1-grams:
-0.9853 </s> -0.3010
-0.9853 <s> -0.2536
-1.7634 GARAGE -0.2536
-1.7634 IN -0.2935
-1.4624 IS -0.2858
-1.7634 OKAY -0.2935
-1.7634 OPEN -0.2935
-1.7634 PI -0.2536
-1.7634 RAMONA -0.2536
-1.7634 SHUTDOWN -0.2536
-1.7634 START -0.2935
-1.4624 THE -0.2858
-1.7634 TIME -0.2536
-1.7634 TRANSLATOR -0.2536
-1.7634 WEATHER -0.2935
-1.4624 WHAT -0.2858

\2-grams:
-1.0792 <s> OKAY 0.0000
-1.0792 <s> OPEN 0.0000
-1.0792 <s> SHUTDOWN 0.0000
-1.0792 <s> START 0.0000
-0.7782 <s> WHAT 0.0000
-0.3010 GARAGE </s> -0.3010
-0.3010 IN RAMONA 0.0000
-0.3010 IS THE 0.0000
-0.3010 OKAY PI 0.0000
-0.3010 OPEN GARAGE 0.0000
-0.3010 PI </s> -0.3010
-0.3010 RAMONA </s> -0.3010
-0.3010 SHUTDOWN </s> -0.3010
-0.3010 START TRANSLATOR 0.0000
-0.6021 THE TIME 0.0000
-0.6021 THE WEATHER 0.0000
-0.3010 TIME </s> -0.3010
-0.3010 TRANSLATOR </s> -0.3010
-0.3010 WEATHER IN 0.0000
-0.3010 WHAT IS 0.0000

\3-grams:
-0.3010 <s> OKAY PI
-0.3010 <s> OPEN GARAGE
-0.3010 <s> SHUTDOWN </s>
-0.3010 <s> WHAT IS
-0.3010 IN RAMONA </s>
-0.6021 IS THE TIME
-0.3010 OKAY PI </s>
-0.3010 OPEN GARAGE </s>
-0.3010 THE TIME </s>
-0.3010 WHAT IS THE

\end\


Looking carefully, you’ll see that the Sphinx knowledge base generator provides links to the just generated files, which makes it super convenient to pull them down to the Pi. For me, it generated a base set with the name 3199:

wget http://www.speech.cs.cmu.edu/tools/product/1427343814_14328/3199.dic
wget http://www.speech.cs.cmu.edu/tools/product/1427343814_14328/3199.lm

Running Speech-recognition locally on the Raspberry Pi

Finally everything is in place: SphinxBase and PocketSphinx have been built and installed, and a pronunciation dictionary and a language model have been created and stored locally.
During the build process, acoustic model files for the English language were deployed here: /usr/local/share/pocketsphinx/model/en-us/en-us

.. time to try out the recognizer:

cd ~/
export LD_LIBRARY_PATH=/usr/local/lib
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

pocketsphinx_continuous -hmm /usr/local/share/pocketsphinx/model/en-us/en-us -lm 3199.lm -dict 3199.dic -samprate 16000/8000/48000 -inmic yes
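For simple keyword spotting, pocketsphinx_continuous can alternatively be run in keyphrase mode, without the language model; a sketch, assuming the same acoustic model path (the threshold value is something to experiment with):

pocketsphinx_continuous -hmm /usr/local/share/pocketsphinx/model/en-us/en-us -keyphrase "OKAY PI" -kws_threshold 1e-20 -inmic yes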



INFO: ps_lattice.c(1380): Bestpath score: -7682
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(:285:334) = -403763
INFO: ps_lattice.c(1441): Joint P(O,S) = -426231 P(S|O) = -22468
INFO: ngram_search.c(874): bestpath 0.01 CPU 0.003 xRT
INFO: ngram_search.c(877): bestpath 0.01 wall 0.002 xRT

Embedded Scripting (Lua, Espruino, Micro Python)


The thirty-second OSHUG meeting will take a look at the use of scripting languages with deeply embedded computing platforms, which have much more constrained resources than the platforms which were originally targeted by the languages.

Programming a microcontroller with Lua

eLua is a full version of the Lua programming language for microcontrollers, running on bare metal. Lua provides a modern high level dynamically typed language, with first class functions, coroutines and an API for interacting with C code, and yet is very small and can run in a memory constrained environment. This talk will cover the Lua language and microcontroller environment, and show it running on off-the-shelf ARM Cortex boards as well as the Mizar32, an open hardware design built especially for eLua.

Justin Cormack is a software developer based in London. He previously worked at a startup that built LED displays and retains a fondness for hardware. He organizes the London Lua User Group, which hosts talks on the Lua programming language.

Bringing JavaScript to Microcontrollers

This talk will discuss the benefits and challenges of running a modern scripting language on microcontrollers with extremely limited resources. In particular we will take a look at the Espruino JavaScript interpreter and how it addresses these challenges and manages to run in less than 8kB of RAM.

Gordon Williams has developed software for companies such as Altera, Nokia, Microsoft and Lloyds Register, but has been working on the Espruino JavaScript interpreter for the last 18 months. In his free time he enjoys making things - from little gadgets to whole cars.

Micro Python — Python for microcontrollers

Microcontrollers have recently become powerful enough to host high-level scripting languages and run meaningful programs written in them. In this talk we will explore the software and hardware of the Micro Python project, an open source implementation of Python 3 which aims to be as compatible as possible with CPython, whilst still fitting within the RAM and ROM constraints of a microcontroller. Many tricks are employed to put as much as possible within ROM, and to use the least RAM and minimal heap allocations as is feasible. The project was successfully funded via a Kickstarter campaign at the end of 2013, and the hardware is currently being manufactured at Jaltek Systems UK.

Damien George is a theoretical physicist who likes to write compilers and build robots in his spare time.

Note: Please aim to arrive by 18:15 as the first talk will start at 18:30 prompt.

Sponsored by:

Privacy and Security (Security protocols in constrained environments, RFIDler, Indie Phone)


The thirty-first OSHUG meeting is dedicated to privacy and security, with talks on implementing security protocols in constrained environments, an SDR RFID reader/writer/emulator, and a new initiative that will use design thinking and open source to create a truly empowering mobile phone.

Security protocols in constrained environments

Implementation of security protocols such as TLS, SSH or IPsec come with a memory and compute overhead. Whilst this has become negligible in full scale environments it's still a real issue for hobbyist and embedded developers. This presentation will look at the sources of the overheads, what can be done to minimise them, and what sort of hardware platforms can be made to absorb them. The benefits and potential pitfalls of hardware specific implementations will also be examined.

Chris Swan is CTO at CohesiveFT where he helps build secure cloud based networks. He's previously been a security guy at large Swiss banks, and before that was a Weapon Engineering Officer in the Royal Navy. Chris has tinkered with electronics since pre-school, and these days has a desk littered with various dev boards and projects.

RFIDler: A Software Defined RFID Reader/Writer/Emulator

Software Defined Radio has been quietly revolutionising the world of RF. However, the same revolution has not yet taken place in RFID. The proliferation of RFID/NFC devices means that you are likely to interact with one such device or another on a daily basis. Whether it’s your car key, door entry card, transport card, contactless credit card, passport, etc., you almost certainly have one in your pocket right now!

RFIDler is a new project, created by Aperture Labs, designed to bring the world of Software Defined Radio into the RFID spectrum. We have created a small, open source, cheap to build platform that allows any suitably powerful microprocessor access to the raw data created by the over-the-air conversation between tag and reader coil. The device can also act as a standalone ‘hacking’ platform for RFID manipulation/examination. The rest is up to you!

Adam “Major Malfunction” Laurie is a security consultant working in the field of electronic communications, and a Director of Aperture Labs Ltd., who specialise in reverse engineering of secure systems. He started in the computer industry in the late Seventies, and quickly became interested in the underlying network and data protocols.

During this period, he successfully disproved the industry lie that music CDs could not be read by computers, and wrote the world’s first CD ripper, ‘CDGRAB’. He was also involved in various early open source projects, including ‘Apache-SSL’ which went on to become the de-facto standard secure web server. Since the late Nineties he has focused his attention on security, and has been the author of various papers exposing flaws in Internet services and/or software, as well as pioneering the concept of re-using military data centres (housed in underground nuclear bunkers) as secure hosting facilities.

Andy Ritchie has been working in the computer and technology industry for over 20 years for major industry players such as ICL, Informix, British Airways and Motorola. Founding his first company, Point 4 Consulting, at the age of 25, he built it into a multi-million pound technology design consultancy. Point 4 provided critical back end technology and management for major web sites such as The Electronic Telegraph, MTV, United Airlines, Interflora, Credit Suisse, BT, Littlewoods and Sony. Following Point 4 he went on to found Ablaise, a company that manages the considerable intellectual property generated by Point 4, and Aperture Labs. In his spare time he manages the world’s largest and longest running security conference, Defcon. Andy’s research focuses on access control systems, biometric devices and embedded systems security, and he has spoken and trained at information security conferences in Europe and the US, publicly and for private and governmental audiences. He is responsible for identifying major vulnerabilities in various access control and biometric systems, and has a passion for creating devices that emulate access control tokens, whether electronic, physical or biometric. Andy has been responsible both directly and indirectly for changing access control guidelines for several western governments. Andy is currently a director of Aperture Labs Ltd, a company that specialises in reverse engineering and security evaluations of embedded systems.

Indie: a tale of privacy, civil liberties, and a phone

Can a phone really help protect our civil liberties? Aral Balkan thinks so. And he’s embarked on an audacious journey to make one. Join us to hear the introduction of a two-year story that is only just beginning.

Aral Balkan is the founder and designer of Indie Phone, a phone that empowers mere mortals to own their own data.

Note: Please aim to arrive by 18:15 as the first talk will start at 18:30 prompt.

Sponsored by:

Extended list of 8-bit AVR Micro-Controllers, easily programmable with the Arduino IDE

via Wolf Paulus » Embedded

A couple of days back, I wrote about ‘The $3 Arduino‘, how to leave the Arduino board behind and program an ATmega168 microcontroller directly, still using the Arduino IDE but with the AVRISP mkII programmer. Of course, the ATmega168 isn’t the only MC available for something like that. In fact, I have quite a few 8-bit AVR microcontrollers in a small box right here, next to my desk.
Let’s minimize the ‘Minimal Arduino’ even more, for instance by using the tiny ATtiny85 microcontroller. Just like we did with the ‘BareBones’ definition, we add board definitions for the microcontrollers that the Arduino IDE doesn’t support out-of-the-box. Board definitions for the missing MCs can be found here, and after moving the attiny folder into the ~/Documents/Arduino/hardware folder and restarting the Arduino IDE, the IDE should now know about the new MCs. More details about this can be read here.

Minimizing the Minimal Arduino

Now that the Arduino IDE knows about the really tiny ATtiny85, we can set its internal clock to 8MHz and flash a small program.

To flash the chip, we use the SPI (MOSI/MISO/SCK) Pins like shown here:

  1. RESET -> ATtiny85-Pin 1
  2. GND -> ATtiny85-Pin 4
  3. MOSI -> ATtiny85-Pin 5
  4. MISO -> ATtiny85-Pin 6
  5. SCK -> ATtiny85-Pin 7
  6. +5V -> ATtiny85-Pin 8

Switching the Internal Clock to 8MHz

Using the Fuse Calculator we can find the proper ATtiny85 fuse settings, to use the internal RC oscillator and set it to 8MHz.
The avrdude arguments look something like this: -U lfuse:w:0xe2:m -U hfuse:w:0xdf:m -U efuse:w:0xff:m
Avrdude is one of the tools that the Arduino IDE deploys on your computer. You can either execute Avrdude with those arguments directly, like so:

avrdude -p t85 -b 115200 -P usb -c avrispmkII -V -e -U lfuse:w:0xe2:m -U hfuse:w:0xdf:m -U efuse:w:0xff:m

or just execute the ‘Burn Bootloader’ command in the Arduino IDE’s Tools menu.
While this will NOT burn a bootloader on the ATtiny85 chip, it will set the fuses appropriately. Either way, this step needs to be performed only once.
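Before touching the fuses, it is worth checking that the programmer actually sees the chip: invoking avrdude without any -U operations (same programmer arguments as above) merely connects and reads the device signature:

avrdude -p t85 -P usb -c avrispmkII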

With the microcontroller still connected to the AT-AVR-ISP2 programmer, a simple program can be quickly uploaded:

int p = 3;                 // LED connected to digital pin 3
void setup() {
  pinMode(p, OUTPUT);      // sets the digital pin as output
}

void loop() {
  digitalWrite(p, HIGH);   // sets the LED on
  delay(100);              // .. for 10th of a sec
  digitalWrite(p, LOW);    // sets the LED off again
  delay(1000);             // waits for a second
  digitalWrite(p, HIGH);   // sets the LED on
  delay(500);              // .. for 1/2 a sec
  digitalWrite(p, LOW);    // sets the LED off again
  delay(500);              // .. for 1/2 a second
}

ATtiny2313 ($2.00)

The high-performance, low-power Atmel 8-bit AVR RISC-based microcontroller combines 2KB ISP flash memory, 128B ISP EEPROM, 128B internal SRAM, universal serial interface (USI), full duplex UART, and debugWIRE for on-chip debugging. The device supports a throughput of 20 MIPS at 20 MHz and operates between 2.7-5.5 volts.
By executing powerful instructions in a single clock cycle, the device achieves throughputs approaching 1 MIPS per MHz, balancing power consumption and processing speed.

ATtiny84 ($3.00)

The high-performance, low-power Atmel 8-bit AVR RISC-based microcontroller combines 8KB ISP flash memory, 512B EEPROM, 512B SRAM, 12 general purpose I/O lines, 32 general purpose working registers, 2 timers/counters (8-bit/16-bit) with two PWM channels each, internal and external interrupts, an 8-channel 10-bit A/D converter, programmable gain stage (1x, 20x) for 12 differential ADC channel pairs, programmable watchdog timer with internal oscillator, internal calibrated oscillator, and four software selectable power saving modes.

ATtiny85 ($1.00)

The high-performance, low-power Atmel 8-bit AVR RISC-based microcontroller combines 8KB ISP flash memory, 512B EEPROM, 512-Byte SRAM, 6 general purpose I/O lines, 32 general purpose working registers, one 8-bit timer/counter with compare modes, one 8-bit high speed timer/counter, USI, internal and external Interrupts, 4-channel 10-bit A/D converter, programmable watchdog timer with internal oscillator, three software selectable power saving modes, and debugWIRE for on-chip debugging. The device achieves a throughput of 20 MIPS at 20 MHz and operates between 2.7-5.5 volts.

ATmega8 ($2.00)

The low-power Atmel 8-bit AVR RISC-based microcontroller combines 8KB of programmable flash memory, 1KB of SRAM, 512B EEPROM, and a 6 or 8 channel 10-bit A/D converter. The device supports throughput of 16 MIPS at 16 MHz and operates between 2.7-5.5 volts.

ATmega168 ($4.00)

The high-performance, low-power Atmel 8-bit AVR RISC-based microcontroller combines 16KB ISP flash memory, 1KB SRAM, 512B EEPROM, an 8-channel/10-bit A/D converter (TQFP and QFN/MLF), and debugWIRE for on-chip debugging. The device supports a throughput of 20 MIPS at 20 MHz and operates between 2.7-5.5 volts.

ATmega328 ($5.00)

The high-performance Atmel 8-bit AVR RISC-based microcontroller combines 32KB ISP flash memory with read-while-write capabilities, 1KB EEPROM, 2KB SRAM, 23 general purpose I/O lines, 32 general purpose working registers, three flexible timer/counters with compare modes, internal and external interrupts, serial programmable USART, a byte-oriented 2-wire serial interface, SPI serial port, 6-channel 10-bit A/D converter (8-channels in TQFP and QFN/MLF packages), programmable watchdog timer with internal oscillator, and five software selectable power saving modes. The device operates between 1.8-5.5 volts.

MC          Flash (KB)   SRAM (Bytes)   EEPROM (Bytes)   SPI   I2C   UART   ADC Chnl (10bit)   PWM Chnl   Timer   RTC
ATtiny2313       2            128             128         2     1     1            -               4        2     No
ATtiny84         8            512             512         1     1     0            8               4        2     No
ATtiny85         8            512             512         1     1     0            4               6        2     No
ATmega8          8           1024             512         1     1     1            8               3        3     Yes
ATmega168       16           1024             512         2     1     1            8               6        3     Yes
ATmega328       32           2048            1024         2     1     1            8               6        3     Yes

The $3 Arduino

via Wolf Paulus » Embedded

Buying and using an official Arduino Board like the standard Arduino Uno is the perfect way to get started with the Arduino language, common electronic components, or your own embedded prototyping project. However, once you have mastered the initial challenges and have built some projects, the Arduino Board can get in the way.

For instance, many enthusiasts who started out hacking and extending the Arduino hardware and software have now moved on to the Raspberry Pi, which is equally or even less expensive, but more powerful and complete. The Raspberry Pi comes with Ethernet, USB, HDMI, analog audio out, a lot of memory, and much more; adding all these capabilities to the Arduino with Arduino shields would probably cost several hundred dollars. However, the Raspberry lacks some of Arduino’s I/O capabilities, like analog inputs.
Combining an Arduino with a Raspberry Pi may make sense for a lot of projects; but we don’t need or want to integrate an Arduino board – the Arduino’s core, the ATMEGA microcontroller, is all that’s needed.

We still want to program the Arduino with the familiar Arduino IDE, but the boot-loader doesn’t really help, once the ATMEGA Micro is programmed and is ready to communicate with the Raspi or any other platform.

This is probably the most minimal ATmega168-20PU based Arduino you can come up with. The ATmega168 (available for about $3) was the default Arduino chip for quite some time, before being replaced by the ATmega328, which doubled the available memory. The chip is powered with +5V on Pin-7 and grounded via Pin-8; the LED is between Pins 19 and 22.

Here you see it processing this rather simple blinky program:

int p = 13;                // LED connected to digital pin 13
void setup() {
  pinMode(p, OUTPUT);      // sets the digital pin as output
}

void loop() {
  digitalWrite(p, HIGH);   // sets the LED on
  delay(100);              // .. for 10th of a sec
  digitalWrite(p, LOW);    // sets the LED off again
  delay(1000);             // waits for a second
  digitalWrite(p, HIGH);   // sets the LED on
  delay(500);              // .. for 1/2 a sec
  digitalWrite(p, LOW);    // sets the LED off again
  delay(500);              // .. for 1/2 a second
}

Since we don’t want any bootloader (waiting for a serial upload of new software) on this chip but rather have it immediately start executing the program, we need to tell the Arduino IDE about our bare-bones board and the fact that it doesn’t have a boot loader.

Bare-Bones ATmega168 Board Definition

To add a new hardware definition to the Arduino IDE, create a hardware/BareBones folder inside your Arduino folder (the place where all your sketches are stored). Create a boards.txt file with the following content.
On the Mac for instance, I ended up with this:

minimal168.name=ATmega168 bare bone (internal 8 MHz clock)
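The name line above is only the label shown in the IDE’s Boards menu; a complete entry also needs MCU, clock, and fuse settings. A sketch of the remaining lines, using the Arduino 1.0-era boards.txt keys – the fuse values are the ones used below, the rest are educated guesses to adjust as needed:

minimal168.upload.maximum_size=16384
minimal168.upload.speed=115200
minimal168.bootloader.low_fuses=0xE2
minimal168.bootloader.high_fuses=0xDF
minimal168.bootloader.extended_fuses=0x00
minimal168.build.mcu=atmega168
minimal168.build.f_cpu=8000000L
minimal168.build.core=arduino
minimal168.build.variant=standard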

Wiring up the ATmega-168

To flash the chip, we use the SPI (MOSI/MISO/SCK) Pins like shown here:

Connections from an AT-AVR-ISP2 to the ATmega-168 are as follows:

  1. GND -> ATmega168-Pin 8
  2. +5V -> ATmega168-Pin 7
  3. MISO -> ATmega168-Pin 18
  4. SCK -> ATmega168-Pin 19
  5. RESET -> ATmega168-Pin 1
  6. MOSI -> ATmega168-Pin 17

Burning without a Boot-Loader

Instead of an Arduino Board, I connected one of those shiny blue AT-AVR-ISP2 programmers from ATMEL (available for about $30) to the Mac. In the Arduino IDE, in the Tools menu, under Boards, I selected ‘ATmega168 bare bone (internal 8 MHz clock)’ and under Programmers, ‘AVRISP mkII’.
Hitting ‘Upload’ will now use the AT-AVR-ISP2 to reset the chip, flash the program, and verify the new memory content. All in all, it takes about 75 seconds.
Once the chip is now powered, it will immediately start executing the program.

Switching the Internal Clock to 8MHz

Using the Fuse Calculator we can find the proper ATmega168 fuse settings, to use the internal RC oscillator and set it to 8MHz.
The avrdude arguments look something like this: -U lfuse:w:0xe2:m -U hfuse:w:0xdf:m -U efuse:w:0x00:m.
Avrdude is one of the tools that the Arduino IDE deploys on your computer. You can either execute Avrdude with those arguments directly, like so:

avrdude -p m168 -b 115200 -P usb -c avrispmkII -V -e -U lfuse:w:0xe2:m -U hfuse:w:0xdf:m -U efuse:w:0x00:m

or just execute the ‘Burn Bootloader’ command in the Arduino IDE’s Tools menu.
While this will NOT burn a bootloader on the ATmega168 chip, it will set the fuses appropriately. Either way, this step needs to be performed only once.


Let’s minimize the ‘Minimal Arduino’ even more, for instance by using the tiny ATtiny85 Microcontroller.
Read more: http://wolfpaulus.com/journal/embedded/avrmc

Embedded (Erlang, Parallella, Compiler Options and Energy Consumption)


Embedded systems continue to grow in importance as they play an ever-increasing role in everyday life: more computing is done on the move as smartphone functionality catches up with desktops and services move to the Cloud; the Internet of Things is set to herald an age in which networked objects create and consume data on our behalf. These, and many other applications, are driving an insatiable demand for more powerful and energy-efficient embedded solutions.

At the twenty-second OSHUG meeting we will hear how Erlang can be used to bring concurrency to multi-core embedded systems, we will learn about the Parallella project which aims to make parallel computing accessible to everyone, and we will hear about vital research into optimising compiler options for energy-efficiency.

Erlang Embedded — Concurrent Blinkenlights and More!

Managing the resources and utilising the increasingly popular multi-core and heterogeneous aspects of modern embedded systems require new sets of tools and methodologies that differ from the traditional C/C++ flow.

Erlang provides features that are highly relevant to solve these issues and yet it is pretty much unknown in the embedded domain — which is surprising considering that it was originally designed for embedded applications at Ericsson!

This talk aims to provide an overview of Erlang and the current state of its usage in the embedded domain and talk about our plans to help speed up the adoption rate of Erlang in embedded projects.

Omer Kilic works on Erlang Embedded, a Knowledge Transfer Partnership project in collaboration with University of Kent. The aim of this project is to bring the benefits of concurrent systems development using Erlang to the field of embedded systems; through investigation, analysis, software development and evaluation.

Prior to joining Erlang Solutions, Omer was a research student in the Embedded Systems Lab at the University of Kent, working on a reconfigurable heterogeneous computing framework as part of his PhD thesis (which he intends to submit soon!)

Omer likes tiny computers, things that 'just work' and real beer.

Parallella — Supercomputing for Everyone

The Parallella computing platform is based on the Adapteva Epiphany processor. Implemented in 65nm or 28nm silicon, Epiphany offers 16 or 64 cores and delivers up to 50 GFLOPS/watt, and the entire Parallella board complete with a dual-core ARM A9 host will consume around 5 watts.

This talk will present the Epiphany architecture and explore the challenges of developing an effective GNU tool chain, and discuss the use of open source, and an approach to engineering that developed one of the fastest chips in the world from concept to second generation silicon for just a few million dollars.

Dr Jeremy Bennett is the founder of Embecosm, and an expert on hardware modelling and embedded software development. Prior to founding Embecosm, Dr Bennett was Vice President of ARC International PLC and previously Vice President of Marconi PLC.

In his earlier academic career, he pursued academic research in computer architecture, modelling and compiler technology at Bath and Cambridge Universities. He is the author of the popular textbook "Introduction to Compiling Techniques" (McGraw-Hill 1990, 1995, 2003).

Dr Bennett holds an MA and PhD in Computer Science from Cambridge University. He is a Member of the British Computer Society, a Chartered Engineer, a Chartered Information Technology Professional and Fellow of the Royal Society of Arts.

Measuring the impact of compiler options on energy consumption in embedded platforms

Energy efficiency is the highest priority for modern software-hardware co-design. The potential for compiler options to impact on power consumption of running programs has often been discussed. However there has never been a comprehensive analysis of the magnitude of that impact, or how it varies between processor architectures and compilers.

This presentation will describe a project undertaken during the Summer of 2012 at the University of Bristol Department of Computer Science, funded by Embecosm, to explore the effect of compiler options on energy consumption of compiled programs.

The talk will discuss the accurate measurement of power consumption on a range of small embedded systems. The whole setup was under control of an XMOS board, making it possible to run the tens of thousands of tests needed for statistical robustness in just a few weeks. The results of these tests will be discussed, along with the implications for compiling embedded systems and the potential for future research in this area.

This research was unusual, in that it was funded as a completely open project. A wiki detailed progress from week to week, the relevant open source communities were kept regularly informed, and the results will be published in open access journals. The talk will also cover the issues around funding and running an academic research project in this manner.

James Pallister is a graduate of the University of Bristol, where he achieved joint First Class Honours in Computer Science and Electronics. During the summer of 2012, he led Embecosm's research program into the impact of compilers on energy consumption in embedded systems, which was a development of James' work at the University of Bristol with the XMOS multi-core processor.

Mr Pallister returned to Bristol in October 2012, where he is studying for a PhD in low-power multi-core system design. He remains a Technical Advisor to Embecosm.

Simon Hollis is a lecturer in the Microelectronics Research Group, Department of Computer Science, University of Bristol. His interests lie in the creation of energy-efficient embedded systems, processor interconnects and parallel languages and run-times.

He is the creator of the RasP and Skip-link Networks-on-Chip, and is currently working on the Swallow many-core system, which targets 480 processing cores in under 200W. A main aim of the research is to re-investigate the memory/communication balance in large scale computing systems.

Note: Please aim to arrive for 18:45 or shortly after as the event will start at 19:00 prompt.

Streaming Your Webcam w/ Raspberry Pi

via Wolf Paulus » Embedded

[Last updated on Feb. 2. 2013 for (2012-12-16-wheezy-raspbian) Kernel Version 3.2.27+]

Three years ago, we bought two small Webcams and since we wanted to use them on Linux and OS X, we went with the UVC and Mac compatible Creative LIVE! CAM Video IM Ultra. This Webcam (Model VF0415) has a high-resolution sensor that lets you take 5.0-megapixel pictures and record videos at up to 1.3-megapixel; supported resolutions include 640×480, 1280×720, and 1280×960. If you like, you can go back and read what I was thinking about the IM Ultra, back in 2009. Today, it’s not much used anymore, but may just be the right accessory for a Raspberry Pi.

With the USB Camera attached to the Raspi, lsusb returns something like this:


Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 002: ID 0424:9512 Standard Microsystems Corp.
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp.
Bus 001 Device 004: ID 7392:7811 Edimax Technology Co., Ltd EW-7811Un 802.11n Wireless Adapter [Realtek RTL8188CUS]
Bus 001 Device 005: ID 041e:4071 Creative Technology, Ltd

Using the current Raspbian “wheezy” distribution (Kernel 3.2.27+), one can find the following related packages, ready for deployment:

  • luvcview, a camera viewer for UVC based webcams, which includes an mjpeg decoder and is able to save the video stream as an AVI file.
  • uvccapture, which can capture an image (JPEG) from a USB webcam at a specified interval

While these might be great tools, mjpg-streamer looks like a more complete, one-stop-shop kind of solution.

Get the mjpg-streamer source code

Either install Subversion (svn) on the Raspberry Pi or use svn installed on your Mac or PC, to get the source-code before using Secure Copy (scp) to copy it over to your Raspi.

Here, I’m using svn, which is already installed on the Mac, before copying the files over to my Raspi (username pi, hostname is phobos):

cd ~
mkdir tmp
cd tmp
svn co https://mjpg-streamer.svn.sourceforge.net/svnroot/mjpg-streamer mjpg-streamer
scp -r ./mjpg-streamer pi@phobos:mjpg-streamer

Please note: it looks like the repo was recently moved. Try this to check out the code, if the previous step does not work:

svn co https://svn.code.sf.net/p/mjpg-streamer/code/mjpg-streamer/ mjpg-streamer

Over on the Raspi, I tried to make the project, but quickly ran into error messages, hinting at a missing library.

ssh pi@phobos
cd mjpg-streamer/mjpg-streamer
jpeg_utils.c:27:21: fatal error: jpeglib.h: No such file or directory, compilation terminated.
make[1]: *** [jpeg_utils.lo] Error 1

After finding out which libraries were available (apt-cache search libjpeg), I installed libjpeg8-dev like so: sudo apt-get install libjpeg8-dev. This time, I got a lot further, before hitting the next build error:

make[1]: *** [pictures/640x480_1.jpg] Error 127
make[1]: Leaving directory `/home/pi/mjpg-streamer/mjpg-streamer/plugins/input_testpicture'

After some google-ing, which resulted in installing ImageMagick like so: sudo apt-get install imagemagick, the next build attempt looked much more promising:


and ls -lt shows the newly built files on top:

-rwxr-xr-x 1 pi pi 13909 Sep 8 07:51 input_file.so
-rwxr-xr-x 1 pi pi 168454 Sep 8 07:51 input_testpicture.so
-rwxr-xr-x 1 pi pi 31840 Sep 8 07:50 output_http.so
-rwxr-xr-x 1 pi pi 14196 Sep 8 07:50 output_udp.so
-rwxr-xr-x 1 pi pi 19747 Sep 8 07:50 output_file.so
-rwxr-xr-x 1 pi pi 29729 Sep 8 07:50 input_uvc.so
-rwxr-xr-x 1 pi pi 15287 Sep 8 07:50 mjpg_streamer
-rw-r--r-- 1 pi pi 1764 Sep 8 07:50 utils.o
-rw-r--r-- 1 pi pi 9904 Sep 8 07:50 mjpg_streamer.o


MJPG-streamer is a command line tool to stream JPEG files over an IP-based network. It relies on input and output plugins: an input plugin copies JPEG images to a globally accessible memory location, while an output plugin, like output_http.so, processes the images, e.g. serving a single JPEG file (provided by the input plugin) or streaming them according to existing M-JPEG standards.

Therefore, the important files that were built in the previous step are:

  • mjpg_streamer – command line tool that copies JPGs from a single input plugin to one or more output plugins.
  • input_uvc.so – captures JPG frames from a connected webcam (streams up to 960×720 pixel images from your webcam at a high frame rate (>= 15 fps) with little CPU load).
  • output_http.so – HTTP 1.0 webserver, serves a single JPEG file of the input plugin, or streams them according to M-JPEG standard.

Starting the Webcam Server

A simple launch command would look like this:
./mjpg_streamer -i "./input_uvc.so" -o "./output_http.so -w ./www"

MJPG Streamer Version: svn rev:
i: Using V4L2 device.: /dev/video0
i: Desired Resolution: 640 x 480
i: Frames Per Second.: 5
i: Format............: MJPEG
o: HTTP TCP port.....: 8080
o: username:password.: disabled
o: commands..........: enabled

Open a web browser on another computer on the LAN and open this URL: http://{name or IP-address of the Raspi}:8080

However, experimenting with the resolution and frame rate parameters is well worth it and can improve the outcome.

UVC Webcam Grabber Parameters

The following parameters can be passed to this plugin:

-d video device to open (your camera)
-r the resolution of the video device,
can be one of the following strings:
or a custom value like: 640×480
-f frames per second
-y enable YUYV format and disable MJPEG mode
-q JPEG compression quality in percent
(activates YUYV format, disables MJPEG)
-m drop frames smaller than this limit, useful
if the webcam produces small-sized garbage frames,
which may happen under low light conditions
-n do not initialize dynctrls of Linux-UVC driver
-l switch the LED “on”, “off”, let it “blink” or leave
it up to the driver using the value “auto”

HTTP Output Parameters

The following parameters can be passed to this plugin:

-w folder that contains webpages in flat hierarchy (no subfolders)
-p TCP port for this HTTP server
-c ask for “username:password” on connect
-n disable execution of commands

I have seen some good results with this
./mjpg_streamer -i "./input_uvc.so -n -f 15 -r 640x480" -o "./output_http.so -n -w ./www"
but even a much higher resolution didn’t impact the actually observed frame-rate all that much:
./mjpg_streamer -i "./input_uvc.so -n -f 15 -r 1280x960" -o "./output_http.so -n -w ./www"

MJPG Streamer Version: svn rev:
i: Using V4L2 device.: /dev/video0
i: Desired Resolution: 1280 x 960
i: Frames Per Second.: 15
i: Format............: MJPEG
o: www-folder-path...: ./www/
o: HTTP TCP port.....: 8080
o: username:password.: disabled
o: commands..........: disabled

Webcam Stream Clients

The included website (http://{name or IP-address of the Raspi}:8080) shows examples for how to connect a client to the webcam stream. The easiest way is obviously a simple HTML page, which works great with Google Chrome and Firefox, but not so much with Safari. Anyway, it’s important to specify in the HTML the same width and height that were configured with output_http.so:

  <img alt="" src="http://phobos:8080/?action=stream" width="1280" height="960" />

Raspberry Pi Webcam Streamer

Taking the Raspberry Pi Web Stream Server Outside

This is the Raspberry Pi powered by a 5VDC, 700mA battery, with an (Edimax EW-7811Un) USB-WiFi Adapter and the Creative LIVE! CAM Video IM Ultra connected.

Video Lan Client for Viewing and Recording

Using Video Lan Client, you can view and also record the video stream, served by the Raspi.
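For instance, the same stream URL that the included website uses can be opened directly in VLC (hostname as configured earlier; the action parameter is part of mjpg-streamer’s HTTP interface):

vlc http://phobos:8080/?action=stream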

Recorded Webcam Streamer

Movie, streamed from a Raspberry Pi

Raspberry Pi Webcam from Tech Casita Productions on Vimeo.

Let me know what Webcam software you found that works well on the Raspberry Pi.

Raspberry Pi, Battle of the Enclosures

via Wolf Paulus » Embedded

Overturned Basket with Raspberries and White Currants, 1882
By Eloise Harriet Stannard (1829 – 1915)

Eventually, you will start looking for an enclosure for the Raspberry Pi. Even during the early hardware development phase of your project, you can put the Raspberry Pi into an enclosure, given that the mainboard doesn’t have any on/off switches and that a Cobbler Breakout Kit provides easy access to the Raspi’s GPIO Pins (on a neighboring solderless breadboard).
Unlike for many other popular embedded development platforms, there are already many enclosures for the Raspberry Pi to choose from, many of which are listed over at elinux.org.

We have bought two Adafruit Pi Boxes and two Raspberry Pi Cases from Barch Designs.

Adafruit Pi Box

  • Crystal-clear Acrylic (6 pieces)
  • Engraved Labels on all Connector Slots
  • Long Slot to connect a 26-pin IDC cable (e.g. Cobbler Breakout Kit)
  • No additional vents or cooling required
  • $14.95


The case has no screws or standoffs and the little feet have to be squeezed to make the pieces snap together.
Very elegant design; however, after a few days of use (probably accelerated by the Raspberry Pi’s heat emission), the acrylic became extremely brittle and started to show cracks around the cutouts. One of the feet broke off while we were trying to open the enclosure, rendering the case useless (all feet are needed to snap the enclosure parts together again).
Despite operating extremely carefully, the same happened to the second case only a few days later. Kudos to Adafruit though. Once we mentioned our experience with the enclosure, a refund was issued.

While this could have been a temporary issue related to the acrylic used for our cases, we would not recommend the enclosure for longer use, or if you need to open and close the enclosure often – or even rarely.

Raspberry Pi Case by Barch Designs

  • CNC Machined from Billet Aluminum
  • Customizable Engraving
  • Long Slot to connect a 26-pin IDC cable (e.g. Cobbler Breakout Kit)
  • Acts as a Heat Sink
  • LED Fiber Optics
  • $69.99 (incl. shipping)


This enclosure is precisely milled from solid 6061-T6 aircraft grade billet aluminum in the USA. Fiber optic cables have been manufactured into the case, each positioned directly above an LED.
The whole enclosure acts as a heat sink; even a small package of thermal paste is included.
While the price is about four times that of the acrylic enclosure, if you need an enclosure that lasts, you may want to consider this one. It is the Mercedes among the Pi cases, but money well spent.

Raspberry Pi / Case by Barch Designs / with EW-7811Un USB Wireless Adapter

Accessing Raspberry Pi via Serial

via Wolf Paulus » Embedded

Using a serial connection to connect to a Raspberry Pi has many advantages. The boot process (kernel boot messages go to the UART at 115,200 bit/s) can be monitored without the need to hook up an HDMI monitor. Once booted, you can of course log in through a serial terminal as well, i.e. the serial connection allows logging in from a remote computer without running an SSH daemon on the Raspi.

UART TXD and RXD pins are easily accessible (GPIO 14 and 15), however, like for all GPIO pins, the voltage levels are 3.3 V and are not 5 V tolerant!

Since most desktop and laptop computers don’t come equipped with a serial port anymore, accessing the Raspberry Pi via a serial connection requires some prerequisites. I have recently connected to the Raspberry Pi using three different hardware setups ..

1. USB to Serial Adapter

There are many USB-to-Serial adapters available and while not all of them are capable of handling the highest data transfer speeds, the Keyspan HS-19 (OS X drivers are available here) is certainly one of the best.
However, adapters based on the Prolific 2303 chip, like the Zonet ZUC3100, seem to be a little less expensive, well-supported in Linux, and much more widespread. Drivers for 2303 based devices can be found here, if required, use GUEST as user name and password to gain access.
E.g. I’m currently using the Mac OS X Universal Binary Driver v1.4.0 on OS X 10.8.1 without any issues.

1.1. Level Shifter

Very few of those USB-to-Serial adapters have the standard RS-232 +/- 12V voltage levels on the serial ports (the Zonet ZUC3100 w/ the pl2303 chip however does!) and using a level shifter is certainly a very good idea. Since the Raspi wants no more than 3.3V, TTL-RS-232 level converters based on the Maxim MAX3232 are your best choice.

This photo shows the blue Zonet ZUC3100 Usb-to-Serial adapter, with a Maxim MAX3232 based level shifter. Since the level shifter needs to be powered, the Raspi’s 3.3V pin (red) and Ground (black) are connected to the Level-Shifter. Yellow and Orange are used for the Transmit and Receive lines.

On OS X, a simple Terminal with the screen /dev/tty.usbserial 115200 command issued is all what is needed to connect to the Raspberry Pi. A more dedicated application like CoolTerm may become handy as well.

2. FTDI Basic 3.3V – USB to Serial.

I have a basic breakout board for the FTDI FT232RL USB to serial IC, which is mainly used to program those Arduino boards that don’t have a USB connector. It can of course also be used for general serial applications. The big benefit here is that the FTDI Basic 3.3V already provides the 3.3V levels that the Raspi requires. The Virtual COM Port drivers (VCP drivers) for the host computer are available here.
Since the FTDI Basic doesn’t need to be powered separately, only the TXD and RXD pins need to be connected.

This photo shows the FTDI Basic 3.3V Usb-to-Serial adapter, with only two (TXD and RXD) pins connected to the Raspberry Pi. Again, the FTDI Basic is powered through the USB connection coming from your host PC or Laptop. Still, the Raspberry Pi needs to be powered through its micro-usb port.

3. USB-to-Serial Cable with 3.3V I/O

If you look hard and long enough, you will find a USB-to-Serial cable with 6 female header wires and 3.3V I/O, like this one over at the Micro Controller Shop. Adafruit has one as well.

Cables like these are the easiest way ever to connect to the Raspberry Pi’s serial console port, since they can also power the Raspi.

The USB-to-Serial cable (uses an FTDI FT232RQ) is a USB-to-Serial (3.3V TTL level) converter cable which allows for a simple way to connect 3.3V TTL interface devices to USB.

The 3.3V TTL signals are color coded as follows:

  • Black – GND
  • Brown – CTS
  • Red – +5V DC supply from USB
  • Orange – TXD
  • Yellow – RXD
  • Green – RTS

This photo shows the Micro Controller Shop’s FTDI based 3.3V Usb-to-Serial adapter cable, powering the Raspberry Pi, as well as connecting to its TXD and RXD pins.

Tiny WiFi Adapter for Raspberry Pi

via Wolf Paulus » Embedded

[Updated on Feb. 2. 2013 for (2012-12-16-wheezy-raspbian) Kernel Version 3.2.27+]

The extremely small EW-7811Un USB wireless adapter looks like the perfect WiFi adapter for the Raspberry Pi. Not only is it tiny and relatively inexpensive, it also seems capable enough to be a great companion device for the Raspi. While elinux still shows that some users report timeouts trying to initialize the module, I cannot verify this with 2012-12-16-wheezy-raspbian.

WiFi is not really necessary for the Raspberry Pi. It already comes with an ethernet port, provides RS-232 (aka serial) connectivity, and has two USB ports. However, in case you want to add WiFi to the Raspi, this little adapter seems to be as good as any. Here is why:

The Edimax EW-7811Un

  • complies with the wireless IEEE 802.11b/g/n standards
  • adjusts its transmission output based on distance and CPU offload, to reduce power consumption when wireless is idle
  • is currently the smallest wireless adapter available
  • currently costs between US$ 9 and US$ 15

More than enough reasons to cut the cord and add WiFi connectivity to the Raspberry Pi.

After performing the usual initial configuration in raspi-config, using WiFi Config (a GUI tool sitting on the desktop when starting LXDE with startx) is by far the easiest way to get the Edimax EW-7811Un configured.

But let’s quickly run through the steps of creating that bootable SDCard before dealing with the actual WiFi issues:

Creating that bootable SDCard

  1. Download the image file from http://www.raspberrypi.org/downloads
  2. Unzip the file to get to the image file.
  3. df -h to determine which drive is used for the SDCard; e.g., the integrated SDCard reader turned out to be disk2 for me.
  4. sudo diskutil unmount /dev/disk2s1
  5. sudo dd bs=1m if=/Users/wolf/Downloads/2012-12-16-wheezy-raspbian.img of=/dev/rdisk2
  6. sync
  7. sudo diskutil eject /dev/rdisk2
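Note that dd prints no progress while it runs; on OS X, pressing Ctrl-T in the Terminal makes it report how many bytes it has copied so far.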

On a class 10 SD Card, the whole process shouldn’t take much longer than 70 seconds or so. Insert the SDCard into the Raspi, power it up, let it boot, and use the on-screen menu:

In case you need to do this over a network: the Raspberry Pi’s default hostname is raspberrypi, i.e.
ssh pi@raspberrypi .. the password is raspberry

sudo raspi-config

  • Expand root_fs
  • Change password
  • Change locale to EN_US.UTF-8 UTF-8 (un-select English (UK) and select EN_US.UTF-8 UTF-8 in the long list)
  • Set Time zone (America / Los_Angeles)
  • Change memory split to 128:128
  • Enable ssh

Finally reboot: sudo shutdown -r now
Run raspi-config again to execute the update feature, then reboot and log in.
Now find more updates and upgrades like so:

sudo apt-get update
sudo apt-get upgrade

Changing the PI’s hostname

Edit the host name in these two locations (or use the one-liner sketched below):

  • sudo nano /etc/hostname
  • sudo nano /etc/hosts
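If you prefer a one-liner, something like this sed command should update both files in one go (mynewname is a placeholder; reboot afterwards):

sudo sed -i 's/raspberrypi/mynewname/' /etc/hostname /etc/hosts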

Adding WiFi support / EW-7811Un

With previous wheezy builds, I had to install the Realtek firmware, blacklist the already installed 8192cu driver, and install a new one. Not this time: ifconfig shows the wlan0 interface, and iwlist wlan0 scan can be used to scan for available WiFi access points, without any firmware installation or driver updates.
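For instance, to quickly list the SSIDs in range:

sudo iwlist wlan0 scan | grep ESSID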


All that’s needed to connect the Raspberry Pi to a WiFi network is to add a network configuration to /etc/wpa_supplicant/wpa_supplicant.conf.

sudo nano /etc/wpa_supplicant/wpa_supplicant.conf

The network configuration depends very much on your network: SSID, password, security, etc. However, here is what I have added to make the EW-7811Un connect to my WiFi network:
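The exact block from my setup isn’t reproduced here; as a minimal sketch, a typical WPA2 (PSK) network entry might look like this, with SSID and passphrase being placeholders:

network={
    ssid="MyHomeNetwork"
    psk="my secret passphrase"
    proto=RSN
    key_mgmt=WPA-PSK
}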


With the correct WiFi network configuration added to the wpa_supplicant.conf file, the ethernet cable can be removed and the Raspberry Pi will automatically switch over to WiFi.
This behavior is pre-configured in /etc/network/interfaces, which looks something like this:

auto lo

iface lo inet loopback
iface eth0 inet dhcp

allow-hotplug wlan0
iface wlan0 inet manual
wpa-roam /etc/wpa_supplicant/wpa_supplicant.conf
iface default inet dhcp
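To test the configuration without pulling the ethernet cable right away, the WiFi interface can be cycled manually and then inspected, e.g.:

sudo ifdown wlan0
sudo ifup wlan0
ifconfig wlan0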

Raspberry Pi – WiFi (Edimax EW-7811Un)

Backup the SD Card

Once done with setting up Raspbian, I usually create a backup image that can later be copied onto the same or a different SD Card (of equal size).


Insert the freshly set-up SDCard into the card reader and find out how to address it. Again, for me that is usually disk2s1.

sudo diskutil unmount /dev/disk2s1
sudo dd bs=1m if=/dev/rdisk2 of=~/RASP_3_2_27.img
sudo diskutil eject /dev/rdisk2

Depending on the size of the SDCard, this will create a huge file (like 16GB) and may take a while (like 7 min).
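Since much of the image is typically empty space, piping dd through gzip can shrink the backup considerably; a sketch, using the same disk2 assumption as above:

sudo dd bs=1m if=/dev/rdisk2 | gzip > ~/RASP_3_2_27.img.gz

Restoring then becomes: gunzip -c ~/RASP_3_2_27.img.gz | sudo dd bs=1m of=/dev/rdisk2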

Restore or Copy

Insert an empty SDCard into the card reader and find out how to address it. Once again, for me that is usually disk2s1.

sudo diskutil unmount /dev/disk2s1
sudo dd bs=1m if=~/RASP_3_2_27.img of=/dev/rdisk2
sudo diskutil eject /dev/rdisk2

Raspberry Pi – Where to start?

via Wolf Paulus » Embedded

At its core, the Raspberry Pi uses the Broadcom BCM2835 System-on-a-chip. This single chip contains

  • an ARM1176 CPU (normally clocked at 700MHz)
  • a VideoCore 4 GPU, i.e. a low-power mobile multimedia processor (also used in the Roku-2)
  • 256 MByte SDRAM
  • in addition to the ARM’s MMU, a second coarse-grained Memory Management Unit for mapping ARM physical addresses onto system bus addresses.

The memory needs to be divided into ARM and GPU memory (this happens by including one of the supplied start*.elf files in the boot partition). The minimum amount of memory that can be given to the GPU is 32MB; however, that restricts the multimedia performance, and 32MB does not provide enough buffering for the GPU to do 1080p30 video decoding.

The second, slightly smaller chip on the Raspberry Pi board is an LAN9512, a combined USB 2.0 hub and 10/100 MBit Ethernet controller. The LAN9512 is a low-cost, power-efficient, small-footprint USB-to-Ethernet and multi-port USB connectivity solution in a single package; it contains a Hi-Speed USB 2.0 hub with two fully-integrated downstream USB 2.0 PHYs, an integrated upstream USB 2.0 PHY, a 10/100 Ethernet MAC/PHY controller, and an EEPROM controller.

Single-Chip, Hi-Speed USB 2.0 Hub and High-Performance 10/100 Ethernet Controllers

Boot Process

Besides the hardware board itself, starting with the boot process seems to be as good an idea as any… When the Raspberry Pi powers up, it’s the GPU that is active, looking for bootcode.bin, loader.bin, and start.elf in the root dir of the first partition of the (FAT formatted) SDCard. I.e., booting is hardcoded to happen from the SDCard.
The GPU reads and executes bootcode.bin, which then loads loader.bin, which in turn loads start.elf.
Again in the root dir of the first partition, it looks for config.txt, which contains information like the ARM speed (defaults to 700MHz), the address from where to load kernel.img, etc.
Now kernel.img (the ARM boot binary) is copied to memory, and the ARM11 is reset so that it runs from the address where kernel.img was loaded (default kernel_address 0x8000).
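Putting the pieces together, a minimal config.txt along these lines might contain no more than the following (all values are assumptions based on the defaults mentioned above):

arm_freq=700
kernel=kernel.img
kernel_address=0x8000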

Memory Split

The memory needs to be divided into ARM and GPU memory, and currently there are three start.elf files to choose from (see below for details).

  • arm128_start.elf: 1:1, 128MBytes for the ARM11 and 128MBytes for the GPU
  • arm192_start.elf: 3:1, 192MBytes for the ARM11 and 64MBytes for the GPU
  • arm224_start.elf: 7:1, 224MBytes for the ARM11 and 32MBytes for the GPU

Broadcom states in their BCM2835 documentation that 32MBytes might not be enough memory for the GPU, so until you reach the point where 128MBytes aren’t quite enough for the ARM, you may want to go with the 1:1 split.
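Selecting a split is then just a matter of copying the matching file over start.elf on the SDCard’s boot partition, e.g. for the 3:1 split:

cp arm192_start.elf start.elf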

Minimal Boot Image and Blinky Program

Let’s put the boot process assumptions made above to the test.

  • Prepare an SDCard (a 1 GByte Class-2 card works just fine) by formatting it with the MS-DOS (FAT) file system.
  • Download a Raspberry Pi distribution (currently wheezy-raspbian vers. 2012-07-15), uncompress the zip file, and open the resulting image file 2012-07-15-wheezy-raspbian.img, for instance with DiskImageMounter, if you are using Mac OS X.
  • Copy bootcode.bin from the wheezy-raspbian.img to the root directory of the SDCard.
  • Copy loader.bin from the wheezy-raspbian.img to the root directory of the SDCard.
  • Copy arm128_start.elf from the wheezy-raspbian.img to the root directory of the SDCard and rename it to start.elf.
  • Copy config.txt from the wheezy-raspbian.img to the root directory of the SDCard.
  • Add the following two lines to your config.txt:
    kernel=blinky.bin
    kernel_address=0x8000
  • Uncompress and copy blinky.bin to the root directory of the SDCard.

Now insert the SDCard into your Raspberry Pi and power it up. If all goes well, you should see the Raspberry Pi’s OK LED blink.
The five files, which total just over 2MBytes, are probably the closest and smallest you can get to a Hello_World-style program for the Raspberry Pi.

Stay tuned for how to create your own Raspberry Pi Tool Chain and how to make blinky.

Tomba, the Home Explorer

via Wolf Paulus » Embedded

Tom’s LEGO NXT 2.0 based Home Explorer uses ultrasound to detect and bypass obstacles.
While driving, the NXT scans its surroundings using the ultrasound sensor. In case it gets too close to an obstacle, the NXT stops, backs up a little, and makes a random turn (in a pre-defined range) before moving on.
Enjoy the movie, including cool onboard footage.