Tag Archives: Embedded

tinyK20: New board with micro-SD card

via Dangerous Prototypes


Erich of MCU on Eclipse has posted an update on his open source tinyK20 project. We wrote about it previously:

Changes from the earlier version (see “tinyK20 Open Source ARM Debug/Universal Board – First Prototypes”):
1. Replaced the K20 crystal with one having a smaller footprint.
2. Added a Micro SD card socket on the back (same socket as on the FRDM-K64F or FRDM-K22F).
3. Because SD cards can draw more than the 120 mA the K20’s internal 3.3V rail provides, there is a footprint on the backside of the board for an extra DC-DC converter.
4. Moved the reset button and headers.
5. First version with a transparent enclosure.

Raspberry Pi – Translator

via Wolf Paulus » Embedded | Wolf Paulus

Recently, I described how to perform speech recognition on a Raspberry Pi, using the on-device sphinxbase / pocketsphinx open source speech recognition toolkit. This approach works reasonably well, but achieves high accuracy only for a relatively small dictionary of words.

As that article showed, pocketsphinx works great on a Raspberry Pi for keyword spotting, for instance to launch an application by voice. General-purpose speech recognition, however, is still best performed using one of the prominent web services.

Speech Recognition via Google Service

Google’s speech recognition and related services used to be accessible and easy to integrate. Recently, however, they have become much more restrictive, and (hard to believe, I know) Microsoft is now the place to start when looking for decent speech-related services. Still, let’s start with Google’s Speech Recognition Service, which requires an FLAC (Free Lossless Audio Codec) encoded voice sound file.

Google API Key

Accessing Google’s speech recognition service requires an API key, available through the Google Developers Console.
I followed these instructions for Chromium Developers, and while the process is a little involved, even intimidating, it’s manageable.
I created a project, named it TranslatorPi, and selected the following APIs for it:

Google Developers Console: enabled APIs for this project

The important part is to create an API key for public access. On the left side menu, select APIs & auth / Credentials. Here you can create the API key, a 40-character alphanumeric string.

Installing tools and required libraries

Back on the Raspberry Pi, only a few more libraries are needed, in addition to what was installed in the above-mentioned on-device recognition project.

sudo apt-get install flac
sudo apt-get install python-requests
sudo apt-get install python-pycurl

Testing Google’s Recognition Service from a Raspberry Pi

I have the same audio setup as previously described, now allowing me to capture a FLAC encoded test recording like so:
arecord -D plughw:0,0 -f cd -c 1 -t wav -d 0 -q -r 16000 | flac - -s -f --best --sample-rate 16000 -o test.flac
..which makes a high-quality WAV recording and pipes it into the FLAC encoder, which writes ./test.flac

The following bash script will send the flac encoded voice sound to Google’s recognition service and display the received JSON response:

#!/bin/bash
# parameter 1 : file name, containing the FLAC encoded voice recording
key='[Google API Key]'
url="https://www.google.com/speech-api/v2/recognize?output=json&lang=en-us&key=$key"
echo "Sending FLAC encoded sound file to Google:"
curl -i -X POST -H "Content-Type: audio/x-flac; rate=16000" --data-binary @$1 "$url"
echo '..all done'
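
Assuming the script above is saved as speech2text.sh (a file name chosen here for illustration) and made executable, it can be pointed at the recording from the previous step:

chmod +x speech2text.sh
./speech2text.sh test.flac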
Speech Recognition via Google Service – JSON Response

  "result": [
      "alternative": [
          "transcript": "today is Sunday",
          "confidence": 0.98650438
          "transcript": "today it's Sunday"
          "transcript": "today is Sundy"
          "transcript": "today it is Sundy"
          "transcript": "today he is Sundy"
      "final": true
  "result_index": 0

More details about accessing the Google Speech API can be found here: https://github.com/gillesdemey/google-speech-v2
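
As an aside, rather than scanning the raw response text for a transcript (as the Python program further below does), the response can be parsed with Python’s json module. A minimal sketch, assuming the v2 API’s habit of returning one JSON object per line, where the first line may be an empty {"result":[]} placeholder:

import json

def best_transcript(response_data):
    # the service may emit several JSON objects, one per line;
    # skip blank lines and empty result lists
    for line in response_data.splitlines():
        if not line.strip():
            continue
        result = json.loads(line).get('result')
        if result:
            return result[0]['alternative'][0]['transcript']
    return None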

Building a Translator

Encoding doesn’t take long and the Google Speech Recognizer is the fastest in the industry, i.e. the transcription is available swiftly and we can send it for translation to yet another web service.

Microsoft Azure Marketplace

Creating an account at the Azure Marketplace is a little easier, and the My Data section shows that I have subscribed to the free translation service, which provides 2,000,000 characters/month. Again, I named my project TranslatorPi. On the ‘Developers‘ page, under ‘Registered Applications‘, take note of the Client ID and Client secret; both are required for the next step.



With the speech recognition from Google and text translation from Microsoft, the strategy to build the translator looks like this:

  • Record voice sound, FLAC encode it, and send it to Google for transcription
  • Use Google’s Speech Synthesizer and synthesize the recognized utterance.
  • Use Microsoft’s translation service to translate the transcription into the target language.
  • Use Google’s Speech Synthesizer again, to synthesize the translation in the target language.

For my taste, that’s a little too much for a shell script and I use the following Python program instead:

# -*- coding: utf-8 -*-
import json
import requests
import urllib
import subprocess
import argparse
import pycurl
import StringIO
import os.path
def speak_text(language, phrase):
    tts_url = "http://translate.google.com/translate_tts?tl=" + language + "&q=" + urllib.quote(phrase)
    subprocess.call(["mplayer", tts_url], shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
def transcribe():
    key = '[Google API Key]'
    stt_url = 'https://www.google.com/speech-api/v2/recognize?output=json&lang=en-us&key=' + key
    filename = 'test.flac'
    print "listening .."
        'arecord -D plughw:0,0 -f cd -c 1 -t wav -d 0 -q -r 16000 -d 3 | flac - -s -f --best --sample-rate 16000 -o ' + filename)
    print "interpreting .."
    # send the file to google speech api
    c = pycurl.Curl()
    c.setopt(pycurl.VERBOSE, 0)
    c.setopt(pycurl.URL, stt_url)
    fout = StringIO.StringIO()
    c.setopt(pycurl.WRITEFUNCTION, fout.write)
    c.setopt(pycurl.POST, 1)
    c.setopt(pycurl.HTTPHEADER, ['Content-Type: audio/x-flac; rate=16000'])
    file_size = os.path.getsize(filename)
    c.setopt(pycurl.POSTFIELDSIZE, file_size)
    fin = open(filename, 'rb')
    c.setopt(pycurl.READFUNCTION, fin.read)
    c.perform()
    fin.close()
    response_data = fout.getvalue()
    start_loc = response_data.find("transcript")
    temp_str = response_data[start_loc + 13:]
    end_loc = temp_str.find('"')
    final_result = temp_str[:end_loc]
    return final_result
class Translator(object):
    oauth_url = 'https://datamarket.accesscontrol.windows.net/v2/OAuth2-13'
    translation_url = 'http://api.microsofttranslator.com/V2/Ajax.svc/Translate?'
    def __init__(self):
        oauth_args = {
            'client_id': 'TranslatorPI',
            'client_secret': '[Microsoft Client Secret]',
            'scope': 'http://api.microsofttranslator.com',
            'grant_type': 'client_credentials'
        }
        oauth_junk = json.loads(requests.post(Translator.oauth_url, data=urllib.urlencode(oauth_args)).content)
        self.headers = {'Authorization': 'Bearer ' + oauth_junk['access_token']}
    def translate(self, origin_language, destination_language, text):
        german_umlauts = {
            ord(u'ä'): u'ae',
            ord(u'ö'): u'oe',
            ord(u'ü'): u'ue',
            ord(u'ß'): None,
        }
        translation_args = {
            'text': text,
            'to': destination_language,
            'from': origin_language
        }
        translation_result = requests.get(Translator.translation_url + urllib.urlencode(translation_args),
                                          headers=self.headers)
        translation = translation_result.text[2:-1]
        if destination_language == 'DE':
            translation = translation.translate(german_umlauts)
        print "Translation: ", translation
        speak_text(origin_language, 'Translating ' + text)
        speak_text(destination_language, translation)
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Raspberry Pi - Translator.')
    parser.add_argument('-o', '--origin_language', help='Origin Language', required=True)
    parser.add_argument('-d', '--destination_language', help='Destination Language', required=True)
    args = parser.parse_args()
    while True:
        Translator().translate(args.origin_language, args.destination_language, transcribe())
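
Assuming the program above is saved as translator.py (a file name chosen here for illustration), an English-to-German session is started like so; note that the umlaut handling in translate() expects the uppercase language code DE:

python translator.py -o EN -d DE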

Testing the $35 Universal Translator

So here are a few test sentences for our translator app, using English to Spanish or English to German:

  • How are you today?
  • What would you recommend on this menu?
  • Where is the nearest train station?
  • Thanks for listening.

Live Demo

This video shows the Raspberry Pi running the translator, using web services from Google and Microsoft for speech recognition, speech synthesis, and translation.

Raspberry Pi 2 – Speech Recognition on device

via Wolf Paulus » Embedded | Wolf Paulus

This is a lengthy post and very dry, but it provides detailed instructions for how to build and install SphinxBase and PocketSphinx and how to generate a pronunciation dictionary and a language model, all so that speech recognition can be run directly on the Raspberry Pi, without network access. Don’t expect it to be as fast as Google’s recognizer, though …

Creating the RASPBIAN boot MicroSD

Starting with the current RASPBIAN (Debian Wheezy) image, the creation of a bootable MicroSD Card is a well understood and well documented process.

Uncompressing the zip (again, there is no better tool than The Unarchiver, if you are on a Mac) reveals the 2015-02-16-raspbian-wheezy.img

With the MicroSD (inside an SD-Card adapter – no less than 8GB) inserted into the Mac, I run the df -h command in Terminal to find out how to address the card. Today, it showed up as /dev/disk4s1 56Mi 14Mi 42Mi 26% 512 0 100% /Volumes/boot, which means I run something like this to put the boot image onto the MicroSD:

sudo diskutil unmount /dev/disk4s1
sudo dd bs=1m if=/Users/wolf/Downloads/2015-02-16-raspbian-wheezy.img of=/dev/rdisk4

… after a few minutes, once the 3.28 GB have been written onto the card, I execute:

sudo diskutil eject /dev/rdisk4

Customizing the OS

Once booted, sudo raspi-config allows customization of the OS: time-zone, keyboard, and other settings are adjusted to closely match its environment.
I usually start (the Pi is already connected to the internet via Ethernet cable) with

  • updating the raspi-config
  • expanding the filesystem
  • internationalization: un-check en-GB, check en-US.UTF-8 UTF-8
  • internationalization: timezone ..
  • internationalization: keyboard: change to English US
  • setting the hostname to translator; there are too many Raspberry Pis on my home network to leave it at the default
  • make sure SSH is enabled
  • force audio out on the 3.5mm headphone jack
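
That last item can also be done straight from the shell; on Raspbian, routing the audio output is a single amixer call (numid=3 selects the route: 0=auto, 1=3.5mm jack, 2=HDMI):

amixer cset numid=3 1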


Given the sparse analog-to-digital support provided by the Raspberry Pi, probably the best and easiest way to connect a decent mic to the device is a USB microphone. I happen to have an older Logitech USB mic, which works perfectly fine with the Pi.

After a reboot and now with the microphone connected, let’s get started ..
ssh pi@translator with the default password ‘raspberry’ gets me in from everywhere on my local network
cat /proc/asound/cards

0 [ALSA ]: bcm2835 - bcm2835 ALSA
bcm2835 ALSA
1 [AK5370 ]: USB-Audio - AK5370
AKM AK5370 at usb-bcm2708_usb-1.2, full speed

showing that the microphone is recognized, along with the USB port it is attached to.
Next, I edit alsa-base.conf to load snd-usb-audio as the first sound device:
sudo nano /etc/modprobe.d/alsa-base.conf
and change the line
options snd-usb-audio index=-2
to
options snd-usb-audio index=0
and after a sudo reboot, cat /proc/asound/cards
looks like this

0 [AK5370 ]: USB-Audio - AK5370
AKM AK5370 at usb-bcm2708_usb-1.2, full speed
1 [ALSA ]: bcm2835 - bcm2835 ALSA
bcm2835 ALSA

Recording – Playback – Test

Before worrying about Speech Recognition and Speech Synthesis, let’s make sure that the basic recording and audio playback works.
Again, I have a USB microphone connected to the Pi, as well as a speaker, using the 3.5mm audio plug.

Installing build tools and required libraries

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install bison
sudo apt-get install libasound2-dev
sudo apt-get install swig
sudo apt-get install python-dev
sudo apt-get install mplayer
sudo reboot


sudo nano /etc/asound.conf and enter something like this:

pcm.usb {
    type hw
    card AK5370
}

pcm.internal {
    type hw
    card ALSA
}

pcm.!default {
    type asym
    playback.pcm {
        type plug
        slave.pcm "internal"
    }
    capture.pcm {
        type plug
        slave.pcm "usb"
    }
}

ctl.!default {
    type asym
    playback.pcm {
        type plug
        slave.pcm "internal"
    }
    capture.pcm {
        type plug
        slave.pcm "usb"
    }
}

The current recording settings can be looked at with:
amixer -c 0 sget 'Mic',0
and for me that looks something like this:

  Simple mixer control 'Mic',0
  Capabilities: cvolume cvolume-joined cswitch cswitch-joined penum
  Capture channels: Mono
  Limits: Capture 0 - 78
  Mono: Capture 68 [87%] [10.00dB] [on]

alsamixer -c 0 can be used to increase the capture level.


The current playback settings can be looked at with:
amixer -c 1
and alsamixer -c 1 can be used to increase the volume. After an increase, amixer -c 1 shows something like this:

  Simple mixer control 'PCM',0
  Capabilities: pvolume pvolume-joined pswitch pswitch-joined penum
  Playback channels: Mono
  Limits: Playback -10239 - 400
  Mono: Playback -685 [90%] [-6.85dB] [on]

Test Recording and Playback

With the mic switched on ..
arecord -D plughw:0,0 -f cd ./test.wav .. use Control-C to stop the recording.
aplay ./test.wav

With recording and playback working, let’s get into the really cool stuff, on-device speech recognition.

Speech Recognition Toolkit

CMU Sphinx a.k.a. PocketSphinx
Currently, PocketSphinx 5 pre-alpha (2015-02-15) is the most recent version. However, there are a few prerequisites that need to be installed first ..

Installing build tools and required libraries

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install bison
sudo apt-get install libasound2-dev
sudo apt-get install swig
sudo apt-get install python-dev
sudo apt-get install mplayer

Building Sphinxbase

cd ~/
wget http://sourceforge.net/projects/cmusphinx/files/sphinxbase/5prealpha/sphinxbase-5prealpha.tar.gz
tar -zxvf ./sphinxbase-5prealpha.tar.gz
cd ./sphinxbase-5prealpha
./configure --enable-fixed
make clean all
make check
sudo make install

Building PocketSphinx

cd ~/
wget http://sourceforge.net/projects/cmusphinx/files/pocketsphinx/5prealpha/pocketsphinx-5prealpha.tar.gz
tar -zxvf pocketsphinx-5prealpha.tar.gz
cd ./pocketsphinx-5prealpha
./configure
make clean all
make check
sudo make install

Creating a Language Model

Create a text file, containing a list of words/sentences we want to be recognized

For instance ..

Okay Pi
Open Garage
Start Translator
Shutdown
What is the weather in Ramona
What is the time

Upload the text file here: http://www.speech.cs.cmu.edu/tools/lmtool-new.html
and then download the generated Pronunciation Dictionary and Language Model

For the text file mentioned above, this is what the tool generates:

Pronunciation Dictionary


Language Model

Language model created by QuickLM on Thu Mar 26 00:23:34 EDT 2015
Copyright (c) 1996-2010 Carnegie Mellon University and Alexander I. Rudnicky

The model is in standard ARPA format, designed by Doug Paul while he was at MITRE.

The code that was used to produce this language model is available in Open Source.
Please visit http://www.speech.cs.cmu.edu/tools/ for more information

The (fixed) discount mass is 0.5. The backoffs are computed using the ratio method.
This model based on a corpus of 6 sentences and 16 words

ngram 1=16
ngram 2=20
ngram 3=15

-0.9853 </s> -0.3010
-0.9853 <s> -0.2536
-1.7634 GARAGE -0.2536
-1.7634 IN -0.2935
-1.4624 IS -0.2858
-1.7634 OKAY -0.2935
-1.7634 OPEN -0.2935
-1.7634 PI -0.2536
-1.7634 RAMONA -0.2536
-1.7634 SHUTDOWN -0.2536
-1.7634 START -0.2935
-1.4624 THE -0.2858
-1.7634 TIME -0.2536
-1.7634 TRANSLATOR -0.2536
-1.7634 WEATHER -0.2935
-1.4624 WHAT -0.2858

-1.0792 <s> OKAY 0.0000
-1.0792 <s> OPEN 0.0000
-1.0792 <s> SHUTDOWN 0.0000
-1.0792 <s> START 0.0000
-0.7782 <s> WHAT 0.0000
-0.3010 GARAGE </s> -0.3010
-0.3010 IN RAMONA 0.0000
-0.3010 IS THE 0.0000
-0.3010 OKAY PI 0.0000
-0.3010 OPEN GARAGE 0.0000
-0.3010 PI </s> -0.3010
-0.3010 RAMONA </s> -0.3010
-0.3010 SHUTDOWN </s> -0.3010
-0.3010 START TRANSLATOR 0.0000
-0.6021 THE TIME 0.0000
-0.6021 THE WEATHER 0.0000
-0.3010 TIME </s> -0.3010
-0.3010 TRANSLATOR </s> -0.3010
-0.3010 WEATHER IN 0.0000
-0.3010 WHAT IS 0.0000

-0.3010 <s> OKAY PI
-0.3010 <s> OPEN GARAGE
-0.3010 <s> SHUTDOWN </s>
-0.3010 <s> WHAT IS
-0.3010 IN RAMONA </s>
-0.6021 IS THE TIME
-0.3010 OKAY PI </s>
-0.3010 OPEN GARAGE </s>
-0.3010 THE TIME </s>
-0.3010 WHAT IS THE


Looking carefully, the Sphinx knowledge base generator provides links to the just-generated files, which makes it super convenient to pull them down to the Pi. For me, it generated a base set with the name 3199:

wget http://www.speech.cs.cmu.edu/tools/product/1427343814_14328/3199.dic
wget http://www.speech.cs.cmu.edu/tools/product/1427343814_14328/3199.lm

Running Speech-recognition locally on the Raspberry Pi

Finally everything is in place: SphinxBase and PocketSphinx have been built and installed, and a pronunciation dictionary and a language model have been created and stored locally.
During the build process, acoustic model files for the English language were deployed here: /usr/local/share/pocketsphinx/model/en-us/en-us

.. time to try out the recognizer:

cd ~/
export LD_LIBRARY_PATH=/usr/local/lib
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig

pocketsphinx_continuous -hmm /usr/local/share/pocketsphinx/model/en-us/en-us -lm 3199.lm -dict 3199.dic -samprate 16000/8000/48000 -inmic yes



INFO: ps_lattice.c(1380): Bestpath score: -7682
INFO: ps_lattice.c(1384): Normalizer P(O) = alpha(:285:334) = -403763
INFO: ps_lattice.c(1441): Joint P(O,S) = -426231 P(S|O) = -22468
INFO: ngram_search.c(874): bestpath 0.01 CPU 0.003 xRT
INFO: ngram_search.c(877): bestpath 0.01 wall 0.002 xRT
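
Since swig and python-dev are among the prerequisites, the PocketSphinx build can also install Python bindings, so the recognizer can be driven from Python. Below is a minimal sketch of file-based decoding against the generated 3199 model files; the import path and the raw test file test.raw (16 kHz, 16-bit, mono) are assumptions for illustration, not from the original post:

# -*- coding: utf-8 -*-
# Sketch: decode a raw recording with the PocketSphinx Python bindings.
# Assumes test.raw was captured at 16 kHz, 16-bit, mono (e.g. arecord -t raw).
from pocketsphinx.pocketsphinx import Decoder

MODELDIR = '/usr/local/share/pocketsphinx/model'

config = Decoder.default_config()
config.set_string('-hmm', MODELDIR + '/en-us/en-us')
config.set_string('-lm', '3199.lm')
config.set_string('-dict', '3199.dic')

decoder = Decoder(config)
decoder.start_utt()
with open('test.raw', 'rb') as f:
    while True:
        buf = f.read(1024)
        if not buf:
            break
        decoder.process_raw(buf, False, False)  # no_search=False, full_utt=False
decoder.end_utt()

if decoder.hyp() is not None:
    print 'Recognized:', decoder.hyp().hypstr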

Embedded Scripting (Lua, Espruino, Micro Python)


The thirty-second OSHUG meeting will take a look at the use of scripting languages with deeply embedded computing platforms, which have much more constrained resources than the platforms which were originally targeted by the languages.

Programming a microcontroller with Lua

eLua is a full version of the Lua programming language for microcontrollers, running on bare metal. Lua provides a modern, high-level, dynamically typed language, with first-class functions, coroutines and an API for interacting with C code, and yet is very small and can run in a memory-constrained environment. This talk will cover the Lua language and microcontroller environment, and show it running on off-the-shelf ARM Cortex boards as well as the Mizar32, an open hardware design built especially for eLua.

Justin Cormack is a software developer based in London. He previously worked at a startup that built LED displays and retains a fondness for hardware. He organizes the London Lua User Group, which hosts talks on the Lua programming language.

Bringing JavaScript to Microcontrollers

This talk will discuss the benefits and challenges of running a modern scripting language on microcontrollers with extremely limited resources. In particular we will take a look at the Espruino JavaScript interpreter and how it addresses these challenges and manages to run in less than 8kB of RAM.

Gordon Williams has developed software for companies such as Altera, Nokia, Microsoft and Lloyds Register, but has been working on the Espruino JavaScript interpreter for the last 18 months. In his free time he enjoys making things - from little gadgets to whole cars.

Micro Python — Python for microcontrollers

Microcontrollers have recently become powerful enough to host high-level scripting languages and run meaningful programs written in them. In this talk we will explore the software and hardware of the Micro Python project, an open source implementation of Python 3 which aims to be as compatible as possible with CPython, whilst still fitting within the RAM and ROM constraints of a microcontroller. Many tricks are employed to put as much as possible within ROM, and to use the least RAM and minimal heap allocations as is feasible. The project was successfully funded via a Kickstarter campaign at the end of 2013, and the hardware is currently being manufactured at Jaltek Systems UK.

Damien George is a theoretical physicist who likes to write compilers and build robots in his spare time.

Note: Please aim to arrive by 18:15 as the first talk will start at 18:30 prompt.


Privacy and Security (Security protocols in constrained environments, RFIDler, Indie Phone)


The thirty-first OSHUG meeting is dedicated to privacy and security, with talks on implementing security protocols in constrained environments, an SDR RFID reader/writer/emulator, and a new initiative that will use design thinking and open source to create a truly empowering mobile phone.

Security protocols in constrained environments

Implementations of security protocols such as TLS, SSH or IPsec come with a memory and compute overhead. Whilst this has become negligible in full-scale environments, it's still a real issue for hobbyist and embedded developers. This presentation will look at the sources of the overheads, what can be done to minimise them, and what sort of hardware platforms can be made to absorb them. The benefits and potential pitfalls of hardware-specific implementations will also be examined.

Chris Swan is CTO at CohesiveFT where he helps build secure cloud based networks. He's previously been a security guy at large Swiss banks, and before that was a Weapon Engineering Officer in the Royal Navy. Chris has tinkered with electronics since pre-school, and these days has a desk littered with various dev boards and projects.

RFIDler: A Software Defined RFID Reader/Writer/Emulator

Software Defined Radio has been quietly revolutionising the world of RF. However, the same revolution has not yet taken place in RFID. The proliferation of RFID/NFC devices means that you are unlikely to go a day without interacting with one such device or another. Whether it’s your car key, door entry card, transport card, contactless credit card, passport, etc., you almost certainly have one in your pocket right now!

RFIDler is a new project, created by Aperture Labs, designed to bring the world of Software Defined Radio into the RFID spectrum. We have created a small, open source, cheap to build platform that allows any suitably powerful microprocessor access to the raw data created by the over-the-air conversation between tag and reader coil. The device can also act as a standalone ‘hacking’ platform for RFID manipulation/examination. The rest is up to you!

Adam “Major Malfunction” Laurie is a security consultant working in the field of electronic communications, and a Director of Aperture Labs Ltd., who specialise in reverse engineering of secure systems. He started in the computer industry in the late Seventies, and quickly became interested in the underlying network and data protocols.

During this period, he successfully disproved the industry lie that music CDs could not be read by computers, and wrote the world’s first CD ripper, ‘CDGRAB’. He was also involved in various early open source projects, including ‘Apache-SSL’, which went on to become the de-facto standard secure web server. Since the late Nineties he has focused his attention on security, and has been the author of various papers exposing flaws in Internet services and/or software, as well as pioneering the concept of re-using military data centres (housed in underground nuclear bunkers) as secure hosting facilities.

Andy Ritchie has been working in the computer and technology industry for over 20 years for major industry players such as ICL, Informix, British Airways and Motorola. Founding his first company, Point 4 Consulting, at the age of 25, he built it into a multi-million pound technology design consultancy. Point 4 provided critical back-end technology and management for major web sites such as The Electronic Telegraph, MTV, United Airlines, Interflora, Credit Suisse, BT, Littlewoods and Sony. Following Point 4 he went on to found Ablaise, a company that manages the considerable intellectual property generated by Point 4, and Aperture Labs. In his spare time he manages the world's largest and longest running security conference, Defcon. Andy's research focuses on access control systems, biometric devices and embedded systems security, and he has spoken and trained at information security conferences in Europe and the US, publicly and for private and governmental audiences. He is responsible for identifying major vulnerabilities in various access control and biometric systems, and has a passion for creating devices that emulate access control tokens, whether electronic, physical or biometric. Andy has been responsible both directly and indirectly for changing access control guidelines for several western governments. Andy is currently a director of Aperture Labs Ltd, a company that specialises in reverse engineering and security evaluations of embedded systems.

Indie: a tale of privacy, civil liberties, and a phone

Can a phone really help protect our civil liberties? Aral Balkan thinks so. And he’s embarked on an audacious journey to make one. Join us to hear the introduction of a two-year story that is only just beginning.

Aral Balkan is founder and designer of Indie Phone, a phone that empowers mere mortals to own their own data.

Note: Please aim to arrive by 18:15 as the first talk will start at 18:30 prompt.


Extended list of 8-bit AVR Micro-Controllers, easily programmable with the Arduino IDE

via Wolf Paulus » Embedded

A couple of days back, I wrote about ‘The $3 Arduino‘, how to leave the Arduino board behind and program an ATmega168 Micro-Controller directly, still using the Arduino IDE but with the AVRISP mkII programmer. Of course, the ATmega168 isn’t the only MC available for something like that. In fact, I have quite a few 8-bit AVR Micro-Controllers in a small box right here, next to my desk.
Let’s minimize the ‘Minimal Arduino‘ even more, for instance by using the tiny ATtiny85 Microcontroller. Just like we did with the ‘BareBones‘ definition, we add board definitions for the Microcontrollers that the Arduino IDE doesn’t support out-of-the-box. Board definitions for the missing MCs can be found here, and after moving the attiny folder into the ~/Documents/Arduino/hardware folder and restarting the Arduino IDE, the IDE should now know about the new MCs. More details about this can be read here.

Minimizing the Minimal Arduino

Now that the Arduino IDE knows about the really tiny ATtiny85, we can set its internal clock to 8 MHz and flash a small program.

To flash the chip, we use the SPI (MOSI/MISO/SCK) pins, wired as shown here:

  1. RESET -> ATtiny85-Pin 1
  2. GND -> ATtiny85-Pin 4
  3. MOSI -> ATtiny85-Pin 5
  4. MISO -> ATtiny85-Pin 6
  5. SCK -> ATtiny85-Pin 7
  6. +5V -> ATtiny85-Pin 8

Switching the Internal Clock to 8MHz

Using the Fuse Calculator we can find the proper ATtiny85 fuse settings to use the internal RC oscillator and set it to 8 MHz.
The avrdude arguments look something like this: -U lfuse:w:0xe2:m -U hfuse:w:0xdf:m -U efuse:w:0xff:m
Avrdude is one of the tools that the Arduino IDE deploys on your computer. You can either execute Avrdude with those arguments directly, like so:

avrdude -p t85 -b 115200 -P usb -c avrispmkII -V -e -U lfuse:w:0xe2:m -U hfuse:w:0xdf:m -U efuse:w:0xff:m

or just execute the ‘Burn Bootloader’ command in the Arduino IDE’s Tools menu.
While this will NOT burn a bootloader on the ATtiny85 chip, it will set the fuses appropriately. Either way, this step needs to be performed only once.

With the microcontroller still connected to the AT-AVR-ISP2 programmer, a simple program can be quickly uploaded:

int p = 3;                 // LED connected to digital pin 3
void setup() {
  pinMode(p, OUTPUT);      // sets the digital pin as output
}

void loop() {
  digitalWrite(p, HIGH);   // sets the LED on
  delay(100);              // .. for a 10th of a sec
  digitalWrite(p, LOW);    // sets the LED off again
  delay(1000);             // waits for a second
  digitalWrite(p, HIGH);   // sets the LED on
  delay(500);              // .. for 1/2 a sec
  digitalWrite(p, LOW);    // sets the LED off again
  delay(500);              // .. for 1/2 a second
}
ATtiny2313 ($2.00)

The high-performance, low-power Atmel 8-bit AVR RISC-based microcontroller combines 2KB ISP flash memory, 128B ISP EEPROM, 128B internal SRAM, universal serial interface (USI), full duplex UART, and debugWIRE for on-chip debugging. The device supports a throughput of 20 MIPS at 20 MHz and operates between 2.7-5.5 volts.
By executing powerful instructions in a single clock cycle, the device achieves throughputs approaching 1 MIPS per MHz, balancing power consumption and processing speed.

ATtiny84 ($3.00)

The high-performance, low-power Atmel 8-bit AVR RISC-based microcontroller combines 8KB ISP flash memory, 512B EEPROM, 512B SRAM, 12 general purpose I/O lines, 32 general purpose working registers, 2 timers/counters (8-bit/16-bit) with two PWM channels each, internal and external interrupts, an 8-channel 10-bit A/D converter, a programmable gain stage (1x, 20x) for 12 differential ADC channel pairs, a programmable watchdog timer with internal oscillator, an internal calibrated oscillator, and four software selectable power saving modes.

ATtiny85 ($1.00)

The high-performance, low-power Atmel 8-bit AVR RISC-based microcontroller combines 8KB ISP flash memory, 512B EEPROM, 512-Byte SRAM, 6 general purpose I/O lines, 32 general purpose working registers, one 8-bit timer/counter with compare modes, one 8-bit high speed timer/counter, USI, internal and external Interrupts, 4-channel 10-bit A/D converter, programmable watchdog timer with internal oscillator, three software selectable power saving modes, and debugWIRE for on-chip debugging. The device achieves a throughput of 20 MIPS at 20 MHz and operates between 2.7-5.5 volts.

ATmega8 ($2.00)

The low-power Atmel 8-bit AVR RISC-based microcontroller combines 8KB of programmable flash memory, 1KB of SRAM, 512B EEPROM, and a 6 or 8 channel 10-bit A/D converter. The device supports a throughput of 16 MIPS at 16 MHz and operates between 2.7-5.5 volts.

ATmega168 ($4.00)

The high-performance, low-power Atmel 8-bit AVR RISC-based microcontroller combines 16KB ISP flash memory, 1KB SRAM, 512B EEPROM, an 8-channel/10-bit A/D converter (TQFP and QFN/MLF), and debugWIRE for on-chip debugging. The device supports a throughput of 20 MIPS at 20 MHz and operates between 2.7-5.5 volts.

ATmega328 ($5.00)

The high-performance Atmel 8-bit AVR RISC-based microcontroller combines 32KB ISP flash memory with read-while-write capabilities, 1KB EEPROM, 2KB SRAM, 23 general purpose I/O lines, 32 general purpose working registers, three flexible timer/counters with compare modes, internal and external interrupts, serial programmable USART, a byte-oriented 2-wire serial interface, SPI serial port, 6-channel 10-bit A/D converter (8 channels in TQFP and QFN/MLF packages), programmable watchdog timer with internal oscillator, and five software selectable power saving modes. The device operates between 1.8-5.5 volts.

MC          Flash (KB)  SRAM (B)  EEPROM (B)  SPI  I2C  UART  ADC Channels (10-bit)  PWM Channels  Timers  RTC
ATtiny2313  2           128       128         2    1    1     0                      4             2       No
ATtiny84    8           512       512         1    1    0     8                      4             2       No
ATtiny85    8           512       512         1    1    0     4                      6             2       No
ATmega8     8           1024      512         1    1    1     8                      3             3       Yes
ATmega168   16          1024      512         2    1    1     8                      6             3       Yes
ATmega328   32          2048      1024        2    1    1     8                      6             3       Yes