
Astro Pi: Mission Update 4

via Raspberry Pi


Just over a week ago, we closed the Secondary School phase of the Astro Pi competition after a one-week extension to the deadline. Students from all over the UK have uploaded their code, hoping that British ESA Astronaut Tim Peake will run it on the ISS later this year!

Last week, folks from the leading UK space companies, the UK Space Agency and ESERO UK met with us at Pi Towers in Cambridge to do the judging. We used the actual flight Astro Pi units to test-run the submitted code. You can see one of them on the table in the picture below:

The standard of entries was incredibly high and we were blown away by how clever some of them were!

Doug Liddle of SSTL said:

“We are delighted that the competition has reached so many school children and we hope that this inspires them to continue coding and look to Space for great career opportunities”

British ESA Astronaut Tim Peake – photo provided by UK Space Agency under CC BY-ND

Jeremy Curtis, Head of Education at the UK Space Agency, said:

“We’re incredibly impressed with the exciting and innovative Astro Pi proposals we’ve received and look forward to seeing them in action aboard the International Space Station.

Not only will these students be learning incredibly useful coding skills, but will get the chance to translate those skills into real experiments that will take place in the unique environment of space.”

When Tim Peake flies to the ISS in December he will have the two Astro Pis in his personal cargo allowance. He’ll also have 10 specially prepared SD cards containing the winning applications. Time is booked into his operations schedule to deploy the Astro Pis and set the code running; afterwards he will recover any output files created. These will then be returned to their respective owners and made available online for everyone to see.

Code was received for all secondary school key stages, and we even received several entries from key stage 2 primary schools; these were judged along with the key stage 3 entries. So without further ado, here is a breakdown of who won and what their code does:

Each of these programs has been assigned an operational code name that will be used when talking about them over the space-to-ground radio. These are essentially arbitrary, so don’t read too much into them!

Ops name: FLAGS

  • School: Thirsk School
  • Team name: Space-Byrds
  • Key stage: 3
  • Teacher: Dan Aldred
  • The judges had a lot of fun with this. Their program uses telemetry data provided by NORAD along with the Real Time Clock on the Astro Pi to computationally predict the location of the ISS (so it doesn’t need to be online). It then works out what country that location is within and shows its flag on the LED matrix along with a short phrase in the local language.
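
As an illustration of the final lookup step, here is a hedged Python sketch: the ISS position is taken as an input (the real entry computes it from NORAD telemetry), and the country table is a toy set of bounding boxes invented for this example.

```python
# Hypothetical sketch of the FLAGS lookup step: given a predicted ISS
# ground position (latitude, longitude), find which country it is over
# and build the message to scroll. The toy bounding boxes are
# (lat_min, lat_max, lon_min, lon_max, greeting) -- real data would
# come from a proper country-outline dataset.
COUNTRIES = {
    "France": (42.0, 51.0, -5.0, 8.0, "Bonjour!"),
    "Japan": (30.0, 46.0, 129.0, 146.0, "Konnichiwa!"),
}

def country_for(lat, lon):
    """Return (name, greeting) for the country under (lat, lon), or None."""
    for name, (lat0, lat1, lon0, lon1, hello) in COUNTRIES.items():
        if lat0 <= lat <= lat1 and lon0 <= lon <= lon1:
            return name, hello
    return None  # over the ocean or an unlisted country

hit = country_for(48.8, 2.3)  # roughly over Paris
if hit:
    name, hello = hit
    print("Flag: %s  Phrase: %s" % (name, hello))
```

A real implementation would use proper country outline data rather than rectangles, but the shape of the logic is the same.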

Ops name: MISSION CONTROL

  • School: Cottenham Village College
  • Team name: Kieran Wand
  • Key stage: 3
  • Teacher: Christopher Butcher
  • Kieran’s program is an environmental system monitor that could be used to cross-check the ISS’s own life support system. It continually measures temperature, pressure and humidity and displays them in a cycling split-screen heads-up display. It can raise alarms if these measurements move outside acceptable parameters. We were especially impressed that code had been written to compensate for thermal transfer between the Pi CPU and the Astro Pi sensors.
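
To make the alarm idea concrete, here is a small sketch with invented threshold values; the CPU-heat correction is an illustrative guess at the kind of compensation described, not the team's actual formula.

```python
# Hypothetical limits for each measurement (not the team's values).
LIMITS = {
    "temperature": (18.0, 27.0),  # degrees C
    "pressure": (980.0, 1040.0),  # millibars
    "humidity": (30.0, 70.0),     # percent relative humidity
}

def compensate(sensor_temp, cpu_temp, factor=5.0):
    """Illustrative correction: the Pi's CPU warms the nearby sensors,
    so pull the reading back by a fraction of the excess CPU heat."""
    return sensor_temp - (cpu_temp - sensor_temp) / factor

def alarms(readings):
    """Return the names of any measurements outside their limits."""
    out = []
    for name, value in readings.items():
        lo, hi = LIMITS[name]
        if not (lo <= value <= hi):
            out.append(name)
    return sorted(out)

print(alarms({"temperature": 29.5, "pressure": 1013.0, "humidity": 45.0}))
# -> ['temperature']
```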

Andy Powell of the Knowledge Transfer Network said:

“All of the judges were impressed by the quality of work and the effort that had gone into the winning KS3 projects and they produced useful, well thought through and entertaining results”

Ops name: TREES

  • School: Westminster School
  • Team name: EnviroPi
  • Key stage: 4 (and equivalent)
  • Teacher: Sam Page
  • This entry will be run in the Cupola module of the ISS with the Astro Pi NoIR camera pointing out of the window. The aim is to take pictures of the ground and later analyse them using false-colour image processing. This will produce a Normalised Difference Vegetation Index (NDVI) for each image, which is a measure of plant health. They have one piece of code that will run on the ISS to capture the images and another that will run on the ground after the mission to post-process and analyse the captured images. They even tested their code by going up in a light aircraft to take pictures of the ground!
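
The NDVI arithmetic itself is simple enough to sketch: NDVI = (NIR − VIS) / (NIR + VIS) per pixel, where the near-infrared (NIR) and visible (VIS) channel values come from the NoIR camera. The values below are invented; a real pipeline would of course operate on full camera frames.

```python
# Minimal sketch of the post-flight NDVI calculation. Healthy
# vegetation reflects strongly in near-infrared, so values near +1
# suggest healthy plants.

def ndvi(nir, vis):
    """NDVI for one pixel; returns 0.0 where both channels are dark."""
    total = nir + vis
    if total == 0:
        return 0.0
    return (nir - vis) / float(total)

def ndvi_image(nir_pixels, vis_pixels):
    """Elementwise NDVI over two equal-length channel arrays."""
    return [ndvi(n, v) for n, v in zip(nir_pixels, vis_pixels)]

print(ndvi(200, 50))  # strong vegetation signal: 0.6
```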

Ops name: REACTION GAMES

  • School: Lincoln UTC
  • Team name: Team Terminal
  • Key stage: 4 (and equivalent)
  • Teacher: Mark Hall
  • These students have made a whole suite of reaction games, complete with a nice little menu system to let the user choose. The games also record your response times, with the eventual goal of investigating how crew reaction time changes over the course of a long-term space flight. This entry caused all work to cease during the judging for about half an hour!
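
The core timing loop of such a game can be sketched in a few lines; `wait_for_press` here stands in for whatever joystick or button input the real games use.

```python
import random
import time

def reaction_round(wait_for_press, delay=None):
    """Run one round: pause a random moment, show the stimulus, then
    time how long the player takes to respond."""
    time.sleep(random.uniform(1.0, 3.0) if delay is None else delay)
    start = time.time()  # stimulus would appear on the LED matrix here
    wait_for_press()     # blocks until the player reacts
    return time.time() - start

# A stub 'player' who always takes about 50 ms to react:
elapsed = reaction_round(lambda: time.sleep(0.05), delay=0.0)
print("Response time: %.3f s" % elapsed)
```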

Lincoln UTC have also won the prize for the best overall submission in the Secondary School competition. This earns them a photograph of their school taken from space by an Airbus or SSTL satellite. Go and make a giant space invader, please!

Ops name: RADIATION

  • School: Magdalen College School
  • Team name: Arthur, Alexander and Kiran
  • Key stage: 5 (and equivalent)
  • Teacher: Dr Jesse Petersen
  • This team have successfully made a radiation detector using the Raspberry Pi camera module, a possibility hinted at in our Astro Pi animation video from a few months ago. The camera lens is blanked off to prevent light from getting in, but high-energy space radiation still gets through. Due to the design of the camera, the sensor sees the impacts of these particles as tiny specks of light. The code then uses OpenCV to measure the intensity of these specks and produces an overall measurement of the radiation level.
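
The detection idea can be sketched without OpenCV: treat the frame as a grid of brightness values, threshold it, and skip known dead pixels. This is an illustrative reconstruction, not the team's code.

```python
def count_specks(frame, threshold=50, dead_pixels=()):
    """Count pixels brighter than 'threshold', skipping known dead
    (permanently bright) pixels so they are not mistaken for hits."""
    hits = 0
    for y, row in enumerate(frame):
        for x, value in enumerate(row):
            if (x, y) in dead_pixels:
                continue
            if value > threshold:
                hits += 1
    return hits

frame = [
    [0,   0, 0,   0],
    [0, 200, 0,   0],  # a particle strike...
    [0,   0, 0, 255],  # ...and a dead pixel at (3, 2)
    [0,   0, 0,   0],
]
print(count_specks(frame, dead_pixels={(3, 2)}))  # -> 1
```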

What blew us away was that they had taken their Astro Pi and camera module along to the Rutherford Appleton Laboratory and fired a neutron cannon at it to test it was working!!!

The code can even compensate for dead pixels in the camera sensor. I wonder if they killed some pixels with the neutron cannon and then had to add that code out of necessity. Brilliant.

These winning programs will be joined on the ISS by the winners of the Primary School Competition which closed in April:

Ops name: MINECRAFT

  • School: Cumnor House Girl’s School
  • Team name: Hannah Belshaw
  • Key stage: 2
  • Teacher: Peter Kelly
  • Hannah’s entry logs data from the Astro Pi sensors and visualises it later using structures in a Minecraft world: columns of blocks represent environmental measurements, and a giant blocky model of the ISS itself (which moves) represents movement and orientation. The code was written, under Hannah’s guidance, by Martin O’Hanlon, who runs Stuff About Code. The data-logging program that will run on the ISS produces a CSV file that can be consumed later by the visualisation code to play back what happened while Tim Peake was running it in space. The code is already online here.
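
The logging half of that pairing might look something like this sketch (not the actual code linked above): append timestamped readings to a CSV file that the visualiser can replay. The sensor values here are hard-coded stand-ins for real Sense HAT readings.

```python
import csv
import os
import time

FIELDS = ["timestamp", "temperature", "humidity", "pressure"]

def log_reading(path, reading):
    """Append one row of readings; write the header on first use."""
    write_header = not os.path.exists(path)
    with open(path, "a") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        if write_header:
            writer.writeheader()
        writer.writerow(reading)

log_reading("astro_log.csv", {
    "timestamp": time.time(),
    "temperature": 22.1, "humidity": 44.0, "pressure": 1008.2,
})
```

A playback tool then only needs `csv.DictReader` to walk the same file row by row.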

Ops name: SWEATY ASTRONAUT

  • School: Cranmere Primary School
  • Team name: Cranmere Code Club
  • Key stage: 2
  • Teacher: Richard Hayler
  • Although they were entitled to have their entry coded by us at Raspberry Pi, the kids of the Cranmere Code Club are collectively writing their program themselves. The aim is to try to detect the presence of a crew member by monitoring the environmental sensors of the Astro Pi, particularly humidity. If a fluctuation is detected, it will scroll a message asking if someone is there. They even made a Lego replica of the Astro Pi flight case for their testing!
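
A hedged sketch of the detection logic: compare the latest humidity reading against a rolling baseline and flag a sudden rise. The threshold is invented for illustration.

```python
def detect_presence(history, latest, jump=2.0):
    """True if 'latest' humidity exceeds the recent average by 'jump'
    percentage points -- a crude sign that someone is breathing nearby."""
    if not history:
        return False
    baseline = sum(history) / len(history)
    return latest - baseline > jump

readings = [41.0, 41.2, 40.9, 41.1]
print(detect_presence(readings, 44.5))  # sudden rise -> True
print(detect_presence(readings, 41.3))  # normal drift -> False
```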

Obviously the main prize is to have your code flown and run on the ISS. However, the UK space companies also offered a number of thematic prizes, which were awarded independently of the entries chosen to fly, so some crossover with the other winners was expected.

  • Space Sensors
    Hannah Belshaw, from Cumnor House Girl’s School with her idea for Minecraft data visualisation.
  • Space Measurements
    Kieran Wand from Cottenham Village College for his ISS environment monitoring system.
  • Imaging and Remote Sensing
    The EnviroPi team from Westminster School with their experiment to measure plant health from space using NDVI images.
  • Space Radiation
    Magdalen College, Oxford with their Space Radiation Detector.
  • Data Fusion
    Nicole Ashworth, from Reading, for her weather reporting system; comparing historical weather data from the UK with the environment on the ISS.
  • Coding Excellence
    Sarah and Charlie Maclean for their multiplayer Labyrinth game.

Pat Norris of CGI said:

“It has been great to see so many schools getting involved in coding and we hope that this competition has inspired the next generation to take up coding, space systems or any of the many other opportunities the UK space sector offers. We were particularly impressed by the way Charlie structured his code, added explanatory comments and used best practice in developing the functionality”

We’re aiming to have all the code that was submitted to the competition on one of the ten SD cards that will fly, so your code will still fly even if it isn’t scheduled to be run in space. The hope is that, during periods of downtime, Tim may look through some of the other entries and run them manually. But this depends on a lot of factors outside our control, so we can’t promise anything.

But wait, there’s more!

There is still opportunity for all schools to get involved with Astro Pi!

There will be an on-orbit activity during the mission (probably in January or February) that you can all do at the same time as Tim. After the competition-winning programs have all finished, the Astro Pi will enter a phase of flight data recording, just like the black box on an aircraft.

This will make the Astro Pi continually record everything from all its sensors and save the data into a file that you can get! If you set your Astro Pi up in the same way (the software will be provided by us) then you can compare his measurements with yours taken on the ground.

There is a lot of educational value in looking at the differences and understanding why they occur. For instance, you could look at the accelerometer data to find out when ISS reboosts occurred, or study the magnetometer data to find out how the Earth’s magnetic field changes as the ISS orbits. A number of free educational resources will be provided to help you get the most out of this exercise.
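
For example, reboost hunting in the accelerometer data might be sketched like this; the sample format and threshold are illustrative assumptions, not the format of the real flight data files.

```python
import math

def magnitude(x, y, z):
    """Length of the acceleration vector."""
    return math.sqrt(x * x + y * y + z * z)

def find_reboosts(samples, threshold=0.02):
    """Return indices where |accel| jumps above the quiet baseline."""
    if not samples:
        return []
    baseline = magnitude(*samples[0])
    return [i for i, s in enumerate(samples)
            if magnitude(*s) - baseline > threshold]

quiet = (0.0, 0.0, 0.001)  # near-zero reading in free fall
boost = (0.0, 0.0, 0.05)   # small sustained thrust along one axis
print(find_reboosts([quiet, quiet, boost, quiet]))  # -> [2]
```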

The general public can also get involved when the Sense HAT goes on general sale in a few weeks’ time.

Libby Jackson of the UK Space Agency said:

“Although the competition is over, the really exciting part of the project is just beginning. All of the winning entries will get to see their code run in space, and thousands more can take part in real-life space experiments through the Flight Data phase”



Raspberry Pi – Translator

via Wolf Paulus » Embedded | Wolf Paulus

Recently, I described how to perform speech recognition on a Raspberry Pi, using the on-device sphinxbase / pocketsphinx open-source speech recognition toolkit. This approach works reasonably well, but achieves high accuracy only with a relatively small dictionary of words.

As that article showed, pocketsphinx works great on a Raspberry Pi for keyword spotting, for instance using your voice to launch an application. General-purpose speech recognition, however, is still best performed using one of the prominent web services.

Speech Recognition via Google Service

Google’s speech recognition and related services used to be accessible and easy to integrate. Recently, however, they have become much more restrictive, and (hard to believe, I know) Microsoft is now the place to start when looking for decent speech-related services. Still, let’s start with Google’s Speech Recognition Service, which requires an FLAC (Free Lossless Audio Codec) encoded voice sound file.

Google API Key

Accessing Google’s speech recognition service requires an API key, available through the Google Developers Console.
I followed these instructions for Chromium Developers and, while the process is a little involved, even intimidating, it’s manageable.
I created a project, named it TranslatorPi, and selected the following APIs for it:

Google Developer Console: Enabled APIs for this project

The important part is to create an API key for public access. In the left-side menu, select APIs & auth / Credentials. Here you can create the API key, a 40-character alphanumeric string.

Installing tools and required libraries

Back on the Raspberry Pi, only a few more libraries are needed, in addition to what was installed for the on-device recognition project mentioned above.

sudo apt-get install flac
sudo apt-get install python-requests
sudo apt-get install python-pycurl

Testing Google’s Recognition Service from a Raspberry Pi

I have the same audio setup as previously described, now allowing me to capture a FLAC encoded test recording like so:
arecord -D plughw:0,0 -f cd -c 1 -t wav -d 0 -q -r 16000 | flac - -s -f --best --sample-rate 16000 -o test.flac
…which makes a high-quality WAV recording and pipes it into the FLAC encoder, which writes ./test.flac

The following bash script will send the flac encoded voice sound to Google’s recognition service and display the received JSON response:

#!/bin/bash
# parameter 1 : file name containing the FLAC encoded voice recording
 
echo Sending FLAC encoded Sound File to Google:
key='<TRANSLATORPI API KEY>'
url='https://www.google.com/speech-api/v2/recognize?output=json&lang=en-us&key='$key
curl -i -X POST -H "Content-Type: audio/x-flac; rate=16000" --data-binary @$1 $url
echo '..all done'
Speech Recognition via Google Service – JSON Response

{
  "result": [
    {
      "alternative": [
        {
          "transcript": "today is Sunday",
          "confidence": 0.98650438
        },
        {
          "transcript": "today it's Sunday"
        },
        {
          "transcript": "today is Sundy"
        },
        {
          "transcript": "today it is Sundy"
        },
        {
          "transcript": "today he is Sundy"
        }
      ],
      "final": true
    }
  ],
  "result_index": 0
}

More details about accessing the Google Speech API can be found here: https://github.com/gillesdemey/google-speech-v2

Building a Translator

Encoding doesn’t take long, and Google’s speech recognizer responds quickly, so the transcription is available swiftly and we can send it for translation to yet another web service.

Microsoft Azure Marketplace

Creating an account at the Azure Marketplace is a little easier. The My Data section shows that I have subscribed to the free translation service, which provides 2,000,000 characters/month. Again, I named my project TranslatorPi. On the ‘Developers‘ page, under ‘Registered Applications‘, take note of the Client ID and Client secret; both are required for the next step.

Strategy

With the speech recognition from Google and text translation from Microsoft, the strategy to build the translator looks like this:

  • Record voice sound, FLAC encode it, and send it to Google for transcription
  • Use Google’s Speech Synthesizer and synthesize the recognized utterance.
  • Use Microsoft’s translation service to translate the transcription into the target language.
  • Use Google’s Speech Synthesizer again, to synthesize the translation in the target language.

For my taste, that’s a little too much for a shell script, so I use the following Python program instead:

# -*- coding: utf-8 -*-
import json
import requests
import urllib
import subprocess
import argparse
import pycurl
import StringIO
import os.path
 
 
def speak_text(language, phrase):
    # URL-encode the phrase so spaces and punctuation survive the query string
    tts_url = "http://translate.google.com/translate_tts?tl=" + language + "&q=" + urllib.quote(phrase)
    subprocess.call(["mplayer", tts_url], shell=False, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
 
def transcribe():
    key = '[Google API Key]'
    stt_url = 'https://www.google.com/speech-api/v2/recognize?output=json&lang=en-us&key=' + key
    filename = 'test.flac'
    print "listening .."
    os.system(
        'arecord -D plughw:0,0 -f cd -c 1 -t wav -q -r 16000 -d 3 | flac - -s -f --best --sample-rate 16000 -o ' + filename)
    print "interpreting .."
    # send the file to google speech api
    c = pycurl.Curl()
    c.setopt(pycurl.VERBOSE, 0)
    c.setopt(pycurl.URL, stt_url)
    fout = StringIO.StringIO()
    c.setopt(pycurl.WRITEFUNCTION, fout.write)
 
    c.setopt(pycurl.POST, 1)
    c.setopt(pycurl.HTTPHEADER, ['Content-Type: audio/x-flac; rate=16000'])
 
    file_size = os.path.getsize(filename)
    c.setopt(pycurl.POSTFIELDSIZE, file_size)
    fin = open(filename, 'rb')
    c.setopt(pycurl.READFUNCTION, fin.read)
    c.perform()
 
    response_data = fout.getvalue()
 
    # crude JSON scraping: grab the text between the quotes after "transcript"
    start_loc = response_data.find("transcript")
    temp_str = response_data[start_loc + 13:]
    end_loc = temp_str.find('"')
    final_result = temp_str[:end_loc]
    c.close()
    return final_result
 
 
class Translator(object):
    oauth_url = 'https://datamarket.accesscontrol.windows.net/v2/OAuth2-13'
    translation_url = 'http://api.microsofttranslator.com/V2/Ajax.svc/Translate?'
 
    def __init__(self):
        oauth_args = {
            'client_id': 'TranslatorPI',
            'client_secret': '[Microsoft Client Secret]',
            'scope': 'http://api.microsofttranslator.com',
            'grant_type': 'client_credentials'
        }
        oauth_junk = json.loads(requests.post(Translator.oauth_url, data=urllib.urlencode(oauth_args)).content)
        self.headers = {'Authorization': 'Bearer ' + oauth_junk['access_token']}
 
    def translate(self, origin_language, destination_language, text):
        german_umlauts = {
            ord(u'ä'): u'ae',
            ord(u'ö'): u'oe',
            ord(u'ü'): u'ue',
            ord(u'ß'): u'ss',
        }
 
        translation_args = {
            'text': text,
            'to': destination_language,
            'from': origin_language
        }
        translation_result = requests.get(Translator.translation_url + urllib.urlencode(translation_args),
                                          headers=self.headers)
        translation = translation_result.text[2:-1]
        if destination_language == 'DE':
            translation = translation.translate(german_umlauts)
        print "Translation: ", translation
        speak_text(origin_language, 'Translating ' + text)
        speak_text(destination_language, translation)
 
 
if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Raspberry Pi - Translator.')
    parser.add_argument('-o', '--origin_language', help='Origin Language', required=True)
    parser.add_argument('-d', '--destination_language', help='Destination Language', required=True)
    args = parser.parse_args()
    while True:
        Translator().translate(args.origin_language, args.destination_language, transcribe())

Testing the $35 Universal Translator

So here are a few test sentences for our translator app, using English to Spanish or English to German:

  • How are you today?
  • What would you recommend on this menu?
  • Where is the nearest train station?
  • Thanks for listening.

Live Demo

This video shows the Raspberry Pi running the translator, using web services from Google and Microsoft for speech recognition, speech synthesis, and translation.


Embedded Scripting (Lua, Espruino, Micro Python)

via OSHUG

The thirty-second OSHUG meeting will take a look at the use of scripting languages on deeply embedded computing platforms, which have much more constrained resources than the platforms these languages originally targeted.

Programming a microcontroller with Lua

eLua is a full version of the Lua programming language for microcontrollers, running on bare metal. Lua provides a modern, high-level, dynamically typed language with first-class functions, coroutines and an API for interacting with C code, yet it is very small and can run in a memory-constrained environment. This talk will cover the Lua language and microcontroller environment, and show it running on off-the-shelf ARM Cortex boards as well as the Mizar32, an open hardware design built especially for eLua.

Justin Cormack is a software developer based in London. He previously worked at a startup that built LED displays and retains a fondness for hardware. He organizes the London Lua User Group, which hosts talks on the Lua programming language.

Bringing JavaScript to Microcontrollers

This talk will discuss the benefits and challenges of running a modern scripting language on microcontrollers with extremely limited resources. In particular we will take a look at the Espruino JavaScript interpreter and how it addresses these challenges and manages to run in less than 8kB of RAM.

Gordon Williams has developed software for companies such as Altera, Nokia, Microsoft and Lloyds Register, but has been working on the Espruino JavaScript interpreter for the last 18 months. In his free time he enjoys making things - from little gadgets to whole cars.

Micro Python — Python for microcontrollers

Microcontrollers have recently become powerful enough to host high-level scripting languages and run meaningful programs written in them. In this talk we will explore the software and hardware of the Micro Python project, an open source implementation of Python 3 which aims to be as compatible as possible with CPython, whilst still fitting within the RAM and ROM constraints of a microcontroller. Many tricks are employed to put as much as possible into ROM, and to use as little RAM and as few heap allocations as feasible. The project was successfully funded via a Kickstarter campaign at the end of 2013, and the hardware is currently being manufactured at Jaltek Systems UK.
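
As a flavour of that CPython compatibility, the following plain Python 3 snippet sticks to the allocation-light style Micro Python favours, and should run unchanged under CPython or (in principle) on a Micro Python board.

```python
def checksum(data):
    """XOR checksum over an iterable of bytes -- one pass, no copies."""
    total = 0
    for b in data:
        total ^= b
    return total

# bytearray is mutable and allocation-friendly on small heaps:
buf = bytearray(b"hello")
print(checksum(buf))  # -> 98
```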

Damien George is a theoretical physicist who likes to write compilers and build robots in his spare time.

Note: Please aim to arrive by 18:15 as the first talk will start at 18:30 prompt.

Sponsored by: