Tag Archives: Video

Teach your drone what is up and down with an Arduino

via Arduino Blog

Gyroscopes and accelerometers are the primary sensors at the heart of an IMU, or inertial measurement unit — an electronic sensor device that measures the orientation, gravitational forces, and velocity of a multicopter, and helps you keep it in the air using an Arduino.

Two videos made by Joop Brokking, a Maker with a passion for RC model ‘copters, clearly explain how to program your own IMU so that it can be used for self-balancing your drone without Kalman filters, libraries, or complex calculations.

Auto-leveling a multicopter is pretty challenging. It means that when you release the pitch and roll controls on your transmitter, the multicopter levels itself. To get this to work, the flight controller of the multicopter needs to know exactly which way is down, like a spirit level sitting on top of the multicopter for the pitch and roll axes.
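Brokking's videos cover the full details; as a rough illustration of the underlying idea, a complementary filter is a common Kalman-free way to fuse the gyro's fast-but-drifting rate with the accelerometer's noisy-but-absolute sense of "down". A minimal Python sketch (the coefficient, loop time, and sensor values below are illustrative assumptions, not taken from the videos):

```python
import math

def complementary_filter(angle, gyro_rate, ax, ay, az, dt, alpha=0.98):
    """Fuse a gyro rate (deg/s) and accelerometer reading (in g) into a pitch estimate.

    The gyro term tracks fast motion but drifts over time; the accelerometer
    term is noisy but always knows which way is down. alpha weights the two.
    """
    # Pitch angle implied by gravity alone
    accel_angle = math.degrees(math.atan2(ax, math.sqrt(ay * ay + az * az)))
    # Mostly trust the integrated gyro, gently pulled toward the accel angle
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

# Example: a level, stationary craft (gravity straight down the z axis)
# keeps an estimate of zero over many 4 ms loop iterations.
angle = 0.0
for _ in range(100):
    angle = complementary_filter(angle, gyro_rate=0.0, ax=0.0, ay=0.0, az=1.0, dt=0.004)
print(round(angle, 3))  # prints 0.0
```

The same update runs once per flight-controller loop; the flight controller then feeds the estimated angle into the PID loops that drive the motors.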

Very often people ask me how to make an auto level feature for their multicopter. The answer to a question like this is pretty involved and cannot be explained in one email. And that is why I made this video series.

You can find the bill of materials and code here.

Holopainting with Raspberry Pi

via Raspberry Pi

We’ve covered 2D light-painting here before. This project takes things a step further: meet 3D holopainting.

Holo_Painting-1

This project’s an unholy mixture of stop-motion, light-painting and hyperlapse from FilmSpektakel, a time-lapse and film production company in Vienna. It was made as part of a university graduation project. (With Raspberry Pis and Raspberry Pi camera boards, natch.)

Getting this footage out was a very labour-intensive process – but the results are stupendous. The subject was filmed by a ring of 24 networked Raspberry Pi cameras working like a 3D scanner, taking pictures around the ring with a delay of 83 milliseconds between each one so that movement could be captured.
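With 24 cameras firing 83 ms apart, one full sweep of the ring takes just under two seconds. A small sketch of how such a staggered trigger schedule might be computed (the function name and setup are hypothetical; the actual rig coordinated networked Pis):

```python
def trigger_schedule(num_cameras=24, delay_ms=83):
    """Return the firing time (in ms) for each camera around the ring."""
    return [i * delay_ms for i in range(num_cameras)]

times = trigger_schedule()
print(times[:3], times[-1])   # prints [0, 83, 166] 1909
print(times[-1] + 83, "ms per full sweep")  # prints 1992 ms per full sweep
```

Each Pi in the ring would wait for its own offset before capturing, so a moving subject is sampled at 24 slightly different moments as well as 24 different angles.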

Holopainting rig


They then cut out all of the resulting images – told you it was labour-intensive – and put them on a black background, then fed that data into a commercial light-painting stick. (If you don’t want to fork out a ton of cash for your own light-painting stick, there are instructions on building one with a Raspberry Pi over at Adafruit.)

A man dressed as a budget ninja walked the stick in front of a series of cameras set up where the original Raspberry Pi cameras had been, to create 3D images hanging in the air.

holopainting ninja

Presto: a holopainting – and the results are tremendous. Here’s a making-of video.

The Invention of #HoloPainting

Holopainting is a combination of the Light Painting, Stop Motion and Hyperlapse techniques to create three-dimensional light paintings. We didn’t want to use computer-generated images, so we built a giant 3D scanner out of 24 Raspberry Pis and their cameras. These cameras took photos of the person in the middle from 24 different perspectives, with a delay of 83 milliseconds, so the movement of the person was also recorded.

There’s a comment that often pops up when we describe a project like this: why bother? We’ll head that off right now: because you can. Because nobody’s done it before. Because the end results look phenomenal. We love it, and we’d love to see more projects like this!


Machine learning for the maker community

via Arduino Blog

mellis-aday

At Arduino Day, I talked about a project my collaborators and I have been working on to bring machine learning to the maker community. Machine learning is a technique for teaching software to recognize patterns using data, e.g. for recognizing spam emails or recommending related products. Our ESP (Example-based Sensor Predictions) software recognizes patterns in real-time sensor data, like gestures made with an accelerometer or sounds recorded by a microphone. The machine learning algorithms that power this pattern recognition are specified in Arduino-like code, while the recording and tuning of example sensor data is done in an interactive graphical interface. We’re working on building up a library of code examples for different applications so that Arduino users can easily apply machine learning to a broad range of problems.
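The "example-based" idea behind tools like ESP boils down to supervised learning over sensor snippets the user records: label a few examples, then match new readings against them. As a toy illustration of that workflow (this is not the ESP code, and the feature vectors are made-up accelerometer values), a one-nearest-neighbour classifier in plain Python:

```python
import math

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(sample, examples):
    """Label a sensor reading by its nearest recorded example.

    examples: list of (feature_vector, label) pairs recorded by the user.
    """
    return min(examples, key=lambda ex: distance(sample, ex[0]))[1]

# Hypothetical (x, y, z) accelerometer features for two recorded gestures
examples = [
    ((0.0, 0.0, 1.0), "rest"),
    ((0.9, 0.1, 0.2), "shake"),
]
print(classify((0.8, 0.0, 0.3), examples))  # prints shake
```

Real toolkits such as the GRT add feature extraction, training pipelines, and more capable classifiers, but the record-examples-then-match loop is the same.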

The project is a part of my research at the University of California, Berkeley and is being done in collaboration with Ben Zhang, Audrey Leung, and my advisor Björn Hartmann. We’re building on the Gesture Recognition Toolkit (GRT) and openFrameworks. The software is still rough (and Mac only for now) but we’d welcome your feedback. Installation instructions are on our GitHub project page. Please report issues on GitHub.

Our project is part of a broader wave of projects aimed at helping electronics hobbyists make more sophisticated use of sensors in their interactive projects. Also building on the GRT is ml-lib, a machine learning toolkit for Max and Pure Data. Another project in a similar vein is the Wekinator, which is featured in a free online course on machine learning for musicians and artists. Rebecca Fiebrink, the creator of Wekinator, recently participated in a panel on machine learning in the arts and taught a workshop (with Phoenix Perry) at Resonate ’16. For non-real time applications, many people use scikit-learn, a set of Python tools. There’s also a wide range of related research from the academic community, which we survey on our project wiki.

For a high-level overview, check out this visual introduction to machine learning. For a thorough introduction, there are courses on machine learning from Coursera and Udacity, among others. If you’re interested in a more arts- and design-focused approach, check out alt-AI, happening in NYC next month.

If you’d like to start experimenting with machine learning and sensors, an excellent place to get started is the built-in accelerometer and gyroscope on the Arduino or Genuino 101. With our ESP system, you can use these sensors to detect gestures and incorporate them into your interactive projects!

Massimo Banzi is guest judge on America’s Greatest Makers

via Arduino Blog

americagm

Massimo Banzi is among the judges on “America’s Greatest Makers,” a reality competition from Mark Burnett (the reality-TV king behind “Survivor,” “The Apprentice,” and “The Voice”) in partnership with Intel, which debuted last week on TBS.

In a first-of-its-kind competition, the TV show takes 24 teams of makers from across the US and puts them in head-to-head challenges to invent disruptive projects and win $1 million. The teams are made up of people ranging from 15 to 59 years old, with ideas set to inspire a whole new audience of potential makers.

Intel_AGM_stacked_rgb_3000


In the first two episodes, each team pitched their device idea to the judging panel composed of Intel CEO Brian Krzanich; business and financial expert Carol Roth; comedian, serial entrepreneur, and co-host of truTV’s Hack My Life Kevin Pereira; and one of the celebrity guests.

At the end of April, during the 4th episode, guest judge Massimo Banzi joins the panel as the remaining makers compete in the “Make or Break” rounds for $100,000 and a spot in the million-dollar finale. If you are not in the USA, you can watch the episode at this link after April 27th.

mbanziagm

In the meantime, you can also watch a beginner maker project to learn how to do obstacle avoidance using the Arduino 101. Cara Santa Maria is the trainer who will guide you through the tutorial on this important topic for projects involving moving objects like robots and drones:
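The tutorial covers the Arduino 101 specifics; at its core, basic obstacle avoidance is just a rule applied to a distance-sensor reading. A deliberately minimal sketch of that rule (the threshold and readings below are made-up values, not from the tutorial):

```python
def avoid(distance_cm, threshold_cm=30):
    """Very simplified obstacle-avoidance rule: turn when something is close.

    distance_cm would come from an ultrasonic or IR range sensor on the robot.
    """
    return "turn" if distance_cm < threshold_cm else "forward"

print(avoid(12), avoid(85))  # prints turn forward
```

A real robot reads the sensor in a loop and maps "turn"/"forward" onto motor commands, often with smoothing to avoid jittering between the two states.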

Arduino101Tut-Intel

 

Follow the show on Twitter, Instagram, and Facebook, and use the hashtag #AmericasGreatestMakers.


You Can Build Multi-Device Arduino Networks with Temboo

via Arduino Blog

tembooM2M

Is there a cool Internet of Things idea that you’ve wanted to try out with your Arduino, but just haven’t had time for? Building a network that integrates multiple sensors and boards into one cohesive application can be time-consuming and difficult. To make it a bit easier, Temboo just introduced new Machine-to-Machine programming that lets you connect Arduino and Genuino boards running locally in a multi-device network to the Internet. Now, you can bring all the power and flexibility of Internet connectivity to Arduino applications without giving up the benefits of using low-power, local devices.

temboo-line

Our friends at Temboo now support three M2M communication protocols for Arduino boards: MQTT, CoAP, and HTTP. You can choose which to use based on the needs of your application and, once you’ve made your choice, automatically generate all the code you need to connect your Arduinos to any web service. You can also save the network configurations that you specify, making it easy to add and subtract devices or update their behavior remotely.
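Temboo generates the protocol code for you; conceptually, MQTT-style M2M comes down to devices publishing readings to named topics and subscribing to the topics they care about, with a broker routing messages between them. A minimal in-memory illustration of that publish/subscribe pattern (this is a conceptual toy, not Temboo's generated code or a real MQTT client):

```python
from collections import defaultdict

class Broker:
    """Toy in-memory stand-in for an MQTT-style message broker."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to receive every message on a topic."""
        self.subscribers[topic].append(callback)

    def publish(self, topic, payload):
        """Deliver a payload to every subscriber of the topic."""
        for callback in self.subscribers[topic]:
            callback(payload)

broker = Broker()
readings = []
# A monitoring device subscribes to a (hypothetical) production-line topic
broker.subscribe("factory/line1/temperature", readings.append)

# A sensor node publishes; every subscriber on the topic receives it
broker.publish("factory/line1/temperature", 22.5)
print(readings)  # prints [22.5]
```

In a real deployment the broker runs as a separate service and the devices hold network connections to it, which is what lets low-power boards coordinate without talking to each other directly.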

With Temboo M2M, you can program flexible distributed device applications in minutes. From monitoring air quality and noise levels in cities to controlling water usage in agricultural settings, networked sensors and devices enable all sorts of powerful IoT applications. You can see it all in action in the video below, which shows how they built an M2M network that monitors and controls different machines working together on a production line.