Author Archives: Brian Benchoff

BeagleBone Green, Now Wireless

via hardware – Hackaday

Over the past few years, the BeagleBone ecosystem has grown from the original BeagleBone White, followed two years later by the BeagleBone Black. The Black was the killer board of the BeagleBone family, and for a time wasn’t available anywhere at any price. TI has been kind to the SoC used in the BeagleBone, leading to last year’s release of the BeagleBone Green, the robotics-focused BeagleBone Blue, and the very recent announcement of a BeagleBone on a chip. All these boards have about the same capabilities, targeted towards different use cases; the BeagleBone on a Chip is a single module that can be dropped into an Eagle schematic, while the BeagleBone Green is meant to be the low-cost option that plays nicely with Seeed Studio’s Grove connectors. They’re all variations on a theme, and until now, wireless hasn’t been a built-in option.

This weekend at Maker Faire, Seeed Studio is showing off their latest edition of the BeagleBone Green. It’s the BeagleBone Green Wireless, and it includes 802.11b/g/n WiFi and Bluetooth 4.1 LE.

As all the BeagleBones are generally the same, each with their own special bits tacked on, it’s only fair to do a line-by-line comparison of each board:


While the BeagleBone Blue is still in the works and due to be released this summer, the BeagleBone Green Wireless fills the WiFi and Bluetooth niche of the BeagleBone ecosystem.


Wireless

As with any single board computer with a fast ARM chip running Linux, comparisons must be made to the Raspberry Pi. Since this is the first BeagleBone released with wireless connectivity baked into the board, the most logical comparison is against the recently released Raspberry Pi 3.

The Pi 3 includes an integrated wireless chipset for 802.11n and Bluetooth 4.1 connectivity. The BeagleBone Green Wireless has this, but also adds 802.11b and 802.11g networks. This gives the BBGW the ability to sense when anyone in the vicinity is using a microwave – a boon for that Internet of Things thing we’ve been hearing so much about.

Unlike the Pi 3, the BBGW has connections for additional antennas in the form of two u.FL connectors. While the Pi 3 can be hacked to use external antennas, it’s not a job for the faint of heart. External antenna connections on a small, compact, low-power board are the ideal solution for any wireless deployment dealing with range problems or a congested network.

Grove Connectors

The BeagleBone Green Wireless and Grove Base cape

The BeagleBone Green Wireless is a Seeed joint, and as with the original BeagleBone Green, there are Grove connectors right on the edge of the board. These connectors provide an I2C bus and a serial connection for Seeed Studio’s custom modules.
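Talking to a Grove module over that I2C connector is just ordinary Linux I2C underneath. Here is a minimal sketch using the stock python-smbus bindings; the bus number and the 0x40 sensor address are assumptions for illustration, so check your module’s documentation:

```python
# Read a register from a hypothetical Grove I2C module on a BeagleBone.
# Bus 2 and address 0x40 are assumptions -- check your module's docs.
import smbus

I2C_BUS = 2         # the Grove I2C socket is wired to one of the AM335x I2C buses
SENSOR_ADDR = 0x40  # hypothetical module address

bus = smbus.SMBus(I2C_BUS)
raw = bus.read_word_data(SENSOR_ADDR, 0x00)  # read 16 bits starting at register 0x00
print("raw reading:", raw)
```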

To be honest, I’m of two minds when it comes to Seeed’s Grove connectors. On one hand, breadboards and DuPont cables already exist, and with the two 46-pin headers on the BeagleBone Black, there was nothing you couldn’t wire into it. The addition of Grove connectors seems superfluous and, in the most cynical view, merely an attempt to build a system of proprietary educational electronics.

On the other hand, there really isn’t any system of easy-to-use, plug-in modules for the current trend of educational electronics. Just a few years ago, people were putting out boards with RS-422 on RJ45 sockets. We don’t have DE-9 connectors anymore, and a smaller, easier to use connector is appreciated, especially when the connectors are a mere $0.15 apiece.

Then again, the intelligence of a Grove module is purely dependent on the operator. On the BeagleBone Green, there are two Grove connectors: one for I2C, and another for serial. Apart from some silkscreen, there is no differentiation between the two. On the Grove base cape, there are exactly four different implementations of the Grove connector: four I2C ports, four digital I/O ports with two GPIOs each, two connectors dedicated to analog input, and two serial ports. This is the simple way to connect a lot of devices over common wires; it is not, however, the most user-friendly.
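To be fair, none of this is hard to drive. Reading a module plugged into one of the cape’s analog sockets, for instance, is a single ADC read. Below is a minimal sketch with the Adafruit_BBIO library that ships on most BeagleBone images; note that "P9_40" is an assumption, since the base cape’s silkscreen maps each Grove socket to a specific AIN pin:

```python
# Read a Grove analog module through the BeagleBone's built-in ADC.
# "P9_40" is an assumption -- check which AIN pin your Grove socket uses.
import Adafruit_BBIO.ADC as ADC

ADC.setup()
value = ADC.read("P9_40")  # returns 0.0-1.0, scaled from the 1.8 V ADC range
print("analog reading: {:.3f}".format(value))
```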


The BeagleBone Green Wireless doesn’t really do anything new. The SoC is the same, and of course the PRUs in every BeagleBone are the killer feature for really, really fast digital I/O. The addition of WiFi is nice, and the inclusion of extra antenna connectors phenomenal, but it’s nothing a USB WiFi dongle couldn’t handle.

If anything, the BeagleBone Green Wireless is a signal for the future of the BeagleBone platform. The number of versions, each with their own small take on connectivity, is the bazaar to the Raspberry Pi’s cathedral. It’s encouraging for any fan of Open Hardware, and at the very least another tool in the shed.

Filed under: Featured, hardware, linux hacks, reviews, slider

A Dis-Integrated 6502

via hardware – Hackaday

The 6502 is the classic CPU. This chip is found in the original Apple, Apple II, PET, Commodore 64, BBC Micro, Atari 2600, and 800, the original Nintendo Entertainment System, Tamagotchis, and Bender Bending Rodriguez. This was the chip that started the microcomputer revolution, and holds a special place in the heart of every nerd and technophile. The 6502 is also possibly the most studied processor, with die shots of polysilicon and metal found in VLSI textbooks and numerous simulators available online.

The only thing we haven’t seen, until now, is a version of the 6502 built out of discrete transistors. That’s what [Eric Schlaepfer] has been working on over the past year. It’s huge – 12 inches by 15 inches – has over four thousand individual components, and so far, this thing works. It’s not completely tested, but the preliminary results look good.

The MOnSter 6502 began as a thought experiment between [Eric] and [Windell Oskay], the guy behind Evil Mad Scientist and creator of the discrete 555 and dis-integrated 741 kits. After realizing that a few thousand transistors could fit on a single panel, [Eric] grabbed the netlist of the 6502 from the Visual 6502 project. With the help of several scripts to place 4,304 components into a board design, the 6502 was made dis-integrated. If you’re building a CPU made out of discrete components, it only makes sense to add a bunch of LEDs, so [Eric] threw a few of these on the data and address lines.

This is the NMOS version of the 6502, not the later, improved CMOS version. As such, this version of the 6502 doesn’t have all the instructions some programs would expect. The NMOS version is also slower, more prone to noise, and is not a static CPU: its dynamic logic loses state if the clock is stopped or slowed too far, so there’s a minimum clock speed below which it won’t run at all.

So far, the CPU is not completely tested, and [Eric] doesn’t expect it to run faster than a few hundred kilohertz anyway. That means this gigantic CPU can’t be dropped into an Apple II or a Commodore; those computers expect a CPU running at a specific speed (the Apple II’s 6502, for instance, is clocked at just over 1 MHz). It will, however, work in a custom development board.

Will the gigantic 6502 ever be for sale? That’s undetermined, but given the interest this project is sure to receive, it seems a foregone conclusion.

Filed under: classic hacks, hardware

Apple Introduces Their Answer To The Raspberry Pi

via hardware – Hackaday

Today, Apple has announced their latest bit of hardware. Following in the tradition of the Raspberry Pi, BeagleBone, and the Intel Edison, Apple has released a single board computer meant for embedded and Internet of Things applications. It’s called the Apple Device, and it’s sure to be a game changer in the field of low-power, Internet-enabled computing.

First off, some specs. The Apple Device uses Apple’s own A8 chip, the same dual-core 64-bit CPU found in the iPhone 6. This CPU is clocked at 1.1 GHz, and comes equipped with 1GB of LPDDR3 RAM and 4GB of eMMC Flash. I/O includes a Mini DisplayPort capable of driving a 4k display, 802.11ac, Bluetooth, and USB. In a massive break from the Apple zeitgeist of the last decade or so, the Apple Device also includes a forty pin header for expansion, much like the Raspberry Pi, BeagleBone, and Edison.

Although Apple’s first foray into the embedded computing market is a shocker, in retrospect it should come as no surprise; the introduction of HomeKit in iOS 8 laid the groundwork for an Internet of Apple Devices, embedded into toasters, refrigerators, smart homes, and cars. The Apple Device lives up to all these expectations, but what is the hands-on experience like?

See our review of the Apple Device after the break.


The first question anyone should ask when discussing an Apple Internet of Things board is, ‘what operating system does it run?’ This is, by far, the most pressing concern – you don’t need a purely graphical OS with a headless machine, and you don’t want an OS that is locked down with proprietary cruft.

Of course, the idea that Apple is built upon proprietary and locked-down operating systems is a myth. iOS and Mac OS are built on Darwin, an open source foundation based on BSD. This is the core of the DeviceOS, and booting the device (through a terminal, no less!) provides everything you could ever need in a tiny, single board computer.

Apple isn’t committing themselves to a purely command-line board here, though; Apple hasn’t built a computer like that for more than 30 years. There’s a Mini DisplayPort on the Apple Device, and of course that means a GUI. It’s obviously not as capable as the full-featured OS X, but it is very useful. It’s not iOS, either; I’d compare it to a Chromebook – just enough to do the job, and not much else.

User Experience

Using the Apple Device is dead simple. Just plug in a USB cable, open up a terminal, and you’re in.

The Apple Device is intended to run headless most of the time; in fact, that’s how Apple expects you to set the device up.

To power the device, all you need to do is connect the Device to a computer and open up a terminal. This drops you into the Device’s shell, with access to the entire unix-ey subsystem. Once the WiFi credentials are set, just unplug the device from the computer, plug it into a micro USB charger, and the Device is connected to the Internet.

This USB port isn’t just for power – it’s also the way to connect keyboards, mice, peripherals, and USB thumb drives. You will, however, need the Apple Powered Device USB Hub (sold separately), which breaks out the USB into four USB Type A ports while backpowering the Apple Device. It’s a brilliant physical user interface for a device that will run headless most of the time, but still requires a few ports to be useful.

Of course, if you’re not running the Apple Device headless, you’ll need to connect a monitor. This is where the Mini DisplayPort comes in. Boot the device with the Powered Device USB Hub, plug in a monitor, and you’re presented with a ‘desktop’ that’s not really OS X, and not really iOS, either. It’s minimal, and almost Chromebook-esque.

The OS looks a little bit like OS X, but it’s not. Right now, the only things to do in the DeviceOS are the HomeKit app, the Safari browser, and playing around with the settings. If you want to look something up on the Internet, just click on the Safari icon. If you want to change the WiFi network or the device name, just go into the settings.

Configured as a desktop, you can see the brilliance of Apple in the Device. It’s not a desktop computer, but neither is a Chromebook. Considering most people can do most of their work on the web, this is a game-changing device. Combine it with Apple’s iCloud, and you have something that will be exceptionally popular.

The Downside of Apple

Apple has their hands in a lot of cookie jars. The Apple TV is their device for streaming videos and music to a TV, and giving the Apple Device this functionality would cut into sales of the Apple TV. Since the Apple Device sells for $100 less than the Apple TV, this would be a bad move, even if Apple is sitting on billions in cash.

Also, even though the Apple Device has a 40-pin header right on the board, there is no documentation anywhere of what these pins can be used for. The Raspberry Pi’s 40-pin header is well documented and can be used for everything from environmental sensors to VGA and audio output. Apple makes no mention of what these pins can do, although we do expect a few ‘shields’ to be released in short order.


Built around the Apple A8 chip, the Apple Device is extremely capable, especially compared to its assumed competitors, the Raspberry Pi and Intel Edison.


In terms of raw horsepower, the Apple Device handily beats the Raspberry Pi 2 and the Intel Edison. This should be expected; the A8 chip in the Apple Device is extremely powerful compared to the silicon in the other single board computers.

But what about graphics? The GPU in the Raspberry Pi is a huge reason why that board is popular, and being able to stream a few movies to the Apple Device would mean the Apple TV will be quickly taken out of production.


3D acceleration is better, but when it comes to h.264 encoding, the Apple Device falls a little short. Even compared to the rather sluggish Raspberry Pi Zero, the Apple Device is no match for the berry boards.

This is purely speculation, but I suspect h.264 encoding is disabled in the Apple Device. The reason is clear – Apple already sells a single board computer meant for streaming video to a TV. Giving the Apple Device the same capabilities as the Apple TV would cut into that market. Sadly, it looks like the video capabilities of the Apple Device are limited to digital signage. That’s disappointing, but given Apple’s strategy for the last twenty years, not unexpected. We do look forward to the eventual hack, root, or exploit that will unlock the powerful graphics capabilities the A8 chip already has.


Right off the bat, the Apple Device is an amazing piece of hardware. It’s incredibly small, has a lot of computational power, and works with all the HomeKit devices you already own. The ability to just plug it into a computer and have a tiny *nix device connected to the Internet is great, and the user interface is so far ahead of the Raspberry Pi, BeagleBone, or any of the Intel offerings that it isn’t even a fair comparison. Apple has distilled the best design from their entire product line into something anyone – including engineers and the tech literati – would find useful.

However, there are a few glaring limitations of the Apple Device. Crippled h.264 support is a big one, as the most popular use cases for the Raspberry Pi seem to be retro game emulators and streaming video from a network. Apple does what Apple will do, and the Device would probably cut into the market for the Apple TV.

Another shortcoming is the 40-pin header. Right now, there is no documentation whatsoever for the very small pin header located on the board, and I couldn’t find anything in the *nix system relating to peripherals that might be connected to those pins. In any event, the pins are on a 1.0 mm pitch, making it very hard to probe one of them with a scope or meter if you don’t have the mating connector. If anyone has seen this connector in the wild, I’d love to hear about it in the comments.

What Apple has done here is no different from the Raspberry Pi or any of the other ARM-powered single board computers released in recent years. They’ve taken a CPU from a smartphone that is now a few years old, added a few peripherals, and slapped it on a board. By itself, this is nothing new. That’s exactly what the Raspberry Pi is, and what all of the Raspberry Pi ‘clones’ are, from the Banana Pi to the Odroid.

While the hardware is somewhat predictable, the software is where this really shines. It’s not built for iOS, given this device is designed to run headless most of the time. It can be used purely through a command line, making this the perfect device for the Internet of Things. It’s also a tiny desktop computer, and somewhat usable given the power of the A8 chip. Since the Apple Device sells for about $50, Apple really hit it out of the park with this one, despite the obvious and not so obvious shortcomings. We’re really looking forward to Apple taking the popularity of the Raspberry Pi and other single board computers to the next level, and this is the device that will do it.

As a quick aside, it should be noted the Apple Device is technically Apple’s second single board computer. The first Apple product, the Apple I, released in 1976, would today fall into the same market as the Raspberry Pis, Beaglebones, and now the Apple Device. If you take the launch of the Apple I to be the date Apple was founded (April 1, 1976), today is the 40th anniversary of the first Apple product. The Apple I is now a museum piece and the finest example of Apple’s innovation over the years. The Apple Device follows in this tradition, and is nearly guaranteed to be held in as high regard as the board that came out of the [Steve]s’ garage.

Filed under: Featured, hardware, iphone hacks

Winning the Console Wars – An In-Depth Architectural Study

via Hackaday » hardware

From time to time, we at Hackaday like to publish a few engineering war stories – the tales of bravery and intrigue in getting a product to market, getting a product cancelled, and why one technology won out over another. Today’s war story is from the most brutal and savage conflicts of our time, the console wars.

The thing most people don’t realize about the console wars is that it was never really about the consoles at all. While the war was divided along the Genesis / Mega Drive and the Super Nintendo fronts, the battles were between games. Mortal Kombat was a bloody battle, but in the end, Sega won that one. The 3D graphics campaign was hard, and the Star Fox offensive would be compared to the Desert Fox’s success at the Kasserine Pass. In either case, only Sega’s 32X and the British 7th Armoured Division entering Tunis would bring hostilities to an end.

In any event, these pitched battles are left to be interpreted and reinterpreted by historians evermore. I can only offer my war story of the console wars, and that means a deconstruction of the hardware.

An Architectural Study of the Sega Genesis and Super Nintendo

The traditional comparison between two consoles is usually presented as a series of specs, a bunch of numbers, and tick marks indicating which system wins in each category. While this does illustrate the strengths and weaknesses of each console, it is a rhetorical technique that is grossly imprecise, given the different architectures. The usual benchmark comparison is as follows:


Conventional wisdom – and people arguing on the Internet – tells you that faster is better, and with the Sega console having a higher clock speed it’s capable of doing more calculations per second. Sure, it may not be able to draw as many sprites on the screen as the SNES, but the faster processor is what allowed the Genesis / Mega Drive to have ‘faster’ games – the Sonic series, for example, and the incredible library of sports games. It’s an argument wrapped up in specs so neatly this conventional wisdom has been largely unquestioned for nearly thirty years. Even the Internet’s best console experts fall victim to the trap of comparing specs between different architectures, and it’s complete and utter baloney.

Let’s take a look at one of these numbers – the CPU speed of the SNES and the Genesis/Mega Drive. The SNES CPU, a Ricoh 5A22, is based on the 65C816 core, an oft-forgotten 16-bit offshoot of the 6502 and its relatives, chips found in everything from the Apple II and Commodore 64 to the original Nintendo NES. The 5A22 inside the SNES is clocked at around 2.68 MHz for most games. The Sega used a 68000 CPU clocked at 7.67 MHz. Comparing just these two numbers, the Sega wins, but this isn’t necessarily the truth.

In comparing the clock speed of two different CPUs, we’re merely looking at how frequently the bus is accessed, and not the number of instructions per second.

In the 68000, each instruction requires at least eight clock cycles to complete, whereas the 65C816 – like the 6502 it descends from – can execute an instruction every two or three clock cycles. This means the Sega could handle around 900,000 instructions per second, maximum. The SNES could compute around 1.7 million instructions per second, despite its lower clock speed.

Even though the Sega console has a faster clock, it performs fewer instructions per second.
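As a back-of-the-envelope check, the arithmetic works out like this. It is a deliberately naive estimate, using best-case cycle counts only; real instruction mixes vary widely, and the 5A22 runs some accesses at a faster 3.58 MHz:

```python
# Naive best-case instructions-per-second estimate from clock speed
# and minimum cycles per instruction. Real instruction mixes vary widely.
genesis_hz = 7_670_000  # 68000 clock in the Genesis / Mega Drive
snes_hz    = 2_680_000  # 5A22 clock for most accesses (3.58 MHz with fast ROM)

ips_68000 = genesis_hz / 8  # 68000: at least 8 clock cycles per instruction
ips_65816 = snes_hz / 2     # 65C816: as few as 2 clock cycles per instruction

print(f"68000:  ~{ips_68000:,.0f} instructions/second")  # ~959,000
print(f"65C816: ~{ips_65816:,.0f} instructions/second")  # ~1,340,000, more at 3.58 MHz
```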

And so we come to the crux of the argument; the statistics of the great console wars, while not wrong, are frequently misinterpreted. How then do we decide an outcome?

The Architecture of the Sega Genesis / Mega Drive

While the Sega Genesis/Mega Drive is usually cited as having a 68000 CPU, this isn’t a complete picture of what’s going on inside the Sega console. In effect, the Genesis is a dual-processor computer with two CPUs dedicated to different tasks. The 68000 handles game logic and graphics, but surprisingly not much else. A Z80 – a CPU introduced a decade before the Genesis/Mega Drive – is used for reading the state of the gamepads and for playing audio.

Interestingly, the Sega Genesis / Mega Drive contains most of the components of Sega’s earlier console, the Sega Master System. With the addition of a Power Base Converter, Master System games can be played while the 68000 CPU sits idle.

The Architecture of the Super Nintendo


The SNES is a different beast entirely. Everything is controlled through the 5A22 / 65816 CPU. The controllers are fed right into the data lines of the 5A22, and DMA instructions are able to shuttle data between the two slightly different Picture Processing Units.

An interesting difference between the two consoles is the connection between the cartridge slot and various peripheral chips. Nearly the entire cartridge connector of the Sega machine is dedicated to the address and data lines of the 68000 CPU. While there are a few control signals thrown in, it’s not enough to allow the cartridge direct access to the video display unit or the FM synthesis chip.

The cartridge connector for the SNES, on the other hand, has direct access to one picture processing unit and the audio processor. The exploitation of this capability was seen in games ranging from Star Fox with its SuperFX chip, to the Mega Man X games with their math coprocessor, to Super Mario RPG: Legend of the Seven Stars and its Super Accelerator 1 chip, which is basically an upgraded version of the main SNES CPU, the 5A22.

Comparative designs, and who won the console wars

Time to make a holistic analysis of each competing platform. By far, the SNES is the more capable console; its cartridges are able to send data directly to the PPUs and audio processor, it executes more instructions per second, and there’s more work RAM, vRAM, and audio RAM.

The Sega ‘Tower of Power’. Image credit /u/bluenfee

The Genesis / Mega Drive may be seen as more expandable thanks to the Sega CD (one of the first CD-ROM add-ons for a game console), the Sega 32X (an upgraded coprocessor for the Genesis), backwards compatibility with the Master System through the Power Base Converter, and a number of strange cartridges like Sonic & Knuckles with its ‘lock-on’ technology. However, it’s actually the SNES that is more expandable. This is most certainly not the conventional wisdom, and the difference is due to how expandability was implemented in each console.

To add additional capabilities to the SNES, game designers would add new chips to the game cartridge. Star Fox famously exploited this with the SuperFX chip, and the list of SNES enhancement chips is deserving of its own entry in Wikipedia.

In comparison, the Genesis / Mega Drive could only be expanded through kludges – either through abusing the ASIC chip between the 68000 and Z80 CPUs, or in the case of the 32X add-on, bypassing the video display unit entirely.

In any event, this is purely an academic exercise. Games sell consoles, and Nintendo’s IP portfolio – even in the early 90s – included characters that had their own cartoons, live action movies, and cereals.

While this academic exercise is completely unimportant today – there probably won’t be another game console that ships with cartridges – it is an interesting case study in expandable computer design.

Filed under: computer hacks, Featured, hardware

Becoming A Zombie with the Hackable Electronic Badge

via Hackaday » hardware

Last week, Parallax released an open hackable electronic badge that will eventually be used at dozens of conferences. It’s a great idea that allows badge hacks developed during one conference to be used at a later conference.

[Mark] was at the Hackable Electronics Badge premiere at the 2015 Open Hardware Summit last weekend, and he just finished up the first interactive hack for this badge. It’s the zombie apocalypse in badge form, pitting humans and zombies against each other at your next con.

The zombie survival game works with the IR transmitter and receiver on the badge normally used to exchange contact information. Upon receiving the badge, the user chooses to be either a zombie or survivor. Pressing the resistive buttons attacks, heals, or infects others over IR. The game is your standard zombie apocalypse affair: zombies infect survivors, survivors attack zombies and heal the infected, and the infected turn into zombies.
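The rule set is simple enough to sketch out as a little state machine. Everything below is illustrative; the role names, messages, and the incubation timer are assumptions, not [Mark]’s actual badge code:

```python
import time

HUMAN, INFECTED, ZOMBIE = "human", "infected", "zombie"
INCUBATION = 60  # seconds before an untreated infection turns (assumed value)

class Player:
    """One badge's view of the game, driven by messages from the IR receiver."""

    def __init__(self, role=HUMAN):
        self.role = role
        self.infected_at = None

    def on_ir_message(self, msg):
        """React to a button-press message beamed over from another badge."""
        if msg == "infect" and self.role == HUMAN:     # zombies infect survivors
            self.role, self.infected_at = INFECTED, time.time()
        elif msg == "heal" and self.role == INFECTED:  # survivors heal the infected
            self.role, self.infected_at = HUMAN, None
        elif msg == "attack" and self.role == ZOMBIE:  # survivors attack zombies
            self.role = "defeated"  # assumption: one hit takes a zombie out

    def tick(self):
        """Called periodically: untreated infections turn into zombies."""
        if self.role == INFECTED and time.time() - self.infected_at > INCUBATION:
            self.role, self.infected_at = ZOMBIE, None
```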

Yes, a zombie apocalypse is a simple game for a wearable with IR communications, but for the Hackable Electronics Badge, it’s a great development. There will eventually be tens of thousands of these badges floating around at cons, and having this game available on day one of a conference will make for a lot of fun.

Filed under: cons, hardware, wearable hacks


via Hackaday » hardware

HDMI is implemented on just about every piece of sufficiently advanced consumer electronics. You can find it in low-end cellphones, and a single board Linux computer without HDMI is considered crippled. There’s some interesting stuff lurking around in the HDMI spec, and at DEF CON, [Joshua Smith] put the Consumer Electronics Control (CEC) part of HDMI under the microscope, exposing a few vulnerabilities in a protocol that’s in everything with an HDMI port.

CEC is designed to control multiple devices over an HDMI connection; it allows your TV to be controlled from your set top box and your DVD player from your TV, and it passes text from one device to another for on-screen displays. It’s a 1-wire bidirectional bus with 500 bits/second of bandwidth. There are a few open source implementations, like libCEC, Android HDMI-CEC, and even an Arduino implementation. The circuit to interface a microcontroller with the single CEC pin is very simple – just a handful of jellybean parts.
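Getting on the bus from a Linux box is just as approachable. Here is a minimal sketch using the python-cec bindings to libCEC; it assumes a supported CEC adapter is attached (a Pulse-Eight USB dongle or a Raspberry Pi’s HDMI port), and the exact API surface may differ between binding versions:

```python
# Poke at the CEC bus through libCEC's Python bindings (python-cec).
# Assumes a supported adapter is attached; treat this as a sketch,
# since the binding's API varies between versions.
import cec

cec.init()  # open the first detected CEC adapter

tv = cec.Device(cec.CECDEVICE_TV)  # logical address 0 is always the TV
print("TV powered on:", tv.is_on())
print("OSD name:", tv.osd_string)  # the name the TV announces on the bus

tv.power_on()  # sends "Image View On", the same command a set top box uses
```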

[Joshua]’s work is based on a talk by [Andy Davis] from Blackhat 2012 (PDF), but greatly expands on it. After looking at a ton of devices, [Joshua] was able to find some very cool vulnerabilities in a specific Panasonic TV and a Samsung Blu-ray player.

One CEC command directed at the Panasonic TV told the set to upload new firmware from an SD card. This is somewhat odd, as you would think firmware would be automagically downloaded from an SD card, just like on thousands of other consumer electronics devices. On the Samsung Blu-ray player, a few memcpy() calls were found to be reachable through CEC commands, but they’re not easily exploitable yet.

As far as vulnerabilities go, [Joshua] has a few ideas. Game consoles and Blu-ray players are ubiquitous, and the holy grail – setting up a network connection over HDMI Ethernet Channel (HEC) – is the key to the castle in a device no one would ever think of taking a close look at.

Future work includes a refactor of the current code and digging into more devices. There are millions of CEC-capable devices out on the market right now, and the CEC commands themselves are not standardized; the only way for HDMI CEC to be a reliable tool is to figure out the commands for these devices. It’s a lot of work, but it makes for a great call to action to get more people investigating this very interesting and versatile protocol.

Filed under: cons, hardware