Author Archives: Elliot Williams

What’s Inside A Neonode Laser Sensor?


Every once in a while, you get your hands on a cool piece of hardware, and of course, it’s your first instinct to open it up and see how it works, right? Maybe see if it can be coaxed into doing just a little bit more than it says on the box? And so it was last Wednesday at the Embedded World trade fair, when I stumbled on a cool touch display apparently floating in mid-air.

The display itself was a sort of focused Pepper’s Ghost illusion, reflected off of an expensive mirror made by Aska3D. I don’t know much more — I didn’t get to bring home one of the fancy glass plates — but it looked pretty good. But this display was interactive: you could touch the floating 2D projection as if it were actually there, and the software would respond. What was doing the touch response in mid-air? I’m a sucker for sensors, so I started asking questions and left with a small box of prototype Neonode zForce AIR sensor sticks to take apart.

The zForce sensors are essentially an array of IR lasers and photodiodes with some lenses that limit their field of view. The IR light hits your finger and bounces back to the photodiodes on the bar. Because the photodiodes have a limited angle over which they respond, they can be used to triangulate the distance of the finger above the display. Scanning quickly among the IR lasers and noting which photodiodes receive a reflection can locate a few fingertips in a 2D space, which explained the interactive part of the floating display. With one of these sensors, you can add a 2D touch surface to anything. It’s like an invisible laser harp that can also sense distance.
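Neonode doesn’t publish the exact math, but the basic principle is plain old two-ray triangulation: the laser fires along a known angle, the photodiode only responds along its own narrow viewing angle, and the fingertip has to sit where the two rays cross. As a rough sketch of the geometry (my parameterization, not theirs), with an emitter at position x_e and a detector at x_d along the bar, and both angles measured up from the bar:

h = \frac{x_d - x_e}{\cot\theta_e + \cot\theta_d}, \qquad x = x_e + h\,\cot\theta_e

Here h is the fingertip’s distance out from the bar and x its position along it. The real firmware presumably fuses many emitter/detector pairs to get its 2D estimates, but that’s the flavor of it.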

The intended purpose is fingertip detection, and that’s what the firmware is good at, but it should also be able to detect the shape of arbitrary (concave) objects within its range, and that was going to be my hack. I got 90% of the way there in one night, thanks to affordable tools and free software that every hardware hacker should have in their toolbox. So read on for the unfortunate destruction of nice hardware, a tour through some useful command-line hardware-hacking tools, and the gratuitous creation of animations from sniffed SPI-like data pulled off of some test points.

Cracking Open the Case

In retrospect, it’s probably not necessary to take one of these things apart — the diagrams on the manufacturer’s website are very true to life. Inside, you’ll find a PCB with an IR laser and photodiode every 8 mm, some custom-molded plastic lenses, and a couple of chips. Still, here are the pretty pictures.

The lenses are neat, consisting of a 45-degree mirror that allows the PCB-mounted diodes to emit and receive out of the thickness of the bar. The lasers and photodiodes share lenses, reducing the manufacturing cost. Most of the thin PCB after the first three cells is “empty”, existing just to hold the laser and photodiode chips. It’s a nice piece of hardware.

The chip-on-board ICs aren’t even covered in epoxy — although these boards are marked “prototype”, so who knows if this is true of the production units. According to the advertising copy, one of these two chips is a custom ASIC that does the image processing in real time and the other is an STM32 ARM microcontroller. Speculate about which is which in the comments!

The PCB is glued in place under the metal frame, and there are certainly no user-serviceable parts inside. Sadly, some bond wires got pulled loose when I was removing the PCB. Now that I’ve put this one sensor stick back together, an area near the custom ASIC gets hot. Sacrificed for my idle curiosity! Sigh.

The Basics: The USB Interface

I was given a prototype version of the sensor demo kit, which includes a breakout board for the USB and I2C finger-detection functionalities of the sensors, so of course I had to test them out. Plugging it in and typing dmesg showed the “Neonode AirModule 30v” enumerating as a USB HID device, which means that figuring out its USB functionality is going to be a cakewalk, because all the data it spits out is described in its HID report descriptor.

I used usbhid-dump to read in the descriptor, and [Frank Zhao]’s excellent descriptor decoder to get it all in plain text. It looks like it’s a standard digitizer that supports six fingers and has a vendor-defined configuration endpoint. Here’s the descriptor dump if you want to play along. Dumps like this are great starting points if you’re making your own USB HID device, by the way. See what functionalities your mouse has.
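Getting the raw dump in the first place is a one-liner. Something like this (the 3:9 bus:device address is a stand-in for whatever lsusb reports on your machine, and double-check usbhid-dump --help if your version’s flags differ) prints the report descriptor as hex, ready to paste into a decoder:

sudo usbhid-dump -a 3:9 -e descriptor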

But frankly, poring through a descriptor file is hard work. dmesg said the sensor was attached as /dev/usb/hiddev3, so why not see what it’s doing in real time while I wave my finger around? sudo cat /dev/usb/hiddev3 | xxd takes the raw binary output and passes it through xxd, which turns it into a “human-readable” hexdump. (The genius of xxd is the -r flag, which turns your edited hexdump back into a binary, enabling 1990s-era cracking on the command line.) Just by watching the numbers change and moving my finger, I figured out which field represented the finger count, and which fields corresponded to the 16-bit X- and Y-coordinates of each finger. And it’s reporting zeroes for the un-measured fingers, as per the descriptor.
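Since I mentioned it, here’s what that 1990s-era round trip looks like in practice; the filenames are just placeholders:

xxd target.bin > target.hex      # binary to an editable hexdump
$EDITOR target.hex               # flip the bytes you care about
xxd -r target.hex > patched.bin  # hexdump back to binary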

Of course, all of this, as well as the complete specs for the I2C interface are available in the zForce online documentation. The commands are wrapped up in ASN.1 format, which is a dubious convenience. Still, if all you want to do is use these devices to detect fingers over USB or I2C, it’s just a matter of reading some documentation and writing a driver.

Logic Analyzer vs. Test Points

To have a little more fun with these sensor bars, I started poking around the test points exposed on the back of the unit. The set closest to the output connector is mostly duplicates of the pins on the connector itself, and isn’t that interesting. More fun is a constellation (Pleiades?) of seven test points that seem to be available only on the sensor bars longer than 180 mm.

One point had a clear 21 MHz clock signal running periodically, and two other lines had what seemed to be 10.5 MHz data, strongly suggesting some kind of synchronous serial lines. Two other pins in this area emitted pulses, probably relating to the start of a sensor sweep and the start of processed data, but that wouldn’t be obvious until I wired up some jumpers and connected a logic analyzer.

I really like the open-source Sigrok software for this kind of analysis. The GUI PulseView makes it fairly painless to explore signals that you don’t yet understand, while switching up to the command-line sigrok-cli for repetitive tasks makes some fairly sophisticated extensions easy. I’ll show you what I mean.

I started off with a Saleae Logic clone, based on the same Cypress FX2 chip. These are a great deal for $5 or so, and the decent memory depth and good Sigrok support make up for the low 24 MHz sampling rate. That gave me a good overview of the signals and confirmed that the device goes into a low-power scan mode when no fingers are present, and that triggering when pin 5 went low isolated the bulk of the data nicely. But in order to actually extract whatever data was on the three synchronous serial pins, I needed to move up in speed.
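For reference, grabbing one of those overview traces from the clone is itself a one-liner; fx2lafw is the open-source driver/firmware that Sigrok uses for these FX2 boards, and the sample count and output filename here are arbitrary stand-ins:

sigrok-cli --driver=fx2lafw --config samplerate=24m --samples 2m -o overview.sr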

The Openbench Logic Sniffer (OLS) will do 100 MHz, which is plenty for this application, but it has a very short 24 k sample memory that it has to relay back to Sigrok over a 115,200 baud serial line. Still, I could just squeeze a full read in at 50 MHz. Using Sigrok’s SPI decoders on the clock and two data lines gave me what looked like good data. But how to interpret it? What was it?

The Command Line, Graphics, and Real-Time Fingerwaving

Getting one capture out of PulseView is easy, but figuring out what that data means required building up a tiny toolchain. The command line and sigrok-cli to the rescue:

sigrok-cli --driver=ols:conn=/dev/ttyACM3 --config samplerate=50m --samples 20k \
           --channels 0-5 --wait-trigger --triggers 5=0 -o capture.sr

This command reads from the OLS on the specified serial port, waits for a trigger when channel 5 goes low, and writes the capture out in Sigrok’s (zipped) data format. (The capture.sr filename is just a stand-in.)

sigrok-cli -i capture.sr -P spi:clk=0:miso=1:cpha=1 -B spi | tail -c +3 > spi1.bin
sigrok-cli -i capture.sr -P spi:clk=0:miso=2:cpha=1 -B spi | tail -c +3 > spi2.bin

These two commands run the SPI decoders on the acquired data. It’s not necessary to do this in a separate step unless you’d like the output in two separate files, as I did. The -P flag specifies the protocol decoder, and -B tells it to output just the decoded data in binary. The tail command aligns the data by chopping off the header bytes at the start.

Now for the real trick: plotting the data, waving my finger around interactively, and hoping to figure out what’s going on. You’d be surprised how often this works with unknown signals.

t=`date +%s`
convert -depth 8 -size 15x45+0 gray:spi1.bin -scale 200 out1_${t}.png
convert -depth 8 -size 15x45+0 gray:spi2.bin -scale 200 out2_${t}.png
convert  out1_${t}.png spacer.png out2_${t}.png +append image_${t}.png
convert  out1_${t}.png spacer.png out2_${t}.png +append foo.png

Convert is part of the ImageMagick image editing and creation toolset. You can spend hours digging into its functionality, but here I’m just taking bytes from a file, interpreting them as grayscale pixels, combining them into an image of the specified dimensions, and scaling it up so that it’s easier to see. That’s done for each data stream coming out of the sensor.

The two are then combined side-by-side (+append) with a spacer image between them, timestamped, and saved. An un-timestamped version is also written out so that I could watch progress live, using eog because it automatically reloads an image when it changes.
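Wrapped up in a loop, the whole capture-decode-plot cycle looks something like this. It’s a sketch of what I ran rather than a polished script: capture.sr is just a scratch filename, and your OLS will probably enumerate on a different port.

while true; do
    t=`date +%s`
    # one triggered capture from the OLS
    sigrok-cli --driver=ols:conn=/dev/ttyACM3 --config samplerate=50m --samples 20k \
               --channels 0-5 --wait-trigger --triggers 5=0 -o capture.sr
    # decode both data lines against the shared clock, dropping the header bytes
    sigrok-cli -i capture.sr -P spi:clk=0:miso=1:cpha=1 -B spi | tail -c +3 > spi1.bin
    sigrok-cli -i capture.sr -P spi:clk=0:miso=2:cpha=1 -B spi | tail -c +3 > spi2.bin
    # render each stream as a 15x45 grayscale image, then stack the two side by side
    convert -depth 8 -size 15x45+0 gray:spi1.bin -scale 200 out1_${t}.png
    convert -depth 8 -size 15x45+0 gray:spi2.bin -scale 200 out2_${t}.png
    convert out1_${t}.png spacer.png out2_${t}.png +append image_${t}.png
    convert out1_${t}.png spacer.png out2_${t}.png +append foo.png   # live view for eog
done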

Cobbling all of this together yields a flow that takes the data in from the logic analyzer, decodes it into bytes, and turns the bytes into images. That was enough for me to see that I’m capturing approximate position data from (probably) the output of Neonode’s custom ASIC. But why stop there? I turned the whole endeavor into a video by combining the images at 8 FPS:

ffmpeg -r 8 -pattern_type glob -i "image_*.png" \
  -c:v libx264 -vf fps=8 -pix_fmt yuv420p finger_hand_movie.mp4

That’s me moving my finger just above the bar’s surface, and then out of range, and then moving one hand, and then both around in the frame. The frames with less data are what it does when nothing is detected — it lowers the scanning rate and apparently does less data processing. You can also see the reason for picking the strange width of 15 pixels in the images — there are 30 photodiodes in this bar, with data for 15 from one side apparently processed separately from the 15 on the other. Anyway, picking a width of 15 makes the images wrap around just right.

There’s a bunch of data I still don’t understand. The contents of the header and the blob that appears halfway down the scan are still a mystery. For some reason, the “height” field on the bottom side of the data is reversed from the top side — up is to the right in the top half and to the left in the lower half.

But even with just what I got from dumping SPI data and plotting it out, it’s pretty apparent that I’m getting the post-processed X/Y estimate data, and it has no problem describing the shapes of simple objects, at least ones like the flat palm of my hand. It’s a much richer dataset than the default finger-sensor output, so I’ll call the hack a success so far. Maybe I’ll put a pair of these in a small robot, or maybe I’ll just make a (no-)touch-pad for my laptop.

Regardless, I hope you had as much fun reading along as I did hacking on these things. If you’re not a command-line ninja, I hope that I showed you some of the power that you can have by simply combining a bunch of tools together, and if you are, what are some of your favorite undersung data analysis tools like these?

Making a Cheap Radar Unit Awesome


[JBeale] squeezed every last drop of performance from a $5 Doppler radar module, and the secrets of that success are half hardware, half firmware, and all hack.

On the hardware side, the first prototype radar horn was made out of cardboard with aluminum foil taped around it. With the concept proven, [JBeale] made a second horn out of thin copper-clad sheets, but reports that the performance is just about the same. The other hardware hack was simply to tack a wire on the radar module’s analog output and add a simple op-amp gain stage, which extended the sensing range well beyond the ten feet or so that these things are usually used for.

With all that signal coming in, [JBeale] separates out the noise by taking an FFT of the Doppler frequency-shift signal. Figuring that people walk at around 2.2 miles per hour, [JBeale] focuses on the corresponding 70 Hz frequency bin and finds that the radar will detect people out to 80 feet. Wow!
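For the curious, that 70 Hz bin falls right out of the Doppler formula. Assuming the HB-100’s usual 10.525 GHz carrier (our number, not one from [JBeale]’s write-up), a 2.2 mph walker, about 0.98 m/s, shifts the return by roughly

f_d = \frac{2 v f_0}{c} \approx \frac{2 \times 0.98\ \mathrm{m/s} \times 10.525\times 10^{9}\ \mathrm{Hz}}{3\times 10^{8}\ \mathrm{m/s}} \approx 69\ \mathrm{Hz}

so a walking human lands almost exactly where [JBeale] is looking.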

This trick of taking an el-cheapo radar unit and amplifying the signal to do something useful isn’t new to Hackaday. [Mathieu] did it with the very same HB-100 unit way back in 2013, and then again with a more modern CDM324 model. But [JBeale]’s hacked horn and clever backend processing push out the limits of what you can expect to do with these cheap units. Kudos.

[via PJRC]

Filed under: hardware, wireless hacks

Get Inside a TCXO Clock Chip


[Pete] wondered how real-time clock modules could be selling on eBay for $1.50 when the main component, the Maxim DS3231 RTC/TCXO chip, cost him more like $4 apiece. Could the cheap modules contain counterfeit chips?

Well, sure they could. But in this case, they didn’t, and [Pete] has the die shots to prove it. He started off by clipping the SOIC leads rather than desoldering — he’s not going to be reusing this chip after he’s cut it in half. Next was a stage of embrittling the case by heating it up with a lighter and dunking it in water. Then he went at it with sandpaper.

It’s cool. You can see the watch crystal inside, and all of the circuitry. The DS3231 includes a TCXO — a temperature-compensated crystal oscillator — and it seems to have a bank of capacitors that it connects and disconnects depending on the chip’s temperature to keep the oscillator running at the right speed. [Pete] used one in an offline situation, and it only lost sixteen seconds over a year, so we’d say that they work fine.
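Running the numbers on that, sixteen seconds in a year works out to roughly

\frac{16\ \mathrm{s}}{365 \times 86400\ \mathrm{s}} \approx 0.5\ \mathrm{ppm}

which is comfortably inside the ±2 ppm that the DS3231 datasheet promises over 0 °C to 40 °C.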

If you’d like to know more about how crystals are used to keep time, check out [Jenny]’s excellent article. And if sixteen seconds per year is way too much for you, tune up your rubidium standard and welcome to the world of the time nuts.

Filed under: hardware

Everything You Need To Know About Logic Probes


We just spent the last hour watching a video, embedded below, that is the most comprehensive treasure trove of information regarding a subject that we should all know more about — sniffing logic signals. Sure, it’s a long video, but [Joel] of [OpenTechLab] leaves no stone unturned.

At the center of the video is the open-source sigrok logic capture and analyzer. It’s great because it supports a wide variety of dirt-cheap hardware platforms, including the Saleae Logic and its clones. Logic is where it shines, but it’ll even log data from certain scopes, multimeters, power supplies, and more. Not only can sigrok decode raw voltages into bits, but it can interpret the bits as well using protocol decoder plugins written in Python. What this all means is that someday, it will decode everything. For free.

[Joel] knows a thing or two about sigrok because he started the incredibly slick PulseView GUI project for it, but that doesn’t stop him from walking you through the command-line interface, which is really useful for automated data capture and analysis, if that’s your sort of thing. Both are worth knowing.
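If you want a taste of that automated flow, a single sigrok-cli invocation will capture from one of the cheap FX2-based analyzers and run a protocol decoder over the result in one shot. The pin assignments below are hypothetical, so point them at whatever you actually clipped onto:

sigrok-cli --driver=fx2lafw --config samplerate=1m --samples 100k -P i2c:scl=D0:sda=D1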

But it’s actually the hardware details where this video shines. He breaks down all of the logic probes on his bench, points out their design pros and cons, and uses that basis to explain just what kind of performance you can expect for $20 or so. You’ll walk away with an in-depth understanding of the whole toolchain, from grabber probes to GUIs.

Indeed, toward the end, [Joel] simulates the parasitic inductance of using flying-lead connectors and demonstrates how it destroys high-frequency signals. Even if the logic analyzer were able to sample fast enough, no signal makes it that far. But he doesn’t wave his hands around the problem — he shows you. And then he mentions hard-core-hacker-friendly tricks to get up into the hundreds of megahertz, with a hat-tip to [Bunnie Huang].

In our opinion, sigrok and the el cheapo hardware logic probes that it supports deserve a place in every hacker’s toolbox. This video is the best introduction to the software, and to the topic in general, that we’ve ever seen.

We’ve featured two other videos from [OpenTechLab], one on the iCE40 FPGA and one on high-frequency synthesis, that you might also like. So put away whatever brain-rotting Hollywood blockbuster you have on the shelf, and make some popcorn for quality nerd time.

Filed under: hardware

RoGeorge Attacks a Pulse Meter


The “Crivit Sports” is an inexpensive chest-strap monitor that displays your current pulse rate on a dedicated wristwatch. This would be much more useful, and presumably more expensive, if it had a logging option, or any way to export your pulse data to a more capable device. So [RoGeorge] got to work. Each post of the (so far) three-part series is worth a read, not least because of the cool techniques used.

In part one, [RoGeorge] starts out by intercepting the signals. His RF sniffer? An oscilloscope probe shorted out in a loop around the heart monitor. Once he could read the signals, it was time to decode them. Doing pushups and decoding on-off keyed RF signals sounds like the ideal hacker training regimen, but instead [RoGeorge] used a signal generator, clipped to the chest monitor, to generate nice steady “heartbeats” and then read the codes off the scope without breaking a sweat.

With the encoding in hand, and some help from the Internet, he tested out his hypothesis in part two. Using an Arduino to generate the pulses logged in part one, he pulsed a coil and managed to get the heart rates displayed on the watch.

Which brings us to part three. What if there were other secrets to be discovered? Brute-forcing every possible RF signal and looking at the watch to see the result would be useful, but doing so for 8,192 possible codes would drive anyone insane. So [RoGeorge] taught himself OpenCV in Python and pointed a webcam at the watch. He wrote a routine that detected the heart icon blinking, a sign that the watch received a valid code, and then transmitted all possible codes to see which ones were valid. Besides discovering a few redundant codes, he didn’t learn much new from this exercise, but it’s a great technique.

We’re not sure what’s left to do on the Crivit. [RoGeorge] has already figured out the heart-rate data protocol, and could easily make his own logger. We are sure that we liked his thorough and automated approach to testing it all, from signal-generator-as-heartbeat to OpenCV as feedback in a brute-force routine. We can’t wait to see what’s up next.

Filed under: hardware

Completely Owning the Dreamcast Add-on You Never Had


If you’ve got a SEGA Dreamcast kicking around in a closet somewhere, and you still have the underutilized add-on Visual Memory Unit (VMU), you’re in for a treat today. If not, but you enjoy incredibly detailed hacks into the depths of slightly aged silicon, you’ll be even more excited. Because [Dmitry Grinberg] has a VMU hack that will awe you with its completeness. With all the bits in place, the hacking tally is a new MAME emulator, an IDA plugin, a first-ever ROM dump, and an emulator for an ARM chip that doesn’t exist, running Flappy Bird. All in a month’s work!

The VMU was a Dreamcast add-on that primarily stored game data in its flash memory, but it also had a small LCD display, a D-pad, and inter-VMU communications functions. It also had room for a standalone game which could interact with the main Dreamcast games in limited ways. [Dmitry] wanted to see what else he could do with it. Basically everything.

We can’t do this hack justice in a short write-up, but the outline is that he started out with the datasheet for the VMU’s CPU and went looking for interesting instructions. Then he started reverse engineering the ROM that comes with the SDK, which was only trivially obfuscated. Along the way, he wrote his own IDA plugin for the chip. Discovery of two ROP gadgets allowed him to dump the ROM to flash, where it could be easily read out. Those of you in the VMU community will appreciate the first-ever ROM dump.

On to doing something useful with the device! [Dmitry]’s definition of useful is to have it emulate a modern CPU so that it’s a lot easier to program for. Of course, nobody writes an emulator for modern hardware directly on obsolete hardware — you emulate the obsolete hardware on your laptop to get a debug environment first. So [Dmitry] ported the emulator for the VMU’s CPU that he found in MAME from C++ to C (for reasons that we understand) and customized it for the VMU’s hardware.

Within the emulated VMU, [Dmitry] then wrote the ARM Cortex emulator that it would soon run. But what ARM Cortex to emulate? The Cortex-M0 would have been good enough, but it lacked some instructions that [Dmitry] liked, so he ended up writing an emulator of the not-available-in-silicon Cortex-M23, which had the features he wanted. Load up the Cortex emulator in the VMU, and you can write games for it in C. [Dmitry] provides two demos, naturally: a Mandelbrot set grapher, and Flappy Bird.

Amazed? Yeah, we were as well. But then, this is the same guy who emulated an ARM chip on the AVR architecture, just to run Linux on an ATmega1284P.

Filed under: handhelds hacks, hardware