Monthly Archives: April 2022

2022 Product Release of the Seed Eco-Home 2

via Open Source Ecology

Development work always takes longer than one thinks – and only now are we finishing off the product release of the Seed Eco-Home 2. You can see about 1,000 pictures of the process taken over the last year. We are now finalizing the full mechanical-electrical-plumbing digital model of the Seed Eco-Home 2 in FreeCAD to complete the CAD model.

This is at a level of detail rarely found in design documents – but critical for us as we open source the complete design. We aim to do a photo shoot as we finish the house soon, so we can get the house product on our website. From there, it’s getting land somewhere in the local Kansas City area and submitting design documents to the building department for approval. This will be the first house built to test the financial model – as we begin our first builds for customers. Ideas are good, but success boils down to effective production and real customers.

We aim to build several houses for customers this year, and to start version 2 of our apprenticeship program based on lessons from last summer – so we can deliver homes to customers at a rate of 1 house every 2 weeks. Our goal is to develop a skilled crew of 24 people as our base unit of Swarm Build operation – such that each house takes 5 days to build. Once we achieve this, we intend to train 10 such crews to operate out of our headquarters in the Kansas City area within the next couple years. This would allow us to build about 500 houses per year. And it means investing significantly in our infrastructure towards a full educational campus.
Our immersion education will include a construction management and enterprise track so that we can replicate operations to other locations worldwide. You can read more assumptions about the revenue model here. The link is not light reading – it is consistent with the immersion training as our latest thinking on how to build a team of trained super-cooperators with the mindset and skill set necessary to do the work that we do. Together with the financial model, there are literally thousands of pages of design documents that you can peruse in your ample spare time.

The current thinking is that production revenue bootstrap funds the completion of the Global Village Construction Set (GVCS) by 2028 – a promise I made to the world back in 2018 and time is running out. The idea is that as soon as a cohort of 24 apprentices is up and running, we can diversify to other supporting projects – such as product release of the tractor for construction, development of the Compressed Earth Block (CEB) version of the house, development of large-scale plastic recycling for 3D printing construction materials… And much more – such as solar concrete and solar steel – materials that we produce on site using solar energy. Unheard of.

We estimate that realistically, the 50 technologies of the GVCS will take approximately $1M – each – to enterprise release. For comparison, our budget over the last decade was $3M – total. When I say ‘enterprise release’ – I mean releasing not only the open source blueprints for the product – but also the blueprints for the enterprise including how to train workers. And in the current times – who knows – we may end up building in Ukraine to help in recovery.


Midwest Defenders Round Up

via SparkFun: Commerce Blog

About the Author

Jesse Brockmann is a senior software engineer with over 20 years of experience. Jesse works for a large corporation designing real-time simulation software, started programming on an Apple IIe at the age of six, and has won several AVC events over the years. This is the last installment in the Spatial AI Competition Series, so thanks for tuning in, everyone!


Read the first and second blogs if you are interested in or need to catch up on the logistics of this project!


Curses based data display and rover control program

Continuing from where we left off...

A run datalog file is created during each run that logs the x and y position of the rover, along with the relative positions of the markers seen during the run. This could be used to generate a map with the location of each marker. Due to the noisy nature of the marker location data, a higher-level algorithm would be needed to locate the markers with a high degree of certainty from the data of multiple runs. Location data is collected using the encoder output and heading data from the BNO-055.

Example Run Data

rover: 499.995667 499.780090 356.750000
obstacle: marker -31.577566 -145.256805 912.055542 0.993661 0.049720 0.038363
end
rover: 499.995667 499.780090 356.750000
obstacle: marker -28.925745 -146.694870 895.138916 0.995610 0.050098 0.037815
obstacle: marker 189.767776 -114.743301 1912.000000 0.697999 0.030058 0.016747
end
rover: 499.995667 499.780090 356.750000
obstacle: marker -28.925745 -146.694870 895.138916 0.995610 0.050098 0.037815
obstacle: marker 189.767776 -114.743301 1912.000000 0.697999 0.030058 0.016747
end
rover: 499.995667 499.780090 356.750000
obstacle: marker -28.925745 -146.694870 895.138916 0.995610 0.050098 0.037815
obstacle: marker 189.767776 -114.743301 1912.000000 0.697999 0.030058 0.016747
end
rover: 499.995667 499.780090 356.312500
obstacle: marker -28.668959 -147.440353 887.192322 0.995123 0.052451 0.037603
end
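For readers who want to post-process these logs, here is a minimal Python sketch that parses the format shown above into per-frame records. The field meanings (rover x, y, heading; marker x, y, z plus confidence values) are assumptions based on the sample and the description above, not the project's actual parsing code.

```python
import sys

def parse_runlog(path):
    """Parse a run datalog into a list of frames.

    Assumed format, based on the sample above:
      rover: <x> <y> <heading>
      obstacle: marker <x> <y> <z> <confidence> ...   (one line per detection)
      end
    """
    frames = []
    rover, markers = None, []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "rover:":
                rover = tuple(float(v) for v in parts[1:4])
            elif parts[0] == "obstacle:":
                markers.append(tuple(float(v) for v in parts[2:5]))
            elif parts[0] == "end":
                frames.append({"rover": rover, "markers": markers})
                rover, markers = None, []
    return frames

if __name__ == "__main__":
    for i, frame in enumerate(parse_runlog(sys.argv[1])):
        print(f"frame {i}: rover at {frame['rover']}, {len(frame['markers'])} marker(s)")
```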


Path rover navigated using markers and signs

An outstanding issue was detecting and avoiding unknown objects. Code was tested that converts the depth map from the Oak-D-Lite into a point cloud, which could then be used to locate obstacles.
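As a rough illustration of the idea (not the project's actual code), the sketch below deprojects a depth frame into a point cloud with plain NumPy. It assumes the depth image is available as a NumPy array in millimetres and that the camera intrinsics fx, fy, cx, cy are known (they can be read from the device's calibration data).

```python
import numpy as np

def depth_to_pointcloud(depth_mm, fx, fy, cx, cy, max_range_mm=5000):
    """Deproject a depth frame (uint16, millimetres) into an N x 3 point cloud in metres."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))   # pixel coordinates
    z = depth_mm.astype(np.float32)
    valid = (z > 0) & (z < max_range_mm)             # drop empty and out-of-range pixels
    x = (u - cx) * z / fx                            # pinhole camera deprojection
    y = (v - cy) * z / fy
    return np.stack([x[valid], y[valid], z[valid]], axis=-1) / 1000.0
```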


Watch Jesse's Webinar with OpenCV about this project!



A hand can be seen in the point cloud created from Oak-D-Lite depth map

This result proved that such a solution may be possible, but much work would have to be done to turn the point cloud into information useful for navigation, and the timeframe of this project did not allow us to complete a solution.

Based on our outcome, it’s clear this is a well-defined problem that can be solved using an Oak-D product. A final solution would likely use the depth output or a lidar to avoid obstacles not detected by the neural network. This project could also be a great motivational tool for kids interested in STEM.

The final video with a demonstration of the rover

At this time the source code for this project is not open source, due to concerns about reused code such as the DepthAI example code, PDCurses source, etc. However, the source code will be provided to anyone who requests it as part of the review process for our entry into this contest. As time permits, a proper audit of the code will be done, and the code will be released once any licensing issues are resolved.


Issues That Came Up During Development

The LEGO Land Rover was not available for purchase from LEGO in this time frame, so it was acquired through third-party sellers.

The conversion process from a native Darknet output to an Oak-D-Lite compatible blob is a bit difficult, and I actually had to create two conda environments to get the conversion working. Part of the process is in TensorFlow 2 and part of it is in TensorFlow 1; if TensorFlow 2 was used for all steps, the blob could not be created due to unsupported options.

The depth information reported by the Oak-D-Lite seems to have many limitations. The main one I encountered is that a small object by itself in space can have very poor Z distances reported, often much further away than the object actually is. Our rovers would actually run into signs while the reported distance was as much as two meters; at no time while traveling towards objects did the reported distance ever become less than one meter. Putting a box or another object behind the signs seems to solve this issue.

Another issue is the autofocus Oak-D-Lite, which is constantly hunting and often gets confused, leaving the whole scene blurry. A rover is also a poor place for an autofocus camera, as the vibrations will probably prevent it from working anyway. As a result, we used the fixed-focus camera for the majority of testing.

Another issue was the power requirements of the Oak-D-Lite. It seems unstable without a splitter for power/data; the Raspberry Pi 4 just doesn't have enough power to keep it running. The issue is intermittent: it might work fine for many minutes before failing, requiring a restart of the program or even of the Oak-D-Lite by unplugging it and plugging it back in. Preferably, it should report a power issue the way the Pi does when underpowered. There also seem to be issues when trying to debug while using the DepthAI core: sitting at a breakpoint for a while will cause the process to crash due to issues with the various DepthAI threads. Finally, a LEGO-based vehicle is not recommended for general use in this type of testing. Even an RC car of similar cost is a much better choice due to the frail nature of a LEGO drivetrain in a vehicle of this size.


We would like to thank SparkFun for providing parts for the build, and Roboflow for their help and for the use of the Roboflow software during this contest. Finally, we would like to thank OpenCV, Intel, Microsoft and Luxonis for this competition.

Jesse’s Rover Scout

Jesse has plans to use the Oak-D-Lite and the code from this competition to compete in F1TENTH and Robo-Columbus events in the future.


New products: DRV8874 and DRV8876 motor driver carriers

via Pololu Blog

We’ve expanded our selection of motor drivers again with the release of some compact carrier boards for TI’s DRV8874 and DRV8876 motor drivers, which feature current sense feedback and adjustable current limiting. These drivers and their boards are all very similar, differing mainly in the amount of current they can handle: in a TSSOP chip package, the DRV8874 delivers up to 2.1 A continuous on our carrier board and the DRV8876 does 1.3 A. The DRV8876 chip is also available in a smaller QFN package, so for a lower-current and lower-cost option, our DRV8876 (QFN) carrier can deliver 1.1 A continuously. All three versions can drive a single brushed DC motor at voltages from 4.5 V to 37 V.

DRV8874 Single Brushed DC Motor Driver Carrier (top view).

DRV8876 Single Brushed DC Motor Driver Carrier (top view).

DRV8876 (QFN) Single Brushed DC Motor Driver Carrier (top view).

The DRV8874 and DRV8876 drivers offer a choice of control modes that includes phase/enable (PH/EN) and direct PWM (IN/IN) as well as independent half-bridge control, which lets you drive two motors unidirectionally. With their wide operating voltage range and current sense/current limiting added in, this combination of capabilities results in some unusually versatile motor driver boards, especially considering their small size. (But if you need something that works with even higher voltages, consider our similar DRV8256E and DRV8256P carrier boards too, though those don’t provide current sense feedback.)
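As a quick illustration of phase/enable control (not specific to these boards or to any host), here is a hypothetical Python sketch using RPi.GPIO on a Raspberry Pi: one GPIO sets direction on the PH input, and PWM on the EN input sets speed. The GPIO numbers are made up for the example, and the board is assumed to be configured for PH/EN mode per the driver datasheet.

```python
import time
import RPi.GPIO as GPIO

# Hypothetical wiring for the example; use whatever GPIOs you actually connect.
PH_PIN = 23   # direction input (PH) on the carrier
EN_PIN = 18   # speed input (EN), driven with PWM

GPIO.setmode(GPIO.BCM)
GPIO.setup(PH_PIN, GPIO.OUT)
GPIO.setup(EN_PIN, GPIO.OUT)

pwm = GPIO.PWM(EN_PIN, 1000)   # 1 kHz software PWM; hardware PWM is preferable in practice
pwm.start(0)

def drive(speed):
    """speed in [-1.0, 1.0]: sign sets direction on PH, magnitude sets duty cycle on EN."""
    GPIO.output(PH_PIN, GPIO.HIGH if speed >= 0 else GPIO.LOW)
    pwm.ChangeDutyCycle(min(abs(speed), 1.0) * 100.0)

try:
    drive(0.5)    # forward at half speed
    time.sleep(2)
    drive(-0.5)   # reverse at half speed
    time.sleep(2)
    drive(0.0)    # stop
finally:
    pwm.stop()
    GPIO.cleanup()
```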

Comparison of the DRV8874, DRV8876, and DRV8256 motor driver carriers

|                             | DRV8876 (QFN) | DRV8876     | DRV8874     | DRV8256E / DRV8256P     |
|-----------------------------|---------------|-------------|-------------|-------------------------|
| Motor channels              | one           | one         | one         | one                     |
| Min. operating voltage      | 4.5 V         | 4.5 V       | 4.5 V       | 4.5 V                   |
| Max. operating voltage      | 37 V          | 37 V        | 37 V        | 48 V                    |
| Max. continuous current(1)  | 1.1 A         | 1.3 A       | 2.1 A       | 1.9 A                   |
| Peak current                | 3.5 A         | 3.5 A       | 6 A         | 6.4 A                   |
| Current sense feedback      | 2500 mV/A     | 2500 mV/A   | 1100 mV/A   | none                    |
| Active current limiting     | adjustable    | adjustable  | adjustable  | adjustable              |
| Size                        | 0.6″ × 0.7″   | 0.6″ × 0.7″ | 0.6″ × 0.7″ | 0.6″ × 0.6″             |
| 1-piece price               | $5.95         | $6.95       | $9.95       | $12.95 (E) / $12.95 (P) |

(1) On Pololu carrier board, at room temperature and without additional cooling.

Midwest Defenders Pt. 2

via SparkFun: Commerce Blog

Team Midwest Defenders Continued...

Jesse Brockmann is a senior software engineer with over 20 years of experience. Jesse works for a large corporation designing real-time simulation software, started programming on an Apple IIe at the age of six, and has won several AVC events over the years. Make sure you stay up-to-date with our blog to read all about Jesse's work!


Read the last blog on Jesse's LEGO build if you are interested in the first part of this project! With the LEGO part of the build complete, the endeavor switched to code development. The first step was to install the Python DepthAI library and run some of the provided examples. This proved successful, so the next step was building a custom AI model and acquiring the images required for training. We printed some signs to use and took many images of these signs to train the neural network. The signs were printed in ABS, and the images on the signs were printed on photo paper and glued on. The signs were designed to be of similar scale to the Land Rover, and were based on real road signs.
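For anyone following along, a first sanity check with the DepthAI Python API can be as simple as streaming the color camera preview, roughly like the sketch below. This is based on the general DepthAI v2 API, not the project's own code.

```python
import cv2
import depthai as dai

# Build a minimal pipeline: color camera preview streamed to the host over XLink.
pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(416, 416)
cam.setInterleaved(False)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("preview")
cam.preview.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("preview", maxSize=4, blocking=False)
    while True:
        frame = q.get().getCvFrame()          # BGR frame as a NumPy array
        cv2.imshow("Oak-D-Lite preview", frame)
        if cv2.waitKey(1) == ord("q"):
            break
```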

Initially there were two alternatives for designating the course. One was using tape as a line, or possibly large pieces of paper with lines drawn on them. The second was to use something like poker chips laid out in a line. The latter was ultimately chosen as less destructive, more robust, and easier to set up and tear down.

Completed rover with some signs and markers ready for testing

50+ images were taken of the signs and tokens at different angles and in different lighting. The images were then labeled using labelImg. A MobileNet SSD neural network was configured and trained locally on a desktop machine with an NVIDIA 3060 Ti using example code linked from this webpage. The result was then used with the DepthAI example Python code to see the results of the training. It detected objects, but with low certainty and many false positives.

Research was done, and it was determined that a YOLO v3 or YOLO v4 network would improve performance; the above website also provided a link to a Jupyter notebook set up for the YOLO v3/v4 framework known as Darknet. The code was converted into a local Python script, and Anaconda was used to provide the Python environment for running on a Windows 10 machine. The label data required by YOLO is different from that of MobileNet SSD, so a Python script was used to convert between the two formats.
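The conversion script itself is not published, but the idea is straightforward: read the Pascal VOC XML files written by labelImg and emit YOLO-style text labels with normalized box coordinates. A hypothetical sketch follows; the class names are placeholders, not the project's actual label set.

```python
import glob
import os
import xml.etree.ElementTree as ET

# Hypothetical class list; the order defines the YOLO class indices.
CLASSES = ["marker", "stop", "left_turn", "right_turn", "u_turn"]

def voc_to_yolo(xml_path, out_dir):
    """Convert one Pascal VOC annotation (as written by labelImg) to a YOLO .txt label."""
    root = ET.parse(xml_path).getroot()
    w = float(root.find("size/width").text)
    h = float(root.find("size/height").text)
    lines = []
    for obj in root.findall("object"):
        cls = CLASSES.index(obj.find("name").text)
        box = obj.find("bndbox")
        xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
        xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
        # YOLO wants normalized center x/y and width/height.
        cx, cy = (xmin + xmax) / 2 / w, (ymin + ymax) / 2 / h
        bw, bh = (xmax - xmin) / w, (ymax - ymin) / h
        lines.append(f"{cls} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
    name = os.path.splitext(os.path.basename(xml_path))[0] + ".txt"
    with open(os.path.join(out_dir, name), "w") as f:
        f.write("\n".join(lines))

for xml_file in glob.glob("labels_voc/*.xml"):
    voc_to_yolo(xml_file, "labels_yolo")
```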

Training was started using YOLO v4, but it was clear the data was not properly formatted for YOLO. After investigation we found the Roboflow website and realized it could provide a much quicker path to robust training data. All current images and labels were uploaded to Roboflow, augmentation was added, and training data was exported from Roboflow that required no further processing for use by Darknet. This initial data set was trained in Roboflow for validation, and also offline with Darknet. The results showed promise, and it was determined this would be the course moving forward.

After this initial success we were contacted by Mohamed Traore from Roboflow, who helped us improve our image augmentation settings and gave suggestions on how to improve the training. The result was a much more robust neural network with a very high detection rate and few false positives. Once this framework was in place, 250+ more images were added to improve the detection rates, for a total image count of over 3,000 with augmentation.


Luxonis Oak-D LITE


SEN-19040
$149.95

Now that the neural network was working, it was time to interpret the data from the Oak-D-Lite and use it to make command decisions for the rover. A basic rover framework written in C++ was created from scratch, designed to be modular so that subsystems can be added and removed with minimal effort. This rover framework runs on Windows and Linux, and a lighter version runs on higher-end microcontrollers like the Teensy.

The following subsystems were created for the Teensy microcontroller:

  • Encoder - Reads pulses from a hall-effect sensor to determine the distance traveled by the rover
  • IMU - Reports the current heading of the rover in a relative system using the BNO-055 IMU, with zero as the initial heading and 500 meters for x and y
  • Location - Uses the encoder and IMU data to find the relative location of the rover via dead reckoning (see the sketch after this list)
  • PPM Input - Takes RC inputs and translates them into -1.0 to 1.0 signals for use by other subsystems
  • Teensy Control - Sends the PWM output to the servo and ESC
  • Command - Makes the low-level command decisions for the rover; the Pi provides a desired heading and speed, and this subsystem uses that information to command the servo and ESC
  • Serial - Used for communication with the Pi
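The real Location subsystem is C++ running on the Teensy, but the dead-reckoning math is simple enough to sketch in a few lines of Python. The 500-meter offset and heading convention follow the description above; the actual implementation may differ.

```python
import math

class DeadReckoning:
    """Track a relative (x, y) position from encoder distance and IMU heading."""

    def __init__(self):
        # Start at (500, 500) meters, matching the relative convention above.
        self.x, self.y = 500.0, 500.0

    def update(self, delta_dist_m, heading_deg):
        # Encoder gives distance traveled since the last update; the BNO-055 gives heading.
        h = math.radians(heading_deg)
        self.x += delta_dist_m * math.cos(h)
        self.y += delta_dist_m * math.sin(h)
        return self.x, self.y
```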
Teensy 4.0


DEV-15583
$19.95

The following subsystems were created for the Pi or Windows machine:

  • Curses - Text/console-based UI to control the rover and display rover information
  • Image - Uses DepthAI to find objects, and also displays windows with detection data and camera images
  • Image Command - Uses the detection data to determine the heading and speed the rover should travel
  • JoyStick - Reads manual commands from a USB or Bluetooth joystick (not currently functional on Windows)
  • Serial - Used for communication with the Teensy

We are not going to go into detail on these subsystems other than Image and Image Command, as those are what make this an Oak-D rover rather than a generic rover.

The Image subsystem and a DepthAPI class provide the interface to the Oak-D-Lite. DepthAPI is a C++ class based on the provided example code, modified to meet the needs of the Image subsystem. Its main purpose is to provide detection data and to allow debugging by providing a camera feed with a detection overlay, as well as the ability to record images or video for later inspection.

Real time image from rover during testing

The Image Command subsystem uses the heading of the rover from the Teensy, along with the detection data, to determine a course of action for the rover. It looks for objects and determines a heading to them by taking the current heading plus the atan2 of the marker's X and Z positions, returning a heading that intersects the object. This, along with a constant speed, is sent to the Teensy, which carries out those commands to the best of its ability. Turn signs are used to allow the rover to take tighter turns than the markers alone would allow; a very tight turn isn’t possible using markers because of the limited field of view, as the markers are not visible during these turns. When a left or right turn sign is in view, the rover aims for the sign, and then, when 0.95 meters from it, makes a 90-degree turn relative to the current heading.

Once the new heading is achieved, the rover goes back to looking for markers. U-turns are similar, except the turn consists of two 90-degree turns instead of one. When using markers to navigate, the algorithm works as follows: if only one marker is in view, it tries to keep the rover lined up to go directly over the marker. As more markers become visible, it uses the average location between two markers to decide its path: for two markers it aims between them, and for three or more it aims between the second and third. This provides some look-ahead, allowing the rover to make tighter turns than it could otherwise. A more advanced algorithm could keep the point the rover is steering towards at a constant distance in front of the rover, using all known marker locations to interpolate or extrapolate this point.
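To make the marker-following logic concrete, here is a hypothetical Python sketch of the target selection and heading calculation described above. It assumes markers arrive as camera-relative (X, Z) pairs and that sorting by Z approximates "closest first"; the project's actual C++ implementation may differ in details.

```python
import math

def choose_target(markers):
    """Pick the point to steer toward from camera-relative (x, z) marker positions.

    One marker: aim straight at it. Two: aim between them. Three or more: aim
    between the second and third closest, for some look-ahead.
    """
    markers = sorted(markers, key=lambda m: m[1])   # sort by Z (distance ahead)
    if len(markers) == 1:
        return markers[0]
    a, b = (markers[0], markers[1]) if len(markers) == 2 else (markers[1], markers[2])
    return ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)

def desired_heading(current_heading_deg, target_xz):
    """Heading that intersects the target: current heading plus atan2(X, Z)."""
    x, z = target_xz
    return current_heading_deg + math.degrees(math.atan2(x, z))
```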

When a stop sign is detected, the rover aims for the stop sign, stops 1.1 meters from it, and automatically ends navigation. Other signs such as parking, no parking and dead end signs could also be added to further enrich the navigation possibilities, but given the three-month time frame they were not pursued further.

Curses based data display and rover control program

Thanks for reading! We have the last part of this series coming out in a few days, so be sure to check back in to find out how the Midwest Defenders wrapped up their project!


Direct to the Time of Flight

via SparkFun: Commerce Blog

Hello everyone and welcome back to another Friday Product Post here at SparkFun Electronics. This week, we are happy to showcase our four new direct Time of Flight sensors with two different board dimensions. Following that, we have a new, multi-voltage USB-C wall adapter. Let's jump in and take a closer look at all of this week's new products!

Pick up a full-sized or half-sized dToF Imager!

SparkFun Qwiic dToF Imager - TMF8820


SEN-19036
$19.95
SparkFun Qwiic Mini dToF Imager - TMF8820


SEN-19218
$19.95

The SparkFun Qwiic dToF TMF8820 Imager and SparkFun Qwiic Mini dToF TMF8820 Imager are direct time-of-flight (dToF) sensors that include a single modular package with an associated Vertical Cavity Surface Emitting Laser (VCSEL) from AMS. The dToF devices are based on Single Photon Avalanche Photodiode (SPAD), time-to-digital converter (TDC) and histogram technology to achieve a 5000 mm detection range. Thanks to the lens on the SPAD, each of these boards supports 3x3 multizone output data and a very wide, dynamically adjustable field of view. We offer the TMF8820 boards in 1 in. x 1 in. and 0.5 in. x 1 in. options!


SparkFun Qwiic dToF Imager - TMF8821


SEN-19037
$20.95
SparkFun Qwiic Mini dToF Imager - TMF8821


SEN-19451
$20.95

The SparkFun Qwiic dToF TMF8821 Imager and SparkFun Qwiic Mini dToF TMF8821 Imager are direct time-of-flight (dToF) sensors that include a single modular package with an associated Vertical Cavity Surface Emitting Laser (VCSEL) from AMS. The dToF devices are based on Single Photon Avalanche Photodiode (SPAD), time-to-digital converter (TDC) and histogram technology to achieve a 5000 mm detection range. Thanks to the lens on the SPAD, each of these boards supports 3x3, 4x4, and 3x6 multizone output data and a very wide, dynamically adjustable field of view. A multi-lens array (MLA) inside each package above the VCSEL widens the FoI (field of illumination). All processing of the raw data is performed on-chip, and the TMF8821 provides distance information together with confidence values over its I2C interface. The high-performance on-chip optical filter blocks most ambient light and enables distance measurements in both dark and sunlit environments.


USB-C Wall Adapter - 6V-6.5V/3A, 6.5V-9V/2A, 9V-12V/1.5A (with Cable)


TOL-19466
$14.95

This is a USB wall adapter with an input of AC 100-240 V and an output of 6 V-6.5 V/3 A, 6.5 V-9 V/2 A, or 9 V-12 V/1.5 A. Along with the wall adapter, this set comes with a USB to USB-C cable.


That's it for this week. As always, we can't wait to see what you make. Shoot us a tweet @sparkfun, or let us know on Instagram, Facebook or LinkedIn. Please be safe out there, be kind to one another, and we'll see you next week with even more new products!

