GNSS Functionality for MicroMod

via SparkFun: Commerce Blog

Hello and welcome, everyone! We are back, yet again, with new products that expand our exciting MicroMod, Qwiic, and Artemis ecosystems. We start the week off with two new GNSS/GPS Function Boards for MicroMod! These boards feature a ZED-F9P and a NEO-M9N, respectively, offering two levels of accuracy at correspondingly different price points. Following those, we have a new OpenLog Artemis Kit that features a version of the board without an IMU (due to supply chain constraints), along with a selection of fun and interesting sensors that work out of the box without any coding at all! We wrap up the week with an interesting electric screwdriver from Wowstick. Without further ado, let's jump in and take a closer look at all of this week's new products!

Geospatial capabilities for MicroMod are a must-have.

SparkFun MicroMod GNSS Function Board - ZED-F9P


GPS-19663
$274.95

With GNSS you are able to know where you are, where you're going, and how to get there from anywhere on Earth within 30 seconds. This means the higher the accuracy, the better! GNSS Real Time Kinematics (RTK) has mastered dialing in the accuracy of these GNSS modules to just millimeters. The SparkFun MicroMod GNSS Function Board takes everything we love about the ZED-F9P module from u-blox and combines it with the flexibility and ease of use of the MicroMod Main Board system, allowing you to test and swap out different MicroMod Processors and add extra functionality to your GNSS project with other Function Boards, all without any soldering required! This means solderless access to the ZED-F9P module's features via its UART1, SPI, and I2C ports!
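If you pair this Function Board with a MicroMod Processor Board running Arduino code, getting a first position readout can be as simple as polling the module over I2C. The sketch below is a minimal illustration using the SparkFun u-blox GNSS Arduino Library; it assumes the Main Board routes the I2C bus to this Function Board and that the library is installed, so treat it as a starting point rather than a drop-in example for any particular Processor.

```cpp
// Minimal sketch (see assumptions above): read position from the ZED-F9P over I2C
// using the SparkFun u-blox GNSS Arduino Library.
#include <Wire.h>
#include <SparkFun_u-blox_GNSS_Arduino_Library.h>

SFE_UBLOX_GNSS myGNSS;

void setup()
{
  Serial.begin(115200);
  Wire.begin();

  // Assumes the module is reachable at the default u-blox I2C address (0x42).
  if (myGNSS.begin() == false)
  {
    Serial.println(F("ZED-F9P not detected over I2C. Check the Main Board setup."));
    while (1)
      ;
  }
}

void loop()
{
  // getLatitude()/getLongitude() return degrees * 10^-7; getAltitude() returns millimeters.
  long latitude = myGNSS.getLatitude();
  long longitude = myGNSS.getLongitude();
  long altitude = myGNSS.getAltitude();

  Serial.print(F("Lat: "));
  Serial.print(latitude);
  Serial.print(F("  Lon: "));
  Serial.print(longitude);
  Serial.print(F("  Alt (mm): "));
  Serial.println(altitude);

  delay(1000);
}
```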


SparkFun MicroMod GNSS Function Board - NEO-M9N


GPS-18378
$49.95

The SparkFun NEO-M9N GNSS Function Board is a high-quality geospatial board with equally impressive configuration options. The NEO-M9N module populated on this board is a 92-channel u-blox M9 engine GNSS receiver, meaning it can receive signals from the GPS, GLONASS, Galileo, and BeiDou constellations with ~1.5 meter accuracy. This Function Board supports concurrent reception of all four of these GNSS constellations, which maximizes position accuracy in challenging conditions, increases precision, and decreases lock time. And thanks to the on-board rechargeable battery, you'll have backup power enabling the GPS to get a hot lock within seconds!
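As a rough sketch of what that looks like from the firmware side, the same SparkFun u-blox GNSS Arduino Library can report the fix type and satellite count over I2C. As with the ZED-F9P example above, this assumes an Arduino-compatible MicroMod Processor and the default u-blox I2C address, so adapt it to your own Main Board setup.

```cpp
// Rough sketch (assumptions as noted above): report fix type and satellite count
// from the NEO-M9N over I2C with the SparkFun u-blox GNSS Arduino Library.
#include <Wire.h>
#include <SparkFun_u-blox_GNSS_Arduino_Library.h>

SFE_UBLOX_GNSS myGNSS;

void setup()
{
  Serial.begin(115200);
  Wire.begin();

  if (myGNSS.begin() == false)
  {
    Serial.println(F("NEO-M9N not detected over I2C. Check the Main Board setup."));
    while (1)
      ;
  }
}

void loop()
{
  byte fixType = myGNSS.getFixType(); // 0 = no fix, 2 = 2D, 3 = 3D, 4 = GNSS + dead reckoning
  byte satellites = myGNSS.getSIV();  // satellites used in the navigation solution

  Serial.print(F("Fix type: "));
  Serial.print(fixType);
  Serial.print(F("  Satellites: "));
  Serial.println(satellites);

  delay(1000);
}
```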


SparkFun OpenLog Artemis Kit (without IMU)


KIT-20218
$129.95

The SparkFun OpenLog Artemis (without IMU) Kit is intended to provide everything you need to log the data of your choice with the OpenLog Artemis and the Qwiic Ecosystem. Inside the kit you will find a reversible USB-A to C cable for power, a 1250mAh LiPo battery for remote applications, a 1GB microSD card, and all the necessary Qwiic cables to connect the included boards. A SparkFun Qwiic GPS Breakout, SparkFun Qwiic Scale, and SparkFun High Precision Temperature Sensor are also included, so you can start logging data from Qwiic sensors right away in whatever project you have in mind.


Wowstick 1F+ Electric Screwdriver Kit


TOL-19653
$72.95

The Wowstick 1F+ is an electric screwdriver kit featuring 56 screw bits and a number of other accessories. It's the perfect tool for those who regularly need to tighten or loosen screws on electronic devices. The lithium-powered, rechargeable driver pen features a sleek aluminum body and three LED lights to prevent shadows from being cast on your work. The double-action button allows you to reverse the direction of the drive on the fly.


That's it for this week. As always, we can't wait to see what you make. Shoot us a tweet @sparkfun, or tag us on Instagram, Facebook or LinkedIn. Please be safe out there, be kind to one another, and we'll see you next week with even more new products!


Say “aye” to Code Club in Scotland

via Raspberry Pi

Since joining the Raspberry Pi Foundation as a Code Club Community Manager for Scotland earlier this year, I have seen first-hand the passion, dedication, and commitment of the Scottish community to support the digital, personal, and social skills of young people.

A group of smiling children hold up large cardboard Code Club logos.

Code Club launched in schools in 2012 to give children opportunities to share and develop their love of coding through free after-school clubs. Now we have clubs across the world, connecting learners as they have fun with digital technologies.

Meeting Scotland’s inspiring Code Club community

One of my first visits was to St. Mark’s Primary School in East Renfrewshire, where I met an amazing Code Club leader called Ashley Guy. Ashley only got involved in Code Club this year, but has already launched three clubs at her school!

St Mark's Primary celebrate Code Club's tenth birthday.

I went to visit her club for Primary 2 and 3 pupils, where the children were working on creating animations in Scratch to celebrate Code Club’s tenth birthday. It was a real joy to see the young children so engaged with our projects. The young coders worked both independently and together to create their own animations.

One of the girls I spoke to made a small error while coding her project, but she smiled and said, “I made a mistake, but that’s okay because that’s how we learn!” She showed just the kind of positive, problem-solving mindset that Code Club helps to cultivate.

Another school doing something incredible at their Code Club, led by Primary 7 teacher Fiona Lindsay, is Hillside School in Aberdeenshire. I love seeing the fun things they get up to, including celebrating Code Club’s 10th birthday in style with an impressive Code Club cake.

Hillside School's cake to celebrate ten years of Code Club.

Fiona and her club are using the Code Club projects and resources to create their own exciting and challenging games. They’ve taken part in several of our online codealongs, and they also held an event at the school to showcase their great work — which even got the children’s parents coding! 

Some of the young people who attend Code Club at Hillside School sent us videos about their experiences, why they come to Code Club, and what it means to them. Young coder Abisola describes Code Club in one word:

Video transcript

Young coder Crystal said, “We can experiment with what we know and make actual projects… At Code Club we learn about new blocks in Scratch and what blocks and patterns go together to make something.” Here is Crystal sharing her favourite part of Code Club:

Video transcript

Obuma also attends the Code Club at Hillside School. She shared what she gains from attending the sessions and why she thinks other young people should join a Code Club too: 

“At Code Club we improve our teamwork skills, because there’s a lot of people in Code Club and most of the time you work together to create different things… Join [Code Club] 100%. It is so fun. It might not be something everyone would want to try, but if you did try it, then you would enjoy it.”

Obuma, young coder at Hillside School’s Code Club
Two young people at a Code Club.
Crystal and Abisola celebrate ten years of Code Club

Coding with the community 

One of the things I’ve enjoyed most as part of the Code Club team has been running a UK-wide online codealong to celebrate STEM Clubs Week. The theme was outer space, so our ‘Lost in space’ project in Scratch was the ideal fit.

Young people from St Philip Evans Primary School participating in Code Club's 'Lost in space' codealong.

During this practical coding session, classes across Scotland, England, and Wales had great fun coding the project together to animate rockets that move around space. We were thrilled by the feedback from teachers.

“The children really enjoyed the session. They are very proud of their animations and some children went on to extend their programs. All [the] children said they would love to do more codealongs!”

Teacher who took part in an online Code Club codealong
Young people from Oaklands Primary School participating in Code Club's 'Lost in space' codealong.

Thank you to everyone who got involved in the codealong. See you again at the next one.

What Scotland — and everyone in the community — can look forward to in the new term

To help you start your Code Club year with ease and fun, we will be launching new free resources for you and your club members. There’ll be a special pack filled with step-by-step instructions and engaging activities to kickstart your first session back, and a fun sticker chart to help young coders mark their progress. 

We would love to see you at our practical and interactive online workshop ‘Ten reasons why coding is fun for everyone’ on Thursday 15 September at 16:00–17:00 BST, which will get you ready for National Coding Week (19–23 September). Come along to the workshop to get useful guidance and tips on how to engage everyone with coding.

The Code Club team.

We will also be holding lots of other exciting activities and sessions throughout the upcoming school term, including for World Space Week (4–10 October), the Moonhack coding challenge in October, and World Hello Day in November. So keep an eye on our Twitter @CodeClubUK for live updates. 

Whether you’re interested in learning more about Code Club in Scotland, you have a specific question, or you just want to say hi, I’d love to hear from you. You can contact me at scotland@codeclub.org, or @CodeClubSco on Twitter. I’ll also be attending the Scottish Education Expo on 21 and 22 September along with other Code Club team members, so come along and say hello.

Get involved in Code Club today

With the new school term approaching, now is a great time to register and start a Code Club at your school. You can find out more on our website, codeclub.org, or contact us directly at support@codeclub.org.


Dr. Kevin Eliceiri named Open Hardware Trailblazer Fellow

via Open Source Hardware Association


UW-Madison

Innovation in scientific instrumentation is an important aspect of research at UW–Madison, in part due to the efforts of researchers such as Kevin Eliceiri, professor of medical physics and biomedical engineering.

Eliceiri, who is also an investigator for the Morgridge Institute for Research, a member of the UW Carbone Cancer Center, associate director of the McPherson Eye Research Institute, and director of the Center for Quantitative Cell Imaging, was recently named an Open Hardware Trailblazer Fellow by the Open Source Hardware Association (OSHWA).

Open hardware refers to the physical tools used to conduct research, such as microscopes, and, like open software, it helps to ensure that scientific knowledge is not confined to research settings but supports the public use of science, in keeping with the mission of the Wisconsin Idea.

“Kevin Eliceiri is a pioneer in open source hardware and software designs that allow for richer data collection than traditional methods and support innovative research on campus and around the world,” says Steve Ackerman, vice chancellor for research and graduate education. “Open hardware allows for interdisciplinary collaboration and for a research enterprise to start small and then scale up to meet their needs. Open source hardware is a good investment and holds promise for accelerating innovation.”

The OSHWA fellowship program seeks to raise the profile of existing open hardware work within academia and encourages research that is accessible, collaborative, and respectful of user freedom.

The one-year fellowship, funded by the Open Source Hardware Association, provides $50,000 and $100,000 grants to individuals like Eliceiri, who will then document their experience of making open source hardware to create a library of resources for others to follow. The fellows were chosen by the program’s mentors and an OSHWA board selection committee.

Eliceiri says, “There is already widespread community support for making the protocols for any published scientific study open and carefully documented, but the hardware used for most experiments, whether homebuilt or commercial, can often be effectively a black box. In this age of the quest for reproducible, quantitative science, the open source concept should be applied to the complete system, including the hardware, not just the software used to analyze the resulting data.”

Universities often try to recover the costs associated with developing new scientific instrumentation through patenting, commercialization, and startups. This process works well at times, but for some highly specialized instrumentation the traditional model can be too time-consuming and costly. As a result, some highly useful innovations never reach other labs.

Open hardware — sharing designs for instruments without patenting them, as an alternative to the traditional model — is growing in popularity. Three open hardware journals have come of age in the past five years, offering venues to share how to build research instrumentation that can be tweaked for a specific use, instead of starting from scratch.

With open hardware, anyone can replicate or reuse hardware design files for free, which increases the accessibility of hardware tools such as specialized microscopes.

The infrastructure of desktop 3D printers is another example of how open hardware can accelerate and broaden scientific research. The National Institutes of Health (NIH) 3D Print Exchange is a library designed to advance biomedical research by allowing researchers to print hardware on site. With local production comes a reduction in cost and in supply chain vulnerabilities.

Since 2000, Eliceiri has been the lead investigator of the Laboratory for Optical and Computational Instrumentation (LOCI), with a research focus on developing novel optical imaging methods for investigating cell signaling and cancer progression, and on developing software for multidimensional image analysis. LOCI has contributed lead developers to several open-source imaging software packages, including FIJI, ImageJ2, and μManager. His open hardware instrumentation efforts involve novel forms of polarization, laser scanning, and multiscale imaging.

Using the open hardware laser scanning platform known as OpenScan, Eliceiri plans to evaluate which best practices from open source software are most relevant to hardware, and which criteria unique to open hardware must be met to share hardware designs successfully.

Eliceiri, a highly cited researcher, has authored more than 260 scientific papers on various aspects of optical imaging, image analysis, cancer, and live-cell imaging.

Monitor Sensor Data Here, There and Anywhere!

via SparkFun: Commerce Blog

We've shown you before how to send sensor data over WiFi, but this time we're taking it a step further. Our newest tutorial shows you how to use this WiFi data connection to then visualize your data in real time on an IoT Dashboard.


Rob and Mariah use a Qwiic-enabled air quality sensor in this project, but you could swap in any sensor you like - your data visualization possibilities are endless!
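The tutorial walks through a specific dashboard setup, but the general pattern is always the same: join a WiFi network, take a reading, and push it to an endpoint the dashboard can read. Here is a rough, hypothetical sketch of that pattern for an ESP32-based board; the endpoint URL, the JSON field name, and the readAirQuality() helper are placeholders for illustration, not pieces of the actual tutorial.

```cpp
// Hypothetical pattern only: connect to WiFi on an ESP32-based board and POST a
// reading to a placeholder dashboard endpoint. Replace the placeholders with the
// details from the tutorial and your own sensor library.
#include <WiFi.h>
#include <HTTPClient.h>

const char* WIFI_SSID = "your-network";   // placeholder
const char* WIFI_PASS = "your-password";  // placeholder
const char* DASHBOARD_URL = "http://example.com/api/telemetry"; // placeholder endpoint

// Placeholder: swap in a real read from your Qwiic sensor's Arduino library.
float readAirQuality()
{
  return 42.0;
}

void setup()
{
  Serial.begin(115200);
  WiFi.begin(WIFI_SSID, WIFI_PASS);
  while (WiFi.status() != WL_CONNECTED)
  {
    delay(500);
    Serial.print(".");
  }
  Serial.println("\nWiFi connected");
}

void loop()
{
  float reading = readAirQuality();

  // Send the reading as a small JSON payload; most dashboards accept something similar.
  HTTPClient http;
  http.begin(DASHBOARD_URL);
  http.addHeader("Content-Type", "application/json");
  String payload = String("{\"air_quality\":") + reading + "}";
  int status = http.POST(payload);
  Serial.printf("POST returned %d\n", status);
  http.end();

  delay(10000); // push a new data point every 10 seconds
}
```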

Take a look and see how you can update your latest sensor project to include WiFi and a data visualization dashboard, and let us know how it goes by tagging us on social media - we can't wait to see what you create!


Classroom activities to discuss machine learning accuracy and ethics | Hello World #18

via Raspberry Pi

In Hello World issue 18, available as a free PDF download, teacher Michael Jones shares how to use Teachable Machine with learners aged 13–14 in your classroom to investigate issues of accuracy and ethics in machine learning models.

Machine learning: Accuracy and ethics

The landscape for working with machine learning/AI/deep learning has grown considerably over the last couple of years. Students are now able to develop their understanding from the hard-coded end via resources such as Machine Learning for Kids, get their hands dirty using relatively inexpensive hardware such as the Nvidia Jetson Nano, and build a classification machine using the Google-driven Teachable Machine resources. I have used all three of the above with my students, and this article focuses on Teachable Machine.

For this module, I’m more concerned with the fuzzy end of AI, including how credible AI decisions are, and the elephant-in-the-room aspect of bias and potential for harm.

Michael Jones

For the worried, there is absolutely no coding involved in this resource; the ‘machine’ behind the portal does the hard work for you. For my Year 9 classes (students aged 13 to 14) undertaking a short, three-week module, this was ideal. The coding is important, but was not my focus. For this module, I’m more concerned with the fuzzy end of AI, including how credible AI decisions are, and the elephant-in-the-room aspect of bias and potential for harm.

Getting started with Teachable Machine activities

There are three possible routes to use in Teachable Machine, and my focus is the ‘Image Project’, and within this, the ‘Standard image model’. From there, you are presented with a basic training scenario template — see Hello World issue 16 (pages 84–86) for a step-by-step set-up and training guide. For this part of the project, my students trained the machine to recognise different breeds of dog, with border collie, labrador, saluki, and so on as classes. Any AI system devoted to recognition requires a substantial set of training data. Fortunately, there are a number of freely available data sets online (for example, download a folder of dog photos separated by breed by accessing helloworld.cc/dogdata). Be warned, these can be large, consisting of thousands of images. If you have more time, you may want to set students off to collect data to upload using a camera (just be aware that this can present safeguarding considerations). This is a key learning point with your students and an opportunity to discuss the time it takes to gather such data, and variations in the data (for example, images of dogs from the front, side, or top).

Drawing of a machine learning Mars rover trying to decide whether it is seeing an alien or a rock.
Image recognition is a common application of machine learning technology.

Once you have downloaded your folders, upload the images to your Teachable Machine project. It is unlikely that you will be able to upload a whole subfolder at once — my students have found that the optimum number of images seems to be twelve. Remember to build this time for downloading and uploading into your lesson plan. This is a good opportunity to discuss the need for balance in the training data. Ask questions such as, “How likely would the model be to identify a saluki if the training set contained 10 salukis and 30 of the other dogs?” This is a left-field way of dropping the idea of bias into the exploration of AI — more on that later!

Accuracy issues in machine learning models

If you have got this far, the heavy lifting is complete and Google’s training engine will now do the work for you. Once you have set your model on its training, leave the system to complete its work — it takes seconds, even on large sets of data. Once it’s done, you should be ready to test your model. If all has gone well and a webcam is attached to your computer, the Output window will give a prediction of what is being viewed. Again, the article in Hello World issue 16 takes you through the exact steps of this process. Make sure you have several images ready to test. See Figure 1a for the response to an image of a saluki presented to the model. As you might expect, it is showing as a 100 percent prediction.

Screenshots from Teachable Machine showing photos of dogs classified as specific breeds with different degrees of confidence by a machine learning model.
Figure 1: Outputs of a Teachable Machine model classifying photos of dog breeds. 1a (left): Photo of a saluki. 1b (right): Photo of a Samoyed and two people.

It will spark an interesting discussion if you now try the same operation with an image that contains items other than the one you’re testing for. For example, see Figure 1b, in which two people are in the image along with the Samoyed dog. The model is undecided, as the people are affecting the outcome. This raises the question of accuracy. Which features are being used to identify the dogs as border collie and saluki? Why are the humans in the image throwing the model off the scent?

Getting closer to home, training a model on human faces provides an opportunity to explore AI accuracy through the question of what might differentiate a female from a male face. You can find a model at helloworld.cc/maleorfemale that contains 5418 images almost evenly spread across male and female faces (see Figure 2). Note that this model will take a little longer to train.

Screenshot from Teachable Machine showing two datasets of photos of faces labeled either male or female.
Figure 2: Two photo sets of faces labeled either male or female, uploaded to Teachable Machine.

Once trained, try the model out. Props really help — a top hat, wig, and beard give the model a testing time (pun intended). In this test (see Figure 3), I presented myself to the model face-on and, unsurprisingly, I came out as 100 percent male. However, adding a judge’s wig forces the model into a rethink, and a beard produces a variety of results, but leaves the model unsure. It might be reasonable to assume that our model uses hair length as a strong feature. Adding a top hat to the ensemble brings the model back to a 100 percent prediction that the image is of a male.

Screenshots from Teachable Machine showing a model classifying photos of the same face as either male or female with different degrees of confidence, based on whether the face is wearing a wig, a fake beard, or a top hat.
Figure 3: Outputs of a Teachable Machine model classifying photos of the author’s face as male or female with different degrees of confidence. Click to enlarge.

Machine learning uses a best-fit principle. The outputs, in this case whether I am male or female, have a greater certainty of male (65 percent) versus a lesser certainty of female (35 percent) if I wear a beard (Figure 3, second image from the right). Remove the beard and the likelihood of me being female increases by 2 percent (Figure 3, second image from the left).

Bias in machine learning models

Within a fairly small set of parameters, most human faces are similar. However, when you start digging, the research points to there being bias in AI (whether this is conscious or unconscious is a debate for another day!). You can exemplify this by firstly creating classes with labels such as ‘young smart’, ‘old smart’, ‘young not smart’, and ‘old not smart’. Select images that you think would fit the classes, and train them in Teachable Machine. You can then test the model by asking your students to find images they think fit each category. Run them against the model and ask students to debate whether the AI is acting fairly, and if not, why they think that is. Who is training these models? What images are they receiving? Similarly, you could create classes of images of known past criminals and heroes. Train the model before putting yourself in front of it. How far up the percentage scale are you towards being a criminal? It soon becomes frighteningly worrying that unless you are white and seemingly middle class, AI may prove problematic to you, from decisions on financial products such as mortgages through to mistaken arrest and identification.

It soon becomes frighteningly worrying that unless you are white and seemingly middle class, AI may prove problematic to you, from decisions on financial products such as mortgages through to mistaken arrest and identification.

Michael Jones

Encourage your students to discuss how they could influence this issue of race, class, and gender bias — for example, what rules would they use for identifying suitable images for a data set? There are some interesting articles on this issue that you can share with your students at helloworld.cc/aibias1 and helloworld.cc/aibias2.

Where next with your learners?

In the classroom, you could then follow the route of building models that identify letters for words, for example. One of my students built a model that could identify a range of spoons and forks. You may notice that Teachable Machine can also be run on Arduino boards, which adds an extra dimension. Why not get your students to create their own AI assistant that responds to commands? The possibilities are there to be explored. If you’re using webcams to collect photos yourself, why not create a system that will identify students? If you are lucky enough to have a set of identical twins in your class, that adds just a little more flavour! Teachable Machine offers a hands-on way to demonstrate the issues of AI accuracy and bias, and gives students a healthy opportunity for debate.

Michael Jones is director of Computer Science at Northfleet Technology College in the UK. He is a Specialist Leader of Education and a CS Champion for the National Centre for Computing Education.

More resources for AI and data science education

At the Foundation, AI education is one of our focus areas. Here is how we are supporting you and your learners in this area already:

An image demonstrating that AI systems for object recognition do not distinguish between a real banana on a desk and the photo of a banana on a laptop screen.
  • Computing education researchers are working to answer the many open questions about what good AI and data science education looks like for young people. To learn more, you can watch the recordings from our research seminar series focused on this. We ourselves are working on research projects in this area and will share the results freely with the computing education community.
  • You can find a list of free educational resources about these topics that we’ve collated based on our research seminars, seminar participants’ recommendations, and our own work.


New products: Motoron dual high-power motor controllers

via Pololu Blog

We’ve expanded our Motoron series of motor controllers with some dual high-power motor controllers: the Motoron M2S family for Arduino and Motoron M2H family for Raspberry Pi! These new Motorons have the same I²C interface as the M3S256 and M3H256, and though they only have two channels instead of the three on their smaller counterparts, they can drive much more powerful motors with up to 20 A of current at 30 V or 16 A at 40 V. There are four combinations of voltage and current ranges, available in versions designed to work as Arduino shields and as Raspberry Pi expansions.

Using a Motoron M2S Dual High-Power Motor Controller Shield with an Arduino.

Using the Motoron M2H Dual High-Power Motor Controller with a Raspberry Pi.
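If you're driving one of the M2S shields from an Arduino, a minimal sketch with Pololu's Motoron Arduino library looks something like the following. It assumes the shield is at its default I2C address and that your motors and power supply are already wired up per the user's guide, so treat the acceleration limits and speed values as examples only.

```cpp
// Example sketch (assumptions noted above): drive both channels of a Motoron M2S
// shield over I2C using Pololu's Motoron Arduino library.
#include <Wire.h>
#include <Motoron.h>

MotoronI2C mc; // assumes the shield is at the default Motoron I2C address

void setup()
{
  Wire.begin();

  mc.reinitialize();    // reset the controller to its default settings
  mc.disableCrc();      // simpler raw I2C commands for this example
  mc.clearResetFlag();  // motors stay off until the reset flag is cleared

  // Example acceleration/deceleration limits for both channels.
  for (uint8_t channel = 1; channel <= 2; channel++)
  {
    mc.setMaxAcceleration(channel, 140);
    mc.setMaxDeceleration(channel, 300);
  }
}

void loop()
{
  // Speeds range from -800 (full reverse) to 800 (full forward).
  mc.setSpeed(1, 400);
  mc.setSpeed(2, -400);
  delay(2000);

  mc.setSpeed(1, 0);
  mc.setSpeed(2, 0);
  delay(2000);
}
```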

These eight additions bring the Motoron family up to a total of ten members overall:

Motoron Motor Controllers

|  | M3S256 / M3H256 | M2S24v14 / M2H24v14 | M2S24v16 / M2H24v16 | M2S18v18 / M2H18v18 | M2S18v20 / M2H18v20 |
|---|---|---|---|---|---|
| Motor channels | triple (3) | dual (2) | dual (2) | dual (2) | dual (2) |
| Max input voltage | 48 V | 40 V¹ | 40 V¹ | 30 V¹ | 30 V¹ |
| Max nominal battery voltage | 36 V | 28 V | 28 V | 18 V | 18 V |
| Max continuous current per channel | 2 A | 14 A | 16 A | 18 A | 20 A |
| Version for Arduino | M3S256 | M2S24v14 | M2S24v16 | M2S18v18 | M2S18v20 |
| Version for Raspberry Pi | M3H256 | M2H24v14 | M2H24v16 | M2H18v18 | M2H18v20 |

¹ Absolute maximum.

As with the smaller Motorons, the high-power versions can also be stacked and their addresses configured to allow many motors to be controlled with only one I²C bus. For a stack of M2S boards on an Arduino, we recommend soldering thick wires to the kit or board-only version because 5mm terminal blocks are tall enough that they would cause short circuits within the stack. However, the M2H boards can be set up to stack safely by trimming the terminal block leads and adding extra nuts to the standoffs for additional spacing.

Three Motoron M2S dual high-power motor controller shields being controlled by an Arduino Leonardo.

Two Motoron M2H boards with terminal blocks can be stacked if you trim the leads on the terminal blocks and space out each board using hex nuts in addition to the 11mm standoffs.

It’s also possible to stack different kinds of Motoron controllers so you can control different kinds of motors:

A Motoron M2H and a Motoron M3H256 being controlled by a Raspberry Pi, allowing for independent control of five motors.
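Once each board in a stack has its own I2C address, commanding them from one sketch is just a matter of creating one controller object per address. The addresses used below (17 and 18) are arbitrary example values, and this sketch assumes you have already assigned them to the boards as described in the Motoron documentation.

```cpp
// Example only: command two stacked Motorons that have already been assigned
// distinct I2C addresses (17 and 18 here are arbitrary example values).
#include <Wire.h>
#include <Motoron.h>

MotoronI2C mc1(17);
MotoronI2C mc2(18);

// Shared setup applied to each controller in the stack.
void setupMotoron(MotoronI2C &mc)
{
  mc.reinitialize();
  mc.disableCrc();
  mc.clearResetFlag();
}

void setup()
{
  Wire.begin();
  setupMotoron(mc1);
  setupMotoron(mc2);
}

void loop()
{
  // Four motors total: channels 1 and 2 on each controller.
  mc1.setSpeed(1, 600);
  mc1.setSpeed(2, 600);
  mc2.setSpeed(1, -600);
  mc2.setSpeed(2, -600);
  delay(1000);
}
```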

Unfortunately, the current state of the electronics supply chain is affecting how we’re making and selling these Motorons. In the past, when we released boards in multiple versions that have different MOSFET footprints, it was primarily to get us different power levels. Typically, we would make a less expensive one with smaller, lower-power MOSFETs and a more expensive one with bigger, higher-power MOSFETs. While we’re still doing this kind of thing with the M2S and M2H Motorons (the 24v14 and 18v18 use smaller MOSFETs and the 24v16 and 18v20 use bigger ones), in this case, it’s largely about maximizing parts options.

When we don’t know how many months (or years!) it will take for us to get more of a MOSFET, it’s hard to offer a product line where each model is totally dependent on one specific part. So we’ve chosen to make the different Motoron versions less distinct; the specified performance and prices are not as different between the small- and big-MOSFET versions since we want them to be viewed more interchangeably. Their performance specifications are also a little on the conservative side to give us more room to use different MOSFETs.

Even with those considerations, we still haven’t been able to get the parts to make as many of these new high-power Motorons as we want to. That’s why they are listed with a “Rationed” status in our store, with lower stock and higher pricing than we’d like. But we hope that as parts availability improves, we will eventually be able to ease up on those restrictions.

In fact, that just happened with the smaller M3S256 and M3H256: we received some long-awaited critical components that will let us make a lot more of those, so you should see more in stock soon, and we’ve already removed their Rationed status and lowered their prices!