Author Archives: Mac Bowley

Is upgrade culture out of date?

via Raspberry Pi

At Raspberry Pi, we’re interested in all things to do with technology, from building new tools and helping people teach computing, to researching how young people learn to create with technology and thinking about the role tech plays in our lives and society. Today, I’m writing about our habit of replacing devices with newer versions just for the sake of it.

Technology is involved in more of our lives than ever before: most of us carry a computer in our pocket everywhere we go. On the other hand, the length of time for which we use each individual piece of technology has grown very short. This is what’s referred to as upgrade culture, a cycle which sees most of us replacing our most trusted devices every two years with the latest products offered by tech giants like Apple and Samsung.

An illustration of four people using smartphones

How we got to this point is hard to determine, and there does not seem to be a single root cause for upgrade culture. This is why I want to start a conversation about it, so we can challenge our current perspectives and establish fact-based attitudes. I think it’s time that we, as individuals and as a collective, examine our relationship with new technology.

What is the natural lifespan of a device?

Digital technology is still so new that there is really no benchmark for how long digital devices should last. This means that the decision-making power has by default landed in the hands of device manufacturers and mobile network carriers, and for their profit margins, a two-year device lifecycle is beneficial.

Where do you see your role in this process as a consumer? Is it wrong to want to upgrade your phone after two years of constant use? Should phone companies slow their development, and would this hinder innovation? And, if you really need to upgrade, is there a better use for your old device than living in a drawer? These questions defy simple answers, and I want to hear what you think.

How does this affect the environment?

As with all our behaviours as consumers, the impact that upgrade culture has on the environment is an important concern. Environmental issues and climate change aren’t anything new, but they’re currently at the forefront of the global conversation, and for good reason.

Mobile devices are of course made in factories, and the concerns this raises have been covered well in many other places. The same goes for the energy needed to build technology, which could, at least in theory, be produced from renewable sources. Here I would like to focus on another aspect of the environmental impact of device production: the materials needed to create the tiny components that form our technological best friends.

Some components of your phone cannot be created without specific elements, such as silicon and lithium, along with a number of much rarer metals. (In fact, there are 83 stable non-radioactive elements in the periodic table, and 70 of them are used in some capacity in your phone.) Upgrade culture means there is high demand for these materials, and deposits are becoming more and more depleted. If you’re hoping there are renewable alternatives, you’ll be disappointed: a study by researchers working at Yale University found that there are currently no alternative materials that are as effective.

Then there’s the issue of how the materials are mined. The market trading these materials is highly competitive, and more often than not manufacturers buy from the companies that offer the lowest prices. To maintain their profit margins, these companies have to extract as much material as possible as cheaply as they can. As you can imagine, this leads to mining practices that are less than ethical or environmentally friendly. As many of the mines are located in remote areas of developing countries, these problems may feel distant to you, but they affect a lot of people and are a direct result of the market we are creating by upgrading our devices every two years.

"Two smartphones, blank screen" by Artem Beliaikin is licensed under CC0 1.0

Many of us agree that we need to do what we can to counteract climate change, and that, to achieve anything meaningful, we have to start looking at the way we live our lives. This includes questioning how we use technology. It will be through discussion and opinion gathering that we can start to make more informed decisions — as individuals and as a society.

The obsolescence question

You probably also have that one friend/colleague/family member who swears by their five-year-old mobile phone and scoffs at the prices of the newest models. These people are often labelled as sticklers who are afraid to join the modern age, but is there another way to see them? The truth is, if you’ve bought a phone in the last five years, then — barring major accidents — it will most likely still function and be just as effective as it was when it came out of the box. So why are so many consumers upgrading to new devices every two years?

"Nextbit Robin Smartphone" by Bhavesh Sondagar is licensed under CC0 1.0

Again, there isn’t a single reason, but I think marketing departments should shoulder much of the responsibility. Using marketing strategies, device manufacturers and mobile network carriers purposefully make us see the phones we currently own in a negative light. A common trope of mobile phone adverts is the overwrought comparison of your current device with a newly launched version. Thus, with each passing day after a new model is released, our opinion of our current device worsens, even if only on a subconscious level.

This marketing strategy is related to a business practice called planned obsolescence, which sees manufacturers purposefully limit the durability of their products in order to sell more units. An early example of planned obsolescence is the lightbulb, invented at the Edison company: it was relatively simple for the company to create a lightbulb that lasted 2500 hours, but it took years and a coalition of manufacturers to make a version that reliably broke after 1000 hours. We’re all aware that the lightbulb revolutionised many aspects of life, but it turns out it also had a big influence on consumer habits and what we see as acceptable practices of technology companies.

The widening digital divide

The final aspect of the impact of upgrade culture that I want to examine relates to the digital divide. This term describes the societal gap between the people with access to, and competence with, the latest technology, and the people without these privileges. To be able to upgrade, say, your mobile phone to the latest model every two years, you either need a great degree of financial freedom, or you need to tie yourself to a 24-month contract that may not be easily within your means. As a society, we revere the latest technology and hold people with access to it in high regard. What does this say to people who do not have this access?

"DeathtoStock_Creative Community5" by Denis Labrecque is licensed under CC0 1.0

Inadvertently, we are widening the digital divide by placing more value on new technology than is warranted. Innovation is exciting, and commercial success is celebrated — but do you ever stop and ask who really benefits from this? Is your new phone really that much better than the old one, or could it be that you’re mostly just basking in the social rewards of having the newest bit of kit?

What about Raspberry Pi technology?

Obviously, this blog post wouldn’t be complete if we didn’t share our perspective as a technology company as well. So here’s Raspberry Pi Trading CEO Eben Upton:

On our hardware and software

“Raspberry Pi tries very hard to avoid obsoleting older products. Obviously the latest Raspberry Pi 4 runs much faster than a Raspberry Pi 1 (something like forty times faster), but a Raspbian image we release today will run on the very earliest Raspberry Pi prototypes from the summer of 2011. Cutting customers off from software support after a couple of years is unethical, and bad for business in the long term: fool me once, shame on you; fool me twice, shame on me. The best companies respect their customers’ investment in their platforms, even if that investment happened far in the past.”

“What’s even more unusual about Raspberry Pi is that we aim to keep our products available for a long period of time. So not only can you run a 2020 software build on a 2014 Raspberry Pi 1B+: you can actually buy a brand-new 1B+ to run it on.”

On the environmental impact of our hardware

“We’re constantly working to reduce the environmental footprint of Raspberry Pi. If you look next to the USB connectors on Raspberry Pi 4, you’ll see a chunky black component. This is the reservoir capacitor, which prevents the 5V rail from drooping too far when a new USB device is plugged in. By using a polymer electrolytic capacitor, from our friends at Panasonic, we’ve been able to avoid the use of tantalum.”

“When we launched the official USB-C power supply for Raspberry Pi 4, one or two people on Twitter asked if we could eliminate the single-use plastic bag which surrounded the cable and plug assembly inside the box. Working with our partners at Kuantech, we found that we could easily do this for the white supplies, but not for the black ones. Why? Because when the box vibrates in transit, the plug scuffs against the case; this is visible on the black plastic, but not on the white.”

Raspberry Pi power supply with scuff marks

Raspberry Pi power supply with scuff mark

“So for now, if you want to eliminate single-use plastic, buy a white supply. In the meantime, we’ll be working to find a way (probably involving cunning origami) to eliminate plastic from the black supply.”

What do you think?

Time for you to discuss!

As I said, I want to hear from you about upgrade culture.

  • When was the last time you upgraded?
  • What were your reasons at the time?
  • Do you think upgrade culture should be addressed by mobile phone manufacturers and providers, or is it caused by our own consumption habits?
  • How might we address upgrade culture? Is it a problem that needs addressing?

Share your thoughts in the comments!

Upgrade culture is one of the topics for which we offer you a discussion forum on our free online course Impact of Technology. For educators, the course also covers how to facilitate classroom discussions about these topics, and a new course run has just begun — sign up today to take part for free!

The Impact of Technology online course is one of many courses developed by us with support from Google.

Can algorithms be unethical?

via Raspberry Pi

At Raspberry Pi, we’re interested in all things to do with technology, from building new tools and helping people teach computing, to researching how young people learn to create with technology and thinking about the role tech plays in our lives and society. One of the aspects of technology I myself have been thinking about recently is algorithms.

An illustration of a desktop computer above which 5 icons are shown for privacy, culture, law, environment, and ethics

Technology impacts our lives at the level of privacy, culture, law, environment, and ethics.

All kinds of algorithms — defined, repeatable series of steps that computers follow to perform a task — are running in the background of our lives. Some we recognise and interact with every day, such as online search engines or navigation systems; others operate unseen and are rarely directly experienced. We let algorithms make decisions that impact our lives in both large and small ways. As such, I think we need to consider the ethics behind them.
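
To make the idea concrete, here is a toy Python sketch of an algorithm in that sense: a fixed series of steps that, given the same input, always produces the same output. The ‘search engine’ below is entirely hypothetical and far simpler than a real one — it just counts how many of your query words appear on each page.

```python
def rank_pages(query, pages):
    """Step-by-step 'search': rank pages by how many query words they contain."""
    query_words = set(query.lower().split())
    scored = []
    for title, text in pages.items():
        # Step 1: count how many query words appear in this page's text
        score = sum(word in text.lower() for word in query_words)
        scored.append((score, title))
    # Step 2: order pages from most to least relevant
    scored.sort(reverse=True)
    # Step 3: return just the titles, best match first
    return [title for _, title in scored]

# Invented pages, purely for illustration
pages = {
    "Recycling your phone": "how to recycle or donate an old phone",
    "Phone reviews 2020": "the latest phone models reviewed and compared",
    "Banana bread recipe": "a simple recipe for banana bread",
}

print(rank_pages("recycle old phone", pages))
# ['Recycling your phone', 'Phone reviews 2020', 'Banana bread recipe']
```

A real search engine uses vastly more sophisticated ranking signals, but the principle is the same: the steps are set in advance by the people who wrote them.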

We need to talk about ethics

Ethics are rules of conduct that are recognised as acceptable or good by society. It’s easier to discuss the ethics of a specific algorithm than to talk about the ethics of algorithms as a whole. Nevertheless, it is important that we have these conversations, especially because people often see computers as ‘magic boxes’: you push a button and something magically comes out of the box, without any possibility of human influence over what that output is. This view puts power solely in the hands of the creators of the computing technology you’re using, and it isn’t guaranteed that these people have your best interests at heart or are motivated to behave ethically when designing the technology.

An icon with the word 'stakeholders' below it

Who creates the algorithms you use, and what are their motivations?

You should be critical of the output algorithms deliver to you, and if you have questions about possible flaws in an algorithm, you should not discount these as mere worries. Such questions could include:

  • Algorithms that make decisions have to use data to inform their choices. Are the data sets they use to make these decisions ethical and reliable?
  • Running an algorithm time and time again means applying the same approach time and time again. When dealing with societal problems, is there a single approach that will work successfully every time?

Below, I give two concrete examples to show where ethics come into the creation and use of algorithms. If you know other examples (or counter-examples, feel free to disagree with me), please share them in the comments.

Algorithms can be biased

Part of the ‘magic box’ mental model is the idea that computers are cold instruction followers that cannot think for themselves — so how can they be biased?

Humans aren’t born biased: we learn biases alongside everything else, as we watch the way our family and other people close to us interact with the world. Algorithms acquire biases in the same way: the developers who create them might inadvertently add their own biases, and the data an algorithm learns from can carry the biases of the people who recorded it.

An illustration of four people using smartphones

Humans can be biased, and therefore the algorithms they create can be biased too.

An example of this is a gang violence data analysis tool that the Met Police in London launched in 2012. Called the gang matrix, the tool held the personal information of over 300 individuals. 72% of the individuals on the matrix were non-white, and some had never committed a violent crime. In response to this, Amnesty International filed a complaint stating that the makeup of the gang matrix was influenced by police officers disproportionately labelling crimes committed by non-white individuals as gang-related.
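
To illustrate the mechanism — with invented numbers, not data from the actual gang matrix — here is a minimal Python sketch of how a skew in the way humans record data reappears in an algorithm’s output. The ‘model’ simply learns the rate at which each group was flagged in the historical records it is given.

```python
from collections import defaultdict

# Hypothetical historical records: (group, was_labelled_gang_related)
records = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# "Training": count how often each group was flagged in the records
counts = defaultdict(lambda: [0, 0])  # group -> [times_flagged, total_records]
for group, flagged in records:
    counts[group][0] += int(flagged)
    counts[group][1] += 1

def predicted_risk(group):
    """Predicted 'risk' is just the historical flag rate for that group."""
    flagged, total = counts[group]
    return flagged / total

# The output reproduces whatever skew the human recorders put into the labels
print(predicted_risk("group_a"))  # 0.75
print(predicted_risk("group_b"))  # 0.25
```

The algorithm hasn’t ‘decided’ anything unfair by itself; it has faithfully reproduced the pattern in the labels it was trained on.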

Who curates the content we consume?

We live in a content-rich society: there is much, much more online content than one person could possibly take in. Almost every piece of content we consume is selected by algorithms: the music you listen to, the videos you watch, the articles you read, and even the products you buy.

An illustration of a phone screen showing an invented tweet asking where people get their news from

Some of you may have experienced a week in January 2012 in which you saw a lot of either cute kittens or sad images on Facebook; if so, you may have been involved in a global social experiment that Facebook engineers performed on 600,000 of its users without their consent. Some of these users were shown overwhelmingly positive content, and others overwhelmingly negative content. The Facebook engineers monitored the users’ actions to gauge how they responded. Was this experiment ethical?

In order to select content that is attractive to you, content algorithms observe the choices you make and the content you consume. The most effective algorithms give you more of the same content, with slight variation. How does this impact our beliefs and views? How do we broaden our horizons?
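
As a hypothetical sketch of this ‘more of the same’ behaviour, the following Python snippet recommends whichever unseen items share the most tags with what a user has already consumed; all of the titles and tags are invented.

```python
# Everything the user has already watched, tagged by topic (invented data)
watched = {
    "kitten video": {"cats", "cute", "animals"},
    "puppy compilation": {"dogs", "cute", "animals"},
}

# The catalogue of items the platform could show next (also invented)
catalogue = {
    "more kittens": {"cats", "cute", "animals"},
    "investigative documentary": {"news", "politics"},
    "cute hamster": {"hamsters", "cute", "animals"},
}

# Build a simple interest profile from past viewing
profile = set().union(*watched.values())

def familiarity(tags):
    """Score an item by how much it overlaps with the existing profile."""
    return len(tags & profile)

# Recommend the catalogue, most familiar items first
recommendations = sorted(catalogue, key=lambda item: familiarity(catalogue[item]), reverse=True)
print(recommendations)
# ['more kittens', 'cute hamster', 'investigative documentary']
```

Notice that the documentary never rises to the top: a profile built purely from past behaviour keeps steering you back towards what you already know.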

Why trust algorithms at all then?

People generally don’t like making decisions; almost everyone knows the discomfort of indecision. In addition, emotions have a huge effect on the decisions humans make moment to moment. Algorithms on the other hand aren’t impacted by emotions, and they can’t be indecisive.

While algorithms are not immune to bias, in general they are way less susceptible to it than humans. And if a bias is identified in an algorithm, an engineer can remove the bias by editing the algorithm or changing the dataset the algorithm uses. The same cannot be said for human biases, which are often deeply ingrained and widespread in society.
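
As a small, hypothetical example of the second fix mentioned above (changing the dataset), the sketch below rebalances an imbalanced set of records by downsampling every group to the size of the smallest one, so that no group dominates what an algorithm learns from.

```python
import random

random.seed(0)

# Invented, imbalanced records: far more entries for group_a than for group_b
dataset = [("group_a", i) for i in range(80)] + [("group_b", i) for i in range(20)]

def rebalance(records):
    """Downsample every group to the size of the smallest group."""
    groups = {}
    for group, value in records:
        groups.setdefault(group, []).append((group, value))
    smallest = min(len(items) for items in groups.values())
    balanced = []
    for items in groups.values():
        balanced.extend(random.sample(items, smallest))
    return balanced

balanced = rebalance(dataset)
print(sum(1 for group, _ in balanced if group == "group_a"))  # 20
print(sum(1 for group, _ in balanced if group == "group_b"))  # 20
```

Real mitigation techniques are more involved than this, but the point stands: a dataset is something engineers can inspect and change, in a way that ingrained human biases are not.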

An icon showing a phone screen with an internet browser symbol

As is true for all technology, algorithms can create new problems as well as solve existing problems.

That’s why there are more and less appropriate areas for algorithms to operate in. For example, using algorithms in policing is almost always a bad idea, because the data involved is recorded by humans and is very subjective. In more objective, data-driven fields, on the other hand, algorithms have been employed very successfully; diagnostic algorithms in medicine are one example.

Algorithms in your life

I would love to hear what you think: this conversation requires as many views as possible to be productive. Share your thoughts on the topic in the comments! Here are some more questions to get you thinking:

  • What algorithms do you interact with every day?
  • How large are the decisions you allow algorithms to make?
  • Are there algorithms you absolutely do not trust?
  • What do you think would happen if we let algorithms decide everything?

Feel free to respond to other people’s comments and discuss the points they raise.

The ethics of algorithms is one of the topics for which we offer you a discussion forum on our free online course Impact of Technology. The course also covers how to facilitate classroom discussions about technology — if you’re an educator teaching computing or computer science, it is a great resource for you!
