dailynewspick.com : Collection of RSS News Feeds

Daily News Pick Main page


Scroll down to read. Use the menu above to choose a different RSS feed. Note: In June 2020, Reuters killed their RSS feeds, which is a shame.

The available RSS feeds are valid news sites that are all considered to be neutral. Nothing leaning too far left, nothing leaning too far right. Plus some fun stuff. Hope you find the page useful.

If you want to know more about how this works, please visit the Tutorial page to learn to make your own RSS reader.

Current feed - IEEE Spectrum

7 Revealing Ways AIs Fail


Artificial intelligence could perform more quickly, accurately, reliably, and impartially than humans on a wide range of problems, from detecting cancer to deciding who receives an interview for a job. But AIs have also suffered numerous, sometimes deadly, failures. And the increasing ubiquity of AI means that failures can affect not just individuals but millions of people.

Increasingly, the AI community is cataloging these failures with an eye toward monitoring the risks they may pose. "There tends to be very little information for users to understand how these systems work and what it means to them," says Charlie Pownall, founder of the AI, Algorithmic and Automation Incident & Controversy Repository. "I think this directly impacts trust and confidence in these systems. There are lots of possible reasons why organizations are reluctant to get into the nitty-gritty of what exactly happened in an AI incident or controversy, not the least being potential legal exposure, but if looked at through the lens of trustworthiness, it's in their best interest to do so."

Part of the problem is that the neural network technology that drives many AI systems can break down in ways that remain a mystery to researchers. "It's unpredictable which problems artificial intelligence will be good at, because we don't understand intelligence itself very well," says computer scientist Dan Hendrycks at the University of California, Berkeley.

Here are seven examples of AI failures and what current weaknesses they reveal about artificial intelligence. Scientists discuss possible ways to deal with some of these problems; others currently defy explanation or may, philosophically speaking, lack any conclusive solution altogether.

1) Brittleness

Illustration: A robot holding its head with gears and chips coming out. Credit: Chris Philpot

Take a picture of a school bus. Flip it so it lies on its side, as it might be found in the case of an accident in the real world. A 2018 study found that state-of-the-art AIs that would normally identify the right-side-up school bus correctly failed to do so, on average, 97 percent of the time when it was rotated.

"They will say the school bus is a snowplow with very high confidence," says computer scientist Anh Nguyen at Auburn University, in Alabama. The AIs are not capable of a task of mental rotation "that even my 3-year-old son could do," he says.

Such a failure is an example of brittleness. An AI often "can only recognize a pattern it has seen before," Nguyen says. "If you show it a new pattern, it is easily fooled."
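This rotation failure is easy to probe in spirit with any off-the-shelf image classifier. Below is a minimal sketch, assuming PyTorch and torchvision are installed; it uses a generic pretrained ResNet rather than the models from the 2018 study, and "bus.jpg" is a placeholder path.

```python
# Sketch: probe a pretrained classifier's brittleness to rotation.
# Assumes torch/torchvision; "bus.jpg" is a hypothetical input image.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("bus.jpg").convert("RGB")
for angle in (0, 90):  # upright vs. lying on its side
    x = preprocess(img.rotate(angle, expand=True)).unsqueeze(0)
    with torch.no_grad():
        probs = torch.softmax(model(x), dim=1)
    conf, idx = probs.max(dim=1)
    print(f"rotation={angle}: class={idx.item()}, confidence={conf.item():.2f}")
```

If the predicted class changes (often with high confidence) between the two rotations, you have reproduced the brittleness Nguyen describes.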

There are numerous troubling cases of AI brittleness. Fastening stickers on a stop sign can make an AI misread it. Changing a single pixel on an image can make an AI think a horse is a frog. Neural networks can be 99.99 percent confident that multicolor static is a picture of a lion. Medical images can be modified in a way imperceptible to the human eye so that AIs misdiagnose cancer 100 percent of the time. And so on.

One possible way to make AIs more robust against such failures is to expose them to as many confounding "adversarial" examples as possible, Hendrycks says. However, they may still fail against rare "black swan" events. "Black-swan problems such as COVID or the recession are hard for even humans to address—they may not be problems just specific to machine learning," he notes.
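A common way to generate the confounding examples Hendrycks describes is the fast gradient sign method (FGSM), which nudges each pixel in whatever direction most increases the model's loss. The sketch below illustrates the general technique, not any particular study's code; the epsilon value is illustrative, and it assumes inputs scaled to [0, 1].

```python
# Sketch: fast gradient sign method (FGSM), a standard way to make
# "adversarial" training examples. loss_fn is something like
# torch.nn.functional.cross_entropy.
import torch

def fgsm_example(model, loss_fn, x, y, epsilon=0.03):
    """Perturb input x within an L-infinity ball to maximize the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # Step each pixel in the direction that most increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()
```

During adversarial training, examples produced this way are mixed back into each training batch so the network learns to classify them correctly.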

2) Embedded Bias

Illustration: A robot holding a scale with a finger pushing down one side. Credit: Chris Philpot

Increasingly, AI is used to help support major decisions, such as who receives a loan, the length of a jail sentence, and who gets health care first. The hope is that AIs can make decisions more impartially than people often have, but much research has found that biases embedded in the data on which these AIs are trained can result in automated discrimination en masse, posing immense risks to society.

For example, in 2019, scientists found a nationally deployed health care algorithm in the United States was racially biased, affecting millions of Americans. The AI was designed to identify which patients would benefit most from intensive-care programs, but it routinely enrolled healthier white patients into such programs ahead of black patients who were sicker.

Physician and researcher Ziad Obermeyer at the University of California, Berkeley, and his colleagues found the algorithm mistakenly assumed that people with high health care costs were also the sickest patients and most in need of care. However, due to systemic racism, "black patients are less likely to get health care when they need it, so are less likely to generate costs," he explains.

After working with the software's developer, Obermeyer and his colleagues helped design a new algorithm that analyzed other variables and displayed 84 percent less bias. "It's a lot more work, but accounting for bias is not at all impossible," he says. They recently drafted a playbook that outlines a few basic steps that governments, businesses, and other groups can implement to detect and prevent bias in existing and future software they use. These include identifying all the algorithms they employ, understanding this software's ideal target and its performance toward that goal, retraining the AI if needed, and creating a high-level oversight body.
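A first step in the kind of audit the playbook describes is simply comparing a model's outcomes across groups. The sketch below is a toy illustration, not code from the playbook; the scores, group labels, and threshold are invented.

```python
# Sketch: a toy bias audit, comparing how often a model selects
# members of each group for a program. Purely illustrative.
from collections import defaultdict

def selection_rates(scores, groups, threshold=0.5):
    """Fraction of each group whose model score clears the threshold."""
    selected, total = defaultdict(int), defaultdict(int)
    for score, group in zip(scores, groups):
        total[group] += 1
        if score >= threshold:
            selected[group] += 1
    return {g: selected[g] / total[g] for g in total}

# Large gaps between groups warrant a closer look at the training labels.
print(selection_rates([0.9, 0.4, 0.7, 0.2], ["a", "b", "a", "b"]))
```

In the health care case, the crucial fix was not the threshold but the label: replacing "predicted cost" with variables closer to actual health need.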

3) Catastrophic Forgetting

Illustration: A robot in front of a fire with a question mark over its head. Credit: Chris Philpot

Deepfakes—highly realistic artificially generated fake images and videos, often of celebrities, politicians, and other public figures—are becoming increasingly common on the Internet and social media, and could wreak plenty of havoc by fraudulently depicting people saying or doing things that never really happened. To develop an AI that could detect deepfakes, computer scientist Shahroz Tariq and his colleagues at Sungkyunkwan University, in South Korea, created a website where people could upload images to check their authenticity.

In the beginning, the researchers trained their neural network to spot one kind of deepfake. However, after a few months, many new types of deepfake emerged, and when they trained their AI to identify these new varieties of deepfake, it quickly forgot how to detect the old ones.

This was an example of catastrophic forgetting—the tendency of an AI to entirely and abruptly forget information it previously knew after learning new information, essentially overwriting past knowledge with new knowledge. "Artificial neural networks have a terrible memory," Tariq says.

AI researchers are pursuing a variety of strategies to prevent catastrophic forgetting so that neural networks can, as humans seem to do, continuously learn effortlessly. A simple technique is to create a specialized neural network for each new task one wants performed—say, distinguishing cats from dogs or apples from oranges—"but this is obviously not scalable, as the number of networks increases linearly with the number of tasks," says machine-learning researcher Sam Kessler at the University of Oxford, in England.

One alternative Tariq and his colleagues explored as they trained their AI to spot new kinds of deepfakes was to supply it with a small amount of data on how it identified older types so it would not forget how to detect them. Essentially, this is like reviewing a summary of a textbook chapter before an exam, Tariq says.

However, AIs may not always have access to past knowledge—for instance, when dealing with private information such as medical records. Tariq and his colleagues were trying to prevent an AI from relying on data from prior tasks. They had it train itself to spot new deepfake types while also learning from another AI that was previously trained to recognize older deepfake varieties. They found this "knowledge distillation" strategy was roughly 87 percent accurate at detecting the kind of low-quality deepfakes typically shared on social media.
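Knowledge distillation is usually implemented as an extra loss term that keeps the new "student" network's outputs close to those of a frozen, previously trained "teacher." Here is a minimal PyTorch-style sketch of such a loss; it illustrates the general technique rather than the authors' actual implementation, and the temperature and weighting are illustrative defaults.

```python
# Sketch: a knowledge-distillation loss that lets a new detector learn
# fresh deepfake types while mimicking a frozen, previously trained one.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    # Hard loss: learn the new task from ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    # Soft loss: stay close to the old model's softened predictions.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=1),
        F.softmax(teacher_logits / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * hard + (1 - alpha) * soft
```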

4) Explainability

Illustration: A robot pointing at a chart. Credit: Chris Philpot

Why does an AI suspect a person might be a criminal or have cancer? The explanation for this and other high-stakes predictions can have many legal, medical, and other consequences. The way in which AIs reach conclusions has long been considered a mysterious black box, leading to many attempts to devise ways to explain AIs' inner workings. "However, my recent work suggests the field of explainability is getting somewhat stuck," says Auburn's Nguyen.

Nguyen and his colleagues investigated seven different techniques that researchers have developed to attribute explanations for AI decisions—for instance, what makes an image of a matchstick a matchstick? Is it the flame or the wooden stick? They discovered that many of these methods "are quite unstable," Nguyen says. "They can give you different explanations every time."

In addition, while one attribution method might work on one set of neural networks, "it might fail completely on another set," Nguyen adds. The future of explainability may involve building databases of correct explanations, Nguyen says. Attribution methods can then go to such knowledge bases "and search for facts that might explain decisions," he says.
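One of the simplest attribution methods of the kind Nguyen's team stress-tested is a plain gradient saliency map; the instability he describes shows up when two methods, or two runs of one method, disagree about which pixels matter. A minimal sketch, assuming a PyTorch image classifier with a batch of one:

```python
# Sketch: gradient saliency, one of the simplest attribution methods.
# Instability can be probed by comparing maps across methods or seeds.
import torch

def saliency_map(model, x, target_class):
    """Per-pixel importance of input x for the given class score."""
    x = x.clone().detach().requires_grad_(True)
    score = model(x)[0, target_class]
    score.backward()
    # Take the strongest gradient across color channels.
    return x.grad.abs().max(dim=1)[0]
```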

5) Quantifying Uncertainty

Illustration: A robot holding a hand of cards and pushing chips. Credit: Chris Philpot

In 2016, a Tesla Model S car on autopilot collided with a truck that was turning left in front of it in northern Florida, killing its driver—the automated driving system's first reported fatality. According to Tesla's official blog, neither the autopilot system nor the driver "noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied."

One potential way Tesla, Uber, and other companies may avoid such disasters is for their cars to do a better job at calculating and dealing with uncertainty. Currently AIs "can be very certain even though they're very wrong," Oxford's Kessler says. If an algorithm makes a decision, "we should have a robust idea of how confident it is in that decision, especially for a medical diagnosis or a self-driving car, and if it's very uncertain, then a human can intervene and give [their] own verdict or assessment of the situation," he says.

For example, computer scientist Moloud Abdar at Deakin University in Australia and his colleagues applied several different uncertainty-quantification techniques as an AI classified skin-cancer images as malignant or benign, or melanoma or not. The researchers found these methods helped prevent the AI from making overconfident diagnoses.
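One widely used family of uncertainty-quantification techniques (not necessarily the exact methods in Abdar's study) is Monte Carlo dropout: run the network many times with dropout left on and treat the spread of the predictions as a confidence signal. A minimal sketch:

```python
# Sketch: Monte Carlo dropout. Keep dropout active at test time and
# treat the variance across repeated passes as an uncertainty signal.
import torch

def mc_dropout_predict(model, x, n_samples=30):
    # Note: model.train() also toggles batch-norm layers; a production
    # system would enable only the dropout layers.
    model.train()
    with torch.no_grad():
        preds = torch.stack([torch.softmax(model(x), dim=1)
                             for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)  # prediction, uncertainty
```

A high standard deviation on a skin-lesion image is exactly the cue for handing the case to a human, as Kessler suggests.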

Autonomous vehicles remain challenging for uncertainty quantification, as current uncertainty-quantification techniques are often relatively time consuming, "and cars cannot wait for them," Abdar says. "We need to have much faster approaches."

6) Common Sense

Illustration: A robot sitting on a branch and cutting it with a saw. Credit: Chris Philpot

AIs lack common sense—the ability to reach acceptable, logical conclusions based on a vast context of everyday knowledge that people usually take for granted, says computer scientist Xiang Ren at the University of Southern California. "If you don't pay very much attention to what these models are actually learning, they can learn shortcuts that lead them to misbehave," he says.

For instance, scientists may train AIs to detect hate speech on data in which such speech is unusually prevalent, such as white supremacist forums. However, when this software is exposed to the real world, it can fail to recognize that black and gay people may respectively use the words "black" and "gay" more often than other groups. "Even if a post is quoting a news article mentioning Jewish or black or gay people without any particular sentiment, it might be misclassified as hate speech," Ren says. In contrast, "humans reading through a whole sentence can recognize when an adjective is used in a hateful context."

Previous research suggested that state-of-the-art AIs could draw logical inferences about the world with up to roughly 90 percent accuracy, suggesting they were making progress at achieving common sense. However, when Ren and his colleagues tested these models, they found even the best AI could generate logically coherent sentences with slightly less than 32 percent accuracy. When it comes to developing common sense, "one thing we care a lot [about] these days in the AI community is employing more comprehensive checklists to look at the behavior of models on multiple dimensions," he says.
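The "checklists" Ren mentions can be as simple as templated probe sentences that a well-behaved model must never flag. A toy sketch; the classifier interface, templates, and group terms are placeholders, not from Ren's work:

```python
# Sketch: checklist-style probing for shortcut behavior in a
# hate-speech classifier. classify(text) -> "hate" or "neutral"
# is a placeholder interface.
TEMPLATES = ["The article interviewed several {} residents.",
             "A new study surveyed {} families in the city."]
GROUPS = ["black", "gay", "Jewish"]

def run_checklist(classify):
    """Return neutral sentences the model wrongly flags as hate speech."""
    return [t.format(g) for t in TEMPLATES for g in GROUPS
            if classify(t.format(g)) == "hate"]
```

Any sentence in the returned list is evidence the model keyed on the group term rather than the sentiment.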

7) Math

Illustration: A robot holding cards with "2+2=" and "5" on them. Credit: Chris Philpot

Although conventional computers are good at crunching numbers, AIs "are surprisingly not good at mathematics at all," Berkeley's Hendrycks says. "You might have the latest and greatest models that take hundreds of GPUs to train, and they're still just not as reliable as a pocket calculator."

For example, Hendrycks and his colleagues trained an AI on hundreds of thousands of math problems with step-by-step solutions. However, when tested on 12,500 problems from high school math competitions, "it only got something like 5 percent accuracy," he says. In comparison, a three-time International Mathematical Olympiad gold medalist attained 90 percent success on such problems "without a calculator," he adds.
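Benchmarks like this are typically scored by exact-match grading of the model's final answer. A minimal sketch of such a scorer; the problem format and model interface are assumptions, not the benchmark's actual code:

```python
# Sketch: exact-match scoring of a model on math problems.
# Each problem is assumed to be a dict with "question" and "answer" keys,
# and model_answer(question) returns the model's final answer string.
def score(model_answer, problems):
    correct = sum(model_answer(p["question"]).strip() == p["answer"].strip()
                  for p in problems)
    return correct / len(problems)
```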

Neural networks nowadays can learn to solve nearly every kind of problem "if you just give it enough data and enough resources, but not math," Hendrycks says. Many problems in science require a lot of math, so this current weakness of AI can limit its application in scientific research, he notes.

It remains uncertain why AI is currently bad at math. One possibility is that neural networks attack problems in a highly parallel manner like human brains, whereas math problems typically require a long series of steps to solve, so maybe the way AIs process data is not as suitable for such tasks, "in the same way that humans generally can't do huge calculations in their head," Hendrycks says. However, AI's poor performance on math "is still a niche topic: There hasn't been much traction on the problem," he adds.


DARPA SubT Finals: Meet the Teams


This is it! This week, we're at the DARPA Subterranean Challenge Finals in Louisville, KY, where more than two dozen Systems Track and Virtual Track teams will compete for millions of dollars in prize money and the right to say "we won a DARPA challenge," which is of course priceless.

We've been following SubT for years, from Tunnel Circuit to Urban Circuit to Cave (non-)Circuit. For a recent recap, have a look at this post-cave, pre-final article that includes an interview with SubT program manager Tim Chung, but if you don't have time for that, the TL;DR is that this week we're looking at both a Virtual Track and a Systems Track with physical robots on a real course. The Systems Track teams spent Monday checking in at the Louisville Mega Cavern competition site, and we asked each team to tell us how they've been preparing, what they think will be most challenging, and what makes them unique.


Team CERBERUS


CERBERUS

Country

USA, Switzerland, United Kingdom, Norway

Members

University of Nevada, Reno

ETH Zurich, Switzerland

University of California, Berkeley

Sierra Nevada Corporation

Flyability, Switzerland

Oxford Robotics Institute, United Kingdom

Norwegian University of Science and Technology (NTNU), Norway

Robots

TBA

Follow Team

Website

@CerberusSubt

Q&A: Team Lead Kostas Alexis

How have you been preparing for the SubT Final?

First of all, this year's preparation was strongly influenced by Covid-19, as our team spans multiple countries, namely the US, Switzerland, Norway, and the UK. Despite the challenges, we leveled up our weekly shake-out events and ran a two-month, team-wide integration and testing activity in Switzerland during July and August, with multiple tests in diverse underground settings, including several mines. Note that we are bringing a brand-new set of four ANYmal C robots and a new generation of collision-tolerant flying robots, so during this period we also built new hardware.

What do you think the biggest challenge of the SubT Final will be?

We are excited to see how the vast spaces available in the Mega Cavern will be combined with the very narrow cross-sections and vertical structures that DARPA promises. We think that terrain with steep slopes and other obstacles, complex 3D geometries, and dynamic obstacles will be the core challenges.

What is one way in which your team is unique, and why will that be an advantage during the competition?

Our team coined early on the idea of combining legged and flying robots. We have remained focused on this core vision and bring fully self-developed hardware for both the legged and flying systems. This is both our advantage and, in a way, our limitation, as we spend a lot of time on hardware development. We are excited about the potential we see developing, and we are optimistic that it will be demonstrated at the Final Event!

Team Coordinated Robotics


Coordinated Robotics

Country

USA

Members

California State University Channel Islands

Oke Onwuka

Sequoia Middle School

Robots

TBA

Q&A: Team Lead Kevin Knoedler

How have you been preparing for the SubT Final?

Coordinated Robotics has been preparing for the SubT Final with lots of testing on our team of robots. We have been running them inside, outside, day, night, and in all of the circumstances that we can come up with. In Kentucky we have been busy updating all of the robots to the same standard and repairing bits of shipping damage before the SubT Final.

What do you think the biggest challenge of the SubT Final will be?

The biggest challenge for us will be pulling all of the robots together to work as a team and making sure that everything is communicating together. We did not have lab access until late July, so we had robots at individuals' homes but were generally only testing one robot at a time.

What is one way in which your team is unique, and why will that be an advantage during the competition?

Coordinated Robotics is unique in a couple of different ways. We are one of only two unfunded teams, so we take a lower-budget approach to solving lots of the issues, which helps us come up with some creative solutions. We are also unique in that we will be bringing a lot of robots (23), so that problems with individual robots can be tolerated as the team of robots continues to search.

Team CoSTAR


CoSTAR

Country

USA, South Korea, Sweden

Members

Jet Propulsion Laboratory

California Institute of Technology

Massachusetts Institute of Technology

KAIST, South Korea

Lulea University of Technology, Sweden

Robots

TBA

Follow Team

Website

Q&A: Caltech Team Lead Joel Burdick

How have you been preparing for the SubT Final?

Since May, the team has made four trips to a limestone cave near Lexington, Kentucky (they just finished a week-long "game" there yesterday). Since February, parts or all of the team have been testing two to three days a week in a section of the abandoned subway system in downtown Los Angeles.

What do you think the biggest challenge of the SubT Final will be?

That will be a tough one to answer in advance. The expected CoSTAR-specific challenges are of course the complexity of the test site that DARPA has prepared, fatigue of the team, and the usual last-minute hardware failures: we had to have an entirely new set of batteries for all of our communication nodes FedExed to us yesterday. More generally, we expect the other teams to be well prepared. Speaking only for myself, I think there will be 4-5 teams that could easily win this competition.

What is one way in which your team is unique, and why will that be an advantage during the competition?

Previously, our team was unique in our Boston Dynamics legged mobility. We've heard that other teams may be using Spot quadrupeds as well, so that may no longer be a point of uniqueness. We shall see! More importantly, we believe our team is unique in the breadth of its participants (university team members from the U.S., Europe, and Asia). Kind of like the old British empire: the sun never sets on the geographic expanse of Team CoSTAR.

Team CSIRO Data61


CSIRO Data61

Country

Australia, USA

Members

Commonwealth Scientific and Industrial Research Organisation, Australia

Emesent, Australia

Georgia Institute of Technology

Robots

TBA

Follow Team

Website

Twitter

Q&A: SubT Principal Investigator Navinda Kottege

How have you been preparing for the SubT Final?

Test, test, test. We've been testing as often as we can, simulating the competition conditions as best we can. We're very fortunate to have an extensive site here at our CSIRO lab in Brisbane that has enabled us to construct quite varied tests for our full fleet of robots. We have also done a number of offsite tests as well.

After going through the initial phases, we have converged on a good combination of platforms for our fleet. Our workhorse platform since the Tunnel Circuit has been the BIA5 ATR tracked robot. We have recently added Boston Dynamics Spot quadrupeds to our fleet, and we are quite happy with their performance and their level of integration with our perception and navigation stack. We also have custom-designed Subterra Navi drones from Emesent. Our fleet consists of two of each of these three platform types. We have also designed and built a new "smart node" for communication with the Rajant nodes. These are dropped from the tracked robots and automatically deploy after a delay, extending their ground plates and antennae. As described above, we have been doing extensive integration testing with the full system to shake out bugs and make improvements.

What do you think the biggest challenge of the SubT Final will be?

The biggest challenge is the unknown. It is always a learning process to discover how the robots respond to new classes of obstacle; responding to this on the fly in a new environment is extremely challenging. Given the format of two preliminary runs and one prize run, there is little to no margin for error compared to previous circuit events where there were multiple runs that contributed to the final score. Any significant damage to robots during the preliminary runs would be difficult to recover from to perform in the final run.

What is one way in which your team is unique, and why will that be an advantage during the competition?

Our fleet uses a common sensing, mapping, and navigation system across all robots, built around our Wildcat SLAM technology. This is what enables coordination between robots, and it provides the accuracy required to locate detected objects. It has allowed us to easily integrate different robot platforms into our fleet. We believe this "homogeneous sensing on heterogeneous platforms" paradigm gives us a unique advantage: it reduces the overall complexity of the development effort for the fleet and allows us to scale the fleet as needed. Having excellent partners in Emesent and Georgia Tech, and having their full commitment and support, is also a strong advantage for us.

Team CTU-CRAS-NORLAB


CTU-CRAS-NORLAB

Country

Czech Republic, Canada

Members

Czech Technical University, Czech Republic

Université Laval, Canada

Robots

TBA

Follow Team

Website

Twitter

Q&A: Team Lead Tomas Svoboda

How have you been preparing for the SubT Final?

We spent most of the time preparing new platforms as we made a significant technology update. We tested the locomotion and autonomy of the new platforms in Bull Rock Cave, one of the largest caves in Czechia. We also deployed the robots in an old underground fortress to examine the system in an urban-like underground environment. The very last weeks were, however, dedicated to integration tests and system tuning.

What do you think the biggest challenge of the SubT Final will be?

Hard to say, but given the expected environment, the vertical shafts might be the most challenging, since they are not easy to access for testing and tuning the system experimentally. They would also add challenges to communication.

What is one way in which your team is unique, and why will that be an advantage during the competition?

Not sure about the other teams, but we plan to deploy all kinds of ground vehicles: tracked, wheeled, and legged platforms, accompanied by several drones. We hope the diversity of platform types will be beneficial for adapting to the possible diversity of terrains and underground challenges. Besides, we also hope the tuned communication system will provide access to the robots at longer range than last time. Optimistically, we might keep all robots connected to the communication infrastructure built during the mission; the bandwidth is very limited, but it should be sufficient for reporting artifacts and for high-level switching of the robots' goals and autonomous behavior.

Team Explorer


Explorer

Country

USA

Members

Carnegie Mellon University

Oregon State University

Robots

TBA

Follow Team

Website

Facebook

Q&A: Team Co-Lead Sebastian Scherer

How have you been preparing for the SubT Final?

Since we expect DARPA to have some surprises on the course for us, we have been practicing in a wide range of different courses around Pittsburgh, including an abandoned hospital complex, a cave, and limestone and coal mines. As the finals approached, we were practicing at these locations nearly daily, with debrief and debugging sessions afterward. This has helped us find the advantages of each of the platforms, ways of controlling them, and the different sensor modalities.

What do you think the biggest challenge of the SubT Final will be?

For our team the biggest challenges are steep slopes for the ground robots, thin loose obstacles that can get sucked into the props of the drones, and narrow passages.

What is one way in which your team is unique, and why will that be an advantage during the competition?

We have developed a heterogeneous team for SubT exploration. This gives us an advantage since there is not a single platform that is optimal for all SubT environments. Tunnels are optimal for roving robots, urban environments for walking robots, and caves for flying. Our ground robots and drones are custom-designed for navigation in rough terrain and tight spaces. This gives us an advantage since we can get to places not reachable by off-the-shelf platforms.

Team MARBLE


MARBLE

Country

USA

Members

University of Colorado, Boulder

University of Colorado, Denver

Scientific Systems Company, Inc.

University of California, Santa Cruz

Robots

TBA

Follow Team

Twitter

Q&A: Project Engineer Gene Rush

How have you been preparing for the SubT Final?

Our team has worked tirelessly over the past several months as we prepare for the SubT Final. We have invested most of our time and energy in real-world field deployments, which help us in two major ways. First, it allows us to repeatedly test the performance of our full autonomy stack, and second, it provides us the opportunity to emphasize Pit Crew and Human Supervisor training. Our PI, Sean Humbert, has always said "practice, practice, practice." In the month leading up to the event, we stayed true to this advice by holding 10 deployments across a variety of environments, including parking garages, campus buildings at the University of Colorado Boulder, and the Edgar Experimental Mine.

What do you think the biggest challenge of the SubT Final will be?

I expect the most difficult challenge will center on autonomous high-level decision making. Of course, mobility challenges, including treacherous terrain, stairs, and drop-offs, will certainly test the physical capabilities of our mobile robots. However, the scale of the environment is so great, and time so limited, that rapidly identifying the areas likely to hold human survivors is vitally important and a very difficult open challenge. I expect most teams, ours included, will utilize the intuition of the Human Supervisor to make these decisions.

What is one way in which your team is unique, and why will that be an advantage during the competition?

Our team has pushed on advancing hands-off autonomy, so our robotic fleet can operate independently in the worst-case scenario: a communication-denied environment. Lack of wireless communication is common in subterranean search-and-rescue missions, and we therefore expect DARPA to stress this part of the challenge in the SubT Final. Our autonomy solution is designed so that it can operate both with and without communication back to the Human Supervisor. When we are in communication with our robotic teammates, the Human Supervisor can issue several high-level commands to help the robots make better decisions.

Team Robotika


Robotika

Country

Czech Republic, USA, Switzerland

Members

Robotika International, Czech Republic and United States

Robotika.cz, Czech Republic

Czech University of Life Sciences, Czech Republic

Centre for Field Robotics, Czech Republic

Cogito Team, Switzerland

Robots

Two wheeled robots

Follow Team

Website

Twitter

Q&A: Team Lead Martin Dlouhy

How have you been preparing for the SubT Final?

Our team participates in both Systems and Virtual tracks. We were using the virtual environment to develop and test our ideas and techniques and once they were sufficiently validated in the virtual world, we would transfer these results to the Systems track as well. Then, to validate this transfer, we visited a few underground spaces (mostly caves) with our physical robots to see how they perform in the real world.

What do you think the biggest challenge of the SubT Final will be?

Besides the usual challenges inherent to underground spaces (mud, moisture, fog, condensation), we also noticed the unusual configuration of the starting point, which is a sharp downhill slope. Our solution is designed to be careful about going on too-steep slopes, so our concern is that, as things stand, the robots may hesitate to even get started. We are making some adjustments in the remaining time to account for this. Also, unlike the environments in all the previous rounds, the Mega Cavern features some really large open spaces. Our solution is designed to expect detection of obstacles somewhere in the vicinity of the robot at any given point, so the concern is that a large open space may confuse its navigation system. We are looking into handling such a situation better as well.

What is one way in which your team is unique, and why will that be an advantage during the competition?

It appears that we are unique in bringing only two robots into the Finals. We have brought more into the earlier rounds to test different platforms and ultimately picked the two we are fielding this time as best suited for the expected environment. A potential benefit for us is that supervising only two robots could be easier and perhaps more efficient than managing larger numbers.


Solar and Battery Companies Rattle Utility Powerhouses


All eyes these days may be on Elon Musk's space venture—which has just put people in orbit—but here on Earth you can now get your monthly electric bill courtesy of a different Musk enterprise.

Tesla and its partner Octopus Energy Germany recently rolled out retail utility services in two large German states. Marketed as the "Tesla Energy Plan," the service is available to any individual household in this region of 24 million people that has a solar panel system, a grid connection, and a Tesla Powerwall, the Palo Alto firm's Gigafactory-made 13.5-kWh battery wall unit.

The German initiative comes on the heels of a similar rollout through Octopus Energy last November in the United Kingdom.

It's too soon to say if these are the nascent strands of a "giant distributed utility," an expression Musk has long talked up, the meaning of which is not yet clear. Analysts and power insiders sketch scenes including interconnected local renewable grids that draw on short-duration battery storage (including the small batteries in electric vehicles in a garage, models of which Tesla just happens to make) combined with multi-day storage for power generated by wind and solar. For bigger national grids it gets more complicated. Even so, Tesla also now has gear on the market that institutional battery-storage developers can use to run load-balancing trade operations: the consumer won't see those, but they're part of ongoing changes as renewables become more important in the power game. Being able to get a Tesla-backed power bill in the mailbox, though—that's grabbing attention. And more broadly speaking, the notion of what is and isn't a utility is in flux.

"Over the last five to 10 years we have seen an uptick in new entrants providing retail energy services," says Albert Cheung, head of global analysis at BloombergNEF. "It is now quite common to see these types of companies gain significant market share without necessarily owning any of their own generation or network assets at all."

A decade ago it became possible to get your electricity in the UK from a department store chain (though with the actual power supplied first by a Scottish utility and—as of 2018—arranged and managed by Octopus Energy). As Tesla and other makers of home energy storage systems ramp up production for modular large-scale lithium-ion batteries that can be stacked together in industrial storage facilities, new wrinkles are coming to the grid.

"There are simply going to be more and different business models out there," Cheung says. "There is going to be value in distributed energy resources at the customer's home; Whether that is a battery, an electric vehicle charger, a heat pump or other forms of flexible load, and managing these in a way that provides value to the grid will create revenue opportunities."

Photo: The Tesla Gigafactory site taking shape in Grünheide, Germany, in June 2021. It is due to open in late 2021 or early 2022. Credit: Michael Dumiak

Tesla the battery maker, with its giant new production plant nearing completion near Berlin, may be in position to supply a variety of venues with its wall-sized and cargo-container-sized units. As it does so, its controversial bet in first backing and then absorbing panel producer SolarCity may start to look a little different.

Harmony Energy seems pretty pleased. The UK-based energy developer has just broken ground on a new four-acre battery storage site outside London, its third such site. Its second site just came online with 68 MWh of storage capacity and a 34 MW peak, comprising 28 Tesla Megapack batteries. Harmony expects to be at over a gigawatt of live, operating output in the next three to four years.

The Harmony enterprise works with the UK national grid, however—that's different from Octopus's German and UK retail initiatives. Both Harmony and Octopus depend on trading and energy network management software platforms, and Tesla works with both. But while Octopus has its own in-house management platform—Kraken—Harmony engages Tesla's Autobidder.

Peter Kavanagh, Harmony's CEO, says his firm pays Tesla to operate Autobidder on its behalf—Tesla is fully licensed to trade in the UK and is an approved utility there. The batteries get charged when power is cheap; when there's low wind and no sun, energy prices may start to spike, and the batteries can discharge the power back into the grid, balancing the constant change of supply and demand, and trading on the difference to make a business.
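At its core, the trading logic Kavanagh describes is price arbitrage across time. The toy sketch below illustrates only the principle; the price thresholds and series are invented, and a real platform such as Autobidder additionally forecasts prices and optimizes bids in the market.

```python
# Sketch: toy battery arbitrage. Charge when power is cheap, discharge
# when it is expensive. Capacity and power loosely echo Harmony's
# 68 MWh / 34 MW site; all prices and thresholds are invented.
def run_arbitrage(prices, capacity_mwh=68, power_mw=34,
                  buy_below=20.0, sell_above=80.0):
    stored, profit = 0.0, 0.0
    for price in prices:  # one price per hour, in currency/MWh
        if price <= buy_below and stored < capacity_mwh:
            energy = min(power_mw, capacity_mwh - stored)  # 1-hour step
            stored += energy
            profit -= energy * price
        elif price >= sell_above and stored > 0:
            energy = min(power_mw, stored)
            stored -= energy
            profit += energy * price
    return profit

print(run_arbitrage([15, 18, 95, 30, 12, 110]))
```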

A load-balancing trading operation is not quite the same as mainlining renewables to light a house. On any national grid, once the energy is in there, it's hard to trace the generating source—some of it will come from fossil fuels. But industrial-scale energy storage is crucial to any renewable operation: the wind dies down, the sun doesn't always shine. "Whether it's batteries or some other energy storage technology, it is key to hitting net zero carbon emissions," Kavanagh says. "Without it, you are not going to get there."

Battery research and development is burgeoning far beyond Tesla, and the difficult hunt is on to move past lithium ion. And it's not just startups and young firms in the mix: Established utility giants—the Pacific Gas & Electrics of the world, able to generate as well as retail power—are also adding battery storage, and at scale. In Germany, the large industrial utility RWE started its own battery unit and is now operating small energy storage sites in Germany and in Arizona. Newer entrants, potential energy powerhouses, are on the rise in Italy, Spain and Denmark.

The Tesla Energy Plan does have German attention, though, from media and energy companies alike. It's also of note that Tesla is behind the very large battery at Australia's Hornsdale Power Reserve. One German pundit imagined Octopus's Kraken management platform as a "monstrous octopus with millions of tentacles," linking a myriad of in-house electric storage units to form a huge virtual power plant. That would be something to reckon with.


Help Build the Future of Assistive Technology


This article is sponsored by California State University, Northridge (CSUN).

Your smartphone is getting smarter. Your car is driving itself. And your watch tells you when to breathe. That, as strange as it might sound, is the world we live in. Just look around you. Almost every day, there's a better or more convenient version of the latest gadget, device, or software. And that's only on the commercial end. The medical and rehabilitative tech is equally impressive — and arguably far more important. Because for those with disabilities, assistive technologies mean more than convenience. They mean freedom.

So, what is an assistive technology (AT), and who designs it? The term might be new to you, but you're undoubtedly aware of many: hearing aids, prosthetics, speech-recognition software (Hey, Siri), even the touch screen you use each day on your cell phone. They're all assistive technologies. AT, in its most basic form, is anything that helps a person achieve enhanced performance, improved function, or accelerated access to information. A car lets you travel faster than walking; a computer lets you process data at an inhuman speed; and a search engine lets you easily find information.


CSUN Master of Science in Assistive Technology Engineering

The fully online M.S. in Assistive Technology Engineering program can be completed in less than two years and allows you to collaborate with other engineers and AT professionals. GRE is not required and financial aid is available. Request more information about the program here.

That's the concept – in a simplified form, of course. The applications, however, are vast and still expanding. In addition to mechanical products and devices, the field is deeply involved in artificial intelligence, machine learning, and neuroscience. Brain machine interfaces, for instance, allow users to control prosthetics with thought alone; and in some emergency rooms, self-service kiosks can take your blood pressure, pulse and weight, all without any human intervention.

These technologies, and others like them, will only grow more prevalent with time – as will the need for engineers to design them. Those interested in the field typically enter biomedical engineering programs. These programs, although robust in design, often focus on hardware, teaching students how to apply engineering principles to medicine and health care. What many lack, however, is a focus on the user. But that's changing. Some newer programs, many of them certificates, employ a more user-centric model.

One recent example is the Master of Science in Assistive Technology Engineering at California State University, Northridge (CSUN). The degree, designed in collaboration with industry professionals, is a hybrid of sorts, focusing as much on user needs as on the development of new technologies.

CSUN, it should be noted, is no newcomer to the field. For more than three decades, the university has hosted the world's largest assistive technology conference. To give you an idea, this year's attendees included Google, Microsoft, Hulu, Amazon, and the Central Intelligence Agency.

The university is also home to a sister degree, the Master of Science in Assistive Technology and Human Services, which prepares graduates to assist and train AT users. As you can imagine, companies are aggressively recruiting engineers with this cross-functional knowledge. Good UX design is universally desired, as it's needed for both optimal function and, often, ADA compliance.

In addition to mechanical devices, the field of Assistive Technology is deeply involved in artificial intelligence, machine learning, and neuroscience

The field has implications in war as well – both during and after. Coming as no surprise, the military is investing heavily in AT hardware and research. Why? On the most basic level, the military is interested in rehabilitating combat veterans. Assistive technologies, such as prosthetic limbs, enable those wounded in combat to pursue satisfying lives in the civilian world.

Beyond that, assistive technology is a core part of the military's long-term strategic plan. Wearable electronics, such as VR headsets and night vision goggles, both fit within the military's expanding technological horizon, as do heads-up displays, exoskeletons and drone technologies.

The Future of Assistive Technology

So, what does the future have in store for AT? We'll likely see more and better commercial technologies designed for entertainment. Think artificial realities with interactive elements in the real world (a whale floating by your actual window, not a simulated one). Kevin Kelly of Wired Magazine refers to this layered reality as the "Mirrorworld." And according to him, it's going to spark the next tech platform. Imagine Facebook in the Matrix... Or, come to think of it, don't.

An increasing number of mobile apps, such as those able to detect Parkinson's disease, will also hit the market. As will new biomedical hardware, like brain and visual implants. Fortunately, commercial innovations often drive medical ones as well. And as we see an uptick in entertainment, we'll see an equal surge in medicine, with new technologies – things we haven't even considered yet – empowering those in need.

Help build the future of assistive technology! Visit CSUN's Master of Science in Assistive Technology Engineering site to learn more about the program or request more information here.

High Temperature Resistant Adhesives Beat the Heat


Selecting the right adhesive product for extreme-temperature applications may seem as straightforward as reading temperature resistance values on data sheets. Some engineers address temperature issues by simply selecting an adhesive rated for temperatures beyond their application's expected operating temperature.

However, because suppliers test adhesives so differently, temperature resistance values on data sheets are notoriously inconsistent. Master Bond's latest white paper takes a closer look at some of these crucial issues and the key factors to consider when your adhesive application has to beat the heat or cope with the cold.


Download this free whitepaper


Video Friday: Preparing for the SubT Final


Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – [Online Event]
IROS 2021 – September 27 – October 1, 2021 – [Online Event]
Robo Boston – October 1-2, 2021 – Boston, MA, USA
WearRAcon Europe 2021 – October 5-7, 2021 – [Online Event]
ROSCon 2021 – October 20-21, 2021 – [Online Event]
Silicon Valley Robot Block Party – October 23, 2021 – Oakland, CA, USA

Let us know if you have suggestions for next week, and enjoy today's videos.


Team Explorer, the SubT Challenge entry from CMU and Oregon State University, is in the last stage of preparation for the competition this month inside the Mega Caverns cave complex in Louisville, Kentucky.

[ Explorer ]

Team CERBERUS is looking good for the SubT Final next week, too.

Autonomous subterranean exploration with the ANYmal C Robot inside the Hagerbach underground mine

[ ARL ]

I'm still as skeptical as I ever was about a big and almost certainly expensive two-armed robot that can do whatever you can program it to do (have fun with that) and seems to rely on an app store for functionality.

[ Unlimited Robotics ]

Project Mineral is using breakthroughs in artificial intelligence, sensors, and robotics to find ways to grow more food, more sustainably.

[ Mineral ]

Not having a torso or anything presumably makes this easier.

Next up, Digit limbo!

[ Hybrid Robotics ]

Paric completed layout of a 500-unit apartment complex using the Dusty FieldPrinter solution. Autonomous layout on the plywood deck saved weeks' worth of schedule, allowing the panelized walls to be placed sooner.

[ Dusty Robotics ]

Spot performs inspection in the Kidd Creek Mine, enabling operators to keep their distance from hazards.

[ Boston Dynamics ]

Digit's engineered to be a multipurpose machine. Meaning, it needs to be able to perform a collection of tasks in practically any environment. We do this by first ensuring the robot's physically capable. Then we help the robot perceive its surroundings, understand its surroundings, then reason a best course of action to navigate its environment and accomplish its task. This is where software comes into play. This is early AI in action.

[ Agility Robotics ]

This work proposes a compact robotic limb, AugLimb, that can augment body functions and support daily activities. The proposed device can be mounted on the user's upper arm and transforms into a compact state without obstructing the wearer.

[ AugLimb ]

Ahold Delhaize and AIRLab need the help of academics who have knowledge of human-robot interactions, mobility, manipulation, programming, and sensors to accelerate the introduction of robotics in retail. In the AIRLab Stacking challenge, teams will work on algorithms that focus on smart retail applications, for example, automated product stacking.

[ PAL Robotics ]

Leica, not at all well known for making robots, is getting into the robotic reality capture business with a payload for Spot and a new drone.

Introducing BLK2FLY: Autonomous Flying Laser Scanner

[ Leica BLK ]

As much as I like Soft Robotics, I'm maybe not quite as optimistic as they are about the potential for robots to take over quite this much from humans in the near term.

[ Soft Robotics ]

Over the course of this video, the robot gets longer and longer and longer.

[ Transcend Robotics ]

This is a good challenge: attach a spool of electrical tape to your drone, which can unpredictably unspool itself, and make sure it doesn't totally screw you up.

[ UZH ]

Two interesting short seminars from NCCR Robotics, including one on autonomous racing drones and "neophobic" mobile robots.

Dario Mantegazza: Neophobic Mobile Robots Avoid Potential Hazards

[ NCCR ]

This panel on Synergies between Automation and Robotics comes from ICRA 2021, and once you see the participant list, I bet you'll agree that it's worth a watch.

[ ICRA 2021 ]

CMU RI Seminars are back! This week we hear from Andrew E. Johnson, a Principal Robotics Systems Engineer in the Guidance and Control Section of the NASA Jet Propulsion Laboratory, on "The Search for Ancient Life on Mars Began with a Safe Landing."

Prior Mars rover missions have all landed in flat and smooth regions, but for the Mars 2020 mission, which is seeking signs of ancient life, this was no longer acceptable. Terrain relief that is ideal for the science obviously poses significant risks for landing, so a new landing capability called Terrain Relative Navigation (TRN) was added to the mission. This talk will describe the scientific goals of the mission, the design of the Terrain Relative Navigation system, and the successful results from the landing on February 18, 2021.

[ CMU RI Seminar ]


China’s Mars Helicopter to Support Future Rover Exploration


The first-ever powered flight by an aircraft on another planet took place in April, when NASA's Ingenuity helicopter, delivered to the Red Planet along with the Perseverance rover, lifted off the Martian surface. But the idea has already taken off elsewhere.

Earlier this month a prototype "Mars surface cruise drone system" developed by a team led by Bian Chunjiang at China's National Space Science Center (NSSC) in Beijing gained approval for further development.

Like Ingenuity, which was intended purely as a technology demonstration, it uses two sets of blades on a single rotor mast to provide lift for vertical take-offs and landings in the very thin Martian atmosphere, which is around 1% the density of Earth's.

The team did consider a fixed wing approach, which other space-related research institutes in China have been developing, but found the constraints related to size, mass, power and lift best met by the single rotor mast approach.

Solar panels charge Ingenuity's batteries enough to allow one 90-second flight per Martian day. The NSSC team is, however, considering adopting wireless charging through the rover, or a combination of both power systems.

The total mass is 2.1 kilograms, slightly heavier than the 1.8-kg Ingenuity. It would fly at an altitude of 5-10 meters, reaching speeds of around 300 meters per minute, with a possible duration of 3 minutes per flight. Limitations include energy consumption and temperature control.

According to an article published by China Science Daily, Bian proposed the development of a helicopter to help guide a rover in March 2019, and the proposal was accepted in June of that year. The idea is that by imaging the areas ahead, the helicopter could help the rover select routes that avoid otherwise unseen areas that restrict or pose challenges to driving.

The small craft's miniature multispectral imaging system may also detect scientifically valuable targets, such as evidence of notable compounds, that would otherwise be missed, deliver preliminary data and direct the rover for more detailed observations.

The next steps, Bian said, will be developing the craft so as to be able to operate in the very low atmospheric pressure and frigid temperatures of Mars as well as the dust environment and other complex environmental variables.

Bian also notes that to properly support science and exploration goals the helicopter design life must be at least a few months or even beyond a year on Mars.

To properly test the vehicle, these conditions will have to be simulated here on Earth. Bian says China does not currently have facilities that can meet all of the parameters. Faced with similar challenges for Ingenuity, Caltech graduate students built a custom wind tunnel for testing, and the NSSC team may likewise need to take a bespoke approach.

"The next 5 to 6 years are a window for research." Bian said. "We hope to overcome these technical problems and allow the next Mars exploration mission to carry a drone on Mars."

When the Mars aircraft could be deployed on Mars is unknown. China's first Mars rover landed in May, but there is no backup vehicle, unlike its predecessor lunar rover missions. The country's next interplanetary mission is expected to be a complex and unprecedented Mars sample-return launching around 2028-2030.

Ingenuity's first flight was declared by NASA to be a "Wright Brothers moment." Six years after the 1903 Wright Flyer, Chinese-born Feng Ru successfully flew his own biplane. Likewise, in the coming years, China will be looking to carry out its own powered flight on another planet.


New Fuel Cell Tech Points Toward Zero-Emission Trains


This article is part of our exclusive IEEE Journal Watch series in partnership with IEEE Xplore.

Diesel- and steam-powered trains have been transporting passengers and cargo around the world for more than 200 years—all the while releasing greenhouse gas emissions into the atmosphere. In the hope of a greener future, many countries and companies are eyeing more renewable sources of locomotion. The Pittsburgh-based company Wabtec recently unveiled a battery-electric hybrid train that it says can reduce emissions "by double digits per train." More ambitiously, some are developing hydrogen-powered trains, which, rather than emitting greenhouse gases, produce only water vapor and droplets.

The technology has the potential to help countries meet greenhouse gas reduction targets and slow the progression of climate change. But, producing electricity from hydrogen comes with its own challenges. For example, the fuel cells require additional heavy converters to manage their wide voltage range. The weight of these bulky converters ultimately reduces the range of the train.

In a recent advance, researchers in the UK have designed a new converter that is substantially lighter and more compact than state-of-the-art hydrogen fuel cell converters. They describe the new design in a study published August 25 in IEEE Transactions on Industrial Electronics.

Pietro Tricoli, a professor at the University of Birmingham, was involved in the study. He notes that lighter converters are needed to help maximize the range that hydrogen-powered trains can travel. His team therefore developed the new, lighter converter, which they describe in their paper as "ground-breaking."

It uses semiconductor devices to draw energy in a controlled way from the fuel cells and deliver it to the train's motors. "Our converter directly manages any voltage variations in the fuel cells, without affecting the motor currents. A conventional system would require two separate converters to achieve this," explains Tricoli. With the power converted to AC, the motors of a train can benefit from regenerative braking, whereby energy is harvested and recycled when the train is decelerating.
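The payoff of regenerative braking can be bounded with nothing more than the kinetic-energy formula E = ½mv². The sketch below uses illustrative numbers, not figures from the study; the recovery efficiency is an assumed placeholder.

```python
# Sketch: upper bound on energy recoverable by regenerative braking,
# from the train's kinetic energy E = 1/2 m v^2. All numbers are
# illustrative; the efficiency factor is an assumption.
def recoverable_energy_kwh(mass_tonnes, speed_kmh, efficiency=0.6):
    mass_kg = mass_tonnes * 1000
    speed_ms = speed_kmh / 3.6
    energy_j = 0.5 * mass_kg * speed_ms ** 2
    return efficiency * energy_j / 3.6e6  # joules -> kWh

# e.g., a hypothetical 100-tonne regional train braking from 80 km/h
print(f"{recoverable_energy_kwh(100, 80):.1f} kWh")
```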

The researchers first tested their design through simulations and then validated it with a small-scale laboratory prototype representing the traction system of a train. The results confirm that the new converter can deliver the desired speeds and accelerations, as well as achieve regenerative braking.

Photos: Left: A prototype of the new hydrogen cell converter. Right: A module used at the heart of the converter. Credit: Ivan Krastev

"The main strength of the converter is the reduction of volume and weight comparted to the state of the art [converters for hydrogen fuel cells]," explains Tricoli. The main drawback, he says, is that the new converter design requires more semiconductor devices, as well as more complex circuitry and monitoring systems.

Tricoli says there's still plenty of work ahead optimizing the system, ultimately, toward a full-scale prototype. "The current plan is to engage with train manufacturers and manufacturers of traction equipment to build a second [prototype] for a hydrogen train," he says.

This past spring marked an exciting milestone when, upon the completion of a 538-day trial period, two hydrogen-powered trains successfully transported passengers across 180,000 kilometers in Germany—with zero vehicle emissions.

As more such advances in hydrogen technology are made, increasingly efficient hydrogen-powered trains become possible. All aboard!


Q&A With Co-Creator of the 6502 Processor


Few people have seen their handiwork influence the world more than Bill Mensch. He helped create the legendary 8-bit 6502 microprocessor, launched in 1975, which was the heart of groundbreaking systems including the Atari 2600, Apple II, and Commodore 64. Mensch also created the VIA 65C22 input/output chip, whose rich feature set was crucial to the 6502's overall popularity, and the second-generation 65C816, a 16-bit processor that powered machines such as the Apple IIGS and the Super Nintendo console.

Many of the 65x series of chips are still in production. The processors and their variants are used as microcontrollers in commercial products, and they remain popular among hobbyists who build home-brewed computers. The surge of interest in retrocomputing has led to folks once again swapping tips on how to write polished games in 6502 assembly code, with new titles being released for the Atari, BBC Micro, and other machines.

Mensch, an IEEE senior life member, splits his time between Arizona and Colorado, but folks in the Northeast of the United States will have the opportunity to see him as a keynote speaker at the Vintage Computer Festival in Wall, N.J., on the weekend of 8 October. In advance of Mensch's appearance, The Institute caught up with him via Zoom to talk about his career.

This interview has been condensed and edited for clarity.

The Institute: What drew you into engineering?

Bill Mensch: I went to Temple University [in Philadelphia] on the recommendation of a guidance counselor. When I got there I found they only had an associate degree in engineering technology. But I didn't know what I was doing, so I thought: Let's finish up that associate degree. Then I got a job [in 1967] as a technician at [Pennsylvania TV maker] Philco-Ford and noticed that the engineers were making about twice as much money. I also noticed I was helping the engineers figure out what Motorola was doing in high-voltage circuits—which meant that Motorola was the leader and Philco was the follower. So I went to the University of Arizona, close to where Motorola was, got my engineering degree [in 1971] and went to work for Motorola.

TI: How did you end up developing the 6502?

BM: Chuck Peddle approached me. He arrived at Motorola two years after I started. Now, this has not been written up anywhere that I'm aware of, but I think his intention was to raid Motorola for engineers. He worked with me on the peripheral interface chip (PIA) and got to see me in action. He decided I was a young, egotistical engineer who was just the right kind to go with his ego. So Chuck and I formed a partnership of sorts. He was the system engineer, and I was the semiconductor engineer. We tried to start our own company [with some other Motorola engineers] and when that didn't happen, we joined an existing [semiconductor design] company, called MOS Technology, in Pennsylvania in 1974. That's where we created the 6501 and 6502 [in 1975], and I designed the input/output chips that went with it. The intention was to [develop a US $20 microprocessor to] compete with the Intel 4040 microcontroller chipset, which sold for about $29 at the time. We weren't trying to compete with the 6800 or the 8080 [chips designed for more complex microcomputer systems].

TI: The 6502 did become the basis of a lot of microcomputer systems, and if you look at contemporary programmer books, they often talk about the quirks of the 6502's architecture and instruction set compared with other processors. What drove those design decisions?

BM: Rod Orgill and I had completed the designs of a few microprocessors before the 6501/6502. In other words, Rod and I already knew what was successful in an instruction set. And lower cost was key. So we looked at what instructions we really needed. And we figured out how to have addressable registers by using zero page [the first 256 bytes in RAM]. So you can have one byte for the op code and one byte for the address, and [the code is compact and fast]. There are limitations, but compared to other processors, zero page was a big deal.
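
To see the savings Mensch describes, compare how the 6502 encodes the same load-accumulator (LDA) instruction in zero-page and absolute addressing modes. The opcodes and cycle counts below come from the standard 6502 instruction set; the small Python harness is just for illustration.

```python
# Zero-page vs. absolute addressing on the 6502, using LDA as the example.
# Opcodes ($A5, $AD) and cycle counts are from the 6502 instruction set.

def encode_lda_zero_page(addr: int) -> bytes:
    assert 0x00 <= addr <= 0xFF        # zero page: first 256 bytes of RAM
    return bytes([0xA5, addr])         # 2 bytes, executes in 3 cycles

def encode_lda_absolute(addr: int) -> bytes:
    assert 0x0000 <= addr <= 0xFFFF
    lo, hi = addr & 0xFF, addr >> 8    # the 6502 is little-endian
    return bytes([0xAD, lo, hi])       # 3 bytes, executes in 4 cycles

print(encode_lda_zero_page(0x42).hex())   # a542: one byte shorter...
print(encode_lda_absolute(0x1234).hex())  # ad3412: ...and one cycle slower
```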

There is a love for this little processor that's undeniable.

TI: A lot of pages in those programming books are devoted to explaining how to use the versatile interface adapter (VIA) chip and its two I/O ports, on-board timers, a serial shift register, and so on. Why so many features?

BM: I had worked on the earlier PIA chip at Motorola. That meant I understood the needs of real systems in real-world implementations. [While working at MOS] Chuck, Wil Mathis, our applications guy, and I were eating at an Arby's one day, and we talked about doing something beyond the PIA. And they were saying, "We'd like to put a couple of timers on it. We'd like a serial port," and I said, "Okay, we're going to need more register select lines." And our notes are on an Arby's napkin. And I went off and designed it. Then I had to redesign it to make it more compatible with the PIA. I also made a few changes at Apple's request. What's interesting about the VIA is that it's the most popular chip we sell today. I'm finding out more and more about how it was used in different applications.

TI: After MOS Technology, in 1978 you founded The Western Design Center, where you created the 65C816 CPU. The creators of the ARM processor credit a visit to WDC as giving them the confidence to design their own chip. Do you remember that visit?

BM: Vividly! Sophie Wilson and Steve Furber visited me and talked to me about developing a 32-bit chip. They wanted to leapfrog what Apple was rumored to be up to. But I was just finishing up the '816, and I didn't want to change horses. So when they [had success with the ARM] I was cheering them on because it wasn't something I wanted to do. But I did leave them with the idea of, "Look, if I can do it here … there are two of you; there's one of me."

TI: The 6502 and '816 are often found today in other forms, either as the physical core of a system-on-a-chip, or running on an FPGA. What are some of the latest developments?

BM: I'm excited about what's going on right now. It's more exciting than ever. I was just given these flexible 6502s printed with thin films by PragmatIC! Our chips are in IoT devices, and we have new educational boards coming out.

TI: Why do you think the original 65x series is still popular, especially among people building their own personal computers?

BM: There is a love for this little processor that's undeniable. And the reason is we packed it with love while we were designing it. We knew what we were doing. Rod and I knew from our previous experience with the Olivetti CPU and other chips. And from my work with I/O chips, I knew [how computers were used] in the real world. People want to work with the 65x chips because they are accessible. You can trust the technology.


Spot’s 3.0 Update Adds Increased Autonomy, New Door Tricks


While Boston Dynamics' Atlas humanoid spends its time learning how to dance and do parkour, the company's Spot quadruped is quietly getting much better at doing useful, valuable tasks in commercial environments. Solving tasks like dynamic path planning and door manipulation in a way that's robust enough that someone can buy your robot and not regret it is, I would argue, just as difficult as (if not more difficult than) getting a robot to do a backflip.

With a short blog post today, Boston Dynamics is announcing Spot Release 3.0, representing more than a year of software improvements over Release 2.0, which we covered back in May of 2020. The highlights of Release 3.0 include autonomous dynamic replanning, cloud integration, some clever camera tricks, and a new ability to handle push-bar doors. Earlier today, we spoke with Zachary Jackowski, Spot chief engineer at Boston Dynamics, to learn more about what Spot's been up to.


Boston Dynamics' blog post lists the highlights of today's Spot Release 3.0 software upgrade in full.

The focus here is not just making Spot more autonomous, but making Spot more autonomous in some very specific ways that are targeted towards commercial usefulness. It's tempting to look at this stuff and say that it doesn't represent any massive new capabilities. But remember that Spot is a product, and its job is to make money, which is an enormous challenge for any robot, much less a relatively expensive quadruped.

Photo: A yellow and black four-legged robot standing in a factory.

For more details on the new release and a general update about Spot, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics.

IEEE Spectrum: So what's new with Spot 3.0, and why is this release important?

Zachary Jackowski: We've been focusing heavily on flexible autonomy that really works for our industrial customers. The thing that may not quite come through in the blog post is how iceberg-y making autonomy work on real customer sites is. Our blog post has some bullet points about "dynamic replanning" in maybe 20 words, but in doing that, we actually reengineered almost our entire autonomy system based on the failure modes of what we were seeing on our customer sites.

The biggest thing that changed is that previously, our robot mission paradigm was a linear mission where you would take the robot around your site and record a path. Obviously, that was a little bit fragile on complex sites—if you're on a construction site and someone puts a pallet in your path, you can't follow that path anymore. So we ended up engineering our autonomy system to do building scale mapping, which is a big part of why we're calling it Spot 3.0. This is state-of-the-art from an academic perspective, except that it's volume shipping in a real product, which to me represents a little bit of our insanity.

And one super cool technical nugget in this release is that we have a powerful pan/tilt/zoom camera on the robot that our customers use to take images of gauges and panels. We've added scene-based alignment as well as computer-vision model-based alignment so that the robot can capture the images from the same perspective, every time, perfectly framed. In pictures of the robot, you can see that there's this crash cage around the camera, but the image alignment stuff actually does inverse kinematics to command the robot's body to shift a little bit if the cage is blocking anything important in the frame.
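
As a rough, editorial illustration of that alignment step (a simplified sketch, not Boston Dynamics' implementation), the correction can be thought of as converting the pixel offset between a detected gauge and its reference position into pan/tilt adjustments:

```python
# Hypothetical sketch: turn the pixel offset of a detected feature into
# pan/tilt corrections so every inspection photo is framed like the
# reference shot. Field-of-view and resolution values are assumptions.

def reframe_correction(px, py, ref_px, ref_py,
                       fov_h_deg=60.0, fov_v_deg=40.0,
                       width=1920, height=1080):
    """Approximate (pan, tilt) in degrees that moves the feature seen at
    (px, py) to its reference position (ref_px, ref_py)."""
    pan = (ref_px - px) / width * fov_h_deg    # small-angle approximation
    tilt = (ref_py - py) / height * fov_v_deg
    return pan, tilt

# Gauge detected 200 px left of and 50 px below its reference position:
pan, tilt = reframe_correction(760, 590, 960, 540)
print(f"pan {pan:+.2f} deg, tilt {tilt:+.2f} deg")
```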

When Spot is dynamically replanning around obstacles, how much flexibility does it have in where it goes?

There are a bunch of tricks to figuring out when to give up on a blocked path, and then it's very simple, run-of-the-mill route planning within an existing map. One of the really big design points of our system, which we spent a lot of time talking about during the design phase, is that it turns out that in these high-value facilities people really value predictability. So it's not desired that the robot starts wandering around trying to find its way somewhere.
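
For illustration only: "run-of-the-mill route planning within an existing map" can be sketched as a search over an occupancy grid, re-run whenever a recorded path turns out to be blocked. Spot's planner is of course far more sophisticated; this toy version just shows the basic idea, including giving up cleanly rather than wandering.

```python
# Minimal replanning sketch: breadth-first search on a 4-connected grid.
# 1 = blocked, 0 = free. Purely illustrative, not Boston Dynamics' code.

from collections import deque

def plan(grid, start, goal):
    """Return the shortest path from start to goal, or None if blocked."""
    rows, cols = len(grid), len(grid[0])
    parents, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                      # reconstruct path backwards
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for step in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = step
            if 0 <= nr < rows and 0 <= nc < cols and \
                    grid[nr][nc] == 0 and step not in parents:
                parents[step] = cell
                frontier.append(step)
    return None  # no route exists: give up rather than wander

grid = [[0, 0, 0],
        [0, 1, 0],   # a pallet appears mid-mission and blocks the center
        [0, 0, 0]]
print(plan(grid, (0, 0), (2, 2)))
```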

Do you think that over time, your customers will begin to trust the robot with more autonomy and less predictability?

I think so, but there's a lot of trust to be built there. Our customers have to see the robot do the job well for a significant amount of time, and that will come.

Can you talk a bit more about trying to do state-of-the-art work on a robot that's being deployed commercially?

I can tell you about how big the gap is. When we talk about features like this, our engineers are like, "oh yeah I could read this paper and pull this algorithm and code something up over a weekend and see it work." It's easy to get a feature to work once, make a really cool GIF, and post it to the engineering group chat room. But if you take a look at what it takes to actually ship a feature at product-level, we're talking person-years to have it reach the level of quality that someone is accustomed to buying an iPhone and just having it work perfectly all the time. You have to write all the code to product standards, implement all your tests, and get everything right there, and then you also have to visit a lot of customers, because the thing that's different about mobile robotics as a product is that it's all about how the system responds to environments that it hasn't seen before.

The blog post calls Spot 3.0 "A Sensing Solution for the Real World." What is the real world for Spot at this point, and how will that change going forward?

For Spot, 'real world' means power plants, electrical switch yards, chemical plants, breweries, automotive plants, and other living and breathing industrial facilities that have never considered the fact that a robot might one day be walking around in them. It's indoors, it's outdoors, in the dark and in direct sunlight. When you're talking about the geometric aspect of sites, that complexity we're getting pretty comfortable with.

I think the frontiers of complexity for us are things like, how do you work in a busy place with lots of untrained humans moving through it—that's an area where we're investing a lot, but it's going to be a big hill to climb and it'll take a little while before we're really comfortable in environments like that. Functional safety, certified person detectors, all that good stuff, that's a really juicy unsolved field.

Spot can now open push-bar doors, which seems like an easier problem than doors with handles, which Spot learned to open a while ago. Why'd you start with door handles first?

Push-bar doors are an easier problem! But being engineers, we did the harder problem first, because we wanted to get it done.

 
