Current feed: IEEE Spectrum
On 4 September 2018, someone known only as Rabono bought an angry cartoon cat named Dragon for 600 ether—an amount of Ethereum cryptocurrency worth about US $170,000 at the time, or $745,000 at the cryptocurrency’s value in July 2022.
It was by far the highest transaction yet for a nonfungible token (NFT), the then-new concept of a unique digital asset. And it was a headline-grabbing opportunity for CryptoKitties, the world’s first blockchain gaming hit. But the sky-high transaction obscured a more difficult truth: CryptoKitties was dying, and it had been for some time.
Dragon was never resold—a strange fate for one of the most historically relevant NFTs ever. Newer NFTs such as “The Merge,” a piece of digital art that sold for the equivalent of $92 million, left Dragon behind as the NFT market surged to record sales, totaling roughly $18 billion in 2021. Has the world simply moved on to newer blockchain projects? Or is this the fate that awaits all NFTs?
To understand the slow death of CryptoKitties, you have to start at the beginning. Blockchain technology arguably began with a 1982 paper by the computer scientist David Chaum, but it reached mainstream attention with the success of Bitcoin, a cryptocurrency created by the anonymous person or persons known as Satoshi Nakamoto. At its core, a blockchain is a simple ledger of transactions placed one after another—not unlike a very long Excel spreadsheet.
The complexity comes in how blockchains keep the ledger stable and secure without a central authority; the details of how that’s done vary among blockchains. Bitcoin, though popular as an asset and useful for money-like transactions, has limited support for doing anything else. Newer alternatives, such as Ethereum, gained popularity because they allow for complex “smart contracts”—executable code stored in the blockchain.
“Before CryptoKitties, if you were to say ‘blockchain,’ everyone would have assumed you’re talking about cryptocurrency”—Bryce Bladon
CryptoKitties was among the first projects to harness smart contracts by attaching code to data constructs called tokens on the Ethereum blockchain. Each chunk of the game’s code (which it refers to as a “gene”) describes the attributes of a digital cat. Players buy, collect, sell, and even breed new felines. But unlike individual ether tokens or bitcoins, which are interchangeable, the token representing each cat is guaranteed by its code to be unique, which is where the nonfungible token, or NFT, comes in. A fungible good is, by definition, one that can be replaced by an identical item—one bitcoin is as good as any other bitcoin. An NFT, by contrast, has unique code that applies to no other NFT.
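The distinction can be made concrete with a toy example. The sketch below is not the actual CryptoKitties contract (which is written in Solidity on Ethereum); it is a minimal, hypothetical registry showing how a nonfungible token pairs a one-time-issued ID with the data it represents, so that two tokens remain distinct even when their underlying data match.

```python
# Minimal, hypothetical sketch of a nonfungible-token registry.
# Not the real CryptoKitties contract -- illustration only.

class NFTRegistry:
    """Toy ledger mapping unique token IDs to owners and genes."""

    def __init__(self):
        self._next_id = 0
        self.tokens = {}  # token_id -> {"owner": ..., "genes": ...}

    def mint(self, owner, genes):
        """Create a new token; each ID is issued exactly once."""
        token_id = self._next_id
        self._next_id += 1
        self.tokens[token_id] = {"owner": owner, "genes": genes}
        return token_id

    def transfer(self, token_id, new_owner):
        """Change ownership; the token itself never changes identity."""
        self.tokens[token_id]["owner"] = new_owner


registry = NFTRegistry()
dragon = registry.mint("founder", genes=0x4A52931CE4085C14)
clone = registry.mint("founder", genes=0x4A52931CE4085C14)

# Even with identical genes, the two tokens are distinct: nonfungible.
assert dragon != clone
```

One bitcoin swapped for another leaves you with the same thing; swapping `dragon` for `clone` does not, because identity lives in the token ID, not the data.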
There’s one final piece of the blockchain puzzle you need to understand: “gas.” Some blockchains, including Ethereum, charge a fee for the computational work the network must do to verify a transaction. This creates an obstacle to overworking the blockchain’s network. High demand means high fees, encouraging users to think twice before making a transaction. The resulting reduction in demand protects the network from being overloaded and transaction times from becoming excessively long. But it can be a weakness when an NFT game goes viral.
Launched on 28 November 2017 after a five-day closed beta, CryptoKitties skyrocketed in popularity on an alluring tagline: the world’s first Ethereum game.
“As soon as it launched, it pretty much immediately went viral,” says Bryce Bladon, a founding member of the team that created CryptoKitties. “That was an incredibly bewildering time.”
Sales volume surged from just 1,500 nonfungible felines on launch day to more than 52,000 on 10 December 2017, according to nonfungible.com, with many CryptoKitties selling for valuations in the hundreds or thousands of dollars. The value of the game’s algorithmically generated cats led to coverage in hundreds of publications.
Each CryptoKitty is a token, a set of data on the Ethereum blockchain. Unlike the cryptocurrencies Ethereum and Bitcoin, these tokens are nonfungible; that is, they are not interchangeable.
Unique ID: the unique ID makes a CryptoKitty a nonfungible token.
Mother’s ID, Father’s ID: the token contains the kitty’s lineage and other data.
Genes: the kitty’s genes determine its unique look.
What’s more, the game arguably drove the success of Ethereum, the blockchain used by the game. Ethereum took off like a rocket in tandem with the release of CryptoKitties, climbing from just under $300 per token at the beginning of November 2017 to just over $1,360 in January 2018.
Ethereum’s rise continued with the launch of dozens of new blockchain games based on the cryptocurrency through late 2017 and 2018. Ethermon, Ethercraft, Ether Goo, CryptoCountries, CryptoCelebrities, and CryptoCities are among the better-known examples. Some arrived within weeks of CryptoKitties.
This was the break fans of Ethereum were waiting for. Yet, in what would prove an ominous sign for the health of blockchain gaming, CryptoKitties stumbled as Ethereum dashed higher.
Daily sales peaked in early December 2017, then slid into January and, by March, averaged less than 3,000. The value of the NFTs themselves declined more slowly, a sign the game had a base of dedicated fans like Rabono, who bought Dragon well after the game’s peak. Their activity set records for the value of NFTs through 2018. This kept the game in the news but failed to lure new players.
Today, CryptoKitties is lucky to break 100 sales a day, and the total value is often less than $10,000. Large transactions, like the sale of Founder Cat #71 for 60 ether (roughly $170,000) on 30 April 2022, do still occur—but only once every few months. Most nonfungible fur-babies sell for tiny fractions of 1 ether, worth just tens of dollars in July 2022.
CryptoKitties’ plunge into obscurity is unlikely to reverse. Dapper Labs, which owns CryptoKitties, has moved on to projects such as NBA Top Shot, a platform that lets basketball fans purchase NFT “moments”—essentially video clips—from NBA games. Dapper Labs did not respond to requests for an interview about CryptoKitties. Bladon left Dapper in 2018.
One clue to the game’s demise can be found in the last post on the game’s blog (4 June 2021), which celebrates the breeding of the 2 millionth CryptoKitty. Breeding, a core mechanic of the game, lets owners pair their existing NFTs to create algorithmically generated offspring. This gave the NFTs inherent value in the game’s ecosystem. Each NFT was able to generate more NFTs, which players could then resell for profit. But this game mechanism also saturated the market. Liu Xiaofan, an assistant professor in the department of media and communication at City University of Hong Kong who coauthored a paper on CryptoKitties’ rise and fall, sees this as a flaw the game could never overcome.
“The price of a kitty depends first on rarity, and that depends on the gene side. And the second dimension is just how many kitties are on the market,” Liu says. “With more people came more kitties.”
More players meant more demand, but it also meant more opportunities to create supply through breeding new cats. This quickly diluted the rarity of each NFT.
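A toy simulation makes the dilution visible. The parameters below are hypothetical (a fixed daily breeding rate, a launch-day supply of 1,500 cats, matching the launch-day sales figure above); the point is only that when every cat can mint more cats, supply compounds and any one cat's share of the market shrinks.

```python
# Hypothetical simulation of breeding-driven supply growth.
# Parameters are illustrative, not measured CryptoKitties data.

def simulate_supply(days, initial_supply, breed_rate):
    """breed_rate: new cats bred per existing cat per day."""
    supply = initial_supply
    history = []
    for _ in range(days):
        supply += int(supply * breed_rate)  # every cat can produce offspring
        history.append(supply)
    return history

history = simulate_supply(days=30, initial_supply=1_500, breed_rate=0.05)

# One cat's share of total supply, day 1 vs. day 30:
dilution = history[-1] / 1_500
print(f"supply grew {dilution:.1f}x, so each cat's market share shrank {dilution:.1f}x")
```

Compounding at even a modest rate, the population roughly quadruples in a month; rarity, and with it price support, erodes accordingly.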
Bladon agrees with that assessment of the breeding mechanism. “I think the criticism is valid,” he says, explaining that it was meant to provide a sense of discovery and excitement. He also hoped it would encourage players to hold on to NFTs instead of immediately selling, as breeding, in theory, provided lasting value.
The sheer volume of CryptoKitties caused another, more immediate problem: It functionally broke the Ethereum blockchain, which is the world’s second most valuable cryptocurrency by market capitalization (after Bitcoin). As explained earlier, Ethereum uses a fee called gas to price the cost of transactions. Any spike in transactions—buying, siring, and so on—will cause a spike in gas fees, and that’s exactly what happened when CryptoKitties went to the moon.
“Anything that was emblematic of CryptoKitties’ success was aped. Anything that wasn’t immediately visible was mostly ignored.”—Bryce Bladon
“Players who wanted to buy CryptoKitties incurred high gas fees,” Mihai Vicol, market analyst at Newzoo, said in an interview. “Those gas fees were anywhere from $100 to $200 per transaction. You had to pay the price of the CryptoKitty, plus the gas fee. That’s a major issue.”
The high fees weren’t just a problem for CryptoKitties players. They were an issue for the entire blockchain: anyone who wanted to transact in Ethereum, for any reason, had to pay more for gas as the game became more successful.
This dynamic remains a problem for Ethereum today. On 30 April 2022, when Yuga Labs released Otherdeeds, NFTs that promise owners metaverse real estate, it launched Ethereum gas fees into the stratosphere. The average price of gas briefly exceeded the equivalent of $450, up from about $50 the day before.
Although CryptoKitties’ demands on the network subsided as players left, gas will likely be the final nail in the game’s coffin. The median price of a CryptoKitty in the past three months is about 0.04 ether, or $40 to $50, which is often less than the gas required to complete the transaction. Even those who want to casually own and breed inexpensive CryptoKitties for fun can’t do it without spending hundreds of dollars.
The rise and fall of CryptoKitties was dramatic but gave its successors—of which there are hundreds—a chance to learn from its mistakes and move past them. Many have failed to heed the lessons: Modern blockchain gaming hits such as Axie Infinity and BinaryX had a similar initial surge in price and activity followed by a long downward spiral.
“Anything that was emblematic of CryptoKitties’ success was aped. Anything that wasn’t immediately visible was mostly ignored,” says Bladon. And it turns out many of CryptoKitties’ difficulties weren’t visible to the public. “The thing is, the CryptoKitties project did stumble. We had a lot of outages. We had to deal with a lot of people who’d never used blockchain before. We had a bug that leaked tens of thousands of dollars of Ether.” Similar problems have plagued more recent NFT projects, often on a much larger scale.
Liu isn’t sure how blockchain games can curb this problem. “The short answer is, I don’t know,” he says. “The long answer is, it’s not just a problem of blockchain games.”
World of Warcraft, for example, has faced rampant inflation for most of the game’s life. This is caused by a constant influx of gold from players and the ever-increasing value of new items introduced by expansions. The continual need for new players and items is linked to another core problem of today’s blockchain games: They’re often too simple.
“I think the biggest problem blockchain games have right now is they’re not fun, and if they’re not fun, people don’t want to invest in the game itself,” says Newzoo’s Vicol. “Everyone who spends money wants to leave the game with more money than they spent.”
That perhaps unrealistic wish becomes impossible once the downward spiral begins. Players, feeling no other attachment to the game than growing an investment, quickly flee and don’t return.
Whereas some blockchain games have seemingly ignored the perils of CryptoKitties’ quick growth and long decline, others have learned from the strain it placed on the Ethereum network. Most blockchain games now use a sidechain, a blockchain that exists independently but connects to another, more prominent “parent” blockchain. The chains are connected by a bridge that facilitates the transfer of tokens between each chain. This prevents a rise in fees on the primary blockchain, as all game activity occurs on the sidechain.
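The bridge pattern described above is usually "lock and mint": tokens are locked on the parent chain, equivalent tokens are minted on the sidechain where cheap game transactions happen, and withdrawing reverses the process. The sketch below is a generic, simplified model of that pattern, not any specific bridge's implementation (real bridges involve validators and cryptographic proofs, which is exactly where the security problems discussed next arise).

```python
# Simplified lock-and-mint bridge between a parent chain and a sidechain.
# Generic pattern for illustration -- not any real bridge's code.

class Bridge:
    def __init__(self):
        self.locked_on_parent = {}  # owner -> amount locked on parent chain
        self.minted_on_side = {}    # owner -> amount usable on sidechain

    def deposit(self, owner, amount):
        """Lock tokens on the parent chain, mint equivalents on the sidechain."""
        self.locked_on_parent[owner] = self.locked_on_parent.get(owner, 0) + amount
        self.minted_on_side[owner] = self.minted_on_side.get(owner, 0) + amount

    def withdraw(self, owner, amount):
        """Burn sidechain tokens, release the locked originals."""
        if self.minted_on_side.get(owner, 0) < amount:
            raise ValueError("insufficient sidechain balance")
        self.minted_on_side[owner] -= amount
        self.locked_on_parent[owner] -= amount


bridge = Bridge()
bridge.deposit("player", 10)   # game activity now happens on the sidechain
bridge.withdraw("player", 4)   # cash out part of it back to the parent chain
print(bridge.locked_on_parent["player"], bridge.minted_on_side["player"])  # prints "6 6"
```

The invariant that locked and minted balances stay equal is what an attacker breaks when a bridge is exploited: mint sidechain tokens without a matching lock, and the locked reserve can be drained.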
Yet even this new strategy comes with problems, because sidechains are proving to be less secure than the parent blockchain. An attack on Ronin, the sidechain used by Axie Infinity, let the hackers get away with the equivalent of $600 million. Polygon, another sidechain often used by blockchain games, had to patch an exploit that put $850 million at risk and pay a bug bounty of $2 million to the hacker who spotted the issue. Players who own NFTs on a sidechain are now warily eyeing its security.
The cryptocurrency wallet that owns the near-million-dollar kitten Dragon now holds barely $30 worth of ether and hasn’t traded in NFTs for years. Wallets are anonymous, so it’s possible the person behind the wallet moved on to another one. Still, it’s hard not to see the wallet’s inactivity as a sign that, for Rabono, the fun didn’t last.
Whether blockchain games and NFTs shoot to the moon or fall to zero, Bladon remains proud of what CryptoKitties accomplished and hopeful it nudged the blockchain industry in a more approachable direction.
“Before CryptoKitties, if you were to say ‘blockchain,’ everyone would have assumed you’re talking about cryptocurrency,” says Bladon. “What I’m proudest of is that it was something genuinely novel. There was real technical innovation, and seemingly, a real cultural impact.”
After having cardiovascular surgery, many patients require a temporary pacemaker to help stabilize their heart rate. The device consists of a pulse generator, one or more insulated wires, and an electrode at the end of each wire.
The pulse generator—a metal case that contains electronic circuitry with a small computer and a battery—regulates the impulses sent to the heart. The wire is connected to the pulse generator on one end while the electrode is placed inside one of the heart’s chambers.
But there are several issues with temporary pacemakers: The generator limits the patient’s mobility, and the wires must be surgically removed, which can cause complications such as infection, dislodgment, torn or damaged tissues, bleeding, and blood clots.
Researchers have found that a dissolving pacemaker can make a real difference for patients. Last year a team of scientists at Northwestern University, in Evanston, Ill., developed such a device, which will allow patients to live without being tethered to external hardware. The dissolving pacemaker is 250 micrometers thick, weighs less than half a gram, and dissolves in the patient’s body, eliminating the need for surgical removal.
In May, the researchers introduced an upgraded, smart version of the pacemaker. It connects to a network of soft, flexible wearable sensors developed by the team. These sensors monitor various physiological functions to help determine when to pace the heart and at what rate. The pacing system is completely autonomous.
The device also releases an anti-inflammatory drug while it dissolves.
“When you implant any kind of foreign hardware into the human body, cells attack that object,” explains Igor Efimov, a member of the development team and a professor of biomedical engineering at the university. He says that releasing the anti-inflammatory drug will stop the body from rejecting the pacemaker.
The device isn’t available for human use yet, but Efimov expects it to be available to patients in less than five years.
The pacemaker’s outer layer is made of a bioresorbable polyurethane. The material is thin, soft, and flexible, which were important factors to consider during development because the device is placed on the surface of a patient’s heart, says John Rogers, who led the project. Rogers, an IEEE Fellow, is a professor of materials science and engineering, biomedical engineering, and neurological surgery at Northwestern.
Encased in the bioresorbable polyurethane are the electronics. On one side there is a receiver antenna and on the other side there is a transmission coil. They are connected via a serpentine radio-frequency diode.
A small wireless patch is attached to the patient’s chest, directly above the pacemaker. Electrodes are located on the side of the patch that is in direct contact with the person’s skin. Inside the patch there is a small battery and a transmission coil. The electrodes record the patient’s electrocardiogram (ECG) and help regulate the impulses sent to the heart.
The transmission coils in the patch and the pacemaker couple wirelessly. Driven by the patch’s battery, its coil produces an oscillating magnetic field that induces a current in the pacemaker’s coil, powering the device and controlling the heart’s pacing rate.
If the pacemaker detects that the patient’s heart rate drops below a certain rate, the device activates the transmission coil in the patch. This powers the pacemaker, allowing it to bring the patient’s heart rate back up to a healthy level.
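The demand-pacing logic just described can be sketched as a simple threshold rule. The rate limit below is an assumed, illustrative value; the real device's firmware and clinical parameters are not public, and pacing decisions there also draw on the wearable sensor network described later.

```python
# Schematic demand-pacing rule: energize the patch's transmission coil
# only while the measured heart rate is below a lower limit.
# LOWER_RATE_LIMIT_BPM is an assumed value for illustration.

LOWER_RATE_LIMIT_BPM = 60

def should_pace(heart_rate_bpm):
    """True when the patch should power the pacemaker to pace the heart."""
    return heart_rate_bpm < LOWER_RATE_LIMIT_BPM

readings = [72, 58, 45, 64]  # hypothetical ECG-derived rates, in bpm
print([should_pace(r) for r in readings])  # [False, True, True, False]
```

Pacing only on demand matters for the reason Efimov gives later: stimulating a heart that is already beating normally risks inducing arrhythmia.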
A small wireless patch attached to the patient’s chest records an ECG, which is then sent to a mobile app and available for the patient’s doctor to monitor. Image: Northwestern University
The receiver antenna inside the pacemaker collects data on the patient’s heart rate and sends it to a mobile app using Near Field Communication protocols—the same technology used in smartphones and RFID tags. The data is monitored by the patient’s doctor.
The process of dissolution starts immediately after the device is implanted. The bioresorbable polyurethane is the first material to dissolve.
Each device will be designed and made for individual patients. If a surgeon needs a patient’s pacemaker to last for two weeks, the team will design the device so it maintains its integrity for the required time period, Rogers says.
While it is possible to measure how fast a patient’s heart is beating solely based on ECG results, Rogers says, outside factors such as physical activity, the patient’s blood type, and their resting heart rate can affect the heart. So Rogers and his team developed a network of sensors that monitor the patient’s body temperature, oxygen levels, respiration, physical activity, muscle tone, and the heart’s electrical activity.
There are currently three units: one each for the chest, the forehead, and the neck. The pacemaker’s system automatically analyzes the data collected by these sensors. If it detects abnormal cardiac rhythms, the pacemaker adjusts its pacing rate accordingly.
“If normal activity is regained, then it stops pacing,” Efimov explains. “This is important because if you stimulate the heart when it’s unnecessary, then you risk inducing arrhythmia.”
Editor’s note: Last week, Amazon announced that it was acquiring iRobot for $1.7 billion, prompting questions about how iRobot’s camera-equipped robot vacuums will protect the data that they collect about your home. In September of 2017, we spoke with iRobot CEO Colin Angle about iRobot’s approach to data privacy, directly addressing many similar concerns. “The views expressed in the Q&A from 2017 remain true,” iRobot told us. “Over the past several years, iRobot has continued to do more to strengthen, and clearly define, its stance on privacy and security. It’s important to note that iRobot takes product security and customer privacy very seriously. We know our customers invite us into their most personal spaces—their homes—because they trust that our products will help them do more. We take that trust seriously.”
The article from 7 September 2017 follows:
About a month ago, iRobot CEO Colin Angle mentioned something about sharing Roomba mapping data in an interview with Reuters. It got turned into a data privacy kerfuffle in a way that iRobot did not intend and (probably) did not deserve, as evidenced by their immediate clarification that iRobot will not sell your data or share it without your consent.
Data privacy is important, of course, especially for devices that live in your home with you. But as robots get more capable, the amount of data that they collect will increase, and sharing that data in a useful, thoughtful, and considerate way could make smart homes way smarter. To understand how iRobot is going to make this happen, we spoke with Angle about keeping your data safe, integrating robots with the future smart home, and robots that can get you a beer.
Were you expecting such a strong reaction on data privacy when you spoke with Reuters?
Colin Angle: We were all a little surprised, but it gave us an opportunity to talk a little more explicitly about our plans on that front. In order for your house to work smarter, the house needs to understand itself. If you want to be able to say, “Turn on the lights in the kitchen,” then the home needs to be able to understand what the kitchen is, and what lights are in the kitchen. And if you want that to work with a third-party device, you need a trusted, customer-in-control mechanism to allow that to happen. So, it’s not about selling data, it’s about usefully linking together different devices to make your home actually smart. The interesting part is that the limiting factor in making your home intelligent isn’t AI, it’s context. And that’s what I was talking about to Reuters.
What kinds of data can my Roomba 980 collect about my home?
Angle: The robot uses its sensors [including its camera] to understand where it is and create visual landmarks, things that are visually distinctive that it can recognize again. As the robot explores the home as a vacuum, it knows where it is relative to where it started, and it creates a 2D map of the home. None of the images ever leave the robot; the only map information that leaves the robot would be if the customer says, “I would like to see where the robot went,” and then the map is processed into a prettier form and sent up to the cloud and to your phone. If you don’t want to see it, it stays on the robot and never leaves the robot.
Do you think that there’s a perception that these maps contain much more private information about our homes than they really do?
Angle: I think that if you look at [the map], you know exactly what it is. In the future, we’d like it to have more detail, so that you could give more sophisticated commands to the robot, from “Could you vacuum my kitchen?” in which case the robot needs to know where the kitchen is, to [in the future], “Go to the kitchen and get me a beer.” In that case, the robot needs to know where the kitchen is, where the refrigerator is, what a beer is, and how to grab it. We’re at a very benign point right now, and we’re trying to establish a foundation of trust with our customers about how they have control over their data. Over time, when we want our homes to be smarter, you’ll be able to allow your robot to better understand your home, so it can do things that you would like it to do, in a trusted fashion.
“Robots are viewed as creatures in the home. That’s both exciting and a little scary at the same time, because people anthropomorphize and attribute much more intelligence to them than they do to a smart speaker.”
Fundamentally, would the type of information that this sort of robot would be sharing with third parties be any more invasive than an Amazon Echo or Google Home?
Angle: Robots have this inherent explosive bit of interest, because they’re viewed as creatures in your home. That’s both exciting and a little scary at the same time, because people anthropomorphize and attribute much more intelligence to them than they do to a smart speaker. The amount of information that one of these robots collects is, in many ways, much less, but because it moves, it really captures people’s imagination.
Why do you think people seem to be more concerned about the idea of robots sharing their data?
Angle: I think it’s the idea that you’d have a “robot spy” in your home. Your home is your sanctuary, and people rightfully want their privacy. If we have something gathering data in their home, we’re beyond the point where a company can exploit their customers by stealthily gathering data and selling it to other people. The things you buy and place in your home are there to benefit you, not some third party. That was the fear that was unleashed by this idea of gathering and selling data unbeknownst to the customer. At iRobot, we’ve said, “Look, we’re not going to do this, we’re not going to sell your data.” We don’t even remember your map unless you tell us we can. Our very explicit strategy is building this trusted relationship with our customers, so that they feel good about the benefits that Roomba has.
How could robots like Roomba eventually come to understand more about our homes to enable more sophisticated functionality?
Angle: We’re in the land of R&D here, not Roomba products, but certainly there exists object-recognition technology that can determine what a refrigerator is, what a television is, what a table is. It would be pretty straightforward to say, if the room contains a refrigerator and an oven, it’s probably the kitchen. If a room has a bed, it’s probably a bedroom. You’ll never be 100 percent right, but rooms have purposes, and we’re certainly on a path where just by observing, a robot could identify a room.
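The inference Angle describes is essentially rule-based classification over recognized objects. The sketch below is hypothetical (the rules and room signatures are illustrative, not iRobot's): it scores each room type by how many of its signature objects were detected, which also captures his caveat that "you'll never be 100 percent right."

```python
# Illustrative rule-based room inference from recognized objects.
# Room signatures below are assumptions for the sketch, not iRobot's rules.

ROOM_SIGNATURES = {
    "kitchen": {"refrigerator", "oven"},
    "bedroom": {"bed"},
    "living room": {"television", "sofa"},
}

def guess_room(detected_objects):
    """Return the room type whose signature best overlaps the detections."""
    best_room, best_score = "unknown", 0
    for room, signature in ROOM_SIGNATURES.items():
        score = len(signature & detected_objects)
        if score > best_score:
            best_room, best_score = room, score
    return best_room

print(guess_room({"refrigerator", "oven", "table"}))  # kitchen
print(guess_room({"bed", "lamp"}))                    # bedroom
print(guess_room({"plant"}))                          # unknown
```

A production system would weight object confidences and priors rather than count matches, but the structure, observation in, room label out, is the same.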
What else do you think a robot like a Roomba could ultimately understand about your home?
Angle: We’re working on a number of things, some of which we’re happy to talk about and some of which less so at this point in time. But, why should your thermostat be on the wall? Why is one convenient place on the wall of one room the right place to measure temperature from, as opposed to where you like to sit? When you get into home control, your sensor location can be critically tied to the operation of your home. The opportunity to have the robot carry sensors with it around the home would allow the expansion from a point reading to a 2D or 3D map of those readings. As a result, the customer has a lot more control over [for example] how loud the stereo system is at a specific point, or what the temperature is at a specific point. You could also detect anomalies in the home if things are not working the way the customer would like them to work. Those are some simple examples of why moving a sensor around would matter.
“There’s a pretty sizeable market for webcams in the home. People are interested in security and intruder detection, and also in how their pets are doing. But invariably, what you want to see is not what your camera is pointing at. That’s something where a robot makes a lot more sense.”
Another good example would be, there’s actually a pretty sizeable market for webcams in the home. People are interested in security and intruder detection, and also in how their pets are doing. But invariably, what you want to see is not what your camera is currently pointing at. Some people fill up their homes with cameras, or you put a camera on a robot, and it moves to where you want to look. That’s something where a robot makes a lot more sense, and it’s interesting, if I want to have privacy in our home and yet still have a camera I can use, it’s actually a great idea to put one on a robot, because when the robot isn’t in the room with you, it can’t see you. So, the metaphor is a lot closer to if you had a butler in your home: when they’re not around, you can have your privacy back. This is a metaphor that I think works really well as we try to architect smart homes that are both aware of themselves, and yet afford privacy.
So a mobile robot equipped with sensors and mapping technology to be able to understand your home in this way could act like a smart home manager?
Angle: A spatial information organizer. There’s a hub with a chunk of software that controls everything, and that’s not necessarily the same thing as what the robot would do. What Apple and Amazon and various smart home companies are doing are trying to build hubs where everything connects to them, but in order for these hubs to be actually smart, they need what I call spatial content: They need to understand what’s a room, and what’s in a room for the entire home. Ultimately, the home itself is turning into a robot, and if the robot’s not aware of itself, it can’t do the right things.
“A robot needs to understand what’s a room, and what’s in a room, for the entire home. Ultimately, the home itself is turning into a robot, and if the robot’s not aware of itself, it can’t do the right things.”
So, if you wanted to walk into a room and have the lights turn on and the heat come up, and if you started watching television and then left the room and wanted the television to turn off in the room you’d left and turn on in the room you’d gone to, all of those types of experiences where the home is seamlessly reacting to you require an understanding of rooms and what’s in each room. You can brute force that with lots of cameras and custom programming for your home, but I don’t believe that installations like this can be successful or scale. The solution where you own a Roomba anyway, and it just gives you all this information enabling your home to be smart, that’s an exciting vision of how we’re actually going to get smart homes.
What’s your current feeling about mobile manipulators in the home?
Angle: We’re getting there. In order for manipulation in the home to make sense, you need to know where you are, right? What’s the point of being able to get something if you don’t know where it is? So this idea that we need these maps that have information embedded in them about where stuff is and the ability to actually segment objects—there’s a hierarchy of understanding. You need to know where a room is as a first step. You need to identify where objects are—that’s recognition of larger objects. Then you need to be able to open a door, say, and now you’re processing larger objects to find handles that you can reach out and grab. The ability to do all of these things exists in research labs, and to an increasing degree in manufacturing facilities. We’re past the land of invention and proof of principle, and into the land of, could we reduce this to a consumer price point that would make sense to people in the home. We’re well on the way. We will definitely see this kind of robot in, I would say, five to 10 years; we’ll have robots that can go and get you a beer. I don’t think it’s going to be a lot shorter than that, because we have a few steps to go, but it’s less invention and more engineering.
We should note that we spoke with Angle just before Neato announced their new D7 robot vacuum, which adds persistent, actionable maps, arguably the first step towards a better understanding of the home. Since Roombas already have the ability to make a similar sort of map, based on the sorts of things that Angle spoke about in our interview we’re expecting to see iRobot add a substantial amount of intelligence and functionality to the Roomba in the very near future.
Not long ago, a 300-mile range seemed like a healthy target for electric cars. More recently, the 520-mile (837-kilometer) Lucid Air became the world’s longest-range EV. But that record may not stand for long.
The Mercedes-Benz Vision EQXX, and its showroom-bound tech, looks to banish range anxiety for good: In April, the sleek prototype sedan completed a 621-mile (1,000-km) trek through the Alps from Mercedes’s Sindelfingen facility to the Côte d’Azur in Cassis, France, with battery juice to spare. It built on that feat in late May, when the prototype covered a world-beating, bladder-busting 747 miles (1,202 km) in a run from Germany to the Formula One circuit in Silverstone, England.
This wasn’t your usual long-distance, college-engineering project, a single-seat death trap made from Kleenex and balsa wood, with no amenities or hope of being certified for use on public roads. Despite modest power, a futuristic teardrop shape, and next-gen tech, the EQXX—developed in just 18 months—is otherwise a familiar, small Mercedes luxury sedan. That includes a dramatic sci-fi display and human-machine interface that spans the full dashboard. To underline real-world intent, Mercedes vows that the EQXX’s power train will reach showrooms by 2024. An initial showroom model, and surely more to come, will be built on the company’s new Mercedes Modular Architecture platform, designed for smaller “entry-luxury” models such as the A-Class and the CLA Coupe. While Mercedes was refining its one-off tech showpiece, it even used a current EQB model as a test mule for the power train.
“The car is an R&D project, but we’re feeding it into the development of our next compact car platform,” says Conrad Sagert, an engineer at Mercedes who is developing electric drive systems.
The engineering team included specialists from the Mercedes-EQ Formula E team, drawing from their well of electric racing experience. The rear-drive Vision EQXX is powered by a single radial-flux electric motor—developed entirely in-house—fed by a battery pack with just under 100 kilowatt-hours of usable energy. Inside, environmentally conscious materials include trim panels sourced from cacti, mushroom-based seat inserts, and bamboo-fiber shag floor mats, all previewing potential use in showroom cars. One thing that won’t reach production by 2024 is the EQXX’s high-silicon battery anode, which Sagert says is closer to four years from showrooms. Such silicon-rich anodes, which can squeeze more range from batteries, are widely expected to be popularized over the next decade.
A 241-horsepower output delivers a reasonable 7-second trip from 0 to 60 miles per hour. But from a feathery (for an electric vehicle) 3,900-pound curb weight to wind-cheating aerodynamics, the carbon-fiber-bodied EQXX is designed for pure efficiency, not winning stoplight races. The Benz sipped electrons at 8.7 miles per kilowatt-hour on its Côte d'Azur run, nearly double the roughly 4.5 miles per kilowatt-hour of the Lucid (the current high among production EVs), and at 7.5 miles per kilowatt-hour on the trip to the United Kingdom. If that electric math still seems esoteric, the England-bound Benz delivered the equivalent of 262 miles per gallon, nearly double the 141 mpg of the industry-leading Tesla Model 3 Standard Range.
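The conversion behind those miles-per-gallon-equivalent figures is straightforward: the U.S. EPA treats one gallon of gasoline as 33.7 kWh of electrical energy. Here is a quick sketch of that math; the article's published MPGe figures come from the EQXX's actual measured consumption, so they differ slightly from this rounded arithmetic.

```python
# EPA equivalence: 1 gallon of gasoline ~ 33.7 kWh of electrical energy
KWH_PER_GALLON = 33.7

def mpge(miles_per_kwh: float) -> float:
    """Convert EV efficiency in mi/kWh to miles-per-gallon equivalent."""
    return miles_per_kwh * KWH_PER_GALLON

# Rounded figures from the article's two record runs
print(f"Cote d'Azur run (8.7 mi/kWh): {mpge(8.7):.0f} MPGe")   # ~293 MPGe
print(f"Silverstone run (7.5 mi/kWh): {mpge(7.5):.0f} MPGe")   # ~253 MPGe
```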
A roof panel with 117 solar cells lessens the burden by powering a conventional 12-volt system to run accessories, including lighting, an audio system, and the display screens worthy of Minority Report. On the cloudy April trip to southern France, with plenty of tunnel passages, the panels saved 13 km of range. On the sunnier May drive to the U.K., the solar roof saved 43 km of range.
The Vision EQXX’s roof panel has 117 solar cells. Photo: Mercedes-Benz
Aerodynamics naturally play an essential role, including a tiny frontal area and a dramatic Kamm tail whose active rear diffuser extends nearly 8 inches at speeds above 23 mph. The sidewalls of specially designed Bridgestone tires sit flush with the body and 20-inch magnesium wheels, aiding a claimed drag coefficient of 0.17, lower than that of any current production car. Surprisingly for such a slippery design, the EQXX features traditional yet aerodynamic exterior mirrors: Mercedes says the camera-based “mirrors” used on many concept cars drew too much electricity to generate a tangible benefit.
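To see what a drag coefficient of 0.17 buys, consider the standard aerodynamic drag-power formula, P = ½ρC<sub>d</sub>Av³. The frontal area below is an assumed placeholder, not an official Mercedes figure, so treat this as an order-of-magnitude sketch rather than the EQXX's real numbers:

```python
AIR_DENSITY = 1.2  # kg/m^3, near sea level

def drag_power_watts(cd: float, frontal_area_m2: float, speed_mps: float) -> float:
    """Aerodynamic drag power: P = 0.5 * rho * Cd * A * v^3."""
    return 0.5 * AIR_DENSITY * cd * frontal_area_m2 * speed_mps ** 3

AREA = 2.1    # m^2 -- assumed frontal area for illustration only
SPEED = 25.0  # m/s, roughly 90 km/h highway cruising

eqxx = drag_power_watts(0.17, AREA, SPEED)
typical = drag_power_watts(0.28, AREA, SPEED)  # Cd of a typical modern sedan
print(f"Cd 0.17: {eqxx/1000:.1f} kW vs. Cd 0.28: {typical/1000:.1f} kW")
```

Even at this rough level, cutting Cd from 0.28 to 0.17 trims roughly 40 percent off the power spent pushing air aside at cruising speed.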
Defying today’s EV norms, the battery and motor are entirely air cooled. Eliminating liquid-cooling circuits, pumps, and fluids set off a spiral of savings in weight and packaging. To cool the battery, a smoothly shaped underbody acts as a heat sink. The design reversed the usual engineering challenge in EVs and internal combustion engine cars alike: The problem was getting heat into the system to bring battery and motor to optimal operating temperature. Active front shutters can open to boost airflow when necessary.
“We don’t get enough waste heat, so we had to insulate the electric motor. It’s still about heat management, but the other way around,” Sagert says.
Add it up and the EQXX transfers a claimed 95 percent of electric energy into forward motion, up from 90 percent for Mercedes’s current models such as the EQS. If that doesn’t sound like much gain to nonengineers, Sagert puts it another way: The EQXX reduces typical EV energy losses by 50 percent.
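Sagert's two framings are the same arithmetic: going from 90 to 95 percent efficiency halves the losses, since what's wasted falls from 10 percent of the energy to 5 percent.

```python
def loss_reduction(eff_old: float, eff_new: float) -> float:
    """Fraction of energy losses eliminated when drivetrain efficiency improves."""
    return ((1 - eff_old) - (1 - eff_new)) / (1 - eff_old)

# 90% -> 95% efficient: losses fall from 10% to 5%, i.e., by half
print(f"{loss_reduction(0.90, 0.95):.0%}")  # 50%
```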
“We’re always hoping for this magical thing, but it’s really the sum of the details,” Sagert says.
That obsession with tiny details paid off. Based on computer and dynamometer simulations, engineers saw a 1,000-km run as a challenging target, and plotted a Mediterranean road trip to Cassis, France. Instead, the car blew away those conservative projections. Pulling into Cassis, the EQXX had 140 km of remaining range.
“We thought about waving and just driving on, but we weren’t allowed,” Sagert says, not least because Mercedes board member and chief technology officer Markus Schäfer was waiting to greet them. Mercedes then set its sights higher, and chose Silverstone and its Formula One track, ideal for a team meetup.
“We started thinking, can we do a longer run?” Sagert says. “We always wished to visit our colleagues in Formula E, who did so much for the project. But again we thought, ‘This will be really tough.’ ”
To make the runs legit, Mercedes was determined to drive at real-world speeds and conditions, not “hypermile” its way to some illusory record. The car averaged 83 kilometers per hour on its U.K. run, and 87 km/h to Cassis. Test drivers even ran the air conditioning for 8 hours of the two-day, 14-hour-and-30-minute trip to Silverstone, and encountered an autobahn road closure and snarled traffic around London.
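The claimed pace checks out against the trip data: 1,202 km covered in 14 and a half hours of driving works out to the stated average.

```python
distance_km = 1202   # Germany to Silverstone
hours = 14.5         # 14 hours, 30 minutes

avg_kmh = distance_km / hours
print(f"Average speed: {avg_kmh:.0f} km/h")  # 83 km/h
```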
The sleek sedan capped off the record-breaking trek with an energy-guzzling flourish: Despite some misgivings, the team handed their precious prototype to a Formula E team driver, Nyck de Vries. The Type-A racer forgot all about efficiency and pushed the car to its limits on the Silverstone F1 circuit, watched by nervous engineers. Where long-distance drivers had relied almost exclusively on regenerative braking (with four adjustable levels) during their runs, de Vries got to test the car’s novel aluminum brake rotors. Those ultralight rotors are possible because the Benz so rarely needs to use its foot-operated mechanical brakes, as telemetry readings from the track showed.
“In three laps, de Vries burned more energy using the mechanical brakes than we did on two entire runs” through Europe, Sagert says. “But it was a good feeling, that this wasn’t some show car, and that you could give it to a race driver and not have it fall apart.”
Some of this prototype tech won’t be feasible on coming production models—a carbon-fiber body, for one, is the stuff of supercars, not small-and-affordable Mercedes. Still, the EQXX offers a tantalizing taste of what’s to come, including all-day range to savor.
“This range anxiety is not a problem anymore,” Sagert says. “If your range isn’t enough today, wait two years, and the step will be big.”
This morning, Amazon and iRobot announced “a definitive merger agreement under which Amazon will acquire iRobot” for US $1.7 billion. The announcement was a surprise, to put it mildly, and we’ve barely had a chance to digest the news. But taking a look at what’s already known can still yield initial (if incomplete) answers as to why Amazon and iRobot want to team up—and whether the merger seems like a good idea.
The press release, like most press releases about acquisitions of this nature, doesn’t include much in the way of detail. But here are some quotes:
“We know that saving time matters, and chores take precious time that can be better spent doing something that customers love,” said Dave Limp, SVP of Amazon Devices. “Over many years, the iRobot team has proven its ability to reinvent how people clean with products that are incredibly practical and inventive—from cleaning when and where customers want while avoiding common obstacles in the home, to automatically emptying the collection bin. Customers love iRobot products—and I'm excited to work with the iRobot team to invent in ways that make customers' lives easier and more enjoyable.”
“Since we started iRobot, our team has been on a mission to create innovative, practical products that make customers' lives easier, leading to inventions like the Roomba and iRobot OS,” said Colin Angle, chairman and CEO of iRobot. “Amazon shares our passion for building thoughtful innovations that empower people to do more at home, and I cannot think of a better place for our team to continue our mission. I’m hugely excited to be a part of Amazon and to see what we can build together for customers in the years ahead.”
There’s not much to go on here, and iRobot has already referred us to Amazon PR, which, to be honest, feels like a bit of a punch in the gut. I love (loved?) so many things about iRobot—their quirky early history working on weird DARPA projects and even weirder toys, everything they accomplished with the PackBot (and also this), and most of all, the fact that they’ve made a successful company building useful and affordable robots for the home, which is just…it’s so hard to do that I don’t even know where to start. And nobody knows what’s going to happen to iRobot going forward. I’m sure iRobot and Amazon have all kinds of plans and promises and whatnot, but still—I’m now nervous about iRobot’s future.
Why this is a good move for Amazon is clear, but what exactly is in it for iRobot?
It seems fairly obvious why Amazon wanted to get its hands on iRobot. Amazon has been working for years to integrate itself into homes, first with audio systems (Alexa), and then video (Ring), and more recently some questionable home robots of its own, like its indoor security drone and Astro. Amazon clearly needs some help in understanding how to make home robots useful, and iRobot can likely provide some guidance, with its extraordinarily qualified team of highly experienced engineers. And needless to say, iRobot is already well established in a huge number of homes, with brand recognition comparable to something like Velcro or Xerox, in the sense that people don’t have “robot vacuums,” they have Roombas.
All those Roombas in all of those homes are also collecting a crazy amount of data for iRobot. iRobot itself has been reasonably privacy-sensitive about this, but it would be naïve not to assume that Amazon sees a lot of potential for learning much, much more about what goes on in our living rooms. This is more concerning, because Amazon has its own ideas about data privacy, and it’s unclear what this will mean for increasingly camera-reliant Roombas going forward.
I get why this is a good move for Amazon, but I must admit that I’m still trying to figure out what exactly is in it for iRobot, besides of course that “$61 per share in an all-cash transaction valued at approximately $1.7 billion.” Which, to be fair, seems like a heck of a lot of money. Usually when these kinds of mergers happen (and I’m thinking back to Google acquiring all those robotics companies in 2013), the hypothetical appeal for the robotics company is that suddenly they have a bunch more resources to spend on exciting new projects along with a big support structure to help them succeed.
It’s true that iRobot has apparently had some trouble finding ways to innovate and grow, with its biggest potential new consumer product (the Terra lawn mower) having been on pause since 2020. It could be that the big pile of cash, plus not having to worry so much about growth as a publicly traded company, plus some new Amazon-ish projects to work on, is reason enough for this acquisition.
My worry, though, is that iRobot is just going to get completely swallowed into Amazon and effectively cease to exist in a meaningful and unique way, as has happened to plenty of acquired robotics companies. There is some precedent for survival, though: Boston Dynamics, for example, has come through multiple acquisitions with its technology and philosophy more or less independent and intact. I hope the relationship between Amazon and iRobot turns out the same way, but it will be on iRobot to act very aggressively to preserve itself, and keeping Colin Angle as CEO is a good start.
We’ll be trying to track down more folks to talk to about this over the coming weeks for a more nuanced and in-depth perspective. In the meantime, make sure to give your Roomba a hug—it’s been quite a day for little round robot vacuums.
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We also post a weekly calendar of upcoming robotics events for the next few months. Please send us your events for inclusion.
Enjoy today's videos!
This probably counts as hard mode for Ikea chair assembly.
[ Naver Lab ]
As anyone working with robotics knows, it’s mandatory to spend at least 10 percent of your time just mucking about with them because it’s fun, as GITAI illustrates with its new 10-meter robotic arm.
[ GITAI ]
Well, this is probably the weirdest example of domain randomization in simulation for quadrupeds that I’ve ever seen.
[ RSL ]
RoboCup 2022 was held in Bangkok, Thailand. The final match was between B-Human from Bremen (in black jerseys) and HTWK Robots from Leipzig (in blue). The video starts with one of our defending robots entering a duel with an opponent. After a short time it passes to another robot, which tries to score, but the opposing goalie catches the ball. Another of our attackers is already waiting at the center circle to take its own chance at a goal, past all four opponent robots.
[ Team B-Human ]
The mission to return Martian samples back to Earth will see a European 2.5-meter-long robotic arm pick up tubes filled with precious soil from Mars and transfer them to a rocket for a historic interplanetary delivery.
[ ESA ]
I still cannot believe that this is an approach to robotic fruit-picking that actually works.
[ Tevel Aerobotics ]
This video shows the basic performance of the humanoid robot Torobo, which is used as a research platform for JST’s Moonshot R&D program.
[ Tokyo Robotics ]
Volocopter illustrates why I always carry two violins with me everywhere. You know, just in case.
[ Volocopter ]
We address the problem of enabling quadrupedal robots to perform precise shooting skills in the real world using reinforcement learning. Developing algorithms to enable a legged robot to shoot a soccer ball to a given target is a challenging problem that combines robot motion control and planning into one task.
[ Hybrid Robotics ]
I will always love watching Cassie try very, very hard to not fall over, and then fall over. <3
I don’t think this paper is about teaching bipeds to walk with attitude, but it should be.
[ DLG ]
Modboats are capable of collective swimming in arbitrary configurations! In this video you can see three different configurations of the Modboats swim across our test space and demonstrate their capabilities.
[ ModLab ]
How have we built our autonomous driving technology to navigate the world safely? It comes down to three easy steps: Sense, Solve, and Go. Using a combination of lidar, camera, radar, and compute, the Waymo Driver can visualize the world, calculate what others may do, and proceed smoothly and safely, day and night.
[ Waymo ]
Alan Alda discusses evolutionary robotics with Hod Lipson and Jordan Pollack on Scientific American Frontiers in 1999.
Brady Watkins gives us insight into how a big company like Softbank Robotics looks into the robotics market.
[ Robohub ]
On 29 September 2020, a masked man entered a branch of the Wells Fargo bank in Washington, D.C., and handed the teller a note: “This is a robbery. Act calm give me all hundreds.” The teller complied. The man then fled the bank and jumped into a gray Tesla Model S. This was one of three bank robberies the man attempted the same day.
When FBI agents began investigating, they reviewed Washington, D.C.’s District Department of Transportation camera footage, and spotted a Tesla matching the getaway vehicle’s description. The license plate on that car showed that it was registered to Exelorate Enterprises LLC, the parent company of Steer EV—a D.C.-based monthly vehicle-subscription service.
Agents served a subpoena on Steer EV for the renter’s billing and contact details. Steer EV provided those—and also voluntarily supplied historical GPS data for the vehicle. The data showed the car driving between, and parking at, each bank at the time of the heists. The renter was arrested and, in September, sentenced to four years in prison.
In this case, the GPS data likely came from a device Steer EV itself installed in the vehicle (neither Steer nor Tesla responded to interview requests). However, according to researchers, Tesla is potentially in a position to provide similar GPS tracks for many of its 3 million customers.
For Teslas built since mid-2017, “every time you drive, it records the whole track of where you drive, the GPS coordinates and certain other metrics for every mile driven,” says Green, a Tesla owner who has reverse engineered the company’s Autopilot data collection. “They say that they are anonymizing the trigger results,” but, he says, “you could probably match everything to a single person if you wanted to.”
Each of these trip logs, and other data “snapshots” captured by the Autopilot system that include images and video, is stripped of its identifying VIN and given a temporary, random ID number when it is uploaded to Tesla, says Green. However, he notes, that temporary ID can persist for days or weeks, connecting all the uploads made during that time.
Elon Musk, CEO of Tesla Motors. Photo: Mark Mahaney/Redux
Given that some trip logs will also likely record journeys between a driver’s home, school, or place of work, guaranteeing complete anonymity is unrealistic, says John Verdi, senior vice president of policy at the Future of Privacy Forum: “If an entity is collecting, retaining, [and] sharing historical location data on an individualized level, it’s extraordinarily difficult to de-identify that, verging on impossible.”
Tesla, like all other automakers, has a policy that spells out what it can and cannot do with the data it gets from customers’ vehicles, including location information. This states that while the company does not sell customer and vehicle data, it can share that data with service providers, business partners, affiliates, some authorized third parties, and government entities according to the law.
Owners can buy a special kit for US $1,400 that allows them to access data on their own car's event data recorder, but this represents just a tiny subset of the data the company collects, and is related only to crashes. Owners living in California and Europe benefit from legislation that means Tesla will provide access to more data generated by their vehicles, although not the Autopilot snapshots and trip logs that are supposedly anonymized.
Once governments realize that a company possesses such a trove of information, it could be only a matter of time before they seek access to it. “If the data exists…and in particular exists in the domain of somebody who’s not the subject of those data, it’s much more likely that a government will eventually get access to them in some way,” says Bryant Walker Smith, an associate professor in the schools of law and engineering at the University of South Carolina.
This is not necessarily a terrible thing, says Smith, who suggests that such rich data could unlock valuable insights into which roads or intersections are dangerous. The wealth of data could also surface subtle problems in the vehicles themselves.
In many ways, the data genie is already out of the bottle, according to Verdi. “Individuals ought to think about their cars more like they think about their cellphones,” he says. “The auto industry has a lot to learn from the ways that mobile-phone operating systems handle data permissions…. Both iOS and Android have made great strides in recent years in empowering consumers when it comes to data collection, data disclosure, and data use.”
Tesla permits owners to control some data sharing, including Autopilot and road segment analytics. If they want to opt out of data collection completely, they can ask Tesla to disable the vehicle’s connectivity altogether. However, this would mean losing features such as remote services, Internet radio, voice commands, and Web browser functionality, and even safety-related over-the-air updates.
Green says he is not aware of anyone who has successfully undergone this nuclear option. The only real way to know you’ve prevented data sharing, he says, is to “go to a repair place and ask them to remove the modem out of the car.”
Tesla almost certainly has the biggest empire of customer and vehicle data among automakers. It also appears to be the most aggressive in using that data to develop its automated driving systems, and to protect its reputation in the courts of law and public opinion, even to the detriment of some of its customers.
But while the world’s most valuable automaker dominates the discussion around connected cars, others are not far behind. Elon Musk’s insight—to embrace the data-driven world that our other digital devices already inhabit—is rapidly becoming the industry standard. When our cars become as powerful and convenient as our phones, it is hardly surprising that they suffer the same challenges around surveillance, privacy, and accountability.
All these products use display panels manufactured by Samsung but have their own unique display assembly, operating system, and electronics.
I took apart a 55-inch Samsung S95B to learn just how these new displays are put together (destroying it in the process). I found an extremely thin OLED backplane that generates blue light with an equally thin QD color-converting structure that completes the optical stack. I used a UV light source, a microscope, and a spectrometer to learn a lot about how these displays work.
As for the name of this technology, Samsung has used the branding OLED, QD Display, and QD-OLED, while Sony is just using OLED. Alienware uses QD-OLED to describe the new tech (as do most in the display industry).
Story from January 2022 follows:
For more than a decade now, OLED (organic light-emitting diode) displays have set the bar for screen quality, albeit at a price. That’s because they produce deep blacks, offer wide viewing angles, and have a broad color range. Meanwhile, QD (quantum dot) technologies have done a lot to improve the color purity and brightness of the more wallet-friendly LCD TVs.
In 2022, these two rival technologies will merge. The name of the resulting hybrid is still evolving, but QD-OLED seems to make sense, so I’ll use it here, although Samsung has begun to call its version of the technology QD Display.
To understand why this combination is so appealing, you have to know the basic principles behind each of these approaches to displaying a moving image.
In an LCD TV, the LED backlight, or at least a big section of it, is on all at once. The picture is created by filtering this light at the many individual pixels. Unfortunately, that filtering process isn’t perfect, and in areas that should appear black some light gets through.
In OLED displays, the red, green, and blue diodes that comprise each pixel emit light and are turned on only when they are needed. So black pixels appear truly black, while bright pixels can be run at full power, allowing unsurpassed levels of contrast.
But there’s a drawback. The colored diodes in an OLED TV degrade over time, causing what’s called “burn-in.” And with these changes happening at different rates for the red, green, and blue diodes, the degradation affects the overall ability of a display to reproduce colors accurately as it ages and also causes “ghost” images to appear where static content is frequently displayed.
Adding QDs into the mix shifts this equation. Quantum dots—nanoparticles of semiconductor material—absorb photons and then use that energy to emit light of a different wavelength. In a QD-OLED display, all the diodes emit blue light. To get red and green, the appropriate diodes are covered with red or green QDs. The result is a paper-thin display with a broad range of colors that remain accurate over time. These screens also have excellent black levels, wide viewing angles, and improved power efficiency over both OLED and LCD displays.
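The reason blue diodes can pump red and green quantum dots, and not the other way around, comes down to photon energy, E = hc/λ: blue photons carry more energy than green or red ones, so a dot can absorb blue light and re-emit at a longer, lower-energy wavelength. A quick check with typical display wavelengths (the exact values here are illustrative, not Samsung's specifications):

```python
PLANCK = 6.626e-34   # Planck constant, J*s
LIGHT_SPEED = 3.0e8  # speed of light, m/s

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy E = h*c/lambda, converted from joules to electronvolts."""
    joules = PLANCK * LIGHT_SPEED / (wavelength_nm * 1e-9)
    return joules / 1.602e-19

blue, green, red = 450, 530, 630  # nm, typical display primaries
e_b, e_g, e_r = map(photon_energy_ev, (blue, green, red))
print(f"blue {e_b:.2f} eV > green {e_g:.2f} eV > red {e_r:.2f} eV")
# Down-conversion (blue -> green/red) sheds energy; up-conversion would need extra
```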
Samsung is the driving force behind the technology, having sunk billions into retrofitting an LCD fab in Tangjeong, South Korea, for making QD-OLED displays. While other companies have published articles and demonstrated similar approaches, only Samsung has committed to manufacturing these displays, which makes sense because it holds all of the required technology in-house. Having both the OLED fab and QD expertise under one roof gives Samsung a big leg up on other QD-display manufacturers.
Samsung first announced QD-OLED plans in 2019, then pushed out the release date a few times. It now seems likely that we will see public demos in early 2022 followed by commercial products later in the year, once the company has geared up for high-volume production. At this point, Samsung can produce a maximum of 30,000 QD-OLED panels a month; these will be used in its own products. In the grand scheme of things, that’s not that much.
Unfortunately, as with any new display technology, there are challenges associated with development and commercialization.
For one, patterning the quantum-dot layers and protecting them is complicated. Unlike QD-enabled LCD displays (commonly referred to as QLED) where red and green QDs are dispersed uniformly in a polymer film, QD-OLED requires the QD layers to be patterned and aligned with the OLEDs behind them. And that’s tricky to do. Samsung is expected to employ inkjet printing, an approach that reduces the waste of QD material.
Another issue is the leakage of blue light through the red and green QD layers. Leakage of only a few percent would have a significant effect on the viewing experience, resulting in washed-out colors. If the red and green QD layers don’t do a good job absorbing all of the blue light impinging on them, an additional blue-blocking layer would be required on top, adding to the cost and complexity.
Another challenge is that blue OLEDs degrade faster than red or green ones do. With all three colors relying on blue OLEDs in a QD-OLED design, this degradation isn’t expected to cause as severe color shifts as with traditional OLED displays, but it does decrease brightness over the life of the display.
Today, OLED TVs are typically the most expensive option on retail shelves. And while the process for making QD-OLED simplifies the OLED layer somewhat (because you need only blue diodes), it does not make the display any less expensive. In fact, due to the large number of quantum dots used, the patterning steps, and the special filtering required, QD-OLED displays are likely to be more expensive than traditional OLED ones—and way more expensive than LCD TVs with quantum-dot color purification. Early adopters may pay about US $5,000 for the first QD-OLED displays when they begin selling later this year. Those buyers will no doubt complain about the prices—while enjoying a viewing experience far better than anything they’ve had before.
In 2019, Elon Musk stood up at a Tesla day devoted to automated driving and said, “Essentially everyone’s training the network all the time, is what it amounts to. Whether Autopilot’s on or off, the network is being trained.”
Tesla’s suite of assistive and semi-autonomous technologies, collectively known as Autopilot, is among the most widely deployed—and undeniably the most controversial—driver-assistance systems on the road today. While many drivers love it, using it for a combined total of more than 5 billion kilometers, the technology has been involved in hundreds of crashes, some of them fatal, and is currently the subject of a comprehensive investigation by the National Highway Traffic Safety Administration.
This second story—in IEEE Spectrum’s series of three on Tesla’s empire of data (story 1; story 3)—focuses on how Autopilot rests on a foundation of data harvested from the company’s own customers. Although the company’s approach has unparalleled scope and includes impressive technological innovations, it also faces particular challenges—not least of which is Musk’s decision to widely deploy the misleadingly named Full Self-Driving feature as a largely untested beta.
Most companies working on automated driving rely on a small fleet of highly instrumented test vehicles, festooned with high-resolution cameras, radars, and laser-ranging lidar devices. Some of these have been estimated to generate 750 megabytes of sensor data every second, providing a rich seam of training data for neural networks and other machine-learning systems to improve their driving skills.
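At that rate the data piles up fast, which is why heavily instrumented test fleets stay small while Tesla's consumer fleet down-selects what it uploads:

```python
BYTES_PER_SECOND = 750e6  # ~750 MB/s from a fully instrumented test vehicle

seconds_per_hour = 3600
tb_per_hour = BYTES_PER_SECOND * seconds_per_hour / 1e12
print(f"~{tb_per_hour:.1f} TB per hour of driving")  # ~2.7 TB
```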
Such systems have now effectively solved the task of everyday driving, handling a multitude of road users, differing weather conditions, and road types, says Henry Liu, director of Mcity, a public-private mobility research partnership at the University of Michigan.
“But right now, automated vehicles are one to two magnitudes below human drivers in terms of safety performance,” says Liu. “And that’s because current automated vehicles can’t handle the curse of rarity: low-frequency, long-tail, safety-critical events that they just don’t see enough to know how to handle.” Think of a deer suddenly springing into the road, or a slick of spilled fuel.
Tesla’s bold bet is that its own customers can provide the long tail of data needed to boost self-driving cars to superhuman levels of safety. Above and beyond their contractual obligations, many are happy to do so—seeing themselves as willing participants in the development of technology that they have been told will one day soon allow them to simply sit back and enjoy being driven by the car itself.
For a start, the routing information for every trip undertaken in a recent-model Autopilot-equipped Tesla is shared with the company—see the previous installment in this series. But Tesla’s data effort goes far beyond navigation.
In Shadow Mode, operating on Tesla vehicles since 2016, if the car’s Autopilot computer is not controlling the car, it is simulating the driving process in parallel with the human driver. When its own predictions do not match the driver’s behavior, this might trigger the recording of a short “snapshot” of the car’s cameras, speed, acceleration, and other parameters for later uploading to Tesla. Snapshots are also triggered when a Tesla crashes.
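Tesla has not published Shadow Mode's internals, but the triggering idea described above can be sketched as a simple divergence check between the human's action and the shadow model's prediction. Every name, signal, and threshold below is hypothetical, a toy illustration of the concept rather than Tesla's actual logic:

```python
from dataclasses import dataclass

@dataclass
class Frame:
    human_steering: float  # driver's actual steering angle, degrees
    model_steering: float  # what the shadow system would have commanded

def should_snapshot(frame: Frame, threshold_deg: float = 5.0) -> bool:
    """Trigger a data snapshot when the shadow model disagrees with the driver.

    Purely illustrative: a real system would compare many signals
    (speed, acceleration, object detections), not just steering.
    """
    return abs(frame.human_steering - frame.model_steering) > threshold_deg

frames = [Frame(0.0, 0.5), Frame(-12.0, 3.0)]
snapshots = [f for f in frames if should_snapshot(f)]
print(len(snapshots))  # 1 -- only the large disagreement is recorded
```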
After the snapshots are uploaded, a team may review them to identify human actions that the system should try to imitate, and input them as training data for its neural networks. Or they may notice that the system is failing, for instance, to properly identify road signs obscured by trees.
In that case, engineers can train a detector designed specifically for this scenario and download it to some or all Tesla vehicles. “We can beam it down to the fleet, and we can ask the fleet to please apply this detector on top of everything else you’re doing,” said Andrej Karpathy, then head of AI at Tesla, in 2020. If that detector thinks it spots such a road sign, it will capture images from the car’s cameras for later uploading.
His team would quickly receive thousands of images, which they would use to iterate the detector, and eventually roll it out to all production vehicles. “I’m not exactly sure how you build out a data set like this without the fleet,” said Karpathy. An amateur Tesla hacker who tweets using the pseudonym Green told Spectrum that he identified over 900 Autopilot test campaigns, before the company stopped numbering them in 2019.
Liu is bullish on Tesla’s approach to leveraging its ever-growing consumer base. “I don’t think a small…fleet will ever be able to handle these [rare] situations,” he says. “But even with these shadow drivers—and if you deploy millions of these fleet vehicles, that’s a very, very large data collection—I don’t know whether Tesla is fully utilizing them because there’s no public information really available.”
One obstacle is the sheer cost. Karpathy admitted that having a large team to assess and label images and video was expensive and said that Tesla was working on detectors that can train themselves on video clips captured in Autopilot snapshots. In June, the company duly laid off 195 people working on data annotation at a Bay Area office.
While Autopilot does seem to have improved over the years, with Tesla allowing its operation on more roads and in more situations, serious and fatal accidents still occur. These may or may not have purely technical causes. Certainly, some drivers seem to overestimate the system’s capabilities or fail, whether accidentally or deliberately, to supervise it sufficiently.
Other experts are worried that Tesla’s approach has more fundamental flaws. “The vast majority of the world generally believes that you’re never going to get the same level of safety with a camera-only system that you will based on a system that includes lidar,” says Dr. Matthew Weed, senior director of product management at Luminar, a company that manufactures advanced lidar systems.
He points out that Tesla’s Shadow Mode only captures a small fraction of each car’s driving time. “When it comes to safety, the whole thing is about…your unknown unknowns,” he says. “What are the things that I don’t even know about that will cause my system to fail? Those are really difficult to ascertain in a bulk fleet” that is down-selecting data.
For all the promise of Tesla’s fleet learning and the enthusiastic support of many of its customers, Autopilot has yet to prove that it can drive as safely as a human, let alone be trusted to operate a vehicle without supervision. And there are other difficulties looming. Andrej Karpathy left Tesla in mid-July, while the company continues to face the damaging possibility of NHTSA issuing a recall for Autopilot in the United States. This would be a terrible PR (and possibly economic) blow for the company but would likely not halt its harvesting of customer data to improve the system, nor prevent its continued deployment overseas.
Tesla’s use of fleet vehicle data to develop Autopilot echoes the user-fueled rise of Internet giants like Google, YouTube, and Facebook. The more its customers drive, so Musk’s story goes, the better the system performs.
But just as tech companies have had to come to terms with their complicated relationships with data, so Tesla is beginning to see a backlash. Why does the company charge US $12,000 for a so-called “full self-driving” capability that is utterly reliant on its customers’ data? How much control do drivers have over data extracted from their daily journeys? And what happens when other entities, from companies to the government, seek access to it? These are the themes for our third story.
Early in July, Rhode Island’s governor signed legislation mandating that the state acquire 100 percent of its electricity from renewable sources by 2033. Among the state’s American peers, there’s no deadline more ambitious.
“Anything more ambitious, and I would start being a little skeptical that it would be attainable,” says Seaver Wang, a climate and energy researcher at the Breakthrough Institute.
It is true that Rhode Island is small. It is also true that the state’s conditions make it riper for such a timeframe than most of the country. But watching this tiny state go about its policy business, analysts say, might show other states how to light their own ways into a renewable future.
Rhode Island’s 2033 deadline comes in the form of a renewable-energy standard, setting a goal that electricity providers must meet by collecting a certain number of certificates. Electricity providers can earn those certificates by generating electricity from renewable sources themselves; alternatively, they can buy certificates from other providers. (Numerous other states have similar standards—Rhode Island’s current standard is actually an upgrade to an older standard—and policy wonks have mooted a national standard.)
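The certificate mechanism above amounts to simple accounting: a provider's own renewable generation plus any certificates it buys must cover the fraction of its sales that the standard mandates. The sketch below illustrates that arithmetic; the function names and figures are illustrative, not Rhode Island's actual rules.

```python
# Minimal sketch of renewable-energy-standard certificate accounting,
# assuming one certificate per megawatt-hour of renewable generation.
# Names and numbers are illustrative, not Rhode Island's actual rules.

def certificates_required(total_sales_mwh, standard_fraction):
    """Certificates a provider must hold to meet the standard."""
    return total_sales_mwh * standard_fraction

def is_compliant(generated_certs, purchased_certs, required):
    """Own generation and purchased certificates both count."""
    return generated_certs + purchased_certs >= required

# A provider selling 1,000,000 MWh under a 100 percent standard needs
# 1,000,000 certificates, whether generated or bought from others.
required = certificates_required(1_000_000, 1.0)
print(is_compliant(600_000, 400_000, required))  # True
print(is_compliant(500_000, 400_000, required))  # False
```

This also shows why the concern raised later in the article matters: if many states tighten their standards without new capacity being built, the pool of purchasable certificates can run dry.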
Today, it might seem a bit optimistic to pin hopes for renewable energy on a state that still gets 89 percent of its electricity from natural gas. Much of the meager wind power that does exist comes either from other states or from the 30-megawatt Block Island Wind Farm—the first offshore wind farm in the United States—which consists of just five turbines and only came online in 2016.
But Rhode Island plans to fill the gap with as much as 600 megawatts of new wind power. To aid this effort, it has partnered with Ørsted, which could bring a critical mass of turbine expertise from Europe, where the sector is far more advanced. “I think that adds greatly to the likelihood of [Rhode Island’s] success,” says Morgan Higman, a clean-energy researcher at the Center for Strategic and International Studies, in Washington, D.C.
The policies in the package are, indeed, quite specific to Rhode Island’s position. Not only is it one of the least populous states in the United States, but it also has about the lowest per capita energy consumption in the country. Moreover, because it powers a service-oriented economy, Rhode Island’s grid doesn’t have to accommodate many energy-intensive manufacturing firms. That makes the 2033 goal all the more achievable.
“It’s better to have attainable goals and focus on a diverse portfolio of policies to promote clean energy advancement, rather than sort of rush to meet what is essentially…a bit of a PR goal,” says Wang.
That Rhode Island is going all-in on something this maritime state might have in abundance—offshore wind—offers another lesson. Higman says it’s a good example of using a state’s own potential resources. Moreover, the partnership with Ørsted might help the state harness helpful expertise.
In similar fashion, Texans could choose to double down on that state’s own wind-power portfolio. New Mexico could potentially shape a renewable-energy supply from its bountiful sunlight. Doing this sort of thing, Higman says, “is the fastest way that we see states accelerate renewable-energy deployment.”
Rhode Island’s policy does leave some room for improvement. Its focus on renewables looks past New England’s largest source of carbon-free energy: fission. Just two nuclear power plants (Millstone in Connecticut and Seabrook in New Hampshire) pump out more than a fifth of the region’s electricity. A more inclusive policy might take note and incentivize nuclear power, too.
Perhaps most important, any discussion of energy policy should note that Rhode Island’s grid doesn’t exist in a vacuum; it’s linked in with the grids of its surrounding states in New England, New York, and beyond. (Indeed, it has repeatedly partnered on setting goals and building new offshore wind power.)
If neighboring states implement similarly aggressive standards without actually building new energy capacity, then there’s a chance that when all the renewable energy certificates are bought out, some states won’t have any renewable energy left.
But analysts are optimistic that Rhode Island can do the job. “Rhode Island does deserve some kudos for this policy,” says Wang.
“It’s really tempting to applaud states for their goals. This is a useful example of where setting a goal is not very meaningful,” adds Higman. “Identifying the means and strategies and technologies to achieve that goal is the most important thing. And Rhode Island has done that.”