Tag Archives: Street

#437992 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
This Chinese Lab Is Aiming for Big AI Breakthroughs
Will Knight | Wired
“China produces as many artificial intelligence researchers as the US, but it lags in key fields like machine learning. The government hopes to make up ground. …It set AI researchers the goal of making ‘fundamental breakthroughs by 2025’ and called for the country to be ‘the world’s primary innovation center by 2030.’ BAAI opened a year later, in Zhongguancun, a neighborhood of Beijing designed to replicate US innovation hubs such as Boston and Silicon Valley.”

ENVIRONMENT
What Elon Musk’s $100 Million Carbon Capture Prize Could Mean
James Temple | MIT Technology Review
“[Elon Musk] announced on Twitter that he plans to give away $100 million of [his $180 billion net worth] as a prize for the ‘best carbon capture technology.’ …Another $100 million could certainly help whatever venture, or ventures, clinch Musk’s prize. But it’s a tiny fraction of his wealth and will also only go so far. …Money aside, however, one thing Musk has a particular knack for is generating attention. And this is a space in need of it.”

HEALTH
Synthetic Cornea Helped a Legally Blind Man Regain His Sight
Steve Dent | Engadget
“While the implant doesn’t contain any electronics, it could help more people than any robotic eye. ‘After years of hard work, seeing a colleague implant the CorNeat KPro with ease and witnessing a fellow human being regain his sight the following day was electrifying and emotionally moving, there were a lot of tears in the room,’ said CorNeat Vision co-founder Dr. Gilad Litvin.”

BIOTECH
MIT Develops Method for Lab-Grown Plants That May Eventually Lead to Alternatives to Forestry and Farming
Darrell Etherington | TechCrunch
“If the work of these researchers can eventually be used to create a way to produce lab-grown wood for use in construction and fabrication in a way that’s scalable and efficient, then there’s tremendous potential in terms of reducing the impact on forestry globally. Eventually, the team even theorizes you could coax the growth of plant-based materials into specific target shapes, so you could also do some of the manufacturing in the lab, by growing a wood table directly for instance.”

AUTOMATION
FAA Approves First Fully Automated Commercial Drone Flights
Andy Pasztor and Katy Stech Ferek | The Wall Street Journal
“US aviation regulators have approved the first fully automated commercial drone flights, granting a small Massachusetts-based company permission to operate drones without hands-on piloting or direct observation by human controllers or observers. …The company’s Scout drones operate under predetermined flight programs and use acoustic technology to detect and avoid drones, birds, and other obstacles.”

SPACE
China’s Surging Private Space Industry Is Out to Challenge the US
Neel V. Patel | MIT Technology Review
“[The Ceres-1] was a commercial rocket—only the second from a Chinese company ever to go into space. And the launch happened less than three years after the company was founded. The achievement is a milestone for China’s fledgling—but rapidly growing—private space industry, an increasingly critical part of the country’s quest to dethrone the US as the world’s preeminent space power.”

CRYPTOCURRENCY
Janet Yellen Will Consider Limiting Use of Cryptocurrency
Timothy B. Lee | Ars Technica
“Cryptocurrencies could come under renewed regulatory scrutiny over the next four years if Janet Yellen, Joe Biden’s pick to lead the Treasury Department, gets her way. During Yellen’s Tuesday confirmation hearing before the Senate Finance Committee, Sen. Maggie Hassan (D-N.H.) asked Yellen about the use of cryptocurrency by terrorists and other criminals. ‘Cryptocurrencies are a particular concern,’ Yellen responded. ‘I think many are used—at least in a transactions sense—mainly for illicit financing.’”

SCIENCE
Secret Ingredient Found to Power Supernovas
Thomas Lewton | Quanta
“…Only in the last few years, with the growth of supercomputers, have theorists had enough computing power to model massive stars with the complexity needed to achieve explosions. …These new simulations are giving researchers a better understanding of exactly how supernovas have shaped the universe we see today.”

Image Credit: Ricardo Gomez Angel / Unsplash

Posted in Human Robots

#437929 These Were Our Favorite Tech Stories ...

This time last year we were commemorating the end of a decade and looking ahead to the next one. Enter the year that felt like a decade all by itself: 2020. News written in January, the before-times, feels hopelessly out of touch with all that came after. Stories published in the early days of the pandemic are, for the most part, similarly naive.

The year’s news cycle was swift and brutal, ping-ponging from pandemic to extreme social and political tension, whipsawing economies, and natural disasters. Hope. Despair. Loneliness. Grief. Grit. More hope. Another lockdown. It’s been a hell of a year.

Though 2020 was dominated by big, hairy societal change, science and technology took significant steps forward. Researchers singularly focused on the pandemic and collaborated on solutions to a degree never before seen. New technologies converged to deliver vaccines in record time. The dark side of tech, from biased algorithms to the threat of omnipresent surveillance and corporate control of artificial intelligence, continued to rear its head.

Meanwhile, AI showed uncanny command of language, joined Reddit threads, and made inroads into some of science’s grandest challenges. Mars rockets flew for the first time, and a private company delivered astronauts to the International Space Station. Deprived of night life, concerts, and festivals, millions traveled to virtual worlds instead. Anonymous jet packs flew over LA. Mysterious monoliths appeared and disappeared worldwide.

It was all, you know, very 2020. For this year’s (in-no-way-all-encompassing) list of fascinating stories in tech and science, we tried to select those that weren’t totally dated by the news, but rose above it in some way. So, without further ado: This year’s picks.

How Science Beat the Virus
Ed Yong | The Atlantic
“Much like famous initiatives such as the Manhattan Project and the Apollo program, epidemics focus the energies of large groups of scientists. …But ‘nothing in history was even close to the level of pivoting that’s happening right now,’ Madhukar Pai of McGill University told me. … No other disease has been scrutinized so intensely, by so much combined intellect, in so brief a time.”

‘It Will Change Everything’: DeepMind’s AI Makes Gigantic Leap in Solving Protein Structures
Ewen Callaway | Nature
“In some cases, AlphaFold’s structure predictions were indistinguishable from those determined using ‘gold standard’ experimental methods such as X-ray crystallography and, in recent years, cryo-electron microscopy (cryo-EM). AlphaFold might not obviate the need for these laborious and expensive methods—yet—say scientists, but the AI will make it possible to study living things in new ways.”

OpenAI’s Latest Breakthrough Is Astonishingly Powerful, But Still Fighting Its Flaws
James Vincent | The Verge
“What makes GPT-3 amazing, they say, is not that it can tell you that the capital of Paraguay is Asunción (it is) or that 466 times 23.5 is 10,987 (it’s not), but that it’s capable of answering both questions and many more beside simply because it was trained on more data for longer than other programs. If there’s one thing we know that the world is creating more and more of, it’s data and computing power, which means GPT-3’s descendants are only going to get more clever.”

Artificial General Intelligence: Are We Close, and Does It Even Make Sense to Try?
Will Douglas Heaven | MIT Technology Review
“A machine that could think like a person has been the guiding vision of AI research since the earliest days—and remains its most divisive idea. …So why is AGI controversial? Why does it matter? And is it a reckless, misleading dream—or the ultimate goal?”

The Dark Side of Big Tech’s Funding for AI Research
Tom Simonite | Wired
“Timnit Gebru’s exit from Google is a powerful reminder of how thoroughly companies dominate the field, with the biggest computers and the most resources. …[Meredith] Whittaker of AI Now says properly probing the societal effects of AI is fundamentally incompatible with corporate labs. ‘That kind of research that looks at the power and politics of AI is and must be inherently adversarial to the firms that are profiting from this technology.’”

We’re Not Prepared for the End of Moore’s Law
David Rotman | MIT Technology Review
“Quantum computing, carbon nanotube transistors, even spintronics, are enticing possibilities—but none are obvious replacements for the promise that Gordon Moore first saw in a simple integrated circuit. We need the research investments now to find out, though. Because one prediction is pretty much certain to come true: we’re always going to want more computing power.”

Inside the Race to Build the Best Quantum Computer on Earth
Gideon Lichfield | MIT Technology Review
“Regardless of whether you agree with Google’s position [on ‘quantum supremacy’] or IBM’s, the next goal is clear, Oliver says: to build a quantum computer that can do something useful. …The trouble is that it’s nearly impossible to predict what the first useful task will be, or how big a computer will be needed to perform it.”

The Secretive Company That Might End Privacy as We Know It
Kashmir Hill | The New York Times
“Searching someone by face could become as easy as Googling a name. Strangers would be able to listen in on sensitive conversations, take photos of the participants and know personal secrets. Someone walking down the street would be immediately identifiable—and his or her home address would be only a few clicks away. It would herald the end of public anonymity.”

Wrongfully Accused by an Algorithm
Kashmir Hill | The New York Times
“Mr. Williams knew that he had not committed the crime in question. What he could not have known, as he sat in the interrogation room, is that his case may be the first known account of an American being wrongfully arrested based on a flawed match from a facial recognition algorithm, according to experts on technology and the law.”

Predictive Policing Algorithms Are Racist. They Need to Be Dismantled.
Will Douglas Heaven | MIT Technology Review
“A number of studies have shown that these tools perpetuate systemic racism, and yet we still know very little about how they work, who is using them, and for what purpose. All of this needs to change before a proper reckoning can take place. Luckily, the tide may be turning.”

The Panopticon Is Already Here
Ross Andersen | The Atlantic
“Artificial intelligence has applications in nearly every human domain, from the instant translation of spoken language to early viral-outbreak detection. But Xi [Jinping] also wants to use AI’s awesome analytical powers to push China to the cutting edge of surveillance. He wants to build an all-seeing digital system of social control, patrolled by precog algorithms that identify potential dissenters in real time.”

The Case For Cities That Aren’t Dystopian Surveillance States
Cory Doctorow | The Guardian
“Imagine a human-centered smart city that knows everything it can about things. It knows how many seats are free on every bus, it knows how busy every road is, it knows where there are short-hire bikes available and where there are potholes. …What it doesn’t know is anything about individuals in the city.”

The Modern World Has Finally Become Too Complex for Any of Us to Understand
Tim Maughan | OneZero
“One of the dominant themes of the last few years is that nothing makes sense. …I am here to tell you that the reason so much of the world seems incomprehensible is that it is incomprehensible. From social media to the global economy to supply chains, our lives rest precariously on systems that have become so complex, and we have yielded so much of it to technologies and autonomous actors that no one totally comprehends it all.”

The Conscience of Silicon Valley
Zach Baron | GQ
“What I really hoped to do, I said, was to talk about the future and how to live in it. This year feels like a crossroads; I do not need to explain what I mean by this. …I want to destroy my computer, through which I now work and ‘have drinks’ and stare at blurry simulations of my parents sometimes; I want to kneel down and pray to it like a god. I want someone—I want Jaron Lanier—to tell me where we’re going, and whether it’s going to be okay when we get there. Lanier just nodded. All right, then.”

Yes to Tech Optimism. And Pessimism.
Shira Ovide | The New York Times
“Technology is not something that exists in a bubble; it is a phenomenon that changes how we live or how our world works in ways that help and hurt. That calls for more humility and bridges across the optimism-pessimism divide from people who make technology, those of us who write about it, government officials and the public. We need to think on the bright side. And we need to consider the horribles.”

How Afrofuturism Can Help the World Mend
C. Brandon Ogbunu | Wired
“…[W. E. B. DuBois’] ‘The Comet’ helped lay the foundation for a paradigm known as Afrofuturism. A century later, as a comet carrying disease and social unrest has upended the world, Afrofuturism may be more relevant than ever. Its vision can help guide us out of the rubble, and help us to consider universes of better alternatives.”

Wikipedia Is the Last Best Place on the Internet
Richard Cooke | Wired
“More than an encyclopedia, Wikipedia has become a community, a library, a constitution, an experiment, a political manifesto—the closest thing there is to an online public square. It is one of the few remaining places that retains the faintly utopian glow of the early World Wide Web.”

Can Genetic Engineering Bring Back the American Chestnut?
Gabriel Popkin | The New York Times Magazine
“The geneticists’ research forces conservationists to confront, in a new and sometimes discomfiting way, the prospect that repairing the natural world does not necessarily mean returning to an unblemished Eden. It may instead mean embracing a role that we’ve already assumed: engineers of everything, including nature.”

At the Limits of Thought
David C. Krakauer | Aeon
“A schism is emerging in the scientific enterprise. On the one side is the human mind, the source of every story, theory, and explanation that our species holds dear. On the other stand the machines, whose algorithms possess astonishing predictive power but whose inner workings remain radically opaque to human observers.”

Is the Internet Conscious? If It Were, How Would We Know?
Meghan O’Gieblyn | Wired
“Does the internet behave like a creature with an internal life? Does it manifest the fruits of consciousness? There are certainly moments when it seems to. Google can anticipate what you’re going to type before you fully articulate it to yourself. Facebook ads can intuit that a woman is pregnant before she tells her family and friends. It is easy, in such moments, to conclude that you’re in the presence of another mind—though given the human tendency to anthropomorphize, we should be wary of quick conclusions.”

The Internet Is an Amnesia Machine
Simon Pitt | OneZero
“There was a time when I didn’t know what a Baby Yoda was. Then there was a time I couldn’t go online without reading about Baby Yoda. And now, Baby Yoda is a distant, shrugging memory. Soon there will be a generation of people who missed the whole thing and for whom Baby Yoda is as meaningless as it was for me a year ago.”

Digital Pregnancy Tests Are Almost as Powerful as the Original IBM PC
Tom Warren | The Verge
“Each test, which costs less than $5, includes a processor, RAM, a button cell battery, and a tiny LCD screen to display the result. …Foone speculates that this device is ‘probably faster at number crunching and basic I/O than the CPU used in the original IBM PC.’ IBM’s original PC was based on Intel’s 8088 microprocessor, an 8-bit chip that operated at 5MHz. The difference here is that this is a pregnancy test you pee on and then throw away.”

The Party Goes on in Massive Online Worlds
Cecilia D’Anastasio | Wired
“We’re more stand-outside types than the types to cast a flashy glamour spell and chat up the nearest cat girl. But, hey, it’s Final Fantasy XIV online, and where my body sat in New York, the epicenter of America’s Covid-19 outbreak, there certainly weren’t any parties.”

The Facebook Groups Where People Pretend the Pandemic Isn’t Happening
Kaitlyn Tiffany | The Atlantic
“Losing track of a friend in a packed bar or screaming to be heard over a live band is not something that’s happening much in the real world at the moment, but it happens all the time in the 2,100-person Facebook group ‘a group where we all pretend we’re in the same venue.’ So does losing shoes and Juul pods, and shouting matches over which bands are the saddest, and therefore the greatest.”

Did You Fly a Jetpack Over Los Angeles This Weekend? Because the FBI Is Looking for You
Tom McKay | Gizmodo
“Did you fly a jetpack over Los Angeles at approximately 3,000 feet on Sunday? Some kind of tiny helicopter? Maybe a lawn chair with balloons tied to it? If the answer to any of the above questions is ‘yes,’ you should probably lay low for a while (by which I mean cool it on the single-occupant flying machine). That’s because passing airline pilots spotted you, and now it’s this whole thing with the FBI and the Federal Aviation Administration, both of which are investigating.”

Image Credit: Thomas Kinto / Unsplash

Posted in Human Robots

#437924 How a Software Map of the Entire Planet ...

“3D map data is the scaffolding of the 21st century.”

–Edward Miller, Founder, Scape Technologies, UK

Covered in cameras, sensors, and a distinctly spaceship-looking laser system, Google’s autonomous vehicles were easy to spot when they first hit public roads in 2015. The key hardware ingredient is a spinning laser fixed to the roof, called lidar, which provides the car with a pair of eyes to see the world. Lidar works by sending out beams of light and measuring the time it takes each beam to bounce off an object and return to the source. By timing the light’s journey, these depth-sensing systems construct fully 3D maps of their surroundings.
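To make the time-of-flight idea concrete, here is a minimal sketch (in Python, with made-up pulse timings rather than real sensor data) of how a measured round trip converts into a distance.

```python
# Minimal sketch of lidar-style time-of-flight ranging.
# The round-trip times below are hypothetical, not real sensor data.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the reflecting object: the pulse travels out and back,
    so the one-way distance is half the round trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~66.7 nanoseconds corresponds to an object ~10 m away.
for t in (66.7e-9, 133.4e-9, 333.6e-9):
    print(f"round trip {t*1e9:6.1f} ns -> {distance_from_round_trip(t):6.2f} m")
```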

3D maps like these are essentially software copies of the real world. They will be crucial to the development of a wide range of emerging technologies including autonomous driving, drone delivery, robotics, and a fast-approaching future filled with augmented reality.

Like other rapidly improving technologies, lidar is moving quickly through its development cycle. What was once an expensive instrument on the roof of a well-funded research project is becoming cheaper, more capable, and readily available to consumers. Lidar is already in the hands of early-adopting owners of the iPhone 12 Pro, and at some point it may come standard on most mobile devices.

Consumer lidar represents the inevitable shift from wealthy tech companies generating our world’s map data to a more scalable, crowd-sourced approach. To develop the repository for its Street View product, Google reportedly spent $1-2 billion sending cars across continents photographing every street. Compare that to a live-mapping service like Waze, which draws on crowd-sourced data from millions of users to generate accurate, real-time traffic conditions. Though these maps serve different functions, the contrast is telling: one is a static, expensive, unchanging map of the world, while the other is dynamic, real-time, and constructed by users themselves.

Soon millions of people may be scanning everything from bedrooms to neighborhoods, producing 3D maps of remarkable quality. An online search for lidar room scans shows just how richly textured these three-dimensional maps are compared to anything we’ve had before. With lidar and other depth-sensing systems, we now have the tools to create exact software copies of everywhere and everything on earth.

At some point, likely aided by crowdsourcing initiatives, these maps will become living, breathing, real-time representations of the world. Some refer to this idea as a “digital twin” of the planet. In a feature cover story for Wired, the magazine’s cofounder Kevin Kelly calls this concept the “mirrorworld,” a one-to-one software map of everything.

So why is that such a big deal? Take augmented reality as an example.

Of all the emerging industries dependent on such a map, none is more invested in seeing it realized than the AR industry. Apple, for example, is not-so-secretly developing a pair of AR glasses, which it hopes will deliver a mainstream turning point for the technology.

For Apple’s AR devices to work as anticipated, they will require virtual maps of the world, a concept AR insiders call the “AR cloud,” essentially synonymous with the mirrorworld. These maps will serve two purposes. First, they will be a tool creators use to place AR content in very specific locations, like a world canvas to paint on. Second, they will help AR devices both locate and understand the world around them so they can render content in a believable way.
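As a rough illustration of those two roles, the sketch below imagines what a single AR-cloud “anchor” record might look like: a precise pose in a shared map that a creator attaches content to and that a localized device later resolves. The field names and values are hypothetical, not any vendor’s actual schema.

```python
# Hypothetical sketch of an AR-cloud anchor: a precise pose in a shared map
# that content can be attached to. Field names are illustrative, not a real API.
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ARAnchor:
    anchor_id: str
    latitude: float          # coarse geographic position
    longitude: float
    local_xyz: tuple         # centimeter-level offset within the local map (meters)
    orientation_quat: tuple  # orientation of the anchored content
    content: Dict[str, str] = field(default_factory=dict)  # e.g. a label or asset URL

# A creator pins a virtual sign to a storefront; a localized device later
# looks the anchor up and renders the content at the stored pose.
sign = ARAnchor(
    anchor_id="store-front-001",
    latitude=40.7128, longitude=-74.0060,
    local_xyz=(1.25, 0.10, 2.40),
    orientation_quat=(0.0, 0.0, 0.0, 1.0),
    content={"type": "opening_hours", "text": "Open 9am-6pm"},
)
print(sign.anchor_id, sign.content["text"])
```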

Imagine walking down a street and wanting to check the trading hours of a local business. Instead of pulling out your phone to do a tedious search online, you conduct the equivalent of a visual Google search simply by gazing at the store. Though the example is trivial, the AR cloud represents an entirely non-trivial new way of organizing the world’s information. Access to knowledge can shift away from the screens in our pockets to the relevant real-world location.

Ultimately this describes a blurring of physical and digital infrastructure. Our public and private spaces will thus be composed equally of both.

No example demonstrates this idea better than Pokémon Go. The game is straightforward enough: users capture virtual characters scattered around the real world. Today, the game relies on traditional GPS to place its characters, but GPS is accurate only to within a few meters of a location. For a car navigating a highway or for roughly locating Pikachus in the world, that level of precision is sufficient. For drone deliveries, driverless cars, or placing a Pikachu in a specific spot, say on a tree branch in a park, GPS isn’t accurate enough. As astonishing as it may seem, many experimental AR cloud concepts, even entirely mapped cities, are location-specific down to the centimeter.
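A quick back-of-the-envelope check, using assumed error figures, shows why meter-level GPS works for navigation but not for pinning a Pikachu to a branch.

```python
# Rough comparison of positioning error vs. the precision a task needs.
# The numbers are assumed, order-of-magnitude figures, not measurements.

TASKS = {
    "highway navigation": 5.0,               # a few meters of error is fine
    "placing a Pikachu on a branch": 0.05,   # needs centimeter-level accuracy
}

def good_enough(position_error_m: float, required_precision_m: float) -> bool:
    return position_error_m <= required_precision_m

gps_error_m = 3.0        # typical consumer GPS
ar_cloud_error_m = 0.01  # what centimeter-level visual localization aims for

for task, needed in TASKS.items():
    print(f"{task}: GPS ok? {good_enough(gps_error_m, needed)}, "
          f"AR-cloud ok? {good_enough(ar_cloud_error_m, needed)}")
```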

Niantic, the $4 billion publisher behind Pokémon Go, is aggressively developing a crowd-sourced approach to building better AR cloud maps by encouraging its users to scan the world for it. Its recent acquisition of 6D.ai, a mapping software company developed by the University of Oxford’s Victor Prisacariu through his work at the Active Vision Lab, signals Niantic’s ambition to compete with the tech giants in this space.

With 6D.ai’s technology, Niantic is developing the in-house ability to generate its own 3D maps while gaining a better semantic understanding of the world. Rather than just knowing there’s a temporary collection of orange cones in a certain location, for example, the game may one day understand what that means: a temporary construction zone where no Pokémon should spawn, to avoid drawing players to the spot.
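Here is a toy sketch of that kind of semantic rule; the labels and logic are invented for illustration and are not Niantic’s actual system.

```python
# Toy illustration: semantic map labels driving a spawn decision.
# Labels and rules are hypothetical, not Niantic's actual logic.

UNSAFE_SEMANTICS = {"construction_zone", "active_roadway", "private_property"}

def may_spawn_character(labels_at_location: set) -> bool:
    """Only spawn a character where none of the labels imply players
    should not be drawn to that spot."""
    return labels_at_location.isdisjoint(UNSAFE_SEMANTICS)

print(may_spawn_character({"park", "tree"}))                   # True
print(may_spawn_character({"sidewalk", "construction_zone"}))  # False
```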

Niantic is not the only company working on this. Many of the big tech firms you would expect have entire teams focused on map data. Facebook, for example, recently acquired the UK-based Scape Technologies, a computer vision startup mapping entire cities with centimeter precision.

As our digital maps of the world improve, expect a relentless and justified discussion of privacy concerns as well. How will people react to the idea of a real-time 3D map of their bedrooms living on a Facebook or Amazon server? Those horrified by facial recognition AI being used in public spaces are unlikely to find comfort in the idea of a machine-readable world subject to constant monitoring.

The ability to build high-precision maps of the world could reshape the way we engage with our planet and promises to be one of the biggest technology developments of the next decade. While these maps may stay hidden as behind-the-scenes infrastructure powering much flashier technologies that capture the world’s attention, they will soon prop up large portions of our technological future.

Keep that in mind when a car with no driver is sharing your road.

Image credit: sergio souza / Pexels

Posted in Human Robots

#437809 Q&A: The Masterminds Behind ...

Illustration: iStockphoto

Getting a car to drive itself is undoubtedly the most ambitious commercial application of artificial intelligence (AI). The research effort was kicked into life by the 2004 DARPA Grand Challenge and then taken up as a business proposition, first by Alphabet, and later by the big automakers.

The industry-wide effort vacuumed up many of the world’s best roboticists and set rival companies on a multibillion-dollar acquisitions spree. It also launched a cycle of hype that paraded ever more ambitious deadlines—the most famous of which, made by Alphabet’s Sergey Brin in 2012, was that full self-driving technology would be ready by 2017. Those deadlines have all been missed.

Much of the exhilaration was inspired by the seeming miracles that a new kind of AI—deep learning—was achieving in playing games, recognizing faces, and transcribing speech. Deep learning excels at tasks involving pattern recognition—a particular challenge for older, rule-based AI techniques. However, it now seems that deep learning will not soon master the other intellectual challenges of driving, such as anticipating what human beings might do.

Among the roboticists who have been involved from the start are Gill Pratt, the chief executive officer of Toyota Research Institute (TRI), formerly a program manager at the Defense Advanced Research Projects Agency (DARPA); and Wolfram Burgard, vice president of automated driving technology for TRI and president of the IEEE Robotics and Automation Society. The duo spoke with IEEE Spectrum’s Philip Ross at TRI’s offices in Palo Alto, Calif.

This interview has been condensed and edited for clarity.

IEEE Spectrum: How does AI handle the various parts of the self-driving problem?

Photo: Toyota

Gill Pratt

Gill Pratt: There are three different systems that you need in a self-driving car: It starts with perception, then goes to prediction, and then goes to planning.

The one that is by far the most problematic is prediction. It’s not prediction of other automated cars, because if all cars were automated, this problem would be much simpler. How do you predict what a human being is going to do? That’s difficult for deep learning to learn right now.
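A minimal sketch of the perception-prediction-planning decomposition Pratt describes, with placeholder logic standing in for each stage; this illustrates the structure only, not any company’s actual stack.

```python
# Skeleton of the perception -> prediction -> planning decomposition.
# Each stage is a placeholder; real systems are vastly more complex.

def perceive(sensor_frame):
    """Turn raw sensor data into a list of detected objects with positions."""
    return [{"kind": "pedestrian", "position": (12.0, 3.5)}]

def predict(objects):
    """Attach a guess about what each object will do next -- the hard part,
    especially for humans."""
    for obj in objects:
        obj["predicted_motion"] = "may_cross_street"
    return objects

def plan(objects):
    """Choose a driving action given the predicted scene."""
    if any(o["predicted_motion"] == "may_cross_street" for o in objects):
        return "slow_down"
    return "maintain_speed"

frame = {}  # placeholder for raw camera/lidar/radar data
print(plan(predict(perceive(frame))))  # -> slow_down
```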

Spectrum: Can you offset the weakness in prediction with stupendous perception?

Photo: Toyota Research Institute

Wolfram Burgard

Wolfram Burgard: Yes, that is what car companies basically do. A camera provides semantics, lidar provides distance, radar provides velocities. But all this comes with problems, because sometimes you look at the world from different positions—that’s called parallax. Sometimes you don’t know which range estimate that pixel belongs to. That can complicate the decision as to whether that is a person painted onto the side of a truck or an actual person.

With deep learning there is this promise that if you throw enough data at these networks, it’s going to work—finally. But it turns out that the amount of data that you need for self-driving cars is far larger than we expected.

Spectrum: When do deep learning’s limitations become apparent?

Pratt: The way to think about deep learning is that it’s really high-performance pattern matching. You have input and output as training pairs; you say this image should lead to that result; and you just do that again and again, for hundreds of thousands, millions of times.
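The “training pairs” routine Pratt describes is the basic supervised-learning loop. Here is a bare-bones sketch on synthetic data, fitting a single linear mapping from input to output by repeating the same small update many times.

```python
# Bare-bones supervised "pattern matching": fit y ~= w*x + b from input/output
# pairs by repeating the same update again and again. Data here is synthetic.

pairs = [(x, 2.0 * x + 1.0) for x in range(-5, 6)]  # target pattern: y = 2x + 1

w, b, lr = 0.0, 0.0, 0.01
for epoch in range(1000):            # "again and again"
    for x, y in pairs:
        err = (w * x + b) - y        # prediction error on this pair
        w -= lr * err * x            # nudge parameters to shrink the error
        b -= lr * err

print(f"learned w={w:.2f}, b={b:.2f}")  # approaches w=2, b=1
```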

Here’s the logical fallacy that I think most people have fallen prey to with deep learning. A lot of what we do with our brains can be thought of as pattern matching: “Oh, I see this stop sign, so I should stop.” But it doesn’t mean all of intelligence can be done through pattern matching.

“I asked myself, if all of those cars had automated drive, how good would they have to be to tolerate the number of crashes that would still occur?”
—Gill Pratt, Toyota Research Institute

For instance, when I’m driving and I see a mother holding the hand of a child on a corner and trying to cross the street, I am pretty sure she’s not going to cross at a red light and jaywalk. I know from my experience being a human being that mothers and children don’t act that way. On the other hand, say there are two teenagers—with blue hair, skateboards, and a disaffected look. Are they going to jaywalk? I look at that, you look at that, and instantly the probability in your mind that they’ll jaywalk is much higher than for the mother holding the hand of the child. It’s not that you’ve seen 100,000 cases of young kids—it’s that you understand what it is to be either a teenager or a mother holding a child’s hand.

You can try to fake that kind of intelligence. If you specifically train a neural network on data like that, you could pattern-match that. But you’d have to know to do it.

Spectrum: So you’re saying that when you substitute pattern recognition for reasoning, the marginal return on the investment falls off pretty fast?

Pratt: That’s absolutely right. Unfortunately, we don’t have the ability to make an AI that thinks yet, so we don’t know what to do. We keep trying to use the deep-learning hammer to hammer more nails—we say, well, let’s just pour more data in, and more data.

Spectrum: Couldn’t you train the deep-learning system to recognize teenagers and to assign the category a high propensity for jaywalking?

Burgard: People have been doing that. But it turns out that these heuristics you come up with are extremely hard to tweak. Also, sometimes the heuristics are contradictory, which makes it extremely hard to design these expert systems based on rules. This is where the strength of the deep-learning methods lies, because somehow they encode a way to see a pattern where, for example, here’s a feature and over there is another feature; it’s about the sheer number of parameters you have available.

Our separation of the components of a self-driving AI eases the development and even the learning of the AI systems. Some companies even think about using deep learning to do the job fully, from end to end, not having any structure at all—basically, directly mapping perceptions to actions.

Pratt: There are companies that have tried it; Nvidia certainly tried it. In general, it’s been found not to work very well. So people divide the problem into blocks, where we understand what each block does, and we try to make each block work well. Some of the blocks end up more like the expert system we talked about, where we actually code things, and other blocks end up more like machine learning.

Spectrum: So, what’s next—what new technique is in the offing?

Pratt: If I knew the answer, we’d do it. [Laughter]

Spectrum: You said that if all cars on the road were automated, the problem would be easy. Why not “geofence” the heck out of the self-driving problem, and have areas where only self-driving cars are allowed?

Pratt: That means putting in constraints on the operational design domain. This includes the geography—where the car should be automated; it includes the weather, it includes the level of traffic, it includes speed. If the car is going slow enough to avoid colliding without risking a rear-end collision, that makes the problem much easier. Street trolleys operate with traffic still in some parts of the world, and that seems to work out just fine. People learn that this vehicle may stop at unexpected times. My suspicion is, that is where we’ll see Level 4 autonomy in cities. It’s going to be in the lower speeds.
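As a toy illustration of “constraints on the operational design domain,” the sketch below encodes a few of the dimensions Pratt lists (geography, weather, traffic, speed) as a simple check; the fields and thresholds are invented for the example.

```python
# Toy operational-design-domain (ODD) check. Fields and thresholds are
# invented for illustration; real ODD definitions are far more detailed.

ODD = {
    "allowed_areas": {"downtown_core", "campus_loop"},  # geofenced geography
    "allowed_weather": {"clear", "light_rain"},
    "max_traffic_level": 3,       # e.g. 0 = empty road, 5 = gridlock
    "max_speed_mph": 25,          # low-speed operation
}

def within_odd(area, weather, traffic_level, speed_mph):
    return (area in ODD["allowed_areas"]
            and weather in ODD["allowed_weather"]
            and traffic_level <= ODD["max_traffic_level"]
            and speed_mph <= ODD["max_speed_mph"])

print(within_odd("downtown_core", "clear", 2, 20))      # True: inside the ODD
print(within_odd("highway_i280", "heavy_snow", 4, 65))  # False: outside it
```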

“We are now in the age of deep learning, and we don’t know what will come after.”
—Wolfram Burgard, Toyota Research Institute

That’s a sweet spot in the operational design domain, without a doubt. There’s another one at high speed on a highway, because access to highways is so limited. But unfortunately there is still the occasional debris that suddenly crosses the road, and the weather gets bad. The classic example is when somebody irresponsibly ties a mattress to the top of a car and it falls off; what are you going to do? And the answer is that terrible things happen—even for humans.

Spectrum: Learning by doing worked for the first cars, the first planes, the first steam boilers, and even the first nuclear reactors. We ran risks then; why not now?

Pratt: It has to do with the times. During the era where cars took off, all kinds of accidents happened, women died in childbirth, all sorts of diseases ran rampant; the expected characteristic of life was that bad things happened. Expectations have changed. Now the chance of dying in some freak accident is quite low because of all the learning that’s gone on, the OSHA [Occupational Safety and Health Administration] rules, UL code for electrical appliances, all the building standards, medicine.

Furthermore—and we think this is very important—we believe that empathy for a human being at the wheel is a significant factor in public acceptance when there is a crash. We don’t know this for sure—it’s a speculation on our part. I’ve driven, I’ve had close calls; that could have been me that made that mistake and had that wreck. I think people are more tolerant when somebody else makes mistakes, and there’s an awful crash. In the case of an automated car, we worry that that empathy won’t be there.

Photo: Toyota

Toyota is using this Platform 4 automated driving test vehicle, based on the Lexus LS, to develop Level-4 self-driving capabilities for its “Chauffeur” project.

Spectrum: Toyota is building a system called Guardian to back up the driver, and a more futuristic system called Chauffeur, to replace the driver. How can Chauffeur ever succeed? It has to be better than a human plus Guardian!

Pratt: In the discussions we’ve had with others in this field, we’ve talked about that a lot. What is the standard? Is it a person in a basic car? Or is it a person with a car that has active safety systems in it? And what will people think is good enough?

These systems will never be perfect—there will always be some accidents, and no matter how hard we try there will still be occasions where there will be some fatalities. At what threshold are people willing to say that’s okay?

Spectrum: You were among the first top researchers to warn against hyping self-driving technology. What did you see that so many other players did not?

Pratt: First, in my own case, during my time at DARPA I worked on robotics, not cars. So I was somewhat of an outsider. I was looking at it from a fresh perspective, and that helps a lot.

Second, [when I joined Toyota in 2015] I was joining a company that is very careful—even though we have made some giant leaps—with the Prius hybrid drive system as an example. Even so, in general, the philosophy at Toyota is kaizen—making the cars incrementally better every single day. That care meant that I was tasked with thinking very deeply about this thing before making prognostications.

And the final part: It was a new job for me. The first night after I signed the contract I felt this incredible responsibility. I couldn’t sleep that whole night, so I started to multiply out the numbers, all using a factor of 10. How many cars do we have on the road? Cars on average last 10 years, though ours last 20, but let’s call it 10. They travel on an order of 10,000 miles per year. Multiply all that out and you get 10 to the 10th miles per year for our fleet on Planet Earth, a really big number. I asked myself, if all of those cars had automated drive, how good would they have to be to tolerate the number of crashes that would still occur? And the answer was so incredibly good that I knew it would take a long time. That was five years ago.

Burgard: We are now in the age of deep learning, and we don’t know what will come after. We are still making progress with existing techniques, and they look very promising. But the gradient is not as steep as it was a few years ago.

Pratt: There isn’t anything that’s telling us that it can’t be done; I should be very clear on that. Just because we don’t know how to do it doesn’t mean it can’t be done.

Posted in Human Robots

#437224 This Week’s Awesome Tech Stories From ...

VIRTUAL REALITY
How Holographic Tech Is Shrinking VR Displays to the Size of Sunglasses
Kyle Orland | Ars Technica
“…researchers at Facebook Reality Labs are using holographic film to create a prototype VR display that looks less like ski goggles and more like lightweight sunglasses. With a total thickness less than 9mm—and without significant compromises on field of view or resolution—these displays could one day make today’s bulky VR headset designs completely obsolete.”

TRANSPORTATION
Stock Surge Makes Tesla the World’s Most Valuable Automaker
Timothy B. Lee | Ars Technica
“It’s a remarkable milestone for a company that sells far fewer cars than its leading rivals. …But Wall Street is apparently very optimistic about Tesla’s prospects for future growth and profits. Many experts expect a global shift to battery electric vehicles over the next decade or two, and Tesla is leading that revolution.”

FUTURE OF FOOD
These Plant-Based Steaks Come Out of a 3D Printer
Adele Peters | Fast Company
“The startup, launched by cofounders who met while developing digital printers at HP, created custom 3D printers that aim to replicate meat by printing layers of what they call ‘alt-muscle,’ ‘alt-fat,’ and ‘alt-blood,’ forming a complex 3D model.”

AUTOMATION
The US Air Force Is Turning Old F-16s Into AI-Powered Fighters
Amit Katwala | Wired UK
“Maverick’s days are numbered. The long-awaited sequel to Top Gun is due to hit cinemas in December, but the virtuoso fighter pilots at its heart could soon be a thing of the past. The trustworthy wingman will soon be replaced by artificial intelligence, built into a drone, or an existing fighter jet with no one in the cockpit.”

ROBOTICS
NASA Wants to Build a Steam-Powered Hopping Robot to Explore Icy Worlds
Georgina Torbet | Digital Trends
“A bouncing, ball-like robot that’s powered by steam sounds like something out of a steampunk fantasy, but it could be the ideal way to explore some of the distant, icy environments of our solar system. …This round robot would be the size of a soccer ball, with instruments held in the center of a metal cage, and it would use steam-powered thrusters to make jumps from one area of terrain to the next.”

FUTURE
Could Teleporting Ever Work?
Daniel Kolitz | Gizmodo
“Have the major airlines spent decades suppressing teleportation research? Have a number of renowned scientists in the field of teleportation studies disappeared under mysterious circumstances? Is there a cork board at the FBI linking Delta Airlines, shady foreign security firms, and dozens of murdered research professors? …No. None of that is the case. Which begs the question: why doesn’t teleportation exist yet?”

ENERGY
Nuclear ‘Power Balls’ Could Make Meltdowns a Thing of the Past
Daniel Oberhaus | Wired
“Not only will these reactors be smaller and more efficient than current nuclear power plants, but their designers claim they’ll be virtually meltdown-proof. Their secret? Millions of submillimeter-size grains of uranium individually wrapped in protective shells. It’s called triso fuel, and it’s like a radioactive gobstopper.”

TECHNOLOGY
A Plan to Redesign the Internet Could Make Apps That No One Controls
Will Douglas Heaven | MIT Technology Review
“[John Perry] Barlow’s ‘home of Mind’ is ruled today by the likes of Google, Facebook, Amazon, Alibaba, Tencent, and Baidu—a small handful of the biggest companies on earth. Yet listening to the mix of computer scientists and tech investors speak at an online event on June 30 hosted by the Dfinity Foundation…it is clear that a desire for revolution is brewing.”

IMPACT
To Save the World, the UN Is Turning It Into a Computer Simulation
Will Bedingfield | Wired
“The UN has now announced its new secret recipe to achieve [its 17 sustainable development goals or SDGs]: a computer simulation called Policy Priority Inference (PPI). …PPI is a budgeting software—it simulates a government and its bureaucrats as they allocate money on projects that might move a country closer to an SDG.”

Image credit: Benjamin Suter / Unsplash

Posted in Human Robots