Tag Archives: cases
#438762 When Robots Enter the World, Who Is ...
Over the last half decade or so, the commercialization of autonomous robots that can operate outside of structured environments has dramatically increased. But this relatively new transition of robotic technologies from research projects to commercial products comes with its share of challenges, many of which relate to the rapidly increasing visibility that these robots have in society.
Whether it's because of their appearance of agency, or because of their history in popular culture, robots frequently inspire people’s imagination. Sometimes this is a good thing, like when it leads to innovative new use cases. And sometimes this is a bad thing, like when it leads to use cases that could be classified as irresponsible or unethical. Can the people selling robots do anything about the latter? And even if they can, should they?
Roboticists understand that robots, fundamentally, are tools. We build them, we program them, and even the autonomous ones are just following the instructions that we’ve coded into them. However, that same appearance of agency that makes robots so compelling means that it may not be clear to people without much experience with or exposure to real robots that a robot itself isn’t inherently good or bad—rather, as a tool, a robot is a reflection of its designers and users.
This can put robotics companies into a difficult position. When they sell a robot to someone, that person can, hypothetically, use the robot in any way they want. Of course, this is the case with every tool, but it’s the autonomous aspect that makes robots unique. I would argue that autonomy brings with it an implied association between a robot and its maker, or in this case, the company that develops and sells it. I’m not saying that this association is necessarily a reasonable one, but I think that it exists, even if that robot has been sold to someone else who has assumed full control over everything it does.
“All of our buyers, without exception, must agree that Spot will not be used to harm or intimidate people or animals, as a weapon or configured to hold a weapon”
—Robert Playter, Boston Dynamics
Robotics companies are certainly aware of this, because many of them are very careful about who they sell their robots to, and very explicit about what they want their robots to be doing. But once a robot is out in the wild, as it were, how far should that responsibility extend? And realistically, how far can it extend? Should robotics companies be held accountable for what their robots do in the world, or should we accept that once a robot is sold to someone else, responsibility is transferred as well? And what can be done if a robot is being used in an irresponsible or unethical way that could have a negative impact on the robotics community?
For perspective on this, we contacted folks from three different robotics companies, each of which has experience selling distinctive mobile robots to commercial end users. We asked them the same five questions about the responsibility that robotics companies have regarding the robots that they sell, and here’s what they had to say:
Do you have any restrictions on what people can do with your robots? If so, what are they, and if not, why not?
Péter Fankhauser, CEO, ANYbotics:
We work closely with our customers to make sure that our solution provides the right approach for their problem. As a result, the target use case is clear from the beginning, and we do not work with customers interested in using our robot ANYmal outside the intended target applications. Specifically, we strictly exclude any military or weaponized uses; since the founding of ANYbotics, it has been close to our heart to make human work easier, safer, and more enjoyable.
Robert Playter, CEO, Boston Dynamics:
Yes, we have restrictions on what people can do with our robots, which are outlined in our Terms and Conditions of Sale. All of our buyers, without exception, must agree that Spot will not be used to harm or intimidate people or animals, as a weapon or configured to hold a weapon. Spot, just like any product, must be used in compliance with the law.
Ryan Gariepy, CTO, Clearpath Robotics:
We do have strict restrictions and KYC processes which are based primarily on Canadian export control regulations. They depend on the type of equipment sold as well as where it is going. More generally, we also will not sell or support a robot if we know that it will create an uncontrolled safety hazard or if we have reason to believe that the buyer is unqualified to use the product. And, as always, we do not support using our products for the development of fully autonomous weapons systems.
More broadly, if you sell someone a robot, why should they be restricted in what they can do with it?
Péter Fankhauser, ANYbotics: We see the robot less as a simple object and more as an artificial workforce. To us, this means that usage is closely coupled with the transfer of the robot: both the customer and the provider agree on what the robot is expected to do. This approach is supported by what we hear from our customers, who show increasing interest in paying for the robots as a service or per use.
Robert Playter, Boston Dynamics: We’re offering a product for sale. We’re going to do the best we can to stop bad actors from using our technology for harm, but we don’t have the control to regulate every use. That said, we believe that our business will be best served if our technology is used for peaceful purposes—to work alongside people as trusted assistants and remove them from harm’s way. We do not want to see our technology used to cause harm or promote violence. Our restrictions are similar to those of other manufacturers or technology companies that take steps to reduce or eliminate the violent or unlawful use of their products.
Ryan Gariepy, Clearpath Robotics: Assuming the organization doing the restricting is a private organization and the robot and its software are sold rather than leased or “managed,” there aren't strong legal reasons to restrict use. That being said, the manufacturer likewise has no obligation to continue supporting that specific robot or customer going forward. However, given that we are only at the very edge of how robots will reshape a great deal of society, it is in the best interest of both the manufacturer and the user to be honest with each other about their respective goals. Right now, you're not only investing in the initial purchase and relationship, you're investing in the promise of how you can help each other succeed in the future.
“If a robot is being used in a way that is irresponsible due to safety: intervene! If it’s unethical: speak up!”
—Péter Fankhauser, ANYbotics
What can you realistically do to make sure that people who buy your robots use them in the ways that you intend?
Péter Fankhauser, ANYbotics: We maintain a close collaboration with our customers to ensure their success with our solution. So, for our part, we have refrained from technical solutions to block unintended use.
Robert Playter, Boston Dynamics: We vet our customers to make sure that their desired applications are things that Spot can support, and are in alignment with our Terms and Conditions of Sale. We’ve turned away customers whose applications aren’t a good match with our technology. If customers misuse our technology, we’re clear in our Terms of Sale that their violations may void our warranty and prevent their robots from being updated, serviced, repaired, or replaced. We may also repossess robots that are not purchased, but leased. Finally, we will refuse future sales to customers that violate our Terms of Sale.
Ryan Gariepy, Clearpath Robotics: We typically work with our clients ahead of the purchase to make sure their expectations match reality, in particular on aspects like safety, supervisory requirements, and usability. It's far worse to sell a robot that'll sit on a shelf or, worse, cause harm, than to not sell a robot at all, so we prefer to reduce the risk of this situation in advance of receiving an order or shipping a robot.
How do you evaluate the merit of edge cases, for example if someone wants to use your robot in research or art that may push the boundaries of what you personally think is responsible or ethical?
Péter Fankhauser, ANYbotics: It’s about the dialog, understanding, and figuring out alternatives that work for all involved parties, and the earlier you can have this dialog, the better.
Robert Playter, Boston Dynamics: There’s a clear line between exploring robots in research and art, and using the robot for violent or illegal purposes.
Ryan Gariepy, Clearpath Robotics: We have sold thousands of robots to hundreds of clients, and I do not recall the last situation that was not covered by a combination of export control and a general evaluation of the client's goals and expectations. I'm sure this will change as robots continue to drop in price and increase in flexibility and usability.
“You're not only investing in the initial purchase and relationship, you're investing in the promise of how you can help each other succeed in the future.”
—Ryan Gariepy, Clearpath Robotics
What should roboticists do if we see a robot being used in a way that we feel is unethical or irresponsible?
Péter Fankhauser, ANYbotics: If it’s irresponsible due to safety: intervene! If it’s unethical: speak up!
Robert Playter, Boston Dynamics: We want robots to be beneficial for humanity, which includes the notion of not causing harm. As an industry, we think robots will achieve long-term commercial viability only if people see robots as helpful, beneficial tools without worrying if they’re going to cause harm.
Ryan Gariepy, Clearpath Robotics: On a one-off basis, they should speak to a combination of the user, the supplier or suppliers, the media, and, if safety is an immediate concern, regulatory or government agencies. If the situation in question risks becoming commonplace and is not being taken seriously, they should speak up more generally in appropriate forums—conferences, industry groups, standards bodies, and the like.
As more and more robots with different capabilities become commercially available, these issues are likely to come up more frequently. The three companies we talked to certainly don’t represent every viewpoint, and we did reach out to other companies who declined to comment. But I would think (I would hope?) that everyone in the robotics community can agree that robots should be used in a way that makes people’s lives better. What “better” means in the context of art and research and even robots in the military may not always be easy to define, and inevitably there’ll be disagreement as to what is ethical and responsible, and what isn’t.
We’ll keep on talking about it, though, and do our best to help the robotics community continue growing and evolving in a positive way. Let us know what you think in the comments.
#437809 Q&A: The Masterminds Behind ...
Getting a car to drive itself is undoubtedly the most ambitious commercial application of artificial intelligence (AI). The research effort was kicked into life by the 2004 DARPA Grand Challenge and then taken up as a business proposition, first by Google (now Alphabet), and later by the big automakers.
The industry-wide effort vacuumed up many of the world’s best roboticists and set rival companies on a multibillion-dollar acquisitions spree. It also launched a cycle of hype that paraded ever more ambitious deadlines—the most famous of which, made by Google cofounder Sergey Brin in 2012, was that full self-driving technology would be ready by 2017. Those deadlines have all been missed.
Much of the exhilaration was inspired by the seeming miracles that a new kind of AI—deep learning—was achieving in playing games, recognizing faces, and transcribing speech. Deep learning excels at tasks involving pattern recognition—a particular challenge for older, rule-based AI techniques. However, it now seems that deep learning will not soon master the other intellectual challenges of driving, such as anticipating what human beings might do.
Among the roboticists who have been involved from the start are Gill Pratt, the chief executive officer of Toyota Research Institute (TRI), formerly a program manager at the Defense Advanced Research Projects Agency (DARPA); and Wolfram Burgard, vice president of automated driving technology for TRI and president of the IEEE Robotics and Automation Society. The duo spoke with IEEE Spectrum’s Philip Ross at TRI’s offices in Palo Alto, Calif.
This interview has been condensed and edited for clarity.
IEEE Spectrum: How does AI handle the various parts of the self-driving problem?
Gill Pratt: There are three different systems that you need in a self-driving car: It starts with perception, then goes to prediction, and then goes to planning.
The one that by far is the most problematic is prediction. It’s not prediction of other automated cars, because if all cars were automated, this problem would be much simpler. How do you predict what a human being is going to do? That’s difficult for deep learning to learn right now.
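To make that three-stage structure concrete, here is a minimal, illustrative Python sketch of one tick of such a loop. The types, the constant-velocity predictor, and the braking rule are invented for illustration; they are not Toyota's or anyone's production stack.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class TrackedObject:
    object_id: int
    position: Tuple[float, float]   # (x, y) in meters, ego frame
    velocity: Tuple[float, float]   # (vx, vy) in m/s
    kind: str                       # e.g. "vehicle", "pedestrian"

def perceive(sensor_frames) -> List[TrackedObject]:
    """Perception: fuse camera/lidar/radar into tracked objects.
    Stubbed here with a single hard-coded pedestrian."""
    return [TrackedObject(1, (12.0, 2.0), (-0.5, -1.0), "pedestrian")]

def predict(objects: List[TrackedObject], horizon_s: float = 3.0,
            dt: float = 0.5) -> Dict[int, List[Tuple[float, float]]]:
    """Prediction: the part Pratt calls the most problematic. This toy
    version extrapolates at constant velocity, which is exactly the kind
    of assumption that breaks down for human behavior."""
    futures = {}
    for obj in objects:
        steps = int(horizon_s / dt)
        futures[obj.object_id] = [
            (obj.position[0] + obj.velocity[0] * dt * k,
             obj.position[1] + obj.velocity[1] * dt * k)
            for k in range(1, steps + 1)
        ]
    return futures

def plan(futures: Dict[int, List[Tuple[float, float]]],
         lane_half_width: float = 1.5) -> str:
    """Planning: a crude policy -- brake if any predicted position enters
    the ego lane ahead of the car, otherwise keep cruising."""
    for waypoints in futures.values():
        if any(x > 0.0 and abs(y) < lane_half_width for x, y in waypoints):
            return "brake"
    return "cruise"

if __name__ == "__main__":
    objects = perceive(sensor_frames=None)   # one tick of the pipeline
    print(plan(predict(objects)))            # -> "brake"
```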
Spectrum: Can you offset the weakness in prediction with stupendous perception?
Wolfram Burgard: Yes, that is what car companies basically do. A camera provides semantics, lidar provides distance, radar provides velocities. But all this comes with problems, because sometimes you look at the world from different positions—that’s called parallax. Sometimes you don’t know which range estimate that pixel belongs to. That might make the decision complicated as to whether that is a person painted onto the side of a truck or whether this is an actual person.
With deep learning there is this promise that if you throw enough data at these networks, it’s going to work—finally. But it turns out that the amount of data that you need for self-driving cars is far larger than we expected.
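A toy illustration of the association problem Burgard is pointing at: projecting a single lidar return into a camera image to decide which pixel, and hence which semantic label, it should be matched with. The camera intrinsics and the lidar-to-camera offset below are made-up numbers, not values from any real vehicle.

```python
import numpy as np

# Made-up pinhole intrinsics for a 1280x720 camera.
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])

# Made-up extrinsics: a small offset between the lidar and camera origins,
# so the two sensors view the scene from slightly different positions (parallax).
R_cam_from_lidar = np.eye(3)
t_cam_from_lidar = np.array([0.0, -0.2, 0.5])

def project_lidar_point(p_lidar: np.ndarray):
    """Project one lidar return (camera convention: x right, y down,
    z forward, meters) into pixel coordinates, returning (u, v, depth)."""
    p_cam = R_cam_from_lidar @ p_lidar + t_cam_from_lidar
    if p_cam[2] <= 0:                      # behind the camera plane
        return None
    uvw = K @ p_cam
    return uvw[0] / uvw[2], uvw[1] / uvw[2], p_cam[2]

# A return about 15 m ahead: we can now ask whether the pixel it lands on was
# classified as "person" or "side of a truck" by the camera network. Because
# of parallax and calibration error, returns at very different ranges can land
# on nearly the same pixel, which is the ambiguity Burgard describes.
print(project_lidar_point(np.array([0.5, 0.0, 15.0])))
```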
Spectrum: When do deep learning’s limitations become apparent?
Pratt: The way to think about deep learning is that it’s really high-performance pattern matching. You have input and output as training pairs; you say this image should lead to that result; and you just do that again and again, hundreds of thousands or millions of times.
Here’s the logical fallacy that I think most people have fallen prey to with deep learning. A lot of what we do with our brains can be thought of as pattern matching: “Oh, I see this stop sign, so I should stop.” But it doesn’t mean all of intelligence can be done through pattern matching.
“I asked myself, if all of those cars had automated drive, how good would they have to be to tolerate the number of crashes that would still occur?”
—Gill Pratt, Toyota Research Institute
For instance, when I’m driving and I see a mother holding the hand of a child on a corner and trying to cross the street, I am pretty sure she’s not going to cross at a red light and jaywalk. I know from my experience being a human being that mothers and children don’t act that way. On the other hand, say there are two teenagers—with blue hair, skateboards, and a disaffected look. Are they going to jaywalk? I look at that, you look at that, and instantly the probability in your mind that they’ll jaywalk is much higher than for the mother holding the hand of the child. It’s not that you’ve seen 100,000 cases of young kids—it’s that you understand what it is to be either a teenager or a mother holding a child’s hand.
You can try to fake that kind of intelligence. If you specifically train a neural network on data like that, you could pattern-match that. But you’d have to know to do it.
Spectrum: So you’re saying that when you substitute pattern recognition for reasoning, the marginal return on the investment falls off pretty fast?
Pratt: That’s absolutely right. Unfortunately, we don’t have the ability to make an AI that thinks yet, so we don’t know what to do. We keep trying to use the deep-learning hammer to hammer more nails—we say, well, let’s just pour more data in, and more data.
Spectrum: Couldn’t you train the deep-learning system to recognize teenagers and to assign the category a high propensity for jaywalking?
Burgard: People have been doing that. But it turns out that these heuristics you come up with are extremely hard to tweak. Also, sometimes the heuristics are contradictory, which makes it extremely hard to design these expert systems based on rules. This is where the strength of the deep-learning methods lies, because somehow they encode a way to see a pattern where, for example, here’s a feature and over there is another feature; it’s about the sheer number of parameters you have available.
Our separation of the components of a self-driving AI eases the development and even the learning of the AI systems. Some companies even think about using deep learning to do the job fully, from end to end, not having any structure at all—basically, directly mapping perceptions to actions.
Pratt: There are companies that have tried it; Nvidia certainly tried it. In general, it’s been found not to work very well. So people divide the problem into blocks, where we understand what each block does, and we try to make each block work well. Some of the blocks end up more like the expert system we talked about, where we actually code things, and other blocks end up more like machine learning.
Spectrum: So, what’s next—what new technique is in the offing?
Pratt: If I knew the answer, we’d do it. [Laughter]
Spectrum: You said that if all cars on the road were automated, the problem would be easy. Why not “geofence” the heck out of the self-driving problem, and have areas where only self-driving cars are allowed?
Pratt: That means putting in constraints on the operational design domain. This includes the geography—where the car should be automated; it includes the weather, it includes the level of traffic, it includes speed. If the car is going slowly enough to avoid colliding without risking a rear-end collision, that makes the problem much easier. Street trolleys still operate in mixed traffic in some parts of the world, and that seems to work out just fine. People learn that this vehicle may stop at unexpected times. My suspicion is that that is where we’ll see Level 4 autonomy in cities. It’s going to be at the lower speeds.
“We are now in the age of deep learning, and we don’t know what will come after.”
—Wolfram Burgard, Toyota Research Institute
That’s a sweet spot in the operational design domain, without a doubt. There’s another one at high speed on a highway, because access to highways is so limited. But unfortunately there is still the occasional debris that suddenly crosses the road, and the weather gets bad. The classic example is when somebody irresponsibly ties a mattress to the top of a car and it falls off; what are you going to do? And the answer is that terrible things happen—even for humans.
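In practice, an operational design domain like the one Pratt describes is often written down as an explicit set of conditions the system checks before it is allowed to engage. Here is a hedged sketch of what such a configuration might look like; the field names and thresholds are illustrative assumptions, not Toyota's.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class OperationalDesignDomain:
    """Illustrative ODD limits: where, when, and how fast automation may engage."""
    allowed_regions: tuple = ("downtown_geofence", "campus_loop")
    max_speed_kph: float = 40.0            # the low-speed urban niche Pratt mentions
    min_visibility_m: float = 150.0
    allowed_weather: tuple = ("clear", "light_rain")

def may_engage(odd: OperationalDesignDomain, region: str, speed_kph: float,
               visibility_m: float, weather: str) -> bool:
    # Engage automated driving only when every ODD condition holds.
    return (region in odd.allowed_regions
            and speed_kph <= odd.max_speed_kph
            and visibility_m >= odd.min_visibility_m
            and weather in odd.allowed_weather)

odd = OperationalDesignDomain()
print(may_engage(odd, "downtown_geofence", 35.0, 400.0, "clear"))   # True
print(may_engage(odd, "highway_i280", 100.0, 400.0, "clear"))       # False
```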
Spectrum: Learning by doing worked for the first cars, the first planes, the first steam boilers, and even the first nuclear reactors. We ran risks then; why not now?
Pratt: It has to do with the times. During the era when cars took off, all kinds of accidents happened, women died in childbirth, all sorts of diseases ran rampant; the expected characteristic of life was that bad things happened. Expectations have changed. Now the chance of dying in some freak accident is quite low because of all the learning that’s gone on, the OSHA [Occupational Safety and Health Administration] rules, UL code for electrical appliances, all the building standards, medicine.
Furthermore—and we think this is very important—we believe that empathy for a human being at the wheel is a significant factor in public acceptance when there is a crash. We don’t know this for sure—it’s a speculation on our part. I’ve driven, I’ve had close calls; that could have been me that made that mistake and had that wreck. I think people are more tolerant when somebody else makes mistakes, and there’s an awful crash. In the case of an automated car, we worry that that empathy won’t be there.
Photo: Toyota. Toyota is using this Platform 4 automated driving test vehicle, based on the Lexus LS, to develop Level 4 self-driving capabilities for its “Chauffeur” project.
Spectrum: Toyota is building a system called Guardian to back up the driver, and a more futuristic system called Chauffeur, to replace the driver. How can Chauffeur ever succeed? It has to be better than a human plus Guardian!
Pratt: In the discussions we’ve had with others in this field, we’ve talked about that a lot. What is the standard? Is it a person in a basic car? Or is it a person with a car that has active safety systems in it? And what will people think is good enough?
These systems will never be perfect—there will always be some accidents, and no matter how hard we try there will still be occasions where there will be some fatalities. At what threshold are people willing to say that’s okay?
Spectrum: You were among the first top researchers to warn against hyping self-driving technology. What did you see that so many other players did not?
Pratt: First, in my own case, during my time at DARPA I worked on robotics, not cars. So I was somewhat of an outsider. I was looking at it from a fresh perspective, and that helps a lot.
Second, [when I joined Toyota in 2015] I was joining a company that is very careful—even though we have made some giant leaps—with the Prius hybrid drive system as an example. Even so, in general, the philosophy at Toyota is kaizen—making the cars incrementally better every single day. That care meant that I was tasked with thinking very deeply about this thing before making prognostications.
And the final part: It was a new job for me. The first night after I signed the contract, I felt this incredible responsibility. I couldn’t sleep that whole night, so I started to multiply out the numbers, all using a factor of 10. How many cars do we have on the road? Cars on average last 10 years, though ours last 20, but let’s call it 10. They travel on the order of 10,000 miles per year. Multiply all that out and you get 10 to the 10th miles per year for our fleet on Planet Earth, a really big number. I asked myself, if all of those cars had automated drive, how good would they have to be to tolerate the number of crashes that would still occur? And the answer was so incredibly good that I knew it would take a long time. That was five years ago.
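For readers who want to follow the shape of that overnight estimate, here is a back-of-the-envelope version in code. The fleet size and crash-tolerance figures are placeholder assumptions chosen only to show the structure of the calculation; they are not numbers from the interview.

```python
# Order-of-magnitude sketch of the kind of estimate Pratt describes (all inputs
# rounded to powers of ten; the specific values are assumptions for illustration).
cars_in_fleet        = 1e8    # assumed number of cars on the road
miles_per_car_year   = 1e4    # roughly 10,000 miles per car per year
fleet_miles_per_year = cars_in_fleet * miles_per_car_year

# If society tolerated on the order of 1e4 serious crashes per year from such a
# fleet, the automated driver would need a per-mile crash rate no worse than:
tolerated_crashes_per_year = 1e4   # assumption for illustration only
required_rate = tolerated_crashes_per_year / fleet_miles_per_year

print(f"fleet miles per year: {fleet_miles_per_year:.0e}")
print(f"required crash rate:  {required_rate:.0e} per mile "
      f"(one crash per {1 / required_rate:.0e} miles)")
```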
Burgard: We are now in the age of deep learning, and we don’t know what will come after. We are still making progress with existing techniques, and they look very promising. But the gradient is not as steep as it was a few years ago.
Pratt: There isn’t anything that’s telling us that it can’t be done; I should be very clear on that. Just because we don’t know how to do it doesn’t mean it can’t be done.
#437807 Why We Need Robot Sloths
An inherent characteristic of a robot (I would argue) is embodied motion. We tend to focus on motion rather a lot with robots, and the most dynamic robots get the most attention. This isn’t to say that highly dynamic robots don’t deserve our attention, but there are other robotic philosophies that, while perhaps less visually exciting, are equally valuable under the right circumstances. Magnus Egerstedt, a robotics professor at Georgia Tech, was inspired by some sloths he met in Costa Rica to explore the idea of “slowness as a design paradigm” through an arboreal robot called SlothBot.
Since the robot moves so slowly, why use a robot at all? It may be very energy-efficient, but it’s definitely not more energy-efficient than a static sensing system that’s just bolted to a tree or whatever. The robot moves, of course, but it’s also going to be much more expensive (and likely much less reliable) than a handful of static sensors that could cover a similar area. The problem with static sensors, though, is that they’re constrained by power availability, and in environments like under a dense tree canopy, you’re not going to be able to augment their lifetime with solar panels. If your goal is a long-duration study of a small area (over weeks or months or more), SlothBot is uniquely useful because it can crawl out from beneath a tree to find some sun to recharge itself, sunbathe for a while, and then crawl right back again to resume collecting data.
SlothBot is such an interesting concept that we had to check in with Egerstedt with a few more questions.
IEEE Spectrum: Tell us what you find so amazing about sloths!
Magnus Egerstedt: Apart from being kind of cute, the amazing thing about sloths is that they have carved out a successful ecological niche for themselves where being slow is not only acceptable but actually beneficial. Despite their pretty extreme low-energy lifestyle, they exhibit a number of interesting and sometimes outright strange behaviors. And, behaviors having to do with territoriality, foraging, or mating look rather different when you are that slow.
Are you leveraging the slothiness of the design for this robot somehow?
Sadly, the sloth design serves no technical purpose. But we are also viewing the SlothBot as an outreach platform to get kids excited about robotics and/or conservation biology. And having the robot look like a sloth certainly cannot hurt.
“Slowness is ideal for use cases that require a long-term, persistent presence in an environment, like for monitoring tasks. I can imagine slow robots being out on farm fields for entire growing cycles, or suspended on the ocean floor keeping track of pollutants or temperature variations.”
—Magnus Egerstedt, Georgia Tech
Can you talk more about slowness as a design paradigm?
The SlothBot is part of a broader design philosophy that I have started calling “Robot Ecology.” In ecology, the connections between individuals and their environments/habitats play a central role. And the same should hold true in robotics. The robot design must be understood in the environmental context in which it is to be deployed. And, if your task is to be present in a slowly varying environment over a long time scale, being slow seems like the right way to go. Slowness is ideal for use cases that require a long-term, persistent presence in an environment, like for monitoring tasks, where the environment itself is slowly varying. I can imagine slow robots being out on farm fields for entire growing cycles, or suspended on the ocean floor keeping track of pollutants or temperature variations.
How do sloths inspire SlothBot’s functionality?
Its motions are governed by what we call survival constraints. These constraints ensure that the SlothBot is always able to get to a sunny spot to recharge. The actual performance objective that we have given to the robot is to minimize energy consumption, i.e., to simply do nothing subject to the survival constraints. The majority of the time, the robot simply sits there under the trees, measuring various things, seemingly doing absolutely nothing and being rather sloth-like. Whenever the SlothBot does move, it does not move according to some fixed schedule. Instead, it moves because it has to in order to “survive.”
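A toy sketch of what a "move only to survive" policy can look like in code. The energy model, thresholds, and decision rule below are invented for illustration and are not the actual SlothBot controller; in the research literature, survival constraints of this kind are often encoded more formally, for example as control barrier functions.

```python
# Toy "survival constraint" policy: stay put (minimize energy) unless the
# battery margin needed to reach the nearest sunny spot is about to be violated.
# All numbers and the linear energy model are invented for illustration.

IDLE_DRAW_WH_PER_HR = 0.2      # sensing and electronics while parked
MOVE_COST_WH_PER_M  = 1.0      # energy to crawl one meter along the wire
SAFETY_MARGIN_WH    = 5.0      # reserve to keep in the battery at all times

def energy_to_reach_sun(distance_to_sun_m: float) -> float:
    return distance_to_sun_m * MOVE_COST_WH_PER_M

def control_step(battery_wh: float, distance_to_sun_m: float,
                 hours_until_next_check: float = 1.0) -> str:
    """Return 'hold' (do nothing, keep sensing) or 'seek_sun' (crawl to recharge)."""
    # Energy expected to be spent before the next decision, plus the cost of
    # escaping to sunlight afterwards, must stay below what is in the battery.
    needed = (IDLE_DRAW_WH_PER_HR * hours_until_next_check
              + energy_to_reach_sun(distance_to_sun_m)
              + SAFETY_MARGIN_WH)
    return "hold" if battery_wh > needed else "seek_sun"

print(control_step(battery_wh=40.0, distance_to_sun_m=20.0))  # plenty left -> "hold"
print(control_step(battery_wh=24.0, distance_to_sun_m=20.0))  # tight -> "seek_sun"
```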
How would you like to improve SlothBot?
I have a few directions I would like to take the SlothBot. One is to make the sensor suites richer to make sure that it can become a versatile and useful science instrument. Another direction involves miniaturization – I would love to see a bunch of small SlothBots “living” among the trees somewhere in a rainforest for years, providing real-time data as to what is happening to the ecosystem.