#437809 Q&A: The Masterminds Behind ...
Illustration: iStockphoto
Getting a car to drive itself is undoubtedly the most ambitious commercial application of artificial intelligence (AI). The research project was kicked into life by the 2007 DARPA Urban Challenge and then taken up as a business proposition, first by Alphabet, and later by the big automakers.
The industry-wide effort vacuumed up many of the world’s best roboticists and set rival companies on a multibillion-dollar acquisitions spree. It also launched a cycle of hype that paraded ever more ambitious deadlines—the most famous of which, made by Google’s Sergey Brin in 2012, was that full self-driving technology would be ready by 2017. Those deadlines have all been missed.
Much of the exhilaration was inspired by the seeming miracles that a new kind of AI—deep learning—was achieving in playing games, recognizing faces, and transcribing speech. Deep learning excels at tasks involving pattern recognition—a particular challenge for older, rule-based AI techniques. However, it now seems that deep learning will not soon master the other intellectual challenges of driving, such as anticipating what human beings might do.
Among the roboticists who have been involved from the start are Gill Pratt, the chief executive officer of Toyota Research Institute (TRI), formerly a program manager at the Defense Advanced Research Projects Agency (DARPA); and Wolfram Burgard, vice president of automated driving technology for TRI and president of the IEEE Robotics and Automation Society. The duo spoke with IEEE Spectrum’s Philip Ross at TRI’s offices in Palo Alto, Calif.
This interview has been condensed and edited for clarity.
IEEE Spectrum: How does AI handle the various parts of the self-driving problem?
Photo: Toyota
Gill Pratt
Gill Pratt: There are three different systems that you need in a self-driving car: It starts with perception, then goes to prediction, and then goes to planning.
The one that is by far the most problematic is prediction. It’s not prediction of other automated cars, because if all cars were automated, this problem would be much simpler. How do you predict what a human being is going to do? That’s difficult for deep learning to learn right now.
Spectrum: Can you offset the weakness in prediction with stupendous perception?
Photo: Toyota Research Institute
Wolfram Burgard
Wolfram Burgard: Yes, that is what car companies basically do. A camera provides semantics, lidar provides distance, radar provides velocities. But all this comes with problems, because sometimes you look at the world from different positions—that’s called parallax. Sometimes you don’t know which range estimate that pixel belongs to. That can make it hard to decide whether that is a person painted onto the side of a truck or an actual person.
With deep learning there is this promise that if you throw enough data at these networks, it’s going to work—finally. But it turns out that the amount of data that you need for self-driving cars is far larger than we expected.
Spectrum: When do deep learning’s limitations become apparent?
Pratt: The way to think about deep learning is that it’s really high-performance pattern matching. You have input and output as training pairs; you say this image should lead to that result; and you just do that again and again, for hundreds of thousands, millions of times.
Here’s the logical fallacy that I think most people have fallen prey to with deep learning. A lot of what we do with our brains can be thought of as pattern matching: “Oh, I see this stop sign, so I should stop.” But it doesn’t mean all of intelligence can be done through pattern matching.
“I asked myself, if all of those cars had automated drive, how good would they have to be to tolerate the number of crashes that would still occur?”
—Gill Pratt, Toyota Research Institute
For instance, when I’m driving and I see a mother holding the hand of a child on a corner and trying to cross the street, I am pretty sure she’s not going to cross at a red light and jaywalk. I know from my experience being a human being that mothers and children don’t act that way. On the other hand, say there are two teenagers—with blue hair, skateboards, and a disaffected look. Are they going to jaywalk? I look at that, you look at that, and instantly the probability in your mind that they’ll jaywalk is much higher than for the mother holding the hand of the child. It’s not that you’ve seen 100,000 cases of young kids—it’s that you understand what it is to be either a teenager or a mother holding a child’s hand.
You can try to fake that kind of intelligence. If you specifically train a neural network on data like that, you could pattern-match that. But you’d have to know to do it.
Spectrum: So you’re saying that when you substitute pattern recognition for reasoning, the marginal return on the investment falls off pretty fast?
Pratt: That’s absolutely right. Unfortunately, we don’t have the ability to make an AI that thinks yet, so we don’t know what to do. We keep trying to use the deep-learning hammer to hammer more nails—we say, well, let’s just pour more data in, and more data.
Spectrum: Couldn’t you train the deep-learning system to recognize teenagers and to assign the category a high propensity for jaywalking?
Burgard: People have been doing that. But it turns out that these heuristics you come up with are extremely hard to tweak. Also, sometimes the heuristics are contradictory, which makes it extremely hard to design these expert systems based on rules. This is where the strength of the deep-learning methods lies, because somehow they encode a way to see a pattern where, for example, here’s a feature and over there is another feature; it’s about the sheer number of parameters you have available.
Our separation of the components of a self-driving AI eases the development and even the learning of the AI systems. Some companies even think about using deep learning to do the job fully, from end to end, not having any structure at all—basically, directly mapping perceptions to actions.
Pratt: There are companies that have tried it; Nvidia certainly tried it. In general, it’s been found not to work very well. So people divide the problem into blocks, where we understand what each block does, and we try to make each block work well. Some of the blocks end up more like the expert system we talked about, where we actually code things, and other blocks end up more like machine learning.
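To make that decomposition concrete, here is a minimal Python sketch of the perception, prediction, and planning blocks Pratt describes. It is purely illustrative; every class, function, and threshold is a hypothetical stand-in, not any company’s actual stack.

```python
# A minimal, hypothetical sketch of the modular pipeline Pratt describes:
# perception -> prediction -> planning. Names, data shapes, and logic are
# illustrative stand-ins, not a production self-driving stack.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class TrackedObject:
    kind: str                       # e.g. "pedestrian", "vehicle"
    position: Tuple[float, float]   # (x, y) in meters, ego frame
    velocity: Tuple[float, float]   # (vx, vy) in m/s

@dataclass
class PredictedPath:
    obj: TrackedObject
    future: List[Tuple[float, float]]   # positions over the planning horizon
    confidence: float                   # how sure the predictor is

def perceive(sensor_frame) -> List[TrackedObject]:
    # Stand-in for camera/lidar/radar fusion.
    return [TrackedObject("pedestrian", (12.0, 3.0), (-1.2, 0.0))]

def predict(objects: List[TrackedObject], horizon_s=3.0, dt=0.5) -> List[PredictedPath]:
    # The hard part. A constant-velocity guess is fine for cars following lanes,
    # but says nothing about whether a teenager will jaywalk.
    paths = []
    for o in objects:
        steps = int(horizon_s / dt)
        future = [(o.position[0] + o.velocity[0] * dt * k,
                   o.position[1] + o.velocity[1] * dt * k) for k in range(1, steps + 1)]
        paths.append(PredictedPath(o, future, confidence=0.5))
    return paths

def plan(ego_speed_mps: float, predictions: List[PredictedPath]) -> str:
    # Stand-in planner: slow down if any low-confidence agent may enter our path.
    risky = any(p.confidence < 0.8 and min(x for x, _ in p.future) < 10.0
                for p in predictions)
    return "slow_down" if risky else "keep_speed"

print(plan(10.0, predict(perceive(sensor_frame=None))))
```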
Spectrum: So, what’s next—what new technique is in the offing?
Pratt: If I knew the answer, we’d do it. [Laughter]
Spectrum: You said that if all cars on the road were automated, the problem would be easy. Why not “geofence” the heck out of the self-driving problem, and have areas where only self-driving cars are allowed?
Pratt: That means putting in constraints on the operational design domain. This includes the geography—where the car should be automated; it includes the weather, it includes the level of traffic, it includes speed. If the car is going slow enough to avoid colliding without risking a rear-end collision, that makes the problem much easier. Street trolleys still operate in mixed traffic in some parts of the world, and that seems to work out just fine. People learn that this vehicle may stop at unexpected times. My suspicion is, that is where we’ll see Level 4 autonomy in cities. It’s going to be in the lower speeds.
“We are now in the age of deep learning, and we don’t know what will come after.”
—Wolfram Burgard, Toyota Research Institute
That’s a sweet spot in the operational design domain, without a doubt. There’s another one at high speed on a highway, because access to highways is so limited. But unfortunately there is still the occasional debris that suddenly crosses the road, and the weather gets bad. The classic example is when somebody irresponsibly ties a mattress to the top of a car and it falls off; what are you going to do? And the answer is that terrible things happen—even for humans.
Spectrum: Learning by doing worked for the first cars, the first planes, the first steam boilers, and even the first nuclear reactors. We ran risks then; why not now?
Pratt: It has to do with the times. During the era where cars took off, all kinds of accidents happened, women died in childbirth, all sorts of diseases ran rampant; the expected characteristic of life was that bad things happened. Expectations have changed. Now the chance of dying in some freak accident is quite low because of all the learning that’s gone on, the OSHA [Occupational Safety and Health Administration] rules, UL code for electrical appliances, all the building standards, medicine.
Furthermore—and we think this is very important—we believe that empathy for a human being at the wheel is a significant factor in public acceptance when there is a crash. We don’t know this for sure—it’s a speculation on our part. I’ve driven, I’ve had close calls; that could have been me that made that mistake and had that wreck. I think people are more tolerant when somebody else makes mistakes, and there’s an awful crash. In the case of an automated car, we worry that that empathy won’t be there.
Photo: Toyota
Toyota is using this Platform 4 automated driving test vehicle, based on the Lexus LS, to develop Level 4 self-driving capabilities for its “Chauffeur” project.
Spectrum: Toyota is building a system called Guardian to back up the driver, and a more futuristic system called Chauffeur, to replace the driver. How can Chauffeur ever succeed? It has to be better than a human plus Guardian!
Pratt: In the discussions we’ve had with others in this field, we’ve talked about that a lot. What is the standard? Is it a person in a basic car? Or is it a person with a car that has active safety systems in it? And what will people think is good enough?
These systems will never be perfect—there will always be some accidents, and no matter how hard we try there will still be occasions where there will be some fatalities. At what threshold are people willing to say that’s okay?
Spectrum: You were among the first top researchers to warn against hyping self-driving technology. What did you see that so many other players did not?
Pratt: First, in my own case, during my time at DARPA I worked on robotics, not cars. So I was somewhat of an outsider. I was looking at it from a fresh perspective, and that helps a lot.
Second, [when I joined Toyota in 2015] I was joining a company that is very careful—even though we have made some giant leaps—with the Prius hybrid drive system as an example. Even so, in general, the philosophy at Toyota is kaizen—making the cars incrementally better every single day. That care meant that I was tasked with thinking very deeply about this thing before making prognostications.
And the final part: It was a new job for me. The first night after I signed the contract I felt this incredible responsibility. I couldn’t sleep that whole night, so I started to multiply out the numbers, all using a factor of 10. How many cars do we have on the road? Cars on average last 10 years, though ours last 20, but let’s call it 10. They travel on an order of 10,000 miles per year. Multiply all that out and you get 10 to the 10th miles per year for our fleet on Planet Earth, a really big number. I asked myself, if all of those cars had automated drive, how good would they have to be to tolerate the number of crashes that would still occur? And the answer was so incredibly good that I knew it would take a long time. That was five years ago.
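As a rough re-creation of that back-of-the-envelope estimate, here is a small Python sketch. The fleet mileage is Pratt’s figure from the interview; the human-driver fatality rate is our assumed ballpark (roughly one fatality per 100 million miles, in line with U.S. averages) and is there only to illustrate the shape of the calculation.

```python
# Order-of-magnitude estimate in the spirit of Pratt's late-night arithmetic.
# fleet_miles_per_year comes from the interview; the fatality rate is an
# assumed ballpark for human drivers, used purely for illustration.
fleet_miles_per_year = 1e10      # Pratt's "10 to the 10th" miles per year
human_fatality_rate = 1e-8       # assumed: ~1 fatality per 100 million miles

expected_fatalities = fleet_miles_per_year * human_fatality_rate
print(f"Human-level driving: ~{expected_fatalities:.0f} fatalities/year")

# For an automated fleet to be tolerated, its rate likely has to be far lower.
for improvement in (2, 10, 100):
    required = human_fatality_rate / improvement
    print(f"{improvement}x better than humans -> {required:.1e} fatalities/mile")
```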
Burgard: We are now in the age of deep learning, and we don’t know what will come after. We are still making progress with existing techniques, and they look very promising. But the gradient is not as steep as it was a few years ago.
Pratt: There isn’t anything that’s telling us that it can’t be done; I should be very clear on that. Just because we don’t know how to do it doesn’t mean it can’t be done.
#437805 Video Friday: Quadruped Robot HyQ ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
RSS 2020 – July 12-16, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
IROS 2020 – October 25-29, 2020 – Las Vegas, Nevada
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today’s videos.
Four-legged HyQ balancing on two legs. Nice results from the team at IIT’s Dynamic Legged Systems Lab. And we can’t wait to see the “ninja walk,” currently shown in simulation, implemented with the real robot!
The development of balance controllers for legged robots with point feet remains a challenge when they have to traverse extremely constrained environments. We present a balance controller that has the potential to achieve line walking for quadruped robots. Our initial experiments show the 90-kg robot HyQ balancing on two feet and recovering from external pushes, as well as some changes in posture achieved without losing balance.
[ IIT ]
Thanks Victor!
Ava Robotics’ telepresence robot has been beheaded by MIT, and it now sports a coronavirus-destroying UV array.
UV-C light has proven to be effective at killing viruses and bacteria on surfaces and aerosols, but it’s unsafe for humans to be exposed. Fortunately, Ava’s telepresence robot doesn’t require any human supervision. Instead of the telepresence top, the team subbed in a UV-C array for disinfecting surfaces. Specifically, the array uses short-wavelength ultraviolet light to kill microorganisms and disrupt their DNA in a process called ultraviolet germicidal irradiation. The complete robot system is capable of mapping the space — in this case, GBFB’s warehouse — and navigating between waypoints and other specified areas. In testing the system, the team used a UV-C dosimeter, which confirmed that the robot was delivering the expected dosage of UV-C light predicted by the model.
[ MIT ]
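For a sense of the kind of dosage model the dosimeter is checking, here is a back-of-envelope Python sketch, not MIT’s actual model: treat the UV-C array as a point source, apply the inverse-square law, and multiply irradiance by dwell time. All numbers are illustrative assumptions.

```python
# Back-of-envelope dose estimate (not MIT's model): treat the UV-C array as a
# point source, apply the inverse-square law, and multiply irradiance by dwell
# time. All numbers are illustrative assumptions.
import math

def uvc_dose_mj_per_cm2(lamp_power_w, distance_m, dwell_time_s):
    """Approximate UV-C dose delivered to a surface at a given distance."""
    sphere_area_cm2 = 4 * math.pi * (distance_m * 100) ** 2
    irradiance_mw_cm2 = (lamp_power_w * 1000) / sphere_area_cm2
    return irradiance_mw_cm2 * dwell_time_s   # mW*s/cm^2 equals mJ/cm^2

# Example: an assumed 30 W of UV-C output, a surface 1 m away, 60 s of dwell.
print(uvc_dose_mj_per_cm2(lamp_power_w=30, distance_m=1.0, dwell_time_s=60))
```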
While it’s hard enough to get quadrupedal robots to walk in complex environments, this work from the Robotic Systems Lab at ETH Zurich shows some impressive whole body planning that allows ANYmal to squeeze its body through small or weirdly shaped spaces.
[ RSL ]
Engineering researchers at North Carolina State University and Temple University have developed soft robots inspired by jellyfish that can outswim their real-life counterparts. More practically, the new jellyfish-bots highlight a technique that uses pre-stressed polymers to make soft robots more powerful.
The researchers also used the technique to make a fast-moving robot that resembles a larval insect curling its body, then jumping forward as it quickly releases its stored energy. Lastly, the researchers created a three-pronged gripping robot – with a twist. Most grippers hang open when “relaxed,” and require energy to hold on to their cargo as it is lifted and moved from point A to point B. But this claw’s default position is clenched shut. Energy is required to open the grippers, but once they’re in position, the grippers return to their “resting” mode – holding their cargo tight.
[ NC State ]
As control skills increase, we are more and more impressed by what a Cassie bipedal robot can do. Those who have been following our channel know that we always show the limitations of our work. So while there is still much to do, you gotta like the direction things are going. Later this year, you will see this controller integrated with our real-time planner and perception system. Autonomy with agility! Watch out for us!
[ University of Michigan ]
GITAI’s S1 arm is a little less exciting than their humanoid torso, but it looks like this one might actually be going to the ISS next year.
Here’s how the humanoid would handle a similar task:
[ GITAI ]
Thanks Fan!
If you need a robot that can lift 250 kg at 10 m/s across a workspace of a thousand cubic meters, here’s your answer.
[ Fraunhofer ]
Penn engineers, with funding from the National Science Foundation, have developed nanocardboard plates able to levitate when bright light is shone on them. This fleet of tiny aircraft could someday explore the skies of other worlds, including Mars. The thinner atmosphere there would give the flyers a boost, enabling them to carry payloads ten times as massive as they are, making them an efficient, lightweight alternative to the Mars helicopter.
[ UPenn ]
Erin Sparks, assistant professor in Plant and Soil Sciences, dreamed of a robot she could use in her research. A perfect partnership was formed when Adam Stager, then a mechanical engineering Ph.D. student, reached out about a robot he had a gut feeling might be useful in agriculture. The pair moved forward with their research with corn at the UD Farm, using the robot to capture dynamic phenotyping information of brace roots over time.
[ Sparks Lab ]
This is a video about robot spy turtles but OMG that bird drone landing gear.
[ PBS ]
If you have a DJI Mavic, you now have something new to worry about.
[ DroGone ]
I was able to spot just one single person in the warehouse footage in this video.
[ Berkshire Grey ]
Flyability has partnered with the ROBINS Project to help fill gaps in the technology used in ship inspections. Watch this video to learn more about the ROBINS project and how Flyability’s drones for confined spaces are helping make inspections on ships safer, cheaper, and more efficient.
[ Flyability ]
In this video, a mission of the Alpha Aerial Scout of Team CERBERUS during the DARPA Subterranean Challenge Urban Circuit event is presented. The Alpha Robot operates inside the Satsop Abandoned Power Plant and performs autonomous exploration. This deployment took place during the 3rd field trial of team CERBERUS during the Urban Circuit event of the DARPA Subterranean Challenge.
[ ARL ]
More excellent talks from the remote Legged Robots ICRA workshop. We’ve posted three here, but there are several other good talks this week as well.
[ ICRA 2020 Legged Robots Workshop ]
#437800 Malleable Structure Makes Robot Arm More ...
The majority of robot arms are built out of some combination of long straight tubes and actuated joints. This isn’t surprising, since our limbs are built the same way, which was a clever and efficient bit of design. By adding more tubes and joints (or degrees of freedom), you can increase the versatility of your robot arm, but the tradeoff is that complexity, weight, and cost will increase, too.
At ICRA, researchers from Imperial College London’s REDS Lab, headed by Nicolas Rojas, introduced a design for a robot that’s built around a malleable structure rather than a rigid one, allowing you to improve how versatile the arm is without having to add extra degrees of freedom. The idea is that you’re no longer constrained to static tubes and joints but can instead reconfigure your robot to set it up exactly the way you want and easily change it whenever you feel like it.
Inside of that bendable section of arm are layers and layers of mylar sheets, cut into flaps and stacked on top of one another so that each flap is overlapping or overlapped by at least 11 other flaps. The mylar is slippery enough that under most circumstances, the flaps can move smoothly against each other, letting you adjust the shape of the arm. The flaps are sealed up between latex membranes, and when air is pumped out from between the membranes, they press down on each other and turn the whole structure rigid, locking itself in whatever shape you’ve put it in.
Image: Imperial College London
The malleable part of the robot consists of layers of mylar flaps sealed between latex membranes; pumping the air out jams the flaps together and locks the arm in whatever shape you’ve put it in.
The nice thing about this system is that it’s a sort of combination of a soft robot and a rigid robot—you get the flexibility (both physical and metaphorical) of a soft system, without necessarily having to deal with all of the control problems. It’s more mechanically complex than either (as hybrid systems tend to be), but you save on cost, size, and weight, and reduce the number of actuators you need, which tend to be points of failure. You do need to deal with creating and maintaining a vacuum, and the fact that the malleable arm is not totally rigid, but depending on your application, those tradeoffs could easily be worth it.
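For intuition on why pulling a vacuum makes the stack rigid, here is a deliberately simplified first-order estimate, our own illustration rather than the authors’ model: atmospheric pressure clamps the flaps together, and friction at each overlapping interface resists sliding. All numbers are assumed.

```python
# First-order estimate of how much shear load a vacuum-jammed flap stack can
# resist: clamping pressure x overlap area x friction coefficient, summed over
# the layer interfaces. A deliberate simplification with assumed numbers,
# not the authors' model.
def jammed_shear_capacity_newtons(pressure_pa, overlap_area_m2, mu, n_interfaces):
    per_interface = mu * pressure_pa * overlap_area_m2  # friction force per interface
    return per_interface * n_interfaces

# Assumed values: ~80 kPa effective clamping pressure, 5 cm^2 of overlap per
# flap pair, mylar-on-mylar friction ~0.3, and 11 overlapping flaps.
print(jammed_shear_capacity_newtons(80e3, 5e-4, 0.3, 11))  # ~132 N
```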
For more details, we spoke with first author Angus B. Clark via email.
IEEE Spectrum: Where did this idea come from?
Angus Clark: The idea of malleable robots came from the realization that the majority of serial robot arms have 6 or more degrees of freedom (DoF)—usually rotary joints—yet are typically performing tasks that only require 2 or 3 DoF. The idea of a robot arm that achieves flexibility and adaptation to tasks but maintains the simplicity of a low DoF system, along with the rapid development of variable stiffness continuum robots for medical applications, inspired us to develop the malleable robot concept.
What are some ways in which a malleable robot arm could provide unique advantages, and what are some potential applications that could leverage these advantages?
Malleable robots have the ability to complete multiple traditional tasks, such as pick and place or bin picking operations, without the added bulk of extra joints that are not directly used within each task, as the flexibility of the robot arm is provided by a malleable link instead. This results in an overall smaller form factor, including weight and footprint of the robot, as well as a lower power requirement and cost of the robot as fewer joints are needed, without sacrificing adaptability. This makes the robot ideal for scenarios where any of these factors are critical, such as in space robotics—where every kilogram saved is vital—or in rehabilitation robotics, where cost reduction may facilitate adoption, to name two examples. Moreover, the soft-robot-esque nature of malleable robots lends itself to collaborative robots in factories, working safely alongside humans.
“The idea of malleable robots came from the realization that the majority of serial robot arms have 6 or more degrees of freedom (DoF), yet are typically performing tasks that only require 2 or 3 DoF”
—Angus B. Clark, Imperial College London
Compared to a conventional rigid link between joints, what are the disadvantages of using a malleable link?
Currently the maximum stiffness of a malleable link is considerably lower than that of an equivalent solid steel rigid link, and this is one of the key areas we are focusing research on improving, as motion precision and accuracy are impacted. We have created the largest existing variable stiffness link at roughly 800 mm length and 50 mm diameter, which suits malleable robots to small and medium size workspaces. Our current results evaluating this accuracy are good; however, achieving a uniform stiffness across the entire malleable link can be problematic due to the production of wrinkles under bending in the encapsulating membrane. As demonstrated by our SCARA topology results, this can produce slight structural variations resulting in reduced accuracy.
Does the robot have any way of knowing its own shape? Potentially, could this system reconfigure itself somehow?
Currently we compute the robot topology using motion tracking, with markers placed on the joints of the robot. Using distance geometry, we are then able to obtain the forward and inverse kinematics of the robot, which we can use to control the end effector (the gripper) of the robot. Ideally, in the future we would love to develop a system that no longer requires the use of motion tracking cameras.
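As a toy illustration of that distance-geometry step, and emphatically not the REDS Lab implementation, the Python sketch below recovers marker coordinates, up to a rigid transform, from nothing but their pairwise distances using classical multidimensional scaling; link lengths, joint angles, and hence the kinematics follow from those coordinates. The marker layout is hypothetical.

```python
# Toy distance-geometry sketch (not the authors' code): recover marker
# coordinates, up to a rigid transform, from pairwise distances alone
# using classical multidimensional scaling.
import numpy as np

def points_from_distances(D, dim=3):
    """Classical MDS: turn an n x n Euclidean distance matrix into n points."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                # Gram matrix of centered points
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]            # keep the largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Hypothetical marker positions (meters): base, joint 1, joint 2, end effector.
markers = np.array([[0.0, 0.0, 0.0],
                    [0.0, 0.0, 0.4],
                    [0.5, 0.0, 0.6],
                    [0.8, 0.2, 0.6]])
D = np.linalg.norm(markers[:, None, :] - markers[None, :, :], axis=-1)
recovered = points_from_distances(D)
# Pairwise distances of the recovered points match the originals.
print(np.allclose(np.linalg.norm(recovered[:, None] - recovered[None, :], axis=-1), D))
```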
As for the robot reconfiguring itself, which we call an “intrinsic malleable link,” there are many methods that have been demonstrated for controlling a continuum structure, such as using positive pressure or tendon wires. However, the ability to determine, in real time, the curvature of the link, not just the joint positions, is a significant hurdle to solve. We hope to see future work on malleable robots tackle this problem.
What are you working on next?
Our current main goal is refining the kinematics of the robot to enable a robust and complete system that allows a user to collaboratively reshape the robot while still achieving the accuracy expected from robotic systems. Malleable robots are a brand new field we have introduced, and as such provide many opportunities for development and optimization. Over the coming years, we hope to see other researchers work alongside us to solve these problems.
“Design and Workspace Characterization of Malleable Robots,” by Angus B. Clark and Nicolas Rojas from Imperial College London, was presented at ICRA 2020.