
#439257 Can a Robot Be Arrested? Hold a Patent? ...

Steven Cherry When horses were replaced by engines, for work and transportation, we didn’t need to rethink our legal frameworks. So when a fixed-in-place factory machine is replaced by a free-standing AI robot, or when a human truck driver is replaced by autonomous driving software, do we really need to make any fundamental changes to the law?

My guest today seems to think so. Or perhaps more accurately, he thinks that surprisingly, we do not; he says we need to change the laws less than we think. In case after case, he says, we just need to treat the robot more or less the same way we treat a person.

A year ago, he was giving presentations in which he argued that AIs can be patentholders. Since then, his views have advanced even further in that direction. And so last fall, he published a short but powerful treatise, The Reasonable Robot: Artificial Intelligence and the Law, from Cambridge University Press. In it, he argues that the law more often than not should not discriminate between AI and human behavior.


Ryan Abbott is a Professor of Law and Health Sciences at the University of Surrey and an Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA. He’s a licensed physician, an attorney, and an acupuncturist in the United States, as well as a solicitor in England and Wales. His M.D. is from UC San Diego’s School of Medicine; his J.D. is from Yale Law School and his M.T.O.M.—Master of Traditional Oriental Medicine—degree is from Emperor’s College. And what with all that going on—and his book—I’m very happy to have him as my guest today.

Ryan, I have to appear at traffic court and I have lower back pain, so, welcome to the podcast.

Ryan Abbott Well, thank you for having me. And not to worry, you can get both of those things fixed up here.

Steven Cherry Very good. Ryan, your starting point was back in 2014, and it was when you realized how much pharmaceutical companies were relying on AI in the process of drug discovery.

Ryan Abbott In 2014, I was doing a couple of things. I was teaching patent law and in particular the law surrounding what it means to be an inventor. And I was doing work for biotech companies and helping protect their research in pharmaceutical R&D and patenting that sort of research.

And if you’re not aware of this, if you’re a pharmaceutical company, you can basically outsource every element of drug discovery, from finding new compounds to having someone do clinical trials or preclinical testing. And although the company I worked for didn’t end up using some of these vendors, there were a number of companies basically advertising that if you told them the therapeutic target you were interested in, they would have computers go through a large library of antibodies that they had, pick the antibody that would best bind that target, and provide you with a certain amount of data around how that antibody functions.

That was interesting to me because it made me think: when we have a person who does that, they’re an inventor. And then when we get a patent on that antibody to treat that antigen, the target, that patent is the foundation of pretty much all patent portfolios for biological drugs. So what happens if instead of having a person do that sort of thing, you had a machine do that sort of thing? I wonder if anyone’s ever thought of that before.

And it turns out people have been thinking about it since at least the 60s, but more or less saying, well, what would you need a patent for? Because a machine wouldn’t be motivated by getting a patent. So you could just leave this thing in the public domain. And I thought, well, that isn’t quite right, because, sure, a machine doesn’t care about a patent, but biotechnology companies that are investing millions or billions of dollars in finding new drugs care about patents. And if a machine can do a better job at finding a new drug, why would we not want to protect that sort of innovation? So that was my entrance into the field.

Steven Cherry So patents were your starting point. And if I understand it correctly, it’s where you began to think about parity between humans and robots or AIs a little bit more generally. I take it it’s not so much that you think that AIs are inventors—and I even said “patentholders” before and that’s not quite right—it’s more that you think that we would get the best outcomes for society when we treat them that way, that all the other legal paths lead to less innovation and invention. And that’s what we have a patent system for. So how does that work?

Ryan Abbott Right. I think that’s right. It’s a little less parity between an AI and a person and a little more parity between AI and human behavior. And it seems like a subtle difference, but it’s also a powerful one. For example, an AI wouldn’t own a patent, not only because an AI doesn’t have legal personality the way a company does and so couldn’t hold the patent, but because even if you were going to change the law, an AI wouldn’t care about getting a patent and wouldn’t be able to exploit one. An AI isn’t like a person; it isn’t morally deserving of rights.

But functionally an AI can behave indistinguishably from a person. And the law, by treating two different actors very differently, by treating two different types of behavior very differently, ends up having some perverse outcomes. So, for example, imagine that Pfizer had an AI that could replace a team of human researchers. And so when Covid-19 came along, Pfizer showed its AI the virus; the AI found a new antibody for it, formulated it as a new drug, simulated some clinical trials, told Pfizer how to manufacture the thing, and basically Pfizer automated their whole R&D process.

If Pfizer can do that better with a machine than a person, it seems to me the sort of thing we want to be encouraging them to do. And yet, if Pfizer can’t get patents on any of that kind of activity, that’s really the primary way that the drug industry protects its research and monetizes it. And it is perhaps the industry where intellectual property is most important, although with Covid, there’s a whole lot of issues surrounding that. But in general, you see how the law would treat behavior differently. And when it does that, it encourages or discourages people in one direction without necessarily having a good reason to. And in other areas of the law, the law similarly pushes us towards human or A.I. behavior—again, without necessarily meaning to and sometimes with perverse sorts of outcomes.

Steven Cherry We’re going to get to those unintended consequences in a little bit. But in the meantime, sticking with this patent question, a little less than two years ago, you filed for two international patents for “AI generated inventions.” For something to be patentable, it has to be novel, non-obvious, and useful. What did you patent in that case?

Ryan Abbott We had two inventions come out of AI. One was for a flashing light that could preferentially attract human attention. So, for example, in an emergency situation, if you wanted to attract human or AI attention to, say, a plane crash, you could have a light flashing in this particular way. And the other is a beverage container based on fractal geometry, looking somewhat like a snail shell, that could be more useful for transportation or storage or grip by a person or a machine.

Those were two inventions that the UK and European patent offices held were otherwise patentable. But in our case, we didn’t have a traditional human inventor. We did have someone who built an AI and we had someone who used an AI and we had someone who owned an AI. But traditionally in patent law—and, well, this depends on your jurisdiction—but in the US or the UK to be an inventor, you have to have basically thought up the entire invention as it is to be applied in practice. So, for example, if I was inventing a new drug to treat Covid-19, I would have to be the person who goes out and finds the antibody to treat it, not the pharmaceutical executive who says it’d be great if we had a Covid-19 vaccine or someone who carries out some instructions at someone else’s command.

In our case, we had someone train and program an AI, but not to solve either of those particular problems. It was just trained to do unsupervised generative learning. It produced those outputs and identified them as having value, and to a team of patent attorneys, they were things we could file patents on. So in those instances we lacked someone who was traditionally an inventor.

Siemens reported a similar case study in 2019 at the first WIPO Conversation on AI and IP, where they had an AI that generated a new industrial component for a car. The entire engineering team involved in the project thought, this is great, this looks like what we want. And when Siemens wanted to file a patent on it, the engineers said, well, we aren’t inventors on this—we had no idea what the machine was going to come up with, and it was obviously valuable—and so that wouldn’t be appropriate. In the US, at least, saying you’re an inventor on a patent when you’re not carries a criminal penalty.

Steven Cherry WIPO is the World Intellectual Property Organization. When you filed these patents, you sort of sandbagged the patent offices, didn’t you? You didn’t tell them that there was no human inventor.

Ryan Abbott Well, initially in the UK, and in the European Patent Office, you’re allowed to not disclose an inventor for 18 months. And we wanted to do that to see if these were inventions that were substantively patentable. One of the interesting issues is that listing an inventor is generally a formalities issue rather than a substantive requirement for a patent. In fact, right now, the European Patent Office is debating whether or not there is any substantive requirement to list a human inventor at all or any kind of inventor, or whether this is really just something done again on a formalities basis.

There are good reasons for listing human inventors. I’m an inventor on a few patents and it protects my moral rights. I want to be acknowledged for the work I’ve done, but in most jurisdictions, that isn’t a right that has any sort of direct financial implication, although it can signal someone’s productivity for future employers or sometimes people have contracts where they get certain benefits from it. Historically, there’s been some talk about whether or not you can have a company be an inventor and the law has come down pretty firmly against that. But if you were to list, say, IBM as an inventor for their multiple patents, it might exclude their scientists from the credit of being acknowledged.

That’s very different, though, than our case, where you just don’t have a person to be acknowledged in the traditional sense and where an AI has functionally done the inventing. So when we did disclose that there was no traditional human inventor in the UK and in Europe, they rejected them on a formalities basis. And we filed in about a dozen other jurisdictions worldwide. Several of those have now rejected the application, but those are all under appeal.

Steven Cherry This is something of a test case. Where does it stand? You’re confident of winning in any of these courts?

Ryan Abbott I am confident we will get a patent in some jurisdictions, yes. And potentially on a different basis. One of the reasons for doing this case was, as you said, as a test case. When I started talking about this in 2014—and again, I was hardly the first person to start talking about it—people would think that it was kind of vaguely maybe interesting. And within a five-year period, I would have companies coming up to me after the talks saying, what should we be doing about this? And less because they were having AI that was really, truly autonomously doing R&D, but more that teams are getting bigger, teams are getting more multidisciplinary, there are collaborations between tech companies and traditional R&D companies—things like Microsoft and Novartis working together—and the boundaries of what it is that makes someone an inventor are less clear. And also, there’s a desire to make sure that companies aren’t losing protection by bringing AI into the process.

Steven Cherry So you came to see patents as an instance of a broader principle that you think in most cases should govern our legal regimes regarding robots and AIs. And you apply it rather broadly and we’ll get to some specific areas of the law. But first, tell us about the Principle of Legal Neutrality.

Ryan Abbott The idea is essentially that the law would better achieve its underlying goals if it did less to discriminate between AI and human behavior. And again, that is subtly but importantly different from treating an AI and a person the same—it’s treating their behavior the same way. So, for example, if an AI generates an invention without a traditional human inventor, that’s something we can protect. Not that an AI would own a patent. I think the best way to see it is in the context of certain examples. So if you had an AI driving an Uber instead of a human Uber driver, right now we hold those two different vehicles to very different liability standards. And it’s not clear there is a good reason for doing that, if what we’re trying to get out of accident law is mainly to have fewer people run over. You can have an AI running a podcast or teaching a university course or operating a cash register at McDonald’s. And yet tax law treats these activities very differently in ways that encourage or discourage employers from automating. And again, without really intending to. The theory is that among many other principles of AI regulation, like fairness, transparency, and non-bias, doing less to discriminate between AI and human behavior would generate social benefits broadly.

Steven Cherry This gets us to that question of unintended consequences. And these are the two, I think, really interesting examples. In the case of tort law—and specifically liability—treating AIs and humans differently leads us to disfavor the AI. And in the case of taxes, it causes us to favor the AI. Let’s take these in turn and start with tort law.

Ryan Abbott Well, and as you point out, it isn’t necessarily that the law goes one way or the other, just that it kind of pushes behavior in ways that we don’t really want it to, or at least certainly haven’t intended it to.

So, again, take this example: in a few years we will have self-driving Ubers. And when you want an Uber or Lyft, you go on your phone and it gives you the option of a person or a machine. You essentially have AI stepping into the shoes of people and doing exactly the same sort of thing that a person would do. But because the law of accidents is very human-centric, there are two different liability regimes for a self-driving car and a human-driven car.

For a human-driven car, we evaluate it under a negligence standard, which asks essentially what would a hypothetical reasonable person have done? So if a kid jumps out in front of your car and you slam on the brakes but you accidentally hit them, we say, well, would a reasonable human driver have avoided that collision? If yes, then you’re liable for it. If no, then you’re not. But self-driving cars are commercial products. And we have a different liability test for them, which is strict liability or product liability. And it’s a little complicated, but basically we just ask, was there a defect with the AI, and if so, did it cause the accident or not? If it caused the accident, then there’s liability. If it didn’t, then there’s not. Without getting too into the weeds on product versus negligence liability, the gist is that a strict liability system is a higher level of liability, and because there’s more liability associated with it, that discourages the use of AI—there are more costs associated with using it.

Now, that’s probably not going to be a good system if it turns out that A.I. is a better driver than a person. And it is almost certainly going to turn out that way because 94 percent of car accidents are caused by human error. More than thirty thousand people a year are killed in the US and more than a million people are killed worldwide. And a much larger number are seriously injured. Self-driving cars, while they’re definitely not perfect, are almost certainly going to be, in the not too distant future, safer than your average human driver. So the problem with having a stricter liability standard for machines than people is it discourages us from using them. But if machines are safer than people, then we really want to be encouraging their use through accident law. My proposal is, well, if you just look at the behavior, we would just ask, well, was that reasonable driving?

And again, this isn’t quite treating people and machines the same way as actors, because the self-driving car itself wouldn’t be liable; the manufacturer of the self-driving car would still be liable.
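To make the asymmetry concrete, here is a minimal sketch of the two liability tests as simple predicates, alongside the behavior-based question Abbott proposes. The fact pattern, field names, and the framing of the "neutral" rule are illustrative simplifications for this transcript, not a statement of the law or of the book's exact formulation.

```python
from dataclasses import dataclass

@dataclass
class Accident:
    """Hypothetical fact pattern for one collision (illustrative fields only)."""
    reasonable_driver_would_have_avoided: bool  # the negligence question
    defect_present: bool                        # the product-liability question
    defect_caused_harm: bool

def negligence_liability(a: Accident) -> bool:
    """Human driver: liable if a hypothetical reasonable driver would have avoided it."""
    return a.reasonable_driver_would_have_avoided

def strict_product_liability(a: Accident) -> bool:
    """Self-driving car today: liability turns on defect plus causation."""
    return a.defect_present and a.defect_caused_harm

def behavior_neutral_liability(a: Accident) -> bool:
    """Abbott's proposal, roughly: ask the same question of the AI's driving
    that we would ask of a person's, with any liability falling on the
    manufacturer rather than on the car itself."""
    return a.reasonable_driver_would_have_avoided

# Example: a crash no reasonable driver could have avoided, but where some
# defect can be identified after the fact.
crash = Accident(reasonable_driver_would_have_avoided=False,
                 defect_present=True, defect_caused_harm=True)
print(negligence_liability(crash))        # False -> human driver not liable
print(strict_product_liability(crash))    # True  -> the AI's maker is liable today
print(behavior_neutral_liability(crash))  # False -> same outcome as for the human
```

On the same facts, the machine attracts liability and the person does not, which is the extra cost Abbott argues tilts the field against the potentially safer driver.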

Steven Cherry The situation with taxes is just the opposite. It actually encourages the development and deployment of A.I. systems.

Ryan Abbott Yeah, and again, it does so in ways that aren’t necessarily resulting in better social outcomes. So if my university could replace me with a robot, they would. There’s a lot that goes into that decision. But let’s say that I and the robot were equally productive and about equally costly. My university would want to automate because it would save on taxes, because tax law, like tort law, is human centric.

And we also have a tax system that preferentially taxes labor over capital. So, for example, my employer pays payroll taxes that include contributions by an employer for various social service systems. In the UK that’s about a 13 percent national insurance contribution, or in the US it’s an employer portion of payroll taxes. If you have an AI do the same job, then the employer doesn’t have to pay that human-centric tax. And so tax policy is driving businesses to automate, even if they’re just doing it to save on tax money. And there are other ways in which human-centric laws encourage automation without meaning to. It’s a little more complicated, but the gist is that the same behavior by a person and a machine is taxed differently. And in this case, the government is encouraging businesses to automate.

There’s another reason that’s problematic besides having an unlevel playing field, and it’s that most of our tax revenue comes from, again, human-centered taxes. In the US about 35 percent of federal tax revenue comes from payroll taxes, and about 50-something percent comes from income taxes, which are largely labor-based. So if you automate jobs, you don’t get the tax revenue from the wages you would have been paying, say, me, because robots don’t pay taxes.

That sounds a little ridiculous. But less so when you realize the payroll taxes all go away and whatever would have come from an income tax maybe comes from corporate taxes, but at a much, much lower rate. So my university won’t necessarily be much more profitable with the robot than with me if it’s just saving a little bit on taxes. But businesses pay a much smaller effective and marginal tax rate than individuals do. Tax policy not only encourages automation, but it also reduces government funding.
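A back-of-the-envelope calculation makes the two effects—the employer's incentive and the government's lost revenue—easier to see. The sketch below is a toy model with placeholder rates: the 13 percent figure echoes the UK number mentioned above, and the other rates and the salary are round illustrative numbers, not figures from the book or from any tax code.

```python
def human_job(wage, employer_payroll_rate=0.13, income_tax_rate=0.25):
    """Cost to the employer and revenue to the government when a person holds the job."""
    employer_cost = wage * (1 + employer_payroll_rate)
    tax_revenue = wage * employer_payroll_rate + wage * income_tax_rate
    return employer_cost, tax_revenue

def automated_job(annual_machine_cost, extra_profit, corporate_tax_rate=0.21):
    """Same job done by an AI/robot: no payroll or income tax, only corporate tax
    on whatever additional profit the automation produces."""
    employer_cost = annual_machine_cost
    tax_revenue = extra_profit * corporate_tax_rate
    return employer_cost, tax_revenue

wage = 60_000  # hypothetical salary for the job being automated
human_cost, human_rev = human_job(wage)
# Assume the machine costs about as much per year as the worker's wage and the
# saved wage shows up as taxable corporate profit.
robot_cost, robot_rev = automated_job(annual_machine_cost=wage, extra_profit=wage)

print(f"Human worker: employer pays {human_cost:,.0f}, government collects {human_rev:,.0f}")
print(f"Automation:   employer pays {robot_cost:,.0f}, government collects {robot_rev:,.0f}")
```

Even with productivity and headline cost held equal, the employer saves the payroll tax by automating and the government's take falls by nearly half, which is the double distortion Abbott describes.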

Steven Cherry It’s not like an automated checkout system at the supermarket can pay income taxes or have payroll taxes deducted from its wages. So how would this work?

Ryan Abbott Right. Some people have suggested a literal robot tax and that would have the benefits of leveling the playing field and also ensuring tax revenue. But a literal robot tax has a lot of problems. I mean, for starters, defining a robot—is it the checkout cashier? Is it a Roomba? You would have a lot of administrative gamesmanship with the IRS about this. It could penalize business models that are legitimately more efficient with automation, and it would result in a lot of administrative overhead.

So I think the solution is better as an indirect robot tax. And how would we do that? Well, we could do something like getting rid of human-centric taxes. So get rid of payroll taxes. And now suddenly you’re no longer encouraging automation quite so much. On the other hand, you would have to make that up from somewhere. And we could, for example, increase income tax rates or marginal rates on high earners. But probably a better system would be the increased tax burden on capital by doing things like raising corporate taxes, increasing capital gains taxes, or doing away with things like the stepped-up basis rule, which benefits capital.

And I think there are two reasons to do this. One is that, historically, we’ve had a heavy emphasis on taxing labor, because we think no one’s going to stop working because of taxes [and] people will go invest their money in lower-tax jurisdictions if we raise taxes on capital.

There’s already a lot of scholarship challenging that assumption. It’s going to be even more important to challenge as automation takes greater hold and as less labor is required to make money from capital. And it will also have some impact on distributional fairness because A.I. is going to generate a tremendous amount of wealth, but likely very concentrated among people who already have that wealth. And increasing the tax burden on capital would have some distributional benefits.

Steven Cherry Finally, we get to the criminal law and here you say it’s not so easy to apply the Principle of Legal Neutrality.

Ryan Abbott Yeah, criminal law is a little more challenging for this sort of thing than other areas of the law. I tend to take a pretty functionalist attitude toward what the law is looking for. So with patents, we’re trying to incentivize and encourage innovation; with tax law, we’re trying to promote economic activity and achieve some other aims like distribution of resources. With tort law, we’re trying to reduce accidents and so forth. Criminal law is a little different because it is the branch of law that most cares about, well, not what happened, but why someone did something. If an Uber runs me over, that may be a tort, but it’s a crime if the person driving the Uber was trying to run me over, or at least behaving so carelessly that they apparently couldn’t have cared less.

And so it makes it potentially a little more challenging to think about AI behavior in a context in which the law really cares about intrinsic motivations for doing things and also where the law cares about things like retribution and culpability. “How morally blameworthy were you for doing something?” Or, “Let’s punish people for doing things not just because it has good social outcomes, but because we think that’s the best thing to do.”

And so the book is kind of looking at this AI and human behavior and thinking about whether you could ever consider AI behavior as criminal if you didn’t have a traditional person behind the AI. So if I pick up a computer and strike you with it, that’s not the computer committing a battery; it’s me using a computer as a tool. But as we have increasingly sophisticated AI that is open source, that many people are contributing to, it is likely to be acting in ways we traditionally think about as criminal, without a person we can necessarily point to and say, well, this was a person who did a criminal thing. It’s more like a company: this was an entity that did something criminal, and there isn’t a clear person associated with that.

And indeed, the idea of criminally punishing an AI isn’t as ridiculous as it might seem. We already criminally punish artificial persons in the form of corporations, and we don’t always require some bad mindset: for strict liability crimes, for certain, say, unusually dangerous activities, we only require that someone broke the law, not that they had a bad mindset. And in fact, you can even have criminal liability for failure to act when you have a duty to act. So criminally punishing an artificial person without a guilty mind for failing to act is something the law already accommodates, and this chapter kind of looks at whether that makes any sense in the context of AI.

Steven Cherry I think there’s also a common-sense element to this. I mean, we could have the philosophical discussion about whether robots or AIs can ever have consciousness. But leaving that aside, it’s becoming a little bit common—and it’s probably going to become much more common—to talk about a robot’s intentions or an AI’s intentions. I mean, frankly, we often impute intent to almost any entity that seems capable of organized and complex self-movement with a goal in mind. So the Roomba floor cleaner that you mentioned before might not rise to that level. But Rosie, the maid in The Jetsons, would.

Ryan Abbott Right. And this is indeed something that is discussed often in the corporate criminal context, about whether corporations have intentions. And the way one largely gets around this is by imputing the intentions of human agents onto the corporation. Some people are against holding a corporation liable for criminal acts at all, in part because the people who suffer from this are potentially innocent shareholders rather than the person at the company who did the bad thing. But other people have a different view of companies, which is that they are more than the sum of their agents, that people behave with groupthink and in ways that are too subtle to really criminalize directly. And there is some sort of synergistic sense in which companies are legitimately thought about as independent agents, separate and aside from people—although corporations, of course, are literally made up of people, and AI isn’t.

Whether or not at a philosophical level you want to say that an AI has intent, certainly it functionally has intent. Right?

Whether it makes any sense to think about that under criminal liability, well, there could be some benefits to doing that. So, for example, if we have a self-driving Tesla that for one reason or another ended up targeting investment bankers, if we were to convict that Tesla of a crime, it would say to society, we’re not going to tolerate this behavior regardless of the nature of the actor. And this might change how Tesla behaves if the car was destroyed and particularly if it had some sort of follow-on punishment for Tesla.

One of the things the book looks at is that really doing something like that isn’t quite as radical a departure as one might think, and there would be some benefits to it, but also a lot of disadvantages to doing it. Namely, it could erode trust in the law and the way people think about machines as being morally on par with a person.

A better system would be finding a person upstream of that AI’s behavior and holding them responsible, for either civil or criminal liability—and probably civil liability, because if an AI did something criminal but we couldn’t find something criminal the person had done, it would not be that they had done something criminal in their own right, but that they had contributed to something that went on to cause harm.

Steven Cherry I think we can now guess what Stephen King’s next book is going to be, right? It’s Christine II, featuring the murderous Tesla.

What about other aspects of the law? I’m wondering about things like, I don’t know, constitutional freedoms. If corporations have free speech rights, why shouldn’t an AI?

Ryan Abbott Well, these issues are very much subjects of debate. I think my comment from the book would be that maybe there should be some protections afforded to AI behavior, but I think we’d have to be very careful in doing that and note that we’re not doing this for the AI’s sake. The AI has no interest in whether or not it has the right to bear arms or the right to marry or the right to free speech.

It’s only something that we would want to do if we looked at it carefully and decided that we as a society would be better off granting certain legal rights to AI behavior. And indeed, that’s the theory behind rights for companies. The theory is not that a company is a person [and] is morally deserving of rights [and] it would be unfair to have it being unable to exercise its right to free speech. The theory is that granting rights to artificial entities in the form of corporations benefits people. Primarily, letting [corporations] own property and enter into contracts helps encourage commerce and entrepreneurship. Whether it is beneficial for society to have companies have the right to make certain political donations or engage in certain sorts of speech is something that we really need to look carefully at, but always bearing in mind, well, what are we doing this for? It’s to benefit society and the people—not to grant rights to something that isn’t a person for the sake of doing it.

Steven Cherry Lastly, what about areas outside of the law that support our laws? For example, the IEEE is one of the leading standards-setting bodies in the world. What role do standards play, or could they play, as the laws around robotics and AI develop?

Ryan Abbott Well, I think a very important one. The law is one way in which we regulate behavior, and it isn’t the appropriate solution for every situation. The law can be fairly heavy-handed and it really also sets a floor for behavior—the sort of thing you should not do. But we expect people to have a higher standard of behavior. And so it’s not enough for companies to say, well, we’re complying with the law. They really need to be thinking carefully through their own governance of AI, their ethics of the use of AI, individual use-cases, the risks and benefits, and acting in a way that is basically the best they can. And it shouldn’t be every company on their own, either. There is an important role for industry groups to be getting together and thinking through some of these very difficult challenges and coming up with soft norms that guide their own behavior in useful and beneficial sorts of ways.

Steven Cherry Bad cases make bad law. And I guess one thing the standards could do is keep us from having the kinds of situations that lead to bad law.

Ryan Abbott There will always be a role for the law because there are some bad actors out there and not every company will be as interested in being as benevolent with AI as others. But for the good corporate actors and industry players, they should indeed not have to worry about running afoul of the law, because they should be exceeding any legal obligations on them in terms of their use of AI.

Steven Cherry Well, Ryan, the Principle of Legal Neutrality is a remarkable thesis, and you’ve written a remarkable book about it. And the time to think about it and debate it is now before we blindly go down the road toward one conclusion or another without having thought about the consequences and which path better serves humanity—which better conforms to the purposes for which we make laws in the first place. Thanks for helping to plant that intellectual flag of discovery and for joining us today.

Ryan Abbott Thanks so much for having me.

Steven Cherry We’ve been speaking with Ryan Abbott, lawyer, doctor, perpetual student and polymath, though I think the only degree he doesn’t have is math, and author of The Reasonable Robot: Artificial Intelligence and the Law, published by Cambridge University Press.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded May 17, 2021 using Skype and Adobe Audition. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on Spotify, Apple, and wherever else you get your podcasts, or listen on the Spectrum website, which also contains transcripts of this and all our past episodes. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.


#437386 Scary A.I. more intelligent than you

GPT-3 (Generative Pre-trained Transformer 3) is an artificial intelligence language generator that uses deep learning to produce human-like output. Its text is of such high quality that it is very difficult to distinguish from a human’s. Many scientists, researchers and engineers (including Stephen …


#439105 This Robot Taught Itself to Walk in a ...

Recently, in a Berkeley lab, a robot called Cassie taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it’d fare.

And, as it turns out, it fared pretty damn well. With no further fine-tuning, the robot—which is basically just a pair of legs—was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces.

It’s the first time a machine learning approach known as reinforcement learning has been so successfully applied in two-legged robots.

This likely isn’t the first robot video you’ve seen, nor the most polished.

For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, back flips, and complex dance routines. At times, it can seem the world of I, Robot is just around the corner.

This sense of awe is well-earned. Boston Dynamics is one of the world’s top makers of advanced robots.

But they still have to meticulously hand program and choreograph the movements of the robots in their videos. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.

In real-world situations, however, robots need to be robust and resilient. They need to regularly deal with the unexpected, and no amount of choreography will do. Which is how, it’s hoped, machine learning can help.

Reinforcement learning has been most famously exploited by Alphabet’s DeepMind to train algorithms that thrash humans at some of the most difficult games. Simplistically, it’s modeled on the way we learn. Touch the stove, get burned, don’t touch the damn thing again; say please, get a jelly bean, politely ask for another.

In Cassie’s case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It’s not the first AI to learn to walk in this manner. But skills learned in simulation don’t always translate to the real world.

Subtle differences between the two can (literally) trip up a fledgling robot as it tries out its sim skills for the first time.

To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.

Once the algorithm was good enough, it graduated to Cassie.

And amazingly, it didn’t need further polishing. Said another way, when it was born into the physical world, it knew how to walk just fine. It was also quite robust. The researchers write that two motors in Cassie’s knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.
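The post doesn't include the Berkeley team's code, but the overall pattern—optimize a policy where simulation is cheap, then vet it in a second, more faithful simulator before the hardware ever moves—can be sketched in a few dozen lines. Everything below (the toy environment, the random-search optimizer standing in for the actual reinforcement learning algorithm, the friction numbers) is an invented stand-in for illustration, not the real Cassie pipeline.

```python
import numpy as np

class ToyLeggedSim:
    """Stand-in for a physics simulator; the two instances below play the roles
    MuJoCo (training) and SimMechanics (validation) play in the article.
    Dynamics differ only through the `friction` parameter."""
    def __init__(self, friction, seed=0):
        self.friction = friction
        self.rng = np.random.default_rng(seed)

    def rollout(self, policy_weights, steps=200):
        """Run one episode; reward is forward progress minus a fall penalty."""
        state = np.zeros(4)
        total_reward = 0.0
        for _ in range(steps):
            action = np.tanh(policy_weights @ state + 0.1)
            state[0] += action * (1.0 - self.friction)                   # forward progress
            state[1:] = 0.9 * state[1:] + 0.1 * self.rng.normal(size=3)  # body wobble
            if abs(state[1]) > 3.0:                                      # "fell over"
                return total_reward - 10.0
            total_reward += 0.01 * state[0]
        return total_reward

def train(env, iterations=300, noise=0.5, seed=1):
    """Tiny random-search policy optimizer standing in for the RL step."""
    rng = np.random.default_rng(seed)
    best_w = np.zeros(4)
    best_r = env.rollout(best_w)
    for _ in range(iterations):
        candidate = best_w + noise * rng.normal(size=4)
        reward = env.rollout(candidate)
        if reward > best_r:
            best_w, best_r = candidate, reward
    return best_w, best_r

train_sim = ToyLeggedSim(friction=0.10)   # fast, approximate "training world"
check_sim = ToyLeggedSim(friction=0.15)   # closer-to-reality "validation world"

weights, train_reward = train(train_sim)
check_reward = np.mean([check_sim.rollout(weights) for _ in range(20)])

print(f"reward in training sim:   {train_reward:.2f}")
print(f"reward in validation sim: {check_reward:.2f}")
# Only a policy that still clears a threshold in the second simulator would
# "graduate" to hardware, mirroring the MuJoCo -> SimMechanics -> Cassie flow.
```

The real system trains a neural-network policy against far richer dynamics, but the shape of the workflow—learn where simulation is fast, check where it is faithful, deploy only after both—is the part the article describes.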

Other labs have been hard at work applying machine learning to robotics.

Last year Google used reinforcement learning to train a (simpler) four-legged robot. And OpenAI has used it with robotic arms. Boston Dynamics, too, will likely explore ways to augment their robots with machine learning. New approaches—like this one aimed at training multi-skilled robots or this one offering continuous learning beyond training—may also move the dial. It’s early yet, however, and there’s no telling when machine learning will exceed more traditional methods.

And in the meantime, Boston Dynamics bots are testing the commercial waters.

Still, robotics researchers, who were not part of the Berkeley team, think the approach is promising. Edward Johns, head of Imperial College London’s Robot Learning Lab, told MIT Technology Review, “This is one of the most successful examples I have seen.”

The Berkeley team hopes to build on that success by trying out “more dynamic and agile behaviors.” So, might a self-taught parkour-Cassie be headed our way? We’ll see.

Image Credit: University of California Berkeley Hybrid Robotics via YouTube


#439100 Video Friday: Robotic Eyeball Camera

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
Let us know if you have suggestions for next week, and enjoy today's videos.

What if seeing devices looked like us? Eyecam is a prototype exploring the potential future design of sensing devices. Eyecam is a webcam shaped like a human eye that can see, blink, look around and observe us.

And it's open source, so you can build your own!

[ Eyecam ]

Looks like Festo will be turning some of its bionic robots into educational kits, which is a pretty cool idea.

[ Bionics4Education ]

Underwater soft robots are challenging to model and control because of their high degrees of freedom and their intricate coupling with water. In this paper, we present a method that leverages the recent development in differentiable simulation coupled with a differentiable, analytical hydrodynamic model to assist with the modeling and control of an underwater soft robot. We apply this method to Starfish, a customized soft robot design that is easy to fabricate and intuitive to manipulate.

[ MIT CSAIL ]

Rainbow Robotics, the company who made HUBO, has a new collaborative robot arm.

[ Rainbow Robotics ]

Thanks Fan!

We develop an integrated robotic platform for advanced collaborative robots and demonstrate an application of multiple robots collaboratively transporting an object to different positions in a factory environment. The proposed platform integrates a drone, a mobile manipulator robot, and a dual-arm robot to work autonomously, while also collaborating with a human worker. The platform also demonstrates the potential of a novel manufacturing process, which incorporates adaptive and collaborative intelligence to improve the efficiency of mass customization for the factory of the future.

[ Paper ]

Thanks Poramate!

At Sevastopol State University, the team of the Laboratory of Underwater Robotics and Control Systems and the Research and Production Association “Android Technika” performed tests of an underwater anthropomorphic manipulator robot.

[ Sevastopol State ]

Thanks Fan!

Taiwanese company TCI Gene created a COVID test system based on their fully automated and enclosed gene testing machine QVS-96S. The system includes two ABB robots and carries out 1800 tests per day, operating 24/7. Every hour, 96 virus sample tests are made with an accuracy of 99.99%.

[ ABB ]

A short video showing how a Halodi Robotics robot can be used in a commercial guarding application.

[ Halodi ]

During the past five years, under the NASA Early Space Innovations program, we have been developing new design optimization methods for underactuated robot hands, aiming to achieve versatile manipulation in highly constrained environments. We have prototyped hands for NASA’s Astrobee robot, an in-orbit assistive free flyer for the International Space Station.

[ ROAM Lab ]

The new, improved OTTO 1500 is a workhorse AMR designed to move heavy payloads through demanding environments faster than any other AMR on the market, with zero compromise to safety.

[ OTTO Motors ]

Very, very high performance sensing and actuation to pull this off.

[ Ishikawa Group ]

We introduce a conversational social robot designed for long-term in-home use to help with loneliness. We present a novel robot behavior design to have simple self-reflection conversations with people to improve wellness, while still being feasible, deployable, and safe.

[ HCI Lab ]

We are one of the 5 winners of the Start-up Challenge. This video illustrates what we achieved during the Swisscom 5G exploration week. Our proof-of-concept tele-excavation system is composed of a Menzi Muck M545 walking excavator automated & customized by Robotic Systems Lab and IBEX motion platform as the operator station. The operator and remote machine are connected for the first time via a 5G network infrastructure which was brought to our test field by Swisscom.

[ RSL ]

This video shows LOLA balancing on different terrain when being pushed in different directions. The robot is technically blind, not using any camera-based or prior information on the terrain (hard ground is assumed).

[ TUM ]

Autonomous driving when you cannot see the road at all because it's buried in snow is some serious autonomous driving.

[ Norlab ]

A hierarchical and robust framework for learning bipedal locomotion is presented and successfully implemented on the 3D biped robot Digit. The feasibility of the method is demonstrated by successfully transferring the learned policy in simulation to the Digit robot hardware, realizing sustained walking gaits under external force disturbances and challenging terrains not included during the training process.

[ OSU ]

This is a video summary of the Center for Robot-Assisted Search and Rescue's deployments under the direction of emergency response agencies to more than 30 disasters in five countries from 2001 (9/11 World Trade Center) to 2018 (Hurricane Michael). It includes the first use of ground robots for a disaster (WTC, 2001), the first use of small unmanned aerial systems (Hurricane Katrina 2005), and the first use of water surface vehicles (Hurricane Wilma, 2005).

[ CRASAR ]

In March, a team from the Oxford Robotics Institute collected a week of epic off-road driving data, as part of the Sense-Assess-eXplain (SAX) project.

[ Oxford Robotics ]

As a part of the AAAI 2021 Spring Symposium Series, HEBI Robotics was invited to present an Industry Talk on the symposium's topic: Machine Learning for Mobile Robot Navigation in the Wild. Included in this presentation was a short case study on one of our upcoming mobile robots that is being designed to successfully navigate unstructured environments where today's robots struggle.

[ HEBI Robotics ]

Thanks Hardik!

This Lockheed Martin Robotics Seminar is from Chad Jenkins at the University of Michigan, on “Semantic Robot Programming… and Maybe Making the World a Better Place.”

I will present our efforts towards accessible and general methods of robot programming from the demonstrations of human users. Our recent work has focused on Semantic Robot Programming (SRP), a declarative paradigm for robot programming by demonstration that builds on semantic mapping. In contrast to procedural methods for motion imitation in configuration space, SRP is suited to generalize user demonstrations of goal scenes in workspace, such as for manipulation in cluttered environments. SRP extends our efforts to crowdsource robot learning from demonstration at scale through messaging protocols suited to web/cloud robotics. With such scaling of robotics in mind, prospects for cultivating both equal opportunity and technological excellence will be discussed in the context of broadening and strengthening Title IX and Title VI.

[ UMD ]


#439095 DARPA Prepares for the Subterranean ...

The DARPA Subterranean Challenge Final Event is scheduled to take place at the Louisville Mega Cavern in Louisville, Kentucky, from September 21 to 23. We’ve followed SubT teams as they’ve explored their way through abandoned mines, unfinished nuclear reactors, and a variety of caves, and now everything comes together in one final course where the winner of the Systems Track will take home the $2 million first prize.

It’s a fitting reward for teams that have been solving some of the hardest problems in robotics, but winning isn’t going to be easy, and we’ll talk with SubT Program Manager Tim Chung about what we have to look forward to.

Since we haven’t talked about SubT in a little while (what with the unfortunate covid-related cancellation of the Systems Track Cave Circuit), here’s a quick refresher of where we are: the teams have made it through the Tunnel Circuit, the Urban Circuit, and a virtual version of the Cave Circuit, and some of them have been testing in caves of their own. The Final Event will include all of these environments, and the teams of robots will have 60 minutes to autonomously map the course, locating artifacts to score points. Since I’m not sure where on Earth there’s an underground location that combines tunnels and caves with urban structures, DARPA is going to have to get creative, and the location in which they’ve chosen to do that is Louisville, Kentucky.

The Louisville Mega Cavern is a former limestone mine, most of which is under the Louisville Zoo. It’s not all that deep, mostly less than 30 meters under the surface, but it’s enormous: with 370,000 square meters of rooms and passages, the cavern currently hosts (among other things) a business park, a zipline course, and mountain bike trails, because why not. While DARPA is keeping pretty quiet on the details, I’m guessing that they’ll be taking over a chunk of the cavern and filling it with features representing as many of the environmental challenges as they can.

To learn more about how the SubT Final Event is going to go, we spoke with SubT Program Manager Tim Chung. But first, we talked about Tim’s perspective on the success of the Urban Circuit, and how teams have been managing without an in-person Cave Circuit.

IEEE Spectrum: How did the SubT Urban Circuit go?

Tim Chung: On a couple fronts, Urban Circuit was really exciting. We were in this unfinished nuclear power plant—I’d be surprised if any of the competitors had prior experience in such a facility, or anything like it. I think that was illuminating both from an experiential point of view for the competitors, but also from a technology point of view, too.

One thing that I thought was really interesting was that we, DARPA, didn't need to make the venue more challenging. The real world is really that hard. There are places that were just really heinous for these robots to have to navigate through in order to look in every nook and cranny for artifacts. There were corners and doorways and small corridors and all these kind of things that really forced the teams to have to work hard, and the feedback was, why did DARPA have to make it so hard? But we didn’t, and in fact there were places that for the safety of the robots and personnel, we had to ensure the robots couldn’t go.

It sounds like some teams thought this course was on the more difficult side—do you think you tuned it to just the right amount of DARPA-hard?

Our calibration worked quite well. We were able to tease out and help refine and better understand what technologies are both useful and critical and also those technologies that might not necessarily get you the leap ahead capability. So as an example, the Urban Circuit really emphasized verticality, where you have to be able to sense, understand, and maneuver in three dimensions. Being able to capitalize on their robot technologies to address that verticality really stratified the teams, and showed how critical those capabilities are.

We saw teams that brought a lot of those capabilities do very well, and teams that brought baseline capabilities do what they could on the single floor that they were able to operate on. And so I think we got the Goldilocks solution for Urban Circuit that combined both difficulty and ambition.

Photos: Evan Ackerman/IEEE Spectrum

Two SubT Teams embedded networking equipment in balls that they could throw onto the course.

One of the things that I found interesting was that two teams independently came up with throwable network nodes. What was DARPA’s reaction to this? Is any solution a good solution, or was it more like the teams were trying to game the system?

You mean, do we want teams to game the rules in any way so as to get a competitive advantage? I don't think that's what the teams were doing. I think they were operating not only within the bounds of the rules, which permitted such a thing as throwable sensors where you could stand at the line and see how far you could chuck these things—not only was that acceptable by the rules, but anticipated. Behind the scenes, we tried to do exactly what these teams are doing and think through different approaches, so we explicitly didn't forbid such things in our rules because we thought it's important to have as wide an aperture as possible.

With these comms nodes specifically, I think they’re pretty clever. They were in some cases hacked together with a variety of different sports paraphernalia to see what would provide the best cushioning. You know, a lot of that happens in the field, and what it captured was that sometimes you just need to be up at two in the morning and thinking about things in a slightly different way, and that's when some nuggets of innovation can arise, and we see this all the time with operators in the field as well. They might only have duct tape or Styrofoam or whatever the case may be and that's when they come up with different ways to solve these problems. I think from DARPA’s perspective, and certainly from my perspective, wherever innovation can strike, we want to try to encourage and inspire those opportunities. I thought it was great, and it’s all part of the challenge.

Is there anything you can tell us about what your original plan had been for the Cave Circuit?

I can say that we’ve had the opportunity to go through a number of these caves scattered all throughout the country, and engage with caving communities—cavers clubs, speleologists that conduct research, and then of course the cave rescue community. The single biggest takeaway is that every cave, and there are tens of thousands of them in the US alone, every cave has its own personality, and a lot of that personality is quite hidden from humans, because we can’t explore or access all of the cave. This led us to a number of different caves that were intriguing from a DARPA perspective but also inspirational for our Cave Circuit Virtual Competition.

How do you feel like the tuning was for the Virtual Cave Circuit?

The Virtual Competition, as you well know, was exciting in the sense that we could basically combine eight worlds into one competition, whereas the Systems Track competition really didn’t give us that opportunity. Even if we had been able to hold the Cave Circuit Systems Competition in person, it would have been at one site, and it would have been challenging to represent the level of diversity that we could with the Virtual Competition. So I think from that perspective, it’s clearly an advantage in terms of calibration—diversity gets you the ability to aggregate results to capture those that excel across all worlds as well as those that do well in one world or some worlds and not the others. I think the calibration was great in the sense that we were able to see the gamut of performance. Those that did well, did quite well, and those that have room to grow showed where those opportunities are for them as well.

We had to find ways to capture that diversity and that representativeness, and I think one of the fun ways we did that was with the different cave world tiles that we were able to combine in a variety of different ways. We also made use of a real world data set that we were able to take from a laser scan. Across the board, we had a really great chance to illustrate why virtual testing and simulation still plays such a dominant role in robotics technology development, and why I think it will continue to play an increasing role for developing these types of autonomy solutions.

Photo: Team CSIRO Data 61

How can Systems Track teams learn from their testing in whatever cave is local to them and effectively apply that to whatever cave environment is part of the final, considering how diverse caves are?

I think that hits the nail on the head for what we as technologists are trying to discover—what are the transferable generalizable insights and how does that inform our technology development? As roboticists we want to optimize our systems to perform well at the tasks that they were designed to do, and oftentimes that means specialization because we get increased performance at the expense of being a generalist robot. I think in the case of SubT, we want to have our cake and eat it too—we want robots that perform well and reliably, but we want them to do so not just in one environment, which is how we tend to think about robot performance, but we want them to operate well in many environments, many of which have yet to be faced.

And I think that's kind of the nuance here, that we want robot systems to be generalists for the sake of being able to handle the unknown, namely the real world, but still achieve a high level of performance and perhaps they do that to their combined use of different technologies or advances in autonomy or perception approaches or novel mechanisms or mobility, but somehow they're still able, at least in aggregate, to achieve high performance.

We know these teams eagerly await any type of clue that DARPA can provide about the SubT environments. From the environment previews for Tunnel, Urban, and even Cave, the teams were pivoting around and thinking a little bit differently. The takeaway, however, was that they didn’t go to a clean-sheet design—their systems were flexible enough that they could incorporate some of those specialist trends while still maintaining the notion of a generalist framework.

Looking ahead to the SubT Final, what can you tell us about the Louisville Mega Cavern?

As always, I’ll keep you in suspense until we get you there, but I can say that from the beginning of the SubT Challenge we had always envisioned teams of robots that are able to address not only the uncertainty of what's right in front of them, but also the uncertainty of what comes next. So I think the teams will be advantaged by thinking through subdomain awareness, or domain awareness if you want to generalize it, whether that means tuning multi-purpose robots, or deploying different robots, or employing your team of robots differently. Knowing which subdomain you are in is likely to be helpful, because then you can take advantage of those unique lessons learned through all those previous experiences then capitalize on that.

As far as specifics, I think the Mega Cavern offers many of the features important to what it means to be underground, while giving DARPA a pretty blank canvas to realize our vision of the SubT Challenge.

The SubT Final will be different from the earlier circuits in that there’s just one 60-minute run, rather than two. This is going to make things a lot more stressful for teams who have experienced bad robot days—why do it this way?

The preliminary round has two 30-minute runs, and those two runs are very similar to how we have done it during the circuits, with a single run per configuration per course. Teams will have the opportunity to show that their systems can face the obstacles in the final course, and it’s the sum of those scores, much like during the circuits, that helps mitigate some of the concerns that you mentioned of having one robot somehow ruin their chances at a prize.

The prize round does give DARPA as well as the community a chance to focus on the top six teams from the preliminary round, and allows us to understand how they came to be at the top of the pack while emphasizing their technological contributions. The prize round will be one and done, but all of these teams we anticipate will be putting their best robot forward and will show the world why they deserve to win the SubT Challenge.

We’ve always thought that when called upon these robots need to operate in really challenging environments, and in the context of real world operations, there is no second chance. I don't think it's actually that much of a departure from our interests and insistence on bringing reliable technologies to the field, and those teams that might have something break here and there, that's all part of the challenge, of being resilient. Many teams struggled with robots that were debilitated on the course, and they still found ways to succeed and overcome that in the field, so maybe the rules emphasize that desire for showing up and working on game day which is consistent, I think, with how we've always envisioned it. This isn’t to say that these systems have to work perfectly, they just have to work in a way such that the team is resilient enough to tackle anything that they face.

It’s not too late for teams to enter for both the Virtual Track and the Systems Track to compete in the SubT Final, right?

Yes, that's absolutely right. Qualifications are still open, we are eager to welcome new teams to join in along with our existing competitors. I think any dark horse competitors coming into the Finals may be able to bring something that we haven't seen before, and that would be really exciting. I think it'll really make for an incredibly vibrant and illuminating final event.

The final event qualification deadline for the Systems Competition is April 21, and the qualification deadline for the Virtual Competition is June 29. More details here.
