
#439257 Can a Robot Be Arrested? Hold a Patent? ...

Steven Cherry When horses were replaced by engines, for work and transportation, we didn’t need to rethink our legal frameworks. So when a fixed-in-place factory machine is replaced by a free-standing AI robot, or when a human truck driver is replaced by autonomous driving software, do we really need to make any fundamental changes to the law?

My guest today seems to think so. Or perhaps more accurately, he thinks that surprisingly, we do not; he says we need to change the laws less than we think. In case after case, he says, we just need to treat the robot more or less the same way we treat a person.

A year ago, he was giving presentations in which he argued that AIs can be patentholders. Since then, his views have advanced even further in that direction. And so last fall, he published a short but powerful treatise, The Reasonable Robot: Artificial Intelligence and the Law, with Cambridge University Press. In it, he argues that the law, more often than not, should not discriminate between AI and human behavior.


Ryan Abbott is a Professor of Law and Health Sciences at the University of Surrey and an Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA. He’s a licensed physician, attorney, and acupuncturist in the United States, as well as a solicitor in England and Wales. His M.D. is from UC San Diego’s School of Medicine; his J.D. is from Yale Law School; and his M.T.O.M.—Master of Traditional Oriental Medicine—degree is from Emperor’s College. And what with all that going on—and his book—I’m very happy to have him as my guest today.

Ryan, I have to appear at traffic court and I have lower back pain, so, welcome to the podcast.

Ryan Abbott Well, thank you for having me. And not to worry, you can get both of those things fixed up here.

Steven Cherry Very good. Ryan, your starting point was back in 2014, when you realized how much pharmaceutical companies were relying on AI in the process of drug discovery.

Ryan Abbott In 2014, I was doing a couple of things. I was teaching patent law and in particular the law surrounding what it means to be an inventor. And I was doing work for biotech companies and helping protect their research in pharmaceutical R&D and patenting that sort of research.

And if you’re not aware of this: if you’re a pharmaceutical company, you can basically outsource every element of drug discovery, from finding new compounds to preclinical testing to having someone run clinical trials. And although the company I worked for didn’t end up using some of these vendors, there were a number of companies basically advertising that if you told them the therapeutic target you were interested in, they would have computers go through a large library of antibodies, pick the antibody best suited to that target, and provide you with a certain amount of data about how that antibody functions.

That was interesting to me because it made me think: when we have a person who does that, they’re an inventor. And then we go and get a patent on that antibody to treat that antigen, the target, and that patent is the foundation of pretty much all patent portfolios for biological drugs. So what happens if, instead of having a person do that sort of thing, you had a machine do it? I wondered if anyone had ever thought of that before.

And it turns out people have been thinking about it since at least the ’60s, but more or less saying, well, what would you need a patent for? Because a machine wouldn’t be motivated by getting a patent. So you could just leave this thing in the public domain. And I thought, well, that isn’t quite right, because, sure, a machine doesn’t care about a patent, but biotechnology companies that are investing millions or billions of dollars in finding new drugs care about patents. And if a machine can do a better job at finding a new drug, why would we not want to protect that sort of innovation? So that was my entrance into the field.

Steven Cherry So patents were your starting point. And if I understand it correctly, it’s where you began to think about parity between humans and robots or AIs a little bit more generally. I take it it’s not so much that you think that AIs are inventors—and I even said “patentholders” before and that’s not quite right—it’s more that you think that we would get the best outcomes for society when we treat them that way, that all the other legal paths lead to less innovation and invention. And that’s what we have a patent system for. So how does that work?

Ryan Abbott Right. I think that’s right. It’s a little less parity between an AI and a person and a little more parity between AI and human behavior. It seems like a subtle difference, but it’s also a powerful one. For example, an AI wouldn’t own a patent, not only because an AI doesn’t have legal personality the way a company does and so couldn’t hold the patent, but also because, even if you were going to change the law, an AI wouldn’t care about getting a patent and wouldn’t be able to exploit one. An AI isn’t like a person; it isn’t morally deserving of rights.

But functionally an AI can behave indistinguishably from a person. And the law, by treating two different actors very differently, by treating two different types of behavior very differently, ends up having some perverse outcomes. So, for example, imagine that Pfizer had an AI that could replace a team of human researchers. When Covid-19 came along, Pfizer showed its AI the virus; the AI found a new antibody for it, formulated it as a new drug, simulated some clinical trials, and told Pfizer how to manufacture the thing, and basically Pfizer automated its whole R&D process.

If Pfizer can do that better with a machine than with a person, it seems to me the sort of thing we want to be encouraging them to do. And yet, if Pfizer can’t get patents on any of that kind of activity, it loses what is really the primary way that the drug industry protects its research and monetizes it. And it is perhaps the industry where intellectual property is most important, although with Covid there’s a whole lot of issues surrounding that. But in general, you see how the law would treat behavior differently. And when it does that, it encourages or discourages people in one direction without necessarily having a good reason to. And in other areas of the law, the law similarly pushes us towards human or AI behavior, again without necessarily meaning to, and sometimes with perverse sorts of outcomes.

Steven Cherry We’re going to get to those unintended consequences in a little bit. But in the meantime, sticking with this patent question, a little less than two years ago, you filed for two international patents for “AI generated inventions.” For something to be patentable, it has to be novel, non-obvious, and useful. What did you patent in that case?

Ryan Abbott We had two inventions come out of an AI. One was for a flashing light that could preferentially attract human attention. So, for example, in an emergency situation, if you wanted to attract human or AI attention to, say, a plane crash, you could have a light flashing in this particular way. The other is a beverage container based on fractal geometry. It looks somewhat like a snail shell and could be more useful for transportation or storage, or for grip by a person or a machine.

Those were two inventions that the UK and European patent offices held were otherwise patentable. But in our case, we didn’t have a traditional human inventor. We did have someone who built an AI, someone who used an AI, and someone who owned an AI. But traditionally in patent law—and, well, this depends on your jurisdiction—in the US or the UK, to be an inventor you have to have basically thought up the entire invention as it is to be applied in practice. So, for example, if I were inventing a new drug to treat Covid-19, I would have to be the person who goes out and finds the antibody to treat it, not the pharmaceutical executive who says it would be great if we had a Covid-19 vaccine, or someone who carries out instructions at someone else’s command.

In our case, we had someone train and program an AI, but not to solve either of those particular problems. They just trained it to do unsupervised generative learning. It produced those outputs and identified them as having value. And to a team of patent attorneys, they were things we could file patents on. So in those instances we lacked someone who was traditionally an inventor.

Siemens reported a similar case study in 2019 at the first WIPO Conversation on AI and IP, where they had an AI generate a new industrial component for a car. The entire engineering team involved in the project thought, this is great, this looks like what we want. And when Siemens wanted to file a patent on it, the engineers said, well, we aren’t inventors on this; we had no idea what the machine was going to come up with, and it was obviously valuable, so that wouldn’t be appropriate. In the US, at least, saying you’re an inventor on a patent when you’re not carries a criminal penalty.

Steven Cherry WIPO is the World Intellectual Property Organization. When you filed these patents, you sort of sandbagged the patent offices, didn’t you? You didn’t tell them that there was no human inventor.

Ryan Abbott Well, initially in the UK and in the European Patent Office, you’re allowed not to disclose an inventor for 18 months. And we wanted to do that to see if these were inventions that were substantively patentable. One of the interesting issues is that listing an inventor is generally a formalities issue rather than a substantive requirement for a patent. In fact, right now the European Patent Office is debating whether there is any substantive requirement to list a human inventor at all, or any kind of inventor, or whether this is really just something done on a formalities basis.

There are good reasons for listing human inventors. I’m an inventor on a few patents, and it protects my moral rights. I want to be acknowledged for the work I’ve done, but in most jurisdictions that isn’t a right that has any sort of direct financial implication, although it can signal someone’s productivity to future employers, or sometimes people have contracts where they get certain benefits from it. Historically, there’s been some talk about whether you can have a company be an inventor, and the law has come down pretty firmly against that. But if you were to list, say, IBM as an inventor on its many patents, it might exclude its scientists from the credit of being acknowledged. That’s very different, though, from our case, where you just don’t have a person to be acknowledged in the traditional sense and where an AI has functionally done the inventing.

So when we did disclose that there was no traditional human inventor, the UK and European offices rejected the applications on a formalities basis. And we filed in about a dozen other jurisdictions worldwide. Several of those have now rejected the application, but those are all under appeal.

Steven Cherry This is something of a test case. Where does it stand? You’re confident of winning in any of these courts?

Ryan Abbott I am confident we will get a patent in some jurisdictions, yes. And potentially on a different basis. One of the reasons for doing this was, as you said, as a test case. When I started talking about this in 2014 (and, again, I was hardly the first person to talk about it), people would think it was kind of vaguely, maybe, interesting. Within a five-year period, I had companies coming up to me after talks asking what they should be doing about this. Less because they had AI that was truly autonomous in R&D, and more because teams are getting bigger and more multidisciplinary, there are collaborations between tech companies and traditional R&D companies (things like Microsoft and Novartis working together), and the boundaries of what makes someone an inventor are less clear. And also, there’s a desire to make sure that companies aren’t losing protection by bringing AI into the process.

Steven Cherry So you came to see patents as an instance of a broader principle that you think in most cases should govern our legal regimes regarding robots and AIs. And you apply it rather broadly and we’ll get to some specific areas of the law. But first, tell us about the Principle of Legal Neutrality.

Ryan Abbott The idea is essentially that the law would better achieve its underlying goals if it did less to discriminate between AI and human behavior. And again, that is subtly but importantly different from treating an AI and a person the same—it’s treating their behavior the same way. So, for example, if an AI generates an invention without a traditional human inventor, that’s something we can protect; not that an AI would own a patent. I think the best way to see it is in the context of certain examples. If you had an AI driving an Uber instead of a human Uber driver, right now we hold those two vehicles to very different liability standards. And it’s not clear there is a good reason for doing that, if what we’re trying to get out of accident law is mainly to have fewer people run over. You can have an AI running a podcast or teaching a university course or operating a cash register at McDonald’s. And yet tax law treats these activities very differently, in ways that encourage or discourage employers from automating. And again, without really intending to. The theory is that, among many other principles of AI regulation, like fairness, transparency, and non-bias, doing less to discriminate between AI and human behavior would generate broad social benefits.

Steven Cherry This gets us to that question of unintended consequences. And these are the two, I think, really interesting examples. In the case of tort law—and specifically liability—treating AIs and humans differently leads us to disfavor the AI. And in the case of taxes, it causes us to favor the AI. Let’s take these in turn and start with tort law.

Ryan Abbott Well, as you point out, this isn’t necessarily that the law goes one way or the other, just that it pushes things in ways that we don’t really want it to, or at least certainly haven’t intended it to.

So, again, take this example: in a few years we will have a self-driving Uber. And when you want an Uber or Lyft, you go on your phone and it gives you the option of a person or a machine. You essentially have AI stepping into the shoes of people and doing exactly the same sort of thing that a person would do. But because the law of accidents is very human-centric, there are two different liability regimes for a self-driving car and a human-driven car.

For a human-driven car, we evaluate it under a negligence standard, which asks essentially: what would a hypothetical reasonable person have done? So if a kid jumps out in front of your car and you slam on the brakes but accidentally hit them, we ask, would a reasonable human driver have avoided that collision? If yes, then you’re liable; if no, then you’re not. But self-driving cars are commercial products, and we have a different liability test for them, which is strict liability, or product liability. It’s a little complicated, but basically we just ask, was there a defect with the AI, and did it cause the accident or not? If it did, there’s liability; if it didn’t, there isn’t. Without getting too into the weeds on product versus negligence liability, the gist is that strict liability is a higher level of liability, and because there’s more liability associated with it, that discourages the use of AI, because there’s more cost associated with using it.

Now, that’s probably not going to be a good system if it turns out that AI is a better driver than a person. And it is almost certainly going to turn out that way, because 94 percent of car accidents are caused by human error. More than thirty thousand people a year are killed in the US and more than a million people are killed worldwide, and a much larger number are seriously injured. Self-driving cars, while they’re definitely not perfect, are almost certainly going to be, in the not too distant future, safer than your average human driver. So the problem with having a stricter liability standard for machines than for people is that it discourages us from using them. But if machines are safer than people, then we really want to be encouraging their use through accident law. My proposal is that if you just look at the behavior, we would simply ask: was that reasonable driving?

And again, this isn’t quite treating people and machines the same way as actors, because the self-driving car itself wouldn’t be liable; the manufacturer of the self-driving car would still be liable.

Steven Cherry The situation with taxes is just the opposite. It actually encourages the development and deployment of A.I. systems.

Ryan Abbott Yeah, and again, it does so in ways that don’t necessarily result in better social outcomes. So if my university could replace me with a robot, they would. There’s a lot that goes into that decision, but let’s say that I and the robot were equally productive and about equally costly. My university would want to automate because it would save on taxes, because tax law, like tort law, is human-centric.

And we also have a tax system that preferentially taxes labor over capital. So, for example, my employer pays payroll taxes that include contributions by an employer to various social service systems. In the UK that’s about a 13 percent National Insurance contribution; in the US it’s the employer portion of payroll taxes. If you have an AI do the same job, the employer doesn’t have to pay that human-centric tax. And so tax policy is driving businesses to automate, even if they’re only doing it to save on taxes. And there are other ways in which human-centric laws encourage automation without meaning to. It’s a little more complicated, but the gist is that the same behavior by a person and a machine is taxed differently, and in this case the government is encouraging businesses to automate.

There’s another reason that’s problematic besides having an unlevel playing field, and it’s that most of our tax revenue comes from, again, human-centric taxes. In the US, about 35 percent of federal tax revenue comes from payroll taxes, and about 50-something percent comes from income taxes, which are largely labor-based. So if you automate jobs, you don’t get the tax revenue that would have come from paying, say, me, because robots don’t pay taxes.

That sounds a little ridiculous, but less so when you realize the payroll taxes all go away, and whatever would have come from an income tax maybe comes from corporate taxes, but at a much, much lower rate. So my university won’t necessarily be more profitable with the robot than with me if it’s only saving a little bit on taxes. But businesses pay a much smaller effective and marginal tax rate than individuals do. Tax policy not only encourages automation, it also removes government funding.
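To make the arithmetic concrete, the sketch below runs the comparison Abbott describes under assumed, illustrative numbers: the 13 percent employer payroll contribution echoes the National Insurance figure mentioned above, while the income-tax rate, corporate-tax rate, wage, and automation cost are placeholders chosen only to show the direction of the incentive, not figures from the interview.

```python
# Illustrative sketch only (not from the interview): compare employer cost and
# government tax revenue when a job is done by a person versus an AI system.
# All rates and amounts below are assumptions for demonstration.

PAYROLL_TAX = 0.13    # employer payroll contribution on wages (National Insurance-like rate)
INCOME_TAX = 0.30     # assumed average income tax rate paid by the human worker
CORPORATE_TAX = 0.21  # assumed corporate tax rate on the profit created by automating

def human_worker(wage: float) -> tuple[float, float]:
    """Employer cost and total tax revenue when a person does the job."""
    employer_cost = wage * (1 + PAYROLL_TAX)   # wages plus employer payroll tax
    tax_revenue = wage * PAYROLL_TAX + wage * INCOME_TAX
    return employer_cost, tax_revenue

def automated(wage: float, system_cost: float) -> tuple[float, float]:
    """Employer cost and tax revenue when an AI system does the same work.
    No payroll or income tax is due; only corporate tax on the saving."""
    employer_cost = system_cost
    extra_profit = max(wage - system_cost, 0)  # what the employer no longer spends on labor
    tax_revenue = extra_profit * CORPORATE_TAX
    return employer_cost, tax_revenue

if __name__ == "__main__":
    wage = 50_000
    for label, (cost, revenue) in [
        ("human", human_worker(wage)),
        ("automated", automated(wage, system_cost=45_000)),
    ]:
        print(f"{label:>9}: employer cost = {cost:>9,.0f}  tax revenue = {revenue:>9,.0f}")
```

With these placeholder numbers the automated option is both cheaper for the employer and yields only a small fraction of the tax revenue, which is the twin effect, an unlevel playing field plus lost government funding, that the indirect robot tax proposals discussed below aim to correct.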

Steven Cherry It’s not like an automated checkout system at the supermarket can pay income taxes or have payroll taxes deducted from its wages. So how would this work?

Ryan Abbott Right. Some people have suggested a literal robot tax, and that would have the benefits of leveling the playing field and also ensuring tax revenue. But a literal robot tax has a lot of problems. For starters, defining a robot: is it the checkout cashier? Is it a Roomba? You would have a lot of administrative gamesmanship with the IRS about this. It could penalize business models that are legitimately more efficient with automation, and it would result in a lot of administrative overhead.

So I think a better solution is an indirect robot tax. And how would we do that? Well, we could do something like getting rid of human-centric taxes. So get rid of payroll taxes, and now suddenly you’re no longer encouraging automation quite so much. On the other hand, you would have to make that up from somewhere. We could, for example, increase income tax rates or marginal rates on high earners. But probably a better system would be to increase the tax burden on capital by doing things like raising corporate taxes, increasing capital gains taxes, or doing away with things like the stepped-up basis rule, which benefits capital.

And I think there are two reasons to do this. One is that, historically, we’ve emphasized taxing labor heavily because we think no one’s going to stop working over taxes [but that] people will go invest their money in lower-tax jurisdictions if we raise taxes on capital.

There’s already a lot of scholarship challenging that assumption. It’s going to be even more important to challenge it as automation takes greater hold and as less labor is required to make money from capital. And it will also have some impact on distributional fairness, because AI is going to generate a tremendous amount of wealth, but likely very concentrated among people who already have wealth. Increasing the tax burden on capital would have some distributional benefits.

Steven Cherry Finally, we get to the criminal law and here you say it’s not so easy to apply the Principle of Legal Neutrality.

Ryan Abbott Yeah, criminal law is a little more challenging for this sort of thing than other areas of the law. I tend to take a pretty functionalist attitude toward what the law is looking for. So with patents, we’re trying to incentivize and encourage innovation; with tax law, we’re trying to promote economic activity and achieve some other aims, like distribution of resources; with tort law, we’re trying to reduce accidents; and so forth. Criminal law is a little different because it is the branch of law that most cares about, well, not what happened, but why someone did something. If an Uber runs me over, that may be a tort, but it’s a crime if the person driving the Uber was trying to run me over, or at least behaving so carelessly that they apparently couldn’t have cared less.

And so it makes it potentially a little more challenging to think about AI behavior in a context in which the law really cares about intrinsic motivations for doing things and also where the law cares about things like retribution and culpability. “How morally blameworthy were you for doing something?” Or, “Let’s punish people for doing things not just because it has good social outcomes, but because we think that’s the best thing to do.”

And so the book looks at AI and human behavior and thinks about whether you could ever consider AI behavior as criminal if you didn’t have a traditional person behind it. So if I pick up a computer and strike you with it, that’s not the computer committing a battery; it’s me using a computer as a tool. But as we have increasingly sophisticated AI that is open source, that many people are contributing to, it is likely to be acting in ways we traditionally think of as criminal, without a person we can necessarily point to and say, well, this was a person who did a criminal thing. It’s more like a company: this was an entity that did something criminal, and there isn’t a clear person associated with it.

And indeed, the idea of criminally punishing an AI isn’t as ridiculous as it might seem. We already criminally punish artificial persons in the form of corporations, and we don’t always require some bad mindset: for strict liability crimes, for certain, say, unusually dangerous activities, we only require that someone broke the law, not that they had a bad mindset. And in fact, you can even have criminal liability for failing to act when you have a duty to act. So criminally punishing an artificial person, without a guilty mind, for failing to act is something the law already accommodates, and this chapter looks at whether that makes any sense in the context of AI.

Steven Cherry I think there’s also a common-sense element to this. I mean, we could have the philosophical discussion about whether robots or AIs can ever have consciousness. But leaving that aside, it’s becoming a little bit common—and it’s probably going to become much more common—to talk about a robot’s intentions or an AI’s intentions. I mean, frankly, we often impute intent to almost any entity that seems capable of organized and complex self-movement with a goal in mind. So the Roomba floor cleaner that you mentioned before might not rise to that level. But Rosie, the maid in The Jetsons, would.

Ryan Abbott Right. And this is indeed something that is discussed often in the corporate criminal context, whether corporations have intentions. And the way one largely gets around this is by imputing the intentions of human agents onto the corporation. Some people are against holding a corporation liable for criminal acts at all, in part because the people who suffer from this are potentially innocent shareholders rather than the person at the company who did the bad thing. But other people have a different view of companies, which is that they are more than the sum of their agents, that people behave with groupthink and in ways that are too subtle to really criminalize directly. And there is some sort of synergistic sense in which companies are legitimately thought about as independent agents, separate and aside from people, although corporations, of course, are literally made up of people, and AI isn’t.

Whether or not at a philosophical level you want to say that an AI has intent or not, certainly it functionally has intent. Right?

Whether it makes any sense to think about that under criminal liability, well, there could be some benefits to doing that. So, for example, if we have a self-driving Tesla that for one reason or another ended up targeting investment bankers, if we were to convict that Tesla of a crime, it would say to society, we’re not going to tolerate this behavior regardless of the nature of the actor. And this might change how Tesla behaves if the car was destroyed and particularly if it had some sort of follow-on punishment for Tesla.

One of the things the book looks at is that doing something like that isn’t quite as radical a departure as one might think, and there would be some benefits to it, but also a lot of disadvantages. Namely, it could erode trust in the law, and it could lead people to think of machines as being morally on par with a person.

A better system would be to find a person upstream of the AI’s behavior to hold liable, either civilly or criminally, and probably civilly. Because if an AI did something criminal but we couldn’t find anything criminal that the person had done, it would not be that they had done something criminal in their own right, but that they had contributed to something that went on to cause harm.

Steven Cherry I think we can now guess what Stephen King’s next book is going to be, right? It’s Christine II, featuring the murderous Tesla.

What about other aspects of the law? I’m wondering about things like, I don’t know, constitutional freedoms. If corporations have free speech rights, why shouldn’t an AI?

Ryan Abbott Well, these issues are very much subjects of debate. I think my comment from the book would be that maybe there should be some protections afforded to AI behavior, but I think we’d have to be very careful in doing that, and note that we’re not doing this for the AI’s sake. The AI has no interest in whether or not it has the right to bear arms or the right to marry or the right to free speech.

It’s only something that we would want to do if we looked at it carefully and decided that we as a society would be better off granting certain legal rights to AI behavior. And indeed, that’s the theory behind rights for companies. The theory is not that a company is a person [and] is morally deserving of rights [and] it would be unfair to have it being unable to exercise its right to free speech. The theory is that granting rights to artificial entities in the form of corporations benefits people. Primarily, letting [corporations] own property and enter into contracts helps encourage commerce and entrepreneurship. Whether it is beneficial for society to have companies have the right to make certain political donations or engage in certain sorts of speech is something that we really need to look carefully at, but always bearing in mind, well, what are we doing this for? It’s to benefit society and the people—not to grant rights to something that isn’t a person for the sake of doing it.

Steven Cherry Lastly, what about areas outside of the law that support our laws? For example, the IEEE is one of the leading standards-setting bodies in the world. What role do standards play, or could they play, as the laws around robotics and AI develop?

Ryan Abbott Well, I think a very important one. The law is one way in which we regulate behavior, and it isn’t the appropriate solution for every situation. The law can be fairly heavy-handed, and it really sets a floor for behavior, the sorts of things you should not do. But we expect people to hold themselves to a higher standard. So it’s not enough for companies to say, well, we’re complying with the law. They really need to think carefully through their own governance of AI, their ethics around the use of AI, individual use cases, the risks and benefits, and to act in the best way they can. And it shouldn’t be every company on its own, either. There is an important role for industry groups to get together and think through some of these very difficult challenges, and to come up with soft norms that guide their own behavior in beneficial ways.

Steven Cherry Bad cases make bad law. And I guess one thing the standards could do is keep us from having the kinds of situations that lead to bad law.

Ryan Abbott There will always be a role for the law because there are some bad actors out there, and not every company will be as interested in being benevolent with AI as others. But the good corporate actors and industry players should indeed not have to worry about running afoul of the law, because they should be exceeding any legal obligations on them in terms of their use of AI.

Steven Cherry Well, Ryan, the Principle of Legal Neutrality is a remarkable thesis, and you’ve written a remarkable book about it. And the time to think about it and debate it is now before we blindly go down the road toward one conclusion or another without having thought about the consequences and which path better serves humanity—which better conforms to the purposes for which we make laws in the first place. Thanks for helping to plant that intellectual flag of discovery and for joining us today.

Ryan Abbott Thanks so much for having me.

Steven Cherry We’ve been speaking with Ryan Abbott, lawyer, doctor, perpetual student and polymath, though I think the only degree he doesn’t have is math, and author of The Reasonable Robot: Artificial Intelligence and the Law, published by Cambridge University Press.

Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.

This interview was recorded May 17, 2021 using Skype and Adobe Audition. Our theme music is by Chad Crouch.

You can subscribe to Radio Spectrum on Spotify, Apple, and wherever else you get your podcasts, or listen on the Spectrum website, which also contains transcripts of this and all our past episodes. We welcome your feedback on the web or in social media.

For Radio Spectrum, I’m Steven Cherry.


#439241 The MIT humanoid robot: A dynamic ...

Creating robots that can perform acrobatic movements such as flips or spinning jumps can be highly challenging. Typically, in fact, these robots require sophisticated hardware designs, motion planners and control algorithms.


#439230 Using a virtual linkage representation ...

A team of researchers at Yale University has developed a new kind of algorithm to improve the functionality of a robot hand. In their paper published in the journal Science Robotics, the group describes their algorithm and then demonstrates, via videos, how it can be used.


#438882 Robotics in the entertainment industry

Mesmer Entertainment Robotics demonstrate some of their humanoid animatronics, as well as their humanoid robot, Owen.


#439211 A highly dexterous robot hand with a ...

A team of researchers at Yale University's Department of Mechanical Engineering and Materials Science has developed a robot hand that employs a caging mechanism. In their paper published in the journal Science Robotics, the group describes their research into applying a caging mechanism to robot hands and how well their demonstration models worked.
