Tag Archives: can
#439374 A model to predict how much humans and ...
Researchers at the University of Michigan have recently developed a bidirectional model that can predict how much both humans and robotic agents can be trusted in situations that involve human-robot collaboration. This model, presented in a paper published in IEEE Robotics and Automation Letters, could help to allocate tasks to different agents more reliably and efficiently.
#439366 Why Robots Can’t Be Counted On to Find ...
On Thursday, a portion of the 12-story Champlain Towers South condominium building in Surfside, Florida (just outside of Miami) suffered a catastrophic partial collapse. As of Saturday morning, according to the Miami Herald, 159 people are still missing, and rescuers are removing debris with careful urgency while using dogs and microphones to search for survivors still trapped within a massive pile of tangled rubble.
It seems like robots should be ready to help with something like this. But they aren’t.
Photo: Joe Raedle/Getty Images
A Miami-Dade Fire Rescue official and a K-9 continue the search and rescue operations in the partially collapsed 12-story Champlain Towers South condo building on June 24, 2021 in Surfside, Florida.
The picture above shows what the site of the collapse in Florida looks like. It's highly unstructured and would be challenging for most legged robots to traverse, although a tracked robot might be able to manage it. But there are already humans and dogs working there, and as long as the environment is safe to move over, it's not necessary or practical to duplicate that functionality with a robot, especially when time is critical.
What is desperately needed right now is a way of not just locating people underneath all of that rubble, but also getting an understanding of the structure of the rubble around a person, and what exactly is between that person and the surface. For that, we don’t need robots that can get over rubble: we need robots that can get into rubble. And we don’t have them.
To understand why, we talked with Robin Murphy at Texas A&M, who directs the Humanitarian Robotics and AI Laboratory, formerly the Center for Robot-Assisted Search and Rescue (CRASAR), which is now a non-profit. Murphy has been involved in applying robotic technology to disasters worldwide, including 9/11, Fukushima, and Hurricane Harvey. The work she’s doing isn’t abstract research—CRASAR deploys teams of trained professionals with proven robotic technology to assist (when asked) with disasters around the world, and then uses those experiences as the foundation of a data-driven approach to improve disaster robotics technology and training.
According to Murphy, using robots to explore the rubble of collapsed buildings is, for the moment, not possible in any kind of way that could be realistically used on a disaster site. Rubble, generally, is a wildly unstructured and unpredictable environment. Most robots are simply too big to fit through rubble, and the environment isn't friendly to very small robots either, since there's frequently water from ruptured plumbing making everything muddy and slippery, among many other physical hazards. Wireless communication or localization is often impossible, so tethers are required, which solve the comms and power problems but can easily get caught or tangled on obstacles.
Even if you can build a robot small enough and durable enough to be able to physically fit through the kinds of voids that you’d find in the rubble of a collapsed building (like these snake robots were able to do in Mexico in 2017), useful mobility is about more than just following existing passages. Many disaster scenarios in robotics research assume that objectives are accessible if you just follow the right path, but real disasters aren’t like that, and large voids may require some amount of forced entry, if entry is even possible at all. An ability to forcefully burrow, which doesn’t really exist yet in this context but is an active topic of research, is critical for a robot to be able to move around in rubble where there may not be any tunnels or voids leading it where it wants to go.
And even if you can build a robot that can successfully burrow its way through rubble, there's the question of what value it's able to provide once it gets where it needs to be. Robotic sensing systems are in general not designed for extreme close quarters, and visual sensors like cameras can rapidly get damaged or get so much dirt on them that they become useless. Murphy explains that ideally, a rubble-exploring robot would be able to do more than just locate victims, but would also be able to use its sensors to assist in their rescue. “Trained rescuers need to see the internal structure of the rubble, not just the state of the victim. Imagine a surgeon who needs to find a bullet in a shooting victim, but does not have any idea of the layout of the victim's organs; if the surgeon just cuts straight down, they may make matters worse. Same thing with collapses, it's like the game of pick-up sticks. But if a structural specialist can see inside the pile of pick-up sticks, they can extract the victim faster and safer with less risk of a secondary collapse.”
Besides these technical challenges, the other huge part of all of this is that any system you'd hope to use in the context of rescuing people must be fully mature. It's obviously unethical to take a research-grade robot into a situation like the Florida building collapse and spend time and resources trying to prove that it works. “Robots that get used for disasters are typically used every day for similar tasks,” explains Murphy. For example, it wouldn't be surprising to see drones being used to survey the parts of the building in Florida that are still standing to make sure that it's safe for people to work nearby, because drones are a mature and widely adopted technology that has already proven itself. Until a disaster robot has achieved a similar level of maturity, we're not likely to see it take part in an active rescue.
Keeping in mind that there are no existing robots that fulfill all of the above criteria for actual use, we asked Murphy to describe her ideal disaster robot for us. “It would look like a very long, miniature ferret,” she says. “A long, flexible, snake-like body, with small legs and paws that can grab and push and shove.” The robo-ferret would be able to burrow, to wiggle and squish and squeeze its way through tight twists and turns, and would be equipped with functional eyelids to protect and clean its sensors. But since there are no robo-ferrets, what existing robot would Murphy like to see in Florida right now? “I’m not there in Miami,” Murphy tells us, “but my first thought when I saw this was I really hope that one day we’re able to commercialize Japan’s Active Scope Camera.”
The Active Scope Camera was developed at Tohoku University by Satoshi Tadokoro about 15 years ago. It operates kind of like a long, skinny, radially symmetrical bristlebot with the ability to push itself forward:
The hose is covered with inclined cilia. Motors with eccentric masses are installed in the cable; they excite vibration and cause an up-and-down motion of the cable. When the cable moves down, the tips of the cilia stick to the floor and propel the body forward. When the cable moves up, the tips slip against the floor, so the body does not move back. Repeating this process lets the cable slowly move through the narrow spaces of a rubble pile.
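To make the propulsion principle concrete, here is a minimal toy model of that stick-slip cycle in Python. It is purely illustrative, not the robot's actual control or dynamics; the vibration frequency, amplitude, and grip efficiency below are invented numbers.

```python
# Toy model of asymmetric-friction ("bristle") propulsion: during the downward
# half of each vibration cycle the inclined cilia grip the ground and push the
# body forward; during the upward half they slip, so the body does not slide back.
# All parameters are assumptions for illustration, not measured values.

import math

freq = 30.0            # vibration frequency in Hz (assumed)
amplitude = 1.0e-3     # vertical vibration amplitude in meters (assumed)
grip_gain = 0.2        # fraction of downward tip speed converted to forward speed (assumed)
dt = 1.0e-4            # integration time step in seconds
duration = 60.0        # simulated crawling time in seconds

x = 0.0                # forward position of the body in meters
t = 0.0
while t < duration:
    # vertical velocity of the vibrating cable
    vy = 2 * math.pi * freq * amplitude * math.cos(2 * math.pi * freq * t)
    if vy < 0:
        # downward half-cycle: cilia stick, and their incline converts part of
        # the downward push into forward travel
        x += grip_gain * (-vy) * dt
    # upward half-cycle: cilia slip, so no backward displacement is added
    t += dt

print(f"Toy-model forward travel after {duration:.0f} s: {x:.2f} m")
```

With these made-up numbers the model advances roughly 0.7 meters in the simulated minute; the real robot's speed depends entirely on its actual cilia, vibration, and the rubble it is squeezing through.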
“It's quirky, but the idea of being able to get into those small spaces and go about 30 feet in and look around is a big deal,” Murphy says. But the last publication we can find about this system is nearly a decade old—if it works so well, we asked Murphy, why isn’t it more widely available to be used after a building collapses? “When a disaster happens, there’s a little bit of interest, and some funding. But then that funding goes away until the next disaster. And after a certain point, there’s just no financial incentive to create an actual product that’s reliable in hardware and software and sensors, because fortunately events like this building collapse are rare.”
Photo: Center for Robot-Assisted Search and Rescue
Dr. Satoshi Tadokoro inserting the Active Scope Camera robot at the 2007 Berkman Plaza II (Jacksonville, FL) parking garage collapse.
The fortunate rarity of disasters like these complicates the development cycle of disaster robots as well, says Murphy. That’s part of the reason why CRASAR exists in the first place—it’s a way for robotics researchers to understand what first responders need from robots, and to test those robots in realistic disaster scenarios to determine best practices. “I think this is a case where policy and government can actually help,” Murphy tells us. “They can help by saying, we do actually need this, and we’re going to support the development of useful disaster robots.”
Robots should be able to help out in the situation happening right now in Florida, and we should be spending more time and effort on research in that direction that could potentially be saving lives. We're close, but as with so many aspects of practical robotics, it feels like we've been close for years. There are systems out there with a lot of potential; they just need all the help they can get to cross the gap from research project to a practical, useful system that can be deployed when needed.
#439357 How the Financial Industry Can Apply AI ...
Photo: iStockphoto
THE INSTITUTE: Artificial intelligence is transforming the financial services industry. The technology is being used to determine creditworthiness, identify money laundering, and detect fraud.
AI also is helping to personalize services and recommend new offerings by developing a better understanding of customers. Chatbots and other AI assistants have made it easier for clients to get answers to their questions, 24/7.
Although confidence in financial institutions is high, according to the Banking Exchange, that’s not the case with AI. Many people have raised concerns about bias, discrimination, privacy, surveillance, and transparency.
Regulations are starting to take shape to address such concerns. In April the European Commission released the first legal framework to govern use of the technology, as reported in IEEE Spectrum. The proposed European regulations cover a variety of AI applications including credit checks, chatbots, and social credit scoring, which assesses an individual’s creditworthiness based on behavior. The U.S. Federal Trade Commission in April said it expects AI to be used truthfully, fairly, and equitably when it comes to decisions about credit, insurance, and other services.
To ensure the financial industry is addressing such issues, IEEE recently launched a free guide, “Trusted Data and Artificial Intelligence Systems (AIS) for Financial Services.” The authors of the nearly 100-page playbook want to ensure that those involved in developing the technologies are not neglecting human well-being and ethical considerations.
More than 50 representatives from major banks, credit unions, pension funds, and legal and compliance groups in Canada, the United Kingdom, and the United States provided input, as did AI experts from academia and technology companies.
“We are in the business of trust. A primary goal of financial services organizations is to use client and member data to generate new products and services that deliver value,” Sami Ahmed says. He is a member of the IEEE industry executive steering committee that oversaw the playbook's creation.
Ahmed is senior vice president of data and advanced analytics at OMERS, Ontario's municipal government employees' pension fund and one of the largest institutional investors in Canada.
“Best-in-class guidance assembled from industry experts in IEEE’s finance playbook,” he says, “addresses emerging risks such as bias, fairness, explainability, and privacy in our data and algorithms to inform smarter business decisions and uphold that trust.”
The playbook includes a road map to help organizations develop their systems. To provide a theoretical framework, the document incorporates IEEE’s “Ethically Aligned Design” report, the IEEE 7000 series of AI standards and projects, and the Ethics Certification Program for Autonomous and Intelligent Systems.
“Design looks completely different when a product has already been developed or is in prototype form,” says John C. Havens, executive director of the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems. “The primary message of ethically aligned design is to use the methodology outlined in the document to address these issues at the outset.”
Havens adds that although IEEE isn’t well known by financial services regulatory bodies, it does have a lot of credibility in harnessing the technical community and creating consensus-based material.
“That is why IEEE is the right place to publish this playbook, which sets the groundwork for standards development in the future,” he says.
IEEE Member Pavel Abdur-Rahman, chair of the IEEE industry executive steering committee, says the document was necessary to accomplish three things. One was to “verticalize the discussion within financial services for a very industry-specific capability-building dialog. Another was to involve industry participants in the cocreation of this playbook, not only to curate best practices but also to develop and drive adoption of the IEEE standards into their organizations.” Lastly, he says, “it's the first step toward creating recommended practices for MLOps [machine-learning operations], data cooperatives, and data products and marketplaces.”
Abdur-Rahman is the head of trusted data and AI at IBM Canada.
ROAD MAP AND RESOURCES
The playbook has two sections: a road map for how to build trusted AI systems, and resources from experts.
The road map helps organizations identify where they are in the process of adopting responsible ethically aligned design: early, developing, advanced, or mature stage. This section also outlines 20 ways that trusted data and AI can provide value to operating units within a financial organization. Called use cases, the examples include cybersecurity, loan and deposit pricing, improving operational efficiency, and talent acquisition. Graphs are used to break down potential ethical concerns for each use case.
The key resources section includes best practices, educational videos, guidelines, and reports on codes of conduct, ethical challenges, building bots responsibly, and other topics. Among the groups contributing resources are the European Commission, IBM, the IEEE Standards Association, Microsoft, and the World Economic Forum. Also included is a report on the impact the coronavirus pandemic has had on the financial services industry in Canada. Supplemental information includes a list of 84 documents on ethical guidelines.
“We are at a critical junction of industrial-scale AI adoption and acceleration,” says Amy Shi-Nash, a member of the steering committee and the global head of analytics and data science for HSBC. “This IEEE finance playbook is a milestone achievement and provides a much-needed practical road map for organizations globally to develop their trusted data and ethical AI systems.”
To get an evaluation of the readiness of your organization’s AI system, you can anonymously take a 20-minute survey.
IEEE membership offers a wide range of benefits and opportunities for those who share a common interest in technology. If you are not already a member, consider joining IEEE and becoming part of a worldwide network of more than 400,000 students and professionals.
#439297 5 Modern Technologies That Can Become a ...
A hundred years ago, it was difficult to imagine that humanity would be able to fly into space, create artificial intelligence, and instantly exchange information. Modern technology has greatly changed the way the current generation of people lives. But there are still incredible discoveries that humanity has not yet made. Among them is the solution …
The post 5 Modern Technologies That Can Become a Key to Immortality appeared first on TFOT.
#439257 Can a Robot Be Arrested? Hold a Patent? ...
Steven Cherry When horses were replaced by engines, for work and transportation, we didn’t need to rethink our legal frameworks. So when a fixed-in-place factory machine is replaced by a free-standing AI robot, or when a human truck driver is replaced by autonomous driving software, do we really need to make any fundamental changes to the law?
My guest today seems to think so. Or, perhaps more accurately, he thinks that, surprisingly, we do not; he says we need to change the laws less than we think. In case after case, he says, we just need to treat the robot more or less the same way we treat a person.
A year ago, he was giving presentations in which he argued that AIs can be patentholders. Since then, his views have advanced even further in that direction. And so last fall, he published a short but powerful treatise, The Reasonable Robot: Artificial Intelligence and the Law, published by Cambridge University Press. In it, he argues that the law more often than not should not discriminate between AI and human behavior.
Ryan Abbott is a Professor of Law and Health Sciences at the University of Surrey and an Adjunct Assistant Professor of Medicine at the David Geffen School of Medicine at UCLA. He's a licensed physician, an attorney, and an acupuncturist in the United States, as well as a solicitor in England and Wales. His M.D. is from UC San Diego's School of Medicine; his J.D. is from Yale Law School; and his M.T.O.M.—Master of Traditional Oriental Medicine—degree is from Emperor's College. And what with all that going on—and his book—I'm very happy to have him as my guest today.
Ryan, I have to appear at traffic court and I have lower back pain, so, welcome to the podcast.
Ryan Abbott Well, thank you for having me. And not to worry, you can get both of those things fixed up here.
Steven Cherry Very good. Ryan, your starting point was back in 2014, and it was when you realized how much pharmaceutical companies were relying on AI in the process of drug discovery.
Ryan Abbott In 2014, I was doing a couple of things. I was teaching patent law and in particular the law surrounding what it means to be an inventor. And I was doing work for biotech companies and helping protect their research in pharmaceutical R&D and patenting that sort of research.
And if you're not aware of this, if you're a pharmaceutical company, you can basically outsource every element of drug discovery, from finding new compounds to preclinical testing to clinical trials. And although the company I worked for didn't end up using some of these vendors, there were a number of companies basically advertising that if you told them the therapeutic target you were interested in, they would have computers go through a large library of antibodies that they had and pick an antibody that would be the best antibody for that target, and provide you with a certain amount of data around how that antibody functions.
That was interesting to me because it made me think: when we have a person who does that, they're an inventor. And then when we get a patent on that antibody to treat that antigen, the target, that patent is the foundation of pretty much all patent portfolios for biological drugs. So what happens if instead of having a person do that sort of thing, you had a machine do that sort of thing? I wonder if anyone's ever thought of that before.
And it turns out people have been thinking about it since at least the '60s, but more or less saying, well, what would you need a patent for? Because a machine wouldn't be motivated by getting a patent. So you could just leave this thing in the public domain. And I thought, well, that isn't quite right, because, sure, a machine doesn't care about a patent, but biotechnology companies that are investing millions or billions of dollars in finding new drugs care about patents. And if a machine can do a better job at finding a new drug, why would we not want to protect that sort of innovation? So that was my entrance into the field.
Steven Cherry So patents were your starting point. And if I understand it correctly, it’s where you began to think about parity between humans and robots or AIs a little bit more generally. I take it it’s not so much that you think that AIs are inventors—and I even said “patentholders” before and that’s not quite right—it’s more that you think that we would get the best outcomes for society when we treat them that way, that all the other legal paths lead to less innovation and invention. And that’s what we have a patent system for. So how does that work?
Ryan Abbott Right. I think that's right. It's a little less parity between an AI and a person and a little more parity between AI and human behavior. And it seems like a subtle difference, but it's also a powerful one. For example, an AI wouldn't own a patent, not only because an AI doesn't have legal personality the way a company does and so couldn't hold the patent, but because even if you were going to change the law, an AI wouldn't care about getting a patent and wouldn't be able to exploit a patent. An AI isn't like a person; it isn't morally deserving of rights.
But functionally an AI can behave indistinguishably from a person. And the law, by treating two different actors very differently, by treating two different types of behavior very differently, ends up having some perverse outcomes. So, for example, imagine that Pfizer had an AI that could replace a team of human researchers. And so when Covid-19 came along, Pfizer showed its AI the virus; the AI found a new antibody for it, formulated it as a new drug, simulated some clinical trials, told Pfizer how to manufacture the thing, and basically Pfizer automated their whole R&D process.
If Pfizer can do that better with a machine than a person, it seems to me the sort of thing we want to be encouraging them to do. And yet, if Pfizer can’t get patents on any of that kind of activity, that’s really the primary way that the drug industry protects its research and monetizes it. And it is perhaps the industry where intellectual property is most important, although with Covid, there’s a whole lot of issues surrounding that. But in general, you see how the law would treat behavior differently. And when it does that, it encourages or discourages people in one direction without necessarily having a good reason to. And in other areas of the law, the law similarly pushes us towards human or A.I. behavior—again, without necessarily meaning to and sometimes with perverse sorts of outcomes.
Steven Cherry We're going to get to those unintended consequences in a little bit. But in the meantime, sticking with this patent question, a little less than two years ago, you filed for two international patents for “AI-generated inventions.” For something to be patentable, it has to be novel, non-obvious, and useful. What did you patent in that case?
Ryan Abbott We had two inventions come out of AI. One was for a flashing light that could preferentially attract human attention. So, for example, in an emergency situation, if you wanted to attract human or AI attention to, say, a plane crash, you could have a light flashing in this particular way. And the other is a beverage container based on fractal geometry, looking somewhat like a snail shell, that could be more useful for transportation or storage or grip by a person or a machine.
Those were two inventions that the UK and European Patent Office held were otherwise patentable. But in our case, we didn't have a traditional human inventor. We did have someone who built an AI and we had someone who used an AI and we had someone who owned an AI. But traditionally in patent law—and, well, this depends on your jurisdiction—but in the US or the UK, to be an inventor, you have to have basically thought up the entire invention as it is to be applied in practice. So, for example, if I was inventing a new drug to treat Covid-19, I would have to be the person who goes out and finds the antibody to treat it, not the pharmaceutical executive who says it'd be great if we had a Covid-19 vaccine or someone who carries out some instructions at someone else's command.
In our case, we had someone train and program an AI, but not to solve either of those particular issues. You just trained it to do unsupervised generative learning. It produced those outputs and targeted them and identified them as having value. And to a team of patent attorneys, they were things we could file patents on. So we in those instances lacked someone who was traditionally an inventor.
Siemens reported a similar case study in 2019 at the first WIPO Conversation on AI and IP, where they had an AI that generated a new industrial component for a car. Their entire engineering team that was involved in the project thought, this is great, this looks like what we want. And when Siemens wanted to file a patent on it, the engineers said, well, we aren't inventors on this. We had no idea what the machine was going to come up with, and it was obviously valuable. And so that wouldn't be appropriate. In the US, at least, saying you're an inventor on a patent when you're not carries a criminal penalty.
Steven Cherry WIPO is the World Intellectual Property Organization. When you filed these patents, you sort of sandbagged the patent offices, didn't you? You didn't tell them that there was no human inventor.
Ryan Abbott Well, initially in the UK, and in the European Patent Office, you’re allowed to not disclose an inventor for 18 months. And we wanted to do that to see if these were inventions that were substantively patentable. One of the interesting issues is that listing an inventor is generally a formalities issue rather than a substantive requirement for a patent. In fact, right now, the European Patent Office is debating whether or not there is any substantive requirement to list a human inventor at all or any kind of inventor, or whether this is really just something done again on a formalities basis. There are good reasons for listing human inventors. I’m an inventor on a few patents and it protects my moral rights. I want to be acknowledged for the work I’ve done, but in most jurisdictions, that isn’t a right that has any sort of direct financial implication, although it can signal someone’s productivity for future employers or sometimes people have contracts where they get certain benefits from it. Historically, there’s been some talk about whether or not you can have a company be an inventor and the law has come down pretty firmly against that. But if you were to list, say, IBM as an inventor for their multiple patents, it might exclude their scientists from the credit of being acknowledged. That’s very different, though, than our case, where you just don’t have a person to be acknowledged in the traditional sense and where an A.I. has functionally done the inventing. So when we did disclose that there was no traditional human inventor in the UK and in Europe, they rejected them on a formalities basis. And we filed in about a dozen other jurisdictions worldwide. Several of those have now rejected the application, but those are all under appeal.
Steven Cherry This is something of a test case. Where does it stand? You’re confident of winning in any of these courts?
Ryan Abbott I am confident we will get a patent in some jurisdictions, yes. And potentially on a different basis. One of the reasons for doing this case was, as you said, as a test case. When I started talking about this in 2014 (and again, I was hardly the first person to start talking about it), people would think that it was kind of vaguely maybe interesting. And within a five-year period, I would have companies coming up to me after the talks saying, what should we be doing about this? And less because they were having AI that was really, truly autonomous in R&D, but more because teams are getting bigger, teams are getting more multidisciplinary, there are collaborations between tech companies and traditional R&D companies, things like Microsoft and Novartis working together, and the boundaries of what it is that makes someone an inventor are less clear. And also, there's a desire to make sure that companies aren't losing protection by bringing AI into the process.
Steven Cherry So you came to see patents as an instance of a broader principle that you think in most cases should govern our legal regimes regarding robots and AIs. And you apply it rather broadly and we’ll get to some specific areas of the law. But first, tell us about the Principle of Legal Neutrality.
Ryan Abbott The idea is essentially that the law would better achieve its underlying goals if it did less to discriminate between AI and human behavior. And again, that is subtly but importantly different from treating an AI and a person the same—it's treating their behavior the same way. So, for example, if an AI generates an invention without a traditional human inventor, that's something we can protect. Not that an AI would own a patent. I think the best way to see it is in the context of certain examples. So if you had an AI driving an Uber instead of a human Uber driver, right now we hold those two different vehicles to very different liability standards. And it's not clear there is a good reason for doing that, if what we're trying to get out of accident law is mainly to have fewer people run over. You can have an AI running a podcast or teaching a university course or operating a cash register at McDonald's. And yet tax law treats these activities very differently in ways that encourage or discourage employers from automating. And again, without really intending to. The theory is that among many other principles of AI regulation, like fairness, transparency, and non-bias, doing less to discriminate between AI and human behavior would generate social benefits broadly.
Steven Cherry This gets us to that question of unintended consequences. And these are the two, I think, really interesting examples. In the case of tort law—and specifically liability—treating AIs and humans differently leads us to disfavor the AI. And in the case of taxes, it causes us to favor the AI. Let’s take these in turn and start with tort law.
Ryan Abbott Well, and as you point out, it isn't necessarily that the law goes one way or the other, just that it kind of pushes us in ways that we don't really want it to, or at least certainly haven't intended for it to.
So, again, take this example: in a few years we will have self-driving Ubers, and when you want an Uber or Lyft, you go on your phone and it gives you the option of a person or a machine. You essentially have AI stepping into the shoes of people and doing exactly the same sort of thing that a person would do. But because the law of accidents is very human-centric, there are two different liability regimes for a self-driving car and a human-driven car.
For a human-driven car, we evaluate it under a negligence standard, which asks essentially, what would a hypothetical reasonable person have done? So if a kid jumps out in front of your car and you slam on the brakes but you accidentally hit them, we say, well, would a reasonable human driver have avoided that collision? If yes, then you're liable for it. If no, then you're not. But self-driving cars are commercial products, and we have a different liability test for them, which is strict liability or product liability. And it's a little complicated, but basically we just say, was there a defect with the AI, and if so, did it cause the accident or not? If it caused the accident, then there's liability. If it didn't, then there's not. Without getting too into the weeds on product versus negligence liability, the gist is that a strict liability system is a higher level of liability, and it means that because there's more liability associated with it, that discourages the use of AI because there are more costs associated with using it.
Now, that’s probably not going to be a good system if it turns out that A.I. is a better driver than a person. And it is almost certainly going to turn out that way because 94 percent of car accidents are caused by human error. More than thirty thousand people a year are killed in the US and more than a million people are killed worldwide. And a much larger number are seriously injured. Self-driving cars, while they’re definitely not perfect, are almost certainly going to be, in the not too distant future, safer than your average human driver. So the problem with having a stricter liability standard for machines than people is it discourages us from using them. But if machines are safer than people, then we really want to be encouraging their use through accident law. My proposal is, well, if you just look at the behavior, we would just ask, well, was that reasonable driving?
And again, this isn't quite treating people and machines the same way as actors, because the self-driving car itself wouldn't be liable; the manufacturer of the self-driving car would still be liable.
Steven Cherry The situation with taxes is just the opposite. It actually encourages the development and deployment of A.I. systems.
Ryan Abbott Yeah, and again, it does so in ways that aren't necessarily resulting in better social outcomes. So if my university could replace me with a robot, they would. There's a lot that goes into that decision. But let's say that I and the robot were equally productive and about equally costly. My university would want to automate because it would save on taxes, because tax law, like tort law, is human-centric.
And we also have a tax system that preferentially taxes labor over capital. So, for example, my employer pays payroll taxes that include contributions by an employer for various social service systems. In the UK that's about a 13 percent national insurance contribution, or in the US it's the employer portion of payroll taxes. If you have an AI do the same job, then the employer doesn't have to pay that human-centric tax. And so tax policy is driving businesses to automate, even if they're just doing it to save on tax money. And there are other ways in which human-centric laws encourage automation without meaning to. It's a little more complicated, but the gist is that the same behavior by a person and a machine is taxed differently. And in this case, the government is encouraging businesses to automate.
There's another reason that's problematic besides having an unlevel playing field, and it's that most of our tax revenue comes from, again, human-centered taxes. In the US about 35 percent of federal tax revenue comes from payroll taxes, and about 50-something percent comes from income taxes, which are largely labor-based. So if you automate jobs, you don't get the tax revenue from the wages you would have been paying, say, me, because robots don't pay taxes.
That sounds a little ridiculous. But less so when you realize the payroll taxes all go away and whatever would have come from an income tax maybe comes from corporate taxes, but at a much, much lower rate. So my university won't necessarily be more profitable with the robot than me if it's saving a little bit on taxes. But businesses pay a much smaller effective and marginal tax rate than individuals do. Tax policy not only encourages automation, but it removes government funding.
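As a rough back-of-the-envelope illustration of that incentive, the sketch below compares employer cost and government tax take for a human worker versus an automated replacement. The 13 percent employer contribution is the approximate UK figure Abbott mentions; the salary, the 25 percent effective income tax rate, and the assumption that the automated system costs the employer the same amount per year are all hypothetical.

```python
# Rough illustrative sketch only; all figures are assumptions, not from the interview.
salary = 60_000            # hypothetical annual salary for the human worker
payroll_rate = 0.13        # employer-side payroll/national-insurance contribution (approx. UK figure)
income_tax_rate = 0.25     # assumed effective income tax paid by the worker
automation_cost = 60_000   # assume the AI system costs the employer the same amount per year

# Human worker: the employer pays salary plus payroll tax; the government
# collects the payroll tax plus the worker's income tax.
human_employer_cost = salary * (1 + payroll_rate)
human_tax_collected = salary * payroll_rate + salary * income_tax_rate

# Automated replacement: no payroll tax and no wage income tax. Any revenue
# would arrive indirectly (e.g., corporate tax on higher profits, at a lower
# rate), which this sketch ignores.
robot_employer_cost = automation_cost
robot_tax_collected = 0.0

print(f"Human worker: employer cost {human_employer_cost:>9,.0f}, tax collected {human_tax_collected:>9,.0f}")
print(f"Automation:   employer cost {robot_employer_cost:>9,.0f}, tax collected {robot_tax_collected:>9,.0f}")
```

Even with identical productivity, the automated option is cheaper for the employer and yields no labor-based tax revenue, which is the distortion Abbott is describing.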
Steven Cherry It’s not like an automated checkout system at the supermarket can pay income taxes or have payroll taxes deducted from its wages. So how would this work?
Ryan Abbott Right. Some people have suggested a literal robot tax, and that would have the benefits of leveling the playing field and also ensuring tax revenue. But a literal robot tax has a lot of problems. I mean, for starters, defining a robot—is it the checkout cashier? Is it a Roomba? You would have a lot of administrative gamesmanship with the IRS about this. It could penalize business models that are legitimately more efficient with automation, and it would result in a lot of administrative overhead.
So I think a better solution is an indirect robot tax. And how would we do that? Well, we could do something like getting rid of human-centric taxes. So get rid of payroll taxes. And now suddenly you're no longer encouraging automation quite so much. On the other hand, you would have to make that up from somewhere. And we could, for example, increase income tax rates or marginal rates on high earners. But probably a better system would be to increase the tax burden on capital by doing things like raising corporate taxes, increasing capital gains taxes, or doing away with things like the stepped-up basis rule, which benefits capital.
And I think there are two reasons to do this. One is that, generally, historically, we've had a labor-tax-friendly—I mean, a highly labor-taxed—emphasis, because we think no one's going to stop working over taxes [and] people will go invest their money in lower-tax jurisdictions if we raise taxes.
There’s already a lot of scholarship challenging that assumption. It’s going to be even more important to challenge as automation takes greater hold and as less labor is required to make money from capital. And it will also have some impact on distributional fairness because A.I. is going to generate a tremendous amount of wealth, but likely very concentrated among people who already have that wealth. And increasing the tax burden on capital would have some distributional benefits.
Steven Cherry Finally, we get to criminal law, and here you say it's not so easy to apply the Principle of Legal Neutrality.
Ryan Abbott Yeah, criminal law is a little more challenging for this sort of thing than other areas of the law. I tend to take a pretty functionalist attitude toward what the law is looking for. So with patents, we're trying to incentivize and encourage innovation; with tax law, we're trying to promote economic activity and achieve some other aims like distribution of resources. With tort law, we're trying to reduce accidents and so forth. Criminal law is a little different because it is the branch of law that most cares about, well, not what happened, but why someone did something. If an Uber runs me over, that may be a tort, but it's a crime if the person driving the Uber was trying to run me over or at least behaving so carelessly that they apparently couldn't have cared less.
And so it makes it potentially a little more challenging to think about AI behavior in a context in which the law really cares about intrinsic motivations for doing things and also where the law cares about things like retribution and culpability. “How morally blameworthy were you for doing something?” Or, “Let’s punish people for doing things not just because it has good social outcomes, but because we think that’s the best thing to do.”
And so the book is kind of looking at this AI and human behavior and thinking about whether you could ever consider AI behavior as criminal if you didn't have a traditional person behind the AI. So if I pick up a computer and strike you with it, that's not the AI committing a battery. It's me using a computer as a tool. But as we have increasingly sophisticated AI that is open source and that many people are contributing to, it is likely to be acting in ways we traditionally think about as criminal, without a person we can necessarily point to and say, well, this was a person who did a criminal thing. It's more like a company: this was an entity that did something criminal, and there isn't a clear person associated with that.
And indeed, the idea of criminally punishing an AI isn’t as ridiculous as it might seem. We already criminally punish artificial persons in the form of corporations, and we don’t always require some bad mindset for strict liability crimes for certain, say, unusually dangerous activities. We only require that someone broke the law, not that they have a bad mindset. And in fact, you can even have criminal liability for failure to act when you have a duty to act. So criminally punishing an artificial person without a guilty mind for failing to act is something the law already accommodates and this chapter kind of looks at whether that makes any sense in the context of AI.
Steven Cherry I think there's also a common-sense element to this. I mean, we could have the philosophical discussion about whether robots or AIs can ever have consciousness. But leaving that aside, it's becoming a little bit common—and it's probably going to become much more common—to talk about a robot's intentions or an AI's intentions. I mean, frankly, we often impute intent to almost any entity that seems capable of organized and complex self-movement with a goal in mind. So the Roomba floor cleaner that you mentioned before might not rise to that level. But Rosie, the maid in The Jetsons, would.
Ryan Abbott Right. And this is indeed something that is discussed often in the corporate criminal context about whether corporations have intentions. And the way one largely gets around this is by imputing the intentions of human agents onto the corporation. Some people are against holding a corporation liable for criminal acts at all, in part because the people who suffer from this are potentially innocent shareholders rather than the person at the company who did the bad thing. But other people have a different view of companies, which is that they are more than the sum of their agents, that people behave with groupthink and in ways that are too subtle to really criminalize directly. And there is some sort of synergistic sense in which companies are legitimately thought about as independent agents, separate and aside from people, although corporations, of course, are literally made up of people, and AI isn't.
Whether or not, at a philosophical level, you want to say that an AI has intent, certainly it functionally has intent. Right?
Whether it makes any sense to think about that under criminal liability, well, there could be some benefits to doing that. So, for example, if we have a self-driving Tesla that for one reason or another ended up targeting investment bankers, if we were to convict that Tesla of a crime, it would say to society, we’re not going to tolerate this behavior regardless of the nature of the actor. And this might change how Tesla behaves if the car was destroyed and particularly if it had some sort of follow-on punishment for Tesla.
One of the things the book looks at is that really doing something like that isn't quite as radical a departure as one might think, and there would be some benefits to it, but also a lot of disadvantages to doing it. Namely, it could erode trust in the law and the way people think about machines as being morally on par with a person.
A better system would be finding a person upstream of that AI's behavior to hold liable, for either civil or criminal liability, and probably civil liability. Because if the AI did something criminal but we couldn't find something criminal the person had done, it would not be that they had done something criminal in their own right, but that they had contributed to something that went on to cause harm.
Steven Cherry I think we can now guess what Stephen King’s next book is going to be, right? It’s Christine II, featuring the murderous Tesla.
What about other aspects of the law? I’m wondering about things like, I don’t know, constitutional freedoms. If corporations have free speech rights, why shouldn’t an AI?
Ryan Abbott Well, these issues are very much subjects of debate. I think my comment from the book would be maybe there should be some protections afforded to AI behavior, but I think we'd have to be very careful in doing that and in noting that we're not doing this for the AI's sake. The AI has no interest in whether or not it has the right to bear arms or the right to marry or the right to free speech.
It’s only something that we would want to do if we looked at it carefully and decided that we as a society would be better off granting certain legal rights to AI behavior. And indeed, that’s the theory behind rights for companies. The theory is not that a company is a person [and] is morally deserving of rights [and] it would be unfair to have it being unable to exercise its right to free speech. The theory is that granting rights to artificial entities in the form of corporations benefits people. Primarily, letting [corporations] own property and enter into contracts helps encourage commerce and entrepreneurship. Whether it is beneficial for society to have companies have the right to make certain political donations or engage in certain sorts of speech is something that we really need to look carefully at, but always bearing in mind, well, what are we doing this for? It’s to benefit society and the people—not to grant rights to something that isn’t a person for the sake of doing it.
Steven Cherry Lastly, what about areas outside of the law that support our laws? For example, the IEEE is one of the leading standards-setting bodies in the world. What role do standards play or could play as the laws around robotics and A.I. develops?
Ryan Abbott Well, I think a very important one. The law is one way in which we regulate behavior, and it isn't the appropriate solution for every situation. The law can be fairly heavy-handed, and it really also sets a baseline of behavior, the sort of thing you should not do. But we expect people to have a higher standard of behavior. And so it's not enough for companies to say, well, we're complying with the law. They really need to be thinking carefully through their own governance of AI, their ethics of the use of AI, individual use cases, the risks and benefits, and acting in a way that is basically the best they can. And it shouldn't be every company on its own, either. There is an important role for industry groups to get together and think through some of these very difficult challenges and come up with soft norms that guide their own behavior in beneficial sorts of ways.
Steven Cherry Bad cases make bad law. And I guess one thing the standards could do is keep us from having the kinds of situations that lead to bad law.
Ryan Abbott There will always be a role for the law because there are some bad actors out there, and not every company will be as interested in being as benevolent with AI as others. But for the good corporate actors and industry players, they should indeed not have to worry about running afoul of the law, because they should be exceeding any legal obligations on them in terms of their use of AI.
Steven Cherry Well, Ryan, the Principle of Legal Neutrality is a remarkable thesis, and you’ve written a remarkable book about it. And the time to think about it and debate it is now before we blindly go down the road toward one conclusion or another without having thought about the consequences and which path better serves humanity—which better conforms to the purposes for which we make laws in the first place. Thanks for helping to plant that intellectual flag of discovery and for joining us today.
Ryan Abbott Thanks so much for having me.
Steven Cherry We’ve been speaking with Ryan Abbott, lawyer, doctor, perpetual student and polymath, though I think the only degree he doesn’t have is math, and author of The Reasonable Robot: Artificial Intelligence and the Law, published by Cambridge University Press.
Radio Spectrum is brought to you by IEEE Spectrum, the member magazine of the Institute of Electrical and Electronics Engineers, a professional organization dedicated to advancing technology for the benefit of humanity.
This interview was recorded May 17, 2021 using Skype and Adobe Audition. Our theme music is by Chad Crouch.
You can subscribe to Radio Spectrum on Spotify, Apple, and wherever else you get your podcasts, or listen on the Spectrum website, which also contains transcripts of this and all our past episodes. We welcome your feedback on the web or in social media.
For Radio Spectrum, I'm Steven Cherry.