Category Archives: Human Robots
The rise of artificially intelligent machines will come at a cost—but with the potential to disrupt and transform society on a scale not seen since the Industrial Revolution. Jobs will be lost, but new fields of innovation will open up.
The changes ahead will require us to rethink attitudes and philosophies, not to mention laws and regulations. Some people are already debating the implications of an automated world, giving rise to think tanks and conferences on AI, such as the annual We Robot forum, which takes a scholarly approach to policy issues.
A registered patent attorney and board-certified physician, Ryan Abbott writes about the impact of artificial intelligence on intellectual property, health and tort law. We talked to him last year about the thorny issues surrounding patent ownership when the mother of invention is a machine. Now Abbott has waded into the equally prickly space of tort law and who—or what—is responsible when machines cause accidents.
“These are very popular topics,” says Abbott during an interview. “These technologies are going to fundamentally change the way we interact with machines. They’re going to fundamentally change society—and they have major legal implications.”
A professor of law and health sciences at the University of Surrey’s School of Law and adjunct assistant professor of medicine at the David Geffen School of Medicine at UCLA, Abbott is not the first to tackle the legal implications of computer-caused accidents.
In 2014, for example, a major report on RoboLaw from the European Union suggested creating a type of insurance fund to compensate those injured by AI computers. A previous article in the Boston Globe that surveyed experts across fields ranging from philosophy to robotics seemed to find consensus on one thing: the legal status of smart robots will require a “balancing act.”
Abbott appears to be the first to suggest in a soon-to-be-published paper that tort law treat AI machines like people when it comes to liability issues. And, perhaps more radically, he suggests people be judged against the competency of a computer when AI proves to be consistently safer than a human being.
Who’s legally responsible when self-driving cars crash? https://t.co/4BJm6SB4xL
— Singularity Hub (@singularityhub) January 30, 2017
Currently, the law treats machines as if they were all created equal: simple consumer products. In most cases, when an accident occurs, standards of strict product liability law apply. In other words, unless a consumer uses a product in an outrageous way or grossly ignores safety warnings, the manufacturer is held liable regardless of fault.
“Most injuries people cause are evaluated under a negligence standard, which requires unreasonable conduct to establish liability,” Abbott notes in his paper, tentatively titled, “Allocating Liability for Computer-Generated Torts.”
“However, when computers cause the same injuries, a strict liability standard applies. This distinction has significant financial consequences and corresponding impact on the rate of technology adoption. It discourages automation, because machines entail greater liability than people.”
Turning thinking machines into people—at least in a court of law—doesn’t absolve companies of responsibility, but allows them to accept more risk while still making machines that are safe to use, according to Abbott.
“I think my proposal is a creative way to tinker with the way the law works to incentivize automation without forcing it,” he says.
Abbott argues his point with a case study focusing on self-driving vehicles, possibly the most immediately disruptive technology of today—and already deemed safer than human drivers, despite some high-profile accidents last year involving Tesla’s Autopilot system.
“Self-driving cars are here among us and going to be all over the place very soon,” notes Abbott, adding that shifting the tort burden from strict liability to negligence would quicken the adoption of driverless technology, improve safety and ultimately save lives.
In 2015, for instance, more than 35,000 people in the United States died in traffic accidents, most caused by human error, according to the Insurance Institute for Highway Safety's Highway Loss Data Institute. Bart Selman, a Cornell professor of computer science and director of the university's Intelligent Information Systems Institute, recently told journalist Michael Belfiore that driverless cars would be ten times safer than humans within three years and 100 times safer within a decade. The savings in human lives and damages are evident.
The US National Highway Traffic Safety Administration just put an exclamation mark on the point when it released its full findings on the Tesla fatality in Florida. The agency cleared the Autopilot system of any fault in the accident and even praised its safety design, according to a story in TechCrunch. The report noted that crash rates involving Tesla cars have dropped by nearly 40 percent since Autopilot came online.
Safety is also the big reason why Abbott argues that in the not-too-distant future, human error in tort law will be measured against the unerring competency of machines.
“This means that defendants would no longer have their liability based on what a hypothetical, reasonable person would have done in their situation, but what a computer would have done,” Abbott writes. “While this will mean that the average person’s best efforts will no longer be sufficient to avoid liability, the rule would benefit the general welfare.”
Human anxiety over the coming machine revolution is already high, so wouldn't this just add to it? Abbott argues his proposals aren't about diminishing human abilities, but about recognizing the reality that machines will be safer than humans at some jobs.
And not just behind the wheel of a car. IBM's Watson, among other AI systems, is already working in the medical field, including oncology. Meanwhile, a 2016 study in the British Medical Journal reported that medical error is the third-leading cause of death in the United States.
If Watson MD has a higher safety record than Dr. Smith, who would you choose for treatment?
“Ultimately, we’re all consumers and potential accident victims, and I think that is something people could support,” Abbott says. “I think when people see the positive impact of it, it will change attitudes.”
Image Credit: Shutterstock
In the quest for ever more powerful computers, researchers are beginning to build quantum computers—machines that exploit the strange properties of physics on the smallest of scales.
The field has been making progress in recent years, and quantum computing company D-Wave is one of the pioneers. Researchers at Google, NASA, and elsewhere have been studying how they can use D-Wave’s chips to solve tricky problems far faster than a classical computer could.
Although the field is making progress, it is still largely the domain of an elite group of physicists and computer scientists. However, more minds working on a problem tend to be better than fewer. And to that end, D-Wave took a bold step toward democratizing quantum computing last week by releasing an open-source version of its basic quantum computing software, Qbsolv.
“D-Wave is driving the hardware forward,” D-Wave International president Bo Ewald told Wired. “But we need more smart people thinking about applications, and another set thinking about software tools.”
Qbsolv is intended to allow more developers to program D-Wave’s computers without the requisite PhD in quantum physics. That is, more people will be able to think about how they’d use a quantum computer—and even begin writing software applications too.
This has profound implications for the future of computing. But first, a little background.
What is quantum computing?
To understand the significance of D-Wave's announcement, let's take a quantum leap back to the 80s. In 1982, Nobel Prize-winning physicist Richard Feynman suggested that computing could become inconceivably faster by utilizing the basic laws of quantum physics.
While digital computers already use physics to process binary digits — or bits — consisting of 1s and 0s, Feynman suggested not using bits at all but rather quantum bits, or qubits. Unlike classical bits, qubits can exist simultaneously as both 1 and 0. This is often described as the probability that a qubit is either 1 or 0, but it's actually more subtle than that and relies on a property intrinsic to quantum physics that is impossible to emulate using simple probabilities.
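Why amplitudes are richer than simple probabilities can be sketched in a few lines of Python. In this toy model (purely illustrative, not D-Wave's API), a qubit is a pair of amplitudes; one Hadamard transform creates an even superposition, but a second one makes the amplitudes interfere back to a definite 0, which no classical coin-flip model can reproduce:

```python
import math

# Toy model of a single qubit: a pair of amplitudes (a, b) with
# a^2 + b^2 = 1. Measurement yields 0 with probability a^2 and
# 1 with probability b^2.
H = 1 / math.sqrt(2)

def hadamard(state):
    """Hadamard transform: mixes the two amplitudes."""
    a, b = state
    return (H * (a + b), H * (a - b))

zero = (1.0, 0.0)             # definite 0
superposed = hadamard(zero)   # 50/50 chance of measuring 0 or 1
back = hadamard(superposed)   # amplitudes interfere: definite 0 again

prob_zero = back[0] ** 2      # 1.0, not the 0.5 a coin-flip model predicts
```

A coin flipped twice is still 50/50; the amplitudes cancel instead, which is the "more subtle" property the text refers to.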
D-Wave chip. Image Credit: D-Wave Systems

While such theoretical concepts are fascinating, the practical implication is that being in both states simultaneously means jointly considering each outcome in a calculation. Thus a single qubit may concurrently perform two calculations, two qubits may perform four, three qubits eight, and so forth, producing exponentially increasing speed.
Utilizing just thirty qubits means simultaneously performing more than one billion calculations.
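The arithmetic behind that claim is simple: n qubits span 2^n simultaneous configurations. A quick sketch:

```python
# With n qubits, the number of simultaneous configurations, and hence
# the number of terms a single computation can weigh at once, is 2^n.
def state_space(n_qubits: int) -> int:
    return 2 ** n_qubits

print(state_space(1))   # 2
print(state_space(3))   # 8
print(state_space(30))  # 1073741824, just over one billion
```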
Within the genus of quantum computing there exist two species: annealing and gate model. D-Wave’s computer is of the annealing species, but both types are being developed.
Annealing quantum computers contain an array of qubits contributing to an overall system energy. The system will seek a configuration to minimize the total energy. This minimal energy state corresponds to the solution of the problem programmed into the computer. Annealing quantum computers will find what is likely the correct configuration nearly instantaneously.
Gate model quantum computers, on the other hand, begin with the qubits in a certain configuration. These are then operated upon by a series of transformations, or gates, which rearrange each qubit’s value depending on the other values and the specific program running. The final configuration then yields a distribution of results weighted by different probabilities.
Gate model quantum computers are more complex but can work on more kinds of problems. Annealing computers are simpler but best suited for fewer kinds of problems.
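Problems for an annealer like D-Wave's are typically posed in QUBO (quadratic unconstrained binary optimization) form: find the binary vector that minimizes an energy function. A classical brute-force sketch (illustrative only; a real annealer explores this energy landscape in hardware rather than enumerating it) makes the "settle into the minimum-energy configuration" idea concrete:

```python
import itertools

def brute_force_qubo(Q):
    """Exhaustively check every binary assignment and return the one
    minimizing the QUBO energy sum_ij Q[i][j] * x[i] * x[j].
    A classical stand-in for what an annealer does in hardware."""
    n = len(Q)
    best_x, best_e = None, float("inf")
    for x in itertools.product((0, 1), repeat=n):
        e = sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e

# Toy problem: each variable alone lowers the energy, but turning both
# on incurs a penalty, so the minimum uses exactly one of them.
Q = [[-1, 2],
     [0, -1]]
solution, energy = brute_force_qubo(Q)  # minimum energy is -1
```

Brute force doubles in cost with every added variable, which is precisely the scaling annealing hardware is meant to sidestep.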
"Our view is that annealers will be enterprise ready before gate model systems," says Matt Johnson, CEO of QC Ware, which is developing applications for both annealing and gate model machines.
How open source can accelerate progress
Open-sourcing may appear a counterproductive business strategy. To understand why it will likely work for quantum computing, it's important to note that there have been two principal challenges to advancing the field to date.
The first problem is the lack of algorithms optimized for a quantum processor. While such an algorithm would have exponential speedup compared to a digital processor, at present there are only three such procedures known. Open-source software will allow physicists, mathematicians, and entrepreneurs to develop additional and more efficient means of solving certain problems.
With the barrier to entry at zero, quantum software engineers won't need to purchase expensive, restricted proprietary software or invest months and years developing prototypes.
The second challenge is extending coherence time, or the time in which qubits maintain their special quantum properties. Perhaps surprisingly, open-sourcing the software will also drive innovation in the hardware and perhaps result in faster breakthroughs.
This surprising fact is due to the symbiotic relationship between software and hardware, the same feedback loop that results in exponential advancement of both.
The better the software, the greater the motivation to innovate hardware to operate it. The better the hardware, the greater the motivation to innovate software to execute instructions on it.
Advancement in one spurs advancement in the other.
What is quantum computing good for?
While the power of quantum computing is impressive, it does not mean that existing software simply runs a billion times faster. Rather, quantum computers have certain types of problems they are good at solving and those they aren't.
Though a quantum computer will always yield an answer to a computation, it won’t always be the correct answer. But this doesn’t mean it has malfunctioned.
Remember, qubits are probabilistic, and the answers must be too. There is a probability that the answer is correct, and a complementary probability it is wrong. The software aims to maximize the chance it is correct and apply this to problems that do not always require a precisely correct answer.
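One standard way to cope with probabilistic answers is simply to repeat the computation and take a majority vote: if each run is correct more often than not, the chance the majority is wrong falls rapidly as runs are added. A classical sketch (the 70 percent per-run success rate is an arbitrary assumption for illustration):

```python
import random

def noisy_oracle(p_correct=0.7):
    """Stand-in for a single probabilistic run: returns the correct
    answer (1) with probability p_correct, the wrong one (0) otherwise.
    The 0.7 success rate is an illustrative assumption."""
    return 1 if random.random() < p_correct else 0

def majority_vote(runs=101):
    """Repeat the computation and report the majority answer. When each
    run is right more often than wrong, the probability the majority is
    wrong shrinks exponentially with the number of runs."""
    votes = sum(noisy_oracle() for _ in range(runs))
    return 1 if votes > runs // 2 else 0
```

With 101 runs at a 70 percent per-run success rate, the majority answer is wrong only a few times in a hundred thousand.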
A prime example is modeling of molecular interactions.
Such “quantum chemistry” is so complex that only the simplest molecules can be analyzed by today’s digital computers. But fully developed quantum computers would not have any difficulty evaluating even the most complex processes. Google is already making forays in this field.
The implication of this is that pharmaceutical production or the discovery of new materials could be made much more efficient and happen at a faster rate thanks to quantum computer simulations. Quantum computing may also accelerate development of artificial intelligence. As AI logic is based on calculating the probabilities of many possible choices, it would be an ideal candidate for quantum computation.
Investors are also now scrambling to insert themselves into the quantum computing ecosystem. In addition to the obvious candidates, banks, aerospace companies, and cybersecurity firms are developing ways to protect against quantum computers’ ability to crack conventional cryptographic codes.
In the near future an entire library of optimized algorithms will be created, forming the basis of the first commercially available quantum computing software. Innovative programs will soon be developed by individuals, academics, and entrepreneurs at zero sunk cost. We will see a proliferation of garage tech startups advancing the pillars of quantum chemistry and AI, but also new killer apps which have not yet been conceived.
Hardware manufacturers will be quick to take advantage of this democratization. The quantum computers themselves will continue to improve, featuring larger numbers of qubits and longer coherence times. They’ll also become more available via access to cloud processing.
This is just the dawn of quantum computing—it’ll be an exciting field to watch in coming years.
Image Credit: D-Wave Systems
R&D lab Draper is using genetic engineering and optoelectronics to build cybernetic insects.
Elon Musk has a plan to help humans keep up with artificial intelligence.