Tag Archives: humanoids

#429350 The Struggle to Make AI Less Biased Than ...

The dirty little secret is out about artificial intelligence.
No, not the one about machines taking over the world. That’s an old one. This one is more insidious. Data scientists, AI experts and others have long suspected it would be a problem. But it’s only within the last couple of years, as AI or some version of machine learning has become nearly ubiquitous in our lives, that the issue has come to the forefront.
AI is prejudiced. Sexism. Ageism. Racism. Name an -ism, and more likely than not, the results produced by our machines have a bias in one or more ways. But an emerging think tank dubbed Diversity.ai believes our machines can do better than their creators when it comes to breaking down stereotypes and other barriers to inclusion.
The problem has been well documented: in 2015, for example, Google’s photo app embarrassingly tagged some black people as gorillas. A recent pre-print paper reported widespread human bias in the metadata for a popular database of Flickr images used to train neural networks. Even more disturbing was an investigative report last year by ProPublica that found software used to predict future criminal behavior—a la the film “Minority Report”—was biased against minorities.
For Anastasia Georgievskaya, the aha moment that machines can learn prejudice came during work on an AI-judged beauty contest developed by Youth Laboratories, a company she co-founded in 2015 that uses machine vision and AI to study aging. Almost all the winners picked by the computer jury were white.
“I thought that discrimination by the robots is likely, but only in a very distant future,” says Georgievskaya by email. “But when we started working on Beauty.AI, we realized that people are discriminating [against] other people by age, gender, race and many other parameters, and nobody is talking about it.”
Algorithms can always be improved, but a machine can only learn from the data it is fed.
“We struggled to find the data sets of older people and people of color to be able to train our deep neural networks,” Georgievskaya says. “And after the first and second Beauty.ai contests, we realized that it is a major problem.”
Age bias in available clinical data has frustrated Alex Zhavoronkov, CEO of Insilico Medicine, Inc., a bioinformatics company that combines genomics, big data analysis and deep learning for drug discovery related to aging and age-related diseases. A project called Aging.ai that uses a deep neural network trained on hundreds of human blood tests to predict age had high errors in older populations.
“Our company came to study aging not only because we want to extend healthy productive longevity, but to fix one important problem in the pharmaceutical industry—age bias,” Zhavoronkov says. “Many clinical trials cut off patient enrollment by age, and thousands of healthy but older people miss their chance to get a treatment.”
Georgievskaya and like-minded scientists not only recognized the problem, they started to study it in depth—and do something about it.
“We realized that it’s essential to develop routines that test AI algorithms for discrimination and bias, and started experimenting with the data, methods and the metrics,” she says. “Our company is not only focused on beauty, but also on healthcare and visual-imaging biomarkers of health. And there we found many problems in age, gender, race and wealth bias.”
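One common routine of the kind Georgievskaya describes is checking whether a model's positive predictions are spread evenly across demographic groups. The sketch below is a hypothetical illustration of such a test — it is not Youth Laboratories' actual code — computing the "demographic parity gap" for a toy model that treats two groups differently:

```python
import random

def demographic_parity_gap(predictions, groups):
    """Return the largest difference in positive-prediction rate
    between any two groups (0.0 means perfectly equal treatment)."""
    rates = {}
    for pred, group in zip(predictions, groups):
        n_pos, n_total = rates.get(group, (0, 0))
        rates[group] = (n_pos + pred, n_total + 1)
    positive_rates = [pos / total for pos, total in rates.values()]
    return max(positive_rates) - min(positive_rates)

# A toy "model" that approves group A 70% of the time and group B 40%.
random.seed(0)
groups = ["A"] * 500 + ["B"] * 500
predictions = [1 if random.random() < (0.7 if g == "A" else 0.4) else 0
               for g in groups]

gap = demographic_parity_gap(predictions, groups)
print(f"Demographic parity gap: {gap:.2f}")  # a large gap flags possible bias
```

Demographic parity is only one of several competing fairness metrics; a real audit would test more than one.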
As Zhavoronkov envisions it, Diversity.ai will bring together a “diverse group of people with a very ‘fresh’ perspective, who are not afraid of thinking out of the box. Essentially, it is a discussion group with many practical projects and personal and group goals.”
His own goal? “My personal goal is to prove that the elderly are being discriminated [against], and develop highly accurate multi-modal biomarkers of chronological and biological aging. I also want to solve the racial bias and identify the fine equilibrium between the predictive power and discrimination in the [deep neural networks].”
The group’s advisory board is still coming together, but already includes representatives from Elon Musk’s billion-dollar non-profit AI research company OpenAI, computing company Nvidia, a leading South Korean futurist, and the Future of Humanity Institute at the University of Oxford.
Nell Watson, founder and CEO of Poikos, a startup that developed a 3D body scanner for mobile devices, is one of the advisory board members. She’s also an adjunct in the Artificial Intelligence and Robotics track at Singularity University. She recently founded OpenEth.org, what she calls a non-profit machine ethics research company that hopes to advance the field of machine ethics by developing a framework for analyzing various ethical situations.
She sees OpenEth.org and Diversity.ai as natural allies toward the goal of developing ethical, objective AI.
She explains that the OpenEth team is developing a blockchain-based public ledger system capable of analyzing contracts for adherence to a structure of ethics.
“[It] provides a classification of the contract's contents, without necessarily needing for the contract itself to be public,” she explains. That means companies can safeguard proprietary algorithms while providing public proof that those algorithms adhere to ethical standards.
“It also allows for a public signing of the ownership/responsibility for a given agent, so that anyone interacting with a machine will know where it came from and whether the ruleset that it's running under is compatible with their own values,” she adds. “It's a very ambitious project, but we are making steady progress, and I expect it to play a piece of many roles necessary in safeguarding against algorithmic bias.”
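OpenEth hasn’t published implementation details, but the core idea — proving adherence without disclosure — resembles a cryptographic commitment: publish only a hash of the ruleset, and later prove that any audited copy matches it. A minimal sketch of that general pattern (the function names and ruleset fields are illustrative assumptions, not OpenEth’s design):

```python
import hashlib
import json

def commit(ruleset: dict) -> str:
    """Publish only this digest on the public ledger; the ruleset
    itself stays private with its owner."""
    canonical = json.dumps(ruleset, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

def verify(ruleset: dict, published_digest: str) -> bool:
    """An auditor holding a copy of the ruleset can confirm it matches
    the digest the company committed to earlier."""
    return commit(ruleset) == published_digest

rules = {"max_speed_kmh": 50, "yield_to_pedestrians": True}
digest = commit(rules)

print(verify(rules, digest))                           # True: ruleset matches
print(verify({**rules, "max_speed_kmh": 90}, digest))  # False: tampered copy
```

A plain hash like this reveals nothing about the rules but also proves nothing about their content; OpenEth’s stated goal of classifying a contract’s contents would require something richer, such as attested analysis or zero-knowledge proofs.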
Georgievskaya says she hopes Diversity.ai can hold a conference later this year to continue to build awareness around issues of AI bias and begin work to scrub discrimination from our machines.
“Technologies and algorithms surround us everywhere and became an essential part of our daily life,” she says. “We definitely need to teach algorithms to treat us in the right way, so that we can live peacefully in [the] future.”
Image Credit: Shutterstock

Posted in Human Robots

#429346 When Intelligent Machines Cause ...

The rise of artificially intelligent machines will come at a cost—but with the potential to disrupt and transform society on a scale not seen since the Industrial Revolution. Jobs will be lost, but new fields of innovation will open up.
The changes ahead will require us to rethink attitudes and philosophies, not to mention laws and regulations. Some people are already debating the implications of an automated world, giving rise to think tanks and conferences on AI, such as the annual We Robot forum, which takes a scholarly approach to policy issues.
A registered patent attorney and board-certified physician, Ryan Abbott writes about the impact of artificial intelligence on intellectual property, health and tort law. We talked to him last year about the thorny issues surrounding patent ownership when the mother of invention is a machine. Now Abbott has waded into the equally prickly space of tort law and who—or what—is responsible when machines cause accidents.
“These are very popular topics,” says Abbott during an interview. “These technologies are going to fundamentally change the way we interact with machines. They’re going to fundamentally change society—and they have major legal implications.”
A professor of law and health sciences at the University of Surrey’s School of Law and adjunct assistant professor of medicine at the David Geffen School of Medicine at UCLA, Abbott is not the first to tackle the legal implications of computer-caused accidents.
In 2014, for example, a major report on RoboLaw from the European Union suggested creating a type of insurance fund to compensate those injured by AI computers. A previous article in the Boston Globe that surveyed experts across fields ranging from philosophy to robotics seemed to find consensus on one thing: the legal status of smart robots will require a “balancing act.”
Abbott appears to be the first to suggest in a soon-to-be-published paper that tort law treat AI machines like people when it comes to liability issues. And, perhaps more radically, he suggests people be judged against the competency of a computer when AI proves to be consistently safer than a human being.

Who’s legally responsible when self-driving cars crash? https://t.co/4BJm6SB4xL
— Singularity Hub (@singularityhub) January 30, 2017

Currently, the law treats machines as if they were all created equal, as simple consumer products. In most cases, when an accident occurs, standards of strict product liability law apply. In other words, unless a consumer uses a product in an outrageous way or grossly ignores safety warnings, the manufacturer is automatically considered at fault.
“Most injuries people cause are evaluated under a negligence standard, which requires unreasonable conduct to establish liability,” Abbott notes in his paper, tentatively titled, “Allocating Liability for Computer-Generated Torts.”
“However, when computers cause the same injuries, a strict liability standard applies. This distinction has significant financial consequences and corresponding impact on the rate of technology adoption. It discourages automation, because machines entail greater liability than people.”
Turning thinking machines into people—at least in a court of law—doesn’t absolve companies of responsibility, but allows them to accept more risk while still making machines that are safe to use, according to Abbott.
“I think my proposal is a creative way to tinker with the way the law works to incentivize automation without forcing it,” he says.
Abbott argues his point with a case study focusing on self-driving vehicles, possibly the most immediately disruptive technology of today—and already deemed safer than human drivers, despite some high-profile accidents last year involving Tesla’s Autopilot system.
“Self-driving cars are here among us and going to be all over the place very soon,” notes Abbott, adding that shifting the tort burden from strict liability to negligence would quicken the adoption of driverless technology, improve safety and ultimately save lives.
In 2015, for instance, more than 35,000 people in the United States died in traffic accidents, most caused by human error, according to the Insurance Institute for Highway Safety Highway Loss Data Institute. Bart Selman, a Cornell professor of computer science and director of the university’s Intelligent Information Systems Institute, recently told journalist Michael Belfiore that driverless cars would be ten times safer than humans within three years and 100 times safer within a decade. The savings in human lives and damages are evident.
The US National Highway Traffic Safety Administration just put an exclamation mark on the point when it released its full findings on the Tesla fatality in Florida. The agency cleared the Autopilot system of any fault in the accident and even praised its safety design, according to a story in TechCrunch. The report noted that crash rates involving Tesla cars have dropped by nearly 40 percent since Autopilot came online.
Safety is also the big reason why Abbott argues that in the not-too-distant future, human error in tort law will be measured against the unerring competency of machines.
“This means that defendants would no longer have their liability based on what a hypothetical, reasonable person would have done in their situation, but what a computer would have done,” Abbott writes. “While this will mean that the average person’s best efforts will no longer be sufficient to avoid liability, the rule would benefit the general welfare.”
The human anxiety level over the coming machine revolution is already high, so wouldn’t this just add to it? Abbott argues his proposals aren’t about diminishing human abilities, but about recognizing the reality that machines will do some jobs more safely than humans.
And not just behind the wheel of a car. IBM’s Watson, among other AI systems, is already working in the medical field, including oncology. Meanwhile, a 2016 study in the British Medical Journal reported that medical error is the third-leading cause of death in the United States.
If Watson MD has a higher safety record than Dr. Smith, who would you choose for treatment?
“Ultimately, we’re all consumers and potential accident victims, and I think that is something people could support,” Abbott says. “I think when people see the positive impact of it, it will change attitudes.”
Image Credit: Shutterstock

Posted in Human Robots

#429333 Quantum Computing Progress Will Speed Up ...

In the quest for ever more powerful computers, researchers are beginning to build quantum computers—machines that exploit the strange properties of physics on the smallest of scales.
The field has been making progress in recent years, and quantum computing company D-Wave is one of the pioneers. Researchers at Google, NASA, and elsewhere have been studying how they can use D-Wave’s chips to solve tricky problems far faster than a classical computer could.
Despite that progress, the field is still largely the domain of an elite group of physicists and computer scientists. However, more minds working on a problem tend to be better than fewer. And to that end, D-Wave took a bold step toward democratizing quantum computing last week by releasing an open-source version of its basic quantum computing software, Qbsolv.
“D-Wave is driving the hardware forward,” D-Wave International president Bo Ewald told Wired. “But we need more smart people thinking about applications, and another set thinking about software tools.”
Qbsolv is intended to allow more developers to program D-Wave’s computers without the requisite PhD in quantum physics. That is, more people will be able to think about how they’d use a quantum computer, and even begin writing software applications.
This has profound implications for the future of computing. But first, a little background.
What is quantum computing?
To understand the significance of D-Wave's announcement, let's take a quantum leap back to the 80s. In 1982, Nobel Prize-winning physicist Richard Feynman suggested that computing could become inconceivably faster by utilizing the basic laws of quantum physics.
While digital computers already use physics to process binary digits — or bits — made up of 1s and 0s, Feynman suggested not using bits at all but rather quantum bits, or qubits. Unlike classical bits, qubits can exist simultaneously as both 1s and 0s. This might be described as the probability a qubit is either 1 or 0, but it's actually more subtle than that and relies on a property intrinsic to quantum physics that is impossible to emulate using simple probabilities.
D-Wave chip. Image Credit: D-Wave Systems
While such theoretical concepts are fascinating, the practical implication is that simultaneously being in both states means jointly considering each outcome in a calculation. Thus a single qubit may concurrently perform two calculations, two qubits may perform four, three qubits eight, and so forth, producing an exponential increase in speed.
Utilizing just thirty qubits means simultaneously performing more than one billion calculations.
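That scaling is easy to check: an n-qubit register is described by 2^n amplitudes, which is also why simulating even modest quantum machines strains classical hardware. A quick sketch:

```python
def state_space_size(n_qubits: int) -> int:
    """Number of basis states (amplitudes) an n-qubit register spans."""
    return 2 ** n_qubits

for n in (1, 2, 3, 10, 30):
    print(f"{n:>2} qubits -> {state_space_size(n):,} basis states")

# 30 qubits already exceed a billion basis states; tracking them classically
# at 16 bytes per complex amplitude would take roughly 17 GB of memory.
assert state_space_size(30) > 1_000_000_000
```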
Within the genus of quantum computing there exist two species: annealing and gate model. D-Wave’s computer is of the annealing species, but both types are being developed.
Annealing quantum computers contain an array of qubits contributing to an overall system energy. The system will seek a configuration to minimize the total energy. This minimal energy state corresponds to the solution of the problem programmed into the computer. Annealing quantum computers will find what is likely the correct configuration nearly instantaneously.
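D-Wave’s hardware exploits quantum effects to do this, but the energy-minimization idea itself can be illustrated with a purely classical simulated-annealing loop over a small QUBO (quadratic unconstrained binary optimization) problem. This is an illustrative analogy, not how the chip actually works:

```python
import math
import random

def qubo_energy(x, Q):
    """Energy of bit vector x under QUBO matrix Q: E = sum_ij Q[i][j]*x_i*x_j."""
    n = len(x)
    return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

def anneal(Q, steps=5000, t_start=2.0, t_end=0.01, seed=1):
    """Classical simulated annealing: flip random bits, accepting uphill
    moves with a probability that shrinks as the temperature cools."""
    rng = random.Random(seed)
    n = len(Q)
    x = [rng.randint(0, 1) for _ in range(n)]
    energy = qubo_energy(x, Q)
    for step in range(steps):
        t = t_start * (t_end / t_start) ** (step / steps)
        i = rng.randrange(n)
        x[i] ^= 1  # propose flipping one bit
        new_energy = qubo_energy(x, Q)
        if new_energy <= energy or rng.random() < math.exp((energy - new_energy) / t):
            energy = new_energy
        else:
            x[i] ^= 1  # reject the move, flip back
    return x, energy

# Toy problem: rewards setting bit 0 or bit 1, penalizes setting both.
Q = [[-1.0, 2.0, 0.0],
     [0.0, -1.0, 0.0],
     [0.0, 0.0, -0.5]]
solution, energy = anneal(Q)
print(solution, energy)
```

The final bit vector is the "configuration that minimizes total energy," which is exactly the role the article describes for the qubit array — except that a quantum annealer explores the energy landscape via quantum tunneling rather than thermal hops.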
Gate model quantum computers, on the other hand, begin with the qubits in a certain configuration. These are then operated upon by a series of transformations, or gates, which rearrange each qubit’s value depending on the other values and the specific program running. The final configuration then yields a distribution of results weighted by different probabilities.
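A gate-model computation can be mimicked on paper: start with a state vector, multiply by gate matrices, and read outcome probabilities off the squared amplitudes. A one-qubit sketch using the Hadamard gate (a standard textbook example, not tied to any particular vendor’s machine):

```python
import math

# One qubit starts in |0>: amplitude 1 for outcome 0, amplitude 0 for outcome 1.
state = [1.0, 0.0]

# Hadamard gate: rotates |0> into an equal superposition of |0> and |1>.
H = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
     [1 / math.sqrt(2), -1 / math.sqrt(2)]]

def apply_gate(gate, state):
    """Matrix-vector product: computes each basis state's new amplitude."""
    return [sum(gate[i][j] * state[j] for j in range(len(state)))
            for i in range(len(gate))]

state = apply_gate(H, state)

# Measurement probabilities are the squared magnitudes of the amplitudes.
probabilities = [amp ** 2 for amp in state]
print(probabilities)  # each outcome occurs with probability ~0.5
```

This is the "distribution of results weighted by different probabilities" the article mentions: after the Hadamard gate, measuring the qubit yields 0 or 1 with equal chance.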
Gate model quantum computers are more complex but can work on more kinds of problems. Annealing computers are simpler but best suited for fewer kinds of problems.
"Our view is that annealers will be enterprise ready before gate model systems," says Matt Johnson, CEO of QC Ware, which is developing applications for both annealing and gate model machines.

How open source can accelerate progress
Open-sourcing may appear to be a counterproductive business strategy. To understand why this strategy will likely work for quantum computing, it’s important to note there have been two principal challenges to advancing the field to date.
The first problem is the lack of algorithms optimized for a quantum processor. While such an algorithm would have exponential speedup compared to a digital processor, at present there are only three such procedures known. Open-source software will allow physicists, mathematicians, and entrepreneurs to develop additional and more efficient means of solving certain problems.
With zero barrier to entry, quantum software engineers won’t need to purchase expensive and restricted proprietary software or invest months and years developing prototypes.
The second challenge is extending coherence time, or the time in which qubits maintain their special quantum properties. Perhaps surprisingly, open-sourcing the software will also drive innovation in the hardware and perhaps result in faster breakthroughs.
This surprising fact is due to the symbiotic relationship between software and hardware, the same feedback loop that results in exponential advancement of both.
The better the software, the greater the motivation to innovate hardware to operate it. The better the hardware, the greater the motivation to innovate software to execute instructions on it.
Advancement in one spurs advancement in the other.
What is quantum computing good for?
While the power of quantum computing is impressive, it does not mean that existing software simply runs a billion times faster. Rather, quantum computers have certain types of problems they are good at solving and those they aren't.
Though a quantum computer will always yield an answer to a computation, it won’t always be the correct answer. But this doesn’t mean it has malfunctioned.
Remember, qubits are probabilistic, and the answers must be too. There is a probability that the answer is correct, and a complementary probability it is wrong. The software aims to maximize the chance it is correct and apply this to problems that do not always require a precisely correct answer.
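When each individual run is correct more often than not, simply repeating the computation and taking a majority vote drives the error rate down rapidly. The sketch below estimates that effect classically (a generic illustration of the statistics, not specific to any quantum API):

```python
import random

def majority_vote_accuracy(single_run_accuracy, n_runs, trials=10_000, seed=0):
    """Estimate how often a majority vote over n_runs noisy runs
    yields the correct answer."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        votes = sum(rng.random() < single_run_accuracy for _ in range(n_runs))
        if votes > n_runs / 2:
            correct += 1
    return correct / trials

# With 80% per-run accuracy, repetition pushes overall accuracy toward 1.
for n in (1, 5, 21):
    print(f"{n:>2} runs -> {majority_vote_accuracy(0.8, n):.3f} accuracy")
```

This is why a probabilistic machine can still be useful: the software’s job is to make the per-run success probability high enough that a handful of repetitions gives a trustworthy answer.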
A prime example is modeling of molecular interactions.
Such “quantum chemistry” is so complex that only the simplest molecules can be analyzed by today’s digital computers. But fully developed quantum computers would not have any difficulty evaluating even the most complex processes. Google is already making forays in this field.
The implication of this is that pharmaceutical production or the discovery of new materials could be made much more efficient and happen at a faster rate thanks to quantum computer simulations. Quantum computing may also accelerate development of artificial intelligence. As AI logic is based on calculating the probabilities of many possible choices, it would be an ideal candidate for quantum computation.
Investors are also now scrambling to insert themselves into the quantum computing ecosystem. In addition to the obvious candidates, banks, aerospace companies, and cybersecurity firms are developing ways to protect against quantum computers’ ability to crack conventional cryptographic codes.
Looking ahead
In the near future, an entire library of optimized algorithms will be created, forming the basis of the first commercially available quantum computing software. Innovative programs will soon be developed by individuals, academics, and entrepreneurs at little upfront cost. We will see a proliferation of garage tech startups advancing the pillars of quantum chemistry and AI, as well as killer apps that have not yet been conceived.
Hardware manufacturers will be quick to take advantage of this democratization. The quantum computers themselves will continue to improve, featuring larger numbers of qubits and longer coherence times. They’ll also become more available via access to cloud processing.
This is just the dawn of quantum computing—it’ll be an exciting field to watch in coming years.
Image Credit: D-Wave Systems

Posted in Human Robots

#429319 DragonflEye Project Wants to Turn ...

R&D lab Draper is using genetic engineering and optoelectronics to build cybernetic insects.

Posted in Human Robots

#429313 Elon Musk Sees Brain-Computer Systems in ...

Elon Musk has a plan to help humans keep up with artificial intelligence.

Posted in Human Robots