Category Archives: Human Robots
Humanoid robot raises funds for charity in Poland
Poland is one step closer to a small revolution: an extraordinary volunteer will take part in the 25th Final of the Great Orchestra of Christmas Charity. One of the fund-raisers will be Pepper, a humanoid robot from Opole.
The Great Orchestra of Christmas Charity foundation is the biggest non-governmental charity in Poland. Every year, for 24 years now, it has collected funds to save the lives of patients in Polish hospitals. Every January the Great Orchestra of Christmas Charity holds its final, during which people raise money that is later used to buy specialist equipment for hospitals.
This year in Opole, during the Great Orchestra of Christmas Charity final, there will be an exceptional fund-raiser: Pepper, the robot from the Opole-based Weegree One project. Pepper is a humanoid robot designed to coexist with people. His innovation lies not only in his appearance but also in the way he can communicate with and help people. Pepper talks, recognizes and reacts to emotions, and moves and operates autonomously. These skills make it possible for him to join the fund-raising. In many countries robots are already part of daily life: they work in hospitals, in shops, and in customer service. Does it seem unreal? And yet robots in our daily lives are becoming reality, and they exist to help us.
In 24 years the foundation has collected and donated over 720 million złotych (~165 million euros) to support hospitals. That amounts to almost 40,000 state-of-the-art medical devices delivered to more than 600 hospitals across Poland. Each year's GOCC final is also an opportunity to bid on remarkable items donated by famous people. Last year these included skis autographed by the President of Poland, Andrzej Duda.
The robot, just like every other volunteer, will have one fund-raising can assigned to him. The money collected this year will be donated to children and senior citizens. Letting the robot take part in the final is a remarkable idea: he undoubtedly attracts attention, which will help him support the Great Orchestra of Christmas Charity as effectively as possible. There will be an opportunity to talk to the robot, take a picture, and of course support people in need with a donation.
The post Humanoid robot raises funds for charity in Poland appeared first on Roboticmagazine.
From time to time, the Singularity Hub editorial team unearths a gem from the archives and wants to share it all over again. It's usually a piece that was popular back then and we think is still relevant now. This is one of those articles. It was originally published October 7, 2015. We hope you enjoy it!
You’ve heard the chatter: Robots and AI want your job. One famous study predicted 47% of today’s jobs may be automated by 2034. And if you want to know how likely it is you’ll be replaced by a robot, check out this BBC tool. (Writer = 33%. Yay?)
But nowhere is automation as immediately evident as it is in manufacturing. It’s been going on for decades, most obviously in automotive assembly and heavy machinery. Increasingly, however, more advanced robot factory workers are branching out.
You may remember a few years ago notorious manufacturer of iPhones, Foxconn, made headlines by declaring they’d replace factory workers with a million robots. Well, they got the timing wrong. They did develop said bots (not a million), but they weren’t ready to take over for humans when it came to the precise work of assembling circuit boards and other electronics. That said, the basic message was right, even if the timing wasn't.
As the saying goes, robots are good for tasks that are dirty, dull, or dangerous—and soon, we’ll add delicate to the list. Consider, for example, MIT’s new sensored robotic hand made of silicone. The hand can guess an object’s size and shape and ID it from a list, and it can handle items as diverse as an egg or a compact disc.
These skills will be very useful. A good bit (though not all) of the work yet to be automated in manufacturing is the stuff requiring a delicate human touch.
A recent BCG Perspectives report on automation pegged four industries—already accounting for the lion’s share of global robot use—to lead the charge in coming years. These included machinery and transportation equipment, but they also included computers and electronic products and electrical equipment, appliances, and components.
In other words, those Foxconn iPhones will be made by robots.
There are two particularly strong drivers behind adoption: capability and cost competitiveness. Both are tied to quick advances in computing and AI, so we're seeing gains in capability matched by falling costs of factory robots.
Across China, manufacturers are following in Foxconn’s footsteps.
At Shenzhen Rapoo Technology Co., humans work next to 80 robotic arms assembling computer mice and keyboards. The bots have enabled the company to cut its workforce from over 3,000 in 2010 to fewer than 1,000 today. China has accounted for the most robot sales worldwide for two years running, and BCG expects 50% of robotics shipments to go to China and the US alone in the next decade.
While capability accounts for what can be automated, it's how much robots cost compared to human labor that drives when they'll be adopted. Electronics manufacturers are increasingly employing robots because the machines are more capable and the industry's higher-than-average wages make automation relatively more attractive.
And here’s the interesting bit: once the cost of robots falls below a certain point—assuming they can produce as much or more than human workers—the labor cost advantage that has driven offshoring in recent decades will all but disappear. While future iPhones may be robot-made, they likely won’t all be made in China.
According to BCG, a little over a decade ago, Chinese labor costs were roughly 1/20 of those in the US—but today, that gap has nearly closed. Meanwhile, in the four industries above, robotic systems in the US currently average $10 to $20 an hour to operate—which is already below the cost of equivalent American workers.
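The cost logic BCG describes reduces to a simple break-even comparison. A minimal sketch of that comparison follows; the $36/hour wage and the output ratio are illustrative assumptions, not figures from the report, and only the $10-$20/hour robot operating cost comes from the article.

```python
# Illustrative break-even check: automation becomes attractive once a
# robot's cost per unit of output falls below the local wage.
# All specific figures below are assumptions for illustration.

def automation_is_cheaper(robot_cost_per_hour, wage_per_hour,
                          robot_output_ratio=1.0):
    """Return True if the robot produces a unit of output more cheaply.

    robot_output_ratio is the robot's output relative to one worker
    (1.0 = parity with a human).
    """
    robot_unit_cost = robot_cost_per_hour / robot_output_ratio
    return robot_unit_cost < wage_per_hour

# Robot at the high end of the article's $10-$20/hr range:
print(automation_is_cheaper(20, 36))  # True: beats an assumed $36/hr wage
print(automation_is_cheaper(20, 15))  # False: cheaper labor still wins
```

Once the robot's cost per unit of output drops below the local wage, the labor-cost advantage that drove offshoring starts to evaporate, which is the dynamic the next paragraphs describe.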
BCG expects those costs to fall even further, and the robots to gain more abilities.
“We project, therefore, that robots will perform 40 to 45 percent of production tasks in each of these industries by , compared with fewer than 10 percent today.”
As China’s wages rise and the cost of increasingly capable robots falls—expect to not only see China adopt more robots, but expect to see US firms bring some manufacturing back home. (Just don’t expect them to hire too many more humans.)
These trends apply elsewhere too. South Korea, for example, is roboticizing faster than anyone, and 40% of manufacturing jobs there may be automated by 2025.
But according to BCG, the revolution won't be equally revolutionary everywhere at once. Some industries—like textiles—are still relatively difficult to automate, and their labor costs are lower than in other industries, so automation there will be slower.
Meanwhile, regulations favoring humans over robots may prevent quick adoption in certain countries. BCG notes that of the top 25 manufacturing export economies, many of the slowest to adopt robots are in Europe, despite having some of the highest wages in the world and aging workforces. Among other factors, labor laws in these countries may make it difficult to replace human workers with robots.
All this hints at the emergence of a fascinating shift in the global economy: in the future, large manufacturing countries won't just compete on the cost of human labor—they'll increasingly compete on robotics adoption too. BCG writes:
"We believe that as wage gaps between low-cost and high-cost economies continue to narrow, robot adoption could emerge as an important new factor that will contribute to redrawing the competitive balance among economies in global manufacturing."
The future's manufacturing powerhouses, then, will be those countries in which the robot revolution takes root earliest and moves swiftest.
Image Credit: Shutterstock.com
Artificial intelligence has bested the world's top poker players in a 120,000-hand match of Heads-Up, No-Limit Texas Hold'em poker.
The dirty little secret is out about artificial intelligence.
No, not the one about machines taking over the world. That’s an old one. This one is more insidious. Data scientists, AI experts and others have long suspected it would be a problem. But it’s only within the last couple of years, as AI or some version of machine learning has become nearly ubiquitous in our lives, that the issue has come to the forefront.
AI is prejudiced. Sexism. Ageism. Racism. Name an -ism, and more likely than not, the results produced by our machines have a bias in one or more ways. But an emerging think tank dubbed Diversity.ai believes our machines can do better than their creators when it comes to breaking down stereotypes and other barriers to inclusion.
The problem has been well documented: in 2015, for example, Google's photo app embarrassingly tagged some black people as gorillas. A recent pre-print paper reported widespread human bias in the metadata for a popular database of Flickr images used to train neural networks. Even more disturbing was an investigative report last year by ProPublica that found software used to predict future criminal behavior—à la the film "Minority Report"—was biased against minorities.
For Anastasia Georgievskaya, the aha moment that machines can learn prejudice came during work on an AI-judged beauty contest developed by Youth Laboratories, a company she co-founded in 2015 that uses machine vision and AI to study aging. Almost all the winners picked by the computer jury were white.
“I thought that discrimination by the robots is likely, but only in a very distant future,” says Georgievskaya by email. “But when we started working on Beauty.AI, we realized that people are discriminating [against] other people by age, gender, race and many other parameters, and nobody is talking about it.”
Algorithms can always be improved, but a machine can only learn from the data it is fed.
“We struggled to find the data sets of older people and people of color to be able to train our deep neural networks,” Georgievskaya says. “And after the first and second Beauty.ai contests, we realized that it is a major problem.”
Age bias in available clinical data has frustrated Alex Zhavoronkov, CEO of Insilico Medicine, Inc., a bioinformatics company that combines genomics, big data analysis and deep learning for drug discovery related to aging and age-related diseases. A project called Aging.ai, which uses a deep neural network trained on hundreds of human blood tests to predict age, had high error rates in older populations.
“Our company came to study aging not only because we want to extend healthy productive longevity, but to fix one important problem in the pharmaceutical industry—age bias,” Zhavoronkov says. “Many clinical trials cut off patient enrollment by age, and thousands of healthy but older people miss their chance to get a treatment.”
Georgievskaya and like-minded scientists not only recognized the problem, they started to study it in depth—and do something about it.
“We realized that it’s essential to develop routines that test AI algorithms for discrimination and bias, and started experimenting with the data, methods and the metrics,” she says. “Our company is not only focused on beauty, but also on healthcare and visual-imaging biomarkers of health. And there we found many problems in age, gender, race and wealth bias.”
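One of the routines she describes can be as simple as comparing outcome rates across demographic groups. Here is a minimal sketch of a demographic-parity check; the group labels and decisions are made-up illustrative data, not Youth Laboratories' actual method.

```python
# Hypothetical bias test: compare a classifier's positive-outcome rate
# across groups. A large gap between groups flags possible bias.

def positive_rate(outcomes):
    """Fraction of 0/1 decisions that are positive (1)."""
    return sum(outcomes) / len(outcomes)

def parity_gap(outcomes_by_group):
    """Largest difference in positive-outcome rate between any two groups.

    outcomes_by_group maps a group label to a list of 0/1 decisions.
    A gap near 0 suggests demographic parity.
    """
    rates = [positive_rate(v) for v in outcomes_by_group.values()]
    return max(rates) - min(rates)

decisions = {
    "group_a": [1, 1, 1, 0],  # 75% selected
    "group_b": [1, 0, 0, 0],  # 25% selected
}
print(parity_gap(decisions))  # 0.5, a large gap worth investigating
```

A real audit would add more metrics (per-group accuracy, equalized odds) and statistical significance checks, but the principle is the same: measure group-wise outcomes before trusting the model.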
As Zhavoronkov envisions it, Diversity.ai will bring together a “diverse group of people with a very ‘fresh’ perspective, who are not afraid of thinking out of the box. Essentially, it is a discussion group with many practical projects and personal and group goals.”
His own goal? “My personal goal is to prove that the elderly are being discriminated [against], and develop highly accurate multi-modal biomarkers of chronological and biological aging. I also want to solve the racial bias and identify the fine equilibrium between the predictive power and discrimination in the [deep neural networks].”
The group's advisory board is still coming together, but already includes representatives from Elon Musk's billion-dollar non-profit AI research company OpenAI, computing company Nvidia, a leading South Korean futurist, and the Future of Humanity Institute at the University of Oxford.
Nell Watson, founder and CEO of Poikos, a startup that developed a 3D body scanner for mobile devices, is one of the advisory board members. She's also an adjunct in the Artificial Intelligence and Robotics track at Singularity University. She recently founded OpenEth.org, which she describes as a non-profit research company that hopes to advance the field of machine ethics by developing a framework for analyzing various ethical situations.
She sees OpenEth.org and Diversity.ai as natural allies toward the goal of developing ethical, objective AI.
She explains that the OpenEth team is developing a blockchain-based public ledger system capable of analyzing contracts for adherence to a structure of ethics.
"[It] provides a classification of the contract's contents, without necessarily needing for the contract itself to be public," she explains. That means companies can safeguard proprietary algorithms while providing public proof that they adhere to ethical standards.
“It also allows for a public signing of the ownership/responsibility for a given agent, so that anyone interacting with a machine will know where it came from and whether the ruleset that it's running under is compatible with their own values,” she adds. “It's a very ambitious project, but we are making steady progress, and I expect it to play a piece of many roles necessary in safeguarding against algorithmic bias.”
Georgievskaya says she hopes Diversity.ai can hold a conference later this year to continue to build awareness around issues of AI bias and begin work to scrub discrimination from our machines.
“Technologies and algorithms surround us everywhere and became an essential part of our daily life,” she says. “We definitely need to teach algorithms to treat us in the right way, so that we can live peacefully in [the] future.”
Image Credit: Shutterstock
The rise of artificially intelligent machines will come at a cost—but with the potential to disrupt and transform society on a scale not seen since the Industrial Revolution. Jobs will be lost, but new fields of innovation will open up.
The changes ahead will require us to rethink attitudes and philosophies, not to mention laws and regulations. Some people are already debating the implications of an automated world, giving rise to think tanks and conferences on AI, such as the annual We Robot forum, which takes a scholarly approach to policy issues.
A registered patent attorney and board-certified physician, Ryan Abbott writes about the impact of artificial intelligence on intellectual property, health and tort law. We talked to him last year about the thorny issues surrounding patent ownership when the mother of invention is a machine. Now Abbott has waded into the equally prickly space of tort law and who—or what—is responsible when machines cause accidents.
“These are very popular topics,” says Abbott during an interview. “These technologies are going to fundamentally change the way we interact with machines. They’re going to fundamentally change society—and they have major legal implications.”
A professor of law and health sciences at the University of Surrey’s School of Law and adjunct assistant professor of medicine at the David Geffen School of Medicine at UCLA, Abbott is not the first to tackle the legal implications of computer-caused accidents.
In 2014, for example, a major report on RoboLaw from the European Union suggested creating a type of insurance fund to compensate those injured by AI computers. A previous article in the Boston Globe that surveyed experts across fields ranging from philosophy to robotics seemed to find consensus on one thing: the legal status of smart robots will require a “balancing act.”
Abbott appears to be the first to suggest in a soon-to-be-published paper that tort law treat AI machines like people when it comes to liability issues. And, perhaps more radically, he suggests people be judged against the competency of a computer when AI proves to be consistently safer than a human being.
Currently, the law treats machines as if they were all created equal, as simple consumer products. In most cases, when an accident occurs, standards of strict product liability law apply. In other words, unless a consumer uses a product in an outrageous way or grossly ignores safety warnings, the manufacturer is automatically considered at fault.
“Most injuries people cause are evaluated under a negligence standard, which requires unreasonable conduct to establish liability,” Abbott notes in his paper, tentatively titled, “Allocating Liability for Computer-Generated Torts.”
“However, when computers cause the same injuries, a strict liability standard applies. This distinction has significant financial consequences and corresponding impact on the rate of technology adoption. It discourages automation, because machines entail greater liability than people.”
Turning thinking machines into people—at least in a court of law—doesn’t absolve companies of responsibility, but allows them to accept more risk while still making machines that are safe to use, according to Abbott.
“I think my proposal is a creative way to tinker with the way the law works to incentivize automation without forcing it,” he says.
Abbott argues his point with a case study focusing on self-driving vehicles, possibly the most immediately disruptive technology of today—and already deemed safer than human drivers, despite some high-profile accidents last year involving Tesla’s Autopilot system.
“Self-driving cars are here among us and going to be all over the place very soon,” notes Abbott, adding that shifting the tort burden from strict liability to negligence would quicken the adoption of driverless technology, improve safety and ultimately save lives.
In 2015, for instance, more than 35,000 people in the United States died in traffic accidents, most caused by human error, according to the Insurance Institute for Highway Safety Highway Loss Data Institute. Bart Selman, Cornell professor of computer science and director of the university's Intelligent Information Systems Institute, recently told journalist Michael Belfiore that driverless cars would be tenfold safer than humans within three years and 100 times safer within a decade. The savings in human lives and damages are evident.
The US National Highway Traffic Safety Administration just put an exclamation mark on the point when it released its full findings on the Tesla fatality in Florida. The agency cleared the Autopilot system of any fault in the accident and even praised its safety design, according to a story in TechCrunch. The report noted that crash rates involving Tesla cars have dropped by nearly 40 percent since Autopilot came online.
Safety is also the big reason why Abbott argues that in the not-too-distant future, human error in tort law will be measured against the unerring competency of machines.
“This means that defendants would no longer have their liability based on what a hypothetical, reasonable person would have done in their situation, but what a computer would have done,” Abbott writes. “While this will mean that the average person’s best efforts will no longer be sufficient to avoid liability, the rule would benefit the general welfare.”
The human anxiety level over the coming machine revolution is already high, so wouldn't this just add to it? Abbott argues his proposals aren't about diminishing human abilities, but about recognizing the reality that machines will do some jobs more safely than humans.
And not just behind the wheel of a car. IBM’s Watson, among other AI systems, is already working in the medical field, including oncology. Meanwhile, a 2016 study in the British Medical Journal reported that human error is the third-leading cause of death in the United States.
If Watson MD has a higher safety record than Dr. Smith, who would you choose for treatment?
“Ultimately, we’re all consumers and potential accident victims, and I think that is something people could support,” Abbott says. “I think when people see the positive impact of it, it will change attitudes.”
Image Credit: Shutterstock