Tag Archives: robot

#430106 No More Playing Games: AlphaGo AI to ...

Humankind lost another important battle with artificial intelligence (AI) last month when AlphaGo beat the world’s leading Go player Ke Jie by three games to zero.
AlphaGo is an AI program developed by DeepMind, part of Google’s parent company Alphabet. Last year it beat another leading player, Lee Se-dol, by four games to one, but since then AlphaGo has substantially improved.
Ke Jie described AlphaGo’s skill as “like a god of Go.”
AlphaGo will now retire from playing Go, leaving behind a legacy of games played against itself. One Go expert has described these as like “games from far in the future,” which humans will study for years to improve their own play.
Ready, set, Go
Go is an ancient game that pits two players—one playing black pieces, the other white—against each other for dominance on a board, usually marked with 19 horizontal and 19 vertical lines.
[Image: A typical game of Go—simple to learn but a lifetime to master. Flickr/Alper Cugun, CC BY]
Go is a far more difficult game for computers to play than chess, because the number of possible moves in each position is much larger. This makes searching many moves ahead—feasible for computers in chess—very difficult in Go.
DeepMind’s breakthrough was the development of general-purpose learning algorithms that can, in principle, be trained in domains far more societally relevant than Go.
DeepMind says the research team behind AlphaGo is looking to pursue other complex problems, such as finding new cures for diseases, dramatically reducing energy consumption or inventing revolutionary new materials. It adds:
"If AI systems prove they are able to unearth significant new knowledge and strategies in these domains too, the breakthroughs could be truly remarkable. We can’t wait to see what comes next."
This does open up many opportunities for the future, but challenges still remain.
Neuroscience meets AI
AlphaGo combines the two most powerful ideas about learning to emerge from the past few decades: deep learning and reinforcement learning. Remarkably, both were originally inspired by how biological brains learn from experience.
In the human brain, sensory information is processed in a series of layers. For instance, visual information is first transformed in the retina, then in the midbrain, and then through many different areas of the cerebral cortex.
This creates a hierarchy of representations where simple, local features are extracted first, and then more complex, global features are built from these.
The AI equivalent is called deep learning; deep because it involves many layers of processing in simple neuron-like computing units.
But to survive in the world, animals need to not only recognize sensory information, but also act on it. Generations of scientists and psychologists have studied how animals learn to take a series of actions that maximize their reward.
This has led to mathematical theories of reinforcement learning that can now be implemented in AI systems. The most powerful of these is temporal difference learning, which improves actions by maximizing expectation of future reward.
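The core idea of temporal difference learning can be shown in a few lines. Below is a minimal tabular TD(0) sketch on a toy five-state chain (an illustrative assumption, not anything from AlphaGo): the agent drifts toward a goal state that pays a reward, and each state's value estimate is nudged toward the bootstrapped target of immediate reward plus the discounted value of the next state—the "expectation of future reward" the article describes.

```python
import random

# Toy TD(0) sketch: states 0..4 in a chain, reward 1 only on reaching
# state 4. At each step the agent either stays put or advances.
# V(s) is updated toward the target r + gamma * V(s').

N_STATES = 5
ALPHA, GAMMA = 0.1, 0.9
V = [0.0] * N_STATES  # value estimate for each state

random.seed(0)
for _ in range(2000):
    s = 0
    while s != N_STATES - 1:
        s_next = min(s + random.choice([0, 1]), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0
        # temporal-difference update: move V(s) toward r + gamma * V(s')
        V[s] += ALPHA * (r + GAMMA * V[s_next] - V[s])
        s = s_next

print([round(v, 2) for v in V])
```

After training, states closer to the reward carry higher values—the same gradient of "how promising is this position?" that AlphaGo's value network learns, only at neural-network scale.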
The best moves
By combining deep learning and reinforcement learning in a series of artificial neural networks, AlphaGo first learned to play Go at human expert level from 30 million moves taken from games between human players.
But then it started playing against itself, using the outcome of each game to relentlessly refine its decisions about the best move in each board position. A value network learned to predict the likely outcome given any position, while a policy network learned the best action to take in each situation.
Although it couldn’t sample every possible board position, AlphaGo’s neural networks extracted key ideas about strategies that work well in any position. It is these countless hours of self-play that led to AlphaGo’s improvement over the past year.
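The self-play loop described above can be caricatured at toy scale. The sketch below substitutes a tabular value function and the trivial game of Nim (take 1–3 stones; whoever takes the last stone wins) for AlphaGo's neural networks and Go—every constant and rule here is an assumption for illustration only. The shape of the loop is the same: play against yourself, then use each game's outcome to refine the value placed on every position visited.

```python
import random

# Toy self-play sketch (NOT AlphaGo's method): a tabular "value network"
# for Nim. V[n] estimates the win probability for the player to move
# with n stones left. After each self-play game, the values of visited
# positions are nudged toward the final outcome.

N, ALPHA, EPS = 12, 0.1, 0.2
V = {n: 0.5 for n in range(N + 1)}
V[0] = 0.0  # no stones left: the player to move has already lost

random.seed(1)
for _ in range(20000):
    n, history = N, []
    while n > 0:
        moves = [m for m in (1, 2, 3) if m <= n]
        if random.random() < EPS:
            m = random.choice(moves)  # occasional exploration
        else:
            # best move minimizes the opponent's value after we move
            m = min(moves, key=lambda k: V[n - k])
        history.append(n)
        n -= m
    # the mover who emptied the pile won; walk back through the game,
    # alternating the outcome between the two players
    outcome = 1.0
    for pos in reversed(history):
        V[pos] += ALPHA * (outcome - V[pos])
        outcome = 1.0 - outcome

# Nim theory: positions divisible by 4 are lost for the player to move
print(round(V[4], 2), round(V[5], 2))
```

With nothing but self-play and outcomes, the table rediscovers Nim's known theory—positions divisible by four score low, their neighbors high—which is the tabular analogue of AlphaGo extracting strategic ideas no human fed it.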
Unfortunately, as yet there is no known way to interrogate the network to directly read out what these key ideas are. Instead, we can only study its games and hope to learn from these.
This is one of the problems with using such neural network algorithms to help make decisions in, for instance, the legal system: they can’t explain their reasoning.
We still understand relatively little about how biological brains actually learn, and neuroscience will continue to provide new inspiration for improvements in AI.
Humans can learn to become expert Go players based on far less experience than AlphaGo needed to reach that level, so there is clearly room for further developing the algorithms.
Also, much of AlphaGo’s power is based on a technique called back-propagation learning that helps it correct errors. But the relationship between this and learning in real brains is still unclear.
What’s next?
The game of Go provided a nicely constrained development platform for optimizing these learning algorithms. But many real-world problems are messier than this and offer less opportunity for the equivalent of self-play (self-driving cars, for instance).
So, are there problems to which the current algorithms can be fairly immediately applied?
One example may be optimization in controlled industrial settings. Here the goal is often to complete a complex series of tasks while satisfying multiple constraints and minimizing cost.
As long as the possibilities can be accurately simulated, these algorithms can explore and learn from a vastly larger space of outcomes than will ever be possible for humans. Thus DeepMind’s bold claims seem likely to be realized, and as the company says, we can’t wait to see what comes next.
This article was originally published on The Conversation. Read the original article. Continue reading

Posted in Human Robots

#429980 Artificial Intelligence? What about Real ...

Professor Hiroshi Ishiguro, creator of “Erica” – one of the most complex humanoid robots yet – thinks that desire and intention are the key prerequisites to humanizing Artificial Intelligence (AI) in the next 3 years. This entails the application of … Continue reading

Posted in Human Robots

#430097 Interactive Robotic Design Tool

PITTSBURGH – A new interactive design tool developed by Carnegie Mellon University’s Robotics Institute enables both novices and experts to build customized legged or wheeled robots using 3D-printed components and off-the-shelf actuators.
Using a familiar drag-and-drop interface, individuals can choose from a library of components and place them into the design. The tool suggests components that are compatible with each other, offers potential placements of actuators and can automatically generate structural components to connect those actuators.
Once the design is complete, the tool provides a physical simulation environment to test the robot before fabricating it, enabling users to iteratively adjust the design to achieve a desired look or motion.
“The process of creating new robotic systems today is notoriously challenging, time-consuming and resource-intensive,” said Stelian Coros, assistant professor of robotics. “In the not-so-distant future, however, robots will be part of the fabric of daily life and more people — not just roboticists — will want to customize robots. This type of interactive design tool would make this possible for just about anybody.”
Today, robotics Ph.D. student Ruta Desai will present a report on the design tool she developed with Coros and master’s student Ye Yuan at the IEEE International Conference on Robotics and Automation (ICRA 2017) in Singapore.
Coros’ team designed a number of robots with the tool and verified its feasibility by fabricating two — a wheeled robot with a manipulator arm that can hold a pen for drawing, and a four-legged “puppy” robot that can walk forward or sideways.
“The system makes it easy to experiment with different body proportions and motor configurations, and see how these decisions affect the robot’s ability to do certain tasks,” said Desai. “For instance, we discovered in simulation that some of our preliminary designs for the puppy enabled it to only walk forward, not sideways. We corrected that for the final design. The motions of the robot we actually built matched the desired motion we demonstrated in simulation very well.”
The research team developed models of how actuators, off-the-shelf brackets and 3D-printable structural components can be combined to form complex robotic systems. The iterative design process enables users to experiment by changing the number and location of actuators and to adjust the physical dimensions of the robot. The tool includes an auto-completion feature that allows it to automatically generate assemblies of components by searching through possible arrangements.
“Our work aims to make robotics more accessible to casual users,” Coros said. “This is important because people who play an active role in creating robotic devices for their own use are more likely to have positive feelings and higher quality interactions with them. This could accelerate the adoption of robots in everyday life.”
The National Science Foundation supported this research.
###
About Carnegie Mellon University: Carnegie Mellon (www.cmu.edu) is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the arts. More than 13,000 students in the university’s seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation.
The press release above is provided by:
Carnegie Mellon University
5000 Forbes Ave.
Pittsburgh, PA 15213
412-268-2900
Fax: 412-268-6929
Contact: Byron Spice
412-268-9068
bspice@cs.cmu.edu
The post Interactive Robotic Design Tool appeared first on Roboticmagazine. Continue reading

Posted in Human Robots

#430096 What Happens When Cyborg Tech Goes ...

The age of the cyborg may be closer than we think. Rapidly improving medical robotics, wearables, and implants means many humans are already part machine, and this trend is only likely to continue.
It is most noticeable in the field of medical prosthetics where high-performance titanium and carbon fiber replacements for limbs have become commonplace. The use of “blades” by Paralympians has even raised questions over whether they actually offer an advantage over biological limbs.
For decades, myoelectric prosthetics—powered artificial limbs that read electrical signals from the muscles to allow the user to control the device—have provided patients with mechanical replacements for lost hands.
Now, advances in robotics are resulting in prosthetic hands that are getting close to matching the originals in terms of dexterity. The Michelangelo prosthetic hand is fully articulated and precise enough to carry out tasks like cooking and ironing.
Researchers have even demonstrated robotic hands that have a sense of touch and can be controlled using the mind. And just last month another group showed that fitting a standard myoelectric arm with a camera and a computer vision system allowed it to “see” and grab objects without the user having to move a muscle.

Medical exoskeletons are already commercially available—most notably, ReWalk and Ekso Bionics devices designed to help those with spinal cord injuries stand and walk. Elsewhere, this technology is being used to rehabilitate people after strokes or other traumatic injuries by guiding their limbs through their full range of motion.
At present, these technologies are aimed solely at those who have been injured or incapacitated, but an editorial in Science Robotics last week warned that may not always be the case.
“There needs to be a debate on the future evolution of technologies as the pace of robotics and AI is accelerating,” the authors wrote.
“It seems certain that future assistive technologies will not only compensate for human disability but also drive human capacities beyond our innate physiological levels. The associated transformative influence will bring on broad social, political, and economic issues.”
This can already be seen with the development of military exoskeletons designed to boost soldiers’ endurance. More bizarrely, Japanese researchers have recently floated the idea of adding to our limbs rather than replacing them. The MetaLimbs project gives users two extra robotic arms that can be controlled using sensors on their legs and feet.

Last week’s issue of Science Robotics included a study demonstrating that a soft robotic exosuit was more effective at lightening the load on a runner when it didn’t follow a human’s natural running pattern and instead used computer simulations to decide what forces to apply.
This suggests there is considerable room for machines to not only augment the power of our muscles but even optimize the biomechanics of our movement. And as the authors of the editorial note, biomechanics is only one strand of research where scientists are trying to replicate and ultimately improve our abilities.
Devices like cochlear implants have been used to restore hearing in the deaf for decades and there are a number of experimental efforts to create bionic eyes to help the blind see again. Efforts to augment our intelligence with neural implants have been widely reported on in recent months.
Admittedly, there is still a long way to go before people start asking to have an arm amputated so they can get a shiny new robotic one. And it’s likely the companies pushing for consumer-grade neural interfaces are overestimating how many people will voluntarily undergo brain surgery.
But we’ve already taken the first steps towards merging our biological selves with machines.
You can argue smartphones are already essentially a prosthetic designed to boost communication and memory. And more overtly cyborg-like augmentations are likely to appear in many of our lifetimes.
What then does that mean for humankind? Natural evolution has long relied on mutation conferring minute but significant advantages to individuals that gradually spread throughout populations. If new prosthetic technologies start to confer these advantages overnight the effects could be very patchy.
The worry is that the latest augmentations will be available only to the few who can afford them, and in just a few generations you could end up with an elite that dwarfs the rest of humanity not only financially but also physically and cognitively.
At the same time, these technologies hold huge promise to restore a decent standard of living to the countless people incapacitated by injury or disease. And if applied equitably, devices aimed at augmenting our abilities could better equip us to face the many challenges society faces.
But as the authors of the editorial note, the conversation on how best to guide us through this next stage of our evolution needs to start now. Because these devices have so far been focused on restoring functions that have been lost, we have largely missed the fact that they are now reaching the point where they can improve those functions or even enable new ones.
Image Credit: Shutterstock Continue reading

Posted in Human Robots

#430094 Configuration and manipulation of soft ...

Traditional rigid-bodied robots are stiff, with few degrees of freedom, placing limits on many applications. Recently, more engineers are drawing on the flexibility of living organisms to advance bionic soft robotics. The main characteristics of soft robots are flexibility, deformability and energy absorption. Continue reading

Posted in Human Robots