AI Agents Startle Researchers With ...
After 25 million games, the AI agents playing hide-and-seek with each other had mastered four basic game strategies. The researchers expected that part.
After a total of 380 million games, the AI players developed strategies that the researchers didn’t know were possible in the game environment—which the researchers had themselves created. That was the part that surprised the team at OpenAI, a research company based in San Francisco.
The AI players learned everything via a machine learning technique known as reinforcement learning. In this learning method, AI agents start out by taking random actions. Sometimes those random actions produce desired results, which earn them rewards. Via trial-and-error on a massive scale, they can learn sophisticated strategies.
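To make that trial-and-error loop concrete, here’s a minimal sketch: a toy agent samples actions, keeps a running average of the reward each action earns, and gradually favors the best one. The actions, environment, and numbers are invented for illustration; this is not OpenAI’s code.

```python
import random

# Toy trial-and-error loop: try actions, track each action's average
# reward, and exploit whichever looks best so far. (Invented example.)
ACTIONS = ["hide", "run", "push_block"]
TRUE_MEANS = {"hide": 0.2, "run": 0.1, "push_block": 0.5}  # hidden from the agent

value = {a: 0.0 for a in ACTIONS}  # running estimate of each action's payoff
count = {a: 0 for a in ACTIONS}

for step in range(10_000):
    if random.random() < 0.1:                 # explore: take a random action
        action = random.choice(ACTIONS)
    else:                                     # exploit: best estimate so far
        action = max(ACTIONS, key=value.get)
    reward = random.gauss(TRUE_MEANS[action], 0.1)
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]  # incremental mean

print(max(ACTIONS, key=value.get))  # after many trials: "push_block"
```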
In the context of games, this process can be abetted by having the AI play against another version of itself, ensuring that the opponents will be evenly matched. It also locks the AI into a process of one-upmanship, where any new strategy that emerges forces the opponent to search for a countermeasure. Over time, this “self-play” amounts to what the researchers call an “auto-curriculum.”
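And here’s a hedged sketch of the self-play loop itself: the learner repeatedly plays against a frozen snapshot of itself, so every improvement immediately becomes the next opponent to beat. The Policy class and its single “skill” knob are placeholders, not anything from OpenAI’s system, where the update would come from learning on game outcomes rather than a counter.

```python
import copy
import random

class Policy:
    """Placeholder policy with a single tunable 'skill' knob."""
    def __init__(self, skill=0.0):
        self.skill = skill

    def beats(self, opponent):
        # Higher skill wins more often; evenly matched sides win ~50 percent.
        return random.random() < 0.5 + 0.1 * (self.skill - opponent.skill)

def train_self_play(generations=100, games=100):
    learner = Policy()
    for _ in range(generations):
        frozen = copy.deepcopy(learner)  # the opponent is a snapshot of self
        wins = sum(learner.beats(frozen) for _ in range(games))
        if wins <= games // 2:
            learner.skill += 0.05        # stand-in for learning a counter-strategy
    return learner
```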
According to OpenAI researcher Igor Mordatch, this experiment shows that self-play “is enough for the agents to learn surprising behaviors on their own—it’s like children playing with each other.”
Reinforcement learning is a hot field of AI research right now. OpenAI’s researchers used the technique when they trained a team of bots to play the video game Dota 2; those bots squashed a world-champion human team last April. The Alphabet subsidiary DeepMind has used it to triumph in the ancient board game Go and the video game StarCraft.
Aniruddha Kembhavi, a researcher at the Allen Institute for Artificial Intelligence (AI2) in Seattle, says games such as hide-and-seek offer a good way for AI agents to learn “foundational skills.” He worked on a team that taught their AllenAI to play Pictionary with humans, viewing the gameplay as a way for the AI to work on common sense reasoning and communication. “We are, however, quite far away from being able to translate these preliminary findings in highly simplified environments into the real world,” says Kembhavi.
Illustration: OpenAI
AI agents construct a fort during a hide-and-seek game developed by OpenAI.
In OpenAI’s game of hide-and-seek, both the hiders and the seekers received a reward only if they won the game, leaving the AI players to develop their own strategies. Within a simple 3D environment containing walls, blocks, and ramps, the players first learned to run around and chase each other (strategy 1). The hiders next learned to move the blocks around to build forts (2), and then the seekers learned to move the ramps (3), enabling them to jump inside the forts. Then the hiders learned to move all the ramps into their forts before the seekers could use them (4).
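A minimal sketch of that reward structure as the article describes it (the function and its signature are mine, not OpenAI’s): each round is zero-sum, and each team is rewarded only according to whether it won.

```python
def team_rewards(any_hider_seen: bool) -> tuple:
    """Win-based, zero-sum team reward: hiders win only if they all stay
    hidden, and seekers get exactly the opposite signal.
    (Illustrative only; not OpenAI's actual reward code.)"""
    hider_reward = -1.0 if any_hider_seen else 1.0
    return hider_reward, -hider_reward

# If a seeker spots any hider, hiders get -1 and seekers get +1.
print(team_rewards(any_hider_seen=True))  # (-1.0, 1.0)
```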
The two strategies that surprised the researchers came next. First the seekers learned that they could jump onto a box and “surf” it over to a fort (5), allowing them to jump in—a maneuver that the researchers hadn’t realized was physically possible in the game environment. So as a final countermeasure, the hiders learned to lock all the boxes into place (6) so they weren’t available for use as surfboards.
Illustration: OpenAI
An AI agent uses a nearby box to surf its way into a competitor’s fort.
In this circumstance, having AI agents behave in an unexpected way wasn’t a problem: They found different paths to their rewards, but didn’t cause any trouble. However, you can imagine situations in which the outcome would be rather serious. Robots acting in the real world could do real damage. And then there’s Nick Bostrom’s famous example of a paper clip factory run by an AI, whose goal is to make as many paper clips as possible. As Bostrom told IEEE Spectrum back in 2014, the AI might realize that “human bodies consist of atoms, and those atoms could be used to make some very nice paper clips.”
Bowen Baker, another member of the OpenAI research team, notes that it’s hard to predict all the ways an AI agent will act inside an environment—even a simple one. “Building these environments is hard,” he says. “The agents will come up with these unexpected behaviors, which will be a safety problem down the road when you put them in more complex environments.”
AI researcher Katja Hofmann at Microsoft Research Cambridge, in England, has seen a lot of gameplay by AI agents: She started a competition that uses Minecraft as the playing field. She says the emergent behavior seen in this game, and in prior experiments by other researchers, shows that games can be useful for studying safe and responsible AI.
“I find demonstrations like this, in games and game-like settings, a great way to explore the capabilities and limitations of existing approaches in a safe environment,” says Hofmann. “Results like these will help us develop a better understanding of how to validate and debug reinforcement learning systems, a crucial step on the path towards real-world applications.”
Baker says there’s also a hopeful takeaway from the surprises in the hide-and-seek experiment. “If you put these agents into a rich enough environment they will find strategies that we never knew were possible,” he says. “Maybe they can solve problems that we can’t imagine solutions to.”
Swarm Robots Mimic Ant Jaws to Flip and ...
Small robots are appealing because they’re simple and cheap, and it’s easy to make a lot of them. Unfortunately, being simple and cheap means that each robot individually can’t do a whole lot. To make up for this, you can do what insects do: leverage that simplicity and low cost to make a huge swarm of simple robots that can cooperate to carry out relatively complex tasks.
Using insects as an example does set a bit of an unfair expectation for the poor robots, since insects are (let’s be honest) generally smarter and much more versatile than a robot on their scale could ever hope to be. Most robots with insect-like capabilities (like DASH and its family) are really too big and complex to be turned into swarms, because making a vast number of small robots rules out components like motors, which are simply too expensive.
The question, then, is how to make a swarm of inexpensive small robots with insect-like mobility that don’t need motors to get around, and Jamie Paik’s Reconfigurable Robotics Lab at EPFL has an answer, inspired by trap-jaw ants.
Let’s talk about trap-jaw ants for just a second, because they’re insane. You can read this 2006 paper about them if you’re particularly interested in insane ants (and who isn’t!), but if you just want to hear the insane bit, it’s that trap-jaw ants can fire themselves into the air by biting the ground (!). In just 0.06 millisecond, their half-millimeter long mandibles can close at a top speed of 64 meters per second, which works out to an acceleration of about 100,000 g’s. Biting the ground causes the ant’s head to snap back with a force of 300 times the body weight of the ant itself, which launches the ant upwards. The ants can fly 8 centimeters vertically, and up to 15 cm horizontally—this is a lot, for an ant that’s just a few millimeters long.
EPFL’s robots, called Tribots, look nothing at all like trap-jaw ants, which personally I am fine with. They’re about 5 cm tall, weigh 10 grams each, and can be built on a flat sheet and then folded into a tripod shape, origami-style. Or maybe it’s kirigami, because there’s some cutting involved. The Tribots are fully autonomous, meaning they have onboard power and control, including proximity sensors that allow them to detect objects and avoid them.
Photo: Marc Delachaux/EPFL
EPFL researchers Zhenishbek Zhakypov and Jamie Paik.
Avoiding objects is where the trap-jaw ants come in. Using two different shape-memory actuators (a spring and a latch, similar to how the ant’s jaw works), the Tribots can move around using a bunch of different techniques that adapt to the terrain they’re on (see the gait-selection sketch after this list), including:
Vertical jumping for height
Horizontal jumping for distance
Somersault jumping to clear obstacles
Walking on textured terrain with short hops (called “flic-flac” walking)
Crawling on flat surfaces
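To make the idea concrete, here’s a hypothetical gait selector; the five gaits are the ones listed above, but the sensing inputs and thresholds are invented, not taken from the paper.

```python
# Hypothetical gait selection for a Tribot-like robot. Gait names come from
# the article; obstacle/terrain inputs and thresholds are invented.
def choose_gait(obstacle_height_cm: float, terrain: str) -> str:
    if obstacle_height_cm > 10:
        return "somersault jump"    # flip over obstacles too tall to hop
    if obstacle_height_cm > 0:
        return "vertical jump"      # good for up to ~14 cm of height
    if terrain == "textured":
        return "flic-flac walking"  # short hops across rough ground
    if terrain == "flat":
        return "crawling"
    return "horizontal jump"        # cover open ground, ~23 cm per jump
```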
Here’s the robot in action:
Tribot’s maximum vertical jump is 14 cm (2.5 times its height), and horizontally it can jump about 23 cm (almost 4 times its length). Tribot is actually quite efficient in these movements, with a cost of transport much lower than that of similarly sized robots and on par with insects themselves.
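For context, cost of transport is the standard dimensionless efficiency measure here: energy spent per unit of body weight per unit of distance traveled. A quick sketch of the calculation (the example numbers are placeholders, not measured Tribot values):

```python
def cost_of_transport(energy_j: float, mass_kg: float, distance_m: float,
                      g: float = 9.81) -> float:
    """Dimensionless cost of transport: E / (m * g * d). Lower is better."""
    return energy_j / (mass_kg * g * distance_m)

# Placeholder numbers: a 10-gram robot spending 0.2 joule to travel 1 meter.
print(cost_of_transport(energy_j=0.2, mass_kg=0.010, distance_m=1.0))  # ~2.0
```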
Working together, small groups of Tribots can complete tasks that a single robot couldn’t do alone. One example is pushing a heavy object a set distance. It turns out that you need five Tribots for this task—a leader robot, two worker robots, a monitor robot to measure the distance that the object has been pushed, and then a messenger robot to relay communications around the obstacle.
Image: EPFL
Five Tribots collaborate to move an object to a desired position, using coordination between a leader, two workers, a monitor, and a messenger robot. The leader orders the two worker robots to push the object while the monitor measures the relative position of the object. As the object blocks the two-way link between the leader and the monitor, the messenger maintains the communication link.
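Here’s a hypothetical sketch of a single step of that coordination, just to make the roles concrete; the message format and decision logic are invented, not taken from the paper.

```python
# One hypothetical step of the five-robot pushing task. Role names follow
# the article; message format and logic are invented for illustration.
def push_task_step(object_pos_m: float, goal_pos_m: float) -> dict:
    distance_left = goal_pos_m - object_pos_m
    report = {"from": "monitor", "distance_left_m": distance_left}
    # The object blocks the direct leader-monitor link, so the messenger relays.
    relayed = {**report, "via": "messenger"}
    command = "push" if relayed["distance_left_m"] > 0 else "stop"
    return {"from": "leader", "to": ["worker_1", "worker_2"], "cmd": command}

print(push_task_step(object_pos_m=0.3, goal_pos_m=1.0))  # leader orders "push"
```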
The researchers acknowledge that the current version of the hardware is limited in pretty much every way (mobility, sensing, and computation), but it does a reasonable job of demonstrating what’s possible with the concept. The plan going forward is to automate fabrication in order to “enable on-demand, ‘push-button-manufactured’” robots.
“Designing minimal and scalable insect-inspired multi-locomotion millirobots,” by Zhenishbek Zhakypov, Kazuaki Mori, Koh Hosoda, and Jamie Paik from EPFL and Osaka University, is published in the current issue of Nature.
[ RRL ] via [ EPFL ]
Robot Made of Clay Can Sculpt Its Own ...
We’re very familiar with a wide variety of transforming robots—whether for submarines or drones, transformation is a way of making a single robot adaptable to different environments or tasks. Usually, these robots are restricted to a discrete number of configurations—perhaps two or three different forms—because of the constraints imposed by the rigid structures that robots are typically made of.
Soft robotics has the potential to change all this, with robots that don’t have fixed forms but instead can transform themselves into whatever shape will enable them to do what they need to do. At ICRA in Montreal earlier this year, researchers from Yale University demonstrated a creative approach toward a transforming robot powered by string and air, with a body made primarily out of clay.
Photo: Evan Ackerman
The robot is actuated by two different kinds of “skin,” one layered on top of another. There’s a locomotion skin, made of a pattern of pneumatic bladders that can roll the robot forward or backward when the bladders are inflated sequentially. On top of that is the morphing skin, which is cable-driven, and can sculpt the underlying material into a variety of shapes, including spheres, cylinders, and dumbbells. The robot itself consists of both of those skins wrapped around a chunk of clay, with the actuators driven by offboard power and control. Here it is in action:
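As for how that sequential inflation produces rolling, here’s a rough sketch; the bladder interface, count, and timing are invented, not the Yale implementation.

```python
import time

def roll(bladders, forward=True, cycles=3, dwell_s=0.5):
    """Inflate pneumatic bladders one after another so the body keeps
    tipping onto its next face, producing a rolling gait. Reversing the
    sequence rolls the robot the other way. (Hypothetical actuator API.)"""
    indices = list(range(len(bladders)))
    if not forward:
        indices.reverse()
    for _ in range(cycles):
        for i in indices:
            bladders[i].inflate()  # assumed actuator method
            time.sleep(dwell_s)    # wait for the body to tip forward
            bladders[i].deflate()
```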
The Yale researchers have been experimenting with morphing robots that use foams and tensegrity structures for their bodies, but that stuff provides a “restoring force,” springing back into its original shape once the actuation stops. Clay is different because it holds whatever shape it’s formed into, making the robot more energy efficient. And if the dumbbell shape stops being useful, the morphing layer can just squeeze it back into a cylinder or a sphere.
While this robot and the sample transformation shown in the video are relatively simple, the researchers suggest some ways in which a more complex version could be used in the future:
Photo: IEEE Xplore
This robot’s morphing skin sculpts its clay body into different shapes.
Applications where morphing and locomotion might serve as complementary functions are abundant. For the example skins presented in this work, a search-and-rescue operation could use the clay as a medium to hold a payload such as sensors or transmitters. More broadly, applications include resource-limited conditions where supply chains for materiel are sparse. For example, the morphing sequence shown in Fig. 4 [above] could be used to transform from a rolling sphere to a pseudo-jointed robotic arm. With such a morphing system, it would be possible to robotically morph matter into different forms to perform different functions.
Read this article for free on IEEE Xplore until 5 September 2019
“Morphing Robots Using Robotic Skins That Sculpt Clay,” by Dylan S. Shah, Michelle C. Yuen, Liana G. Tilton, Ellen J. Yang, and Rebecca Kramer-Bottiglio from Yale University, was presented at ICRA 2019 in Montreal.
[ Yale Faboratory ]