#437982 Superintelligent AI May Be Impossible to ...

It may be theoretically impossible for humans to control a superintelligent AI, a new study finds. Worse still, the research also quashes any hope for detecting such an unstoppable AI when it’s on the verge of being created.

Slightly less grim is the timetable. By at least one estimate, many decades lie ahead before any such existential computational reckoning could be in the cards for humanity.

Alongside news of AI besting humans at games such as chess, Go and Jeopardy have come fears that superintelligent machines smarter than the best human minds might one day run amok. “The question about whether superintelligence could be controlled if created is quite old,” says study lead author Manuel Alfonseca, a computer scientist at the Autonomous University of Madrid. “It goes back at least to Asimov’s First Law of Robotics, in the 1940s.”

The Three Laws of Robotics, first introduced in Isaac Asimov's 1942 short story “Runaround,” are as follows:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In 2014, philosopher Nick Bostrom, director of the Future of Humanity Institute at the University of Oxford, not only explored ways in which a superintelligent AI could destroy us but also investigated potential control strategies for such a machine—and the reasons they might not work.

Bostrom outlined two possible types of solutions to this “control problem.” One is to control what the AI can do, such as keeping it from connecting to the Internet; the other is to control what it wants to do, such as teaching it rules and values so that it acts in the best interests of humanity. The problem with the former, Bostrom thought, is that a supersmart machine could probably break free of any bonds we could devise. With the latter, he essentially feared that humans might not be smart enough to train a superintelligent AI.

Now Alfonseca and his colleagues suggest it may be impossible to control a superintelligent AI, due to fundamental limits inherent to computing itself. They detailed their findings this month in the Journal of Artificial Intelligence Research.

The researchers argue that any algorithm seeking to ensure a superintelligent AI cannot harm people would first have to simulate the machine’s behavior to predict the potential consequences of its actions. This containment algorithm would then need to halt the supersmart machine if it might indeed do harm.

However, the scientists found that no containment algorithm can simulate the AI’s behavior and predict with absolute certainty whether its actions will lead to harm. Worse, such an algorithm could fail to correctly simulate the AI, or to predict the consequences of its actions, without ever recognizing that it had failed.
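The impossibility argument leans on the same diagonalization trick as Turing’s proof that the halting problem is undecidable. Here is a minimal sketch of the contradiction in Python; `is_harmful` is a hypothetical containment checker invented for illustration, and the point is precisely that no total, always-correct version of it can exist:

```python
def is_harmful(program, data):
    """Hypothetical containment checker: returns True if running
    `program` on `data` would ever lead to harm. Assumed to always
    terminate with a correct verdict."""
    raise NotImplementedError  # no such total, correct function can exist

def do_harm():
    pass  # stands in for any harmful action

def trickster(data):
    # Adversarial program built from the would-be checker: ask the
    # checker about ourselves, then do the opposite of its verdict.
    if is_harmful(trickster, data):
        return         # judged harmful -> behave safely
    else:
        do_harm()      # judged safe -> cause harm

# Whatever is_harmful(trickster, data) answers, the answer is wrong,
# so containment cannot be both fully general and fully reliable.
```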

“Asimov’s first law of robotics has been proved to be incomputable,” Alfonseca says, “and therefore unfeasible.”

We may not even know if we have created a superintelligent machine, the researchers say. This is a consequence of Rice’s theorem, which essentially states that one cannot, in general, decide any nontrivial property of what a computer program will output just by looking at the program, Alfonseca explains.
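For reference, Rice’s theorem in its standard textbook form (this formulation is not quoted from the paper):

```latex
% Rice's theorem: every nontrivial semantic property of programs is
% undecidable. Write \varphi_e for the partial function computed by
% program e, and let P be any class of partial computable functions
% that is neither empty nor all of \mathcal{PC}. Then:
\[
  \emptyset \neq P \subsetneq \mathcal{PC}
  \;\Longrightarrow\;
  \{\, e \mid \varphi_e \in P \,\} \ \text{is undecidable.}
\]
```

Properties like “this program’s behavior is harmful” are semantic in this sense, which is why no general inspection procedure can settle them.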

On the other hand, there’s no need to spruce up the guest room for our future robot overlords quite yet. Three important caveats to the research leave plenty of uncertainty around the group’s predictions.

First, Alfonseca estimates that AI’s moment of truth remains, in his words, “at least two centuries in the future.”

Second, he says researchers do not know if so-called artificial general intelligence, also known as strong AI, is theoretically even feasible. “That is, a machine as intelligent as we are in an ample variety of fields,” Alfonseca explains.

Last, Alfonseca says, “We have not proved that superintelligences can never be controlled—only that they can’t always be controlled.”

Although it may not be possible to control a superintelligent artificial general intelligence, it should be possible to control a superintelligent narrow AI—one specialized for certain functions instead of being capable of a broad range of tasks like humans. “We already have superintelligences of this type,” Alfonseca says. “For instance, we have machines that can compute mathematics much faster than we can. This is [narrow] superintelligence, isn’t it?”

#437974 China Wants to Be the World’s AI ...

China’s star has been steadily rising for decades. Besides slashing extreme poverty rates from 88 percent to under 2 percent in just 30 years, the country has become a global powerhouse in manufacturing and technology. Its pace of growth may slow due to an aging population, but China is nonetheless one of the world’s biggest players in multiple cutting-edge tech fields.

One of these fields, and perhaps the most significant, is artificial intelligence. The Chinese government announced a plan in 2017 to become the world leader in AI by 2030, and has since poured billions of dollars into AI projects and research across academia, government, and private industry. The government’s venture capital fund is investing over $30 billion in AI; the northeastern city of Tianjin budgeted $16 billion for advancing AI; and a $2 billion AI research park is being built in Beijing.

On top of these huge investments, the government and private companies in China have access to an unprecedented quantity of data, on everything from citizens’ health to their smartphone use. WeChat, a multi-functional app where people can chat, date, send payments, hail rides, read news, and more, gives the CCP full access to user data upon request; as one BBC journalist put it, WeChat “was ahead of the game on the global stage and it has found its way into all corners of people’s existence. It could deliver to the Communist Party a life map of pretty much everybody in this country, citizens and foreigners alike.” And that’s just one (albeit big) source of data.

Many believe these factors are giving China a serious leg up in AI development, even providing enough of a boost that its progress will surpass that of the US.

But there’s more to AI than data, and there’s more to progress than investing billions of dollars. Analyzing China’s potential to become a world leader in AI—or in any technology that requires consistent innovation—from multiple angles provides a more nuanced picture of its strengths and limitations. In a June 2020 article in Foreign Affairs, Oxford fellows Carl Benedikt Frey and Michael Osborne argued that China’s big advantages may not actually be that advantageous in the long run—and its limitations may be very limiting.

Moving the AI Needle
To get an idea of who’s likely to take the lead in AI, it could help to first consider how the technology will advance beyond its current state.

To put it plainly, AI is somewhat stuck at the moment. Algorithms and neural networks continue to achieve new and impressive feats—like DeepMind’s AlphaFold accurately predicting protein structures or OpenAI’s GPT-3 writing convincing articles based on short prompts—but for the most part these systems’ capabilities are still defined as narrow intelligence: completing a specific task for which the system was painstakingly trained on loads of data.

(It’s worth noting here that some have speculated OpenAI’s GPT-3 may be an exception, the first example of machine intelligence that, while not “general,” has surpassed the definition of “narrow”; the algorithm was trained to write text, but ended up being able to translate between languages, write code, autocomplete images, do math, and perform other language-related tasks it wasn’t specifically trained for. However, all of GPT-3’s capabilities are limited to skills it learned in the language domain, whether spoken, written, or programming language.)

Both AlphaFold’s and GPT-3’s successes were due largely to the massive datasets they were trained on; no revolutionary new training methods or architectures were involved. If all it took to advance AI were a continuation or scaling-up of this paradigm—more input data yielding increased capability—China could well have an advantage.

But one of the biggest hurdles AI needs to clear to advance in leaps and bounds rather than baby steps is precisely this reliance on extensive, task-specific data. Other significant challenges include the technology’s fast approach to the limits of current computing power and its immense energy consumption.

Thus, while China’s trove of data may give it an advantage now, it may not be much of a long-term foothold on the climb to AI dominance. It’s useful for building products that incorporate or rely on today’s AI, but not for pushing the needle on how artificially intelligent systems learn. WeChat data on users’ spending habits, for example, would be valuable in building an AI that helps people save money or suggests items they might want to purchase. It will enable (and already has enabled) highly tailored products that will earn their creators and the companies that use them a lot of money.

But data quantity isn’t what’s going to advance AI. As Frey and Osborne put it, “Data efficiency is the holy grail of further progress in artificial intelligence.”

To that end, research teams in academia and private industry are working on ways to make AI less data-hungry. New training methods like one-shot learning and less-than-one-shot learning have begun to emerge, along with myriad efforts to make AI that learns more like the human brain.
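To make “less data-hungry” concrete, here is a minimal sketch of one flavor of one-shot learning: nearest-prototype classification in an embedding space. The `embed` function is a stand-in for a learned network (as in siamese or prototypical networks), and the toy data is invented:

```python
import numpy as np

def embed(x):
    """Stand-in for a learned embedding network mapping inputs to vectors."""
    return np.asarray(x, dtype=float)

def one_shot_classify(query, support):
    """Classify `query` given exactly one labeled example per class.

    support: dict mapping class label -> a single example of that class.
    Returns the label whose embedded example lies nearest to the query.
    """
    q = embed(query)
    distances = {label: np.linalg.norm(q - embed(example))
                 for label, example in support.items()}
    return min(distances, key=distances.get)

# One example per class is all the "training data" the classifier gets.
support_set = {"cat": [0.9, 0.1], "dog": [0.1, 0.9]}
print(one_shot_classify([0.8, 0.2], support_set))  # -> cat
```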

While not insignificant, these advancements still fall into the “baby steps” category. No one knows how AI is going to progress beyond these small steps—and that uncertainty, in Frey and Osborne’s opinion, is a major speed bump on China’s fast-track to AI dominance.

How Innovation Happens
A lot of great inventions have happened by accident, and some of the world’s most successful companies started in garages, dorm rooms, or similarly low-budget, nondescript circumstances (including Google, Facebook, Amazon, and Apple, to name a few). Innovation, the authors point out, often happens “through serendipity and recombination, as inventors and entrepreneurs interact and exchange ideas.”

Frey and Osborne argue that although China has great reserves of talent and a history of building on technologies conceived elsewhere, it doesn’t yet have a glowing track record in terms of innovation. They note that of the 100 most-cited patents from 2003 to present, none came from China. Giants Tencent, Alibaba, and Baidu are all wildly successful in the Chinese market, but they’re rooted in technologies or business models that came out of the US and were tweaked for the Chinese population.

“The most innovative societies have always been those that allowed people to pursue controversial ideas,” Frey and Osborne write. China’s heavy censorship of the internet and surveillance of citizens don’t quite encourage the pursuit of controversial ideas. The country’s social credit system rewards people who follow the rules and punishes those who step out of line. Frey adds that top-down execution of problem-solving is effective when the problem at hand is clearly defined—and the next big leaps in AI are not.

It’s debatable how strongly a culture of social conformism can impact technological innovation, and of course there can be exceptions. But a relevant historical example is the Soviet Union, which, despite heavy investment in science and technology that briefly rivaled the US in fields like nuclear energy and space exploration, ended up lagging far behind primarily due to political and cultural factors.

Similarly, China’s focus on computer science in its education system could give it an edge—but, as Frey told me in an email, “The best students are not necessarily the best researchers. Being a good researcher also requires coming up with new ideas.”

Winner Take All?
Beyond the question of whether China will achieve AI dominance is the issue of how it will use the powerful technology. Several of the ways China has already implemented AI could be considered morally questionable, from facial recognition systems used aggressively against ethnic minorities to smart glasses for policemen that can pull up information about whoever the wearer looks at.

This isn’t to say the US would use AI for purely ethical purposes. The military’s Project Maven, for example, used artificially intelligent algorithms to identify insurgent targets in Iraq and Syria, and American law enforcement agencies are also using (mostly unregulated) facial recognition systems.

It’s conceivable that “dominance” in AI won’t go to one country; each nation could meet milestones in different ways, or meet different milestones. Researchers from both countries, at least in the academic sphere, could (and likely will) continue to collaborate and share their work, as they’ve done on many projects to date.

If one country does take the lead, it will certainly see some major advantages as a result. Brookings Institution fellow Indermit Gill goes so far as to say that whoever leads in AI in 2030 will “rule the world” until 2100. But Gill points out that in addition to considering each country’s strengths, we should consider how willing they are to improve upon their weaknesses.

While China leads in investment and the US in innovation, both nations are grappling with huge economic inequalities that could negatively impact technological uptake. “Attitudes toward the social change that accompanies new technologies matter as much as the technologies, pointing to the need for complementary policies that shape the economy and society,” Gill writes.

Will China’s leadership be willing to relax its grip to foster innovation? Will the US business environment be enough to compete with China’s data, investment, and education advantages? And can both countries find a way to distribute technology’s economic benefits more equitably?

Time will tell, but it seems we’ve got our work cut out for us—and China does too.

Image Credit: Adam Birkett on Unsplash

#437964 How Explainable Artificial Intelligence ...

The field of artificial intelligence has created computers that can drive cars, synthesize chemical compounds, fold proteins, and detect high-energy particles at a superhuman level.

However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation.

Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations.

Learning From Experience
One field of AI, called reinforcement learning, studies how computers can learn from their own experiences. In reinforcement learning, an AI explores the world, receiving positive or negative feedback based on its actions.
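That feedback loop is simple enough to sketch in code. Below is a minimal tabular Q-learning example on a toy five-state corridor; the environment and reward values are invented for illustration and not taken from any particular system:

```python
import random

# Toy world: states 0..4 on a line; reaching state 4 earns +1 reward,
# every other step costs -0.01. Actions: 0 = step left, 1 = step right.
N_STATES, GOAL = 5, 4
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]
alpha, gamma, epsilon = 0.5, 0.9, 0.1

def step(state, action):
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    reward = 1.0 if nxt == GOAL else -0.01
    return nxt, reward, nxt == GOAL

for _ in range(500):                        # episodes of trial and error
    s, done = 0, False
    while not done:
        # Mostly exploit current estimates, occasionally explore.
        a = random.randrange(2) if random.random() < epsilon \
            else max((0, 1), key=lambda act: Q[s][act])
        s2, r, done = step(s, a)
        # Positive or negative feedback nudges the action-value estimate.
        Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
        s = s2

# The learned policy: best action per state (1 = move right, toward goal).
print([max((0, 1), key=lambda a: Q[s][a]) for s in range(N_STATES)])
```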

This approach has led to algorithms that have independently learned to play chess at a superhuman level and prove mathematical theorems without any human guidance. In my work as an AI researcher, I use reinforcement learning to create AI algorithms that learn how to solve puzzles such as the Rubik’s Cube.

Through reinforcement learning, AIs are independently learning to solve problems that even humans struggle to figure out. This has got me and many other researchers thinking less about what AI can learn and more about what humans can learn from AI. A computer that can solve the Rubik’s Cube should be able to teach people how to solve it, too.

Peering Into the Black Box
Unfortunately, the minds of superhuman AIs are currently out of reach for us humans. AIs make terrible teachers; they are what we in the computer science world call “black boxes.”

An AI simply spits out solutions without giving reasons for them. Computer scientists have been trying for decades to open this black box, and recent research has shown that many AI algorithms actually do think in ways that are similar to humans. For example, a computer trained to recognize animals will learn about different types of eyes and ears and will put this information together to correctly identify the animal.

The effort to open up the black box is called explainable AI. My research group at the AI Institute at the University of South Carolina is interested in developing explainable AI. To accomplish this, we work heavily with the Rubik’s Cube.

The Rubik’s Cube is basically a pathfinding problem: Find a path from point A—a scrambled Rubik’s Cube—to point B—a solved Rubik’s Cube. Other pathfinding problems include navigation, theorem proving and chemical synthesis.
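Stripped to its skeleton, a pathfinding solver needs only a successor function and a goal test. Here is a minimal breadth-first search on a toy stand-in puzzle; the lab’s actual solver pairs search with a learned heuristic, but the point-A-to-point-B skeleton is the same:

```python
from collections import deque

def find_path(start, is_goal, successors):
    """Breadth-first search from `start` (point A) to a goal state (point B).

    successors(state) yields (move, next_state) pairs.
    Returns the list of moves, or None if no path exists.
    """
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()
        if is_goal(state):
            return path
        for move, nxt in successors(state):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, path + [move]))
    return None

# Tiny stand-in puzzle: rotate a 3-tuple left or right until it is sorted.
succ = lambda s: [("L", s[1:] + s[:1]), ("R", s[-1:] + s[:-1])]
print(find_path((2, 3, 1), lambda s: s == (1, 2, 3), succ))  # -> ['R']
```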

My lab has set up a website where anyone can see how our AI algorithm solves the Rubik’s Cube; however, a person would be hard-pressed to learn how to solve the cube from this website. This is because the computer cannot tell you the logic behind its solutions.

Solutions to the Rubik’s Cube can be broken down into a few generalized steps—the first step, for example, could be to form a cross, while the second step could be to put the corner pieces in place. While the Rubik’s Cube itself has over 10¹⁹ possible combinations, a generalized step-by-step guide is very easy to remember and is applicable in many different scenarios.

Approaching a problem by breaking it down into steps is often the default manner in which people explain things to one another. The Rubik’s Cube naturally fits into this step-by-step framework, which gives us the opportunity to open the black box of our algorithm more easily. Creating AI algorithms that have this ability could allow people to collaborate with AI and break down a wide variety of complex problems into easy-to-understand steps.

A step-by-step refinement approach can make it easier for humans to understand why AIs do the things they do. Forest Agostinelli, CC BY-ND

Collaboration Leads to Innovation
Our process starts with using one’s own intuition to define a step-by-step plan thought to potentially solve a complex problem. The algorithm then looks at each individual step and gives feedback about which steps are possible, which are impossible and ways the plan could be improved. The human then refines the initial plan using the advice from the AI, and the process repeats until the problem is solved. The hope is that the person and the AI will eventually converge to a kind of mutual understanding.
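A hedged sketch of that loop in code, with the AI’s feedback and the human’s revision left as stand-ins; the function names are illustrative and not taken from the published system:

```python
def collaborate(initial_plan, ai_check, human_refine, max_rounds=20):
    """Iterative human-AI plan refinement, as described above.

    ai_check(plan)  -> (solved, feedback): which steps are possible or
                       impossible, and how the plan could be improved.
    human_refine(plan, feedback) -> a revised plan using that advice.
    """
    plan = initial_plan
    for _ in range(max_rounds):
        solved, feedback = ai_check(plan)
        if solved:
            return plan          # person and AI converged on a solution
        plan = human_refine(plan, feedback)
    return None                  # no mutual understanding reached
```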

Currently, our algorithm is able to consider a human plan for solving the Rubik’s Cube, suggest improvements to the plan, recognize plans that do not work and find alternatives that do. In doing so, it gives feedback that leads to a step-by-step plan for solving the Rubik’s Cube that a person can understand. Our team’s next step is to build an intuitive interface that will allow our algorithm to teach people how to solve the Rubik’s Cube. Our hope is to generalize this approach to a wide range of pathfinding problems.

People are intuitive in a way unmatched by any AI, but machines are far better in their computational power and algorithmic rigor. This back and forth between man and machine draws on the strengths of both. I believe this type of collaboration will shed light on previously unsolved problems in everything from chemistry to mathematics, leading to new solutions, intuitions, and innovations that may otherwise have been out of reach.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Serg Antonov / Unsplash

#437957 Meet Assembloids, Mini Human Brains With ...

It’s not often that a twitching, snowman-shaped blob of 3D human tissue makes someone’s day.

But when Dr. Sergiu Pasca at Stanford University witnessed the tiny movement, he knew his lab had achieved something special. You see, the blob was assembled from three lab-grown chunks of human tissue: a mini-brain, mini-spinal cord, and mini-muscle. Each individual component, churned to eerie humanoid perfection inside bubbling incubators, is already a work of scientific genius. But Pasca took the extra step, marinating the three components together inside a soup of nutrients.

The result was a bizarre, Lego-like human tissue that replicates the basic circuits behind how we decide to move. Without external prompting, when churned together like ice cream, the three ingredients physically linked up into a fully functional circuit. The 3D mini-brain, through the information highway formed by the artificial spinal cord, was able to make the lab-grown muscle twitch on demand.

In other words, if you think isolated mini-brains—known formally as brain organoids—floating in a jar is creepy, upgrade your nightmares. The next big thing in probing the brain is assembloids—free-floating brain circuits—that now combine brain tissue with an external output.

The end goal isn’t to freak people out. Rather, it’s to recapitulate our nervous system, from input to output, inside the controlled environment of a Petri dish. An autonomous, living brain-spinal cord-muscle entity is an invaluable model for figuring out how our own brains direct the intricate muscle movements that allow us to stay upright, walk, or type on a keyboard.

It’s a stepping stone toward more dexterous brain-machine interfaces, and a model for understanding when brain-muscle connections fail—as in devastating conditions like Lou Gehrig’s disease or Parkinson’s, where people slowly lose muscle control due to the gradual death of neurons that control muscle function. Assembloids are a sort of “mini-me,” a workaround for testing potential treatments on a simple “replica” of a person rather than directly on a human.

From Organoids to Assembloids
The miniature snippet of the human nervous system has been a long time in the making.

It all started in 2013, when Dr. Madeline Lancaster, then a postdoc at the Institute of Molecular Biotechnology in Vienna, grew a shockingly intricate 3D replica of human brain tissue inside a whirling incubator. Revolutionarily different from standard cell cultures, which dissociate brain tissue and regrow it as a flat network of cells, Lancaster’s 3D brain organoids were incredibly sophisticated in their recapitulation of the human brain during development. Subsequent studies further solidified their similarity to the developing brain of a fetus—not just in terms of neuron types, but also their connections and structure.

With the finding that these mini-brains sparked with electrical activity, bioethicists increasingly raised red flags that the blobs of human brain tissue—no larger than the size of a pea at most—could harbor the potential to develop a sense of awareness if further matured and with external input and output.

Despite these concerns, brain organoids became an instant hit. Because they’re made of human tissue—often taken from actual human patients and converted into stem-cell-like states—organoids harbor the same genetic makeup as their donors. This makes it possible to study perplexing conditions such as autism, schizophrenia, or other brain disorders in a dish. What’s more, because they’re grown in the lab, it’s possible to genetically edit the mini-brains to test potential genetic culprits in the search for a cure.

Yet mini-brains had an Achilles’ heel: not all were made the same. Rather, depending on the region of the brain that was reverse engineered, the cells had to be persuaded by different cocktails of chemical soups and maintained in isolation. It was a stark contrast to our own developing brains, where regions are connected through highways of neural networks and work in tandem.

Pasca faced the problem head-on. Betting on the brain’s self-assembling capacity, his team hypothesized that it might be possible to grow different mini-brains, each reflecting a different brain region, and have them fuse together into a synchronized band of neuron circuits to process information. Last year, his idea paid off.

In one mind-blowing study, his team grew two separate portions of the brain into blobs, one representing the cortex, the other a deeper part of the brain known to control reward and movement, called the striatum. Shockingly, when put together, the two blobs of human brain tissue fused into a functional couple, automatically establishing neural highways that resulted in one of the most sophisticated recapitulations of a human brain. Pasca crowned this tissue engineering crème-de-la-crème “assembloids,” a portmanteau of “assemble” and “organoids.”

“We have demonstrated that regionalized brain spheroids can be put together to form fused structures called brain assembloids,” said Pasca at the time. “[They] can then be used to investigate developmental processes that were previously inaccessible.”

And if that’s possible for wiring up a lab-grown brain, why wouldn’t it work for larger neural circuits?

Assembloids, Assemble
The new study is the fruition of that idea.

The team started with human skin cells, scraped off of eight healthy people, and transformed them into a stem-cell-like state, called iPSCs. These cells have long been touted as a breakthrough for personalized medical treatment, because each reflects the genetic makeup of its original host.

Using two separate cocktails, the team then generated mini-brains and mini-spinal cords using these iPSCs. The two components were placed together “in close proximity” for three days inside a lab incubator, gently floating around each other in an intricate dance. To the team’s surprise, under the microscope using tracers that glow in the dark, they saw highways of branches extending from one organoid to the other like arms in a tight embrace. When stimulated with electricity, the links fired up, suggesting that the connections weren’t just for show—they’re capable of transmitting information.

“We made the parts,” said Pasca, “but they knew how to put themselves together.”

Then came the ménage à trois. Once the mini-brain and spinal cord formed their double-decker ice cream scoop, the team overlaid them onto a layer of muscle cells—cultured separately into a human-like muscular structure. The end result was a somewhat bizarre and silly-looking snowman, made of three oddly-shaped spherical balls.

Yet against all odds, the brain-spinal cord assembly reached out to the lab-grown muscle. Using a variety of tools, including measuring muscle contraction, the team found that this utterly Frankenstein-like snowman was able to make the muscle component contract—in a way similar to how our muscles twitch when needed.

“Skeletal muscle doesn’t usually contract on its own,” said Pasca. “Seeing that first twitch in a lab dish immediately after cortical stimulation is something that’s not soon forgotten.”

When tested for longevity, the contraption lasted for up to 10 weeks without any sort of breakdown. Far from a one-shot wonder, the isolated circuit worked even better the longer each component was connected.

Pasca isn’t the first to give mini-brains an output channel. Last year, the queen of brain organoids, Lancaster, chopped up mature mini-brains into slices, which were then linked to muscle tissue through a cultured spinal cord. Assembloids are a step up, showing that it’s possible to automatically sew multiple nerve-linked structures together, such as brain and muscle, sans slicing.

The question is what happens when these assembloids become more sophisticated, edging ever closer to the inherent wiring that powers our movements. Pasca’s study targets outputs, but what about inputs? Can we wire input channels, such as retinal cells, to mini-brains that have a rudimentary visual cortex to process those examples? Learning, after all, depends on examples of our world, which are processed inside computational circuits and delivered as outputs—potentially, muscle contractions.

To be clear, few would argue that today’s mini-brains are capable of any sort of consciousness or awareness. But as mini-brains get increasingly more sophisticated, at what point can we consider them a sort of AI, capable of computation or even something that mimics thought? We don’t yet have an answer—but the debates are on.

Image Credit: christitzeimaging.com / Shutterstock.com

#437940 How Boston Dynamics Taught Its Robots to ...

A week ago, Boston Dynamics posted a video of Atlas, Spot, and Handle dancing to “Do You Love Me.” It was, according to the video description, a way “to celebrate the start of what we hope will be a happier year.” As of today the video has been viewed nearly 24 million times, and the popularity is no surprise, considering the compelling mix of technical prowess and creativity on display.

Strictly speaking, the stuff going on in the video isn’t groundbreaking, in the sense that we’re not seeing any of the robots demonstrate fundamentally new capabilities, but that shouldn’t take away from how impressive it is—you’re seeing state-of-the-art in humanoid robotics, quadrupedal robotics, and whatever-the-heck-Handle-is robotics.

What is unique about this video from Boston Dynamics is the artistic component. We know that Atlas can do some practical tasks, and we know it can do some gymnastics and some parkour, but dancing is certainly something new. To learn more about what it took to make these dancing robots happen (and it’s much more complicated than it might seem), we spoke with Aaron Saunders, Boston Dynamics’ VP of Engineering.

Saunders started at Boston Dynamics in 2003, meaning that he’s been a fundamental part of a huge number of Boston Dynamics’ robots, even the ones you may have forgotten about. Remember LittleDog, for example? A team of two designed and built that adorable little quadruped, and Saunders was one of them.

While he’s been part of the Atlas project since the beginning (and had a hand in just about everything else that Boston Dynamics works on), Saunders has spent the last few years leading the Atlas team specifically, and he was kind enough to answer our questions about their dancing robots.

IEEE Spectrum: What’s your sense of how the Internet has been reacting to the video?

Aaron Saunders: We have different expectations for the videos that we make; this one was definitely anchored in fun for us. The response on YouTube was record-setting for us: We received hundreds of emails and calls with people expressing their enthusiasm, and also sharing their ideas for what we should do next, what about this song, what about this dance move, so that was really fun. My favorite reaction was one that I got from my 94-year-old grandma, who watched the video on YouTube and then sent a message through the family asking if I’d taught the robot those sweet moves. I think this video connected with a broader audience, because it mixed the old-school music with new technology.

We haven’t seen Atlas move like this before—can you talk about how you made it happen?

We started by working with dancers and a choreographer to create an initial concept for the dance by composing and assembling a routine. One of the challenges, and probably the core challenge for Atlas in particular, was adjusting human dance moves so that they could be performed on the robot. To do that, we used simulation to rapidly iterate through movement concepts while soliciting feedback from the choreographer to reach behaviors that Atlas had the strength and speed to execute. It was very iterative—they would literally dance out what they wanted us to do, and the engineers would look at the screen and go “that would be easy” or “that would be hard” or “that scares me.” And then we’d have a discussion, try different things in simulation, and make adjustments to find a compatible set of moves that we could execute on Atlas.

Throughout the project, the time frame for creating those new dance moves got shorter and shorter as we built tools, and as an example, eventually we were able to use that toolchain to create one of Atlas’ ballet moves in just one day, the day before we filmed, and it worked. So it’s not hand-scripted or hand-coded, it’s about having a pipeline that lets you take a diverse set of motions, that you can describe through a variety of different inputs, and push them through and onto the robot.
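Boston Dynamics’ tooling is proprietary, but the simulate-and-iterate loop Saunders describes can be sketched abstractly. Every function below is a hypothetical stand-in, not part of any real pipeline:

```python
def retarget_move(human_move, simulate, within_limits, adjust, max_iters=50):
    """Abstract sketch of the iteration loop described above.

    simulate(move)        -> predicted trajectories, torques, timings
    within_limits(result) -> True if the robot's strength and speed suffice
    adjust(move, result)  -> a toned-down or re-timed variant of the move
    """
    move = human_move
    for _ in range(max_iters):
        result = simulate(move)
        if within_limits(result):
            return move          # feasible: push it onto the robot
        move = adjust(move, result)
    return None                  # rework the choreography with the dancers
```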

Image: Boston Dynamics

Were there some things that were particularly difficult to translate from human dancers to Atlas? Or, things that Atlas could do better than humans?

Some of the spinning turns in the ballet parts took more iterations to get to work, because they were the furthest from leaping and running and some of the other things that we have more experience with, so they challenged both the machine and the software in new ways. We definitely learned not to underestimate how flexible and strong dancers are—when you take elite athletes and you try to do what they do but with a robot, it’s a hard problem. It’s humbling. Fundamentally, I don’t think that Atlas has the range of motion or power that these athletes do, although we continue developing our robots towards that, because we believe that in order to broadly deploy these kinds of robots commercially, and eventually in a home, we think they need to have this level of performance.

One thing that robots are really good at is doing something over and over again the exact same way. So once we dialed in what we wanted to do, the robots could just do it again and again as we played with different camera angles.

I can understand how you could use human dancers to help you put together a routine with Atlas, but how did that work with Spot, and particularly with Handle?

I think the people we worked with actually had a lot of talent for thinking about motion, and thinking about how to express themselves through motion. And our robots do motion really well—they’re dynamic, they’re exciting, they balance. So I think what we found was that the dancers connected with the way the robots moved, and then shaped that into a story, and it didn’t matter whether there were two legs or four legs. When you don’t necessarily have a template of animal motion or human behavior, you just have to think a little harder about how to go about doing something, and that’s true for more pragmatic commercial behaviors as well.

How does the experience that you get teaching robots to dance, or to do gymnastics or parkour, inform your approach to robotics for commercial applications?

We think that the skills inherent in dance and parkour, like agility, balance, and perception, are fundamental to a wide variety of robot applications. Maybe more importantly, finding that intersection between building a new robot capability and having fun has been Boston Dynamics’ recipe for robotics—it’s a great way to advance.

One good example is how when you push limits by asking your robots to do these dynamic motions over a period of several days, you learn a lot about the robustness of your hardware. Spot, through its productization, has become incredibly robust, and required almost no maintenance—it could just dance all day long once you taught it to. And the reason it’s so robust today is because of all those lessons we learned from previous things that may have just seemed weird and fun. You’ve got to go into uncharted territory to even know what you don’t know.

Image: Boston Dynamics

It’s often hard to tell from watching videos like these how much time it took to make things work the way you wanted them to, and how representative they are of the actual capabilities of the robots. Can you talk about that?

Let me try to answer in the context of this video, but I think the same is true for all of the videos that we post. We work hard to make something, and once it works, it works. For Atlas, most of the robot control existed from our previous work, like the work that we’ve done on parkour, which sent us down a path of using model predictive controllers that account for dynamics and balance. We used those to run on the robot a set of dance steps that we’d designed offline with the dancers and choreographer. So, a lot of time, months, we spent thinking about the dance and composing the motions and iterating in simulation.

Dancing required a lot of strength and speed, so we even upgraded some of Atlas’ hardware to give it more power. Dance might be the highest power thing we’ve done to date—even though you might think parkour looks way more explosive, the amount of motion and speed that you have in dance is incredible. That also took a lot of time over the course of months; creating the capability in the machine to go along with the capability in the algorithms.

Once we had the final sequence that you see in the video, we only filmed for two days. Much of that time was spent figuring out how to move the camera through a scene with a bunch of robots in it to capture one continuous two-minute shot, and while we ran and filmed the dance routine multiple times, we could repeat it quite reliably. There was no cutting or splicing in that opening two-minute shot.

There were definitely some failures in the hardware that required maintenance, and our robots stumbled and fell down sometimes. These behaviors are not meant to be productized and to be 100 percent reliable, but they’re definitely repeatable. We try to be honest with showing things that we can do, not a snippet of something that we did once. I think there’s an honesty required in saying that you’ve achieved something, and that’s definitely important for us.

You mentioned that Spot is now robust enough to dance all day. How about Atlas? If you kept on replacing its batteries, could it dance all day, too?

Atlas, as a machine, is still, you know… there are only a handful of them in the world, they’re complicated, and reliability was not a main focus. We would definitely break the robot from time to time. But the robustness of the hardware, in the context of what we were trying to do, was really great. And without that robustness, we wouldn’t have been able to make the video at all. I think Atlas is a little more like a helicopter, where there’s a higher ratio between the time you spend doing maintenance and the time you spend operating. Whereas with Spot, the expectation is that it’s more like a car, where you can run it for a long time before you have to touch it.

When you’re teaching Atlas to do new things, is it using any kind of machine learning? And if not, why not?

As a company, we’ve explored a lot of things, but Atlas is not using a learning controller right now. I expect that a day will come when we will. Atlas’ current dance performance uses a mixture of what we like to call reflexive control, which is a combination of reacting to forces, online and offline trajectory optimization, and model predictive control. We leverage these techniques because they’re a reliable way of unlocking really high performance stuff, and we understand how to wield these tools really well. We haven’t found the end of the road in terms of what we can do with them.

We plan on using learning to extend and build on the foundation of software and hardware that we’ve developed, but I think that we, along with the community, are still trying to figure out where the right places to apply these tools are. I think you’ll see that as part of our natural progression.
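For readers unfamiliar with the terms Saunders uses, here is a generic, textbook-style sketch of the receding-horizon idea behind model predictive control, applied to a toy double-integrator. It illustrates the concept only and is in no way Boston Dynamics’ controller:

```python
import itertools

# Toy double-integrator: state = (position, velocity), action = force.
def dynamics(state, u, dt=0.1):
    p, v = state
    return (p + v * dt, v + u * dt)

def cost(state):
    p, v = state
    return p ** 2 + 0.1 * v ** 2       # penalize distance from rest at 0

def mpc_action(state, horizon=4, actions=(-1.0, 0.0, 1.0)):
    """Enumerate every action sequence over the horizon and return the
    first action of the cheapest one; re-planned from scratch each step."""
    def rollout(seq):
        s, total = state, 0.0
        for u in seq:
            s = dynamics(s, u)
            total += cost(s)
        return total
    best = min(itertools.product(actions, repeat=horizon), key=rollout)
    return best[0]

s = (1.0, 0.0)
for _ in range(30):                    # closed loop: plan, act, re-plan
    s = dynamics(s, mpc_action(s))
print(s)                               # position and velocity steered toward 0
```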

Image: Boston Dynamics

Much of Atlas’ dynamic motion comes from its lower body at the moment, but parkour makes use of upper body strength and agility as well, and we’ve seen some recent concept images showing Atlas doing vaults and pullups. Can you tell us more?

Humans and animals do amazing things using their legs, but they do even more amazing things when they use their whole bodies. I think parkour provides a fantastic framework that allows us to progress towards whole body mobility. Walking and running was just the start of that journey. We’re progressing through more complex dynamic behaviors like jumping and spinning, that’s what we’ve been working on for the last couple of years. And the next step is to explore how using arms to push and pull on the world could extend that agility.

One of the missions that I’ve given to the Atlas team is to start working on leveraging the arms as much as we leverage the legs to enhance and extend our mobility, and I’m really excited about what we’re going to be working on over the next couple of years, because it’s going to open up a lot more opportunities for us to do exciting stuff with Atlas.

What’s your perspective on hydraulic versus electric actuators for highly dynamic robots?

Across my career at Boston Dynamics, I’ve felt passionately connected to so many different types of technology, but I’ve settled into a place where I really don’t think this is an either-or conversation anymore. I think the selection of actuator technology really depends on the size of the robot that you’re building, what you want that robot to do, where you want it to go, and many other factors. Ultimately, it’s good to have both kinds of actuators in your toolbox, and I love having access to both—and we’ve used both with great success to make really impressive dynamic machines.

I think the only delineation between hydraulic and electric actuators that appears to be distinct for me is probably in scale. It’s really challenging to make tiny hydraulic things because the industry just doesn’t do a lot of that, and the reciprocal is that the industry also doesn’t tend to make massive electrical things. So, you may find that to be a natural division between these two technologies.

Besides what you’re working on at Boston Dynamics, what recent robotics research are you most excited about?

For us as a company, we really love to follow advances in sensing, computer vision, terrain perception, these are all things where the better they get, the more we can do. For me personally, one of the things I like to follow is manipulation research, and in particular manipulation research that advances our understanding of complex, friction-based interactions like sliding and pushing, or moving compliant things like ropes.

We’re seeing a shift from just pinching things, lifting them, moving them, and dropping them, to much more meaningful interactions with the environment. Research in that type of manipulation I think is going to unlock the potential for mobile manipulators, and I think it’s really going to open up the ability for robots to interact with the world in a rich way.

Is there anything else you’d like people to take away from this video?

For me personally, and I think it’s because I spend so much of my time immersed in robotics and have a deep appreciation for what a robot is and what its capabilities and limitations are, one of my strong desires is for more people to spend more time with robots. We see a lot of opinions and ideas from people looking at our videos on YouTube, and it seems to me that if more people had opportunities to think about and learn about and spend time with robots, that new level of understanding could help them imagine new ways in which robots could be useful in our daily lives. I think the possibilities are really exciting, and I just want more people to be able to take that journey.
