
#436180 Bipedal Robot Cassie Cal Learns to ...

There’s no particular reason why knowing how to juggle would be a useful skill for a robot. Despite this, robots are frequently taught how to juggle things. Blind robots can juggle, humanoid robots can juggle, and even drones can juggle. Why? Because juggling is hard, man! You have to think about a bunch of different things at once, and also do a bunch of different things at once, which this particular human at least finds to be overly stressful. While juggling may not stress robots out, it does require carefully coordinated sensing and computing and actuation, which means that it’s as good a task as any (and a more entertaining task than most) for testing the capabilities of your system.

UC Berkeley’s Cassie Cal robot, which consists of two legs and what could be called a torso if you were feeling charitable, has just learned to juggle by bouncing a ball on what would be her head if she had one of those. The idea is that if Cassie can juggle while balancing at the same time, she’ll be better able to do other things that require dynamic multitasking, too. And if that doesn’t work out, she’ll still be able to join the circus.

Cassie’s juggling is assisted by an external motion capture system that tracks the location of the ball, but otherwise everything is autonomous. Cassie is able to juggle the ball by leaning forwards and backwards, left and right, and moving up and down. She does this while maintaining her own balance, which is the whole point of this research—successfully executing two dynamic behaviors that may sometimes be at odds with one another. The end goal here is not to make a better juggling robot, but rather to explore dynamic multitasking, a skill that robots will need in order to be successful in human environments.

This work is from the Hybrid Robotics Lab at UC Berkeley, led by Koushil Sreenath, and is being done by Katherine Poggensee, Albert Li, Daniel Sotsaikich, Bike Zhang, and Prasanth Kotaru.

For a bit more detail, we spoke with Albert Li via email.

Image: UC Berkeley

UC Berkeley’s Cassie Cal getting ready to juggle.

IEEE Spectrum: What would be involved in getting Cassie to juggle without relying on motion capture?

Albert Li: Our motivation for starting off with motion capture was to first address the control challenge of juggling on a biped without worrying about implementing the perception. We actually do have a ball detector working on a camera, which would mean we wouldn’t have to rely on the motion capture system. However, we need to mount the camera in a way that provides the best upward field of view, and we also have to develop a reliable estimator. The estimator is particularly important because when the ball gets close enough to the camera, we actually can’t track the ball and have to assume our dynamic models describe its motion accurately enough until it bounces back up.
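For readers curious what such an estimator might look like, here is a minimal sketch of the general idea: propagate a ballistic model while the ball is untrackable, and fold camera measurements back in when they return. The class, gain, and numbers below are illustrative assumptions, not the Hybrid Robotics Lab’s actual code.

import numpy as np

GRAVITY = np.array([0.0, 0.0, -9.81])  # m/s^2

class BallEstimator:
    """Toy ball-state estimator: ballistic prediction with measurement blending."""

    def __init__(self, position, velocity):
        self.position = np.asarray(position, dtype=float)
        self.velocity = np.asarray(velocity, dtype=float)

    def predict(self, dt):
        # Used on frames where the ball is too close to the camera to track:
        # assume pure projectile motion until it bounces back into view.
        self.position = self.position + self.velocity * dt + 0.5 * GRAVITY * dt**2
        self.velocity = self.velocity + GRAVITY * dt

    def update(self, measured_position, dt, gain=0.8):
        # Used on frames where the detector returns a ball position:
        # predict forward, then pull the state toward the measurement.
        self.predict(dt)
        innovation = np.asarray(measured_position, dtype=float) - self.position
        self.position += gain * innovation
        self.velocity += gain * innovation / dt  # crude velocity correction

# Example: one tracked frame followed by one occluded frame at 100 Hz.
est = BallEstimator(position=[0.0, 0.0, 1.0], velocity=[0.0, 0.0, 2.0])
est.update([0.01, 0.0, 1.02], dt=0.01)
est.predict(dt=0.01)
print(est.position)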

What keeps Cassie from juggling indefinitely?

There are a few factors that affect how long Cassie can sustain a juggle. While in simulation the paddle exhibits homogeneous properties like its stiffness and damping, in reality every surface has anisotropic contact properties. So, there are parts of the paddle which may be better for juggling than others (and importantly, react differently than modeled). These differences in contact are also exacerbated due to how the paddle is cantilevered when mounted on Cassie. When the ball hits these areas, it leads to a larger than expected error in a juggle. Due to the small size of the paddle, the ball may then just hit the paddle’s edge and end the juggling run. Over a very long run, this is a likely occurrence. Additionally, some large juggling errors could cause Cassie’s feet to slip slightly, which ends up changing the stable standing position over time. Since this version of the controller assumes Cassie is stationary, this change in position eventually leads to poor juggles and failure.
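As a rough illustration of what “homogeneous stiffness and damping” means in a simulated contact model, here is a toy spring-damper force law with a single stiffness and damping constant for the whole paddle. The real paddle, as Li notes, does not behave this uniformly, and the constants below are invented for the example.

def paddle_contact_force(penetration, penetration_rate, k=2000.0, c=15.0):
    # Single stiffness k (N/m) and damping c (N*s/m) for the whole surface,
    # which is exactly the homogeneity assumption that breaks down on hardware.
    if penetration <= 0.0:
        return 0.0  # ball not touching the paddle
    force = k * penetration + c * penetration_rate
    return max(force, 0.0)  # contact can push the ball away, never pull it

print(paddle_contact_force(0.002, 0.5))  # 2 mm penetration at 0.5 m/s -> 11.5 N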

Would Cassie be able to juggle while walking (or hovershoe-ing)?

Walking (and hovershoe-ing) while juggling is a far more challenging problem and is certainly a goal for future research. Some of these challenges include getting the paddle to precise poses to juggle the ball while also moving to avoid any destabilizing effects of stepping incorrectly. The number of juggles per step of walking could also vary and make the mathematics of the problem more challenging. The controller goal is also more involved. While the current goal of the juggling controller is to juggle the ball to a static apex position, with a walking juggling controller, we may instead want to hit the ball forwards and also walk forwards to bounce it, juggle the ball along a particular path, etc. Solving such challenges would be the main thrusts of the follow-up research.

Can you give an example of a practical task that would be made possible by using a controller like this?

Studying juggling means studying contact behavior and leveraging our models of it to achieve a known objective. Juggling could also be used to study predictable post-contact flight behavior. Consider the scenario where a robot is attempting to make a catch, but fails, letting the ball bounce off of its hand, and then recovering the catch. This behavior could also be intentional: It is often easier to first execute a bounce to direct the target and then perform a subsequent action. For example, volleyball players could in principle directly hit a spiked ball back, but almost always bump the ball back up and then return it.

Even beyond this motivating example, the kinds of models we employ to get juggling working are more generally applicable to any task that involves contact, which could include tasks besides bouncing like sliding and rolling. For example, clearing space on a desk by pushing objects to the side may be preferable to individually manipulating each and every object on it.

You mention collaborative juggling or juggling multiple balls—is that something you’ve tried yet? Can you talk a bit more about what you’re working on next?

We haven’t yet started working on collaborative or multi-ball juggling, but that’s also a goal for future work. Juggling multiple balls statically is probably the most reasonable next goal, but presents additional challenges. For instance, you have to encode a notion of juggling urgency (if the second ball isn’t hit hard enough, you have less time to get the first ball up before you get back to the second one).

On the other hand, collaborative human-robot juggling requires a more advanced decision-making framework. To get robust multi-agent juggling, the robot will need to employ some sort of probabilistic model of the expected human behavior (are they likely to move somewhere? Are they trying to catch the ball high or low? Is it safe to hit the ball back?). In general, developing such human models is difficult since humans are fairly unpredictable and often don’t exhibit rational behavior. This will be a focus of future work.

[ Hybrid Robotics Lab ]


#436176 We’re Making Progress in Explainable ...

Machine learning algorithms are starting to exceed human performance in many narrow and specific domains, such as image recognition and certain types of medical diagnoses. They’re also rapidly improving in more complex domains such as generating eerily human-like text. We increasingly rely on machine learning algorithms to make decisions on a wide range of topics, from what we collectively spend billions of hours watching to who gets the job.

But machine learning algorithms cannot explain the decisions they make.
How can we justify putting these systems in charge of decisions that affect people’s lives if we don’t understand how they’re arriving at those decisions?

This desire to get more than raw numbers from machine learning algorithms has led to a renewed focus on explainable AI: algorithms that can make a decision or take an action, and tell you the reasons behind it.

What Makes You Say That?
In some circumstances, you can see a road to explainable AI already. Take OpenAI’s GPT-2 model, or IBM’s Project Debater. Both of these generate text based on a large corpus of training data, and try to make it as relevant as possible to the prompt that’s given. If these models were also able to provide a quick run-down of the top few sources in that corpus of training data they were drawing information from, it might be easier to understand where the “argument” (or poetic essay about unicorns) was coming from.

This is similar to the approach Google is now looking at for its image classifiers. Many algorithms are more sensitive to textures and the relationship between adjacent pixels in an image, rather than recognizing objects by their outlines as humans do. This leads to strange results: some algorithms can happily identify a totally scrambled image of a polar bear, but not a polar bear silhouette.

Previous attempts to make image classifiers explainable relied on significance mapping. In this method, the algorithm would highlight the areas of the image that contributed the most statistical weight to making the decision. This is usually determined by changing groups of pixels in the image and seeing which contribute to the biggest change in the algorithm’s impression of what the image is. For example, if the algorithm is trying to recognize a stop sign, changing the background is unlikely to be as important as changing the sign.
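A bare-bones version of that occlusion-style approach can be sketched in a few lines: blank out one patch of the image at a time and record how much the classifier’s confidence in its original answer drops. The predict function here is a stand-in for any image classifier, not a specific library call, and the patch size and fill value are arbitrary.

import numpy as np

def occlusion_map(image, predict, target_class, patch=16, fill=0.5):
    # predict(image) should return class probabilities; here it is any callable.
    baseline = predict(image)[target_class]
    h, w = image.shape[:2]
    heatmap = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = fill  # blank out one patch
            drop = baseline - predict(occluded)[target_class]
            heatmap[i // patch, j // patch] = drop  # big drop = important region
    return heatmap

# Dummy "classifier" that only looks at the image mean, just to run the sketch.
dummy = lambda img: np.array([img.mean(), 1.0 - img.mean()])
image = np.random.rand(64, 64)
print(occlusion_map(image, dummy, target_class=0).shape)  # (4, 4)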

Google’s new approach changes the way that its algorithm recognizes objects, by examining them at several different resolutions and searching for matches to different “sub-objects” within the main object. You or I might recognize an ambulance from its flashing lights, its tires, and its logo; we might zoom in on the basketball held by an NBA player to deduce their occupation, and so on. By linking the overall categorization of an image to these “concepts,” the algorithm can explain its decision: I categorized this as a cat because of its tail and whiskers.

Even in this experiment, though, the “psychology” of the algorithm in decision-making is counter-intuitive. For example, in the basketball case, the most important factor in making the decision was actually the player’s jerseys rather than the basketball.

Can You Explain What You Don’t Understand?
While it may seem trivial, the conflict here is a fundamental one in approaches to artificial intelligence. Namely, how far can you get with mere statistical associations between huge sets of data, and how much do you need to introduce abstract concepts for real intelligence to arise?

At one end of the spectrum, Good Old-Fashioned AI or GOFAI dreamed up machines that would be entirely based on symbolic logic. The machine would be hard-coded with the concept of a dog, a flower, a car, and so forth, alongside all of the symbolic “rules” which we internalize, allowing us to distinguish between dogs, flowers, and cars. (You can imagine a similar approach to a conversational AI would teach it words and strict grammatical structures from the top down, rather than “learning” languages from statistical associations between letters and words in training data, as GPT-2 broadly does.)

Such a system would be able to explain itself, because it would deal in high-level, human-understandable concepts. The equation is closer to: “ball” + “stitches” + “white” = “baseball”, rather than a set of millions of numbers linking various pathways together. There are elements of GOFAI in Google’s new approach to explaining its image recognition: the new algorithm can recognize objects based on the sub-objects they contain. To do this, it requires at least a rudimentary understanding of what those sub-objects look like, and the rules that link objects to sub-objects, such as “cats have whiskers.”
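That “ball” + “stitches” + “white” = “baseball” equation translates almost directly into code. The toy rules below are invented for illustration, but they show why a GOFAI-style system explains itself for free: the explanation is simply the rule that fired.

# Each label is defined by the concepts it requires; the explanation is the rule.
RULES = {
    "baseball": {"ball", "stitches", "white"},
    "cat": {"whiskers", "tail", "fur"},
    "ambulance": {"flashing lights", "tires", "logo"},
}

def classify(observed_concepts):
    observed = set(observed_concepts)
    for label, required in RULES.items():
        if required <= observed:  # every required concept was detected
            return label, "because it has " + ", ".join(sorted(required))
    return None, "no rule matched"

print(classify({"white", "grass", "ball", "stitches"}))
# ('baseball', 'because it has ball, stitches, white')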

The issue, of course, is the—maybe impossible—labor-intensive task of defining all these symbolic concepts and every conceivable rule that could possibly link them together by hand. The difficulty of creating systems like this, which could handle the “combinatorial explosion” present in reality, helped to lead to the first AI winter.

Meanwhile, neural networks rely on training themselves on vast sets of data. Without the “labeling” of supervised learning, this process might bear no relation to any concepts a human could understand (and therefore be utterly inexplicable).

Somewhere between these two, hope explainable AI enthusiasts, is a happy medium that can crunch colossal amounts of data, giving us all of the benefits that recent, neural-network AI has bestowed, while showing its working in terms that humans can understand.

Image Credit: Seanbatty from Pixabay


#436167 Is it Time for Tech to Stop Moving Fast ...

On Monday, I attended the 2019 Fall Conference of Stanford’s Institute for Human Centered Artificial Intelligence (HAI). That same night I watched the Season 6 opener for the HBO TV show Silicon Valley. And the debates featured in both surrounded the responsibility of tech companies for the societal effects of the technologies they produce. The two events have jumbled together in my mind, perhaps because I was in a bit of a brain fog, thanks to the nasty combination of a head cold and the smoke that descended on Silicon Valley from the northern California wildfires. But perhaps that mixture turned out to be a good thing.

What is clear, in spite of the smoke, is that this issue is something a lot of people are talking about, inside and outside of Silicon Valley (witness the viral video of Rep. Alexandria Ocasio-Cortez (D-NY) grilling Facebook CEO Mark Zuckerberg).

So, to add to that conversation, here’s my HBO Silicon Valley/Stanford HAI conference mashup.

Silicon Valley’s fictional CEO Richard Hendricks, in the opening scene of the episode, tells Congress that Facebook, Google, and Amazon only care about exploiting personal data for profit. He states:

“These companies are kings, and they rule over kingdoms far larger than any nation in history.”

Meanwhile Marietje Schaake, former member of the European Parliament and a fellow at HAI, told the conference audience of 900:

“There is a lot of power in the hands of few actors—Facebook decides who is a news source, Microsoft will run the defense department’s cloud…. I believe we need a deeper debate about which tasks need to stay in the hands of the public.”

Eric Schmidt, former CEO and executive chairman of Google, agreed. He says:

“It is important that we debate now the ethics of what we are doing, and the impact of the technology that we are building.”

Stanford Associate Professor Ge Wang, also speaking at the HAI conference, pointed out:

“‘Doing no harm’ is a vital goal, and it is not easy. But it is different from a proactive goal, to ‘do good.’”

Had Silicon Valley’s Hendricks been there, he would have agreed. He said in the episode:

“Just because it’s successful, doesn’t mean it’s good. Hiroshima was a successful implementation.”

The speakers at the HAI conference discussed the implications of moving fast and breaking things, of putting untested and unregulated technology into the world now that we know that things like public trust and even democracy can be broken.

Google’s Schmidt told the HAI audience:

“I don’t think that everything that is possible should be put into the wild in society. We should answer the question, collectively, how much risk are we willing to take.”

And Silicon Valley denizens real and fictional no longer think it’s OK to just say sorry afterwards. Says Schmidt:

“When you ask Facebook about various scandals, how can they still say ‘We are very sorry; we have a lot of learning to do.’ This kind of naiveté stands out of proportion to the power tech companies have. With great power should come great responsibility, or at least modesty.”

Schaake argued:

“We need more guarantees, institutions, and policies than stated good intentions. It’s about more than promises.”

Fictional CEO Hendricks thinks saying sorry is a cop-out as well. In the episode, a developer admits that his app collected user data in spite of Hendricks assuring Congress that his company doesn’t do that:

“You didn’t know at the time,” the developer says. “Don’t beat yourself up about it. But in the future, stop saying it. Or don’t; I don’t care. Maybe it will be like Google saying ‘Don’t be evil,’ or Facebook saying ‘I’m sorry, we’ll do better.’”

Hendricks doesn’t buy it:

“This stops now. I’m the boss, and this is over.”

(Well, he is fictional.)

How can government, the tech world, and the general public address this in a more comprehensive way? Out in the real world, the “what to do” discussion at Stanford HAI surrounded regulation—how much, what kind, and when.

Says the European Parliament’s Schaake:

“An often-heard argument is that government should refrain from regulating tech because [regulation] will stifle innovation. [That argument] implies that innovation is more important than democracy or the rule of law. Our problems don’t stem from over regulation, but under regulation of technologies.”

But when should that regulation happen? Stanford provost emeritus John Etchemendy, speaking from the audience at the HAI conference, said:

“I’ve been an advocate of not trying to regulate before you understand it. Like San Francisco banning of use of facial recognition is not a good example of regulation; there are uses of facial recognition that we should allow. We want regulations that are just right, that prevent the bad things and allow the good things. So we are going to get it wrong either way, if we regulate too soon or hold off, we will get some things wrong.”

Schaake would opt for regulating sooner rather than later. She says that she often hears the argument that it is too early to regulate artificial intelligence—as well as the argument that it is too late to regulate ad-based political advertising, or online privacy. Neither, to her, makes sense. She told the HAI attendees:

“We need more guarantees than stated good intentions.”

U.S. Chief Technology Officer Michael Kratsios would go with later rather than sooner. (And, yes, the country has a CTO. President Barack Obama created the position in 2009; Kratsios is the fourth to hold the office and the first under President Donald Trump. He was confirmed in August.) Also speaking at the HAI conference, Kratsios argued:

“I don’t think we should be running to regulate anything. We are a leader [in technology] not because we had great regulations, but we have taken a free market approach. We have done great in driving innovation in technologies that are born free, like the Internet. Technologies born in captivity, like autonomous vehicles, lag behind.”

In the fictional world of HBO’s Silicon Valley, startup founder Hendricks has a solution—a technical one of course: the decentralized Internet. He tells Congress:

“The way we win is by creating a new, decentralized Internet, one where the behavior of companies like this will be impossible, forever. Where it is the users, not the kings, who have sovereign control over their data. I will help you build an Internet that is of the people, by the people, and for the people.”

(This is not a fictional concept, though it is a long way from wide use. Also called the decentralized Web, the concept takes the content on today’s Web and fragments it, and then replicates and scatters those fragments to hosts around the world, increasing privacy and reducing the ability of governments to restrict access.)
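To make the fragmentation idea concrete, here is a toy sketch of content addressing: split a piece of content into chunks, name each chunk by its hash, and scatter replicas across hosts. It is a simplified illustration, not the protocol of any particular decentralized-web project, and the host names are made up.

import hashlib

def fragment(content, chunk_size=16):
    # Split the content into chunks and name each chunk by its SHA-256 digest.
    chunks = [content[i:i + chunk_size] for i in range(0, len(content), chunk_size)]
    return {hashlib.sha256(c).hexdigest(): c for c in chunks}

def assign_hosts(fragments, hosts, replicas=2):
    # Scatter each fragment to `replicas` hosts, picked from its own hash,
    # so no single server holds (or can withhold) the whole document.
    placement = {}
    for digest in fragments:
        start = int(digest, 16) % len(hosts)
        placement[digest] = [hosts[(start + k) % len(hosts)] for k in range(replicas)]
    return placement

frags = fragment(b"of the people, by the people, for the people")
print(assign_hosts(frags, hosts=["node-a", "node-b", "node-c"]))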

If neither regulation nor technology comes to make the world safe from the unforeseen effects of new technologies, there is one more hope, according to Schaake: the millennials and subsequent generations.

Tech companies can no longer pursue growth at all costs, not if they want to keep attracting the talent they need, says Schaake. She noted that, “the young generation looks at the environment, at homeless on the streets,” and they expect their companies to tackle those and other issues and make the world a better place.


#436151 Natural Language Processing Dates Back ...

This is part one of a six-part series on the history of natural language processing.

We’re in the middle of a boom time for natural language processing (NLP), the field of computer science that focuses on linguistic interactions between humans and machines. Thanks to advances in machine learning over the past decade, we’ve seen vast improvements in speech recognition and machine translation software. Language generators are now good enough to write coherent news articles, and virtual agents like Siri and Alexa are becoming part of our daily lives.

Most trace the origins of this field back to the beginning of the computer age, when Alan Turing, writing in 1950, imagined a smart machine that could interact fluently with a human via typed text on a screen. For this reason, machine-generated language is mostly understood as a digital phenomenon—and a central goal of artificial intelligence (AI) research.

This six-part series will challenge that common understanding of NLP. In fact, attempts to design formal rules and machines that can analyze, process, and generate language go back hundreds of years.

Attempts to design formal rules and machines that can analyze, process, and generate language go back hundreds of years.

While specific technologies have changed over time, the basic idea of treating language as a material that can be artificially manipulated by rule-based systems has been pursued by many people in many cultures and for many different reasons. These historical experiments reveal the promise and perils of attempting to simulate human language in non-human ways—and they hold lessons for today’s practitioners of cutting-edge NLP techniques.

The story begins in medieval Spain. In the late 1200s, a Jewish mystic by the name of Abraham Abulafia sat down at a table in his small house in Barcelona, picked up a quill, dipped it in ink, and began combining the letters of the Hebrew alphabet in strange and seemingly random ways. Aleph with Bet, Bet with Gimmel, Gimmel with Aleph and Bet, and so on.

Abulafia called this practice “the science of the combination of letters.” He wasn’t actually combining letters at random; instead he was carefully following a secret set of rules that he had devised while studying an ancient Kabbalistic text called the Sefer Yetsirah. This book describes how God created “all that is formed and all that is spoken” by combining Hebrew letters according to sacred formulas. In one section, God exhausts all possible two-letter combinations of the 22 Hebrew letters.

By studying the Sefer Yetsirah, Abulafia gained the insight that linguistic symbols can be manipulated with formal rules in order to create new, interesting, insightful sentences. To this end, he spent months generating thousands of combinations of the 22 letters of the Hebrew alphabet and eventually emerged with a series of books that he claimed were endowed with prophetic wisdom.
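The combinatorics behind that passage are easy to reproduce today. The snippet below enumerates the ordered two-letter pairings of the 22 Hebrew letters described in the Sefer Yetsirah; Abulafia’s own generative rules were far more elaborate and secret, so this is only the simplest possible version of the exercise.

from itertools import permutations

HEBREW = "אבגדהוזחטיכלמנסעפצקרשת"  # the 22 letters of the Hebrew alphabet
pairs = ["".join(p) for p in permutations(HEBREW, 2)]  # ordered, distinct letters
print(len(pairs))   # 462 two-letter pairings
print(pairs[:3])    # the first few: aleph-bet, aleph-gimmel, aleph-dalet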

For Abulafia, generating language according to divine rules offered insight into the sacred and the unknown, or as he put it, allowed him to “grasp things which by human tradition or by thyself thou would not be able to know.”

Combining letters to generate language allows thou to “grasp things which by human tradition or by thyself thou would not be able to know.”
—Abraham Abulafia, mystic

But other Jewish scholars considered this rudimentary language generation a dangerous act that bordered on the profane. The Talmud tells stories of rabbis who, by the magical act of permuting language according to the formulas set out in the Sefer Yetsirah, created artificial creatures called golems. In these tales, rabbis manipulated the letters of the Hebrew alphabet to replicate God’s act of creation, using the sacred formulas to imbue inanimate objects with life.

In some of these myths, the rabbis used this skill for practical reasons, to make animals to eat when hungry or servants to help them with domestic duties. But many of these golem stories end badly. In one particularly well-known fable, Judah Loew ben Bezalel, the 16th century rabbi of Prague, used the sacred practice of letter combinatorics to conjure a golem to protect the Jewish community from antisemitic attacks, only to see the golem turn violently on him instead.

This “science of the combination of letters” was a rudimentary form of natural language processing, as it involved combining letters of the Hebrew alphabet according to specific rules. For Kabbalists, it was a double-edged sword: a way to access new forms of knowledge and wisdom, but also an inherently dangerous practice that could bring about unintended consequences.

This tension reappears throughout the long history of language processing, and still echoes in discussions about the most cutting-edge NLP technology of our digital era.

This is the first installment of a six-part series on the history of natural language processing. Come back next Monday for part two, “In the 17th Century, Leibniz Dreamed of a Machine That Could Calculate Ideas​.”

You can also check out our prior series on the untold history of AI.


#436140 Let’s Build Robots That Are as Smart ...

Illustration: Nicholas Little

Let’s face it: Robots are dumb. At best they are idiot savants, capable of doing one thing really well. In general, even those robots require specialized environments in which to do their one thing really well. This is why autonomous cars or robots for home health care are so difficult to build. They’ll need to react to an uncountable number of situations, and they’ll need a generalized understanding of the world in order to navigate them all.

Babies as young as two months already understand that an unsupported object will fall, while five-month-old babies know materials like sand and water will pour from a container rather than plop out as a single chunk. Robots lack these understandings, which hinders them as they try to navigate the world without a prescribed task and movement.

But we could see robots with a generalized understanding of the world (and the processing power required to wield it) thanks to the video-game industry. Researchers are bringing physics engines—the software that provides real-time physical interactions in complex video-game worlds—to robotics. The goal is to develop robots’ understanding in order to learn about the world in the same way babies do.

Giving robots a baby’s sense of physics helps them navigate the real world and can even save on computing power, according to Lochlainn Wilson, the CEO of SE4, a Japanese company building robots that could operate on Mars. SE4 plans to avoid the problems of latency caused by distance from Earth to Mars by building robots that can operate independently for a few hours before receiving more instructions from Earth.

Wilson says that his company uses simple physics engines such as PhysX to help build more-independent robots. He adds that if you can tie a physics engine to a coprocessor on the robot, the real-time basic physics intuitions won’t take compute cycles away from the robot’s primary processor, which will often be focused on a more complicated task.

Wilson’s firm occasionally still turns to a traditional graphics engine, such as Unity or the Unreal Engine, to handle the demands of a robot’s movement. In certain cases, however, such as a robot accounting for friction or understanding force, you really need a robust physics engine, Wilson says, not a graphics engine that simply simulates a virtual environment. For his projects, he often turns to the open-source Bullet Physics engine built by Erwin Coumans, who is now an employee at Google.
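Bullet also ships with open-source Python bindings (pybullet), which make the basic idea easy to demonstrate: before acting, a robot can step a small simulated copy of the scene and check the outcome. The snippet below runs the baby-level query from earlier in this article, whether an unsupported box falls, and is a minimal sketch rather than anything SE4 or Nvidia actually ships.

import pybullet as p

p.connect(p.DIRECT)            # headless physics, no GUI needed
p.setGravity(0, 0, -9.81)

# A 10 cm box floating one meter up, with nothing underneath to support it.
box_shape = p.createCollisionShape(p.GEOM_BOX, halfExtents=[0.05, 0.05, 0.05])
box_body = p.createMultiBody(baseMass=1.0,
                             baseCollisionShapeIndex=box_shape,
                             basePosition=[0.0, 0.0, 1.0])

for _ in range(240):           # "imagine" one second at Bullet's default 240 Hz
    p.stepSimulation()

position, _ = p.getBasePositionAndOrientation(box_body)
print("box height after one simulated second:", round(position[2], 2))  # it fell
p.disconnect()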

Bullet is a popular physics-engine option, but it isn’t the only one out there. Nvidia Corp., for example, has realized that its gaming and physics engines are well-placed to handle the computing demands required by robots. In a lab in Seattle, Nvidia is working with teams from the University of Washington to build kitchen robots, fully articulated robot hands and more, all equipped with Nvidia’s tech.

When I visited the lab, I watched a robot arm move boxes of food from counters to cabinets. That’s fairly straightforward, but that same robot arm could avoid my body if I got in its way, and it could adapt if I moved a box of food or dropped it onto the floor.

The robot could also understand that less pressure is needed to grasp something like a cardboard box of Cheez-It crackers versus something more durable like an aluminum can of tomato soup.

Nvidia’s silicon has already helped advance the fields of artificial intelligence and computer vision by making it possible to process multiple decisions in parallel. It’s possible that the company’s new focus on virtual worlds will help advance the field of robotics and teach robots to think like babies.

This article appears in the November 2019 print issue as “Robots as Smart as Babies.”
