
#439439 Swarms of tiny dumb robots found to ...

A team of researchers affiliated with several institutions in Europe has found that swarms of tiny dumb vibrating robots are capable of carrying out sophisticated actions such as transporting objects or squeezing through tunnels. In their paper published in the journal Science Robotics, the group describes experiments they conducted with tiny dumb robots they called “bugs.” Continue reading

Posted in Human Robots

#439424 AI and Robots Are a Minefield of ...

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Most people associate artificial intelligence with robots as an inseparable pair. In fact, the term "artificial intelligence" is rarely used in research labs. Terminology specific to certain kinds of AI and other smart technologies is more relevant. Whenever I'm asked the question "Is this robot operated by AI?", I hesitate to answer—wondering whether it would be appropriate to call the algorithms we develop "artificial intelligence."

First used by scientists such as John McCarthy and Marvin Minsky in the 1950s, and frequently appearing in sci-fi novels or films for decades, AI is now being used in smartphone virtual assistants and autonomous vehicle algorithms. Both historically and today, AI can mean many different things—which can cause confusion.

However, people often express the preconception that AI is an artificially realized version of human intelligence. And that preconception might come from our cognitive bias as human beings.

We judge robots’ or AI’s tasks in comparison to humans
If you happened to follow this news in 2016, how did you feel when AlphaGo, an AI developed by DeepMind, defeated 9-dan Go player Lee Sedol? You may have been surprised or terrified, thinking that AI had surpassed the ability of geniuses. Still, winning a game with an exponential number of possible moves like Go only means that AI has exceeded a very limited part of human intelligence. The same goes for IBM's AI, Watson, which competed on "Jeopardy!", the television quiz show.

I believe many were impressed to see the Mini Cheetah, developed in my MIT Biomimetic Robotics Laboratory, perform a backflip. While jumping backwards and landing is very dynamic, eye-catching and, of course, difficult for humans, the algorithm for this particular motion is remarkably simple compared to the one that enables stable walking, which requires much more complex feedback loops. Achieving robot tasks that are seemingly easy for us is often extremely difficult and complicated. This gap occurs because we tend to judge a task's difficulty by human standards.

Achieving robot tasks that are seemingly easy for us is often extremely difficult and complicated.

We tend to generalize AI functionality after watching a single robot demonstration. When we see someone on the street doing backflips, we tend to assume this person would be good at walking and running, and also be flexible and athletic enough to be good at other sports. Very likely, such judgement about this person would not be wrong.

However, can we also apply this judgement to robots? It's easy for us to generalize and determine AI performance based on an observation of a specific robot motion or function, just as we do with humans. By watching a video of a robot hand solving a Rubik's Cube at OpenAI, an AI research lab, we think that the AI can perform all other simpler tasks because it can perform such a complex one. We overlook the fact that this AI's neural network was trained only for a limited type of task: solving the Rubik's Cube in that configuration. If the situation changes—for example, holding the cube upside down while manipulating it—the algorithm does not work as well as might be expected.

Unlike AI, humans can combine individual skills and apply them to multiple complicated tasks. Once we learn how to solve a Rubik’s Cube, we can quickly work on the cube even when we’re told to hold it upside down, though it may feel strange at first. Human intelligence can naturally combine the objectives of not dropping the cube and solving the cube. Most robot algorithms will require new data or reprogramming to do so. A person who can spread jam on bread with a spoon can do the same using a fork. It is obvious. We understand the concept of “spreading” jam, and can quickly get used to using a completely different tool. Also, while autonomous vehicles require actual data for each situation, human drivers can make rational decisions based on pre-learned concepts to respond to countless situations. These examples show one characteristic of human intelligence in stark contrast to robot algorithms, which cannot perform tasks with insufficient data.

Image Credit: Hyung Taek Yoon

Mammals have been evolving continuously for more than 65 million years. The entire time humans have spent learning math, using language, and playing games sums up to a mere 10,000 years. In other words, humanity spent a tremendous amount of time developing abilities directly related to survival, such as walking, running, and using our hands. So it may not be surprising that computers can compute much faster than humans; they were developed for this purpose in the first place. Likewise, it is natural that computers cannot easily acquire the ability to use hands and feet freely for various purposes as humans do. These skills have been attained through more than 10 million years of evolution.

This is why it is unreasonable to compare robot or AI performance in demonstrations to the abilities of an animal or human. Watching videos of MIT's Cheetah robot running across fields and leaping over obstacles, it would be rash to conclude that robots can now walk and run the way animals do. Numerous robot demonstrations still rely on algorithms tuned for specialized tasks in bounded situations. There is a tendency, in fact, for researchers to select demonstrations that seem difficult, because they make a strong impression. However, this level of difficulty is judged from the human perspective, which may be irrelevant to the actual algorithm performance.

Humans are easily influenced by instantaneous, reflexive perception before any logical thought. And this cognitive bias is strengthened when the subject is very complicated and difficult to analyze logically—for example, a robot that uses machine learning.

Robotic demonstrations still rely on algorithms set for specialized tasks in bounded situations.

So where does our human cognitive bias come from? I believe it comes from our psychological tendency to subconsciously anthropomorphize the subjects we see. Humans evolved as social animals, probably developing the ability to understand and empathize with each other in the process. Our tendency to anthropomorphize subjects would have come from the same evolutionary process. People tend to use the expression "teaching robots" when they refer to programming algorithms; even though nothing like human teaching is taking place, we are used to such anthropomorphized expressions. As the 18th-century philosopher David Hume said, "There is a universal tendency among mankind to conceive all beings like themselves. We find human faces in the moon, armies in the clouds."

Of course, we anthropomorphize not only subjects' appearance but also their state of mind. For example, when Boston Dynamics released a video of its engineers kicking a robot, many viewers reacted by saying "this is cruel," and that they "pity the robot." A comment saying, "one day, robots will take revenge on that engineer" received likes. In reality, the engineer was simply testing the robot's balancing algorithm. However, before any thought process can comprehend the situation, the aggressive motion of kicking combined with the struggling of the animal-like robot is instantaneously transmitted to our brains, leaving a strong impression. In this way, instantaneous anthropomorphism has a deep effect on our cognitive processes.

Humans process information qualitatively, and computers, quantitatively
Looking around, we can see that our daily lives are filled with algorithms, embodied in the machines and services that run on them. All algorithms operate on numbers. We use terms such as "objective function," a numerical function that represents a certain goal. Many algorithms have the sole purpose of reaching the maximum or minimum value of this function, and an algorithm's character differs based on how it gets there.

The goals of tasks such as winning a game of Go or chess are relatively easy to quantify. The easier the quantification, the better the algorithms work. Humans, on the contrary, often make decisions without quantitative thinking.
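
To make the idea concrete, here is a minimal sketch in Python of what an objective function and an algorithm that minimizes it can look like. The quadratic cost, the target value, and the step size are invented purely for illustration; they do not correspond to any particular robot or product.

```python
# Minimal sketch: an "objective function" is just a number the algorithm
# tries to drive down (or up). Here the objective is an invented quadratic
# cost; plain gradient descent nudges x toward the value that minimizes it.

def objective(x: float) -> float:
    # Hypothetical cost: squared distance from a target value of 3.0
    return (x - 3.0) ** 2

def gradient(x: float) -> float:
    # Derivative of the quadratic cost above
    return 2.0 * (x - 3.0)

x = 0.0      # initial guess
step = 0.1   # learning rate (an illustrative choice)
for _ in range(100):
    x -= step * gradient(x)

print(f"x = {x:.4f}, objective = {objective(x):.6f}")  # x approaches 3.0
```

Everything the algorithm "cares about" is packed into that single number returned by the objective; how well the number captures what we actually want is a separate question.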

As an example, consider cleaning a room. The way we clean differs subtly from day to day, depending on the situation, on whose room it is, and on how we feel. Were we trying to maximize a certain function in the process? We did no such thing. We clean with the abstract objective of making the room "clean enough." Besides, the standard for how much is "enough" changes easily. The standard may also differ between people, causing conflicts, particularly among family members or roommates.

There are many other examples. When you wash your face every day, which quantitative indicators do you intend to maximize with your hand movements? How hard do you rub? What about when choosing what to wear, what to have for dinner, or which dish to wash first? The list goes on. We are used to making decisions that are good enough by putting together the information we already have, and we rarely check whether every single decision is optimal. Most of the time, it is impossible to know, because we would have to satisfy numerous contradicting indicators with limited data. When selecting groceries with a friend at the store, we do not each quantify our standards and decide based on those numerical values. Usually, when one picks something out, the other will either say "OK!" or suggest another option. This is very different from declaring a vegetable "the optimal choice!" It is more like saying "this is good enough."

This operational difference between people and algorithms may cause trouble when we design work or services we expect robots to perform. Algorithms perform tasks based on quantitative values, but human satisfaction, the real outcome of the task, is difficult to quantify completely. It is not easy to quantify the goal of a task that must adapt to individual preferences or changing circumstances, like the room cleaning or dishwashing mentioned above. That is, to coexist with humans, robots may have to evolve not to optimize particular functions, but to achieve "good enough." Of course, the latter is much more difficult to achieve robustly in real-life situations, where so many conflicting objectives and qualitative constraints must be managed.
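
The contrast between exhaustive optimization and settling for "good enough" can also be sketched in a few lines. The grocery scores and the threshold below are invented for illustration only; in reality, writing down a faithful "good enough" criterion is exactly the hard part.

```python
# Toy illustration (invented scores): an optimizer scans every option for
# the maximum, while a satisficer stops at the first option that clears a
# "good enough" threshold, the way a shopper might.

options = {"carrots": 0.62, "spinach": 0.71, "kale": 0.90, "leeks": 0.55}

def optimize(scored: dict) -> str:
    # Examine everything; return the single best-scoring option.
    return max(scored, key=scored.get)

def satisfice(scored: dict, good_enough: float = 0.7) -> str:
    # Take the first option whose score clears the threshold.
    for name, score in scored.items():
        if score >= good_enough:
            return name
    return optimize(scored)  # fall back if nothing is good enough

print(optimize(options))   # "kale": the optimum, after checking everything
print(satisfice(options))  # "spinach": good enough, found sooner
```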

Actually, we do not know what we are doing
Try to recall the most recent meal you had before reading this. Can you remember what you had? Then, can you also remember the process of chewing and swallowing the food? Do you know what exactly your tongue was doing at that very moment? Our tongue does so many things for us. It helps us put food in our mouths, distributes the food between our teeth, swallows the finely chewed pieces, and even sends large pieces back toward our teeth if needed. We can do all of this naturally, even while talking to a friend, with the same tongue also in charge of pronunciation. How much do our conscious decisions contribute to the movements of our tongues as they accomplish so many complex tasks simultaneously? It may seem like we are moving our tongues as we want, but in fact there are more moments when the tongue is moving automatically, taking only high-level commands from our consciousness. This is why we cannot remember the detailed movements of our tongues during a meal. We know little about their movement in the first place.

We may assume that our hands are the most consciously controllable organ, but many hand movements also happen automatically and unconsciously, or subconsciously at most. For those who disagree, try putting something like keys into your pocket and taking them back out. In that short moment, countless micromanipulations coordinate instantly and seamlessly to complete the task. We often cannot perceive each action separately; we do not even know what units we should divide them into, so we collectively express them with abstract words such as organize, wash, apply, rub, and wipe. These verbs are qualitatively defined. They often refer to aggregates of fine movements and manipulations whose composition changes depending on the situation. Of course, this concept is easy even for children to grasp, but from the perspective of algorithm development, these words are endlessly vague and abstract.

Image Credit: Hyung Taek Yoon

Let's try to teach someone how to make a sandwich by spreading peanut butter on bread. We can show how it is done and explain with a few simple words. Now assume a slightly different situation. Say there is an alien who uses the same language as us, but knows nothing about human civilization or culture. (I know this assumption is already contradictory…, but please bear with me.) Can we explain over the phone how to make a peanut butter sandwich? We will probably get stuck trying to explain how to scoop peanut butter out of the jar. Even grasping the slice of bread is not so simple. We have to grasp the bread firmly enough to spread the peanut butter on it, but not so firmly that we ruin the shape of the soft bread. At the same time, we should not drop it either. It is easy for us to think of how to grasp the bread, but it is not easy to express this through speech or text, let alone in a mathematical function. Even when it is a human who is learning, can we teach a carpenter's work over the phone? Can we precisely correct a tennis or golf posture over the phone? It is difficult to discern to what extent the details we see are done consciously or unconsciously.

My point is that not everything we do with our hands and feet can be expressed directly in language. Things that happen between successive actions often occur automatically and unconsciously, and so we describe our actions in a much simpler way than they actually take place. This is why our actions seem very simple, and why we forget how incredible they really are. The limitations of expression often lead to underestimation of the actual complexity. We should recognize that the difficulty of describing these skills in words can hinder research progress in fields where the vocabulary is not well developed.

Until recently, AI has been practically applied in information services related to data processing. Some prominent examples today include voice recognition and facial recognition. Now, we are entering a new era of AI that can effectively perform physical services in our midst. That is, the time is coming in which automation of complex physical tasks becomes imperative.

In particular, our increasingly aging society poses a huge challenge. A shortage of labor is no longer a vague social problem. It is urgent that we discuss how to develop technologies that augment human capability, allowing us to focus on more valuable work and pursue uniquely human lives. This is why not only engineers but also members of society from various fields should improve their understanding of AI and of our unconscious cognitive biases. It is easy to misunderstand artificial intelligence, as noted above, because it is substantively unlike human intelligence.

Judgments that come very naturally among humans can become cognitive biases when applied to AI and robots. Without a clear understanding of these biases, we cannot set the appropriate directions for technology research, application, and policy. For productive development as a scientific community, we need to pay keen attention to our own cognition and to debate deliberately as we promote the appropriate development and application of technology.

Sangbae Kim leads the Biomimetic Robotics Laboratory at MIT. The preceding is an adaptation of a blog Kim posted in June for Naver Labs. Continue reading

Posted in Human Robots

#439400 A Neuron’s Sense of Timing Encodes ...

We like to think of brains as computers: physical systems that process inputs and spit out outputs. But, obviously, what's between your ears bears little resemblance to your laptop.

Computer scientists know the intimate details of how computers store and process information because they design and build them. But neuroscientists didn’t build brains, which makes them a bit like a piece of alien technology they’ve found and are trying to reverse engineer.

At this point, researchers have catalogued the components fairly well. We know the brain is a vast and intricate network of cells called neurons that communicate by way of electrical and chemical signals. What’s harder to figure out is how this network makes sense of the world.

To do that, scientists try to tie behavior to activity in the brain by listening to the chatter of its neurons firing. If neurons in a region get rowdy when a person is eating chocolate, well, those cells might be processing taste or directing chewing. This method has mostly focused on the frequency at which neurons fire—that is, how often they fire in a given period of time.

But frequency alone is an imprecise measure. For years, research in rats has suggested that the timing of a neuron's firing relative to its peers—particularly during spatial navigation—may also encode information. This process, in which the timing of some neurons grows increasingly out of step with their neighbors, is called "phase precession."

It wasn’t known if phase precession was widespread in mammals, but recent studies have found it in bats and marmosets. And now, a new study has shown that it happens in humans too, strengthening the case that phase precession may occur across species.

The new study also found evidence of phase precession outside of spatial tasks, lending some weight to the idea it may be a more general process in learning throughout the brain.

The paper was published in the journal Cell last month by a Columbia University team of researchers led by neuroscientist and biomedical engineer Josh Jacobs.

The researchers say more studies are needed to flesh out the role of phase precession in the brain, and how or if it contributes to learning is still uncertain.

But to Salman Qasim, a post-doctoral fellow on Jacobs’ team and lead author of the paper, the patterns are tantalizing. “[Phase precession is] so prominent and prevalent in the rodent brain that it makes you want to assume it’s a generalizable mechanism,” he told Quanta Magazine this month.

Rat Brains to Human Brains
Though phase precession in rats has been studied for decades, it has taken longer to unearth in humans for a couple of reasons. For one, it's more challenging to study in humans at the level of individual neurons because it requires placing electrodes deep in the brain. Also, our patterns of brain activity are subtler and more complex, making them harder to untangle.

To solve the first challenge, the team analyzed decade-old recordings of neural chatter from 13 patients with drug-resistant epilepsy. As a part of their treatment, the patients had electrodes implanted to map the storms of activity during a seizure.

In one test, they navigated a two-dimensional virtual world—like a simple video game—on a laptop. Their brain activity was recorded as they were instructed to drive and drop off “passengers” at six stores around the perimeter of a rectangular track.

The team combed through this activity for hints of phase precession.

Active regions of the brain tend to fire together at a steady rate. These rhythms, called brain waves, are like a metronome or internal clock. Phase precession occurs when individual neurons fall out of step with the prevailing brain waves nearby. In spatial navigation, as in this study, a particular type of neuron called a "place cell" fires earlier and earlier relative to its peers as the subject approaches and passes through a region. Its early firing eventually links up with the late firing of the next place cell in the chain, strengthening the synapse between the two and encoding a path through space.
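
A toy simulation may help picture the mechanism. The numbers below, an 8 Hz theta rhythm and a linear phase advance across the place field, are simplifying assumptions chosen for illustration only; they are not the analysis used in the study.

```python
import math  # not strictly needed here, but typical for extending the model

# Toy model of phase precession: as a subject moves through a place cell's
# field (position 0.0 -> 1.0), the cell spikes at an earlier and earlier
# phase of the ongoing theta rhythm. Both numbers below are assumptions.

THETA_HZ = 8.0          # assumed theta frequency (cycles per second)
CYCLE = 1.0 / THETA_HZ  # duration of one theta cycle, in seconds

def spike_phase(position: float) -> float:
    """Spike phase in degrees; advances linearly from ~360 toward 0 degrees
    as the subject crosses the field (a deliberately simplified assumption)."""
    return 360.0 * (1.0 - position)

for step in range(6):
    pos = step / 5.0                    # fraction of the field traversed
    phase = spike_phase(pos)
    t_in_cycle = CYCLE * phase / 360.0  # when in the cycle the spike lands
    print(f"position {pos:.1f} -> phase {phase:5.1f} deg "
          f"({t_in_cycle * 1000:.1f} ms into the {CYCLE * 1000:.0f} ms cycle)")
```

The printout shows the spike arriving earlier in each successive theta cycle, which is the signature the researchers look for in the recorded activity.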

In rats, theta waves in the hippocampus, a region associated with navigation, are strong and clear, making precession easier to pick out. In humans, they're weaker and more variable. So the team used a clever statistical analysis to widen the observed wave frequencies into a range. And that's when the phase precession clearly stood out.

This result lined up with prior navigation studies in rats. But the team went a step further.

In another part of the brain, the frontal cortex, they found phase precession in neurons not involved in navigation. The timing of these cells fell out of step with their neighbors as the subject achieved the goal of dropping passengers off at one of the stores. This indicated phase precession may also encode the sequence of steps leading up to a goal.

The findings, therefore, extend the occurrence of phase precession to humans and to new tasks and regions in the brain. The researchers say this suggests the phenomenon may be a general mechanism that encodes experiences over time. Indeed, other research—some very recent and not yet peer-reviewed—validates this idea, tying it to the processing of sounds, smells, and series of images.

And, the cherry on top, the process compresses experience to the length of a single brain wave. That is, an experience that takes seconds—say, a rat moving through several locations in the real world—is compressed to the fraction of a second it takes the associated neurons to fire in sequence.

In theory, this could help explain how we learn so fast from so few examples, something artificial intelligence algorithms struggle to do.

As enticing as the research is, however, both the team involved in the study and other researchers say it’s still too early to draw definitive conclusions. There are other theories for how humans learn so quickly, and it’s possible phase precession is an artifact of the way the brain functions as opposed to a driver of its information processing.

That said, the results justify more serious investigation.

“Anyone who looks at brain activity as much as we do knows that it’s often a chaotic, stochastic mess,” Qasim told Wired last month. “So when you see some order emerge in that chaos, you want to ascribe to it some sort of functional purpose.”

Only time will tell if that order is a fundamental neural algorithm or something else.

Image Credit: Daniele Franchi / Unsplash Continue reading

Posted in Human Robots

#439384 Using optogenetics to control movement ...

A team of researchers from the University of Toronto and the Lunenfeld-Tanenbaum Research Institute has developed a technique for controlling the movements of a live nematode using laser light. In their paper published in the journal Science Robotics, the group describes their technique. Adriana San-Miguel of North Carolina State University has published a Focus piece in the same journal issue outlining the work done by the team. Continue reading

Posted in Human Robots

#439376 Japan’s SoftBank suspends ...

Japan's SoftBank has suspended production of its humanoid robot Pepper, a company spokeswoman said Tuesday, seven years after the conglomerate unveiled the signature chatty white android to much fanfare. Continue reading

Posted in Human Robots