Tag Archives: cognitive
#439455 AI and Robots Are a Minefield of ...
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
Most people associate artificial intelligence with robots as an inseparable pair. In fact, the term “artificial intelligence” is rarely used in research labs; terminology specific to particular kinds of AI and other smart technologies is more relevant. Whenever I’m asked “Is this robot operated by AI?”, I hesitate to answer—wondering whether it would be appropriate to call the algorithms we develop “artificial intelligence.”
First used by scientists such as John McCarthy and Marvin Minsky in the 1950s, and frequently appearing in sci-fi novels or films for decades, AI is now being used in smartphone virtual assistants and autonomous vehicle algorithms. Both historically and today, AI can mean many different things—which can cause confusion.
However, people often express the preconception that AI is an artificially realized version of human intelligence. And that preconception might come from our cognitive bias as human beings.
We judge robots’ or AI’s tasks in comparison to humans
If you happened to follow the news in 2016, how did you feel when AlphaGo, an AI developed by DeepMind, defeated 9-dan Go player Lee Sedol? You may have been surprised or terrified, thinking that AI had surpassed the ability of geniuses. Still, winning a game with an exponential number of possible moves like Go only means that AI has exceeded a very limited part of human intelligence. The same goes for Watson, the IBM AI that competed on the television quiz show “Jeopardy!”
I believe many were impressed to see the Mini Cheetah, developed in my MIT Biomimetic Robotics Laboratory, perform a backflip. While jumping backwards and landing on the ground is very dynamic, eye-catching, and of course difficult for humans, the algorithm for that particular motion is incredibly simple compared with one that enables stable walking, which requires much more complex feedback loops. Achieving robot tasks that are seemingly easy for us is often extremely difficult and complicated. This gap occurs because we tend to judge a task’s difficulty by human standards.
Achieving robot tasks that are seemingly easy for us is often extremely difficult and complicated.
We tend to generalize AI functionality after watching a single robot demonstration. When we see someone on the street doing backflips, we tend to assume this person would be good at walking and running, and also be flexible and athletic enough to be good at other sports. Very likely, such judgement about this person would not be wrong.
However, can we also apply this judgement to robots? It’s easy for us to generalize about AI performance from an observation of a specific robot motion or function, just as we do with humans. Watching a video of a robot hand solving a Rubik’s Cube at OpenAI, an AI research lab, we assume that the AI can perform all other, simpler tasks because it can perform such a complex one. We overlook the fact that this AI’s neural network was trained for only one limited type of task: solving the Rubik’s Cube in that configuration. If the situation changes—for example, holding the cube upside down while manipulating it—the algorithm does not work as well as might be expected.
Unlike AI, humans can combine individual skills and apply them to multiple complicated tasks. Once we learn how to solve a Rubik’s Cube, we can quickly work on the cube even when we’re told to hold it upside down, though it may feel strange at first. Human intelligence can naturally combine the objectives of not dropping the cube and solving the cube. Most robot algorithms will require new data or reprogramming to do so. A person who can spread jam on bread with a spoon can do the same using a fork. It is obvious. We understand the concept of “spreading” jam, and can quickly get used to using a completely different tool. Also, while autonomous vehicles require actual data for each situation, human drivers can make rational decisions based on pre-learned concepts to respond to countless situations. These examples show one characteristic of human intelligence in stark contrast to robot algorithms, which cannot perform tasks with insufficient data.
Illustration: Hyung Taek Yoon
Mammals have continuously been evolving for more than 65 million years. The entire time humans spent on learning math, using languages, and playing games would sum up to a mere 10,000 years. In other words, humanity spent a tremendous amount of time developing abilities directly related to survival, such as walking, running, and using our hands. Therefore, it may not be surprising that computers can compute much faster than humans, as they were developed for this purpose in the first place. Likewise, it is natural that computers cannot easily obtain the ability to freely use hands and feet for various purposes as humans do. These skills have been attained through evolution for over 10 million years.
This is why it is unreasonable to compare robot or AI performance in demonstrations to the abilities of an animal or a human. Watching videos of MIT’s Cheetah robot running across fields and leaping over obstacles, it would be rash to conclude that robot technologies for walking and running like animals are complete. Numerous robot demonstrations still rely on algorithms set for specialized tasks in bounded situations. There is a tendency, in fact, for researchers to select demonstrations that seem difficult, because they make a strong impression. However, that level of difficulty is judged from the human perspective, which may be irrelevant to the actual algorithm performance.
Humans are easily influenced by instantaneous, reflexive perception before any logical thought. And this cognitive bias is strengthened when the subject is very complicated and difficult to analyze logically—for example, a robot that uses machine learning.
Robotic demonstrations still rely on algorithms set for specialized tasks in bounded situations.
So where does our human cognitive bias come from? I believe it comes from our psychological tendency to subconsciously anthropomorphize the subjects we see. Humans have evolved as social animals, probably developing the ability to understand and empathize with each other in the process. Our tendency to anthropomorphize subjects would have come from the same evolutionary process. People tend to use the expression “teaching robots” when they refer to programming algorithms; even in technical settings, we fall back on anthropomorphized expressions. As the 18th-century philosopher David Hume said, “There is a universal tendency among mankind to conceive all beings like themselves. We find human faces in the moon, armies in the clouds.”
Of course, we anthropomorphize not only subjects’ appearance but also their state of mind. For example, when Boston Dynamics released a video of its engineers kicking a robot, many viewers reacted by saying “this is cruel” and that they “pity the robot.” A comment saying “one day, robots will take revenge on that engineer” received likes. In reality, the engineer was simply testing the robot’s balancing algorithm. But before any thought process can comprehend the situation, the aggressive motion of the kick combined with the struggling of the animal-like robot is instantaneously transmitted to our brains, leaving a strong impression. Such instantaneous anthropomorphism has a deep effect on our cognitive processes.
Humans process information qualitatively, and computers, quantitatively
Looking around, our daily lives are filled with algorithms, as seen in the machines and services that run on them. All algorithms operate on numbers. We use terms such as “objective function,” a numerical function that represents a certain objective. Many algorithms have the sole purpose of reaching the maximum or minimum value of this function, and an algorithm’s characteristics differ based on how it achieves this.
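To make the idea of an objective function concrete, here is a minimal sketch in Python; the gripper-to-target distance function and the crude coordinate-descent loop are invented for illustration, not taken from any real robot controller.

```python
# Toy objective function: squared distance of a gripper from a target point.
# "Minimizing the objective" drives the gripper toward the target.
def objective(gripper_xy, target_xy=(1.0, 2.0)):
    dx = gripper_xy[0] - target_xy[0]
    dy = gripper_xy[1] - target_xy[1]
    return dx * dx + dy * dy

# Crude coordinate descent: nudge each coordinate in whichever direction
# lowers the objective, and repeat until the iteration budget runs out.
def minimize(start, step=0.05, iters=200):
    x = list(start)
    for _ in range(iters):
        for i in range(len(x)):
            up, down = list(x), list(x)
            up[i] += step
            down[i] -= step
            x = up if objective(up) < objective(down) else down
    return x

print(minimize([0.0, 0.0]))  # ends up near the target (1.0, 2.0)
```

The point is not the particular method but the shape of the setup: a single number to be driven up or down, which is exactly what the everyday examples below resist.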
The goals of tasks such as winning a game of Go or chess are relatively easy to quantify. The easier the quantification, the better the algorithms work. By contrast, humans often make decisions without quantitative thinking.
As an example, consider cleaning a room. The way we clean differs subtly from day to day, depending on the situation, on whose room it is, and on how we feel. Are we trying to maximize a certain function in the process? No such thing. We clean with the abstract objective of getting the room “clean enough.” Besides, the standard for how much is “enough” changes easily. It may also differ between people, causing conflicts, particularly among family members or roommates.
There are many other examples. When you wash your face every day, which quantitative indicators are you trying to maximize with your hand movements? How hard do you rub? What about when choosing what to wear, what to have for dinner, or which dish to wash first? The list goes on. We are used to making decisions that are good enough by putting together the information we already have, and we rarely check whether every single decision is optimized. Most of the time, it is impossible to know, because we would have to satisfy numerous contradicting indicators with limited data. When selecting groceries with a friend at the store, we cannot each quantify our standards and decide based on those numerical values. Usually, when one of us picks something out, the other will either say “OK!” or suggest another option. That is very different from declaring a vegetable “the optimal choice!” It is more like saying “this is good enough.”
This operational difference between people and algorithms may cause trouble when we design work or services we expect robots to perform. While algorithms perform tasks based on quantitative values, human satisfaction, the outcome of the task, is difficult to quantify completely. It is not easy to quantify the goal of a task that must adapt to individual preferences or changing circumstances, like the room cleaning or dishwashing mentioned above. That is, to coexist with humans, robots may have to evolve not to optimize particular functions, but to achieve “good enough.” Of course, the latter is much more difficult to achieve robustly in real-life situations, where so many conflicting objectives and qualitative constraints must be managed.
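As a rough sketch of that contrast, assuming a made-up “tidiness” model and an arbitrary threshold, an optimizer keeps spending effort until its budget runs out, while a satisficing routine stops at the first point that is “good enough”:

```python
# Invented model: each extra minute of cleaning raises tidiness a little,
# with diminishing returns (approaches 1.0 but never reaches it).
def tidiness(minutes_cleaned):
    return 1.0 - 0.9 ** minutes_cleaned

# Optimizing agent: searches the whole budget for the maximum tidiness.
def optimize(budget_minutes=120):
    best = max(range(budget_minutes + 1), key=tidiness)
    return best, round(tidiness(best), 3)

# Satisficing agent: stops at the first duration that feels "clean enough".
def good_enough(threshold=0.8):
    minutes = 0
    while tidiness(minutes) < threshold:
        minutes += 1
    return minutes, round(tidiness(minutes), 3)

print(optimize())      # (120, 1.0)  -- burns the entire budget chasing the maximum
print(good_enough())   # (16, 0.815) -- stops early once the threshold is met
```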
Actually, we do not know what we are doing
Try to recall the most recent meal you had before reading this. Can you remember what you had? Can you also remember the process of chewing and swallowing the food? Do you know exactly what your tongue was doing at that very moment? Our tongue does so many things for us. It helps us put food in our mouths, distribute the food between our teeth, swallow the finely chewed pieces, and even send large pieces back toward our teeth if needed. We do all of this naturally, even while talking to a friend, when the tongue is also in charge of pronunciation. How much do our conscious decisions contribute to tongue movements that accomplish so many complex tasks simultaneously? It may seem like we move our tongues as we wish, but in fact, for much of the time the tongue moves automatically, taking only high-level commands from our consciousness. This is why we cannot remember the detailed movements of our tongues during a meal; we know little about their movement in the first place.
We may assume that our hands are our most consciously controllable body parts, but many hand movements also happen automatically and unconsciously, or subconsciously at most. For those who disagree, try putting something like your keys in your pocket and taking them back out. In that short moment, countless micromanipulations are instantly and seamlessly coordinated to complete the task. We often cannot perceive each action separately. We do not even know what units to divide them into, so we collectively express them with abstract words such as organize, wash, apply, rub, and wipe. These verbs are qualitatively defined. They often refer to aggregates of fine movements and manipulations whose composition changes depending on the situation. Of course, this concept is easy even for children to grasp, but from the perspective of algorithm development, these words are endlessly vague and abstract.
Illustration: Hyung Taek Yoon
Let’s try to teach someone how to make a sandwich by spreading peanut butter on bread. We can show how it is done and explain with a few simple words. Now assume a slightly different situation. Say there is an alien who uses the same language as us but knows nothing about human civilization or culture. (I know this assumption is already contradictory… but please bear with me.) Can we explain over the phone how to make a peanut butter sandwich? We will probably get stuck trying to explain how to scoop peanut butter out of the jar. Even grasping the slice of bread is not so simple. We have to grip the bread firmly enough to spread the peanut butter, but not so hard that we ruin the shape of the soft bread. At the same time, we must not drop it. It is easy for us to figure out how to grasp the bread, but it is not easy to express this through speech or text, let alone as a mathematical function. Even when it is a human who is learning, can we learn a carpenter’s work over the phone? Can we precisely correct a tennis or golf posture over the phone? It is difficult to discern to what extent the details we see are done consciously or unconsciously.
My point is that not everything we do with our hands and feet can be expressed directly in language. Things that happen between successive actions often occur automatically and unconsciously, so we describe our actions far more simply than they actually take place. This is why our actions seem so simple, and why we forget how incredible they really are. The limitations of expression often lead to underestimation of the actual complexity. We should recognize that the difficulty of describing actions in language can hinder research progress in fields where the vocabulary is not well developed.
Until recently, AI was applied mainly in information services related to data processing; prominent examples today include voice recognition and facial recognition. Now we are entering a new era of AI that can effectively perform physical services in our midst. That is, the time is coming when the automation of complex physical tasks becomes imperative.
In particular, our increasingly aging society poses a huge challenge. Labor shortages are no longer a vague social problem. It is urgent that we discuss how to develop technologies that augment human capabilities, allowing us to focus on more valuable work and pursue lives that are uniquely human. This is why not only engineers but also members of society from various fields should improve their understanding of AI and of our unconscious cognitive biases. As noted above, it is easy to misunderstand artificial intelligence because it is substantively unlike human intelligence.
Judgments that feel perfectly natural among humans may be cognitive biases when applied to AI and robots. Without a clear understanding of those biases, we cannot set appropriate directions for technology research, application, and policy. For the scientific community to develop productively, we need keen attention to our own cognition and deliberate debate as we promote the appropriate development and application of technology.
Sangbae Kim leads the Biomimetic Robotics Laboratory at MIT. The preceding is an adaptation of a blog Kim posted in June for Naver Labs.
#438785 Video Friday: A Blimp For Your Cat
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
Let us know if you have suggestions for next week, and enjoy today's videos.
Shiny robotic cat toy blimp!
I am pretty sure this is Google Translate getting things wrong, but the About page mentions that the blimp will “take you to your destination after appearing in the death of God.”
[ NTT DoCoMo ] via [ RobotStart ]
If you have yet to see this real-time video of Perseverance landing on Mars, drop everything and watch it.
During the press conference, someone commented that this is the first time anyone on the team who designed and built this system has ever seen it in operation, since it could only be tested at the component scale on Earth. This landing system has blown my mind since Curiosity.
Here's a better look at where Percy ended up:
[ NASA ]
The fact that Digit can just walk up and down wet, slippery, muddy hills without breaking a sweat is (still) astonishing.
[ Agility Robotics ]
SkyMul wants drones to take over the task of tying rebar, which looks like just the sort of thing we'd rather robots be doing so that we don't have to:
The tech certainly looks promising, and SkyMul says that they're looking for some additional support to bring things to the pilot stage.
[ SkyMul ]
Thanks Eohan!
Flatcat is a pet-like, playful robot that reacts to touch. Flatcat feels everything exactly: Cuddle with it, romp around with it, or just watch it do weird things of its own accord. We are sure that flatcat will amaze you, like us, and caress your soul.
I don't totally understand it, but I want it anyway.
[ Flatcat ]
Thanks Oswald!
This is how I would have a romantic dinner date if I couldn't get together in person. Herman the UR3 and an OptiTrack system let me remotely make a romantic meal!
[ Dave's Armoury ]
Here, we propose a novel design of deformable propellers inspired by dragonfly wings. The structure of these propellers includes a flexible segment similar to the nodus on a dragonfly wing. This flexible segment can bend, twist and even fold upon collision, absorbing force upon impact and protecting the propeller from damage.
[ Paper ]
Thanks Van!
In the 1970s, the CIA created the world's first miniaturized unmanned aerial vehicle, or UAV, which was intended to be a clandestine listening device. The Insectothopter was never deployed operationally, but was still revolutionary for its time.
It may never have been deployed (not that they'll admit to, anyway), but it was definitely operational and could fly controllably.
[ CIA ]
Research labs are starting to get Digits, which means we're going to get a much better idea of what its limitations are.
[ Ohio State ]
This video shows the latest achievements for LOLA walking on undetected uneven terrain. The robot is technically blind, not using any camera-based or prior information on the terrain.
[ TUM ]
We define “robotic contact juggling” to be the purposeful control of the motion of a three-dimensional smooth object as it rolls freely on a motion-controlled robot manipulator, or “hand.” While specific examples of robotic contact juggling have been studied before, in this paper we provide the first general formulation and solution method for the case of an arbitrary smooth object in single-point rolling contact on an arbitrary smooth hand.
[ Paper ]
Thanks Fan!
A couple of new cobots from ABB, designed to work safely around humans.
[ ABB ]
Thanks Fan!
It's worth watching at least a little bit of Adam Savage testing Spot's new arm, because we get to see Spot try, fail, and eventually succeed at an autonomous door-opening behavior at the 10 minute mark.
[ Tested ]
SVR discusses diversity with guest speakers Dr. Michelle Johnson from the GRASP Lab at UPenn; Dr. Ariel Anders from Women in Robotics and first technical hire at Robust.ai; Alka Roy from The Responsible Innovation Project; and Kenechukwu C. Mbanesi and Kenya Andrews from Black in Robotics. The discussion here is moderated by Dr. Ken Goldberg—artist, roboticist and Director of the CITRIS People and Robots Lab—and Andra Keay from Silicon Valley Robotics.
[ SVR ]
RAS presents a Soft Robotics Debate on Bioinspired vs. Biohybrid Design.
In this debate, we will bring together experts in Bioinspiration and Biohybrid design to discuss the necessary steps to make more competent soft robots. We will try to answer whether bioinspired research should focus more on developing new bioinspired material and structures or on the integration of living and artificial structures in biohybrid designs.
[ RAS SoRo ]
IFRR presents a Colloquium on Human Robot Interaction.
Across many application domains, robots are expected to work in human environments, side by side with people. The users will vary substantially in background, training, physical and cognitive abilities, and readiness to adopt technology. Robotic products are expected to not only be intuitive, easy to use, and responsive to the needs and states of their users, but they must also be designed with these differences in mind, making human-robot interaction (HRI) a key area of research.
[ IFRR ]
Vijay Kumar, Nemirovsky Family Dean and Professor at Penn Engineering, gives an introduction to ENIAC day and David Patterson, Pardee Professor of Computer Science, Emeritus at the University of California at Berkeley, speaks about the legacy of the ENIAC and its impact on computer architecture today. This video is comprised of lectures one and two of nine total lectures in the ENIAC Day series.
There are more interesting ENIAC videos at the link below, but we'll highlight this particular one, about the women of the ENIAC, also known as the First Programmers.
[ ENIAC Day ]
#437701 Robotics, AI, and Cloud Computing ...
IBM must be brimming with confidence about its new automated system for performing chemical synthesis because Big Blue just had twenty or so journalists demo the complex technology live in a virtual room.
IBM even had one of the journalists choose the molecule for the demo: a molecule in a potential Covid-19 treatment. And then we watched as the system synthesized and tested the molecule and provided its analysis in a PDF document that we all saw in the other journalist’s computer. It all worked; again, that’s confidence.
The complex system is based upon technology IBM started developing three years ago that uses artificial intelligence (AI) to predict chemical reactions. In August 2018, IBM made this service available via the Cloud and dubbed it RXN for Chemistry.
Now, the company has added a new wrinkle to its Cloud-based AI: robotics. This new and improved system is no longer named simply RXN for Chemistry, but RoboRXN for Chemistry.
All of the journalists assembled for this live demo of RoboRXN could watch as the robotic system executed various steps, such as moving the reactor to a small reagent and then moving the solvent to a small reagent. The robotic system carried out the entire set of procedures—completing the synthesis and analysis of the molecule—in eight steps.
Image: IBM Research
IBM RXN helps predict chemical reaction outcomes or design retrosynthesis in seconds.
In regular practice, a user will be able to suggest a combination of molecules they would like to test. The AI will pick up the order and task a robotic system to run the reactions necessary to produce and test the molecule. Users will be provided analyses of how well their molecules performed.
Back in March of this year, Silicon Valley-based startup Strateos demonstrated something similar that they had developed. That system also employed a robotic system to help researchers working from the Cloud create new chemical compounds. However, what distinguishes IBM’s system is its incorporation of a third element: the AI.
The backbone of IBM’s AI model is a machine learning translation method that treats chemistry like language translation. It translates the language of chemistry, converting reactants and reagents into products, using the SMILES (Simplified Molecular-Input Line-Entry System) representation to describe chemical entities.
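As a hedged sketch of this "chemistry as translation" framing (a deliberately simplified tokenizer and a toy esterification example; this is not IBM's actual preprocessing or model), a reaction prediction becomes translating a source sequence of reactant and reagent tokens into a target sequence of product tokens:

```python
import re

# Simplified SMILES tokenizer: bracket atoms, two-letter halogens,
# then single-character atoms, bonds, digits, and branches.
SMILES_TOKEN = re.compile(r"(\[[^\]]+\]|Br|Cl|[A-Za-z0-9=#\-\+\(\)\.\/\\@%])")

def tokenize(smiles):
    return SMILES_TOKEN.findall(smiles)

# "Source sentence" = reactants and reagents, "target sentence" = product.
source = tokenize("CC(=O)O") + ["."] + tokenize("OCC")  # acetic acid . ethanol
target = tokenize("CC(=O)OCC")                          # ethyl acetate (water omitted)

print(source)  # ['C', 'C', '(', '=', 'O', ')', 'O', '.', 'O', 'C', 'C']
print(target)  # ['C', 'C', '(', '=', 'O', ')', 'O', 'C', 'C']
```

A neural translation model is then trained on millions of such source-target pairs, which is where the data-quality question below comes in.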
IBM has also leveraged an automatic, data-driven strategy to ensure the quality of its data. Researchers there used millions of chemical reactions to teach the AI system chemistry, but contained within that data set were errors. So how did IBM clean this so-called noisy data to eliminate the potential for bad models?
According to Alessandra Toniato, a researcher at IBM Zurich, the team implemented what they dubbed the “forgetting experiment.”
Toniato explains that, in this approach, they asked the AI model how sure it was that the chemical examples it was given were examples of correct chemistry. Evaluated this way, the examples fell into those the model had “never learnt,” those it had “forgotten six times,” and those it had “never forgotten.” The examples that were “never forgotten” were clean, and in this way the team was able to clean the data the AI had been presented with.
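A minimal sketch of the bookkeeping such a "forgetting experiment" implies, following the general idea of counting how often an example flips from correctly to incorrectly predicted across training checkpoints; the function, example IDs, and histories below are invented, not IBM's code:

```python
def count_forgetting(correct_history):
    """correct_history maps example_id -> list of True/False, one per checkpoint."""
    forgets = {}
    for ex_id, history in correct_history.items():
        if not any(history):
            forgets[ex_id] = None  # "never learnt"
            continue
        # A forgetting event: correct at one checkpoint, wrong at the next.
        forgets[ex_id] = sum(
            1 for prev, curr in zip(history, history[1:]) if prev and not curr
        )
    return forgets

history = {
    "rxn_001": [True, True, True, True],     # never forgotten -> likely clean
    "rxn_002": [True, False, True, False],   # forgotten repeatedly -> suspect
    "rxn_003": [False, False, False, False], # never learnt -> suspect
}
counts = count_forgetting(history)
clean = [ex for ex, n in counts.items() if n == 0]
print(counts)  # {'rxn_001': 0, 'rxn_002': 2, 'rxn_003': None}
print(clean)   # ['rxn_001']
```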
While the AI has always been part of RXN for Chemistry, the robotics is the newest element. The main benefit of handing the execution of the reactions over to a robotic system is expected to be freeing chemists from the often tedious process of having to design a synthesis from scratch, says Matteo Manica, a research staff member in Cognitive Health Care and Life Sciences at IBM Research Zürich.
“In this demo, you could see how the system is synergistic between a human and AI,” said Manica. “Combine that with the fact that we can run all these processes with a robotic system 24/7 from anywhere in the world, and you can see how it will really help to speed up the whole process.”
There appear to be two business models that IBM is pursuing with its latest technology. One is to deploy the entire system on the premises of a company. The other is to offer licenses to private Cloud installations.
Photo: Michael Buholzer
Teodoro Laino of IBM Research Europe.
“From a business perspective, you can think of having a system like the one we demonstrated replicated on the premises, within companies or research groups that would like to have the technology available at their disposal,” says Teodoro Laino, a distinguished research staff member and manager at IBM Research Europe. “On the other hand, we are also pushing to bring the entire system to a service level.”
Just as IBM is brimming with confidence about its new technology, the company also has grand aspirations for it.
Laino adds: “Our aim is to provide chemical services across the world, a sort of Amazon of chemistry, where instead of looking for chemistry already in stock, you are asking for chemistry on demand.”
#437667 17 Teams to Take Part in DARPA’s ...
Among all of the other in-person events that have been totally wrecked by COVID-19 is the Cave Circuit of the DARPA Subterranean Challenge. DARPA has already hosted the in-person events for the Tunnel and Urban SubT circuits (see our previous coverage here), and the plan had always been for a trio of events representing three uniquely different underground environments in advance of the SubT Finals, which will somehow combine everything into one bonkers course.
While the SubT Urban Circuit event snuck in just under the lockdown wire in late February, DARPA made the difficult (but prudent) decision to cancel the in-person Cave Circuit event. What this means is that there will be no Systems Track Cave competition, which is a serious disappointment—we were very much looking forward to watching teams of robots navigating through an entirely unpredictable natural environment with a lot of verticality. Fortunately, DARPA is still running a Virtual Cave Circuit, and 17 teams will be taking part in this competition featuring a simulated cave environment that’s as dynamic and detailed as DARPA can make it.
From DARPA’s press releases:
DARPA’s Subterranean (SubT) Challenge will host its Cave Circuit Virtual Competition, which focuses on innovative solutions to map, navigate, and search complex, simulated cave environments, on November 17. Qualified teams have until Oct. 15 to develop and submit software-based solutions for the Cave Circuit via the SubT Virtual Portal, where their technologies will face unknown cave environments in the cloud-based SubT Simulator. Until then, teams can refine their roster of selected virtual robot models, choose sensor payloads, and continue to test autonomy approaches to maximize their score.
The Cave Circuit also introduces new simulation capabilities, including digital twins of Systems Competition robots to choose from, marsupial-style platforms combining air and ground robots, and breadcrumb nodes that can be dropped by robots to serve as communications relays. Each robot configuration has an associated cost, measured in SubT Credits – an in-simulation currency – based on performance characteristics such as speed, mobility, sensing, and battery life.
Each team’s simulated robots must navigate realistic caves, with features including natural terrain and dynamic rock falls, while they search for and locate various artifacts on the course within five meters of accuracy to score points during a 60-minute timed run. A correct report is worth one point. Each course contains 20 artifacts, which means each team has the potential for a maximum score of 20 points. Teams can leverage numerous practice worlds and even build their own worlds using the cave tiles found in the SubT Tech Repo to perfect their approach before they submit one official solution for scoring. The DARPA team will then evaluate the solution on a set of hidden competition scenarios.
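As a rough illustration of the two quantitative rules quoted above (robot rosters priced in SubT Credits, and one point per artifact reported within five meters), here is a minimal sketch; the robot names, costs, budget, and coordinates are invented placeholders, not DARPA's actual values.

```python
import math

# Hypothetical robot configurations priced in SubT Credits.
CATALOG = {
    "ground_lidar":    270,
    "ground_camera":   230,
    "quadrotor_small": 200,
}

def roster_within_budget(roster, budget_credits=1000):
    """Check that a chosen roster of robot configurations fits the credit budget."""
    return sum(CATALOG[name] for name in roster) <= budget_credits

# Hypothetical ground-truth artifact positions (x, y, z) in meters.
ARTIFACTS = [
    ("survivor", (12.0, -3.5, 0.0)),
    ("backpack", (40.2, 18.9, -6.0)),
]

def score_reports(reports, artifacts=ARTIFACTS, tolerance_m=5.0):
    """One point per artifact whose reported position falls within 5 m of truth."""
    points = 0
    remaining = list(artifacts)
    for kind, pos in reports:
        for truth_kind, truth_pos in list(remaining):
            if kind == truth_kind and math.dist(pos, truth_pos) <= tolerance_m:
                points += 1
                remaining.remove((truth_kind, truth_pos))  # score each artifact once
                break
    return points

team = ["ground_lidar", "ground_camera", "quadrotor_small", "quadrotor_small"]
print(roster_within_budget(team))  # True: 900 of 1000 credits used

reports = [("survivor", (14.0, -2.0, 0.5)), ("backpack", (60.0, 0.0, 0.0))]
print(score_reports(reports))      # 1: the backpack report is more than 5 m off
```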
Of the 17 qualified teams (you can see all of them here), there are a handful that we’ll quickly point out. Team BARCS, from Michigan Tech, was the winner of the SubT Virtual Urban Circuit, meaning that they may be the team to beat on Cave as well, although the course is likely to be unique enough that things will get interesting. Some Systems Track teams to watch include Coordinated Robotics, CTU-CRAS-NORLAB, MARBLE, NUS SEDS, and Robotika, and there are also a handful of brand new teams as well.
Now, just because there’s no dedicated Cave Circuit for the Systems Track teams, it doesn’t mean that there won’t be a Cave component (perhaps even a significant one) in the final event, which as far as we know is still scheduled to happen in fall of next year. We’ve heard that many of the Systems Track teams have been testing out their robots in caves anyway, and as the virtual event gets closer, we’ll be doing a sort of Virtual Systems Track series that highlights how different teams are doing mock Cave Circuits in caves they’ve found for themselves.
For more, we checked in with DARPA SubT program manager Dr. Timothy H. Chung.
IEEE Spectrum: Was it a difficult decision to cancel the Systems Track for Cave?
Tim Chung: The decision to go virtual only was heart wrenching, because I think DARPA’s role is to offer up opportunities that may be unimaginable for some of our competitors, like opening up a cave-type site for this competition. We crawled and climbed through a number of these sites, and I share the sense of disappointment that both our team and the competitors have that we won’t be able to share all the advances that have been made since the Urban Circuit. But what we’ve been able to do is pour a lot of our energy and the insights that we got from crawling around in those caves into what’s going to be a really great opportunity on the Virtual Competition side. And whether it’s a global pandemic, or just lack of access to physical sites like caves, virtual environments are an opportunity that we want to develop.
“The simulator offers us a chance to look at where things could be … it really allows for us to find where some of those limits are in the technology based only on our imagination.”
—Timothy H. Chung, DARPA
What kind of new features will be included in the Virtual Cave Circuit for this competition?
I’m really excited about these particular features because we’re seeing an opportunity for increased synergy between the physical and the virtual. The first I’d say is that we scanned some of the Systems Track robots using photogrammetry and combined that with some additional models that we got from the systems competitors themselves to turn their systems robots into virtual models. We often talk about the sim to real transfer and how successful we can get a simulation to transfer over to the physical world, but now we’ve taken something from the physical world and made it virtual. We’ve validated the controllers as well as the kinematics of the robots, we’ve iterated with the systems competitors themselves, and now we have these 13 robots (air and ground) in the SubT Tech Repo that now all virtual competitors can take advantage of.
We also have additional robot capability. Those comms bread crumbs are common among many of the competitors, so we’ve adopted that in the virtual world, and now you have comms relay nodes that are baked in to the SubT Simulator—you can have either six or twelve comms nodes that you can drop from a variety of our ground robot platforms. We have the marsupial deployment capability now, so now we have parent ground robots that can be mixed and matched with different child drones to become marsupial pairs.
And this is something I’ve been planning for for a while: we now have the ability to trigger things like rock falls. They still don’t quite look like Indiana Jones with the boulder coming down the corridor, but this comes really close. In addition to it just being an interesting and realistic consideration, we get to really dynamically test and stress the robots’ ability to navigate and recognize that something has changed in the environment and respond to it.
Image: DARPA
DARPA is still running a Virtual Cave Circuit, and 17 teams will be taking part in this competition featuring a simulated cave environment.
No simulation is perfect, so can you talk to us about what kinds of things aren’t being simulated right now? Where does the simulator not match up to reality?
I think that question is foundational to any conversation about simulation. I’ll give you a couple of examples:
We have the ability to represent wholesale damage to a robot, but it’s not at the actuator or component level. So there’s not a reliability model, although I think that would be really interesting to incorporate so that you could do assessments on things like mean time to failure. But if a robot falls off a ledge, it can be disabled by virtue of being too damaged to continue.
With communications, and this is one that’s near and dear not only to my heart but also to all of those that have lived through developing communication systems and robotic systems, we’ve gone through and conducted RF surveys of underground environments to get a better handle on what propagation effects are. There’s a lot of research that has gone into this, and trying to carry through some of that realism, we do have path loss models for RF communications baked into the SubT Simulator. For example, when you drop a bread crumb node, it’s using a path loss model so that it can represent the degradation of signal as you go farther into a cave. Now, we’re not modeling it at the Maxwell equations level, which I think would be awesome, but we’re not quite there yet.
We do have things like battery depletion and sensor degradation, to the extent that simulators can degrade sensor inputs, and things like that. It’s just amazing how close we can get in some places, and how far away we still are in others, and I think showing where the limits are of how far you can get with simulation is all part and parcel of why the SubT Challenge wants to have both Systems and Virtual tracks. Simulation can be an accelerant, but it’s not going to be the panacea for development and innovation, and I think all the competitors are cognizant of those limitations.
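Chung doesn't give the exact model, but the standard log-distance path-loss formula is the kind of thing he is describing; the reference loss, path-loss exponent, transmit power, and receiver sensitivity below are illustrative guesses, not the SubT Simulator's actual parameters.

```python
import math

def path_loss_db(distance_m, ref_loss_db=40.0, exponent=3.0, ref_distance_m=1.0):
    """Log-distance path loss: loss grows by 10*n*log10(d/d0) beyond a reference distance."""
    distance_m = max(distance_m, ref_distance_m)
    return ref_loss_db + 10.0 * exponent * math.log10(distance_m / ref_distance_m)

def link_ok(tx_power_dbm, distance_m, rx_sensitivity_dbm=-90.0):
    """Crude link check: received power must stay above the radio's sensitivity."""
    received_dbm = tx_power_dbm - path_loss_db(distance_m)
    return received_dbm >= rx_sensitivity_dbm

# A breadcrumb node dropped 150 m into the cave vs. one 600 m in:
print(link_ok(tx_power_dbm=20.0, distance_m=150.0))  # True  -- still in range
print(link_ok(tx_power_dbm=20.0, distance_m=600.0))  # False -- signal has degraded too far
```

A larger exponent models heavier obstruction, which is how a simple formula like this can stand in for "the signal degrades as you go farther into a cave."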
One of the most amazing things about the SubT Virtual Track is that all of the robots operate fully autonomously, without the human(s) in the loop that the System Track teams have when they compete. Why make the Virtual Track even more challenging in that way?
I think it’s one of the defining, delineating attributes of the Virtual Track. Our continued vision for the simulation side is that the simulator offers us a chance to look at where things could be, and allows for us to explore things like larger scales, or increased complexity, or types of environments that we can’t physically gain access to—it really allows for us to find where some of those limits are in the technology based only on our imagination, and this is one of the intrinsic values of simulation.
But I think finding a way to incorporate human input, or more generally human factors like teleoperation interfaces and the in-situ stress that you might not be able to recreate in the context of a virtual competition provided a good reason for us to delineate the two competitions, with the Virtual Competition really being about the role of fully autonomous or self-sufficient systems going off and doing their solution without human guidance, while also acknowledging that the real world has conditions that would not necessarily be represented by a fully simulated version. Having said that, I think cognitive engineering still has an incredibly important role to play in human robot interaction.
What do we have to look forward to during the Virtual Competition Showcase?
We have a number of additional features and capabilities that we’ve baked into the simulator that will allow for us to derive some additional insights into our competition runs. Those insights might involve things like the performance of one or more robots in a given scenario, or the impact of the environment on different types of robots, and what I can tease is that this will be an opportunity for us to showcase both the technology and also the excitement of the robots competing in the virtual environment. I’m trying not to give too many spoilers, but we’ll have an opportunity to really get into the details.
Check back as we get closer to the 17 November event for more on the DARPA SubT Challenge.