Tag Archives: robots
#439499 Why Robots Can’t Be Counted On to Find ...
On Thursday, a portion of the 12-story Champlain Towers South condominium building in Surfside, Florida (just outside of Miami) suffered a catastrophic partial collapse. As of Saturday morning, according to the Miami Herald, 159 people are still missing, and rescuers are removing debris with careful urgency while using dogs and microphones to search for survivors still trapped within a massive pile of tangled rubble.
It seems like robots should be ready to help with something like this. But they aren’t.
A Miami-Dade Fire Rescue official and a K-9 continue the search and rescue operations in the partially collapsed 12-story Champlain Towers South condo building on June 24, 2021 in Surfside, Florida.
JOE RAEDLE/GETTY IMAGES
The picture above shows what the site of the collapse in Florida looks like. It’s highly unstructured, and would be challenging for most legged robots to traverse, although a tracked robot could probably manage it. But there are already humans and dogs working there, and as long as the environment is safe to move over, it’s not necessary or practical to duplicate that functionality with a robot, especially when time is critical.
What is desperately needed right now is a way of not just locating people underneath all of that rubble, but also getting an understanding of the structure of the rubble around a person, and what exactly is between that person and the surface. For that, we don’t need robots that can get over rubble: we need robots that can get into rubble. And we don’t have them.
To understand why, we talked with Robin Murphy at Texas A&M, who directs the Humanitarian Robotics and AI Laboratory, formerly the Center for Robot-Assisted Search and Rescue (CRASAR), which is now a non-profit. Murphy has been involved in applying robotic technology to disasters worldwide, including 9/11, Fukushima, and Hurricane Harvey. The work she’s doing isn’t abstract research—CRASAR deploys teams of trained professionals with proven robotic technology to assist (when asked) with disasters around the world, and then uses those experiences as the foundation of a data-driven approach to improve disaster robotics technology and training.
According to Murphy, using robots to explore the rubble of collapsed buildings is, for the moment, not realistic at an actual disaster site. Rubble, generally, is a wildly unstructured and unpredictable environment. Most robots are simply too big to fit through it, and the environment isn’t friendly to very small robots either, since there’s frequently water from ruptured plumbing making everything muddy and slippery, among many other physical hazards. Wireless communication and localization are often impossible, so tethers are required, which solves the comms and power problems but can easily get caught or tangled on obstacles.
Even if you can build a robot small enough and durable enough to be able to physically fit through the kinds of voids that you’d find in the rubble of a collapsed building (like these snake robots were able to do in Mexico in 2017), useful mobility is about more than just following existing passages. Many disaster scenarios in robotics research assume that objectives are accessible if you just follow the right path, but real disasters aren’t like that, and large voids may require some amount of forced entry, if entry is even possible at all. An ability to forcefully burrow, which doesn’t really exist yet in this context but is an active topic of research, is critical for a robot to be able to move around in rubble where there may not be any tunnels or voids leading it where it wants to go.
And even if you can build a robot that can successfully burrow its way through rubble, there’s the question of what value it can provide once it gets where it needs to be. Robotic sensing systems are generally not designed for extreme close quarters, and visual sensors like cameras can quickly be damaged or get so covered in dirt that they become useless. Murphy explains that ideally, a rubble-exploring robot would not only locate victims but also use its sensors to assist in their rescue. “Trained rescuers need to see the internal structure of the rubble, not just the state of the victim. Imagine a surgeon who needs to find a bullet in a shooting victim, but does not have any idea of the layout of the victim’s organs; if the surgeon just cuts straight down, they may make matters worse. Same thing with collapses, it’s like the game of pick-up sticks. But if a structural specialist can see inside the pile of pick-up sticks, they can extract the victim faster and safer with less risk of a secondary collapse.”
Besides these technical challenges, the other major issue is that any system you’d hope to use in the context of rescuing people must be fully mature. It’s obviously unethical to take a research-grade robot into a situation like the Florida building collapse and spend time and resources trying to prove that it works. “Robots that get used for disasters are typically used every day for similar tasks,” explains Murphy. For example, it wouldn’t be surprising to see drones being used to survey the parts of the building in Florida that are still standing to make sure that it’s safe for people to work nearby, because drones are a mature and widely adopted technology that has already proven itself. Until a disaster robot has achieved a similar level of maturity, we’re not likely to see it take part in an active rescue.
Keeping in mind that there are no existing robots that fulfill all of the above criteria for actual use, we asked Murphy to describe her ideal disaster robot for us. “It would look like a very long, miniature ferret,” she says. “A long, flexible, snake-like body, with small legs and paws that can grab and push and shove.” The robo-ferret would be able to burrow, to wiggle and squish and squeeze its way through tight twists and turns, and would be equipped with functional eyelids to protect and clean its sensors. But since there are no robo-ferrets, what existing robot would Murphy like to see in Florida right now? “I’m not there in Miami,” Murphy tells us, “but my first thought when I saw this was I really hope that one day we’re able to commercialize Japan’s Active Scope Camera.”
The Active Scope Camera was developed at Tohoku University by Satoshi Tadokoro about 15 years ago. It operates kind of like a long, skinny, radially symmetrical bristlebot with the ability to push itself forward:
The hose is covered by inclined cilia. Motors with eccentric mass are installed in the cable and excite vibration and cause an up-and-down motion of the cable. The tips of the cilia stick on the floor when the cable moves down and propel the body. Meanwhile, the tips slip against the floor, and the body does not move back when it moves up. A repetition of this process showed that the cable can slowly move in a narrow space of rubble piles.
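To build some intuition for how this works, here is a minimal toy simulation of the stick-slip ratcheting the quote describes: the body creeps forward on the down-stroke, when the inclined cilia grip, and loses only a little ground on the up-stroke, when they slip. The vibration frequency, amplitude, cilia angle, and slip loss below are invented for illustration, not parameters of the actual Tohoku University robot.

```python
# Toy model of vibration-driven "cilia ratchet" propulsion, loosely inspired
# by the Active Scope Camera description above. All parameters are
# illustrative guesses, not measurements from the real robot.
import math

def simulate(duration_s=2.0, dt=1e-4,
             vib_freq_hz=50.0,                 # eccentric-motor vibration frequency (assumed)
             vib_amp_m=0.5e-3,                 # vertical vibration amplitude (assumed)
             cilia_angle=math.radians(20),     # incline of the cilia (assumed)
             slip_ratio=0.05):                 # fraction of motion lost on the up-stroke (assumed)
    """Integrate the net forward displacement of the cable over time."""
    x, t = 0.0, 0.0  # forward position [m], time [s]
    while t < duration_s:
        # Vertical velocity of the vibrating hose surface.
        dz = vib_amp_m * 2 * math.pi * vib_freq_hz * math.cos(2 * math.pi * vib_freq_hz * t)
        if dz < 0:
            # Down-stroke: cilia tips stick; the inclined contact converts
            # vertical motion into forward propulsion.
            x += -dz * math.tan(cilia_angle) * dt
        else:
            # Up-stroke: tips slip, so only a small fraction of ground is lost.
            x -= dz * math.tan(cilia_angle) * slip_ratio * dt
        t += dt
    return x

if __name__ == "__main__":
    print(f"Net advance after 2 s: {simulate() * 1000:.1f} mm")
```

Even with these made-up numbers, the asymmetry between the two strokes produces a slow, steady crawl, which matches the “slowly move in a narrow space” behavior described above.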
“It's quirky, but the idea of being able to get into those small spaces and go about 30 feet in and look around is a big deal,” Murphy says. But the last publication we can find about this system is nearly a decade old—if it works so well, we asked Murphy, why isn’t it more widely available to be used after a building collapses? “When a disaster happens, there’s a little bit of interest, and some funding. But then that funding goes away until the next disaster. And after a certain point, there’s just no financial incentive to create an actual product that’s reliable in hardware and software and sensors, because fortunately events like this building collapse are rare.”
Dr. Satoshi Tadokoro inserting the Active Scope Camera robot at the 2007 Berkman Plaza II (Jacksonville, FL) parking garage collapse.
Photo: Center for Robot-Assisted Search and Rescue
The fortunate rarity of disasters like these complicates the development cycle of disaster robots as well, says Murphy. That’s part of the reason why CRASAR exists in the first place—it’s a way for robotics researchers to understand what first responders need from robots, and to test those robots in realistic disaster scenarios to determine best practices. “I think this is a case where policy and government can actually help,” Murphy tells us. “They can help by saying, we do actually need this, and we’re going to support the development of useful disaster robots.”
Robots should be able to help out in the situation happening right now in Florida, and we should be spending more time and effort on research in that direction, research that could potentially be saving lives. We’re close, but as with so many aspects of practical robotics, it feels like we’ve been close for years. There are systems out there with a lot of potential; they just need the help necessary to cross the gap from research projects to practical, useful tools that can be deployed when needed. Continue reading
#439495 Legged Robots Do Surprisingly Well in ...
Here on Earth, we’re getting good enough at legged robots that we’re starting to see a transition from wheels to legs for challenging environments, especially environments with some uncertainty as to exactly what kind of terrain your robot might encounter. Beyond Earth, we’re still heavily reliant on wheeled vehicles, but even that might be starting to change. While wheels do pretty well on the Moon and on Mars, there are lots of other places to explore, like smaller moons and asteroids. And there, it’s not just terrain that’s a challenge: it’s gravity.
In low gravity environments, any robot moving over rough terrain risks entering a flight phase. Perhaps an extended flight phase, depending on how low the gravity is, which can be dangerous to robots that aren’t prepared for it. Researchers at the Robotic Systems Lab at ETH Zurich have been doing some experiments with the SpaceBok quadruped, and they’ve published a paper in IEEE T-RO showing that it’s possible to teach SpaceBok to effectively bok around in low gravity environments while using its legs to reorient itself during flight, exhibiting “cat-like jumping and landing” behaviors through vigorous leg-wiggling.
Also, while I’m fairly certain that “bok” is not a verb that means “to move dynamically in low gravity using legs,” I feel like that’s what it should mean. Sort of like pronk, except in space. Let’s make it so!
Just look at that robot bok!
This reorientation technique was developed using deep reinforcement learning, and then transferred from simulation to a real SpaceBok robot, albeit in two degrees of freedom rather than three. The real challenge with this method is just how complicated things get when you start wiggling multiple limbs in the air trying to reach a specific configuration, since the dynamics here are (as the paper puts it) “highly non-linear,” and it proved somewhat difficult to even simulate everything well enough. What you see in the simulation, incidentally, is an environment similar to Ceres, the dwarf planet that is the largest object in the asteroid belt, with a surface gravity of 0.03g.
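To see in miniature why wiggling limbs can reorient a free-floating body at all, here is a heavily simplified planar sketch: with zero total angular momentum, swinging a leg one way while it is extended and back while it is retracted leaves the base with a net rotation. This is only a toy model with invented masses and inertias; it is emphatically not the SpaceBok controller or its learned policy, which handles the full non-linear 3D dynamics.

```python
# Toy planar model of zero-angular-momentum reorientation: a rigid base plus
# a lumped leg mass whose distance from the hip can change. Inertias and gait
# parameters are invented for illustration; this is not the SpaceBok policy.
import math

I_BASE = 0.5          # base inertia about its center [kg m^2] (assumed)
FOOT_MASS = 0.5       # lumped leg/foot mass [kg] (assumed)
R_EXTENDED = 0.40     # hip-to-foot distance when the leg is extended [m] (assumed)
R_RETRACTED = 0.10    # hip-to-foot distance when the leg is retracted [m] (assumed)

def base_rate(phi_dot, r):
    """Base angular velocity from the zero-angular-momentum constraint
    (center-of-mass shift neglected): I_b*w_b + m*r^2*(w_b + phi_dot) = 0."""
    return -FOOT_MASS * r**2 * phi_dot / (I_BASE + FOOT_MASS * r**2)

def run(cycles=5, dt=1e-3, swing=math.radians(60), period=1.0):
    theta = 0.0  # base orientation [rad]
    for _ in range(cycles):
        phi_dot = swing / (period / 2)
        # Forward swing with the leg extended: large leg inertia, big base reaction.
        for _ in range(int((period / 2) / dt)):
            theta += base_rate(phi_dot, R_EXTENDED) * dt
        # Return swing with the leg retracted: small leg inertia, small giveback.
        for _ in range(int((period / 2) / dt)):
            theta += base_rate(-phi_dot, R_RETRACTED) * dt
    return math.degrees(theta)

if __name__ == "__main__":
    print(f"Net base rotation after 5 leg cycles: {run():.1f} deg")
```

Each cycle returns the leg to its starting angle, yet the base ends up rotated by several degrees, and repeated cycles accumulate. The real problem is much harder because the coupled multi-leg dynamics are highly non-linear, which is why the ETH Zurich team turned to learned controllers.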
Although SpaceBok has “space” right in the name, it’s not especially optimized for this particular kind of motion. As the video shows, having an actuated hip joint could make the difference between a reliable soft landing and, uh, not. Not landing softly is a big deal, because an uncontrolled bounce could send the robot flying huge distances, which is what happened to the Philae lander on comet 67P/Churyumov–Gerasimenko back in 2014.
For more details on SpaceBok’s space booking, we spoke with the paper’s first author, Nikita Rudin, via email.
IEEE Spectrum: Why are legs ideal for mobility in low gravity environments?
Rudin: In low gravity environments, rolling on wheels becomes more difficult because of reduced traction. However, legs can exploit the low gravity and use high jumps to move efficiently. With high jumps, you can also clear large obstacles along the way, which is harder to do in higher gravity.
Were there unique challenges to training your controller in 2D and 3D relative to training controllers for terrestrial legged robot motion?
The main challenge is the long flight phase, which is not present in terrestrial locomotion. In earth gravity, robots (and animals) use reaction forces from the ground to balance. During a jump, they don't usually need to re-orient themselves. In the case of low gravity, we have extended flight phases (multiple seconds) and only short contacts with the ground. The robot needs to be able to re-orient / balance in the air. Otherwise, a small disturbance at the moment of the jump will slowly flip the robot. In short, in low gravity, there is a new control problem that can be neglected on Earth.
Besides the addition of a hip joint, what other modifications would you like to make to the robot to enhance its capabilities? Would a tail be useful, for example? Or very heavy shoes?
A tail is a very interesting idea and heavy shoes would definitely help, however, they increase the total weight, which is costly in space. We actually add some minor weight to feet already (in the paper we analyze the effect of these weights). Another interesting addition would be a joint in the center of the robot allowing it to do cat-like backbone torsion.
How does the difficulty of this problem change as the gravity changes?
With changing gravity you change the importance of mid-air re-orientation compared to ground contacts. For locomotion, low-gravity is harder from the reasoning above. However, if the robot is dropped and needs to perform a flip before landing, higher gravity is harder because you have less time for the whole process.
What are you working on next?
We have a few ideas for the next projects, including a legged robot specifically designed and certified for space, and exploring cat-like re-orientation on earth with smaller/faster robots. We would also like to simulate a zero-g environment on earth by dropping the robot from a few dozen meters into a safety net, and of course, a parabolic flight is still very much one of our objectives. However, we will probably need a smaller robot there as well.
Cat-Like Jumping and Landing of Legged Robots in Low Gravity Using Deep Reinforcement Learning, by Nikita Rudin, Hendrik Kolvenbach, Vassilios Tsounis, and Marco Hutter from ETH Zurich, is published in IEEE Transactions on Robotics. Continue reading
#439487 SoftBank Stops Making Pepper Robots, ...
Reuters is reporting that SoftBank stopped manufacturing Pepper robots at some point last year due to low demand, and by September, will cut about half of the 330 positions at SoftBank Robotics Europe in France. Most of the positions will be in Q&A, sales, and service, which hopefully leaves SoftBank Robotics’ research and development group mostly intact. But the cuts reflect poor long-term sales, with SoftBank Robotics Europe having lost over 100 million Euros in the past three years, according to French business news site JDN. Speaking with Nikkei, SoftBank said that this doesn’t actually mean a permanent end for Pepper, and that they “plan to resume production if demand recovers.” But things aren’t looking good.
Reuters says that “only” 27,000 Peppers were produced, but that sure seems like a lot of Peppers to me. Perhaps too many—a huge number of Peppers were used by SoftBank itself in its retail stores, and a hundred at once were turned into a cheerleading squad for the SoftBank Hawks baseball team because of the pandemic. There’s nothing wrong with either of those things, but it’s hard to use them to gauge how successful Pepper has actually been.
I won’t try to argue that Pepper would necessarily have been commercially viable in the long(er) term, since it’s a very capable robot in some ways, but not very capable in others. For example, Pepper has arms and hands with individually articulated fingers, but the robot can’t actually do much in the way of useful grasping or manipulation. SoftBank positioned Pepper as a robot that can attract attention and provide useful, socially interactive information in public places. Besides SoftBank’s own stores, Peppers have been used in banks, malls, airports, and other places of that nature. A lot of what Pepper seems to have uniquely offered was novelty, though, which ultimately may not be sustainable for a commercial robot, because at some point, the novelty just wears off and you’re basically left with a very cool looking (but expensive) kiosk.
Having said all that, the sheer number of Peppers that SoftBank put out into the world could be one of the most significant impacts that the robot has had. The fact that Pepper was able to operate successfully for long enough, and in enough places, that it even had a chance to stop being novel and instead become normal is an enormous achievement for Pepper specifically as well as for social robots more broadly. Angelica Lim, who worked with Pepper at SoftBank Robotics Europe for three years before founding the Rosie Lab at SFU, shared some perspective with us on this:
There has never been a robot with the ambition of Pepper. Its mission was huge—be adaptable and robust to different purposes and locations: loud sushi shops, quiet banks, and hospitals that change from hour to hour. Compare that with Alexa which has a pretty stable and quiet environment—the home. On top of that, the robot needed to respond to different ages, cultures, countries and languages. The only thing I can think of that comes close is the smartphone, and the expectation for it is much lower compared to the humanoid Pepper. Ten years ago, it was unthinkable that we could leave a robot on “in the wild” for days, weeks, months and years, and yet Pepper did it thanks to the team at SoftBank Robotics.
Peppers are still being used in education today, from elementary schools and high schools to research labs in North America, Asia and Europe. The next generation will grow up programming these, like they did with the Apple personal computer. I’m confident it’s just the next step to technology that adapts to us as humans rather than the other way around.
Pepper has been an amazing platform for HRI research as well as for STEM education more broadly, and our hope is that Pepper will continue to be impactful in those ways, whether or not any more of these robots are ever made. We also hope that SoftBank does whatever is necessary to make sure that Peppers remain useful and accessible well into the future in both software and hardware. But perhaps we’re being too pessimistic here—this is certainly not good news, but despite how it looks we don’t know for sure that it’s catastrophic for Pepper. All we can do is wait and see what happens at SoftBank Robotics Europe over the next six months, and hope that Pepper continues to get the support that it deserves. Continue reading
#439455 AI and Robots Are a Minefield of ...
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.
Most people associate artificial intelligence with robots as an inseparable pair. In fact, the term “artificial intelligence” is rarely used in research labs. Terminology specific to certain kinds of AI and other smart technologies is more relevant. Whenever I’m asked the question “Is this robot operated by AI?”, I hesitate to answer—wondering whether it would be appropriate to call the algorithms we develop “artificial intelligence.”
First used by scientists such as John McCarthy and Marvin Minsky in the 1950s, and frequently appearing in sci-fi novels or films for decades, AI is now being used in smartphone virtual assistants and autonomous vehicle algorithms. Both historically and today, AI can mean many different things—which can cause confusion.
However, people often express the preconception that AI is an artificially realized version of human intelligence. And that preconception might come from our cognitive bias as human beings.
We judge robots’ or AI’s tasks in comparison to humans
If you happened to follow this news in 2016, how did you feel when AlphaGo, an AI developed by DeepMind, defeated 9-dan Go player Lee Sedol? You may have been surprised or terrified, thinking that AI had surpassed the abilities of geniuses. Still, winning a game with an exponential number of possible moves like Go only means that AI has exceeded a very limited part of human intelligence. The same goes for IBM’s AI, Watson, which competed on ‘Jeopardy!’, the television quiz show.
I believe many were impressed to see the Mini Cheetah, developed in my MIT Biomimetic Robotics Laboratory, perform a backflip. While jumping backwards and landing on the ground is very dynamic, eye-catching, and, of course, difficult for humans, the algorithm for that particular motion is incredibly simple compared to one that enables stable walking, which requires much more complex feedback loops. Achieving robot tasks that are seemingly easy for us is often extremely difficult and complicated. This gap occurs because we tend to judge a task’s difficulty by human standards.
We tend to generalize AI functionality after watching a single robot demonstration. When we see someone on the street doing backflips, we tend to assume this person would be good at walking and running, and also be flexible and athletic enough to be good at other sports. Very likely, such judgement about this person would not be wrong.
However, can we also apply this judgement to robots? It’s easy for us to generalize and judge AI performance based on an observation of a specific robot motion or function, just as we do with humans. By watching a video of a robot hand solving a Rubik’s Cube, developed at OpenAI, an AI research lab, we think that the AI can perform all other, simpler tasks because it can perform such a complex one. We overlook the fact that this AI’s neural network was trained for only a limited type of task: solving the Rubik’s Cube held in that configuration. If the situation changes—for example, holding the cube upside down while manipulating it—the algorithm does not work as well as might be expected.
Unlike AI, humans can combine individual skills and apply them to multiple complicated tasks. Once we learn how to solve a Rubik’s Cube, we can quickly work on the cube even when we’re told to hold it upside down, though it may feel strange at first. Human intelligence can naturally combine the objectives of not dropping the cube and solving the cube. Most robot algorithms will require new data or reprogramming to do so. A person who can spread jam on bread with a spoon can do the same using a fork. It is obvious. We understand the concept of “spreading” jam, and can quickly get used to using a completely different tool. Also, while autonomous vehicles require actual data for each situation, human drivers can make rational decisions based on pre-learned concepts to respond to countless situations. These examples show one characteristic of human intelligence in stark contrast to robot algorithms, which cannot perform tasks with insufficient data.
Mammals have continuously been evolving for more than 65 million years. The entire time humans spent on learning math, using languages, and playing games would sum up to a mere 10,000 years. In other words, humanity spent a tremendous amount of time developing abilities directly related to survival, such as walking, running, and using our hands. Therefore, it may not be surprising that computers can compute much faster than humans, as they were developed for this purpose in the first place. Likewise, it is natural that computers cannot easily obtain the ability to freely use hands and feet for various purposes as humans do. These skills have been attained through evolution for over 10 million years.
This is why it is unreasonable to compare robot or AI performance in demonstrations to the abilities of an animal or a human. While watching videos of MIT’s Cheetah robot running across fields and leaping over obstacles, it would be rash to conclude that robot technologies for walking and running like animals are complete. Numerous robot demonstrations still rely on algorithms set up for specialized tasks in bounded situations. There is a tendency, in fact, for researchers to select demonstrations that seem difficult, because they produce a strong impression. However, this level of difficulty is judged from the human perspective, which may be irrelevant to the actual algorithm performance.
Humans are easily influenced by instantaneous, reflexive perception before any logical thought. And this cognitive bias is strengthened when the subject is very complicated and difficult to analyze logically—for example, a robot that uses machine learning.
So where does our human cognitive bias come from? I believe it comes from our psychological tendency to subconsciously anthropomorphize the subjects we see. Humans have evolved as social animals, probably developing the ability to understand and empathize with each other in the process. Our tendency to anthropomorphize subjects would have come from the same evolutionary process. We are so used to anthropomorphized expressions that people say they are “teaching robots” when they really mean programming algorithms. As the 18th-century philosopher David Hume said, “There is a universal tendency among mankind to conceive all beings like themselves. We find human faces in the moon, armies in the clouds.”
Of course, we anthropomorphize not only subjects’ appearance but also their state of mind. For example, when Boston Dynamics released a video of its engineers kicking a robot, many viewers reacted by saying “this is cruel,” and that they “pity the robot.” A comment saying, “one day, robots will take revenge on that engineer” received likes. In reality, the engineer was simply testing the robot’s balancing algorithm. But before any thought process can comprehend the situation, the aggressive motion of kicking, combined with the struggling of the animal-like robot, is instantaneously transmitted to our brains, leaving a strong impression. In this way, instantaneous anthropomorphism has a deep effect on our cognitive processes.
Humans process information qualitatively, and computers, quantitatively
Look around: our daily lives are filled with algorithms, embodied in the machines and services that run on them. All algorithms operate on numbers. We use terms such as “objective function,” which is a numerical function that represents a certain objective. Many algorithms have the sole purpose of reaching the maximum or minimum value of this function, and an algorithm’s characteristics differ based on how it achieves this.
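As a deliberately trivial illustration of what “reaching the maximum or minimum value of this function” looks like in code, here is a toy objective and a loop that walks downhill toward its minimum. The function, step size, and iteration count are arbitrary choices for the example.

```python
# Toy example: a numerical objective function and simple gradient descent.
# The objective and its settings are arbitrary illustrations.

def objective(x):
    # A made-up "badness" score, minimized at x = 3.
    return (x - 3.0) ** 2

def minimize(x0=0.0, step=0.1, iters=100):
    x = x0
    for _ in range(iters):
        grad = 2.0 * (x - 3.0)   # derivative of the objective at x
        x -= step * grad         # move a small amount downhill
    return x

if __name__ == "__main__":
    x_best = minimize()
    print(f"x = {x_best:.4f}, objective = {objective(x_best):.6f}")
```

Everything the algorithm “wants” is contained in that one number; how it drives that number down (or up) is what distinguishes one algorithm from another.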
The goal of a task such as winning a game of Go or chess is relatively easy to quantify. The easier quantification is, the better the algorithms work. By contrast, humans often make decisions without quantitative thinking.
As an example, consider cleaning a room. The way we clean differs subtly from day to day, depending on the situation, on whose room it is, and on how one feels. Were we trying to maximize a certain function in the process? We did no such thing. We clean with the abstract objective of making the room “clean enough.” Besides, the standard for how much is “enough” changes easily. This standard may differ between people, causing conflicts, particularly among family members or roommates.
There are many other examples. When you wash your face every day, which quantitative indicators do you intend to maximize with your hand movements? How hard do you rub? When choosing what to wear? When choosing what to have for dinner? When choosing which dish to wash first? The list goes on. We are used to making decisions that are good enough by putting together the information we already have. However, we often do not check whether every single decision is optimized. Most of the time, it is impossible to know, because we would have to satisfy numerous contradicting indicators with limited data. When selecting groceries with a friend at the store, we cannot each quantify our standards and make a decision based on those numerical values. Usually, when one picks something out, the other will either say “OK!” or suggest another option. This is very different from saying this vegetable “is the optimal choice!” It is more like saying “this is good enough.”
This operational difference between people and algorithms may cause trouble when designing the work or services we expect robots to perform. Algorithms perform tasks based on quantitative values, but human satisfaction, the real outcome of the task, is difficult to quantify completely. It is not easy to quantify the goal of a task that must adapt to individual preferences or changing circumstances, like the aforementioned room cleaning or dishwashing. That is, to coexist with humans, robots may have to evolve not to optimize particular functions, but to achieve “good enough.” Of course, the latter is much harder to achieve robustly in real-life situations, where so many conflicting objectives and qualitative constraints must be managed.
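The contrast can be made concrete with a small sketch: an optimizer always picks the option with the extreme score, while a satisficing agent accepts the first option that clears a “good enough” threshold. The options, scores, and threshold below are invented for illustration.

```python
# Toy contrast between optimizing and satisficing ("good enough") decisions.
# The cleaning options, scores, and threshold are invented.

options = {"sweep quickly": 0.6, "sweep carefully": 0.8, "deep clean": 0.95}

# Optimizing: always pick the single best-scoring option, whatever the cost.
best = max(options, key=options.get)

# Satisficing: accept the first option that is "clean enough."
GOOD_ENOUGH = 0.75
acceptable = next((name for name, score in options.items() if score >= GOOD_ENOUGH), None)

print(f"optimizer picks:  {best}")
print(f"satisficer picks: {acceptable}")
```

Note that the satisficing threshold itself is exactly the kind of standard that, as described above, shifts from person to person and day to day, which is what makes it so hard to pin down in an objective function.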
Actually, we do not know what we are doing
Try to recall the most recent meal you had before reading this. Can you remember what you had? Then, can you also remember the process of chewing and swallowing the food? Do you know what exactly your tongue was doing at that very moment? Our tongue does so many things for us. It helps us put food in our mouths, distribute the food between our teeth, swallow the finely chewed pieces, or even send large pieces back toward our teeth if needed. We do all of this naturally, even while talking to a friend, with the same tongue also in charge of pronunciation. It may seem like we are moving our tongues however we want, but in fact, the tongue spends more of its time moving automatically, taking only high-level commands from our consciousness. This is why we cannot remember the detailed movements of our tongues during a meal. We know little about those movements in the first place.
We may assume that our hands are our most consciously controllable organ, but many hand movements also happen automatically and unconsciously, or subconsciously at most. For those who disagree, try putting something like your keys in your pocket and then taking them back out. In that short moment, countless micromanipulations are instantly and seamlessly coordinated to complete the task. We often cannot perceive each action separately. We do not even know what units we should divide them into, so we collectively express them with abstract words such as organize, wash, apply, rub, and wipe. These verbs are qualitatively defined. They often refer to aggregates of fine movements and manipulations, whose composition changes depending on the situation. Of course, it is easy even for children to understand and use these concepts, but from the perspective of algorithm development, the words are endlessly vague and abstract.
Let’s try to teach someone how to make a sandwich by spreading peanut butter on bread. We can show how this is done and explain it with a few simple words. Now assume a slightly different situation. Say there is an alien who uses the same language as us, but knows nothing about human civilization or culture. (I know this assumption is already contradictory… but please bear with me.) Can we explain over the phone how to make a peanut butter sandwich? We will probably get stuck trying to explain how to scoop peanut butter out of the jar. Even grasping the slice of bread is not so simple. We have to grasp the bread firmly enough to spread the peanut butter, but not so firmly that we ruin the shape of the soft bread. At the same time, we should not drop the bread either. It is easy for us to figure out how to grasp the bread, but it is not easy to express this through speech or text, let alone in a function. Even when it is a human who is learning a task, can we teach a carpenter’s work over the phone? Can we precisely correct a tennis or golf posture over the phone? It is difficult to discern to what extent the details we see are done consciously or unconsciously.
My point is that not everything we do with our hands and feet can be directly expressed in language. Things that happen between successive actions often occur automatically and unconsciously, and so we describe our actions in a much simpler way than they actually take place. This is why our actions seem very simple, and why we forget how incredible they really are. The limitations of expression often lead to underestimation of the actual complexity. We should recognize that the difficulty of describing something in words can hinder research progress in fields where the vocabulary is not well developed.
Until recently, AI has been practically applied in information services related to data processing. Some prominent examples today include voice recognition and facial recognition. Now, we are entering a new era of AI that can effectively perform physical services in our midst. That is, the time is coming in which automation of complex physical tasks becomes imperative.
In particular, our increasingly aging society poses a huge challenge. A shortage of labor is no longer a vague social problem. It is urgent that we discuss how to develop technologies that augment human capabilities, allowing us to focus on more valuable work and pursue lives that are uniquely human. This is why not only engineers but also members of society from various fields should improve their understanding of AI and of unconscious cognitive biases. It is easy to misunderstand artificial intelligence, as noted above, because it is substantively unlike human intelligence.
Ways of thinking that are perfectly natural among humans can become cognitive biases when applied to AI and robots. Without a clear understanding of these biases, we cannot set the appropriate directions for technology research, application, and policy. For the scientific community to develop productively, we need to pay keen attention to our own cognition and engage in deliberate debate as we promote the appropriate development and application of technology.
Sangbae Kim leads the Biomimetic Robotics Laboratory at MIT. The preceding is an adaptation of a blog Kim posted in June for Naver Labs. Continue reading
#439439 Swarms of tiny dumb robots found to ...
A team of researchers affiliated with several institutions in Europe has found that swarms of tiny dumb vibrating robots are capable of carrying out sophisticated actions such as transporting objects or squeezing through tunnels. In their paper published in the journal Science Robotics, the group describes experiments they conducted with tiny dumb robots they called “bugs.” Continue reading