Category Archives: Human Robots

Everything about Humanoid Robots and Androids

#439431 What Will Education be Like in the ...

Image by Mediamodifier from Pixabay The field of education is changing under the influence of technology and artificial intelligence. Old educational methods are already being transformed today and may lose their relevance in the future. In a few years, your teacher may be a computer program. And in the distant future, education will be like …

The post What Will Education be Like in the Future? appeared first on TFOT.

Posted in Human Robots

#439429 12 Robotics Teams Will Hunt For ...

Last week, DARPA announced the twelve teams who will be competing in the Virtual Track of the DARPA Subterranean Challenge Finals, scheduled to take place in September in Louisville, KY. The robots and the environment may be virtual, but the prize money is very real, with $1.5 million of DARPA cash on the table for the teams who are able to find the most subterranean artifacts in the shortest amount of time.

You can check out the list of Virtual Track competitors here, but we’ll be paying particularly close attention to Team Coordinated Robotics and Team BARCS, who have been trading first and second place back and forth across the three previous competitions. But there are many other strong contenders, and since nearly a year will have passed between the Final and the previous Cave Circuit, there’s been plenty of time for all teams to have developed creative new ideas and improvements.

As a quick reminder, the SubT Final will include elements of tunnels, caves, and the urban underground. As before, teams will be using simulated models of real robots to explore the environment looking for artifacts (like injured survivors, cell phones, backpacks, and even hazardous gas), and they’ll have to manage things like austere navigation, degraded sensing and communication, dynamic obstacles, and rough terrain.

While we’re not sure exactly what the Virtual Track is going to look like, one of the exciting aspects of a virtual competition like this is how DARPA is not constrained by things like available physical space or funding. They could make a virtual course that incorporates the inside of the Egyptian pyramids, the Cheyenne Mountain military complex, and my basement, if they were so inclined. We are expecting a combination of the overall themes of the three previous virtual courses (tunnel, cave, and urban), but connected up somehow, and likely with a few surprises thrown in for good measure.

To some extent, the Virtual Track represents the best case scenario for SubT robots, in the sense that fewer things will just spontaneously go wrong. This is something of a compromise, since things very often spontaneously go wrong when you’re dealing with real robots in the real world. This is not to diminish the challenges of the Virtual Track in the least—even the virtual robots aren’t invincible, and their software will need to keep them from running into simulated walls or falling down simulated stairs. But as far as I know, the virtual robots will not experience damage during transport to the event, electronics shorting, motors burning out, emergency stop buttons being accidentally pressed, and that sort of thing. If anything, this makes the Virtual Track more exciting to watch, because you’re seeing teams of virtual robots on their absolute best behavior challenging each other primarily on the cleverness and efficiency of their programmers.

The other reason that the Virtual Track is more exciting is that unlike the Systems Track, there are no humans in the loop at all. Teams submit their software to DARPA, and then sit back and relax (or not) and watch their robots compete all by themselves in real time. This is a hugely ambitious way to do things, because a single human even a little bit in the loop can provide the kind of critical contextual world knowledge and mission perspective that robots often lack. A human in there somewhere is fine in the near to medium term, but full autonomy is the dream.

As for the Systems Track (which involves real robots on the physical course in Louisville), we’re not yet sure who all of the final competitors will be. The pandemic has made travel complicated, and some international teams aren’t yet sure whether they’ll be able to make it. Either way, we’ll be there at the end of September, when we’ll be able to watch both the Systems and Virtual Track teams compete for the SubT Final championship.


#439424 AI and Robots Are a Minefield of ...

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

Most people associate artificial intelligence with robots as an inseparable pair. In fact, the term “artificial intelligence” is rarely used in research labs. Terminology specific to certain kinds of AI and other smart technologies is more relevant. Whenever I’m asked the question “Is this robot operated by AI?”, I hesitate to answer—wondering whether it would be appropriate to call the algorithms we develop “artificial intelligence.”

First used by scientists such as John McCarthy and Marvin Minsky in the 1950s, and frequently appearing in sci-fi novels or films for decades, AI is now being used in smartphone virtual assistants and autonomous vehicle algorithms. Both historically and today, AI can mean many different things—which can cause confusion.

However, people often express the preconception that AI is an artificially realized version of human intelligence. And that preconception might come from our cognitive bias as human beings.

We judge robots’ or AI’s tasks in comparison to humans
If you happened to follow this news in 2016, how did you feel when AlphaGo, an AI developed by DeepMind, defeated 9-dan Go player Lee Sedol? You may have been surprised or terrified, thinking that AI had surpassed the ability of geniuses. Still, winning a game with an exponential number of possible moves like Go only means that AI has exceeded a very limited part of human intelligence. The same goes for IBM’s AI, Watson, which competed on ‘Jeopardy!’, the television quiz show.

I believe many were impressed to see the Mini Cheetah, developed in my MIT Biomimetic Robotics Laboratory, perform a backflip. While jumping backwards and landing on the ground is very dynamic, eye-catching and, of course, difficult for humans, the algorithm for that particular motion is incredibly simple compared to one that enables stable walking, which requires much more complex feedback loops. Achieving robot tasks that are seemingly easy for us is often extremely difficult and complicated. This gap occurs because we tend to judge a task’s difficulty by human standards.

Achieving robot tasks that are seemingly easy for us is often extremely difficult and complicated.

We tend to generalize AI functionality after watching a single robot demonstration. When we see someone on the street doing backflips, we tend to assume this person would be good at walking and running, and also be flexible and athletic enough to be good at other sports. Very likely, such judgement about this person would not be wrong.

However, can we also apply this judgement to robots? It’s easy for us to generalize and determine AI performance based on an observation of a specific robot motion or function, just as we do with humans. Watching a video of a robot hand solving a Rubik’s Cube at OpenAI, an AI research lab, we assume that the AI can perform all other simpler tasks because it can perform such a complex one. We overlook the fact that this AI’s neural network was trained for only a limited type of task: solving the Rubik’s Cube in that configuration. If the situation changes—for example, holding the cube upside down while manipulating it—the algorithm does not work as well as might be expected.

Unlike AI, humans can combine individual skills and apply them to multiple complicated tasks. Once we learn how to solve a Rubik’s Cube, we can quickly work on the cube even when we’re told to hold it upside down, though it may feel strange at first. Human intelligence can naturally combine the objectives of not dropping the cube and solving the cube. Most robot algorithms will require new data or reprogramming to do so. A person who can spread jam on bread with a spoon can do the same using a fork. It is obvious. We understand the concept of “spreading” jam, and can quickly get used to using a completely different tool. Also, while autonomous vehicles require actual data for each situation, human drivers can make rational decisions based on pre-learned concepts to respond to countless situations. These examples show one characteristic of human intelligence in stark contrast to robot algorithms, which cannot perform tasks with insufficient data.

Illustration: Hyung Taek Yoon

Mammals have been evolving continuously for more than 65 million years. The entire time humans have spent learning math, using languages, and playing games sums up to a mere 10,000 years. In other words, humanity spent a tremendous amount of time developing abilities directly related to survival, such as walking, running, and using our hands. Therefore, it may not be surprising that computers can compute much faster than humans, as they were developed for this purpose in the first place. Likewise, it is natural that computers cannot easily obtain the ability to freely use hands and feet for various purposes as humans do. These skills have been attained through tens of millions of years of evolution.

This is why it is unreasonable to compare robot or AI performance in demonstrations to an animal’s or human’s abilities. Watching videos of MIT’s Cheetah robot running across fields and leaping over obstacles, it would be rash to believe that robot technologies for walking and running like animals are complete. Numerous robot demonstrations still rely on algorithms set for specialized tasks in bounded situations. There is, in fact, a tendency for researchers to select demonstrations that seem difficult, because they produce a strong impression. However, this level of difficulty is judged from the human perspective, which may be irrelevant to the actual algorithm performance.

Humans are easily influenced by instantaneous and reflective perception before any logical thoughts. And this cognitive bias is strengthened when the subject is very complicated and difficult to analyze logically—for example, a robot that uses machine learning.

Robotic demonstrations still rely on algorithms set for specialized tasks in bounded situations.

So where does our human cognitive bias come from? I believe it comes from our psychological tendency to subconsciously anthropomorphize the subjects we see. Humans have evolved as social animals, probably developing the ability to understand and empathize with each other in the process. Our tendency to anthropomorphize subjects would have come from the same evolutionary process. People tend to use the expression “teaching robots” when they refer to programming algorithms; we are simply used to using anthropomorphized expressions. As the 18th century philosopher David Hume said, “There is a universal tendency among mankind to conceive all beings like themselves. We find human faces in the moon, armies in the clouds.”

Of course, we anthropomorphize not only subjects’ appearance but also their state of mind. For example, when Boston Dynamics released a video of its engineers kicking a robot, many viewers reacted by saying “this is cruel,” and that they “pity the robot.” A comment saying, “one day, robots will take revenge on that engineer,” received likes. In reality, the engineer was simply testing the robot’s balancing algorithm. However, before any thought process can comprehend the situation, the aggressive motion of kicking combined with the struggling of the animal-like robot is instantaneously transmitted to our brains, leaving a strong impression. Such instantaneous anthropomorphism has a deep effect on our cognitive process.

Humans process information qualitatively, and computers, quantitatively
Looking around, our daily lives are filled with algorithms, as can be seen in the machines and services that run on them. All algorithms operate on numbers. We use terms such as “objective function,” a numerical function that represents a certain objective. Many algorithms have the sole purpose of reaching the maximum or minimum value of this function, and an algorithm’s characteristics differ based on how it achieves this.

The goals of tasks such as winning a game of Go or chess are relatively easy to quantify. The easier the quantification, the better the algorithms work. On the contrary, humans often make decisions without quantitative thinking.
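To make the notion of an objective function concrete, here is a minimal sketch (the quadratic, step size, and iteration count are all invented for illustration): a few lines of gradient descent that repeatedly step against the gradient until the function approaches its minimum.

```python
# Toy illustration of an "objective function" and an algorithm whose sole
# purpose is to reach its minimum value. All numbers here are invented.

def objective(x):
    # A simple quadratic whose minimum sits at x = 3.
    return (x - 3.0) ** 2

def gradient(x):
    # Analytic derivative of the objective: d/dx (x - 3)^2 = 2(x - 3).
    return 2.0 * (x - 3.0)

def minimize(x0, step=0.1, iterations=100):
    # Basic gradient descent: repeatedly step against the gradient.
    x = x0
    for _ in range(iterations):
        x -= step * gradient(x)
    return x

print(minimize(0.0))  # converges toward 3.0, the minimizer of the objective
```

How the algorithm takes those steps—the step size, the number of iterations, the form of the update—is exactly where algorithms differ in character.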

As an example, consider cleaning a room. The way we clean differs subtly from day to day, depending on the situation, depending on whose room it is, and depending on how one feels. Were we trying to maximize a certain function in this process? We did no such thing. The act of cleaning has been done with an abstract objective of “clean enough.” Besides, the standard for how much is “enough” changes easily. This standard may be different among people, causing conflicts particularly among family members or roommates.

There are many other examples. When you wash your face every day, which quantitative indicators do you intend to maximize with your hand movements? How hard do you rub? When choosing what to wear? When choosing what to have for dinner? When choosing which dish to wash first? The list goes on. We are used to making decisions that are good enough by putting together the information we already have. However, we rarely check whether every single decision is optimized. Most of the time, it is impossible to know, because we would have to satisfy numerous contradicting indicators with limited data. When selecting groceries with a friend at the store, we do not each quantify our standards and decide based on numerical values. Usually, when one picks something out, the other will either say “OK!” or suggest another option. This is very different from saying this vegetable “is the optimal choice!” It is more like saying “this is good enough.”

This operational difference between people and algorithms may cause trouble when designing work or services we expect robots to perform. While algorithms perform tasks based on quantitative values, human satisfaction, the outcome of the task, is difficult to quantify completely. It is not easy to quantify the goal of a task that must adapt to individual preferences or changing circumstances, like the aforementioned room cleaning or dishwashing tasks. That is, to coexist with humans, robots may have to evolve not to optimize particular functions, but to achieve “good enough.” Of course, the latter is much more difficult to achieve robustly in real-life situations, where you need to manage many conflicting objectives and qualitative constraints.
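As a hedged sketch of this distinction (the grocery-choice scores and the threshold are invented for illustration), compare a strict optimizer, which must rank every option numerically, with a “good enough” rule that simply accepts the first option clearing a threshold:

```python
# Hypothetical grocery-choice scores; the values and threshold are invented.
options = {"tomatoes_a": 0.85, "tomatoes_b": 0.91, "tomatoes_c": 0.72}

def optimize(scores):
    # Strict optimization: always return the single highest-scoring option.
    return max(scores, key=scores.get)

def satisfice(scores, threshold=0.80):
    # "Good enough": accept the first option that clears the threshold.
    for name, score in scores.items():
        if score >= threshold:
            return name
    return None  # nothing was good enough

print(optimize(options))   # 'tomatoes_b' (the numerical maximum)
print(satisfice(options))  # 'tomatoes_a' (good enough, encountered first)
```

The satisficing rule never needs a total ranking of the options, which is closer to how the shoppers above behave.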

Actually, we do not know what we are doing
Try to recall the most recent meal you had before reading this. Can you remember what you had? Then, can you also remember the process of chewing and swallowing the food? Do you know what exactly your tongue was doing at that very moment? Our tongues do so many things for us. They help us put food in our mouths, distribute the food between our teeth, swallow the finely chewed pieces, or even send large pieces back toward our teeth if needed. We can do all of this naturally, even while talking to a friend, with the same tongue also in charge of pronunciation. How much do our conscious decisions contribute to tongue movements that accomplish so many complex tasks simultaneously? It may seem like we move our tongues as we want, but in fact, there are more moments when the tongue moves automatically, taking high-level commands from our consciousness. This is why we cannot remember the detailed movements of our tongues during a meal. We know little about their movement in the first place.

We may assume that our hands are the most consciously controllable organ, but many hand movements also happen automatically and unconsciously, or subconsciously at most. For those who disagree, try putting something like keys in your pocket and taking them back out. In that short moment, countless micromanipulations are instantly and seamlessly coordinated to complete the task. We often cannot perceive each action separately. We do not even know what units we should divide them into, so we collectively express them with abstract words such as organize, wash, apply, rub, and wipe. These verbs are qualitatively defined. They often refer to aggregates of fine movements and manipulations whose composition changes depending on the situation. Of course, it is easy even for children to understand this concept, but from the perspective of algorithm development, these words are endlessly vague and abstract.

Illustration: Hyung Taek Yoon

Let’s try to teach someone how to make a sandwich by spreading peanut butter on bread. We can show how this is done and explain with a few simple words. Now assume a slightly different situation. Say there is an alien who uses the same language as us but knows nothing about human civilization or culture. (I know this assumption is already contradictory…, but please bear with me.) Can we explain over the phone how to make a peanut butter sandwich? We will probably get stuck trying to explain how to scoop peanut butter out of the jar. Even grasping the slice of bread is not so simple. We have to grasp the bread firmly enough to spread the peanut butter, but not so firmly as to ruin the shape of the soft bread. At the same time, we should not drop the bread either. It is easy for us to think of how to grasp the bread, but it is not easy to express this through speech or text, let alone in a function. Even when it is a human who is learning, can we learn a carpenter’s work over the phone? Can we precisely correct a tennis or golf posture over the phone? It is difficult to discern to what extent the details we see are done consciously or unconsciously.

My point is that not everything we do with our hands and feet can be expressed directly in language. Things that happen between successive actions often occur automatically and unconsciously, and thus we explain our actions far more simply than they actually take place. This is why our actions seem very simple, and why we forget how incredible they really are. The limitations of expression often lead to underestimation of the actual complexity. We should recognize that the difficulty of describing something in language can hinder research progress in fields where the words are not well developed.

Until recently, AI has been practically applied in information services related to data processing. Some prominent examples today include voice recognition and facial recognition. Now, we are entering a new era of AI that can effectively perform physical services in our midst. That is, the time is coming in which automation of complex physical tasks becomes imperative.

Particularly, our increasingly aging society poses a huge challenge. Shortage of labor is no longer a vague social problem. It is urgent that we discuss how to develop technologies that augment humans’ capability, allowing us to focus on more valuable work and pursue lives uniquely human. This is why not only engineers but also members of society from various fields should improve their understanding of AI and unconscious cognitive biases. It is easy to misunderstand artificial intelligence, as noted above, because it is substantively unlike human intelligence.

Things that are very natural among humans may become cognitive biases when applied to AI and robots. Without a clear understanding of our cognitive biases, we cannot set appropriate directions for technology research, application, and policy. For productive development as a scientific community, we need keen attention to our own cognition and deliberate debate in the process of promoting appropriate development and applications of technology.

Sangbae Kim leads the Biomimetic Robotics Laboratory at MIT. The preceding is an adaptation of a blog Kim posted in June for Naver Labs.


#439420 This Week’s Awesome Tech Stories From ...

COMPUTING
Tapping Into the Brain to Help a Paralyzed Man Speak
Pam Belluck | The New York Times
“He has not been able to speak since 2003, when he was paralyzed at age 20 by a severe stroke after a terrible car crash. Now, in a scientific milestone, researchers have tapped into the speech areas of his brain—allowing him to produce comprehensible words and sentences simply by trying to say them. When the man, known by his nickname, Pancho, tries to speak, electrodes implanted in his brain transmit signals to a computer that displays his intended words on the screen.”

TRANSPORTATION
This Tiny, $6,800 Car Runs on Solar Power
Adele Peters | Fast Company
“The Squad, a new urban car from an Amsterdam-based startup, is barely bigger than a bicycle: Parked sideways, up to four of the vehicles can fit in a standard parking spot. The electric two-seater’s tiny size is one reason that it doesn’t use much energy—and in a typical day of city driving, it can run entirely on power from a solar panel on its own roof. A swappable battery provides extra power when needed.”

3D PRINTING
World’s First 3D-Printed Stainless Steel Bridge Spans a Dutch Canal
Adam Williams | New Atlas
“MX3D has finally realized its ambitious plan to install what’s described as the world’s first 3D-printed steel bridge over a canal in Amsterdam. The Queen of the Netherlands has officially opened the bridge to the public and, as well as an eye-catching design, it features hidden sensors that are collecting data on its structural integrity, crowd behavior, and more.”

ARTIFICIAL INTELLIGENCE
The Computer Scientist Training AI to Think With Analogies
John Pavlus | Quanta
“Melanie Mitchell has worked on digital minds for decades. She says they’ll never truly be like ours until they can make analogies. … ‘Today’s state-of-the-art neural networks are very good at certain tasks,’ she said, ‘but they’re very bad at taking what they’ve learned in one kind of situation and transferring it to another’—the essence of analogy.”

SPACE
Here’s Why Richard Branson’s Flight Matters—and Yes, It Really Matters
Eric Berger | Ars Technica
“During the last 50 years, the vast majority of human flights into space—more than 95 percent—have been undertaken by government astronauts on government-designed and -funded vehicles. Starting with Branson and going forward, it seems likely that 95 percent of human spaceflights over the next half century, if not more, will take place on privately built vehicles by private citizens. It’s a moment that has been a long time coming.”

ENVIRONMENT
Sweeping ‘Green Deal’ Promises to Revamp EU Economy, Slash Carbon Pollution
Tim De Chant | Ars Technica
“The…proposal would cut carbon pollution 55 percent below 1990 levels by leaning heavily on renewable energy and electric vehicles while also introducing a border carbon adjustment on imports and taxing aviation and maritime fuels. Together, the reforms signal the beginning of the end of fossil fuels in the EU. ‘The fossil fuel economy has reached its limits,’ said European Commission President Ursula von der Leyen.”

FUTURE
Why I’m a Proud Solutionist
Jason Crawford | MIT Technology Review
“Debates about technology and progress are often framed in terms of ‘optimism’ vs. ‘pessimism.’ …It’s tempting to choose sides. …But this represents a false choice. History provides us with powerful examples of people who were brutally honest in identifying a crisis but were equally active in seeking solutions.”

ETHICS
As Use of AI Spreads, Congress Looks to Rein It In
Tom Simonite | Wired
“There’s bipartisan agreement in Washington that the US government should do more to support development of artificial intelligence technology. At the same time, parts of the US government are working to place limits on algorithms to prevent discrimination, injustice, or waste. The White House, lawmakers from both parties, and federal agencies including the Department of Defense and the National Institute for Standards and Technology are all working on bills or projects to constrain potential downsides of AI.”

Image Credit: Maximalfocus / Unsplash


#439418 Video Friday: Fluidic Fingers

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Humanoids 2020 – July 19-21, 2021 – [Online Event]
RO-MAN 2021 – August 8-12, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
IROS 2021 – September 27 – October 1, 2021 – [Online Event]
ROSCon 2021 – October 21-23, 2021 – New Orleans, LA, USA
Let us know if you have suggestions for next week, and enjoy today's videos.

This 3D printed hand uses fluidic circuits (which respond differently to different input pressures) to create a soft robotic hand that only needs one input source to actuate three fingers independently.

[ UMD ]

Thanks, Fan!

Nano quadcopters are ideal for gas source localization (GSL) as they are safe, agile, and inexpensive. However, their extremely restricted sensors and computational resources make GSL a daunting challenge. In this work, we propose a novel bug algorithm named ‘Sniffy Bug’, which allows a fully autonomous swarm of gas-seeking nano quadcopters to localize a gas source in unknown, cluttered, and GPS-denied environments.

[ MAVLab ]

Large-scale aerial deployment of miniature sensors in tough environmental conditions requires a deployment device that is lightweight, robust and steerable. We present a novel samara-inspired autorotating craft that is capable of autorotating and diving.

[ Paper ]

Scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have recently created a new algorithm to help a robot find efficient motion plans to ensure physical safety of its human counterpart. In this case, the bot helped put a jacket on a human, which could potentially prove to be a powerful tool in expanding assistance for those with disabilities or limited mobility.

[ MIT CSAIL ]

Listening to the language here about SoftBank's Whiz cleaning robot, I’ve got some concerns.

My worry is that the value that the robot is adding here is mostly in perception of cleaning, rather than actually, you know, cleaning. Which is still value, and that’s fine, but whether it’s long term commercially viable is less certain.

[ SoftBank ]

This paper presents a novel method for multi-legged robots to probe and test terrain for collapses using their legs while walking. The proposed method improves on existing terrain-probing approaches and integrates the probing action into the walking cycle. A follow-the-leader strategy with a suitable gait and stance is presented and implemented on a hexapod robot.

[ CSIRO ]

Robotics researchers from NVIDIA and University of Southern California presented their work at the 2021 Robotics: Science and Systems (RSS) conference called DiSECt, the first differentiable simulator for robotic cutting. The simulator accurately predicts the forces acting on a knife as it presses and slices through natural soft materials, such as fruits and vegetables.

[ NVIDIA ]

These videos from Moley Robotics have too many cuts in them to properly judge how skilled the robot is, but as far as I know, it only cooks the “perfect” steak in the sense that it will cook a steak of a given weight for a given time.

[ Moley ]

Most robot hands are designed for general purposes, as it’s very tedious to make task-specific hands. Existing methods battle trade-offs between the complexity of designs critical for contact-rich tasks and the practical constraints of manufacturing and contact handling.

This led researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) to create a new method to computationally optimize the shape and control of a robotic manipulator for a specific task. Their system uses software to manipulate the design, simulate the robot doing a task, and then provide an optimization score to assess the design and control.

[ MIT CSAIL ]

Drone Adventures maps wildlife in Namibia from above.

[ Drone Adventures ]

Some impressive electronics disassembly tasks using a planner that just unscrews things, shakes them, and sees whether it then needs to unscrew more things.

[ Imagine ]

The reconfigurable robot ReQuBiS can transition between biped, quadruped, and snake configurations without re-arranging modules, unlike most state-of-the-art models. Its design also allows the robot to split into two agents to perform tasks in parallel in biped and snake modes.

[ Paper ] via [ IvLabs ]

Thanks, Fan!

World Vision Kenya aims to improve the climate resilience of nine villages in Tana River County, sustainably manage the ecosystem and climate change, and restore the communities’ livelihoods by reseeding the hotspot areas with indigenous trees, covering at least 250 acres for every village. This can be challenging to achieve, considering the vast areas needing coverage. That’s why World Vision Kenya partnered with Kenya Flying Labs to help make this process faster, easier, and more efficient (and more fun!).

[ WeRobotics ]

Pieter Abbeel’s Robot Brains Podcast has started posting video versions of the episodes, if you’re into that sort of thing. There are interesting excerpts as well, a few of which we can share here.

[ Robot Brains ]

RSS took place this week with paper presentations, talks, Q&As, and more, but here are two of the keynotes that are definitely worth watching.

[ RSS 2021 ]
