Tag Archives: psychology

#438807 Visible Touch: How Cameras Can Help ...

The dawn of the robot revolution is already here, and it is not the dystopian nightmare we imagined. Instead, it comes in the form of social robots: Autonomous robots in homes and schools, offices and public spaces, able to interact with humans and other robots in a socially acceptable, human-perceptible way to resolve tasks related to core human needs.

To design social robots that “understand” humans, robotics scientists are delving into the psychology of human communication. Researchers from Cornell University posit that embedding the sense of touch in social robots could teach them to detect physical interactions and gestures. They describe a way of doing so by relying not on touch but on vision.

A USB camera inside the robot captures shadows of hand gestures on the robot’s surface and classifies them with machine-learning software. They call this method ShadowSense, which they define as a modality between vision and touch, bringing “the high resolution and low cost of vision-sensing to the close-up sensory experience of touch.”
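The article doesn't include the team's code, but the pipeline it describes (an internal camera feeding shadow images to a trained classifier) can be sketched roughly as below. The gesture labels, preprocessing steps, and the "model" object are illustrative assumptions, not the authors' implementation.

```python
# Rough sketch of a ShadowSense-style loop: grab frames from the camera
# mounted inside the hollow robot, reduce each frame to a normalized
# grayscale "shadow image," and hand it to a trained classifier.
import cv2
import numpy as np

# Hypothetical label set, loosely matching the six gestures reported above.
GESTURES = ["one_hand_touch", "two_hand_touch", "point", "hug", "punch", "no_touch"]

def preprocess(frame):
    """Convert a fisheye color frame to a normalized grayscale shadow image."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (224, 224))
    return gray.astype(np.float32) / 255.0

def classify(shadow, model):
    """Return the most likely gesture label from a trained model."""
    probs = model.predict(shadow[np.newaxis, ..., np.newaxis])
    return GESTURES[int(np.argmax(probs))]

cap = cv2.VideoCapture(0)          # USB camera inside the robot body
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    shadow = preprocess(frame)
    # gesture = classify(shadow, model)   # 'model' would be the trained network
cap.release()
```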

Touch-sensing in social or interactive robots is usually achieved with force sensors or capacitive sensors, says study co-author Guy Hoffman of the Sibley School of Mechanical and Aerospace Engineering at Cornell University. The drawback to that approach is that achieving even coarse spatial resolution requires many sensors packed into a small area.

However, working with non-rigid, inflatable robots, Hoffman and his co-researchers installed a consumer-grade USB camera to which they attached a fisheye lens for a wider field of view.

“Given that the robot is already hollow, and has a soft and translucent skin, we could do touch interaction by looking at the shadows created by people touching the robot,” says Hoffman. They used deep neural networks to interpret the shadows. “And we were able to do it with very high accuracy,” he says. The robot was able to interpret six different gestures, including one- or two-handed touch, pointing, hugging and punching, with an accuracy of 87.5 to 96 percent, depending on the lighting.

This is not the first time that computer vision has been used for tactile sensing, though the scale and application of ShadowSense are unique. “Photography has been used for touch mainly in robotic grasping,” says Hoffman. By contrast, Hoffman and collaborators wanted to develop a sense that could be “felt” across the whole of the device.

The potential applications for ShadowSense include mobile robot guidance using touch, and interactive screens on soft robots. A third concerns privacy, especially in home-based social robots. “We have another paper currently under review that looks specifically at the ability to detect gestures that are further away [from the robot’s skin],” says Hoffman. This way, users would be able to cover their robot’s camera with a translucent material and still allow it to interpret actions and gestures from shadows. Even though the robot is prevented from capturing a high-resolution image of the user or their surroundings, with the right training datasets it can continue to monitor some kinds of non-tactile activity.

In its current iteration, Hoffman says, ShadowSense doesn’t do well in low-light conditions. Environmental noise, or shadows from surrounding objects, also interfere with image classification. Relying on one camera also means a single point of failure. “I think if this were to become a commercial product, we would probably [have to] work a little bit better on image detection,” says Hoffman.

As it was, the researchers used transfer learning—reusing a pre-trained deep-learning model on a new problem—for image analysis. “One of the problems with multi-layered neural networks is that you need a lot of training data to make accurate predictions,” says Hoffman. “Obviously, we don’t have millions of examples of people touching a hollow, inflatable robot. But we can use pre-trained networks trained on general images, which we have billions of, and we only retrain the last layers of the network using our own dataset.” Continue reading
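The quote above describes standard transfer learning; a generic sketch of that recipe (not the authors' code, and with an arbitrary choice of backbone) looks something like this:

```python
# Transfer learning sketch: start from a network pre-trained on ImageNet,
# freeze the feature-extraction layers, and retrain only a new final layer
# on a small dataset of six shadow-gesture classes.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False                 # keep pre-trained features fixed

model.fc = nn.Linear(model.fc.in_features, 6)   # new head for six gestures

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One gradient step that updates only the retrained final layer."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```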

Posted in Human Robots

#437820 In-Shoe Sensors and Mobile Robots Keep ...

Researchers at Stevens Institute of Technology are leveraging some of the newest mechanical and robotic technologies to help some of our oldest populations stay healthy, active, and independent.

Yi Guo, professor of electrical and computer engineering and director of the Robotics and Automation Laboratory, and Damiano Zanotto, assistant professor of mechanical engineering and director of the Wearable Robotic Systems Laboratory, are collaborating with Ashley Lytle, assistant professor in Stevens’ College of Arts and Letters, and Ashwini K. Rao of Columbia University Medical Center, to combine an assistive mobile robot companion with wearable in-shoe sensors in a system designed to help elderly individuals maintain the balance and motion they need to thrive.

“Balance and motion can be significant issues for this population, and if elderly people fall and experience an injury, they are less likely to stay fit and exercise,” Guo said. “As a consequence, their level of fitness and performance decreases. Our mobile robot companion can help decrease the chances of falling and contribute to a healthy lifestyle by keeping their walking function at a good level.”

The mobile robots are designed to lead walking sessions and, using data from the in-shoe sensors, monitor the user’s gait, flag issues, and adjust the exercise speed and pace. The initiative is part of a four-year National Science Foundation research project.

“For the first time, we’re integrating our wearable sensing technology with an autonomous mobile robot,” said Zanotto, who worked with elderly people at Columbia University Medical Center for three years before coming to Stevens in 2016. “It’s exciting to be combining these different areas of expertise to leverage the strong points of wearable sensing technology, such as accurately capturing human movement, with the advantages of mobile robotics, such as much larger computational powers.”

The team is developing algorithms that fuse real-time data from smart, unobtrusive, in-shoe sensors and advanced on-board sensors to inform the robot’s navigation protocols and control the way the robot interacts with elderly individuals. It’s a promising way to assist seniors in safely doing walking exercises and maintaining their quality of life.
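The article doesn't detail those algorithms, but the basic idea, fusing gait data from the insoles with the robot's own sensing to set its leading pace, can be illustrated with a toy control loop. All names, constants, and thresholds below are hypothetical.

```python
# Toy pacing loop: blend the walker's cadence (from the in-shoe sensors)
# with the robot's measured distance to the walker to choose a leading speed.
TARGET_GAP_M = 1.2        # desired robot-to-walker spacing, meters
GAIN_GAP = 0.4            # how strongly to correct spacing errors
NOMINAL_SPEED = 0.8       # m/s, a comfortable walking pace

def robot_speed(cadence_spm, gap_m):
    """Pick a robot speed from the user's cadence and the current gap."""
    # Scale the nominal speed by how brisk the cadence is (~100 steps/min nominal).
    cadence_factor = max(0.5, min(1.5, cadence_spm / 100.0))
    speed = NOMINAL_SPEED * cadence_factor
    # Slow down if the walker is falling behind, speed up if they are too close.
    speed += GAIN_GAP * (gap_m - TARGET_GAP_M)
    return max(0.0, min(1.5, speed))

print(robot_speed(cadence_spm=95, gap_m=1.5))   # walker lagging slightly: speed up a bit
```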

Bringing the benefits of the lab to life

Guo and Zanotto are working with Lytle, an expert in social and health psychology, to implement a social connectivity capability and make the bi-directional interaction between human and robot even more intuitive, engaging, and meaningful for seniors.

“Especially during COVID, it’s important for elderly people living on their own to connect socially with family and friends,” Zanotto said, “and the robot companion will also offer teleconferencing tools to provide that interaction in an intuitive and transparent way.”

“We want to use the robot for social connectedness, perhaps integrating it with a conversation agent such as Alexa,” Guo added. “The goal is to make it a companion robot that can sense, for example, that you are cooking, or you’re in the living room, and help with things you would do there.”

It’s a powerful example of how abstract concepts can have meaningful real-life benefits.

“As engineers, we tend to work in the lab, trying to optimize our algorithms and devices and technologies,” Zanotto noted, “but at the end of the day, what we do has limited value unless it has impact on real life. It’s fascinating to see how the devices and technologies we’re developing in the lab can be applied to make a difference for real people.”

Maintaining balance in a global pandemic

Although COVID-19 has delayed the planned testing at a senior center in New York City, it has not stopped the team’s progress.

“Although we can’t test on elderly populations yet, our students are still testing in the lab,” Guo said. “This summer and fall, for the first time, the students validated the system’s real-time ability to monitor and assess the dynamic margin of stability during walking—in other words, to evaluate whether the person following the robot is walking normally or has a risk of falling. They’re also designing parameters for the robot to give early warnings and feedback that help the human subjects correct posture and gait issues while walking.”
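The article doesn't give the team's exact formulation, but a common definition of the dynamic margin of stability (based on Hof's extrapolated center of mass) can be sketched as follows; the warning threshold is an arbitrary placeholder.

```python
# Margin of stability along one axis (e.g., the direction of walking):
# extrapolate the center of mass by its velocity, then measure how far it
# stays inside the edge of the base of support.
import math

G = 9.81  # gravitational acceleration, m/s^2

def margin_of_stability(com_pos, com_vel, bos_edge, leg_length):
    """Positive values mean the extrapolated CoM is inside the base of support."""
    omega0 = math.sqrt(G / leg_length)          # inverted-pendulum eigenfrequency
    xcom = com_pos + com_vel / omega0           # extrapolated center of mass
    return bos_edge - xcom

def needs_warning(mos, threshold_m=0.02):
    """Trigger an early (e.g., vibrotactile) warning when the margin gets small."""
    return mos < threshold_m
```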

Those warnings would be literally underfoot, as the in-shoe sensors would pulse like a vibrating cell phone to deliver immediate directional information to the subject.

“We’re not the first to use this vibrotactile stimuli technology, but this application is new,” Zanotto said.

So far, the team has published papers in top robotics publication venues including IEEE Transactions on Neural Systems and Rehabilitation Engineering and the 2020 IEEE International Conference on Robotics and Automation (ICRA). It’s a big step toward realizing the synergies of bringing the technical expertise of engineers to bear on the clinical focus on biometrics—and the real lives of seniors everywhere. Continue reading

Posted in Human Robots

#437614 Video Friday: Poimo Is a Portable ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today's videos.

Engineers at the University of California San Diego have built a squid-like robot that can swim untethered, propelling itself by generating jets of water. The robot carries its own power source inside its body. It can also carry a sensor, such as a camera, for underwater exploration.

[ UCSD ]

Thanks Ioana!

Shark Robotics, French and European leader in Unmanned Ground Vehicles, is announcing today a disinfection add-on for Boston Dynamics Spot robot, designed to fight the COVID-19 pandemic. The Spot robot with Shark’s purpose-built disinfection payload can decontaminate up to 2,000 m² in 15 minutes, in any space that needs to be sanitized – such as hospitals, metro stations, offices, warehouses or other facilities.

[ Shark Robotics ]

Here’s an update on the Poimo portable inflatable mobility project we wrote about a little while ago; while not strictly robotics, it seems like it holds some promise for rapidly developing different soft structures that robotics might find useful.

[ University of Tokyo ]

Thanks Ryuma!

Pretty cool that you can do useful force feedback teleop while video chatting through a “regular broadband Internet connection.” Although, what “regular” means to you is a bit subjective, right?

[ HEBI Robotics ]

Thanks Dave!

While NASA's Mars rover Perseverance travels through space toward the Red Planet, its nearly identical rover twin is hard at work on Earth. The vehicle system test bed (VSTB) rover named OPTIMISM is a full-scale engineering version of the Mars-bound rover. It is used to test hardware and software before the commands are sent up to the Perseverance rover.

[ NASA ]

Jacquard takes ordinary, familiar objects and enhances them with new digital abilities and experiences, while remaining true to their original purpose — like being your favorite jacket, backpack or a pair of shoes that you love to wear.

Our ambition is simple: to make life easier. By staying connected to your digital world, your things can do so much more. Skip a song by brushing your sleeve. Take a picture by tapping on a shoulder strap. Get reminded about the phone you left behind with a blink of light or a haptic buzz on your cuff.

[ Google ATAP ]

Should you attend the IROS 2020 workshop on “Planetary Exploration Robots: Challenges and Opportunities”? Of course you should!

[ Workshop ]

Kuka makes a lot of these videos where I can’t help but think that if they put as much effort into programming the robot as they did into producing the video, the result would be much more impressive.

[ Kuka ]

The Colorado School of Mines is one of the first customers to buy a Spot robot from Boston Dynamics to help with robotics research. Watch as scientists take Spot into the school's mine for the first time.

[ HCR ] via [ CNET ]

A very interesting soft(ish) actuator from Ayato Kanada at Kyushu University's Control Engineering Lab.

A flexible ultrasonic motor (FUSM) generates linear motion as a novel soft actuator. This motor consists of a single metal cube stator with a hole and an elastic elongated coil spring inserted into the hole. When voltages are applied to piezoelectric plates on the stator, the coil spring moves back and forth as a linear slider. Because the FUSM relies on friction drive, the most important parameter for optimizing its output is the preload between the stator and slider. The coil spring has a slightly larger diameter than the stator hole and generates the preload by expanding in a radial direction. The coil spring acts not only as a flexible slider but also as a resistive position sensor. Changes in the resistance between the stator and the coil spring end are converted to a voltage and used for position detection.
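As an illustration of the position-sensing idea in that description, here is a small sketch that converts a voltage-divider reading of the coil spring's resistance into a slider position; the circuit layout and calibration constants are assumptions, not values from the paper.

```python
# Assume the spring's resistance forms the lower leg of a voltage divider
# with a known reference resistor, so v_measured = V_SUPPLY * R_s / (R_REF + R_s).
V_SUPPLY = 3.3        # volts across the divider (assumed)
R_REF = 100.0         # ohms, known reference resistor (assumed)
OHMS_PER_MM = 0.5     # hypothetical resistance change per millimeter of travel

def spring_resistance(v_measured):
    """Recover the spring's resistance from the divider voltage."""
    return R_REF * v_measured / (V_SUPPLY - v_measured)

def slider_position_mm(v_measured, r_at_zero=10.0):
    """Map the measured resistance to slider travel via a linear calibration."""
    return (spring_resistance(v_measured) - r_at_zero) / OHMS_PER_MM
```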

[ Control Engineering Lab ]

Thanks Ayato!

We show how to use the limbs of a quadruped robot to identify fine-grained soil, representative of Martian regolith.

[ Paper ] via [ ANYmal Research ]

PR2 is serving breakfast and cleaning up afterwards. It’s slow, but all you have to do is eat and leave.

That poor PR2 is a little more naked than it's probably comfortable with.

[ EASE ]

NVIDIA researchers present a hierarchical framework that combines model-based control and reinforcement learning (RL) to synthesize robust controllers for a quadruped robot (the Unitree Laikago).

[ NVIDIA ]

What’s interesting about this assembly task is that the robot is using its arm only for positioning, and doing the actual assembly with just its fingers.

[ RC2L ]

In this electronics assembly application, Kawasaki's cobot duAro2 uses a tool changing station to tackle a multitude of tasks and assemble different CPU models.

Okay but can it apply thermal paste to a CPU in the right way? Personally, I find that impossible.

[ Kawasaki ]

You only need to watch this video long enough to appreciate the concept of putting a robot on a robot.

[ Impress ]

In this lecture, we’ll hear from the man behind one of the biggest robotics companies in the world, Boston Dynamics, whose robotic dog, Spot, has been used to encourage social distancing in Singapore and is now getting ready for FDA approval to be able to measure patients’ vital signs in hospitals.

[ Alan Turing Institute ]

Greg Kahn from UC Berkeley wrote in to share his recent dissertation talk on “Mobile Robot Learning.”

In order to create mobile robots that can autonomously navigate real-world environments, we need generalizable perception and control systems that can reason about the outcomes of navigational decisions. Learning-based methods, in which the robot learns to navigate by observing the outcomes of navigational decisions in the real world, offer considerable promise for obtaining these intelligent navigation systems. However, there are many challenges impeding mobile robots from autonomously learning to act in the real-world, in particular (1) sample-efficiency–how to learn using a limited amount of data? (2) supervision–how to tell the robot what to do? and (3) safety–how to ensure the robot and environment are not damaged or destroyed during learning? In this talk, I will present deep reinforcement learning methods for addressing these real world mobile robot learning challenges and show results which enable ground and aerial robots to navigate in complex indoor and outdoor environments.

[ UC Berkeley ]

Thanks Greg!

Leila Takayama from UC Santa Cruz (and previously Google X and Willow Garage) gives a talk entitled “Toward a more human-centered future of robotics.”

Robots are no longer only in outer space, in factory cages, or in our imaginations. We interact with robotic agents when withdrawing cash from bank ATMs, driving cars with adaptive cruise control, and tuning our smart home thermostats. In the moment of those interactions with robotic agents, we behave in ways that do not necessarily align with the rational belief that robots are just plain machines. Through a combination of controlled experiments and field studies, we use theories and concepts from the social sciences to explore ways that human and robotic agents come together, including how people interact with personal robots and how people interact through telepresence robots. Together, we will explore topics and raise questions about the psychology of human-robot interaction and how we could invent a future of a more human-centered robotics that we actually want to live in.

[ Leila Takayama ]

Roboticist and stand-up comedian Naomi Fitter from Oregon State University gives a talk on “Everything I Know about Telepresence.”

Telepresence robots hold promise to connect people by providing videoconferencing and navigation abilities in far-away environments. At the same time, the impacts of current commercial telepresence robots are not well understood, and circumstances of robot use including internet connection stability, odd personalizations, and interpersonal relationship between a robot operator and people co-located with the robot can overshadow the benefit of the robot itself. And although the idea of telepresence robots has been around for over two decades, available nonverbal expressive abilities through telepresence robots are limited, and suitable operator user interfaces for the robot (for example, controls that allow for the operator to hold a conversation and move the robot simultaneously) remain elusive. So where should we be using telepresence robots? Are there any pitfalls to watch out for? What do we know about potential robot expressivity and user interfaces? This talk will cover my attempts to address these questions and ways in which the robotics research community can build off of this work.

[ Talking Robotics ] Continue reading

Posted in Human Robots

#437579 Disney Research Makes Robotic Gaze ...

While it’s not totally clear to what extent human-like robots are better than conventional robots for most applications, one area I’m personally comfortable with them is entertainment. The folks over at Disney Research, who are all about entertainment, have been working on this sort of thing for a very long time, and some of their animatronic attractions are actually quite impressive.

The next step for Disney is to have its animatronic figures, which currently perform scripted behaviors, interact with visitors. The challenge is that this is where you start to get into potential uncanny valley territory, which can happen when you try to create “the illusion of life,” and that is exactly what Disney (they explicitly say) is trying to do.

In a paper presented at IROS this month, a team from Disney Research, Caltech, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering is trying to nail that illusion of life with a single, and perhaps most important, social cue: eye gaze.

Before you watch this video, keep in mind that you’re watching a specific character, as Disney describes:

The robot character plays an elderly man reading a book, perhaps in a library or on a park bench. He has difficulty hearing and his eyesight is in decline. Even so, he is constantly distracted from reading by people passing by or coming up to greet him. Most times, he glances at people moving quickly in the distance, but as people encroach into his personal space, he will stare with disapproval for the interruption, or provide those that are familiar to him with friendly acknowledgment.

What, exactly, does “lifelike” mean in the context of robotic gaze? The paper abstract describes the goal as “[seeking] to create an interaction which demonstrates the illusion of life.” I suppose you could think of it like a sort of old-fashioned Turing test focused on gaze: If the gaze of this robot cannot be distinguished from the gaze of a human, then victory, that’s lifelike. And critically, we’re talking about mutual gaze here—not just a robot gazing off into the distance, but you looking deep into the eyes of this robot and it looking right back at you just like a human would. Or, just like some humans would.

The approach that Disney is using is more animation-y than biology-y or psychology-y. In other words, they’re not trying to figure out what’s going on in our brains to make our eyes move the way that they do when we’re looking at other people and basing their control system on that, but instead, Disney just wants it to look right. This “visual appeal” approach is totally fine, and there’s been an enormous amount of human-robot interaction (HRI) research behind it already, albeit usually with less explicitly human-like platforms. And speaking of human-like platforms, the hardware is a “custom Walt Disney Imagineering Audio-Animatronics bust,” which has DoFs that include neck, eyes, eyelids, and eyebrows.

In order to decide on gaze motions, the system first identifies a person to target with its attention using an RGB-D camera. If more than one person is visible, the system calculates a curiosity score for each, currently simplified to be based on how much motion it sees. Based on whichever visible person has the highest curiosity score, the system chooses from a set of high-level gaze behavior states (a minimal code sketch of this logic follows the state descriptions below), including:

Read: The Read state can be considered the “default” state of the character. When not executing another state, the robot character will return to the Read state. Here, the character will appear to read a book located at torso level.

Glance: A transition to the Glance state from the Read or Engage states occurs when the attention engine indicates that there is a stimuli with a curiosity score […] above a certain threshold.

Engage: The Engage state occurs when the attention engine indicates that there is a stimuli […] to meet a threshold and can be triggered from both Read and Glance states. This state causes the robot to gaze at the person-of-interest with both the eyes and head.

Acknowledge: The Acknowledge state is triggered from either Engage or Glance states when the person-of-interest is deemed to be familiar to the robot.

Running underneath these higher level behavior states are lower level motion behaviors like breathing, small head movements, eye blinking, and saccades (the quick eye movements that occur when people, or robots, look between two different focal points). The term for this hierarchical behavioral state layering is a subsumption architecture, which goes all the way back to Rodney Brooks’ work on robots like Genghis in the 1980s and Cog and Kismet in the ’90s, and it provides a way for more complex behaviors to emerge from a set of simple, decentralized low-level behaviors.
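To make the structure concrete, here is a minimal sketch of how such a layered gaze controller could be organized, with the four high-level states selected by curiosity score and the low-level behaviors always running underneath. The thresholds and function names are assumptions, not details from the paper.

```python
# High-level gaze state selection for the person with the top curiosity score,
# layered over always-on low-level behaviors in subsumption style.
GLANCE_THRESHOLD = 0.3   # assumed
ENGAGE_THRESHOLD = 0.7   # assumed

def select_state(curiosity, familiar, current_state):
    """Choose among Read, Glance, Engage, and Acknowledge."""
    if familiar and current_state in ("Engage", "Glance"):
        return "Acknowledge"           # friendly nod for a familiar person
    if curiosity >= ENGAGE_THRESHOLD:
        return "Engage"                # gaze with both eyes and head
    if curiosity >= GLANCE_THRESHOLD:
        return "Glance"                # brief look toward the stimulus
    return "Read"                      # default: appear to read the book

def tick(curiosity, familiar, state):
    state = select_state(curiosity, familiar, state)
    # The low-level layer keeps running regardless of state and is modulated,
    # not replaced, by the layer above it:
    #   breathe(); blink_if_due(); saccade_jitter(); small_head_motion()
    return state
```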

“25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”
—Rodney Brooks, MIT emeritus professor

Brooks, an emeritus professor at MIT and, most recently, cofounder and CTO of Robust.ai, tweeted about the Disney project, saying: “People underestimate how long it takes to get from academic paper to real world robotics. 25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”

From the paper:

Although originally intended for control of mobile robots, we find that the subsumption architecture, as presented in [17], lends itself as a framework for organizing animatronic behaviors. This is due to the analogous use of subsumption in human behavior: human psychomotor behavior can be intuitively modeled as layered behaviors with incoming sensory inputs, where higher behavioral levels are able to subsume lower behaviors. At the lowest level, we have involuntary movements such as heartbeats, breathing and blinking. However, higher behavioral responses can take over and control lower level behaviors, e.g., fight-or-flight response can induce faster heart rate and breathing. As our robot character is modeled after human morphology, mimicking biological behaviors through the use of a bottom-up approach is straightforward.

The result, as the video shows, appears to be quite good, although it’s hard to tell how it would all come together if the robot had more of, you know, a face. But it seems like you don’t necessarily need to have a lifelike humanoid robot to take advantage of this architecture in an HRI context—any robot that wants to make a gaze-based connection with a human could benefit from doing it in a more human-like way.

“Realistic and Interactive Robot Gaze,” by Matthew K.X.J. Pan, Sungjoon Choi, James Kennedy, Kyna McIntosh, Daniel Campos Zamora, Gunter Niemeyer, Joohyung Kim, Alexis Wieland, and David Christensen from Disney Research, California Institute of Technology, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering, was presented at IROS 2020. You can find the full paper, along with a 13-minute video presentation, on the IROS on-demand conference website.

Continue reading

Posted in Human Robots

#436176 We’re Making Progress in Explainable ...

Machine learning algorithms are starting to exceed human performance in many narrow and specific domains, such as image recognition and certain types of medical diagnoses. They’re also rapidly improving in more complex domains such as generating eerily human-like text. We increasingly rely on machine learning algorithms to make decisions on a wide range of topics, from what we collectively spend billions of hours watching to who gets the job.

But machine learning algorithms cannot explain the decisions they make. How can we justify putting these systems in charge of decisions that affect people’s lives if we don’t understand how they’re arriving at those decisions?

This desire to get more than raw numbers from machine learning algorithms has led to a renewed focus on explainable AI: algorithms that can make a decision or take an action, and tell you the reasons behind it.

What Makes You Say That?
In some circumstances, you can see a road to explainable AI already. Take OpenAI’s GPT-2 model, or IBM’s Project Debater. Both of these generate text based on a large corpus of training data, and try to make it as relevant as possible to the prompt that’s given. If these models were also able to provide a quick run-down of the top few sources in that corpus of training data they were drawing information from, it might be easier to understand where the “argument” (or poetic essay about unicorns) was coming from.

This is similar to the approach Google is now looking at for its image classifiers. Many algorithms are more sensitive to textures and the relationships between adjacent pixels in an image than to the outlines humans use to recognize objects. This leads to strange results: some algorithms can happily identify a totally scrambled image of a polar bear, but not a polar bear silhouette.

Previous attempts to make image classifiers explainable relied on significance mapping (often called saliency mapping). In this method, the algorithm highlights the areas of the image that contributed the most statistical weight to the decision. This is usually determined by changing groups of pixels in the image and seeing which changes produce the biggest shift in the algorithm’s impression of what the image is. For example, if the algorithm is trying to recognize a stop sign, changing the background is unlikely to be as important as changing the sign.
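A minimal sketch of that occlusion idea is shown below; the classifier object and its predict_proba interface are assumptions for illustration.

```python
# Occlusion-style significance map: black out one patch at a time and record
# how much the classifier's confidence in its original prediction drops.
import numpy as np

def occlusion_map(image, model, target_class, patch=16):
    """Return a heatmap; larger values mark patches that mattered more."""
    h, w = image.shape[:2]
    baseline = model.predict_proba(image)[target_class]
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch] = 0      # mask one patch
            drop = baseline - model.predict_proba(occluded)[target_class]
            heat[i // patch, j // patch] = drop
    return heat
```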

Google’s new approach changes the way that its algorithm recognizes objects, by examining them at several different resolutions and searching for matches to different “sub-objects” within the main object. You or I might recognize an ambulance from its flashing lights, its tires, and its logo; we might zoom in on the basketball held by an NBA player to deduce their occupation, and so on. By linking the overall categorization of an image to these “concepts,” the algorithm can explain its decision: I categorized this as a cat because of its tail and whiskers.

Even in this experiment, though, the “psychology” of the algorithm in decision-making is counter-intuitive. For example, in the basketball case, the most important factor in making the decision was actually the player’s jerseys rather than the basketball.

Can You Explain What You Don’t Understand?
While it may seem trivial, the conflict here is a fundamental one in approaches to artificial intelligence. Namely, how far can you get with mere statistical associations between huge sets of data, and how much do you need to introduce abstract concepts for real intelligence to arise?

At one end of the spectrum, Good Old-Fashioned AI or GOFAI dreamed up machines that would be entirely based on symbolic logic. The machine would be hard-coded with the concept of a dog, a flower, cars, and so forth, alongside all of the symbolic “rules” which we internalize, allowing us to distinguish between dogs, flowers, and cars. (You can imagine a similar approach to a conversational AI would teach it words and strict grammatical structures from the top down, rather than “learning” languages from statistical associations between letters and words in training data, as GPT-2 broadly does.)

Such a system would be able to explain itself, because it would deal in high-level, human-understandable concepts. The equation is closer to: “ball” + “stitches” + “white” = “baseball”, rather than a set of millions of numbers linking various pathways together. There are elements of GOFAI in Google’s new approach to explaining its image recognition: the new algorithm can recognize objects based on the sub-objects they contain. To do this, it requires at least a rudimentary understanding of what those sub-objects look like, and the rules that link objects to sub-objects, such as “cats have whiskers.”
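A toy example of that symbolic style, where the rule that fires doubles as the explanation, might look like this; the attribute lists are made up for illustration.

```python
# Hand-written GOFAI-style rules: each category is defined by the attributes
# it must contain, and the matched rule is returned as the explanation.
RULES = {
    "baseball": {"ball", "stitches", "white"},
    "cat": {"whiskers", "tail", "fur"},
    "ambulance": {"flashing lights", "tires", "cross logo"},
}

def classify(attributes):
    """Return (label, explanation) for the first rule fully satisfied."""
    attributes = set(attributes)
    for label, required in RULES.items():
        if required <= attributes:
            return label, label + " because it has " + ", ".join(sorted(required))
    return "unknown", "no rule matched"

print(classify({"white", "ball", "stitches"}))
# -> ('baseball', 'baseball because it has ball, stitches, white')
```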

The issue, of course, is the—maybe impossible—labor-intensive task of defining all these symbolic concepts and every conceivable rule that could possibly link them together by hand. The difficulty of creating systems like this, which could handle the “combinatorial explosion” present in reality, helped to lead to the first AI winter.

Meanwhile, neural networks rely on training themselves on vast sets of data. Without the “labeling” of supervised learning, this process might bear no relation to any concepts a human could understand (and therefore be utterly inexplicable).

Somewhere between these two extremes, explainable AI enthusiasts hope, lies a happy medium: systems that can crunch colossal amounts of data, giving us all of the benefits that recent neural-network AI has bestowed, while showing their working in terms that humans can understand.

Image Credit: Image by Seanbatty from Pixabay Continue reading

Posted in Human Robots