#437590 Why We Need a Robot Registry


I have a confession to make: A robot haunts my nightmares. For me, Boston Dynamics’ Spot robot is 32.5 kilograms (71.7 pounds) of pure terror. It can climb stairs. It can open doors. Seeing it in a video cannot prepare you for the moment you cross paths on a trade-show floor. Now that companies can buy a Spot robot for US $74,500, you might encounter Spot anywhere.

Spot robots now patrol public parks in Singapore to enforce social distancing during the pandemic. They meet with COVID-19 patients at Boston’s Brigham and Women’s Hospital so that doctors can conduct remote consultations. Imagine coming across Spot while walking in the park or returning to your car in a parking garage. Wouldn’t you want to know why this hunk of metal is there and who’s operating it? Or at least whom to call to report a malfunction?

Robots are becoming more prominent in daily life, which is why I think governments need to create national registries of robots. Such a registry would let citizens and law enforcement look up the owner of any roaming robot, as well as learn that robot’s purpose. It’s not a far-fetched idea: The U.S. Federal Aviation Administration already has a registry for drones.

Governments could create national databases that require any companies operating robots in public spaces to report the robot make and model, its purpose, and whom to contact if the robot breaks down or causes problems. To allow anyone to use the database, all public robots would have an easily identifiable marker or model number on their bodies. Think of it as a license plate or pet microchip, but for bots.
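
To make the idea concrete, here is a minimal sketch of what a registry entry might hold, written in Python. The field names, the marker format, and the example operator are all hypothetical; they are not drawn from any existing or proposed registry.

```python
from dataclasses import dataclass

# Hypothetical registry record; field names are illustrative only.
@dataclass
class RobotRegistryEntry:
    marker_id: str   # the visible "license plate" printed on the robot's body
    make: str        # e.g., "Boston Dynamics"
    model: str       # e.g., "Spot"
    operator: str    # company or agency running the robot
    purpose: str     # why the robot is in this public space
    contact: str     # whom to call if it breaks down or causes problems

# Someone who reads the marker off a robot could then look up its operator.
registry = {
    "SPOT-0042": RobotRegistryEntry(
        marker_id="SPOT-0042",
        make="Boston Dynamics",
        model="Spot",
        operator="Example Parks Department",   # made-up operator
        purpose="Public-space patrol announcements",
        contact="+1-555-0100",
    )
}
print(registry["SPOT-0042"].contact)
```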

There are some smaller-scale registries today. San Jose’s Department of Transportation (SJDOT), for example, is working with Kiwibot, a delivery robot manufacturer, to get real-time data from the robots as they roam the city’s streets. The Kiwibots report their location to SJDOT using the open-source Mobility Data Specification, which was originally developed by Los Angeles to track Bird scooters.

Real-time location reporting makes sense for Kiwibots and Spots wandering the streets, but it’s probably overkill for bots confined to cleaning floors or patrolling parking lots. That said, any robots that come in contact with the general public should clearly provide basic credentials and a way to hold their operators accountable. Given that many robots use cameras, people may also be interested in looking up who’s collecting and using that data.

I started thinking about robot registries after Spot became available in June for anyone to purchase. The idea gained specificity after I heard Andra Keay, founder and managing director at Silicon Valley Robotics, discuss her five rules of ethical robotics at an Arm event in October. I had already been thinking that we needed some way to track robots, but her suggestion to tie robot license plates to a formal registry made me realize that people also need a way to clearly identify individual robots.

Keay pointed out that in addition to sating public curiosity and keeping an eye on robots that could cause harm, a registry could also track robots that have been hacked. For example, robots at risk of being hacked and running amok could be required to report their movements to a database, even if they’re typically restricted to a grocery store or warehouse. While we’re at it, Spot robots should be required to have sirens, because there’s no way I want one of those sneaking up on me.

This article appears in the December 2020 print issue as “Who’s Behind That Robot?”

#437585 Dart-Shooting Drone Attacks Trees for ...

We all know how robots are great at going to places where you can’t (or shouldn’t) send a human. We also know how robots are great at doing repetitive tasks. These characteristics have the potential to make robots ideal for setting up wireless sensor networks in hazardous environments—that is, they could deploy a whole bunch of self-contained sensor nodes that create a network that can monitor a very large area for a very long time.

When it comes to using drones to set up sensor networks, you’ve generally got two options: a drone that simply drops sensors on the ground (easy, but inaccurate and limited in where the sensors can go), or a drone with some sort of manipulator on it that sticks sensors in specific places (accurate, but complicated and risky). A third option, under development by researchers at Imperial College London’s Aerial Robotics Lab, provides the accuracy of direct contact with the safety and ease of use of passive dropping by instead using the drone as a launching platform for laser-aimed, sensor-equipped darts.

These darts (which the researchers refer to as aerodynamically stabilized, spine-equipped sensor pods) can embed themselves in relatively soft targets from up to 4 meters away with an accuracy of about 10 centimeters after being fired from a spring-loaded launcher. That’s not quite as accurate as a drone with a manipulator, but it’s pretty good, and the drone can maintain a safe distance from the surface that it’s trying to add a sensor to. Obviously, the spine is only going to work on things like wood, but the researchers point out that there are plenty of attachment mechanisms that could be used, including magnets, adhesives, chemical bonding, or microspines.
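
As a back-of-the-envelope check (my arithmetic, not a figure from the paper), 10 centimeters of scatter at the full 4-meter stand-off corresponds to an aiming error of only about 1.4 degrees at the launcher:

```python
import math

# 0.10 m of scatter at a 4 m stand-off distance, expressed as an angle.
range_m = 4.0
scatter_m = 0.10
angle_deg = math.degrees(math.atan2(scatter_m, range_m))
print(f"{angle_deg:.2f} degrees")  # ~1.43 degrees
```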

Indoor tests using magnets showed the system to be quite reliable, but at close range (within a meter of the target) the darts sometimes bounced off rather than sticking. From between 1 and 4 meters away, the darts stuck between 90 and 100 percent of the time. Initial outdoor tests were also successful, although the system was under manual control. The researchers say that “regular and safe operations should be carried out autonomously,” which, yeah, you’d just have to deal with all of the extra sensing and hardware required to autonomously fly beneath the canopy of a forest. That’s happening next, as the researchers plan to add “vision state estimation and positioning, as well as a depth sensor” to avoid some trees and fire sensors into others.

And if all of that goes well, they’ll consider trying to get each drone to carry multiple darts. Look out, trees: You’re about to be pierced for science.

“Unmanned Aerial Sensor Placement for Cluttered Environments,” by André Farinha, Raphael Zufferey, Peter Zheng, Sophie F. Armanini, and Mirko Kovac from Imperial College London, was published in IEEE Robotics and Automation Letters.

#437579 Disney Research Makes Robotic Gaze ...

While it’s not totally clear to what extent human-like robots are better than conventional robots for most applications, one area where I’m personally comfortable with them is entertainment. The folks over at Disney Research, who are all about entertainment, have been working on this sort of thing for a very long time, and some of their animatronic attractions are actually quite impressive.

The next step for Disney is to get its animatronic figures, which currently feature scripted behaviors, to perform in an interactive manner with visitors. The challenge is that this is where you start to get into potential Uncanny Valley territory, a risk that comes with trying to create “the illusion of life,” which is exactly what Disney explicitly says it’s trying to do.

In a paper presented at IROS this month, a team from Disney Research, Caltech, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering is trying to nail that illusion of life with a single, and perhaps the most important, social cue: eye gaze.

Before you watch this video, keep in mind that you’re watching a specific character, as Disney describes:

The robot character plays an elderly man reading a book, perhaps in a library or on a park bench. He has difficulty hearing and his eyesight is in decline. Even so, he is constantly distracted from reading by people passing by or coming up to greet him. Most times, he glances at people moving quickly in the distance, but as people encroach into his personal space, he will stare with disapproval for the interruption, or provide those that are familiar to him with friendly acknowledgment.

What, exactly, does “lifelike” mean in the context of robotic gaze? The paper abstract describes the goal as “[seeking] to create an interaction which demonstrates the illusion of life.” I suppose you could think of it like a sort of old-fashioned Turing test focused on gaze: If the gaze of this robot cannot be distinguished from the gaze of a human, then victory, that’s lifelike. And critically, we’re talking about mutual gaze here—not just a robot gazing off into the distance, but you looking deep into the eyes of this robot and it looking right back at you just like a human would. Or, just like some humans would.

The approach that Disney is using is more animation-y than biology-y or psychology-y. In other words, they’re not trying to figure out what’s going on in our brains to make our eyes move the way that they do when we’re looking at other people and basing their control system on that, but instead, Disney just wants it to look right. This “visual appeal” approach is totally fine, and there’s been an enormous amount of human-robot interaction (HRI) research behind it already, albeit usually with less explicitly human-like platforms. And speaking of human-like platforms, the hardware is a “custom Walt Disney Imagineering Audio-Animatronics bust,” which has DoFs that include neck, eyes, eyelids, and eyebrows.

In order to decide on gaze motions, the system first identifies a person to target with its attention using an RGB-D camera. If more than one person is visible, the system calculates a curiosity score for each, currently simplified to be based on how much motion it sees. Based on which visible person has the highest curiosity score, the system will choose from a variety of high-level gaze behavior states (a rough sketch of how these transitions might be implemented follows the list), including:

Read: The Read state can be considered the “default” state of the character. When not executing another state, the robot character will return to the Read state. Here, the character will appear to read a book located at torso level.

Glance: A transition to the Glance state from the Read or Engage states occurs when the attention engine indicates that there is a stimuli with a curiosity score […] above a certain threshold.

Engage: The Engage state occurs when the attention engine indicates that there is a stimuli […] to meet a threshold and can be triggered from both Read and Glance states. This state causes the robot to gaze at the person-of-interest with both the eyes and head.

Acknowledge: The Acknowledge state is triggered from either Engage or Glance states when the person-of-interest is deemed to be familiar to the robot.
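
The paper doesn’t publish controller code, but the behavior it describes maps naturally onto a small state machine. Here is a minimal Python sketch; the thresholds, the motion-based curiosity score, and the exact transition conditions are my guesses rather than Disney’s implementation.

```python
from enum import Enum, auto

class GazeState(Enum):
    READ = auto()
    GLANCE = auto()
    ENGAGE = auto()
    ACKNOWLEDGE = auto()

# Placeholder thresholds; the paper does not publish its values.
GLANCE_THRESHOLD = 0.3
ENGAGE_THRESHOLD = 0.7

def curiosity_score(person_motion: float) -> float:
    """The attention engine currently simplifies curiosity to observed motion."""
    return min(person_motion, 1.0)

def next_state(state: GazeState, score: float, familiar: bool) -> GazeState:
    """Simplified version of the transitions described in the paper."""
    if state in (GazeState.ENGAGE, GazeState.GLANCE) and familiar:
        return GazeState.ACKNOWLEDGE   # friendly acknowledgment of a familiar person
    if score >= ENGAGE_THRESHOLD:
        return GazeState.ENGAGE        # gaze at the person with both eyes and head
    if score >= GLANCE_THRESHOLD:
        return GazeState.GLANCE        # brief look toward the stimulus
    return GazeState.READ              # default: return to reading the book

print(next_state(GazeState.READ, curiosity_score(0.8), familiar=False))  # GazeState.ENGAGE
```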

Running underneath these higher level behavior states are lower level motion behaviors like breathing, small head movements, eye blinking, and saccades (the quick eye movements that occur when people, or robots, look between two different focal points). The term for this hierarchical behavioral state layering is a subsumption architecture, which goes all the way back to Rodney Brooks’ work on robots like Genghis in the 1980s and Cog and Kismet in the ’90s, and it provides a way for more complex behaviors to emerge from a set of simple, decentralized low-level behaviors.

Brooks, an emeritus professor at MIT and, most recently, cofounder and CTO of Robust.ai, tweeted about the Disney project, saying: “People underestimate how long it takes to get from academic paper to real world robotics. 25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”

From the paper:

Although originally intended for control of mobile robots, we find that the subsumption architecture, as presented in [17], lends itself as a framework for organizing animatronic behaviors. This is due to the analogous use of subsumption in human behavior: human psychomotor behavior can be intuitively modeled as layered behaviors with incoming sensory inputs, where higher behavioral levels are able to subsume lower behaviors. At the lowest level, we have involuntary movements such as heartbeats, breathing and blinking. However, higher behavioral responses can take over and control lower level behaviors, e.g., fight-or-flight response can induce faster heart rate and breathing. As our robot character is modeled after human morphology, mimicking biological behaviors through the use of a bottom-up approach is straightforward.
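
To illustrate the layering idea itself (not Disney’s code), here is a stripped-down subsumption-style arbiter in which the highest active layer suppresses the output of the layers beneath it. The layer names and the winner-takes-all simplification are mine; in a full subsumption architecture, lower layers keep running unless a higher layer explicitly overrides them.

```python
# Each layer proposes motion commands; a higher, active layer subsumes the rest.
class Layer:
    def active(self, world: dict) -> bool:
        raise NotImplementedError
    def command(self, world: dict) -> dict:
        raise NotImplementedError

class Blink(Layer):                 # lowest level: involuntary blinking
    def active(self, world): return True
    def command(self, world): return {"eyelids": "blink"}

class Saccade(Layer):               # small corrective eye movements while fixating
    def active(self, world): return world.get("fixating", False)
    def command(self, world): return {"eyes": "saccade"}

class EngagePerson(Layer):          # highest level: turn eyes and head to a person
    def active(self, world): return world.get("curiosity", 0.0) > 0.7
    def command(self, world): return {"eyes": "track_person", "neck": "turn_to_person"}

def arbitrate(layers, world):
    """Layers are ordered lowest to highest; the highest active layer wins."""
    for layer in reversed(layers):
        if layer.active(world):
            return layer.command(world)
    return {}

print(arbitrate([Blink(), Saccade(), EngagePerson()], {"curiosity": 0.9}))
```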

The result, as the video shows, appears to be quite good, although it’s hard to tell how it would all come together if the robot had more of, you know, a face. But it seems like you don’t necessarily need to have a lifelike humanoid robot to take advantage of this architecture in an HRI context—any robot that wants to make a gaze-based connection with a human could benefit from doing it in a more human-like way.

“Realistic and Interactive Robot Gaze,” by Matthew K.X.J. Pan, Sungjoon Choi, James Kennedy, Kyna McIntosh, Daniel Campos Zamora, Gunter Niemeyer, Joohyung Kim, Alexis Wieland, and David Christensen from Disney Research, California Institute of Technology, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering, was presented at IROS 2020. You can find the full paper, along with a 13-minute video presentation, on the IROS on-demand conference website.

#437577 A Swarm of Cyborg Cockroaches That Lives ...

Digital Nature Group at the University of Tsukuba in Japan is working towards a “post ubiquitous computing era consisting of seamless combination of computational resources and non-computational resources.” By “non-computational resources,” they mean leveraging the natural world, which for better or worse includes insects.

At small scales, the capabilities of insects far exceed the capabilities of robots. I get that. And I get that turning cockroaches into an army of insect cyborgs could be useful in a variety of ways. But what makes me fundamentally uncomfortable is the idea that “in the future, they’ll appear out of nowhere without us recognizing it, fulfilling their tasks and then hiding.” In other words, you’ll have cyborg cockroaches hiding all over your house, all the time.

Warning: This article contains video of cockroaches being modified with cybernetic implants that some people may find upsetting.

Remotely controlling cockroaches isn’t a new idea, and it’s a fairly simple one. By stimulating the left or right antenna nerves of the cockroach, you can make it think that it’s running into something, and get it to turn in the opposite direction. Add wireless connectivity, some fiducial markers, an overhead camera system, and a bunch of cyborg cockroaches, and you have a resilient swarm that can collaborate on tasks. The researchers suggest that the swarm could be used as a display (by making each cockroach into a pixel), to transport objects, or to draw things. There’s also some mention of “input or haptic interfaces or an audio device,” which frankly sounds horrible.
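
The steering principle is simple enough to sketch. In the toy Python below, stimulate() is a hypothetical stand-in for the group’s actual wireless backpack interface, and the deadband value is made up; only the left/right logic reflects the description above.

```python
def stimulate(side: str) -> None:
    """Hypothetical stand-in for pulsing one antenna nerve via the backpack."""
    print(f"pulse {side} antenna nerve")

def steer_toward(heading_error_deg: float, deadband_deg: float = 10.0) -> None:
    """Turn the roach toward a target tracked by the overhead camera system.

    heading_error_deg > 0 means the target is off to the roach's left.
    """
    if heading_error_deg > deadband_deg:
        stimulate("right")   # perceived obstacle on the right -> roach turns left
    elif heading_error_deg < -deadband_deg:
        stimulate("left")    # perceived obstacle on the left -> roach turns right
    # inside the deadband: let it keep walking straight

steer_toward(25.0)   # target to the left, so pulse the right antenna
```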

There are many other swarm robotic platforms that can perform what you’re seeing these cyborg roaches do, but according to the researchers, the reason to use cockroaches is that you can take advantage of their impressive ruggedness, efficiency, high power to weight ratio, and mobility. They’re a lot messier (yay biology!), but they can also feed themselves, meaning that whenever you don’t need the swarm to perform some task for you, you can deactivate the control system and let them scurry off to find crumbs in dark places. And when you need them again, turn the control system on and experience the nightmare of your cyborg cockroach swarm reassembling itself from all over your house.

While we’re on the subject of cockroach hacking, we would be doing you a disservice if we didn’t share some of project leader Yuga Tsukuda’s other projects. Here’s a cockroach-powered clock, about which the researchers note that “it is difficult to control the cockroaches when trying to control them by electrical stimulation because they move spontaneously. However, by cutting off the head and removing the brain, they do not move spontaneously and the control by the computer becomes easy.” So, zombie cockroaches. Good then.

And if that’s not enough for you, how about this:

The researchers describe this project as an “attempt to use cockroaches for makeup by sticking them on the face.” They stick electrodes into the cockroaches to make them wiggle their legs when electrical stimulation is applied. And the peacock feathers? They “make the cockroach movement bigger, and create a cosmic mystery.”

#437575 AI-Directed Robotic Hand Learns How to ...

Reaching for a nearby object seems like a mindless task, but the action requires a sophisticated neural network that took humans millions of years to evolve. Now, robots are acquiring that same ability using artificial neural networks. In a recent study, a robotic hand “learns” to pick up objects of different shapes and hardness using three different grasping motions.

The key to this development is something called a spiking neuron. Like real neurons in the brain, artificial neurons in a spiking neural network (SNN) fire together to encode and process temporal information. Researchers study SNNs because this approach may yield insights into how biological neural networks function, including our own.
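
For a rough sense of what “spiking” means here, the sketch below implements a textbook leaky integrate-and-fire neuron, the standard classroom model rather than the specific neuron model used in this study; all parameters are illustrative.

```python
import numpy as np

def lif_neuron(input_current, dt=1e-3, tau=20e-3, v_rest=0.0, v_thresh=1.0):
    """Leaky integrate-and-fire: integrate input, leak toward rest, spike at threshold."""
    v = v_rest
    spikes = []
    for i in input_current:
        v += dt * (-(v - v_rest) + i) / tau   # leak plus input integration
        if v >= v_thresh:
            spikes.append(True)               # emit a spike
            v = v_rest                        # reset the membrane potential
        else:
            spikes.append(False)
    return spikes

spike_train = lif_neuron(np.full(100, 1.5))   # constant drive produces regular spiking
print(sum(spike_train), "spikes in 100 ms")
```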

“The programming of humanoid or bio-inspired robots is complex,” says Juan Camilo Vasquez Tieck, a research scientist at FZI Forschungszentrum Informatik in Karlsruhe, Germany. “And classical robotics programming methods are not always suitable to take advantage of their capabilities.”

Conventional robotic systems must perform extensive calculations, Tieck says, to track trajectories and grasp objects. But a robotic system like Tieck’s, which relies on an SNN, first trains its neural net to better model system and object motions, after which it can grasp items more autonomously by adapting to motion in real time.

The new robotic system by Tieck and his colleagues uses an existing robotic hand, called a Schunk SVH 5-finger hand, which has the same number of fingers and joints as a human hand.

The researchers incorporated an SNN into their system, which is divided into several sub-networks. One sub-network controls each finger individually, either flexing or extending the finger. Another handles the type of grasping movement, for example whether the robotic hand will need to make a pinching, spherical, or cylindrical movement.

For each finger, a neural circuit detects contact with an object using the currents of the motors and the velocity of the joints. When contact with an object is detected, a controller is activated to regulate how much force the finger exerts.
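
Here is a rough sketch of that per-finger logic written as plain control code. In the paper this is done with spiking sub-networks rather than explicit if-statements, and the thresholds and the simple proportional force regulator below are placeholders.

```python
# Placeholder thresholds; contact is inferred when motor current rises while
# joint velocity collapses toward zero.
CURRENT_THRESHOLD = 0.8
VELOCITY_THRESHOLD = 0.05

def contact_detected(motor_current: float, joint_velocity: float) -> bool:
    return motor_current > CURRENT_THRESHOLD and abs(joint_velocity) < VELOCITY_THRESHOLD

def regulate_force(measured_force: float, target_force: float, gain: float = 0.5) -> float:
    """After contact, adjust the finger command to hold a target grip force."""
    return gain * (target_force - measured_force)

# Flex the finger until contact, then switch to force regulation.
if contact_detected(motor_current=1.1, joint_velocity=0.01):
    command = regulate_force(measured_force=0.6, target_force=1.0)
    print(f"force-regulation command: {command:+.2f}")
```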

“This way, the movements of generic grasping motions are adapted to objects with different shapes, stiffness and sizes,” says Tieck. The system can also adapt its grasping motion quickly if the object moves or deforms.

The robotic grasping system is described in a study published October 24 in IEEE Robotics and Automation Letters. The researchers’ robotic hand used its three different grasping motions on objects without knowing their properties. Target objects included a plastic bottle, a soft ball, a tennis ball, a sponge, a rubber duck, different balloons, a pen, and a tissue pack. The researchers found, for one, that pinching motions required more precision than cylindrical or spherical grasping motions.

“For this approach, the next step is to incorporate visual information from event-based cameras and integrate arm motion with SNNs,” says Tieck. “Additionally, we would like to extend the hand with haptic sensors.”

The long-term goal, he says, is to develop “a system that can perform grasping similar to humans, without intensive planning for contact points or intense stability analysis, and [that is] able to adapt to different objects using visual and haptic feedback.”
