Tag Archives: Science Robotics

#436984 Robots to the Rescue: How They Can Help ...

As the coronavirus pandemic forces people to keep their distance, could this be robots’ time to shine? A group of scientists think so, and they’re calling for robots to do the “dull, dirty, and dangerous jobs” of infectious disease management.

Social distancing has emerged as one of the most effective strategies for slowing the spread of COVID-19, but it’s also bringing many jobs to a standstill and severely restricting our daily lives. And unfortunately, the one group that can’t rely on its protective benefits is the medical and emergency services workers we’re relying on to save us.

Robots could be a solution, according to the editorial board of Science Robotics, by helping replace humans in a host of critical tasks, from disinfecting hospitals to collecting patient samples and automating lab tests.

According to the authors, the key areas where robots could help are clinical care, logistics, and reconnaissance, which refers to tasks like identifying the infected or making sure people comply with quarantines or social distancing requirements. Outside of the medical sphere, robots could also help keep the economy and infrastructure going by standing in for humans in factories or vital utilities like waste management or power plants.

When it comes to clinical care, robots can play important roles in disease prevention, diagnosis and screening, and patient care, the researchers say. Robots have already been widely deployed to disinfect hospitals and other public spaces either using UV light that kills bugs or by repurposing agricultural robots and drones to spray disinfectant, reducing the exposure of cleaning staff to potentially contaminated surfaces. They are also being used to carry out crucial deliveries of food and medication without exposing humans.

But they could also play an important role in tracking the disease, say the researchers. Thermal cameras combined with image recognition algorithms are already being used to detect potential cases at places like airports, but incorporating them into mobile robots or drones could greatly expand the coverage of screening programs.
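To make the screening idea concrete, here is a minimal sketch of how a frame from a robot- or drone-mounted thermal camera might be scanned for elevated skin temperatures. Everything in it (the radiometric frame format, the 38°C cutoff, the region-size filter) is an illustrative assumption rather than part of any deployed system.

```python
# Illustrative only: scan a radiometric thermal frame (per-pixel Celsius
# temperatures) for contiguous hot regions. The threshold and minimum region
# size are assumptions; real systems calibrate per camera and per setting.

import numpy as np

FEVER_THRESHOLD_C = 38.0   # illustrative cutoff
MIN_REGION_PIXELS = 25     # ignore tiny hot spots (reflections, noise)

def flag_hot_regions(frame_c: np.ndarray) -> list[tuple[int, int]]:
    """Return (row, col) centroids of regions at or above the threshold."""
    hot = frame_c >= FEVER_THRESHOLD_C
    visited = np.zeros_like(hot, dtype=bool)
    centroids = []
    rows, cols = hot.shape
    for r in range(rows):
        for c in range(cols):
            if hot[r, c] and not visited[r, c]:
                # Flood-fill the 4-connected region starting at (r, c).
                stack, region = [(r, c)], []
                visited[r, c] = True
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and hot[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            stack.append((ny, nx))
                if len(region) >= MIN_REGION_PIXELS:
                    ys, xs = zip(*region)
                    centroids.append((sum(ys) // len(ys), sum(xs) // len(xs)))
    return centroids
```

A fielded system would presumably pair detections like these with face detection and ambient-temperature compensation before flagging anyone for follow-up.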

A more complex challenge—but one that could significantly reduce medical workers’ exposure to the virus—would be to design robots that could automate the collection of nasal swabs used to test for COVID-19. Similarly, automated blood collection for tests could be of significant help, and researchers are already investigating using ultrasound to help robots locate veins to draw blood from.

Convincing people it’s safe to let a robot stick a swab up their nose or jab a needle in their arm might be a hard sell right now, but a potentially more realistic scenario would be to get robots to carry out laboratory tests on collected samples to reduce exposure to lab technicians. Commercial laboratory automation systems already exist, so this might be a more achievable near-term goal.

Not all solutions need to be automated, though. While autonomous systems will be helpful for reducing the workload of stretched health workers, remote systems can still provide useful distancing. Remotely controlled robotic systems are already becoming increasingly common in the delicate business of surgery, so it would be entirely feasible to create remote systems to carry out more prosaic medical tasks.

Such systems would make it possible for experts to contribute remotely in many different places without having to travel. And robotic systems could combine medical tasks like patient monitoring with equally important social interaction for people who may have been shut off from human contact.

In a teleconference last week Guang-Zhong Yang, a medical roboticist from Shanghai Jiao Tong University and founding editor of Science Robotics, highlighted the importance of including both doctors and patients in the design of these robots to ensure they are safe and effective, but also to make sure people trust them to observe social protocols and not invade their privacy.

But Yang also stressed the importance of putting the pieces in place to enable the rapid development and deployment of solutions. During the 2015 Ebola outbreak, the White House Office of Science and Technology Policy and the National Science Foundation organized workshops to identify where robotics could help deal with epidemics.

But once the threat receded, attention shifted elsewhere, and by the time the next pandemic came around little progress had been made on potential solutions. The result is that it’s unclear how much help robots will really be able to provide to the COVID-19 response.

That means it’s crucial to invest in a sustained research effort into this field, say the paper’s authors, with more funding and multidisciplinary research partnerships between government agencies and industry so that next time around we will be prepared.

“These events are rare and then it’s just that people start to direct their efforts to other applications,” said Yang. “So I think this time we really need to nail it, because without a sustained approach to this, history will repeat itself and robots won’t be ready.”

Image Credit: ABB’s YuMi collaborative robot. Image courtesy of ABB


#436466 How Two Robots Learned to Grill and ...

The list of things robots can do seems to be growing by the week. They can play sports, help us explore outer space and the deep sea, take over some of our boring everyday tasks, and even assemble Ikea furniture.

Now they can add one more accomplishment to the list: grilling and serving a hot dog.

It seems like a pretty straightforward task, and as far as grilling goes, hot dogs are about as easy as it gets (along with, maybe, burgers? Hot dogs require more rotation, but it’s easier to tell when they’re done since they’re lighter in color).

Let’s paint a picture: you’re manning the grill at your family’s annual Fourth of July celebration. You’ve got a 10-pack of plump, juicy beef franks and a hungry crowd of relatives whose food-to-alcohol ratio is getting pretty skewed—they need some solid calories, pronto. What are the steps you need to take to get those franks from package to plate?

Each one needs to be placed on the grill, rotated every couple minutes for even cooking, removed from the grill when you deem it’s done, then—if you’re the kind of guy or gal who goes the extra mile—placed in a bun and dressed with ketchup, mustard, pickles, and the like before being handed over to salivating, too-loud Uncle Hector or sweet, bored Cousin Margaret.

While carrying out your grillmaster duties, you know better than to drop the hot dogs on the ground, leave them cooking on one side for too long, squeeze them to the point of breaking or bursting, and any other hot-dog-ruining amateur moves.

But for a robot, that’s a lot to figure out, especially if it has no prior knowledge of grilling hot dogs (which, well, most robots don’t).

As described in a paper published in this week’s Science Robotics, a team from Boston University programmed two robotic arms to use reinforcement learning—a branch of machine learning in which software gathers information about its environment, then learns from it by replaying its experiences and incorporating rewards—to cook and serve hot dogs.

The team used a set of formulas to specify and combine tasks (“pick up hot dog and place on the grill”), meet safety requirements (“always avoid collisions”), and incorporate general prior knowledge (“you cannot pick up another hot dog if you are already holding one”).
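As a loose illustration of how such formulas might compose, here is a sketch in Python. The predicate names and the min-based conjunction are assumptions made for the example, not the authors’ actual specification language.

```python
# Illustrative only: toy predicates return "robustness" scores, positive when
# a condition holds, and a conjunction is as satisfied as its weakest clause.

def picked_up_hotdog(state) -> float:
    """Positive when the gripper is holding a hot dog."""
    return 1.0 if state["holding"] == "hotdog" else -1.0

def on_grill(state) -> float:
    """Positive when the held hot dog has been placed on the grill."""
    return 1.0 if state["hotdog_location"] == "grill" else -1.0

def no_collision(state) -> float:
    """Safety margin: distance to the nearest obstacle minus a 5 cm clearance."""
    return state["min_obstacle_distance"] - 0.05

def spec_and(*scores: float) -> float:
    """Logical AND over robustness scores."""
    return min(scores)

def reward(state) -> float:
    # "Pick up a hot dog and place it on the grill, always avoiding collisions."
    return spec_and(picked_up_hotdog(state), on_grill(state), no_collision(state))
```

Because the reward is assembled from named predicates, a designer can read off which clause is holding the score down at any moment, which is the kind of interpretability the authors are after.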

Baxter and Jaco—as the two robots were dubbed—were trained through computer simulations. The paper’s authors emphasized their use of what they call a “formal specification language” for training the software, with the aim of generating easily interpretable task descriptions. In reinforcement learning, they explain, being able to understand how a reward function influences an AI’s learning process is a key component in understanding the system’s behavior—but most systems lack this quality, and are thus likely to be lumped into the ‘black box’ of AI.

The robots’ decisions throughout the hot dog prep process—when to turn a hot dog, when to take it off the grill, and so on—are, the authors write, “easily interpretable from the beginning because the language is very similar to plain English.”

Besides being a step towards more explainable AI systems, Baxter and Jaco are another example of fast-food robots—following in the footsteps of their burger and pizza counterparts—that may take over some repetitive manual tasks currently performed by human workers. As robots’ capabilities improve through incremental progress like this, they’ll be able to take on additional tasks.

In a not-so-distant future, then, you just may find yourself throwing back drinks with Uncle Hector and Cousin Margaret while your robotic replacement mans the grill, churning out hot dogs that are perfectly cooked every time.

Image Credit: Image by Muhammad Ribkhan from Pixabay


#436426 Video Friday: This Robot Refuses to Fall ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

In case you somehow missed the massive Skydio 2 review we posted earlier this week, the first batches of the drone are now shipping. Each drone gets a lot of attention before it goes out the door, and here’s a behind-the-scenes clip of the process.

[ Skydio ]

Sphero RVR is one of the 15 robots on our robot gift guide this year. Here’s a new video Sphero just released showing some of the things you can do with the robot.

[ RVR ]

NimbRo-OP2 has some impressive recovery skills from the obligatory research-motivated robot abuse.

[ NimbRo ]

Teams seeking to qualify for the Virtual Urban Circuit of the Subterranean Challenge can access practice worlds to test their approaches prior to submitting solutions for the competition. This video previews three of the practice environments.

[ DARPA SubT ]

Stretchable skin-like robots that can be rolled up and put in your pocket have been developed by a University of Bristol team using a new way of embedding artificial muscles and electrical adhesion into soft materials.

[ Bristol ]

Happy Holidays from ABB!

Helping New York celebrate the festive season, twelve ABB robots are interacting with visitors to Bloomingdale’s iconic holiday celebration at their 59th Street flagship store. ABB’s robots are the main attraction in three of Bloomingdale’s twelve holiday window displays at Lexington and Third Avenue, as ABB demonstrates the potential for its robotics and automation technology to revolutionize visual merchandising and make the retail experience more dynamic and whimsical.

[ ABB ]

We introduce pelican eel–inspired dual-morphing architectures that embody quasi-sequential behaviors of origami unfolding and skin stretching in response to fluid pressure. In the proposed system, fluid paths were enclosed and guided by a set of entirely stretchable origami units that imitate the morphing principle of the pelican eel’s stretchable and foldable frames. This geometric and elastomeric design of fluid networks, in which fluid pressure acts in the direction that the whole body deploys first, resulted in a quasi-sequential dual-morphing response. To verify the effectiveness of our design rule, we built an artificial creature mimicking a pelican eel and reproduced biomimetic dual-morphing behavior.

And here’s a real pelican eel:

[ Science Robotics ]

Delft Dynamics’ updated anti-drone system involves a tether, mid-air net gun, and even a parachute.

[ Delft Dynamics ]

Teleoperation is a great way of helping robots with complex tasks, especially if you can do it through motion capture. But what if you’re teleoperating a non-anthropomorphic robot? Columbia’s ROAM Lab is working on it.

[ Paper ] via [ ROAM Lab ]

I don’t know how I missed this video last year because it’s got a steely robot hand squeezing a cute lil’ chick.

[ MotionLib ] via [ RobotStart ]

In this video we present results of a trajectory generation method for autonomous overtaking of unexpected obstacles in a dynamic urban environment. In these settings, blind spots can arise from perception limitations, for example when overtaking unexpected objects in the vehicle’s ego lane on a two-way street. In this case, a human driver would first make sure that the opposite lane is free and that there is enough room to successfully execute the maneuver, and only then cut into the opposite lane. We consider the practical problem of autonomous overtaking when the coverage of the perception system is impaired due to occlusion.

[ Paper ]

New weirdness from Toio!

[ Toio ]

Palo Alto City Library won a technology innovation award! Watch to see how Senior Librarian Dan Lou is using Misty to enhance their technology programs to inspire and educate customers.

[ Misty Robotics ]

We consider the problem of reorienting a rigid object with arbitrary known shape on a table using a two-finger pinch gripper. The reorienting problem is challenging because of its non-smoothness and high dimensionality. In this work, we focus on solving reorienting using pivoting, in which we allow the grasped object to rotate between fingers. Pivoting decouples the gripper rotation from the object motion, making it possible to reorient an object under strict robot workspace constraints.

[ CMU ]

How can a mobile robot be a good pedestrian without bumping into you on the sidewalk? It’s hard for a robot to navigate crowded environments, since the flow of foot traffic follows implied social rules. Researchers from MIT developed an algorithm that teaches mobile robots to maneuver in crowds of people while respecting their natural behavior.

[ Roboy Research Reviews ]

What happens when humans and robots make art together? In this awe-inspiring talk, artist Sougwen Chung shows how she “taught” her artistic style to a machine — and shares the results of their collaboration after making an unexpected discovery: robots make mistakes, too. “Part of the beauty of human and machine systems is their inherent, shared fallibility,” she says.

[ TED ]

Last month at the Cooper Union in New York City, IEEE TechEthics hosted a public panel session on the facts and misperceptions of autonomous vehicles, part of the IEEE TechEthics Conversations Series. The speakers were: Jason Borenstein from Georgia Tech; Missy Cummings from Duke University; Jack Pokrzywa from SAE; and Heather M. Roff from Johns Hopkins Applied Physics Laboratory. The panel was moderated by Mark A. Vasquez, program manager for IEEE TechEthics.

[ IEEE TechEthics ]

Two videos this week from Lex Fridman’s AI podcast: Noam Chomsky, and Whitney Cummings.

[ AI Podcast ]

This week’s CMU RI Seminar comes from Jeff Clune at the University of Wyoming, on “Improving Robot and Deep Reinforcement Learning via Quality Diversity and Open-Ended Algorithms.”

Quality Diversity (QD) algorithms are those that seek to produce a diverse set of high-performing solutions to problems. I will describe them and a number of their positive attributes. I will then summarize our Nature paper on how they, when combined with Bayesian Optimization, produce a learning algorithm that enables robots, after being damaged, to adapt in 1-2 minutes in order to continue performing their mission, yielding state-of-the-art robot damage recovery. I will next describe our QD-based Go-Explore algorithm, which dramatically improves the ability of deep reinforcement learning algorithms to solve previously unsolvable problems wherein reward signals are sparse, meaning that intelligent exploration is required. Go-Explore solves Montezuma’s Revenge, considered by many to be a major AI research challenge. Finally, I will motivate research into open-ended algorithms, which seek to innovate endlessly, and introduce our POET algorithm, which generates its own training challenges while learning to solve them, automatically creating curricula for robots to learn an expanding set of diverse skills. POET creates and solves challenges that are unsolvable with traditional deep reinforcement learning techniques.

[ CMU RI ]


#436155 This MIT Robot Wants to Use Your ...

MIT researchers have demonstrated a new kind of teleoperation system that allows a two-legged robot to “borrow” a human operator’s physical skills to move with greater agility. The system works a bit like those haptic suits from the Spielberg movie “Ready Player One.” But while the suits in the film were used to connect humans to their VR avatars, the MIT suit connects the operator to a real robot.

The robot is called Little HERMES, and it’s currently just a pair of little legs, about a third the size of an average adult. It can step and jump in place or walk a short distance while supported by a gantry. While that in itself is not very impressive, the researchers say their approach could help bring capable disaster robots closer to reality. They explain that, despite recent advances, building fully autonomous robots with motor and decision-making skills comparable to those of humans remains a challenge. That’s where a more advanced teleoperation system could help.

The researchers, João Ramos, now an assistant professor at the University of Illinois at Urbana-Champaign, and Sangbae Kim, director of MIT’s Biomimetic Robotics Lab, describe the project in this week’s issue of Science Robotics. In the paper, they argue that existing teleoperation systems often can’t effectively match the operator’s motions to those of the robot. In addition, conventional systems provide no physical feedback to the human teleoperator about what the robot is doing. Their new approach addresses these two limitations, and to see how it would work in practice, they built Little HERMES.

Image: Science Robotics

The main components of MIT’s bipedal robot Little HERMES: (A) Custom actuators designed to withstand impact and capable of producing high torque. (B) Lightweight limbs with low inertia and fast leg swing. (C) Impact-robust and lightweight foot sensors with a three-axis contact force sensor. (D) Ruggedized IMU to estimate the robot’s torso posture, angular rate, and linear acceleration. (E) Real-time computer sbRIO 9606 from National Instruments for robot control. (F) Two three-cell lithium-polymer batteries in series. (G) Rigid and lightweight frame to minimize the robot’s mass.

Early this year, the MIT researchers wrote an in-depth article for IEEE Spectrum about the project, which includes Little HERMES and also its big brother, HERMES (for Highly Efficient Robotic Mechanisms and Electromechanical System). In that article, they describe the two main components of the system:

[…] We are building a telerobotic system that has two parts: a humanoid capable of nimble, dynamic behaviors, and a new kind of two-way human-machine interface that sends your motions to the robot and the robot’s motions to you. So if the robot steps on debris and starts to lose its balance, the operator feels the same instability and instinctively reacts to avoid falling. We then capture that physical response and send it back to the robot, which helps it avoid falling, too. Through this human-robot link, the robot can harness the operator’s innate motor skills and split-second reflexes to keep its footing.

You could say we’re putting a human brain inside the machine.

Image: Science Robotics

The human-machine interface built by the MIT researchers for controlling Little HERMES is different from conventional ones in that it relies on the operator’s reflexes to improve the robot’s stability. The researchers call it the balance-feedback interface, or BFI. The main modules of the BFI include: (A) Custom interface attachments for torso and feet designed to capture human motion data at high speed (1 kHz). (B) Two underactuated modules to track the position and orientation of the torso and apply forces to the operator. (C) Each actuation module has three DoFs, one of which is a push/pull rod actuated by a DC brushless motor. (D) A series of linkages with passive joints connected to the operator’s feet to track their spatial translation. (E) Real-time controller cRIO 9082 from National Instruments to close the BFI control loop. (F) Force plate to estimate the operator’s center of pressure position and measure the shear and normal components of the operator’s net contact force.

Here’s more footage of the experiments, showing Little HERMES stepping and jumping in place, walking a few steps forward and backward, and balancing. Watch until the end to see a compilation of unsuccessful stepping experiments. Poor Little HERMES!

In the new Science Robotics paper, the MIT researchers explain how they solved one of the key challenges in making their teleoperation system effective:

The challenge of this strategy lies in properly mapping human body motion to the machine while simultaneously informing the operator how closely the robot is reproducing the movement. Therefore, we propose a solution for this bilateral feedback policy to control a bipedal robot to take steps, jump, and walk in synchrony with a human operator. Such dynamic synchronization was achieved by (i) scaling the core components of human locomotion data to robot proportions in real time and (ii) applying feedback forces to the operator that are proportional to the relative velocity between human and robot.
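As a rough sketch of those two ingredients, here is a one-dimensional toy version in Python. The gains, heights, and scalar center-of-mass states are invented for the example; the real controller handles full 3D dynamics on dedicated real-time hardware.

```python
# Toy 1D version of the paper's two ideas: (i) scale human motion to robot
# proportions, and (ii) feed back a force proportional to the human-robot
# relative velocity. All constants below are assumptions for the sketch.

HUMAN_HEIGHT_M = 1.70   # illustrative operator height
ROBOT_HEIGHT_M = 0.55   # Little HERMES is roughly a third of adult size
SCALE = ROBOT_HEIGHT_M / HUMAN_HEIGHT_M

K_FEEDBACK_N_PER_MPS = 120.0  # assumed feedback gain

def robot_com_setpoint(human_com_position_m: float) -> float:
    """(i) Map the operator's center-of-mass position to robot proportions."""
    return SCALE * human_com_position_m

def operator_feedback_force(human_com_velocity_mps: float,
                            robot_com_velocity_mps: float) -> float:
    """(ii) Force applied to the operator, proportional to relative velocity.

    If the robot lags behind or starts to tip, the operator feels a push,
    reflexively shifts their weight, and that corrective motion is sent
    back to the robot.
    """
    velocity_error = robot_com_velocity_mps / SCALE - human_com_velocity_mps
    return K_FEEDBACK_N_PER_MPS * velocity_error
```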

Little HERMES is now taking its first steps, quite literally, but the researchers say they hope to use robotic legs with a similar design as part of a more advanced humanoid. One possibility they’ve envisioned is a fast-moving quadruped robot that could run through various kinds of terrain and then transform into a bipedal robot that would use its hands to perform dexterous manipulations. This could involve merging some of the robots the MIT researchers have built in their lab, possibly creating hybrids between Cheetah and HERMES, or Mini Cheetah and Little HERMES. We can’t wait to see what the resulting robots will look like.

[ Science Robotics ]


#435816 This Light-based Nervous System Helps ...

Last night, way past midnight, I stumbled onto my porch blindly grasping for my keys after a hellish day of international travel. Lights were low, I was half-asleep, yet my hand grabbed the keychain, found the lock, and opened the door.

If you’re rolling your eyes—yeah, it’s not exactly an epic feat for a human. Thanks to the intricate wiring between our brain and millions of sensors dotted on—and inside—our skin, we know exactly where our hand is in space and what it’s touching without needing visual confirmation. But this combined sense of the internal and the external is completely lost to robots, which generally rely on computer vision or surface mechanosensors to track their movements and their interaction with the outside world. It’s not always a winning strategy.

What if, instead, we could give robots an artificial nervous system?

This month, a team led by Dr. Rob Shepherd at Cornell University did just that, with a seriously clever twist. Rather than mimicking the electric signals in our nervous system, his team turned to light. By embedding optical fibers inside a 3D printed stretchable material, the team engineered an “optical lace” that can detect pressure changes smaller than a fraction of a pound, and pinpoint the location to a spot half the width of a tiny needle.

The invention isn’t just an artificial skin. Instead, the delicate fibers can be distributed both inside a robot and on its surface, giving it both a sense of tactile touch and—most importantly—an idea of its own body position in space. Optical lace isn’t a superficial coating of mechanical sensors; it’s an entire platform that may finally endow robots with nerve-like networks throughout the body.

Eventually, engineers hope to use this fleshy, washable material to coat the sharp, cold metal interior of current robots, bringing them closer to the human-like hosts of Westworld than to C-3PO. Robots with a “bodily” sense could act as better caretakers for the elderly, said Shepherd, because they can assist fragile people without inadvertently bruising or otherwise harming them. The results were published in Science Robotics.

An Unconventional Marriage
The optical lace is especially creative because it marries two contrasting ideas: one biologically inspired, the other wholly alien.

The overarching idea for optical lace is based on the animal kingdom. Through sight, hearing, smell, taste, touch, and other senses, we’re able to interpret the outside world—something scientists call exteroception. Thanks to our nervous system, we perform these computations subconsciously, allowing us to constantly “perceive” what’s going on around us.

Our other perception is purely internal. Proprioception (sorry, it’s not called “inception” though it should be) is how we know where our body parts are in space without having to look at them, which lets us perform complex tasks when blind. Although less intuitive than exteroception, proprioception works similarly: receptors within our muscles, tendons, and skin detect stretching and other deformations, generating electrical signals that shoot up into the brain for further interpretation.

In other words, in theory it’s possible to recreate both perceptions with a single information-carrying system.

Here’s where the alien factor comes in. Rather than using electrical properties, the team turned to light as their data carrier. They had good reason. “Compared with electricity, light carries information faster and with higher data densities,” the team explained. Light can also transmit in multiple directions simultaneously, and is less susceptible to electromagnetic interference. Although optical nervous systems don’t exist in the biological world, the team decided to improve on Mother Nature and give it a shot.

Optical Lace
The construction starts with engineering a “sheath” for the optical nerve fibers. The team first used an elastic polyurethane—a synthetic material used in foam cushioning, for example—to make a lattice structure filled with large pores, somewhat like a lattice pie crust. Thanks to rapid, high-resolution 3D printing, the scaffold can vary in stiffness from top to bottom. To increase sensitivity to the outside world, the team made the top of the lattice soft and pliable, to better transfer force to the mechanical sensors. In contrast, the “deeper” regions were stiffer and held their shape under pressure.

Now the fun part. The team next threaded stretchable “light guides” into the scaffold. These fibers transmit photons, and are illuminated with a blue LED light. One, the input light guide, ran horizontally across the soft top part of the scaffold. Others ran perpendicular to the input in a “U” shape, going from more surface regions to deeper ones. These are the output guides. The architecture loosely resembles the wiring in our skin and flesh.

Normally, the output guides are separated from the input by a small air gap. When pressed down, the input light fiber distorts slightly, and if the pressure is high enough, it contacts one of the output guides. This causes light from the input fiber to “leak” to the output one, so that it lights up—the stronger the pressure, the brighter the output.

“When the structure deforms, you have contact between the input line and the output lines, and the light jumps into these output loops in the structure, so you can tell where the contact is happening,” said study author Patricia Xu. “The intensity of this determines the intensity of the deformation itself.”
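As a toy illustration of how readings like these might be decoded, here is a sketch that estimates contact position and force from a vector of output-guide intensities. The channel positions and the calibration constant are invented for the example, not taken from the paper.

```python
# Toy decoder for an optical lace with one input fiber and twelve output
# guides. Assumes `intensities` holds one brightness reading per output
# guide; the positions and calibration constant below are made up.

import numpy as np

OUTPUT_POSITIONS_MM = np.linspace(0.0, 110.0, 12)  # where each "U" loop crosses the input
NEWTONS_PER_INTENSITY = 0.8                        # assumed calibration factor

def locate_contact(intensities: np.ndarray) -> tuple[float, float]:
    """Return (position_mm, force_n) estimated from output-guide brightness.

    The brightest guide is nearest the contact; an intensity-weighted
    average over it and its neighbors refines the position, and the total
    leaked light approximates how hard the lattice is being pressed.
    """
    brightest = int(np.argmax(intensities))
    lo, hi = max(0, brightest - 1), min(len(intensities), brightest + 2)
    position_mm = float(np.average(OUTPUT_POSITIONS_MM[lo:hi],
                                   weights=intensities[lo:hi]))
    force_n = float(intensities.sum() * NEWTONS_PER_INTENSITY)
    return position_mm, force_n
```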

Double Perception
As a proof-of-concept for proprioception, the team made a cylindrical lace with one input and 12 output channels. They varied the stiffness of the scaffold along the cylinder, and by pressing down at different points, were able to calculate how much each part stretched and deformed—a necessary precursor to knowing where different regions of the structure are moving in space. It’s a very rudimentary sort of proprioception, but one that will become more sophisticated with increasing numbers of strategically placed mechanosensors.

The test for exteroception was a whole lot stranger. Here, the team engineered another optical lace with 15 output channels and turned it into a squishy piano. When the lace was pressed, an Arduino microcontroller translated the light output signals into sound based on the position of each touch. The stronger the pressure, the louder the volume. While not a musical masterpiece, the demo proved their point: the optical lace faithfully reported the strength and location of each touch.
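A rough sketch of that piano logic, with stand-in note frequencies, a stand-in threshold, and a generic `play_tone` callback rather than the team’s actual Arduino code:

```python
# Illustrative only: map each of the 15 output channels to a note and let
# brightness (i.e., pressure) set the volume. The frequencies and threshold
# are stand-ins, and `play_tone` is a hypothetical audio callback.

NOTE_FREQS_HZ = [262 + 25 * i for i in range(15)]  # rough ascending scale
TOUCH_THRESHOLD = 0.1  # minimum normalized intensity to count as a press

def piano_step(intensities, play_tone):
    """Call play_tone(freq_hz, volume) for every channel pressed this frame."""
    for channel, level in enumerate(intensities):
        if level > TOUCH_THRESHOLD:
            volume = min(1.0, level)  # harder press -> more leaked light -> louder
            play_tone(NOTE_FREQS_HZ[channel], volume)
```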

A More Efficient Robot
Although remarkably novel, the optical lace isn’t yet ready for prime time. One problem is scalability: because of light loss, the material is limited to a certain size. However, rather than coating an entire robot, it may help to add optical lace to body parts where perception is critical—for example, fingertips and hands.

The team sees plenty of potential to keep developing the artificial flesh. Depending on particular needs, both the light guides and scaffold can be modified for sensitivity, spatial resolution, and accuracy. Multiple optical fibers that measure for different aspects—pressure, pain, temperature—can potentially be embedded in the same region, giving robots a multitude of senses.

In this way, the authors said, they hope to reduce the amount of electronics needed and combine signals from multiple sensors without losing information. By taking inspiration from biological networks, it may even be possible to use various inputs through an optical lace to control how the robot behaves, closing the loop from sensation to action.

Image Credit: Cornell Organic Robotics Lab. A flexible, porous lattice structure is threaded with stretchable optical fibers containing more than a dozen mechanosensors and attached to an LED light. When the lattice structure is pressed, the sensors pinpoint changes in the photon flow.
