
#437707 Video Friday: This Robot Will Restock ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today's videos.

Tokyo startup Telexistence has recently unveiled a new robot called the Model-T, an advanced teleoperated humanoid that can use tools and grasp a wide range of objects. Japanese convenience store chain FamilyMart plans to test the Model-T to restock shelves in up to 20 stores by 2022. In the trial, a human “pilot” will operate the robot remotely, handling items like beverage bottles, rice balls, sandwiches, and bento boxes.

With the Model-T and AWP, FamilyMart and TX aim to realize a completely new style of store operation by remotely operating and automating the merchandise restocking work, which requires a large number of labor-hours. As a result, stores can operate with fewer workers and recruit employees regardless of the store’s physical location.

[ Telexistence ]

Quadruped dance-off should be a new robotics competition at IROS or ICRA.

I dunno though, that moonwalk might keep Spot in the lead…

[ Unitree ]

Through a hybrid of simulation and real-life training, this air muscle robot is learning to play table tennis.

Table tennis requires executing fast and precise motions. To gain precision, it is necessary to explore in these high-speed regimes; however, exploration can be safety-critical at the same time. The combination of RL and muscular soft robots makes it possible to close this gap. While robots actuated by pneumatic artificial muscles generate the high forces required for, e.g., smashing, they also offer safe execution of explosive motions due to antagonistic actuation.

To enable practical training without real balls, we introduce Hybrid Sim and Real Training (HYSR) that replays prerecorded real balls in simulation while executing actions on the real system. In this manner, RL can learn the challenging motor control of the PAM-driven robot while executing ~15000 hitting motions.
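For a sense of how HYSR fits together, here is a minimal sketch of one training episode: the policy sends pressure commands to the real muscle-driven robot while a prerecorded ball trajectory is replayed step by step in simulation. All names here (BallReplay, robot.apply_pressures, reward_fn, and the reward terms) are illustrative assumptions, not the authors’ actual API.

```python
import numpy as np

class BallReplay:
    """Replays prerecorded real ball trajectories, one step at a time."""
    def __init__(self, trajectories):
        self.trajectories = trajectories  # list of (T, 3) position arrays

    def sample(self):
        # Pick one recorded rally and yield its ball positions in order.
        traj = self.trajectories[np.random.randint(len(self.trajectories))]
        yield from traj

def hysr_episode(policy, robot, replay, reward_fn):
    """One HYSR episode: the real robot acts, the virtual ball is replayed."""
    total_reward = 0.0
    for ball_pos in replay.sample():
        # State combines the real robot's measurements with the
        # simulated (replayed) ball position.
        state = np.concatenate([robot.joint_state(), ball_pos])
        action = policy.act(state)        # pressures for the air muscles
        robot.apply_pressures(action)     # executed on the real system
        # Reward is computed against the virtual ball, e.g. based on
        # racket-ball distance and where a simulated return would land.
        total_reward += reward_fn(robot.racket_pose(), ball_pos)
    return total_reward
```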

[ Max Planck Institute ]

Thanks Dieter!

Anthony Cowley wrote in to share his recent thesis work on UPSLAM, a fast and lightweight SLAM technique that records data in panoramic depth images (just PNGs) that are easy to visualize and even easier to share between robots, even on low-bandwidth networks.
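The appeal of PNG-backed depth panoramas is easy to see in code. Below is a minimal sketch, not UPSLAM’s actual format, of round-tripping a metric depth panorama through an ordinary 16-bit PNG; the millimeter quantization is an assumption for illustration.

```python
import numpy as np
import cv2

def depth_to_png(depth_m, path, max_range_m=65.535):
    """Quantize metric depth (meters) to millimeters, save as 16-bit PNG."""
    depth_mm = np.clip(depth_m, 0.0, max_range_m) * 1000.0
    cv2.imwrite(path, np.round(depth_mm).astype(np.uint16))  # lossless PNG

def png_to_depth(path):
    """Recover metric depth from the 16-bit PNG."""
    return cv2.imread(path, cv2.IMREAD_UNCHANGED).astype(np.float32) / 1000.0

# A 1024x512 equirectangular depth panorama round-trips to within 1 mm
# and typically compresses to a few hundred kilobytes -- small enough to
# share between robots over a low-bandwidth network.
pano = np.random.uniform(0.5, 30.0, size=(512, 1024)).astype(np.float32)
depth_to_png(pano, "keyframe_0001.png")
restored = png_to_depth("keyframe_0001.png")
assert np.allclose(pano, restored, atol=1e-3)
```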

[ UPenn ]

Thanks Anthony!

GITAI’s G1 is a general-purpose robot dedicated to space. The G1 will enable automation of various tasks inside and outside space stations, as well as for lunar base development.

[ Gitai ]

The University of Michigan has a fancy new treadmill that’s built right into the floor, which proves to be a bit much for Mini Cheetah.

But Cassie Blue won’t get stuck on no treadmill! She goes for a 0.3-mile walk across campus, which ends when a certain someone runs the gantry into Cassie Blue’s foot.

[ Michigan Robotics ]

Some serious quadruped research going on at UT Austin Human Centered Robotics Lab.

[ HCRL ]

Will Burrard-Lucas has spent lockdown upgrading his slightly indestructible BeetleCam wildlife photographing robot.

[ Will Burrard-Lucas ]

Teleoperated surgical robots are becoming commonplace in operating rooms, but many are massive (sometimes taking up an entire room) and are difficult to manipulate, especially if a complication arises and the robot needs to be removed from the patient. A new collaboration between the Wyss Institute, Harvard University, and Sony Corporation has created the mini-RCM, a surgical robot the size of a tennis ball that weighs as much as a penny and performed significantly better than manually operated tools in delicate mock-surgical procedures. Importantly, its small size means it is more comparable to the human tissues and structures on which it operates, and it can easily be removed by hand if needed.

[ Harvard Wyss ]

Yaskawa appears to be working on a robot that can scan you with a temperature gun and then jam a mask on your face?

[ Motoman ]

Maybe we should just not have people working in mines anymore, how about that?

[ Exyn ]

Many current human-robot interactive systems tend to use accurate and fast – but also costly – actuators and tracking systems to establish working prototypes that are safe to use and deploy for user studies. This paper presents an embedded framework to build a desktop space for human-robot interaction, using an open-source robot arm as well as two RGB cameras connected to a Raspberry Pi-based controller, allowing fast yet low-cost object tracking and manipulation in 3D. We show in our evaluations that this facilitates prototyping a number of systems in which user and robot arm can jointly interact with physical objects.
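To make the idea concrete, here is a minimal sketch of how two calibrated RGB cameras can yield low-cost 3D tracking via color-blob detection and triangulation. It uses standard OpenCV calls; the color thresholds and the projection matrices (which would come from ordinary checkerboard calibration) are placeholders, not the paper’s actual pipeline.

```python
import numpy as np
import cv2

def detect_centroid(frame_bgr, lower_hsv, upper_hsv):
    """Return the pixel centroid of the largest color blob, or None."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, lower_hsv, upper_hsv)
    moments = cv2.moments(mask)
    if moments["m00"] < 1e-3:
        return None  # object not visible in this camera
    return np.array([moments["m10"] / moments["m00"],
                     moments["m01"] / moments["m00"]])

def triangulate(p1_px, p2_px, P1, P2):
    """Recover one 3D point from its 2D observations in both cameras.

    P1, P2 are the 3x4 projection matrices from camera calibration.
    """
    X_h = cv2.triangulatePoints(P1, P2,
                                p1_px.reshape(2, 1).astype(np.float64),
                                p2_px.reshape(2, 1).astype(np.float64))
    return (X_h[:3] / X_h[3]).ravel()  # homogeneous -> Euclidean (meters)
```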

[ Paper ]

IBM Research is proud to host professor Yoshua Bengio — one of the world’s leading experts in AI — in a discussion of how AI can contribute to the fight against COVID-19.

[ IBM Research ]

Ira Pastor, ideaXme life sciences ambassador, interviews Professor Hiroshi Ishiguro, director of the Intelligent Robotics Laboratory in the Department of Systems Innovation, Graduate School of Engineering Science, at Osaka University, Japan.

[ ideaXme ]

A CVPR talk from Stanford’s Chelsea Finn on “Generalization in Visuomotor Learning.”

[ Stanford ]


#437667 17 Teams to Take Part in DARPA’s ...

Among all of the other in-person events that have been totally wrecked by COVID-19 is the Cave Circuit of the DARPA Subterranean Challenge. DARPA has already hosted the in-person events for the Tunnel and Urban SubT circuits (see our previous coverage here), and the plan had always been for a trio of events representing three uniquely different underground environments in advance of the SubT Finals, which will somehow combine everything into one bonkers course.

While the SubT Urban Circuit event snuck in just under the lockdown wire in late February, DARPA made the difficult (but prudent) decision to cancel the in-person Cave Circuit event. What this means is that there will be no Systems Track Cave competition, which is a serious disappointment—we were very much looking forward to watching teams of robots navigating through an entirely unpredictable natural environment with a lot of verticality. Fortunately, DARPA is still running a Virtual Cave Circuit, and 17 teams will be taking part in this competition featuring a simulated cave environment that’s as dynamic and detailed as DARPA can make it.

From DARPA’s press releases:

DARPA’s Subterranean (SubT) Challenge will host its Cave Circuit Virtual Competition, which focuses on innovative solutions to map, navigate, and search complex, simulated cave environments, on November 17. Qualified teams have until October 15 to develop and submit software-based solutions for the Cave Circuit via the SubT Virtual Portal, where their technologies will face unknown cave environments in the cloud-based SubT Simulator. Until then, teams can refine their roster of selected virtual robot models, choose sensor payloads, and continue to test autonomy approaches to maximize their score.

The Cave Circuit also introduces new simulation capabilities, including digital twins of Systems Competition robots to choose from, marsupial-style platforms combining air and ground robots, and breadcrumb nodes that can be dropped by robots to serve as communications relays. Each robot configuration has an associated cost, measured in SubT Credits – an in-simulation currency – based on performance characteristics such as speed, mobility, sensing, and battery life.

Each team’s simulated robots must navigate realistic caves, with features including natural terrain and dynamic rock falls, while they search for and locate various artifacts on the course to within five meters of accuracy to score points during a 60-minute timed run. A correct report is worth one point. Each course contains 20 artifacts, which means each team has the potential for a maximum score of 20 points. Teams can leverage numerous practice worlds, and even build their own worlds using the cave tiles found in the SubT Tech Repo, to perfect their approach before they submit one official solution for scoring. The DARPA team will then evaluate the solution on a set of hidden competition scenarios.
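The scoring rule is simple enough to sketch. Below is an illustrative implementation, not DARPA’s actual scoring code, of the rule described above: a report earns one point when its artifact type matches and its reported position falls within five meters of an as-yet-unclaimed ground-truth artifact.

```python
import numpy as np

def score_run(reports, ground_truth, tol_m=5.0):
    """reports, ground_truth: lists of (artifact_type, xyz ndarray) pairs."""
    claimed = [False] * len(ground_truth)
    points = 0
    for r_type, r_xyz in reports:
        for i, (g_type, g_xyz) in enumerate(ground_truth):
            # A correct report: right type, within 5 m, not already scored.
            if (not claimed[i] and r_type == g_type
                    and np.linalg.norm(r_xyz - g_xyz) <= tol_m):
                claimed[i] = True
                points += 1
                break
    return points  # at most 20, with 20 artifacts on the course
```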

Of the 17 qualified teams (you can see all of them here), there are a handful that we’ll quickly point out. Team BARCS, from Michigan Tech, was the winner of the SubT Virtual Urban Circuit, meaning that they may be the team to beat on Cave as well, although the course is likely to be unique enough that things will get interesting. Some Systems Track teams to watch include Coordinated Robotics, CTU-CRAS-NORLAB, MARBLE, NUS SEDS, and Robotika, and there are also a handful of brand new teams as well.

Now, just because there’s no dedicated Cave Circuit for the Systems Track teams, it doesn’t mean that there won’t be a Cave component (perhaps even a significant one) in the final event, which as far as we know is still scheduled to happen in fall of next year. We’ve heard that many of the Systems Track teams have been testing out their robots in caves anyway, and as the virtual event gets closer, we’ll be doing a sort of Virtual Systems Track series that highlights how different teams are doing mock Cave Circuits in caves they’ve found for themselves.

For more, we checked in with DARPA SubT program manager Dr. Timothy H. Chung.

IEEE Spectrum: Was it a difficult decision to cancel the Systems Track for Cave?

Tim Chung: The decision to go virtual only was heart-wrenching, because I think DARPA’s role is to offer up opportunities that may be unimaginable for some of our competitors, like opening up a cave-type site for this competition. We crawled and climbed through a number of these sites, and I share the sense of disappointment that both our team and the competitors have that we won’t be able to share all the advances that have been made since the Urban Circuit. But what we’ve been able to do is pour a lot of our energy and the insights that we got from crawling around in those caves into what’s going to be a really great opportunity on the Virtual Competition side. And whether it’s a global pandemic, or just lack of access to physical sites like caves, virtual environments are an opportunity that we want to develop.

“The simulator offers us a chance to look at where things could be … it really allows for us to find where some of those limits are in the technology based only on our imagination.”
—Timothy H. Chung, DARPA

What kind of new features will be included in the Virtual Cave Circuit for this competition?

I’m really excited about these particular features because we’re seeing an opportunity for increased synergy between the physical and the virtual. The first I’d say is that we scanned some of the Systems Track robots using photogrammetry and combined that with some additional models that we got from the systems competitors themselves to turn their systems robots into virtual models. We often talk about the sim to real transfer and how successful we can get a simulation to transfer over to the physical world, but now we’ve taken something from the physical world and made it virtual. We’ve validated the controllers as well as the kinematics of the robots, we’ve iterated with the systems competitors themselves, and now we have these 13 robots (air and ground) in the SubT Tech Repo that now all virtual competitors can take advantage of.

We also have additional robot capability. Those comms breadcrumbs are common among many of the competitors, so we’ve adopted that in the virtual world, and now you have comms relay nodes that are baked into the SubT Simulator—you can have either six or twelve comms nodes that you can drop from a variety of our ground robot platforms. We also have the marsupial deployment capability, so parent ground robots can be mixed and matched with different child drones to become marsupial pairs.

And this is something I’ve been planning for a while: we now have the ability to trigger things like rock falls. They still don’t quite look like Indiana Jones with the boulder coming down the corridor, but this comes really close. In addition to it just being an interesting and realistic consideration, we get to really dynamically test and stress the robots’ ability to navigate and recognize that something has changed in the environment and respond to it.

Image: DARPA

DARPA is still running a Virtual Cave Circuit, and 17 teams will be taking part in this competition featuring a simulated cave environment.

No simulation is perfect, so can you talk to us about what kinds of things aren’t being simulated right now? Where does the simulator not match up to reality?

I think that question is foundational to any conversation about simulation. I’ll give you a couple of examples:

We have the ability to represent wholesale damage to a robot, but it’s not at the actuator or component level. So there’s not a reliability model, although I think that would be really interesting to incorporate so that you could do assessments on things like mean time to failure. But if a robot falls off a ledge, it can be disabled by virtue of being too damaged to continue.

With communications, and this is one that’s near and dear not only to my heart but also to all of those that have lived through developing communication systems and robotic systems, we’ve gone through and conducted RF surveys of underground environments to get a better handle on what the propagation effects are. There’s a lot of research that has gone into this, and trying to carry through some of that realism, we do have path loss models for RF communications baked into the SubT Simulator. For example, when you drop a breadcrumb node, it’s using a path loss model so that it can represent the degradation of signal as you go farther into a cave. Now, we’re not modeling it at the Maxwell equations level, which I think would be awesome, but we’re not quite there yet.
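For illustration, here is what a textbook log-distance path loss model looks like; the simulator presumably uses something in this family, though the exponent, reference loss, and shadowing values below are generic assumptions rather than DARPA’s parameters.

```python
import numpy as np

def received_power_dbm(tx_dbm, dist_m, n=2.5, pl0_db=40.0, d0_m=1.0,
                       shadow_sigma_db=4.0, rng=None):
    """Log-distance path loss with optional log-normal shadowing.

    n is the path loss exponent (higher in cluttered tunnels), pl0_db
    is the loss at the reference distance d0_m.
    """
    rng = rng or np.random.default_rng()
    path_loss = pl0_db + 10.0 * n * np.log10(max(dist_m, d0_m) / d0_m)
    shadowing = rng.normal(0.0, shadow_sigma_db)
    return tx_dbm - path_loss - shadowing

# A link is usable only while received power stays above the radio's
# sensitivity; deeper into the cave, a robot must drop another breadcrumb.
link_ok = received_power_dbm(20.0, dist_m=150.0) > -95.0
```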

We do have things like battery depletion, sensor degradation to the extent that simulators can degrade sensor inputs, and things like that. It’s just amazing how close we can get in some places, and how far away we still are in others, and I think showing where the limits are of how far you can take simulation is all part and parcel of why the SubT Challenge wants to have both Systems and Virtual tracks. Simulation can be an accelerant, but it’s not going to be the panacea for development and innovation, and I think all the competitors are cognizant of those limitations.

One of the most amazing things about the SubT Virtual Track is that all of the robots operate fully autonomously, without the human(s) in the loop that the Systems Track teams have when they compete. Why make the Virtual Track even more challenging in that way?

I think it’s one of the defining, delineating attributes of the Virtual Track. Our continued vision for the simulation side is that the simulator offers us a chance to look at where things could be, and allows for us to explore things like larger scales, or increased complexity, or types of environments that we can’t physically gain access to—it really allows for us to find where some of those limits are in the technology based only on our imagination, and this is one of the intrinsic values of simulation.

But I think finding a way to incorporate human input, or more generally human factors like teleoperation interfaces and the in-situ stress that you might not be able to recreate in the context of a virtual competition, provided a good reason for us to delineate the two competitions, with the Virtual Competition really being about the role of fully autonomous or self-sufficient systems going off and doing their solution without human guidance, while also acknowledging that the real world has conditions that would not necessarily be represented by a fully simulated version. Having said that, I think cognitive engineering still has an incredibly important role to play in human-robot interaction.

What do we have to look forward to during the Virtual Competition Showcase?

We have a number of additional features and capabilities that we’ve baked into the simulator that will allow for us to derive some additional insights into our competition runs. Those insights might involve things like the performance of one or more robots in a given scenario, or the impact of the environment on different types of robots, and what I can tease is that this will be an opportunity for us to showcase both the technology and also the excitement of the robots competing in the virtual environment. I’m trying not to give too many spoilers, but we’ll have an opportunity to really get into the details.

Check back as we get closer to the 17 November event for more on the DARPA SubT Challenge.


#437645 How Robots Became Essential Workers in ...

Photo: Sivaram V/Reuters

A robot, developed by Asimov Robotics to spread awareness about the coronavirus, holds a tray with face masks and sanitizer.

As the coronavirus emergency exploded into a full-blown pandemic in early 2020, forcing countless businesses to shutter, robot-making companies found themselves in an unusual situation: Many saw a surge in orders. Robots don’t need masks, can be easily disinfected, and, of course, they don’t get sick.

An army of automatons has since been deployed all over the world to help with the crisis: They are monitoring patients, sanitizing hospitals, making deliveries, and helping frontline medical workers reduce their exposure to the virus. Not all robots operate autonomously—many, in fact, require direct human supervision, and most are limited to simple, repetitive tasks. But robot makers say the experience they’ve gained during this trial-by-fire deployment will make their future machines smarter and more capable. These photos illustrate how robots are helping us fight this pandemic—and how they might be able to assist with the next one.

DROID TEAM

Photo: Clement Uwiringiyimana/Reuters

A squad of robots serves as the first line of defense against person-to-person transmission at a medical center in Kigali, Rwanda. Patients walking into the facility get their temperature checked by the machines, which are equipped with thermal cameras atop their heads. Developed by UBTech Robotics, in China, the robots also use their distinctive appearance—they resemble characters out of a Star Wars movie—to get people’s attention and remind them to wash their hands and wear masks.

Photo: Clement Uwiringiyimana/Reuters

SAY “AAH”
To speed up COVID-19 testing, a team of Danish doctors and engineers at the University of Southern Denmark and at Lifeline Robotics is developing a fully automated swab robot. It uses computer vision and machine learning to identify the perfect target spot inside the person’s throat; then a robotic arm with a long swab reaches in to collect the sample—all done with a swiftness and consistency that humans can’t match. In this photo, one of the creators, Esben Østergaard, puts his neck on the line to demonstrate that the robot is safe.

Photo: University of Southern Denmark

GERM ZAPPER
After six of its doctors became infected with the coronavirus, the Sassarese hospital in Sardinia, Italy, tightened its safety measures. It also brought in the robots. The machines, developed by UVD Robots, use lidar to navigate autonomously. Each bot carries an array of powerful short-wavelength ultraviolet-C lights that destroy the genetic material of viruses and other pathogens after a few minutes of exposure. Now there is a spike in demand for UV-disinfection robots as hospitals worldwide deploy them to sterilize intensive care units and operating theaters.

Photo: UVD Robots

RUNNING ERRANDS

In medical facilities, an ideal role for robots is taking over repetitive chores so that nurses and physicians can spend their time doing more important tasks. At Shenzhen Third People’s Hospital, in China, a robot called Aimbot drives down the hallways, enforcing face-mask and social-distancing rules and spraying disinfectant. At a hospital near Austin, Texas, a humanoid robot developed by Diligent Robotics fetches supplies and brings them to patients’ rooms. It repeats this task day and night, tirelessly, allowing the hospital staff to spend more time interacting with patients.

Photos, left: Diligent Robotics; Right: UBTech Robotics

THE DOCTOR IS IN
Nurses and doctors at Circolo Hospital in Varese, in northern Italy—the country’s hardest-hit region—use robots as their avatars, enabling them to check on their patients around the clock while minimizing exposure and conserving protective equipment. The robots, developed by Chinese firm Sanbot, are equipped with cameras and microphones and can also access patient data like blood oxygen levels. Telepresence robots, originally designed for offices, are becoming an invaluable tool for medical workers treating highly infectious diseases like COVID-19, reducing the risk that they’ll contract the pathogen they’re fighting against.

Photo: Miguel Medina/AFP/Getty Images

HELP FROM ABOVE

Photo: Zipline

Authorities in several countries attempted to use drones to enforce lockdowns and social-distancing rules, but the effectiveness of such measures remains unclear. A better use of drones was for making deliveries. In the United States, startup Zipline deployed its fixed-wing autonomous aircraft to connect two medical facilities 17 kilometers apart. For the staff at the Huntersville Medical Center, in North Carolina, masks, gowns, and gloves literally fell from the skies. The hope is that drones like Zipline’s will one day be able to deliver other kinds of critical materials, transport test samples, and distribute drugs and vaccines.

Photos: Zipline

SPECIAL DELIVERY
It’s not quite a robot takeover, but the streets and sidewalks of dozens of cities around the world have seen a proliferation of hurrying wheeled machines. Delivery robots are now in high demand as online orders continue to skyrocket.

In Hamburg, the six-wheeled robots developed by Starship Technologies navigate using cameras, GPS, and radar to bring groceries to customers.

Photo: Christian Charisius/Picture Alliance/Getty Images

In Medellín, Colombia, a startup called Rappi deployed a fleet of robots, built by Kiwibot, to deliver takeout to people in lockdown.

Photo: Joaquin Sarmiento/AFP/Getty Images

China’s JD.com, one of the country’s largest e-commerce companies, is using 20 robots to transport goods in Changsha, Hunan province; each vehicle has 22 separate compartments, which customers unlock using face authentication.

Photos: TPG/Getty Images

LIFE THROUGH ROBOTS
Robots can’t replace real human interaction, of course, but they can help people feel more connected at a time when meetings and other social activities are mostly on hold.

In Ostend, Belgium, ZoraBots brought one of its waist-high robots, equipped with cameras, microphones, and a screen, to a nursing home, allowing residents like Jozef Gouwy to virtually communicate with loved ones despite a ban on in-person visits.

Photo: Yves Herman/Reuters

In Manila, nearly 200 high school students took turns “teleporting” into a tall wheeled robot, developed by the school’s robotics club, to walk on stage during their graduation ceremony.

Photo: Ezra Acayan/Getty Images

And while Japan’s Chiba Zoological Park was temporarily closed due to the pandemic, the zoo used an autonomous robotic vehicle called RakuRo, equipped with 360-degree cameras, to offer virtual tours to children quarantined at home.

Photo: Tomohiro Ohsumi/Getty Images

SENTRY ROBOTS
Offices, stores, and medical centers are adopting robots as enforcers of a new coronavirus code.

At Fortis Hospital in Bangalore, India, a robot called Mitra uses a thermal camera to perform a preliminary screening of patients.

Photo: Manjunath Kiran/AFP/Getty Images

In Tunisia, the police use a tanklike robot to patrol the streets of its capital city, Tunis, verifying that citizens have permission to go out during curfew hours.

Photo: Khaled Nasraoui/Picture Alliance/Getty Images

And in Singapore, the Bishan-Ang Mo Kio Park unleashed a Spot robot dog, developed by Boston Dynamics, to search for social-distancing violators. Spot won’t bark at them but will rather play a recorded message reminding park-goers to keep their distance.

Photo: Roslan Rahman/AFP/Getty Images

This article appears in the October 2020 print issue as “How Robots Became Essential Workers.”


#437579 Disney Research Makes Robotic Gaze ...

While it’s not totally clear to what extent human-like robots are better than conventional robots for most applications, one area I’m personally comfortable with them is entertainment. The folks over at Disney Research, who are all about entertainment, have been working on this sort of thing for a very long time, and some of their animatronic attractions are actually quite impressive.

The next step for Disney is to make its animatronic figures, which currently feature scripted behaviors, perform in an interactive manner with visitors. The challenge is that this is where you start to get into potential Uncanny Valley territory, a risk that comes with trying to create “the illusion of life,” which is explicitly what Disney says it is trying to do.

In a paper presented at IROS this month, a team from Disney Research, Caltech, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering is trying to nail that illusion of life with a single, and perhaps most important, social cue: eye gaze.

Before you watch this video, keep in mind that you’re watching a specific character, as Disney describes:

The robot character plays an elderly man reading a book, perhaps in a library or on a park bench. He has difficulty hearing and his eyesight is in decline. Even so, he is constantly distracted from reading by people passing by or coming up to greet him. Most times, he glances at people moving quickly in the distance, but as people encroach into his personal space, he will stare with disapproval for the interruption, or provide those that are familiar to him with friendly acknowledgment.

What, exactly, does “lifelike” mean in the context of robotic gaze? The paper abstract describes the goal as “[seeking] to create an interaction which demonstrates the illusion of life.” I suppose you could think of it like a sort of old-fashioned Turing test focused on gaze: If the gaze of this robot cannot be distinguished from the gaze of a human, then victory, that’s lifelike. And critically, we’re talking about mutual gaze here—not just a robot gazing off into the distance, but you looking deep into the eyes of this robot and it looking right back at you just like a human would. Or, just like some humans would.

The approach that Disney is using is more animation-y than biology-y or psychology-y. In other words, they’re not trying to figure out what’s going on in our brains to make our eyes move the way that they do when we’re looking at other people and basing their control system on that, but instead, Disney just wants it to look right. This “visual appeal” approach is totally fine, and there’s been an enormous amount of human-robot interaction (HRI) research behind it already, albeit usually with less explicitly human-like platforms. And speaking of human-like platforms, the hardware is a “custom Walt Disney Imagineering Audio-Animatronics bust,” which has DoFs that include neck, eyes, eyelids, and eyebrows.

In order to decide on gaze motions, the system first identifies a person to target with its attention using an RGB-D camera. If more than one person is visible, the system calculates a curiosity score for each, currently simplified to be based on how much motion it sees. Depending on which visible person has the highest curiosity score, the system will choose from a variety of high-level gaze behavior states, including:

Read: The Read state can be considered the “default” state of the character. When not executing another state, the robot character will return to the Read state. Here, the character will appear to read a book located at torso level.

Glance: A transition to the Glance state from the Read or Engage states occurs when the attention engine indicates that there is a stimulus with a curiosity score […] above a certain threshold.

Engage: The Engage state occurs when the attention engine indicates that there is a stimulus […] meeting a threshold, and can be triggered from both the Read and Glance states. This state causes the robot to gaze at the person-of-interest with both the eyes and head.

Acknowledge: The Acknowledge state is triggered from either Engage or Glance states when the person-of-interest is deemed to be familiar to the robot.

Running underneath these higher level behavior states are lower level motion behaviors like breathing, small head movements, eye blinking, and saccades (the quick eye movements that occur when people, or robots, look between two different focal points). The term for this hierarchical behavioral state layering is a subsumption architecture, which goes all the way back to Rodney Brooks’ work on robots like Genghis in the 1980s and Cog and Kismet in the ’90s, and it provides a way for more complex behaviors to emerge from a set of simple, decentralized low-level behaviors.
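Putting the pieces together, a hedged sketch of the high-level transition logic might look like the following; the threshold values and the familiarity test are illustrative guesses, not the paper’s parameters.

```python
GLANCE_T, ENGAGE_T = 0.3, 0.7  # hypothetical curiosity thresholds

def next_state(state, curiosity, familiar):
    """One update of the high-level gaze state machine."""
    if state in ("GLANCE", "ENGAGE") and familiar:
        return "ACKNOWLEDGE"            # person-of-interest is familiar
    if state in ("READ", "GLANCE") and curiosity >= ENGAGE_T:
        return "ENGAGE"                 # strong stimulus: gaze with eyes and head
    if state in ("READ", "ENGAGE") and GLANCE_T <= curiosity < ENGAGE_T:
        return "GLANCE"                 # weaker stimulus: a quick look
    if curiosity < GLANCE_T:
        return "READ"                   # default: back to reading the book
    return state

# Lower-level behaviors (breathing, blinking, saccades) run continuously
# underneath these states; higher layers subsume them when they take over,
# as in Brooks' subsumption architecture.
```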

“25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”
—Rodney Brooks, MIT emeritus professor

Brooks, an emeritus professor at MIT and, most recently, cofounder and CTO of Robust.ai, tweeted about the Disney project, saying: “People underestimate how long it takes to get from academic paper to real world robotics. 25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”

From the paper:

Although originally intended for control of mobile robots, we find that the subsumption architecture, as presented in [17], lends itself as a framework for organizing animatronic behaviors. This is due to the analogous use of subsumption in human behavior: human psychomotor behavior can be intuitively modeled as layered behaviors with incoming sensory inputs, where higher behavioral levels are able to subsume lower behaviors. At the lowest level, we have involuntary movements such as heartbeats, breathing and blinking. However, higher behavioral responses can take over and control lower level behaviors, e.g., fight-or-flight response can induce faster heart rate and breathing. As our robot character is modeled after human morphology, mimicking biological behaviors through the use of a bottom-up approach is straightforward.

The result, as the video shows, appears to be quite good, although it’s hard to tell how it would all come together if the robot had more of, you know, a face. But it seems like you don’t necessarily need to have a lifelike humanoid robot to take advantage of this architecture in an HRI context—any robot that wants to make a gaze-based connection with a human could benefit from doing it in a more human-like way.

“Realistic and Interactive Robot Gaze,” by Matthew K.X.J. Pan, Sungjoon Choi, James Kennedy, Kyna McIntosh, Daniel Campos Zamora, Gunter Niemeyer, Joohyung Kim, Alexis Wieland, and David Christensen from Disney Research, California Institute of Technology, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering, was presented at IROS 2020. You can find the full paper, along with a 13-minute video presentation, on the IROS on-demand conference website.



#437577 A Swarm of Cyborg Cockroaches That Lives ...

Digital Nature Group at the University of Tsukuba in Japan is working towards a “post ubiquitous computing era consisting of seamless combination of computational resources and non-computational resources.” By “non-computational resources,” they mean leveraging the natural world, which for better or worse includes insects.

At small scales, the capabilities of insects far exceed the capabilities of robots. I get that. And I get that turning cockroaches into an army of insect cyborgs could be useful in a variety of ways. But what makes me fundamentally uncomfortable is the idea that “in the future, they’ll appear out of nowhere without us recognizing it, fulfilling their tasks and then hiding.” In other words, you’ll have cyborg cockroaches hiding all over your house, all the time.

Warning: This article contains video of cockroaches being modified with cybernetic implants that some people may find upsetting.

Remote controlling cockroaches isn’t a new idea, and it’s a fairly simple one. By stimulating the left or right antenna nerves of the cockroach, you can make it think that it’s running into something, and get it to turn in the opposite direction. Add wireless connectivity, some fiducial markers, an overhead camera system, and a bunch of cyborg cockroaches, and you have a resilient swarm that can collaborate on tasks. The researchers suggest that the swarm could be used as a display (by making each cockroach into a pixel), to transport objects, or to draw things. There’s also some mention of “input or haptic interfaces or an audio device,” which frankly sounds horrible.
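Here is a hedged sketch of the closed-loop steering described above: the overhead camera gives each roach’s pose from its fiducial marker, and the controller stimulates one antenna nerve so the roach veers the other way. Function and interface names are hypothetical, for illustration only.

```python
import numpy as np

def steer_to_goal(pose_xy_theta, goal_xy, deadband_rad=0.2):
    """Return 'left', 'right', or None (no stimulation needed).

    pose_xy_theta: (x, y, heading) from the overhead fiducial tracker.
    """
    x, y, theta = pose_xy_theta
    heading_to_goal = np.arctan2(goal_xy[1] - y, goal_xy[0] - x)
    # Wrap the heading error into (-pi, pi].
    err = np.arctan2(np.sin(heading_to_goal - theta),
                     np.cos(heading_to_goal - theta))
    if abs(err) < deadband_rad:
        return None  # roughly on course; let the roach run
    # To turn left (positive error), stimulate the RIGHT antenna nerve:
    # the roach "feels" an obstacle on its right and veers left.
    return "right" if err > 0 else "left"
```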


There are many other swarm robotic platforms that can perform what you’re seeing these cyborg roaches do, but according to the researchers, the reason to use cockroaches is that you can take advantage of their impressive ruggedness, efficiency, high power to weight ratio, and mobility. They’re a lot messier (yay biology!), but they can also feed themselves, meaning that whenever you don’t need the swarm to perform some task for you, you can deactivate the control system and let them scurry off to find crumbs in dark places. And when you need them again, turn the control system on and experience the nightmare of your cyborg cockroach swarm reassembling itself from all over your house.

While we’re on the subject of cockroach hacking, we would be doing you a disservice if we didn’t share some of project leader Yuga Tsukuda’s other projects. Here’s a cockroach-powered clock, about which the researchers note that “it is difficult to control the cockroaches by electrical stimulation because they move spontaneously. However, by cutting off the head and removing the brain, they do not move spontaneously, and control by the computer becomes easy.” So, zombie cockroaches. Good then.

And if that’s not enough for you, how about this:

The researchers describe this project as an “attempt to use cockroaches for makeup by sticking them on the face.” They stick electrodes into the cockroaches to make them wiggle their legs when electrical stimulation is applied. And the peacock feathers? They “make the cockroach movement bigger, and create a cosmic mystery.”
