
#437598 Video Friday: Sarcos Is Developing a New ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today's videos.

NASA’s Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) spacecraft unfurled its robotic arm Oct. 20, 2020, and in a first for the agency, briefly touched an asteroid to collect dust and pebbles from the surface for delivery to Earth in 2023.

[ NASA ]

New from David Zarrouk’s lab at BGU is AmphiSTAR, which Zarrouk describes as “a kind of a ground-water drone inspired by the cockroaches (sprawling) and by the Basilisk lizard (running over water). The robot hovers due to the collision of its propellers with the water (hydrodynamics not aerodynamics). The robot can crawl and swim at high and low speeds and smoothly transition between the two. It can reach 3.5 m/s on ground and 1.5 m/s in water.”

AmphiSTAR will be presented at IROS, starting next week!

[ BGU ]

This is unfortunately not a great video of a video that was taken at a SoftBank Hawks baseball game in Japan last week, but it’s showing an Atlas robot doing an honestly kind of impressive dance routine to support the team.

The humanoid robot ATLAS makes an emergency remote appearance from America to join the robot cheering squad!!!
Enjoy the footage on Hawks Vision♪ #sbhawks #Pepper #spot pic.twitter.com/6aTYn8GGli
— Fukuoka SoftBank Hawks (official) (@HAWKS_official)
October 16, 2020

Editor’s Note: The tweet embed above is not working for some reason—see the video here.

[ SoftBank Hawks ]

Thanks Thomas!

Sarcos is working on a new robot, which looks to be the torso of their powered exoskeleton with the human relocated somewhere else.

[ Sarcos ]

The biggest holiday of the year, International Sloth Day, was on Tuesday! To celebrate, here’s Slothbot!

[ NSF ]

This is one of those simple-seeming tasks that are really difficult for robots.

I love self-resetting training environments.

[ MIT CSAIL ]

The Chiel lab collaborates with engineers at the Center for Biologically Inspired Robotics Research at Case Western Reserve University to design novel worm-like robots that have potential applications in search-and-rescue missions, endoscopic medicine, or other scenarios requiring navigation through narrow spaces.

[ Case Western ]

ANYbotics partnered with Losinger Marazzi to explore ANYmal’s potential for patrolling construction sites to identify and report safety issues. In such a complex environment, only a robot designed to navigate difficult terrain can bring digitalization to this physically demanding industry.

[ ANYbotics ]

Happy Halloween from Clearpath Robotics (a throwback from 2018)!

[ Clearpath ]

Overcoming illumination variance is a critical factor in vision-based navigation. Existing methods have tackled radical illumination variance by proposing camera control or high dynamic range (HDR) image fusion. Despite these efforts, vision-based approaches still struggle in darkness. This paper presents real-time image synthesis from a carefully controlled seed low dynamic range (LDR) image to enable visual simultaneous localization and mapping (SLAM) in extremely dark environments (less than 10 lux).
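
The general recipe described above, generating usable imagery from a single dark seed frame so that a feature-based SLAM front end has something to track, can be illustrated with a toy sketch. The snippet below is not the authors’ method; it simply fakes a bracket of brighter exposures from one low-light frame via gain and gamma adjustment (all parameter values are assumptions) and counts trackable corners as a rough proxy for SLAM-friendliness.

```python
import cv2
import numpy as np

def synthesize_exposures(seed_bgr, gains=(2.0, 4.0, 8.0), gamma=0.6):
    """Generate brighter synthetic frames from a single dark LDR seed image.

    Illustrative approximation only: each synthetic frame applies a linear
    gain followed by gamma correction, loosely mimicking a longer exposure.
    """
    seed = seed_bgr.astype(np.float32) / 255.0
    frames = []
    for g in gains:
        boosted = np.clip(seed * g, 0.0, 1.0) ** gamma  # gain + gamma lift
        frames.append((boosted * 255.0).astype(np.uint8))
    return frames

def count_trackable_features(frame_bgr, max_corners=500):
    """Rough proxy for 'SLAM-friendliness': number of Shi-Tomasi corners."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    corners = cv2.goodFeaturesToTrack(gray, max_corners, 0.01, 8)
    return 0 if corners is None else len(corners)

if __name__ == "__main__":
    dark = cv2.imread("dark_seed.png")  # hypothetical seed LDR image
    if dark is not None:
        for i, frame in enumerate(synthesize_exposures(dark)):
            print(f"synthetic exposure {i}: {count_trackable_features(frame)} corners")
```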

[ KAIST ]

What can MoveIt do? Who knows! Let's find out!

[ MoveIt ]

Thanks Dave!

Here we pick a cube from a starting point, manipulate it within the hand, and then put it back. To explore the capabilities of the hand, no sensors were used in this demonstration. The RBO Hand 3 uses soft pneumatic actuators made of silicone. The softness imparts considerable robustness against variations in object pose and size. This lets us design manipulation funnels that work reliably without needing sensor feedback. We take advantage of this reliability to chain these funnels into more complex multi-step manipulation plans.

[ TU Berlin ]

If this was a real solar array, King Louie would have totally cleaned it. Mostly.

[ BYU ]

Autonomous exploration is a fundamental problem for various applications of unmanned aerial vehicles (UAVs). Existing methods, however, have demonstrated low efficiency due to a lack of optimality consideration, conservative motion plans, and low decision frequencies. In this paper, we propose FUEL, a hierarchical framework that can support Fast UAV ExpLoration in complex unknown environments.
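
FUEL’s contribution is an incremental frontier information structure plus a hierarchical planner, but the building block underneath any exploration framework of this kind is frontier detection: finding known-free cells that border unknown space. Here is a minimal, illustrative sketch on a 2D occupancy grid (not FUEL’s actual implementation; the labels and grid layout are assumptions):

```python
import numpy as np

FREE, OCCUPIED, UNKNOWN = 0, 1, -1

def find_frontiers(grid):
    """Return (row, col) cells that are FREE and 4-adjacent to UNKNOWN space.

    'grid' is a 2D numpy array of FREE/OCCUPIED/UNKNOWN labels. Frontier cells
    are the natural goals for an exploration planner: moving toward them
    reveals new parts of the map.
    """
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            neighbors = [(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)]
            if any(0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN
                   for nr, nc in neighbors):
                frontiers.append((r, c))
    return frontiers

if __name__ == "__main__":
    grid = np.full((5, 5), UNKNOWN)
    grid[2, 0:3] = FREE          # a small explored corridor
    grid[1, 1] = OCCUPIED
    print(find_frontiers(grid))  # cells bordering the unknown region
```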

[ HKUST ]

Countless precise repetitions? This is the perfect task for a robot, thought researchers in the Department of Chemistry at the University of Liverpool, and without further ado they developed an automation solution that can carry out and monitor research tasks, making autonomous decisions about what to do next.

[ Kuka ]

This video shows a demonstration of central results of the SecondHands project. In the context of maintenance and repair tasks in warehouse environments, the collaborative humanoid robot ARMAR-6 demonstrates a number of cognitive and sensorimotor abilities, such as 1) recognition of the need for help based on speech, force, haptics, and visual scene and action interpretation, 2) collaborative bimanual manipulation of large objects, 3) compliant mobile manipulation, 4) grasping known and unknown objects and tools, 5) human-robot interaction (object and tool handover), 6) natural dialog, and 7) force predictive control.

[ SecondHands ]

In celebration of Ada Lovelace Day, Silicon Valley Robotics hosted a panel of Women in Robotics.

[ Robohub ]

As part of the upcoming virtual IROS conference, HEBI robotics is putting together a tutorial on robotics actuation. While I’m sure HEBI would like you to take a long look at their own actuators, we’ve been assured that no matter what kind of actuators you use, this tutorial will still be informative and useful.

[ YouTube ] via [ HEBI Robotics ]

Thanks Dave!

This week’s UMD Lockheed Martin Robotics Seminar comes from Julie Shah at MIT, on “Enhancing Human Capability with Intelligent Machine Teammates.”

Every team has top performers: people who excel at working in a team to find the right solutions in complex, difficult situations. These top performers include nurses who run hospital floors, emergency response teams, air traffic controllers, and factory line supervisors. While they may outperform the most sophisticated optimization and scheduling algorithms, they often cannot tell us how they do it. Similarly, even when a machine can do the job better than most of us, it can’t explain how. In this talk I share recent work investigating effective ways to blend the unique decision-making strengths of humans and machines. I discuss the development of computational models that enable machines to efficiently infer the mental state of human teammates and thereby collaborate with people in richer, more flexible ways.

[ UMD ]

Matthew Piccoli gives a talk to the UPenn GRASP Lab on “Trading Complexities: Smart Motors and Dumb Vehicles.”

We will discuss my research journey through Penn making the world's smallest, simplest flying vehicles, and in parallel making the most complex brushless motors. What do they have in common? We'll touch on why the quadrotor went from an obscure type of helicopter to the current ubiquitous drone. Finally, we'll get into my life after Penn and what tools I'm creating to further drone and robot designs of the future.

[ UPenn ]


#437596 IROS Robotics Conference Is Online Now ...

The 2020 International Conference on Intelligent Robots and Systems (IROS) was originally going to be held in Las Vegas this week. Like ICRA last spring, IROS has transitioned to a completely online conference, which is wonderful news: Now everyone everywhere can participate in IROS without having to spend a dime on travel.

IROS officially opened yesterday, and the best news is that registration is entirely free! We’ll take a quick look at what IROS has on offer this year, which includes some stuff that’s brand new to IROS.

Registration for IROS is super easy, and did we mention that it’s free? To register, just go here and fill out a quick and easy form. You don’t even have to be an IEEE Member or anything like that, although in our unbiased opinion, an IEEE membership is well worth it. Once you get the confirmation email, go to https://www.iros2020.org/ondemand/, put in the email address you used to register, and that’s it, you’ve got IROS!

Here are some highlights:

Plenaries and Keynotes
Without the normal space and time constraints, you won’t have to pick and choose between any of the three plenaries or 10 keynotes. Some of them are fancier than others, but we’re used to that sort of thing by now. It’s worth noting that all three plenaries (and three of the 10 keynotes) are given by extraordinarily talented women, which is excellent to see.

Technical Tracks
There are over 1,400 technical talks, divided up into 12 categories of 20 sessions each. Note that each of the 12 categories that you see on the main page can be scrolled through to show all 20 of the sessions; if there’s a bright red arrow pointing left or right you can scroll, and if the arrow is transparent, you’ve reached the end.

On the session page, you’ll see an autoplaying advertisement (that you can mute but not stop), below which each talk has a preview slide, a link to a ~15 minute presentation video, and another link to a PDF of the paper. No supplementary videos are available, which is a bit disappointing. While you can leave a comment on the video, there’s no way of interacting with the author(s) directly through the IROS site, so you’ll have to check the paper for an email address if you want to ask a question.

Award Finalists
IROS has thoughtfully grouped all of the paper award finalists together into nine sessions. These are some truly outstanding papers, and it’s worth watching these sessions even if you’re not interested in specific subject matter.

Workshops and Tutorials
This stuff is a little more impacted by asynchronicity and on-demandedness, and some of the workshops and tutorials have already taken place. But IROS has done a good job at collecting videos of everything and making them easy to access, and the dedicated websites for the workshops and tutorials themselves sometimes have more detailed info. If you’re having trouble finding where the workshops and tutorial section is, try the “Entrance” drop-down menu up at the top.

IROS Original Series
In place of social events and lab tours, IROS this year has come up with the “IROS Original Series,” which “hosts unique content that would be difficult to see at in-person events.” Right now, there are some interviews with a diverse group of interesting roboticists, and hopefully more will show up later on.

Enjoy!
Everything on the IROS On-Demand site should be available for at least the next month, so there’s no need to try and watch a thousand presentations over three days (which is what we normally have to do). So, relax, and enjoy yourself a bit by browsing all the options. And additional content will be made available over the next several weeks, so make sure to check back often to see what’s new.

[ IROS 2020 ]


#437592 Coordinated Robotics Wins DARPA SubT ...

DARPA held the Virtual Cave Circuit event of the Subterranean Challenge on Tuesday in the form of a several hour-long livestream. We got to watch (along with all of the competing teams) as virtual robots explored virtual caves fully autonomously, dodging rockfalls, spotting artifacts, scoring points, and sometimes running into stuff and falling over.

Expert commentary was provided by DARPA, and we were able to watch multiple teams running at once, skipping from highlight to highlight. It was really very well done (you can watch an archive of the entire stream here), but they made us wait until the very end to learn who won: First place went to Coordinated Robotics, with BARCS taking second, and third place going to newcomer Team Dynamo.

Huge congratulations to Coordinated Robotics! It’s worth pointing out that the top three teams were separated by an incredibly small handful of points, and on a slightly different day, with slightly different artifact positions, any of them could have come out on top. This doesn’t diminish Coordinated Robotics’ victory in the least—it means that the competition was fierce, and that the problem of autonomous cave exploration with robots has been solved (virtually, at least) in several different but effective ways.

We know Coordinated Robotics pretty well at this point, but here’s an introduction video:

You heard that right—Coordinated Robotics is just Kevin Knoedler, all by himself. This would be astonishing, if we weren’t already familiar with Kevin’s abilities: He won NASA’s virtual Space Robotics Challenge by himself in 2017, and Coordinated Robotics placed first in the DARPA SubT Virtual Tunnel Circuit and second in the Virtual Urban Circuit. We asked Kevin how he managed to do so spectacularly well (again), and here’s what he told us:

IEEE Spectrum: Can you describe what it was like to watch your team of robots on the live stream, and to see them score the most points?

Kevin Knoedler: It was exciting and stressful watching the live stream. It was exciting as the top few scores were quite close for the cave circuit. It was stressful because I started out behind and worked my way up, but did not do well on the final world. Luckily, not doing well on the first and last worlds was offset by better scores on many of the runs in between. DARPA did a very nice job with their live stream of the cave circuit results.

How did you decide on the makeup of your team, and on what sensors to use?

To decide on the makeup of the team I experimented with quite a few different vehicles. I had a lot of trouble with the X2 and other small ground vehicles flipping over. Based on that I looked at the larger ground vehicles that also had a sensor capable of identifying drop-offs. The vehicles that met those criteria for me were the Marble HD2, Marble Husky, Ozbot ATR, and the Absolem. Of those ground vehicles I went with the Marble HD2. It had a downward looking depth camera that I could use to detect drop-offs and was much more stable on the varied terrain than the X2. I had used the X3 aerial vehicle before and so that was my first choice for an aerial platform.
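
As an aside on the drop-off detection Kevin mentions: with a downward-looking depth camera, one simple heuristic is to flag a drop-off when the terrain just ahead reads much deeper than the expected camera-to-ground distance on flat terrain. The sketch below is purely illustrative; the region of interest, margin, and frame size are assumptions, not details from Kevin’s setup.

```python
import numpy as np

def dropoff_ahead(depth_image, expected_ground_depth, margin=0.3, roi_rows=40):
    """Flag a likely drop-off from a downward-looking depth image.

    depth_image: 2D array of depths in meters (NaN or 0 for invalid returns).
    expected_ground_depth: camera-to-ground distance on flat terrain (meters).
    Returns True if the portion of the image covering terrain just ahead reads
    much deeper than flat ground, i.e., the ground falls away.
    """
    roi = depth_image[:roi_rows, :]              # rows assumed to image terrain ahead
    valid = roi[np.isfinite(roi) & (roi > 0)]
    if valid.size == 0:
        return True                              # no returns at all: be cautious
    return np.median(valid) > expected_ground_depth + margin

# Synthetic example: flat ground at 0.5 m, with a drop to 2.0 m just ahead.
frame = np.full((120, 160), 0.5)
frame[:40, :] = 2.0
print(dropoff_ahead(frame, expected_ground_depth=0.5))  # True
```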

What were some things that you learned in Tunnel and Urban that you were able to incorporate into your strategy for Cave?

In the Tunnel circuit I had learned a strategy to use ground vehicles and in the Urban circuit I had learned a strategy to use aerial vehicles. At a high level that was the biggest thing I learned from the previous circuits that I was able to apply to the Cave circuit. At a lower level I was able to apply many of the development and testing strategies from the previous circuits to the Cave circuit.

What aspect of the cave environment was most challenging for your robots?

I would say it wasn't just one aspect of the cave environment that was challenging for the robots. There were quite a few challenging aspects of the cave environment. For the ground vehicles there were frequently paths that looked good as the robot started on the path, but turned into drop-offs or difficult boulder crawls. While it was fun to see the robot plan well enough to slowly execute paths over the boulders, I was wishing that the robot was smart enough to try a different path rather than wasting so much time crawling over the large boulders. For the aerial vehicles the combination of tight paths along with large vertical spaces was the biggest challenge in the environment. The large open vertical areas were particularly challenging for my aerial robots. They could easily lose track of their position without enough nearby features to track and it was challenging to find the correct path in and out of such large vertical areas.

How will you be preparing for the SubT Final?

To prepare for the SubT Final the vehicles will be getting a lot smarter. The ground vehicles will be better at navigation and communicating with one another. The aerial vehicles will be better able to handle large vertical areas both from a positioning and a planning point of view. Finally, all of the vehicles will do a better job coordinating what areas have been explored and what areas have good leads for further exploration.

Image: DARPA

The final score for the DARPA SubT Cave Circuit virtual competition.

We also had a chance to ask SubT program manager Tim Chung a few questions at yesterday’s post-event press conference, about the course itself and what he thinks teams should have learned from the competition:

IEEE Spectrum: Having looked through some real caves, can you give some examples of some of the most significant differences between this simulation and real caves? And with the enormous variety of caves out there, how generalizable are the solutions that teams came up with?

Tim Chung: Many of the caves that I’ve had to crawl through and gotten bumps and scrapes from had a couple of different features that I’ll highlight. The first is the variations in moisture— a lot of these caves were naturally formed with streams and such, so many of the caves we went to had significant mud, flowing water, and such. And so one of the things we're not capturing in the SubT simulator is explicitly anything that would submerge the robots, or otherwise short any of their systems. So from that perspective, that's one difference that's certainly notable.

And then the other difference I think is the granularity of the terrain: whether it’s rubble, sand, or just raw dirt, friction coefficients are all across the board, and I think that’s one of the things that any terrestrial simulator will both struggle with and potentially benefit from, that is, terramechanics simulation abilities. Given the emphasis on mobility in the SubT simulation, we’re capturing just a sliver of the complexity of terramechanics, but I think that’s probably another takeaway that you’ll certainly see, where there’s that distinction between physical and virtual technologies.

To answer your second question about generalizability: that’s the multi-million-dollar question! It’s definitely at the crux of why we have eight diverse worlds, varying in size, verticality, dimensions, constrained passageways, etc. But this is eight out of countless variations, and the goal of course is to be able to investigate what those key dependencies are. What I’ll say is that out of the seventy-three different virtual cave tiles, which are the building blocks that make up these virtual worlds, quite a number of them were not only inspired by real-world caves, but were specifically designed so that we can essentially use these tiles as unit tests going forward. So, if I want to simulate vertical inclines, here are the tiles that are the vertical unit tests for robots, and that’s how we’re trying to think through how to tease out that generalizability factor.

What are some observations from this event that you think systems track teams should pay attention to as they prepare for the final event?

One of the key things about the virtual competition is that you submit your software, and that's it. So you have to design everything from state management to failure mode triage, really thinking about what could go wrong and then building out your autonomous capabilities either to react to some of those conditions, or to anticipate them. And to be honest I think that the humans in the loop that we have in the systems competition really are key enablers of their capability, but also could someday (if not already) be a crutch that we might not be able to develop.

Thinking through some of the failure modes in a fully autonomous, software-deployed setting is going to be incredibly valuable for the systems competitors, so that, for example, the human supervisor doesn’t have to worry about those failure modes as much, or can respond in a more supervisory way rather than trying to joystick the robot around. I think that’s going to be one of the greatest impacts: thinking through what it means to send these robots off to autonomously get you the information you need and complete the mission.

This isn’t to say that the humans aren't going to be useful and continue to play a role of course, but I think this shifting of the role of the human supervisor from being a state manager to being more of a tactical commander will dramatically highlight the impact of the virtual side on the systems side.

What, if anything, should we take away from one person teams being able to do so consistently well in the virtual circuit?

It’s a really interesting question. I think part of it has to do with systems integration versus software integration. There's something to be said for the richness of the technologies that can be developed, and how many people it requires to be able to develop some of those technologies. With the systems competitors, having one person try to build, manage, deploy, service, and operate all of those robots is still functionally quite challenging, whereas in the virtual competition, it really is a software deployment more than anything else. And so I think the commonality of single person teams may just be a virtue of the virtual competition not having some of those person-intensive requirements.

In terms of their strong performance, I give credit to all of these really talented folks who are taking it upon themselves to jump into the competitor pool and see how well they do, and I think that just goes to show you that whether you’re one person or ten people or a hundred people on a team, a good idea translated and executed well really goes a long way.

Looking ahead, teams have a year to prepare for the final event, which is still scheduled to be held sometime in fall 2021. And even though there was no cave event for systems track teams, the fact that the final event will be a combination of tunnel, urban, and cave circuits means that systems track teams have been figuring out how to get their robots to work in caves anyway, and we’ll be bringing you some of their stories over the next few weeks.

[ DARPA SubT ]


#437590 Why We Need a Robot Registry


I have a confession to make: A robot haunts my nightmares. For me, Boston Dynamics’ Spot robot is 32.5 kilograms (71.1 pounds) of pure terror. It can climb stairs. It can open doors. Seeing it in a video cannot prepare you for the moment you cross paths on a trade-show floor. Now that companies can buy a Spot robot for US $74,500, you might encounter Spot anywhere.

Spot robots now patrol public parks in Singapore to enforce social distancing during the pandemic. They meet with COVID-19 patients at Boston’s Brigham and Women’s Hospital so that doctors can conduct remote consultations. Imagine coming across Spot while walking in the park or returning to your car in a parking garage. Wouldn’t you want to know why this hunk of metal is there and who’s operating it? Or at least whom to call to report a malfunction?

Robots are becoming more prominent in daily life, which is why I think governments need to create national registries of robots. Such a registry would let citizens and law enforcement look up the owner of any roaming robot, as well as learn that robot’s purpose. It’s not a far-fetched idea: The U.S. Federal Aviation Administration already has a registry for drones.

Governments could create national databases that require any companies operating robots in public spaces to report the robot make and model, its purpose, and whom to contact if the robot breaks down or causes problems. To allow anyone to use the database, all public robots would have an easily identifiable marker or model number on their bodies. Think of it as a license plate or pet microchip, but for bots.
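
To make that concrete, a registry record wouldn’t need to carry much more than an identifier, make and model, operator, purpose, and a contact. Here’s a hypothetical sketch of such a record; every field name and value is illustrative, not drawn from any actual registry:

```python
from dataclasses import dataclass

@dataclass
class RobotRegistryRecord:
    """One entry in a hypothetical public-robot registry."""
    registration_id: str       # printed on the robot, like a license plate
    make: str
    model: str
    operator: str              # company or agency responsible for the robot
    purpose: str               # e.g., "sidewalk delivery", "security patrol"
    contact: str               # phone or email for malfunctions or complaints
    collects_camera_data: bool

example = RobotRegistryRecord(
    registration_id="SJ-0042",
    make="Acme Robotics",
    model="Sidewalk-1",
    operator="Example Delivery Co.",
    purpose="sidewalk food delivery",
    contact="ops@example.com",
    collects_camera_data=True,
)
print(f"{example.registration_id}: {example.make} {example.model}, "
      f"operated by {example.operator}")
```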

There are some smaller-scale registries today. San Jose’s Department of Transportation (SJDOT), for example, is working with Kiwibot, a delivery robot manufacturer, to get real-time data from the robots as they roam the city’s streets. The Kiwibots report their location to SJDOT using the open-source Mobility Data Specification, which was originally developed by Los Angeles to track Bird scooters.

Real-time location reporting makes sense for Kiwibots and Spots wandering the streets, but it’s probably overkill for bots confined to cleaning floors or patrolling parking lots. That said, any robots that come in contact with the general public should clearly provide basic credentials and a way to hold their operators accountable. Given that many robots use cameras, people may also be interested in looking up who’s collecting and using that data.

I started thinking about robot registries after Spot became available in June for anyone to purchase. The idea gained specificity after listening to Andra Keay, founder and managing director at Silicon Valley Robotics, discuss her five rules of ethical robotics at an Arm event in October. I had already been thinking that we needed some way to track robots, but her suggestion to tie robot license plates to a formal registry made me realize that people also need a way to clearly identify individual robots.

Keay pointed out that in addition to sating public curiosity and keeping an eye on robots that could cause harm, a registry could also track robots that have been hacked. For example, robots at risk of being hacked and running amok could be required to report their movements to a database, even if they’re typically restricted to a grocery store or warehouse. While we’re at it, Spot robots should be required to have sirens, because there’s no way I want one of those sneaking up on me.

This article appears in the December 2020 print issue as “Who’s Behind That Robot?”


#437579 Disney Research Makes Robotic Gaze ...

While it’s not totally clear to what extent human-like robots are better than conventional robots for most applications, one area I’m personally comfortable with them is entertainment. The folks over at Disney Research, who are all about entertainment, have been working on this sort of thing for a very long time, and some of their animatronic attractions are actually quite impressive.

The next step for Disney is to make its animatronic figures, which currently feature scripted behaviors, perform in an interactive manner with visitors. The challenge is that this is where you start to get into potential Uncanny Valley territory, a risk that comes with trying to create “the illusion of life,” which Disney explicitly says is what it’s aiming for.

In a paper presented at IROS this month, a team from Disney Research, Caltech, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering is trying to nail that illusion of life with a single, and perhaps most important, social cue: eye gaze.

Before you watch this video, keep in mind that you’re watching a specific character, as Disney describes:

The robot character plays an elderly man reading a book, perhaps in a library or on a park bench. He has difficulty hearing and his eyesight is in decline. Even so, he is constantly distracted from reading by people passing by or coming up to greet him. Most times, he glances at people moving quickly in the distance, but as people encroach into his personal space, he will stare with disapproval for the interruption, or provide those that are familiar to him with friendly acknowledgment.

What, exactly, does “lifelike” mean in the context of robotic gaze? The paper abstract describes the goal as “[seeking] to create an interaction which demonstrates the illusion of life.” I suppose you could think of it like a sort of old-fashioned Turing test focused on gaze: If the gaze of this robot cannot be distinguished from the gaze of a human, then victory, that’s lifelike. And critically, we’re talking about mutual gaze here—not just a robot gazing off into the distance, but you looking deep into the eyes of this robot and it looking right back at you just like a human would. Or, just like some humans would.

The approach that Disney is using is more animation-y than biology-y or psychology-y. In other words, they’re not trying to figure out what’s going on in our brains to make our eyes move the way that they do when we’re looking at other people and basing their control system on that, but instead, Disney just wants it to look right. This “visual appeal” approach is totally fine, and there’s been an enormous amount of human-robot interaction (HRI) research behind it already, albeit usually with less explicitly human-like platforms. And speaking of human-like platforms, the hardware is a “custom Walt Disney Imagineering Audio-Animatronics bust,” which has DoFs that include neck, eyes, eyelids, and eyebrows.

In order to decide on gaze motions, the system first identifies a person to target with its attention using an RGB-D camera. If more than one person is visible, the system calculates a curiosity score for each, currently simplified to be based on how much motion it sees. Depending on which visible person has the highest curiosity score, the system will choose from a variety of high-level gaze behavior states, including:

Read: The Read state can be considered the “default” state of the character. When not executing another state, the robot character will return to the Read state. Here, the character will appear to read a book located at torso level.

Glance: A transition to the Glance state from the Read or Engage states occurs when the attention engine indicates that there is a stimuli with a curiosity score […] above a certain threshold.

Engage: The Engage state occurs when the attention engine indicates that there is a stimuli […] to meet a threshold and can be triggered from both Read and Glance states. This state causes the robot to gaze at the person-of-interest with both the eyes and head.

Acknowledge: The Acknowledge state is triggered from either Engage or Glance states when the person-of-interest is deemed to be familiar to the robot.

Running underneath these higher level behavior states are lower level motion behaviors like breathing, small head movements, eye blinking, and saccades (the quick eye movements that occur when people, or robots, look between two different focal points). The term for this hierarchical behavioral state layering is a subsumption architecture, which goes all the way back to Rodney Brooks’ work on robots like Genghis in the 1980s and Cog and Kismet in the ’90s, and it provides a way for more complex behaviors to emerge from a set of simple, decentralized low-level behaviors.
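
As an illustration only (this is not Disney’s code, and the curiosity scoring and thresholds below are assumptions based on the description above), a behavior-selection layer over those four states might look something like this, with the lower-level motions assumed to keep running underneath whichever state is chosen:

```python
from dataclasses import dataclass

# Hypothetical thresholds: below GLANCE the character keeps reading, between
# the two it glances, and above ENGAGE it turns head and eyes to the person.
GLANCE_THRESHOLD = 0.3
ENGAGE_THRESHOLD = 0.7

@dataclass
class Person:
    person_id: int
    motion: float        # how much motion the attention engine sees (0..1)
    familiar: bool       # whether this person is "known" to the character

def curiosity(person: Person) -> float:
    """Simplified curiosity score; the paper currently bases it on motion."""
    return person.motion

def select_gaze_state(people: list[Person]) -> str:
    """Pick the high-level gaze behavior (Read/Glance/Engage/Acknowledge).

    Lower-level behaviors (breathing, blinking, saccades) would keep running
    underneath whichever state is selected, subsumption-style.
    """
    if not people:
        return "Read"
    target = max(people, key=curiosity)
    score = curiosity(target)
    if score >= ENGAGE_THRESHOLD:
        return "Acknowledge" if target.familiar else "Engage"
    if score >= GLANCE_THRESHOLD:
        return "Glance"
    return "Read"

print(select_gaze_state([Person(1, motion=0.8, familiar=True)]))   # Acknowledge
print(select_gaze_state([Person(2, motion=0.4, familiar=False)]))  # Glance
print(select_gaze_state([]))                                       # Read
```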

“25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”
—Rodney Brooks, MIT emeritus professor

Brooks, an emeritus professor at MIT and, most recently, cofounder and CTO of Robust.ai, tweeted about the Disney project, saying: “People underestimate how long it takes to get from academic paper to real world robotics. 25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”

From the paper:

Although originally intended for control of mobile robots, we find that the subsumption architecture, as presented in [17], lends itself as a framework for organizing animatronic behaviors. This is due to the analogous use of subsumption in human behavior: human psychomotor behavior can be intuitively modeled as layered behaviors with incoming sensory inputs, where higher behavioral levels are able to subsume lower behaviors. At the lowest level, we have involuntary movements such as heartbeats, breathing and blinking. However, higher behavioral responses can take over and control lower level behaviors, e.g., fight-or-flight response can induce faster heart rate and breathing. As our robot character is modeled after human morphology, mimicking biological behaviors through the use of a bottom-up approach is straightforward.

The result, as the video shows, appears to be quite good, although it’s hard to tell how it would all come together if the robot had more of, you know, a face. But it seems like you don’t necessarily need to have a lifelike humanoid robot to take advantage of this architecture in an HRI context—any robot that wants to make a gaze-based connection with a human could benefit from doing it in a more human-like way.

“Realistic and Interactive Robot Gaze,” by Matthew K.X.J. Pan, Sungjoon Choi, James Kennedy, Kyna McIntosh, Daniel Campos Zamora, Gunter Niemeyer, Joohyung Kim, Alexis Wieland, and David Christensen from Disney Research, California Institute of Technology, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering, was presented at IROS 2020. You can find the full paper, along with a 13-minute video presentation, on the IROS on-demand conference website.

