Tag Archives: animatronics
#438882 Robotics in the entertainment industry
Mesmer Entertainment Robotics demonstrate some of their humanoid animatronics, as well as their humanoid robot, Owen.
#438448 Build humanoid robots with today’s ...
Is it possible to build advanced AI humanoid androids with today’s tech if there’s a drastic shift in human perception and aversion, or if a sudden critical need arises? This video explores the very real possibility.
#437826 Video Friday: Skydio 2 Drone Is Back on ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
RSS 2020 – July 12-16, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
IROS 2020 – October 25-29, 2020 – Las Vegas, Nevada
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today’s videos.
Skydio, which makes what we’re pretty sure is the most intelligent consumer drone (or maybe just drone period) in existence, has been dealing with COVID-19 just like the rest of us. Even so, they’ve managed to push out a major software update, and pre-orders for the Skydio 2 are now open again.
If you think you might want one, read our review, after which you’ll be sure you want one.
[ Skydio ]
Worried about people with COVID entering your workplace? Misty II has your front desk covered, in a way that’s quite a bit friendlier than many other options.
Misty II provides a dynamic and interactive screening experience that delivers a joyful experience in an otherwise depressing moment while also delivering state of the art thermal scanning and health screening. We have already found that employees, customers, and visitors appreciate the novelty of interacting with a clever and personable robot. Misty II engages dynamically, both visually and verbally. Companies appreciate using a solution with a blackbody-referenced thermal camera that provides high accuracy and a short screening process for efficiency. Putting a robot to work in this role shifts not only how people look at the screening process but also how robots can take on useful assignments in business, schools and homes.
[ Misty Robotics ]
Thanks Tim!
I’m definitely the one in the middle.
[ Agility Robotics ]
NASA’s Ingenuity helicopter is traveling to Mars attached to the belly of the Perseverance rover and must safely detach to begin the first attempt at powered flight on another planet. Tests done at NASA’s Jet Propulsion Laboratory and Lockheed Martin Space show the sequence of events that will bring the helicopter down to the Martian surface.
[ JPL ]
Here’s a sequence of videos of Cassie Blue making it (or mostly making it) up a 22-degree slope.
My mood these days is Cassie at 1:09.
[ University of Michigan ]
Thanks Jesse!
This is somewhere on the line between home automation and robotics, but it’s a cool idea: A baby crib that “uses computer vision and machine learning to recognize subtle changes” in an infant’s movement, and proactively bounces them to keep them sleeping peacefully.
It costs $1000, but how much value do you put on 24 months of your own sleep?
[ Cradlewise ]
Thanks Ben!
As captive marine mammal shows have fallen from favor, and the catching, transporting, and breeding of marine animals has become more restricted, the marine park industry has become a more challenging business – yet the audience appetite for this type of entertainment and education has remained constant.
Real-time Animatronics provide a way to reinvent the marine entertainment industry with a sustainable, safe, and profitable future. Show venues include aquariums, marine parks, theme parks, fountain shows, cruise lines, resort hotels, shopping malls, museums, and more.
[ EdgeFX ] via [ Gizmodo ]
Robotic cabling is surprisingly complex and kinda cool to watch.
The video shows the sophisticated robot application “Automatic control cabinet cabling”, which Fraunhofer IPA implemented together with the company Rittal. The software pitasc, developed at Fraunhofer IPA, is used for force-controlled assembly processes. Two UR robot arms carry out the task together. The modular pitasc system enables the robot arms to move and rotate in parallel. They work hand in hand, with one robot holding the cable and the second bringing it to the starting position for the cabling. The robots can find, tighten, hold ready, lay, plug in, fix, move freely or immerse cables. They can also perform push-ins and pull tests.
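pitasc itself isn’t shown in any detail in the video, so as a purely illustrative idea of what one force-guarded step in this kind of task involves, here’s a minimal sketch in Python. The arm interface, thresholds, and distances are all hypothetical stand-ins, not pitasc’s API or Rittal’s parameters.

```python
class FakeArm:
    """Hypothetical stand-in for a real arm interface (pitasc's API is not public here)."""
    def __init__(self):
        self.z = 0.0
    def move_relative(self, z):
        self.z += z
    def measured_force_z(self):
        # Pretend the wire terminal seats after 3 mm of travel.
        return 20.0 if self.z <= -0.003 else 1.0

def guarded_insert(arm, max_depth_m=0.010, seat_force_n=15.0, step_m=0.0005):
    """Push along the tool z-axis in small steps until contact force says the wire is seated."""
    depth = 0.0
    while depth < max_depth_m:
        arm.move_relative(z=-step_m)
        if arm.measured_force_z() >= seat_force_n:
            return True   # resistance reached: terminal seated
        depth += step_m
    return False          # never met resistance: likely misaligned

print(guarded_insert(FakeArm()))  # -> True
```

A real system would run this with compliant control at a much higher rate; the sketch only shows the push-until-force-threshold idea behind the push-ins and pull tests mentioned above.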
[ Fraunhofer ]
This is from 2018, but the concept is still pretty neat.
We propose to perform a novel investigation into the ability of a propulsively hopping robot to reach targets of high science value on the icy, rugged terrains of Ocean Worlds. The employment of a multi-hop architecture allows for the rapid traverse of great distances, enabling a single mission to reach multiple geologic units within a timespan conducive to system survival in a harsh radiation environment. We further propose that the use of a propulsive hopping technique obviates the need for terrain topographic and strength assumptions and allows for complete terrain agnosticism; a key strength of this concept.
[ NASA ]
Aerial-aquatic robots possess the unique ability of operating in both air and water. However, this capability comes with tremendous challenges, such as communication incompatibility, increased airborne mass, potentially inefficient operation in each of the environments and manufacturing difficulties. Such robots, therefore, typically have small payloads and a limited operational envelope, often making their field usage impractical. We propose a novel robotic water sampling approach that combines the robust technologies of multirotors and underwater micro-vehicles into a single integrated tool usable for field operations.
[ Imperial ]
Event cameras are bio-inspired vision sensors with microsecond latency, much larger dynamic range, and a hundred times lower power consumption than standard cameras. This 20-minute talk gives a short tutorial on event cameras and shows their applications in computer vision, drones, and cars.
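For readers new to the format, here’s a minimal sketch (mine, not from the talk) of how an event stream is often represented: each event is an (x, y, timestamp, polarity) tuple, and a naive visualization just accumulates polarities per pixel. The resolution and function names below are illustrative assumptions.

```python
import numpy as np

def accumulate_events(events, width=346, height=260):
    """Naively sum event polarities per pixel into a frame.

    Each event is (x, y, timestamp_us, polarity), with polarity +1 for a
    brightness increase and -1 for a decrease. The 346x260 resolution is a
    common research-sensor size, used here only as an illustrative default.
    """
    frame = np.zeros((height, width), dtype=np.int32)
    for x, y, t_us, polarity in events:
        frame[y, x] += polarity
    return frame

# Three synthetic events with microsecond timestamps.
events = [(10, 20, 1, +1), (11, 20, 5, +1), (10, 21, 9, -1)]
print(accumulate_events(events)[20, 10])  # -> 1
```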
[ UZH ]
We interviewed Paul Newman, Perla Maiolino and Lars Kunze, ORI academics, to hear what gets them excited about robots in the future and any advice they have for those interested in the field.
[ Oxford Robotics Institute ]
Two projects from the Rehabilitation Engineering Lab at ETH Zurich, including a self-stabilizing wheelchair and a soft exoskeleton for grasping assistance.
[ ETH Zurich ]
Silicon Valley Robotics hosted an online conversation about robotics and racism. Moderated by Andra Keay, the panel featured Maynard Holliday, Tom Williams, Monroe Kennedy III, Jasmine Lawrence, Chad Jenkins, and Ken Goldberg.
[ SVR ]
The ICRA Legged Locomotion workshop has been taking place online, and while we’re not getting a robot mosh pit, there are still some great talks. We’ll post two here, but for more, follow the legged robots YouTube channel at the link below.
[ YouTube ]
#437579 Disney Research Makes Robotic Gaze ...
While it’s not totally clear to what extent human-like robots are better than conventional robots for most applications, one area I’m personally comfortable with them is entertainment. The folks over at Disney Research, who are all about entertainment, have been working on this sort of thing for a very long time, and some of their animatronic attractions are actually quite impressive.
The next step for Disney is to have its animatronic figures, which currently feature scripted behaviors, perform in an interactive manner with visitors. The challenge is that interactivity is where you start to get into potential Uncanny Valley territory, which tends to happen when you try to create “the illusion of life,” and that is exactly what Disney explicitly says it is trying to do.
In a paper presented at IROS this month, a team from Disney Research, Caltech, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering is trying to nail that illusion of life with a single, and perhaps most important, social cue: eye gaze.
Before you watch this video, keep in mind that you’re watching a specific character, as Disney describes:
The robot character plays an elderly man reading a book, perhaps in a library or on a park bench. He has difficulty hearing and his eyesight is in decline. Even so, he is constantly distracted from reading by people passing by or coming up to greet him. Most times, he glances at people moving quickly in the distance, but as people encroach into his personal space, he will stare with disapproval for the interruption, or provide those that are familiar to him with friendly acknowledgment.
What, exactly, does “lifelike” mean in the context of robotic gaze? The paper abstract describes the goal as “[seeking] to create an interaction which demonstrates the illusion of life.” I suppose you could think of it like a sort of old-fashioned Turing test focused on gaze: If the gaze of this robot cannot be distinguished from the gaze of a human, then victory, that’s lifelike. And critically, we’re talking about mutual gaze here—not just a robot gazing off into the distance, but you looking deep into the eyes of this robot and it looking right back at you just like a human would. Or, just like some humans would.
The approach that Disney is using is more animation-y than biology-y or psychology-y. In other words, they’re not trying to figure out what’s going on in our brains to make our eyes move the way that they do when we’re looking at other people and basing their control system on that, but instead, Disney just wants it to look right. This “visual appeal” approach is totally fine, and there’s been an enormous amount of human-robot interaction (HRI) research behind it already, albeit usually with less explicitly human-like platforms. And speaking of human-like platforms, the hardware is a “custom Walt Disney Imagineering Audio-Animatronics bust,” which has DoFs that include neck, eyes, eyelids, and eyebrows.
In order to decide on gaze motions, the system first identifies a person to target with its attention using an RGB-D camera. If more than one person is visible, the system calculates a curiosity score for each, currently simplified to be based on how much motion it sees. Depending on which visible person has the highest curiosity score, the system chooses from a variety of high-level gaze behavior states (a rough sketch of the selection logic follows the list below), including:
Read: The Read state can be considered the “default” state of the character. When not executing another state, the robot character will return to the Read state. Here, the character will appear to read a book located at torso level.
Glance: A transition to the Glance state from the Read or Engage states occurs when the attention engine indicates that there is a stimuli with a curiosity score […] above a certain threshold.
Engage: The Engage state occurs when the attention engine indicates that there is a stimuli […] to meet a threshold and can be triggered from both Read and Glance states. This state causes the robot to gaze at the person-of-interest with both the eyes and head.
Acknowledge: The Acknowledge state is triggered from either Engage or Glance states when the person-of-interest is deemed to be familiar to the robot.
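Pieced together only from the descriptions above, here’s a rough sketch of how that state selection could work. The thresholds, the motion-based curiosity score, and the familiarity check are placeholder assumptions, not Disney’s actual code or values.

```python
from enum import Enum, auto

class GazeState(Enum):
    READ = auto()
    GLANCE = auto()
    ENGAGE = auto()
    ACKNOWLEDGE = auto()

# Placeholder thresholds; the paper only says scores must exceed "a certain threshold."
GLANCE_THRESHOLD = 0.3
ENGAGE_THRESHOLD = 0.7

def curiosity_score(person):
    """Simplified as described: driven by how much motion the camera sees."""
    return person["motion"]  # e.g., normalized optical-flow magnitude in [0, 1]

def next_state(current, people):
    """Pick the next high-level gaze behavior given the people currently detected."""
    if not people:
        return GazeState.READ, None                 # default state: back to the book
    target = max(people, key=curiosity_score)       # person-of-interest
    score = curiosity_score(target)
    if target.get("familiar") and current in (GazeState.ENGAGE, GazeState.GLANCE):
        return GazeState.ACKNOWLEDGE, target        # friendly nod for familiar people
    if score >= ENGAGE_THRESHOLD:
        return GazeState.ENGAGE, target             # eyes and head both track the person
    if score >= GLANCE_THRESHOLD and current in (GazeState.READ, GazeState.ENGAGE):
        return GazeState.GLANCE, target             # brief look, then back to reading
    return GazeState.READ, None

# Example: a stranger moving moderately fast in the distance triggers a glance.
state, target = next_state(GazeState.READ, [{"motion": 0.4, "familiar": False}])
print(state)  # GazeState.GLANCE
```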
Running underneath these higher level behavior states are lower level motion behaviors like breathing, small head movements, eye blinking, and saccades (the quick eye movements that occur when people, or robots, look between two different focal points). The term for this hierarchical behavioral state layering is a subsumption architecture, which goes all the way back to Rodney Brooks’ work on robots like Genghis in the 1980s and Cog and Kismet in the ’90s, and it provides a way for more complex behaviors to emerge from a set of simple, decentralized low-level behaviors.
“25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”
—Rodney Brooks, MIT emeritus professor
Brooks, an emeritus professor at MIT and, most recently, cofounder and CTO of Robust.ai, tweeted about the Disney project, saying: “People underestimate how long it takes to get from academic paper to real world robotics. 25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”
From the paper:
Although originally intended for control of mobile robots, we find that the subsumption architecture, as presented in [17], lends itself as a framework for organizing animatronic behaviors. This is due to the analogous use of subsumption in human behavior: human psychomotor behavior can be intuitively modeled as layered behaviors with incoming sensory inputs, where higher behavioral levels are able to subsume lower behaviors. At the lowest level, we have involuntary movements such as heartbeats, breathing and blinking. However, higher behavioral responses can take over and control lower level behaviors, e.g., fight-or-flight response can induce faster heart rate and breathing. As our robot character is modeled after human morphology, mimicking biological behaviors through the use of a bottom-up approach is straightforward.
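As a loose illustration of that bottom-up layering (and not the paper’s implementation), here’s subsumption in miniature: each layer either produces a command or defers, and the highest layer with something to say takes over from the layers beneath it.

```python
class Layer:
    """One behavior layer; higher layers may subsume (override) lower ones."""
    def __init__(self, name, act):
        self.name = name
        self.act = act  # callable returning a motor-command dict, or None to defer

def run(layers, sensors):
    """The highest layer that produces a command wins; otherwise defer downward."""
    for layer in reversed(layers):      # top of the stack first
        command = layer.act(sensors)
        if command is not None:
            return layer.name, command
    return "idle", {}

# Bottom to top: involuntary blinking, then attention-driven engagement.
layers = [
    Layer("blink", lambda s: {"eyelids": "blink"} if s["time_since_blink"] > 4.0 else None),
    Layer("engage", lambda s: {"eyes": "track", "head": "track"} if s["curiosity"] > 0.7 else None),
]

print(run(layers, {"time_since_blink": 5.0, "curiosity": 0.9}))  # engage subsumes blinking
print(run(layers, {"time_since_blink": 5.0, "curiosity": 0.1}))  # blinking runs uncontested
```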
The result, as the video shows, appears to be quite good, although it’s hard to tell how it would all come together if the robot had more of, you know, a face. But it seems like you don’t necessarily need to have a lifelike humanoid robot to take advantage of this architecture in an HRI context—any robot that wants to make a gaze-based connection with a human could benefit from doing it in a more human-like way.
“Realistic and Interactive Robot Gaze,” by Matthew K.X.J. Pan, Sungjoon Choi, James Kennedy, Kyna McIntosh, Daniel Campos Zamora, Gunter Niemeyer, Joohyung Kim, Alexis Wieland, and David Christensen from Disney Research, California Institute of Technology, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering, was presented at IROS 2020. You can find the full paper, along with a 13-minute video presentation, on the IROS on-demand conference website.