#437583 Video Friday: Attack of the Hexapod ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
IROS 2020 – October 25-25, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.
Happy Halloween from HEBI Robotics!
Thanks Hardik!
[ HEBI Robotics ]
Happy Halloween from Berkshire Grey!
[ Berkshire Grey ]
These are some preliminary results of our lab’s new work on using reinforcement learning to train neural networks to imitate common bipedal gait behaviors, without using any motion capture data or reference trajectories. Our method is described in an upcoming submission to ICRA 2021. Work by Jonah Siekmann and Yesh Godse.
[ OSU DRL ]
The northern goshawk is a fast, powerful raptor that flies effortlessly through forests. This bird was the design inspiration for the next-generation drone developed by scientists at the Laboratory of Intelligent Systems of EPFL, led by Dario Floreano. They carefully studied the shape of the bird’s wings and tail and its flight behavior, and used that information to develop a drone with similar characteristics.
The engineers had already designed a bird-inspired drone with morphing wings back in 2016. In a step forward, their new model can adjust the shape of its wing and tail thanks to its artificial feathers. Flying this new type of drone isn’t easy, due to the large number of possible wing and tail configurations. To take full advantage of the drone’s flight capabilities, Floreano’s team plans to incorporate artificial intelligence into the drone’s flight system so that it can fly semi-automatically. The team’s research has been published in Science Robotics.
[ EPFL ]
Oopsie.
[ Roborace ]
We’ve covered MIT’s Roboats in the past, but now they’re big enough to keep a couple of people afloat.
Self-driving boats have been able to transport small items for years, but carrying human passengers has remained out of reach given the current size of the vessels. Roboat II is the “half-scale” boat in the growing body of work, and joins the previously developed quarter-scale Roboat, which is 1 meter long. The third installment, which is under construction in Amsterdam and is considered to be “full scale,” is 4 meters long and aims to carry anywhere from four to six passengers.
[ MIT ]
With a training technique commonly used to teach dogs to sit and stay, Johns Hopkins University computer scientists showed a robot how to teach itself several new tricks, including stacking blocks. With the method, the robot, named Spot, was able to learn in days what typically takes a month.
[ JHU ]
Exyn, a pioneer in autonomous aerial robot systems for complex, GPS-denied industrial environments, today announced that Kody has become the first dog to successfully fly a drone, at the Number 9 Coal Mine in Lansford, Pa. Selected to carry out this mission was the new autonomous aerial robot, the ExynAero.
Yes, this is obviously a publicity stunt, and Kody is only flying the drone in the sense that he’s pushing the launch button and then taking a nap. But that’s also the point—drone autonomy doesn’t get much fuller than this, despite the challenge of the environment.
[ Exyn ]
In this video, object instance segmentation and shape completion are combined with classical regrasp planning to perform pick-and-place of novel objects. The approach is demonstrated with a UR5 arm, a Robotiq 85 parallel-jaw gripper, and a Structure depth sensor on three rearrangement tasks: bin packing (minimizing the height of the packing), placing bottles onto coasters, and arranging blocks from tallest to shortest (according to the longest edge). The system also accounts for uncertainty in the segmentation/completion by avoiding grasping or placing on parts of the object where perceptual uncertainty is predicted to be high.
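A rough way to picture that last step: mask out grasp candidates whose predicted perceptual uncertainty is too high, then rank what remains. The sketch below is only an illustration of that idea, with an assumed threshold and quality score; it is not the authors’ actual pipeline.

```python
# Hypothetical sketch of uncertainty-aware grasp selection, loosely following the
# description above. The uncertainty threshold and quality scoring are assumptions.

import numpy as np

def select_grasp(grasp_points: np.ndarray,
                 grasp_quality: np.ndarray,
                 uncertainty: np.ndarray,
                 max_uncertainty: float = 0.2):
    """Pick the best grasp point whose predicted perceptual uncertainty is low.

    grasp_points : (N, 3) candidate grasp positions on the completed shape
    grasp_quality: (N,) heuristic quality score for each candidate
    uncertainty  : (N,) predicted segmentation/completion uncertainty per point
    """
    feasible = uncertainty < max_uncertainty        # drop high-uncertainty regions
    if not feasible.any():
        return None                                 # fall back, e.g. re-perceive the scene
    idx = np.argmax(np.where(feasible, grasp_quality, -np.inf))
    return grasp_points[idx]

# Example: three candidates; the highest-quality one is too uncertain to trust.
points = np.array([[0.0, 0.0, 0.1], [0.0, 0.05, 0.1], [0.02, 0.0, 0.12]])
quality = np.array([0.9, 0.6, 0.7])
uncert = np.array([0.5, 0.1, 0.15])
print(select_grasp(points, quality, uncert))        # -> [0.02 0.   0.12]
```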
[ Paper ] via [ Northeastern ]
Thanks Marcus!
U can’t touch this!
[ University of Tokyo ]
We introduce a way to enable more natural interaction between humans and robots through Mixed Reality, by using a shared coordinate system. Azure Spatial Anchors, which already supports colocalizing multiple HoloLens and smartphone devices in the same space, has now been extended to support robots equipped with cameras. This allows humans and robots sharing the same space to interact naturally: humans can see the plan and intention of the robot, while the robot can interpret commands given from the person’s perspective. We hope that this can be a building block in the future of humans and robots being collaborators and coworkers.
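The enabling trick is simply a shared reference frame: once the robot and the HoloLens both localize against the same anchor, anything the person specifies in their own frame can be re-expressed in the robot’s frame. Here is a minimal homogeneous-transform sketch of that idea; it is illustrative only and does not use the actual Azure Spatial Anchors API, and all the poses are made up.

```python
# Illustrative sketch of expressing a human-specified goal in the robot's frame via
# a shared anchor, using plain homogeneous transforms. Poses are placeholder values.

import numpy as np

def make_pose(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def transform_point(T: np.ndarray, p: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to a 3D point."""
    return (T @ np.append(p, 1.0))[:3]

# Poses of each device *relative to the shared anchor* (assumed known from localization).
T_anchor_human = make_pose(np.eye(3), np.array([1.0, 0.0, 0.0]))   # HoloLens wearer
T_anchor_robot = make_pose(np.eye(3), np.array([-0.5, 0.2, 0.0]))  # robot camera

# A goal the person indicates 0.5 m in front of themselves, in their own frame.
goal_in_human = np.array([0.0, 0.5, 0.0])

# Re-express it: human frame -> anchor frame -> robot frame.
goal_in_anchor = transform_point(T_anchor_human, goal_in_human)
goal_in_robot = transform_point(np.linalg.inv(T_anchor_robot), goal_in_anchor)
print(goal_in_robot)   # where the robot should plan to, in its own coordinates
```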
[ Microsoft ]
Some very high jumps from the skinniest quadruped ever.
[ ODRI ]
In this video we present recent efforts to make our humanoid robot LOLA ready for multi-contact locomotion, i.e. additional hand-environment support for extra stabilization during walking.
[ TUM ]
Classic bike moves from Dr. Guero.
[ Dr. Guero ]
For a robotics company, iRobot is OLD.
[ iRobot ]
The Canadian Space Agency presents Juno, a preliminary version of a rover that could one day be sent to the Moon or Mars. Juno can navigate autonomously or be operated remotely. The Lunar Exploration Analogue Deployment (LEAD) consisted of replicating scenarios of a lunar sample return mission.
[ CSA ]
How exactly does the Waymo Driver handle a cat cutting across its driving path? Jonathan N., a Product Manager on our Perception team, breaks it all down for us.
Now do kangaroos.
[ Waymo ]
Jibo is hard at work at MIT playing games with kids.
Children’s creativity plummets as they enter elementary school. Social interactions with peers and playful environments have been shown to foster creativity in children. Digital pedagogical tools often lack the creativity benefits of co-located social interaction with peers. In this work, we leverage a socially embodied robot as a playful peer and design Escape!Bot, a game involving child-robot co-play in which the robot is a social agent that scaffolds creativity during gameplay.
[ Paper ]
It’s nice when convenience stores are convenient even for the folks who have to do the restocking.
Who’s moving the crates around, though?
[ Telexistence ]
Hi, fans! Join ROS World 2020, opening November 12th, and see footage of ROBOTIS’ ROS platform robots 🙂
[ ROS World 2020 ]
ML/RL methods are often viewed as a magical black box, and while that’s not true, learned policies are nonetheless a valuable tool that can work in conjunction with the underlying physics of the robot. In this video, Agility CTO Jonathan Hurst – wearing his professor hat at Oregon State University – presents some recent student work on using learned policies as a control method for highly dynamic legged robots.
[ Agility Robotics ]
Here’s an ICRA Legged Robots workshop talk from Marco Hutter at ETH Zürich, on Autonomy for ANYmal.
Recent advances in legged robots and their locomotion skills have led to systems that are skilled and mature enough for real-world deployment. In particular, quadrupedal robots have reached a level of mobility that lets them navigate complex environments, which enables them to take over inspection or surveillance jobs in places like offshore industrial plants, underground areas, or construction sites. In this talk, I will present our research work with the quadruped ANYmal and explain some of the underlying technologies for locomotion control, environment perception, and mission autonomy. I will show how these robots can learn and plan complex maneuvers, how they can navigate through unknown environments, and how they are able to conduct surveillance, inspection, or exploration scenarios.
[ RSL ]
#437579 Disney Research Makes Robotic Gaze ...
While it’s not totally clear to what extent human-like robots are better than conventional robots for most applications, one area where I’m personally comfortable with them is entertainment. The folks over at Disney Research, who are all about entertainment, have been working on this sort of thing for a very long time, and some of their animatronic attractions are actually quite impressive.
The next step for Disney is to make its animatronic figures, which currently feature scripted behaviors, perform in an interactive manner with visitors. The challenge is that this is where you start to get into potential Uncanny Valley territory, a risk that comes with trying to create “the illusion of life,” which is exactly what Disney says it is trying to do.
In a paper presented at IROS this month, a team from Disney Research, Caltech, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering is trying to nail that illusion of life with a single, and perhaps most important, social cue: eye gaze.
Before you watch this video, keep in mind that you’re watching a specific character, as Disney describes:
The robot character plays an elderly man reading a book, perhaps in a library or on a park bench. He has difficulty hearing and his eyesight is in decline. Even so, he is constantly distracted from reading by people passing by or coming up to greet him. Most times, he glances at people moving quickly in the distance, but as people encroach into his personal space, he will stare with disapproval for the interruption, or provide those that are familiar to him with friendly acknowledgment.
What, exactly, does “lifelike” mean in the context of robotic gaze? The paper abstract describes the goal as “[seeking] to create an interaction which demonstrates the illusion of life.” I suppose you could think of it like a sort of old-fashioned Turing test focused on gaze: If the gaze of this robot cannot be distinguished from the gaze of a human, then victory, that’s lifelike. And critically, we’re talking about mutual gaze here—not just a robot gazing off into the distance, but you looking deep into the eyes of this robot and it looking right back at you just like a human would. Or, just like some humans would.
The approach that Disney is using is more animation-y than biology-y or psychology-y. In other words, they’re not trying to figure out what’s going on in our brains to make our eyes move the way that they do when we’re looking at other people and basing their control system on that, but instead, Disney just wants it to look right. This “visual appeal” approach is totally fine, and there’s been an enormous amount of human-robot interaction (HRI) research behind it already, albeit usually with less explicitly human-like platforms. And speaking of human-like platforms, the hardware is a “custom Walt Disney Imagineering Audio-Animatronics bust,” which has DoFs that include neck, eyes, eyelids, and eyebrows.
In order to decide on gaze motions, the system first identifies a person to target with its attention using an RGB-D camera. If more than one person is visible, the system calculates a curiosity score for each, currently simplified to be based on how much motion it sees. Depending on which visible person has the highest curiosity score, the system chooses from a set of high-level gaze behavior states (a rough code sketch of this selection appears below), including:
Read: The Read state can be considered the “default” state of the character. When not executing another state, the robot character will return to the Read state. Here, the character will appear to read a book located at torso level.
Glance: A transition to the Glance state from the Read or Engage states occurs when the attention engine indicates that there is a stimuli with a curiosity score […] above a certain threshold.
Engage: The Engage state occurs when the attention engine indicates that there is a stimuli […] to meet a threshold and can be triggered from both Read and Glance states. This state causes the robot to gaze at the person-of-interest with both the eyes and head.
Acknowledge: The Acknowledge state is triggered from either Engage or Glance states when the person-of-interest is deemed to be familiar to the robot.
Running underneath these higher level behavior states are lower level motion behaviors like breathing, small head movements, eye blinking, and saccades (the quick eye movements that occur when people, or robots, look between two different focal points). The term for this hierarchical behavioral state layering is a subsumption architecture, which goes all the way back to Rodney Brooks’ work on robots like Genghis in the 1980s and Cog and Kismet in the ’90s, and it provides a way for more complex behaviors to emerge from a set of simple, decentralized low-level behaviors.
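To make that state selection concrete, here is a minimal Python sketch of an attention engine choosing among the four states. The thresholds, the motion-only curiosity score, and the familiarity flag are assumptions for illustration, not values or logic from the Disney paper.

```python
# Hypothetical sketch of the high-level gaze state selection described above.
# Thresholds, the motion-based curiosity score, and the familiarity check are
# illustrative assumptions, not details from the paper.

from dataclasses import dataclass

GLANCE_THRESHOLD = 0.3   # assumed: enough motion to merit a quick look
ENGAGE_THRESHOLD = 0.7   # assumed: enough motion/proximity to turn head and eyes

@dataclass
class Person:
    person_id: int
    motion: float      # normalized amount of observed motion, 0..1
    familiar: bool     # whether the character "knows" this person

def curiosity(person: Person) -> float:
    """Simplified curiosity score: currently just how much the person moves."""
    return person.motion

def select_gaze_state(people: list) -> str:
    """Return one of the high-level states: Read, Glance, Engage, Acknowledge."""
    if not people:
        return "Read"                      # default: go back to reading the book
    target = max(people, key=curiosity)    # person of interest = highest curiosity
    score = curiosity(target)
    if score >= ENGAGE_THRESHOLD:
        # Familiar people get a friendly acknowledgment instead of a plain stare.
        return "Acknowledge" if target.familiar else "Engage"
    if score >= GLANCE_THRESHOLD:
        return "Glance"
    return "Read"

# Lower-level behaviors (breathing, blinking, saccades) would keep running
# underneath whichever state is active, in subsumption-architecture fashion.
print(select_gaze_state([Person(1, motion=0.8, familiar=True)]))  # -> Acknowledge
```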
“25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”
—Rodney Brooks, MIT emeritus professor
Brooks, an emeritus professor at MIT and, most recently, cofounder and CTO of Robust.ai, tweeted about the Disney project, saying: “People underestimate how long it takes to get from academic paper to real world robotics. 25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”
From the paper:
Although originally intended for control of mobile robots, we find that the subsumption architecture, as presented in [17], lends itself as a framework for organizing animatronic behaviors. This is due to the analogous use of subsumption in human behavior: human psychomotor behavior can be intuitively modeled as layered behaviors with incoming sensory inputs, where higher behavioral levels are able to subsume lower behaviors. At the lowest level, we have involuntary movements such as heartbeats, breathing and blinking. However, higher behavioral responses can take over and control lower level behaviors, e.g., fight-or-flight response can induce faster heart rate and breathing. As our robot character is modeled after human morphology, mimicking biological behaviors through the use of a bottom-up approach is straightforward.
The result, as the video shows, appears to be quite good, although it’s hard to tell how it would all come together if the robot had more of, you know, a face. But it seems like you don’t necessarily need to have a lifelike humanoid robot to take advantage of this architecture in an HRI context—any robot that wants to make a gaze-based connection with a human could benefit from doing it in a more human-like way.
“Realistic and Interactive Robot Gaze,” by Matthew K.X.J. Pan, Sungjoon Choi, James Kennedy, Kyna McIntosh, Daniel Campos Zamora, Gunter Niemeyer, Joohyung Kim, Alexis Wieland, and David Christensen from Disney Research, California Institute of Technology, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering, was presented at IROS 2020. You can find the full paper, along with a 13-minute video presentation, on the IROS on-demand conference website.
#437571 Video Friday: Snugglebot Is What We All ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
IROS 2020 – October 25-25, 2020 – [Online]
Robotica 2020 – November 10-14, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Bay Area Robotics Symposium – November 20, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today's videos.
Snugglebot is what we all need right now.
[ Snugglebot ]
In his video message on his prayer intention for November, Pope Francis calls for progress in robotics and artificial intelligence (AI) to be oriented “towards respecting the dignity of the person and of Creation”.
[ Vatican News ]
KaPOW!
Apparently it's supposed to do that—the disruptor flies off backwards to reduce recoil on the robot, and has its own parachute to keep it from going too far.
[ Ghost Robotics ]
Animals have many muscles, receptors, and neurons that compose feedback loops. In this study, we designed artificial muscles, receptors, and neurons without any microprocessors or software-based controllers. We imitate the reflexive rule observed in walking experiments on cats; as a result, a running motion (a leg trajectory and a gait pattern) emerged in the Pneumatic Brainless Robot II through the interaction between the body, the ground, and the artificial reflexes. We envision that the simple reflex circuit we discovered will be a candidate for a minimal model for describing the principles of animal locomotion.
Find the paper, “Brainless Running: A Quasi-quadruped Robot with Decentralized Spinal Reflexes by Solely Mechanical Devices,” on IROS On-Demand.
[ IROS ]
Thanks Yoichi!
I have no idea what these guys are saying, but they're talking about robots that serve chocolate!
The visitor-experience world of the Zotter Schokoladen Manufaktur, run by managing director Josef Zotter, counts more than 270,000 visitors annually. Since March 2019, this world of chocolate in Bergl near Riegersburg in Austria has been enriched by a new attraction: the world’s first chocolate and praline robot from KUKA delights young and old alike and serves up chocolate and pralines to guests according to their personal taste.
[ Zotter ]
This paper proposes a systematic solution that uses an unmanned aerial vehicle (UAV) to aggressively and safely track an agile target. The solution properly handles challenging situations in which both the intent of the target and the dense surrounding environment are unknown to the UAV. The proposed solution is integrated into an onboard quadrotor system. We fully test the system in challenging real-world tracking missions. Moreover, benchmark comparisons validate that the proposed method surpasses cutting-edge methods in time efficiency and tracking effectiveness.
[ FAST Lab ]
Southwest Research Institute developed a cable management system for collaborative robotics, or “cobots.” Dress packs used on cobots can create problems when cables are too tight (e-stops) or loose (tangling). SwRI developed ADDRESS, or the Adaptive DRESing System, to provide smarter cobot dress packs that address e-stops and tangling.
[ SWRI ]
A quick demonstration of the acoustic contact sensor in the RBO Hand 2. An embedded microphone records the sound inside of the pneumatic finger. Depending on which part of the finger makes contact, the sound is a little bit different. We create a sensor that recognizes these small changes and predicts the contact location from the sound. The visualization on the left shows the recorded sound (top) and which of the nine contact classes the sensor is currently predicting (bottom).
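In effect, contact localization becomes a small audio-classification problem: extract a spectral feature from each microphone window and map it to one of the nine contact classes. The sketch below is a generic illustration of that idea (a nearest-neighbor classifier on FFT magnitudes) with placeholder data; it is not the TU Berlin pipeline.

```python
# Generic sketch of sound-based contact localization: classify short audio windows
# from the in-finger microphone into one of nine contact classes. Illustration only;
# the training data here is random placeholder noise.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def spectral_features(window: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Magnitude spectrum of one audio window, truncated to the first n_bins bins."""
    return np.abs(np.fft.rfft(window))[:n_bins]

# Assume a labeled dataset of (audio window, contact class) pairs was recorded
# by making contact with each of the nine finger regions many times.
rng = np.random.default_rng(0)
windows = rng.standard_normal((90, 1024))          # placeholder audio windows
labels = np.repeat(np.arange(9), 10)               # contact classes 0..8

X = np.array([spectral_features(w) for w in windows])
clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)

# At run time: predict which part of the finger is in contact from a new window.
new_window = rng.standard_normal(1024)
print("predicted contact class:", clf.predict([spectral_features(new_window)])[0])
```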
[ TU Berlin ]
The MAVLab won the prize for the “most innovative design” in the IMAV 2018 indoor competition, in which drones had to fly through windows and gates and follow a predetermined flight path. The prize was awarded for the demonstration of a fully autonomous version of the “DelFly Nimble,” a tailless flapping-wing drone.
In order to fly by itself, the DelFly Nimble was equipped with a single, small camera and a small processor allowing onboard vision processing and control. The jury of international experts in the field praised the agility and autonomous flight capabilities of the DelFly Nimble.
[ MAVLab ]
A reactive walking controller for the Open Dynamic Robot Initiative's skinny quadruped.
[ ODRI ]
Mobile service robots are already able to recognize people and objects while navigating autonomously through their operating environments. But what is the ideal position of the robot to interact with a user? To solve this problem, Fraunhofer IPA developed an approach that connects navigation, 3D environment modeling, and person detection to find the optimal goal pose for HRI.
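One way to picture the “optimal goal pose” idea: score candidate robot poses by how well they face the detected person at a comfortable distance, keeping only poses the navigation stack and 3D map report as reachable. The sketch below is an assumed toy version of such a score, not Fraunhofer IPA’s actual method; the preferred distance and weights are made up.

```python
# Illustrative sketch of picking an interaction pose: prefer poses that face the
# detected person at a comfortable distance and are reachable per the map/planner.
# Distances, weights, and the reachability check are assumptions.

import math

PREFERRED_DISTANCE = 1.2   # assumed comfortable interaction distance, in meters

def pose_score(robot_xy, robot_heading, person_xy):
    """Higher is better: close to the preferred distance and roughly facing the person."""
    dx, dy = person_xy[0] - robot_xy[0], person_xy[1] - robot_xy[1]
    distance_error = abs(math.hypot(dx, dy) - PREFERRED_DISTANCE)
    heading_error = abs(math.atan2(dy, dx) - robot_heading)
    return -(distance_error + 0.5 * heading_error)

def best_interaction_pose(candidates, person_xy, is_reachable):
    """candidates: iterable of (x, y, heading); is_reachable: navigation/map check."""
    reachable = [c for c in candidates if is_reachable(c)]
    return max(reachable, key=lambda c: pose_score(c[:2], c[2], person_xy), default=None)

# Example with a trivial reachability check.
candidates = [(0.0, 0.0, 0.0), (1.0, 0.0, math.pi), (0.5, 1.0, -math.pi / 2)]
print(best_interaction_pose(candidates, person_xy=(1.5, 0.0), is_reachable=lambda c: True))
```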
[ Fraunhofer ]
Yaskawa has been in robotics for a very, very long time.
[ Yaskawa ]
Black in Robotics IROS launch event, featuring Carlotta Berry.
[ Black in Robotics ]
What is AI? I have no idea! But these folks have some opinions.
[ MIT ]
Aerial-based Observations of Volcanic Emissions (ABOVE) is an international collaborative project that is changing the way we sample volcanic gas emissions. Harnessing recent advances in drone technology, unoccupied aerial systems (UAS) in the ABOVE fleet are able to acquire aerial measurements of volcanic gases directly from within previously inaccessible volcanic plumes. In May 2019, a team of 30 researchers undertook an ambitious field deployment to two volcanoes – Tavurvur (Rabaul) and Manam in Papua New Guinea – both amongst the most prodigious emitters of sulphur dioxide on Earth, and yet lacking any measurements of how much carbon they emit to the atmosphere.
[ ABOVE ]
A talk from IHMC's Robert Griffin for ICCAS 2020, including a few updates on their Nadia humanoid.
[ IHMC ]
#437554 Ending the COVID-19 Pandemic
Photo: F.J. Jimenez/Getty Images
The approach of a new year is always a time to take stock and be hopeful. This year, though, reflection and hope are more than de rigueur—they’re rejuvenating. We’re coming off a year in which doctors, engineers, and scientists took on the most dire public threat in decades, and in the new year we’ll see the greatest results of those global efforts. COVID-19 vaccines are just months away, and biomedical testing is being revolutionized.
At IEEE Spectrum we focus on the high-tech solutions: Can artificial intelligence (AI) be used to diagnose COVID-19 using cough recordings? Can mathematical modeling determine whether preventive measures against COVID-19 work? Can big data and AI provide accurate pandemic forecasting?
Consider our story “AI Recognizes COVID-19 in the Sound of a Cough,” reported by Megan Scudellari in our Human OS blog. Using a cellphone-recorded cough, machine-learning models can now detect coronavirus with 90 percent accuracy, even in people with no symptoms. It’s a remarkable research milestone. This AI model sifts through hundreds of factors to distinguish the COVID-19 cough from those of bronchitis, whooping cough, and asthma.
But while such high-tech triumphs give us hope, the no-tech solutions are mostly what we have to work with. Soon, as our Numbers Don’t Lie columnist, Vaclav Smil, pointed out in a recent email, we will have near-instantaneous home testing, and we will have the ability to use big data to crunch every move and every outbreak. But we are nowhere near that yet. So let’s use, as he says, some old-fashioned kindergarten epidemiology, the no-tech measures, while we work to get there:
Masks: Wear them. If we all did so, we could cut transmission by two-thirds, perhaps even 80 percent.
Hands: Wash them.
Social distancing: If we could all stay home for two weeks, we could see enormous declines in COVID-19 transmission.
These are all time-tested solutions, proven effective ages ago in countless outbreaks of diseases including typhoid and cholera. They’re inexpensive and easy to prescribe, and the regimens are easy to follow.
The conflict between public health and individual rights and privacy, however, is less easy to resolve. Even during the pandemic of 1918–19, there was widespread resistance to mask wearing and social distancing. Fifty million people died—675,000 in the United States alone. Today, we are up to 240,000 deaths in the United States, and the end is not in sight. Antiflu measures were framed in 1918 as a way to protect the troops fighting in World War I, and people who refused to wear masks were called out as “dangerous slackers.” There was a world war, and yet it was still hard to convince people of the need for even such simple measures.
Personally, I have found the resistance to these easy fixes startling. I wouldn’t want maskless, gloveless doctors taking me through a surgical procedure. Or waltzing in from lunch without washing their hands. I’m sure you wouldn’t, either.
Science-based medicine has been one of the world’s greatest and most fundamental advances. In recent years, it has been turbocharged by breakthroughs in genetics technologies, advanced materials, high-tech diagnostics, and implants and other electronics-based interventions. Such leaps have already saved untold lives, but there’s much more to be done. And there will be many more pandemics ahead for humanity.
#437460 This Week’s Awesome Tech Stories From ...
ARTIFICIAL INTELLIGENCE
A Radical New Technique Lets AI Learn With Practically No Data
Karen Hao | MIT Technology Review
“Shown photos of a horse and a rhino, and told a unicorn is something in between, [children] can recognize the mythical creature in a picture book the first time they see it. …Now a new paper from the University of Waterloo in Ontario suggests that AI models should also be able to do this—a process the researchers call ‘less than one’-shot, or LO-shot, learning.”
FUTURE
Artificial General Intelligence: Are We Close, and Does It Even Make Sense to Try?
Will Douglas Heaven | MIT Technology Review
“A machine that could think like a person has been the guiding vision of AI research since the earliest days—and remains its most divisive idea. …So why is AGI controversial? Why does it matter? And is it a reckless, misleading dream—or the ultimate goal?”
HEALTH
The Race for a Super-Antibody Against the Coronavirus
Apoorva Mandavilli | The New York Times
“Dozens of companies and academic groups are racing to develop antibody therapies. …But some scientists are betting on a dark horse: Prometheus, a ragtag group of scientists who are months behind in the competition—and yet may ultimately deliver the most powerful antibody.”
SPACE
How to Build a Spacecraft to Save the World
Daniel Oberhaus | Wired
“The goal of the Double Asteroid Redirection Test, or DART, is to slam the [spacecraft] into a small asteroid orbiting a larger asteroid 7 million miles from Earth. …It should be able to change the asteroid’s orbit just enough to be detectable from Earth, demonstrating that this kind of strike could nudge an oncoming threat out of Earth’s way. Beyond that, everything is just an educated guess, which is exactly why NASA needs to punch an asteroid with a robot.”
TRANSPORTATION
Inside Gravity’s Daring Mission to Make Jetpacks a Reality
Oliver Franklin-Wallis | Wired
“The first time someone flies a jetpack, a curious thing happens: just as their body leaves the ground, their legs start to flail. …It’s as if the vestibular system can’t quite believe what’s happening. This isn’t natural. Then suddenly, thrust exceeds weight, and—they’re aloft. …It’s that moment, lift-off, that has given jetpacks an enduring appeal for over a century.”
FUTURE OF FOOD
Inside Singapore’s Huge Bet on Vertical Farming
Megan Tatum | MIT Technology Review
“…to cram all [of Singapore’s] gleaming towers and nearly 6 million people into a land mass half the size of Los Angeles, it has sacrificed many things, including food production. Farms make up no more than 1% of its total land (in the United States it’s 40%), forcing the small city-state to shell out around $10 billion each year importing 90% of its food. Here was an example of technology that could change all that.”
COMPUTING
The Effort to Build the Mathematical Library of the Future
Kevin Hartnett | Quanta
“Digitizing mathematics is a longtime dream. The expected benefits range from the mundane—computers grading students’ homework—to the transcendent: using artificial intelligence to discover new mathematics and find new solutions to old problems.”
Image credit: Kevin Mueller / Unsplash