
#437630 How Toyota Research Envisions the Future ...

Yesterday, the Toyota Research Institute (TRI) showed off some of the projects that it’s been working on recently, including a ceiling-mounted robot that could one day help us with household chores. That system is just one example of how TRI envisions the future of robotics and artificial intelligence. As TRI CEO Gill Pratt told us, the company is focusing on robotics and AI technology for “amplifying, rather than replacing, human beings.” In other words, Toyota wants to develop robots not for convenience or to do our jobs for us, but rather to allow people to continue to live and work independently even as we age.

To better understand Toyota’s vision of robotics 15 to 20 years from now, it’s worth watching the 20-minute video below, which depicts various scenarios “where the application of robotic capabilities is enabling members of an aging society to live full and independent lives in spite of the challenges that getting older brings.” It’s a long video, but it helps explain TRI’s perspective on how robots will collaborate with humans in our daily lives over the next couple of decades.

Those are some interesting conceptual telepresence-controlled bipeds they’ve got running around in that video, right?

For more details, we sent TRI some questions on how it plans to go from concepts like the ones shown in the video to real products that can be deployed in human environments. Below are answers from TRI CEO Gill Pratt, who is also chief scientist for Toyota Motor Corp.; Steffi Paepcke, senior UX designer at TRI; and Max Bajracharya, VP of robotics at TRI.

IEEE Spectrum: TRI seems to have a more explicit focus on eventual commercialization than most of the robotics research that we cover. At what point does TRI start to think about things like reliability and cost?

Photo: TRI

Toyota is exploring robots capable of manipulating dishes in a sink and a dishwasher, performing experiments and simulations to make sure that the robots can handle a wide range of conditions.

Gill Pratt: It’s a really interesting question, because the normal way to think about this would be to say, well, both reliability and cost are product development tasks. But actually, we need to think about it at the earliest possible stage with research as well. The hardware that we use in the laboratory for doing experiments, we don’t worry about cost there, or not nearly as much as you’d worry about for a product. However, in terms of what research we do, we very much have to think about, is it possible (if the research is successful) for it to end up in a product that has a reasonable cost. Because if a customer can’t afford what we come up with, maybe it has some academic value but it’s not actually going to make a difference in their quality of life in the real world. So we think about cost very much from the beginning.

The same is true with reliability. Right now, we’re working very hard to make our control techniques robust to wide variations in the environment. For instance, in work that Russ Tedrake is doing with manipulating dishes in a sink and a dishwasher, both in physical testing and in simulation, we’re doing thousands and now millions of different experiments to make sure that we can handle the edge cases and it works over a very wide range of conditions.
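TRI hasn’t published the code behind these large-scale tests, but the basic loop Pratt is describing (sample environment parameters, run a trial, and keep the failures so the edge cases can be studied) looks roughly like the sketch below. Every name and number here is an illustrative assumption, not TRI’s actual setup.

```python
import random

def run_dish_episode(friction, dish_mass, sensor_noise):
    """Stand-in for one simulated dish-handling trial.
    A real pipeline would call a physics simulator here."""
    # Toy failure model: the policy fails when conditions drift far from nominal.
    difficulty = abs(friction - 0.5) + abs(dish_mass - 0.3) + sensor_noise
    return difficulty < 0.6  # True = success

def randomized_trials(n_trials=10_000, seed=0):
    """Sample environment parameters, run episodes, and collect failures."""
    rng = random.Random(seed)
    failures = []
    for _ in range(n_trials):
        params = {
            "friction": rng.uniform(0.1, 0.9),      # surface friction coefficient
            "dish_mass": rng.uniform(0.1, 0.6),     # kilograms
            "sensor_noise": rng.uniform(0.0, 0.2),  # simulated depth noise, meters
        }
        if not run_dish_episode(**params):
            failures.append(params)                 # keep edge cases for later analysis
    return failures

if __name__ == "__main__":
    edge_cases = randomized_trials()
    print(f"{len(edge_cases)} failing parameter sets out of 10000")
```

The point of a loop like this is less the individual trial than the collection of failures at the end, which is what tells you where the controller still isn’t robust.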

A tremendous amount of work that we do is trying to bring robotics out of the age of doing demonstrations. There’s been a history of robotics where for some time, things have not been reliable, so we’d catch the robot succeeding just once and then show that video to the world, and people would get the mis-impression that it worked all of the time. Some researchers have been very good about showing the blooper reel too, to show that some of the time, robots don’t work.

“A tremendous amount of work that we do is trying to bring robotics out of the age of doing demonstrations. There’s been a history of robotics where for some time, things have not been reliable, so we’d catch the robot succeeding just once and then show that video to the world, and people would get the mis-impression that it worked all of the time.”
—Gill Pratt, TRI

In the spirit of sharing things that didn’t work, can you tell us a bit about some of the robots that TRI has had under development that didn’t make it into the demo yesterday because they were abandoned along the way?

Steffi Paepcke: We’re really looking at how we can connect people; it can be hard to stay in touch and see our loved ones as much as we would like to. There have been a few prototypes that we’ve worked on that had to be put on the shelf, at least for the time being. We were exploring how to use light so that people could be ambiently aware of one another across distances. I was very excited about that—the internal name was “glowing orb.” For a variety of reasons, it didn’t work out, but it was really fascinating to investigate different modalities for keeping in touch.

Another prototype we worked on—we found through our research that grocery shopping is obviously an important part of life, and for a lot of older adults, it’s not necessarily the right answer to always have groceries delivered. Getting up and getting out of the house keeps you physically active, and a lot of people prefer to continue doing it themselves. But it can be challenging, especially if you’re purchasing heavy items that you need to transport. We had a prototype that assisted with grocery shopping, but when we pivoted our focus to Japan, we found that the inside of a Japanese home really needs to stay inside, and the outside needs to stay outside, so a robot that traverses both domains is probably not the right fit for a Japanese audience, and those were some really valuable lessons for us.

Photo: TRI

Toyota recently demonstrated a gantry robot that would hang from the ceiling to perform tasks like wiping surfaces and clearing clutter.

I love that TRI is exploring things like the gantry robot both in terms of near-term research and as part of its long-term vision, but is a robot like this actually worth pursuing? Or more generally, what’s the right way to compromise between making an environment robot friendly, and asking humans to make changes to their homes?

Max Bajracharya: We think a lot about the problems that we’re trying to address in a holistic way. We don’t want to just give people a robot, and assume that they’re not going to change anything about their lifestyle. We have a lot of evidence from people who use automated vacuum cleaners that people will adapt to the tools you give them, and they’ll change their lifestyle. So we want to think about what is that trade between changing the environment, and giving people robotic assistance and tools.

We certainly think that there are ways to make the gantry system plausible. The one you saw today is obviously a prototype and does require significant infrastructure. If we’re going to retrofit a home, that isn’t going to be the way to do it. But we still feel like we’re very much in the prototype phase, where we’re trying to understand whether this is worth it to be able to bypass navigation challenges, and coming up with the pros and cons of the gantry system. We’re evaluating whether we think this is the right approach to solving the problem.

To what extent do you think humans should be either directly or indirectly in the loop with home and service robots?

Bajracharya: Our goal is to amplify people, so achieving this is going to require robots to be in a loop with people in some form. One thing we have learned is that using people in a slow loop with robots, such as teaching them or helping them when they make mistakes, gives a robot an important advantage over one that has to do everything perfectly 100 percent of the time. In unstructured human environments, robots are going to encounter corner cases, and are going to need to learn to adapt. People will likely play an important role in helping the robots learn.

Posted in Human Robots

#437616 Innovative YUJIN 3D LiDAR, Now Shipping!

Yujin Robot recently launched a new 3D LiDAR for indoor service robots, AGVs/AMRs, and smart factories. The YRL3 series is a line of precise laser sensors that scan both vertically and horizontally to detect surroundings and objects. Designed for indoor applications, the YRL3 uses an innovative single-channel 3D scanning design with a 270° (horizontal) × 90° (vertical) dynamic field of view. Its fundamental principle is direct ToF (time of flight), measuring distances to the surroundings, and it collects useful data including ranges, angles, intensities, and Cartesian coordinates (x, y, z). The vertical scanning angle can be adjusted in real time, and the sensor is supported by a powerful software package for autonomous driving devices.
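For readers who want a concrete sense of how a scanning ToF sensor’s raw returns (a range plus horizontal and vertical angles) become the Cartesian (x, y, z) points the YRL3 reports, here is a minimal, sensor-agnostic sketch of the standard spherical-to-Cartesian conversion. It is not Yujin’s SDK; the function and field names are ours.

```python
import math

def polar_to_cartesian(range_m, azimuth_deg, elevation_deg):
    """Convert one ToF return (range + horizontal/vertical angles)
    into Cartesian coordinates in the sensor frame."""
    az = math.radians(azimuth_deg)     # horizontal angle, within the 270-degree FOV
    el = math.radians(elevation_deg)   # vertical angle, within the 90-degree FOV
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# Example: a 2.5 m return at 45 degrees azimuth, 10 degrees elevation
print(polar_to_cartesian(2.5, 45.0, 10.0))
```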

“In recent years, our product lineup expanded to include models for the Fourth Industrial Revolution,” shares the marketing team of Yujin Robot. These models include Kobuki, the ROS reference research robot platform used by robotics research labs around the world; the Yujin LiDAR range-finding scanning sensor for LiDAR-based autonomous driving; and the AMS (Autonomous Mobility Solution) for customized autonomous driving. The company continues to push the boundaries of robotics and artificial intelligence, developing game-changing autonomous solutions that give companies around the world an edge over the competition.

Photo: Yujin

YUJIN 3D LiDAR, Now Shipping! Indoor 3D LiDAR for AGVs/AMRs, Service Robots, and Factories

Posted in Human Robots

#437610 How Intel’s OpenBot Wants to Make ...

You could make a pretty persuasive argument that the smartphone represents the single fastest area of technological progress we’re going to experience for the foreseeable future. Every six months or so, there’s something with better sensors, more computing power, and faster connectivity. Many different areas of robotics are benefiting from this on a component level, but over at Intel Labs, they’re taking a more direct approach with a project called OpenBot that turns US $50 worth of hardware and your phone into a mobile robot that can support “advanced robotics workloads such as person following and real-time autonomous navigation in unstructured environments.”

This work aims to address two key challenges in robotics: accessibility and scalability. Smartphones are ubiquitous and are becoming more powerful by the year. We have developed a combination of hardware and software that turns smartphones into robots. The resulting robots are inexpensive but capable. Our experiments have shown that a $50 robot body powered by a smartphone is capable of person following and real-time autonomous navigation. We hope that the presented work will open new opportunities for education and large-scale learning via thousands of low-cost robots deployed around the world.

Smartphones point to many possibilities for robotics that we have not yet exploited. For example, smartphones also provide a microphone, speaker, and screen, which are not commonly found on existing navigation robots. These may enable research and applications at the confluence of human-robot interaction and natural language processing. We also expect the basic ideas presented in this work to extend to other forms of robot embodiment, such as manipulators, aerial vehicles, and watercraft.

One of the interesting things about this idea is how not-new it is. The highest profile phone robot was likely the $150 Romo, from Romotive, which raised a not-insignificant amount of money on Kickstarter in 2012 and 2013 for a little mobile chassis that accepted one of three different iPhone models and could be controlled via another device or operated somewhat autonomously. It featured “computer vision, autonomous navigation, and facial recognition” capabilities, but was really designed to be a toy. Lack of compatibility hampered Romo a bit, and there wasn’t a lot that it could actually do once the novelty wore off.

As impressive as smartphone hardware was in a robotics context (even back in 2013), we’re obviously way, way beyond that now, and OpenBot figures that smartphones now have enough clout and connectivity that turning them into mobile robots is a good idea. You know, again. We asked Intel Labs’ Matthias Müller why now was the right time to launch OpenBot, and he mentioned things like the existence of a large maker community with broad access to 3D printing as well as open source software that makes broader development easier.

And of course, there’s the smartphone hardware: “Smartphones have become extremely powerful and feature dedicated AI processors in addition to CPUs and GPUs,” says Müller. “Almost everyone owns a very capable smartphone now. There has been a big boost in sensor performance, especially in cameras, and a lot of the recent developments for VR applications are well aligned with robotic requirements for state estimation.” OpenBot has been tested with 10 recent Android phones, and since camera placement tends to be similar and USB-C is becoming the charging and communications standard, compatibility is less of an issue nowadays.

Image: OpenBot

Intel researchers created this table comparing OpenBot to other wheeled robot platforms, including Amazon’s DeepRacer, MIT’s Duckiebot, iRobot’s Create-2, and Thymio. The top group includes robots based on RC trucks; the bottom group includes navigation robots for deployment at scale and in education. Note that the cost of the smartphone needed for OpenBot is not included in this comparison.

If you’d like an OpenBot of your own, you don’t need to know all that much about robotics hardware or software. For the hardware, you probably need some basic mechanical and electronics experience—think Arduino project level. The software is a little more complicated; there’s a pretty good walkthrough to get some relatively sophisticated behaviors (like autonomous person following) up and running, but things rapidly degenerate into a command line interface that could be intimidating for new users. We did ask about why OpenBot isn’t ROS-based to leverage the robustness and reach of that community, and Müller said that ROS “adds unnecessary overhead,” although “if someone insists on using ROS with OpenBot, it should not be very difficult.”
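OpenBot’s real control stack is split between its Android app and its Arduino firmware, and we haven’t reproduced either here. Purely to illustrate the division of labor the project describes (the phone handles perception and planning while a cheap microcontroller drives the motors), here is a hypothetical sketch that sends differential-drive commands over a USB serial link using pyserial. The message format, port name, and value ranges are invented for the example and are not OpenBot’s actual protocol.

```python
import time

import serial  # pyserial

def drive(port, left, right):
    """Send one differential-drive command (-255..255 per wheel)
    to a microcontroller body over USB serial.
    The 'L,R\\n' message format here is purely illustrative."""
    msg = f"{int(left)},{int(right)}\n"
    port.write(msg.encode("ascii"))

if __name__ == "__main__":
    # Port name and baud rate are assumptions; adjust for your system.
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=0.1) as port:
        time.sleep(2.0)          # give the microcontroller time to reset
        drive(port, 150, 150)    # forward
        time.sleep(1.0)
        drive(port, 100, -100)   # spin in place
        time.sleep(1.0)
        drive(port, 0, 0)        # stop
```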

Without building OpenBot to explicitly be part of an existing ecosystem, the challenge going forward is to make sure that the project is consistently supported, lest it wither and die like so many similar robotics projects have before it. “We are committed to the OpenBot project and will do our best to maintain it,” Müller assures us. “We have a good track record. Other projects from our group (e.g. CARLA, Open3D, etc.) have also been maintained for several years now.” The inherently open source nature of the project certainly helps, although it can be tricky to rely too much on community contributions, especially when something like this is first starting out.

The OpenBot folks at Intel, we’re told, are already working on a “bigger, faster and more powerful robot body that will be suitable for mass production,” which would certainly help entice more people into giving this thing a go. They’ll also be focusing on documentation, which is probably the most important but least exciting part about building a low-cost community focused platform like this. And as soon as they’ve put together a way for us actual novices to turn our phones into robots that can do cool stuff for cheap, we’ll definitely let you know.

Posted in Human Robots

#437583 Video Friday: Attack of the Hexapod ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

Happy Halloween from HEBI Robotics!

Thanks Hardik!

[ HEBI Robotics ]

Happy Halloween from Berkshire Grey!

[ Berkshire Grey ]

These are some preliminary results of our lab’s new work on using reinforcement learning to train neural networks to imitate common bipedal gait behaviors, without using any motion capture data or reference trajectories. Our method is described in an upcoming submission to ICRA 2021. Work by Jonah Siekmann and Yesh Godse.

[ OSU DRL ]

The northern goshawk is a fast, powerful raptor that flies effortlessly through forests. This bird was the design inspiration for the next-generation drone developed by scientists at EPFL’s Laboratory of Intelligent Systems, led by Dario Floreano. They carefully studied the shape of the bird’s wings and tail and its flight behavior, and used that information to develop a drone with similar characteristics.

The engineers had already designed a bird-inspired drone with a morphing wing back in 2016. In a step forward, their new model can adjust the shape of its wing and tail thanks to its artificial feathers. Flying this new type of drone isn’t easy, due to the large number of wing and tail configurations possible. To take full advantage of the drone’s flight capabilities, Floreano’s team plans to incorporate artificial intelligence into the drone’s flight system so that it can fly semi-automatically. The team’s research has been published in Science Robotics.

[ EPFL ]

Oopsie.

[ Roborace ]

We’ve covered MIT’s Roboats in the past, but now they’re big enough to keep a couple of people afloat.

Self-driving boats have been able to transport small items for years, but adding human passengers has felt somewhat intangible due to the current size of the vessels. Roboat II is the “half-scale” boat in the growing body of work, and joins the previously developed quarter-scale Roboat, which is 1 meter long. The third installment, which is under construction in Amsterdam and is considered to be “full scale,” is 4 meters long and aims to carry anywhere from four to six passengers.

[ MIT ]

With a training technique commonly used to teach dogs to sit and stay, Johns Hopkins University computer scientists showed a robot how to teach itself several new tricks, including stacking blocks. With the method, the robot, named Spot, was able to learn in days what typically takes a month.

[ JHU ]

Exyn, a pioneer in autonomous aerial robot systems for complex, GPS-denied industrial environments, today announced the first dog, Kody, to successfully fly a drone at Number 9 Coal Mine, in Lansford, PA. Selected to carry out this mission was the new autonomous aerial robot, the ExynAero.

Yes, this is obviously a publicity stunt, and Kody is only flying the drone in the sense that he’s pushing the launch button and then taking a nap. But that’s also the point—drone autonomy doesn’t get much fuller than this, despite the challenge of the environment.

[ Exyn ]

In this video, object instance segmentation and shape completion are combined with classical regrasp planning to perform pick-and-place of novel objects. It is demonstrated with a UR5, a Robotiq 85 parallel-jaw gripper, and a Structure depth sensor on three rearrangement tasks: bin packing (minimize the height of the packing), placing bottles onto coasters, and arranging blocks from tallest to shortest (according to the longest edge). The system also accounts for uncertainty in the segmentation/completion by avoiding grasping or placing on parts of the object where perceptual uncertainty is predicted to be high.
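The paper’s exact formulation isn’t given in the video description, but the general idea of rejecting grasp or placement candidates that contact perceptually uncertain regions can be sketched roughly as follows. The array shapes, the single-pixel contact model, and the threshold are simplifying assumptions for illustration only.

```python
import numpy as np

def filter_grasps_by_uncertainty(grasp_pixels, uncertainty_map, max_uncertainty=0.2):
    """Keep only grasp candidates whose contact pixel falls on a
    low-uncertainty part of the segmented/completed object.

    grasp_pixels: (N, 2) array of (row, col) contact points, one per candidate
    uncertainty_map: (H, W) array of predicted per-pixel uncertainty in [0, 1]
    """
    keep = []
    for i, (r, c) in enumerate(grasp_pixels):
        if uncertainty_map[r, c] <= max_uncertainty:
            keep.append(i)
    return keep

# Toy example: three candidates on a 4x4 uncertainty map
unc = np.array([[0.05, 0.1, 0.9, 0.9],
                [0.05, 0.1, 0.8, 0.9],
                [0.05, 0.1, 0.1, 0.1],
                [0.05, 0.1, 0.1, 0.1]])
cands = np.array([[0, 0], [0, 2], [3, 3]])
print(filter_grasps_by_uncertainty(cands, unc))  # -> [0, 2]
```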

[ Paper ] via [ Northeastern ]

Thanks Marcus!

U can’t touch this!

[ University of Tokyo ]

We introduce a way to enable more natural interaction between humans and robots through Mixed Reality, by using a shared coordinate system. Azure Spatial Anchors, which already supports colocalizing multiple HoloLens and smartphone devices in the same space, has now been extended to support robots equipped with cameras. This allows humans and robots sharing the same space to interact naturally: humans can see the plan and intention of the robot, while the robot can interpret commands given from the person’s perspective. We hope that this can be a building block in the future of humans and robots being collaborators and coworkers.
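Azure Spatial Anchors’ own API isn’t shown here; purely to illustrate what a shared coordinate system buys you, this hedged sketch shows the underlying math. Once the HoloLens and the robot each know their pose relative to the same anchor, a point observed by one device can be re-expressed in the other’s frame with ordinary homogeneous transforms. All poses and numbers below are made up.

```python
import numpy as np

def make_pose(rotation_z_deg, translation_xyz):
    """Build a 4x4 homogeneous transform: rotation about z plus translation."""
    t = np.radians(rotation_z_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(t), -np.sin(t), 0],
                 [np.sin(t),  np.cos(t), 0],
                 [0,          0,         1]]
    T[:3, 3] = translation_xyz
    return T

# Poses of each device expressed in the shared anchor frame (made-up numbers).
T_anchor_hololens = make_pose(90, [1.0, 0.0, 1.5])
T_anchor_robot    = make_pose(0,  [3.0, 2.0, 0.0])

# A goal point the person indicates, expressed in the HoloLens frame.
p_hololens = np.array([0.5, 0.0, 0.0, 1.0])

# Re-express it in the robot frame: robot <- anchor <- hololens.
T_robot_anchor = np.linalg.inv(T_anchor_robot)
p_robot = T_robot_anchor @ T_anchor_hololens @ p_hololens
print(p_robot[:3])
```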

[ Microsoft ]

Some very high jumps from the skinniest quadruped ever.

[ ODRI ]

In this video we present recent efforts to make our humanoid robot LOLA ready for multi-contact locomotion, i.e. additional hand-environment support for extra stabilization during walking.

[ TUM ]

Classic bike moves from Dr. Guero.

[ Dr. Guero ]

For a robotics company, iRobot is OLD.

[ iRobot ]

The Canadian Space Agency presents Juno, a preliminary version of a rover that could one day be sent to the Moon or Mars. Juno can navigate autonomously or be operated remotely. The Lunar Exploration Analogue Deployment (LEAD) consisted of replicating scenarios of a lunar sample return mission.

[ CSA ]

How exactly does the Waymo Driver handle a cat cutting across its driving path? Jonathan N., a Product Manager on our Perception team, breaks it all down for us.

Now do kangaroos.

[ Waymo ]

Jibo is hard at work at MIT playing games with kids.

Children’s creativity plummets as they enter elementary school. Social interactions with peers and playful environments have been shown to foster creativity in children. Digital pedagogical tools often lack the creativity benefits of co-located social interaction with peers. In this work, we leverage a social embodied robot as a playful peer and designed Escape!Bot, a game involving child-robot co-play, where the robot is a social agent that scaffolds for creativity during gameplay.

[ Paper ]

It’s nice when convenience stores are convenient even for the folks who have to do the restocking.

Who’s moving the crates around, though?

[ Telexistence ]

Hi, fans! Join ROS World 2020, opening November 12th, and see the footage of ROBOTIS’ ROS platform robots 🙂

[ ROS World 2020 ]

ML/RL methods are often viewed as a magical black box, and while that’s not true, learned policies are nonetheless a valuable tool that can work in conjunction with the underlying physics of the robot. In this video, Agility CTO Jonathan Hurst – wearing his professor hat at Oregon State University – presents some recent student work on using learned policies as a control method for highly dynamic legged robots.

[ Agility Robotics ]

Here’s an ICRA Legged Robots workshop talk from Marco Hutter at ETH Zürich, on Autonomy for ANYmal.

Recent advances in legged robots and their locomotion skills have led to systems that are skilled and mature enough for real-world deployment. In particular, quadrupedal robots have reached a level of mobility to navigate complex environments, which enables them to take over inspection or surveillance jobs in places like offshore industrial plants, in underground areas, or on construction sites. In this talk, I will present our research work with the quadruped ANYmal and explain some of the underlying technologies for locomotion control, environment perception, and mission autonomy. I will show how these robots can learn and plan complex maneuvers, how they can navigate through unknown environments, and how they are able to conduct surveillance, inspection, or exploration scenarios.

[ RSL ]

Posted in Human Robots

#437579 Disney Research Makes Robotic Gaze ...

While it’s not totally clear to what extent human-like robots are better than conventional robots for most applications, one area I’m personally comfortable with them is entertainment. The folks over at Disney Research, who are all about entertainment, have been working on this sort of thing for a very long time, and some of their animatronic attractions are actually quite impressive.

The next step for Disney is to make its animatronic figures, which currently feature scripted behaviors, perform interactively with visitors. The challenge is that this is where you start to get into potential Uncanny Valley territory, which is a risk whenever you try to create “the illusion of life,” and that is exactly what Disney (they explicitly say) is trying to do.

In a paper presented at IROS this month, a team from Disney Research, Caltech, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering is trying to nail that illusion of life with a single, and perhaps most important, social cue: eye gaze.

Before you watch this video, keep in mind that you’re watching a specific character, as Disney describes:

The robot character plays an elderly man reading a book, perhaps in a library or on a park bench. He has difficulty hearing and his eyesight is in decline. Even so, he is constantly distracted from reading by people passing by or coming up to greet him. Most times, he glances at people moving quickly in the distance, but as people encroach into his personal space, he will stare with disapproval for the interruption, or provide those that are familiar to him with friendly acknowledgment.

What, exactly, does “lifelike” mean in the context of robotic gaze? The paper abstract describes the goal as “[seeking] to create an interaction which demonstrates the illusion of life.” I suppose you could think of it like a sort of old-fashioned Turing test focused on gaze: If the gaze of this robot cannot be distinguished from the gaze of a human, then victory, that’s lifelike. And critically, we’re talking about mutual gaze here—not just a robot gazing off into the distance, but you looking deep into the eyes of this robot and it looking right back at you just like a human would. Or, just like some humans would.

The approach that Disney is using is more animation-y than biology-y or psychology-y. In other words, they’re not trying to figure out what’s going on in our brains to make our eyes move the way that they do when we’re looking at other people and basing their control system on that, but instead, Disney just wants it to look right. This “visual appeal” approach is totally fine, and there’s been an enormous amount of human-robot interaction (HRI) research behind it already, albeit usually with less explicitly human-like platforms. And speaking of human-like platforms, the hardware is a “custom Walt Disney Imagineering Audio-Animatronics bust,” which has DoFs that include neck, eyes, eyelids, and eyebrows.

In order to decide on gaze motions, the system first identifies a person to target with its attention using an RGB-D camera. If more than one person is visible, the system calculates a curiosity score for each, currently simplified to be based on how much motion it sees. Depending on which visible person has the highest curiosity score, the system will choose from a variety of high-level gaze behavior states (a rough sketch of this selection logic follows the state descriptions below), including:

Read: The Read state can be considered the “default” state of the character. When not executing another state, the robot character will return to the Read state. Here, the character will appear to read a book located at torso level.

Glance: A transition to the Glance state from the Read or Engage states occurs when the attention engine indicates that there is a stimuli with a curiosity score […] above a certain threshold.

Engage: The Engage state occurs when the attention engine indicates that there is a stimuli […] to meet a threshold and can be triggered from both Read and Glance states. This state causes the robot to gaze at the person-of-interest with both the eyes and head.

Acknowledge: The Acknowledge state is triggered from either Engage or Glance states when the person-of-interest is deemed to be familiar to the robot.

Running underneath these higher level behavior states are lower level motion behaviors like breathing, small head movements, eye blinking, and saccades (the quick eye movements that occur when people, or robots, look between two different focal points). The term for this hierarchical behavioral state layering is a subsumption architecture, which goes all the way back to Rodney Brooks’ work on robots like Genghis in the 1980s and Cog and Kismet in the ’90s, and it provides a way for more complex behaviors to emerge from a set of simple, decentralized low-level behaviors.
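Disney hasn’t released code for this system, but the selection logic the paper describes (a motion-based curiosity score per person, thresholds that trigger Glance or Engage, and familiarity gating Acknowledge) can be sketched roughly as follows. The thresholds, the familiarity flag, and the direct jump to Acknowledge are simplifying assumptions; in the real system, Acknowledge is reached from the Glance or Engage states, and the low-level behaviors (breathing, blinking, saccades) keep running underneath whichever state is active.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Person:
    person_id: int
    motion: float    # normalized motion seen by the RGB-D pipeline, 0..1
    familiar: bool   # whether the character "knows" this person

# Illustrative thresholds; the paper does not publish its values.
GLANCE_THRESHOLD = 0.3
ENGAGE_THRESHOLD = 0.7

def curiosity(person: Person) -> float:
    """The paper says curiosity is currently simplified to observed motion."""
    return person.motion

def choose_state(people: List[Person]) -> Tuple[str, Optional[Person]]:
    """Pick a high-level gaze state based on the most 'curious' visible person."""
    if not people:
        return "Read", None                   # default state: keep reading the book
    target = max(people, key=curiosity)
    score = curiosity(target)
    if score >= ENGAGE_THRESHOLD:
        if target.familiar:
            return "Acknowledge", target      # friendly acknowledgment (simplified)
        return "Engage", target               # gaze at the person with eyes and head
    if score >= GLANCE_THRESHOLD:
        return "Glance", target               # quick look, then back to reading
    return "Read", None

# Example: two passers-by, one moving quickly and familiar to the character.
people = [Person(1, motion=0.2, familiar=False), Person(2, motion=0.8, familiar=True)]
print(choose_state(people))  # -> ('Acknowledge', Person(person_id=2, ...))
```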

“25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”
—Rodney Brooks, MIT emeritus professor

Brooks, an emeritus professor at MIT and, most recently, cofounder and CTO of Robust.ai, tweeted about the Disney project, saying: “People underestimate how long it takes to get from academic paper to real world robotics. 25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”

From the paper:

Although originally intended for control of mobile robots, we find that the subsumption architecture, as presented in [17], lends itself as a framework for organizing animatronic behaviors. This is due to the analogous use of subsumption in human behavior: human psychomotor behavior can be intuitively modeled as layered behaviors with incoming sensory inputs, where higher behavioral levels are able to subsume lower behaviors. At the lowest level, we have involuntary movements such as heartbeats, breathing and blinking. However, higher behavioral responses can take over and control lower level behaviors, e.g., fight-or-flight response can induce faster heart rate and breathing. As our robot character is modeled after human morphology, mimicking biological behaviors through the use of a bottom-up approach is straightforward.

The result, as the video shows, appears to be quite good, although it’s hard to tell how it would all come together if the robot had more of, you know, a face. But it seems like you don’t necessarily need to have a lifelike humanoid robot to take advantage of this architecture in an HRI context—any robot that wants to make a gaze-based connection with a human could benefit from doing it in a more human-like way.

“Realistic and Interactive Robot Gaze,” by Matthew K.X.J. Pan, Sungjoon Choi, James Kennedy, Kyna McIntosh, Daniel Campos Zamora, Gunter Niemeyer, Joohyung Kim, Alexis Wieland, and David Christensen from Disney Research, California Institute of Technology, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering, was presented at IROS 2020. You can find the full paper, along with a 13-minute video presentation, on the IROS on-demand conference website.


Posted in Human Robots