
#437695 Video Friday: Even Robots Know That You ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
Other Than Human – September 3-10, 2020 – Stockholm, Sweden
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today's videos.

From the Robotics and Perception Group at UZH comes Flightmare, a simulation environment for drones that combines a slick rendering engine with a robust physics engine that can run as fast as your system can handle.

Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. Those two components are totally decoupled and can run independently from each other. Flightmare comes with several desirable features: (i) a large multi-modal sensor suite, including an interface to extract the 3D point-cloud of the scene; (ii) an API for reinforcement learning which can simulate hundreds of quadrotors in parallel; and (iii) an integration with a virtual-reality headset for interaction with the simulated environment. Flightmare can be used for various applications, including path-planning, reinforcement learning, visual-inertial odometry, deep learning, human-robot interaction, etc.
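To give a rough sense of what simulating hundreds of drones in one vectorized update might look like, here's a toy Python sketch; it uses simple point-mass dynamics and made-up parameters, and is not Flightmare's actual physics engine or API.

```python
import numpy as np

# A toy illustration of the "hundreds of quadrotors in parallel" idea: stepping a
# batch of simple point-mass drone models with one vectorized numpy update. This is
# a sketch of the batched-simulation concept, not Flightmare's engine.
N, DT, G = 500, 0.01, np.array([0.0, 0.0, -9.81])

pos = np.zeros((N, 3))                 # positions of all N drones [m]
vel = np.zeros((N, 3))                 # velocities [m/s]

def step(pos, vel, thrust_acc):
    """thrust_acc: (N, 3) commanded thrust accelerations in the world frame."""
    vel = vel + (thrust_acc + G) * DT
    pos = pos + vel * DT
    return pos, vel

for _ in range(1000):                  # 10 simulated seconds for all drones at once
    commands = np.tile([0.0, 0.0, 10.0], (N, 1))   # slightly more than hover thrust
    pos, vel = step(pos, vel, commands)

print(pos[:3, 2])                      # the first few drones should have climbed
```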

[ Flightmare ]

Quadruped robots yelling at people to maintain social distancing is really starting to become a thing, for better or worse.

We introduce a fully autonomous surveillance robot based on a quadruped platform that can promote social distancing in complex urban environments. Specifically, to achieve autonomy, we mount multiple cameras and a 3D LiDAR on the legged robot. The robot then uses an onboard real-time social distancing detection system to track nearby pedestrian groups. Next, the robot uses a crowd-aware navigation algorithm to move freely in highly dynamic scenarios. The robot finally uses a crowd-aware routing algorithm to effectively promote social distancing by using human-friendly verbal cues to send suggestions to overcrowded pedestrians.
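As a rough illustration of the social distancing detection step, here's a minimal Python sketch that flags pairs of tracked pedestrians standing too close together; the function name and the 2-meter threshold are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Given 2D ground-plane positions of tracked pedestrians (e.g. from cameras + LiDAR),
# flag pairs standing closer than a threshold.
def find_violations(positions, min_distance=2.0):
    """positions: (N, 2) array of pedestrian (x, y) positions in meters."""
    violations = []
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            if np.linalg.norm(positions[i] - positions[j]) < min_distance:
                violations.append((i, j))
    return violations

pedestrians = np.array([[0.0, 0.0], [1.2, 0.5], [6.0, 3.0]])
print(find_violations(pedestrians))  # [(0, 1)] -- the first two people are too close
```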

[ Project ]

Thanks Fan!

The Personal Robotics Group at Oregon State University is looking at UV germicidal irradiation for surface disinfection with a Fetch Manipulator Robot.

Fetch Robot disinfecting dance party woo!

[ Oregon State ]

How could you not take a mask from this robot?

[ Reachy ]

This work presents the design, development and autonomous navigation of the alpha-version of our Resilient Micro Flyer, a new type of collision-tolerant small aerial robot tailored to traversing and searching within highly confined environments including manhole-sized tubes. The robot is particularly lightweight and agile, while it implements a rigid collision-tolerant design which renders it resilient during forcible interaction with the environment. Furthermore, the design of the system is enhanced through passive flaps ensuring smoother and more compliant collisions, which was identified to be especially useful in very confined settings.

[ ARL ]

Pepper can make maps and autonomously navigate, which is interesting, but not as interesting as its posture when it's wandering around.

Dat backing into the charging dock tho.

[ Pepper ]

RatChair is a strategy for displacing big objects by attaching relatively small vibration sources. After learning how several random bursts of vibration affect its pose, an optimization algorithm discovers the optimal sequence of vibration patterns required to (slowly but surely) move the object to a specified position.
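Here's a toy Python sketch of that two-phase idea (learn how each vibration pattern displaces the object, then greedily chain patterns toward a goal pose); the displacement table and greedy planner below are stand-ins for illustration, not the researchers' actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Learning phase: average (dx, dy) displacement observed for each of 8 vibration patterns.
learned_displacements = {k: rng.normal(scale=0.05, size=2) for k in range(8)}

def plan(start, goal, steps=200):
    """Greedy planner: at each step pick the pattern whose learned displacement
    brings the estimated pose closest to the goal."""
    pose, sequence = np.array(start, float), []
    for _ in range(steps):
        best = min(learned_displacements,
                   key=lambda k: np.linalg.norm(pose + learned_displacements[k] - goal))
        pose = pose + learned_displacements[best]
        sequence.append(best)
        if np.linalg.norm(pose - goal) < 0.05:
            break
    return sequence, pose

seq, final_pose = plan(start=(0.0, 0.0), goal=(1.0, 0.5))
print(len(seq), final_pose)   # number of vibration bursts used, and where we ended up
```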

This is from 2015, why isn't all of my furniture autonomous yet?!

[ KAIST ]

The new SeaDrone Pro is designed to be the underwater equivalent of a quadrotor. This video is a rendering, but we've been assured that it does actually exist.

[ SeaDrone ]

Thanks Eduardo!

Porous Loops is a lightweight composite facade panel that shows the potential of 3D printing of mineral foams for building-scale applications.

[ ETH ]

Thanks Fan!

Here's an interesting idea for a robotic gripper: it's what appears to be a snap bracelet coupled to a pneumatic actuator that allows the snap bracelet to be reset.

[ Georgia Tech ]

Graze is developing a commercial robotic lawnmower. They're also doing a sort of crowdfunded investment thing, which probably explains the painfully overproduced nature of the following video:

A couple things about this: the hard part, which the video skips over almost entirely, is the mapping, localization, and understanding where to mow and where not to mow. The pitch deck seems to suggest that this is mostly done through computer vision, a thing that's perhaps easy to do under controlled ideal conditions, but difficult to apply to a world full of lawns that are all different. The commercial aspect is interesting because golf courses are likely as standardized as you can get, but the emphasis here on how much money they can make without really addressing any of the technical stuff makes me raise an eyebrow or two.

[ Graze ]

The record & playback X-series arm demo allows the user to record the arm's movements while the motors are torqued off. Then, the user may torque the motors on and watch the movements they just made play back!
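The pattern itself is simple enough to sketch in a few lines of Python; the arm object and its methods below are placeholders rather than the actual Interbotix API.

```python
import time

# Record-and-playback sketch: with torque off, sample the joint positions as the
# user moves the arm by hand; then re-enable torque and replay the trajectory.
def record(arm, duration_s=10.0, rate_hz=50):
    arm.torque_off()                      # let the user move the arm by hand
    trajectory = []
    for _ in range(int(duration_s * rate_hz)):
        trajectory.append(arm.get_joint_positions())
        time.sleep(1.0 / rate_hz)
    return trajectory

def playback(arm, trajectory, rate_hz=50):
    arm.torque_on()                       # re-enable the motors
    for positions in trajectory:
        arm.set_joint_positions(positions)
        time.sleep(1.0 / rate_hz)
```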

[ Interbotix ]

Shadow Robot has a new teleop system for its hand. I'm guessing that it's even trickier to use than it looks.

[ Shadow Robot ]

Quanser Interactive Labs is a collection of virtual hardware-based laboratory activities that supplement traditional or online courses. Just as they would with physical systems in the lab, students work with virtual twins of Quanser's most popular plants, develop their mathematical models, implement and simulate the dynamic behavior of these systems, design controllers, and validate them on high-fidelity 3D real-time virtual models. The virtual systems not only look like the real ones, they also behave like them, and can be manipulated, measured, and controlled like real devices. And finally, when students go to the lab, they can deploy their virtually validated designs on actual physical equipment.

[ Quanser ]

This video shows robot-assisted heart surgery. It's amazing to watch if you haven't seen this sort of thing before, but be aware that there is a lot of blood.

This video demonstrates a fascinating case of robotic left atrial myxoma excision, narrated by Joel Dunning, Middlesbrough, UK. The Robotic platform provides superior visualisation and enhanced dexterity, through keyhole incisions. Robotic surgery is an integral part of our Minimally Invasive Cardiothoracic Surgery Program.

[ Tristan D. Yan ]

Thanks Fan!

In this talk, we present our work on learning control policies directly in simulation that are deployed onto real drones without any fine-tuning. The presentation covers autonomous drone racing, drone acrobatics, and uncertainty estimation in deep networks.

[ RPG ]

#437630 How Toyota Research Envisions the Future ...

Yesterday, the Toyota Research Institute (TRI) showed off some of the projects that it’s been working on recently, including a ceiling-mounted robot that could one day help us with household chores. That system is just one example of how TRI envisions the future of robotics and artificial intelligence. As TRI CEO Gill Pratt told us, the company is focusing on robotics and AI technology for “amplifying, rather than replacing, human beings.” In other words, Toyota wants to develop robots not for convenience or to do our jobs for us, but rather to allow people to continue to live and work independently even as we age.

To better understand Toyota’s vision of robotics 15 to 20 years from now, it’s worth watching the 20-minute video below, which depicts various scenarios “where the application of robotic capabilities is enabling members of an aging society to live full and independent lives in spite of the challenges that getting older brings.” It’s a long video, but it helps explain TRI’s perspective on how robots will collaborate with humans in our daily lives over the next couple of decades.

Those are some interesting conceptual telepresence-controlled bipeds they’ve got running around in that video, right?

For more details, we sent TRI some questions on how it plans to go from concepts like the ones shown in the video to real products that can be deployed in human environments. Below are answers from TRI CEO Gill Pratt, who is also chief scientist for Toyota Motor Corp.; Steffi Paepcke, senior UX designer at TRI; and Max Bajracharya, VP of robotics at TRI.

IEEE Spectrum: TRI seems to have a more explicit focus on eventual commercialization than most of the robotics research that we cover. At what point does TRI start to think about things like reliability and cost?

Photo: TRI

Toyota is exploring robots capable of manipulating dishes in a sink and a dishwasher, performing experiments and simulations to make sure that the robots can handle a wide range of conditions.

Gill Pratt: It’s a really interesting question, because the normal way to think about this would be to say, well, both reliability and cost are product development tasks. But actually, we need to think about it at the earliest possible stage with research as well. The hardware that we use in the laboratory for doing experiments, we don’t worry about cost there, or not nearly as much as you’d worry about for a product. However, in terms of what research we do, we very much have to think about, is it possible (if the research is successful) for it to end up in a product that has a reasonable cost. Because if a customer can’t afford what we come up with, maybe it has some academic value but it’s not actually going to make a difference in their quality of life in the real world. So we think about cost very much from the beginning.

The same is true with reliability. Right now, we’re working very hard to make our control techniques robust to wide variations in the environment. For instance, in work that Russ Tedrake is doing with manipulating dishes in a sink and a dishwasher, both in physical testing and in simulation, we’re doing thousands and now millions of different experiments to make sure that we can handle the edge cases and it works over a very wide range of conditions.

A tremendous amount of work that we do is trying to bring robotics out of the age of doing demonstrations. There’s been a history of robotics where for some time, things have not been reliable, so we’d catch the robot succeeding just once and then show that video to the world, and people would get the misimpression that it worked all of the time. Some researchers have been very good about showing the blooper reel too, to show that some of the time, robots don’t work.

In the spirit of sharing things that didn’t work, can you tell us a bit about some of the robots that TRI has had under development that didn’t make it into the demo yesterday because they were abandoned along the way?

Steffi Paepcke: We’re really looking at how we can connect people; it can be hard to stay in touch and see our loved ones as much as we would like to. There have been a few prototypes that we’ve worked on that had to be put on the shelf, at least for the time being. We were exploring how to use light so that people could be ambiently aware of one another across distances. I was very excited about that—the internal name was “glowing orb.” For a variety of reasons, it didn’t work out, but it was really fascinating to investigate different modalities for keeping in touch.

Another prototype we worked on—we found through our research that grocery shopping is obviously an important part of life, and for a lot of older adults, it’s not necessarily the right answer to always have groceries delivered. Getting up and getting out of the house keeps you physically active, and a lot of people prefer to continue doing it themselves. But it can be challenging, especially if you’re purchasing heavy items that you need to transport. We had a prototype that assisted with grocery shopping, but when we pivoted our focus to Japan, we found that the inside of a Japanese home really needs to stay inside, and the outside needs to stay outside, so a robot that traverses both domains is probably not the right fit for a Japanese audience, and those were some really valuable lessons for us.

Photo: TRI

Toyota recently demonstrated a gantry robot that would hang from the ceiling to perform tasks like wiping surfaces and clearing clutter.

I love that TRI is exploring things like the gantry robot both in terms of near-term research and as part of its long-term vision, but is a robot like this actually worth pursuing? Or more generally, what’s the right way to compromise between making an environment robot friendly, and asking humans to make changes to their homes?

Max Bajracharya: We think a lot about the problems that we’re trying to address in a holistic way. We don’t want to just give people a robot, and assume that they’re not going to change anything about their lifestyle. We have a lot of evidence from people who use automated vacuum cleaners that people will adapt to the tools you give them, and they’ll change their lifestyle. So we want to think about what is that trade between changing the environment, and giving people robotic assistance and tools.

We certainly think that there are ways to make the gantry system plausible. The one you saw today is obviously a prototype and does require significant infrastructure. If we’re going to retrofit a home, that isn’t going to be the way to do it. But we still feel like we’re very much in the prototype phase, where we’re trying to understand whether this is worth it to be able to bypass navigation challenges, and coming up with the pros and cons of the gantry system. We’re evaluating whether we think this is the right approach to solving the problem.

To what extent do you think humans should be either directly or indirectly in the loop with home and service robots?

Bajracharya: Our goal is to amplify people, so achieving this is going to require robots to be in a loop with people in some form. One thing we have learned is that using people in a slow loop with robots, such as teaching them or helping them when they make mistakes, gives a robot an important advantage over one that has to do everything perfectly 100 percent of the time. In unstructured human environments, robots are going to encounter corner cases, and are going to need to learn to adapt. People will likely play an important role in helping the robots learn.

#436507 The Weird, the Wacky, the Just Plain ...

As you know if you’ve ever been to, heard of, or read about the annual Consumer Electronics Show in Vegas, there’s no shortage of tech in any form: gadgets, gizmos, and concepts abound. You probably couldn’t see them all in a month even if you spent all day every day trying.

Given the sheer scale of the show, the number of exhibitors, and the inherent subjectivity of bestowing superlatives, it’s hard to pick out the coolest tech from CES. But I’m going to do it anyway; in no particular order, here are some of the products and concepts that I personally found most intriguing at this year’s event.

e-Novia’s Haptic Gloves
Italian startup e-Novia’s Weart glove uses a ‘sensing core’ to record tactile sensations and an ‘actuation core’ to reproduce those sensations onto the wearer’s skin. Haptic gloves will bring touch to VR and AR experiences, making them that much more life-like. The tech could also be applied to digitization of materials and in gaming and entertainment.

e-Novia’s modular haptic glove
I expected a full glove, but in fact there were two rings that attached to my fingers. Weart co-founder Giovanni Spagnoletti explained that they’re taking a modular approach, so as to better tailor the technology to different experiences. He then walked me through a virtual reality experience that was a sort of simulated science experiment: I had to lift a glass beaker, place it on a stove, pour in an ingredient, open a safe to access some dry ice, add that, and so on. As I went through the steps, I felt the beaker heat up and cool off at the expected times, and felt the liquid moving inside, as well as the pressure of my fingertips against the numbered buttons on the safe.

A virtual (but tactile) science experiment
There was a slight delay between my taking an action and feeling the corresponding tactile sensation, but on the whole, the haptic glove definitely made the experience more realistic—and more fun. Slightly less fun but definitely more significant, Spagnoletti told me Weart is working with a medical group to bring tactile sensations to VR training for surgeons.

Sarcos Robotics’ Exoskeleton
That tire may as well be a feather
Sarcos Robotics unveiled its Guardian XO full-body exoskeleton, which it says can safely lift up to 200 pounds across an extended work session. What’s cool about this particular exoskeleton is that it’s not just a prototype; the company announced a partnership with Delta Air Lines, which will be trialing the technology for aircraft maintenance, engine repair, and luggage handling. In a demo, I watched a petite female volunteer strap into the exoskeleton and easily lift a 50-pound weight with one hand, and a Sarcos employee lift and attach a heavy component of a propeller; she explained that the strength-augmenting function of the exoskeleton can easily be switched on or off—and the wearer’s hands released—to facilitate multi-step tasks.

Hyundai’s Flying Taxi
Where to?
Hyundai and Uber partnered to unveil an air taxi concept. With a 49-foot wingspan, 4 lift rotors, and 4 tilt rotors, the aircraft would be manned by a pilot and could carry 4 passengers at speeds up to 180 miles per hour. The companies say you’ll be able to ride across your city in one of these by 2030—we’ll see if the regulatory environment, public opinion, and other factors outside of technological capability let that happen.

Mercedes’ Avatar Concept Car
Welcome to the future
As evident from its name, Mercedes’ sweet new Vision AVTR concept car was inspired by the movie Avatar; director James Cameron helped design it. The all-electric car has no steering wheel, transparent doors, seats made of vegan leather, and 33 reptilian-scale-like flaps on the back; its design is meant to connect the driver with both the car and the surrounding environment in a natural, seamless way.

Next-generation scrolling
Offered the chance to ‘drive’ the car, I jumped on it. Placing my hand on the center console started the engine, and within seconds it had synced to my heartbeat, which reverberated through the car. The whole dashboard, from driver door to passenger door, is one big LED display. It showed a virtual landscape I could select by holding up my hand: as I moved my hand from left to right, different images were projected onto my open palm. Closing my hand on an image selected it, and suddenly it looked like I was in the middle of a lush green mountain range. Applying slight forward pressure on the center console made the car advance in the virtual landscape; it was essentially like playing a really cool video game.

Mercedes is aiming to have a carbon-neutral production fleet by 2039, and to reduce the amount of energy it uses during production by 40 percent by 2030. It’s unclear when—or whether—the man-machine-nature connecting features of the Vision AVTR will start showing up in production, but I for one will be on the lookout.

Waverly Labs’ In-Ear Translator
Waverly Labs unveiled its Ambassador translator earlier this year and has it on display at the show. It’s worn on the ear and uses a far-field microphone array with speech recognition to translate real-time conversations in 20 different languages. Besides in-ear audio, translations can also appear as text on an app or be broadcast live in a conference environment.

It’s kind of like a giant talking earring
I stopped by the booth and tested out the translator with Waverly senior software engineer Georgiy Konovalov. We each hooked on an earpiece, and first, he spoke to me in Russian. After a delay of a couple seconds, I heard his words in—slightly robotic, but fully comprehensible—English. Then we switched: I spoke to him in Spanish, my words popped up on his phone screen in Cyrillic, and he translated them back to English for me out loud.

On the whole, the demo was pretty cool. If you’ve ever been lost in a foreign country whose language you don’t speak, imagine how handy a gadget like this would be. Let’s just hope that once they’re more widespread, these products don’t end up discouraging people from learning languages.

Not to be outdone, Google also announced updates to its Translate product, which is being deployed at information desks in JFK airport’s international terminal, in sports stadiums in Qatar, and by some large hotel chains.

Stratuscent’s Digital Nose
AI is making steady progress towards achieving human-like vision and hearing—but there’s been less work done on mimicking our sense of smell (maybe because it’s less useful in everyday applications). Stratuscent’s digital nose, which it says is based on NASA patents, uses chemical receptors and AI to identify both simple chemicals and complex scents. The company is aiming to create the world’s first comprehensive database of everyday scents, which it says it will use to make “intelligent decisions” for customers. What kind of decisions remains to be seen—and smelled.

Banner Image Credit: The Mercedes Vision AVTR concept car. Photo by Vanessa Bates Ramirez

#436100 Labrador Systems Developing Affordable ...

Developing robots for the home is still a challenge, especially if you want those robots to interact with people and help them do practical, useful things. However, the potential markets for home robots are huge, and one of the most compelling markets is for home robots that can assist humans who need them. Today, Labrador Systems, a startup based in California, is announcing a pre-seed funding round of $2 million (led by SOSV’s hardware accelerator HAX with participation from Amazon’s Alexa Fund and iRobot Ventures, among others) with the goal of expanding development and conducting pilot studies of “a new [assistive robot] platform for supporting home health.”

Labrador was founded two years ago by Mike Dooley and Nikolai Romanov. Both Mike and Nikolai have backgrounds in consumer robotics at Evolution Robotics and iRobot, but as an ’80s gamer, Mike’s bio (or at least the parts of his bio on LinkedIn) caught my attention: From 1995 to 1997, Mike worked at Brøderbund Software, helping to manage play testing for games like Myst and Riven and the Where in the World is Carmen San Diego series. He then spent three years at Lego as the product manager for MindStorms. After doing some marginally less interesting things, Mike was the VP of product development at Evolution Robotics from 2006 to 2012, where he led the team that developed the Mint floor sweeping robot. Evolution was acquired by iRobot in 2012, and Mike ended up as the VP of product development over there until 2017, when he co-founded Labrador.

I was pretty much sold at Where in the World is Carmen San Diego (the original version of which I played from a 5.25” floppy on my dad’s Apple IIe)*, but as you can see from all that other stuff, Mike knows what he’s doing in robotics as well.

And according to Labrador’s press release, what they’re doing is this:

Labrador Systems is an early stage technology company developing a new generation of assistive robots to help people live more independently. The company’s core focus is creating affordable solutions that address practical and physical needs at a fraction of the cost of commercial robots. … Labrador’s technology platform offers an affordable solution to improve the quality of care while promoting independence and successful aging.

Labrador’s personal robot, the company’s first offering, will enter pilot studies in 2020.

That’s about as light on detail as a press release gets, but there’s a bit more on Labrador’s website, including:

Our core focus is creating affordable solutions that address practical and physical needs. (we are not a social robot company)
By affordable, we mean products and technologies that will be available at less than 1/10th the cost of commercial robots.
We achieve those low costs by fusing the latest technologies coming out of augmented reality with robotics to move things in the real world.

The only hardware we’ve actually seen from Labrador at this point is a demo that they put together for Amazon’s re:MARS conference, which took place a few months ago, showing a “demonstration project” called Smart Walker:

This isn’t the home assistance robot that Labrador got its funding for, but rather a demonstration of some of their technology. So of course, the question is, what’s Labrador working on, then? It’s still a secret, but Mike Dooley was able to give us a few more details.

IEEE Spectrum: Your website shows a smart walker concept—how is that related to the assistive robot that you’re working on?

Mike Dooley: The smart walker was a request from a major senior living organization to have our robot (which is really good at navigation) guide residents from place to place within their communities. To test the idea with residents, it turned out to be much quicker to take the navigation system from the robot and put it on an existing rollator walker. So when you see the clips of the technology in the smart walker video on our website, that’s actually the robot’s navigation system localizing in real time and path planning in an environment.

“Assistive robot” can cover a huge range of designs and capabilities—can you give us any more detail about your robot, and what it’ll be able to do?

One of the core features of our robot is to help people move things where they have difficulty moving themselves, particularly in the home setting. That may sound trivial, but to someone who has impaired mobility, it can be a major daily challenge and negatively impact their life and health in a number of ways. Some examples we repeatedly hear are people not staying hydrated or taking their medication on time simply because there is a distance between where they are and the items they need. Once we have those base capabilities, i.e. the ability to navigate around a home and move things within it, then the robot becomes a platform for a wider variety of applications.

What made you decide to develop assistive robots, and why are robots a good solution for seniors who want to live independently?

Supporting independent living has been seen as a massive opportunity in robotics for some time, but also as something off in the future. The turning point for me was watching my mother enter that stage in her life and seeing her transition to using a cane, then a walker, and eventually to a wheelchair. That made the problems very real for me. It also made things much clearer about how we could start addressing specific needs with the tools that are becoming available now.

In terms of why robots can be a good solution, the basic answer is the level of need is so overwhelming that even helping with “basic” tasks can make an appreciable difference in the quality of someone’s daily life. It’s also very much about giving individuals a degree of control back over their environment. That applies to seniors as well as others whose world starts getting more complex to manage as their abilities become more impaired.

What are the particular challenges of developing assistive robots, and how are you addressing them? Why do you think there aren’t more robotics startups in this space?

The setting (operating in homes and personal spaces) and the core purpose of the product (aiding a wide variety of individuals) bring a lot of complexity to any capability you want to build into an assistive robot. Our approach is to put as much structure as we can into the system to make it functional, affordable, understandable and reliable.

I think one of the reasons you don’t see more startups in the space is that a lot of roboticists want to skip ahead and do the fancy stuff, such as taking on human-level capabilities around things like manipulation. Those are very interesting research topics, but we think those are also very far away from being practical solutions you can productize for people to use in their homes.

How do you think assistive robots and human caregivers should work together?

The ideal scenario is allowing caregivers to focus more of their time on the high-touch, personal side of care. The robot can offload the more basic support tasks as well as extend the impact of the caregiver for the long hours of the day they can’t be with someone at their home. We see that applying to both paid care providers as well as the 40 million unpaid family members and friends that provide assistance.

The robot is really there as a tool, both for individuals in need and the people that help them. What’s promising in the research discussions we’ve had so far is that even when a caregiver is present, giving control back to the individual for simple things can mean a lot in the relationship between them and the caregiver.

What should we look forward to from Labrador in 2020?

Our big goal in 2020 is to start placing the next version of the robot with individuals with different types of needs to let them experience it naturally in their own homes and provide feedback on what they like, what they don’t like, and how we can make it better. We are currently reaching out to companies in the healthcare and home health fields to participate in those studies and test specific applications related to their services. We plan to share more detail about those studies and the robot itself as we get further into 2020.

If you’re an organization (or individual) who wants to possibly try out Labrador’s prototype, the company encourages you to connect with them through their website. And as we learn more about what Labrador is up to, we’ll have updates for you, presumably in 2020.

[ Labrador Systems ]

* I just lost an hour of my life after finding out that you can play Where in the World is Carmen San Diego in your browser for free.

#436094 Agility Robotics Unveils Upgraded Digit ...

Last time we saw Agility Robotics’ Digit biped, it was picking up a box from a Ford delivery van and autonomously dropping it off on a porch, while at the same time managing to not trip over stairs, grass, or small children. As a demo, it was pretty impressive, but of course there’s an enormous gap between making a video of a robot doing a successful autonomous delivery and letting that robot out into the semi-structured world and expecting it to reliably do a good job.

Agility Robotics is aware of this, of course, and over the last six months they’ve been making substantial improvements to Digit to make it more capable and robust. A new video posted today shows what’s new with the latest version of Digit—Digit v2.

We appreciate Agility Robotics forgoing music in the video, which lets us hear exactly what Digit sounds like in operation. The most noticeable changes are in Digit’s feet, torso, and arms, and I was particularly impressed to see Digit reposition the box on the table before grasping it to make sure that it could get a good grip. Otherwise, it’s hard to tell what’s new, so we asked Agility Robotics’ CEO Damion Shelton to get us up to speed.

IEEE Spectrum: Can you summarize the differences between Digit v1 and v2? We’re particularly interested in the new feet.

Damion Shelton: The feet now include a roll degree of freedom, so that Digit can resist lateral forces without needing to side step. This allows Digit v2 to balance on one foot statically, which Digit v1 and Cassie could not do. The larger foot also dramatically decreases load per unit area, for improved performance on very soft surfaces like sand.

The perception stack includes four Intel RealSense cameras used for obstacle detection and pick/place, plus the lidar. In Digit v1, the perception systems were brought up incrementally over time for development purposes. In Digit v2, all perception systems are active from the beginning and tied to a dedicated computer. The perception system is used for a number of additional things beyond manipulation, which we’ll start to show in the next few weeks.

The torso changes are a bit more behind-the-scenes. All of the electronics in it are now fully custom, thermally managed, and environmentally sealed. We’ve also included power and ethernet to a payload bay that can fit either a NUC or Jetson module (or other customer payload).

What exactly are we seeing in the video in terms of Digit’s autonomous capabilities?

At the moment this is a demonstration of shared autonomy. Picking and placing the box is fully autonomous. Balance and footstep placement are fully autonomous, but guidance and obstacle avoidance are under local teleop. It’s no longer a radio controller as in early videos; we’re not ready to reveal our current controller design but it’s a reasonably significant upgrade. This is v2 hardware, so there’s one more full version in development prior to the 2020 launch, which will expand the autonomy envelope significantly.

What are some unique features or capabilities of Digit v2 that might not be obvious from the video?

For those who’ve used Cassie robots, the power-up and power-down ergonomics are a lot more user friendly. Digit can be disassembled into carry-on luggage sized pieces (give or take) in under 5 minutes for easy transport. The battery charges in-situ using a normal laptop-style charger.

I’m curious about this “stompy” sort of gait that we see in Digit and many other bipedal robots—are there significant challenges or drawbacks to implementing a more human-like (and presumably quieter) heel-toe gait?

There are no drawbacks other than increased complexity in controls and foot design. With Digit v2, the larger surface area helps with the noise, and v2 has similar or better passive-dynamic performance as compared to Cassie or Digit v1. The foot design is brand new, and new behaviors like heel-toe are an active area of development.

How close is Digit v2 to a system that you’d be comfortable operating commercially?

We’re on track for a 2020 launch for Digit v3. Changes from v2 to v3 are mostly bug-fix in nature, with a few regulatory upgrades like full battery certification. Safety is a major concern for us, and we have launch customers that will be operating Digit in a safe environment, with a phased approach to relaxing operational constraints. Digit operates almost exclusively under force control (as with cobots more generally), but at the moment we’ll err on the side of caution during operation until we have the stats to back up safety and reliability. The legged robot industry has too much potential for us to screw it up by behaving irresponsibly.

It will be a while before Digit (or any other humanoid robot) is operating fully autonomously in crowds of people, but there are so many large market opportunities (think indoor factory/warehouse environments) to address prior to that point that we expect to mature the operational safety side of things well in advance of having saturated the more robot-tolerant markets.

[ Agility Robotics ]