Tag Archives: work

#439836 Video Friday: Dusty at Work

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ROSCon 2021 – October 20-21, 2021 – [Online Event]
Silicon Valley Robot Block Party – October 23, 2021 – Oakland, CA, USA
Let us know if you have suggestions for next week, and enjoy today's videos.
I love watching Dusty Robotics' field printer at work. I don't know whether it's intentional or not, but it's got so much personality somehow.

[ Dusty Robotics ]
A busy commuter is ready to walk out the door, only to realize they've misplaced their keys and must search through piles of stuff to find them. Rapidly sifting through clutter, they wish they could figure out which pile was hiding the keys. Researchers at MIT have created a robotic system that can do just that. The system, RFusion, is a robotic arm with a camera and radio frequency (RF) antenna attached to its gripper. It fuses signals from the antenna with visual input from the camera to locate and retrieve an item, even if the item is buried under a pile and completely out of view.
While finding lost keys is helpful, RFusion could have many broader applications in the future, like sorting through piles to fulfill orders in a warehouse, identifying and installing components in an auto manufacturing plant, or helping an elderly individual perform daily tasks in the home, though the current prototype isn't quite fast enough yet for these uses.
[ MIT ]
CSIRO Data61 had, I'm pretty sure, the most massive robots in the entire SubT competition. And this is how you solve doors with a massive robot.

[ CSIRO ]
You know how robots are supposed to be doing things that are too dangerous for humans? I think sailing through a hurricane qualifies.

This second video, also captured by this poor Saildrone, is if anything even worse:

[ Saildrone ] via [ NOAA ]
Soft Robotics can handle my taquitos anytime.

[ Soft Robotics ]
This is brilliant, if likely unaffordable for most people.

[ Eric Paulos ]
I do not understand this robot at all, nor can I tell whether it's friendly or potentially dangerous or both.

[ Keunwook Kim ]
This sort of thing really shouldn't have to exist for social home robots, but I'm glad it does, I guess?

It costs $100, though.
[ Digital Dream Labs ]
If you watch this video closely, you'll see that whenever a simulated ANYmal falls over, it vanishes from existence. This is a new technique for teaching robots to walk by threatening them with extinction if they fail.

But seriously how do I get this as a screensaver?
[ RSL ]
Zimbabwe Flying Labs' Tawanda Chihambakwe shares how Zimbabwe Flying Labs got their start, using drones for STEM programs, and how drones impact conservation and agriculture.
[ Zimbabwe Flying Labs ]
DARPA thoughtfully provides a video tour of the location of every artifact on the SubT Final prize course. Some of them are hidden extraordinarily well.

Also posted by DARPA this week are full prize round run videos for every team; here are the top three: MARBLE, CSIRO Data61, and CERBERUS.

[ DARPA SubT ]
An ICRA 2021 plenary talk from Fumihito Arai at the University of Tokyo, on “Robotics and Automation in Micro & Nano-Scales.”
[ ICRA 2021 ]
This week's UPenn GRASP Lab Seminar comes from Rahul Mangharam, on “What can we learn from Autonomous Racing?”

[ UPenn ] Continue reading

Posted in Human Robots

#439693 Agility Robotics’ Digit is Getting ...

Agility Robotics' Digit humanoid has been taking a bit of a break from work during the pandemic. Most of what we've seen from Agility and Digit over the past year and a half has been decidedly research-y. Don't get me wrong, Digit's been busy making humans look bad and not falling over when it really should have done, but remember that Agility's goal is to make Digit into a useful, practical robot. It's not a research platform—as Agility puts it, Digit is intended to “accelerate business productivity and people's pursuit of a more fulfilling life.” As far as I can make out, this is a fancier way of saying that Digit should really be spending its time doing dull repetitive tasks so that humans don't have to, and in a new video posted today, the robot shows how it can help out with boring warehouse tote shuffling.

The highlights here for me are really in the combination of legged mobility and object manipulation. Right at the beginning of the video, you see Digit squatting all the way down, grasping a tote bin, shuffling backwards to get the bin out from under the counter, and then standing again. There's an unfortunate cut there, but the sequence is shown again at 0:44, and you can see how Digit pulls the tote towards itself and then regrasps it before lifting. Clever. And at 1:20, the robot gives a tote that it just placed on a shelf a little nudge with one arm to make sure it's in the right spot.

These are all very small things, but I think of them as highlights because all of the big things seem to be more or less solved in this scenario. Digit has no problem lifting things, walking around, and not mowing over the occasional human. Once that stuff is all sorted, whether the robot can work effectively in an environment like this comes down, to some extent, to all of these other little human-obvious things that often make the difference between success and failure.
The clear question, though, is why Digit (or, more broadly, any bipedal robot) is the right robot to be doing this kind of job. There are other robots out there already doing tasks like these in warehouses, and they generally have wheeled bases and manipulation systems specifically designed to move totes and do nothing else. If you were to use one of those robots instead of Digit, my guess is that you'd pay less for it, it would be somewhat safer, and it would likely do the job more efficiently. Fundamentally, Digit can't out box-move a box-moving robot. But the critical thing to consider here is that as soon as you run out of boxes to move, Digit can do all kinds of other things thanks to its versatile humanoid design, while your box-moving robot can only sit in the corner and be sad until more boxes show up.
“We did not set out to build a humanoid robot. We set out to solve mobility.”
—Agility CTO Jonathan Hurst
“Digit is very, very flexible automation,” Agility CTO Jonathan Hurst told us when we asked him about this. “The value of what we're doing is in generality, and having a robot that's going to be able to work carrying totes for three or four hours, then go unload boxes from trailers for three or four hours, keep up with you if you change your workflow entirely. Many of these spaces are designed specifically around the human form factor, and it's possible for a robot like Digit to do all of these different boring, repetitive jobs. And then when things get complicated, humans are still doing it.”
The value of having a human-like robot in a human environment comes into play as soon as you start thinking about typical warehouse situations that would be trivial for a human to solve but that are impossible for wheeled robots. For example, Hurst says that Digit is capable of using a stool to reach objects on high shelves. You could, of course, design a wheeled robot with an extension system to allow it to reach high shelves, but you're now adding more cost and complexity, and the whole point of a generalist humanoid robot is that in human environments, you just don't have to worry about environmental challenges. Or that's the idea, anyway, but as Hurst explains, the fact that Digit ended up with a mostly humanoid form factor was more like a side effect of designing with specific capabilities in mind:
We did not set out to build a humanoid robot. We set out to solve mobility, and we've been on a methodical path towards understanding physical interaction in the world. Agility started with our robot Cassie, and one of the big problems with Cassie was that we didn't have enough inertia in the robot's body to counteract the leg swinging forward, which is why Digit has an upright torso. We wanted to give ourselves more control authority in the yaw direction with Cassie, so we experimented with putting a tail on the robot, and it turns out that the best tail is a pair of bilaterally symmetrical tails, one on either side.
Our goal was to design a machine that can go where people go while manipulating things in the world, and we ended up with this kind of form factor. It's a very different path for us to have gotten here than the vast majority of humanoid robots, and there's an awful lot of subtlety that is in our machine that is absent in most other machines.

IEEE Spectrum: So are you saying that Digit's arms sort of started out as tails to help Cassie with yaw control?
Jonathan Hurst: There are many examples like this—we've been going down this path where we find a solution to a problem like yaw control, and it happens to look like it does with animals, but it's also a solution that's optimal in several different ways, like physical interaction and being able to catch the robot when it falls. It's not like it's a compromise between one thing and another thing, it's straight up the right solution for these three different performance design goals.
Looking back, we started by asking, should we put a reaction wheel or a gyro on Cassie for yaw control? Well, that's just wasted mass. We could use a tail, and there are a lot of nice robots with tails, but usually they're for controlling pitch. It's the same with animals; if you look at lizards, they use their tails for mid-air reorienting to land on their feet after they jump. Cassie doesn't need a tail for that, but we only have a couple of small feet on the ground to work with. And if you look at other bipedal animals, every one of them has some other way of getting that yaw authority. If you watch an ostrich run, when it turns, it sticks its wing out to get the control that it needs.
And so all of these things just fall into place, and a bilaterally symmetrical pair of tails is the best way to control yaw in a biped. When you see Digit walking and its arms are swinging, that's not something that we added to make the motion look right. It looks right because it literally is right—it's the physics of mobility. And that's a good sign for us that we're on the right path to getting the performance that we want.
“We're going for general purpose, but starting with some of the easiest use cases.”
—Agility CTO Jonathan Hurst
Spectrum: We've seen Digit demonstrating very impressive mobility skills. Why are we seeing a demo in a semi-constrained warehouse environment instead of somewhere that would more directly leverage Digit's unique advantages?
Jonathan Hurst: It's about finding the earliest, most appropriate, and most valuable use cases. There's a lot to this robot, and we're not going to be just a tote packing robot. We're not building a specialized robot for this one application, but we have a couple of pretty big logistics partners who are interested in the flexibility and the manipulation capabilities of this machine. And yeah, what you're seeing now is the robot on a flattish floor, but it's also not going to be tripped up by a curb, or a step, or a wire cover, or other things on the ground. You don't have to worry about anything like that. So next, it's an easy transition to unloading trailers, where it's going to have to be stepping over gaps and up and down things and around boxes on the floor and stuff like that. We're going for general purpose, but starting with some of the easiest use cases.
Damion Shelton, CEO: We're trying to prune down the industry space, to get to something where there's a clear value proposition with a partner and deploying there. We can respect the difficulty of the general purpose use case and work to deploy early and profitably, as opposed to continuing to push for the outdoor applications. The blessing and the curse of the Ford opportunity is that it's super interesting, but also super hard. And so it's very motivating, and it's clear to us that that's where one of the ultimate opportunities is, but it's also far enough away from a deployment timeline that it just doesn't map on to a viable business model.
This is a point that every robotics company runs into sooner or later, where aspirations have to succumb to the reality of selling robots in a long-term sustainable way. It's definitely not a bad thing; it just means that we may have to adjust our expectations accordingly. No matter what kind of flashy cutting-edge capabilities your robot has, if it can't cost-effectively do dull or dirty or dangerous stuff, nobody's going to pay you money for it. And cost-effective usefulness is, arguably, one of the biggest challenges in bipedal robotics right now. In the past, I've been impressed by Digit's weightlifting skills, or its ability to climb steep and muddy hills. I'll be just as impressed when it starts making money for Agility by doing boring repetitive tasks in warehouses, because that means that Agility will be able to keep working towards those more complex, more exciting things. “It's not general manipulation, and we're not solving the grand challenges of robotics,” says Hurst. “Yet. But we're on our way.” Continue reading

Posted in Human Robots

#439640 Elon Musk says Tesla’s robot will ...

After dominating the electric vehicle market and throwing his hat into the billionaire space race, Tesla boss Elon Musk announced the latest frontier he's aiming to conquer: humanoid robots. Continue reading

Posted in Human Robots

#439110 Robotic Exoskeletons Could One Day Walk ...

Engineers, using artificial intelligence and wearable cameras, now aim to help robotic exoskeletons walk by themselves.

Increasingly, researchers around the world are developing lower-body exoskeletons to help people walk. These are essentially walking robots users can strap to their legs to help them move.

One problem with such exoskeletons: They often depend on manual controls to switch from one mode of locomotion to another, such as from sitting to standing, or standing to walking, or walking on the ground to walking up or down stairs. Relying on joysticks or smartphone apps every time you want to switch the way you want to move can prove awkward and mentally taxing, says Brokoslaw Laschowski, a robotics researcher at the University of Waterloo in Canada.

Scientists are working on automated ways to help exoskeletons recognize when to switch locomotion modes — for instance, using sensors attached to legs that can detect bioelectric signals sent from your brain to your muscles telling them to move. However, this approach comes with a number of challenges, such as how skin conductivity can change as a person’s skin gets sweatier or dries off.

Now several research groups are experimenting with a new approach: fitting exoskeleton users with wearable cameras to provide the machines with vision data that will let them operate autonomously. Artificial intelligence (AI) software can analyze this data to recognize stairs, doors, and other features of the surrounding environment and calculate how best to respond.

Laschowski leads the ExoNet project, the first open-source database of high-resolution wearable camera images of human locomotion scenarios. It holds more than 5.6 million images of indoor and outdoor real-world walking environments. The team used this data to train deep-learning algorithms; their convolutional neural networks can already automatically recognize different walking environments with 73 percent accuracy “despite the large variance in different surfaces and objects sensed by the wearable camera,” Laschowski notes.
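For readers curious what that training step looks like in practice, here is a minimal sketch of fine-tuning an off-the-shelf convolutional network to classify walking environments from wearable-camera frames. The class names, dataset path, backbone, and hyperparameters below are illustrative assumptions for the sake of the example, not details published by the ExoNet team.

```python
# Sketch: fine-tune a pretrained CNN to classify walking environments from
# wearable-camera images. All names and settings here are assumptions, not
# the ExoNet team's published configuration.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Example environment classes a controller might switch between (assumed).
CLASSES = ["level_ground", "stairs_up", "stairs_down", "door", "ramp"]

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Expects one subfolder per class of camera frames (hypothetical path).
train_set = datasets.ImageFolder("exonet_frames/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained backbone and replace the classifier head.
model = models.mobilenet_v2(weights="IMAGENET1K_V1")
model.classifier[1] = nn.Linear(model.last_channel, len(CLASSES))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

In a deployed exoskeleton, a classifier like this would run on each incoming frame and its prediction would feed the controller that decides when to switch locomotion modes.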

According to Laschowski, a potential limitation of their work is their reliance on conventional 2-D images, whereas depth cameras could also capture potentially useful distance data. He and his collaborators ultimately chose not to rely on depth cameras for a number of reasons, including the fact that the accuracy of depth measurements typically degrades in outdoor lighting and with increasing distance, he says.

In similar work, researchers in North Carolina had volunteers with cameras either mounted on their eyeglasses or strapped onto their knees walk through a variety of indoor and outdoor settings to capture the kind of image data exoskeletons might use to see the world around them. The aim? “To automate motion,” says Edgar Lobaton, an electrical engineering researcher at North Carolina State University. He says they are focusing on how AI software might reduce uncertainty due to factors such as motion blur or overexposed images “to ensure safe operation. We want to ensure that we can really rely on the vision and AI portion before integrating it into the hardware.”

In the future, Laschowski and his colleagues will focus on improving the accuracy of their environmental analysis software with low computational and memory storage requirements, which are important for onboard, real-time operations on robotic exoskeletons. Lobaton and his team also seek to account for uncertainty introduced into their visual systems by movements.

Ultimately, the ExoNet researchers want to explore how AI software can transmit commands to exoskeletons so they can perform tasks such as climbing stairs or avoiding obstacles based on a system’s analysis of a user's current movements and the upcoming terrain. With autonomous cars as inspiration, they are seeking to develop autonomous exoskeletons that can handle the walking task without human input, Laschowski says.

However, Laschowski adds, “User safety is of the utmost importance, especially considering that we're working with individuals with mobility impairments,” resulting perhaps from advanced age or physical disabilities.
“The exoskeleton user will always have the ability to override the system should the classification algorithm or controller make a wrong decision.” Continue reading

Posted in Human Robots

#439105 This Robot Taught Itself to Walk in a ...

Recently, in a Berkeley lab, a robot called Cassie taught itself to walk, a little like a toddler might. Through trial and error, it learned to move in a simulated world. Then its handlers sent it strolling through a minefield of real-world tests to see how it’d fare.

And, as it turns out, it fared pretty damn well. With no further fine-tuning, the robot—which is basically just a pair of legs—was able to walk in all directions, squat down while walking, right itself when pushed off balance, and adjust to different kinds of surfaces.

It’s the first time a machine learning approach known as reinforcement learning has been so successfully applied in two-legged robots.

This likely isn’t the first robot video you’ve seen, nor the most polished.

For years, the internet has been enthralled by videos of robots doing far more than walking and regaining their balance. All that is table stakes these days. Boston Dynamics, the heavyweight champ of robot videos, regularly releases mind-blowing footage of robots doing parkour, back flips, and complex dance routines. At times, it can seem the world of I, Robot is just around the corner.

This sense of awe is well-earned. Boston Dynamics is one of the world’s top makers of advanced robots.

But they still have to meticulously hand program and choreograph the movements of the robots in their videos. This is a powerful approach, and the Boston Dynamics team has done incredible things with it.

In real-world situations, however, robots need to be robust and resilient. They need to regularly deal with the unexpected, and no amount of choreography will do. Which is where, it's hoped, machine learning can help.

Reinforcement learning has been most famously exploited by Alphabet’s DeepMind to train algorithms that thrash humans at some of the most difficult games. Simplistically, it’s modeled on the way we learn. Touch the stove, get burned, don’t touch the damn thing again; say please, get a jelly bean, politely ask for another.
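To make that stove-and-jelly-bean intuition concrete, here is a toy sketch of the underlying idea: an agent tries actions, receives rewards, and gradually favors the actions that paid off. The two-action world below is made up purely for illustration and has nothing to do with Cassie itself.

```python
# Toy reinforcement-learning loop: value estimates shift toward actions that
# earn reward. Entirely illustrative; not from the article or the Berkeley work.
import random

actions = ["touch_stove", "say_please"]
rewards = {"touch_stove": -1.0, "say_please": +1.0}  # burned vs. jelly bean
value = {a: 0.0 for a in actions}  # learned estimate of each action's worth
alpha, epsilon = 0.5, 0.2          # learning rate and exploration rate

for step in range(100):
    # Mostly pick the best-known action, but sometimes explore.
    if random.random() < epsilon:
        a = random.choice(actions)
    else:
        a = max(actions, key=lambda x: value[x])
    # Nudge the estimate toward the reward actually received.
    value[a] += alpha * (rewards[a] - value[a])

print(value)  # "say_please" ends up valued far above "touch_stove"
```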

In Cassie’s case, the Berkeley team used reinforcement learning to train an algorithm to walk in a simulation. It’s not the first AI to learn to walk in this manner. But skills learned in simulation don’t always translate to the real world.

Subtle differences between the two can (literally) trip up a fledgling robot as it tries out its sim skills for the first time.

To overcome this challenge, the researchers used two simulations instead of one. The first simulation, an open source training environment called MuJoCo, was where the algorithm drew upon a large library of possible movements and, through trial and error, learned to apply them. The second simulation, called Matlab SimMechanics, served as a low-stakes testing ground that more precisely matched real-world conditions.
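As a rough illustration of what "learning to walk by trial and error in simulation" looks like in code, here is a minimal sketch using the standard Gymnasium Walker2d task and the Stable-Baselines3 PPO implementation. This is not the Berkeley team's actual pipeline, robot model, or algorithm choice; it only shows the shape of the train-in-simulation step, with validation in a second simulator and hardware deployment happening afterward.

```python
# Minimal sketch of reinforcement learning a walking policy in a MuJoCo-based
# simulator. Generic example only: a standard benchmark environment and PPO,
# not the Berkeley team's setup for Cassie.
import gymnasium as gym
from stable_baselines3 import PPO

# Each episode, the simulated walker tries actions, falls over, and is rewarded
# for forward progress; the policy improves from that experience.
env = gym.make("Walker2d-v4")

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)  # trial and error, entirely in simulation
model.save("walker_policy")
```

A policy trained this way would then be checked in a second, higher-fidelity simulator before ever being run on physical hardware, which is the role Matlab SimMechanics played in the work described above.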

Once the algorithm was good enough, it graduated to Cassie.

And amazingly, it didn’t need further polishing. Said another way, when it was born into the physical world, it knew how to walk just fine. It was also quite robust. The researchers write that two motors in Cassie’s knee malfunctioned during the experiment, but the robot was able to adjust and keep on trucking.

Other labs have been hard at work applying machine learning to robotics.

Last year Google used reinforcement learning to train a (simpler) four-legged robot. And OpenAI has used it with robotic arms. Boston Dynamics, too, will likely explore ways to augment their robots with machine learning. New approaches—like this one aimed at training multi-skilled robots or this one offering continuous learning beyond training—may also move the dial. It’s early yet, however, and there’s no telling when machine learning will exceed more traditional methods.

And in the meantime, Boston Dynamics bots are testing the commercial waters.

Still, robotics researchers, who were not part of the Berkeley team, think the approach is promising. Edward Johns, head of Imperial College London’s Robot Learning Lab, told MIT Technology Review, “This is one of the most successful examples I have seen.”

The Berkeley team hopes to build on that success by trying out “more dynamic and agile behaviors.” So, might a self-taught parkour-Cassie be headed our way? We’ll see.

Image Credit: University of California Berkeley Hybrid Robotics via YouTube Continue reading

Posted in Human Robots