Tag Archives: operator

#439768 DARPA SubT Finals: Robot Operator Wisdom

Each of the DARPA Subterranean Challenge teams is allowed to bring up to 20 people to the Louisville Mega Cavern for the final event. Of those 20 people, only five can accompany the robots to the course staging area to set up the robots. And of those five, just one person can be what DARPA calls the Human Supervisor.

The Human Supervisor, a role most teams refer to as Robot Operator, is the only person allowed to interface with the robots while they're on the course. Or, it's probably more accurate to say that the team's base station computer is the only thing allowed to interface with robots on the course, and the human operator is the only person allowed to use the base station. The operator can talk to their teammates at the staging area, but that's about it—the rest of the team can't even look at the base station screens.
Robot operator is a unique job that can be different for each team, depending on what kinds of robots that team has deployed, how autonomous those robots are, and what strategy the team is using during the competition. On the second day of the SubT preliminary competition, we talked with robot operators from all eight Systems Track teams to learn more about their robots, exactly what they do during the competition runs, and their approach to autonomy.
“DARPA is interested in approaches that are highly autonomous without the need for substantive human interventions; capable of remotely mapping and/or navigating complex and dynamic terrain; and able to operate with degraded and unreliable communication links. The team is permitted to have a single Human Supervisor at a Base Station… The Human Supervisor is permitted to view, access, and/or analyze both course data and status data. Only the Human Supervisor is permitted to use wireless communications with the systems during the competition run.”

DARPA's idea here is that most of the robots competing in SubT will be mostly autonomous most of the time, hence the agency's use of “supervisor” rather than “operator.” Requiring a substantial human in the loop is problematic for a couple of reasons. First, direct supervision requires constant communication, and we've seen how unreliable communication can be on the SubT course. And second, direct operation requires a skilled and experienced operator, which is fine if you're a SubT team that's been practicing for years, but could be impractical for a system of robots that's being deployed operationally.
So how are teams making the robot operator role work, and how close are they to being robot supervisors instead? I went around the team garages on the second day of preliminary runs and asked each team's operator the same three questions about their role. I also asked each operator, “What is one question I should ask the next operator I talk to?” and added that as a bonus question, so each operator ended up answering a question suggested by a different team's operator.
Team Robotika
Robot Operator: Martin Dlouhy
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
This is the third time we've participated in a SubT event; we've tried various robots, small ones, bigger ones, but for us, these two robots seem to be optimal. Because we are flying from the Czech Republic, the robots have to fit in our checked luggage. We also don't have the smaller robots or the drones that we had before, because until about three weeks ago, we didn't even know if we would be allowed to enter the United States. So this is optimal for what we can bring to the competition, and we would like to demonstrate that we can do something with a simple solution.
Once your team of robots is on the course, what do you do during the run?
We have two robots, so it's easier than for some other teams. When the robots are in network range, I have some small tools to locally analyze data to help find artifacts that are hard for the robots to see, like the cellphone or the gas source. If everything goes fine, I basically don't have to be there. We've been more successful in the Virtual SubT competition because over half our team are software developers. We've really pushed hard to make the Virtual and System software as close as possible, and in Virtual, it's fully autonomous from beginning to end. There's one step that I do manually as operator—the robots have neural networks to recognize artifacts, but it's on me to click confirm to submit the artifact reports to DARPA.
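Robotika's last manual step, a human clicking confirm before an artifact report goes to DARPA, can be sketched in a few lines of Python. The endpoint URL, token, and JSON fields below are illustrative assumptions, not the actual DARPA scoring interface or Robotika's code.

```python
# Sketch of an operator's "confirm" step: take an artifact detection proposed
# by a robot and POST it to a scoring server. The URL, token, and JSON schema
# here are illustrative assumptions, not DARPA's real interface.
import json
import urllib.request

SCORING_URL = "http://10.0.0.1:8000/api/artifact_reports"  # hypothetical endpoint
AUTH_TOKEN = "team-token"                                   # hypothetical token

def submit_artifact(artifact_type, x, y, z):
    """Send one confirmed artifact report and return the server's response."""
    report = {"type": artifact_type, "x": x, "y": y, "z": z}
    req = urllib.request.Request(
        SCORING_URL,
        data=json.dumps(report).encode("utf-8"),
        headers={"Content-Type": "application/json",
                 "Authorization": "Bearer " + AUTH_TOKEN},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Only detections a human approves get submitted.
detection = {"type": "cell_phone", "x": 12.4, "y": -3.1, "z": 0.8, "score": 0.87}
if input(f"Submit {detection['type']} at ({detection['x']}, {detection['y']})? [y/n] ") == "y":
    print(submit_artifact(detection["type"], detection["x"], detection["y"], detection["z"]))
```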
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
I would actually like an operator-less solution, and we could run it, but it's still useful to have a human operator—it's safer for the robot, because it's obvious to a human when the robot is not doing well.
Bonus operator question: What are the lowest and highest level decisions you have to make?
The lowest level is, I open the code and change it on the fly. I did it yesterday to change some of the safety parameters. I do this all the time; it's normal. The highest level is asking the team, “Guys, how are we going to run our robots today?”
Team MARBLE
Robot Operator: Dan Riley
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We've been using the Huskies [wheeled robots] since the beginning of the competition, it's a reliable platform with a lot of terrain capability. It's a workhorse that can do a lot of stuff. We were also using a tank-like robot at one time, but we had traversability issues so we decided to drop that one for this competition. We also had UAVs, because there's a lot of value in not having to worry about the ground while getting to areas that you can't get to with a ground robot, but unfortunately we had to drop that too because of the number of people and time that we had. We decided to focus on what we knew we could do well, and make sure that our baseline system was super solid. And we added the Spot robots within the last two months mostly to access areas that the Huskies can't, like going up and down stairs and tricky terrain. It's fast, and we really like it.
Our team of robots is closely related to our deployment strategy. The way our planner and multi-robot coordination works is that the first robot really just plows through the course looking for big frontiers and new areas, and then subsequent robots will fill in the space behind looking for more detail. So we deploy the Spots first to push the environment since they're faster than the Huskies, and the Huskies will follow along and fill in the communications network.
We know we don't want to run five robots tomorrow. Before we got here, we saw the huge cavern and thought that running more robots would be better. But based on the first couple runs, we now know that the space inside is much smaller, so we think four robots is good.
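As a toy illustration of the lead-and-follow coordination Riley describes above, here is a short Python sketch that sends the fastest robot to the biggest frontier and lets the others fill in nearby ones; the data structures and cost model are simplified assumptions, not MARBLE's actual planner.

```python
# Toy sketch of the coordination strategy described above: the fastest robot
# pushes to the largest unexplored frontier, and the remaining robots are
# assigned smaller, closer frontiers to fill in detail behind it.
# Frontier sizes, positions, and the cost model are illustrative assumptions.
import math

def assign_frontiers(robots, frontiers):
    """robots: dicts with 'name', 'pos', 'speed'; frontiers: dicts with 'pos', 'size'.
    Returns a {robot name: frontier} assignment."""
    assignments = {}
    remaining = sorted(frontiers, key=lambda f: f["size"], reverse=True)
    # Lead robot (fastest) takes the biggest frontier.
    lead = max(robots, key=lambda r: r["speed"])
    assignments[lead["name"]] = remaining.pop(0)
    # Followers take the nearest remaining frontier.
    for robot in robots:
        if robot["name"] in assignments or not remaining:
            continue
        nearest = min(remaining, key=lambda f: math.dist(robot["pos"], f["pos"]))
        remaining.remove(nearest)
        assignments[robot["name"]] = nearest
    return assignments

robots = [{"name": "spot1", "pos": (0, 0), "speed": 1.5},
          {"name": "husky1", "pos": (1, 0), "speed": 1.0}]
frontiers = [{"pos": (30, 5), "size": 40}, {"pos": (8, -2), "size": 12}]
print(assign_frontiers(robots, frontiers))
```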
Once your team of robots is on the course, what do you do during the run?
The main thing I'm watching for is artifact reports from robots. While I'm waiting for artifact reports, I'm monitoring where the robots are going, and mainly I want to see them going to new areas. If I see them backtracking or going where another robot has explored already, I have the ability to send them new goal points in another area. When I get an artifact report, I look at the image to verify that it's a good report. For objects that may not be visible, like the cell phone [which has to be detected through the wireless signal it emits], if it's early in the mission I'll generally wait and see if I get any other reports from another robot on it. The localization isn't great on those artifacts, so once I do submit, if it doesn't score, I have to look around to find an area where it might be. For instance, we found this giant room with lots of shelves and stuff, and that's a great place to put a cell phone, and sure enough, that's where the cell phone was.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
We pride ourselves on our autonomy. From the very beginning, that was our goal, and actually in earlier competitions I had very little control over the robot, I could not even send it a goal point. All I was getting was reports—it was a one-way street of information. I might have been able to stop the robot, but that was about it. Later on, we added the goal point capability and an option to drive the robot if I need to take over to get it out of a situation.
I'm actually the lead for our Virtual Track team as well, and that's already decision-free. We're running the exact same software stack on our robots, and the only difference is that the virtual system also does artifact reporting. Honestly, I'd say that we're more effective having the human be able to make some decisions, but the exact same system works pretty well without having any human at all.
Bonus operator question: How much sleep did you get last night?
I got eight hours, and I could have had more, except I sat around watching TV for a while. We stressed ourselves out a lot during the first two competitions, and we had so many problems. It was horrible, so we said, “we're not doing that again!” A lot of our problems started with the setup and launching phase, just getting the robots started up and ready to go and out of the gate. So we spent a ton of time making sure that our startup procedures were all automated. And when you're able to start up easily, things just go well.
Team Explorer
Robot Operator: Chao Cao
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We tried to diversify our robots for the different kinds of environments in the challenge. We have wheeled vehicles, aerial vehicles, and legged vehicles (Spot robots). Our wheeled vehicles are different sizes; two are relatively big and one is smaller, and two are articulated in the middle to give them better mobility performance in rough terrain. Our smaller drones can be launched from the bigger ground robots, and we have a larger drone with better battery life and more payload.
In total, there are 11 robots, which is quite a lot to be managed by a single human operator under a constrained time limit, but if we manage those robots well, we can explore quite a large three dimensional area.
Once your team of robots is on the course, what do you do during the run?
Most of the time, to be honest, it's like playing a video game. It's about allocating resources to gain rewards (which in this case are artifacts) by getting the robots spread out to maximize coverage of the course. I'm monitoring the status of the robots, where they're at, and what they're doing. Most of the time I rely on the autonomy of the robots, including for exploration, coordination between multiple robots, and detecting artifacts. But there are still times when the robots might need my help. For example, yesterday one of the bigger robots got itself stuck in the cave branch, but I was able to intervene and get it to drive out.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
Humans have a semantic understanding of the environment. Just by looking at a camera image, I can predict what an environment will be like and how risky it will be, but robots don't have that kind of higher level decision capability. So I might want a specific kind of robot to go into a specific kind of environment based on what I see, and I can redirect robots to go into areas that are a better fit for them. For me as an operator, at least from my personal experience, I think it's still quite challenging for robots to perform this kind of semantic understanding, and I still have to make those decisions.
Bonus operator question: What is your flow for decision making?
Before each run, we'll have a discussion among all the team members to figure out a rough game plan, including a deployment sequence—which robots go first, should the drones be launched from the ground vehicles or from the staging area. During the run, things are changing, and I have to make decisions based on the environment. I'll talk to the pit crew about what I can see through the base station, and then I'll make an initial proposal based on my instincts for what I think we should do. But I'm very focused during the run and have a lot of tasks to do, so my teammates will think about time constraints and how conservative we want to be and where other robots are because I can't think through all of those possibilities, and then they'll give me feedback. Usually this back and forth is quick and smooth.

The Robot Operator is the only person allowed to interface with the robots while they're on the course—the operator pretty much controls the entire run by themselves.
Photo: DARPA
Team CTU-CRAS-NORLAB
Robot Operator: Vojtech Salnsky
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We chose many different platforms. We have some tracked robots, wheeled robots, Spot robots, and some other experimental UGVs [small hexapods and one big hexapod]. Every UGV has a different ability to traverse terrain, and we are trying to cover all possible locomotion types to be able to traverse anything on the course. Besides the UGVs, we're using UAVs as well, which are able to go through both narrow corridors and bigger spaces.
We brought a large number of robots, but the number that we're using, about ten, is enough to be able to explore a large part of the environment. Deploying more would be really hard for the pit crew of only five people, and there isn't enough space for more robots.
Once your team of robots is on the course, what do you do during the run?
It differs run by run, but the robots are mostly autonomous, so they decide where to go, and I'm looking at the artifact detections uploaded by the robots and approving or disapproving them. If I see that a robot is stuck somewhere, I can help it decide where to go. If it looks like a robot may lose communications, I can reposition other robots to form a chain that extends our network. I can give high-level direction for exploration, but I don't have to—the robots are updating their maps and making decisions to best explore the whole environment.
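The chain-building maneuver the CTU-CRAS-NORLAB operator describes can be sketched as a simple relay-spacing calculation; the fixed radio range and straight-line placement below are simplifying assumptions for illustration, not the team's actual planner.

```python
# Sketch of positioning relay robots between the base station and a robot that
# is about to leave communications range. Assumes straight-line placement and
# a fixed radio range; both are simplifying assumptions.
import math

def relay_positions(base, target, comms_range):
    """Return waypoints where idle robots should park so that every hop
    between base and target stays within comms_range."""
    dist = math.dist(base, target)
    hops = math.ceil(dist / comms_range)  # number of radio links needed
    relays_needed = max(hops - 1, 0)      # robots to park along the way
    return [
        (base[0] + (target[0] - base[0]) * k / hops,
         base[1] + (target[1] - base[1]) * k / hops)
        for k in range(1, relays_needed + 1)
    ]

# A robot exploring 250 m away with ~100 m of reliable range needs two relays.
print(relay_positions(base=(0.0, 0.0), target=(250.0, 0.0), comms_range=100.0))
```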
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
Terrain assessment is subtle. At a higher level, the operator has to decide where to send a walking robot and where to send a rolling robot. It's tiny details on the ground and a feeling about the environment that help the operator make those decisions, and that is not done autonomously.
Bonus operator question: How much bandwidth do you have?
I'm on the edge. I have a map, I have some subsampled images, I have detections, I have topological maps, but it would be better to have everything in 4K and dense point clouds.
Team CSIRO Data61
Robot Operator: Brendan Tidd
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We've got three robot types that are here today—Spot legged robots, big tracked robots called Titans, and drones. The legged ones have been pretty amazing, especially for urban environments with narrow stairs and doorways. The tracked robots are really good in the tricky terrain of cave environments. And the drones can obviously add situational awareness from higher altitudes and detect those high artifacts.
Once your team of robots is on the course, what do you do during the run?
We use the term “operator,” but I'm actually supervising. Our robots are all autonomous; they all know how to divide and conquer, they're all going to optimize exploring for depth, trying to split up where they can and not get in each other's way. In particular, the Spots and the Titans have a special relationship where the Titan will give way to the Spot if they ever cross paths, for obvious reasons. So my role during the run is to coordinate node placement; that's something that we haven't automated. We've got a lot of information that comes back that I use to decide on good places to put nodes, and probably the next step is to automate that process. I also decide where to launch the drone. The launch itself is one click, but it still requires me to know where a good place is. If everything goes right, in general the robots will just do their thing.
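The node-placement call Tidd makes by eye is the kind of decision a simple heuristic could begin to automate: drop a breadcrumb node when the link back to the previous node weakens, preferably at a junction. The threshold values and inputs below are assumptions for illustration, not CSIRO's system.

```python
# Sketch of a "when should the robot drop a comms node?" heuristic of the kind
# the operator currently applies by judgment. The RSSI thresholds, spacing,
# and junction flag are illustrative assumptions.

RSSI_DROP_THRESHOLD = -75.0   # dBm; assumed value
RSSI_CRITICAL = -85.0         # below this, drop regardless of junction
MIN_SPACING_M = 20.0          # don't drop nodes closer together than this

def should_drop_node(rssi_to_last_node, dist_since_last_node, at_junction,
                     nodes_remaining):
    if nodes_remaining == 0 or dist_since_last_node < MIN_SPACING_M:
        return False
    weak_link = rssi_to_last_node < RSSI_DROP_THRESHOLD
    # Prefer junctions, but drop anyway if the link is about to die.
    return (weak_link and at_junction) or rssi_to_last_node < RSSI_CRITICAL

# Example: weak signal at a junction with nodes still on board -> drop one.
print(should_drop_node(rssi_to_last_node=-78.0, dist_since_last_node=35.0,
                       at_junction=True, nodes_remaining=3))
```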
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
The node drop thing is vital, but I think it's quite a complex thing to automate because there are so many different aspects to consider. The node mesh is very dynamic; it's affected by all the robots that are around it and obviously by the environment. The drone launch is similar, but automating it requires the robots to know when it's worth it to launch a drone. So those two things, but also pushing on the nav stack to make sure it can handle the crazy stuff. And I guess the other side is the detection. It's not a trivial thing knowing what's a false positive or not; that's a hard thing to automate.
Bonus operator question: How stressed are you, knowing that it's just you controlling all the robots during the run?
Coping with that is a thing! I've got music playing when I'm operating, I actually play in a metal band and we get on stage sometimes and the feeling is very similar, so it's really helpful to have the music there. But also the team, you know? I'm confident in our system, and if I wasn't, that would really affect my mental state. But we test a lot, and all that preparedness helps with the stress.
Team CoSTAR
Robot Operator: Kyohei Otsu
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We have wheeled vehicles, legged vehicles, and aerial drones, so we can cover many terrains, handle stairs, and fly over obstacles. We picked three completely different mobility systems to be able to use many different strategies. The robots can autonomously adjust their roles by themselves; some explore, some help with communication for other robots. The number of robots we use depends on the environment—yesterday we deployed seven robots onto the course because we assumed that the environment would be huge, but it's a bit smaller than we expected, so we'll adapt our number to fit that environment.
Once your team of robots is on the course, what do you do during the run?
Our robots are autonomous, and I think we have very good autonomy software. During setup the robots need some operator attention; I have to make sure that everything is working including sensors, mobility systems, and all the algorithms. But after that, once I send the robot into the course, I totally forget about it and focus on another robot. Sometimes I intervene to better distribute our team of robots—that's something that a human is good at, using prior knowledge to understand the environment. And I look at artifact reports, that's most of my job.
In the first phases of the Subterranean Challenge, we were getting low level information from the robots and sometimes using low level commands. But as the project proceeded and our technology matured, we found that it was too difficult for the operator, so we added functionality for the robot to make all of those low level decisions, and the operator just deals with high level decisions.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible? [answered by CoSTAR co-Team Lead Joel Burdick]
Two things: the system reports that it thinks it found an artifact, and the operator has to confirm yes or no. He also has to confirm that the location seems right. The other thing is that our multi-robot coordination isn't as sophisticated as it could be, so the operator may have to retask robots to different areas. If we had another year, we'd be much closer to automating those things.
Bonus Operator Question: Would you prefer if your system was completely autonomous and your job was not necessary?
Yeah, I'd prefer that!
Team Coordinated Robotics
Robot Operator: Kevin Knoedler
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
The ideal mix in my mind is a fleet of small drones with lidar, but they are very hard to test, and very hard to get right. Ground vehicles aren't necessarily easier to get right, but they're easier to test, and if you can test something, you're a lot more likely to succeed. So that's really the big difference with the team of robots we have here.
Once your team of robots is on the course, what do you do during the run?
Some of the robots have an automatic search function where if they find something they report back, and what I'd like to be doing is just monitoring. But the search function only works in larger areas. So right now the goal is for me to drive them through the narrow areas, get them into the wider areas, and let them go, but getting them to that search area is something that I mostly need to do manually, one at a time.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
Ideally, the robots would be able to get through those narrow areas on their own. It's actually a simpler problem to solve than larger areas, it's just not where we focused our effort.
Bonus operator question: How many interfaces do you use to control your robots?
We have one computer with two monitors, one controller, and that's it.
Team CERBERUS
Robot Operator: Marco Tranzatto
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We have a mix of legged and flying robots, supported by a rover carrying a wireless antenna. The idea is to take legged robots into harsh environments where wheeled robots may not perform as well, combined with aerial scouts that can explore the environment quickly to provide initial situational awareness to the operator, so that I can decide where to deploy the legged machines. So the goal is to combine the legged and flying robots in a unified mission to give as much information as possible to the human operator. We also had some bigger robots, but we found them to be a bit too big for the environment that DARPA has prepared for us, so we're not going to deploy them.
Once your team of robots is on the course, what do you do during the run?
We use two main modes: one is fully autonomous on the robots, and the other one is supervised autonomy where I have an overview of what the robots are doing and can override specific actions. Based on the high level information that I can see, I can decide to control a single robot to give it a manual waypoint to reposition it to a different frontier inside the environment. I can go from high level control down to giving these single commands, but the commands are still relatively high level, like “go here and explore.” Each robot has artifact scoring capabilities, and all these artifact detections are sent to the base station once the robot is in communication range, and the human operator has to say, “okay this looks like a possible artifact so I accept it” and then can submit the position either as reported by the robot or the optimized position reported by the mapping server.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
Each robot is autonomous by itself. But the cooperation between robots is still like… The operator has to set bounding boxes to tell each robot where to explore. The operator has a global overview, and then inside these boxes, the robots are autonomous. So I think at the moment in our pipeline, we still need a centralized human supervisor to say which robot explores in which direction. We are close to automating this, but we're not there yet.
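A minimal sketch of the bounding-box workflow Tranzatto describes, where each robot's planner only considers frontiers inside the region the human has drawn for it; the box format and frontier list are illustrative assumptions, not the CERBERUS pipeline.

```python
# Sketch of the "operator assigns each robot a bounding box, the robot explores
# autonomously inside it" pattern described above. The box format and frontier
# coordinates are illustrative assumptions.

def in_box(point, box):
    """box = (xmin, ymin, xmax, ymax); point = (x, y)."""
    x, y = point
    xmin, ymin, xmax, ymax = box
    return xmin <= x <= xmax and ymin <= y <= ymax

def frontiers_for_robot(robot_box, frontiers):
    """Keep only the frontiers this robot is allowed to pursue."""
    return [f for f in frontiers if in_box(f, robot_box)]

# Operator-assigned regions: one ANYmal takes the left branch, the drone the right.
boxes = {"anymal_1": (0, -50, 100, 50), "drone_1": (100, -50, 250, 50)}
frontiers = [(40, 10), (130, -20), (220, 5)]
for robot, box in boxes.items():
    print(robot, "->", frontiers_for_robot(box, frontiers))
```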
Bonus operator question: What is one thing you would add to make your life as an operator easier?
I would like to have a more centralized way to give commands to the robots. At the moment I need to select each robot and give it a specific command. It would be very helpful to have a centralized map where I can tell a robot to, say, explore in a given area while considering data from a different robot. This was in our plan, but we didn't manage to deploy it yet.

Posted in Human Robots

#439100 Video Friday: Robotic Eyeball Camera

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
RoboCup 2021 – June 22-28, 2021 – [Online Event]
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
Let us know if you have suggestions for next week, and enjoy today's videos.

What if seeing devices looked like us? Eyecam is a prototype exploring the potential future design of sensing devices. Eyecam is a webcam shaped like a human eye that can see, blink, look around and observe us.

And it's open source, so you can build your own!

[ Eyecam ]

Looks like Festo will be turning some of its bionic robots into educational kits, which is a pretty cool idea.

[ Bionics4Education ]

Underwater soft robots are challenging to model and control because of their high degrees of freedom and their intricate coupling with water. In this paper, we present a method that leverages the recent development in differentiable simulation coupled with a differentiable, analytical hydrodynamic model to assist with the modeling and control of an underwater soft robot. We apply this method to Starfish, a customized soft robot design that is easy to fabricate and intuitive to manipulate.

[ MIT CSAIL ]
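The differentiable-simulation idea in the CSAIL paper above can be illustrated with a toy example: run a simulation, differentiate the outcome with respect to a control parameter, and use that gradient to tune the controller. The one-dimensional drag model below is a made-up stand-in for the paper's hydrodynamic model; only the general idea carries over.

```python
# Toy illustration of differentiating through a simulation to tune a control
# parameter. The 1D drag model is a made-up stand-in for a real hydrodynamic
# model, and finite differences stand in for a proper autodiff framework.

def simulate(thrust, steps=100, dt=0.01, drag=0.8):
    """Integrate v' = thrust - drag*v from rest and return the final position."""
    x, v = 0.0, 0.0
    for _ in range(steps):
        v += (thrust - drag * v) * dt
        x += v * dt
    return x

def d_position_d_thrust(thrust, eps=1e-6):
    """Finite-difference gradient of the final position w.r.t. thrust."""
    return (simulate(thrust + eps) - simulate(thrust - eps)) / (2 * eps)

# Gradient descent on thrust so the simulated robot ends at a 0.5 m target.
target, thrust = 0.5, 0.2
for _ in range(100):
    error = simulate(thrust) - target
    thrust -= 5.0 * error * d_position_d_thrust(thrust)  # step size tuned for this toy
print(round(thrust, 3), round(simulate(thrust), 3))
```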

Rainbow Robotics, the company that made HUBO, has a new collaborative robot arm.

[ Rainbow Robotics ]

Thanks Fan!

We develop an integrated robotic platform for advanced collaborative robots and demonstrate an application in which multiple robots collaboratively transport an object to different positions in a factory environment. The proposed platform integrates a drone, a mobile manipulator robot, and a dual-arm robot to work autonomously, while also collaborating with a human worker. The platform also demonstrates the potential of a novel manufacturing process, which incorporates adaptive and collaborative intelligence to improve the efficiency of mass customization for the factory of the future.

[ Paper ]

Thanks Poramate!

At Sevastopol State University, a team from the Laboratory of Underwater Robotics and Control Systems and the Research and Production Association “Android Technika” tested an underwater anthropomorphic manipulator robot.

[ Sevastopol State ]

Thanks Fan!

Taiwanese company TCI Gene created a COVID test system based on their fully automated and enclosed gene testing machine QVS-96S. The system includes two ABB robots and carries out 1800 tests per day, operating 24/7. Every hour, 96 virus samples are tested with an accuracy of 99.99%.

[ ABB ]

A short video showing how a Halodi Robotics robot can be used in a commercial guarding application.

[ Halodi ]

During the past five years, under the NASA Early Space Innovations program, we have been developing new design optimization methods for underactuated robot hands, aiming to achieve versatile manipulation in highly constrained environments. We have prototyped hands for NASA’s Astrobee robot, an in-orbit assistive free flyer for the International Space Station.

[ ROAM Lab ]

The new, improved OTTO 1500 is a workhorse AMR designed to move heavy payloads through demanding environments faster than any other AMR on the market, with zero compromise to safety.

[ OTTO Motors ]

Very, very high performance sensing and actuation to pull this off.

[ Ishikawa Group ]

We introduce a conversational social robot designed for long-term in-home use to help with loneliness. We present a novel robot behavior design to have simple self-reflection conversations with people to improve wellness, while still being feasible, deployable, and safe.

[ HCI Lab ]

We are one of the 5 winners of the Start-up Challenge. This video illustrates what we achieved during the Swisscom 5G exploration week. Our proof-of-concept tele-excavation system is composed of a Menzi Muck M545 walking excavator, automated and customized by the Robotic Systems Lab, and an IBEX motion platform as the operator station. The operator and remote machine are connected for the first time via a 5G network infrastructure, which was brought to our test field by Swisscom.

[ RSL ]

This video shows LOLA balancing on different terrain when being pushed in different directions. The robot is technically blind, not using any camera-based or prior information on the terrain (hard ground is assumed).

[ TUM ]

Autonomous driving when you cannot see the road at all because it's buried in snow is some serious autonomous driving.

[ Norlab ]

A hierarchical and robust framework for learning bipedal locomotion is presented and successfully implemented on the 3D biped robot Digit. The feasibility of the method is demonstrated by successfully transferring the learned policy in simulation to the Digit robot hardware, realizing sustained walking gaits under external force disturbances and challenging terrains not included during the training process.

[ OSU ]

This is a video summary of the Center for Robot-Assisted Search and Rescue's deployments under the direction of emergency response agencies to more than 30 disasters in five countries from 2001 (9/11 World Trade Center) to 2018 (Hurricane Michael). It includes the first use of ground robots for a disaster (WTC, 2001), the first use of small unmanned aerial systems (Hurricane Katrina 2005), and the first use of water surface vehicles (Hurricane Wilma, 2005).

[ CRASAR ]

In March, a team from the Oxford Robotics Institute collected a week of epic off-road driving data, as part of the Sense-Assess-eXplain (SAX) project.

[ Oxford Robotics ]

As a part of the AAAI 2021 Spring Symposium Series, HEBI Robotics was invited to present an Industry Talk on the symposium's topic: Machine Learning for Mobile Robot Navigation in the Wild. Included in this presentation was a short case study on one of our upcoming mobile robots that is being designed to successfully navigate unstructured environments where today's robots struggle.

[ HEBI Robotics ]

Thanks Hardik!

This Lockheed Martin Robotics Seminar is from Chad Jenkins at the University of Michigan, on “Semantic Robot Programming… and Maybe Making the World a Better Place.”

I will present our efforts towards accessible and general methods of robot programming from the demonstrations of human users. Our recent work has focused on Semantic Robot Programming (SRP), a declarative paradigm for robot programming by demonstration that builds on semantic mapping. In contrast to procedural methods for motion imitation in configuration space, SRP is suited to generalize user demonstrations of goal scenes in workspace, such as for manipulation in cluttered environments. SRP extends our efforts to crowdsource robot learning from demonstration at scale through messaging protocols suited to web/cloud robotics. With such scaling of robotics in mind, prospects for cultivating both equal opportunity and technological excellence will be discussed in the context of broadening and strengthening Title IX and Title VI.

[ UMD ]

Posted in Human Robots

#439036 Video Friday: Shadow Plays Jenga, and ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – Coral Gables, FL, USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

The Shadow Robot team couldn't resist! Our Operator, Joanna, is using the Shadow Teleoperation System which, fun and games aside, can help those in difficult, dangerous and distant jobs.

Shadow could challenge this MIT Jenga-playing robot, but I bet they wouldn't win:

[ Shadow Robot ]

Digit is gradually stomping the Agility Robotics logo into a big grassy field fully autonomously.

[ Agility Robotics ]

This is a pretty great and very short robotic magic show.

[ Mario the Magician ]

A research team at the Georgia Institute of Technology has developed a modular solution for drone delivery of larger packages without the need for a complex fleet of drones of varying sizes. By allowing teams of small drones to collaboratively lift objects using an adaptive control algorithm, the strategy could allow a wide range of packages to be delivered using a combination of several standard-sized vehicles.

[ GA Tech ]

I've seen this done using vision before, but Flexiv's Rizon 4s can keep a ball moving along a specific trajectory using only force sensing and control.

[ Flexiv ]

Thanks Yunfan!

This combination of a 3D aerial projection system and a sensing interface can be used as an interactive and intuitive control system for things like robot arms, but in this case, it's being used to make simulated pottery. Much less messy than the traditional way of doing it.

More details on Takafumi Matsumaru's work at the Bio-Robotics & Human-Mechatronics Laboratory at Waseda University at the link below.

[ BLHM ]

U.S. Vice President Kamala Harris called astronauts Shannon Walker and Kate Rubins on the ISS, and they brought up Astrobee, at which point Shannon reaches over and rips Honey right off of her charging dock to get her on camera.

[ NASA ]

Here's a quick three minute update on Perseverance and Ingenuity from JPL.

[ Mars 2020 ]

Rigid grippers used in existing aerial manipulators require precise positioning to achieve successful grasps and transmit large contact forces that may destabilize the drone. This limits the speed during grasping and prevents “dynamic grasping,” where the drone attempts to grasp an object while moving. On the other hand, biological systems (e.g. birds) rely on compliant and soft parts to dampen contact forces and compensate for grasping inaccuracy, enabling impressive feats. This paper presents the first prototype of a soft drone—a quadrotor where traditional (i.e. rigid) landing gears are replaced with a soft tendon-actuated gripper to enable aggressive grasping.

[ MIT ]

In this video we present results from a field deployment inside the Løkken Mine underground pyrite mine in Norway. The Løkken mine was operative from 1654 to 1987 and contains narrow but long corridors, alongside vast rooms and challenging vertical stopes. In this field study we evaluated selected autonomous exploration and visual search capabilities of a subset of the aerial robots of Team CERBERUS towards the goal of complete subterranean autonomy.

[ Team CERBERUS ]

What you can do with a 1,000 FPS projector with a high speed tracking system.

[ Ishikawa Group ]

ANYbotics’ collaboration with BASF, one of the largest global chemical manufacturers, displays the efficiency, quality, and scalability of robotic inspection and data-collection capabilities in complex industrial environments.

[ ANYbotics ]

Does your robot arm need a stylish jacket?

[ Fraunhofer ]

Trossen Robotics unboxes a Unitree A1, and it's actually an unboxing where they have to figure out everything from scratch.

[ Trossen ]

Robots have learned to drive cars, assist in surgeries―and vacuum our floors. But can they navigate the unwritten rules of a busy sidewalk? Until they can, robotics experts Leila Takayama and Chris Nicholson believe, robots won’t be able to fulfill their immense potential. In this conversation, Chris and Leila explore the future of robotics and the role open source will play in it.

[ Red Hat ]

Christoph Bartneck's keynote at the 6th Joint UAE Symposium on Social Robotics, focusing on what roles robots can play during the Covid crisis and why so many social robots fail in the market.

[ HIT Lab ]

Decision-making based on arbitrary criteria is legal in some contexts, such as employment, and not in others, such as criminal sentencing. As algorithms replace human deciders, HAI-EIS fellow Kathleen Creel argues arbitrariness at scale is morally and legally problematic. In this HAI seminar, she explains how the heart of this moral issue relates to domination and a lack of sufficient opportunity for autonomy. It relates in interesting ways to the moral wrong of discrimination. She proposes technically informed solutions that can lessen the impact of algorithms at scale and so mitigate or avoid the moral harm identified.

[ Stanford HAI ]

Sawyer B. Fuller speaks on Autonomous Insect-Sized Robots at the UC Berkeley EECS Colloquium series.

Sub-gram (insect-sized) robots have enormous potential that is largely untapped. From a research perspective, their extreme size, weight, and power (SWaP) constraints also forces us to reimagine everything from how they compute their control laws to how they are fabricated. These questions are the focus of the Autonomous Insect Robotics Laboratory at the University of Washington. I will discuss potential applications for insect robots and recent advances from our group. These include the first wireless flights of a sub-gram flapping-wing robot that weighs barely more than a toothpick. I will describe efforts to expand its capabilities, including the first multimodal ground-flight locomotion, the first demonstration of steering control, and how to find chemical plume sources by integrating the smelling apparatus of a live moth. I will also describe a backpack for live beetles with a steerable camera and conceptual design of robots that could scale all the way down to the “gnat robots” first envisioned by Flynn & Brooks in the ‘80s.

[ UC Berkeley ]

Thanks Fan!

Joshua Vander Hook, Computer Scientist, NIAC Fellow, and Technical Group Supervisor at NASA JPL, presents an overview of the AI Group(s) at JPL, and recent work on single and multi-agent autonomous systems supporting space exploration, Earth science, NASA technology development, and national defense programs.

[ UMD ]

Posted in Human Robots

#439010 Video Friday: Nanotube-Powered Insect ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30-June 5, 2021 – Xi'an, China
Let us know if you have suggestions for next week, and enjoy today's videos.

If you’ve ever swatted a mosquito away from your face, only to have it return again (and again and again), you know that insects can be remarkably acrobatic and resilient in flight. Those traits help them navigate the aerial world, with all of its wind gusts, obstacles, and general uncertainty. Such traits are also hard to build into flying robots, but MIT Assistant Professor Kevin Yufeng Chen has built a system that approaches insects’ agility.

Chen’s actuators can flap nearly 500 times per second, giving the drone insect-like resilience. “You can hit it when it’s flying, and it can recover,” says Chen. “It can also do aggressive maneuvers like somersaults in the air.” And it weighs in at just 0.6 grams, approximately the mass of a large bumble bee. The drone looks a bit like a tiny cassette tape with wings, though Chen is working on a new prototype shaped like a dragonfly.

[ MIT ]

National Robotics Week is April 3-11, 2021!

[ NRW ]

This is in a motion capture environment, but still, super impressive!

[ Paper ]

Thanks Fan!

Why wait for Boston Dynamics to add an arm to your Spot if you can just do it yourself?

[ ETHZ ]

This video shows the deep-sea free swimming of a soft robot in the South China Sea. The soft robot was grasped by a robotic arm on the ‘HAIMA’ ROV and reached the bottom of the South China Sea (depth of 3,224 m). After release, the soft robot was actuated with an on-board AC voltage of 8 kV at 1 Hz and demonstrated free swimming locomotion with its flapping fins.

Um, did they bring it back?

[ Nature ]

Quadruped Yuki Mini is a 12 DOF robot equipped with a Raspberry Pi that runs ROS. Also, BUNNIES!

[ Lingkang Zhang ]

Thanks Lingkang!

Deployment of drone swarms usually relies on inter-agent communication or visual markers that are mounted on the vehicles to simplify their mutual detection. The vswarm package enables decentralized vision-based control of drone swarms without relying on inter-agent communication or visual fiducial markers. The results show that the drones can safely navigate in an outdoor environment despite substantial background clutter and difficult lighting conditions.

[ Vswarm ]

A conventionally adopted method for operating a waiter robot is based on static position control, where pre-defined goal positions are marked on a map. However, this solution is not optimal in a dynamic setting, such as in a coffee shop or an outdoor catering event, because the customers often change their positions. We explore an alternative human-robot interface design where a human operator communicates the identity of the customer to the robot instead. Inspired by how [a] human communicates, we propose a framework for communicating a visual goal to the robot, through interactive two-way communications.

[ Paper ]

Thanks Poramate!

In this video, LOLA reacts to undetected ground height changes, including a drop and leg-in-hole experiment. Further tests show the robustness to vertical disturbances using a seesaw. The robot is technically blind, not using any camera-based or prior information on the terrain.

[ TUM ]

RaiSim is a cross-platform multi-body physics engine for robotics and AI. It fully supports Linux, Mac OS, and Windows.

[ RaiSim ]

Thanks Fan!

The next generation of LoCoBot is here. The LoCoBot is a ROS research rover for mapping, navigation, and manipulation (optional) that enables researchers, educators, and students alike to focus on high-level code development instead of hardware and building out lower-level code. Development on the LoCoBot is simplified with open source software, full ROS mapping and navigation packages, and a modular open source Python API that allows users to move the platform as well as the (optional) manipulator in as few as 10 lines of code.

[ Trossen ]

MIT Media Lab Research Specialist Dr. Kate Darling looks at how robots are portrayed in popular film and TV shows.

Kate's book, The New Breed: What Our History with Animals Reveals about Our Future with Robots can be pre-ordered now and comes out next month.

[ Kate Darling ]

The current autonomous mobility systems for planetary exploration are wheeled rovers, limited to flat, gently-sloping terrains and agglomerate regolith. These vehicles cannot tolerate instability and operate within a low-risk envelope (i.e., low-incline driving to avoid toppling). Here, we present ‘Mars Dogs’ (MD), four-legged robotic dogs, the next evolution of extreme planetary exploration.

[ Team CoSTAR ]

In 2020, first-year PhD students at the MIT Media Lab were tasked with a special project—to reimagine the Lab and write sci-fi stories about the MIT Media Lab in the year 2050. “But, we are researchers. We don't only write fiction, we also do science! So, we did what scientists do! We used a secret time machine under the MIT dome to go to the year 2050 and see what’s going on there! Luckily, the Media Lab still exists and we met someone…really cool!” Enjoy this interview of Cyber Joe, AI Mentor for MIT Media Lab Students of 2050.

[ MIT ]

In this talk, we will give an overview of the diverse research we do at CSIRO’s Robotics and Autonomous Systems Group and delve into some specific technologies we have developed, including SLAM and legged robotics. We will also give insights into CSIRO’s participation in the current DARPA Subterranean Challenge, where we are deploying a fleet of heterogeneous robots into GPS-denied, unknown underground environments.

[ GRASP Seminar ]

Marco Hutter (ETH) and Hae-Won Park (KAIST) talk about “Robotics Inspired by Nature.”

[ Swiss-Korean Science Club ]

Thanks Fan!

In this keynote, Guy Hoffman, Assistant Professor and the Mills Family Faculty Fellow in the Sibley School of Mechanical and Aerospace Engineering at Cornell University, discusses “The Social Uncanny of Robotic Companions.”

[ Designerly HRI ]

Posted in Human Robots

#438080 Boston Dynamics’ Spot Robot Is Now ...

Boston Dynamics has been working on an arm for its Spot quadruped for at least five years now. There have been plenty of teasers along the way, including this 45-second clip from early 2018 of Spot using its arm to open a door, which at 85 million views seems to be Boston Dynamics’ most popular video ever by a huge margin. Obviously, there’s a substantial amount of interest in turning Spot from a highly dynamic but mostly passive sensor platform into a mobile manipulator that can interact with its environment.

As anyone who’s done mobile manipulation will tell you, actually building an arm is just the first step—the really tricky part is getting that arm to do exactly what you want it to do. In particular, Spot’s arm needs to be able to interact with the world with some amount of autonomy in order to be commercially useful, because you can’t expect a human (remote or otherwise) to spend all their time positioning individual joints or whatever to pick something up. So the real question about this arm is whether Boston Dynamics has managed to get it to a point where it’s autonomous enough that users with relatively little robotics experience will be able to get it to do useful tasks without driving themselves nuts.

Today, Boston Dynamics is announcing commercial availability of the Spot arm, along with some improved software called Scout plus a self-charging dock that’ll give the robot even more independence. And to figure out exactly what Spot’s new arm can do, we spoke with Zachary Jackowski, Spot Chief Engineer at Boston Dynamics.

Although Boston Dynamics’ focus has been on dynamic mobility and legged robots, the company has been working on manipulation for a very long time. We first saw an arm prototype on an early iteration of Spot in 2016, where it demonstrated some impressive functionality, including loading a dishwasher and fetching a beer in a way that only resulted in a minor catastrophe. But we’re guessing that Spot’s arm can trace its history back to BigDog’s crazy powerful hydraulic face-arm, which was causing mayhem with cinder blocks back in 2013:

Spot’s arm is not quite that powerful (it has to drag cinder blocks along the ground rather than fling them into space), but you can certainly see the resemblance. Here’s the video that Boston Dynamics posted yesterday to introduce Spot’s new arm:

A couple of things jumped out from this video right away. First, Spot is doing whole body manipulation with its arm, as opposed to just acting as a four-legged base that brings the arm where it needs to go. Planning looks to be very tightly integrated, such that if you ask the robot to manipulate an object, its arm, legs, and torso all work together to optimize that manipulation. Also, when Spot flips that electrical switch, you see the robot successfully grasp the switch, and then reposition its body in a way that looks like it provides better leverage for the flip, which is a neat trick. It looks like it may be able to use the strength of its legs to augment the strength of its arm, as when it’s dragging the cinder block around, which is surely an homage to BigDog. The digging of a hole is particularly impressive. But again, the real question is how much of this is autonomous or semi-autonomous in a way that will be commercially useful?

Before we get to our interview with Spot Chief Engineer Zack Jackowski, it’s worth watching one more video that Boston Dynamics shared with us:

This is notable because Spot is opening a door that’s not ADA compliant, and the robot is doing it with a simple two-finger gripper. Most robots you see interacting with doors rely on ADA compliant hardware, meaning (among other things) a handle that can be pushed rather than a knob that has to be twisted, because it’s much more challenging for a robot to grasp and twist a smooth round door knob than it is to just kinda bash down on a handle. That capability, combined with Spot being able to pass through a spring-loaded door, potentially opens up a much wider array of human environments to the robot, and that’s where we started our conversation with Jackowski.

IEEE Spectrum: At what point did you decide that for Spot’s arm to be useful, it had to be able to handle round door knobs?

Zachary Jackowski: We're like a lot of roboticists, where someone in a meeting about manipulation would say “it's time for the round doorknob” and people would start groaning a little bit. But the reality is that, in order to make a robot useful, you have to engage with the environments that users have. Spot’s arm uses a very simple gripper—it’s a one degree of freedom gripper, but a ton of thought has gone into all of the fine geometric contours of it such that it can grab that ADA compliant lever handle, and it’ll also do an enclosing grasp around a round door knob. The major point of a robot like Spot is to engage with the environment you have, and so you can’t cut out stuff like round door knobs.

We're thrilled to be launching the arm and getting it out with users and to have them start telling us what doors it works really well on, and what they're having trouble with. And we're going to be working on rapidly improving all this stuff. We went through a few campaigns of like, “this isn’t ready until we can open every single door at Boston Dynamics!” But every single door at Boston Dynamics and at our test lab is a small fraction of all the doors in the world. So we're prepared to learn a lot this year.

When we see Spot open a door, or when it does those other manipulation behaviors in the launch video, how much of that is autonomous, how much is scripted, and to what extent is there a human in the loop?

All of the scenes where the robot does a pick, like the snow scene or the laundry scene, that is actually an almost fully integrated autonomous behavior that has a bit of a script wrapped around it. We trained a detector for an object, and the robot is identifying that object in the environment, picking it, and putting it in the bin all autonomously. The scripted part of that is telling the robot to perform a series of picks.

One of the things that we’re excited about, and that roboticists have been excited about going back probably all the way to the DRC, is semi-autonomous manipulation. And so we have modes built into the interface where if you see an object that you want the robot to grab, all you have to do is tap that object on the screen, and the robot will walk up to it, use the depth camera in its gripper to capture a depth map, and plan a grasp on its own in real time. That’s all built-in, too.

The jump rope—robots don’t just go and jump rope on their own. We scripted an arm motion to move the rope, and wrote a script using our API to coordinate all three robots. Drawing “Boston Dynamics” in chalk in our parking lot was scripted also. One of our engineers wrote a really cool G-code interpreter that vectorizes graphics so that Spot can draw them.

So for an end user, if you wanted Spot to autonomously flip some switches for you, you’d just have to train Spot on your switches, and then Spot could autonomously perform the task?

There are a couple of ways that task could break down depending on how you’re interfacing with the robot. If you’re a tablet user, you’d probably just identify the switch yourself on the tablet’s screen, and the robot will figure out the grasp, and grasp it. Then you’ll enter a constrained manipulation mode on the tablet, and the robot will be able to actuate the switch. But the robot will take care of the complicated controls aspects, like figuring out how hard it has to pull, the center of rotation of the switch, and so on.
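One piece of what the robot is working out there, the switch's center of rotation, falls naturally out of a least-squares fit over the motion the gripper actually experiences. Below is a hedged sketch of just that geometric sub-problem, assuming 2D gripper positions; it is not Boston Dynamics' implementation, and the sample trajectory is invented.

```python
# Sketch of estimating a lever's center of rotation from gripper positions
# observed while it moves: a standard least-squares (Kasa) circle fit.
# This is not Boston Dynamics' code; the trajectory below is made up.
import numpy as np

def fit_center_of_rotation(points):
    """Return (center_x, center_y, radius) fitted to 2D points on an arc."""
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Circle model rearranged to be linear: x^2 + y^2 = A*x + B*y + C.
    M = np.column_stack([x, y, np.ones_like(x)])
    A, B, C = np.linalg.lstsq(M, x**2 + y**2, rcond=None)[0]
    cx, cy = A / 2.0, B / 2.0
    return cx, cy, np.sqrt(C + cx**2 + cy**2)

# Noisy gripper positions recorded while a lever pivots about (0.50, 0.20).
center, radius = np.array([0.50, 0.20]), 0.10
angles = np.linspace(0.0, np.pi / 3, 15)
samples = center + radius * np.column_stack([np.cos(angles), np.sin(angles)])
samples += np.random.default_rng(0).normal(scale=0.002, size=samples.shape)
print(fit_center_of_rotation(samples))  # roughly (0.50, 0.20, 0.10)
```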

The video of Spot digging was pretty cool—how did that work?

That’s mostly a scripted behavior. There are some really interesting control systems topics in there, like how you’d actually do the right kinds of force control while you insert the trowel into the dirt, and how to maintain robot stability while you do it. The higher level task of how to make a good hole in the dirt—that’s scripted. But the part of the problem that’s actually digging, you need the right control system to actually do that, or you’ll dig your trowel into the ground and flip your robot over.

The last time we saw Boston Dynamics robots flipping switches and turning valves I think might have been during the DRC in 2015, when they had expert robot operators with control over every degree of freedom. How are things different now with Spot, and will non-experts in the commercial space really be able to get the robot to do useful tasks?

A lot of the things, like “pick the stuff up in the room,” or ‘turn that switch,” can all be done by a lightly trained operator using just the tablet interface. If you want to actually command all of Spot’s arm degrees of freedom, you can do that— not through the tablet, but the API does expose all of it. That’s actually a notable difference from the base robot; we’ve never opened up the part of the API that lets you command individual leg degrees of freedom, because we don’t think it’s productive for someone to do that. The arm is a little bit different. There are a lot of smart people working on arm motion planning algorithms, and maybe you want to plan your arm trajectory in a super precise way and then do a DRC-style interface where you click to approve it. You can do all that through the API if you want, but fundamentally, it’s also user friendly. It follows our general API design philosophy of giving you the highest level pieces of the toolbox that will enable you to solve a complex problem that we haven't thought of.

Looking back on it now, it’s really cool to see, after so many years, robots do the stuff that Gill Pratt was excited about kicking off with the DRC. And now it’s just a thing you can buy.

Is Spot’s arm safe?

You should follow the same safety rules that you’d follow when working with Spot normally, and that’s that you shouldn’t get within two meters of the robot when it’s powered on. Spot is not a cobot. You shouldn’t hug it. Fundamentally, the places where the robot is the most valuable are places where people don’t want to be, or shouldn’t be.

We’ve seen how people reacted to earlier videos of Spot using its arm—can you help us set some reasonable expectations for what this means for Spot?

You know, it gets right back to the normal assumptions about our robots that people make that aren’t quite reality. All of this manipulation work we’re doing— the robot’s really acting as a tool. Even if it’s an autonomous behavior, it’s a tool. The robot is digging a hole because it’s got a set of instructions that say “apply this much force over this much distance here, here, and here.”

It’s not digging a hole and planting a tree because it loves trees, as much as I’d love to build a robot that works like that.

Photo: Boston Dynamics

There isn’t too much to say about the dock, except that it’s a requirement for making Spot long-term autonomous. The uncomfortable looking charging contacts that Spot impales itself on also include hardwired network connectivity, which is important because Spot often comes back home with a huge amount of data that all needs to be offloaded and processed. Docking and undocking are autonomous— as soon as the robot sees the fiducial markers on the dock, auto docking is enabled and it takes one click to settle the robot down.

During a brief remote demo, we also learned some other interesting things about Spot’s updated remote interface. It’s very latency tolerant, since you don’t have to drive the robot directly (although you can if you want to). Click a point on the camera view and Spot will move there autonomously while avoiding obstacles, meaning that even if you’re dealing with seconds of lag, the robot will continue making safe progress. This will be especially important if (when?) Spot starts exploring the Moon.

The remote interface also has an option to adjust how close Spot can get to obstacles, or to turn the obstacle avoidance off altogether. The latter functionality is useful if Spot sees something as an obstacle that really isn’t, like a curtain, while the former is useful if the robot is operating in an environment where it needs to give an especially wide berth to objects that could be dangerous to run into. “The robot’s not perfect—robots will never be perfect,” Jackowski reminds us, which is something we really (seriously) appreciate hearing from folks working on powerful, dynamic robots. “No matter how good the robot is, you should always de-risk as much as possible.”

Another part of that de-risking is having the user let Spot know when it’s about to go up or down some stairs by putting it into “Stair Mode” with a toggle switch in the remote interface. Stairs are still a challenge for Spot, and Stair Mode slows the robot down and encourages it to pitch its body more aggressively to get a better view of the stairs. You’re encouraged to use stair mode, and also encouraged to send Spot up and down stairs with its “head” pointing up the stairs both ways, but these are not requirements for stair navigation—if you want to, you can send Spot down stairs head first without putting it in stair mode. Jackowski says that eventually, Spot will detect stairways by itself even when not in stair mode and adjust itself accordingly, but for now, that de-risking is solidly in the hands of the user.

Spot’s sensor payload, which is what we were trying out for the demo, provided a great opportunity for us to hear Spot STOMP STOMP STOMPING all over the place, which was also an opportunity for us to ask Jackowski why they can’t make Spot a little quieter. “It’s advantageous for Spot to step a little bit hard for the same reason it’s advantageous for you to step a little bit hard if you’re walking around blindfolded—that reason is that it really lets you know where the ground is, particularly when you’re not sure what to expect.” He adds, “It’s all in the name of robustness— the robot might be a little louder, but it’s a little more sure of its footing.”

Boston Dynamics isn’t yet ready to disclose the price of an arm-equipped Spot, but if you’re a potential customer, now is the time to contact the Boston Dynamics sales team to ask them about it. As a reminder, the base model of Spot costs US $74,500, with extra sensing or compute adding a substantial premium on top of that.

There will be a livestream launch event taking place at 11am ET today, during which Boston Dynamics’ CEO Robert Playter, VP of Marketing Michael Perry, and other folks from Boston Dynamics will make presentations on this new stuff. It’ll be live at this link, or you can watch it below.

Posted in Human Robots