
#437628 Video Friday: An In-Depth Look at Mesmer ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online]
IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

Bear Robotics, a robotics and artificial intelligence company, and SoftBank Robotics Group, a leading robotics manufacturer and solutions provider, have collaborated to bring a new robot named Servi to the food service and hospitality field.

[ Bear Robotics ]

A literal in-depth look at Engineered Arts’ Mesmer android.

[ Engineered Arts ]

Is your robot running ROS? Is it connected to the Internet? Are you actually in control of it right now? Are you sure?

I appreciate how the researchers admitted to finding two of their own robots as part of the scan, a Baxter and a drone.
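
For the curious, here is a minimal sketch of the kind of check such a scan performs against a single host. The XML-RPC call is part of the standard ROS 1 Master API; the address and caller ID below are placeholders.

```python
import socket
import xmlrpc.client

def check_ros_master(host: str, port: int = 11311, timeout: float = 3.0):
    """Return basic system state if an unsecured ROS 1 master answers, else None."""
    socket.setdefaulttimeout(timeout)
    proxy = xmlrpc.client.ServerProxy(f"http://{host}:{port}")
    try:
        # getSystemState is part of the standard ROS Master API; any caller
        # that can reach the port can enumerate topics, services, and nodes.
        code, msg, (publishers, subscribers, services) = proxy.getSystemState("/exposure_check")
    except (OSError, xmlrpc.client.Error):
        return None  # unreachable, or not a ROS master
    return {"published_topics": len(publishers), "services": len(services)}

if __name__ == "__main__":
    print(check_ros_master("192.0.2.1"))  # 192.0.2.1 is a documentation-only address
```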

[ Brown ]

Smile Robotics describes this as “(possibly) world’s first full-autonomous clear-up-the-table robot.”

We’re not qualified to make a judgment on the world-firstness claim, but personally I hate clearing tables, so this robot has my vote.

Smile Robotics founder and CEO Takashi Ogura, along with chief engineer Mitsutaka Kabasawa and engineer Kazuya Kobayashi, are former Google roboticists. Ogura also worked at SCHAFT. Smile says its robot uses ROS and is controlled by a framework written mainly in Rust, adding: “We are hiring Rustacean Roboticists!”

[ Smile Robotics ]

We’re not entirely sure why, but Panasonic has released plans for an Internet of Things system for hamsters.

We devised a recipe for a “small animal healthcare device” that measures the weight and activity of small animals, along with the temperature and humidity of their habitat, to help manage their health. The device visualizes the animals’ health status and living environment to promote early detection of disease. Imagining this device being used to care for a beloved small animal, we hope that our manufacturing can help overcome the current difficult situation.

[ Panasonic ] via [ RobotStart ]

Researchers at Yale have developed a robotic fabric, a breakthrough that could lead to such innovations as adaptive clothing, self-deploying shelters, or lightweight shape-changing machinery.

The researchers focused on processing functional materials into fiber form so they could be integrated into fabrics while retaining their advantageous properties. For example, they made variable-stiffness fibers out of an epoxy embedded with particles of Field’s metal, an alloy that liquefies at relatively low temperatures. When cool, the particles are solid metal and make the material stiffer; when warm, the particles melt into liquid and make the material softer.
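
To make the stiffness-switching mechanism concrete, here is a toy model of such a fiber. Field’s metal melts at roughly 62 °C; the modulus numbers are invented placeholders, not Yale’s measurements.

```python
# Toy model of a variable-stiffness fiber: a composite whose modulus drops
# sharply once the embedded Field's metal particles melt (~62 degrees C).
# The modulus values are illustrative placeholders, not Yale's measurements.

T_MELT_C = 62.0       # approximate melting point of Field's metal
E_STIFF_MPA = 300.0   # composite modulus with solid particles (made up)
E_SOFT_MPA = 30.0     # composite modulus with molten particles (made up)

def fiber_modulus_mpa(temp_c: float) -> float:
    """Return an illustrative effective modulus for the fiber at temp_c."""
    return E_STIFF_MPA if temp_c < T_MELT_C else E_SOFT_MPA

for t in (25, 60, 65, 80):
    print(f"{t:>3} C -> {fiber_modulus_mpa(t):6.1f} MPa")
```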

[ Yale ]

In collaboration with Armasuisse and SBB, RSL demonstrated the use of a teleoperated Menzi Muck M545 to clean up a rock slide in Central Switzerland. The machine can be operated from a teleoperation platform with visual and motion feedback. The walking excavator features an active chassis that can adapt to uneven terrain.

[ ETHZ RSL ]

An international team of JKU researchers is continuing to develop their vision for robots made out of soft materials. A new article in the journal “Communications Materials” demonstrates how these soft machines respond to weak magnetic fields by moving very quickly. A triangle-shaped robot can roll itself in air at high speed and walk forward when exposed to an alternating in-plane square-wave magnetic field (3.5 mT, 1.5 Hz). The robot is 18 mm in diameter and 80 µm thick. A six-arm robot can grab, transport, and release non-magnetic objects, such as a polyurethane foam cube, controlled by a permanent magnet.
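
The drive signal quoted above is simple enough to reproduce; here is a sketch of that alternating square-wave field command, using only the amplitude and frequency given in the summary.

```python
# Sketch of the actuation waveform described above: an alternating in-plane
# square-wave magnetic field, 3.5 mT amplitude at 1.5 Hz.
import numpy as np

AMPLITUDE_MT = 3.5  # field strength in millitesla (from the paper's summary)
FREQ_HZ = 1.5       # switching frequency

def field_mt(t: np.ndarray) -> np.ndarray:
    """Square wave: +3.5 mT for the first half of each period, -3.5 mT after."""
    phase = (t * FREQ_HZ) % 1.0
    return np.where(phase < 0.5, AMPLITUDE_MT, -AMPLITUDE_MT)

t = np.linspace(0.0, 2.0, 9)  # two seconds, coarse samples
print(np.column_stack((t, field_mt(t))))
```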

Okay but tell me more about that cute sheep.

[ JKU ]

Interbotix has this “research level robotic crawler,” which both looks mean and runs ROS, a dangerous combination.

And here’s how it all came together:

[ Interbotix ]

I guess if you call them “loitering missile systems” rather than “drones that blow things up” people are less likely to get upset?

[ AeroVironment ]

In this video, we show a planner for a master dual-arm robot to manipulate tethered tools with an assistant dual-arm robot’s help. The assistant robot provides assistance to the master robot by manipulating the tool cable and avoiding collisions. The provided assistance allows the master robot to perform tool placements on the robot workspace table to regrasp the tool, which would typically fail since the tool cable tension may change the tool positions. It also allows the master robot to perform tool handovers, which would normally cause entanglements or collisions with the cable and the environment without the assistance.

[ Harada Lab ]

This video shows a flexible and robust robotic system for autonomous drawing on 3D surfaces. The system takes 2D drawing strokes and a 3D target surface (mesh or point clouds) as input. It maps the 2D strokes onto the 3D surface and generates a robot motion to draw the mapped strokes using visual recognition, grasp pose reasoning, and motion planning.
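
As a rough illustration of the stroke-mapping step (not the authors’ actual pipeline, which handles meshes and point clouds), here is a toy version that lifts 2D strokes onto a surface given as a height field.

```python
# Toy version of the stroke-mapping step: lift 2D drawing strokes onto a
# 3D surface. Real meshes and point clouds need projection and normals;
# a height field z = f(x, y) keeps the idea visible in a few lines.
import numpy as np

def surface_z(x: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Placeholder target surface: a gentle dome."""
    return 0.2 * np.exp(-(x**2 + y**2))

def map_stroke(stroke_2d: np.ndarray) -> np.ndarray:
    """stroke_2d: (N, 2) points in the drawing plane -> (N, 3) surface points."""
    x, y = stroke_2d[:, 0], stroke_2d[:, 1]
    return np.column_stack((x, y, surface_z(x, y)))

stroke = np.column_stack((np.linspace(-1, 1, 5), np.zeros(5)))  # straight line
print(map_stroke(stroke))  # the line now follows the surface height
```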

[ Harada Lab ]

Weekly mobility test. This time the Warthog takes on a fallen tree. Will it cross it? The answer is in the video!

And the answer is: kinda?

[ NORLAB ]

One of the advantages of walking machines is their ability to apply forces of various magnitudes to the environment, in all directions. Many multi-legged robots are equipped with point-contact feet, as these simplify the design and control of the robot. The iStruct project focuses on the development of a foot that allows extensive contact with the environment.

[ DFKI ]

An urgent medical transport was simulated in NASA’s second Systems Integration and Operationalization (SIO) demonstration Sept. 28 with partner Bell Textron Inc. Bell used the remotely piloted APT 70 to conduct a flight representing an urgent medical transport mission. It is envisioned that, in the future, an operational APT 70 could provide rapid medical transport for blood, organs, and perishable medical supplies (payload up to 70 pounds). The APT 70 is estimated to move three times as fast as ground transportation.

Always a little suspicious when the video just shows the drone flying, and sitting on the ground, but not that tricky transition between those two states.

[ NASA ]

A Lockheed Martin Robotics Seminar on “Socially Assistive Mobile Robots,” by Yi Guo from Stevens Institute of Technology.

The use of autonomous mobile robots in human environments is on the rise. Assistive robots have been seen in real-world environments, such as robot guides in airports, robot police in public parks, and patrolling robots in supermarkets. In this talk, I will first present current research activities conducted in the Robotics and Automation Laboratory at Stevens. I’ll then focus on robot-assisted pedestrian regulation, where pedestrian flows are regulated and optimized through passive human-robot interaction.

[ UMD ]

This week’s CMU RI Seminar is by CMU’s Zachary Manchester, on “The World’s Tiniest Space Program.”

The aerospace industry has experienced a dramatic shift over the last decade: Flying a spacecraft has gone from something only national governments and large defense contractors could afford to something a small startup can accomplish on a shoestring budget. A virtuous cycle has developed where lower costs have led to more launches and the growth of new markets for space-based data. However, many barriers remain. This talk will focus on driving these trends to their ultimate limit by harnessing advances in electronics, planning, and control to build spacecraft that cost less than a new smartphone and can be deployed in large numbers.

[ CMU RI ]


#437620 The Trillion-Transistor Chip That Just ...

The history of computer chips is a thrilling tale of extreme miniaturization.

The smaller, the better is a trend that’s given birth to the digital world as we know it. So, why on earth would you want to reverse course and make chips a lot bigger? Well, while there’s no particularly good reason to have a chip the size of an iPad in an iPad, such a chip may prove to be genius for more specific uses, like artificial intelligence or simulations of the physical world.

At least, that’s what Cerebras, the maker of the biggest computer chip in the world, is hoping.

The Cerebras Wafer-Scale Engine is massive any way you slice it. The chip is 8.5 inches to a side and houses 1.2 trillion transistors. The next biggest chip, NVIDIA’s A100 GPU, measures an inch to a side and has a mere 54 billion transistors. The former is new, largely untested and, so far, one-of-a-kind. The latter is well-loved, mass-produced, and has taken over the world of AI and supercomputing in the last decade.
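
Those numbers are easy to sanity-check with a little arithmetic, treating each chip as a square of the quoted side length:

```python
# Back-of-the-envelope check on the chip comparison above.
WSE_TRANSISTORS, WSE_SIDE_IN = 1.2e12, 8.5
A100_TRANSISTORS, A100_SIDE_IN = 54e9, 1.0

wse_density = WSE_TRANSISTORS / WSE_SIDE_IN**2     # ~1.7e10 per square inch
a100_density = A100_TRANSISTORS / A100_SIDE_IN**2  # ~5.4e10 per square inch

print(f"WSE:  {wse_density:.2e} transistors/in^2")
print(f"A100: {a100_density:.2e} transistors/in^2")
# Note the A100 is actually denser (it uses a newer process node);
# the WSE's edge is sheer area, roughly 72 in^2 versus 1 in^2.
```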

So can Goliath flip the script on David? Cerebras is on a mission to find out.

Big Chips Beyond AI
When Cerebras first came out of stealth last year, the company said it could significantly speed up the training of deep learning models.

Since then, the WSE has made its way into a handful of supercomputing labs, where the company’s customers are putting it through its paces. One of those labs, the National Energy Technology Laboratory, is looking to see what it can do beyond AI.

So, in a recent trial, researchers pitted the chip—housed in an all-in-one system called the CS-1, about the size of a dorm-room mini-fridge—against a supercomputer in a fluid dynamics simulation. Simulating the movement of fluids is a common supercomputer application useful for solving complex problems like weather forecasting and airplane wing design.

The trial was described in a preprint paper written by a team led by Cerebras’s Michael James and NETL’s Dirk Van Essendelft and presented at the supercomputing conference SC20 this week. The team said the CS-1 completed a simulation of combustion in a power plant roughly 200 times faster than it took the Joule 2.0 supercomputer to do a similar task.

The CS-1 was actually faster than real time. As Cerebras wrote in a blog post, “It can tell you what is going to happen in the future faster than the laws of physics produce the same result.”

The researchers said the CS-1’s performance couldn’t be matched by any number of CPUs and GPUs. And CEO and cofounder Andrew Feldman told VentureBeat that would be true “no matter how large the supercomputer is.” Past a certain point, scaling a supercomputer like Joule no longer produces better results in this kind of problem. That’s why Joule’s simulation speed peaked at 16,384 cores, a fraction of its total 86,400 cores.
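
A toy strong-scaling model shows why such a peak appears: the parallel part of the work shrinks with core count while communication overhead grows. The constants below are arbitrary, chosen only so the minimum lands near 16,384 cores; the shape of the curve is the point.

```python
# Toy strong-scaling model: per-core compute shrinks as 1/n while
# inter-chip communication overhead grows with n. Not Joule's real
# performance model; constants are picked to put the minimum near 16,384.
def sim_time(n_cores: int, work: float = 1e6, comm: float = 1.0) -> float:
    compute = work / n_cores         # perfectly parallel portion
    exchange = comm * n_cores**0.5   # growing communication cost (toy)
    return compute + exchange

for n in (1024, 4096, 16384, 65536):
    print(f"{n:>6} cores -> {sim_time(n):7.1f} time units")
# Output falls, bottoms out around 16,384 cores, then rises again.
```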

A comparison of the two machines drives the point home. Joule is the 81st fastest supercomputer in the world, takes up dozens of server racks, consumes up to 450 kilowatts of power, and required tens of millions of dollars to build. The CS-1, by comparison, fits in a third of a server rack, consumes 20 kilowatts of power, and sells for a few million dollars.

While the task is niche (but useful) and the problem well-suited to the CS-1, it’s still a pretty stunning result. So how’d they pull it off? It’s all in the design.

Cut the Commute
Computer chips begin life on a big piece of silicon called a wafer. Multiple chips are etched onto the same wafer and then the wafer is cut into individual chips. While the WSE is also etched onto a silicon wafer, the wafer is left intact as a single, operating unit. This wafer-scale chip contains almost 400,000 processing cores. Each core is connected to its own dedicated memory and its four neighboring cores.
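
That neighbor-to-neighbor layout is just a 2D mesh. Here is a sketch of the topology; the grid dimensions are illustrative stand-ins, not Cerebras’s actual layout.

```python
# Sketch of the on-wafer topology described above: cores in a 2D grid,
# each talking only to its four immediate neighbors. Grid size is
# illustrative (660 * 600 = 396,000, i.e. "almost 400,000" cores).
GRID_W, GRID_H = 660, 600

def neighbors(x: int, y: int):
    """Yield the up-to-four mesh neighbors of core (x, y)."""
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < GRID_W and 0 <= ny < GRID_H:
            yield nx, ny

print(list(neighbors(0, 0)))      # corner core: two neighbors
print(list(neighbors(330, 300)))  # interior core: four neighbors
```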

Putting that many cores on a single chip and giving them their own memory is why the WSE is bigger; it’s also why, in this case, it’s better.

Most large-scale computing tasks depend on massively parallel processing. Researchers distribute the task among hundreds or thousands of chips. The chips need to work in concert, so they’re in constant communication, shuttling information back and forth. A similar process takes place within each chip, as information moves between processor cores, which are doing the calculations, and shared memory to store the results.

It’s a little like an old-timey company that does all its business on paper.

The company uses couriers to send and collect documents from other branches and archives across town. The couriers know the best routes through the city, but the trips take some minimum amount of time determined by the distance between the branches and archives, the courier’s top speed, and how many other couriers are on the road. In short, distance and traffic slow things down.

Now, imagine the company builds a brand new gleaming skyscraper. Every branch is moved into the new building and every worker gets a small filing cabinet in their office to store documents. Now any document they need can be stored and retrieved in the time it takes to step across the office or down the hall to their neighbor’s office. The information commute has all but disappeared. Everything’s in the same house.

Cerebras’s megachip is a bit like that skyscraper. The way it shuttles information—aided further by its specially tailored compiling software—is far more efficient compared to a traditional supercomputer that needs to network a ton of traditional chips.

Simulating the World as It Unfolds
It’s worth noting the chip can only handle problems small enough to fit on the wafer. But such problems may have quite practical applications because of the machine’s ability to do high-fidelity simulation in real time. The authors note, for example, the machine should in theory be able to accurately simulate the air flow around a helicopter trying to land on a flight deck and semi-automate the process—something not possible with traditional chips.

Another opportunity, they note, would be to use a simulation as input to train a neural network also residing on the chip. In an intriguing and related example, a Caltech machine learning technique recently proved to be 1,000 times faster at solving the same kind of partial differential equations at play here to simulate fluid dynamics.

They also note that improvements in the chip (and others like it, should they arrive) will push back the limits of what can be accomplished. Already, Cerebras has teased the release of its next-generation chip, which will have 2.6 trillion transistors, 850,000 cores, and more than double the memory.

Of course, it still remains to be seen whether wafer-scale computing really takes off. The idea has been around for decades, but Cerebras is the first to pursue it seriously. Clearly, they believe they’ve solved the problem in a way that’s useful and economical.

Other new architectures are also being pursued in the lab. Memristor-based neuromorphic chips, for example, mimic the brain by putting processing and memory into individual transistor-like components. And of course, quantum computers are in a separate lane, but tackle similar problems.

It could be that one of these technologies eventually rises to rule them all. Or, and this seems just as likely, computing may splinter into a bizarre quilt of radical chips, all stitched together to make the most of each depending on the situation.

Image credit: Cerebras


#437610 How Intel’s OpenBot Wants to Make ...

You could make a pretty persuasive argument that the smartphone represents the single fastest area of technological progress we’re going to experience for the foreseeable future. Every six months or so, there’s something with better sensors, more computing power, and faster connectivity. Many different areas of robotics are benefiting from this on a component level, but over at Intel Labs, they’re taking a more direct approach with a project called OpenBot that turns US $50 worth of hardware and your phone into a mobile robot that can support “advanced robotics workloads such as person following and real-time autonomous navigation in unstructured environments.”

This work aims to address two key challenges in robotics: accessibility and scalability. Smartphones are ubiquitous and are becoming more powerful by the year. We have developed a combination of hardware and software that turns smartphones into robots. The resulting robots are inexpensive but capable. Our experiments have shown that a $50 robot body powered by a smartphone is capable of person following and real-time autonomous navigation. We hope that the presented work will open new opportunities for education and large-scale learning via thousands of low-cost robots deployed around the world.

Smartphones point to many possibilities for robotics that we have not yet exploited. For example, smartphones also provide a microphone, speaker, and screen, which are not commonly found on existing navigation robots. These may enable research and applications at the confluence of human-robot interaction and natural language processing. We also expect the basic ideas presented in this work to extend to other forms of robot embodiment, such as manipulators, aerial vehicles, and watercraft.

One of the interesting things about this idea is how not-new it is. The highest profile phone robot was likely the $150 Romo, from Romotive, which raised a not-insignificant amount of money on Kickstarter in 2012 and 2013 for a little mobile chassis that accepted one of three different iPhone models and could be controlled via another device or operated somewhat autonomously. It featured “computer vision, autonomous navigation, and facial recognition” capabilities, but was really designed to be a toy. Lack of compatibility hampered Romo a bit, and there wasn’t a lot that it could actually do once the novelty wore off.

As impressive as smartphone hardware was in a robotics context (even back in 2013), we’re obviously way, way beyond that now, and OpenBot figures that smartphones now have enough clout and connectivity that turning them into mobile robots is a good idea. You know, again. We asked Intel Labs’ Matthias Mueller why now was the right time to launch OpenBot, and he mentioned things like the existence of a large maker community with broad access to 3D printing as well as open source software that makes broader development easier.

And of course, there’s the smartphone hardware: “Smartphones have become extremely powerful and feature dedicated AI processors in addition to CPUs and GPUs,” says Mueller. “Almost everyone owns a very capable smartphone now. There has been a big boost in sensor performance, especially in cameras, and a lot of the recent developments for VR applications are well aligned with robotic requirements for state estimation.” OpenBot has been tested with 10 recent Android phones, and since camera placement tends to be similar and USB-C is becoming the charging and communications standard, compatibility is less of an issue nowadays.
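
For a sense of what such a phone-to-body link looks like, here is a generic serial-control sketch in Python using pyserial. The command format and port name are invented for illustration; they are not OpenBot’s actual protocol.

```python
# Generic sketch of a phone/computer-to-robot-body serial link of the kind
# OpenBot uses over USB. The "L<left>,R<right>\n" command format and the
# port name are made up for illustration; NOT OpenBot's real protocol.
import serial  # pyserial

def send_drive_command(port: serial.Serial, left: int, right: int) -> None:
    """left/right: PWM duty values in [-255, 255] for a two-wheel base."""
    left = max(-255, min(255, left))
    right = max(-255, min(255, right))
    port.write(f"L{left},R{right}\n".encode("ascii"))

if __name__ == "__main__":
    with serial.Serial("/dev/ttyUSB0", baudrate=115200, timeout=1) as port:
        send_drive_command(port, 180, 180)   # forward
        send_drive_command(port, 120, -120)  # spin in place
```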

Image: OpenBot

Intel researchers created this table comparing OpenBot to other wheeled robot platforms, including Amazon’s DeepRacer, MIT’s Duckiebot, iRobot’s Create-2, and Thymio. The top group includes robots based on RC trucks; the bottom group includes navigation robots for deployment at scale and in education. Note that the cost of the smartphone needed for OpenBot is not included in this comparison.

If you’d like an OpenBot of your own, you don’t need to know all that much about robotics hardware or software. For the hardware, you probably need some basic mechanical and electronics experience—think Arduino project level. The software is a little more complicated; there’s a pretty good walkthrough to get some relatively sophisticated behaviors (like autonomous person following) up and running, but things rapidly degenerate into a command line interface that could be intimidating for new users. We did ask about why OpenBot isn’t ROS-based to leverage the robustness and reach of that community, and Mueller said that ROS “adds unnecessary overhead,” although “if someone insists on using ROS with OpenBot, it should not be very difficult.”

Without building OpenBot to explicitly be part of an existing ecosystem, the challenge going forward is to make sure that the project is consistently supported, lest it wither and die like so many similar robotics projects have before it. “We are committed to the OpenBot project and will do our best to maintain it,” Mueller assures us. “We have a good track record. Other projects from our group (e.g. CARLA, Open3D, etc.) have also been maintained for several years now.” The inherently open source nature of the project certainly helps, although it can be tricky to rely too much on community contributions, especially when something like this is first starting out.

The OpenBot folks at Intel, we’re told, are already working on a “bigger, faster and more powerful robot body that will be suitable for mass production,” which would certainly help entice more people into giving this thing a go. They’ll also be focusing on documentation, which is probably the most important but least exciting part about building a low-cost community focused platform like this. And as soon as they’ve put together a way for us actual novices to turn our phones into robots that can do cool stuff for cheap, we’ll definitely let you know.


#437598 Video Friday: Sarcos Is Developing a New ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today's videos.

NASA’s Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) spacecraft unfurled its robotic arm Oct. 20, 2020, and in a first for the agency, briefly touched an asteroid to collect dust and pebbles from the surface for delivery to Earth in 2023.

[ NASA ]

New from David Zarrouk’s lab at BGU is AmphiSTAR, which Zarrouk describes as “a kind of a ground-water drone inspired by the cockroaches (sprawling) and by the Basilisk lizard (running over water). The robot hovers due to the collision of its propellers with the water (hydrodynamics not aerodynamics). The robot can crawl and swim at high and low speeds and smoothly transition between the two. It can reach 3.5 m/s on ground and 1.5 m/s in water.”

AmphiSTAR will be presented at IROS, starting next week!

[ BGU ]

This is unfortunately not a great video of a video that was taken at a SoftBank Hawks baseball game in Japan last week, but it’s showing an Atlas robot doing an honestly kind of impressive dance routine to support the team.

The humanoid robot ATLAS makes an emergency remote appearance from America to join the robot cheering squad!!!
Enjoy the footage on Hawks Vision ♪ #sbhawks #Pepper #spot pic.twitter.com/6aTYn8GGli
— Fukuoka SoftBank Hawks (official) (@HAWKS_official)
October 16, 2020

Editor’s Note: The tweet embed above is not working for some reason—see the video here.

[ SoftBank Hawks ]

Thanks Thomas!

Sarcos is working on a new robot, which looks to be the torso of their powered exoskeleton with the human relocated somewhere else.

[ Sarcos ]

The biggest holiday of the year, International Sloth Day, was on Tuesday! To celebrate, here’s Slothbot!

[ NSF ]

This is one of those simple-seeming tasks that are really difficult for robots.

I love self-resetting training environments.

[ MIT CSAIL ]

The Chiel lab collaborates with engineers at the Center for Biologically Inspired Robotics Research at Case Western Reserve University to design novel worm-like robots that have potential applications in search-and-rescue missions, endoscopic medicine, or other scenarios requiring navigation through narrow spaces.

[ Case Western ]

ANYbotics partnered with Losinger Marazzi to explore ANYmal’s potential for patrolling construction sites to identify and report safety issues. In such a complex environment, only a robot designed to navigate difficult terrain can bring digitalization to this physically demanding industry.

[ ANYbotics ]

Happy 2018 Halloween from Clearpath Robotics!

[ Clearpath ]

Overcoming illumination variance is a critical factor in vision-based navigation. Existing methods tackled this radical illumination variance issue by proposing camera control or high dynamic range (HDR) image fusion. Despite these efforts, we have found that the vision-based approaches still suffer from overcoming darkness. This paper presents real-time image synthesizing from carefully controlled seed low dynamic range (LDR) image, to enable visual simultaneous localization and mapping (SLAM) in an extremely dark environment (less than 10 lux).
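
As a flavor of the general idea (not KAIST’s actual method), here is a minimal sketch of synthesizing a brighter frame from a single dark seed image with gamma correction, the kind of preprocessing that can keep feature tracking alive in the dark.

```python
# Flavor of the idea above, NOT the paper's method: synthesize a brighter
# frame from a single dark "seed" image via gamma correction.
import numpy as np

def brighten(img_u8: np.ndarray, gamma: float = 0.4) -> np.ndarray:
    """img_u8: HxW[xC] uint8 image; gamma < 1 lifts dark regions."""
    norm = img_u8.astype(np.float32) / 255.0
    return (np.power(norm, gamma) * 255.0).astype(np.uint8)

dark = (np.random.rand(4, 4) * 20).astype(np.uint8)  # stand-in dark frame
print(dark)
print(brighten(dark))  # same frame with shadows lifted
```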

[ KAIST ]

What can MoveIt do? Who knows! Let's find out!

[ MoveIt ]

Thanks Dave!

Here we pick a cube from a starting point, manipulate it within the hand, and then put it back. To explore the capabilities of the hand, no sensors were used in this demonstration. The RBO Hand 3 uses soft pneumatic actuators made of silicone. The softness imparts considerable robustness against variations in object pose and size. This lets us design manipulation funnels that work reliably without needing sensor feedback. We take advantage of this reliability to chain these funnels into more complex multi-step manipulation plans.

[ TU Berlin ]

If this was a real solar array, King Louie would have totally cleaned it. Mostly.

[ BYU ]

Autonomous exploration is a fundamental problem for various applications of unmanned aerial vehicles (UAVs). Existing methods, however, were demonstrated to have low efficiency, due to the lack of optimality consideration, conservative motion plans, and low decision frequencies. In this paper, we propose FUEL, a hierarchical framework that can support Fast UAV ExpLoration in complex unknown environments.

[ HKUST ]

Countless precise repetitions? This is the perfect task for a robot, thought researchers in the University of Liverpool’s Department of Chemistry, and without further ado they developed an automation solution that can carry out and monitor research tasks, making autonomous decisions about what to do next.

[ Kuka ]

This video shows a demonstration of central results of the SecondHands project. In the context of maintenance and repair tasks in warehouse environments, the collaborative humanoid robot ARMAR-6 demonstrates a number of cognitive and sensorimotor abilities, such as 1) recognition of the need for help based on speech, force, haptics, and visual scene and action interpretation, 2) collaborative bimanual manipulation of large objects, 3) compliant mobile manipulation, 4) grasping known and unknown objects and tools, 5) human-robot interaction (object and tool handover), 6) natural dialog, and 7) force predictive control.

[ SecondHands ]

In celebration of Ada Lovelace Day, Silicon Valley Robotics hosted a panel of Women in Robotics.

[ Robohub ]

As part of the upcoming virtual IROS conference, HEBI Robotics is putting together a tutorial on robotics actuation. While I’m sure HEBI would like you to take a long look at their own actuators, we’ve been assured that no matter what kind of actuators you use, this tutorial will still be informative and useful.

[ YouTube ] via [ HEBI Robotics ]

Thanks Dave!

This week’s UMD Lockheed Martin Robotics Seminar comes from Julie Shah at MIT, on “Enhancing Human Capability with Intelligent Machine Teammates.”

Every team has top performers: people who excel at working in a team to find the right solutions in complex, difficult situations. These top performers include nurses who run hospital floors, emergency response teams, air traffic controllers, and factory line supervisors. While they may outperform the most sophisticated optimization and scheduling algorithms, they cannot often tell us how they do it. Similarly, even when a machine can do the job better than most of us, it can’t explain how. In this talk I share recent work investigating effective ways to blend the unique decision-making strengths of humans and machines. I discuss the development of computational models that enable machines to efficiently infer the mental state of human teammates and thereby collaborate with people in richer, more flexible ways.

[ UMD ]

Matthew Piccoli gives a talk to the UPenn GRASP Lab on “Trading Complexities: Smart Motors and Dumb Vehicles.”

We will discuss my research journey through Penn making the world's smallest, simplest flying vehicles, and in parallel making the most complex brushless motors. What do they have in common? We'll touch on why the quadrotor went from an obscure type of helicopter to the current ubiquitous drone. Finally, we'll get into my life after Penn and what tools I'm creating to further drone and robot designs of the future.

[ UPenn ]


#437592 Coordinated Robotics Wins DARPA SubT ...

DARPA held the Virtual Cave Circuit event of the Subterranean Challenge on Tuesday in the form of a several hour-long livestream. We got to watch (along with all of the competing teams) as virtual robots explored virtual caves fully autonomously, dodging rockfalls, spotting artifacts, scoring points, and sometimes running into stuff and falling over.

Expert commentary was provided by DARPA, and we were able to watch multiple teams running at once, skipping from highlight to highlight. It was really very well done (you can watch an archive of the entire stream here), but they made us wait until the very end to learn who won: First place went to Coordinated Robotics, with BARCS taking second, and third place going to newcomer Team Dynamo.

Huge congratulations to Coordinated Robotics! It’s worth pointing out that the top three teams were separated by an incredibly small handful of points, and on a slightly different day, with slightly different artifact positions, any of them could have come out on top. This doesn’t diminish Coordinated Robotics’ victory in the least—it means that the competition was fierce, and that the problem of autonomous cave exploration with robots has been solved (virtually, at least) in several different but effective ways.

We know Coordinated Robotics pretty well at this point, but here’s an introduction video:

You heard that right—Coordinated Robotics is just Kevin Knoedler, all by himself. This would be astonishing, if we weren’t already familiar with Kevin’s abilities: He won NASA’s virtual Space Robotics Challenge by himself in 2017, and Coordinated Robotics placed first in the DARPA SubT Virtual Tunnel Circuit and second in the Virtual Urban Circuit. We asked Kevin how he managed to do so spectacularly well (again), and here’s what he told us:

IEEE Spectrum: Can you describe what it was like to watch your team of robots on the live stream, and to see them score the most points?

Kevin Knoedler: It was exciting and stressful watching the live stream. It was exciting as the top few scores were quite close for the cave circuit. It was stressful because I started out behind and worked my way up, but did not do well on the final world. Luckily, not doing well on the first and last worlds was offset by better scores on many of the runs in between. DARPA did a very nice job with their live stream of the cave circuit results.

How did you decide on the makeup of your team, and on what sensors to use?

To decide on the makeup of the team I experimented with quite a few different vehicles. I had a lot of trouble with the X2 and other small ground vehicles flipping over. Based on that I looked at the larger ground vehicles that also had a sensor capable of identifying drop-offs. The vehicles that met those criteria for me were the Marble HD2, Marble Husky, Ozbot ATR, and the Absolem. Of those ground vehicles I went with the Marble HD2. It had a downward looking depth camera that I could use to detect drop-offs and was much more stable on the varied terrain than the X2. I had used the X3 aerial vehicle before and so that was my first choice for an aerial platform.

What were some things that you learned in Tunnel and Urban that you were able to incorporate into your strategy for Cave?

In the Tunnel circuit I had learned a strategy to use ground vehicles and in the Urban circuit I had learned a strategy to use aerial vehicles. At a high level that was the biggest thing I learned from the previous circuits that I was able to apply to the Cave circuit. At a lower level I was able to apply many of the development and testing strategies from the previous circuits to the Cave circuit.

What aspect of the cave environment was most challenging for your robots?

I would say it wasn't just one aspect of the cave environment that was challenging for the robots. There were quite a few challenging aspects of the cave environment. For the ground vehicles there were frequently paths that looked good as the robot started on the path, but turned into drop-offs or difficult boulder crawls. While it was fun to see the robot plan well enough to slowly execute paths over the boulders, I was wishing that the robot was smart enough to try a different path rather than wasting so much time crawling over the large boulders. For the aerial vehicles the combination of tight paths along with large vertical spaces was the biggest challenge in the environment. The large open vertical areas were particularly challenging for my aerial robots. They could easily lose track of their position without enough nearby features to track and it was challenging to find the correct path in and out of such large vertical areas.

How will you be preparing for the SubT Final?

To prepare for the SubT Final the vehicles will be getting a lot smarter. The ground vehicles will be better at navigation and communicating with one another. The aerial vehicles will be better able to handle large vertical areas both from a positioning and a planning point of view. Finally, all of the vehicles will do a better job coordinating what areas have been explored and what areas have good leads for further exploration.

Image: DARPA

The final score for the DARPA SubT Cave Circuit virtual competition.

We also had a chance to ask SubT program manager Tim Chung a few questions at yesterday’s post-event press conference, about the course itself and what he thinks teams should have learned from the competition:

IEEE Spectrum: Having looked through some real caves, can you give some examples of some of the most significant differences between this simulation and real caves? And with the enormous variety of caves out there, how generalizable are the solutions that teams came up with?

Tim Chung: Many of the caves that I’ve had to crawl through and gotten bumps and scrapes from had a couple of different features that I’ll highlight. The first is the variations in moisture—a lot of these caves were naturally formed with streams and such, so many of the caves we went to had significant mud, flowing water, and such. And so one of the things we’re not capturing in the SubT simulator is explicitly anything that would submerge the robots, or otherwise short any of their systems. So from that perspective, that’s one difference that’s certainly notable.

And then the other difference I think is the granularity of the terrain, whether it’s rubble, sand, or just raw dirt, friction coefficients are all across the board, and I think that’s one of the things that any terrestrial simulator will both struggle with and potentially benefit from—that is, terramechanics simulation abilities. Given the emphasis on mobility in the SubT simulation, we’re capturing just a sliver of the complexity of terramechanics, but I think that’s probably another take away that you’ll certainly see—where there’s that distinction between physical and virtual technologies.

To answer your second question about generalizability—that’s the multi-million-dollar question! It’s definitely at the crux of why we have eight diverse worlds, varying in size, verticality, dimensions, constrained passageways, etc. But this is eight out of countless variations, and the goal of course is to be able to investigate what those key dependencies are. What I’ll say is that out of the seventy-three different virtual cave tiles, which are the building blocks that make up these virtual worlds, quite a number of them were not only inspired by real-world caves, but were specifically designed so that we can essentially use these tiles as unit tests going forward. So, if I want to simulate vertical inclines, here are the tiles that are the vertical unit tests for robots, and that’s how we’re trying to think through how to tease out that generalizability factor.

What are some observations from this event that you think systems track teams should pay attention to as they prepare for the final event?

One of the key things about the virtual competition is that you submit your software, and that's it. So you have to design everything from state management to failure mode triage, really thinking about what could go wrong and then building out your autonomous capabilities either to react to some of those conditions, or to anticipate them. And to be honest I think that the humans in the loop that we have in the systems competition really are key enablers of their capability, but also could someday (if not already) be a crutch that we might not be able to develop.

Thinking through some of the failure modes in a fully autonomous, software-deployed setting is going to be incredibly valuable for the systems competitors, so that for example the human supervisor doesn’t have to worry about those failure modes as much, or can respond in a more supervisory way rather than trying to joystick the robot around. I think that’s going to be one of the greatest impacts, thinking through what it means to send these robots off to autonomously get you the information you need and complete the mission.

This isn’t to say that the humans aren't going to be useful and continue to play a role of course, but I think this shifting of the role of the human supervisor from being a state manager to being more of a tactical commander will dramatically highlight the impact of the virtual side on the systems side.

What, if anything, should we take away from one person teams being able to do so consistently well in the virtual circuit?

It’s a really interesting question. I think part of it has to do with systems integration versus software integration. There's something to be said for the richness of the technologies that can be developed, and how many people it requires to be able to develop some of those technologies. With the systems competitors, having one person try to build, manage, deploy, service, and operate all of those robots is still functionally quite challenging, whereas in the virtual competition, it really is a software deployment more than anything else. And so I think the commonality of single person teams may just be a virtue of the virtual competition not having some of those person-intensive requirements.

In terms of their strong performance, I give credit to all of these really talented folks who are taking it upon themselves to jump into the competitor pool and see how well they do, and I think that just goes to show you that whether you’re one person or ten people or a hundred people on a team, a good idea translated and executed well really goes a long way.

Looking ahead, teams have a year to prepare for the final event, which is still scheduled to be held sometime in fall 2021. And even though there was no cave event for systems track teams, the fact that the final event will be a combination of tunnel, urban, and cave circuits means that systems track teams have been figuring out how to get their robots to work in caves anyway, and we’ll be bringing you some of their stories over the next few weeks.

[ DARPA SubT ]
