Tag Archives: Team

#437620 The Trillion-Transistor Chip That Just ...

The history of computer chips is a thrilling tale of extreme miniaturization.

The smaller, the better is a trend that’s given birth to the digital world as we know it. So, why on earth would you want to reverse course and make chips a lot bigger? Well, while there’s no particularly good reason to have a chip the size of an iPad in an iPad, such a chip may prove to be genius for more specific uses, like artificial intelligence or simulations of the physical world.

At least, that’s what Cerebras, the maker of the biggest computer chip in the world, is hoping.

The Cerebras Wafer-Scale Engine is massive any way you slice it. The chip is 8.5 inches to a side and houses 1.2 trillion transistors. The next biggest chip, NVIDIA’s A100 GPU, measures an inch to a side and has a mere 54 billion transistors. The former is new, largely untested and, so far, one-of-a-kind. The latter is well-loved, mass-produced, and has taken over the world of AI and supercomputing in the last decade.

So can Goliath flip the script on David? Cerebras is on a mission to find out.

Big Chips Beyond AI
When Cerebras first came out of stealth last year, the company said its chip could significantly speed up the training of deep learning models.

Since then, the WSE has made its way into a handful of supercomputing labs, where the company’s customers are putting it through its paces. One of those labs, the National Energy Technology Laboratory, is looking to see what it can do beyond AI.

So, in a recent trial, researchers pitted the chip—which is housed in an all-in-one system about the size of a dorm room mini-fridge called the CS-1—against a supercomputer in a fluid dynamics simulation. Simulating the movement of fluids is a common supercomputer application useful for solving complex problems like weather forecasting and airplane wing design.

The trial was described in a preprint paper written by a team led by Cerebras’s Michael James and NETL’s Dirk Van Essendelft and presented at the supercomputing conference SC20 this week. The team said the CS-1 completed a simulation of combustion in a power plant roughly 200 times faster than it took the Joule 2.0 supercomputer to do a similar task.

The CS-1 was actually faster than real time. As Cerebras wrote in a blog post, “It can tell you what is going to happen in the future faster than the laws of physics produce the same result.”

The researchers said the CS-1’s performance couldn’t be matched by any number of CPUs and GPUs. And CEO and cofounder Andrew Feldman told VentureBeat that would be true “no matter how large the supercomputer is.” Past a certain point, scaling a supercomputer like Joule no longer produces better results on this kind of problem. That’s why Joule’s simulation speed peaked at 16,384 cores, a fraction of its total 86,400 cores.

A comparison of the two machines drives the point home. Joule is the 81st fastest supercomputer in the world, takes up dozens of server racks, consumes up to 450 kilowatts of power, and required tens of millions of dollars to build. The CS-1, by comparison, fits in a third of a server rack, consumes 20 kilowatts of power, and sells for a few million dollars.

While the task is niche (but useful) and the problem well-suited to the CS-1, it’s still a pretty stunning result. So how’d they pull it off? It’s all in the design.

Cut the Commute
Computer chips begin life on a big piece of silicon called a wafer. Multiple chips are etched onto the same wafer and then the wafer is cut into individual chips. While the WSE is also etched onto a silicon wafer, the wafer is left intact as a single, operating unit. This wafer-scale chip contains almost 400,000 processing cores. Each core is connected to its own dedicated memory and its four neighboring cores.

Putting that many cores on a single chip and giving them their own memory is why the WSE is bigger; it’s also why, in this case, it’s better.

Most large-scale computing tasks depend on massively parallel processing. Researchers distribute the task among hundreds or thousands of chips. The chips need to work in concert, so they’re in constant communication, shuttling information back and forth. A similar process takes place within each chip, as information moves between processor cores, which are doing the calculations, and shared memory to store the results.

It’s a little like an old-timey company that does all its business on paper.

The company uses couriers to send and collect documents from other branches and archives across town. The couriers know the best routes through the city, but the trips take some minimum amount of time determined by the distance between the branches and archives, the courier’s top speed, and how many other couriers are on the road. In short, distance and traffic slow things down.

Now, imagine the company builds a brand new gleaming skyscraper. Every branch is moved into the new building and every worker gets a small filing cabinet in their office to store documents. Now any document they need can be stored and retrieved in the time it takes to step across the office or down the hall to their neighbor’s office. The information commute has all but disappeared. Everything’s in the same house.

Cerebras’s megachip is a bit like that skyscraper. The way it shuttles information—aided further by its specially tailored compiling software—is far more efficient than in a traditional supercomputer that needs to network a ton of individual chips.
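To make the data-movement argument concrete, here’s a rough sketch of the kind of stencil computation at the heart of many fluid simulations, written in ordinary Python rather than Cerebras’s actual programming model: every grid cell is updated again and again from just its four neighbors, which is exactly the neighbor-to-neighbor traffic the WSE keeps on-chip.

```python
# A toy 2D diffusion (stencil) update, the access pattern underlying many
# fluid-dynamics codes. Each cell reads only its four neighbors every step,
# so a mesh of cores with local memory (like the WSE's grid of cores) keeps
# this traffic between adjacent cores instead of sending it across a cluster.
# Plain NumPy illustration only, not Cerebras's programming model.
import numpy as np

def diffuse(field, alpha=0.1, steps=100):
    """Explicit finite-difference diffusion on a 2D grid."""
    f = field.copy()
    for _ in range(steps):
        up    = np.roll(f,  1, axis=0)
        down  = np.roll(f, -1, axis=0)
        left  = np.roll(f,  1, axis=1)
        right = np.roll(f, -1, axis=1)
        # Each new value depends only on a cell and its four neighbors.
        f = f + alpha * (up + down + left + right - 4.0 * f)
    return f

grid = np.zeros((64, 64))
grid[32, 32] = 100.0          # a hot spot that diffuses outward
result = diffuse(grid)
print(result[30:35, 30:35].round(2))
```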

Simulating the World as It Unfolds
It’s worth noting the chip can only handle problems small enough to fit on the wafer. But such problems may have quite practical applications because of the machine’s ability to do high-fidelity simulation in real-time. The authors note, for example, the machine should in theory be able to accurately simulate the air flow around a helicopter trying to land on a flight deck and semi-automate the process—something not possible with traditional chips.

Another opportunity, they note, would be to use a simulation as input to train a neural network also residing on the chip. In an intriguing and related example, a Caltech machine learning technique recently proved to be 1,000 times faster at solving the same kind of partial differential equations used here to simulate fluid dynamics.
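As a sketch of that simulation-to-network idea (and only a sketch: the toy “solver” and tiny network below are stand-ins, not the Caltech method or anything running on the CS-1), a simulation can generate input-output pairs that a small network then learns to reproduce as a fast surrogate:

```python
# A minimal surrogate-training sketch: a toy "solver" produces (parameter,
# solution) pairs, and a tiny neural network is fit to imitate it. The solver
# below is an analytic placeholder, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)

def toy_solver(k):
    """Stand-in for an expensive simulation: a 32-point 'solution field'."""
    x = np.linspace(0.0, 1.0, 32)
    return np.exp(-k * x)

# Generate training data from the "simulation".
ks = rng.uniform(0.5, 5.0, size=200)
X = ks.reshape(-1, 1)                        # inputs: one parameter per sample
Y = np.stack([toy_solver(k) for k in ks])    # outputs: solution fields

# One-hidden-layer network trained with plain gradient descent.
H = 16
W1 = rng.normal(0, 0.5, (1, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 32)); b2 = np.zeros(32)
lr = 1e-2

for step in range(5000):
    h = np.tanh(X @ W1 + b1)                 # hidden activations
    pred = h @ W2 + b2                       # predicted solution fields
    err = pred - Y
    grad_pred = 2 * err / len(X)             # gradient of squared-error loss
    gW2 = h.T @ grad_pred; gb2 = grad_pred.sum(0)
    grad_h = grad_pred @ W2.T * (1 - h**2)
    gW1 = X.T @ grad_h; gb1 = grad_h.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

print("mean surrogate error:", np.abs(pred - Y).mean())
```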

They also note that improvements in the chip (and others like it, should they arrive) will push back the limits of what can be accomplished. Already, Cerebras has teased the release of its next-generation chip, which will have 2.6 trillion transistors, 850,000 cores, and more than double the memory.

Of course, it still remains to be seen whether wafer-scale computing really takes off. The idea has been around for decades, but Cerebras is the first to pursue it seriously. Clearly, they believe they’ve solved the problem in a way that’s useful and economical.

Other new architectures are also being pursued in the lab. Memristor-based neuromorphic chips, for example, mimic the brain by putting processing and memory into individual transistor-like components. And of course, quantum computers are in a separate lane, but tackle similar problems.

It could be that one of these technologies eventually rises to rule them all. Or, and this seems just as likely, computing may splinter into a bizarre quilt of radical chips, all stitched together to make the most of each depending on the situation.

Image credit: Cerebras

Posted in Human Robots

#437616 Innovative YUJIN 3D LiDAR, Now Shipping!

Recently Yujin Robot launched a new 3D LiDAR for indoor service robots, AGVs/AMRs, and smart factories. The YRL3 series is a line of precise laser sensors that scan both vertically and horizontally to detect surrounding environments and objects. Designed for indoor applications, the YRL3 uses an innovative 3D scanning LiDAR to provide a 270° (horizontal) x 90° (vertical) dynamic field of view from a single channel. It measures distance to its surroundings using direct ToF (Time of Flight) and collects useful data including ranges, angles, intensities, and Cartesian coordinates (x, y, z). The vertical scanning angle can be adjusted in real time, and the sensor is supported by a powerful software package for autonomous driving devices.
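For a sense of what that reported data looks like, here is a generic sketch of turning a single return’s range and scan angles into the Cartesian coordinates the sensor also outputs. It is an illustration only, not Yujin’s SDK, and the YRL3’s actual axis conventions may differ.

```python
# Converting a LiDAR return (range plus horizontal and vertical scan angles)
# into Cartesian x, y, z. Generic spherical-to-Cartesian math for illustration;
# not Yujin's SDK, and the YRL3's frame conventions may differ.
import math

def polar_to_cartesian(range_m, azimuth_deg, elevation_deg):
    """Return (x, y, z) in meters for one measured point."""
    az = math.radians(azimuth_deg)     # horizontal scan angle
    el = math.radians(elevation_deg)   # vertical scan angle
    x = range_m * math.cos(el) * math.cos(az)
    y = range_m * math.cos(el) * math.sin(az)
    z = range_m * math.sin(el)
    return x, y, z

# Example: a return 2.5 m away, 30 degrees to the left, 10 degrees up.
print(polar_to_cartesian(2.5, 30.0, 10.0))
```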

“In recent years, our product lineup expanded to include models for the Fourth Industrial Revolution,” shares the marketing team of Yujin Robot. These models are Kobuki, the ROS reference research robot platform used by robotics research labs around the world; the Yujin LiDAR range-finding scanning sensor for LiDAR-based autonomous driving; and the AMS (Autonomous Mobility Solution) for customized autonomous driving. The company continues to push the boundaries of robotics and artificial intelligence, developing game-changing autonomous solutions that give companies around the world an edge over the competition.

Photo: Yujin

YUJIN 3D LiDAR, Now Shipping! Indoor 3D LiDAR for AGVs/AMRs, Service Robots, and Factories

Posted in Human Robots

#437600 Brain-Inspired Robot Controller Uses ...

Robots operating in the real world are starting to find themselves constrained by the amount of computing power they have available. Computers are certainly getting faster and more efficient, but they’re not keeping up with the potential of robotic systems, which have access to better sensors and more data, which in turn makes decision making more complex. A relatively new kind of computing device called a memristor could potentially help robotics smash through this barrier, through a combination of lower complexity, lower cost, and higher speed.

In a paper published today in Science Robotics, a team of researchers from the University of Southern California in Los Angeles and the Air Force Research Laboratory in Rome, N.Y., demonstrate a simple self-balancing robot that uses memristors to form a highly effective analog control system, inspired by the functional structure of the human brain.

First, we should go over just what the heck a memristor is. As the name suggests, it’s a type of memory that is resistance-based. That is, the resistance of a memristor can be programmed, and the memristor remembers that resistance even after it’s powered off (the resistance depends on the magnitude of the voltage applied to the memristor’s two terminals and the length of time that voltage has been applied). Memristors are potentially the ideal hybrid between RAM and flash memory, offering high speed, high density, non-volatile storage. So that’s cool, but what we’re most interested in as far as robot control systems go is that memristors store resistance, making them analog devices rather than digital ones.

By adding a memristor to an analog circuit with inputs from a gyroscope and an accelerometer, the researchers created a completely analog Kalman filter, which coupled to a second memristor functioned as a PD controller.

Nowadays, the word “analog” sounds like a bad thing, but robots are stuck in an analog world, and any physical interactions they have with the world (mediated through sensors) are fundamentally analog in nature. The challenge is that an analog signal is often “messy”—full of noise and non-linearities—and as such, the usual approach now is to get it converted to a digital signal and then processed to get anything useful out of it. This is fine, but it’s also not particularly fast or efficient. Where memristors come in is that they’re inherently analog, and in addition to storing data, they can also act as tiny analog computers, which is pretty wild.
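To make the “programmable resistance” idea from above concrete, here is a rough simulation of the textbook HP linear ion-drift memristor model: resistance drifts with the charge that has flowed through the device and is retained once the drive is removed. The model and its parameters are generic illustrations, not the specific devices used in this work.

```python
# A rough sketch of the HP linear ion-drift memristor model. The device's
# resistance drifts while current flows and is retained when the voltage is
# removed. Generic textbook model and parameters, not the USC/AFRL devices.
import numpy as np

R_ON, R_OFF = 100.0, 16_000.0   # fully doped / undoped resistances (ohms)
D = 10e-9                        # device thickness (m)
MU_V = 1e-14                     # dopant mobility (m^2 s^-1 V^-1)

def simulate(voltage, dt, w0=0.5 * D):
    """Apply a voltage waveform and return the resistance over time."""
    w = w0
    resistance = []
    for v in voltage:
        m = R_ON * (w / D) + R_OFF * (1.0 - w / D)   # current memristance
        i = v / m
        w += MU_V * (R_ON / D) * i * dt              # dopant-front drift
        w = min(max(w, 0.0), D)                      # keep the state physical
        resistance.append(m)
    return np.array(resistance)

t = np.linspace(0.0, 1.0, 10_000)
dt = t[1] - t[0]
pulse = np.where(t < 0.5, 1.0, 0.0)   # 1 V write pulse, then no drive
r = simulate(pulse, dt)
print(f"R before: {r[0]:.0f} ohm, R after pulse: {r[-1]:.0f} ohm (retained)")
```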

By adding a memristor to an analog circuit with inputs from a gyroscope and an accelerometer, the researchers, led by Wei Wu, an associate professor of electrical engineering at USC, created a completely analog and completely physical Kalman filter to remove noise from the sensor signal. In addition, they used a second memristor to turn that filtered sensor data into a proportional-derivative (PD) controller. They then put those two components together to build an analog system that does a bunch of the work required to keep an inverted pendulum robot upright far more efficiently than a traditional system. The difference in performance is readily apparent:

The shaking you see in the traditionally-controlled robot on the bottom comes from the non-linearity of the dynamic system, which changes faster than the on-board controller can keep up with. The memristors substantially reduce the cycle time, so the robot can balance much more smoothly. Specifically, cycle time is reduced from 3,034 microseconds to just 6 microseconds.
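For readers who want to see what that signal chain actually computes, here is a hedged digital stand-in: a one-state Kalman filter fusing a gyro rate with an accelerometer tilt reading, feeding a PD controller. The gains and noise values are made up for illustration; the paper’s point is that the memristor circuit does this work in the analog domain, without running a software loop like this at all.

```python
# Digital stand-in for the analog signal chain: a one-state Kalman filter
# fuses gyro rate and accelerometer tilt into a clean angle estimate, and a
# PD controller turns that estimate into a balance command. Illustrative only;
# gains, noise values, and the fake sensor data below are assumptions.
import random

DT = 0.01            # control period (s); the analog circuit has no such step
Q, R = 1e-4, 1e-2    # assumed process / measurement noise variances
KP, KD = 35.0, 1.2   # assumed PD gains for the toy balancer

angle_est, p = 0.0, 1.0    # Kalman state estimate and its variance
prev_error = 0.0
true_angle = 0.05          # small constant tilt to estimate, for demonstration

for _ in range(200):
    gyro = random.gauss(0.0, 0.02)               # noisy rate reading (rad/s)
    accel = true_angle + random.gauss(0.0, 0.1)  # noisy tilt reading (rad)

    # Kalman filter: predict with the gyro, correct with the accelerometer.
    angle_est += gyro * DT
    p += Q
    k = p / (p + R)
    angle_est += k * (accel - angle_est)
    p *= (1.0 - k)

    # PD controller: drive the estimated tilt back toward zero.
    error = -angle_est
    command = KP * error + KD * (error - prev_error) / DT
    prev_error = error

print(f"estimated tilt: {angle_est:.3f} rad, motor command: {command:.2f}")
```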

Of course, there’s more going on here, like motor drivers and a digital computer to talk to them, so this robot is really a hybrid system. But guess what? As the researchers point out, so are we!

The human brain consists of the cerebrum, the cerebellum, and the brainstem. The cerebrum is a major part of the brain in charge of vision, hearing, and thinking, whereas the cerebellum plays an important role in motion control. Through this cooperation of the cerebrum and the cerebellum, the human brain can conduct multiple tasks simultaneously with extremely low power consumption. Inspired by this, we developed a hybrid analog-digital computation platform, in which the digital component runs the high-level algorithm, whereas the analog component is responsible for sensor fusion and motion control.

By offloading a bunch of computation onto the memristors, the higher brain functions of the robot have more breathing room. Overall, you reduce power, space, and cost, while substantially improving performance. This has only become possible relatively recently due to memristor advances and availability, and the researchers expect that memristor-based hybrid computing will soon be able to “improve the robustness and the performance of mobile robotic systems with higher” degrees of freedom.

“A memristor-based hybrid analog-digital computing platform for mobile robotics,” by Buyun Chen, Hao Yang, Boxiang Song, Deming Meng, Xiaodong Yan, Yuanrui Li, Yunxiang Wang, Pan Hu, Tse-Hsien Ou, Mark Barnell, Qing Wu, Han Wang, and Wei Wu, from USC Viterbi and AFRL, was published in Science Robotics.

Posted in Human Robots

#437598 Video Friday: Sarcos Is Developing a New ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today's videos.

NASA’s Origins, Spectral Interpretation, Resource Identification, Security, Regolith Explorer (OSIRIS-REx) spacecraft unfurled its robotic arm Oct. 20, 2020, and in a first for the agency, briefly touched an asteroid to collect dust and pebbles from the surface for delivery to Earth in 2023.

[ NASA ]

New from David Zarrouk’s lab at BGU is AmphiSTAR, which Zarrouk describes as “a kind of a ground-water drone inspired by the cockroaches (sprawling) and by the Basilisk lizard (running over water). The robot hovers due to the collision of its propellers with the water (hydrodynamics not aerodynamics). The robot can crawl and swim at high and low speeds and smoothly transition between the two. It can reach 3.5 m/s on ground and 1.5 m/s in water.”

AmphiSTAR will be presented at IROS, starting next week!

[ BGU ]

This is unfortunately not a great video of a video that was taken at a SoftBank Hawks baseball game in Japan last week, but it’s showing an Atlas robot doing an honestly kind of impressive dance routine to support the team.

The humanoid robot ATLAS has joined the robot cheering squad remotely from the United States for a surprise appearance!!!
Enjoy the footage on Hawks Vision ♪ #sbhawks #Pepper #spot pic.twitter.com/6aTYn8GGli
— Fukuoka SoftBank Hawks (official) (@HAWKS_official)
October 16, 2020

Editor’s Note: The tweet embed above is not working for some reason—see the video here.

[ SoftBank Hawks ]

Thanks Thomas!

Sarcos is working on a new robot, which looks to be the torso of their powered exoskeleton with the human relocated somewhere else.

[ Sarcos ]

The biggest holiday of the year, International Sloth Day, was on Tuesday! To celebrate, here’s Slothbot!

[ NSF ]

This is one of those simple-seeming tasks that are really difficult for robots.

I love self-resetting training environments.

[ MIT CSAIL ]

The Chiel lab collaborates with engineers at the Center for Biologically Inspired Robotics Research at Case Western Reserve University to design novel worm-like robots that have potential applications in search-and-rescue missions, endoscopic medicine, or other scenarios requiring navigation through narrow spaces.

[ Case Western ]

ANYbotics partnered with Losinger Marazzi to explore ANYmal’s potential of patrolling construction sites to identify and report safety issues. With such a complex environment, only a robot designed to navigate difficult terrain is able to bring digitalization to such a physically demanding industry.

[ ANYbotics ]

Happy 2018 Halloween from Clearpath Robotics!

[ Clearpath ]

Overcoming illumination variance is a critical factor in vision-based navigation. Existing methods tackled this radical illumination variance issue by proposing camera control or high dynamic range (HDR) image fusion. Despite these efforts, we have found that the vision-based approaches still suffer from overcoming darkness. This paper presents real-time image synthesizing from carefully controlled seed low dynamic range (LDR) image, to enable visual simultaneous localization and mapping (SLAM) in an extremely dark environment (less than 10 lux).

[ KAIST ]

What can MoveIt do? Who knows! Let's find out!

[ MoveIt ]

Thanks Dave!

Here we pick a cube from a starting point, manipulate it within the hand, and then put it back. To explore the capabilities of the hand, no sensors were used in this demonstration. The RBO Hand 3 uses soft pneumatic actuators made of silicone. The softness imparts considerable robustness against variations in object pose and size. This lets us design manipulation funnels that work reliably without needing sensor feedback. We take advantage of this reliability to chain these funnels into more complex multi-step manipulation plans.

[ TU Berlin ]

If this was a real solar array, King Louie would have totally cleaned it. Mostly.

[ BYU ]

Autonomous exploration is a fundamental problem for various applications of unmanned aerial vehicles (UAVs). Existing methods, however, were demonstrated to have low efficiency, due to the lack of optimality consideration, conservative motion plans and low decision frequencies. In this paper, we propose FUEL, a hierarchical framework that can support Fast UAV ExpLoration in complex unknown environments.

[ HKUST ]

Countless precise repetitions? This is the perfect task for a robot, thought researchers at the University of Liverpool in the Department of Chemistry, and without further ado they developed an automation solution that can carry out and monitor research tasks, making autonomous decisions about what to do next.

[ Kuka ]

This video shows a demonstration of central results of the SecondHands project. In the context of maintenance and repair tasks, in warehouse environments, the collaborative humanoid robot ARMAR-6 demonstrates a number of cognitive and sensorimotor abilities such as 1) recognition of the need of help based on speech, force, haptics and visual scene and action interpretation, 2) collaborative bimanual manipulation of large objects, 3) compliant mobile manipulation, 4) grasping known and unknown objects and tools, 5) human-robot interaction (object and tool handover) 6) natural dialog and 7) force predictive control.

[ SecondHands ]

In celebration of Ada Lovelace Day, Silicon Valley Robotics hosted a panel of Women in Robotics.

[ Robohub ]

As part of the upcoming virtual IROS conference, HEBI robotics is putting together a tutorial on robotics actuation. While I’m sure HEBI would like you to take a long look at their own actuators, we’ve been assured that no matter what kind of actuators you use, this tutorial will still be informative and useful.

[ YouTube ] via [ HEBI Robotics ]

Thanks Dave!

This week’s UMD Lockheed Martin Robotics Seminar comes from Julie Shah at MIT, on “Enhancing Human Capability with Intelligent Machine Teammates.”

Every team has top performers: people who excel at working in a team to find the right solutions in complex, difficult situations. These top performers include nurses who run hospital floors, emergency response teams, air traffic controllers, and factory line supervisors. While they may outperform the most sophisticated optimization and scheduling algorithms, they cannot often tell us how they do it. Similarly, even when a machine can do the job better than most of us, it can’t explain how. In this talk I share recent work investigating effective ways to blend the unique decision-making strengths of humans and machines. I discuss the development of computational models that enable machines to efficiently infer the mental state of human teammates and thereby collaborate with people in richer, more flexible ways.

[ UMD ]

Matthew Piccoli gives a talk to the UPenn GRASP Lab on “Trading Complexities: Smart Motors and Dumb Vehicles.”

We will discuss my research journey through Penn making the world's smallest, simplest flying vehicles, and in parallel making the most complex brushless motors. What do they have in common? We'll touch on why the quadrotor went from an obscure type of helicopter to the current ubiquitous drone. Finally, we'll get into my life after Penn and what tools I'm creating to further drone and robot designs of the future.

[ UPenn ]

Posted in Human Robots

#437592 Coordinated Robotics Wins DARPA SubT ...

DARPA held the Virtual Cave Circuit event of the Subterranean Challenge on Tuesday in the form of a several hour-long livestream. We got to watch (along with all of the competing teams) as virtual robots explored virtual caves fully autonomously, dodging rockfalls, spotting artifacts, scoring points, and sometimes running into stuff and falling over.

Expert commentary was provided by DARPA, and we were able to watch multiple teams running at once, skipping from highlight to highlight. It was really very well done (you can watch an archive of the entire stream here), but they made us wait until the very end to learn who won: First place went to Coordinated Robotics, with BARCS taking second, and third place going to newcomer Team Dynamo.

Huge congratulations to Coordinated Robotics! It’s worth pointing out that the top three teams were separated by an incredibly small handful of points, and on a slightly different day, with slightly different artifact positions, any of them could have come out on top. This doesn’t diminish Coordinated Robotics’ victory in the least—it means that the competition was fierce, and that the problem of autonomous cave exploration with robots has been solved (virtually, at least) in several different but effective ways.

We know Coordinated Robotics pretty well at this point, but here’s an introduction video:

You heard that right—Coordinated Robotics is just Kevin Knoedler, all by himself. This would be astonishing, if we weren’t already familiar with Kevin’s abilities: He won NASA’s virtual Space Robotics Challenge by himself in 2017, and Coordinated Robotics placed first in the DARPA SubT Virtual Tunnel Circuit and second in the Virtual Urban Circuit. We asked Kevin how he managed to do so spectacularly well (again), and here’s what he told us:

IEEE Spectrum: Can you describe what it was like to watch your team of robots on the live stream, and to see them score the most points?

Kevin Knoedler: It was exciting and stressful watching the live stream. It was exciting as the top few scores were quite close for the cave circuit. It was stressful because I started out behind and worked my way up, but did not do well on the final world. Luckily, not doing well on the first and last worlds was offset by better scores on many of the runs in between. DARPA did a very nice job with their live stream of the cave circuit results.

How did you decide on the makeup of your team, and on what sensors to use?

To decide on the makeup of the team I experimented with quite a few different vehicles. I had a lot of trouble with the X2 and other small ground vehicles flipping over. Based on that I looked at the larger ground vehicles that also had a sensor capable of identifying drop-offs. The vehicles that met those criteria for me were the Marble HD2, Marble Husky, Ozbot ATR, and the Absolem. Of those ground vehicles I went with the Marble HD2. It had a downward looking depth camera that I could use to detect drop-offs and was much more stable on the varied terrain than the X2. I had used the X3 aerial vehicle before and so that was my first choice for an aerial platform.

What were some things that you learned in Tunnel and Urban that you were able to incorporate into your strategy for Cave?

In the Tunnel circuit I had learned a strategy to use ground vehicles and in the Urban circuit I had learned a strategy to use aerial vehicles. At a high level that was the biggest thing I learned from the previous circuits that I was able to apply to the Cave circuit. At a lower level I was able to apply many of the development and testing strategies from the previous circuits to the Cave circuit.

What aspect of the cave environment was most challenging for your robots?

I would say it wasn't just one aspect of the cave environment that was challenging for the robots. There were quite a few challenging aspects of the cave environment. For the ground vehicles there were frequently paths that looked good as the robot started on the path, but turned into drop-offs or difficult boulder crawls. While it was fun to see the robot plan well enough to slowly execute paths over the boulders, I was wishing that the robot was smart enough to try a different path rather than wasting so much time crawling over the large boulders. For the aerial vehicles the combination of tight paths along with large vertical spaces was the biggest challenge in the environment. The large open vertical areas were particularly challenging for my aerial robots. They could easily lose track of their position without enough nearby features to track and it was challenging to find the correct path in and out of such large vertical areas.

How will you be preparing for the SubT Final?

To prepare for the SubT Final the vehicles will be getting a lot smarter. The ground vehicles will be better at navigation and communicating with one another. The aerial vehicles will be better able to handle large vertical areas both from a positioning and a planning point of view. Finally, all of the vehicles will do a better job coordinating what areas have been explored and what areas have good leads for further exploration.

Image: DARPA

The final score for the DARPA SubT Cave Circuit virtual competition.

We also had a chance to ask SubT program manager Tim Chung a few questions at yesterday’s post-event press conference, about the course itself and what he thinks teams should have learned from the competition:

IEEE Spectrum: Having looked through some real caves, can you give some examples of some of the most significant differences between this simulation and real caves? And with the enormous variety of caves out there, how generalizable are the solutions that teams came up with?

Tim Chung: Many of the caves that I’ve had to crawl through and gotten bumps and scrapes from had a couple of different features that I’ll highlight. The first is the variations in moisture— a lot of these caves were naturally formed with streams and such, so many of the caves we went to had significant mud, flowing water, and such. And so one of the things we're not capturing in the SubT simulator is explicitly anything that would submerge the robots, or otherwise short any of their systems. So from that perspective, that's one difference that's certainly notable.

And then the other difference I think is the granularity of the terrain, whether it's rubble, sand, or just raw dirt, friction coefficients are all across the board, and I think that's one of the things that any terrestrial simulator will both struggle with and potentially benefit from— that is, terramechanics simulation abilities. Given the emphasis on mobility in the SubT simulation, we’re capturing just a sliver of the complexity of terramechanics, but I think that's probably another take away that you'll certainly see— where there’s that distinction between physical and virtual technologies.

To answer your second question about generalizability— that’s the multi-million dollar question! It’s definitely at the crux of why we have eight diverse worlds, both in size, verticality, dimensions, constrained passageways, etc. But this is eight out of countless variations, and the goal of course is to be able to investigate what those key dependencies are. What I'll say is that, out of the seventy-three different virtual cave tiles, which are the building blocks that make up these virtual worlds, quite a number of them were not only inspired by real world caves, but were specifically designed so that we can essentially use these tiles as unit tests going forward. So, if I want to simulate vertical inclines, here are the tiles that are the vertical unit tests for robots, and that’s how we’re trying to think through how to tease out that generalizability factor.

What are some observations from this event that you think systems track teams should pay attention to as they prepare for the final event?

One of the key things about the virtual competition is that you submit your software, and that's it. So you have to design everything from state management to failure mode triage, really thinking about what could go wrong and then building out your autonomous capabilities either to react to some of those conditions, or to anticipate them. And to be honest I think that the humans in the loop that we have in the systems competition really are key enablers of their capability, but also could someday (if not already) be a crutch that we might not be able to develop.

Thinking through some of the failure modes in a fully autonomous software deployed setting is going to be incredibly valuable for the systems competitors, so that for example the human supervisor doesn't have to worry about those failure modes as much, or can respond in a more supervisory way rather than trying to joystick the robot around. I think that's going to be one of the greatest impacts, thinking through what it means to send these robots off to autonomously get you the information you need and complete the mission.

This isn’t to say that the humans aren't going to be useful and continue to play a role of course, but I think this shifting of the role of the human supervisor from being a state manager to being more of a tactical commander will dramatically highlight the impact of the virtual side on the systems side.

What, if anything, should we take away from one person teams being able to do so consistently well in the virtual circuit?

It’s a really interesting question. I think part of it has to do with systems integration versus software integration. There's something to be said for the richness of the technologies that can be developed, and how many people it requires to be able to develop some of those technologies. With the systems competitors, having one person try to build, manage, deploy, service, and operate all of those robots is still functionally quite challenging, whereas in the virtual competition, it really is a software deployment more than anything else. And so I think the commonality of single person teams may just be a virtue of the virtual competition not having some of those person-intensive requirements.

In terms of their strong performance, I give credit to all of these really talented folks who are taking it upon themselves to jump into the competitor pool and see how well they do, and I think that just goes to show you that whether you're one person or ten people or a hundred people on a team, a good idea translated and executed well really goes a long way.

Looking ahead, teams have a year to prepare for the final event, which is still scheduled to be held sometime in fall 2021. And even though there was no cave event for systems track teams, the fact that the final event will be a combination of tunnel, urban, and cave circuits means that systems track teams have been figuring out how to get their robots to work in caves anyway, and we’ll be bringing you some of their stories over the next few weeks.

[ DARPA SubT ]

Posted in Human Robots