
#437673 Can AI and Automation Deliver a COVID-19 ...

Illustration: Marysia Machulska

Within moments of meeting each other at a conference last year, Nathan Collins and Yann Gaston-Mathé began devising a plan to work together. Gaston-Mathé runs a startup that applies automated software to the design of new drug candidates. Collins leads a team that uses an automated chemistry platform to synthesize new drug candidates.

“There was an obvious synergy between their technology and ours,” recalls Gaston-Mathé, CEO and cofounder of Paris-based Iktos.

In late 2019, the pair launched a project to create a brand-new antiviral drug that would block a specific protein exploited by influenza viruses. Then the COVID-19 pandemic erupted across the world stage, and Gaston-Mathé and Collins learned that the viral culprit, SARS-CoV-2, relied on a protein that was 97 percent similar to their influenza protein. The partners pivoted.

Their companies are just two of hundreds of biotech firms eager to overhaul the drug-discovery process, often with the aid of artificial intelligence (AI) tools. The first set of antiviral drugs to treat COVID-19 will likely come from sifting through existing drugs. Remdesivir, for example, was originally developed to treat Ebola, and it has been shown to speed the recovery of hospitalized COVID-19 patients. But a drug made for one condition often has side effects and limited potency when applied to another. If researchers can produce an antiviral that specifically targets SARS-CoV-2, the drug would likely be safer and more effective than a repurposed drug.

There’s one big problem: Traditional drug discovery is far too slow to react to a pandemic. Designing a drug from scratch typically takes three to five years—and that’s before human clinical trials. “Our goal, with the combination of AI and automation, is to reduce that down to six months or less,” says Collins, who is chief strategy officer at SRI Biosciences, a division of the Silicon Valley research nonprofit SRI International. “We want to get this to be very, very fast.”

That sentiment is shared by small biotech firms and big pharmaceutical companies alike, many of which are now ramping up automated technologies backed by supercomputing power to predict, design, and test new antivirals—for this pandemic as well as the next—with unprecedented speed and scope.

“The entire industry is embracing these tools,” says Kara Carter, president of the International Society for Antiviral Research and executive vice president of infectious disease at Evotec, a drug-discovery company in Hamburg. “Not only do we need [new antivirals] to treat the SARS-CoV-2 infection in the population, which is probably here to stay, but we’ll also need them to treat future agents that arrive.”

There are currently about 200 known viruses that infect humans. Although viruses represent less than 14 percent of all known human pathogens, they make up two-thirds of all new human pathogens discovered since 1980.

Antiviral drugs are fundamentally different from vaccines, which teach a person’s immune system to mount a defense against a viral invader, and antibody treatments, which enhance the body’s immune response. By contrast, antivirals are chemical compounds that directly block a virus after a person has become infected. They do this by binding to specific proteins and preventing them from functioning, so that the virus cannot copy itself or enter or exit a cell.

The SARS-CoV-2 virus has an estimated 25 to 29 proteins, but not all of them are suitable drug targets. Researchers are investigating, among other targets, the virus’s exterior spike protein, which binds to a receptor on a human cell; two scissorlike enzymes, called proteases, that cut up long strings of viral proteins into functional pieces inside the cell; and a polymerase complex that makes the cell churn out copies of the virus’s genetic material, in the form of single-stranded RNA.

But it’s not enough for a drug candidate to simply attach to a target protein. Chemists also consider how tightly the compound binds to its target, whether it binds to other things as well, how quickly it metabolizes in the body, and so on. A drug candidate may have 10 to 20 such objectives. “Very often those objectives can appear to be anticorrelated or contradictory with each other,” says Gaston-Mathé.
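
One common way to handle many competing objectives is to fold every predicted property into a single "desirability" score, so that a candidate that excels on one axis can still rank poorly if it misses another. The sketch below (in Python) illustrates the idea; the property names, target ranges, and weights are invented for illustration and are not Iktos's actual scoring scheme.

```python
# Hypothetical multi-objective scoring for a drug candidate.
# Property names, target ranges, and weights are illustrative only.

def desirability(value, low, high):
    """1.0 inside the desired range, decaying linearly outside it."""
    if low <= value <= high:
        return 1.0
    miss = (low - value) if value < low else (value - high)
    return max(0.0, 1.0 - miss / (high - low))

def score_candidate(props, objectives):
    """Weighted geometric mean of per-objective desirabilities: one bad
    objective drags the whole score down, mirroring how a candidate fails
    if any single property is unacceptable."""
    total = 1.0
    for name, (low, high, weight) in objectives.items():
        total *= desirability(props[name], low, high) ** weight
    return total ** (1.0 / sum(w for _, _, w in objectives.values()))

objectives = {
    "binding_affinity_nM":  (0.0, 50.0, 2.0),   # tighter binding is better
    "solubility_logS":      (-4.0, 0.0, 1.0),
    "metabolic_halflife_h": (2.0, 12.0, 1.0),
}

candidate = {"binding_affinity_nM": 12.0,
             "solubility_logS": -5.2,
             "metabolic_halflife_h": 6.0}

print(round(score_candidate(candidate, objectives), 3))
```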

Compared with antibiotics, antiviral drug discovery has proceeded at a snail’s pace. Scientists advanced from isolating the first antibacterial molecules in 1910 to developing an arsenal of powerful antibiotics by 1944. By contrast, it took until 1951 for researchers to be able to routinely grow large amounts of virus particles in cells in a dish, a breakthrough that earned the inventors a Nobel Prize in Medicine in 1954.

And the lag between the discovery of a virus and the creation of a treatment can be heartbreaking. According to the World Health Organization, 71 million people worldwide have chronic hepatitis C, a major cause of liver cancer. The virus that causes the infection was discovered in 1989, but effective antiviral drugs didn’t hit the market until 2014.

While many antibiotics work on a range of microbes, most antivirals are highly specific to a single virus—what those in the business call “one bug, one drug.” It takes a detailed understanding of a virus to develop an antiviral against it, says Che Colpitts, a virologist at Queen’s University, in Canada, who works on antivirals against RNA viruses. “When a new virus emerges, like SARS-CoV-2, we’re at a big disadvantage.”

Making drugs to stop viruses is hard for three main reasons. First, viruses are the Spartans of the pathogen world: They’re frugal, brutal, and expert at evading the human immune system. About 20 to 250 nanometers in diameter, viruses rely on just a few parts to operate, hijacking host cells to reproduce and often destroying those cells upon departure. They employ tricks to camouflage their presence from the host’s immune system, including preventing infected cells from sending out molecular distress beacons. “Viruses are really small, so they only have a few components, so there’s not that many drug targets available to start with,” says Colpitts.

Second, viruses replicate quickly, typically doubling in number in hours or days. This constant copying of their genetic material enables viruses to evolve quickly, producing mutations able to sidestep drug effects. The virus that causes AIDS soon develops resistance when exposed to a single drug. That’s why a cocktail of antiviral drugs is used to treat HIV infection.

Finally, unlike bacteria, which can exist independently outside human cells, viruses invade human cells to propagate, so any drug designed to eliminate a virus needs to spare the host cell. A drug that fails to distinguish between a virus and a cell can cause serious side effects. “Discriminating between the two is really quite difficult,” says Evotec’s Carter, who has worked in antiviral drug discovery for over three decades.

And then there’s the money barrier. Developing antivirals is rarely profitable. Health-policy researchers at the London School of Economics recently estimated that the average cost of developing a new drug is US $1 billion, and up to $2.8 billion for cancer and other specialty drugs. Because antivirals are usually taken for only short periods of time or during short outbreaks of disease, companies rarely recoup what they spent developing the drug, much less turn a profit, says Carter.

To change the status quo, drug discovery needs fresh approaches that leverage new technologies, rather than incremental improvements, says Christian Tidona, managing director of BioMed X, an independent research institute in Heidelberg, Germany. “We need breakthroughs.”

Putting Drug Development on Autopilot
Earlier this year, SRI Biosciences and Iktos began collaborating on a way to use artificial intelligence and automated chemistry to rapidly identify new drugs to target the COVID-19 virus. Within four months, they had designed and synthesized a first round of antiviral candidates. Here’s how they’re doing it.

STEP 1: Iktos’s AI platform uses deep-learning algorithms in an iterative process to come up with new molecular structures likely to bind to and disable a specific coronavirus protein. Illustrations: Chris Philpot

STEP 2: SRI Biosciences’s SynFini system is a three-part automated chemistry suite for producing new compounds. Starting with a target compound from Iktos, SynRoute uses machine learning to analyze and optimize routes for creating that compound, with results in about 10 seconds. It prioritizes routes based on cost, likelihood of success, and ease of implementation.

STEP 3: SynJet, an automated inkjet printer platform, tests the routes by printing out tiny quantities of chemical ingredients to see how they react. If the right compound is produced, the platform tests it.

STEP 4: AutoSyn, an automated tabletop chemical plant, synthesizes milligrams to grams of the desired compound for further testing. Computer-selected “maps” dictate paths through the plant’s modular components.

STEP 5: The most promising compounds are tested against live virus samples.

Iktos’s AI platform was created by a medicinal chemist and an AI expert. To tackle SARS-CoV-2, the company used generative models—deep-learning algorithms that generate new data—to “imagine” molecular structures with a good chance of disabling a key coronavirus protein.

For a new drug target, the software proposes and evaluates roughly 1 million compounds, says Gaston-Mathé. It’s an iterative process: At each step, the system generates 100 virtual compounds, which are tested in silico with predictive models to see how closely they meet the objectives. The test results are then used to design the next batch of compounds. “It’s like we have a very, very fast chemist who is designing compounds, testing compounds, getting back the data, then designing another batch of compounds,” he says.
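
That generate-test-refine loop can be outlined in a few lines of code. The sketch below is a toy illustration of the pattern Gaston-Mathé describes, not Iktos's software: the sampler, the property predictor, and the update rule are simple stand-ins for what would really be deep generative and predictive models.

```python
# Rough sketch of an iterative generate-score-refine design loop,
# with toy stand-ins for the generative model and the in-silico predictor.
import random

def design_loop(sample, predict, meets_objectives, update,
                batch_size=100, max_rounds=10):
    shortlist = []
    for _ in range(max_rounds):
        batch = [sample() for _ in range(batch_size)]    # propose virtual compounds
        scored = [(mol, predict(mol)) for mol in batch]  # test them in silico
        shortlist += [mol for mol, score in scored if meets_objectives(score)]
        update(scored)                                   # steer the next batch
    return shortlist

# Toy example: "molecules" are just numbers, the "predictor" is a parabola,
# and "updating" narrows the sampling window around the best result so far.
window = [0.0, 10.0]
sample = lambda: random.uniform(*window)
predict = lambda x: -(x - 7.0) ** 2
meets = lambda score: score > -0.5

def update(scored):
    best = max(scored, key=lambda pair: pair[1])[0]
    window[0], window[1] = best - 1.0, best + 1.0

hits = design_loop(sample, predict, meets, update)
print(len(hits), "candidates met the objective")
```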

The computer isn’t as smart as a human chemist, Gaston-Mathé notes, but it’s much faster, so it can explore far more of what people in the field call “chemical space”—the set of all possible organic compounds. Unexplored chemical space is huge: Biochemists estimate that there are at least 10⁶³ possible druglike molecules, and that 99.9 percent of all possible small molecules or compounds have never been synthesized.

Still, designing a chemical compound isn’t the hardest part of creating a new drug. After a drug candidate is designed, it must be synthesized, and the highly manual process for synthesizing a new chemical hasn’t changed much in 200 years. It can take days to plan a synthesis process and then months to years to optimize it for manufacture.

That’s why Gaston-Mathé was eager to send Iktos’s AI-generated designs to Collins’s team at SRI Biosciences. With $13.8 million from the Defense Advanced Research Projects Agency, SRI Biosciences spent the last four years automating the synthesis process. The company’s automated suite of three technologies, called SynFini, can produce new chemical compounds in just hours or days, says Collins.

First, machine-learning software devises possible routes for making a desired molecule. Next, an inkjet printer platform tests the routes by printing out and mixing tiny quantities of chemical ingredients to see how they react with one another; if the right compound is produced, the platform runs tests on it. Finally, a tabletop chemical plant synthesizes milligrams to grams of the desired compound.
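
As a rough illustration of the route ranking described in Step 2 above, here is a toy Python version that scores each candidate synthesis route on cost, predicted likelihood of success, and ease of implementation, then sorts the results. The weights and example routes are made up; SynRoute reportedly derives these factors with machine learning rather than hand-set numbers.

```python
# Hypothetical route ranking on cost, success likelihood, and ease of implementation.
# Weights and example routes are invented for illustration.
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    cost_usd: float      # estimated reagent and operation cost
    p_success: float     # predicted probability the route works (0 to 1)
    ease: float          # 0 (very hard to automate) .. 1 (trivial)

def route_score(r, w_cost=0.4, w_success=0.4, w_ease=0.2, cost_scale=1000.0):
    # Lower cost is better, so it enters the score inverted.
    return (w_cost * (1.0 - min(r.cost_usd / cost_scale, 1.0))
            + w_success * r.p_success
            + w_ease * r.ease)

routes = [
    Route("route A", cost_usd=250, p_success=0.6, ease=0.9),
    Route("route B", cost_usd=800, p_success=0.8, ease=0.5),
    Route("route C", cost_usd=120, p_success=0.3, ease=0.7),
]

for r in sorted(routes, key=route_score, reverse=True):
    print(f"{r.name}: score={route_score(r):.2f}")
```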

Less than four months after Iktos and SRI Biosciences announced their collaboration, they had designed and synthesized a first round of antiviral candidates for SARS-CoV-2. Now they’re testing how well the compounds work on actual samples of the virus.

Out of 10⁶³ possible druglike molecules, 99.9 percent have never been synthesized.

Theirs isn’t the only collaboration applying new tools to drug discovery. In late March, Alex Zhavoronkov, CEO of Hong Kong–based Insilico Medicine, came across a YouTube video showing three virtual-reality avatars positioning colorful, sticklike fragments in the side of a bulbous blue protein. The three researchers were using VR to explore how compounds might bind to a SARS-CoV-2 enzyme. Zhavoronkov contacted the startup that created the simulation—Nanome, in San Diego—and invited it to examine Insilico’s AI-generated molecules in virtual reality.

Insilico runs an AI platform that uses biological data to train deep-learning algorithms, then uses those algorithms to identify molecules with druglike features that will likely bind to a protein target. A four-day training sprint in late January yielded 100 molecules that appear to bind to an important SARS-CoV-2 protease. The company recently began synthesizing some of those molecules for laboratory testing.

Nanome’s VR software, meanwhile, allows researchers to import a molecular structure, then view and manipulate it on the scale of individual atoms. Like human chess players who use computer programs to explore potential moves, chemists can use VR to predict how to make molecules more druglike, says Nanome CEO Steve McCloskey. “The tighter the interface between the human and the computer, the more information goes both ways,” he says.

Zhavoronkov sent data about several of Insilico’s compounds to Nanome, which re-created them in VR. Nanome’s chemist demonstrated chemical tweaks to potentially improve each compound. “It was a very good experience,” says Zhavoronkov.

Meanwhile, in March, Takeda Pharmaceutical Co., of Japan, invited Schrödinger, a New York–based company that develops chemical-simulation software, to join an alliance working on antivirals. Schrödinger’s AI focuses on the physics of how proteins interact with small molecules and one another.

The software sifts through billions of molecules per week to predict a compound’s properties, and it optimizes for multiple desired properties simultaneously, says Karen Akinsanya, chief biomedical scientist and head of discovery R&D at Schrödinger. “There’s a huge sense of urgency here to come up with a potent molecule, but also to come up with molecules that are going to be well tolerated” by the body, she says. Drug developers are seeking compounds that can be broadly used and easily administered, such as an oral drug rather than an intravenous drug, she adds.

Schrödinger evaluated four protein targets and performed virtual screens for two of them, a computing-intensive process. In June, Google Cloud donated the equivalent of 16 million hours of Nvidia GPU time for the company’s calculations. Next, the alliance’s drug companies will synthesize and test the most promising compounds identified by the virtual screens.

Other companies, including Amazon Web Services, IBM, and Intel, as well as several U.S. national labs, are also donating time and resources to the COVID-19 High Performance Computing Consortium. The consortium is supporting 87 projects, which now have access to 6.8 million CPU cores, 50,000 GPUs, and 600 petaflops of computational resources.

While advanced technologies could transform early drug discovery, any new drug candidate still has a long road after that. It must be tested in animals, manufactured in large batches for clinical trials, then tested in a series of trials that, for antivirals, lasts an average of seven years.

In May, the BioMed X Institute in Germany launched a five-year project to build a Rapid Antiviral Response Platform, which would speed drug discovery all the way through manufacturing for clinical trials. The €40 million ($47 million) project, backed by drug companies, will identify outside-the-box proposals from young scientists, then provide space and funding to develop their ideas.

“We’ll focus on technologies that allow us to go from identification of a new virus to 10,000 doses of a novel potential therapeutic ready for trials in less than six months,” says BioMed X’s Tidona, who leads the project.

While a vaccine will likely arrive long before a bespoke antiviral does, experts expect COVID-19 to be with us for a long time, so the effort to develop a direct-acting, potent antiviral continues. Plus, having new antivirals—and tools to rapidly create more—can only help us prepare for the next pandemic, whether it comes next month or in another 102 years.

“We’ve got to start thinking differently about how to be more responsive to these kinds of threats,” says Collins. “It’s pushing us out of our comfort zones.”

This article appears in the October 2020 print issue as “Automating Antivirals.”


#437671 Video Friday: Researchers 3D Print ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online]
IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

The Giant Gundam in Yokohama is actually way cooler than I thought it was going to be.

[ Gundam Factory ] via [ YouTube ]

A new 3D-printing method will make it easier to manufacture and control the shape of soft robots, artificial muscles and wearable devices. Researchers at UC San Diego show that by controlling the printing temperature of liquid crystal elastomer, or LCE, they can control the material’s degree of stiffness and ability to contract—also known as degree of actuation. What’s more, they are able to change the stiffness of different areas in the same material by exposing it to heat.

[ UCSD ]

Thanks Ioana!

This is the first successful reactive stepping test on our new torque-controlled biped robot named Bolt. The robot has 3 active degrees of freedom per leg and one passive joint in the ankle. Since there is no active joint in the ankle, the robot relies only on step location and timing adaptation to stabilize its motion. Not only can the robot perform stepping without active ankles, but it is also capable of rejecting external disturbances, as we showed in this video.

[ ODRI ]

The curling robot “Curly” is the first AI-based robot to demonstrate competitive curling skills in a real icy environment with high uncertainties. Scientists from seven different Korean research institutions, including Prof. Klaus-Robert Müller, head of the machine-learning group at TU Berlin and guest professor at Korea University, have developed an AI-based curling robot.

[ TU Berlin ]

MoonRanger, a small robotic rover being developed by Carnegie Mellon University and its spinoff Astrobotic, has completed its preliminary design review in preparation for a 2022 mission to search for signs of water at the moon’s south pole. Red Whittaker explains why the new MoonRanger Lunar Explorer design is innovative and different from prior planetary rovers.

[ CMU ]

Cobalt’s security robot can now navigate unmodified elevators, which is an impressive feat.

Also, EXTERMINATE!

[ Cobalt ]

OrionStar, the robotics company invested in by Cheetah Mobile, announced the Robotic Coffee Master. Incorporating 3,000 hours of AI learning, 30,000 hours of robotic arm testing and machine vision training, the Robotic Coffee Master can perform complex brewing techniques, such as curves and spirals, with millimeter-level stability and accuracy (reset error ≤ 0.1mm).

[ Cheetah Mobile ]

DARPA OFFensive Swarm-Enabled Tactics (OFFSET) researchers recently tested swarms of autonomous air and ground vehicles at the Leschi Town Combined Arms Collective Training Facility (CACTF), located at Joint Base Lewis-McChord (JBLM) in Washington. The Leschi Town field experiment is the fourth of six planned experiments for the OFFSET program, which seeks to develop large-scale teams of collaborative autonomous systems capable of supporting ground forces operating in urban environments.

[ DARPA ]

Here are some highlights from Team Explorer’s SubT Urban competition back in February.

[ Team Explorer ]

Researchers with the Skoltech Intelligent Space Robotics Laboratory have developed a system that allows easy interaction with a micro-quadcopter with LEDs that can be used for light-painting. The researchers used a 92x92x29 mm Crazyflie 2.0 quadrotor that weighs just 27 grams, equipped with a light reflector and an array of controllable RGB LEDs. The control system consists of a glove equipped with an inertial measurement unit (IMU; an electronic device that tracks the movement of a user’s hand), and a base station that runs a machine learning algorithm.

[ Skoltech ]

“DeKonBot” is the prototype of a cleaning and disinfection robot for potentially contaminated surfaces in buildings such as door handles, light switches or elevator buttons. While other cleaning robots often spray the cleaning agents over a large area, DeKonBot autonomously identifies the surface to be cleaned.

[ Fraunhofer IPA ]

On Oct. 20, the OSIRIS-REx mission will perform the first attempt of its Touch-And-Go (TAG) sample collection event. Not only will the spacecraft navigate to the surface using innovative navigation techniques, but it could also collect the largest sample since the Apollo missions.

[ NASA ]

With all the robotics research that seems to happen in places where snow is more of an occasional novelty or annoyance, it’s good to see NORLAB taking things more seriously.

[ NORLAB ]

Telexistence’s Model-T robot works very slowly, but very safely, restocking shelves.

[ Telexistence ] via [ YouTube ]

Roboy 3.0 will be unveiled next month!

[ Roboy ]

KUKA ready2_educate is your training cell for hands-on education in robotics. It is especially aimed at schools, universities and company training facilities. The training cell is a complete starter package and your perfect partner for entry into robotics.

[ KUKA ]

A UPenn GRASP Lab Special Seminar on Data Driven Perception for Autonomy, presented by Dapo Afolabi from UC Berkeley.

Perception systems form a crucial part of autonomous and artificial intelligence systems since they convert data about the relationship between an autonomous system and its environment into meaningful information. Perception systems can be difficult to build since they may involve modeling complex physical systems or other autonomous agents. In such scenarios, data driven models may be used to augment physics based models for perception. In this talk, I will present work making use of data driven models for perception tasks, highlighting the benefit of such approaches for autonomous systems.

[ GRASP Lab ]

A Maryland Robotics Center Special Robotics Seminar on Underwater Autonomy, presented by Ioannis Rekleitis from the University of South Carolina.

This talk presents an overview of algorithmic problems related to marine robotics, with a particular focus on increasing the autonomy of robotic systems in challenging environments. I will talk about vision-based state estimation and mapping of underwater caves. An application of monitoring coral reefs is going to be discussed. I will also talk about several vehicles used at the University of South Carolina such as drifters, underwater, and surface vehicles. In addition, a short overview of the current projects will be discussed. The work that I will present has a strong algorithmic flavour, while it is validated in real hardware. Experimental results from several testing campaigns will be presented.

[ MRC ]

This week’s CMU RI Seminar comes from Scott Niekum at UT Austin, on Scaling Probabilistically Safe Learning to Robotics.

Before learning robots can be deployed in the real world, it is critical that probabilistic guarantees can be made about the safety and performance of such systems. This talk focuses on new developments in three key areas for scaling safe learning to robotics: (1) a theory of safe imitation learning; (2) scalable reward inference in the absence of models; (3) efficient off-policy policy evaluation. The proposed algorithms offer a blend of safety and practicality, making a significant step towards safe robot learning with modest amounts of real-world data.

[ CMU RI ]


#437667 17 Teams to Take Part in DARPA’s ...

Among all of the other in-person events that have been totally wrecked by COVID-19 is the Cave Circuit of the DARPA Subterranean Challenge. DARPA has already hosted the in-person events for the Tunnel and Urban SubT circuits (see our previous coverage here), and the plan had always been for a trio of events representing three uniquely different underground environments in advance of the SubT Finals, which will somehow combine everything into one bonkers course.

While the SubT Urban Circuit event snuck in just under the lockdown wire in late February, DARPA made the difficult (but prudent) decision to cancel the in-person Cave Circuit event. What this means is that there will be no Systems Track Cave competition, which is a serious disappointment—we were very much looking forward to watching teams of robots navigating through an entirely unpredictable natural environment with a lot of verticality. Fortunately, DARPA is still running a Virtual Cave Circuit, and 17 teams will be taking part in this competition featuring a simulated cave environment that’s as dynamic and detailed as DARPA can make it.

From DARPA’s press releases:

DARPA’s Subterranean (SubT) Challenge will host its Cave Circuit Virtual Competition, which focuses on innovative solutions to map, navigate, and search complex, simulated cave environments, on November 17. Qualified teams have until Oct. 15 to develop and submit software-based solutions for the Cave Circuit via the SubT Virtual Portal, where their technologies will face unknown cave environments in the cloud-based SubT Simulator. Until then, teams can refine their roster of selected virtual robot models, choose sensor payloads, and continue to test autonomy approaches to maximize their score.

The Cave Circuit also introduces new simulation capabilities, including digital twins of Systems Competition robots to choose from, marsupial-style platforms combining air and ground robots, and breadcrumb nodes that can be dropped by robots to serve as communications relays. Each robot configuration has an associated cost, measured in SubT Credits – an in-simulation currency – based on performance characteristics such as speed, mobility, sensing, and battery life.

Each team’s simulated robots must navigate realistic caves, with features including natural terrain and dynamic rock falls, while they search for and locate various artifacts on the course within five meters of accuracy to score points during a 60-minute timed run. A correct report is worth one point. Each course contains 20 artifacts, which means each team has the potential for a maximum score of 20 points. Teams can leverage numerous practice worlds and even build their own worlds using the cave tiles found in the SubT Tech Repo to perfect their approach before they submit one official solution for scoring. The DARPA team will then evaluate the solution on a set of hidden competition scenarios.
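
Boiled down, that scoring rule is simple: a report earns one point if it correctly identifies an artifact and places it within five meters, during the 60-minute run, up to 20 points per course. Here is a simplified sketch of the logic (not DARPA's official scoring code; the report format is hypothetical).

```python
# Simplified sketch of SubT virtual scoring: one point per correct artifact report,
# meaning right artifact type within 5 m, submitted during a 60-minute run.
import math

MAX_ERROR_M = 5.0
RUN_LENGTH_S = 60 * 60

def score_run(reports, artifacts):
    """reports: list of (time_s, type, (x, y, z)); artifacts: list of (type, (x, y, z))."""
    remaining = list(artifacts)
    points = 0
    for t, kind, pos in reports:
        if t > RUN_LENGTH_S:
            continue
        for artifact in remaining:
            a_kind, a_pos = artifact
            if kind == a_kind and math.dist(pos, a_pos) <= MAX_ERROR_M:
                points += 1
                remaining.remove(artifact)   # each artifact can only be scored once
                break
    return points  # maximum is len(artifacts), i.e. 20 on a competition course

print(score_run(
    reports=[(120, "backpack", (10.0, 2.0, -3.0))],
    artifacts=[("backpack", (12.0, 1.0, -3.0)), ("helmet", (80.0, 5.0, -10.0))],
))
```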

Of the 17 qualified teams (you can see all of them here), there are a handful that we’ll quickly point out. Team BARCS, from Michigan Tech, was the winner of the SubT Virtual Urban Circuit, meaning that they may be the team to beat on Cave as well, although the course is likely to be unique enough that things will get interesting. Some Systems Track teams to watch include Coordinated Robotics, CTU-CRAS-NORLAB, MARBLE, NUS SEDS, and Robotika, and there are also a handful of brand new teams as well.

Now, just because there’s no dedicated Cave Circuit for the Systems Track teams, it doesn’t mean that there won’t be a Cave component (perhaps even a significant one) in the final event, which as far as we know is still scheduled to happen in fall of next year. We’ve heard that many of the Systems Track teams have been testing out their robots in caves anyway, and as the virtual event gets closer, we’ll be doing a sort of Virtual Systems Track series that highlights how different teams are doing mock Cave Circuits in caves they’ve found for themselves.

For more, we checked in with DARPA SubT program manager Dr. Timothy H. Chung.

IEEE Spectrum: Was it a difficult decision to cancel the Systems Track for Cave?

Tim Chung: The decision to go virtual only was heart-wrenching, because I think DARPA’s role is to offer up opportunities that may be unimaginable for some of our competitors, like opening up a cave-type site for this competition. We crawled and climbed through a number of these sites, and I share the sense of disappointment that both our team and the competitors have that we won’t be able to share all the advances that have been made since the Urban Circuit. But what we’ve been able to do is pour a lot of our energy and the insights that we got from crawling around in those caves into what’s going to be a really great opportunity on the Virtual Competition side. And whether it’s a global pandemic, or just lack of access to physical sites like caves, virtual environments are an opportunity that we want to develop.

“The simulator offers us a chance to look at where things could be … it really allows for us to find where some of those limits are in the technology based only on our imagination.”
—Timothy H. Chung, DARPA

What kind of new features will be included in the Virtual Cave Circuit for this competition?

I’m really excited about these particular features because we’re seeing an opportunity for increased synergy between the physical and the virtual. The first I’d say is that we scanned some of the Systems Track robots using photogrammetry and combined that with some additional models that we got from the systems competitors themselves to turn their systems robots into virtual models. We often talk about the sim to real transfer and how successful we can get a simulation to transfer over to the physical world, but now we’ve taken something from the physical world and made it virtual. We’ve validated the controllers as well as the kinematics of the robots, we’ve iterated with the systems competitors themselves, and now we have these 13 robots (air and ground) in the SubT Tech Repo that now all virtual competitors can take advantage of.

We also have additional robot capability. Those comms breadcrumbs are common among many of the competitors, so we’ve adopted that in the virtual world, and now you have comms relay nodes that are baked into the SubT Simulator—you can have either six or twelve comms nodes that you can drop from a variety of our ground robot platforms. We have the marsupial deployment capability now, so now we have parent ground robots that can be mixed and matched with different child drones to become marsupial pairs.

And this is something I’ve been planning for for a while: we now have the ability to trigger things like rock falls. They still don’t quite look like Indiana Jones with the boulder coming down the corridor, but this comes really close. In addition to it just being an interesting and realistic consideration, we get to really dynamically test and stress the robots’ ability to navigate and recognize that something has changed in the environment and respond to it.

Image: DARPA

DARPA is still running a Virtual Cave Circuit, and 17 teams will be taking part in this competition featuring a simulated cave environment.

No simulation is perfect, so can you talk to us about what kinds of things aren’t being simulated right now? Where does the simulator not match up to reality?

I think that question is foundational to any conversation about simulation. I’ll give you a couple of examples:

We have the ability to represent wholesale damage to a robot, but it’s not at the actuator or component level. So there’s not a reliability model, although I think that would be really interesting to incorporate so that you could do assessments on things like mean time to failure. But if a robot falls off a ledge, it can be disabled by virtue of being too damaged to continue.

With communications, and this is one that’s near and dear not only to my heart but also to all of those that have lived through developing communication systems and robotic systems, we’ve gone through and conducted RF surveys of underground environments to get a better handle on what propagation effects are. There’s a lot of research that has gone into this, and trying to carry through some of that realism, we do have path loss models for RF communications baked into the SubT Simulator. For example, when you drop a bread crumb node, it’s using a path loss model so that it can represent the degradation of signal as you go farther into a cave. Now, we’re not modeling it at the Maxwell equations level, which I think would be awesome, but we’re not quite there yet.
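
For reference, a standard log-distance form of such a path loss model looks like the sketch below. The exponent and reference loss used here are illustrative values, not the SubT Simulator's actual parameters.

```python
# Log-distance path loss: a common way to model RF signal decay with range.
# The exponent and reference loss below are illustrative, not the simulator's values.
import math

def path_loss_db(distance_m, ref_distance_m=1.0, ref_loss_db=40.0, exponent=3.0):
    """Path loss in dB; exponents of roughly 2 to 4 are typical,
    and can be higher still in cluttered tunnels and caves."""
    d = max(distance_m, ref_distance_m)
    return ref_loss_db + 10.0 * exponent * math.log10(d / ref_distance_m)

def received_dbm(tx_power_dbm, distance_m):
    # Dropping a relay node partway down a passage effectively shortens
    # the distance each hop has to cover, keeping received power usable.
    return tx_power_dbm - path_loss_db(distance_m)

for d in (5, 50, 200):
    print(f"{d} m: {received_dbm(20.0, d):.1f} dBm")
```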

We do have things like battery depletion, sensor degradation to the extent that simulators can degrade sensor inputs, and things like that. It’s just amazing how close we can get in some places, and how far away we still are in others, and I think showing where the limits are of how far you can get with simulation is all part and parcel of why the SubT Challenge wants to have both Systems and Virtual tracks. Simulation can be an accelerant, but it’s not going to be the panacea for development and innovation, and I think all the competitors are cognizant of those limitations.

One of the most amazing things about the SubT Virtual Track is that all of the robots operate fully autonomously, without the human(s) in the loop that the System Track teams have when they compete. Why make the Virtual Track even more challenging in that way?

I think it’s one of the defining, delineating attributes of the Virtual Track. Our continued vision for the simulation side is that the simulator offers us a chance to look at where things could be, and allows for us to explore things like larger scales, or increased complexity, or types of environments that we can’t physically gain access to—it really allows for us to find where some of those limits are in the technology based only on our imagination, and this is one of the intrinsic values of simulation.

But I think finding a way to incorporate human input, or more generally human factors like teleoperation interfaces and the in-situ stress that you might not be able to recreate in the context of a virtual competition provided a good reason for us to delineate the two competitions, with the Virtual Competition really being about the role of fully autonomous or self-sufficient systems going off and doing their solution without human guidance, while also acknowledging that the real world has conditions that would not necessarily be represented by a fully simulated version. Having said that, I think cognitive engineering still has an incredibly important role to play in human robot interaction.

What do we have to look forward to during the Virtual Competition Showcase?

We have a number of additional features and capabilities that we’ve baked into the simulator that will allow for us to derive some additional insights into our competition runs. Those insights might involve things like the performance of one or more robots in a given scenario, or the impact of the environment on different types of robots, and what I can tease is that this will be an opportunity for us to showcase both the technology and also the excitement of the robots competing in the virtual environment. I’m trying not to give too many spoilers, but we’ll have an opportunity to really get into the details.

Check back as we get closer to the 17 November event for more on the DARPA SubT Challenge.


#437639 Boston Dynamics’ Spot Is Helping ...

In terms of places where you absolutely want a robot to go instead of you, what remains of the utterly destroyed Chernobyl Reactor 4 should be very near the top of your list. The reactor, which suffered a catastrophic meltdown in 1986, has been covered up in almost every way possible in an effort to keep its nuclear core contained. But eventually, that nuclear material is going to have to be dealt with somehow, and in order to do that, it’s important to understand which bits of it are just really bad, and which bits are the actual worst. And this is where Spot is stepping in to help.

The big open space that Spot is walking through is right next to what’s left of Reactor 4. Within six months of the disaster, Reactor 4 was covered in a sarcophagus made of concrete and steel to try and keep all the nasty nuclear fuel from leaking out more than it already had, and it still contains “30 tons of highly contaminated dust, 16 tons of uranium and plutonium, and 200 tons of radioactive lava.” Oof. Over the next 10 years, the sarcophagus slowly deteriorated, and despite the addition of that gigantic network of steel support beams that you can see in the video, in the late 1990s it was decided to erect an enormous building over the entire mess to try and stabilize it for as long as possible.

Reactor 4 is now snugly inside the massive New Safe Confinement (NSC) structure, and the idea is that eventually, the structure will allow for the safe disassembly of what’s left of the reactor, although nobody is quite sure how to do that. This is all just to say that the area inside of the containment structure offers a lot of good opportunities for robots to take over from humans.

This particular Spot is owned by the U.K. Atomic Energy Authority, and was packed off to Ukraine with the assistance of the Robotics and Artificial Intelligence in Nuclear (RAIN) initiative and the National Centre for Nuclear Robotics. Dr. Dave Megson-Smith, who is a researcher at the University of Bristol, in the U.K., and part of the Hot Robotics Facility at the National Nuclear User Facility, was one of the scientists lucky enough to accompany Spot on its adventure. Megson-Smith specializes in sensor development, and he equipped Spot with a collimated radiation sensor in addition to its mapping payload. “We actually built a map of the radiation coming out of the front wall of Chernobyl power plant as we were in there with it,” Megson-Smith told us, and was able to share this picture, which shows a map of gamma photon count rate:

Image: University of Bristol

Researchers equipped Spot with a collimated radiation sensor and used one of the data readings (gamma photon count rate) to create a map of the radiation coming out of the front wall of the Chernobyl power plant.
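
Conceptually, building such a map amounts to binning the sensor's count-rate readings by the robot's position as it walks. Here is a minimal sketch of that idea; the data format and grid size are assumptions, not the Bristol team's actual pipeline.

```python
# Minimal sketch of gridding gamma count-rate readings into a 2D radiation map.
# The reading format and cell size are assumptions, not the actual Bristol pipeline.
import numpy as np

def build_radiation_map(readings, cell_m=0.5, extent_m=20.0):
    """readings: iterable of (x_m, y_m, counts_per_second) taken along the walk."""
    n = int(extent_m / cell_m)
    total = np.zeros((n, n))
    hits = np.zeros((n, n))
    for x, y, cps in readings:
        i, j = int(x / cell_m), int(y / cell_m)
        if 0 <= i < n and 0 <= j < n:
            total[i, j] += cps
            hits[i, j] += 1
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(hits > 0, total / hits, np.nan)  # mean count rate per cell

rad_map = build_radiation_map([(1.2, 3.4, 220.0), (1.3, 3.5, 240.0), (5.0, 7.5, 30.0)])
print(np.nanmax(rad_map))  # hottest cell observed so far
```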

So what’s the reason you’d want to use a very expensive legged robot to wander around what looks like a very flat and robot-friendly floor? As it turns out, the floor is very dusty in there, and a priority inside the NSC is to keep dust down as much as possible, since the dust is radioactive and gets on everything and is consequently the easiest way for radioactivity to escape the NSC. “You want to minimize picking up material, so we consider the total contact surface area,” says Megson-Smith. “If you use a legged system rather than a wheeled or tracked system, you have a much smaller footprint and you disturb the environment a lot less.” While it’s nice that Spot is nimble and can climb stairs and stuff, tracked vehicles can do that as well, so in this case, the primary driving factor of choosing a robot to work inside Chernobyl is minimizing those contact points.

Right now, routine weekly measurements in contaminated spaces at Chernobyl are done by humans, which puts those humans at risk. Spot, or a robot like it, could potentially take over from those humans, as a sort of “automated safety checker” able to work in medium-level contaminated environments. As far as more dangerous areas go, there’s a lot of uncertainty about what Spot is actually capable of, according to Megson-Smith. “What you think the problems are, and what the industry thinks the problems are, are subtly different things,” he says. “We were thinking that we’d have to make robots incredibly radiation proof to go into these contaminated environments, but they said, ‘Can you just give us a system that we can send into places where humans already can go, but where we just don’t want to send humans?’”

Making robots incredibly radiation proof is challenging, and without extensive testing and ruggedizing, failures can be frequent, as many robots discovered at Fukushima. Indeed, Megson-Smith says that in Fukushima there’s a particular section that’s known as a “robot graveyard” where robots just go to die, and they’ve had to up their standards again and again to keep the robots from failing. “So the thing they’re worried about with Spot is, what is its tolerance? What components will fail, and what can we do to harden it?” he says. “We’re approaching Boston Dynamics at the moment to see if they’ll work with us to address some of those questions.”

There’s been a small amount of testing of how robots fare under harsh radiation, Megson-Smith told us, including (relatively recently) a KUKA LBR800 arm, which “stopped operating after a large radiation dose of 164.55(±1.09) Gy to its end effector, and the component causing the failure was an optical encoder.” And in case you’re wondering how much radiation that is, a 1 to 2 Gy dose to the entire body gets you acute radiation sickness and possibly death, while 8 Gy is usually just straight-up death. The goal here is not to kill robots (I mean, it sort of is), but as Megson-Smith says, “if we can work out what the weak points are in a robotic system, can we address those, can we redesign those, or at least understand when they might start to fail?” Now all he has to do is convince Boston Dynamics to send them a Spot that they can zap until it keels over.

The goal for Spot in the short term is fully autonomous radiation mapping, which seems very possible. It’ll also get tested with a wider range of sensor packages, and (happily for the robot) this will all take place safely back at home in the U.K. As far as Chernobyl is concerned, robots will likely have a substantial role to play in the near future. “Ultimately, Chernobyl has to be taken apart and decommissioned. That’s the long-term plan for the facility. To do that, you first need to understand everything, which is where we come in with our sensor systems and robotic platforms,” Megson-Smith tells us. “Since there are entire swathes of the Chernobyl nuclear plant where people can’t go in, we’d need robots like Spot to do those environmental characterizations.”


#437635 Toyota Research Demonstrates ...

Over the last several years, Toyota has been putting more muscle into forward-looking robotics research than just about anyone. In addition to the Toyota Research Institute (TRI), there’s that massive 175-acre robot-powered city of the future that Toyota still plans to build next to Mount Fuji. Even Toyota itself acknowledges that it might be crazy, but that’s just how they roll—as TRI CEO Gill Pratt told me a while back, when Toyota decides to do something, they really do go all-in on it.

TRI has been focusing heavily on home robots, which is reflective of the long-term nature of what TRI is trying to do, because home robots are both where we’ll need robots the most and where it’s going to be hardest to deploy them. The unpredictable nature of homes, and the fact that homes tend to have squishy fragile people in them, are robot-unfriendly characteristics, but as the population continues to age (an increasingly acute problem in Japan), homes offer an enormous amount of potential for helping us maintain our independence.

Today, Toyota is showing off some of the research that it’s been working on recently, in the form of a virtual reality presentation in lieu of an in-person press event. For journalists, TRI pre-loaded the recording onto a VR headset, which was FedEx’ed to my house. You can watch the entire 40-minute presentation in 360 video on YouTube (or in VR if you have a headset of your own), but if you don’t watch the whole thing, you should at least check out the full-on GLaDOS (with arms) that TRI thinks belongs in your home.

The presentation features an introduction from Gill Pratt, who looks entirely too comfortable embedded inside of one of TRI’s telepresence robots. The event also covers a lot of territory, but the highlight is almost certainly the new hardware that TRI demonstrates.

Soft bubble gripper

Photo: TRI

This is a “soft bubble gripper,” under development at TRI’s Cambridge, Mass., branch. These passively-compliant, air-filled grippers make it easier to grasp many different kinds of objects safely, but the nifty thing is that they’ve got cameras inside of them watching a pattern of dots on the interior of the soft membrane.

When the outside of the bubble makes contact with an object, the bubble deforms, and the deformation of the dot pattern on the inside can be tracked by the camera to determine both directions and magnitudes of forces. This is a concept that we’ve seen elsewhere before, but TRI’s implementation is a clever way of making an inherently safe end effector that can still perform all the sensing you need it to do for relatively complex manipulation tasks.
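
In essence, the camera compares where each dot sits now against where it sat before contact, and the displacement field encodes the force. Below is a heavily simplified sketch of that idea, with made-up calibration constants and a crude force model rather than TRI's published method.

```python
# Toy illustration of dot-pattern force sensing: mean in-plane dot displacement
# stands in for shear force, and dot-pattern spread stands in for indentation
# (normal force). Stiffness constants are invented calibration values, not TRI's.
import numpy as np

def estimate_force(dots_ref, dots_now, k_shear=0.8, k_normal=50.0):
    """dots_ref, dots_now: (N, 2) arrays of dot pixel positions before/after contact."""
    disp = dots_now - dots_ref
    shear = k_shear * disp.mean(axis=0)                     # in-plane direction + magnitude
    spread_ref = np.linalg.norm(dots_ref - dots_ref.mean(axis=0), axis=1).mean()
    spread_now = np.linalg.norm(dots_now - dots_now.mean(axis=0), axis=1).mean()
    normal = k_normal * max(0.0, spread_now - spread_ref)   # bulging spreads the dots
    return shear, normal

ref = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
now = ref + np.array([1.5, 0.5]) + (ref - ref.mean(axis=0)) * 0.05  # shifted and spread
print(estimate_force(ref, now))
```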

The bubble gripper was presented at ICRA this year, and you can read the technical paper here.

Ceiling-mounted home robot

Photo: TRI

I don’t know whether robots dangling from the ceiling seemed sinister before Portal, but they sure as heck do to me after playing through that game a couple of times, an impression that’s since been reinforced by AUTO from WALL-E.

The reason that we generally see robots mounted on the floor or on tables or on mobile bases is that we’re bipeds, not bats, and giving a robot access to a human-like workspace is easiest to do if you also give that robot a human-like position and orientation. And if you want to be able to reach stuff high up, you do what TRI did with their previous generation of kitchen manipulator, and just give it the ability to make itself super tall. But TRI is convinced that the ceiling is a good place to put our future home robots:

One innovative concept is a “gantry robot” that would descend from an overhead framework to perform tasks such as loading the dishwasher, wiping surfaces, and clearing clutter. By traveling on the ceiling, the robot avoids the problems of navigating household floor clutter and navigating cramped spaces. When not in use, the robot would tuck itself up out of the way. To further investigate this idea, the team has built a laboratory prototype robot that can do all the same tasks as a floor-based mobile robot but with the innovative overhead mobility system.

Another obvious problem with the gantry robot is that you have to install all kinds of stuff in your ceiling for this to work, which makes it very impractical (if not totally impossible) to introduce a system like this into a home that wasn’t built specifically for it. If, however, you do build a home with a robot like this in mind, the animation below from TRI shows how it could be extra useful. Suddenly, stairs are a non-issue. Payload is presumably also a non-issue, since loads can be transferred to the ceiling. Batteries become unnecessary, so the whole robot can be much lighter weight, which in turn makes it safer. Sensors get a fantastic view, and obstacle avoidance becomes trivial.

Robots as “time machines”

Photo: TRI

TRI’s presentation covered more than what we’ve highlighted here—our focus has been on the hardware prototypes, but TRI had more to talk about, including learning through demonstration, scaling learning through simulation, and how TRI has been working with users to figure out what research directions should be explored. It’s all available right now on YouTube, and it’s well worth 40 minutes of your time.

“What we’re really focused on is this principle idea of amplifying, rather than replacing, human beings”
—Gill Pratt, TRI

It’s only been five years since Toyota announced the $1 billion investment that established TRI, and it feels like the progress that’s been made since then has been substantial. It’s not often that vision, resources, and long-term commitment come together like this, and TRI’s emphasis on making life better for people is one of the things that helps to keep us optimistic about the future of robotics.

“What we’re really focused on is this principle idea of amplifying, rather than replacing, human beings,” Gill Pratt told us. “And what it means to amplify a person, particularly as they’re aging—what we’re really trying to do is build a time machine. This may sound fanciful, and of course we can’t build a real time machine, but maybe we can build robotic assistants to make our lives as we age seem as if we are actually using a time machine.” He explains that it doesn’t mean building robots for convenience or to do our jobs for us. “It means building technology that enables us to continue to live and to work and to relate to each other as if we were younger,” he says. “And that’s really what our main goal is.”
