#437721 Video Friday: Child Robot Learning to ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

We first met Ibuki, Hiroshi Ishiguro’s latest humanoid robot, a couple of years ago. A recent video shows how Ishiguro and his team are teaching the robot to express its emotional state through gait and body posture while moving.

This paper presents a subjective evaluation of the emotions of a wheeled mobile humanoid robot expressing emotions during movement by replicating human gait-induced upper body motion. For this purpose, we proposed a robot equipped with a vertical oscillation mechanism that generates such motion by focusing on the human center-of-mass trajectory. In the experiment, participants watched videos of the robot’s different emotional gait-induced upper body motions, assessed the type of emotion shown, and rated their confidence in their answer.

[ Hiroshi Ishiguro Lab ] via [ RobotStart ]

ICYMI: This is a zinc-air battery made partly of Kevlar that can be used to support weight, not just add to it.

Just as biological fat reserves store energy in animals, a new rechargeable zinc battery integrates into the structure of a robot to provide much more energy, a team led by the University of Michigan has shown.

The new battery works by passing hydroxide ions between a zinc electrode and the air side through an electrolyte membrane. That membrane is partly a network of aramid nanofibers—the carbon-based fibers found in Kevlar vests—and a new water-based polymer gel. The gel helps shuttle the hydroxide ions between the electrodes. Made with cheap, abundant and largely nontoxic materials, the battery is more environmentally friendly than those currently in use. The gel and aramid nanofibers will not catch fire if the battery is damaged, unlike the flammable electrolyte in lithium ion batteries. The aramid nanofibers could be upcycled from retired body armor.

[ University of Michigan ]

In what they say is the first large-scale study of the interactions between sound and robotic action, researchers at CMU’s Robotics Institute found that sounds could help a robot differentiate between objects, such as a metal screwdriver and a metal wrench. Hearing also could help robots determine what type of action caused a sound and help them use sounds to predict the physical properties of new objects.

[ CMU ]

Captured on Aug. 11 during the second rehearsal of the OSIRIS-REx mission’s sample collection event, this series of images shows the SamCam imager’s field of view as the NASA spacecraft approaches asteroid Bennu’s surface. The rehearsal brought the spacecraft through the first three maneuvers of the sampling sequence to a point approximately 131 feet (40 meters) above the surface, after which the spacecraft performed a back-away burn.

These images were captured over a 13.5-minute period. The imaging sequence begins at approximately 420 feet (128 meters) above the surface – before the spacecraft executes the “Checkpoint” maneuver – and runs through to the “Matchpoint” maneuver, with the last image taken approximately 144 feet (44 meters) above the surface of Bennu.

[ NASA ]

The DARPA AlphaDogfight Trials Final Event took place yesterday; the livestream is like 5 hours long, but you can skip ahead to 4:39 ish to see the AI winner take on a human F-16 pilot in simulation.

Some things to keep in mind about the result: The AI had perfect situational knowledge, while the human pilot had to rely on his eyeballs. In particular, the AI did very well at lining up its (virtual) gun with the human during fast passing maneuvers, which is the sort of thing that autonomous systems excel at but is not necessarily reflective of better strategy.

[ DARPA ]

Coming soon from Clearpath Robotics!

[ Clearpath ]

This video introduces Preferred Networks’ Hand type A, a tendon-driven robot gripper with a passively switchable underactuated surface.

[ Preferred Networks ]

CYBATHLON 2020 will take place on 13–14 November 2020 at the teams’ home bases. Teams will set up their infrastructure for the competition and film their races. Instead of starting directly next to each other, the pilots will start individually, under the supervision of CYBATHLON officials. From Zurich, the competitions will be broadcast through a new platform in a unique live programme.

[ Cybathlon ]

In this project, we consider the task of autonomous car racing in the top-selling car racing game Gran Turismo Sport. Gran Turismo Sport is known for its detailed physics simulation of various cars and tracks. Our approach makes use of maximum-entropy deep reinforcement learning and a new reward design to train a sensorimotor policy to complete a given race track as fast as possible. We evaluate our approach in three different time trial settings with different cars and tracks. Our results show that the obtained controllers not only beat the built-in non-player character of Gran Turismo Sport, but also outperform the fastest known times in a dataset of personal best lap times of over 50,000 human drivers.
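
For readers unfamiliar with the term, maximum-entropy RL augments the usual expected-return objective with an entropy bonus that keeps the policy exploratory. Here is a minimal, illustrative sketch of a soft-actor-critic-style policy loss in Python; it is not the paper's code, and the sample values are made up:

```python
import numpy as np

def soft_policy_loss(log_probs, q_values, alpha=0.2):
    """Maximum-entropy policy objective: minimizing E[alpha * log pi - Q]
    is equivalent to maximizing E[Q + alpha * entropy], so the agent is
    rewarded both for return (via Q) and for staying exploratory."""
    return np.mean(alpha * np.asarray(log_probs) - np.asarray(q_values))

# Two actions sampled from the current policy, with critic estimates:
print(soft_policy_loss(log_probs=[-1.2, -0.7], q_values=[5.0, 4.3]))
```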

[ UZH ]

With the help of the pitasc software from Fraunhofer IPA, an assembly task is no longer programmed point by point but relative to the workpiece. As a result, pitasc can adapt the assembly process to new product variants simply by updating its parameters.

[ Fraunhofer ]

In this video, a multi-material robot simulator is used to design a shape-changing robot, which is then transferred to physical hardware. The simulated and real robots can use shape change to switch between rolling gaits and inchworm gaits, to locomote in multiple environments.

[ Yale ]

This work presents a novel loco-manipulation control framework for the execution of complex tasks with kinodynamic constraints using mobile manipulators. As a representative example, we consider the handling and re-positioning of pallet jacks in unstructured environments. While these results provide a proof of concept for the effectiveness of the proposed framework, they also demonstrate the high potential of mobile manipulators for relieving human workers of such repetitive and labor-intensive tasks. We believe that this extended functionality can contribute to increasing the usability of mobile manipulators in different application scenarios.

[ Paper ] via [ IIT ]

I don’t know why this dinosaur ice cream serving robot needs to blow smoke out of its nose, but I like it.

[ Connected Robotics ] via [ RobotStart ]

Guardian S remote visual inspection and surveillance robots make laying cable runs in confined or hard-to-reach spaces easy. With advanced maneuverability and the ability to climb vertical, ferrous surfaces, the robot reaches areas that are not always easily accessible.

[ Sarcos ]

Looks like the company that bought Anki is working on an add-on to let cars charge while they drive.

[ Digital Dream Labs ]

Chris Atkeson gives a brief talk for the CMU Robotics Institute orientation.

[ CMU RI ]

A UofT Robotics Seminar, featuring Russ Tedrake from MIT and TRI on “Feedback Control for Manipulation.”

Control theory has an answer for just about everything, but seems to fall short when it comes to closing a feedback loop using a camera, dealing with the dynamics of contact, and reasoning about robustness over the distribution of tasks one might find in the kitchen. Recent examples from RL and imitation learning demonstrate great promise, but don’t leverage the rigorous tools from systems theory. I’d like to discuss why, and describe some recent results of closing feedback loops from pixels for “category-level” robot manipulation.

[ UofT ]

#437701 Robotics, AI, and Cloud Computing ...

IBM must be brimming with confidence about its new automated system for performing chemical synthesis because Big Blue just had twenty or so journalists demo the complex technology live in a virtual room.

IBM even had one of the journalists choose the molecule for the demo: a molecule in a potential COVID-19 treatment. And then we watched as the system synthesized and tested the molecule and provided its analysis in a PDF document that we all saw on that journalist’s computer. It all worked; again, that’s confidence.

The complex system is based upon technology IBM started developing three years ago that uses artificial intelligence (AI) to predict chemical reactions. In August 2018, IBM made this service available via the Cloud and dubbed it RXN for Chemistry.

Now, the company has added a new wrinkle to its Cloud-based AI: robotics. This new and improved system is no longer named simply RXN for Chemistry, but RoboRXN for Chemistry.

All of the journalists assembled for this live demo of RoboRXN could watch as the robotic system executed various steps, such as adding a small amount of reagent to the reactor and then adding the solvent. The robotic system carried out the entire set of procedures—completing the synthesis and analysis of the molecule—in eight steps.

Image: IBM Research

IBM RXN helps predict chemical reaction outcomes or design retrosynthesis in seconds.

In regular practice, a user will be able to suggest a combination of molecules they would like to test. The AI will pick up the order and task a robotic system to run the reactions necessary to produce and test the molecule. Users will be provided analyses of how well their molecules performed.

Back in March of this year, Silicon Valley-based startup Strateos demonstrated a similar system of its own, which also employed robotics to help researchers working from the Cloud create new chemical compounds. What distinguishes IBM’s system, however, is its incorporation of a third element: the AI.

The backbone of IBM’s AI model is a machine learning translation method that treats chemistry like language translation. It translates the language of chemistry by converting reactants and reagents to products, using the SMILES (Simplified Molecular Input Line Entry System) representation to describe chemical entities.
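
A rough illustration of that framing (the tokenizer here is a simplified stand-in, not IBM's actual pipeline): a reaction becomes a source/target pair of SMILES strings, and a sequence-to-sequence model is trained on such pairs exactly as in natural-language translation.

```python
# Reaction prediction framed as "translation":
# source = reactants and reagents in SMILES, target = product in SMILES.
reaction_src = "CC(=O)O.OCC"   # acetic acid + ethanol
reaction_tgt = "CC(=O)OCC"     # ethyl acetate

def tokenize_smiles(s):
    """Naive character-level tokenization; production systems use
    regex tokenizers that keep multi-character atoms like 'Cl' intact."""
    return list(s)

src_tokens = tokenize_smiles(reaction_src)
tgt_tokens = tokenize_smiles(reaction_tgt)
# A transformer encoder-decoder trained on (src_tokens, tgt_tokens)
# pairs then "translates" unseen reactant strings into products.
print(src_tokens, tgt_tokens)
```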

IBM has also leveraged an automated, data-driven strategy to ensure the quality of its data. Researchers there used millions of chemical reactions to teach the AI system chemistry, but contained within that data set were errors. So how did IBM clean this so-called noisy data to eliminate the potential for bad models?

According to Alessandra Toniato, a researcher at IBM Zurich, the team implemented what they dubbed the “forgetting experiment.”

Toniato explains that, in this approach, they asked the AI model how sure it was that the chemical examples it was given were examples of correct chemistry. Over the course of training, the AI identified chemistry that it had “never learnt,” “forgotten six times,” or “never forgotten.” The examples that were “never forgotten” were the clean ones, and in this way the team was able to clean the data the AI had been presented with.
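
A minimal sketch of the bookkeeping behind such an experiment, assuming per-epoch correctness checks (the data structures are illustrative, not IBM's code):

```python
from collections import defaultdict

forget_counts = defaultdict(int)   # example id -> times forgotten
ever_correct = defaultdict(bool)   # learned at least once?
last_correct = defaultdict(bool)   # correct on the previous pass?

def update_forgetting(epoch_predictions):
    """epoch_predictions maps example id -> bool (predicted correctly?).
    An example is 'forgotten' when it flips from correct to incorrect."""
    for ex_id, correct in epoch_predictions.items():
        if last_correct[ex_id] and not correct:
            forget_counts[ex_id] += 1
        if correct:
            ever_correct[ex_id] = True
        last_correct[ex_id] = correct

def never_forgotten(example_ids):
    # Clean examples: learned at some point and never forgotten after.
    return [i for i in example_ids
            if ever_correct[i] and forget_counts[i] == 0]

update_forgetting({1: True, 2: True})
update_forgetting({1: True, 2: False})   # example 2 forgotten once
print(never_forgotten([1, 2]))           # -> [1]
```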

While the AI has always been part of RXN for Chemistry, the robotics is the newest element. The main benefit of turning the execution of reactions over to a robotic system is that it is expected to free chemists from the often tedious process of designing a synthesis from scratch, says Matteo Manica, a research staff member in Cognitive Health Care and Life Sciences at IBM Research Zürich.

“In this demo, you could see how the system is synergistic between a human and AI,” said Manica. “Combine that with the fact that we can run all these processes with a robotic system 24/7 from anywhere in the world, and you can see how it will really help us speed up the whole process.”

There appear to be two business models that IBM is pursuing with its latest technology. One is to deploy the entire system on the premises of a company. The other is to offer licenses to private Cloud installations.

Photo: Michael Buholzer

Teodoro Laino of IBM Research Europe.

“From a business perspective you can think of having a system like we demonstrated being replicated on the premise within companies or research groups that would like to have the technology available at their disposal,” says Teodoro Laino, distinguished RSM, manager at IBM Research Europe. “On the other hand, we are also pushing at bringing the entire system to a service level.”

Just as IBM is brimming with confidence about its new technology, the company also has grand aspirations for it.

Laino adds: “Our aim is to provide chemical services across the world, a sort of Amazon of chemistry, where instead of looking for chemistry already in stock, you are asking for chemistry on demand.”

#437695 Video Friday: Even Robots Know That You ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
Other Than Human – September 3-10, 2020 – Stockholm, Sweden
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today's videos.

From the Robotics and Perception Group at UZH comes Flightmare, a simulation environment for drones that combines a slick rendering engine with a robust physics engine that can run as fast as your system can handle.

Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. Those two components are totally decoupled and can run independently from each other. Flightmare comes with several desirable features: (i) a large multi-modal sensor suite, including an interface to extract the 3D point-cloud of the scene; (ii) an API for reinforcement learning which can simulate hundreds of quadrotors in parallel; and (iii) an integration with a virtual-reality headset for interaction with the simulated environment. Flightmare can be used for various applications, including path-planning, reinforcement learning, visual-inertial odometry, deep learning, human-robot interaction, etc.
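
To give a feel for the parallel-simulation workflow, here is a sketch of a vectorized training loop; the environment class below is a hypothetical stand-in (see the Flightmare documentation for the project's real Python bindings and names):

```python
import numpy as np

class VecQuadrotorEnv:
    """Hypothetical stand-in for a vectorized quadrotor environment
    that steps many simulated drones in parallel, gym-style."""
    def __init__(self, num_envs=100):
        self.num_envs = num_envs
    def reset(self):
        return np.zeros((self.num_envs, 12))       # one state per drone
    def step(self, actions):
        obs = np.zeros((self.num_envs, 12))
        rewards = np.zeros(self.num_envs)
        dones = np.zeros(self.num_envs, dtype=bool)
        return obs, rewards, dones, {}

env = VecQuadrotorEnv(num_envs=100)
obs = env.reset()
for _ in range(10):
    # 4 rotor-thrust commands per drone, here just random exploration.
    actions = np.random.uniform(-1, 1, size=(env.num_envs, 4))
    obs, rewards, dones, info = env.step(actions)
```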

[ Flightmare ]

Quadruped robots yelling at people to maintain social distancing is really starting to become a thing, for better or worse.

We introduce a fully autonomous surveillance robot based on a quadruped platform that can promote social distancing in complex urban environments. Specifically, to achieve autonomy, we mount multiple cameras and a 3D LiDAR on the legged robot. The robot then uses an onboard real-time social distancing detection system to track nearby pedestrian groups. Next, the robot uses a crowd-aware navigation algorithm to move freely in highly dynamic scenarios. The robot finally uses a crowd-aware routing algorithm to effectively promote social distancing by using human-friendly verbal cues to send suggestions to overcrowded pedestrians.

[ Project ]

Thanks Fan!

The Personal Robotics Group at Oregon State University is looking at UV germicidal irradiation for surface disinfection with a Fetch Manipulator Robot.

Fetch Robot disinfecting dance party woo!

[ Oregon State ]

How could you not take a mask from this robot?

[ Reachy ]

This work presents the design, development, and autonomous navigation of the alpha version of our Resilient Micro Flyer, a new type of collision-tolerant small aerial robot tailored to traversing and searching within highly confined environments, including manhole-sized tubes. The robot is particularly lightweight and agile, while its rigid collision-tolerant design renders it resilient during forcible interaction with the environment. The design is further enhanced through passive flaps that ensure smoother, more compliant collisions, which were identified to be especially useful in very confined settings.

[ ARL ]

Pepper can make maps and autonomously navigate, which is interesting, but not as interesting as its posture when it's wandering around.

Dat backing into the charging dock tho.

[ Pepper ]

RatChair is a strategy for displacing big objects by attaching relatively small vibration sources. After learning how several random bursts of vibration affect its pose, an optimization algorithm discovers the optimal sequence of vibration patterns required to (slowly but surely) move the object to a specified position.
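
The idea is a classic learn-then-plan loop: estimate how each vibration pattern nudges the object's planar pose, then search for the sequence of patterns whose predicted endpoint lands closest to the goal. A toy sketch with made-up effect numbers (not the KAIST implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Learned from random trials: mean planar pose change (dx, dy, dtheta)
# produced by each of 8 vibration patterns. Values here are invented.
pattern_effects = rng.normal(0, 0.05, size=(8, 3))

def rollout(seq, start=np.zeros(3)):
    """Predict the final pose after applying a sequence of patterns."""
    pose = start.copy()
    for k in seq:
        dx, dy, dth = pattern_effects[k]
        c, s = np.cos(pose[2]), np.sin(pose[2])
        pose += np.array([c * dx - s * dy, s * dx + c * dy, dth])
    return pose

def plan(goal, horizon=40, samples=2000):
    """Random-shooting search for the sequence ending nearest the goal."""
    best, best_err = None, np.inf
    for _ in range(samples):
        seq = rng.integers(0, 8, size=horizon)
        err = np.linalg.norm(rollout(seq)[:2] - goal)
        if err < best_err:
            best, best_err = seq, err
    return best

seq = plan(goal=np.array([0.5, 0.2]))  # shuffle half a meter sideways
```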

This is from 2015, why isn't all of my furniture autonomous yet?!

[ KAIST ]

The new SeaDrone Pro is designed to be the underwater equivalent of a quadrotor. This video is a rendering, but we've been assured that it does actually exist.

[ SeaDrone ]

Thanks Eduardo!

Porous Loops is a lightweight composite facade panel that shows the potential of 3D printing of mineral foams for building-scale applications.

[ ETH ]

Thanks Fan!

Here's an interesting idea for a robotic gripper: it's what appears to be a snap bracelet coupled to a pneumatic actuator that allows the snap bracelet to be reset.

[ Georgia Tech ]

Graze is developing a commercial robotic lawnmower. They're also doing a sort of crowdfunded investment thing, which probably explains the painfully overproduced nature of the following video:

A couple things about this: the hard part, which the video skips over almost entirely, is the mapping, localization, and understanding where to mow and where not to mow. The pitch deck seems to suggest that this is mostly done through computer vision, a thing that's perhaps easy to do under controlled, ideal conditions, but difficult to apply to a world full of lawns that are all different. The commercial aspect is interesting because golf courses are likely as standardized as you can get, but the emphasis here on how much money they can make without really addressing any of the technical stuff makes me raise an eyebrow or two.

[ Graze ]

The record & playback X-series arm demo allows the user to record the arm's movements while the motors are torqued off. Then, the user may torque the motors on and watch the movements they just made play back!

[ Interbotix ]

Shadow Robot has a new teleop system for its hand. I'm guessing that it's even trickier to use than it looks.

[ Shadow Robot ]

Quanser Interactive Labs is a collection of virtual hardware-based laboratory activities that supplement traditional or online courses. As with physical systems in the lab, students work with virtual twins of Quanser's most popular plants, develop their mathematical models, implement and simulate the dynamic behavior of these systems, design controllers, and validate them on high-fidelity 3D real-time virtual models. The virtual systems not only look like the real ones, they also behave like them, and can be manipulated, measured, and controlled like real devices. And finally, when students go to the lab, they can deploy their virtually validated designs on actual physical equipment.

[ Quanser ]

This video shows robot-assisted heart surgery. It's amazing to watch if you haven't seen this sort of thing before, but be aware that there is a lot of blood.

This video demonstrates a fascinating case of robotic left atrial myxoma excision, narrated by Joel Dunning, Middlesbrough, UK. The Robotic platform provides superior visualisation and enhanced dexterity, through keyhole incisions. Robotic surgery is an integral part of our Minimally Invasive Cardiothoracic Surgery Program.

[ Tristan D. Yan ]

Thanks Fan!

In this talk, we present our work on learning control policies directly in simulation that are deployed onto real drones without any fine tuning. The presentation covers autonomous drone racing, drone acrobatics, and uncertainty estimation in deep networks.

[ RPG ]

#437673 Can AI and Automation Deliver a COVID-19 ...

Illustration: Marysia Machulska

Within moments of meeting each other at a conference last year, Nathan Collins and Yann Gaston-Mathé began devising a plan to work together. Gaston-Mathé runs a startup that applies automated software to the design of new drug candidates. Collins leads a team that uses an automated chemistry platform to synthesize new drug candidates.

“There was an obvious synergy between their technology and ours,” recalls Gaston-Mathé, CEO and cofounder of Paris-based Iktos.

In late 2019, the pair launched a project to create a brand-new antiviral drug that would block a specific protein exploited by influenza viruses. Then the COVID-19 pandemic erupted across the world stage, and Gaston-Mathé and Collins learned that the viral culprit, SARS-CoV-2, relied on a protein that was 97 percent similar to their influenza protein. The partners pivoted.

Their companies are just two of hundreds of biotech firms eager to overhaul the drug-discovery process, often with the aid of artificial intelligence (AI) tools. The first set of antiviral drugs to treat COVID-19 will likely come from sifting through existing drugs. Remdesivir, for example, was originally developed to treat Ebola, and it has been shown to speed the recovery of hospitalized COVID-19 patients. But a drug made for one condition often has side effects and limited potency when applied to another. If researchers can produce an antiviral that specifically targets SARS-CoV-2, the drug would likely be safer and more effective than a repurposed drug.

There’s one big problem: Traditional drug discovery is far too slow to react to a pandemic. Designing a drug from scratch typically takes three to five years—and that’s before human clinical trials. “Our goal, with the combination of AI and automation, is to reduce that down to six months or less,” says Collins, who is chief strategy officer at SRI Biosciences, a division of the Silicon Valley research nonprofit SRI International. “We want to get this to be very, very fast.”

That sentiment is shared by small biotech firms and big pharmaceutical companies alike, many of which are now ramping up automated technologies backed by supercomputing power to predict, design, and test new antivirals—for this pandemic as well as the next—with unprecedented speed and scope.

“The entire industry is embracing these tools,” says Kara Carter, president of the International Society for Antiviral Research and executive vice president of infectious disease at Evotec, a drug-discovery company in Hamburg. “Not only do we need [new antivirals] to treat the SARS-CoV-2 infection in the population, which is probably here to stay, but we’ll also need them to treat future agents that arrive.”

There are currently about 200 known viruses that infect humans. Although viruses represent less than 14 percent of all known human pathogens, they make up two-thirds of all new human pathogens discovered since 1980.

Antiviral drugs are fundamentally different from vaccines, which teach a person’s immune system to mount a defense against a viral invader, and antibody treatments, which enhance the body’s immune response. By contrast, antivirals are chemical compounds that directly block a virus after a person has become infected. They do this by binding to specific proteins and preventing them from functioning, so that the virus cannot copy itself or enter or exit a cell.

The SARS-CoV-2 virus has an estimated 25 to 29 proteins, but not all of them are suitable drug targets. Researchers are investigating, among other targets, the virus’s exterior spike protein, which binds to a receptor on a human cell; two scissorlike enzymes, called proteases, that cut up long strings of viral proteins into functional pieces inside the cell; and a polymerase complex that makes the cell churn out copies of the virus’s genetic material, in the form of single-stranded RNA.

But it’s not enough for a drug candidate to simply attach to a target protein. Chemists also consider how tightly the compound binds to its target, whether it binds to other things as well, how quickly it metabolizes in the body, and so on. A drug candidate may have 10 to 20 such objectives. “Very often those objectives can appear to be anticorrelated or contradictory with each other,” says Gaston-Mathé.

Compared with antibiotics, antiviral drug discovery has proceeded at a snail’s pace. Scientists advanced from isolating the first antibacterial molecules in 1910 to developing an arsenal of powerful antibiotics by 1944. By contrast, it took until 1951 for researchers to be able to routinely grow large amounts of virus particles in cells in a dish, a breakthrough that earned the inventors a Nobel Prize in Medicine in 1954.

And the lag between the discovery of a virus and the creation of a treatment can be heartbreaking. According to the World Health Organization, 71 million people worldwide have chronic hepatitis C, a major cause of liver cancer. The virus that causes the infection was discovered in 1989, but effective antiviral drugs didn’t hit the market until 2014.

While many antibiotics work on a range of microbes, most antivirals are highly specific to a single virus—what those in the business call “one bug, one drug.” It takes a detailed understanding of a virus to develop an antiviral against it, says Che Colpitts, a virologist at Queen’s University, in Canada, who works on antivirals against RNA viruses. “When a new virus emerges, like SARS-CoV-2, we’re at a big disadvantage.”

Making drugs to stop viruses is hard for three main reasons. First, viruses are the Spartans of the pathogen world: They’re frugal, brutal, and expert at evading the human immune system. About 20 to 250 nanometers in diameter, viruses rely on just a few parts to operate, hijacking host cells to reproduce and often destroying those cells upon departure. They employ tricks to camouflage their presence from the host’s immune system, including preventing infected cells from sending out molecular distress beacons. “Viruses are really small, so they only have a few components, so there’s not that many drug targets available to start with,” says Colpitts.

Second, viruses replicate quickly, typically doubling in number in hours or days. This constant copying of their genetic material enables viruses to evolve quickly, producing mutations able to sidestep drug effects. The virus that causes AIDS soon develops resistance when exposed to a single drug. That’s why a cocktail of antiviral drugs is used to treat HIV infection.

Finally, unlike bacteria, which can exist independently outside human cells, viruses invade human cells to propagate, so any drug designed to eliminate a virus needs to spare the host cell. A drug that fails to distinguish between a virus and a cell can cause serious side effects. “Discriminating between the two is really quite difficult,” says Evotec’s Carter, who has worked in antiviral drug discovery for over three decades.

And then there’s the money barrier. Developing antivirals is rarely profitable. Health-policy researchers at the London School of Economics recently estimated that the average cost of developing a new drug is US $1 billion, and up to $2.8 billion for cancer and other specialty drugs. Because antivirals are usually taken for only short periods of time or during short outbreaks of disease, companies rarely recoup what they spent developing the drug, much less turn a profit, says Carter.

To change the status quo, drug discovery needs fresh approaches that leverage new technologies, rather than incremental improvements, says Christian Tidona, managing director of BioMed X, an independent research institute in Heidelberg, Germany. “We need breakthroughs.”

Putting Drug Development on Autopilot
Earlier this year, SRI Biosciences and Iktos began collaborating on a way to use artificial intelligence and automated chemistry to rapidly identify new drugs to target the COVID-19 virus. Within four months, they had designed and synthesized a first round of antiviral candidates. Here’s how they’re doing it.

STEP 1: Iktos’s AI platform uses deep-learning algorithms in an iterative process to come up with new molecular structures likely to bind to and disable a specific coronavirus protein. Illustrations: Chris Philpot

STEP 2: SRI Biosciences’s SynFini system is a three-part automated chemistry suite for producing new compounds. Starting with a target compound from Iktos, SynRoute uses machine learning to analyze and optimize routes for creating that compound, with results in about 10 seconds. It prioritizes routes based on cost, likelihood of success, and ease of implementation.

STEP 3: SynJet, an automated inkjet printer platform, tests the routes by printing out tiny quantities of chemical ingredients to see how they react. If the right compound is produced, the platform tests it.

STEP 4: AutoSyn, an automated tabletop chemical plant, synthesizes milligrams to grams of the desired compound for further testing. Computer-selected “maps” dictate paths through the plant’s modular components.

STEP 5: The most promising compounds are tested against live virus samples.

Iktos’s AI platform was created by a medicinal chemist and an AI expert. To tackle SARS-CoV-2, the company used generative models—deep-learning algorithms that generate new data—to “imagine” molecular structures with a good chance of disabling a key coronavirus protein.

For a new drug target, the software proposes and evaluates roughly 1 million compounds, says Gaston-Mathé. It’s an iterative process: At each step, the system generates 100 virtual compounds, which are tested in silico with predictive models to see how closely they meet the objectives. The test results are then used to design the next batch of compounds. “It’s like we have a very, very fast chemist who is designing compounds, testing compounds, getting back the data, then designing another batch of compounds,” he says.
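
Schematically, that loop looks something like the sketch below, where the generator and the per-objective predictive models are toy placeholders rather than Iktos's actual components:

```python
import random

class ToyGenerator:
    """Hypothetical stand-in for a generative chemistry model."""
    def sample(self, n):
        return ["mol_%06d" % random.randrange(10**6) for _ in range(n)]
    def update(self, candidates, scores):
        pass  # a real model is nudged toward high-scoring candidates

def design_loop(generator, predictors, rounds=3, batch=100):
    designed = []
    for _ in range(rounds):
        candidates = generator.sample(batch)   # 100 virtual compounds
        # In-silico tests: one predictive model per design objective.
        scores = [{name: f(m) for name, f in predictors.items()}
                  for m in candidates]
        generator.update(candidates, scores)   # feedback for next batch
        designed.extend(zip(candidates, scores))
    return designed

predictors = {"binding": lambda m: random.random(),
              "stability": lambda m: random.random()}
results = design_loop(ToyGenerator(), predictors)
```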

The computer isn’t as smart as a human chemist, Gaston-Mathé notes, but it’s much faster, so it can explore far more of what people in the field call “chemical space”—the set of all possible organic compounds. Unexplored chemical space is huge: Biochemists estimate that there are at least 10^63 possible druglike molecules, and that 99.9 percent of all possible small molecules or compounds have never been synthesized.

Still, designing a chemical compound isn’t the hardest part of creating a new drug. After a drug candidate is designed, it must be synthesized, and the highly manual process for synthesizing a new chemical hasn’t changed much in 200 years. It can take days to plan a synthesis process and then months to years to optimize it for manufacture.

That’s why Gaston-Mathé was eager to send Iktos’s AI-generated designs to Collins’s team at SRI Biosciences. With $13.8 million from the Defense Advanced Research Projects Agency, SRI Biosciences spent the last four years automating the synthesis process. The company’s automated suite of three technologies, called SynFini, can produce new chemical compounds in just hours or days, says Collins.

First, machine-learning software devises possible routes for making a desired molecule. Next, an inkjet printer platform tests the routes by printing out and mixing tiny quantities of chemical ingredients to see how they react with one another; if the right compound is produced, the platform runs tests on it. Finally, a tabletop chemical plant synthesizes milligrams to grams of the desired compound.

Less than four months after Iktos and SRI Biosciences announced their collaboration, they had designed and synthesized a first round of antiviral candidates for SARS-CoV-2. Now they’re testing how well the compounds work on actual samples of the virus.

Out of 10^63 possible druglike molecules, 99.9 percent have never been synthesized.

Theirs isn’t the only collaboration applying new tools to drug discovery. In late March, Alex Zhavoronkov, CEO of Hong Kong–based Insilico Medicine, came across a YouTube video showing three virtual-reality avatars positioning colorful, sticklike fragments in the side of a bulbous blue protein. The three researchers were using VR to explore how compounds might bind to a SARS-CoV-2 enzyme. Zhavoronkov contacted the startup that created the simulation—Nanome, in San Diego—and invited it to examine Insilico’s AI-generated molecules in virtual reality.

Insilico runs an AI platform that uses biological data to train deep-learning algorithms, then uses those algorithms to identify molecules with druglike features that will likely bind to a protein target. A four-day training sprint in late January yielded 100 molecules that appear to bind to an important SARS-CoV-2 protease. The company recently began synthesizing some of those molecules for laboratory testing.

Nanome’s VR software, meanwhile, allows researchers to import a molecular structure, then view and manipulate it on the scale of individual atoms. Like human chess players who use computer programs to explore potential moves, chemists can use VR to predict how to make molecules more druglike, says Nanome CEO Steve McCloskey. “The tighter the interface between the human and the computer, the more information goes both ways,” he says.

Zhavoronkov sent data about several of Insilico’s compounds to Nanome, which re-created them in VR. Nanome’s chemist demonstrated chemical tweaks to potentially improve each compound. “It was a very good experience,” says Zhavoronkov.

Meanwhile, in March, Takeda Pharmaceutical Co., of Japan, invited Schrödinger, a New York–based company that develops chemical-simulation software, to join an alliance working on antivirals. Schrödinger’s AI focuses on the physics of how proteins interact with small molecules and one another.

The software sifts through billions of molecules per week to predict a compound’s properties, and it optimizes for multiple desired properties simultaneously, says Karen Akinsanya, chief biomedical scientist and head of discovery R&D at Schrödinger. “There’s a huge sense of urgency here to come up with a potent molecule, but also to come up with molecules that are going to be well tolerated” by the body, she says. Drug developers are seeking compounds that can be broadly used and easily administered, such as an oral drug rather than an intravenous drug, she adds.

Schrödinger evaluated four protein targets and performed virtual screens for two of them, a computing-intensive process. In June, Google Cloud donated the equivalent of 16 million hours of Nvidia GPU time for the company’s calculations. Next, the alliance’s drug companies will synthesize and test the most promising compounds identified by the virtual screens.

Other companies, including Amazon Web Services, IBM, and Intel, as well as several U.S. national labs, are also donating time and resources to the COVID-19 High Performance Computing Consortium. The consortium is supporting 87 projects, which now have access to 6.8 million CPU cores, 50,000 GPUs, and 600 petaflops of computational resources.

While advanced technologies could transform early drug discovery, any new drug candidate still has a long road after that. It must be tested in animals, manufactured in large batches for clinical trials, then tested in a series of trials that, for antivirals, lasts an average of seven years.

In May, the BioMed X Institute in Germany launched a five-year project to build a Rapid Antiviral Response Platform, which would speed drug discovery all the way through manufacturing for clinical trials. The €40 million ($47 million) project, backed by drug companies, will identify outside-the-box proposals from young scientists, then provide space and funding to develop their ideas.

“We’ll focus on technologies that allow us to go from identification of a new virus to 10,000 doses of a novel potential therapeutic ready for trials in less than six months,” says BioMed X’s Tidona, who leads the project.

While a vaccine will likely arrive long before a bespoke antiviral does, experts expect COVID-19 to be with us for a long time, so the effort to develop a direct-acting, potent antiviral continues. Plus, having new antivirals—and tools to rapidly create more—can only help us prepare for the next pandemic, whether it comes next month or in another 102 years.

“We’ve got to start thinking differently about how to be more responsive to these kinds of threats,” says Collins. “It’s pushing us out of our comfort zones.”

This article appears in the October 2020 print issue as “Automating Antivirals.”

#437667 17 Teams to Take Part in DARPA’s ...

Among all of the other in-person events that have been totally wrecked by COVID-19 is the Cave Circuit of the DARPA Subterranean Challenge. DARPA has already hosted the in-person events for the Tunnel and Urban SubT circuits (see our previous coverage here), and the plan had always been for a trio of events representing three uniquely different underground environments in advance of the SubT Finals, which will somehow combine everything into one bonkers course.

While the SubT Urban Circuit event snuck in just under the lockdown wire in late February, DARPA made the difficult (but prudent) decision to cancel the in-person Cave Circuit event. What this means is that there will be no Systems Track Cave competition, which is a serious disappointment—we were very much looking forward to watching teams of robots navigating through an entirely unpredictable natural environment with a lot of verticality. Fortunately, DARPA is still running a Virtual Cave Circuit, and 17 teams will be taking part in this competition featuring a simulated cave environment that’s as dynamic and detailed as DARPA can make it.

From DARPA’s press releases:

DARPA’s Subterranean (SubT) Challenge will host its Cave Circuit Virtual Competition, which focuses on innovative solutions to map, navigate, and search complex, simulated cave environments, on November 17. Qualified teams have until Oct. 15 to develop and submit software-based solutions for the Cave Circuit via the SubT Virtual Portal, where their technologies will face unknown cave environments in the cloud-based SubT Simulator. Until then, teams can refine their roster of selected virtual robot models, choose sensor payloads, and continue to test autonomy approaches to maximize their score.

The Cave Circuit also introduces new simulation capabilities, including digital twins of Systems Competition robots to choose from, marsupial-style platforms combining air and ground robots, and breadcrumb nodes that can be dropped by robots to serve as communications relays. Each robot configuration has an associated cost, measured in SubT Credits – an in-simulation currency – based on performance characteristics such as speed, mobility, sensing, and battery life.

Each team’s simulated robots must navigate realistic caves, with features including natural terrain and dynamic rock falls, while they search for and locate various artifacts on the course within five meters of accuracy to score points during a 60-minute timed run. A correct report is worth one point. Each course contains 20 artifacts, which means each team has the potential for a maximum score of 20 points. Teams can leverage numerous practice worlds and even build their own worlds using the cave tiles found in the SubT Tech Repo to perfect their approach before they submit one official solution for scoring. The DARPA team will then evaluate the solution on a set of hidden competition scenarios.
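
The scoring rule quoted above is simple enough to state in a few lines. Here is an illustrative checker under stated assumptions (the artifact types and coordinates are hypothetical, and DARPA's official scorer certainly handles more edge cases):

```python
import math

SCORING_RADIUS_M = 5.0   # a report must fall within 5 m of the artifact

def score_run(reports, artifacts, max_points=20):
    """reports, artifacts: lists of (type, x, y, z). One point per
    artifact matched by a correct-type report inside the radius."""
    points, claimed = 0, set()
    for rtype, rx, ry, rz in reports:
        for i, (atype, ax, ay, az) in enumerate(artifacts):
            if i in claimed or rtype != atype:
                continue
            if math.dist((rx, ry, rz), (ax, ay, az)) <= SCORING_RADIUS_M:
                points += 1
                claimed.add(i)
                break
    return min(points, max_points)

artifacts = [("backpack", 10.0, 2.0, 0.0)]
reports = [("backpack", 12.0, 3.0, 0.0)]   # about 2.2 m off: still scores
print(score_run(reports, artifacts))       # -> 1
```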

Of the 17 qualified teams (you can see all of them here), there are a handful that we’ll quickly point out. Team BARCS, from Michigan Tech, was the winner of the SubT Virtual Urban Circuit, meaning that they may be the team to beat on Cave as well, although the course is likely to be unique enough that things will get interesting. Some Systems Track teams to watch include Coordinated Robotics, CTU-CRAS-NORLAB, MARBLE, NUS SEDS, and Robotika, and there are also a handful of brand new teams as well.

Now, just because there’s no dedicated Cave Circuit for the Systems Track teams, it doesn’t mean that there won’t be a Cave component (perhaps even a significant one) in the final event, which as far as we know is still scheduled to happen in fall of next year. We’ve heard that many of the Systems Track teams have been testing out their robots in caves anyway, and as the virtual event gets closer, we’ll be doing a sort of Virtual Systems Track series that highlights how different teams are doing mock Cave Circuits in caves they’ve found for themselves.

For more, we checked in with DARPA SubT program manager Dr. Timothy H. Chung.

IEEE Spectrum: Was it a difficult decision to cancel the Systems Track for Cave?

Tim Chung: The decision to go virtual only was heart-wrenching, because I think DARPA’s role is to offer up opportunities that may be unimaginable for some of our competitors, like opening up a cave-type site for this competition. We crawled and climbed through a number of these sites, and I share the sense of disappointment that both our team and the competitors have that we won’t be able to share all the advances that have been made since the Urban Circuit. But what we’ve been able to do is pour a lot of our energy and the insights that we got from crawling around in those caves into what’s going to be a really great opportunity on the Virtual Competition side. And whether it’s a global pandemic, or just lack of access to physical sites like caves, virtual environments are an opportunity that we want to develop.

“The simulator offers us a chance to look at where things could be … it really allows for us to find where some of those limits are in the technology based only on our imagination.”
—Timothy H. Chung, DARPA

What kind of new features will be included in the Virtual Cave Circuit for this competition?

I’m really excited about these particular features because we’re seeing an opportunity for increased synergy between the physical and the virtual. The first I’d say is that we scanned some of the Systems Track robots using photogrammetry and combined that with some additional models that we got from the systems competitors themselves to turn their systems robots into virtual models. We often talk about the sim to real transfer and how successful we can get a simulation to transfer over to the physical world, but now we’ve taken something from the physical world and made it virtual. We’ve validated the controllers as well as the kinematics of the robots, we’ve iterated with the systems competitors themselves, and now we have these 13 robots (air and ground) in the SubT Tech Repo that now all virtual competitors can take advantage of.

We also have additional robot capability. Those comms breadcrumbs are common among many of the competitors, so we’ve adopted that in the virtual world, and now you have comms relay nodes that are baked into the SubT Simulator—you can have either six or twelve comms nodes that you can drop from a variety of our ground robot platforms. We have the marsupial deployment capability now, so now we have parent ground robots that can be mixed and matched with different child drones to become marsupial pairs.

And this is something I’ve been planning for for a while: we now have the ability to trigger things like rock falls. They still don’t quite look like Indiana Jones with the boulder coming down the corridor, but this comes really close. In addition to it just being an interesting and realistic consideration, we get to really dynamically test and stress the robots’ ability to navigate and recognize that something has changed in the environment and respond to it.

Image: DARPA

DARPA is still running a Virtual Cave Circuit, and 17 teams will be taking part in this competition featuring a simulated cave environment.

No simulation is perfect, so can you talk to us about what kinds of things aren’t being simulated right now? Where does the simulator not match up to reality?

I think that question is foundational to any conversation about simulation. I’ll give you a couple of examples:

We have the ability to represent wholesale damage to a robot, but it’s not at the actuator or component level. So there’s not a reliability model, although I think that would be really interesting to incorporate so that you could do assessments on things like mean time to failure. But if a robot falls off a ledge, it can be disabled by virtue of being too damaged to continue.

With communications, and this is one that’s near and dear not only to my heart but also to all of those that have lived through developing communication systems and robotic systems, we’ve gone through and conducted RF surveys of underground environments to get a better handle on what propagation effects are. There’s a lot of research that has gone into this, and trying to carry through some of that realism, we do have path loss models for RF communications baked into the SubT Simulator. For example, when you drop a breadcrumb node, it’s using a path loss model so that it can represent the degradation of signal as you go farther into a cave. Now, we’re not modeling it at the Maxwell equations level, which I think would be awesome, but we’re not quite there yet.
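
A log-distance path loss model of the kind described here is compact: received power falls off with the logarithm of distance, scaled by an environment-dependent exponent. An illustrative sketch (the constants are placeholders, not DARPA's fitted values):

```python
import math

def path_loss_db(distance_m, n=2.5, pl0_db=40.0, d0_m=1.0):
    """Log-distance model: PL(d) = PL(d0) + 10 * n * log10(d / d0).
    The exponent n and reference loss pl0_db would be fit to RF
    survey data from real underground environments."""
    return pl0_db + 10.0 * n * math.log10(max(distance_m, d0_m) / d0_m)

def received_power_dbm(tx_dbm, distance_m):
    return tx_dbm - path_loss_db(distance_m)

# Dropping a breadcrumb relay resets the reference point, so the link
# budget is evaluated hop by hop rather than end to end.
print(received_power_dbm(20.0, 50.0))   # 20 dBm transmitter at 50 m
```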

We do have things like battery depletion, sensor degradation to the extent that simulators can degrade sensor inputs, and things like that. It’s just amazing how close we can get in some places, and how far away we still are in others, and I think showing where the limits are of how far you can get with simulation is all part and parcel of why the SubT Challenge wants to have both Systems and Virtual tracks. Simulation can be an accelerant, but it’s not going to be the panacea for development and innovation, and I think all the competitors are cognizant of those limitations.

One of the most amazing things about the SubT Virtual Track is that all of the robots operate fully autonomously, without the human(s) in the loop that the System Track teams have when they compete. Why make the Virtual Track even more challenging in that way?

I think it’s one of the defining, delineating attributes of the Virtual Track. Our continued vision for the simulation side is that the simulator offers us a chance to look at where things could be, and allows for us to explore things like larger scales, or increased complexity, or types of environments that we can’t physically gain access to—it really allows for us to find where some of those limits are in the technology based only on our imagination, and this is one of the intrinsic values of simulation.

But I think finding a way to incorporate human input, or more generally human factors like teleoperation interfaces and the in-situ stress that you might not be able to recreate in the context of a virtual competition provided a good reason for us to delineate the two competitions, with the Virtual Competition really being about the role of fully autonomous or self-sufficient systems going off and doing their solution without human guidance, while also acknowledging that the real world has conditions that would not necessarily be represented by a fully simulated version. Having said that, I think cognitive engineering still has an incredibly important role to play in human robot interaction.

What do we have to look forward to during the Virtual Competition Showcase?

We have a number of additional features and capabilities that we’ve baked into the simulator that will allow for us to derive some additional insights into our competition runs. Those insights might involve things like the performance of one or more robots in a given scenario, or the impact of the environment on different types of robots, and what I can tease is that this will be an opportunity for us to showcase both the technology and also the excitement of the robots competing in the virtual environment. I’m trying not to give too many spoilers, but we’ll have an opportunity to really get into the details.

Check back as we get closer to the 17 November event for more on the DARPA SubT Challenge.
