Tag Archives: Resistance

#438755 Soft Legged Robot Uses Pneumatic ...

Soft robots are inherently safe, highly resilient, and potentially very cheap, making them promising for a wide array of applications. But development on them has been a bit slow relative to other areas of robotics, at least partially because soft robots can’t directly benefit from the massive increase in computing power and sensor and actuator availability that we’ve seen over the last few decades. Instead, roboticists have had to get creative to find ways of achieving the functionality of conventional robotics components using soft materials and compatible power sources.

In the current issue of Science Robotics, researchers from UC San Diego demonstrate a soft walking robot with four legs that moves with a turtle-like gait controlled by a pneumatic circuit system made from tubes and valves. This air-powered nervous system can actuate multiple degrees of freedom in sequence from a single source of pressurized air, offering a huge reduction in complexity and bringing a very basic form of decision making onto the robot itself.

Generally, when people talk about soft robots, the robots are only mostly soft. There are some components that are very difficult to make soft, including pressure sources and the necessary electronics to direct that pressure between different soft actuators in a way that can be used for propulsion. What’s really cool about this robot is that researchers have managed to take a pressure source (either a single tether or an onboard CO2 cartridge) and direct it to four different legs, each with three different air chambers, using an oscillating three-valve circuit made entirely of soft materials.

Photo: UCSD

The pneumatic circuit that powers and controls the soft quadruped.

The inspiration for this can be found in biology—natural organisms, including quadrupeds, use nervous system components called central pattern generators (CPGs) to prompt repetitive motions with limbs that are used for walking, flying, and swimming. This is obviously more complicated in some organisms than in others, and is typically mediated by sensory feedback, but the underlying structure of a CPG is basically just a repeating circuit that drives muscles in sequence to produce a stable, continuous gait. In this case, we’ve got pneumatic muscles being driven in opposing pairs, resulting in a diagonal couplet gait, where diagonally opposed limbs rotate forwards and backwards at the same time.

Diagram: Science Robotics

(J) Pneumatic logic circuit for rhythmic leg motion. A constant positive pressure source (P+) applied to three inverter components causes a high-pressure state to propagate around the circuit, with a delay at each inverter. While the input to one inverter is high, the attached actuator (i.e., A1, A2, or A3) is inflated. This sequence of high-pressure states causes each pair of legs of the robot to rotate in a direction determined by the pneumatic connections. (K) By reversing the sequence of activation of the pneumatic oscillator circuit, the attached actuators inflate in a new sequence (A1, A3, and A2), causing (L) the legs of the robot to rotate in reverse. (M) Schematic bottom view of the robot with the directions of leg motions indicated for forward walking.

Diagram: Science Robotics

Each of the valves acts as an inverter by switching the normally closed half (top) to open and the normally open half (bottom) to closed.

The circuit itself is made up of three bistable pneumatic valves connected by tubing that acts as a delay by resisting the flow of gas through it; the delay can be tuned by altering the tube’s length and inner diameter. Within the circuit, the movement of the pressurized gas acts as both a source of energy and a signal, since wherever the pressure is in the circuit is where the legs are moving. The simplest circuit uses only three valves and can keep the robot walking in a single direction, but more valves add more complex leg control options. For example, the researchers used seven valves to tune the phase offset of the gait, and just one additional valve (albeit of a slightly more complex design) enabled reversal of the system, causing the robot to walk backwards in response to input from a soft sensor. With another complex valve, a manual (tethered) controller could be used for omnidirectional movement.
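To see how an odd-numbered ring of inverters produces a repeating gait signal, here is a minimal Python sketch of a three-stage ring oscillator with per-stage delays. It illustrates the general principle rather than modeling the paper's circuit; the delay values stand in for the tube resistance described above and are made-up numbers.

```python
# Minimal sketch (not the authors' model): a three-stage ring of inverters with
# per-stage propagation delay. With an odd number of stages the loop never
# settles, so a wave of high pressure chases itself around the ring, driving
# the actuators A1, A2, A3 in a repeating sequence.

def simulate_ring_oscillator(delays=(0.2, 0.2, 0.2), t_end=2.0, dt=0.01):
    """Return a list of (time, [A1, A2, A3]) samples.

    `delays` plays the role of the tube resistance in the physical circuit;
    the values here are illustrative assumptions.
    """
    n = len(delays)
    outputs = [1] + [0] * (n - 1)   # pressure state at each inverter output
    pending = [None] * n            # time at which a scheduled flip completes
    samples, t = [], 0.0
    while t < t_end:
        snapshot = outputs.copy()
        for i in range(n):
            target = 1 - snapshot[(i - 1) % n]   # inverter: NOT of previous stage
            if target == outputs[i]:
                pending[i] = None                # nothing to do / flip cancelled
            elif pending[i] is None:
                pending[i] = t + delays[i]       # start the delayed transition
            elif t >= pending[i]:
                outputs[i] = target              # transition completes
                pending[i] = None
        samples.append((round(t, 2), outputs.copy()))
        t += dt
    return samples

if __name__ == "__main__":
    # Print the actuator pattern every 0.1 s; the high-pressure state should
    # propagate around the ring in a repeating sequence.
    for time, state in simulate_ring_oscillator()[::10]:
        print(f"t={time:4.2f}s  A1..A3={state}")
```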

This work has some similarities to the rover that JPL is developing to explore Venus—that rover isn’t a soft robot, of course, but it operates under similar constraints in that it can’t rely on conventional electronic systems for autonomous navigation or control. It turns out that there are plenty of clever ways to use mechanical (or in this case, pneumatic) intelligence to make robots with relatively complex autonomous behaviors, meaning that in the future, soft (or soft-ish) robots could find valuable roles in situations where using a non-compliant system is not a good option.

For more on why we should be so excited about soft robots and just how soft a soft robot needs to be, we spoke with Michael Tolley, who runs the Bioinspired Robotics and Design Lab at UCSD, and Dylan Drotman, the paper’s first author.

IEEE Spectrum: What can soft robots do for us that more rigid robotic designs can’t?

Michael Tolley: At the very highest level, one of the fundamental assumptions of robotics is that you have rigid bodies connected at joints, and all your motion happens at these joints. That's a really nice approach because it makes the math easy, frankly, and it simplifies control. But when you look around us in nature, even though animals do have bones and joints, the way we interact with the world is much more complicated than that simple story. I’m interested in where we can take advantage of material properties in robotics. If you look at robots that have to operate in very unknown environments, I think you can build in some of the intelligence for how to deal with those environments into the body of the robot itself. And that’s the category this work really falls under—it's about navigating the world.

Dylan Drotman: Walking through confined spaces is a good example. With the rigid legged robot, you would have to completely change the way that the legs move to walk through a confined space, while if you have flexible legs, like the robot in our paper, you can use relatively simple control strategies to squeeze through an area you wouldn’t be able to get through with a rigid system.

How smart can a soft robot get?

Drotman: Right now we have a sensor on the front that's connected through a fluidic transmission to a bistable valve that causes the robot to reverse. We could add other sensors around the robot to allow it to change direction whenever it runs into an obstacle to effectively make an electronics-free version of a Roomba.

Tolley: Stepping back a little bit from that, one could make an argument that we’re using basic memory elements to generate very basic signals. There’s nothing in principle that would stop someone from making a pneumatic computer—it’s just very complicated to make something that complex. I think you could build on this and do more intelligent decision making, but using this specific design and the components we’re using, it’s likely to be things that are more direct responses to the environment.

How well would robots like these scale down?

Drotman: At the moment we’re manufacturing these components by hand, so the idea would be to make something more like a printed circuit board instead, and to look at how the channel sizes and the valve design would affect the actuation properties. We’ll also be coming up with new circuits and different designs for the circuits themselves.

Tolley: Down to centimeter or millimeter scale, I don’t think you’d have fundamental fluid flow problems. I think you’re going to be limited more by system design constraints. You’ll have to be able to locomote while carrying around your pressure source, and possibly some other components that are also still rigid. When you start to talk about really small scales, though, it's not as clear to me that you really need an intrinsically soft robot. If you think about insects, their structural geometry can make them behave like they’re soft, but they’re not intrinsically soft.

Should we be thinking about soft robots and compliant robots in the same way, or are they fundamentally different?

Tolley: There’s certainly a connection between the two. You could have a compliant robot that behaves in a very similar way to an intrinsically soft robot, or a robot made of intrinsically soft materials. At that point, it comes down to design and manufacturing and practical limitations on what you can make. I think when you get down to small scales, the two sort of get connected.

There was some interesting work several years ago on using explosions to power soft robots. Is that still a thing?

Tolley: One of the opportunities with soft robots is that with material compliance, you have the potential to store energy. I think there’s exciting potential there for rapid motion with a soft body. Combustion is one way of doing that with power coming from a chemical source all at once, but you could also use a relatively weak muscle that over time stores up energy in a soft body and then releases it.

Is it realistic to expect complete softness from soft robots, or will they likely always have rigid components because they have to store or generate and move pressurized gas somehow?

Tolley: If you look in nature, you do have soft pumps like the heart, but although it’s soft, it’s still relatively stiff. Like, if you grab a heart, it’s not totally squishy. I haven’t done it, but I’d imagine. If you have a container that you’re pressurizing, it has to be stiff enough to not just blow up like a balloon. Certainly pneumatics or hydraulics are not the only way to go for soft actuators; there has been some really nice work on smart muscles and smart materials like hydraulic electrostatic (HASEL) actuators. They seem promising, but all of these actuators have challenges. We’ve chosen to stick with pressurized pneumatics in the near term; longer term, I think you’ll start to see more of these smart material actuators become more practical.

Personally, I don’t have any problem with soft robots having some rigid components. Most animals on land have some rigid components, but they can still take advantage of being soft, so it’s probably going to be a combination. But I do also like the vision of making an entirely soft, squishy thing.


#437673 Can AI and Automation Deliver a COVID-19 ...

Illustration: Marysia Machulska

Within moments of meeting each other at a conference last year, Nathan Collins and Yann Gaston-Mathé began devising a plan to work together. Gaston-Mathé runs a startup that applies automated software to the design of new drug candidates. Collins leads a team that uses an automated chemistry platform to synthesize new drug candidates.

“There was an obvious synergy between their technology and ours,” recalls Gaston-Mathé, CEO and cofounder of Paris-based Iktos.

In late 2019, the pair launched a project to create a brand-new antiviral drug that would block a specific protein exploited by influenza viruses. Then the COVID-19 pandemic erupted across the world stage, and Gaston-Mathé and Collins learned that the viral culprit, SARS-CoV-2, relied on a protein that was 97 percent similar to their influenza protein. The partners pivoted.

Their companies are just two of hundreds of biotech firms eager to overhaul the drug-discovery process, often with the aid of artificial intelligence (AI) tools. The first set of antiviral drugs to treat COVID-19 will likely come from sifting through existing drugs. Remdesivir, for example, was originally developed to treat Ebola, and it has been shown to speed the recovery of hospitalized COVID-19 patients. But a drug made for one condition often has side effects and limited potency when applied to another. If researchers can produce an antiviral that specifically targets SARS-CoV-2, the drug would likely be safer and more effective than a repurposed drug.

There’s one big problem: Traditional drug discovery is far too slow to react to a pandemic. Designing a drug from scratch typically takes three to five years—and that’s before human clinical trials. “Our goal, with the combination of AI and automation, is to reduce that down to six months or less,” says Collins, who is chief strategy officer at SRI Biosciences, a division of the Silicon Valley research nonprofit SRI International. “We want to get this to be very, very fast.”

That sentiment is shared by small biotech firms and big pharmaceutical companies alike, many of which are now ramping up automated technologies backed by supercomputing power to predict, design, and test new antivirals—for this pandemic as well as the next—with unprecedented speed and scope.

“The entire industry is embracing these tools,” says Kara Carter, president of the International Society for Antiviral Research and executive vice president of infectious disease at Evotec, a drug-discovery company in Hamburg. “Not only do we need [new antivirals] to treat the SARS-CoV-2 infection in the population, which is probably here to stay, but we’ll also need them to treat future agents that arrive.”

There are currently about 200 known viruses that infect humans. Although viruses represent less than 14 percent of all known human pathogens, they make up two-thirds of all new human pathogens discovered since 1980.

Antiviral drugs are fundamentally different from vaccines, which teach a person’s immune system to mount a defense against a viral invader, and antibody treatments, which enhance the body’s immune response. By contrast, antivirals are chemical compounds that directly block a virus after a person has become infected. They do this by binding to specific proteins and preventing them from functioning, so that the virus cannot copy itself or enter or exit a cell.

The SARS-CoV-2 virus has an estimated 25 to 29 proteins, but not all of them are suitable drug targets. Researchers are investigating, among other targets, the virus’s exterior spike protein, which binds to a receptor on a human cell; two scissorlike enzymes, called proteases, that cut up long strings of viral proteins into functional pieces inside the cell; and a polymerase complex that makes the cell churn out copies of the virus’s genetic material, in the form of single-stranded RNA.

But it’s not enough for a drug candidate to simply attach to a target protein. Chemists also consider how tightly the compound binds to its target, whether it binds to other things as well, how quickly it metabolizes in the body, and so on. A drug candidate may have 10 to 20 such objectives. “Very often those objectives can appear to be anticorrelated or contradictory with each other,” says Gaston-Mathé.
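One common way to reason about that kind of multi-objective trade-off is Pareto filtering: keep only candidates that no other candidate beats on every objective at once. The short Python sketch below is purely illustrative (the objective names and scores are invented, and it is not Iktos's method), but it shows why anticorrelated objectives force choices rather than producing a single winner.

```python
# Hypothetical sketch: keep only Pareto-optimal candidates when design
# objectives conflict (higher score is assumed better for every objective).

def dominates(a, b):
    """True if `a` is at least as good as `b` everywhere and strictly better somewhere."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Return the candidates not dominated by any other candidate."""
    return [c for c in candidates
            if not any(dominates(other["scores"], c["scores"])
                       for other in candidates if other is not c)]

# Toy data: (binding affinity, selectivity, metabolic stability), all 0-1.
candidates = [
    {"name": "cmpd-A", "scores": (0.9, 0.4, 0.5)},
    {"name": "cmpd-B", "scores": (0.6, 0.8, 0.7)},
    {"name": "cmpd-C", "scores": (0.5, 0.7, 0.6)},   # dominated by cmpd-B
]
print([c["name"] for c in pareto_front(candidates)])  # ['cmpd-A', 'cmpd-B']
```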

Compared with antibiotics, antiviral drug discovery has proceeded at a snail’s pace. Scientists advanced from isolating the first antibacterial molecules in 1910 to developing an arsenal of powerful antibiotics by 1944. By contrast, it took until 1951 for researchers to be able to routinely grow large amounts of virus particles in cells in a dish, a breakthrough that earned the inventors a Nobel Prize in Medicine in 1954.

And the lag between the discovery of a virus and the creation of a treatment can be heartbreaking. According to the World Health Organization, 71 million people worldwide have chronic hepatitis C, a major cause of liver cancer. The virus that causes the infection was discovered in 1989, but effective antiviral drugs didn’t hit the market until 2014.

While many antibiotics work on a range of microbes, most antivirals are highly specific to a single virus—what those in the business call “one bug, one drug.” It takes a detailed understanding of a virus to develop an antiviral against it, says Che Colpitts, a virologist at Queen’s University, in Canada, who works on antivirals against RNA viruses. “When a new virus emerges, like SARS-CoV-2, we’re at a big disadvantage.”

Making drugs to stop viruses is hard for three main reasons. First, viruses are the Spartans of the pathogen world: They’re frugal, brutal, and expert at evading the human immune system. About 20 to 250 nanometers in diameter, viruses rely on just a few parts to operate, hijacking host cells to reproduce and often destroying those cells upon departure. They employ tricks to camouflage their presence from the host’s immune system, including preventing infected cells from sending out molecular distress beacons. “Viruses are really small, so they only have a few components, so there’s not that many drug targets available to start with,” says Colpitts.

Second, viruses replicate quickly, typically doubling in number in hours or days. This constant copying of their genetic material enables viruses to evolve quickly, producing mutations able to sidestep drug effects. The virus that causes AIDS soon develops resistance when exposed to a single drug. That’s why a cocktail of antiviral drugs is used to treat HIV infection.

Finally, unlike bacteria, which can exist independently outside human cells, viruses invade human cells to propagate, so any drug designed to eliminate a virus needs to spare the host cell. A drug that fails to distinguish between a virus and a cell can cause serious side effects. “Discriminating between the two is really quite difficult,” says Evotec’s Carter, who has worked in antiviral drug discovery for over three decades.

And then there’s the money barrier. Developing antivirals is rarely profitable. Health-policy researchers at the London School of Economics recently estimated that the average cost of developing a new drug is US $1 billion, and up to $2.8 billion for cancer and other specialty drugs. Because antivirals are usually taken for only short periods of time or during short outbreaks of disease, companies rarely recoup what they spent developing the drug, much less turn a profit, says Carter.

To change the status quo, drug discovery needs fresh approaches that leverage new technologies, rather than incremental improvements, says Christian Tidona, managing director of BioMed X, an independent research institute in Heidelberg, Germany. “We need breakthroughs.”

Putting Drug Development on Autopilot
Earlier this year, SRI Biosciences and Iktos began collaborating on a way to use artificial intelligence and automated chemistry to rapidly identify new drugs to target the COVID-19 virus. Within four months, they had designed and synthesized a first round of antiviral candidates. Here’s how they’re doing it.


STEP 1: Iktos’s AI platform uses deep-learning algorithms in an iterative process to come up with new molecular structures likely to bind to and disable a specific coronavirus protein. Illustrations: Chris Philpot


STEP 2: SRI Biosciences’s SynFini system is a three-part automated chemistry suite for producing new compounds. Starting with a target compound from Iktos, SynRoute uses machine learning to analyze and optimize routes for creating that compound, with results in about 10 seconds. It prioritizes routes based on cost, likelihood of success, and ease of implementation.


STEP 3: SynJet, an automated inkjet printer platform, tests the routes by printing out tiny quantities of chemical ingredients to see how they react. If the right compound is produced, the platform tests it.


STEP 4: AutoSyn, an automated tabletop chemical plant, synthesizes milligrams to grams of the desired compound for further testing. Computer-selected “maps” dictate paths through the plant’s modular components.


STEP 5: The most promising compounds are tested against live virus samples.


Iktos’s AI platform was created by a medicinal chemist and an AI expert. To tackle SARS-CoV-2, the company used generative models—deep-learning algorithms that generate new data—to “imagine” molecular structures with a good chance of disabling a key coronavirus protein.

For a new drug target, the software proposes and evaluates roughly 1 million compounds, says Gaston-Mathé. It’s an iterative process: At each step, the system generates 100 virtual compounds, which are tested in silico with predictive models to see how closely they meet the objectives. The test results are then used to design the next batch of compounds. “It’s like we have a very, very fast chemist who is designing compounds, testing compounds, getting back the data, then designing another batch of compounds,” he says.
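A rough Python sketch of that iterative loop might look like the following. The generative and scoring functions here are placeholders standing in for Iktos's proprietary models, and the batch size and scoring scheme are illustrative assumptions, not details from the company.

```python
# Hypothetical sketch of the generate-score-refine loop described above.
# `generate_batch` and `predict_objectives` are placeholders, not real APIs.
import random

def generate_batch(seed_compounds, batch_size=100):
    """Placeholder generative model: propose new virtual compounds.
    (A real model would condition on the seed compounds; this one ignores them.)"""
    return [f"virtual-compound-{random.randint(0, 10**9)}" for _ in range(batch_size)]

def predict_objectives(compound):
    """Placeholder in-silico models: score a compound on each design objective."""
    return {"binding": random.random(), "selectivity": random.random(),
            "stability": random.random()}

def design_loop(iterations=10, keep=20):
    best = []   # (average score, compound) pairs carried between iterations
    for _ in range(iterations):
        batch = generate_batch([c for _, c in best])
        scored = [(sum(predict_objectives(c).values()) / 3, c) for c in batch]
        # Feed the best results back in to steer the next round of generation.
        best = sorted(best + scored, reverse=True)[:keep]
    return best

if __name__ == "__main__":
    for score, compound in design_loop()[:5]:
        print(f"{compound}: {score:.2f}")
```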

The computer isn’t as smart as a human chemist, Gaston-Mathé notes, but it’s much faster, so it can explore far more of what people in the field call “chemical space”—the set of all possible organic compounds. Unexplored chemical space is huge: Biochemists estimate that there are at least 10^63 possible druglike molecules, and that 99.9 percent of all possible small molecules or compounds have never been synthesized.

Still, designing a chemical compound isn’t the hardest part of creating a new drug. After a drug candidate is designed, it must be synthesized, and the highly manual process for synthesizing a new chemical hasn’t changed much in 200 years. It can take days to plan a synthesis process and then months to years to optimize it for manufacture.

That’s why Gaston-Mathé was eager to send Iktos’s AI-generated designs to Collins’s team at SRI Biosciences. With $13.8 million from the Defense Advanced Research Projects Agency, SRI Biosciences spent the last four years automating the synthesis process. The company’s automated suite of three technologies, called SynFini, can produce new chemical compounds in just hours or days, says Collins.

First, machine-learning software devises possible routes for making a desired molecule. Next, an inkjet printer platform tests the routes by printing out and mixing tiny quantities of chemical ingredients to see how they react with one another; if the right compound is produced, the platform runs tests on it. Finally, a tabletop chemical plant synthesizes milligrams to grams of the desired compound.

Less than four months after Iktos and SRI Biosciences announced their collaboration, they had designed and synthesized a first round of antiviral candidates for SARS-CoV-2. Now they’re testing how well the compounds work on actual samples of the virus.

Out of 10^63 possible druglike molecules, 99.9 percent have never been synthesized.

Theirs isn’t the only collaboration applying new tools to drug discovery. In late March, Alex Zhavoronkov, CEO of Hong Kong–based Insilico Medicine, came across a YouTube video showing three virtual-reality avatars positioning colorful, sticklike fragments in the side of a bulbous blue protein. The three researchers were using VR to explore how compounds might bind to a SARS-CoV-2 enzyme. Zhavoronkov contacted the startup that created the simulation—Nanome, in San Diego—and invited it to examine Insilico’s AI-generated molecules in virtual reality.

Insilico runs an AI platform that uses biological data to train deep-learning algorithms, then uses those algorithms to identify molecules with druglike features that will likely bind to a protein target. A four-day training sprint in late January yielded 100 molecules that appear to bind to an important SARS-CoV-2 protease. The company recently began synthesizing some of those molecules for laboratory testing.

Nanome’s VR software, meanwhile, allows researchers to import a molecular structure, then view and manipulate it on the scale of individual atoms. Like human chess players who use computer programs to explore potential moves, chemists can use VR to predict how to make molecules more druglike, says Nanome CEO Steve McCloskey. “The tighter the interface between the human and the computer, the more information goes both ways,” he says.

Zhavoronkov sent data about several of Insilico’s compounds to Nanome, which re-created them in VR. Nanome’s chemist demonstrated chemical tweaks to potentially improve each compound. “It was a very good experience,” says Zhavoronkov.

Meanwhile, in March, Takeda Pharmaceutical Co., of Japan, invited Schrödinger, a New York–based company that develops chemical-simulation software, to join an alliance working on antivirals. Schrödinger’s AI focuses on the physics of how proteins interact with small molecules and one another.

The software sifts through billions of molecules per week to predict a compound’s properties, and it optimizes for multiple desired properties simultaneously, says Karen Akinsanya, chief biomedical scientist and head of discovery R&D at Schrödinger. “There’s a huge sense of urgency here to come up with a potent molecule, but also to come up with molecules that are going to be well tolerated” by the body, she says. Drug developers are seeking compounds that can be broadly used and easily administered, such as an oral drug rather than an intravenous drug, she adds.

Schrödinger evaluated four protein targets and performed virtual screens for two of them, a computing-intensive process. In June, Google Cloud donated the equivalent of 16 million hours of Nvidia GPU time for the company’s calculations. Next, the alliance’s drug companies will synthesize and test the most promising compounds identified by the virtual screens.

Other companies, including Amazon Web Services, IBM, and Intel, as well as several U.S. national labs are also donating time and resources to the COVID-19 High Performance Computing Consortium. The consortium is supporting 87 projects, which now have access to 6.8 million CPU cores, 50,000 GPUs, and 600 petaflops of computational resources.

While advanced technologies could transform early drug discovery, any new drug candidate still has a long road after that. It must be tested in animals, manufactured in large batches for clinical trials, then tested in a series of trials that, for antivirals, lasts an average of seven years.

In May, the BioMed X Institute in Germany launched a five-year project to build a Rapid Antiviral Response Platform, which would speed drug discovery all the way through manufacturing for clinical trials. The €40 million ($47 million) project, backed by drug companies, will identify outside-the-box proposals from young scientists, then provide space and funding to develop their ideas.

“We’ll focus on technologies that allow us to go from identification of a new virus to 10,000 doses of a novel potential therapeutic ready for trials in less than six months,” says BioMed X’s Tidona, who leads the project.

While a vaccine will likely arrive long before a bespoke antiviral does, experts expect COVID-19 to be with us for a long time, so the effort to develop a direct-acting, potent antiviral continues. Plus, having new antivirals—and tools to rapidly create more—can only help us prepare for the next pandemic, whether it comes next month or in another 102 years.

“We’ve got to start thinking differently about how to be more responsive to these kinds of threats,” says Collins. “It’s pushing us out of our comfort zones.”

This article appears in the October 2020 print issue as “Automating Antivirals.”


#437614 Video Friday: Poimo Is a Portable ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today's videos.

Engineers at the University of California San Diego have built a squid-like robot that can swim untethered, propelling itself by generating jets of water. The robot carries its own power source inside its body. It can also carry a sensor, such as a camera, for underwater exploration.

[ UCSD ]

Thanks Ioana!

Shark Robotics, French and European leader in Unmanned Ground Vehicles, is announcing today a disinfection add-on for Boston Dynamics Spot robot, designed to fight the COVID-19 pandemic. The Spot robot with Shark’s purpose-built disinfection payload can decontaminate up to 2,000 m² in 15 minutes, in any space that needs to be sanitized – such as hospitals, metro stations, offices, warehouses or facilities.

[ Shark Robotics ]

Here’s an update on the Poimo portable inflatable mobility project we wrote about a little while ago; while not strictly robotics, it seems like it holds some promise for rapidly developing different soft structures that robotics might find useful.

[ University of Tokyo ]

Thanks Ryuma!

Pretty cool that you can do useful force feedback teleop while video chatting through a “regular broadband Internet connection.” Although, what “regular” means to you is a bit subjective, right?

[ HEBI Robotics ]

Thanks Dave!

While NASA's Mars rover Perseverance travels through space toward the Red Planet, its nearly identical rover twin is hard at work on Earth. The vehicle system test bed (VSTB) rover named OPTIMISM is a full-scale engineering version of the Mars-bound rover. It is used to test hardware and software before the commands are sent up to the Perseverance rover.

[ NASA ]

Jacquard takes ordinary, familiar objects and enhances them with new digital abilities and experiences, while remaining true to their original purpose — like being your favorite jacket, backpack or a pair of shoes that you love to wear.

Our ambition is simple: to make life easier. By staying connected to your digital world, your things can do so much more. Skip a song by brushing your sleeve. Take a picture by tapping on a shoulder strap. Get reminded about the phone you left behind with a blink of light or a haptic buzz on your cuff.

[ Google ATAP ]

Should you attend the IROS 2020 workshop on “Planetary Exploration Robots: Challenges and Opportunities”? Of course you should!

[ Workshop ]

Kuka makes a lot of these videos where I can’t help but think that if they put as much effort into programming the robot as they did into producing the video, the result would be much more impressive.

[ Kuka ]

The Colorado School of Mines is one of the first customers to buy a Spot robot from Boston Dynamics to help with robotics research. Watch as scientists take Spot into the school's mine for the first time.

[ HCR ] via [ CNET ]

A very interesting soft(ish) actuator from Ayato Kanada at Kyushu University's Control Engineering Lab.

A flexible ultrasonic motor (FUSM) is a novel soft actuator that generates linear motion. The motor consists of a single metal cube stator with a hole and an elastic elongated coil spring inserted into the hole. When voltages are applied to piezoelectric plates on the stator, the coil spring moves back and forth as a linear slider. Because the FUSM relies on friction drive, the most important parameter for optimizing its output is the preload between the stator and slider. The coil spring has a slightly larger diameter than the stator hole and generates the preload by expanding in a radial direction. The coil spring acts not only as a flexible slider but also as a resistive positional sensor: changes in the resistance between the stator and the end of the coil spring are converted to a voltage and used for position detection.
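For readers curious about that last step, here is a tiny, hypothetical Python sketch of a resistive position readout using a voltage divider. The supply voltage, reference resistor, and ohms-per-millimeter figure are invented for illustration and are not values from the Kyushu University work.

```python
# Hypothetical sketch of a resistive position readout: the spring's resistance
# between the stator and the spring's end grows with the length of spring on
# one side of the stator, so measuring it in a voltage divider gives an
# estimate of slider position. All constants are illustrative assumptions.

V_SUPPLY = 3.3        # volts across the divider
R_REF = 100.0         # ohms, fixed reference resistor
OHMS_PER_MM = 0.5     # assumed spring resistance per millimeter of travel

def spring_resistance(v_measured):
    """Invert the divider: V = V_SUPPLY * R_spring / (R_spring + R_REF)."""
    return R_REF * v_measured / (V_SUPPLY - v_measured)

def position_mm(v_measured):
    """Convert the measured divider voltage to slider position."""
    return spring_resistance(v_measured) / OHMS_PER_MM

print(f"{position_mm(1.0):.1f} mm")   # ~87 mm with these made-up constants
```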

[ Control Engineering Lab ]

Thanks Ayato!

We show how to use the limbs of a quadruped robot to identify fine-grained soil, representative for Martian regolith.

[ Paper ] via [ ANYmal Research ]

PR2 is serving breakfast and cleaning up afterwards. It’s slow, but all you have to do is eat and leave.

That poor PR2 is a little more naked than it's probably comfortable with.

[ EASE ]

NVIDIA researchers present a hierarchical framework that combines model-based control and reinforcement learning (RL) to synthesize robust controllers for a quadruped robot (the Unitree Laikago).

[ NVIDIA ]

What's interesting about this assembly task is that the robot is using its arm only for positioning, and doing the actual assembly with just fingers.

[ RC2L ]

In this electronics assembly application, Kawasaki's cobot duAro2 uses a tool changing station to tackle a multitude of tasks and assemble different CPU models.

Okay but can it apply thermal paste to a CPU in the right way? Personally, I find that impossible.

[ Kawasaki ]

You only need to watch this video long enough to appreciate the concept of putting a robot on a robot.

[ Impress ]

In this lecture, we’ll hear from the man behind one of the biggest robotics companies in the world, Boston Dynamics, whose robotic dog, Spot, has been used to encourage social distancing in Singapore and is now getting ready for FDA approval to be able to measure patients’ vital signs in hospitals.

[ Alan Turing Institute ]

Greg Kahn from UC Berkeley wrote in to share his recent dissertation talk on “Mobile Robot Learning.”

In order to create mobile robots that can autonomously navigate real-world environments, we need generalizable perception and control systems that can reason about the outcomes of navigational decisions. Learning-based methods, in which the robot learns to navigate by observing the outcomes of navigational decisions in the real world, offer considerable promise for obtaining these intelligent navigation systems. However, there are many challenges impeding mobile robots from autonomously learning to act in the real-world, in particular (1) sample-efficiency–how to learn using a limited amount of data? (2) supervision–how to tell the robot what to do? and (3) safety–how to ensure the robot and environment are not damaged or destroyed during learning? In this talk, I will present deep reinforcement learning methods for addressing these real world mobile robot learning challenges and show results which enable ground and aerial robots to navigate in complex indoor and outdoor environments.

[ UC Berkeley ]

Thanks Greg!

Leila Takayama from UC Santa Cruz (and previously Google X and Willow Garage) gives a talk entitled “Toward a more human-centered future of robotics.”

Robots are no longer only in outer space, in factory cages, or in our imaginations. We interact with robotic agents when withdrawing cash from bank ATMs, driving cars with adaptive cruise control, and tuning our smart home thermostats. In the moment of those interactions with robotic agents, we behave in ways that do not necessarily align with the rational belief that robots are just plain machines. Through a combination of controlled experiments and field studies, we use theories and concepts from the social sciences to explore ways that human and robotic agents come together, including how people interact with personal robots and how people interact through telepresence robots. Together, we will explore topics and raise questions about the psychology of human-robot interaction and how we could invent a future of a more human-centered robotics that we actually want to live in.

[ Leila Takayama ]

Roboticist and stand-up comedian Naomi Fitter from Oregon State University gives a talk on “Everything I Know about Telepresence.”

Telepresence robots hold promise to connect people by providing videoconferencing and navigation abilities in far-away environments. At the same time, the impacts of current commercial telepresence robots are not well understood, and circumstances of robot use including internet connection stability, odd personalizations, and interpersonal relationship between a robot operator and people co-located with the robot can overshadow the benefit of the robot itself. And although the idea of telepresence robots has been around for over two decades, available nonverbal expressive abilities through telepresence robots are limited, and suitable operator user interfaces for the robot (for example, controls that allow for the operator to hold a conversation and move the robot simultaneously) remain elusive. So where should we be using telepresence robots? Are there any pitfalls to watch out for? What do we know about potential robot expressivity and user interfaces? This talk will cover my attempts to address these questions and ways in which the robotics research community can build off of this work.

[ Talking Robotics ]


#437600 Brain-Inspired Robot Controller Uses ...

Robots operating in the real world are starting to find themselves constrained by the amount of computing power they have available. Computers are certainly getting faster and more efficient, but they’re not keeping up with the potential of robotic systems, which have access to better sensors and more data, which in turn makes decision making more complex. A relatively new kind of computing device called a memristor could potentially help robotics smash through this barrier, through a combination of lower complexity, lower cost, and higher speed.

In a paper published today in Science Robotics, a team of researchers from the University of Southern California in Los Angeles and the Air Force Research Laboratory in Rome, N.Y., demonstrate a simple self-balancing robot that uses memristors to form a highly effective analog control system, inspired by the functional structure of the human brain.

First, we should go over just what the heck a memristor is. As the name suggests, it’s a type of memory that is resistance-based. That is, the resistance of a memristor can be programmed, and the memristor remembers that resistance even after it’s powered off (the resistance depends on the magnitude of the voltage applied to the memristor’s two terminals and the length of time that voltage has been applied). Memristors are potentially the ideal hybrid between RAM and flash memory, offering high speed, high density, non-volatile storage. So that’s cool, but what we’re most interested in as far as robot control systems go is that memristors store resistance, making them analog devices rather than digital ones.
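A common way to build intuition for this is the simplified linear-drift memristor model from the literature, sketched below in Python. It is a textbook toy model, not the specific devices used in this paper, and all constants are chosen only to make the example visibly switch.

```python
# Minimal sketch of the classic linear-drift memristor model (a simplified
# textbook model, not the devices in the paper). The internal state w (0..1)
# drifts in proportion to the current flowing through the device, and the
# resistance -- which persists when the voltage is removed -- interpolates
# between R_ON and R_OFF.

R_ON, R_OFF = 100.0, 16_000.0   # ohms, illustrative bounds
MU = 2500.0                      # state-drift coefficient, chosen so the example switches

def resistance(w):
    return R_ON * w + R_OFF * (1.0 - w)

def apply_voltage(w, volts, seconds, dt=1e-3):
    """Drift the state while a voltage is applied; a larger magnitude or a
    longer duration moves the resistance further, and it then stays put."""
    t = 0.0
    while t < seconds:
        current = volts / resistance(w)
        w = min(1.0, max(0.0, w + MU * current * dt))  # bounded state drift
        t += dt
    return w

w = 0.1
print(round(resistance(w)))          # high resistance initially
w = apply_voltage(w, volts=1.5, seconds=2.0)
print(round(resistance(w)))          # lower resistance after "programming"
```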

By adding a memristor to an analog circuit with inputs from a gyroscope and an accelerometer, the researchers created a completely analog Kalman filter, which they coupled to a second memristor functioning as a PD controller.

Nowadays, the word “analog” sounds like a bad thing, but robots are stuck in an analog world, and any physical interactions they have with the world (mediated through sensors) are fundamentally analog in nature. The challenge is that an analog signal is often “messy”—full of noise and non-linearities—and as such, the usual approach now is to get it converted to a digital signal and then processed to get anything useful out of it. This is fine, but it’s also not particularly fast or efficient. Where memristors come in is that they’re inherently analog, and in addition to storing data, they can also act as tiny analog computers, which is pretty wild.

By adding a memristor to an analog circuit with inputs from a gyroscope and an accelerometer, the researchers, led by Wei Wu, an associate professor of electrical engineering at USC, created a completely analog and completely physical Kalman filter to remove noise from the sensor signal. They then used a second memristor to turn that filtered sensor data into a proportional-derivative (PD) controller. Next they put those two components together to build an analog system that can do much of the work required to keep an inverted pendulum robot upright far more efficiently than a traditional system. The difference in performance is readily apparent:
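To make that signal chain concrete, here is a conventional software sketch of the same two stages: a one-dimensional Kalman filter that fuses gyro and accelerometer readings into a tilt estimate, followed by a PD controller. The paper's version is implemented in analog hardware with memristors; this digital version is only for illustration, and every gain and noise value is a made-up assumption.

```python
# Rough software analogue (not the authors' analog implementation) of the
# signal chain described above: a 1-D Kalman filter fuses a noisy gyro rate
# and an accelerometer tilt angle, and a PD controller acts on the estimate.

class TiltKalman:
    def __init__(self, q_angle=0.001, q_bias=0.003, r_accel=0.03):
        self.angle = 0.0          # estimated tilt (rad)
        self.bias = 0.0           # estimated gyro bias (rad/s)
        self.P = [[0.0, 0.0], [0.0, 0.0]]
        self.q_angle, self.q_bias, self.r_accel = q_angle, q_bias, r_accel

    def update(self, gyro_rate, accel_angle, dt):
        # Predict: integrate the bias-corrected gyro rate.
        rate = gyro_rate - self.bias
        self.angle += rate * dt
        P = self.P
        P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + self.q_angle)
        P[0][1] -= dt * P[1][1]
        P[1][0] -= dt * P[1][1]
        P[1][1] += self.q_bias * dt
        # Correct with the accelerometer's (noisy) absolute angle.
        S = P[0][0] + self.r_accel
        K0, K1 = P[0][0] / S, P[1][0] / S
        y = accel_angle - self.angle
        self.angle += K0 * y
        self.bias += K1 * y
        P00, P01 = P[0][0], P[0][1]
        P[0][0] -= K0 * P00
        P[0][1] -= K0 * P01
        P[1][0] -= K1 * P00
        P[1][1] -= K1 * P01
        return self.angle, rate

def pd_command(angle, rate, kp=25.0, kd=1.5):
    """PD controller: push back against both tilt and tilt rate."""
    return -(kp * angle + kd * rate)

# One example control step, with made-up sensor readings:
kf = TiltKalman()
angle, rate = kf.update(gyro_rate=0.02, accel_angle=0.05, dt=0.01)
print(pd_command(angle, rate))
```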

The shaking you see in the traditionally-controlled robot on the bottom comes from the non-linearity of the dynamic system, which changes faster than the on-board controller can keep up with. The memristors substantially reduce the cycle time, so the robot can balance much more smoothly. Specifically, cycle time is reduced from 3,034 microseconds to just 6 microseconds.

Of course, there’s more going on here, like motor drivers and a digital computer to talk to them, so this robot is really a hybrid system. But guess what? As the researchers point out, so are we!

The human brain consists of the cerebrum, the cerebellum, and the brainstem. The cerebrum is a major part of the brain in charge of vision, hearing, and thinking, whereas the cerebellum plays an important role in motion control. Through this cooperation of the cerebrum and the cerebellum, the human brain can conduct multiple tasks simultaneously with extremely low power consumption. Inspired by this, we developed a hybrid analog-digital computation platform, in which the digital component runs the high-level algorithm, whereas the analog component is responsible for sensor fusion and motion control.

By offloading a bunch of computation onto the memristors, the higher brain functions of the robot have more breathing room. Overall, you reduce power, space, and cost, while substantially improving performance. This has only become possible relatively recently due to memristor advances and availability, and the researchers expect that memristor-based hybrid computing will soon be able to “improve the robustness and the performance of mobile robotic systems with higher” degrees of freedom.

“A memristor-based hybrid analog-digital computing platform for mobile robotics,” by Buyun Chen, Hao Yang, Boxiang Song, Deming Meng, Xiaodong Yan, Yuanrui Li, Yunxiang Wang, Pan Hu, Tse-Hsien Ou, Mark Barnell, Qing Wu, Han Wang, and Wei Wu, from USC Viterbi and AFRL, was published in Science Robotics.


#437554 Ending the COVID-19 Pandemic

Photo: F.J. Jimenez/Getty Images

The approach of a new year is always a time to take stock and be hopeful. This year, though, reflection and hope are more than de rigueur—they’re rejuvenating. We’re coming off a year in which doctors, engineers, and scientists took on the most dire public threat in decades, and in the new year we’ll see the greatest results of those global efforts. COVID-19 vaccines are just months away, and biomedical testing is being revolutionized.

At IEEE Spectrum we focus on the high-tech solutions: Can artificial intelligence (AI) be used to diagnose COVID-19 using cough recordings? Can mathematical modeling determine whether preventive measures against COVID-19 work? Can big data and AI provide accurate pandemic forecasting?

Consider our story “AI Recognizes COVID-19 in the Sound of a Cough,” reported by Megan Scudellari in our Human OS blog. Using a cellphone-recorded cough, machine-learning models can now detect coronavirus with 90 percent accuracy, even in people with no symptoms. It’s a remarkable research milestone. This AI model sifts through hundreds of factors to distinguish the COVID-19 cough from those of bronchitis, whooping cough, and asthma.

But while such high-tech triumphs give us hope, the no-tech solutions are mostly what we have to work with. Soon, as our Numbers Don’t Lie columnist, Vaclav Smil, pointed out in a recent email, we will have near-instantaneous home testing, and we will have an ability to use big data to crunch every move and every outbreak. But we are nowhere near that yet. So let’s use, as he says, some old-fashioned kindergarten epidemiology, the no-tech measures, while we work to get there:

Masks: Wear them. If we all did so, we could cut transmission by two-thirds, perhaps even 80 percent.

Hands: Wash them.

Social distancing: If we could all stay home for two weeks, we could see enormous declines in COVID-19 transmission.

These are all time-tested solutions, proven effective ages ago in countless outbreaks of diseases including typhoid and cholera. They’re inexpensive and easy to prescribe, and the regimens are easy to follow.

The conflict between public health and individual rights and privacy, however, is less easy to resolve. Even during the pandemic of 1918–19, there was widespread resistance to mask wearing and social distancing. Fifty million people died—675,000 in the United States alone. Today, we are up to 240,000 deaths in the United States, and the end is not in sight. Antiflu measures were framed in 1918 as a way to protect the troops fighting in World War I, and people who refused to wear masks were called out as “dangerous slackers.” There was a world war, and yet it was still hard to convince people of the need for even such simple measures.

Personally, I have found the resistance to these easy fixes startling. I wouldn’t want maskless, gloveless doctors taking me through a surgical procedure. Or waltzing in from lunch without washing their hands. I’m sure you wouldn’t, either.

Science-based medicine has been one of the world’s greatest and most fundamental advances. In recent years, it has been turbocharged by breakthroughs in genetics technologies, advanced materials, high-tech diagnostics, and implants and other electronics-based interventions. Such leaps have already saved untold lives, but there’s much more to be done. And there will be many more pandemics ahead for humanity.

