Tag Archives: head

#437778 A Bug-Sized Camera for Bug-Sized Robots ...

As if it’s not hard enough to make very small mobile robots, once you’ve gotten the power and autonomy all figured out (good luck with that), your robot isn’t going to be all that useful unless it can carry some payload. And the payload that everybody wants robots to carry is a camera, which is of course a relatively big, heavy, power-hungry payload. Great, just great.

This whole thing is frustrating because tiny, lightweight, power efficient vision systems are all around us. Literally, all around us right this second, stuffed into the heads of insects. We can’t make anything quite that brilliant (yet), but roboticists from the University of Washington, in Seattle, have gotten us a bit closer, with the smallest wireless, steerable video camera we’ve ever seen—small enough to fit on the back of a microbot, or even a live bug.

To make a camera this small, the UW researchers, led by Shyam Gollakota, a professor of computer science and engineering, had to start nearly from scratch, primarily because existing systems aren’t nearly so constrained by power availability. Even things like swallowable pill cameras require batteries that weigh more than a gram, but only power the camera for under half an hour. With a focus on small size and efficiency, they started with an off-the-shelf ultra low-power image sensor that’s 2.3 mm wide and weighs 6.7 mg. They stuck on a Bluetooth 5.0 chip (3 mm wide, 6.8 mg), and had a fun time connecting those two things together without any intermediary hardware to broadcast the camera output. A functional wireless camera also requires a lens (20 mg) and an antenna, which is just 5 mm of wire. An accelerometer is useful so that insect motion can be used to trigger the camera, minimizing the redundant frames that you’d get from a robot or an insect taking a nap.
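To get a feel for how that accelerometer trigger might work in firmware, here’s a minimal sketch of a motion-gated capture loop. To be clear, everything in it is our own illustration, not code from the UW paper: the threshold, the function names, and the simulated sensor are all hypothetical stand-ins.

```python
import random
import time

# Illustrative numbers only -- the article says insect motion gates the
# camera, but doesn't give the actual trigger logic or thresholds.
MOTION_THRESHOLD_G = 0.05   # accel change (in g) that counts as "moving"
FRAME_PERIOD_S = 0.2        # cap capture at the system's 5 fps ceiling

def read_accel_delta_g():
    """Stand-in for the real accelerometer driver (simulated here)."""
    return abs(random.gauss(0.0, 0.05))

def capture_and_stream_frame():
    """Stand-in for grabbing one monochrome frame and queueing it on
    the Bluetooth link."""
    print("frame captured and queued")

def camera_loop(duration_s=2.0):
    """Spend energy on frames only while the host insect/robot moves."""
    stop = time.monotonic() + duration_s
    while time.monotonic() < stop:
        if read_accel_delta_g() > MOTION_THRESHOLD_G:
            capture_and_stream_frame()
            time.sleep(FRAME_PERIOD_S)   # respect the frame-rate cap
        else:
            time.sleep(0.05)             # idle: skip redundant frames

camera_loop()
```

The design point this captures is the one the researchers describe: frames are the expensive resource, so the cheap accelerometer decides when they’re worth capturing at all.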

Photo: University of Washington

The microcamera developed by the UW researchers can stream monochrome video at up to 5 frames per second to a cellphone 120 meters away.

The last bit to make up this system is a mechanically steerable “head,” weighing 35 mg and bringing the total weight of the wireless camera system to 84 mg. If the look of the little piezoelectric actuator seems familiar, you have very good eyes because it’s tiny, and also, it’s the same kind of piezoelectric actuator that the folks at UW use to power their itty bitty flying robots. It’s got a 60-degree panning range, but also requires a 96 mg boost converter to function, which is a huge investment in size and weight just to be able to point the camera a little bit. But overall, the researchers say that this pays off, because not having to turn the entire robot (or insect) when you want to look around reduces the energy consumption of the system as a whole by a factor of up to 84 (!).

Photo: University of Washington

Insects are very mobile platforms for outdoor use, but they’re not easy to steer, so the researchers also built a little insect-scale robot that they could remotely control while watching the camera feed. As it turns out, this seems to be the smallest power-autonomous terrestrial robot with a camera ever made.

This efficiency means that the wireless camera system can stream video frames (160×120 pixels monochrome) to a cell phone up to 120 meters away for up to 6 hours when powered by a 0.5-g, 10-mAh battery. A live, first-bug view can be streamed at up to 5 frames per second. The system was successfully tested on a pair of darkling beetles that were allowed to roam freely outdoors, and the researchers noted that they could also mount it on spiders or moths, or anything else that could handle the payload. (The researchers removed the electronics from the insects after the experiments and observed no noticeable adverse effects on their behavior.)
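Those streaming and battery figures invite a bit of back-of-envelope arithmetic. Here’s a quick sanity check using only the numbers quoted above, plus one assumption of ours that the article doesn’t specify (8 bits per monochrome pixel):

```python
# Average current draw, straight from the quoted battery and runtime.
battery_mah = 10
runtime_h = 6
print(f"average draw: {battery_mah / runtime_h:.2f} mA")  # ~1.67 mA

# Upper bound on raw video throughput at the top frame rate, assuming
# (our assumption, not stated in the article) 8 bits per pixel.
w, h, fps, bits_per_px = 160, 120, 5, 8
rate_kbps = w * h * fps * bits_per_px / 1000
print(f"uncompressed rate: {rate_kbps:.0f} kb/s")  # 768 kb/s
```

The 768 kb/s figure is an uncompressed ceiling; the real system presumably sits below it, and the ~1.7 mA average draw shows just how little margin a sub-gram battery leaves for anything wasteful.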

The researchers are already thinking about what it might take to put a wireless camera system on something that flies, and it’s not going to be easy—a bumblebee can only carry between 100 and 200 mg. The power system is the primary limitation here, but it might be possible to use a solar cell to cut down on battery requirements. And the camera itself could be scaled down as well, by using a completely custom sensor and a different type of lens. The other thing to consider is that with a long-range wireless link and a vision system, it’s possible to add sophisticated vision-based autonomy to tiny robots by doing the computation remotely. So, next time you see something scuttling across the ground, give it another look, because it might be looking right back at you.

“Wireless steerable vision for live insects and insect-scale robots,” by Vikram Iyer, Ali Najafi, Johannes James, Sawyer Fuller, and Shyamnath Gollakota from the University of Washington, is published in Science Robotics.


#437745 Video Friday: Japan’s Giant Gundam ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AWS Cloud Robotics Summit – August 18-19, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

It’s coming together—literally! Japan’s giant Gundam appears nearly finished and ready for its first steps. In a recent video, Gundam Factory Yokohama, which is constructing the 18-meter-tall, 25-ton walking robot, provided an update on the project. The video shows the Gundam getting its head attached—after being blessed by Shinto priests.

In the video update, they say the project is “steadily progressing” and further details will be announced around the end of September.

[ Gundam Factory Yokohama ]

Creating robots with emotional personalities will transform the usability of robots in the real world. As previous emotive social robots are mostly based on statically stable robots whose mobility is limited, this work develops an animation-to-real-world pipeline that enables dynamic bipedal robots that can twist, wiggle, and walk to behave with emotions.

So that’s where Cassie’s eyes go.

[ Berkeley ]

Now that the DARPA SubT Cave Circuit is all virtual, here’s a good reminder of how it’ll work.

[ SubT ]

Since July 20, anyone 11+ years of age must wear a mask in closed public places in France. This measure is also highly recommended in many European, African and Persian Gulf countries. To support businesses and public places, SoftBank Robotics Europe unveils a new feature with Pepper: AI Face Mask Detection.

[ SoftBank ]

University of Michigan researchers are developing new origami-inspired methods for designing, fabricating and actuating micro-robots using heat. These improvements will expand the mechanical capabilities of the tiny bots, allowing them to fold into more complex shapes.

[ University of Michigan ]

The Suzumori Endo Lab at Tokyo Tech has created various types of IPMC robots, fabricated using novel 3D fabrication methods.

[ Suzumori Endo Lab ]

The most explode-y of drones manages not to explode this time.

[ SpaceX ]

At Amazon, we’re constantly innovating to support our employees, customers, and communities as effectively as possible. As our fulfillment and delivery teams have been hard at work supplying customers with items during the pandemic, Amazon’s robotics team has been working behind the scenes to re-engineer bots and processes to increase safety in our fulfillment centers.

While some folks are able to do their jobs at home with just a laptop and internet connection, it’s not that simple for other employees at Amazon, including those who spend their days building and testing robots. Some engineers have turned their homes into R&D labs to continue building these new technologies to better serve our customers and employees. Their creativity and resourcefulness to keep our important programs going is inspiring.

[ Amazon ]

Australian Army soldiers from the 2nd/14th Light Horse Regiment (Queensland Mounted Infantry) demonstrated the PD-100 Black Hornet Nano unmanned aerial vehicle during a training exercise at Shoalwater Bay Training Area, Queensland, on 4 May 2018.

This robot has been around for a long time—maybe 10 years or more? It makes you wonder what the next generation will look like, and if they can manage to make it even smaller.

[ FLIR ]

Event-based cameras are bio-inspired vision sensors whose pixels work independently from each other and respond asynchronously to brightness changes, with microsecond resolution. Their advantages make it possible to tackle challenging scenarios in robotics, such as high-speed and high dynamic range scenes. We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig.

[ Paper ] via [ HKUST ]

Emys can help keep kindergarteners sitting still for a long time, which is no small feat!

[ Emys ]

Introducing the RoboMaster EP Core, an advanced educational robot that was built to take learning to the next level and provides an all-in-one solution for STEAM-based classrooms everywhere, offering AI and programming projects for students of all ages and experience levels.

[ DJI ]

Dutch food company Heemskerk uses ABB robots to automate its order picking. The new solution reduces the amount of time fresh produce spends in the supply chain, extending its shelf life, minimizing wastage, and creating a more sustainable solution for the fresh food industry.

[ ABB ]

This week’s episode of Pass the Torque features NASA’s Satellite Servicing Projects Division (NExIS) Robotics Engineer, Zakiya Tomlinson.

[ NASA ]

Massachusetts has been challenging Silicon Valley as the robotics capital of the United States. They’re not winning, yet. But they’re catching up.

[ MassTech ]

San Francisco-based Formant is letting anyone remotely take its Spot robot for a walk. Watch The Robot Report editors, based in Boston, take Spot for a walk around Golden Gate Park.

You can apply for this experience through Formant at the link below.

[ Formant ] via [ TRR ]

Thanks Steve!

An Institute for Advanced Study Seminar on “Theoretical Machine Learning,” featuring Peter Stone from UT Austin.

For autonomous robots to operate in the open, dynamically changing world, they will need to be able to learn a robust set of skills from relatively little experience. This talk begins by introducing Grounded Simulation Learning as a way to bridge the so-called reality gap between simulators and the real world in order to enable transfer learning from simulation to a real robot. It then introduces two new algorithms for imitation learning from observation that enable a robot to mimic demonstrated skills from state-only trajectories, without any knowledge of the actions selected by the demonstrator. Connections to theoretical advances in off-policy reinforcement learning will be highlighted throughout.

[ IAS ]


#437673 Can AI and Automation Deliver a COVID-19 ...

Illustration: Marysia Machulska

Within moments of meeting each other at a conference last year, Nathan Collins and Yann Gaston-Mathé began devising a plan to work together. Gaston-Mathé runs a startup that applies automated software to the design of new drug candidates. Collins leads a team that uses an automated chemistry platform to synthesize new drug candidates.

“There was an obvious synergy between their technology and ours,” recalls Gaston-Mathé, CEO and cofounder of Paris-based Iktos.

In late 2019, the pair launched a project to create a brand-new antiviral drug that would block a specific protein exploited by influenza viruses. Then the COVID-19 pandemic erupted across the world stage, and Gaston-Mathé and Collins learned that the viral culprit, SARS-CoV-2, relied on a protein that was 97 percent similar to their influenza protein. The partners pivoted.

Their companies are just two of hundreds of biotech firms eager to overhaul the drug-discovery process, often with the aid of artificial intelligence (AI) tools. The first set of antiviral drugs to treat COVID-19 will likely come from sifting through existing drugs. Remdesivir, for example, was originally developed to treat Ebola, and it has been shown to speed the recovery of hospitalized COVID-19 patients. But a drug made for one condition often has side effects and limited potency when applied to another. If researchers can produce an antiviral that specifically targets SARS-CoV-2, the drug would likely be safer and more effective than a repurposed drug.

There’s one big problem: Traditional drug discovery is far too slow to react to a pandemic. Designing a drug from scratch typically takes three to five years—and that’s before human clinical trials. “Our goal, with the combination of AI and automation, is to reduce that down to six months or less,” says Collins, who is chief strategy officer at SRI Biosciences, a division of the Silicon Valley research nonprofit SRI International. “We want to get this to be very, very fast.”

That sentiment is shared by small biotech firms and big pharmaceutical companies alike, many of which are now ramping up automated technologies backed by supercomputing power to predict, design, and test new antivirals—for this pandemic as well as the next—with unprecedented speed and scope.

“The entire industry is embracing these tools,” says Kara Carter, president of the International Society for Antiviral Research and executive vice president of infectious disease at Evotec, a drug-discovery company in Hamburg. “Not only do we need [new antivirals] to treat the SARS-CoV-2 infection in the population, which is probably here to stay, but we’ll also need them to treat future agents that arrive.”

There are currently about 200 known viruses that infect humans. Although viruses represent less than 14 percent of all known human pathogens, they make up two-thirds of all new human pathogens discovered since 1980.

Antiviral drugs are fundamentally different from vaccines, which teach a person’s immune system to mount a defense against a viral invader, and antibody treatments, which enhance the body’s immune response. By contrast, antivirals are chemical compounds that directly block a virus after a person has become infected. They do this by binding to specific proteins and preventing them from functioning, so that the virus cannot copy itself or enter or exit a cell.

The SARS-CoV-2 virus has an estimated 25 to 29 proteins, but not all of them are suitable drug targets. Researchers are investigating, among other targets, the virus’s exterior spike protein, which binds to a receptor on a human cell; two scissorlike enzymes, called proteases, that cut up long strings of viral proteins into functional pieces inside the cell; and a polymerase complex that makes the cell churn out copies of the virus’s genetic material, in the form of single-stranded RNA.

But it’s not enough for a drug candidate to simply attach to a target protein. Chemists also consider how tightly the compound binds to its target, whether it binds to other things as well, how quickly it metabolizes in the body, and so on. A drug candidate may have 10 to 20 such objectives. “Very often those objectives can appear to be anticorrelated or contradictory with each other,” says Gaston-Mathé.

Compared with antibiotics, antiviral drug discovery has proceeded at a snail’s pace. Scientists advanced from isolating the first antibacterial molecules in 1910 to developing an arsenal of powerful antibiotics by 1944. By contrast, it took until 1951 for researchers to be able to routinely grow large amounts of virus particles in cells in a dish, a breakthrough that earned the inventors a Nobel Prize in Medicine in 1954.

And the lag between the discovery of a virus and the creation of a treatment can be heartbreaking. According to the World Health Organization, 71 million people worldwide have chronic hepatitis C, a major cause of liver cancer. The virus that causes the infection was discovered in 1989, but effective antiviral drugs didn’t hit the market until 2014.

While many antibiotics work on a range of microbes, most antivirals are highly specific to a single virus—what those in the business call “one bug, one drug.” It takes a detailed understanding of a virus to develop an antiviral against it, says Che Colpitts, a virologist at Queen’s University, in Canada, who works on antivirals against RNA viruses. “When a new virus emerges, like SARS-CoV-2, we’re at a big disadvantage.”

Making drugs to stop viruses is hard for three main reasons. First, viruses are the Spartans of the pathogen world: They’re frugal, brutal, and expert at evading the human immune system. About 20 to 250 nanometers in diameter, viruses rely on just a few parts to operate, hijacking host cells to reproduce and often destroying those cells upon departure. They employ tricks to camouflage their presence from the host’s immune system, including preventing infected cells from sending out molecular distress beacons. “Viruses are really small, so they only have a few components, so there’s not that many drug targets available to start with,” says Colpitts.

Second, viruses replicate quickly, typically doubling in number in hours or days. This constant copying of their genetic material enables viruses to evolve quickly, producing mutations able to sidestep drug effects. The virus that causes AIDS soon develops resistance when exposed to a single drug. That’s why a cocktail of antiviral drugs is used to treat HIV infection.

Finally, unlike bacteria, which can exist independently outside human cells, viruses invade human cells to propagate, so any drug designed to eliminate a virus needs to spare the host cell. A drug that fails to distinguish between a virus and a cell can cause serious side effects. “Discriminating between the two is really quite difficult,” says Evotec’s Carter, who has worked in antiviral drug discovery for over three decades.

And then there’s the money barrier. Developing antivirals is rarely profitable. Health-policy researchers at the London School of Economics recently estimated that the average cost of developing a new drug is US $1 billion, and up to $2.8 billion for cancer and other specialty drugs. Because antivirals are usually taken for only short periods of time or during short outbreaks of disease, companies rarely recoup what they spent developing the drug, much less turn a profit, says Carter.

To change the status quo, drug discovery needs fresh approaches that leverage new technologies, rather than incremental improvements, says Christian Tidona, managing director of BioMed X, an independent research institute in Heidelberg, Germany. “We need breakthroughs.”

Putting Drug Development on Autopilot
Earlier this year, SRI Biosciences and Iktos began collaborating on a way to use artificial intelligence and automated chemistry to rapidly identify new drugs to target the COVID-19 virus. Within four months, they had designed and synthesized a first round of antiviral candidates. Here’s how they’re doing it.


STEP 1: Iktos’s AI platform uses deep-learning algorithms in an iterative process to come up with new molecular structures likely to bind to and disable a specific coronavirus protein.

Illustrations: Chris Philpot


STEP 2: SRI Biosciences’s SynFini system is a three-part automated chemistry suite for producing new compounds. Starting with a target compound from Iktos, SynRoute uses machine learning to analyze and optimize routes for creating that compound, with results in about 10 seconds. It prioritizes routes based on cost, likelihood of success, and ease of implementation.


STEP 3: SynJet, an automated inkjet printer platform, tests the routes by printing out tiny quantities of chemical ingredients to see how they react. If the right compound is produced, the platform tests it.


STEP 4: AutoSyn, an automated tabletop chemical plant, synthesizes milligrams to grams of the desired compound for further testing. Computer-selected “maps” dictate paths through the plant’s modular components.


STEP 5: The most promising compounds are tested against live virus samples.


Iktos’s AI platform was created by a medicinal chemist and an AI expert. To tackle SARS-CoV-2, the company used generative models—deep-learning algorithms that generate new data—to “imagine” molecular structures with a good chance of disabling a key coronavirus protein.

For a new drug target, the software proposes and evaluates roughly 1 million compounds, says Gaston-Mathé. It’s an iterative process: At each step, the system generates 100 virtual compounds, which are tested in silico with predictive models to see how closely they meet the objectives. The test results are then used to design the next batch of compounds. “It’s like we have a very, very fast chemist who is designing compounds, testing compounds, getting back the data, then designing another batch of compounds,” he says.
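In spirit, that loop looks something like the toy sketch below. Real generative chemistry proposes molecular graphs and scores them against many pharmacological objectives; here a “molecule” is just a vector of numbers and the predictive models are two deliberately anticorrelated scoring functions (echoing Gaston-Mathé’s point about contradictory objectives), so only the generate-score-refine structure is faithful to what’s described above:

```python
import random

def sample_molecule(bias):
    """Toy generator: a 'molecule' is a random vector near the bias."""
    return [random.gauss(b, 1.0) for b in bias]

def predicted_objectives(mol):
    """Toy in-silico models: two objectives that pull in opposite
    directions, like binding affinity vs. off-target selectivity."""
    affinity = -sum((x - 1.0) ** 2 for x in mol)
    selectivity = -sum((x + 0.5) ** 2 for x in mol)
    return affinity, selectivity

def design_round(bias, n=100):
    """One iteration: generate a batch, score it, refit the generator."""
    batch = [sample_molecule(bias) for _ in range(n)]
    batch.sort(key=lambda m: sum(predicted_objectives(m)), reverse=True)
    top = batch[:10]
    # "Retrain" the generator by nudging it toward what scored well.
    new_bias = [sum(m[i] for m in top) / len(top) for i in range(len(bias))]
    return new_bias, sum(predicted_objectives(batch[0]))

bias = [0.0] * 8
for _ in range(20):
    bias, best = design_round(bias)
print(f"best combined score after 20 rounds: {best:.2f}")
```

Because the two objectives conflict, the loop converges on a compromise rather than a perfect score, which is exactly the multi-objective tension the designers describe.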

The computer isn’t as smart as a human chemist, Gaston-Mathé notes, but it’s much faster, so it can explore far more of what people in the field call “chemical space”—the set of all possible organic compounds. Unexplored chemical space is huge: Biochemists estimate that there are at least 10⁶³ possible druglike molecules, and that 99.9 percent of all possible small molecules or compounds have never been synthesized.

Still, designing a chemical compound isn’t the hardest part of creating a new drug. After a drug candidate is designed, it must be synthesized, and the highly manual process for synthesizing a new chemical hasn’t changed much in 200 years. It can take days to plan a synthesis process and then months to years to optimize it for manufacture.

That’s why Gaston-Mathé was eager to send Iktos’s AI-generated designs to Collins’s team at SRI Biosciences. With $13.8 million from the Defense Advanced Research Projects Agency, SRI Biosciences spent the last four years automating the synthesis process. The company’s automated suite of three technologies, called SynFini, can produce new chemical compounds in just hours or days, says Collins.

First, machine-learning software devises possible routes for making a desired molecule. Next, an inkjet printer platform tests the routes by printing out and mixing tiny quantities of chemical ingredients to see how they react with one another; if the right compound is produced, the platform runs tests on it. Finally, a tabletop chemical plant synthesizes milligrams to grams of the desired compound.
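The article names the criteria SynRoute ranks routes by (cost, likelihood of success, ease of implementation) but not how they’re combined, so the sketch below is just one plausible shape for that prioritization step. The weights, the data class, and the example routes are all invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class SynthesisRoute:
    name: str
    cost_usd: float        # estimated reagent and instrument-time cost
    p_success: float       # predicted likelihood the chemistry works
    n_manual_steps: int    # proxy for ease of implementation

def route_priority(r, w_cost=1.0, w_success=3.0, w_ease=0.5):
    # Weights are invented; the article names the three criteria but
    # not how SynRoute actually trades them off.
    return (w_success * r.p_success
            - w_cost * (r.cost_usd / 1000)
            - w_ease * r.n_manual_steps)

routes = [
    SynthesisRoute("route A", cost_usd=400, p_success=0.7, n_manual_steps=2),
    SynthesisRoute("route B", cost_usd=150, p_success=0.5, n_manual_steps=1),
    SynthesisRoute("route C", cost_usd=900, p_success=0.9, n_manual_steps=4),
]
for r in sorted(routes, key=route_priority, reverse=True):
    print(f"{r.name}: priority {route_priority(r):.2f}")
```

However the real scoring works, the point is the same: ranking candidate routes in about 10 seconds replaces days of manual synthesis planning.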

Less than four months after Iktos and SRI Biosciences announced their collaboration, they had designed and synthesized a first round of antiviral candidates for SARS-CoV-2. Now they’re testing how well the compounds work on actual samples of the virus.

Out of 10⁶³ possible druglike molecules, 99.9 percent have never been synthesized.

Theirs isn’t the only collaboration applying new tools to drug discovery. In late March, Alex Zhavoronkov, CEO of Hong Kong–based Insilico Medicine, came across a YouTube video showing three virtual-reality avatars positioning colorful, sticklike fragments in the side of a bulbous blue protein. The three researchers were using VR to explore how compounds might bind to a SARS-CoV-2 enzyme. Zhavoronkov contacted the startup that created the simulation—Nanome, in San Diego—and invited it to examine Insilico’s AI-generated molecules in virtual reality.

Insilico runs an AI platform that uses biological data to train deep-learning algorithms, then uses those algorithms to identify molecules with druglike features that will likely bind to a protein target. A four-day training sprint in late January yielded 100 molecules that appear to bind to an important SARS-CoV-2 protease. The company recently began synthesizing some of those molecules for laboratory testing.

Nanome’s VR software, meanwhile, allows researchers to import a molecular structure, then view and manipulate it on the scale of individual atoms. Like human chess players who use computer programs to explore potential moves, chemists can use VR to predict how to make molecules more druglike, says Nanome CEO Steve McCloskey. “The tighter the interface between the human and the computer, the more information goes both ways,” he says.

Zhavoronkov sent data about several of Insilico’s compounds to Nanome, which re-created them in VR. Nanome’s chemist demonstrated chemical tweaks to potentially improve each compound. “It was a very good experience,” says Zhavoronkov.

Meanwhile, in March, Takeda Pharmaceutical Co., of Japan, invited Schrödinger, a New York–based company that develops chemical-simulation software, to join an alliance working on antivirals. Schrödinger’s AI focuses on the physics of how proteins interact with small molecules and one another.

The software sifts through billions of molecules per week to predict a compound’s properties, and it optimizes for multiple desired properties simultaneously, says Karen Akinsanya, chief biomedical scientist and head of discovery R&D at Schrödinger. “There’s a huge sense of urgency here to come up with a potent molecule, but also to come up with molecules that are going to be well tolerated” by the body, she says. Drug developers are seeking compounds that can be broadly used and easily administered, such as an oral drug rather than an intravenous drug, she adds.

Schrödinger evaluated four protein targets and performed virtual screens for two of them, a computing-intensive process. In June, Google Cloud donated the equivalent of 16 million hours of Nvidia GPU time for the company’s calculations. Next, the alliance’s drug companies will synthesize and test the most promising compounds identified by the virtual screens.

Other companies, including Amazon Web Services, IBM, and Intel, as well as several U.S. national labs, are also donating time and resources to the COVID-19 High Performance Computing Consortium. The consortium is supporting 87 projects, which now have access to 6.8 million CPU cores, 50,000 GPUs, and 600 petaflops of computational resources.

While advanced technologies could transform early drug discovery, any new drug candidate still has a long road after that. It must be tested in animals, manufactured in large batches for clinical trials, then tested in a series of trials that, for antivirals, lasts an average of seven years.

In May, the BioMed X Institute in Germany launched a five-year project to build a Rapid Antiviral Response Platform, which would speed drug discovery all the way through manufacturing for clinical trials. The €40 million ($47 million) project, backed by drug companies, will identify outside-the-box proposals from young scientists, then provide space and funding to develop their ideas.

“We’ll focus on technologies that allow us to go from identification of a new virus to 10,000 doses of a novel potential therapeutic ready for trials in less than six months,” says BioMed X’s Tidona, who leads the project.

While a vaccine will likely arrive long before a bespoke antiviral does, experts expect COVID-19 to be with us for a long time, so the effort to develop a direct-acting, potent antiviral continues. Plus, having new antivirals—and tools to rapidly create more—can only help us prepare for the next pandemic, whether it comes next month or in another 102 years.

“We’ve got to start thinking differently about how to be more responsive to these kinds of threats,” says Collins. “It’s pushing us out of our comfort zones.”

This article appears in the October 2020 print issue as “Automating Antivirals.”


#437671 Video Friday: Researchers 3D Print ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online]
IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

The Giant Gundam in Yokohama is actually way cooler than I thought it was going to be.

[ Gundam Factory ] via [ YouTube ]

A new 3D-printing method will make it easier to manufacture and control the shape of soft robots, artificial muscles and wearable devices. Researchers at UC San Diego show that by controlling the printing temperature of liquid crystal elastomer, or LCE, they can control the material’s degree of stiffness and ability to contract—also known as degree of actuation. What’s more, they are able to change the stiffness of different areas in the same material by exposing it to heat.

[ UCSD ]

Thanks Ioana!

This is the first successful reactive stepping test on our new torque-controlled biped robot named Bolt. The robot has 3 active degrees of freedom per leg and one passive joint in the ankle. Since there is no active joint in the ankle, the robot relies only on step location and timing adaptation to stabilize its motion. Not only can the robot perform stepping without active ankles, but it is also capable of rejecting external disturbances, as we show in this video.

[ ODRI ]

The curling robot “Curly” is the first AI-based robot to demonstrate competitive curling skills in a real icy environment, with all of its uncertainties. Scientists from seven different Korean research institutions, together with Prof. Klaus-Robert Müller, head of the machine-learning group at TU Berlin and guest professor at Korea University, developed the AI-based curling robot.

[ TU Berlin ]

MoonRanger, a small robotic rover being developed by Carnegie Mellon University and its spinoff Astrobotic, has completed its preliminary design review in preparation for a 2022 mission to search for signs of water at the moon’s south pole. Red Whittaker explains why the new MoonRanger Lunar Explorer design is innovative and different from prior planetary rovers.

[ CMU ]

Cobalt’s security robot can now navigate unmodified elevators, which is an impressive feat.

Also, EXTERMINATE!

[ Cobalt ]

OrionStar, the robotics company invested in by Cheetah Mobile, announced the Robotic Coffee Master. Incorporating 3,000 hours of AI learning, 30,000 hours of robotic arm testing and machine vision training, the Robotic Coffee Master can perform complex brewing techniques, such as curves and spirals, with millimeter-level stability and accuracy (reset error ≤ 0.1mm).

[ Cheetah Mobile ]

DARPA OFFensive Swarm-Enabled Tactics (OFFSET) researchers recently tested swarms of autonomous air and ground vehicles at the Leschi Town Combined Arms Collective Training Facility (CACTF), located at Joint Base Lewis-McChord (JBLM) in Washington. The Leschi Town field experiment is the fourth of six planned experiments for the OFFSET program, which seeks to develop large-scale teams of collaborative autonomous systems capable of supporting ground forces operating in urban environments.

[ DARPA ]

Here are some highlights from Team Explorer’s SubT Urban competition back in February.

[ Team Explorer ]

Researchers with the Skoltech Intelligent Space Robotics Laboratory have developed a system that allows easy interaction with a micro-quadcopter with LEDs that can be used for light-painting. The researchers used a 92x92x29 mm Crazyflie 2.0 quadrotor that weighs just 27 grams, equipped with a light reflector and an array of controllable RGB LEDs. The control system consists of a glove equipped with an inertial measurement unit (IMU; an electronic device that tracks the movement of a user’s hand), and a base station that runs a machine learning algorithm.

[ Skoltech ]

“DeKonBot” is the prototype of a cleaning and disinfection robot for potentially contaminated surfaces in buildings such as door handles, light switches or elevator buttons. While other cleaning robots often spray the cleaning agents over a large area, DeKonBot autonomously identifies the surface to be cleaned.

[ Fraunhofer IPA ]

On Oct. 20, the OSIRIS-REx mission will perform the first attempt of its Touch-And-Go (TAG) sample collection event. Not only will the spacecraft navigate to the surface using innovative navigation techniques, but it could also collect the largest sample since the Apollo missions.

[ NASA ]

With all the robotics research that seems to happen in places where snow is more of an occasional novelty or annoyance, it’s good to see NORLAB taking things more seriously.

[ NORLAB ]

Telexistence’s Model-T robot works very slowly, but very safely, restocking shelves.

[ Telexistence ] via [ YouTube ]

Roboy 3.0 will be unveiled next month!

[ Roboy ]

KUKA ready2_educate is your training cell for hands-on education in robotics. It is especially aimed at schools, universities and company training facilities. The training cell is a complete starter package and your perfect partner for entry into robotics.

[ KUKA ]

A UPenn GRASP Lab Special Seminar on Data Driven Perception for Autonomy, presented by Dapo Afolabi from UC Berkeley.

Perception systems form a crucial part of autonomous and artificial intelligence systems since they convert data about the relationship between an autonomous system and its environment into meaningful information. Perception systems can be difficult to build since they may involve modeling complex physical systems or other autonomous agents. In such scenarios, data driven models may be used to augment physics based models for perception. In this talk, I will present work making use of data driven models for perception tasks, highlighting the benefit of such approaches for autonomous systems.

[ GRASP Lab ]

A Maryland Robotics Center Special Robotics Seminar on Underwater Autonomy, presented by Ioannis Rekleitis from the University of South Carolina.

This talk presents an overview of algorithmic problems related to marine robotics, with a particular focus on increasing the autonomy of robotic systems in challenging environments. I will talk about vision-based state estimation and mapping of underwater caves. An application of monitoring coral reefs is going to be discussed. I will also talk about several vehicles used at the University of South Carolina such as drifters, underwater, and surface vehicles. In addition, a short overview of the current projects will be discussed. The work that I will present has a strong algorithmic flavour, while it is validated in real hardware. Experimental results from several testing campaigns will be presented.

[ MRC ]

This week’s CMU RI Seminar comes from Scott Niekum at UT Austin, on Scaling Probabilistically Safe Learning to Robotics.

Before learning robots can be deployed in the real world, it is critical that probabilistic guarantees can be made about the safety and performance of such systems. This talk focuses on new developments in three key areas for scaling safe learning to robotics: (1) a theory of safe imitation learning; (2) scalable reward inference in the absence of models; (3) efficient off-policy policy evaluation. The proposed algorithms offer a blend of safety and practicality, making a significant step towards safe robot learning with modest amounts of real-world data.

[ CMU RI ]


#437579 Disney Research Makes Robotic Gaze ...

While it’s not totally clear to what extent human-like robots are better than conventional robots for most applications, one area I’m personally comfortable with them is entertainment. The folks over at Disney Research, who are all about entertainment, have been working on this sort of thing for a very long time, and some of their animatronic attractions are actually quite impressive.

The next step for Disney is to make its animatronic figures, which currently feature scripted behaviors, perform in an interactive manner with visitors. The challenge is that interactivity is where you start to get into potential Uncanny Valley territory, a risk whenever you try to create “the illusion of life,” and that is exactly what Disney (they explicitly say) is trying to do.

In a paper presented at IROS this month, a team from Disney Research, Caltech, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering is trying to nail that illusion of life with a single, and perhaps most important, social cue: eye gaze.

Before you watch this video, keep in mind that you’re watching a specific character, as Disney describes:

The robot character plays an elderly man reading a book, perhaps in a library or on a park bench. He has difficulty hearing and his eyesight is in decline. Even so, he is constantly distracted from reading by people passing by or coming up to greet him. Most times, he glances at people moving quickly in the distance, but as people encroach into his personal space, he will stare with disapproval for the interruption, or provide those that are familiar to him with friendly acknowledgment.

What, exactly, does “lifelike” mean in the context of robotic gaze? The paper abstract describes the goal as “[seeking] to create an interaction which demonstrates the illusion of life.” I suppose you could think of it like a sort of old-fashioned Turing test focused on gaze: If the gaze of this robot cannot be distinguished from the gaze of a human, then victory, that’s lifelike. And critically, we’re talking about mutual gaze here—not just a robot gazing off into the distance, but you looking deep into the eyes of this robot and it looking right back at you just like a human would. Or, just like some humans would.

The approach that Disney is using is more animation-y than biology-y or psychology-y. In other words, they’re not trying to figure out what’s going on in our brains to make our eyes move the way that they do when we’re looking at other people and basing their control system on that, but instead, Disney just wants it to look right. This “visual appeal” approach is totally fine, and there’s been an enormous amount of human-robot interaction (HRI) research behind it already, albeit usually with less explicitly human-like platforms. And speaking of human-like platforms, the hardware is a “custom Walt Disney Imagineering Audio-Animatronics bust,” which has DoFs that include neck, eyes, eyelids, and eyebrows.

In order to decide on gaze motions, the system first identifies a person to target with its attention using an RGB-D camera. If more than one person is visible, the system calculates a curiosity score for each, currently simplified to be based on how much motion it sees. Depending on which visible person has the highest curiosity score, the system will choose from a variety of high-level gaze behavior states, including:

Read: The Read state can be considered the “default” state of the character. When not executing another state, the robot character will return to the Read state. Here, the character will appear to read a book located at torso level.

Glance: A transition to the Glance state from the Read or Engage states occurs when the attention engine indicates that there is a stimuli with a curiosity score […] above a certain threshold.

Engage: The Engage state occurs when the attention engine indicates that there is a stimuli […] to meet a threshold and can be triggered from both Read and Glance states. This state causes the robot to gaze at the person-of-interest with both the eyes and head.

Acknowledge: The Acknowledge state is triggered from either Engage or Glance states when the person-of-interest is deemed to be familiar to the robot.

Running underneath these higher level behavior states are lower level motion behaviors like breathing, small head movements, eye blinking, and saccades (the quick eye movements that occur when people, or robots, look between two different focal points). The term for this hierarchical behavioral state layering is a subsumption architecture, which goes all the way back to Rodney Brooks’ work on robots like Genghis in the 1980s and Cog and Kismet in the ’90s, and it provides a way for more complex behaviors to emerge from a set of simple, decentralized low-level behaviors.
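Here’s a compact sketch of how those two layers might fit together. The state names and the motion-based curiosity score come from the paper as described above, but the thresholds, the familiarity test, and the collapse of the state transitions into a single function are our own simplifications:

```python
import random

GLANCE_T, ENGAGE_T = 0.3, 0.7   # hypothetical curiosity thresholds

def select_gaze_state(people):
    """Pick a high-level state from the paper's repertoire.

    Curiosity is 'currently simplified to be based on how much motion
    it sees', so people are scored by motion alone here.
    """
    if not people:
        return "Read", None                      # default: read the book
    target = max(people, key=lambda p: p["motion"])
    if target["motion"] < GLANCE_T:
        return "Read", None
    if target["familiar"]:
        return "Acknowledge", target             # friendly acknowledgment
    return ("Engage" if target["motion"] >= ENGAGE_T else "Glance"), target

def low_level_layer():
    """Always-on behaviors that higher layers can subsume (override)."""
    acts = ["breathe"]
    if random.random() < 0.1:
        acts.append("blink")
    if random.random() < 0.3:
        acts.append("saccade")
    return acts

people = [{"motion": 0.8, "familiar": False},
          {"motion": 0.4, "familiar": True}]
state, target = select_gaze_state(people)
print(state, low_level_layer())   # e.g. Engage ['breathe', 'saccade']
```

The subsumption idea is visible in the split: the low-level layer never stops running, and the high-level state simply takes over the degrees of freedom it needs, the way Engage commandeers both the eyes and the head.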

“25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”
—Rodney Brooks, MIT emeritus professor

Brooks, an emeritus professor at MIT and, most recently, cofounder and CTO of Robust.ai, tweeted about the Disney project, saying: “People underestimate how long it takes to get from academic paper to real world robotics. 25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”

From the paper:

Although originally intended for control of mobile robots, we find that the subsumption architecture, as presented in [17], lends itself as a framework for organizing animatronic behaviors. This is due to the analogous use of subsumption in human behavior: human psychomotor behavior can be intuitively modeled as layered behaviors with incoming sensory inputs, where higher behavioral levels are able to subsume lower behaviors. At the lowest level, we have involuntary movements such as heartbeats, breathing and blinking. However, higher behavioral responses can take over and control lower level behaviors, e.g., fight-or-flight response can induce faster heart rate and breathing. As our robot character is modeled after human morphology, mimicking biological behaviors through the use of a bottom-up approach is straightforward.

The result, as the video shows, appears to be quite good, although it’s hard to tell how it would all come together if the robot had more of, you know, a face. But it seems like you don’t necessarily need to have a lifelike humanoid robot to take advantage of this architecture in an HRI context—any robot that wants to make a gaze-based connection with a human could benefit from doing it in a more human-like way.

“Realistic and Interactive Robot Gaze,” by Matthew K.X.J. Pan, Sungjoon Choi, James Kennedy, Kyna McIntosh, Daniel Campos Zamora, Gunter Niemeyer, Joohyung Kim, Alexis Wieland, and David Christensen from Disney Research, California Institute of Technology, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering, was presented at IROS 2020. You can find the full paper, along with a 13-minute video presentation, on the IROS on-demand conference website.

