#437721 Video Friday: Child Robot Learning to ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

We first met Ibuki, Hiroshi Ishiguro’s latest humanoid robot, a couple of years ago. A recent video shows how Ishiguro and his team are teaching the robot to express its emotional state through gait and body posture while moving.

This paper presents a subjective evaluation of a wheeled mobile humanoid robot expressing emotions during movement by replicating human gait-induced upper-body motion. For this purpose, we propose a robot equipped with a vertical oscillation mechanism that generates such motion based on the human center-of-mass trajectory. In the experiment, participants watched videos of the robot’s different emotional gait-induced upper-body motions and assessed the type of emotion shown, as well as their confidence level in their answer.

[ Hiroshi Ishiguro Lab ] via [ RobotStart ]

ICYMI: This is a zinc-air battery made partly of Kevlar that can be used to support weight, not just add to it.

Just as biological fat reserves store energy in animals, a new rechargeable zinc battery integrates into the structure of a robot to provide much more energy, a team led by the University of Michigan has shown.

The new battery works by passing hydroxide ions between a zinc electrode and the air side through an electrolyte membrane. That membrane is partly a network of aramid nanofibers—the carbon-based fibers found in Kevlar vests—and a new water-based polymer gel. The gel helps shuttle the hydroxide ions between the electrodes. Made with cheap, abundant and largely nontoxic materials, the battery is more environmentally friendly than those currently in use. The gel and aramid nanofibers will not catch fire if the battery is damaged, unlike the flammable electrolyte in lithium ion batteries. The aramid nanofibers could be upcycled from retired body armor.

[ University of Michigan ]

In what they say is the first large-scale study of the interactions between sound and robotic action, researchers at CMU’s Robotics Institute found that sounds could help a robot differentiate between objects, such as a metal screwdriver and a metal wrench. Hearing also could help robots determine what type of action caused a sound and help them use sounds to predict the physical properties of new objects.
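For a rough sense of what learning from sound looks like in practice, here is a minimal sketch (our illustration, not CMU's actual pipeline): summarize each audio clip with standard MFCC features and train an off-the-shelf classifier on labeled clips. The file paths, labels, and choice of classifier are all assumptions made for the example.

# Illustrative sketch (not the CMU pipeline): classify objects by the sounds
# they make, using MFCC audio features and a small off-the-shelf classifier.
import numpy as np
import librosa                    # audio loading and feature extraction
from sklearn.svm import SVC       # simple classifier, enough for a sketch

def sound_features(wav_path, sr=16000, n_mfcc=20):
    """Load a short clip and summarize it as a fixed-length feature vector."""
    y, sr = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    # Mean and standard deviation over time give one descriptor per clip.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_sound_classifier(labeled_clips):
    """labeled_clips: hypothetical (path, label) pairs, e.g. ("wrench_01.wav", "wrench")."""
    X = np.stack([sound_features(path) for path, _ in labeled_clips])
    y = [label for _, label in labeled_clips]
    return SVC(kernel="rbf").fit(X, y)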

[ CMU ]

Captured on Aug. 11 during the second rehearsal of the OSIRIS-REx mission’s sample collection event, this series of images shows the SamCam imager’s field of view as the NASA spacecraft approaches asteroid Bennu’s surface. The rehearsal brought the spacecraft through the first three maneuvers of the sampling sequence to a point approximately 131 feet (40 meters) above the surface, after which the spacecraft performed a back-away burn.

These images were captured over a 13.5-minute period. The imaging sequence begins at approximately 420 feet (128 meters) above the surface – before the spacecraft executes the “Checkpoint” maneuver – and runs through to the “Matchpoint” maneuver, with the last image taken approximately 144 feet (44 meters) above the surface of Bennu.

[ NASA ]

The DARPA AlphaDogfight Trials Final Event took place yesterday; the livestream is like 5 hours long, but you can skip ahead to 4:39 ish to see the AI winner take on a human F-16 pilot in simulation.

Some things to keep in mind about the result: the AI had perfect situational knowledge while the human pilot had to use eyeballs. In particular, the AI did very well at lining up its (virtual) gun with the human during fast passing maneuvers, which is the sort of thing that autonomous systems excel at but isn't necessarily reflective of better strategy.

[ DARPA ]

Coming soon from Clearpath Robotics!

[ Clearpath ]

This video introduces Preferred Networks’ Hand type A, a tendon-driven robot gripper with a passively switchable underactuated surface.

[ Preferred Networks ]

CYBATHLON 2020 will take place on 13–14 November 2020 at the teams’ home bases. The teams will set up their own competition infrastructure and film their races. Instead of starting directly next to each other, the pilots will start individually, under the supervision of CYBATHLON officials. From Zurich, the competitions will be broadcast through a new platform in a unique live programme.

[ Cybathlon ]

In this project, we consider the task of autonomous car racing in the top-selling car racing game Gran Turismo Sport. Gran Turismo Sport is known for its detailed physics simulation of various cars and tracks. Our approach makes use of maximum-entropy deep reinforcement learning and a new reward design to train a sensorimotor policy to complete a given race track as fast as possible. We evaluate our approach in three different time trial settings with different cars and tracks. Our results show that the obtained controllers not only beat the built-in non-player character of Gran Turismo Sport, but also outperform the fastest known times in a dataset of personal best lap times of over 50,000 human drivers.
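The abstract doesn't spell out the reward, so here is a generic sketch of the kind of reward design that encodes "finish the track as fast as possible": a dense reward for progress along the track centerline at every control step, minus a penalty for hitting a wall. Treat it as an illustration of the pattern rather than the paper's exact formulation; the centerline data structure and penalty weight are assumptions.

# Illustrative time-trial reward shaping (generic pattern, not the paper's
# exact reward): dense centerline progress minus a wall-contact penalty.
import numpy as np

def centerline_progress(position, centerline):
    """Arc-length coordinate (meters) of the centerline sample nearest to the car.
    centerline: {"points": (N, 2) array, "arclength": (N,) array}."""
    dists = np.linalg.norm(centerline["points"] - np.asarray(position), axis=1)
    return centerline["arclength"][np.argmin(dists)]

def racing_reward(prev_pos, pos, centerline, wall_contact, wall_penalty=5.0):
    # Progress made along the track during this control step
    # (lap wrap-around handling is omitted to keep the sketch short).
    progress = centerline_progress(pos, centerline) - centerline_progress(prev_pos, centerline)
    # Penalize any contact with the track walls.
    return progress - (wall_penalty if wall_contact else 0.0)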

[ UZH ]

With the help of the pitasc software from Fraunhofer IPA, an assembly task is no longer programmed point by point but in relation to the workpiece. This lets pitasc adapt the assembly process to new product variants simply by updating a few parameters.

[ Fraunhofer ]

In this video, a multi-material robot simulator is used to design a shape-changing robot, which is then transferred to physical hardware. The simulated and real robots can use shape change to switch between rolling gaits and inchworm gaits, to locomote in multiple environments.

[ Yale ]

This work presents a novel loco-manipulation control framework for the execution of complex tasks with kinodynamic constraints using mobile manipulators. As a representative example, we consider the handling and re-positioning of pallet jacks in unstructured environments. While these results offer a proof of concept of the effectiveness of the proposed framework, they also demonstrate the high potential of mobile manipulators for relieving human workers from such repetitive and labor-intensive tasks. We believe that this extended functionality can contribute to increasing the usability of mobile manipulators in different application scenarios.

[ Paper ] via [ IIT ]

I don’t know why this dinosaur ice cream serving robot needs to blow smoke out of its nose, but I like it.

[ Connected Robotics ] via [ RobotStart ]

Guardian S remote visual inspection and surveillance robots make laying cable runs in confined or hard to reach spaces easy. With advanced maneuverability and the ability to climb vertical, ferrous surfaces, the robot reaches areas that are not always easily accessible.

[ Sarcos ]

Looks like the company that bought Anki is working on an add-on to let cars charge while they drive.

[ Digital Dream Labs ]

Chris Atkeson gives a brief talk for the CMU Robotics Institute orientation.

[ CMU RI ]

A UofT Robotics Seminar, featuring Russ Tedrake from MIT and TRI on “Feedback Control for Manipulation.”

Control theory has an answer for just about everything, but seems to fall short when it comes to closing a feedback loop using a camera, dealing with the dynamics of contact, and reasoning about robustness over the distribution of tasks one might find in the kitchen. Recent examples from RL and imitation learning demonstrate great promise, but don’t leverage the rigorous tools from systems theory. I’d like to discuss why, and describe some recent results of closing feedback loops from pixels for “category-level” robot manipulation.

[ UofT ]

Posted in Human Robots

#437716 Robotic Tank Is Designed to Crawl ...

Let’s talk about bowels! Most of us have them, most of us use them a lot, and like anything that gets used a lot, they eventually need to get checked out to help make sure that everything will keep working the way it should for as long as you need it to. Generally, this means a colonoscopy, and while there are other ways of investigating what’s going on in your gut, a camera on a flexible tube is still “the gold-standard method of diagnosis and intervention,” according to some robotics researchers who want to change that up a bit.

The University of Colorado’s Advanced Medical Technologies Lab has been working on a tank robot called Endoculus that’s able to actively drive itself through your intestines, rather than being shoved. The good news is that it’s very small, and the bad news is that it’s probably not as small as you’d like it to be.

The reason why a robot like Endoculus is necessary (or at least a good idea) is that trying to stuff a semi-rigid endoscopy tube into the semi-floppy tube that is your intestine doesn’t always go smoothly. Sometimes, the tip of the endoscopy tube can get stuck, and as more tube is fed in, it causes the intestine to distend, which best case is painful and worst case can cause serious internal injuries. One way of solving this is with swallowable camera pills, but those don’t help you with tasks like taking tissue samples. A self-propelled system like Endoculus could reduce risk while also making the procedure faster and cheaper.

Image: Advanced Medical Technologies Lab/University of Colorado

The researchers say that while the width of Endoculus is larger than a traditional endoscope, the device would require “minimal distention during use” and would “not cause pain or harm to the patient.” Future versions of the robot, they add, will “yield a smaller footprint.”

Endoculus gets around with four sets of treads, angled to provide better traction against the curved walls of your gut. The treads are micropillared, or covered with small nubs, which helps them deal with all your “slippery colon mucosa.” Designing the robot was particularly tricky because of the severe constraints on the overall size of the device, which is just 3 centimeters wide and 2.3 cm high. To cram in the two motors required for full control, the researchers had to arrange them parallel to the treads, resulting in a fairly complex system of 3D-printed worm gears. And to make the robot actually useful, it includes a camera, LED lights, tubes for injecting air and water, and a tool port that can accommodate endoscopy instruments like forceps and snares to retrieve tissue samples.

So far, Endoculus has spent some time inside of a live pig, although it wasn’t able to get that far since pig intestines are smaller than human intestines, and because apparently the pig intestine is spiraled somehow. The pig (and the robot) both came out fine. A (presumably different) pig then provided some intestine that was expanded to human-intestine size, inside of which Endoculus did much better, and was able to zip along at up to 40 millimeters per second without causing any damage. Personally, I’m not sure I’d want a robot to explore my intestine at a speed much higher than that.

The next step with Endoculus is to add some autonomy, which means figuring out how to do localization and mapping using the robot’s onboard camera and IMU. And then of course someone has to be the first human to experience Endoculus directly, which I’d totally volunteer for except the research team is in Colorado and I’m not. Sorry!

“Novel Optimization-Based Design and Surgical Evaluation of a Treaded Robotic Capsule Colonoscope,” by Gregory A. Formosa, J. Micah Prendergast, Steven A. Edmundowicz, and Mark E. Rentschler, from the University of Colorado, was presented at ICRA 2020.

Posted in Human Robots

#437709 iRobot Announces Major Software Update, ...

Since the release of the very first Roomba in 2002, iRobot’s long-term goal has been to deliver cleaner floors in a way that’s effortless and invisible. Which sounds pretty great, right? And arguably, iRobot has managed to do exactly this, with its most recent generation of robot vacuums that make their own maps and empty their own dustbins. For those of us who trust our robots, this is awesome, but iRobot has gradually been realizing that many Roomba users either don’t want this level of autonomy, or aren’t ready for it.

Today, iRobot is announcing a major new update to its app that represents a significant shift of its overall approach to home robot autonomy. Humans are being brought back into the loop through software that tries to learn when, where, and how you clean so that your Roomba can adapt itself to your life rather than the other way around.

To understand why this is such a shift for iRobot, let’s take a very brief look back at how the Roomba interface has evolved over the last couple of decades. The first generation of Roomba had three buttons on it that allowed (or required) the user to select whether the room being vacuumed was small, medium, or large. iRobot ditched that system one generation later, replacing the room-size buttons with a single “clean” button. Programmable scheduling meant that users no longer needed to push any buttons at all, and with Roombas able to find their way back to their docking stations, all you needed to do was empty the dustbin. And with the most recent few generations (the S and i series), the dustbin emptying is also done for you, reducing direct interaction with the robot to once a month or less.

Image: iRobot

The point that the top-end Roombas are at now reflects a goal that iRobot has been working toward since 2002: With autonomy, scheduling, and the clean base to empty the bin, you can set up your Roomba to vacuum when you’re not home, giving you cleaner floors every single day without you even being aware that the Roomba is hard at work while you’re out. It’s not just hands-off, it’s brain-off. No noise, no fuss, just things being cleaner thanks to the efforts of a robot that does its best to be invisible to you. Personally, I’ve been completely sold on this idea for home robots, and iRobot CEO Colin Angle was as well.

“I probably told you that the perfect Roomba is the Roomba that you never see, you never touch, you just come home everyday and it’s done the right thing,” Angle told us. “But customers don’t want that—they want to be able to control what the robot does. We started to hear this a couple years ago, and it took a while before it sunk in, but it made sense.”

How? Angle compares it to having a human come into your house to clean, but you weren’t allowed to tell them where or when to do their job. Maybe after a while, you’ll build up the amount of trust necessary for that to work, but in the short term, it would likely be frustrating. And people get frustrated with their Roombas for this reason. “The desire to have more control over what the robot does kept coming up, and for me, it required a pretty big shift in my view of what intelligence we were trying to build. Autonomy is not intelligence. We need to do something more.”

That something more, Angle says, is a partnership as opposed to autonomy. It’s an acknowledgement that not everyone has the same level of trust in robots as the people who build them. It’s an understanding that people want to have a feeling of control over their homes, that they have set up the way that they want, and that they’ve been cleaning the way that they want, and a robot shouldn’t just come in and do its own thing.

“Until the robot proves that it knows enough about your home and about the way that you want your home cleaned,” Angle says, “you can’t move forward.” He adds that this is one of those things that seem obvious in retrospect, but even if they’d wanted to address the issue before, they didn’t have the technology to solve the problem. Now they do. “This whole journey has been earning the right to take this next step, because a robot can’t be responsive if it’s incompetent,” Angle says. “But thinking that autonomy was the destination was where I was just completely wrong.”

The previous iteration of the iRobot app (and the Roombas themselves) was built around one big fat CLEAN button. The new approach instead tries to figure out in much more detail where the robot should clean, and when, using a mixture of autonomous technology and interaction with the user.

Where to Clean
Knowing where to clean depends on your Roomba having a detailed and accurate map of its environment. For several generations now, Roombas have been using visual simultaneous localization and mapping (VSLAM) to build persistent maps of your home. These maps have been used to tell the Roomba to clean in specific rooms, but that’s about it. With the new update, Roombas with cameras will be able to recognize some objects and features in your home, including chairs, tables, couches, and even countertops. The robots will use these features to identify where messes tend to happen so that they can focus on those areas—like around the dining room table or along the front of the couch.

We should take a minute here to clarify how the Roomba is using its camera. The original (primary?) purpose of the camera was for VSLAM, where the robot would take photos of your home, downsample them into QR-code-like patterns of light and dark, and then use those (with the assistance of other sensors) to navigate. Now the camera is also being used to take pictures of other stuff around your house to make that map more useful.
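To make the "QR-code-like patterns" idea concrete, here is a toy sketch (definitely not iRobot's actual VSLAM code): average-pool a grayscale frame into a coarse grid, threshold each cell into light or dark, and compare the resulting fingerprints with a Hamming distance. Real VSLAM systems do much more, but this captures why nothing recognizable survives the downsampling.

# Toy place-fingerprint sketch, for illustration only (not iRobot's algorithm).
import numpy as np

def binary_fingerprint(gray_frame, grid=(16, 16)):
    """Downsample a grayscale image (2-D array) into a small binary grid."""
    h, w = gray_frame.shape
    gh, gw = grid
    # Average-pool the image into grid cells (crop so it divides evenly).
    cropped = gray_frame[: h - h % gh, : w - w % gw]
    cells = cropped.reshape(gh, cropped.shape[0] // gh, gw, cropped.shape[1] // gw).mean(axis=(1, 3))
    # Threshold at the median: each cell becomes light (1) or dark (0).
    return (cells > np.median(cells)).astype(np.uint8)

def fingerprint_distance(a, b):
    """Hamming distance; a small value means "this looks like the same place"."""
    return int(np.count_nonzero(a != b))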

Photo: iRobot

This is done through machine learning using a library of images of common household objects from a floor perspective that iRobot had to develop from scratch. Angle clarified for us that this is all done via a neural net that runs on the robot, and that “no recognizable images are ever stored on the robot or kept, and no images ever leave the robot.” Worst case, if all the data iRobot has about your home gets somehow stolen, the hacker would only know that (for example) your dining room has a table in it and the approximate size and location of that table, because the map iRobot has of your place only stores symbolic representations rather than images.
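Here is a rough sketch of what keeping only a symbolic map might look like in code. The detector interface, coordinate transform, and field names are invented for illustration; this is not iRobot's data model.

# Sketch of a symbolic map entry: keep labels and rough geometry, never pixels.
from dataclasses import dataclass

@dataclass
class MapObject:
    label: str        # e.g. "dining_table" (hypothetical label set)
    x_m: float        # position in the robot's map frame, meters
    y_m: float
    width_m: float    # approximate footprint
    depth_m: float

def update_symbolic_map(symbolic_map, frame, detector, camera_to_map):
    """Run a detector on a camera frame, store symbolic records, drop the image."""
    for det in detector(frame):              # hypothetical on-robot detector
        x, y, w, d = camera_to_map(det)      # project the detection into the map frame
        symbolic_map.append(MapObject(det.label, x, y, w, d))
    return symbolic_map                      # the frame itself is never stored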

Another useful new feature is intended to help manage the “evil Roomba places” (as Angle puts it) that every home has: the spots that cause Roombas to get stuck. If a place is evil enough that the Roomba has to call you for help because it gave up completely, the Roomba will now remember it and suggest that you either make some changes or let it stop cleaning there, which seems reasonable.

When to Clean
It turns out that the primary cause of mission failure for Roombas is not that they get stuck or that they run out of battery—it’s user cancellation, usually because the robot is getting in the way or being noisy when you don’t want it to be. “If you kill a Roomba’s job because it annoys you,” points out Angle, “how is that robot being a good partner? I think it’s an epic fail.” Of course, it’s not the robot’s fault, because Roombas only clean when we tell them to, which Angle says is part of the problem. “People actually aren’t very good at making their own schedules—they tend to oversimplify, and not think through what their schedules are actually about, which leads to lots of [figurative] Roomba death.”

To help you figure out when the robot should actually be cleaning, the new app will look for patterns in when you ask the robot to clean, and then recommend a schedule based on those patterns. That might mean the robot cleans different areas at different times every day of the week. The app will also make event-based scheduling recommendations, integrated with other smart home devices. Would you prefer the Roomba to clean every time you leave the house? The app can integrate with your security system (or garage door, or any number of other things) and take care of that for you.
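As a toy illustration of how such a recommendation could come out of usage history (our guess at the general pattern, not iRobot's actual algorithm), you could bucket past manual clean requests by weekday and hour and suggest the most common slot whenever a habit is clear enough:

# Illustrative scheduling heuristic, not iRobot's algorithm.
from collections import Counter

def recommend_schedule(clean_requests, min_occurrences=3):
    """clean_requests: datetimes of manually started cleaning jobs.
    Returns {weekday: hour} suggestions for days with an established habit."""
    by_day = {}
    for ts in clean_requests:
        by_day.setdefault(ts.weekday(), Counter())[ts.hour] += 1
    schedule = {}
    for weekday, hours in by_day.items():
        hour, count = hours.most_common(1)[0]
        if count >= min_occurrences:   # only suggest well-established habits
            schedule[weekday] = hour
    return schedule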

More generally, Roomba will now try to fit into the kinds of cleaning routines that many people already have established. For example, the app may suggest an “after dinner” routine that cleans just around the kitchen and dining room table. The app will also, to some extent, pay attention to the environment and season. It might suggest increasing your vacuuming frequency if pollen counts are especially high, or if it’s pet shedding season and you have a dog. Unfortunately, Roomba isn’t (yet?) capable of recognizing dogs on its own, so the app has to cheat a little bit by asking you some basic questions.

A Smarter App

Image: iRobot

The app update, which should be available starting today, is free. The scheduling and recommendations will work on every Roomba model, although for object recognition and anything related to mapping, you’ll need one of the more recent and fancier models with a camera. Future app updates will happen on a more aggressive schedule. Major app releases should happen every six months, with incremental updates happening even more frequently than that.

Angle also told us that overall, this change in direction also represents a substantial shift in resources for iRobot, and the company has pivoted two-thirds of its engineering organization to focus on software-based collaborative intelligence rather than hardware. “It’s not like we’re done doing hardware,” Angle assured us. “But we do think about hardware differently. We view our robots as platforms that have longer life cycles, and each platform will be able to support multiple generations of software. We’ve kind of decoupled robot intelligence from hardware, and that’s a change.”

Angle believes that working toward more intelligent collaboration between humans and robots is “the brave new frontier of artificial intelligence. I expect it to be the frontier for a reasonable amount of time to come,” he adds. “We have a lot of work to do to create the type of easy-to-use experience that consumer robots need.”

Posted in Human Robots

#437707 Video Friday: This Robot Will Restock ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

CLAWAR 2020 – August 24-26, 2020 – [Online Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
CYBATHLON 2020 – November 13-14, 2020 – [Online Event]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today's videos.

Tokyo startup Telexistence has recently unveiled a new robot called the Model-T, an advanced teleoperated humanoid that can use tools and grasp a wide range of objects. Japanese convenience store chain FamilyMart plans to test the Model-T to restock shelves in up to 20 stores by 2022. In the trial, a human “pilot” will operate the robot remotely, handling items like beverage bottles, rice balls, sandwiches, and bento boxes.

With the Model-T and AWP, FamilyMart and TX aim to realize a completely new kind of store operation by making the labor-intensive work of restocking merchandise remote and automated. As a result, stores can operate with fewer workers and recruit employees regardless of the store’s physical location.

[ Telexistence ]

Quadruped dance-off should be a new robotics competition at IROS or ICRA.

I dunno though, that moonwalk might keep Spot in the lead…

[ Unitree ]

Through a hybrid of simulation and real-life training, this air muscle robot is learning to play table tennis.

Table tennis requires fast and precise motions. To gain precision it is necessary to explore in these high-speed regimes; however, exploration can be safety-critical at the same time. The combination of RL and muscular soft robots makes it possible to close this gap. While robots actuated by pneumatic artificial muscles generate the high forces required for smashing, for example, they also offer safe execution of explosive motions thanks to antagonistic actuation.

To enable practical training without real balls, we introduce Hybrid Sim and Real Training (HYSR), which replays prerecorded real balls in simulation while executing actions on the real system. In this manner, RL can learn the challenging motor control of the PAM-driven robot while executing ~15,000 hitting motions.
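Here is a simplified sketch of the HYSR loop as described above, with hypothetical interfaces standing in for the real robot, the simulator, the policy, and the replay buffer; it is meant to show the structure of the idea, not the Max Planck implementation.

# Simplified HYSR-style episode: the ball is replayed in simulation from a
# prerecorded real trajectory, while actions run on the real muscular robot.
import random

def hysr_episode(policy, real_robot, sim, recorded_ball_trajectories, buffer):
    ball_traj = random.choice(recorded_ball_trajectories)   # prerecorded real ball
    sim.reset(ball_trajectory=ball_traj)
    obs = {"robot": real_robot.observe(), "ball": sim.ball_state()}
    done = False
    while not done:
        action = policy(obs)
        real_robot.apply(action)                  # muscle pressures on the real system
        sim.mirror_robot(real_robot.observe())    # keep the simulated racket in sync
        sim.step()                                # advance the replayed ball
        next_obs = {"robot": real_robot.observe(), "ball": sim.ball_state()}
        reward = sim.hitting_reward()
        done = sim.episode_done()
        buffer.add(obs, action, reward, next_obs, done)
        obs = next_obs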

[ Max Planck Institute ]

Thanks Dieter!

Anthony Cowley wrote in to share his recent thesis work on UPSLAM, a fast and lightweight SLAM technique that records data in panoramic depth images (just PNGs) that are easy to visualize and even easier to share between robots, even on low-bandwidth networks.
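One reason plain PNGs work well here: a 16-bit grayscale PNG can hold metric depth at millimeter resolution, and any image viewer can open it. Below is a small sketch of that convention; the fixed scale factor is a common choice for depth images in general, not necessarily UPSLAM's exact encoding.

# Store a depth panorama as an ordinary 16-bit grayscale PNG (illustrative).
import numpy as np
import imageio

DEPTH_SCALE = 1000.0   # store depth in millimeters: 1 mm resolution, ~65 m max range

def save_depth_panorama(path, depth_m):
    """depth_m: float array (H, W) of depths in meters; 0 marks "no return"."""
    depth_mm = np.clip(depth_m * DEPTH_SCALE, 0, 65535).astype(np.uint16)
    imageio.imwrite(path, depth_mm)   # a plain PNG, viewable and shareable anywhere

def load_depth_panorama(path):
    return np.asarray(imageio.imread(path), dtype=np.float32) / DEPTH_SCALE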

[ UPenn ]

Thanks Anthony!

GITAI’s G1 is a general-purpose robot dedicated to space. The G1 will enable the automation of various tasks inside and outside space stations, as well as for lunar base development.

[ Gitai ]

The University of Michigan has a fancy new treadmill that’s built right into the floor, which proves to be a bit much for Mini Cheetah.

But Cassie Blue won’t get stuck on no treadmill! She goes for a 0.3-mile walk across campus, which ends when a certain someone runs the gantry into Cassie Blue’s foot.

[ Michigan Robotics ]

Some serious quadruped research going on at UT Austin Human Centered Robotics Lab.

[ HCRL ]

Will Burrard-Lucas has spent lockdown upgrading his slightly indestructible BeetleCam wildlife photographing robot.

[ Will Burrard-Lucas ]

Teleoperated surgical robots are becoming commonplace in operating rooms, but many are massive (sometimes taking up an entire room) and are difficult to manipulate, especially if a complication arises and the robot needs to be removed from the patient. A new collaboration between the Wyss Institute, Harvard University, and Sony Corporation has created the mini-RCM, a surgical robot the size of a tennis ball that weighs about as much as a penny and performed significantly better than manually operated tools in delicate mock-surgical procedures. Importantly, its small size means it is more comparable to the human tissues and structures on which it operates, and it can easily be removed by hand if needed.

[ Harvard Wyss ]

Yaskawa appears to be working on a robot that can scan you with a temperature gun and then jam a mask on your face?

[ Motoman ]

Maybe we should just not have people working in mines anymore, how about that?

[ Exyn ]

Many current human-robot interactive systems tend to use accurate and fast, but also costly, actuators and tracking systems to establish working prototypes that are safe to use and deploy for user studies. This paper presents an embedded framework to build a desktop space for human-robot interaction, using an open-source robot arm as well as two RGB cameras connected to a Raspberry Pi-based controller, which together allow fast yet low-cost object tracking and manipulation in 3D. We show in our evaluations that this facilitates prototyping a number of systems in which the user and the robot arm can jointly interact with physical objects.
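For reference, the standard low-cost way to get a 3-D position out of two calibrated RGB cameras is triangulation, which OpenCV does in a single call. The sketch below shows that generic recipe; it is not necessarily the exact pipeline used in the paper.

# Generic two-camera triangulation (not necessarily the paper's pipeline).
import numpy as np
import cv2

def triangulate(P1, P2, pixel1, pixel2):
    """P1, P2: 3x4 float projection matrices from calibration;
    pixel1, pixel2: (u, v) position of the tracked object in each view."""
    pts1 = np.asarray(pixel1, dtype=np.float64).reshape(2, 1)
    pts2 = np.asarray(pixel2, dtype=np.float64).reshape(2, 1)
    point_h = cv2.triangulatePoints(P1, P2, pts1, pts2)   # homogeneous 4x1 result
    return (point_h[:3] / point_h[3]).ravel()             # (x, y, z) in the calibration frame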

[ Paper ]

IBM Research is proud to host professor Yoshua Bengio — one of the world’s leading experts in AI — in a discussion of how AI can contribute to the fight against COVID-19.

[ IBM Research ]

Ira Pastor, ideaXme life sciences ambassador interviews Professor Dr. Hiroshi Ishiguro, the Director of the Intelligent Robotics Laboratory, of the Department of Systems Innovation, in the Graduate School of Engineering Science, at Osaka University, Japan.

[ ideaXme ]

A CVPR talk from Stanford’s Chelsea Finn on “Generalization in Visuomotor Learning.”

[ Stanford ]

Posted in Human Robots

#437701 Robotics, AI, and Cloud Computing ...

IBM must be brimming with confidence about its new automated system for performing chemical synthesis, because Big Blue just demonstrated the complex technology live to twenty or so journalists in a virtual room.

IBM even had one of the journalists choose the molecule for the demo: a molecule in a potential Covid-19 treatment. And then we watched as the system synthesized and tested the molecule and provided its analysis in a PDF document that we all saw on the other journalist’s computer. It all worked; again, that’s confidence.

The complex system is based upon technology IBM started developing three years ago that uses artificial intelligence (AI) to predict chemical reactions. In August 2018, IBM made this service available via the Cloud and dubbed it RXN for Chemistry.

Now, the company has added a new wrinkle to its Cloud-based AI: robotics. This new and improved system is no longer named simply RXN for Chemistry, but RoboRXN for Chemistry.

All of the journalists assembled for this live demo of RoboRXN could watch as the robotic system executed the various steps, such as dispensing a small amount of reagent into the reactor and then adding the solvent. The robotic system carried out the entire set of procedures—completing the synthesis and analysis of the molecule—in eight steps.

Image: IBM Research

IBM RXN helps predict chemical reaction outcomes or design retrosynthesis in seconds.

In regular practice, a user will be able to suggest a combination of molecules they would like to test. The AI will pick up the order and task a robotic system to run the reactions necessary to produce and test the molecule. Users will be provided analyses of how well their molecules performed.

Back in March of this year, Silicon Valley-based startup Strateos demonstrated something similar that they had developed. That system also employed a robotic system to help researchers working from the Cloud create new chemical compounds. However, what distinguishes IBM’s system is its incorporation of a third element: the AI.

The backbone of IBM’s AI model is a machine-learning translation method that treats chemistry like language translation. It translates the “language” of chemistry, converting reactants and reagents to products, using the SMILES (simplified molecular-input line-entry system) representation to describe chemical entities.
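To make the "chemistry as translation" framing concrete, here is a small sketch of the data format: a reaction SMILES string is split into a source side (reactants and reagents) and a target side (products), and each side is tokenized for a sequence-to-sequence model. The tokenizer regex below is the one commonly used for molecular transformers, not necessarily IBM's own code, and the esterification example is purely illustrative.

# Sketch: turn a reaction SMILES into (source, target) token sequences.
import re

SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]|Br?|Cl?|N|O|S|P|F|I|b|c|n|o|s|p|\(|\)|\.|=|#|-|\+|\\|/|:|~|@|\?|>|\*|\$|%[0-9]{2}|[0-9])"
)

def tokenize_smiles(smiles):
    return SMILES_TOKEN.findall(smiles)

def reaction_to_pair(reaction_smiles):
    """'reactants>reagents>products' -> (source tokens, target tokens)."""
    reactants, reagents, products = reaction_smiles.split(">")
    source = reactants + ("." + reagents if reagents else "")   # mix reagents into the source
    return tokenize_smiles(source), tokenize_smiles(products)

# Acid-catalyzed esterification of acetic acid and ethanol (illustrative only).
src, tgt = reaction_to_pair("CC(=O)O.OCC>[H+]>CC(=O)OCC")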

IBM has also leveraged an automatic, data-driven strategy to ensure the quality of its data. Researchers there used millions of chemical reactions to teach the AI system chemistry, but contained within that data set were errors. So how did IBM clean this so-called noisy data to eliminate the potential for bad models?

According to Alessandra Toniato, a researcher at IBM Zurich, the team implemented what they dubbed the “forgetting experiment.”

Toniato explains that, in this approach, they asked the AI model how sure it was that the chemical examples it was given were examples of correct chemistry. When faced with this choice, the AI identified chemistry that it had “never learnt,” “forgotten six times,” or “never forgotten.” The examples that were “never forgotten” were clean, and in this way the team was able to clean the data that the AI had been presented with.
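The procedure is essentially what the broader machine-learning literature calls tracking "forgetting events": record, epoch by epoch, whether each training example is predicted correctly, then count how often a previously learned example flips back to being wrong. A generic sketch of that bookkeeping (not IBM's implementation) might look like this:

# Generic "forgetting" analysis for flagging noisy training examples.
import numpy as np

def forgetting_counts(correct_by_epoch):
    """correct_by_epoch: bool array (n_epochs, n_examples); True = predicted correctly."""
    correct = np.asarray(correct_by_epoch, dtype=bool)
    flips = correct[:-1] & ~correct[1:]      # correct at epoch t, wrong at epoch t+1
    forgotten = flips.sum(axis=0)            # forgetting events per example
    never_learnt = ~correct.any(axis=0)      # never predicted correctly at all
    return forgotten, never_learnt

def keep_clean_examples(correct_by_epoch):
    forgotten, never_learnt = forgetting_counts(correct_by_epoch)
    # Examples learned at least once and never forgotten are treated as clean.
    return np.flatnonzero((forgotten == 0) & ~never_learnt)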

While the AI has always been part of RXN for Chemistry, the robotics is the newest element. The main benefit expected from turning the execution of the reactions over to a robotic system is to free up chemists from the often tedious process of having to design a synthesis from scratch, says Matteo Manica, a research staff member in Cognitive Health Care and Life Sciences at IBM Research Zürich.

“In this demo, you could see how the system is synergistic between a human and AI,” said Manica. “Combine that with the fact that we can run all these processes with a robotic system 24/7 from anywhere in the world, and you can see how it will really help to speed up the whole process.”

There appear to be two business models that IBM is pursuing with its latest technology. One is to deploy the entire system on the premises of a company. The other is to offer licenses to private Cloud installations.

Photo: Michael Buholzer

Teodoro Laino of IBM Research Europe.

“From a business perspective, you can think of having a system like the one we demonstrated replicated on premises within companies or research groups that would like to have the technology at their disposal,” says Teodoro Laino, distinguished research staff member and manager at IBM Research Europe. “On the other hand, we are also pushing to bring the entire system to a service level.”

Just as IBM is brimming with confidence about its new technology, the company also has grand aspirations for it.

Laino adds: “Our aim is to provide chemical services across the world, a sort of Amazon of chemistry, where instead of looking for chemistry already in stock, you are asking for chemistry on demand.”

Posted in Human Robots