Tag Archives: new technology

#437940 How Boston Dynamics Taught Its Robots to ...

A week ago, Boston Dynamics posted a video of Atlas, Spot, and Handle dancing to “Do You Love Me.” It was, according to the video description, a way “to celebrate the start of what we hope will be a happier year.” As of today the video has been viewed nearly 24 million times, and the popularity is no surprise, considering the compelling mix of technical prowess and creativity on display.

Strictly speaking, the stuff going on in the video isn’t groundbreaking, in the sense that we’re not seeing any of the robots demonstrate fundamentally new capabilities, but that shouldn’t take away from how impressive it is—you’re seeing state-of-the-art in humanoid robotics, quadrupedal robotics, and whatever-the-heck-Handle-is robotics.

What is unique about this video from Boston Dynamics is the artistic component. We know that Atlas can do some practical tasks, and we know it can do some gymnastics and some parkour, but dancing is certainly something new. To learn more about what it took to make these dancing robots happen (and it’s much more complicated than it might seem), we spoke with Aaron Saunders, Boston Dynamics’ VP of Engineering.

Saunders started at Boston Dynamics in 2003, meaning that he’s been a fundamental part of a huge number of Boston Dynamics’ robots, even the ones you may have forgotten about. Remember LittleDog, for example? A team of two designed and built that adorable little quadruped, and Saunders was one of them.

While he’s been part of the Atlas project since the beginning (and had a hand in just about everything else that Boston Dynamics works on), Saunders has spent the last few years leading the Atlas team specifically, and he was kind enough to answer our questions about their dancing robots.

IEEE Spectrum: What’s your sense of how the Internet has been reacting to the video?

Aaron Saunders: We have different expectations for the videos that we make; this one was definitely anchored in fun for us. The response on YouTube was record-setting for us: We received hundreds of emails and calls with people expressing their enthusiasm, and also sharing their ideas for what we should do next, what about this song, what about this dance move, so that was really fun. My favorite reaction was one that I got from my 94-year-old grandma, who watched the video on YouTube and then sent a message through the family asking if I’d taught the robot those sweet moves. I think this video connected with a broader audience, because it mixed the old-school music with new technology.

We haven’t seen Atlas move like this before—can you talk about how you made it happen?

We started by working with dancers and a choreographer to create an initial concept for the dance by composing and assembling a routine. One of the challenges, and probably the core challenge for Atlas in particular, was adjusting human dance moves so that they could be performed on the robot. To do that, we used simulation to rapidly iterate through movement concepts while soliciting feedback from the choreographer to reach behaviors that Atlas had the strength and speed to execute. It was very iterative—they would literally dance out what they wanted us to do, and the engineers would look at the screen and go “that would be easy” or “that would be hard” or “that scares me.” And then we’d have a discussion, try different things in simulation, and make adjustments to find a compatible set of moves that we could execute on Atlas.

Throughout the project, the time frame for creating those new dance moves got shorter and shorter as we built tools, and as an example, eventually we were able to use that toolchain to create one of Atlas’ ballet moves in just one day, the day before we filmed, and it worked. So it’s not hand-scripted or hand-coded, it’s about having a pipeline that lets you take a diverse set of motions, that you can describe through a variety of different inputs, and push them through and onto the robot.

Image: Boston Dynamics

Were there some things that were particularly difficult to translate from human dancers to Atlas? Or, things that Atlas could do better than humans?

Some of the spinning turns in the ballet parts took more iterations to get to work, because they were the furthest from leaping and running and some of the other things that we have more experience with, so they challenged both the machine and the software in new ways. We definitely learned not to underestimate how flexible and strong dancers are—when you take elite athletes and you try to do what they do but with a robot, it’s a hard problem. It’s humbling. Fundamentally, I don’t think that Atlas has the range of motion or power that these athletes do, although we continue developing our robots towards that, because we believe that in order to broadly deploy these kinds of robots commercially, and eventually in a home, we think they need to have this level of performance.

One thing that robots are really good at is doing something over and over again the exact same way. So once we dialed in what we wanted to do, the robots could just do it again and again as we played with different camera angles.

I can understand how you could use human dancers to help you put together a routine with Atlas, but how did that work with Spot, and particularly with Handle?

I think the people we worked with actually had a lot of talent for thinking about motion, and thinking about how to express themselves through motion. And our robots do motion really well—they’re dynamic, they’re exciting, they balance. So I think what we found was that the dancers connected with the way the robots moved, and then shaped that into a story, and it didn’t matter whether there were two legs or four legs. When you don’t necessarily have a template of animal motion or human behavior, you just have to think a little harder about how to go about doing something, and that’s true for more pragmatic commercial behaviors as well.

“We used simulation to rapidly iterate through movement concepts while soliciting feedback from the choreographer to reach behaviors that Atlas had the strength and speed to execute. It was very iterative—they would literally dance out what they wanted us to do, and the engineers would look at the screen and go ‘that would be easy’ or ‘that would be hard’ or ‘that scares me.’”
—Aaron Saunders, Boston Dynamics

How does the experience that you get teaching robots to dance, or to do gymnastics or parkour, inform your approach to robotics for commercial applications?

We think that the skills inherent in dance and parkour, like agility, balance, and perception, are fundamental to a wide variety of robot applications. Maybe more importantly, finding that intersection between building a new robot capability and having fun has been Boston Dynamics’ recipe for robotics—it’s a great way to advance.

One good example is how when you push limits by asking your robots to do these dynamic motions over a period of several days, you learn a lot about the robustness of your hardware. Spot, through its productization, has become incredibly robust, and required almost no maintenance—it could just dance all day long once you taught it to. And the reason it’s so robust today is because of all those lessons we learned from previous things that may have just seemed weird and fun. You’ve got to go into uncharted territory to even know what you don’t know.

Image: Boston Dynamics

It’s often hard to tell from watching videos like these how much time it took to make things work the way you wanted them to, and how representative they are of the actual capabilities of the robots. Can you talk about that?

Let me try to answer in the context of this video, but I think the same is true for all of the videos that we post. We work hard to make something, and once it works, it works. For Atlas, most of the robot control existed from our previous work, like the work that we’ve done on parkour, which sent us down a path of using model predictive controllers that account for dynamics and balance. We used those to run on the robot a set of dance steps that we’d designed offline with the dancers and choreographer. So, a lot of time, months, we spent thinking about the dance and composing the motions and iterating in simulation.

Dancing required a lot of strength and speed, so we even upgraded some of Atlas’ hardware to give it more power. Dance might be the highest power thing we’ve done to date—even though you might think parkour looks way more explosive, the amount of motion and speed that you have in dance is incredible. That also took a lot of time over the course of months; creating the capability in the machine to go along with the capability in the algorithms.

Once we had the final sequence that you see in the video, we only filmed for two days. Much of that time was spent figuring out how to move the camera through a scene with a bunch of robots in it to capture one continuous two-minute shot, and while we ran and filmed the dance routine multiple times, we could repeat it quite reliably. There was no cutting or splicing in that opening two-minute shot.

There were definitely some failures in the hardware that required maintenance, and our robots stumbled and fell down sometimes. These behaviors aren't meant to be productized or to be 100 percent reliable, but they're definitely repeatable. We try to be honest in showing things that we can do, not a snippet of something that we did once. I think there's an honesty required in saying that you've achieved something, and that's definitely important for us.

You mentioned that Spot is now robust enough to dance all day. How about Atlas? If you kept on replacing its batteries, could it dance all day, too?

Atlas, as a machine, is still, you know… there are only a handful of them in the world, they’re complicated, and reliability was not a main focus. We would definitely break the robot from time to time. But the robustness of the hardware, in the context of what we were trying to do, was really great. And without that robustness, we wouldn’t have been able to make the video at all. I think Atlas is a little more like a helicopter, where there’s a higher ratio between the time you spend doing maintenance and the time you spend operating. Whereas with Spot, the expectation is that it’s more like a car, where you can run it for a long time before you have to touch it.

When you’re teaching Atlas to do new things, is it using any kind of machine learning? And if not, why not?

As a company, we’ve explored a lot of things, but Atlas is not using a learning controller right now. I expect that a day will come when we will. Atlas’ current dance performance uses a mixture of what we like to call reflexive control, which is a combination of reacting to forces, online and offline trajectory optimization, and model predictive control. We leverage these techniques because they’re a reliable way of unlocking really high performance stuff, and we understand how to wield these tools really well. We haven’t found the end of the road in terms of what we can do with them.
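The control mix Saunders describes can be hard to picture. The core idea of model predictive control is simple: predict the effect of a short sequence of future controls with a dynamics model, optimize that sequence against a reference motion, apply only the first control, and re-plan. The toy sketch below illustrates that loop on a 1-D double integrator; it is only an illustration of the principle, not Boston Dynamics' actual controller, and all dynamics and cost terms are made-up assumptions.

```python
import numpy as np

# Toy MPC on a 1-D double integrator: state x = [position, velocity],
# control u = acceleration. Because the dynamics are linear, the predicted
# positions are an affine function of the controls, and the finite-horizon
# tracking problem has a closed-form regularized least-squares solution.

DT = 0.05   # timestep, seconds
H = 20      # prediction horizon, steps

A = np.array([[1.0, DT], [0.0, 1.0]])   # x_{t+1} = A x_t + B u_t
B = np.array([0.5 * DT**2, DT])

def plan(x0, ref, reg=1e-4):
    """Optimize H controls so the predicted positions track `ref` (length H)."""
    Apow = [np.linalg.matrix_power(A, k) for k in range(H + 1)]
    # Free response: predicted positions if all controls were zero.
    free = np.array([(Apow[k] @ x0)[0] for k in range(1, H + 1)])
    # M[k, j]: effect of control u_j on the position at step k+1.
    M = np.zeros((H, H))
    for k in range(H):
        for j in range(k + 1):
            M[k, j] = (Apow[k - j] @ B)[0]
    u = np.linalg.solve(M.T @ M + reg * np.eye(H), M.T @ (ref - free))
    return u, M @ u + free        # optimized controls, predicted positions

x0 = np.array([0.0, 0.0])
ref = np.full(H, 0.3)             # "dance step": move to 0.3 m and hold
u, predicted = plan(x0, ref)

# Receding horizon: apply only the first control, then re-plan from x1.
x1 = A @ x0 + B * u[0]
print(f"predicted final position: {predicted[-1]:.3f} m")
```

Real legged-robot controllers solve a far richer version of this problem, with contact forces, balance constraints, and full 3-D dynamics, but the plan/apply/re-plan structure is the same.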

We plan on using learning to extend and build on the foundation of software and hardware that we’ve developed, but I think that we, along with the community, are still trying to figure out where the right places to apply these tools are. I think you’ll see that as part of our natural progression.

Image: Boston Dynamics

Much of Atlas’ dynamic motion comes from its lower body at the moment, but parkour makes use of upper body strength and agility as well, and we’ve seen some recent concept images showing Atlas doing vaults and pullups. Can you tell us more?

Humans and animals do amazing things using their legs, but they do even more amazing things when they use their whole bodies. I think parkour provides a fantastic framework that allows us to progress towards whole body mobility. Walking and running was just the start of that journey. We’re progressing through more complex dynamic behaviors like jumping and spinning, that’s what we’ve been working on for the last couple of years. And the next step is to explore how using arms to push and pull on the world could extend that agility.

One of the missions that I’ve given to the Atlas team is to start working on leveraging the arms as much as we leverage the legs to enhance and extend our mobility, and I’m really excited about what we’re going to be working on over the next couple of years, because it’s going to open up a lot more opportunities for us to do exciting stuff with Atlas.

What’s your perspective on hydraulic versus electric actuators for highly dynamic robots?

Across my career at Boston Dynamics, I’ve felt passionately connected to so many different types of technology, but I’ve settled into a place where I really don’t think this is an either-or conversation anymore. I think the selection of actuator technology really depends on the size of the robot that you’re building, what you want that robot to do, where you want it to go, and many other factors. Ultimately, it’s good to have both kinds of actuators in your toolbox, and I love having access to both—and we’ve used both with great success to make really impressive dynamic machines.

I think the only delineation between hydraulic and electric actuators that appears to be distinct for me is probably in scale. It’s really challenging to make tiny hydraulic things because the industry just doesn’t do a lot of that, and the reciprocal is that the industry also doesn’t tend to make massive electrical things. So, you may find that to be a natural division between these two technologies.

Besides what you’re working on at Boston Dynamics, what recent robotics research are you most excited about?

For us as a company, we really love to follow advances in sensing, computer vision, terrain perception, these are all things where the better they get, the more we can do. For me personally, one of the things I like to follow is manipulation research, and in particular manipulation research that advances our understanding of complex, friction-based interactions like sliding and pushing, or moving compliant things like ropes.

We’re seeing a shift from just pinching things, lifting them, moving them, and dropping them, to much more meaningful interactions with the environment. Research in that type of manipulation I think is going to unlock the potential for mobile manipulators, and I think it’s really going to open up the ability for robots to interact with the world in a rich way.

Is there anything else you’d like people to take away from this video?

For me personally, and I think it’s because I spend so much of my time immersed in robotics and have a deep appreciation for what a robot is and what its capabilities and limitations are, one of my strong desires is for more people to spend more time with robots. We see a lot of opinions and ideas from people looking at our videos on YouTube, and it seems to me that if more people had opportunities to think about and learn about and spend time with robots, that new level of understanding could help them imagine new ways in which robots could be useful in our daily lives. I think the possibilities are really exciting, and I just want more people to be able to take that journey.

Posted in Human Robots

#437701 Robotics, AI, and Cloud Computing ...

IBM must be brimming with confidence about its new automated system for performing chemical synthesis, because Big Blue just let twenty or so journalists put the complex technology through its paces live in a virtual room.

IBM even let one of the journalists choose the molecule for the demo: a molecule in a potential Covid-19 treatment. We then watched as the system synthesized and tested the molecule and delivered its analysis in a PDF document on the other journalist’s screen. It all worked; again, that’s confidence.

The complex system is based upon technology IBM started developing three years ago that uses artificial intelligence (AI) to predict chemical reactions. In August 2018, IBM made this service available via the Cloud and dubbed it RXN for Chemistry.

Now, the company has added a new wrinkle to its Cloud-based AI: robotics. This new and improved system is no longer named simply RXN for Chemistry, but RoboRXN for Chemistry.

All of the journalists assembled for this live demo of RoboRXN could watch as the robotic system executed various steps, such as dispensing a reagent into a small reactor and then adding the solvent. The robotic system carried out the entire set of procedures—completing the synthesis and analysis of the molecule—in eight steps.

Image: IBM Research

IBM RXN helps predict chemical reaction outcomes or design retrosynthesis in seconds.

In regular practice, a user will be able to suggest a combination of molecules they would like to test. The AI will pick up the order and task a robotic system to run the reactions necessary to produce and test the molecule. Users will be provided analyses of how well their molecules performed.

Back in March of this year, Silicon Valley-based startup Strateos demonstrated something similar that they had developed. That system also employed a robotic system to help researchers working from the Cloud create new chemical compounds. However, what distinguishes IBM’s system is its incorporation of a third element: the AI.

The backbone of IBM’s AI model is a machine learning translation method that treats chemistry like language translation: it converts reactants and reagents to products, using the simplified molecular-input line-entry system (SMILES) representation to describe chemical entities.
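To make the "chemistry as language" idea concrete: a SMILES (simplified molecular-input line-entry system) string is split into tokens, and a sequence-to-sequence model then "translates" the token sequence for reactants and reagents into the token sequence for products. The regex below is a simplified version of tokenizers used in the molecular-transformer literature, not IBM's exact implementation.

```python
import re

# A simplified SMILES tokenizer: each alternative is one token class.
SMILES_TOKEN = re.compile(
    r"(\[[^\]]+\]"        # bracketed atoms, e.g. [NH4+]
    r"|Br|Cl"             # two-letter organic-subset atoms
    r"|[BCNOSPFI]"        # one-letter atoms
    r"|[bcnos]"           # aromatic atoms
    r"|[-=#()\\/@+.]"     # bonds, branches, stereo marks, dot separator
    r"|%\d{2}|\d)"        # ring-closure labels
)

def tokenize(smiles: str) -> list:
    """Split a SMILES string into model tokens."""
    tokens = SMILES_TOKEN.findall(smiles)
    # Sanity check: the tokens must reassemble into the original string.
    assert "".join(tokens) == smiles, f"untokenizable SMILES: {smiles}"
    return tokens

# Acetic acid + ethanol -> ethyl acetate, as a "sentence pair":
reactants = tokenize("CC(=O)O.CCO")   # source sequence
product = tokenize("CC(=O)OCC")       # target sequence
print(reactants, "->", product)
```

Once reactions are encoded this way, standard neural machine translation machinery applies almost unchanged, which is what makes the approach so attractive.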

IBM has also applied an automatic, data-driven strategy to ensure the quality of its data. Researchers there used millions of chemical reactions to teach the AI system chemistry, but that data set contained errors. So how did IBM clean this noisy data to avoid building bad models?

According to Alessandra Toniato, a researcher at IBM Zurich, the team implemented what they dubbed the “forgetting experiment.”

Toniato explains that, in this approach, they asked the AI model how sure it was that the chemical examples it was given were examples of correct chemistry. When faced with this choice, the AI identified chemistry that it had “never learnt,” “forgotten six times,” or “never forgotten.” Those that were “never forgotten” were examples that were clean, and in this way they were able to clean the data that AI had been presented.
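The bookkeeping behind such a forgetting experiment can be sketched in a few lines: across training epochs, record whether the model gets each example right, and count forgetting events (correct-to-incorrect transitions). Examples that are learnt and never forgotten are treated as clean; never-learnt or repeatedly forgotten examples are suspect. The code below only illustrates this idea and is not IBM's actual procedure.

```python
# Track per-example forgetting events across training epochs.

def forgetting_profile(correct_by_epoch):
    """correct_by_epoch[e][i] is True if example i was predicted correctly at epoch e."""
    n_examples = len(correct_by_epoch[0])
    profiles = []
    for i in range(n_examples):
        history = [epoch[i] for epoch in correct_by_epoch]
        # A forgetting event is a correct -> incorrect transition.
        forgets = sum(1 for prev, cur in zip(history, history[1:]) if prev and not cur)
        if not any(history):
            label = "never learnt"
        elif forgets == 0:
            label = "never forgotten"
        else:
            label = f"forgotten {forgets} time(s)"
        profiles.append({"example": i, "forgets": forgets, "label": label})
    return profiles

# Three examples over four epochs: a clean one, a noisy one, a never-learnt one.
records = [
    [True, False, False],   # epoch 0: which examples were correct
    [True, True,  False],   # epoch 1
    [True, False, False],   # epoch 2
    [True, True,  False],   # epoch 3
]
profiles = forgetting_profile(records)
for p in profiles:
    print(p)
```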

While the AI has always been part of RXN for Chemistry, the robotics is the newest element. Handing the execution of reactions over to a robotic system is expected chiefly to free chemists from the often tedious process of designing a synthesis from scratch, says Matteo Manica, a research staff member in Cognitive Health Care and Life Sciences at IBM Research Zürich.

“In this demo, you could see how the system is synergistic between a human and AI,” said Manica. “Combine that with the fact that we can run all these processes with a robotic system 24/7 from anywhere in the world, and you can see how it will really help to speed up the whole process.”

There appear to be two business models that IBM is pursuing with its latest technology. One is to deploy the entire system on the premises of a company. The other is to offer licenses to private Cloud installations.

Photo: Michael Buholzer

Teodoro Laino of IBM Research Europe.

“From a business perspective you can think of having a system like we demonstrated being replicated on the premise within companies or research groups that would like to have the technology available at their disposal,” says Teodoro Laino, distinguished RSM, manager at IBM Research Europe. “On the other hand, we are also pushing at bringing the entire system to a service level.”

Just as IBM is brimming with confidence about its new technology, the company also has grand aspirations for it.

Laino adds: “Our aim is to provide chemical services across the world, a sort of Amazon of chemistry, where instead of looking for chemistry already in stock, you are asking for chemistry on demand.”


Posted in Human Robots

#437407 Nvidia’s Arm Acquisition Brings the ...

Artificial intelligence and mobile computing have been two of the most disruptive technologies of this century. The unification of the two companies that made them possible could have wide-ranging consequences for the future of computing.

California-based Nvidia’s graphics processing units (GPUs) have powered the deep learning revolution ever since Google researchers discovered in 2011 that they could run neural networks far more efficiently than conventional CPUs. UK company Arm’s energy-efficient chip designs have dominated the mobile and embedded computing markets for even longer.

Now the two will join forces after the American company announced a $40 billion deal to buy Arm from its Japanese owner, Softbank. In a press release announcing the deal, Nvidia touted its potential to rapidly expand the reach of AI into all areas of our lives.

“In the years ahead, trillions of computers running AI will create a new internet-of-things that is thousands of times larger than today’s internet-of-people,” said Nvidia founder and CEO Jensen Huang. “Uniting NVIDIA’s AI computing capabilities with the vast ecosystem of Arm’s CPU, we can advance computing from the cloud, smartphones, PCs, self-driving cars and robotics, to edge IoT, and expand AI computing to every corner of the globe.”

There are good reasons to believe the hype. The two companies are absolutely dominant in their respective fields—Nvidia’s GPUs support more than 97 percent of AI computing infrastructure offered by big cloud service providers, and Arm’s chips power more than 90 percent of smartphones. And there’s little overlap in their competencies, which means the relationship could be a truly symbiotic one.

“I think the deal ‘fits like a glove’ in that Arm plays in areas that Nvidia does not or isn’t that successful, while NVIDIA plays in many places Arm doesn’t or isn’t that successful,” analyst Patrick Moorhead wrote in Forbes.

One of the most obvious directions would be to expand Nvidia’s AI capabilities to the kind of low-power edge devices that Arm excels in. There’s growing demand for AI in devices like smartphones, wearables, cars, and drones, where transmitting data to the cloud for processing is undesirable either for reasons of privacy or speed.

But there might also be fruitful exchanges in the other direction. Huang told Moorhead a major focus would be bringing Arm’s expertise in energy efficiency to the data center. That’s a big concern for technology companies whose electricity bills and green credentials are taking a battering thanks to the huge amounts of energy required to run millions of computer chips around the clock.

The deal may not be plain sailing, though, most notably due to the two companies’ differing business models. While Nvidia sells ready-made processors, Arm simply creates chip designs and then licenses them to other companies who can then customize them to their particular hardware needs. It operates on an open-licence basis whereby any company with the necessary cash can access its designs.

As a result, its designs are found in products built by hundreds of companies that license its innovations, including Apple, Samsung, Huawei, Qualcomm, and even Nvidia. Some, including two of the company’s co-founders, have raised concerns that the purchase by Nvidia, which competes with many of these other companies, could harm the neutrality that has been central to its success.

It’s possible this could push more companies towards RISC-V, an open-source technology developed by researchers at the University of California at Berkeley that rivals Arm’s and is not owned by any one company. However, there are plenty of reasons why most companies still prefer Arm over the less feature-rich open-source option, and it might take a considerable push to convince Arm’s customers to jump ship.

The deal will also have to navigate some thorny political issues. Unions, politicians, and business leaders in the UK have voiced concerns that it could lead to the loss of high-tech jobs, and government sources have suggested conditions could be placed on the deal.

Regulators in other countries could also put a spanner in the works. China is concerned that if Arm becomes US-owned, many of the Chinese companies that rely on its technology could become victims of export restrictions as the China-US trade war drags on. South Korea is also wary that the deal could create a new technology juggernaut that could dent Samsung’s growth in similar areas.

Nvidia has made commitments to keep Arm’s headquarters in the UK, which it says should lessen concerns around jobs and export restrictions. It’s also pledged to open a new world-class technology center in Cambridge and build a state-of-the-art AI supercomputer powered by Arm’s chips there. Whether the deal goes through still hangs in the balance, but if it does it could spur a whole new wave of AI innovation.

Image Credit: Nvidia

Posted in Human Robots

#436530 How Smart Roads Will Make Driving ...

Roads criss-cross the landscape, but while they provide vital transport links, in many ways they represent a huge amount of wasted space. Advances in “smart road” technology could change that, creating roads that can harvest energy from cars, detect speeding, automatically weigh vehicles, and even communicate with smart cars.

“Smart city” projects are popping up in countries across the world thanks to advances in wireless communication, cloud computing, data analytics, remote sensing, and artificial intelligence. Transportation is a crucial element of most of these plans, but while much of the focus is on public transport solutions, smart roads are increasingly being seen as a crucial feature of these programs.

New technology is making it possible to tackle a host of issues including traffic congestion, accidents, and pollution, say the authors of a paper in the journal Proceedings of the Royal Society A. And they’ve outlined ten of the most promising advances under development or in planning stages that could feature on tomorrow’s roads.

Energy harvesting

A variety of energy harvesting technologies integrated into roads have been proposed as ways to power street lights and traffic signals or provide a boost to the grid. Photovoltaic panels could be built into the road surface to capture sunlight, or piezoelectric materials installed beneath the asphalt could generate current when deformed by vehicles passing overhead.
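The scale of such schemes comes down to simple arithmetic: energy per axle crossing per module, modules per lane-kilometer, and daily traffic. Published estimates vary widely, so every number in the sketch below is an illustrative assumption rather than a measurement.

```python
# Back-of-envelope estimate of piezoelectric road harvesting.
# All three constants are illustrative assumptions, not measured values.

E_PER_PASS_J = 0.1        # assumed energy per axle crossing one module (joules)
MODULES_PER_KM = 10_000   # assumed module density along one lane-kilometer
AXLES_PER_DAY = 20_000    # assumed axle crossings per day

def daily_energy_kwh(e_per_pass=E_PER_PASS_J,
                     modules=MODULES_PER_KM,
                     axles=AXLES_PER_DAY):
    """Harvested energy per lane-km per day, in kWh."""
    joules = e_per_pass * modules * axles   # each axle crosses every module
    return joules / 3.6e6                   # 1 kWh = 3.6e6 J

print(f"~{daily_energy_kwh():.1f} kWh per lane-km per day")
```

Under these assumptions the yield is a few kilowatt-hours per lane-kilometer per day: enough for street lights and signals, as the paper's authors suggest, but far from a major grid contribution.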

Musical roads

Countries like Japan, Denmark, the Netherlands, Taiwan, and South Korea have built roads that play music as cars pass by. By varying the spacing of rumble strips, it’s possible to produce a series of different notes as vehicles drive over them. The aim is generally to warn of hazards or help drivers keep to the speed limit.
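The physics here is straightforward: a car crossing rumble strips spaced d meters apart at speed v (in m/s) produces a tone of frequency f = v / d, so each note's strips are laid at d = v / f for the intended speed. The design speed and notes below are illustrative.

```python
# Rumble-strip spacing for a musical road: spacing d = v / f.

NOTE_HZ = {"C4": 261.63, "E4": 329.63, "G4": 392.00, "A4": 440.00}

def strip_spacing(speed_kmh, note):
    """Rumble-strip spacing in meters that sounds `note` at `speed_kmh`."""
    v = speed_kmh / 3.6          # km/h -> m/s
    return v / NOTE_HZ[note]

# At an assumed design speed of 50 km/h:
for note in ["C4", "E4", "G4", "A4"]:
    print(f"{note}: {strip_spacing(50, note) * 100:.1f} cm")
```

Note that the melody is only in tune at the design speed: drive faster and every note shifts sharp in proportion, which is exactly why these roads double as a nudge to keep to the speed limit.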

Automatic weighing

Weigh-in-motion technology that measures vehicles’ loads as they drive slowly through a designated lane has been around since the 1970s, but more recently, high-speed weigh-in-motion tech has made it possible to measure vehicles as they travel at regular highway speeds. The latest advance is integration with automatic licence plate reading and wireless communication, allowing continuous remote monitoring both to enforce weight restrictions and to monitor wear on roads.

Vehicle charging

The growing popularity of electric vehicles has spurred the development of technology to charge cars and buses as they drive. The most promising of these approaches is magnetic induction, which involves burying cables beneath the road to generate electromagnetic fields that a receiver device in the car then transforms into electrical power to charge batteries.

Smart traffic signs

Traffic signs aren’t always as visible as they should be, and it can often be hard to remember what all of them mean. So there are now proposals for “smart signs” that wirelessly beam a sign’s content to oncoming cars fitted with receivers, which can then alert the driver verbally or on the car’s display. The approach isn’t affected by poor weather and lighting, can be reprogrammed easily, and could do away with the need for complex sign recognition technology in future self-driving cars.

Traffic violation detection and notification

Sensors and cameras can be combined with these same smart signs to detect traffic violations and automatically notify drivers. Because the notifications are transmitted automatically and a record is stored on the car’s black box, drivers won’t be able to deny they’ve seen the warnings or been notified of any fines.

Talking cars

Car-to-car communication technology and V2X, which lets cars share information with any other connected device, are becoming increasingly common. Inter-car communication can be used to propagate accident or traffic-jam alerts to prevent congestion, while letting vehicles communicate with infrastructure can help signals dynamically manage their timers to keep traffic flowing, or automatically collect tolls.

Smart intersections

Combining sensors and cameras with object recognition systems that detect vehicles and other road users can increase safety and efficiency at intersections. Such systems can extend green lights for slower road users like pedestrians and cyclists, sense jaywalkers, give priority to emergency vehicles, and dynamically adjust light timers to optimize traffic flow. Information can even be broadcast to oncoming vehicles to highlight blind spots and potential hazards.

Automatic crash detection

There’s a “golden hour” after an accident in which the chance of saving lives is greatly increased. Vehicle communication technology can ensure that notification of a crash reaches the emergency services rapidly, and can also provide vital information about the number and type of vehicles involved, which can help emergency response planning. It can also be used to alert other drivers to slow down or stop to prevent further accidents.

Smart street lights

Street lights are increasingly being embedded with sensors, wireless connectivity, and micro-controllers to enable a variety of smart functions. These include motion activation to save energy, provision of wireless access points, air quality monitoring, and parking and litter monitoring. Connected lights can also send automatic maintenance requests when a light is faulty, and even brighten neighboring lights to compensate.

Image Credit: David Mark from Pixabay

Posted in Human Robots

#436526 Not Bot, Not Beast: Scientists Create ...

A remarkable combination of artificial intelligence (AI) and biology has produced the world’s first “living robots.”

This week, a research team of roboticists and scientists published their recipe for making a new lifeform called xenobots from stem cells. The name “xeno” comes from the African clawed frog (Xenopus laevis) whose cells were used to make them.

One of the researchers described the creation as “neither a traditional robot nor a known species of animal,” but a “new class of artifact: a living, programmable organism.”

Xenobots are less than 1 millimeter long and made of 500-1,000 living cells. They have various simple shapes, including some with squat “legs.” They can propel themselves in linear or circular directions, join together to act collectively, and move small objects. Using their own cellular energy, they can live up to 10 days.

While these “reconfigurable biomachines” could vastly improve human, animal, and environmental health, they raise legal and ethical concerns.

Strange New ‘Creature’
To make xenobots, the research team used a supercomputer to test thousands of random designs of simple living things that could perform certain tasks.

The computer was programmed with an AI “evolutionary algorithm” to predict which designs would likely perform useful tasks, such as moving towards a target.
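The selection loop described above, in which random candidate designs are scored in simulation and the most promising ones are kept and varied, can be sketched in miniature. The toy below is purely illustrative and is not the team’s actual code: designs are stand-in bit strings, and the fitness function is a placeholder for a simulated task score such as distance moved toward a target.

```python
import random

def fitness(design):
    # Stand-in for the simulated task score (e.g. distance moved toward a target).
    return sum(design)

def evolve(pop_size=20, genome_len=16, generations=50, seed=0):
    rng = random.Random(seed)
    # Start from random candidate designs, as the researchers did in simulation.
    population = [[rng.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the most promising half, then refill by mutating the survivors.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            child[rng.randrange(genome_len)] ^= 1  # flip one bit: a small mutation
            children.append(child)
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
print(fitness(best))
```

Because the survivors are carried over unchanged each generation, the best score never decreases; mutation simply explores nearby designs. The real pipeline replaces the toy fitness function with a physics simulation of cell assemblies.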

After the selection of the most promising designs, the scientists attempted to replicate the virtual models with frog skin or heart cells, which were manually joined using microsurgery tools. The heart cells in these bespoke assemblies contract and relax, giving the organisms motion.

The creation of xenobots is groundbreaking. Despite being described as “programmable living robots,” they are actually completely organic and made of living tissue. The term “robot” has been used because xenobots can be configured into different forms and shapes, and “programmed” to target certain objects, which they then unwittingly seek. They can also repair themselves after being damaged.

Possible Applications
Xenobots may have great value. Some speculate they could be used to clean our polluted oceans by collecting microplastics. Similarly, they may be used to enter confined or dangerous areas to scavenge toxins or radioactive materials. Xenobots designed with carefully shaped “pouches” might be able to carry drugs into human bodies.

Future versions may be built from a patient’s own cells to repair tissue or target cancers. Being biodegradable, xenobots would have an edge on technologies made of plastic or metal.

Further development of biological “robots” could accelerate our understanding of living and robotic systems. Life is incredibly complex, so manipulating living things could reveal some of life’s mysteries—and improve our use of AI.

Legal and Ethical Questions
Conversely, xenobots raise legal and ethical concerns. In the same way they could help target cancers, they could also be used to hijack life functions for malevolent purposes.

Some argue artificially making living things is unnatural, hubristic, or involves “playing God.” A more compelling concern is that of unintended or malicious use, as we have seen with technologies in fields including nuclear physics, chemistry, biology, and AI. For instance, xenobots might be used for hostile biological purposes prohibited under international law.

More advanced future xenobots, especially ones that live longer and reproduce, could potentially “malfunction,” go rogue, and out-compete other species.

For complex tasks, xenobots may need sensory and nervous systems, possibly resulting in their sentience. A sentient programmed organism would raise additional ethical questions. Last year, the partial revival of disembodied pig brains elicited concerns about animal suffering.

Managing Risks
The xenobot’s creators have rightly acknowledged the need for discussion around the ethics of their creation. The 2018 scandal over the use of CRISPR (which allows genes to be introduced into an organism) to edit the genomes of twin baby girls may provide an instructive lesson here. While the experiment’s stated goal was to reduce the girls’ susceptibility to HIV, the associated risks caused ethical dismay, and the scientist responsible was imprisoned.

When CRISPR became widely available, some experts called for a moratorium on heritable genome editing. Others argued the benefits outweighed the risks.

While each new technology should be considered impartially and based on its merits, giving life to xenobots raises certain significant questions:

Should xenobots have biological kill-switches in case they go rogue?
Who should decide who can access and control them?
What if “homemade” xenobots become possible? Should there be a moratorium until regulatory frameworks are established? How much regulation is required?

Lessons learned in the past from advances in other areas of science could help manage future risks, while reaping the possible benefits.

Long Road Here, Long Road Ahead
The creation of xenobots had various biological and robotic precedents. Genetic engineering has produced genetically modified mice that fluoresce under UV light.

Designer microbes can produce drugs and food ingredients that may eventually replace animal agriculture. In 2012, scientists created an artificial jellyfish called a “medusoid” from rat cells.

Robotics is also flourishing. Nanobots can monitor people’s blood sugar levels and may eventually be able to clear clogged arteries. Robots can also incorporate living matter, as engineers and biologists showed when they created a stingray robot powered by light-activated cells.

In the coming years, we are sure to see more creations like xenobots that evoke both wonder and due concern. And when we do, it is important we remain both open-minded and critical.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Photo by Joel Filipe on Unsplash

Posted in Human Robots