Tag Archives: can

#441061 Self-organization: What robotics can ...

Amoebae are single-cell organisms. By means of self-organization, they can form complex structures—and do this purely through local interactions: If they have a lot of food, they disperse evenly through a culture medium. But if food becomes scarce, they emit a messenger molecule known as cyclic adenosine monophosphate (cAMP). This chemical signal induces amoebae to gather in one place and form a multicellular aggregation. The result is a fruiting body. Continue reading

Posted in Human Robots

#441041 Scientists Build Synthetic Molecular ...

All life, as far as we know, assembles itself molecule by molecule. The blueprint for our bodies is encoded on ribbons of DNA and RNA. Cellular factories called ribosomes make these blueprints physical by linking amino acids into long strands called proteins. And these proteins, of which there are hundreds of millions, form an array of spectacular natural technologies: Eyes, muscles, bones, and brains.

The entire living world is built by these amazing molecular machines.

As scientists learn more about life’s machinery, they’re beginning to take the controls. Genetic engineers are tweaking the code with gene editing tools to treat illness. Synthetic biologists are coaxing genetically modified bacteria into producing substances like biofuels or converting society’s waste into valuable chemicals. Still more researchers are aiming to use DNA for digital storage and even robotics.

But there are limits to what living systems can make: They deal in carbon-based chemistry. Might we build new things by mirroring life’s machinery in inorganic ingredients? David Leigh, a University of Manchester organic chemist, thinks so. “As synthetic scientists, we’ve got the whole of the periodic table of elements that we can use,” he told Wired. “It’s breaking free of ways that biology is restricted.”

His team’s latest work, published in Nature, describes a crucial step toward the ultimate goal: working molecular computers. Though there’s still a very long way to go, Leigh’s vision, fully realized, would bring about a new way to build and compute. Molecular computers could store data and, like ribosomes, assemble physical products from coded blueprints. Instead of stringing amino acids into proteins, they might produce finely tuned materials with new properties that would be impossible to make any other way.

Turing Machines
Alan Turing was ahead of his time, but as it turns out, nature was ahead of Turing.

In 1936, Turing sketched out a thought experiment for what would become known as a Turing machine. In it, he imagined a tape with symbols punched into it being fed through a machine that could read the symbols and translate them into some kind of action. The Turing machine was the theoretical basis for modern computation, in which coded algorithms instruct machines to light pixels, load websites, or generate prose.
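Turing’s abstraction is compact enough to sketch in a few lines of code. The snippet below is a toy Python model for illustration only; the tape contents, states, and rules are made up, not drawn from Turing’s paper or from Leigh’s work. It has the essential pieces: a tape of symbols, a read/write head, and a rule table mapping the current state and symbol to an action.

```python
# A toy Turing machine: a tape of symbols, a read/write head, and a rule
# table mapping (state, symbol) -> (symbol to write, head move, next state).
def run_turing_machine(tape, rules, state="start", head=0, max_steps=1000):
    tape = list(tape)
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = tape[head]
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape)

# Illustrative rule set: flip every bit, then halt at the blank end marker.
rules = {
    ("start", "0"): ("1", "R", "start"),
    ("start", "1"): ("0", "R", "start"),
    ("start", "_"): ("_", "R", "halt"),
}

print(run_turing_machine("0110_", rules))  # -> 1001_
```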

Turing’s machine should sound familiar for another reason. It’s similar to the way ribosomes read genetic code on ribbons of RNA to construct proteins.

Cellular factories are a kind of natural Turing machine. What Leigh’s team is after would work the same way but go beyond biochemistry. These microscopic Turing machines, or molecular computers, would allow engineers to write code for some physical output onto a synthetic molecular ribbon. Another molecule would travel along the ribbon, read (and one day write) the code, and output some specified action, like catalyzing a chemical reaction.

Now, Leigh’s team says they’ve built the first components of a molecular computer: A coded molecular ribbon and a mobile molecular reader of the code.

Researchers have been dreaming about molecular computers for decades. According to Jean-François Lutz of the National Center for Scientific Research in France, Leigh’s latest work is a notable step forward. “This is the first proof of principle, showing that you can effectively do it,” he told Wired. “It has been conceptualized, but never really achieved.” Here’s how it works.

Molecular Rings and Ribbons
Leigh’s molecular machines have a few key parts: a segmented molecular ribbon with carefully designed docking sites, a molecular ring that binds to and travels along the ribbon, and a solution in which many copies of the system are afloat. The team fuels the system with pulses of acid, changing the solution’s pH and modifying the ribbon’s structure.

With the first pulse, free molecular rings—in this case, a crown ether, or a ring of ether groups—thread themselves onto the ribbons, docking at the first of several binding sites. Each binding site’s chemical makeup induces a stereochemical change in the crown ether. That is, the binding site modifies the crown ether’s orientation in space without changing its composition.

Additional pulses of acid move the crown ether along sequential binding sites, and each new site causes it to contort itself into a different encoded configuration.

In @Nature, a tape-reading molecule that reads stereochemistry rather than nucleotide codons https://t.co/rSYjlAZJy5 Congrats to @YansongRen @RJamagne & Dan! Many tks to @SciCommStudios for graphics & animation [:bottom left=tape potential energy surface; right=CD spectrum] pic.twitter.com/EWiBaYzMNr

— Dave Leigh (@ProfDaveLeigh) October 19, 2022

These stereochemical changes are the key. The team assigned each configuration a value. Instead of the 1s and 0s in binary code, they chose -1s, 0s, and +1s for two stereochemical twists (each the mirror of the other) and a neutral position. So, as the crown ether traverses the molecular ribbon, its chemical changes read out the code.
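In software terms, the ribbon behaves like a read-only tape over a three-symbol alphabet. The short sketch below is purely illustrative and assumes made-up site values; in the real system the symbols are stereochemical configurations read out spectroscopically, not variables in memory.

```python
# Illustrative model of the molecular tape: each binding site holds one of
# three symbols (-1, 0, +1), standing in for the two mirror-image twists and
# the neutral configuration. The site values below are invented.
tape = [+1, -1, 0, +1]

def read_tape(tape):
    """Simulate acid pulses stepping the crown ether along the ribbon."""
    readout = []
    for pulse, symbol in enumerate(tape, start=1):
        # Each pulse advances the ring one site; that site's chemistry forces
        # the ring into the configuration encoding `symbol`.
        readout.append(symbol)
        print(f"pulse {pulse}: read {symbol:+d}")
    return readout

# One way a three-symbol alphabet could carry data: interpret the sequence
# as a balanced-ternary number.
value = sum(sym * 3**i for i, sym in enumerate(read_tape(tape)))
print("decoded value:", value)  # 1*1 + (-1)*3 + 0*9 + 1*27 = 25
```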

All this is invisible to the eye—so, how’d they know it worked? Each crown ether configuration twists light a little differently. By bathing the solution in light, they could watch the changes as they took place. The team found the twisting light matched the crown ether’s journey along the ribbon, broadcasting the message exactly as encoded.

Long Road
The recent work is a fascinating proof of concept, but it’s still just that. The system is slow—taking several hours to move from site to site—only reads in one direction, and can’t yet write information. It doesn’t yet signal the impending arrival of molecular computers. “Dreaming in chemistry is always quite easy—making it happen is different,” Lutz said.

Still, it’s a step in the right direction, and next steps are in the works. Leigh said his team plans to get the system to write data. He also thinks greater speed is possible—though perhaps less important for some applications—and that they might increase information density by moving from a three-symbol code to five or even seven symbols.
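The payoff from more symbols is easy to estimate: a site that can adopt one of k distinguishable configurations carries log2(k) bits, so five or seven configurations per site pack in noticeably more information than three. A back-of-the-envelope calculation, for illustration only:

```python
import math

# Bits of information carried by one binding site with k distinguishable
# configurations: log2(k).
for k in (2, 3, 5, 7):
    print(f"{k} configurations per site ≈ {math.log2(k):.2f} bits")
# 2 ≈ 1.00 bits, 3 ≈ 1.58 bits, 5 ≈ 2.32 bits, 7 ≈ 2.81 bits
```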

As scientists build on work like Leigh’s, they may open up a parallel universe of synthetic molecular machines just adjacent to the organic world.

Image Credit: Raphaël Biscaldi / Unsplash Continue reading

Posted in Human Robots

#441036 How Can We Talk About Autonomous ...

This article is part of our Autonomous Weapons Challenges series. The IEEE Standards Association is looking for your feedback on this topic, and invites you to answer these questions.

Lethal autonomous weapons systems can sound terrifying, but autonomy in weapons systems is far more nuanced and complicated than a simple debate between “good or bad” and “ethical or unethical.” In order to address the legal and ethical issues that an autonomous weapons system (AWS) can raise, it’s important to look at the many technical challenges that arise along the full spectrum of autonomy. A group of experts convened by the IEEE Standards Association is working on this, but they need your help.

Weapons systems can be built with a range of autonomous capabilities. They might be self-driving tanks, surveillance drones with AI-enabled image recognition, unmanned underwater vehicles that operate in swarms, loitering munitions with advanced target recognition—the list goes on. Some autonomous capabilities are less controversial, while others trigger intense debate over the legality and ethics of the capability. Some capabilities have existed for decades, while others are still hypothetical and may never be developed.

All of this can make autonomous weapons systems difficult to talk about, and doing so has proven to be incredibly challenging over the years. Answering even the most seemingly straightforward questions, such as whether an AWS is lethal or not, can get surprisingly complicated.

To date, international discussions have largely focused on the legal, ethical, and moral issues that arise with the prospect of lethal AWS, with limited consideration of the technical challenges. At the United Nations, these discussions have taken place within the Convention on Certain Conventional Weapons (CCW). After nearly a decade, though, the U.N. has yet to come up with a new treaty or regulations to cover AWS. In early discussions at the CCW and other international forums, participants often talked past each other: One person might consider a “fully autonomous weapons system” to include capabilities that are only slightly more advanced than today’s drones, while another might use the term as a synonym for the Terminator.

Discussions advanced to the point that in 2019, member states at the CCW agreed on a set of 11 guiding principles regarding lethal AWS. But these principles are nonbinding, and it’s unclear how the technical community can implement them. At the most recent meeting of the CCW in July, delegates repeatedly pushed for more nuanced discussions and understanding of the various technical issues that arise throughout the life cycle of an AWS.

To help bring clarity to these and other discussions, the IEEE Standards Association convened an expert group in 2020 to consider the ethical and technical challenges of translating AWS principles into practice and what that might mean for future development and governance.

Last year, the expert group, which I lead, published its findings in a report entitled “Ethical and Technical Challenges in the Development, Use, and Governance of Autonomous Weapons Systems.” In the document, we identified over 60 challenges of autonomous weapons systems, organized into 10 categories:

Establishing common language
Enabling effective human control
Determining legal obligations
Ensuring robustness
Testing and evaluating
Assessing risk
Addressing operational constraints
Collecting and curating data
Aligning procurement practices
Addressing nonmilitary use

It’s not surprising that “establishing common language” is the first category. As mentioned, when the debates around AWS first began, the focus was on lethal autonomous weapons systems, and that’s often still where people focus. Yet determining whether or not an AWS is lethal turns out to be harder than one might expect.

Consider a drone that does autonomous surveillance and carries a remote-controlled weapon. It uses artificial intelligence to navigate to and identify targets, while a human makes the final decision about whether or not to launch an attack. Just the fact that the weapon and autonomous capabilities are within the same system suggests this could be considered a lethal AWS.

Additionally, a human may not be capable of monitoring all of the data the drone is collecting in real time in order to identify and verify the target, or the human may over-trust the system (a common problem when humans work with machines). Even if the human makes the decision to launch an attack against the target that the AWS has identified, it’s not clear how much “meaningful control” the human truly has. (“Meaningful human control” is another phrase that has been hotly debated.)

This problem of definitions isn’t just an issue that comes up when policymakers at the U.N. discuss AWS. AI developers also have different definitions for commonly used concepts, including “bias,” “transparency,” “trust,” “autonomy,” and “artificial intelligence.” In many instances, the ultimate question may not be, Can we establish technical definitions for these terms? but rather, How do we address the fact that there may never be consistent definitions and agreement on these terms? Because, of course, one of the most important questions for all of the AWS challenges is not whether we technically can address this, but even if there is a technical solution, should we build and deploy the system?

Identifying the challenges was just the first stage of the work for the IEEE-SA expert group. We also concluded that there are three critical perspectives from which a new group of experts will be considering these challenges in more depth:

Assurance and safety, which looks at the technical challenges of ensuring the system behaves the way it’s expected to.
Human–machine teaming, which considers how the human and the machine will interact to enable reasonable and realistic human control, responsibility, and accountability.
Law, policy, and ethics, which considers the legal, political, and ethical implications of the issues raised throughout the Challenges document.

What Do You Think?

This is where we want your feedback! Many of the AWS challenges are similar to those arising in other fields that are developing autonomous systems. We expect and hope that IEEE members and readers of IEEE Spectrum will have insights from their own fields that can inform the discussion around AWS technologies.

We’ve put together a series of questions in the Challenges document that we hope you’ll answer, to help us better understand how people in other fields are addressing these issues. Autonomous capabilities will increasingly be applied to weapons systems, much as they are being applied in other realms, and we hope that by looking at the challenges in more detail, we can help establish effective technical solutions, while contributing to discussions about what can and should be legally acceptable. Your feedback will help us move toward this ultimate goal. Public comments will be open through 7 December 2022.

The independent group of experts who authored the report for the IEEE Standards Association includes Emmanuel Bloch, Ariel Conn, Denise Garcia, Amandeep Gill, Ashley Llorens, Mart Noorma, and Heather Roff. Continue reading

Posted in Human Robots

#441034 How Can We Make Sure Autonomous Weapons ...

This article is part of our Autonomous Weapons Challenges series. The IEEE Standards Association is looking for your feedback on this topic, and invites you to answer these questions.

International discussions about autonomous weapons systems (AWS) often focus on a fundamental question: Is it legal for a machine to make the decision to take a human life? But woven into this question is another fundamental issue: Can an automated weapons system be trusted to do what it’s expected to do?

If the technical challenges of developing and using AWS can’t be addressed, then the answer to both questions is likely “no.”

AI Challenges Are Magnified When Applied to Weapons
Many of the known issues with AI and machine learning become even more problematic when associated with weapons. For example, AI systems could help process data from images far faster than human analysts can, and the majority of the results would be accurate. But the algorithms used for this functionality are known to introduce or exacerbate issues of bias and discrimination, targeting certain demographics more than others. Given that, is it reasonable to use image-recognition software to help humans identify potential targets?
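One way such bias becomes concrete, and at least measurable, is as a gap in error rates between demographic groups on a labeled evaluation set. The sketch below is a generic, hypothetical check with invented records; it is not drawn from any real targeting system or dataset.

```python
# Hypothetical bias check: compare false-positive rates across two groups.
# Each record is (group, true_label, predicted_label); 1 means flagged as a target.
records = [
    ("A", 0, 1), ("A", 0, 0), ("A", 1, 1), ("A", 0, 0),
    ("B", 0, 1), ("B", 0, 1), ("B", 1, 1), ("B", 0, 0),
]

def false_positive_rate(rows):
    negatives = [r for r in rows if r[1] == 0]
    false_positives = [r for r in negatives if r[2] == 1]
    return len(false_positives) / len(negatives) if negatives else 0.0

for group in ("A", "B"):
    rows = [r for r in records if r[0] == group]
    print(f"group {group}: false-positive rate = {false_positive_rate(rows):.2f}")
# A large gap between groups is one signal that the recognizer flags some
# demographics more often than others.
```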

But concerns about the technical abilities of AWS extend beyond object recognition and algorithmic bias. Autonomy in weapons systems requires a slew of technologies, including sensors, communications, and onboard computing power, each of which poses its own challenges for developers. These components are often designed and programmed by different organizations, and it can be hard to predict how the components will function together within the system, as well as how they’ll react to a variety of real-world situations and adversaries.

Testing for Assurance and Risk
It’s also not at all clear how militaries can test these systems to ensure the AWS will do what’s expected and comply with International Humanitarian Law. And yet militaries typically want weapons to be tested and proven to act consistently, legally, and without harming their own soldiers before the systems are deployed. If commanders don’t trust a weapons system, they likely won’t use it. But standardized testing is especially complicated for an AI program that can learn from its interactions in the field—in fact, such standardized testing for AWS simply doesn’t exist.

We know how software updates can alter how a system behaves and may introduce bugs that cause a system to behave erratically. But an automated weapons system powered by AI may also update its behavior based on real-world experience, and changes to the AWS behavior could be much harder for users to track. New information that the system accesses in the field could even trigger it to start to shift away from its original goals.
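A standard engineering response to this kind of drift is a frozen regression suite: rerun the system against the same fixed scenarios after every update and flag any decision that changed. The sketch below is a generic illustration under assumed inputs, not a description of any actual AWS test process; the threshold-based decision functions are hypothetical stand-ins.

```python
# Generic drift check: compare a system's decisions on a frozen scenario set
# before and after an update, and flag every disagreement for human review.
def check_for_drift(decide_before, decide_after, frozen_scenarios):
    return [s for s in frozen_scenarios if decide_before(s) != decide_after(s)]

# Hypothetical stand-ins for the decision logic before and after a field update.
def baseline(scenario):
    return scenario["threat_score"] > 0.8

def updated(scenario):
    return scenario["threat_score"] > 0.6  # the update silently lowered the threshold

scenarios = [{"threat_score": x / 10} for x in range(11)]
flagged = check_for_drift(baseline, updated, scenarios)
print(f"{len(flagged)} of {len(scenarios)} frozen scenarios changed outcome")  # -> 2 of 11
```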

Similarly, cyberattacks and adversarial attacks pose a known threat, which developers try to guard against. But if an attack is successful, what would testing look like to identify that the system has been hacked, and how would a user know to implement such tests?

Physical Challenges of Autonomous Weapons
Though recent advancements in artificial intelligence have led to greater concern about the use of AWS, the technical challenges of autonomy in weapons systems extend beyond AI. Physical challenges already exist for conventional weapons and for nonweaponized autonomous systems, but these same problems are further exacerbated and complicated in AWS.

For example, many autonomous systems are getting smaller even as their computational needs grow to cover navigation, data acquisition and analysis, and decision making—potentially all while out of communication with commanders. Can the automated weapons system maintain the necessary and legal functionality throughout the mission, even if communication is lost? How is data protected if the system falls into enemy hands?

Issues similar to these may also arise with other autonomous systems, but the consequences of failure are magnified with AWS, and extra features will likely be necessary to ensure that, for example, a weaponized autonomous vehicle in the battlefield doesn’t violate International Humanitarian Law or mistake a friendly vehicle for an enemy target. Because these problems are so new, weapons developers and lawmakers will need to work with and learn from experts in the robotics space to be able to solve the technical challenges and create useful policy.

There are many technical advances that will contribute to various types of weapons systems. Some will prove far more difficult to develop than expected, while others will likely be developed faster. That means AWS development won’t be a leap from conventional weapons systems to full autonomy, but will instead make incremental steps as new autonomous capabilities are developed. This could lead to a slippery slope where it’s unclear if a line has been crossed from acceptable use of technology to unacceptable. Perhaps the solution is to look at specific robotic and autonomous technologies as they’re developed and ask ourselves whether society would want a weapons system with this capability, or if action should be taken to prevent that from happening.

What Do You Think?

We want your feedback! To help bring clarity to these AWS discussions, the IEEE Standards Association convened an expert group in 2020 to consider the ethical and technical challenges of translating AWS principles into practice and what that might mean for future development and governance. Last year, the expert group published its findings in a report entitled “Ethical and Technical Challenges in the Development, Use, and Governance of Autonomous Weapons Systems.” Many of the AWS challenges are similar to those arising in other fields that are developing autonomous systems. We expect and hope that IEEE members and readers of IEEE Spectrum will have insights from their own fields that can inform the discussion around AWS technologies.

We’ve put together a series of questions in the Challenges document that we hope you’ll answer, to help us better understand how people in other fields are addressing these issues. Autonomous capabilities will increasingly be applied to weapons systems, much as they are being applied in other technical realms, and we hope that by looking at the challenges in more detail, we can help establish effective technical solutions, while contributing to discussions about what can and should be legally acceptable. Your feedback will help us move toward this ultimate goal. Public comments will be open through 7 December 2022. Continue reading

Posted in Human Robots

#441010 Robots that can feel cloth layers may ...

New research from Carnegie Mellon University's Robotics Institute can help robots feel layers of cloth rather than relying on computer vision tools to only see it. The work could allow robots to assist people with household tasks like folding laundry. Continue reading

Posted in Human Robots