Tag Archives: exist


#439320 Lethal Autonomous Weapons Exist; They ...

This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE.

A chilling future that some had said might not arrive for many years to come is, in fact, already here. According to a recent UN report, a drone airstrike in Libya from the spring of 2020—made against Libyan National Army forces by Turkish-made STM Kargu-2 drones on behalf of Libya’s Government of National Accord—was conducted by weapons systems with no known humans “in the loop.”

In so many words, the red line of autonomous targeting of humans has now been crossed.

To the best of our knowledge, this official United Nations reporting marks the first documented use case of a lethal autonomous weapon system akin to what has elsewhere been called a “Slaughterbot.” We believe this is a landmark moment. Civil society organizations, such as ours, have previously advocated for a preemptive treaty prohibiting the development and use of lethal autonomous weapons, much as blinding weapons were preemptively banned in 1998. The window for preemption has now passed, but the need for a treaty is more urgent than ever.

The STM Kargu-2 is a flying quadcopter that weighs a mere 7 kg, is being mass-produced, is capable of fully autonomous targeting, can form swarms, remains fully operational when GPS and radio links are jammed, and is equipped with facial recognition software to target humans. In other words, it’s a Slaughterbot.

The UN report notes: “Logistics convoys and retreating [Haftar Affiliated Forces] were subsequently hunted down and remotely engaged by the unmanned combat aerial vehicles or the lethal autonomous weapons systems such as the STM Kargu-2 (see Annex 30) and other loitering munitions. The lethal autonomous weapons systems were programmed to attack targets without requiring data connectivity between the operator and the munition.” Annex 30 of the report depicts photographic evidence of the downed STM Kargu-2 system.

Image Credit: United Nations

In a previous effort to identify consensus areas for prohibition, we brought together experts with a range of views on lethal autonomous weapons to brainstorm a way forward. We published the agreed findings in “A Path Towards Reasonable Autonomous Weapons Regulation,” which suggested a “time-limited moratorium on the development, deployment, transfer, and use of anti-personnel lethal autonomous weapon systems” as a first, and absolute minimum, step for regulation.

A recent position statement from the International Committee of the Red Cross on autonomous weapons systems concurs. It states that “use of autonomous weapon systems to target human beings should be ruled out. This would best be achieved through a prohibition on autonomous weapon systems that are designed or used to apply force against persons.” This sentiment is shared by many civil society organizations, such as the UK-based advocacy organization Article 36, which recommends that “An effective structure for international legal regulation would prohibit certain configurations—such as systems that target people.”

The “Slaughterbots” Question

In 2017, the Future of Life Institute, which we represent, released a nearly eight-minute-long video titled “Slaughterbots”—which was viewed by an estimated 75 million people online—dramatizing the dangers of lethal autonomous weapons. At the time of release, the video received both praise and criticism. Paul Scharre’s Dec. 2017 IEEE Spectrum article “Why You Shouldn’t Fear Slaughterbots” argued that “Slaughterbots” was “very much science fiction” and a “piece of propaganda.” At a Nov. 2017 meeting about lethal autonomous weapons in Geneva, Switzerland, the Russian ambassador to the UN also reportedly dismissed it, saying that such concerns were 25 or 30 years in the future. We addressed these critiques in our piece—also for Spectrum—titled “Why You Should Fear Slaughterbots–A Response.” Now, less than four years later, reality has made the case for us: The age of Slaughterbots appears to have begun.

The first step must be an immediate moratorium on the development, deployment, and use of lethal autonomous weapons that target persons, combined with a commitment to negotiate a permanent treaty.

We produced “Slaughterbots” to educate the public and policymakers alike about the potential imminent dangers of small, cheap, and ubiquitous lethal autonomous weapons systems. Beyond the moral issue of handing over decisions over life and death to algorithms, the video pointed out that autonomous weapons will, inevitably, turn into weapons of mass destruction, precisely because they require no human supervision and can therefore be deployed in vast numbers. (A related point, concerning the tactical agility of such weapons platforms, was made in Spectrum last month in an article by Natasha Bajema.) Furthermore, like small arms, autonomous weaponized drones will proliferate easily on the international arms market. As the “Slaughterbots” video’s epilogue explained, all the component technologies were already available, and we expected militaries to start deploying such weapons very soon. That prediction was essentially correct.

The past few years have seen a series of media reports about military testing of ever-larger drone swarms and battlefield use of weapons with increasingly autonomous functions. In 2019, then-Secretary of Defense Mark Esper, at a meeting of the National Security Commission on Artificial Intelligence, remarked, “As we speak, the Chinese government is already exporting some of its most advanced military aerial drones to the Middle East.

“In addition,” Esper added, “Chinese weapons manufacturers are selling drones advertised as capable of full autonomy, including the ability to conduct lethal targeted strikes.”

While China has entered the autonomous drone export business, other producers and exporters of highly autonomous weapons systems include Turkey and Israel. Small drone systems have progressed from being limited to semi-autonomous and anti-materiel targeting, to possessing fully autonomous operational modes equipped with sensors that can identify, track, and target humans.

Azerbaijan’s decisive advantage over Armenian forces in the 2020 Nagorno-Karabakh conflict has been attributed to its arsenal of cheap, kamikaze “suicide drones.” During the conflict, there was reported use of the Israeli Orbiter 1K and Harop, both loitering munitions that self-destruct on impact. These weapons are deployed by a human over a specific geographic region, but they ultimately select their own targets without human intervention. Azerbaijan’s success with these weapons has provided a compelling precedent for how inexpensive, highly autonomous systems can enable militaries without an advanced air force to compete on the battlefield. The result has been a worldwide surge in demand for such systems, as the price of air superiority has dropped dramatically. While the systems used in Azerbaijan are arguably a software update away from autonomous targeting of humans, their described intended use was primarily against materiel targets such as radar systems and vehicles.

If, as it seems, the age of Slaughterbots is here, what can the world do about it? The first step must be an immediate moratorium on the development, deployment, and use of lethal autonomous weapons that target persons, combined with a commitment to negotiate a permanent treaty. We also need agreements that facilitate verification and enforcement, including design constraints on remotely piloted weapons that prevent software conversion to autonomous operation as well as industry rules to prevent large-scale, illicit weaponization of civilian drones.

We want nothing more than for our “Slaughterbots” video to become merely a historical reminder of a horrendous path not taken—a mistake the human race could have made, but didn’t.

Stuart Russell is a professor of computer science at the University of California, Berkeley, and coauthor of the standard textbook “Artificial Intelligence: A Modern Approach.”

Anthony Aguirre is a professor of physics at the University of California, Santa Cruz, and cofounder of the Future of Life Institute.

Emilia Javorsky is a physician-scientist who leads advocacy on autonomous weapons for the Future of Life Institute.

Max Tegmark is a professor of physics at MIT, cofounder of the Future of Life Institute, and author of “Life 3.0: Being Human in the Age of Artificial Intelligence.”

Posted in Human Robots

#439053 Bipedal Robots Are Learning To Move With ...

Most humans are bipeds, but even the best of us are really only bipeds until things get tricky. While our legs may be our primary mobility system, there are lots of situations in which we leverage our arms as well, either passively to keep balance or actively when we put out a hand to steady ourselves on a nearby object. And despite how unstable bipedal robots tend to be, using anything besides legs for mobility has been a challenge in both software and hardware, a significant limitation in highly unstructured environments.

Roboticists from TUM in Germany (with support from the German Research Foundation) have recently given their humanoid robot LOLA some major upgrades to make this kind of multi-contact locomotion possible. While it’s still in the early stages, it’s already some of the most human-like bipedal locomotion we’ve seen.

It’s certainly possible for bipedal robots to walk over challenging terrain without using limbs for support, but I’m sure you can think of lots of times where using your arms to assist with your own bipedal mobility was a requirement. It’s not a requirement because your leg strength or coordination or sense of balance is bad, necessarily. It’s just that sometimes, you might find yourself walking across something that’s highly unstable or in a situation where the consequences of a stumble are exceptionally high. And it may not even matter how much sensing you do beforehand, and how careful you are with your footstep planning: there are limits to how much you can know about your environment beforehand, and that can result in having a really bad time of it. This is why using multi-contact locomotion, whether it’s planned in advance or not, is a useful skill for humans, and should be for robots, too.

As the video notes (and props for being explicit up front about it), this isn’t yet fully autonomous behavior, with foot positions and arm contact points set by hand in advance. But it’s not much of a stretch to see how everything could be done autonomously, since one of the really hard parts (using multiple contact points to dynamically balance a moving robot) is being done onboard and in real time.

Getting LOLA to be able to do this required a major overhaul in hardware as well as software. And Philipp Seiwald, who works with LOLA at TUM, was able to tell us more about it.

IEEE Spectrum: Can you summarize the changes to LOLA’s hardware that are required for multi-contact locomotion?

Philipp Seiwald: The original version of LOLA was designed for fast biped walking. Although it had two arms, they were not meant to make contact with the environment but rather to compensate for the dynamic effects of the feet during fast walking. Also, the torso had a relatively simple design that was fine for its original purpose; however, it was not conceived to withstand the high loads coming from the hands during multi-contact maneuvers. Thus, we redesigned the complete upper body of LOLA from scratch. Starting from the pelvis, the strength and stiffness of the torso were increased. We used the finite element method to optimize critical parts to obtain maximum strength at minimum weight. Moreover, we added additional degrees of freedom to the arms to increase the hands' reachable workspace. The kinematic topology of the arms, i.e., the arrangement of joints and link lengths, was obtained from an optimization that takes typical multi-contact scenarios into account.

Why is this an important problem for bipedal humanoid robots?

Maintaining balance during locomotion can be considered the primary goal of legged robots. Naturally, this task is more challenging for bipeds than for robots with four or even more legs. Although current high-end prototypes show impressive progress, humanoid robots still do not have the robustness and versatility they need for most real-world applications. With our research, we try to contribute to this field and help to push the limits further. Recently, we showed our latest work on walking over uneven terrain without multi-contact support. Although the robustness is already high, there still exist scenarios, such as walking on loose objects, where the robot's stabilization fails when using only foot contacts. The use of additional hand-environment support during this (comparatively) fast walking allows a further significant increase in robustness, i.e., the robot's capability to compensate for disturbances, modeling errors, or inaccurate sensor input. Besides stabilization on uneven terrain, multi-contact locomotion also enables more complex motions, e.g., stepping over a tall obstacle or toe-only contacts, as shown in our latest multi-contact video.

How can LOLA decide whether a surface is suitable for multi-contact locomotion?

LOLA’s visual perception system is currently being developed by our project partners at the Chair for Computer Aided Medical Procedures & Augmented Reality at TUM. This system relies on a novel semantic Simultaneous Localization and Mapping (SLAM) pipeline that can robustly extract the scene's semantic components (such as the floor, walls, and the objects within) by merging multiple observations from different viewpoints and inferring the underlying scene graph from them. This provides a reliable estimate of which parts of the scene can be used to support locomotion, based on the assumption that certain structural elements, such as walls, are fixed, while chairs, for example, are not.

The team also plans to develop a specific dataset with annotations further describing object attributes (such as surface roughness or softness), which will be used to master multi-contact locomotion in even more complex scenes. As of today, the vision and navigation system is not yet finished; thus, in our latest video, we used pre-defined footholds and contact points for the hands. However, within our collaboration, we are working towards a fully integrated and autonomous system.
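The support-selection logic described in this answer (fixed structural elements such as walls can bear load; movable objects such as chairs cannot) can be sketched in a few lines. This is a toy illustration under assumed labels and attributes, not the actual TUM perception pipeline:

```python
# Toy sketch of semantic support filtering: each scene-graph node carries
# a semantic label, and only classes assumed here to be rigidly fixed are
# offered to the planner as candidate hand supports.

FIXED_CLASSES = {"wall", "floor", "pillar", "railing"}  # assumed labels

def usable_supports(scene_nodes):
    """Return nodes that may serve as hand-contact supports.

    scene_nodes: list of dicts with 'label' (semantic class) and
    'reachable' (whether the hand workspace covers the surface).
    """
    return [n for n in scene_nodes
            if n["label"] in FIXED_CLASSES and n["reachable"]]

scene = [
    {"label": "wall", "reachable": True},    # fixed and reachable: kept
    {"label": "chair", "reachable": True},   # movable: rejected
    {"label": "wall", "reachable": False},   # out of reach: rejected
]
supports = usable_supports(scene)            # only the first wall survives
```

The real system infers fixedness from the scene graph and multiple viewpoints rather than from a hard-coded label set; the point here is only the filtering step from semantics to admissible contacts.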

Is LOLA capable of both proactive and reactive multi-contact locomotion?

The software framework of LOLA has a hierarchical structure. On the highest level, the vision system generates an environment model and estimates the 6D-pose of the robot in the scene. The walking pattern generator then uses this information to plan a dynamically feasible future motion that will lead LOLA to a target position defined by the user. On a lower level, the stabilization module modifies this plan to compensate for model errors or any kind of disturbance and keep overall balance. So our approach currently focuses on proactive multi-contact locomotion. However, we also plan to work on a more reactive behavior such that additional hand support can also be triggered by an unexpected disturbance instead of being planned in advance.
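The three layers in this answer (vision and pose estimation, walking-pattern generation, low-level stabilization) can be caricatured as a pipeline in which the stabilizer modifies the planner's output. The straight-line plan and decaying correction below are illustrative placeholders, not LOLA's actual controllers:

```python
# Minimal sketch of a plan-then-stabilize hierarchy. A pattern generator
# produces a nominal center-of-mass (CoM) path toward a goal; a stabilizer
# then shifts that plan to absorb a measured disturbance, blending the
# correction out over the planning horizon. Gains and models are toys.

def plan_com_trajectory(start, goal, steps):
    """Walking-pattern generator: straight-line CoM plan (toy model)."""
    return [start + (goal - start) * i / steps for i in range(steps + 1)]

def stabilize(planned, measured_offset, gain=0.5):
    """Stabilization layer: apply a correction that decays to zero by
    the end of the horizon, so the plan still reaches its goal."""
    n = len(planned) - 1
    return [p + gain * measured_offset * (1 - i / n)
            for i, p in enumerate(planned)]

plan = plan_com_trajectory(0.0, 1.0, steps=4)
corrected = stabilize(plan, measured_offset=0.1)
```

The key structural point it mirrors is that the lower layer only perturbs the higher layer's plan; it never re-plans the whole motion.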

What are some examples of unique capabilities that you are working towards with LOLA?

One of the main goals for the research with LOLA remains fast, autonomous, and robust locomotion on complex, uneven terrain. We aim to reach a walking speed similar to that of humans. Currently, LOLA can do multi-contact locomotion and cross uneven terrain at a speed of 1.8 km/h, which is comparatively fast for a biped robot but still slow for a human. On flat ground, LOLA’s high-end hardware allows it to walk at a relatively high maximum speed of 3.38 km/h.
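For reference, those figures convert to meters per second by dividing by 3.6 (1 km/h = 1000 m / 3600 s):

```python
# Convert the quoted walking speeds from km/h to m/s.
def kmh_to_ms(v_kmh):
    return v_kmh * 1000.0 / 3600.0

uneven_terrain = kmh_to_ms(1.8)    # 0.5 m/s
flat_ground = kmh_to_ms(3.38)      # ~0.94 m/s
```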

Fully autonomous multi-contact locomotion for a life-sized humanoid robot is a tough task. As algorithms get more complex, computation time increases, which often forces motion planning offline. For LOLA, we restrict ourselves to gaited multi-contact locomotion, which means that we try to preserve the core characteristics of bipedal gait and use the arms only for assistance. This allows us to use simplified models of the robot, which lead to very efficient algorithms that run in real time, fully onboard.

A long-term scientific goal with LOLA is to understand the essential components and control policies of human walking. LOLA’s leg kinematics are relatively similar to those of the human body. Together with scientists from kinesiology, we try to identify similarities and differences between observed human walking and LOLA’s “engineered” walking gait. We hope this research leads, on the one hand, to new ideas for the control of bipeds, and on the other hand, shows via experiments on bipeds whether biomechanical models of human gait are correctly understood. For a comparison of control policies on uneven terrain, LOLA must be able to walk at comparable speeds, which also motivates our research on fast and robust walking.

While it makes sense why the researchers are using LOLA’s arms primarily to assist with a conventional biped gait, looking ahead a bit it’s interesting to think about how robots that we typically consider to be bipeds could potentially leverage their limbs for mobility in decidedly non-human ways.

We’re used to legged robots having one particular morphology, I guess because associating them with either humans or dogs or whatever is just a comfortable way to think about them. But there’s no particular reason why a robot with four limbs has to choose between being a quadruped and being a biped with arms; it could be some hybrid of the two, depending on what its task is. The research being done with LOLA could be a step in that direction, and maybe a hand on the wall in that direction, too.

Posted in Human Robots

#439023 In ‘Klara and the Sun,’ We Glimpse ...

In a store in the center of an unnamed city, humanoid robots are displayed alongside housewares and magazines. They watch the fast-moving world outside the window, anxiously awaiting the arrival of customers who might buy them and take them home. Among them is Klara, a particularly astute robot who loves the sun and wants to learn as much as possible about humans and the world they live in.

So begins Kazuo Ishiguro’s new novel Klara and the Sun, published earlier this month. The book, told from Klara’s perspective, portrays an eerie future society in which intelligent machines and other advanced technologies have been integrated into daily life, but not everyone is happy about it.

Technological unemployment, the progress of artificial intelligence, inequality, the safety and ethics of gene editing, increasing loneliness and isolation—all of which we’re grappling with today—show up in Ishiguro’s world. It’s like he hit a fast-forward button, mirroring back to us how things might play out if we don’t approach these technologies with caution and foresight.

The wealthy genetically edit or “lift” their children to set them up for success, while the poor have to make do with the regular old brains and bodies bequeathed them by evolution. Lifted and unlifted kids generally don’t mix, and this is just one of many sinister delineations between a new breed of haves and have-nots.

There’s anger about robots’ steady infiltration into everyday life, and questions about how similar their rights should be to those of humans. “First they take the jobs. Then they take the seats at the theater?” one woman fumes.

References to “changes” and “substitutions” allude to an economy where automation has eliminated millions of jobs. While “post-employed” people squat in abandoned buildings and fringe communities arm themselves in preparation for conflict, those whose livelihoods haven’t been destroyed can afford to have live-in housekeepers and buy Artificial Friends (or AFs) for their lonely children.

“The old traditional model that we still live with now—where most of us can get some kind of paid work in exchange for our services or the goods we make—has broken down,” Ishiguro said in a podcast discussion of the novel. “We’re not talking just about the difference between rich and poor getting bigger. We’re talking about a gap appearing between people who participate in society in an obvious way and people who do not.”

He has a point; as much as techno-optimists claim that the economic changes brought by automation and AI will give us all more free time, let us work less, and devote time to our passion projects, how would that actually play out? What would millions of “post-employed” people receiving basic income actually do with their time and energy?

In the novel, we don’t get much of a glimpse of this side of the equation, but we do see how the wealthy live. After a long wait, just as the store manager seems ready to give up on selling her, Klara is chosen by a 14-year-old girl named Josie, the daughter of a woman who wears “high-rank clothes” and lives in a large, sunny home outside the city. Cheerful and kind, Josie suffers from an unspecified illness that periodically flares up and leaves her confined to her bed for days at a time.

Her life seems somewhat bleak, the need for an AF clear. In this future world, the children of the wealthy no longer go to school together, instead studying alone at home on their digital devices. “Interaction meetings” are set up for them to learn to socialize, their parents carefully eavesdropping from the next room and trying not to intervene when there’s conflict or hurt feelings.

Klara does her best to be a friend, aide, and confidante to Josie while continuing to learn about the world around her and decode the mysteries of human behavior. We surmise that she was programmed with a basic ability to understand emotions, which evolves along with her other types of intelligence. “I believe I have many feelings. The more I observe, the more feelings become available to me,” she explains to one character.

Ishiguro does an excellent job of representing Klara’s mind: a blend of pre-determined programming, observation, and continuous learning. Her narration has qualities both robotic and human; we can tell when something has been programmed in—she “Gives Privacy” to the humans around her when that’s appropriate, for example—and when she’s figured something out for herself.

But the author maintains some mystery around Klara’s inner emotional life. “Does she actually understand human emotions, or is she just observing human emotions and simulating them within herself?” he said. “I suppose the question comes back to, what are our emotions as human beings? What do they amount to?”

Klara is particularly attuned to human loneliness, since she was essentially made to help prevent it. It is, in her view, people’s biggest fear, and something they’ll go to great lengths to avoid, yet can never fully escape. “Perhaps all humans are lonely,” she says.

Warding off loneliness through technology isn’t a futuristic idea, it’s something we’ve been doing for a long time, with the technologies at hand growing more and more sophisticated. Products like AFs already exist. There’s XiaoIce, a chatbot that uses “sentiment analysis” to keep its 660 million users engaged, and Azuma Hikari, a character-based AI designed to “bring comfort” to users whose lives lack emotional connection with other humans.

The mere existence of these tools might be written off as a curiosity if it weren’t for their widespread adoption: when millions of people use AIs to fill a void in their lives, it raises deeper questions about our ability to connect with each other, and about whether technology is building that connection up or tearing it down.

This isn’t the only big question the novel tackles. An overarching theme is one we’ve been increasingly contemplating as computers start to acquire more complex capabilities, like the beginnings of creativity or emotional awareness: What is it that truly makes us human?

“Do you believe in the human heart?” one character asks. “I don’t mean simply the organ, obviously. I’m speaking in the poetic sense. The human heart. Do you think there is such a thing? Something that makes each of us special and individual?”

The alternative, at least in the story, is that people don’t have a unique essence, but rather we’re all a blend of traits and personalities that can be reduced to strings of code. Our understanding of the brain is still elementary, but at some level, doesn’t all human experience boil down to the firing of billions of neurons between our ears? Will we one day—in a future beyond that painted by Ishiguro, but certainly foreshadowed by it—be able to “decode” our humanity to the point that there’s nothing mysterious left about it? “A human heart is bound to be complex,” Klara says. “But it must be limited.”

Whether or not you agree, Klara and the Sun is worth the read. It’s both a marvelous, engaging story about what it means to love and be human, and a prescient warning to approach technological change with caution and nuance. We’re already living in a world where AI keeps us company, influences our behavior, and is wreaking various forms of havoc. Ishiguro’s novel is a snapshot of one of our possible futures, told through the eyes of a robot who keeps you rooting for her to the end.

Image Credit: Marion Wellmann from Pixabay


#438749 Folding Drone Can Drop Into Inaccessible ...

Inspecting old mines is a dangerous business. For humans, mines can be lethal: prone to rockfalls and filled with noxious gases. Robots can go where humans might suffocate, but even robots can only do so much when mines are inaccessible from the surface.

Now, researchers in the UK, led by Headlight AI, have developed a drone that could cast a light in the darkness. Named Prometheus, this drone can enter a mine through a borehole not much larger than a football, before unfurling its arms and flying around the void. Once down there, it can use its payload of scanning equipment to map mines where neither humans nor robots can presently go. This, the researchers hope, could make mine inspection quicker and easier. The team behind Prometheus published its design in November in the journal Robotics.

Mine inspection might seem like a peculiarly specific task to fret about, but old mines can collapse, causing the ground to sink and damaging nearby buildings. It’s a far-reaching threat: the geotechnical engineering firm Geoinvestigate, based in Northeast England, estimates that around 8 percent of all buildings in the UK are at risk from the thousands of abandoned coal mines lying near the country’s surface. The threat extends to transport, too, including roads and railways. Indeed, Prometheus is backed by Network Rail, which operates Britain’s railway infrastructure.

Such grave dangers mean that old mines need periodic check-ups. To enter depths that are forbidden to traditional wheeled robots—such as those featured in the DARPA SubT Challenge—inspectors today drill boreholes down into the mine and lower scanners into the darkness.

But that can be an arduous and often fruitless process. Inspecting the entirety of a mine can take multiple boreholes, and even that might not be enough to build a complete picture. Mines are jagged, labyrinthine places, and much of the void might lie out of sight. Furthermore, many old mines aren’t well mapped, so it’s hard to tell where best to enter them.

Prometheus can fly around some of those challenges. Inspectors can lower Prometheus, tethered to a docking apparatus, down a single borehole. Once inside the mine, the drone can undock and fly around, using LIDAR scanners—common in mine inspection today—to generate a 3D map of the unknown void. Prometheus can fly through the mine autonomously, using infrared data to plot out its own course.
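As a rough illustration of the kind of processing involved in lidar mapping (a minimal sketch, not Headlight AI’s actual pipeline; the function and pose format are hypothetical), each scan’s range-and-bearing returns can be converted into points in the mine’s own coordinate frame, given the drone’s estimated pose:

```python
import math

def lidar_to_points(ranges, angles, pose):
    """Convert lidar returns (range, bearing) taken at a known
    drone pose (x, y, z, yaw) into 3D points in the mine's frame."""
    x, y, z, yaw = pose
    points = []
    for r, a in zip(ranges, angles):
        # Each bearing is measured relative to the drone's heading.
        theta = yaw + a
        points.append((x + r * math.cos(theta),
                       y + r * math.sin(theta),
                       z))
    return points

# One scan taken 30 m underground, facing along +x: a wall 2 m
# straight ahead and another 3 m away to the drone's left.
scan = lidar_to_points([2.0, 3.0], [0.0, math.pi / 2],
                       (0.0, 0.0, -30.0, 0.0))
```

Accumulating such points across many scans, with the pose updated as the drone flies, is what yields the 3D map of the void.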

Other drones exist that can fly underground, but they’re either too small to carry a relatively heavy payload of scanning equipment, or too large to easily fit down a borehole. What makes Prometheus unique is its ability to fold its arms, allowing it to squeeze down spaces its counterparts cannot.

It’s that ability to fold and enter a borehole that makes Prometheus remarkable, says Jason Gross, a professor of mechanical and aerospace engineering at West Virginia University. Gross calls Prometheus “an exciting idea,” but he does note that it has a relatively short flight window and few abilities beyond scanning.

The researchers have conducted a number of successful test flights, both in a basement and in an old mine near Shrewsbury, England. Not only was Prometheus able to map out its surroundings, but it was also able to plot its own course through an unknown area.

The researchers’ next steps, according to Puneet Chhabra, co-founder of Headlight AI, will be to test Prometheus’s ability to unfold in an actual mine. Following that, researchers plan to conduct full-scale test flights by the end of 2021.
