#437583 Video Friday: Attack of the Hexapod ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

Happy Halloween from HEBI Robotics!

Thanks Hardik!

[ HEBI Robotics ]

Happy Halloween from Berkshire Grey!

[ Berkshire Grey ]

These are some preliminary results of our lab’s new work on using reinforcement learning to train neural networks to imitate common bipedal gait behaviors, without using any motion capture data or reference trajectories. Our method is described in an upcoming submission to ICRA 2021. Work by Jonah Siekmann and Yesh Godse.
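
The description above is light on details, so here is a minimal sketch of how a gait objective can be specified without reference trajectories: reward the policy for matching a periodic foot-contact pattern and a target speed. The reward terms, weights, and the simple left/right alternation below are assumptions for illustration, not necessarily the formulation Siekmann and Godse use.

```python
import math

def gait_reward(phase, left_contact, right_contact, forward_vel,
                target_vel=1.0, period=1.0):
    """Toy periodic-gait reward: alternate foot contacts over one cycle.

    phase          -- time within the gait cycle, in [0, period)
    left_contact   -- True if the left foot touches the ground
    right_contact  -- True if the right foot touches the ground
    forward_vel    -- measured forward velocity of the torso (m/s)
    """
    # First half of the cycle: left foot should be in stance, right in swing.
    in_first_half = (phase % period) < (period / 2)
    desired_left, desired_right = (True, False) if in_first_half else (False, True)

    contact_reward = (left_contact == desired_left) + (right_contact == desired_right)
    velocity_reward = math.exp(-(forward_vel - target_vel) ** 2)

    # Weights are arbitrary placeholders for illustration.
    return 0.5 * contact_reward + 1.0 * velocity_reward

# Example: mid-cycle sample where the right foot is in stance, as desired.
print(gait_reward(phase=0.7, left_contact=False, right_contact=True, forward_vel=0.9))
```

A full training setup would feed a per-step reward like this to an off-the-shelf RL algorithm running in simulation.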

[ OSU DRL ]

The northern goshawk is a fast, powerful raptor that flies effortlessly through forests. This bird was the design inspiration for the next-generation drone developed by scientists at the Laboratory of Intelligent Systems of EPFL, led by Dario Floreano. They carefully studied the shape of the bird’s wings and tail and its flight behavior, and used that information to develop a drone with similar characteristics.

The engineers had already designed a bird-inspired drone with a morphing wing back in 2016. In a step forward, their new model can adjust the shape of its wing and tail thanks to its artificial feathers. Flying this new type of drone isn’t easy, due to the large number of possible wing and tail configurations. To take full advantage of the drone’s flight capabilities, Floreano’s team plans to incorporate artificial intelligence into the drone’s flight system so that it can fly semi-automatically. The team’s research has been published in Science Robotics.

[ EPFL ]

Oopsie.

[ Roborace ]

We’ve covered MIT’s Roboats in the past, but now they’re big enough to keep a couple of people afloat.

Self-driving boats have been able to transport small items for years, but carrying human passengers has remained out of reach due to the small size of the vessels. Roboat II is the “half-scale” boat in the growing body of work, and joins the previously developed quarter-scale Roboat, which is 1 meter long. The third installment, which is under construction in Amsterdam and is considered to be “full scale,” is 4 meters long and aims to carry anywhere from four to six passengers.

[ MIT ]

With a training technique commonly used to teach dogs to sit and stay, Johns Hopkins University computer scientists showed a robot how to teach itself several new tricks, including stacking blocks. With the method, the robot, named Spot, was able to learn in days what typically takes a month.

[ JHU ]

Exyn, a pioneer in autonomous aerial robot systems for complex, GPS-denied industrial environments, today announced that Kody has become the first dog to successfully fly a drone, at the Number 9 Coal Mine in Lansford, PA. Selected to carry out this mission was the new autonomous aerial robot, the ExynAero.

Yes, this is obviously a publicity stunt, and Kody is only flying the drone in the sense that he’s pushing the launch button and then taking a nap. But that’s also the point—drone autonomy doesn’t get much fuller than this, despite the challenge of the environment.

[ Exyn ]

In this video, object instance segmentation and shape completion are combined with classical regrasp planning to perform pick-and-place of novel objects. The system is demonstrated with a UR5, a Robotiq 85 parallel-jaw gripper, and a Structure depth sensor on three rearrangement tasks: bin packing (minimizing the height of the packing), placing bottles onto coasters, and arranging blocks from tallest to shortest (according to the longest edge). The system also accounts for uncertainty in the segmentation/completion by avoiding grasping or placing on parts of the object where perceptual uncertainty is predicted to be high.
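
For readers who want the gist of the uncertainty handling in code, here is a hedged sketch (my simplification, not the authors' implementation): candidate grasp points on the completed shape are scored for quality, and any candidate whose predicted perceptual uncertainty is too high is masked out before the best one is chosen. The array names and the 0.2 threshold are made up for illustration.

```python
import numpy as np

def select_grasp(points, quality, uncertainty, max_uncertainty=0.2):
    """Pick the best grasp point whose predicted uncertainty is acceptable.

    points       -- (N, 3) candidate grasp positions on the completed shape
    quality      -- (N,) grasp-quality score for each candidate
    uncertainty  -- (N,) predicted segmentation/completion uncertainty in [0, 1]
    """
    ok = uncertainty < max_uncertainty          # mask out uncertain regions
    if not np.any(ok):
        return None                             # no safe grasp found
    best = np.argmax(np.where(ok, quality, -np.inf))
    return points[best]

# Toy example: the second candidate has the best quality but is too uncertain.
pts = np.array([[0.1, 0.0, 0.05], [0.0, 0.1, 0.05], [0.0, 0.0, 0.10]])
q = np.array([0.6, 0.9, 0.7])
u = np.array([0.05, 0.5, 0.1])
print(select_grasp(pts, q, u))   # picks the third candidate instead
```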

[ Paper ] via [ Northeastern ]

Thanks Marcus!

U can’t touch this!

[ University of Tokyo ]

We introduce a way to enable more natural interaction between humans and robots through Mixed Reality, by using a shared coordinate system. Azure Spatial Anchors, which already supports colocalizing multiple HoloLens and smartphone devices in the same space, has now been extended to support robots equipped with cameras. This allows humans and robots sharing the same space to interact naturally: humans can see the plan and intention of the robot, while the robot can interpret commands given from the person’s perspective. We hope that this can be a building block in the future of humans and robots being collaborators and coworkers.
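
Stripped of any particular SDK, a shared spatial anchor is just a common reference frame that both the person's device and the robot can localize against. The sketch below shows only the underlying geometry, with invented planar poses: a goal expressed in the person's frame is re-expressed in the robot's frame by chaining homogeneous transforms through the anchor.

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous transform for a planar pose (translation x, y and heading theta)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

# Poses of the person's device and of the robot, both expressed in the shared anchor frame.
T_anchor_person = se2(2.0, 1.0, np.pi / 2)
T_anchor_robot = se2(0.0, 0.0, 0.0)

# A goal the person indicates in their own frame: "1 meter in front of me."
goal_in_person = np.array([1.0, 0.0, 1.0])

# Re-express the goal in the robot's frame: robot <- anchor <- person.
goal_in_robot = np.linalg.inv(T_anchor_robot) @ T_anchor_person @ goal_in_person
print(goal_in_robot[:2])   # -> [2. 2.]
```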

[ Microsoft ]

Some very high jumps from the skinniest quadruped ever.

[ ODRI ]

In this video we present recent efforts to make our humanoid robot LOLA ready for multi-contact locomotion, i.e. additional hand-environment support for extra stabilization during walking.

[ TUM ]

Classic bike moves from Dr. Guero.

[ Dr. Guero ]

For a robotics company, iRobot is OLD.

[ iRobot ]

The Canadian Space Agency presents Juno, a preliminary version of a rover that could one day be sent to the Moon or Mars. Juno can navigate autonomously or be operated remotely. The Lunar Exploration Analogue Deployment (LEAD) consisted of replicating the scenarios of a lunar sample return mission.

[ CSA ]

How exactly does the Waymo Driver handle a cat cutting across its driving path? Jonathan N., a Product Manager on our Perception team, breaks it all down for us.

Now do kangaroos.

[ Waymo ]

Jibo is hard at work at MIT playing games with kids.

Children’s creativity plummets as they enter elementary school. Social interactions with peers and playful environments have been shown to foster creativity in children. Digital pedagogical tools often lack the creativity benefits of co-located social interaction with peers. In this work, we leverage a socially embodied robot as a playful peer and design Escape!Bot, a game involving child-robot co-play, where the robot is a social agent that scaffolds for creativity during gameplay.

[ Paper ]

It’s nice when convenience stores are convenient even for the folks who have to do the restocking.

Who’s moving the crates around, though?

[ Telexistence ]

Hi, fans! Join ROS World 2020, opening November 12th, and see footage of ROBOTIS’ ROS platform robots 🙂

[ ROS World 2020 ]

ML/RL methods are often viewed as a magical black box, and while that’s not true, learned policies are nonetheless a valuable tool that can work in conjunction with the underlying physics of the robot. In this video, Agility CTO Jonathan Hurst – wearing his professor hat at Oregon State University – presents some recent student work on using learned policies as a control method for highly dynamic legged robots.

[ Agility Robotics ]

Here’s an ICRA Legged Robots workshop talk from Marco Hutter at ETH Zürich, on Autonomy for ANYmal.

Recent advances in legged robots and their locomotion skills have led to systems that are skilled and mature enough for real-world deployment. In particular, quadrupedal robots have reached a level of mobility that lets them navigate complex environments, which enables them to take over inspection or surveillance jobs in places like offshore industrial plants, in underground areas, or on construction sites. In this talk, I will present our research work with the quadruped ANYmal and explain some of the underlying technologies for locomotion control, environment perception, and mission autonomy. I will show how these robots can learn and plan complex maneuvers, how they can navigate through unknown environments, and how they are able to conduct surveillance, inspection, or exploration scenarios.

[ RSL ]

#437571 Video Friday: Snugglebot Is What We All ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
Robotica 2020 – November 10-14, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Bay Area Robotics Symposium – November 20, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today's videos.

Snugglebot is what we all need right now.

[ Snugglebot ]

In his video message on his prayer intention for November, Pope Francis emphasizes that progress in robotics and artificial intelligence (AI) be oriented “towards respecting the dignity of the person and of Creation”.

[ Vatican News ]

KaPOW!

Apparently it's supposed to do that—the disruptor flies off backwards to reduce recoil on the robot, and has its own parachute to keep it from going too far.

[ Ghost Robotics ]

Animals have many muscles, receptors, and neurons that compose feedback loops. In this study, we designed artificial muscles, receptors, and neurons without any microprocessors or software-based controllers. We imitated the reflex rule observed in cat walking experiments; as a result, the Pneumatic Brainless Robot II produced a running motion (a leg trajectory and a gait pattern) through the interaction between the body, the ground, and the artificial reflexes. We envision that the simple reflex circuit we discovered will be a candidate for a minimal model describing the principles of animal locomotion.

Find the paper, “Brainless Running: A Quasi-quadruped Robot with Decentralized Spinal Reflexes by Solely Mechanical Devices,” on IROS On-Demand.

[ IROS ]

Thanks Yoichi!

I have no idea what these guys are saying, but they're talking about robots that serve chocolate!

The Zotter Schokoladen Manufaktur experience world of managing director Josef Zotter counts more than 270,000 visitors annually. Since March 2019, this world of chocolate in Bergl near Riegersburg in Austria has been enriched by a new attraction: the world’s first chocolate and praline robot from KUKA delights young and old alike and serves up chocolate and pralines to guests according to their personal taste.

[ Zotter ]

This paper proposes a systematic solution that uses an unmanned aerial vehicle (UAV) to aggressively and safely track an agile target. The solution properly handles the challenging situations where the intent of the target and the dense environments are unknown to the UAV. The proposed solution is integrated into an onboard quadrotor system. We fully test the system in challenging real-world tracking missions. Moreover, benchmark comparisons validate that the proposed method surpasses the cutting-edge methods on time efficiency and tracking effectiveness.

[ FAST Lab ]

Southwest Research Institute developed a cable management system for collaborative robots, or “cobots.” Dress packs used on cobots can create problems when cables are too tight (triggering e-stops) or too loose (tangling). SwRI developed ADDRESS, or the Adaptive DRESing System, to provide smarter cobot dress packs that address e-stops and tangling.

[ SWRI ]

A quick demonstration of the acoustic contact sensor in the RBO Hand 2. An embedded microphone records the sound inside of the pneumatic finger. Depending on which part of the finger makes contact, the sound is a little bit different. We create a sensor that recognizes these small changes and predicts the contact location from the sound. The visualization on the left shows the recorded sound (top) and which of the nine contact classes the sensor is currently predicting (bottom).
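
As a back-of-the-envelope illustration of how sound can be turned into a contact class (this is not TU Berlin's pipeline, and the "recordings" below are synthetic), one can reduce each clip to a normalized magnitude spectrum and classify it against per-class centroids. The class count matches the nine contact classes mentioned above; everything else is assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CLASSES, CLIP_LEN, FS = 9, 1024, 16000   # nine contact locations, 64 ms clips

def features(clip):
    """Feature vector: normalized magnitude spectrum of the clip."""
    spec = np.abs(np.fft.rfft(clip))
    return spec / (np.linalg.norm(spec) + 1e-9)

def synth_clip(cls):
    """Synthetic stand-in for a recording: each class gets its own characteristic tone."""
    t = np.arange(CLIP_LEN) / FS
    tone = np.sin(2 * np.pi * (300 + 150 * cls) * t)
    return tone + 0.3 * rng.standard_normal(CLIP_LEN)

# One centroid per contact class, averaged over a handful of noisy clips.
centroids = np.array([
    np.mean([features(synth_clip(cls)) for _ in range(20)], axis=0)
    for cls in range(N_CLASSES)
])

def predict(clip):
    """Return the contact class whose centroid is closest to the clip's features."""
    dists = np.linalg.norm(centroids - features(clip), axis=1)
    return int(np.argmin(dists))

print(predict(synth_clip(4)))   # -> 4, with high probability at this noise level
```

A real system would presumably use richer features and a learned classifier, but the structure of the problem is the same.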

[ TU Berlin ]

The MAVLab won the prize for the “most innovative design” in the IMAV 2018 indoor competition, in which drones had to fly through windows and gates and follow a predetermined flight path. The prize was awarded for the demonstration of a fully autonomous version of the “DelFly Nimble,” a tailless flapping-wing drone.

In order to fly by itself, the DelFly Nimble was equipped with a single, small camera and a small processor allowing onboard vision processing and control. The jury of international experts in the field praised the agility and autonomous flight capabilities of the DelFly Nimble.

[ MAVLab ]

A reactive walking controller for the Open Dynamic Robot Initiative's skinny quadruped.

[ ODRI ]

Mobile service robots are already able to recognize people and objects while navigating autonomously through their operating environments. But what is the ideal position of the robot to interact with a user? To solve this problem, Fraunhofer IPA developed an approach that connects navigation, 3D environment modeling, and person detection to find the optimal goal pose for HRI.

[ Fraunhofer ]

Yaskawa has been in robotics for a very, very long time.

[ Yaskawa ]

Black in Robotics IROS launch event, featuring Carlotta Berry.

[ Black in Robotics ]

What is AI? I have no idea! But these folks have some opinions.

[ MIT ]

Aerial-based Observations of Volcanic Emissions (ABOVE) is an international collaborative project that is changing the way we sample volcanic gas emissions. Harnessing recent advances in drone technology, unoccupied aerial systems (UAS) in the ABOVE fleet are able to acquire aerial measurements of volcanic gases directly from within previously inaccessible volcanic plumes. In May 2019, a team of 30 researchers undertook an ambitious field deployment to two volcanoes – Tavurvur (Rabaul) and Manam in Papua New Guinea – both amongst the most prodigious emitters of sulphur dioxide on Earth, and yet lacking any measurements of how much carbon they emit to the atmosphere.

[ ABOVE ]

A talk from IHMC's Robert Griffin for ICCAS 2020, including a few updates on their Nadia humanoid.

[ IHMC ]

#437562 Video Friday: Aquanaut Robot Takes to ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Bay Area Robotics Symposium – November 20, 2020 – [Online]
ACRA 2020 – December 8-10, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today's videos.

To prepare the Perseverance rover for its date with Mars, NASA’s Mars 2020 mission team conducted a wide array of tests to help ensure a successful entry, descent and landing at the Red Planet. From parachute verification in the world’s largest wind tunnel, to hazard avoidance practice in Death Valley, California, to wheel drop testing at NASA’s Jet Propulsion Laboratory and much more, every system was put through its paces to get ready for the big day. The Perseverance rover is scheduled to land on Mars on February 18, 2021.

[ JPL ]

Awesome to see Aquanaut—the “underwater transformer” we wrote about last year—take to the ocean!

Also their new website has SHARKS on it.

[ HMI ]

Nature has inspired engineers at UNSW Sydney to develop a soft fabric robotic gripper which behaves like an elephant's trunk to grasp, pick up and release objects without breaking them.

[ UNSW ]

Collaborative robots offer increased interaction capabilities at relatively low cost but, in contrast to their industrial counterparts, they inevitably lack precision. We address this problem by relying on a dual-arm system with laser-based sensing to measure relative poses between objects of interest and compensate for pose errors coming from robot proprioception.
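
The compensation step amounts to composing rigid-body transforms. Below is a minimal planar sketch with invented numbers, not the paper's method: the externally measured object pose defines an error transform relative to the proprioceptive estimate, and the same correction is applied to the planned target pose.

```python
import math

def compose(a, b):
    """Compose two planar poses a ∘ b, each given as (x, y, theta)."""
    ax, ay, at = a
    bx, by, bt = b
    return (ax + math.cos(at) * bx - math.sin(at) * by,
            ay + math.sin(at) * bx + math.cos(at) * by,
            at + bt)

def invert(p):
    """Inverse of a planar pose (x, y, theta)."""
    x, y, t = p
    return (-math.cos(t) * x - math.sin(t) * y,
             math.sin(t) * x - math.cos(t) * y,
            -t)

# Object pose from proprioception (nominal) vs. from the laser measurement, in the base frame.
obj_nominal = (0.50, 0.20, 0.00)
obj_measured = (0.52, 0.18, 0.02)

# Error transform mapping the nominal object pose onto the measured one,
# then the same correction applied to a grasp target planned against the nominal pose.
error = compose(obj_measured, invert(obj_nominal))
target_nominal = (0.50, 0.30, 0.00)
target_corrected = compose(error, target_nominal)
print(tuple(round(v, 3) for v in target_corrected))
```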

[ Paper ]

Developed by NAVER LABS with Korea University of Technology & Education (Koreatech), the robot arm now features an added waist, extending the available workspace, as well as a sensor head that can perceive objects. It has also been equipped with a robot hand, the “BLT Gripper,” that can switch between various grasping methods.

[ NAVER Labs ]

In case you were still wondering why SoftBank acquired Aldebaran and Boston Dynamics:

[ RobotStart ]

DJI's new Mini 2 drone is here with a commercial so hip it makes my teeth scream.

[ DJI ]

Using simple materials, such as plastic struts and cardboard rolls, the first prototype of the RBO Hand 3 is already capable of grasping a large range of different objects thanks to its opposable thumb.

The RBO Hand 3 performs an edge grasp before handing-over the object to a person. The hand actively exploits constraints in the environment (the tabletop) for grasping the object. Thanks to its compliance, this interaction is safe and robust.

[ TU Berlin ]

Flyability's Elios 2 helped researchers inspect Reactor Five at the Chernobyl nuclear disaster site in order to determine whether any uranium was present. Prior to this mission, Reactor Five had not been investigated since the disaster in April of 1986.

[ Flyability ]

Thanks Zacc!

SOTO 2 is here! Together with our development partners from the industry, we have greatly enhanced the SOTO prototype over the last two years. With the new version of the robot, Industry 4.0 will become a great deal more real: SOTO brings materials to the assembly line, just-in-time and completely autonomously.

[ Magazino ]

A drone that can fly sustainably for long distances over land and water, and can land almost anywhere, will be able to serve a wide range of applications. There are already drones that fly using ‘green’ hydrogen, but they either fly very slowly or cannot land vertically. That’s why researchers at TU Delft, together with the Royal Netherlands Navy and the Netherlands Coastguard, developed a hydrogen-powered drone that is capable of vertical take-off and landing whilst also being able to fly horizontally efficiently for several hours, much like regular aircraft. The drone uses a combination of hydrogen and batteries as its power source.

[ MAVLab ]

The National Nuclear User Facility for Hot Robotics (NNUF-HR) is an EPSRC funded facility to support UK academia and industry to deliver ground-breaking, impactful research in robotics and artificial intelligence for application in extreme and challenging nuclear environments.

[ NNUF ]

At the Karolinska University Laboratory in Sweden, an innovation project based around an ABB collaborative robot has increased efficiency and created a better working environment for lab staff.

[ ABB ]

What I find interesting about DJI’s enormous new agricultural drone is that it’s got a spinning obstacle-detecting sensor that’s a radar, not a lidar.

Also worth noting is that it seems to detect the telephone pole, but not the support wire that you can see in the video feed, although the visualization does make it seem like it can spot the power lines above.

[ DJI ]

Josh Pieper has spent the last year building his own quadruped, and you can see what he’s been up to in just 12 minutes.

[ mjbots ]

Thanks Josh!

Dr. Ryan Eustice, TRI Senior Vice President of Automated Driving, delivers a keynote speech — “The Road to Vehicle Automation, a Toyota Guardian Approach” — to SPIE’s Future Sensing Technologies 2020. During the presentation, Eustice provides his perspective on the current state of automated driving, summarizes TRI’s Guardian approach — which amplifies human drivers, rather than replacing them — and reviews TRI’s recent developments in core AD capabilities.

[ TRI ]

Two excellent talks this week from UPenn GRASP Lab, from Ruzena Bajcsy and Vijay Kumar.

A panel discussion on the future of robotics and societal challenges with Dr. Ruzena Bajcsy as a Roboticist and Founder of the GRASP Lab.

In this talk I will describe the role of the White House Office of Science and Technology Policy in supporting science and technology research and education, and the lessons I learned while serving in the office. I will also identify a few opportunities at the intersection of technology and policy and broad societal challenges.

[ UPenn ]

The IROS 2020 “Perception, Learning, and Control for Autonomous Agile Vehicles” workshop is all online—here's the intro, but you can click through for a playlist that includes videos of the entire program, and slides are available as well.

[ NYU ]

#437543 This Is How We’ll Engineer Artificial ...

Take a Jeopardy! guess: this body part was once referred to as the “consummation of all perfection as an instrument.”

Answer: “What is the human hand?”

Our hands are insanely complex feats of evolutionary engineering. Densely-packed sensors provide intricate and ultra-sensitive feelings of touch. Dozens of joints synergize to give us remarkable dexterity. A “sixth sense” awareness of where our hands are in space connects them to the mind, making it possible to open a door, pick up a mug, and pour coffee in total darkness based solely on what they feel.

So why can’t robots do the same?

In a new article in Science, Dr. Subramanian Sundaram of Boston University and Harvard University argues that it’s high time to rethink robotic touch. Scientists have long dreamed of artificially engineering robotic hands with the same dexterity and feedback that we have. Now, after decades, we’re on the cusp of a breakthrough thanks to two major advances. One, we better understand how touch works in humans. Two, we have the mega computational powerhouse called machine learning to recapitulate biology in silicon.

Robotic hands with a sense of touch—and the AI brain to match it—could overhaul our idea of robots. Rather than charming, if somewhat clumsy, novelties, robots equipped with human-like hands are far more capable of routine tasks—making food, folding laundry—and specialized missions like surgery or rescue. But machines aren’t the only ones to gain. For humans, robotic prosthetic hands equipped with accurate, sensitive, and high-resolution artificial touch are the next giant breakthrough to seamlessly link a biological brain to a mechanical hand.

Here’s what Sundaram laid out to get us to that future.

How Does Touch Work, Anyway?
Let me start with some bad news: reverse engineering the human hand is really hard. It’s jam-packed with over 17,000 sensors tuned to mechanical forces alone, not to mention sensors for temperature and pain. These force “receptors” rely on physical distortions—bending, stretching, curling—to signal to the brain.

The good news? We now have a far clearer picture of how biological touch works. Imagine a coin pressed into your palm. The sensors embedded in the skin, called mechanoreceptors, capture that pressure and “translate” it into electrical signals. These signals pulse through the nerves on your hand to the spine, and eventually make their way to the brain, where they get interpreted as “touch.”

At least, that’s the simple version, but one too vague and not particularly useful for recapitulating touch. To get there, we need to zoom in.

The cells on your hand that collect touch signals, called tactile “first order” neurons (enter Star Wars joke), are like upside-down trees. Intricate branches extend from their bodies, buried deep in the skin, to a vast area of the hand. Each neuron has its own little domain, called a “receptor field,” although some overlap. Like governors, these neurons manage a semi-dedicated region, so that any signal they transfer to the higher-ups—spinal cord and brain—is actually integrated from multiple sensors across a large distance.

It gets more intricate. The skin itself is a living entity that can regulate its own mechanical senses through hydration. Sweat, for example, softens the skin, which changes how it interacts with surrounding objects. Ever tried putting a glove onto a sweaty hand? It’s far more of a struggle than a dry one, and feels different.

In a way, the hand’s tactile neurons play a game of Morse Code. Through different frequencies of electrical beeps, they’re able to transfer information about an object’s size, texture, weight, and other properties, while also asking the brain for feedback to better control the object.
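
That Morse-code analogy is essentially rate coding: a stronger stimulus produces a higher spike frequency. Here is a toy illustration (not a physiological model; the rates and threshold are arbitrary) that converts a normalized pressure value into a train of spike times.

```python
import numpy as np

def pressure_to_spike_times(pressure, duration=1.0, max_rate=200.0, threshold=0.05):
    """Toy rate code: map a pressure value in [0, 1] to regularly spaced spike times.

    pressure  -- normalized pressure reading in [0, 1]
    duration  -- observation window in seconds
    max_rate  -- firing rate (Hz) at maximum pressure
    threshold -- pressures below this produce no spikes
    """
    if pressure < threshold:
        return np.array([])
    rate = max_rate * pressure                      # spikes per second
    n_spikes = int(rate * duration)
    return np.linspace(0.0, duration, n_spikes, endpoint=False)

light_touch = pressure_to_spike_times(0.1)
firm_press = pressure_to_spike_times(0.8)
print(len(light_touch), len(firm_press))   # -> 20 160
```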

Biology to Machine
Reworking all of our hands’ greatest features into machines is absolutely daunting. But robots have a leg up—they’re not restricted to biological hardware. Earlier this year, for example, a team from Columbia engineered a “feeling” robotic finger using overlapping light emitters and sensors in a way loosely similar to receptor fields. Distortions in light were then analyzed with deep learning to translate into contact location and force.
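
To make the "light distortions in, contact location and force out" idea concrete, here is a small regression sketch in the same spirit, though it is not the Columbia group's code: a little neural network maps a vector of simulated photodiode readings to a 3D contact location plus a force value. The channel count, the synthetic data, and the network size are all assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

N_CHANNELS = 32          # hypothetical number of light emitter/receiver channels

# Synthetic data standing in for real finger recordings: a fixed random linear
# mixing of (location, force) into light readings, plus a little noise.
true_mix = torch.randn(4, N_CHANNELS)
targets = torch.rand(2000, 4)                    # columns: contact x, y, z and force
readings = targets @ true_mix + 0.01 * torch.randn(2000, N_CHANNELS)

model = nn.Sequential(
    nn.Linear(N_CHANNELS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 4),                            # outputs: contact x, y, z and force
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for step in range(500):                          # short full-batch training loop
    opt.zero_grad()
    loss = loss_fn(model(readings), targets)
    loss.backward()
    opt.step()

print(f"final training loss: {loss.item():.4f}")
```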

Although a radical departure from our own electrical-based system, the Columbia team’s attempt was clearly based on human biology. They’re not alone. “Substantial progress is being made in the creation of soft, stretchable electronic skins,” said Sundaram, many of which can sense forces or pressure, although they’re currently still limited.

What’s promising, however, is the “exciting progress in using visual data,” said Sundaram. Computer vision has gained enormously from ubiquitous cameras and large datasets, making it possible to train powerful but data-hungry algorithms such as deep convolutional neural networks (CNNs).

By piggybacking on their success, we can essentially add “eyes” to robotic hands, a superpower we humans don’t have. Even better, CNNs and other classes of algorithms can be readily adapted for processing tactile data. Together, a robotic hand could use its eyes to scan an object, plan its movements for a grasp, and use touch for feedback to adjust its grip. Maybe we’ll finally have a robot that easily rescues the phone sadly dropped into a composting toilet. Or something much grander to benefit humanity.
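
That look-then-feel loop fits in a few lines of control logic. In the hedged sketch below, both the vision planner and the slip detector are hypothetical stubs; the point is only the structure of the feedback loop, in which touch keeps nudging the grip force until the object stops slipping.

```python
import random

random.seed(1)

def plan_grasp_from_camera():
    """Stub for a vision module: returns an initial grip force guess (N)."""
    return 2.0

def slip_detected(force):
    """Stub for a tactile module: slip is more likely when the grip force is low."""
    return random.random() < max(0.0, 0.9 - 0.1 * force)

def grasp_with_feedback(max_force=15.0, step=1.0):
    force = plan_grasp_from_camera()
    while force < max_force:
        if not slip_detected(force):
            return force                 # stable grasp reached
        force += step                    # touch says it's slipping: squeeze a bit harder
    return max_force                     # stop increasing; hold at the safety limit

print(f"settled on a grip force of {grasp_with_feedback():.1f} N")
```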

That said, relying too heavily on vision could also be a downfall. Take a robot that scans a wide area of rubble for signs of life during a disaster response. If touch relies on sight, then it would have to keep a continuous line of sight in a complex and dynamic setting—something computer vision doesn’t handle well, at least for now.

A Neuromorphic Way Forward
Too Debbie Downer? I got your back! It’s hard to overstate the challenges, but what’s clear is that emerging machine learning tools can tackle data processing challenges. For vision, it’s distilling complex images into “actionable control policies,” said Sundaram. For touch, it’s easy to imagine the same. Couple the two together, and that’s a robotic super-hand in the making.

Going forward, argues Sundaram, we need to closely adhere to how the hand and brain process touch. Hijacking our biological “touch machinery” has already proved useful. In 2019, one team used a nerve-machine interface for amputees to control a robotic arm—the DEKA LUKE arm—and sense what the limb and attached hand were feeling. Pressure on the LUKE arm and hand activated an implanted neural interface, which zapped remaining nerves in a way that the brain processes as touch. When the AI analyzed pressure data similar to biological tactile neurons, the person was able to better identify different objects with their eyes closed.

“Neuromorphic tactile hardware (and software) advances will strongly influence the future of bionic prostheses—a compelling application of robotic hands,” said Sundaram, adding that the next step is to increase the density of sensors.

Two additional themes made the list for progressing towards a cyborg future. One is longevity: sensors on a robot need to reliably produce large quantities of high-quality data—something that’s seemingly mundane, but is a practical limitation.

The other is going all-in-one. Rather than just a pressure sensor, we need something that captures the myriad of touch sensations. From a feather-light touch to a heavy punch, from vibrations to temperatures, a tree-like architecture similar to that of our hands would help organize, integrate, and otherwise process data collected from those sensors.

Just a decade ago, mind-controlled robotics were considered a blue sky, stretch-goal neurotechnological fantasy. We now have a chance to “close the loop,” from thought to movement to touch and back to thought, and make some badass robots along the way.

Image Credit: PublicDomainPictures from Pixabay

#437477 If a Robot Is Conscious, Is It OK to ...

In the Star Trek: The Next Generation episode “The Measure of a Man,” Data, an android crew member of the Enterprise, is to be dismantled for research purposes unless Captain Picard can argue that Data deserves the same rights as a human being. Naturally the question arises: What is the basis upon which something has rights? What gives an entity moral standing?

The philosopher Peter Singer argues that creatures that can feel pain or suffer have a claim to moral standing. He argues that nonhuman animals have moral standing, since they can feel pain and suffer. Limiting it to people would be a form of speciesism, something akin to racism and sexism.

Without endorsing Singer’s line of reasoning, we might wonder if it can be extended further to an android robot like Data. It would require that Data can either feel pain or suffer. And how you answer that depends on how you understand consciousness and intelligence.

As real artificial intelligence technology advances toward Hollywood’s imagined versions, the question of moral standing grows more important. If AIs have moral standing, philosophers like me reason, it could follow that they have a right to life. That means you cannot simply dismantle them, and might also mean that people shouldn’t interfere with their pursuing their goals.

Two Flavors of Intelligence and a Test
IBM’s Deep Blue chess machine was successfully trained to beat grandmaster Garry Kasparov. But it could not do anything else. This computer had what’s called domain-specific intelligence.

On the other hand, there’s the kind of intelligence that allows for the ability to do a variety of things well. It is called domain-general intelligence. It’s what lets people cook, ski, and raise children—tasks that are related, but also very different.

Artificial general intelligence, AGI, is the term for machines that have domain-general intelligence. Arguably no machine has yet demonstrated that kind of intelligence. This summer, a startup called OpenAI released a new version of its Generative Pre-Training language model. GPT-3 is a natural language processing system, trained to read and write so that it can be easily understood by people.

It drew immediate notice, not just because of its impressive ability to mimic stylistic flourishes and put together plausible content, but also because of how far it had come from a previous version. Despite this impressive performance, GPT-3 doesn’t actually know anything beyond how to string words together in various ways. AGI remains quite far off.

Named after pioneering AI researcher Alan Turing, the Turing test helps determine when an AI is intelligent. Can a person conversing with a hidden AI tell whether it’s an AI or a human being? If he can’t, then for all practical purposes, the AI is intelligent. But this test says nothing about whether the AI might be conscious.

Two Kinds of Consciousness
There are two parts to consciousness. First, there’s the what-it’s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It’s about how you experience a phenomenon, like smelling a rose or feeling pain.

In contrast, there’s also access consciousness. That’s the ability to report, reason, behave, and act in a coordinated and responsive manner to stimuli based on goals. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.

Blindsight nicely illustrates the difference between the two types of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything in the left side of their visual field. But if asked to pick up a pen from an array of objects in the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted—an example of access consciousness without phenomenal consciousness.

Data is an android. How do these distinctions play out with respect to him?

The Data Dilemma
The android Data demonstrates that he is self-aware in that he can monitor whether or not, for example, he is optimally charged or there is internal damage to his robotic arm.

Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard and reason with him about the best path to take.

He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight with enemies on alien planets, and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.

However, Data most likely lacks phenomenal consciousness—he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He’s self-aware and has access consciousness—can grab the pen—but across all his senses he lacks phenomenal consciousness.

Now, if Data doesn’t feel pain, at least one of the reasons Singer offers for giving a creature moral standing is not fulfilled. But Data might fulfill the other condition of being able to suffer, even without feeling pain. Suffering might not require phenomenal consciousness the way pain essentially does.

For example, what if suffering were also defined as the idea of being thwarted from pursuing a just cause without causing harm to others? Suppose Data’s goal is to save his crewmate, but he can’t reach her because of damage to one of his limbs. Data’s reduction in functioning that keeps him from saving his crewmate is a kind of nonphenomenal suffering. He would have preferred to save the crewmate, and would be better off if he did.

In the episode, the question ends up resting not on whether Data is self-aware—that is not in doubt. Nor is it in question whether he is intelligent—he easily demonstrates that he is in the general sense. What is unclear is whether he is phenomenally conscious. Data is not dismantled because, in the end, his human judges cannot agree on the significance of consciousness for moral standing.

Should an AI Get Moral Standing?
Data is kind; he acts to support the well-being of his crewmates and those he encounters on alien planets. He obeys orders from people and appears unlikely to harm them, and he seems to protect his own existence. For these reasons he appears peaceful and easier to accept into the realm of things that have moral standing.

But what about Skynet in the Terminator movies? Or the worries recently expressed by Elon Musk about AI being more dangerous than nukes, and by Stephen Hawking on AI ending humankind?

Human beings don’t lose their claim to moral standing just because they act against the interests of another person. In the same way, you can’t automatically say that just because an AI acts against the interests of humanity or another AI it doesn’t have moral standing. You might be justified in fighting back against an AI like Skynet, but that does not take away its moral standing. If moral standing is given in virtue of the capacity to nonphenomenally suffer, then Skynet and Data both get it even if only Data wants to help human beings.

There are no artificial general intelligence machines yet. But now is the time to consider what it would take to grant them moral standing. How humanity chooses to answer the question of moral standing for nonbiological creatures will have big implications for how we deal with future AIs—whether kind and helpful like Data, or set on destruction, like Skynet.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Ico Maker / Shutterstock.com
