Tag Archives: based

#437548 Curved origami provides new range of ...

New research employing curved origami structures has dramatic implications for the future of robotics, providing tunable flexibility—the ability to adjust stiffness based on function—that has historically been difficult to achieve with simple designs. Continue reading

Posted in Human Robots

#437543 This Is How We’ll Engineer Artificial ...

Take a Jeopardy! guess: this body part was once referred to as the “consummation of all perfection as an instrument.”

Answer: “What is the human hand?”

Our hands are insanely complex feats of evolutionary engineering. Densely packed sensors provide an intricate, ultra-sensitive sense of touch. Dozens of joints synergize to give us remarkable dexterity. A “sixth sense” awareness of where our hands are in space connects them to the mind, making it possible to open a door, pick up a mug, and pour coffee in total darkness based solely on what they feel.

So why can’t robots do the same?

In a new article in Science, Dr. Subramanian Sundaram of Boston University and Harvard argues that it’s high time to rethink robotic touch. Scientists have long dreamed of artificially engineering robotic hands with the same dexterity and feedback that we have. Now, after decades, we’re on the cusp of a breakthrough thanks to two major advances. One, we better understand how touch works in humans. Two, we have the mega computational powerhouse called machine learning to recapitulate biology in silicon.

Robotic hands with a sense of touch—and the AI brain to match it—could overhaul our idea of robots. Rather than charming, if somewhat clumsy, novelties, robots equipped with human-like hands would be far more capable of routine tasks—making food, folding laundry—and specialized missions like surgery or rescue. But machines aren’t the only ones to gain. For humans, robotic prosthetic hands equipped with accurate, sensitive, and high-resolution artificial touch are the next giant breakthrough for seamlessly linking a biological brain to a mechanical hand.

Here’s what Sundaram laid out to get us to that future.

How Does Touch Work, Anyway?
Let me start with some bad news: reverse engineering the human hand is really hard. It’s jam-packed with over 17,000 sensors tuned to mechanical forces alone, not to mention sensors for temperature and pain. These force “receptors” rely on physical distortions—bending, stretching, curling—to signal to the brain.

The good news? We now have a far clearer picture of how biological touch works. Imagine a coin pressed into your palm. The sensors embedded in the skin, called mechanoreceptors, capture that pressure and “translate” it into electrical signals. These signals pulse through the nerves in your hand to the spine, and eventually make their way to the brain, where they get interpreted as “touch.”

At least, that’s the simple version, and it’s too vague to be of much use for recapitulating touch. To get there, we need to zoom in.

The cells in your hand that collect touch signals, called tactile “first order” neurons (cue the Star Wars joke), are like upside-down trees. Intricate branches extend from their bodies, buried deep in the skin, across a vast area of the hand. Each neuron has its own little domain called a “receptive field,” although some fields overlap. Like governors, these neurons manage a semi-dedicated region, so that any signal they transfer to the higher-ups—spinal cord and brain—is actually integrated from multiple sensors across a large distance.
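To make the governor analogy concrete, here is a minimal Python sketch of receptive-field pooling. The sensor layout, field radius, and distance weighting are all invented for illustration rather than taken from the biology:

```python
import numpy as np

# Toy model of a tactile "first order" neuron: it integrates many skin
# sensors inside its receptive field into a single output signal.
# Sensor positions, field radius, and weights are made up for illustration.

rng = np.random.default_rng(0)
sensor_positions = rng.uniform(0, 10, size=(100, 2))  # 100 sensors on a patch of "skin"
sensor_readings = rng.uniform(0, 1, size=100)         # pressure at each sensor

def neuron_output(center, radius=2.0):
    """Integrate every sensor that falls inside this neuron's receptive field."""
    distances = np.linalg.norm(sensor_positions - center, axis=1)
    in_field = distances < radius
    weights = 1 - distances[in_field] / radius  # closer sensors contribute more
    return float(np.sum(weights * sensor_readings[in_field]))

# Two neurons with overlapping fields: both "see" the sensors near (4, 5).
print(neuron_output(np.array([3.0, 5.0])))
print(neuron_output(np.array([5.0, 5.0])))
```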

It gets more intricate. The skin itself is a living entity that can regulate its own mechanical senses through hydration. Sweat, for example, softens the skin, which changes how it interacts with surrounding objects. Ever tried putting a glove onto a sweaty hand? It’s far more of a struggle than with a dry one, and it feels different.

In a way, the hand’s tactile neurons play a game of Morse code. Through different frequencies of electrical beeps, they transfer information about an object’s size, texture, weight, and other properties, while also asking the brain for feedback to better control the object.
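As a rough illustration of that rate coding idea, here is a toy Python model that turns pressure into spike frequency. The maximum firing rate and the Poisson-style spiking are simplifying assumptions, not measured values:

```python
import numpy as np

# Toy rate-coding model: as in the Morse code analogy, a tactile neuron
# conveys stimulus intensity through the frequency of its spikes.

def spike_train(pressure, duration=1.0, max_rate=100.0, dt=0.001):
    """Return spike times (in seconds) for a constant pressure in [0, 1]."""
    rate = pressure * max_rate                     # stronger press -> faster firing
    t = np.arange(0, duration, dt)
    spikes = np.random.random(len(t)) < rate * dt  # Poisson-like spiking
    return t[spikes]

light_touch = spike_train(0.2)
firm_press = spike_train(0.9)
print(f"light touch: {len(light_touch)} spikes, firm press: {len(firm_press)} spikes")
```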

Biology to Machine
Reworking all of our hands’ greatest features into machines is absolutely daunting. But robots have a leg up—they’re not restricted to biological hardware. Earlier this year, for example, a team from Columbia engineered a “feeling” robotic finger using overlapping light emitters and sensors in a way loosely similar to receptive fields. Distortions in the light were then analyzed with deep learning and translated into contact location and force.

Although a radical departure from our own electrical-based system, the Columbia team’s attempt was clearly based on human biology. They’re not alone. “Substantial progress is being made in the creation of soft, stretchable electronic skins,” said Sundaram, many of which can sense forces or pressure, although they’re currently still limited.

What’s promising, however, is the “exciting progress in using visual data,” said Sundaram. Computer vision has gained enormously from ubiquitous cameras and large datasets, making it possible to train powerful but data-hungry algorithms such as deep convolutional neural networks (CNNs).

By piggybacking on their success, we can essentially add “eyes” to robotic hands, a superpower we humans lack. Even better, CNNs and other classes of algorithms can be readily adapted for processing tactile data. Together, a robotic hand could use its eyes to scan an object, plan its movements for a grasp, and use touch for feedback to adjust its grip. Maybe we’ll finally have a robot that easily rescues the phone sadly dropped into a composting toilet. Or something much grander to benefit humanity.
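To give a sense of how directly tactile data can slot into this machinery, here is a minimal PyTorch sketch of a CNN reading a tactile “image,” i.e., a grid of pressure readings. The 16x16 sensor array and the three contact classes are hypothetical:

```python
import torch
import torch.nn as nn

# Sketch: the same convolutional machinery used on camera images, pointed
# at a tactile "image" (a grid of pressure readings). The 16x16 array and
# the three contact types (edge, corner, flat) are invented for illustration.

tactile_net = nn.Sequential(
    nn.Conv2d(1, 8, kernel_size=3, padding=1),  # one pressure channel in
    nn.ReLU(),
    nn.MaxPool2d(2),                            # 16x16 -> 8x8
    nn.Conv2d(8, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.MaxPool2d(2),                            # 8x8 -> 4x4
    nn.Flatten(),
    nn.Linear(16 * 4 * 4, 3),                   # scores for 3 contact types
)

pressure_map = torch.rand(1, 1, 16, 16)         # one fake tactile frame
logits = tactile_net(pressure_map)
print(logits.shape)                             # torch.Size([1, 3])
```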

That said, relying too heavily on vision could also be a downfall. Take a robot that scans a wide area of rubble for signs of life during a disaster response. If touch relies on sight, then it would have to keep a continuous line-of-sight in a complex and dynamic setting—something computer vision doesn’t yet handle well.

A Neuromorphic Way Forward
Too Debbie Downer? I got your back! It’s hard to overstate the challenges, but what’s clear is that emerging machine learning tools can tackle the data processing problem. For vision, it’s distilling complex images into “actionable control policies,” said Sundaram. For touch, it’s easy to imagine the same. Couple the two together, and that’s a robotic super-hand in the making.

Going forward, argues Sundaram, we need to closely adhere to how the hand and brain process touch. Hijacking our biological “touch machinery” has already proved useful. In 2019, one team used a nerve-machine interface for amputees to control a robotic arm—the DEKA LUKE arm—and sense what the limb and attached hand were feeling. Pressure on the LUKE arm and hand activated an implanted neural interface, which zapped remaining nerves in a way that the brain processes as touch. When the AI analyzed pressure data in a way similar to biological tactile neurons, the person was better able to identify different objects with their eyes closed.

“Neuromorphic tactile hardware (and software) advances will strongly influence the future of bionic prostheses—a compelling application of robotic hands,” said Sundaram, adding that the next step is to increase the density of sensors.

Two additional themes made the list for progressing toward a cyborg future. One is longevity: sensors on a robot need to reliably produce large quantities of high-quality data over time—something that’s seemingly mundane, but is a practical limitation.

The other is going all-in-one. Rather than just a pressure sensor, we need something that captures the myriad of touch sensations. From a feather-light touch to a heavy punch, from vibrations to temperature, a tree-like architecture similar to our hands’ would help organize, integrate, and otherwise process the data collected from those sensors.

Just a decade ago, mind-controlled robotics were considered a blue sky, stretch-goal neurotechnological fantasy. We now have a chance to “close the loop,” from thought to movement to touch and back to thought, and make some badass robots along the way.

Image Credit: PublicDomainPictures from Pixabay Continue reading

Posted in Human Robots

#437477 If a Robot Is Conscious, Is It OK to ...

In the Star Trek: The Next Generation episode “The Measure of a Man,” Data, an android crew member of the Enterprise, is to be dismantled for research purposes unless Captain Picard can argue that Data deserves the same rights as a human being. Naturally the question arises: What is the basis upon which something has rights? What gives an entity moral standing?

The philosopher Peter Singer argues that creatures that can feel pain or suffer have a claim to moral standing. He argues that nonhuman animals have moral standing, since they can feel pain and suffer. Limiting moral standing to people would be a form of speciesism, something akin to racism and sexism.

Without endorsing Singer’s line of reasoning, we might wonder if it can be extended further to an android robot like Data. It would require that Data can either feel pain or suffer. And how you answer that depends on how you understand consciousness and intelligence.

As real artificial intelligence technology advances toward Hollywood’s imagined versions, the question of moral standing grows more important. If AIs have moral standing, philosophers like me reason, it could follow that they have a right to life. That means you cannot simply dismantle them, and might also mean that people shouldn’t interfere with their pursuing their goals.

Two Flavors of Intelligence and a Test
IBM’s Deep Blue chess machine famously beat grandmaster Garry Kasparov. But it could not do anything else. This computer had what’s called domain-specific intelligence.

On the other hand, there’s the kind of intelligence that allows for the ability to do a variety of things well. It is called domain-general intelligence. It’s what lets people cook, ski, and raise children—tasks that are related, but also very different.

Artificial general intelligence, AGI, is the term for machines that have domain-general intelligence. Arguably no machine has yet demonstrated that kind of intelligence. This summer, a startup called OpenAI released a new version of its Generative Pre-trained Transformer language model. GPT-3 is a natural language processing system, trained to read and write text in a way that people can easily understand.

It drew immediate notice, not just because of its impressive ability to mimic stylistic flourishes and put together plausible content, but also because of how far it had come from a previous version. Despite this impressive performance, GPT-3 doesn’t actually know anything beyond how to string words together in various ways. AGI remains quite far off.

Named after pioneering AI researcher Alan Turing, the Turing test helps determine when an AI is intelligent. Can a person conversing with a hidden AI tell whether it’s an AI or a human being? If they can’t, then for all practical purposes, the AI is intelligent. But this test says nothing about whether the AI might be conscious.

Two Kinds of Consciousness
There are two parts to consciousness. First, there’s the what-it’s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It’s about how you experience a phenomenon, like smelling a rose or feeling pain.

In contrast, there’s also access consciousness. That’s the ability to report, reason, behave, and act in a coordinated and responsive manner to stimuli based on goals. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.

Blindsight nicely illustrates the difference between the two types of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything in the left side of their visual field. But if asked to pick up a pen from an array of objects in the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted—an example of access consciousness without phenomenal consciousness.

Data is an android. How do these distinctions play out with respect to him?

The Data Dilemma
The android Data demonstrates that he is self-aware in that he can monitor whether or not, for example, he is optimally charged or there is internal damage to his robotic arm.

Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard and reason with him about the best path to take.

He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight with enemies on alien planets, and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.

However, Data most likely lacks phenomenal consciousness—he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He’s self-aware and has access consciousness—can grab the pen—but across all his senses he lacks phenomenal consciousness.

Now, if Data doesn’t feel pain, at least one of the reasons Singer offers for giving a creature moral standing is not fulfilled. But Data might fulfill the other condition of being able to suffer, even without feeling pain. Suffering might not require phenomenal consciousness the way pain essentially does.

For example, what if suffering were also defined as the idea of being thwarted from pursuing a just cause without causing harm to others? Suppose Data’s goal is to save his crewmate, but he can’t reach her because of damage to one of his limbs. Data’s reduction in functioning that keeps him from saving his crewmate is a kind of nonphenomenal suffering. He would have preferred to save the crewmate, and would be better off if he did.

In the episode, the question ends up resting not on whether Data is self-aware—that is not in doubt. Nor is it in question whether he is intelligent—he easily demonstrates that he is in the general sense. What is unclear is whether he is phenomenally conscious. Data is not dismantled because, in the end, his human judges cannot agree on the significance of consciousness for moral standing.

Should an AI Get Moral Standing?
Data is kind; he acts to support the well-being of his crewmates and those he encounters on alien planets. He obeys orders from people and appears unlikely to harm them, and he seems to protect his own existence. For these reasons he appears peaceful and easier to accept into the realm of things that have moral standing.

But what about Skynet in the Terminator movies? Or the worries recently expressed by Elon Musk about AI being more dangerous than nukes, and by Stephen Hawking on AI ending humankind?

Human beings don’t lose their claim to moral standing just because they act against the interests of another person. In the same way, you can’t automatically say that just because an AI acts against the interests of humanity or another AI it doesn’t have moral standing. You might be justified in fighting back against an AI like Skynet, but that does not take away its moral standing. If moral standing is given in virtue of the capacity to nonphenomenally suffer, then Skynet and Data both get it even if only Data wants to help human beings.

There are no artificial general intelligence machines yet. But now is the time to consider what it would take to grant them moral standing. How humanity chooses to answer the question of moral standing for nonbiological creatures will have big implications for how we deal with future AIs—whether kind and helpful like Data, or set on destruction, like Skynet.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Ico Maker / Shutterstock.com Continue reading

Posted in Human Robots

#437471 How Giving Robots a Hybrid, Human-Like ...

Squeezing a lot of computing power into robots without using up too much space or energy is a constant battle for their designers. But a new approach that mimics the structure of the human brain could provide a workaround.

The capabilities of most of today’s mobile robots are fairly rudimentary, but giving them the smarts to do their jobs is still a serious challenge. Controlling a body in a dynamic environment takes a surprising amount of processing power, which requires both real estate for chips and considerable amounts of energy to power them.

As robots get more complex and capable, those demands are only going to increase. Today’s most powerful AI systems run in massive data centers across far more chips than can realistically fit inside a machine on the move. And the slow death of Moore’s Law suggests we can’t rely on conventional processors getting significantly more efficient or compact anytime soon.

That prompted a team from the University of Southern California to resurrect an idea from more than 40 years ago: mimicking the human brain’s division of labor between two complementary structures. While the cerebrum is responsible for higher cognitive functions like vision, hearing, and thinking, the cerebellum integrates sensory data and governs movement, balance, and posture.

When the idea was first proposed, the technology didn’t exist to make it a reality. But in a paper recently published in Science Robotics, the researchers describe a hybrid system that combines analog circuits that control motion with digital circuits that govern perception and decision-making in an inverted pendulum robot.

“Through this cooperation of the cerebrum and the cerebellum, the robot can conduct multiple tasks simultaneously with a much shorter latency and lower power consumption,” write the researchers.

The type of robot the researchers were experimenting with looks essentially like a pole balancing on a pair of wheels. Such robots have a broad range of applications, from hoverboards to warehouse logistics—Boston Dynamics’ recently unveiled Handle robot operates on the same principles. Keeping them stable is notoriously tough, but the new approach managed to significantly outperform all-digital control approaches by radically improving the speed and efficiency of the computations.

Key to bringing the idea alive was the recent emergence of memristors—electrical components whose resistance depends on their previous inputs, which allows them to combine computing and memory in one place, much as biological neurons do.

The researchers used memristors to build an analog circuit that runs an algorithm responsible for integrating data from the robot’s accelerometer and gyroscope, which is crucial for detecting the angle and velocity of its body, and another that controls its motion. One key advantage of this setup is that the signals from the sensors are analog, so it does away with the need for extra circuitry to convert them into digital signals, saving both space and power.
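The authors’ circuit is analog, but the sensor-fusion task it performs has a well-known digital counterpart: the complementary filter, which blends the gyroscope’s smooth-but-drifting rate signal with the accelerometer’s noisy-but-drift-free gravity reading. Here is a Python sketch of that idea, with an invented gain and timestep; it is not the paper’s actual implementation:

```python
import math

# Complementary filter: a standard digital way to fuse accelerometer and
# gyroscope data into a tilt-angle estimate. The gain (alpha) and loop
# period (dt) below are invented for illustration.

def complementary_filter(angle, gyro_rate, accel_angle, dt=0.01, alpha=0.98):
    """Blend the integrated gyro rate (smooth, but drifts over time) with
    the accelerometer angle (noisy, but drift-free)."""
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

angle = 0.0
# Fake sensor samples: (gyro rate in rad/s, accelerometer x and z readings in g).
for gyro_rate, (ax, az) in [(0.10, (0.05, 0.99)), (0.12, (0.07, 0.99))]:
    accel_angle = math.atan2(ax, az)  # tilt implied by the gravity vector
    angle = complementary_filter(angle, gyro_rate, accel_angle)

print(f"estimated tilt: {angle:.4f} rad")
```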

More importantly, though, the analog system is an order of magnitude faster and more energy-efficient than a standard all-digital system, the authors report. This not only slashes the power requirements, it also cuts the processing loop from 3,000 microseconds to just 6. That significantly improves the robot’s stability: it takes just one second to settle into a steady state, compared to more than three seconds on the digital-only platform.

For the moment this is just a proof of concept. The robot the researchers have built is small and rudimentary, and the algorithms being run on the analog circuit are fairly basic. But the principle is a promising one, and there is currently a huge amount of R&D going into neuromorphic and memristor-based analog computing hardware.

As often turns out to be the case, it seems like we can’t go too far wrong by mimicking the best model of computation we have found so far: our own brains.

Image Credit: Photos Hobby / Unsplash Continue reading

Posted in Human Robots

#437466 How Future AI Could Recognize a Kangaroo ...

AI is continuously taking on new challenges, from detecting deepfakes (which, incidentally, are also made using AI) to winning at poker to giving synthetic biology experiments a boost. These impressive feats result partly from the huge datasets the systems are trained on. That training is costly and time-consuming, and it yields AIs that can really only do one thing well.

For example, to train an AI to differentiate between a picture of a dog and one of a cat, it’s fed thousands—if not millions—of labeled images of dogs and cats. A child, on the other hand, can see a dog or cat just once or twice and remember which is which. How can we make AIs learn more like children do?

A team at the University of Waterloo in Ontario has an answer: change the way AIs are trained.

Here’s the thing about the datasets normally used to train AI—besides being huge, they’re highly specific. A picture of a dog can only be a picture of a dog, right? But what about a really small dog with a long-ish tail? That sort of dog, while still being a dog, looks more like a cat than, say, a fully grown golden retriever does.

It’s this concept that the Waterloo team’s methodology is based on. They described their work in a paper published on the pre-print (or non-peer-reviewed) server arXiv last month. Teaching an AI system to identify a new class of objects using just one example is known as “one-shot learning.” But the team takes it a step further, focusing on “less than one”-shot learning, or LO-shot learning for short.

LO-shot learning consists of a system learning to classify various categories based on a number of examples that’s smaller than the number of categories. That’s not the most straightforward concept to wrap your head around, so let’s go back to the dogs and cats example. Say you want to teach an AI to identify dogs, cats, and kangaroos. How could that possibly be done without several clear examples of each animal?

The key, the Waterloo team says, is in what they call soft labels. Unlike hard labels, which label a data point as belonging to one specific class, soft labels tease out the relationship or degree of similarity between that data point and multiple classes. In the case of an AI trained on only dogs and cats, a third class of objects, say, kangaroos, might be described as 60 percent like a dog and 40 percent like a cat (I know—kangaroos probably aren’t the best animal to have thrown in as a third category).

“Soft labels can be used to represent training sets using fewer prototypes than there are classes, achieving large increases in sample efficiency over regular (hard-label) prototypes,” the paper says. Translation? Tell an AI a kangaroo is some fraction cat and some fraction dog—both of which it’s seen and knows well—and it’ll be able to identify a kangaroo without ever having seen one.
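As a data structure, the difference is simple. Here is a Python sketch, where encoding a soft label as a dict of class probabilities is just one reasonable representation (the 60/40 split comes from the kangaroo example above):

```python
# Hard label: the data point belongs to exactly one class.
hard_label = "dog"

# Soft label: a distribution over classes, expressing degrees of similarity.
# The 60/40 kangaroo split is the example from the text; the dict encoding
# is an illustrative choice, not the paper's required format.
soft_label = {"dog": 0.6, "cat": 0.4}

assert abs(sum(soft_label.values()) - 1.0) < 1e-9  # weights act like probabilities
```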

If the soft labels are nuanced enough, you could theoretically teach an AI to identify a large number of categories based on a much smaller number of training examples.

The paper’s authors use a simple machine learning algorithm called k-nearest neighbors (kNN) to explore this idea in more depth. The algorithm operates under the assumption that similar things are most likely to exist near each other; if you go to a dog park, there will be lots of dogs but no cats or kangaroos. Go to the Australian grasslands and there’ll be kangaroos but no cats or dogs. And so on.

To train a kNN algorithm to differentiate between categories, you choose specific features to represent each category (e.g., for animals you could use weight and size as features). With one feature on the x-axis and the other on the y-axis, the algorithm creates a graph where data points that are similar to each other are clustered near each other. A boundary line divides the categories, and it’s pretty straightforward for the algorithm to discern which side of the line new data points should fall on.
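Here is what that looks like as a bare-bones kNN classifier in Python; the weight-and-size data points for dogs and cats are invented:

```python
import numpy as np

# Minimal kNN in the two-feature setting described above:
# x-axis = weight (kg), y-axis = size (cm). All data points are made up.

train_X = np.array([[50.0, 60.0], [55.0, 65.0],   # dogs
                    [4.0, 25.0],  [5.0, 23.0]])   # cats
train_y = np.array(["dog", "dog", "cat", "cat"])

def knn_predict(x, k=3):
    dists = np.linalg.norm(train_X - x, axis=1)   # distance to every training point
    nearest = train_y[np.argsort(dists)[:k]]      # labels of the k closest points
    values, counts = np.unique(nearest, return_counts=True)
    return values[np.argmax(counts)]              # majority vote

print(knn_predict(np.array([45.0, 55.0])))        # lands on the dog side -> "dog"
```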

The Waterloo team kept it simple and used plots of color on a 2D graph. Using the colors and their locations on the graphs, the team created synthetic datasets and accompanying soft labels. One of the simpler graphs is pictured below, along with soft labels in the form of pie charts.

Image Credit: Ilia Sucholutsky & Matthias Schonlau
When the team had the algorithm plot the boundary lines between the different colors based on these soft labels, it was able to split the plot into more color classes than the number of data points it was given in the soft labels.
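A tiny numerical sketch captures the punchline: two soft-labeled prototypes yielding three decision regions. The coordinates and label weights below are invented, and the distance-weighted soft-label kNN here mirrors the spirit of the paper rather than reproducing its exact setup:

```python
import numpy as np

# Two prototypes, soft-labeled over THREE classes. A distance-weighted
# soft-label kNN then carves the space into three regions, one of which
# ("kangaroo") has no dedicated training point at all.

prototypes = np.array([[0.0, 0.0], [1.0, 0.0]])
soft_labels = np.array([
    [0.55, 0.00, 0.45],   # prototype 1: mostly dog, partly kangaroo
    [0.00, 0.55, 0.45],   # prototype 2: mostly cat, partly kangaroo
])
classes = ["dog", "cat", "kangaroo"]

def predict(point):
    dists = np.linalg.norm(prototypes - point, axis=1)
    weights = 1.0 / (dists + 1e-9)    # nearer prototypes count more
    scores = weights @ soft_labels    # blend the label distributions
    return classes[int(np.argmax(scores))]

# Three decision regions emerge from only two training points:
for x in (0.05, 0.5, 0.95):
    print(x, predict(np.array([x, 0.0])))
# -> dog, kangaroo, cat
```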

While the results are encouraging, the team acknowledges that they’re just the first step, and there’s much more exploration of this concept yet to be done. The kNN algorithm is one of the least complex models out there; what might happen when LO-shot learning is applied to a far more complex algorithm? Also, to apply it, you still need to distill a larger dataset down into soft labels.

One idea the team is already working on is having other algorithms generate the soft labels for the algorithm that’s going to be trained using LO-shot; manually designing soft labels won’t always be as easy as splitting up some pie charts into different colors.

LO-shot’s potential for reducing the amount of training data needed to yield working AI systems is promising. Besides reducing the cost and time required to train new models, the method could also make AI more accessible to industries, companies, or individuals who don’t have access to large datasets—an important step toward the democratization of AI.

Image Credit: pen_ash from Pixabay Continue reading

Posted in Human Robots