Tag Archives: that

#437548 Curved origami provides new range of ...

New research that employs curved origami structures has dramatic implications for the development of robotics, providing tunable flexibility—the ability to adjust stiffness based on function—that has historically been difficult to achieve with simple designs. Continue reading

Posted in Human Robots

#437543 This Is How We’ll Engineer Artificial ...

Take a Jeopardy! guess: this body part was once referred to as the “consummation of all perfection as an instrument.”

Answer: “What is the human hand?”

Our hands are insanely complex feats of evolutionary engineering. Densely packed sensors provide intricate and ultra-sensitive feelings of touch. Dozens of joints synergize to give us remarkable dexterity. A “sixth sense” awareness of where our hands are in space connects them to the mind, making it possible to open a door, pick up a mug, and pour coffee in total darkness based solely on what they feel.

So why can’t robots do the same?

In a new article in Science, Dr. Subramanian Sundaram of Boston University and Harvard argues that it’s high time to rethink robotic touch. Scientists have long dreamed of artificially engineering robotic hands with the same dexterity and feedback that we have. Now, after decades, we’re at the precipice of a breakthrough thanks to two major advances. One, we better understand how touch works in humans. Two, we have the mega computational powerhouse called machine learning to recapitulate biology in silicon.

Robotic hands with a sense of touch—and the AI brain to match it—could overhaul our idea of robots. Rather than charming, if somewhat clumsy, novelties, robots equipped with human-like hands would be far more capable of routine tasks—making food, folding laundry—and specialized missions like surgery or rescue. But machines aren’t the only ones to gain. For humans, robotic prosthetic hands equipped with accurate, sensitive, and high-resolution artificial touch are the next giant breakthrough in seamlessly linking a biological brain to a mechanical hand.

Here’s what Sundaram laid out to get us to that future.

How Does Touch Work, Anyway?
Let me start with some bad news: reverse engineering the human hand is really hard. It’s jam-packed with over 17,000 sensors tuned to mechanical forces alone, not to mention sensors for temperature and pain. These force “receptors” rely on physical distortions—bending, stretching, curling—to signal to the brain.

The good news? We now have a far clearer picture of how biological touch works. Imagine a coin pressed into your palm. The sensors embedded in the skin, called mechanoreceptors, capture that pressure and “translate” it into electrical signals. These signals pulse through the nerves in your hand to the spine, and eventually make their way to the brain, where they get interpreted as “touch.”

At least, that’s the simple version, but it’s too vague to be of much use for recapitulating touch. To get there, we need to zoom in.

The cells on your hand that collect touch signals, called tactile “first order” neurons (enter Star Wars joke), are like upside-down trees. Intricate branches extend from their bodies, buried deep in the skin, to a vast area of the hand. Each neuron has its own little domain, called a “receptive field,” although some fields overlap. Like governors, these neurons manage a semi-dedicated region, so that any signal they transfer to the higher-ups—spinal cord and brain—is actually integrated from multiple sensors across a relatively large area.
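
To make the “governor” idea concrete, here’s a minimal Python sketch of receptive-field pooling over a toy pressure grid. The grid size, the Gaussian weighting, and the neuron centers are illustrative assumptions, not a model of real skin.

```python
import numpy as np

# Toy 16x16 grid of skin pressure sensors (arbitrary units).
pressure = np.zeros((16, 16))
pressure[5:9, 6:10] = 1.0  # a "coin" pressed into one patch of skin

def neuron_response(grid, center, radius=3.0):
    """One 'first order' neuron: pool pressure over its receptive field,
    weighting sensors near the center more heavily (Gaussian fall-off)."""
    ys, xs = np.mgrid[0:grid.shape[0], 0:grid.shape[1]]
    dist2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    weights = np.exp(-dist2 / (2 * radius ** 2))
    return float(np.sum(grid * weights))

# Receptive fields overlap: neighboring neurons share some of the same sensors,
# so a single press shows up, to different degrees, in several neurons at once.
for center in [(4, 4), (6, 8), (10, 10)]:
    print(center, round(neuron_response(pressure, center), 2))
```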

It gets more intricate. The skin itself is a living entity that can regulate its own mechanical senses through hydration. Sweat, for example, softens the skin, which changes how it interacts with surrounding objects. Ever tried putting a glove onto a sweaty hand? It’s far more of a struggle than with a dry one, and it feels different.

In a way, the hand’s tactile neurons play a game of Morse code. Through different frequencies of electrical beeps, they’re able to transfer information about an object’s size, texture, weight, and other properties, while also asking the brain for feedback to better control the object.
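
As a toy illustration of that “Morse code” (a rate code, where a stronger stimulus produces more spikes per second), here’s a short sketch. The maximum firing rate and the Poisson spike model are assumptions chosen for simplicity, not the hand’s actual encoding.

```python
import numpy as np

def pressure_to_spike_train(pressure, duration_s=1.0, max_rate_hz=100, seed=0):
    """Crude rate code: map normalized pressure (0..1) to a firing rate,
    then draw spike times from a Poisson process at that rate."""
    rate = max_rate_hz * float(np.clip(pressure, 0.0, 1.0))
    rng = np.random.default_rng(seed)
    n_spikes = rng.poisson(rate * duration_s)
    return np.sort(rng.uniform(0.0, duration_s, n_spikes))

print(len(pressure_to_spike_train(0.1)))  # light touch: few spikes
print(len(pressure_to_spike_train(0.9)))  # firm press: many spikes
```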

Biology to Machine
Reworking all of our hands’ greatest features into machines is absolutely daunting. But robots have a leg up—they’re not restricted to biological hardware. Earlier this year, for example, a team from Columbia engineered a “feeling” robotic finger using overlapping light emitters and sensors in a way loosely similar to receptive fields. Distortions in the light were then analyzed with deep learning and translated into contact location and force.
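
The underlying mapping, from raw optical readings to contact position and force, can be sketched as a small regression problem. Everything below (the 32-photodiode input, the synthetic training data, the network size) is an assumption made for illustration; the Columbia team’s actual model, sensors, and data differ.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical stand-in data: each sample is a vector of 32 light-sensor
# readings; the targets are contact (x, y) plus normal force.
rng = np.random.default_rng(0)
X = rng.uniform(size=(2000, 32))                       # fake photodiode intensities
true_map = rng.normal(size=(32, 3))
y = X @ true_map + 0.01 * rng.normal(size=(2000, 3))   # fake [x, y, force] labels

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
model.fit(X, y)
print(model.predict(X[:1]))  # predicted [x, y, force] for one touch
```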

Although a radical departure from our own electrical-based system, the Columbia team’s attempt was clearly based on human biology. They’re not alone. “Substantial progress is being made in the creation of soft, stretchable electronic skins,” said Sundaram, many of which can sense forces or pressure, although they’re currently still limited.

What’s promising, however, is the “exciting progress in using visual data,” said Sundaram. Computer vision has gained enormously from ubiquitous cameras and large datasets, making it possible to train powerful but data-hungry algorithms such as deep convolutional neural networks (CNNs).

By piggybacking on their success, we can essentially add “eyes” to robotic hands, a superpower we humans can only imagine. Even better, CNNs and other classes of algorithms can be readily adapted for processing tactile data. Together, a robotic hand could use its eyes to scan an object, plan its movements for grasp, and use touch for feedback to adjust its grip. Maybe we’ll finally have a robot that easily rescues the phone sadly dropped into a composting toilet. Or something much grander to benefit humanity.
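
One way to picture the “touch for feedback” half of that loop is a grip controller that keeps squeezing until the tactile sensors stop reporting slip. The function names, thresholds, and force values below are hypothetical stand-ins, not any real robot’s API.

```python
def adjust_grip(read_slip, set_force, f_init=1.0, f_max=20.0, step=0.5):
    """Hypothetical feedback loop: tighten the grip until the tactile
    sensors no longer report slip, then hold that force (in newtons)."""
    force = f_init
    while force < f_max:
        set_force(force)
        if not read_slip():      # touch feedback says the object is stable
            return force
        force += step            # still slipping: squeeze a bit harder
    return f_max                 # give up at the safety limit

# Toy stand-ins for the sensor and actuator interfaces:
state = {"force": 0.0}
grip = adjust_grip(read_slip=lambda: state["force"] < 3.0,   # slips below 3 N
                   set_force=lambda f: state.update(force=f))
print(grip)  # settles at the first force that stops the slip
```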

That said, relying too heavily on vision could also be a downfall. Take a robot that scans a wide area of rubble for signs of life during a disaster response. If touch relies on sight, then it would have to keep a continuous line of sight in a complex and dynamic setting—something computer vision doesn’t yet handle well.

A Neuromorphic Way Forward
Too Debbie Downer? I got your back! It’s hard to overstate the challenges, but what’s clear is that emerging machine learning tools can tackle the data processing challenge. For vision, it’s distilling complex images into “actionable control policies,” said Sundaram. For touch, it’s easy to imagine the same. Couple the two together, and that’s a robotic super-hand in the making.

Going forward, argues Sundaram, we need to closely adhere to how the hand and brain process touch. Hijacking our biological “touch machinery” has already proved useful. In 2019, one team used a nerve-machine interface for amputees to control a robotic arm—the DEKA LUKE arm—and sense what the limb and attached hand were feeling. Pressure on the LUKE arm and hand activated an implanted neural interface, which zapped remaining nerves in a way that the brain processes as touch. When the AI analyzed pressure data in a way similar to biological tactile neurons, the person was better able to identify different objects with their eyes closed.

“Neuromorphic tactile hardware (and software) advances will strongly influence the future of bionic prostheses—a compelling application of robotic hands,” said Sundaram, adding that the next step is to increase the density of sensors.

Two additional themes made the list for progressing toward a cyborg future. One is longevity: sensors on a robot need to be able to reliably produce large quantities of high-quality data—something that sounds mundane but is a real practical limitation.

The other is going all-in-one. Rather than just a pressure sensor, we need something that captures the full range of touch sensations, from a feather-light brush to a heavy punch, from vibrations to temperature. A tree-like architecture similar to our hands’ would help organize, integrate, and otherwise process the data collected from those sensors.
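
A minimal sketch of that tree-like idea: leaf nodes are individual sensors (pressure, vibration, temperature) and inner nodes merge their children’s readings into one summary. The node names, units, and values are made up for illustration.

```python
class SensorNode:
    """One branch of a tree-like sensing hierarchy: leaves read raw values,
    inner nodes aggregate whatever their children report."""
    def __init__(self, name, read=None, children=()):
        self.name, self.read, self.children = name, read, list(children)

    def summarize(self):
        if self.read is not None:           # leaf: a single physical sensor
            return {self.name: self.read()}
        summary = {}                        # inner node: merge child summaries
        for child in self.children:
            summary.update(child.summarize())
        return summary

fingertip = SensorNode("fingertip", children=[
    SensorNode("pressure_kPa", read=lambda: 12.5),
    SensorNode("vibration_hz", read=lambda: 240.0),
    SensorNode("temperature_C", read=lambda: 31.0),
])
print(SensorNode("hand", children=[fingertip]).summarize())
```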

Just a decade ago, mind-controlled robotics were considered a blue sky, stretch-goal neurotechnological fantasy. We now have a chance to “close the loop,” from thought to movement to touch and back to thought, and make some badass robots along the way.

Image Credit: PublicDomainPictures from Pixabay Continue reading

Posted in Human Robots

#437535 Unravelling the secrets of spider limb ...

Spider webs are engineering marvels constructed by eight-legged experts with 400 million years of accumulated know-how. Much can be learned from the building of the spider's gossamer net and the operation of its sticky trap. Amazingly, garden cross spiders can regenerate lost legs and use them immediately to build a web that is pitch-perfect, even though the new limb is much shorter than the one it replaced. This phenomenon has allowed scientists to probe the rules the animal uses to build its web and how it uses its legs as measuring sticks. Continue reading

Posted in Human Robots

#437529 Magnetic FreeBOT balls make giant leap ...

A unique type of modular self-reconfiguring robotic system has been unveiled. The term is a mouthful, but it basically refers to a robotic system that can construct itself out of modules that connect to one another to accomplish a given task. Continue reading

Posted in Human Robots

#437504 A New and Improved Burger Robot’s on ...

No doubt about it, the pandemic has changed the way we eat. Never before have so many people who hated cooking been forced to learn how to prepare a basic meal for themselves. With sit-down restaurants limiting their capacity or shutting down altogether, consumption of fast food and fast-casual food has skyrocketed. Don’t feel like slaving over a hot stove? Just hit the drive-through and grab a sandwich and some fries (the health implications of increased fast food consumption are another matter…).

Given our sudden immense need for paper-wrapped burgers and cardboard cartons of fries, fast food workers are now counted as essential. But what about their safety, both from a virus standpoint and from the usual risks of working in a busy kitchen (like getting burned by the stove or the hot oil from the fryer, cut by a slicer, etc.)? And how many orders of burgers and fries can humans possibly churn out in an hour?

Enter the robot. Three and a half years ago, a burger-flipping robot aptly named Flippy, built by Miso Robotics, made its debut at CaliBurger, a fast food restaurant in California. Now Flippy is on the market for anyone who wishes to purchase their own, with a price tag of $30,000 and a range of new capabilities—this burger bot has progressed far beyond just flipping burgers.

Flippy’s first iteration was already pretty impressive. It used machine learning software to locate and identify objects in front of it (rather than needing to have objects lined up in specific spots), and was able to learn from experience to improve its accuracy. Sensors on its grill-facing side took in thermal and 3D data to gauge the cooking process for multiple patties at a time, and cameras allowed the robot to ‘see’ its surroundings.

A system that digitally sent tickets to the kitchen from the restaurant’s front counter kept Flippy on top of how many burgers it should be cooking at any given time. Its key tasks were pulling raw patties from a stack and placing them on the grill, tracking each burger’s cook time and temperature, and transferring cooked burgers to a plate.
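
As a rough sketch of how such a ticket-and-timer pipeline could be organized, here’s a minimal version in Python. This is not Miso Robotics’ actual software; the cook time and order names are placeholders.

```python
import time
from collections import deque

PATTY_COOK_S = 4 * 60   # assumed target cook time, not Miso's real number

tickets = deque()       # orders digitally sent from the front counter
on_grill = {}           # order id -> time its patty hit the grill

def place_next_patty():
    """Pull the next ticket and start that patty's cook timer."""
    if tickets:
        order = tickets.popleft()
        on_grill[order] = time.time()

def patties_ready():
    """Return orders whose cook time has elapsed and should be plated."""
    now = time.time()
    return [order for order, start in on_grill.items()
            if now - start >= PATTY_COOK_S]

tickets.extend(["burger-101", "burger-102"])
place_next_patty()
print(patties_ready())  # empty until burger-101 has cooked long enough
```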

The new and improved Flippy can do all this and more. It can cook 19 different foods, including chicken wings, onion rings, french fries, and even the Impossible Burger (which, as you may know, isn’t actually made of meat, which makes it a little trickier to grill to perfection).

Flippy’s handiwork. Image Credit: Miso Robotics
And instead of its body sitting on a cart on wheels (which took up a lot of space and meant the robot’s arm could get in the way of human employees), it’s now attached to a rail along the stove’s hood, and can move along the rail to access both the grill and the fryer (provided they’re next to each other, which in many fast food restaurants they are). In fact, Flippy has a new acronym attached to its name: ROAR, which stands for Robot on a Rail.

Flippy ROAR in action, artist rendering. Image Credit: Miso Robotics
Laser-equipped sensors make it safer for human employees to work near Flippy. The bot can automatically switch between different tools, such as a spatula for flipping patties and tongs for gripping the handle of a fryer basket. Its AI software will enable it to learn new skills over time.

Flippy’s interface. Image Credit: Miso Robotics
The first big restaurant chain to go all-in on Flippy was White Castle, which in July announced plans to pilot Flippy ROAR before year’s end. And just last month, Miso made the bot commercially available. The current cost is $30,000 (plus a monthly fee of $1,500 for use of the software), but the company hopes to bring the price down to $20,000 within the next year.

According to Business Insider, demand for the fast food robot is through the roof, probably given a significant boost by the pandemic—thanks, Covid-19. The pace of automation has picked up across multiple sectors, and will likely continue to accelerate as companies look to insure themselves against additional losses.

So for the immediate future, it seems that no matter what happens, we don’t have to worry about the supply of burgers, fries, onion rings, chicken wings, and the like running out.

Now if only Flippy had a cousin—perhaps named Leafy—who could chop vegetables and greens and put together fresh-made salads…

Maybe that can be Miso Robotics’ next project.

Image Credit: Miso Robotics Continue reading

Posted in Human Robots