Tag Archives: upgrade

#435621 ANYbotics Introduces Sleek New ANYmal C ...

Quadrupedal robots are making significant advances lately, and just in the past few months we’ve seen Boston Dynamics’ Spot hauling a truck, IIT’s HyQReal pulling a plane, MIT’s MiniCheetah doing backflips, Unitree Robotics’ Laikago towing a van, and Ghost Robotics’ Vision 60 exploring a mine. Robot makers are betting that their four-legged machines will prove useful in a variety of applications in construction, security, delivery, and even at home.

ANYbotics has been working on such applications for years, testing out their ANYmal robot in places where humans typically don’t want to go (like offshore platforms) as well as places where humans really don’t want to go (like sewers), and they have a better idea than most companies what can make quadruped robots successful.

This week, ANYbotics is announcing a completely new quadruped platform, ANYmal C, a major upgrade from the really quite research-y ANYmal B. The new quadruped has been optimized for ruggedness and reliability in industrial environments, with a streamlined body painted a color that lets you know it means business.

ANYmal C’s physical specs are pretty impressive for a production quadruped. It can move at 1 meter per second, manage 20-degree slopes and 45-degree stairs, cross 25-centimeter gaps, and squeeze through passages just 60 centimeters wide. It’s packed with cameras and 3D sensors, including a lidar for 3D mapping and simultaneous localization and mapping (SLAM). All these sensors (along with the vast volume of gait research that’s been done with ANYmal) make this one of the most reliably autonomous quadrupeds out there, with real-time motion planning and obstacle avoidance.

Image: ANYbotics

ANYmal can autonomously attach itself to a cone-shaped docking station to recharge.

ANYmal C is also one of the most rugged legged robots in existence. The 50-kilogram robot is IP67 rated, meaning that it’s completely impervious to dust and can withstand being submerged in a meter of water for an hour. If it’s submerged for longer than that, you’re absolutely doing something wrong. The robot will run for over 2 hours on battery power, and if that’s not enough endurance, don’t worry, because ANYmal can autonomously impale itself on a weird cone-shaped docking station to recharge.

Photo: ANYbotics

ANYmal C’s sensor payload includes cameras and a lidar for 3D mapping and SLAM.

As far as what ANYmal C is designed to actually do, it’s mostly remote inspection tasks where you need to move around through a relatively complex environment, but where for whatever reason you’d be better off not sending a human. ANYmal C has a sensor payload that gives it lots of visual options, like thermal imaging, and with the ability to handle a 10-kilogram payload, the robot can be adapted to many different environments.

Over the next few months, we’re hoping to see more examples of ANYmal C being deployed to do useful stuff in real-world environments, but for now, we do have a bit more detail from ANYbotics CTO Christian Gehring.

IEEE Spectrum: Can you tell us about the development process for ANYmal C?

Christian Gehring: We tested the previous generation of ANYmal (B) in a broad range of environments over the last few years and gained a lot of insights. Based on our learnings, it became clear that we would have to re-design the robot to meet the requirements of industrial customers in terms of safety, quality, reliability, and lifetime. There were different prototype stages both for the new drives and for single robot assemblies. Apart from electrical tests, we thoroughly tested the thermal control and ingress protection of various subsystems like the depth cameras and actuators.

What can ANYmal C do that the previous version of ANYmal can’t?

ANYmal C was redesigned with a focus on performance increase regarding actuation (new drives), computational power (new hexacore Intel i7 PCs), locomotion and navigation skills, and autonomy (new depth cameras). The new robot additionally features a docking system for autonomous recharging and an inspection payload as an option. The design of ANYmal C is far more integrated than its predecessor, which increases both performance and reliability.

How much of ANYmal C’s development and design was driven by your experience with commercial or industry customers?

Tests (such as the offshore installation with TenneT) and discussions with industry customers were important to get the necessary design input in terms of performance, safety, quality, reliability, and lifetime. Most customers ask for very similar inspection tasks that can be performed with our standard inspection payload and the required software packages. Some are looking for a robot that can also solve some simple manipulation tasks like pushing a button. Overall, most use cases customers have in mind are realistic and achievable, but some are really tough for the robot, like climbing 50° stairs in hot environments of 50°C.

Can you describe how much autonomy you expect ANYmal C to have in industrial or commercial operations?

ANYmal C is primarily developed to perform autonomous routine inspections in industrial environments. This autonomy especially adds value for operations that are difficult to access, as human operation is extremely costly. The robot can naturally also be operated via a remote control and we are working on long-distance remote operation as well.

Do you expect that researchers will be interested in ANYmal C? What research applications could it be useful for?

ANYmal C has been designed to also address the needs of the research community. The robot comes with two powerful hexacore Intel i7 computers and can additionally be equipped with an NVIDIA Jetson Xavier graphics card for learning-based applications. Payload interfaces enable users to easily install and test new sensors. By joining our established ANYmal Research community, researchers get access to simulation tools and software APIs, which boosts their research in various areas like control, machine learning, and navigation.

[ ANYmal C ]

Posted in Human Robots

#435601 New Double 3 Robot Makes Telepresence ...

Today, Double Robotics is announcing Double 3, the latest major upgrade to its line of consumer(ish) telepresence robots. We had a (mostly) fantastic time testing out Double 2 back in 2016. One of the things that we found out back then was that it takes a lot of practice to remotely drive the robot around. Double 3 solves this problem by leveraging the substantial advances in 3D sensing and computing that have taken place over the past few years, giving their new robot a level of intelligence that promises to make telepresence more accessible for everyone.

Double 2’s iPad has been replaced by “a fully integrated solution”—which is a fancy way of saying a dedicated 9.7-inch touchscreen and a whole bunch of other stuff. That other stuff includes an NVIDIA Jetson TX2 AI computing module, a beamforming six-microphone array, an 8-watt speaker, a pair of 13-megapixel cameras (wide angle and zoom) on a tilting mount, five ultrasonic rangefinders, and most excitingly, a pair of Intel RealSense D430 depth sensors.

It’s those new depth sensors that really make Double 3 special. The D430 modules each use a pair of stereo cameras and a pattern projector to generate 1280 x 720 depth data at ranges from 0.2 to 10 meters. The Double 3 robot uses all of this high-quality depth data to locate obstacles, but at this point, it still doesn’t drive completely autonomously. Instead, it presents the remote operator with a slick, augmented reality view of drivable areas in the form of a grid of dots. You just click where you want the robot to go, and it will skillfully take itself there while avoiding obstacles (including dynamic obstacles) and related mishaps along the way.
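
To make the click-to-drive idea concrete, here is a minimal sketch of how a depth image can be turned into a grid of drivable floor points. This is an illustrative reconstruction, not Double Robotics’ actual pipeline; the camera intrinsics, RANSAC settings, and grid size are assumed values.

```python
# Illustrative sketch only: one way to turn a depth image into a "grid of
# dots" of drivable floor, in the spirit of what Double 3 shows its operator.
# Camera intrinsics, RANSAC settings, and grid size are assumed values.
import numpy as np

def depth_to_points(depth, fx=640.0, fy=640.0, cx=640.0, cy=360.0):
    """Back-project a 1280x720 depth image (in meters) to 3D camera-frame points."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    valid = (pts[:, 2] > 0.2) & (pts[:, 2] < 10.0)   # the D430's working range
    return pts[valid]

def fit_floor_ransac(pts, iters=200, tol=0.03, seed=0):
    """Fit the dominant plane (assumed to be the floor) with a crude RANSAC."""
    rng = np.random.default_rng(seed)
    best_count, best_plane = 0, None
    for _ in range(iters):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        if np.linalg.norm(n) < 1e-9:
            continue
        n = n / np.linalg.norm(n)
        d = -n.dot(p0)
        count = int((np.abs(pts @ n + d) < tol).sum())
        if count > best_count:
            best_count, best_plane = count, (n, d)
    return best_plane

def drivable_dots(pts, plane, cell=0.25, tol=0.05):
    """Return the (x, z) centers of grid cells whose points lie on the floor."""
    n, d = plane
    on_floor = pts[np.abs(pts @ n + d) < tol]
    cells = np.unique(np.round(on_floor[:, [0, 2]] / cell), axis=0)
    return cells * cell   # these are the clickable "dots"
```

A real system would project those cell centers back into the camera image, render them as the overlay the operator clicks, and hand the chosen goal to a local planner for obstacle avoidance.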

This effectively offloads the most stressful part of telepresence—not running into stuff—from the remote user to the robot itself, which is the way it should be. That makes it that much easier to encourage people to utilize telepresence for the first time. The way the system is implemented through augmented reality is particularly impressive, I think. It looks like it’s intuitive enough for an inexperienced user without being restrictive, and is a clever way of mitigating even significant amounts of lag.

Otherwise, Double 3’s mobility system is exactly the same as the one featured on Double 2. In fact, you can stick a Double 3 head on a Double 2 body and it instantly becomes a Double 3. Double Robotics is thoughtfully offering this to current Double 2 owners as a significantly more affordable upgrade option than buying a whole new robot.

For more details on all of Double 3's new features, we spoke with the co-founders of Double Robotics, Marc DeVidts and David Cann.

IEEE Spectrum: Why use this augmented reality system instead of just letting the user click on a regular camera image? Why make things more visually complicated, especially for new users?

Marc DeVidts and David Cann: One of the things that we realized about nine months ago when we got this whole thing working was that without the mixed reality for driving, it was really too magical of an experience for the customer. Even us—we had a hard time understanding whether the robot could really see obstacles and understand where the floor is and that kind of thing. So, we said “What would be the best way of communicating this information to the user?” And the right way to do it ended up being to draw the graphics directly onto the scene. It’s really awesome—we have a full, real-time 3D scene with the depth information drawn on top of it. We’re starting with some relatively simple graphics, and we’ll be adding more graphics in the future to help the user understand what the robot is seeing.

How robust is the vision system when it comes to obstacle detection and avoidance? Does it work with featureless surfaces, IR absorbent surfaces, in low light, in direct sunlight, etc?

We’ve looked at all of those cases, and one of the reasons that we’re going with the RealSense is the projector that helps us to see blank walls. We also found that having two sensors—one facing the floor and one facing forward—gives us a great coverage area. Having ultrasonic sensors in there as well helps us to detect anything that we can't see with the cameras. They're sort of a last safety measure, especially useful for detecting glass.

It seems like there’s a lot more that you could do with this sensing and mapping capability. What else are you working on?

We're starting with this semi-autonomous driving variant, and we're doing a private beta of full mapping. So, we’re going to do full SLAM of your environment that will be mapped by multiple robots at the same time while you're driving, and then you'll be able to zoom out to a map and click anywhere and it will drive there. That's where we're going with it, but we want to take baby steps to get there. It's the obvious next step, I think, and there are a lot more possibilities there.

Do you expect developers to be excited for this new mapping capability?

We're using a very powerful computer in the robot, an NVIDIA Jetson TX2 running Ubuntu. There's room to grow. It’s actually really exciting to be able to see, in real time, the 3D pose of the robot along with all of the depth data that gets transformed in real time into one view that gives you a full map. Having all of that data and just putting those pieces together and getting everything to work has been a huge feat in and of itself.

We have an extensive API for developers to do custom implementations, either for telepresence or other kinds of robotics research. Our system isn't running ROS, but we're going to be adding ROS adapters for all of our hardware components.

Telepresence robots depend heavily on wireless connectivity, which is usually not something that telepresence robotics companies like Double have direct control over. Have you found that connectivity has been getting significantly better since you first introduced Double?

When we started in 2013, we had a lot of customers that didn’t have WiFi in their hallways, just in the conference rooms. We very rarely hear about customers having WiFi connectivity issues these days. The bigger issue we see is when people are calling into the robot from home, where they don't have proper traffic management on their home network. The robot doesn't need a ton of bandwidth, but it does need consistent, low latency bandwidth. And so, if someone else in the house is watching Netflix or something like that, it’s going to saturate your connection. But for the most part, it’s gotten a lot better over the last few years, and it’s no longer a big problem for us.

Do you think 5G will make a significant difference to telepresence robots?

We’ll see. We like the low latency possibilities and the better bandwidth, but it's all going to be a matter of what kind of reception you get. LTE can be great, if you have good reception; it’s all about where the tower is. I’m pretty sure that WiFi is going to be the primary thing for at least the next few years.

DeVidts also mentioned that an unfortunate side effect of the new depth sensors is that hanging a t-shirt on your Double to give it some personality will likely render it partially blind, so that's just something to keep in mind. To make up for this, you can switch around the colorful trim surrounding the screen, which is nowhere near as fun.

When the Double 3 is ready for shipping in late September, US $2,000 will get you the new head with all the sensors and stuff, which seamlessly integrates with your Double 2 base. Buying Double 3 straight up (with the included charging dock) will run you $4,000. This is by no means an inexpensive robot, and my impression is that it’s not really designed for individual consumers. But for commercial, corporate, healthcare, or education applications, $4k for a robot as capable as the Double 3 is really quite a good deal—especially considering the kinds of use cases for which it’s ideal.

[ Double Robotics ]

Posted in Human Robots

#435535 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
To Power AI, This Startup Built a Really, Really Big Chip
Tom Simonite | Wired
“The silicon monster is almost 22 centimeters—roughly 9 inches—on each side, making it likely the largest computer chip ever, and a monument to the tech industry’s hopes for artificial intelligence.”

COMPUTING
You Won’t See the Quantum Internet Coming
Ryan F. Mandelbaum | Gizmodo
“The quantum internet is coming sooner than you think—even sooner than quantum computing itself. When things change over, you might not even notice. But when they do, new rules will protect your data against attacks from computers that don’t even exist yet.”

LONGEVITY
What If Aging Weren’t Inevitable, But a Curable Disease
David Adam | MIT Technology Review
“…a growing number of scientists are questioning our basic conception of aging. What if you could challenge your death—or even prevent it altogether? What if the panoply of diseases that strike us in old age are symptoms, not causes? What would change if we classified aging itself as the disease?”

ROBOTICS
Thousands of Autonomous Delivery Robots Are About to Descend on College Campuses
Andrew J. Hawkins | The Verge
“The quintessential college experience of getting pizza delivered to your dorm room is about to get a high-tech upgrade. On Tuesday, Starship Technologies announced its plan to deploy thousands of its autonomous six-wheeled delivery robots on college campuses around the country over the next two years, after raising $40 million in Series A funding.”

TRANSPORTATION
Volocopter Reveals Its First Commercial Autonomous Flying Taxi
Christine Fisher | Engadget
“It’s a race to the skies in terms of which company actually deploys an on-demand air taxi service based around electric vertical take-off and landing aircraft. For its part, German startup Volocopter is taking another key step with the revelation of its first aircraft designed for actual commercial use, the VoloCity.”

Image Credit: Colin Carter / Unsplash

Posted in Human Robots

#435423 Moving Beyond Mind-Controlled Limbs to ...

Brain-machine interface enthusiasts often gush about “closing the loop.” It’s for good reason. On the implant level, it means engineering smarter probes that only activate when they detect faulty electrical signals in brain circuits. Elon Musk’s Neuralink, among other players, is actively pursuing these bidirectional implants that both measure and zap the brain.

But to scientists laboring to restore functionality to paralyzed patients or amputees, “closing the loop” has broader connotations. Building smart mind-controlled robotic limbs isn’t enough; the next frontier is restoring sensation in offline body parts. To truly meld biology with machine, the robotic appendage has to “feel one” with the body.

This month, two studies from Science Robotics describe complementary ways forward. In one, scientists from the University of Utah paired a state-of-the-art robotic arm, the DEKA LUKE, with electrical stimulation of the remaining nerves above the attachment point. Using artificial zaps to mimic the skin’s natural response patterns to touch, the team dramatically increased the patient’s ability to identify objects. Without much training, he could easily tell small objects from large ones, and soft from hard, while blindfolded and wearing headphones.

In another, a team based at the National University of Singapore took inspiration from our largest organ, the skin. Mimicking the neural architecture of biological skin, the engineered “electronic skin” not only senses temperature, pressure, and humidity, but continues to function even when scraped or otherwise damaged. Thanks to artificial nerves, the flexible e-skin transmits electrical data roughly 1,000 times faster than our biological nerves can.

Together, the studies marry neuroscience and robotics. Representing the latest push towards closing the loop, they show that integrating biological sensibilities with robotic efficiency isn’t impossible (super-human touch, anyone?). But more immediately—and more importantly—they’re beacons of hope for patients who hope to regain their sense of touch.

For one of the participants, a late middle-aged man with speckled white hair who lost his forearm 13 years ago, superpowers, cyborgs, or razzle-dazzle brain implants are the last thing on his mind. After a barrage of emotionally-neutral scientific tests, he grasped his wife’s hand and felt her warmth for the first time in over a decade. His face lit up in a blinding smile.

That’s what scientists are working towards.

Biomimetic Feedback
The human skin is a marvelous thing. Not only does it rapidly detect a multitude of sensations—pressure, temperature, itch, pain, humidity—its wiring “binds” disparate signals together into a sensory fingerprint that helps the brain identify what it’s feeling at any moment. Thanks to over 45 miles of nerves that connect the skin, muscles, and brain, you can pick up a half-full coffee cup, knowing that it’s hot and sloshing, while staring at your computer screen. Unfortunately, this complexity is also why restoring sensation is so hard.

The sensory electrode array implanted in the participant’s arm. Image Credit: George et al., Sci. Robot. 4, eaax2352 (2019).
However, complex neural patterns can also be a source of inspiration. Previous cyborg arms have often been paired with so-called “standard” sensory algorithms to induce a basic sense of touch in the missing limb. Here, electrodes zap residual nerves with intensities proportional to the contact force: the harder the grip, the stronger the electrical feedback. Although seemingly logical, that’s not how our skin works. Every time the skin touches or leaves an object, its nerves shoot strong bursts of activity to the brain; during sustained contact, the signal is much lower. The resulting curve of signal strength over time resembles a “U.”
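
As a rough illustration of the difference, the sketch below maps a simulated grip force onto stimulation intensity two ways: proportionally, and with transient bursts at contact and release that approximate the U-shaped profile described above. The gains, time step, and gradient-based form are assumptions for illustration, not parameters from the Utah study.

```python
# Minimal sketch contrasting the two feedback mappings described above.
# All constants are illustrative assumptions, not values from the study.
import numpy as np

def proportional_feedback(force):
    """'Standard' mapping: stimulation intensity tracks grip force directly."""
    return force / (force.max() + 1e-9)

def biomimetic_feedback(force, dt=0.01, onset_gain=3.0, tonic_gain=0.2):
    """Bursts when contact changes, weak tonic signal during steady grip."""
    transient = np.abs(np.gradient(force, dt))          # spikes at touch and release
    transient = onset_gain * transient / (transient.max() + 1e-9)
    tonic = tonic_gain * force / (force.max() + 1e-9)   # low sustained component
    return np.clip(transient + tonic, 0.0, 1.0)

t = np.arange(0.0, 2.0, 0.01)
force = np.where((t > 0.5) & (t < 1.5), 1.0, 0.0)       # grip at 0.5 s, release at 1.5 s
standard = proportional_feedback(force)                  # flat plateau during contact
biomimetic = biomimetic_feedback(force)                  # spikes at the edges: the "U"
```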

The LUKE hand. Image Credit: George et al., Sci. Robot. 4, eaax2352 (2019).
The team decided to directly compare standard algorithms with one that better mimics the skin’s natural response. They fitted a volunteer with a robotic LUKE arm and implanted an array of electrodes into his forearm—right above the amputation—to stimulate the remaining nerves. When the team activated different combinations of electrodes, the man reported sensations of vibration, pressure, tapping, or a sort of “tightening” in his missing hand. Some combinations of zaps also made him feel as if he were moving the robotic arm’s joints.

In all, the team was able to carefully map nearly 120 sensations to different locations on the phantom hand, which they then overlapped with contact sensors embedded in the LUKE arm. For example, when the patient touched something with his robotic index finger, the relevant electrodes sent signals that made him feel as if he were brushing something with his own missing index fingertip.

Standard sensory feedback already helped: even with simple electrical stimulation, the man could tell apart size (golf versus lacrosse ball) and texture (foam versus plastic) while blindfolded and wearing noise-canceling headphones. But when the team implemented two types of neuromimetic feedback—electrical zaps that resembled the skin’s natural response—his performance dramatically improved. He was able to identify objects much faster and more accurately under their guidance. Outside the lab, he also found it easier to cook, feed, and dress himself. He could even text on his phone and complete routine chores that were previously too difficult, such as stuffing an insert into a pillowcase, hammering a nail, or eating hard-to-grab foods like eggs and grapes.

The study shows that the brain more readily accepts biologically-inspired electrical patterns, making it a relatively easy—but enormously powerful—upgrade that seamlessly integrates the robotic arms with the host. “The functional and emotional benefits…are likely to be further enhanced with long-term use, and efforts are underway to develop a portable take-home system,” the team said.

E-Skin Revolution: Asynchronous Coded Electronic Skin (ACES)
Flexible electronic skins also aren’t new, but the second team presented an upgrade in both speed and durability while retaining multiplexed sensory capabilities.

Starting from a combination of rubber, plastic, and silicon, the team embedded over 200 sensors onto the e-skin, each capable of discerning contact, pressure, temperature, and humidity. They then looked to the skin’s nervous system for inspiration. Our skin is embedded with a dense array of nerve endings that individually transmit different types of sensations, which are integrated inside hubs called ganglia. Compared to having every single nerve ending directly ping data to the brain, this “gather, process, and transmit” architecture rapidly speeds things up.

The team tapped into this biological architecture. Rather than pairing each sensor with a dedicated receiver, ACES sends all sensory data to a single receiver—an artificial ganglion. This setup lets the e-skin’s wiring work as a whole system, as opposed to individual electrodes. Every sensor transmits its data using a characteristic pulse, which allows it to be uniquely identified by the receiver.
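
One loose way to picture this single-receiver, pulse-signature scheme is as code-division multiplexing: give each sensor its own pseudo-random signature, let all of them share one line, and have the receiver identify active sensors by correlation. The sketch below is an analogy under that assumption, not the circuit described in the paper; the signature length and detection threshold are arbitrary.

```python
# A loose analogy for ACES-style pulse coding, treated as code-division
# multiplexing. Signature length and threshold are arbitrary choices for
# this sketch, not values from the paper.
import numpy as np

rng = np.random.default_rng(42)
N_SENSORS, SIG_LEN = 240, 512                     # 240 sensors, as in the reported test
signatures = rng.choice([-1.0, 1.0], size=(N_SENSORS, SIG_LEN))

def transmit(active_sensors):
    """Superimpose the signatures of every sensor that detected an event."""
    line = np.zeros(SIG_LEN)
    for s in active_sensors:
        line += signatures[s]
    return line

def receive(line, threshold=0.5):
    """Correlate the shared line against every known signature to decode it."""
    scores = signatures @ line / SIG_LEN          # ~1 for active sensors, ~0 otherwise
    return set(np.flatnonzero(scores > threshold).tolist())

fired = {3, 57, 198}
assert receive(transmit(fired)) == fired          # the single receiver recovers who fired
```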

The gains were immediate. First was speed. Normally, sensory data from multiple individual electrodes need to be periodically combined into a map of pressure points. Here, data from thousands of distributed sensors can independently go to a single receiver for further processing, massively increasing efficiency—the new e-skin’s transmission rate is roughly 1,000 times faster than that of human skin.

Second was redundancy. Because data from individual sensors are aggregated, the system keeps functioning even when individual receptors are damaged, making it far more resilient than previous attempts. Finally, the setup scales easily. Although the team only tested the idea with 240 sensors, theoretically the system should work with up to 10,000.

The team is now exploring ways to combine their invention with other material layers to make it water-resistant and self-repairable. As you might’ve guessed, an immediate application is to give robots something similar to complex touch. A sensory upgrade not only lets robots more easily manipulate tools, doorknobs, and other objects in hectic real-world environments, it could also make it easier for machines to work collaboratively with humans in the future (hey Wall-E, care to pass the salt?).

Dexterous robots aside, the team also envisions engineering better prosthetics. When coated onto cyborg limbs, for example, ACES may give them a better sense of touch that begins to rival the human skin—or perhaps even exceed it.

Regardless, efforts that adapt the functionality of the human nervous system to machines are finally paying off, and more are sure to come. Neuromimetic ideas may very well be the link that finally closes the loop.

Image Credit: Dan Hixson/University of Utah College of Engineering.

Posted in Human Robots

#434854 New Lifelike Biomaterial Self-Reproduces ...

Life demands flux.

Every living organism is constantly changing: cells divide and die, proteins build and disintegrate, DNA breaks and heals. Life demands metabolism—the simultaneous builder and destroyer of living materials—to continuously upgrade our bodies. That’s how we heal and grow, how we propagate and survive.

What if we could endow cold, static, lifeless robots with the gift of metabolism?

In a study published this month in Science Robotics, an international team developed a DNA-based method that gives raw biomaterials an artificial metabolism. Dubbed DASH—DNA-based assembly and synthesis of hierarchical materials—the method automatically generates “slime”-like nanobots that dynamically move and navigate their environments.

Like humans, the artificial lifelike material used external energy to constantly change the nanobots’ bodies in pre-programmed ways, recycling their DNA-based parts as both waste and raw material for further use. Some “grew” into the shape of molecular double-helixes; others “wrote” the DNA letters inside micro-chips.

The artificial life forms were also rather “competitive”—in quotes, because these molecular machines are not conscious. Yet when pitted against each other, two DASH bots automatically raced forward, crawling in typical slime-mold fashion at a scale easily seen under the microscope—and, in some iterations, with the naked eye.

“Fundamentally, we may be able to change how we create and use the materials with lifelike characteristics. Typically materials and objects we create in general are basically static… one day, we may be able to ‘grow’ objects like houses and maintain their forms and functions autonomously,” said study author Dr. Shogo Hamada to Singularity Hub.

“This is a great study that combines the versatility of DNA nanotechnology with the dynamics of living materials,” said Dr. Job Boekhoven at the Technical University of Munich, who was not involved in the work.

Dissipative Assembly
The study builds on previous ideas on how to make molecular Lego blocks that essentially assemble—and destroy—themselves.

Although the inspiration came from biological metabolism, scientists have long hoped to cut their reliance on nature. At its core, metabolism is just a bunch of well-coordinated chemical reactions, programmed by eons of evolution. So why build artificial lifelike materials still tethered by evolution when we can use chemistry to engineer completely new forms of artificial life?

Back in 2015, for example, a team led by Boekhoven described a way to mimic how our cells build their internal “structural beams,” aptly called the cytoskeleton. The key here, unlike many processes in nature, isn’t balance or equilibrium; rather, the team engineered an extremely unstable system that automatically builds—and sustains—assemblies from molecular building blocks when given an external source of chemical energy.

Sound familiar? The team basically built molecular devices that “die” without “food.” Thanks to the laws of thermodynamics (hey ya, Newton!), that energy eventually dissipates, and the shapes automatically begin to break down, completing an artificial “circle of life.”

The new study took the system one step further: rather than just mimicking synthesis, they completed the circle by coupling the building process with dissipative assembly.

Here, the “assembling units themselves are also autonomously created from scratch,” said Hamada.

DNA Nanobots
The process of building DNA nanobots starts on a microfluidic chip.

Decades of research have allowed researchers to optimize DNA assembly outside the body. With the help of catalysts, which help “bind” individual molecules together, the team found that they could easily alter the shape of the self-assembling DNA bots—which formed fiber-like shapes—by changing the structure of the microfluidic chambers.

Computer simulations played a role here too: through both digital simulations and observations under the microscope, the team was able to identify a few critical rules that helped them predict how their molecules self-assemble while navigating a maze of blocking “pillars” and channels carved onto the microchips.

This “enabled a general design strategy for the DASH patterns,” they said.

In particular, the whirling motion of the fluids as they coursed through—and bumped into—ridges in the chips seems to help the DNA molecules “entangle into networks,” the team explained.

These insights helped the team further develop the “destroying” part of metabolism. Similar to linking molecules into DNA chains, their destruction also relies on enzymes.

Once the team pumped both “generation” and “degeneration” enzymes into the microchips, along with raw building blocks, the process was completely autonomous. The simultaneous processes were so lifelike that the team used a formalism common in robotics, the finite-state automaton, to characterize the behavior of their DNA nanobots from growth to eventual decay.
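
As a toy example of describing such behavior with a finite-state automaton, the sketch below steps an invented lifecycle (seed, growing, steady, decaying, degraded) forward based on fuel and degradation-enzyme levels. The states and transition rules are made up for illustration; the paper’s actual automaton is not reproduced here.

```python
# Toy finite-state machine for a DASH-like lifecycle. State names and
# transition rules are invented for illustration.
from enum import Enum, auto

class State(Enum):
    SEED = auto()
    GROWING = auto()
    STEADY = auto()
    DECAYING = auto()
    DEGRADED = auto()

def step(state, fuel, degradation_enzyme):
    """Advance one time step given available fuel and degradation activity."""
    if state is State.SEED and fuel > 0.5:
        return State.GROWING
    if state is State.GROWING and fuel < 0.5:
        return State.STEADY
    if state in (State.GROWING, State.STEADY) and degradation_enzyme > fuel:
        return State.DECAYING
    if state is State.DECAYING and fuel < 0.05:
        return State.DEGRADED
    return state

# Example trajectory: fuel is consumed while degradation activity persists.
state, fuel, enzyme = State.SEED, 1.0, 0.3
history = []
while state is not State.DEGRADED:
    history.append(state.name)
    state = step(state, fuel, enzyme)
    fuel *= 0.8   # generation consumes fuel each step
print(" -> ".join(history + [state.name]))
```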

“The result is a synthetic structure with features associated with life. These behaviors include locomotion, self-regeneration, and spatiotemporal regulation,” said Boekhoven.

Molecular Slime Molds
Just witnessing lifelike molecules grow in place, like the running man dance move, wasn’t enough.

In their next experiments, the team took inspiration from slugs to program undulating movements into their DNA bots. Here, “movement” is actually a sort of illusion: the machines “moved” because their front ends kept regenerating while their back ends degenerated. In essence, the molecular slime was built by linking multiple individual “DNA robot-like” units together: each unit received a delayed “decay” signal from the head of the slime, allowing the whole artificial “organism” to crawl forward against the stream of fluid flow.
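
A one-dimensional cartoon makes this “movement by turnover” idea easy to see: if new material assembles at the head while the tail decays a step later, the occupied span drifts forward even though no single unit ever travels. The sketch below is purely illustrative; the cell counts are arbitrary.

```python
# A one-dimensional cartoon of "movement by turnover". Purely illustrative.
def crawl(length=5, steps=6):
    back, front = 0, length - 1          # occupied cells run from back to front
    frames = []
    for _ in range(steps):
        frames.append((back, front))
        front += 1                        # new material assembles at the head
        back += 1                         # delayed decay removes the tail
    return frames

for back, front in crawl():
    print("." * back + "#" * (front - back + 1))
# Each printed row shifts one cell to the right: the "slime" crawls forward.
```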

Here’s the fun part: the team eventually engineered two molecular slime bots and pitted them against each other, Mario Kart-style. In these experiments, the faster moving bot alters the state of its competitor to promote “decay.” This slows down the competitor, allowing the dominant DNA nanoslug to win in a race.

Of course, the end goal isn’t molecular podracing. Rather, the DNA-based bots could easily amplify a given DNA or RNA sequence, making them efficient nano-diagnosticians for viral and other infections.

The lifelike material can basically generate patterns that doctors can directly ‘see’ with their eyes, which makes DNA or RNA molecules from bacteria and viruses extremely easy to detect, the team said.

In the short run, “the detection device with this self-generating material could be applied to many places and help people on site, from farmers to clinics, by providing an easy and accurate way to detect pathogens,” explained Hamada.

A Futuristic Iron Man Nanosuit?
I’m letting my nerd flag fly here. In Avengers: Infinity War, the scientist-engineer-philanthropist-playboy Tony Stark unveiled a nanosuit that grew to his contours when needed and automatically healed when damaged.

DASH may one day realize that vision. For now, the team isn’t focused on using the technology for regenerating armor—rather, the dynamic materials could create new protein assemblies or chemical pathways inside living organisms, for example. The team also envisions adding simple sensing and computing mechanisms into the material, which can then easily be thought of as a robot.

Unlike synthetic biology, the goal isn’t to create artificial life. Rather, the team hopes to give lifelike properties to otherwise static materials.

“We are introducing a brand-new, lifelike material concept powered by its very own artificial metabolism. We are not making something that’s alive, but we are creating materials that are much more lifelike than have ever been seen before,” said lead author Dr. Dan Luo.

“Ultimately, our material may allow the construction of self-reproducing machines… artificial metabolism is an important step toward the creation of ‘artificial’ biological systems with dynamic, lifelike capabilities,” added Hamada. “It could open a new frontier in robotics.”

Image Credit: A timelapse image of DASH, by Jeff Tyson at Cornell University.

Posted in Human Robots