Tag Archives: man

#435522 Harvard’s Smart Exo-Shorts Talk to the ...

Exosuits don’t generally scream “fashionable” or “svelte.” Take the mind-controlled robotic exoskeleton that allowed a paraplegic man to kick off the World Cup back in 2014. Is it cool? Hell yeah. Is it practical? Not so much.

Yapping about wearability might seem childish when the technology already helps people with impaired mobility move around dexterously. But the lesson of the ill-fated Google Glass, with its awkward, dorky head tilt and conspicuous voice commands, clearly shows that wearable computer assistants can't just work technologically—they have to look natural and let the user behave as usual. They have to, in a sense, disappear.

To Dr. Jose Pons at the Legs + Walking Ability Lab in Chicago, exosuits need three main selling points to make it in the real world. One, they have to physically interact with their wearer and seamlessly deliver assistance when needed. Two, they should cognitively interact with the wearer, letting the user guide and control the robot at all times. Finally, they need to feel like a second skin—move with the user without adding too much extra mass or reducing mobility.

This week, a US-Korean collaboration delivered the whole shebang in a Lululemon-style skin-hugging package combined with a retro waist pack. The portable exosuit, weighing only 11 pounds, looks like a pair of spandex shorts but can support the wearer’s hip movement when needed. Unlike their predecessors, the shorts are embedded with sensors that let them know when the wearer is walking versus running by analyzing gait.

Switching between the two movement modes may not seem like much, but what comes naturally to our brains doesn't translate directly to smart exosuits. “Walking and running have fundamentally different biomechanics, which makes developing devices that assist both gaits challenging,” the team said. Their algorithm, computed in the cloud, allows the wearer to easily switch between the two, with the shorts providing appropriate hip support that makes the movement experience seamless.

To Pons, who was not involved in the research but wrote a perspective piece, the study is an exciting step towards future exosuits that will eventually disappear under the skin—that is, implanted neural interfaces to control robotic assistance or activate the user’s own muscles.

“It is realistic to think that we will witness, in the next several years…robust human-robot interfaces to command wearable robotics based on…the neural code of movement in humans,” he said.

A “Smart” Exosuit Hack
There are a few ways you can hack a human body to move with an exosuit. One is using implanted electrodes inside the brain or muscles to decipher movement intent. With heavy practice, a neural implant can help paralyzed people walk again or dexterously move external robotic arms. But because the technique requires surgery, it’s not an immediate sell for people who experience low mobility because of aging or low muscle tone.

The other approach is to look to biophysics. Rather than decoding neural signals that control movement, here the idea is to measure gait and other physical positions in space to decipher intent. As you can probably guess, accurately deciphering user intent isn’t easy, especially when the wearable tries to accommodate multiple gaits. But the gains are many: there’s no surgery involved, and the wearable is low in energy consumption.

Double Trouble
The authors decided to tackle an everyday situation. You’re walking to catch the train to work, realize you’re late, and immediately start sprinting.

That seemingly easy conversion hides a complex switch in biomechanics. When you walk, your legs act like an inverted pendulum that swing towards a dedicated center in a predictable way. When you run, however, the legs move more like a spring-loaded system, and the joints involved in the motion differ from a casual stroll. Engineering an assistive wearable for each is relatively simple; making one for both is exceedingly hard.

Led by Dr. Conor Walsh at Harvard University, the team started with an intuitive idea: assisted walking and running requires specialized “actuation” profiles tailored to both. When the user is moving in a way that doesn’t require assistance, the wearable needs to be out of the way so that it doesn’t restrict mobility. A quick analysis found that assisting hip extension has the largest impact, because it’s important to both gaits and doesn’t add mass to the lower legs.

Building on that insight, the team made a waist belt connected to two thigh wraps, similar to a climbing harness. Two electrical motors embedded inside the device connect the waist belt to other components through a pulley system to help the hip joints move. The whole contraption weighed about 11 lbs and didn’t obstruct natural movement.

Next, the team programmed two separate supporting profiles for walking and running. The goal was to reduce the “metabolic cost” for both movements, so that the wearer expends as little energy as needed. To switch between the two programs, they used a cloud-based classification algorithm to measure changes in energy fluctuation to figure out what mode—running or walking—the user is in.
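The walk/run distinction such a classifier can exploit is easy to sketch in code. In walking's inverted-pendulum gait, the body's potential and kinetic energy fluctuate roughly out of phase; in running's spring-like gait they fluctuate in phase. The toy classifier below thresholds the correlation between the two fluctuation signals; the function names, inputs, and zero threshold are illustrative assumptions, not the team's actual cloud algorithm.

```python
import numpy as np

def classify_gait(com_height, com_speed):
    """Toy walk/run classifier based on center-of-mass energetics.

    Walking: potential and kinetic energy exchange like a pendulum,
    so their fluctuations are out of phase (negative correlation).
    Running: the legs act like springs, so both energies rise and
    fall together (positive correlation).
    """
    g = 9.81
    pe = g * (com_height - np.mean(com_height))        # potential energy per kg
    ke = 0.5 * (com_speed**2 - np.mean(com_speed**2))  # kinetic energy per kg
    r = np.corrcoef(pe, ke)[0, 1]
    return "running" if r > 0 else "walking"
```

In the real device this decision has to be made online, from noisy sensor estimates, and its output selects which of the two hip-assistance profiles the motors apply.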

Smart Booster
Initial trials on treadmills were highly positive. Six male volunteers of similar age and build donned the exosuit and either ran or walked on the treadmill at varying inclines. The algorithm performed perfectly at distinguishing between the two gaits in all conditions, even at steep angles.

An outdoor test with eight volunteers also proved the algorithm nearly perfect. Even on uneven terrain, only two steps out of all test trials were misclassified. In an additional trial on mud or snow, the algorithm performed just as well.

“The system allows the wearer to use their preferred gait for each speed,” the team said.

Software excellence translated to performance. A test found that the exosuit reduced the metabolic cost of walking by over nine percent and of running by four percent. It may not sound like much, but that range of improvement is meaningful in athletic performance. To put things into perspective, the team said, the metabolic reduction during walking is similar to taking 16 pounds off at the waist.

The Wearable Exosuit Revolution
The study’s lightweight exoshorts are hardly the only player in town. Back in 2017, SRI International’s spin-off Superflex engineered the Aura suit to support mobility in the elderly. The Aura used a different mechanism: rather than a pulley system, it incorporated a type of smart material that contracts in a manner similar to human muscles when zapped with electricity.

Embedded with a myriad of motion sensors, accelerometers, and gyroscopes, the Aura got its smarts from mini-computers that measured how fast the wearer was moving and tracked the user’s posture. The data were integrated and processed locally inside hexagon-shaped computing pods near the thighs and upper back. The pods also acted as the control center for sending electrical zaps to give the wearer a boost when needed.

Around the same time, a collaboration between Harvard’s Wyss Institute and ReWalk Robotics introduced a fabric-based wearable robot to assist a wearer’s legs for balance and movement. Meanwhile, a Swiss team coated normal fabric with electroactive material to weave soft, pliable artificial “muscles” that move with the skin.

Although health support is the current goal, the military is obviously interested in similar technologies to enhance soldiers’ physicality. Superflex’s Aura, for example, was originally inspired by technology born from DARPA’s Warrior Web Program, which aimed to reduce a soldier’s mechanical load.

That said, military gear has a long history of trickling down to consumer use. Just as camouflage, cargo pants, and GORE-TEX made their way into the consumer ecosphere, it’s not hard to imagine your local Target eventually stocking intelligent exowear.

Image and Video Credit: Wyss Institute at Harvard University.

Posted in Human Robots

#435474 Watch China’s New Hybrid AI Chip Power ...

When I lived in Beijing back in the 90s, a man walking his bike was nothing to look at. But today, I did a serious double-take at a video of a bike walking its man.

No kidding.

The bike itself looks overloaded but otherwise completely normal. Underneath its simplicity, however, is a hybrid computer chip that combines brain-inspired circuits with machine learning processes into a computing behemoth. Thanks to its smart chip, the bike self-balances as it gingerly rolls down a paved track before smoothly gaining speed into a jogging pace while navigating dexterously around obstacles. It can even respond to simple voice commands such as “speed up,” “left,” or “straight.”

Far from a circus trick, the bike is a real-world demo of the AI community’s latest attempt at fashioning specialized hardware to keep up with the challenges of machine learning algorithms. The Tianjic (天机*) chip isn’t just your standard neuromorphic chip. Rather, it has the architecture of a brain-like chip, but can also run deep learning algorithms—a match made in heaven that basically mashes together neuro-inspired hardware and software.

The study shows that China is nipping at the heels of Google, Facebook, NVIDIA, and other tech behemoths investing in new AI chip designs—hell, with billions in government investment, it may already have a head start. A sweeping AI plan from 2017 aims to catch up with the US on AI technology and applications by 2020. By 2030, China is aiming to be the global leader—and a champion for building general AI that matches humans in intellectual competence.

The country’s ambition is reflected in the team’s parting words.

“Our study is expected to stimulate AGI [artificial general intelligence] development by paving the way to more generalized hardware platforms,” said the authors, led by Dr. Luping Shi at Tsinghua University.

A Hardware Conundrum
Shi’s autonomous bike isn’t the first robotic two-wheeler. Back in 2015, the famed research nonprofit SRI International in Menlo Park, California teamed up with Yamaha to engineer MOTOBOT, a humanoid robot capable of driving a motorcycle. Powered by state-of-the-art robotic hardware and machine learning, MOTOBOT eventually raced MotoGP world champion Valentino Rossi in a nail-biting match-off.

However, the technological core of MOTOBOT and Shi’s bike vastly differ, and that difference reflects two pathways towards more powerful AI. One, exemplified by MOTOBOT, is software—developing brain-like algorithms with increasingly efficient architecture, efficacy, and speed. That sounds great, but deep neural nets demand so many computational resources that general-purpose chips can’t keep up.

As Shi told China Science Daily: “CPUs and other chips are driven by miniaturization technologies based on physics. Transistors might shrink to nanoscale-level in 10, 20 years. But what then?” As more transistors are squeezed onto these chips, efficient cooling becomes a limiting factor in computational speed. Tax them too much, and they melt.

For AI processes to continue, we need better hardware. An increasingly popular idea is to build neuromorphic chips, which resemble the brain from the ground up. IBM’s TrueNorth, for example, contains a massively parallel architecture nothing like the traditional Von Neumann structure of classic CPUs and GPUs. Similar to biological brains, TrueNorth’s memory is stored within “synapses” between physical “neurons” etched onto the chip, which dramatically cuts down on energy consumption.

But even these chips are limited. Because computation is tethered to hardware architecture, most chips implement just one specific type of brain-inspired network, called spiking neural networks (SNNs). Neuromorphic chips are without doubt highly efficient setups with dynamics similar to biological networks, but they don’t play nicely with deep learning and other software-based AI.

Brain-AI Hybrid Core
Shi’s new Tianjic chip brings these two incompatible approaches together on a single piece of brainy hardware.

The first step was to bridge the deep learning and SNN divide. The two have very different computation philosophies and memory organizations, the team said. The biggest difference, however, is that artificial neural networks transform multidimensional data—image pixels, for example—into continuous streams of multi-bit values. In contrast, neurons in SNNs activate using “binary spikes” that code for specific activation events in time.

Confused? Yeah, it’s hard to wrap my head around it too. That’s because SNNs act very similarly to our neural networks and nothing like computers. A particular neuron needs to generate an electrical signal (a “spike”) large enough to transfer down to the next one; little blips in signals don’t count. The way they transmit data also heavily depends on how they’re connected, or the network topology. The takeaway: SNNs work pretty differently than deep learning.
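One common way to bridge the two representations, shown here purely as an illustration (this is not Tianjic's actual neuron scheme), is rate coding: a continuous ANN activation becomes the firing probability of a binary spike train, so either form can be recovered from the other.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(activation, n_steps=1000):
    """Encode a continuous activation in [0, 1] as a binary spike
    train: at each time step the neuron fires with probability equal
    to the activation (Bernoulli rate coding)."""
    return (rng.random(n_steps) < activation).astype(np.uint8)

def rate_decode(spikes):
    """Recover the continuous value as the mean firing rate."""
    return spikes.mean()
```

Longer spike trains recover the original value more precisely, which hints at the real trade-off: spike codes buy event-driven efficiency at the cost of time or precision.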

Shi’s team first recreated this firing quirk in the language of computers—0s and 1s—so that the coding mechanism would become compatible with deep learning algorithms. They then carefully aligned the step-by-step building blocks of the two models, which allowed them to tease out similarities into a common ground to further build on. “On the basis of this unified abstraction, we built a cross-paradigm neuron scheme,” they said.

In general, the design allowed both computational approaches to share the synapses, where neurons connect and store data, and the dendrites, the neurons’ branched extensions. In contrast, the neuron body, where signals integrate, was left reconfigurable for each type of computation, as were the input branches. The building blocks were combined into a single unified functional core (FCore), which acts like a deep learning/SNN converter depending on its specific setup. Translation: the chip can do both types of previously incompatible computation.
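A loose software analogy of that shared-versus-reconfigurable split: in the toy neuron below, the synaptic weights are shared state, while the "soma" switches between an analog ANN activation and a leaky integrate-and-fire spiking mode. The specific choices (ReLU activation, leak and threshold values) are assumptions for illustration, not details from the paper.

```python
import numpy as np

class FCoreNeuron:
    """Toy 'unified' neuron: shared synapses, reconfigurable soma."""

    def __init__(self, weights, mode="ann", threshold=1.0, leak=0.9):
        self.w = np.asarray(weights, dtype=float)  # shared synaptic weights
        self.mode = mode                           # "ann" or "snn"
        self.threshold, self.leak = threshold, leak
        self.v = 0.0                               # membrane potential (SNN mode)

    def step(self, inputs):
        drive = float(self.w @ np.asarray(inputs, dtype=float))
        if self.mode == "ann":
            return max(drive, 0.0)               # analog ReLU activation
        self.v = self.leak * self.v + drive      # leaky integration
        if self.v >= self.threshold:
            self.v = 0.0                         # reset after firing
            return 1                             # binary spike
        return 0
```

Switching `mode` changes how the same weighted inputs are turned into an output, which is the gist of a core that can serve either paradigm.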

The Chip
Using nanoscale fabrication, the team arranged 156 FCores, containing roughly 40,000 neurons and 10 million synapses, onto a chip less than a fifth of an inch in length and width. Initial tests showcased the chip’s versatility: it can run both SNNs and deep learning algorithms such as the popular convolutional neural networks (CNNs) often used in machine vision.

Compared to IBM’s TrueNorth, the density of Tianjic’s cores increased by 20 percent, speeding up performance ten times and increasing bandwidth at least 100-fold, the team said. When pitted against GPUs, the current hardware darling of machine learning, the chip increased processing throughput up to 100 times while using just a sliver (1/10,000) of the energy.

Impressive as those stats are, a real-life demo speaks louder. Here’s where the authors gave their Tianjic brain a body. The team combined one chip with multiple specialized networks to process vision, balance, voice commands, and decision-making in real time. Object detection and target tracking, for example, relied on a deep convolutional neural net, whereas voice commands and balance data were recognized using an SNN. The inputs were then integrated inside a neural state machine, which churned out decisions to downstream output modules—for example, controlling the handlebar to turn left.

Thanks to the chip’s brain-like architecture and bilingual ability, Tianjic “allowed all of the neural network models to operate in parallel and realized seamless communication across the models,” the team said. The result is an autonomous bike that rolls after its human, balances across speed bumps, avoids crashing into roadblocks, and answers to voice commands.

General AI?
“It’s a wonderful demonstration and quite impressive,” said the editorial team at Nature, which published the study on its cover last week.

However, they cautioned, when pitted toe-to-toe against a state-of-the-art chip designed for a single problem, Tianjic falls behind on that problem. Still, building these jack-of-all-trades hybrid chips is worth the effort. Compared to today’s narrow AI, what people really want is artificial general intelligence, which will require new architectures that aren’t designed to solve one particular problem.

Until people start to explore, innovate, and play around with different designs, it’s not clear how we can further progress in the pursuit of general AI. A self-driving bike might not be much to look at, but its hybrid brain is a pretty neat place to start.

*The name, in Chinese, means “heavenly machine,” “unknowable mystery of nature,” or “confidentiality.” Go figure.

Image Credit: Alexander Ryabintsev / Shutterstock.com


#435423 Moving Beyond Mind-Controlled Limbs to ...

Brain-machine interface enthusiasts often gush about “closing the loop.” It’s for good reason. On the implant level, it means engineering smarter probes that only activate when they detect faulty electrical signals in brain circuits. Elon Musk’s Neuralink, among other players, is pursuing bi-directional implants that both measure and zap the brain.

But to scientists laboring to restore functionality to paralyzed patients or amputees, “closing the loop” has broader connotations. Building smart mind-controlled robotic limbs isn’t enough; the next frontier is restoring sensation in offline body parts. To truly meld biology with machine, the robotic appendage has to “feel one” with the body.

This month, two studies in Science Robotics describe complementary ways forward. In one, scientists from the University of Utah paired a state-of-the-art robotic arm—the DEKA LUKE—with electrical stimulation of the remaining nerves above the amputation point. Using artificial zaps to mimic the skin’s natural response patterns to touch, the team dramatically increased the patient’s ability to identify objects. Without much training, he could easily discriminate between small and large objects, and between soft and hard ones, while blindfolded and wearing headphones.

In another, a team based at the National University of Singapore took inspiration from our largest organ, the skin. Mimicking the neural architecture of biological skin, the engineered “electronic skin” not only senses temperature, pressure, and humidity, but continues to function even when scraped or otherwise damaged. Thanks to artificial nerves, the flexible e-skin shoots electrical data 1,000 times faster than human nerves can.

Together, the studies marry neuroscience and robotics. Representing the latest push towards closing the loop, they show that integrating biological sensibilities with robotic efficiency isn’t impossible (super-human touch, anyone?). But more immediately—and more importantly—they’re beacons of hope for patients who hope to regain their sense of touch.

For one of the participants, a late middle-aged man with speckled white hair who lost his forearm 13 years ago, superpowers, cyborgs, or razzle-dazzle brain implants are the last thing on his mind. After a barrage of emotionally-neutral scientific tests, he grasped his wife’s hand and felt her warmth for the first time in over a decade. His face lit up in a blinding smile.

That’s what scientists are working towards.

Biomimetic Feedback
The human skin is a marvelous thing. Not only does it rapidly detect a multitude of sensations—pressure, temperature, itch, pain, humidity—its wiring “binds” disparate signals together into a sensory fingerprint that helps the brain identify what it’s feeling at any moment. Thanks to over 45 miles of nerves that connect the skin, muscles, and brain, you can pick up a half-full coffee cup, knowing that it’s hot and sloshing, while staring at your computer screen. Unfortunately, this complexity is also why restoring sensation is so hard.

The sensory electrode array implanted in the participant’s arm. Image Credit: George et al., Sci. Robot. 4, eaax2352 (2019).
However, complex neural patterns can also be a source of inspiration. Previous cyborg arms were often paired with so-called “standard” sensory algorithms to induce a basic sense of touch in the missing limb. Here, electrodes zap residual nerves with intensities proportional to the contact force: the harder the grip, the stronger the electrical feedback. Although seemingly logical, that’s not how our skin works. Every time the skin touches or leaves an object, its nerves shoot strong bursts of activity to the brain; during sustained contact, the signal is much lower. The resulting curve of electrical strength over a grasp resembles a “U.”
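The difference between the two encodings is easy to sketch. The "standard" code makes stimulation proportional to force; the skin-like code below emphasizes the rate of change of force, producing strong bursts at contact onset and offset with a weak plateau in between, tracing out the "U." The gains and functional form are illustrative assumptions, not the study's published model.

```python
import numpy as np

def linear_feedback(force, gain=1.0):
    """'Standard' encoding: stimulation proportional to grip force."""
    return gain * force

def biomimetic_feedback(force, dt=0.01, burst_gain=5.0, hold_gain=0.3):
    """Skin-like encoding: strong stimulation when contact begins or
    ends (large |d(force)/dt|), weak stimulation during steady hold."""
    d = np.abs(np.gradient(force, dt))      # rate of change of force
    return burst_gain * d + hold_gain * force
```

Running both on a square force pulse shows the contrast: the linear code is flat for the whole grasp, while the biomimetic code spikes at touch and release.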

The LUKE hand. Image Credit: George et al., Sci. Robot. 4, eaax2352 (2019).
The team decided to directly compare standard algorithms with one that better mimics the skin’s natural response. They fitted a volunteer with a robotic LUKE arm and implanted an array of electrodes into his forearm—right above the amputation—to stimulate the remaining nerves. When the team activated different combinations of electrodes, the man reported sensations of vibration, pressure, tapping, or a sort of “tightening” in his missing hand. Some combinations of zaps also made him feel as if he were moving the robotic arm’s joints.

In all, the team was able to carefully map nearly 120 sensations to different locations on the phantom hand, which they then overlapped with contact sensors embedded in the LUKE arm. For example, when the patient touched something with his robotic index finger, the relevant electrodes sent signals that made him feel as if he were brushing something with his own missing index fingertip.

Standard sensory feedback already helped: even with simple electrical stimulation, the man could tell apart size (golf versus lacrosse ball) and texture (foam versus plastic) while blindfolded and wearing noise-canceling headphones. But when the team implemented two types of neuromimetic feedback—electrical zaps that resembled the skin’s natural response—his performance dramatically improved. He was able to identify objects much faster and more accurately under their guidance. Outside the lab, he also found it easier to cook, feed, and dress himself. He could even text on his phone and complete routine chores that were previously too difficult, such as stuffing an insert into a pillowcase, hammering a nail, or eating hard-to-grab foods like eggs and grapes.

The study shows that the brain more readily accepts biologically-inspired electrical patterns, making it a relatively easy—but enormously powerful—upgrade that seamlessly integrates the robotic arm with its host. “The functional and emotional benefits…are likely to be further enhanced with long-term use, and efforts are underway to develop a portable take-home system,” the team said.

E-Skin Revolution: Asynchronous Coded Electronic Skin (ACES)
Flexible electronic skins also aren’t new, but the second team presented an upgrade in both speed and durability while retaining multiplexed sensory capabilities.

Starting from a combination of rubber, plastic, and silicone, the team embedded over 200 sensors onto the e-skin, each capable of discerning contact, pressure, temperature, and humidity. They then looked to the skin’s nervous system for inspiration. Our skin is embedded with a dense array of nerve endings that individually transmit different types of sensations, which are integrated inside hubs called ganglia. Compared to having every single nerve ending directly ping data to the brain, this “gather, process, and transmit” architecture rapidly speeds things up.

The team tapped into this biological architecture. Rather than pairing each sensor with a dedicated receiver, ACES sends all sensory data to a single receiver—an artificial ganglion. This setup lets the e-skin’s wiring work as a whole system, as opposed to individual electrodes. Every sensor transmits its data using a characteristic pulse, which allows it to be uniquely identified by the receiver.
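A toy digital analogy of that single-receiver design: give each sensor a unique pulse signature, let every active sensor superpose its pulses on one shared line, and have the receiver recover who fired by correlating against the known signatures. ACES itself works with asynchronous analog pulse timing, so the binary codes and threshold below are illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(42)
N_SENSORS, SIG_LEN = 8, 256

# Each sensor gets a unique +/-1 pulse signature (a digital stand-in
# for the characteristic pulses described in the paper).
signatures = rng.choice([-1.0, 1.0], size=(N_SENSORS, SIG_LEN))

def transmit(active):
    """All active sensors superpose their pulses on one shared line."""
    return signatures[list(active)].sum(axis=0)

def decode(line, threshold=0.5):
    """The single receiver correlates the shared line against every
    known signature; a high correlation means that sensor fired."""
    scores = signatures @ line / SIG_LEN
    return {i for i, s in enumerate(scores) if s > threshold}
```

Because decoding happens per signature, damage to one sensor degrades only its own channel, which mirrors the redundancy the team reports.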

The gains were immediate. First was speed. Normally, sensory data from multiple individual electrodes need to be periodically combined into a map of pressure points. Here, data from thousands of distributed sensors can independently go to a single receiver for further processing, massively increasing efficiency—the new e-skin’s transmission rate is roughly 1,000 times faster than that of human skin.

Second was redundancy. Because data from individual sensors are aggregated, the system keeps functioning even when individual receptors are damaged, making it far more resilient than previous attempts. Finally, the setup scales easily. Although the team only tested the idea with 240 sensors, theoretically the system should work with up to 10,000.

The team is now exploring ways to combine their invention with other material layers to make it water-resistant and self-repairable. As you might’ve guessed, an immediate application is to give robots something similar to complex touch. A sensory upgrade not only lets robots more easily manipulate tools, doorknobs, and other objects in hectic real-world environments, it could also make it easier for machines to work collaboratively with humans in the future (hey Wall-E, care to pass the salt?).

Dexterous robots aside, the team also envisions engineering better prosthetics. When coated onto cyborg limbs, for example, ACES may give them a better sense of touch that begins to rival the human skin—or perhaps even exceed it.

Regardless, efforts that adapt the functionality of the human nervous system to machines are finally paying off, and more are sure to come. Neuromimetic ideas may very well be the link that finally closes the loop.

Image Credit: Dan Hixson/University of Utah College of Engineering.


#435199 The Rise of AI Art—and What It Means ...

Artificially intelligent systems are slowly taking over tasks previously done by humans, and many processes involving repetitive, simple movements have already been fully automated. For now, humans remain superior when it comes to abstract and creative tasks.

However, it seems like even when it comes to creativity, we’re now being challenged by our own creations.

In the last few years, we’ve seen the emergence of hundreds of “AI artists.” These complex algorithms are creating unique (and sometimes eerie) works of art. They’re generating stunning visuals, profound poetry, transcendent music, and even realistic movie scripts. The works of these AI artists are raising questions about the nature of art and the role of human creativity in future societies.

Here are a few works of art created by non-human entities.

Unsecured Futures
by Ai-Da

Ai-Da Robot with Painting. Image Credit: Ai-Da portraits by Nicky Johnston. Published with permission from Midas Public Relations.
Earlier this month we saw the announcement of Ai-Da, considered the first ultra-realistic drawing robot artist. Her mechanical abilities, combined with AI-based algorithms, allow her to draw, paint, and even sculpt. She is able to draw people using her artificial eye and a pencil in her hand. Ai-Da’s artwork and first solo exhibition, Unsecured Futures, will be showcased at Oxford University in July.

Ai-Da Cartesian Painting. Image Credit: Ai-Da Artworks. Published with permission from Midas Public Relations.
Obviously Ai-Da has no true consciousness, thoughts, or feelings. Despite that, the (human) organizers of the exhibition believe that Ai-Da serves as a basis for crucial conversations about the ethics of emerging technologies. The exhibition will serve as a stimulant for engaging with critical questions about what kind of future we ought to create via such technologies.

The exhibition’s creators wrote, “Humans are confident in their position as the most powerful species on the planet, but how far do we actually want to take this power? To a Brave New World (Nightmare)? And if we use new technologies to enhance the power of the few, we had better start safeguarding the future of the many.”

Google’s PoemPortraits
Our transcendence adorns,
That society of the stars seem to be the secret.

The two lines of poetry above aren’t like any poetry you’ve come across before. They were generated by an algorithm trained via deep learning on 20 million words of 19th-century poetry.

Google’s latest art project, named PoemPortraits, takes a word you suggest and generates a unique poem (once again, a collaboration of man and machine). You can even add a selfie to the final “PoemPortrait.” Artist Es Devlin, the project’s creator, explains that the AI “doesn’t copy or rework existing phrases, but uses its training material to build a complex statistical model. As a result, the algorithm generates original phrases emulating the style of what it’s been trained on.”

The generated poetry can sometimes be profound, and sometimes completely meaningless. But what makes the PoemPortraits project even more interesting is that it’s collaborative. All of the generated lines of poetry are combined into a steadily growing collective poem, which you can view after your lines are generated. In many ways, the final collective poem is a collaboration of people from around the world working with algorithms.

Faceless Portraits Transcending Time
AICAN + Ahmed Elgammal

Image Credit: AICAN + Ahmed Elgammal | Faceless Portrait #2 (2019) | Artsy.
In March of this year, an AI artist called AICAN and its creator Ahmed Elgammal took over a New York gallery. The exhibition at HG Commentary showed two series of canvas works portraying harrowing, dream-like faceless portraits.

The exhibition was not simply credited to a machine, but rather attributed to the collaboration between a human and machine. Ahmed Elgammal is the founder and director of the Art and Artificial Intelligence Laboratory at Rutgers University. He considers AICAN to not only be an autonomous AI artist, but also a collaborator for artistic endeavors.

How did AICAN create these eerie faceless portraits? The system was presented with 100,000 photos of Western art from over five centuries, allowing it to learn the aesthetics of art via machine learning. It then drew on this historical knowledge, along with a mandate to create something new, to produce artwork without human intervention.

Genesis
by AIVA Technologies

Listen to the score above. While you do, reflect on the fact that it was generated by an AI.

AIVA is an AI that composes soundtrack music for movies, commercials, games, and trailers. Its creative works span a wide range of emotions and moods, and the scores it generates can be hard to distinguish from those created by talented human composers.

The AIVA music engine allows users to generate original scores in multiple ways. One is to upload an existing human-generated score and select the temp track to base the composition process on. Another method involves using preset algorithms to compose music in pre-defined styles, including everything from classical to Middle Eastern.

Currently, the platform is promoted as an opportunity for filmmakers and producers. But in the future, perhaps every individual will have personalized music generated for them based on their interests, tastes, and evolving moods. We already have algorithms on streaming websites recommending novel music to us based on our interests and history. Soon, algorithms may be used to generate music and other works of art that are tailored to impact our unique psyches.

The Future of Art: Pushing Our Creative Limitations
These works of art are just a glimpse into the breadth of the creative works being generated by algorithms and machines. Many of us will rightly fear these developments. We have to ask ourselves what our role will be in an era where machines can perform what we consider complex, abstract, creative tasks. The implications for the future of work, education, and human societies are profound.

At the same time, some of these works demonstrate that AI artists may not necessarily represent a threat to human artists, but rather an opportunity for us to push our creative boundaries. The most exciting artistic creations involve collaborations between humans and machines.

We have always used technological scaffolding to push ourselves beyond our biological limitations. We use telescopes to extend our sight, planes to fly, and smartphones to connect with one another. Our machines are not working against us so much as working as extensions of our minds. Similarly, we can use our machines to expand our creativity and push the boundaries of art.

Image Credit: Ai-Da portraits by Nicky Johnston. Published with permission from Midas Public Relations.

Posted in Human Robots

#434854 New Lifelike Biomaterial Self-Reproduces ...

Life demands flux.

Every living organism is constantly changing: cells divide and die, proteins build and disintegrate, DNA breaks and heals. Life demands metabolism—the simultaneous builder and destroyer of living materials—to continuously upgrade our bodies. That’s how we heal and grow, how we propagate and survive.

What if we could endow cold, static, lifeless robots with the gift of metabolism?

In a study published this month in Science Robotics, an international team developed a DNA-based method that gives raw biomaterials an artificial metabolism. Dubbed DASH—DNA-based assembly and synthesis of hierarchical materials—the method automatically generates “slime”-like nanobots that dynamically move and navigate their environments.

Like humans, the artificial lifelike material used external energy to constantly change the nanobots’ bodies in pre-programmed ways, recycling their DNA-based parts as both waste and raw material for further use. Some “grew” into the shape of molecular double helices; others “wrote” the DNA letters inside microchips.

The artificial life forms were also rather “competitive”—in quotes, because these molecular machines are not conscious. Yet when pitted against each other, two DASH bots automatically raced forward, crawling in typical slime-mold fashion at a scale easily seen under the microscope, and in some iterations with the naked eye.

“Fundamentally, we may be able to change how we create and use the materials with lifelike characteristics. Typically materials and objects we create in general are basically static… one day, we may be able to ‘grow’ objects like houses and maintain their forms and functions autonomously,” said study author Dr. Shogo Hamada to Singularity Hub.

“This is a great study that combines the versatility of DNA nanotechnology with the dynamics of living materials,” said Dr. Job Boekhoven at the Technical University of Munich, who was not involved in the work.

Dissipative Assembly
The study builds on previous ideas on how to make molecular Lego blocks that essentially assemble—and destroy—themselves.

Although the inspiration came from biological metabolism, scientists have long hoped to cut their reliance on nature. At its core, metabolism is just a bunch of well-coordinated chemical reactions, programmed by eons of evolution. So why build artificial lifelike materials still tethered by evolution when we can use chemistry to engineer completely new forms of artificial life?

Back in 2015, for example, a team led by Boekhoven described a way to mimic how our cells build their internal “structural beams,” aptly called the cytoskeleton. The key here, unlike many processes in nature, isn’t balance or equilibrium; rather, the team engineered an extremely unstable system that automatically builds—and sustains—assemblies from molecular building blocks when given an external source of chemical energy.

Sound familiar? The team basically built molecular devices that “die” without “food.” Thanks to the second law of thermodynamics, that energy eventually dissipates, and the shapes automatically begin to break down, completing an artificial “circle of life.”

The new study took the system one step further: rather than just mimicking synthesis, the team completed the circle by coupling assembly with degradation, so the building blocks themselves are continuously generated and recycled.

Here, the “assembling units themselves are also autonomously created from scratch,” said Hamada.

DNA Nanobots
The process of building DNA nanobots starts on a microfluidic chip.

Decades of research have allowed researchers to optimize DNA assembly outside the body. With the help of catalysts that “bind” individual molecules together, the team found they could easily alter the shape of the self-assembling DNA bots, which formed fiber-like structures, by changing the geometry of the microfluidic chambers.

Computer simulations played a role here too: through both digital simulations and observations under the microscope, the team was able to identify a few critical rules that helped them predict how their molecules self-assemble while navigating a maze of blocking “pillars” and channels carved onto the microchips.

This “enabled a general design strategy for the DASH patterns,” they said.

In particular, the whirling motion of the fluids as they coursed through—and bumped into—ridges in the chips seems to help the DNA molecules “entangle into networks,” the team explained.

These insights helped the team further develop the “destroying” part of metabolism. Similar to linking molecules into DNA chains, their destruction also relies on enzymes.

Once the team pumped both “generation” and “degeneration” enzymes into the microchips, along with raw building blocks, the process was completely autonomous. The simultaneous processes were so lifelike that the team turned to finite-state automata, a formalism common in robotics, to describe the behavior of their DNA nanobots from growth to eventual decay.

“The result is a synthetic structure with features associated with life. These behaviors include locomotion, self-regeneration, and spatiotemporal regulation,” said Boekhoven.
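As a rough illustration of that finite-state framing, the growth-to-decay cycle might be modeled like this. The state names, events, and transitions below are hypothetical placeholders, not the states defined in the paper:

```python
# Illustrative finite-state model of a DASH bot's life cycle.
# States and transition triggers are invented for this sketch,
# not taken from the Science Robotics study.

TRANSITIONS = {
    ("dormant", "generation_enzyme"): "growing",
    ("growing", "assembled"): "locomoting",
    ("locomoting", "degeneration_enzyme"): "decaying",
    ("decaying", "material_exhausted"): "dormant",  # parts recycled as raw material
}

def step(state, event):
    """Advance the bot one transition; unknown events leave the state unchanged."""
    return TRANSITIONS.get((state, event), state)

state = "dormant"
for event in ["generation_enzyme", "assembled",
              "degeneration_enzyme", "material_exhausted"]:
    state = step(state, event)

print(state)  # back to "dormant": one full growth-to-decay cycle
```

The appeal of the formalism is that the material's whole life cycle reduces to a small table of states and triggers, which is exactly what makes it measurable.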

Molecular Slime Molds
Just watching lifelike molecules grow in place, doing a molecular version of the running man, wasn’t enough.

In their next experiments, the team took inspiration from slugs to program undulating movements into their DNA bots. Here, “movement” is actually a sort of illusion: the machines “moved” because their front ends kept regenerating while their back ends degenerated. In essence, the molecular slime was built by linking multiple individual “DNA robot-like” units together: each unit receives a delayed “decay” signal from the head of the slime, allowing the whole artificial “organism” to crawl forward against the stream of fluid flow.
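The “front regenerates, back degenerates” trick can be sketched as a toy one-dimensional model. The cell counts and step sizes below are invented for illustration; the real system is a continuous DNA network in a microfluidic channel:

```python
# Toy 1D model of the crawling illusion: new material assembles at the
# head while a delayed decay signal removes the oldest material at the
# tail. Nothing translates as a rigid object, yet the occupied region
# shifts forward. All numbers here are illustrative.

def crawl(head, tail, steps):
    """Advance the slime: each step grows one unit at the head and
    decays one unit at the tail, preserving body length."""
    for _ in range(steps):
        head += 1  # regeneration at the front
        tail += 1  # degeneration at the back
    return head, tail

head, tail = crawl(head=5, tail=0, steps=10)
print(head - tail, head)  # length stays 5 while the head reaches 15
```

The point of the sketch is that locomotion emerges from coupled growth and decay alone; no part of the "body" ever moves.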

Here’s the fun part: the team eventually engineered two molecular slime bots and pitted them against each other, Mario Kart-style. In these experiments, the faster-moving bot alters the state of its competitor to promote “decay.” This slows the competitor down, allowing the dominant DNA nanoslug to win the race.
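That competitive dynamic can be caricatured in a few lines. The speeds and the slowdown rule here are made up; in the real experiment the mechanism is chemical, with the leader promoting decay in its rival:

```python
# Toy race between two "slime" bots: the leading bot triggers extra
# decay in the trailing one, shaving a bit off its speed each step.
# Speeds and the slowdown rule are invented for illustration.

def race(speed_a, speed_b, steps):
    pos_a = pos_b = 0
    for _ in range(steps):
        pos_a += speed_a
        pos_b += speed_b
        # whoever trails suffers promoted "decay" and slows down
        if pos_a > pos_b:
            speed_b = max(0, speed_b - 1)
        elif pos_b > pos_a:
            speed_a = max(0, speed_a - 1)
    return pos_a, pos_b

a, b = race(speed_a=3, speed_b=2, steps=5)
print(a, b)  # 15 3: a small initial edge compounds into a rout
```

Even in this crude model, a slight head start snowballs, which mirrors why the faster bot dominated the experimental races.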

Of course, the end goal isn’t molecular podracing. Rather, the DNA-based bots could easily amplify a given DNA or RNA sequence, making them efficient nano-diagnosticians for viral and other infections.

The lifelike material can basically generate patterns that doctors can directly ‘see’ with their eyes, which makes DNA or RNA molecules from bacteria and viruses extremely easy to detect, the team said.

In the short run, “the detection device with this self-generating material could be applied to many places and help people on site, from farmers to clinics, by providing an easy and accurate way to detect pathogens,” explained Hamada.

A Futuristic Iron Man Nanosuit?
I’m letting my nerd flag fly here. In Avengers: Infinity War, the scientist-engineer-philanthropist-playboy Tony Stark unveiled a nanosuit that grew to his contours when needed and automatically healed when damaged.

DASH may one day realize that vision. For now, the team isn’t focused on using the technology for regenerating armor—rather, the dynamic materials could create new protein assemblies or chemical pathways inside living organisms, for example. The team also envisions adding simple sensing and computing mechanisms into the material, which can then easily be thought of as a robot.

Unlike synthetic biology, the goal isn’t to create artificial life. Rather, the team hopes to give lifelike properties to otherwise static materials.

“We are introducing a brand-new, lifelike material concept powered by its very own artificial metabolism. We are not making something that’s alive, but we are creating materials that are much more lifelike than any that have been seen before,” said lead author Dr. Dan Luo.

“Ultimately, our material may allow the construction of self-reproducing machines… artificial metabolism is an important step toward the creation of ‘artificial’ biological systems with dynamic, lifelike capabilities,” added Hamada. “It could open a new frontier in robotics.”

Image Credit: A timelapse image of DASH, by Jeff Tyson at Cornell University.

Posted in Human Robots