
#434823 The Tangled Web of Turning Spider Silk ...

Spider-Man is one of the most popular superheroes of all time. It’s a bit surprising given that one of the more common phobias is arachnophobia—a debilitating fear of spiders.

Perhaps more fantastical is that young Peter Parker, a brainy high school science nerd, seemingly developed overnight the famous web-shooters and the synthetic spider silk that he uses to swing across the cityscape like Tarzan through the jungle.

That’s because scientists have been trying for decades to replicate spider silk, a material that, among its many superpowers, is five times stronger than steel. In recent years, researchers have been untangling the protein-based fiber’s structure down to the molecular level, leading to new insights and new potential for eventual commercial uses.

The applications for such a material seem nearly endless. There are the more futuristic visions, like enabling robotic “muscles” for human-like movement or ensnaring real-life villains with a Spider-Man-like web. Nearer-term applications could include the biomedical industry, such as bandages and adhesives, and replacement textiles for everything from rope to seat belts to parachutes.

Spinning Synthetic Spider Silk
Randy Lewis has been studying the properties of spider silk and developing methods for producing it synthetically for more than three decades. In the 1990s, his research team was behind cloning the first spider silk gene, as well as the first to identify and sequence the proteins that make up the six different silks that web slingers make. Each has different mechanical properties.

“So our thought process was that you could take that information and begin to understand what made them strong and what makes them stretchy, and why some are very stretchy and some are not stretchy at all, and some are stronger and some are weaker,” explained Lewis, a biology professor at Utah State University and director of the Synthetic Spider Silk Lab, in an interview with Singularity Hub.

Spiders are naturally territorial and cannibalistic, so any attempt to farm silk naturally would likely end in an orgy of arachnid violence. Instead, Lewis and company have genetically modified different organisms to produce spider silk synthetically, including inserting a couple of web-making genes into the genetic code of goats. The goats’ milk contains spider silk proteins.

The lab also produces synthetic spider silk through a fermentation process not entirely dissimilar to brewing beer, but using genetically modified bacteria to make the desired spider silk proteins. A similar technique has been used for years to make a key enzyme in cheese production. More recently, companies are using transgenic bacteria to make meat and milk proteins, entirely bypassing animals in the process.

The same fermentation technology is used by a chic startup called Bolt Threads outside of San Francisco that has raised more than $200 million for fashionable fibers made out of synthetic spider silk it calls Microsilk. (The company is also developing a second leather-like material, Mylo, using the underground root structure of mushrooms known as mycelium.)

Lewis’ lab also uses transgenic silkworms to produce a kind of composite material made up of the domesticated insect’s own silk proteins and those of spider silk. “Those have some fairly impressive properties,” Lewis said.

The researchers are even experimenting with genetically modified alfalfa. One of the big advantages there is that once the spider silk protein has been extracted, the remaining protein could be sold as livestock feed. “That would bring the cost of spider silk protein production down significantly,” Lewis said.

Building a Better Web
Producing synthetic spider silk isn’t the problem, according to Lewis, but the ability to do it at scale commercially remains a sticking point.

Another challenge is “weaving” the synthetic spider silk into usable products that can take advantage of the material’s marvelous properties.

“It is possible to make silk proteins synthetically, but it is very hard to assemble the individual proteins into a fiber or other material forms,” said Markus Buehler, head of the Department of Civil and Environmental Engineering at MIT, in an email to Singularity Hub. “The spider has a complex spinning duct in which silk proteins are exposed to physical forces, chemical gradients, the combination of which generates the assembly of molecules that leads to silk fibers.”

Buehler recently co-authored a paper in the journal Science Advances that found dragline spider silk exhibits different properties in response to changes in humidity that could eventually have applications in robotics.

Specifically, spider silk suddenly contracts and twists above a certain level of relative humidity, exerting enough force to “potentially be competitive with other materials being explored as actuators—devices that move to perform some activity such as controlling a valve,” according to a press release.

Studying Spider Silk Up Close
Recent studies at the molecular level are helping scientists learn more about the unique properties of spider silk, which may help researchers develop materials with extraordinary capabilities.

For example, scientists at Arizona State University used magnetic resonance tools and other instruments to image the abdomen of a black widow spider. They produced what they called the first molecular-level model of spider silk protein fiber formation, providing insights on the nanoparticle structure. The research was published last October in Proceedings of the National Academy of Sciences.

A cross section of the abdomen of a black widow spider (Latrodectus hesperus) used in this study at Arizona State University. Image Credit: Samrat Amin.
Also in 2018, a study presented in Nature Communications described a sort of molecular clamp that binds the silk protein building blocks, which are called spidroins. The researchers observed for the first time that the clamp self-assembles in a two-step process, contributing to the extensibility, or stretchiness, of spider silk.

Another team put the spider silk of a brown recluse under an atomic force microscope, discovering that each strand, already 1,000 times thinner than a human hair, is made up of thousands of nanostrands. That helps explain its extraordinary tensile strength, though technique is also a factor, as the brown recluse uses a special looping method to reinforce its silk strands. The study also appeared last year in the journal ACS Macro Letters.

Making Spider Silk Stick
Buehler said his team is now trying to develop better and faster predictive methods to design silk proteins using artificial intelligence.

“These new methods allow us to generate new protein designs that do not naturally exist and which can be explored to optimize certain desirable properties like torsional actuation, strength, bioactivity—for example, tissue engineering—and others,” he said.

Meanwhile, Lewis’ lab has discovered a method that allows it to solubilize spider silk protein in what is essentially a water-based solution, eschewing acids or other toxic compounds that are normally used in the process.

That enables the researchers to develop materials beyond fiber, including adhesives that “are better than an awful lot of the current commercial adhesives,” Lewis said, as well as coatings that could be used to dampen vibrations, for example.

“We’re making gels for various kinds of tissue regeneration, as well as drug delivery, and things like that,” he added. “So we’ve expanded the use profile from something beyond fibers to something that is a much more extensive portfolio of possible kinds of materials.”

And, yes, there are even designs at the Synthetic Spider Silk Lab for developing a Spider-Man web-slinger material. The US Navy is interested in non-destructive ways of disabling an enemy vessel, such as fouling its propeller. The project also includes producing synthetic proteins from the hagfish, an eel-like critter that exudes a gelatinous slime when threatened.

Lewis cautioned that while the potential for spider silk is certainly headline-grabbing, much of the hype is not focused on the unique mechanical properties that could lead to advances in healthcare and other industries.

“We want to see spider silk out there because it’s a unique material, not because it’s got marketing appeal,” he said.

Image Credit: mycteria / Shutterstock.com

Posted in Human Robots

#434792 Extending Human Longevity With ...

Lizards can regrow entire limbs. Flatworms, starfish, and sea cucumbers regrow entire bodies. Sharks constantly replace lost teeth, often growing over 20,000 teeth throughout their lifetimes. How can we translate these near-superpowers to humans?

The answer: through the cutting-edge innovations of regenerative medicine.

While big data and artificial intelligence transform how we practice medicine and invent new treatments, regenerative medicine is about replenishing, replacing, and rejuvenating our physical bodies.

In Part 5 of this blog series on Longevity and Vitality, I detail three of the regenerative technologies working together to fully augment our vital human organs.

Replenish: Stem cells, the regenerative engine of the body
Replace: Organ regeneration and bioprinting
Rejuvenate: Young blood and parabiosis

Let’s dive in.

Replenish: Stem Cells – The Regenerative Engine of the Body
Stem cells are undifferentiated cells that can transform into specialized cells, such as heart, nerve, liver, lung, or skin cells, and can also divide to produce more stem cells.

In a child or young adult, these stem cells are in large supply, acting as a built-in repair system. They are often summoned to the site of damage or inflammation to repair and restore normal function.

But as we age, our supply of stem cells begins to diminish as much as 100- to 10,000-fold in different tissues and organs. In addition, stem cells undergo genetic mutations, which reduce their quality and effectiveness at renovating and repairing your body.

Imagine your stem cells as a team of repairmen in your newly constructed mansion. When the mansion is new and the repairmen are young, they can fix everything perfectly. But as the repairmen age and reduce in number, your mansion eventually goes into disrepair and finally crumbles.

What if you could restore and rejuvenate your stem cell population?

One option to accomplish this restoration and rejuvenation is to extract and concentrate your own autologous adult stem cells from places like your adipose (or fat) tissue or bone marrow.

These stem cells, however, are fewer in number and have undergone mutations (depending on your age) from their original ‘software code.’ Many scientists and physicians now prefer an alternative source, obtaining stem cells from the placenta or umbilical cord, the leftovers of birth.

These stem cells, available in large supply and expressing the undamaged software of a newborn, can be injected into joints or administered intravenously to rejuvenate and revitalize.

Think of these stem cells as chemical factories generating vital growth factors that can help to reduce inflammation, fight autoimmune disease, increase muscle mass, repair joints, and even revitalize skin and grow hair.

Over the last decade, the number of publications per year on stem cell-related research has increased 40x, and the stem cell market is expected to increase to $297 billion by 2022.

Rising research and development initiatives to develop therapeutic options for chronic diseases and growing demand for regenerative treatment options are the most significant drivers of this budding industry.

Biologists led by Kohji Nishida at Osaka University in Japan have discovered a new way to nurture and grow the tissues that make up the human eyeball. The scientists are able to grow retinas, corneas, the eye’s lens, and more, using only a small sample of adult skin.

In a Stanford study, seven of 18 stroke victims who agreed to stem cell treatments showed remarkable motor function improvements. This treatment could work for other neurodegenerative conditions such as Alzheimer’s, Parkinson’s, and ALS.

Doctors from the USC Neurorestoration Center and Keck Medicine of USC injected stem cells into the damaged cervical spine of a recently paralyzed 21-year-old man. Three months later, he showed dramatic improvement in sensation and movement of both arms.

In 2019, doctors in the U.K. cured a patient with HIV for the second time ever thanks to stem cells. After giving the cancer patient (who also had HIV) an allogeneic haematopoietic (i.e., blood) stem cell treatment for his Hodgkin’s lymphoma, the patient went into long-term HIV remission—18 months and counting at the time of the study’s publication.

Replace: Organ Regeneration and 3D Printing
Every 10 minutes, someone is added to the US organ transplant waiting list, totaling over 113,000 people waiting for replacement organs as of January 2019.

Countless more people in need of ‘spare parts’ never make it onto the waiting list. And on average, 20 people die each day while waiting for a transplant.

It’s estimated that 35 percent of all US deaths (~900,000 people per year) could be prevented or delayed with access to organ replacements.

The excessive demand for donated organs will only intensify as technologies like self-driving cars make the world safer, given that many donated organs come from victims of auto and motorcycle accidents. Safer vehicles mean fewer accidents and fewer donations.

Clearly, replacement and regenerative medicine represent a massive opportunity.

Organ Entrepreneurs
Enter United Therapeutics CEO, Dr. Martine Rothblatt. A one-time aerospace entrepreneur (she was the founder of Sirius Satellite Radio), Rothblatt changed careers in the 1990s after her daughter developed a rare lung disease.

Her moonshot today is to create an industry of replacement organs. With an initial focus on diseases of the lung, Rothblatt set out to create replacement lungs. To accomplish this goal, her company United Therapeutics has pursued a number of technologies in parallel.

3D Printing Lungs
In 2017, United teamed up with one of the world’s largest 3D printing companies, 3D Systems, to build a collagen bioprinter and is paying another company, 3Scan, to slice up lungs and create detailed maps of their interior.

This 3D Systems bioprinter now operates according to a method called stereolithography. A UV laser flickers through a shallow pool of collagen doped with photosensitive molecules. Wherever the laser lingers, the collagen cures and becomes solid.

Gradually, the object being printed is lowered and new layers are added. The printer can currently lay down collagen at a resolution of around 20 micrometers, but will need to achieve resolution of a micrometer in size to make the lung functional.

Once a collagen lung scaffold has been printed, the next step is to infuse it with human cells, a process called recellularization.

The goal here is to use stem cells that grow on scaffolding and differentiate, ultimately providing the proper functionality. Early evidence indicates this approach can work.

In 2018, Harvard University experimental surgeon Harald Ott reported that he pumped billions of human cells (from umbilical cords and diced lungs) into a pig lung stripped of its own cells. When Ott’s team reconnected it to a pig’s circulation, the resulting organ showed rudimentary function.

Humanizing Pig Lungs
Another of Rothblatt’s organ manufacturing strategies is called xenotransplantation, the idea of transplanting an animal’s organs into humans who need a replacement.

Given the fact that adult pig organs are similar in size and shape to those of humans, United Therapeutics has focused on genetically engineering pigs to allow humans to use their organs. “It’s actually not rocket science,” said Rothblatt in her 2015 TED talk. “It’s editing one gene after another.”

To accomplish this goal, United Therapeutics made a series of investments in companies such as Revivicor Inc. and Synthetic Genomics Inc., and signed large funding agreements with the University of Maryland, University of Alabama, and New York Presbyterian/Columbia University Medical Center to create xenotransplantation programs for new hearts, kidneys, and lungs, respectively. Rothblatt hopes to see human translation in three to four years.

In preparation for that day, United Therapeutics owns a 132-acre property in Research Triangle Park and built a 275,000-square-foot medical laboratory that will ultimately have the capability to annually produce up to 1,000 sets of healthy pig lungs—known as xenolungs—from genetically engineered pigs.

Lung Ex Vivo Perfusion Systems
Beyond 3D printing and genetically engineering pig lungs, Rothblatt has already begun implementing a third near-term approach to improve the supply of lungs across the US.

Only about 30 percent of potential donor lungs meet transplant criteria in the first place, and only about 85 percent of those remain usable once they arrive at the surgery center. As a result (0.30 × 0.85 ≈ 0.25), nearly 75 percent of possible lungs never make it to a recipient in need.

What if these lungs could be rejuvenated? This concept informs Dr. Rothblatt’s next approach.

In 2016, United Therapeutics invested $41.8 million in TransMedics Inc., an Andover, Massachusetts company that develops ex vivo perfusion systems for donor lungs, hearts, and kidneys.

The XVIVO Perfusion System takes marginal-quality lungs that initially failed to meet transplantation standard-of-care criteria and perfuses and ventilates them at normothermic conditions, providing an opportunity for surgeons to reassess transplant suitability.

Rejuvenate: Young Blood and Parabiosis
In HBO’s parody of the Bay Area tech community, Silicon Valley, one of the episodes (Season 4, Episode 5) is named “The Blood Boy.”

In this installment, tech billionaire Gavin Belson (Matt Ross) is meeting with Richard Hendricks (Thomas Middleditch) and his team, speaking about the future of the decentralized internet. A young, muscled twenty-something disrupts the meeting when he rolls in a transfusion stand and silently hooks an intravenous connection between himself and Belson.

Belson then introduces the newcomer as his “transfusion associate” and begins to explain the science of parabiosis: “Regular transfusions of the blood of a younger physically fit donor can significantly retard the aging process.”

While the sitcom is fiction, that science has merit, and the scenario portrayed in the episode is already happening today.

On the first point, research at Stanford and Harvard has demonstrated that older animals, when transfused with the blood of young animals, experience regeneration across many tissues and organs.

The opposite is also true: young animals, when transfused with the blood of older animals, experience accelerated aging. But capitalizing on this virtual fountain of youth has been tricky.

Ambrosia
One company, a San Francisco-based startup called Ambrosia, recently commenced a trial on parabiosis. Their protocol is simple: Healthy participants aged 35 and older get a transfusion of blood plasma from donors under 25, and researchers monitor their blood over the next two years for molecular indicators of health and aging.

Ambrosia’s founder Jesse Karmazin became interested in launching a company around parabiosis after seeing impressive data from animals and studies conducted abroad in humans: In one trial after another, subjects experience a reversal of aging symptoms across every major organ system. “The effects seem to be almost permanent,” he said. “It’s almost like there’s a resetting of gene expression.”

Infusing your own cord blood stem cells as you age may also have longevity benefits, but the regulatory picture is unsettled. Following an FDA press release in February 2019, Ambrosia halted its consumer-facing treatment after several months of operation.

Understandably, the FDA raised concerns about the practice of parabiosis because to date, there is a marked lack of clinical data to support the treatment’s effectiveness.

Elevian
On the other end of the reputability spectrum is a startup called Elevian, spun out of Harvard University. Elevian is approaching longevity with a careful, scientifically validated strategy. (Full Disclosure: I am both an advisor to and investor in Elevian.)

CEO Mark Allen, MD, is joined by a dozen MDs and PhDs out of Harvard. Elevian’s scientific founders started the company after identifying specific circulating factors that may be responsible for the “young blood” effect.

One example: A naturally occurring molecule known as “growth differentiation factor 11,” or GDF11, when injected into aged mice, reproduces many of the regenerative effects of young blood, regenerating heart, brain, muscles, lungs, and kidneys.

More specifically, GDF11 supplementation reduces age-related cardiac hypertrophy, accelerates skeletal muscle repair, improves exercise capacity, improves brain function and cerebral blood flow, and improves metabolism.

Elevian is developing a number of therapeutics that regulate GDF11 and other circulating factors. The goal is to restore our body’s natural regenerative capacity, which Elevian believes can address some of the root causes of age-associated disease with the promise of reversing or preventing many aging-related diseases and extending the healthy lifespan.

Conclusion
In 1992, futurist Leland Kaiser coined the term “regenerative medicine”:

“A new branch of medicine will develop that attempts to change the course of chronic disease and in many instances will regenerate tired and failing organ systems.”

Since then, the powerful regenerative medicine industry has grown exponentially, and this rapid growth is anticipated to continue.

A dramatic extension of the human healthspan is just over the horizon. Soon, we’ll all have the regenerative superpowers previously relegated to a handful of animals and comic books.

What new opportunities open up when anyone, anywhere, at any time can regenerate, replenish, and replace entire organs and metabolic systems on command?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Giovanni Cancemi / Shutterstock.com


#434786 AI Performed Like a Human on a Gestalt ...

Dr. Been Kim wants to rip open the black box of deep learning.

A senior researcher at Google Brain, Kim specializes in a sort of AI psychology. Like cognitive psychologists before her, she develops various ways to probe the alien minds of artificial neural networks (ANNs), digging into their gory details to better understand the models and their responses to inputs.

The more interpretable ANNs are, the reasoning goes, the easier it is to reveal potential flaws in their reasoning. And if we understand when or why our systems choke, we’ll know when not to use them—a foundation for building responsible AI.

There are already several ways to tap into ANN reasoning, but Kim’s inspiration for unraveling the AI black box came from an entirely different field: cognitive psychology. The field aims to discover fundamental rules of how the human mind—essentially also a tantalizing black box—operates, Kim wrote with her colleagues.

In a new paper uploaded to the pre-publication server arXiv, the team described a way to essentially perform a human cognitive test on ANNs. The test probes how we automatically complete gaps in what we see, so that fragments form entire objects—for example, perceiving a circle from a bunch of loose dots arranged along a clock face. Psychologists dub this the “law of completion,” a highly influential idea that led to explanations of how our minds generalize data into concepts.

Because deep neural networks in machine vision loosely mimic the structure and connections of the visual cortex, the authors naturally asked: do ANNs also exhibit the law of completion? And what does that tell us about how an AI thinks?

Enter the Germans
The law of completion is part of a series of ideas from Gestalt psychology. Back in the 1920s, long before the advent of modern neuroscience, a group of German experimental psychologists asked: in this chaotic, flashy, unpredictable world, how do we piece together input in a way that leads to meaningful perceptions?

The result is a group of principles known together as the Gestalt effect: that the mind self-organizes to form a global whole. In the more famous words of Gestalt psychologist Kurt Koffka, our perception forms a whole that’s “something else than the sum of its parts.” Not greater than; just different.

Although the theory has its critics, subsequent studies in humans and animals suggest that the law of completion happens on both the cognitive and neuroanatomical level.

Take a look at the drawing below. You immediately “see” a shape that’s actually the negative: a triangle or a square (A and B). Or you further perceive a 3D ball (C), or a snake-like squiggle (D). Your mind fills in blank spots, so that the final perception is more than just the black shapes you’re explicitly given.

Image Credit: Wikimedia Commons contributors, the free media repository.
Neuroscientists now think that the effect comes from how our visual system processes information. Arranged in multiple layers and columns, lower-level neurons—those first to wrangle the data—tend to extract simpler features such as lines or angles. In Gestalt speak, they “see” the parts.

Then, layer by layer, perception becomes more abstract, until higher levels of the visual system directly interpret faces or objects—or things that don’t really exist. That is, the “whole” emerges.

The Experiment Setup
Inspired by these classical experiments, Kim and team developed a protocol to test the Gestalt effect on two feed-forward ANNs: one simple, the other, dubbed “Inception V3,” far more complex and widely used in the machine vision community.

The main idea is similar to the triangle drawings above. First, the team generated three datasets: one set shows complete, ordinary triangles. The second—the “Illusory” set—shows triangles with the edges removed but the corners intact. Thanks to the Gestalt effect, to us humans these generally still look like triangles. The third set also only shows incomplete triangle corners. But here, the corners are randomly rotated so that we can no longer imagine a line connecting them—hence, no more triangle.

To generate a dataset large enough to tease out small effects, the authors changed the background color, image rotation, and other aspects of the dataset. In all, they produced nearly 1,000 images to test their ANNs on.
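As a rough illustration of how such stimuli can be parameterized, each image reduces to three corner positions plus a per-corner orientation, where random orientations destroy the implied triangle. This is a toy sketch under stated assumptions, not the authors' actual generator; the function names and geometry are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def triangle_corners(center=(0.5, 0.5), radius=0.3):
    """Vertex coordinates of an equilateral triangle in a unit square."""
    angles = np.deg2rad([90.0, 210.0, 330.0])
    cx, cy = center
    return np.stack([cx + radius * np.cos(angles),
                     cy + radius * np.sin(angles)], axis=1)

def corner_orientations(randomize=False):
    """Orientation (degrees) of each corner wedge. Wedges pointing toward
    the triangle's center imply a triangle; random orientations break it."""
    if randomize:
        return rng.uniform(0.0, 360.0, size=3)   # "no more triangle" set
    return np.array([270.0, 30.0, 150.0])        # "illusory" set
```

Varying background color, image rotation, and other aspects, as the authors did, would then multiply these base stimuli into a dataset of hundreds of images.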

“At a high level, we compare an ANN’s activation similarities between the three sets of stimuli,” the authors explained. The process is two steps: first, train the AI on complete triangles. Second, test them on the datasets. If the response is more similar between the illusory set and the complete triangle—rather than the randomly rotated set—it should suggest a sort of Gestalt closure effect in the network.
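The comparison step can be sketched in a few lines. This is a simplified stand-in for the paper's analysis, with an assumed similarity measure and toy activation vectors rather than the authors' code: take a layer's activations for each stimulus set and check whether the illusory set sits closer to the complete set than the rotated set does:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two activation vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def closure_score(act_complete, act_illusory, act_rotated):
    """A positive score is the signature of closure: illusory stimuli
    activate the layer more like complete triangles than the randomly
    rotated corners do."""
    return (cosine_similarity(act_complete, act_illusory)
            - cosine_similarity(act_complete, act_rotated))

# Toy vectors standing in for a layer's mean activation per stimulus set.
complete = np.array([1.0, 0.9, 0.1])
illusory = np.array([0.9, 0.8, 0.2])   # resembles the complete set
rotated  = np.array([0.1, 0.2, 1.0])   # does not

score = closure_score(complete, illusory, rotated)
```

In this toy setup the score comes out positive, which is the pattern that would suggest a closure-like effect in a real network.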

Machine Gestalt
Right off the bat, the team got their answer: yes, ANNs do seem to exhibit the law of closure.

When trained on natural images, the networks classified the illusory set as triangles more readily than networks with randomized connection weights or networks trained on white noise did.

When the team dug into the “why,” things got more interesting. The ability to complete an image correlated with the network’s ability to generalize.

Humans subconsciously do this constantly: anything with a handle made out of ceramic, regardless of shape, could easily be a mug. ANNs still struggle to grasp common features—clues that immediately tell us “hey, that’s a mug!” But when they do, it sometimes allows the networks to better generalize.

“What we observe here is that a network that is able to generalize exhibits…more of the closure effect [emphasis theirs], hinting that the closure effect reflects something beyond simply learning features,” the team wrote.

What’s more, remarkably similar to the visual cortex, “higher” levels of the ANNs showed more of the closure effect than lower layers, and—perhaps unsurprisingly—the more layers a network had, the more it exhibited the closure effect.

As the networks learned, their ability to map out objects from fragments also improved. When the team messed around with the brightness and contrast of the images, the AI still learned to see the forest for the trees.

“Our findings suggest that neural networks trained with natural images do exhibit closure,” the team concluded.

AI Psychology
That’s not to say that ANNs recapitulate the human brain. As Google’s Deep Dream, an effort to coax AIs into spilling what they’re perceiving, clearly demonstrates, machine vision sees some truly weird stuff.

In contrast, because they’re modeled after the human visual cortex, perhaps it’s not all that surprising that these networks also exhibit higher-level properties inherent to how we process information.

But to Kim and her colleagues, that’s exactly the point.

“The field of psychology has developed useful tools and insights to study human brains—tools that we may be able to borrow to analyze artificial neural networks,” they wrote.

By tweaking these tools to better analyze machine minds, the authors were able to gain insight on how similarly or differently they see the world from us. And that’s the crux: the point isn’t to say that ANNs perceive the world sort of, kind of, maybe similar to humans. It’s to tap into a wealth of cognitive psychology tools, established over decades using human minds, to probe that of ANNs.

“The work here is just one step along a much longer path,” the authors conclude.

“Understanding where humans and neural networks differ will be helpful for research on interpretability by enlightening the fundamental differences between the two interesting species.”

Image Credit: Popova Alena / Shutterstock.com


#434643 Sensors and Machine Learning Are Giving ...

According to some scientists, humans really do have a sixth sense. There’s nothing supernatural about it: the sense of proprioception tells you about the relative positions of your limbs and the rest of your body. Close your eyes, block out all sound, and you can still use this internal “map” of your external body to locate your muscles and body parts – you have an innate sense of the distances between them, and the perception of how they’re moving, above and beyond your sense of touch.

This sense is invaluable for allowing us to coordinate our movements. In humans, the brain integrates senses including touch, heat, and the tension in muscle spindles to allow us to build up this map.

Replicating this complex sense has posed a great challenge for roboticists. We can imagine simulating the sense of sight with cameras, sound with microphones, or touch with pressure-pads. Robots with chemical sensors could be far more accurate than us in smell and taste, but building in proprioception, the robot’s sense of itself and its body, is far more difficult, and is a large part of why humanoid robots are so tricky to get right.

Simultaneous localization and mapping (SLAM) software allows robots to use their own senses to build up a picture of their surroundings and environment, but they’d need a keen sense of the position of their own bodies to interact with it. If something unexpected happens, or in dark environments where primary senses are not available, robots can struggle to keep track of their own position and orientation. For human-robot interaction, wearable robotics, and delicate applications like surgery, tiny differences can be extremely important.

Piecemeal Solutions
In the case of hard robotics, this is generally solved by using a series of strain and pressure sensors in each joint, which allow the robot to determine how its limbs are positioned. That works fine for rigid robots with a limited number of joints, but for softer, more flexible robots this information is limited. Roboticists face a dilemma: fit a vast, complex array of sensors covering every degree of freedom of the robot's movement, or settle for limited proprioceptive skill.

New techniques, often involving new arrays of sensory material and machine-learning algorithms to fill in the gaps, are starting to tackle this problem. Take the work of Thomas George Thuruthel and colleagues in Pisa and San Diego, who draw inspiration from human proprioception. In a new paper in Science Robotics, they describe the use of soft sensors distributed through a robotic finger at random. This random placement mirrors the distributed, adaptive sensing found in humans and animals, rather than relying on feedback from a limited number of fixed positions.

The sensors allow the soft robot to react to touch and pressure in many different locations, forming a map of itself as it contorts into complicated positions. The machine-learning algorithm serves to interpret the signals from the randomly-distributed sensors: as the finger moves around, it's observed by a motion capture system. After training, the robot's neural network can associate the feedback from the sensors with the position of the finger detected by the motion-capture system, which can then be discarded. The robot observes its own motions to understand the shapes that its soft body can take, and translates them into the language of its soft sensors.
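At its core, this is a supervised regression problem: sensor readings in, body pose out. The sketch below uses entirely synthetic data and plain ridge regression in place of the authors' neural network; the sensor count, noise level, and linear sensor-to-pose relation are all invented for illustration, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 12 randomly placed soft sensors on a finger, with
# ground-truth fingertip position (x, y, z) supplied by motion capture.
n_samples, n_sensors = 500, 12
true_map = rng.normal(size=(n_sensors, 3))          # unknown sensor-to-pose relation
sensor_readings = rng.normal(size=(n_samples, n_sensors))
fingertip_xyz = sensor_readings @ true_map + 0.01 * rng.normal(size=(n_samples, 3))

# Ridge regression stands in for the paper's neural network: learn to
# predict pose from sensors, after which the mocap rig can be discarded.
lam = 1e-3
X, Y = sensor_readings, fingertip_xyz
W = np.linalg.solve(X.T @ X + lam * np.eye(n_sensors), X.T @ Y)

pred = X @ W
rmse = np.sqrt(np.mean((pred - Y) ** 2))
print(f"training RMSE: {rmse:.4f}")
```

Once `W` is fitted, pose estimates come from sensor readings alone, which is the sense in which the external tracking system becomes unnecessary.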

“The advantages of our approach are the ability to predict complex motions and forces that the soft robot experiences (which is difficult with traditional methods) and the fact that it can be applied to multiple types of actuators and sensors,” said Michael Tolley of the University of California San Diego. “Our method also includes redundant sensors, which improves the overall robustness of our predictions.”

The use of machine learning lets the roboticists come up with a reliable model for this complex, non-linear system of motions for the actuators, something difficult to do by directly calculating the expected motion of the soft-bot. It also resembles the human system of proprioception, built on redundant sensors that change and shift in position as we age.

In Search of a Perfect Arm
Another approach to training robots in using their bodies comes from Robert Kwiatkowski and Hod Lipson of Columbia University in New York. In their paper “Task-agnostic self-modeling machines,” also recently published in Science Robotics, they describe a new type of robotic arm.

Robotic arms and hands are getting increasingly dexterous, but training them to grasp a large array of objects and perform many different tasks can be an arduous process. It’s also an extremely valuable skill to get right: Amazon is highly interested in the perfect robot arm. Google hooked together an array of over a dozen robot arms so that they could share information about grasping new objects, in part to cut down on training time.

Individually training a robot arm to perform every individual task takes time and reduces the adaptability of your robot: either you need an ML algorithm with a huge dataset of experiences, or, even worse, you need to hard-code thousands of different motions. Kwiatkowski and Lipson attempt to overcome this by developing a robotic system that has a “strong sense of self”: a model of its own size, shape, and motions.

They do this using deep machine learning. The robot begins with no prior knowledge of its own shape or the underlying physics of its motion. It then executes a thousand random trajectories, recording the motion of its arm as it goes. Kwiatkowski and Lipson compare this to a baby in its first year of life observing the motions of its own hands and limbs, fascinated by picking up and manipulating objects.

Again, once the robot has trained itself to interpret these signals and build up a robust model of its own body, it's ready for the next stage. Using that deep-learning algorithm, the researchers then ask the robot to design strategies to accomplish simple pick-and-place and handwriting tasks. Rather than laboriously and narrowly training itself for each individual task, limiting its abilities to a very narrow set of circumstances, the robot can now strategize how to use its arm for a much wider range of situations, with no additional task-specific training.
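The pipeline can be caricatured in a few lines: babble randomly, fit a self-model from the recorded data, then search that model for actions that achieve a new task. The sketch below invents a two-joint planar arm and substitutes a nearest-neighbor lookup for the paper's deep network; every number and function here is a toy stand-in, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 2-joint planar arm; its "true" kinematics are unknown to the robot.
L1, L2 = 1.0, 0.8
def true_arm(q):  # joint angles -> end-effector (x, y)
    x = L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1])
    y = L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])
    return np.array([x, y])

# 1. Motor babbling: execute random joint configurations, record outcomes.
Q = rng.uniform(-np.pi, np.pi, size=(1000, 2))
P = np.array([true_arm(q) for q in Q])

# 2. Learn a self-model (nearest-neighbor lookup stands in for deep learning).
def self_model(q):
    return P[np.argmin(np.linalg.norm(Q - q, axis=1))]

# 3. Task-agnostic use: reach a target by searching candidate actions
#    against the learned model, with no task-specific training.
target = np.array([1.2, 0.5])
candidates = rng.uniform(-np.pi, np.pi, size=(5000, 2))
best_q = min(candidates, key=lambda q: np.linalg.norm(self_model(q) - target))
err = np.linalg.norm(true_arm(best_q) - target)
print(f"reaching error: {err:.3f}")
```

The key design point is in step 3: the same learned model serves any target, which is what "task-agnostic" buys over per-task training.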

Damage Control
In a further experiment, the researchers replaced part of the arm with a “deformed” component, intended to simulate what might happen if the robot was damaged. The robot can then detect that something’s up and “reconfigure” itself, reconstructing its self-model by going through the training exercises once again; it was then able to perform the same tasks with only a small reduction in accuracy.
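The damage-detection step amounts to monitoring the gap between what the self-model predicts and what the body actually does, and rerunning the training exercises when that gap grows too large. A minimal sketch, with made-up numbers and a hypothetical error threshold:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical self-model check: compare predicted vs observed positions.
def model_mismatch(predicted, observed, threshold=0.1):
    """Return True if the self-model no longer matches reality."""
    error = np.linalg.norm(predicted - observed, axis=-1).mean()
    return error > threshold

# Before damage: model and body agree to within sensor noise.
pred = rng.normal(size=(50, 2))
obs = pred + 0.01 * rng.normal(size=(50, 2))
assert not model_mismatch(pred, obs)

# After damage (e.g. a deformed link): a systematic offset appears,
# signalling that the self-model should be rebuilt by retraining.
obs_damaged = obs + np.array([0.3, 0.0])
assert model_mismatch(pred, obs_damaged)
```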

Machine learning techniques are opening up the field of robotics in ways we’ve never seen before. Combining them with our understanding of how humans and other animals are able to sense and interact with the world around us is bringing robotics closer and closer to becoming truly flexible and adaptable, and, eventually, omnipresent.

But before they can get out and shape the world, as these studies show, they will need to understand themselves.

Image Credit: jumbojan / Shutterstock.com

Posted in Human Robots

#434569 From Parkour to Surgery, Here Are the ...

The robot revolution may not be here quite yet, but our mechanical cousins have made some serious strides. And now some of the leading experts in the field have provided a rundown of what they see as the 10 most exciting recent developments.

Compiled by the editors of the journal Science Robotics, the list includes some of the most impressive original research and innovative commercial products to make a splash in 2018, as well as a couple from 2017 that really changed the game.

1. Boston Dynamics’ Atlas doing parkour

It seems like barely a few months go by without Boston Dynamics rewriting the book on what a robot can and can’t do. Last year they really outdid themselves when they got their Atlas humanoid robot to do parkour, leaping over logs and jumping between wooden crates.

Atlas’s creators have admitted that the videos we see are cherry-picked from multiple attempts, many of which don’t go so well. But they say they’re meant to be inspirational and aspirational rather than an accurate picture of where robotics is today. And combined with the company’s dog-like Spot robot, they are certainly pushing boundaries.

2. Intuitive Surgical’s da Vinci SP platform
Robotic surgery isn’t new, but the technology is improving rapidly. Market leader Intuitive’s da Vinci surgical robot was first cleared by the FDA in 2000, but since then it’s come a long way, with the company now producing three separate systems.

The latest addition is the da Vinci SP (single port) system, which is able to insert three instruments into the body through a single 2.5cm cannula (tube), bringing a whole new meaning to minimally invasive surgery. The system was granted FDA clearance for urological procedures last year, and the company has now started shipping the new system to customers.

3. Soft robot that navigates through growth

Roboticists have long borrowed principles from the animal kingdom, but a new robot design that mimics the way plant tendrils and fungal mycelia move by growing at the tip has really broken the mold on robot navigation.

The editors point out that this is the perfect example of bio-inspired design; the researchers didn’t simply copy nature, they took a general principle and expanded on it. The tube-like robot unfolds from the front as pneumatic pressure is applied, but unlike a plant, it can grow at the speed of an animal walking and can navigate using visual feedback from a camera.

4. 3D printed liquid crystal elastomers for soft robotics
Soft robotics is one of the fastest-growing sub-disciplines in the field, but powering these devices without rigid motors or pumps is an ongoing challenge. A variety of shape-shifting materials have been proposed as potential artificial muscles, including liquid crystal elastomeric actuators.

Harvard engineers have now demonstrated that these materials can be 3D printed using a special ink that allows the designer to easily program in all kinds of unusual shape-shifting abilities. What’s more, their technique produces actuators capable of lifting significantly more weight than previous approaches.

5. Muscle-mimetic, self-healing, and hydraulically amplified actuators
In another effort to find a way to power soft robots, last year researchers at the University of Colorado Boulder designed a series of super low-cost artificial muscles that can lift 200 times their own weight and even heal themselves.

The devices rely on pouches filled with a liquid that makes them contract with the force and speed of mammalian skeletal muscles when a voltage is applied. The most promising for robotics applications is the so-called Peano-HASEL, which features multiple rectangular pouches connected in series that contract linearly, just like real muscle.

6. Self-assembled nanoscale robot from DNA

While you may think of robots as hulking metallic machines, a substantial number of scientists are working on making nanoscale robots out of DNA. And last year German researchers built the first remote-controlled DNA robotic arm.

They created a length of tightly-bound DNA molecules to act as the arm and attached it to a DNA base plate via a flexible joint. Because DNA carries a charge, they were able to get the arm to swivel around like the hand of a clock by applying a voltage and switch direction by reversing that voltage. The hope is that this arm could eventually be used to build materials piece by piece at the nanoscale.

7. DelFly nimble bioinspired robotic flapper

Robotics doesn’t only borrow from biology—sometimes it gives back to it, too. And a new flapping-winged robot designed by Dutch engineers that mimics the humble fruit fly has done just that, by revealing how the animals that inspired it carry out predator-dodging maneuvers.

The lab has been building flapping robots for years, but this time they ditched the airplane-like tail used to control previous incarnations. Instead, they used insect-inspired adjustments to the motions of its twin pairs of flapping wings to hover, pitch, and roll with the agility of a fruit fly. That has provided a useful platform for investigating insect flight dynamics, as well as more practical applications.

8. Soft exosuit wearable robot

Exoskeletons could prevent workplace injuries, help people walk again, and even boost soldiers’ endurance. Strapping on bulky equipment isn’t ideal, though, so researchers at Harvard are working on a soft exoskeleton that combines specially-designed textiles, sensors, and lightweight actuators.

And last year the team made an important breakthrough by combining their novel exoskeleton with a machine-learning algorithm that automatically tunes the device to the user’s particular walking style. Using physiological data, it is able to adjust when and where the device needs to deliver a boost to the user’s natural movements to improve walking efficiency.
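Conceptually, this tuning loop treats the walker as a black-box cost function: propose actuation parameters, measure the physiological response, and keep whatever lowers it. The sketch below is a heavily simplified stand-in: random search over hypothetical onset/offset timings and a made-up noisy cost in place of the team's real optimization over measured metabolic data.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for human-in-the-loop tuning: the real system measures
# metabolic cost from physiological data; here an invented noisy
# quadratic bowl plays that role. The parameters are hypothetical
# actuation onset/offset times as a percentage of the gait cycle.
def measured_cost(onset, offset):
    ideal = np.array([25.0, 55.0])            # unknown to the optimizer
    noise = rng.normal(scale=0.5)             # measurement noise
    return np.sum((np.array([onset, offset]) - ideal) ** 2) / 100 + noise

# Random search stands in for more sample-efficient optimizers.
best, best_cost = None, np.inf
for _ in range(200):
    onset = rng.uniform(0, 50)
    offset = rng.uniform(40, 90)
    c = measured_cost(onset, offset)
    if c < best_cost:
        best, best_cost = (onset, offset), c
print(f"tuned onset/offset: {best[0]:.1f}%, {best[1]:.1f}% of gait cycle")
```

In practice each cost evaluation is expensive (minutes of walking per measurement), which is why sample-efficient optimization matters far more here than in simulation.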

9. Universal Robots (UR) e-Series Cobots
Robots in factories are nothing new. The enormous mechanical arms you see in car factories normally have to be kept in cages to prevent them from accidentally crushing people. In recent years there’s been growing interest in “co-bots,” collaborative robots designed to work side-by-side with their human colleagues and even learn from them.

Earlier this year saw the demise of Rethink Robotics, the pioneer of the approach. But the simple single-arm devices made by Danish firm Universal Robots are becoming ubiquitous in workshops and warehouses around the world, accounting for about half of global co-bot sales. Last year they released their latest e-Series, with enhanced safety features and force/torque sensing.

10. Sony’s aibo
After a nearly 20-year hiatus, Sony’s robotic dog aibo is back, and it’s had some serious upgrades. As well as a revamp to its appearance, the new robotic pet takes advantage of advances in AI, with improved environmental and command awareness and the ability to develop a unique character based on interactions with its owner.

The editors note that this new context awareness marks the device out as a significant evolution in social robots, which many hope could aid in childhood learning or provide companionship for the elderly.

Image Credit: DelFly Nimble / CC BY-SA 4.0

Posted in Human Robots