#434854 New Lifelike Biomaterial Self-Reproduces ...
Life demands flux.
Every living organism is constantly changing: cells divide and die, proteins build and disintegrate, DNA breaks and heals. Life demands metabolism—the simultaneous builder and destroyer of living materials—to continuously upgrade our bodies. That’s how we heal and grow, how we propagate and survive.
What if we could endow cold, static, lifeless robots with the gift of metabolism?
In a study published this month in Science Robotics, an international team developed a DNA-based method that gives raw biomaterials an artificial metabolism. Dubbed DASH—DNA-based assembly and synthesis of hierarchical materials—the method automatically generates “slime”-like nanobots that dynamically move and navigate their environments.
Like humans, the artificial lifelike material used external energy to constantly change the nanobots’ bodies in pre-programmed ways, recycling their DNA-based parts as both waste and raw material for further use. Some “grew” into the shape of molecular double-helixes; others “wrote” the DNA letters inside micro-chips.
The artificial life forms were also rather “competitive”—in quotes, because these molecular machines are not conscious. Yet when pitted against each other, two DASH bots automatically raced forward, crawling in typical slime-mold fashion at a scale easily seen under the microscope, and in some iterations, visible even to the naked eye.
“Fundamentally, we may be able to change how we create and use the materials with lifelike characteristics. Typically materials and objects we create in general are basically static… one day, we may be able to ‘grow’ objects like houses and maintain their forms and functions autonomously,” said study author Dr. Shogo Hamada to Singularity Hub.
“This is a great study that combines the versatility of DNA nanotechnology with the dynamics of living materials,” said Dr. Job Boekhoven at the Technical University of Munich, who was not involved in the work.
Dissipative Assembly
The study builds on previous ideas on how to make molecular Lego blocks that essentially assemble—and destroy—themselves.
Although the inspiration came from biological metabolism, scientists have long hoped to cut their reliance on nature. At its core, metabolism is just a bunch of well-coordinated chemical reactions, programmed by eons of evolution. So why build artificial lifelike materials still tethered by evolution when we can use chemistry to engineer completely new forms of artificial life?
Back in 2015, for example, a team led by Boekhoven described a way to mimic how our cells build their internal “structural beams,” aptly called the cytoskeleton. The key here, unlike many processes in nature, isn’t balance or equilibrium; rather, the team engineered an extremely unstable system that automatically builds—and sustains—assemblies from molecular building blocks when given an external source of chemical energy.
Sound familiar? The team basically built molecular devices that “die” without “food.” Thanks to the laws of thermodynamics (hey ya, entropy!), that energy eventually dissipates, and the shapes automatically begin to break down, completing an artificial “circle of life.”
The new study took the system one step further: rather than just mimicking synthesis, they completed the circle by coupling the building process with dissipative assembly.
Here, the “assembling units themselves are also autonomously created from scratch,” said Hamada.
DNA Nanobots
The process of building DNA nanobots starts on a microfluidic chip.
Decades of research have allowed researchers to optimize DNA assembly outside the body. With the help of catalysts that “bind” individual molecules together, the team found they could easily alter the shape of the self-assembling DNA bots, which formed fiber-like structures, by changing the geometry of the microfluidic chambers.
Computer simulations played a role here too: through both digital simulations and observations under the microscope, the team was able to identify a few critical rules that helped them predict how their molecules self-assemble while navigating a maze of blocking “pillars” and channels carved onto the microchips.
This “enabled a general design strategy for the DASH patterns,” they said.
In particular, the whirling motion of the fluids as they coursed through—and bumped into—ridges in the chips seems to help the DNA molecules “entangle into networks,” the team explained.
These insights helped the team further develop the “destroying” part of metabolism. Similar to linking molecules into DNA chains, their destruction also relies on enzymes.
Once the team pumped both “generation” and “degeneration” enzymes into the microchips, along with raw building blocks, the process was completely autonomous. The simultaneous processes were so lifelike that the team used a formalism common in robotics, the finite-state automaton, to characterize the behavior of their DNA nanobots from growth to eventual decay.
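To picture what that means, here is a minimal sketch in Python of a nanobot lifecycle expressed as a finite-state machine. The states and transition events are illustrative assumptions, not the model actually used in the paper.

```python
# Illustrative finite-state description of one DASH bot's lifecycle.
# The states and transition events are assumptions for illustration,
# not the formalism used in the paper.

TRANSITIONS = {
    "seed":     {"fuel_available": "growing"},
    "growing":  {"fuel_available": "steady", "fuel_exhausted": "decaying"},
    "steady":   {"fuel_exhausted": "decaying"},
    "decaying": {"fully_degraded": "dead"},
    "dead":     {},
}

def step(state: str, event: str) -> str:
    """Advance one step; unrecognized events leave the state unchanged."""
    return TRANSITIONS[state].get(event, state)

state = "seed"
for event in ("fuel_available", "fuel_available", "fuel_exhausted", "fully_degraded"):
    state = step(state, event)
    print(f"{event} -> {state}")
# Prints the full arc: growing -> steady -> decaying -> dead.
```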
“The result is a synthetic structure with features associated with life. These behaviors include locomotion, self-regeneration, and spatiotemporal regulation,” said Boekhoven.
Molecular Slime Molds
Just watching lifelike molecules grow in place, like the “running man” dance move, wasn’t enough.
In their next experiments, the team took inspiration from slugs to program undulating movements into their DNA bots. Here, “movement” is actually a sort of illusion: the machines “moved” because their front ends kept regenerating while their back ends degenerated. In essence, the molecular slime was built by linking multiple individual “DNA robot-like” units together: each unit receives a delayed “decay” signal from the head of the slime in a way that allows the whole artificial “organism” to crawl forward, against the stream of fluid flow.
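A toy simulation makes the trick easier to see. The numbers and the delay below are invented for illustration, not taken from the study, but they capture the principle: grow the front, decay the back, and the body drifts forward.

```python
# Toy 1D model of the crawling trick: the body "moves" because its front end
# keeps regenerating while a delayed decay signal erodes its back end.
# All numbers are invented for illustration.

DELAY = 3            # time steps before the decay signal reaches the rear
front, back = 5, 0   # indices of the body's leading and trailing units

for t in range(10):
    front += 1                    # generation enzymes extend the front
    if t >= DELAY:
        back += 1                 # degeneration enzymes erode the rear
    print(f"t={t}: body spans units {back}..{front}")

# After the delay, growth and decay balance: the body keeps a steady length
# while its midpoint drifts forward, even though no single unit ever moves.
```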
Here’s the fun part: the team eventually engineered two molecular slime bots and pitted them against each other, Mario Kart-style. In these experiments, the faster moving bot alters the state of its competitor to promote “decay.” This slows down the competitor, allowing the dominant DNA nanoslug to win in a race.
Of course, the end goal isn’t molecular podracing. Rather, the DNA-based bots could easily amplify a given DNA or RNA sequence, making them efficient nano-diagnosticians for viral and other infections.
The lifelike material can basically generate patterns that doctors can directly ‘see’ with their eyes, which makes DNA or RNA molecules from bacteria and viruses extremely easy to detect, the team said.
In the short run, “the detection device with this self-generating material could be applied to many places and help people on site, from farmers to clinics, by providing an easy and accurate way to detect pathogens,” explained Hamada.
A Futuristic Iron Man Nanosuit?
I’m letting my nerd flag fly here. In Avengers: Infinity War, the scientist-engineer-philanthropist-playboy Tony Stark unveiled a nanosuit that grew to his contours when needed and automatically healed when damaged.
DASH may one day realize that vision. For now, the team isn’t focused on using the technology for regenerating armor—rather, the dynamic materials could create new protein assemblies or chemical pathways inside living organisms, for example. The team also envisions adding simple sensing and computing mechanisms into the material, which can then easily be thought of as a robot.
Unlike synthetic biology, the goal isn’t to create artificial life. Rather, the team hopes to give lifelike properties to otherwise static materials.
“We are introducing a brand-new, lifelike material concept powered by its very own artificial metabolism. We are not making something that’s alive, but we are creating materials that are much more lifelike than have ever been seen before,” said lead author Dr. Dan Luo.
“Ultimately, our material may allow the construction of self-reproducing machines… artificial metabolism is an important step toward the creation of ‘artificial’ biological systems with dynamic, lifelike capabilities,” added Hamada. “It could open a new frontier in robotics.”
Image Credit: A timelapse image of DASH, by Jeff Tyson at Cornell University.
#434827 AI and Robotics Are Transforming ...
During the past 50 years, the frequency of recorded natural disasters has surged nearly five-fold.
In this blog, I’ll explore how converging exponential technologies (AI, robotics, drones, sensors, networks) are transforming the future of disaster relief: how we can prevent disasters in the first place, and how we can get help to victims during the first “golden hour,” when immediate relief can save lives.
Here are the three areas of greatest impact:
AI, predictive mapping, and the power of the crowd
Next-gen robotics and swarm solutions
Aerial drones and immediate aid supply
Let’s dive in!
Artificial Intelligence and Predictive Mapping
When it comes to immediate and high-precision emergency response, data is gold.
Already, the meteoric rise of space-based networks, stratosphere-hovering balloons, and 5G telecommunications infrastructure is in the process of connecting every last individual on the planet.
Aside from democratizing the world’s information, however, this upsurge in connectivity will soon grant anyone, particularly those most vulnerable to natural disasters, the ability to broadcast detailed geo-tagged data.
Armed with the power of data broadcasting and the force of the crowd, disaster victims now play a vital role in emergency response, turning a historically one-way blind rescue operation into a two-way dialogue between connected crowds and smart response systems.
With a skyrocketing abundance of data, however, comes a new paradigm: one in which we no longer face a scarcity of answers. Instead, it will be the quality of our questions that matters most.
This is where AI comes in: our mining mechanism.
In the case of emergency response, what if we could strategically map an almost endless amount of incoming data points? Or predict the dynamics of a flood and identify a tsunami’s most vulnerable targets before it even strikes? Or even amplify critical signals to trigger automatic aid by surveillance drones and immediately alert crowdsourced volunteers?
Already, a number of key players are leveraging AI, crowdsourced intelligence, and cutting-edge visualizations to optimize crisis response and multiply relief speeds.
Take One Concern, for instance. Born out of Stanford under the mentorship of leading AI expert Andrew Ng, One Concern leverages AI through analytical disaster assessment and calculated damage estimates.
Partnering with Los Angeles, San Francisco, and numerous cities in San Mateo County, the platform assigns verified, unique ‘digital fingerprints’ to every element in a city. Building robust models of each system, One Concern’s AI platform can then monitor site-specific impacts of not only climate change but each individual natural disaster, from sweeping thermal shifts to seismic movement.
This data, combined with city infrastructure records and data from past disasters, is then used to predict future damage under a range of disaster scenarios, informing prevention methods and flagging structures in need of reinforcement.
Within just four years, One Concern can now make precise predictions with an 85 percent accuracy rate in under 15 minutes.
And as IoT-connected devices and intelligent hardware continue to boom, a blooming trillion-sensor economy will only serve to amplify AI’s predictive capacity, offering us immediate, preventive strategies long before disaster strikes.
Beyond natural disasters, however, crowdsourced intelligence, predictive crisis mapping, and AI-powered responses are proving just as formidable in humanitarian crises.
One extraordinary story is that of Ushahidi. When violence broke out after the 2007 Kenyan elections, one local blogger proposed a simple yet powerful question to the web: “Any techies out there willing to do a mashup of where the violence and destruction is occurring and put it on a map?”
Within days, four ‘techies’ heeded the call, building a platform that crowdsourced first-hand reports via SMS, mined the web for answers, and—with over 40,000 verified reports—sent alerts back to locals on the ground and viewers across the world.
Today, Ushahidi has been used in over 150 countries, reaching a total of 20 million people across 100,000+ deployments. Now an open-source crisis-mapping software, its V3 (or “Ushahidi in the Cloud”) is accessible to anyone, mining millions of Tweets, hundreds of thousands of news articles, and geo-tagged, time-stamped data from countless sources.
Aggregating one of the longest-running crisis maps to date, Ushahidi’s Syria Tracker has proved invaluable in the crowdsourcing of witness reports. Providing real-time geographic visualizations of all verified data, Syria Tracker has enabled civilians to report everything from missing people and relief supply needs to civilian casualties and disease outbreaks— all while evading the government’s cell network, keeping identities private, and verifying reports prior to publication.
As mobile connectivity and abundant sensors converge with AI-mined crowd intelligence, real-time awareness will only multiply in speed and scale.
Imagining the Future….
Within the next 10 years, spatial web technology might even allow us to tap into mesh networks.
As I’ve explored in a previous blog on the implications of the spatial web, while traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.
In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
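As a rough sketch of the principle (the topology and device names below are invented for illustration), a message can spread across such a network by each device relaying it to its immediate peers, with no tower or hub involved:

```python
# Minimal sketch of non-hierarchical relay across a mesh of mobile devices.
# The topology and device names are invented for illustration.
from collections import deque

peers = {               # which devices each device can reach directly
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}

def flood(source: str) -> list[str]:
    """Breadth-first relay: every reachable device eventually gets the message."""
    seen, queue, order = {source}, deque([source]), []
    while queue:
        node = queue.popleft()
        order.append(node)
        for peer in peers[node]:
            if peer not in seen:
                seen.add(peer)
                queue.append(peer)
    return order

print(flood("A"))   # ['A', 'B', 'C', 'D', 'E'] (no hub or tower required)
```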
Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.
Imagine a scenario in which armed attacks break out across disjointed urban districts. Each cluster of eyewitnesses and at-risk civilians broadcasts an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram in real time, giving family members and first responders complete information.
Or take a coastal community in the throes of torrential rainfall and failing infrastructure. Now empowered by a collective live feed, verification of data reports takes a matter of seconds, and richly-layered data informs first responders and AI platforms with unbelievable accuracy and specificity of relief needs.
By linking all the right technological pieces, we might even see the rise of automated drone deliveries. Imagine: crowdsourced intelligence is first cross-referenced with sensor data and verified algorithmically. AI is then leveraged to determine the specific needs and degree of urgency at ultra-precise coordinates. Within minutes, once approved by personnel, swarm robots rush to collect the requisite supplies, equipping size-appropriate drones with the right aid for rapid-fire delivery.
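Here’s a hedged sketch of what such a pipeline might look like in code; every type, function, and threshold below is a placeholder assumption rather than any real system’s API.

```python
# Hypothetical aid-dispatch pipeline: cross-reference crowd reports with
# sensor data, rank verified needs, and release drones only after human
# sign-off. All names and thresholds are placeholder assumptions.
from dataclasses import dataclass

@dataclass
class Report:
    lat: float
    lon: float
    need: str            # e.g. "insulin", "water"
    crowd_score: float   # agreement among crowdsourced reports, 0..1
    sensor_score: float  # agreement with independent sensor data, 0..1

def verified(r: Report, threshold: float = 0.8) -> bool:
    """Cross-reference crowd reports with sensor data before acting."""
    return min(r.crowd_score, r.sensor_score) >= threshold

def dispatch(reports: list[Report], approved_by_human: bool) -> list[Report]:
    """Rank verified needs by combined confidence; fly only after human sign-off."""
    queue = sorted((r for r in reports if verified(r)),
                   key=lambda r: r.crowd_score + r.sensor_score,
                   reverse=True)
    return queue if approved_by_human else []

reports = [Report(29.76, -95.36, "insulin", 0.95, 0.90),
           Report(29.74, -95.40, "water", 0.60, 0.85)]
for r in dispatch(reports, approved_by_human=True):
    print(f"send {r.need} drone to ({r.lat}, {r.lon})")
```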
This brings us to a second critical convergence: robots and drones.
While cutting-edge drone technology revolutionizes the way we deliver aid, new breakthroughs in AI-geared robotics are paving the way for superhuman emergency responses in some of today’s most dangerous environments.
Let’s explore a few of the most disruptive examples to reach the testing phase.
First up….
Autonomous Robots and Swarm Solutions
As hardware advancements converge with exploding AI capabilities, disaster relief robots are graduating from assistance roles to fully autonomous responders at a breakneck pace.
Born out of MIT’s Biomimetic Robotics Lab, the Cheetah III is but one of many robots that may form our first line of defense in everything from earthquake search-and-rescue missions to high-risk ops in dangerous radiation zones.
Now capable of running at 6.4 meters per second, Cheetah III can even leap up to a height of 60 centimeters, autonomously determining how to avoid obstacles and jump over hurdles as they arise.
Initially designed to perform spectral inspection tasks in hazardous settings (think: nuclear plants or chemical factories), the Cheetah’s various iterations have focused on increasing its payload capacity, range of motion, and even a gripping function with enhanced dexterity.
Cheetah III and future versions are aimed at saving lives in almost any environment.
And the Cheetah III is not alone. Just this February, Tokyo Electric Power Company (TEPCO) put one of its own robots to the test. For the first time since Japan’s devastating 2011 tsunami, which led to three nuclear meltdowns at the nation’s Fukushima nuclear power plant, a robot successfully examined the reactor’s fuel.
Broadcasting the process with its built-in camera, the robot was able to retrieve small chunks of radioactive fuel at five of the six test sites, offering tremendous promise for long-term plans to clean up the still-deadly interior.
Also out of Japan, Mitsubishi Heavy Industries (MHI) is even using robots to fight fires with full autonomy. In a remarkable new feat, MHI’s Water Cannon Bot can now put out blazes in difficult-to-access or highly dangerous fire sites.
Delivering foam or water at 4,000 liters per minute and 1 megapascal (MPa) of pressure, the Cannon Bot and its accompanying Hose Extension Bot even form part of a greater AI-geared system to conduct reconnaissance and surveillance on larger transport vehicles.
As wildfires grow ever more untameable, high-volume production of such bots could prove a true lifesaver. Paired with predictive AI forest fire mapping and autonomous hauling vehicles, solutions like MHI’s Cannon Bot will not only save numerous lives but also help avoid population displacement and paralyzing damage to our natural environment before disaster has the chance to spread.
But even in cases where emergency shelter is needed, groundbreaking (literally) robotics solutions are fast to the rescue.
After multiple iterations by Fastbrick Robotics, the Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and directs blocks through its robotic arm in accordance with a 3D model.
Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long-term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.
But what if we need to build emergency shelters from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute for Advanced Architecture of Catalonia (IAAC) is already working on a solution.
In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation, and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.
But while cutting-edge robotics unlock extraordinary new frontiers for low-cost, large-scale emergency construction, novel hardware and computing breakthroughs are also enabling robotic scale at the other extreme of the spectrum.
Again, inspired by biological phenomena, robotics specialists across the US have begun to pilot tiny robotic prototypes for locating trapped individuals and assessing infrastructural damage.
Take RoboBees, tiny Harvard-developed bots that use electrostatic adhesion to ‘perch’ on walls and even ceilings, evaluating structural damage in the aftermath of an earthquake.
Or Carnegie Mellon’s prototyped Snakebot, capable of navigating through entry points that would otherwise be completely inaccessible to human responders. Driven by AI, the Snakebot can maneuver through even the most densely-packed rubble to locate survivors, using cameras and microphones for communication.
But when it comes to fast-paced reconnaissance in inaccessible regions, miniature robot swarms have good company.
Next-Generation Drones for Instantaneous Relief Supplies
Particularly in the case of wildfires and conflict zones, autonomous drone technology is fundamentally revolutionizing the way we identify survivors in need and automate relief supply.
Not only are drones enabling high-resolution imagery for real-time mapping and damage assessment, but preliminary research shows that UAVs far outpace ground-based rescue teams in locating isolated survivors.
As presented by a team of electrical engineers from the University of Science and Technology of China, drones could even build out a mobile wireless broadband network in record time using a “drone-assisted multi-hop device-to-device” program.
And as shown during Hurricane Harvey in Houston, drones can provide scores of predictive intel on everything from future flooding to damage estimates.
Among many others, a team led by Dr. Robin Murphy, Texas A&M computer science professor and director of the university’s Center for Robot-Assisted Search and Rescue, flew a total of 119 drone missions over the city, using everything from small quadcopters to military-grade unmanned planes. These missions were critical not only for monitoring levee infrastructure but also for identifying those left behind by human rescue teams.
But beyond surveillance, UAVs have begun to provide lifesaving supplies across some of the most remote regions of the globe. One of the most inspiring examples to date is Zipline.
Founded in 2014, Zipline has completed 12,352 life-saving drone deliveries to date. While its drones are designed, tested, and assembled in California, Zipline operates primarily in Rwanda and Tanzania, hiring local operators and providing over 11 million people with instant access to medical supplies.
Providing everything from vaccines and HIV medications to blood and IV tubes, Zipline’s drones far outpace ground-based supply transport, in many instances providing life-critical blood cells, plasma, and platelets in under an hour.
But drone technology is even beginning to transcend the limited scale of medical supplies and food.
Now developing its drones under contracts with DARPA and the US Marine Corps, Logistic Gliders, Inc. has built autonomously navigating drones capable of carrying 1,800 pounds of cargo over unprecedentedly long distances.
Built from plywood, the company’s gliders are projected to cost as little as a few hundred dollars each, making them perfect candidates for high-volume remote aid deliveries, whether navigated by a pilot or self-flown in accordance with real-time disaster-zone mapping.
As hardware continues to advance, autonomous drone technology coupled with real-time mapping algorithms opens no end of opportunities for aid supply, disaster monitoring, and richly layered intel previously unimaginable in humanitarian relief.
Concluding Thoughts
Perhaps one of the most consequential and impactful applications of converging technologies is their transformation of disaster relief methods.
While AI-driven intel platforms crowdsource firsthand experiential data from those on the ground, mobile connectivity and drone-supplied networks are granting newfound narrative power to those most in need.
And as a wave of new hardware advancements gives rise to robotic responders, swarm technology, and aerial drones, we are fast approaching an age of instantaneous and efficiently-distributed responses in the midst of conflict and natural catastrophes alike.
Empowered by these new tools, what might we create when everyone on the planet has the same access to relief supplies and immediate resources? In a new age of prevention and fast recovery, what futures can you envision?
Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.
Image Credit: Arcansel / Shutterstock.com
#434792 Extending Human Longevity With ...
Lizards can regrow entire limbs. Flatworms, starfish, and sea cucumbers regrow entire bodies. Sharks constantly replace lost teeth, often growing over 20,000 teeth throughout their lifetimes. How can we translate these near-superpowers to humans?
The answer: through the cutting-edge innovations of regenerative medicine.
While big data and artificial intelligence transform how we practice medicine and invent new treatments, regenerative medicine is about replenishing, replacing, and rejuvenating our physical bodies.
In Part 5 of this blog series on Longevity and Vitality, I detail three of the regenerative technologies working together to fully augment our vital human organs.
Replenish: Stem cells, the regenerative engine of the body
Replace: Organ regeneration and bioprinting
Rejuvenate: Young blood and parabiosis
Let’s dive in.
Replenish: Stem Cells – The Regenerative Engine of the Body
Stem cells are undifferentiated cells that can transform into specialized cells, such as heart, nerve, liver, lung, and skin cells, and can also divide to produce more stem cells.
In a child or young adult, these stem cells are in large supply, acting as a built-in repair system. They are often summoned to the site of damage or inflammation to repair and restore normal function.
But as we age, our supply of stem cells begins to diminish as much as 100- to 10,000-fold in different tissues and organs. In addition, stem cells undergo genetic mutations, which reduce their quality and effectiveness at renovating and repairing your body.
Imagine your stem cells as a team of repairmen in your newly constructed mansion. When the mansion is new and the repairmen are young, they can fix everything perfectly. But as the repairmen age and reduce in number, your mansion eventually goes into disrepair and finally crumbles.
What if you could restore and rejuvenate your stem cell population?
One option to accomplish this restoration and rejuvenation is to extract and concentrate your own autologous adult stem cells from places like your adipose (or fat) tissue or bone marrow.
These stem cells, however, are fewer in number and have undergone mutations (depending on your age) from their original ‘software code.’ Many scientists and physicians now prefer an alternative source, obtaining stem cells from the placenta or umbilical cord, the leftovers of birth.
These stem cells, available in large supply and expressing the undamaged software of a newborn, can be injected into joints or administered intravenously to rejuvenate and revitalize.
Think of these stem cells as chemical factories generating vital growth factors that can help to reduce inflammation, fight autoimmune disease, increase muscle mass, repair joints, and even revitalize skin and grow hair.
Over the last decade, the number of publications per year on stem cell-related research has increased 40x, and the stem cell market is expected to increase to $297 billion by 2022.
Rising research and development initiatives to develop therapeutic options for chronic diseases and growing demand for regenerative treatment options are the most significant drivers of this budding industry.
Biologists led by Kohji Nishida at Osaka University in Japan have discovered a new way to nurture and grow the tissues that make up the human eyeball. The scientists are able to grow retinas, corneas, the eye’s lens, and more, using only a small sample of adult skin.
In a Stanford study, seven of 18 stroke victims who agreed to stem cell treatments showed remarkable motor function improvements. This treatment could work for other neurodegenerative conditions such as Alzheimer’s, Parkinson’s, and ALS.
Doctors from the USC Neurorestoration Center and Keck Medicine of USC injected stem cells into the damaged cervical spine of a recently paralyzed 21-year-old man. Three months later, he showed dramatic improvement in sensation and movement of both arms.
In 2019, doctors in the U.K. cured a patient of HIV, only the second such cure ever, thanks to the efficacy of stem cells. After giving the cancer patient (who also had HIV) an allogeneic haematopoietic (i.e., blood) stem cell treatment for his Hodgkin’s lymphoma, the patient went into long-term HIV remission: 18 months and counting at the time of the study’s publication.
Replace: Organ Regeneration and 3D Printing
Every 10 minutes, someone is added to the US organ transplant waiting list, totaling over 113,000 people waiting for replacement organs as of January 2019.
Countless more people in need of ‘spare parts’ never make it onto the waiting list. And on average, 20 people die each day while waiting for a transplant.
All told, an estimated 35 percent of US deaths (~900,000 people per year) could be prevented or delayed with access to organ replacements.
The excessive demand for donated organs will only intensify as technologies like self-driving cars make the world safer, given that many donated organs come from victims of auto and motorcycle accidents. Safer vehicles mean fewer accidents, and fewer donations.
Clearly, replacement and regenerative medicine represent a massive opportunity.
Organ Entrepreneurs
Enter United Therapeutics CEO, Dr. Martine Rothblatt. A one-time aerospace entrepreneur (she was the founder of Sirius Satellite Radio), Rothblatt changed careers in the 1990s after her daughter developed a rare lung disease.
Her moonshot today is to create an industry of replacement organs. With an initial focus on diseases of the lung, Rothblatt set out to create replacement lungs. To accomplish this goal, her company United Therapeutics has pursued a number of technologies in parallel.
3D Printing Lungs
In 2017, United teamed up with one of the world’s largest 3D printing companies, 3D Systems, to build a collagen bioprinter and is paying another company, 3Scan, to slice up lungs and create detailed maps of their interior.
This 3D Systems bioprinter now operates according to a method called stereolithography. A UV laser flickers through a shallow pool of collagen doped with photosensitive molecules. Wherever the laser lingers, the collagen cures and becomes solid.
Gradually, the object being printed is lowered and new layers are added. The printer can currently lay down collagen at a resolution of around 20 micrometers, but it will need to reach a resolution of about one micrometer to make the lung functional.
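To get a feel for those numbers, here’s a quick back-of-the-envelope calculation; the scaffold height is an example figure I’ve assumed, and I’m assuming layer thickness roughly tracks the stated resolution.

```python
# Back-of-the-envelope layer counts for layer-by-layer stereolithography.
# Assumes layer thickness tracks the stated resolution; the 25 cm scaffold
# height is an invented example figure, not from the article.
scaffold_height_m = 0.25

for resolution_m in (20e-6, 1e-6):      # current resolution vs. target
    layers = scaffold_height_m / resolution_m
    print(f"{resolution_m * 1e6:.0f} micrometer layers -> {layers:,.0f} layers")

# 20 micrometer layers -> 12,500 layers
# 1 micrometer layers  -> 250,000 layers
```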
Once a collagen lung scaffold has been printed, the next step is to infuse it with human cells, a process called recellularization.
The goal here is to use stem cells that grow on scaffolding and differentiate, ultimately providing the proper functionality. Early evidence indicates this approach can work.
In 2018, Harvard University experimental surgeon Harald Ott reported that he pumped billions of human cells (from umbilical cords and diced lungs) into a pig lung stripped of its own cells. When Ott’s team reconnected it to a pig’s circulation, the resulting organ showed rudimentary function.
Humanizing Pig Lungs
Another of Rothblatt’s organ manufacturing strategies is called xenotransplantation, the idea of transplanting an animal’s organs into humans who need a replacement.
Given the fact that adult pig organs are similar in size and shape to those of humans, United Therapeutics has focused on genetically engineering pigs to allow humans to use their organs. “It’s actually not rocket science,” said Rothblatt in her 2015 TED talk. “It’s editing one gene after another.”
To accomplish this goal, United Therapeutics made a series of investments in companies such as Revivicor Inc. and Synthetic Genomics Inc., and signed large funding agreements with the University of Maryland, University of Alabama, and New York Presbyterian/Columbia University Medical Center to create xenotransplantation programs for new hearts, kidneys, and lungs, respectively. Rothblatt hopes to see human translation in three to four years.
In preparation for that day, United Therapeutics owns a 132-acre property in Research Triangle Park and built a 275,000-square-foot medical laboratory that will ultimately have the capability to annually produce up to 1,000 sets of healthy pig lungs—known as xenolungs—from genetically engineered pigs.
Lung Ex Vivo Perfusion Systems
Beyond 3D printing and genetically engineering pig lungs, Rothblatt has already begun implementing a third near-term approach to improve the supply of lungs across the US.
Only about 30 percent of potential donor lungs meet transplant criteria in the first place, and of those, only about 85 percent remain usable once they arrive at the surgery center. Multiply the two (0.30 × 0.85 ≈ 25 percent), and nearly 75 percent of possible lungs never make it to a recipient in need.
What if these lungs could be rejuvenated? This concept informs Dr. Rothblatt’s next approach.
In 2016, United Therapeutics invested $41.8 million in TransMedics Inc., an Andover, Massachusetts company that develops ex vivo perfusion systems for donor lungs, hearts, and kidneys.
The XVIVO Perfusion System takes marginal-quality lungs that initially failed to meet transplantation standard-of-care criteria and perfuses and ventilates them at normothermic conditions, providing an opportunity for surgeons to reassess transplant suitability.
Rejuvenate: Young Blood and Parabiosis
In HBO’s parody of the Bay Area tech community, Silicon Valley, one of the episodes (Season 4, Episode 5) is named “The Blood Boy.”
In this installment, tech billionaire Gavin Belson (Matt Ross) is meeting with Richard Hendricks (Thomas Middleditch) and his team, speaking about the future of the decentralized internet. A young, muscled twenty-something disrupts the meeting when he rolls in a transfusion stand and silently hooks an intravenous connection between himself and Belson.
Belson then introduces the newcomer as his “transfusion associate” and begins to explain the science of parabiosis: “Regular transfusions of the blood of a younger physically fit donor can significantly retard the aging process.”
While the sitcom is fiction, that science has merit, and the scenario portrayed in the episode is already happening today.
On the first point, research at Stanford and Harvard has demonstrated that older animals, when transfused with the blood of young animals, experience regeneration across many tissues and organs.
The opposite is also true: young animals, when transfused with the blood of older animals, experience accelerated aging. But capitalizing on this virtual fountain of youth has been tricky.
Ambrosia
One company, a San Francisco-based startup called Ambrosia, recently commenced an early human trial of parabiosis. Its protocol is simple: healthy participants aged 35 and older get a transfusion of blood plasma from donors under 25, and researchers monitor their blood over the next two years for molecular indicators of health and aging.
Ambrosia’s founder Jesse Karmazin became interested in launching a company around parabiosis after seeing impressive data from animals and studies conducted abroad in humans: In one trial after another, subjects experience a reversal of aging symptoms across every major organ system. “The effects seem to be almost permanent,” he said. “It’s almost like there’s a resetting of gene expression.”
Infusing your own cord blood stem cells as you age may have tremendous longevity benefits as well. Nevertheless, following an FDA press release in February 2019, Ambrosia halted its consumer-facing treatment after several months of operation.
Understandably, the FDA raised concerns about the practice of parabiosis because to date, there is a marked lack of clinical data to support the treatment’s effectiveness.
Elevian
On the other end of the reputability spectrum is a startup called Elevian, spun out of Harvard University. Elevian is approaching longevity with a careful, scientifically validated strategy. (Full Disclosure: I am both an advisor to and investor in Elevian.)
CEO Mark Allen, MD, is joined by a dozen MDs and PhDs from Harvard. Elevian’s scientific founders started the company after identifying specific circulating factors that may be responsible for the “young blood” effect.
One example: A naturally occurring molecule known as “growth differentiation factor 11,” or GDF11, when injected into aged mice, reproduces many of the regenerative effects of young blood, regenerating heart, brain, muscles, lungs, and kidneys.
More specifically, GDF11 supplementation reduces age-related cardiac hypertrophy, accelerates skeletal muscle repair, improves exercise capacity, improves brain function and cerebral blood flow, and improves metabolism.
Elevian is developing a number of therapeutics that regulate GDF11 and other circulating factors. The goal is to restore our body’s natural regenerative capacity, which Elevian believes can address some of the root causes of age-associated disease with the promise of reversing or preventing many aging-related diseases and extending the healthy lifespan.
Conclusion
In 1992, futurist Leland Kaiser coined the term “regenerative medicine”:
“A new branch of medicine will develop that attempts to change the course of chronic disease and in many instances will regenerate tired and failing organ systems.”
Since then, the powerful regenerative medicine industry has grown exponentially, and this rapid growth is anticipated to continue.
A dramatic extension of the human healthspan is just over the horizon. Soon, we’ll all have the regenerative superpowers previously relegated to a handful of animals and comic books.
What new opportunities open up when anybody, anywhere, at any time can regenerate, replenish, and replace entire organs and metabolic systems on command?
Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.
Image Credit: Giovanni Cancemi / Shutterstock.com
#434786 AI Performed Like a Human on a Gestalt ...
Dr. Been Kim wants to rip open the black box of deep learning.
A senior researcher at Google Brain, Kim specializes in a sort of AI psychology. Like cognitive psychologists before her, she develops various ways to probe the alien minds of artificial neural networks (ANNs), digging into their gory details to better understand the models and their responses to inputs.
The more interpretable ANNs are, the reasoning goes, the easier it is to reveal potential flaws in their reasoning. And if we understand when or why our systems choke, we’ll know when not to use them—a foundation for building responsible AI.
There are already several ways to tap into ANN reasoning, but Kim’s inspiration for unraveling the AI black box came from an entirely different field: cognitive psychology. The field aims to discover fundamental rules of how the human mind—essentially also a tantalizing black box—operates, Kim wrote with her colleagues.
In a new paper uploaded to the pre-publication server arXiv, the team described a way to essentially perform a human cognitive test on ANNs. The test probes how we automatically complete gaps in what we see so that they form whole objects, for example, perceiving a circle from a bunch of loose dots arranged along a clock face. Psychologists dub this the “law of completion,” a highly influential idea that led to explanations of how our minds generalize data into concepts.
Because deep neural networks in machine vision loosely mimic the structure and connections of the visual cortex, the authors naturally asked: do ANNs also exhibit the law of completion? And what does that tell us about how an AI thinks?
Enter the Germans
The law of completion is part of a series of ideas from Gestalt psychology. Back in the 1920s, long before the advent of modern neuroscience, a group of German experimental psychologists asked: in this chaotic, flashy, unpredictable world, how do we piece together input in a way that leads to meaningful perceptions?
The result is a group of principles known together as the Gestalt effect: that the mind self-organizes to form a global whole. In the more famous words of Gestalt psychologist Kurt Koffka, our perception forms a whole that’s “something else than the sum of its parts.” Not greater than; just different.
Although the theory has its critics, subsequent studies in humans and animals suggest that the law of completion happens on both the cognitive and neuroanatomical level.
Take a look at the drawing below. You immediately “see” a shape that’s actually the negative: a triangle or a square (A and B). Or you further perceive a 3D ball (C), or a snake-like squiggle (D). Your mind fills in blank spots, so that the final perception is more than just the black shapes you’re explicitly given.
Image Credit: Wikimedia Commons.
Neuroscientists now think that the effect comes from how our visual system processes information. Arranged in multiple layers and columns, lower-level neurons—those first to wrangle the data—tend to extract simpler features such as lines or angles. In Gestalt speak, they “see” the parts.
Then, layer by layer, perception becomes more abstract, until higher levels of the visual system directly interpret faces or objects—or things that don’t really exist. That is, the “whole” emerges.
The Experiment Setup
Inspired by these classic experiments, Kim and team developed a protocol to test the Gestalt effect on feed-forward ANNs: one simple network, and one far more complex and widely used in the machine vision community, Inception V3.
The main idea is similar to the triangle drawings above. First, the team generated three datasets: one set shows complete, ordinary triangles. The second, the “illusory” set, shows triangles with the edges removed but the corners intact; thanks to the Gestalt effect, these generally still look like triangles to us humans. The third set also shows only incomplete triangle corners, but here the corners are randomly rotated so that we can no longer imagine a line connecting them, and hence no more triangle.
To generate a dataset large enough to tease out small effects, the authors changed the background color, image rotation, and other aspects of the dataset. In all, they produced nearly 1,000 images to test their ANNs on.
“At a high level, we compare an ANN’s activation similarities between the three sets of stimuli,” the authors explained. The process has two steps: first, train the AI on complete triangles; second, test it on the three datasets. If the network’s response to the illusory set is more similar to its response to complete triangles than to the randomly rotated set, that suggests a sort of Gestalt closure effect in the network.
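In code terms, the comparison might look something like the following sketch. The placeholder embed function stands in for reading activations from a trained network, and cosine similarity is an assumed metric, not necessarily the paper’s exact measure.

```python
# Sketch of the closure-effect comparison. The embed() function is a
# placeholder for reading one layer's activations from a trained network,
# and cosine similarity is an assumed metric, not necessarily the paper's.
import numpy as np

def embed(images: np.ndarray) -> np.ndarray:
    """Placeholder: real code would run the network and return activations."""
    rng = np.random.default_rng()
    return rng.normal(size=(len(images), 128))

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder stimulus sets: complete triangles, corner-only "illusory"
# triangles, and randomly rotated corners.
complete, illusory, rotated = (np.zeros((100, 64, 64)) for _ in range(3))

act_complete = embed(complete).mean(axis=0)   # average activation per set
act_illusory = embed(illusory).mean(axis=0)
act_rotated = embed(rotated).mean(axis=0)

# Closure effect: illusory triangles should "look" more like complete ones
# than the rotated corners do.
closure = cosine(act_illusory, act_complete) - cosine(act_rotated, act_complete)
print(f"closure score: {closure:+.3f} (positive hints at a closure effect)")
```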
Machine Gestalt
Right off the bat, the team got their answer: yes, ANNs do seem to exhibit the law of closure.
When trained on natural images, the networks classified the illusory set as triangles far more readily than did networks with randomized connection weights or networks trained on white noise.
When the team dug into the “why,” things got more interesting. The ability to complete an image correlated with the network’s ability to generalize.
Humans subconsciously do this constantly: anything with a handle made out of ceramic, regardless of shape, could easily be a mug. ANNs still struggle to grasp the common features, the clues that immediately tell us “hey, that’s a mug!” But when they do, it sometimes allows the networks to better generalize.
“What we observe here is that a network that is able to generalize exhibits…more of the closure effect [emphasis theirs], hinting that the closure effect reflects something beyond simply learning features,” the team wrote.
What’s more, remarkably similar to the visual cortex, “higher” levels of the ANNs showed more of the closure effect than lower layers, and—perhaps unsurprisingly—the more layers a network had, the more it exhibited the closure effect.
As the networks learned, their ability to map out objects from fragments also improved. When the team messed around with the brightness and contrast of the images, the AI still learned to see the forest for the trees.
“Our findings suggest that neural networks trained with natural images do exhibit closure,” the team concluded.
AI Psychology
That’s not to say that ANNs recapitulate the human brain. As Google’s Deep Dream, an effort to coax AIs into spilling what they’re perceiving, clearly demonstrates, machine vision sees some truly weird stuff.
Then again, because they’re modeled after the human visual cortex, perhaps it’s not all that surprising that these networks also exhibit higher-level properties inherent to how we process information.
But to Kim and her colleagues, that’s exactly the point.
“The field of psychology has developed useful tools and insights to study human brains: tools that we may be able to borrow to analyze artificial neural networks,” they wrote.
By tweaking these tools to better analyze machine minds, the authors were able to gain insight on how similarly or differently they see the world from us. And that’s the crux: the point isn’t to say that ANNs perceive the world sort of, kind of, maybe similar to humans. It’s to tap into a wealth of cognitive psychology tools, established over decades using human minds, to probe that of ANNs.
“The work here is just one step along a much longer path,” the authors conclude.
“Understanding where humans and neural networks differ will be helpful for research on interpretability by enlightening the fundamental differences between the two interesting species.”
Image Credit: Popova Alena / Shutterstock.com