Tag Archives: eye

#435056 How Researchers Used AI to Better ...

A few years back, DeepMind’s Demis Hassabis famously prophesied that AI and neuroscience would positively feed into each other in a “virtuous circle.” If realized, this would fundamentally expand our insight into intelligence, both machine and human.

We’ve already seen some proofs of concept, at least in the brain-to-AI direction. For example, memory replay, a biological mechanism that fortifies our memories during sleep, also boosted AI learning when abstractly appropriated into deep learning models. Reinforcement learning, loosely based on our motivation circuits, is now behind some of AI’s most powerful tools.

Hassabis is about to be proven right again.

Last week, two studies independently tapped into the power of artificial neural networks (ANNs) to solve a 70-year-old neuroscience mystery: how does our visual system perceive reality?

The first, published in Cell, used generative networks to evolve DeepDream-like images that hyper-activate complex visual neurons in monkeys. These machine artworks are pure nightmare fuel to the human eye, but together they revealed a fundamental “visual hieroglyph” that may form a basic rule for how we piece together visual stimuli to process sight into perception.

In the second study, a team used a deep ANN model—one thought to mimic biological vision—to synthesize new patterns tailored to control certain networks of visual neurons in the monkey brain. When the images were shown directly to monkeys, the machine-generated artworks reliably activated the predicted populations of neurons. Future improved ANN models could allow even better control, giving neuroscientists a powerful noninvasive tool to study the brain. The work was published in Science.

The individual results, though fascinating, aren’t necessarily the point. Rather, they illustrate how scientists are now striving to complete the virtuous circle: tapping AI to probe natural intelligence. Vision is only the beginning—the tools can potentially be expanded into other sensory domains. And the more we understand about natural brains, the better we can engineer artificial ones.

It’s a “great example of leveraging artificial intelligence to study organic intelligence,” commented Dr. Roman Sandler at Kernel.co on Twitter.

Why Vision?
ANNs and biological vision have quite the history.

In the late 1950s, the legendary neuroscientist duo David Hubel and Torsten Wiesel became some of the first to use mathematical equations to understand how neurons in the brain work together.

In a series of experiments—many using cats—the team carefully dissected the structure and function of the visual cortex. Using myriad images, they revealed that vision is processed in a hierarchy: neurons in “earlier” brain regions, those closer to the eyes, tend to activate when they “see” simple patterns such as lines. As we move deeper into the brain, from the early V1 to a nub located slightly behind our ears, the IT cortex, neurons increasingly respond to more complex or abstract patterns, including faces, animals, and objects. The discovery led some scientists to call certain IT neurons “Jennifer Aniston cells,” which fire in response to pictures of the actress regardless of lighting, angle, or haircut. That is, IT neurons somehow distill visual information into the “gist” of things.

That’s not trivial. How complex neural connections increasingly abstract what we see into what we think we see—what we perceive—is a central question in machine vision: how can we teach machines to transform numbers encoding stimuli into dots, lines, and angles that eventually form “perceptions” and “gists”? The answer could transform self-driving cars, facial recognition, and other computer vision applications as they learn to better generalize.

Hubel and Wiesel’s Nobel-prize-winning studies heavily influenced the birth of ANNs and deep learning. Many early “feed-forward” ANN architectures were based on our visual system; even today, the idea of increasing layers of abstraction—for perception or reasoning—guides computer scientists in building AI that can better generalize. The early romance between vision and deep learning is perhaps the bond that kicked off our current AI revolution.
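
To make that hierarchy concrete, here is a toy sketch in Python (the filters and the tiny input “image” are invented for illustration, not drawn from any real network): an early layer of units responds to oriented edges, V1-style, while a “deeper” unit pools those responses into a cruder, more abstract judgment, loosely IT-style.

```python
import numpy as np

image = np.zeros((6, 6))
image[2, 1:5] = 1.0   # a horizontal bar...
image[2:5, 4] = 1.0   # ...meeting a vertical bar (a corner)

horiz = np.array([[-1, -1, -1],
                  [ 2,  2,  2],
                  [-1, -1, -1]], dtype=float)  # fires on horizontal edges
vert = horiz.T                                 # fires on vertical edges

def convolve(img, kernel):
    """Valid convolution followed by a ReLU, the crudest possible 'neuron.'"""
    h, w = kernel.shape
    out = np.zeros((img.shape[0] - h + 1, img.shape[1] - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = max((img[i:i+h, j:j+w] * kernel).sum(), 0.0)
    return out

edges_h = convolve(image, horiz)   # "V1 layer," horizontal-preferring units
edges_v = convolve(image, vert)    # "V1 layer," vertical-preferring units

# A "deeper" unit pools over both maps: it fires only when both orientations
# co-occur somewhere in its field, i.e., something corner-like is present.
corner_detected = edges_h.max() > 0 and edges_v.max() > 0
print("corner detected:", corner_detected)
```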

It only seems fair that AI would feed back into vision neuroscience.

Hieroglyphs and Controllers
In the Cell study, a team led by Dr. Margaret Livingstone at Harvard Medical School tapped into generative networks to unravel IT neurons’ complex visual alphabet.

Scientists have long known that neurons in earlier visual regions (V1) tend to fire in response to “grating patches” oriented in certain ways. Using a limited set of these patches like letters, V1 neurons can “express a visual sentence” and represent any image, said Dr. Arash Afraz at the National Institutes of Health, who was not involved in the study.

But how IT neurons operate remained a mystery. Here, the team used a combination of genetic algorithms and deep generative networks to “evolve” computer art for every studied neuron. In seven monkeys, the team implanted electrodes into various parts of the visual IT region so that they could monitor the activity of single neurons.

The team showed each monkey an initial set of 40 images. They then picked the top 10 images that stimulated the highest neural activity, and married them to 30 new images to “evolve” the next generation of images. After 250 generations, the technique, XDREAM, generated a slew of images that mashed up contorted face-like shapes with lines, gratings, and abstract shapes.
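
In spirit, that loop is a textbook genetic algorithm. Here is a minimal sketch of the idea (not the authors’ code): `render` stands in for the deep generative network that decodes a latent code into an image, and `neuron_response` stands in for the firing rate recorded from the implanted electrode; both are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(42)

def render(code):
    """Hypothetical generative network: latent code -> image."""
    return np.tanh(code)  # placeholder for a deep generator

def neuron_response(image):
    """Hypothetical stand-in for the recorded firing rate."""
    return float(image.sum())  # placeholder fitness signal

population = [rng.normal(size=256) for _ in range(40)]  # 40 initial codes

for generation in range(250):
    # Rank codes by how strongly their rendered image drives the neuron.
    scored = sorted(population,
                    key=lambda c: neuron_response(render(c)), reverse=True)
    parents = scored[:10]                        # keep the top 10 images
    children = []
    for _ in range(30):                          # breed 30 new candidates
        a, b = rng.choice(10, size=2, replace=False)
        mask = rng.random(256) < 0.5             # uniform crossover
        child = np.where(mask, parents[a], parents[b])
        child += rng.normal(scale=0.1, size=256)  # mutation
        children.append(child)
    population = parents + children              # next 40-image generation

best_image = render(max(population, key=lambda c: neuron_response(render(c))))
```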

This image shows the evolution of an optimum image for stimulating a visual neuron in a monkey. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“The evolved images look quite counter-intuitive,” explained Afraz. Some clearly show detailed structures that resemble natural images, while others show complex structures that can’t be characterized by our puny human brains.

This figure shows natural images (right) and images evolved by neurons in the inferotemporal cortex of a monkey (left). Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“What started to emerge during each experiment were pictures that were reminiscent of shapes in the world but were not actual objects in the world,” said study author Carlos Ponce. “We were seeing something that was more like the language cells use with each other.”

This image was evolved by a neuron in the inferotemporal cortex of a monkey using AI. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
Although IT neurons don’t seem to use a simple letter alphabet, they do seem to rely on a vast array of characters, like hieroglyphs or Chinese characters, “each loaded with more information,” said Afraz.

The adaptive nature of XDREAM turns it into a powerful tool to probe the inner workings of our brains—particularly for revealing discrepancies between biology and models.

The Science study, led by Dr. James DiCarlo at MIT, takes a similar approach. Using ANNs to generate new patterns and images, the team was able to selectively predict and independently control neuron populations in a high-level visual region called V4.

“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” said study author Dr. Pouya Bashivan. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”

It suggests that our current ANN models for visual computation “implicitly capture a great deal of visual knowledge” which we can’t really describe, but which the brain uses to turn visual information into perception, the authors said. By testing AI-generated images on biological vision, the team also showed that today’s ANNs generalize to real brains to a meaningful degree. The results could potentially help engineer even more accurate ANN models of biological vision, which in turn could feed back into machine vision.
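
The underlying “controller” trick can be sketched in a few lines: treat the image itself as the thing being optimized, and climb the gradient of a model neuron’s response. The linear “model neuron” below is a made-up stand-in for the deep V4 models the study actually used:

```python
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=(32, 32))      # hypothetical "model neuron" weights

def response(img):
    return float((w * img).sum())  # linear unit, so d(response)/d(img) = w

img = rng.normal(scale=0.01, size=(32, 32))   # start from near-blank noise
for _ in range(100):
    img += 0.1 * w                 # gradient ascent on the image itself
    img = np.clip(img, -1.0, 1.0)  # keep "pixels" in a valid range

print(f"synthesized stimulus drives the unit to {response(img):.1f}")
```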

“One thing is clear already: Improved ANN models … have led to control of a high-level neural population that was previously out of reach,” the authors said. “The results presented here have likely only scratched the surface of what is possible with such implemented characterizations of the brain’s neural networks.”

To Afraz, the power of AI here is to find cracks in human perception—both in our computational models of sensory processing and in our evolved biological software itself. AI can be used “as a perfect adversarial tool to discover design cracks” of IT, said Afraz, such as finding computer art that “fools” a neuron into thinking an object is something else.

“As artificial intelligence researchers develop models that work as well as the brain does—or even better—we will still need to understand which networks are more likely to behave safely and further human goals,” said Ponce. “More efficient AI can be grounded by knowledge of how the brain works.”

Image Credit: Sangoiri / Shutterstock.com

#434865 5 AI Breakthroughs We’ll Likely See in ...

Convergence is accelerating disruption… everywhere! Exponential technologies are colliding with one another, reinventing products, services, and industries.

AI assistants such as Siri and Alexa process your voice and return helpful responses; systems like Face++ recognize faces; still others create art from scribbles, or even diagnose medical conditions.

Let’s dive into AI and convergence.

Top 5 Predictions for AI Breakthroughs (2019-2024)
My friend Neil Jacobstein is my ‘go-to expert’ in AI, with over 25 years of technical consulting experience in the field. Currently the AI and Robotics chair at Singularity University, Jacobstein is also a Distinguished Visiting Scholar in Stanford’s MediaX Program, a Henry Crown Fellow, an Aspen Institute moderator, and serves on the National Academy of Sciences Earth and Life Studies Committee. Neil predicted five trends he expects to emerge over the next five years, by 2024.

AI gives rise to new, non-human forms of pattern recognition and intelligence

AlphaGo Zero, a machine learning program trained to play the complex game of Go, defeated its predecessor AlphaGo, the system that famously beat the human world champion in 2016, by 100 games to zero. Instead of learning from human play, AlphaGo Zero trained by playing against itself—a method known as self-play reinforcement learning.

Building its own knowledge from scratch, AlphaGo Zero demonstrates a novel form of creativity, free of human bias. Even more groundbreaking, this type of AI pattern recognition allows machines to accumulate thousands of years of knowledge in a matter of hours.
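
As a toy illustration of the self-play idea (tabular and Monte Carlo-style, nothing like AlphaGo Zero’s deep networks and tree search), the sketch below learns the simple game of Nim from scratch purely by playing against itself:

```python
import random
from collections import defaultdict

ALPHA, EPSILON = 0.5, 0.1
Q = defaultdict(float)  # (stones_left, take) -> value for the player to move

def legal_moves(stones):
    return [t for t in (1, 2, 3) if t <= stones]

def choose(stones):
    if random.random() < EPSILON:                                  # explore
        return random.choice(legal_moves(stones))
    return max(legal_moves(stones), key=lambda t: Q[(stones, t)])  # exploit

def play_episode(n=21):
    """One game of self-play: the same policy controls both sides."""
    stones, history = n, []
    while stones > 0:
        take = choose(stones)
        history.append((stones, take))
        stones -= take
    reward = 1.0                       # whoever took the last stone wins
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward               # flip perspective every ply

for _ in range(50_000):
    play_episode()

# With enough self-play, the greedy policy approximates the known optimal
# strategy: always leave your opponent a multiple of 4 stones.
print({s: max(legal_moves(s), key=lambda t: Q[(s, t)]) for s in range(1, 10)})
```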

While these systems can’t answer the question “What is orange juice?” or compete with the intelligence of a fifth grader, they are growing more and more strategically complex, merging with other forms of narrow artificial intelligence. Within the next five years, who knows what successors of AlphaGo Zero will emerge, augmenting both your business functions and day-to-day life.

Doctors risk malpractice when not using machine learning for diagnosis and treatment planning

A group of Chinese and American researchers recently created an AI system that diagnoses common childhood illnesses, ranging from the flu to meningitis. Trained on electronic health records compiled from 1.3 million outpatient visits of almost 600,000 patients, the AI program produced diagnosis outcomes with unprecedented accuracy.

While the US health system does not boast the same level of accessible universal health data as some Chinese systems, we’ve made progress in implementing AI in medical diagnosis. Dr. Kang Zhang, chief of ophthalmic genetics at the University of California, San Diego, created his own system that detects signs of diabetic blindness, relying on both text and medical images.

With an eye to the future, Jacobstein has predicted that “we will soon see an inflection point where doctors will feel it’s a risk to not use machine learning and AI in their everyday practices because they don’t want to be called out for missing an important diagnostic signal.”

Quantum advantage will massively accelerate drug design and testing

Researchers estimate that there are 10^60 possible drug-like molecules—more than the number of atoms in our solar system. But today, chemists must make drug predictions based on properties influenced by molecular structure, then synthesize numerous variants to test their hypotheses.

Quantum computing could transform this time-consuming, highly costly process into an efficient, not to mention life-changing, drug discovery protocol.

“Quantum computing is going to have a major industrial impact… not by breaking encryption,” said Jacobstein, “but by making inroads into design through massive parallel processing that can exploit superposition and quantum interference and entanglement, and that can wildly outperform classical computing.”

AI accelerates security systems’ vulnerability and defense

With the incorporation of AI into almost every aspect of our lives, cyberattacks have grown increasingly threatening. “Deep attacks” can use AI-generated content to avoid both human and AI controls.

Previous examples include fake videos of former President Obama speaking fabricated sentences, and an adversarial AI fooling another algorithm into categorizing a stop sign as a 45 mph speed limit sign. Without the appropriate protections, AI systems can be manipulated to conduct any number of destructive objectives, whether ruining reputations or diverting autonomous vehicles.
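
The stop-sign trick belongs to a family of “adversarial example” attacks. The simplest, the fast gradient sign method (FGSM), nudges every input feature slightly in whichever direction most increases the classifier’s error. Here is a minimal sketch on a toy logistic-regression “classifier” with invented weights and features:

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=16), 0.1  # hypothetical trained classifier weights
x = 0.2 * w                      # a feature vector confidently rated "stop sign"

def predict(v):
    """P(input is a stop sign) under a toy logistic-regression model."""
    return 1.0 / (1.0 + np.exp(-(w @ v + b)))

# FGSM: perturb each feature by epsilon in the direction that most
# increases the loss for the true label y = 1 (here, grad = (p - 1) * w).
eps = 0.5
grad_x = (predict(x) - 1.0) * w
x_adv = x + eps * np.sign(grad_x)

print(f"clean:       P(stop sign) = {predict(x):.3f}")
print(f"adversarial: P(stop sign) = {predict(x_adv):.3f}")  # far lower
```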

Jacobstein’s take: “We all have security systems on our buildings, in our homes, around the healthcare system, and in air traffic control, financial organizations, the military, and intelligence communities. But we all know that these systems have been hacked periodically and we’re going to see that accelerate. So, there are major business opportunities there and there are major opportunities for you to get ahead of that curve before it bites you.”

AI design systems drive breakthroughs in atomically precise manufacturing

Just as the modern computer transformed our relationship with bits and information, AI will redefine and revolutionize our relationship with molecules and materials. AI is currently being used to discover new materials for clean-tech innovations, such as solar panels, batteries, and devices that can now conduct artificial photosynthesis.

Today, it takes about 15 to 20 years to create a single new material, according to industry experts. But as AI design systems skyrocket in capacity, these will vastly accelerate the materials discovery process, allowing us to address pressing issues like climate change at record rates. Companies like Kebotix are already on their way to streamlining the creation of chemistries and materials at the click of a button.

Atomically precise manufacturing will enable us to produce the previously unimaginable.

Final Thoughts
Within just the past three years, countries across the globe have signed into existence national AI strategies and plans for ramping up innovation. Businesses and think tanks have leaped onto the scene, hiring AI engineers and tech consultants to leverage what computer scientist Andrew Ng has called the new ‘electricity’ of the 21st century.

As AI plays an increasingly vital role in everyday life, how will your business leverage it to keep up and build forward?

In the wake of burgeoning markets, new ventures will quickly arise, each taking advantage of untapped data sources or unmet security needs.

And as your company aims to ride the wave of AI’s exponential growth, consider the following pointers to leverage AI and disrupt yourself before disruption reaches you:

Determine where and how you can begin collecting critical data to inform your AI algorithms
Identify time-intensive processes that can be automated and accelerated within your company
Discern which global challenges can be expedited by hyper-fast, all-knowing minds

Remember: good data is vital fuel. Well-defined problems are the best compass. And the time to start implementing AI is now.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Yurchanka Siarhei / Shutterstock.com

#434854 New Lifelike Biomaterial Self-Reproduces ...

Life demands flux.

Every living organism is constantly changing: cells divide and die, proteins build and disintegrate, DNA breaks and heals. Life demands metabolism—the simultaneous builder and destroyer of living materials—to continuously upgrade our bodies. That’s how we heal and grow, how we propagate and survive.

What if we could endow cold, static, lifeless robots with the gift of metabolism?

In a study published this month in Science Robotics, an international team developed a DNA-based method that gives raw biomaterials an artificial metabolism. Dubbed DASH—DNA-based assembly and synthesis of hierarchical materials—the method automatically generates “slime”-like nanobots that dynamically move and navigate their environments.

Like living tissue, the artificial material used external energy to constantly change the nanobots’ bodies in pre-programmed ways, recycling their DNA-based parts as both waste and raw material for further use. Some “grew” into the shape of molecular double helixes; others “wrote” the DNA letters inside micro-chips.

The artificial life forms were also rather “competitive”—in quotes, because these molecular machines are not conscious. Yet when pitted against each other, two DASH bots automatically raced forward, crawling in typical slime-mold fashion at a scale easily seen under the microscope—and, in some iterations, with the naked eye.

“Fundamentally, we may be able to change how we create and use the materials with lifelike characteristics. Typically materials and objects we create in general are basically static… one day, we may be able to ‘grow’ objects like houses and maintain their forms and functions autonomously,” said study author Dr. Shogo Hamada to Singularity Hub.

“This is a great study that combines the versatility of DNA nanotechnology with the dynamics of living materials,” said Dr. Job Boekhoven at the Technical University of Munich, who was not involved in the work.

Dissipative Assembly
The study builds on previous ideas on how to make molecular Lego blocks that essentially assemble—and destroy—themselves.

Although the inspiration came from biological metabolism, scientists have long hoped to cut their reliance on nature. At its core, metabolism is just a bunch of well-coordinated chemical reactions, programmed by eons of evolution. So why build artificial lifelike materials still tethered by evolution when we can use chemistry to engineer completely new forms of artificial life?

Back in 2015, for example, a team led by Boekhoven described a way to mimic how our cells build their internal “structural beams,” aptly called the cytoskeleton. The key here, unlike many processes in nature, isn’t balance or equilibrium; rather, the team engineered an extremely unstable system that automatically builds—and sustains—assemblies from molecular building blocks when given an external source of chemical energy.

Sound familiar? The team basically built molecular devices that “die” without “food.” Thanks to the laws of thermodynamics (hey ya, Boltzmann!), that energy eventually dissipates, and the shapes automatically begin to break down, completing an artificial “circle of life.”

The new study took the system one step further: rather than just mimicking synthesis, the team completed the circle by coupling the dissipative building process with automatic degradation.

Here, the “assembling units themselves are also autonomously created from scratch,” said Hamada.

DNA Nanobots
The process of building DNA nanobots starts on a microfluidic chip.

Decades of research have allowed researchers to optimize DNA assembly outside the body. With the help of catalysts, which help “bind” individual molecules together, the team found that they could easily alter the shape of the self-assembling DNA bots—which formed fiber-like shapes—by changing the structure of the microfluidic chambers.

Computer simulations played a role here too: through both digital simulations and observations under the microscope, the team was able to identify a few critical rules that helped them predict how their molecules self-assemble while navigating a maze of blocking “pillars” and channels carved onto the microchips.

This “enabled a general design strategy for the DASH patterns,” they said.

In particular, the whirling motion of the fluids as they coursed through—and bumped into—ridges in the chips seems to help the DNA molecules “entangle into networks,” the team explained.

These insights helped the team further develop the “destroying” part of metabolism. Similar to linking molecules into DNA chains, their destruction also relies on enzymes.

Once the team pumped both “generation” and “degeneration” enzymes into the microchips, along with raw building blocks, the process was completely autonomous. The simultaneous processes were so lifelike that the team used a concept common in robotics, the finite-state automaton, to characterize the behavior of their DNA nanobots from growth to eventual decay.
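
To give a flavor of that finite-state framing, here is an illustrative sketch; the phase names, thresholds, and transition rules below are invented for the example, not taken from the paper:

```python
from enum import Enum, auto

class Phase(Enum):
    SEED = auto()
    GROWING = auto()
    STEADY = auto()
    DECAYING = auto()
    GONE = auto()

# Allowed transitions: synthesis and degradation run simultaneously, so a
# bot may oscillate between growth and decay before its material runs out.
TRANSITIONS = {
    Phase.SEED:     {Phase.GROWING},
    Phase.GROWING:  {Phase.STEADY, Phase.DECAYING},
    Phase.STEADY:   {Phase.GROWING, Phase.DECAYING},
    Phase.DECAYING: {Phase.GROWING, Phase.GONE},
    Phase.GONE:     set(),
}

def classify(prev_mass, mass, threshold=0.05):
    """Map an observed change in material mass to a candidate phase."""
    if mass <= 0:
        return Phase.GONE
    if mass - prev_mass > threshold:
        return Phase.GROWING
    if prev_mass - mass > threshold:
        return Phase.DECAYING
    return Phase.STEADY

def run(masses):
    """Walk the finite-state machine over a time series of mass readings."""
    state = Phase.SEED
    for prev, cur in zip(masses, masses[1:]):
        candidate = classify(prev, cur)
        if candidate in TRANSITIONS[state]:
            state = candidate
    return state

print(run([0.0, 0.3, 0.7, 0.8, 0.75, 0.4, 0.0]))  # Phase.GONE
```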

“The result is a synthetic structure with features associated with life. These behaviors include locomotion, self-regeneration, and spatiotemporal regulation,” said Boekhoven.

Molecular Slime Molds
Just witnessing lifelike molecules grow in place like the running man dance move wasn’t enough.

In their next experiments, the team took inspiration from slugs to program undulating movements into their DNA bots. Here, “movement” is actually a sort of illusion: the machines “moved” because their front ends kept regenerating while their back ends degenerated. In essence, the molecular slime was built by linking multiple individual “DNA robot-like” units together: each unit receives a delayed “decay” signal from the head of the slime, allowing the whole artificial “organism” to crawl forward, against the stream of fluid flow.

Here’s the fun part: the team eventually engineered two molecular slime bots and pitted them against each other, Mario Kart-style. In these experiments, the faster moving bot alters the state of its competitor to promote “decay.” This slows down the competitor, allowing the dominant DNA nanoslug to win in a race.

Of course, the end goal isn’t molecular podracing. Rather, the DNA-based bots could easily amplify a given DNA or RNA sequence, making them efficient nano-diagnosticians for viral and other infections.

The lifelike material can basically generate patterns that doctors can directly ‘see’ with their eyes, which makes DNA or RNA molecules from bacteria and viruses extremely easy to detect, the team said.

In the short run, “the detection device with this self-generating material could be applied to many places and help people on site, from farmers to clinics, by providing an easy and accurate way to detect pathogens,” explained Hamada.

A Futuristic Iron Man Nanosuit?
I’m letting my nerd flag fly here. In Avengers: Infinity War, the scientist-engineer-philanthropist-playboy Tony Stark unveiled a nanosuit that grew to his contours when needed and automatically healed when damaged.

DASH may one day realize that vision. For now, the team isn’t focused on using the technology for regenerating armor—rather, the dynamic materials could create new protein assemblies or chemical pathways inside living organisms, for example. The team also envisions adding simple sensing and computing mechanisms into the material, which can then easily be thought of as a robot.

Unlike synthetic biology, the goal isn’t to create artificial life. Rather, the team hopes to give lifelike properties to otherwise static materials.

“We are introducing a brand-new, lifelike material concept powered by its very own artificial metabolism. We are not making something that’s alive, but we are creating materials that are much more lifelike than have ever been seen before,” said lead author Dr. Dan Luo.

“Ultimately, our material may allow the construction of self-reproducing machines… artificial metabolism is an important step toward the creation of ‘artificial’ biological systems with dynamic, lifelike capabilities,” added Hamada. “It could open a new frontier in robotics.”

Image Credit: A timelapse image of DASH, by Jeff Tyson at Cornell University.

#434827 AI and Robotics Are Transforming ...

During the past 50 years, the frequency of recorded natural disasters has surged nearly five-fold.

In this blog, I’ll be exploring how converging exponential technologies (AI, robotics, drones, sensors, networks) are transforming the future of disaster relief—how we can better predict and prepare for disasters in the first place, and get help to victims during that first golden hour, when immediate relief can save lives.

Here are the three areas of greatest impact:

AI, predictive mapping, and the power of the crowd
Next-gen robotics and swarm solutions
Aerial drones and immediate aid supply

Let’s dive in!

Artificial Intelligence and Predictive Mapping
When it comes to immediate and high-precision emergency response, data is gold.

Already, the meteoric rise of space-based networks, stratosphere-hovering balloons, and 5G telecommunications infrastructure is in the process of connecting every last individual on the planet.

Aside from democratizing the world’s information, however, this upsurge in connectivity will soon grant anyone, particularly those most vulnerable to natural disasters, the ability to broadcast detailed geo-tagged data.

Armed with the power of data broadcasting and the force of the crowd, disaster victims now play a vital role in emergency response, turning a historically one-way blind rescue operation into a two-way dialogue between connected crowds and smart response systems.

With a skyrocketing abundance of data, however, comes a new paradigm: one in which we no longer face a scarcity of answers. Instead, it will be the quality of our questions that matters most.

This is where AI comes in: our mining mechanism.

In the case of emergency response, what if we could strategically map an almost endless stream of incoming data points? Or predict the dynamics of a flood and identify a tsunami’s most vulnerable targets before it even strikes? Or even amplify critical signals to trigger automatic aid by surveillance drones and immediately alert crowdsourced volunteers?

Already, a number of key players are leveraging AI, crowdsourced intelligence, and cutting-edge visualizations to optimize crisis response and multiply relief speeds.

Take One Concern, for instance. Born out of Stanford under the mentorship of leading AI expert Andrew Ng, One Concern leverages AI through analytical disaster assessment and calculated damage estimates.

Partnering with the cities of Los Angeles and San Francisco, as well as numerous cities in San Mateo County, the platform assigns verified, unique ‘digital fingerprints’ to every element in a city. Building robust models of each system, One Concern’s AI platform can then monitor site-specific impacts of not only climate change but each individual natural disaster, from sweeping thermal shifts to seismic movement.

This data, combined with records of city infrastructure and past disasters, is then used to predict future damage under a range of disaster scenarios, informing prevention methods and identifying structures in need of reinforcement.

Within just four years, One Concern can now make precise predictions with an 85 percent accuracy rate in under 15 minutes.

And as IoT-connected devices and intelligent hardware continue to boom, a blooming trillion-sensor economy will only serve to amplify AI’s predictive capacity, offering us immediate, preventive strategies long before disaster strikes.

Beyond natural disasters, however, crowdsourced intelligence, predictive crisis mapping, and AI-powered responses are proving just as formidable in humanitarian crises.

One extraordinary story is that of Ushahidi. When violence broke out after the 2007 Kenyan elections, one local blogger proposed a simple yet powerful question to the web: “Any techies out there willing to do a mashup of where the violence and destruction is occurring and put it on a map?”

Within days, four ‘techies’ heeded the call, building a platform that crowdsourced first-hand reports via SMS, mined the web for answers, and—with over 40,000 verified reports—sent alerts back to locals on the ground and viewers across the world.

Today, Ushahidi has been used in over 150 countries, reaching a total of 20 million people across 100,000+ deployments. Now an open-source crisis-mapping platform, its V3 (or “Ushahidi in the Cloud”) is accessible to anyone, mining millions of Tweets, hundreds of thousands of news articles, and geo-tagged, time-stamped data from countless sources.

Aggregating one of the longest-running crisis maps to date, Ushahidi’s Syria Tracker has proved invaluable in the crowdsourcing of witness reports. Providing real-time geographic visualizations of all verified data, Syria Tracker has enabled civilians to report everything from missing people and relief supply needs to civilian casualties and disease outbreaks—all while evading the government’s cell network, keeping identities private, and verifying reports prior to publication.

As mobile connectivity and abundant sensors converge with AI-mined crowd intelligence, real-time awareness will only multiply in speed and scale.

Imagining the Future….

Within the next 10 years, spatial web technology might even allow us to tap into mesh networks.

As I’ve explored in a previous blog on the implications of the spatial web, while traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.

In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
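
Here is a toy sketch of that core idea, with invented device positions and radio range: treat every in-range pair of phones as a link, then relay a message hop by hop using a breadth-first search, no tower or access point required.

```python
from collections import deque
from itertools import combinations

devices = {  # device id -> (x, y) position in meters (invented)
    "A": (0, 0), "B": (60, 10), "C": (120, 0), "D": (170, 40), "E": (230, 30),
}
RADIO_RANGE = 80  # meters; two devices link if within range of each other

links = {d: set() for d in devices}
for (d1, p1), (d2, p2) in combinations(devices.items(), 2):
    if (p1[0] - p2[0]) ** 2 + (p1[1] - p2[1]) ** 2 <= RADIO_RANGE ** 2:
        links[d1].add(d2)
        links[d2].add(d1)

def route(src, dst):
    """Breadth-first search: shortest relay path from src to dst."""
    frontier, seen = deque([[src]]), {src}
    while frontier:
        path = frontier.popleft()
        if path[-1] == dst:
            return path
        for nxt in links[path[-1]] - seen:
            seen.add(nxt)
            frontier.append(path + [nxt])
    return None  # network is partitioned: no relay chain exists

print(route("A", "E"))  # ['A', 'B', 'C', 'D', 'E']
```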

Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.

Imagine a scenario in which armed attacks break out across disjointed urban districts, each cluster of eyewitnesses and at-risk civilians broadcasting an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram in real time, giving family members and first responders complete information.

Or take a coastal community in the throes of torrential rainfall and failing infrastructure. Now empowered by a collective live feed, verification of data reports takes a matter of seconds, and richly-layered data informs first responders and AI platforms with unbelievable accuracy and specificity of relief needs.

By linking all the right technological pieces, we might even see the rise of automated drone deliveries. Imagine: crowdsourced intelligence is first cross-referenced with sensor data and verified algorithmically. AI is then leveraged to determine the specific needs and degree of urgency at ultra-precise coordinates. Within minutes, once approved by personnel, swarm robots rush to collect the requisite supplies, equipping size-appropriate drones with the right aid for rapid-fire delivery.
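
One minimal way to sketch the triage step of such a pipeline (all reports and scoring weights below are invented) is a priority queue: verified reports get scored by severity, headcount, and sensor confirmation, then popped for drone assignment in urgency order.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Report:
    priority: float                       # only field used for ordering
    location: tuple = field(compare=False)
    need: str = field(compare=False)

def urgency(severity, people_affected, sensor_confirmed):
    """Invented scoring rule: sensor confirmation boosts the score."""
    return severity * people_affected * (1.5 if sensor_confirmed else 1.0)

queue = []
for loc, need, sev, n, confirmed in [
    ((34.05, -118.24), "medical", 0.9, 12, True),
    ((34.10, -118.30), "water",   0.4, 50, False),
    ((34.02, -118.20), "shelter", 0.7,  8, True),
]:
    # heapq is a min-heap, so store negative urgency to pop the worst first.
    heapq.heappush(queue, Report(-urgency(sev, n, confirmed), loc, need))

while queue:
    r = heapq.heappop(queue)
    print(f"dispatch drone -> {r.need:7s} at {r.location} "
          f"(urgency {-r.priority:.1f})")
```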

This brings us to a second critical convergence: robots and drones.

While cutting-edge drone technology revolutionizes the way we deliver aid, new breakthroughs in AI-geared robotics are paving the way for superhuman emergency responses in some of today’s most dangerous environments.

Let’s explore a few of the most disruptive examples to reach the testing phase.

First up….

Autonomous Robots and Swarm Solutions
As hardware advancements converge with exploding AI capabilities, disaster relief robots are graduating from assistance roles to fully autonomous responders at a breakneck pace.

Born out of MIT’s Biomimetic Robotics Lab, the Cheetah III is but one of many robots that may form our first line of defense in everything from earthquake search-and-rescue missions to high-risk ops in dangerous radiation zones.

Now capable of running at 6.4 meters per second, Cheetah III can even leap up to a height of 60 centimeters, autonomously determining how to avoid obstacles and jump over hurdles as they arise.

Initially designed to perform inspection tasks in hazardous settings (think: nuclear plants or chemical factories), the Cheetah’s various iterations have focused on increasing its payload capacity and range of motion, and even adding a gripping function with enhanced dexterity.

Cheetah III and future versions are aimed at saving lives in almost any environment.

And the Cheetah III is not alone. Just this February, Tokyo Electric Power Company (TEPCO) put one of its own robots to the test. For the first time since Japan’s devastating 2011 tsunami, which led to three nuclear meltdowns in the nation’s Fukushima nuclear power plant, a robot successfully examined the reactor’s fuel.

Broadcasting the process with its built-in camera, the robot was able to retrieve small chunks of radioactive fuel at five of the six test sites, offering tremendous promise for long-term plans to clean up the still-deadly interior.

Also out of Japan, Mitsubishi Heavy Industries (MHI) is even using robots to fight fires with full autonomy. In a remarkable new feat, MHI’s Water Cannon Bot can now put out blazes in difficult-to-access or highly dangerous fire sites.

Delivering foam or water at 4,000 liters per minute and 1 megapascal (MPa) of pressure, the Cannon Bot and its accompanying Hose Extension Bot even form part of a greater AI-geared system to conduct reconnaissance and surveillance on larger transport vehicles.

As wildfires grow ever more untameable, high-volume production of such bots could prove a true lifesaver. Paired with predictive AI forest fire mapping and autonomous hauling vehicles, solutions like MHI’s Cannon Bot will not only save numerous lives but also help avoid population displacement and paralyzing damage to our natural environment before disaster has a chance to spread.

But even in cases where emergency shelter is needed, groundbreaking (literally) robotics solutions are fast to the rescue.

After multiple iterations by Fastbrick Robotics, the Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and directs blocks through its robotic arm in accordance with a 3D model.

Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long-term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.

But what if we need to build emergency shelters from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute for Advanced Architecture of Catalonia (IAAC) is already working on a solution.

In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation, and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.

But while cutting-edge robotics unlock extraordinary new frontiers for low-cost, large-scale emergency construction, novel hardware and computing breakthroughs are also enabling robotic scale at the other extreme of the spectrum.

Again, inspired by biological phenomena, robotics specialists across the US have begun to pilot tiny robotic prototypes for locating trapped individuals and assessing infrastructural damage.

Take RoboBees, tiny Harvard-developed bots that use electrostatic adhesion to ‘perch’ on walls and even ceilings, evaluating structural damage in the aftermath of an earthquake.

Or Carnegie Mellon’s prototyped Snakebot, capable of navigating through entry points that would otherwise be completely inaccessible to human responders. Driven by AI, the Snakebot can maneuver through even the most densely-packed rubble to locate survivors, using cameras and microphones for communication.

But when it comes to fast-paced reconnaissance in inaccessible regions, miniature robot swarms have good company.

Next-Generation Drones for Instantaneous Relief Supplies
Particularly in the case of wildfires and conflict zones, autonomous drone technology is fundamentally revolutionizing the way we identify survivors in need and automate relief supply.

Not only are drones enabling high-resolution imagery for real-time mapping and damage assessment, but preliminary research shows that UAVs far outpace ground-based rescue teams in locating isolated survivors.

As presented by a team of electrical engineers from the University of Science and Technology of China, drones could even build out a mobile wireless broadband network in record time using a “drone-assisted multi-hop device-to-device” program.

And as shown during Hurricane Harvey in Houston, drones can provide scores of predictive intel on everything from future flooding to damage estimates.

Among multiple others, a team led by Dr. Robin Murphy, Texas A&M computer science professor and director of the university’s Center for Robot-Assisted Search and Rescue, flew a total of 119 drone missions over the city, using everything from small quadcopters to military-grade unmanned planes. Not only were these missions critical for monitoring levee infrastructure, but also for identifying those left behind by human rescue teams.

But beyond surveillance, UAVs have begun to provide lifesaving supplies across some of the most remote regions of the globe. One of the most inspiring examples to date is Zipline.

Created in 2014, Zipline has completed 12,352 life-saving drone deliveries to date. While its drones are designed, tested, and assembled in California, Zipline primarily operates in Rwanda and Tanzania, hiring local operators and providing over 11 million people with instant access to medical supplies.

Providing everything from vaccines and HIV medications to blood and IV tubes, Zipline’s drones far outpace ground-based supply transport, in many instances providing life-critical blood cells, plasma, and platelets in under an hour.

But drone technology is even beginning to transcend the limited scale of medical supplies and food.

Now developing its drones under contracts with DARPA and the US Marine Corps, Logistic Gliders, Inc. has built autonomously-navigating drones capable of carrying 1,800 pounds of cargo over unprecedented distances.

Built from plywood, the company’s gliders are projected to cost as little as a few hundred dollars each, making them perfect candidates for high-volume remote aid deliveries, whether navigated by a pilot or self-flown in accordance with real-time disaster zone mapping.

As hardware continues to advance, autonomous drone technology coupled with real-time mapping algorithms opens up no end of opportunities for aid supply, disaster monitoring, and richly layered intel previously unimaginable in humanitarian relief.

Concluding Thoughts
Perhaps one of the most consequential and impactful applications of converging technologies is their transformation of disaster relief methods.

While AI-driven intel platforms crowdsource firsthand experiential data from those on the ground, mobile connectivity and drone-supplied networks are granting newfound narrative power to those most in need.

And as a wave of new hardware advancements gives rise to robotic responders, swarm technology, and aerial drones, we are fast approaching an age of instantaneous and efficiently-distributed responses in the midst of conflict and natural catastrophes alike.

Empowered by these new tools, what might we create when everyone on the planet has the same access to relief supplies and immediate resources? In a new age of prevention and fast recovery, what futures can you envision?

Image Credit: Arcansel / Shutterstock.com

#434792 Extending Human Longevity With ...

Lizards can regrow entire limbs. Flatworms, starfish, and sea cucumbers regrow entire bodies. Sharks constantly replace lost teeth, often growing over 20,000 teeth throughout their lifetimes. How can we translate these near-superpowers to humans?

The answer: through the cutting-edge innovations of regenerative medicine.

While big data and artificial intelligence transform how we practice medicine and invent new treatments, regenerative medicine is about replenishing, replacing, and rejuvenating our physical bodies.

In Part 5 of this blog series on Longevity and Vitality, I detail three of the regenerative technologies working together to fully augment our vital human organs.

Replenish: Stem cells, the regenerative engine of the body
Replace: Organ regeneration and bioprinting
Rejuvenate: Young blood and parabiosis

Let’s dive in.

Replenish: Stem Cells – The Regenerative Engine of the Body
Stem cells are undifferentiated cells that can transform into specialized cells such as heart, neuron, liver, lung, and skin cells, and can also divide to produce more stem cells.

In a child or young adult, these stem cells are in large supply, acting as a built-in repair system. They are often summoned to the site of damage or inflammation to repair and restore normal function.

But as we age, our supply of stem cells begins to diminish as much as 100- to 10,000-fold in different tissues and organs. In addition, stem cells undergo genetic mutations, which reduce their quality and effectiveness at renovating and repairing your body.

Imagine your stem cells as a team of repairmen in your newly constructed mansion. When the mansion is new and the repairmen are young, they can fix everything perfectly. But as the repairmen age and reduce in number, your mansion eventually goes into disrepair and finally crumbles.

What if you could restore and rejuvenate your stem cell population?

One option to accomplish this restoration and rejuvenation is to extract and concentrate your own autologous adult stem cells from places like your adipose (or fat) tissue or bone marrow.

These stem cells, however, are fewer in number and have undergone mutations (depending on your age) from their original ‘software code.’ Many scientists and physicians now prefer an alternative source, obtaining stem cells from the placenta or umbilical cord, the leftovers of birth.

These stem cells, available in large supply and expressing the undamaged software of a newborn, can be injected into joints or administered intravenously to rejuvenate and revitalize.

Think of these stem cells as chemical factories generating vital growth factors that can help to reduce inflammation, fight autoimmune disease, increase muscle mass, repair joints, and even revitalize skin and grow hair.

Over the last decade, the number of publications per year on stem cell-related research has increased 40x, and the stem cell market is expected to grow to $297 billion by 2022.

Rising research and development initiatives to develop therapeutic options for chronic diseases and growing demand for regenerative treatment options are the most significant drivers of this budding industry.

Biologists led by Kohji Nishida at Osaka University in Japan have discovered a new way to nurture and grow the tissues that make up the human eyeball. The scientists are able to grow retinas, corneas, the eye’s lens, and more, using only a small sample of adult skin.

In a Stanford study, seven of 18 stroke victims who agreed to stem cell treatments showed remarkable motor function improvements. This treatment could work for other neurodegenerative conditions such as Alzheimer’s, Parkinson’s, and ALS.

Doctors from the USC Neurorestoration Center and Keck Medicine of USC injected stem cells into the damaged cervical spine of a recently paralyzed 21-year-old man. Three months later, he showed dramatic improvement in sensation and movement of both arms.

In 2019, doctors in the U.K. cured a patient with HIV for the second time ever thanks to the efficacy of stem cells. After receiving an allogeneic haematopoietic (i.e., blood-forming) stem cell treatment for his Hodgkin’s lymphoma, the patient (who also had HIV) went into long-term HIV remission—18 months and counting at the time of the study’s publication.

Replace: Organ Regeneration and 3D Printing
Every 10 minutes, someone is added to the US organ transplant waiting list, totaling over 113,000 people waiting for replacement organs as of January 2019.

Countless more people in need of ‘spare parts’ never make it onto the waiting list. And on average, 20 people die each day while waiting for a transplant.

All told, an estimated 35 percent of all US deaths (~900,000 people) could be prevented or delayed with access to organ replacements.

The excessive demand for donated organs will only intensify as technologies like self-driving cars make the world safer, given that many donor organs come from auto and motorcycle accidents. Safer vehicles mean fewer accidents and fewer donations.

Clearly, replacement and regenerative medicine represent a massive opportunity.

Organ Entrepreneurs
Enter United Therapeutics CEO, Dr. Martine Rothblatt. A one-time aerospace entrepreneur (she was the founder of Sirius Satellite Radio), Rothblatt changed careers in the 1990s after her daughter developed a rare lung disease.

Her moonshot today is to create an industry of replacement organs. With an initial focus on diseases of the lung, Rothblatt set out to create replacement lungs. To accomplish this goal, her company United Therapeutics has pursued a number of technologies in parallel.

3D Printing Lungs
In 2017, United teamed up with one of the world’s largest 3D printing companies, 3D Systems, to build a collagen bioprinter and is paying another company, 3Scan, to slice up lungs and create detailed maps of their interior.

This 3D Systems bioprinter now operates according to a method called stereolithography. A UV laser flickers through a shallow pool of collagen doped with photosensitive molecules. Wherever the laser lingers, the collagen cures and becomes solid.

Gradually, the object being printed is lowered and new layers are added. The printer can currently lay down collagen at a resolution of around 20 micrometers, but will need to achieve micrometer-scale resolution to make the lung functional.

Once a collagen lung scaffold has been printed, the next step is to infuse it with human cells, a process called recellularization.

The goal here is to use stem cells that grow on scaffolding and differentiate, ultimately providing the proper functionality. Early evidence indicates this approach can work.

In 2018, Harvard University experimental surgeon Harald Ott reported that he pumped billions of human cells (from umbilical cords and diced lungs) into a pig lung stripped of its own cells. When Ott’s team reconnected it to a pig’s circulation, the resulting organ showed rudimentary function.

Humanizing Pig Lungs
Another of Rothblatt’s organ manufacturing strategies is called xenotransplantation, the idea of transplanting an animal’s organs into humans who need a replacement.

Given the fact that adult pig organs are similar in size and shape to those of humans, United Therapeutics has focused on genetically engineering pigs to allow humans to use their organs. “It’s actually not rocket science,” said Rothblatt in her 2015 TED talk. “It’s editing one gene after another.”

To accomplish this goal, United Therapeutics made a series of investments in companies such as Revivicor Inc. and Synthetic Genomics Inc., and signed large funding agreements with the University of Maryland, University of Alabama, and New York Presbyterian/Columbia University Medical Center to create xenotransplantation programs for new hearts, kidneys, and lungs, respectively. Rothblatt hopes to see human translation in three to four years.

In preparation for that day, United Therapeutics owns a 132-acre property in Research Triangle Park and built a 275,000-square-foot medical laboratory that will ultimately have the capability to annually produce up to 1,000 sets of healthy pig lungs—known as xenolungs—from genetically engineered pigs.

Lung Ex Vivo Perfusion Systems
Beyond 3D printing and genetically engineering pig lungs, Rothblatt has already begun implementing a third near-term approach to improve the supply of lungs across the US.

Only about 30 percent of potential donor lungs meet transplant criteria in the first place; of those, only about 85 percent are usable once they arrive at the surgery center. As a result, nearly 75 percent of potential donor lungs never make it to a recipient in need.
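
The arithmetic behind that figure, for the curious:

```python
meet_criteria = 0.30      # donor lungs meeting transplant criteria
usable_on_arrival = 0.85  # fraction of those still usable at the surgery center

reach_recipient = meet_criteria * usable_on_arrival   # 0.255
print(f"{reach_recipient:.1%} reach a recipient; "
      f"{1 - reach_recipient:.1%} never make it")     # 25.5%; 74.5%, i.e. ~75%
```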

What if these lungs could be rejuvenated? This concept informs Dr. Rothblatt’s next approach.

In 2016, United Therapeutics invested $41.8 million in TransMedics Inc., an Andover, Massachusetts company that develops ex vivo perfusion systems for donor lungs, hearts, and kidneys.

The XVIVO Perfusion System takes marginal-quality lungs that initially failed to meet transplantation standard-of-care criteria and perfuses and ventilates them at normothermic conditions, providing an opportunity for surgeons to reassess transplant suitability.

Rejuvenate: Young Blood and Parabiosis
In HBO’s parody of the Bay Area tech community, Silicon Valley, one of the episodes (Season 4, Episode 5) is named “The Blood Boy.”

In this installment, tech billionaire Gavin Belson (Matt Ross) is meeting with Richard Hendricks (Thomas Middleditch) and his team, speaking about the future of the decentralized internet. A young, muscled twenty-something disrupts the meeting when he rolls in a transfusion stand and silently hooks an intravenous connection between himself and Belson.

Belson then introduces the newcomer as his “transfusion associate” and begins to explain the science of parabiosis: “Regular transfusions of the blood of a younger physically fit donor can significantly retard the aging process.”

While the sitcom is fiction, that science has merit, and the scenario portrayed in the episode is already happening today.

On the first point, research at Stanford and Harvard has demonstrated that older animals, when transfused with the blood of young animals, experience regeneration across many tissues and organs.

The opposite is also true: young animals, when transfused with the blood of older animals, experience accelerated aging. But capitalizing on this virtual fountain of youth has been tricky.

Ambrosia
One company, a San Francisco-based startup called Ambrosia, recently commenced a trial on parabiosis. Their protocol is simple: healthy participants aged 35 and older get a transfusion of blood plasma from donors under 25, and researchers monitor their blood over the next two years for molecular indicators of health and aging.

Ambrosia’s founder Jesse Karmazin became interested in launching a company around parabiosis after seeing impressive data from animals and studies conducted abroad in humans: In one trial after another, subjects experience a reversal of aging symptoms across every major organ system. “The effects seem to be almost permanent,” he said. “It’s almost like there’s a resetting of gene expression.”

Following an FDA press release in February 2019, however, Ambrosia halted its consumer-facing treatment after several months of operation. (Separately, infusing your own banked cord blood stem cells as you age may have tremendous longevity benefits.)

Understandably, the FDA raised concerns about the practice of parabiosis because to date, there is a marked lack of clinical data to support the treatment’s effectiveness.

Elevian
On the other end of the reputability spectrum is a startup called Elevian, spun out of Harvard University. Elevian is approaching longevity with a careful, scientifically validated strategy. (Full Disclosure: I am both an advisor to and investor in Elevian.)

CEO Mark Allen, MD, is joined by a dozen MDs and PhDs out of Harvard. Elevian’s scientific founders started the company after identifying specific circulating factors that may be responsible for the “young blood” effect.

One example: A naturally occurring molecule known as “growth differentiation factor 11,” or GDF11, when injected into aged mice, reproduces many of the regenerative effects of young blood, regenerating heart, brain, muscles, lungs, and kidneys.

More specifically, GDF11 supplementation reduces age-related cardiac hypertrophy, accelerates skeletal muscle repair, improves exercise capacity, improves brain function and cerebral blood flow, and improves metabolism.

Elevian is developing a number of therapeutics that regulate GDF11 and other circulating factors. The goal is to restore our body’s natural regenerative capacity, which Elevian believes can address some of the root causes of age-associated disease with the promise of reversing or preventing many aging-related diseases and extending the healthy lifespan.

Conclusion
In 1992, futurist Leland Kaiser coined the term “regenerative medicine”:

“A new branch of medicine will develop that attempts to change the course of chronic disease and in many instances will regenerate tired and failing organ systems.”

Since then, the powerful regenerative medicine industry has grown exponentially, and this rapid growth is anticipated to continue.

A dramatic extension of the human healthspan is just over the horizon. Soon, we’ll all have the regenerative superpowers previously relegated to a handful of animals and comic books.

What new opportunities will open up when anybody, anywhere, at any time can regenerate, replenish, and replace entire organs and metabolic systems on command?

Image Credit: Giovanni Cancemi / Shutterstock.com
