Tag Archives: lab

#434827 AI and Robotics Are Transforming ...

During the past 50 years, the frequency of recorded natural disasters has surged nearly five-fold.

In this blog, I’ll be exploring how converging exponential technologies (AI, robotics, drones, sensors, networks) are transforming the future of disaster relief—how we can prevent disasters in the first place, and how we can get help to victims during that first golden hour, when immediate relief can save lives.

Here are the three areas of greatest impact:

AI, predictive mapping, and the power of the crowd
Next-gen robotics and swarm solutions
Aerial drones and immediate aid supply

Let’s dive in!

Artificial Intelligence and Predictive Mapping
When it comes to immediate and high-precision emergency response, data is gold.

Already, the meteoric rise of space-based networks, stratosphere-hovering balloons, and 5G telecommunications infrastructure is in the process of connecting every last individual on the planet.

Aside from democratizing the world’s information, however, this upsurge in connectivity will soon grant anyone, particularly those most vulnerable to natural disasters, the ability to broadcast detailed geo-tagged data.

Armed with the power of data broadcasting and the force of the crowd, disaster victims now play a vital role in emergency response, turning a historically one-way blind rescue operation into a two-way dialogue between connected crowds and smart response systems.

With a skyrocketing abundance of data, however, comes a new paradigm: one in which we no longer face a scarcity of answers. Instead, it will be the quality of our questions that matters most.

This is where AI comes in: our mining mechanism.

In the case of emergency response, what if we could strategically map an almost endless amount of incoming data points? Or predict the dynamics of a flood and identify a tsunami’s most vulnerable targets before it even strikes? Or even amplify critical signals to trigger automatic aid by surveillance drones and immediately alert crowdsourced volunteers?

Already, a number of key players are leveraging AI, crowdsourced intelligence, and cutting-edge visualizations to optimize crisis response and multiply relief speeds.

Take One Concern, for instance. Born out of Stanford under the mentorship of leading AI expert Andrew Ng, One Concern leverages AI for analytical disaster assessment and calculated damage estimates.

Partnering with Los Angeles, San Francisco, and numerous cities in San Mateo County, the platform assigns verified, unique ‘digital fingerprints’ to every element in a city. Building robust models of each system, One Concern’s AI platform can then monitor the site-specific impacts not only of climate change but of each individual natural disaster, from sweeping thermal shifts to seismic movement.

This data, combined with records of city infrastructure and past disasters, is then used to predict future damage under a range of disaster scenarios, informing prevention efforts and flagging structures in need of reinforcement.
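
To make the idea concrete, here is a minimal, hypothetical sketch of how per-building features and past-disaster outcomes might feed such a damage-prediction model. It is not One Concern’s actual pipeline; the building attributes, the scikit-learn model choice, and the sample values are all invented for illustration.

```python
# Hypothetical sketch: predicting per-building damage from infrastructure
# features and past-disaster records. Not One Concern's actual pipeline.
from dataclasses import dataclass
from sklearn.ensemble import GradientBoostingRegressor  # assumes scikit-learn is installed

@dataclass
class Building:
    fingerprint_id: str       # the "digital fingerprint" of one city element
    year_built: int
    soil_liquefaction: float  # 0..1 susceptibility score (invented feature)
    stories: int
    retrofit: bool

def features(b: Building, quake_magnitude: float) -> list[float]:
    return [b.year_built, b.soil_liquefaction, b.stories, float(b.retrofit), quake_magnitude]

# Training data from past disasters: (building, magnitude, observed damage fraction 0..1)
past = [
    (Building("bldg-001", 1932, 0.8, 4, False), 6.9, 0.65),
    (Building("bldg-002", 2009, 0.2, 12, True), 6.9, 0.05),
    (Building("bldg-003", 1974, 0.5, 2, False), 5.8, 0.20),
]
X = [features(b, m) for b, m, _ in past]
y = [d for _, _, d in past]
model = GradientBoostingRegressor().fit(X, y)

# Predict damage for a range of scenarios, flagging structures needing reinforcement.
candidate = Building("bldg-104", 1951, 0.7, 3, False)
for magnitude in (6.0, 7.0, 8.0):
    print(magnitude, round(float(model.predict([features(candidate, magnitude)])[0]), 2))
```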

Within just four years, One Concern can now make precise predictions with an 85 percent accuracy rate in under 15 minutes.

And as IoT-connected devices and intelligent hardware continue to proliferate, the emerging trillion-sensor economy will only amplify AI’s predictive capacity, offering us immediate, preventive strategies long before disaster strikes.

Beyond natural disasters, however, crowdsourced intelligence, predictive crisis mapping, and AI-powered responses are proving just as formidable in humanitarian crises.

One extraordinary story is that of Ushahidi. When violence broke out after the 2007 Kenyan elections, one local blogger proposed a simple yet powerful question to the web: “Any techies out there willing to do a mashup of where the violence and destruction is occurring and put it on a map?”

Within days, four ‘techies’ heeded the call, building a platform that crowdsourced first-hand reports via SMS, mined the web for answers, and—with over 40,000 verified reports—sent alerts back to locals on the ground and viewers across the world.

Today, Ushahidi has been used in over 150 countries, reaching a total of 20 million people across 100,000+ deployments. Now an open-source crisis-mapping software, its V3 (or “Ushahidi in the Cloud”) is accessible to anyone, mining millions of Tweets, hundreds of thousands of news articles, and geo-tagged, time-stamped data from countless sources.

Powering one of the longest-running crisis maps to date, Ushahidi’s Syria Tracker has proved invaluable in crowdsourcing witness reports. Providing real-time geographic visualizations of all verified data, Syria Tracker has enabled civilians to report everything from missing people and relief supply needs to civilian casualties and disease outbreaks—all while evading the government’s cell network, keeping identities private, and verifying reports prior to publication.
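
At its core, a crisis map like this aggregates verified, geo-tagged, time-stamped reports into map cells. The toy sketch below illustrates only that aggregation step; it is not Ushahidi’s code, and the report categories and grid size are assumptions made for illustration.

```python
# Illustrative sketch only: bucketing verified, geo-tagged, time-stamped
# reports into map cells, the core aggregation step behind a crisis map.
from collections import Counter, namedtuple

Report = namedtuple("Report", "lat lon timestamp category verified")

def cell(lat: float, lon: float, size_deg: float = 0.05) -> tuple[int, int]:
    """Snap a coordinate to a coarse grid cell (~5 km at the equator)."""
    return (int(lat // size_deg), int(lon // size_deg))

def build_heatmap(reports: list[Report]) -> dict[tuple[int, int], Counter]:
    heatmap: dict[tuple[int, int], Counter] = {}
    for r in reports:
        if not r.verified:          # only verified reports are published
            continue
        c = cell(r.lat, r.lon)
        heatmap.setdefault(c, Counter())[r.category] += 1
    return heatmap

reports = [
    Report(36.20, 37.13, "2013-05-02T10:15Z", "missing_person", True),
    Report(36.21, 37.15, "2013-05-02T11:40Z", "supply_need", True),
    Report(36.19, 37.12, "2013-05-02T12:05Z", "casualty", False),  # unverified, dropped
]
for c, counts in build_heatmap(reports).items():
    print(c, dict(counts))
```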

As mobile connectivity and abundant sensors converge with AI-mined crowd intelligence, real-time awareness will only multiply in speed and scale.

Imagining the Future….

Within the next 10 years, spatial web technology might even allow us to tap into mesh networks.

As I’ve explored in a previous blog on the implications of the spatial web, while traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.

In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
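
As a rough illustration of the idea, the toy model below floods a message from one device to every other device reachable through chains of neighbors in radio range. It is not any particular mesh protocol; the 100-meter radio range and the coordinates are invented.

```python
# Toy mesh-network flood: each phone rebroadcasts unseen messages to
# neighbors in radio range. Illustrative only; real mesh protocols
# (routing, deduplication, encryption) are far more involved.
from collections import deque
import math

RADIO_RANGE_M = 100.0

def in_range(a, b):
    return math.dist(a, b) <= RADIO_RANGE_M

def flood(positions: dict[str, tuple[float, float]], origin: str) -> set[str]:
    """Return the set of devices a message starting at `origin` can reach."""
    reached, queue = {origin}, deque([origin])
    while queue:
        current = queue.popleft()
        for device, pos in positions.items():
            if device not in reached and in_range(positions[current], pos):
                reached.add(device)
                queue.append(device)
    return reached

phones = {"a": (0, 0), "b": (80, 0), "c": (150, 0), "d": (500, 500)}
print(flood(phones, "a"))   # {'a', 'b', 'c'} -- 'd' is out of range of every node
```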

Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.

Imagine a scenario in which armed attacks break out across disjointed urban districts, each cluster of eyewitnesses and at-risk civilians broadcasting an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram in real time, giving family members and first responders complete information.

Or take a coastal community in the throes of torrential rainfall and failing infrastructure. Now empowered by a collective live feed, verification of data reports takes a matter of seconds, and richly-layered data informs first responders and AI platforms with unbelievable accuracy and specificity of relief needs.

By linking all the right technological pieces, we might even see the rise of automated drone deliveries. Imagine: crowdsourced intelligence is first cross-referenced with sensor data and verified algorithmically. AI is then leveraged to determine the specific needs and degree of urgency at ultra-precise coordinates. Within minutes, once approved by personnel, swarm robots rush to collect the requisite supplies, equipping size-appropriate drones with the right aid for rapid-fire delivery.
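
Pieced together, that pipeline is essentially a score-and-dispatch loop: verify reports, weigh urgency, and, once approved, send the right payload to the right coordinates. Here is a hypothetical sketch; the request fields, weights, and threshold are invented for illustration and do not correspond to any real system.

```python
# Hypothetical triage-and-dispatch loop; names, weights, and thresholds are
# invented for illustration and are not any real system's API.
from dataclasses import dataclass

@dataclass
class Request:
    lat: float
    lon: float
    need: str                 # e.g. "blood", "water", "shelter"
    crowd_reports: int        # independent crowdsourced reports
    sensor_confirmed: bool    # cross-referenced against sensor data

def urgency(req: Request) -> float:
    weights = {"blood": 1.0, "water": 0.6, "shelter": 0.4}
    score = weights.get(req.need, 0.3) * min(req.crowd_reports, 10) / 10
    return score * (1.5 if req.sensor_confirmed else 0.5)

def dispatch(requests: list[Request], threshold: float = 0.5, approved: bool = True):
    # Highest-urgency, personnel-approved requests get a drone first.
    for req in sorted(requests, key=urgency, reverse=True):
        if approved and urgency(req) >= threshold:
            print(f"Dispatch drone with {req.need} to ({req.lat:.4f}, {req.lon:.4f})")

dispatch([
    Request(29.76, -95.37, "blood", crowd_reports=7, sensor_confirmed=True),
    Request(29.74, -95.40, "shelter", crowd_reports=2, sensor_confirmed=False),
])
```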

This brings us to a second critical convergence: robots and drones.

While cutting-edge drone technology revolutionizes the way we deliver aid, new breakthroughs in AI-geared robotics are paving the way for superhuman emergency responses in some of today’s most dangerous environments.

Let’s explore a few of the most disruptive examples to reach the testing phase.

First up….

Autonomous Robots and Swarm Solutions
As hardware advancements converge with exploding AI capabilities, disaster relief robots are graduating from assistance roles to fully autonomous responders at a breakneck pace.

Born out of MIT’s Biomimetic Robotics Lab, the Cheetah III is but one of many robots that may form our first line of defense in everything from earthquake search-and-rescue missions to high-risk ops in dangerous radiation zones.

Now capable of running at 6.4 meters per second, Cheetah III can even leap up to a height of 60 centimeters, autonomously determining how to avoid obstacles and jump over hurdles as they arise.

Initially designed to perform a spectrum of inspection tasks in hazardous settings (think: nuclear plants or chemical factories), the Cheetah’s various iterations have focused on increasing its payload capacity and range of motion, and even adding a gripping function with enhanced dexterity.

Cheetah III and future versions are aimed at saving lives in almost any environment.

And the Cheetah III is not alone. Just this February, Tokyo Electric Power Company (TEPCO) put one of its own robots to the test. For the first time since Japan’s devastating 2011 tsunami, which triggered three meltdowns at the country’s Fukushima nuclear power plant, a robot has successfully examined the reactor’s fuel.

Broadcasting the process with its built-in camera, the robot was able to retrieve small chunks of radioactive fuel at five of the six test sites, offering tremendous promise for long-term plans to clean up the still-deadly interior.

Also out of Japan, Mitsubishi Heavy Industries (MHI) is even using robots to fight fires with full autonomy. In a remarkable new feat, MHI’s Water Cannon Bot can now put out blazes in difficult-to-access or highly dangerous fire sites.

Delivering foam or water at 4,000 liters per minute and 1 megapascal (MPa) of pressure, the Cannon Bot and its accompanying Hose Extension Bot even form part of a greater AI-geared system to conduct reconnaissance and surveillance on larger transport vehicles.

As wildfires grow ever more untameable, high-volume production of such bots could prove a true lifesaver. Paired with predictive AI forest fire mapping and autonomous hauling vehicles, solutions like MHI’s Cannon Bot will not only save numerous lives but also help avoid population displacement and paralyzing damage to the natural environment before a disaster has the chance to spread.

But even in cases where emergency shelter is needed, groundbreaking (literally) robotics solutions are fast to the rescue.

After multiple iterations by Fastbrick Robotics, the Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and directs blocks through its robotic arm in accordance with a 3D model.

Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.

But what if we need to build emergency shelters from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute for Advanced Architecture of Catalonia (IAAC) is already working on a solution.

In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation, and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.

But while cutting-edge robotics unlock extraordinary new frontiers for low-cost, large-scale emergency construction, novel hardware and computing breakthroughs are also enabling robotic scale at the other extreme of the spectrum.

Again, inspired by biological phenomena, robotics specialists across the US have begun to pilot tiny robotic prototypes for locating trapped individuals and assessing infrastructural damage.

Take RoboBees, tiny Harvard-developed bots that use electrostatic adhesion to ‘perch’ on walls and even ceilings, evaluating structural damage in the aftermath of an earthquake.

Or Carnegie Mellon’s prototyped Snakebot, capable of navigating through entry points that would otherwise be completely inaccessible to human responders. Driven by AI, the Snakebot can maneuver through even the most densely-packed rubble to locate survivors, using cameras and microphones for communication.

But when it comes to fast-paced reconnaissance in inaccessible regions, miniature robot swarms have good company.

Next-Generation Drones for Instantaneous Relief Supplies
Particularly in the case of wildfires and conflict zones, autonomous drone technology is fundamentally revolutionizing the way we identify survivors in need and automate relief supply.

Not only are drones enabling high-resolution imagery for real-time mapping and damage assessment, but preliminary research shows that UAVs far outpace ground-based rescue teams in locating isolated survivors.

As presented by a team of electrical engineers from the University of Science and Technology of China, drones could even build out a mobile wireless broadband network in record time using a “drone-assisted multi-hop device-to-device” program.
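
The intuition is straightforward: if survivors’ phones cannot reach a working tower directly, relay drones can be spaced along the path so that every hop stays within radio range. The back-of-the-envelope sketch below is not the researchers’ actual scheme; the 500-meter hop range is an assumed figure.

```python
# Back-of-the-envelope sketch of relay-drone placement for a multi-hop link;
# not the USTC researchers' actual scheme. Hop range is an assumed figure.
import math

def relay_positions(src, dst, hop_range_m=500.0):
    """Place the minimum number of relay drones so each hop <= hop_range_m."""
    distance = math.dist(src, dst)
    hops = math.ceil(distance / hop_range_m)
    relays = []
    for i in range(1, hops):
        t = i / hops   # evenly space relays along the straight-line path
        relays.append((src[0] + t * (dst[0] - src[0]),
                       src[1] + t * (dst[1] - src[1])))
    return relays

survivors, tower = (0.0, 0.0), (2300.0, 0.0)
print(relay_positions(survivors, tower))   # 4 relays spaced ~460 m apart
```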

And as shown during Hurricane Harvey in Houston, drones can provide scores of predictive intel on everything from future flooding to damage estimates.

Among many others, a team led by Dr. Robin Murphy, a Texas A&M computer science professor and director of the university’s Center for Robot-Assisted Search and Rescue, flew a total of 119 drone missions over the city, using everything from small-scale quadcopters to military-grade unmanned planes. These missions were critical not only for monitoring levee infrastructure but also for identifying those missed by human rescue teams.

But beyond surveillance, UAVs have begun to provide lifesaving supplies across some of the most remote regions of the globe. One of the most inspiring examples to date is Zipline.

Created in 2014, Zipline has completed 12,352 life-saving drone deliveries to date. While its drones are designed, tested, and assembled in California, Zipline primarily operates in Rwanda and Tanzania, hiring local operators and providing over 11 million people with instant access to medical supplies.

Providing everything from vaccines and HIV medications to blood and IV tubes, Zipline’s drones far outpace ground-based supply transport, in many instances providing life-critical blood cells, plasma, and platelets in under an hour.

But drone technology is even beginning to transcend the limited scale of medical supplies and food.

Now developing its drones under contracts with DARPA and the US Marine Corps, Logistic Gliders, Inc. has built autonomously navigating drones capable of carrying 1,800 pounds of cargo over unprecedented distances.

Built from plywood, Logistic Gliders’ aircraft are projected to cost as little as a few hundred dollars each, making them perfect candidates for high-volume remote aid deliveries, whether navigated by a pilot or flown autonomously in accordance with real-time disaster zone mapping.

As hardware continues to advance, autonomous drone technology coupled with real-time mapping algorithms opens no end of opportunities for aid supply, disaster monitoring, and richly layered intel previously unimaginable in humanitarian relief.

Concluding Thoughts
Perhaps one of the most consequential applications of converging technologies is their transformation of disaster relief methods.

While AI-driven intel platforms crowdsource firsthand experiential data from those on the ground, mobile connectivity and drone-supplied networks are granting newfound narrative power to those most in need.

And as a wave of new hardware advancements gives rise to robotic responders, swarm technology, and aerial drones, we are fast approaching an age of instantaneous and efficiently-distributed responses in the midst of conflict and natural catastrophes alike.

Empowered by these new tools, what might we create when everyone on the planet has the same access to relief supplies and immediate resources? In a new age of prevention and fast recovery, what futures can you envision?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Arcansel / Shutterstock.com

Posted in Human Robots

#434823 The Tangled Web of Turning Spider Silk ...

Spider-Man is one of the most popular superheroes of all time. It’s a bit surprising given that one of the more common phobias is arachnophobia—a debilitating fear of spiders.

Perhaps more fantastical is that young Peter Parker, a brainy high school science nerd, seemingly developed overnight the famous web-shooters and the synthetic spider silk that he uses to swing across the cityscape like Tarzan through the jungle.

That’s because scientists have been trying for decades to replicate spider silk, a material that is five times stronger than steel, among its many superpowers. In recent years, researchers have been untangling the protein-based fiber’s structure down to the molecular level, leading to new insights and new potential for eventual commercial uses.

The applications for such a material seem near endless. There are the more futuristic visions, like enabling robotic “muscles” for human-like movement or ensnaring real-life villains with a Spider-Man-like web. Near-term applications could include the biomedical industry, such as bandages and adhesives, and replacement textiles for everything from rope to seat belts to parachutes.

Spinning Synthetic Spider Silk
Randy Lewis has been studying the properties of spider silk and developing methods for producing it synthetically for more than three decades. In the 1990s, his research team was behind cloning the first spider silk gene, as well as the first to identify and sequence the proteins that make up the six different silks that web slingers make. Each has different mechanical properties.

“So our thought process was that you could take that information and begin to understand what made them strong and what makes them stretchy, and why some are very stretchy and some are not stretchy at all, and some are stronger and some are weaker,” explained Lewis, a biology professor at Utah State University and director of the Synthetic Spider Silk Lab, in an interview with Singularity Hub.

Spiders are naturally territorial and cannibalistic, so any attempt to farm them for silk would likely end in an orgy of arachnid violence. Instead, Lewis and company have genetically modified different organisms to produce spider silk synthetically, including inserting a couple of web-making genes into the genetic code of goats. The goats’ milk contains spider silk proteins.

The lab also produces synthetic spider silk through a fermentation process not entirely dissimilar to brewing beer, but using genetically modified bacteria to make the desired spider silk proteins. A similar technique has been used for years to make a key enzyme in cheese production. More recently, companies are using transgenic bacteria to make meat and milk proteins, entirely bypassing animals in the process.

The same fermentation technology is used by a chic startup called Bolt Threads outside of San Francisco that has raised more than $200 million for fashionable fibers made out of synthetic spider silk it calls Microsilk. (The company is also developing a second leather-like material, Mylo, using the underground root structure of mushrooms known as mycelium.)

Lewis’ lab also uses transgenic silkworms to produce a kind of composite material made up of the domesticated insect’s own silk proteins and those of spider silk. “Those have some fairly impressive properties,” Lewis said.

The researchers are even experimenting with genetically modified alfalfa. One of the big advantages there is that once the spider silk protein has been extracted, the remaining protein could be sold as livestock feed. “That would bring the cost of spider silk protein production down significantly,” Lewis said.

Building a Better Web
Producing synthetic spider silk isn’t the problem, according to Lewis, but the ability to do it at scale commercially remains a sticking point.

Another challenge is “weaving” the synthetic spider silk into usable products that can take advantage of the material’s marvelous properties.

“It is possible to make silk proteins synthetically, but it is very hard to assemble the individual proteins into a fiber or other material forms,” said Markus Buehler, head of the Department of Civil and Environmental Engineering at MIT, in an email to Singularity Hub. “The spider has a complex spinning duct in which silk proteins are exposed to physical forces, chemical gradients, the combination of which generates the assembly of molecules that leads to silk fibers.”

Buehler recently co-authored a paper in the journal Science Advances that found dragline spider silk exhibits different properties in response to changes in humidity that could eventually have applications in robotics.

Specifically, spider silk suddenly contracts and twists above a certain level of relative humidity, exerting enough force to “potentially be competitive with other materials being explored as actuators—devices that move to perform some activity such as controlling a valve,” according to a press release.

Studying Spider Silk Up Close
Recent studies at the molecular level are helping scientists learn more about the unique properties of spider silk, which may help researchers develop materials with extraordinary capabilities.

For example, scientists at Arizona State University used magnetic resonance tools and other instruments to image the abdomen of a black widow spider. They produced what they called the first molecular-level model of spider silk protein fiber formation, providing insights on the nanoparticle structure. The research was published last October in Proceedings of the National Academy of Sciences.

A cross section of the abdomen of a black widow spider (Latrodectus hesperus) used in this study at Arizona State University. Image Credit: Samrat Amin.
Also in 2018, a study presented in Nature Communications described a sort of molecular clamp that binds the silk protein building blocks, which are called spidroins. The researchers observed for the first time that the clamp self-assembles in a two-step process, contributing to the extensibility, or stretchiness, of spider silk.

Another team put the spider silk of a brown recluse under an atomic force microscope, discovering that each strand, already 1,000 times thinner than a human hair, is made up of thousands of nanostrands. That helps explain its extraordinary tensile strength, though technique is also a factor, as the brown recluse uses a special looping method to reinforce its silk strands. The study also appeared last year in the journal ACS Macro Letters.

Making Spider Silk Stick
Buehler said his team is now trying to develop better and faster predictive methods to design silk proteins using artificial intelligence.

“These new methods allow us to generate new protein designs that do not naturally exist and which can be explored to optimize certain desirable properties like torsional actuation, strength, bioactivity—for example, tissue engineering—and others,” he said.

Meanwhile, Lewis’ lab has discovered a method that allows it to solubilize spider silk protein in what is essentially a water-based solution, eschewing acids or other toxic compounds that are normally used in the process.

That enables the researchers to develop materials beyond fiber, including adhesives that “are better than an awful lot of the current commercial adhesives,” Lewis said, as well as coatings that could be used to dampen vibrations, for example.

“We’re making gels for various kinds of tissue regeneration, as well as drug delivery, and things like that,” he added. “So we’ve expanded the use profile from something beyond fibers to something that is a much more extensive portfolio of possible kinds of materials.”

And, yes, there are even designs at the Synthetic Spider Silk Lab for developing a Spider-Man web-slinger material. The US Navy is interested in non-destructive ways of disabling an enemy vessel, such as fouling its propeller. The project also includes producing synthetic proteins from the hagfish, an eel-like critter that exudes a gelatinous slime when threatened.

Lewis said that while the potential for spider silk is certainly headline-grabbing, he cautioned that much of the hype is not focused on the unique mechanical properties that could lead to advances in healthcare and other industries.

“We want to see spider silk out there because it’s a unique material, not because it’s got marketing appeal,” he said.

Image Credit: mycteria / Shutterstock.com

Posted in Human Robots

#434812 This Week’s Awesome Stories From ...

FUTURE OF FOOD
Behold the ‘Beefless Impossible Whopper’
Nathaniel Popper | The New York Times
“Burger King is introducing a Whopper made with a vegetarian patty from the start-up Impossible Foods. The deal is a big step toward the mainstream for start-ups trying to mimic and replace meat.”

ARTIFICIAL INTELLIGENCE
The Animal-AI Olympics Is Going to Treat AI Like a Lab Rat
Oscar Schwartz | MIT Technology Review
“What is being tested is not a particular type of intelligence but the ability for a single agent to adapt to diverse environments. This would demonstrate a limited form of generalized intelligence—a type of common sense that AI will need if it is ever to succeed in our homes or in our daily lives.”

SPACE
Falcon Heavy’s First Real Launch on Sunday Is the Dawn of a New Heavy-Lift Era in Space
Devin Coldewey | TechCrunch
“The Falcon Heavy has flown before, but now it’s got a payload that matters and competitors nipping at its heels. It’s the first of a new generation of launch vehicles that can take huge payloads to space cheaply and frequently, opening up a new frontier in the space race.”

ROBOTICS
Self-Driving Harvesting Robot Suctions the Fruit Off Trees
Luke Dormehl | Digital Trends
“[Abundant Robotics] has developed a cutting edge solution to the apple-picking problem in the form of an autonomous tractor-style vehicle which can navigate through orchards using Lidar. Once it spots the apples it seeks, it’s able to detect their ripeness using image recognition technology. It can then reach out and literally suction its chosen apples off the trees and into an on-board storage bin.”

CRYPTOCURRENCY
Amid Bitcoin Uncertainty ‘the Smart Money Knows That Crypto Is Not Ready’
Nathaniel Popper | The New York Times
“Some cryptocurrency enthusiasts had hoped that the entrance of Wall Street institutions would give them legitimacy with traditional investors. But their struggles—and waning interest—illustrate the difficulty in bringing Bitcoin from the fringes of the internet into the mainstream financial world.”

SCIENCE
Sorry, Graphene—Borophene Is the New Wonder Material That’s Got Everyone Excited
Emerging Technology from the arXiv | MIT Technology Review
“Stronger and more flexible than graphene, a single-atom layer of boron could revolutionize sensors, batteries, and catalytic chemistry.”

Image Credit: JoeZ / Shutterstock.com

Posted in Human Robots

#434797 This Week’s Awesome Stories From ...

GENE EDITING
Genome Engineers Made More Than 13,000 Genome Edits in a Single Cell
Antonio Regalado | MIT Technology Review
“The group, led by gene technologist George Church, wants to rewrite genomes at a far larger scale than has currently been possible, something it says could ultimately lead to the ‘radical redesign’ of species—even humans.”

ROBOTICS
Inside Google’s Rebooted Robotics Program
Cade Metz | The New York Times
“Google’s new lab is indicative of a broader effort to bring so-called machine learning to robotics. …Many believe that machine learning—not extravagant new devices—will be the key to developing robotics for manufacturing, warehouse automation, transportation and many other tasks.”

VIDEOS
Boston Dynamics Builds the Warehouse Robot of Jeff Bezos’ Dreams
Luke Dormehl | Digital Trends
“…for anyone wondering what the future of warehouse operation is likely to look like, this offers a far more practical glimpse of the years to come than, say, a dancing dog robot. As Boston Dynamics moves toward commercializing its creations for the first time, this could turn out to be a lot closer than you might think.”

TECHNOLOGY
Europe Is Splitting the Internet Into Three
Casey Newton | The Verge
“The internet had previously been divided into two: the open web, which most of the world could access; and the authoritarian web of countries like China, which is parceled out stingily and heavily monitored. As of today, though, the web no longer feels truly worldwide. Instead we now have the American internet, the authoritarian internet, and the European internet. How does the EU Copyright Directive change our understanding of the web?”

VIRTUAL REALITY
No Man’s Sky’s Next Update Will Let You Explore Infinite Space in Virtual Reality
Taylor Hatmaker | TechCrunch
“Assuming the game runs well enough, No Man’s Sky Virtual Reality will be a far cry from gimmicky VR games that lack true depth, offering one of the most expansive—if not the most expansive—VR experiences to date.”

3D PRINTING
3D Metal Printing Tries to Break Into the Manufacturing Mainstream
Mark Anderson | IEEE Spectrum
“It’s been five or so years since 3D printing was at peak hype. Since then, the technology has edged its way into a new class of materials and started to break into more applications. Today, 3D printers are being seriously considered as a means to produce stainless steel 5G smartphones, high-strength alloy gas-turbine blades, and other complex metal parts.”

Image Credit: ale de sun / Shutterstock.com

Posted in Human Robots

#434658 The Next Data-Driven Healthtech ...

Increasing your healthspan (i.e. making 100 years old the new 60) will depend to a large degree on artificial intelligence. And, as we saw in last week’s blog, healthcare AI systems are extremely data-hungry.

Fortunately, a slew of new sensors and data acquisition methods—including over 122 million wearables shipped in 2018—are bursting onto the scene to meet the massive demand for medical data.

From ubiquitous biosensors, to the mobile healthcare revolution, to the transformative power of the Health Nucleus, converging exponential technologies are fundamentally transforming our approach to healthcare.

In Part 4 of this blog series on Longevity & Vitality, I expand on how we’re acquiring the data to fuel today’s AI healthcare revolution.

In this blog, I’ll explore:

How the Health Nucleus is transforming “sick care” to healthcare
Sensors, wearables, and nanobots
The advent of mobile health

Let’s dive in.

Health Nucleus: Transforming ‘Sick Care’ to Healthcare
Much of today’s healthcare system is actually sick care. Most of us assume that we’re perfectly healthy, with nothing going on inside our bodies, until the day we travel to the hospital writhing in pain only to discover a serious or life-threatening condition.

Chances are that your ailment didn’t materialize that morning; rather, it’s been growing or developing for some time. You simply weren’t aware of it. At that point, once you’re diagnosed as “sick,” our medical system engages to take care of you.

What if, instead of this retrospective and reactive approach, you were constantly monitored, so that you could know the moment anything was out of whack?

Better yet, what if you more closely monitored those aspects of your body that your gene sequence predicted might cause you difficulty? Think: your heart, your kidneys, your breasts. Such a system becomes personalized, predictive, and possibly preventative.

This is the mission of the Health Nucleus platform built by Human Longevity, Inc. (HLI). While not continuous—that will come later, with the next generation of wearable and implantable sensors—the Health Nucleus was designed to ‘digitize’ you once per year to help you determine whether anything is going on inside your body that requires immediate attention.

The Health Nucleus visit provides you with the following tests during a half-day visit:

Whole genome sequencing (30x coverage)
Whole body (non-contrast) MRI
Brain magnetic resonance imaging/angiography (MRI/MRA)
CT (computed tomography) of the heart and lungs
Coronary artery calcium scoring
Electrocardiogram
Echocardiogram
Continuous cardiac monitoring
Clinical laboratory tests and metabolomics

In late 2018, HLI published the results of the first 1,190 clients through the Health Nucleus. The results were eye-opening—especially since these patients were all financially well-off, and already had access to the best doctors.

Following are the physiological and genomic findings in these clients who self-selected to undergo evaluation at HLI’s Health Nucleus.

Physiological Findings

Two percent had previously unknown tumors detected by MRI
2.5 percent had previously undetected aneurysms detected by MRI
Eight percent had cardiac arrhythmia found on cardiac rhythm monitoring, not previously known
Nine percent had moderate-severe coronary artery disease risk, not previously known
16 percent discovered previously unknown cardiac structure/function abnormalities
30 percent had elevated liver fat, not previously known

Genomic Findings

24 percent of clients had a rare (previously unknown) genetic mutation identified by whole genome sequencing (WGS)
63 percent of clients had a rare genetic mutation with a corresponding phenotypic finding

In summary, HLI’s published results found that 14.4 percent of clients had significant findings that are actionable, requiring immediate or near-term follow-up and intervention.

Long-term value findings appeared in 40 percent of the clients screened. These long-term clinical findings include discoveries that require medical attention or monitoring but are not immediately life-threatening.

The bottom line: most people truly don’t know their actual state of health. The ability to take a fully digital deep dive into your health status at least once per year will enable you to detect disease at stage zero or stage one, when it is most curable.

Sensors, Wearables, and Nanobots
Wearables, connected devices, and quantified self apps will allow us to continuously collect enormous amounts of useful health information.

Wearables like the Quanttus wristband and Vital Connect can transmit your electrocardiogram data, vital signs, posture, and stress levels anywhere on the planet.

In April 2017, we were proud to grant $2.5 million in prize money to the winning team in the Qualcomm Tricorder XPRIZE, Final Frontier Medical Devices.

Using a group of noninvasive sensors that collect data on vital signs, body chemistry, and biological functions, Final Frontier integrates this data in their powerful, AI-based DxtER diagnostic engine for rapid, high-precision assessments.

Their engine combines learnings from clinical emergency medicine and data analysis from actual patients.

Google is developing a full range of internal and external sensors (e.g. smart contact lenses) that can monitor the wearer’s vitals, ranging from blood sugar levels to blood chemistry.

In September 2018, Apple announced its Series 4 Apple Watch, including an FDA-approved mobile, on-the-fly ECG. With this first FDA approval, Apple appears to be moving deeper into the healthcare sensing market.

Further, Apple is reportedly now developing sensors that can non-invasively monitor blood sugar levels in real time for diabetic treatment. IoT-connected sensors are also entering the world of prescription drugs.

Last year, the FDA approved the first sensor-embedded pill, Abilify MyCite. This new class of digital pills can now communicate medication data to a user-controlled app, to which doctors may be granted access for remote monitoring.

Perhaps what is most impressive about the next generation of wearables and implantables is the density of sensors, processing, networking, and battery capability that we can now cheaply and compactly integrate.

Take the second-generation OURA ring, for example, which focuses on sleep measurement and management.

The OURA ring looks like a slightly thick wedding band, yet contains an impressive array of sensors and capabilities, including:

Two infrared LEDs
One infrared sensor
Three temperature sensors
One accelerometer
A six-axis gyro
A curved battery with a seven-day life
The memory, processing, and transmission capability required to connect with your smartphone

Disrupting Medical Imaging Hardware
In 2018, we saw lab breakthroughs that will drive the cost of an ultrasound sensor to below $100, in a packaging smaller than most bandages, powered by a smartphone. Dramatically disrupting ultrasound is just the beginning.

Nanobots and Nanonetworks
While wearables have long been able to track and transmit our steps, heart rate, and other health data, smart nanobots and ingestible sensors will soon be able to monitor countless new parameters and even help diagnose disease.

Some of the most exciting breakthroughs in smart nanotechnology from the past year include:

Researchers from the École Polytechnique Fédérale de Lausanne (EPFL) and the Swiss Federal Institute of Technology in Zurich (ETH Zurich) demonstrated artificial microrobots that can swim and navigate through different fluids, independent of additional sensors, electronics, or power transmission.

Researchers at the University of Chicago proposed specific arrangements of DNA-based molecular logic gates to capture the information contained in the temporal portion of our cells’ communication mechanisms. Accessing the otherwise-lost time-dependent information of these cellular signals is akin to knowing the tune of a song, rather than solely the lyrics.

MIT researchers built micron-scale robots able to sense, record, and store information about their environment. These tiny robots, about 100 micrometers in diameter (approximately the size of a human egg cell), can also carry out pre-programmed computational tasks.

Engineers at University of California, San Diego developed ultrasound-powered nanorobots that swim efficiently through your blood, removing harmful bacteria and the toxins they produce.

But it doesn’t stop there.

As nanosensor and nanonetworking capabilities develop, these tiny bots may soon communicate with each other, enabling the targeted delivery of drugs and autonomous corrective action.

Mobile Health
The OURA ring and the Series 4 Apple Watch are just the tip of the spear when it comes to our future of mobile health. This field, predicted to become a $102 billion market by 2022, puts an on-demand virtual doctor in your back pocket.

Step aside, WebMD.

In true exponential technology fashion, mobile device penetration has increased dramatically, while image recognition error rates and sensor costs have sharply declined.

As a result, AI-powered medical chatbots are flooding the market; diagnostic apps can identify anything from a rash to diabetic retinopathy; and with the advent of global connectivity, mHealth platforms enable real-time health data collection, transmission, and remote diagnosis by medical professionals.

Already available to residents across North London, Babylon Health offers immediate medical advice through AI-powered chatbots and video consultations with doctors via its app.

Babylon now aims to build up its AI for advanced diagnostics and even prescription. Others, like Woebot, take on mental health, using cognitive behavioral therapy in conversations with patients suffering from depression over Facebook Messenger.

In addition to phone apps and add-ons that test for fertility or autism, the now-FDA-approved Clarius L7 Linear Array Ultrasound Scanner can connect directly to iOS and Android devices and perform wireless ultrasounds at a moment’s notice.

Next, Healthy.io, an Israeli startup, uses your smartphone and computer vision to analyze traditional urine test strips—all you need to do is take a few photos.
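
Under the hood, that kind of strip reading comes down to comparing the photographed reagent pads’ colors against a calibrated reference chart. The toy sketch below shows only the matching step; it is not Healthy.io’s algorithm, and the reference RGB values and result labels are invented.

```python
# Toy sketch of colorimetric test-strip reading: match a photographed
# reagent pad's average color to the nearest reference swatch.
# Reference RGB values and labels are invented for illustration.
import math

GLUCOSE_CHART = {            # reference colors -> result labels (invented)
    (120, 200, 210): "negative",
    (140, 190, 150): "trace",
    (160, 150, 90):  "moderate",
    (120, 80, 60):   "high",
}

def average_color(pixels: list[tuple[int, int, int]]) -> tuple[float, float, float]:
    n = len(pixels)
    return tuple(sum(p[i] for p in pixels) / n for i in range(3))

def read_pad(pixels: list[tuple[int, int, int]]) -> str:
    avg = average_color(pixels)
    nearest = min(GLUCOSE_CHART, key=lambda ref: math.dist(ref, avg))
    return GLUCOSE_CHART[nearest]

# Pixels sampled from the glucose pad of a photographed strip (made up):
print(read_pad([(158, 152, 92), (162, 148, 88), (160, 151, 91)]))  # "moderate"
```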

With mHealth platforms like ClickMedix, which connects remotely-located patients to medical providers through real-time health data collection and transmission, what’s to stop us from delivering needed treatments through drone delivery or robotic telesurgery?

Welcome to the age of smartphone-as-a-medical-device.

Conclusion
With these DIY data collection and diagnostic tools, we save on transportation costs (time and money) and eliminate time bottlenecks.

No longer will you need to wait for your urine or blood results to go through the current information chain: samples will be sent to the lab, analyzed by a technician, results interpreted by your doctor, and only then relayed to you.

Just like the “sage-on-the-stage” issue with today’s education system, healthcare has a “doctor-on-the-dais” problem. Current medical procedures are too complicated and expensive for a layperson to perform and analyze on their own.

The coming abundance of healthcare data promises to transform how we approach healthcare, putting the power of exponential technologies in the patient’s hands and revolutionizing how we live.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Titima Ongkantong / Shutterstock.com

Posted in Human Robots