#434580 How Genome Sequencing and Senolytics Can ...
The causes of aging are extremely complex and still poorly understood. But with the dramatic demonetization of genome reading and editing over the past decade, and with Big Pharma, startups, and the FDA beginning to treat aging itself as a disease, we are starting to find practical ways to extend our healthspan.
Here, in Part 2 of a series of blogs on longevity and vitality, I explore how genome sequencing and editing, along with new classes of anti-aging drugs, are augmenting our biology to further extend our healthy lives.
In this blog I’ll cover two classes of emerging technologies:
Genome Sequencing and Editing;
Senolytics, Nutraceuticals & Pharmaceuticals.
Let’s dive in.
Genome Sequencing & Editing
Your genome is the software that runs your body.
A sequence of 3.2 billion letters makes you “you.” These base pairs of A’s, T’s, C’s, and G’s determine your hair color, your height, your personality, your propensity to disease, your lifespan, and so on.
Until recently, it’s been very difficult to rapidly and cheaply “read” these letters—and even more difficult to understand what they mean.
Since 2001, the cost to sequence a whole human genome has plummeted exponentially, outpacing Moore’s Law threefold. From an initial cost of $3.7 billion, it dropped to $10 million in 2006, and to $5,000 in 2012.
Today, the cost of genome sequencing has dropped below $500, and according to Illumina, the world’s leading sequencing company, the process will soon cost about $100 and take about an hour to complete.
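A rough back-of-the-envelope calculation, using only the figures quoted above plus an assumed Moore’s Law cadence (the 24-month doubling period is my assumption, not a figure from this post), gives a feel for how steep that curve is:

```python
import math

# Figures quoted above (approximate): ~$3.7B for the first genome in 2001,
# roughly $500 for a whole genome today (~2019).
cost_2001 = 3.7e9
cost_today = 500.0
years = 2019 - 2001

fold_reduction = cost_2001 / cost_today        # ~7.4 million-fold cheaper
halvings = math.log2(fold_reduction)           # ~22.8 cost halvings
months_per_halving = years * 12 / halvings     # ~9.5 months per halving

# Assumed benchmark: Moore's Law price-performance doubling every ~24 months
moore_months = 24
print(f"{fold_reduction:,.0f}-fold cheaper; cost halves every {months_per_halving:.1f} months")
print(f"Roughly {moore_months / months_per_halving:.1f}x faster than a {moore_months}-month doubling cadence")
```

Depending on the Moore’s Law doubling period you assume (18 or 24 months), the multiple comes out in the two-to-three-fold range the post refers to.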
This represents one of the most powerful and transformative technology revolutions in healthcare.
When we understand your genome, we’ll be able to understand how to optimize “you.”
We’ll know the perfect foods, the perfect drugs, the perfect exercise regimen, and the perfect supplements, just for you.
We’ll understand what microbiome types, or gut flora, are ideal for you (more on this in a later blog).
We’ll accurately predict how specific sedatives and medicines will impact you.
We’ll learn which diseases and illnesses you’re most likely to develop and, more importantly, how to best prevent them from developing in the first place (rather than trying to cure them after the fact).
CRISPR Gene Editing
In addition to reading the human genome, scientists can now edit a genome using CRISPR/Cas9, a gene-editing system adapted from a naturally occurring bacterial defense mechanism first described in 1987.
CRISPR/Cas9 is short for Clustered Regularly Interspaced Short Palindromic Repeats and CRISPR-associated protein 9.
Here’s how it works (a toy code sketch follows these steps):
The bacteria capture snippets of DNA from invading viruses (or bacteriophage) and use them to create DNA segments known as CRISPR arrays.
The CRISPR arrays allow the bacteria to “remember” the viruses (or closely related ones), and defend against future invasions.
If the viruses attack again, the bacteria produce RNA segments from the CRISPR arrays to target the viruses’ DNA. The bacteria then use Cas9 to cut the DNA apart, which disables the virus.
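To make the targeting step concrete, here is a minimal, purely illustrative Python sketch with hypothetical sequences (not a real bioinformatics tool). It assumes the simplified rule that Cas9 needs an “NGG” PAM motif next to the target and cuts roughly three bases upstream of it:

```python
# Toy illustration of CRISPR/Cas9 targeting (hypothetical sequences).
# A guide sequence "remembered" in the CRISPR array is matched against
# invading viral DNA; Cas9 cuts ~3 bases upstream of an adjacent NGG PAM.

def find_cut_site(viral_dna, guide):
    """Return the index where Cas9 would cut, or None if no valid target."""
    pos = viral_dna.find(guide)
    while pos != -1:
        pam = viral_dna[pos + len(guide): pos + len(guide) + 3]
        if len(pam) == 3 and pam.endswith("GG"):   # simplified 'NGG' PAM check
            return pos + len(guide) - 3            # cut ~3 bp upstream of the PAM
        pos = viral_dna.find(guide, pos + 1)
    return None

guide_rna = "ATGCTAGCTAGGCTAACGTA"                 # hypothetical 20-letter spacer
virus = "CCGT" + guide_rna + "TGGAACCTG"           # target followed by a 'TGG' PAM

print(find_cut_site(virus, guide_rna))             # 21: the viral DNA is cut here
```

In the real system the guide is RNA, matching tolerates some mismatches, and the cell’s repair machinery finishes the job, but this lookup-then-cut logic is the essence of why CRISPR is so programmable.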
Most importantly, CRISPR is cheap, quick, easy to use, and more accurate than all previous gene editing methods. As a result, CRISPR/Cas9 has swept through labs around the world as the way to edit a genome.
A short search in the literature will show an exponential rise in the number of CRISPR-related publications and patents.
2018: Filled With CRISPR Breakthroughs
Early results are impressive. Researchers from the University of Chicago recently used CRISPR to genetically engineer cocaine resistance into mice.
Researchers at the University of Texas Southwestern Medical Center used CRISPR to reverse the gene defect causing Duchenne muscular dystrophy (DMD) in dogs (DMD is the most common fatal genetic disease in children).
With great power comes great responsibility, and moral and ethical dilemmas.
In 2015, Chinese scientists sparked global controversy when they first edited human embryo cells in the lab with the goal of modifying genes that would make the child resistant to smallpox, HIV, and cholera.
Three years later, in November 2018, researcher He Jiankui informed the world that the first set of CRISPR-engineered female twins had been delivered.
To accomplish this, He deleted a region of the gene for CCR5, a receptor on the surface of white blood cells, introducing a rare, naturally occurring genetic variation that makes it much harder for HIV to infect those cells (its favorite target).
Setting aside the significant ethical conversations, CRISPR will soon provide us the tools to eliminate diseases, create hardier offspring, produce new environmentally resistant crops, and even wipe out pathogens.
Senolytics, Nutraceuticals & Pharmaceuticals
Over the arc of your life, the cells in your body divide until they reach what is known as the Hayflick limit: the number of times a normal human cell population can divide before division stops, typically about 50 divisions.
What normally follows next is programmed cell death or destruction by the immune system. A very small fraction of cells, however, become senescent cells and evade this fate to linger indefinitely.
These lingering cells secrete a potent mix of molecules that triggers chronic inflammation, damages the surrounding tissue structures, and changes the behavior of nearby cells for the worse.
Senescent cells appear to be one of the root causes of aging, causing everything from fibrosis and blood vessel calcification, to localized inflammatory conditions such as osteoarthritis, to diminished lung function.
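As a rough intuition pump rather than a biological model, the sketch below simulates a cohort of cell lineages dividing toward the Hayflick limit, with a purely hypothetical chance that a spent cell evades clearance; even a small escape rate lets senescent cells steadily accumulate:

```python
import random

HAYFLICK_LIMIT = 50    # roughly 50 divisions before a normal cell line stops dividing
P_EVADE = 0.02         # hypothetical chance a spent cell evades death/clearance

def simulate(lineages=10_000, generations=80, seed=1):
    """Toy model: each lineage divides until the Hayflick limit, then is either
    cleared (programmed death / immune system) or lingers as a senescent cell."""
    random.seed(seed)
    divisions = [random.randint(0, HAYFLICK_LIMIT) for _ in range(lineages)]
    senescent = 0
    for gen in range(generations):
        still_dividing = []
        for d in divisions:
            if d < HAYFLICK_LIMIT:
                still_dividing.append(d + 1)   # one more division
            elif random.random() < P_EVADE:
                senescent += 1                 # evades clearance and lingers
            # else: cleared by apoptosis or the immune system
        divisions = still_dividing
        if gen % 20 == 0:
            print(f"gen {gen:2d}: dividing={len(divisions):6,}  senescent={senescent}")

simulate()
```

The parameters here are invented for illustration; the point is simply that the cells that “escape” accumulate while everything else turns over.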
Fortunately, both the scientific and entrepreneurial communities have begun to work on senolytic therapies, moving the technology for selectively destroying senescent cells out of the laboratory and into a half-dozen startup companies.
Prominent companies in the field include the following:
Unity Biotechnology is developing senolytic medicines to selectively eliminate senescent cells, with an initial focus on localized therapies for osteoarthritis, ophthalmologic diseases, and pulmonary disease.
Oisin Biotechnologies is pioneering a programmable gene therapy that can destroy cells based on their internal biochemistry.
SIWA Therapeutics is working on an immunotherapy approach to the problem of senescent cells.
In recent years, researchers have identified or designed a handful of senolytic compounds that can curb aging by regulating senescent cells. Two drugs that have gained significant research traction are rapamycin and metformin.
Rapamycin
Originally isolated from bacteria found in the soil of Easter Island, rapamycin acts on the mTOR (mechanistic target of rapamycin) pathway, selectively blocking a key protein that facilitates cell division.
Currently, rapamycin derivatives are widely used as immunosuppressants in organ and bone marrow transplants. Research now suggests that rapamycin use may also prolong lifespan and enhance cognitive and immune function.
PureTech Health subsidiary resTORbio (which started 2018 by going public) is working on a rapamycin-based drug intended to enhance immunity and reduce infection. Their clinical-stage RTB101 drug works by inhibiting part of the mTOR pathway.
Results of the drug’s recent clinical trial include:
Decreased incidence of infection
Improved influenza vaccination response
A 30.6 percent decrease in respiratory tract infections
Impressive, to say the least.
Metformin
Metformin is a widely used generic drug for mitigating liver sugar production in Type 2 diabetes patients.
Researchers have found that metformin also reduces oxidative stress and inflammation, which otherwise increase as we age.
There is strong evidence that metformin can augment cellular regeneration and dramatically mitigate cellular senescence by reducing both oxidative stress and inflammation.
Over 100 studies registered on ClinicalTrials.gov are currently following up on strong evidence of metformin’s protective effect against cancer.
Nutraceuticals and NAD+
Beyond cellular senescence, certain critical nutrients and proteins tend to decline as a function of age. Nutraceuticals combat aging by supplementing and replenishing these declining nutrient levels.
NAD+ is a coenzyme found in every cell, involved in processes ranging from DNA repair to the production of cellular energy. NAD+ levels have been shown to decline as we age.
The Elysium Health Basis supplement aims to elevate NAD+ levels in the body to extend one’s lifespan. Elysium’s clinical study reports that Basis increases NAD+ levels consistently by a sustained 40 percent.
Conclusion
This is just a taste of the tremendous momentum that longevity and aging research has right now. As artificial intelligence and quantum computing transform how we decode our DNA and how we discover drugs, genetics and pharmaceuticals will become truly personalized.
The next blog in this series will demonstrate how artificial intelligence is converging with genetics and pharmaceuticals to transform how we approach longevity, aging, and vitality.
We are edging closer to a dramatically extended healthspan—where 100 is the new 60. What will you create, where will you explore, and how will you spend your time if you are able to add an additional 40 healthy years to your life?
Join Me
Abundance Digital is my online educational portal and community of abundance-minded entrepreneurs. You’ll find weekly video updates from Peter, a curated newsfeed of exponential news, and a place to share your bold ideas. Click here to learn more and sign up.
Image Credit: ktsdesign / Shutterstock.com
#434303 Making Superhumans Through Radical ...
Imagine trying to read War and Peace one letter at a time. The thought alone feels excruciating. But in many ways, this painful idea holds parallels to how human-machine interfaces (HMI) force us to interact with and process data today.
Designed back in the 1970s at Xerox PARC and later refined during the 1980s by Apple, today’s HMI was originally conceived during fundamentally different times, and specifically, before people and machines were generating so much data. Fast forward to 2019, when humans are estimated to produce 44 zettabytes of data—equal to two stacks of books from here to Pluto—and we are still using the same HMI from the 1970s.
These dated interfaces are not equipped to handle today’s exponential rise in data, which has been ushered in by the rapid dematerialization of many physical products into computers and software.
Breakthroughs in perceptual and cognitive computing, especially machine learning algorithms, are enabling technology to process vast volumes of data, and in doing so, they are dramatically amplifying our brain’s abilities. Yet even with these powerful technologies that at times make us feel superhuman, the interfaces are still hampered by poor ergonomics.
Many interfaces are still designed around the concept that human interaction with technology is secondary, not instantaneous. This means that any time someone uses technology, they are inevitably multitasking, because they must simultaneously perform a task and operate the technology.
If our aim, however, is to create technology that truly extends and amplifies our mental abilities so that we can offload important tasks, the technology that helps us must not also overwhelm us in the process. We must reimagine interfaces to work in coherence with how our minds function in the world so that our brains and these tools can work together seamlessly.
Embodied Cognition
Most technology is designed to serve either the mind or the body. It is a problematic divide, because our brains use our entire body to process the world around us. Said differently, our minds and bodies do not operate distinctly. Our minds are embodied.
Studies using MRI scans have shown that when a person feels an emotion in their gut, blood actually moves to that area of the body. The body and the mind are linked in this way, sharing information back and forth continuously.
Current technology presents data to the brain differently from how the brain processes data. Our brains, for example, use sensory data to continually encode and decipher patterns within the neocortex. Unlike most machine learning systems, our brains do not attach a linguistic label to every item they encounter, nor do they store an image for each of those labels.
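As a deliberately simplified illustration of that label-per-item approach, here is a tiny supervised classifier sketch in Python; the data, labels, and library choice are my own hypothetical example, not something from the article:

```python
from sklearn.neighbors import KNeighborsClassifier

# A typical ML workflow attaches a linguistic label to every training item...
features = [[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.8]]  # made-up sensor readings
labels = ["cat", "cat", "dog", "dog"]                        # one label per item

model = KNeighborsClassifier(n_neighbors=1).fit(features, labels)
print(model.predict([[0.85, 0.15]]))   # ['cat'] -- the output is again a label
```

The brain, by contrast, is better described as continually encoding sensory patterns without needing a discrete label for each one.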
Our bodies move information through us instantaneously, in a sense “computing” at the speed of thought. What if our technology could do the same?
Using Cognitive Ergonomics to Design Better Interfaces
Well-designed physical tools, as the philosopher Martin Heidegger observed with his famous example of a hammer, seem to disappear into the “hand.” They are designed to amplify a human ability and not get in the way during the process.
The aim of physical ergonomics is to understand the mechanical movement of the human body and then adapt a physical system accordingly to amplify human output. By understanding the movement of the body, physical ergonomics enables ergonomically sound physical affordances—or conditions—so that the mechanical movement of the body and the mechanical movement of the machine can work together harmoniously.
Cognitive ergonomics applied to HMI design uses this same idea of amplifying output, but rather than focusing on physical output, the focus is on mental output. By understanding the raw materials the brain uses to comprehend information and form an output, cognitive ergonomics allows technologists and designers to create technological affordances so that the brain can work seamlessly with interfaces and remove the interruption costs of our current devices. In doing so, the technology itself “disappears,” and a person’s interaction with technology becomes fluid and primary.
By leveraging cognitive ergonomics in HMI design, we can create a generation of interfaces that can process and present data the same way humans process real-world information, meaning through fully-sensory interfaces.
Several brain-machine interfaces are already on the path to achieving this. AlterEgo, a wearable device developed by MIT researchers, uses electrodes to detect the subtle neuromuscular signals produced when a user internally verbalizes words, allowing the device to act as an extension of the user’s cognition.
Another notable example is the BrainGate neural device, created by researchers at Stanford University. Just two months ago, a study was released showing that this brain implant system allowed paralyzed patients to navigate an Android tablet with their thoughts alone.
These are two extraordinary examples of what is possible for the future of HMI, but there is still a long way to go to bring cognitive ergonomics front and center in interface design.
Disruptive Innovation Happens When You Step Outside Your Existing Users
Most of today’s interfaces are designed by a narrow population, made up predominantly of white, non-disabled men who are prolific users of technology (you may recall The New York Times’ viral 2016 article, “Artificial Intelligence’s White Guy Problem”). If you ask this population whether there is a problem with today’s HMIs, most will say no, because the technology has been designed to serve them.
This lack of diversity means a limited perspective is being brought to interface design, which is problematic if we want HMI to evolve and work seamlessly with the brain. To use cognitive ergonomics in interface design, we must first gain a more holistic understanding of how people with different abilities understand the world and how they interact with technology.
Underserved groups, such as people with physical disabilities, occupy what Clayton Christensen, in The Innovator’s Dilemma, described as the fringe segment of a market. Developing solutions that cater to fringe groups can in fact disrupt the larger market by opening up a new and much larger market from the bottom up.
Learning From Underserved Populations
When technology fails to serve a group of people, that group must adapt the technology to meet their needs.
The workarounds they create are often ingenious, precisely because they are born not of preference but of necessity, which forces disadvantaged users to approach the technology from a very different vantage point.
When a designer or technologist begins learning from this new viewpoint and understanding challenges through a different lens, they can bring new perspectives to design—perspectives that otherwise can go unseen.
Designers and technologists can also learn from people with physical disabilities who interact with the world by leveraging other senses that help them compensate for one they may lack. For example, some blind people use echolocation to detect objects in their environments.
The BrainPort device developed by Wicab is an incredible example of technology leveraging one human sense to serve or complement another. The BrainPort captures environmental information with a wearable video camera and converts this data into soft electrical stimulation sequences that are sent to a device on the user’s tongue, one of the most touch-sensitive parts of the body. The user learns how to interpret the patterns felt on their tongue and, in doing so, becomes able to “see” with their tongue.
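To give a feel for how such sensory substitution works in principle, here is a minimal, hypothetical Python sketch (not Wicab’s actual pipeline) that downsamples a grayscale camera frame into a coarse grid of stimulation intensities like those a tongue display might deliver:

```python
import numpy as np

def frame_to_stimulation(frame, grid=(20, 20), max_level=255):
    """Downsample a grayscale camera frame (H x W, values 0-255) into a coarse
    grid of stimulation intensities for a hypothetical tongue electrode array."""
    h, w = frame.shape
    gh, gw = grid
    # Average pixel brightness inside each grid cell
    cells = frame[: h - h % gh, : w - w % gw].reshape(gh, h // gh, gw, w // gw)
    levels = cells.mean(axis=(1, 3))
    # Map brightness to a stimulation level (brighter object -> stronger pulse)
    return (levels / 255 * max_level).astype(np.uint8)

# Fake 240x320 "camera frame": dark background with a bright object in the middle
frame = np.zeros((240, 320), dtype=np.uint8)
frame[80:160, 120:200] = 220
pattern = frame_to_stimulation(frame)
print(pattern.shape)   # (20, 20) grid of intensities sent to the electrode array
```

A real system involves far more (calibration, training, safety limits), but the core idea is the same: re-encode visual information into a spatial pattern another sense can learn to read.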
Key to the future of HMI design is learning how different user groups navigate the world through senses beyond sight. To make cognitive ergonomics work, we must understand how to leverage the senses so we’re not always solely relying on our visual or verbal interactions.
Radical Inclusion for the Future of HMI
Bringing radical inclusion into HMI design is about gaining a broader lens on technology design at large, so that technology can serve everyone better.
Interestingly, cognitive ergonomics and radical inclusion go hand in hand. We can’t design our interfaces with cognitive ergonomics without bringing radical inclusion into the picture, and we also will not arrive at radical inclusion in technology so long as cognitive ergonomics are not considered.
This new mindset is the only way to usher in an era of technology design that amplifies the collective human ability to create a more inclusive future for all.
Image Credit: jamesteohart / Shutterstock.com
#434270 AI Will Create Millions More Jobs Than ...
In the past few years, artificial intelligence has advanced so quickly that it now seems hardly a month goes by without a newsworthy AI breakthrough. In areas as wide-ranging as speech translation, medical diagnosis, and gameplay, we have seen computers outperform humans in startling ways.
This has sparked a discussion about how AI will impact employment. Some fear that as AI improves, it will supplant workers, creating an ever-growing pool of unemployable humans who cannot compete economically with machines.
This concern, while understandable, is unfounded. In fact, AI will be the greatest job engine the world has ever seen.
New Technology Isn’t a New Phenomenon
On the one hand, those who predict massive job loss from AI can be excused. It is easier to see existing jobs disrupted by new technology than to envision what new jobs the technology will enable.
But on the other hand, radical technological advances aren’t a new phenomenon. Technology has progressed nonstop for 250 years, and in the US unemployment has stayed between 5 and 10 percent for almost all that time, even when radical new technologies like steam power and electricity came on the scene.
But you don’t have to look back to steam, or even electricity. Just look at the internet. Go back 25 years, well within the memory of today’s pessimistic prognosticators, to 1993. The web browser Mosaic had just been released, and the phrase “surfing the web,” that most mixed of metaphors, was just a few months old.
If someone had asked you what would be the result of connecting a couple billion computers into a giant network with common protocols, you might have predicted that email would cause us to mail fewer letters, and the web might cause us to read fewer newspapers and perhaps even do our shopping online. If you were particularly farsighted, you might have speculated that travel agents and stockbrokers would be adversely affected by this technology. And based on those surmises, you might have thought the internet would destroy jobs.
But now we know what really happened. The obvious changes did occur. But a slew of unexpected changes happened as well. We got thousands of new companies worth trillions of dollars. We bettered the lot of virtually everyone on the planet touched by the technology. Dozens of new careers emerged, from web designer to data scientist to online marketer. The cost of starting a business with worldwide reach plummeted, and the cost of communicating with customers and leads went to nearly zero. Vast storehouses of information were made freely available and used by entrepreneurs around the globe to build new kinds of businesses.
But yes, we mail fewer letters and buy fewer newspapers.
The Rise of Artificial Intelligence
Then along came a new, even bigger technology: artificial intelligence. You hear the same refrain: “It will destroy jobs.”
Consider the ATM. If you had to point to a technology that looked as though it would replace people, the ATM might look like a good bet; it is, after all, an automated teller machine. And yet, there are more tellers now than when ATMs were widely released. How can this be? Simple: ATMs lowered the cost of opening bank branches, and banks responded by opening more, which required hiring more tellers.
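The arithmetic behind that story is simple. Here is a toy illustration with entirely made-up numbers (not actual banking statistics), just to show how total employment can rise even as the headcount per branch falls:

```python
# Hypothetical numbers: ATMs reduce tellers per branch, but cheaper branches
# let banks open more of them -- and total teller employment can still rise.
tellers_per_branch_before, branches_before = 20, 1_000
tellers_per_branch_after, branches_after = 13, 2_000   # post-ATM expansion

before = tellers_per_branch_before * branches_before   # 20,000 tellers
after = tellers_per_branch_after * branches_after      # 26,000 tellers
print(f"Tellers before ATMs: {before:,}; after: {after:,}")
```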
In this manner, AI will create millions of jobs that are far beyond our ability to imagine. For instance, AI is becoming adept at language translation—and according to the US Bureau of Labor Statistics, demand for human translators is skyrocketing. Why? If the cost of basic translation drops to nearly zero, the cost of doing business with those who speak other languages falls. Thus, it emboldens companies to do more business overseas, creating more work for human translators. AI may do the simple translations, but humans are needed for the nuanced kind.
In fact, the BLS forecasts faster-than-average job growth in many occupations that AI is expected to impact: accountants, forensic scientists, geological technicians, technical writers, MRI operators, dietitians, financial specialists, web developers, loan officers, medical secretaries, and customer service representatives, to name a very few. These fields will not experience job growth in spite of AI, but through it.
But just as with the internet, the real gains in jobs will come from places where our imaginations cannot yet take us.
Parsing Pessimism
You may recall waking up one morning to the news that “47 percent of jobs will be lost to technology.”
That report by Carl Frey and Michael Osborne is a fine piece of work, but readers and the media distorted their 47 percent number. What the authors actually said is that some functions within 47 percent of jobs will be automated, not that 47 percent of jobs will disappear.
Frey and Osborne go on to rank occupations by “probability of computerization” and give the following jobs a 65 percent or higher probability: social science research assistants, atmospheric and space scientists, and pharmacy aides. So what does this mean? Social science professors will no longer have research assistants? Of course they will. They will just do different things because much of what they do today will be automated.
The Organisation for Economic Co-operation and Development (OECD), an intergovernmental body, released a report of its own in 2016. This report, titled “The Risk of Automation for Jobs in OECD Countries,” applies a task-based methodology rather than treating whole occupations as automatable, and it puts the share of jobs potentially lost to computerization at nine percent. That is normal churn for the economy.
But what of the skills gap? Will AI eliminate low-skilled workers and create high-skilled job opportunities? The relevant question is whether most people can do a job that’s just a little more complicated than the one they currently have. This is exactly what happened with the industrial revolution; farmers became factory workers, factory workers became factory managers, and so on.
Embracing AI in the Workplace
A January 2018 Accenture report titled “Reworking the Revolution” estimates that new applications of AI, combined with human collaboration, could boost employment worldwide by as much as 10 percent by 2020.
Electricity changed the world, as did mechanical power, as did the assembly line. No one can reasonably claim that we would be better off without those technologies. Each of them bettered our lives, created jobs, and raised wages. AI will be bigger than electricity, bigger than mechanization, bigger than anything that has come before it.
This is how free economies work, and why we have never run out of jobs due to automation. There are not a fixed number of jobs that automation steals one by one, resulting in progressively more unemployment. There are as many jobs in the world as there are buyers and sellers of labor.
Image Credit: enzozo / Shutterstock.com