#434753 Top Takeaways From The Economist ...

Over the past few years, the word ‘innovation’ has degenerated into something of a buzzword. In fact, according to Vijay Vaitheeswaran, US business editor at The Economist, it’s one of the most abused words in the English language.

The word is over-used precisely because we’re living in a great age of invention. But the pace at which those inventions are changing our lives is fast, new, and scary.

So what strategies do companies need to adopt to make sure technology leads to growth that’s not only profitable, but positive? How can business and government best collaborate? Can policymakers regulate the market without suppressing innovation? Which technologies will impact us most, and how soon?

At The Economist Innovation Summit in Chicago last week, entrepreneurs, thought leaders, policymakers, and academics shared their insights on the current state of exponential technologies, and the steps companies and individuals should be taking to ensure a tech-positive future. Here’s their expert take on the tech and trends shaping the future.

Blockchain
There’s been a lot of hype around blockchain; apparently it can be used for everything from distributing aid to refugees to voting. However, it’s too often conflated with cryptocurrencies like Bitcoin, and concrete use cases remain scarce. Where does the technology currently stand?

Julie Sweet, chief executive of Accenture North America, emphasized that the technology is still in its infancy. “Everything we see today are pilots,” she said. The most promising of these pilots are taking place across three different areas: supply chain, identity, and financial services.

When you buy something from outside the US, Sweet explained, it passes through about 80 different parties, and 70 percent of the relevant data is replicated and prone to error, with paper-based documents often to blame. Blockchain is providing a secure way to eliminate paper in supply chains, upping accuracy and cutting costs in the process.

One of the most prominent use cases in the US is Walmart—the company has mandated that all suppliers in its leafy greens segment be on a blockchain, and its food safety has improved as a result.

Beth Devin, head of Citi Ventures’ innovation network, added: “Blockchain is an infrastructure technology. It can be leveraged in a lot of ways. There’s so much opportunity to create new types of assets and securities that aren’t accessible to people today. But there’s a lot to figure out around governance.”

Open Source Technology
Are the days of proprietary technology numbered? More and more companies and individuals are making their source code publicly available, and its benefits are thus more widespread than ever before. But what are the limitations and challenges of open source tech, and where might it go in the near future?

Bob Lord, senior VP of cognitive applications at IBM, is a believer. “Open-sourcing technology helps innovation occur, and it’s a fundamental basis for creating great technology solutions for the world,” he said. However, the biggest challenge for open source right now is that companies are taking out more than they’re contributing back to the open-source world. Lord pointed out that IBM has a rule about how many lines of code employees take out relative to how many lines they put in.

Another challenge area is open governance; blockchain by its very nature should be transparent and decentralized, with multiple parties making decisions and being held accountable. “We have to embrace open governance at the same time that we’re contributing,” Lord said. He advocated for a hybrid-cloud environment where people can access public and private data and bring it together.

Augmented and Virtual Reality
Augmented and virtual reality aren’t just for fun and games anymore, and they’ll be even less so in the near future. According to Pearly Chen, vice president at HTC, they’ll also go from being two different things to being one and the same. “AR overlays digital information on top of the real world, and VR transports you to a different world,” she said. “In the near future we will not need to delineate between these two activities; AR and VR will come together naturally, and will change everything we do as we know it today.”

For that to happen, we’ll need a more ergonomically friendly device than we have today for interacting with this technology. “Whenever we use tech today, we’re multitasking,” said product designer and futurist Jody Medich. “When you’re using GPS, you’re trying to navigate in the real world and also manage this screen. Constant task-switching is killing our brain’s ability to think.” Augmented and virtual reality, she believes, will allow us to adapt technology to match our brain’s functionality.

This all sounds like a lot of fun for uses like gaming and entertainment, but what about practical applications? “Ultimately what we care about is how this technology will improve lives,” Chen said.

A few ways that could happen? Extended reality will be used to simulate hazardous real-life scenarios, reduce the time and resources needed to bring a product to market, train healthcare professionals (such as surgeons), or provide therapies for patients—not to mention education. “Think about the possibilities for children to learn about history, science, or math in ways they can’t today,” Chen said.

Quantum Computing
If there’s one technology that’s truly baffling, it’s quantum computing. Qubits, entanglement, quantum states—it’s hard to wrap our heads around these concepts, but they hold great promise. Where is the tech right now?

Mandy Birch, head of engineering strategy at Rigetti Computing, thinks quantum development is starting slowly but will accelerate quickly. “We’re at the innovation stage right now, trying to match this capability to useful applications,” she said. “Can we solve problems cheaper, better, and faster than classical computers can do?” She believes quantum’s first breakthrough will happen in two to five years, and that its highest potential is in applications like routing, supply chain, and risk optimization, followed by quantum chemistry (for materials science and medicine) and machine learning.

David Awschalom, director of the Chicago Quantum Exchange and senior scientist at Argonne National Laboratory, believes quantum communication and quantum sensing will become a reality in three to seven years. “We’ll use states of matter to encrypt information in ways that are completely secure,” he said. A quantum voting system, currently being prototyped, is one application.

Who should be driving quantum tech development? The panelists emphasized that no one entity will get very far alone. “Advancing quantum tech will require collaboration not only between business, academia, and government, but between nations,” said Linda Sapochak, division director of materials research at the National Science Foundation. She added that this doesn’t just go for the technology itself—setting up the infrastructure for quantum will be a big challenge as well.

Space
Space has always been the final frontier, and it still is—but it’s not quite as far-removed from our daily lives now as it was when Neil Armstrong walked on the moon in 1969.

The space industry has historically been funded by governments and private defense contractors. But in 2009, SpaceX launched its first commercial satellite, and in subsequent years it has drastically cut the cost of spaceflight. More importantly, it published its pricing, which brought transparency to a market that hadn’t seen it before.

Entrepreneurs around the world started putting together business plans, and there are now over 400 privately-funded space companies, many with consumer applications.

Chad Anderson, CEO of Space Angels and managing partner of Space Capital, pointed out that the technology floating around in space was, until recently, archaic. “A few NASA engineers saw they had more computing power in their phone than there was in satellites,” he said. “So they thought, ‘why don’t we just fly an iPhone?’” They did—and it worked.

Now companies have networks of satellites monitoring the whole planet, producing a huge amount of data that’s valuable for countless applications like agriculture, shipping, and observation. “A lot of people underestimate space,” Anderson said. “It’s already enabling our modern global marketplace.”

Next up in the space realm, he predicts, are mining and tourism.

Artificial Intelligence and the Future of Work
From the US to Europe to Asia, alarms are sounding about AI taking our jobs. What will be left for humans to do once machines can do everything—and do it better?

These fears may be unfounded, though, and are certainly exaggerated. It’s undeniable that AI and automation are changing the employment landscape (not to mention the way companies do business and the way we live our lives), but if we build these tools the right way, they’ll bring more good than harm, and more productivity than obsolescence.

Accenture’s Julie Sweet emphasized that AI alone is not what’s disrupting business and employment. Rather, it’s what she called the “triple A”: automation, analytics, and artificial intelligence. But even this fear-inducing trifecta of terms doesn’t spell doom, for workers or for companies. Accenture has automated 40,000 jobs—and hasn’t fired anyone in the process. Instead, they’ve trained and up-skilled people. The most important drivers to scale this, Sweet said, are a commitment by companies and government support (such as tax credits).

Imbuing AI with the best of human values will also be critical to its impact on our future. Tracy Frey, Google Cloud AI’s director of strategy, cited the company’s set of seven AI principles. “What’s important is the governance process that’s put in place to support those principles,” she said. “You can’t make macro decisions when you have technology that can be applied in many different ways.”

High Risks, High Stakes
This year, Vaitheeswaran said, 50 percent of the world’s population will have internet access (he added that he’s disappointed that percentage isn’t higher given the proliferation of smartphones). As technology becomes more widely available to people around the world and its influence grows even more, what are the biggest risks we should be monitoring and controlling?

Information integrity—being able to tell what’s real from what’s fake—is a crucial one. “We’re increasingly operating in siloed realities,” said Renee DiResta, director of research at New Knowledge and head of policy at Data for Democracy. “Inadvertent algorithmic amplification on social media elevates certain perspectives—what does that do to us as a society?”

Algorithms have also already been shown to perpetuate the biases of the people who create them—and those people are often wealthy, white, and male. Ensuring that technology doesn’t propagate unfair bias will be crucial to its ability to serve a diverse population, and to keep societies from becoming further polarized and inequitable. The polarization of experience that results from pronounced inequalities within countries, Vaitheeswaran pointed out, can end up undermining democracy.

We’ll also need to walk the line between privacy and utility very carefully. As Dan Wagner, founder of Civis Analytics put it, “We want to ensure privacy as much as possible, but open access to information helps us achieve important social good.” Medicine in the US has been hampered by privacy laws; if, for example, we had more data about biomarkers around cancer, we could provide more accurate predictions and ultimately better healthcare.

But going the Chinese way—a total lack of privacy—is likely not the answer, either. “We have to be very careful about the way we bake rights and freedom into our technology,” said Alex Gladstein, chief strategy officer at Human Rights Foundation.

Technology’s risks are clearly as fraught as its potential is promising. As Gary Shapiro, chief executive of the Consumer Technology Association, put it, “Everything we’ve talked about today is simply a tool, and can be used for good or bad.”

The decisions we’re making now, at every level—from the engineers writing algorithms, to the legislators writing laws, to the teenagers writing clever Instagram captions—will determine where on the spectrum we end up.

Image Credit: Rudy Balasko / Shutterstock.com

Posted in Human Robots

#434701 3 Practical Solutions to Offset ...

In recent years, the media has sounded the alarm about mass job loss to automation and robotics—some studies predict that up to 50 percent of current jobs or tasks could be automated in coming decades. While this topic has received significant attention, much of the press focuses on potential problems without proposing realistic solutions or considering new opportunities.

The economic impacts of AI, robotics, and automation are complex topics that require a more comprehensive perspective to understand. Is universal basic income, for example, the answer? Many believe so, and there are a number of experiments in progress. But it’s only one strategy, and without a sustainable funding source, universal basic income may not be practical.

As automation continues to accelerate, we’ll need a multi-pronged approach to ease the transition. In short, we need to update broad socioeconomic strategies for a new century of rapid progress. How, then, do we plan practical solutions to support these new strategies?

Take history as a rough guide to the future. Looking back, technology revolutions have three themes in common.

First, past revolutions each produced profound benefits to productivity, increasing human welfare. Second, technological innovation and technology diffusion have accelerated over time, each iteration placing more strain on the human ability to adapt. And third, machines have gradually replaced more elements of human work, with human societies adapting by moving into new forms of work—from agriculture to manufacturing to service, for example.

Public and private solutions, therefore, need to be developed to address each of these three components of change. Let’s explore some practical solutions for each in turn.

Figure 1. Technology’s structural impacts in the 21st century. Refer to Appendix I for quantitative charts and technological examples corresponding to the numbers (1-22) in each slice.
Solution 1: Capture New Opportunities Through Aggressive Investment
The rapid emergence of new technology promises a bounty of opportunity for the twenty-first century’s economic winners. This technological arms race is shaping up to be a global affair, and the winners will be determined in part by who is able to build the future economy fastest and most effectively. Both the private and public sectors have a role to play in stimulating growth.

At the country level, several nations have created competitive strategies to promote research and development investments as automation technologies become more mature.

Germany and China have two of the most notable growth strategies. Germany’s Industrie 4.0 plan targets a 50 percent increase in manufacturing productivity via digital initiatives, while halving the resources required. China’s Made in China 2025 national strategy sets ambitious targets and provides subsidies for domestic innovation and production. It also includes building new concept cities, investing in robotics capabilities, and subsidizing high-tech acquisitions abroad to become the leader in certain high-tech industries. For China, specifically, tech innovation is driven partially by a fear that technology will disrupt social structures and government control.

Such opportunities are not limited to existing economic powers. Estonia’s progress after the breakup of the Soviet Union is a good case study in transitioning to a digital economy. The nation rapidly implemented capitalistic reforms and transformed itself into a technology-centric economy in preparation for a massive tech disruption. Internet access was declared a right in 2000, and the country’s classrooms were outfitted for a digital economy, with coding as a core educational requirement starting at kindergarten. Internet broadband speeds in Estonia are among the fastest in the world. Accordingly, the World Bank now ranks Estonia as a high-income country.

Solution 2: Address Increased Rate of Change With More Nimble Education Systems
Education and training are currently not set up for the speed of change in the modern economy. Schools are still based on a one-time education model, with school providing the foundation for a single lifelong career. With content becoming obsolete faster and costs rapidly escalating, this system may be unsustainable in the future. To help workers more smoothly transition from one job into another, for example, we need to make education a more nimble, lifelong endeavor.

Primary and university education may still have a role in training foundational thinking and general education, but it will be necessary to curtail the rising price of tuition and increase accessibility. Massive open online courses (MOOCs) and open-enrollment platforms are early demonstrations of what the future of general education may look like: cheap, effective, and flexible.

Georgia Tech’s online Engineering Master’s program (a fraction of the cost of residential tuition) is an early example in making university education more broadly available. Similarly, nanodegrees or microcredentials provided by online education platforms such as Udacity and Coursera can be used for mid-career adjustments at low cost. AI itself may be deployed to supplement the learning process, with applications such as AI-enhanced tutorials or personalized content recommendations backed by machine learning. Recent developments in neuroscience research could optimize this experience by perfectly tailoring content and delivery to the learner’s brain to maximize retention.

Finally, companies looking for more customized skills may take a larger role in education, providing on-the-job training for specific capabilities. One potential model involves partnering with community colleges to create apprenticeship-style learning, where students work part-time in parallel with their education. Siemens has pioneered such a model in four states and is developing a playbook for other companies to do the same.

Solution 3: Enhance Social Safety Nets to Smooth Automation Impacts
If predicted job losses to automation come to fruition, modernizing existing social safety nets will increasingly become a priority. While the issue of safety nets can become quickly politicized, it is worth noting that each prior technological revolution has come with corresponding changes to the social contract (see below).

The evolving social contract (U.S. examples)
– 1842 | Right to strike
– 1924 | Abolish child labor
– 1935 | Right to unionize
– 1938 | 40-hour work week
– 1962, 1974 | Trade adjustment assistance
– 1964 | Pay discrimination prohibited
– 1970 | Health and safety laws
– 21st century | AI and automation adjustment assistance?

Figure 2. Labor laws have historically adjusted as technology and society progressed

Solutions like universal basic income (a no-strings-attached monthly payout to all citizens) are appealing in concept, but somewhat difficult to implement as a first measure in countries such as the US or Japan that already have high debt. Additionally, universal basic income may create disincentives to stay in the labor force. A similar cautionary tale in program design was the Trade Adjustment Assistance (TAA) program, which was designed to protect industries and workers from import competition shocks from globalization, but is viewed as a missed opportunity due to insufficient coverage.

A near-term solution could come in the form of graduated wage insurance (compensation for those forced to take a lower-paying job), including health insurance subsidies to individuals directly impacted by automation, with incentives to return to the workforce quickly. Another challenge to tackle is the geographic mismatch between workers and jobs, which can be addressed by mobility assistance. Lastly, a training stipend can be issued to individuals as a means to upskill.
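As a toy illustration of how a graduated wage-insurance benefit might be structured (all figures here are invented for the example, not drawn from any actual program), the payout could cover a fraction of the wage gap and taper to zero over a few years, preserving the incentive to climb back up the wage ladder:

```python
def wage_insurance(old_wage, new_wage, year, replacement=0.5, phase_out_years=3):
    """Toy graduated wage-insurance payout (illustrative numbers only).

    Pays a fraction of the gap between the old and new wage,
    tapering linearly to zero over `phase_out_years`.
    """
    gap = max(old_wage - new_wage, 0)
    # Linear taper: full replacement rate in year 0, zero at phase-out
    taper = max(1 - year / phase_out_years, 0)
    return gap * replacement * taper

# A worker displaced from a $60k job into a $45k job:
payouts = [wage_insurance(60_000, 45_000, yr) for yr in range(4)]
# Year 0 pays half the $15k gap ($7,500), declining to $0 by year 3.
```

The taper is the design choice that distinguishes this from an open-ended subsidy: support is strongest immediately after displacement, when retraining and job search are most expensive, and phases out as the worker re-establishes earnings.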

Policymakers can intervene to reverse recent historical trends that have shifted incomes from labor to capital owners. The balance could be shifted back to labor by placing higher taxes on capital—an example is the recently proposed “robot tax” where the taxation would be on the work rather than the individual executing it. That is, if a self-driving car performs the task that formerly was done by a human, the rideshare company will still pay the tax as if a human was driving.

Other solutions may involve distribution of work. Some countries, such as France and Sweden, have experimented with redistributing working hours. The idea is to cap weekly hours, with the goal of having more people employed and work more evenly spread. So far these programs have had mixed results, with lower unemployment but high costs to taxpayers, but are potential models that can continue to be tested.

We cannot stop growth, nor should we. As roles shift in response to this evolution, so should the social contract between the stakeholders. Government will continue to play a critical role as a stabilizing “thumb” in the invisible hand of capitalism, regulating and cushioning against extreme volatility, particularly in labor markets.

However, we already see business leaders taking on some of the role traditionally played by government—thinking about measures to remedy risks of climate change or economic proposals to combat unemployment—in part because of greater agility in adapting to change. Cross-disciplinary collaboration and creative solutions from all parties will be critical in crafting the future economy.

Note: The full paper this article is based on is available here.

Image Credit: Dmitry Kalinovsky / Shutterstock.com


#434559 Can AI Tell the Difference Between a ...

Scarcely a day goes by without another headline about neural networks: some new task that deep learning algorithms can excel at, approaching or even surpassing human competence. As the application of this approach to computer vision has continued to improve, with algorithms capable of specialized recognition tasks like those found in medicine, the software is getting closer to widespread commercial use—for example, in self-driving cars. Our ability to recognize patterns is a huge part of human intelligence: if this can be done faster by machines, the consequences will be profound.

Yet, as ever with algorithms, there are deep concerns about their reliability, especially when we don’t know precisely how they work. State-of-the-art neural networks will confidently—and incorrectly—classify images that look like television static or abstract art as real-world objects like school buses or armadillos. Specific algorithms could be targeted by “adversarial examples,” where adding an imperceptible amount of noise to an image can cause an algorithm to completely mistake one object for another. Machine learning experts enjoy constructing these images to trick advanced software, but if a self-driving car could be fooled by a few stickers, it might not be so fun for the passengers.
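The mechanics of an adversarial example can be sketched on a toy model. The fast gradient sign method (a standard textbook attack, not necessarily the one used in the sticker demonstrations above) nudges every input feature by a tiny amount in the direction that most increases the classifier’s loss; here a simple logistic classifier is flipped even though no single pixel moves by more than 0.06:

```python
import numpy as np

def fgsm_perturb(x, w, b, y_true, eps=0.02):
    """Fast gradient sign method for a logistic (linear) classifier.

    Each feature moves by at most +/- eps, but every move is aligned
    with the gradient of the loss, so the small nudges add up.
    """
    # Logistic model: p(y=1 | x) = sigmoid(w.x + b)
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))
    # Gradient of cross-entropy loss with respect to the input x
    grad_x = (p - y_true) * w
    return x + eps * np.sign(grad_x)

# Toy classifier: labels an input positive when its mean value is high.
rng = np.random.default_rng(0)
w = np.ones(100)                     # averaging classifier
b = -50.0                            # threshold near mean 0.5
x = rng.uniform(0.5, 0.6, size=100)  # clearly "positive" input

score_before = x @ w + b             # positive: classified correctly
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.06)
score_after = x_adv @ w + b          # negative: classification flips
```

On a million-pixel image the same trick works with perturbations far below what a human can perceive, which is why these attacks are so unsettling for safety-critical vision systems.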

These difficulties are hard to smooth out in large part because we don’t have a great intuition for how these neural networks “see” and “recognize” objects. Analyzing a trained network directly yields little more than a set of statistical weights associating certain groups of points with certain objects, and those weights can be very difficult to interpret.

Now, new research from UCLA, published in the journal PLOS Computational Biology, is testing neural networks to understand the limits of their vision and the differences between computer vision and human vision. Nicholas Baker, Hongjing Lu, and Philip J. Kellman of UCLA, alongside Gennady Erlikhman of the University of Nevada, tested a deep convolutional neural network called VGG-19. This is state-of-the-art technology that is already outperforming humans on standardized tests like the ImageNet Large Scale Visual Recognition Challenge.

They found that, while humans tend to classify objects based on their overall (global) shape, deep neural networks are far more sensitive to the textures of objects, including local color gradients and the distribution of points on the object. This result helps explain why neural networks in image recognition make mistakes that no human ever would—and could allow for better designs in the future.

In the first experiment, a neural network was trained to sort images into 1 of 1,000 different categories. It was then presented with silhouettes of these images: all of the local information was lost, while only the outline of the object remained. On the original images, the trained neural net recognized the objects reliably, assigning more than 90% probability to the correct classification. On silhouettes, this dropped to 10%. While human observers could nearly always produce correct shape labels, the neural networks appeared almost insensitive to the overall shape of the images. On average, the correct object was ranked as the 209th most likely solution by the neural network, even though the overall shapes were an exact match.

A particularly striking example arose when they tried to get the neural networks to classify glass figurines of objects they could already recognize. While you or I might find it easy to identify a glass model of an otter or a polar bear, the neural network classified them as “oxygen mask” and “can opener” respectively. By presenting glass figurines, where the texture information that neural networks relied on for classifying objects is lost, the neural network was unable to recognize the objects by shape alone. The neural network was similarly hopeless at classifying objects based on drawings of their outline.

If you got one of these right, you’re better than state-of-the-art image recognition software. Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
When the neural network was explicitly trained to recognize object silhouettes—given no information in the training data aside from the object outlines—the researchers found that slight distortions or “ripples” to the contour of the image were again enough to fool the AI, while humans paid them no mind.

The fact that neural networks seem to be insensitive to the overall shape of an object—relying instead on statistical similarities between local distributions of points—suggests a further experiment. What if you scrambled the images so that the overall shape was lost but local features were preserved? It turns out that the neural networks are far better and faster at recognizing scrambled versions of objects than outlines, even when humans struggle. Students could classify only 37% of the scrambled objects, while the neural network succeeded 83% of the time.
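The scrambling manipulation is straightforward to reproduce: cut the image into small non-overlapping tiles and shuffle them, which destroys the global shape while leaving the local texture inside each tile untouched. A minimal sketch (assuming a square grayscale image whose sides divide evenly into the tile size; the UCLA team’s exact procedure may differ):

```python
import random
import numpy as np

def scramble(img, tile=8, seed=0):
    """Shuffle non-overlapping tiles of a grayscale image.

    Local features (texture inside each tile) survive, but the
    global arrangement -- and hence the overall shape -- is lost.
    """
    h, w = img.shape
    assert h % tile == 0 and w % tile == 0, "image must divide into tiles"
    # Cut the image into a list of (tile x tile) blocks
    blocks = [img[r:r + tile, c:c + tile]
              for r in range(0, h, tile)
              for c in range(0, w, tile)]
    random.Random(seed).shuffle(blocks)
    # Reassemble the shuffled blocks into an image of the same size
    cols = w // tile
    rows = [np.hstack(blocks[i:i + cols]) for i in range(0, len(blocks), cols)]
    return np.vstack(rows)

img = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
scrambled = scramble(img, tile=8)
# Same size and same pixel histogram as the original, different layout.
```

Because the pixel statistics within each tile are preserved exactly, a texture-driven classifier still has plenty to work with, while a shape-driven observer (a human) sees only jumbled fragments.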

Humans vastly outperform machines at classifying object (a) as a bear, while the machine learning algorithm has few problems classifying the bear in figure (b). Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
“This study shows these systems get the right answer in the images they were trained on without considering shape,” Kellman said. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all.”

Naively, one might expect that—as the many layers of a neural network are modeled on connections between neurons in the brain and resemble the visual cortex specifically—the way computer vision operates must necessarily be similar to human vision. But this kind of research shows that, while the fundamental architecture might resemble that of the human brain, the resulting “mind” operates very differently.

Researchers can, increasingly, observe how the “neurons” in neural networks light up when exposed to stimuli and compare it to how biological systems respond to the same stimuli. Perhaps someday it might be possible to use these comparisons to understand how neural networks are “thinking” and how those responses differ from humans.

But, as yet, probing how neural networks and artificial intelligence algorithms perceive the world takes something closer to experimental psychology. The tests employed against the neural network resemble the way scientists might try to understand the senses of an animal or the developing brain of a young child more than the way they would analyze a piece of software.

By combining this experimental psychology with new neural network designs or error-correction techniques, it may be possible to make them even more reliable. Yet this research illustrates just how much we still don’t understand about the algorithms we’re creating and using: how they tick, how they make decisions, and how they’re different from us. As they play an ever-greater role in society, understanding the psychology of neural networks will be crucial if we want to use them wisely and effectively—and not end up missing the woods for the trees.

Image Credit: Irvan Pratama / Shutterstock.com


#434544 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
DeepMind Beats Pros at Starcraft in Another Triumph for Bots
Tom Simonite | Wired
“DeepMind’s feat is the most complex yet in a long train of contests in which computers have beaten top humans at games. Checkers fell in 1994, chess in 1997, and DeepMind’s earlier bot AlphaGo became the first to beat a champion at the board game Go in 2016. The StarCraft bot is the most powerful AI game player yet; it may also be the least unexpected.”

GENETICS
Complete Axolotl Genome Could Pave the Way Toward Human Tissue Regeneration
George Dvorsky | Gizmodo
“Now that researchers have a near-complete axolotl genome—the new assembly still requires a bit of fine-tuning (more on that in a bit)—they, along with others, can now go about the work of identifying the genes responsible for axolotl tissue regeneration.”

FUTURE
We Analyzed 16,625 Papers to Figure Out Where AI Is Headed Next
Karen Hao | MIT Technology Review
“…though deep learning has singlehandedly thrust AI into the public eye, it represents just a small blip in the history of humanity’s quest to replicate our own intelligence. It’s been at the forefront of that effort for less than 10 years. When you zoom out on the whole history of the field, it’s easy to realize that it could soon be on its way out.”

COMPUTING
Apple’s Finger-Controller Patent Is a Glimpse at Mixed Reality’s Future
Mark Sullivan | Fast Company
“[Apple’s] engineers are now looking past the phone touchscreen toward mixed reality, where the company’s next great UX will very likely be built. A recent patent application gives some tantalizing clues as to how Apple’s people are thinking about aspects of that challenge.”

GOVERNANCE
How Do You Govern Machines That Can Learn? Policymakers Are Trying to Figure That Out
Steve Lohr | The New York Times
“Regulation is coming. That’s a good thing. Rules of competition and behavior are the foundation of healthy, growing markets. That was the consensus of the policymakers at MIT. But they also agreed that artificial intelligence raises some fresh policy challenges.”

Image Credit: Victoria Shapiro / Shutterstock.com


#434534 To Extend Our Longevity, First We Must ...

Healthcare today is reactive, retrospective, bureaucratic, and expensive. It’s sick care, not healthcare.

But that is radically changing at an exponential rate.

Through this multi-part blog series on longevity, I’ll take a deep dive into aging, longevity, and healthcare technologies that are working together to dramatically extend the human lifespan, disrupting the $3 trillion healthcare system in the process.

I’ll begin the series by explaining the nine hallmarks of aging, as catalogued in the landmark 2013 Cell paper “The Hallmarks of Aging.” Next, I’ll break down the emerging technologies and initiatives working to combat these nine hallmarks. Finally, I’ll explore the transformative implications of dramatically extending the human health span.

In this blog I’ll cover:

Why the healthcare system is broken
Why, despite this, we live in the healthiest time in human history
The nine mechanisms of aging

Let’s dive in.

The System is Broken—Here’s the Data:

Doctors spend $210 billion per year on procedures that aren’t based on patient need but on fear of liability.
Americans spend, on average, $8,915 per person on healthcare—more than any other country on Earth.
Prescription drugs cost around 50 percent more in the US than in other industrialized countries.
At current rates, by 2025, nearly 25 percent of the US GDP will be spent on healthcare.
It takes 12 years and $359 million, on average, to take a new drug from the lab to a patient.
Only 5 in 5,000 of these new drugs proceed to human testing. From there, only 1 of those 5 is actually approved for human use.
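The last two figures compound into a sobering funnel. A quick sketch of the arithmetic (using only the numbers quoted above):

```python
# Drug-development funnel from the figures above:
# 5 of 5,000 lab candidates reach human testing; 1 of those 5 is approved.
candidates = 5000
reach_trials = 5
approved = 1

p_trials = reach_trials / candidates               # 0.001
p_approval_given_trials = approved / reach_trials  # 0.2
p_overall = p_trials * p_approval_given_trials     # 0.0002

print(f"Overall approval odds: 1 in {int(1 / p_overall):,} ({p_overall:.2%})")
# prints "Overall approval odds: 1 in 5,000 (0.02%)"
```

In other words, only 1 in 5,000 lab candidates ever reaches a patient, which is what makes the 12-year, $359 million price tag per approved drug so hard to bring down.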

And Yet, We Live in the Healthiest Time in Human History
Consider these insights, which I adapted from Max Roser’s excellent database Our World in Data:

Right now, the countries with the lowest life expectancy in the world still have higher life expectancies than the countries with the highest life expectancy did in 1800.
In 1841, a 5-year-old had a life expectancy of 55 years. Today, a 5-year-old can expect to live 82 years—an increase of 27 years.
We’re seeing a dramatic increase in healthspan. In 1845, a newborn could expect to live to 40 years old, while a 70-year-old could expect to reach 79. Today, people of all ages can expect to live to between 81 and 86 years old.
100 years ago, 1 in 3 children died before the age of 5. By 2015, the child mortality rate had fallen to just 4.3 percent.
The cancer mortality rate has declined 27 percent over the past 25 years.
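That last figure is easier to appreciate as a yearly rate. A hypothetical back-of-envelope calculation, assuming the decline compounds at a constant annual rate (the source states only the 25-year total):

```python
# Convert a 27% total decline over 25 years into an implied annual rate,
# assuming a constant compounded decline each year.
total_decline = 0.27
years = 25

annual_rate = 1 - (1 - total_decline) ** (1 / years)
print(f"Implied annual decline: {annual_rate:.2%}")  # ~1.25% per year
```

A steady decline of roughly 1.25 percent per year, sustained for a quarter century, is what a 27 percent total drop works out to.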

Figure: Around the globe, life expectancy has doubled since the 1800s. | Image from Life Expectancy by Max Roser – Our World in Data / CC BY SA
Figure: The dramatic reduction in child mortality between 1800 and 2015. | Image from Child Mortality by Max Roser – Our World in Data / CC BY SA
The 9 Mechanisms of Aging
*This section was adapted from CB INSIGHTS: The Future Of Aging.

Longevity, healthcare, and aging are intimately linked.

With better healthcare, we can better treat some of the leading causes of death, impacting how long we live.

By investigating how to treat diseases, we’ll inevitably better understand what causes these diseases in the first place, which directly correlates to why we age.

Following are the nine hallmarks of aging. I’ll share examples of health and longevity technologies addressing each of these later in this blog series.

Genomic instability: As we age, the environment and normal cellular processes cause damage to our genes. Activities like flying at high altitude, for example, expose us to increased radiation and free radicals. This damage compounds over the course of life and is known to accelerate aging.
Telomere attrition: Each chromosome (a long strand of DNA) is capped at both ends by telomeres: short DNA sequences, repeated thousands of times, that protect the bulk of the chromosome. Telomeres shorten each time our DNA replicates; once a telomere reaches a critical shortness, the cell stops dividing, resulting in increased incidence of disease.
Epigenetic alterations: Over time, environmental factors will change how genes are expressed, i.e., how certain sequences of DNA are read and the instruction set implemented.
Loss of proteostasis: Over time, different proteins in our body will no longer fold and function as they are supposed to, resulting in diseases ranging from cancer to neurological disorders.
Deregulated nutrient-sensing: Nutrient levels in the body influence various metabolic pathways, which involve proteins like IGF-1, mTOR, sirtuins, and AMPK. Changes in the activity of these pathways have implications for longevity.
Mitochondrial dysfunction: Mitochondria (our cellular power plants) begin to decline in performance as we age. Decreased performance results in excess fatigue and other symptoms of chronic illnesses associated with aging.
Cellular senescence: As cells age, some stop dividing but are not cleared from the body. These senescent cells accumulate and typically cause increased inflammation.
Stem cell exhaustion: As we age, our supply of stem cells diminishes as much as 100- to 10,000-fold in different tissues and organs. In addition, stem cells undergo genetic mutations, which reduce their quality and effectiveness at renewing and repairing the body.
Altered intercellular communication: The communication mechanisms that cells use are disrupted as cells age, resulting in decreased ability to transmit information between cells.

Conclusion
Over the past 200 years, we have seen an abundance of healthcare technologies enable a massive lifespan boom.

Now, exponential technologies like artificial intelligence, 3D printing and sensors, as well as tremendous advancements in genomics, stem cell research, chemistry, and many other fields, are beginning to tackle the fundamental issues of why we age.

In the next blog in this series, we will dive into how genome sequencing and editing, along with new classes of drugs, are augmenting our biology to further extend our healthy lives.

What will you be able to achieve with an extra 30 to 50 healthy years (or longer) in your lifespan? Personally, I’m excited for a near-infinite lifespan to take on moonshots.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: David Carbo / Shutterstock.com
