Tag Archives: hacking

#433884 Designer Babies, and Their Babies: How ...

As if stand-alone technologies weren’t advancing fast enough, we’re in an age where we must study the intersection points of these technologies. How is what’s happening in robotics influenced by what’s happening in 3D printing? What could be made possible by applying the latest advances in quantum computing to nanotechnology?

Along these lines, one crucial tech intersection is that of artificial intelligence and genomics. Each field is seeing constant progress, but Jamie Metzl believes it’s their convergence that will really push us into uncharted territory, beyond even what we’ve imagined in science fiction. “There’s going to be this push and pull, this competition between the reality of our biology with its built-in limitations and the scope of our aspirations,” he said.

Metzl is a senior fellow at the Atlantic Council and author of the upcoming book Hacking Darwin: Genetic Engineering and the Future of Humanity. At Singularity University’s Exponential Medicine conference last week, he shared his insights on genomics and AI, and where their convergence could take us.

Life As We Know It
Metzl explained how genomics as a field evolved slowly—and then quickly. In 1953, James Watson and Francis Crick identified the double helix structure of DNA, and realized that the order of the base pairs held a treasure trove of genetic information. There was such a thing as a book of life, and we’d found it.

In 2003, when the Human Genome Project was completed (after 13 years and $2.7 billion), we learned the order of the genome’s 3 billion base pairs, and the location of specific genes on our chromosomes. Not only did a book of life exist, we figured out how to read it.

Jamie Metzl at Exponential Medicine
Fifteen years after that, it’s 2018 and precision gene editing in plants, animals, and humans is changing everything, and quickly pushing us into an entirely new frontier. Forget reading the book of life—we’re now learning how to write it.

“Readable, writable, and hackable, what’s clear is that human beings are recognizing that we are another form of information technology, and just like our IT has entered this exponential curve of discovery, we will have that with ourselves,” Metzl said. “And it’s intersecting with the AI revolution.”

Learning About Life Meets Machine Learning
In 2016, DeepMind’s AlphaGo program outsmarted the world’s top Go player. In 2017, AlphaGo Zero was created: unlike AlphaGo, AlphaGo Zero wasn’t trained on previous human games of Go, but was simply given the rules—and in four days it defeated the AlphaGo program.

Our own biology is, of course, vastly more complex than the game of Go, and that, Metzl said, is our starting point. “The system of our own biology that we are trying to understand is massively, but very importantly not infinitely, complex,” he added.

Getting a standardized set of rules for our biology—and, eventually, maybe even outsmarting our biology—will require genomic data. Lots of it.

Multiple countries are already starting to produce this data. The UK’s National Health Service recently announced a plan to sequence the genomes of five million Britons over the next five years. In the US, the All of Us Research Program will sequence a million Americans. China is the most aggressive in sequencing its population, with a goal of sequencing half of all newborns by 2020.

“We’re going to get these massive pools of sequenced genomic data,” Metzl said. “The real gold will come from comparing people’s sequenced genomes to their electronic health records, and ultimately their life records.” Getting people comfortable with allowing open access to their data will be another matter; Metzl mentioned that Luna DNA and others have strategies to help people get comfortable consenting to share their private information. But this is where China’s lack of privacy protection could end up being a significant advantage.

To compare genotypes and phenotypes at scale—first millions, then hundreds of millions, then eventually billions, Metzl said—we’re going to need AI and big data analytic tools, and algorithms far beyond what we have now. These tools will let us move from precision medicine to predictive medicine, knowing precisely when and where different diseases are going to occur and shutting them down before they start.

But, Metzl said, “As we unlock the genetics of ourselves, it’s not going to be about just healthcare. It’s ultimately going to be about who and what we are as humans. It’s going to be about identity.”

Designer Babies, and Their Babies
In Metzl’s mind, the most serious application of our genomic knowledge will be in embryo selection.

Currently, in-vitro fertilization (IVF) procedures can extract around 15 eggs, fertilize them, and then do pre-implantation genetic testing; right now, what’s knowable is single-gene mutation diseases and simple traits like hair color and eye color. “As we get to the millions and then billions of people with sequences, we’ll have information about how these genetics work, and we’re going to be able to make much more informed choices,” Metzl said.

Imagine going to a fertility clinic in 2023. You give a skin graft or a blood sample, and using in-vitro gametogenesis (IVG)—infertility be damned—your skin or blood cells are induced to become eggs or sperm, which are then combined to create embryos. The dozens or hundreds of embryos created from artificial gametes each have a few cells extracted from them, and these cells are sequenced. The sequences will tell you the likelihood of specific traits and disease states were that embryo to be implanted and taken to full term. “With really anything that has a genetic foundation, we’ll be able to predict with increasing levels of accuracy how that potential child will be realized as a human being,” Metzl said.

This, he added, could lead to some wild and frightening possibilities: if you have 1,000 eggs and you pick one based on its optimal genetic sequence, you could then mate your embryo with somebody else who has done the same thing in a different genetic line. “Your five-day-old embryo and their five-day-old embryo could have a child using the same IVG process,” Metzl said. “Then that child could have a child with another five-day-old embryo from another genetic line, and you could go on and on down the line.”

Sounds insane, right? But wait, there’s more: as Jason Pontin reported earlier this year in Wired, “Gene-editing technologies such as Crispr-Cas9 would make it relatively easy to repair, add, or remove genes during the IVG process, eliminating diseases or conferring advantages that would ripple through a child’s genome. This all may sound like science fiction, but to those following the research, the combination of IVG and gene editing appears highly likely, if not inevitable.”

From Crazy to Commonplace?
It’s a slippery slope from gene editing and embryo-mating to a dystopian race to build the most perfect humans possible. If somebody’s investing so much time and energy in selecting their embryo, Metzl asked, how will they think about the mating choices of their children? IVG could quickly leave the realm of healthcare and enter that of evolution.

“We all need to be part of an inclusive, integrated, global dialogue on the future of our species,” Metzl said. “Healthcare professionals are essential nodes in this.” Not least among this dialogue should be the question of access to tech like IVG; are there steps we can take to keep it from becoming a tool for a wealthy minority, and thereby perpetuating inequality and further polarizing societies?

As Pontin points out, at its inception 40 years ago IVF also sparked fear, confusion, and resistance—and now it’s as normal and common as could be, with millions of healthy babies conceived using the technology.

The disruption that genomics, AI, and IVG will bring to reproduction could follow a similar story cycle—if we’re smart about it. As Metzl put it, “This must be regulated, because it is life.”

Image Credit: hywards / Shutterstock.com

Posted in Human Robots

#433770 Will Tech Make Insurance Obsolete in the ...

We profit from it, we fear it, and we find it impossibly hard to quantify: risk.

While not the sexiest of industries, insurance can be a life-saving protector, pooling everyone’s premiums to safeguard against some of our greatest, most unexpected losses.

One of the most profitable industries in the world, insurance has exceeded $1.2 trillion in annual revenue each year since 2011 in the US alone.

But risk is becoming predictable. And insurance is getting disrupted fast.

By 2025, we’ll be living in a trillion-sensor economy. And as we enter a world where everything is measured all the time, we’ll start to transition from protecting against damages to preventing them in the first place.

But what happens to health insurance when Big Brother is always watching? Do rates go up when you sneak a cigarette? Do they go down when you eat your vegetables?

And what happens to auto insurance when most cars are autonomous? Or life insurance when the human lifespan doubles?

For that matter, what happens to insurance brokers when blockchain makes them irrelevant?

In this article, I’ll be discussing four key transformations:

Sensors and AI replacing your traditional broker
Blockchain building trust
The ecosystem approach
IoT and insurance connectivity

Let’s dive in.

AI and the Trillion-Sensor Economy
As sensors continue to proliferate across every context—from smart infrastructure to millions of connected home devices to medicine—smart environments will allow us to ask any question, anytime, anywhere.

And as I often explain, once your AI has access to this treasure trove of ubiquitous sensor data in real time, it will be the quality of your questions that make or break your business.

But perhaps the most exciting insurance application of AI’s convergence with sensors is in healthcare. Tremendous advances in genetic screening are empowering us with predictive knowledge about our long-term health risks.

Leading the charge in genome sequencing, Illumina predicts that in a matter of years, decoding the full human genome will drop to $100 and take merely one hour to complete. Other companies are racing to get you sequenced even faster and cheaper.

Adopting an ecosystem approach, incumbent insurers and insurtech firms will soon be able to collaborate to provide risk-minimizing services in the health sector. Using sensor data and AI-driven personalized recommendations, insurance partnerships could keep consumers healthy, dramatically reducing the cost of healthcare.

Some fear that information asymmetry will allow consumers to learn of their health risks and leave insurers in the dark. However, both parties could benefit if insurers become part of the screening process.

A remarkable example of this is Gilad Meiri’s company, Neura AI. Aiming to predict health patterns, Neura has developed machine learning algorithms that analyze data from all of a user’s connected devices (sometimes from up to 54 apps!).

Neura predicts a user’s behavior and draws staggering insights about consumers’ health risks. Meiri soon began selling his personal risk assessment tool to insurers, who could then help insured customers mitigate long-term health risks.

But artificial intelligence will impact far more than just health insurance.

In October of 2016, a claim was submitted to Lemonade, the world’s first peer-to-peer insurance company. Rather than being processed by a human, every step in this claim resolution chain—from initial triage through fraud mitigation through final payment—was handled by an AI.

This transaction marks the first time an AI has processed an insurance claim. And it won’t be the last. A traditional human-processed claim takes 40 days to pay out. In Lemonade’s case, payment was transferred within three seconds.

However, Lemonade’s achievement only marks a starting point. Over the course of the next decade, nearly every facet of the insurance industry will undergo a similarly massive transformation.

New business models like peer-to-peer insurance are replacing traditional brokerage relationships, while AI and blockchain pairings significantly reduce the layers of bureaucracy required (with each layer getting a cut) for traditional insurance.

Consider Juniper, a startup that scrapes social media to build your risk assessment, subsequently asking you 12 questions via an iPhone app. Geared with advanced analytics, the platform can generate a million-dollar life insurance policy, approved in less than five minutes.

But what’s keeping all your data from unwanted hands?

Blockchain Building Trust
Current distrust in centralized financial services has led to staggering rates of underinsurance. Add to this the fear of poor data and privacy protection, particularly in the wake of 2017’s widespread cybercriminal hacks.

Enabling secure storage and transfer of personal data, blockchain holds remarkable promise against the fraudulent activity that often plagues insurance firms.

The centralized model of insurance companies and other organizations is becoming redundant. Symbiont, which builds blockchain-based solutions for capital markets, develops smart contracts that execute payments with little to no human involvement.

But distributed ledger technology (DLT) is enabling far more than just smart contracts.

Also targeting insurance is Tradle, leveraging blockchain for its proclaimed goal of “building a trust provisioning network.” Built around “know-your-customer” (KYC) data, Tradle aims to verify KYC data so that it can be securely forwarded to other firms without any further verification.

By letting any number of parties reuse pre-verified data, the platform makes your data much less vulnerable to hacking and allows you to keep it on a personal device. Only its verification—let’s say of a transaction or medical exam—is registered in the blockchain.
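The idea of registering only a verification, not the data itself, can be sketched as a generic hash commitment. The field names and ledger below are made up for illustration; this is not Tradle's actual protocol, just the underlying trick: hash the record, publish only the digest, and let a second firm check a presented record against it.

```python
import hashlib
import json

def digest(record: dict) -> str:
    """Canonical SHA-256 digest of a record (sorted keys for determinism)."""
    canonical = json.dumps(record, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()

# The raw KYC record stays on the user's personal device...
kyc_record = {"name": "Alice Example", "passport": "X1234567", "verified_by": "Bank A"}

# ...and only its 64-character digest is registered on the shared ledger.
ledger = set()
ledger.add(digest(kyc_record))

def accept_without_reverifying(record: dict, ledger: set) -> bool:
    """A second firm checks a presented record against the registered digest."""
    return digest(record) in ledger

print(accept_without_reverifying(kyc_record, ledger))   # True
tampered = {**kyc_record, "passport": "FORGED"}
print(accept_without_reverifying(tampered, ledger))     # False
```

Even a one-character change to the record produces an entirely different digest, so the second firm can trust the data without ever seeing the verifier's files.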

As insurance data grow increasingly decentralized, key insurance players will experience more and more pressure to adopt an ecosystem approach.

The Ecosystem Approach
Just as exponential technologies converge to provide new services, exponential businesses must combine the strengths of different sectors to expand traditional product lines.

By partnering with platform-based insurtech firms, forward-thinking insurers will no longer serve only as reactive policy-providers, but provide risk-mitigating services as well.

Especially as digital technologies demonetize security services—think autonomous vehicles—insurers must create new value chains and span more product categories.

For instance, France’s multinational AXA recently partnered with Alibaba and Ant Financial Services to sell a varied range of insurance products on Alibaba’s global e-commerce platform at the click of a button.

Building another ecosystem, Alibaba has also collaborated with Ping An Insurance and Tencent to create ZhongAn Online Property and Casualty Insurance—China’s first internet-only insurer, offering over 300 products. Now with a multibillion-dollar valuation, ZhongAn has generated about half its business from selling shipping return insurance to Alibaba consumers.

But it doesn’t stop there. Insurers that participate in digital ecosystems can now sell risk-mitigating services that prevent damage before it occurs.

Imagine a corporate manufacturer whose sensors collect data on environmental factors affecting crop yield in an agricultural community. With the backing of investors and advanced risk analytics, such a manufacturer could sell crop insurance to farmers. By implementing an automated, AI-driven UI, they could automatically make payments when sensors detect weather damage to crops.

Now let’s apply this concept to your house, your car, your health insurance.

What’s stopping insurers from partnering with third-party IoT platforms to predict fires, collisions, chronic heart disease—and then empowering the consumer with preventive services?

This brings us to the powerful field of IoT.

Internet of Things and Insurance Connectivity
Leap ahead a few years. With a centralized hub like Echo, your smart home protects itself with a network of sensors. Suppose you’ve left a gas burner on while out; your internet-connected stove notifies you via a home app.

Better yet, home sensors monitoring heat and humidity levels run this data through an AI, which then remotely controls heating, humidity levels, and other connected devices based on historical data patterns and fire risk factors.

Several firms are already working toward this reality.

AXA plans to one day integrate with a centralized home hub, where remote monitoring will collect data for future analysis and detect abnormalities.

With remote monitoring and app-centralized control for users, MonAXA aims to customize insurance bundles that reflect the exact security features embedded in a smart home.

Wouldn’t you prefer not to have to rely on insurance after a burglary? With digital ecosystems, insurers may soon prevent break-ins from the start.

By gathering sensor data from third parties on neighborhood conditions, historical theft data, suspicious activity and other risk factors, an insurtech firm might automatically put your smart home on high alert, activating alarms and specialized locks in advance of an attack.

Insurance premiums are predicted to drop sharply as the likelihood of insured losses falls. But insurers moving into preventive insurtech will likely turn a profit from other areas of their business. PricewaterhouseCoopers predicts that the connected home market will reach $149 billion by 2020.

Let’s look at car insurance.

Car insurance premiums are currently calculated according to the driver and traits of the car. But as more autonomous vehicles take to the roads, not only does liability shift to manufacturers and software engineers, but the risk of collision falls dramatically.

But let’s take this a step further.

In a future of autonomous cars, you will no longer own your car, instead subscribing to Transport as a Service (TaaS) and giving up the purchase of automotive insurance altogether.

This paradigm shift has already begun with Waymo, which automatically provides passengers with insurance every time they step into a Waymo vehicle.

And with the rise of smart traffic systems, sensor-embedded roads, and skyrocketing autonomous vehicle technology, the risks involved in transit only continue to plummet.

Final Thoughts
Insurtech firms are hitting the market fast. IoT, autonomous vehicles and genetic screening are rapidly making us invulnerable to risk. And AI-driven services are quickly pushing conventional insurers out of the market.

By 2024, the roll-out of 5G on the ground, along with OneWeb and Starlink in orbit, will bring 4.2 billion new consumers to the web—most of whom will need insurance. Yet, because of the changes afoot in the industry, none of them will buy policies from a human broker.

While today’s largest insurance companies continue to ignore this fact (and this segment of the market) at their peril, thousands of entrepreneurs see it more clearly: as one of the largest opportunities ahead.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: 24Novembers / Shutterstock.com

#433731 From cyborgs to sex robots, U of M ...

Francis Shen spends a lot of time thinking about transhuman cyborgs, brain-wave lie detectors, sex robots and terrorists hacking into devices implanted in our heads.

#432249 New Malicious AI Report Outlines Biggest ...

Everyone’s talking about deep fakes: audio-visual imitations of people, generated by increasingly powerful neural networks, that will soon be indistinguishable from the real thing. Politicians are regularly laid low by scandals that arise from audio-visual recordings. Watch the footage of Barack Obama synthesized from his speeches, or listen to Lyrebird’s voice impersonations: today or in the very near future, you could easily create a forgery that might be indistinguishable from the real thing. What would that do to politics?

Once the internet is flooded with plausible-seeming tapes and recordings of this sort, how are we going to decide what’s real and what isn’t? Democracy, and our ability to counteract threats, is already threatened by a lack of agreement on the facts. Once you can’t believe the evidence of your senses anymore, we’re in serious trouble. Ultimately, you can dream up all kinds of utterly terrifying possibilities for these deep fakes, from fake news to blackmail.

How to solve the problem? Some have suggested that media websites like Facebook or Twitter should carry software that probes every video to see if it’s a deep fake or not and labels the fakes. But this will prove computationally intensive. Plus, imagine a case where we have such a system, and a fake is “verified as real” by news media algorithms that have been fooled by clever hackers.

The other alternative is even more dystopian: you can prove something isn’t true simply by always having an alibi. Lawfare describes a “solution” where those concerned about deep fakes have all of their movements and interactions recorded. So to avoid being blackmailed or having your reputation ruined, you just consent to some company engaging in 24/7 surveillance of everything you say or do and having total power over that information. What could possibly go wrong?

The point is, in the same way that you don’t need human-level, general AI or humanoid robotics to create systems that can cause disruption in the world of work, you also don’t need a general intelligence to threaten security and wreak havoc on society. Andrew Ng, AI researcher, says that worrying about the risks from superintelligent AI is like “worrying about overpopulation on Mars.” There are clearly risks that arise even from the simple algorithms we have today.

The looming issue of deep fakes is just one of the threats considered by the new malicious AI report, which has co-authors from the Future of Humanity Institute and the Centre for the Study of Existential Risk (among other organizations). They limit their focus to the technologies of the next five years.

Some of the concerns the report explores are enhancements to familiar threats.

Automated hacking can get better and smarter, and algorithms can adapt to changing security protocols. “Phishing emails,” which scam people by impersonating someone they trust or an official organization, could be generated en masse and made more realistic by scraping data from social media. Standard phishing works by sending such a great volume of emails that even a very low success rate can be profitable. Spear phishing aims at specific targets by impersonating family members, but it is labor intensive. If AI algorithms make every mass phishing scam as tailored as spear phishing, far more people are going to get scammed.

Then there are novel threats that come from our own increasing use of and dependence on artificial intelligence to make decisions.

These algorithms may be smart in some ways, but as any human knows, computers are utterly lacking in common sense; they can be fooled. A rather scary application is adversarial examples. Machine learning algorithms are often used for image recognition. But it’s possible, if you know a little about how the algorithm is structured, to construct the perfect level of noise to add to an image, and fool the machine. Two images can be almost completely indistinguishable to the human eye. But by adding some cleverly-calculated noise, the hackers can fool the algorithm into thinking an image of a panda is really an image of a gibbon (in the OpenAI example). Research conducted by OpenAI demonstrates that you can fool algorithms even by printing out examples on stickers.
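The mechanics are easiest to see on a toy stand-in. The sketch below uses a hypothetical linear classifier rather than a real image network (attacking a deep network uses the gradient of its loss, computed by backpropagation), but it demonstrates the same fast-gradient-sign trick: nudge every "pixel" by the same tiny amount, in the direction that most increases the loss, and the prediction flips.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy linear "image" classifier: score = w . x, predicted class = sign(score).
dim = 1000  # think: number of pixels
w = rng.choice([-1.0, 1.0], size=dim) * rng.uniform(0.5, 1.5, size=dim)

x = rng.normal(size=dim)        # the original "image"
score = float(w @ x)
label = np.sign(score)          # the model's current prediction

# FGSM-style perturbation: for a linear model, the loss gradient w.r.t. x is
# proportional to w, so step each pixel by eps against the current label.
eps = abs(score) / np.abs(w).sum() * 1.01   # just big enough to flip the sign
x_adv = x - label * eps * np.sign(w)

print(eps)                           # a tiny per-pixel change...
print(np.sign(w @ x_adv) == label)   # ...yet the predicted class flips (False)
```

Because the budget eps shrinks as the number of input dimensions grows, high-dimensional inputs like images are exactly where imperceptible perturbations become possible.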

Now imagine that instead of tricking a computer into thinking that a panda is actually a gibbon, you fool it into thinking that a stop sign isn’t there, or that the back of someone’s car is really a nice open stretch of road. In the adversarial example case, the images are almost indistinguishable to humans. By the time anyone notices the road sign has been “hacked,” it could already be too late.

As OpenAI freely admits, worrying about whether we’d be able to tame a superintelligent AI is a hard problem. It looks all the more difficult when you realize some of our best algorithms can be fooled by stickers; even “modern simple algorithms can behave in ways we do not intend.”

There are ways to defend against these attacks.

Adversarial training can generate lots of adversarial examples and explicitly train the algorithm not to be fooled by them—but it’s costly in terms of time and computation, and puts you in an arms race with hackers. Many strategies for defending against adversarial examples haven’t proved adaptive enough; correcting against vulnerabilities one at a time is too slow. Moreover, it demonstrates a point that can be lost in the AI hype: algorithms can be fooled in ways we didn’t anticipate. If we don’t learn about these vulnerabilities until the algorithms are everywhere, serious disruption can occur. And no matter how careful you are, some vulnerabilities are likely to remain to be exploited, even if it takes years to find them.

Just look at the Meltdown and Spectre vulnerabilities, which sat unnoticed in processors for more than 20 years and could enable hackers to steal personal information. Ultimately, the more blind faith we put into algorithms and computers—without understanding the opaque inner mechanics of how they work—the more vulnerable we will be to these forms of attack. And, as China dreams of using AI to predict crimes and enhance the police force, the potential for unjust arrests can only increase.

This is before you get into the truly nightmarish territory of “killer robots”—not the Terminator, but instead autonomous or consumer drones which could potentially be weaponized by bad actors and used to conduct attacks remotely. Some reports have indicated that terrorist organizations are already trying to do this.

As with any form of technology, new powers for humanity come with new risks. And, as with any form of technology, closing Pandora’s box will prove very difficult.

Somewhere between the excessively hyped prospects of AI that will do everything for us and AI that will destroy the world lies reality: a complex, ever-changing set of risks and rewards. The writers of the malicious AI report note that one of their key motivations is ensuring that the benefits of new technology can be delivered to people as quickly, but as safely, as possible. In the rush to exploit the potential for algorithms and create 21st-century infrastructure, we must ensure we’re not building in new dangers.

Image Credit: lolloj / Shutterstock.com

#431427 Why the Best Healthcare Hacks Are the ...

Technology has the potential to solve some of our most intractable healthcare problems. In fact, it’s already doing so, with inventions bringing us closer to a medical tricorder, progress toward 3D-printed organs, and AIs that can do point-of-care diagnosis.
No doubt these applications of cutting-edge tech will continue to push the needle on progress in medicine, diagnosis, and treatment. But what if some of the healthcare hacks we need most aren’t high-tech at all?
According to Dr. Darshak Sanghavi, this is exactly the case. In a talk at Singularity University’s Exponential Medicine last week, Sanghavi told the audience, “We often think in extremely complex ways, but I think a lot of the improvements in health at scale can be done in an analog way.”
Sanghavi is the chief medical officer and senior vice president of translation at OptumLabs, and was previously director of preventive and population health at the Center for Medicare and Medicaid Innovation, where he oversaw the development of large pilot programs aimed at improving healthcare costs and quality.
“How can we improve health at scale, not for only a small number of people, but for entire populations?” Sanghavi asked. With programs that benefit a small group of people, he explained, what tends to happen is that the average health of a population improves, but the disparities across the group worsen.
“My mantra became, ‘The denominator is everybody,’” he said. He shared details of some low-tech but crucial fixes he believes could vastly benefit the US healthcare system.
1. Regulatory Hacking
Healthcare regulations are ultimately what drive many aspects of patient care, for better or worse. Worse because the mind-boggling complexity of regulations (exhibit A: the Affordable Care Act is reportedly about 20,000 pages long) can make it hard for people to get the care they need at a cost they can afford, but better because, as Sanghavi explained, tweaking these regulations in the right way can result in across-the-board improvements in a given population’s health.
An adjustment to Medicare hospitalization rules makes for a relevant example. The code was updated to state that if people who left the hospital were re-admitted within 30 days, that hospital had to pay a penalty. The result was hospitals taking more care to ensure patients were released not only in good health, but also with a solid understanding of what they had to do to take care of themselves going forward. “Here, arguably the writing of a few lines of regulatory code resulted in a remarkable decrease in 30-day re-admissions, and the savings of several billion dollars,” Sanghavi said.
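As a sketch of how little "regulatory code" such a rule really takes, the whole penalty fits in one conditional. The 30-day window matches the policy described above; the dollar figure and function name are purely illustrative, not actual CMS parameters.

```python
from datetime import date
from typing import Optional

READMIT_WINDOW_DAYS = 30      # the policy's window
PENALTY_PER_CASE = 15_000.0   # illustrative figure, not an actual CMS penalty

def readmission_penalty(discharge: date, readmission: Optional[date]) -> float:
    """Hospital pays a penalty if the patient returns within the window."""
    if readmission is None:
        return 0.0
    within_window = (readmission - discharge).days <= READMIT_WINDOW_DAYS
    return PENALTY_PER_CASE if within_window else 0.0

print(readmission_penalty(date(2018, 1, 1), date(2018, 1, 20)))  # 15000.0
print(readmission_penalty(date(2018, 1, 1), date(2018, 3, 1)))   # 0.0
print(readmission_penalty(date(2018, 1, 1), None))               # 0.0
```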
2. Long-Term Focus
It’s easy to focus on healthcare hacks that have immediate, visible results—but what about fixes whose benefits take years to manifest? How can we motivate hospitals, regulators, and doctors to take action when they know they won’t see changes anytime soon?
“I call this the reality TV problem,” Sanghavi said. “Reality shows don’t really care about who’s the most talented recording artist—they care about getting the most viewers. That is exactly how we think about health care.”
Sanghavi’s team wanted to address this problem for heart attacks. They found they could reliably determine someone’s 10-year risk of having a heart attack based on a simple risk profile. Rather than monitoring patients’ cholesterol, blood pressure, weight, and other individual factors, the team took the average 10-year risk across entire provider panels, then made providers responsible for controlling those populations.
“Every percentage point you lower that risk, by hook or by crook, you get some people to stop smoking, you get some people on cholesterol medication. It’s patient-centered decision-making, and the provider then makes money. This is the world’s first predictive analytic model, at scale, that’s actually being paid for at scale,” he said.
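The panel-level accounting Sanghavi describes can be sketched simply: average the 10-year risk across a provider's whole panel, then pay per percentage point of reduction, however it was achieved. All risk values and the payment rate below are hypothetical.

```python
# Hypothetical panel: each patient's 10-year heart-attack risk (0 to 1)
baseline_panel = [0.08, 0.22, 0.15, 0.31, 0.05, 0.19]
followup_panel = [0.07, 0.18, 0.15, 0.25, 0.05, 0.16]  # after interventions

PAYMENT_PER_POINT = 10_000.0  # illustrative $ per percentage-point drop

def mean_risk(panel: list[float]) -> float:
    """The provider is responsible for the panel average, not individuals."""
    return sum(panel) / len(panel)

def provider_payment(before: list[float], after: list[float]) -> float:
    """Pay per percentage point the panel-average risk falls ('by hook or by crook')."""
    points_reduced = (mean_risk(before) - mean_risk(after)) * 100
    return max(points_reduced, 0.0) * PAYMENT_PER_POINT

print(round(mean_risk(baseline_panel), 4))
print(round(provider_payment(baseline_panel, followup_panel), 2))
```

The provider is free to hit the target with smoking cessation for one patient and statins for another; only the panel average matters for payment.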
3. Aligned Incentives
If hospitals are held accountable for the health of the communities they’re based in, those hospitals need to have the right incentives to follow through. “Hospitals have to spend money on community benefit, but linking that benefit to a meaningful population health metric can catalyze significant improvements,” Sanghavi said.
Darshak Sanghavi speaking at Singularity University’s 2017 Exponential Medicine Summit in San Diego, CA.
He used smoking cessation as an example. His team designed a program where hospitals were given a score (determined by the Centers for Disease Control and Prevention) based on the smoking rate in the counties where they’re located, then given monetary incentives to improve their score. Improving their score, in turn, resulted in better health for their communities, which meant fewer patients to treat for smoking-related health problems.
4. Social Determinants of Health
Social determinants of health include factors like housing, income, family, and food security. The answer to getting people to pay attention to these factors at scale, and creating aligned incentives, Sanghavi said, is “Very simple. We just have to measure it to start with, and measure it universally.”
His team was behind a $157 million pilot program called Accountable Health Communities that went live this year. The program requires that all Medicare and Medicaid beneficiaries be screened for various social determinants of health. With all that data being collected, analysts can pinpoint local trends, then target funds to address the underlying problem, whether it’s job training, drug use, or nutritional education. “You’re then free to invest the dollars where they’re needed…this is how we can improve health at scale, with very simple changes in the incentive structures that are created,” he said.
5. ‘Securitizing’ Public Health
Sanghavi’s final point tied back to his discussion of aligning incentives. As misguided as it may seem, the reality is that financial incentives can make a huge difference in healthcare outcomes, from both a patient and a provider perspective.
Sanghavi’s team did an experiment in which they created outcome benchmarks for three major health problems that exist across geographically diverse areas: smoking, adolescent pregnancy, and binge drinking. The team proposed measuring the baseline of these issues then creating what they called a social impact bond. If communities were able to lower their frequency of these conditions by a given percent within a stated period of time, they’d get paid for it.
“What that did was essentially say, ‘you have a buyer for this outcome if you can achieve it,’” Sanghavi said. “And you can try to get there in any way you like.” The program is currently in CMS clearance.
AI and Robots Not Required
Using robots to perform surgery and artificial intelligence to diagnose disease will undoubtedly benefit doctors and patients around the US and the world. But Sanghavi’s talk made it clear that our healthcare system needs much more than this, and that improving population health on a large scale is really a low-tech project—one involving more regulatory and financial innovation than technological innovation.
“The things that get measured are the things that get changed,” he said. “If we choose the right outcomes to predict long-term benefit, and we pay for those outcomes, that’s the way to make progress.”
Image Credit: Wonderful Nature / Shutterstock.com
