#434637 AI Is Rapidly Augmenting Healthcare and ...

When it comes to the future of healthcare, perhaps the only technology more powerful than CRISPR is artificial intelligence.

Over the past five years, healthcare AI startups around the globe raised over $4.3 billion across 576 deals, topping all other industries in AI deal activity.

During this same period, the FDA has given 70 AI healthcare tools and devices ‘fast-tracked approval’ because of their ability to save both lives and money.

The pace of AI-augmented healthcare innovation is only accelerating.

In Part 3 of this blog series on longevity and vitality, I cover the different ways in which AI is augmenting our healthcare system, enabling us to live longer and healthier lives.

In this blog, I’ll expand on:

Machine learning and drug design
Artificial intelligence and big data in medicine
Healthcare, AI & China

Let’s dive in.

Machine Learning in Drug Design
What if AI systems, specifically neural networks, could predict the design of novel molecules (i.e. medicines) capable of targeting and curing any disease?

Imagine leveraging cutting-edge artificial intelligence to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000.

And what if these molecules, accurately engineered by AIs, always worked? Such a feat would revolutionize our $1.3 trillion global pharmaceutical industry, which currently holds a dismal record of 1 in 10 target drugs ever reaching human trials.

It’s no wonder that drug development is massively expensive and slow. It takes over 10 years to bring a new drug to market, with costs ranging from $2.5 billion to $12 billion.

This inefficient, slow-to-innovate, and risk-averse industry is a sitting duck for disruption in the years ahead.

One of the hottest startups in digital drug discovery today is Insilico Medicine. Leveraging AI in its end-to-end drug discovery pipeline, Insilico Medicine aims to extend healthy longevity through drug discovery and aging research.

Their comprehensive drug discovery engine uses millions of samples and multiple data types to discover signatures of disease, identify the most promising protein targets, and generate perfect molecules for these targets. These molecules either already exist or can be generated de novo with the desired set of parameters.

In late 2018, Insilico’s CEO Dr. Alex Zhavoronkov announced the groundbreaking result of generating novel molecules for a challenging protein target with an unprecedented hit rate in under 46 days. This included both synthesis of the molecules and experimental validation in a biological test system—an impressive feat made possible by converging exponential technologies.

Underpinning Insilico’s drug discovery pipeline is a novel machine learning technique called Generative Adversarial Networks (GANs), used in combination with deep reinforcement learning.
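
To make the technique concrete, here is a minimal, illustrative GAN training loop in PyTorch. This is a toy sketch, not Insilico's actual pipeline: it treats molecules as fixed-length fingerprint vectors (a common simplification), omits the reinforcement learning stage entirely, and uses random placeholder data.

```python
# Toy molecular GAN: a generator proposes fingerprint vectors, a discriminator
# learns to tell them apart from fingerprints of real, drug-like molecules.
import torch
import torch.nn as nn

FINGERPRINT_DIM = 2048  # e.g., an ECFP-style binary fingerprint (placeholder)
LATENT_DIM = 128

# Generator: maps random noise to a candidate molecular fingerprint.
generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 512), nn.ReLU(),
    nn.Linear(512, FINGERPRINT_DIM), nn.Sigmoid(),
)

# Discriminator: scores how plausible a candidate fingerprint looks.
discriminator = nn.Sequential(
    nn.Linear(FINGERPRINT_DIM, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_fingerprints):
    batch = real_fingerprints.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to separate real from generated molecules.
    fake = generator(torch.randn(batch, LATENT_DIM)).detach()
    loss_d = bce(discriminator(real_fingerprints), real_labels) + \
             bce(discriminator(fake), fake_labels)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # 2) Train the generator to produce fingerprints that fool the discriminator.
    loss_g = bce(discriminator(generator(torch.randn(batch, LATENT_DIM))), real_labels)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

# Random binary vectors stand in for fingerprints of known molecules.
print(train_step(torch.rand(64, FINGERPRINT_DIM).round()))
```

In a real pipeline, the reinforcement learning component would then reward generated molecules for desired properties such as binding affinity and synthesizability; it is omitted here for brevity.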

Generating novel molecular structures for diseases both with and without known targets, Insilico is now pursuing drug discovery in aging, cancer, fibrosis, Parkinson’s disease, Alzheimer’s disease, ALS, diabetes, and many others. Once rolled out, the implications will be profound.

Dr. Zhavoronkov’s ultimate goal is to develop a fully-automated Health-as-a-Service (HaaS) and Longevity-as-a-Service (LaaS) engine.

Once plugged into the services of companies from Alibaba to Alphabet, such an engine would enable personalized solutions for online users, helping them prevent diseases and maintain optimal health.

Insilico, alongside other companies tackling AI-powered drug discovery, truly represents the application of the 6 D’s. What was once a prohibitively expensive and human-intensive process is now rapidly becoming digitized, dematerialized, demonetized and, perhaps most importantly, democratized.

Companies like Insilico can now do with a fraction of the cost and personnel what the pharmaceutical industry can barely accomplish with thousands of employees and a hefty bill to foot.

As I discussed in my blog on ‘The Next Hundred-Billion-Dollar Opportunity,’ Google’s DeepMind has now turned its neural networks to healthcare, entering the digitized drug discovery arena.

In 2017, DeepMind achieved a phenomenal feat by matching the fidelity of medical experts in correctly diagnosing over 50 eye disorders.

And just a year later, DeepMind announced a new deep learning tool called AlphaFold. By predicting the elusive ways in which various proteins fold on the basis of their amino acid sequences, AlphaFold may soon have a tremendous impact in aiding drug discovery and fighting some of today’s most intractable diseases.

Artificial Intelligence and Data Crunching
AI is especially powerful in analyzing massive quantities of data to uncover patterns and insights that can save lives. Take WAVE, for instance. Every year, over 400,000 patients die prematurely in US hospitals as a result of heart attack or respiratory failure.

Yet these patients don’t die without leaving plenty of clues. Given information overload, however, human physicians and nurses alone have no way of processing and analyzing all necessary data in time to save these patients’ lives.

Enter WAVE, an algorithm that can process enough data to offer a six-hour early warning of patient deterioration.

Just last year, the FDA approved WAVE, an AI-based patient surveillance system, to predict and thereby prevent sudden death.

Another highly valuable yet difficult-to-parse mountain of medical data comprises the 2.5 million medical papers published each year.

It has long been physically impossible for a human physician to read—let alone remember—all of the relevant published data.

To counter this compounding conundrum, Johnson & Johnson is teaching IBM Watson to read and understand scientific papers that detail clinical trial outcomes.

Enriching Watson’s data sources, Apple is also partnering with IBM to provide access to health data from mobile apps.

One such Watson system contains 40 million documents, ingesting an average of 27,000 new documents per day, and providing insights for thousands of users.

After only one year, Watson’s successful diagnosis rate of lung cancer has reached 90 percent, compared to the 50 percent success rate of human doctors.

But what about the vast amount of unstructured medical patient data that populates today’s ancient medical system? This includes medical notes, prescriptions, audio interview transcripts, and pathology and radiology reports.

In late 2018, Amazon announced a new HIPAA-eligible machine learning service that digests and parses unstructured data into categories, such as patient diagnoses, treatments, dosages, symptoms and signs.
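
Here is a minimal sketch of what calling such a service looks like, assuming the announcement refers to Amazon Comprehend Medical, the HIPAA-eligible medical language processing service AWS unveiled in late 2018. The clinical note is invented, and the call requires valid AWS credentials.

```python
# Extract medical entities (conditions, medications, dosages) from free text.
import boto3

client = boto3.client("comprehendmedical", region_name="us-east-1")

note = ("Pt presents with shortness of breath. "
        "Started metformin 500 mg twice daily for type 2 diabetes.")

response = client.detect_entities_v2(Text=note)

# Each entity carries a category (MEDICATION, MEDICAL_CONDITION, ...), a
# confidence score, and attributes such as dosage, frequency, or route.
for entity in response["Entities"]:
    print(f"{entity['Category']:>20}  {entity['Text']}  ({entity['Score']:.2f})")
```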

Taha Kass-Hout, Amazon's senior leader in healthcare and artificial intelligence, told the Wall Street Journal that internal tests showed the software performing as well as or better than other published efforts.

On the heels of this announcement, Amazon confirmed it was teaming up with the Fred Hutchinson Cancer Research Center to evaluate “millions of clinical notes to extract and index medical conditions.”

Data has already driven extraordinary algorithmic success rates in other fields, and it is the healthcare industry's goldmine for future innovation.

Healthcare, AI & China
In 2017, the Chinese government published its ambitious national plan to become a global leader in AI research by 2030, with healthcare listed as one of four core research areas during the first wave of the plan.

Just a year earlier, China began centralizing healthcare data, tackling a major roadblock to developing longevity and healthcare technologies (particularly AI systems): scattered, dispersed, and unlabeled patient data.

Backed by the Chinese government, China’s largest tech companies—particularly Tencent—have now made strong entrances into healthcare.

Just recently, Tencent participated in a $154 million megaround for China-based healthcare AI unicorn iCarbonX.

Hoping to develop a complete digital representation of your biological self, iCarbonX has acquired numerous US personalized medicine startups.

Beyond its own Miying healthcare AI platform—aimed at assisting healthcare institutions with AI-driven cancer diagnostics—Tencent is quickly expanding into the drug discovery space, participating in two multimillion-dollar, US-based AI drug discovery deals just this year.

China's biggest, second-order move into the healthtech space comes through Tencent's WeChat. In a mere few years, 60 percent of the 38,000 medical institutions registered on WeChat have begun allowing patients to book appointments digitally through Tencent's mobile platform. At the same time, 2,000 Chinese hospitals accept WeChat payments.

Tencent has additionally partnered with the U.K.’s Babylon Health, a virtual healthcare assistant startup whose app now allows Chinese WeChat users to message their symptoms and receive immediate medical feedback.

Similarly, Alibaba’s healthtech focus started in 2016 when it released its cloud-based AI medical platform, ET Medical Brain, to augment healthcare processes through everything from diagnostics to intelligent scheduling.

Conclusion
As Nvidia CEO Jensen Huang has stated, “Software is eating the world, but AI is going to eat software.” Extrapolating this statement to a more immediate implication, AI will first eat healthcare, resulting in a dramatic acceleration of longevity research and an amplification of the human healthspan.

Next week, I’ll continue to explore this concept of AI systems in healthcare.

Particularly, I’ll expand on how we’re acquiring and using the data for these doctor-augmenting AI systems: from ubiquitous biosensors, to the mobile healthcare revolution, and finally, to the transformative power of the health nucleus.

As AI and other exponential technologies increase our healthspan by 30 to 40 years, how will you leverage these same exponential technologies to take on your moonshots and live out your massively transformative purpose?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Zapp2Photo / Shutterstock.com

#434616 What Games Are Humans Still Better at ...

Artificial intelligence (AI) systems' rapid advances are continually crossing items off the list of things humans do better than our computer compatriots.

AI has bested us at board games like chess and Go, and set astronomically high scores in classic computer games like Ms. Pac-Man. More complex games form part of AI's next frontier.

While a team of AI bots developed by OpenAI, known as the OpenAI Five, ultimately lost to a team of professional players last year, they have since been running rampant against human opponents in Dota 2. Not to be outdone, Google’s DeepMind AI recently took on—and beat—several professional players at StarCraft II.

These victories raise the questions: what games are humans still better at than AI? And for how long?

The Making of AlphaStar
DeepMind's results provide a good starting point in the search for answers. The version of its AI for StarCraft II, dubbed AlphaStar, learned to play the game through supervised learning and reinforcement learning.

First, AI agents were trained by analyzing and copying human players, learning basic strategies. The initial agents then played each other in a sort of virtual death match where the strongest agents stayed on. New iterations of the agents were developed and entered the competition. Over time, the agents became better and better at the game, learning new strategies and tactics along the way.
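
As a rough illustration of this league-style process, consider the toy self-play loop below (emphatically not DeepMind's code). A single "skill" number stands in for each agent's neural network, and a weighted coin flip stands in for a full StarCraft II match.

```python
# Toy agent league: play everyone against everyone, keep the strongest,
# and refill the league with mutated copies of the survivors.
import random

class Agent:
    def __init__(self, name, skill):
        self.name, self.skill = name, skill

def play_match(a, b):
    # Stronger agents win more often; this replaces an actual game.
    return a if random.random() < a.skill / (a.skill + b.skill) else b

league = [Agent(f"agent_{i}", random.uniform(0.4, 0.6)) for i in range(8)]

for generation in range(50):
    wins = {agent.name: 0 for agent in league}
    for i, a in enumerate(league):
        for b in league[i + 1:]:
            wins[play_match(a, b).name] += 1

    # The strongest agents stay on; new iterations enter the competition.
    league.sort(key=lambda agent: wins[agent.name], reverse=True)
    survivors = league[:4]
    challengers = [Agent(f"gen{generation}_{i}",
                         max(0.01, s.skill + random.gauss(0, 0.05)))
                   for i, s in enumerate(survivors)]
    league = survivors + challengers

print("best skill after 50 generations:", round(max(a.skill for a in league), 2))
```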

One of the advantages of AI is that it can go through this kind of process at superspeed and quickly develop better agents. DeepMind researchers estimate that the AlphaStar agents went through the equivalent of roughly 200 years of game time in about 14 days.

Cheating or One Hand Behind the Back?
The AlphaStar AI agents faced off against human professional players in a series of games streamed on YouTube and Twitch. The AIs trounced their human opponents, winning ten games on the trot, before pro player Grzegorz “MaNa” Komincz managed to salvage some pride for humanity by winning the final game. Experts commenting on AlphaStar’s performance used words like “phenomenal” and “superhuman”—which was, to a degree, where things got a bit problematic.

AlphaStar proved particularly skilled at controlling and directing units in battle, known as micromanagement. One reason was that it viewed the whole game map at once—something a human player is not able to do—seemingly allowing it to control units in different areas at the same time. DeepMind researchers said the AIs only focused on a single part of the map at any given time, but interestingly, AlphaStar's agent was limited to a more restricted camera view during the match “MaNa” won.

Potentially offsetting some of this advantage was the fact that AlphaStar was also restricted in certain ways. For example, it was prevented from performing more clicks per minute than a human player would be able to.

Where AIs Struggle
Games like StarCraft II and Dota 2 throw a lot of challenges at AIs. Complex game theory and strategy, operating with imperfect and incomplete information, undertaking multi-variable and long-term planning, making decisions in real time, navigating a large action space, and weighing a multitude of possible decisions at every point in time are just the tip of the iceberg. The AIs' performance in both games was impressive, but also highlighted some of the areas where they could be said to struggle.

In Dota 2 and StarCraft II, AI bots have seemed more vulnerable in longer games, or when confronted with surprising, unfamiliar strategies. They seem to struggle with complexity over time and with improvising or adapting to quick changes. This could be tied to how AIs learn. Even within the first few hours of performing a task, humans tend to gain a sense of familiarity and skill that takes an AI much longer to acquire. We are also better at transferring skill from one area to another. In other words, experience playing Dota 2 can help us become good at StarCraft II relatively quickly. This is not the case for AI—yet.

Dwindling Superiority
While the battle between AI and humans for absolute superiority is still on in Dota 2 and StarCraft II, it looks likely that AI will soon reign supreme. Similar things are happening to other types of games.

In 2017, a team from Carnegie Mellon University pitted its Libratus AI against four professionals. After 20 days of No Limit Texas Hold’em, Libratus was up by $1.7 million. Another likely candidate is the destroyer of family harmony at Christmas: Monopoly.

Poker involves bluffing, while Monopoly involves negotiation—skills you might not think AI would be particularly suited to handle. However, an AI experiment at Facebook showed that AI bots are more than capable of undertaking such tasks. The bots proved skilled negotiators, developing strategies like feigning interest in one object while actually being interested in another altogether—in other words, bluffing.

So, what games are we still better at than AI? There is no precise answer, but the list is getting shorter at a rapid pace.

The Aim of the Game
While AI's mastery of games might at first glance seem an odd area to focus research on, the belief is that the way AIs learn to master a game is transferable to other areas.

For example, the Libratus poker-playing AI employed strategies that could work in financial trading or political negotiations. The same applies to AlphaStar. As Oriol Vinyals, co-leader of the AlphaStar project, told The Verge:

“First and foremost, the mission at DeepMind is to build an artificial general intelligence. […] To do so, it’s important to benchmark how our agents perform on a wide variety of tasks.”

A 2017 survey of more than 350 AI researchers predicts AI could be a better driver than humans within ten years. By the middle of the century, the survey suggests, AI will be able to write a best-selling novel, and a few years later it will be better than humans at surgery. By the year 2060, AI may do everything better than us.

Whether you think this is a good or a bad thing, it’s worth noting that AI has an often overlooked ability to help us see things differently. When DeepMind’s AlphaGo beat human Go champion Lee Sedol, the Go community learned from it, too. Lee himself went on a win streak after the match with AlphaGo. The same is now happening within the Dota 2 and StarCraft II communities that are studying the human vs. AI games intensely.

More than anything, AI's recent gaming triumphs illustrate how quickly artificial intelligence is developing. In 1997, Dr. Piet Hut, an astrophysicist at the Institute for Advanced Study at Princeton and a Go enthusiast, told the New York Times:

“It may be a hundred years before a computer beats humans at Go—maybe even longer.”

Image Credit: Roman Kosolapov / Shutterstock.com

#434559 Can AI Tell the Difference Between a ...

Scarcely a day goes by without another headline about neural networks: some new task that deep learning algorithms can excel at, approaching or even surpassing human competence. As the application of this approach to computer vision has continued to improve, with algorithms capable of specialized recognition tasks like those found in medicine, the software is getting closer to widespread commercial use—for example, in self-driving cars. Our ability to recognize patterns is a huge part of human intelligence: if this can be done faster by machines, the consequences will be profound.

Yet, as ever with algorithms, there are deep concerns about their reliability, especially when we don't know precisely how they work. State-of-the-art neural networks will confidently—and incorrectly—classify images that look like television static or abstract art as real-world objects like school buses or armadillos. Specific algorithms could be targeted by “adversarial examples,” where adding an imperceptible amount of noise to an image can cause an algorithm to completely mistake one object for another. Machine learning experts enjoy constructing these images to trick advanced software, but if a self-driving car could be fooled by a few stickers, it might not be so fun for the passengers.

These difficulties are hard to smooth out in large part because we don't have a great intuition for how these neural networks “see” and “recognize” objects. The main insight we can extract from a trained network itself is a series of statistical weights associating certain groups of points with certain objects: these can be very difficult to interpret.

Now, new research from UCLA, published in the journal PLOS Computational Biology, is testing neural networks to understand the limits of their vision and the differences between computer vision and human vision. Nicholas Baker, Hongjing Lu, and Philip J. Kellman of UCLA, alongside Gennady Erlikhman of the University of Nevada, tested a deep convolutional neural network called VGG-19. This is state-of-the-art technology that is already outperforming humans on standardized tests like the ImageNet Large Scale Visual Recognition Challenge.

They found that, while humans tend to classify objects based on their overall (global) shape, deep neural networks are far more sensitive to the textures of objects, including local color gradients and the distribution of points on the object. This result helps explain why neural networks in image recognition make mistakes that no human ever would—and could allow for better designs in the future.

In the first experiment, a neural network was trained to sort images into 1 of 1,000 different categories. It was then presented with silhouettes of these images: all of the local information was lost, while only the outline of the object remained. Ordinarily, the trained neural net was capable of recognizing these objects, assigning more than 90% probability to the correct classification. When presented with silhouettes, this dropped to 10%. While human observers could nearly always produce correct shape labels, the neural networks appeared almost insensitive to the overall shape of the images. On average, the correct object was ranked as the 209th most likely solution by the neural network, even though the overall shapes were an exact match.
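
For readers who want to poke at this behavior themselves, here is a minimal sketch of probing a pretrained VGG-19 with a silhouette image using PyTorch and torchvision. It follows the spirit of the experiment rather than the authors' actual code; the file name and the example ImageNet class index are placeholders.

```python
# Classify a silhouette with an ImageNet-pretrained VGG-19 and check where
# the correct class lands in the ranking.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.vgg19(pretrained=True).eval()

# Standard ImageNet preprocessing.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("silhouette.png").convert("RGB")  # any outline image
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():
    probs = torch.softmax(model(batch), dim=1)[0]

top_probs, top_classes = probs.topk(5)
print("top-5 class indices:", top_classes.tolist())

# Where does the true class rank? (294 is ImageNet's brown bear, as an example.)
ranking = probs.argsort(descending=True)
print("rank of class 294:", (ranking == 294).nonzero().item() + 1)
```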

A particularly striking example arose when they tried to get the neural networks to classify glass figurines of objects they could already recognize. While you or I might find it easy to identify a glass model of an otter or a polar bear, the neural network classified them as “oxygen mask” and “can opener” respectively. By presenting glass figurines, where the texture information that neural networks relied on for classifying objects is lost, the neural network was unable to recognize the objects by shape alone. The neural network was similarly hopeless at classifying objects based on drawings of their outline.

If you got one of these right, you're better than state-of-the-art image recognition software. Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
When the neural network was explicitly trained to recognize object silhouettes—given no information in the training data aside from the object outlines—the researchers found that slight distortions or “ripples” to the contour of the image were again enough to fool the AI, while humans paid them no mind.

The fact that neural networks seem to be insensitive to the overall shape of an object—relying instead on statistical similarities between local distributions of points—suggests a further experiment. What if you scrambled the images so that the overall shape was lost but local features were preserved? It turns out that the neural networks are far better and faster at recognizing scrambled versions of objects than outlines, even when humans struggle. Students could classify only 37% of the scrambled objects, while the neural network succeeded 83% of the time.

Humans vastly outperform machines at classifying object (a) as a bear, while the machine learning algorithm has few problems classifying the bear in figure (b). Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
“This study shows these systems get the right answer in the images they were trained on without considering shape,” Kellman said. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all.”

Naively, one might expect that—as the many layers of a neural network are modeled on connections between neurons in the brain and resemble the visual cortex specifically—the way computer vision operates must necessarily be similar to human vision. But this kind of research shows that, while the fundamental architecture might resemble that of the human brain, the resulting “mind” operates very differently.

Researchers can, increasingly, observe how the “neurons” in neural networks light up when exposed to stimuli and compare it to how biological systems respond to the same stimuli. Perhaps someday it might be possible to use these comparisons to understand how neural networks are “thinking” and how those responses differ from humans.

But, as yet, probing how neural networks and artificial intelligence algorithms perceive the world takes something closer to experimental psychology. The tests employed against the neural network resemble how scientists might try to understand the senses of an animal or the developing brain of a young child more than how they would analyze a piece of software.

By combining this experimental psychology with new neural network designs or error-correction techniques, it may be possible to make them even more reliable. Yet this research illustrates just how much we still don’t understand about the algorithms we’re creating and using: how they tick, how they make decisions, and how they’re different from us. As they play an ever-greater role in society, understanding the psychology of neural networks will be crucial if we want to use them wisely and effectively—and not end up missing the woods for the trees.

Image Credit: Irvan Pratama / Shutterstock.com

#434508 The Top Biotech and Medicine Advances to ...

2018 was bonkers for science.

From a woman who gave birth using a transplanted uterus, to the infamous CRISPR baby scandal, to forensics adopting consumer-based genealogy test kits to track down criminals, last year was a factory churning out scientific “whoa” stories with consequences for years to come.

With CRISPR still in the headlines, Britain ready to bid Europe au revoir, and multiple scientific endeavors taking off, 2019 is shaping up to be just as tumultuous.

Here are the science and health stories that may blow up in the new year. But first, a caveat: predicting the future is tough. Forecasting is the lovechild of statistics and (a good deal of) intuition, and entire disciplines have been dedicated to the endeavor. But January is the perfect time to gaze into the crystal ball for wisps of insight into the year to come. Last year we predicted the widespread approval of gene therapy products—for the most part, we nailed it. This year we're hedging our bets with multiple predictions.

Gene Drives Used in the Wild
The concept of gene drives scares many, for good reason. Gene drives are a step up in severity (and consequences) from CRISPR and other gene-editing tools. Even with germline editing, in which the sperm, egg, or embryos are altered, gene editing affects just one genetic line—one family—at least at the beginning, before they reproduce with the general population.

Gene drives, on the other hand, have the power to wipe out entire species.

In a nutshell, they're little bits of DNA code that help a gene transfer from parent to child with almost 100 percent probability. The “half of your DNA comes from dad, the other half comes from mom” dogma? Gene drives smash that to bits.
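
A toy simulation makes the difference vivid. In the hedged sketch below, ordinary inheritance transmits a gene from a carrier parent with 50 percent probability, while a gene drive transmits at roughly 98 percent; all numbers are illustrative, and real population genetics is considerably messier.

```python
# Compare how a gene spreads under Mendelian inheritance vs. a gene drive.
import random

def simulate(transmission_rate, population_size=10_000, generations=10):
    carrier_fraction = 0.01  # release a handful of engineered individuals
    history = [carrier_fraction]
    for _ in range(generations):
        carriers = 0
        for _ in range(population_size):
            # Each offspring has two parents drawn from the population; a
            # carrier parent passes the gene on with the given probability.
            mom = random.random() < carrier_fraction
            dad = random.random() < carrier_fraction
            if (mom and random.random() < transmission_rate) or \
               (dad and random.random() < transmission_rate):
                carriers += 1
        carrier_fraction = carriers / population_size
        history.append(carrier_fraction)
    return history

print("Mendelian (50%):", [f"{x:.0%}" for x in simulate(0.5)])
print("Gene drive (98%):", [f"{x:.0%}" for x in simulate(0.98)])
```

Under ordinary inheritance the carrier fraction hovers near its starting value, while the gene drive roughly doubles it every generation until it saturates the population.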

In other words, the only time one would consider using a gene drive is to change the genetic makeup of an entire population. It sounds like the plot of a supervillain movie, but scientists have been toying around with the idea of deploying the technology—first in mosquitoes, then (potentially) in rodents.

By releasing just a handful of mutant mosquitoes that carry gene drives for infertility, for example, scientists could potentially wipe out entire populations that carry infectious scourges like malaria, dengue, or Zika. The technology is so potent—and dangerous—that the US Defense Advanced Research Projects Agency (DARPA) is shelling out $65 million to suss out how to deploy, control, counter, or even reverse the effects of tampering with ecology.

Last year, the U.N. gave a cautious go-ahead for the technology to be deployed in the wild on limited terms. Now, the first release of a genetically modified mosquito is set for testing in Burkina Faso in Africa—the first-ever field experiment leading up to the use of gene drives.

The experiment will only release mosquitoes in the Anopheles genus, which are the main culprits transferring disease. As a first step, over 10,000 male mosquitoes are set for release into the wild. These dudes are genetically sterile and carry no gene drive; they will help scientists examine how engineered mosquitoes survive and disperse, in preparation for deploying gene-drive-carrying mosquitoes.

Hot on the project's heels, the nonprofit consortium Target Malaria, backed by the Bill and Melinda Gates Foundation, is engineering a gene drive called Mosq that will spread infertility across the population or kill off all female insects. Their attempt to hack the rules of inheritance—and save millions of lives in the process—is slated for 2024.

A Universal Flu Vaccine
People often brush off the flu as a mere annoyance, but the infection kills hundreds of thousands of people each year, based on the CDC's statistical estimates.

The flu virus is actually as difficult a nemesis as HIV—it mutates at an extremely rapid rate, making effective vaccines almost impossible to engineer in time. Scientists currently use data to forecast the strains that will likely explode into an epidemic and urge the public to vaccinate against those predictions. That's partly why, on average, flu vaccines only have a success rate of roughly 50 percent—not much better than a coin toss.

Tired of relying on educated guesses, scientists have been chipping away at a universal flu vaccine that targets all strains—perhaps even those we haven’t yet identified. Often referred to as the “holy grail” in epidemiology, these vaccines try to alert our immune systems to parts of a flu virus that are least variable from strain to strain.

Last November, a first universal flu vaccine developed by BiondVax entered Phase 3 clinical trials, which means it has already been proven safe and effective in smaller trials and is now being tested in a broader population. The vaccine doesn't rely on dead viruses, a common technique. Rather, it uses a small chain of amino acids—the chemical components that make up proteins—to stimulate the immune system into high alert.

With the government pouring $160 million into the research and several other universal candidates entering clinical trials, universal flu vaccines may finally experience a breakthrough this year.

In-Body Gene Editing Shows Further Promise
CRISPR and other gene-editing tools dominated the news last year, including both downers suggesting we already have immunity to the technology and hopeful news of it getting ready to treat inherited muscle-wasting diseases.

But what wasn't widely broadcast was the in-body gene editing experiments that have been rolling out with gusto. Last September, Sangamo Therapeutics in Richmond, California, revealed that it had injected gene-editing enzymes into a patient in an effort to correct a genetic deficit that prevents him from breaking down complex sugars.

The effort is markedly different from the better-known CAR-T therapy, which extracts cells from the body for genetic engineering before returning them to the host. Rather, Sangamo's treatment directly injects viruses carrying the gene-editing machinery into the body. So far, the procedure looks to be safe, though at the time of reporting it was too early to determine effectiveness.

This year the company hopes to finally answer whether it really worked.

If successful, it means that devastating genetic disorders could potentially be treated with just a few injections. With a gamut of new and more precise CRISPR and other gene-editing tools in the works, the list of treatable inherited diseases is likely to grow. And with the CRISPR baby scandal potentially dampening efforts at germline editing via regulations, in-body gene editing will likely receive more attention if Sangamo’s results return positive.

Neuralink and Other Brain-Machine Interfaces
Neuralink is the stuff of sci-fi: tiny particles implanted into the brain could link up your biological wetware with silicon hardware and the internet.

But that’s exactly what Elon Musk’s company, founded in 2016, seeks to develop: brain-machine interfaces that could tinker with your neural circuits in an effort to treat diseases or even enhance your abilities.

Last November, Musk broke his silence on the secretive company, suggesting that he may announce something “interesting” in a few months, that’s “better than anyone thinks is possible.”

Musk’s aspiration for achieving symbiosis with artificial intelligence isn’t the driving force for all brain-machine interfaces (BMIs). In the clinics, the main push is to rehabilitate patients—those who suffer from paralysis, memory loss, or other nerve damage.

2019 may be the year that BMIs and neuromodulators cut the cord in the clinics. These devices may finally work autonomously within a malfunctioning brain, applying electrical stimulation only when necessary to reduce side effects without requiring external monitoring. Or they could allow scientists to control brains with light without needing bulky optical fibers.

Cutting the cord is just the first step to fine-tuning neurological treatments—or enhancements—to the tune of your own brain, and 2019 will keep on bringing the music.

Image Credit: angellodeco / Shutterstock.com

#434324 Big Brother Nation: The Case for ...

Powerful surveillance cameras have crept into public spaces. We are filmed and photographed hundreds of times a day. To further raise the stakes, the resulting video footage is fed to new forms of artificial intelligence software that can recognize faces in real time, read license plates, even instantly detect when a particular pre-defined action or activity takes place in front of a camera.

As most modern cities have quietly become surveillance cities, the law has been slow to catch up. While we wait for robust legal frameworks to emerge, the best way to protect our civil liberties right now is to fight technology with technology. All cities should place local surveillance video into a public cloud-based data trust. Here’s how it would work.

In Public Data We Trust
To democratize surveillance, every city should implement three simple rules. First, anyone who aims a camera at public space must upload that day's haul of raw video files (and associated camera metadata) into a cloud-based repository. Second, this cloud-based repository must have open APIs and a publicly accessible log file that records search histories and tracks who has accessed which video files. And third, everyone in the city should be given the same level of access rights to the stored video data—no exceptions.

This kind of public data repository is called a “data trust.” Public data trusts are not just wishful thinking. Different types of trusts are already in successful use in Estonia and Barcelona, and have been proposed as the best way to store and manage the urban data that will be generated by Alphabet’s planned Sidewalk Labs project in Toronto.
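
As a thought experiment, here is a deliberately simplified, hypothetical sketch of such a trust's core contract: every search or download is itself appended to a publicly readable log. All names, fields, and file formats are invented for illustration.

```python
# A toy data trust: the access log is as public as the video it indexes.
import datetime
import json

ACCESS_LOG = "access_log.jsonl"  # append-only and readable by everyone

def log_access(user_id, action, target):
    entry = {
        "timestamp": datetime.datetime.utcnow().isoformat() + "Z",
        "user": user_id,   # everyone has the same access rights, no exceptions
        "action": action,  # e.g. "search" or "download"
        "target": target,  # camera ID, face query, or license plate query
    }
    with open(ACCESS_LOG, "a") as f:
        f.write(json.dumps(entry) + "\n")

def search_videos(user_id, query):
    log_access(user_id, "search", query)
    # ... query the repository; every search leaves a public trace ...

def who_searched_for(target):
    # Anyone can ask: who has been looking for this face, plate, or name?
    with open(ACCESS_LOG) as f:
        return [json.loads(line) for line in f
                if json.loads(line)["target"] == target]

search_videos("citizen_42", "plate:ABC-1234")
print(who_searched_for("plate:ABC-1234"))
```

Because the log itself is public, clandestine searches become self-documenting, a property the argument below relies on.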

It's true that few people relish the thought of public video footage of themselves being looked at by strangers and friends, by ex-spouses, potential employers, divorce attorneys, and future romantic prospects. In fact, when I propose this notion in talks about smart cities, most people recoil in horror. Some turn red in the face and jeer at my naiveté. Others merely blink quietly in consternation.

The reason we should take this giant step towards extreme transparency is to combat the secrecy that surrounds surveillance. Openness is a powerful antidote to oppression. Edward Snowden summed it up well when he said, “Surveillance is not about public safety, it’s about power. It’s about control.”

Let Us Watch Those Watching Us
If public surveillance video were put back into the hands of the people, citizens could watch their government as it watches them. Right now, government cameras are controlled by the state. Camera locations are kept secret, and only the agencies that control the cameras get to see the footage they generate.

Because of these information asymmetries, civilians have no insight into the size and shape of the modern urban surveillance infrastructure that surrounds us, nor the uses (or abuses) of the video footage it spawns. For example, there is no swift and efficient mechanism to request a copy of video footage from the cameras that dot our downtown. Nor can we ask our city’s police force to show us a map that documents local traffic camera locations.

By exposing all public surveillance videos to the public gaze, cities could give regular people tools to assess the size, shape, and density of their local surveillance infrastructure and neighborhood “digital dragnet.” Using the metadata that's wrapped around video footage, citizens could geo-locate individual cameras onto a digital map to generate surveillance “heat maps.” This way people could assess whether their city's camera density was higher in certain zip codes, or in neighborhoods populated by a dominant ethnic group.
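
The sketch below illustrates the idea with invented camera metadata: bucketing coordinates into rough grid cells yields a crude density map of the kind described.

```python
# Bucket camera locations into ~1 km grid cells to expose surveillance density.
from collections import Counter

cameras = [  # invented metadata; a real trust would expose thousands of records
    {"camera_id": "cam-001", "lat": 40.7128, "lon": -74.0060},
    {"camera_id": "cam-002", "lat": 40.7130, "lon": -74.0070},
    {"camera_id": "cam-003", "lat": 40.6892, "lon": -74.0445},
]

def grid_cell(cam, cell_size=0.01):
    # Round coordinates to the nearest grid cell (~1 km at this latitude).
    return (round(cam["lat"] / cell_size) * cell_size,
            round(cam["lon"] / cell_size) * cell_size)

density = Counter(grid_cell(cam) for cam in cameras)
for cell, count in density.most_common():
    print(f"cell {cell}: {count} camera(s)")
```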

Surveillance heat maps could be used to document which government agencies were refusing to upload their video files, or which neighborhoods were not under surveillance. Given what we already know today about the correlation between camera density, income, and social status, these “dark” camera-free regions would likely be those located near government agencies and in more affluent parts of a city.

Extreme transparency would democratize surveillance. Every city’s data trust would keep a publicly-accessible log of who’s searching for what, and whom. People could use their local data trust’s search history to check whether anyone was searching for their name, face, or license plate. As a result, clandestine spying on—and stalking of—particular individuals would become difficult to hide and simpler to prove.

Protect the Vulnerable and Exonerate the Falsely Accused
Not all surveillance video automatically works against the underdog. As the bungled (and consequently no longer secret) assassination of journalist Jamal Khashoggi demonstrated, one of the unexpected upsides of surveillance cameras has been the fact that even kings become accountable for their crimes. If opened up to the public, surveillance cameras could serve as witnesses to justice.

Video evidence has the power to protect vulnerable individuals and social groups by shedding light onto messy, unreliable (and frequently conflicting) human narratives of who did what to whom, and why. With access to a data trust, a person falsely accused of a crime could prove their innocence. By searching for their own face in video footage or downloading time/date stamped footage from a particular camera, a potential suspect could document their physical absence from the scene of a crime—no lengthy police investigation or high-priced attorney needed.

Given Enough Eyeballs, All Crimes Are Shallow
Placing public surveillance video into a public trust could make cities safer and would streamline routine police work. Linus Torvalds, the developer of the open-source operating system Linux, famously observed that “given enough eyeballs, all bugs are shallow.” In the case of public cameras and a common data repository, Torvalds' Law could be restated as “given enough eyeballs, all crimes are shallow.”

If thousands of citizen eyeballs were given access to a city’s public surveillance videos, local police forces could crowdsource the work of solving crimes and searching for missing persons. Unfortunately, at the present time, cities are unable to wring any social benefit from video footage of public spaces. The most formidable barrier is not government-imposed secrecy, but the fact that as cameras and computers have grown cheaper, a large and fast-growing “mom and pop” surveillance state has taken over most of the filming of public spaces.

While we fear spooky government surveillance, the reality is that we’re much more likely to be filmed by security cameras owned by shopkeepers, landlords, medical offices, hotels, homeowners, and schools. These businesses, organizations, and individuals install cameras in public areas for practical reasons—to reduce their insurance costs, to prevent lawsuits, or to combat shoplifting. In the absence of regulations governing their use, private camera owners store video footage in a wide variety of locations, for varying retention periods.

The unfortunate (and unintended) result of this informal and decentralized network of public surveillance is that video files are not easy to access, even for police officers on official business. After a crime or terrorist attack occurs, local police (or attorneys armed with a subpoena) go from door to door to manually collect video evidence. Once they have the videos in hand, their next challenge is finding the right codec to crack the dozens of different file formats they encounter so they can watch and analyze the footage.

The result of these practical barriers is that, as it stands today, only people with considerable legal or political clout are able to successfully gain access to a city's privately-owned, ad-hoc collections of public surveillance videos. Not only are cities missing the opportunity to streamline routine evidence-gathering police work, they're also missing a radically transformative benefit that would become possible once video footage from thousands of different security cameras was pooled into a single repository: the ability to apply the power of citizen eyeballs to the work of improving public safety.

Why We Need Extreme Transparency
When regular people can’t access their own surveillance videos, there can be no data justice. While we wait for the law to catch up with the reality of modern urban life, citizens and city governments should use technology to address the problem that lies at the heart of surveillance: a power imbalance between those who control the cameras and those who don’t.

Cities should permit individuals and organizations to install and deploy as many public-facing cameras as they wish, but with the mandate that camera owners must place all resulting video footage into the mercilessly bright sunshine of an open data trust. This way, cloud computing, open APIs, and artificial intelligence software can help combat abuses of surveillance and give citizens insight into who’s filming us, where, and why.

Image Credit: VladFotoMag / Shutterstock.com
