Tag Archives: language

#436977 The Top 100 AI Startups Out There Now, ...

New drug therapies for a range of chronic diseases. Defenses against various cyber attacks. Technologies to make cities work smarter. Weather and wildfire forecasts that boost safety and reduce risk. And commercial efforts to monetize so-called deepfakes.

What do all these disparate efforts have in common? They’re some of the solutions that the world’s most promising artificial intelligence startups are pursuing.

Data research firm CB Insights released its much-anticipated fourth annual list of the top 100 AI startups earlier this month. The New York-based company has become one of the go-to sources for emerging technology trends, especially in the startup scene.

About 10 years ago, it developed its own algorithm to assess the health of private companies using publicly-available information and non-traditional signals (think social media sentiment, for example) thanks to more than $1 million in grants from the National Science Foundation.

It distills that algorithm-generated data into what it calls a company’s Mosaic score—pulling together signals on market trends, money, and momentum—and combines it with other details ranging from patent activity to the latest news analysis to identify the best of the best.
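
CB Insights hasn’t published the Mosaic formula, but the general shape of a composite score like this, weighted signals for market, money, and momentum rolled into a single number, can be sketched in a few lines. The signal names and weights below are invented purely for illustration; this is not the firm’s actual methodology.

# Purely illustrative composite scoring sketch; NOT CB Insights' actual
# Mosaic algorithm. The signal names and weights are made up.

def composite_score(signals, weights=None):
    """Combine normalized 0-to-1 signals into a single 0-to-1000 score."""
    weights = weights or {"market": 0.4, "money": 0.35, "momentum": 0.25}
    raw = sum(weights[key] * signals.get(key, 0.0) for key in weights)
    return round(1000 * raw)

# Hypothetical startup with strong funding but modest news momentum.
print(composite_score({"market": 0.8, "money": 0.9, "momentum": 0.4}))  # 735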

“Our final list of companies is a mix of startups at various stages of R&D and product commercialization,” said Deepashri Varadharajan, a lead analyst at CB Insights, during a recent presentation on the most prominent trends among the 2020 AI 100 startups.

About 10 companies on the list are among the world’s most valuable AI startups. For instance, there’s San Francisco-based Faire, which has raised at least $266 million since it was founded just three years ago. The company offers a wholesale marketplace that uses machine learning to match local retailers with goods that are predicted to sell well in their specific location.

Image courtesy of CB Insights
Funding for AI in Healthcare
Another startup valued at more than $1 billion, referred to as a unicorn in venture capital speak, is Butterfly Network, a company on the East Coast that has figured out a way to turn a smartphone into an ultrasound machine. Backed by $350 million in private investments, Butterfly Network uses AI to power the platform’s diagnostics. A more modestly funded San Francisco startup called Eko is doing something similar for stethoscopes.

In fact, there are more than a dozen AI healthcare startups on this year’s AI 100 list, representing the most companies of any industry on the list. In total, investors poured about $4 billion into AI healthcare startups last year, according to CB Insights, out of a record $26.6 billion raised by all private AI companies in 2019. Since 2014, more than 4,300 AI startups in 80 countries have raised about $83 billion.

One of the most intensive areas remains drug discovery, where companies unleash algorithms to screen potential drug candidates at an unprecedented speed and breadth that was impossible just a few years ago. It has led to the discovery of a new antibiotic to fight superbugs. There’s even a chance AI could help fight the coronavirus pandemic.

There are several AI drug discovery startups among the AI 100: San Francisco-based Atomwise claims its deep convolutional neural network, AtomNet, screens more than 100 million compounds each day. Cyclica is an AI drug discovery company in Toronto that just announced it would apply its platform to identify and develop novel cannabinoid-inspired drugs for neuropsychiatric conditions such as bipolar disorder and anxiety.

And then there’s OWKIN out of New York City, a startup that uses a type of machine learning called federated learning. Backed by Google, the company’s AI platform trains algorithms across hospitals without the underlying patient data ever being shared, while still surfacing the insights researchers need for designing new drugs or selecting the right populations for clinical trials.
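
OWKIN hasn’t released its production code, but the core idea of federated learning is simple to sketch: each hospital fits a model on data that never leaves its premises, and only the fitted parameters are pooled centrally. Here is a minimal federated-averaging toy with synthetic data, offered as an illustration of the technique rather than a picture of OWKIN’s platform.

import numpy as np

# Minimal federated-averaging sketch: each "hospital" fits a local linear
# model and shares only its coefficients, never the patient-level data.
# Illustrative only; not OWKIN's actual platform.

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def local_fit(n_patients):
    X = rng.normal(size=(n_patients, 2))            # private patient features
    y = X @ true_w + 0.1 * rng.normal(size=n_patients)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)       # local least-squares fit
    return w                                        # only parameters leave the site

site_models = [local_fit(n) for n in (50, 80, 120)]
global_model = np.mean(site_models, axis=0)         # server averages the parameters
print(global_model)                                 # close to [2.0, -1.0]

A production system would weight each site by its sample size and iterate over many rounds, but the privacy logic is the same: parameters travel, patient records do not.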

Keeping Cyber Networks Healthy
Privacy and data security are the focus of a number of AI cybersecurity startups, as hackers attempt to leverage artificial intelligence to launch sophisticated attacks while also trying to fool the AI-powered systems rapidly coming online.

“I think this is an interesting field because it’s a bit of a cat and mouse game,” noted Varadharajan. “As your cyber defenses get smarter, your cyber attacks get even smarter, and so it’s a constant game of who’s going to match the other in terms of tech capabilities.”

Few AI cybersecurity startups match Silicon Valley-based SentinelOne in terms of private capital. The company has raised more than $400 million, with a valuation of $1.1 billion following a $200 million Series E earlier this year. The company’s platform automates what’s called endpoint security, referring to laptops, phones, and other devices at the “end” of a centralized network.

Fellow AI 100 cybersecurity companies include Blue Hexagon, which protects the “edge” of the network against malware, and Abnormal Security, which stops targeted email attacks, both out of San Francisco. Just down the coast in Los Angeles is Obsidian Security, a startup offering cybersecurity for cloud services.

Deepfakes Get a Friendly Makeover
Deepfake videos and other types of AI-manipulated media, in which faces or voices are synthesized in order to fool viewers or listeners, have been a different kind of ongoing cybersecurity risk. However, some firms are swapping malicious intent for benign marketing and entertainment purposes.

Now anyone can be a supermodel thanks to Superpersonal, a London-based AI startup that has figured out a way to seamlessly swap a user’s face onto a fashionista modeling the latest threads on the catwalk. The most obvious use case is for shoppers to see how they will look in a particular outfit before taking the plunge on a plunging neckline.

Another British company called Synthesia helps users create videos where a talking head will deliver a customized speech or even talk in a different language. The startup’s claim to fame was releasing a campaign video for the NGO Malaria Must Die showing soccer star David Beckham speak in nine different languages.

There’s also a Seattle-based company, WellSaid Labs, which uses AI to produce voice-over narration where users can choose from a library of digital voices with human pitch, emphasis, and intonation. Because every narrator sounds just a little bit smarter with a British accent.

AI Helps Make Smart Cities Smarter
Speaking of smarter: A handful of AI 100 startups are helping create the smart city of the future, where a digital web of sensors, devices, and cloud-based analytics ensure that nobody is ever stuck in traffic again or without an umbrella at the wrong time. At least that’s the dream.

A couple of them are directly connected to Google subsidiary Sidewalk Labs, which focuses on tech solutions to improve urban design. A company called Replica was spun out just last year. It’s sort of SimCity for urban planning. The San Francisco startup uses location data from mobile phones to understand how people behave and travel throughout a typical day in the city. Those insights can then help city governments, for example, make better decisions about infrastructure development.

Denver-area startup AMP Robotics gets into the nitty-gritty of recycling by training robots to sort recyclable trash, since humans have largely failed to do the job. The U.S. Environmental Protection Agency estimates that only about 30 percent of waste is recycled.

Some people might complain that weather forecasters don’t even do that well when trying to predict the weather. An Israeli AI startup, ClimaCell, claims it can forecast rain block by block. While the company taps the usual satellite and ground-based sources to create weather models, it has developed algorithms to analyze how precipitation and other conditions affect signals in cellular networks. By analyzing changes in microwave signals between cellular towers, the platform can predict the type and intensity of the precipitation down to street level.
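
ClimaCell’s models are proprietary, but the underlying physics is well established: rain attenuates microwave links roughly according to a power law, A = k·R^α (in dB per kilometer), so an observed dip in signal strength can be inverted into a rain-rate estimate. The coefficients below are placeholders rather than values for any real link or frequency.

# Illustrative inversion of the standard rain-attenuation power law
# A = k * R**alpha, where A is specific attenuation in dB/km and R is the
# rain rate in mm/h. k and alpha depend on frequency and polarization;
# the values here are placeholders, not ClimaCell's parameters.

def rain_rate_from_attenuation(extra_loss_db, link_km, k=0.1, alpha=1.1):
    specific_attenuation = extra_loss_db / link_km   # dB/km along the link
    return (specific_attenuation / k) ** (1 / alpha)

# A hypothetical 3 km tower-to-tower link losing an extra 2.4 dB in a shower:
print(f"{rain_rate_from_attenuation(2.4, 3.0):.1f} mm/h")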

And those are just some of the highlights of what some of the world’s most promising AI startups are doing.

“You have companies optimizing mining operations, warehouse logistics, insurance, workflows, and even working on bringing AI solutions to designing printed circuit boards,” Varadharajan said. “So a lot of creative ways in which companies are applying AI to solve different issues in different industries.”

Image Credit: Butterfly Network

Posted in Human Robots

#436911 Scientists Linked Artificial and ...

Scientists have linked up two silicon-based artificial neurons with a biological one across multiple countries into a fully-functional network. Using standard internet protocols, they established a chain of communication whereby an artificial neuron controls a living, biological one, and passes on the info to another artificial one.

Whoa.

We’ve talked plenty about brain-computer interfaces and novel computer chips that resemble the brain. We’ve covered how those “neuromorphic” chips could link up into tremendously powerful computing entities, using engineered communication nodes called artificial synapses.

As Moore’s law is dying, we even said that neuromorphic computing is one path towards the future of extremely powerful, low energy consumption artificial neural network-based computing—in hardware—that could in theory better link up with the brain. Because the chips “speak” the brain’s language, in theory they could become neuroprosthesis hubs far more advanced and “natural” than anything currently possible.

This month, an international team put all of those ingredients together, turning theory into reality.

The three labs, scattered across Padova, Italy, Zurich, Switzerland, and Southampton, England, collaborated to create a fully self-controlled, hybrid artificial-biological neural network that communicated using biological principles, but over the internet.

The three-neuron network, linked through artificial synapses that emulate the real thing, was able to reproduce a classic neuroscience experiment that’s considered the basis of learning and memory in the brain. In other words, artificial neuron and synapse “chips” have progressed to the point where they can actually use a biological neuron intermediary to form a circuit that, at least partially, behaves like the real thing.

That’s not to say cyborg brains are coming soon. The simulation only recreated a small network that supports excitatory transmission in the hippocampus—a critical region that supports memory—and most brain functions require enormous cross-talk between numerous neurons and circuits. Nevertheless, the study is a jaw-dropping demonstration of how far we’ve come in recreating biological neurons and synapses in artificial hardware.

And perhaps one day, the currently “experimental” neuromorphic hardware will be integrated into broken biological neural circuits as bridges to restore movement, memory, personality, and even a sense of self.

The Artificial Brain Boom
One important thing: this study relies heavily on a decade of research into neuromorphic computing, or the implementation of brain functions inside computer chips.

The best-known example is perhaps IBM’s TrueNorth, which leveraged the brain’s computational principles to build a completely different computer than what we have today. Today’s computers run on a von Neumann architecture, in which memory and processing modules are physically separate. In contrast, the brain’s computing and memory are simultaneously achieved at synapses, small “hubs” on individual neurons that talk to adjacent ones.

Because memory and processing occur on the same site, biological neurons don’t have to shuttle data back and forth between processing and storage compartments, massively reducing processing time and energy use. What’s more, a neuron’s history will also influence how it behaves in the future, increasing flexibility and adaptability compared to computers. With deep learning, which loosely mimics neural processing, now the prima donna of AI, the need to reduce power while boosting speed and learning flexibility is becoming ever more pressing in the AI community.

Neuromorphic computing was partially born out of this need. Most chips utilize special ingredients that change their resistance (or other physical characteristics) to mimic how a neuron might adapt to stimulation. Some chips emulate a whole neuron, that is, how it responds to a history of stimulation—does it get easier or harder to fire? Others imitate synapses themselves, that is, how easily they will pass on the information to another neuron.
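
The details differ from chip to chip, but the neuron side of that equation is often some variant of a leaky integrate-and-fire model: a membrane potential accumulates input, leaks a little each step, and emits a spike when it crosses a threshold. A minimal software version looks like the sketch below (real neuromorphic chips implement this in analog or digital circuitry, so this is only a conceptual stand-in).

# Minimal leaky integrate-and-fire (LIF) neuron, the kind of dynamics many
# neuromorphic chips emulate in hardware. Parameters are illustrative.

def lif_run(input_current, threshold=1.0, leak=0.9, reset=0.0):
    v, spikes = 0.0, []
    for i_t in input_current:
        v = leak * v + i_t          # integrate the input, leak a little each step
        if v >= threshold:          # fire once the threshold is crossed...
            spikes.append(1)
            v = reset               # ...then reset the membrane potential
        else:
            spikes.append(0)
    return spikes

print(lif_run([0.3, 0.3, 0.3, 0.0, 0.6, 0.6]))  # [0, 0, 0, 0, 1, 0]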

Although single neuromorphic chips have proven to be far more efficient and powerful than current computer chips running machine learning algorithms in toy problems, so far few people have tried putting the artificial components together with biological ones in the ultimate test.

That’s what this study did.

A Hybrid Network
Still with me? Let’s talk network.

It’s gonna sound complicated, but remember: learning is the formation of neural networks, and neurons that fire together wire together. To rephrase: when learning, neurons will spontaneously organize into networks so that future instances will re-trigger the entire network. To “wire” together, downstream neurons will become more responsive to their upstream neural partners, so that even a whisper will cause them to activate. In contrast, some types of stimulation will cause the downstream neuron to “chill out” so that only an upstream “shout” will trigger downstream activation.

Both these properties—easier or harder to activate downstream neurons—are essentially how the brain forms connections. The “amping up,” in neuroscience jargon, is long-term potentiation (LTP), whereas the down-tuning is LTD (long-term depression). These two phenomena were first discovered in the rodent hippocampus more than half a century ago, and ever since have been considered the biological basis of how the brain learns and remembers, as well as being implicated in neurological problems such as addiction (seriously, you can’t pass Neuro 101 without learning about LTP and LTD!).
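
A common way to capture LTP and LTD in code is a spike-timing-dependent plasticity rule: the synaptic weight strengthens when the upstream neuron fires just before the downstream one and weakens when the order is reversed. The sketch below is a textbook-style toy, not the specific update rule implemented by the study’s memristive synapses.

import math

# Textbook-style spike-timing-dependent plasticity (STDP) toy illustrating
# LTP and LTD. Not the rule implemented by the study's memristors.

def stdp_weight_change(dt_ms, a_plus=0.05, a_minus=0.055, tau_ms=20.0):
    """dt_ms = t_post - t_pre. Positive dt strengthens, negative dt weakens."""
    if dt_ms > 0:    # pre fired before post: potentiation (LTP)
        return a_plus * math.exp(-dt_ms / tau_ms)
    if dt_ms < 0:    # post fired before pre: depression (LTD)
        return -a_minus * math.exp(dt_ms / tau_ms)
    return 0.0

print(stdp_weight_change(+5))   # small positive change (LTP)
print(stdp_weight_change(-5))   # small negative change (LTD)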

So it’s perhaps especially salient that one of the first artificial-brain hybrid networks recapitulated this classic result.

To visualize: the three-neuron network began in Switzerland, with an artificial neuron with the badass name of “silicon spiking neuron.” That neuron is linked to an artificial synapse, a “memristor” located in the UK, which is then linked to a biological rat neuron cultured in Italy. The rat neuron has a “smart” microelectrode, controlled by the artificial synapse, to stimulate it. This is the artificial-to-biological pathway.

Meanwhile, the rat neuron in Italy also has electrodes that listen in on its electrical signaling. This signaling is passed back to another artificial synapse in the UK, which is then used to control a second artificial neuron back in Switzerland. This is the biological-to-artificial pathway back. As a testament to how far we’ve come in digitizing neural signaling, all of the biological neural responses are digitized and sent over the internet to control their faraway artificial partner.

Here’s the crux: to demonstrate a functional neural network, just having the biological neuron passively “pass on” electrical stimulation isn’t enough. It has to show the capacity to learn, that is, to be able to mimic the amping up and down-tuning that are LTP and LTD, respectively.

You’ve probably guessed the results: certain stimulation patterns to the first artificial neuron in Switzerland changed how the artificial synapse in the UK operated. This, in turn, changed the stimulation to the biological neuron, so that it either amped up or toned down depending on the input.

Similarly, the response of the biological neuron altered the second artificial synapse, which then controlled the output of the second artificial neuron. Altogether, the biological and artificial components seamlessly linked up, over thousands of miles, into a functional neural circuit.

Cyborg Mind-Meld
So…I’m still picking my jaw up off the floor.

It’s utterly insane seeing a classic neuroscience learning experiment repeated with an integrated network with artificial components. That said, a three-neuron network is far from the thousands of synapses (if not more) needed to truly re-establish a broken neural circuit in the hippocampus, which DARPA has been aiming to do. And LTP/LTD have recently come under fire as the de facto brain mechanisms for learning, though for now they remain cemented as neuroscience dogma.

However, this is one of the few studies where you see fields coming together. As Richard Feynman famously said, “What I cannot create, I do not understand.” Even though neuromorphic chips were built on a high-level rather than molecular-level understanding of how neurons work, the study shows that artificial versions can still synapse with their biological counterparts. We’re not just on the right path towards understanding the brain, we’re recreating it, in hardware—if just a little.

While the study doesn’t have immediate use cases, practically it does boost both the neuromorphic computing and neuroprosthetic fields.

“We are very excited with this new development,” said study author Dr. Themis Prodromakis at the University of Southampton. “On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI chips.”

Image Credit: Gerd Altmann from Pixabay

Posted in Human Robots

#436774 AI Is an Energy-Guzzler. We Need to ...

There is a saying that has emerged among the tech set in recent years: AI is the new electricity. The platitude refers to the disruptive power of artificial intelligence for driving advances in everything from transportation to predicting the weather.

Of course, the computers and data centers that support AI’s complex algorithms are very much dependent on electricity. While that may seem pretty obvious, it may be surprising to learn that AI can be extremely power-hungry, especially when it comes to training the models that enable machines to recognize your face in a photo or for Alexa to understand a voice command.

The scale of the problem is difficult to measure, but there have been some attempts to put hard numbers on the environmental cost.

For instance, one paper published on the open-access repository arXiv claimed that the carbon emissions for training a basic natural language processing (NLP) model—algorithms that process and understand language-based data—are equal to the CO2 produced by the average American lifestyle over two years. A more robust model required the equivalent of about 17 years’ worth of emissions.

The authors noted that about a decade ago, NLP models could do the job on a regular commercial laptop. Today, much more sophisticated AI models use specialized hardware like graphics processing units, or GPUs, a chip technology popularized by Nvidia for gaming that also proved capable of supporting computing tasks for AI.

OpenAI, a nonprofit research organization co-founded by tech prophet and profiteer Elon Musk, said that the computing power “used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time” since 2012. That’s about the time that GPUs started making their way into AI computing systems.

Getting Smarter About AI Chip Design
While GPUs from Nvidia remain the gold standard in AI hardware today, a number of startups have emerged to challenge the company’s industry dominance. Many are building chipsets designed to work more like the human brain, an area that’s been dubbed neuromorphic computing.

One of the leading companies in this arena is Graphcore, a UK startup that has raised more than $450 million and boasts a valuation of $1.95 billion. The company’s version of the GPU is an IPU, which stands for intelligence processing unit.

To build a computer brain more akin to a human one, the big brains at Graphcore are replacing the precise but time-consuming number-crunching typical of a conventional microprocessor with arithmetic that’s content to get by on less precision.

The results are essentially the same, but IPUs get the job done much quicker. Graphcore claimed it was able to train the popular BERT NLP model in just 56 hours, while tripling throughput and reducing latency by 20 percent.

An article in Bloomberg compared the approach to the “human brain shifting from calculating the exact GPS coordinates of a restaurant to just remembering its name and neighborhood.”
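
The trade-off Graphcore is exploiting is easy to see in software: half-precision numbers carry roughly three decimal digits instead of seven, yet for many workloads the answer barely moves. A quick numpy illustration follows; it is not the IPU’s mixed-precision arithmetic, just the general principle.

import numpy as np

# Reduced precision in miniature: the same elementwise computation in
# float32 and float16 differs only in the last few digits. Illustrative
# only; Graphcore's IPU uses its own mixed-precision hardware, not numpy.

rng = np.random.default_rng(1)
x = rng.random(10_000).astype(np.float32) + 0.5    # keep values well scaled

full = np.sqrt(x) * x                              # computed in float32
half = (np.sqrt(x.astype(np.float16)) * x.astype(np.float16)).astype(np.float32)

print("max relative error:", np.max(np.abs(full - half) / full))  # on the order of 1e-3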

Graphcore’s hardware architecture also features more built-in memory processing, boosting efficiency because there’s less need to send as much data back and forth between chips. That’s similar to an approach adopted by a team of researchers in Italy that recently published a paper about a new computing circuit.

The novel circuit uses a device called a memristor that can execute a mathematical function known as a regression in just one operation. The approach attempts to mimic the human brain by processing data directly within the memory.

Daniele Ielmini at Politecnico di Milano, co-author of the Science Advances paper, told Singularity Hub that in-memory computing has two main advantages: it eliminates the data movement that is the main bottleneck of conventional digital computers, and it processes data in parallel through the intimate interactions among the various currents and voltages within the memory array.
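
The one-step regression in the paper relies on a feedback circuit, but the basic crossbar operation behind in-memory computing is easy to picture: if the memory cells’ conductances encode a matrix G and the applied voltages encode a vector v, Ohm’s and Kirchhoff’s laws deliver the matrix-vector product I = G·v on the output wires in a single analog step. Here is a simplified digital stand-in, not the actual memristor circuit from the paper.

import numpy as np

# Digital stand-in for an analog memristor crossbar: conductances G (stored
# in memory) times input voltages v yields output currents I = G @ v in one
# physical step, with no data shuttled off to a separate processor.
# Simplified illustration; not the feedback circuit from the paper.

G = np.array([[0.8, 0.2, 0.5],   # each entry is one memristor's conductance
              [0.1, 0.9, 0.3]])
v = np.array([0.2, 0.5, 0.1])    # input voltages applied to the array

I = G @ v                        # currents summed along each output wire
print(I)                         # the computation happens where the data lives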

Ielmini explained that in-memory computing can have a “tremendous impact on energy efficiency of AI, as it can accelerate very advanced tasks by physical computation within the memory circuit.” He added that such “radical ideas” in hardware design will be needed in order to make a quantum leap in energy efficiency and time.

It’s Not Just a Hardware Problem
The emphasis on designing more efficient chip architecture might suggest that AI’s power hunger is essentially a hardware problem. That’s not the case, Ielmini noted.

“We believe that significant progress could be made by similar breakthroughs at the algorithm and dataset levels,” he said.

He’s not the only one.

One of the key research areas at Qualcomm’s AI research lab is energy efficiency. Max Welling, a vice president in Qualcomm’s technology R&D division, has written about the need for more power-efficient algorithms. He has gone so far as to suggest that AI algorithms will be measured by the amount of intelligence they provide per joule.

One emerging area being studied, Welling wrote, is the use of Bayesian deep learning for deep neural networks.

It’s all pretty heady stuff and easily the subject of a PhD thesis. The main thing to understand in this context is that Bayesian deep learning is another attempt to mimic how the brain processes information by introducing random values into the neural network. A benefit of Bayesian deep learning is that it compresses and quantizes data in order to reduce the complexity of a neural network. In turn, that reduces the number of “steps” required to recognize a dog as a dog—and the energy required to get the right result.
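
One widely used, lightweight way to get Bayesian-style behavior out of an ordinary network is Monte Carlo dropout: random dropout masks are sampled at prediction time, and the spread of the resulting outputs doubles as an uncertainty estimate. The toy below is a generic illustration of that idea with made-up weights, not Qualcomm’s research code.

import numpy as np

# Monte Carlo dropout toy: sample random dropout masks at prediction time
# and treat the spread of outputs as an uncertainty estimate. A generic
# illustration of Bayesian-flavored deep learning, not Qualcomm's code.

rng = np.random.default_rng(42)
W1 = rng.normal(size=(4, 8))      # a tiny pretend "trained" network
W2 = rng.normal(size=(8, 1))

def predict_once(x, drop_p=0.5):
    h = np.maximum(0, x @ W1)                     # ReLU hidden layer
    mask = rng.random(h.shape) > drop_p           # random dropout mask
    h = h * mask / (1 - drop_p)                   # inverted dropout scaling
    return (h @ W2).item()

x = np.array([0.5, -0.2, 1.0, 0.3])
samples = [predict_once(x) for _ in range(100)]   # many stochastic passes
print(f"mean={np.mean(samples):.3f}  std={np.std(samples):.3f}")  # prediction plus uncertainty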

A team at Oak Ridge National Laboratory has previously demonstrated another way to improve AI energy efficiency by converting deep learning neural networks into what’s called a spiking neural network. The researchers built their deep spiking neural network (DSNN) by introducing a stochastic process that adds random values, much like Bayesian deep learning.

The DSNN actually imitates the way neurons interact with synapses, which send signals between brain cells. Individual “spikes” in the network indicate where to perform computations, lowering energy consumption because the network disregards unnecessary computations.

The system is being used by cancer researchers to scan millions of clinical reports to unearth insights on causes and treatments of the disease.

Helping battle cancer is only one of many rewards we may reap from artificial intelligence in the future, as long as the benefits of those algorithms outweigh the costs of using them.

“Making AI more energy-efficient is an overarching objective that spans the fields of algorithms, systems, architecture, circuits, and devices,” Ielmini said.

Image Credit: analogicus from Pixabay

Posted in Human Robots

#436559 This Is What an AI Said When Asked to ...

“What’s past is prologue.” So says the famed quote from Shakespeare’s The Tempest, alleging that we can look to what has already happened as an indication of what will happen next.

This idea could be interpreted as being rather bleak; are we doomed to repeat the errors of the past until we correct them? We certainly do need to learn and re-learn life lessons—whether in our work, relationships, finances, health, or other areas—in order to grow as people.

Zooming out, the same phenomenon exists on a much bigger scale—that of our collective human history. We like to think we’re improving as a species, but haven’t yet come close to doing away with the conflicts and injustices that plagued our ancestors.

Zooming back in (and lightening up) a little, what about the short-term future? What might happen over the course of this year, and what information would we use to make educated guesses about it?

The editorial team at The Economist took a unique approach to answering these questions. On top of their own projections for 2020, including possible scenarios in politics, economics, and the continued development of technologies like artificial intelligence, they looked to an AI to make predictions of its own. What it came up with is intriguing, and a little bit uncanny.

[For the full list of the questions and answers, read The Economist article].

An AI That Reads—Then Writes
Almost exactly a year ago, non-profit OpenAI announced it had built a neural network for natural language processing called GPT-2. The announcement was met with some controversy, as it included the caveat that the tool would not be immediately released to the public due to its potential for misuse. It was then released in phases over the course of several months.

GPT-2’s creators upped the bar on quality when training the neural net; rather than haphazardly feeding it low-quality text, they used only documents linked from Reddit posts that had received at least three upvotes (admittedly, this doesn’t guarantee high quality across the board—but it’s something).

The training dataset consisted of 40GB of text. For context, 1GB of text is about 900,000 ASCII pages or 130,000 double-spaced Microsoft Word pages.

The tool has no understanding of the text it’s generating, of course. It uses language patterns and word sequences to draw statistical associations between words and phrases, building a sort of guidebook for itself (not unlike the grammar rules and vocabulary words you might study when trying to learn a foreign language). It then uses that guidebook to answer questions or predict what will come after a particular sequence of words.

GPT-2’s creators did note that when the model is trained on specifically-selected datasets for narrower applications, its replies become more convincing.

Some Light Editing
Though the Economist article detailing GPT-2’s predictions describes the text the model generated as “unedited,” this isn’t wholly accurate. In a Medium post, deputy editor Tom Standage describes the methodology he used, and it does involve a degree of tweaking.

Standage wrote an introductory paragraph, followed by his opening question (“Greetings, GPT-2. Have you done an interview before?”). He configured the tool to generate five responses to each subsequent question, then selected the best answer, deleting any questions GPT-2 came up with as part of its response.
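
The Economist hasn’t said exactly which tooling was used, but the released GPT-2 weights make the generate-five-and-pick-one procedure easy to approximate. The sketch below assumes the open-source Hugging Face transformers library and an invented prompt; it is a rough stand-in for the workflow, not Standage’s actual setup.

# Rough sketch of the generate-five-and-pick-one procedure using the
# publicly released GPT-2 weights via Hugging Face's transformers library.
# The library choice and prompt are assumptions, not The Economist's setup.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Q: How do you see the economic outlook for 2020?\nA:"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    do_sample=True,            # sample rather than always taking the top token
    top_k=50,
    max_length=80,
    num_return_sequences=5,    # five candidate answers, as in the interview
    pad_token_id=tokenizer.eos_token_id,
)

for i, seq in enumerate(outputs):
    print(f"--- candidate {i + 1} ---")
    print(tokenizer.decode(seq, skip_special_tokens=True))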

AI’s View of the World—and Itself
The questions Standage asked ranged from tech-related (the future of AI, technological unemployment, misuse of technology) to political and economic (re-election of Donald Trump, the US-China trade war). Here are a couple of his questions and GPT-2’s answers:

Q: Do fake news stories, generated using AI, pose a threat to democracy? Are we likely to see this tactic being used in the 2020 American presidential elections?
A: Yes, we’ll probably see them. It’s just a matter of when. Fake news stories are generally generated by political campaigns, and have the potential to have a huge impact on the outcome of an election. This is because they spread misinformation, often with no basis in fact.

Q: How do you see the economic outlook for 2020?
A: I think it is very unclear. The outlook for the UK economy is uncertain, but the American economy is very strong. I think we are going to have a lot of turbulence in the world economy.

While some of its answers seemed insightful—when asked about the future of AI, GPT-2 advised that we treat the technology like a tool and “put as much effort into developing [it] as necessary, rather than worrying that it’s going to harm us and destroy our lives”—many of the responses were quite vague. One answer, for example, mentions that there will “probably be major changes in China.” Are we talking political changes? Social? Economic? All of the above?

However, while nothing the AI wrote was particularly illuminating or new, it’s impressive that it was able to expound on the correct topic for each question, and in fully coherent English.

GPT-2 named itself—AI—as the most important disruptive technology to watch in 2020. This is telling; it’s coming from a synthesis of millions of pages of published articles. If GPT-2 thinks AI is more important than, say, gene editing or quantum computing, it could be right. Or it could simply be that AI is at peak hype, and as such more is being written about it than about other technologies.

Equally intriguing was GPT-2’s response when asked whether Donald Trump will win a second term: “I think he will not win a second term. I think he will be defeated in the general election.” Some deeper insight there would be great, but hey—we’ll take it.

Predicting Predictions
Since an AI can read and synthesize vast data sets much faster than we can, it’s being used to predict all kinds of things, from virus outbreaks to crime. But asking it to philosophize on the future based on the (Reddit-curated) past is new, and if you think about it, a pretty fascinating undertaking.

As GPT-2 and tools like it continually improve, we’ll likely see them making more—and better—predictions of the future. In the meantime, let’s hope that the new data these models are trained on—news of what’s happening this week, this month, this year—add to an already-present sense of optimism.

When asked if it had any advice for readers, GPT-2 replied, “The big projects that you think are impossible today are actually possible in the near future.”

Image Credit: Alexas_Fotos from Pixabay

Posted in Human Robots

#436546 How AI Helped Predict the Coronavirus ...

Coronavirus has been all over the news for the last couple weeks. A dedicated hospital sprang up in just eight days, the stock market took a hit, Chinese New Year celebrations were spoiled, and travel restrictions are in effect.

But let’s rewind a bit; some crucial events took place before we got to this point.

A little under two weeks before the World Health Organization (WHO) alerted the public of the coronavirus outbreak, a Canadian artificial intelligence company was already sounding the alarm. BlueDot uses AI-powered algorithms to analyze information from a multitude of sources to identify disease outbreaks and forecast how they may spread. On December 31st 2019, the company sent out a warning to its customers to avoid Wuhan, where the virus originated. The WHO didn’t send out a similar public notice until January 9th, 2020.

The story of BlueDot’s early warning is the latest example of how AI can improve our identification of and response to new virus outbreaks.

Predictions Are Bad News
Global pandemic or relatively minor scare? The jury is still out on the coronavirus. However, the math points to signs that the worst is yet to come.

Scientists are still working to determine how infectious the virus is. Initial analysis suggests it may be somewhere between influenza and polio on the virus reproduction number scale, which indicates how many new cases one case leads to.
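
To get a feel for why that number matters, a little arithmetic helps: each generation of infections is roughly R0 times the previous one, so cases compound quickly. The reproduction numbers and five generations below are hypothetical, chosen only to illustrate the scale; they ignore incubation time, immunity, and interventions.

# Toy illustration of how the reproduction number compounds. The R0 values
# and five generations here are hypothetical, not estimates for this virus.

def cases_after_generations(r0, generations, initial_cases=1):
    total, current = initial_cases, initial_cases
    for _ in range(generations):
        current *= r0              # each case seeds r0 new cases, on average
        total += current
    return round(total)

for r0 in (1.3, 2.0, 2.5):
    print(f"R0={r0}: ~{cases_after_generations(r0, 5)} cumulative cases after 5 generations")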

UK and US-based researchers have published a preliminary paper estimating that the confirmed infected people in Wuhan represent only five percent of those who are actually infected. If the models are correct, some 190,000 people in Wuhan will have been infected by now, major Chinese cities are on the cusp of large-scale outbreaks, and the virus will continue to spread to other countries.

Finding the Start
The spread of a given virus is partly linked to how long it remains undetected. Identifying a new virus is the first step towards mobilizing a response and, in time, creating a vaccine. Warning at-risk populations as quickly as possible also helps with limiting the spread.

These are among the reasons why BlueDot’s achievement is important in and of itself. Furthermore, it illustrates how AIs can sift through vast troves of data to identify ongoing virus outbreaks.

BlueDot uses natural language processing and machine learning to scour a variety of information sources, including chomping through 100,000 news reports in 65 languages a day. Data is compared with flight records to help predict virus outbreak patterns. Once the automated data sifting is completed, epidemiologists check that the findings make sense from a scientific standpoint, and reports are sent to BlueDot’s customers, which include governments, businesses, and public health organizations.
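
BlueDot’s system is proprietary, but the shape of that pipeline (flag outbreak-related reports by location, then rank destinations by travel volume out of the flagged city) can be sketched in a few lines. The keywords, cities, and passenger counts below are invented for illustration only.

# Toy outbreak-alert pipeline in the spirit of the description above:
# flag news reports that mention outbreak terms, then rank destinations by
# passenger volume out of the flagged city. All data here is invented.

OUTBREAK_TERMS = {"pneumonia", "outbreak", "unknown virus"}

reports = [
    {"city": "Wuhan", "text": "cluster of pneumonia cases of unknown cause"},
    {"city": "Lyon", "text": "city marathon draws record crowd"},
]

flights = [  # (origin, destination, monthly passengers); invented numbers
    ("Wuhan", "Bangkok", 60_000),
    ("Wuhan", "Tokyo", 45_000),
    ("Lyon", "Madrid", 30_000),
]

flagged = {r["city"] for r in reports
           if any(term in r["text"] for term in OUTBREAK_TERMS)}

risk = sorted(((dest, pax) for origin, dest, pax in flights if origin in flagged),
              key=lambda item: -item[1])
print("flagged cities:", flagged)
print("destinations to watch:", risk)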

AI for Virus Detection and Prevention
Other companies, such as Metabiota, are also using data-driven approaches to track the spread of the likes of the coronavirus.

Researchers have trained neural networks to predict the spread of infectious diseases in real time. Others are using AI algorithms to identify how preventive measures can have the greatest effect. AI is also being used to create new drugs, which we may well see repeated for the coronavirus.

If the work of scientists Barbara Han and David Redding comes to fruition, AI and machine learning may even help us predict where virus outbreaks are likely to strike—before they do.

The Uncertainty Factor
One of AI’s core strengths when working on identifying and limiting the effects of virus outbreaks is its sheer persistence. AIs never tire, can sift through enormous amounts of data, and identify possible correlations and causations that humans can’t.

However, there are limits to AI’s ability to both identify virus outbreaks and predict how they will spread. Perhaps the best-known example comes from the neighboring field of big data analytics. At its launch, Google Flu Trends was heralded as a great leap forward in relation to identifying and estimating the spread of the flu—until it overestimated the 2013 flu season by a whopping 140 percent and was quietly put to rest.

Poor data quality was identified as one of the main reasons Google Flu Trends failed. Unreliable or faulty data can wreak havoc on the prediction power of AIs.

In our increasingly interconnected world, tracking the movements of potentially infected individuals (by car, trains, buses, or planes) is just one vector surrounded by a lot of uncertainty.

The fact that BlueDot was able to correctly identify the coronavirus, in part due to its AI technology, illustrates that smart computer systems can be incredibly useful in helping us navigate these uncertainties.

Importantly, though, this isn’t the same as AI being at a point where it unerringly does so on its own—which is why BlueDot employs human experts to validate the AI’s findings.

Image Credit: Coronavirus molecular illustration, Gianluca Tomasello/Wikimedia Commons

Posted in Human Robots