The Top 100 AI Startups Out There Now, ...
New drug therapies for a range of chronic diseases. Defenses against various cyber attacks. Technologies to make cities work smarter. Weather and wildfire forecasts that boost safety and reduce risk. And commercial efforts to monetize so-called deepfakes.
What do all these disparate efforts have in common? They’re some of the solutions that the world’s most promising artificial intelligence startups are pursuing.
Data research firm CB Insights released its much-anticipated fourth annual list of the top 100 AI startups earlier this month. The New York-based company has become one of the go-to sources for emerging technology trends, especially in the startup scene.
About 10 years ago, it developed its own algorithm to assess the health of private companies using publicly available information and non-traditional signals (think social media sentiment, for example), thanks to more than $1 million in grants from the National Science Foundation.
It uses that algorithm to generate what it calls a company’s Mosaic score—a composite of signals on market trends, money, and momentum—and combines it with other details, ranging from patent activity to the latest news analysis, to identify the best of the best.
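CB Insights has not published the Mosaic formula, but the general shape of such a composite score (normalize a handful of signals, weight them, and blend them into one number) is easy to sketch. In the minimal Python sketch below, every signal, range, and weight is invented purely for illustration:

```python
# Hypothetical sketch of a Mosaic-style composite score. CB Insights has not
# published its algorithm; every signal, range, and weight below is invented
# purely to illustrate blending "market, money, and momentum" into one number.

def normalize(value, low, high):
    """Clamp a raw signal into [0, 1] against an expected range."""
    return max(0.0, min(1.0, (value - low) / (high - low)))

def mosaic_style_score(market_growth_pct, funding_musd, months_since_round,
                       news_mentions_90d, social_sentiment):
    market = normalize(market_growth_pct, 0, 50)        # sector tailwinds
    money = (0.6 * normalize(funding_musd, 0, 300)      # capital raised...
             + 0.4 * (1 - normalize(months_since_round, 0, 36)))  # ...and how fresh it is
    momentum = (0.5 * normalize(news_mentions_90d, 0, 200)
                + 0.5 * normalize(social_sentiment, -1, 1))  # e.g., social media sentiment
    # Weighted blend, scaled to a 0-1000 score.
    return round(1000 * (0.35 * market + 0.35 * money + 0.30 * momentum))

print(mosaic_style_score(market_growth_pct=22, funding_musd=266,
                         months_since_round=8, news_mentions_90d=120,
                         social_sentiment=0.4))  # -> 644
```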
“Our final list of companies is a mix of startups at various stages of R&D and product commercialization,” said Deepashri Varadharajan, a lead analyst at CB Insights, during a recent presentation on the most prominent trends among the 2020 AI 100 startups.
About 10 companies on the list are among the world’s most valuable AI startups. For instance, there’s San Francisco-based Faire, which has raised at least $266 million since it was founded just three years ago. The company offers a wholesale marketplace that uses machine learning to match local retailers with goods that are predicted to sell well in their specific location.
Image courtesy of CB Insights
Funding for AI in Healthcare
Another startup valued at more than $1 billion, referred to as a unicorn in venture capital speak, is Butterfly Network, a company on the East Coast that has figured out a way to turn a smartphone into an ultrasound machine. Backed by $350 million in private investments, Butterfly Network uses AI to power the platform’s diagnostics. A more modestly funded San Francisco startup called Eko is doing something similar for stethoscopes.
In fact, there are more than a dozen AI healthcare startups on this year’s AI 100 list, more than any other industry represented. In total, investors poured about $4 billion into AI healthcare startups last year, according to CB Insights, out of a record $26.6 billion raised by all private AI companies in 2019. Since 2014, more than 4,300 AI startups in 80 countries have raised about $83 billion.
One of the most intensive areas remains drug discovery, where companies unleash algorithms to screen potential drug candidates at an unprecedented speed and breadth that was impossible just a few years ago. It has led to the discovery of a new antibiotic to fight superbugs. There’s even a chance AI could help fight the coronavirus pandemic.
There are several AI drug discovery startups among the AI 100: San Francisco-based Atomwise claims its deep convolutional neural network, AtomNet, screens more than 100 million compounds each day. Cyclica is an AI drug discovery company in Toronto that just announced it would apply its platform to identify and develop novel cannabinoid-inspired drugs for neuropsychiatric conditions such as bipolar disorder and anxiety.
And then there’s OWKIN out of New York City, a startup that uses a type of machine learning called federated learning. Backed by Google, the company’s AI platform helps train algorithms without sharing the necessary patient data required to provide the sort of valuable insights researchers need for designing new drugs or even selecting the right populations for clinical trials.
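Federated learning is worth unpacking, since it underlies OWKIN’s pitch: the model travels to the data rather than the other way around. The toy sketch below (the generic federated-averaging recipe, not OWKIN’s actual platform) shows three “hospitals” fitting a shared model without ever pooling their raw records:

```python
import numpy as np

# Minimal federated-averaging sketch: each "hospital" trains on its own
# patients, and only model weights (never raw records) leave the site.
# An illustration of the general technique, not OWKIN's actual platform.

def local_update(weights, X, y, lr=0.01, steps=100):
    """Plain gradient descent on a least-squares objective, run locally."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

rng = np.random.default_rng(0)
true_w = np.array([1.5, -2.0])
hospitals = []
for _ in range(3):  # three sites, each with private data
    X = rng.normal(size=(200, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=200)
    hospitals.append((X, y))

global_w = np.zeros(2)
for _ in range(10):
    # Each site trains locally; the server only averages the returned weights.
    local_ws = [local_update(global_w, X, y) for X, y in hospitals]
    global_w = np.mean(local_ws, axis=0)

print(global_w)  # converges toward true_w; no site ever shared its data
```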
Keeping Cyber Networks Healthy
Privacy and data security are the focus of a number of AI cybersecurity startups, as hackers attempt to leverage artificial intelligence to launch sophisticated attacks while also trying to fool the AI-powered systems rapidly coming online.
“I think this is an interesting field because it’s a bit of a cat and mouse game,” noted Varadharajan. “As your cyber defenses get smarter, your cyber attacks get even smarter, and so it’s a constant game of who’s going to match the other in terms of tech capabilities.”
Few AI cybersecurity startups match Silicon Valley-based SentinelOne in terms of private capital. The company has raised more than $400 million, with a valuation of $1.1 billion following a $200 million Series E earlier this year. The company’s platform automates what’s called endpoint security, referring to laptops, phones, and other devices at the “end” of a centralized network.
Fellow AI 100 cybersecurity companies include Blue Hexagon, which protects the “edge” of the network against malware, and Abnormal Security, which stops targeted email attacks, both out of San Francisco. Just down the coast in Los Angeles is Obsidian Security, a startup offering cybersecurity for cloud services.
Deepfakes Get a Friendly Makeover
Deepfake videos and other types of AI-manipulated media, in which faces or voices are synthesized to fool viewers or listeners, have been an ongoing cybersecurity risk of a different kind. However, some firms are swapping malicious intent for benign marketing and entertainment purposes.
Now anyone can be a supermodel thanks to Superpersonal, a London-based AI startup that has figured out a way to seamlessly swap a user’s face onto a fashionista modeling the latest threads on the catwalk. The most obvious use case is for shoppers to see how they will look in a particular outfit before taking the plunge on a plunging neckline.
Another British company called Synthesia helps users create videos in which a talking head delivers a customized speech or even talks in a different language. The startup’s claim to fame was releasing a campaign video for the NGO Malaria Must Die showing soccer star David Beckham speaking in nine different languages.
There’s also a Seattle-based company, WellSaid Labs, which uses AI to produce voice-over narration where users can choose from a library of digital voices with human pitch, emphasis, and intonation. Because every narrator sounds just a little bit smarter with a British accent.
AI Helps Make Smart Cities Smarter
Speaking of smarter: A handful of AI 100 startups are helping create the smart city of the future, where a digital web of sensors, devices, and cloud-based analytics ensure that nobody is ever stuck in traffic again or without an umbrella at the wrong time. At least that’s the dream.
A couple of them are directly connected to Google subsidiary Sidewalk Labs, which focuses on tech solutions to improve urban design. A company called Replica was spun out just last year. It’s sort of SimCity for urban planning. The San Francisco startup uses location data from mobile phones to understand how people behave and travel throughout a typical day in the city. Those insights can then help city governments, for example, make better decisions about infrastructure development.
Denver-area startup AMP Robotics gets into the nitty gritty details of recycling by training robots on how to recycle trash, since humans have largely failed to do the job. The U.S. Environmental Protection Agency estimates that only about 30 percent of waste is recycled.
Some people might complain that weather forecasters don’t even do that well when trying to predict the weather. An Israeli AI startup, ClimaCell, claims it can forecast rain block by block. While the company taps the usual satellite and ground-based sources to create weather models, it has developed algorithms to analyze how precipitation and other conditions affect signals in cellular networks. By analyzing changes in microwave signals between cellular towers, the platform can predict the type and intensity of the precipitation down to street level.
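The physics behind that trick is well documented: rain attenuates microwave signals following an approximate power law, so a drop in received signal strength on a tower-to-tower link can be inverted into a rain-rate estimate. ClimaCell’s production models are proprietary; the sketch below uses illustrative coefficients in the range tabulated for roughly 20 GHz links (ITU-R P.838):

```python
# The standard physics behind rain sensing on microwave links: rain attenuates
# the signal following an approximate power law, A = k * R**alpha * L
# (A: excess loss in dB, R: rain rate in mm/h, L: link length in km).
# ClimaCell's production models are proprietary; k and alpha below are
# illustrative values in the range tabulated for ~20 GHz links (ITU-R P.838).

def rain_rate_from_attenuation(excess_loss_db, link_km, k=0.09, alpha=1.06):
    """Invert the power law to estimate rain rate along one tower-to-tower link."""
    if excess_loss_db <= 0:
        return 0.0  # no rain-induced loss observed
    return (excess_loss_db / (k * link_km)) ** (1 / alpha)

# Example: a 3 km link whose received signal drops 4 dB below its dry baseline.
print(f"{rain_rate_from_attenuation(4.0, 3.0):.1f} mm/h")  # ~12.7 mm/h, moderate rain
```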
And those are just some of the highlights of what some of the world’s most promising AI startups are doing.
“You have companies optimizing mining operations, warehouse logistics, insurance, workflows, and even working on bringing AI solutions to designing printed circuit boards,” Varadharajan said. “So a lot of creative ways in which companies are applying AI to solve different issues in different industries.”
Image Credit: Butterfly Network
Is Digital Learning Still Second Best?
As Covid-19 continues to spread, the world has gone digital on an unprecedented scale. Tens of thousands of employees are working from home, and huge conferences, like the Google I/O and Apple WWDC software extravaganzas, plan to experiment with digital events.
Universities too are sending students home. This might have meant an extended break from school not too long ago. But no more. As lecture halls go empty, an experiment into digital learning at scale is ramping up. In the US alone, over 100 universities, from Harvard to Duke, are offering online classes to students to keep the semester going.
While digital learning has been improving for some time, Covid-19 may not only tip us further into a more digitally connected reality, but also help us better appreciate its benefits. This is important because historically, digital learning has been viewed as inferior to traditional learning. But that may be changing.
The Inversion
We often think about digital technologies as ways to reach people without access to traditional services—online learning for children who don’t have schools nearby or telemedicine for patients with no access to doctors. And while these solutions have helped millions of people, they’re often viewed as “second best” and “better than nothing.” Even in more resource-rich environments, there’s an assumption one should pay more to attend an event in person—a concert, a football game, an exercise class—while digital equivalents are extremely cheap or free. Why is this? And is the situation about to change?
Take the case of Dr. Sanjeev Arora, a professor of medicine at the University of New Mexico. Arora started Project Echo because he was frustrated by how many late-stage cases of hepatitis C he encountered in rural New Mexico. He realized that if he had reached patients sooner, he could have prevented needless deaths. The solution? Digital learning for local health workers.
Project Echo connects rural healthcare practitioners to specialists at top health centers by video. The approach is collaborative: specialists share best practices and work through cases with participants, applying them in the real world and learning from edge cases. Alongside expert presentations, there are plenty of opportunities to ask questions and interact with specialists.
The method forms a digital loop of learning, practice, assessment, and adjustment.
Since 2003, Project Echo has scaled to 800 locations in 39 countries and trained over 90,000 healthcare providers. Most notably, a study in The New England Journal of Medicine found that the outcomes of hepatitis C treatment given by healthcare workers trained through Project Echo in rural and underserved areas were similar to outcomes at university medical centers. That is, digital learning in this context was equivalent to high-quality in-person learning.
If that is possible today with simple tools, will such programs surpass traditional medical centers and schools in the future? Can digital learning more generally follow suit? Perhaps. Going digital brings its own special toolset to the table too.
The Benefits of Digital
If you’re training people online, you can record the session to better understand their engagement levels—or even add artificial intelligence to analyze it in real time. Ahura AI, for example, founded by Bryan Talebi, aims to upskill workers through online training. An early study of their method suggests it can significantly speed up learning by analyzing users’ real-time emotions—like frustration or distraction—and adjusting the lesson plan or difficulty on the fly.
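Ahura AI has not published its method, but the basic feedback loop (estimate the learner’s state, then nudge the lesson’s difficulty up or down) can be caricatured in a few lines. Everything below, from the signals to the thresholds, is an invented illustration:

```python
# Toy control loop for emotion-adaptive lessons. Ahura AI has not published
# its method; the signals and thresholds here are invented to illustrate
# adjusting difficulty from a real-time engagement estimate.

def adjust_difficulty(difficulty, frustration, distraction):
    """Return an updated difficulty level clamped to [1, 10]."""
    if frustration > 0.7:      # learner is struggling: ease off
        difficulty -= 1
    elif distraction > 0.7:    # learner is bored or tuned out: raise the stakes
        difficulty += 1
    return max(1, min(10, difficulty))

level = 5
for frustration, distraction in [(0.2, 0.8), (0.3, 0.75), (0.8, 0.1), (0.4, 0.3)]:
    level = adjust_difficulty(level, frustration, distraction)
    print(level)  # 6, 7, 6, 6
```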
Other benefits of digital learning include the near-instantaneous download of course materials—rather than printing and shipping books—and being able to more easily report grades and other results, a requirement for many schools and social services organizations. And of course, as other digitized industries show, digital learning can grow and scale further at much lower costs.
To that last point, 360ed, a digital learning startup founded in 2016 by Hla Hla Win, now serves millions of children in Myanmar with augmented reality lesson plans. And Global Startup Ecosystem, founded by Christine Souffrant Ntim and Einstein Kofi Ntim in 2015, is the world’s first and largest digital accelerator program. Their entirely online programs support over 1,000 companies in 90 countries. It’s astonishing how fast both of these organizations have grown.
Notably, both examples include offline experiences too. Many of the 360ed lesson plans come with paper flashcards children use with their smartphones because the online-offline interaction improves learning. The Global Startup Ecosystem also hosts about 10 additional in-person tech summits around the world on various topics through a related initiative.
Looking further ahead, probably the most important benefit of online learning will be its potential to integrate with other digital systems in the workplace.
Imagine a medical center that has perfect information about every patient and treatment in real time and that this information is (anonymously and privately) centralized, analyzed, and shared with medical centers, research labs, pharmaceutical companies, clinical trials, policy makers, and medical students around the world. Just as self-driving cars can learn to drive better by having access to the experiences of other self-driving cars, so too can any group working to solve complex, time-sensitive challenges learn from and build on each other’s experiences.
Why This Matters
While in the long term the world will likely end up combining the best aspects of traditional and digital learning, it’s important in the near term to be more aware of the assumptions we make about digital technologies. Some of the most pioneering work in education, healthcare, and other industries may not be highly visible right now because it is in a virtual setting. Most people are unaware, for example, that the busiest emergency room in rural America is already virtual.
Once these innovations start converging with other digital technologies, they will likely become the mainstream system for all of us. Which raises more questions: What is the best business model for these virtual services? If they start delivering better healthcare and educational outcomes than traditional institutions, should they charge more? Hopefully, we will see an even bigger shift, in which technology allows us to provide high-quality education, healthcare, and other services to everyone at more affordable prices than today.
These are some of the topics we can consider as Covid-19 forces us into uncharted territory.
Image Credit: Andras Vas / Unsplash
AI Is an Energy-Guzzler. We Need to ...
There is a saying that has emerged among the tech set in recent years: AI is the new electricity. The platitude refers to the disruptive power of artificial intelligence for driving advances in everything from transportation to predicting the weather.
Of course, the computers and data centers that support AI’s complex algorithms are very much dependent on electricity. While that may seem pretty obvious, it may be surprising to learn that AI can be extremely power-hungry, especially when it comes to training the models that enable machines to recognize your face in a photo or let Alexa understand a voice command.
The scale of the problem is difficult to measure, but there have been some attempts to put hard numbers on the environmental cost.
For instance, one paper published on the open-access repository arXiv claimed that the carbon emissions for training a basic natural language processing (NLP) model—algorithms that process and understand language-based data—are equal to the CO2 produced by the average American lifestyle over two years. A more robust model required the equivalent of about 17 years’ worth of emissions.
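Estimates like these typically follow a simple chain of arithmetic: GPU power draw times training time times datacenter overhead gives energy, and energy times the grid’s carbon intensity gives emissions. Here is a back-of-the-envelope version with illustrative inputs, not the figures from the paper:

```python
# Back-of-the-envelope carbon accounting of the kind used in such studies:
# energy = GPU power x time x datacenter overhead (PUE), and
# emissions = energy x grid carbon intensity. All inputs are illustrative
# assumptions, not figures from the cited paper.

def training_co2_kg(num_gpus, gpu_kw, hours, pue=1.58, kg_co2_per_kwh=0.475):
    """0.475 kg/kWh approximates an average grid mix; PUE covers cooling etc."""
    energy_kwh = num_gpus * gpu_kw * hours * pue
    return energy_kwh * kg_co2_per_kwh

# Example: 8 GPUs drawing ~300 W each across 5 days of training.
print(f"{training_co2_kg(num_gpus=8, gpu_kw=0.3, hours=120):.0f} kg of CO2")  # ~216 kg
```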
The authors noted that about a decade ago, NLP models could do the job on a regular commercial laptop. Today, much more sophisticated AI models use specialized hardware like graphics processing units, or GPUs, a chip technology popularized by Nvidia for gaming that also proved capable of supporting computing tasks for AI.
OpenAI, a nonprofit research organization co-founded by tech prophet and profiteer Elon Musk, said that the computing power “used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time” since 2012. That’s about the time that GPUs started making their way into AI computing systems.
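It’s worth pausing on what a 3.4-month doubling time implies, because exponential growth compounds brutally fast:

```python
# What a 3.4-month doubling time implies over a multi-year window.
months = 6 * 12                         # e.g., 2012 to 2018
growth = 2 ** (months / 3.4)
print(f"{growth:,.0f}x more compute")   # roughly a 2.4 million-fold increase
```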
Getting Smarter About AI Chip Design
While GPUs from Nvidia remain the gold standard in AI hardware today, a number of startups have emerged to challenge the company’s industry dominance. Many are building chipsets designed to work more like the human brain, an area that’s been dubbed neuromorphic computing.
One of the leading companies in this arena is Graphcore, a UK startup that has raised more than $450 million and boasts a valuation of $1.95 billion. The company’s version of the GPU is an IPU, which stands for intelligence processing unit.
To build a computer brain more akin to a human one, the big brains at Graphcore are swapping the precise but time-consuming number-crunching typical of a conventional microprocessor for arithmetic that’s content to get by on less precision.
The results are essentially the same, but IPUs get the job done much quicker. Graphcore claimed it was able to train the popular BERT NLP model in just 56 hours, while tripling throughput and reducing latency by 20 percent.
An article in Bloomberg compared the approach to the “human brain shifting from calculating the exact GPS coordinates of a restaurant to just remembering its name and neighborhood.”
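The trade-off is easy to demonstrate outside any special hardware. The generic numpy demo below (ordinary CPU arithmetic, not Graphcore’s IPU) shows half-precision matrix math landing very close to the 32-bit answer:

```python
import numpy as np

# Generic demonstration of the precision trade-off (ordinary numpy on a CPU,
# not Graphcore's IPU arithmetic): half-precision matrix math lands very
# close to the 32-bit answer for typical neural-network-style workloads.

rng = np.random.default_rng(42)
a = rng.normal(size=(512, 512)).astype(np.float32)
b = rng.normal(size=(512, 512)).astype(np.float32)

exact = a @ b                                     # 32-bit result
approx = (a.astype(np.float16) @ b.astype(np.float16)).astype(np.float32)

rel_error = np.abs(exact - approx).max() / np.abs(exact).max()
print(f"worst-case relative error: {rel_error:.4f}")  # small, typically under 1%
```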
Graphcore’s hardware architecture also features more built-in memory processing, boosting efficiency because there’s less need to send as much data back and forth between chips. That’s similar to an approach adopted by a team of researchers in Italy that recently published a paper about a new computing circuit.
The novel circuit uses a device called a memristor that can execute a mathematical function known as a regression in just one operation. The approach attempts to mimic the human brain by processing data directly within the memory.
Daniele Ielmini at Politecnico di Milano, co-author of the Science Advances paper, told Singularity Hub that in-memory computing offers two main advantages: it eliminates data movement, the main bottleneck of conventional digital computers, and it processes data in parallel, enabling intimate interactions among the various currents and voltages within the memory array.
Ielmini explained that in-memory computing can have a “tremendous impact on energy efficiency of AI, as it can accelerate very advanced tasks by physical computation within the memory circuit.” He added that such “radical ideas” in hardware design will be needed in order to make a quantum leap in energy efficiency and time.
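The crossbar intuition behind such circuits fits in a few lines. Program conductances G into a grid of memristors, apply voltages V to the rows, and Ohm’s and Kirchhoff’s laws deliver the column currents I = G^T V in one physical step, a matrix-vector multiply with no data shuttled anywhere. The snippet below is a numerical illustration of that principle, not the feedback regression circuit from the paper:

```python
import numpy as np

# Numerical illustration of the crossbar principle (not the feedback
# regression circuit from the paper): with conductances G programmed into
# memristors, applying voltages V to the rows yields the column currents
# I = G^T V in a single physical step, with no data movement at all.

G = np.array([[1.0, 0.5],
              [0.2, 0.8],
              [0.6, 0.3]])     # conductances, arbitrary units (3 rows x 2 columns)
V = np.array([0.1, 0.2, 0.3])  # voltages applied to the three row wires

I = G.T @ V                    # Ohm's law per device, Kirchhoff's law per column
print(I)                       # [0.32 0.3]
```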
It’s Not Just a Hardware Problem
The emphasis on designing more efficient chip architecture might suggest that AI’s power hunger is essentially a hardware problem. That’s not the case, Ielmini noted.
“We believe that significant progress could be made by similar breakthroughs at the algorithm and dataset levels,” he said.
He’s not the only one.
One of the key research areas at Qualcomm’s AI research lab is energy efficiency. Max Welling, a vice president of Qualcomm’s technology R&D division, has written about the need for more power-efficient algorithms. He has gone so far as to suggest that AI algorithms will be measured by the amount of intelligence they provide per joule.
One emerging area being studied, Welling wrote, is the use of Bayesian deep learning for deep neural networks.
It’s all pretty heady stuff and easily the subject of a PhD thesis. The main thing to understand in this context is that Bayesian deep learning is another attempt to mimic how the brain processes information, in this case by treating the network’s weights as probability distributions rather than fixed values. A benefit of Bayesian deep learning is that it can compress and quantize a neural network, reducing its complexity. In turn, that reduces the number of “steps” required to recognize a dog as a dog—and the energy required to get the right result.
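A toy version of the idea: give each weight a mean and an uncertainty instead of a single value, then prune the weights whose signal is drowned out by their own noise; that is where the compression comes from. The sketch below is illustrative only; real Bayesian deep learning runs variational inference over entire networks:

```python
import numpy as np

# Toy version of the Bayesian idea: each weight is a distribution (mean mu,
# spread sigma) rather than one number, and weights dominated by their own
# noise can be pruned, which is where the compression comes from.
# Illustrative only; real Bayesian deep learning uses variational inference.

rng = np.random.default_rng(1)
mu = np.array([1.2, -0.03, 0.8, 0.01])      # learned weight means
sigma = np.array([0.05, 0.40, 0.10, 0.35])  # learned weight uncertainties

def sample_weights():
    """Draw one plausible network from the weight distributions."""
    return mu + sigma * rng.normal(size=mu.shape)

snr = np.abs(mu) / sigma   # signal-to-noise ratio per weight
keep = snr > 1.0
print(keep)                # [ True False  True False]: half the weights pruned
print(sample_weights())
```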
A team at Oak Ridge National Laboratory has previously demonstrated another way to improve AI energy efficiency: converting deep learning neural networks into what’s called a spiking neural network. The researchers built their deep spiking neural network (DSNN) by introducing a stochastic process that adds random values, much as Bayesian deep learning does.
The DSNN imitates the way neurons interact with synapses, which send signals between brain cells. Individual “spikes” in the network indicate where to perform computations, lowering energy consumption because the network disregards unnecessary computations.
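A leaky integrate-and-fire neuron, the basic unit of most spiking networks, makes the energy argument concrete: the neuron stays silent, and computation-free, until its accumulated input crosses a threshold. This toy sketch illustrates the general principle, not Oak Ridge’s DSNN:

```python
import numpy as np

# Toy leaky integrate-and-fire neuron with stochastic input. It shows why
# spiking saves energy: computation (a "spike") happens only when accumulated
# input crosses a threshold, and the neuron is otherwise silent. A sketch of
# the general principle, not Oak Ridge's DSNN.

rng = np.random.default_rng(7)
v, threshold, leak = 0.0, 1.0, 0.95
spikes = []
for t in range(50):
    v = leak * v + rng.uniform(0, 0.15)  # leak a little, add noisy input current
    if v >= threshold:
        spikes.append(t)                 # fire...
        v = 0.0                          # ...and reset
print(f"{len(spikes)} spikes in 50 steps, at times {spikes}")
```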
The system is being used by cancer researchers to scan millions of clinical reports to unearth insights on causes and treatments of the disease.
Helping battle cancer is only one of many rewards we may reap from artificial intelligence in the future, as long as the benefits of those algorithms outweigh the costs of using them.
“Making AI more energy-efficient is an overarching objective that spans the fields of algorithms, systems, architecture, circuits, and devices,” Ielmini said.
Image Credit: analogicus from Pixabay
AI Just Discovered a New Antibiotic to ...
Penicillin, one of the greatest discoveries in the history of medicine, was a product of chance.
After returning from summer vacation in September 1928, bacteriologist Alexander Fleming found a colony of bacteria he’d left in his London lab had sprouted a fungus. Curiously, wherever the bacteria contacted the fungus, their cell walls broke down and they died. Fleming guessed the fungus was secreting something lethal to the bacteria—and the rest is history.
Fleming’s discovery of penicillin and its later isolation, synthesis, and scaling in the 1940s released a flood of antibiotic discoveries in the next few decades. Bacteria and fungi had been waging an ancient war against each other, and the weapons they’d evolved over eons turned out to be humanity’s best defense against bacterial infection and disease.
In recent decades, however, the flood of new antibiotics has slowed to a trickle.
Their development is uneconomical for drug companies, and the low-hanging fruit has long been picked. We’re now facing the emergence of strains of super bacteria resistant to one or more antibiotics and an aging arsenal to fight them with. Left unchallenged, drug resistance, which is estimated to cause 700,000 deaths worldwide each year, could kill as many as 10 million people a year by 2050.
Increasingly, scientists warn the tide is turning, and we need a new strategy to keep pace with the remarkably quick and boundlessly creative tactics of bacterial evolution.
But where the golden age of antibiotics was sparked by serendipity, human intelligence, and natural molecular weapons, its sequel may lean on the uncanny eye of artificial intelligence to screen millions of compounds—and even design new ones—in search of the next penicillin.
Hal Discovers a Powerful Antibiotic
In a paper published this week in the journal Cell, MIT researchers took a step in this direction. The team says their machine learning algorithm discovered a powerful new antibiotic.
Named for the AI in 2001: A Space Odyssey, the antibiotic, halicin, successfully wiped out dozens of bacterial strains, including some of the most dangerous drug-resistant bacteria on the World Health Organization’s most wanted list. E. coli also failed to develop resistance to halicin during a month of observation, in stark contrast to the existing antibiotic ciprofloxacin.
“In terms of antibiotic discovery, this is absolutely a first,” Regina Barzilay, a senior author on the study and computer science professor at MIT, told The Guardian.
The algorithm that discovered halicin was trained on the molecular features of 2,500 compounds. Nearly half were FDA-approved drugs, and another 800 were naturally occurring. The researchers specifically tuned the algorithm to look for molecules with antibiotic properties but whose structures would differ from existing antibiotics (as halicin’s does). Using another machine learning program, they screened the results for those likely to be safe for humans.
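In spirit, the pipeline looks like the sketch below. The MIT team actually used a directed message-passing neural network over molecular graphs; here a random forest over binary fingerprints, trained on synthetic data, stands in for it to show the workflow: learn from labeled compounds, score a library, then keep high scorers that look structurally unlike known antibiotics:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Simplified stand-in for the screening workflow. The MIT team used a directed
# message-passing neural network over molecular graphs; here a random forest
# over binary fingerprints, trained on synthetic data, illustrates the same
# three steps: learn from labeled compounds, score a library, keep high
# scorers that are structurally unlike known antibiotics.

rng = np.random.default_rng(0)
n_bits = 256
X_train = rng.integers(0, 2, size=(2500, n_bits))  # fingerprints of training compounds
y_train = rng.integers(0, 2, size=2500)            # 1 = inhibits bacterial growth

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

library = rng.integers(0, 2, size=(10000, n_bits))  # unscreened compound library
scores = model.predict_proba(library)[:, 1]         # predicted antibiotic activity

def tanimoto(a, b):
    """Bit-vector similarity: shared bits over total bits set."""
    return np.logical_and(a, b).sum() / np.logical_or(a, b).sum()

known = X_train[y_train == 1][:50]  # fingerprints of known active compounds
# Keep top-scoring candidates that look structurally novel (low max similarity).
hits = [i for i in np.argsort(scores)[::-1][:100]
        if max(tanimoto(library[i], kb) for kb in known) < 0.4]
print(f"{len(hits)} structurally novel, high-scoring candidates")
```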
Early study suggests halicin attacks the bacteria’s cell membranes, disrupting their ability to produce energy. Protecting the cell membrane from halicin might take more than one or two genetic mutations, which could account for its impressive ability to prevent resistance.
“I think this is one of the more powerful antibiotics that has been discovered to date,” James Collins, an MIT professor of bioengineering and senior author told The Guardian. “It has remarkable activity against a broad range of antibiotic-resistant pathogens.”
Beyond tests in petri-dish bacterial colonies, the team also tested halicin in mice. The antibiotic cleared up infections of a strain of bacteria resistant to all known antibiotics in a day. The team plans further study in partnership with a pharmaceutical company or nonprofit, and they hope to eventually prove it safe and effective for use in humans.
This last bit remains the trickiest step, given the cost of getting a new drug approved. But Collins hopes algorithms like theirs will help. “We could dramatically reduce the cost required to get through clinical trials,” he told the Financial Times.
A Universe of Drugs Awaits
The bigger story may be what happens next.
How many novel antibiotics await discovery, and how far can AI screening take us? The initial 6,000 compounds scanned by Barzilay and Collins’s team are a drop in the bucket.
They’ve already begun digging deeper by setting the algorithm loose on 100 million molecules from an online library of 1.5 billion compounds called the ZINC15 database. This first search took three days and turned up 23 more candidates that, like halicin, differ structurally from existing antibiotics and may be safe for humans. Two of these—which the team will study further—appear to be especially powerful.
Even more ambitiously, Barzilay hopes the approach can find or even design novel antibiotics that kill bad bacteria with alacrity while sparing the good guys. In this way, a round of antibiotics would cure whatever ails you without taking out your whole gut microbiome in the process.
All this is part of a larger movement to use machine learning algorithms in the long, expensive process of drug discovery. Other players in the area are also training AI on the vast possibility space of drug-like compounds. Last fall, one of the leaders in the area, Insilico, was challenged by a partner to see just how fast their method could do the job. The company turned out a new proof-of-concept drug candidate in only 46 days.
The field is still developing, however, and it remains to be seen exactly how valuable these approaches will be in practice. Barzilay is optimistic, though.
“There is still a question of whether machine-learning tools are really doing something intelligent in healthcare, and how we can develop them to be workhorses in the pharmaceuticals industry,” she said. “This shows how far you can adapt this tool.”
Image Credit: Halicin (top row) prevented the development of antibiotic resistance in E. coli, while ciprofloxacin (bottom row) did not. Collins Lab at MIT