Tag Archives: intelligence
#436944 Is Digital Learning Still Second Best?
As Covid-19 continues to spread, the world has gone digital on an unprecedented scale. Tens of thousands of employees are working from home, and huge conferences, like the Google I/O and Apple WWDC software extravaganzas, plan to experiment with digital events.
Universities too are sending students home. This might have meant an extended break from school not too long ago. But no more. As lecture halls go empty, an experiment into digital learning at scale is ramping up. In the US alone, over 100 universities, from Harvard to Duke, are offering online classes to students to keep the semester going.
While digital learning has been improving for some time, Covid-19 may not only tip us further into a more digitally connected reality, but also help us better appreciate its benefits. This is important because historically, digital learning has been viewed as inferior to traditional learning. But that may be changing.
The Inversion
We often think about digital technologies as ways to reach people without access to traditional services—online learning for children who don’t have schools nearby or telemedicine for patients with no access to doctors. And while these solutions have helped millions of people, they’re often viewed as “second best” and “better than nothing.” Even in more resource-rich environments, there’s an assumption one should pay more to attend an event in person—a concert, a football game, an exercise class—while digital equivalents are extremely cheap or free. Why is this? And is the situation about to change?
Take the case of Dr. Sanjeev Arora, a professor of medicine at the University of New Mexico. Arora started Project Echo because he was frustrated by how many late-stage cases of hepatitis C he encountered in rural New Mexico. He realized that if he had reached patients sooner, he could have prevented needless deaths. The solution? Digital learning for local health workers.
Project Echo connects rural healthcare practitioners to specialists at top health centers by video. The approach is collaborative: Specialists share best practices and work through cases with participants to apply them in the real world and learn from edge cases. In addition to expert presentations, there are plenty of opportunities to ask questions and interact with specialists.
The method forms a digital loop of learning, practice, assessment, and adjustment.
Since 2003, Project Echo has scaled to 800 locations in 39 countries and trained over 90,000 healthcare providers. Most notably, a study in The New England Journal of Medicine found that the outcomes of hepatitis C treatment given by Project Echo trained healthcare workers in rural and underserved areas were similar to outcomes at university medical centers. That is, digital learning in this context was equivalent to high-quality in-person learning.
If digital learning can match in-person training today, with simple tools, will it surpass traditional medical centers and schools in the future? Can digital learning more generally follow suit and have the same success? Perhaps. Going digital brings its own special toolset to the table too.
The Benefits of Digital
If you’re training people online, you can record the session to better understand their engagement levels—or even add artificial intelligence to analyze it in real time. Ahura AI, for example, founded by Bryan Talebi, aims to upskill workers through online training. Early studies of their method suggest it can significantly speed up learning by analyzing users’ real-time emotions—like frustration or distraction—and adjusting the lesson plan or difficulty on the fly.
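To make that feedback loop concrete, here is a minimal, hypothetical sketch of the idea. It is not Ahura AI's system; the signal names, thresholds, and adjustment rule are placeholders for whatever an emotion-recognition model and curriculum engine would actually provide.

```python
# Hypothetical adaptive-learning loop: an engagement signal nudges difficulty.
# Placeholder logic for illustration only, not any vendor's implementation.

def adjust_difficulty(difficulty, frustration, distraction,
                      step=0.1, low=0.2, high=0.8):
    """Nudge lesson difficulty (0-1) based on estimated learner state (0-1 scores)."""
    if frustration > high:
        difficulty -= step            # learner is struggling: ease off
    elif distraction > high and frustration < low:
        difficulty += step            # learner seems bored: raise the challenge
    return min(1.0, max(0.0, difficulty))

# Example: a frustrated learner gets an easier next exercise.
print(round(adjust_difficulty(0.6, frustration=0.9, distraction=0.1), 2))  # -> 0.5
```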
Other benefits of digital learning include the near-instantaneous download of course materials—rather than printing and shipping books—and being able to more easily report grades and other results, a requirement for many schools and social services organizations. And of course, as other digitized industries show, digital learning can grow and scale further at much lower costs.
To that last point, 360ed, a digital learning startup founded in 2016 by Hla Hla Win, now serves millions of children in Myanmar with augmented reality lesson plans. And Global Startup Ecosystem, founded by Christine Souffrant Ntim and Einstein Kofi Ntim in 2015, is the world’s first and largest digital accelerator program. Their entirely online programs support over 1,000 companies in 90 countries. It’s astonishing how fast both of these organizations have grown.
Notably, both examples include offline experiences too. Many of the 360ed lesson plans come with paper flashcards children use with their smartphones because the online-offline interaction improves learning. The Global Startup Ecosystem also hosts about 10 additional in-person tech summits around the world on various topics through a related initiative.
Looking further ahead, probably the most important benefit of online learning will be its potential to integrate with other digital systems in the workplace.
Imagine a medical center that has perfect information about every patient and treatment in real time and that this information is (anonymously and privately) centralized, analyzed, and shared with medical centers, research labs, pharmaceutical companies, clinical trials, policy makers, and medical students around the world. Just as self-driving cars can learn to drive better by having access to the experiences of other self-driving cars, so too can any group working to solve complex, time-sensitive challenges learn from and build on each other’s experiences.
Why This Matters
While in the long term the world will likely end up combining the best aspects of traditional and digital learning, it’s important in the near term to be more aware of the assumptions we make about digital technologies. Some of the most pioneering work in education, healthcare, and other industries may not be highly visible right now because it is in a virtual setting. Most people are unaware, for example, that the busiest emergency room in rural America is already virtual.
Once they start converging with other digital technologies, these innovations will likely become mainstream for all of us. That raises more questions: What is the best business model for these virtual services? If they start delivering better healthcare and educational outcomes than traditional institutions, should they charge more? Hopefully, we will see an even bigger shift, in which technology allows us to provide high-quality education, healthcare, and other services to everyone at more affordable prices than today.
These are some of the topics we can consider as Covid-19 forces us into uncharted territory.
Image Credit: Andras Vas / Unsplash
#436774 AI Is an Energy-Guzzler. We Need to ...
There is a saying that has emerged among the tech set in recent years: AI is the new electricity. The platitude refers to the disruptive power of artificial intelligence for driving advances in everything from transportation to predicting the weather.
Of course, the computers and data centers that support AI’s complex algorithms are very much dependent on electricity. While that may seem pretty obvious, it may be surprising to learn that AI can be extremely power-hungry, especially when it comes to training the models that enable machines to recognize your face in a photo or Alexa to understand a voice command.
The scale of the problem is difficult to measure, but there have been some attempts to put hard numbers on the environmental cost.
For instance, one paper published on the open-access repository arXiv claimed that the carbon emissions for training a basic natural language processing (NLP) model—algorithms that process and understand language-based data—are equal to the CO2 produced by the average American lifestyle over two years. A more robust model required the equivalent of about 17 years’ worth of emissions.
The authors noted that about a decade ago, NLP models could do the job on a regular commercial laptop. Today, much more sophisticated AI models use specialized hardware like graphics processing units, or GPUs, a chip technology popularized by Nvidia for gaming that also proved capable of supporting computing tasks for AI.
OpenAI, a nonprofit research organization co-founded by tech prophet and profiteer Elon Musk, said that the computing power “used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time” since 2012. That’s about the time that GPUs started making their way into AI computing systems.
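That doubling time implies staggering growth when compounded. As a rough, back-of-envelope illustration, assuming the 3.4-month doubling held steadily over the roughly six years from 2012 to 2018:

```python
# Growth in training compute implied by a 3.4-month doubling time.
# Assumes the doubling rate held steadily over the whole period (an idealization).
DOUBLING_MONTHS = 3.4

def compute_growth(months):
    """Factor by which training compute grows over the given number of months."""
    return 2 ** (months / DOUBLING_MONTHS)

# Roughly six years (72 months) of sustained doubling:
print(f"{compute_growth(72):,.0f}x")  # about 2.4 million-fold, versus ~8x at a two-year Moore's Law pace
```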
Getting Smarter About AI Chip Design
While GPUs from Nvidia remain the gold standard in AI hardware today, a number of startups have emerged to challenge the company’s industry dominance. Many are building chipsets designed to work more like the human brain, an area that’s been dubbed neuromorphic computing.
One of the leading companies in this arena is Graphcore, a UK startup that has raised more than $450 million and boasts a valuation of $1.95 billion. The company’s version of the GPU is an IPU, which stands for intelligence processing unit.
To build a computer brain more akin to a human one, the big brains at Graphcore are replacing the precise but time-consuming number-crunching typical of a conventional microprocessor with an approach that’s content to get by on less precise arithmetic.
The results are essentially the same, but IPUs get the job done much quicker. Graphcore claimed it was able to train the popular BERT NLP model in just 56 hours, while tripling throughput and reducing latency by 20 percent.
An article in Bloomberg compared the approach to the “human brain shifting from calculating the exact GPS coordinates of a restaurant to just remembering its name and neighborhood.”
Graphcore’s hardware architecture also builds more memory into the processor itself, boosting efficiency because less data needs to be shuttled back and forth between chips. That’s similar to an approach adopted by a team of researchers in Italy that recently published a paper about a new computing circuit.
The novel circuit uses a device called a memristor that can execute a mathematical function known as a regression in just one operation. The approach attempts to mimic the human brain by processing data directly within the memory.
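For context, "a regression" here means a least-squares fit, which a digital processor computes through many sequential multiply-and-accumulate steps. The sketch below shows that conventional digital route for contrast; the memristor array described in the paper aims to settle on the same solution physically within the memory, in effect in a single step.

```python
# Conventional, digital linear regression for contrast: many sequential operations.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))                  # input features
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.01 * rng.normal(size=100)   # noisy observations

# Least-squares solution w = (X^T X)^(-1) X^T y, computed step by step in silicon.
w_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
print(w_hat)  # close to [2.0, -1.0, 0.5]
```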
Daniele Ielmini at Politecnico di Milano, co-author of the Science Advances paper, told Singularity Hub that the main advantage of in-memory computing is the lack of any data movement, which is the main bottleneck of conventional digital computers, as well as the parallel processing of data that enables the intimate interactions among various currents and voltages within the memory array.
Ielmini explained that in-memory computing can have a “tremendous impact on energy efficiency of AI, as it can accelerate very advanced tasks by physical computation within the memory circuit.” He added that such “radical ideas” in hardware design will be needed in order to make a quantum leap in energy efficiency and time.
It’s Not Just a Hardware Problem
The emphasis on designing more efficient chip architecture might suggest that AI’s power hunger is essentially a hardware problem. That’s not the case, Ielmini noted.
“We believe that significant progress could be made by similar breakthroughs at the algorithm and dataset levels,” he said.
He’s not the only one.
One of the key research areas at Qualcomm’s AI research lab is energy efficiency. Max Welling, a vice president in Qualcomm’s technology R&D division, has written about the need for more power-efficient algorithms. He has gone so far as to suggest that AI algorithms will be measured by the amount of intelligence they provide per joule.
One emerging area being studied, Welling wrote, is the use of Bayesian deep learning for deep neural networks.
It’s all pretty heady stuff and easily the subject of a PhD thesis. The main thing to understand in this context is that Bayesian deep learning is another attempt to mimic how the brain processes information by introducing random values into the neural network. A benefit of Bayesian deep learning is that it compresses and quantizes data in order to reduce the complexity of a neural network. In turn, that reduces the number of “steps” required to recognize a dog as a dog—and the energy required to get the right result.
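A toy illustration of that idea, not Qualcomm's method: treat each weight as a distribution with a mean and an uncertainty rather than a single number, sample concrete weights from it, and prune the weights whose uncertainty swamps their mean. The signal-to-noise pruning rule below is one common heuristic from the Bayesian compression literature, used here purely for illustration.

```python
# Toy sketch of Bayesian weights: distributions instead of point values.
import numpy as np

rng = np.random.default_rng(42)
mu = rng.normal(scale=1.0, size=100)        # learned weight means
sigma = rng.uniform(0.05, 1.5, size=100)    # learned weight uncertainties

# Sample one concrete network from the weight distributions (the "random values").
w_sample = mu + sigma * rng.normal(size=100)

# Prune weights with a low signal-to-noise ratio |mu| / sigma; fewer weights
# means fewer "steps" (and less energy) at inference time.
keep = np.abs(mu) / sigma > 1.0
print(f"kept {keep.sum()} of {keep.size} weights after pruning")
```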
A team at Oak Ridge National Laboratory has previously demonstrated another way to improve AI energy efficiency by converting deep learning neural networks into what’s called a spiking neural network. The researchers “spiked” their deep spiking neural network (DSNN) by introducing a stochastic process that adds random values, much like Bayesian deep learning.
The DSNN imitates the way neurons interact with synapses, which send signals between brain cells. Individual “spikes” in the network indicate where to perform computations, lowering energy consumption because the network disregards unnecessary computations.
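A leaky integrate-and-fire neuron, the basic building block of spiking networks, captures the idea in a few lines. This is a simplified illustration rather than Oak Ridge's DSNN: the neuron stays silent, and essentially free, until its accumulated input crosses a threshold, and only then does downstream computation happen.

```python
# Toy leaky integrate-and-fire neuron: computation happens only at spikes.
import numpy as np

rng = np.random.default_rng(1)
inputs = rng.uniform(0.0, 0.3, size=50)   # incoming signal at each time step

potential, threshold, leak = 0.0, 1.0, 0.9
spike_times = []
for t, x in enumerate(inputs):
    potential = leak * potential + x      # leaky accumulation of input
    if potential >= threshold:
        spike_times.append(t)             # spike: only now is downstream work triggered
        potential = 0.0                   # reset after firing

print(f"{len(spike_times)} spikes over {len(inputs)} steps, at times {spike_times}")
```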
The system is being used by cancer researchers to scan millions of clinical reports to unearth insights on causes and treatments of the disease.
Helping battle cancer is only one of many rewards we may reap from artificial intelligence in the future, as long as the benefits of those algorithms outweigh the costs of using them.
“Making AI more energy-efficient is an overarching objective that spans the fields of algorithms, systems, architecture, circuits, and devices,” Ielmini said.
Image Credit: analogicus from Pixabay
#436578 AI Just Discovered a New Antibiotic to ...
Penicillin, one of the greatest discoveries in the history of medicine, was a product of chance.
After returning from summer vacation in September 1928, bacteriologist Alexander Fleming found a colony of bacteria he’d left in his London lab had sprouted a fungus. Curiously, wherever the bacteria contacted the fungus, their cell walls broke down and they died. Fleming guessed the fungus was secreting something lethal to the bacteria—and the rest is history.
Fleming’s discovery of penicillin and its later isolation, synthesis, and scaling in the 1940s released a flood of antibiotic discoveries in the next few decades. Bacteria and fungi had been waging an ancient war against each other, and the weapons they’d evolved over eons turned out to be humanity’s best defense against bacterial infection and disease.
In recent decades, however, the flood of new antibiotics has slowed to a trickle.
Their development is uneconomical for drug companies, and the low-hanging fruit has long been picked. We’re now facing the emergence of strains of super bacteria resistant to one or more antibiotics and an aging arsenal to fight them with. Left unchallenged, an estimated 700,000 deaths worldwide due to drug resistance each year could rise to as many as 10 million by 2050.
Increasingly, scientists warn the tide is turning, and we need a new strategy to keep pace with the remarkably quick and boundlessly creative tactics of bacterial evolution.
But where the golden age of antibiotics was sparked by serendipity, human intelligence, and natural molecular weapons, its sequel may lean on the uncanny eye of artificial intelligence to screen millions of compounds—and even design new ones—in search of the next penicillin.
Hal Discovers a Powerful Antibiotic
In a paper published this week in the journal Cell, MIT researchers took a step in this direction. The team says their machine learning algorithm discovered a powerful new antibiotic.
Named for the AI in 2001: A Space Odyssey, the antibiotic, halicin, successfully wiped out dozens of bacterial strains, including some of the most dangerous drug-resistant bacteria on the World Health Organization’s most wanted list. E. coli also failed to develop resistance to halicin during a month of observation, in stark contrast to the existing antibiotic ciprofloxacin.
“In terms of antibiotic discovery, this is absolutely a first,” Regina Barzilay, a senior author on the study and computer science professor at MIT, told The Guardian.
The algorithm that discovered halicin was trained on the molecular features of 2,500 compounds. Nearly half were FDA-approved drugs, and another 800 were naturally occurring. The researchers specifically tuned the algorithm to look for molecules with antibiotic properties but whose structures would differ from existing antibiotics (as halicin’s does). Using another machine learning program, they screened the results for those likely to be safe for humans.
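The MIT model itself is a machine learning system trained on molecular features, but the screen-and-rank workflow around it is simple to sketch. In the stand-in below, random bit vectors take the place of real molecular fingerprints and an off-the-shelf classifier takes the place of the trained model; only the shape of the workflow is meant to carry over.

```python
# Minimal sketch of the screen-and-rank workflow (placeholder data, not the MIT model).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-ins for the training set: 2,500 feature vectors with grow/no-grow labels.
X_train = rng.integers(0, 2, size=(2500, 256))   # placeholder molecular fingerprints
y_train = rng.integers(0, 2, size=2500)          # 1 = compound inhibited bacterial growth

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Score a (placeholder) screening library and surface the most promising candidates.
X_library = rng.integers(0, 2, size=(10000, 256))
scores = model.predict_proba(X_library)[:, 1]
top_candidates = np.argsort(scores)[::-1][:10]
print("top candidate indices:", top_candidates)
```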
Early studies suggest halicin attacks the bacteria’s cell membranes, disrupting their ability to produce energy. Protecting the cell membrane from halicin might take more than one or two genetic mutations, which could account for its impressive ability to prevent resistance.
“I think this is one of the more powerful antibiotics that has been discovered to date,” James Collins, an MIT professor of bioengineering and senior author of the study, told The Guardian. “It has remarkable activity against a broad range of antibiotic-resistant pathogens.”
Beyond tests in petri-dish bacterial colonies, the team also tested halicin in mice. The antibiotic cleared up infections of a strain of bacteria resistant to all known antibiotics in a day. The team plans further study in partnership with a pharmaceutical company or nonprofit, and they hope to eventually prove it safe and effective for use in humans.
This last bit remains the trickiest step, given the cost of getting a new drug approved. But Collins hopes algorithms like theirs will help. “We could dramatically reduce the cost required to get through clinical trials,” he told the Financial Times.
A Universe of Drugs Awaits
The bigger story may be what happens next.
How many novel antibiotics await discovery, and how far can AI screening take us? The initial 6,000 compounds scanned by Barzilay and Collins’s team are a drop in the bucket.
They’ve already begun digging deeper by setting the algorithm loose on 100 million molecules from an online library of 1.5 billion compounds called the ZINC15 database. This first search took three days and turned up 23 more candidates that, like halicin, differ structurally from existing antibiotics and may be safe for humans. Two of these—which the team will study further—appear to be especially powerful.
Even more ambitiously, Barzilay hopes the approach can find or even design novel antibiotics that kill bad bacteria with alacrity while sparing the good guys. In this way, a round of antibiotics would cure whatever ails you without taking out your whole gut microbiome in the process.
All this is part of a larger movement to use machine learning algorithms in the long, expensive process of drug discovery. Other players in the area are also training AI on the vast possibility space of drug-like compounds. Last fall, one of the leaders in the area, Insilico, was challenged by a partner to see just how fast their method could do the job. The company turned out a new proof-of-concept drug candidate in only 46 days.
The field is still developing, however, and it has yet to be seen exactly how valuable these approaches will be in practice. Barzilay is optimistic though.
“There is still a question of whether machine-learning tools are really doing something intelligent in healthcare, and how we can develop them to be workhorses in the pharmaceuticals industry,” she said. “This shows how far you can adapt this tool.”
Image Credit: Halicin (top row) prevented the development of antibiotic resistance in E. coli, while ciprofloxacin (bottom row) did not. Collins Lab at MIT