Is Digital Learning Still Second Best?
As Covid-19 continues to spread, the world has gone digital on an unprecedented scale. Tens of thousands of employees are working from home, and huge conferences, like the Google I/O and Apple WWDC software extravaganzas, plan to experiment with digital events.
Universities too are sending students home. This might have meant an extended break from school not too long ago. But no more. As lecture halls go empty, an experiment into digital learning at scale is ramping up. In the US alone, over 100 universities, from Harvard to Duke, are offering online classes to students to keep the semester going.
While digital learning has been improving for some time, Covid-19 may not only tip us further into a more digitally connected reality, but also help us better appreciate its benefits. This is important because historically, digital learning has been viewed as inferior to traditional learning. But that may be changing.
The Inversion
We often think about digital technologies as ways to reach people without access to traditional services—online learning for children who don’t have schools nearby or telemedicine for patients with no access to doctors. And while these solutions have helped millions of people, they’re often viewed as “second best” and “better than nothing.” Even in more resource-rich environments, there’s an assumption one should pay more to attend an event in person—a concert, a football game, an exercise class—while digital equivalents are extremely cheap or free. Why is this? And is the situation about to change?
Take the case of Dr. Sanjeev Arora, a professor of medicine at the University of New Mexico. Arora started Project Echo because he was frustrated by how many late-stage cases of hepatitis C he encountered in rural New Mexico. He realized that if he had reached patients sooner, he could have prevented needless deaths. The solution? Digital learning for local health workers.
Project Echo connects rural healthcare practitioners to specialists at top health centers by video. The approach is collaborative: Specialists share best practices and work through cases with participants, helping them apply those practices in the real world and learn from edge cases. Alongside the expert presentations, there are plenty of opportunities to ask questions and interact with specialists.
The method forms a digital loop of learning, practice, assessment, and adjustment.
Since 2003, Project Echo has scaled to 800 locations in 39 countries and trained over 90,000 healthcare providers. Most notably, a study in The New England Journal of Medicine found that the outcomes of hepatitis C treatment given by Project Echo-trained healthcare workers in rural and underserved areas were similar to outcomes at university medical centers. That is, digital learning in this context was equivalent to high-quality in-person learning.
If that is possible today with simple tools, will digital programs surpass traditional medical centers and schools in the future? Can digital learning more generally follow suit and achieve the same success? Perhaps. Going digital brings its own special toolset to the table, too.
The Benefits of Digital
If you’re training people online, you can record the session to better understand their engagement levels—or even add artificial intelligence to analyze it in real time. Ahura AI, for example, founded by Bryan Talebi, aims to upskill workers through online training. An early study of their method suggests they can significantly speed up learning by analyzing users’ real-time emotions—like frustration or distraction—and adjusting the lesson plan or difficulty on the fly.
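Ahura AI hasn’t published how its system works under the hood, but the underlying feedback loop is easy to picture. The sketch below is purely hypothetical: the signal names, thresholds, and adjustment sizes are assumptions for illustration, not details of the actual product.

```python
# Minimal sketch of an emotion-adaptive lesson loop (illustrative only;
# the signals and thresholds are assumptions, not Ahura AI's actual system).

def next_difficulty(current: float, frustration: float, distraction: float) -> float:
    """Nudge lesson difficulty based on estimated learner state (signals in 0.0-1.0)."""
    if frustration > 0.7:        # learner is struggling: ease off
        current -= 0.1
    elif distraction > 0.7:      # learner is bored or tuned out: raise the challenge
        current += 0.1
    return min(max(current, 0.0), 1.0)  # keep difficulty within [0, 1]

# Example: a frustrated learner gets an easier next exercise.
print(next_difficulty(current=0.5, frustration=0.8, distraction=0.2))  # -> 0.4
```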
Other benefits of digital learning include the near-instantaneous download of course materials—rather than printing and shipping books—and being able to more easily report grades and other results, a requirement for many schools and social services organizations. And of course, as other digitized industries show, digital learning can grow and scale further at much lower costs.
To that last point, 360ed, a digital learning startup founded in 2016 by Hla Hla Win, now serves millions of children in Myanmar with augmented reality lesson plans. And Global Startup Ecosystem, founded by Christine Souffrant Ntim and Einstein Kofi Ntim in 2015, is the world’s first and largest digital accelerator program. Their entirely online programs support over 1,000 companies in 90 countries. It’s astonishing how fast both of these organizations have grown.
Notably, both examples include offline experiences too. Many of the 360ed lesson plans come with paper flashcards children use with their smartphones because the online-offline interaction improves learning. The Global Startup Ecosystem also hosts about 10 additional in-person tech summits around the world on various topics through a related initiative.
Looking further ahead, probably the most important benefit of online learning will be its potential to integrate with other digital systems in the workplace.
Imagine a medical center with perfect, real-time information about every patient and treatment, and imagine that information (anonymized and kept private) being centralized, analyzed, and shared with medical centers, research labs, pharmaceutical companies, clinical trials, policy makers, and medical students around the world. Just as self-driving cars can learn to drive better by having access to the experiences of other self-driving cars, so too can any group working to solve complex, time-sensitive challenges learn from and build on each other’s experiences.
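That kind of pooling doesn’t have to mean shipping raw patient records around. One common approach is federated learning, in which each site trains on its own data and only the resulting model updates are shared and averaged. The snippet below is a minimal, hypothetical sketch of that averaging step, not a description of any existing medical network.

```python
# Hypothetical sketch of federated averaging: each hospital shares only its
# locally trained model weights, never raw patient records, and a central
# server averages the contributions into one shared model.
import numpy as np

def federated_average(site_weights):
    """Combine locally trained weight vectors into a single shared model."""
    return np.mean(site_weights, axis=0)

# Three sites train locally and contribute only their weight vectors.
site_a = np.array([0.2, 0.9, 0.4])
site_b = np.array([0.3, 0.8, 0.5])
site_c = np.array([0.1, 1.0, 0.3])

shared_model = federated_average([site_a, site_b, site_c])
print(shared_model)  # -> [0.2 0.9 0.4]
```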
Why This Matters
While in the long term the world will likely end up combining the best aspects of traditional and digital learning, it’s important in the near term to be more aware of the assumptions we make about digital technologies. Some of the most pioneering work in education, healthcare, and other industries may not be highly visible right now because it is in a virtual setting. Most people are unaware, for example, that the busiest emergency room in rural America is already virtual.
Once they start converging with other digital technologies, these innovations will likely become the mainstream system for all of us. Which raises more questions: What is the best business model for these virtual services? If they start delivering better healthcare and educational outcomes than traditional institutions, should they charge more? Hopefully, we will see an even bigger shift occurring, in which technology allows us to provide high quality education, healthcare, and other services to everyone at more affordable prices than today.
These are some of the topics we can consider as Covid-19 forces us into uncharted territory.
Image Credit: Andras Vas / Unsplash
AI Is an Energy-Guzzler. We Need to ...
There is a saying that has emerged among the tech set in recent years: AI is the new electricity. The platitude refers to the disruptive power of artificial intelligence for driving advances in everything from transportation to predicting the weather.
Of course, the computers and data centers that support AI’s complex algorithms are very much dependent on electricity. While that may seem pretty obvious, it may be surprising to learn that AI can be extremely power-hungry, especially when it comes to training the models that enable machines to recognize your face in a photo or Alexa to understand a voice command.
The scale of the problem is difficult to measure, but there have been some attempts to put hard numbers on the environmental cost.
For instance, one paper published on the open-access repository arXiv claimed that the carbon emissions for training a basic natural language processing (NLP) model—algorithms that process and understand language-based data—are equal to the CO2 produced by the average American lifestyle over two years. A more robust model required the equivalent of about 17 years’ worth of emissions.
The authors noted that about a decade ago, NLP models could do the job on a regular commercial laptop. Today, much more sophisticated AI models use specialized hardware like graphics processing units, or GPUs, a chip technology popularized by Nvidia for gaming that also proved capable of supporting computing tasks for AI.
OpenAI, a nonprofit research organization co-founded by tech prophet and profiteer Elon Musk, said that the computing power “used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time” since 2012. That’s about the time that GPUs started making their way into AI computing systems.
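It’s worth pausing on what a 3.4-month doubling time actually implies. A quick back-of-the-envelope calculation, using nothing but the rate OpenAI quoted:

```python
# Back-of-the-envelope: what a 3.4-month doubling time in training compute implies.
doubling_time_months = 3.4

def compute_growth(years: float) -> float:
    """Multiplicative growth in training compute over a span of years at that pace."""
    doublings = (years * 12) / doubling_time_months
    return 2 ** doublings

print(f"{compute_growth(1):.1f}x per year")                  # roughly 11x each year
print(f"{compute_growth(6):,.0f}x over six years")           # ~2.4 million-fold, if the pace held
```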
Getting Smarter About AI Chip Design
While GPUs from Nvidia remain the gold standard in AI hardware today, a number of startups have emerged to challenge the company’s industry dominance. Many are building chipsets designed to work more like the human brain, an area that’s been dubbed neuromorphic computing.
One of the leading companies in this arena is Graphcore, a UK startup that has raised more than $450 million and boasts a valuation of $1.95 billion. The company’s version of the GPU is an IPU, which stands for intelligence processing unit.
To build a computer brain more akin to a human one, the big brains at Graphcore are swapping the precise but time-consuming number-crunching typical of a conventional microprocessor for arithmetic that’s content to get by with less precision.
The results are essentially the same, but IPUs get the job done much quicker. Graphcore claimed it was able to train the popular BERT NLP model in just 56 hours, while tripling throughput and reducing latency by 20 percent.
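To get a feel for why lower-precision arithmetic can be “good enough,” consider a dot product, the workhorse operation of neural networks. The example below uses NumPy’s 16-bit floats purely to illustrate the trade-off; it says nothing about how Graphcore’s IPU actually implements its arithmetic.

```python
# Illustration of the precision trade-off (not Graphcore's actual IPU arithmetic):
# the same dot product in 16-bit and 64-bit floats agrees closely, but the
# 16-bit version is far cheaper for hardware to compute and move around.
import numpy as np

rng = np.random.default_rng(0)
a = rng.random(100)
b = rng.random(100)

exact = float(np.dot(a, b))                                          # 64-bit reference
approx = float(np.dot(a.astype(np.float16), b.astype(np.float16)))   # 16-bit version

print(f"float64: {exact:.4f}")
print(f"float16: {approx:.4f}")
print(f"relative error: {abs(exact - approx) / exact:.2e}")  # small, but not zero
```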
An article in Bloomberg compared the approach to the “human brain shifting from calculating the exact GPS coordinates of a restaurant to just remembering its name and neighborhood.”
Graphcore’s hardware architecture also features more built-in memory processing, boosting efficiency because there’s less need to send as much data back and forth between chips. That’s similar to an approach adopted by a team of researchers in Italy that recently published a paper about a new computing circuit.
The novel circuit uses a device called a memristor that can execute a mathematical function known as a regression in just one operation. The approach attempts to mimic the human brain by processing data directly within the memory.
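In digital terms, what the circuit computes is an ordinary least-squares regression. The snippet below (with a toy, made-up dataset) shows that computation done the conventional way; the point of the memristor array is that the same answer falls out of the circuit’s physics in a single step, rather than from a sequence of arithmetic operations.

```python
# The same computation a memristor crossbar performs physically in one step,
# expressed digitally: fit a line to data by ordinary least squares.
import numpy as np

# Toy dataset: y is roughly 2*x + 1 with a little noise (illustrative values).
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

A = np.column_stack([x, np.ones_like(x)])          # design matrix [x, 1]
(slope, intercept), *_ = np.linalg.lstsq(A, y, rcond=None)

print(f"y = {slope:.2f} * x + {intercept:.2f}")    # close to y = 2x + 1
```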
Daniele Ielmini at Politecnico di Milano, co-author of the Science Advances paper, told Singularity Hub that the main advantages of in-memory computing are the elimination of data movement, which is the main bottleneck of conventional digital computers, and the parallel processing of data enabled by the intimate interaction among the various currents and voltages within the memory array.
Ielmini explained that in-memory computing can have a “tremendous impact on energy efficiency of AI, as it can accelerate very advanced tasks by physical computation within the memory circuit.” He added that such “radical ideas” in hardware design will be needed in order to make a quantum leap in energy efficiency and time.
It’s Not Just a Hardware Problem
The emphasis on designing more efficient chip architecture might suggest that AI’s power hunger is essentially a hardware problem. That’s not the case, Ielmini noted.
“We believe that significant progress could be made by similar breakthroughs at the algorithm and dataset levels,” he said.
He’s not the only one.
One of the key research areas at Qualcomm’s AI research lab is energy efficiency. Max Welling, vice president of Qualcomm’s technology R&D division, has written about the need for more power-efficient algorithms. He has gone so far as to suggest that AI algorithms will be measured by the amount of intelligence they provide per joule.
One emerging area being studied, Welling wrote, is the use of Bayesian deep learning for deep neural networks.
It’s all pretty heady stuff and easily the subject of a PhD thesis. The main thing to understand in this context is that Bayesian deep learning is another attempt to mimic how the brain processes information by introducing random values into the neural network. A benefit of Bayesian deep learning is that it compresses and quantizes the network in order to reduce its complexity. In turn, that reduces the number of “steps” required to recognize a dog as a dog—and the energy required to get the right result.
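To make that slightly more concrete, here is a minimal, hypothetical sketch of the core idea: each weight becomes a distribution rather than a single number, a forward pass samples from it, and weights whose average value is buried in their own uncertainty can be pruned away. The numbers and the pruning rule below are illustrative, not Qualcomm’s method.

```python
# Minimal sketch of the Bayesian idea (illustrative, not Qualcomm's method):
# every weight is a distribution. Weights whose mean is swamped by their own
# uncertainty carry little information and can be pruned, shrinking the
# network and the energy needed to run it.
import numpy as np

rng = np.random.default_rng(42)

weight_means = np.array([0.80, -0.02, 0.50, 0.01])
weight_stds  = np.array([0.05,  0.30, 0.10, 0.25])

# A forward pass uses a random draw from each weight's distribution.
sampled_weights = rng.normal(weight_means, weight_stds)

# Compression: drop weights with a low signal-to-noise ratio (|mean| / std).
snr = np.abs(weight_means) / weight_stds
keep = snr > 1.0

print(f"sampled weights: {np.round(sampled_weights, 2)}")
print(f"kept {keep.sum()} of {keep.size} weights")   # the two noisy weights are pruned
```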
A team at Oak Ridge National Laboratory has previously demonstrated another way to improve AI energy efficiency by converting deep learning neural networks into what’s called a spiking neural network. The researchers built their deep spiking neural network (DSNN) by introducing a stochastic process that adds random values, much like Bayesian deep learning.
The DSNN imitates the way neurons interact with synapses, which send signals between brain cells. Individual “spikes” in the network indicate where to perform computations, lowering energy consumption because unnecessary computations are skipped.
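A toy spiking neuron makes the energy argument easy to see. In the sketch below (a generic leaky integrate-and-fire neuron, not ORNL’s actual DSNN), downstream computation would be triggered only at the time steps where a spike occurs; the rest of the time, nothing needs to run.

```python
# Minimal leaky integrate-and-fire neuron (a generic sketch, not ORNL's DSNN):
# the membrane potential leaks over time and accumulates input; a "spike" is
# emitted only when the potential crosses a threshold, so downstream work
# happens only at those moments and is skipped the rest of the time.

def run_neuron(inputs, threshold=1.0, leak=0.9):
    potential, spikes = 0.0, []
    for t, current in enumerate(inputs):
        potential = potential * leak + current   # leak, then integrate new input
        if potential >= threshold:               # spike and reset
            spikes.append(t)
            potential = 0.0
    return spikes

inputs = [0.2, 0.3, 0.6, 0.1, 0.0, 0.9, 0.4]
print(run_neuron(inputs))   # -> [2, 6]: computation fires only at these steps
```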
The system is being used by cancer researchers to scan millions of clinical reports to unearth insights on causes and treatments of the disease.
Helping battle cancer is only one of many rewards we may reap from artificial intelligence in the future, as long as the benefits of those algorithms outweigh the costs of using them.
“Making AI more energy-efficient is an overarching objective that spans the fields of algorithms, systems, architecture, circuits, and devices,” Ielmini said.
Image Credit: analogicus from Pixabay