#436944 Is Digital Learning Still Second Best?
As Covid-19 continues to spread, the world has gone digital on an unprecedented scale. Tens of thousands of employees are working from home, and huge conferences, like the Google I/O and Apple WWDC software extravaganzas, plan to experiment with digital events.
Universities too are sending students home. This might have meant an extended break from school not too long ago. But no more. As lecture halls go empty, an experiment into digital learning at scale is ramping up. In the US alone, over 100 universities, from Harvard to Duke, are offering online classes to students to keep the semester going.
While digital learning has been improving for some time, Covid-19 may not only tip us further into a more digitally connected reality, but also help us better appreciate its benefits. This is important because historically, digital learning has been viewed as inferior to traditional learning. But that may be changing.
The Inversion
We often think about digital technologies as ways to reach people without access to traditional services—online learning for children who don’t have schools nearby or telemedicine for patients with no access to doctors. And while these solutions have helped millions of people, they’re often viewed as “second best” and “better than nothing.” Even in more resource-rich environments, there’s an assumption one should pay more to attend an event in person—a concert, a football game, an exercise class—while digital equivalents are extremely cheap or free. Why is this? And is the situation about to change?
Take the case of Dr. Sanjeev Arora, a professor of medicine at the University of New Mexico. Arora started Project Echo because he was frustrated by how many late-stage cases of hepatitis C he encountered in rural New Mexico. He realized that if he had reached patients sooner, he could have prevented needless deaths. The solution? Digital learning for local health workers.
Project Echo connects rural healthcare practitioners to specialists at top health centers by video. The approach is collaborative: specialists share best practices and work through real-world cases with participants, helping them apply those practices and learn from edge cases. In addition to expert presentations, there are plenty of opportunities to ask questions and interact with the specialists.
The method forms a digital loop of learning, practice, assessment, and adjustment.
Since 2003, Project Echo has scaled to 800 locations in 39 countries and trained over 90,000 healthcare providers. Most notably, a study in The New England Journal of Medicine found that the outcomes of hepatitis C treatment given by Project Echo-trained healthcare workers in rural and underserved areas were similar to outcomes at university medical centers. That is, digital learning in this context was equivalent to high-quality in-person learning.
If that kind of parity is possible today, with simple tools, might digital programs surpass traditional medical centers and schools in the future? Can digital learning more generally follow suit and have the same success? Perhaps. Going digital brings its own special toolset to the table too.
The Benefits of Digital
If you’re training people online, you can record the session to better understand their engagement levels—or even add artificial intelligence to analyze it in real time. Ahura AI, for example, founded by Bryan Talebi, aims to upskill workers through online training. An early study of their method suggests they can significantly speed up learning by analyzing users’ real-time emotions—like frustration or distraction—and adjusting the lesson plan or difficulty on the fly.
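As a rough illustration of how that kind of real-time adaptation can work, here is a minimal sketch. The emotion labels, thresholds, and function below are hypothetical assumptions for illustration, not Ahura AI's actual system.

```python
# Hypothetical sketch of emotion-driven difficulty adjustment. The emotion
# labels, thresholds, and function are illustrative assumptions, not Ahura
# AI's actual implementation.

def adjust_difficulty(current_level: int, emotions: dict) -> int:
    """Nudge lesson difficulty based on estimated learner state (0.0-1.0 scores)."""
    if emotions.get("frustration", 0.0) > 0.7:
        return max(1, current_level - 1)   # learner is struggling: ease off
    if emotions.get("distraction", 0.0) > 0.7:
        return current_level               # keep level; a real system might switch topics
    if emotions.get("engagement", 0.0) > 0.8:
        return current_level + 1           # learner is in flow: raise the bar
    return current_level

# Example: high frustration detected mid-lesson drops the level from 3 to 2.
print(adjust_difficulty(3, {"frustration": 0.85, "engagement": 0.4}))
```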
Other benefits of digital learning include the near-instantaneous download of course materials—rather than printing and shipping books—and being able to more easily report grades and other results, a requirement for many schools and social services organizations. And of course, as other digitized industries show, digital learning can grow and scale further at much lower costs.
To that last point, 360ed, a digital learning startup founded in 2016 by Hla Hla Win, now serves millions of children in Myanmar with augmented reality lesson plans. And Global Startup Ecosystem, founded by Christine Souffrant Ntim and Einstein Kofi Ntim in 2015, is the world’s first and largest digital accelerator program. Their entirely online programs support over 1,000 companies in 90 countries. It’s astonishing how fast both of these organizations have grown.
Notably, both examples include offline experiences too. Many of the 360ed lesson plans come with paper flashcards children use with their smartphones because the online-offline interaction improves learning. The Global Startup Ecosystem also hosts about 10 additional in-person tech summits around the world on various topics through a related initiative.
Looking further ahead, probably the most important benefit of online learning will be its potential to integrate with other digital systems in the workplace.
Imagine a medical center with perfect, real-time information about every patient and treatment, and imagine that this information is (anonymously and privately) centralized, analyzed, and shared with medical centers, research labs, pharmaceutical companies, clinical trials, policy makers, and medical students around the world. Just as self-driving cars can learn to drive better by having access to the experiences of other self-driving cars, so too can any group working to solve complex, time-sensitive challenges learn from and build on each other’s experiences.
Why This Matters
While in the long term the world will likely end up combining the best aspects of traditional and digital learning, it’s important in the near term to be more aware of the assumptions we make about digital technologies. Some of the most pioneering work in education, healthcare, and other industries may not be highly visible right now because it is in a virtual setting. Most people are unaware, for example, that the busiest emergency room in rural America is already virtual.
Once they start converging with other digital technologies, these innovations will likely become the mainstream system for all of us. Which raises more questions: What is the best business model for these virtual services? If they start delivering better healthcare and educational outcomes than traditional institutions, should they charge more? Hopefully, we will see an even bigger shift occurring, in which technology allows us to provide high quality education, healthcare, and other services to everyone at more affordable prices than today.
These are some of the topics we can consider as Covid-19 forces us into uncharted territory.
Image Credit: Andras Vas / Unsplash
#436911 Scientists Linked Artificial and ...
Scientists have linked up two silicon-based artificial neurons with a biological one across multiple countries into a fully functional network. Using standard internet protocols, they established a chain of communication whereby an artificial neuron controls a living, biological one and passes the information on to another artificial one.
Whoa.
We’ve talked plenty about brain-computer interfaces and novel computer chips that resemble the brain. We’ve covered how those “neuromorphic” chips could link up into tremendously powerful computing entities, using engineered communication nodes called artificial synapses.
As Moore’s law is dying, we even said that neuromorphic computing is one path towards the future of extremely powerful, low energy consumption artificial neural network-based computing—in hardware—that could in theory better link up with the brain. Because the chips “speak” the brain’s language, in theory they could become neuroprosthesis hubs far more advanced and “natural” than anything currently possible.
This month, an international team put all of those ingredients together, turning theory into reality.
The three labs, scattered across Padova, Italy, Zurich, Switzerland, and Southampton, England, collaborated to create a fully self-controlled, hybrid artificial-biological neural network that communicated using biological principles, but over the internet.
The three-neuron network, linked through artificial synapses that emulate the real thing, was able to reproduce a classic neuroscience experiment that’s considered the basis of learning and memory in the brain. In other words, artificial neuron and synapse “chips” have progressed to the point where they can actually use a biological neuron intermediary to form a circuit that, at least partially, behaves like the real thing.
That’s not to say cyborg brains are coming soon. The experiment only recreated a small network that supports excitatory transmission in the hippocampus—a critical region that supports memory—and most brain functions require enormous cross-talk between numerous neurons and circuits. Nevertheless, the study is a jaw-dropping demonstration of how far we’ve come in recreating biological neurons and synapses in artificial hardware.
And perhaps one day, the currently “experimental” neuromorphic hardware will be integrated into broken biological neural circuits as bridges to restore movement, memory, personality, and even a sense of self.
The Artificial Brain Boom
One important thing: this study relies heavily on a decade of research into neuromorphic computing, or the implementation of brain functions inside computer chips.
The best-known example is perhaps IBM’s TrueNorth, which leveraged the brain’s computational principles to build a completely different computer than what we have today. Today’s computers run on a von Neumann architecture, in which memory and processing modules are physically separate. In contrast, the brain’s computing and memory are simultaneously achieved at synapses, small “hubs” on individual neurons that talk to adjacent ones.
Because memory and processing occur at the same site, biological neurons don’t have to shuttle data back and forth between processing and storage compartments, massively reducing processing time and energy use. What’s more, a neuron’s history also influences how it behaves in the future, increasing flexibility and adaptability compared to computers. As deep learning, which loosely mimics neural processing, has become the prima donna of AI, the need to reduce power while boosting speed and flexible learning has become ever more pressing in the AI community.
Neuromorphic computing was partially born out of this need. Most chips utilize special ingredients that change their resistance (or other physical characteristics) to mimic how a neuron might adapt to stimulation. Some chips emulate a whole neuron, that is, how it responds to a history of stimulation—does it get easier or harder to fire? Others imitate synapses themselves, that is, how easily they will pass on the information to another neuron.
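For intuition, a leaky integrate-and-fire neuron is one of the simplest models of “a whole neuron” that such chips emulate in hardware. The sketch below is a generic software caricature with made-up parameter values, not the behavior of any particular chip.

```python
# Minimal leaky integrate-and-fire neuron: a software caricature of what a
# neuromorphic chip implements in hardware. Parameter values are arbitrary.

def simulate_lif(input_currents, threshold=1.0, leak=0.9, reset=0.0):
    """Return a list of 0/1 spikes for a stream of input currents."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current  # integrate input, with leak
        if potential >= threshold:              # fire once threshold is crossed
            spikes.append(1)
            potential = reset                   # reset membrane potential after a spike
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.3, 0.6, 0.1, 0.9]))  # [0, 0, 1, 0, 0]
```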
Although on toy problems single neuromorphic chips have proven far more efficient and powerful than conventional chips running machine learning algorithms, so far few people have tried putting the artificial components together with biological ones in the ultimate test.
That’s what this study did.
A Hybrid Network
Still with me? Let’s talk network.
It’s gonna sound complicated, but remember: learning is the formation of neural networks, and neurons that fire together wire together. To rephrase: during learning, neurons spontaneously organize into networks so that encountering the same stimulus in the future re-triggers the entire network. To “wire” together, downstream neurons become more responsive to their upstream neural partners, so that even a whisper will cause them to activate. In contrast, some types of stimulation will cause the downstream neuron to “chill out” so that only an upstream “shout” will trigger downstream activation.
Both these properties—easier or harder to activate downstream neurons—are essentially how the brain forms connections. The “amping up,” in neuroscience jargon, is long-term potentiation (LTP), whereas the down-tuning is long-term depression (LTD). These two phenomena were first discovered in the rodent hippocampus more than half a century ago, and ever since have been considered the biological basis of how the brain learns and remembers; they’re also implicated in neurological problems such as addiction (seriously, you can’t pass Neuro 101 without learning about LTP and LTD!).
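To make the distinction concrete, here is a textbook-style toy of spike-timing-dependent plasticity in code: the synaptic “weight” grows when the upstream neuron fires just before the downstream one (LTP) and shrinks when the order is reversed (LTD). This is a generic caricature for illustration, not the stimulation protocol used in the study discussed here.

```python
# Toy spike-timing-dependent plasticity rule illustrating LTP vs. LTD.
# A generic textbook caricature, not the protocol used in the actual study.

def update_weight(weight, dt, lr=0.05, w_min=0.0, w_max=1.0):
    """dt = t_post - t_pre. Pre-before-post (dt > 0) potentiates; the reverse depresses."""
    if dt > 0:
        weight += lr * (w_max - weight)   # LTP: downstream neuron gets easier to trigger
    elif dt < 0:
        weight -= lr * (weight - w_min)   # LTD: downstream neuron "chills out"
    return min(max(weight, w_min), w_max)

w = 0.5
for _ in range(10):                        # repeated pre-before-post pairings
    w = update_weight(w, dt=+5)
print(round(w, 3))                         # weight climbs toward w_max (~0.70 here)
```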
So it’s perhaps especially salient that one of the first artificial-brain hybrid networks recapitulated this classic result.
To visualize: the three-neuron network began in Switzerland, with an artificial neuron with the badass name of “silicon spiking neuron.” That neuron is linked to an artificial synapse, a “memristor” located in the UK, which is then linked to a biological rat neuron cultured in Italy. The rat neuron has a “smart” microelectrode, controlled by the artificial synapse, to stimulate it. This is the artificial-to-biological pathway.
Meanwhile, the rat neuron in Italy also has electrodes that listen in on its electrical signaling. This signaling is passed back to another artificial synapse in the UK, which is then used to control a second artificial neuron back in Switzerland. This is the biological-to-artificial return pathway. As a testament to how far we’ve come in digitizing neural signaling, all of the biological neural responses are digitized and sent over the internet to control their faraway artificial partner.
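Put crudely in code, the signal path is a relay: artificial neuron to artificial synapse to biological neuron to artificial synapse to artificial neuron, with each hop digitized and sent over ordinary internet protocols. The sketch below only mirrors that topology with stand-in functions; the real memristor physics, microelectrode control, and networking are of course far more involved.

```python
# Schematic of the three-lab relay described above. Each function is a
# stand-in for real hardware (silicon neuron, memristive synapse, cultured
# rat neuron); nothing here reflects the actual devices' internals.

def silicon_neuron(stimulus):
    return 1.0 if stimulus > 0.5 else 0.0   # fires, or stays silent

def memristive_synapse(spike, weight):
    return spike * weight                    # scales the signal it passes on

def biological_neuron(drive):
    return 1.0 if drive > 0.3 else 0.0       # microelectrode-driven response

# Zurich -> Southampton -> Padova -> Southampton -> Zurich
spike_zurich = silicon_neuron(stimulus=0.8)
drive_padova = memristive_synapse(spike_zurich, weight=0.6)
spike_padova = biological_neuron(drive_padova)
drive_return = memristive_synapse(spike_padova, weight=0.4)
spike_out = silicon_neuron(drive_return)

# With the return synapse "depressed" (low weight), the final neuron stays silent.
print(spike_zurich, spike_padova, spike_out)  # 1.0 1.0 0.0
```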
Here’s the crux: to demonstrate a functional neural network, just having the biological neuron passively “pass on” electrical stimulation isn’t enough. It has to show the capacity to learn, that is, to be able to mimic the amping up and down-tuning that are LTP and LTD, respectively.
You’ve probably guessed the results: certain stimulation patterns to the first artificial neuron in Switzerland changed how the artificial synapse in the UK operated. This, in turn, changed the stimulation to the biological neuron, so that it either amped up or toned down depending on the input.
Similarly, the response of the biological neuron altered the second artificial synapse, which then controlled the output of the second artificial neuron. Altogether, the biological and artificial components seamlessly linked up, over thousands of miles, into a functional neural circuit.
Cyborg Mind-Meld
So…I’m still picking my jaw up off the floor.
It’s utterly insane seeing a classic neuroscience learning experiment repeated with an integrated network with artificial components. That said, a three-neuron network is far from the thousands of synapses (if not more) needed to truly re-establish a broken neural circuit in the hippocampus, which DARPA has been aiming to do. And LTP/LTD have come under fire recently as the de facto brain mechanism for learning, though so far they remain cemented as neuroscience dogma.
However, this is one of the few studies where you see fields coming together. As Richard Feynman famously said, “What I cannot create, I do not understand.” Even though neuromorphic chips were built on a high-level rather than molecular-level understanding of how neurons work, the study shows that artificial versions can still synapse with their biological counterparts. We’re not just on the right path toward understanding the brain, we’re recreating it in hardware, if just a little.
While the study doesn’t have immediate use cases, it is a practical boost for both the neuromorphic computing and neuroprosthetic fields.
“We are very excited with this new development,” said study author Dr. Themis Prodromakis at the University of Southampton. “On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI chips.”
Image Credit: Gerd Altmann from Pixabay
#436484 If Machines Want to Make Art, Will ...
Assuming that the emergence of consciousness in artificial minds is possible, those minds will feel the urge to create art. But will we be able to understand it? To answer this question, we need to consider two subquestions: when does the machine become an author of an artwork? And how can we form an understanding of the art that it makes?
Empathy, we argue, is the force behind our capacity to understand works of art. Think of what happens when you are confronted with an artwork. We maintain that, to understand the piece, you use your own conscious experience to ask what could possibly motivate you to make such an artwork yourself—and then you use that first-person perspective to try to come to a plausible explanation that allows you to relate to the artwork. Your interpretation of the work will be personal and could differ significantly from the artist’s own reasons, but if we share sufficient experiences and cultural references, it might be a plausible one, even for the artist. This is why we can relate so differently to a work of art after learning that it is a forgery or imitation: the artist’s intent to deceive or imitate is very different from the attempt to express something original. Gathering contextual information before jumping to conclusions about other people’s actions—in art, as in life—can enable us to relate better to their intentions.
But the artist and you share something far more important than cultural references: you share a similar kind of body and, with it, a similar kind of embodied perspective. Our subjective human experience stems, among many other things, from being born and slowly educated within a society of fellow humans, from fighting the inevitability of our own death, from cherishing memories, from the lonely curiosity of our own mind, from the omnipresence of the needs and quirks of our biological body, and from the way it dictates the space- and time-scales we can grasp. All conscious machines will have embodied experiences of their own, but in bodies that will be entirely alien to us.
We are able to empathize with nonhuman characters or intelligent machines in human-made fiction because they have been conceived by other human beings from the only subjective perspective accessible to us: “What would it be like for a human to behave as x?” In order to understand machinic art as such—and assuming that we stand a chance of even recognizing it in the first place—we would need a way to conceive a first-person experience of what it is like to be that machine. That is something we cannot do even for beings that are much closer to us. It might very well happen that we understand some actions or artifacts created by machines of their own volition as art, but in doing so we will inevitably anthropomorphize the machine’s intentions. Art made by a machine can be meaningfully interpreted in a way that is plausible only from the perspective of that machine, and any coherent anthropomorphized interpretation will be implausibly alien from the machine perspective. As such, it will be a misinterpretation of the artwork.
But what if we grant the machine privileged access to our ways of reasoning, to the peculiarities of our perception apparatus, to endless examples of human culture? Wouldn’t that enable the machine to make art that a human could understand? Our answer is yes, but this would also make the artworks human—not authentically machinic. All examples so far of “art made by machines” are actually just straightforward examples of human art made with computers, with the artists being the computer programmers. It might seem like a strange claim: how can the programmers be the authors of the artwork if, most of the time, they can’t control—or even anticipate—the actual materializations of the artwork? It turns out that this is a long-standing artistic practice.
Suppose that your local orchestra is playing Beethoven’s Symphony No 7 (1812). Even though Beethoven will not be directly responsible for any of the sounds produced there, you would still say that you are listening to Beethoven. Your experience might depend considerably on the interpretation of the performers, the acoustics of the room, the behavior of fellow audience members or your state of mind. Those and other aspects are the result of choices made by specific individuals or of accidents happening to them. But the author of the music? Ludwig van Beethoven. Let’s say that, as a somewhat odd choice for the program, John Cage’s Imaginary Landscape No 4 (March No 2) (1951) is also played, with 24 performers controlling 12 radios according to a musical score. In this case, the responsibility for the sounds being heard should be attributed to unsuspecting radio hosts, or even to electromagnetic fields. Yet, the shaping of sounds over time—the composition—should be credited to Cage. Each performance of this piece will vary immensely in its sonic materialization, but it will always be a performance of Imaginary Landscape No 4.
Why should we change these principles when artists use computers if, in these respects at least, computer art does not bring anything new to the table? The (human) artists might not be in direct control of the final materializations, or even be able to predict them, but despite that they are the authors of the work. Various materializations of the same idea—in this case formalized as an algorithm—are instantiations of the same work manifesting different contextual conditions. In fact, a common use of computation in the arts is the production of variations of a process, and artists make extensive use of systems that are sensitive to initial conditions, external inputs, or pseudo-randomness to deliberately avoid repetition of outputs. Having a computer execute a procedure to build an artwork, even one using pseudo-random processes or machine-learning algorithms, is no different from throwing dice to arrange a piece of music or pursuing innumerable variations of the same formula. After all, the idea of machines that make art has an artistic tradition that long predates the current trend of artworks made by artificial intelligence.
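To make the “variations of the same formula” point concrete, here is a tiny generative sketch: one algorithm, seeded differently, produces different but recognizably related materializations. The scale and rules are arbitrary choices for illustration, not any particular artist’s system.

```python
# Tiny generative-music sketch: one procedure, many materializations.
# The pentatonic scale and stepwise rule are arbitrary illustrative choices.

import random

SCALE = ["C", "D", "E", "G", "A"]

def compose(seed, length=8):
    rng = random.Random(seed)          # a different seed yields a different instantiation
    melody = [rng.choice(SCALE)]
    for _ in range(length - 1):
        step = rng.choice([-1, 0, 1])  # wander stepwise through the scale
        idx = (SCALE.index(melody[-1]) + step) % len(SCALE)
        melody.append(SCALE[idx])
    return melody

print(compose(seed=1))   # one "performance" of the same underlying work
print(compose(seed=2))   # another, differently seeded
```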
Machinic art is a term that we believe should be reserved for art made by an artificial mind’s own volition, not for that based on (or directed towards) an anthropocentric view of art. From a human point of view, machinic artworks will still be procedural, algorithmic, and computational. They will be generative, because they will be autonomous from a human artist. And they might be interactive, with humans or other systems. But they will not be the result of a human deferring decisions to a machine, because the first of those—the decision to make art—needs to be the result of a machine’s volition, intentions, and decisions. Only then will we no longer have human art made with computers, but proper machinic art.
The problem is not whether machines will or will not develop a sense of self that leads to an eagerness to create art. The problem is that if—or when—they do, they will have such a different Umwelt that we will be completely unable to relate to it from our own subjective, embodied perspective. Machinic art will always lie beyond our ability to understand it because the boundaries of our comprehension—in art, as in life—are those of the human experience.
This article was originally published at Aeon and has been republished under Creative Commons.
Image Credit: Rene Böhmer / Unsplash