Reactive Content Will Get to Know You ...
The best storytellers react to their audience. They look for smiles, signs of awe, or boredom; they simultaneously and skillfully read both the story and their listeners. Kevin Brooks, a seasoned storyteller working for Motorola’s Human Interface Labs, explains, “As the storyteller begins, they must tune in to… the audience’s energy. Based on this energy, the storyteller will adjust their timing, their posture, their characterizations, and sometimes even the events of the story. There is a dialog between audience and storyteller.”
Shortly after I read the script for Melita, the latest virtual reality experience from Madrid-based immersive storytelling company Future Lighthouse, CEO Nicolas Alcalá explained to me that the piece is an example of “reactive content,” a concept he’s been working on since his days at Singularity University.
For the first time in history, we have access to technology that can merge the reactive and affective elements of oral storytelling with the affordances of digital media, weaving stunning visuals, rich soundtracks, and complex meta-narratives in a story arena that has the capability to know you more intimately than any conventional storyteller could.
It’s no exaggeration to say that the storytelling potential here is phenomenal.
In short, we can refer to content as reactive if it reads and reacts to users based on their body rhythms, emotions, preferences, and data points. Artificial intelligence is used to analyze users’ behavior or preferences to sculpt unique storylines and narratives, essentially allowing for a story that changes in real time based on who you are and how you feel.
The development of reactive content will allow those working in the industry to go one step further than simply translating the essence of oral storytelling into VR. Rather than having a narrative experience with a digital storyteller who can read you, reactive content has the potential to create an experience with a storyteller who knows you.
This means being able to subtly insert minor personal details that have a specific meaning to the viewer. When we talk to our friends we often use experiences we’ve shared in the past or knowledge of our audience to give our story as much resonance as possible. Targeting personal memories and aspects of our lives is a highly effective way to elicit emotions and aid in visualizing narratives. When you can do this with the addition of visuals, music, and characters—all lifted from someone’s past—you have the potential for overwhelmingly engaging and emotionally-charged content.
Future Lighthouse informs me that for now, reactive content will rely primarily on biometric feedback technology such as breathing, heartbeat, and eye-tracking sensors. A simple example would be a story in which parts of the environment or soundscape change in sync with the user’s heartbeat and breathing, or characters who call you out for not paying attention.
The next step would be characters and situations that react to the user’s emotions, wherein algorithms analyze biometric information to make inferences about states of emotional arousal (“why are you so nervous?” etc.). Another example would be implementing the use of “arousal parameters,” where the audience can choose what level of “fear” they want from a VR horror story before algorithms modulate the experience using information from biometric feedback devices.
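To make the “arousal parameter” idea concrete, here is a minimal sketch of what such a feedback loop might look like. This is speculative: the sensor call, the scene hook, and the numbers are all hypothetical stand-ins, not anything Future Lighthouse has published.

```python
import random
import time

TARGET_FEAR = 0.6   # user-chosen arousal level: 0 (calm) to 1 (terrified)
RESTING_HR = 65.0   # baseline heart rate; a real app would calibrate per user

def read_heart_rate():
    # Stand-in for a hypothetical biometric sensor SDK call.
    return random.gauss(75, 8)

def estimate_arousal(heart_rate):
    # Crude mapping from beats above baseline to a 0..1 arousal score.
    return max(0.0, min(1.0, (heart_rate - RESTING_HR) / 60.0))

def modulate_scene(intensity):
    # Stand-in for a hypothetical engine hook (lighting, fog, soundtrack).
    print(f"scene intensity -> {intensity:.2f}")

while True:
    arousal = estimate_arousal(read_heart_rate())
    # Calmer than the chosen fear level? Turn the screws. Over target? Ease off.
    # Crucially, the story never pauses to ask.
    error = TARGET_FEAR - arousal
    modulate_scene(intensity=max(0.0, min(1.0, 0.5 + 0.5 * error)))
    time.sleep(1.0)
```

The important property lives in the loop itself: the experience adapts continuously, without the viewer ever lifting a finger.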
The company’s long-term goal is to gather research on storytelling conventions and produce a catalogue of story “wireframes.” This entails distilling the basic formula to different genres so they can then be fleshed out with visuals, character traits, and soundtracks that are tailored for individual users based on their deep data, preferences, and biometric information.
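One way to picture such a “wireframe” is as a genre-level skeleton with user-specific slots waiting to be filled. The structure below is my own speculative rendering of the idea, not Future Lighthouse’s actual format.

```python
from dataclasses import dataclass

@dataclass
class StoryWireframe:
    # The genre-level skeleton: fixed beats that define, say, a ghost story.
    genre: str
    beats: list  # e.g. ["calm opening", "first anomaly", "escalation", "reveal"]

@dataclass
class PersonalizedStory:
    # The wireframe fleshed out for one viewer from their data and biometrics.
    wireframe: StoryWireframe
    visual_palette: str     # drawn from places the viewer knows
    soundtrack_seeds: list  # music tied to the viewer's listening history
    character_traits: dict  # mannerisms echoing people in the viewer's life

ghost_story = StoryWireframe(
    genre="horror",
    beats=["calm opening", "first anomaly", "escalation", "reveal"],
)
my_version = PersonalizedStory(
    wireframe=ghost_story,
    visual_palette="coastal town at dusk",
    soundtrack_seeds=["slowed-down childhood lullaby"],
    character_traits={"guide": "dry humor, familiar accent"},
)
```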
The development of reactive content will go hand in hand with a renewed exploration of diverging, dynamic storylines, and multi-narratives, a concept that hasn’t had much impact in the movie world thus far. In theory, the idea of having a story that changes and mutates is captivating largely because of our love affair with serendipity and unpredictability, a cultural condition theorist Arthur Kroker refers to as the “hypertextual imagination.” This feeling of stepping into the unknown with the possibility of deviation from the habitual translates as a comforting reminder that our own lives can take exciting and unexpected turns at any moment.
The concept entered mainstream culture with the classic Choose Your Own Adventure book series, launched in the late ’70s, which enjoyed great success in literary form. Filmic takes on the theme, however, have made somewhat less of an impression. DVDs like I’m Your Man (1998) and Switching (2003) both use scene selection tools to determine the direction of the storyline.
A more recent example comes from Kino Industries, who claim to have developed the technology to allow filmmakers to produce interactive films in which viewers can use smartphones to quickly vote on which direction the narrative takes at numerous decision points throughout the film.
The main problem with diverging narrative films has been the stop-start nature of the interactive element: when I’m immersed in a story I don’t want to have to pick up a controller or remote to select what’s going to happen next. Every time the audience is given the option to take a new path (“press this button”, “vote on X, Y, Z”), the narrative—and immersion within that narrative—is temporarily halted, and it takes the mind a while to get back into this state of immersion.
Reactive content has the potential to resolve these issues by enabling passive interactivity—that is, input and output without having to pause and actively make decisions or engage with the hardware. This will result in diverging, dynamic narratives that will unfold seamlessly while being dependent on and unique to the specific user and their emotions. Passive interactivity will also remove the game feel that can often be a symptom of interactive experiences and put a viewer somewhere in the middle: still firmly ensconced in an interactive dynamic narrative, but in a much subtler way.
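In code, the difference between button-press branching and passive interactivity comes down to where the branch condition comes from: a controller event versus an inferred emotional state. A toy sketch, with illustrative scene names and the same hypothetical 0-to-1 arousal score as above:

```python
# A tiny branching-narrative graph: each scene maps an inferred emotional
# state to the next scene, so the story forks without a single button press.
SCENES = {
    "lighthouse": {"tense": "storm_rises", "calm": "quiet_shore"},
    "storm_rises": {"tense": "shelter", "calm": "eye_of_storm"},
    "quiet_shore": {"tense": "storm_rises", "calm": "peaceful_ending"},
}

def infer_state(arousal):
    # Derived from biometrics, not from a menu or remote.
    return "tense" if arousal > 0.5 else "calm"

def next_scene(current, arousal):
    branches = SCENES.get(current, {})
    return branches.get(infer_state(arousal), current)

print(next_scene("lighthouse", arousal=0.8))  # -> storm_rises
print(next_scene("lighthouse", arousal=0.2))  # -> quiet_shore
```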
While reading the Melita script I was particularly struck by a scene in which the characters start to engage with the user and there’s a synchronicity between the user’s heartbeat and objects in the virtual world. As the narrative unwinds and the words of Melita’s character get more profound, parts of the landscape, which seemed to be flashing and pulsating at random, come together and start to mimic the user’s heartbeat.
In 2013, Jane Aspell of Anglia Ruskin University (UK) and Lukas Heydrich of the Swiss Federal Institute of Technology showed that a user’s sense of presence and identification with a virtual avatar could be dramatically increased by syncing the on-screen character with the user’s heartbeat. The relationship between bio-digital synchronicity, immersion, and emotional engagement is something that will surely have revolutionary narrative and storytelling potential.
Image Credit: Tithi Luadthong / Shutterstock.com
A ‘Google Maps’ for the Mouse Brain ...
Ask any neuroscientist to draw you a neuron, and it’ll probably look something like a star with two tails: one stubby with extensive tree-like branches, the other willowy, lengthy and dotted with spindly spikes.
While a decent abstraction, this cartoonish image hides the uncomfortable truth that scientists still don’t know much about what many neurons actually look like, not to mention the extent of their connections.
But without untangling the jumbled mess of neural wires that zigzag across the brain, scientists are stumped in trying to answer one of the most fundamental mysteries of the brain: how individual neuronal threads carry and assemble information, which forms the basis of our thoughts, memories, consciousness, and self.
What if there was a way to virtually trace and explore the brain’s serpentine fibers, much like the way Google Maps allows us to navigate the concrete tangles of our cities’ highways?
Thanks to an interdisciplinary team at Janelia Research Campus, we’re on our way. Meet MouseLight, the most extensive map of the mouse brain ever attempted. The ongoing project has an ambitious goal: reconstructing thousands—if not more—of the mouse’s 70 million neurons into a 3D map. (You can play with it here!)
With map in hand, neuroscientists around the world can begin to answer how neural circuits are organized in the brain, and how information flows from one neuron to another across brain regions and hemispheres.
The first release, presented Monday at the Society for Neuroscience Annual Conference in Washington, DC, contains information about the shape and sizes of 300 neurons.
And that’s just the beginning.
“MouseLight’s new dataset is the largest of its kind,” says Dr. Wyatt Korff, director of project teams. “It’s going to change the textbook view of neurons.”
Video: http://mouselight.janelia.org/assets/carousel/ML-Movie.mp4
Brain Atlas
MouseLight is hardly the first rodent brain atlasing project.
The Mouse Brain Connectivity Atlas at the Allen Institute for Brain Science in Seattle tracks neuron activity across small circuits in an effort to trace a mouse’s connectome—a complete atlas of how the firing of one neuron links to the next.
MICrONS (Machine Intelligence from Cortical Networks), the $100 million government-funded “moonshot,” hopes to distill brain computation into algorithms for more powerful artificial intelligence. Its first step? Brain mapping.
What makes MouseLight stand out is its scope and level of detail.
MICrONS, for example, is focused on dissecting a cubic millimeter of the mouse visual processing center. In contrast, MouseLight involves tracing individual neurons across the entire brain.
And while connectomics outlines the major connections between brain regions, the bird’s-eye view entirely misses the intricacies of each individual neuron. This is where MouseLight steps in.
Slice and Dice
At only a fraction of the width of a human hair, neuron projections are hard to capture in their native state. Tug or squeeze the brain too hard, and the long, delicate branches distort or even shred into bits.
In fact, previous attempts at trying to reconstruct neurons at this level of detail topped out at just a dozen, stymied by technological hiccups and sky-high costs.
A few years ago, the MouseLight team set out to automate the entire process, with a few time-saving tweaks. Here’s how it works.
After injecting a mouse with a virus that causes a handful of neurons to produce a green-glowing protein, the team treated the brain with a sugar-alcohol solution. This step “clears” the brain, rendering the beige-colored organ translucent, making it easier for light to penetrate and boosting the signal-to-background noise ratio. The brain is then glued onto a small pedestal, ready for imaging.
Building upon an established method called “two-photon microscopy,” the team then tweaked several parameters to reduce imaging time from days (or weeks) down to a fraction of that. Endearingly known as “2P” by the experts, this type of laser microscope zaps the sample with just enough photons to light up a single plane without damaging the tissue—sharper plane, better focus, crisper image.
After taking an image, the setup activates its vibrating razor and shaves off the imaged section of the brain—a wispy slice about 200 micrometers thick. The process is repeated until the whole brain is imaged.
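As a rough sketch, that image-and-shave cycle reduces to a simple loop. The function names, stub bodies, and total depth below are descriptive stand-ins, not Janelia’s actual software:

```python
SLICE_THICKNESS_UM = 200  # tissue removed by the vibrating razor per cycle

def acquire_two_photon_planes(depth_um, thickness_um):
    # Stand-in for the two-photon pass: excite only the current planes of
    # green-glowing neurons in the cleared tissue and record the image tiles.
    return f"tiles@{depth_um}-{depth_um + thickness_um}um"

def shave_section(thickness_um):
    # Stand-in for the vibrating razor removing the just-imaged section.
    pass

def image_whole_brain(total_depth_um=8000):  # illustrative brain depth
    volume = []
    depth = 0
    while depth < total_depth_um:
        volume.append(acquire_two_photon_planes(depth, SLICE_THICKNESS_UM))
        shave_section(SLICE_THICKNESS_UM)  # expose fresh tissue for next pass
        depth += SLICE_THICKNESS_UM
    return volume  # downstream: stitch the tiles into one 3D volume

print(len(image_whole_brain()))  # 40 image-and-shave cycles at this depth
```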
This setup made imaging 16 to 48 times faster than conventional microscopy, writes team leader Dr. Jayaram Chandrashekar, who published a version of the method early last year in eLife.
The resulting images strikingly highlight every nook and cranny of a neuronal branch, popping out against a pitch-black background. But pretty pictures come at a hefty data cost: each image takes up a whopping 20 terabytes of data—roughly the storage space of 4,000 DVDs, or 10,000 hours of movies.
Stitching individual images back into 3D is an image-processing nightmare. The MouseLight team used a combination of computational power and human prowess to complete this final step.
The reconstructed images are handed off to a mighty team of seven trained neuron trackers. With the help of tracing algorithms developed in-house and a keen eye, each member can track roughly a neuron a day—significantly less time than the week or so previously needed.
A Numbers Game
Even with just 300 fully reconstructed neurons, MouseLight has already revealed new secrets of the brain.
While it’s widely accepted that axons, the neurons’ outgoing projection, can span the entire length of the brain, these extra-long connections were considered relatively rare. (In fact, one previously discovered “giant neuron” was thought to link to consciousness because of its expansive connections).
Images captured from two-photon microscopy show an axon and dendrites protruding from a neuron’s cell body (sphere in center). Image Credit: Janelia Research Center, MouseLight project team
MouseLight blows that theory out of the water.
The data clearly shows that “giant neurons” are far more common than previously thought. For example, four neurons normally associated with taste had wiry branches that stretched all the way into brain areas that control movement and process touch.
“We knew that different regions of the brain talked to each other, but seeing it in 3D is different,” says Dr. Eve Marder at Brandeis University.
“The results are so stunning because they give you a really clear view of how the whole brain is connected.”
With a tried-and-true system in place, the team is now aiming to add 700 neurons to their collection within a year.
But appearance is only part of the story.
We can’t tell everything about a person simply by how they look. Neurons are the same: scientists can only infer so much about a neuron’s function by looking at its shape and position. The team also hopes to profile the gene expression patterns of each neuron, which could provide more hints about their roles in the brain.
MouseLight essentially dissects the neural infrastructure that allows information traffic to flow through the brain. These anatomical highways are just the foundation. Just as in Google Maps, roads form only the critical first layer of the map. Street View, traffic information, and other add-ons come later for a complete look at cities in flux.
The same will happen for understanding our ever-changing brain.
Image Credit: Janelia Research Campus, MouseLight project team
Here’s How to Get to Conscious ...
“We cannot be conscious of what we are not conscious of.” – Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind
Contrary to what the director leads you to believe, the protagonist of Ex Machina, Alex Garland’s 2015 masterpiece, isn’t Caleb, a young programmer tasked with evaluating machine consciousness. Rather, it’s his target, Ava, a breathtaking humanoid AI with a seemingly child-like naïveté and an enigmatic mind.
Like most cerebral movies, Ex Machina leaves the conclusion up to the viewer: was Ava actually conscious? In doing so, it also cleverly avoids a thorny question that has challenged most AI-centric movies to date: what is consciousness, and can machines have it?
Hollywood producers aren’t the only people stumped. As machine intelligence barrels forward at breakneck speed—not only exceeding human performance on games such as DOTA and Go, but doing so without the need for human expertise—the question has once more entered the scientific mainstream.
Are machines on the verge of consciousness?
This week, in a review published in the prestigious journal Science, cognitive scientists Drs. Stanislas Dehaene, Hakwan Lau and Sid Kouider of the Collège de France, University of California, Los Angeles and PSL Research University, respectively, argue: not yet, but there is a clear path forward.
The reason? Consciousness is “resolutely computational,” the authors say, in that it results from specific types of information processing, made possible by the hardware of the brain.
There is no magic juice, no extra spark—in fact, an experiential component (“what is it like to be conscious?”) isn’t even necessary to implement consciousness.
If consciousness results purely from the computations within our three-pound organ, then endowing machines with a similar quality is just a matter of translating biology to code.
Much like the way current powerful machine learning techniques heavily borrow from neurobiology, the authors write, we may be able to achieve artificial consciousness by studying the structures in our own brains that generate consciousness and implementing those insights as computer algorithms.
From Brain to Bot
Without doubt, the field of AI has greatly benefited from insights into our own minds, both in form and function.
For example, deep neural networks, the architecture of algorithms that underlie AlphaGo’s breathtaking sweep against its human competitors, are loosely based on the multi-layered biological neural networks that our brain cells self-organize into.
Reinforcement learning, a type of “training” that teaches AIs to learn from millions of examples, has roots in a centuries-old technique familiar to anyone with a dog: if it moves toward the right response (or result), give a reward; otherwise ask it to try again.
In this sense, translating the architecture of human consciousness to machines seems like an obvious route to artificial consciousness. There’s just one big problem.
“Nobody in AI is working on building conscious machines because we just have nothing to go on. We just don’t have a clue about what to do,” said Dr. Stuart Russell, the author of Artificial Intelligence: A Modern Approach, in a 2015 interview with Science.
Multilayered consciousness
The hard part, long before we can consider coding machine consciousness, is figuring out what consciousness actually is.
To Dehaene and colleagues, consciousness is a multilayered construct with two “dimensions”: C1, the information readily in mind, and C2, the ability to obtain and monitor information about oneself. Both are essential to consciousness, but one can exist without the other.
Say you’re driving a car and the low fuel light comes on. Here, the perception of the fuel-tank light is C1—a mental representation that we can play with: we notice it, act upon it (refill the gas tank) and recall and speak about it at a later date (“I ran out of gas in the boonies!”).
“The first meaning we want to separate (from consciousness) is the notion of global availability,” explains Dehaene in an interview with Science. When you’re conscious of a word, your whole brain is aware of it, in a sense that you can use the information across modalities, he adds.
But C1 is not just a “mental sketchpad.” It represents an entire architecture that allows the brain to draw multiple modalities of information from our senses or from memories of related events, for example.
Unlike subconscious processing, which often relies on specific “modules” competent at a defined set of tasks, C1 is a global workspace that allows the brain to integrate information, decide on an action, and follow through until the end.
As in The Hunger Games, what we call “conscious” is whatever representation, at one point in time, wins the competition to access this mental workspace. The winners are shared among different brain computation circuits and are kept in the spotlight for the duration of decision-making to guide behavior.
Because of these features, C1 consciousness is highly stable and global—all related brain circuits are triggered, the authors explain.
For a complex machine such as an intelligent car, C1 is a first step towards addressing an impending problem, such as a low fuel light. In this example, the light itself is a type of subconscious signal: when it flashes, all of the other processes in the machine remain uninformed, and the car—even if equipped with state-of-the-art visual processing networks—passes by gas stations without hesitation.
With C1 in place, the fuel tank would alert the car computer (allowing the light to enter the car’s “conscious mind”), which in turn checks the built-in GPS to search for the next gas station.
“We think in a machine this would translate into a system that takes information out of whatever processing module it’s encapsulated in, and make it available to any of the other processing modules so they can use the information,” says Dehaene. “It’s a first sense of consciousness.”
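Dehaene’s description maps naturally onto a broadcast pattern familiar to programmers. Below is a toy rendering of a C1-style global workspace using the fuel-light example; this is a sketch of the concept, not an implementation from the paper:

```python
class GlobalWorkspace:
    """Toy C1: the winning signal is broadcast to every registered module."""
    def __init__(self):
        self.modules = {}

    def register(self, name, on_broadcast):
        self.modules[name] = on_broadcast

    def compete_and_broadcast(self, signals):
        # signals: {source: (salience, payload)}; the most salient signal
        # wins the workspace and is shared with all processing modules.
        winner, (_salience, payload) = max(signals.items(), key=lambda s: s[1][0])
        for handler in self.modules.values():
            handler(winner, payload)

workspace = GlobalWorkspace()
workspace.register("navigation",
                   lambda src, msg: print(f"navigation: {src} says {msg!r}; reroute via gas station"))
workspace.register("speech",
                   lambda src, msg: print(f"speech: {src} says {msg!r}; announce low fuel"))

# Without C1, the fuel sensor's signal stays encapsulated in its own module.
# With C1, it enters the workspace and every module can act on it.
workspace.compete_and_broadcast({"fuel_sensor": (0.9, "tank below 10%"),
                                 "radio": (0.2, "song playing")})
```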
Meta-cognition
In a way, C1 reflects the mind’s capacity to access outside information. C2 goes introspective.
The authors define the second facet of consciousness, C2, as “meta-cognition”: reflecting on whether you know or perceive something, or whether you just made an error (“I think I may have filled my tank at the last gas station, but I forgot to keep a receipt to make sure”). This dimension reflects the link between consciousness and sense of self.
C2 is the level of consciousness that allows you to feel more or less confident about a decision when making a choice. In computational terms, it’s an algorithm that spews out the probability that a decision (or computation) is correct, even if it’s often experienced as a “gut feeling.”
C2 also has its claws in memory and curiosity. These self-monitoring algorithms allow us to know what we know or don’t know—so-called “meta-memory,” responsible for that feeling of having something at the tip of your tongue. Monitoring what we know (or don’t know) is particularly important for children, says Dehaene.
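As a toy example of “an algorithm that spews out the probability that a decision is correct,” consider a decision function paired with a monitor that abstains when confidence is low. Purely illustrative:

```python
def decide(evidence):
    # Toy decision: evidence maps each option to its accumulated support.
    best = max(evidence, key=evidence.get)
    total = sum(evidence.values())
    confidence = evidence[best] / total if total else 0.0
    return best, confidence

def metacognitive_monitor(evidence, threshold=0.7):
    # C2-style self-monitoring: act only when confidence clears the bar;
    # otherwise flag the uncertainty (the "tip of the tongue" feeling)
    # and gather more information instead of acting on a shaky decision.
    answer, confidence = decide(evidence)
    if confidence >= threshold:
        return f"answer: {answer} (confidence {confidence:.2f})"
    return f"unsure about {answer!r} (confidence {confidence:.2f}); keep checking"

print(metacognitive_monitor({"filled_tank": 8, "forgot": 2}))  # confident
print(metacognitive_monitor({"filled_tank": 5, "forgot": 4}))  # flags doubt
```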
“Young children absolutely need to monitor what they know in order to…inquire and become curious and learn more,” he explains.
The two aspects of consciousness synergize to our benefit: C1 pulls relevant information into our mental workspace (while discarding other “probable” ideas or solutions), while C2 helps with long-term reflection on whether the conscious thought led to a helpful response.
Going back to the low fuel light example, C1 allows the car to solve the problem in the moment—these algorithms globalize the information, so that the car becomes aware of the problem.
But to solve the problem, the car would need a “catalog of its cognitive abilities”—a self-awareness of what resources it has readily available, for example, a GPS map of gas stations.
“A car with this sort of self-knowledge is what we call having C2,” says Dehaene. Because the signal is globally available, and because it’s being monitored in a way that the machine is looking at itself, the car would care about the low gas light and behave as humans do—lowering fuel consumption and heading for a gas station.
“Most present-day machine learning systems are devoid of any self-monitoring,” the authors note.
But their theory seems to be on the right track. In the few examples where a self-monitoring system has been implemented—either within the structure of the algorithm or as a separate network—the AI has generated “internal models that are meta-cognitive in nature, making it possible for an agent to develop a (limited, implicit, practical) understanding of itself.”
Towards conscious machines
Would a machine endowed with C1 and C2 behave as if it were conscious? Very likely: a smartcar would “know” that it’s seeing something, express confidence in it, report it to others, and find the best solutions for problems. If its self-monitoring mechanisms break down, it may also suffer “hallucinations” or even experience visual illusions similar to humans.
Thanks to C1, it would be able to access the information it has and use it flexibly, and because of C2 it would know the limits of what it knows, says Dehaene. “I think (the machine) would be conscious,” and not merely appear so to humans.
If you’re left with a feeling that consciousness is far more than global information sharing and self-monitoring, you’re not alone.
“Such a purely functional definition of consciousness may leave some readers unsatisfied,” the authors acknowledge.
“But we’re trying to take a radical stance, maybe simplifying the problem. Consciousness is a functional property, and when we keep adding functions to machines, at some point these properties will characterize what we mean by consciousness,” Dehaene concludes.
Image Credit: agsandrew / Shutterstock.com
Does Our Survival Depend on Relentless ...
Malthus had a fever dream in the 1790s. While the world was marveling at the first manifestations of modern science and technology and the industrial revolution that was just beginning, he was concerned. He saw the exponential growth in the human population as a terrible problem for the species—an existential threat. He was afraid the human population would overshoot the availability of resources, and then things would really hit the fan.
“Famine seems to be the last, the most dreadful resource of nature. The power of population is so superior to the power of the earth to produce subsistence for man, that premature death must in some shape or other visit the human race. The vices of mankind are active and able ministers of depopulation.”
So Malthus wrote in his famous text, An Essay on the Principle of Population.
But Malthus was wrong. Not just in his proposed solution, which was to stop giving aid and food to the poor so that they wouldn’t explode in population. His prediction was also wrong: there was no great, overwhelming famine that caused the population to stay at the levels of the 1790s. Instead, the world population—with a few dips—has continued to grow exponentially ever since. And it’s still growing.
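Malthus’s core claim was quantitative: population, left unchecked, multiplies by a constant each generation (geometric growth), while food production at best adds a constant each generation (arithmetic growth). A few lines of arithmetic show the overshoot he feared; the rates here are illustrative, not his:

```python
population = 1.0  # relative units; grows geometrically (x1.3 per generation)
food = 1.0        # relative units; grows arithmetically (+0.5 per generation)

for generation in range(1, 11):
    population *= 1.3
    food += 0.5
    flag = "  <- overshoot" if population > food else ""
    print(f"gen {generation:2d}: population {population:5.2f}, food {food:4.2f}{flag}")

# Geometric growth always overtakes arithmetic growth eventually; that was
# Malthus's fear. What he missed: the capacity to produce food could itself
# grow geometrically, which is what the Green Revolution later delivered.
```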
There have concurrently been developments in agriculture and medicine and, in the 20th century, the Green Revolution, in which Norman Borlaug ensured that countries adopted high-yield varieties of crops—the first precursors to modern ideas of genetically engineering food to produce better crops and more growth. The world was able to produce an astonishing amount of food—enough, in the modern era, for ten billion people. It is only the grave injustice in the way food is distributed that means 12 percent of the world still goes hungry and starvation persists. But, aside from that, we were saved by the majesty of another kind of exponential growth; the population grew, but the ability to produce food grew faster.
In so much of the world around us today, there’s the same old story. Take the exploitation of fossil fuels: here, there is another exponential race. The exponential growth of our ability to mine coal, extract natural gas, and refine oil from ever more complex hydrocarbons is pitted against our growing appetite. The stock market is built on exponential growth; you cannot provide compound interest unless the economy grows by a certain percentage a year.
“This relentless and ruthless expectation—that technology will continue to improve in ways we can’t foresee—is not just baked into share prices, but into the very survival of our species.”
When the economy fails to grow exponentially, it’s considered a crisis: a financial catastrophe. This expectation penetrates down to individual investors. In the cryptocurrency markets—hardly immune to bubbles or the bull-and-bear cycle of economics—the traders’ saying is “Buy the hype, sell the news.” Before an announcement is made, the expectation of growth, of a boost—the psychological shift—is almost invariably worth more than whatever the major announcement turns out to be. The idea of growth is baked into the share price, to the extent that even good news can often cause the price to dip when it’s delivered.
In the same way, this relentless and ruthless expectation—that technology will continue to improve in ways we can’t foresee—is not just baked into share prices, but into the very survival of our species. A third of Earth’s soil has been acutely degraded due to agriculture; we are looming on the brink of a topsoil crisis. In less relentless times, we may have tried to solve the problem by letting the fields lie fallow for a few years. But that’s no longer an option: if we do so, people will starve. Instead, we look to a second Green Revolution—genetically modified crops, or hydroponics—to save us.
Climate change is considered by many to be an existential threat. The Intergovernmental Panel on Climate Change has already put their faith in the exponential growth of technology. Many of the scenarios where they can successfully imagine the human race dealing with the climate crisis involve the development and widespread deployment of carbon capture and storage technology. Our hope for the future already has built-in expectations of exponential growth in our technology in this field. Alongside this, to reduce carbon emissions to zero on the timescales we need to, we will surely require new technologies in renewable energy, energy efficiency, and electrification of the transport system.
Without exponential growth in technology continuing, then, we are doomed. Humanity finds itself on a treadmill that’s rapidly accelerating, with the risk of plunging into the abyss if we can’t keep up the pace. Yet this very acceleration could also pose an existential threat. As our global system becomes more interconnected and complex, chaos theory takes over: the economics of a town in Macedonia can influence a US presidential election; critical infrastructure can be brought down by cybercriminals.
New threats, such as biotechnology, nanotechnology, or a generalized artificial intelligence, could put incredible power—power over the entire species—into the hands of a small number of people. We are faced with a paradox: the continued existence of our system depends on the exponential growth of our capacities outpacing the exponential growth of our needs and desires. Yet this very growth will create threats that are unimaginably larger than any humans have faced before in history.
“It is necessary that we understand the consequences and prospects for exponential growth: that we understand the nature of the race that we’re in.”
Neo-Luddites may find satisfaction in rejecting the ill-effects of technology, but they will still live in a society where technology is the lifeblood that keeps the whole system pumping. Now, more than ever, it is necessary that we understand the consequences and prospects for exponential growth: that we understand the nature of the race that we’re in.
If we decide that limitless exponential growth on a finite planet is unsustainable, we need to plan for the transition to a new way of living before our ability to accelerate runs out. If we require new technologies or fields of study to enable this growth to continue, we must focus our efforts on these before anything else. If we want to survive the 21st century without major catastrophe, we don’t have a choice but to understand it. Almost by default, we’re all accelerationists now.
Image Credit: focal point / Shutterstock.com