Tag Archives: human
#431592 Reactive Content Will Get to Know You ...
The best storytellers react to their audience. They look for smiles, signs of awe, or boredom; they simultaneously and skillfully read both the story and their listeners. Kevin Brooks, a seasoned storyteller working for Motorola’s Human Interface Labs, explains, “As the storyteller begins, they must tune in to… the audience’s energy. Based on this energy, the storyteller will adjust their timing, their posture, their characterizations, and sometimes even the events of the story. There is a dialog between audience and storyteller.”
Shortly after I read the script to Melita, the latest virtual reality experience from Madrid-based immersive storytelling company Future Lighthouse, CEO Nicolas Alcalá explained to me that the piece is an example of “reactive content,” a concept he’s been working on since his days at Singularity University.
For the first time in history, we have access to technology that can merge the reactive and affective elements of oral storytelling with the affordances of digital media, weaving stunning visuals, rich soundtracks, and complex meta-narratives in a story arena that has the capability to know you more intimately than any conventional storyteller could.
It’s no exaggeration to say that the storytelling potential here is phenomenal.
In short, we can refer to content as reactive if it reads and reacts to users based on their body rhythms, emotions, preferences, and data points. Artificial intelligence is used to analyze users’ behavior or preferences to sculpt unique storylines and narratives, essentially allowing for a story that changes in real time based on who you are and how you feel.
The development of reactive content will allow those working in the industry to go one step further than simply translating the essence of oral storytelling into VR. Rather than having a narrative experience with a digital storyteller who can read you, reactive content has the potential to create an experience with a storyteller who knows you.
This means being able to subtly insert minor personal details that have a specific meaning to the viewer. When we talk to our friends we often use experiences we’ve shared in the past or knowledge of our audience to give our story as much resonance as possible. Targeting personal memories and aspects of our lives is a highly effective way to elicit emotions and aid in visualizing narratives. When you can do this with the addition of visuals, music, and characters—all lifted from someone’s past—you have the potential for overwhelmingly engaging and emotionally-charged content.
Future Lighthouse informs me that, for now, reactive content will rely primarily on biometric feedback technology such as breathing, heartbeat, and eye-tracking sensors. A simple example would be a story in which parts of the environment or soundscape change in sync with the user’s heartbeat and breathing, or characters who call you out for not paying attention.
The next step would be characters and situations that react to the user’s emotions, wherein algorithms analyze biometric information to make inferences about states of emotional arousal (“why are you so nervous?” etc.). Another example would be implementing the use of “arousal parameters,” where the audience can choose what level of “fear” they want from a VR horror story before algorithms modulate the experience using information from biometric feedback devices.
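To make the idea concrete, here is a rough sketch (my own illustration, not Future Lighthouse's actual system) of how a viewer's chosen "fear" level and live biometric readings might be blended to steer a scene's intensity:

```python
# Hypothetical sketch of "arousal parameter" modulation: the viewer picks a
# target fear level, and live biometric readings nudge the scene's intensity
# up or down to hold the experience near that target.

def estimate_arousal(heart_rate_bpm, breaths_per_min, resting_hr=65, resting_br=14):
    """Crude 0-1 arousal estimate from deviation above resting baselines."""
    hr_component = max(0.0, (heart_rate_bpm - resting_hr) / 60.0)
    br_component = max(0.0, (breaths_per_min - resting_br) / 20.0)
    return min(1.0, 0.5 * hr_component + 0.5 * br_component)

def next_scene_intensity(target_fear, heart_rate_bpm, breaths_per_min, current_intensity):
    """Move intensity toward the viewer's chosen fear level, backing off
    when measured arousal already exceeds the target."""
    arousal = estimate_arousal(heart_rate_bpm, breaths_per_min)
    error = target_fear - arousal          # positive: not scared enough yet
    adjusted = current_intensity + 0.3 * error
    return max(0.0, min(1.0, adjusted))

# Example: the viewer asked for a fairly intense experience (0.7), but their
# heart is already racing, so the story eases off slightly.
print(next_scene_intensity(target_fear=0.7, heart_rate_bpm=120,
                           breaths_per_min=26, current_intensity=0.6))
```

In a real experience, that intensity value would drive lighting, sound design, and pacing rather than sit in a single number.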
The company’s long-term goal is to gather research on storytelling conventions and produce a catalogue of story “wireframes.” This entails distilling different genres down to their basic formulas, which can then be fleshed out with visuals, character traits, and soundtracks tailored to individual users based on their deep data, preferences, and biometric information.
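As a toy illustration of what such a wireframe might look like under the hood (the format here is my guess, not the company's), picture a genre skeleton whose slots get filled with user-specific assets:

```python
# Toy illustration of a genre "wireframe": a fixed beat structure plus slots
# that get filled with assets chosen for a particular viewer.
# (My own guess at a format, not Future Lighthouse's actual data model.)

HORROR_WIREFRAME = {
    "genre": "horror",
    "beats": ["ordinary world", "first disturbance", "escalation",
              "confrontation", "aftermath"],
    "slots": {
        # slot name -> (key in the viewer's profile, fallback stock asset)
        "setting": ("childhood_home_style", "abandoned house"),
        "companion": ("trusted_voice", "unnamed narrator"),
        "soundtrack": ("favorite_music_mood", "sparse ambient"),
    },
}

def personalize(wireframe, profile):
    """Fill each slot from the viewer's profile, falling back to stock assets."""
    filled = {name: profile.get(key, default)
              for name, (key, default) in wireframe["slots"].items()}
    return {"genre": wireframe["genre"], "beats": wireframe["beats"], **filled}

story = personalize(HORROR_WIREFRAME,
                    {"childhood_home_style": "seaside cottage",
                     "trusted_voice": "an older sibling"})
print(story["setting"])   # the horror unfolds somewhere that feels familiar
```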
The development of reactive content will go hand in hand with a renewed exploration of diverging, dynamic storylines and multi-narratives, concepts that haven’t had much impact in the movie world thus far. In theory, the idea of a story that changes and mutates is captivating largely because of our love affair with serendipity and unpredictability, a cultural condition theorist Arthur Kroker refers to as the “hypertextual imagination.” This feeling of stepping into the unknown, with the possibility of deviating from the habitual, is a comforting reminder that our own lives can take exciting and unexpected turns at any moment.
The concept entered mainstream culture with the classic Choose Your Own Adventure book series, launched in the late 70s, which found great success in its literary form. Filmic takes on the theme, however, have made less of an impression. DVDs like I’m Your Man (1998) and Switching (2003) both used scene-selection tools to let viewers determine the direction of the storyline.
A more recent example comes from Kino Industries, who claim to have developed the technology to allow filmmakers to produce interactive films in which viewers can use smartphones to quickly vote on which direction the narrative takes at numerous decision points throughout the film.
The main problem with diverging narrative films has been the stop-start nature of the interactive element: when I’m immersed in a story I don’t want to have to pick up a controller or remote to select what happens next. Every time the audience is given the option to take a new path (“press this button,” “vote on X, Y, Z”), the narrative—and immersion within that narrative—is temporarily halted, and it takes the mind a while to settle back into that state of immersion.
Reactive content has the potential to resolve these issues by enabling passive interactivity—that is, input and output without having to pause and actively make decisions or engage with the hardware. This will result in diverging, dynamic narratives that will unfold seamlessly while being dependent on and unique to the specific user and their emotions. Passive interactivity will also remove the game feel that can often be a symptom of interactive experiences and put a viewer somewhere in the middle: still firmly ensconced in an interactive dynamic narrative, but in a much subtler way.
While reading the Melita script I was particularly struck by a scene in which the characters start to engage with the user and there’s a synchronicity between the user’s heartbeat and objects in the virtual world. As the narrative unwinds and the words of Melita’s character get more profound, parts of the landscape, which seemed to be flashing and pulsating at random, come together and start to mimic the user’s heartbeat.
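A minimal sketch of that kind of bio-digital synchrony, assuming a sensor that reports the viewer's heart rate in beats per minute and a scene object whose glow we can scale (both are hypothetical placeholders here):

```python
import math

# Minimal sketch of heartbeat-synced visuals: scale an object's glow with a
# sinusoid whose frequency tracks the viewer's measured heart rate.

def glow_intensity(time_s, heart_rate_bpm, base=0.4, depth=0.6):
    """Pulse between `base` and `base + depth`, once per heartbeat."""
    beats_per_second = heart_rate_bpm / 60.0
    phase = 2.0 * math.pi * beats_per_second * time_s
    return base + depth * 0.5 * (1.0 + math.sin(phase))

# At 72 bpm the landscape pulses 1.2 times per second.
for t in (0.0, 0.2, 0.4, 0.6, 0.8):
    print(round(glow_intensity(t, heart_rate_bpm=72), 2))
```

A real implementation would smooth the sensor signal and update the renderer every frame, but the principle is the same.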
In 2013, Jane Aspell of Anglia Ruskin University (UK) and Lukas Heydrich of the Swiss Federal Institute of Technology showed that a user’s sense of presence and identification with a virtual avatar could be dramatically increased by syncing the on-screen character with the user’s heartbeat. The relationship between bio-digital synchronicity, immersion, and emotional engagement is something that will surely have revolutionary narrative and storytelling potential.
Image Credit: Tithi Luadthong / Shutterstock.com
#431559 Drug Discovery AI to Scour a Universe of ...
On a dark night, away from city lights, the stars of the Milky Way can seem uncountable. Yet from any given location no more than 4,500 are visible to the naked eye. Meanwhile, our galaxy has 100–400 billion stars, and there are even more galaxies in the universe.
The numbers of the night sky are humbling. And they give us a deep perspective…on drugs.
Yes, this includes wow-the-stars-are-freaking-amazing-tonight drugs, but also the kinds of drugs that make us well again when we’re sick. The number of possible organic compounds with “drug-like” properties dwarfs the number of stars in the universe by over 30 orders of magnitude.
Next to this multiverse of possibility, the chemical configurations scientists have made into actual medicines are like the smattering of stars you’d glimpse downtown.
But for good reason.
Exploring all that potential drug-space is as humanly impossible as exploring all of physical space, and even if we could, most of what we’d find wouldn’t fit our purposes. Still, the idea that wonder drugs must surely lurk amid the multitudes is too tantalizing to ignore.
Which is why, Alex Zhavoronkov said at Singularity University’s Exponential Medicine in San Diego last week, we should use artificial intelligence to do more of the legwork and speed discovery. This, he said, could be one of the next big medical applications for AI.
Dogs, Diagnosis, and Drugs
Zhavoronkov is CEO of Insilico Medicine and CSO of the Biogerontology Research Foundation. Insilico is one of a number of AI startups aiming to accelerate drug discovery with AI.
In recent years, Zhavoronkov said, the now-famous machine learning technique, deep learning, has made progress on a number of fronts. Algorithms that can teach themselves to play games—like DeepMind’s AlphaGo Zero or Carnegie Mellon’s poker playing AI—are perhaps the most headline-grabbing of the bunch. But pattern recognition was the thing that kicked deep learning into overdrive early on, when machine learning algorithms went from struggling to tell dogs and cats apart to outperforming their peers and then their makers in quick succession.
[Watch this video for an AI update from Neil Jacobstein, chair of Artificial Intelligence and Robotics at Singularity University.]
In medicine, deep learning algorithms trained on databases of medical images can spot life-threatening disease with accuracy equal to or greater than that of human professionals. There’s even speculation that AI, if we learn to trust it, could be invaluable in diagnosing disease. And, as Zhavoronkov noted, with more applications and a longer track record, that trust is coming.
“Tesla is already putting cars on the street,” Zhavoronkov said. “Three-year, four-year-old technology is already carrying passengers from point A to point B, at 100 miles an hour, and one mistake and you’re dead. But people are trusting their lives to this technology.”
“So, why don’t we do it in pharma?”
Trial and Error and Try Again
AI wouldn’t drive the car in pharmaceutical research. It’d be an assistant that, when paired with a chemist or two, could fast-track discovery by screening more possibilities for better candidates.
There’s plenty of room to make things more efficient, according to Zhavoronkov.
Drug discovery is arduous and expensive. Chemists sift tens of thousands of candidate compounds for the most promising to synthesize. Of these, a handful will go on to further research, fewer will make it to human clinical trials, and a fraction of those will be approved.
The whole process can take many years and cost hundreds of millions of dollars.
This is a big data problem if ever there was one, and deep learning thrives on big data. Early applications have shown their worth unearthing subtle patterns in huge training databases. Although drug-makers already use software to sift compounds, such software requires explicit rules written by chemists. AI’s allure is its ability to learn and improve on its own.
“There are two strategies for AI-driven innovation in pharma to ensure you get better molecules and much faster approvals,” Zhavoronkov said. “One is looking for the needle in the haystack, and another one is creating a new needle.”
To find the needle in the haystack, algorithms are trained on large databases of molecules. Then they go looking for molecules with attractive properties. But creating a new needle? That’s a possibility enabled by the generative adversarial networks Zhavoronkov specializes in.
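To make the first strategy concrete before turning to the second, here is a hedged sketch of generic virtual screening (my illustration, not Insilico's pipeline): train a model on molecules with known activity, then rank a vast untested library by predicted activity and hand only the top scorers to the chemists.

```python
# Generic "needle in the haystack" screening sketch: train a model on
# molecules with known activity, then rank a large library of unscreened
# candidates by predicted activity. Features here are placeholder numeric
# descriptors; a real pipeline would use chemical fingerprints and
# measured assay data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Pretend training set: 1,000 molecules x 64 descriptors, with measured activity.
X_known = rng.normal(size=(1000, 64))
y_known = rng.normal(size=1000)

# A much larger library we have never tested in the lab.
X_library = rng.normal(size=(100_000, 64))

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_known, y_known)

scores = model.predict(X_library)
top_candidates = np.argsort(scores)[::-1][:50]   # 50 best-scoring molecules
print(top_candidates[:5])
```

The payoff is throughput: a trained model can score millions of structures in the time it takes a chemist to evaluate a handful.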
Generative adversarial networks pit two neural networks against each other: one generates candidate output while the other judges whether that output is real or generated, Zhavoronkov said. Together, the networks learn to produce new objects like text, images, or, in this case, molecular structures.
“We started employing this particular technology to make deep neural networks imagine new molecules, to make it perfect right from the start. So, to come up with really perfect needles,” Zhavoronkov said. “[You] can essentially go to this [generative adversarial network] and ask it to create molecules that inhibit protein X at concentration Y, with the highest viability, specific characteristics, and minimal side effects.”
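Here is a minimal sketch of that adversarial setup, with placeholder "fingerprint" vectors standing in for real molecular representations (toy dimensions and random data; Insilico's models also condition generation on desired properties, which this sketch omits):

```python
# Minimal GAN sketch: a generator proposes fake molecular fingerprint vectors,
# a discriminator tries to tell them from real ones, and the two are trained
# against each other. Not Insilico's actual models.
import torch
import torch.nn as nn

FP_DIM, NOISE_DIM = 128, 32   # fingerprint length, latent noise size

generator = nn.Sequential(
    nn.Linear(NOISE_DIM, 256), nn.ReLU(),
    nn.Linear(256, FP_DIM), nn.Sigmoid(),     # fingerprints as values in [0, 1]
)
discriminator = nn.Sequential(
    nn.Linear(FP_DIM, 256), nn.ReLU(),
    nn.Linear(256, 1),                        # logit: real vs. generated
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_fingerprints = torch.rand(512, FP_DIM)   # stand-in for a real molecule set

for step in range(200):
    # Train discriminator: real -> 1, generated -> 0.
    noise = torch.randn(64, NOISE_DIM)
    fake = generator(noise).detach()
    real = real_fingerprints[torch.randint(0, 512, (64,))]
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(64, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train generator: fool the discriminator into saying "real".
    fake = generator(torch.randn(64, NOISE_DIM))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

# After training, sample new candidate fingerprints from the generator.
candidates = generator(torch.randn(10, NOISE_DIM))
print(candidates.shape)   # torch.Size([10, 128])
```

Once trained, the generator becomes a molecule proposal machine; the property conditioning Zhavoronkov describes would add those targets as extra inputs to both networks.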
Zhavoronkov believes AI can find or fabricate more needles from the array of molecular possibilities, freeing human chemists to focus on synthesizing only the most promising. If it works, he hopes we can increase hits, minimize misses, and generally speed the process up.
Proof’s in the Pudding
Insilico isn’t alone on its drug-discovery quest, nor is it a brand new area of interest.
Last year, a Harvard group published a paper on an AI that similarly suggests drug candidates. The software trained on 250,000 drug-like molecules and used its experience to generate new molecules that blended existing drugs and made suggestions based on desired properties.
An MIT Technology Review article on the subject highlighted a few of the challenges such systems may still face. The molecules returned aren’t always meaningful or easy to synthesize in the lab, and the quality of the results, as always, is only as good as the data the models are trained on.
Stanford chemistry professor and Andreessen Horowitz partner Vijay Pande noted that images, speech, and text—three of the areas where deep learning has made quick strides—have better, cleaner data. Chemical data, on the other hand, is still being optimized for deep learning. Also, while there are public databases, much data still lives behind closed doors at private companies.
To overcome the challenges and prove their worth, Zhavoronkov said, his company is very focused on validating the tech. But this year, skepticism in the pharmaceutical industry seems to be easing into interest and investment.
AI drug discovery startup Exscientia has inked deals with Sanofi ($280 million) and GlaxoSmithKline ($42 million). Insilico is also partnering with GlaxoSmithKline, and Numerate is working with Takeda Pharmaceutical. Even Google may jump in. According to an article in Nature outlining the field, the firm’s deep learning project, Google Brain, is growing its biosciences team, and industry watchers wouldn’t be surprised to see it target drug discovery.
With AI and the hardware running it advancing rapidly, the greatest potential may yet be ahead. Perhaps, one day, all 10^60 molecules in drug-space will be at our disposal. “You should take all the data you have, build new models, and search as much of that 10^60 as possible” before every decision you make, Brandon Allgood, CTO at Numerate, told Nature.
Today’s projects need to live up to their promises, of course, but Zhavoronkov believes AI will have a big impact in the coming years, and now’s the time to integrate it. “If you are working for a pharma company, and you’re still thinking, ‘Okay, where is the proof?’ Once there is a proof, and once you can see it to believe it—it’s going to be too late,” he said.
Image Credit: Klavdiya Krinichnaya / Shutterstock.com
#431361 Humanoid Sophia Becomes First Non-Human ...
Saudi Arabia recently made the AI robot Sophia a “citizen,” becoming the first country to grant citizenship to a non-human.
#431356 Humanoid Robot Tête-à-tête
“Sophia” and “Han” discuss the future of humanity with Hanson Robotics’ Ben Goertzel.
#431424 A ‘Google Maps’ for the Mouse Brain ...
Ask any neuroscientist to draw you a neuron, and it’ll probably look something like a star with two tails: one stubby with extensive tree-like branches, the other willowy, lengthy and dotted with spindly spikes.
While a decent abstraction, this cartoonish image hides the uncomfortable truth that scientists still don’t know much about what many neurons actually look like, not to mention the extent of their connections.
But without untangling the jumbled mess of neural wires that zigzag across the brain, scientists are stumped in trying to answer one of the most fundamental mysteries of the brain: how individual neuronal threads carry and assemble information, which forms the basis of our thoughts, memories, consciousness, and self.
What if there was a way to virtually trace and explore the brain’s serpentine fibers, much like the way Google Maps allows us to navigate the concrete tangles of our cities’ highways?
Thanks to an interdisciplinary team at Janelia Research Campus, we’re on our way. Meet MouseLight, the most extensive map of the mouse brain ever attempted. The ongoing project has an ambitious goal: reconstructing thousands—if not more—of the mouse’s 70 million neurons into a 3D map. (You can play with it here!)
With map in hand, neuroscientists around the world can begin to answer how neural circuits are organized in the brain, and how information flows from one neuron to another across brain regions and hemispheres.
The first release, presented Monday at the Society for Neuroscience Annual Conference in Washington, DC, contains information about the shape and sizes of 300 neurons.
And that’s just the beginning.
“MouseLight’s new dataset is the largest of its kind,” says Dr. Wyatt Korff, director of project teams. “It’s going to change the textbook view of neurons.”
http://mouselight.janelia.org/assets/carousel/ML-Movie.mp4
Brain Atlas
MouseLight is hardly the first rodent brain atlasing project.
The Mouse Brain Connectivity Atlas at the Allen Institute for Brain Science in Seattle tracks neuron activity across small circuits in an effort to trace a mouse’s connectome—a complete atlas of how the firing of one neuron links to the next.
MICrONS (Machine Intelligence from Cortical Networks), the $100 million government-funded “moonshot,” hopes to distill brain computation into algorithms for more powerful artificial intelligence. Its first step? Brain mapping.
What makes MouseLight stand out is its scope and level of detail.
MICrONS, for example, is focused on dissecting a cubic millimeter of the mouse visual processing center. In contrast, MouseLight involves tracing individual neurons across the entire brain.
And while connectomics outlines the major connections between brain regions, the bird’s-eye view entirely misses the intricacies of each individual neuron. This is where MouseLight steps in.
Slice and Dice
At only a fraction of the width of a human hair, neuron projections are hard to capture in their native state. Tug or squeeze the brain too hard, and the long, delicate branches distort or even shred into bits.
In fact, previous attempts at trying to reconstruct neurons at this level of detail topped out at just a dozen, stymied by technological hiccups and sky-high costs.
A few years ago, the MouseLight team set out to automate the entire process, with a few time-saving tweaks. Here’s how it works.
After injecting a mouse with a virus that causes a handful of neurons to produce a green-glowing protein, the team treated the brain with a sugar alcohol solution. This step “clears” the brain, rendering the beige-colored organ translucent, making it easier for light to penetrate and boosting the signal-to-background noise ratio. The brain is then glued onto a small pedestal, ready for imaging.
Building upon an established method called “two-photon microscopy,” the team then tweaked several parameters to cut imaging time from days (or weeks) down to a fraction of that. Endearingly known as “2P” by the experts, this type of laser microscope zaps the tissue with just enough photons to light up a single plane without damaging the tissue—sharper plane, better focus, crisper image.
After taking an image, the setup activates its vibrating razor and shaves off the imaged section of the brain—a wispy slice about 200 micrometers thick. The process is repeated until the whole brain is imaged.
This setup increased imaging speed 16 to 48 times over conventional microscopy, writes team leader Dr. Jayaram Chandrashekar, who published a version of the method early last year in eLife.
The resulting images strikingly highlight every nook and cranny of a neuronal branch, popping out against a pitch-black background. But pretty pictures come at a hefty data cost: imaging a single brain generates a whopping 20 terabytes of data—roughly the storage space of 4,000 DVDs, or 10,000 hours of movies.
Stitching individual images back into 3D is an image-processing nightmare. The MouseLight team used a combination of computational power and human prowess to complete this final step.
The reconstructed images are handed off to a mighty team of seven trained neuron trackers. With the help of tracing algorithms developed in-house and a keen eye, each member can track roughly a neuron a day—significantly less time than the week or so previously needed.
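The in-house tracing tools aren't described in detail here, but the flavor of semi-automated tracing can be sketched as a greedy walk through the imaged volume: start from a seed voxel, repeatedly step to the brightest unvisited neighbor, and stop when the fluorescent signal fades, with a human proofreader correcting the path where the algorithm strays. A toy version (my illustration, not Janelia's actual algorithm):

```python
import numpy as np

# Toy sketch of semi-automated neuron tracing: from a seed voxel, greedily
# step to the brightest unvisited neighbor until the signal drops below a
# threshold. Humans proofread and correct such paths.

def trace_branch(volume, seed, threshold=0.2, max_steps=10_000):
    path, visited = [seed], {seed}
    z, y, x = seed
    for _ in range(max_steps):
        # Examine the 26 neighboring voxels that fall inside the volume.
        neighbors = [(z + dz, y + dy, x + dx)
                     for dz in (-1, 0, 1) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if not (dz == dy == dx == 0)]
        neighbors = [n for n in neighbors
                     if all(0 <= c < s for c, s in zip(n, volume.shape))
                     and n not in visited]
        if not neighbors:
            break
        brightest = max(neighbors, key=lambda n: volume[n])
        if volume[brightest] < threshold:
            break                      # signal faded: end of branch (or a gap)
        path.append(brightest)
        visited.add(brightest)
        z, y, x = brightest
    return path

# Tiny synthetic volume with a bright diagonal "axon".
vol = np.zeros((20, 20, 20))
for i in range(20):
    vol[i, i, i] = 1.0
print(len(trace_branch(vol, seed=(0, 0, 0))))   # follows the diagonal: 20 voxels
```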
A Numbers Game
Even with just 300 fully reconstructed neurons, MouseLight has already revealed new secrets of the brain.
While it’s widely accepted that axons, the neurons’ outgoing projection, can span the entire length of the brain, these extra-long connections were considered relatively rare. (In fact, one previously discovered “giant neuron” was thought to link to consciousness because of its expansive connections).
Images captured from two-photon microscopy show an axon and dendrites protruding from a neuron’s cell body (sphere in center). Image Credit: Janelia Research Campus, MouseLight project team
MouseLight blows that theory out of the water.
The data clearly shows that “giant neurons” are far more common than previously thought. For example, four neurons normally associated with taste had wiry branches that stretched all the way into brain areas that control movement and process touch.
“We knew that different regions of the brain talked to each other, but seeing it in 3D is different,” says Dr. Eve Marder at Brandeis University.
“The results are so stunning because they give you a really clear view of how the whole brain is connected.”
With a tried-and-true system in place, the team is now aiming to add 700 neurons to their collection within a year.
But appearance is only part of the story.
We can’t tell everything about a person simply by how they look. Neurons are the same: scientists can only infer so much about a neuron’s function by looking at its shape and position. The team also hopes to profile the gene expression patterns of each neuron, which could provide more hints to its role in the brain.
MouseLight essentially dissects the neural infrastructure that allows information traffic to flow through the brain. These anatomical highways are just the foundation. Just like Google Maps, roads form only the critical first layer of the map. Street view, traffic information and other add-ons come later for a complete look at cities in flux.
The same will happen for understanding our ever-changing brain.
Image Credit: Janelia Research Campus, MouseLight project team