The alternative universe known as science fiction has given our culture a menagerie of alien species. From overstuffed teddy bears like Ewoks and Wookiees to terrifying nightmares such as the Alien and the Predator, our collective imagination of what form life from another world may take has been irrevocably imprinted by Hollywood.
It might all be possible, or all these bug-eyed critters might turn out to be just B-movie versions of how real extraterrestrials will appear if and when they finally make the evening news.
One thing, however, seems certain: aliens from another world will be shaped by the same evolutionary force that shaped life here on Earth, natural selection. That’s the conclusion of a team of scientists from the University of Oxford in a study published this month in the International Journal of Astrobiology.
A complex alien that comprises a hierarchy of entities, where each lower-level collection of entities has aligned evolutionary interests. Image Credit: Helen S. Cooper/University of Oxford.
The researchers suggest that evolutionary theory—famously put forth by Charles Darwin in his seminal book On the Origin of Species 158 years ago this month—can be used to make some predictions about alien species. In particular, the team argues that extraterrestrials will undergo natural selection, because that is the only process by which organisms can adapt to their environment.
“Adaptation is what defines life,” lead author Samuel Levin tells Singularity Hub.
While it’s likely that NASA or some SpaceX-like private venture will eventually kick over a few space rocks and discover microbial life in the not-too-distant future, the sorts of aliens Levin and his colleagues are interested in describing are more complex, because complexity of that kind can only be built up by natural selection.
A quick evolutionary theory 101 refresher: Natural selection is the process by which certain traits are favored over others in a given population. For example, take a group of brown and green beetles. It just so happens that birds prefer to eat green beetles, allowing more brown beetles to survive and reproduce than their more delectable green counterparts. If these selection pressures persist, brown beetles will eventually become the dominant type. Brown wins, green loses.
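That winnowing is easy to see in a toy simulation. Here is a minimal sketch, with survival probabilities invented purely for illustration:

```python
import random

def simulate_selection(generations=20, pop_size=1000,
                       brown_survival=0.9, green_survival=0.7):
    """Toy model of the beetle example: birds eat green beetles more often,
    so brown beetles survive to reproduce at a higher rate."""
    population = ["brown"] * (pop_size // 2) + ["green"] * (pop_size // 2)
    for _ in range(generations):
        # Differential survival: each beetle lives to reproduce with a
        # probability set by its color.
        survivors = [b for b in population
                     if random.random() < (brown_survival if b == "brown"
                                           else green_survival)]
        # Offspring inherit a surviving parent's color; population size resets.
        population = [random.choice(survivors) for _ in range(pop_size)]
    return population.count("brown") / pop_size

print(f"Brown beetle share after 20 generations: {simulate_selection():.0%}")
```

Run it a few times and the brown share climbs toward 100 percent, even though each individual round of survival is random.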
And just as human beings are the result of millions of years of adaptations—eyes and thumbs, for example—aliens will similarly be constructed from parts that were once free living but through time came together to work as one organism.
“Life has so many intricate parts, so much complexity, for that to happen (randomly),” Levin explains. “It’s too complex and too many things working together in a purposeful way for that to happen by chance, as how certain molecules come about. Instead you need a process for making it, and natural selection is that process.”
Just don’t expect ET to show up as a bipedal humanoid with a large head and almond-shaped eyes, Levin says.
“They can be built from entirely different chemicals and so visually, superficially, unfamiliar,” he explains. “They will have passed through the same evolutionary history as us. To me, that’s way cooler and more exciting than them having two legs.”
Need for Data
Seth Shostak, senior astronomer at the SETI Institute and host of the organization’s Big Picture Science radio show, wrote that while the argument is interesting, it doesn’t answer the question of what ET will look like.
Shostak argues that a more productive approach would invoke convergent evolution, where similar environments lead to similar adaptations, at least assuming a range of Earth-like conditions such as liquid oceans and thick atmospheres. For example, an alien species that evolved in a liquid environment would evolve a streamlined body to move through water.
“Happenstance and the specifics of the environment will produce variations on an alien species’ planet as it has on ours, and there’s really no way to predict these,” Shostak concludes. “Alas, an accurate cosmic bestiary cannot be written by the invocation of biological mechanisms alone. We need data. That requires more than simply thinking about alien life. We need to actually discover it.”
Search is On
The search is on. On one hand, the task seems easy enough: There are at least 100 billion planets in the Milky Way alone, and at least 20 percent of those are likely to be capable of producing a biosphere. Even if the evolution of life is exceedingly rare (the Oxford paper proposes a conservative 0.001 percent of those habitable worlds, or about 200,000 planets), you have to like the odds.
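The back-of-envelope arithmetic behind that 200,000 figure, using the article’s own numbers:

```python
planets_in_milky_way = 100e9   # at least 100 billion planets
habitable_fraction = 0.20      # at least 20 percent may support a biosphere
life_odds = 0.001 / 100        # the paper's conservative 0.001 percent

worlds_with_life = planets_in_milky_way * habitable_fraction * life_odds
print(f"{worlds_with_life:,.0f}")  # -> 200,000
```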
Of course, it’s not that easy by a billion light years.
Planet hunters can’t even agree on which signatures of life they should focus on. In the case of an alien world home to biological life, astrobiologists search for the presence of “biosignature gases,” vapors that could only be produced by living things; the idea is that where there’s smoke, there’s fire.
As Quanta Magazine reported, scientists do this by measuring a planet’s atmosphere against starlight. Gases in the atmosphere absorb certain frequencies of starlight, offering a clue as to what is brewing around a particular planet.
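In spirit, the matching step resembles the sketch below; the absorption bands listed are real but rounded, and everything else is drastically simplified for illustration:

```python
# Toy spectral matching. Real work uses high-resolution spectra and
# radiative-transfer models, not a handful of rounded wavelengths.
KNOWN_GASES = {                      # absorption bands in micrometers (rounded)
    "water vapor": [1.4, 1.9, 2.7],
    "oxygen": [0.76, 1.27],
    "methane": [1.7, 2.3, 3.3],
}

def identify_gases(observed_dips, tolerance=0.05):
    """Match dips in a transit spectrum against known gas absorption bands."""
    detected = []
    for gas, lines in KNOWN_GASES.items():
        hits = sum(any(abs(dip - line) < tolerance for dip in observed_dips)
                   for line in lines)
        if hits == len(lines):  # require every band to show up
            detected.append(gas)
    return detected

print(identify_gases([0.76, 1.27, 1.39, 1.92, 2.71]))
# -> ['water vapor', 'oxygen']
```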
The presence of oxygen would seem to be a biological no-brainer, but there are instances where a planet can produce a false positive, meaning non-biological processes are responsible for the exoplanet’s oxygen. Scientists like Sara Seager, an astrophysicist at MIT, have argued there are plenty of examples of other types of gases produced by organisms right here on Earth that could also produce the smoking gun, er, planet.
Life as We Know It
Indeed, the existence of Earth-bound extremophiles (organisms that defy conventional wisdom about where life can exist, such as in the vacuum of space) offers another clue as to what kind of aliens we might eventually meet.
Lynn Rothschild, an astrobiologist and synthetic biologist in the Earth Science Division at NASA’s Ames Research Center in Silicon Valley, takes extremophiles as a baseline and then supersizes them through synthetic biology.
For example, say a bacterium is capable of surviving at 120 degrees Celsius. Rothschild’s lab might tweak its DNA to see if it could metabolize at 150 degrees. The idea, as she explains, is to expand the envelope for life without ever getting into a rocket ship.
While researchers may not always agree on the “where” and “how” and “what” of the search for extraterrestrial life, most do share one belief: Alien life must be out there.
“It would shock me if there weren’t [extraterrestrials],” Levin says. “There are few things that would shock me more than to find out there aren’t any aliens…If I had to bet on it, I would bet on the side of there being lots and lots of aliens out there.”
Image Credit: NASA
The best storytellers react to their audience. They look for smiles, signs of awe, or boredom; they simultaneously and skillfully read both the story and their listeners. Kevin Brooks, a seasoned storyteller working for Motorola’s Human Interface Labs, explains, “As the storyteller begins, they must tune in to… the audience’s energy. Based on this energy, the storyteller will adjust their timing, their posture, their characterizations, and sometimes even the events of the story. There is a dialog between audience and storyteller.”
Shortly after I read the script to Melita, the latest virtual reality experience from Madrid-based immersive storytelling company Future Lighthouse, CEO Nicolas Alcalá explained to me that the piece is an example of “reactive content,” a concept he’s been working on since his days at Singularity University.
For the first time in history, we have access to technology that can merge the reactive and affective elements of oral storytelling with the affordances of digital media, weaving stunning visuals, rich soundtracks, and complex meta-narratives into a story arena that can know you more intimately than any conventional storyteller could.
It’s no exaggeration to say that the storytelling potential here is phenomenal.
In short, we can refer to content as reactive if it reads and reacts to users based on their body rhythms, emotions, preferences, and data points. Artificial intelligence is used to analyze users’ behavior or preferences to sculpt unique storylines and narratives, essentially allowing for a story that changes in real time based on who you are and how you feel.
The development of reactive content will allow those working in the industry to go one step further than simply translating the essence of oral storytelling into VR. Rather than having a narrative experience with a digital storyteller who can read you, reactive content has the potential to create an experience with a storyteller who knows you.
This means being able to subtly insert minor personal details that have a specific meaning to the viewer. When we talk to our friends we often use experiences we’ve shared in the past or knowledge of our audience to give our story as much resonance as possible. Targeting personal memories and aspects of our lives is a highly effective way to elicit emotions and aid in visualizing narratives. When you can do this with the addition of visuals, music, and characters—all lifted from someone’s past—you have the potential for overwhelmingly engaging and emotionally-charged content.
Future Lighthouse informs me that for now, reactive content will rely primarily on biometric feedback from sensors that track breathing, heart rate, and eye movement. A simple example would be a story in which parts of the environment or soundscape change in sync with the user’s heartbeat and breathing, or characters who call you out for not paying attention.
The next step would be characters and situations that react to the user’s emotions, wherein algorithms analyze biometric information to make inferences about states of emotional arousal (“why are you so nervous?” etc.). Another example would be implementing the use of “arousal parameters,” where the audience can choose what level of “fear” they want from a VR horror story before algorithms modulate the experience using information from biometric feedback devices.
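A rough sketch of how such an arousal-parameter loop might work; the sensor and scene functions here are hypothetical stand-ins, and the mapping from heart rate to arousal is invented purely for illustration:

```python
import random
import time

TARGET_FEAR = 0.6   # user-chosen "fear level" for the story (0 = calm, 1 = terror)
RESTING_HR = 70.0   # assumed resting heart rate in beats per minute

def read_heart_rate():
    """Stand-in for a real biometric sensor; here we just simulate a reading."""
    return random.gauss(85.0, 10.0)

def estimate_arousal(heart_rate):
    """Crude proxy: map elevation above resting heart rate onto a 0-1 scale."""
    return max(0.0, min(1.0, (heart_rate - RESTING_HR) / 50.0))

def modulate_scene(adjustment):
    """Stand-in for the VR engine: dial lighting, sound, and pacing up or down."""
    direction = "intensify" if adjustment > 0 else "ease off"
    print(f"scene adjustment {adjustment:+.2f} -> {direction}")

for _ in range(10):  # one story "beat" per second
    arousal = estimate_arousal(read_heart_rate())
    # Close the loop: if the viewer is calmer than the chosen fear level,
    # intensify the scene; if more aroused than requested, ease off.
    modulate_scene(TARGET_FEAR - arousal)
    time.sleep(1.0)
```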
The company’s long-term goal is to gather research on storytelling conventions and produce a catalogue of story “wireframes.” This entails distilling each genre down to its basic formula, which can then be fleshed out with visuals, character traits, and soundtracks tailored to individual users based on their deep data, preferences, and biometric information.
The development of reactive content will go hand in hand with a renewed exploration of diverging, dynamic storylines and multi-narratives, a concept that hasn’t had much impact in the movie world thus far. In theory, the idea of a story that changes and mutates is captivating largely because of our love affair with serendipity and unpredictability, a cultural condition that theorist Arthur Kroker calls the “hypertextual imagination.” This feeling of stepping into the unknown, with the possibility of deviating from the habitual, is a comforting reminder that our own lives can take exciting and unexpected turns at any moment.
The concept entered mainstream culture with the classic Choose Your Own Adventure book series, launched in the late 1970s, which found great success in its literary form. Filmic takes on the theme, however, have made somewhat less of an impression. DVDs like I’m Your Man (1998) and Switching (2003) both used scene-selection tools to determine the direction of the storyline.
A more recent example comes from Kino Industries, which claims to have developed technology that lets filmmakers produce interactive films in which viewers use their smartphones to quickly vote on which direction the narrative takes at numerous decision points throughout the film.
The main problem with diverging narrative films has been the stop-start nature of the interactive element: when I’m immersed in a story I don’t want to have to pick up a controller or remote to select what happens next. Every time the audience is given the option to take a new path (“press this button,” “vote on X, Y, Z”), the narrative, and immersion within that narrative, is temporarily halted, and it takes the mind a while to sink back into the story.
Reactive content has the potential to resolve these issues by enabling passive interactivity—that is, input and output without having to pause and actively make decisions or engage with the hardware. This will result in diverging, dynamic narratives that will unfold seamlessly while being dependent on and unique to the specific user and their emotions. Passive interactivity will also remove the game feel that can often be a symptom of interactive experiences and put a viewer somewhere in the middle: still firmly ensconced in an interactive dynamic narrative, but in a much subtler way.
While reading the Melita script I was particularly struck by a scene in which the characters start to engage with the user and there’s a synchronicity between the user’s heartbeat and objects in the virtual world. As the narrative unwinds and the words of Melita’s character get more profound, parts of the landscape, which seemed to be flashing and pulsating at random, come together and start to mimic the user’s heartbeat.
In 2013, Jane Aspell of Anglia Ruskin University (UK) and Lukas Heydrich of the Swiss Federal Institute of Technology showed that a user’s sense of presence and identification with a virtual avatar could be dramatically increased by syncing the on-screen character with the user’s heartbeat. The relationship between bio-digital synchronicity, immersion, and emotional engagement is something that will surely have revolutionary narrative and storytelling potential.
Image Credit: Tithi Luadthong / Shutterstock.com
Possible Ways How Machine Learning Can Improve Education
The transition from manual methods to machine learning has faced myriad challenges. Machine learning is a subset of artificial intelligence that identifies patterns in data to inform algorithms capable of making accurate, data-driven predictions. Prolonged interactions with computers enable the computers …
The post How Machine Learning Will Change Education: Possible Ways appeared first on TFOT.
On a dark night, away from city lights, the stars of the Milky Way can seem uncountable. Yet from any given location no more than 4,500 are visible to the naked eye. Meanwhile, our galaxy has 100–400 billion stars, and there are even more galaxies in the universe.
The numbers of the night sky are humbling. And they give us a deep perspective…on drugs.
Yes, this includes wow-the-stars-are-freaking-amazing-tonight drugs, but also the kinds of drugs that make us well again when we’re sick. The number of possible organic compounds with “drug-like” properties dwarfs the number of stars in the universe by over 30 orders of magnitude.
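That comparison is quick to sanity-check; both figures below are rough, commonly cited estimates rather than numbers from the article:

```python
import math

drug_like_space = 1e60   # commonly cited estimate of drug-like chemical space
stars = 1e24             # rough upper estimate of stars in the observable universe

print(math.log10(drug_like_space / stars))  # -> 36.0 orders of magnitude
```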
Next to this multiverse of possibility, the chemical configurations scientists have made into actual medicines are like the smattering of stars you’d glimpse downtown.
But for good reason.
Exploring all that potential drug-space is as humanly impossible as exploring all of physical space, and even if we could, most of what we’d find wouldn’t fit our purposes. Still, the idea that wonder drugs must surely lurk amid the multitudes is too tantalizing to ignore.
Which is why, Alex Zhavoronkov said at Singularity University’s Exponential Medicine in San Diego last week, we should use artificial intelligence to do more of the legwork and speed discovery. This, he said, could be one of the next big medical applications for AI.
Dogs, Diagnosis, and Drugs
Zhavoronkov is CEO of Insilico Medicine and CSO of the Biogerontology Research Foundation. Insilico is one of a number of startups aiming to accelerate drug discovery with AI.
In recent years, Zhavoronkov said, the now-famous machine learning technique, deep learning, has made progress on a number of fronts. Algorithms that can teach themselves to play games—like DeepMind’s AlphaGo Zero or Carnegie Mellon’s poker playing AI—are perhaps the most headline-grabbing of the bunch. But pattern recognition was the thing that kicked deep learning into overdrive early on, when machine learning algorithms went from struggling to tell dogs and cats apart to outperforming their peers and then their makers in quick succession.
In medicine, deep learning algorithms trained on databases of medical images can spot life-threatening disease with equal or greater accuracy than human professionals. There’s even speculation that AI, if we learn to trust it, could be invaluable in diagnosing disease. And, as Zhavoronkov noted, with more applications and a longer track record that trust is coming.
“Tesla is already putting cars on the street,” Zhavoronkov said. “Three-year, four-year-old technology is already carrying passengers from point A to point B, at 100 miles an hour, and one mistake and you’re dead. But people are trusting their lives to this technology.”
“So, why don’t we do it in pharma?”
Trial and Error and Try Again
AI wouldn’t drive the car in pharmaceutical research. It’d be an assistant that, when paired with a chemist or two, could fast-track discovery by screening more possibilities for better candidates.
There’s plenty of room to make things more efficient, according to Zhavoronkov.
Drug discovery is arduous and expensive. Chemists sift tens of thousands of candidate compounds for the most promising to synthesize. Of these, a handful will go on to further research, fewer will make it to human clinical trials, and a fraction of those will be approved.
The whole process can take many years and cost hundreds of millions of dollars.
This is a big data problem if ever there was one, and deep learning thrives on big data. Early applications have shown their worth by unearthing subtle patterns in huge training databases. Although drug-makers already use software to sift compounds, such software requires explicit rules written by chemists. AI’s allure is its ability to learn and improve on its own.
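For a sense of what those chemist-written rules look like, here is Lipinski’s rule of five, a classic hand-coded drug-likeness filter. The aspirin values below are approximate, and in practice the properties would be computed with a cheminformatics toolkit such as RDKit rather than typed in by hand:

```python
def passes_lipinski(mol_weight, logp, h_donors, h_acceptors):
    """Lipinski's 'rule of five': a classic hand-written filter for the
    drug-likeness of orally active compounds."""
    return (mol_weight <= 500       # molecular weight in daltons
            and logp <= 5           # octanol-water partition coefficient
            and h_donors <= 5       # hydrogen-bond donors
            and h_acceptors <= 10)  # hydrogen-bond acceptors

# Aspirin: MW ~180, logP ~1.2, 1 donor, 4 acceptors -> passes
print(passes_lipinski(180.2, 1.2, 1, 4))  # True
```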
“There are two strategies for AI-driven innovation in pharma to ensure you get better molecules and much faster approvals,” Zhavoronkov said. “One is looking for the needle in the haystack, and another one is creating a new needle.”
To find the needle in the haystack, algorithms are trained on large databases of molecules. Then they go looking for molecules with attractive properties. But creating a new needle? That’s a possibility enabled by the generative adversarial networks Zhavoronkov specializes in.
Such algorithms pit two neural networks against each other. One generates candidate output while the other judges whether that output is real or fake, Zhavoronkov said. Together, the networks learn to generate new objects like text, images, or, in this case, molecular structures.
“We started employing this particular technology to make deep neural networks imagine new molecules, to make it perfect right from the start. So, to come up with really perfect needles,” Zhavoronkov said. “[You] can essentially go to this [generative adversarial network] and ask it to create molecules that inhibit protein X at concentration Y, with the highest viability, specific characteristics, and minimal side effects.”
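The adversarial setup is easier to see in code. Below is a minimal, illustrative GAN in PyTorch (assuming the torch package is installed); real molecular generators such as Insilico’s work on chemical representations and are conditioned on target properties, whereas this sketch just learns to mimic a one-dimensional distribution standing in for “real molecules”:

```python
import torch
import torch.nn as nn

real_data = lambda n: torch.randn(n, 1) * 0.5 + 4.0  # stand-in "real molecules"
noise = lambda n: torch.randn(n, 8)                  # random seed for the generator

# Generator maps noise to candidates; discriminator scores real vs. generated.
G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(2000):
    # 1. Train the discriminator to tell real samples from generated ones.
    opt_d.zero_grad()
    loss_d = bce(D(real_data(64)), ones) + bce(D(G(noise(64)).detach()), zeros)
    loss_d.backward()
    opt_d.step()
    # 2. Train the generator to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(G(noise(64))), ones)
    loss_g.backward()
    opt_g.step()

print(G(noise(1000)).mean().item())  # drifts toward the real mean, ~4.0
```

Once trained, the generator is the useful artifact: sampling it yields new candidates shaped like the training data, which is the role it would play in molecule generation.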
Zhavoronkov believes AI can find or fabricate more needles from the array of molecular possibilities, freeing human chemists to focus on synthesizing only the most promising. If it works, he hopes we can increase hits, minimize misses, and generally speed the process up.
Proof’s in the Pudding
Insilico isn’t alone on its drug-discovery quest, nor is it a brand new area of interest.
Last year, a Harvard group published a paper on an AI that similarly suggests drug candidates. The software trained on 250,000 drug-like molecules and used its experience to generate new molecules that blended existing drugs and made suggestions based on desired properties.
An MIT Technology Review article on the subject highlighted a few of the challenges such systems may still face. The results returned aren’t always meaningful or easy to synthesize in the lab, and the quality of the output, as always, is only as good as the data the model is trained on.
Stanford chemistry professor and Andreessen Horowitz partner Vijay Pande said that images, speech, and text, three of the areas where deep learning has made quick strides, have better, cleaner data. Chemical data, on the other hand, is still being optimized for deep learning. Also, while there are public databases, much data still lives behind closed doors at private companies.
To overcome the challenges and prove their worth, Zhavoronkov said, his company is very focused on validating the tech. But this year, skepticism in the pharmaceutical industry seems to be easing into interest and investment.
AI drug discovery startup Exscientia has inked deals worth $280 million with Sanofi and $42 million with GlaxoSmithKline. Insilico is also partnering with GlaxoSmithKline, and Numerate is working with Takeda Pharmaceutical. Even Google may jump in. According to an article in Nature outlining the field, the firm’s deep learning project, Google Brain, is growing its biosciences team, and industry watchers wouldn’t be surprised to see them target drug discovery.
With AI and the hardware running it advancing rapidly, the greatest potential may yet be ahead. Perhaps, one day, all 10^60 molecules in drug-space will be at our disposal. “You should take all the data you have, build n new models, and search as much of that 10^60 as possible” before every decision you make, Brandon Allgood, CTO at Numerate, told Nature.
Today’s projects need to live up to their promises, of course, but Zhavoronkov believes AI will have a big impact in the coming years, and now’s the time to integrate it. “If you are working for a pharma company, and you’re still thinking, ‘Okay, where is the proof?’ Once there is a proof, and once you can see it to believe it—it’s going to be too late,” he said.
Image Credit: Klavdiya Krinichnaya / Shutterstock.com