The First Novel Written by AI Is ...
Last year, a novelist went on a road trip across the USA. The trip was an attempt to emulate Jack Kerouac—to go out on the road and find something essential to write about in the experience. There is, however, a key difference between this writer and anyone else talking your ear off in the bar. This writer is just a microphone, a GPS, and a camera hooked up to a laptop and a whole bunch of linear algebra.
People who are optimistic that artificial intelligence and machine learning won’t put us all out of a job say that human ingenuity and creativity will be difficult to imitate. The classic argument is that, just as machines freed us from repetitive manual tasks, machine learning will free us from repetitive intellectual tasks.
This leaves us free to spend more time on the rewarding aspects of our work, pursuing creative hobbies, spending time with loved ones, and generally being human.
In this worldview, creative works like a great novel or symphony, and the emotions they evoke, cannot be reduced to lines of code. Humans retain a dimension of superiority over algorithms.
But is creativity a fundamentally human phenomenon? Or can it be learned by machines?
And if they learn to understand us better than we understand ourselves, could the great AI novel—tailored, of course, to your own predispositions in fiction—be the best you’ll ever read?
Maybe Not a Beach Read
This is the futurist’s view, of course. The reality, as the jury-rigged contraption in Ross Goodwin’s Cadillac for that road trip can attest, is some way off.
“This is very much an imperfect document, a rapid prototyping project. The output isn’t perfect. I don’t think it’s a human novel, or anywhere near it,” Goodwin said of the novel his machine created. That novel, 1 the Road, is currently marketed as the first novel written by AI.
Once the neural network has been trained, it can generate any length of text that the author desires, either at random or working from a specific seed word or phrase. Goodwin used the sights and sounds of the road trip to provide these seeds: the novel is written one sentence at a time, based on images, locations, dialogue from the microphone, and even the computer’s own internal clock.
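To make the seeding concrete, here is a minimal sketch of that sampling loop, generating one character at a time from a seed phrase. This is generic illustrative code, not Goodwin’s: the `model` interface (a trained character-level LSTM returning logits and hidden state) and the `stoi`/`itos` vocabulary mappings are assumptions.

```python
import torch

def generate(model, seed, stoi, itos, length=200, temperature=0.8):
    # Warm up the LSTM's hidden state on the seed phrase, so the
    # output is conditioned on the sights and sounds encoded in it.
    idx = torch.tensor([[stoi[ch] for ch in seed]])
    logits, hidden = model(idx, None)  # assumed: returns (logits, state)
    out = list(seed)
    for _ in range(length):
        # Turn the last character's logits into a distribution and
        # sample from it; temperature < 1 makes the text less random.
        probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
        nxt = torch.multinomial(probs, 1).item()
        out.append(itos[nxt])
        logits, hidden = model(torch.tensor([[nxt]]), hidden)
    return "".join(out)
```

Seeded this way, each sentence starts from the trip’s data but drifts wherever the learned statistics take it.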
The results are… mixed.
The novel begins suitably enough, quoting the time: “It was nine seventeen in the morning, and the house was heavy.” Descriptions of locations start out grounded in the Foursquare location data fed into the algorithm, but rapidly veer off into the weeds and become surreal. While experimentation in literature is a wonderful thing, repeatedly quoting longitude and latitude coordinates verbatim is unlikely to win anyone the Booker Prize.
Data In, Art Out?
Neural networks as creative agents have some advantages. They excel at being trained on large datasets, identifying the patterns in those datasets, and producing output that follows those same rules. Music inspired by or written by AI has become a growing subgenre—there’s even a pop album by human-machine collaborators called the Songularity.
A neural network can “listen to” all of Bach and Mozart in hours, and train itself on the works of Shakespeare to produce passable pseudo-Bard. The idea of artificial creativity has become so widespread that there’s even a meme format about forcibly training neural network ‘bots’ on human writing samples, with hilarious consequences—although the best joke was undoubtedly human in origin.
The AI that roamed from New York to New Orleans was an LSTM (long short-term memory) neural net. By default, the information held in an LSTM’s memory cells is carried over from one timestep to the next; gates decide which small parts are “forgotten” or “learned” at each step, rather than the cells being entirely overwritten.
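A minimal sketch of that gating mechanism, in plain NumPy rather than any production framework (the stacked weight layout is one common convention, not Goodwin’s code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, b):
    # One timestep. W stacks the weights for all four gates; it maps
    # the current input concatenated with the previous hidden state.
    z = W @ np.concatenate([x, h_prev]) + b
    i, f, o, g = np.split(z, 4)
    f = sigmoid(f)                    # forget gate: how much old memory to keep
    i = sigmoid(i)                    # input gate: how much new info to write
    o = sigmoid(o)                    # output gate: how much memory to expose
    c = f * c_prev + i * np.tanh(g)   # the cell is edited, never fully overwritten
    h = o * np.tanh(c)
    return h, c
```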
The LSTM architecture performs better than earlier recurrent neural networks at tasks such as handwriting and speech recognition. For literary influences, Goodwin fed the network 60 million words (360 MB) of raw literature according to his recipe: one third poetry, one third science fiction, and one third “bleak” literature.
In this way, Goodwin has some creative control over the project; the source material influences the machine’s vocabulary and sentence structuring, and hence the tone of the piece.
The Thoughts Beneath the Words
The problem with artificially intelligent novelists is the same problem with conversational artificial intelligence that computer scientists have been trying to solve since Turing’s day. The machines can detect and reproduce complex patterns, increasingly better than humans can, but they have no understanding of what those patterns mean.
Goodwin’s neural network spits out sentences one letter at a time, on a tiny printer hooked up to the laptop. Statistical associations such as those tracked by neural nets can form words from letters, and sentences from words, but they know nothing of character or plot.
When talking to a chatbot, the code has no real understanding of what’s been said before, and there is no dataset large enough to train it through all of the billions of possible conversations.
Unless restricted to a predetermined set of options, it loses the thread of the conversation after a reply or two. In a similar way, the creative neural nets have no real grasp of what they’re writing, and no way to produce anything with any overarching coherence or narrative.
Goodwin’s experiment is an attempt to add some coherent backbone to the AI “novel” by repeatedly grounding it with stimuli from the cameras or microphones—the thematic links and narrative provided by the American landscape the neural network drives through.
Goodwin feels that this approach (the car itself moving through the landscape, as if a character) borrows some continuity and coherence from the journey itself. “Coherent prose is the holy grail of natural-language generation—feeling that I had somehow solved a small part of the problem was exhilarating. And I do think it makes a point about language in time that’s unexpected and interesting.”
AI Is Still No Kerouac
A coherent tone and semantic “style” might be enough to produce some vaguely convincing teenage poetry, as Google did, and experimental fiction that uses neural networks can have intriguing results. But wading through the surreal AI prose of this era, searching for some meaning or motif beyond novelty value, can be a frustrating experience.
Maybe machines can learn the complexities of the human heart and brain, or how to write evocative or entertaining prose. But they’re a long way off, and somehow “more layers!” or a bigger corpus of data doesn’t feel like enough to bridge that gulf.
Real attempts by machines to write fiction have so far been broadly incoherent, but with flashes of poetry—dreamlike, hallucinatory ramblings.
Neural networks might not be capable of writing intricately plotted works with the charm and wit of a Dickens or a Dostoevsky, but there’s still an eeriness to trying to decipher the surreal, Finnegans Wake-style mish-mash.
You might see, in the odd line, the flickering ghost of something like consciousness, a deeper understanding. Or you might just see fragments of meaning thrown into a neural network blender, full of hype and fury, obeying rules in an occasionally striking way, but ultimately signifying nothing. In that sense, at least, the RNN’s grappling with metaphor feels like a metaphor for the hype surrounding the latest AI summer as a whole.
Or, as the human author of On the Road put it: “You guys are going somewhere or just going?”
Image Credit: eurobanks / Shutterstock.com
This AI Predicts Obesity ...
A research team at the University of Washington has trained an artificial intelligence system to spot obesity—all the way from space. The system used a convolutional neural network (CNN) to analyze 150,000 satellite images and look for correlations between the physical makeup of a neighborhood and the prevalence of obesity.
The team’s results, published in JAMA Network Open, showed that features of a given neighborhood could explain close to two-thirds (64.8 percent) of the variance in obesity. The researchers found that analyzing satellite data could help deepen understanding of the link between people’s environment and obesity prevalence. The next step would be to make corresponding structural changes in the way neighborhoods are built to encourage physical activity and better health.
Training AI to Spot Obesity
Convolutional neural networks (CNNs) are particularly adept at image analysis, object recognition, and identifying spatial hierarchies in large datasets.
Prior to analyzing 150,000 high-resolution satellite images of Bellevue, Seattle, Tacoma, Los Angeles, Memphis, and San Antonio, the researchers trained the CNN on 1.2 million images from the ImageNet database. The features it extracted were then correlated with obesity prevalence estimates for the six urban areas, drawn from the census-tract data gathered by the 500 Cities project.
The system was able to identify certain features that increased the likelihood of obesity in a given area. These included tightly packed houses, proximity to roadways, and a lack of greenery in the neighborhood.
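The overall pipeline is a standard transfer-learning pattern: a CNN pretrained on ImageNet acts as a fixed feature extractor, and a simple regression model maps the extracted features to obesity prevalence. The sketch below illustrates that pattern only; the study itself used a VGG-CNN-F network and elastic-net regression, and the torchvision model and parameter values here are stand-in assumptions.

```python
import torch
from torchvision import models
from sklearn.linear_model import ElasticNet

# Stand-in for the study's VGG-CNN-F: an ImageNet-pretrained VGG16
# with its classification head removed, used as a feature extractor.
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
cnn.classifier = torch.nn.Identity()
cnn.eval()

def extract_features(tiles):
    # tiles: normalized float tensor of shape (N, 3, 224, 224),
    # one satellite image tile per row.
    with torch.no_grad():
        return cnn(tiles).numpy()

# Per census tract: average the tile features, then regress them
# against the tract's measured obesity prevalence.
# X: (n_tracts, n_features), y: (n_tracts,) obesity prevalence
regressor = ElasticNet(alpha=0.1)       # alpha is an illustrative value
# regressor.fit(X_train, y_train)
# r2 = regressor.score(X_test, y_test)  # the "variance explained" figure
```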
Figure: Visualization of features identified by the CNN model. The left column shows satellite images taken from the Google Static Maps API; the middle and right columns show activation maps from the second convolutional layer of the VGG-CNN-F network after a forward pass of the respective satellite images. From Google Static Maps API, DigitalGlobe, US Geological Survey (accessed July 2017). Credit: JAMA Network Open
Your Surroundings Are Key
In their discussion of the findings, the researchers stressed that there are limits to the conclusions that can be drawn from the AI’s results. For example, socioeconomic factors like income likely play a major role in obesity prevalence in a given geographic area.
However, the study concluded that the AI-powered analysis showed specific man-made features of neighborhoods correlating consistently with obesity prevalence, and not simply tracking socioeconomic status.
The model’s explanatory power varied between the studied cities, from a high of 73.3 percent in Memphis to a low of 55.8 percent in Seattle.
AI Takes to the Sky
Around a third of the US population is categorized as obese. Obesity is linked to a number of health-related issues, and the AI-generated results could potentially help improve city planning and better target campaigns to limit obesity.
The study is one of the latest of a growing list that uses AI to analyze images and extrapolate insights.
A team at Stanford University has used a CNN to predict poverty from satellite imagery, helping governments and NGOs better target their efforts. A combination of the public Automatic Identification System for shipping, satellite imagery, and Google’s AI has proven able to identify illegal fishing activity. Researchers have even used AI and Google Street View to predict which party a given city will vote for, based on the cars parked on its streets.
In each case, the AI systems have been able to look at volumes of data about our world and surroundings that are beyond the capabilities of humans, and extrapolate new insights. If one were to moralize about the good and bad sides of AI (new opportunities vs. potential job losses, for example), much comes down to what we ask AI systems to look at—and what questions we ask of them.
Image Credit: Ocean Biology Processing Group at NASA’s Goddard Space Flight Center
What We Have to Gain From Making ...
The borders between the real world and the digital world keep crumbling, and the latter’s importance in both our personal and professional lives keeps growing. Some describe this melding of virtual and real worlds as part of the fourth industrial revolution. That revolution’s full impact on us as individuals, and on our companies, communities, and societies, is still unknown.
Greg Cross, chief business officer of New Zealand-based AI company Soul Machines, thinks one inescapable consequence of these crumbling borders is people spending more and more time interacting with technology. In a presentation at Singularity University’s Global Summit in San Francisco last month, Cross unveiled Soul Machines’ latest work and shared his views on the current state of human-like AI and where the technology may go in the near future.
Humanizing Technology Interaction
Cross started by introducing Rachel, one of Soul Machines’ “emotionally responsive digital humans.” The company has built 15 different digital humans of various sexes, groups, and ethnicities. Rachel, along with her “sisters” and “brothers,” has a virtual nervous system based on neural networks and biological models of different pathways in the human brain. The system is controlled by virtual neurotransmitters and hormones akin to dopamine, serotonin, and oxytocin, which influence learning and behavior.
As a result, each digital human can have its own unique set of “feelings” and responses to interactions. People interact with them via visual and audio sensors, and the machines respond in real time.
“Over the last 20 or 30 years, the way we think about machines and the way we interact with machines has changed,” Cross said. “We’ve always had this view that they should actually be more human-like.”
The realism of the digital humans’ graphic representations comes thanks to the work of Soul Machines’ other co-founder, Dr. Mark Sagar, who has won two Academy Awards for his computer graphics work on movies including James Cameron’s Avatar.
Cross pointed out, for example, that rather than being unrealistically flawless and clear, Rachel’s skin has blemishes and sun spots, just like real human skin would.
The Next Human-Machine Frontier
When people interact with each other face to face, emotional and intellectual engagement both heavily influence the interaction. What would it look like for machines to bring those same emotional and intellectual capacities to our interactions with them, and how would this type of interaction affect the way we use, relate to, and feel about AI?
Cross and his colleagues believe that humanizing artificial intelligence will make the technology more useful to humanity, and prompt people to use AI in more beneficial ways.
“What we think is a very important view as we move forward is that these machines can be more helpful to us. They can be more useful to us. They can be more interesting to us if they’re actually more like us,” Cross said.
It is an approach that seems to resonate with companies and organizations. In the UK, NatWest Bank is testing out Cora as a digital employee to help answer customer queries. In Germany, Daimler Financial Group plans to employ Sarah as something “similar to a personal concierge” for its customers. According to Cross, Daimler is also looking at other ways to deploy digital humans across the organization, from digital service and sales people to, perhaps in the future, digital chauffeurs.
Soul Machines’ latest creation is Will, a digital teacher that can interact with children through a desktop, tablet, or mobile device and help them learn about renewable energy. Cross sees other social uses for digital humans, including potentially serving as doctors to rural communities.
Our Digital Friends—and Twins
Soul Machines is not alone in its quest to humanize technology. It is a direction many technology companies, including the likes of Amazon, also seem to be pursuing. Amazon is working on building a home robot that, according to Bloomberg, “could be a sort of mobile Alexa.”
Finding a more human form for technology seems like a particularly pervasive pursuit in Japan, not just in its many, many robots but also in virtual assistants like Gatebox.
The Japanese approach was perhaps best summed up by famous android researcher Dr. Hiroshi Ishiguro, who I interviewed last year: “The human brain is set up to recognize and interact with humans. So, it makes sense to focus on developing the body for the AI mind, as well as the AI. I believe that the final goal for both Japanese and other companies and scientists is to create human-like interaction.”
During Cross’s presentation, Rob Nail, CEO and associate founder of Singularity University, joined him on stage and extended an invitation to Rachel to become SU’s first fully digital faculty member. Rachel accepted, and though she’s the only digital faculty member right now, she predicted this won’t be the case for long.
“In 10 years, all of you will have digital versions of yourself, just like me, to take on specific tasks and make your life a whole lot easier,” she said. “This is great news for me. I’ll have millions of digital friends.”
Image Credit: Soul Machines