This AI Predicts Obesity ...
A research team at the University of Washington has trained an artificial intelligence system to spot obesity—all the way from space. The system used a convolutional neural network (CNN) to analyze 150,000 satellite images and look for correlations between the physical makeup of a neighborhood and the prevalence of obesity.
The team’s results, published in JAMA Network Open, showed that features of a given neighborhood could explain close to two-thirds (64.8 percent) of the variance in obesity. The researchers found that analyzing satellite data could deepen understanding of the link between people’s environment and obesity prevalence. The next step would be to make corresponding structural changes in the way neighborhoods are built to encourage physical activity and better health.
Training AI to Spot Obesity
Convolutional neural networks (CNNs) are particularly adept at image analysis, object recognition, and identifying spatial hierarchies in large datasets.
Prior to analyzing 150,000 high-resolution satellite images of Bellevue, Seattle, Tacoma, Los Angeles, Memphis, and San Antonio, the researchers trained the CNN on 1.2 million images from the ImageNet database. The features the network extracted from the satellite images were then correlated with census-tract obesity prevalence estimates for the six urban areas, gathered by the 500 Cities project.
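To make that two-step pipeline concrete, here is a minimal sketch: extract features from satellite tiles with an ImageNet-pretrained CNN, then regress tract-level prevalence on them. This is not the authors' code; torchvision's VGG16 stands in for the paper's VGG-CNN-F, the elastic-net regression is one common choice for this step, and the file paths and prevalence numbers are invented placeholders.

```python
# A minimal sketch of the two-step pipeline, not the authors' code.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.linear_model import ElasticNet

# Step 1: a CNN pretrained on the 1.2M-image ImageNet dataset, with its
# final classification layer removed so it outputs a 4096-dimensional
# feature vector per image.
cnn = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
cnn.classifier = cnn.classifier[:-1]
cnn.eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def tile_features(path):
    """Forward one satellite tile through the network; return its features."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return cnn(img).squeeze(0).numpy()

# Placeholder inputs: in the study, tile features were aggregated per
# census tract and paired with 500 Cities prevalence estimates.
tract_tile_paths = ["tract_0001.png", "tract_0002.png"]
obesity_prevalence = [0.31, 0.27]

# Step 2: regress prevalence on the extracted features; a regularized
# model such as elastic net keeps the 4096 features in check.
X = [tile_features(p) for p in tract_tile_paths]
model = ElasticNet(alpha=0.1).fit(X, obesity_prevalence)
print("in-sample variance explained:", model.score(X, obesity_prevalence))
```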
The system was able to identify the presence of certain features that increased the likelihood of obesity in a given area. These included tightly packed houses, proximity to roadways, and a lack of greenery.
Visualization of features identified by the convolutional neural network (CNN) model. The images in the left column are satellite images taken from the Google Static Maps API (application programming interface). Images in the middle and right columns are activation maps taken from the second convolutional layer of the VGG-CNN-F network after a forward pass of the respective satellite images through the network. From Google Static Maps API, DigitalGlobe, US Geological Survey (accessed July 2017). Credit: JAMA Network Open
Your Surroundings Are Key
In their discussion of the findings, the researchers stressed that there are limits to the conclusions that can be drawn from the AI’s results. For example, socioeconomic factors like income likely play a major role in obesity prevalence in a given geographic area.
However, the study concluded that the AI-powered analysis showed specific man-made features of neighborhoods consistently correlating with obesity prevalence, a relationship that was not simply a reflection of socioeconomic status.
The share of variance the model explained varied between the studied cities, with Memphis the highest (73.3 percent) and Seattle the lowest (55.8 percent).
AI Takes to the Sky
Around a third of the US population is categorized as obese. Obesity is linked to a number of health-related issues, and the AI-generated results could potentially help improve city planning and better target campaigns to limit obesity.
The study is one of the latest in a growing list of projects that use AI to analyze images and extrapolate insights.
A team at Stanford University has used a CNN to predict poverty from satellite imagery, helping governments and NGOs better target their efforts. A combination of the public Automatic Identification System for shipping, satellite imagery, and Google’s AI has proven able to identify illegal fishing activity. Researchers have even been able to use AI and Google Street View to predict which party a given city will vote for, based on the cars parked on its streets.
In each case, the AI systems have been able to look at volumes of data about our world and surroundings that are beyond the capabilities of humans and extrapolate new insights. If one were to moralize about the good and bad sides of AI (new opportunities vs. potential job losses, for example), it could seem that it comes down to what we ask AI systems to look at—and what questions we ask of them.
Image Credit: Ocean Biology Processing Group at NASA’s Goddard Space Flight Center
Can We Make a Musical Turing Test?
As artificial intelligence advances, we’re encountering the same old questions. How much of what we consider fundamentally human can be reduced to an algorithm? Can we create something sufficiently advanced that people can no longer distinguish machine from human? This, after all, is the idea behind the Turing Test, which has yet to be passed.
At first glance, you might think music is beyond the realm of algorithms. Birds can sing, and people can compose symphonies. Music is evocative; it makes us feel. Very often, our intense personal and emotional attachments to music are because it reminds us of our shared humanity. We are told that creative jobs are the least likely to be automated. Creativity seems fundamentally human.
But above all, I think we view it as reductionist sacrilege to dissect beautiful things. “If you try to strangle a skylark / to cut it up, see how it works / you will stop its heart from beating / you will stop its mouth from singing.” A human musician wrote that. A machine might be able to string words together that are happy or sad; it might even be able to conjure up a decent metaphor from the depths of some neural network—but could it understand humanity enough to produce art that speaks to humans?
Then, of course, there’s the other side of the debate. Music, after all, has a deeply mathematical structure; you can train a machine to produce harmonics. “In the teachings of Pythagoras and his followers, music was inseparable from numbers, which were thought to be the key to the whole spiritual and physical universe,” according to Grout in A History of Western Music. You might argue that the process of musical composition cannot be reduced to a simple algorithm, yet musicians have often done exactly that. Mozart, with his “Dice Music,” used rolls of dice to decide how to order musical fragments: creativity through an 18th-century random number generator. Algorithmic music goes back a very long way, with the first papers on the subject dating from the 1960s.
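The dice-music recipe is simple enough to sketch in a few lines. In Mozart’s game, rolls of two dice picked one pre-written bar for each slot of a minuet; the sketch below follows that scheme, though the fragment names are invented placeholders rather than Mozart’s actual table.

```python
import random

# One pre-written bar of music for each (dice total, slot) pair; in
# Mozart's table these were real measures, here they are just labels.
fragment_table = {
    total: [f"bar_{total}_{slot}" for slot in range(16)]
    for total in range(2, 13)
}

def roll_two_dice():
    return random.randint(1, 6) + random.randint(1, 6)

# Assemble a 16-bar minuet: one roll of two dice selects each bar.
piece = [fragment_table[roll_two_dice()][slot] for slot in range(16)]
print(" | ".join(piece))
```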
Then there’s the techno-enthusiast side of the argument. iTunes has 26 million songs, easily more than a century of music. A human could never listen to and learn from them all, but a machine could. It could also memorize every note of Beethoven. Music can be converted into MIDI files, a nice, chewable data format that lets even a simple character-by-character neural net running on your own computer generate music. (Seriously, even I could get this thing working.)
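As a rough illustration of what “character-by-character” means here, below is a toy character-level network trained on music encoded as plain text (ABC notation in this case). Everything in it, from the placeholder tune to the hyperparameters, is an assumption chosen for brevity, not the setup of any particular project.

```python
# A toy char-level model in the spirit of char-rnn, trained on a
# placeholder ABC-notation string rather than a real corpus.
import torch
import torch.nn as nn

corpus = "X:1\nT:placeholder tune\nK:C\nCDEF GABc | cBAG FEDC |\n" * 50
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in corpus])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

seq_len = 64
for step in range(200):  # a few hundred steps is enough for a toy demo
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)          # input characters
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)  # next-character targets
    logits, _ = model(x)
    loss = loss_fn(logits.view(-1, len(chars)), y.view(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sample new "music" one character at a time.
idx, h = data[:1].unsqueeze(0), None
out = []
for _ in range(80):
    logits, h = model(idx, h)
    probs = torch.softmax(logits[0, -1], -1)
    idx = torch.multinomial(probs, 1).unsqueeze(0)
    out.append(chars[idx.item()])
print("".join(out))
```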
Indeed, generating music in the style of Bach has long been a test for AI, and you can watch neural networks gradually learn to imitate classical composers while trying to avoid overfitting. When an algorithm overfits, it essentially starts copying the existing music rather than drawing inspiration from it to create something new yet similar: a tightrope the best human artists learn to walk. Creativity doesn’t spring from nowhere; even maverick musical geniuses have their influences.
Does a machine have to be truly ‘creative’ to produce something that someone would find valuable? To what extent would listeners’ attitudes change if they thought they were hearing a human vs. an AI composition? This all suggests a musical Turing Test. Of course, it already exists. In fact, it’s run out of Dartmouth, the school that hosted that first, seminal AI summer conference. This year, the contest is bigger than ever: alongside the PoetiX, LimeriX, and LyriX competitions for poetry and lyrics, there’s a DigiKidLit competition for children’s literature (although you may have reservations about exposing your children to neural-net-generated content… it can get a bit surreal).
There’s also a pair of musical competitions, including one for original compositions in different genres. Key genres and styles are represented by Charlie Parker for jazz and the Bach chorales for classical music. There’s also a free composition, and a contest where a human and an AI try to improvise together—the AI must respond to a human spontaneously, in real time, and in a musically pleasing way. Quite a challenge! In all cases, if any of the generated work is indistinguishable from that of human performers, the neural net has passed the Turing Test.
Did they? Here’s part of 2017’s winning sonnet from Charese Smiley and Hiroko Bretz:
The large cabin was in total darkness.
Come marching up the eastern hill afar.
When is the clock on the stairs dangerous?
Everything seemed so near and yet so far.
Behind the wall silence alone replied.
Was, then, even the staircase occupied?
Generating the rhymes is easy enough, and the sentence structure only a little trickier, but what’s impressive about this sonnet is that it sticks to a single topic and reads as a more coherent whole. I’d guess they used associated “lexical fields” of similar words to help generate something coherent. In a similar way, most of the more famous examples of AI-generated music still involve some amount of human control, even if it’s editorial; a human will build a song around an AI-generated riff, or select the most convincing Bach chorale from among many samples.
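That “lexical field” guess can be illustrated with a trivial sketch: bias word selection toward a hand-picked set of topic words so that successive lines keep circling the same subject. The field, the candidate vocabulary, and the boost weight below are all invented for illustration; a real system would presumably derive the field from word embeddings rather than a hard-coded set.

```python
import random

# A hand-picked "lexical field" around the sonnet's haunted-house topic;
# everything here is invented for illustration.
lexical_field = {"staircase", "clock", "wall", "darkness", "cabin", "silence"}
candidates = ["staircase", "banana", "silence", "modem", "darkness", "pizza"]

def pick_word(words, boost=5.0):
    # Words inside the field get a higher sampling weight, so successive
    # lines keep circling the same subject instead of drifting.
    weights = [boost if w in lexical_field else 1.0 for w in words]
    return random.choices(words, weights=weights, k=1)[0]

print([pick_word(candidates) for _ in range(5)])
```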
We are seeing strides forward in the ability of AI to generate human voices and human likenesses. As the latter example shows, in the fake news era people have focused on the dangers of this tech, but might it also be possible to create a virtual performer, trained on a dataset of their original music? Did you ever want to hear another Beatles album, or jam with Miles Davis? Of course, these things are impossible—but could we create a similar experience that people would genuinely value? Even, to the untrained ear, something indistinguishable from the real thing?
And if it did measure up to the real thing, what would this mean? Jaron Lanier is a fascinating technology writer, a critic of strong AI, and a believer in the power of virtual reality to change the world and provide truly meaningful experiences. He’s also a composer and a musical aficionado. He pointed out in a recent interview that translation algorithms, by reducing the amount of work translators are commissioned to do, have, in some sense, profited from stolen expertise. They were trained on huge datasets purloined from human linguists and translators. If you can train an AI on someone’s creative output and it produces new music, who “owns” it?
Although companies that offer AI music tools are starting to proliferate, and some groups will argue that the musical Turing test has been passed already, AI-generated music is hardly racing to the top of the pop charts just yet. Even as the line between human-composed and AI-generated music starts to blur, there’s still a gulf between the average human and musical genius. In the next few years, we’ll see how far the current techniques can take us. It may be the case that there’s something in the skylark’s song that can’t be generated by machines. But maybe not, and then this song might need an extra verse.
Image Credit: d1sk / Shutterstock.com