Tag Archives: network

#432891 This Week’s Awesome Stories From ...

TRANSPORTATION
Elon Musk Presents His Tunnel Vision to the People of LA
Jack Stewart and Aarian Marshall | Wired
“Now, Musk wants to build this new, 2.1-mile tunnel, near LA’s Sepulveda pass. It’s all part of his broader vision of a sprawling network that could take riders from Sherman Oaks in the north to Long Beach Airport in the south, Santa Monica in the west to Dodger Stadium in the east—without all that troublesome traffic.”

ROBOTICS
Feel What This Robot Feels Through Tactile Expressions
Evan Ackerman | IEEE Spectrum
“Guy Hoffman’s Human-Robot Collaboration & Companionship (HRC2) Lab at Cornell University is working on a new robot that’s designed to investigate this concept of textural communication, which really hasn’t been explored in robotics all that much. The robot uses a pneumatically powered elastomer skin that can be dynamically textured with either goosebumps or spikes, which should help it communicate more effectively, especially if what it’s trying to communicate is, ‘Don’t touch me!’”

VIRTUAL REALITY
In Virtual Reality, How Much Body Do You Need?
Steph Yin | The New York Times
“In a paper published Tuesday in Scientific Reports, they showed that animating virtual hands and feet alone is enough to make people feel their sense of body drift toward an invisible avatar. Their work fits into a corpus of research on illusory body ownership, which has challenged understandings of perception and contributed to therapies like treating pain for amputees who experience phantom limb.”

MEDICINE
How Graphene and Gold Could Help Us Test Drugs and Monitor Cancer
Angela Chen | The Verge
“In today’s study, scientists learned to precisely control the amount of electricity graphene generates by changing how much light they shine on the material. When they grew heart cells on the graphene, they could manipulate the cells too, says study co-author Alex Savtchenko, a physicist at the University of California, San Diego. They could make it beat 1.5 times faster, three times faster, 10 times faster, or whatever they needed.”

DISASTER RELIEF
Robotic Noses Could Be the Future of Disaster Rescue—If They Can Outsniff Search Dogs
Eleanor Cummins | Popular Science
“While canine units are a tried and fairly true method for identifying people trapped in the wreckage of a disaster, analytical chemists have for years been working in the lab to create a robotic alternative. A synthetic sniffer, they argue, could potentially prove to be just as or even more reliable than a dog, more resilient in the face of external pressures like heat and humidity, and infinitely more portable.”

Image Credit: Sergey Nivens / Shutterstock.com


#432487 Can We Make a Musical Turing Test?

As artificial intelligence advances, we’re encountering the same old questions. How much of what we consider to be fundamentally human can be reduced to an algorithm? Can we create something sufficiently advanced that people can no longer distinguish between the two? This, after all, is the idea behind the Turing Test, which has yet to be passed.

At first glance, you might think music is beyond the realm of algorithms. Birds can sing, and people can compose symphonies. Music is evocative; it makes us feel. Very often, our intense personal and emotional attachments to music arise because it reminds us of our shared humanity. We are told that creative jobs are the least likely to be automated. Creativity seems fundamentally human.

But I think above all, we view it as reductionist sacrilege: to dissect beautiful things. “If you try to strangle a skylark / to cut it up, see how it works / you will stop its heart from beating / you will stop its mouth from singing.” A human musician wrote that; a machine might be able to string words together that are happy or sad; it might even be able to conjure up a decent metaphor from the depths of some neural network—but could it understand humanity enough to produce art that speaks to humans?

Then, of course, there’s the other side of the debate. Music, after all, has a deeply mathematical structure; you can train a machine to produce harmonics. “In the teachings of Pythagoras and his followers, music was inseparable from numbers, which were thought to be the key to the whole spiritual and physical universe,” according to Grout in A History of Western Music. You might argue that the process of musical composition cannot be reduced to a simple algorithm, yet musicians have often done so. Mozart, with his “Dice Music,” rolled dice to decide how to order musical fragments; creativity through an 18th-century random number generator. Algorithmic music goes back a very long way, with the first papers on the subject dating from the 1960s.
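To make the “random number generator” point concrete, here is a minimal sketch of a dice-driven composer in the spirit of the Musikalisches Würfelspiel attributed to Mozart. The fragment labels and lookup table are invented placeholders, not Mozart’s actual measure tables.

```python
import random

# Toy dice-game composer: each bar is picked from a table indexed by the sum
# of two dice (2-12). The fragment names below are invented placeholders.
FRAGMENTS = {total: [f"bar{total}{option}" for option in "abc"] for total in range(2, 13)}

def roll_two_dice():
    return random.randint(1, 6) + random.randint(1, 6)

def compose(num_bars=16):
    """Assemble a 'piece' by rolling the dice once per bar."""
    return [random.choice(FRAGMENTS[roll_two_dice()]) for _ in range(num_bars)]

print(" | ".join(compose()))
```

Every run yields a new ordering of pre-written bars, which is why the game needs no musical judgment at roll time.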

Then there’s the techno-enthusiast side of the argument. iTunes has 26 million songs, easily more than a century of music. A human could never listen to and learn from them all, but a machine could. It could also memorize every note of Beethoven. Music can be converted into MIDI files, a nice chewable data format that allows even a character-by-character neural net you can run on your computer to generate music. (Seriously, even I could get this thing working.)
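As a toy illustration of the character-by-character idea, here is a first-order Markov chain over a text encoding of notes. It stands in for the neural net, and the “training melody” is invented, so treat it as a sketch of the data format rather than a real music model.

```python
import random
from collections import defaultdict

# Learn which character tends to follow which in a text-encoded melody,
# then generate new text one character at a time. The melody is made up.
training_text = "CDEFGABc CEGc DFAc EGBc CDEFGABc"

transitions = defaultdict(list)
for prev, nxt in zip(training_text, training_text[1:]):
    transitions[prev].append(nxt)

def generate(length=40, seed="C"):
    out = [seed]
    for _ in range(length - 1):
        followers = transitions.get(out[-1]) or list(training_text)
        out.append(random.choice(followers))
    return "".join(out)

print(generate())
```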

Indeed, generating music in the style of Bach has long been a test for AI, and you can see neural networks gradually learn to imitate classical composers while trying to avoid overfitting. When an algorithm overfits, it essentially starts copying the existing music rather than drawing inspiration from it to create something new but similar: a tightrope the best human artists learn to walk. Creativity doesn’t spring from nowhere; even maverick musical geniuses have their influences.

Does a machine have to be truly ‘creative’ to produce something that someone would find valuable? To what extent would listeners’ attitudes change if they thought they were hearing a human vs. an AI composition? This all suggests a musical Turing Test. Of course, it already exists. In fact, it’s run out of Dartmouth, the school that hosted that first, seminal AI summer conference. This year, the contest is bigger than ever: alongside the PoetiX, LimeriX and LyriX competitions for poetry and lyrics, there’s a DigiKidLit competition for children’s literature (although you may have reservations about exposing your children to neural-net generated content… it can get a bit surreal).

There’s also a pair of musical competitions, including one for original compositions in different genres. Key genres and styles are represented by Charlie Parker for jazz and the Bach chorales for classical music. There is also a free composition category, and a contest in which a human and an AI improvise together—the AI must respond to a human spontaneously, in real time, and in a musically pleasing way. Quite a challenge! In all cases, if any of the generated work is indistinguishable from that of human performers, the neural net has passed the Turing Test.

Did they? Here’s part of 2017’s winning sonnet from Charese Smiley and Hiroko Bretz:

The large cabin was in total darkness.
Come marching up the eastern hill afar.
When is the clock on the stairs dangerous?
Everything seemed so near and yet so far.
Behind the wall silence alone replied.
Was, then, even the staircase occupied?

Generating the rhymes is easy enough, the sentence structure a little trickier, but what’s impressive about this sonnet is that it sticks to a single topic and appears to be a more coherent whole. I’d guess they used associated “lexical fields” of similar words to help generate something coherent. In a similar way, most of the more famous examples of AI-generated music still involve some amount of human control, even if it’s editorial; a human will build a song around an AI-generated riff, or select the most convincing Bach chorale from amidst many different samples.

We are seeing strides forward in the ability of AI to generate human voices and human likenesses. As the latter example shows, in the fake news era people have focused on the dangers of this tech, but might it also be possible to create a virtual performer, trained on a dataset of an artist’s original music? Did you ever want to hear another Beatles album, or jam with Miles Davis? Of course, these things are impossible—but could we create a similar experience that people would genuinely value? Even, to the untrained ear, something indistinguishable from the real thing?

And if it did measure up to the real thing, what would this mean? Jaron Lanier is a fascinating technology writer, a critic of strong AI, and a believer in the power of virtual reality to change the world and provide truly meaningful experiences. He’s also a composer and a musical aficionado. He pointed out in a recent interview that translation algorithms, by reducing the amount of work translators are commissioned to do, have, in some sense, profited from stolen expertise. They were trained on huge datasets purloined from human linguists and translators. If you can train an AI on someone’s creative output and it produces new music, who “owns” it?

Although companies that offer AI music tools are starting to proliferate, and some groups will argue that the musical Turing test has been passed already, AI-generated music is hardly racing to the top of the pop charts just yet. Even as the line between human-composed and AI-generated music starts to blur, there’s still a gulf between the average human and musical genius. In the next few years, we’ll see how far the current techniques can take us. It may be the case that there’s something in the skylark’s song that can’t be generated by machines. But maybe not, and then this song might need an extra verse.

Image Credit: d1sk / Shutterstock.com


#432433 Just a Few of the Amazing Things AI Is ...

In an interview at Singularity University’s Exponential Medicine in San Diego, Neil Jacobstein shared some groundbreaking developments in artificial intelligence for healthcare.

Jacobstein is Singularity University’s faculty chair in AI and robotics, a distinguished visiting scholar at Stanford University’s MediaX Program, and has served as an AI technical consultant on research and development projects for organizations like DARPA, Deloitte, NASA, Boeing, and many more.

According to Jacobstein, 2017 was an exciting year for AI, not only due to how the technology matured, but also thanks to new applications and successes in several health domains.

Among the examples cited in his interview, Jacobstein referenced a 2017 breakthrough at Stanford University where an AI system was used for skin cancer identification. To train the system, the team showed a convolutional neural network images of 129,000 skin lesions. The system was able to differentiate between images displaying malignant melanomas and benign skin lesions. When tested against 21 board-certified dermatologists, the system made comparable diagnostic calls.
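For a flavor of how such a classifier is typically assembled, here is a minimal transfer-learning sketch in PyTorch. It is not the Stanford team’s model, and the lesions/train folder layout (one subfolder per class) is an assumption made for illustration.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Minimal benign-vs-malignant image classifier via transfer learning.
# Not the Stanford system; folder layout and hyperparameters are placeholders.
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("lesions/train", transform=tfm)  # assumed layout
loader = torch.utils.data.DataLoader(train_ds, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in model.parameters():
    p.requires_grad = False                       # freeze the pretrained backbone
model.fc = nn.Linear(model.fc.in_features, 2)     # two classes: benign / malignant

opt = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    opt.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    opt.step()
```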

Pattern recognition and image detection are just two examples of successful uses of AI in healthcare and medicine—the list goes on.

“We’re seeing AI and machine learning systems performing at narrow tasks remarkably well, and getting breakthrough results both in AI for problem-solving and AI with medicine,” Jacobstein said.

He continued, “We are not seeing super-human terminator systems. But we are seeing more members of the AI community paying attention to managing the downside risk of AI responsibly.”

Watch the full interview to learn more about how AI is advancing in healthcare, medicine, and elsewhere, and what Jacobstein thinks is coming next.

Image Credit: GrAI / Shutterstock.com


#432236 Why Hasn’t AI Mastered Language ...

In the myth about the Tower of Babel, people conspired to build a city and tower that would reach heaven. Their creator observed, “And now nothing will be restrained from them, which they have imagined to do.” According to the myth, God thwarted this effort by creating diverse languages so that they could no longer collaborate.

In our modern times, we’re experiencing a state of unprecedented connectivity thanks to technology. However, we’re still living under the shadow of the Tower of Babel. Language remains a barrier in business and marketing. Even though technological devices can quickly and easily connect, humans from different parts of the world often can’t.

Translation agencies step in, making presentations, contracts, outsourcing instructions, and advertisements comprehensible to all intended recipients. Some agencies also offer “localization” expertise. For instance, if a company is marketing in Quebec, the advertisements need to be in Québécois French, not European French. Risk-averse companies may be reluctant to invest in these translations. Consequently, these ventures haven’t achieved full market penetration.

Global markets are waiting, but AI-powered language translation isn’t ready yet, despite recent advancements in natural language processing and sentiment analysis. AI still has difficulty processing requests in a single language, even without the added complications of translation. In November 2016, Google added a neural network to its translation tool. However, some of its translations are still socially and grammatically odd. I spoke to technologists and a language professor to find out why.

“To Google’s credit, they made a pretty massive improvement that appeared almost overnight. You know, I don’t use it as much. I will say this. Language is hard,” said Michael Housman, chief data science officer at RapportBoost.AI and faculty member of Singularity University.

He explained that the ideal scenario for machine learning and artificial intelligence is something with fixed rules and a clear-cut measure of success or failure. He named chess as an obvious example, and noted machines were able to beat the best human Go player. This happened faster than anyone anticipated because of the game’s very clear rules and limited set of moves.

Housman elaborated, “Language is almost the opposite of that. There aren’t as clearly-cut and defined rules. The conversation can go in an infinite number of different directions. And then of course, you need labeled data. You need to tell the machine to do it right or wrong.”

Housman noted that it’s inherently difficult to assign these informative labels. “Two translators won’t even agree on whether it was translated properly or not,” he said. “Language is kind of the wild west, in terms of data.”

Google’s technology is now able to consider the entirety of a sentence, as opposed to merely translating individual words. Still, the glitches linger. I asked Dr. Jorge Majfud, Associate Professor of Spanish, Latin American Literature, and International Studies at Jacksonville University, to explain why consistently accurate language translation has thus far eluded AI.

He replied, “The problem is that considering the ‘entire’ sentence is still not enough. The same way the meaning of a word depends on the rest of the sentence (more in English than in Spanish), the meaning of a sentence depends on the rest of the paragraph and the rest of the text, as the meaning of a text depends on a larger context called culture, speaker intentions, etc.”

He noted that sarcasm and irony only make sense within this widened context. Similarly, idioms can be problematic for automated translations.

“Google translation is a good tool if you use it as a tool, that is, not to substitute human learning or understanding,” he said, before offering examples of mistranslations that could occur.

“Months ago, I went to buy a drill at Home Depot and I read a sign under a machine: ‘Saw machine.’ Right below it, the Spanish translation: ‘La máquina vió,’ which means, ‘The machine did see it.’ Saw, not as a noun but as a verb in the preterit form,” he explained.

Dr. Majfud warned, “We should be aware of the fragility of their ‘interpretation.’ Because to translate is basically to interpret, not just an idea but a feeling. Human feelings and ideas that only humans can understand—and sometimes not even we, humans, understand other humans.”

He noted that cultures, gender, and even age can pose barriers to this understanding and also contended that an over-reliance on technology is leading to our cultural and political decline. Dr. Majfud mentioned that Argentinean writer Julio Cortázar used to refer to dictionaries as “cemeteries.” He suggested that automatic translators could be called “zombies.”

Erik Cambria is an academic AI researcher and assistant professor at Nanyang Technological University in Singapore. He mostly focuses on natural language processing, which is at the core of AI-powered language translation. Like Dr. Majfud, he sees the complexity and associated risks. “There are so many things that we unconsciously do when we read a piece of text,” he told me. Reading comprehension requires multiple interrelated tasks, which haven’t been accounted for in past attempts to automate translation.

Cambria continued, “The biggest issue with machine translation today is that we tend to go from the syntactic form of a sentence in the input language to the syntactic form of that sentence in the target language. That’s not what we humans do. We first decode the meaning of the sentence in the input language and then we encode that meaning into the target language.”
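A bare-bones sketch of that encode-then-decode structure is below. It is not Google’s production system; the vocabulary sizes, dimensions, and token IDs are placeholders, and real systems add attention so the decoder can look back at individual source words rather than a single summary vector.

```python
import torch
import torch.nn as nn

# Seq2seq skeleton: encode the source sentence into a summary vector,
# then decode the target sentence from it. All sizes are placeholders.
SRC_VOCAB, TGT_VOCAB, EMB, HID = 8000, 8000, 256, 512

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(SRC_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)

    def forward(self, src_ids):
        _, hidden = self.rnn(self.embed(src_ids))
        return hidden                      # fixed-size "meaning" summary

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(TGT_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, tgt_ids, hidden):
        output, hidden = self.rnn(self.embed(tgt_ids), hidden)
        return self.out(output), hidden    # per-token scores over target vocab

encoder, decoder = Encoder(), Decoder()
src = torch.randint(0, SRC_VOCAB, (1, 7))  # one fake source sentence, 7 tokens
tgt = torch.randint(0, TGT_VOCAB, (1, 9))  # one fake target sentence, 9 tokens
logits, _ = decoder(tgt, encoder(src))
print(logits.shape)                        # torch.Size([1, 9, 8000])
```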

Additionally, there are cultural risks involved with these translations. Dr. Ramesh Srinivasan, Director of UCLA’s Digital Cultures Lab, said that new technological tools sometimes reflect underlying biases.

“There tend to be two parameters that shape how we design ‘intelligent systems.’ One is the values and you might say biases of those that create the systems. And the second is the world if you will that they learn from,” he told me. “If you build AI systems that reflect the biases of their creators and of the world more largely, you get some, occasionally, spectacular failures.”

Dr. Srinivasan said translation tools should be transparent about their capabilities and limitations. He said, “You know, the idea that a single system can take languages that I believe are very diverse semantically and syntactically from one another and claim to unite them or universalize them, or essentially make them sort of a singular entity, it’s a misnomer, right?”

Mary Cochran, co-founder of Launching Labs Marketing, sees the commercial upside. She mentioned that listings in online marketplaces such as Amazon could potentially be auto-translated and optimized for buyers in other countries.

She said, “I believe that we’re just at the tip of the iceberg, so to speak, with what AI can do with marketing. And with better translation, and more globalization around the world, AI can’t help but lead to exploding markets.”

Image Credit: igor kisselev / Shutterstock.com


#432190 In the Future, There Will Be No Limit to ...

New planets found in distant corners of the galaxy. Climate models that may improve our understanding of sea level rise. The emergence of new antimalarial drugs. These scientific advances and discoveries have been in the news in recent months.

While representing wildly divergent disciplines, from astronomy to biotechnology, they all have one thing in common: Artificial intelligence played a key role in their scientific discovery.

One of the more recent and famous examples came out of NASA at the end of 2017. The US space agency announced the discovery of an eighth planet in the Kepler-90 system. Scientists had trained a neural network—a computer with a “brain” modeled on the human mind—to re-examine data from Kepler, a space-borne telescope with a four-year mission to seek out new life and new civilizations. Or, more precisely, to find habitable planets where life might just exist.

The researchers trained the artificial neural network on a set of 15,000 previously vetted signals until it could identify true planets and false positives 96 percent of the time. It then went to work on weaker signals from nearly 700 star systems with known planets.

The machine detected Kepler-90i—a hot, rocky planet that orbits its sun about every two Earth weeks—through the nearly imperceptible dip in brightness captured when a planet passes in front of its star. It also found a sixth Earth-sized planet in the Kepler-80 system.
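In spirit, such a vetting model is a one-dimensional convolutional classifier that maps a light curve (brightness over time) to a planet-or-false-positive label. The sketch below uses random noise as data and is not the actual Kepler pipeline; it only shows the shape of the approach.

```python
import torch
import torch.nn as nn

# Toy 1D CNN over light curves; label 1 = genuine transit, 0 = false positive.
# The "light curves" here are random noise, purely to show the shapes involved.
model = nn.Sequential(
    nn.Conv1d(1, 16, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(16, 2),
)

light_curves = torch.randn(32, 1, 2001)   # 32 curves, 2001 brightness samples each
labels = torch.randint(0, 2, (32,))       # placeholder planet / false-positive labels
loss = nn.CrossEntropyLoss()(model(light_curves), labels)
loss.backward()
print(float(loss))
```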

AI Handles Big Data
The application of AI to science is being driven by three great advances in technology, according to Ross King from the Manchester Institute of Biotechnology at the University of Manchester, leader of a team that developed an artificially intelligent “scientist” called Eve.

Those three advances include much faster computers, big datasets, and improved AI methods, King said. “These advances increasingly give AI superhuman reasoning abilities,” he told Singularity Hub by email.

AI systems can remember vast numbers of facts and extract information effortlessly from millions of scientific papers, not to mention exhibit flawless logical reasoning and near-optimal probabilistic reasoning, King says.

AI systems also beat humans when it comes to dealing with huge, diverse amounts of data.

That’s partly what attracted a team of glaciologists to turn to machine learning to untangle the factors involved in how heat from Earth’s interior might influence the ice sheet that blankets Greenland.

Algorithms juggled 22 geologic variables—such as bedrock topography, crustal thickness, magnetic anomalies, rock types, and proximity to features like trenches, ridges, young rifts, and volcanoes—to predict geothermal heat flux under the ice sheet throughout Greenland.
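Stripped of the geophysics, that setup is a multi-variable regression: 22 features per location in, one heat-flux estimate out. Below is a hedged sketch with scikit-learn on made-up data; the study’s actual features, model choice, and values are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# 22 geologic variables per location -> predicted geothermal heat flux.
# The data is random, so the score is meaningless; only the workflow matters.
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 22))               # 5000 locations x 22 variables
y = rng.normal(loc=60, scale=10, size=5000)   # placeholder heat flux, mW/m^2

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor().fit(X_train, y_train)
print("held-out R^2:", model.score(X_test, y_test))  # near zero on random data
```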

The machine learning model, for example, predicts elevated heat flux upstream of Jakobshavn Glacier, the fastest-moving glacier in the world.

“The major advantage is that we can incorporate so many different types of data,” explains Leigh Stearns, associate professor of geology at Kansas University, whose research takes her to the polar regions to understand how and why Earth’s great ice sheets are changing, questions directly related to future sea level rise.

“All of the other models just rely on one parameter to determine heat flux, but the [machine learning] approach incorporates all of them,” Stearns told Singularity Hub in an email. “Interestingly, we found that there is not just one parameter…that determines the heat flux, but a combination of many factors.”

The research was published last month in Geophysical Research Letters.

Stearns says her team hopes to apply high-powered machine learning to characterize glacier behavior over both short- and long-term timescales, thanks to the large amounts of data that she and others have collected over the last 20 years.

Emergence of Robot Scientists
While Stearns sees machine learning as another tool to augment her research, King believes artificial intelligence can play a much bigger role in scientific discoveries in the future.

“I am interested in developing AI systems that autonomously do science—robot scientists,” he said. Such systems, King explained, would automatically originate hypotheses to explain observations, devise experiments to test those hypotheses, physically run the experiments using laboratory robotics, and even interpret the results. The conclusions would then influence the next cycle of hypotheses and experiments.
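Schematically, that is a closed loop. The sketch below uses placeholder functions throughout; it is not Eve’s software, only the shape of the hypothesize-test-interpret cycle King describes.

```python
# Placeholder closed loop for a "robot scientist"; none of these functions
# correspond to Eve's real software.
def propose_hypotheses(observations):
    return [f"hypothesis consistent with {len(observations)} observations"]

def design_experiment(hypothesis):
    return {"tests": hypothesis, "protocol": "placeholder assay"}

def run_on_lab_robot(experiment):
    return {"experiment": experiment, "outcome": "placeholder measurement"}

def interpret(result):
    return {"evidence": result["outcome"]}

observations = []
for cycle in range(3):
    hypotheses = propose_hypotheses(observations)   # 1. originate hypotheses
    experiment = design_experiment(hypotheses[0])    # 2. devise an experiment
    result = run_on_lab_robot(experiment)            # 3. physically run it
    observations.append(interpret(result))           # 4. interpret and feed back
    print(f"cycle {cycle}: {len(observations)} interpreted results")
```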

His AI scientist Eve recently helped researchers discover that triclosan, an ingredient commonly found in toothpaste, could be used as an antimalarial drug against certain strains that have developed a resistance to other common drug therapies. The research was published in the journal Scientific Reports.

Automation using artificial intelligence for drug discovery has become a growing area of research, as the machines can work orders of magnitude faster than any human. AI is also being applied in related areas, such as synthetic biology for the rapid design and manufacture of microorganisms for industrial uses.

King argues that machines are better suited to unravel the complexities of biological systems, as even the most “simple” organisms are host to thousands of genes, proteins, and small molecules that interact in complicated ways.

“Robot scientists and semi-automated AI tools are essential for the future of biology, as there are simply not enough human biologists to do the necessary work,” he said.

Creating Shockwaves in Science
The use of machine learning, neural networks, and other AI methods can often get better results in a fraction of the time it would normally take to crunch data.

For instance, scientists at the National Center for Supercomputing Applications, located at the University of Illinois at Urbana-Champaign, have a deep learning system for the rapid detection and characterization of gravitational waves. Gravitational waves are disturbances in spacetime, emanating from big, high-energy cosmic events, such as the massive explosion of a star known as a supernova. The “Holy Grail” of this type of research is to detect gravitational waves from the Big Bang.

Dubbed Deep Filtering, the method allows real-time processing of data from LIGO, a gravitational wave observatory comprising two enormous laser interferometers located thousands of miles apart in Washington and Louisiana. The research was published in Physics Letters B.

In a more down-to-earth example, scientists published a paper last month in Science Advances on the development of a neural network called ConvNetQuake to detect and locate minor earthquakes from ground motion measurements called seismograms.

ConvNetQuake uncovered 17 times more earthquakes than traditional methods. Scientists say the new method is particularly useful in monitoring small-scale seismic activity, which has become more frequent, possibly due to fracking activities that involve injecting wastewater deep underground.

King says he believes that in the long term there will be no limit to what AI can accomplish in science. He and his team, including Eve, are currently working on developing cancer therapies under a grant from DARPA.

“Robot scientists are getting smarter and smarter; human scientists are not,” he says. “Indeed, there is arguably a case that human scientists are less good. I don’t see any scientist alive today of the stature of a Newton or Einstein—despite the vast number of living scientists. The Physics Nobel [laureate] Frank Wilczek is on record as saying (10 years ago) that in 100 years’ time the best physicist will be a machine. I agree.”

Image Credit: Romaset / Shutterstock.com
