Tag Archives: reality

#432487 Can We Make a Musical Turing Test?

As artificial intelligence advances, we’re encountering the same old questions. How much of what we consider fundamentally human can be reduced to an algorithm? Can we create something sufficiently advanced that people can no longer tell the difference between human and machine? This, after all, is the idea behind the Turing Test, which has yet to be convincingly passed.

At first glance, you might think music is beyond the realm of algorithms. Birds can sing, and people can compose symphonies. Music is evocative; it makes us feel. Very often, our intense personal and emotional attachments to music exist because it reminds us of our shared humanity. We are told that creative jobs are the least likely to be automated. Creativity seems fundamentally human.

But above all, I think we view it as reductionist sacrilege to dissect beautiful things. “If you try to strangle a skylark / to cut it up, see how it works / you will stop its heart from beating / you will stop its mouth from singing.” A human musician wrote that. A machine might be able to string together words that sound happy or sad; it might even conjure a decent metaphor from the depths of some neural network—but could it understand humanity well enough to produce art that speaks to humans?

Then, of course, there’s the other side of the debate. Music, after all, has a deeply mathematical structure; you can train a machine to produce harmonies. “In the teachings of Pythagoras and his followers, music was inseparable from numbers, which were thought to be the key to the whole spiritual and physical universe,” according to Grout in A History of Western Music. You might argue that the process of musical composition cannot be reduced to a simple algorithm, yet composers have occasionally done just that. Mozart, with his “Dice Music,” used dice rolls to decide how to order pre-written musical fragments: creativity via an 18th-century random number generator. Algorithmic music goes back a very long way, with the first papers on the subject dating from the 1960s.
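
To see just how mechanical that kind of “composition” can be, here is a toy dice-music generator, a minimal sketch in Python. The fragment labels and the 16-bar length are placeholders for illustration, not Mozart’s actual tables.

```python
# Toy version of a Mozart-style "Dice Music" game (Musikalisches Würfelspiel):
# two dice pick which pre-written bar goes in each position of a 16-bar waltz.
# The fragment IDs are placeholders, not Mozart's actual bars.
import random

NUM_BARS = 16
# table[position][dice_total] -> id of a pre-composed one-bar fragment
table = {pos: {total: f"fragment_{pos}_{total}" for total in range(2, 13)}
         for pos in range(NUM_BARS)}

waltz = [table[pos][random.randint(1, 6) + random.randint(1, 6)]
         for pos in range(NUM_BARS)]
print(waltz)
```

Every run of the dice produces a “new” waltz, yet all of the musicality lives in the pre-composed fragments; the algorithm only shuffles them.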

Then there’s the techno-enthusiast side of the argument. iTunes has 26 million songs, easily more than a century of music. A human could never listen to and learn from them all, but a machine could. It could also memorize every note of Beethoven. Music can be converted into MIDI files, a nice chewable data format; even a character-by-character neural net you can run on your own computer can learn to generate music from it. (Seriously, even I could get this thing working.)
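
As a rough sketch of what that looks like in practice, here is a minimal character-level model (written in PyTorch, an assumption on my part) trained on a tiny, made-up ABC-notation fragment. A real experiment would use a large corpus of MIDI or ABC transcriptions, and the architecture and hyperparameters below are arbitrary illustrative choices.

```python
# Minimal character-level music model: treats music as text (a tiny ABC-notation
# fragment here) and learns to predict the next character. Illustrative only.
import torch
import torch.nn as nn

corpus = "X:1\nT:Toy Tune\nK:C\nCDEF GABc|cBAG FEDC|CEGE CEGE|C4 z4|\n" * 50
chars = sorted(set(corpus))
stoi = {c: i for i, c in enumerate(chars)}
data = torch.tensor([stoi[c] for c in corpus])

class CharRNN(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
loss_fn = nn.CrossEntropyLoss()

# Train on random slices of the corpus to predict the next character.
seq_len = 64
for step in range(300):
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i:i + seq_len].unsqueeze(0)
    y = data[i + 1:i + seq_len + 1].unsqueeze(0)
    logits, _ = model(x)
    loss = loss_fn(logits.reshape(-1, len(chars)), y.reshape(-1))
    opt.zero_grad(); loss.backward(); opt.step()

# Sample new "music" one character at a time.
idx, h, out = torch.tensor([[stoi["C"]]]), None, []
for _ in range(120):
    logits, h = model(idx, h)
    probs = torch.softmax(logits[0, -1], dim=-1)
    idx = torch.multinomial(probs, 1).unsqueeze(0)
    out.append(chars[idx.item()])
print("".join(out))
```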

Indeed, generating music in the style of Bach has long been a test for AI, and you can watch neural networks gradually learn to imitate classical composers while trying to avoid overfitting. When an algorithm overfits, it essentially starts copying the existing music rather than drawing on it to create something new in a similar style: a tightrope the best human artists learn to walk. Creativity doesn’t spring from nowhere; even maverick musical geniuses have their influences.
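
One crude way to catch that kind of copying, assuming the music is encoded as text, is to check how many long n-grams of a generated piece already appear verbatim in the training corpus. The function and thresholds below are illustrative only, not a standard evaluation metric.

```python
# Crude "copying" detector: what fraction of a generated piece's n-grams
# appear verbatim in the training corpus? High overlap at long n suggests
# the model is memorizing rather than generalizing. Values are illustrative.
def ngram_overlap(generated: str, corpus: str, n: int = 8) -> float:
    grams = [generated[i:i + n] for i in range(len(generated) - n + 1)]
    if not grams:
        return 0.0
    return sum(g in corpus for g in grams) / len(grams)

corpus = "CDEF GABc|cBAG FEDC|" * 10
print(ngram_overlap("CDEF GABc|cBAG FEDC|", corpus))  # 1.0: verbatim copy
print(ngram_overlap("CEGC EGCE|GECG ECGE|", corpus))  # ~0.0: novel material
```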

Does a machine have to be truly ‘creative’ to produce something that someone would find valuable? To what extent would listeners’ attitudes change if they thought they were hearing a human vs. an AI composition? This all suggests a musical Turing Test. Of course, it already exists. In fact, it’s run out of Dartmouth, the school that hosted that first, seminal AI summer conference. This year, the contest is bigger than ever: alongside the PoetiX, LimeriX and LyriX competitions for poetry and lyrics, there’s a DigiKidLit competition for children’s literature (although you may have reservations about exposing your children to neural-net generated content… it can get a bit surreal).

There’s also a pair of musical competitions, including one for original compositions in different genres. Key genres and styles are represented by Charlie Parker for jazz and the Bach chorales for classical music. There’s also a free composition category, and a contest where a human and an AI improvise together—the AI must respond to a human spontaneously, in real time, and in a musically pleasing way. Quite a challenge! In all cases, if any of the generated work is indistinguishable from that of human performers, the algorithm has passed this musical Turing Test.

Did they? Here’s part of 2017’s winning sonnet from Charese Smiley and Hiroko Bretz:

The large cabin was in total darkness.
Come marching up the eastern hill afar.
When is the clock on the stairs dangerous?
Everything seemed so near and yet so far.
Behind the wall silence alone replied.
Was, then, even the staircase occupied?

Generating the rhymes is easy enough, and the sentence structure only a little trickier, but what’s impressive about this sonnet is that it sticks to a single topic and reads as a reasonably coherent whole. I’d guess they used associated “lexical fields” of similar words to help generate something coherent. In a similar way, most of the more famous examples of AI-generated music still involve some amount of human control, even if it’s editorial; a human will build a song around an AI-generated riff, or select the most convincing Bach chorale from among many samples.
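
If that guess is right, the mechanism could be as simple as scoring candidate lines against a “lexical field” (a set of topic-related words) and keeping the highest-scoring ones. The word list and candidate lines below are invented purely to illustrate the idea; the actual system is presumably far more sophisticated.

```python
# Hypothetical "lexical field" filter: score candidate lines by how many of
# their words belong to a set of topic-related vocabulary, then keep the
# best-scoring line. Word lists and candidates are invented for illustration.
HOUSE_AT_NIGHT = {"cabin", "darkness", "stairs", "staircase", "wall",
                  "clock", "silence", "hill", "night", "door"}

def topicality(line: str, field: set) -> float:
    words = [w.strip(".,?!").lower() for w in line.split()]
    return sum(w in field for w in words) / max(len(words), 1)

candidates = [
    "The large cabin was in total darkness.",
    "My mitochondria synthesize ATP quickly.",
    "Behind the wall silence alone replied.",
]
print(max(candidates, key=lambda l: topicality(l, HOUSE_AT_NIGHT)))
```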

We are seeing strides forward in the ability of AI to generate human voices and human likenesses. As the latter example shows, in the fake news era people have focused on the dangers of this tech, but might it also be possible to create a virtual performer, trained on a dataset of their original music? Did you ever want to hear another Beatles album, or jam with Miles Davis? Of course, these things are impossible—but could we create a similar experience that people would genuinely value? Even something that, to the untrained ear, is indistinguishable from the real thing?

And if it did measure up to the real thing, what would this mean? Jaron Lanier is a fascinating technology writer, a critic of strong AI, and a believer in the power of virtual reality to change the world and provide truly meaningful experiences. He’s also a composer and a musical aficionado. He pointed out in a recent interview that translation algorithms, by reducing the amount of work translators are commissioned to do, have, in some sense, profited from stolen expertise. They were trained on huge datasets purloined from human linguists and translators. If you can train an AI on someone’s creative output and it produces new music, who “owns” it?

Although companies that offer AI music tools are starting to proliferate, and some groups will argue that the musical Turing test has been passed already, AI-generated music is hardly racing to the top of the pop charts just yet. Even as the line between human-composed and AI-generated music starts to blur, there’s still a gulf between the average human and musical genius. In the next few years, we’ll see how far the current techniques can take us. It may be the case that there’s something in the skylark’s song that can’t be generated by machines. But maybe not, and then this song might need an extra verse.

Image Credit: d1sk / Shutterstock.com

Posted in Human Robots

#432467 Dungeons and Dragons, Not Chess and Go: ...

Everyone had died—not that you’d know it, from how they were laughing about their poor choices and bad rolls of the dice. As a social anthropologist, I study how people understand artificial intelligence (AI) and our efforts towards attaining it; I’m also a life-long fan of Dungeons and Dragons (D&D), the inventive fantasy roleplaying game. During a recent quest, when I was playing an elf ranger, the trainee paladin (or holy knight) acted according to his noble character, and announced our presence at the mouth of a dragon’s lair. The results were disastrous. But while success in D&D means “beating the bad guy,” the game is also a creative sandbox, where failure can count as collective triumph so long as you tell a great tale.

What does this have to do with AI? In computer science, games are frequently used as a benchmark for an algorithm’s “intelligence.” The late Robert Wilensky, a professor at the University of California, Berkeley and a leading figure in AI, offered one reason why this might be. Computer scientists “looked around at who the smartest people were, and they were themselves, of course,” he told the authors of Compulsive Technology: Computers as Culture (1985). “They were all essentially mathematicians by training, and mathematicians do two things—they prove theorems and play chess. And they said, hey, if it proves a theorem or plays chess, it must be smart.” No surprise that demonstrations of AI’s “smarts” have focused on the artificial player’s prowess.

Yet the games that get chosen—like Go, the main battlefield for Google DeepMind’s algorithms in recent years—tend to be tightly bounded, with set objectives and clear paths to victory or defeat. These experiences have none of the open-ended collaboration of D&D. Which got me thinking: do we need a new test for intelligence, where the goal is not simply about success, but storytelling? What would it mean for an AI to “pass” as human in a game of D&D? Instead of the Turing test, perhaps we need an elf ranger test?

Of course, this is just a playful thought experiment, but it does highlight the flaws in certain models of intelligence. First, it reveals how intelligence has to work across a variety of environments. D&D participants can inhabit many characters in many games, and the individual player can “switch” between roles (the fighter, the thief, the healer). Meanwhile, AI researchers know that it’s super difficult to get a well-trained algorithm to apply its insights in even slightly different domains—something that we humans manage surprisingly well.

Second, D&D reminds us that intelligence is embodied. In computer games, the bodily aspect of the experience might range from pressing buttons on a controller in order to move an icon or avatar (a ping-pong paddle; a spaceship; an anthropomorphic, eternally hungry, yellow sphere), to more recent and immersive experiences involving virtual-reality goggles and haptic gloves. Even without these add-ons, games can still produce biological responses associated with stress and fear (if you’ve ever played Alien: Isolation you’ll understand). In the original D&D, the players encounter the game while sitting around a table together, feeling the story and its impact. Recent research in cognitive science suggests that bodily interactions are crucial to how we grasp more abstract mental concepts. But we give minimal attention to the embodiment of artificial agents, and how that might affect the way they learn and process information.

Finally, intelligence is social. AI algorithms typically learn through multiple rounds of competition, in which successful strategies get reinforced with rewards. True, it appears that humans also evolved to learn through repetition, reward and reinforcement. But there’s an important collaborative dimension to human intelligence. In the 1930s, the psychologist Lev Vygotsky identified the interaction of an expert and a novice as an example of what came to be called “scaffolded” learning, where the teacher demonstrates and then supports the learner in acquiring a new skill. In unbounded games, this cooperation is channelled through narrative. Games of It among small children can evolve from win/lose into attacks by terrible monsters, before shifting again to more complex narratives that explain why the monsters are attacking, who is the hero, and what they can do and why—narratives that aren’t always logical or even internally consistent. An AI that could engage in social storytelling is doubtless on a surer, more multifunctional footing than one that plays chess; and there’s no guarantee that chess is even a step on the road to attaining intelligence of this sort.

In some ways, this failure to treat roleplaying as a technical hurdle for intelligence is strange. D&D was a key cultural touchstone for technologists in the 1980s and the inspiration for many early text-based computer games, as Katie Hafner and Matthew Lyon point out in Where Wizards Stay Up Late: The Origins of the Internet (1996). Even today, AI researchers who play games in their free time often mention D&D specifically. So instead of beating adversaries in games, we might learn more about intelligence if we tried to teach artificial agents to play together as we do: as paladins and elf rangers.

This article was originally published at Aeon and has been republished under Creative Commons.

Image Credit: Benny Mazur / Flickr / CC BY 2.0

Posted in Human Robots

#432331 $10 million XPRIZE Aims for Robot ...

Ever wished you could be in two places at the same time? The XPRIZE Foundation wants to make that a reality with a $10 million competition to build robot avatars that can be controlled from at least 100 kilometers away.

The competition was announced by XPRIZE founder Peter Diamandis at the SXSW conference in Austin last week, with an ambitious timeline of awarding the grand prize by October 2021. Teams have until October 31st to sign up, and they need to submit detailed plans to a panel of judges by the end of next January.

XPRIZE, with sponsorship from Japanese airline ANA, has given contestants little guidance on how it expects them to solve the challenge, other than saying their solutions need to let users see, hear, feel, and interact with the robot’s environment as well as the people in it.

XPRIZE has also not revealed details of what kind of tasks the robots will be expected to complete, though they’ve said tasks will range from “simple” to “complex,” and it should be possible for an untrained operator to use them.

That’s a hugely ambitious goal that’s likely to require teams to combine multiple emerging technologies, from humanoid robotics to virtual reality, high-bandwidth communications, and high-resolution haptics.

If any of the teams succeed, the technology could have myriad applications, from letting emergency responders enter areas too hazardous for humans to helping people care for relatives who live far away or even just allowing tourists to visit other parts of the world without the jet lag.

“Our ability to physically experience another geographic location, or to provide on-the-ground assistance where needed, is limited by cost and the simple availability of time,” Diamandis said in a statement.

“The ANA Avatar XPRIZE can enable creation of an audacious alternative that could bypass these limitations, allowing us to more rapidly and efficiently distribute skill and hands-on expertise to distant geographic locations where they are needed, bridging the gap between distance, time, and cultures,” he added.

Interestingly, the technology may help bypass an enduring handbrake on the widespread use of robotics: the need for full autonomy. With a human in the loop, you don’t need nearly as much artificial intelligence analyzing sensory input and making decisions.

Robotics software has to do a lot more than just high-level planning and strategizing, though. While a human moves their limbs instinctively without consciously thinking about which muscles to activate, controlling and coordinating a robot’s components requires sophisticated algorithms.

The DARPA Robotics Challenge demonstrated just how hard it was to get human-shaped robots to do tasks humans would find simple, such as opening doors, climbing steps, and even just walking. These robots were supposedly semi-autonomous, but on many tasks they were essentially tele-operated, and the results suggested autonomy isn’t the only problem.

There’s also the issue of powering these devices. You may have noticed that in a lot of the slick web videos of humanoid robots doing cool things, the machine is tethered to the ceiling by a large cable. That’s because they suck up huge amounts of power.

Possibly the most advanced humanoid robot—Boston Dynamics’ Atlas—has a battery, but it can only run for about an hour. That might be fine for some applications, but you don’t want it running out of juice halfway through rescuing someone from a mine shaft.

When it comes to the link between the robot and its human user, some of the technology is probably not that much of a stretch. Virtual reality headsets can create immersive audio-visual environments, and a number of companies are working on advanced haptic suits that will let people “feel” virtual environments.

Motion tracking technology may be more complicated. While even consumer-grade devices can track people’s movements with high accuracy, you will probably need to don something more like an exoskeleton that can both pick up motion and provide mechanical resistance, so that when the robot bumps into an immovable object, the user stops dead too.

How hard all of this will be is also dependent on how the competition ultimately defines subjective terms like “feel” and “interact.” Will the user need to be able to feel a gentle breeze on the robot’s cheek or be able to paint a watercolor? Or will simply having the ability to distinguish a hard object from a soft one or shake someone’s hand be enough?

Whatever the fidelity they decide on, the approach will require huge amounts of sensory and control data to be transmitted over large distances, most likely wirelessly, in a way that’s fast and reliable enough to avoid lag or interruptions. Fortunately, 5G networks are launching this year, promising speeds of up to 10 gigabits per second and very low latency, so this part of the problem may well be solved by 2021.
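
A rough back-of-envelope calculation suggests raw bandwidth is probably not the hard part. The bit-rates below are my own assumptions, not anything specified by XPRIZE or ANA.

```python
# Back-of-envelope uplink budget for a remote avatar; every number here is
# an assumption for illustration, not an XPRIZE requirement.
streams_mbps = {
    "stereo 4K video (compressed)": 2 * 25,
    "spatial audio": 2,
    "haptic/force data (1 kHz x 100 sensors x 32 bits)": 1000 * 100 * 32 / 1e6,
    "robot state telemetry": 1,
}
total = sum(streams_mbps.values())
print(f"total uplink ~{total:.1f} Mbps "
      f"(~{total / 10_000 * 100:.2f}% of a nominal 10 Gbps 5G link)")
```

Even with generous allowances, the total is a small fraction of a nominal 10 Gbps link; the tighter constraint is round-trip latency, since the operator needs to feel a collision almost immediately for that “stops dead” reflex to work.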

And it’s worth remembering there have already been some tentative attempts at building robotic avatars. Telepresence robots have solved the seeing, hearing, and some of the interacting problems, and MIT has already used virtual reality to control robots to carry out complex manipulation tasks.

South Korean company Hankook Mirae Technology has also unveiled a 13-foot-tall robotic suit straight out of a sci-fi movie that appears to have made some headway with the motion tracking problem, albeit with a human inside the robot. Toyota’s T-HR3 does the same, but with the human controlling the robot from a “Master Maneuvering System” that marries motion tracking with VR.

Combining all of these capabilities into a single machine will certainly prove challenging. But if one of the teams pulls it off, you may be able to tick off trips to the Seven Wonders of the World without ever leaving your house.

Image Credit: ANA Avatar XPRIZE

Posted in Human Robots

#432279 This Week’s Awesome Stories From ...

COMPUTING
Google Thinks It’s Close to ‘Quantum Supremacy.’ Here’s What That Really Means.
Martin Giles and Will Knight | MIT Technology Review
“Seventy-two may not be a large number, but in quantum computing terms, it’s massive. This week Google unveiled Bristlecone, a new quantum computing chip with 72 quantum bits, or qubits—the fundamental units of computation in a quantum machine…John Martinis, who heads Google’s effort, says his team still needs to do more testing, but he thinks it’s ‘pretty likely’ that this year, perhaps even in just a few months, the new chip can achieve ‘quantum supremacy.'”

INTERNET
How Project Loon Built the Navigation System That Kept Its Balloons Over Puerto Rico
Amy Nordrum | IEEE Spectrum
“Last year, Alphabet’s Project Loon made a big shift in the way it flies its high-altitude balloons. And that shift—from steering every balloon in a huge circle around the world to clustering balloons over specific areas—allowed the project to provide basic Internet service to more than 200,000 people in Puerto Rico after Hurricane Maria.”

DIGITAL MEDIA
The Grim Conclusions of the Largest-Ever Study of Fake News
Robinson Meyer | The Atlantic
“The massive new study analyzes every major contested news story in English across the span of Twitter’s existence—some 126,000 stories, tweeted by 3 million users, over more than 10 years—and finds that the truth simply cannot compete with hoax and rumor.”

AUGMENTED REALITY
Magic Leap Raises $461 Million in Fresh Funding From the Kingdom of Saudi Arabia
Lucas Matney | TechCrunch
“Magic Leap still hasn’t released a product, but they’re continuing to raise a lot of cash to get there. The Plantation, Florida-based augmented reality startup announced today that it has raised $461 million from the Kingdom of Saudi Arabia’s sovereign investment arm, The Public Investment Fund…Magic Leap has raised more than $2.3 billion in funding to date.”

TECHNOLOGY & SOCIETY
Social Inequality Will Not Be Solved by an App
Safiya Umoja Noble | Wired
“An app will not save us. We will not sort out social inequality lying in bed staring at smartphones. It will not stem from simply sending emails to people in power, one person at a time…We need more intense attention on how these types of artificial intelligence, under the auspices of individual freedom to make choices, forestall the ability to see what kinds of choices we are making and the collective impact of these choices in reversing decades of struggle for social, political, and economic equality. Digital technologies are implicated in these struggles.”

Image Credit: topseller / Shutterstock.com

Posted in Human Robots

#432271 Your Shopping Experience Is on the Verge ...

Exponential technologies (AI, VR, 3D printing, and networks) are radically reshaping traditional retail.

E-commerce giants (Amazon, Walmart, Alibaba) are digitizing the retail industry, riding the exponential growth of computation.

Many brick-and-mortar stores have already gone bankrupt, or migrated their operations online.

Massive change is occurring in this arena.

For those “real-life stores” that survive, an evolution is taking place from a product-centric mentality to an experience-based business model by leveraging AI, VR/AR, and 3D printing.

Let’s dive in.

E-Commerce Trends
Last year, 3.8 billion people were connected online. By 2024, thanks to 5G and stratospheric and space-based satellite networks, we will grow to 8 billion people online, each with megabit to gigabit connection speeds.

These 4.2 billion new digital consumers will begin buying things online, a potential bonanza for the e-commerce world.

At the same time, entrepreneurs seeking to service these four-billion-plus new consumers can now skip the costly steps of procuring retail space and hiring sales clerks.

Today, thanks to global connectivity, contract production, and turnkey pack-and-ship logistics, an entrepreneur can go from an idea to building and scaling a multimillion-dollar business from anywhere in the world in record time.

And while e-commerce sales have been exploding (growing from $34 billion in Q1 2009 to $115 billion in Q3 2017), e-commerce only accounted for about 10 percent of total retail sales in 2017.

In 2016, global online sales totaled $1.8 trillion. Remarkably, this $1.8 trillion was spent by only 1.5 billion people — a mere 20 percent of Earth’s global population that year.

There’s plenty more room for digital disruption.

AI and the Retail Experience
For the business owner, AI will demonetize e-commerce operations with automated customer service, ultra-accurate supply chain modeling, marketing content generation, and advertising.

In the case of customer service, imagine an AI that is trained by every customer interaction, learns how to answer any consumer question perfectly, and offers feedback to product designers and company owners as a result.

Facebook’s handover protocol allows live customer service representatives and language-learning bots to work within the same Facebook Messenger conversation.

Taking it one step further, imagine an AI that is empathic to a consumer’s frustration, that can take any amount of abuse and come back with a smile every time. As one example, meet Ava. “Ava is a virtual customer service agent, to bring a whole new level of personalization and brand experience to that customer experience on a day-to-day basis,” says Greg Cross, CEO of Ava’s creator, a New Zealand company called Soul Machines.

Predictive modeling and machine learning are also optimizing product ordering and the supply chain. For example, Skubana, a platform for online sellers, uses data analytics to give entrepreneurs constant product-performance feedback and help them maintain optimal warehouse stock levels.
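
For a sense of the kind of calculation such a platform might automate, here is a minimal reorder-point sketch with safety stock. The demand figures, lead time, and service level are invented, and Skubana’s actual models are presumably proprietary and more sophisticated.

```python
# Classic reorder-point calculation with safety stock; daily demand, lead
# time, and service level are invented numbers, and real platforms use far
# richer models than this.
from statistics import mean, stdev

daily_demand = [12, 9, 15, 11, 14, 10, 13, 16, 8, 12]  # units sold per day
lead_time_days = 5        # supplier lead time
service_factor = 1.65     # z-score for roughly a 95% service level

avg = mean(daily_demand)
safety_stock = service_factor * stdev(daily_demand) * lead_time_days ** 0.5
reorder_point = avg * lead_time_days + safety_stock
print(f"reorder when stock falls below {reorder_point:.0f} units "
      f"(safety stock ~{safety_stock:.0f})")
```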

Blockchain is set to follow suit in the retail space. ShipChain and Ambrosus plan to introduce transparency and trust into shipping and production, further reducing costs for entrepreneurs and consumers.

Meanwhile, for consumers, personal shopping assistants are shifting the psychology of the standard shopping experience.

Amazon’s Alexa marks an important user interface moment in this regard.

Alexa is in her infancy with voice search and vocal controls for smart homes. Already, Amazon’s Alexa users spend more on Amazon.com, on average, than standard Amazon Prime customers — $1,700 versus $1,400.

As I’ve discussed in previous posts, the future combination of virtual reality shopping, coupled with a personalized, AI-enabled fashion advisor will make finding, selecting, and ordering products fast and painless for consumers.

But let’s take it one step further.

Imagine a future in which your personal AI shopper knows your desires better than you do. Possible? I think so. After all, our future AIs will follow us, watch us, and observe our interactions — including how long we glance at objects, our facial expressions, and much more.

In this future, shopping might be as easy as saying, “Buy me a new outfit for Saturday night’s dinner party,” followed by a surprise-and-delight moment in which the outfit that arrives is perfect.

In this future world of AI-enabled shopping, one of the most disruptive implications is that advertising is now dead.

In a world where an AI is buying my stuff, and I’m no longer in the decision loop, why would a big brand ever waste money on a Super Bowl advertisement?

The dematerialization, demonetization, and democratization of personalized shopping has only just begun.

The In-Store Experience: Experiential Retailing
In 2017, over 6,700 brick-and-mortar retail stores closed their doors, surpassing the former record for store closures set in 2008 during the financial crisis. Even so, business is still booming.

As shoppers seek the convenience of online shopping, brick-and-mortar stores are tapping into the power of the experience economy.

Rather than focusing on the practicality of the products they buy, consumers are seeking out the experience of going shopping.

The Internet of Things, artificial intelligence, and computation are exponentially improving the in-person consumer experience.

As AI dominates curated online shopping, AI and data analytics tools are also empowering real-life store owners to optimize staffing, marketing strategies, customer relationship management, and inventory logistics.

In the short term, retail store locations will serve as the next big user interface for production 3D printing (custom 3D-printed clothes at the Ministry of Supply), virtual and augmented reality (DIY skills clinics), and the Internet of Things (checkout-less shopping).

In the long term, we’ll see how our desire for enhanced productivity and seamless consumption balances with our preference for enjoyable real-life consumer experiences — all of which will be driven by exponential technologies.

One thing is certain: the typical shopping experience is on the verge of a major transformation.

Implications
The convergence of exponential technologies has already revamped how and where we shop, how we use our time, and how much we pay.

Twenty years ago, Amazon showed us how the web could offer each of us the long tail of available reading material, and since then, the world of e-commerce has exploded.

And yet we still haven’t experienced the cost savings coming our way from drone delivery, the Internet of Things, tokenized ecosystems, the impact of truly powerful AI, or even the other major applications for 3D printing and AR/VR.

Perhaps nothing will be more transformed than today’s $20 trillion retail sector.

Hold on, stay tuned, and get your AI-enabled cryptocurrency ready.

Join Me
Abundance Digital Online Community: I’ve created a digital/online community of bold, abundance-minded entrepreneurs called Abundance Digital.

Abundance Digital is my ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Zapp2Photo / Shutterstock.com

Posted in Human Robots