Tag Archives: car

#434246 How AR and VR Will Shape the Future of ...

How we work and play is about to transform.

After a prolonged technology “winter”—or what I like to call the ‘deceptive growth’ phase of any exponential technology—the hardware and software that power virtual reality (VR) and augmented reality (AR) applications are accelerating at an extraordinary rate.

Unprecedented new applications in almost every industry are exploding onto the scene.

Both VR and AR, combined with artificial intelligence, will significantly disrupt the “middleman” and make our lives “auto-magical.” The implications will touch every aspect of our lives, from education and real estate to healthcare and manufacturing.

The Future of Work
How and where we work is already changing, thanks to exponential technologies like artificial intelligence and robotics.

But virtual and augmented reality are taking the future workplace to an entirely new level.

Virtual Reality Case Study: eXp Realty

I recently interviewed Glenn Sanford, who founded eXp Realty in 2008 (imagine: a real estate company on the heels of the housing market collapse) and is the CEO of eXp World Holdings.

Ten years later, eXp Realty has an army of 14,000 agents across all 50 US states, three Canadian provinces, and 400 MLS market areas… all without a single traditional staffed office.

In a bid to transition from 2D interfaces to immersive, 3D work experiences, virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and setting an extraordinary new precedent.

Real estate agents, managers, and even clients gather in a unique virtual campus, replete with a sports field, library, and lobby. It’s all accessible via head-mounted displays, but most agents join with a computer browser. Surprisingly, the campus-style setup enables the same type of water-cooler conversations I see every day at the XPRIZE headquarters.

With this centralized VR campus, eXp Realty has essentially thrown out overhead costs and entered a lucrative market without the constraints of brick-and-mortar businesses.

Delocalize with VR, and you can now hire anyone with internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.

As a leader, what happens when you can scalably expand and connect your workforce, not to mention your customer base, without the excess overhead of office space and furniture? Your organization can run faster and farther than your competition.

But beyond the indefinite scalability achieved through digitizing your workplace, VR’s implications extend to the lives of your employees and even the future of urban planning:

Home Prices: As virtual headquarters and office branches take hold of the 21st-century workplace, those who work on campuses like eXp Realty’s won’t need to commute to work. As a result, VR has the potential to dramatically influence real estate prices—after all, if you don’t need to drive to an office, your home search isn’t limited to a specific set of neighborhoods anymore.

Transportation: In major cities like Los Angeles and San Francisco, the implications are tremendous. Analysts have found that it’s already cheaper to use ride-sharing services like Uber and Lyft than to own a car in many major cities. And once autonomous “Car-as-a-Service” platforms proliferate, associated transportation costs like parking fees, fuel, and auto repairs will shift away from the individual, if not disappear entirely.

Augmented Reality: Annotate and Interact with Your Workplace

As I discussed in a recent Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high-rises.

Enter a professional world electrified by augmented reality.

Our workplaces are practically littered with information. File cabinets abound with archival data and relevant documents, and company databases continue to grow at a breakneck pace. And, as all of us are increasingly aware, cybersecurity and robust data permission systems remain a major concern for CEOs and national security officials alike.

What if we could link that information to specific locations, people, time frames, and even moving objects?

As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.

Imagine showing up at your building’s concierge desk as your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.

You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.

With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics will take a fraction of the time, operations will grow seamless, and corporate data will be safer than ever.

Or better yet, imagine precise and high-dexterity work environments populated with interactive annotations that guide an artisan, surgeon, or engineer through meticulous handiwork.

Take, for instance, AR service 3D4Medical, which annotates virtual anatomy in midair. And as augmented reality hardware continues to advance, we might envision a future wherein surgeons perform operations on annotated organs and magnified incision sites, or one in which quantum computer engineers can magnify and annotate mechanical parts, speeding up reaction times and vastly improving precision.

The Future of Free Time and Play
In Abundance, I wrote about today’s rapidly demonetizing cost of living. In 2011, almost 75 percent of the average American’s income was spent on housing, transportation, food, personal insurance, health, and entertainment. What the headlines don’t mention: this is a dramatic improvement over the last 50 years. We’re spending less on basic necessities and working fewer hours than previous generations.

Chart depicts the average weekly work hours for full-time production employees in non-agricultural activities. Source: Diamandis.com data
Technology continues to change this, taking care of us and doing our work for us. One phrase that describes this is “technological socialism,” where it’s technology, not the government, that takes care of us.

Extrapolating from the data, I believe we are heading towards a post-scarcity economy. Perhaps we won’t need to work at all, because we’ll own and operate our own fleet of robots or AI systems that do our work for us.

As living expenses demonetize and workplace automation increases, what will we do with this abundance of time? How will our children and grandchildren connect and find their purpose if they don’t have to work for a living?

As I write this on a Saturday afternoon and watch my two seven-year-old boys immersed in Minecraft, building and exploring worlds of their own creation, I can’t help but imagine that this future is about to enter its disruptive phase.

Exponential technologies are enabling a new wave of highly immersive games, virtual worlds, and online communities. We’ve likely all heard of the Oasis from Ready Player One. But far beyond what we know today as ‘gaming,’ VR is fast becoming a home to immersive storytelling, interactive films, and virtual world creation.

Within the virtual world space, let’s take one of today’s greatest precursors, the aforementioned game Minecraft.

For reference, Minecraft’s world is over eight times the surface area of planet Earth. And in their free time, my kids would rather build in Minecraft than do almost anything else. I think of it as their primary passion: to create worlds, explore worlds, and be challenged in worlds.

And in the near future, we’re all going to become creators of or participants in virtual worlds, each populated with assets and storylines interoperable with other virtual environments.

But while the technological methods are new, this concept has been alive and well for generations. Whether you got lost in the world of Heidi or Harry Potter, grew up reading comic books or watching television, we’ve all been playing in imaginary worlds, with characters and story arcs populating our minds. That’s the nature of childhood.

In the past, however, your ability to edit was limited, especially if a given story came in some form of 2D media. I couldn’t edit where Tom Sawyer was going or change what Iron Man was doing. But as a slew of new software advancements underlying VR and AR allow us to interact with characters and gain (albeit limited) agency (for now), both new and legacy stories will become subjects of our creation and playgrounds for virtual interaction.

Take VR/AR storytelling startup Fable Studio’s Wolves in the Walls film. Debuting at the 2018 Sundance Film Festival, Fable’s immersive story is adapted from Neil Gaiman’s book and tracks the protagonist, Lucy, whose programming allows her to respond differently based on what her viewers do.

And while Lucy can merely hand virtual cameras to her viewers among other limited tasks, Fable Studio’s founder Edward Saatchi sees this project as just the beginning.

Imagine a virtual character—either in augmented or virtual reality—equipped with AI capabilities, one that can not only participate in a fictional storyline but also interact and converse directly with you in a host of virtual and digitally overlaid environments.

Or imagine engaging with a less-structured environment, like the Star Wars cantina, populated with strangers and friends to provide an entirely novel social media experience.

Already, we’ve seen Pokémon characters brought into the real world with Pokémon Go, populating cities and real spaces with holograms and tasks. And just as augmented reality has the power to turn our physical environments into digital gaming platforms, advanced AR could usher in a new era of in-home entertainment.

Imagine transforming your home into a narrative environment for your kids or overlaying your office interior design with Picasso paintings and gothic architecture. As computer vision rapidly grows capable of identifying objects and mapping virtual overlays atop them, we might also one day be able to project home theaters or live sports within our homes, broadcasting full holograms that allow us to zoom into the action and place ourselves within it.

Increasingly honed and commercialized, augmented and virtual reality are on the cusp of revolutionizing the way we play, tell stories, create worlds, and interact with both fictional characters and each other.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: nmedia / Shutterstock.com

Posted in Human Robots

#433852 How Do We Teach Autonomous Cars To Drive ...

Autonomous vehicles can follow the general rules of American roads, recognizing traffic signals and lane markings, noticing crosswalks and other regular features of the streets. But they work only on well-marked roads that are carefully scanned and mapped in advance.

Many paved roads, though, have faded paint, signs obscured behind trees and unusual intersections. In addition, 1.4 million miles of U.S. roads—one-third of the country’s public roadways—are unpaved, with no on-road signals like lane markings or stop-here lines. That doesn’t include miles of private roads, unpaved driveways or off-road trails.

What’s a rule-following autonomous car to do when the rules are unclear or nonexistent? And what are its passengers to do when they discover their vehicle can’t get them where they’re going?

Accounting for the Obscure
Most challenges in developing advanced technologies involve handling infrequent or uncommon situations, or events that require performance beyond a system’s normal capabilities. That’s definitely true for autonomous vehicles. Some on-road examples might be navigating construction zones, encountering a horse and buggy, or seeing graffiti that looks like a stop sign. Off-road, the possibilities include the full variety of the natural world, such as trees down over the road, flooding and large puddles—or even animals blocking the way.

At Mississippi State University’s Center for Advanced Vehicular Systems, we have taken up the challenge of training algorithms to respond to circumstances that almost never happen, are difficult to predict and are complex to create. We seek to put autonomous cars in the hardest possible scenario: driving in an area the car has no prior knowledge of, with no reliable infrastructure like road paint and traffic signs, and in an unknown environment where it’s just as likely to see a cactus as a polar bear.

Our work combines virtual technology and the real world. We create advanced simulations of lifelike outdoor scenes, which we use to train artificial intelligence algorithms to take a camera feed and classify what it sees, labeling trees, sky, open paths and potential obstacles. Then we transfer those algorithms to a purpose-built all-wheel-drive test vehicle and send it out on our dedicated off-road test track, where we can see how our algorithms work and collect more data to feed into our simulations.
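To make that pipeline a little more concrete, here is a minimal sketch, in Python with PyTorch, of the kind of training step involved: a small per-pixel classifier learning to label simulator-rendered frames. The directory layout, class list, and tiny network are illustrative assumptions, not the team’s actual code.

```python
# Minimal sketch (not the MSU team's code): train a per-pixel classifier on
# simulator-rendered frames so it can label sky, trees, open paths, obstacles.
# Paths, class list, and network size are illustrative assumptions.
import glob
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from PIL import Image

CLASSES = ["sky", "tree", "open_path", "obstacle"]  # assumed label set

class SimScenes(Dataset):
    """Pairs each rendered RGB frame with its per-pixel label mask."""
    def __init__(self, image_dir, mask_dir):
        self.images = sorted(glob.glob(f"{image_dir}/*.png"))
        self.masks = sorted(glob.glob(f"{mask_dir}/*.png"))
    def __len__(self):
        return len(self.images)
    def __getitem__(self, i):
        img = np.asarray(Image.open(self.images[i]).convert("RGB"), np.float32) / 255.0
        mask = np.asarray(Image.open(self.masks[i]), np.int64)  # class index per pixel
        return torch.from_numpy(img).permute(2, 0, 1), torch.from_numpy(mask)

# A deliberately tiny fully convolutional network; a real system would use
# something closer to a U-Net or DeepLab variant.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, len(CLASSES), 1),            # per-pixel class scores
)

def train(image_dir="sim/frames", mask_dir="sim/labels", epochs=5):
    loader = DataLoader(SimScenes(image_dir, mask_dir), batch_size=4, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for epoch in range(epochs):
        for frames, masks in loader:
            opt.zero_grad()
            loss = loss_fn(model(frames), masks)  # (B, C, H, W) vs (B, H, W)
            loss.backward()
            opt.step()
        print(f"epoch {epoch}: loss {loss.item():.3f}")

if __name__ == "__main__":
    train()
```

A classifier like this, trained only on rendered scenes, is then evaluated on real camera footage from the test vehicle, which is the sim-to-real question the project is probing.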

Starting Virtual
We have developed a simulator that can create a wide range of realistic outdoor scenes for vehicles to navigate through. The system generates a range of landscapes of different climates, like forests and deserts, and can show how plants, shrubs and trees grow over time. It can also simulate weather changes, sunlight and moonlight, and the accurate locations of 9,000 stars.

The system also simulates the readings of sensors commonly used in autonomous vehicles, such as lidar and cameras. Those virtual sensors collect data that feeds into neural networks as valuable training data.
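The article doesn’t describe the simulator’s internals, but as a toy illustration of the idea of a virtual sensor, the sketch below marches simulated lidar beams across a procedurally generated terrain and records where they hit, the kind of synthetic range data a neural network could train on. The terrain function, beam geometry, and parameters are all assumptions made for the example.

```python
# Toy illustration (assumptions throughout): a "virtual lidar" sampling range
# returns against a procedurally generated terrain heightmap. Real simulators
# model beam divergence, materials, and vegetation; this only shows the idea
# of producing sensor-like training data from a synthetic scene.
import numpy as np

def terrain_height(x, y):
    """Synthetic rolling terrain with a bump acting as an obstacle."""
    return 0.3 * np.sin(0.5 * x) + 0.2 * np.cos(0.3 * y) + 1.5 * np.exp(-((x - 8)**2 + y**2) / 2)

def lidar_scan(sensor_pos, yaw_angles, pitch=-0.15, max_range=30.0, step=0.05):
    """March each beam forward until it dips below the terrain; return hit ranges."""
    ranges = []
    for yaw in yaw_angles:
        direction = np.array([np.cos(yaw), np.sin(yaw), np.sin(pitch)])
        direction /= np.linalg.norm(direction)
        r, hit = 0.0, max_range
        while r < max_range:
            p = sensor_pos + r * direction
            if p[2] <= terrain_height(p[0], p[1]):   # beam has reached the ground
                hit = r
                break
            r += step
        ranges.append(hit)
    return np.array(ranges)

sensor = np.array([0.0, 0.0, 1.8])                      # lidar mounted ~1.8 m up
angles = np.linspace(-np.pi / 4, np.pi / 4, 91)         # 90-degree forward sweep
scan = lidar_scan(sensor, angles)
print("closest return: %.2f m, farthest: %.2f m" % (scan.min(), scan.max()))
```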

Simulated desert, meadow and forest environments generated by the Mississippi State University Autonomous Vehicle Simulator. Chris Goodin, Mississippi State University, Author provided.
Building a Test Track
Simulations are only as good as their portrayals of the real world. Mississippi State University has purchased 50 acres of land on which we are developing a test track for off-road autonomous vehicles. The property is excellent for off-road testing, with unusually steep grades for our area of Mississippi—up to 60 percent inclines—and a very diverse population of plants.

We have selected certain natural features of this land that we expect will be particularly challenging for self-driving vehicles, and replicated them exactly in our simulator. That allows us to directly compare results from the simulation and real-life attempts to navigate the actual land. Eventually, we’ll create similar real and virtual pairings of other types of landscapes to improve our vehicle’s capabilities.

A road washout, as seen in real life, left, and in simulation. Chris Goodin, Mississippi State University, Author provided.
Collecting More Data
We have also built a test vehicle, called the Halo Project, which has an electric motor and the sensors and computers needed to navigate various off-road environments. The Halo Project car has additional sensors to collect detailed data about its actual surroundings, which can help us build virtual environments to run new tests in.

The Halo Project car can collect data about driving and navigating in rugged terrain. Beth Newman Wynn, Mississippi State University, Author provided.
Two of its lidar sensors, for example, are mounted at intersecting angles on the front of the car so their beams sweep across the approaching ground. Together, they can provide information on how rough or smooth the surface is, as well as capturing readings from grass and other plants and items on the ground.
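As a rough, hypothetical sketch of how such readings might be turned into a roughness estimate (this is not the Halo Project’s actual pipeline), one could group the lidar returns into small patches of ground, fit a plane to each patch, and score the patch by how far the points deviate from that plane.

```python
# Hypothetical sketch: estimate ground roughness from lidar returns by fitting
# a plane to the points in each grid cell and measuring the deviation from it.
# The cell size and the synthetic point cloud below are illustrative assumptions.
import numpy as np

def cell_roughness(points):
    """RMS height deviation of points from their best-fit plane z = ax + by + c."""
    A = np.c_[points[:, 0], points[:, 1], np.ones(len(points))]
    coeffs, *_ = np.linalg.lstsq(A, points[:, 2], rcond=None)
    residuals = points[:, 2] - A @ coeffs
    return float(np.sqrt(np.mean(residuals ** 2)))

def roughness_map(points, cell=0.5):
    """Group returns into square ground cells and score each one."""
    scores = {}
    keys = np.floor(points[:, :2] / cell).astype(int)
    for key in {tuple(k) for k in keys}:
        in_cell = points[np.all(keys == key, axis=1)]
        if len(in_cell) >= 3:                      # need 3+ points to fit a plane
            scores[key] = cell_roughness(in_cell)
    return scores

# Synthetic returns: a smooth patch plus a rutted patch with ~5 cm of variation.
rng = np.random.default_rng(0)
flat = np.c_[rng.uniform(0.0, 0.5, (200, 2)), rng.normal(0, 0.005, 200)]
rough = np.c_[rng.uniform(1.0, 1.5, (200, 2)), rng.normal(0, 0.05, 200)]
for cell_key, score in sorted(roughness_map(np.vstack([flat, rough])).items()):
    print(cell_key, f"roughness = {score * 100:.1f} cm")
```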

Lidar beams intersect, scanning the ground in front of the vehicle. Chris Goodin, Mississippi State University, Author provided
We’ve seen some exciting early results from our research. For example, machine learning algorithms trained on simulated environments have shown promise when applied in the real world. As with most autonomous vehicle research, there is still a long way to go, but our hope is that the technologies we’re developing for extreme cases will also help make autonomous vehicles more functional on today’s roads.

Matthew Doude, Associate Director, Center for Advanced Vehicular Systems; Ph.D. Student in Industrial and Systems Engineering, Mississippi State University; Christopher Goodin, Assistant Research Professor, Center for Advanced Vehicular Systems, Mississippi State University, and Daniel Carruth, Assistant Research Professor and Associate Director for Human Factors and Advanced Vehicle System, Center for Advanced Vehicular Systems, Mississippi State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Photo provided for The Conversation by Matthew Goudin / CC BY ND

Posted in Human Robots

#433807 The How, Why, and Whether of Custom ...

A digital afterlife may soon be within reach, but it might not be for your benefit.

The reams of data we’re creating could soon make it possible to create digital avatars that live on after we die, aimed at comforting our loved ones or sharing our experience with future generations.

That may seem like a disappointing downgrade from the vision promised by the more optimistic futurists, where we upload our consciousness to the cloud and live forever in machines. But it might be a realistic possibility in the not-too-distant future—and the first steps have already been taken.

After her friend died in a car crash, Eugenia Kuyda, co-founder of Russian AI startup Luka, trained a neural network-powered chatbot on their shared message history to mimic him. Journalist and amateur coder James Vlahos took a more involved approach, carrying out extensive interviews with his terminally ill father so that he could create a digital clone of him when he died.

For those of us without the time or expertise to build our own artificial intelligence-powered avatar, startup Eternime is offering to take your social media posts and interactions as well as basic personal information to build a copy of you that could then interact with relatives once you’re gone. The service is so far only running a private beta with a handful of people, but with 40,000 on its waiting list, it’s clear there’s a market.

Comforting—Or Creepy?
The whole idea may seem eerily similar to the Black Mirror episode Be Right Back, in which a woman pays a company to create a digital copy of her deceased husband and eventually a realistic robot replica. And given the show’s focus on the emotional turmoil she goes through, people might question whether the idea is a sensible one.

But it’s hard to say at this stage whether being able to interact with an approximation of a deceased loved one would be a help or a hindrance in the grieving process. The fear is that it could make it harder for people to “let go” or “move on,” but others think it could play a useful therapeutic role, reminding people that just because someone is dead doesn’t mean they’re gone, and providing a novel way for them to express and come to terms with their feelings.

While at present most envisage these digital resurrections as a way to memorialize loved ones, there are also more ambitious plans to use the technology as a way to preserve expertise and experience. A project at MIT called Augmented Eternity is investigating whether we could use AI to trawl through someone’s digital footprints and extract both their knowledge and elements of their personality.

Project leader Hossein Rahnama says he’s already working with a CEO who wants to leave behind a digital avatar that future executives could consult with after he’s gone. And you wouldn’t necessarily have to wait until you’re dead—experts could create virtual clones of themselves that could dispense advice on demand to far more people. These clones could soon be more than simple chatbots, too. Hollywood has already started spending millions of dollars to create 3D scans of its most bankable stars so that they can keep acting beyond the grave.

It’s easy to see the appeal of the idea; imagine if we could bring back Stephen Hawking, or consult a digital clone of Tim Cook, to share their wisdom with us. And what if we could create a digital brain trust combining the experience and wisdom of all the world’s greatest thinkers, accessible on demand?

But there are still huge hurdles ahead before we could create truly accurate representations of people by simply trawling through their digital remains. The first problem is data. Most people’s digital footprints only started reaching significant proportions in the last decade or so, and cover a relatively small period of their lives. It could take many years before there’s enough data to create more than just a superficial imitation of someone.

And that’s assuming that the data we produce is truly representative of who we are. Carefully crafted Instagram profiles and cautiously worded work emails hardly capture the messy realities of most people’s lives.

Perhaps if the idea is simply to create a bank of someone’s knowledge and expertise, accurately capturing the essence of their character would be less important. But these clones would also be static. Real people continually learn and change, but a digital avatar is a snapshot of someone’s character and opinions at the point they died. An inability to adapt as the world around them changes could put a shelf life on the usefulness of these replicas.

Who’s Calling the (Digital) Shots?
It won’t stop people trying, though, and that raises a potentially more important question: Who gets to make the calls about our digital afterlife? The subjects, their families, or the companies that hold their data?

In most countries, the law is currently pretty hazy on this topic. Companies like Google and Facebook have processes to let you choose who should take control of your accounts in the event of your death. But if you’ve forgotten to do that, the fate of your virtual remains comes down to a tangle of federal law, local law, and tech company terms of service.

This lack of regulation could create incentives and opportunities for unscrupulous behavior. The voice of a deceased loved one could be a highly persuasive tool for exploitation, and digital replicas of respected experts could be powerful means of pushing a hidden agenda.

That means there’s a pressing need for clear and unambiguous rules. Researchers at Oxford University recently suggested ethical guidelines that would treat our digital remains the same way museums and archaeologists are required to treat mortal remains—with dignity and in the interest of society.

Whether those kinds of guidelines are ever enshrined in law remains to be seen, but ultimately they may decide whether the digital afterlife turns out to be heaven or hell.

Image Credit: frankie’s / Shutterstock.com

Posted in Human Robots

#433803 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
The AI Cold War That Could Doom Us All
Nicholas Thompson | Wired
“At the dawn of a new stage in the digital revolution, the world’s two most powerful nations are rapidly retreating into positions of competitive isolation, like players across a Go board. …Is the arc of the digital revolution bending toward tyranny, and is there any way to stop it?”

LONGEVITY
Finally, the Drug That Keeps You Young
Stephen S. Hall | MIT Technology Review
“The other thing that has changed is that the field of senescence—and the recognition that senescent cells can be such drivers of aging—has finally gained acceptance. Whether those drugs will work in people is still an open question. But the first human trials are under way right now.”

SYNTHETIC BIOLOGY
Ginkgo Bioworks Is Turning Human Cells Into On-Demand Factories
Megan Molteni | Wired
“The biotech unicorn is already cranking out an impressive number of microbial biofactories that grow and multiply and burp out fragrances, fertilizers, and soon, psychoactive substances. And they do it at a fraction of the cost of traditional systems. But Kelly is thinking even bigger.”

CYBERNETICS
Thousands of Swedes Are Inserting Microchips Under Their Skin
Maddy Savage | NPR
“Around the size of a grain of rice, the chips typically are inserted into the skin just above each user’s thumb, using a syringe similar to that used for giving vaccinations. The procedure costs about $180. So many Swedes are lining up to get the microchips that the country’s main chipping company says it can’t keep up with the number of requests.”

ART
AI Art at Christie’s Sells for $432,500
Gabe Cohn | The New York Times
“Last Friday, a portrait produced by artificial intelligence was hanging at Christie’s New York opposite an Andy Warhol print and beside a bronze work by Roy Lichtenstein. On Thursday, it sold for well over double the price realized by both those pieces combined.”

ETHICS
Should a Self-Driving Car Kill the Baby or the Grandma? Depends on Where You’re From
Karen Hao | MIT Technology Review
“The researchers never predicted the experiment’s viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.”

TECHNOLOGY
The Rodney Brooks Rules for Predicting a Technology’s Success
Rodney Brooks | IEEE Spectrum
“Building electric cars and reusable rockets is fairly easy. Building a nuclear fusion reactor, flying cars, self-driving cars, or a Hyperloop system is very hard. What makes the difference?”

Image Source: spainter_vfx / Shutterstock.com

Posted in Human Robots

#433799 The First Novel Written by AI Is ...

Last year, a novelist went on a road trip across the USA. The trip was an attempt to emulate Jack Kerouac—to go out on the road and find something essential to write about in the experience. There is, however, a key difference between this writer and anyone else talking your ear off in the bar. This writer is just a microphone, a GPS, and a camera hooked up to a laptop and a whole bunch of linear algebra.

People who are optimistic that artificial intelligence and machine learning won’t put us all out of a job say that human ingenuity and creativity will be difficult to imitate. The classic argument is that, just as machines freed us from repetitive manual tasks, machine learning will free us from repetitive intellectual tasks.

This leaves us free to spend more time on the rewarding aspects of our work, pursuing creative hobbies, spending time with loved ones, and generally being human.

In this worldview, creative works like a great novel or symphony, and the emotions they evoke, cannot be reduced to lines of code. Humans retain a dimension of superiority over algorithms.

But is creativity a fundamentally human phenomenon? Or can it be learned by machines?

And if they learn to understand us better than we understand ourselves, could the great AI novel—tailored, of course, to your own predispositions in fiction—be the best you’ll ever read?

Maybe Not a Beach Read
This is the futurist’s view, of course. The reality, as the jury-rigged contraption in Ross Goodwin’s Cadillac for that road trip can attest, is some way off.

“This is very much an imperfect document, a rapid prototyping project. The output isn’t perfect. I don’t think it’s a human novel, or anywhere near it,” Goodwin said of the novel his machine created. The result, titled 1 the Road, is currently marketed as the first novel written by AI.

Once the neural network has been trained, it can generate any length of text that the author desires, either at random or working from a specific seed word or phrase. Goodwin used the sights and sounds of the road trip to provide these seeds: the novel is written one sentence at a time, based on images, locations, dialogue from the microphone, and even the computer’s own internal clock.
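As a hedged illustration of what “working from a seed” means in practice, the sketch below primes a tiny character-level LSTM on a seed phrase and then samples one character at a time. The model, vocabulary, and temperature here are assumptions about the general approach, not Goodwin’s code; untrained, the network emits noise, but the seeding and sampling mechanics are the point.

```python
# Sketch of the generation step only (not Goodwin's code): a character-level
# LSTM is primed with a seed phrase, then sampled one character at a time.
# The tiny untrained model here will emit noise; the mechanics are what matter.
import string
import torch
import torch.nn as nn

chars = string.printable                  # assumed character vocabulary
stoi = {c: i for i, c in enumerate(chars)}

class CharLSTM(nn.Module):
    def __init__(self, vocab, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, 64)
        self.lstm = nn.LSTM(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)
    def forward(self, idx, state=None):
        out, state = self.lstm(self.embed(idx), state)
        return self.head(out), state

@torch.no_grad()
def generate(model, seed, length=200, temperature=0.8):
    """Feed the seed through the net, then repeatedly sample the next character."""
    model.eval()
    idx = torch.tensor([[stoi[c] for c in seed]])
    logits, state = model(idx)                       # prime on the seed phrase
    out = seed
    for _ in range(length):
        probs = torch.softmax(logits[0, -1] / temperature, dim=-1)
        nxt = torch.multinomial(probs, 1).item()     # sample, don't just argmax
        out += chars[nxt]
        logits, state = model(torch.tensor([[nxt]]), state)
    return out

model = CharLSTM(len(chars))
print(generate(model, seed="It was nine seventeen in the morning"))
```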

The results are… mixed.

The novel begins suitably enough, quoting the time: “It was nine seventeen in the morning, and the house was heavy.” Descriptions of locations begin according to the Foursquare dataset fed into the algorithm, but rapidly veer off into the weeds, becoming surreal. While experimentation in literature is a wonderful thing, repeatedly quoting longitude and latitude coordinates verbatim is unlikely to win anyone the Booker Prize.

Data In, Art Out?
Neural networks as creative agents have some advantages. They excel at being trained on large datasets, identifying the patterns in those datasets, and producing output that follows those same rules. Music inspired by or written by AI has become a growing subgenre—there’s even a pop album by human-machine collaborators called the Songularity.

A neural network can “listen to” all of Bach and Mozart in hours, and train itself on the works of Shakespeare to produce passable pseudo-Bard. The idea of artificial creativity has become so widespread that there’s even a meme format about forcibly training neural network ‘bots’ on human writing samples, with hilarious consequences—although the best joke was undoubtedly human in origin.

The AI that roamed from New York to New Orleans was an LSTM (long short-term memory) neural net. By default, information contained in individual neurons is preserved, and only small parts can be “forgotten” or “learned” in an individual timestep, rather than neurons being entirely overwritten.
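The gating described above can be shown with the standard LSTM cell equations (generic LSTM math, not anything specific to Goodwin’s network): at each timestep a forget gate decides how much of the old cell state survives, and an input gate decides how much new information gets written, rather than the whole state being replaced.

```python
# Standard LSTM cell step, written out to show the gating: the cell state is
# mostly preserved and only partially rewritten at each timestep.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One timestep. W, U, b hold the four gates' parameters stacked together."""
    n = h.size
    z = W @ x + U @ h + b                       # pre-activations for all gates
    f = sigmoid(z[0:n])                         # forget gate: keep old memory?
    i = sigmoid(z[n:2 * n])                     # input gate: admit new memory?
    g = np.tanh(z[2 * n:3 * n])                 # candidate values to write
    o = sigmoid(z[3 * n:4 * n])                 # output gate
    c_new = f * c + i * g                       # mostly preserved, partly updated
    h_new = o * np.tanh(c_new)
    return h_new, c_new

rng = np.random.default_rng(1)
n_in, n_hid = 8, 4
W = rng.normal(0, 0.1, (4 * n_hid, n_in))
U = rng.normal(0, 0.1, (4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)

h, c = np.zeros(n_hid), np.ones(n_hid)          # start with some stored memory
for t in range(3):
    h, c = lstm_step(rng.normal(size=n_in), h, c, W, U, b)
    print(f"t={t} cell state: {np.round(c, 3)}")  # drifts a little each step
```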

The LSTM architecture performs better than previous recurrent neural networks at tasks such as handwriting and speech recognition. The neural net—and its programmer—looked further in search of literary influences, ingesting 60 million words (360 MB) of raw literature according to Goodwin’s recipe: one third poetry, one third science fiction, and one third “bleak” literature.

In this way, Goodwin has some creative control over the project; the source material influences the machine’s vocabulary and sentence structuring, and hence the tone of the piece.

The Thoughts Beneath the Words
The problem with artificially intelligent novelists is the same one that has dogged conversational artificial intelligence since Turing’s day. The machines can detect and reproduce complex patterns increasingly better than humans can, but they have no understanding of what those patterns mean.

Goodwin’s neural network spits out sentences one letter at a time, on a tiny printer hooked up to the laptop. Statistical associations such as those tracked by neural nets can form words from letters, and sentences from words, but they know nothing of character or plot.

When talking to a chatbot, the code has no real understanding of what’s been said before, and there is no dataset large enough to train it through all of the billions of possible conversations.

Unless restricted to a predetermined set of options, it loses the thread of the conversation after a reply or two. In a similar way, the creative neural nets have no real grasp of what they’re writing, and no way to produce anything with any overarching coherence or narrative.

Goodwin’s experiment is an attempt to add some coherent backbone to the AI “novel” by repeatedly grounding it with stimuli from the cameras or microphones—the thematic links and narrative provided by the American landscape the neural network drives through.

Goodwin feels that this approach (the car itself moving through the landscape, as if a character) borrows some continuity and coherence from the journey itself. “Coherent prose is the holy grail of natural-language generation—feeling that I had somehow solved a small part of the problem was exhilarating. And I do think it makes a point about language in time that’s unexpected and interesting.”

AI Is Still No Kerouac
A coherent tone and semantic “style” might be enough to produce some vaguely-convincing teenage poetry, as Google did, and experimental fiction that uses neural networks can have intriguing results. But wading through the surreal AI prose of this era, searching for some meaning or motif beyond novelty value, can be a frustrating experience.

Maybe machines can learn the complexities of the human heart and brain, or how to write evocative or entertaining prose. But they’re a long way off, and somehow “more layers!” or a bigger corpus of data doesn’t feel like enough to bridge that gulf.

Real attempts by machines to write fiction have so far been broadly incoherent, but with flashes of poetry—dreamlike, hallucinatory ramblings.

Neural networks might not be capable of writing intricately plotted works with charm and wit, like Dickens or Dostoevsky, but there’s still an eeriness to trying to decipher the surreal, Finnegans Wake-like mish-mash.

You might see, in the odd line, the flickering ghost of something like consciousness, a deeper understanding. Or you might just see fragments of meaning thrown into a neural network blender, full of hype and fury, obeying rules in an occasionally striking way, but ultimately signifying nothing. In that sense, at least, the RNN’s grappling with metaphor feels like a metaphor for the hype surrounding the latest AI summer as a whole.

Or, as the human author of On The Road put it: “You guys are going somewhere or just going?”

Image Credit: eurobanks / Shutterstock.com

Posted in Human Robots