
#433892 The Spatial Web Will Map Our 3D ...

The boundaries between digital and physical space are disappearing at a breakneck pace. What was once static and boring is becoming dynamic and magical.

For all of human history, looking at the world through our eyes was the same experience for everyone. Beyond the bounds of an over-active imagination, what you see is the same as what I see.

But all of this is about to change. Over the next two to five years, the world around us is about to light up with layer upon layer of rich, fun, meaningful, engaging, and dynamic data. Data you can see and interact with.

This magical future ahead is called the Spatial Web and will transform every aspect of our lives, from retail and advertising, to work and education, to entertainment and social interaction.

Massive change is underway as a result of a series of converging technologies, from 5G global networks and ubiquitous artificial intelligence to 30+ billion connected devices (known as the IoT), each generating streams of real-world data every second, everywhere.

The current AI explosion will make everything smart, autonomous, and self-programming. Blockchain and cloud-enabled services will support a secure data layer, putting data back in the hands of users and allowing us to build complex rule-based infrastructure in tomorrow’s virtual worlds.

And with the rise of online-merge-offline (OMO) environments, two-dimensional screens will no longer serve as our exclusive portal to the web. Instead, virtual and augmented reality eyewear will allow us to interface with a digitally-mapped world, richly layered with visual data.

Welcome to the Spatial Web. Over the next few months, I’ll be doing a deep dive into the Spatial Web (a.k.a. Web 3.0), covering what it is, how it works, and its vast implications across industries, from real estate and healthcare to entertainment and the future of work. In this blog, I’ll discuss the what, how, and why of Web 3.0—humanity’s first major foray into our virtual-physical hybrid selves. (BTW, this year at Abundance360, we’ll be doing a deep dive into the Spatial Web with the leaders of HTC, Magic Leap, and High Fidelity.)

Let’s dive in.

What is the Spatial Web?
While we humans exist in three dimensions, our web today is flat.

The web was designed for shared information, absorbed through a flat screen. But as proliferating sensors, ubiquitous AI, and interconnected networks blur the lines between our physical and online worlds, we need a spatial web to help us digitally map a three-dimensional world.

To put Web 3.0 in context, let’s take a trip down memory lane. In the early 1990s, the newly-birthed world wide web consisted of static web pages and one-way information—a monumental system of publishing and linking information unlike any unified data system before it. To connect, we had to dial up through unstable modems and struggle through insufferably slow connection speeds.

But emerging from this revolutionary (albeit non-interactive) infodump, Web 2.0 has connected the planet more in one decade than empires did in millennia.

Granting democratized participation through newly interactive sites and applications, today’s web era has turbocharged information-sharing and created ripple effects of scientific discovery, economic growth, and technological progress on an unprecedented scale.

We’ve seen the explosion of social networking sites, wikis, and online collaboration platforms. Consumers have become creators; physically isolated users have been handed a global microphone; and entrepreneurs can now access billions of potential customers.

But if Web 2.0 took the world by storm, the Spatial Web emerging today will leave it in the dust.

While there’s no clear consensus about its definition, the Spatial Web refers to a computing environment that exists in three-dimensional space—a twinning of real and virtual realities—enabled via billions of connected devices and accessed through the interfaces of virtual and augmented reality.

In this way, the Spatial Web will enable us to both build a twin of our physical reality in the virtual realm and bring the digital into our real environments.

It’s the next era of web-like technologies:

Spatial computing technologies, like augmented and virtual reality;
Physical computing technologies, like IoT and robotic sensors;
And decentralized computing: both blockchain—which enables greater security and data authentication—and edge computing, which pushes computing power to where it’s most needed, speeding everything up.

Geared with natural language search, data mining, machine learning, and AI recommendation agents, the Spatial Web is a growing expanse of services and information, navigable with the use of ever-more-sophisticated AI assistants and revolutionary new interfaces.

Where Web 1.0 consisted of static documents and read-only data, Web 2.0 introduced multimedia content, interactive web applications, and social media on two-dimensional screens. But converging technologies are quickly transcending the laptop, and will even disrupt the smartphone in the next decade.

With the rise of wearables, smart glasses, AR / VR interfaces, and the IoT, the Spatial Web will integrate seamlessly into our physical environment, overlaying every conversation, every road, every object, conference room, and classroom with intuitively-presented data and AI-aided interaction.

Think: the Oasis in Ready Player One, where anyone can create digital personas, build and invest in smart assets, do business, complete effortless peer-to-peer transactions, and collect real estate in a virtual world.

Or imagine a virtual replica or “digital twin” of your office, each conference room authenticated on the blockchain, requiring a cryptographic key for entry.

As I’ve discussed with my good friend and “VR guru” Philip Rosedale, I’m absolutely clear that in the not-too-distant future, every physical element of every building in the world is going to be fully digitized, existing as a virtual incarnation or even as N number of these. “Meet me at the top of the Empire State Building?” “Sure, which one?”

This digitization of life means that suddenly every piece of information can become spatial, every environment can be smarter by virtue of AI, and every data point about me and my assets—both virtual and physical—can be reliably stored, secured, enhanced, and monetized.

In essence, the Spatial Web lets us interface with digitally-enhanced versions of our physical environment and build out entirely fictional virtual worlds—capable of running simulations, supporting entire economies, and even birthing new political systems.

But while I’ll get into the weeds of different use cases next week, let’s first concretize.

How Does It Work?
Let’s start with the stack. In the PC days, we had a database accompanied by a program that could ingest that data and present it to us as digestible information on a screen.

Then, in the early days of the web, data migrated to servers. Information was fed through a website, with which you would interface via a browser—whether Mosaic or Netscape.

And then came the cloud.

Resident at either the edge of the cloud or on your phone, today’s rapidly proliferating apps now allow us to interact with previously read-only data, interfacing through a smartphone. But as Siri and Alexa have brought us verbal interfaces, AI-geared phone cameras can now determine your identity, and sensors are beginning to read our gestures.

And now we’re not only looking at our screens but through them, as the convergence of AI and AR begins to digitally populate our physical worlds.

While Pokémon Go sent millions of mobile game-players on virtual treasure hunts, IKEA is just one of the many companies letting you place virtual furniture within your physical home—simulating everything from cabinets to entire kitchens. No longer one-way recipients of content, we’re beginning to see through sensors, creatively inserting digital content into our everyday environments.

Let’s take a look at how the latest incarnation might work. In this new Web 3.0 stack, my personal AI would act as an intermediary, accessing public or privately-authorized data through the blockchain on my behalf, and then feed it through an interface layer composed of everything from my VR headset, to numerous wearables, to my smart environment (IoT-connected devices or even in-home robots).

But as we attempt to build a smart world with smart infrastructure, smart supply chains, and smart everything else, we need a set of basic standards with addresses for people, places, and things. Just as today’s web relies on the TCP/IP protocol suite and other infrastructure, by which your computer is addressed and data packets are transferred, we need infrastructure for the Spatial Web.

And a select group of players is already stepping in to fill this void. Proposing new structural designs for Web 3.0, some are attempting to evolve today’s web model from text-based web pages in 2D to three-dimensional AR and VR web experiences located in both digitally-mapped physical worlds and newly-created virtual ones.

With a spatial programming language analogous to HTML, imagine building a linkable address for any physical or virtual space, granting it a format that then makes it interchangeable and interoperable with all other spaces.
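As a purely speculative sketch of what such a spatial addressing scheme could look like (no such standard exists yet; the `space://` scheme, field names, and example address below are all hypothetical):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpatialAddress:
    """Hypothetical URL analogue for a physical or virtual space."""
    domain: str        # e.g. "empire-state.nyc" (invented for illustration)
    path: str          # a sub-space within the domain
    lat: float         # real-world anchor point, in degrees
    lon: float
    altitude_m: float  # meters above sea level

    def uri(self) -> str:
        # A made-up "space://" scheme, by analogy with "https://"
        return f"space://{self.domain}{self.path}"

deck = SpatialAddress("empire-state.nyc", "/floor/102/observation-deck",
                      40.7484, -73.9857, 381.0)
print(deck.uri())
```

Just as a URL resolves to a document, an address like this could resolve to a volume of mapped space, virtual or physical, making it linkable and interoperable with all other spaces.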

But it doesn’t stop there.

As soon as we populate a virtual room with content, we then need to encode who sees it, who can buy it, who can move it…

And the Spatial Web’s eventual governing system (for posting content on a centralized grid) would allow us to address everything from the room you’re sitting in, to the chair on the other side of the table, to the building across the street.

Just as we have a DNS for the web and the purchasing of web domains, once we give addresses to spaces (akin to granting URLs), we then have the ability to identify and visit addressable locations, physical objects, individuals, or pieces of digital content in cyberspace.

And these not only apply to virtual worlds, but to the real world itself. As new mapping technologies emerge, we can now map rooms, objects, and large-scale environments into virtual space with increasing accuracy.

We might then dictate who gets to move your coffee mug in a virtual conference room, or when a team gets to use the room itself. Rules and permissions would be set in the grid, decentralized governance systems, or in the application layer.
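A minimal sketch of what such room- and object-level rules might look like, assuming a simple group-based access model (the identifiers and rule format here are invented for illustration, not drawn from any real Spatial Web specification):

```python
# Hypothetical permission rules for spaces and objects in a shared
# virtual environment: target -> action -> groups allowed to perform it.
PERMISSIONS = {
    "room:conf-a":       {"enter": {"team-alpha"},
                          "book":  {"team-alpha", "facilities"}},
    "object:coffee-mug": {"move":  {"alice"}},
}

def allowed(actor_groups, action, target):
    """True if any of the actor's groups may perform `action` on `target`."""
    rules = PERMISSIONS.get(target, {})
    return bool(rules.get(action, set()) & set(actor_groups))

print(allowed({"alice"}, "move", "object:coffee-mug"))   # alice may move the mug
print(allowed({"team-beta"}, "enter", "room:conf-a"))    # team-beta may not enter
```

In practice, the article suggests, rule sets like this would live in the grid, in decentralized governance systems, or in the application layer, rather than in any one application’s dictionary.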

Taken one step further, imagine then monetizing smart spaces and smart assets. If you have booked the virtual conference room, perhaps you’ll accept 0.25 BTC to let me use it instead?

But given the Spatial Web’s enormous technological complexity, what’s allowing it to emerge now?

Why Is It Happening Now?
While countless entrepreneurs have already started harnessing blockchain technologies to build decentralized apps (or dApps), two major developments are allowing today’s birth of Web 3.0:

High-resolution wireless VR/AR headsets are finally catapulting virtual and augmented reality out of a prolonged winter.

The International Data Corporation (IDC) predicts the VR and AR headset market will reach 65.9 million units by 2022. Within the next 18 months, 2 billion devices will be AR-enabled. And tech giants across the board have been investing heavy sums for years.

In early 2019, HTC is releasing the VIVE Focus, a wireless self-contained VR headset. At the same time, Facebook is charging ahead with its Project Santa Cruz—the Oculus division’s next-generation standalone, wireless VR headset. And Magic Leap has finally rolled out its long-awaited Magic Leap One mixed reality headset.

Mass deployment of 5G will drive gigabit-class connection speeds over the next six years, matching hardware progress with the bandwidth needed to stream rich virtual worlds.

We’ve already seen tremendous leaps in display technology. But as connectivity speeds converge with accelerating GPUs, we’ll start to experience seamless VR and AR interfaces with ever-expanding virtual worlds.

And with such democratizing speeds, every user will be able to develop in VR.

But accompanying these two catalysts is also an important shift towards the decentralized web and a demand for user-controlled data.

Converging technologies, from immutable ledgers and blockchain to machine learning, are now enabling the more direct, decentralized use of web applications and creation of user content. With no central point of control, middlemen are removed from the equation and anyone can create an address, independently interacting with the network.

Enabled by a permission-less blockchain, any user—regardless of birthplace, gender, ethnicity, wealth, or citizenship—would thus be able to establish digital assets and transfer them seamlessly, granting us a more democratized Internet.

And with data stored on distributed nodes, this also means no single point of failure. One could have multiple backups, accessible only with digital authorization, leaving users immune to any single server failure.

Implications Abound–What’s Next…
With a newly-built stack and an interface built from numerous converging technologies, the Spatial Web will transform every facet of our everyday lives—from the way we organize and access our data, to our social and business interactions, to the way we train employees and educate our children.

We’re about to start spending more time in the virtual world than ever before. Beyond entertainment or gameplay, our livelihoods, work, and even personal decisions are already becoming mediated by a web electrified with AI and newly-emerging interfaces.

In our next blog on the Spatial Web, I’ll do a deep dive into the myriad industry implications of Web 3.0, offering tangible use cases across sectors.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘on ramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Comeback01 / Shutterstock.com

Posted in Human Robots

#433872 Breaking Out of the Corporate Bubble ...

For big companies, success is a blessing and a curse. You don’t get big without doing something (or many things) very right. It might start with an invention or service the world didn’t know it needed. Your product takes off, and growth brings a whole new set of logistical challenges. Delivering consistent quality, hiring the right team, establishing a strong culture, tapping into new markets, satisfying shareholders. The list goes on.

Eventually, however, what made you successful also makes you resistant to change.

You’ve built a machine for one purpose, and it’s running smoothly, but what about retooling that machine to make something new? Not so easy. Leaders of big companies know there is no future for their organizations without change. And yet, they struggle to drive it.

In their new book, Leading Transformation: How to Take Charge of Your Company’s Future, Kyle Nel, Nathan Furr, and Thomas Ramsøy aim to deliver a roadmap for corporate transformation.

The book focuses on practical tools that have worked in big companies to break down behavioral and cognitive biases, envision radical futures, and run experiments. These include using science fiction and narrative to see ahead and adopting better measures of success for new endeavors.

A thread throughout is how to envision a new future and move into that future.

We’re limited by the bubbles in which we spend the most time—the corporate bubble, the startup bubble, the nonprofit bubble. The mutually beneficial convergence of complementary bubbles, then, can be a powerful tool for kickstarting transformation. The views and experiences of one partner can challenge the accepted wisdom of the other; resources can flow into newly co-created visions and projects; and connections can be made that wouldn’t otherwise exist.

The authors call such alliances uncommon partners. In the following excerpt from the book, Made In Space, a startup building 3D printers for space, helps Lowe’s explore an in-store 3D printing system, and Lowe’s helps Made In Space expand its vision and focus.

Uncommon Partners
In a dingy conference room at NASA, five prototypical nerds, smelling of Thai food, laid out the path to printing satellites in space and buildings on distant planets. At the end of their four-day marathon, they emerged with an artifact trail that began with early prototypes for the first 3D printer on the International Space Station and ended in the additive-manufacturing future—a future much bigger than 3D printing.

In the additive-manufacturing future, we will view everything as transient, or capable of being repurposed into new things. Rather than throwing away a soda bottle or a bent nail, we will simply reprocess these things into a new hinge for the fence we are building or a light switch plate for the tool shed. Indeed, we might not even go buy bricks for the tool shed, but instead might print them from impurities pulled from the air and the dirt beneath our feet. Such a process would both capture carbon in the air to make the bricks and avoid all the carbon involved in making and then transporting traditional bricks to your house.

If it all sounds a little too science fiction, think again. Lowe’s has already been honored as a Champion of Change by the US government for its prototype system to recycle plastic (e.g., plastic bags and bottles). The future may be closer than you have imagined. But to get there, Lowe’s didn’t work alone. It had to work with uncommon partners to create the future.

Uncommon partners are the types of organizations you might not normally work with, but which can greatly help you create radical new futures. Increasingly, as new technologies emerge and old industries converge, companies are finding that working independently to create all the necessary capabilities to enter new industries or create new technologies is costly, risky, and even counterproductive. Instead, organizations are finding that they need to collaborate with uncommon partners as an ecosystem to cocreate the future together. Nathan [Furr] and his colleague at INSEAD, Andrew Shipilov, call this arrangement an adaptive ecosystem strategy and described how companies such as Lowe’s, Samsung, Mastercard, and others are learning to work differently with partners and to work with different kinds of partners to more effectively discover new opportunities. For Lowe’s, an adaptive ecosystem strategy working with uncommon partners forms the foundation of capturing new opportunities and transforming the company. Despite its increased agility, Lowe’s can’t be (and shouldn’t become) an independent additive-manufacturing, robotics-using, exosuit-building, AR-promoting, fill-in-the-blank-what’s-next-ing company in addition to being a home improvement company. Instead, Lowe’s applies an adaptive ecosystem strategy to find the uncommon partners with which it can collaborate in new territory.

To apply the adaptive ecosystem strategy with uncommon partners, start by identifying the technical or operational components required for a particular focus area (e.g., exosuits) and then sort these components into three groups. First, there are the components that are emerging organically without any assistance from the orchestrator—the leader who tries to bring together the adaptive ecosystem. Second, there are the elements that might emerge, with encouragement and support. Third are the elements that won’t happen unless you do something about it. In an adaptive ecosystem strategy, you can create regular partnerships for the first two elements—those already emerging or that might emerge—if needed. But you have to create the elements in the final category (those that won’t emerge) either with an uncommon partner or by yourself.

For example, when Lowe’s wanted to explore the additive-manufacturing space, it began a search for an uncommon partner to provide the missing but needed capabilities. Unfortunately, initial discussions with major 3D printing companies proved disappointing. The major manufacturers kept trying to sell Lowe’s 3D printers. But the vision our group had created with science fiction was not for vendors to sell Lowe’s a printer, but for partners to help the company build a system—something that would allow customers to scan, manipulate, print, and eventually recycle additive-manufacturing objects. Every time we discussed 3D printing systems with these major companies, they responded that they could do it and then tried to sell printers. When Carin Watson, one of the leading lights at Singularity University, introduced us to Made In Space (a company being incubated in Singularity University’s futuristic accelerator), we discovered an uncommon partner that understood what it meant to cocreate a system.

Initially, Made In Space had been focused on simply getting 3D printing to work in space, where you can’t rely on gravity, you can’t send up a technician if the machine breaks, and you can’t release noxious fumes into cramped spacecraft quarters. But after the four days in the conference room going over the comic for additive manufacturing, Made In Space and Lowe’s emerged with a bigger vision. The company helped lay out an artifact trail that included not only the first printer on the International Space Station but also printing system services in Lowe’s stores.

Of course, the vision for an additive-manufacturing future didn’t end there. It also reshaped Made In Space’s trajectory, encouraging the startup, during those four days in a NASA conference room, to design a bolder future. Today, some of its bold projects include the Archinaut, a system that enables satellites to build themselves while in space, a direction that emerged partly from the science fiction narrative we created around additive manufacturing.

In summary, uncommon partners help you succeed by providing you with the capabilities you shouldn’t be building yourself, as well as with fresh insights. You also help uncommon partners succeed by creating new opportunities from which they can prosper.

Helping Uncommon Partners Prosper
Working most effectively with uncommon partners can require a shift from more familiar outsourcing or partnership relationships. When working with uncommon partners, you are trying to cocreate the future, which entails a great deal more uncertainty. Because you can’t specify outcomes precisely, agreements are typically less formal than in other types of relationships, and they operate under the provisions of shared vision and trust more than binding agreement clauses. Moreover, your goal isn’t to extract all the value from the relationship. Rather, you need to find a way to share the value.

Ideally, your uncommon partners should be transformed for the better by the work you do. For example, Lowe’s uncommon partner developing the robotics narrative was a small startup called Fellow Robots. Through their work with Lowe’s, Fellow Robots transformed from a small team focused on a narrow application of robotics (which was arguably the wrong problem) to a growing company developing a very different and valuable set of capabilities: putting cutting-edge technology on top of the old legacy systems embedded at the core of most companies. Working with Lowe’s allowed Fellow Robots to discover new opportunities, and today Fellow Robots works with retailers around the world, including BevMo! and Yamada. Ultimately, working with uncommon partners should be transformative for both of you, so focus more on creating a bigger pie than on how you are going to slice up a smaller pie.

The above excerpt appears in the new book Leading Transformation: How to Take Charge of Your Company’s Future by Kyle Nel, Nathan Furr, and Thomas Ramsøy, published by Harvard Business Review Press.

Image Credit: Here / Shutterstock.com



#433803 This Week’s Awesome Stories From ...

The AI Cold War That Could Doom Us All
Nicholas Thompson | Wired
“At the dawn of a new stage in the digital revolution, the world’s two most powerful nations are rapidly retreating into positions of competitive isolation, like players across a Go board. …Is the arc of the digital revolution bending toward tyranny, and is there any way to stop it?”

Finally, the Drug That Keeps You Young
Stephen S. Hall | MIT Technology Review
“The other thing that has changed is that the field of senescence—and the recognition that senescent cells can be such drivers of aging—has finally gained acceptance. Whether those drugs will work in people is still an open question. But the first human trials are under way right now.”

Ginkgo Bioworks Is Turning Human Cells Into On-Demand Factories
Megan Molteni | Wired
“The biotech unicorn is already cranking out an impressive number of microbial biofactories that grow and multiply and burp out fragrances, fertilizers, and soon, psychoactive substances. And they do it at a fraction of the cost of traditional systems. But Kelly is thinking even bigger.”

Thousands of Swedes Are Inserting Microchips Under Their Skin
Maddy Savage | NPR
“Around the size of a grain of rice, the chips typically are inserted into the skin just above each user’s thumb, using a syringe similar to that used for giving vaccinations. The procedure costs about $180. So many Swedes are lining up to get the microchips that the country’s main chipping company says it can’t keep up with the number of requests.”

AI Art at Christie’s Sells for $432,500
Gabe Cohn | The New York Times
“Last Friday, a portrait produced by artificial intelligence was hanging at Christie’s New York opposite an Andy Warhol print and beside a bronze work by Roy Lichtenstein. On Thursday, it sold for well over double the price realized by both those pieces combined.”

Should a Self-Driving Car Kill the Baby or the Grandma? Depends on Where You’re From
Karen Hao | MIT Technology Review
“The researchers never predicted the experiment’s viral reception. Four years after the platform went live, millions of people in 233 countries and territories have logged 40 million decisions, making it one of the largest studies ever done on global moral preferences.”

The Rodney Brooks Rules for Predicting a Technology’s Success
Rodney Brooks | IEEE Spectrum
“Building electric cars and reusable rockets is fairly easy. Building a nuclear fusion reactor, flying cars, self-driving cars, or a Hyperloop system is very hard. What makes the difference?”

Image Source: spainter_vfx / Shutterstock.com


#433799 The First Novel Written by AI Is ...

Last year, a novelist went on a road trip across the USA. The trip was an attempt to emulate Jack Kerouac—to go out on the road and find something essential to write about in the experience. There is, however, a key difference between this writer and anyone else talking your ear off in the bar. This writer is just a microphone, a GPS, and a camera hooked up to a laptop and a whole bunch of linear algebra.

People who are optimistic that artificial intelligence and machine learning won’t put us all out of a job say that human ingenuity and creativity will be difficult to imitate. The classic argument is that, just as machines freed us from repetitive manual tasks, machine learning will free us from repetitive intellectual tasks.

This leaves us free to spend more time on the rewarding aspects of our work, pursuing creative hobbies, spending time with loved ones, and generally being human.

In this worldview, creative works like a great novel or symphony, and the emotions they evoke, cannot be reduced to lines of code. Humans retain a dimension of superiority over algorithms.

But is creativity a fundamentally human phenomenon? Or can it be learned by machines?

And if they learn to understand us better than we understand ourselves, could the great AI novel—tailored, of course, to your own predispositions in fiction—be the best you’ll ever read?

Maybe Not a Beach Read
This is the futurist’s view, of course. The reality, as the jury-rigged contraption in Ross Goodwin’s Cadillac for that road trip can attest, is some way off.

“This is very much an imperfect document, a rapid prototyping project. The output isn’t perfect. I don’t think it’s a human novel, or anywhere near it,” Goodwin said of the novel that his machine created. 1 the Road is currently marketed as the first novel written by AI.

Once the neural network has been trained, it can generate any length of text that the author desires, either at random or working from a specific seed word or phrase. Goodwin used the sights and sounds of the road trip to provide these seeds: the novel is written one sentence at a time, based on images, locations, dialogue from the microphone, and even the computer’s own internal clock.
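The seeded sampling loop itself is simple to sketch. Here a toy lookup table stands in for the trained network (a real LSTM would compute the next-character distribution at each step; the fixed table below is a hypothetical stand-in so the loop is runnable):

```python
import random

# Toy stand-in for a trained character model: maps the last character seen
# to a probability distribution over next characters. Purely illustrative.
NEXT_CHAR = {
    "t": {"h": 0.9, "o": 0.1},
    "h": {"e": 1.0},
    "e": {" ": 1.0},
    "o": {" ": 1.0},
    " ": {"t": 1.0},
}

def generate(seed: str, length: int, rng: random.Random) -> str:
    """Grow text from a seed, one sampled character at a time."""
    out = seed
    while len(out) < length:
        dist = NEXT_CHAR[out[-1]]
        chars, probs = zip(*dist.items())
        out += rng.choices(chars, weights=probs)[0]
    return out

print(generate("t", 12, random.Random(0)))
```

Swapping in a different seed word, or reseeding mid-generation from a camera image or GPS fix as Goodwin did, steers the output without changing the loop.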

The results are… mixed.

The novel begins suitably enough, quoting the time: “It was nine seventeen in the morning, and the house was heavy.” Descriptions of locations begin according to the Foursquare dataset fed into the algorithm, but rapidly veer off into the weeds, becoming surreal. While experimentation in literature is a wonderful thing, repeatedly quoting longitude and latitude coordinates verbatim is unlikely to win anyone the Booker Prize.

Data In, Art Out?
Neural networks as creative agents have some advantages. They excel at being trained on large datasets, identifying the patterns in those datasets, and producing output that follows those same rules. Music inspired by or written by AI has become a growing subgenre—there’s even a pop album by human-machine collaborators called the Songularity.

A neural network can “listen to” all of Bach and Mozart in hours, and train itself on the works of Shakespeare to produce passable pseudo-Bard. The idea of artificial creativity has become so widespread that there’s even a meme format about forcibly training neural network ‘bots’ on human writing samples, with hilarious consequences—although the best joke was undoubtedly human in origin.

The AI that roamed from New York to New Orleans was an LSTM (long short-term memory) neural net. By default, information held in the network’s cell state is preserved from one timestep to the next; gates decide which small parts are “forgotten” or “written” at each step, rather than the memory being entirely overwritten.
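That gating behavior can be written out as a single LSTM timestep, following the standard LSTM equations (a minimal numpy sketch with random weights, not Goodwin’s actual code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM timestep. W, U, b hold the stacked weights for the
    forget (f), input (i), and output (o) gates and the candidate (g)."""
    z = W @ x + U @ h_prev + b       # all four pre-activations at once
    n = h_prev.size
    f = sigmoid(z[0*n:1*n])          # what to keep of the old cell state
    i = sigmoid(z[1*n:2*n])          # what to write from the candidate
    o = sigmoid(z[2*n:3*n])          # what to expose as output
    g = np.tanh(z[3*n:4*n])          # candidate values
    c = f * c_prev + i * g           # memory is only partially overwritten
    h = o * np.tanh(c)
    return h, c

n, d = 4, 3                          # hidden size, input size
rng = np.random.default_rng(0)
W, U = rng.normal(size=(4*n, d)), rng.normal(size=(4*n, n))
b = np.zeros(4*n)
h, c = lstm_step(rng.normal(size=d), np.zeros(n), np.zeros(n), W, U, b)
print(h.shape, c.shape)
```

Because the forget gate multiplies the old cell state rather than replacing it, gradients (and memories) can persist over many timesteps, which is what makes the architecture suited to character-by-character text.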

The LSTM architecture performs better than previous recurrent neural networks at tasks such as handwriting and speech recognition. The neural net—and its programmer—looked further in search of literary influences, ingesting 60 million words (360 MB) of raw literature according to Goodwin’s recipe: one third poetry, one third science fiction, and one third “bleak” literature.

In this way, Goodwin has some creative control over the project; the source material influences the machine’s vocabulary and sentence structuring, and hence the tone of the piece.

The Thoughts Beneath the Words
The problem with artificially intelligent novelists is the same problem computer scientists have been trying to solve in conversational artificial intelligence since Turing’s day. Machines can recognize and reproduce complex patterns ever more capably, often better than humans can, but they have no understanding of what those patterns mean.

Goodwin’s neural network spits out sentences one letter at a time, on a tiny printer hooked up to the laptop. Statistical associations such as those tracked by neural nets can form words from letters, and sentences from words, but they know nothing of character or plot.

When talking to a chatbot, the code has no real understanding of what’s been said before, and there is no dataset large enough to train it through all of the billions of possible conversations.

Unless restricted to a predetermined set of options, it loses the thread of the conversation after a reply or two. In a similar way, the creative neural nets have no real grasp of what they’re writing, and no way to produce anything with any overarching coherence or narrative.

Goodwin’s experiment is an attempt to add some coherent backbone to the AI “novel” by repeatedly grounding it with stimuli from the cameras or microphones—the thematic links and narrative provided by the American landscape the neural network drives through.

Goodwin feels that this approach (the car itself moving through the landscape, as if a character) borrows some continuity and coherence from the journey itself. “Coherent prose is the holy grail of natural-language generation—feeling that I had somehow solved a small part of the problem was exhilarating. And I do think it makes a point about language in time that’s unexpected and interesting.”

AI Is Still No Kerouac
A coherent tone and semantic “style” might be enough to produce some vaguely-convincing teenage poetry, as Google did, and experimental fiction that uses neural networks can have intriguing results. But wading through the surreal AI prose of this era, searching for some meaning or motif beyond novelty value, can be a frustrating experience.

Maybe machines can learn the complexities of the human heart and brain, or how to write evocative or entertaining prose. But they’re a long way off, and somehow “more layers!” or a bigger corpus of data doesn’t feel like enough to bridge that gulf.

Real attempts by machines to write fiction have so far been broadly incoherent, but with flashes of poetry—dreamlike, hallucinatory ramblings.

Neural networks might not be capable of writing intricately-plotted works with charm and wit, like Dickens or Dostoevsky, but there’s still an eeriness to trying to decipher the surreal, Finnegans Wake mish-mash.

You might see, in the odd line, the flickering ghost of something like consciousness, a deeper understanding. Or you might just see fragments of meaning thrown into a neural network blender, full of hype and fury, obeying rules in an occasionally striking way, but ultimately signifying nothing. In that sense, at least, the RNN’s grappling with metaphor feels like a metaphor for the hype surrounding the latest AI summer as a whole.

Or, as the human author of On The Road put it: “You guys are going somewhere or just going?”

Image Credit: eurobanks / Shutterstock.com

Posted in Human Robots

#433776 Why We Should Stop Conflating Human and ...

It’s common to hear phrases like ‘machine learning’ and ‘artificial intelligence’ and believe that somehow, someone has managed to replicate a human mind inside a computer. This, of course, is untrue—but part of the reason this idea is so pervasive is because the metaphor of human learning and intelligence has been quite useful in explaining machine learning and artificial intelligence.

Indeed, some AI researchers maintain a close link with the neuroscience community, and inspiration runs in both directions. But the metaphor can be a hindrance to people trying to explain machine learning to those less familiar with it. One of the biggest risks of conflating human and machine intelligence is that we start to hand over too much agency to machines. For those of us working with software, it’s essential that we remember the agency is human—it’s humans who build these systems, after all.

It’s worth unpacking the key differences between machine and human intelligence. While there are certainly similarities, it’s by looking at what makes them different that we can better grasp how artificial intelligence works, and how we can build and use it effectively.

Neural Networks
Central to the metaphor that links human and machine learning is the concept of a neural network. The biggest difference between a human brain and an artificial neural net is the sheer scale of the brain’s neural network. What’s crucial is that it’s not simply the number of neurons in the brain (around 86 billion), but more precisely, the mind-boggling number of connections between them.
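A back-of-the-envelope comparison makes that gap concrete. The brain figures below are commonly cited estimates, and the artificial network is a hypothetical example chosen only for illustration:

```python
# Commonly cited estimates for the human brain, versus a hypothetical
# fully-connected artificial network, to illustrate the gap in connections.
brain_neurons = 86e9      # ~86 billion neurons
brain_synapses = 100e12   # ~100 trillion synaptic connections

# A fully-connected net with 3 layers of 10,000 units each:
layer_width, n_layers = 10_000, 3
net_weights = layer_width * layer_width * (n_layers - 1)

ratio = brain_synapses / net_weights
print(f"brain synapses per network weight: {ratio:,.0f}")
```

Even this sizeable toy network has some 500,000 times fewer connections than the brain has synapses, before accounting for the qualitative differences discussed below.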

But the issue runs deeper than questions of scale. The human brain is qualitatively different from an artificial neural network for two other important reasons: the connections that power it are analogue, not digital, and the neurons themselves aren’t uniform (as they are in an artificial neural network).

This is why the brain is such a complex thing. Even the most complex artificial neural network, while often difficult to interpret and unpack, has an underlying architecture and principles guiding it (this is what we’re trying to do, so let’s construct the network like this…).

Intricate as they may be, neural networks in AIs are engineered with a specific outcome in mind. The human mind, however, doesn’t have the same degree of intentionality in its engineering. Yes, it should help us do all the things we need to do to stay alive, but it also allows us to think critically and creatively in a way that doesn’t need to be programmed.

The Beautiful Simplicity of AI
The fact that artificial intelligence systems are so much simpler than the human brain is, ironically, what enables AIs to deal with far greater computational complexity than we can.

Artificial neural networks can hold much more information and data than the human brain, largely due to the type of data that is stored and processed in a neural network. It is discrete and specific, like an entry in an Excel spreadsheet.

In the human brain, data doesn’t have this same discrete quality. So while an artificial neural network can process very specific data at an incredible scale, it isn’t able to process information in the rich and multidimensional manner a human brain can. This is the key difference between an engineered system and the human mind.

Despite years of research, the human mind still remains somewhat opaque. This is because the brain’s analog synaptic connections are far harder to probe and map than the digital connections within an artificial neural network.

Speed and Scale
Consider what this means in practice. The relative simplicity of an AI allows it to do a very complex task very well, and very quickly. A human brain simply can’t process data at scale and speed in the way AIs need to if they’re, say, translating speech to text, or processing a huge set of oncology reports.

Essential to the way AI works in both these contexts is that it breaks data and information down into tiny constituent parts. For example, it could break sounds down into phonetic text, which could then be translated into full sentences, or break images into pieces to understand the rules of how a huge set of them is composed.

Humans often do a similar thing, and this is the point at which machine learning is most like human learning; like algorithms, humans break data or information into smaller chunks in order to process it.
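As a hypothetical illustration of that breakdown process, a text pipeline might decompose raw input into progressively smaller units before any model sees it. The function and its splitting rules are assumptions invented for this example, not a reference to any particular system:

```python
# Hypothetical preprocessing: break raw text into progressively smaller
# units (sentences -> words -> characters), the kind of decomposition an
# engineer bakes into a pipeline before the model ever sees the data.
def decompose(text):
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    words = [w for s in sentences for w in s.split()]
    chars = [c for w in words for c in w]
    return sentences, words, chars

sentences, words, chars = decompose("The rain fell. The road went on.")
print(len(sentences), len(words), len(chars))
```

The point is that the rules of the decomposition, split on full stops, split on spaces, are chosen by a human engineer in advance, which is exactly the sense in which the parameters of machine “understanding” are set from the start.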

But there’s a reason for this similarity. This breakdown process is engineered into every neural network by a human engineer. What’s more, the way this process is designed will be down to the problem at hand. How an artificial intelligence system breaks down a data set is its own way of ‘understanding’ it.

Even while running a highly complex algorithm unsupervised, the parameters of how an AI learns—how it breaks data down in order to process it—are always set from the start.

Human Intelligence: Defining Problems
Human intelligence doesn’t have this set of limitations, which is what makes us so much more effective at problem-solving. It’s the human ability to ‘create’ problems that makes us so good at solving them. There’s an element of contextual understanding and decision-making in the way humans approach problems.

AIs might be able to unpack problems or find new ways into them, but they can’t define the problem they’re trying to solve.

Algorithmic bias has come into focus in recent years, with an increasing number of scandals around biased AI systems. Of course, these biases originate with the humans who build the algorithms and choose their training data, but this underlines the point that algorithmic biases can only be identified, and corrected, by human intelligence.

Human and Artificial Intelligence Should Complement Each Other
We must remember that artificial intelligence and machine learning aren’t simply things that ‘exist’ and that we can no longer control. They are built, engineered, and designed by us. This mindset puts us in control of the future, and makes the algorithms themselves seem all the more elegant and remarkable.

Image Credit: Liu zishan/Shutterstock
