Tag Archives: style

#433907 How the Spatial Web Will Fix What’s ...

Converging exponential technologies will transform media, advertising and the retail world. The world we see, through our digitally enhanced eyes, will multiply and explode with intelligence, personalization, and brilliance.

This is the age of Web 3.0.

Last week, I discussed the what and how of Web 3.0 (also known as the Spatial Web), walking through its architecture and the converging technologies that enable it.

To recap, while Web 1.0 consisted of static documents and read-only data, Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens—a flat web of sensorily confined information.

During the next two to five years, the convergence of 5G, AI, a trillion sensors, and VR/AR will enable us to both map our physical world into virtual space and superimpose a digital layer onto our physical environments.

Web 3.0 is about to transform everything—from the way we learn and educate, to the way we trade (smart) assets, to our interactions with real and virtual versions of each other.

And while users grow rightly concerned about data privacy and misuse, the Spatial Web’s use of blockchain in its data and governance layer will secure and validate our online identities, protecting everything from your virtual assets to personal files.

In this second installment of the Web 3.0 series, I’ll be discussing the Spatial Web’s vast implications for a handful of industries:

News & Media Coverage
Smart Advertising
Personalized Retail

Let’s dive in.

Transforming Network News with Web 3.0
News media is big business. In 2016, global news media (including print) generated 168 billion USD in circulation and advertising revenue.

The news we listen to impacts our mindset. Listen to dystopian news on violence, disaster, and evil, and you’ll more likely be searching for a cave to hide in, rather than technology for the launch of your next business.

Today, different news media present starkly different realities of everything from foreign conflict to domestic policy. And outcomes are consequential. What reporters and news corporations decide to show or omit of a given news story plays a tremendous role in shaping the beliefs and resulting values of entire populations and constituencies.

But what if we could have an objective benchmark for today’s news, whereby crowdsourced and sensor-collected evidence allows you to tour the site of journalistic coverage, determining for yourself the most salient aspects of a story?

Enter mesh networks, AI, public ledgers, and virtual reality.

While traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.

In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
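As a toy illustration of that idea, the sketch below scatters nodes at random on a unit square, links any two within radio range, and floods a message peer to peer. Everything here (node counts, the range value, the flooding scheme) is made up for illustration; real mesh protocols add routing, acknowledgment, and congestion control.

```python
import random

def build_mesh(num_nodes, radio_range, seed=0):
    """Scatter nodes on a unit square; any two within radio range become peers."""
    rng = random.Random(seed)
    pos = {n: (rng.random(), rng.random()) for n in range(num_nodes)}
    links = {n: set() for n in range(num_nodes)}
    for a in pos:
        for b in pos:
            if a < b:
                dist = ((pos[a][0] - pos[b][0]) ** 2 +
                        (pos[a][1] - pos[b][1]) ** 2) ** 0.5
                if dist <= radio_range:
                    links[a].add(b)
                    links[b].add(a)
    return links

def flood(links, origin):
    """Non-hierarchical broadcast: each node relays the message to its peers once."""
    seen = {origin}
    frontier = [origin]
    hops = 0
    while frontier:
        nxt = []
        for node in frontier:
            for peer in links[node]:
                if peer not in seen:
                    seen.add(peer)
                    nxt.append(peer)
        frontier = nxt
        hops += 1
    return seen, hops

links = build_mesh(num_nodes=50, radio_range=0.3)
reached, hops = flood(links, origin=0)
# Every node in `reached` got the message peer to peer, with no access point involved.
```

The point of the sketch is the absence of any hub: connectivity emerges from whichever devices happen to be in range of each other.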

Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.

Imagine a scenario in which protests break out across the country, each cluster of activists broadcasting an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram of the march in real time. Want to see and hear what the NYC-based crowds are advocating for? Throw on some VR goggles and explore the event with full access. Or cue into the southern Texan border to assess for yourself the handling of immigrant entry and border conflicts.

Take a front seat in the Capitol during tomorrow’s Senate hearing, assessing each Senator’s reactions, questions and arguments without a Fox News or CNN filter. Or if you’re short on time, switch on the holographic press conference and host 3D avatars of live-broadcasting politicians in your living room.

We often think of modern media as taking away consumer agency, feeding tailored and often partisan ideology to a complacent audience. But as wireless mesh networks and agnostic sensor data allow for immersive VR-accessible news sites, the average viewer will necessarily become an active participant in her own education of current events.

And with each of us interpreting the news according to our own values, I envision a much less polarized world. A world in which civic engagement, moderately reasoned dialogue, and shared assumptions will allow us to empathize and make compromises.

The future promises an era in which news is verified and balanced; wherein public ledgers, AI, and new web interfaces bring you into the action and respect your intelligence—not manipulate your ignorance.

Web 3.0 Reinventing Advertising
Bringing about the rise of ‘user-owned data’ and self-established permissions, Web 3.0 is poised to completely disrupt digital advertising—a global industry worth over 192 billion USD.

Currently, targeted advertising leverages tomes of personal data and online consumer behavior to subtly engage you with products you might not want, or to sell you falsely advertised services that promise results they can’t deliver.

With a new Web 3.0 data and governance layer, however, distributed ledger technologies will require advertisers to engage in more direct interaction with consumers, validating claims and upping transparency.

And with a data layer that allows users to own and authorize third-party use of their data, blockchain also holds extraordinary promise to slash not only data breaches and identity theft, but also covert advertiser bombardment you never authorized.

Accessing crowdsourced reviews and AI-driven fact-checking, users will be able to validate advertising claims more efficiently and accurately than ever before, potentially rating and filtering out advertisers in the process. And in such a streamlined system of verified claims, sellers will face increased pressure to compete more on product and rely less on marketing.

But perhaps most exciting is the convergence of artificial intelligence and augmented reality.

As Spatial Web networks begin to associate digital information with physical objects and locations, products will begin to “sell themselves.” Each with built-in smart properties, products will become hyper-personalized, communicating information directly to users through Web 3.0 interfaces.

Imagine stepping into a department store in pursuit of a new web-connected fridge. As soon as you enter, your AR goggles register your location and immediately grant you access to a populated register of store products.

As you move closer to a kitchen set that catches your eye, a virtual salesperson—whether by holographic video or avatar—pops into your field of view next to the fridge you’ve been examining and begins introducing you to its various functions and features. You quickly decide you’d rather disable the avatar and get textual output instead, and your preferences reset to list appliance properties visually.

After a virtual tour of several other fridges, you decide on the one you want and seamlessly execute a smart contract, carried out by your smart wallet and the fridge. The transaction takes place in seconds, and the fridge’s blockchain-recorded ownership record has been updated.
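That ownership hand-off can be pictured with a toy, append-only ledger in which each record commits to the hash of the record before it. This is purely illustrative: real smart contracts involve consensus, cryptographic signatures, and wallets, none of which are modeled here, and all names are invented.

```python
import hashlib
import json

class OwnershipLedger:
    """Append-only, hash-chained record of who owns which item."""

    def __init__(self):
        self.blocks = []

    def record_transfer(self, item_id, new_owner):
        # Each block commits to the previous block's hash, so history
        # can't be silently rewritten without breaking the chain.
        prev_hash = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        payload = {"item": item_id, "owner": new_owner, "prev": prev_hash}
        digest = hashlib.sha256(
            json.dumps(payload, sort_keys=True).encode()
        ).hexdigest()
        self.blocks.append({**payload, "hash": digest})

    def current_owner(self, item_id):
        # The most recent transfer for an item wins.
        for block in reversed(self.blocks):
            if block["item"] == item_id:
                return block["owner"]
        return None

ledger = OwnershipLedger()
ledger.record_transfer("fridge-42", "store")
ledger.record_transfer("fridge-42", "buyer")
```

After the purchase, `current_owner("fridge-42")` resolves to the buyer, and the full transfer history remains auditable in order.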

Better yet, you head over to a friend’s home for dinner after moving into the neighborhood. While catching up in the kitchen, your eyes fixate on the cabinets, which quickly populate your AR glasses with a price-point and selection of colors.

But what if you’d rather not get auto-populated product info in the first place? No problem!

Now empowered with self-sovereign identities, users might be able to turn off advertising preferences entirely, turning on smart recommendations only when they want to buy a given product or need new supplies.

And with user-centric data, consumers might even sell such information to advertisers directly. Now, instead of Facebook or Google profiting off your data, you might earn a passive income by giving advertisers permission to personalize and market their services. Buy more, and your personal data marketplace grows in value. Buy less, and a lower-valued advertising profile causes an ebb in advertiser input.

With user-controlled data, advertisers now work on your terms, putting increased pressure on product iteration and personalizing products for each user.

This brings us to the transformative future of retail.

Personalized Retail: Power of the Spatial Web
In a future of smart and hyper-personalized products, I might walk through a virtual game space or a digitally reconstructed Target, browsing specific categories of clothing I’ve predetermined prior to entry.

As I pick out my selection, my AI assistant hones its algorithm to reflect my new fashion preferences, and personal shoppers—also visiting the store in VR—help me pair different pieces as I go.

Once my personal shopper has finished constructing various outfits, I then sit back and watch a fashion show of countless Peter avatars with style and color variations of my selection, each customizable.

After I’ve made my selection, I might choose to purchase physical versions of three outfits and virtual versions of two others for my digital avatar. Payments are made automatically as I leave the store, including a smart wallet transaction made with the personal shopper at a per-outfit rate (for only the pieces I buy).

Already, several big players have broken into the VR market. Just this year, Walmart has announced its foray into the VR space, shipping 17,000 Oculus Go VR headsets to Walmart locations across the US.

And just this past January, Walmart filed two VR shopping-related patents. In a new bid to disrupt a rapidly changing retail market, Walmart now describes a system in which users couple their VR headset with haptic gloves for an immersive in-store experience, whether at 3am in your living room or during a lunch break at the office.

But Walmart is not alone. Big e-commerce players from Amazon to Alibaba are leaping onto the scene with new software buildout to ride the impending headset revolution.

Beyond virtual reality, players like IKEA have even begun using mobile-based augmented reality to map digitally replicated furniture in your physical living room, true to dimension. And this is just the beginning….

As AR headset hardware undergoes breakneck advancements in the next two to five years, we might soon be able to project watches onto our wrists, swapping out colors, styles, brands, and price points.

Or let’s say I need a new coffee table in my office. Pulling up multiple models in AR, I can position each option using advanced hand-tracking technology and customize height and width according to my needs. Once the smart payment is triggered, the manufacturer prints my newly customized piece and drones it to my doorstep. As soon as I need to assemble the pieces, overlaid digital prompts walk me through each step, and any points of user confusion are communicated to a company database.

Perhaps one of the ripest industries for Spatial Web disruption, retail presents one of the greatest opportunities for profit across virtual apparel, digital malls, AI fashion startups and beyond.

In our next series iteration, I’ll be looking at the tremendous opportunities created by Web 3.0 for the Future of Work and Entertainment.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: nmedia / Shutterstock.com

Posted in Human Robots

#433799 The First Novel Written by AI Is ...

Last year, a novelist went on a road trip across the USA. The trip was an attempt to emulate Jack Kerouac—to go out on the road and find something essential to write about in the experience. There is, however, a key difference between this writer and anyone else talking your ear off in the bar. This writer is just a microphone, a GPS, and a camera hooked up to a laptop and a whole bunch of linear algebra.

People who are optimistic that artificial intelligence and machine learning won’t put us all out of a job say that human ingenuity and creativity will be difficult to imitate. The classic argument is that, just as machines freed us from repetitive manual tasks, machine learning will free us from repetitive intellectual tasks.

This leaves us free to spend more time on the rewarding aspects of our work, pursuing creative hobbies, spending time with loved ones, and generally being human.

In this worldview, creative works like a great novel or symphony, and the emotions they evoke, cannot be reduced to lines of code. Humans retain a dimension of superiority over algorithms.

But is creativity a fundamentally human phenomenon? Or can it be learned by machines?

And if they learn to understand us better than we understand ourselves, could the great AI novel—tailored, of course, to your own predispositions in fiction—be the best you’ll ever read?

Maybe Not a Beach Read
This is the futurist’s view, of course. The reality, as the jury-rigged contraption in Ross Goodwin’s Cadillac for that road trip can attest, is some way off.

“This is very much an imperfect document, a rapid prototyping project. The output isn’t perfect. I don’t think it’s a human novel, or anywhere near it,” Goodwin said of the novel that his machine created. 1 The Road is currently marketed as the first novel written by AI.

Once the neural network has been trained, it can generate any length of text that the author desires, either at random or working from a specific seed word or phrase. Goodwin used the sights and sounds of the road trip to provide these seeds: the novel is written one sentence at a time, based on images, locations, dialogue from the microphone, and even the computer’s own internal clock.
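Goodwin’s actual model is a character-level LSTM, but the seed-then-sample loop he describes can be shown with a stand-in as simple as a character bigram table. This is a deliberate simplification and the names are mine; the structure of the loop (start from a seed, repeatedly sample the next character) is the same.

```python
import random

def train_bigrams(text):
    """Record, for each character, every character observed to follow it."""
    table = {}
    for a, b in zip(text, text[1:]):
        table.setdefault(a, []).append(b)
    return table

def generate(table, seed, length, rng=random):
    """Extend the seed one character at a time by sampling observed successors."""
    out = list(seed)
    for _ in range(length):
        choices = table.get(out[-1])
        if not choices:  # dead end: this character was never followed by anything
            break
        out.append(rng.choice(choices))
    return "".join(out)

table = train_bigrams("it was nine seventeen in the morning")
sentence = generate(table, seed="it was ", length=40)
```

An LSTM replaces the bigram lookup with a learned, stateful predictor, which is what lets it capture longer-range patterns than the single preceding character used here.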

The results are… mixed.

The novel begins suitably enough, quoting the time: “It was nine seventeen in the morning, and the house was heavy.” Descriptions of locations begin according to the Foursquare dataset fed into the algorithm, but rapidly veer off into the weeds, becoming surreal. While experimentation in literature is a wonderful thing, repeatedly quoting longitude and latitude coordinates verbatim is unlikely to win anyone the Booker Prize.

Data In, Art Out?
Neural networks as creative agents have some advantages. They excel at being trained on large datasets, identifying the patterns in those datasets, and producing output that follows those same rules. Music inspired by or written by AI has become a growing subgenre—there’s even a pop album by human-machine collaborators called the Songularity.

A neural network can “listen to” all of Bach and Mozart in hours, and train itself on the works of Shakespeare to produce passable pseudo-Bard. The idea of artificial creativity has become so widespread that there’s even a meme format about forcibly training neural network ‘bots’ on human writing samples, with hilarious consequences—although the best joke was undoubtedly human in origin.

The AI that roamed from New York to New Orleans was an LSTM (long short-term memory) neural net. By default, the information held in each memory cell is preserved, and only small parts of it are “forgotten” or “learned” at each timestep, rather than the cell state being overwritten wholesale.
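That gating behavior can be sketched for a single scalar LSTM unit. The equations are the standard LSTM cell; the weights below are toy values I chose so the forget gate sits near 1, making the memory-preservation effect visible.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    """One LSTM timestep for a single scalar unit (standard LSTM equations)."""
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])          # forget gate
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])          # input gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])          # output gate
    c_tilde = math.tanh(w["wc"] * x + w["uc"] * h_prev + w["bc"])  # candidate memory
    c = f * c_prev + i * c_tilde  # old memory partly kept, new memory partly written
    h = o * math.tanh(c)          # hidden state read out through the output gate
    return h, c

# Toy weights: a strongly positive forget-gate bias drives f toward 1,
# so the existing cell state (here 2.0) survives the timestep nearly unchanged.
w = {k: 0.0 for k in ("wf", "uf", "wi", "ui", "wo", "uo", "wc", "uc", "bi", "bo", "bc")}
w["bf"] = 6.0
h, c = lstm_step(1.0, 0.0, 2.0, w)
```

With the forget gate saturated near 1 and the candidate memory near 0, the new cell state stays close to 2.0: the “only small parts change per timestep” behavior described above.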

The LSTM architecture performs better than previous recurrent neural networks at tasks such as handwriting and speech recognition. The neural net—and its programmer—looked further in search of literary influences, ingesting 60 million words (360 MB) of raw literature according to Goodwin’s recipe: one third poetry, one third science fiction, and one third “bleak” literature.

In this way, Goodwin has some creative control over the project; the source material influences the machine’s vocabulary and sentence structuring, and hence the tone of the piece.

The Thoughts Beneath the Words
The problem with artificially intelligent novelists is the same problem with conversational artificial intelligence that computer scientists have been trying to solve since Turing’s day. The machines can detect and reproduce complex patterns ever more capably, often better than humans can, but they have no understanding of what these patterns mean.

Goodwin’s neural network spits out sentences one letter at a time, on a tiny printer hooked up to the laptop. Statistical associations such as those tracked by neural nets can form words from letters, and sentences from words, but they know nothing of character or plot.

When talking to a chatbot, the code has no real understanding of what’s been said before, and there is no dataset large enough to train it through all of the billions of possible conversations.

Unless restricted to a predetermined set of options, it loses the thread of the conversation after a reply or two. In a similar way, the creative neural nets have no real grasp of what they’re writing, and no way to produce anything with any overarching coherence or narrative.

Goodwin’s experiment is an attempt to add some coherent backbone to the AI “novel” by repeatedly grounding it with stimuli from the cameras or microphones—the thematic links and narrative provided by the American landscape the neural network drives through.

Goodwin feels that this approach (the car itself moving through the landscape, as if a character) borrows some continuity and coherence from the journey itself. “Coherent prose is the holy grail of natural-language generation—feeling that I had somehow solved a small part of the problem was exhilarating. And I do think it makes a point about language in time that’s unexpected and interesting.”

AI Is Still No Kerouac
A coherent tone and semantic “style” might be enough to produce some vaguely convincing teenage poetry, as Google did, and experimental fiction that uses neural networks can have intriguing results. But wading through the surreal AI prose of this era, searching for some meaning or motif beyond novelty value, can be a frustrating experience.

Maybe machines can learn the complexities of the human heart and brain, or how to write evocative or entertaining prose. But they’re a long way off, and somehow “more layers!” or a bigger corpus of data doesn’t feel like enough to bridge that gulf.

Real attempts by machines to write fiction have so far been broadly incoherent, but with flashes of poetry—dreamlike, hallucinatory ramblings.

Neural networks might not be capable of writing intricately plotted works with charm and wit, like Dickens or Dostoevsky, but there’s still an eeriness to trying to decipher the surreal, Finnegans Wake mish-mash.

You might see, in the odd line, the flickering ghost of something like consciousness, a deeper understanding. Or you might just see fragments of meaning thrown into a neural network blender, full of hype and fury, obeying rules in an occasionally striking way, but ultimately signifying nothing. In that sense, at least, the RNN’s grappling with metaphor feels like a metaphor for the hype surrounding the latest AI summer as a whole.

Or, as the human author of On The Road put it: “You guys are going somewhere or just going?”

Image Credit: eurobanks / Shutterstock.com


#432563 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
Pedro Domingos on the Arms Race in Artificial Intelligence
Christoph Scheuermann and Bernhard Zand | Spiegel Online
“AI lowers the cost of knowledge by orders of magnitude. One good, effective machine learning system can do the work of a million people, whether it’s for commercial purposes or for cyberespionage. Imagine a country that produces a thousand times more knowledge than another. This is the challenge we are facing.”

BIOTECHNOLOGY
Gene Therapy Could Free Some People From a Lifetime of Blood Transfusions
Emily Mullin | MIT Technology Review
“A one-time, experimental treatment for an inherited blood disorder has shown dramatic results in a small study. …[Lead author Alexis Thompson] says the effect on patients has been remarkable. ‘They have been tied to this ongoing medical therapy that is burdensome and expensive for their whole lives,’ she says. ‘Gene therapy has allowed people to have aspirations and really pursue them.’ ”

ENVIRONMENT
The Revolutionary Giant Ocean Cleanup Machine Is About to Set Sail
Adele Peters | Fast Company
“By the end of 2018, the nonprofit says it will bring back its first harvest of ocean plastic from the North Pacific Gyre, along with concrete proof that the design works. The organization expects to bring 5,000 kilograms of plastic ashore per month with its first system. With a full fleet of systems deployed, it believes that it can collect half of the plastic trash in the Great Pacific Garbage Patch—around 40,000 metric tons—within five years.”

ROBOTICS
Autonomous Boats Will Be on the Market Sooner Than Self-Driving Cars
Tracey Lindeman | Motherboard
“Some unmanned watercraft…may be at sea commercially before 2020. That’s partly because automating all ships could generate a ridiculous amount of revenue. According to the United Nations, 90 percent of the world’s trade is carried by sea and 10.3 billion tons of products were shipped in 2016.”

DIGITAL CULTURE
Style Is an Algorithm
Kyle Chayka | Racked
“Confronting the Echo Look’s opaque statements on my fashion sense, I realize that all of these algorithmic experiences are matters of taste: the question of what we like and why we like it, and what it means that taste is increasingly dictated by black-box robots like the camera on my shelf.”

COMPUTING
How Apple Will Use AR to Reinvent the Human-Computer Interface
Tim Bajarin | Fast Company
“It’s in Apple’s DNA to continually deliver the ‘next’ major advancement to the personal computing experience. Its innovation in man-machine interfaces started with the Mac and then extended to the iPod, the iPhone, the iPad, and most recently, the Apple Watch. Now, get ready for the next chapter, as Apple tackles augmented reality, in a way that could fundamentally transform the human-computer interface.”

SCIENCE
Advanced Microscope Shows Cells at Work in Incredible Detail
Steve Dent | Engadget
“For the first time, scientists have peered into living cells and created videos showing how they function with unprecedented 3D detail. Using a special microscope and new lighting techniques, a team from Harvard and the Howard Hughes Medical Institute captured zebrafish immune cell interactions with unheard-of 3D detail and resolution.”

Image Credit: dubassy / Shutterstock.com


#432487 Can We Make a Musical Turing Test?

As artificial intelligence advances, we’re encountering the same old questions. How much of what we consider to be fundamentally human can be reduced to an algorithm? Can we create something sufficiently advanced that people can no longer distinguish between the two? This, after all, is the idea behind the Turing Test, which has yet to be passed.

At first glance, you might think music is beyond the realm of algorithms. Birds can sing, and people can compose symphonies. Music is evocative; it makes us feel. Very often, our intense personal and emotional attachments to music are because it reminds us of our shared humanity. We are told that creative jobs are the least likely to be automated. Creativity seems fundamentally human.

But I think above all, we view it as reductionist sacrilege: to dissect beautiful things. “If you try to strangle a skylark / to cut it up, see how it works / you will stop its heart from beating / you will stop its mouth from singing.” A human musician wrote that; a machine might be able to string words together that are happy or sad; it might even be able to conjure up a decent metaphor from the depths of some neural network—but could it understand humanity enough to produce art that speaks to humans?

Then, of course, there’s the other side of the debate. Music, after all, has a deeply mathematical structure; you can train a machine to produce harmonics. “In the teachings of Pythagoras and his followers, music was inseparable from numbers, which were thought to be the key to the whole spiritual and physical universe,” according to Grout in A History of Western Music. You might argue that the process of musical composition cannot be reduced to a simple algorithm, yet musicians have often done so. Mozart, with his “Dice Music,” used the roll of dice to decide how to order musical fragments; creativity through an 18th-century random number generator. Algorithmic music goes back a very long way, with the first papers on the subject dating from the 1960s.
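The dice game reduces to a few lines of code: pre-written fragments indexed by bar position and dice total, with the roll choosing among them. The fragment labels below are placeholders, not Mozart’s actual measures.

```python
import random

# One pre-written fragment per (bar position, dice total) pair.
# Labels like "bar3-option7" stand in for actual measures of music.
FRAGMENTS = {
    bar: {total: f"bar{bar}-option{total}" for total in range(2, 13)}
    for bar in range(16)
}

def dice_minuet(rng=random):
    """Compose a 16-bar minuet: two dice rolled per bar pick one stored fragment."""
    measures = []
    for bar in range(16):
        roll = rng.randint(1, 6) + rng.randint(1, 6)  # total is 2..12
        measures.append(FRAGMENTS[bar][roll])
    return measures

piece = dice_minuet()
```

All the creativity lives in the pre-composed fragment table; the algorithm only supplies the ordering, which is exactly the division of labor Mozart’s game relied on.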

Then there’s the techno-enthusiast side of the argument. iTunes has 26 million songs, easily more than a century of music. A human could never listen to and learn from them all, but a machine could. It could also memorize every note of Beethoven. Music can be converted into MIDI files, a nice chewable data format that allows even a character-by-character neural net you can run on your computer to generate music. (Seriously, even I could get this thing working.)
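One common way to make MIDI “chewable” for a character-level net is to flatten note events into plain-text tokens the net can read one character at a time. A minimal sketch, where the token format is my own invention rather than any standard:

```python
# MIDI pitch classes, in order from C; note number 60 is middle C (C4).
NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def midi_note_to_token(note, duration_ticks):
    """Map a MIDI note number (0-127) plus a duration to a token like 'E4:480'."""
    name = NOTE_NAMES[note % 12]
    octave = note // 12 - 1  # MIDI convention: note 60 -> octave 4
    return f"{name}{octave}:{duration_ticks}"

def encode(events):
    """Flatten (note, duration) events into one whitespace-separated string."""
    return " ".join(midi_note_to_token(n, d) for n, d in events)

line = encode([(60, 480), (64, 480), (67, 960)])  # a C major arpeggio
```

Once music is a string like this, the same character-by-character models used for text can be trained on it, and the sampled output decoded back into notes.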

Indeed, generating music in the style of Bach has long been a test for AI, and you can see neural networks gradually learn to imitate classical composers while trying to avoid overfitting. When an algorithm overfits, it essentially starts copying the existing music, rather than being inspired by it to create something similar: a tightrope the best human artists learn to walk. Creativity doesn’t spring from nowhere; even maverick musical geniuses have their influences.
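A crude way to catch that kind of copying is to measure the longest run of consecutive tokens the model reproduces verbatim from its training data. A rough sketch (the substring test is deliberately naive; real checks would match token boundaries and use suffix structures for speed):

```python
def longest_copied_run(generated, training, tokenizer=str.split):
    """Longest run of consecutive generated tokens appearing verbatim in training."""
    gen = tokenizer(generated)
    train_text = " ".join(tokenizer(training))
    best = 0
    for i in range(len(gen)):
        # Only probe runs longer than the best found so far.
        for j in range(i + best + 1, len(gen) + 1):
            if " ".join(gen[i:j]) in train_text:
                best = j - i
            else:
                break
    return best
```

A high score relative to output length suggests the model is regurgitating its corpus rather than recombining it; the same idea applies whether the tokens are words or note events.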

Does a machine have to be truly ‘creative’ to produce something that someone would find valuable? To what extent would listeners’ attitudes change if they thought they were hearing a human vs. an AI composition? This all suggests a musical Turing Test. Of course, it already exists. In fact, it’s run out of Dartmouth, the school that hosted that first, seminal summer conference on AI. This year, the contest is bigger than ever: alongside the PoetiX, LimeriX and LyriX competitions for poetry and lyrics, there’s a DigiKidLit competition for children’s literature (although you may have reservations about exposing your children to neural-net generated content… it can get a bit surreal).

There’s also a pair of musical competitions, including one for original compositions in different genres. Key genres and styles are represented by Charlie Parker for Jazz and the Bach chorales for classical music. There’s also a free composition, and a contest where a human and an AI try to improvise together—the AI must respond to a human spontaneously, in real time, and in a musically pleasing way. Quite a challenge! In all cases, if any of the generated work is indistinguishable from human performers, the neural net has passed the Turing Test.

Did they? Here’s part of 2017’s winning sonnet from Charese Smiley and Hiroko Bretz:

The large cabin was in total darkness.
Come marching up the eastern hill afar.
When is the clock on the stairs dangerous?
Everything seemed so near and yet so far.
Behind the wall silence alone replied.
Was, then, even the staircase occupied?

Generating the rhymes is easy enough, the sentence structure a little trickier, but what’s impressive about this sonnet is that it sticks to a single topic and appears to be a more coherent whole. I’d guess they used associated “lexical fields” of similar words to help generate something coherent. In a similar way, most of the more famous examples of AI-generated music still involve some amount of human control, even if it’s editorial; a human will build a song around an AI-generated riff, or select the most convincing Bach chorale from amidst many different samples.
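That guess about lexical fields can be made concrete: restrict candidate line-ending words to one topical word set, then search within it for rhyming pairs. A toy version, where both the word list and the rhyme test are my own simplifications (real systems rhyme on phonemes, not spelling):

```python
# A single "lexical field" of house-related words, echoing the sonnet's topic.
HOUSE_FIELD = {"darkness", "stairs", "staircase", "wall",
               "cabin", "clock", "door", "floor"}

def rhymes(a, b, tail=2):
    """Toy rhyme test: two distinct words sharing a spelled ending."""
    return a != b and a[-tail:] == b[-tail:]

def rhyming_pairs(field):
    """All rhyming pairs available inside one lexical field."""
    words = sorted(field)
    return [(a, b)
            for i, a in enumerate(words)
            for b in words[i + 1:]
            if rhymes(a, b)]

pairs = rhyming_pairs(HOUSE_FIELD)
```

Because every candidate comes from the same field, any line built around these endings stays on topic for free, which is one plausible route to the sonnet’s surprising coherence.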

We are seeing strides forward in the ability of AI to generate human voices and human likenesses. As the latter example shows, in the fake news era people have focused on the dangers of this tech, but might it also be possible to create a virtual performer, trained on a dataset of their original music? Did you ever want to hear another Beatles album, or jam with Miles Davis? Of course, these things are impossible—but could we create a similar experience that people would genuinely value? Even, to the untrained eye, something indistinguishable from the real thing?

And if it did measure up to the real thing, what would this mean? Jaron Lanier is a fascinating technology writer, a critic of strong AI, and a believer in the power of virtual reality to change the world and provide truly meaningful experiences. He’s also a composer and a musical aficionado. He pointed out in a recent interview that translation algorithms, by reducing the amount of work translators are commissioned to do, have, in some sense, profited from stolen expertise. They were trained on huge datasets purloined from human linguists and translators. If you can train an AI on someone’s creative output and it produces new music, who “owns” it?

Although companies that offer AI music tools are starting to proliferate, and some groups will argue that the musical Turing test has been passed already, AI-generated music is hardly racing to the top of the pop charts just yet. Even as the line between human-composed and AI-generated music starts to blur, there’s still a gulf between the average human and musical genius. In the next few years, we’ll see how far the current techniques can take us. It may be the case that there’s something in the skylark’s song that can’t be generated by machines. But maybe not, and then this song might need an extra verse.

Image Credit: d1sk / Shutterstock.com


#432262 How We Can ‘Robot-Proof’ Education ...

Like millions of other individuals in the workforce, you’re probably wondering if you will one day be replaced by a machine. If you’re a student, you’re probably wondering if your chosen profession will even exist by the time you’ve graduated. From driving to legal research, there isn’t much that technology hasn’t already automated (or begun to automate). Many of us will need to adapt to this disruption in the workforce.

But it’s not enough for students and workers to adapt, become lifelong learners, and re-skill themselves. We also need to see innovation and initiative at an institutional and governmental level. According to research by The Economist, almost half of all jobs could be automated by computers within the next two decades, and no government in the world is prepared for it.

While many see the current trend in automation as a terrifying threat, others see it as an opportunity. In Robot-Proof: Higher Education in the Age of Artificial Intelligence, Northeastern University president Joseph Aoun proposes educating students in a way that will allow them to do the things that machines can’t. He calls for a new paradigm that teaches young minds “to invent, to create, and to discover”—filling the relevant needs of our world that robots simply can’t fill. Aoun proposes a much-needed novel framework that will allow us to “robot-proof” education.

Literacies and Core Cognitive Capacities of the Future
Aoun lays a framework for a new discipline, humanics, which discusses the important capacities and literacies for emerging education systems. At its core, the framework emphasizes our uniquely human abilities and strengths.

The three key literacies include data literacy (being able to manage and analyze big data), technological literacy (being able to understand exponential technologies and conduct computational thinking), and human literacy (being able to communicate and evaluate social, ethical, and existential impact).

Beyond the literacies, at the heart of Aoun’s framework are four cognitive capacities that are crucial to develop in our students if they are to be resistant to automation: critical thinking, systems thinking, entrepreneurship, and cultural agility.

“These capacities are mindsets rather than bodies of knowledge—mental architecture rather than mental furniture,” he writes. “Going forward, people will still need to know specific bodies of knowledge to be effective in the workplace, but that alone will not be enough when intelligent machines are doing much of the heavy lifting of information. To succeed, tomorrow’s employees will have to demonstrate a higher order of thought.”

Like many other experts in education, Joseph Aoun emphasizes the importance of critical thinking. This matters not just for taking a skeptical approach to information, but also for logically breaking down a claim or problem into multiple layers of analysis. We spend so much time teaching students how to answer questions that we often neglect to teach them how to ask questions. Asking questions—and asking good ones—is a foundation of critical thinking. Before you can solve a problem, you must be able to critically analyze and question what is causing it. This is why critical thinking and problem solving are coupled together.

The second capacity, systems thinking, involves being able to think holistically about a problem. The most creative problem-solvers and thinkers are able to take a multidisciplinary perspective and connect the dots between many different fields. According to Aoun, it "involves seeing across areas that machines might be able to comprehend individually but that they cannot analyze in an integrated way, as a whole." It represents the absolute opposite of how most traditional curricula are structured, with their emphasis on isolated subjects and content knowledge.

Among the most difficult-to-automate tasks or professions is entrepreneurship.

In fact, some have gone so far as to claim that in the future, everyone will be an entrepreneur. Yet traditionally, initiative has been something students show in spite of, or in addition to, their schoolwork—for most, developing entrepreneurial skills has been relegated to extracurricular activities. It needs to be at the core of our curricula, not a supplement to them. At its heart, teaching entrepreneurship is about teaching our youth to solve complex problems with resilience, to become global leaders, and to tackle the grand challenges facing our species.

Finally, in an increasingly globalized world, there is a need for more workers with cultural agility—the ability to build relationships and work effectively across different cultural contexts and norms.

One of the major trends today is the rise of the contingent workforce. An increasing percentage of full-time employees now work remotely via the cloud, and multinational corporations have teams collaborating from offices across the planet. Collaboration across online networks requires a skillset of its own. As education expert Tony Wagner points out, within these digital contexts, leadership is no longer about commanding with top-down authority, but rather about leading by influence.

An Emphasis on Creativity
The framework also puts an emphasis on experiential or project-based learning, wherein the heart of the student experience is not lectures or exams but solving real-life problems and learning by doing, creating, and executing. Unsurprisingly, humans continue to outdo machines when it comes to innovating and pushing intellectual, imaginative, and creative boundaries, making jobs involving these skills the hardest to automate.

In fact, technological trends are giving rise to what many thought leaders refer to as the imagination economy. This is defined as “an economy where intuitive and creative thinking create economic value, after logical and rational thinking have been outsourced to other economies.” Consequently, we need to develop our students’ creative abilities to ensure their success against machines.

In its simplest form, creativity represents the ability to imagine radical ideas and then go about executing them in reality.

In many ways, we are already living in our creative imaginations. Consider this: every invention or human construct—whether it be the spaceship, an architectural wonder, or a device like an iPhone—once existed as a mere idea, imagined in someone’s mind. The world we have designed and built around us is an extension of our imaginations and is only possible because of our creativity. Creativity has played a powerful role in human progress—now imagine what the outcomes would be if we tapped into every young mind’s creative potential.

The Need for a Radical Overhaul
What is clear from the recommendations of Aoun and many other leading thinkers in this space is that an effective 21st-century education system must be radically different from the systems we currently have in place. There is a dramatic contrast between these future-oriented frameworks and our traditional, industrial-era, cookie-cutter education systems.

It’s time for a change, and incremental changes or subtle improvements are no longer enough. What we need to see are more moonshots and disruption in the education sector. In a world of exponential growth and accelerating change, it is never too soon for a much-needed dramatic overhaul.

Image Credit: Besjunior / Shutterstock.com