
#433799 The First Novel Written by AI Is ...

Last year, a novelist went on a road trip across the USA. The trip was an attempt to emulate Jack Kerouac—to go out on the road and find something essential to write about in the experience. There is, however, a key difference between this writer and anyone else talking your ear off in the bar. This writer is just a microphone, a GPS, and a camera hooked up to a laptop and a whole bunch of linear algebra.

People who are optimistic that artificial intelligence and machine learning won’t put us all out of a job say that human ingenuity and creativity will be difficult to imitate. The classic argument is that, just as machines freed us from repetitive manual tasks, machine learning will free us from repetitive intellectual tasks.

This leaves us free to spend more time on the rewarding aspects of our work, pursuing creative hobbies, spending time with loved ones, and generally being human.

In this worldview, creative works like a great novel or symphony, and the emotions they evoke, cannot be reduced to lines of code. Humans retain a dimension of superiority over algorithms.

But is creativity a fundamentally human phenomenon? Or can it be learned by machines?

And if they learn to understand us better than we understand ourselves, could the great AI novel—tailored, of course, to your own predispositions in fiction—be the best you’ll ever read?

Maybe Not a Beach Read
This is the futurist’s view, of course. The reality, as the jury-rigged contraption in Ross Goodwin’s Cadillac for that road trip can attest, is some way off.

“This is very much an imperfect document, a rapid prototyping project. The output isn’t perfect. I don’t think it’s a human novel, or anywhere near it,” Goodwin said of the novel that his machine created. 1 The Road is currently marketed as the first novel written by AI.

Once the neural network has been trained, it can generate any length of text that the author desires, either at random or working from a specific seed word or phrase. Goodwin used the sights and sounds of the road trip to provide these seeds: the novel is written one sentence at a time, based on images, locations, dialogue from the microphone, and even the computer’s own internal clock.
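For a sense of how that seeding works, here is a deliberately tiny, purely illustrative sketch in Python. It is not Goodwin’s system (which used a trained neural network over a large corpus); it simply builds a toy character-level model from a single sentence and generates text one character at a time from a seed phrase.

```python
# Toy illustration of seeded, character-by-character generation.
# Not Goodwin's code: the "model" here is just 4-gram character counts.
import random
from collections import defaultdict

corpus = "it was nine seventeen in the morning and the house was heavy "
order = 4  # characters of context
counts = defaultdict(lambda: defaultdict(int))

for i in range(len(corpus) - order):
    context, nxt = corpus[i:i + order], corpus[i + order]
    counts[context][nxt] += 1

def generate(seed, length=120):
    out = seed
    for _ in range(length):
        options = counts.get(out[-order:])
        if not options:  # unseen context: restart from a random known one
            options = counts[random.choice(list(counts))]
        chars, weights = zip(*options.items())
        out += random.choices(chars, weights=weights)[0]
    return out

print(generate("it was nine"))
```

In Goodwin’s setup, the seed for each sentence came from the sensors: an image from the camera, a location entry, a snippet of overheard dialogue, or a timestamp.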

The results are… mixed.

The novel begins suitably enough, quoting the time: “It was nine seventeen in the morning, and the house was heavy.” Descriptions of locations begin plausibly, drawing on the Foursquare dataset fed into the algorithm, but rapidly veer off into the weeds and become surreal. While experimentation in literature is a wonderful thing, repeatedly quoting longitude and latitude coordinates verbatim is unlikely to win anyone the Booker Prize.

Data In, Art Out?
Neural networks as creative agents have some advantages. They excel at being trained on large datasets, identifying the patterns in those datasets, and producing output that follows those same rules. Music inspired by or written by AI has become a growing subgenre—there’s even a pop album by human-machine collaborators called the Songularity.

A neural network can “listen to” all of Bach and Mozart in hours, and train itself on the works of Shakespeare to produce passable pseudo-Bard. The idea of artificial creativity has become so widespread that there’s even a meme format about forcibly training neural network ‘bots’ on human writing samples, with hilarious consequences—although the best joke was undoubtedly human in origin.

The AI that roamed from New York to New Orleans was an LSTM (long short-term memory) neural net. In an LSTM, the network’s memory cells are preserved by default; at each timestep, gates decide which small parts of that stored state to “forget” and what new information to “learn,” rather than overwriting the cells entirely.

The LSTM architecture performs better than previous recurrent neural networks at tasks such as handwriting and speech recognition. The neural net—and its programmer—looked further in search of literary influences, ingesting 60 million words (360 MB) of raw literature according to Goodwin’s recipe: one third poetry, one third science fiction, and one third “bleak” literature.
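For readers curious what such a model looks like in code, here is a minimal character-level LSTM sketch in PyTorch. The layer sizes, vocabulary handling, and training details are illustrative assumptions, not Goodwin’s actual configuration.

```python
# A hedged sketch of a character-level LSTM text model (illustrative sizes).
import torch
import torch.nn as nn

class CharLSTM(nn.Module):
    def __init__(self, vocab_size, embed_dim=64, hidden_dim=512, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, x, state=None):
        # x: (batch, seq_len) character indices
        emb = self.embed(x)
        out, state = self.lstm(emb, state)  # gated cell state carries memory between timesteps
        return self.head(out), state        # logits over the next character at each position

model = CharLSTM(vocab_size=128)
logits, _ = model(torch.randint(0, 128, (1, 32)))  # shape: (1, 32, 128)
```

Training amounts to showing the network the corpus one chunk at a time and penalizing it (with cross-entropy loss) whenever it predicts the wrong next character.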

In this way, Goodwin has some creative control over the project; the source material influences the machine’s vocabulary and sentence structuring, and hence the tone of the piece.

The Thoughts Beneath the Words
The problem with artificially intelligent novelists is the same problem with conversational artificial intelligence that computer scientists have been trying to solve since Turing’s day. The machines can detect and reproduce complex patterns increasingly better than humans can, but they have no understanding of what these patterns mean.

Goodwin’s neural network spits out sentences one letter at a time, on a tiny printer hooked up to the laptop. Statistical associations such as those tracked by neural nets can form words from letters, and sentences from words, but they know nothing of character or plot.

When you talk to a chatbot, the code has no real understanding of what’s been said before, and there is no dataset large enough to train it on all of the billions of possible conversations.

Unless restricted to a predetermined set of options, it loses the thread of the conversation after a reply or two. In a similar way, the creative neural nets have no real grasp of what they’re writing, and no way to produce anything with any overarching coherence or narrative.

Goodwin’s experiment is an attempt to add some coherent backbone to the AI “novel” by repeatedly grounding it with stimuli from the cameras or microphones—the thematic links and narrative provided by the American landscape the neural network drives through.

Goodwin feels that this approach (the car itself moving through the landscape, as if a character) borrows some continuity and coherence from the journey itself. “Coherent prose is the holy grail of natural-language generation—feeling that I had somehow solved a small part of the problem was exhilarating. And I do think it makes a point about language in time that’s unexpected and interesting.”

AI Is Still No Kerouac
A coherent tone and semantic “style” might be enough to produce some vaguely convincing teenage poetry, as Google did, and experimental fiction that uses neural networks can have intriguing results. But wading through the surreal AI prose of this era, searching for some meaning or motif beyond novelty value, can be a frustrating experience.

Maybe machines can learn the complexities of the human heart and brain, or how to write evocative or entertaining prose. But they’re a long way off, and somehow “more layers!” or a bigger corpus of data doesn’t feel like enough to bridge that gulf.

Real attempts by machines to write fiction have so far been broadly incoherent, but with flashes of poetry—dreamlike, hallucinatory ramblings.

Neural networks might not be capable of writing intricately plotted works with charm and wit, like Dickens or Dostoevsky, but there’s still an eeriness to trying to decipher the surreal, Finnegans Wake mish-mash.

You might see, in the odd line, the flickering ghost of something like consciousness, a deeper understanding. Or you might just see fragments of meaning thrown into a neural network blender, full of hype and fury, obeying rules in an occasionally striking way, but ultimately signifying nothing. In that sense, at least, the RNN’s grappling with metaphor feels like a metaphor for the hype surrounding the latest AI summer as a whole.

Or, as the human author of On The Road put it: “You guys are going somewhere or just going?”

Image Credit: eurobanks / Shutterstock.com

#433696 3 Big Ways Tech Is Disrupting Global ...

Disruptive business models are often powered by alternative financing. In Part 1 of this series, I discussed how mobile is redefining money and banking and shared some of the dramatic transformations in the global remittance infrastructure.

In this article, we’ll discuss:

Peer-to-peer lending
AI financial advisors and robo traders
Seamless transactions

Let’s dive right back in…

Decentralized Lending = Democratized Access to Finances
Peer-to-peer (P2P) lending is an age-old practice, traditionally with high risk and extreme locality. Now, the P2P funding model is being digitized and delocalized, bringing lending online and across borders.

Zopa, the first official crowdlending platform, arrived in the United Kingdom in 2004. Since then, it has facilitated over 3 billion euros ($3.5 billion USD) in consumer loans.

Person-to-business crowdlending took off, again in the U.K., in 2005 with Funding Circle, now with over 5 billion euros (~$5.8 billion USD) of capital loaned to small businesses around the world.

Crowdlending next took off in the US in 2006, with platforms like Prosper and Lending Club. The US crowdlending industry has boomed to $21 billion across 515,000 loans.

Let’s take a step back… to a time before banks, when lending took place between trusted neighbors in small villages across the globe. Lending started as peer-to-peer transactions.

As villages turned into towns, towns turned into cities, and cities turned into sprawling metropolises, neighborly trust and the ability to communicate across urban landscapes broke down. That’s where banks and other financial institutions came into play—to add trust back into the lending equation.

With crowdlending, we are evidently returning to this pre-centralized-banking model of loans, moving away from cumbersome intermediaries and the high fees, regulations, and extra complexity they bring.

Fueled by the permeation of the internet, P2P lending took on a new form as ‘crowdlending’ in the early 2000s. Now, as blockchain and artificial intelligence arrive on the digital scene, P2P lending platforms are being overhauled with transparency, accountability, reliability, and immutability.

Artificial Intelligence Micro Lending & Credit Scores
We are beginning to augment our quantitative decision-making with neural networks that process borrowers’ financial data to determine their financial ‘fate’ (or, as some call it, their credit score). Companies like Smart Finance Group (backed by Kai-Fu Lee and Sinovation Ventures) are using artificial intelligence to minimize default rates for tens of millions of microloans.

Smart Finance is fueled by users’ personal data, particularly smartphone data and usage behavior. Users are required to give Smart Finance access to their smartphone data, so that Smart Finance’s artificial intelligence engine can generate a credit score from the personal information.
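Smart Finance’s model is proprietary, but the underlying idea (score default risk from behavioral features, then map that risk onto a credit score) can be sketched generically. The feature names and numbers below are invented for illustration.

```python
# Generic, illustrative default-risk scoring; not Smart Finance's model.
# Columns: [avg_daily_screen_hours, num_contacts, pct_calls_after_midnight, months_on_device]
import numpy as np
from sklearn.linear_model import LogisticRegression

X_train = np.array([
    [2.1, 310, 0.02, 36],
    [6.5,  45, 0.30,  3],
    [3.0, 150, 0.05, 24],
    [7.2,  20, 0.40,  2],
])
y_train = np.array([0, 1, 0, 1])  # 1 = defaulted on a past microloan

model = LogisticRegression().fit(X_train, y_train)

applicant = np.array([[4.0, 80, 0.10, 12]])
default_prob = model.predict_proba(applicant)[0, 1]
score = int(round((1 - default_prob) * 1000))  # arbitrary 0-1000 scale
print(f"default probability {default_prob:.2f}, score {score}")
```

The production systems presumably differ mainly in scale: far richer features, millions of repayment histories, and continual retraining.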

The benefits of this AI-powered lending platform do not stop at increased loan payback rates; there’s a massive speed increase as well. Smart Finance loans are frequently approved in under eight seconds. As we’ve seen with other artificial intelligence disruptions, data is the new gold.

Digitizing access to P2P loans paves the way for billions of people currently without access to banking to leapfrog the centralized banking system, just as Africa bypassed landline phones and went straight to mobile. Leapfrogging centralized banking and the credit system is exactly what Smart Finance has done for hundreds of millions of people in China.

Blockchain-Backed Crowdlending
As artificial intelligence accesses even the most mundane mobile browsing data to assign credit scores, blockchain technologies, particularly immutable ledgers and smart contracts, are massive disruptors to the archaic banking system, building additional trust and transparency on top of current P2P lending models.

Immutable ledgers provide the necessary transparency for accurate credit and loan defaulting history. Smart contracts executed on these immutable ledgers bring the critical ability to digitally replace cumbersome, expensive third parties (like banks), allowing individual borrowers or businesses to directly connect with willing lenders.

Two of the leading blockchain platforms for P2P lending are ETHLend and SALT Lending.

ETHLend is an Ethereum-based decentralized application aiming to bring transparency and trust to P2P lending through Ethereum network smart contracts.

Secure Automated Lending Technology (SALT) allows cryptocurrency asset holders to use their digital assets as collateral for cash loans, without the need to liquidate their holdings, giving rise to a digital-asset-backed lending market.
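Neither platform’s contract code is reproduced here, but the collateralized-lending logic such a smart contract encodes can be modeled as a toy, in Python rather than an on-chain language, with made-up thresholds:

```python
# Toy model of SALT-style collateralized lending logic (not actual contract code).
from dataclasses import dataclass

@dataclass
class CollateralizedLoan:
    collateral_units: float          # e.g., amount of ETH locked as collateral
    loan_usd: float                  # cash lent against it
    liquidation_ratio: float = 1.25  # collateral must stay >= 125% of the loan

    def check(self, price_usd: float) -> str:
        ratio = (self.collateral_units * price_usd) / self.loan_usd
        if ratio < self.liquidation_ratio:
            return "LIQUIDATE: sell collateral to make the lender whole"
        return f"OK: collateralization at {ratio:.0%}"

loan = CollateralizedLoan(collateral_units=10, loan_usd=2000)
print(loan.check(price_usd=400))  # 200% collateralized
print(loan.check(price_usd=240))  # falls to 120%, below the 125% threshold
```

The point of putting this logic on-chain is that neither party has to trust the other (or a bank) to enforce it; the ledger does.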

While blockchain poses a threat to many of the large, centralized banking institutions, some are taking advantage of the new technology to optimize their internal lending, credit scoring, and collateral operations.

In March 2018, ING and Credit Suisse successfully exchanged 25 million euros using HQLA-X, a blockchain-based collateral lending platform.

HQLA-X runs on the R3 Corda blockchain, a platform designed specifically to help heritage financial and commerce institutions migrate away from their inefficient legacy financial infrastructure.

Blockchain and tokenization are going through their own fintech and regulation shakeup right now. In a future blog, I’ll discuss the various efforts to more readily assure smart contracts, and the disruptive business model of security tokens and the US Securities and Exchange Commission.

Parallels to the Global Abundance of Capital
The abundance of capital being created by the advent of P2P lending is one facet of a broader, unprecedented global abundance of capital.

Initial coin offerings (ICOs) and crowdfunding are taking a strong stand in disrupting the $164 billion venture capital market. The total amount invested in ICOs has risen from $6.6 billion in 2017 to $7.15 billion USD in the first half of 2018. Crowdfunding helped projects raise more than $34 billion in 2017, with experts projecting that global crowdfunding investments will reach $300 billion by 2025.

In the last year alone, using ICOs, over a dozen projects have raised hundreds of millions of dollars in mere hours. Take Filecoin, for example, which raised $257 million in only 30 days; its first $135 million was raised in the first hour. Similarly, the Dragon Coin project (which itself is revolutionizing remittance in high-stakes casinos around the world) raised $320 million in its 30-day public ICO.

Some Important Takeaways…

Technology-backed fundraising and financial services are disrupting the world’s largest financial institutions. Anyone, anywhere, at any time will be able to access the capital they need to pursue their idea.

The speed at which we can go from “I’ve got an idea” to “I run a billion-dollar company” is moving faster than ever.

Following Ray Kurzweil’s Law of Accelerating Returns, the rapid decrease in the time it takes to access capital is intimately linked to (and greatly dependent on) a financial infrastructure (technology, institutions, platforms, and policies) that can adapt and evolve just as rapidly.

This new abundance of capital requires financial decision-making with ever-higher market prediction precision. That’s exactly where artificial intelligence is already playing a massive role.

Artificial Intelligence, Robo Traders, and Financial Advisors
On May 6, 2010, the Dow Jones Industrial Average suddenly collapsed by 998.5 points (about 8 percent, briefly wiping out roughly $1 trillion in market value). The crash lasted over 35 minutes and is now known as the ‘Flash Crash’. While no one knows the specific cause of this 2010 stock market anomaly, experts widely agree that the Flash Crash had to do with algorithmic trading.

With the ability to have instant, trillion-dollar market impacts, algorithmic trading and artificial intelligence are undoubtedly ingrained in how financial markets operate.

In 2017, CNBC.com estimated that 90 percent of daily stock trading volume is executed by machine algorithms, and only 10 percent is carried out directly by humans.

Artificial intelligence and financial management algorithms are no longer available only to top Wall Street players.

Robo-advisor financial management apps, like Wealthfront and Betterment, are rapidly permeating the global market. Wealthfront currently has $9.5 billion in assets under management, and Betterment has $10 billion.
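Their exact methodologies are proprietary, but the core of most robo-advisors is straightforward threshold rebalancing: hold a target portfolio, and trade only when an asset drifts too far from its target weight. The weights and the 5 percent drift band below are illustrative.

```python
# Minimal threshold-rebalancing sketch (illustrative targets and drift band).
def rebalance(holdings, targets, drift_band=0.05):
    """Return dollar trades (positive = buy, negative = sell) restoring target weights."""
    total = sum(holdings.values())
    trades = {}
    for asset, target_weight in targets.items():
        current_weight = holdings.get(asset, 0.0) / total
        if abs(current_weight - target_weight) > drift_band:
            trades[asset] = round(target_weight * total - holdings.get(asset, 0.0), 2)
    return trades

holdings = {"US_stocks": 7000, "intl_stocks": 1500, "bonds": 1500}
targets = {"US_stocks": 0.60, "intl_stocks": 0.25, "bonds": 0.15}
print(rebalance(holdings, targets))  # sell US stocks, buy international
```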

Artificially intelligent financial agents are already helping financial institutions protect your money and fight fraud. A prime application for machine learning is detecting anomalies in your spending and transaction habits and flagging potentially fraudulent transactions.
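As a hedged illustration of how that flagging can work, here is a small anomaly-detection sketch using an isolation forest. The transaction features and amounts are made up; production fraud systems use far richer signals.

```python
# Illustrative transaction anomaly flagging with an isolation forest.
import numpy as np
from sklearn.ensemble import IsolationForest

# Feature columns: [amount_usd, hour_of_day, merchant_distance_km]
history = np.array([
    [12.50,  8, 1.2], [43.00, 12, 3.5], [ 9.99, 18, 0.8],
    [27.40, 19, 2.1], [55.00, 13, 4.0], [15.75,  9, 1.0],
    [33.20, 17, 2.7], [ 8.40,  7, 0.5], [61.00, 20, 3.9],
    [22.10, 11, 1.6],
])

detector = IsolationForest(contamination=0.1, random_state=0).fit(history)

new_transactions = np.array([
    [  18.00, 14,   2.0],  # in line with past behavior
    [2400.00,  3, 900.0],  # large, late-night, far from home
])
print(detector.predict(new_transactions))  # +1 = looks normal, -1 = flag for review
```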

As artificial intelligence continues to exponentially increase in power and capabilities, increasingly powerful trading and financial management bots will come online, finding massive new and previously lost streams of wealth.

How else are artificial intelligence and automation transforming finance?

Disruptive Remittance and Seamless Transactions
When was the last time you paid in cash at a toll booth? How about for a taxi ride?

E-ZPass, the electronic tolling system implemented extensively on the East Coast, has done wonders to reduce traffic congestion and increase traffic flow.

Driving down I-95 on the East Coast of the United States, drivers rarely notice their financial transaction with the state’s tolling agencies. The transactions are seamless.

The Uber app enables me to travel without my wallet. I can forget about payment on my trip, freeing up mental bandwidth and time for higher-priority tasks. The entire process is digitized and, by extension, automated and integrated into Uber’s platform. (Note: this incredible convenience has more than once caused me to accidentally walk out of taxi cabs without paying!)

In January 2018, we saw the first cutting-edge, AI-powered Amazon Go store open its doors in Seattle, Washington. The store marked a new era in remittance and transactions. Gone are the days of carrying credit cards and cash, and gone are the cash registers. And now, on the heels of these early ‘beta-tests’, Amazon is considering opening as many as 3,000 of these cashierless stores by 2023.

Amazon Go stores use AI algorithms that watch various video feeds (from advanced cameras) throughout the store to identify who picks up groceries, exactly what products they select, and how much to charge that person when they walk out of the store. It’s a grab-and-go experience.
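Amazon hasn’t published how Go’s vision system works, but the bookkeeping at the end of such a pipeline is easy to sketch: assume the cameras have already decided who picked up (or put back) which item, and settle the running cart at the exit. Everything below, including the catalog and prices, is hypothetical.

```python
# Toy "just walk out" bookkeeping; the hard computer-vision part is assumed away.
from collections import defaultdict

PRICES = {"sandwich": 5.99, "kombucha": 3.49, "salad": 7.25}  # hypothetical catalog
carts = defaultdict(lambda: defaultdict(int))                 # shopper_id -> {item: qty}

def on_event(shopper_id, action, item):
    if action == "pick":
        carts[shopper_id][item] += 1
    elif action == "put_back":
        carts[shopper_id][item] = max(0, carts[shopper_id][item] - 1)

def on_exit(shopper_id):
    items = carts.pop(shopper_id, {})
    return round(sum(PRICES[item] * qty for item, qty in items.items()), 2)

on_event("shopper_42", "pick", "sandwich")
on_event("shopper_42", "pick", "kombucha")
on_event("shopper_42", "put_back", "kombucha")
print(on_exit("shopper_42"))  # 5.99 charged to the shopper's account
```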

Let’s extrapolate the notion of seamless, integrated payment systems from Amazon Go and Uber’s removal of post-ride payment to the rest of our day-to-day experience.

Imagine this near future:

As you near the front door of your home, your AI assistant summons a self-driving Uber that takes you to the Hyperloop station (after all, you work in L.A. but live in San Francisco).

At the station, you board your pod, without noticing that your ticket purchase was settled via a wireless payment checkpoint.

After work, you stop at the Amazon Go and pick up dinner. Your virtual AI assistant passes your Amazon account information to the store’s payment checkpoint, while the store’s cameras and sensors track you and your cart and charge you auto-magically.

At home, unbeknownst to you, your AI has already restocked your fridge and pantry with whatever items you failed to pick up at the Amazon Go.

Once we remove the actively transacting aspect of finance, what else becomes possible?

Top Conclusions
Extraordinary transformations are happening in the finance world. We’ve only scratched the surface of the fintech revolution. All of these transformative financial technologies require high-fidelity assurance, robust insurance, and a mechanism for storing value.

I’ll dive into each of these other facets of financial services in future articles.

For now, thanks to the global communication networks being deployed via 5G, Alphabet’s Loon, SpaceX’s Starlink, and OneWeb, nearly all 8 billion people on Earth will be online by 2024.

Once connected, these new minds, entrepreneurs, and customers need access to money and financial services to meaningfully participate in the world economy.

By connecting lenders and borrowers around the globe, decentralized lending drives down global interest rates, increases global financial market participation, and extends economic opportunity to the billions of people who are about to come online.

We’re living in the most abundant time in human history, and fintech is just getting started.

Join Me
Abundance Digital Online Community: I have created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance Digital. This is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Novikov Aleksey / Shutterstock.com

#432880 Google’s Duplex Raises the Question: ...

By now, you’ve probably seen Google’s new Duplex software, which promises to call people on your behalf to book appointments for haircuts and the like. As yet, it only exists in demo form, but already it seems like Google has made a big stride towards capturing a market that plenty of companies have had their eye on for quite some time. This software is impressive, but it raises questions.

Many of you will be familiar with the stilted, robotic conversations you can have with early chatbots that are, essentially, glorified menus. Instead of pressing 1 to confirm or 2 to re-enter, some of these bots would allow for simple commands like “Yes” or “No,” replacing the buttons with limited ability to recognize a few words. Using them was often a far more frustrating experience than attempting to use a menu—there are few things more irritating than a robot saying, “Sorry, your response was not recognized.”
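To make the “glorified menu” point concrete, here is a deliberately crude sketch of that style of bot (not any particular product’s code): a handful of keywords and a canned apology for everything else.

```python
# A "glorified menu" bot: keyword matching with no memory and no understanding.
RESPONSES = {
    "yes": "Great, your appointment is confirmed.",
    "no": "Okay, let's start over. Press 1 to confirm or 2 to re-enter.",
    "cancel": "Your appointment has been cancelled.",
}

def menu_bot(utterance: str) -> str:
    for keyword, reply in RESPONSES.items():
        if keyword in utterance.lower():
            return reply
    return "Sorry, your response was not recognized."

print(menu_bot("Yes, that works for me"))          # matched a keyword
print(menu_bot("Actually, could we do Tuesday?"))  # anything off-script fails
```

Substring matching like this also misfires ("know" contains "no"), which is exactly the brittleness that made early bots so frustrating.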

Video: Google Duplex scheduling a hair salon appointment.

Video: Google Duplex calling a restaurant.

Even getting the response recognized is hard enough. After all, there are countless different nuances and accents to baffle voice recognition software, and endless turns of phrase that amount to saying the same thing that can confound natural language processing (NLP), especially if you like your phrasing quirky.

You may think that standard customer-service type conversations all travel the same route, using similar words and phrasing. But when there are over 80,000 ways to order coffee, and making a mistake is frowned upon, even simple tasks require high accuracy over a huge dataset.
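The 80,000 figure becomes plausible once you multiply out the choices. The menu options below are assumptions rather than any real chain’s offerings, but they show how quickly independent choices compound.

```python
# Illustrative arithmetic: independent menu choices multiply fast.
options = {
    "size": 3,
    "drink": 7,
    "milk": 6,
    "espresso_shots": 4,
    "caffeine": 3,                 # regular / half-caf / decaf
    "whipped_cream": 2,
    "syrup_combination": 2 ** 5,   # any subset of 5 syrups
}

total = 1
for n in options.values():
    total *= n
print(total)  # 96,768 distinct orders from a fairly modest menu
```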

Advances in audio processing, neural networks, and NLP, as well as raw computing power, have meant that basic recognition of what someone is trying to say is less of an issue. SoundHound’s virtual assistant prides itself on being able to process complicated requests (perhaps needlessly complicated).

The deeper issue, as with all attempts to develop conversational machines, is one of understanding context. There are so many ways a conversation can go that attempting to construct a conversation two or three layers deep quickly runs into problems. Multiply the thousands of things people might say by the thousands they might say next, and the combinatorics of the challenge runs away from most chatbots, leaving them as either glorified menus, gimmicks, or rather bizarre to talk to.

Yet Google, which surely remembers from Glass the risk of premature debuts for technology, especially the kind that asks you to rethink how you interact with or trust in software, must have faith in Duplex to show it on the world stage. We know that startups like Semantic Machines and x.ai have received serious funding to perform very similar functions, using natural-language conversations to perform computing tasks, schedule meetings, book hotels, or purchase items.

It’s no great leap to imagine Google will soon do the same, bringing us closer to a world of onboard computing, where Lens labels the world around us and its assistant arranges it for us (all the while gathering more and more data it can convert into personalized ads). The early demos showed some clever tricks for keeping the conversation within a fairly narrow realm where the AI should be comfortable and competent, and the blog post that accompanied the release shows just how much effort has gone into the technology.

Yet given the privacy and ethics funk the tech industry finds itself in, and people’s general unease about AI, the main reaction to Duplex’s impressive demo was concern. The voice sounded too natural, bringing to mind Lyrebird and their warnings of deepfakes. You might trust “Do the Right Thing” Google with this technology, but it could usher in an era when automated robo-callers are far more convincing.

A more human-like voice may sound like a perfectly innocuous improvement, but the fact that the assistant interjects naturalistic “umm” and “mm-hm” responses to more perfectly mimic a human rubbed a lot of people the wrong way. This wasn’t just a voice assistant trying to sound less grinding and robotic; it was actively trying to deceive people into thinking they were talking to a human.

Google is running the risk of trying to get to conversational AI by going straight through the uncanny valley.

“Google’s experiments do appear to have been designed to deceive,” said Dr. Thomas King of the Oxford Internet Institute’s Digital Ethics Lab, according to TechCrunch. “Their main hypothesis was ‘can you distinguish this from a real person?’ In this case it’s unclear why their hypothesis was about deception and not the user experience… there should be some kind of mechanism there to let people know what it is they are speaking to.”

From Google’s perspective, being able to say “90 percent of callers can’t tell the difference between this and a human personal assistant” is an excellent marketing ploy, even though statistics about how many interactions are successful might be more relevant.

In fact, Duplex runs contrary to pretty much every major recommendation about ethics for the use of robotics or artificial intelligence, not to mention certain eavesdropping laws. Transparency is key to holding machines (and the people who design them) accountable, especially when it comes to decision-making.

Then there are the more subtle social issues. One prominent effect social media has had is to allow people to silo themselves; in echo chambers of like-minded individuals, it’s hard to see how other opinions exist. Technology exacerbates this by removing the evolutionary cues that go along with face-to-face interaction. Confronted with a pair of human eyes, people are more generous. Confronted with a Twitter avatar or a Facebook interface, people hurl abuse and criticism they’d never dream of using in a public setting.

Now that we can use technology to interact with ever fewer people, will it change us? Is it fair to offload the burden of dealing with a robot onto the poor human at the other end of the line, who might have to deal with dozens of such calls a day? Google has said that if the AI is in trouble, it will put you through to a human, which might help save receptionists from the hell of trying to explain a concept to dozens of dumbfounded AI assistants all day. But there’s always the risk that failures will be blamed on the person and not the machine.

As AI advances, could we end up treating the dwindling number of people in these “customer-facing” roles as the buggiest part of a fully automatic service? Will people start accusing each other of being robots on the phone, as well as on Twitter?

Google has provided plenty of reassurances about how the system will be used. They have said they will ensure that the system is identified, and it’s hardly difficult to resolve this problem; a slight change in the script from their demo would do it. For now, consumers will likely appreciate moves that make it clear whether the “intelligent agents” that make major decisions for us, that we interact with daily, and that hide behind social media avatars or phone numbers are real or artificial.

Image Credit: Besjunior / Shutterstock.com

#432572 Robots Can Swim, Fetch, Lift, and Dance. ...

Robotics has come a long way in the past few years. Robots can now fetch items from specific spots in massive warehouses, swim through the ocean to study marine life, and lift 200 times their own weight. They can even perform synchronized dance routines.

But the really big question is—can robots put together an Ikea chair?

A team of engineers from Nanyang Technological University in Singapore decided to find out, detailing their work in a paper published last week in the journal Science Robotics. The team took industrial robot arms and equipped them with parallel grippers, force-detecting sensors, and 3D cameras, and wrote software enabling the souped-up bots to tackle chair assembly. The robots’ starting point was a set of chair parts randomly scattered within reach.

As impressive as the above-mentioned robotic capabilities are, it’s worth noting that they’re mostly limited to a single skill. Putting together furniture, on the other hand, requires using and precisely coordinating multiple skills, including force control, visual localization, hand-eye coordination, and the patience to read each step of the manual without rushing through it and messing everything up.

Indeed, Ikea furniture, while meant to be simple and user-friendly, has left even the best of us scratching our heads and holding a spare oddly-shaped piece of wood as we stare at the desk or bed frame we just put together—or, for the less even-tempered among us, throwing said piece of wood across the room.

It’s a good thing robots don’t have tempers, because it took a few tries for the bots to get the chair assembly right.

Practice makes perfect, though (or in this case, rewriting code makes perfect), and these bots didn’t give up so easily. They had to hone three different skills: identifying which part was which among the scattered, differently shaped pieces of wood, coordinating their movements to put those pieces in the right place, and knowing how much force to use in various steps of the process (i.e., more force is needed to connect two pieces than to pick up one piece).
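The paper’s actual planner scripted each motion precisely, down to the millimeter; as a rough illustration of how those three skills slot together for a single step, here is a schematic toy in Python, with all sensor values simulated.

```python
# Schematic toy of one insertion step (not the NTU team's code; values simulated).
def locate_hole(detections, part_id):
    """'Visual localization': look up the part's estimated position (metres)."""
    return detections[part_id]

def move_to(current, target, step=0.01):
    """'Hand-eye coordination': approach the target in fixed 1 cm increments."""
    n_steps = round(abs(target - current) / step)
    return target, n_steps

def insert(force_readings, max_force=30.0, seat_depth=0.02, advance=0.005):
    """'Force control': advance until seated; abort if force exceeds the limit."""
    depth = 0.0
    for force in force_readings:   # simulated force-sensor readings (newtons)
        if force > max_force:
            return False, depth    # jammed: back off and let the planner retry
        depth += advance
        if depth >= seat_depth:
            return True, depth
    return False, depth

detections = {"dowel_hole_3": 0.42}  # pretend camera output
target, n_steps = move_to(0.10, locate_hole(detections, "dowel_hole_3"))
seated, depth = insert(force_readings=[5.0, 8.0, 12.0, 14.0])
print(target, n_steps, seated, round(depth, 3))
```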

A few tries later, the bots were able to assemble the chair from start to finish in about nine minutes.

On the whole, nicely done. But before we applaud the robots’ success too loudly, it’s important to note that they didn’t autonomously assemble the chair. Rather, each step of the process was planned and coded by engineers, down to the millimeter.

However, the team believes this closely-guided chair assembly was just a first step, and they see a not-so-distant future where combining artificial intelligence with advanced robotic capabilities could produce smart bots that would learn to assemble furniture and do other complex tasks on their own.

Future applications mentioned in the paper include electronics and aircraft manufacturing, logistics, and other high-mix, low-volume sectors.

Image Credit: Francisco Suárez-Ruiz and Quang-Cuong Pham/Nanyang Technological University

#432271 Your Shopping Experience Is on the Verge ...

Exponential technologies (AI, VR, 3D printing, and networks) are radically reshaping traditional retail.

E-commerce giants (Amazon, Walmart, Alibaba) are digitizing the retail industry, riding the exponential growth of computation.

Many brick-and-mortar stores have already gone bankrupt, or migrated their operations online.

Massive change is occurring in this arena.

For those “real-life stores” that survive, an evolution is taking place from a product-centric mentality to an experience-based business model by leveraging AI, VR/AR, and 3D printing.

Let’s dive in.

E-Commerce Trends
Last year, 3.8 billion people were connected online. By 2024, thanks to 5G, stratospheric and space-based satellites, we will grow to 8 billion people online, each with megabit to gigabit connection speeds.

These 4.2 billion new digital consumers will begin buying things online, a potential bonanza for the e-commerce world.

At the same time, entrepreneurs seeking to service these four-billion-plus new consumers can now skip the costly steps of procuring retail space and hiring sales clerks.

Today, thanks to global connectivity, contract production, and turnkey pack-and-ship logistics, an entrepreneur can go from an idea to building and scaling a multimillion-dollar business from anywhere in the world in record time.

And while e-commerce sales have been exploding (growing from $34 billion in Q1 2009 to $115 billion in Q3 2017), e-commerce only accounted for about 10 percent of total retail sales in 2017.

In 2016, global online sales totaled $1.8 trillion. Remarkably, this $1.8 trillion was spent by only 1.5 billion people — a mere 20 percent of Earth’s population that year.

There’s plenty more room for digital disruption.

AI and the Retail Experience
For the business owner, AI will demonetize e-commerce operations with automated customer service, ultra-accurate supply chain modeling, marketing content generation, and advertising.

In the case of customer service, imagine an AI that is trained by every customer interaction, learns how to answer any consumer question perfectly, and offers feedback to product designers and company owners as a result.

Facebook’s handover protocol allows live customer service representatives and language-learning bots to work within the same Facebook Messenger conversation.

Taking it one step further, imagine an AI that is empathic to a consumer’s frustration, that can take any amount of abuse and come back with a smile every time. As one example, meet Ava. “Ava is a virtual customer service agent, to bring a whole new level of personalization and brand experience to that customer experience on a day-to-day basis,” says Greg Cross of Soul Machines, the New Zealand company that created Ava.

Predictive modeling and machine learning are also optimizing product ordering and the supply chain process. For example, Skubana, a platform for online sellers, leverages data analytics to provide entrepreneurs constant product performance feedback and maintain optimal warehouse stock levels.
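Skubana’s analytics are proprietary; the classic reorder-point calculation below, with made-up sales figures, illustrates the kind of logic such tools build on: reorder when stock falls to the demand expected during the supplier’s lead time plus a buffer of safety stock.

```python
# Classic reorder-point calculation with illustrative numbers.
import statistics

def reorder_point(daily_sales, lead_time_days, service_factor=1.65):
    """Stock level that should trigger a new purchase order."""
    avg_daily = statistics.mean(daily_sales)
    daily_std = statistics.stdev(daily_sales)
    safety_stock = service_factor * daily_std * lead_time_days ** 0.5
    return round(avg_daily * lead_time_days + safety_stock)

last_30_days = [12, 9, 15, 11, 14, 10, 13, 12, 16, 8, 11, 13, 12, 14, 10,
                9, 15, 12, 11, 13, 14, 10, 12, 16, 9, 11, 13, 12, 14, 10]
print(reorder_point(last_30_days, lead_time_days=7))  # units on hand that trigger a reorder
```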

Blockchain is set to follow suit in the retail space. ShipChain and Ambrosus plan to introduce transparency and trust into shipping and production, further reducing costs for entrepreneurs and consumers.

Meanwhile, for consumers, personal shopping assistants are shifting the psychology of the standard shopping experience.

Amazon’s Alexa marks an important user interface moment in this regard.

Alexa is in her infancy with voice search and vocal controls for smart homes. Already, Amazon’s Alexa users, on average, spent more on Amazon.com when purchasing than standard Amazon Prime customers — $1,700 versus $1,400.

As I’ve discussed in previous posts, the future combination of virtual reality shopping, coupled with a personalized, AI-enabled fashion advisor will make finding, selecting, and ordering products fast and painless for consumers.

But let’s take it one step further.

Imagine a future in which your personal AI shopper knows your desires better than you do. Possible? I think so. After all, our future AIs will follow us, watch us, and observe our interactions — including how long we glance at objects, our facial expressions, and much more.

In this future, shopping might be as easy as saying, “Buy me a new outfit for Saturday night’s dinner party,” followed by a surprise-and-delight moment in which the outfit that arrives is perfect.

In this future world of AI-enabled shopping, one of the most disruptive implications is that advertising is now dead.

In a world where an AI is buying my stuff, and I’m no longer in the decision loop, why would a big brand ever waste money on a Super Bowl advertisement?

The dematerialization, demonetization, and democratization of personalized shopping has only just begun.

The In-Store Experience: Experiential Retailing
In 2017, over 6,700 brick-and-mortar retail stores closed their doors, surpassing the former record year for store closures set in 2008 during the financial crisis. Regardless, business is still booming.

As shoppers seek the convenience of online shopping, brick-and-mortar stores are tapping into the power of the experience economy.

Rather than focusing on the practicality of the products they buy, consumers are instead seeking out the experience of going shopping.

The Internet of Things, artificial intelligence, and computation are exponentially improving the in-person consumer experience.

As AI dominates curated online shopping, AI and data analytics tools are also empowering real-life store owners to optimize staffing, marketing strategies, customer relationship management, and inventory logistics.

In the short term, retail store locations will serve as the next big user interface for production 3D printing (custom 3D-printed clothes at the Ministry of Supply), virtual and augmented reality (DIY skills clinics), and the Internet of Things (checkout-less shopping).

In the long term, we’ll see how our desire for enhanced productivity and seamless consumption balances with our preference for enjoyable real-life consumer experiences — all of which will be driven by exponential technologies.

One thing is certain: the nominal shopping experience is on the verge of a major transformation.

Implications
The convergence of exponential technologies has already revamped how and where we shop, how we use our time, and how much we pay.

Twenty years ago, Amazon showed us how the web could offer each of us the long tail of available reading material, and since then, the world of e-commerce has exploded.

And yet we still haven’t experienced the cost savings coming our way from drone delivery, the Internet of Things, tokenized ecosystems, the impact of truly powerful AI, or even the other major applications for 3D printing and AR/VR.

Perhaps nothing will be more transformed than today’s $20 trillion retail sector.

Hold on, stay tuned, and get your AI-enabled cryptocurrency ready.

Join Me
Abundance Digital Online Community: I’ve created a digital/online community of bold, abundance-minded entrepreneurs called Abundance Digital.

Abundance Digital is my ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Zapp2Photo / Shutterstock.com