Tag Archives: since
#432271 Your Shopping Experience Is on the Verge ...
Exponential technologies (AI, VR, 3D printing, and networks) are radically reshaping traditional retail.
E-commerce giants (Amazon, Walmart, Alibaba) are digitizing the retail industry, riding the exponential growth of computation.
Many brick-and-mortar stores have already gone bankrupt, or migrated their operations online.
Massive change is occurring in this arena.
For those “real-life stores” that survive, an evolution is taking place from a product-centric mentality to an experience-based business model by leveraging AI, VR/AR, and 3D printing.
Let’s dive in.
E-Commerce Trends
Last year, 3.8 billion people were connected online. By 2024, thanks to 5G, stratospheric and space-based satellites, we will grow to 8 billion people online, each with megabit to gigabit connection speeds.
These 4.2 billion new digital consumers will begin buying things online, a potential bonanza for the e-commerce world.
At the same time, entrepreneurs seeking to service these four-billion-plus new consumers can now skip the costly steps of procuring retail space and hiring sales clerks.
Today, thanks to global connectivity, contract production, and turnkey pack-and-ship logistics, an entrepreneur can go from an idea to building and scaling a multimillion-dollar business from anywhere in the world in record time.
And while e-commerce sales have been exploding (growing from $34 billion in Q1 2009 to $115 billion in Q3 2017), e-commerce only accounted for about 10 percent of total retail sales in 2017.
In 2016, global online sales totaled $1.8 trillion. Remarkably, this $1.8 trillion was spent by only 1.5 billion people — a mere 20 percent of the global population that year.
There’s plenty more room for digital disruption.
AI and the Retail Experience
For the business owner, AI will demonetize e-commerce operations with automated customer service, ultra-accurate supply chain modeling, marketing content generation, and advertising.
In the case of customer service, imagine an AI that is trained by every customer interaction, learns how to answer any consumer question perfectly, and offers feedback to product designers and company owners as a result.
Facebook’s handover protocol allows live customer service representatives and language-learning bots to work within the same Facebook Messenger conversation.
Taking it one step further, imagine an AI that is empathetic to a consumer’s frustration, that can take any amount of abuse and come back with a smile every time. As one example, meet Ava. “Ava is a virtual customer service agent, to bring a whole new level of personalization and brand experience to that customer experience on a day-to-day basis,” says Greg Cross, CEO of Ava’s creator, the New Zealand company Soul Machines.
Predictive modeling and machine learning are also optimizing product ordering and the supply chain process. For example, Skubana, a platform for online sellers, leverages data analytics to provide entrepreneurs constant product performance feedback and maintain optimal warehouse stock levels.
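To make the inventory side of this concrete, here is a minimal sketch of the classic reorder-point calculation that platforms of this kind automate. This is a textbook formula for illustration, not Skubana’s actual algorithm; all figures are hypothetical.

```python
# Hypothetical sketch (not Skubana's actual algorithm): the classic
# reorder-point formula used to maintain optimal warehouse stock levels.
import math

def reorder_point(daily_demand_mean, daily_demand_std, lead_time_days, z=1.65):
    """Stock level at which a new order should be placed.

    z=1.65 targets roughly a 95% service level under a normal demand model.
    """
    expected_demand = daily_demand_mean * lead_time_days
    safety_stock = z * daily_demand_std * math.sqrt(lead_time_days)
    return math.ceil(expected_demand + safety_stock)

# Example: 40 units/day average demand, std dev of 12, 7-day supplier lead time.
print(reorder_point(40, 12, 7))  # reorder when stock falls to this level
```

The data-analytics value lies in estimating the demand mean and variance per product automatically, so the reorder point tracks real sales rather than a merchant’s guess.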
Blockchain is set to follow suit in the retail space. ShipChain and Ambrosus plan to introduce transparency and trust into shipping and production, further reducing costs for entrepreneurs and consumers.
Meanwhile, for consumers, personal shopping assistants are shifting the psychology of the standard shopping experience.
Amazon’s Alexa marks an important user interface moment in this regard.
Alexa is in her infancy with voice search and vocal controls for smart homes. Already, Alexa users spend more on Amazon.com, on average, than standard Amazon Prime customers: $1,700 versus $1,400.
As I’ve discussed in previous posts, the future combination of virtual reality shopping, coupled with a personalized, AI-enabled fashion advisor will make finding, selecting, and ordering products fast and painless for consumers.
But let’s take it one step further.
Imagine a future in which your personal AI shopper knows your desires better than you do. Possible? I think so. After all, our future AIs will follow us, watch us, and observe our interactions — including how long we glance at objects, our facial expressions, and much more.
In this future, shopping might be as easy as saying, “Buy me a new outfit for Saturday night’s dinner party,” followed by a surprise-and-delight moment in which the outfit that arrives is perfect.
In this future world of AI-enabled shopping, one of the most disruptive implications is the death of advertising as we know it.
In a world where an AI is buying my stuff, and I’m no longer in the decision loop, why would a big brand ever waste money on a Super Bowl advertisement?
The dematerialization, demonetization, and democratization of personalized shopping has only just begun.
The In-Store Experience: Experiential Retailing
In 2017, over 6,700 brick-and-mortar retail stores closed their doors, surpassing the record for store closures set in 2008 during the financial crisis. Even so, retail business overall is still booming.
As shoppers seek the convenience of online shopping, brick-and-mortar stores are tapping into the power of the experience economy.
Rather than focusing on the practicality of the products they buy, consumers are seeking out the experience of going shopping.
The Internet of Things, artificial intelligence, and computation are exponentially improving the in-person consumer experience.
As AI dominates curated online shopping, AI and data analytics tools are also empowering real-life store owners to optimize staffing, marketing strategies, customer relationship management, and inventory logistics.
In the short term, retail store locations will serve as the next big user interface for production 3D printing (custom 3D printed clothes at the Ministry of Supply), virtual and augmented reality (DIY skills clinics), and the Internet of Things (checkout-less shopping).
In the long term, we’ll see how our desire for enhanced productivity and seamless consumption balances with our preference for enjoyable real-life consumer experiences — all of which will be driven by exponential technologies.
One thing is certain: the nominal shopping experience is on the verge of a major transformation.
Implications
The convergence of exponential technologies has already revamped how and where we shop, how we use our time, and how much we pay.
Twenty years ago, Amazon showed us how the web could offer each of us the long tail of available reading material, and since then, the world of e-commerce has exploded.
And yet we still haven’t experienced the cost savings coming our way from drone delivery, the Internet of Things, tokenized ecosystems, the impact of truly powerful AI, or even the other major applications for 3D printing and AR/VR.
Perhaps nothing will be more transformed than today’s $20 trillion retail sector.
Hold on, stay tuned, and get your AI-enabled cryptocurrency ready.
Join Me
Abundance Digital Online Community: I’ve created a digital/online community of bold, abundance-minded entrepreneurs called Abundance Digital.
Abundance Digital is my ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.
Image Credit: Zapp2Photo / Shutterstock.com
#432165 Silicon Valley Is Winning the Race to ...
Henry Ford didn’t invent the motor car. The late 1800s saw a flurry of innovation by hundreds of companies battling to deliver on the promise of fast, efficient, and reasonably priced mechanical transportation. Ford later came to dominate the industry thanks to the development of the moving assembly line.
Today, the sector is poised for another breakthrough with the advent of cars that drive themselves. But unlike the original wave of automobile innovation, the race for supremacy in autonomous vehicles is concentrated among a few corporate giants. So who is set to dominate this time?
We’ve analyzed six companies we think are leading the race to build the first truly driverless car. Three of these—General Motors, Ford, and Volkswagen—come from the existing car industry and need to integrate self-driving technology into their existing fleet of mass-produced vehicles. The other three—Tesla, Uber, and Waymo (owned by Google’s parent company, Alphabet)—are newcomers from the digital technology world of Silicon Valley and have to build a mass manufacturing capability.
While it’s impossible to know all the developments at any given time, we have tracked investments, strategic partnerships, and official press releases to learn more about what’s happening behind the scenes. The car industry typically rates self-driving technology on a scale from Level 0 (no automation) to Level 5 (full automation). We’ve assessed where each company is now and estimated how far they are from reaching the top level. Here’s how we think each player is performing.
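For reference, the 0-to-5 scale the article uses comes from the SAE J3016 standard. A minimal sketch of that taxonomy as a lookup table (wording paraphrased, not the standard’s exact definitions):

```python
# SAE J3016 driving-automation levels, paraphrased; the scale referenced above.
SAE_LEVELS = {
    0: "No automation: the human driver does everything.",
    1: "Driver assistance: steering or speed is assisted, but not both.",
    2: "Partial automation: steering and speed together; the driver supervises.",
    3: "Conditional automation: the car drives, but a human must take over on request.",
    4: "High automation: fully self-driving, but only under certain conditions.",
    5: "Full automation: self-driving everywhere, in all conditions.",
}

def describe(level):
    """Return a one-line summary of a given automation level."""
    return f"Level {level}: {SAE_LEVELS[level]}"

print(describe(3))
```

The key jump is between Levels 2–3 (a human must remain available) and Levels 4–5 (no human needed within the car’s operating envelope), which is why the companies below cluster on either side of that divide.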
Volkswagen
Volkswagen has invested in taxi-hailing app Gett and partnered with chip-maker Nvidia to develop an artificial intelligence co-pilot for its cars. In 2018, the VW Group is set to release the Audi A8, the first production vehicle that reaches Level 3 on the scale, “conditional driving automation.” This means the car’s computer will handle all driving functions, but a human has to be ready to take over if necessary.
Ford
Ford already sells cars with a Level 2 autopilot, “partial driving automation.” This means one or more aspects of driving are controlled by a computer based on information about the environment, for example combined cruise control and lane centering. Alongside other investments, the company has put $1 billion into Argo AI, an artificial intelligence company for self-driving vehicles. Following a trial to test pizza delivery using autonomous vehicles, Ford is now testing Level 4 cars on public roads. These feature “high automation,” where the car can drive entirely on its own but not in certain conditions such as when the road surface is poor or the weather is bad.
General Motors
GM also sells vehicles with Level 2 automation but, after buying Silicon Valley startup Cruise Automation in 2016, now plans to launch the first mass-production-ready Level 5 vehicle—one that drives completely on its own—by 2019. The Cruise AV will have no steering wheel or pedals for a human to take over with, and will be part of a large fleet of driverless taxis the company plans to operate in big cities. Crucially, though, the company hasn’t yet secured permission to test the car on public roads.
Waymo (Google)
Waymo Level 5 testing. Image Credit: Waymo
Founded as a special project in 2009, Waymo separated from Google (though they’re both owned by the same parent firm, Alphabet) in 2016. Though it has never made, sold, or operated a car on a commercial basis, Waymo has created test vehicles that have clocked more than 4 million miles without human drivers as of November 2017. Waymo tested its Level 5 car, “Firefly,” between 2015 and 2017 but then decided to focus on hardware that could be installed in other manufacturers’ vehicles, starting with the Chrysler Pacifica.
Uber
The taxi-hailing app maker Uber has been testing autonomous cars on the streets of Pittsburgh since 2016, always with an employee behind the wheel ready to take over in case of a malfunction. After buying the self-driving truck company Otto in 2016 for a reported $680 million, Uber is now expanding its AI capabilities and plans to test NVIDIA’s latest chips in Otto’s vehicles. It has also partnered with Volvo to create a self-driving fleet of cars and with Toyota to co-create a ride-sharing autonomous vehicle.
Tesla
The first major car manufacturer to come from Silicon Valley, Tesla was also the first to introduce a Level 2 autopilot, back in 2015. The following year, it announced that all new Teslas would have the hardware for full autonomy, meaning that once the software is finished it can be deployed to existing cars with an instant upgrade. Some experts have challenged this approach, arguing that the company has merely added surround cameras to its production cars, which aren’t as capable as the laser-based (lidar) sensing systems most other carmakers are using.
But the company has collected data from hundreds of thousands of cars, driving millions of miles across all terrains. So, we shouldn’t dismiss the firm’s founder, Elon Musk, when he claims a Level 4 Tesla will drive from LA to New York without any human interference within the first half of 2018.
Winners
Who’s leading the race? Image Credit: IMD
At the moment, the disruptors like Tesla, Waymo, and Uber seem to have the upper hand. While the traditional automakers are focusing on bringing Level 3 and 4 partial automation to market, the new companies are leapfrogging them by moving more directly towards Level 5 full automation. Waymo may have the least experience of dealing with consumers in this sector, but it has already clocked up a huge amount of time testing some of the most advanced technology on public roads.
The incumbent carmakers are also focused on the difficult process of integrating new technology and business models into their existing manufacturing operations by buying up small companies. The challengers, on the other hand, are easily partnering with other big players including manufacturers to get the scale and expertise they need more quickly.
Tesla is building its own manufacturing capability but also collecting vast amounts of critical data that will enable it to more easily upgrade its cars when ready for full automation. In particular, Waymo’s experience, technology capability, and ability to secure solid partnerships puts it at the head of the pack.
This article was originally published on The Conversation. Read the original article.
Image Credit: Waymo
#432051 What Roboticists Are Learning From Early ...
You might not have heard of Hanson Robotics, but if you’re reading this, you’ve probably seen their work. They were the company behind Sophia, the lifelike humanoid avatar that’s made dozens of high-profile media appearances. Before that, they were the company behind that strange-looking robot that seemed a bit like Asimo with Albert Einstein’s head—or maybe you saw BINA48, who was interviewed for the New York Times in 2010 and featured in Jon Ronson’s books. For the sci-fi aficionados amongst you, they even made a replica of legendary author Philip K. Dick, best remembered for having books with titles like Do Androids Dream of Electric Sheep? turned into films with titles like Blade Runner.
Hanson Robotics, in other words, with their proprietary brand of lifelike humanoid robots, have been playing the same game for a while. Sometimes it can be a frustrating game to watch. Anyone who gives the robot the slightest bit of thought will realize that this is essentially a chatbot, with all the limitations this implies. Indeed, even in that New York Times interview with BINA48, author Amy Harmon describes it as a frustrating experience—with “rare (but invariably thrilling) moments of coherence.” This sensation will be familiar to anyone who’s conversed with a chatbot that has a few clever responses.
The glossy surface belies the lack of real intelligence underneath; it seems, at first glance, like a much more advanced machine than it is. Peeling back that surface layer—at least for a Hanson robot—means you’re peeling back Frubber. This proprietary substance—short for “Flesh Rubber,” which is slightly nightmarish—is surprisingly complicated. Up to thirty motors are required just to control the face; they manipulate liquid cells in order to make the skin soft, malleable, and capable of a range of different emotional expressions.
A quick combinatorial glance at the 30+ motors suggests that there are millions of possible combinations; researchers identify 62 expressions in Sophia that they consider “human-like,” although not everyone agrees with this assessment. Arguably, the technical expertise that went into reconstructing the range of human facial expressions far exceeds the simplistic chat engine the robots use, although it’s the latter that lets the robots inflate punters’ expectations with a few pre-programmed questions in an interview.
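To put a rough number on that combinatorial claim: even under the crude assumption that each of 30 motors has only two positions, the face rig supports over a billion distinct states. Both figures below are illustrative assumptions, not measurements from Sophia.

```python
# Back-of-the-envelope combinatorics for a 30-motor face rig.
# These numbers are illustrative assumptions, not Hanson Robotics data.
motors = 30
positions_per_motor = 2          # crudest case: each motor fully on or off
combinations = positions_per_motor ** motors
print(f"{combinations:,}")       # over a billion possible face states
```

With more realistic continuous motor positions the state space is far larger still, which is why researchers catalog a small named set of expressions rather than enumerating states.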
Hanson Robotics’ belief is that, ultimately, a lot of how humans will eventually relate to robots is going to depend on their faces and voices, as well as on what they’re saying. “The perception of identity is so intimately bound up with the perception of the human form,” says David Hanson, company founder.
Yet anyone attempting to design a robot that won’t terrify people has to contend with the uncanny valley—that strange blend of concern and revulsion people react with when things appear to be creepily human. Between cartoonish humanoids and genuine humans lies what has often been a no-go zone in robotic aesthetics.
The uncanny valley concept originated with roboticist Masahiro Mori, who argued that roboticists should avoid trying to replicate humans exactly. Since anything that wasn’t perfect, but merely very good, would elicit an eerie feeling in humans, shirking the challenge entirely was the only way to avoid the uncanny valley. It’s probably a task made more difficult by endless streams of articles about AI taking over the world that inexplicably conflate AI with killer humanoid Terminators—which aren’t particularly likely to exist (although maybe it’s best not to push robots around too much).
The idea behind this realm of psychological horror is fairly simple, cognitively speaking.
We know how to categorize things that are unambiguously human or non-human, even when they’re designed to interact with us: consider the popularity of Aibo, Jibo, and other robots that make no attempt to resemble humans. Something that resembles a human but isn’t quite right, though, is bound to evoke a fear response, in the same way slightly distorted music or slightly rearranged furniture in your home will. The creature simply doesn’t fit.
You may well reject the idea of the uncanny valley entirely. David Hanson, naturally, is not a fan. In the paper Upending the Uncanny Valley, he argues that great art forms have often resembled humans, but the ultimate goal for humanoid roboticists is probably to create robots we can relate to as something closer to humans than works of art.
Meanwhile, Hanson and other scientists produce competing experiments to either demonstrate that the uncanny valley is overhyped, or to confirm it exists and probe its edges.
The classic experiment involves gradually morphing a cartoon face into a human face, via some robotic-seeming intermediaries—yet it’s in movement that the real horror of the almost-human often lies. Hanson has argued that incorporating cartoonish features may help—and, sometimes, that the uncanny valley is a generational thing which will melt away when new generations grow used to the quirks of robots. Although Hanson might dispute the severity of this effect, it’s clearly what he’s trying to avoid with each new iteration.
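At its simplest, the morphing experiment described above is interpolation between two images. Here is a minimal sketch of the idea using tiny synthetic arrays in place of real photographs; real studies also warp facial landmarks, which this cross-dissolve omits.

```python
# Cross-dissolve "morph": each intermediate frame is a weighted blend of a
# cartoon endpoint and a human endpoint. Toy 2x2 arrays stand in for images.
import numpy as np

def morph(cartoon, human, alpha):
    """alpha=0 returns the cartoon image; alpha=1 returns the human image."""
    return (1 - alpha) * cartoon + alpha * human

cartoon = np.zeros((2, 2))         # stand-in for a cartoon face (all black)
human = np.full((2, 2), 255.0)     # stand-in for a photographed face (all white)
frames = [morph(cartoon, human, a) for a in np.linspace(0, 1, 5)]
print([float(f.mean()) for f in frames])  # [0.0, 63.75, 127.5, 191.25, 255.0]
```

Experimenters show participants frames at different alpha values and record where along the continuum the eeriness response peaks.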
Hiroshi Ishiguro is the latest of the roboticists to have dived headlong into the valley.
Building on the work of pioneers like Hanson, those who study human-robot interaction are pushing at the boundaries of robotics—but also of social science. It’s usually difficult to simulate what you don’t understand, and there’s still an awful lot we don’t understand about how we interpret the constant streams of non-verbal information that flow when you interact with people in the flesh.
Ishiguro took this imitation of human forms to extreme levels. Not only did he monitor and log on videotape the physical movements people made, but some of his robots are outright replicas of people; the Repliee series began with a ‘replicant’ of his daughter. This involved making a rubber replica—a silicone cast—of her entire body. Later experiments focused on creating Geminoid, a replica of Ishiguro himself.
As Ishiguro aged, he realized that it would be more effective to resemble his replica through cosmetic surgery rather than by continually creating new casts of his face, each with more lines than the last. “I decided not to get old anymore,” Ishiguro said.
We love to throw around abstract concepts and ideas: humans being replaced by machines, cared for by machines, getting intimate with machines, or even merging themselves with machines. You can take an idea like that, hold it in your hand, and examine it—dispassionately, if not without interest. But there’s a gulf between thinking about it and living in a world where human-robot interaction is not a field of academic research, but a day-to-day reality.
As the scientists studying human-robot interaction develop their robots, their replicas, and their experiments, they are making some of the first forays into that world. We might all be living there someday. Understanding ourselves—decrypting the origins of empathy and love—may be the greatest challenge we face. That is, if we want to avoid the valley.
Image Credit: Anton Gvozdikov / Shutterstock.com