#432271 Your Shopping Experience Is on the Verge ...
Exponential technologies (AI, VR, 3D printing, and networks) are radically reshaping traditional retail.
E-commerce giants (Amazon, Walmart, Alibaba) are digitizing the retail industry, riding the exponential growth of computation.
Many brick-and-mortar stores have already gone bankrupt, or migrated their operations online.
Massive change is occurring in this arena.
For those “real-life stores” that survive, an evolution is taking place from a product-centric mentality to an experience-based business model by leveraging AI, VR/AR, and 3D printing.
Let’s dive in.
E-Commerce Trends
Last year, 3.8 billion people were connected online. By 2024, thanks to 5G and stratospheric and space-based satellites, that number will grow to 8 billion people online, each with megabit-to-gigabit connection speeds.
These 4.2 billion new digital consumers will begin buying things online, a potential bonanza for the e-commerce world.
At the same time, entrepreneurs seeking to service these four-billion-plus new consumers can now skip the costly steps of procuring retail space and hiring sales clerks.
Today, thanks to global connectivity, contract production, and turnkey pack-and-ship logistics, an entrepreneur can go from an idea to building and scaling a multimillion-dollar business from anywhere in the world in record time.
And while e-commerce sales have been exploding (growing from $34 billion in Q1 2009 to $115 billion in Q3 2017), e-commerce only accounted for about 10 percent of total retail sales in 2017.
In 2016, global online sales totaled $1.8 trillion. Remarkably, this $1.8 trillion was spent by only 1.5 billion people — a mere 20 percent of the world’s population that year.
There’s plenty more room for digital disruption.
AI and the Retail Experience
For the business owner, AI will demonetize e-commerce operations with automated customer service, ultra-accurate supply chain modeling, marketing content generation, and advertising.
In the case of customer service, imagine an AI that is trained by every customer interaction, learns how to answer any consumer question perfectly, and offers feedback to product designers and company owners as a result.
Facebook’s handover protocol allows live customer service representatives and language-learning bots to work within the same Facebook Messenger conversation.
Taking it one step further, imagine an AI that empathizes with a consumer’s frustration, can absorb any amount of abuse, and comes back with a smile every time. As one example, meet Ava. “Ava is a virtual customer service agent, to bring a whole new level of personalization and brand experience to that customer experience on a day-to-day basis,” says Greg Cross, CEO of Ava’s creator, a New Zealand-based company called Soul Machines.
Predictive modeling and machine learning are also optimizing product ordering and the supply chain process. For example, Skubana, a platform for online sellers, leverages data analytics to provide entrepreneurs constant product performance feedback and maintain optimal warehouse stock levels.
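To make that concrete, here’s a minimal sketch of the kind of reorder-point math such platforms automate. It’s illustrative Python with made-up sales numbers, not Skubana’s actual product or API:

```python
import statistics

def reorder_point(daily_sales, lead_time_days, service_z=1.65):
    """Classic reorder-point formula: expected demand over the supplier's
    lead time plus a safety stock sized to the variability of demand."""
    avg_daily = statistics.mean(daily_sales)
    std_daily = statistics.stdev(daily_sales)
    safety_stock = service_z * std_daily * lead_time_days ** 0.5
    return avg_daily * lead_time_days + safety_stock

# Illustrative data: units sold per day over the last two weeks
recent_sales = [12, 9, 14, 11, 10, 13, 15, 8, 12, 11, 9, 14, 13, 10]
print(round(reorder_point(recent_sales, lead_time_days=7)))  # ~90 units with this data
```

A platform doing this at scale would swap the simple average for a learned demand forecast per product, but the decision being automated is the same: when stock falls to that level, reorder.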
Blockchain is set to follow suit in the retail space. ShipChain and Ambrosus plan to introduce transparency and trust into shipping and production, further reducing costs for entrepreneurs and consumers.
Meanwhile, for consumers, personal shopping assistants are shifting the psychology of the standard shopping experience.
Amazon’s Alexa marks an important user interface moment in this regard.
Alexa is in her infancy with voice search and vocal controls for smart homes. Already, Amazon’s Alexa users, on average, spent more on Amazon.com when purchasing than standard Amazon Prime customers — $1,700 versus $1,400.
As I’ve discussed in previous posts, the future combination of virtual reality shopping, coupled with a personalized, AI-enabled fashion advisor will make finding, selecting, and ordering products fast and painless for consumers.
But let’s take it one step further.
Imagine a future in which your personal AI shopper knows your desires better than you do. Possible? I think so. After all, our future AIs will follow us, watch us, and observe our interactions — including how long we glance at objects, our facial expressions, and much more.
In this future, shopping might be as easy as saying, “Buy me a new outfit for Saturday night’s dinner party,” followed by a surprise-and-delight moment in which the outfit that arrives is perfect.
In this future world of AI-enabled shopping, one of the most disruptive implications is that advertising is now dead.
In a world where an AI is buying my stuff, and I’m no longer in the decision loop, why would a big brand ever waste money on a Super Bowl advertisement?
The dematerialization, demonetization, and democratization of personalized shopping has only just begun.
The In-Store Experience: Experiential Retailing
In 2017, over 6,700 brick-and-mortar retail stores closed their doors, surpassing the previous record for store closures, set in 2008 during the financial crisis. Even so, retail business is still booming.
As shoppers seek the convenience of online shopping, brick-and-mortar stores are tapping into the power of the experience economy.
Rather than focusing on the practicality of the products they buy, consumers are seeking out the experience of going shopping.
The Internet of Things, artificial intelligence, and computation are exponentially improving the in-person consumer experience.
As AI dominates curated online shopping, AI and data analytics tools are also empowering real-life store owners to optimize staffing, marketing strategies, customer relationship management, and inventory logistics.
In the short term, retail store locations will serve as the next big user interface for production 3D printing (custom 3D-printed clothes at the Ministry of Supply), virtual and augmented reality (DIY skills clinics), and the Internet of Things (checkout-less shopping).
In the long term, we’ll see how our desire for enhanced productivity and seamless consumption balances with our preference for enjoyable real-life consumer experiences — all of which will be driven by exponential technologies.
One thing is certain: the nominal shopping experience is on the verge of a major transformation.
Implications
The convergence of exponential technologies has already revamped how and where we shop, how we use our time, and how much we pay.
Twenty years ago, Amazon showed us how the web could offer each of us the long tail of available reading material, and since then, the world of e-commerce has exploded.
And yet we still haven’t experienced the cost savings coming our way from drone delivery, the Internet of Things, tokenized ecosystems, the impact of truly powerful AI, or even the other major applications for 3D printing and AR/VR.
Perhaps nothing will be more transformed than today’s $20 trillion retail sector.
Hold on, stay tuned, and get your AI-enabled cryptocurrency ready.
Join Me
Abundance Digital Online Community: I’ve created a digital/online community of bold, abundance-minded entrepreneurs called Abundance Digital.
Abundance Digital is my ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level.
Image Credit: Zapp2Photo / Shutterstock.com
#432181 Putting AI in Your Pocket: MIT Chip Cuts ...
Neural networks are powerful things, but they need a lot of juice. Engineers at MIT have now developed a new chip that cuts neural nets’ power consumption by up to 95 percent, potentially allowing them to run on battery-powered mobile devices.
Smartphones these days are getting truly smart, with ever more AI-powered services like digital assistants and real-time translation. But typically the neural nets crunching the data for these services are in the cloud, with data from smartphones ferried back and forth.
That’s not ideal, as it requires a lot of communication bandwidth and means potentially sensitive data is being transmitted and stored on servers outside the user’s control. But the huge amounts of energy needed to power the GPUs neural networks run on make it impractical to implement them in devices that run on limited battery power.
Engineers at MIT have now designed a chip that cuts that power consumption by up to 95 percent by dramatically reducing the need to shuttle data back and forth between a chip’s memory and processors.
Neural nets consist of thousands of interconnected artificial neurons arranged in layers. Each neuron receives input from multiple neurons in the layer below it, and if the combined input passes a certain threshold it then transmits an output to multiple neurons above it. The strength of the connection between neurons is governed by a weight, which is set during training.
This means that for every neuron, the chip has to retrieve the input data for a particular connection and the connection weight from memory, multiply them, store the result, and then repeat the process for every input. That requires a lot of data to be moved around, expending a lot of energy.
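As a rough illustration of why (plain Python, not the chip’s actual design), here is the multiply-accumulate a conventional processor performs for a single neuron; every pass through the loop implies fetching an input and a weight from memory:

```python
def neuron_output(inputs, weights, threshold=0.0):
    """One artificial neuron: each (input, weight) pair is fetched,
    multiplied, and added to a running sum before the threshold test."""
    total = 0.0
    for x, w in zip(inputs, weights):
        total += x * w  # on a conventional chip, two memory reads per connection
    return 1.0 if total > threshold else 0.0

# Illustrative values only
print(neuron_output([0.2, 0.7, 0.1], [0.5, -0.3, 0.9]))
```

Multiply that loop by thousands of neurons and millions of connections, and the memory traffic, rather than the arithmetic itself, comes to dominate the energy bill.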
The new MIT chip does away with that, instead computing all the inputs in parallel within the memory using analog circuits. That significantly reduces the amount of data that needs to be shoved around and results in major energy savings.
The approach requires the weights of the connections to be binary rather than a range of values, but previous theoretical work had suggested this wouldn’t dramatically impact accuracy, and the researchers found the chip’s results were generally within two to three percent of the conventional non-binary neural net running on a standard computer.
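To see what binary weights mean in practice, here is a hedged sketch in Python using NumPy, purely for illustration; the MIT chip does this with analog circuits inside memory, not in software:

```python
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 8))        # a tiny full-precision layer
inputs = rng.normal(size=8)

binary_weights = np.sign(weights)        # every weight collapses to +1 or -1
full_output = weights @ inputs           # conventional multiply-accumulate
binary_output = binary_weights @ inputs  # reduces to additions and subtractions

print(full_output)
print(binary_output)
```

The two outputs differ, but binarized networks are trained with that constraint in mind, which is why the measured accuracy loss stays small.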
This isn’t the first time researchers have created chips that carry out processing in memory to reduce the power consumption of neural nets, but it’s the first time the approach has been used to run powerful convolutional neural networks popular for image-based AI applications.
“The results show impressive specifications for the energy-efficient implementation of convolution operations with memory arrays,” Dario Gil, vice president of artificial intelligence at IBM, said in a statement.
“It certainly will open the possibility to employ more complex convolutional neural networks for image and video classifications in IoT [the internet of things] in the future.”
It’s not just research groups working on this, though. The desire to get AI smarts into devices like smartphones, household appliances, and all kinds of IoT devices is driving the who’s who of Silicon Valley to pile into low-power AI chips.
Apple has already integrated its Neural Engine into the iPhone X to power things like its facial recognition technology, and Amazon is rumored to be developing its own custom AI chips for the next generation of its Echo digital assistant.
The big chip companies are also increasingly pivoting towards supporting advanced capabilities like machine learning, which has forced them to make their devices ever more energy-efficient. Earlier this year ARM unveiled two new chips: the Arm Machine Learning processor, aimed at general AI tasks from translation to facial recognition, and the Arm Object Detection processor for detecting things like faces in images.
Qualcomm’s latest mobile chip, the Snapdragon 845, features a GPU and is heavily focused on AI. The company has also released the Snapdragon 820E, which is aimed at drones, robots, and industrial devices.
Going a step further, IBM and Intel are developing neuromorphic chips whose architectures are inspired by the human brain and its incredible energy efficiency. That could theoretically allow IBM’s TrueNorth and Intel’s Loihi to run powerful machine learning on a fraction of the power of conventional chips, though they are both still highly experimental at this stage.
Getting these chips to run neural nets as powerful as those found in cloud services without burning through batteries too quickly will be a big challenge. But at the current pace of innovation, it doesn’t look like it will be too long before you’ll be packing some serious AI power in your pocket.
Image Credit: Blue Planet Studio / Shutterstock.com
#432031 Why the Rise of Self-Driving Vehicles ...
It’s been a long time coming. For years Waymo (formerly known as Google Chauffeur) has been diligently developing, driving, testing and refining its fleets of various models of self-driving cars. Now Waymo is going big. The company recently placed an order for several thousand new Chrysler Pacifica minivans and next year plans to launch driverless taxis in a number of US cities.
This deal raises one of the biggest unanswered questions about autonomous vehicles: if fleets of driverless taxis make it cheap and easy for regular people to get around, what’s going to happen to car ownership?
One popular line of thought goes as follows: as autonomous ride-hailing services become ubiquitous, people will no longer need to buy their own cars. This notion has a certain logical appeal. It makes sense to assume that as driverless taxis become widely available, most of us will eagerly sell the family car and use on-demand taxis to get to work, run errands, or pick up the kids. After all, vehicle ownership is pricey and most cars spend the vast majority of their lives parked.
Even experts believe commercial availability of autonomous vehicles will cause car sales to drop.
Consulting firm KPMG estimates that by 2030, midsize car sales in the US will decline from today’s 5.4 million units sold each year to well under half that number, a measly 2.1 million units. The think tank RethinkX offers an even more pessimistic estimate (or optimistic, depending on your opinion of cars), predicting that autonomous vehicles will reduce consumer demand for new vehicles by a whopping 70 percent.
The reality is that the impending death of private vehicle sales is greatly exaggerated. Even though autonomous taxis will be a beneficial and widely embraced form of urban transportation, most people will still prefer to own their own autonomous vehicle. In fact, the total number of autonomous vehicles sold each year is going to increase rather than decrease.
When people predict the demise of car ownership, they are overlooking the reality that the new autonomous automotive industry is not going to be just a re-hash of today’s car industry with driverless vehicles. Instead, the automotive industry of the future will be selling what could be considered an entirely new product: a wide variety of intelligent, self-guiding transportation robots. When cars become a widely used type of transportation robot, they will be cheap, ubiquitous, and versatile.
Several unique characteristics of autonomous vehicles will ensure that people will continue to buy their own cars.
1. Cost: Thanks to simpler electric motors and lighter auto bodies, autonomous vehicles will be cheaper to buy and maintain than today’s human-driven vehicles. Some estimates put the price at $10K per vehicle, a stark contrast with today’s average of $30K.
2. Personal belongings: Consumers will be able to do much more in their driverless vehicles, including work, play, and rest. This means they will want to keep more personal items in their cars.
3. Frequent upgrades: The average (human-driven) car today is owned for 10 years. As driverless cars become software-driven devices, their price/performance ratio will track to Moore’s law. Their rapid improvement will increase the appeal and frequency of new vehicle purchases.
4. Instant accessibility: In a dense urban setting, a driverless taxi is able to show up within minutes of being summoned. But not so in rural areas, where people live miles apart. For many, delay and “loss of control” over their own mobility will increase the appeal of owning their own vehicle.
5. Diversity of form and function: Autonomous vehicles will be available in a wide variety of sizes and shapes. Consumers will drive demand for custom-made, purpose-built autonomous vehicles whose form is adapted for a particular function.
Let’s explore each of these characteristics in more detail.
Autonomous vehicles will cost less for several reasons. For one, they will be powered by electric motors, which are cheaper to build and maintain than gasoline engines. Removing human drivers will also save consumers money. Autonomous vehicles will be much less likely to have accidents, so they can be built out of lightweight, lower-cost materials and will be cheaper to insure. With the human interface no longer needed, autonomous vehicles won’t be burdened by the manufacturing costs of a complex dashboard, steering wheel, and foot pedals.
While hop-on, hop-off autonomous taxi-based mobility services may be ideal for some of the urban population, several sizeable customer segments will still want to own their own cars.
These include people who live in sparsely populated rural areas and can’t afford to wait extended periods of time for a taxi to appear. Families with children will prefer to own their own driverless cars to house their children’s car seats, favorite toys, and sippy cups. Another loyal car-buying segment will be die-hard gadget-hounds who will eagerly buy a sexy upgraded model every year or so, unable to resist the siren song of AI that is three times as safe, or a ride that is twice as smooth.
Finally, consider the allure of robotic diversity.
Commuters will invest in a home office on wheels, a sleek traveling workspace resembling the first-class suite on an airplane. On the high end of the market, city-dwellers and country-dwellers alike will special-order custom-made autonomous vehicles whose shape and on-board gadgetry are adapted for a particular function or hobby. Privately owned small businesses will buy their own autonomous delivery robots, ranging in size from a knee-high, last-mile delivery pod to a giant, long-haul shipping device.
As autonomous vehicles near commercial viability, Waymo’s procurement deal with Fiat Chrysler is just the beginning.
The exact value of this future automotive industry has yet to be defined, but research from Intel’s internal autonomous vehicle division estimates this new so-called “passenger economy” could be worth nearly $7 trillion a year. To position themselves to capture a chunk of this potential revenue, companies whose businesses used to lie in previously disparate fields such as robotics, software, ships, and entertainment (to name but a few) have begun to form a bewildering web of what they hope will be symbiotic partnerships. Car hailing and chip companies are collaborating with car rental companies, who in turn are befriending giant software firms, who are launching joint projects with all sizes of hardware companies, and so on.
Last year, car companies sold an estimated 80 million new cars worldwide. Over the course of nearly a century, car companies and their partners, global chains of suppliers and service providers, have become masters at mass-producing and maintaining sturdy and cost-effective human-driven vehicles. As autonomous vehicle technology becomes ready for mainstream use, traditional automotive companies are being forced to grapple with the painful realization that they must compete on a new playing field.
The challenge for traditional car-makers won’t be that people no longer want to own cars. Instead, the challenge will be learning to compete in a new and larger transportation industry where consumers will choose their product according to the appeal of its customized body and the quality of its intelligent software.
—
Melba Kurman and Hod Lipson are the authors of Driverless: Intelligent Cars and the Road Ahead and Fabricated: The New World of 3D Printing.
Image Credit: hfzimages / Shutterstock.com
#431872 AI Uses Titan Supercomputer to Create ...
You don’t have to dig too deeply into the archive of dystopian science fiction to uncover the horror that intelligent machines might unleash. The Matrix and The Terminator are probably the most well-known examples of self-replicating, intelligent machines attempting to enslave or destroy humanity in the process of building a brave new digital world.
The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.
The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can generate neural networks as good if not better than any developed by a human in less than a day.
It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms loosely modeled on the human brain, known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard operations for AI systems today.
Computing Power
Of course, Google Brain project engineers only had access to 800 graphics processing units (GPUs), a type of computer hardware that works especially well for deep learning. Nvidia, which pioneered the development of GPUs, is considered the gold standard in today’s AI hardware architecture. Titan, the supercomputer at ORNL, boasts more than 18,000 GPUs.
The ORNL research team’s algorithm, called MENNDL for Multinode Evolutionary Neural Networks for Deep Learning, isn’t designed to create AI systems that cull cute cat photos from the internet. Instead, MENNDL is a tool for testing and training thousands of potential neural networks to work on unique science problems.
That requires a different approach from the Google and Facebook AI platforms of the world, notes Steven Young, a postdoctoral research associate at ORNL who is on the team that designed MENNDL.
“We’ve discovered that those [neural networks] are very often not the optimal network for a lot of our problems, because our data, while it can be thought of as images, is different,” he explains to Singularity Hub. “These images, and the problems, have very different characteristics from object detection.”
AI for Science
One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. Fermilab researchers are interested in understanding neutrinos, high-energy subatomic particles that rarely interact with normal matter but could be a key to understanding the early formation of the universe. One Fermilab experiment involves taking a sort of “snapshot” of neutrino interactions.
The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.
In another case involving a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL reduced the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.
“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.
What makes MENNDL particularly adept is its ability to identify the optimal hyperparameters—the key variables—for tackling a particular dataset.
“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
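The evolutionary idea itself fits in a few lines. Here is a toy Python sketch, not MENNDL’s actual code: candidate hyperparameter sets are scored, the fittest are kept, and mutated copies form the next generation. A real system would replace the stand-in score function with training and evaluating a network on the target dataset:

```python
import random

def score(params):
    # Stand-in for training a network and returning validation accuracy;
    # this toy function simply prefers ~6 layers and a learning rate near 1e-3.
    return -abs(params["layers"] - 6) - abs(params["log10_lr"] + 3)

def mutate(params):
    return {
        "layers": max(1, params["layers"] + random.choice([-1, 0, 1])),
        "log10_lr": params["log10_lr"] + random.uniform(-0.5, 0.5),
    }

population = [{"layers": random.randint(1, 12), "log10_lr": random.uniform(-6, -1)}
              for _ in range(20)]

for generation in range(10):
    population.sort(key=score, reverse=True)
    parents = population[:5]                                  # keep the fittest
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]

print(max(population, key=score))
```

MENNDL’s advantage is scale: spread across Titan’s thousands of GPUs, candidate networks can be evaluated in parallel, which is how it worked through 500,000 of them in a day.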
A Virtual Data Scientist
That’s not dissimilar to the approach of a company called H2O.ai, a startup out of Silicon Valley that uses open source machine learning platforms to “democratize” AI. It applies machine learning to create business solutions for Fortune 500 companies, including some of the world’s biggest banks and healthcare companies.
“Our software is more [about] pattern detection, let’s say anti-money laundering or fraud detection or which customer is most likely to churn,” Dr. Arno Candel, chief technology officer at H2O.ai, tells Singularity Hub. “And that kind of insight-generating software is what we call AI here.”
The company’s latest product, Driverless AI, promises to deliver the data scientist equivalent of a chessmaster to its customers (the company claims several such grandmasters in its employ and advisory board). In other words, the system can analyze a raw dataset and, like MENNDL, automatically identify what features should be included in the computer model to make the most of the data based on the best “chess moves” of its grandmasters.
“So we’re using those algorithms, but we’re giving them the human insights from those data scientists, and we automate their thinking,” he explains. “So we created a virtual data scientist that is relentless at trying these ideas.”
Inside the Black Box
Not unlike how the human brain reaches a conclusion, it’s not always possible to understand how a machine, despite being designed by humans, reaches its own solutions. The lack of transparency is often referred to as the AI “black box.” Experts like Young say we can learn something about the evolutionary process of machine learning by generating millions of neural networks and seeing what works well and what doesn’t.
“You’re never going to be able to completely explain what happened, but maybe we can better explain it than we currently can today,” Young says.
Transparency is built into the “thought process” of each particular model generated by Driverless AI, according to Candel.
The computer even explains itself to the user in plain English at each decision point. There is also real-time feedback that allows users to prioritize features, or parameters, to see how the changes improve the accuracy of the model. For example, the system may include data from people in the same zip code as it creates a model to describe customer turnover.
“That’s one of the advantages of our automatic feature engineering: it’s basically mimicking human thinking,” Candel says. “It’s not just neural nets that magically come up with some kind of number, but we’re trying to make it statistically significant.”
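As a concrete, hedged example of that kind of feature engineering (plain Python, not H2O’s actual code), the zip code mentioned above can be replaced by the churn rate historically observed in that zip code, turning a raw categorical value into a number a model can use:

```python
from collections import defaultdict

# Illustrative records: (zip_code, churned)
history = [("10001", 1), ("10001", 0), ("10001", 1),
           ("94105", 0), ("94105", 0), ("60601", 1)]

counts = defaultdict(lambda: [0, 0])            # zip -> [churned, total]
for zip_code, churned in history:
    counts[zip_code][0] += churned
    counts[zip_code][1] += 1

churn_rate_by_zip = {z: c / n for z, (c, n) in counts.items()}
overall_rate = sum(c for c, _ in counts.values()) / sum(n for _, n in counts.values())

def encode(zip_code):
    """Engineered feature: churn rate for the customer's zip code,
    falling back to the overall rate for zip codes never seen before."""
    return churn_rate_by_zip.get(zip_code, overall_rate)

print(encode("10001"), encode("99999"))  # 0.67 and 0.5 with this toy data
```

A production tool would also guard this kind of encoding against target leakage and test whether the engineered feature actually improves the model, which is presumably what Candel means by making it “statistically significant.”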
Moving Forward
Much digital ink has been spilled over the dearth of skilled data scientists, so automating certain design aspects for developing artificial neural networks makes sense. Experts agree that automation alone won’t solve that particular problem. However, it will free computer scientists to tackle more difficult issues, such as parsing the inherent biases that exist within the data used by machine learning today.
“I think the world has an opportunity to focus more on the meaning of things and not on the laborious tasks of just fitting a model and finding the best features to make that model,” Candel notes. “By automating, we are pushing the burden back for the data scientists to actually do something more meaningful, which is think about the problem and see how you can address it differently to make an even bigger impact.”
The team at ORNL expects it can also make bigger impacts beginning next year when the lab’s next supercomputer, Summit, comes online. While Summit will boast only 4,600 nodes, it will sport the latest and greatest GPU technology from Nvidia and CPUs from IBM. That means it will deliver more than five times the computational performance of Titan, the world’s fifth-most powerful supercomputer today.
“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.
It’s all in a day’s work.
Image Credit: Gennady Danilkin / Shutterstock.com