Tag Archives: material

#432271 Your Shopping Experience Is on the Verge ...

Exponential technologies (AI, VR, 3D printing, and networks) are radically reshaping traditional retail.

E-commerce giants (Amazon, Walmart, Alibaba) are digitizing the retail industry, riding the exponential growth of computation.

Many brick-and-mortar stores have already gone bankrupt, or migrated their operations online.

Massive change is occurring in this arena.

For those “real-life stores” that survive, an evolution is taking place from a product-centric mentality to an experience-based business model by leveraging AI, VR/AR, and 3D printing.

Let’s dive in.

E-Commerce Trends
Last year, 3.8 billion people were connected online. By 2024, thanks to 5G and new stratospheric and space-based satellite networks, we will grow to 8 billion people online, each with megabit to gigabit connection speeds.

These 4.2 billion new digital consumers will begin buying things online, a potential bonanza for the e-commerce world.

At the same time, entrepreneurs seeking to service these four-billion-plus new consumers can now skip the costly steps of procuring retail space and hiring sales clerks.

Today, thanks to global connectivity, contract production, and turnkey pack-and-ship logistics, an entrepreneur can go from an idea to building and scaling a multimillion-dollar business from anywhere in the world in record time.

And while e-commerce sales have been exploding (growing from $34 billion in Q1 2009 to $115 billion in Q3 2017), e-commerce only accounted for about 10 percent of total retail sales in 2017.
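Those two data points imply e-commerce revenue compounding at roughly 15 percent a year. As a back-of-the-envelope sketch (the function name is mine, not from any cited source), the growth rate works out like this:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values over a span of years."""
    return (end / start) ** (1 / years) - 1

# Q1 2009 ($34B) to Q3 2017 ($115B) is about 8.5 years.
rate = cagr(34, 115, 8.5)
print(f"{rate:.1%}")  # roughly 15% per year
```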

In 2016, global online sales totaled $1.8 trillion. Remarkably, this $1.8 trillion was spent by only 1.5 billion people — a mere 20 percent of Earth’s global population that year.

There’s plenty more room for digital disruption.

AI and the Retail Experience
For the business owner, AI will demonetize e-commerce operations with automated customer service, ultra-accurate supply chain modeling, marketing content generation, and advertising.

In the case of customer service, imagine an AI that is trained by every customer interaction, learns how to answer any consumer question perfectly, and offers feedback to product designers and company owners as a result.

Facebook’s handover protocol allows live customer service representatives and language-learning bots to work within the same Facebook Messenger conversation.

Taking it one step further, imagine an AI that is empathic to a consumer’s frustration, that can take any amount of abuse and come back with a smile every time. As one example, meet Ava. “Ava is a virtual customer service agent, to bring a whole new level of personalization and brand experience to that customer experience on a day-to-day basis,” says Greg Cross of Ava’s creator, the New Zealand company Soul Machines.

Predictive modeling and machine learning are also optimizing product ordering and the supply chain process. For example, Skubana, a platform for online sellers, leverages data analytics to provide entrepreneurs constant product performance feedback and maintain optimal warehouse stock levels.
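The heart of that kind of stock optimization is the classic reorder-point calculation: order when inventory falls below expected demand over the supplier lead time plus a safety buffer for demand variability. A minimal sketch (function and parameter names are illustrative, not Skubana's API):

```python
import statistics

def reorder_point(daily_demand, lead_time_days, z=1.65):
    """Reorder when stock drops below: average demand during the lead time,
    plus safety stock sized by demand variability (z = service-level factor;
    1.65 targets roughly 95% availability)."""
    mean_d = statistics.mean(daily_demand)
    sd_d = statistics.stdev(daily_demand)
    safety_stock = z * sd_d * lead_time_days ** 0.5
    return mean_d * lead_time_days + safety_stock

# Example: ~10 units/day with some variance, 4-day supplier lead time.
print(reorder_point([10, 12, 8, 11, 9], 4))  # a bit above 40 units
```

A real platform layers demand forecasting on top of this, but the safety-stock logic is the same.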

Blockchain is set to follow suit in the retail space. ShipChain and Ambrosus plan to introduce transparency and trust into shipping and production, further reducing costs for entrepreneurs and consumers.

Meanwhile, for consumers, personal shopping assistants are shifting the psychology of the standard shopping experience.

Amazon’s Alexa marks an important user interface moment in this regard.

Alexa is still in her infancy with voice search and vocal controls for smart homes. Already, Amazon’s Alexa users spend more on Amazon.com, on average, than standard Amazon Prime customers — $1,700 versus $1,400.

As I’ve discussed in previous posts, the combination of virtual reality shopping and a personalized, AI-enabled fashion advisor will make finding, selecting, and ordering products fast and painless for consumers.

But let’s take it one step further.

Imagine a future in which your personal AI shopper knows your desires better than you do. Possible? I think so. After all, our future AIs will follow us, watch us, and observe our interactions — including how long we glance at objects, our facial expressions, and much more.

In this future, shopping might be as easy as saying, “Buy me a new outfit for Saturday night’s dinner party,” followed by a surprise-and-delight moment in which the outfit that arrives is perfect.

In this future world of AI-enabled shopping, one of the most disruptive implications is that advertising is now dead.

In a world where an AI is buying my stuff, and I’m no longer in the decision loop, why would a big brand ever waste money on a Super Bowl advertisement?

The dematerialization, demonetization, and democratization of personalized shopping have only just begun.

The In-Store Experience: Experiential Retailing
In 2017, over 6,700 brick-and-mortar retail stores closed their doors, surpassing the previous record for store closures set in 2008 during the financial crisis. Yet physical retail is far from dead.

As shoppers seek the convenience of online shopping, brick-and-mortar stores are tapping into the power of the experience economy.

Rather than focusing on the practicality of the products they buy, consumers are seeking out the experience of going shopping.

The Internet of Things, artificial intelligence, and computation are exponentially improving the in-person consumer experience.

As AI dominates curated online shopping, AI and data analytics tools are also empowering real-life store owners to optimize staffing, marketing strategies, customer relationship management, and inventory logistics.

In the short term, retail store locations will serve as the next big user interface for production 3D printing (custom 3D-printed clothes at the Ministry of Supply), virtual and augmented reality (DIY skills clinics), and the Internet of Things (checkout-less shopping).

In the long term, we’ll see how our desire for enhanced productivity and seamless consumption balances with our preference for enjoyable real-life consumer experiences — all of which will be driven by exponential technologies.

One thing is certain: the everyday shopping experience is on the verge of a major transformation.

Implications
The convergence of exponential technologies has already revamped how and where we shop, how we use our time, and how much we pay.

Twenty years ago, Amazon showed us how the web could offer each of us the long tail of available reading material, and since then, the world of e-commerce has exploded.

And yet we still haven’t experienced the cost savings coming our way from drone delivery, the Internet of Things, tokenized ecosystems, the impact of truly powerful AI, or even the other major applications for 3D printing and AR/VR.

Perhaps nothing will be more transformed than today’s $20 trillion retail sector.

Hold on, stay tuned, and get your AI-enabled cryptocurrency ready.

Join Me
Abundance Digital Online Community: I’ve created a digital/online community of bold, abundance-minded entrepreneurs called Abundance Digital.

Abundance Digital is my ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Zapp2Photo / Shutterstock.com

Posted in Human Robots

#431999 Brain-Like Chips Now Beat the Human ...

Move over, deep learning. Neuromorphic computing—the next big thing in artificial intelligence—is on fire.

Just last week, two studies individually unveiled computer chips modeled after information processing in the human brain.

The first, published in Nature Materials, described a solution to the unpredictability of artificial synapses—the gaps between neurons where information is transmitted and stored. The second, published in Science Advances, further amped up the system’s computational power, filling the synapses with nanoclusters of magnetic material to bolster information encoding.

The result? Brain-like hardware systems that compute faster—and more efficiently—than the human brain.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” said Dr. Jeehwan Kim, who led the first study at MIT in Cambridge, Massachusetts.

Experts are hopeful.

“The field’s full of hype, and it’s nice to see quality work presented in an objective way,” said Dr. Carver Mead, an engineer at the California Institute of Technology in Pasadena not involved in the work.

Software to Hardware
The human brain is the ultimate computational wizard. With roughly 100 billion neurons densely packed into the size of a small football, the brain can deftly handle complex computation at lightning speed using very little energy.

AI experts have taken note. The past few years saw brain-inspired algorithms that can identify faces, falsify voices, and play a variety of games at—and often above—human capability.

But software is only part of the equation. Our current computers, with their transistors and binary digital systems, aren’t equipped to run these powerful algorithms.

That’s where neuromorphic computing comes in. The idea is simple: fabricate a computer chip that mimics the brain at the hardware level. Here, data is both processed and stored within the chip in an analog manner. Each artificial synapse can accumulate and integrate small bits of information from multiple sources and fire only when it reaches a threshold—much like its biological counterpart.
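That accumulate-until-threshold behavior is usually modeled as a leaky integrate-and-fire unit. A minimal sketch of the idea (a simplified textbook model, not code from either study):

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire: accumulate incoming signal into a decaying
    potential; emit a spike (1) and reset when it crosses the threshold,
    otherwise emit 0 — much like a biological synapse's downstream neuron."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v * leak + x          # leaky integration of the new input
        if v >= threshold:
            spikes.append(1)
            v = 0.0               # reset after firing
        else:
            spikes.append(0)
    return spikes

# Three weak inputs of 0.5 only push the potential over 1.0 on the third step.
print(integrate_and_fire([0.5, 0.5, 0.5]))  # [0, 0, 1]
```

In a neuromorphic chip this integration happens in analog hardware at each synapse, rather than as instructions executed by a CPU.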

Experts believe the speed and efficiency gains will be enormous.

For one, the chips will no longer have to transfer data between the central processing unit (CPU) and storage blocks, which wastes both time and energy. For another, like biological neural networks, neuromorphic devices can support neurons that run millions of streams of parallel computation.

A “Brain-on-a-chip”
Optimism aside, reproducing the biological synapse in hardware form hasn’t been as easy as anticipated.

Neuromorphic chips exist in many forms, but often look like a nanoscale metal sandwich. The “bread” pieces are generally made of conductive plates surrounding a switching medium—a conductive material of sorts that acts like the gap in a biological synapse.

When a voltage is applied, as in the case of data input, ions move within the switching medium, which then creates conductive streams to stimulate the downstream plate. This change in conductivity mimics the way biological neurons change their “weight,” or the strength of connectivity between two adjacent neurons.

But so far, neuromorphic synapses have been rather unpredictable. According to Kim, that’s because the switching medium is often made of material that can’t channel ions to exact locations on the downstream plate.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” explains Kim. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects.”

In his new study, Kim and colleagues swapped the jelly-like switching medium for silicon, a material with only a single line of defects that acts like a channel to guide ions.

The chip starts with a thin wafer of silicon etched with a honeycomb-like pattern. On top is a layer of silicon germanium—something often present in transistors—in the same pattern. This creates a funnel-like dislocation, a kind of Grand Canal that perfectly shuttles ions across the artificial synapse.

The researchers then made a neuromorphic chip containing these synapses and shot an electrical zap through them. Incredibly, the synapses’ response varied by only four percent—far more uniform than any neuromorphic device made with an amorphous switching medium.

In a computer simulation, the team built a multi-layer artificial neural network using parameters measured from their device. After tens of thousands of training examples, their neural network correctly recognized samples 95 percent of the time, just 2 percent lower than state-of-the-art software algorithms.
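Simulations like this typically take an ideal trained network and jitter each weight by the measured device variability to see how much accuracy survives. A hedged sketch of that step (the helper is mine, not the paper's code):

```python
import random

def perturb_weights(weights, variation=0.04):
    """Model device-to-device variability: jitter each trained weight by a
    few percent (Gaussian, sigma = `variation`), as when network parameters
    are realized by physical synapses rather than exact floating point."""
    return [w * (1 + random.gauss(0, variation)) for w in weights]

random.seed(0)
noisy = perturb_weights([1.0] * 5)
print(noisy)  # five values scattered within a few percent of 1.0
```

Running the perturbed network on a test set then gives the hardware-realistic accuracy figure.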

The upside? The neuromorphic chip requires much less space than the hardware that runs deep learning algorithms. Forget supercomputers—these chips could one day run complex computations right on our handheld devices.

A Magnetic Boost
Meanwhile, in Boulder, Colorado, Dr. Michael Schneider at the National Institute of Standards and Technology also realized that the standard switching medium had to go.

“There must be a better way to do this, because nature has figured out a better way to do this,” he says.

His solution? Nanoclusters of magnetic manganese.

Schneider’s chip contained two slices of superconducting electrodes made out of niobium, which channel electricity with no resistance. When researchers applied different magnetic fields to the synapse, they could control the alignment of the manganese “filling.”

The switch gave the chip a double boost. For one, by aligning the switching medium, the team could predict the ion flow and boost uniformity. For another, the magnetic manganese itself adds computational power. The chip can now encode data in both the level of electrical input and the direction of the magnetism without bulking up the synapse.

It seriously worked. Firing a billion times per second, the chips were several orders of magnitude faster than human neurons. Plus, the chips required just one ten-thousandth of the energy used by their biological counterparts, all while synthesizing input from nine different sources in an analog manner.

The Road Ahead
These studies show that we may be nearing a benchmark where artificial synapses match—or even outperform—their human inspiration.

But to Dr. Steven Furber, an expert in neuromorphic computing, we still have a ways to go before the chips go mainstream.

Many of the special materials used in these chips require specific temperatures, he says. Magnetic manganese chips, for example, require temperatures around absolute zero to operate, meaning they come with the need for giant cooling tanks filled with liquid helium—obviously not practical for everyday use.

Another hurdle is scalability. Millions of synapses are necessary before a neuromorphic device can be used to tackle everyday problems such as facial recognition. So far, no deal.

But these problems may in fact be a driving force for the entire field. Intense competition could push teams into exploring different ideas and solutions to similar problems, much like these two studies.

If so, future chips may come in diverse flavors. Similar to our vast array of deep learning algorithms and operating systems, the computer chips of the future may also vary depending on specific requirements and needs.

It is worth developing as many different technological approaches as possible, says Furber, especially as neuroscientists increasingly understand what makes our biological synapses—the ultimate inspiration—so amazingly efficient.

Image Credit: arakio / Shutterstock.com

Posted in Human Robots

#431995 The 10 Grand Challenges Facing Robotics ...

Robotics research has been making great strides in recent years, but there are still many hurdles to the machines becoming a ubiquitous presence in our lives. The journal Science Robotics has now identified 10 grand challenges the field will have to grapple with to make that a reality.

Editors conducted an online survey on unsolved challenges in robotics and assembled an expert panel of roboticists to shortlist the 30 most important topics, which were then grouped into 10 grand challenges that could have major impact in the next 5 to 10 years. Here’s what they came up with.

1. New Materials and Fabrication Schemes
Roboticists are beginning to move beyond motors, gears, and sensors by experimenting with things like artificial muscles, soft robotics, and new fabrication methods that combine multiple functions in one material. But most of these advances have been “one-off” demonstrations, which are not easy to combine.

Multi-functional materials merging things like sensing, movement, energy harvesting, or energy storage could allow more efficient robot designs. But combining these various properties in a single machine will require new approaches that blend micro-scale and large-scale fabrication techniques. Another promising direction is materials that can change over time to adapt or heal, but this requires much more research.

2. Bioinspired and Bio-Hybrid Robots
Nature has already solved many of the problems roboticists are trying to tackle, so many are turning to biology for inspiration or even incorporating living systems into their robots. But there are still major bottlenecks in reproducing the mechanical performance of muscle and the ability of biological systems to power themselves.

There has been great progress in artificial muscles, but their robustness, efficiency, and energy and power density need to be improved. Embedding living cells into robots can overcome challenges of powering small robots, as well as exploit biological features like self-healing and embedded sensing, though how to integrate these components is still a major challenge. And while a growing “robo-zoo” is helping tease out nature’s secrets, more work needs to be done on how animals transition between capabilities like flying and swimming to build multimodal platforms.

3. Power and Energy
Energy storage is a major bottleneck for mobile robotics. Rising demand from drones, electric vehicles, and renewable energy is driving progress in battery technology, but the fundamental challenges have remained largely unchanged for years.

That means that in parallel to battery development, there need to be efforts to minimize robots’ power utilization and give them access to new sources of energy. Enabling them to harvest energy from their environment and transmitting power to them wirelessly are two promising approaches worthy of investigation.

4. Robot Swarms
Swarms of simple robots that assemble into different configurations to tackle various tasks can be a cheaper, more flexible alternative to large, task-specific robots. Smaller, cheaper, more powerful hardware that lets simple robots sense their environment and communicate is combining with AI that can model the kind of behavior seen in nature’s flocks.

But there needs to be more work on the most efficient forms of control at different scales—small swarms can be controlled centrally, but larger ones need to be more decentralized. They also need to be made robust and adaptable to the changing conditions of the real world and resilient to deliberate or accidental damage. There also needs to be more work on swarms of non-homogeneous robots with complementary capabilities.
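The decentralized end of that spectrum is often built from local consensus rules: each robot reacts only to neighbors it can sense, yet the swarm converges globally. A minimal sketch (a generic neighbor-averaging rule, not any specific published controller):

```python
def swarm_step(positions, radius=2.0, gain=0.5):
    """One decentralized update: each robot moves partway toward the centroid
    of the neighbors within its sensing `radius`. No robot sees the whole
    swarm, yet repeated steps pull the group together."""
    new = []
    for i, (x, y) in enumerate(positions):
        nbrs = [(px, py) for j, (px, py) in enumerate(positions)
                if j != i and (px - x) ** 2 + (py - y) ** 2 <= radius ** 2]
        if not nbrs:                      # isolated robot: hold position
            new.append((x, y))
            continue
        cx = sum(p[0] for p in nbrs) / len(nbrs)
        cy = sum(p[1] for p in nbrs) / len(nbrs)
        new.append((x + gain * (cx - x), y + gain * (cy - y)))
    return new

# Three robots on a line contract toward each other after one step.
print(swarm_step([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]))
```

Because each update uses only local information, the same rule scales from a handful of robots to thousands — the control problem the authors flag is tuning such rules to stay robust under damage and changing conditions.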

5. Navigation and Exploration
A key use case for robots is exploring places where humans cannot go, such as the deep sea, space, or disaster zones. That means they need to become adept at exploring and navigating unmapped, often highly disordered and hostile environments.

The major challenges include creating systems that can adapt, learn, and recover from navigation failures and are able to make and recognize new discoveries. This will require high levels of autonomy that allow the robots to monitor and reconfigure themselves while being able to build a picture of the world from multiple data sources of varying reliability and accuracy.
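Fusing "data sources of varying reliability and accuracy" is classically done by weighting each estimate by the inverse of its variance, so noisier sensors count for less. A hedged one-dimensional sketch (the function is illustrative, not from the article):

```python
def fuse_estimates(estimates):
    """Inverse-variance weighting: combine noisy scalar estimates of the same
    quantity, trusting low-variance (more reliable) sources more.
    `estimates` is a list of (value, variance) pairs."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    return sum(v * w for (v, _), w in zip(estimates, weights)) / total

# A precise sensor says 10.0 (var 1.0); a sloppy one says 20.0 (var 4.0).
print(fuse_estimates([(10.0, 1.0), (20.0, 4.0)]))  # 12.0 — pulled toward the precise sensor
```

This is the scalar core of the Kalman filter update that underpins most robot localization systems.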

6. AI for Robotics
Deep learning has revolutionized machines’ ability to recognize patterns, but that needs to be combined with model-based reasoning to create adaptable robots that can learn on the fly.

Key to this will be creating AI that’s aware of its own limitations and can learn how to learn new things. It will also be important to create systems that are able to learn quickly from limited data rather than the millions of examples used in deep learning. Further advances in our understanding of human intelligence will be essential to solving these problems.

7. Brain-Computer Interfaces
BCIs will enable seamless control of advanced robotic prosthetics but could also prove a faster, more natural way to communicate instructions to robots or simply help them understand human mental states.

Most current approaches to measuring brain activity are expensive and cumbersome, though, so work on compact, low-power, and wireless devices will be important. They also tend to involve extended training, calibration, and adaptation due to the imprecise nature of reading brain activity. And it remains to be seen if they will outperform simpler techniques like eye tracking or reading muscle signals.

8. Social Interaction
If robots are to enter human environments, they will need to learn to deal with humans. But this will be difficult, as we have very few concrete models of human behavior and we are prone to underestimate the complexity of what comes naturally to us.

Social robots will need to be able to perceive minute social cues like facial expression or intonation, understand the cultural and social context they are operating in, and model the mental states of people they interact with to tailor their dealings with them, both in the short term and as they develop long-standing relationships with them.

9. Medical Robotics
Medicine is one of the areas where robots could have significant impact in the near future. Devices that augment a surgeon’s capabilities are already in regular use, but the challenge will be to increase the autonomy of these systems in such a high-stakes environment.

Autonomous robot assistants will need to be able to recognize human anatomy in a variety of contexts and be able to use situational awareness and spoken commands to understand what’s required of them. In surgery, autonomous robots could perform the routine steps of a procedure, giving way to the surgeon for more complicated patient-specific bits.

Micro-robots that operate inside the human body also hold promise, but there are still many roadblocks to their adoption, including effective delivery systems, tracking and control methods, and crucially, finding therapies where they improve on current approaches.

10. Robot Ethics and Security
As the preceding challenges are overcome and robots are increasingly integrated into our lives, this progress will create new ethical conundrums. Most importantly, we may become over-reliant on robots.

That could lead to humans losing certain skills and capabilities, making us unable to take the reins in the case of failures. We may end up delegating tasks that should, for ethical reasons, have some human supervision, and allow people to pass the buck to autonomous systems in the case of failure. It could also reduce self-determination, as human behaviors change to accommodate the routines and restrictions required for robots and AI to work effectively.

Image Credit: Zenzen / Shutterstock.com

Posted in Human Robots

#431862 Want Self-Healing Robots and Tires? ...

We all have scars, and each one tells a story. Tales of tomfoolery, tales of haphazardness, or in my case, tales of stupidity.
Whether the cause of your scar was a push-bike accident, a lack of concentration while cutting onions, or simply the byproduct of an active lifestyle, the experience was likely extremely painful and distressing. Not to mention the long and vexatious recovery period, stretching out for weeks and months after the actual event!
Cast your minds back to that time. How you longed for instant relief from your discomfort! How you longed to have your capabilities restored in an instant!
Well, materials that can heal themselves in an instant may not be far from becoming a reality—and a family of them known as elastomers holds the key.
“Elastomer” is essentially a big, fancy word for rubber. However, elastomers have one unique property—they are capable of returning to their original form after being vigorously stretched and deformed.
This unique property of elastomers has caught the eye of many scientists around the world, particularly those working in the field of robotics. The reason? Elastomers can be encouraged to return to their original shape, in many cases by simply applying heat. The implication of this is the quick and cost-effective repair of “wounds”—cuts, tears, and punctures to the soft, elastomer-based appendages of a robot’s exoskeleton.

Researchers from Vrije Universiteit Brussel in Belgium have been toying with the technique, and with remarkable success. The team built a robotic hand with fingers made of a type of elastomer. They found that cuts and punctures were indeed able to repair themselves simply by applying heat to the affected area.
How long does the healing process take? In this instance, about a day. Now that’s a lot shorter than the weeks and months of recovery time we typically need for a flesh wound, during which we are unable to write, play the guitar, or do the dishes. If you consider the latter to be a bad thing…
However, it’s not the first time scientists have played around with elastomers and examined their self-healing properties. Another team of scientists, headed up by Cheng-Hui Li and Chao Wang, discovered another type of elastomer that exhibited autonomous self-healing properties. Just to help you picture this stuff, the material closely resembles animal muscle—strong, flexible, and elastic. With autonomous restorative powers to boot.
Advancements in the world of self-healing elastomers, or rubbers, may also affect the lives of everyday motorists. Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a self-healing rubber material that could be used to make tires that repair their own punctures.
This time the mechanism of self-healing doesn’t involve heat. Rather, it is related to a physical phenomenon associated with the rubber’s unique structure. Normally, when a large enough stress is applied to a typical rubber, there is catastrophic failure at the focal point of that stress. The self-healing rubber the researchers created, on the other hand, distributes that same stress evenly over a network of “crazes”—which are like cracks connected by strands of fiber.
Here’s the interesting part. Not only does this unique physical characteristic of the rubber prevent catastrophic failure, it facilitates self-repair. According to Harvard researchers, when the stress is released, the material snaps back to its original form and the crazes heal.
This wonder material could be used in any number of rubber-based products.
Professor Jinrong Wu, of Sichuan University, China, and co-author of the study, happened to single out tires: “Imagine that we could use this material as one of the components to make a rubber tire… If you have a cut through the tire, this tire wouldn’t have to be replaced right away. Instead, it would self-heal while driving, enough to give you leeway to avoid dramatic damage,” said Wu.
So where to from here? Well, self-healing elastomers could have a number of different applications. According to an article in Quartz, the material could be used on artificial limbs. Perhaps it will provide some measure of structural integrity without looking like a tattered mess after years of regular use.
Or perhaps a sort of elastomer-based hybrid skin is on the horizon. A skin in which wounds heal instantly. And recovery time, unlike your regular old human skin of yesteryear, is significantly slashed. Furthermore, this future skin might eliminate those little reminders we call scars.
For those with poor judgment skills, this spells an end to disquieting reminders of our own stupidity.
Image Credit: Vrije Universiteit Brussel / Prof. Dr. ir. Bram Vanderborght

Posted in Human Robots

#431689 Robotic Materials Will Distribute ...

The classical view of a robot as a mechanical body with a central “brain” that controls its behavior could soon be on its way out. The authors of a recent article in Science Robotics argue that future robots will have intelligence distributed throughout their bodies.
The concept, and the emerging discipline behind it, are variously referred to as “material robotics” or “robotic materials” and are essentially a synthesis of ideas from robotics and materials science. Proponents say advances in both fields are making it possible to create composite materials capable of combining sensing, actuation, computation, and communication and operating independently of a central processing unit.
Much of the inspiration for the field comes from nature, with practitioners pointing to the adaptive camouflage of the cuttlefish’s skin, the ability of bird wings to morph in response to different maneuvers, or the banyan tree’s ability to grow roots above ground to support new branches.
Adaptive camouflage and morphing wings have clear applications in the defense and aerospace sector, but the authors say similar principles could be used to create everything from smart tires able to calculate the traction needed for specific surfaces to grippers that can tailor their force to the kind of object they are grasping.
“Material robotics represents an acknowledgment that materials can absorb some of the challenges of acting and reacting to an uncertain world,” the authors write. “Embedding distributed sensors and actuators directly into the material of the robot’s body engages computational capabilities and offloads the rigid information and computational requirements from the central processing system.”
The idea of making materials more adaptive is not new, and there are already a host of “smart materials” that can respond to stimuli like heat, mechanical stress, or magnetic fields by doing things like producing a voltage or changing shape. These properties can be carefully tuned to create materials capable of a wide variety of functions such as movement, self-repair, or sensing.
The authors say synthesizing these kinds of smart materials, alongside other advanced materials like biocompatible conductors or biodegradable elastomers, is foundational to material robotics. But the approach also involves integration of many different capabilities in the same material, careful mechanical design to make the most of mechanical capabilities, and closing the loop between sensing and control within the materials themselves.
While there are stand-alone applications for such materials in the near term, like smart fabrics or robotic grippers, the long-term promise of the field is to distribute decision-making in future advanced robots. As they are imbued with ever more senses and capabilities, these machines will be required to shuttle huge amounts of control and feedback data to and fro, placing a strain on both their communication and computation abilities.
Materials that can process sensor data at the source and either autonomously react to it or filter the most relevant information to be passed on to the central processing unit could significantly ease this bottleneck. In a press release related to an earlier study, Nikolaus Correll, an assistant professor of computer science at the University of Colorado Boulder who is also an author of the current paper, pointed out this is a tactic used by the human body.
“The human sensory system automatically filters out things like the feeling of clothing rubbing on the skin,” he said. “An artificial skin with possibly thousands of sensors could do the same thing, and only report to a central ‘brain’ if it touches something new.”
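Correll's filtering idea — report upstream only when a reading is genuinely new — can be sketched as a simple change-detection filter running locally at each sensor (an illustrative model, not code from the study):

```python
def novelty_filter(readings, threshold=0.2):
    """Report only samples that differ from the last *reported* value by more
    than `threshold`; steady stimuli (like clothing on skin) are suppressed
    locally, so the central 'brain' sees only genuine changes."""
    reported, last = [], None
    for r in readings:
        if last is None or abs(r - last) > threshold:
            reported.append(r)
            last = r
    return reported

# A slowly drifting baseline is suppressed; two real events get through.
print(novelty_filter([0.0, 0.05, 0.5, 0.52, 1.0]))  # [0.0, 0.5, 1.0]
```

With thousands of skin sensors each running such a rule, the data volume reaching the central processor drops by orders of magnitude while novel contacts are still reported immediately.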
There are still considerable challenges to realizing this vision, though, the authors say, noting that so far the young field has only produced proof of concepts. The biggest challenge remains manufacturing robotic materials in a way that combines all these capabilities in a small enough package at an affordable cost.
Luckily, the authors note, the field can draw on convergent advances in both materials science, such as the development of new bulk materials with inherent multifunctionality, and robotics, such as the ever tighter integration of components.
And they predict that doing away with the prevailing dichotomy of “brain versus body” could lay the foundations for the emergence of “robots with brains in their bodies—the foundation of inexpensive and ubiquitous robots that will step into the real world.”
Image Credit: Anatomy Insider / Shutterstock.com

Posted in Human Robots