Tag Archives: communication

#436911 Scientists Linked Artificial and ...

Scientists have linked up two silicon-based artificial neurons with a biological one across multiple countries into a fully-functional network. Using standard internet protocols, they established a chain of communication whereby an artificial neuron controls a living, biological one, and passes on the info to another artificial one.

Whoa.

We’ve talked plenty about brain-computer interfaces and novel computer chips that resemble the brain. We’ve covered how those “neuromorphic” chips could link up into tremendously powerful computing entities, using engineered communication nodes called artificial synapses.

With Moore’s law dying, we’ve even argued that neuromorphic computing is one path toward a future of extremely powerful, low-energy artificial neural network-based computing—in hardware—that could in theory link up better with the brain. Because the chips “speak” the brain’s language, in theory they could become neuroprosthesis hubs far more advanced and “natural” than anything currently possible.

This month, an international team put all of those ingredients together, turning theory into reality.

The three labs, scattered across Padova, Italy, Zurich, Switzerland, and Southampton, England, collaborated to create a fully self-controlled, hybrid artificial-biological neural network that communicated using biological principles, but over the internet.

The three-neuron network, linked through artificial synapses that emulate the real thing, was able to reproduce a classic neuroscience experiment that’s considered the basis of learning and memory in the brain. In other words, artificial neuron and synapse “chips” have progressed to the point where they can actually use a biological neuron intermediary to form a circuit that, at least partially, behaves like the real thing.

That’s not to say cyborg brains are coming soon. The simulation only recreated a small network that supports excitatory transmission in the hippocampus—a critical region that supports memory—and most brain functions require enormous cross-talk between numerous neurons and circuits. Nevertheless, the study is a jaw-dropping demonstration of how far we’ve come in recreating biological neurons and synapses in artificial hardware.

And perhaps one day, the currently “experimental” neuromorphic hardware will be integrated into broken biological neural circuits as bridges to restore movement, memory, personality, and even a sense of self.

The Artificial Brain Boom
One important thing: this study relies heavily on a decade of research into neuromorphic computing, or the implementation of brain functions inside computer chips.

The best-known example is perhaps IBM’s TrueNorth, which leveraged the brain’s computational principles to build a computer completely different from the ones we have today. Today’s computers run on a von Neumann architecture, in which memory and processing modules are physically separate. In contrast, the brain’s computing and memory are achieved simultaneously at synapses, small “hubs” on individual neurons that talk to adjacent ones.

Because memory and processing occur at the same site, biological neurons don’t have to shuttle data back and forth between processing and storage compartments, massively reducing processing time and energy use. What’s more, a neuron’s history also influences how it behaves in the future, increasing flexibility and adaptability compared to computers. With the rise of deep learning—which loosely mimics neural processing—as the prima donna of AI, the need to reduce power while boosting speed and flexible learning has become ever more paramount in the AI community.

Neuromorphic computing was partially born out of this need. Most chips utilize special materials that change their resistance (or other physical characteristics) to mimic how a neuron might adapt to stimulation. Some chips emulate a whole neuron—that is, how it responds to a history of stimulation: does it get easier or harder to fire? Others imitate synapses themselves—that is, how readily they pass information on to another neuron.
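To make that concrete, here’s a minimal sketch of the kind of behavior a neuron-emulating chip implements in hardware: a leaky integrate-and-fire model, where stimulation accumulates, leaks away over time, and triggers a “spike” past a threshold. The parameter values here are illustrative, not taken from any particular chip.

```python
# Minimal leaky integrate-and-fire neuron: input "current" accumulates in
# a membrane potential that leaks over time, and the neuron emits a spike
# when the potential crosses a threshold. Parameter values are
# illustrative, not from any specific neuromorphic chip.

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.95, reset=0.0):
        self.v = 0.0              # membrane potential
        self.threshold = threshold
        self.leak = leak          # fraction of potential kept each step
        self.reset = reset

    def step(self, input_current):
        self.v = self.v * self.leak + input_current
        if self.v >= self.threshold:   # fire, then reset
            self.v = self.reset
            return 1
        return 0

neuron = LIFNeuron()
print([neuron.step(0.3) for _ in range(15)])  # periodic spiking emerges
```

Note how the neuron’s output depends on its history of stimulation, not just the current input—the property the article highlights as the key difference from conventional logic.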

Although single neuromorphic chips have proven to be far more efficient and powerful than current computer chips running machine learning algorithms in toy problems, so far few people have tried putting the artificial components together with biological ones in the ultimate test.

That’s what this study did.

A Hybrid Network
Still with me? Let’s talk network.

It’s gonna sound complicated, but remember: learning is the formation of neural networks, and neurons that fire together wire together. To rephrase: when learning, neurons will spontaneously organize into networks so that future instances will re-trigger the entire network. To “wire” together, downstream neurons will become more responsive to their upstream neural partners, so that even a whisper will cause them to activate. In contrast, some types of stimulation will cause the downstream neuron to “chill out” so that only an upstream “shout” will trigger downstream activation.

Both these properties—making downstream neurons easier or harder to activate—are essentially how the brain forms connections. The “amping up,” in neuroscience jargon, is long-term potentiation (LTP), whereas the down-tuning is long-term depression (LTD). These two phenomena were first discovered in the rodent hippocampus more than half a century ago, and have since been considered the biological basis of how the brain learns and remembers; they’ve also been implicated in neurological problems such as addiction (seriously, you can’t pass Neuro 101 without learning about LTP and LTD!).
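For the computationally inclined, a toy sketch of spike-timing-dependent plasticity—one standard way neuroscientists model LTP and LTD—captures the rule in miniature: if the upstream neuron fires just before the downstream one, the synapse strengthens; just after, and it weakens. The constants below are arbitrary illustrations, not values from the study.

```python
import math

def stdp_update(weight, dt, a_plus=0.05, a_minus=0.055, tau=20.0):
    """Adjust a synaptic weight from the spike-time difference
    dt = t_post - t_pre (in milliseconds). Positive dt (pre fires
    first) -> LTP; negative dt (post fires first) -> LTD.
    All constants are illustrative."""
    if dt > 0:
        weight += a_plus * math.exp(-dt / tau)    # potentiation
    elif dt < 0:
        weight -= a_minus * math.exp(dt / tau)    # depression
    return max(0.0, min(1.0, weight))             # clamp to [0, 1]

w = 0.5
w = stdp_update(w, dt=5.0)    # pre just before post: weight goes up
w = stdp_update(w, dt=-5.0)   # post just before pre: weight goes down
print(round(w, 3))
```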

So it’s perhaps especially salient that one of the first artificial-brain hybrid networks recapitulated this classic result.

To visualize: the three-neuron network began in Switzerland, with an artificial neuron with the badass name of “silicon spiking neuron.” That neuron is linked to an artificial synapse, a “memristor” located in the UK, which is then linked to a biological rat neuron cultured in Italy. The rat neuron has a “smart” microelectrode, controlled by the artificial synapse, to stimulate it. This is the artificial-to-biological pathway.

Meanwhile, the rat neuron in Italy also has electrodes that listen in on its electrical signaling. This signaling is passed back to another artificial synapse in the UK, which is then used to control a second artificial neuron back in Switzerland. This is the biological-to-artificial pathway back. As a testament to how far we’ve come in digitizing neural signaling, all of the biological neuron’s responses are digitized and sent over the internet to control its far-off artificial partner.
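The loop is easier to see as code. The sketch below is purely schematic—every class is a hypothetical mock standing in for the labs’ actual hardware (the silicon neuron in Zurich, the memristor synapses in Southampton, the cultured neuron in Padova), and in the real experiment each arrow crosses the internet as digitized signals.

```python
# Schematic of the three-lab loop described above. All classes here are
# toy mocks of the real hardware; real signals travel between sites as
# digitized values over standard internet protocols.

class SiliconNeuron:                  # Zurich: silicon spiking neuron (mock)
    def __init__(self):
        self.v = 0.0
    def step(self, drive=0.4):        # integrate input, fire past a threshold
        self.v = self.v * 0.9 + drive
        if self.v > 1.0:
            self.v = 0.0
            return 1
        return 0

class MemristorSynapse:               # Southampton: artificial synapse (mock)
    def __init__(self, weight=0.5):
        self.weight = weight
    def __call__(self, spike):        # scale a spike into a stimulation drive
        return self.weight * spike

class CulturedNeuron:                 # Padova: rat neuron on electrodes (mock)
    def stimulate_and_record(self, drive):
        return 1 if drive > 0.3 else 0    # crude stand-in for biology

neuron1, neuron2 = SiliconNeuron(), SiliconNeuron()
synapse1, synapse2 = MemristorSynapse(), MemristorSynapse()
bio = CulturedNeuron()

for t in range(10):
    spike = neuron1.step()                                 # artificial -> ...
    bio_spike = bio.stimulate_and_record(synapse1(spike))  # ... biological ...
    out = neuron2.step(0.4 + synapse2(bio_spike))          # ... -> artificial
    print(t, spike, bio_spike, out)
```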

Here’s the crux: to demonstrate a functional neural network, just having the biological neuron passively “pass on” electrical stimulation isn’t enough. It has to show the capacity to learn, that is, to be able to mimic the amping up and down-tuning that are LTP and LTD, respectively.

You’ve probably guessed the results: certain stimulation patterns to the first artificial neuron in Switzerland changed how the artificial synapse in the UK operated. This, in turn, changed the stimulation to the biological neuron, so that it either amped up or toned down depending on the input.

Similarly, the response of the biological neuron altered the second artificial synapse, which then controlled the output of the second artificial neuron. Altogether, the biological and artificial components seamlessly linked up, over thousands of miles, into a functional neural circuit.

Cyborg Mind-Meld
So…I’m still picking my jaw up off the floor.

It’s utterly insane seeing a classic neuroscience learning experiment repeated with an integrated network with artificial components. That said, a three-neuron network is far from the thousands of synapses (if not more) needed to truly re-establish a broken neural circuit in the hippocampus, which DARPA has been aiming to do. And LTP/LTD have recently come under fire as the de facto brain mechanisms for learning, though so far they remain cemented as neuroscience dogma.

However, this is one of the few studies where you see fields coming together. As Richard Feynman famously said, “What I cannot create, I do not understand.” Even though neuromorphic chips were built on a high-level rather than molecular-level understanding of how neurons work, the study shows that artificial versions can still synapse with their biological counterparts. We’re not just on the right path towards understanding the brain, we’re recreating it, in hardware—if just a little.

While the study doesn’t have immediate use cases, it does give a practical boost to both the neuromorphic computing and neuroprosthetics fields.

“We are very excited with this new development,” said study author Dr. Themis Prodromakis at the University of Southampton. “On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI chips.”

Image Credit: Gerd Altmann from Pixabay

#436530 How Smart Roads Will Make Driving ...

Roads criss-cross the landscape, but while they provide vital transport links, in many ways they represent a huge amount of wasted space. Advances in “smart road” technology could change that, creating roads that can harvest energy from cars, detect speeding, automatically weigh vehicles, and even communicate with smart cars.

“Smart city” projects are popping up in countries across the world thanks to advances in wireless communication, cloud computing, data analytics, remote sensing, and artificial intelligence. Transportation is a crucial element of most of these plans, but while much of the focus is on public transport solutions, smart roads are increasingly being seen as a crucial feature of these programs.

New technology is making it possible to tackle a host of issues, including traffic congestion, accidents, and pollution, say the authors of a paper in the journal Proceedings of the Royal Society A. They’ve outlined ten of the most promising advances, under development or in the planning stages, that could feature on tomorrow’s roads.

Energy harvesting

A variety of energy harvesting technologies integrated into roads have been proposed as ways to power street lights and traffic signals or provide a boost to the grid. Photovoltaic panels could be built into the road surface to capture sunlight, or piezoelectric materials installed beneath the asphalt could generate current when deformed by vehicles passing overhead.

Musical roads

Countries like Japan, Denmark, the Netherlands, Taiwan, and South Korea have built roads that play music as cars pass by. By varying the spacing of rumble strips, it’s possible to produce a series of different notes as vehicles drive over them. The aim is generally to warn of hazards or help drivers keep to the speed limit.
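The underlying math is simple: a car rolling at speed v over grooves spaced d apart produces a tone of frequency f = v/d, so d = v/f. A quick illustrative calculation (my own example, not a road-engineering specification):

```python
# Groove spacing for a musical road: a car at speed v crossing grooves
# every d meters produces a tone of frequency f = v / d, so d = v / f.
# Illustrative numbers only, not an engineering spec.

def groove_spacing_cm(speed_kmh, note_hz):
    v_ms = speed_kmh / 3.6           # km/h -> m/s
    return 100 * v_ms / note_hz      # spacing in centimeters

for note, hz in [("A4", 440.0), ("C5", 523.25), ("E5", 659.25)]:
    print(f"{note}: {groove_spacing_cm(60, hz):.1f} cm")
# At 60 km/h, A4 (440 Hz) needs grooves roughly 3.8 cm apart.
```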

Automatic weighing

Weigh-in-motion technology that measures vehicles’ loads as they drive slowly through a designated lane has been around since the 1970s, but more recently, high-speed weigh-in-motion tech has made it possible to measure vehicles as they travel at regular highway speeds. The latest advance is integration with automatic licence plate reading and wireless communication, allowing continuous remote monitoring both to enforce weight restrictions and to track wear on roads.

Vehicle charging

The growing popularity of electric vehicles has spurred the development of technology to charge cars and buses as they drive. The most promising of these approaches is magnetic induction, which involves burying cables beneath the road to generate electromagnetic fields that a receiver device in the car then transforms into electrical power to charge batteries.

Smart traffic signs

Traffic signs aren’t always as visible as they should be, and it can often be hard to remember what all of them mean. So there are now proposals for “smart signs” that wirelessly beam a sign’s content to oncoming cars fitted with receivers, which can then alert the driver verbally or on the car’s display. The approach isn’t affected by poor weather and lighting, can be reprogrammed easily, and could do away with the need for complex sign recognition technology in future self-driving cars.

Traffic violation detection and notification

Sensors and cameras can be combined with these same smart signs to detect and automatically notify drivers of traffic violations. Because sign content is transmitted automatically and a record is stored on the car’s black box, drivers won’t be able to deny they’ve seen the warnings or been notified of any fines.

Talking cars

Car-to-car communication technology and V2X, which lets cars share information with any other connected device, are becoming increasingly common. Inter-car communication can be used to propagate accident or traffic-jam alerts to prevent congestion, while vehicle-to-infrastructure communication can help traffic signals dynamically manage their timers to keep traffic flowing, or automatically collect tolls.
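To picture what “propagating” an alert means, here’s a deliberately simplified sketch—my own toy model, not a real V2X standard like DSRC or C-V2X—in which each car rebroadcasts hazard messages it hasn’t seen before, with a hop limit so alerts stay local.

```python
# Toy model of cars flooding a hazard alert to nearby vehicles. A real
# V2X stack (DSRC, C-V2X) is far more involved; this only illustrates
# "propagate, deduplicate, and limit the radius" in a few lines.

class Car:
    def __init__(self, name):
        self.name = name
        self.seen = set()        # message ids already handled
        self.neighbors = []      # cars currently in radio range

    def receive(self, msg_id, text, hops_left):
        if msg_id in self.seen or hops_left <= 0:
            return               # drop duplicates and expired alerts
        self.seen.add(msg_id)
        print(f"{self.name}: ALERT {text}")
        for car in self.neighbors:
            car.receive(msg_id, text, hops_left - 1)

a, b, c = Car("A"), Car("B"), Car("C")
a.neighbors, b.neighbors, c.neighbors = [b], [a, c], [b]
a.receive("crash-42", "accident ahead, slow down", hops_left=3)
```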

Smart intersections

Combining sensors and cameras with object recognition systems that can detect vehicles and other road users can help increase safety and efficiency at intersections. Such systems can be used to extend green lights for slower road users like pedestrians and cyclists, sense jaywalkers, give priority to emergency vehicles, and dynamically adjust light timers to optimize traffic flow. Information can even be broadcast to oncoming vehicles to highlight blind spots and potential hazards.

Automatic crash detection

There’s a “golden hour” after an accident in which the chance of saving lives is greatly increased. Vehicle communication technology can ensure that notification of a crash reaches the emergency services rapidly, and can also provide vital information about the number and type of vehicles involved, which can help emergency response planning. It can also be used to alert other drivers to slow down or stop to prevent further accidents.

Smart street lights

Street lights are increasingly being embedded with sensors, wireless connectivity, and microcontrollers to enable a variety of smart functions. These include motion activation to save energy, wireless access points, air quality monitoring, and parking and litter monitoring. The same hardware can send automatic maintenance requests if a light is faulty, and can even allow neighboring lights to be brightened automatically to compensate.

Image Credit: David Mark from Pixabay

#436504 20 Technology Metatrends That Will ...

In the decade ahead, waves of exponential technological advancements are stacking atop one another, eclipsing decades of breakthroughs in scale and impact.

Emerging from these waves are 20 “metatrends” likely to revolutionize entire industries (old and new), redefine tomorrow’s generation of businesses and contemporary challenges, and transform our livelihoods from the bottom up.

Among these metatrends are augmented human longevity, the surging smart economy, AI-human collaboration, urbanized cellular agriculture, and high-bandwidth brain-computer interfaces, just to name a few.

It is here that master entrepreneurs and their teams must see beyond the immediate implications of a given technology, capturing second-order, Google-sized business opportunities on the horizon.

Welcome to a new decade of runaway technological booms, historic watershed moments, and extraordinary abundance.

Let’s dive in.

20 Metatrends for the 2020s
(1) Continued increase in global abundance: The number of individuals in extreme poverty continues to drop, as the middle-income population continues to rise. This metatrend is driven by the convergence of high-bandwidth and low-cost communication, ubiquitous AI on the cloud, and growing access to AI-aided education and AI-driven healthcare. Everyday goods and services (finance, insurance, education, and entertainment) are being digitized and becoming fully demonetized, available to the rising billion on mobile devices.

(2) Global gigabit connectivity will connect everyone and everything, everywhere, at ultra-low cost: The deployment of both licensed and unlicensed 5G, plus the launch of a multitude of global satellite networks (OneWeb, Starlink, etc.), will allow for ubiquitous, low-cost communications for everyone, everywhere—not to mention the connection of trillions of devices. And today’s skyrocketing connectivity is bringing online an additional three billion individuals, driving tens of trillions of dollars into the global economy. This metatrend is driven by the convergence of low-cost space launches, hardware advancements, 5G networks, artificial intelligence, materials science, and surging computing power.

(3) The average human healthspan will increase by 10+ years: A dozen game-changing biotech and pharmaceutical solutions (currently in Phase 1, 2, or 3 clinical trials) will reach consumers this decade, adding an additional decade to the human healthspan. Technologies include stem cell supply restoration, Wnt pathway manipulation, senolytic medicines, a new generation of endo-vaccines, GDF-11, and supplementation of NMN/NAD+, among several others. And as machine learning continues to mature, AI is set to unleash countless new drug candidates, ready for clinical trials. This metatrend is driven by the convergence of genome sequencing, CRISPR technologies, AI, quantum computing, and cellular medicine.

(4) An age of capital abundance will see increasing access to capital everywhere: From 2016 to 2018 (and likely in 2019), humanity hit all-time highs in the global flow of seed capital, venture capital, and sovereign wealth fund investments. While this trend will witness some ups and downs in the wake of future recessions, it is expected to continue its overall upward trajectory. Capital abundance leads to the funding and testing of ‘crazy’ entrepreneurial ideas, which in turn accelerate innovation. Already, $300 billion in crowdfunding is anticipated by 2025, democratizing capital access for entrepreneurs worldwide. This metatrend is driven by the convergence of global connectivity, dematerialization, demonetization, and democratization.

(5) Augmented reality and the spatial web will achieve ubiquitous deployment: The combination of augmented reality (yielding Web 3.0, or the spatial web) and 5G networks (offering 100 Mbps to 10 Gbps connection speeds) will transform how we live our everyday lives, impacting every industry from retail and advertising to education and entertainment. Consumers will play, learn, and shop throughout the day in a newly intelligent, virtually overlaid world. This metatrend will be driven by the convergence of hardware advancements, 5G networks, artificial intelligence, materials science, and surging computing power.

(6) Everything is smart, embedded with intelligence: The price of specialized machine learning chips is dropping rapidly with a rise in global demand. Combined with the explosion of low-cost microscopic sensors and the deployment of high-bandwidth networks, we’re heading into a decade wherein every device becomes intelligent. Your child’s toy remembers her face and name. Your kids’ drone safely and diligently follows and videos all the children at the birthday party. Appliances respond to voice commands and anticipate your needs.

(7) AI will achieve human-level intelligence: As predicted by technologist and futurist Ray Kurzweil, artificial intelligence will reach human-level performance this decade (by 2030). Through the 2020s, AI algorithms and machine learning tools will be increasingly made open source, available on the cloud, allowing any individual with an internet connection to supplement their cognitive ability, augment their problem-solving capacity, and build new ventures at a fraction of the current cost. This metatrend will be driven by the convergence of global high-bandwidth connectivity, neural networks, and cloud computing. Every industry, spanning industrial design, healthcare, education, and entertainment, will be impacted.

(8) AI-human collaboration will skyrocket across all professions: The rise of “AI as a Service” (AIaaS) platforms will enable humans to partner with AI in every aspect of their work, at every level, in every industry. AIs will become entrenched in everyday business operations, serving as cognitive collaborators to employees—supporting creative tasks, generating new ideas, and tackling previously unattainable innovations. In some fields, partnership with AI will even become a requirement. For example: in the future, making certain diagnoses without the consultation of AI may be deemed malpractice.

(9) Most individuals will adopt a JARVIS-like “software shell” to improve their quality of life: As services like Alexa, Google Home, and Apple HomePod expand in functionality, they will eventually travel beyond the home and become your cognitive prosthetic 24/7. Imagine a secure JARVIS-like software shell that you give permission to listen to all your conversations, read your email, monitor your blood chemistry, etc. With access to such data, these AI-enabled software shells will learn your preferences, anticipate your needs and behavior, shop for you, monitor your health, and help you problem-solve in support of your mid- and long-term goals.

(10) Globally abundant, cheap renewable energy: Continued advancements in solar, wind, geothermal, hydroelectric, nuclear, and localized grids will drive humanity towards cheap, abundant, and ubiquitous renewable energy. The price of renewables will drop below one cent per kilowatt-hour, just as storage drops below a mere three cents per kilowatt-hour, resulting in the majority displacement of fossil fuels globally. And as the world’s poorest countries are also the world’s sunniest, the democratization of both new and traditional storage technologies will grant energy abundance to those already bathed in sunlight.

(11) The insurance industry transforms from “recovery after risk” to “prevention of risk”: Today, fire insurance pays you after your house burns down; life insurance pays your next-of-kin after you die; and health insurance (which is really sick insurance) pays only after you get sick. This next decade, a new generation of insurance providers will leverage the convergence of machine learning, ubiquitous sensors, low-cost genome sequencing, and robotics to detect risk, prevent disaster, and guarantee safety before any costs are incurred.

(12) Autonomous vehicles and flying cars will redefine human travel (soon to be far faster and cheaper): Fully autonomous vehicles, car-as-a-service fleets, and aerial ride-sharing (flying cars) will be fully operational in most major metropolitan cities in the coming decade. The cost of transportation will plummet 3-4X, transforming real estate, finance, insurance, the materials economy, and urban planning. Where you live and work, and how you spend your time, will all be fundamentally reshaped by this future of human travel. Your kids and elderly parents will never drive. This metatrend will be driven by the convergence of machine learning, sensors, materials science, battery storage improvements, and ubiquitous gigabit connections.

(13) On-demand production and on-demand delivery will birth an “instant economy of things”: Urban dwellers will learn to expect “instant fulfillment” of their retail orders as drone and robotic last-mile delivery services carry products from local supply depots directly to your doorstep. And as regional on-demand digital manufacturing (3D printing farms) is deployed, individualized products will be obtainable within hours, anywhere, anytime. This metatrend is driven by the convergence of networks, 3D printing, robotics, and artificial intelligence.

(14) Ability to sense and know anything, anytime, anywhere: We’re rapidly approaching the era wherein 100 billion sensors (the Internet of Everything) are monitoring and sensing (imaging, listening, measuring) every facet of our environments, all the time. Global imaging satellites, drones, autonomous car LIDARs, and forward-looking augmented reality (AR) headset cameras are all part of a global sensor matrix, together allowing us to know anything, anytime, anywhere. This metatrend is driven by the convergence of terrestrial, atmospheric, and space-based sensors, vast data networks, and machine learning. In this future, it’s not “what you know,” but rather “the quality of the questions you ask” that will be most important.

(15) Disruption of advertising: As AI becomes increasingly embedded in everyday life, your custom AI will soon understand what you want better than you do. In turn, we will begin to both trust and rely upon our AIs to make most of our buying decisions, turning over shopping to AI-enabled personal assistants. Your AI might make purchases based upon your past desires, current shortages, conversations you’ve allowed your AI to listen to, or by tracking where your pupils focus on a virtual interface (i.e. what catches your attention). As a result, the advertising industry—which normally competes for your attention (whether at the Super Bowl or through search engines)—will have a hard time influencing your AI. This metatrend is driven by the convergence of machine learning, sensors, augmented reality, and 5G/networks.

(16) Cellular agriculture moves from the lab into inner cities, providing high-quality protein that is cheaper and healthier: This next decade will witness the birth of the most ethical, nutritious, and environmentally sustainable protein production system devised by humankind. Stem cell-based ‘cellular agriculture’ will allow the production of beef, chicken, and fish anywhere, on-demand, with far higher nutritional content, and a vastly lower environmental footprint than traditional livestock options. This metatrend is enabled by the convergence of biotechnology, materials science, machine learning, and AgTech.

(17) High-bandwidth brain-computer interfaces (BCIs) will come online for public use: Technologist and futurist Ray Kurzweil has predicted that in the mid-2030s, we will begin connecting the human neocortex to the cloud. This next decade will see tremendous progress in that direction, first serving those with spinal cord injuries, whereby patients will regain both sensory capacity and motor control. Yet beyond assisting those with motor function loss, several BCI pioneers are now attempting to supplement their baseline cognitive abilities, a pursuit with the potential to increase their sensorium, memory, and even intelligence. This metatrend is fueled by the convergence of materials science, machine learning, and robotics.

(18) High-resolution VR will transform both retail and real estate shopping: High-resolution, lightweight virtual reality headsets will allow individuals at home to shop for everything from clothing to real estate from the convenience of their living room. Need a new outfit? Your AI knows your detailed body measurements and can whip up a fashion show featuring your avatar wearing the latest 20 designs on a runway. Want to see how your furniture might look inside a house you’re viewing online? No problem! Your AI can populate the property with your virtualized inventory and give you a guided tour. This metatrend is enabled by the convergence of VR, machine learning, and high-bandwidth networks.

(19) Increased focus on sustainability and the environment: An increase in global environmental awareness and concern over global warming will drive companies to invest in sustainability, both from a necessity standpoint and for marketing purposes. Breakthroughs in materials science, enabled by AI, will allow companies to drive tremendous reductions in waste and environmental contamination. One company’s waste will become another company’s profit center. This metatrend is enabled by the convergence of materials science, artificial intelligence, and broadband networks.

(20) CRISPR and gene therapies will minimize disease: A vast range of infectious diseases, ranging from AIDS to Ebola, are now curable. In addition, gene-editing technologies continue to advance in precision and ease of use, allowing families to treat and ultimately cure hundreds of inheritable genetic diseases. This metatrend is driven by the convergence of various biotechnologies (CRISPR, gene therapy), genome sequencing, and artificial intelligence.

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2020 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Free-Photos from Pixabay

#436470 Retail Robots Are on the Rise—at Every ...

The robots are coming! The robots are coming! On our sidewalks, in our skies, in our every store… Over the next decade, robots will enter the mainstream of retail.

As countless robots work behind the scenes to stock shelves, serve customers, and deliver products to our doorstep, the speed of retail will accelerate.

These changes are already underway. In this blog, we’ll elaborate on how robots are entering the retail ecosystem.

Let’s dive in.

Robot Delivery
On August 3rd, 2016, Domino’s Pizza introduced the Domino’s Robotic Unit, or “DRU” for short. The first home delivery pizza robot, the DRU looks like a cross between R2-D2 and an oversized microwave.

LIDAR and GPS sensors help it navigate, while temperature sensors keep hot food hot and cold food cold. Already, it’s been rolled out in ten countries, including New Zealand, France, and Germany, but its August 2016 debut was critical—as it was the first time we’d seen robotic home delivery.

And it won’t be the last.

A dozen or so different delivery bots are fast entering the market. Starship Technologies, for instance, a startup created by Skype founders Janus Friis and Ahti Heinla, has a general-purpose home delivery robot. Right now, the system is an array of cameras and GPS sensors, but upcoming models will include microphones, speakers, and even the ability—via AI-driven natural language processing—to communicate with customers. Since 2016, Starship has already carried out 50,000 deliveries in over 100 cities across 20 countries.

Along similar lines, Nuro—co-founded by Jiajun Zhu, one of the engineers who helped develop Google’s self-driving car—has a miniature self-driving car of its own. Half the size of a sedan, the Nuro looks like a toaster on wheels, except with a mission. This toaster has been designed to carry cargo—about 12 bags of groceries (version 2.0 will carry 20)—which it’s been doing for select Kroger stores since 2018. Domino’s also partnered with Nuro in 2019.

As these delivery bots take to our streets, others are streaking across the sky.

Amazon came first, announcing Prime Air—the e-commerce giant’s promise of drone delivery in 30 minutes or less—back in 2013, and completing its first Prime Air delivery in late 2016. Almost immediately, companies ranging from 7-Eleven and Walmart to Google and Alibaba jumped on the bandwagon.

While critics remain doubtful, the head of the FAA’s drone integration department recently said that drone deliveries may be “a lot closer than […] the skeptics think. [Companies are] getting ready for full-blown operations. We’re processing their applications. I would like to move as quickly as I can.”

In-Store Robots
While delivery bots start to spare us trips to the store, those who prefer shopping the old-fashioned way—i.e., in person—also have plenty of human-robot interaction in store. In fact, these robotics solutions have been around for a while.

In 2014, SoftBank introduced Pepper, a humanoid robot capable of understanding human emotion. Pepper is cute: 4 feet tall, with a white plastic body, two black eyes, a dark slash of a mouth, and a base shaped like a mermaid’s tail. Across her chest is a touch screen to aid in communication. And there’s been a lot of communication. Pepper’s cuteness is intentional, as it matches her mission: to help humans enjoy life as much as possible.

Over 12,000 Peppers have been sold. She serves ice cream in Japan, greets diners at a Pizza Hut in Singapore, and dances with customers at a Palo Alto electronics store. More importantly, Pepper’s got company.

Walmart uses shelf-stocking robots for inventory control. Best Buy uses a robo-cashier, allowing select locations to operate 24-7. And Lowe’s Home Improvement employs the LoweBot—a giant iPad on wheels—to help customers find the items they need while tracking inventory along the way.

Warehouse Bots
Yet the biggest benefit robots provide might be in-warehouse logistics.

In 2012, when Amazon dished out $775 million for Kiva Systems, few could predict that just six years later, 45,000 Kiva robots would be deployed at all of its fulfillment centers, helping process a whopping 306 items per second during the Christmas season.

And many other retailers are following suit.

Order jeans from the Gap, and soon they’ll be sorted, packed, and shipped with the help of a Kindred robot. Remember the old arcade game where you picked up teddy bears with a giant claw? That’s Kindred, only her claw picks up T-shirts, pants, and the like, placing them in designated drop-off zones that resemble tiny mailboxes (for further sorting or shipping).

The big deal here is democratization. Kindred’s robot is cheap and easy to deploy, allowing smaller companies to compete with giants like Amazon.

Final Thoughts
For retailers interested in staying in business, there doesn’t appear to be much choice in the way of robotics.

By 2025, the US minimum wage is projected to be $15 an hour (the House of Representatives has already passed the bill, with the hike meant to unfold gradually between now and then), and many consider that number far too low.

Yet, as human labor costs continue to climb, robots won’t just be coming, they’ll be here, there, and everywhere. It’s going to become increasingly difficult for store owners to justify human workers who call in sick, show up late, and can easily get injured. Robots work 24-7. They never take a day off, never need a bathroom break, health insurance, or parental leave.

Going forward, this spells a growing challenge of technological unemployment (a blog topic I will cover in the coming month). But in retail, robotics usher in tremendous benefits for companies and customers alike.

And while professional re-tooling initiatives and the transition of human capital from retail logistics to a booming experience economy take hold, robotic retail interaction and last-mile delivery will fundamentally transform our relationship with commerce.

This blog comes from The Future is Faster Than You Think—my upcoming book, to be released Jan 28th, 2020. To get an early copy and access up to $800 worth of pre-launch giveaways, sign up here!

Image Credit: imjanuary from Pixabay

#436258 For Centuries, People Dreamed of a ...

This is part six of a six-part series on the history of natural language processing.

In February of this year, OpenAI, one of the foremost artificial intelligence labs in the world, announced that a team of researchers had built a powerful new text generator called the Generative Pre-Trained Transformer 2, or GPT-2 for short. By training the system on a vast body of text with a simple objective—predict the next word—the researchers found it acquired a broad set of natural language processing (NLP) capabilities, including reading comprehension, machine translation, and the ability to generate long strings of coherent text.

But as is often the case with NLP technology, the tool held both great promise and great peril. Researchers and policy makers at the lab were concerned that their system, if widely released, could be exploited by bad actors and misappropriated for “malicious purposes.”

The people of OpenAI, which defines its mission as “discovering and enacting the path to safe artificial general intelligence,” were concerned that GPT-2 could be used to flood the Internet with fake text, thereby degrading an already fragile information ecosystem. For this reason, OpenAI decided that it would not release the full version of GPT-2 to the public or other researchers.

GPT-2 is an example of a technique in NLP called language modeling, whereby the computational system internalizes a statistical blueprint of a text so it’s able to mimic it. Just like the predictive text on your phone—which selects words based on words you’ve used before—GPT-2 can look at a string of text and then predict what the next word is likely to be based on the probabilities inherent in that text.
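The core idea fits in a few lines of code. Here’s a toy next-word predictor of exactly the predictive-text kind—my own illustration, with a tiny made-up corpus; GPT-2 does this conceptually, with a massive neural network in place of the lookup table:

```python
import random
from collections import Counter, defaultdict

# Toy bigram language model: count which word follows which, then sample
# the next word from those counts. This is conceptually what predictive
# text does; GPT-2 replaces the lookup table with a huge neural network.
text = "the cat sat on the mat and the cat slept on the mat".split()

follows = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    counts = follows[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

words = ["the"]
for _ in range(6):
    words.append(next_word(words[-1]))
print(" ".join(words))  # e.g. "the cat slept on the mat and"
```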

GPT-2 can be seen as a descendant of the statistical language modeling that the Russian mathematician A. A. Markov developed in the early 20th century (covered in part three of this series).

What’s different with GPT-2, though, is the scale of the textual data modeled by the system. Whereas Markov analyzed a string of 20,000 letters to create a rudimentary model that could predict the likelihood of the next letter of a text being a consonant or a vowel, GPT-2 was trained on the text of 8 million web pages, gathered via links posted on Reddit, to predict what the next word might be within that entire dataset.

And whereas Markov manually trained his model by counting only two parameters—vowels and consonants—GPT-2 used cutting-edge machine learning algorithms to do linguistic analysis with over 1.5 billion parameters, burning through huge amounts of computational power in the process.
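Markov’s two-parameter analysis is small enough to reproduce directly. The sketch below applies his method to an arbitrary snippet of English: classify each letter as vowel or consonant, then estimate the transition probabilities between the two classes.

```python
from collections import Counter

# Markov's two-parameter model on an arbitrary snippet of English:
# classify each letter as vowel (V) or consonant (C), then estimate the
# probability of each class following each class. He did this by hand
# on 20,000 letters of Pushkin's "Eugene Onegin".
text = "scientists have linked artificial and biological neurons"
letters = [c for c in text.lower() if c.isalpha()]
classes = ["V" if c in "aeiou" else "C" for c in letters]

transitions = Counter(zip(classes, classes[1:]))
for prev in "VC":
    total = sum(n for (p, _), n in transitions.items() if p == prev)
    for nxt in "VC":
        print(f"P({nxt} | {prev}) = {transitions[(prev, nxt)] / total:.2f}")
```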

The results were impressive. In their blog post, OpenAI reported that GPT-2 could generate synthetic text in response to prompts, mimicking whatever style of text it was shown. If you prompt the system with a line of William Blake’s poetry, it can generate a line back in the Romantic poet’s style. If you prompt the system with a cake recipe, you get a newly invented recipe in response.
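You can try this yourself today: as the article notes below, the full model was eventually released, and—assuming the Hugging Face transformers library, which hosts the weights—prompting GPT-2 takes only a few lines:

```python
# Prompting GPT-2 via the Hugging Face "transformers" package
# (pip install transformers), which hosts the released model weights.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Tyger Tyger, burning bright, In the forests of the night;"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])  # continues in the prompt's style
```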

Perhaps the most compelling feature of GPT-2 is that it can answer questions accurately. For example, when OpenAI researchers asked the system, “Who wrote the book The Origin of Species?”—it responded: “Charles Darwin.” While only able to respond accurately some of the time, the feature does seem to be a limited realization of Gottfried Leibniz’s dream of a language-generating machine that could answer any and all human questions (described in part two of this series).

After observing the power of the new system in practice, OpenAI elected not to release the fully trained model. In the lead-up to its release in February, there had been heightened awareness about “deepfakes”—synthetic images and videos, generated via machine learning techniques, in which people do and say things they haven’t really done and said. Researchers at OpenAI worried that GPT-2 could be used to essentially create deepfake text, making it harder for people to trust textual information online.

Responses to this decision varied. On one hand, OpenAI’s caution prompted an overblown reaction in the media, with articles about the “dangerous” technology feeding into the Frankenstein narrative that often surrounds developments in AI.

Others took issue with OpenAI’s self-promotion, with some even suggesting that OpenAI purposefully exaggerated GPT-2’s power in order to create hype—while contravening a norm in the AI research community, where labs routinely share data, code, and pre-trained models. As machine learning researcher Zachary Lipton tweeted, “Perhaps what's *most remarkable* about the @OpenAI controversy is how *unremarkable* the technology is. Despite their outsize attention & budget, the research itself is perfectly ordinary—right in the main branch of deep learning NLP research.”

OpenAI stood by its decision to release only a limited version of GPT-2, but has since released larger models for other researchers and the public to experiment with. As yet, there has been no reported case of a widely distributed fake news article generated by the system. But there have been a number of interesting spin-off projects, including GPT-2 poetry and a webpage where you can prompt the system with questions yourself.

There’s even a Reddit group populated entirely with text produced by GPT-2-powered bots. Mimicking humans on Reddit, the bots have long conversations about a variety of topics, including conspiracy theories and Star Wars movies.

This bot-powered conversation may signify the new condition of life online, where language is increasingly created by a combination of human and non-human agents, and where maintaining the distinction between human and non-human, despite our best efforts, is increasingly difficult.

The idea of using rules, mechanisms, and algorithms to generate language has inspired people in many different cultures throughout history. But it’s in the online world that this powerful form of wordcraft may really find its natural milieu—in an environment where the identity of speakers becomes more ambiguous, and perhaps, less relevant. It remains to be seen what the consequences will be for language, communication, and our sense of human identity, which is so bound up with our ability to speak in natural language.

This is the sixth installment of a six-part series on the history of natural language processing. Last week’s post explained how an innocent Microsoft chatbot turned instantly racist on Twitter.

You can also check out our prior series on the untold history of AI.
