20 Technology Metatrends That Will ...
In the decade ahead, waves of exponential technological advancements are stacking atop one another, eclipsing decades of breakthroughs in scale and impact.
Emerging from these waves are 20 “metatrends” likely to revolutionize entire industries (old and new), redefine tomorrow’s generation of businesses and contemporary challenges, and transform our livelihoods from the bottom up.
Among these metatrends are augmented human longevity, the surging smart economy, AI-human collaboration, urbanized cellular agriculture, and high-bandwidth brain-computer interfaces, just to name a few.
It is here that master entrepreneurs and their teams must see beyond the immediate implications of a given technology, capturing second-order, Google-sized business opportunities on the horizon.
Welcome to a new decade of runaway technological booms, historic watershed moments, and extraordinary abundance.
Let’s dive in.
20 Metatrends for the 2020s
(1) Continued increase in global abundance: The number of individuals in extreme poverty continues to drop, as the middle-income population continues to rise. This metatrend is driven by the convergence of high-bandwidth and low-cost communication, ubiquitous AI on the cloud, and growing access to AI-aided education and AI-driven healthcare. Everyday goods and services (finance, insurance, education, and entertainment) are being digitized and becoming fully demonetized, available to the rising billion on mobile devices.
(2) Global gigabit connectivity will connect everyone and everything, everywhere, at ultra-low cost: The deployment of both licensed and unlicensed 5G, plus the launch of a multitude of global satellite networks (OneWeb, Starlink, etc.), will allow for ubiquitous, low-cost communications for everyone, everywhere, not to mention the connection of trillions of devices. And today’s skyrocketing connectivity is bringing online an additional three billion individuals, driving tens of trillions of dollars into the global economy. This metatrend is driven by the convergence of low-cost space launches, hardware advancements, 5G networks, artificial intelligence, materials science, and surging computing power.
(3) The average human healthspan will increase by 10+ years: A dozen game-changing biotech and pharmaceutical solutions (currently in Phase 1, 2, or 3 clinical trials) will reach consumers this decade, adding an additional decade to the human healthspan. Technologies include stem cell supply restoration, Wnt pathway manipulation, senolytic medicines, a new generation of endo-vaccines, GDF-11, and supplementation of NMN/NAD+, among several others. And as machine learning continues to mature, AI is set to unleash countless new drug candidates, ready for clinical trials. This metatrend is driven by the convergence of genome sequencing, CRISPR technologies, AI, quantum computing, and cellular medicine.
(4) An age of capital abundance will see increasing access to capital everywhere: From 2016 to 2018 (and likely in 2019), humanity hit all-time highs in the global flow of seed capital, venture capital, and sovereign wealth fund investments. While this trend will witness some ups and downs in the wake of future recessions, it is expected to continue its overall upward trajectory. Capital abundance leads to the funding and testing of ‘crazy’ entrepreneurial ideas, which in turn accelerate innovation. Already, $300 billion in crowdfunding is anticipated by 2025, democratizing capital access for entrepreneurs worldwide. This metatrend is driven by the convergence of global connectivity, dematerialization, demonetization, and democratization.
(5) Augmented reality and the spatial web will achieve ubiquitous deployment: The combination of augmented reality (yielding Web 3.0, or the spatial web) and 5G networks (offering 100 Mbps to 10 Gbps connection speeds) will transform how we live our everyday lives, impacting every industry from retail and advertising to education and entertainment. Consumers will play, learn, and shop throughout the day in a newly intelligent, virtually overlaid world. This metatrend will be driven by the convergence of hardware advancements, 5G networks, artificial intelligence, materials science, and surging computing power.
(6) Everything is smart, embedded with intelligence: The price of specialized machine learning chips is dropping rapidly with a rise in global demand. Combined with the explosion of low-cost microscopic sensors and the deployment of high-bandwidth networks, we’re heading into a decade wherein every device becomes intelligent. Your child’s toy remembers her face and name. Your kids’ drone safely and diligently follows and videos all the children at the birthday party. Appliances respond to voice commands and anticipate your needs.
(7) AI will achieve human-level intelligence: As predicted by technologist and futurist Ray Kurzweil, artificial intelligence will reach human-level performance this decade (by 2030). Through the 2020s, AI algorithms and machine learning tools will be increasingly made open source, available on the cloud, allowing any individual with an internet connection to supplement their cognitive ability, augment their problem-solving capacity, and build new ventures at a fraction of the current cost. This metatrend will be driven by the convergence of global high-bandwidth connectivity, neural networks, and cloud computing. Every industry, spanning industrial design, healthcare, education, and entertainment, will be impacted.
(8) AI-human collaboration will skyrocket across all professions: The rise of “AI as a Service” (AIaaS) platforms will enable humans to partner with AI in every aspect of their work, at every level, in every industry. AIs will become entrenched in everyday business operations, serving as cognitive collaborators to employees—supporting creative tasks, generating new ideas, and tackling previously unattainable innovations. In some fields, partnership with AI will even become a requirement. For example: in the future, making certain diagnoses without the consultation of AI may be deemed malpractice.
(9) Most individuals adopt a JARVIS-like “software shell” to improve their quality of life: As services like Alexa, Google Home, and Apple HomePod expand in functionality, such services will eventually travel beyond the home and become your cognitive prosthetic 24/7. Imagine a secure JARVIS-like software shell that you give permission to listen to all your conversations, read your email, monitor your blood chemistry, etc. With access to such data, these AI-enabled software shells will learn your preferences, anticipate your needs and behavior, shop for you, monitor your health, and help you problem-solve in support of your mid- and long-term goals.
(10) Globally abundant, cheap renewable energy: Continued advancements in solar, wind, geothermal, hydroelectric, nuclear, and localized grids will drive humanity towards cheap, abundant, and ubiquitous renewable energy. The price of renewables will drop below one cent per kilowatt-hour, just as storage drops below a mere three cents per kilowatt-hour, resulting in the majority displacement of fossil fuels globally. And as the world’s poorest countries are also the world’s sunniest, the democratization of both new and traditional storage technologies will grant energy abundance to those already bathed in sunlight.
(11) The insurance industry transforms from “recovery after risk” to “prevention of risk”: Today, fire insurance pays you after your house burns down; life insurance pays your next-of-kin after you die; and health insurance (which is really sick insurance) pays only after you get sick. This next decade, a new generation of insurance providers will leverage the convergence of machine learning, ubiquitous sensors, low-cost genome sequencing, and robotics to detect risk, prevent disaster, and guarantee safety before any costs are incurred.
(12) Autonomous vehicles and flying cars will redefine human travel (soon to be far faster and cheaper): Fully autonomous vehicles, car-as-a-service fleets, and aerial ride-sharing (flying cars) will be fully operational in most major metropolitan cities in the coming decade. The cost of transportation will plummet 3-4X, transforming real estate, finance, insurance, the materials economy, and urban planning. Where you live and work, and how you spend your time, will all be fundamentally reshaped by this future of human travel. Your kids and elderly parents will never drive. This metatrend will be driven by the convergence of machine learning, sensors, materials science, battery storage improvements, and ubiquitous gigabit connections.
(13) On-demand production and on-demand delivery will birth an “instant economy of things”: Urban dwellers will learn to expect “instant fulfillment” of their retail orders as drone and robotic last-mile delivery services carry products from local supply depots directly to your doorstep. Further riding the deployment of regional on-demand digital manufacturing (3D printing farms), individualized products can be obtained within hours, anywhere, anytime. This metatrend is driven by the convergence of networks, 3D printing, robotics, and artificial intelligence.
(14) Ability to sense and know anything, anytime, anywhere: We’re rapidly approaching the era wherein 100 billion sensors (the Internet of Everything) are monitoring and sensing (imaging, listening, measuring) every facet of our environments, all the time. Global imaging satellites, drones, autonomous car LIDARs, and forward-looking augmented reality (AR) headset cameras are all part of a global sensor matrix, together allowing us to know anything, anytime, anywhere. This metatrend is driven by the convergence of terrestrial, atmospheric and space-based sensors, vast data networks, and machine learning. In this future, it’s not “what you know,” but rather “the quality of the questions you ask” that will be most important.
(15) Disruption of advertising: As AI becomes increasingly embedded in everyday life, your custom AI will soon understand what you want better than you do. In turn, we will begin to both trust and rely upon our AIs to make most of our buying decisions, turning over shopping to AI-enabled personal assistants. Your AI might make purchases based upon your past desires, current shortages, conversations you’ve allowed your AI to listen to, or by tracking where your pupils focus on a virtual interface (i.e. what catches your attention). As a result, the advertising industry—which normally competes for your attention (whether at the Super Bowl or through search engines)—will have a hard time influencing your AI. This metatrend is driven by the convergence of machine learning, sensors, augmented reality, and 5G/networks.
(16) Cellular agriculture moves from the lab into inner cities, providing high-quality protein that is cheaper and healthier: This next decade will witness the birth of the most ethical, nutritious, and environmentally sustainable protein production system devised by humankind. Stem cell-based ‘cellular agriculture’ will allow the production of beef, chicken, and fish anywhere, on-demand, with far higher nutritional content, and a vastly lower environmental footprint than traditional livestock options. This metatrend is enabled by the convergence of biotechnology, materials science, machine learning, and AgTech.
(17) High-bandwidth brain-computer interfaces (BCIs) will come online for public use: Technologist and futurist Ray Kurzweil has predicted that in the mid-2030s, we will begin connecting the human neocortex to the cloud. This next decade will see tremendous progress in that direction, first serving those with spinal cord injuries, whereby patients will regain both sensory capacity and motor control. Yet beyond assisting those with motor function loss, several BCI pioneers are now attempting to supplement their baseline cognitive abilities, a pursuit with the potential to increase their sensorium, memory, and even intelligence. This metatrend is fueled by the convergence of materials science, machine learning, and robotics.
(18) High-resolution VR will transform both retail and real estate shopping: High-resolution, lightweight virtual reality headsets will allow individuals at home to shop for everything from clothing to real estate from the convenience of their living room. Need a new outfit? Your AI knows your detailed body measurements and can whip up a fashion show featuring your avatar wearing the latest 20 designs on a runway. Want to see how your furniture might look inside a house you’re viewing online? No problem! Your AI can populate the property with your virtualized inventory and give you a guided tour. This metatrend is enabled by the convergence of: VR, machine learning, and high-bandwidth networks.
(19) Increased focus on sustainability and the environment: An increase in global environmental awareness and concern over global warming will drive companies to invest in sustainability, both from a necessity standpoint and for marketing purposes. Breakthroughs in materials science, enabled by AI, will allow companies to drive tremendous reductions in waste and environmental contamination. One company’s waste will become another company’s profit center. This metatrend is enabled by the convergence of materials science, artificial intelligence, and broadband networks.
(20) CRISPR and gene therapies will minimize disease: A vast range of infectious diseases, ranging from AIDS to Ebola, are now curable. In addition, gene-editing technologies continue to advance in precision and ease of use, allowing families to treat and ultimately cure hundreds of inheritable genetic diseases. This metatrend is driven by the convergence of various biotechnologies (CRISPR, gene therapy), genome sequencing, and artificial intelligence.
Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”
If you’d like to learn more and consider joining our 2020 membership, apply here.
(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.
(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)
This article originally appeared on diamandis.com. Read the original article here.
Image Credit: Image by Free-Photos from Pixabay
Tech’s Biggest Leaps From the Last 10 ...
As we enter our third decade in the 21st century, it seems appropriate to reflect on the ways technology developed and note the breakthroughs that were achieved in the last 10 years.
The 2010s saw IBM’s Watson win a game of Jeopardy!, ushering in mainstream awareness of machine learning, along with DeepMind’s AlphaGo defeating the world’s top Go players. It was the decade that industrial tools like drones, 3D printers, genetic sequencing, and virtual reality (VR) all became consumer products. And it was a decade in which some alarming trends related to surveillance, targeted misinformation, and deepfakes came online.
For better or worse, the past decade was a breathtaking era in human history in which the idea of exponential growth in information technologies powered by computation became a mainstream concept.
As I did last year for 2018, I’ve asked a collection of experts across the Singularity University faculty to help frame the biggest breakthroughs and moments that gave shape to the past 10 years. I asked them what, in their opinion, was the most important breakthrough in their respective fields over the past decade.
My own answer to this question, focused in the space of augmented and virtual reality, would be the stunning announcement in March of 2014 that Facebook acquired Oculus VR for $2 billion. Although VR technology had been around for a while, it was at this precise moment that VR arrived as a consumer technology platform. Facebook, largely fueled by the singular interest of CEO Mark Zuckerberg, has funded the development of this industry, keeping alive the hope that consumer VR can become a sustainable business. In the meantime, VR has continued to grow in sophistication and usefulness, though it has yet to truly take off as a mainstream concept. That will hopefully be a development for the 2020s.
Below is a decade in review across the technology areas that are giving shape to our modern world, as described by the SU community of experts.
Digital Biology
Dr. Tiffany Vora | Faculty Director and Vice Chair, Digital Biology and Medicine, Singularity University
In my mind, this decade of astounding breakthroughs in the life sciences and medicine rests on the achievement of the $1,000 human genome in 2016. More-than-exponentially falling costs of DNA sequencing have driven advances in medicine, agriculture, ecology, genome editing, synthetic biology, the battle against climate change, and our fundamental understanding of life and its breathtaking connections. The “digital” revolution in DNA constituted an important model for harnessing other types of biological information, from personalized bio data to massive datasets spanning populations and species.
Crucially, by aggressively driving down the cost of such analyses, researchers and entrepreneurs democratized access to the source code of life—with attendant financial, cultural, and ethical consequences. Exciting, but take heed: Veritas Genetics spearheaded a $600 genome in 2019, only to have to shutter USA operations due to a money trail tangled with the trade war with China. Stay tuned through the early 2020s to see the pricing of DNA sequencing fall even further … and to experience the many ways that cheaper, faster harvesting of biological data will enrich your daily life.
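As a back-of-the-envelope illustration (the two price points come from the paragraphs above; the assumption of a smooth yearly decline is mine), the compound annual rate of cost decline implied by a $1,000 genome in 2016 and a $600 genome in 2019 can be computed directly:

```python
# Implied compound annual change in genome sequencing cost, using the
# two price points mentioned above: $1,000 in 2016 and $600 in 2019.
cost_2016 = 1_000.0
cost_2019 = 600.0
years = 2019 - 2016

# Compound annual growth rate (negative here, since costs are falling).
cagr = (cost_2019 / cost_2016) ** (1 / years) - 1
print(f"Implied annual change: {cagr:.1%}")  # roughly -15.7% per year
```

That steady ~16 percent annual decline is slower than the "more-than-exponential" drops of the early 2010s, but it still points toward sub-$500 genomes within a few years if the trend holds.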
Cryptocurrency
Alex Gladstein | Chief Strategy Officer, Human Rights Foundation
The past decade has seen Bitcoin go from just an idea on an obscure online message board to a global financial network carrying more than 100 billion dollars in value. And we’re just getting started. One recent defining moment in the cryptocurrency space has been a stunning trend underway in Venezuela, where today, the daily dollar-denominated value of Bitcoin traded now far exceeds the daily dollar-denominated value traded on the Caracas Stock Exchange. It’s just one country, but it’s a significant country, and a paradigm shift.
Governments and corporations are following Bitcoin’s success too, and are looking to launch their own digital currencies. China will launch its “DC/EP” project in the coming months, and Facebook is trying to kickstart its Libra project. There are technical and regulatory uncertainties for both, but one thing is for certain: the era of digital currency has arrived.
Business Strategy and Entrepreneurship
Pascal Finette | Chair, Entrepreneurship and Open Innovation, Singularity University
For me, without a doubt, the most interesting and quite possibly ground-shifting development in the fields of entrepreneurship and corporate innovation in the last ten years is the rapid maturing of customer-driven product development frameworks such as Lean Startup, and its subsequent adoption by corporates for their own innovation purposes.
Tools and frameworks like the Business Model Canvas, agile (software) development and the aforementioned Lean Startup methodology fundamentally shifted the way we think and go about building products, services, and companies, with many of these tools bursting onto the startup scene in the late 2000s and early 2010s.
As these tools matured, they found mass adoption not only in startups around the world, but also in incumbent companies, which eagerly adopted them to increase their own innovation velocity and success.
Energy
Ramez Naam | Co-Chair, Energy and Environment, Singularity University
The 2010s were the decade that saw clean electricity, energy storage, and electric vehicles break through price and performance barriers around the world. Solar, wind, batteries, and EVs started this decade as technologies that had to be subsidized. That was the first phase of their existence. Now they’re entering their third, most disruptive phase, where shifting to clean energy and mobility is cheaper than continuing to use existing coal, gas, or oil infrastructure.
Consider that at the start of 2010, there was no place on earth where building new solar or wind was cheaper than building new coal or gas power generation. By 2015, in some of the sunniest and windiest places on earth, solar and wind had entered their second phase, where they were cost-competitive for new power. And then, in 2018 and 2019, we started to see the edge of the third phase, as building new solar and wind, in some parts of the world, was cheaper than operating existing coal or gas power plants.
Food Technology
Liz Specht, Ph.D. | Associate Director of Science & Technology, The Good Food Institute
The arrival of mainstream plant-based meat is easily the food tech advance of the decade. Meat analogs have, of course, been around forever. But only in the last decade have companies like Beyond Meat and Impossible Foods decided to cut animals out of the process and build no-compromise meat directly from plants.
Plant-based meat is already transforming the fast-food industry. For example, the introduction of the Impossible Whopper led Burger King to their most profitable quarter in many years. But the global food industry as a whole is shifting as well. Tyson, JBS, Nestle, Cargill, and many others are all embracing plant-based meat.
Augmented and Virtual Reality
Jody Medich | CEO, Superhuman-x
The breakthrough moment for augmented and virtual reality came in 2013 when Palmer Luckey took apart an Android smartphone and added optic lenses to make the first version of the Oculus Rift. Prior to that moment, we struggled with miniaturizing the components needed to develop low-latency head-worn devices. But thanks to the smartphone race started in 2007 with the iPhone, we finally had a suite of sensors, chips, displays, and computing power small enough to put on the head.
What will the next 10 years bring? Look for AR/VR to explode in a big way. We are right on the cusp of that tipping point when the tech is finally “good enough” for our linear expectations. Given all it can do today, we can’t even picture what’s possible. Just as today we can’t function without our phones, by 2029 we’ll feel lost without some AR/VR product. It will be the way we interact with computing, smart objects, and AI. Tim Cook, Apple CEO, predicts it will replace all of today’s computing devices. I can’t wait.
Philosophy of Technology
Alix Rübsaam | Faculty Fellow, Singularity University, Philosophy of Technology/Ethics of AI
The last decade has seen a significant shift in our general attitude towards the algorithms that we now know dictate much of our surroundings. Looking back at the beginning of the decade, it seems we were blissfully unaware of how the data we freely and willingly surrendered would feed the algorithms that would come to shape every aspect of our daily lives: the news we consume, the products we purchase, the opinions we hold, etc.
If I were to isolate a single publication that contributed greatly to the shift in public discourse on algorithms, it would have to be Cathy O’Neil’s Weapons of Math Destruction from 2016. It remains a comprehensive, readable, and highly informative insight into how algorithms dictate our finances, our jobs, where we go to school, or if we can get health insurance. Its publication represents a pivotal moment when the general public started to question whether we should be OK with outsourcing decision making to these opaque systems.
The ubiquity of ethical guidelines for AI and algorithms published just in the last year (perhaps most comprehensively by the AI Now Institute) fully demonstrates the shift in public opinion of this decade.
Data Science
Ola Kowalewski | Faculty Fellow, Singularity University, Data Innovation
In the last decade we entered the era of internet and smartphone ubiquity. The number of internet users doubled, with nearly 60 percent of the global population connected online, and over 35 percent of the global population now owns a smartphone. With billions of people in a state of constant connectedness and therefore in a state of constant surveillance, the companies that have built the tech infrastructure and information pipelines have dominated the global economy. This shift from tech companies being the underdogs to arguably the world’s major powers sets the landscape we enter for the next decade.
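A minimal sketch of the arithmetic behind those connectivity figures (the ~7.7 billion world-population figure is my assumption for illustration, not a number from the text):

```python
# Rough arithmetic behind "the number of internet users doubled, with
# nearly 60 percent of the global population connected online".
# The ~7.7 billion world-population figure is an assumption.
world_population = 7.7e9

connected_now = 0.60 * world_population   # ~4.6 billion online today
connected_2010 = connected_now / 2        # "doubled" implies ~2.3 billion a decade ago

print(f"Online now:          {connected_now / 1e9:.1f} billion")
print(f"Online a decade ago: {connected_2010 / 1e9:.1f} billion")
```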
Global Grand Challenges
Darlene Damm | Vice Chair, Faculty, Global Grand Challenges, Singularity University
The biggest breakthrough over the last decade in social impact and technology is that the social impact sector switched from seeing technology as something problematic to avoid, to one of the most effective ways to create social change. We now see people using exponential technologies to solve all sorts of social challenges in areas ranging from disaster response to hunger to shelter.
The world’s leading social organizations, such as UNICEF and the World Food Programme, have launched their own venture funds and accelerators, and the United Nations recently declared that digitization is revolutionizing global development.
Digital Biology
Raymond McCauley | Chair, Digital Biology, Singularity University, Co-Founder & Chief Architect, BioCurious; Principal, Exponential Biosciences
CRISPR is bringing about a revolution in genetic engineering. It’s obvious, and it’s huge. What may not be so obvious is the widespread adoption of genetic testing. And this may have an even longer-lasting effect. It’s used to test new babies, to solve medical mysteries, and to catch serial killers. Thanks to holiday ads from 23andMe and Ancestry.com, it’s everywhere. Testing your DNA is now a common over-the-counter product. People are using it to set their diet, to pick drugs, and even for dating (or at least picking healthy mates).
And we’re just in the early stages. Further down the line, doing large-scale studies on more people, with more data, will lead to the use of polygenic risk scores to help us rank our genetic potential for everything from getting cancer to being a genius. Can you imagine what it would be like for parents to pick new babies, GATTACA-style, to get the smartest kids? You don’t have to; it’s already happening.
Artificial Intelligence
Neil Jacobstein | Chair, Artificial Intelligence and Robotics, Singularity University
The convergence of exponentially improved computing power, the deep learning algorithm, and access to massive data resulted in a series of AI breakthroughs over the past decade. These included: vastly improved accuracy in identifying images, making self-driving cars practical, beating several world champions in Go, and identifying gender, smoking status, and age from retinal fundus photographs.
Combined, these breakthroughs convinced researchers and investors that after 50+ years of research and development, AI was ready for prime-time applications. Now, virtually every field of human endeavor is being revolutionized by machine learning. We still have a long way to go to achieve human-level intelligence and beyond, but the pace of worldwide improvement is blistering.
Hod Lipson | Professor of Engineering and Data Science, Columbia University
The biggest moment in AI in the past decade (and in its entire history, in my humble opinion) was midnight, Pacific time, September 30, 2012: the moment when machines finally opened their eyes. It was the moment when deep learning took off, breaking stagnant decades of machine blindness, when AI couldn’t reliably tell even a cat from a dog. That seemingly trivial accomplishment—a task any one-year-old child can do—has had a ripple effect on AI applications from driverless cars to health diagnostics. And this is just the beginning of what is sure to be a Cambrian explosion of AI.
Neuroscience
Divya Chander | Chair, Neuroscience, Singularity University
If the 2000s were the decade of brain mapping, then the 2010s were the decade of brain writing. Optogenetics, a technique for precisely mapping and controlling neurons and neural circuits using genetically-directed light, saw incredible growth in the 2010s.
Also in the last 10 years, neuromodulation, or the ability to rewire the brain using both invasive and non-invasive interfaces and energy, has exploded in use and form. For instance, the BrainGate consortium showed us how electrode arrays implanted into the motor cortex could be used by paralyzed people to direct a robotic arm with their thoughts. These technologies, alone or in combination with robotics, exoskeletons, and flexible, implantable electronics, also make possible a future of human augmentation.
Image Credit: Image by Jorge Guillen from Pixabay
Video Friday: This Robot Refuses to Fall ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.
In case you somehow missed the massive Skydio 2 review we posted earlier this week, the first batches of the drone are now shipping. Each drone gets a lot of attention before it goes out the door, and here’s a behind-the-scenes clip of the process.
[ Skydio ]
Sphero RVR is one of the 15 robots on our robot gift guide this year. Here’s a new video Sphero just released showing some of the things you can do with the robot.
[ RVR ]
NimbRo-OP2 has some impressive recovery skills from the obligatory research-motivated robot abuse.
[ NimbRo ]
Teams seeking to qualify for the Virtual Urban Circuit of the Subterranean Challenge can access practice worlds to test their approaches prior to submitting solutions for the competition. This video previews three of the practice environments.
[ DARPA SubT ]
Stretchable skin-like robots that can be rolled up and put in your pocket have been developed by a University of Bristol team using a new way of embedding artificial muscles and electrical adhesion into soft materials.
[ Bristol ]
Happy Holidays from ABB!
Helping New York celebrate the festive season, twelve ABB robots are interacting with visitors to Bloomingdale’s iconic holiday celebration at their 59th Street flagship store. ABB’s robots are the main attraction in three of Bloomingdale’s twelve holiday window displays at Lexington and Third Avenue, as ABB demonstrates the potential for its robotics and automation technology to revolutionize visual merchandising and make the retail experience more dynamic and whimsical.
[ ABB ]
We introduce pelican eel–inspired dual-morphing architectures that embody quasi-sequential behaviors of origami unfolding and skin stretching in response to fluid pressure. In the proposed system, fluid paths were enclosed and guided by a set of entirely stretchable origami units that imitate the morphing principle of the pelican eel’s stretchable and foldable frames. This geometric and elastomeric design of fluid networks, in which fluid pressure acts in the direction that the whole body deploys first, resulted in a quasi-sequential dual-morphing response. To verify the effectiveness of our design rule, we built an artificial creature mimicking a pelican eel and reproduced biomimetic dual-morphing behavior.
And here’s a real pelican eel:
[ Science Robotics ]
Delft Dynamics’ updated anti-drone system involves a tether, mid-air net gun, and even a parachute.
[ Delft Dynamics ]
Teleoperation is a great way of helping robots with complex tasks, especially if you can do it through motion capture. But what if you’re teleoperating a non-anthropomorphic robot? Columbia’s ROAM Lab is working on it.
[ Paper ] via [ ROAM Lab ]
I don’t know how I missed this video last year because it’s got a steely robot hand squeezing a cute lil’ chick.
[ MotionLib ] via [ RobotStart ]
In this video we present results of a trajectory generation method for autonomous overtaking of unexpected obstacles in a dynamic urban environment. In these settings, blind spots can arise from perception limitations, for example when overtaking unexpected objects in the vehicle’s ego lane on a two-way street. In this case, a human driver would first make sure that the opposite lane is free and that there is enough room to successfully execute the maneuver, and would then cut into the opposite lane to overtake. We consider the practical problem of autonomous overtaking when the coverage of the perception system is impaired due to occlusion.
[ Paper ]
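The gap-acceptance reasoning described in the abstract, checking that the visible stretch of the opposite lane buys enough time to finish the maneuver, can be sketched as a simple time-budget test. This is a hypothetical illustration, not the paper’s method; the function and parameter names are our own assumptions:

```python
def safe_to_overtake(visible_gap_m, ego_speed_mps, oncoming_speed_mps,
                     maneuver_length_m, margin_s=2.0):
    """Return True if the visible stretch of the opposite lane is long
    enough to complete the overtake before a worst-case oncoming vehicle
    (assumed to appear at the edge of the visible region) could reach us.

    All names and the margin are illustrative assumptions, not values
    from the paper.
    """
    if ego_speed_mps <= 0:
        return False
    # Time we must spend occupying the opposite lane.
    t_maneuver = maneuver_length_m / ego_speed_mps
    # Worst-case closing rate with an unseen oncoming vehicle.
    closing_speed = ego_speed_mps + oncoming_speed_mps
    # Time until that hypothetical vehicle could conflict with us.
    t_available = visible_gap_m / closing_speed
    return t_available >= t_maneuver + margin_s
```

With a 200 m visible gap the check passes; with only 50 m of visibility and the same maneuver it does not, which mirrors the “wait until the opposite lane is demonstrably free” behavior the abstract attributes to human drivers.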
New weirdness from Toio!
[ Toio ]
Palo Alto City Library won a technology innovation award! Watch to see how Senior Librarian Dan Lou is using Misty to enhance their technology programs to inspire and educate customers.
[ Misty Robotics ]
We consider the problem of reorienting a rigid object with arbitrary known shape on a table using a two-finger pinch gripper. The reorienting problem is challenging because of its non-smoothness and high dimensionality. In this work, we focus on solving reorienting using pivoting, in which we allow the grasped object to rotate between fingers. Pivoting decouples the gripper rotation from the object motion, making it possible to reorient an object under strict robot workspace constraints.
[ CMU ]
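The decoupling idea is simple to state: with pivoting, the object’s world orientation is the gripper’s orientation plus the pivot angle between the fingers, so a limited gripper workspace can still reach a wide range of object orientations. A toy planar sketch (hypothetical names and limits, not CMU’s planner):

```python
import math

def gripper_angle_for(target_object_angle, pivot_angle):
    """Planar pivoting kinematics: object_angle = gripper_angle + pivot_angle,
    so the gripper rotation needed is just the difference."""
    return target_object_angle - pivot_angle

def reachable(target_object_angle, max_pivot,
              workspace=(-math.pi / 2, math.pi / 2)):
    """A target orientation is reachable if some pivot angle within the
    (friction-limited) range [-max_pivot, +max_pivot] puts the required
    gripper angle inside the gripper's angular workspace."""
    lo, hi = workspace
    # The set of gripper angles that realize the target is an interval:
    need_lo = target_object_angle - max_pivot
    need_hi = target_object_angle + max_pivot
    # Feasible iff that interval intersects the workspace.
    return need_lo <= hi and need_hi >= lo
```

For example, an object rotation of 2.0 rad is reachable with 1.0 rad of allowed pivot even though the gripper itself can only rotate ±π/2, which is exactly the workspace benefit the abstract describes.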
How can a mobile robot be a good pedestrian and avoid bumping into you on the sidewalk? Navigating crowded environments is hard for a robot, since the flow of foot traffic follows implicit social rules. Researchers from MIT have developed an algorithm that teaches mobile robots to maneuver through crowds of people while respecting their natural behavior.
[ Roboy Research Reviews ]
What happens when humans and robots make art together? In this awe-inspiring talk, artist Sougwen Chung shows how she “taught” her artistic style to a machine — and shares the results of their collaboration after making an unexpected discovery: robots make mistakes, too. “Part of the beauty of human and machine systems is their inherent, shared fallibility,” she says.
[ TED ]
Last month at the Cooper Union in New York City, IEEE TechEthics hosted a public panel session on the facts and misperceptions of autonomous vehicles, part of the IEEE TechEthics Conversations Series. The speakers were: Jason Borenstein from Georgia Tech; Missy Cummings from Duke University; Jack Pokrzywa from SAE; and Heather M. Roff from Johns Hopkins Applied Physics Laboratory. The panel was moderated by Mark A. Vasquez, program manager for IEEE TechEthics.
[ IEEE TechEthics ]
Two videos this week from Lex Fridman’s AI podcast: Noam Chomsky, and Whitney Cummings.
[ AI Podcast ]
This week’s CMU RI Seminar comes from Jeff Clune at the University of Wyoming, on “Improving Robot and Deep Reinforcement Learning via Quality Diversity and Open-Ended Algorithms.”
Quality Diversity (QD) algorithms are those that seek to produce a diverse set of high-performing solutions to problems. I will describe them and a number of their positive attributes. I will then summarize our Nature paper on how they, when combined with Bayesian Optimization, produce a learning algorithm that enables robots, after being damaged, to adapt in 1-2 minutes in order to continue performing their mission, yielding state-of-the-art robot damage recovery. I will next describe our QD-based Go-Explore algorithm, which dramatically improves the ability of deep reinforcement learning algorithms to solve previously unsolvable problems wherein reward signals are sparse, meaning that intelligent exploration is required. Go-Explore solves Montezuma’s Revenge, considered by many to be a major AI research challenge. Finally, I will motivate research into open-ended algorithms, which seek to innovate endlessly, and introduce our POET algorithm, which generates its own training challenges while learning to solve them, automatically creating a curriculum for robots to learn an expanding set of diverse skills. POET creates and solves challenges that are unsolvable with traditional deep reinforcement learning techniques.
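As one concrete illustration of the QD idea, here is a minimal MAP-Elites-style loop in Python: an archive keeps the best solution found in each niche of a behavior descriptor, and new candidates come from mutating randomly chosen elites. This is a toy sketch under our own assumptions (made-up fitness function and descriptor), not code from the talk:

```python
import random

def evaluate(genome):
    """Toy task: fitness is negative squared distance to 0.5 per gene;
    the behavior descriptor is simply the genome's mean value."""
    fitness = -sum((g - 0.5) ** 2 for g in genome)
    behavior = sum(genome) / len(genome)
    return fitness, behavior

def map_elites(iterations=2000, genome_len=4, n_bins=10, seed=0):
    rng = random.Random(seed)
    archive = {}  # behavior bin index -> (fitness, genome)

    def try_add(genome):
        fitness, behavior = evaluate(genome)
        b = min(int(behavior * n_bins), n_bins - 1)
        # Keep the genome only if its niche is empty or it beats the elite.
        if b not in archive or fitness > archive[b][0]:
            archive[b] = (fitness, genome)

    # Seed the archive with random genomes.
    for _ in range(20):
        try_add([rng.random() for _ in range(genome_len)])

    # Main loop: pick a random elite, mutate it, try to add the child.
    for _ in range(iterations):
        _, parent = archive[rng.choice(list(archive))]
        child = [min(1.0, max(0.0, g + rng.gauss(0, 0.1))) for g in parent]
        try_add(child)
    return archive
```

The result is not one best solution but a whole shelf of good-and-different ones, which is the property the damage-recovery work exploits: when one behavior stops working, nearby niches offer ready-made alternatives.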
[ CMU RI ]
#436252 After AI, Fashion and Shopping Will ...
AI and broadband are eating retail for breakfast. In the first half of 2019, we’ve seen 19 retailer bankruptcies. And the retail apocalypse is only accelerating.
What’s coming next is astounding. Why drive when you can speak? Revenue from products purchased via voice commands is expected to quadruple from today’s US$2 billion to US$8 billion by 2023.
Virtual reality, augmented reality, and 3D printing are converging with artificial intelligence, drones, and 5G to transform shopping on every dimension. And as a result, shopping is becoming dematerialized, demonetized, democratized, and delocalized… a top-to-bottom transformation of the retail world.
Welcome to Part 1 of our series on the future of retail, a deep-dive into AI and its far-reaching implications.
Let’s dive in.
A Day in the Life of 2029
Welcome to April 21, 2029, a sunny day in Dallas. You’ve got a fundraising luncheon tomorrow, but nothing to wear. The last thing you want to do is spend the day at the mall.
No sweat. Your body image data is still current, as you were scanned only a week ago. Put on your VR headset and have a conversation with your AI. “It’s time to buy a dress for tomorrow’s event” is all you have to say. In a moment, you’re teleported to a virtual clothing store. Zero travel time. No freeway traffic, parking hassles, or angry hordes wielding baby strollers.
Instead, you’ve entered your own personal clothing store. Everything is in your exact size…. And I mean everything. The store has access to nearly every designer and style on the planet. Ask your AI to show you what’s hot in Shanghai, and presto—instant fashion show. Every model strutting down the runway looks exactly like you, only dressed in Shanghai’s latest.
When you’re done selecting an outfit, your AI pays the bill. And as your new clothes are being 3D printed at a warehouse—before speeding your way via drone delivery—a digital version has been added to your personal inventory for use at future virtual events.
The cost? Thanks to an era of no middlemen, less than half of what you pay in stores today. Yet this future is not all that far off…
Digital Assistants
Let’s begin with the basics: the act of turning desire into purchase.
Most of us navigate shopping malls or online marketplaces alone, hoping to stumble across the right item and fit. But if you’re lucky enough to employ a personal assistant, you have the luxury of describing what you want to someone who knows you well enough to buy that exact right thing most of the time.
For most of us who don’t, enter the digital assistant.
Right now, the four horsemen of the retail apocalypse are waging war for our wallets. Amazon’s Alexa, Google Assistant, Apple’s Siri, and Alibaba’s Tmall Genie are going head-to-head in a battle to become the platform du jour for voice-activated, AI-assisted commerce.
For baby boomers who grew up watching Captain Kirk talk to the Enterprise’s computer on Star Trek, digital assistants seem a little like science fiction. But for millennials, they’re just the next logical step in a world that is auto-magical.
And as those millennials enter their consumer prime, revenue from products purchased via voice-driven commands is projected to leap from today’s US$2 billion to US$8 billion by 2023.
We are already seeing a major change in purchasing habits. On average, consumers using Amazon Echo spent more than standard Amazon Prime customers: US$1,700 versus US$1,300.
And as far as an AI fashion advisor goes, those too are here, courtesy of both Alibaba and Amazon. During its annual Singles’ Day (November 11) shopping festival, Alibaba’s FashionAI concept store uses deep learning to make suggestions based on advice from human fashion experts and store inventory, driving a significant portion of the day’s US$25 billion in sales.
Similarly, Amazon’s shopping algorithm makes personalized clothing recommendations based on user preferences and social media behavior.
Customer Service
But AI is disrupting more than just personalized fashion and e-commerce. Its next big break will take place in the customer service arena.
According to a recent Zendesk study, good customer service increases the possibility of a purchase by 42 percent, while bad customer service translates into a 52 percent chance of losing that sale forever. This means more than half of us will stop shopping at a store due to a single disappointing customer service interaction. These are significant financial stakes. They’re also problems perfectly suited for an AI solution.
During the 2018 Google I/O conference, CEO Sundar Pichai demoed Google Duplex, the company’s next-generation digital assistant. Pichai played the audience a series of pre-recorded phone calls made by Google Duplex. The first call made a restaurant reservation; the second booked a haircut appointment, amusing the audience with a long “hmmm” mid-call.
In neither case did the person on the other end of the phone have any idea they were talking to an AI. The system’s success speaks to how seamlessly AI can blend into our retail lives and how convenient it will continue to make them. The same technology Pichai demonstrated that can make phone calls for consumers can also answer phones for retailers—a development that’s unfolding in two different ways:
(1) Customer service coaches: First, for organizations interested in keeping humans involved, there’s Beyond Verbal, a Tel Aviv-based startup that has built an AI customer service coach. Simply by analyzing customer voice intonation, the system can tell whether the person on the phone is about to blow a gasket, is genuinely excited, or anything in between.
Based on research of over 70,000 subjects in more than 30 languages, Beyond Verbal’s app can detect 400 different markers of human moods, attitudes, and personality traits. Already it’s been integrated into call centers to help human sales agents understand and react to customer emotions, making those calls more pleasant and more profitable.
For example, by analyzing word choice and vocal style, Beyond Verbal’s system can tell what kind of shopper the person on the line actually is. If they’re an early adopter, the AI alerts the sales agent to offer them the latest and greatest. If they’re more conservative, it suggests items more tried-and-true.
(2) Replacing customer service agents: Second, companies like New Zealand’s Soul Machines are working to replace human customer service agents altogether. Powered by IBM’s Watson, Soul Machines builds lifelike customer service avatars designed for empathy, making it one of many companies helping to pioneer the field of emotionally intelligent computing.
With their technology, 40 percent of all customer service interactions are now resolved with a high degree of satisfaction, no human intervention needed. And because the system is built using neural nets, it’s continuously learning from every interaction—meaning that percentage will continue to improve.
The number of these interactions continues to grow as well. Software manufacturer Autodesk now includes a Soul Machine avatar named AVA (Autodesk Virtual Assistant) in all of its new offerings. She lives in a small window on the screen, ready to soothe tempers, troubleshoot problems, and forever banish those long tech support hold times.
For Daimler Financial Services, Soul Machines built an avatar named Sarah, who helps customers with arguably three of modernity’s most annoying tasks: financing, leasing, and insuring a car.
This isn’t just about AI—it’s about AI converging with additional exponentials. Add networks and sensors to the story and it raises the scale of disruption, upping the FQ—the frictionless quotient—in our frictionless shopping adventure.
Final Thoughts
AI makes retail cheaper, faster, and more efficient, touching everything from customer service to product delivery. It also redefines the shopping experience, making it frictionless and—once we allow AI to make purchases for us—ultimately invisible.
Prepare for a future in which shopping is dematerialized, demonetized, democratized, and delocalized—otherwise known as “the end of malls.”
Of course, if you wait a few more years, you’ll be able to take an autonomous flying taxi to Westfield’s Destination 2028—so perhaps today’s converging exponentials are not so much spelling the end of malls but rather the beginning of an experience economy far smarter, more immersive, and whimsically imaginative than today’s shopping centers.
Either way, it’s a top-to-bottom transformation of the retail world.
Over the coming blog series, we will continue our discussion of the future of retail. Stay tuned to learn new implications for your business and how to future-proof your company in an age of smart, ultra-efficient, experiential retail.
Want a copy of my next book? If you’ve enjoyed this blogified snippet of The Future is Faster Than You Think, sign up here to be eligible for an early copy and access up to $800 worth of pre-launch giveaways!
Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 as an ongoing 25-year journey, a “countdown to the Singularity.”
If you’d like to learn more and consider joining our 2020 membership, apply here.
(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.
(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)
This article originally appeared on diamandis.com. Read the original article here.
Image Credit: Image by Pexels from Pixabay
#436220 How Boston Dynamics Is Redefining Robot ...
Gif: Bob O’Connor/IEEE Spectrum
With their jaw-dropping agility and animal-like reflexes, Boston Dynamics’ bioinspired robots have always seemed to have no equal. But that preeminence hasn’t stopped the company from pushing its technology to new heights, sometimes literally. Its latest crop of legged machines can trudge up and down hills, clamber over obstacles, and even leap into the air like a gymnast. There’s no denying their appeal: Every time Boston Dynamics uploads a new video to YouTube, it quickly racks up millions of views. These are probably the first robots you could call Internet stars.
Spot
Photo: Bob O’Connor
84 cm HEIGHT
25 kg WEIGHT
5.76 km/h SPEED
SENSING: Stereo cameras, inertial measurement unit, position/force sensors
ACTUATION: 12 DC motors
POWER: Battery (90 minutes per charge)
Boston Dynamics, once owned by Google’s parent company, Alphabet, and now by the Japanese conglomerate SoftBank, has long been secretive about its designs. Few publications have been granted access to its Waltham, Mass., headquarters, near Boston. But one morning this past August, IEEE Spectrum got in. We were given permission to do a unique kind of photo shoot that day. We set out to capture the company’s robots in action—running, climbing, jumping—by using high-speed cameras coupled with powerful strobes. The results you see on this page: freeze-frames of pure robotic agility.
We also used the photos to create interactive views, which you can explore online on our Robots Guide. These interactives let you spin the robots 360 degrees, or make them walk and jump on your screen.
Boston Dynamics has amassed a minizoo of robotic beasts over the years, with names like BigDog, SandFlea, and WildCat. When we visited, we focused on the two most advanced machines the company has ever built: Spot, a nimble quadruped, and Atlas, an adult-size humanoid.
Spot can navigate almost any kind of terrain while sensing its environment. Boston Dynamics recently made it available for lease, with plans to manufacture something like a thousand units per year. It envisions Spot, or even packs of them, inspecting industrial sites, carrying out hazmat missions, and delivering packages. And its YouTube fame has not gone unnoticed: Even entertainment is a possibility, with Cirque du Soleil auditioning Spot as a potential new troupe member.
“It’s really a milestone for us going from robots that work in the lab to these that are hardened for work out in the field,” Boston Dynamics CEO Marc Raibert says in an interview.
Atlas
Photo: Bob O’Connor
150 cm HEIGHT
80 kg WEIGHT
5.4 km/h SPEED
SENSING: Lidar and stereo vision
ACTUATION: 28 hydraulic actuators
POWER: Battery
Our other photographic subject, Atlas, is Boston Dynamics’ biggest celebrity. This 150-centimeter-tall (4-foot-11-inch-tall) humanoid is capable of impressive athletic feats. Its actuators are driven by a compact yet powerful hydraulic system that the company engineered from scratch. The unique system gives the 80-kilogram (176-pound) robot the explosive strength needed to perform acrobatic leaps and flips that don’t seem possible for such a large humanoid to do. Atlas has inspired a string of parody videos on YouTube and more than a few jokes about a robot takeover.
While Boston Dynamics excels at making robots, it has yet to prove that it can sell them. Ever since its founding in 1992 as a spin-off from MIT, the company has been an R&D-centric operation, with most of its early funding coming from U.S. military programs. The emphasis on commercialization seems to have intensified after the acquisition by SoftBank, in 2017. SoftBank’s founder and CEO, Masayoshi Son, is known to love robots—and profits.
The launch of Spot is a significant step for Boston Dynamics as it seeks to “productize” its creations. Still, Raibert says his long-term goals have remained the same: He wants to build machines that interact with the world dynamically, just as animals and humans do. Has anything changed at all? Yes, one thing, he adds with a grin. In his early career as a roboticist, he used to write papers and count his citations. Now he counts YouTube views.
In the Spotlight
Photo: Bob O’Connor
Boston Dynamics designed Spot as a versatile mobile machine suitable for a variety of applications. The company has not announced how much Spot will cost, saying only that it is being made available to select customers, who will be able to lease the robot. A payload bay lets you add up to 14 kilograms of extra hardware to the robot’s back. One of the accessories that Boston Dynamics plans to offer is a 6-degrees-of-freedom arm, which will allow Spot to grasp objects and open doors.
Super Senses
Photo: Bob O’Connor
Spot’s hardware is almost entirely custom-designed. It includes powerful processing boards for control as well as sensor modules for perception. The sensors are located on the front, rear, and sides of the robot’s body. Each module consists of a pair of stereo cameras, a wide-angle camera, and a texture projector, which enhances 3D sensing in low light. The sensors allow the robot to use the navigation method known as SLAM, or simultaneous localization and mapping, to get around autonomously.
Stepping Up
Photo: Bob O’Connor
In addition to its autonomous behaviors, Spot can also be steered by a remote operator with a game-style controller. But even when in manual mode, the robot still exhibits a high degree of autonomy. If there’s an obstacle ahead, Spot will go around it. If there are stairs, Spot will climb them. The robot goes into these operating modes and then performs the related actions completely on its own, without any input from the operator. To go down a flight of stairs, Spot walks backward, an approach Boston Dynamics says provides greater stability.
Funky Feet
Gif: Bob O’Connor/IEEE Spectrum
Spot’s legs are powered by 12 custom DC motors, each geared down to provide high torque. The robot can walk forward, sideways, and backward, and trot at a top speed of 1.6 meters per second. It can also turn in place. Other gaits include crawling and pacing. In one wildly popular YouTube video, Spot shows off its fancy footwork by dancing to the pop hit “Uptown Funk.”
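A trot coordinates diagonal leg pairs, and the basic timing can be sketched as a simple phase schedule. This is an illustrative toy of how a trot alternates its stance pairs, not Boston Dynamics’ actual controller:

```python
def trot_contacts(t, period=0.5):
    """Return which legs are in stance at time t (seconds) for an idealized
    trot: diagonal pairs (front-left + rear-right, front-right + rear-left)
    alternate every half gait cycle. The 0.5 s period is an assumption."""
    phase = (t % period) / period   # normalized position within the cycle
    pair_a = phase < 0.5            # FL + RR on the ground first
    return {"FL": pair_a, "RR": pair_a, "FR": not pair_a, "RL": not pair_a}
```

At any instant two diagonal legs support the body while the other two swing forward, which is what keeps a trot statically reasonable at speed; real controllers modulate the period, duty factor, and foot placement on top of this skeleton.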
Robot Blood
Photo: Bob O’Connor
Atlas is powered by a hydraulic system consisting of 28 actuators. These actuators are basically cylinders filled with pressurized fluid that can drive a piston with great force. Their high performance is due in part to custom servo valves that are significantly smaller and lighter than the aerospace models that Boston Dynamics had been using in earlier designs. Though not visible from the outside, the innards of an Atlas are filled with these hydraulic actuators as well as the lines of fluid that connect them. When one of those lines ruptures, Atlas bleeds the hydraulic fluid, which happens to be red.
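The force available from such a cylinder follows directly from pressure times piston area. A quick back-of-the-envelope helper (the example numbers are illustrative, not Atlas’s actual specifications):

```python
import math

def piston_force_newtons(pressure_mpa, bore_mm):
    """Hydraulic cylinder force = pressure x piston area (extension side;
    rod area and seal friction ignored for this rough estimate)."""
    radius_m = (bore_mm / 2) / 1000.0
    area_m2 = math.pi * radius_m ** 2
    return pressure_mpa * 1e6 * area_m2

# e.g. a hypothetical 20 MPa supply driving a 25 mm bore yields roughly 9.8 kN,
# which is why even small cylinders can move an 80 kg robot explosively.
```

The quadratic dependence on bore is also why shrinking the servo valves rather than the cylinders is such a win: the force-producing area stays, while the plumbing gets lighter.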
Next Generation
Gif: Bob O’Connor/IEEE Spectrum
The current version of Atlas is a thorough upgrade of the original model, which was built for the DARPA Robotics Challenge in 2015. The newest robot is lighter and more agile. Boston Dynamics used industrial-grade 3D printers to make key structural parts, giving the robot a greater strength-to-weight ratio than earlier designs. The next-gen Atlas can also do something that its predecessor, famously, could not: It can get up after a fall.
Walk This Way
Photo: Bob O’Connor
To control Atlas, an operator provides general steering via a manual controller while the robot uses its stereo cameras and lidar to adjust to changes in the environment. Atlas can also perform certain tasks autonomously. For example, if you add special bar-code-type tags to cardboard boxes, Atlas can pick them up and stack them or place them on shelves.
Biologically Inspired
Photos: Bob O’Connor
Atlas’s control software doesn’t explicitly tell the robot how to move its joints, but rather it employs mathematical models of the underlying physics of the robot’s body and how it interacts with the environment. Atlas relies on its whole body to balance and move. When jumping over an obstacle or doing acrobatic stunts, the robot uses not only its legs but also its upper body, swinging its arms to propel itself just as an athlete would.
This article appears in the December 2019 print issue as “By Leaps and Bounds.”