20 Technology Metatrends That Will ...
In the decade ahead, waves of exponential technological advancements are stacking atop one another, eclipsing decades of breakthroughs in scale and impact.
Emerging from these waves are 20 “metatrends” likely to revolutionize entire industries (old and new), redefine tomorrow’s generation of businesses and contemporary challenges, and transform our livelihoods from the bottom up.
Among these metatrends are augmented human longevity, the surging smart economy, AI-human collaboration, urbanized cellular agriculture, and high-bandwidth brain-computer interfaces, just to name a few.
It is here that master entrepreneurs and their teams must see beyond the immediate implications of a given technology, capturing second-order, Google-sized business opportunities on the horizon.
Welcome to a new decade of runaway technological booms, historic watershed moments, and extraordinary abundance.
Let’s dive in.
20 Metatrends for the 2020s
(1) Continued increase in global abundance: The number of individuals in extreme poverty continues to drop, as the middle-income population continues to rise. This metatrend is driven by the convergence of high-bandwidth and low-cost communication, ubiquitous AI on the cloud, and growing access to AI-aided education and AI-driven healthcare. Everyday goods and services (finance, insurance, education, and entertainment) are being digitized and becoming fully demonetized, available to the rising billion on mobile devices.
(2) Global gigabit connectivity will connect everyone and everything, everywhere, at ultra-low cost: The deployment of both licensed and unlicensed 5G, plus the launch of a multitude of global satellite networks (OneWeb, Starlink, etc.), will allow for ubiquitous, low-cost communications for everyone, everywhere, not to mention the connection of trillions of devices. And today’s skyrocketing connectivity is bringing online an additional three billion individuals, driving tens of trillions of dollars into the global economy. This metatrend is driven by the convergence of low-cost space launches, hardware advancements, 5G networks, artificial intelligence, materials science, and surging computing power.
(3) The average human healthspan will increase by 10+ years: A dozen game-changing biotech and pharmaceutical solutions (currently in Phase 1, 2, or 3 clinical trials) will reach consumers this decade, adding an additional decade to the human healthspan. Technologies include stem cell supply restoration, Wnt pathway manipulation, senolytic medicines, a new generation of endo-vaccines, GDF-11, and supplementation of NMN/NAD+, among several others. And as machine learning continues to mature, AI is set to unleash countless new drug candidates, ready for clinical trials. This metatrend is driven by the convergence of genome sequencing, CRISPR technologies, AI, quantum computing, and cellular medicine.
(4) An age of capital abundance will see increasing access to capital everywhere: From 2016 to 2018 (and likely in 2019), humanity hit all-time highs in the global flow of seed capital, venture capital, and sovereign wealth fund investments. While this trend will witness some ups and downs in the wake of future recessions, it is expected to continue its overall upward trajectory. Capital abundance leads to the funding and testing of ‘crazy’ entrepreneurial ideas, which in turn accelerate innovation. Already, $300 billion in crowdfunding is anticipated by 2025, democratizing capital access for entrepreneurs worldwide. This metatrend is driven by the convergence of global connectivity, dematerialization, demonetization, and democratization.
(5) Augmented reality and the spatial web will achieve ubiquitous deployment: The combination of augmented reality (yielding Web 3.0, or the spatial web) and 5G networks (offering 100Mb/s – 10Gb/s connection speeds) will transform how we live our everyday lives, impacting every industry from retail and advertising to education and entertainment. Consumers will play, learn, and shop throughout the day in a newly intelligent, virtually overlaid world. This metatrend will be driven by the convergence of hardware advancements, 5G networks, artificial intelligence, materials science, and surging computing power.
(6) Everything is smart, embedded with intelligence: The price of specialized machine learning chips is dropping rapidly with a rise in global demand. Combined with the explosion of low-cost microscopic sensors and the deployment of high-bandwidth networks, we’re heading into a decade wherein every device becomes intelligent. Your child’s toy remembers her face and name. Your kids’ drone safely and diligently follows and videos all the children at the birthday party. Appliances respond to voice commands and anticipate your needs.
(7) AI will achieve human-level intelligence: As predicted by technologist and futurist Ray Kurzweil, artificial intelligence will reach human-level performance this decade (by 2030). Through the 2020s, AI algorithms and machine learning tools will be increasingly made open source, available on the cloud, allowing any individual with an internet connection to supplement their cognitive ability, augment their problem-solving capacity, and build new ventures at a fraction of the current cost. This metatrend will be driven by the convergence of global high-bandwidth connectivity, neural networks, and cloud computing. Every industry, spanning industrial design, healthcare, education, and entertainment, will be impacted.
(8) AI-human collaboration will skyrocket across all professions: The rise of “AI as a Service” (AIaaS) platforms will enable humans to partner with AI in every aspect of their work, at every level, in every industry. AIs will become entrenched in everyday business operations, serving as cognitive collaborators to employees—supporting creative tasks, generating new ideas, and tackling previously unattainable innovations. In some fields, partnership with AI will even become a requirement. For example: in the future, making certain diagnoses without the consultation of AI may be deemed malpractice.
(9) Most individuals will adopt a JARVIS-like “software shell” to improve their quality of life: As services like Alexa, Google Home, and Apple HomePod expand in functionality, they will eventually travel beyond the home and become your cognitive prosthetic 24/7. Imagine a secure JARVIS-like software shell that you give permission to listen to all your conversations, read your email, monitor your blood chemistry, etc. With access to such data, these AI-enabled software shells will learn your preferences, anticipate your needs and behavior, shop for you, monitor your health, and help you problem-solve in support of your mid- and long-term goals.
(10) Globally abundant, cheap renewable energy: Continued advancements in solar, wind, geothermal, hydroelectric, nuclear, and localized grids will drive humanity towards cheap, abundant, and ubiquitous renewable energy. The price of renewables will drop below one cent per kilowatt-hour, just as storage drops below a mere three cents per kilowatt-hour, resulting in the majority displacement of fossil fuels globally. And as the world’s poorest countries are also the world’s sunniest, the democratization of both new and traditional storage technologies will grant energy abundance to those already bathed in sunlight.
(11) The insurance industry transforms from “recovery after risk” to “prevention of risk”: Today, fire insurance pays you after your house burns down; life insurance pays your next-of-kin after you die; and health insurance (which is really sick insurance) pays only after you get sick. This next decade, a new generation of insurance providers will leverage the convergence of machine learning, ubiquitous sensors, low-cost genome sequencing, and robotics to detect risk, prevent disaster, and guarantee safety before any costs are incurred.
(12) Autonomous vehicles and flying cars will redefine human travel (soon to be far faster and cheaper): Fully autonomous vehicles, car-as-a-service fleets, and aerial ride-sharing (flying cars) will be fully operational in most major metropolitan cities in the coming decade. The cost of transportation will plummet 3-4X, transforming real estate, finance, insurance, the materials economy, and urban planning. Where you live and work, and how you spend your time, will all be fundamentally reshaped by this future of human travel. Your kids and elderly parents will never drive. This metatrend will be driven by the convergence of machine learning, sensors, materials science, battery storage improvements, and ubiquitous gigabit connections.
(13) On-demand production and on-demand delivery will birth an “instant economy of things”: Urban dwellers will learn to expect “instant fulfillment” of their retail orders as drone and robotic last-mile delivery services carry products from local supply depots directly to their doorsteps. Riding the deployment of regional on-demand digital manufacturing (3D printing farms), consumers will be able to obtain individualized products within hours, anywhere, anytime. This metatrend is driven by the convergence of networks, 3D printing, robotics, and artificial intelligence.
(14) Ability to sense and know anything, anytime, anywhere: We’re rapidly approaching the era wherein 100 billion sensors (the Internet of Everything) are monitoring and sensing (imaging, listening, measuring) every facet of our environments, all the time. Global imaging satellites, drones, autonomous car LIDARs, and forward-looking augmented reality (AR) headset cameras are all part of a global sensor matrix, together allowing us to know anything, anytime, anywhere. This metatrend is driven by the convergence of terrestrial, atmospheric, and space-based sensors, vast data networks, and machine learning. In this future, it’s not “what you know,” but rather “the quality of the questions you ask” that will be most important.
(15) Disruption of advertising: As AI becomes increasingly embedded in everyday life, your custom AI will soon understand what you want better than you do. In turn, we will begin to both trust and rely upon our AIs to make most of our buying decisions, turning over shopping to AI-enabled personal assistants. Your AI might make purchases based upon your past desires, current shortages, conversations you’ve allowed your AI to listen to, or by tracking where your pupils focus on a virtual interface (i.e. what catches your attention). As a result, the advertising industry—which normally competes for your attention (whether at the Super Bowl or through search engines)—will have a hard time influencing your AI. This metatrend is driven by the convergence of machine learning, sensors, augmented reality, and 5G/networks.
(16) Cellular agriculture moves from the lab into inner cities, providing high-quality protein that is cheaper and healthier: This next decade will witness the birth of the most ethical, nutritious, and environmentally sustainable protein production system devised by humankind. Stem cell-based ‘cellular agriculture’ will allow the production of beef, chicken, and fish anywhere, on-demand, with far higher nutritional content, and a vastly lower environmental footprint than traditional livestock options. This metatrend is enabled by the convergence of biotechnology, materials science, machine learning, and AgTech.
(17) High-bandwidth brain-computer interfaces (BCIs) will come online for public use: Technologist and futurist Ray Kurzweil has predicted that in the mid-2030s, we will begin connecting the human neocortex to the cloud. This next decade will see tremendous progress in that direction, first serving those with spinal cord injuries, whereby patients will regain both sensory capacity and motor control. Yet beyond assisting those with motor function loss, several BCI pioneers are now attempting to supplement their baseline cognitive abilities, a pursuit with the potential to increase their sensorium, memory, and even intelligence. This metatrend is fueled by the convergence of materials science, machine learning, and robotics.
(18) High-resolution VR will transform both retail and real estate shopping: High-resolution, lightweight virtual reality headsets will allow individuals at home to shop for everything from clothing to real estate from the convenience of their living room. Need a new outfit? Your AI knows your detailed body measurements and can whip up a fashion show featuring your avatar wearing the latest 20 designs on a runway. Want to see how your furniture might look inside a house you’re viewing online? No problem! Your AI can populate the property with your virtualized inventory and give you a guided tour. This metatrend is enabled by the convergence of: VR, machine learning, and high-bandwidth networks.
(19) Increased focus on sustainability and the environment: An increase in global environmental awareness and concern over global warming will drive companies to invest in sustainability, both from a necessity standpoint and for marketing purposes. Breakthroughs in materials science, enabled by AI, will allow companies to drive tremendous reductions in waste and environmental contamination. One company’s waste will become another company’s profit center. This metatrend is enabled by the convergence of materials science, artificial intelligence, and broadband networks.
(20) CRISPR and gene therapies will minimize disease: A vast range of infectious diseases, ranging from AIDS to Ebola, is now curable. In addition, gene-editing technologies continue to advance in precision and ease of use, allowing families to treat and ultimately cure hundreds of inheritable genetic diseases. This metatrend is driven by the convergence of various biotechnologies (CRISPR, gene therapy), genome sequencing, and artificial intelligence.
Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”
If you’d like to learn more and consider joining our 2020 membership, apply here.
(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.
(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)
This article originally appeared on diamandis.com. Read the original article here.
Image Credit: Image by Free-Photos from Pixabay
Tech’s Biggest Leaps From the Last 10 ...
As we enter our third decade in the 21st century, it seems appropriate to reflect on the ways technology developed and note the breakthroughs that were achieved in the last 10 years.
The 2010s saw IBM’s Watson win a game of Jeopardy!, ushering in mainstream awareness of machine learning, along with DeepMind’s AlphaGo defeating the world’s Go champion. It was the decade that industrial tools like drones, 3D printers, genetic sequencing, and virtual reality (VR) all became consumer products. And it was a decade in which some alarming trends related to surveillance, targeted misinformation, and deepfakes came online.
For better or worse, the past decade was a breathtaking era in human history in which the idea of exponential growth in information technologies powered by computation became a mainstream concept.
As I did last year for 2018, I’ve asked a collection of experts across the Singularity University faculty to help frame the biggest breakthroughs and moments that gave shape to the past 10 years. I asked them what, in their opinion, was the most important breakthrough in their respective fields over the past decade.
My own answer to this question, focused in the space of augmented and virtual reality, would be the stunning announcement in March of 2014 that Facebook acquired Oculus VR for $2 billion. Although VR technology had been around for a while, it was at this precise moment that VR arrived as a consumer technology platform. Facebook, largely fueled by the singular interest of CEO Mark Zuckerberg, has funded the development of this industry, keeping alive the hope that consumer VR can become a sustainable business. In the meantime, VR has continued to grow in sophistication and usefulness, though it has yet to truly take off as a mainstream concept. That will hopefully be a development for the 2020s.
Below is a decade in review across the technology areas that are giving shape to our modern world, as described by the SU community of experts.
Digital Biology
Dr. Tiffany Vora | Faculty Director and Vice Chair, Digital Biology and Medicine, Singularity University
In my mind, this decade of astounding breakthroughs in the life sciences and medicine rests on the achievement of the $1,000 human genome in 2016. More-than-exponentially falling costs of DNA sequencing have driven advances in medicine, agriculture, ecology, genome editing, synthetic biology, the battle against climate change, and our fundamental understanding of life and its breathtaking connections. The “digital” revolution in DNA constituted an important model for harnessing other types of biological information, from personalized bio data to massive datasets spanning populations and species.
Crucially, by aggressively driving down the cost of such analyses, researchers and entrepreneurs democratized access to the source code of life—with attendant financial, cultural, and ethical consequences. Exciting, but take heed: Veritas Genetics spearheaded a $600 genome in 2019, only to shutter its US operations after its funding became entangled in the US-China trade war. Stay tuned through the early 2020s to see the pricing of DNA sequencing fall even further … and to experience the many ways that cheaper, faster harvesting of biological data will enrich your daily life.
Cryptocurrency
Alex Gladstein | Chief Strategy Officer, Human Rights Foundation
The past decade has seen Bitcoin go from just an idea on an obscure online message board to a global financial network carrying more than 100 billion dollars in value. And we’re just getting started. One recent defining moment in the cryptocurrency space has been a stunning trend underway in Venezuela, where today, the daily dollar-denominated value of Bitcoin traded now far exceeds the daily dollar-denominated value traded on the Caracas Stock Exchange. It’s just one country, but it’s a significant country, and a paradigm shift.
Governments and corporations are following Bitcoin’s success too, and are looking to launch their own digital currencies. China will launch its “DC/EP” project in the coming months, and Facebook is trying to kickstart its Libra project. There are technical and regulatory uncertainties for both, but one thing is for certain: the era of digital currency has arrived.
Business Strategy and Entrepreneurship
Pascal Finette | Chair, Entrepreneurship and Open Innovation, Singularity University
For me, without a doubt, the most interesting and quite possibly ground-shifting development in the fields of entrepreneurship and corporate innovation over the last ten years is the rapid maturing of customer-driven product development frameworks such as Lean Startup, and their subsequent adoption by corporations for their own innovation purposes.
Tools and frameworks like the Business Model Canvas, agile (software) development and the aforementioned Lean Startup methodology fundamentally shifted the way we think and go about building products, services, and companies, with many of these tools bursting onto the startup scene in the late 2000s and early 2010s.
As these tools matured, they found mass adoption not only in startups around the world, but also in incumbent companies, which eagerly adopted them to increase their own innovation velocity and success.
Energy
Ramez Naam | Co-Chair, Energy and Environment, Singularity University
The 2010s were the decade that saw clean electricity, energy storage, and electric vehicles break through price and performance barriers around the world. Solar, wind, batteries, and EVs started this decade as technologies that had to be subsidized. That was the first phase of their existence. Now they’re entering their third, most disruptive phase, where shifting to clean energy and mobility is cheaper than continuing to use existing coal, gas, or oil infrastructure.
Consider that at the start of 2010, there was no place on earth where building new solar or wind was cheaper than building new coal or gas power generation. By 2015, in some of the sunniest and windiest places on earth, solar and wind had entered their second phase, where they were cost-competitive for new power. And then, in 2018 and 2019, we started to see the edge of the third phase, as building new solar and wind, in some parts of the world, was cheaper than operating existing coal or gas power plants.
Food Technology
Liz Specht, PhD | Associate Director of Science & Technology, The Good Food Institute
The arrival of mainstream plant-based meat is easily the food tech advance of the decade. Meat analogs have, of course, been around forever. But only in the last decade have companies like Beyond Meat and Impossible Foods decided to cut animals out of the process and build no-compromise meat directly from plants.
Plant-based meat is already transforming the fast-food industry. For example, the introduction of the Impossible Whopper led Burger King to their most profitable quarter in many years. But the global food industry as a whole is shifting as well. Tyson, JBS, Nestle, Cargill, and many others are all embracing plant-based meat.
Augmented and Virtual Reality
Jody Medich | CEO, Superhuman-x
The breakthrough moment for augmented and virtual reality came in 2013 when Palmer Luckey took apart an Android smartphone and added optic lenses to make the first version of the Oculus Rift. Prior to that moment, we struggled with miniaturizing the components needed to develop low-latency head-worn devices. But thanks to the smartphone race kicked off by the iPhone’s debut in 2007, we finally had a suite of sensors, chips, displays, and computing power small enough to put on the head.
What will the next 10 years bring? Look for AR/VR to explode in a big way. We are right on the cusp of that tipping point when the tech is finally “good enough” for our linear expectations. Given all it can do today, we can’t even picture what’s possible. Just as today we can’t function without our phones, by 2029 we’ll feel lost without some AR/VR product. It will be the way we interact with computing, smart objects, and AI. Tim Cook, Apple CEO, predicts it will replace all of today’s computing devices. I can’t wait.
Philosophy of Technology
Alix Rübsaam | Faculty Fellow, Singularity University, Philosophy of Technology/Ethics of AI
The last decade has seen a significant shift in our general attitude towards the algorithms that we now know dictate much of our surroundings. Looking back at the beginning of the decade, it seems we were blissfully unaware of how the data we freely and willingly surrendered would feed the algorithms that would come to shape every aspect of our daily lives: the news we consume, the products we purchase, the opinions we hold, etc.
If I were to isolate a single publication that contributed greatly to the shift in public discourse on algorithms, it would have to be Cathy O’Neil’s Weapons of Math Destruction from 2016. It remains a comprehensive, readable, and highly informative insight into how algorithms dictate our finances, our jobs, where we go to school, or if we can get health insurance. Its publication represents a pivotal moment when the general public started to question whether we should be OK with outsourcing decision making to these opaque systems.
The ubiquity of ethical guidelines for AI and algorithms published just in the last year (perhaps most comprehensively by the AI Now Institute) fully demonstrates the shift in public opinion of this decade.
Data Science
Ola Kowalewski | Faculty Fellow, Singularity University, Data Innovation
In the last decade we entered the era of internet and smartphone ubiquity. The number of internet users doubled, with nearly 60 percent of the global population connected online, and over 35 percent of the global population now owns a smartphone. With billions of people in a state of constant connectedness, and therefore in a state of constant surveillance, the companies that have built the tech infrastructure and information pipelines have dominated the global economy. This shift from tech companies being the underdogs to arguably the world’s major powers sets the landscape we enter for the next decade.
Global Grand Challenges
Darlene Damm | Vice Chair, Faculty, Global Grand Challenges, Singularity University
The biggest breakthrough over the last decade in social impact and technology is that the social impact sector switched from seeing technology as something problematic to avoid, to one of the most effective ways to create social change. We now see people using exponential technologies to solve all sorts of social challenges in areas ranging from disaster response to hunger to shelter.
The world’s leading social organizations, such as UNICEF and the World Food Programme, have launched their own venture funds and accelerators, and the United Nations recently declared that digitization is revolutionizing global development.
Digital Biology
Raymond McCauley | Chair, Digital Biology, Singularity University, Co-Founder & Chief Architect, BioCurious; Principal, Exponential Biosciences
CRISPR is bringing about a revolution in genetic engineering. It’s obvious, and it’s huge. What may not be so obvious is the widespread adoption of genetic testing. And this may have an even longer-lasting effect. It’s used to test new babies, to solve medical mysteries, and to catch serial killers. Thanks to holiday ads from 23andMe and Ancestry.com, it’s everywhere. Testing your DNA is now a common over-the-counter product. People are using it to set their diet, to pick drugs, and even for dating (or at least picking healthy mates).
And we’re just in the early stages. Further down the line, doing large-scale studies on more people, with more data, will lead to the use of polygenic risk scores to help us rank our genetic potential for everything from getting cancer to being a genius. Can you imagine what it would be like for parents to pick new babies, GATTACA-style, to get the smartest kids? You don’t have to; it’s already happening.
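The polygenic risk scores mentioned above are, at their simplest, weighted sums: each genetic variant contributes its estimated effect size multiplied by the number of risk alleles a person carries. The sketch below is purely illustrative; the variant IDs, effect sizes, and genotypes are invented, whereas real scores aggregate thousands to millions of variants with weights estimated from genome-wide association studies.

```python
# Toy polygenic risk score (PRS): a weighted sum of risk-allele counts.
# All variant IDs, effect sizes, and genotypes here are hypothetical.

# Invented effect sizes (e.g. log-odds per risk allele) for three variants.
EFFECT_SIZES = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

def polygenic_score(genotype):
    """genotype maps variant ID -> risk-allele count (0, 1, or 2)."""
    return sum(EFFECT_SIZES[variant] * count
               for variant, count in genotype.items())

person_a = {"rs0001": 2, "rs0002": 0, "rs0003": 1}
person_b = {"rs0001": 0, "rs0002": 2, "rs0003": 0}

print(round(polygenic_score(person_a), 3))  # 0.54
print(round(polygenic_score(person_b), 3))  # -0.1
```

The "ranking" the text describes is then just a matter of comparing such scores across individuals against a reference population, which is also where the scientific and ethical controversy begins.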
Artificial Intelligence
Neil Jacobstein | Chair, Artificial Intelligence and Robotics, Singularity University
The convergence of exponentially improved computing power, the deep learning algorithm, and access to massive data resulted in a series of AI breakthroughs over the past decade. These included: vastly improved accuracy in identifying images, making self-driving cars practical, beating several world champions in Go, and identifying gender, smoking status, and age from retinal fundus photographs.
Combined, these breakthroughs convinced researchers and investors that after 50+ years of research and development, AI was ready for prime-time applications. Now, virtually every field of human endeavor is being revolutionized by machine learning. We still have a long way to go to achieve human-level intelligence and beyond, but the pace of worldwide improvement is blistering.
Hod Lipson | Professor of Engineering and Data Science, Columbia University
The biggest moment in AI in the past decade (and in its entire history, in my humble opinion) was midnight, Pacific time, September 30, 2012: the moment when machines finally opened their eyes. It was the moment when deep learning took off, breaking stagnant decades of machine blindness, when AI couldn’t reliably tell apart even a cat from a dog. That seemingly trivial accomplishment—a task any one-year-old child can do—has had a ripple effect on AI applications from driverless cars to health diagnostics. And this is just the beginning of what is sure to be a Cambrian explosion of AI.
Neuroscience
Divya Chander | Chair, Neuroscience, Singularity University
If the 2000s were the decade of brain mapping, then the 2010s were the decade of brain writing. Optogenetics, a technique for precisely mapping and controlling neurons and neural circuits using genetically-directed light, saw incredible growth in the 2010s.
Also in the last 10 years, neuromodulation, or the ability to rewire the brain using both invasive and non-invasive interfaces and energy, has exploded in use and form. For instance, the BrainGate consortium showed how electrode arrays implanted into the motor cortex could let paralyzed people direct a robotic arm with their thoughts. These technologies, alone or in combination with robotics, exoskeletons, and flexible, implantable electronics, also make possible a future of human augmentation.
Image Credit: Image by Jorge Guillen from Pixabay
If Machines Want to Make Art, Will ...
Assuming that the emergence of consciousness in artificial minds is possible, those minds will feel the urge to create art. But will we be able to understand it? To answer this question, we need to consider two subquestions: when does the machine become an author of an artwork? And how can we form an understanding of the art that it makes?
Empathy, we argue, is the force behind our capacity to understand works of art. Think of what happens when you are confronted with an artwork. We maintain that, to understand the piece, you use your own conscious experience to ask what could possibly motivate you to make such an artwork yourself—and then you use that first-person perspective to try to come to a plausible explanation that allows you to relate to the artwork. Your interpretation of the work will be personal and could differ significantly from the artist’s own reasons, but if we share sufficient experiences and cultural references, it might be a plausible one, even for the artist. This is why we can relate so differently to a work of art after learning that it is a forgery or imitation: the artist’s intent to deceive or imitate is very different from the attempt to express something original. Gathering contextual information before jumping to conclusions about other people’s actions—in art, as in life—can enable us to relate better to their intentions.
But the artist and you share something far more important than cultural references: you share a similar kind of body and, with it, a similar kind of embodied perspective. Our subjective human experience stems, among many other things, from being born and slowly educated within a society of fellow humans, from fighting the inevitability of our own death, from cherishing memories, from the lonely curiosity of our own mind, from the omnipresence of the needs and quirks of our biological body, and from the way it dictates the space- and time-scales we can grasp. All conscious machines will have embodied experiences of their own, but in bodies that will be entirely alien to us.
We are able to empathize with nonhuman characters or intelligent machines in human-made fiction because they have been conceived by other human beings from the only subjective perspective accessible to us: “What would it be like for a human to behave as x?” In order to understand machinic art as such—and assuming that we stand a chance of even recognizing it in the first place—we would need a way to conceive a first-person experience of what it is like to be that machine. That is something we cannot do even for beings that are much closer to us. It might very well happen that we understand some actions or artifacts created by machines of their own volition as art, but in doing so we will inevitably anthropomorphize the machine’s intentions. Art made by a machine can be meaningfully interpreted in a way that is plausible only from the perspective of that machine, and any coherent anthropomorphized interpretation will be implausibly alien from the machine perspective. As such, it will be a misinterpretation of the artwork.
But what if we grant the machine privileged access to our ways of reasoning, to the peculiarities of our perception apparatus, to endless examples of human culture? Wouldn’t that enable the machine to make art that a human could understand? Our answer is yes, but this would also make the artworks human—not authentically machinic. All examples so far of “art made by machines” are actually just straightforward examples of human art made with computers, with the artists being the computer programmers. It might seem like a strange claim: how can the programmers be the authors of the artwork if, most of the time, they can’t control—or even anticipate—the actual materializations of the artwork? It turns out that this is a long-standing artistic practice.
Suppose that your local orchestra is playing Beethoven’s Symphony No 7 (1812). Even though Beethoven will not be directly responsible for any of the sounds produced there, you would still say that you are listening to Beethoven. Your experience might depend considerably on the interpretation of the performers, the acoustics of the room, the behavior of fellow audience members or your state of mind. Those and other aspects are the result of choices made by specific individuals or of accidents happening to them. But the author of the music? Ludwig van Beethoven. Let’s say that, as a somewhat odd choice for the program, John Cage’s Imaginary Landscape No 4 (March No 2) (1951) is also played, with 24 performers controlling 12 radios according to a musical score. In this case, the responsibility for the sounds being heard should be attributed to unsuspecting radio hosts, or even to electromagnetic fields. Yet, the shaping of sounds over time—the composition—should be credited to Cage. Each performance of this piece will vary immensely in its sonic materialization, but it will always be a performance of Imaginary Landscape No 4.
Why should we change these principles when artists use computers if, in these respects at least, computer art does not bring anything new to the table? The (human) artists might not be in direct control of the final materializations, or even be able to predict them, but despite that, they are the authors of the work. Various materializations of the same idea—in this case formalized as an algorithm—are instantiations of the same work manifesting different contextual conditions. In fact, a common use of computation in the arts is the production of variations of a process, and artists make extensive use of systems that are sensitive to initial conditions, external inputs, or pseudo-randomness to deliberately avoid repetition of outputs. Having a computer execute a procedure to build an artwork, even if using pseudo-random processes or machine-learning algorithms, is no different from throwing dice to arrange a piece of music, or pursuing innumerable variations of the same formula. After all, the idea of machines that make art has an artistic tradition that long predates the current trend of artworks made by artificial intelligence.
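The point above can be made concrete with a toy sketch (ours, not from the essay; all names here are invented for illustration): a fixed procedure plus a pseudo-random seed yields endlessly varying materializations of one and the same "work," just as each performance of Cage's piece is a different sounding of the same score.

```python
import random

def compose(seed, steps=10):
    """One materialization of the same underlying 'work'.

    The algorithm (the composition) is fixed; the seed plays the
    role of Cage's radios: it supplies contingent detail without
    changing the authorship of the procedure itself.
    """
    rng = random.Random(seed)
    piece = []
    x, y = 0.0, 0.0
    for _ in range(steps):
        # Pseudo-random walk: each step is an 'accident' the
        # author permits but does not individually choose.
        x += rng.uniform(-1, 1)
        y += rng.uniform(-1, 1)
        piece.append((round(x, 3), round(y, 3)))
    return piece

# Two performances of the 'same work': identical procedure,
# different contingent materializations.
a = compose(seed=7)
b = compose(seed=8)
assert a != b                  # distinct materializations
assert a == compose(seed=7)    # same seed, same performance
```

The authorship question tracks the function, not its outputs: whoever wrote `compose` is the author, however many variations it emits.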
Machinic art is a term that we believe should be reserved for art made by an artificial mind’s own volition, not for that based on (or directed towards) an anthropocentric view of art. From a human point of view, machinic artworks will still be procedural, algorithmic, and computational. They will be generative, because they will be autonomous from a human artist. And they might be interactive, with humans or other systems. But they will not be the result of a human deferring decisions to a machine, because the first of those—the decision to make art—needs to be the result of a machine’s volition, intentions, and decisions. Only then will we no longer have human art made with computers, but proper machinic art.
The problem is not whether machines will or will not develop a sense of self that leads to an eagerness to create art. The problem is that if—or when—they do, they will have such a different Umwelt that we will be completely unable to relate to it from our own subjective, embodied perspective. Machinic art will always lie beyond our ability to understand it because the boundaries of our comprehension—in art, as in life—are those of the human experience.
This article was originally published at Aeon and has been republished under Creative Commons.
Image Credit: Rene Böhmer / Unsplash
#436482 50+ Reasons Our Favorite Emerging ...
For most of history, technology was about atoms, the manipulation of physical stuff to extend humankind’s reach. But in the last five or six decades, atoms have partnered with bits, the elemental “particles” of the digital world as we know it today. As computing has advanced at the accelerating pace described by Moore’s Law, technological progress has become increasingly digitized.
SpaceX lands and reuses rockets and self-driving cars do away with drivers thanks to automation, sensors, and software. Businesses find and hire talent from anywhere in the world, and for better and worse, a notable fraction of the world learns and socializes online. From the sequencing of DNA to artificial intelligence and from 3D printing to robotics, more and more new technologies are moving at a digital pace and quickly emerging to reshape the world around us.
In 2019, stories charting the advances of some of these digital technologies consistently made headlines. Below is what is, at best, an incomplete list of some of the big stories that caught our eye this year. With so much happening, it’s likely we’ve missed some notable headlines and advances—as well as some of your personal favorites. In either case, share your thoughts and candidates for the biggest stories and breakthroughs on Facebook and Twitter.
With that said, let’s dive straight into the year.
Artificial Intelligence
No technology garnered as much attention as AI in 2019. With good reason. Intelligent computer systems are transitioning from research labs to everyday life. Healthcare, weather forecasting, business process automation, traffic congestion—you name it, and machine learning algorithms are likely beginning to work on it. Yet, AI has also been hyped up and overmarketed, and the latest round of AI technology, deep learning, is likely only one piece of the AI puzzle.
This year, OpenAI’s game-playing algorithms beat some of the world’s best Dota 2 players, DeepMind notched impressive wins in StarCraft, and Carnegie Mellon University’s Pluribus “crushed” pros at six-player Texas Hold’em.
Speaking of games, AI’s mastery of the incredibly complex game of Go prompted a former world champion to quit, stating that AI “cannot be defeated.”
But it isn’t just fun and games. Practical, powerful applications that make the best of AI’s pattern recognition abilities are on the way. Insilico Medicine, for example, used machine learning to help discover and design a new drug in just 46 days, and DeepMind is focused on using AI to crack protein folding.
Of course, AI can be a double-edged sword. When it comes to deepfakes and fake news, for example, AI makes both easier to create and detect, and early in the year, OpenAI created and announced a powerful AI text generator but delayed releasing it for fear of malicious use.
Recognizing AI’s power for good and ill, the OECD, EU, World Economic Forum, and China all took a stab at defining an ethical framework for the development and deployment of AI.
Computing Systems
Processors and chips kickstarted the digital boom and are still the bedrock of continued growth. While progress in traditional silicon-based chips continues, it’s slowing and getting more expensive. Some say we’re reaching the end of Moore’s Law. While that may be the case for traditional chips, specialized chips and entirely new kinds of computing are waiting in the wings.
In fall 2019, Google confirmed its quantum computer had achieved “quantum supremacy,” a term that means a quantum computer can perform a calculation a normal computer cannot. IBM pushed back on the claim, and it should be noted the calculation was highly specialized. But while it’s still early days, there does appear to be some real progress (and more to come).
Should quantum computing become truly practical, “the implications are staggering.” It could impact machine learning, medicine, chemistry, and materials science, just to name a few areas.
Specialized chips continue to take aim at machine learning—a giant new chip with over a trillion transistors, for example, may make machine learning algorithms significantly more efficient.
Cellular computers also saw advances in 2019 thanks to CRISPR. And the year witnessed the emergence of the first reprogrammable DNA computer and new chips inspired by the brain.
The development of hardware computing platforms is intrinsically linked to software. 2019 saw a continued move from big technology companies towards open sourcing (at least parts of) their software, potentially democratizing the use of advanced systems.
Networks
Increasing interconnectedness has, in many ways, defined the 21st century so far. Your phone is no longer just a phone. It’s access to the world’s population and accumulated knowledge—and it fits in your pocket. Pretty neat. This is all thanks to networks, which had some notable advances in 2019.
The biggest network development of the year may well be the arrival of the first 5G networks.
5G’s faster speeds promise advances across many emerging technologies.
Self-driving vehicles, for example, may become both smarter and safer thanks to 5G C-V2X networks. (Don’t worry about trying to remember that. If they catch on, they’ll hopefully get a better name.)
Wi-Fi may have heard the news and said “hold my beer,” as 2019 saw the introduction of Wi-Fi 6. Perhaps the most important upgrade, among others, is that Wi-Fi 6 ensures that the ever-growing number of network connected devices get higher data rates.
Networks also went to space in 2019, as SpaceX began launching its Starlink constellation of broadband satellites. In typical fashion, Elon Musk showed off the network’s ability to bounce data around the world by sending a Tweet.
Augmented Reality and Virtual Reality
Forget Pokemon Go (unless you want to add me as a friend in the game—in which case don’t forget Pokemon Go). 2019 saw AR and VR advance, even as Magic Leap, the most hyped of the lot, struggled to live up to outsized expectations and sell headsets.
Mixed reality AR and VR technologies, along with the explosive growth of sensor-based data about the world around us, are creating a one-to-one “Mirror World” of our physical reality—a digital world you can overlay on our own or dive into immersively thanks to AR and VR.
Facebook launched Replica, for example, which is a photorealistic virtual twin of the real world that, among other things, will help train AIs to better navigate their physical surroundings.
Our other senses (beyond sight) may also become part of the Mirror World through peripherals like a newly developed synthetic skin that aims to bring a sense of touch to VR.
AR and VR equipment is also becoming cheaper—with more producers entering the space—and more user-friendly. Instead of a wired headset requiring an expensive gaming PC, the new Oculus Quest is a wireless, self-contained step toward the mainstream.
Niche uses also continue to gain traction, from Google Glass’s Enterprise edition to the growth of AR and VR in professional education—including on-the-job-training and roleplaying emotionally difficult work encounters, like firing an employee.
Digital Biology and Biotech
The digitization of biology is happening at an incredible rate. With wild new research coming to light every year and just about every tech giant pouring money into new solutions and startups, we’re likely to see amazing advances in 2020 added to those we saw in 2019.
None were, perhaps, more visible than the success of protein-rich, plant-based substitutes for various meats. This was the year Beyond Meat was the top IPO on the NASDAQ stock exchange and people stood in line for the plant-based Impossible Whopper and KFC’s Beyond Fried Chicken.
In the healthcare space, a report about three people with HIV who became virus-free thanks to bone marrow transplants of stem cells caused a huge stir. The research is still at a relatively early stage and isn’t suitable for most people, but it does provide a glimmer of hope.
CRISPR technology, which almost deserves its own section, progressed by leaps and bounds. One tweak made CRISPR up to 50 times more accurate, while the latest CRISPR-based system, prime editing, was described as a “word processor” for gene editing.
Many areas of healthcare stand to gain from CRISPR. Cancer treatment is one example, where a first safety test showed ‘promising’ results.
CRISPR’s many potential uses, however, also include some weird and morally questionable areas, as exemplified by one of the year’s stranger CRISPR-related stories: a human-monkey hybrid embryo in China.
Incidentally, China could be poised to take the lead on CRISPR thanks to massive investments and research programs.
As a consequence of quick advances in gene editing, we are approaching a point where we will be able to design our own biology—but first we need to have a serious conversation as a society about the ethics of gene editing and what lines should be drawn.
3D Printing
3D printing has quietly been growing in both market size and the range of objects printers are capable of producing. While both are impressive, perhaps the biggest story of 2019 was the technology’s increased speed.
One example was a boat that was printed in just three days, which also set three new world records for 3D printing.
3D printing is also spreading in the construction industry. In Mexico, the technology is being used to construct 50 new homes with subsidized mortgages of just $20/month.
3D printing was also used to construct a 640-square-meter building in Dubai.
Generally speaking, the use of 3D printing to make parts for everything from rocket engines (even entire rockets) to trains to cars illustrates the maturity of the technology as of 2019.
In healthcare, 3D printing is also advancing the cause of bio-printed organs and, in one example, was used to print vascularized parts of a human heart.
Robotics
Living in Japan, I get to see Pepper, Aibo, and other robots on pretty much a daily basis. The novelty of that experience is spreading to other countries, and robots are becoming a more visible addition to both our professional and private lives.
We can’t talk about robots and 2019 without mentioning Boston Dynamics’ Spot robot, which went on sale for the general public.
Meanwhile, Google, Boston Dynamics’ former owner, rebooted its robotics division with a more down-to-earth focus on everyday uses it hopes to commercialize.
SoftBank’s Pepper robot is working as a concierge and receptionist in various countries. It is also being used as a home companion. Not satisfied, Pepper rounded off 2019 by heading to the gym—to coach runners.
Indeed, there’s a growing list of sports where robots perform as well—or better—than humans.
2019 also saw robots launch an assault on the kitchen, including the likes of Samsung’s robot chef, and invade the front yard, with iRobot’s Terra robotic lawnmower.
In the borderlands of robotics, full-body robotic exoskeletons got a bit more practical, as the (by all accounts) user-friendly, battery-powered Sarcos Robotics Guardian XO went commercial.
Autonomous Vehicles
Self-driving cars did not—if you will forgive the play on words—stay quite on track during 2019. The fallout from Uber’s 2018 fatal crash marred part of the year, while some big players ratcheted back expectations on a quick shift to the driverless future. Still, self-driving cars, trucks, and other autonomous systems did make progress this year.
Winner of my unofficial award for best name in self-driving goes to Optimus Ride. The company also illustrates that self-driving may not be about creating a one-size-fits-all solution but catering to specific markets.
Self-driving trucks had a good year, with tests across many countries and states. One of the year’s odder stories was a self-driving truck traversing the US with a delivery of butter.
A step above the competition may be the future slogan (or perhaps not) of Boeing’s self-piloted air taxi that saw its maiden test flight in 2019. It joins a growing list of companies looking to create autonomous, flying passenger vehicles.
2019 was also the year where companies seemed to go all in on last-mile autonomous vehicles. Who wins that particular competition could well emerge during 2020.
Blockchain and Digital Currencies
Bitcoin continues to be the cryptocurrency equivalent of a rollercoaster, but the underlying blockchain technology is progressing more steadily. Together, they may turn parts of our financial systems cashless and digital—though how and when remains a slightly open question.
One indication of this was Facebook’s hugely controversial announcement of Libra, its proposed cryptocurrency. The company faced immediate pushback and saw a host of partners jump ship. Still, it brought the tech into mainstream conversations as never before and is putting the pressure on governments and central banks to explore their own digital currencies.
Deloitte’s in-depth survey of the state of blockchain highlighted how the technology has moved from fintech into just about any industry you can think of.
One of the biggest issues facing the spread of many digital currencies—Bitcoin in particular, you could argue—is how much energy mining them consumes. 2019 saw the emergence of several new digital currencies with a much smaller energy footprint.
2019 was also the year we saw a new kind of digital currency, stablecoins, rise to prominence. As the name indicates, stablecoins are digital currencies whose prices are more stable than the likes of Bitcoin.
In a geopolitical sense, 2019 was a year of China playing catch-up. Having initially banned cryptocurrency trading, the country turned 180 degrees and announced that it was “quite close” to releasing a digital currency, along with a wave of blockchain programs.
Renewable Energy and Energy Storage
While not every government on the planet seems to be a fan of renewable energy, it keeps on outperforming fossil fuel after fossil fuel in places well suited to it—even without support from some of said governments.
One of the reasons for renewable energy’s continued growth is that energy efficiency levels keep on improving.
As a result, an increased number of coal plants are being forced to close due to an inability to compete, and the UK went coal-free for a record two weeks.
We are also seeing more and more financial institutions refusing to fund fossil fuel projects. One such example is the European Investment Bank.
Renewable energy’s advance is tied at the hip to the rise of energy storage, which also had a breakout 2019, in part thanks to investments from the likes of Bill Gates.
The size and capabilities of energy storage also grew in 2019. The best illustration came from Australia, where Tesla’s mega-battery proved that energy storage has reached a stage where it can prop up entire energy grids.
Image Credit: Mathew Schwartz / Unsplash
#436263 Skydio 2 Review: This Is the Drone You ...
Let me begin this review by saying that the Skydio 2 is one of the most impressive robots that I have ever seen. Over the last decade, I’ve spent enough time around robots to have a very good sense of what kinds of things are particularly challenging for them, and to set my expectations accordingly. Those expectations include things like “unstructured environments are basically impossible” and “full autonomy is impractically expensive” and “robot videos rarely reflect reality.”
Skydio’s newest drone is an exception to all of this. It’s able to fly autonomously at speed through complex environments in challenging real-world conditions in a way that’s completely effortless and stress-free for the end user, allowing you to capture the kind of video that would be otherwise impossible, even (I’m guessing) for professional drone pilots. When you see this technology in action, it’s (almost) indistinguishable from magic.
Skydio 2 Price
To be clear, the Skydio 2 is not without compromises, and the price of $999 (on pre-order, with delivery of the next batch expected in spring of 2020) requires some justification. But the week I’ve had with this drone has left me feeling like its fundamental autonomous capability is so far beyond just about anything I’ve ever experienced that I’m questioning why I would ever fly anything else again.
We’ve written extensively about Skydio, beginning in early 2016 when the company posted a video of a prototype drone dodging trees while following a dude on a bike. Even three years ago, Skydio’s tech was way better than anything we’d seen outside of a research lab, and in early 2018, they introduced their first consumer product, the Skydio R1. A little over a year later, Skydio has introduced the Skydio 2, which is smaller, smarter, and much more affordable. Here’s an overview video just to get you caught up:
Skydio sent me a Skydio 2 review unit last week, and while I’m reasonably experienced with drones in general, this is the first time I’ve tried a Skydio drone in person. I had a pretty good idea what to expect, and I was absolutely blown away. Like, I was giggling to myself while running through the woods as the drone zoomed around, deftly avoiding trees and keeping me in sight. Robots aren’t supposed to be this good.
A week is really not enough time to explore everything that the Skydio can do, especially Thanksgiving week in Washington, D.C. (a no-fly zone) in early winter. But I found a nearby state park in which I could legally and safely fly the drone, and I did my best to put the Skydio 2 through its paces.
Note: Throughout this review, we’ve got a bunch of GIFs to help illustrate different features of the drone. To fit them all in, these GIFs had to be heavily compressed. Underneath each GIF is a timestamped link to this YouTube video (also available at the bottom of the post), which you can click on to see an extended cut of the original 4K 30 fps footage. And there’s a bunch of interesting extra video in there as well.
Skydio 2 Specs
Photo: Evan Ackerman/IEEE Spectrum
The Skydio 2 is primarily made out of magnesium, which (while light) is both heavier and more rigid and durable than plastic. The offset props (the back pair are above the body, and the front pair are below) are necessary to maintain the field of view of the navigation cameras.
The Skydio 2 both looks and feels like a well-designed and carefully thought-out drone. It’s solid, and a little on the heavy side as far as drones go—it’s primarily made out of magnesium, which (while light) is both heavier and more rigid and durable than plastic. The blue and black color scheme is far more attractive than you typically see with drones.
Photo: Evan Ackerman/IEEE Spectrum
To detect and avoid obstacles, the Skydio 2 uses an array of six 4K hemispherical cameras that feed data into an NVIDIA Jetson TX2 at 30 fps, with the drone processing a million points in 3D space per second to plan the safest path.
The Skydio 2 is built around an array of six hemispherical obstacle-avoidance cameras and the NVIDIA Jetson TX2 computing module that they’re connected to. This defines the placement of the gimbal, the motors and props, and the battery, since all of this stuff has to be as much as possible out of the view of the cameras in order for the drone to effectively avoid obstacles in any direction.
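Skydio hasn’t published its planner, but the general shape of clearance-aware motion selection can be sketched in a few lines. This is a toy illustration under our own assumptions, not the actual algorithm: sample candidate flight directions, discard any that pass too close to observed obstacle points, and pick the safest survivor that still makes progress toward the goal.

```python
import math

def pick_direction(obstacles, goal, candidates, min_clearance=1.0):
    """Toy clearance-aware direction chooser (not Skydio's algorithm).

    obstacles:  list of (x, y, z) points observed by the cameras
    goal:       (x, y, z) target the drone wants to reach
    candidates: unit direction vectors the drone could fly along
    """
    def clearance(d):
        # Distance from a probe point 1 m along d to the
        # nearest observed obstacle point.
        probe = tuple(c * 1.0 for c in d)
        return min(math.dist(probe, o) for o in obstacles)

    def progress(d):
        # How well direction d points toward the goal (normalized dot product).
        norm = math.dist(goal, (0.0, 0.0, 0.0))
        return sum(di * gi for di, gi in zip(d, goal)) / norm

    # Keep only directions with enough clearance, then take the one
    # that makes the most progress toward the goal.
    safe = [d for d in candidates if clearance(d) >= min_clearance]
    if not safe:
        return None  # no safe direction found: hover in place
    return max(safe, key=progress)

# Example: an obstacle point 1 m dead ahead; the drone veers diagonally.
obstacle = [(1.0, 0.0, 0.0)]
goal = (5.0, 0.0, 0.0)
cands = [(1.0, 0.0, 0.0), (0.707, 0.707, 0.0)]
best = pick_direction(obstacle, goal, cands, min_clearance=0.5)
assert best == (0.707, 0.707, 0.0)
```

A real planner evaluates this kind of tradeoff continuously over hundreds of thousands of points per second, in three dimensions, with the drone’s dynamics folded in, but the safety-versus-progress tension is the same.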
Without the bottom-mounted battery attached, the drone is quite flat. The offset props (the back pair are above the body, and the front pair are below) are necessary to maintain the field of view of the obstacle-avoidance cameras. These hemispherical cameras are on the end of each of the prop arms as well as above and below the body of the drone. They look awfully exposed, even though each is protected from ground contact by a little fin. You need to make sure these cameras are clean and smudge-free, and Skydio includes a cleaning cloth for this purpose. Underneath the drone there are slots for microSD cards, one for recording from the camera and a second one that the drone uses to store data. The attention to detail extends to the SD card insertion, which has a sloped channel that guides the card securely into its slot.
Once you snap the battery in, the drone goes from looking streamlined to looking a little chubby. Relative to other drones, the battery almost seems like an afterthought, like Skydio designed the drone and then remembered, “oops we have to add a battery somewhere, let’s just kludge it onto the bottom.” But again, the reason for this is to leave room inside the body for the NVIDIA TX2, while making sure that the battery stays out of view of the obstacle avoidance cameras.
The magnetic latching system for the battery is both solid and satisfying. I’m not sure why it’s necessary, strictly speaking, but I do like it, and it doesn’t seem like the battery will fly off even during the most aggressive maneuvers. Each battery includes an LED array that will display its charge level in 25 percent increments, as well as a button that you push to turn the drone on and off. Charging takes place via a USB-C port in the top of the drone, which I don’t like, because it means that the batteries can’t be charged on their own (like the Parrot Anafi’s battery), and that you can’t charge one battery while flying with another, like basically every other drone ever. A separate battery charger that will charge two at once is available from Skydio for an eyebrow-raising $129.
I appreciate that all of Skydio’s stuff (batteries, controller, and beacon) charges via USB-C, though. The included USB-C adapter with its beefy cable will output at up to 65 watts, which’ll charge a mostly depleted battery in under an hour. The drone turns itself on while charging, which seems unnecessary.
Photo: Evan Ackerman/IEEE Spectrum
The Skydio 2 is not foldable, making it not nearly as easy to transport as some other drones. But it does come with a nice case that mitigates this issue somewhat, and the drone plus two batteries end up as a passably flat package about the size of a laptop case.
The most obvious compromise that Skydio made with the Skydio 2 is that the drone is not foldable. Skydio CEO Adam Bry told us that adding folding joints to the arms of the Skydio 2 would have made calibrating all six cameras a nightmare and significantly impacted performance. This makes complete sense, of course, but it does mean that the Skydio 2 is not nearly as easy to transport as some other drones.
Photo: Evan Ackerman/IEEE Spectrum
Folded and unfolded: The Skydio 2 compared to the Parrot Anafi (upper left) and the DJI Mavic Pro (upper right).
The Skydio 2 does come with a very nice case that mitigates this issue somewhat, and the drone plus two batteries end up as a passably flat package about the size of a laptop case. Still, it’s just not as convenient to toss into a backpack as my Anafi, although the Mavic Mini might be even more portable.
Photo: Evan Ackerman/IEEE Spectrum
While the Skydio 2’s case is relatively compact, the non-foldable drone is overall a significantly larger package than the Parrot Anafi.
The design of the drone leads to some other compromises as well. Since landing gear would, I assume, occlude the camera system, the drone lands directly on the bottom of its battery pack, which has a slightly rubberized pad about the size of a playing card. This doesn’t feel particularly stable unless you end up on a very flat surface, and made me concerned for the exposed cameras underneath the drone as well as the lower set of props. I’d recommend hand takeoffs and landings—more on those later.
Skydio 2 Camera System
Photo: Evan Ackerman/IEEE Spectrum
The Skydio 2’s primary camera is a Sony IMX577 1/2.3″ 12.3-megapixel CMOS sensor. It’s mounted to a three-axis gimbal and records 4K video at 60 fps, or 1080p video at 120 fps.
The Skydio 2 comes with a three-axis gimbal supporting a 12-megapixel camera, just enough to record 4K video at 60 fps, or 1080p video at 120 fps. Skydio has provided plenty of evidence that its imaging system is at least as good if not better than other drone cameras. Tested against my Mavic Pro and Parrot Anafi, I found no reason to doubt that. To be clear, I didn’t do exhaustive pixel-peeping comparisons between them; you’re just getting my subjective opinion that the Skydio 2 has a totally decent camera that you won’t be disappointed with. I will say that I found the HDR photo function to be not all that great under the few situations in which I tested it—after looking at a few muddy sunset shots, I turned it off and was much happier.
Photo: Evan Ackerman/IEEE Spectrum
The Skydio 2’s 12-megapixel camera is solid, although we weren’t impressed with the HDR option.
The video stabilization is fantastic, to the point where watching the video footage can be underwhelming because it doesn’t reflect the motion of the drone. I almost wish there was a way to change to unstabilized (or less-stabilized) video so that the viewer could get a little more of a wild ride. Or, ideally, there’d be a way for the drone to provide you with a visualization of what it was doing using the data collected by its cameras. That’s probably wishful thinking, though. The drone itself doesn’t record audio because all you’d get would be an annoying buzz, but the app does record audio, so the audio from your phone gets combined with the drone video. Don’t expect great quality, but it’s better than nothing.
Skydio 2 App
The app is very simple compared to every other drone app I’ve tried, and that’s a good thing. Here’s what it looks like:
Image: Skydio
Trackable subjects get a blue “+” sign over them, and if you tap them, the “+” turns into a spinny blue circle. Once you’ve got a subject selected, you can choose from a variety of cinematic skills that the drone will execute while following you.
You get the controls that you need and the information that you need, and nothing else. Manual flight with the on-screen buttons works adequately, and the double-tap to fly function on the phone works surprisingly well, making it easy to direct the drone to a particular spot above the ground.
The settings menus are limited but functional, allowing you to change settings for the camera and a few basic tweaks for controlling the drone. One unique setting to the Skydio 2 is the height floor—since the drone only avoids static obstacles, you can set it to maintain a height of at least 8 feet above the ground while flying autonomously to make sure that if you’re flying around other people, it won’t run into anyone who isn’t absurdly tall and therefore asking for it.
Trackable subjects get a blue “+” sign over them in the app, and if you tap them, the “+” turns into a spinny blue circle. Once you’ve got a subject selected, you can choose from a variety of cinematic skills that the drone will execute while following you, and in addition, you can select “one-shot” skills that involve the drone performing a specific maneuver before returning to the previously selected cinematic skill. For example, you can tell the drone to orbit around you, and then do a “rocket” one-shot where it’ll fly straight up above you (recording the whole time, of course), before returning to its orbiting.
After you’re done flying, you can scroll through your videos and easily clip out excerpts from them and save them to your phone for sharing. Again, it’s a fairly simple interface without a lot of options. You could call it limited, I guess, but I appreciate that it just does a few things that you care about and otherwise doesn’t clutter itself up.
The real limitation of the app is that it uses Wi-Fi to connect to the Skydio 2, which restricts the range. To fly much beyond a hundred meters or so, you’ll need to use the controller or beacon instead.
Skydio 2 Controller and Beacon
Photo: Evan Ackerman/IEEE Spectrum
While the Skydio 2 controller provides a better hands-on flight experience than with the phone, plus an extended range of up to 3.5 km, more experienced pilots may find manual control a bit frustrating, because the underlying autonomy will supersede your maneuvers when you start getting close to objects.
I was looking forward to using the controller, because with every other drone I've had, the precision that a physical controller provides is, I find, mandatory for a good flying experience and for getting the photos and videos that you want. With the Skydio 2, that's all out the window. It's not that the controller is useless or anything; it's just that because the drone tracks you and avoids obstacles on its own, that level of control precision becomes largely unnecessary.
The controller itself is perfectly fine. It’s a rebranded Parrot Skycontroller3, which is the same as the one that you get with a Parrot Anafi. It’s too bad that the sticks don’t unscrew to make it a little more portable, and overall it’s functional rather than fancy, but it feels good to use and includes a sizeable antenna that makes a significant difference to the range that you get (up to 3.5 kilometers).
You definitely get a better hands-on flight experience with the controller than with the phone, so if you want to (say) zip the drone around some big open space for fun, it’s good for that. And it’s nice to be able to hand the controller to someone who’s never flown a drone before and let them take it for a spin without freaking out about them crashing it the whole time. For more experienced pilots, though, the controller is ultimately just a bit frustrating, because the underlying autonomy will supersede your control when you start getting close to objects, which (again) limits how useful the controller is relative to your phone.
I do still prefer the controller over the phone, but I’m not sure that it’s worth the extra $150, unless you plan to fly the Skydio 2 at very long distances or primarily in manual mode. And honestly, if either of those two things are your top priority, the Skydio 2 is probably not the drone for you.
Photo: Evan Ackerman/IEEE Spectrum
The Skydio 2 beacon uses GPS tracking to help the drone follow you, extending range up to 1.5 km. You can also fly the drone with the beacon alone, no phone necessary.
The purpose of the beacon, according to Skydio, is to give the drone a way of tracking you if it can't see you, which can happen, albeit infrequently. My initial impression of the beacon was that it was primarily useful as a range-extending bridge between my phone and the drone. But I accidentally left my phone at home one day (oops) and had to fly the drone with only the beacon, and it was a surprisingly decent experience. The beacon allows for full manual control of a sort: you can tap different buttons to rotate, fly forward, and ascend or descend. This is sufficient for takeoff and landing, for making sure that the drone is looking at you when you engage visual tracking, and for rescuing it if it gets trapped somewhere.
The rest of the beacon’s control functions are centered around a few different tracking modes, and with these, it works just about as well as your phone. You have fewer options overall, but all the basic stuff is there with just a few intuitive button clicks, including tracking range and angle. If you’re willing to deal with this relatively minor compromise, it’s nice to have your phone free for other things rather than having it monopolized by the drone.
Skydio 2 In Flight
GIF: Evan Ackerman/IEEE Spectrum
Hand takeoffs are simple and reliable.
Click here for a full resolution clip.
Starting up the Skydio 2 doesn’t require any kind of unusual calibration steps or anything like that. It prefers to be kept still, but you can start it up while holding it; it’ll just take a few seconds longer to tell you that it’s ready to go. While the drone will launch from any flat surface with significant clearance around it (it’ll tell you if it needs more room), the small footprint of the battery means that I was more comfortable hand launching it. This is not a “throw” launch; you just let the drone rest on your palm, tell it to take off, and then stay still while it gets its motors going and gently lifts off. The liftoff is so gentle that you have to be careful not to pull your hand away too soon. I did that once, and the drone, being not quite ready, dropped toward the ground, but managed to recover without much drama.
GIF: Evan Ackerman/IEEE Spectrum
Hand landings always look scary, but the Skydio 2 is incredibly gentle. After trying this once, it became the only way I ever landed the drone.
Click here for a full resolution clip.
Catching the drone for landing is perhaps very slightly more dangerous, but not any more difficult. You put the drone above and in front of you, facing away, tell it to land in the app or with the beacon, and then put your hand underneath it to grasp it as it slowly descends. It settles delicately and promptly turns itself off. Every drone should land this way. The battery pack provides a good place to grip, although you do have to be mindful of the forward set of props, which (since they’re the pair beneath the body of the drone) are quite close to your fingers. You’ll certainly be mindful after you catch a blade with your fingers once. Which I did. For the purposes of this review, and totally not by accident. No damage, for the record.
Photo: Evan Ackerman/IEEE Spectrum
You won’t be disappointed with the Skydio 2’s in-flight performance, unless you’re looking for a dedicated racing drone.
In normal flight, the Skydio 2 performs as well as you’d expect. It’s stable and manages light to moderate wind without any problems, although I did notice some occasional lateral drifting when the drone should have been in a stationary hover. The controller gains are adjustable, but the Skydio 2 isn’t quite as aggressive in flight as my Mavic Pro on Sport Mode; again, though, if you’re looking for a high-speed drone, that’s really not what the Skydio is all about.
The Skydio 2 is substantially louder than my Anafi, although the Anafi is notably quiet for a drone. It’s not annoying to hear (not a high-pitched whine), but you can hear it from a ways away, and from farther away than my Mavic Pro; I’m not sure whether that’s due to the absolute volume or the volume plus the pitch. In some ways, this is a feature, since you can hear the drone following you even if you’re not looking at it. You just need to be aware of the noise it makes when you’re flying it around people.
Obstacle Avoidance
The primary reason Skydio 2 is the drone that you want to fly is because of its autonomous subject tracking and obstacle avoidance. Skydio’s PR videos make this capability look almost too good, and since I hadn’t tried out one of their drones before, the first thing I did with it was exactly what you’d expect: attempt to fly it directly into the nearest tree.
GIF: Evan Ackerman/IEEE Spectrum
The Skydio 2 deftly slides around trees and branches. The control inputs here were simple “forward” or “turn,” all obstacle avoidance is autonomous.
Click here for a full resolution clip.
And it just won’t do it. It slows down a bit, and then slides right around one tree after another, going over and under and around branches. I pointed the drone into a forest and just held down “forward” and away it went, without any fuss, effortlessly ducking and weaving its way around. Of course, it wasn’t effortless at all—six 4K cameras were feeding data into the NVIDIA TX2 at 30 fps, and the drone was processing a million points in 3D space per second to plan the safest path while simultaneously taking into account where I wanted it to go. I spent about 10 more minutes doing my level best to crash the drone into anything at all using a flying technique probably best described as “reckless,” but the drone was utterly unfazed. It’s incredible.
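To get a feel for what that vision-based planning involves, here's a deliberately tiny Python sketch of the general idea: bin depth points into a voxel occupancy grid, then score candidate motions by progress toward the pilot's goal versus proximity to occupied space. All names and numbers here are mine, and the real planner is vastly more sophisticated; this is a toy, not Skydio's system:

```python
# Toy illustration of vision-based obstacle avoidance: 3D points from
# stereo depth are binned into a voxel grid, and candidate next positions
# are scored by distance to the goal plus a heavy penalty for occupied
# space. Deliberately simplified; only the candidate's own voxel is
# checked, and there is no lookahead.

VOXEL = 0.5  # grid resolution in meters

def to_voxel(p):
    return tuple(int(c // VOXEL) for c in p)

def build_grid(points):
    """Bin 3D obstacle points into a set of occupied voxels."""
    return {to_voxel(p) for p in points}

def score(candidate, goal, grid):
    """Lower is better: distance to goal, penalized near obstacles."""
    dist_to_goal = sum((c - g) ** 2 for c, g in zip(candidate, goal)) ** 0.5
    penalty = 100.0 if to_voxel(candidate) in grid else 0.0
    return dist_to_goal + penalty

def pick_motion(position, goal, grid, step=0.5):
    """Greedily pick the neighboring position trading progress vs. safety."""
    candidates = [
        (position[0] + dx * step, position[1] + dy * step, position[2] + dz * step)
        for dx in (-1, 0, 1) for dy in (-1, 0, 1) for dz in (-1, 0, 1)
    ]
    return min(candidates, key=lambda c: score(c, goal, grid))

# A tree trunk directly ahead: the planner steps around it, not through it.
obstacles = [(1.0, 0.0, 1.5), (1.0, 0.0, 2.0), (1.0, 0.0, 2.5)]
grid = build_grid(obstacles)
next_pos = pick_motion((0.0, 0.0, 2.0), goal=(3.0, 0.0, 2.0), grid=grid)
```

Run repeatedly, the greedy step heads toward the goal until the straight-ahead voxel is occupied, at which point a sideways step scores better and the "drone" slides around the trunk.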
What knocked my socks off was telling the drone to pass through treetops—in the clip below, I’m just telling the drone to fly straight down. Watch as it weaves its way through gaps between the branches:
GIF: Evan Ackerman/IEEE Spectrum
The result of parking the Skydio 2 above some trees and holding “down” on the controller is this impressive fully autonomous descent through the branches.
Click here for a full resolution clip.
Here’s one more example, where I sent the drone across a lake and started poking around in a tree. Sometimes the Skydio 2 isn’t sure where you want it to go, and you have to give it a little bit of a nudge in a clear direction, but that’s it.
GIF: Evan Ackerman/IEEE Spectrum
In obstacle-heavy environments, the Skydio 2 prudently slows down, but it can pick its way through almost anything that it can see.
Click here for a full resolution clip.
It’s important to keep in mind that all of the Skydio 2’s intelligence is based on vision. It uses cameras to see the world, which means that it faces many of the same challenges that your eyes do. Specifically, Skydio warns against flying in the following conditions:
Skydio 2 can’t see certain visually challenging obstacles. Do not fly around thin branches, telephone or power lines, ropes, netting, wires, chain link fencing or other objects less than ½ inch in diameter.
Do not fly around transparent surfaces like windows or reflective surfaces like mirrors greater than 60 cm wide.
When the sun is low on the horizon, it can temporarily blind Skydio 2’s cameras depending on the angle of flight. Your drone may be cautious or jerky when flying directly toward the sun.
Basically, if you’d have trouble seeing a thing, or seeing under some specific flight conditions, then the Skydio 2 almost certainly will also. It gets even more problematic when challenging obstacles are combined with challenging flight conditions, which is what I’m pretty sure led to the only near-crash I had with the drone. Here’s a video:
GIF: Evan Ackerman/IEEE Spectrum
Flying around very thin branches and into the sun can cause problems for the Skydio 2’s obstacle avoidance.
Click here for a full resolution clip.
I had the Skydio 2 set to follow me on my bike (more about following and tracking in a bit). It was mid afternoon, but since it’s late fall here in Washington, D.C., the sun doesn’t get much higher than 30 degrees above the horizon. Late fall also means that most of the deciduous trees have lost their leaves, and so there are a bunch of skinny branches all over the place. The drone was doing a pretty good job of following me along the road at a relatively slow speed, and then it clipped the branch that you can just barely see in the video above. It recovered in an acrobatic maneuver that has been mostly video-stabilized out, and resumed tracking me before I freaked and told it to land. You can see another example here, where the drone (again) clips a branch that has the sun behind it, and this clip shows me stopping my bike before the drone runs into another branch in a similar orientation. As the video shows, it’s very hard to see the branches until it’s too late.
As far as I can tell, the drone is no worse for wear from any of this, apart from a small nick in one of the props. But this is a good illustration of a problematic situation for the Skydio 2: flying into a low sun angle around small bare branches. Should I not have been flying the drone in this situation? It’s hard to say. These probably qualify as “thin branches,” although there was plenty of room along the middle of the road. There is an open question with the Skydio 2 as to exactly how much responsibility the user should take for deciding when and where it’s safe to fly. For branches, how thin is too thin? How low can the sun be? What if the branches are only kinda thin and the sun is only kinda low, but it’s also a little windy? Better to be safe than sorry, of course, but there’s really no way for the user (or the drone) to know what it can’t handle until it can’t handle it.
Edge cases like these aside, the obstacle avoidance just works. Even if you’re not deliberately trying to fly into branches, it’s keeping a lookout for you all the time, which means that flying the drone goes from somewhat stressful to just pure fun. I can’t emphasize enough how amazing it is to be able to fly without worrying about running into things, and how great it feels to be able to hand the controller to someone who’s never flown a drone before and say, with complete confidence, “go ahead, fly it around!”
Skydio 2 vs. DJI Mavic
Photo: Evan Ackerman/IEEE Spectrum
Both the Skydio 2 and many models of DJI’s Mavic use visual obstacle avoidance, but the Skydio 2 is so much more advanced that you can’t really compare the two systems.
It’s important to note that there’s a huge difference between the sort of obstacle avoidance that you get with a DJI Mavic and the sort that you get with the Skydio 2. The Mavic’s obstacle avoidance is really there to prevent you from accidentally running into things, and in that capacity, it usually works. But there are two things to keep in mind here. First, not running into things is not the same as avoiding things, because avoiding things means planning several steps ahead, not just one step.
Second, there’s the fact that the Mavic’s obstacle detection only works most of the time. Fundamentally, I don’t trust my Mavic Pro, because sometimes the safety system doesn’t kick in for whatever reason and the drone ends up alarmingly close to something. And that’s actually fine, because with the Mavic, I expect to be piloting it. It’s for this same reason that I don’t care that my Parrot Anafi doesn’t have obstacle avoidance at all: I’m piloting it anyway, and I’m a careful pilot, so it just doesn’t matter. The Skydio 2 is totally and completely different. It’s in a class by itself, and you can’t compare what it can do to anything else out there right now. Period.
Skydio 2 Tracking
Skydio’s big selling point on the Skydio 2 is that it’ll autonomously track you while avoiding obstacles. It does this visually, by watching where you go, predicting your future motion, and then planning its own motion to keep you in frame. This works better than you might expect, in that it’s really very good at not losing you. Obviously, the drone prioritizes not running into stuff over tracking you, which means that it may not always be where you feel like it should be. It’s probably trying to get there, but in obstacle-dense environments, it can take some creative paths.
Having said that, I found it to be very consistent at keeping me in the frame, and I only managed to lose it when changing direction while fully occluded by an obstacle, or while it was executing an avoidance maneuver that was more dynamic than normal. If you deliberately try to hide from the drone, it’s not that hard to do so if there are enough obstacles around, but I didn’t find the tracking to be something that I had to worry about in most cases. When tracking does fail and you’re not using the beacon, the drone will come to a hover. It won’t try to find you, but it will reacquire you if you get back into its field of view.
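That follow-while-visible, hover-on-loss, reacquire-on-sight behavior can be sketched with a toy constant-velocity tracker. This is purely illustrative Python, assuming a simple 2D ground-plane model; it is not Skydio's actual system:

```python
# Toy sketch of the tracking behavior described above: predict the
# subject's next position with a constant-velocity model, follow the
# prediction while the subject is in view, and fall back to a hover
# (no searching) when the subject is lost. Illustrative only.

class SubjectTracker:
    def __init__(self):
        self.last_pos = None          # last observed (x, y) of the subject
        self.velocity = (0.0, 0.0)    # per-frame displacement estimate
        self.mode = "hover"

    def update(self, detection):
        """detection: (x, y) ground position of the subject, or None if occluded.

        Returns the position the drone should fly toward.
        """
        if detection is None:
            # Subject lost: don't go searching, just hover and wait
            # for the subject to re-enter the field of view.
            self.mode = "hover"
            return self.last_pos
        if self.last_pos is not None:
            self.velocity = (detection[0] - self.last_pos[0],
                             detection[1] - self.last_pos[1])
        self.last_pos = detection
        self.mode = "track"
        # Lead the subject so they stay centered in frame.
        return (detection[0] + self.velocity[0],
                detection[1] + self.velocity[1])

tracker = SubjectTracker()
tracker.update((0.0, 0.0))           # first sighting, no velocity estimate yet
target = tracker.update((1.0, 0.0))  # subject moving in +x: lead to (2.0, 0.0)
lost = tracker.update(None)          # occluded: hover at last known position
```

Feeding a new detection after the occlusion flips the mode back to "track" and resumes leading the subject, mirroring the reacquire behavior described above.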
The Skydio 2 had no problem tracking me running through fairly dense trees:
GIF: Evan Ackerman/IEEE Spectrum
The Skydio 2 had no problem chasing me around through these trees, even while I was asking it to continually change its tracking angle.
Click here for a full resolution clip.
It also managed to keep up with me as I rode my bike along a tree-lined road:
GIF: Evan Ackerman/IEEE Spectrum
The Skydio 2 is easily fast enough to keep up with me on a bike, even while avoiding tree branches.
Click here for a full resolution clip.
It lost me when I asked it to follow very close behind me as I wove through some particularly branch-y trees, but it fails more or less gracefully by just sort of nope-ing out of situations when they start to get bad and coming to a hover somewhere safe.
GIF: Evan Ackerman/IEEE Spectrum
The Skydio 2 knows better than to put itself into situations that it can’t handle, and will bail to a safe spot if things get too complicated.
Click here for a full resolution clip.
After a few days of playing with the drone, I started to get to the point where I could set it to track me and then just forget about it while I rode my bike or whatever, as opposed to constantly turning around to make sure it was still behind me, which is what I was doing initially. It’s a level of trust that I don’t think would be possible with any other drone.
Should You Buy a Skydio 2?
Photo: Evan Ackerman/IEEE Spectrum
We think the Skydio 2 is fun and relaxing to fly, with unique autonomous intelligence that makes it worth the cost.
In case I haven’t said it often enough in this review, the Skydio 2 is an incredible piece of technology. As far as I know (as a robotics journalist, mind you), this represents the state of the art in commercial drone autonomy, and quite possibly the state of the art in drone autonomy, period. And it’s available for $999, which is expensive, but less money than a Mavic Pro 2. If you’re interested in a new drone, you should absolutely consider the Skydio 2.
There are some things to keep in mind—battery life is a solid but not stellar 20 minutes. Extra batteries are expensive at $99 each (the base kit includes just one). The controller and the beacon are also expensive, at $150 each. And while I think the Skydio 2 is definitely the drone you want to fly, it may not be the drone you want to travel with, since it’s bulky compared to other options.
But there’s no denying the fact that the experience is uniquely magical. Once you’ve flown the Skydio 2, you won’t want to fly anything else. This drone makes it possible to get pictures and videos that would be otherwise impossible, and you can do it completely on your own. You can trust the drone to do what it promises, as long as you’re mindful of some basic and common sense safety guidelines. And we’ve been told that the drone is only going to get smarter and more capable over time.
If you buy a Skydio 2, it comes with the following warranty from Skydio:
“If you’re operating your Skydio 2 within our Safe Flight guidelines, and it crashes, we’ll repair or replace it for free.”
Skydio trusts their drone to go out into a chaotic and unstructured world and dodge just about anything that comes its way. And after a week with this drone, I can see how they’re able to offer this kind of guarantee. This is the kind of autonomy that robots have been promising for years, and the Skydio 2 makes it real.
Detailed technical specifications are available on Skydio’s website, and if you have any questions, post a comment—we’ve got this drone for a little while longer, and I’d be happy to try out (nearly) anything with it.
Skydio 2 Review Video Highlights
This video is about 7 minutes of 4K, 30 fps footage directly from the Skydio 2. The only editing I did was cutting clips together, no stabilization or color correcting or anything like that. The drone will record in 4K 60 fps, so it gets smoother than this, but I, er, forgot to change the setting.
[ Skydio ]