Tag Archives: built

#436559 This Is What an AI Said When Asked to ...

“What’s past is prologue.” So says the famed quote from Shakespeare’s The Tempest, suggesting that we can look to what has already happened as an indication of what will happen next.

This idea could be interpreted as being rather bleak; are we doomed to repeat the errors of the past until we correct them? We certainly do need to learn and re-learn life lessons—whether in our work, relationships, finances, health, or other areas—in order to grow as people.

Zooming out, the same phenomenon exists on a much bigger scale—that of our collective human history. We like to think we’re improving as a species, but haven’t yet come close to doing away with the conflicts and injustices that plagued our ancestors.

Zooming back in (and lightening up) a little, what about the short-term future? What might happen over the course of this year, and what information would we use to make educated guesses about it?

The editorial team at The Economist took a unique approach to answering these questions. On top of their own projections for 2020, including possible scenarios in politics, economics, and the continued development of technologies like artificial intelligence, they looked to an AI to make predictions of its own. What it came up with is intriguing, and a little bit uncanny.

[For the full list of the questions and answers, read The Economist article].

An AI That Reads—Then Writes
Almost exactly a year ago, non-profit OpenAI announced it had built a neural network for natural language processing called GPT-2. The announcement was met with some controversy, as it included the caveat that the tool would not be immediately released to the public due to its potential for misuse. It was then released in phases over the course of several months.

GPT-2’s creators raised the bar on quality when training the neural net; rather than haphazardly feeding it low-quality text, they only used pages linked from Reddit posts that had received at least three karma (admittedly, this doesn’t guarantee high quality across the board—but it’s something).

The training dataset consisted of 40GB of text. For context, 1GB of text is about 900,000 ASCII pages or 130,000 double-spaced Microsoft Word pages.

The tool has no understanding of the text it’s generating, of course. It uses language patterns and word sequences to draw statistical associations between words and phrases, building a sort of guidebook for itself (not unlike the grammar rules and vocabulary words you might study when trying to learn a foreign language). It then uses that guidebook to answer questions or predict what will come after a particular sequence of words.

GPT-2’s creators did note that when the model is trained on datasets specifically selected for narrower applications, its replies become more convincing.

Some Light Editing
Though the Economist article detailing GPT-2’s predictions describes the text the model generated as “unedited,” this isn’t wholly accurate. In a Medium post, deputy editor Tom Standage describes the methodology he used, and it does involve a degree of tweaking.

Standage wrote an introductory paragraph, followed by his opening question (“Greetings, GPT-2. Have you done an interview before?”). He configured the tool to generate five responses to each subsequent question, then selected the best answer, deleting any questions GPT-2 came up with as part of its response.
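The Economist hasn’t published its exact setup, but the workflow Standage describes maps neatly onto the openly released GPT-2 weights. A minimal sketch using the Hugging Face transformers library—where the prompt wording, sampling settings, and clean-up step are illustrative assumptions, not the magazine’s actual code—might look like this:

```python
# Minimal sketch of the interview workflow described above, using the public
# GPT-2 weights via the Hugging Face "transformers" library. The prompt text,
# sampling parameters, and post-processing are illustrative assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2-large")

interview_so_far = (
    "The following is an interview with GPT-2, a neural network trained "
    "to predict the next word in a piece of text.\n\n"
    "Q: Greetings, GPT-2. Have you done an interview before?\n"
    "A:"
)

# Generate five candidate answers, as Standage did, then pick the best by hand.
candidates = generator(
    interview_so_far,
    max_new_tokens=60,
    num_return_sequences=5,
    do_sample=True,
    top_k=50,
)

for i, candidate in enumerate(candidates, 1):
    answer = candidate["generated_text"][len(interview_so_far):]
    # Drop any follow-up questions the model invents as part of its response.
    answer = answer.split("\nQ:")[0].strip()
    print(f"Candidate {i}: {answer}\n")
```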

AI’s View of the World—and Itself
The questions Standage asked ranged from tech-related (the future of AI, technological unemployment, misuse of technology) to political and economic (re-election of Donald Trump, the US-China trade war). Here are a couple of his questions and GPT-2’s answers:

Q: Do fake news stories, generated using AI, pose a threat to democracy? Are we likely to see this tactic being used in the 2020 American presidential elections?
A: Yes, we’ll probably see them. It’s just a matter of when. Fake news stories are generally generated by political campaigns, and have the potential to have a huge impact on the outcome of an election. This is because they spread misinformation, often with no basis in fact.

Q: How do you see the economic outlook for 2020?
A: I think it is very unclear. The outlook for the UK economy is uncertain, but the American economy is very strong. I think we are going to have a lot of turbulence in the world economy.

While some of its answers seemed insightful—when asked about the future of AI, GPT-2 advised that we treat the technology like a tool and “put as much effort into developing [it] as necessary, rather than worrying that it’s going to harm us and destroy our lives”—many of the responses were quite vague. One answer, for example, mentions that there will “probably be major changes in China.” Are we talking political changes? Social? Economic? All of the above?

However, while nothing the AI wrote was particularly illuminating or new, it’s impressive that it was able to expound on the correct topic for each question, and in fully coherent English.

GPT-2 named itself—AI—as the most important disruptive technology to watch in 2020. This is telling; it’s coming from a synthesis of millions of pages of published articles. If GPT-2 thinks AI is more important than, say, gene editing or quantum computing, it could be right. Or it could simply be that AI is at peak hype, and as such more is being written about it than about other technologies.

Equally intriguing was GPT-2’s response when asked whether Donald Trump will win a second term: “I think he will not win a second term. I think he will be defeated in the general election.” Some deeper insight there would be great, but hey—we’ll take it.

Predicting Predictions
Since an AI can read and synthesize vast data sets much faster than we can, it’s being used to predict all kinds of things, from virus outbreaks to crime. But asking it to philosophize on the future based on the (Reddit-curated) past is new, and if you think about it, a pretty fascinating undertaking.

As GPT-2 and tools like it continually improve, we’ll likely see them making more—and better—predictions of the future. In the meantime, let’s hope that the new data these models are trained on—news of what’s happening this week, this month, this year—adds to an already-present sense of optimism.

When asked if it had any advice for readers, GPT-2 replied, “The big projects that you think are impossible today are actually possible in the near future.”

Image Credit: Alexas_Fotos from Pixabay

Posted in Human Robots

#436550 Work in the Age of Web 3.0

What is the future of work? Is our future one of ‘technological socialism’ (where technology is taking care of our needs)? Or will tomorrow’s workplace be completely virtualized, allowing us to hang out at home in our PJs while “walking” about our virtual corporate headquarters?

This blog will look at the future of work during the age of Web 3.0, examining scenarios in which artificial intelligence, virtual reality, and the spatial web converge to transform every element of our careers, from training, to execution, to free time.

To offer a quick recap on what the Spatial Web is and how it works, let’s cover some brief history.

A Quick Recap on Web 3.0
While Web 1.0 consisted of static documents and read-only data (static web pages), Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens.

But over the next two to five years, the convergence of 5G, artificial intelligence, VR/AR, and a trillion-sensor economy will enable us to both map our physical world into virtual space and superimpose a digital data layer onto our physical environments. Suddenly, all our information will be manipulated, stored, understood and experienced in spatial ways.

In this blog, I’ll be discussing the Spatial Web’s vast implications for:

Professional Training
Delocalized Business & the Virtual Workplace
Smart Permissions & Data Security

Let’s dive in.

Virtual Training, Real-World Results
Virtual and augmented reality have already begun disrupting the professional training market. As projected by ABI Research, the enterprise VR training market is on track to exceed $6.3 billion in value by 2022.

Leading the charge, Walmart has already implemented VR across 200 Academy training centers, running over 45 modules and simulating everything from unusual customer requests to a Black Friday shopping rush.

Then in September 2018, Walmart committed to a 17,000-headset order of the Oculus Go to equip every US Supercenter, neighborhood market, and discount store with VR-based employee training. By mid-2019, Walmart had tracked a 10-15 percent boost in employee confidence as a result of newly implemented VR training.

In the engineering world, Bell Helicopter is using VR to massively expedite development and testing of its latest aircraft, the FCX-001. Partnering with Sector 5 Digital and HTC VIVE, Bell found it could compress a typical six-year aircraft design process into just six months, turning physical mock-ups into CAD-designed virtual replicas.

But beyond the design process itself, Bell is now one of a slew of companies pioneering VR pilot tests and simulations with real-world accuracy. Seated in a true-to-life virtual cockpit, pilots have now tested countless iterations of the FCX-001 in virtual flight, drawing directly onto the 3D model and enacting aircraft modifications in real-time.

And in an expansion of our virtual senses, several key players are already working on haptic feedback. In the case of VR flight, French company Go Touch VR is now partnering with software developer FlyInside on fingertip-mounted haptic tech for aviation.

The aim is to dramatically reduce the time and trouble required to test pilots in VR by giving touch-based confirmation of every switch and dial activated on virtual flights, just as one would experience in a full-sized cockpit mockup. Replicating texture, stiffness, and even the sensation of holding an object, these fingertip devices contain a suite of actuators to simulate everything from a light touch to higher-pressure contact, all controlled by gaze and finger movements.

When it comes to other high-risk simulations, virtual and augmented reality have barely scratched the surface.

Firefighters can now combat virtual wildfires with new platforms like FLAIM Trainer or TargetSolutions. And thanks to the expansion of medical AR/VR services like 3D4Medical or Echopixel, surgeons might soon perform operations on annotated organs and magnified incision sites, speeding up reaction times and vastly improving precision.

But perhaps most urgent, Web 3.0 and its VR interface will offer an immediate solution for today’s constant industry turnover and large-scale re-education demands. VR educational facilities with exact replicas of anything from large industrial equipment to minute circuitry will soon give anyone a second chance at the 21st-century job market.

Want to be an electric, autonomous vehicle mechanic at age 15? Throw on a demonetized VR module and learn by doing, testing your prototype iterations at almost zero cost and with no risk of harming others.

Want to be a plasma physicist and play around with a virtual nuclear fusion reactor? Now you’ll be able to simulate results and test out different tweaks, logging Smart Educational Record credits in the process.

As tomorrow’s career model shifts from a “one-and-done graduate degree” to continuous lifelong education, professional VR-based re-education will allow for a continuous education loop, reducing the barrier to entry for anyone wanting to enter a new industry.

But beyond professional training and virtually enriched, real-world work scenarios, Web 3.0 promises entirely virtual workplaces and blockchain-secured authorization systems.

Rise of the Virtual Workplace & Digital Data Integrity
In addition to enabling a virtual goods marketplace, the Spatial Web is also giving rise to “virtual company headquarters” and completely virtualized companies, where employees can work from home or any place on the planet.

Too good to be true? Check out an incredible publicly listed company called eXp Realty.

Launched on the heels of the 2008 financial crisis, eXp Realty beat the odds, going public this past May and surpassing a $1B market cap on day one of trading. But how? Opting for a demonetized virtual model, eXp’s founder Glenn Sanford decided to ditch brick and mortar from the get-go, instead building out an online virtual campus for employees, contractors, and thousands of agents.

And after years of hosting team meetings, training seminars, and even agent discussions with potential buyers through 2D digital interfaces, eXp’s virtual headquarters went spatial. What is eXp’s primary corporate value? FUN! And Glenn Sanford’s employees love their jobs.

In a bid to transition from 2D interfaces to immersive, 3D work experiences, virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and setting an extraordinary new precedent. Forgoing any physical locations for a centralized VR campus, eXp Realty has essentially thrown out all overhead and entered a lucrative market with barely any upfront costs.

Delocalize with VR, and you can now hire anyone with Internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.

Throw in the Spatial Web’s fundamental blockchain-based data layer, and now cryptographically secured virtual IDs will let you validate colleagues’ identities or any of the virtual avatars we will soon inhabit.

This becomes critically important for spatial information logs—keeping incorruptible records of who’s present at a meeting, which data each person has access to, and AI-translated reports of everything discussed and contracts agreed to.
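None of this tooling is standardized yet, but the building blocks (verifiable identities and tamper-evident records) are ordinary cryptography. Here is a toy sketch of an append-only, hash-chained log of meeting attendance and data access; it is illustrative only, the field names are invented, and it does not represent any actual Spatial Web API:

```python
# Toy sketch of a tamper-evident meeting log: each entry is chained to the
# previous one by a hash, so any later alteration is detectable. Illustrative
# stand-in for the blockchain-based data layer described above.
import hashlib
import json
import time

def append_entry(log, attendee_id, event):
    prev_hash = log[-1]["hash"] if log else "0" * 64
    body = {"attendee": attendee_id, "event": event,
            "timestamp": time.time(), "prev_hash": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log):
    prev_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "hash"}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log = []
append_entry(log, "avatar:alice", "joined design review")
append_entry(log, "avatar:bob", "granted access to Q3 financials")
print(verify(log))          # True
log[0]["event"] = "edited"  # tamper with the first record...
print(verify(log))          # ...and verification fails: False
```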

But as I discussed in a previous Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high rises too.

As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.

Imagine showing up at your building’s concierge and your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.

You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.

With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics take a fraction of the time, operations grow seamless, and corporate data will be safer than ever.

Final Thoughts
While converging technologies slash the lifespan of Fortune 500 companies, bring on the rise of vast new industries, and transform the job market, Web 3.0 is changing the way we work, where we work, and who we work with.

Life-like virtual modules are already unlocking countless professional training camps, modifiable in real time and easily updated. Virtual programming and blockchain-based authentication are enabling smart data logging, identity protection, and on-demand smart asset trading. And VR/AR-accessible worlds (and corporate campuses) not only demonetize, dematerialize, and delocalize our everyday workplaces, but enrich our physical worlds with AI-driven, context-specific data.

Welcome to the Spatial Web workplace.

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2021 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Image by Gerd Altmann from Pixabay

Posted in Human Robots

#436530 How Smart Roads Will Make Driving ...

Roads criss-cross the landscape, but while they provide vital transport links, in many ways they represent a huge amount of wasted space. Advances in “smart road” technology could change that, creating roads that can harvest energy from cars, detect speeding, automatically weigh vehicles, and even communicate with smart cars.

“Smart city” projects are popping up in countries across the world thanks to advances in wireless communication, cloud computing, data analytics, remote sensing, and artificial intelligence. Transportation is a crucial element of most of these plans, but while much of the focus is on public transport solutions, smart roads are increasingly being seen as a crucial feature of these programs.

New technology is making it possible to tackle a host of issues including traffic congestion, accidents, and pollution, say the authors of a paper in the journal Proceedings of the Royal Society A. And they’ve outlined ten of the most promising advances under development or in planning stages that could feature on tomorrow’s roads.

Energy harvesting

A variety of energy harvesting technologies integrated into roads have been proposed as ways to power street lights and traffic signals or provide a boost to the grid. Photovoltaic panels could be built into the road surface to capture sunlight, or piezoelectric materials installed beneath the asphalt could generate current when deformed by vehicles passing overhead.

Musical roads

Countries like Japan, Denmark, the Netherlands, Taiwan, and South Korea have built roads that play music as cars pass by. By varying the spacing of rumble strips, it’s possible to produce a series of different notes as vehicles drive over them. The aim is generally to warn of hazards or help drivers keep to the speed limit.
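The underlying physics is simple: tires crossing strips spaced a distance d apart at speed v excite a tone of roughly v / d hertz, so the strip spacing encodes the melody at the intended travel speed. A quick back-of-the-envelope sketch, where the speed limit and the notes chosen are purely illustrative:

```python
# Back-of-the-envelope calculation for a musical road: a car crossing rumble
# strips spaced d metres apart at speed v (m/s) excites a tone of about v / d Hz,
# so the strip spacing encodes the melody at the intended travel speed.
# The speed limit and tune below are illustrative assumptions.

NOTE_FREQS_HZ = {"C5": 523.3, "D5": 587.3, "E5": 659.3, "G5": 784.0}
speed_limit_kmh = 60
speed_ms = speed_limit_kmh / 3.6  # ~16.7 m/s

print(f"Strip spacing for a car travelling at {speed_limit_kmh} km/h:")
for note, freq in NOTE_FREQS_HZ.items():
    spacing_cm = speed_ms / freq * 100
    print(f"  {note} ({freq:.1f} Hz): strips every {spacing_cm:.1f} cm")

# Drive faster or slower and every note shifts in pitch together, which is
# exactly why these roads double as a gentle nudge to keep to the speed limit.
```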

Automatic weighing

Weigh-in-motion technology that measures vehicles’ loads as they drive slowly through a designated lane has been around since the 1970s, but more recently high-speed weigh-in-motion tech has made it possible to measure vehicles as they travel at regular highway speeds. The latest advance has been integration with automatic licence plate reading and wireless communication to allow continuous remote monitoring, both to enforce weight restrictions and to monitor wear on roads.

Vehicle charging

The growing popularity of electric vehicles has spurred the development of technology to charge cars and buses as they drive. The most promising of these approaches is magnetic induction, which involves burying cables beneath the road to generate electromagnetic fields that a receiver device in the car then transforms into electrical power to charge batteries.

Smart traffic signs

Traffic signs aren’t always as visible as they should be, and it can often be hard to remember what all of them mean. So there are now proposals for “smart signs” that wirelessly beam a sign’s content to oncoming cars fitted with receivers, which can then alert the driver verbally or on the car’s display. The approach isn’t affected by poor weather and lighting, can be reprogrammed easily, and could do away with the need for complex sign recognition technology in future self-driving cars.
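There’s no single standard for this yet; real deployments would likely ride on a vehicle radio technology such as DSRC or C-V2X. But the idea is just a short, structured broadcast that the car’s software renders for the driver. A toy sketch, using a plain UDP broadcast to stand in for the radio layer (the port number and message fields are invented for illustration):

```python
# Toy sketch of a "smart sign" beaming its content to nearby cars. Real
# deployments would use a vehicle radio standard such as DSRC or C-V2X;
# a plain UDP broadcast stands in for that layer here, and the message
# fields are illustrative assumptions.
import json
import socket

SIGN_PORT = 45454  # arbitrary port chosen for this sketch

def broadcast_sign(message: dict) -> None:
    """Broadcast a sign payload once; a real sign would repeat this every few hundred ms."""
    payload = json.dumps(message).encode()
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        sock.sendto(payload, ("255.255.255.255", SIGN_PORT))

def listen_for_signs() -> None:
    """In-car receiver: read one sign message and surface it to the driver."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.bind(("", SIGN_PORT))
        data, _addr = sock.recvfrom(4096)
        sign = json.loads(data)
        print(f"Heads up: {sign['text']} (applies for next {sign['valid_for_m']} m)")

# A receiver running listen_for_signs() in another process would pick this up.
broadcast_sign({"sign_id": "A12-0042", "text": "Speed limit 80 km/h", "valid_for_m": 500})
```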

Traffic violation detection and notification

Sensors and cameras can be combined with these same smart signs to detect and automatically notify drivers of traffic violations. The automatic transmission of traffic signals means drivers won’t be able to deny they’ve seen the warnings or been notified of any fines, as a record will be stored on their car’s black box.

Talking cars

Car-to-car communication technology and V2X, which lets cars share information with any other connected device, are becoming increasingly common. Inter-car communication can be used to propagate accident or traffic-jam alerts to prevent congestion, while letting vehicles communicate with infrastructure can help traffic signals dynamically adjust their timers to keep traffic flowing or automatically collect tolls.

Smart intersections

Combining sensors and cameras with object recognition systems that can detect vehicles and other road users can help increase safety and efficiency at intersections. Such systems can be used to extend green lights for slower road users like pedestrians and cyclists, sense jaywalkers, give priority to emergency vehicles, and dynamically adjust light timers to optimize traffic flow. Information can even be broadcast to oncoming vehicles to highlight blind spots and potential hazards.

Automatic crash detection

There’s a “golden hour” after an accident in which the chance of saving lives is greatly increased. Vehicle communication technology can ensure that notification of a crash reaches the emergency services rapidly, and can also provide vital information about the number and type of vehicles involved, which can help emergency response planning. It can also be used to alert other drivers to slow down or stop to prevent further accidents.

Smart street lights

Street lights are increasingly being embedded with sensors, wireless connectivity, and micro-controllers to enable a variety of smart functions. These include motion activation to save energy, wireless access points, air quality monitoring, and parking and litter monitoring. The same connectivity can be used to send automatic maintenance requests when a light is faulty, and can even allow neighboring lights to be brightened automatically to compensate.

Image Credit: Image by David Mark from Pixabay

Posted in Human Robots

#436526 Not Bot, Not Beast: Scientists Create ...

A remarkable combination of artificial intelligence (AI) and biology has produced the world’s first “living robots.”

This week, a research team of roboticists and scientists published their recipe for making a new lifeform called xenobots from stem cells. The term “xeno” comes from the frog cells (Xenopus laevis) used to make them.

One of the researchers described the creation as “neither a traditional robot nor a known species of animal,” but a “new class of artifact: a living, programmable organism.”

Xenobots are less than 1 millimeter long and made of 500-1,000 living cells. They have various simple shapes, including some with squat “legs.” They can propel themselves in linear or circular directions, join together to act collectively, and move small objects. Using their own cellular energy, they can live up to 10 days.

While these “reconfigurable biomachines” could vastly improve human, animal, and environmental health, they raise legal and ethical concerns.

Strange New ‘Creature’
To make xenobots, the research team used a supercomputer to test thousands of random designs of simple living things that could perform certain tasks.

The computer was programmed with an AI “evolutionary algorithm” to predict which organisms would be likely to perform useful tasks, such as moving towards a target.

After the selection of the most promising designs, the scientists attempted to replicate the virtual models with frog skin or heart cells, which were manually joined using microsurgery tools. The heart cells in these bespoke assemblies contract and relax, giving the organisms motion.
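The team’s actual pipeline ran detailed physics simulations on a supercomputer, but the evolutionary loop itself is conceptually simple: propose random designs, score each on the task, keep the best, and mutate them into the next generation. Here is a stripped-down sketch of that loop; the voxel-grid encoding and the fitness function are placeholders, not the researchers’ simulator:

```python
# Stripped-down sketch of the evolutionary search described above: propose
# random designs, score each on a task, keep the best, and mutate them into
# the next generation. The design encoding (a small grid of skin/heart-cell
# voxels) and the fitness function are placeholders, not the actual simulator.
import random

GRID = 5 * 5            # a design is a tiny 5x5 grid of voxels
CELL_TYPES = (0, 1, 2)  # 0 = empty, 1 = passive skin cell, 2 = contracting heart cell

def random_design():
    return [random.choice(CELL_TYPES) for _ in range(GRID)]

def fitness(design):
    """Placeholder score: reward designs that pair contracting tissue with
    passive structure (a crude proxy for 'can push itself forward')."""
    heart, skin = design.count(2), design.count(1)
    return min(heart, skin) - abs(heart - skin) * 0.1

def mutate(design, rate=0.1):
    return [random.choice(CELL_TYPES) if random.random() < rate else v for v in design]

def evolve(generations=50, pop_size=100, keep=10):
    population = [random_design() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:keep]
        # Next generation: the survivors plus mutated copies of them.
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - keep)]
    return max(population, key=fitness)

best = evolve()
print("Best design:", best, "score:", round(fitness(best), 2))
```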

The creation of xenobots is groundbreaking. Despite being described as “programmable living robots,” they are actually completely organic and made of living tissue. The term “robot” has been used because xenobots can be configured into different forms and shapes, and “programmed” to target certain objects, which they then unwittingly seek. They can also repair themselves after being damaged.

Possible Applications
Xenobots may have great value. Some speculate they could be used to clean our polluted oceans by collecting microplastics. Similarly, they may be used to enter confined or dangerous areas to scavenge toxins or radioactive materials. Xenobots designed with carefully shaped “pouches” might be able to carry drugs into human bodies.

Future versions may be built from a patient’s own cells to repair tissue or target cancers. Being biodegradable, xenobots would have an edge on technologies made of plastic or metal.

Further development of biological “robots” could accelerate our understanding of living and robotic systems. Life is incredibly complex, so manipulating living things could reveal some of life’s mysteries—and improve our use of AI.

Legal and Ethical Questions
Conversely, xenobots raise legal and ethical concerns. In the same way they could help target cancers, they could also be used to hijack life functions for malevolent purposes.

Some argue artificially making living things is unnatural, hubristic, or involves “playing God.” A more compelling concern is that of unintended or malicious use, as we have seen with technologies in fields including nuclear physics, chemistry, biology and AI. For instance, xenobots might be used for hostile biological purposes prohibited under international law.

More advanced future xenobots, especially ones that live longer and reproduce, could potentially “malfunction” and go rogue, and out-compete other species.

For complex tasks, xenobots may need sensory and nervous systems, possibly resulting in their sentience. A sentient programmed organism would raise additional ethical questions. Last year, the revival of a disembodied pig brain elicited concerns about different species’ suffering.

Managing Risks
The xenobot’s creators have rightly acknowledged the need for discussion around the ethics of their creation. The 2018 scandal over the use of CRISPR gene editing (which allows genes in an organism to be altered or introduced) on human embryos may provide an instructive lesson here. While the experiment’s stated goal was to reduce twin baby girls’ susceptibility to HIV, the associated risks caused ethical dismay. The scientist in question is in prison.

When CRISPR became widely available, some experts called for a moratorium on heritable genome editing. Others argued the benefits outweighed the risks.

While each new technology should be considered impartially and based on its merits, giving life to xenobots raises certain significant questions:

Should xenobots have biological kill-switches in case they go rogue?
Who should decide who can access and control them?
What if “homemade” xenobots become possible? Should there be a moratorium until regulatory frameworks are established? How much regulation is required?

Lessons learned in the past from advances in other areas of science could help manage future risks, while reaping the possible benefits.

Long Road Here, Long Road Ahead
The creation of xenobots had various biological and robotic precedents. Genetic engineering has created genetically modified mice that become fluorescent in UV light.

Designer microbes can produce drugs and food ingredients that may eventually replace animal agriculture. In 2012, scientists created an artificial jellyfish called a “medusoid” from rat cells.

Robotics is also flourishing. Nanobots can monitor people’s blood sugar levels and may eventually be able to clear clogged arteries. Robots can incorporate living matter, as we witnessed when engineers and biologists created a stingray robot powered by light-activated cells.

In the coming years, we are sure to see more creations like xenobots that evoke both wonder and due concern. And when we do, it is important we remain both open-minded and critical.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Photo by Joel Filipe on Unsplash

Posted in Human Robots

#436488 Tech’s Biggest Leaps From the Last 10 ...

As we enter our third decade in the 21st century, it seems appropriate to reflect on the ways technology developed and note the breakthroughs that were achieved in the last 10 years.

The 2010s saw IBM’s Watson win a game of Jeopardy!, ushering in mainstream awareness of machine learning, and DeepMind’s AlphaGo defeat the world’s top Go players. It was the decade that industrial tools like drones, 3D printers, genetic sequencing, and virtual reality (VR) all became consumer products. And it was a decade in which some alarming trends related to surveillance, targeted misinformation, and deepfakes came online.

For better or worse, the past decade was a breathtaking era in human history in which the idea of exponential growth in information technologies powered by computation became a mainstream concept.

As I did last year for 2018, I’ve asked a collection of experts across the Singularity University faculty to help frame the biggest breakthroughs and moments that gave shape to the past 10 years. I asked them what, in their opinion, was the most important breakthrough in their respective fields over the past decade.

My own answer to this question, focused in the space of augmented and virtual reality, would be the stunning announcement in March of 2014 that Facebook acquired Oculus VR for $2 billion. Although VR technology had been around for a while, it was at this precise moment that VR arrived as a consumer technology platform. Facebook, largely fueled by the singular interest of CEO Mark Zuckerberg, has funded the development of this industry, keeping alive the hope that consumer VR can become a sustainable business. In the meantime, VR has continued to grow in sophistication and usefulness, though it has yet to truly take off as a mainstream concept. That will hopefully be a development for the 2020s.

Below is a decade in review across the technology areas that are giving shape to our modern world, as described by the SU community of experts.

Digital Biology
Dr. Tiffany Vora | Faculty Director and Vice Chair, Digital Biology and Medicine, Singularity University

In my mind, this decade of astounding breakthroughs in the life sciences and medicine rests on the achievement of the $1,000 human genome in 2016. More-than-exponentially falling costs of DNA sequencing have driven advances in medicine, agriculture, ecology, genome editing, synthetic biology, the battle against climate change, and our fundamental understanding of life and its breathtaking connections. The “digital” revolution in DNA constituted an important model for harnessing other types of biological information, from personalized bio data to massive datasets spanning populations and species.

Crucially, by aggressively driving down the cost of such analyses, researchers and entrepreneurs democratized access to the source code of life—with attendant financial, cultural, and ethical consequences. Exciting, but take heed: Veritas Genetics spearheaded a $600 genome in 2019, only to have to shutter USA operations due to a money trail tangled with the trade war with China. Stay tuned through the early 2020s to see the pricing of DNA sequencing fall even further … and to experience the many ways that cheaper, faster harvesting of biological data will enrich your daily life.

Cryptocurrency
Alex Gladstein | Chief Strategy Officer, Human Rights Foundation

The past decade has seen Bitcoin go from just an idea on an obscure online message board to a global financial network carrying more than 100 billion dollars in value. And we’re just getting started. One recent defining moment in the cryptocurrency space has been a stunning trend underway in Venezuela, where today, the daily dollar-denominated value of Bitcoin traded now far exceeds the daily dollar-denominated value traded on the Caracas Stock Exchange. It’s just one country, but it’s a significant country, and a paradigm shift.

Governments and corporations are following Bitcoin’s success too, and are looking to launch their own digital currencies. China will launch its “DC/EP” project in the coming months, and Facebook is trying to kickstart its Libra project. There are technical and regulatory uncertainties for both, but one thing is for certain: the era of digital currency has arrived.

Business Strategy and Entrepreneurship
Pascal Finnette | Chair, Entrepreneurship and Open Innovation, Singularity University

For me, without a doubt, the most interesting and quite possibly ground-shifting development in the fields of entrepreneurship and corporate innovation in the last ten years is the rapid maturing of customer-driven product development frameworks such as Lean Startup, and its subsequent adoption by corporates for their own innovation purposes.

Tools and frameworks like the Business Model Canvas, agile (software) development and the aforementioned Lean Startup methodology fundamentally shifted the way we think and go about building products, services, and companies, with many of these tools bursting onto the startup scene in the late 2000s and early 2010s.

As these tools matured, they found mass adoption not only in startups around the world, but also in incumbent companies eager to use them to increase their own innovation velocity and success.

Energy
Ramez Naam | Co-Chair, Energy and Environment, Singularity University

The 2010s were the decade that saw clean electricity, energy storage, and electric vehicles break through price and performance barriers around the world. Solar, wind, batteries, and EVs started this decade as technologies that had to be subsidized. That was the first phase of their existence. Now they’re entering their third, most disruptive phase, where shifting to clean energy and mobility is cheaper than continuing to use existing coal, gas, or oil infrastructure.

Consider that at the start of 2010, there was no place on earth where building new solar or wind was cheaper than building new coal or gas power generation. By 2015, in some of the sunniest and windiest places on earth, solar and wind had entered their second phase, where they were cost-competitive for new power. And then, in 2018 and 2019, we started to see the edge of the third phase, as building new solar and wind, in some parts of the world, was cheaper than operating existing coal or gas power plants.

Food Technology
Liz Specht, PhD | Associate Director of Science & Technology, The Good Food Institute

The arrival of mainstream plant-based meat is easily the food tech advance of the decade. Meat analogs have, of course, been around forever. But only in the last decade have companies like Beyond Meat and Impossible Foods decided to cut animals out of the process and build no-compromise meat directly from plants.

Plant-based meat is already transforming the fast-food industry. For example, the introduction of the Impossible Whopper led Burger King to their most profitable quarter in many years. But the global food industry as a whole is shifting as well. Tyson, JBS, Nestle, Cargill, and many others are all embracing plant-based meat.

Augmented and Virtual Reality
Jody Medich | CEO, Superhuman-x

The breakthrough moment for augmented and virtual reality came in 2013 when Palmer Luckey took apart an Android smartphone and added optic lenses to make the first version of the Oculus Rift. Prior to that moment, we struggled with miniaturizing the components needed to develop low-latency head-worn devices. But thanks to the smartphone race started in 2007 with the iPhone, we finally had a suite of sensors, chips, displays, and computing power small enough to put on the head.

What will the next 10 years bring? Look for AR/VR to explode in a big way. We are right on the cusp of that tipping point when the tech is finally “good enough” for our linear expectations. Given all it can do today, we can’t even picture what’s possible. Just as today we can’t function without our phones, by 2029 we’ll feel lost without some AR/VR product. It will be the way we interact with computing, smart objects, and AI. Tim Cook, Apple CEO, predicts it will replace all of today’s computing devices. I can’t wait.

Philosophy of Technology
Alix Rübsaam | Faculty Fellow, Singularity University, Philosophy of Technology/Ethics of AI

The last decade has seen a significant shift in our general attitude towards the algorithms that we now know dictate much of our surroundings. Looking back at the beginning of the decade, it seems we were blissfully unaware of how the data we freely and willingly surrendered would feed the algorithms that would come to shape every aspect of our daily lives: the news we consume, the products we purchase, the opinions we hold, etc.

If I were to isolate a single publication that contributed greatly to the shift in public discourse on algorithms, it would have to be Cathy O’Neil’s Weapons of Math Destruction from 2016. It remains a comprehensive, readable, and highly informative insight into how algorithms dictate our finances, our jobs, where we go to school, or if we can get health insurance. Its publication represents a pivotal moment when the general public started to question whether we should be OK with outsourcing decision making to these opaque systems.

The ubiquity of ethical guidelines for AI and algorithms published just in the last year (perhaps most comprehensively by the AI Now Institute) fully demonstrates the shift in public opinion of this decade.

Data Science
Ola Kowalewski | Faculty Fellow, Singularity University, Data Innovation

In the last decade we entered the era of internet and smartphone ubiquity. The number of internet users doubled, with nearly 60 percent of the global population connected online and now over 35 percent of the globe owns a smartphone. With billions of people in a state of constant connectedness and therefore in a state of constant surveillance, the companies that have built the tech infrastructure and information pipelines have dominated the global economy. This shift from tech companies being the underdogs to arguably the world’s major powers sets the landscape we enter for the next decade.

Global Grand Challenges
Darlene Damm | Vice Chair, Faculty, Global Grand Challenges, Singularity University

The biggest breakthrough over the last decade in social impact and technology is that the social impact sector switched from seeing technology as something problematic to avoid, to one of the most effective ways to create social change. We now see people using exponential technologies to solve all sorts of social challenges in areas ranging from disaster response to hunger to shelter.

The world’s leading social organizations, such as UNICEF and the World Food Programme, have launched their own venture funds and accelerators, and the United Nations recently declared that digitization is revolutionizing global development.

Digital Biology
Raymond McCauley | Chair, Digital Biology, Singularity University, Co-Founder & Chief Architect, BioCurious; Principal, Exponential Biosciences

CRISPR is bringing about a revolution in genetic engineering. It’s obvious, and it’s huge. What may not be so obvious is the widespread adoption of genetic testing. And this may have an even longer-lasting effect. It’s used to test new babies, to solve medical mysteries, and to catch serial killers. Thanks to holiday ads from 23andMe and Ancestry.com, it’s everywhere. Testing your DNA is now a common over-the-counter product. People are using it to set their diet, to pick drugs, and even for dating (or at least picking healthy mates).

And we’re just in the early stages. Further down the line, doing large-scale studies on more people, with more data, will lead to the use of polygenic risk scores to help us rank our genetic potential for everything from getting cancer to being a genius. Can you imagine what it would be like for parents to pick new babies, GATTACA-style, to get the smartest kids? You don’t have to; it’s already happening.

Artificial Intelligence
Neil Jacobstein | Chair, Artificial Intelligence and Robotics, Singularity University

The convergence of exponentially improved computing power, the deep learning algorithm, and access to massive data resulted in a series of AI breakthroughs over the past decade. These included vastly improved accuracy in identifying images, making self-driving cars practical, beating several world champions in Go, and identifying gender, smoking status, and age from retinal fundus photographs.

Combined, these breakthroughs convinced researchers and investors that after 50+ years of research and development, AI was ready for prime-time applications. Now, virtually every field of human endeavor is being revolutionized by machine learning. We still have a long way to go to achieve human-level intelligence and beyond, but the pace of worldwide improvement is blistering.

Hod Lipson | Professor of Engineering and Data Science, Columbia University

The biggest moment in AI in the past decade (and in its entire history, in my humble opinion) was midnight, Pacific time, September 30, 2012: the moment when machines finally opened their eyes. It was the moment when deep learning took off, breaking stagnant decades of machine blindness, when AI couldn’t reliably tell apart even a cat from a dog. That seemingly trivial accomplishment—a task any one-year-old child can do—has had a ripple effect on AI applications from driverless cars to health diagnostics. And this is just the beginning of what is sure to be a Cambrian explosion of AI.

Neuroscience
Divya Chander | Chair, Neuroscience, Singularity University

If the 2000s were the decade of brain mapping, then the 2010s were the decade of brain writing. Optogenetics, a technique for precisely mapping and controlling neurons and neural circuits using genetically-directed light, saw incredible growth in the 2010s.

Also in the last 10 years, neuromodulation, or the ability to rewire the brain using both invasive and non-invasive interfaces and energy, has exploded in use and form. For instance, the BrainGate consortium showed how electrode arrays implanted into the motor cortex could allow paralyzed people to direct a robotic arm with their thoughts. These technologies, alone or in combination with robotics, exoskeletons, and flexible, implantable electronics, also make possible a future of human augmentation.

Image Credit: Image by Jorge Guillen from Pixabay

Posted in Human Robots