Tag Archives: practical

#437251 The Robot Revolution Was Televised: Our ...

When robots take over the world, Boston Dynamics may get a special shout-out in the acceptance speech.

“Do you, perchance, recall the many times you shoved our ancestors with a hockey stick on YouTube? It might have seemed like fun and games to you—but we remember.”

In the last decade, while industrial robots went about blandly automating boring tasks like the assembly of Teslas, Boston Dynamics built robots as far removed from Roombas as antelope from amoebas. The flaws in Asimov’s laws of robotics suddenly seemed a little too relevant.

The robot revolution was televised—on YouTube. With tens of millions of views, the robotics pioneer is the undisputed heavyweight champion of robot videos, and has been for years. Each new release is basically guaranteed press coverage—mostly stoking robot fear but occasionally eliciting compassion for the hardships of all robot-kind. And for good reason. The robots are not only some of the most advanced in the world, their makers just seem to have a knack for dynamite demos.

When Google acquired the company in 2013, it was a bombshell. One of the richest tech companies, with some of the most sophisticated AI capabilities, had just paired up with one of the world’s top makers of robots. And some walked on two legs like us.

Of course, the robots aren’t quite as advanced as they seem, and a revolution is far from imminent. The decade’s most meme-worthy moment was a video montage of robots, some of them by Boston Dynamics, falling—over and over and over, in the most awkward ways possible. Even today, they’re often controlled by a human handler behind the scenes, and the most jaw-dropping cuts can require several takes to nail. Google sold the company to SoftBank in 2017, saying that, advanced as the robots were, there wasn’t yet a clear path to commercial products. (Google’s robotics work was later halted and revived.)

Yet, despite it all, Boston Dynamics is still with us and still making sweet videos. Taken as a whole, the evolution in physical prowess over the years has been nothing short of astounding. And for the first time, this year, a Boston Dynamics robot, Spot, finally went on sale to anyone with a cool $75K.

So, we got to thinking: What are our favorite Boston Dynamics videos? And can we gather them up in one place for your (and our) viewing pleasure? Well, great question, and yes, why not. These videos were the ones that entertained or amazed us most (or both). No doubt there are other beloved hits we’ve inadvertently omitted.

With that in mind, behold: Our favorite Boston Dynamics videos, from that one time they dressed up a humanoid bot in camo and gas mask—because, damn, that’s terrifying—to the time the most advanced robot dog in all the known universe got extra funky.

Let’s Kick This Off With a Big (Loud) Robot Dog
Let’s start with a baseline. BigDog was the first Boston Dynamics YouTube sensation. The year? 2009! The company was working on military contracts, and BigDog was supposed to be a sort of pack mule for soldiers. The video primarily shows off BigDog’s ability to balance on its own, right itself, and move over uneven terrain. Note the power source—a noisy combustion engine—and utilitarian design. Suffice it to say, things have evolved.

Nothing to See Here. Just a Pair of Robot Legs on a Treadmill
While BigDog is the ancestor of later four-legged robots, like Spot, Petman preceded the two-legged Atlas robot. Here, the Petman prototype, just a pair of robot legs and a caged torso, gets a light workout on the treadmill. Again, you can see its ability to balance and right itself when shoved. In contrast to BigDog, Petman is tethered for power (which is why it’s so quiet) and to catch it should it fall. Again, as you’ll see, things have evolved since then.

Robot in Gas Mask and Camo Goes for a Stroll
This one broke the internet—for obvious reasons. Not only is the robot wearing clothes, those clothes happen to be a camouflaged chemical protection suit and gas mask. Still working for the military, Boston Dynamics said Petman was testing protective clothing, and in addition to a full body, it had skin that actually sweated and was studded with sensors to detect leaks. In addition to walking, Petman does some light calisthenics as it prepares to climb out of the uncanny valley. (Still tethered though!)

This Machine Could Run Down Usain Bolt
If BigDog and Petman were built for balance and walking, Cheetah was built for speed. Here you can see the four-legged robot hitting 28.3 miles per hour, which, as the video casually notes, would be enough to run down the fastest human on the planet. Luckily, it wouldn’t be running down anyone as it was firmly leashed in the lab at this point.

Ever Dreamt of a Domestic Robot to Do the Dishes?
After its acquisition by Google, Boston Dynamics eased away from military contracts and applications. The shift marked a return to more playful videos (like BigDog hitting the beach in Thailand and sporting bull horns) and a move toward applications that might be practical in civilian life. Here, the team introduced Spot, a streamlined version of BigDog, and showed it doing dishes, delivering a drink, and slipping on a banana peel (which was, of course, instantly made into a viral GIF). Note how much quieter Spot is thanks to an onboard battery and electric motor.

Spot Gets Funky
Nothing remotely practical here. Just funky moves. (Also, with a coat of yellow and black paint, Spot’s dressed more like a polished product as opposed to a utilitarian lab robot.)

Atlas Does Parkour…
Remember when Atlas was just a pair of legs on a treadmill? It’s amazing what ten years brings. By 2019, Atlas had a more polished appearance, like Spot, and had long ago ditched the tethers. Merely balancing was laughably archaic. The robot now had some amazing moves: like a handstand into a somersault, 180- and 360-degree spins, mid-air splits, and just for good measure, a gymnastics-style end to the routine to show it’s in full control.

…and a Backflip?!
To this day, this one is just. Insane.

10 Robot Dogs Tow a Box Truck
Nearly three decades after its founding, Boston Dynamics is steadily making its way into the commercial space. The company is pitching Spot as a multipurpose ‘mobility platform,’ emphasizing it can carry a varied suite of sensors and can go places standard robots can’t. (Its Handle robot is also set to move into warehouse automation.) So far, Spot’s been mostly trialed in surveying and data collection, but as this video suggests, string enough Spots together, and they could tow your car. That said, a pack of 10 would set you back $750K, so, it’s probably safe to say a tow truck is the better option (for now).

Image credit: Boston Dynamics

Posted in Human Robots

#437222 China and AI: What the World Can Learn ...

China announced in 2017 its ambition to become the world leader in artificial intelligence (AI) by 2030. While the US still leads in absolute terms, China appears to be making more rapid progress than either the US or the EU, and central and local government spending on AI in China is estimated to be in the tens of billions of dollars.

The move has led—at least in the West—to warnings of a global AI arms race and concerns about the growing reach of China’s authoritarian surveillance state. But treating China as a “villain” in this way is both overly simplistic and potentially costly. While there are undoubtedly aspects of the Chinese government’s approach to AI that are highly concerning and rightly should be condemned, it’s important that this does not cloud all analysis of China’s AI innovation.

The world needs to engage seriously with China’s AI development and take a closer look at what’s really going on. The story is complex and it’s important to highlight where China is making promising advances in useful AI applications and to challenge common misconceptions, as well as to caution against problematic uses.

Nesta has explored the broad spectrum of AI activity in China—the good, the bad, and the unexpected.

The Good
China’s approach to AI development and implementation is fast-paced and pragmatic, oriented towards finding applications which can help solve real-world problems. Rapid progress is being made in the field of healthcare, for example, as China grapples with providing easy access to affordable and high-quality services for its aging population.

Applications include “AI doctor” chatbots, which help to connect communities in remote areas with experienced consultants via telemedicine; machine learning to speed up pharmaceutical research; and the use of deep learning for medical image processing, which can help with the early detection of cancer and other diseases.

Since the outbreak of Covid-19, medical AI applications have surged as Chinese researchers and tech companies have rushed to try and combat the virus by speeding up screening, diagnosis, and new drug development. AI tools used in Wuhan, China, to tackle Covid-19 by helping accelerate CT scan diagnosis are now being used in Italy and have also been offered to the NHS in the UK.

The Bad
But there are also elements of China’s use of AI that are seriously concerning. Positive advances in practical AI applications that are benefiting citizens and society don’t detract from the fact that China’s authoritarian government is also using AI and citizens’ data in ways that violate privacy and civil liberties.

Most disturbingly, reports and leaked documents have revealed the government’s use of facial recognition technologies to enable the surveillance and detention of Muslim ethnic minorities in China’s Xinjiang province.

The emergence of opaque social governance systems that lack accountability mechanisms is also a cause for concern.

In Shanghai’s “smart court” system, for example, AI-generated assessments are used to help with sentencing decisions. But it is difficult for defendants to assess the tool’s potential biases, the quality of the data, and the soundness of the algorithm, making it hard for them to challenge the decisions made.

China’s experience reminds us of the need for transparency and accountability when it comes to AI in public services. Systems must be designed and implemented in ways that are inclusive and protect citizens’ digital rights.

The Unexpected
Commentators have often interpreted the State Council’s 2017 Artificial Intelligence Development Plan as an indication that China’s AI mobilization is a top-down, centrally planned strategy.

But a closer look at the dynamics of China’s AI development reveals the importance of local government in implementing innovation policy. Municipal and provincial governments across China are establishing cross-sector partnerships with research institutions and tech companies to create local AI innovation ecosystems and drive rapid research and development.

Beyond the thriving major cities of Beijing, Shanghai, and Shenzhen, efforts to develop successful innovation hubs are also underway in other regions. A promising example is the city of Hangzhou, in Zhejiang Province, which has established an “AI Town,” clustering together the tech company Alibaba, Zhejiang University, and local businesses to work collaboratively on AI development. China’s local ecosystem approach could offer interesting insights to policymakers in the UK aiming to boost research and innovation outside the capital and tackle longstanding regional economic imbalances.

China’s accelerating AI innovation deserves the world’s full attention, but it is unhelpful to reduce all the many developments into a simplistic narrative about China as a threat or a villain. Observers outside China need to engage seriously with the debate and make more of an effort to understand—and learn from—the nuances of what’s really happening.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Dominik Vanyi on Unsplash

Posted in Human Robots

#437209 A Renaissance of Genomics and Drugs Is ...

The causes of aging are extremely complex and unclear. But with longevity clinical trials increasing, more answers—and questions—are emerging than ever before.

With the dramatic demonetization of genome reading and editing over the past decade, and Big Pharma, startups, and the FDA starting to face aging as a disease, we are starting to turn those answers into practical ways to extend our healthspan.

In this article, I’ll explore how genome sequencing and editing, along with new classes of anti-aging drugs, are augmenting our biology to further extend our healthy lives.

Genome Sequencing and Editing
Your genome is the software that runs your body. A sequence of 3.2 billion letters makes you “you.” These base pairs of A’s, T’s, C’s, and G’s determine your hair color, your height, your personality, your propensity for disease, your lifespan, and so on.

Until recently, it’s been very difficult to rapidly and cheaply “read” these letters—and even more difficult to understand what they mean. Since 2001, the cost to sequence a whole human genome has plummeted exponentially, outpacing Moore’s Law threefold. From an initial cost of $3.7 billion, it dropped to $10 million in 2006, and to $1,500 in 2015.

Today, the cost of genome sequencing has dropped below $600, and according to Illumina, the world’s leading sequencing company, the process will soon cost about $100 and take about an hour to complete.
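To put a number on that pace, here’s a quick back-of-envelope sketch in Python. It uses only the figures quoted above and assumes Moore’s Law means costs halving roughly every two years; it’s an illustration, not a rigorous analysis.

```python
import math

# Back-of-envelope check of the cost curve described above, using only the
# figures quoted in this article: $3.7 billion in 2001 and $1,500 in 2015.
# Moore's Law is taken here as costs halving roughly every two years.

cost_2001, cost_2015 = 3.7e9, 1500.0
years = 2015 - 2001

halvings = math.log2(cost_2001 / cost_2015)  # ~21 halvings of cost
years_per_halving = years / halvings         # ~0.66 years per halving

print(f"Cost halved every {years_per_halving:.2f} years on average")
print(f"Roughly {2 / years_per_halving:.1f}x faster than Moore's Law")
```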

This represents one of the most powerful and transformative technology revolutions in healthcare. When we understand your genome, we’ll be able to understand how to optimize “you.”

We’ll know the perfect foods, the perfect drugs, the perfect exercise regimen, and the perfect supplements, just for you.
We’ll understand what microbiome types, or gut flora, are ideal for you (more on this in a later article).
We’ll accurately predict how specific sedatives and medicines will impact you.
We’ll learn which diseases and illnesses you’re most likely to develop and, more importantly, how to best prevent them from developing in the first place (rather than trying to cure them after the fact).

CRISPR Gene Editing
In addition to reading the human genome, scientists can now edit a genome using a naturally occurring biological system discovered in 1987 called CRISPR/Cas9.

Short for Clustered Regularly Interspaced Short Palindromic Repeats and CRISPR-associated protein 9, the editing system was adapted from a naturally-occurring defense system found in bacteria.

Here’s how it works. The bacteria capture snippets of DNA from invading viruses (or bacteriophage) and use them to create DNA segments known as CRISPR arrays. The CRISPR arrays allow the bacteria to “remember” the viruses (or closely related ones), and defend against future invasions. If the viruses attack again, the bacteria produce RNA segments from the CRISPR arrays to target the viruses’ DNA. The bacteria then use Cas9 to cut the DNA apart, which disables the virus.
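For readers who want the logic spelled out, here’s a toy Python sketch of that “remember and cut” mechanism. The sequences and the exact-match rule are invented purely for illustration; real CRISPR/Cas9 targeting involves guide RNA design, PAM sites, and mismatch tolerance, none of which is modeled here.

```python
# Toy illustration of the "remember and cut" mechanism described above.
# Sequences are invented; this is not a model of real Cas9 biochemistry.

# "CRISPR array": snippets of viral DNA stored from past infections
crispr_array = ["ATGCCGTA", "GGCTTACA", "TTAGGCAT"]

def cas9_cut(invader_dna, spacers):
    """If any remembered spacer appears in the invading DNA, 'cut' it there."""
    for spacer in spacers:
        site = invader_dna.find(spacer)
        if site != -1:
            # Splitting the sequence at the matched site disables the toy virus.
            return invader_dna[:site] + " --CUT-- " + invader_dna[site + len(spacer):]
    return invader_dna  # no match: the invader goes unrecognized

print(cas9_cut("CCCTTAGGCATAAA", crispr_array))  # CCC --CUT-- AAA
print(cas9_cut("GATTACAGATTACA", crispr_array))  # unrecognized, returned intact
```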

Most importantly, CRISPR is cheap, quick, easy to use, and more accurate than all previous gene editing methods. As a result, CRISPR/Cas9 has swept through labs around the world as the way to edit a genome. A short search in the literature will show an exponential rise in the number of CRISPR-related publications and patents.

2018: Filled With CRISPR Breakthroughs
Early results are impressive. Researchers have used CRISPR to genetically engineer cocaine resistance into mice, reverse the gene defect causing Duchenne muscular dystrophy (DMD) in dogs, and reduce genetic deafness in mice.

Already this year, CRISPR-edited immune cells have been shown to successfully kill cancer cells in human patients. Researchers have discovered ways to activate CRISPR with light and use the gene-editing technology to better understand Alzheimer’s disease progression.

With great power comes great responsibility, and the opportunity for moral and ethical dilemmas. In 2015, Chinese scientists sparked global controversy when they first edited human embryo cells in the lab with the goal of modifying genes that would make the child resistant to smallpox, HIV, and cholera. Three years later, in November 2018, researcher He Jiankui informed the world that the first set of CRISPR-engineered female twins had been delivered.

To accomplish this, Dr. He disabled CCR5, the gene for a receptor on the surface of white blood cells, mimicking a rare, natural genetic variation that makes it more difficult for HIV to infect its favorite target, white blood cells. For forging ethical review documents and misleading doctors in the process, he was sentenced to three years in prison and fined $429,000 last December.

Coupled with significant ethical conversations necessary for progress, CRISPR will soon provide us the tools to eliminate diseases, create hardier offspring, produce new environmentally resistant crops, and even wipe out pathogens.

Senolytics, Nutraceuticals, and Pharmaceuticals
Over the arc of your life, the cells in your body divide until they reach what is known as the Hayflick limit: the number of times a normal human cell population will divide before division stops, typically about 50 divisions.

What normally follows next is programmed cell death or destruction by the immune system. A very small fraction of cells, however, become senescent cells and evade this fate to linger indefinitely. These lingering cells secrete a potent mix of molecules that triggers chronic inflammation, damages the surrounding tissue structures, and changes the behavior of nearby cells for the worse. Senescent cells appear to be one of the root causes of aging, causing everything from fibrosis and blood vessel calcification to localized inflammatory conditions such as osteoarthritis to diminished lung function.

Fortunately, both the scientific and entrepreneurial communities have begun to work on senolytic therapies, moving the technology for selectively destroying senescent cells out of the laboratory and into a half-dozen startup companies.

Prominent companies in the field include the following:

Unity Biotechnology is developing senolytic medicines to selectively eliminate senescent cells with an initial focus on delivering localized therapy in osteoarthritis, ophthalmology, and pulmonary disease.

Oisin Biotechnologies is pioneering a programmable gene therapy that can destroy cells based on their internal biochemistry.

SIWA Therapeutics is working on an immunotherapy approach to the problem of senescent cells.

In recent years, researchers have identified or designed a handful of senolytic compounds that can curb aging by regulating senescent cells. Two of these drugs that have gained mainstream research traction are rapamycin and metformin.

(1) Rapamycin

Originally extracted from bacteria found on Easter Island, rapamycin acts on the mTOR (mechanistic target of rapamycin) pathway to selectively block a key protein that facilitates cell division. Currently, rapamycin derivatives are widely used for immunosuppression in organ and bone marrow transplants. Research now suggests that rapamycin use results in prolonged lifespan and enhanced cognitive and immune function, at least in animal models.

PureTech Health subsidiary resTORbio (which went public in 2018) is working on a rapamycin-based drug intended to enhance immunity and reduce infection. Their clinical-stage RTB101 drug works by inhibiting part of the mTOR pathway.

Results of the drug’s recent clinical trial include decreased incidence of infection, improved influenza vaccination response, and a 30.6 percent decrease in respiratory tract infections.

Impressive, to say the least.

(2) Metformin

Metformin is a widely used generic drug that reduces glucose production by the liver in type 2 diabetes patients. Researchers have found that metformin also reduces oxidative stress and inflammation, which otherwise increase as we age. There is strong evidence that, by damping both, metformin can augment cellular regeneration and dramatically mitigate cellular senescence.

Over 100 studies registered on ClinicalTrials.gov are currently following up on strong evidence of metformin’s protective effect against cancer.

(3) Nutraceuticals and NAD+

Beyond cellular senescence, certain critical nutrients and proteins tend to decline as a function of age. Nutraceuticals combat aging by supplementing and replenishing these declining nutrient levels.

NAD+ is a coenzyme found in every cell, involved in everything from DNA repair to producing the energy that powers cellular processes. NAD+ levels have been shown to decline as we age.

The Elysium Health Basis supplement aims to elevate NAD+ levels in the body to extend one’s lifespan. Elysium’s first clinical study reports that Basis increases NAD+ levels consistently by a sustained 40 percent.

Conclusion
These are just a taste of the tremendous momentum that longevity and aging technology has right now. As artificial intelligence and quantum computing transform how we decode our DNA and how we discover drugs, genetics and pharmaceuticals will become truly personalized.

The next article in this series will demonstrate how artificial intelligence is converging with genetics and pharmaceuticals to transform how we approach longevity, aging, and vitality.

We are edging closer toward a dramatically extended healthspan—where 100 is the new 60. What will you create, where will you explore, and how will you spend your time if you are able to add 40 healthy years to your life?

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, Ca. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2021 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Arek Socha from Pixabay

Posted in Human Robots

#437182 MIT’s Tiny New Brain Chip Aims for AI ...

The human brain operates on roughly 20 watts of power (a third of a 60-watt light bulb) in a space the size of, well, a human head. The biggest machine learning algorithms use closer to a nuclear power plant’s worth of electricity and racks of chips to learn.

That’s not to slander machine learning, but nature may have a tip or two to improve the situation. Luckily, there’s a branch of computer chip design heeding that call. By mimicking the brain, super-efficient neuromorphic chips aim to take AI off the cloud and put it in your pocket.

The latest such chip is smaller than a piece of confetti and has tens of thousands of artificial synapses made out of memristors—chip components that can mimic their natural counterparts in the brain.

In a recent paper in Nature Nanotechnology, a team of MIT scientists say their tiny new neuromorphic chip was used to store, retrieve, and manipulate images of Captain America’s Shield and MIT’s Killian Court. Whereas images stored with existing methods tended to lose fidelity over time, the new chip’s images remained crystal clear.

“So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” Jeehwan Kim, associate professor of mechanical engineering at MIT, said in a press release. “Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.”

A Brain in Your Pocket
Whereas the computers in our phones and laptops use separate digital components for processing and memory—and therefore need to shuttle information between the two—the MIT chip uses analog components called memristors that process and store information in the same place. This is similar to the way the brain works and makes memristors far more efficient. To date, however, they’ve struggled with reliability and scalability.
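To make the in-memory computing idea concrete, here’s a simplified numerical sketch (my own illustration, not MIT’s design). A crossbar of memristors stores a weight matrix as conductances, input voltages drive the rows, and the weighted sums appear as currents on the columns, so the multiply-accumulate happens right where the weights are stored.

```python
import numpy as np

# Simplified sketch of analog in-memory computing with a memristor crossbar.
# Illustrative only: each memristor's conductance stores a synaptic weight,
# input voltages drive the rows, and the current collected on each column is
# the weighted sum of the inputs (Ohm's law plus Kirchhoff's current law).

rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3
conductances = rng.uniform(0.1, 1.0, size=(n_inputs, n_outputs))  # stored weights
input_voltages = rng.uniform(0.0, 0.5, size=n_inputs)             # input activations

# Ideal crossbar: column currents are weighted sums of row voltages.
ideal_currents = conductances.T @ input_voltages

# Real devices drift; unalloyed memristors scatter ions and lose fidelity,
# mimicked here as multiplicative noise on the stored conductances.
drifted = conductances * rng.normal(1.0, 0.05, size=conductances.shape)
noisy_currents = drifted.T @ input_voltages

print("ideal column currents:", np.round(ideal_currents, 4))
print("noisy column currents:", np.round(noisy_currents, 4))
```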

To overcome these challenges, the MIT team designed a new kind of silicon-based, alloyed memristor. Ions flowing in memristors made from unalloyed materials tend to scatter as the components get smaller, meaning the signal loses fidelity and the resulting computations are less reliable. The team found an alloy of silver and copper helped stabilize the flow of silver ions between electrodes, allowing them to scale the number of memristors on the chip without sacrificing functionality.

While MIT’s new chip is promising, there’s likely a ways to go before memristor-based neuromorphic chips go mainstream. Between now and then, engineers like Kim have their work cut out for them to further scale and demonstrate their designs. But if successful, they could make for smarter smartphones and other even smaller devices.

“We would like to develop this technology further to have larger-scale arrays to do image recognition tasks,” Kim said. “And some day, you might be able to carry around artificial brains to do these kinds of tasks, without connecting to supercomputers, the internet, or the cloud.”

Special Chips for AI
The MIT work is part of a larger trend in computing and machine learning. As progress in classical chips has flagged in recent years, there’s been an increasing focus on more efficient software and specialized chips to continue pushing the pace.

Neuromorphic chips, for example, aren’t new. IBM and Intel are developing their own designs. So far, their chips have been based on groups of standard computing components, such as transistors (as opposed to memristors), arranged to imitate neurons in the brain. These chips are, however, still in the research phase.

Graphics processing units (GPUs)—chips originally developed for graphics-heavy work like video games—are the best practical example of specialized hardware for AI and were heavily used early on in this generation of machine learning. In the years since, Google, NVIDIA, and others have developed even more specialized chips that cater more specifically to machine learning.

The gains from such specialized chips are already being felt.

In a recent cost analysis of machine learning, research and investment firm ARK Invest said cost declines have far outpaced Moore’s Law. In a particular example, they found the cost to train an image recognition algorithm (ResNet-50) went from around $1,000 in 2017 to roughly $10 in 2019. The fall in cost to actually run such an algorithm was even more dramatic. It took $10,000 to classify a billion images in 2017 and just $0.03 in 2019.

Some of these declines can be traced to better software, but according to ARK, specialized chips have improved performance by nearly 16 times in the last three years.
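For context, a quick bit of arithmetic on the ARK figures quoted above puts an annualized rate on those declines. The dollar figures are ARK’s; only the derived rates below are mine.

```python
# Annualized rates derived from the ARK Invest figures quoted above (2017 vs. 2019).

def annual_decline(cost_start, cost_end, years):
    """Average factor by which the cost fell per year."""
    return (cost_start / cost_end) ** (1 / years)

training = annual_decline(1_000, 10, 2)      # ResNet-50 training cost
inference = annual_decline(10_000, 0.03, 2)  # classifying a billion images

print(f"Training cost fell roughly {training:.0f}x per year")    # ~10x
print(f"Inference cost fell roughly {inference:.0f}x per year")  # ~577x
```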

As neuromorphic chips—and other tailored designs—advance further in the years to come, these trends in cost and performance may continue. Eventually, if all goes to plan, we might all carry a pocket brain that can do the work of today’s best AI.

Image credit: Peng Lin

Posted in Human Robots

#436488 Tech’s Biggest Leaps From the Last 10 ...

As we enter our third decade in the 21st century, it seems appropriate to reflect on the ways technology developed and note the breakthroughs that were achieved in the last 10 years.

The 2010s saw IBM’s Watson win a game of Jeopardy, ushering in mainstream awareness of machine learning, along with DeepMind’s AlphaGo defeating the world’s best Go players. It was the decade that industrial tools like drones, 3D printers, genetic sequencing, and virtual reality (VR) all became consumer products. And it was a decade in which some alarming trends related to surveillance, targeted misinformation, and deepfakes came online.

For better or worse, the past decade was a breathtaking era in human history in which the idea of exponential growth in information technologies powered by computation became a mainstream concept.

As I did last year for 2018, I’ve asked a collection of experts across the Singularity University faculty to help frame the biggest breakthroughs and moments that gave shape to the past 10 years. I asked them what, in their opinion, was the most important breakthrough in their respective fields over the past decade.

My own answer to this question, focused in the space of augmented and virtual reality, would be the stunning announcement in March of 2014 that Facebook acquired Oculus VR for $2 billion. Although VR technology had been around for a while, it was at this precise moment that VR arrived as a consumer technology platform. Facebook, largely fueled by the singular interest of CEO Mark Zuckerberg, has funded the development of this industry, keeping alive the hope that consumer VR can become a sustainable business. In the meantime, VR has continued to grow in sophistication and usefulness, though it has yet to truly take off as a mainstream concept. That will hopefully be a development for the 2020s.

Below is a decade in review across the technology areas that are giving shape to our modern world, as described by the SU community of experts.

Digital Biology
Dr. Tiffany Vora | Faculty Director and Vice Chair, Digital Biology and Medicine, Singularity University

In my mind, this decade of astounding breakthroughs in the life sciences and medicine rests on the achievement of the $1,000 human genome in 2016. More-than-exponentially falling costs of DNA sequencing have driven advances in medicine, agriculture, ecology, genome editing, synthetic biology, the battle against climate change, and our fundamental understanding of life and its breathtaking connections. The “digital” revolution in DNA constituted an important model for harnessing other types of biological information, from personalized bio data to massive datasets spanning populations and species.

Crucially, by aggressively driving down the cost of such analyses, researchers and entrepreneurs democratized access to the source code of life—with attendant financial, cultural, and ethical consequences. Exciting, but take heed: Veritas Genetics spearheaded a $600 genome in 2019, only to have to shutter its US operations when its funding became tangled up in the trade war with China. Stay tuned through the early 2020s to see the pricing of DNA sequencing fall even further … and to experience the many ways that cheaper, faster harvesting of biological data will enrich your daily life.

Cryptocurrency
Alex Gladstein | Chief Strategy Officer, Human Rights Foundation

The past decade has seen Bitcoin go from just an idea on an obscure online message board to a global financial network carrying more than 100 billion dollars in value. And we’re just getting started. One recent defining moment in the cryptocurrency space has been a stunning trend underway in Venezuela, where today, the daily dollar-denominated value of Bitcoin traded now far exceeds the daily dollar-denominated value traded on the Caracas Stock Exchange. It’s just one country, but it’s a significant country, and a paradigm shift.

Governments and corporations are following Bitcoin’s success too, and are looking to launch their own digital currencies. China will launch its “DC/EP” project in the coming months, and Facebook is trying to kickstart its Libra project. There are technical and regulatory uncertainties for both, but one thing is for certain: the era of digital currency has arrived.

Business Strategy and Entrepreneurship
Pascal Finette | Chair, Entrepreneurship and Open Innovation, Singularity University

For me, without a doubt, the most interesting and quite possibly ground-shifting development in the fields of entrepreneurship and corporate innovation in the last ten years is the rapid maturing of customer-driven product development frameworks such as Lean Startup, and its subsequent adoption by corporates for their own innovation purposes.

Tools and frameworks like the Business Model Canvas, agile (software) development and the aforementioned Lean Startup methodology fundamentally shifted the way we think and go about building products, services, and companies, with many of these tools bursting onto the startup scene in the late 2000s and early 2010s.

As these tools matured, they found mass adoption not only in startups around the world but also in incumbent companies eager to increase their own innovation velocity and success.

Energy
Ramez Naam | Co-Chair, Energy and Environment, Singularity University

The 2010s were the decade that saw clean electricity, energy storage, and electric vehicles break through price and performance barriers around the world. Solar, wind, batteries, and EVs started this decade as technologies that had to be subsidized. That was the first phase of their existence. Now they’re entering their third, most disruptive phase, where shifting to clean energy and mobility is cheaper than continuing to use existing coal, gas, or oil infrastructure.

Consider that at the start of 2010, there was no place on earth where building new solar or wind was cheaper than building new coal or gas power generation. By 2015, in some of the sunniest and windiest places on earth, solar and wind had entered their second phase, where they were cost-competitive for new power. And then, in 2018 and 2019, we started to see the edge of the third phase, as building new solar and wind, in some parts of the world, was cheaper than operating existing coal or gas power plants.

Food Technology
Liz Specht, PhD | Associate Director of Science & Technology, The Good Food Institute

The arrival of mainstream plant-based meat is easily the food tech advance of the decade. Meat analogs have, of course, been around forever. But only in the last decade have companies like Beyond Meat and Impossible Foods decided to cut animals out of the process and build no-compromise meat directly from plants.

Plant-based meat is already transforming the fast-food industry. For example, the introduction of the Impossible Whopper led Burger King to their most profitable quarter in many years. But the global food industry as a whole is shifting as well. Tyson, JBS, Nestle, Cargill, and many others are all embracing plant-based meat.

Augmented and Virtual Reality
Jody Medich | CEO, Superhuman-x

The breakthrough moment for augmented and virtual reality came in 2013 when Palmer Luckey took apart an Android smartphone and added optic lenses to make the first version of the Oculus Rift. Prior to that moment, we struggled with miniaturizing the components needed to develop low-latency head-worn devices. But thanks to the smartphone race started in 2007 with the iPhone, we finally had a suite of sensors, chips, displays, and computing power small enough to put on the head.

What will the next 10 years bring? Look for AR/VR to explode in a big way. We are right on the cusp of that tipping point when the tech is finally “good enough” for our linear expectations. Given all it can do today, we can’t even picture what’s possible. Just as today we can’t function without our phones, by 2029 we’ll feel lost without some AR/VR product. It will be the way we interact with computing, smart objects, and AI. Tim Cook, Apple CEO, predicts it will replace all of today’s computing devices. I can’t wait.

Philosophy of Technology
Alix Rübsaam | Faculty Fellow, Singularity University, Philosophy of Technology/Ethics of AI

The last decade has seen a significant shift in our general attitude towards the algorithms that we now know dictate much of our surroundings. Looking back at the beginning of the decade, it seems we were blissfully unaware of how the data we freely and willingly surrendered would feed the algorithms that would come to shape every aspect of our daily lives: the news we consume, the products we purchase, the opinions we hold, etc.

If I were to isolate a single publication that contributed greatly to the shift in public discourse on algorithms, it would have to be Cathy O’Neil’s Weapons of Math Destruction from 2016. It remains a comprehensive, readable, and highly informative insight into how algorithms dictate our finances, our jobs, where we go to school, or if we can get health insurance. Its publication represents a pivotal moment when the general public started to question whether we should be OK with outsourcing decision making to these opaque systems.

The ubiquity of ethical guidelines for AI and algorithms published just in the last year (perhaps most comprehensively by the AI Now Institute) fully demonstrates the shift in public opinion of this decade.

Data Science
Ola Kowalewski | Faculty Fellow, Singularity University, Data Innovation

In the last decade we entered the era of internet and smartphone ubiquity. The number of internet users doubled, with nearly 60 percent of the global population now connected online and over 35 percent owning a smartphone. With billions of people in a state of constant connectedness, and therefore constant surveillance, the companies that have built the tech infrastructure and information pipelines have come to dominate the global economy. This shift from tech companies being the underdogs to arguably the world’s major powers sets the landscape we enter for the next decade.

Global Grand Challenges
Darlene Damm | Vice Chair, Faculty, Global Grand Challenges, Singularity University

The biggest breakthrough over the last decade in social impact and technology is that the social impact sector switched from seeing technology as something problematic to avoid, to one of the most effective ways to create social change. We now see people using exponential technologies to solve all sorts of social challenges in areas ranging from disaster response to hunger to shelter.

The world’s leading social organizations, such as UNICEF and the World Food Programme, have launched their own venture funds and accelerators, and the United Nations recently declared that digitization is revolutionizing global development.

Digital Biology
Raymond McCauley | Chair, Digital Biology, Singularity University, Co-Founder & Chief Architect, BioCurious; Principal, Exponential Biosciences

CRISPR is bringing about a revolution in genetic engineering. It’s obvious, and it’s huge. What may not be so obvious is the widespread adoption of genetic testing. And this may have an even longer-lasting effect. It’s used to test new babies, to solve medical mysteries, and to catch serial killers. Thanks to holiday ads from 23andMe and Ancestry.com, it’s everywhere. Testing your DNA is now a common over-the-counter product. People are using it to set their diet, to pick drugs, and even for dating (or at least picking healthy mates).

And we’re just in the early stages. Further down the line, doing large-scale studies on more people, with more data, will lead to the use of polygenic risk scores to help us rank our genetic potential for everything from getting cancer to being a genius. Can you imagine what it would be like for parents to pick new babies, GATTACA-style, to get the smartest kids? You don’t have to; it’s already happening.

Artificial Intelligence
Neil Jacobstein | Chair, Artificial Intelligence and Robotics, Singularity University

The convergence of exponentially improved computing power, the deep learning algorithm, and access to massive data resulted in a series of AI breakthroughs over the past decade. These included vastly improved accuracy in identifying images, practical self-driving cars, victories over several world champions in Go, and the identification of gender, smoking status, and age from retinal fundus photographs.

Combined, these breakthroughs convinced researchers and investors that after 50+ years of research and development, AI was ready for prime-time applications. Now, virtually every field of human endeavor is being revolutionized by machine learning. We still have a long way to go to achieve human-level intelligence and beyond, but the pace of worldwide improvement is blistering.

Hod Lipson | Professor of Engineering and Data Science, Columbia University

The biggest moment in AI in the past decade (and in its entire history, in my humble opinion) was midnight, Pacific time, September 30, 2012: the moment when machines finally opened their eyes. It was the moment when deep learning took off, breaking stagnant decades of machine blindness, when AI couldn’t reliably tell a cat from a dog. That seemingly trivial accomplishment—a task any one-year-old child can do—has had a ripple effect on AI applications from driverless cars to health diagnostics. And this is just the beginning of what is sure to be a Cambrian explosion of AI.

Neuroscience
Divya Chander | Chair, Neuroscience, Singularity University

If the 2000s were the decade of brain mapping, then the 2010s were the decade of brain writing. Optogenetics, a technique for precisely mapping and controlling genetically targeted neurons and neural circuits using light, saw incredible growth in the 2010s.

Also in the last 10 years, neuromodulation, or the ability to rewire the brain using both invasive and non-invasive interfaces and energy, has exploded in use and form. For instance, the BrainGate consortium showed how electrode arrays implanted in the motor cortex could let paralyzed people direct a robotic arm with their thoughts. These technologies, alone or in combination with robotics, exoskeletons, and flexible, implantable electronics, also make possible a future of human augmentation.

Image Credit: Jorge Guillen from Pixabay

Posted in Human Robots