#437216 New Report: Tech Could Fuel an Age of ...

With rapid technological progress running headlong into dramatic climate change and widening inequality, most experts agree the coming decade will be tumultuous. But a new report predicts it could actually make or break civilization as we know it.

The idea that humanity is facing a major shake-up this century is not new. The Fourth Industrial Revolution being brought about by technologies like AI, gene editing, robotics, and 3D printing is predicted to cause dramatic social, political, and economic upheaval in the coming decades.

But according to think tank RethinkX, thinking about the coming transition as just another industrial revolution is too simplistic. In a report released last week called Rethinking Humanity, the authors argue that we are about to see a reordering of our relationship with the world as fundamental as when hunter-gatherers came together to build the first civilizations.

At the core of their argument is the fact that since the first large human settlements appeared 10,000 years ago, civilization has been built on the back of our ability to extract resources from nature, be they food, energy, or materials. This led to a competitive landscape where the governing logic is grow or die, which has driven all civilizations to date.

That could be about to change thanks to emerging technologies that will fundamentally disrupt the five foundational sectors underpinning society: information, energy, food, transportation, and materials. They predict that across all five, costs will fall by 10 times or more, while production processes will become 10 times more efficient and will use 90 percent fewer natural resources with 10 to 100 times less waste.

They say that this transformation has already happened in information, where the internet has dramatically reduced barriers to communication and knowledge. They predict the combination of cheap solar and grid storage will soon see energy costs drop as low as one cent per kilowatt hour, and they envisage widespread adoption of autonomous electric vehicles and the replacement of car ownership with ride-sharing.

The authors laid out their vision for the future of food in another report last year, where they predicted that traditional agriculture would soon be replaced by industrial-scale brewing of single-celled organisms genetically modified to produce all the nutrients we need. In a similar vein, they believe the same processes combined with additive manufacturing and “nanotechnologies” will allow us to build all the materials required for the modern world from the molecule up rather than extracting scarce natural resources.

They believe this could allow us to shift from a system of production based on extraction to one built on creation, as limitless renewable energy makes it possible to build everything we need from scratch and barriers to movement and information disappear. As a result, a lifestyle worthy of the “American Dream” could be available to anyone for as little as $250/month by 2030.

This will require a fundamental reimagining of our societies, though. All great civilizations have eventually hit fundamental limits on their growth, and we are no different, as demonstrated by our growing impact on the environment and the increasing concentration of wealth. Historically, this stage of development has led to a doubling down on old tactics in search of short-term gains, but that invariably leads to the collapse of the civilization.

The authors argue that we’re in a unique position. Because of the technological disruption detailed above, we have the ability to break through the limits on our growth. But only if we change what the authors call our “Organizing System.” They describe this as “the prevailing models of thought, belief systems, myths, values, abstractions, and conceptual frameworks that help explain how the world works and our relationship to it.”

They say that the current hierarchical, centralized system based on nation-states is unfit for the new system of production that is emerging. The cracks are already starting to appear, with problems like disinformation campaigns, fake news, and growing polarization demonstrating how ill-suited our institutions are for dealing with the distributed nature of today’s information systems. And as this same disruption comes to the other foundational sectors, the shockwaves could lead to the collapse of civilization as we know it.

Their solution is a conscious shift towards a new way of organizing the world. As emerging technology allows communities to become self-sufficient, flows of physical resources will be replaced by flows of information, and we will require a decentralized but highly networked Organizing System.

The report includes detailed recommendations on how to usher this in. Examples include giving individuals control and ownership of data rights; developing new models for community ownership of energy, information, and transportation networks; and allowing states and cities far greater autonomy on policies like immigration, taxation, education, and public expenditure.

How easy it will be to get people on board with such a shift is another matter. The authors say it may require us to re-examine the foundations of our society, like representative democracy, capitalism, and nation-states. While they acknowledge that these ideas are deeply entrenched, they appear to believe we can reason our way around them.

That seems optimistic. Cultural and societal change can be glacial, and efforts to impose it top-down through reason and logic are rarely successful. The report seems to gloss over many of the messy realities of humanity, such as the huge sway that tradition and religion hold over the vast majority of people.

It also doesn’t deal with the uneven distribution of the technology that is supposed to catapult us into this new age. And while the predicted revolutions in transportation, energy, and information do seem inevitable, the idea that in the next decade or two we’ll be able to produce any material we desire using cheap and abundant stock materials seems like a stretch.

Despite the techno-utopianism, though, many of the ideas in the report hold promise for building societies that are better adapted to the disruptive new age we are about to enter.

Image Credit: Futuristic Society/flickr

#437209 A Renaissance of Genomics and Drugs Is ...

The causes of aging are extremely complex and unclear. But with longevity clinical trials increasing, more answers—and questions—are emerging than ever before.

With the dramatic demonetization of genome reading and editing over the past decade, and Big Pharma, startups, and the FDA starting to face aging as a disease, we are starting to turn those answers into practical ways to extend our healthspan.

In this article, I’ll explore how genome sequencing and editing, along with new classes of anti-aging drugs, are augmenting our biology to further extend our healthy lives.

Genome Sequencing and Editing
Your genome is the software that runs your body. A sequence of 3.2 billion letters makes you “you.” These base pairs of A’s, T’s, C’s, and G’s determine your hair color, your height, your personality, your propensity for disease, your lifespan, and so on.

Until recently, it’s been very difficult to rapidly and cheaply “read” these letters—and even more difficult to understand what they mean. Since 2001, the cost to sequence a whole human genome has plummeted exponentially, outpacing Moore’s Law threefold. From an initial cost of $3.7 billion, it dropped to $10 million in 2006, and to $1,500 in 2015.

Today, the cost of genome sequencing has dropped below $600, and according to Illumina, the world’s leading sequencing company, the process will soon cost about $100 and take about an hour to complete.
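
To see where a claim like “outpacing Moore’s Law threefold” comes from, here is a rough back-of-the-envelope sketch using only the price points quoted above. The steady exponential decline and the two-year Moore’s Law doubling period are my own simplifying assumptions, not figures from the sequencing industry.

```python
import math

# Price points quoted above: ~$10 million in 2006, ~$1,500 in 2015.
cost_2006, cost_2015 = 10e6, 1_500
years = 2015 - 2006

# Assume a steady exponential decline and ask how often the cost halved.
halvings = math.log2(cost_2006 / cost_2015)   # ~12.7 halvings over 9 years
halving_time_months = years / halvings * 12   # ~8.5 months per halving

moore_months = 24  # assumption: Moore's Law doubling (cost halving) roughly every 2 years

print(f"Sequencing cost halved every ~{halving_time_months:.1f} months")
print(f"That is ~{moore_months / halving_time_months:.1f}x the pace of Moore's Law")
```

On those assumptions, the cost halved roughly every eight and a half months between 2006 and 2015, close to three times the pace of a two-year Moore’s Law cadence, which is consistent with the threefold figure.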

This represents one of the most powerful and transformative technology revolutions in healthcare. When we understand your genome, we’ll be able to understand how to optimize “you.”

We’ll know the perfect foods, the perfect drugs, the perfect exercise regimen, and the perfect supplements, just for you.
We’ll understand what microbiome types, or gut flora, are ideal for you (more on this in a later article).
We’ll accurately predict how specific sedatives and medicines will impact you.
We’ll learn which diseases and illnesses you’re most likely to develop and, more importantly, how to best prevent them from developing in the first place (rather than trying to cure them after the fact).

CRISPR Gene Editing
In addition to reading the human genome, scientists can now edit a genome using a naturally occurring biological system discovered in 1987 called CRISPR/Cas9.

Short for Clustered Regularly Interspaced Short Palindromic Repeats and CRISPR-associated protein 9, the editing system was adapted from a naturally-occurring defense system found in bacteria.

Here’s how it works. The bacteria capture snippets of DNA from invading viruses (or bacteriophages) and use them to create DNA segments known as CRISPR arrays. The CRISPR arrays allow the bacteria to “remember” the viruses (or closely related ones), and defend against future invasions. If the viruses attack again, the bacteria produce RNA segments from the CRISPR arrays to target the viruses’ DNA. The bacteria then use Cas9 to cut the DNA apart, which disables the virus.

Most importantly, CRISPR is cheap, quick, easy to use, and more accurate than all previous gene editing methods. As a result, CRISPR/Cas9 has swept through labs around the world as the way to edit a genome. A short search in the literature will show an exponential rise in the number of CRISPR-related publications and patents.

2018: Filled With CRISPR Breakthroughs
Early results are impressive. Researchers have used CRISPR to genetically engineer cocaine resistance into mice, reverse the gene defect causing Duchenne muscular dystrophy (DMD) in dogs, and reduce genetic deafness in mice.

Already this year, CRISPR-edited immune cells have been shown to successfully kill cancer cells in human patients. Researchers have discovered ways to activate CRISPR with light and use the gene-editing technology to better understand Alzheimer’s disease progression.

With great power comes great responsibility, and the opportunity for moral and ethical dilemmas. In 2015, Chinese scientists sparked global controversy when they first edited human embryo cells in the lab with the goal of modifying genes that would make the child resistant to smallpox, HIV, and cholera. Three years later, in November 2018, researcher He Jiankui informed the world that the first set of CRISPR-engineered female twins had been delivered.

To accomplish his goal, Jiankui deleted a region of a receptor on the surface of white blood cells known as CCR5, introducing a rare, natural genetic variation that makes it more difficult for HIV to infect its favorite target, white blood cells. Because Jiankui forged ethical review documents and misled doctors in the process, he was sentenced to three years in prison and fined $429,000 last December.

Coupled with significant ethical conversations necessary for progress, CRISPR will soon provide us the tools to eliminate diseases, create hardier offspring, produce new environmentally resistant crops, and even wipe out pathogens.

Senolytics, Nutraceuticals, and Pharmaceuticals
Over the arc of your life, the cells in your body divide until they reach what is known as the Hayflick limit, or the number of times a normal human cell population will divide before cell division stops, which is typically about 50 divisions.

What normally follows next is programmed cell death or destruction by the immune system. A very small fraction of cells, however, become senescent cells and evade this fate to linger indefinitely. These lingering cells secrete a potent mix of molecules that triggers chronic inflammation, damages the surrounding tissue structures, and changes the behavior of nearby cells for the worse. Senescent cells appear to be one of the root causes of aging, causing everything from fibrosis and blood vessel calcification to localized inflammatory conditions such as osteoarthritis to diminished lung function.

Fortunately, both the scientific and entrepreneurial communities have begun to work on senolytic therapies, moving the technology for selectively destroying senescent cells out of the laboratory and into a half-dozen startup companies.

Prominent companies in the field include the following:

Unity Biotechnology is developing senolytic medicines to selectively eliminate senescent cells with an initial focus on delivering localized therapy in osteoarthritis, ophthalmology, and pulmonary disease.

Oisin Biotechnologies is pioneering a programmable gene therapy that can destroy cells based on their internal biochemistry.

SIWA Therapeutics is working on an immunotherapy approach to the problem of senescent cells.

In recent years, researchers have identified or designed a handful of senolytic compounds that can curb aging by regulating senescent cells. Two drugs that have gained significant research traction in this space are rapamycin and metformin.

(1) Rapamycin

Originally extracted from bacteria found on Easter Island, rapamycin acts on the mTOR (mechanistic target of rapamycin) pathway to selectively block a key protein that facilitates cell division. Currently, rapamycin derivatives are widely used for immunosuppression in organ and bone marrow transplants. Research now suggests that rapamycin use can prolong lifespan and enhance cognitive and immune function.

PureTech Health subsidiary resTORbio (which went public in 2018) is working on a rapamycin-based drug intended to enhance immunity and reduce infection. Their clinical-stage RTB101 drug works by inhibiting part of the mTOR pathway.

Results of the drug’s recent clinical trial include decreased incidence of infection, improved influenza vaccination response, and a 30.6 percent decrease in respiratory tract infection.

Impressive, to say the least.

(2) Metformin

Metformin is a widely-used generic drug for mitigating liver sugar production in Type 2 diabetes patients. Researchers have found that metformin also reduces oxidative stress and inflammation, which otherwise increase as we age. There is strong evidence that metformin can augment cellular regeneration and dramatically mitigate cellular senescence by reducing both oxidative stress and inflammation.

Over 100 studies registered on ClinicalTrials.gov are currently following up on strong evidence of metformin’s protective effect against cancer.

(3) Nutraceuticals and NAD+

Beyond cellular senescence, certain critical nutrients and proteins tend to decline as a function of age. Nutraceuticals combat aging by supplementing and replenishing these declining nutrient levels.

NAD+ exists in every cell, participating in processes ranging from DNA repair to the creation of the energy vital for cellular function. It’s been shown that NAD+ levels decline as we age.

The Elysium Health Basis supplement aims to elevate NAD+ levels in the body to extend one’s lifespan. Elysium’s first clinical study reports that Basis increases NAD+ levels consistently by a sustained 40 percent.

Conclusion
These are just a taste of the tremendous momentum that longevity and aging technology has right now. As artificial intelligence and quantum computing transform how we decode our DNA and how we discover drugs, genetics and pharmaceuticals will become truly personalized.

The next article in this series will demonstrate how artificial intelligence is converging with genetics and pharmaceuticals to transform how we approach longevity, aging, and vitality.

We are edging closer toward a dramatically extended healthspan—where 100 is the new 60. What will you create, where will you explore, and how will you spend your time if you are able to add an additional 40 healthy years to your life?

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2021 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Arek Socha from Pixabay

#437182 MIT’s Tiny New Brain Chip Aims for AI ...

The human brain operates on roughly 20 watts of power (a third of a 60-watt light bulb) in a space the size of, well, a human head. The biggest machine learning algorithms use closer to a nuclear power plant’s worth of electricity and racks of chips to learn.

That’s not to slander machine learning, but nature may have a tip or two to improve the situation. Luckily, there’s a branch of computer chip design heeding that call. By mimicking the brain, super-efficient neuromorphic chips aim to take AI off the cloud and put it in your pocket.

The latest such chip is smaller than a piece of confetti and has tens of thousands of artificial synapses made out of memristors—chip components that can mimic their natural counterparts in the brain.

In a recent paper in Nature Nanotechnology, a team of MIT scientists say their tiny new neuromorphic chip was used to store, retrieve, and manipulate images of Captain America’s Shield and MIT’s Killian Court. Whereas images stored with existing methods tended to lose fidelity over time, the new chip’s images remained crystal clear.

“So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” Jeehwan Kim, associate professor of mechanical engineering at MIT, said in a press release. “Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.”

A Brain in Your Pocket
Whereas the computers in our phones and laptops use separate digital components for processing and memory—and therefore need to shuttle information between the two—the MIT chip uses analog components called memristors that process and store information in the same place. This is similar to the way the brain works and makes memristors far more efficient. To date, however, they’ve struggled with reliability and scalability.

To overcome these challenges, the MIT team designed a new kind of silicon-based, alloyed memristor. Ions flowing in memristors made from unalloyed materials tend to scatter as the components get smaller, meaning the signal loses fidelity and the resulting computations are less reliable. The team found an alloy of silver and copper helped stabilize the flow of silver ions between electrodes, allowing them to scale the number of memristors on the chip without sacrificing functionality.
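
It can help to see this “compute where the memory lives” idea in a toy model. The sketch below is an idealized abstraction of a memristor crossbar, not the MIT team’s device: each crossing stores a weight as a conductance, input voltages drive the rows, and Kirchhoff’s current law sums the currents on each column, so the array performs a matrix-vector multiply in the same place the weights are stored.

```python
import numpy as np

# Idealized memristor crossbar: each device's conductance (siemens) stores one
# weight, so the array itself is the memory.
G = np.array([
    [1.0e-6, 5.0e-7],
    [3.0e-6, 1.5e-6],
    [2.0e-6, 1.0e-7],
])  # 3 row wires x 2 column wires

V = np.array([0.2, 0.5, 0.1])  # input voltages applied to the row wires (volts)

# Ohm's law gives each device's current (V * G); Kirchhoff's current law sums the
# currents on each column wire, so the crossbar computes V @ G "in place."
I = V @ G
print(I)  # currents collected at the two column wires (amperes)
```

Because the multiply-accumulate happens where the weights physically sit, no data has to shuttle between a processor and a separate memory, which is where the efficiency gains of this kind of design are expected to come from.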

While MIT’s new chip is promising, there’s likely a ways to go before memristor-based neuromorphic chips go mainstream. Between now and then, engineers like Kim have their work cut out for them to further scale and demonstrate their designs. But if successful, they could make for smarter smartphones and other even smaller devices.

“We would like to develop this technology further to have larger-scale arrays to do image recognition tasks,” Kim said. “And some day, you might be able to carry around artificial brains to do these kinds of tasks, without connecting to supercomputers, the internet, or the cloud.”

Special Chips for AI
The MIT work is part of a larger trend in computing and machine learning. As progress in classical chips has flagged in recent years, there’s been an increasing focus on more efficient software and specialized chips to continue pushing the pace.

Neuromorphic chips, for example, aren’t new. IBM and Intel are developing their own designs. So far, their chips have been based on groups of standard computing components, such as transistors (as opposed to memristors), arranged to imitate neurons in the brain. These chips are, however, still in the research phase.

Graphics processing units (GPUs)—chips originally developed for graphics-heavy work like video games—are the best practical example of specialized hardware for AI and were heavily used early on in this generation of machine learning. In the years since, Google, NVIDIA, and others have developed chips that cater even more specifically to machine learning.

The gains from such specialized chips are already being felt.

In a recent cost analysis of machine learning, research and investment firm ARK Invest said cost declines have far outpaced Moore’s Law. In a particular example, they found the cost to train an image recognition algorithm (ResNet-50) went from around $1,000 in 2017 to roughly $10 in 2019. The fall in cost to actually run such an algorithm was even more dramatic. It took $10,000 to classify a billion images in 2017 and just $0.03 in 2019.
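
Taking those figures at face value, a quick sketch shows how steep the implied annual declines are against a Moore’s Law baseline; the two-year window and the geometric averaging are my simplifications, not ARK’s methodology.

```python
# Cost declines cited above, treated as two-year windows (2017 -> 2019).
cases = {
    "train ResNet-50": (1_000, 10),
    "classify 1B images": (10_000, 0.03),
}

for name, (cost_2017, cost_2019) in cases.items():
    total = cost_2017 / cost_2019          # overall reduction factor
    per_year = total ** 0.5                # geometric average per year
    print(f"{name}: {total:,.0f}x cheaper overall, ~{per_year:,.0f}x per year")

print("Moore's Law over the same two years would predict only ~2x.")
```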

Some of these declines can be traced to better software, but according to ARK, specialized chips have improved performance by nearly 16 times in the last three years.

As neuromorphic chips—and other tailored designs—advance further in the years to come, these trends in cost and performance may continue. Eventually, if all goes to plan, we might all carry a pocket brain that can do the work of today’s best AI.

Image credit: Peng Lin

#437171 Scientists Tap the World’s Most ...

In The Hitchhiker’s Guide to the Galaxy by Douglas Adams, the haughty supercomputer Deep Thought is asked whether it can find the answer to the ultimate question concerning life, the universe, and everything. It replies that, yes, it can do it, but it’s tricky and it’ll have to think about it. When asked how long it will take it replies, “Seven-and-a-half million years. I told you I’d have to think about it.”

Real-life supercomputers are being asked somewhat less expansive questions but tricky ones nonetheless: how to tackle the Covid-19 pandemic. They’re being used in many facets of responding to the disease, including to predict the spread of the virus, to optimize contact tracing, to allocate resources and provide decisions for physicians, to design vaccines and rapid testing tools, and to understand sneezes. And the answers are needed in a rather shorter time frame than Deep Thought was proposing.

The largest number of Covid-19 supercomputing projects involves designing drugs. It’s likely to take several effective drugs to treat the disease. Supercomputers allow researchers to take a rational approach and aim to selectively muzzle proteins that SARS-CoV-2, the virus that causes Covid-19, needs for its life cycle.

The viral genome encodes proteins needed by the virus to infect humans and to replicate. Among these are the infamous spike protein that sniffs out and penetrates its human cellular target, but there are also enzymes and molecular machines that the virus forces its human subjects to produce for it. Finding drugs that can bind to these proteins and stop them from working is a logical way to go.

The Summit supercomputer at Oak Ridge National Laboratory has a peak performance of 200,000 trillion calculations per second—equivalent to about a million laptops. Image credit: Oak Ridge National Laboratory, U.S. Dept. of Energy, CC BY

I am a molecular biophysicist. My lab, at the Center for Molecular Biophysics at the University of Tennessee and Oak Ridge National Laboratory, uses a supercomputer to discover drugs. We build three-dimensional virtual models of biological molecules like the proteins used by cells and viruses, and simulate how various chemical compounds interact with those proteins. We test thousands of compounds to find the ones that “dock” with a target protein. Those compounds that fit, lock-and-key style, with the protein are potential therapies.

The top-ranked candidates are then tested experimentally to see if they indeed do bind to their targets and, in the case of Covid-19, stop the virus from infecting human cells. The compounds are first tested in cells, then animals, and finally humans. Computational drug discovery with high-performance computing has been important in finding antiviral drugs in the past, such as the anti-HIV drugs that revolutionized AIDS treatment in the 1990s.
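
The funnel itself is simple to outline in code, even though real docking engines are vastly more sophisticated. In the sketch below, `dock_score` is a purely hypothetical stand-in for a physics-based scoring function; only the score-and-rank structure reflects the workflow described above.

```python
from dataclasses import dataclass
import random

@dataclass
class Compound:
    name: str
    score: float = 0.0  # lower (more negative) = stronger predicted binding

def dock_score(compound: Compound) -> float:
    """Hypothetical stand-in for docking one compound against a target protein.

    A real engine would estimate binding affinity from 3D structure; here we
    draw a random value so only the ranking pipeline is illustrated.
    """
    return random.uniform(-12.0, 0.0)

def virtual_screen(library: list[Compound], top_n: int = 100) -> list[Compound]:
    """Score every compound and keep the best-ranked candidates for lab testing."""
    for c in library:
        c.score = dock_score(c)
    return sorted(library, key=lambda c: c.score)[:top_n]

library = [Compound(f"cmpd_{i}") for i in range(10_000)]
hits = virtual_screen(library)
print(hits[0].name, round(hits[0].score, 2))  # best-ranked candidate heads to the wet lab
```

All of the hard science lives inside the scoring function; the point here is only that virtual screening is a score-and-rank funnel whose top candidates then move on to cells, animals, and finally humans.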

World’s Most Powerful Computer
Since the 1990s the power of supercomputers has increased by a factor of a million or so. Summit at Oak Ridge National Laboratory is presently the world’s most powerful supercomputer, and has the combined power of roughly a million laptops. A laptop today has roughly the same power as a supercomputer had 20-30 years ago.

However, in order to gin up speed, supercomputer architectures have become more complicated. They used to consist of single, very powerful chips on which programs would simply run faster. Now they consist of thousands of processors performing massively parallel processing in which many calculations, such as testing the potential of drugs to dock with a pathogen or cell’s proteins, are performed at the same time. Persuading those processors to work together harmoniously is a pain in the neck but means we can quickly try out a lot of chemicals virtually.
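
Because each compound can be scored independently of the others, this screening loop is “embarrassingly parallel,” which is exactly what those thousands of processors are good at. A minimal sketch of the idea on an ordinary multicore machine follows; on a machine like Summit the same loop is spread across thousands of nodes, and the scoring function here is again a hypothetical stand-in.

```python
from multiprocessing import Pool

def dock_score(compound_id: int) -> tuple[int, float]:
    """Hypothetical worker: score one compound against the target protein."""
    # A real worker would run a docking calculation; we fake a deterministic score.
    score = -((compound_id * 2654435761) % 1000) / 100.0
    return compound_id, score

if __name__ == "__main__":
    compound_ids = range(100_000)
    with Pool() as pool:                      # one worker process per CPU core
        results = pool.map(dock_score, compound_ids, chunksize=1_000)
    results.sort(key=lambda r: r[1])          # most negative (best) scores first
    print("top candidates:", results[:5])
```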

Further, researchers use supercomputers to figure out by simulation the different shapes formed by the target binding sites and then virtually dock compounds to each shape. In my lab, that procedure has produced experimentally validated hits—chemicals that work—for each of 16 protein targets that physician-scientists and biochemists have discovered over the past few years. These targets were selected because finding compounds that dock with them could result in drugs for treating different diseases, including chronic kidney disease, prostate cancer, osteoporosis, diabetes, thrombosis and bacterial infections.

Scientists are using supercomputers to find ways to disable the various proteins—including the infamous spike protein (green protrusions)—produced by SARS-CoV-2, the virus responsible for Covid-19. Image credit: Thomas Splettstoesser scistyle.com, CC BY-ND

Billions of Possibilities
So which chemicals are being tested for Covid-19? A first approach is trying out drugs that already exist for other indications and that we have a pretty good idea are reasonably safe. That’s called “repurposing,” and if it works, regulatory approval will be quick.

But repurposing isn’t necessarily being done in the most rational way. One idea researchers are considering is that drugs that work against protein targets of some other virus, such as the flu, hepatitis or Ebola, will automatically work against Covid-19, even when the SARS-CoV-2 protein targets don’t have the same shape.

The best approach is to check if repurposed compounds will actually bind to their intended target. To that end, my lab published a preliminary report of a supercomputer-driven docking study of a repurposing compound database in mid-February. The study ranked 8,000 compounds in order of how well they bind to the viral spike protein. This paper triggered the establishment of a high-performance computing consortium against our viral enemy, announced by President Trump in March. Several of our top-ranked compounds are now in clinical trials.

Our own work has now expanded to about 10 targets on SARS-CoV-2, and we’re also looking at human protein targets for disrupting the virus’s attack on human cells. Top-ranked compounds from our calculations are being tested experimentally for activity against the live virus. Several of these have already been found to be active.

Also, we and others are venturing out into the wild world of new drug discovery for Covid-19—looking for compounds that have never been tried as drugs before. Databases of billions of these compounds exist, all of which could probably be synthesized in principle but most of which have never been made. Billion-compound docking is a tailor-made task for massively parallel supercomputing.

Dawn of the Exascale Era
Work will be helped by the arrival of the next big machine at Oak Ridge, called Frontier, planned for next year. Frontier should be about 10 times more powerful than Summit. Frontier will herald the “exascale” supercomputing era, meaning machines capable of 1,000,000,000,000,000,000 calculations per second.

Although some fear supercomputers will take over the world, for the time being, at least, they are humanity’s servants, which means that they do what we tell them to. Different scientists have different ideas about how to calculate which drugs work best—some prefer artificial intelligence, for example—so there’s quite a lot of arguing going on.

Hopefully, scientists armed with the most powerful computers in the world will, sooner rather than later, find the drugs needed to tackle Covid-19. If they do, then their answers will be of more immediate benefit, if less philosophically tantalizing, than the answer to the ultimate question provided by Deep Thought, which was, maddeningly, simply 42.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image credit: NIH/NIAID

#437157 A Human-Centric World of Work: Why It ...

Long before coronavirus appeared and shattered our pre-existing “normal,” the future of work was a widely discussed and debated topic. We’ve watched automation slowly but surely expand its capabilities and take over more jobs, and we’ve wondered what artificial intelligence will eventually be capable of.

The pandemic swiftly turned the working world on its head, putting millions of people out of a job and forcing millions more to work remotely. But essential questions remain largely unchanged: we still want to make sure we’re not replaced, we want to add value, and we want an equitable society where different types of work are valued fairly.

To address these issues—as well as how the pandemic has impacted them—this week Singularity University held a digital summit on the future of work. Forty-three speakers from multiple backgrounds, countries, and sectors of the economy shared their expertise on everything from work in developing markets to why we shouldn’t want to go back to the old normal.

Gary Bolles, SU’s chair for the Future of Work, kicked off the discussion with his thoughts on a future of work that’s human-centric, including why it matters and how to build it.

What Is Work?
“Work” seems like a straightforward concept to define, but since it’s constantly shifting shape over time, let’s make sure we’re on the same page. Bolles defined work, very basically, as human skills applied to problems.

“It doesn’t matter if it’s a dirty floor or a complex market entry strategy or a major challenge in the world,” he said. “We as humans create value by applying our skills to solve problems in the world.” You can think of the problems that need solving as the demand and human skills as the supply, and the two are in constant oscillation, including, every few decades or centuries, a massive shift.

We’re in the midst of one of those shifts right now (and we already were, long before the pandemic). Skills that have long been in demand are declining. The World Economic Forum’s 2018 Future of Jobs report listed things like manual dexterity, management of financial and material resources, and quality control and safety awareness as declining skills. Meanwhile, skills the next generation will need include analytical thinking and innovation, emotional intelligence, creativity, and systems analysis.

Along Came a Pandemic
With the outbreak of coronavirus and its spread around the world, the demand side of work shrank; all the problems that needed solving gave way to the much bigger, more immediate problem of keeping people alive. As a result, tens of millions of people around the world are out of work—and those are just the ones being counted, a fraction of the true total. Millions more who work seasonal or gig jobs or in informal economies are now without work, too.

“This is our opportunity to focus,” Bolles said. “How do we help people re-engage with work? And make it better work, a better economy, and a better set of design heuristics for a world that we all want?”

Bolles posed five key questions—some spurred by the impact of the pandemic—that conversations about the future of work should focus on to make sure that future is human-centric.

1. What does an inclusive world of work look like? Rather than seeing our current systems of work as immutable, we need to actually understand those systems and how we want to change them.

2. How can we increase the value of human work? We know that robots and software are going to be fine in the future—but for humans to be fine, we need to design for that very intentionally.

3. How can entrepreneurship help create a better world of work? In many economies the new value that’s created often comes from younger companies; how do we nurture entrepreneurship?

4. What will the intersection of workplace and geography look like? A large percentage of the global workforce is now working from home; what could some of the outcomes of that be? How does gig work fit in?

5. How can we ensure a healthy evolution of work and life? The health and the protection of those at risk is why we shut down our economies, but we need to find a balance that allows people to work while keeping them safe.

Problem-Solving Doesn’t End
The end result these questions are driving towards, and our overarching goal, is maximizing human potential. “If we come up with ways we can continue to do that, we’ll have a much more beneficial future of work,” Bolles said. “We should all be talking about where we can have an impact.”

One small silver lining? We had plenty of problems to solve in the world before ever hearing about coronavirus, and now we have even more. Is the pace of automation accelerating due to the virus? Yes. Are companies finding more ways to automate their processes in order to keep people from getting sick? They are.

But we have a slew of new problems on our hands, and we’re not going to stop needing human skills to solve them (not to mention the new problems that will surely emerge as second- and third-order effects of the shutdowns). If Bolles’ definition of work holds up, we’ve got ours cut out for us.

In an article from April titled The Great Reset, Bolles outlined three phases of the unemployment slump (we’re currently still in the first phase) and what we should be doing to minimize the damage. “The evolution of work is not about what will happen 10 to 20 years from now,” he said. “It’s about what we could be doing differently today.”

Watch Bolles’ talk and those of dozens of other experts for more insights into building a human-centric future of work here.

Image Credit: www_slon_pics from Pixabay
