
#434655 Purposeful Evolution: Creating an ...

More often than not, we fall into the trap of trying to predict and anticipate the future, forgetting that the future is up to us to envision and create. In the words of Buckminster Fuller, “We are called to be architects of the future, not its victims.”

But how, exactly, do we create a “good” future? What does such a future look like to begin with?

In Future Consciousness: The Path to Purposeful Evolution, Tom Lombardo analytically deconstructs how we can flourish in the flow of evolution and create a prosperous future for humanity. Scientifically informed, the book taps into constructive and profound themes from both eastern and western philosophies.

As the executive director of the Center for Future Consciousness and an executive board member and fellow of the World Futures Studies Federation, Lombardo has dedicated his life and career to studying how we can create a “realistic, constructive, and ethical future.”

In a conversation with Singularity Hub, Lombardo discussed purposeful evolution, ethical use of technology, and the power of optimism.

Raya Bidshahri: Tell me more about the title of your book. What is future consciousness and what role does it play in what you call purposeful evolution?

Tom Lombardo: Humans have the unique capacity to purposefully evolve themselves because they possess future consciousness. Future consciousness contains all of the cognitive, motivational, and emotional aspects of the human mind that pertain to the future. It’s because we can imagine and think about the future that we can manipulate and direct our future evolution purposefully. Future consciousness empowers us to become self-responsible in our own evolutionary future. This is a jump in the process of evolution itself.

RB: In several places in the book, you discuss the importance of various eastern philosophies. What can we learn from the east that is often missing from western models?

TL: The key idea in the east that I have been intrigued by for decades is the Taoist Yin Yang, which is the idea that reality should be conceptualized as interdependent reciprocities.

In the west we think dualistically, or we attempt to think in terms of one end of the duality to the exclusion of the other, such as whole versus parts or consciousness versus physical matter. Yin Yang thinking is seeing how both sides of a “duality,” even though they appear to be opposites, are interdependent; you can’t have one without the other. You can’t have order without chaos, consciousness without the physical world, individuals without the whole, humanity without technology, and vice versa for all these complementary pairs.

RB: You talk about the importance of chaos and destruction in the trajectory of human progress. In your own words, “Creativity frequently involves destruction as a prelude to the emergence of some new reality.” Why is this an important principle for readers to keep in mind, especially in the context of today’s world?

TL: In order for there to be progress, there often has to be a disintegration of aspects of the old. Although progress and evolution involve a process of building up, growth isn’t entirely cumulative; it’s also transformative. Things fall apart and come back together again.

Throughout history, we have seen the most dominant human professions and vocations transform. At some point, almost everybody worked in agriculture, but most of those agricultural activities were replaced by machines, and a lot of people moved over to industry. Now we’re seeing that jobs and functions are increasingly automated in industry, and humans are being pushed into vocations that involve higher cognitive and artistic skills, services, information technology, and so on.

RB: You raise valid concerns about the dark side of technological progress, especially when it’s combined with mass consumerism, materialism, and anti-intellectualism. How do we counter these destructive forces as we shape the future of humanity?

TL: We can counter such forces by always thoughtfully considering how our technologies are affecting the ongoing purposeful evolution of our conscious minds, bodies, and societies. We should ask ourselves what ethical values are being served by the development of various technologies.

For example, we often hear the criticism that technologies driven by pure capitalism degrade human life and only benefit the few people who invent and market them. So we need to also think about what good these new technologies can serve. It’s what I mean when I talk about the “wise cyborg.” A wise cyborg is somebody who uses technology to serve wisdom, or values connected with wisdom.

RB: Creating an ideal future isn’t just about progress in technology, but also progress in morality. How do we decide what a “good” future is? What are some philosophical tools we can use to determine a code of ethics that is as objective as possible?

TL: Let’s keep in mind that ethics will always have some level of subjectivity. That being said, the way to determine a good future is to base it on the best theory of reality that we have, which is that we are evolutionary beings in an evolutionary universe and we are interdependent with everything else in that universe. Our ethics should acknowledge that we are fluid and interactive.

Hence, the “good” can’t be something static, and it can’t be something that pertains to me and not everybody else. It can’t be something that only applies to humans and ignores all other life on Earth, and it must be a mode of change rather than something stable.

RB: You present a consciousness-centered approach to creating a good future for humanity. What are some of the values we should develop in order to create a prosperous future?

TL: A sense of self-responsibility for the future is critical. This means realizing that the “good future” is something we have to take upon ourselves to create; we can’t let something or somebody else do that. We need to feel responsible both for our own futures and for the future around us.

Another is an informed and hopeful optimism about the future, because both optimism and pessimism have self-fulfilling prophecy effects. If you hope for the best, you are more likely to look deeply into your reality and increase the chance of it coming out that way. In fact, all of the positive emotions that have to do with future consciousness actually make people more intelligent and creative.

Some other important character virtues are discipline and tenacity, deep purpose, the love of learning and thinking, and creativity.

RB: Are you optimistic about the future? If so, what informs your optimism?

TL: I justify my optimism the same way that I have seen Ray Kurzweil, Peter Diamandis, Kevin Kelly, and Steven Pinker justify theirs. If we look at the history of human civilization and even the history of nature, we see a progressive motion forward toward greater complexity and even greater intelligence. There are lots of ups and downs, and catastrophes along the way, but the facts of nature and human history support the long-term expectation of continued evolution into the future.

You don’t have to be unrealistic to be optimistic. It’s also, psychologically, the more empowering position. That’s the position we should take if we want to maximize the chances of our individual or collective reality turning out better.

A lot of pessimists are pessimistic because they’re afraid of the future. There are lots of reasons to be afraid, but all in all, fear disempowers, whereas hope empowers.

Image Credit: Quick Shot / Shutterstock.com


Posted in Human Robots

#434648 The Pediatric AI That Outperformed ...

Training a doctor takes years of grueling work in universities and hospitals. Building a doctor may be as easy as teaching an AI how to read.

Artificial intelligence has taken another step towards becoming an integral part of 21st-century medicine. New research out of Guangzhou, China, published February 11th in Nature Medicine Letters, has demonstrated a natural-language processing AI that is capable of outperforming rookie pediatricians in diagnosing common childhood ailments.

The massive study examined the electronic health records (EHR) from nearly 600,000 patients over an 18-month period at the Guangzhou Women and Children’s Medical Center and then compared AI-generated diagnoses against new assessments from physicians with a range of experience.

The verdict? On average, the AI was noticeably more accurate than junior physicians and nearly as reliable as the more senior ones. These results are the latest demonstration that artificial intelligence is on the cusp of becoming a healthcare staple on a global scale.

Less Like a Computer, More Like a Person
To outshine human doctors, the AI first had to become more human. Like IBM’s Watson, the pediatric AI leverages natural language processing, in essence “reading” written notes from EHRs not unlike how a human doctor would review those same records. But the similarities to human doctors don’t end there. The AI is a machine learning classifier (MLC), capable of placing the information learned from the EHRs into categories to improve performance.

Like traditionally-trained pediatricians, the AI broke cases down into major organ groups and infection areas (upper/lower respiratory, gastrointestinal, etc.) before breaking them down even further into subcategories. It could then develop associations between various symptoms and organ groups and use those associations to improve its diagnoses. This hierarchical approach mimics the deductive reasoning human doctors employ.
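The coarse-to-fine deduction described above can be sketched in a few lines. This is a deliberately toy illustration: the symptom keywords, organ groups, and overlap scoring below are invented for clarity, whereas the study's actual classifier learns its associations from roughly 100 million EHR data points.

```python
# Toy sketch of organ-group-first hierarchical classification. All
# vocabulary and categories here are invented for illustration; the real
# MLC learns these associations from clinical notes, not keyword lists.

ORGAN_GROUPS = {
    "respiratory": {"cough", "wheezing", "runny nose", "sore throat"},
    "gastrointestinal": {"vomiting", "diarrhea", "abdominal pain"},
}

SUBCATEGORIES = {
    "respiratory": {
        "upper respiratory infection": {"runny nose", "sore throat"},
        "lower respiratory infection": {"cough", "wheezing"},
    },
    "gastrointestinal": {
        "gastroenteritis": {"vomiting", "diarrhea"},
        "abdominal condition": {"abdominal pain"},
    },
}

def diagnose(symptoms):
    """Pick the organ group with the most symptom overlap, then the
    best-matching subcategory within that group."""
    group = max(ORGAN_GROUPS, key=lambda g: len(symptoms & ORGAN_GROUPS[g]))
    sub = max(SUBCATEGORIES[group],
              key=lambda s: len(symptoms & SUBCATEGORIES[group][s]))
    return group, sub

print(diagnose({"cough", "wheezing"}))
# -> ('respiratory', 'lower respiratory infection')
```

The point of the structure, in the toy as in the study, is that the second decision is conditioned on the first: the model never compares a respiratory subcategory against a gastrointestinal one directly.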

Another key strength of the AI developed for this study was the enormous size of the dataset collected to teach it: 1,362,559 outpatient visits from 567,498 patients yielded some 101.6 million data points for the MLC to devour on its quest for pediatric dominance. This allowed the AI the depth of learning needed to distinguish and accurately select from the 55 different diagnosis codes across the various organ groups and subcategories.

When comparing against the human doctors, the study used 11,926 records from an unrelated group of children, giving both the MLC and the 20 humans it was compared against an even playing field. The results were clear: while cohorts of senior pediatricians performed better than the AI, junior pediatricians (those with 3-15 years of experience) were outclassed.

Helping, Not Replacing
While the research used a competitive analysis to measure the success of the AI, the results should be seen as anything but hostile to human doctors. The near future of artificial intelligence in medicine will see these machine learning programs augment, not replace, human physicians. The authors of the study specifically call out augmentation as the key short-term application of their work. Triaging incoming patients via intake forms, performing massive metastudies using EHRs, providing rapid ‘second opinions’—the applications for an AI doctor that is better-but-not-the-best are as varied as the healthcare industry itself.

That’s only considering how artificial intelligence could make a positive impact immediately upon implementation. It’s easy to see how long-term use of a diagnostic assistant could reshape the way modern medical institutions approach their work.

Look at how the MLC results fit snugly between the junior and senior physician groups. Essentially, it took nearly 15 years before a physician could consistently out-diagnose the machine. That’s a decade and a half wherein an AI diagnostic assistant would be an invaluable partner—both as a training tool and a safety measure. Likewise, on the other side of the experience curve you have physicians whose performance could be continuously leveraged to improve the AI’s effectiveness. This is a clear opportunity for a symbiotic relationship, with humans and machines each assisting the other as they mature.

Closer to Us, But Still Dependent on Us
No matter the ultimate application, the AI doctors of the future are drawing nearer to us step by step. This latest research is a demonstration that artificial intelligence can mimic the results of human deductive reasoning even in some of the most complex and important decision-making processes. True, the MLC required input from humans to function; both the initial data points and the cases used to evaluate the AI depended on EHRs written by physicians. While every effort was made to design a test schema that removed any indication of the eventual diagnosis, some “data leakage” is bound to occur.

In other words, when AIs use human-created data, they inherit human insight to some degree. Yet the progress made in machine imaging, chatbots, sensors, and other fields all suggest that this dependence on human input is more about where we are right now than where we could be in the near future.

Data, and More Data
That near future may also have some clear winners and losers. For now, those winners seem to be the institutions that can capture and apply the largest sets of data. With a rapidly digitized society gathering incredible amounts of data, China has a clear advantage. Combined with their relatively relaxed approach to privacy, they are likely to continue as one of the driving forces behind machine learning and its applications. So too will Google/Alphabet with their massive medical studies. Data is the uranium in this AI arms race, and everyone seems to be scrambling to collect more.

In a global community that seems increasingly aware of the potential problems arising from this need for and reliance on data, it’s nice to know there’ll be an upside as well. The technology behind AI medical assistants is looking more and more mature—even if we are still struggling to find exactly where, when, and how that technology should first become universal.

Yet wherever we see the next push to make AI a standard tool in a real-world medical setting, I have little doubt it will greatly improve the lives of human patients. Today Doctor AI is performing as well as a human colleague with more than 10 years of experience. By next year or so, it may take twice as long for humans to be competitive. And in a decade, the combined medical knowledge of all human history may be a tool as common as a stethoscope in your doctor’s hands.

Image Credit: Nadia Snopek / Shutterstock.com


#434643 Sensors and Machine Learning Are Giving ...

According to some scientists, humans really do have a sixth sense. There’s nothing supernatural about it: the sense of proprioception tells you about the relative positions of your limbs and the rest of your body. Close your eyes, block out all sound, and you can still use this internal “map” of your external body to locate your muscles and body parts – you have an innate sense of the distances between them, and the perception of how they’re moving, above and beyond your sense of touch.

This sense is invaluable for allowing us to coordinate our movements. In humans, the brain integrates senses including touch, heat, and the tension in muscle spindles to allow us to build up this map.

Replicating this complex sense has posed a great challenge for roboticists. We can imagine simulating the sense of sight with cameras, sound with microphones, or touch with pressure-pads. Robots with chemical sensors could be far more accurate than us in smell and taste, but building in proprioception, the robot’s sense of itself and its body, is far more difficult, and is a large part of why humanoid robots are so tricky to get right.

Simultaneous localization and mapping (SLAM) software allows robots to use their own senses to build up a picture of their surroundings and environment, but they’d need a keen sense of the position of their own bodies to interact with it. If something unexpected happens, or in dark environments where primary senses are not available, robots can struggle to keep track of their own position and orientation. For human-robot interaction, wearable robotics, and delicate applications like surgery, tiny differences can be extremely important.

Piecemeal Solutions
In the case of hard robotics, this is generally solved by using a series of strain and pressure sensors in each joint, which allow the robot to determine how its limbs are positioned. That works fine for rigid robots with a limited number of joints, but for softer, more flexible robots, this information is limited. Roboticists are faced with a dilemma: a vast, complex array of sensors for every degree of freedom in the robot’s movement, or limited skill in proprioception?

New techniques, often involving new arrays of sensory material and machine-learning algorithms to fill in the gaps, are starting to tackle this problem. Take the work of Thomas George Thuruthel and colleagues in Pisa and San Diego, who draw inspiration from the proprioception of humans. In a new paper in Science Robotics, they describe the use of soft sensors distributed through a robotic finger at random. This placement is much like the constant adaptation of sensors in humans and animals, rather than relying on feedback from a limited number of positions.

The sensors allow the soft robot to react to touch and pressure in many different locations, forming a map of itself as it contorts into complicated positions. The machine-learning algorithm serves to interpret the signals from the randomly distributed sensors: as the finger moves around, it’s observed by a motion capture system. After training the robot’s neural network, it can associate the feedback from the sensors with the position of the finger detected in the motion-capture system, which can then be discarded. The robot observes its own motions to understand the shapes that its soft body can take, and translates them into the language of these soft sensors.
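That calibration loop — sensor signals supervised by motion-capture labels that are later thrown away — can be sketched with synthetic data. This is a minimal illustration under strong assumptions: the sensor-to-position relationship here is linear and fit by least squares, whereas the paper uses a recurrent neural network on real soft-sensor data.

```python
import numpy as np

# Sketch of proprioception calibration: randomly placed soft sensors
# produce signals; a motion-capture rig supplies ground-truth fingertip
# positions during training ONLY. All data below is synthetic.

rng = np.random.default_rng(0)
n_samples, n_sensors = 200, 8

true_map = rng.normal(size=(n_sensors, 2))         # unknown sensor->position relation
signals = rng.normal(size=(n_samples, n_sensors))  # readings as the finger moves
positions = signals @ true_map                     # motion-capture labels

# "Train": learn to predict position from sensor signals alone.
learned_map, *_ = np.linalg.lstsq(signals, positions, rcond=None)

# After training, the motion-capture rig can be discarded:
# proprioception now comes from the onboard sensors.
new_signal = rng.normal(size=n_sensors)
estimate = new_signal @ learned_map
truth = new_signal @ true_map
print(np.allclose(estimate, truth, atol=1e-6))  # -> True
```

The redundancy Tolley mentions shows up naturally in this framing: with more sensors than strictly needed, the fit degrades gracefully if any one signal is lost or noisy.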

“The advantages of our approach are the ability to predict complex motions and forces that the soft robot experiences (which is difficult with traditional methods) and the fact that it can be applied to multiple types of actuators and sensors,” said Michael Tolley of the University of California San Diego. “Our method also includes redundant sensors, which improves the overall robustness of our predictions.”

The use of machine learning lets the roboticists come up with a reliable model for this complex, non-linear system of motions for the actuators, something difficult to do by directly calculating the expected motion of the soft-bot. It also resembles the human system of proprioception, built on redundant sensors that change and shift in position as we age.

In Search of a Perfect Arm
Another approach to training robots in using their bodies comes from Robert Kwiatkowski and Hod Lipson of Columbia University in New York. In their paper “Task-agnostic self-modeling machines,” also recently published in Science Robotics, they describe a new type of robotic arm.

Robotic arms and hands are getting increasingly dexterous, but training them to grasp a large array of objects and perform many different tasks can be an arduous process. It’s also an extremely valuable skill to get right: Amazon is highly interested in the perfect robot arm. Google hooked together an array of over a dozen robot arms so that they could share information about grasping new objects, in part to cut down on training time.

Individually training a robot arm to perform every individual task takes time and reduces the adaptability of your robot: either you need an ML algorithm with a huge dataset of experiences, or, even worse, you need to hard-code thousands of different motions. Kwiatkowski and Lipson attempt to overcome this by developing a robotic system that has a “strong sense of self”: a model of its own size, shape, and motions.

They do this using deep machine learning. The robot begins with no prior knowledge of its own shape or the underlying physics of its motion. It then repeats a series of a thousand random trajectories, recording the motion of its arm. Kwiatkowski and Lipson compare this to a baby in the first year of life observing the motions of its own hands and limbs, fascinated by picking up and manipulating objects.

Again, once the robot has trained itself to interpret these signals and build up a robust model of its own body, it’s ready for the next stage. Using that deep-learning algorithm, the researchers then ask the robot to design strategies to accomplish simple pick-up and place and handwriting tasks. Rather than laboriously and narrowly training itself for each individual task, limiting its abilities to a very narrow set of circumstances, the robot can now strategize how to use its arm for a much wider range of situations, with no additional task-specific training.
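The babble-then-plan loop can be made concrete with a toy one-dimensional actuator. Everything here is invented for illustration — the "body" is a hidden linear response, and the self-model is a least-squares line rather than the deep network Kwiatkowski and Lipson train — but the three phases (random trajectories, self-model fitting, task-agnostic planning by inverting the model) mirror the approach described above.

```python
import random

# Toy "motor babbling" self-modeling sketch. The actuator's hidden
# parameters are unknown to the agent; it must discover them by acting.

random.seed(1)
HIDDEN_GAIN, HIDDEN_OFFSET = 3.7, -1.2   # the real body, unknown to the robot

def actuator(cmd):
    return HIDDEN_GAIN * cmd + HIDDEN_OFFSET

# Phase 1: babble -- random commands, record (command, outcome) pairs.
data = [(c, actuator(c)) for c in (random.uniform(-1, 1) for _ in range(1000))]

# Phase 2: fit a self-model (least-squares line through the observations).
n = len(data)
mx = sum(c for c, _ in data) / n
my = sum(y for _, y in data) / n
gain = (sum((c - mx) * (y - my) for c, y in data)
        / sum((c - mx) ** 2 for c, _ in data))
offset = my - gain * mx

# Phase 3: invert the self-model to reach an arbitrary target --
# no task-specific training needed for a new target.
target = 5.0
cmd = (target - offset) / gain
print(round(actuator(cmd), 6))  # -> 5.0
```

Phase 3 is the payoff: because the robot modeled its body rather than memorizing one task, any new target becomes a cheap model inversion instead of a fresh training run.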

Damage Control
In a further experiment, the researchers replaced part of the arm with a “deformed” component, intended to simulate what might happen if the robot was damaged. The robot can then detect that something’s up and “reconfigure” itself, reconstructing its self-model by going through the training exercises once again; it was then able to perform the same tasks with only a small reduction in accuracy.

Machine learning techniques are opening up the field of robotics in ways we’ve never seen before. Combining them with our understanding of how humans and other animals are able to sense and interact with the world around us is bringing robotics closer and closer to becoming truly flexible and adaptable, and, eventually, omnipresent.

But before they can get out and shape the world, as these studies show, they will need to understand themselves.

Image Credit: jumbojan / Shutterstock.com


#434637 AI Is Rapidly Augmenting Healthcare and ...

When it comes to the future of healthcare, perhaps the only technology more powerful than CRISPR is artificial intelligence.

Over the past five years, healthcare AI startups around the globe raised over $4.3 billion across 576 deals, topping all other industries in AI deal activity.

During this same period, the FDA has given 70 AI healthcare tools and devices ‘fast-tracked approval’ because of their ability to save both lives and money.

The pace of AI-augmented healthcare innovation is only accelerating.

In Part 3 of this blog series on longevity and vitality, I cover the different ways in which AI is augmenting our healthcare system, enabling us to live longer and healthier lives.

In this blog, I’ll expand on:

Machine learning and drug design
Artificial intelligence and big data in medicine
Healthcare, AI & China

Let’s dive in.

Machine Learning in Drug Design
What if AI systems, specifically neural networks, could predict the design of novel molecules (i.e. medicines) capable of targeting and curing any disease?

Imagine leveraging cutting-edge artificial intelligence to accomplish with 50 people what the pharmaceutical industry can barely do with an army of 5,000.

And what if these molecules, accurately engineered by AIs, always worked? Such a feat would revolutionize our $1.3 trillion global pharmaceutical industry, which currently holds a dismal record of 1 in 10 target drugs ever reaching human trials.

It’s no wonder that drug development is massively expensive and slow. It takes over 10 years to bring a new drug to market, with costs ranging from $2.5 billion to $12 billion.

This inefficient, slow-to-innovate, and risk-averse industry is a sitting duck for disruption in the years ahead.

One of the hottest startups in digital drug discovery today is Insilico Medicine. Leveraging AI in its end-to-end drug discovery pipeline, Insilico Medicine aims to extend healthy longevity through drug discovery and aging research.

Their comprehensive drug discovery engine uses millions of samples and multiple data types to discover signatures of disease, identify the most promising protein targets, and generate perfect molecules for these targets. These molecules either already exist or can be generated de novo with the desired set of parameters.

In late 2018, Insilico’s CEO Dr. Alex Zhavoronkov announced the groundbreaking result of generating novel molecules for a challenging protein target with an unprecedented hit rate in under 46 days. This included both synthesis of the molecules and experimental validation in a biological test system—an impressive feat made possible by converging exponential technologies.

Underpinning Insilico’s drug discovery pipeline is a novel machine learning technique called Generative Adversarial Networks (GANs), used in combination with deep reinforcement learning.

Generating novel molecular structures for diseases both with and without known targets, Insilico is now pursuing drug discovery in aging, cancer, fibrosis, Parkinson’s disease, Alzheimer’s disease, ALS, diabetes, and many others. Once rolled out, the implications will be profound.

Dr. Zhavoronkov’s ultimate goal is to develop a fully-automated Health-as-a-Service (HaaS) and Longevity-as-a-Service (LaaS) engine.

Once plugged into the services of companies from Alibaba to Alphabet, such an engine would enable personalized solutions for online users, helping them prevent diseases and maintain optimal health.

Insilico, alongside other companies tackling AI-powered drug discovery, truly represents the application of the 6 D’s. What was once a prohibitively expensive and human-intensive process is now rapidly becoming digitized, dematerialized, demonetized and, perhaps most importantly, democratized.

Companies like Insilico can now do with a fraction of the cost and personnel what the pharmaceutical industry can barely accomplish with thousands of employees and a hefty bill to foot.

As I discussed in my blog on ‘The Next Hundred-Billion-Dollar Opportunity,’ Google’s DeepMind has now turned its neural networks to healthcare, entering the digitized drug discovery arena.

In 2017, DeepMind achieved a phenomenal feat by matching the fidelity of medical experts in correctly diagnosing over 50 eye disorders.

And just a year later, DeepMind announced a new deep learning tool called AlphaFold. By predicting the elusive ways in which various proteins fold on the basis of their amino acid sequences, AlphaFold may soon have a tremendous impact in aiding drug discovery and fighting some of today’s most intractable diseases.

Artificial Intelligence and Data Crunching
AI is especially powerful in analyzing massive quantities of data to uncover patterns and insights that can save lives. Take WAVE, for instance. Every year, over 400,000 patients die prematurely in US hospitals as a result of heart attack or respiratory failure.

Yet these patients don’t die without leaving plenty of clues. Given information overload, however, human physicians and nurses alone have no way of processing and analyzing all necessary data in time to save these patients’ lives.

Enter WAVE, an algorithm that can process enough data to offer a six-hour early warning of patient deterioration.

Just last year, the FDA approved WAVE as an AI-based predictive patient surveillance system to predict and thereby prevent sudden death.

Another highly valuable yet difficult-to-parse mountain of medical data comprises the 2.5 million medical papers published each year.

It has long been physically impossible for a human physician to read—let alone remember—all of the relevant published data.

To counter this compounding conundrum, Johnson & Johnson is teaching IBM Watson to read and understand scientific papers that detail clinical trial outcomes.

Enriching Watson’s data sources, Apple is also partnering with IBM to provide access to health data from mobile apps.

One such Watson system contains 40 million documents, ingesting an average of 27,000 new documents per day, and providing insights for thousands of users.

After only one year, Watson’s successful diagnosis rate of lung cancer has reached 90 percent, compared to the 50 percent success rate of human doctors.

But what about the vast amount of unstructured medical patient data that populates today’s ancient medical system? This includes medical notes, prescriptions, audio interview transcripts, and pathology and radiology reports.

In late 2018, Amazon announced a new HIPAA-eligible machine learning service that digests and parses unstructured data into categories, such as patient diagnoses, treatments, dosages, symptoms and signs.
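To make "digests and parses unstructured data into categories" concrete, here is an illustrative-only sketch of that kind of extraction. The vocabulary and regex patterns are invented for the example, and this is emphatically not how Amazon's service works internally — it uses trained medical NLP models, not keyword lists.

```python
import re

# Toy extraction of structured categories from an unstructured clinical
# note. Vocabulary and patterns are invented for illustration.

NOTE = "Pt diagnosed with otitis media. Started amoxicillin 250 mg twice daily."

VOCAB = {
    "diagnosis": ["otitis media", "bronchiolitis"],
    "treatment": ["amoxicillin", "ibuprofen"],
}

def parse(note):
    text = note.lower()
    found = {cat: [t for t in terms if t in text]
             for cat, terms in VOCAB.items()}
    found["dosage"] = re.findall(r"\d+\s*mg", note)  # e.g. "250 mg"
    return found

print(parse(NOTE))
# -> {'diagnosis': ['otitis media'], 'treatment': ['amoxicillin'], 'dosage': ['250 mg']}
```

The hard part the real service solves, and this sketch sidesteps, is generalization: clinical notes abbreviate, misspell, and negate ("no fever"), which is why keyword matching fails at scale and learned models are needed.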

Taha Kass-Hout, Amazon’s senior leader in health care and artificial intelligence, told the Wall Street Journal that internal tests demonstrated that the software even performs as well as or better than other published efforts.

On the heels of this announcement, Amazon confirmed it was teaming up with the Fred Hutchinson Cancer Research Center to evaluate “millions of clinical notes to extract and index medical conditions.”

Having already driven extraordinary algorithmic success rates in other fields, data is the healthcare industry’s goldmine for future innovation.

Healthcare, AI & China
In 2017, the Chinese government published its ambitious national plan to become a global leader in AI research by 2030, with healthcare listed as one of four core research areas during the first wave of the plan.

Just a year earlier, China began centralizing healthcare data, tackling a major roadblock to developing longevity and healthcare technologies (particularly AI systems): scattered, dispersed, and unlabeled patient data.

Backed by the Chinese government, China’s largest tech companies—particularly Tencent—have now made strong entrances into healthcare.

Just recently, Tencent participated in a $154 million megaround for China-based healthcare AI unicorn iCarbonX.

Hoping to develop a complete digital representation of your biological self, iCarbonX has acquired numerous US personalized medicine startups.

Considering Tencent’s own Miying healthcare AI platform—aimed at assisting healthcare institutions in AI-driven cancer diagnostics—Tencent is quickly expanding into the drug discovery space, participating in two multimillion-dollar, US-based AI drug discovery deals just this year.

China’s biggest second-order move into the healthtech space comes through Tencent’s WeChat. In a mere few years, 60 percent of the 38,000 medical institutions registered on WeChat have come to allow patients to book appointments digitally through Tencent’s mobile platform, and 2,000 Chinese hospitals now accept WeChat payments.

Tencent has additionally partnered with the U.K.’s Babylon Health, a virtual healthcare assistant startup whose app now allows Chinese WeChat users to message their symptoms and receive immediate medical feedback.

Similarly, Alibaba’s healthtech focus started in 2016 when it released its cloud-based AI medical platform, ET Medical Brain, to augment healthcare processes through everything from diagnostics to intelligent scheduling.

Conclusion
As Nvidia CEO Jensen Huang has stated, “Software ate the world, but AI is going to eat software.” Extrapolating this statement to a more immediate implication, AI will first eat healthcare, resulting in dramatic acceleration of longevity research and an amplification of the human healthspan.

Next week, I’ll continue to explore this concept of AI systems in healthcare.

Particularly, I’ll expand on how we’re acquiring and using the data for these doctor-augmenting AI systems: from ubiquitous biosensors, to the mobile healthcare revolution, and finally, to the transformative power of the health nucleus.

As AI and other exponential technologies increase our healthspan by 30 to 40 years, how will you leverage these same exponential technologies to take on your moonshots and live out your massively transformative purpose?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Zapp2Photo / Shutterstock.com


#434623 The Great Myth of the AI Skills Gap

One of the most contentious debates in technology is around the question of automation and jobs. At issue is whether advances in automation, specifically with regards to artificial intelligence and robotics, will spell trouble for today’s workers. This debate is played out in the media daily, and passions run deep on both sides of the issue. In the past, however, automation has created jobs and increased real wages.

A widespread concern with the current scenario is that the workers most likely to be displaced by technology lack the skills needed to do the new jobs that same technology will create.

Let’s look at this concern in detail. Those who fear automation will hurt workers start by pointing out that there is a wide range of jobs, from low-pay, low-skill to high-pay, high-skill ones. This can be represented as follows:

They then point out that technology primarily creates high-paying jobs, like geneticists, as shown in the diagram below.

Meanwhile, technology destroys low-wage, low-skill jobs like those in fast food restaurants, as shown below:

Then, those who are worried about this dynamic often pose the question, “Do you really think a fast-food worker is going to become a geneticist?”

They worry that we are about to face a huge amount of systemic permanent unemployment, as the unskilled displaced workers are ill-equipped to do the jobs of tomorrow.

It is important to note that both sides of the debate are in agreement at this point. Unquestionably, technology destroys low-skilled, low-paying jobs while creating high-skilled, high-paying ones.

So, is that the end of the story? As a society are we destined to bifurcate into two groups, those who have training and earn high salaries in the new jobs, and those with less training who see their jobs vanishing to machines? Is this latter group forever locked out of economic plenty because they lack training?

No.

The question, “Can a fast food worker become a geneticist?” is where the error comes in. Fast food workers don’t become geneticists. What happens is that a college biology professor becomes a geneticist. Then a high-school biology teacher gets the college job. Then the substitute teacher gets hired on full-time to fill the high school teaching job. All the way down.

The question is not whether those in the lowest-skilled jobs can do the highest-skilled work. Instead, the question is, “Can everyone do a job just a little harder than the job they have today?” If so, and I believe very deeply that this is the case, then every time technology creates a new job “at the top,” everyone gets a promotion.
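The cascade described above can be sketched as a toy simulation. The ladder rungs and worker names below are illustrative assumptions, not data from the article; the point is only to show that when a new job appears at the top, each worker moves up a single rung, so the fast-food worker never has to become the geneticist.

```python
# Toy model of the "everyone gets a promotion" cascade.
# Job titles and skill ordering here are illustrative assumptions.

# A skill ladder from lowest- to highest-skilled job.
ladder = [
    "fast-food worker",
    "substitute teacher",
    "high-school biology teacher",
    "college biology professor",
]

def cascade(workers, new_top_job):
    """When technology creates a job above the top of the ladder,
    each worker fills the vacancy left by the person one rung up."""
    promoted = {}
    vacancy = new_top_job
    # Walk from the top rung down, handing each vacancy to the
    # worker immediately below it.
    for job in reversed(ladder):
        promoted[workers[job]] = vacancy
        vacancy = job
    return promoted

workers = {job: f"worker_{i}" for i, job in enumerate(ladder)}
moves = cascade(workers, "geneticist")
for person, new_job in moves.items():
    print(f"{person} -> {new_job}")
# The professor becomes the geneticist; the fast-food worker
# moves up just one rung, to substitute teacher.
```

Run as-is, the simulation confirms the article's claim: no one is asked to leap the whole ladder; each person takes a job just a little harder than their current one.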

This isn’t just an academic theory—it’s 200 years of economic history in the West. For 200 years, with the exception of the Great Depression, unemployment in the US has stayed between 2 percent and 13 percent. Always. Europe’s range is a bit wider, but not by much.

If I graphed 200 years of unemployment rates and asked you to find where the assembly line took over manufacturing, where steam power rapidly replaced animal power, or where industry adopted electricity at lightning speed, you wouldn’t be able to find those spots. They aren’t even blips in the unemployment record.

You don’t even have to look back as far as the assembly line to see this happening. It has happened non-stop for 200 years. Every fifty years, we lose about half of all jobs, and this has been pretty steady since 1800.
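The fifty-year figure sounds dramatic, but it implies a surprisingly modest annual churn. A quick calculation (a sketch, taking the fifty-year half-life straight from the text and assuming a constant rate) shows why the process never registers as a shock:

```python
# If half of all jobs disappear every 50 years at a constant annual
# rate r, then (1 - r) ** 50 = 0.5. Solve for r.
half_life_years = 50
annual_loss_rate = 1 - 0.5 ** (1 / half_life_years)
print(f"Implied annual job turnover: {annual_loss_rate:.2%}")
```

At roughly 1.4 percent of jobs per year, the churn is small enough to be absorbed by ordinary hiring, retirement, and retraining, which is consistent with it leaving no trace in the unemployment record.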

How is it that for 200 years we have lost half of all jobs every half century, but never has this process caused unemployment? Not only has it not caused unemployment, but during that time, we have had full employment against the backdrop of rising wages.

How can wages rise while half of all jobs are constantly being destroyed? Simple. Because new technology always increases worker productivity. It creates new jobs, like web designer and programmer, while destroying low-wage backbreaking work. When this happens, everyone along the way gets a better job.

Our current situation isn’t any different than the past. The nature of technology has always been to create high-skilled jobs and increase worker productivity. This is good news for everyone.

People often ask me what their children should study to make sure they have a job in the future. I usually say it doesn’t really matter. If I knew everything I know now and went back to the mid-1980s, what could I have taken in high school to better prepare me for today? There is only one class, and it wasn’t computer science. It was typing. Who would have guessed?

The great skill is being able to learn new things, and luckily, we all have that. In fact, it is our singular ability as a species. What I do in my day-to-day job consists largely of skills I have learned as the years have passed. In my experience, if you ask people at any job level, “Would you like a slightly more challenging job that pays a little more money?” almost everyone says yes.

That’s all it has taken for us to collectively get here today, and that’s all we need going forward.

Image Credit: Lightspring / Shutterstock.com
