#434701 3 Practical Solutions to Offset ...
In recent years, the media has sounded the alarm about mass job loss to automation and robotics—some studies predict that up to 50 percent of current jobs or tasks could be automated in coming decades. While this topic has received significant attention, much of the press focuses on potential problems without proposing realistic solutions or considering new opportunities.
The economic impacts of AI, robotics, and automation are complex topics that require a more comprehensive perspective to understand. Is universal basic income, for example, the answer? Many believe so, and there are a number of experiments in progress. But it’s only one strategy, and without a sustainable funding source, universal basic income may not be practical.
As automation continues to accelerate, we’ll need a multi-pronged approach to ease the transition. In short, we need to update broad socioeconomic strategies for a new century of rapid progress. How, then, do we plan practical solutions to support these new strategies?
Take history as a rough guide to the future. Looking back, technology revolutions have three themes in common.
First, past revolutions each produced profound benefits to productivity, increasing human welfare. Second, technological innovation and technology diffusion have accelerated over time, each iteration placing more strain on the human ability to adapt. And third, machines have gradually replaced more elements of human work, with human societies adapting by moving into new forms of work—from agriculture to manufacturing to service, for example.
Public and private solutions, therefore, need to be developed to address each of these three components of change. Let’s explore some practical solutions for each in turn.
Figure 1. Technology’s structural impacts in the 21st century. Refer to Appendix I for quantitative charts and technological examples corresponding to the numbers (1-22) in each slice.
Solution 1: Capture New Opportunities Through Aggressive Investment
The rapid emergence of new technology promises a bounty of opportunity for the twenty-first century’s economic winners. This technological arms race is shaping up to be a global affair, and the winners will be determined in part by who is able to build the future economy fastest and most effectively. Both the private and public sectors have a role to play in stimulating growth.
At the country level, several nations have created competitive strategies to promote research and development investments as automation technologies become more mature.
Germany and China have two of the most notable growth strategies. Germany’s Industrie 4.0 plan targets a 50 percent increase in manufacturing productivity via digital initiatives, while halving the resources required. China’s Made in China 2025 national strategy sets ambitious targets and provides subsidies for domestic innovation and production. It also includes building new concept cities, investing in robotics capabilities, and subsidizing high-tech acquisitions abroad to become the leader in certain high-tech industries. For China, specifically, tech innovation is driven partially by a fear that technology will disrupt social structures and government control.
Such opportunities are not limited to existing economic powers. Estonia’s progress after the breakup of the Soviet Union is a good case study in transitioning to a digital economy. The nation rapidly implemented capitalistic reforms and transformed itself into a technology-centric economy in preparation for a massive tech disruption. Internet access was declared a right in 2000, and the country’s classrooms were outfitted for a digital economy, with coding as a core educational requirement starting in kindergarten. Internet broadband speeds in Estonia are among the fastest in the world. Accordingly, the World Bank now ranks Estonia as a high-income country.
Solution 2: Address Increased Rate of Change With More Nimble Education Systems
Education and training are currently not geared to the speed of change in the modern economy. Schools are still based on a one-time education model, in which formal schooling provides the foundation for a single lifelong career. With content becoming obsolete faster and costs rapidly escalating, this system may be unsustainable in the future. To help workers transition more smoothly from one job to another, for example, we need to make education a more nimble, lifelong endeavor.
Primary and university education may still have a role in training foundational thinking and general knowledge, but it will be necessary to curtail the rising price of tuition and increase accessibility. Massive open online courses (MOOCs) and open-enrollment platforms are early demonstrations of what the future of general education may look like: cheap, effective, and flexible.
Georgia Tech’s online Engineering Master’s program (a fraction of the cost of residential tuition) is an early example of making university education more broadly available. Similarly, nanodegrees or microcredentials provided by online education platforms such as Udacity and Coursera can be used for mid-career adjustments at low cost. AI itself may be deployed to supplement the learning process, with applications such as AI-enhanced tutorials or personalized content recommendations backed by machine learning. Recent developments in neuroscience research could further optimize this experience by tailoring content and delivery to the learner’s brain to maximize retention.
Finally, companies looking for more customized skills may take a larger role in education, providing on-the-job training for specific capabilities. One potential model involves partnering with community colleges to create apprenticeship-style learning, where students work part-time in parallel with their education. Siemens has pioneered such a model in four states and is developing a playbook for other companies to do the same.
Solution 3: Enhance Social Safety Nets to Smooth Automation Impacts
If predicted job losses to automation come to fruition, modernizing existing social safety nets will increasingly become a priority. While the issue of safety nets can become quickly politicized, it is worth noting that each prior technological revolution has come with corresponding changes to the social contract (see below).
The evolving social contract (U.S. examples)
– 1842 | Right to strike
– 1924 | Child labor abolished
– 1935 | Right to unionize
– 1938 | 40-hour work week
– 1962, 1974 | Trade adjustment assistance
– 1964 | Pay discrimination prohibited
– 1970 | Health and safety laws
– 21st century | AI and automation adjustment assistance?
Figure 2. Labor laws have historically adjusted as technology and society progressed
Solutions like universal basic income (a no-strings-attached monthly payout to all citizens) are appealing in concept but difficult to implement as a first measure in countries such as the US or Japan that already carry high debt. Additionally, universal basic income may create disincentives to stay in the labor force. A related cautionary tale in program design is Trade Adjustment Assistance (TAA), which was designed to protect industries and workers from import-competition shocks caused by globalization but is widely viewed as a missed opportunity due to insufficient coverage.
A near-term solution could come in the form of graduated wage insurance (compensation for those forced to take a lower-paying job), including health insurance subsidies for individuals directly impacted by automation, with incentives to return to the workforce quickly. Another issue to tackle is the geographic mismatch between workers and jobs, which can be addressed through mobility assistance. Lastly, a training stipend can be issued to individuals as a means to upskill.
Policymakers can also intervene to reverse recent historical trends that have shifted income from labor to capital owners. The balance could be shifted back toward labor by placing higher taxes on capital. One example is the recently proposed “robot tax,” in which the tax would fall on the work performed rather than on the individual executing it: if a self-driving car performs a task formerly done by a human, the rideshare company would still pay the tax as if a human were driving.
Other solutions may involve the distribution of work. Some countries, such as France and Sweden, have experimented with redistributing working hours. The idea is to cap weekly hours, with the goal of having more people employed and work spread more evenly. So far these programs have had mixed results (lower unemployment but high costs to taxpayers), yet they remain potential models that can continue to be tested.
We cannot stop growth, nor should we. As stakeholders’ roles shift in response to this evolution, so should the social contract between them. Government will continue to play a critical role as a stabilizing “thumb” in the invisible hand of capitalism, regulating and cushioning against extreme volatility, particularly in labor markets.
However, we already see business leaders taking on some of the role traditionally played by government—thinking about measures to remedy risks of climate change or economic proposals to combat unemployment—in part because of greater agility in adapting to change. Cross-disciplinary collaboration and creative solutions from all parties will be critical in crafting the future economy.
Note: The full paper this article is based on is available here.
Image Credit: Dmitry Kalinovsky / Shutterstock.com
#434655 Purposeful Evolution: Creating an ...
More often than not, we fall into the trap of trying to predict and anticipate the future, forgetting that the future is up to us to envision and create. In the words of Buckminster Fuller, “We are called to be architects of the future, not its victims.”
But how, exactly, do we create a “good” future? What does such a future look like to begin with?
In Future Consciousness: The Path to Purposeful Evolution, Tom Lombardo analytically deconstructs how we can flourish in the flow of evolution and create a prosperous future for humanity. Scientifically informed, the book taps into themes that are constructive and profound, drawing on both eastern and western philosophies.
As the executive director of the Center for Future Consciousness and an executive board member and fellow of the World Futures Studies Federation, Lombardo has dedicated his life and career to studying how we can create a “realistic, constructive, and ethical future.”
In a conversation with Singularity Hub, Lombardo discussed purposeful evolution, ethical use of technology, and the power of optimism.
Raya Bidshahri: Tell me more about the title of your book. What is future consciousness and what role does it play in what you call purposeful evolution?
Tom Lombardo: Humans have the unique capacity to purposefully evolve themselves because they possess future consciousness. Future consciousness contains all of the cognitive, motivational, and emotional aspects of the human mind that pertain to the future. It’s because we can imagine and think about the future that we can manipulate and direct our future evolution purposefully. Future consciousness empowers us to become self-responsible in our own evolutionary future. This is a jump in the process of evolution itself.
RB: In several places in the book, you discuss the importance of various eastern philosophies. What can we learn from the east that is often missing from western models?
TL: The key idea in the east that I have been intrigued by for decades is the Taoist Yin Yang, which is the idea that reality should be conceptualized as interdependent reciprocities.
In the west we think dualistically, or we attempt to think in terms of one end of the duality to the exclusion of the other, such as whole versus parts or consciousness versus physical matter. Yin Yang thinking is seeing how both sides of a “duality,” even though they appear to be opposites, are interdependent; you can’t have one without the other. You can’t have order without chaos, consciousness without the physical world, individuals without the whole, humanity without technology, and vice versa for all these complementary pairs.
RB: You talk about the importance of chaos and destruction in the trajectory of human progress. In your own words, “Creativity frequently involves destruction as a prelude to the emergence of some new reality.” Why is this an important principle for readers to keep in mind, especially in the context of today’s world?
TL: In order for there to be progress, there often has to be a disintegration of aspects of the old. Although progress and evolution involve a process of building up, growth isn’t entirely cumulative; it’s also transformative. Things fall apart and come back together again.
Throughout history, we have seen a transformation in the most dominant human professions and vocations. At some point, almost everybody worked in agriculture, but most of those agricultural activities were replaced by machines, and a lot of people moved over to industry. Now we’re seeing that jobs and functions in industry are increasingly automated, and humans are being pushed into vocations that involve higher cognitive and artistic skills, services, information technology, and so on.
RB: You raise valid concerns about the dark side of technological progress, especially when it’s combined with mass consumerism, materialism, and anti-intellectualism. How do we counter these destructive forces as we shape the future of humanity?
TL: We can counter such forces by always thoughtfully considering how our technologies are affecting the ongoing purposeful evolution of our conscious minds, bodies, and societies. We should ask ourselves what are the ethical values that are being served by the development of various technologies.
For example, we often hear the criticism that technologies driven by pure capitalism degrade human life and only benefit the few people who invent and market them. So we need to also think about what good these new technologies can serve. It’s what I mean when I talk about the “wise cyborg.” A wise cyborg is somebody who uses technology to serve wisdom, or values connected with wisdom.
RB: Creating an ideal future isn’t just about progress in technology, but also progress in morality. How do we decide what a “good” future is? What are some philosophical tools we can use to determine a code of ethics that is as objective as possible?
TL: Let’s keep in mind that ethics will always have some level of subjectivity. That being said, the way to determine a good future is to base it on the best theory of reality that we have, which is that we are evolutionary beings in an evolutionary universe and we are interdependent with everything else in that universe. Our ethics should acknowledge that we are fluid and interactive.
Hence, the “good” can’t be something static, and it can’t be something that pertains to me and not everybody else. It can’t be something that only applies to humans and ignores all other life on Earth, and it must be a mode of change rather than something stable.
RB: You present a consciousness-centered approach to creating a good future for humanity. What are some of the values we should develop in order to create a prosperous future?
TL: A sense of self-responsibility for the future is critical. This means realizing that the “good future” is something we have to take upon ourselves to create; we can’t let something or somebody else do that. We need to feel responsible both for our own futures and for the future around us.
Another one is going to be an informed and hopeful optimism about the future, because both optimism and pessimism have self-fulfilling prophecy effects. If you hope for the best, you are more likely to look deeply into your reality and increase the chance of it coming out that way. In fact, all of the positive emotions that have to do with future consciousness actually make people more intelligent and creative.
Some other important character virtues are discipline and tenacity, deep purpose, the love of learning and thinking, and creativity.
RB: Are you optimistic about the future? If so, what informs your optimism?
TL: I justify my optimism the same way that I have seen Ray Kurzweil, Peter Diamandis, Kevin Kelly, and Steven Pinker justify theirs. If we look at the history of human civilization and even the history of nature, we see a progressive motion forward toward greater complexity and even greater intelligence. There are plenty of ups and downs, and catastrophes along the way, but the facts of nature and human history support the long-term expectation of continued evolution into the future.
You don’t have to be unrealistic to be optimistic. It’s also, psychologically, the more empowering position. That’s the position we should take if we want to maximize the chances of our individual or collective reality turning out better.
A lot of pessimists are pessimistic because they’re afraid of the future. There are lots of reasons to be afraid, but all in all, fear disempowers, whereas hope empowers.
Image Credit: Quick Shot / Shutterstock.com
#434648 The Pediatric AI That Outperformed ...
Training a doctor takes years of grueling work in universities and hospitals. Building a doctor may be as easy as teaching an AI how to read.
Artificial intelligence has taken another step toward becoming an integral part of 21st-century medicine. New research out of Guangzhou, China, published February 11th as a letter in Nature Medicine, has demonstrated a natural-language processing AI capable of outperforming rookie pediatricians in diagnosing common childhood ailments.
The massive study examined the electronic health records (EHRs) of nearly 600,000 patients over an 18-month period at the Guangzhou Women and Children’s Medical Center and then compared AI-generated diagnoses against new assessments from physicians with a range of experience.
The verdict? On average, the AI was noticeably more accurate than junior physicians and nearly as reliable as the more senior ones. These results are the latest demonstration that artificial intelligence is on the cusp of becoming a healthcare staple on a global scale.
Less Like a Computer, More Like a Person
To outshine human doctors, the AI first had to become more human. Like IBM’s Watson, the pediatric AI leverages natural language processing, in essence “reading” written notes from EHRs not unlike how a human doctor would review those same records. But the similarities to human doctors don’t end there. The AI is a machine learning classifier (MLC), capable of placing the information learned from the EHRs into categories to improve performance.
Like traditionally trained pediatricians, the AI broke cases down into major organ groups and infection areas (upper/lower respiratory, gastrointestinal, etc.) before breaking them down even further into subcategories. It could then develop associations between various symptoms and organ groups and use those associations to improve its diagnoses. This hierarchical approach mimics the deductive reasoning human doctors employ.
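To make that hierarchical idea concrete, here is a minimal, hypothetical sketch of a two-stage text classifier in Python. It is not the study’s actual model or data; the toy notes, labels, and pipeline choices (TF-IDF features with logistic regression from scikit-learn) are illustrative assumptions only. The first stage routes a note to a major organ-system group, and the second picks a diagnosis within that group.

```python
# Hypothetical sketch of a two-stage hierarchical text classifier.
# Not the study's actual model; notes, labels, and pipeline are toy assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy "clinical notes" with a coarse organ-system label and a finer diagnosis label.
notes = [
    ("cough runny nose mild fever", "respiratory", "upper respiratory infection"),
    ("wheezing shortness of breath chest retractions", "respiratory", "asthma"),
    ("vomiting watery diarrhea dehydration", "gastrointestinal", "acute gastroenteritis"),
    ("abdominal pain constipation bloating", "gastrointestinal", "constipation"),
]
texts = [t for t, _, _ in notes]
systems = [s for _, s, _ in notes]
diagnoses = [d for _, _, d in notes]

# Stage 1: route a note to a major organ-system group.
system_clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
system_clf.fit(texts, systems)

# Stage 2: one diagnosis classifier per organ system, trained only on that system's notes.
diagnosis_clfs = {}
for system in set(systems):
    idx = [i for i, s in enumerate(systems) if s == system]
    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit([texts[i] for i in idx], [diagnoses[i] for i in idx])
    diagnosis_clfs[system] = clf

def diagnose(note):
    """Predict an organ-system group first, then a diagnosis within that group."""
    system = system_clf.predict([note])[0]
    diagnosis = diagnosis_clfs[system].predict([note])[0]
    return system, diagnosis

print(diagnose("persistent cough and low-grade fever"))
```

Even in toy form, the design choice mirrors the article’s point: narrowing the label space at each stage resembles the step-by-step way a clinician reasons from organ system down to a specific diagnosis.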
Another key strength of the AI developed for this study was the enormous size of the dataset collected to teach it: 1,362,559 outpatient visits from 567,498 patients yielded some 101.6 million data points for the MLC to devour on its quest for pediatric dominance. This allowed the AI the depth of learning needed to distinguish and accurately select from the 55 different diagnosis codes across the various organ groups and subcategories.
When comparing against the human doctors, the study used 11,926 records from an unrelated group of children, giving both the MLC and the 20 humans it was compared against an even playing field. The results were clear: while cohorts of senior pediatricians performed better than the AI, junior pediatricians (those with 3-15 years of experience) were outclassed.
Helping, Not Replacing
While the research used a competitive analysis to measure the success of the AI, the results should be seen as anything but hostile to human doctors. The near future of artificial intelligence in medicine will see these machine learning programs augment, not replace, human physicians. The authors of the study specifically call out augmentation as the key short-term application of their work. Triaging incoming patients via intake forms, performing massive metastudies using EHRs, providing rapid ‘second opinions’—the applications for an AI doctor that is better-but-not-the-best are as varied as the healthcare industry itself.
That’s only considering how artificial intelligence could make a positive impact immediately upon implementation. It’s easy to see how long-term use of a diagnostic assistant could reshape the way modern medical institutions approach their work.
Look at how the MLC results fit snugly between the junior and senior physician groups. Essentially, it took nearly 15 years before a physician could consistently out-diagnose the machine. That’s a decade and a half wherein an AI diagnostic assistant would be an invaluable partner—both as a training tool and a safety measure. Likewise, on the other side of the experience curve you have physicians whose performance could be continuously leveraged to improve the AI’s effectiveness. This is a clear opportunity for a symbiotic relationship, with humans and machines each assisting the other as they mature.
Closer to Us, But Still Dependent on Us
No matter the ultimate application, the AI doctors of the future are drawing nearer to us step by step. This latest research is a demonstration that artificial intelligence can mimic the results of human deductive reasoning even in some of the most complex and important decision-making processes. True, the MLC required input from humans to function; both the initial data points and the cases used to evaluate the AI depended on EHRs written by physicians. While every effort was made to design a test schema that removed any indication of the eventual diagnosis, some “data leakage” is bound to occur.
In other words, when AIs use human-created data, they inherit human insight to some degree. Yet the progress made in machine imaging, chatbots, sensors, and other fields all suggest that this dependence on human input is more about where we are right now than where we could be in the near future.
Data, and More Data
That near future may also have some clear winners and losers. For now, those winners seem to be the institutions that can capture and apply the largest sets of data. With a rapidly digitized society gathering incredible amounts of data, China has a clear advantage. Combined with their relatively relaxed approach to privacy, they are likely to continue as one of the driving forces behind machine learning and its applications. So too will Google/Alphabet with their massive medical studies. Data is the uranium in this AI arms race, and everyone seems to be scrambling to collect more.
In a global community that seems increasingly aware of the potential problems arising from this need for and reliance on data, it’s nice to know there’ll be an upside as well. The technology behind AI medical assistants is looking more and more mature—even if we are still struggling to find exactly where, when, and how that technology should first become universal.
Yet wherever we see the next push to make AI a standard tool in a real-world medical setting, I have little doubt it will greatly improve the lives of human patients. Today Doctor AI is performing as well as a human colleague with more than 10 years of experience. By next year or so, it may take twice as long for humans to be competitive. And in a decade, the combined medical knowledge of all human history may be a tool as common as a stethoscope in your doctor’s hands.
Image Credit: Nadia Snopek / Shutterstock.com