#434270 AI Will Create Millions More Jobs Than ...
In the past few years, artificial intelligence has advanced so quickly that it now seems hardly a month goes by without a newsworthy AI breakthrough. In areas as wide-ranging as speech translation, medical diagnosis, and gameplay, we have seen computers outperform humans in startling ways.
This has sparked a discussion about how AI will impact employment. Some fear that as AI improves, it will supplant workers, creating an ever-growing pool of unemployable humans who cannot compete economically with machines.
This concern, while understandable, is unfounded. In fact, AI will be the greatest job engine the world has ever seen.
New Technology Isn’t a New Phenomenon
On the one hand, those who predict massive job loss from AI can be excused. It is easier to see existing jobs disrupted by new technology than to envision what new jobs the technology will enable.
But on the other hand, radical technological advances aren’t a new phenomenon. Technology has progressed nonstop for 250 years, and in the US unemployment has stayed between 5 and 10 percent for almost all that time, even when radical new technologies like steam power and electricity came on the scene.
But you don’t have to look back to steam, or even electricity. Just look at the internet. Go back 25 years, well within the memory of today’s pessimistic prognosticators, to 1993. The web browser Mosaic had just been released, and the phrase “surfing the web,” that most mixed of metaphors, was just a few months old.
If someone had asked you what would be the result of connecting a couple billion computers into a giant network with common protocols, you might have predicted that email would cause us to mail fewer letters, and the web might cause us to read fewer newspapers and perhaps even do our shopping online. If you were particularly farsighted, you might have speculated that travel agents and stockbrokers would be adversely affected by this technology. And based on those surmises, you might have thought the internet would destroy jobs.
But now we know what really happened. The obvious changes did occur. But a slew of unexpected changes happened as well. We got thousands of new companies worth trillions of dollars. We bettered the lot of virtually everyone on the planet touched by the technology. Dozens of new careers emerged, from web designer to data scientist to online marketer. The cost of starting a business with worldwide reach plummeted, and the cost of communicating with customers and leads went to nearly zero. Vast storehouses of information were made freely available and used by entrepreneurs around the globe to build new kinds of businesses.
But yes, we mail fewer letters and buy fewer newspapers.
The Rise of Artificial Intelligence
Then along came a new, even bigger technology: artificial intelligence. You hear the same refrain: “It will destroy jobs.”
Consider the ATM. If you had to point to a technology that looked as though it would replace people, the ATM might look like a good bet; it is, after all, an automated teller machine. And yet, there are more tellers now than when ATMs were widely released. How can this be? Simple: ATMs lowered the cost of opening bank branches, and banks responded by opening more, which required hiring more tellers.
In this manner, AI will create millions of jobs that are far beyond our ability to imagine. For instance, AI is becoming adept at language translation—and according to the US Bureau of Labor Statistics, demand for human translators is skyrocketing. Why? If the cost of basic translation drops to nearly zero, the cost of doing business with those who speak other languages falls, emboldening companies to do more business overseas and creating more work for human translators. AI may handle the simple translations, but humans are needed for the nuanced kind.
In fact, the BLS forecasts faster-than-average job growth in many occupations that AI is expected to impact: accountants, forensic scientists, geological technicians, technical writers, MRI operators, dietitians, financial specialists, web developers, loan officers, medical secretaries, and customer service representatives, to name just a few. These fields will not experience job growth in spite of AI, but through it.
But just as with the internet, the real gains in jobs will come from places where our imaginations cannot yet take us.
Parsing Pessimism
You may recall waking up one morning to the news that “47 percent of jobs will be lost to technology.”
That number comes from a 2013 report by Carl Benedikt Frey and Michael Osborne—a fine piece of work, but one whose findings readers and the media distorted. What the authors actually said is that some functions within 47 percent of jobs will be automated, not that 47 percent of jobs will disappear.
Frey and Osborne go on to rank occupations by “probability of computerization” and give the following jobs a 65 percent or higher probability: social science research assistants, atmospheric and space scientists, and pharmacy aides. So what does this mean? Social science professors will no longer have research assistants? Of course they will. They will just do different things because much of what they do today will be automated.
The Organization for Economic Co-operation and Development (OECD) released a report of its own in 2016. That report, titled “The Risk of Automation for Jobs in OECD Countries,” applies a different “whole occupations” methodology and puts the share of jobs potentially lost to computerization at nine percent. That is normal churn for the economy.
But what of the skills gap? Will AI eliminate low-skilled workers and create high-skilled job opportunities? The relevant question is whether most people can do a job that’s just a little more complicated than the one they currently have. This is exactly what happened with the industrial revolution; farmers became factory workers, factory workers became factory managers, and so on.
Embracing AI in the Workplace
A January 2018 Accenture report titled “Reworking the Revolution” estimates that new applications of AI combined with human collaboration could boost employment worldwide by as much as 10 percent by 2020.
Electricity changed the world, as did mechanical power, as did the assembly line. No one can reasonably claim that we would be better off without those technologies. Each of them bettered our lives, created jobs, and raised wages. AI will be bigger than electricity, bigger than mechanization, bigger than anything that has come before it.
This is how free economies work, and it is why we have never run out of jobs due to automation. There is not a fixed number of jobs that automation steals one by one, resulting in progressively more unemployment. There are as many jobs in the world as there are buyers and sellers of labor.
Image Credit: enzozo / Shutterstock.com
#433928 The Surprising Parallels Between ...
The human mind can be a confusing and overwhelming place. Despite incredible leaps in human progress, many of us still struggle to make our peace with our thoughts. The roots of this are complex and multifaceted. To find explanations for the global mental health epidemic, one can tap into neuroscience, psychology, evolutionary biology, or simply observe the meaningless systems that dominate our modern-day world.
This is not only the context of our reality but also that of the critically acclaimed Netflix series Maniac. Part psychological dark comedy, part science fiction, Maniac is a retro-futuristic, hallucinatory trip filled with hidden symbols. Directed by Cary Joji Fukunaga, the series tells the story of two strangers who decide to participate in the final stage of a “groundbreaking” pharmaceutical trial—one that combines novel pharmaceuticals with artificial intelligence, and promises to make their emotional pain go away.
Naturally, things don’t go according to plan.
From exams used for testing defense mechanisms to techniques such as cognitive behavioral therapy, the narrative is infused with genuine psychological science. As perplexing as the series may be to some viewers, many of the tools depicted actually have a strong grounding in current technological advancements.
Catalysts for Alleviating Suffering
In Maniac’s fictional therapy, participants undergo a three-day trial wherein they ingest three pills and appear to connect their consciousness to a superintelligent AI. Each participant is hurled into the traumatic experiences imprinted in their subconscious and forced to cope with them in a series of hallucinatory, dream-like experiences.
Perhaps the most recognizable parallel that can be drawn is with the latest advancements in psychedelic therapy. Psychedelics are a class of drugs that alter the experience of consciousness, and often cause radical changes in perception and cognitive processes.
Through a process known as transient hypofrontality, the executive “over-thinking” parts of our brains get a rest, and deeper areas become more active. This experience, combined with the breakdown of the ego, is often correlated with feelings of timelessness, peacefulness, presence, unity, and above all, transcendence.
Despite psychedelics being non-addictive and extremely difficult to overdose on, regulators looked down on their use for decades, and many continue to dismiss them as “party drugs.” But in the last few years, all of this has begun to change.
Earlier this summer, the FDA granted breakthrough therapy designation to MDMA for the treatment of PTSD after several phases of successful trials. Similar research has found that psilocybin (the psychoactive compound in magic mushrooms) combined with therapy is far more effective than traditional treatments for depression and anxiety. Today, a growing body of research shows that psychedelics such as LSD, MDMA, and psilocybin are not only effective catalysts for alleviating suffering and enhancing the human condition, but potentially among the most effective tools out there.
It’s important to realize that these substances are not solutions on their own, but rather catalysts for more effective therapy. They can be groundbreaking, but only in the right context and setting.
Brain-Machine Interfaces
In Maniac, the medication-assisted therapy is guided by what appears to be a super-intelligent form of artificial intelligence called the GRTA, nicknamed Gertie. A “guide” in machine form, Gertie accesses the minds of the participants through what appears to be futuristic brain-scanning technology and curates customized hallucinatory experiences with the goal of accelerating the healing process.
Such a powerful form of brain-scanning technology is not unheard of. Current scanning technology already allows researchers to decipher dreams and connect three human brains, and it is improving rapidly. And though nowhere near as advanced as Gertie (we have a long way to go before we get to that kind of general AI), we are also seeing early signs of AI therapy bots—chatbots that listen, think, and communicate with users the way a therapist would.
The parallels between current advancements in mental health therapy and the methods in Maniac can be startling, and are a testament to how science fiction and the arts can be used to explore the existential implications of technology.
Not Necessarily a Dystopia
While there are many ingenious similarities between the technology in Maniac and the state of mental health therapy, it’s important to recognize the stark differences. Like many other blockbuster science fiction productions, Maniac tells a fundamentally dystopian tale.
The series tells the story of the 73rd iteration of a controversial drug trial—one that has seen many failures and even left several participants brain-dead. The scientists appear to be evil, secretive, and driven by their own superficial agendas and deep, unresolved emotional issues.
In contrast, clinicians and researchers are not only required to file an “investigational new drug application” with the FDA (and get approval) but also update the agency with safety and progress reports throughout the trial.
Furthermore, many of today’s researchers are driven by a strong desire to contribute to the well-being and progress of our species. Moreover, the results of decades of research by organizations like MAPS have been exceptionally promising and aligned with positive values. While Maniac is entertaining and thought-provoking, viewers must not forget the positive potential of such advancements in mental health therapy.
Science, technology, and psychology aside, Maniac is a deep commentary on the human condition and the often disorienting states that pain us all. Within any human lifetime, suffering is inevitable. It is the disproportionate, debilitating, and unjust levels of suffering that we ought to tackle as a society. Ultimately, Maniac explores whether advancements in science and technology can help us live not a life devoid of suffering, but one where it is balanced with fulfillment.
Image Credit: xpixel / Shutterstock.com
#433785 DeepMind’s Eerie Reimagination of the ...
If a recent project from Google’s DeepMind were a recipe, you would take a pair of AI systems, images of animals, and a whole lot of computing power. Mix it all together, and you’d get a series of imagined animals dreamed up by one of the AIs. A look through the research paper about the project—or this open Google Folder of images it produced—will likely lead you to agree that the results are a mix of impressive and downright eerie.
But the eerie factor doesn’t mean the project shouldn’t be considered a success and a step forward for future uses of AI.
From GAN To BigGAN
The team behind the project consists of Andrew Brock, a PhD student at the Edinburgh Centre for Robotics and an intern at DeepMind, and DeepMind researchers Jeff Donahue and Karen Simonyan.
They used a so-called generative adversarial network (GAN) to generate the images. In a GAN, two AI systems compete in a game-like manner. The first, the generator, produces images of an object or creature—the human equivalent would be drawing pictures of, say, a dog without necessarily knowing exactly what a dog looks like. Those images are shown to the second system, the discriminator, which has already been fed real images of dogs and tells the generator how far off its efforts were. The generator uses this feedback to improve its images. The two go back and forth in an iterative process, and the goal is for the generator to become so good at creating images of dogs that the discriminator can’t tell the difference between its creations and actual photographs.
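To make that back-and-forth concrete, the sketch below shows a minimal GAN training loop in PyTorch. It illustrates the general technique only, not the BigGAN architecture; the toy network sizes and the random stand-in for “real” data are assumptions chosen purely for the example.

```python
# Minimal GAN training loop (illustrative toy example, not BigGAN).
import torch
import torch.nn as nn

latent_dim, data_dim, batch = 16, 64, 32  # toy sizes for illustration

# Generator: turns random noise into a fake sample ("draws a dog").
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim)
)
# Discriminator: scores how real a sample looks ("is this a real dog?").
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1)
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()
real_label = torch.ones(batch, 1)
fake_label = torch.zeros(batch, 1)

for step in range(1000):
    real = torch.randn(batch, data_dim)  # stand-in for real training images
    fake = generator(torch.randn(batch, latent_dim))

    # Discriminator step: label real samples 1 and generated samples 0.
    d_loss = (loss_fn(discriminator(real), real_label)
              + loss_fn(discriminator(fake.detach()), fake_label))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to fool the discriminator into answering "real."
    g_loss = loss_fn(discriminator(fake), real_label)
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```

The adversarial pressure is the whole trick: as the discriminator gets better at spotting fakes, the generator is forced to produce ever more convincing ones.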
The team was able to draw on Google’s vast vaults of computational power to create images of a quality and lifelike nature beyond almost anything seen before. In part, this was achieved by feeding the GAN far more images at each training step than is usually the case. According to IFLScience, the standard is to train on batches of about 64 images at a time; the research team used batches of roughly 2,000 images, leading to the model being nicknamed BigGAN.
Their results showed that training on more images at once and using masses of raw computing power markedly increased the GAN’s precision and its ability to create lifelike renditions of the subjects it was trained to reproduce.
“The main thing these models need is not algorithmic improvements, but computational ones. […] When you increase model capacity and you increase the number of images you show at every step, you get this twofold combined effect,” Andrew Brock told Fast Company.
The Power Drain
The team used 512 of Google’s AI-focused Tensor Processing Units (TPUs) to generate 512-by-512-pixel images. Each experiment took between 24 and 48 hours to run.
That kind of computing power needs a lot of electricity. As artist and Library of Congress Innovator-in-Residence Jer Thorp put it, tongue in cheek, on Twitter: “The good news is that AI can now give you a more believable image of a plate of spaghetti. The bad news is that it used roughly enough energy to power Cleveland for the afternoon.”
Thorp added that a back-of-the-envelope calculation suggested the computations behind the images would require about 27,000 square feet of solar panels to be adequately powered.
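Thorp didn’t publish his working, but the shape of such a back-of-the-envelope estimate is easy to reconstruct. In the hedged sketch below, the per-chip power draw, the panel output, and the capacity factor are all assumptions made for illustration—only the 512 TPUs and the 24-48 hour runtime come from the reporting—so treat the result as an order-of-magnitude figure.

```python
# Hedged back-of-the-envelope energy estimate (assumed figures marked).
tpus = 512                 # chips per experiment (from the reporting)
hours = 48                 # upper end of the quoted 24-48 hour runs
watts_per_tpu = 200        # ASSUMPTION: rough per-chip draw in watts

draw_kw = tpus * watts_per_tpu / 1000
energy_kwh = draw_kw * hours
print(f"~{draw_kw:.0f} kW sustained, ~{energy_kwh:,.0f} kWh per run")

panel_watts_per_sqft = 15  # ASSUMPTION: typical rated panel output
capacity_factor = 0.20     # ASSUMPTION: panels average ~20% of rating
sqft = draw_kw * 1000 / (panel_watts_per_sqft * capacity_factor)
print(f"~{sqft:,.0f} sq ft of panels to cover that draw on average")
```

With these guesses the answer lands in the same tens-of-thousands-of-square-feet range as Thorp’s figure; different assumptions shift it, but not by orders of magnitude.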
BigGAN’s images have been hailed by researchers, with Oriol Vinyals, research scientist at DeepMind, rhetorically asking if these were the ‘Best GAN samples yet?’
However, the images are still not perfect. The number of legs on a given creature is one example of where BigGAN seemed to struggle. The system was good at recognizing that something like a spider has a lot of legs, but it seemed unable to settle on how many “a lot” was supposed to be. The same applied to dogs, especially when the images were supposed to show them in motion.
Those eerie images are contrasted by other renditions that show such lifelike qualities that a human mind has a hard time identifying them as fake. Spaniels with lolling tongues, ocean scenery, and butterflies were all rendered with what looks like perfection. The same goes for an image of a hamburger that was good enough to make me stop writing because I suddenly needed lunch.
The Future Use Cases
GANs were first introduced in 2014, and given their relative youth, researchers and companies are still busy exploring possible use cases.
One possible use is image correction—making pixelated images clearer. Not only would this help your future holiday snaps, but it could also be applied in industries such as space exploration. A team from the University of Michigan and the Max Planck Institute has developed a method for GANs to create images from text descriptions. At Berkeley, a research group has used GANs to build an interface that lets users change the shape, size, and design of objects, including a handbag.
For anyone who has seen a film like Wag the Dog or read 1984, the possibilities are also starkly alarming. GANs could, in other words, make fake news look more real than ever before.
For now, it seems that while not all GANs require the computational and electrical power of BigGAN, there is still some way to go before these potential use cases are realized. However, if there’s one lesson from Moore’s Law and exponential technology, it is that today’s technical roadblock quickly becomes tomorrow’s minor issue.
Image Credit: Ondrej Prosicky / Shutterstock