Tag Archives: society

#434623 The Great Myth of the AI Skills Gap

One of the most contentious debates in technology is around the question of automation and jobs. At issue is whether advances in automation, specifically with regard to artificial intelligence and robotics, will spell trouble for today’s workers. This debate plays out in the media daily, and passions run deep on both sides of the issue. In the past, however, automation has created jobs and increased real wages.

A widespread concern with the current scenario is that the workers most likely to be displaced by technology lack the skills needed to do the new jobs that same technology will create.

Let’s look at this concern in detail. Those who fear automation will hurt workers start by pointing out that jobs span a wide spectrum, from low-pay, low-skill work to high-pay, high-skill work.

They then point out that technology primarily creates high-paying jobs, like geneticists.

Meanwhile, technology destroys low-wage, low-skill jobs like those in fast-food restaurants.

Then, those who are worried about this dynamic often pose the question, “Do you really think a fast-food worker is going to become a geneticist?”

They worry that we are about to face a huge amount of systemic permanent unemployment, as the unskilled displaced workers are ill-equipped to do the jobs of tomorrow.

It is important to note that both sides of the debate are in agreement at this point. Unquestionably, technology destroys low-skilled, low-paying jobs while creating high-skilled, high-paying ones.

So, is that the end of the story? As a society are we destined to bifurcate into two groups, those who have training and earn high salaries in the new jobs, and those with less training who see their jobs vanishing to machines? Is this latter group forever locked out of economic plenty because they lack training?

No.

The question, “Can a fast food worker become a geneticist?” is where the error comes in. Fast food workers don’t become geneticists. What happens is that a college biology professor becomes a geneticist. Then a high-school biology teacher gets the college job. Then the substitute teacher gets hired on full-time to fill the high school teaching job. All the way down.

The question is not whether those in the lowest-skilled jobs can do the high-skilled work. Instead the question is, “Can everyone do a job just a little harder than the job they have today?” If so, and I believe very deeply that this is the case, then every time technology creates a new job “at the top,” everyone gets a promotion.

This isn’t just an academic theory—it’s 200 years of economic history in the West. For 200 years, with the exception of the Great Depression, unemployment in the US has been between 2 percent and 13 percent. Always. Europe’s range is a bit wider, but not much.

If I graphed 200 years of unemployment rates and asked you to find where the assembly line took over manufacturing, where steam power rapidly replaced animal power, or where industry adopted electricity at lightning speed, you wouldn’t be able to find those spots. They aren’t even blips in the unemployment record.

You don’t even have to look back as far as the assembly line to see this happening. It has happened non-stop for 200 years. Every fifty years, we lose about half of all jobs, and this has been pretty steady since 1800.

How is it that for 200 years we have lost half of all jobs every half century, but never has this process caused unemployment? Not only has it not caused unemployment, but during that time, we have had full employment against the backdrop of rising wages.

How can wages rise while half of all jobs are constantly being destroyed? Simple. Because new technology always increases worker productivity. It creates new jobs, like web designer and programmer, while destroying low-wage backbreaking work. When this happens, everyone along the way gets a better job.

Our current situation isn’t any different than the past. The nature of technology has always been to create high-skilled jobs and increase worker productivity. This is good news for everyone.

People often ask me what their children should study to make sure they have a job in the future. I usually say it doesn’t really matter. If I knew everything I know now and went back to the mid-1980s, what could I have taken in high school to make me better prepared for today? There is only one class, and it wasn’t computer science. It was typing. Who would have guessed?

The great skill is to be able to learn new things, and luckily, we all have that. In fact, that is our singular ability as a species. What I do in my day-to-day job consists largely of skills I have learned as the years have passed. In my experience, if you ask people at all job levels, “Would you like a little more challenging job to make a little more money?” almost everyone says yes.

That’s all it has taken for us to collectively get here today, and that’s all we need going forward.

Image Credit: Lightspring / Shutterstock.com

Posted in Human Robots

#434559 Can AI Tell the Difference Between a ...

Scarcely a day goes by without another headline about neural networks: some new task that deep learning algorithms can excel at, approaching or even surpassing human competence. As the application of this approach to computer vision has continued to improve, with algorithms capable of specialized recognition tasks like those found in medicine, the software is getting closer to widespread commercial use—for example, in self-driving cars. Our ability to recognize patterns is a huge part of human intelligence: if this can be done faster by machines, the consequences will be profound.

Yet, as ever with algorithms, there are deep concerns about their reliability, especially when we don’t know precisely how they work. State-of-the-art neural networks will confidently—and incorrectly—classify images that look like television static or abstract art as real-world objects like school buses or armadillos. Specific algorithms can also be targeted by “adversarial examples”: adding an imperceptible amount of noise to an image can cause an algorithm to completely mistake one object for another. Machine learning experts enjoy constructing these images to trick advanced software, but if a self-driving car can be fooled by a few stickers, it might not be so fun for the passengers.
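The noise in an adversarial example is not random: it is computed from the model’s own gradients. A minimal sketch of the underlying mechanism (the fast gradient sign method), applied here to a toy logistic-regression classifier rather than a deep network, with invented weights purely for illustration:

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps=0.1):
    """Nudge input x by eps in the gradient-sign direction that
    most increases the classifier's loss for the true label y."""
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # predicted P(y=1)
    grad_x = (p - y) * w                    # d(cross-entropy loss)/dx
    return x + eps * np.sign(grad_x)

# Toy classifier and a correctly classified input
w, b = np.array([1.0, 1.0]), 0.0
x = np.array([1.0, 1.0])                    # sigmoid(2.0) ≈ 0.88 → class 1
x_adv = fgsm_perturb(x, w, b, y=1, eps=0.3)
# x_adv now scores lower on the true class than x did
```

Against a deep network the same idea applies, but the gradient is taken through millions of weights, which is why the resulting perturbation can be imperceptibly small yet devastating.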

These difficulties are hard to smooth out in large part because we don’t have a great intuition for how these neural networks “see” and “recognize” objects. Analyzing a trained network directly yields little more than a series of statistical weights associating certain groups of points with certain objects, and these weights can be very difficult to interpret.

Now, new research from UCLA, published in the journal PLOS Computational Biology, is testing neural networks to understand the limits of their vision and the differences between computer vision and human vision. Nicholas Baker, Hongjing Lu, and Philip J. Kellman of UCLA, alongside Gennady Erlikhman of the University of Nevada, tested a deep convolutional neural network called VGG-19. This is state-of-the-art technology that is already outperforming humans on standardized tests like the ImageNet Large Scale Visual Recognition Challenge.

They found that, while humans tend to classify objects based on their overall (global) shape, deep neural networks are far more sensitive to the textures of objects, including local color gradients and the distribution of points on the object. This result helps explain why neural networks in image recognition make mistakes that no human ever would—and could allow for better designs in the future.

In the first experiment, a neural network was trained to sort images into one of 1,000 different categories. It was then presented with silhouettes of these images: all of the local information was lost, while only the outline of the object remained. On the original images, the trained network recognized objects reliably, assigning more than 90% probability to the correct classification; on silhouettes, this dropped to 10%. While human observers could nearly always produce correct shape labels, the neural networks appeared almost insensitive to the overall shape of the images. On average, the correct object was ranked as the 209th most likely solution by the neural network, even though the overall shapes were an exact match.
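The “ranked 209th” figure is a standard way to score a classifier beyond its top guess: sort the network’s output scores and find the position of the correct class. A minimal sketch of that metric (the scores below are invented for illustration, not taken from the paper):

```python
import numpy as np

def label_rank(scores, correct_idx):
    """1-based rank of the correct class when the network's
    output scores are sorted from most to least likely."""
    order = np.argsort(scores)[::-1]  # class indices, best first
    return int(np.where(order == correct_idx)[0][0]) + 1

# Toy output over 5 classes; the true class (index 2) scores worst
scores = np.array([0.10, 0.30, 0.02, 0.50, 0.08])
print(label_rank(scores, correct_idx=2))  # → 5
```

Averaging this rank over a test set, as the researchers did, reveals how far down a network’s list the right answer typically falls.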

A particularly striking example arose when they tried to get the neural networks to classify glass figurines of objects they could already recognize. While you or I might find it easy to identify a glass model of an otter or a polar bear, the neural network classified them as “oxygen mask” and “can opener” respectively. By presenting glass figurines, where the texture information that neural networks relied on for classifying objects is lost, the neural network was unable to recognize the objects by shape alone. The neural network was similarly hopeless at classifying objects based on drawings of their outline.

If you got one of these right, you’re better than state-of-the-art image recognition software. Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
When the neural network was explicitly trained to recognize object silhouettes—given no information in the training data aside from the object outlines—the researchers found that slight distortions or “ripples” to the contour of the image were again enough to fool the AI, while humans paid them no mind.

The fact that neural networks seem to be insensitive to the overall shape of an object—relying instead on statistical similarities between local distributions of points—suggests a further experiment. What if you scrambled the images so that the overall shape was lost but local features were preserved? It turns out that the neural networks are far better and faster at recognizing scrambled versions of objects than outlines, even when humans struggle. Students could classify only 37% of the scrambled objects, while the neural network succeeded 83% of the time.
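Scrambling of this kind can be reproduced with a few lines of array manipulation: cut the image into non-overlapping tiles and shuffle them, which destroys the global shape while leaving local texture intact. A rough sketch for a grayscale image (the general idea, not the paper’s exact procedure):

```python
import numpy as np

def scramble_patches(img, patch=8, seed=0):
    """Shuffle non-overlapping patch x patch tiles of a 2D image,
    destroying global shape but preserving local statistics."""
    h = img.shape[0] - img.shape[0] % patch   # crop to a multiple
    w = img.shape[1] - img.shape[1] % patch   # of the patch size
    img = img[:h, :w]
    tiles = (img.reshape(h // patch, patch, w // patch, patch)
                .swapaxes(1, 2)               # (rows, cols, patch, patch)
                .reshape(-1, patch, patch))   # flat list of tiles
    np.random.default_rng(seed).shuffle(tiles)
    return (tiles.reshape(h // patch, w // patch, patch, patch)
                 .swapaxes(1, 2)
                 .reshape(h, w))

# The scrambled image contains exactly the same pixels, rearranged
img = np.arange(64 * 64, dtype=float).reshape(64, 64)
scrambled = scramble_patches(img)
```

Because every pixel survives the shuffle, any classifier leaning on local texture statistics sees nearly the same evidence as before, while anything relying on global shape sees noise.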

Humans vastly outperform machines at classifying the object in (a) as a bear, while the machine learning algorithm has few problems classifying the bear in figure (b). Image Credit: Nicholas Baker, Hongjing Lu, Gennady Erlikhman, Philip J. Kellman. “Deep convolutional networks do not classify based on global object shape.” PLOS Computational Biology. 12/7/18. / CC BY 4.0
“This study shows these systems get the right answer in the images they were trained on without considering shape,” Kellman said. “For humans, overall shape is primary for object recognition, and identifying images by overall shape doesn’t seem to be in these deep learning systems at all.”

Naively, one might expect that—as the many layers of a neural network are modeled on connections between neurons in the brain and resemble the visual cortex specifically—the way computer vision operates must necessarily be similar to human vision. But this kind of research shows that, while the fundamental architecture might resemble that of the human brain, the resulting “mind” operates very differently.

Researchers can, increasingly, observe how the “neurons” in neural networks light up when exposed to stimuli and compare it to how biological systems respond to the same stimuli. Perhaps someday it might be possible to use these comparisons to understand how neural networks are “thinking” and how those responses differ from humans.

But, as yet, probing how neural networks and artificial intelligence algorithms perceive the world takes something closer to experimental psychology. The tests employed against the neural network resemble how scientists might try to understand the senses of an animal or the developing brain of a young child, rather than how they would analyze a piece of software.

By combining this experimental psychology with new neural network designs or error-correction techniques, it may be possible to make them even more reliable. Yet this research illustrates just how much we still don’t understand about the algorithms we’re creating and using: how they tick, how they make decisions, and how they’re different from us. As they play an ever-greater role in society, understanding the psychology of neural networks will be crucial if we want to use them wisely and effectively—and not end up missing the woods for the trees.

Image Credit: Irvan Pratama / Shutterstock.com


#434297 How Can Leaders Ensure Humanity in a ...

It’s hard to avoid the prominence of AI in our lives, and there is a plethora of predictions about how it will influence our future. In their new book Solomon’s Code: Humanity in a World of Thinking Machines, co-authors Olaf Groth, Professor of Strategy, Innovation and Economics at HULT International Business School and CEO of advisory network Cambrian.ai, and Mark Nitzberg, Executive Director of UC Berkeley’s Center for Human-Compatible AI, believe that the shift in balance of power between intelligent machines and humans is already here.

I caught up with the authors to discuss the continued integration between technology and humans, and their call for a “Digital Magna Carta,” a broadly-accepted charter developed by a multi-stakeholder congress that would help guide the development of advanced technologies to harness their power for the benefit of all humanity.

Lisa Kay Solomon: Your new book, Solomon’s Code, explores artificial intelligence and its broader human, ethical, and societal implications that all leaders need to consider. AI is a technology that’s been in development for decades. Why is it so urgent to focus on these topics now?

Olaf Groth and Mark Nitzberg: Popular perception always thinks of AI in terms of game-changing narratives—for instance, Deep Blue beating Garry Kasparov at chess. But it’s the way these AI applications are “getting into our heads” and making decisions for us that really influences our lives. That’s not to say the big, headline-grabbing breakthroughs aren’t important; they are.

But it’s the proliferation of prosaic apps and bots that changes our lives the most, by either empowering or counteracting who we are and what we do. Today, we turn a rapidly growing number of our decisions over to these machines, often without knowing it—and even more often without understanding the second- and third-order effects of both the technologies and our decisions to rely on them.

There is genuine power in what we call a “symbio-intelligent” partnership between human, machine, and natural intelligences. These relationships can optimize not just economic interests, but help improve human well-being, create a more purposeful workplace, and bring more fulfillment to our lives.

However, mitigating the risks while taking advantage of the opportunities will require a serious, multidisciplinary consideration of how AI influences human values, trust, and power relationships. Whether or not we acknowledge their existence in our everyday life, these questions are no longer just thought exercises or fodder for science fiction.

In many ways, these technologies can challenge what it means to be human, and their ramifications already affect us in real and often subtle ways, and we need to understand how.

LKS: There is a lot of hype and misconceptions about AI. In your book, you provide a useful distinction between the cognitive capability that we often associate with AI processes, and the more human elements of consciousness and conscience. Why are these distinctions so important to understand?

OG & MN: Could machines attain consciousness some day as they become more powerful and complex? It’s hard to say. But there’s little doubt that, as machines become more capable, humans will start to think of them as something conscious—if for no other reason than our natural inclination to anthropomorphize.

Machines are already learning to recognize our emotional states and our physical health. Once they start talking that back to us and adjusting their behavior accordingly, we will be tempted to develop a certain rapport with them, potentially more trusting or more intimate because the machine recognizes us in our various states.

Consciousness is hard to define and may well be an emergent property, rather than something you can easily create or—in turn—reduce to its parts. So, could it happen as we put more and more elements together, from the realms of AI, quantum computing, or brain-computer interfaces? We can’t exclude that possibility.

Either way, we need to make sure we’re charting out a clear path and guardrails for this development through the Three Cs in machines: cognition (where AI is today); consciousness (where AI could go); and conscience (what we need to instill in AI before we get there). The real concern is that we reach machine consciousness—or what humans decide to grant as consciousness—without a conscience. If that happens, we will have created an artificial sociopath.

LKS: We have been seeing major developments in how AI is influencing product development and industry shifts. How is the rise of AI changing power at the global level?

OG & MN: Both in the public and private sectors, the data holder has the power. We’ve already seen the ascendance of about 10 “digital barons” in the US and China who sit on huge troves of data, massive computing power, and the resources and money to attract the world’s top AI talent. With these gaps already open between the haves and the have-nots on the technological and corporate side, we’re becoming increasingly aware that similar inequalities are forming at a societal level as well.

Economic power flows with data, leaving few options for socio-economically underprivileged populations and their corrupt, biased, or sparse digital footprints. By concentrating power and overlooking values, we fracture trust.

We can already see this tension emerging between the two dominant geopolitical models of AI. China and the US have emerged as the most powerful in both technological and economic terms, and both remain eager to drive that influence around the world. The EU countries are more contained on these economic and geopolitical measures, but they’ve leaped ahead on privacy and social concerns.

The problem is, no one has yet combined leadership on all three critical elements of values, trust, and power. The nations and organizations that foster all three of these elements in their AI systems and strategies will lead the future. Some are starting to recognize the need for the combination, but we found just 13 countries that have created significant AI strategies. Countries that wait too long to join them risk subjecting themselves to a new “data colonialism” that could change their economies and societies from the outside.

LKS: Solomon’s Code looks at AI from a variety of perspectives, considering both positive and potentially dangerous effects. You caution against the rising global threat and weaponization of AI and data, suggesting that “biased or dirty data is more threatening than nuclear arms or a pandemic.” For global leaders, entrepreneurs, technologists, policy makers and social change agents reading this, what specific strategies do you recommend to ensure ethical development and application of AI?

OG & MN: We’ve surrendered many of our most critical decisions to the Cult of Data. In most cases, that’s a great thing, as we rely more on scientific evidence to understand our world and our way through it. But we swing too far in other instances, assuming that datasets and algorithms produce a complete story that’s unsullied by human biases or intellectual shortcomings. We might choose to ignore it, but no one is blind to the dangers of nuclear war or pandemic disease. Yet, we willfully blind ourselves to the threat of dirty data, instead believing it to be pristine.

So, what do we do about it? On an individual level, it’s a matter of awareness, knowing who controls your data and how outsourcing of decisions to thinking machines can present opportunities and threats alike.

For business, government, and political leaders, we need to see a much broader expansion of ethics committees with transparent criteria with which to evaluate new products and services. We might consider something akin to clinical trials for pharmaceuticals—a sort of testing scheme that can transparently and independently measure the effects on humans of algorithms, bots, and the like. All of this needs to be multidisciplinary, bringing in expertise from across technology, social systems, ethics, anthropology, psychology, and so on.

Finally, on a global level, we need a new charter of rights—a Digital Magna Carta—that formalizes these protections and guides the development of new AI technologies toward all of humanity’s benefit. We’ve suggested the creation of a multi-stakeholder Cambrian Congress (harkening back to the explosion of life during the Cambrian period) that can not only begin to frame benefits for humanity, but build the global consensus around principles for a basic code-of-conduct, and ideas for evaluation and enforcement mechanisms, so we can get there without any large-scale failures or backlash in society. So, it’s not one or the other—it’s both.

Image Credit: whiteMocca / Shutterstock.com


#434194 Educating the Wise Cyborgs of the Future

When we think of wisdom, we often think of ancient philosophers, mystics, or spiritual leaders. Wisdom is associated with the past. Yet some intellectual leaders are challenging us to reconsider wisdom in the context of the technological evolution of the future.

With the rise of exponential technologies like virtual reality, big data, artificial intelligence, and robotics, people are gaining access to increasingly powerful tools. These tools are neither malevolent nor benevolent on their own; human values and decision-making influence how they are used.

In future-themed discussions we often focus on technological progress far more than on intellectual and moral advancements. In reality, the virtuous insights that future humans possess will be even more powerful than their technological tools.

Tom Lombardo and Ray Todd Blackwood are advocating for exactly this. In their interdisciplinary paper “Educating the Wise Cyborg of the Future,” they propose a new definition of wisdom—one that is relevant in the context of the future of humanity.

We Are Already Cyborgs
The core purpose of Lombardo and Blackwood’s paper is to explore revolutionary educational models that will prepare humans, soon-to-be-cyborgs, for the future. The idea of educating such “cyborgs” may sound like science fiction, but if you pay attention to yourself and the world around you, cyborgs came into being a long time ago.

Techno-philosophers like Jason Silva point out that our tech devices are an abstract form of brain-machine interfaces. We use smartphones to store and retrieve information, perform calculations, and communicate with each other. Our devices are an extension of our minds.

According to philosophers Andy Clark and David Chalmers’ theory of the extended mind, we use technology to expand the boundaries of our minds: tools like machine learning enhance our cognitive skills, and powerful telescopes extend our visual reach. In this way, technology has become a kind of exoskeleton, allowing us to push beyond our biological limitations.

In other words, you are already a cyborg. You have been all along.

Such an abstract definition of cyborgs is both relevant and thought-provoking. But it won’t stay abstract for much longer. The past few years have seen remarkable developments in both the hardware and software of brain-machine interfaces. Experts are designing more intricate electrodes while programming better algorithms to interpret the neural signals. Scientists have already succeeded in enabling paralyzed patients to type with their minds, and are even allowing people to communicate purely through brainwaves. Technologists like Ray Kurzweil believe that by 2030 we will connect the neocortex of our brains to the cloud via nanobots.

Given these trends, humans will become increasingly cyborg-like. Our future schools may not be educating people as we are today, but rather a new species of human-machine hybrids.

Wisdom-Based Education
Whether you take an abstract or literal definition of a cyborg, we need to completely revamp our educational models. Even if you don’t buy into the scenario where humans integrate powerful brain-machine interfaces into our minds, there is still a desperate need for wisdom-based education to equip current generations to tackle 21st-century issues.

With an emphasis on isolated subjects, standardized assessments, and content knowledge, our current educational models were designed for the industrial era, with the intended goal of creating masses of efficient factory workers—not to empower critical thinkers, innovators, or wise cyborgs.

Currently, the goal of higher education is to provide students with the degree that society tells them they need, and ostensibly to prepare them for the workforce. In contrast, Lombardo and Blackwood argue that wisdom should be the central goal of higher education, and they elaborate on how we can practically make this happen. Lombardo has developed a comprehensive two-year foundational education program for incoming university students aimed at the development of wisdom.

What does such an educational model look like? Lombardo and Blackwood break wisdom down into individual traits and capacities, each of which can be developed and measured independently or in combination with others. The authors lay out an expansive list of traits that can influence our decision-making as we strive to tackle global challenges and pave a more exciting future. These include big-picture thinking, curiosity, wonder, compassion, self-transcendence, love of learning, optimism, and courage.

As the authors point out, “given the complex and transforming nature of the world we live in, the development of wisdom provides a holistic, perspicacious, and ethically informed foundation for understanding the world, identifying its critical problems and positive opportunities, and constructively addressing its challenges.”

After all, many of the challenges we see in our world today boil down to outdated ways of thinking, be they regressive attitudes, superficial value systems, or egocentric mindsets. The development of wisdom would immunize future societies against such debilitating values; imagine what our world would be like if wisdom were ingrained in all leaders and participating members of society.

The Wise Cyborg
Lombardo and Blackwood invite us to imagine how the wise cyborgs of the future would live their lives. What would happen if the powerful human-machine hybrids of tomorrow were also purpose-driven, compassionate, and ethical?

They would perceive the evolving digital world through a lens of wonder, awe, and curiosity. They would use digital information as a tool for problem-solving and a source of infinite knowledge. They would leverage immersive mediums like virtual reality to enhance creative expression and experimentation. They would continue to adapt and thrive in an unpredictable world of accelerating change.

Our media often depict a dystopian future for our species. It is worth considering a radically positive yet plausible scenario where instead of the machines taking over, we converge with them into wise cyborgs. This is just a glimpse of what is possible if we combine transcendent wisdom with powerful exponential technologies.

Image Credit: Peshkova / Shutterstock.com


#433928 The Surprising Parallels Between ...

The human mind can be a confusing and overwhelming place. Despite incredible leaps in human progress, many of us still struggle to make our peace with our thoughts. The roots of this are complex and multifaceted. To find explanations for the global mental health epidemic, one can tap into neuroscience, psychology, evolutionary biology, or simply observe the meaningless systems that dominate our modern-day world.

This is not only the context of our reality but also that of the critically-acclaimed Netflix series Maniac. Part psychological dark comedy, part science fiction, Maniac is a retro, futuristic, and hallucinatory trip filled with hidden symbols. Directed by Cary Joji Fukunaga, the series tells the story of two strangers who decide to participate in the final stage of a “groundbreaking” pharmaceutical trial—one that combines novel pharmaceuticals with artificial intelligence and promises to make their emotional pain go away.

Naturally, things don’t go according to plan.

The narrative is infused with genuine psychological science, from exams used to test defense mechanisms to techniques such as cognitive behavioral therapy. As perplexing as the series may be to some viewers, many of the tools depicted actually have a strong grounding in current technological advancements.

Catalysts for Alleviating Suffering
In the therapy of Maniac, participants undergo a three-day trial wherein they ingest three pills and appear to connect their consciousness to a superintelligent AI. Each participant is hurled into the traumatic experiences imprinted in their subconscious and forced to cope with them in a series of hallucinatory and dream-like experiences.

Perhaps the most recognizable parallel that can be drawn is with the latest advancements in psychedelic therapy. Psychedelics are a class of drugs that alter the experience of consciousness, and often cause radical changes in perception and cognitive processes.

Through a process known as transient hypofrontality, the executive “over-thinking” parts of our brains get a rest, and deeper areas become more active. This experience, combined with the breakdown of the ego, is often correlated with feelings of timelessness, peacefulness, presence, unity, and above all, transcendence.

Despite being non-addictive and extremely difficult to overdose on, psychedelics were looked down on by regulators for decades, and many continue to dismiss them as “party drugs.” But in the last few years, all of this began to change.

Earlier this summer, the FDA granted breakthrough therapy designation to MDMA for the treatment of PTSD, after several phases of successful trials. Similar research has found that psilocybin (the psychoactive compound in magic mushrooms) combined with therapy is far more effective than traditional forms of treatment for depression and anxiety. Today, a growing and overwhelming body of research shows that psychedelics such as LSD, MDMA, and psilocybin are not only effective catalysts to alleviate suffering and enhance the human condition, but potentially the most effective tools out there.

It’s important to realize that these substances are not solutions on their own, but rather catalysts for more effective therapy. They can be groundbreaking, but only in the right context and setting.

Brain-Machine Interfaces
In Maniac, the medication-assisted therapy is guided by what appears to be a super-intelligent form of artificial intelligence called the GRTA, nicknamed Gertie. Gertie, who is a “guide” in machine form, accesses the minds of the participants through what appears to be a futuristic brain-scanning technology and curates customized hallucinatory experiences with the goal of accelerating the healing process.

Such a powerful form of brain-scanning technology is not unheard of. Current scanning technology already allows us to decipher dreams and connect three human brains, and it is improving at an exponential pace. Though nowhere near as advanced as Gertie (we have a long way to go before we reach that kind of general AI), we are also seeing early signs of AI therapy bots: chatbots that listen, think, and communicate with users as a therapist would.

The parallels between current advancements in mental health therapy and the methods in Maniac can be startling, and are a testament to how science fiction and the arts can be used to explore the existential implications of technology.

Not Necessarily a Dystopia
While there are many ingenious similarities between the technology in Maniac and the state of mental health therapy, it’s important to recognize the stark differences. Like many other blockbuster science fiction productions, Maniac tells a fundamentally dystopian tale.

The series tells the story of the 73rd iteration of a controversial drug trial, one that has seen many failures and even left several participants brain-dead. The scientists appear to be evil, secretive, and driven by their own superficial agendas and deep unresolved emotional issues.

In contrast, real-world clinicians and researchers are not only required to file an “investigational new drug application” with the FDA (and get approval), but must also update the agency with safety and progress reports throughout the trial.

Furthermore, many of today’s researchers are driven by a strong desire to contribute to the well-being and progress of our species. Even more, the results of decades of research by organizations like MAPS have been exceptionally promising and aligned with positive values. While Maniac is entertaining and thought-provoking, viewers must not forget the positive potential of such advancements in mental health therapy.

Science, technology, and psychology aside, Maniac is a deep commentary on the human condition and the often disorienting states that pain us all. Within any human lifetime, suffering is inevitable. It is the disproportionate, debilitating, and unjust levels of suffering that we ought to tackle as a society. Ultimately, Maniac explores whether advancements in science and technology can help us live not a life devoid of suffering, but one where it is balanced with fulfillment.

Image Credit: xpixel / Shutterstock.com
