Tag Archives: learning

#435127 Teaching AI the Concept of ‘Similar, ...

As a human you instinctively know that a leopard is closer to a cat than a motorbike, but the way we train most AI makes them oblivious to these kinds of relations. Building the concept of similarity into our algorithms could make them far more capable, writes the author of a new paper in Science Robotics.

Convolutional neural networks have revolutionized the field of computer vision to the point that machines are now outperforming humans on some of the most challenging visual tasks. But the way we train them to analyze images is very different from the way humans learn, says Atsuto Maki, an associate professor at KTH Royal Institute of Technology.

“Imagine that you are two years old and being quizzed on what you see in a photo of a leopard,” he writes. “You might answer ‘a cat’ and your parents might say, ‘yeah, not quite but similar’.”

In contrast, the way we train neural networks rarely gives that kind of partial credit. They are typically trained to have very high confidence in the correct label and to consider all incorrect labels, whether “cat” or “motorbike,” equally wrong. That’s a mistake, says Maki, because ignoring the fact that something can be “less wrong” means you’re not exploiting all of the information in the training data.

Even when models are trained this way, there will be small differences in the probabilities assigned to incorrect labels that can tell you a lot about how well the model can generalize what it has learned to unseen data.

If you show a model a picture of a leopard and it gives “cat” a probability of five percent and “motorbike” one percent, that suggests it picked up on the fact that a cat is closer to a leopard than a motorbike. In contrast, if the figures are the other way around it means the model hasn’t learned the broad features that make cats and leopards similar, something that could potentially be helpful when analyzing new data.
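The arithmetic behind this reading is just the softmax function applied to a model's raw scores. A minimal sketch, using made-up logits chosen to mirror the leopard example, shows how the runner-up probabilities can be read as a similarity signal:

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

classes = ["leopard", "cat", "motorbike"]

# Hypothetical logits for a leopard photo from two models.
# Model A has picked up that cats resemble leopards; model B has not.
model_a = softmax([4.0, 1.1, -0.5])  # ≈ [0.94, 0.05, 0.01]
model_b = softmax([4.0, -0.5, 1.1])  # same top answer, swapped runners-up

# Both models pick "leopard", but only model A's runner-up
# probabilities reflect the semantic closeness of cat and leopard.
assert model_a[1] > model_a[2]
assert model_b[2] > model_b[1]
```

Both models are "correct" by the usual training criterion, which is exactly why the standard loss throws this information away.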

If we could boost this ability to identify similarities between classes we should be able to create more flexible models better able to generalize, says Maki. And recent research has demonstrated how variations of an approach called regularization might help us achieve that goal.

Neural networks are prone to a problem called “overfitting,” which refers to a tendency to pay too much attention to tiny details and noise specific to their training set. When that happens, models will perform excellently on their training data but poorly when applied to unseen test data without these particular quirks.

Regularization is used to circumvent this problem, typically by reducing the network’s capacity to learn all this unnecessary information and therefore boost its ability to generalize to new data. Techniques are varied, but generally involve modifying the network’s structure or the strength of the weights between artificial neurons.
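As a rough illustration of the two families of techniques just mentioned: weight decay penalizes the strength of the weights, while dropout randomly modifies the network's structure during training. The toy functions below are illustrative sketches, not any particular library's API:

```python
import random

def l2_penalty(weights, lam=1e-4):
    """Weight decay: add lam * sum(w^2) to the loss, discouraging the
    large weights that let a network memorize training-set quirks."""
    return lam * sum(w * w for w in weights)

def dropout(activations, p=0.5, training=True):
    """Randomly zero each activation with probability p during training,
    scaling survivors by 1/(1-p) so the expected value is unchanged."""
    if not training or p == 0.0:
        return list(activations)
    return [0.0 if random.random() < p else a / (1.0 - p)
            for a in activations]
```

At test time dropout is switched off and the full network is used, which is why the survivors are rescaled during training.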

More recently, though, researchers have suggested new regularization approaches that work by encouraging a broader spread of probabilities across all classes. This essentially helps them capture more of the class similarities, says Maki, and therefore boosts their ability to generalize.

One such approach was devised in 2017 by Google Brain researchers, led by deep learning pioneer Geoffrey Hinton. They introduced a penalty to their training process that directly punished overconfident predictions in the model’s outputs, along with a technique called label smoothing that prevents the largest probability from becoming much larger than all the others. This meant the probabilities were lower for correct labels and higher for incorrect ones, which was found to boost the performance of models on varied tasks, from image classification to speech recognition.
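Both ingredients are standard enough to sketch. Label smoothing mixes the one-hot target with a uniform distribution, and a confidence penalty subtracts a multiple of the output entropy from the loss; the `eps` and `beta` values below are illustrative, not the paper's settings:

```python
import math

def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: replace the hard 0/1 target with a mixture of
    the one-hot vector and a uniform distribution over all k classes."""
    k = len(one_hot)
    return [(1.0 - eps) * t + eps / k for t in one_hot]

def confidence_penalty(probs, beta=0.1):
    """Confidence penalty: add -beta * entropy(probs) to the loss, so
    low-entropy (overconfident) output distributions are punished."""
    entropy = -sum(p * math.log(p) for p in probs if p > 0)
    return -beta * entropy

# A smoothed 3-class target keeps most mass on the true label but
# reserves a little for every other class: ≈ [0.933, 0.033, 0.033].
target = smooth_labels([1.0, 0.0, 0.0], eps=0.1)
```

Because the penalty is smaller (more negative) for broader distributions, training is nudged away from putting all its confidence on a single label.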

Another came from Maki himself in 2017 and achieves the same goal, but by suppressing high values in the model’s feature vector—the mathematical construct that describes all of an object’s important characteristics. This has a knock-on effect on the spread of output probabilities and also helped boost performance on various image classification tasks.
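One way to see the knock-on effect of suppressing high feature values: when the scores feeding the softmax shrink, the output distribution flattens. The numbers below are invented purely to illustrate this, with the "suppressed" scores simply being the "peaked" scores halved:

```python
import math

def softmax(logits):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(x) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Invented pre-softmax scores; "suppressed" stands in for the smaller
# scores that a penalized feature vector would produce downstream.
peaked = softmax([8.0, 2.2, -1.0])
suppressed = softmax([4.0, 1.1, -0.5])

# Halving the scores flattens the distribution: the top probability
# drops and more mass spreads to the similar, runner-up classes.
assert max(suppressed) < max(peaked)
assert suppressed[1] > peaked[1]
```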

While it’s still early days for the approach, the fact that humans are able to exploit these kinds of similarities to learn more efficiently suggests that models that incorporate them hold promise. Maki points out that it could be particularly useful in applications such as robotic grasping, where distinguishing various similar objects is important.

Image Credit: Marianna Kalashnyk / Shutterstock.com

Posted in Human Robots

#435098 Coming of Age in the Age of AI: The ...

The first generation to grow up entirely in the 21st century will never remember a time before smartphones or smart assistants. They will likely be the first children to ride in self-driving cars, as well as the first whose healthcare and education could be increasingly turned over to artificially intelligent machines.

Futurists, demographers, and marketers have yet to agree on the specifics of what defines the next wave of humanity to follow Generation Z. That hasn’t stopped some, like Australian futurist Mark McCrindle, from coining the term Generation Alpha, denoting a sort of reboot of society in a fully-realized digital age.

“In the past, the individual had no power, really,” McCrindle told Business Insider. “Now, the individual has great control of their lives through being able to leverage this world. Technology, in a sense, transformed the expectations of our interactions.”

No doubt technology may impart Marvel superhero-like powers to Generation Alpha that even tech-savvy Millennials never envisioned over cups of chai latte. But the powers of machine learning, computer vision, and other disciplines under the broad category of artificial intelligence will shape this yet unformed generation more definitively than any before it.

What will it be like to come of age in the Age of AI?

The AI Doctor Will See You Now
Perhaps no other industry is adopting and using AI as much as healthcare. The term “artificial intelligence” appears in nearly 90,000 publications in PubMed, the database of biomedical literature and research.

AI is already transforming healthcare and longevity research. Machines are helping to design drugs faster and detect disease earlier. And AI may soon influence not only how we diagnose and treat illness in children, but perhaps how we choose which children will be born in the first place.

A study published earlier this month in npj Digital Medicine by scientists from Weill Cornell Medicine used 12,000 photos of human embryos taken five days after fertilization to train an AI algorithm to tell which in vitro fertilized embryos had the best chance of leading to a successful pregnancy based on their quality.

Investigators assigned each embryo a grade based on various aspects of its appearance. A statistical analysis then correlated that grade with the probability of success. The algorithm, dubbed Stork, was able to classify the quality of a new set of images with 97 percent accuracy.

“Our algorithm will help embryologists maximize the chances that their patients will have a single healthy pregnancy,” said Dr. Olivier Elemento, director of the Caryl and Israel Englander Institute for Precision Medicine at Weill Cornell Medicine, in a press release. “The IVF procedure will remain the same, but we’ll be able to improve outcomes by harnessing the power of artificial intelligence.”

Other medical researchers see potential in applying AI to detect possible developmental issues in newborns. Scientists in Europe, working with a Finnish AI startup that creates seizure monitoring technology, have developed a technique for detecting movement patterns that might indicate conditions like cerebral palsy.

Published last month in the journal Acta Paediatrica, the study relied on an algorithm to extract the movements of a newborn, turning them into a simplified “stick figure” that medical experts could use to more easily detect clinically relevant data.

The researchers are continuing to improve the datasets, including using 3D video recordings, and are now developing an AI-based method for determining if a child’s motor maturity aligns with its true age. Meanwhile, a study published in February in Nature Medicine discussed the potential of using AI to diagnose pediatric disease.

AI Gets Classy
After being weaned on algorithms, Generation Alpha will hit the books—about machine learning.

China is famously trying to win the proverbial AI arms race by spending billions on new technologies, with one Chinese city alone pledging nearly $16 billion to build a smart economy based on artificial intelligence.

To reach dominance by its target year of 2030, Chinese cities are also incorporating AI education into their school curricula. Last year, China published its first high school textbook on AI, according to the South China Morning Post. More than 40 schools are participating in a pilot program that involves SenseTime, one of the country’s biggest AI companies.

In the US, where it seems every child has access to their own AI assistant, researchers are just beginning to understand how the ubiquity of intelligent machines will influence the ways children learn and interact with their highly digitized environments.

Sandra Chang-Kredl, associate professor of the department of education at Concordia University, told The Globe and Mail that AI could have detrimental effects on learning creativity or emotional connectedness.

Similar concerns inspired Stefania Druga, a member of the Personal Robots group at the MIT Media Lab (and former Education Teaching Fellow at SU), to study interactions between children and artificial intelligence devices in order to encourage positive interactions.

Toward that goal, Druga created Cognimates, a platform that enables children to program and customize their own smart devices such as Alexa or even a smart, functional robot. The kids can also use Cognimates to train their own AI models or even build a machine learning version of Rock Paper Scissors that gets better over time.

“I believe it’s important to also introduce young people to the concepts of AI and machine learning through hands-on projects so they can make more informed and critical use of these technologies,” Druga wrote in a Medium blog post.

Druga is also the founder of Hackidemia, an international organization that sponsors workshops and labs around the world to introduce kids to emerging technologies at an early age.

“I think we are in an arms race in education with the advancement of technology, and we need to start thinking about AI literacy before patterns of behaviors for children and their families settle in place,” she wrote.

AI Goes Back to School
It also turns out that AI has as much to learn from kids. More and more researchers are interested in understanding how children grasp basic concepts that still elude the most advanced machine minds.

For example, developmental psychologist Alison Gopnik has written and lectured extensively about how studying the minds of children can provide computer scientists clues on how to improve machine learning techniques.

In an interview with Vox, she described how DeepMind’s AlphaZero, though trained to be a chess master, struggles with even the simplest changes to the rules, such as altering how the bishop is allowed to move.

“A human chess player, even a kid, will immediately understand how to transfer that new rule to their playing of the game,” she noted. “Flexibility and generalization are something that even human one-year-olds can do but that the best machine learning systems have a much harder time with.”

Last year, the US Defense Advanced Research Projects Agency (DARPA) announced a new program aimed at improving AI by teaching it “common sense.” One of the chief strategies is to develop systems for “teaching machines through experience, mimicking the way babies grow to understand the world.”

Such an approach is also the basis of a new AI program at MIT called the MIT Quest for Intelligence.

According to an article on the project in MIT Technology Review, the research leverages cognitive science to understand human intelligence, for example by exploring how young children visualize the world using their own innate 3D models.

“Children’s play is really serious business,” said Josh Tenenbaum, who leads the Computational Cognitive Science lab at MIT and is head of the new program. “They’re experiments. And that’s what makes humans the smartest learners in the known universe.”

In a world increasingly driven by smart technologies, it’s good to know the next generation will be able to keep up.

Image Credit: phoelixDE / Shutterstock.com

Posted in Human Robots

#435070 5 Breakthroughs Coming Soon in Augmented ...

Convergence is accelerating disruption… everywhere! Exponential technologies are colliding into each other, reinventing products, services, and industries.

In this third installment of my Convergence Catalyzer series, I’ll be synthesizing key insights from my annual entrepreneurs’ mastermind event, Abundance 360. This five-blog series looks at 3D printing, artificial intelligence, VR/AR, energy and transportation, and blockchain.

Today, let’s dive into virtual and augmented reality.

Today’s most prominent tech giants are leaping onto the VR/AR scene, each driving forward new and upcoming product lines. Think: Microsoft’s HoloLens, Facebook’s Oculus, Amazon’s Sumerian, and Google’s Cardboard (Apple plans to release a headset by 2021).

And as plummeting prices meet exponential advancements in VR/AR hardware, this burgeoning disruptor is on its way out of the early adopters’ market and into the majority of consumers’ homes.

My good friend Philip Rosedale is my go-to expert on AR/VR and one of the foremost creators of today’s most cutting-edge virtual worlds. After creating the virtual civilization Second Life in 2003, now populated by almost 1 million active users, Philip went on to co-found High Fidelity, which explores the future of next-generation shared VR.

In just the next five years, he predicts five emerging trends will take hold, together disrupting major players and birthing new ones.

Let’s dive in…

Top 5 Predictions for VR/AR Breakthroughs (2019-2024)
“If you think you kind of understand what’s going on with that tech today, you probably don’t,” says Philip. “We’re still in the middle of landing the airplane of all these new devices.”

(1) Transition from PC-based to standalone mobile VR devices

Historically, VR devices have relied on PC connections, usually involving wires and clunky hardware that restrict a user’s field of motion. However, as VR enters the dematerialization stage, we are about to witness the rapid rise of a standalone and highly mobile VR experience economy.

Oculus Go, the leading standalone mobile VR device on the market, requires only a mobile app for setup and can be transported anywhere with WiFi.

With a consumer audience in mind, the 32GB headset is priced at $200 and shares an app ecosystem with Samsung’s Gear VR. Google’s Daydream headsets also run without a PC, but they rely on a docked mobile phone instead of the built-in screen of the Oculus Go.

In the AR space, Microsoft’s HoloLens 2 leads the way in providing tetherless experiences.

Freeing headsets from the constraints of heavy hardware will make VR/AR increasingly interactive and transportable, a seamless add-on whenever, wherever. Within a matter of years, it may be as simple as carrying lightweight VR goggles wherever you go and throwing them on at a moment’s notice.

(2) Wide field-of-view AR displays

Microsoft’s HoloLens 2 leads the AR industry in headset comfort and display quality. The most significant issue with its predecessor was a limited, rectangular field of view (FOV).

By using laser technology to create a microelectromechanical systems (MEMS) display, however, HoloLens 2 can position waveguides in front of users’ eyes, directed by mirrors. The image can then be enlarged by shifting the angles of these mirrors. Coupled with a 47-pixels-per-degree resolution, HoloLens 2 has now doubled its predecessor’s FOV. Microsoft anticipates releasing the headset by the end of this year at a $3,500 price point, first targeting businesses and eventually rolling it out to consumers.

Magic Leap provides a similar FOV but with lower resolution than the HoloLens 2. The Meta 2 boasts an even wider 90-degree FOV, but requires a cable attachment. The race to achieve the natural human 120-degree horizontal FOV continues.

“The technology to expand the field of view is going to make those devices much more usable by giving you bigger than a small box to look through,” Rosedale explains.

(3) Mapping of real world to enable persistent AR ‘mirror worlds’

‘Mirror worlds’ are alternative dimensions of reality that can blanket a physical space. While seated in your office, the floor beneath you could dissolve into a calm lake and each desk into a sailboat. In the classroom, mirror worlds would convert pencils into magic wands and tabletops into touch screens.

Pokémon Go provides an introductory glimpse into the mirror world concept and its massive potential to unite people in real action.

To create these mirror worlds, AR headsets must precisely understand the architecture of the surrounding world. Rosedale predicts the scanning accuracy of devices will improve rapidly over the next five years to make these alternate dimensions possible.

(4) 5G mobile devices reduce latency to imperceptible levels

Verizon has already launched 5G networks in Minneapolis and Chicago, compatible with the Moto Z3. Sprint plans to follow with its own 5G launch in May. Samsung, LG, Huawei, and ZTE have all announced upcoming 5G devices.

“5G is rolling out this year and it’s going to materially affect particularly my work, which is making you feel like you’re talking to somebody else directly face to face,” explains Rosedale. “5G is critical because currently the cell devices impose too much delay, so it doesn’t feel real to talk to somebody face to face on these devices.”

To operate seamlessly from anywhere on the planet, standalone VR/AR devices will require a strong 5G network. Enhancing real-time connectivity in VR/AR will transform the communication methods of tomorrow.

(5) Eye-tracking and facial expressions built in for full natural communication

Companies like Pupil Labs and Tobii provide eye-tracking hardware add-ons and software for VR/AR headsets. This technology enables foveated rendering, which renders a given scene in high resolution only in the fovea region while the peripheral regions appear in lower resolution, conserving processing power.
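At its core, foveated rendering is a tiering decision: regions near the tracked gaze point get full resolution, the periphery progressively less. The thresholds below are illustrative assumptions, not values from any shipping headset:

```python
def foveated_scale(angle_from_gaze_deg, fovea_deg=5.0, mid_deg=20.0):
    """Pick a resolution scale for a screen region based on its angular
    distance from the tracked gaze point (three illustrative tiers)."""
    if angle_from_gaze_deg <= fovea_deg:
        return 1.0   # full resolution where the fovea is pointed
    if angle_from_gaze_deg <= mid_deg:
        return 0.5   # half resolution in the near periphery
    return 0.25      # quarter resolution in the far periphery
```

Because the eye resolves fine detail only within a few degrees of the gaze center, the viewer does not notice the cheaper peripheral rendering.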

As seen in the HoloLens 2, eye tracking can also be used to identify users and customize lens widths to provide a comfortable, personalized experience for each individual.

According to Rosedale, “The fundamental opportunity for both VR and AR is to improve human communication.” He points out that current VR/AR headsets miss many of the subtle yet important aspects of communication. Eye movements and microexpressions provide valuable insight into a user’s emotions and desires.

Coupled with emotion-detecting AI software, such as Affectiva, VR/AR devices might soon convey much more richly textured and expressive interactions between any two people, transcending physical boundaries and even language gaps.

Final Thoughts
As these promising trends begin to transform the market, VR/AR will undoubtedly revolutionize our lives… possibly to the point at which our virtual worlds become just as consequential and enriching as our physical world.

A boon for next-gen education, VR/AR will empower youth and adults alike with holistic learning that incorporates social, emotional, and creative components through visceral experiences, storytelling, and simulation. Traveling to another time, manipulating the insides of a cell, or even designing a new city will become daily phenomena of tomorrow’s classrooms.

In real estate, buyers will increasingly make decisions through virtual tours. Corporate offices might evolve into spaces that only exist in ‘mirror worlds’ or grow virtual duplicates for remote workers.

In healthcare, accuracy of diagnosis will skyrocket, while surgeons gain access to digital aids as they conduct life-saving procedures. Or take manufacturing, wherein training and assembly will become exponentially more efficient as visual cues guide complex tasks.

In the mere matter of a decade, VR and AR will unlock limitless applications for new and converging industries. And as virtual worlds converge with AI, 3D printing, computing advancements and beyond, today’s experience economies will explode in scale and scope. Prepare yourself for the exciting disruption ahead!

Join Me
Abundance-Digital Online Community: Stay ahead of technological advancements, and turn your passion into action. Abundance Digital is now part of Singularity University. Learn more.

Image Credit: Mariia Korneeva / Shutterstock.com

Posted in Human Robots

#435056 How Researchers Used AI to Better ...

A few years back, DeepMind’s Demis Hassabis famously prophesized that AI and neuroscience will positively feed into each other in a “virtuous circle.” If realized, this would fundamentally expand our insight into intelligence, both machine and human.

We’ve already seen some proofs of concept, at least in the brain-to-AI direction. For example, memory replay, a biological mechanism that fortifies our memories during sleep, also boosted AI learning when abstractly appropriated into deep learning models. Reinforcement learning, loosely based on our motivation circuits, is now behind some of AI’s most powerful tools.

Hassabis is about to be proven right again.

Last week, two studies independently tapped into the power of artificial neural networks (ANNs) to solve a 70-year-old neuroscience mystery: how does our visual system perceive reality?

The first, published in Cell, used generative networks to evolve DeepDream-like images that hyper-activate complex visual neurons in monkeys. These machine artworks are pure nightmare fuel to the human eye; but together, they revealed a fundamental “visual hieroglyph” that may form a basic rule for how we piece together visual stimuli to process sight into perception.

In the second study, a team used a deep ANN model—one thought to mimic biological vision—to synthesize new patterns tailored to control certain networks of visual neurons in the monkey brain. When directly shown to monkeys, the team found that the machine-generated artworks could reliably activate predicted populations of neurons. Future improved ANN models could allow even better control, giving neuroscientists a powerful noninvasive tool to study the brain. The work was published in Science.

The individual results, though fascinating, aren’t necessarily the point. Rather, they illustrate how scientists are now striving to complete the virtuous circle: tapping AI to probe natural intelligence. Vision is only the beginning—the tools can potentially be expanded into other sensory domains. And the more we understand about natural brains, the better we can engineer artificial ones.

It’s a “great example of leveraging artificial intelligence to study organic intelligence,” commented Dr. Roman Sandler at Kernel.co on Twitter.

Why Vision?
ANNs and biological vision have quite the history.

In the late 1950s, the legendary neuroscientist duo David Hubel and Torsten Wiesel became some of the first to use mathematical equations to understand how neurons in the brain work together.

In a series of experiments—many using cats—the team carefully dissected the structure and function of the visual cortex. Using myriad images, they revealed that vision is processed in a hierarchy: neurons in “earlier” brain regions, those closer to the eyes, tend to activate when they “see” simple patterns such as lines. As we move deeper into the brain, from the early V1 to the IT cortex, a nub located slightly behind our ears, neurons increasingly respond to more complex or abstract patterns, including faces, animals, and objects.

The discovery led some scientists to call certain IT neurons “Jennifer Aniston cells,” which fire in response to pictures of the actress regardless of lighting, angle, or haircut. That is, IT neurons somehow extract visual information into the “gist” of things.

That’s not trivial. How complex neural connections produce this increasing abstraction of what we see into what we think we see—what we perceive—is a central question in machine vision: how can we teach machines to transform numbers encoding stimuli into dots, lines, and angles that eventually form “perceptions” and “gists”? The answer could transform self-driving cars, facial recognition, and other computer vision applications as they learn to better generalize.

Hubel and Wiesel’s Nobel Prize-winning studies heavily influenced the birth of ANNs and deep learning. Many of the earlier “feed-forward” ANN architectures were based on our visual system; even today, the idea of increasing layers of abstraction—for perception or reasoning—guides computer scientists building AI that can better generalize. The early romance between vision and deep learning is perhaps the bond that kicked off our current AI revolution.

It only seems fair that AI would feed back into vision neuroscience.

Hieroglyphs and Controllers
In the Cell study, a team led by Dr. Margaret Livingstone at Harvard Medical School tapped into generative networks to unravel IT neurons’ complex visual alphabet.

Scientists have long known that neurons in earlier visual regions (V1) tend to fire in response to “grating patches” oriented in certain ways. Using a limited set of these patches like letters, V1 neurons can “express a visual sentence” and represent any image, said Dr. Arash Afraz at the National Institute of Health, who was not involved in the study.

But how IT neurons operate remained a mystery. Here, the team used a combination of genetic algorithms and deep generative networks to “evolve” computer art for every studied neuron. In seven monkeys, the team implanted electrodes into various parts of the visual IT region so that they could monitor the activity of a single neuron.

The team showed each monkey an initial set of 40 images. They then picked the top 10 images that stimulated the highest neural activity, and married them to 30 new images to “evolve” the next generation of images. After 250 generations, the technique, XDREAM, generated a slew of images that mashed up contorted face-like shapes with lines, gratings, and abstract shapes.

This image shows the evolution of an optimum image for stimulating a visual neuron in a monkey. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“The evolved images look quite counter-intuitive,” explained Afraz. Some clearly show detailed structures that resemble natural images, while others show complex structures that can’t be characterized by our puny human brains.

This figure shows natural images (right) and images evolved by neurons in the inferotemporal cortex of a monkey (left). Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“What started to emerge during each experiment were pictures that were reminiscent of shapes in the world but were not actual objects in the world,” said study author Carlos Ponce. “We were seeing something that was more like the language cells use with each other.”

This image was evolved by a neuron in the inferotemporal cortex of a monkey using AI. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
Although IT neurons don’t seem to use a simple letter alphabet, they do appear to rely on a vast array of characters, like hieroglyphs or Chinese characters, “each loaded with more information,” said Afraz.

The adaptive nature of XDREAM turns it into a powerful tool to probe the inner workings of our brains—particularly for revealing discrepancies between biology and models.
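The select-and-recombine loop described above is a standard genetic algorithm. The sketch below captures its shape under toy assumptions: "images" are short vectors rather than generative-network outputs, and `neuron_response` is a stand-in for the recorded firing rate of a real neuron (XDREAM's actual pipeline evolves latent codes fed through a deep generative network):

```python
import random

random.seed(0)

TARGET = [0.8, 0.2, 0.5, 0.9]  # hidden "preferred stimulus" of the fake neuron

def neuron_response(image):
    """Stand-in for a recorded firing rate: the closer the toy 'image'
    is to the neuron's preferred pattern, the higher the response.
    In the real experiment this is a live electrode measurement."""
    return -sum((p - t) ** 2 for p, t in zip(image, TARGET))

def evolve(pop_size=40, keep=10, generations=50, genes=4):
    """Keep the top responders each generation; breed the rest from them."""
    pop = [[random.random() for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=neuron_response, reverse=True)
        parents = pop[:keep]                    # top images survive intact
        children = []
        while len(children) < pop_size - keep:  # recombine and mutate
            a, b = random.sample(parents, 2)
            children.append([(x + y) / 2 + random.gauss(0, 0.05)
                             for x, y in zip(a, b)])
        pop = parents + children
    return max(pop, key=neuron_response)

best = evolve()
```

After enough generations, `best` drifts toward whatever pattern the (black-box) neuron responds to most, which is exactly the property that makes the method useful when the "fitness function" is a living cell.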

The Science study, led by Dr. James DiCarlo at MIT, takes a similar approach. Using ANNs to generate new patterns and images, the team was able to selectively predict and independently control neuron populations in a high-level visual region called V4.

“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” said study author Dr. Pouya Bashivan. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”
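Driving neurons into desired states amounts to optimizing a stimulus against a differentiable model of neural activity. The toy sketch below does this by gradient ascent on an invented quadratic surrogate, `predicted_activity`; the real study used deep ANN models fit to V4 recordings, not this stand-in:

```python
def predicted_activity(stimulus):
    """Toy differentiable surrogate for an ANN's prediction of a neural
    site's activity; it peaks at an invented 'preferred' stimulus."""
    preferred = [0.3, 0.7, 0.1]
    return 1.0 - sum((s - p) ** 2 for s, p in zip(stimulus, preferred))

def numerical_grad(f, x, h=1e-5):
    """Central-difference gradient, so the sketch needs no autodiff library."""
    grads = []
    for i in range(len(x)):
        hi, lo = list(x), list(x)
        hi[i] += h
        lo[i] -= h
        grads.append((f(hi) - f(lo)) / (2 * h))
    return grads

def synthesize(steps=200, lr=0.1):
    """Gradient-ascend a stimulus until the modeled activity is maximal."""
    x = [0.0, 0.0, 0.0]
    for _ in range(steps):
        g = numerical_grad(predicted_activity, x)
        x = [xi + lr * gi for xi, gi in zip(x, g)]
    return x

stimulus = synthesize()
```

The synthesized stimulus can then be shown to the animal; how well it actually drives the recorded neurons measures how faithful the model is to the brain.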

It suggests that our current ANN models for visual computation “implicitly capture a great deal of visual knowledge” which we can’t really describe, but which the brain uses to turn visual information into perception, the authors said. By testing AI-generated images on biological vision, the team also showed that today’s ANNs share a degree of that understanding and generalization. The results could potentially help engineer even more accurate ANN models of biological vision, which in turn could feed back into machine vision.

“One thing is clear already: Improved ANN models … have led to control of a high-level neural population that was previously out of reach,” the authors said. “The results presented here have likely only scratched the surface of what is possible with such implemented characterizations of the brain’s neural networks.”

To Afraz, the power of AI here is to find cracks in human perception—both our computational models of sensory processes, as well as our evolved biological software itself. AI can be used “as a perfect adversarial tool to discover design cracks” of IT, said Afraz, such as finding computer art that “fools” a neuron into thinking the object is something else.

“As artificial intelligence researchers develop models that work as well as the brain does—or even better—we will still need to understand which networks are more likely to behave safely and further human goals,” said Ponce. “More efficient AI can be grounded by knowledge of how the brain works.”

Image Credit: Sangoiri / Shutterstock.com

Posted in Human Robots

#435046 The Challenge of Abundance: Boredom, ...

As technology continues to progress, the possibility of an abundant future seems more likely. Artificial intelligence is expected to drive down the cost of labor, infrastructure, and transport. Alternative energy systems are reducing the cost of a wide variety of goods. Poverty rates are falling around the world as more people are able to make a living, and resources that were once inaccessible to millions are becoming widely available.

But such a life presents fuel for the most common complaint against abundance: if robots take all the jobs, basic income provides us livable welfare for doing nothing, and healthcare is a guarantee free of charge, then what is the point of our lives? What would motivate us to work and excel if there are no real risks or rewards? If everything is simply given to us, how would we feel like we’ve ever earned anything?

Time has proven that humans inherently yearn to overcome challenges—in fact, this very desire likely exists as the root of most technological innovation. And the idea that struggling makes us stronger isn’t just anecdotal, it’s scientifically validated.

For instance, kids who use anti-bacterial soaps and sanitizers too often tend to develop weak immune systems, causing them to get sick more frequently and more severely. People who work out purposely suffer through torn muscles so that after a few days of healing their muscles are stronger. And when patients visit a psychologist to handle a fear that is derailing their lives, one of the most common treatments is exposure therapy: a slow increase of exposure to the suffering so that the patient gets stronger and braver each time, able to take on an incrementally more potent manifestation of their fears.

Different Kinds of Struggle
It's not hard to understand why people might fear an abundant future as a terribly mundane one. But this assumption rests on one crucial mistake, well summarized by Indian mystic and author Sadhguru during a recent talk at Google:

Stomach empty, only one problem. Stomach full—one hundred problems; because what we refer to as human really begins only after survival is taken care of.

This idea is backed up by Maslow's hierarchy of needs, first presented in his 1943 paper "A Theory of Human Motivation." Maslow lays out the levels required to reach higher and higher planes of the human experience. Not surprisingly, the first two levels deal with physiological needs and the need for safety; in other words, with the body. You need food, water, and sleep, or you die. After that, you need protection from threats, from the elements, from dangerous people, and from disease and pain.

Maslow's Hierarchy of Needs. Image by Wikimedia user Factoryjoe / CC BY-SA 3.0
The beauty of these first two levels is that they’re clear-cut problems with clear-cut solutions: if you’re hungry, then you eat; if you’re thirsty, then you drink; if you’re tired, then you sleep.

But what about the next tiers of the hierarchy? What of love and belonging, of self-esteem and self-actualization? If we’re lonely, can we just summon up an authentic friend or lover? If we feel neglected by society, can we demand it validate us? If we feel discouraged and disappointed in ourselves, can we simply dial up some confidence and self-esteem?

Of course not, and that’s because these psychological needs are nebulous; they don’t contain clear problems with clear solutions. They involve the external world and other people, and are complicated by the infinite flavors of nuance and compromise that are required to navigate human relationships and personal meaning.

These psychological difficulties are where we grow our personalities, outlooks, and beliefs. The truly defining characteristics of a person are dictated not by the physical situations they were forced into (like birth, socioeconomic class, or physical ailment) but by the things they choose. So a future of abundance frees us from those physical limitations so that we can truly commit to a life of purpose and meaning, rather than treating survival itself as our purpose.

The Greatest Challenge
And that's the plot twist. Coming to grips with our own individuality and freedom could actually be the greatest challenge our species has ever faced. Can you imagine waking up every day with infinite possibility? Every choice you make says no to the rest of reality, so every decision carries truly life-defining purpose and meaning. That sounds overwhelming. And that's probably because in our current socioeconomic systems, it is.

Studies have shown that people in wealthier nations tend to experience more anxiety and depression. Ron Kessler, professor of health care policy at Harvard and World Health Organization (WHO) researcher, summarized his findings on global mental health by saying, "When you're literally trying to survive, who has time for depression? Americans, on the other hand, many of whom lead relatively comfortable lives, blow other nations away in the depression factor, leading some to suggest that depression is a 'luxury disorder.'"

This might explain why the US ranks among the most depressed and anxious countries on the planet. We surpassed our survival needs, and instead became depressed because our jobs and relationships don't meet our expectations for the next three levels of Maslow's hierarchy (belonging, esteem, and self-actualization).

But a future of abundance would mean we’d have to deal with these levels. This is the challenge for the future; this is what keeps things from being mundane.

As a society, we would be forced to come to grips with our emotional intelligence, to reckon with philosophy rather than simply contemplate it. Nearly every person you meet will be passionately on their own customized life journey, not following a routine simply because of financial limitations. Such a world seems far more vibrant and interesting than one where most wander sleep-deprived and numb while attempting to survive the rat race.

We can already see the forceful hand of this paradigm shift as self-driving cars edge toward ubiquity. For example, consider the famous psychological and philosophical "trolley problem." In this thought experiment, a person sees a trolley heading toward five people on the tracks, along with a lever that will switch the trolley to a track with only one person on it. Do you pull the lever and have a hand in killing one person, or do you let fate run its course and kill five people instead?

For the longest time, this was just an interesting quandary to consider. But now massive corporations must have an answer, so they can program their self-driving cars to choose between hitting a kid who runs into the road or swerving into an oncoming car carrying a family of five. When companies need philosophers to make business decisions, it's a good sign of what's to come.
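In engineering terms, "having an answer" means committing to an explicit cost function that ranks outcomes. Here is a minimal, purely illustrative sketch of one such policy, a utilitarian rule that minimizes expected harm; the action names, probabilities, and casualty counts are all hypothetical, and no real manufacturer's policy is this simple:

```python
def choose_action(outcomes):
    """Pick the action with the lowest expected harm.

    `outcomes` maps each candidate action to a list of
    (probability, casualties) pairs describing its possible results.
    Minimizing expected casualties is only one of many policies a
    company could encode; it is used here purely for illustration.
    """
    def expected_harm(results):
        return sum(prob * casualties for prob, casualties in results)

    return min(outcomes, key=lambda action: expected_harm(outcomes[action]))


# Classic trolley framing: staying the course harms five, diverting harms one.
decision = choose_action({
    "stay_on_course": [(1.0, 5)],
    "divert": [(1.0, 1)],
})
# Under this utilitarian weighting, the rule selects "divert".
```

The hard part, of course, is not the ten lines of code but deciding what belongs in the cost function in the first place, which is exactly why these companies now need philosophers.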

Luckily, this forceful reckoning with philosophy and our own consciousness may be exactly what humanity needs. Perhaps our great failure as a species has resulted from advanced cognition still trapped in the first two levels of Maslow's hierarchy by a long history of scarcity.

As suggested by the opening scenes of 2001: A Space Odyssey, our ape-like proclivity for violence has stayed the same while the technology we fight with and live amongst has progressed. So while well-off Americans may lead comfortable lives, they still know they live in a system with no safety net, where a single tragic failure could mean hunger and homelessness. Because of this, the evolutionarily hard-wired, neurotic part of our brain that fears for our survival has never been able to fully relax, and the anxiety and depression that come with too much freedom but not enough security remain ever present.

Not only might this shift in consciousness help liberate humanity, but it may be vital if we’re to survive our future creations as well. Whatever values we hold dear as a species are the ones we will imbue into the sentient robots we create. If machine learning is going to take its guidance from humanity, we need to level up humanity’s emotional maturity.

While the physical struggles of the future may indeed fall by the wayside amid abundance, the world is unlikely to become mundane; instead, it will become a vibrant culture where each individual strives against the most important struggle that affects all of us: the challenge to find inner peace, to find fulfillment, to build meaningful relationships, and ultimately, to find ourselves.

Image Credit: goffkein.pro / Shutterstock.com

Posted in Human Robots