#435098 Coming of Age in the Age of AI: The ...
The first generation to grow up entirely in the 21st century will never remember a time before smartphones or smart assistants. They will likely be the first children to ride in self-driving cars, as well as the first whose healthcare and education could be increasingly turned over to artificially intelligent machines.
Futurists, demographers, and marketers have yet to agree on the specifics of what defines the next wave of humanity to follow Generation Z. That hasn’t stopped some, like Australian futurist Mark McCrindle, from coining the term Generation Alpha, denoting a sort of reboot of society in a fully-realized digital age.
“In the past, the individual had no power, really,” McCrindle told Business Insider. “Now, the individual has great control of their lives through being able to leverage this world. Technology, in a sense, transformed the expectations of our interactions.”
Technology may well impart Marvel-superhero-like powers to Generation Alpha that even tech-savvy Millennials never envisioned over cups of chai latte. But the powers of machine learning, computer vision, and other disciplines under the broad category of artificial intelligence will shape this as-yet-unformed generation more definitively than any before it.
What will it be like to come of age in the Age of AI?
The AI Doctor Will See You Now
Perhaps no other industry is adopting AI as eagerly as healthcare. The term “artificial intelligence” appears in nearly 90,000 publications indexed in the PubMed database of biomedical literature.
AI is already transforming healthcare and longevity research. Machines are helping to design drugs faster and detect disease earlier. And AI may soon influence not only how we diagnose and treat illness in children, but perhaps how we choose which children will be born in the first place.
A study published earlier this month in npj Digital Medicine by scientists from Weill Cornell Medicine used 12,000 photos of human embryos, taken five days after fertilization, to train an AI algorithm to tell which in vitro fertilized embryos had the best chance of leading to a successful pregnancy, based on their quality.
Investigators assigned each embryo a grade based on various aspects of its appearance. A statistical analysis then correlated that grade with the probability of success. The algorithm, dubbed Stork, was able to classify the quality of a new set of images with 97 percent accuracy.
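For readers curious what such a pipeline looks like in practice, here is a minimal transfer-learning sketch in Python with PyTorch. The directory layout, class labels, and hyperparameters are illustrative assumptions, not details from the Stork study.

```python
# A toy sketch of training an embryo-quality classifier via transfer
# learning. Folder layout, class names, and hyperparameters are
# illustrative assumptions, not details from the Stork study.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),               # ResNet's expected input size
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],  # ImageNet statistics
                         [0.229, 0.224, 0.225]),
])

# Assumes folders like embryos/train/good and embryos/train/poor, each
# holding day-5 embryo photos graded by embryologists.
train_set = datasets.ImageFolder("embryos/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from an ImageNet-pretrained network and retrain its final layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```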
“Our algorithm will help embryologists maximize the chances that their patients will have a single healthy pregnancy,” said Dr. Olivier Elemento, director of the Caryl and Israel Englander Institute for Precision Medicine at Weill Cornell Medicine, in a press release. “The IVF procedure will remain the same, but we’ll be able to improve outcomes by harnessing the power of artificial intelligence.”
Other medical researchers see potential in applying AI to detect possible developmental issues in newborns. Scientists in Europe, working with a Finnish AI startup that creates seizure monitoring technology, have developed a technique for detecting movement patterns that might indicate conditions like cerebral palsy.
Published last month in the journal Acta Paediatrica, the study relied on an algorithm to extract a newborn’s movements from video, turning them into a simplified “stick figure” that medical experts can use to more easily spot clinically relevant patterns.
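As a rough sketch of the general approach, the snippet below reduces video frames to per-frame joint coordinates, the raw material for such a stick figure. MediaPipe Pose and the file name here are stand-ins for whatever tooling the researchers actually used.

```python
# A rough sketch of turning infant video into "stick figure" joint
# trajectories for movement analysis. MediaPipe Pose is a stand-in for
# the (unspecified) pose-estimation model the researchers used.
import cv2
import mediapipe as mp

pose = mp.solutions.pose.Pose(static_image_mode=False)
capture = cv2.VideoCapture("infant_video.mp4")  # hypothetical recording

trajectories = []  # one list of (x, y) joint coordinates per frame
while True:
    ok, frame = capture.read()
    if not ok:
        break
    results = pose.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.pose_landmarks:
        joints = [(lm.x, lm.y) for lm in results.pose_landmarks.landmark]
        trajectories.append(joints)

capture.release()
# Downstream, clinicians or a second model would scan these simplified
# trajectories for atypical movement patterns.
```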
The researchers are continuing to improve the datasets, including using 3D video recordings, and are now developing an AI-based method for determining if a child’s motor maturity aligns with its true age. Meanwhile, a study published in February in Nature Medicine discussed the potential of using AI to diagnose pediatric disease.
AI Gets Classy
After being weaned on algorithms, Generation Alpha will hit the books—about machine learning.
China is famously trying to win the proverbial AI arms race by spending billions on new technologies, with one Chinese city alone pledging nearly $16 billion to build a smart economy based on artificial intelligence.
To reach its stated goal of AI dominance by 2030, China is also incorporating AI education into school curricula. Last year, China published its first high school textbook on AI, according to the South China Morning Post. More than 40 schools are participating in a pilot program that involves SenseTime, one of the country’s biggest AI companies.
In the US, where it seems every child has access to their own AI assistant, researchers are just beginning to understand how the ubiquity of intelligent machines will influence the ways children learn and interact with their highly digitized environments.
Sandra Chang-Kredl, associate professor in the Department of Education at Concordia University, told The Globe and Mail that AI could have detrimental effects on learning, creativity, or emotional connectedness.
Similar concerns inspired Stefania Druga, a member of the Personal Robots group at the MIT Media Lab (and former Education Teaching Fellow at SU), to study interactions between children and artificial intelligence devices in order to encourage positive interactions.
Toward that goal, Druga created Cognimates, a platform that enables children to program and customize their own smart devices such as Alexa or even a smart, functional robot. The kids can also use Cognimates to train their own AI models or even build a machine learning version of Rock Paper Scissors that gets better over time.
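To give a flavor of what “gets better over time” can mean, here is a toy Rock Paper Scissors opponent that counts your past throws and plays the counter to your most frequent one. It is a minimal illustration, not Cognimates’ actual implementation.

```python
# A toy Rock Paper Scissors opponent that "learns" by counting the
# player's past throws and countering the most frequent one. A minimal
# illustration, not how Cognimates implements its version.
import random
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def play():
    history = Counter()
    while True:
        move = input("rock/paper/scissors (anything else quits): ").strip().lower()
        if move not in BEATS:
            break
        if history:
            # Predict the player's habit and play its counter.
            predicted = history.most_common(1)[0][0]
            ai_move = BEATS[predicted]
        else:
            ai_move = random.choice(list(BEATS))  # no data yet
        history[move] += 1
        print(f"AI plays {ai_move}")

play()
```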
“I believe it’s important to also introduce young people to the concepts of AI and machine learning through hands-on projects so they can make more informed and critical use of these technologies,” Druga wrote in a Medium blog post.
Druga is also the founder of Hackidemia, an international organization that sponsors workshops and labs around the world to introduce kids to emerging technologies at an early age.
“I think we are in an arms race in education with the advancement of technology, and we need to start thinking about AI literacy before patterns of behaviors for children and their families settle in place,” she wrote.
AI Goes Back to School
It turns out AI also has much to learn from kids. More and more researchers are interested in understanding how children grasp basic concepts that still elude the most advanced machine minds.
For example, developmental psychologist Alison Gopnik has written and lectured extensively about how studying the minds of children can provide computer scientists clues on how to improve machine learning techniques.
In an interview with Vox, she explained that while DeepMind’s AlphaZero was trained to be a chess master, it struggles with even the simplest changes to the rules, such as changing how the bishop is allowed to move.
“A human chess player, even a kid, will immediately understand how to transfer that new rule to their playing of the game,” she noted. “Flexibility and generalization are something that even human one-year-olds can do but that the best machine learning systems have a much harder time with.”
Last year, the Defense Advanced Research Projects Agency (DARPA) announced a new program aimed at improving AI by teaching it “common sense.” One of the chief strategies is to develop systems for “teaching machines through experience, mimicking the way babies grow to understand the world.”
Such an approach is also the basis of a new AI program at MIT called the MIT Quest for Intelligence.
According to an article on the project in MIT Technology Review, the research leverages cognitive science to understand human intelligence, for example by exploring how young children visualize the world using their own innate 3D models.
“Children’s play is really serious business,” said Josh Tenenbaum, who leads the Computational Cognitive Science lab at MIT and is head of the new program. “They’re experiments. And that’s what makes humans the smartest learners in the known universe.”
In a world increasingly driven by smart technologies, it’s good to know the next generation will be able to keep up.
Image Credit: phoelixDE / Shutterstock.com
#435046 The Challenge of Abundance: Boredom, ...
As technology continues to progress, the possibility of an abundant future seems more likely. Artificial intelligence is expected to drive down the cost of labor, infrastructure, and transport. Alternative energy systems are reducing the cost of a wide variety of goods. Poverty rates are falling around the world as more people are able to make a living, and resources that were once inaccessible to millions are becoming widely available.
But such a life fuels the most common complaint against abundance: if robots take all the jobs, a basic income provides livable welfare for doing nothing, and healthcare is guaranteed free of charge, then what is the point of our lives? What would motivate us to work and excel if there are no real risks or rewards? If everything is simply given to us, how would we ever feel we’ve earned anything?
Time has proven that humans inherently yearn to overcome challenges—in fact, this very desire is likely the root of most technological innovation. And the idea that struggling makes us stronger isn’t just anecdotal; it’s scientifically validated.
For instance, kids who use antibacterial soaps and sanitizers too often tend to develop weaker immune systems, causing them to get sick more frequently and more severely. People who work out deliberately stress their muscles, tearing the fibers slightly so that after a few days of healing they grow back stronger. And when patients visit a psychologist to handle a fear that is derailing their lives, one of the most common treatments is exposure therapy: a gradual increase of exposure to the source of the fear, so that the patient grows stronger and braver each time and can take on an incrementally more potent manifestation of it.
Different Kinds of Struggle
It’s not hard to understand why people might fear an abundant future as a terribly mundane one. But there is one crucial mistake made in this assumption, and it was well summarized by Indian mystic and author Sadhguru, who said during a recent talk at Google:
Stomach empty, only one problem. Stomach full—one hundred problems; because what we refer to as human really begins only after survival is taken care of.
This idea is backed up by Maslow’s hierarchy of needs, which was first presented in his 1943 paper “A Theory of Human Motivation.” Maslow shows the steps required to build to higher and higher levels of the human experience. Not surprisingly, the first two levels deal with physiological needs and the need for safety—in other words, with the body. You need to have food, water, and sleep, or you die. After that, you need to be protected from threats, from the elements, from dangerous people, and from disease and pain.
Maslow’s Hierarchy of Needs. Photo by Wikimedia User:Factoryjoe / CC BY-SA 3.0
The beauty of these first two levels is that they’re clear-cut problems with clear-cut solutions: if you’re hungry, then you eat; if you’re thirsty, then you drink; if you’re tired, then you sleep.
But what about the next tiers of the hierarchy? What of love and belonging, of self-esteem and self-actualization? If we’re lonely, can we just summon up an authentic friend or lover? If we feel neglected by society, can we demand it validate us? If we feel discouraged and disappointed in ourselves, can we simply dial up some confidence and self-esteem?
Of course not, and that’s because these psychological needs are nebulous; they don’t contain clear problems with clear solutions. They involve the external world and other people, and are complicated by the infinite flavors of nuance and compromise that are required to navigate human relationships and personal meaning.
These psychological difficulties are where we grow our personalities, outlooks, and beliefs. The truly defining characteristics of a person are dictated not by the physical situations they were forced into—like birth, socioeconomic class, or physical ailment—but instead by the things they choose. So a future of abundance helps to free us from the physical limitations so that we can truly commit to a life of purpose and meaning, rather than just feel like survival is our purpose.
The Greatest Challenge
And that’s the plot twist. This challenge to come to grips with our own individuality and freedom could actually be the greatest challenge our species has ever faced. Can you imagine waking up every day with infinite possibility? Every choice you make says no to the rest of reality, and so every decision carries with it truly life-defining purpose and meaning. That sounds overwhelming. And that’s probably because in our current socio-economic systems, it is.
Studies have shown that people in wealthier nations tend to experience more anxiety and depression. Ron Kessler, professor of health care policy at Harvard and a World Health Organization (WHO) researcher, summarized his findings on global mental health by saying, “When you’re literally trying to survive, who has time for depression? Americans, on the other hand, many of whom lead relatively comfortable lives, blow other nations away in the depression factor, leading some to suggest that depression is a ‘luxury disorder.’”
This might explain why America ranks among the most depressed and anxious countries on the planet. We have surpassed our survival needs, only to become depressed because our jobs and relationships don’t fulfill our expectations for the next three levels of Maslow’s hierarchy (belonging, esteem, and self-actualization).
But a future of abundance would mean we’d have to deal with these levels. This is the challenge for the future; this is what keeps things from being mundane.
As a society, we would be forced to come to grips with our emotional intelligence, to reckon with philosophy rather than simply contemplate it. Nearly every person you meet will be passionately on their own customized life journey, not following a routine simply because of financial limitations. Such a world seems far more vibrant and interesting than one where most wander sleep-deprived and numb while attempting to survive the rat race.
We can already see the forceful hand of this paradigm shift as self-driving cars edge toward ubiquity. For example, consider the famous psychological and philosophical “trolley problem.” In this thought experiment, a person sees a trolley heading toward five people on the tracks, along with a lever that will switch it to a track with only one person on it. Do you pull the lever and have a hand in killing one person, or do you let fate take its course and kill five?
For the longest time, this was just an interesting quandary to consider. But now, massive corporations have to have an answer, so they can program their self-driving cars with the ability to choose between hitting a kid who runs into the road or swerving into an oncoming car carrying a family of five. When companies need philosophers to make business decisions, it’s a good sign of what’s to come.
Luckily, it’s possible this forceful reckoning with philosophy and our own consciousness may be exactly what humanity needs. Perhaps our great failure as a species has been a result of advanced cognition still trapped in the first two levels of Maslow’s hierarchy due to a long history of scarcity.
As suggested in the opening scenes of 2001: A Space Odyssey, our ape-like proclivity for violence has stayed much the same while the technology we fight with and live amongst has progressed. So while well-off Americans may lead comfortable lives, they still know they live in a system with no real safety net, where a single tragic failure could mean hunger and homelessness. Because of this, the evolutionarily hard-wired, neurotic part of our brain that fears for our survival has never been able to fully relax, and the anxiety and depression that come with too much freedom but not enough security stay ever-present.
Not only might this shift in consciousness help liberate humanity, but it may be vital if we’re to survive our future creations as well. Whatever values we hold dear as a species are the ones we will imbue into the sentient robots we create. If machine learning is going to take its guidance from humanity, we need to level up humanity’s emotional maturity.
While the physical struggles of the future may indeed fall by the wayside amid abundance, the world is unlikely to become mundane. Instead, it will become a vibrant culture in which each individual is engaged in the most important struggle of all, the one that affects every one of us: the challenge to find inner peace, to find fulfillment, to build meaningful relationships, and ultimately, to find ourselves.
Image Credit: goffkein.pro / Shutterstock.com
#434759 To Be Ethical, AI Must Become ...
As over-hyped as artificial intelligence is—everyone’s talking about it, few fully understand it, it might leave us all unemployed but also solve all the world’s problems—its list of accomplishments is growing. AI can now write realistic-sounding text, give a debating champ a run for his money, diagnose illnesses, and generate fake human faces—among much more.
After training these systems on massive datasets, their creators essentially just let them do their thing to arrive at certain conclusions or outcomes. The problem is that more often than not, even the creators don’t know exactly why they’ve arrived at those conclusions or outcomes. There’s no easy way to trace a machine learning system’s rationale, so to speak. The further we let AI go down this opaque path, the more likely we are to end up somewhere we don’t want to be—and may not be able to come back from.
In a panel at the South by Southwest interactive festival last week titled “Ethics and AI: How to plan for the unpredictable,” experts in the field shared their thoughts on building more transparent, explainable, and accountable AI systems.
Not New, but Different
Ryan Welsh, founder and director of explainable AI startup Kyndi, pointed out that having knowledge-based systems perform advanced tasks isn’t new; he cited logistics, scheduling, and tax software as examples. What’s new is the learning component, our inability to trace how that learning occurs, and the ethical implications that could result.
“Now we have these systems that are learning from data, and we’re trying to understand why they’re arriving at certain outcomes,” Welsh said. “We’ve never actually had this broad society discussion about ethics in those scenarios.”
Rather than continuing to build AIs with opaque inner workings, engineers must start focusing on explainability, which Welsh broke down into three subcategories. Transparency and interpretability come first, and refer to being able to find the units of high influence in a machine learning network, as well as the weights of those units and how they map to specific data and outputs.
Then there’s provenance: knowing where something comes from. In an ideal scenario, for example, OpenAI’s new text generator would be able to generate citations in its text that reference academic (and human-created) papers or studies.
Explainability itself is the highest and final bar and refers to a system’s ability to explain itself in natural language to the average user by being able to say, “I generated this output because x, y, z.”
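As a concrete taste of the first rung, transparency and interpretability, the sketch below computes a gradient-based saliency score for each input feature of a small network. The model here is a throwaway placeholder, not any system discussed on the panel.

```python
# A minimal gradient-based saliency sketch: score each input feature by
# how strongly it influences the model's chosen output. The tiny model
# is a placeholder, not any system discussed at the panel.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 3))
model.eval()

x = torch.randn(1, 10, requires_grad=True)  # one example, 10 features
logits = model(x)
logits[0, logits.argmax()].backward()       # gradient of the top class

saliency = x.grad.abs().squeeze()           # per-feature influence scores
print("Most influential feature:", saliency.argmax().item())
```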
“Humans are unique in our ability and our desire to ask why,” said Josh Marcuse, executive director of the Defense Innovation Board, which advises Department of Defense senior leaders on innovation. “The reason we want explanations from people is so we can understand their belief system and see if we agree with it and want to continue to work with them.”
Similarly, we need to have the ability to interrogate AIs.
Two Types of Thinking
Welsh explained that one big barrier standing in the way of explainability is the tension between the deep learning community and the symbolic AI community, which see themselves as two different paradigms and historically haven’t collaborated much.
Symbolic or classical AI focuses on concepts and rules, while deep learning is centered around perceptions. In human thought this is the difference between, for example, deciding to pass a soccer ball to a teammate who is open (you make the decision because conceptually you know that only open players can receive passes), and registering that the ball is at your feet when someone else passes it to you (you’re taking in information without making a decision about it).
“Symbolic AI has abstractions and representation based on logic that’s more humanly comprehensible,” Welsh said. To truly mimic human thinking, AI needs to be able to both perceive information and conceptualize it. An example of perception (deep learning) in an AI is recognizing numbers within an image, while conceptualization (symbolic learning) would give those numbers a hierarchical order and extract rules from the hierarchy (4 is greater than 3, and 5 is greater than 4, therefore 5 is also greater than 3).
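The split can be made concrete with a toy pipeline: a perception step that reports which digits it saw, and a symbolic step that applies an explicit, inspectable ordering rule to them. The perception model is stubbed out here to keep the sketch self-contained.

```python
# A toy neuro-symbolic pipeline: perception recognizes digits, then a
# symbolic layer applies a transitive "greater than" rule to them.

def perceive_digits(image) -> list[int]:
    """Stand-in for a deep learning model that reads digits off an image."""
    return [4, 3, 5]  # pretend the network saw these

def greater(a: int, b: int) -> bool:
    """Symbolic rule: an explicit, inspectable ordering relation."""
    return a > b

a, b, c = perceive_digits(image=None)
# Transitivity: from 5 > 4 and 4 > 3, the system can *derive* 5 > 3
# without ever perceiving that comparison directly.
if greater(c, a) and greater(a, b):
    print(f"Derived by rule: {c} > {b}")
```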
Explainability comes in when the system can say, “I saw a, b, and c, and based on that decided x, y, or z.” DeepMind and others have recently published papers emphasizing the need to fuse the two paradigms together.
Implications Across Industries
One of the most prominent fields where AI ethics will come into play, and where the transparency and accountability of AI systems will be crucial, is defense. Marcuse said, “We’re accountable beings, and we’re responsible for the choices we make. Bringing in tech or AI to a battlefield doesn’t strip away that meaning and accountability.”
In fact, he added, rather than worrying about how AI might degrade human values, people should be asking how the tech could be used to help us make better moral choices.
It’s also important not to conflate AI with autonomy—a worst-case scenario that springs to mind is an intelligent destructive machine on a rampage. But in fact, Marcuse said, in the defense space, “We have autonomous systems today that don’t rely on AI, and most of the AI systems we’re contemplating won’t be autonomous.”
The US Department of Defense released its 2018 artificial intelligence strategy last month. It includes developing a robust and transparent set of principles for defense AI, investing in research and development for AI that’s reliable and secure, continuing to fund research in explainability, advocating for a global set of military AI guidelines, and finding ways to use AI to reduce the risk of civilian casualties and other collateral damage.
Though these were designed with defense-specific aims in mind, Marcuse said, their implications extend across industries. “The defense community thinks of their problems as being unique, that no one deals with the stakes and complexity we deal with. That’s just wrong,” he said. Making high-stakes decisions with technology is widespread; safety-critical systems are key to aviation, medicine, and self-driving cars, to name a few.
Marcuse believes the Department of Defense can invest in AI safety in a way that has far-reaching benefits. “We all depend on technology to keep us alive and safe, and no one wants machines to harm us,” he said.
A Creation Superior to Its Creator
That said, we’ve come to expect technology to meet our needs in just the way we want, all the time—servers must never be down, GPS had better not take us on a longer route, Google must always produce the answer we’re looking for.
With AI, though, our expectations of perfection may be less reasonable.
“Right now we’re holding machines to superhuman standards,” Marcuse said. “We expect them to be perfect and infallible.” Take self-driving cars. They’re conceived, built, and programmed by people, and people as a whole generally aren’t great drivers—just look at traffic accident death rates to confirm that. But the few times self-driving cars have had fatal accidents, there’s been an ensuing uproar and backlash against the industry, as well as talk of implementing more restrictive regulations.
This can be extrapolated to ethics more generally. We as humans have the ability to explain our decisions, but many of us aren’t very good at doing so. As Marcuse put it, “People are emotional, they confabulate, they lie, they’re full of unconscious motivations. They don’t pass the explainability test.”
Why, then, should explainability be the standard for AI?
Even if humans aren’t good at explaining our choices, at least we can try, and we can answer questions that probe at our decision-making process. A deep learning system can’t do this yet, so working towards being able to identify which input data the systems are triggering on to make decisions—even if the decisions and the process aren’t perfect—is the direction we need to head.
Image Credit: a-image / Shutterstock.com