
#434792 Extending Human Longevity With ...

Lizards can regrow entire limbs. Flatworms, starfish, and sea cucumbers regrow entire bodies. Sharks constantly replace lost teeth, often growing over 20,000 teeth throughout their lifetimes. How can we translate these near-superpowers to humans?

The answer: through the cutting-edge innovations of regenerative medicine.

While big data and artificial intelligence transform how we practice medicine and invent new treatments, regenerative medicine is about replenishing, replacing, and rejuvenating our physical bodies.

In Part 5 of this blog series on Longevity and Vitality, I detail three of the regenerative technologies working together to fully augment our vital human organs.

Replenish: Stem cells, the regenerative engine of the body
Replace: Organ regeneration and bioprinting
Rejuvenate: Young blood and parabiosis

Let’s dive in.

Replenish: Stem Cells – The Regenerative Engine of the Body
Stem cells are undifferentiated cells that can transform into specialized cell types such as heart, nerve, liver, lung, and skin cells, and can also divide to produce more stem cells.

In a child or young adult, these stem cells are in large supply, acting as a built-in repair system. They are often summoned to the site of damage or inflammation to repair and restore normal function.

But as we age, our supply of stem cells diminishes by as much as 100- to 10,000-fold in different tissues and organs. In addition, stem cells undergo genetic mutations that reduce their quality and effectiveness at renovating and repairing the body.

Imagine your stem cells as a team of repairmen in your newly constructed mansion. When the mansion is new and the repairmen are young, they can fix everything perfectly. But as the repairmen age and reduce in number, your mansion eventually goes into disrepair and finally crumbles.

What if you could restore and rejuvenate your stem cell population?

One option to accomplish this restoration and rejuvenation is to extract and concentrate your own autologous adult stem cells from places like your adipose (or fat) tissue or bone marrow.

These stem cells, however, are fewer in number and (depending on your age) have accumulated mutations in their original ‘software code.’ Many scientists and physicians now prefer an alternative source: stem cells obtained from the placenta or umbilical cord, the leftovers of birth.

These stem cells, available in large supply and expressing the undamaged software of a newborn, can be injected into joints or administered intravenously to rejuvenate and revitalize.

Think of these stem cells as chemical factories generating vital growth factors that can help to reduce inflammation, fight autoimmune disease, increase muscle mass, repair joints, and even revitalize skin and grow hair.

Over the last decade, the number of publications per year on stem cell-related research has increased 40-fold, and the stem cell market is projected to reach $297 billion by 2022.

Rising research and development initiatives to develop therapeutic options for chronic diseases and growing demand for regenerative treatment options are the most significant drivers of this budding industry.

Biologists led by Kohji Nishida at Osaka University in Japan have discovered a new way to nurture and grow the tissues that make up the human eyeball. The scientists are able to grow retinas, corneas, the eye’s lens, and more, using only a small sample of adult skin.

In a Stanford study, seven of 18 stroke victims who agreed to stem cell treatments showed remarkable motor function improvements. This treatment could work for other neurodegenerative conditions such as Alzheimer’s, Parkinson’s, and ALS.

Doctors from the USC Neurorestoration Center and Keck Medicine of USC injected stem cells into the damaged cervical spine of a recently paralyzed 21-year-old man. Three months later, he showed dramatic improvement in sensation and movement of both arms.

In 2019, doctors in the U.K. cured a patient with HIV for the second time ever thanks to the efficacy of stem cells. After giving the cancer patient (who also had HIV) an allogeneic haematopoietic (i.e. blood) stem cell treatment for his Hodgkin’s lymphoma, the patient went into long-term HIV remission—18 months and counting at the time of the study’s publication.

Replace: Organ Regeneration and 3D Printing
Every 10 minutes, someone is added to the US organ transplant waiting list, totaling over 113,000 people waiting for replacement organs as of January 2019.

Countless more people in need of ‘spare parts’ never make it onto the waiting list. And on average, 20 people die each day while waiting for a transplant.

An estimated 35 percent of all US deaths (~900,000 people) could be prevented or delayed with access to organ replacements.

The excessive demand for donated organs will only intensify as technologies like self-driving cars make the world safer, since many donated organs come from victims of auto and motorcycle accidents. Safer vehicles mean fewer accidents and fewer donations.

Clearly, replacement and regenerative medicine represent a massive opportunity.

Organ Entrepreneurs
Enter United Therapeutics CEO, Dr. Martine Rothblatt. A one-time aerospace entrepreneur (she was the founder of Sirius Satellite Radio), Rothblatt changed careers in the 1990s after her daughter developed a rare lung disease.

Her moonshot today is to create an industry of replacement organs. With an initial focus on diseases of the lung, Rothblatt set out to create replacement lungs. To accomplish this goal, her company United Therapeutics has pursued a number of technologies in parallel.

3D Printing Lungs
In 2017, United teamed up with one of the world’s largest 3D printing companies, 3D Systems, to build a collagen bioprinter and is paying another company, 3Scan, to slice up lungs and create detailed maps of their interior.

This 3D Systems bioprinter uses a method called stereolithography. A UV laser flickers through a shallow pool of collagen doped with photosensitive molecules. Wherever the laser lingers, the collagen cures and becomes solid.

Gradually, the object being printed is lowered and new layers are added. The printer can currently lay down collagen at a resolution of around 20 micrometers, but it will need to reach a resolution of about one micrometer to make the lung functional.

Once a collagen lung scaffold has been printed, the next step is to infuse it with human cells, a process called recellularization.

The goal here is to use stem cells that grow on scaffolding and differentiate, ultimately providing the proper functionality. Early evidence indicates this approach can work.

In 2018, Harvard University experimental surgeon Harald Ott reported that he pumped billions of human cells (from umbilical cords and diced lungs) into a pig lung stripped of its own cells. When Ott’s team reconnected it to a pig’s circulation, the resulting organ showed rudimentary function.

Humanizing Pig Lungs
Another of Rothblatt’s organ manufacturing strategies is called xenotransplantation, the idea of transplanting an animal’s organs into humans who need a replacement.

Given that adult pig organs are similar in size and shape to those of humans, United Therapeutics has focused on genetically engineering pigs to allow humans to use their organs. “It’s actually not rocket science,” said Rothblatt in her 2015 TED talk. “It’s editing one gene after another.”

To accomplish this goal, United Therapeutics made a series of investments in companies such as Revivicor Inc. and Synthetic Genomics Inc., and signed large funding agreements with the University of Maryland, University of Alabama, and New York Presbyterian/Columbia University Medical Center to create xenotransplantation programs for new hearts, kidneys, and lungs, respectively. Rothblatt hopes to see human translation in three to four years.

In preparation for that day, United Therapeutics owns a 132-acre property in Research Triangle Park and built a 275,000-square-foot medical laboratory that will ultimately have the capability to annually produce up to 1,000 sets of healthy pig lungs—known as xenolungs—from genetically engineered pigs.

Lung Ex Vivo Perfusion Systems
Beyond 3D printing and genetically engineering pig lungs, Rothblatt has already begun implementing a third near-term approach to improve the supply of lungs across the US.

Only about 30 percent of potential donor lungs meet transplant criteria in the first place, and of those, only about 85 percent are usable once they arrive at the surgery center. As a result, nearly 75 percent of possible lungs never make it to a recipient in need.
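
A quick back-of-the-envelope check of those figures (a sketch; the 30 and 85 percent rates are the ones quoted above):

```python
# Back-of-the-envelope check on the donor-lung attrition figures above.
meets_criteria = 0.30     # share of potential donor lungs meeting transplant criteria
usable_on_arrival = 0.85  # share of those still usable at the surgery center

reaches_recipient = meets_criteria * usable_on_arrival
never_makes_it = 1 - reaches_recipient

print(f"Reach a recipient: {reaches_recipient:.1%}")  # 25.5%
print(f"Never make it:     {never_makes_it:.1%}")     # 74.5%
```

Multiplying the two survival rates leaves only about a quarter of lungs reaching a recipient, which is where the "nearly 75 percent" figure comes from.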

What if these lungs could be rejuvenated? This concept informs Dr. Rothblatt’s next approach.

In 2016, United Therapeutics invested $41.8 million in TransMedics Inc., an Andover, Massachusetts company that develops ex vivo perfusion systems for donor lungs, hearts, and kidneys.

The XVIVO Perfusion System takes marginal-quality lungs that initially failed to meet transplantation standard-of-care criteria and perfuses and ventilates them at normothermic conditions, providing an opportunity for surgeons to reassess transplant suitability.

Rejuvenate: Young Blood and Parabiosis
In HBO’s parody of the Bay Area tech community, Silicon Valley, one of the episodes (Season 4, Episode 5) is named “The Blood Boy.”

In this installment, tech billionaire Gavin Belson (Matt Ross) is meeting with Richard Hendricks (Thomas Middleditch) and his team, speaking about the future of the decentralized internet. A young, muscled twenty-something disrupts the meeting when he rolls in a transfusion stand and silently hooks an intravenous connection between himself and Belson.

Belson then introduces the newcomer as his “transfusion associate” and begins to explain the science of parabiosis: “Regular transfusions of the blood of a younger physically fit donor can significantly retard the aging process.”

While the sitcom is fiction, the underlying science has merit, and the scenario portrayed in the episode is already happening today.

On the first point, research at Stanford and Harvard has demonstrated that older animals, when transfused with the blood of young animals, experience regeneration across many tissues and organs.

The opposite is also true: young animals, when transfused with the blood of older animals, experience accelerated aging. But capitalizing on this virtual fountain of youth has been tricky.

Ambrosia
One company, a San Francisco-based startup called Ambrosia, recently commenced a trial on parabiosis. Their protocol is simple: Healthy participants aged 35 and older get a transfusion of blood plasma from donors under 25, and researchers monitor their blood over the next two years for molecular indicators of health and aging.

Ambrosia’s founder Jesse Karmazin became interested in launching a company around parabiosis after seeing impressive data from animals and studies conducted abroad in humans: In one trial after another, subjects experience a reversal of aging symptoms across every major organ system. “The effects seem to be almost permanent,” he said. “It’s almost like there’s a resetting of gene expression.”

Infusing your own cord blood stem cells as you age may have tremendous longevity benefits. However, following an FDA press release in February 2019, Ambrosia halted its consumer-facing treatment after several months of operation.

Understandably, the FDA raised concerns about the practice of parabiosis because to date, there is a marked lack of clinical data to support the treatment’s effectiveness.

Elevian
On the other end of the reputability spectrum is a startup called Elevian, spun out of Harvard University. Elevian is approaching longevity with a careful, scientifically validated strategy. (Full Disclosure: I am both an advisor to and investor in Elevian.)

CEO Mark Allen, MD, is joined by a dozen MDs and PhDs out of Harvard. Elevian’s scientific founders started the company after identifying specific circulating factors that may be responsible for the “young blood” effect.

One example: A naturally occurring molecule known as “growth differentiation factor 11,” or GDF11, when injected into aged mice, reproduces many of the regenerative effects of young blood, regenerating heart, brain, muscles, lungs, and kidneys.

More specifically, GDF11 supplementation reduces age-related cardiac hypertrophy, accelerates skeletal muscle repair, improves exercise capacity, improves brain function and cerebral blood flow, and improves metabolism.

Elevian is developing a number of therapeutics that regulate GDF11 and other circulating factors. The goal is to restore the body’s natural regenerative capacity, which Elevian believes can address some of the root causes of age-associated disease, with the promise of reversing or preventing many aging-related conditions and extending the healthy lifespan.

Conclusion
In 1992, futurist Leland Kaiser coined the term “regenerative medicine”:

“A new branch of medicine will develop that attempts to change the course of chronic disease and in many instances will regenerate tired and failing organ systems.”

Since then, the powerful regenerative medicine industry has grown exponentially, and this rapid growth is anticipated to continue.

A dramatic extension of the human healthspan is just over the horizon. Soon, we’ll all have the regenerative superpowers previously relegated to a handful of animals and comic books.

What new opportunities open up when anyone, anywhere, at any time can regenerate, replenish, and replace entire organs and metabolic systems on command?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level.

Image Credit: Giovanni Cancemi / Shutterstock.com

Posted in Human Robots

#434786 AI Performed Like a Human on a Gestalt ...

Dr. Been Kim wants to rip open the black box of deep learning.

A senior researcher at Google Brain, Kim specializes in a sort of AI psychology. Like cognitive psychologists before her, she develops various ways to probe the alien minds of artificial neural networks (ANNs), digging into their gory details to better understand the models and their responses to inputs.

The more interpretable ANNs are, the thinking goes, the easier it is to reveal potential flaws in their reasoning. And if we understand when or why our systems choke, we’ll know when not to use them—a foundation for building responsible AI.

There are already several ways to tap into ANN reasoning, but Kim’s inspiration for unraveling the AI black box came from an entirely different field: cognitive psychology. The field aims to discover fundamental rules of how the human mind—essentially also a tantalizing black box—operates, Kim wrote with her colleagues.

In a new paper uploaded to the pre-publication server arXiv, the team described a way to essentially perform a human cognitive test on ANNs. The test probes how we automatically complete gaps in what we see so that fragments form entire objects—for example, perceiving a circle from a bunch of loose dots arranged along a clock face. Psychologists dub this the “law of completion,” a highly influential idea that led to explanations of how our minds generalize data into concepts.

Because deep neural networks in machine vision loosely mimic the structure and connections of the visual cortex, the authors naturally asked: do ANNs also exhibit the law of completion? And what does that tell us about how an AI thinks?

Enter the Germans
The law of completion is part of a series of ideas from Gestalt psychology. Back in the 1920s, long before the advent of modern neuroscience, a group of German experimental psychologists asked: in this chaotic, flashy, unpredictable world, how do we piece together input in a way that leads to meaningful perceptions?

The result is a group of principles known together as the Gestalt effect: that the mind self-organizes to form a global whole. In the more famous words of Gestalt psychologist Kurt Koffka, our perception forms a whole that’s “something else than the sum of its parts.” Not greater than; just different.

Although the theory has its critics, subsequent studies in humans and animals suggest that the law of completion happens on both the cognitive and neuroanatomical level.

Take a look at the drawing below. You immediately “see” a shape that’s actually the negative: a triangle or a square (A and B). Or you further perceive a 3D ball (C), or a snake-like squiggle (D). Your mind fills in blank spots, so that the final perception is more than just the black shapes you’re explicitly given.

Image Credit: Wikimedia Commons contributors, the free media repository.

Neuroscientists now think that the effect comes from how our visual system processes information. Arranged in multiple layers and columns, lower-level neurons—those first to wrangle the data—tend to extract simpler features such as lines or angles. In Gestalt speak, they “see” the parts.

Then, layer by layer, perception becomes more abstract, until higher levels of the visual system directly interpret faces or objects—or things that don’t really exist. That is, the “whole” emerges.

The Experiment Setup
Inspired by these classical experiments, Kim and team developed a protocol to test the Gestalt effect on two feed-forward ANNs: one simple; the other, dubbed “Inception V3,” far more complex and widely used in the machine vision community.

The main idea is similar to the triangle drawings above. The team generated three datasets. The first shows complete, ordinary triangles. The second, the “illusory” set, shows triangles with the edges removed but the corners intact; thanks to the Gestalt effect, these generally still look like triangles to us humans. The third set likewise shows only incomplete triangle corners, but here the corners are randomly rotated so that we can no longer imagine a line connecting them—hence, no more triangle.

To generate a dataset large enough to tease out small effects, the authors changed the background color, image rotation, and other aspects of the dataset. In all, they produced nearly 1,000 images to test their ANNs on.

“At a high level, we compare an ANN’s activation similarities between the three sets of stimuli,” the authors explained. The process is two steps: first, train the AI on complete triangles. Second, test them on the datasets. If the response is more similar between the illusory set and the complete triangle—rather than the randomly rotated set—it should suggest a sort of Gestalt closure effect in the network.
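
The comparison step might look something like the sketch below. This is illustrative only: the paper's actual networks, stimuli, and similarity metric differ, and the cosine similarity measure and toy activation vectors here are assumptions chosen for demonstration.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two activation vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def closure_score(act_complete, act_illusory, act_rotated):
    """A positive score hints at a closure-like effect: the network's
    response to illusory triangles sits closer to its response to
    complete triangles than the rotated-corner control's response does."""
    return (cosine_sim(act_complete, act_illusory)
            - cosine_sim(act_complete, act_rotated))

# Toy stand-ins for a layer's mean activation over each stimulus set.
rng = np.random.default_rng(0)
act_complete = rng.normal(size=128)
act_illusory = act_complete + 0.1 * rng.normal(size=128)  # similar response
act_rotated = rng.normal(size=128)                        # unrelated response

score = closure_score(act_complete, act_illusory, act_rotated)
print(f"closure score: {score:.2f}")
```

A score near zero would suggest the network treats illusory triangles no differently than rotated corners; a clearly positive score is the closure-like signature the team was looking for.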

Machine Gestalt
Right off the bat, the team got their answer: yes, ANNs do seem to exhibit the law of closure.

Networks trained on natural images classified the illusory set as triangles more readily than networks with randomized connection weights or networks trained on white noise did.

When the team dug into the “why,” things got more interesting. The ability to complete an image correlated with the network’s ability to generalize.

Humans subconsciously do this constantly: anything with a handle made out of ceramic, regardless of shape, could easily be a mug. ANNs still struggle to grasp common features—clues that immediately tell us “hey, that’s a mug!” But when they do, it sometimes allows the networks to better generalize.

“What we observe here is that a network that is able to generalize exhibits…more of the closure effect [emphasis theirs], hinting that the closure effect reflects something beyond simply learning features,” the team wrote.

What’s more, remarkably similar to the visual cortex, “higher” levels of the ANNs showed more of the closure effect than lower layers, and—perhaps unsurprisingly—the more layers a network had, the more it exhibited the closure effect.

As the networks learned, their ability to map out objects from fragments also improved. When the team messed around with the brightness and contrast of the images, the AI still learned to see the forest for the trees.

“Our findings suggest that neural networks trained with natural images do exhibit closure,” the team concluded.

AI Psychology
That’s not to say that ANNs recapitulate the human brain. As Google’s Deep Dream, an effort to coax AIs into spilling what they’re perceiving, clearly demonstrates, machine vision sees some truly weird stuff.

Then again, because they’re modeled after the human visual cortex, perhaps it’s not all that surprising that these networks also exhibit higher-level properties inherent to how we process information.

But to Kim and her colleagues, that’s exactly the point.

“The field of psychology has developed useful tools and insights to study human brains—tools that we may be able to borrow to analyze artificial neural networks,” they wrote.

By tweaking these tools to better analyze machine minds, the authors were able to gain insight on how similarly or differently they see the world from us. And that’s the crux: the point isn’t to say that ANNs perceive the world sort of, kind of, maybe similar to humans. It’s to tap into a wealth of cognitive psychology tools, established over decades using human minds, to probe that of ANNs.

“The work here is just one step along a much longer path,” the authors conclude.

“Understanding where humans and neural networks differ will be helpful for research on interpretability by enlightening the fundamental differences between the two interesting species.”

Image Credit: Popova Alena / Shutterstock.com


#434784 Killer robots already exist, and ...

Humans will always make the final decision on whether armed robots can shoot, according to a statement by the US Department of Defense. Their clarification comes amid fears about a new advanced targeting system, known as ATLAS, that will use artificial intelligence in combat vehicles to target and execute threats. While the public may feel uneasy about so-called “killer robots”, the concept is nothing new – machine-gun wielding “SWORDS” robots were deployed in Iraq as early as 2007.


#434781 What Would It Mean for AI to Become ...

As artificial intelligence systems take on more tasks and solve more problems, it’s hard to say which is rising faster: our interest in them or our fear of them. Futurist Ray Kurzweil famously predicted that “By 2029, computers will have emotional intelligence and be convincing as people.”

We don’t know how accurate this prediction will turn out to be. Even if it takes more than 10 years, though, is it really possible for machines to become conscious? If the machines Kurzweil describes say they’re conscious, does that mean they actually are?

Perhaps a more relevant question at this juncture is: what is consciousness, and how do we replicate it if we don’t understand it?

In a panel discussion at South By Southwest titled “How AI Will Design the Human Future,” experts from academia and industry discussed these questions and more.

Wait, What Is AI?
Most of AI’s recent feats—diagnosing illnesses, participating in debate, writing realistic text—involve machine learning, which uses statistics to find patterns in large datasets then uses those patterns to make predictions. However, “AI” has been used to refer to everything from basic software automation and algorithms to advanced machine learning and deep learning.
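
That "find patterns, then predict" loop can be illustrated in a few lines. This is a minimal sketch: the data and the hidden y ≈ 3x + 1 relationship are invented purely for illustration.

```python
import numpy as np

# Minimal machine learning loop: use statistics to fit a pattern in data,
# then use the fitted pattern to make predictions on unseen inputs.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=200)
y = 3.0 * x + 1.0 + rng.normal(scale=0.5, size=200)  # hidden pattern: y ≈ 3x + 1

slope, intercept = np.polyfit(x, y, deg=1)  # "learning": a least-squares fit

def predict(new_x):
    """Apply the learned pattern to a new input."""
    return slope * new_x + intercept

print(round(predict(5.0), 1))  # close to 3*5 + 1 = 16
```

Modern deep learning swaps the straight line for networks with millions of parameters, but the basic shape of the process is the same.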

“The term ‘artificial intelligence’ is thrown around constantly and often incorrectly,” said Jennifer Strong, a reporter at the Wall Street Journal and host of the podcast “The Future of Everything.” Indeed, one study found that 40 percent of European companies that claim to be working on or using AI don’t actually use it at all.

Dr. Peter Stone, associate chair of computer science at UT Austin, was the study panel chair on the 2016 One Hundred Year Study on Artificial Intelligence (or AI100) report. Based out of Stanford University, AI100 is studying and anticipating how AI will impact our work, our cities, and our lives.

“One of the first things we had to do was define AI,” Stone said. They defined it as a collection of different technologies, inspired by the human brain, that can perceive their surrounding environment and figure out what actions to take given those inputs.

Modeling on the Unknown
Here’s the crazy thing about that definition (and about AI itself): we’re essentially trying to re-create the abilities of the human brain without having anything close to a thorough understanding of how the human brain works.

“We’re starting to pair our brains with computers, but brains don’t understand computers and computers don’t understand brains,” Stone said. Dr. Heather Berlin, cognitive neuroscientist and professor of psychiatry at the Icahn School of Medicine at Mount Sinai, agreed. “It’s still one of the greatest mysteries how this three-pound piece of matter can give us all our subjective experiences, thoughts, and emotions,” she said.

This isn’t to say we’re not making progress; there have been significant neuroscience breakthroughs in recent years. “This has been the stuff of science fiction for a long time, but now there’s active work being done in this area,” said Amir Husain, CEO and founder of Austin-based AI company Spark Cognition.

Advances in brain-machine interfaces show just how much more we understand the brain now than we did even a few years ago. Neural implants are being used to restore communication or movement capabilities in people who’ve been impaired by injury or illness. Scientists have been able to transfer signals from the brain to prosthetic limbs and stimulate specific circuits in the brain to treat conditions like Parkinson’s, PTSD, and depression.

But much of the brain’s inner workings remain a deep, dark mystery—one that will have to be further solved if we’re ever to get from narrow AI, which refers to systems that can perform specific tasks and is where the technology stands today, to artificial general intelligence: systems that possess the same intelligence level and learning capabilities as humans.

The biggest question that arises here, and one that’s become a popular theme across stories and films, is this: if machines achieve human-level general intelligence, does that also mean they’d be conscious?

Wait, What Is Consciousness?
As valuable as the knowledge we’ve accumulated about the brain is, it seems like nothing more than a collection of disparate facts when we try to put it all together to understand consciousness.

“If you can replace one neuron with a silicon chip that can do the same function, then replace another neuron, and another—at what point are you still you?” Berlin asked. “These systems will be able to pass the Turing test, so we’re going to need another concept of how to measure consciousness.”

Is consciousness a measurable phenomenon, though? Rather than progressing by degrees or moving through some gray area, isn’t it pretty black and white—a being is either conscious or it isn’t?

This may be an outmoded way of thinking, according to Berlin. “It used to be that only philosophers could study consciousness, but now we can study it from a scientific perspective,” she said. “We can measure changes in neural pathways. It’s subjective, but depends on reportability.”

She described three levels of consciousness: pure subjective experience (“Look, the sky is blue”), awareness of one’s own subjective experience (“Oh, it’s me that’s seeing the blue sky”), and relating one subjective experience to another (“The blue sky reminds me of a blue ocean”).

“These subjective states exist all the way down the animal kingdom. As humans we have a sense of self that gives us another depth to that experience, but it’s not necessary for pure sensation,” Berlin said.

Husain took this definition a few steps farther. “It’s this self-awareness, this idea that I exist separate from everything else and that I can model myself,” he said. “Human brains have a wonderful simulator. They can propose a course of action virtually, in their minds, and see how things play out. The ability to include yourself as an actor means you’re running a computation on the idea of yourself.”

Most of the decisions we make involve envisioning different outcomes, thinking about how each outcome would affect us, and choosing which outcome we’d most prefer.

“Complex tasks you want to achieve in the world are tied to your ability to foresee the future, at least based on some mental model,” Husain said. “With that view, I as an AI practitioner don’t see a problem implementing that type of consciousness.”

Moving Forward Cautiously (But Not Too Cautiously)
To be clear, we’re nowhere near machines achieving artificial general intelligence or consciousness, and whether a “conscious machine” is possible—not to mention necessary or desirable—is still very much up for debate.

As machine intelligence continues to advance, though, we’ll need to walk the line between progress and risk management carefully.

Improving the transparency and explainability of AI systems is one crucial goal AI developers and researchers are zeroing in on. Especially in applications that could mean the difference between life and death, AI shouldn’t advance without people being able to trace how it’s making decisions and reaching conclusions.

Medicine is a prime example. “There are already advances that could save lives, but they’re not being used because they’re not trusted by doctors and nurses,” said Stone. “We need to make sure there’s transparency.” Demanding too much transparency would also be a mistake, though, because it would hinder the development of systems that could at best save lives and at worst improve efficiency and free up doctors to have more face time with patients.

Similarly, self-driving cars have great potential to reduce traffic fatalities. But even though human drivers cause thousands of deadly crashes every day, we’re terrified by the idea of self-driving cars that are anything less than perfect. “If we only accept autonomous cars when there’s zero probability of an accident, then we will never accept them,” Stone said. “Yet we give 16-year-olds the chance to take a road test with no idea what’s going on in their brains.”

This brings us back to the fact that, in building tech modeled after the human brain—which has evolved over millions of years—we’re working towards an end whose means we don’t fully comprehend, be it something as basic as choosing when to brake or accelerate or something as complex as measuring consciousness.

“We shouldn’t charge ahead and do things just because we can,” Stone said. “The technology can be very powerful, which is exciting, but we have to consider its implications.”

Image Credit: agsandrew / Shutterstock.com


#434759 To Be Ethical, AI Must Become ...

As over-hyped as artificial intelligence is—everyone’s talking about it, few fully understand it, it might leave us all unemployed but also solve all the world’s problems—its list of accomplishments is growing. AI can now write realistic-sounding text, give a debating champ a run for his money, diagnose illnesses, and generate fake human faces—among much more.

After training these systems on massive datasets, their creators essentially just let them do their thing to arrive at certain conclusions or outcomes. The problem is that more often than not, even the creators don’t know exactly why they’ve arrived at those conclusions or outcomes. There’s no easy way to trace a machine learning system’s rationale, so to speak. The further we let AI go down this opaque path, the more likely we are to end up somewhere we don’t want to be—and may not be able to come back from.

In a panel at the South by Southwest (SXSW) Interactive festival last week titled “Ethics and AI: How to plan for the unpredictable,” experts in the field shared their thoughts on building more transparent, explainable, and accountable AI systems.

Not New, but Different
Ryan Welsh, founder and director of explainable AI startup Kyndi, pointed out that having knowledge-based systems perform advanced tasks isn’t new; he cited logistical, scheduling, and tax software as examples. What’s new is the learning component, our inability to trace how that learning occurs, and the ethical implications that could result.

“Now we have these systems that are learning from data, and we’re trying to understand why they’re arriving at certain outcomes,” Welsh said. “We’ve never actually had this broad society discussion about ethics in those scenarios.”

Rather than continuing to build AIs with opaque inner workings, engineers must start focusing on explainability, which Welsh broke down into three subcategories. Transparency and interpretability come first, and refer to being able to find the units of high influence in a machine learning network, as well as the weights of those units and how they map to specific data and outputs.
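As a rough sketch of what finding “units of high influence” can mean in practice (this example is illustrative, not from the panel; the loan-scoring weights and feature names are invented), a simple linear scorer makes each unit’s influence directly inspectable:

```python
# Illustrative only: a tiny linear "network" whose per-unit influences
# are directly inspectable, in the spirit of transparency/interpretability.

def influences(weights, inputs):
    """Per-feature contribution of each input to the final score."""
    return {name: weights[name] * value for name, value in inputs.items()}

# Hypothetical loan-scoring weights and one applicant's features.
weights = {"income": 0.8, "debt": -1.5, "age": 0.1}
applicant = {"income": 3.0, "debt": 2.0, "age": 4.0}

contrib = influences(weights, applicant)
score = sum(contrib.values())

# Rank features by how strongly each one pushed this particular decision.
ranked = sorted(contrib, key=lambda k: abs(contrib[k]), reverse=True)
print(ranked)   # most influential feature first
print(round(score, 2))
```

Real deep networks don’t decompose this cleanly, which is exactly why transparency and interpretability are open research problems rather than a lookup.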

Then there’s provenance: knowing where something comes from. In an ideal scenario, for example, OpenAI’s new text generator would be able to generate citations in its text that reference academic (and human-created) papers or studies.

Explainability itself is the highest and final bar and refers to a system’s ability to explain itself in natural language to the average user by being able to say, “I generated this output because x, y, z.”
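A toy illustration of that final bar (the rules and feature names here are hypothetical, not any actual system): a rule-based classifier that reports, in plain language, which condition produced its output:

```python
# Illustrative only: a classifier that states the reason behind each
# decision, in the "I generated this output because x, y, z" style.

RULES = [
    ("deny",    lambda f: f["debt_ratio"] > 0.5, "the debt ratio exceeds 0.5"),
    ("approve", lambda f: f["income"] >= 50_000, "income is at least 50,000"),
    ("deny",    lambda f: True,                  "no approval rule matched"),
]

def decide(features):
    """Return (decision, natural-language explanation) for the first matching rule."""
    for decision, condition, reason in RULES:
        if condition(features):
            return decision, f"I decided '{decision}' because {reason}."

decision, why = decide({"debt_ratio": 0.2, "income": 60_000})
print(why)
```

Hand-written rules explain themselves for free; the hard problem is getting learned systems, whose “rules” are distributed across millions of weights, to produce explanations this legible.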

“Humans are unique in our ability and our desire to ask why,” said Josh Marcuse, executive director of the Defense Innovation Board, which advises Department of Defense senior leaders on innovation. “The reason we want explanations from people is so we can understand their belief system and see if we agree with it and want to continue to work with them.”

Similarly, we need to have the ability to interrogate AIs.

Two Types of Thinking
Welsh explained that one big barrier standing in the way of explainability is the tension between the deep learning community and the symbolic AI community, which see themselves as two different paradigms and historically haven’t collaborated much.

Symbolic or classical AI focuses on concepts and rules, while deep learning is centered around perceptions. In human thought this is the difference between, for example, deciding to pass a soccer ball to a teammate who is open (you make the decision because conceptually you know that only open players can receive passes), and registering that the ball is at your feet when someone else passes it to you (you’re taking in information without making a decision about it).

“Symbolic AI has abstractions and representation based on logic that’s more humanly comprehensible,” Welsh said. To truly mimic human thinking, AI needs to be able to both perceive information and conceptualize it. An example of perception (deep learning) in an AI is recognizing numbers within an image, while conceptualization (symbolic learning) would give those numbers a hierarchical order and extract rules from the hierarchy (4 is greater than 3, and 5 is greater than 4, therefore 5 is also greater than 3).
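The ordering example can be made concrete (a toy sketch, not any particular system’s code): a symbolic component stores known relations as facts and derives new ones by applying a transitivity rule, producing conclusions it was never shown:

```python
# Illustrative only: symbolic-style inference. From the stated facts,
# the transitivity rule derives relations that were never observed.

facts = {(4, 3), (5, 4)}   # "4 is greater than 3", "5 is greater than 4"

def transitive_closure(pairs):
    """Repeatedly apply: if a > b and b > c, then a > c."""
    closure = set(pairs)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

derived = transitive_closure(facts)
print((5, 3) in derived)   # inferred, not observed
```

A perception-only system would need to have seen 5 compared with 3 to know the answer; the symbolic rule generalizes from the structure itself.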

Explainability comes in when the system can say, “I saw a, b, and c, and based on that decided x, y, or z.” DeepMind and others have recently published papers emphasizing the need to fuse the two paradigms together.

Implications Across Industries
One of the most prominent fields where AI ethics will come into play, and where the transparency and accountability of AI systems will be crucial, is defense. Marcuse said, “We’re accountable beings, and we’re responsible for the choices we make. Bringing in tech or AI to a battlefield doesn’t strip away that meaning and accountability.”

In fact, he added, rather than worrying about how AI might degrade human values, people should be asking how the tech could be used to help us make better moral choices.

It’s also important not to conflate AI with autonomy—a worst-case scenario that springs to mind is an intelligent destructive machine on a rampage. But in fact, Marcuse said, in the defense space, “We have autonomous systems today that don’t rely on AI, and most of the AI systems we’re contemplating won’t be autonomous.”

The US Department of Defense released its 2018 artificial intelligence strategy last month. It includes developing a robust and transparent set of principles for defense AI, investing in research and development for AI that’s reliable and secure, continuing to fund research in explainability, advocating for a global set of military AI guidelines, and finding ways to use AI to reduce the risk of civilian casualties and other collateral damage.

Though these were designed with defense-specific aims in mind, Marcuse said, their implications extend across industries. “The defense community thinks of their problems as being unique, that no one deals with the stakes and complexity we deal with. That’s just wrong,” he said. Making high-stakes decisions with technology is widespread; safety-critical systems are key to aviation, medicine, and self-driving cars, to name a few.

Marcuse believes the Department of Defense can invest in AI safety in a way that has far-reaching benefits. “We all depend on technology to keep us alive and safe, and no one wants machines to harm us,” he said.

A Creation Superior to Its Creator
That said, we’ve come to expect technology to meet our needs in just the way we want, all the time—servers must never be down, GPS had better not take us on a longer route, Google must always produce the answer we’re looking for.

With AI, though, our expectations of perfection may be less reasonable.

“Right now we’re holding machines to superhuman standards,” Marcuse said. “We expect them to be perfect and infallible.” Take self-driving cars. They’re conceived of, built by, and programmed by people, and people as a whole generally aren’t great drivers—just look at traffic accident death rates to confirm that. But the few times self-driving cars have had fatal accidents, there’s been an ensuing uproar and backlash against the industry, as well as talk of implementing more restrictive regulations.

This can be extrapolated to ethics more generally. We as humans have the ability to explain our decisions, but many of us aren’t very good at doing so. As Marcuse put it, “People are emotional, they confabulate, they lie, they’re full of unconscious motivations. They don’t pass the explainability test.”

Why, then, should explainability be the standard for AI?

Even if humans aren’t good at explaining our choices, at least we can try, and we can answer questions that probe at our decision-making process. A deep learning system can’t do this yet, so working towards being able to identify which input data the systems are triggering on to make decisions—even if the decisions and the process aren’t perfect—is the direction we need to head.

Image Credit: a-image / Shutterstock.com

Posted in Human Robots