#432539 10 Amazing Things You Can Learn From ...

Hardly a day goes by without a research study or article published talking sh*t—or more precisely, talking about the gut microbiome. When it comes to cutting-edge innovations in medicine, all signs point to the microbiome. Maybe we should have listened to Hippocrates: “All disease begins in the gut.”

Your microbiome is mostly located in your gut and contains trillions of little guys and gals called microbes. If you want to optimize your health, biohack your body, make progress against chronic disease, or know which foods are right for you—almost all of this information can be found in your microbiome.

My company, Viome, offers technology to measure your microscopic organisms and their behavior at a molecular level. Think of it as the Instagram of your inner world. A snapshot of what’s happening inside your body. New research about the microbiome is changing our understanding of who we are as humans and how the human body functions.

It turns out the microbiome may be mission control for your body and mind. Your healthy microbiome is part best friend, part power converter, part engine, and part pharmacist. At Viome, we’re working to analyze these microbial functions and recommend a list of personalized food and supplements to keep these internal complex machines in a finely tuned balance.

We now have more information than ever before about what your microbiome is doing, and it’s going to help you and the rest of the world do a whole lot better. The new insights emerging from microbiome research are changing our perception of what keeps us healthy and what makes us sick. This new understanding of the microbiome activities may put an end to conflicting food advice and make fad diets a thing of the past.

What are these new insights showing us? The information is nothing short of mind-blowing. The value of your poop just got an upgrade.

Here are some of the amazing things we’ve learned from our work at Viome.

1. Was Popeye wrong? Why “health food” isn’t necessarily healthy.
Each week a new fad diet is released, discussed, and followed, with the newest “research” claiming to have found the superfood everyone should eat. But too often, the fad diet is just a regurgitation of what worked for one person and shouldn’t be followed by everyone else.

For example, we’ve been told to eat our greens and that greens and nuts are “anti-inflammatory,” but this is not always true. Spinach, bran, rhubarb, beets, nuts, and nut butters all contain oxalate. We now know that oxalate-containing foods can be harmful unless you have microbes present that can metabolize the oxalate into a harmless substance.

About 30% of Viome customers do not have the microbes to metabolize oxalates properly. In other words, “healthy foods” like spinach are actually not healthy for these people.

Looks like not everyone should follow Popeye’s food plan.

2. Aren’t foods containing “antioxidants” always good for everyone?
Just like oxalates, polyphenols in foods are usually considered very healthy, but unless you have microbes that utilize specific polyphenols, you may not get the full benefit from them. One example is a substance found in these foods called ellagic acid. We can detect whether your microbiome is metabolizing ellagic acid and converting it into urolithin A. It is only the urolithin A that has anti-inflammatory and antioxidant effects. Without the microbes to do this conversion, you will not benefit from the ellagic acid in foods.

Examples: Walnuts, raspberries, pomegranate, blackberries, pecans, and cranberries all contain ellagic acid.

We have analyzed tens of thousands of people, and only about 50% of them actually benefit from eating more foods containing ellagic acid.

3. You’re probably eating too much protein (and it may be causing inflammation).
When you think high-protein diet, you think paleo, keto, and high-performance diets.

Protein is considered good for you. It helps build muscle and provide energy—but if you eat too much, it can cause inflammation and decrease longevity.

We can analyze the activity of your microbiome to determine if you are eating too much protein that feeds protein-fermenting bacteria like Alistipes putredinis and Tannerella forsythia, and if these organisms are producing harmful substances such as ammonia, hydrogen sulfide, p-cresol, or putrescine. These substances can damage your gut lining and lead to things like leaky gut.

4. Something’s fishy. Are “healthy foods” causing heart disease?
Choline in certain foods can be converted by bacteria into a substance called trimethylamine (TMA), which is associated with heart disease once it is absorbed into your body and converted to trimethylamine N-oxide (TMAO). However, this conversion doesn’t happen in individuals without these types of bacteria in their microbiome.

We can see the TMA production pathways and many of the gammaproteobacteria that do this conversion.

What foods contain choline? Liver, salmon, chickpeas, split peas, eggs, navy beans, peanuts, and many others.

Before you decide to go full-on pescatarian or paleo, you may want to check if your microbiome is producing TMA with that salmon or steak.

5. Hold up, Iron Man. We can see inflammation from too much iron.
Minerals like iron in your food can, in certain inflammatory microbial environments, promote growth of pathogens like Escherichia, Shigella, and Salmonella.

Maybe it wasn’t just that raw chicken that gave you food poisoning, but your toxic microbiome that made you sick.

On the other hand, when you don’t have enough iron, you could become anemic, leading to weakness and shortness of breath.

So, just like Iron Man, it’s about finding your balance so that you can fly.

6. Are you anxious or stressed? Your poop will tell you.
Our gut and brain are connected via the vagus nerve. A large majority of neurotransmitters are either produced or consumed by our microbiome. In fact, some 90% of all serotonin (a feel-good neurotransmitter) is produced by your gut microbiome and not by your brain.

When you have a toxic microbiome that’s producing a large amount of toxins like hydrogen sulfide, the lining of your gut starts to deteriorate into what’s known as leaky gut. Think of leaky gut as your gut not having healthy borders or boundaries. And when this happens, all kinds of disease can emerge. When the barrier of the gut breaks down, it starts a chain reaction causing low-grade chronic inflammation—which has been identified as a potential source of depression and higher levels of anxiety, in addition to many other chronic diseases.

We’re not saying you shouldn’t meditate, but if you want to get the most out of your meditation and really reduce your stress levels, make sure you are eating the right food that promotes a healthy microbiome.

7. Your microbiome is better than Red Bull.
If you want more energy, get your microbiome back into balance.

No, you don’t need three pots of coffee to keep you going; you just need a balanced microbiome.

Your microbiome is responsible for calorie extraction, or creating energy, through pathways such as the tricarboxylic acid (TCA) cycle. Our bodies depend on the energy that our microbiome produces.

How much energy we get from our food is dependent on how efficient our microbiome is at converting the food into energy. High-performing microbiomes are excellent at converting food into energy. This is great when you are an athlete and need the extra energy, but if you don’t use up the energy it may be the source of some of those unwanted pounds.

If the microbes can’t or won’t metabolize the glucose (sugar) that you eat, it will be stored as fat. If the microbes are extracting too many calories from your food, or producing lipopolysaccharides (LPS) that cause metabolic endotoxemia, activate toll-like receptors, and drive insulin resistance, you may end up storing what you eat as fat.

Think of your microbiome as Doc Brown’s car from the future—it can take pretty much anything and turn it into fuel if it’s strong and resilient enough.

8. We can see your joint pain in your poop.
Got joint pain? Your microbiome can tell you why.

Lipopolysaccharide (LPS) is a key pro-inflammatory molecule made by some of your microbes. If your microbes are making too much LPS, it can wreak havoc on your immune system by putting it into overdrive. When your immune system goes on the warpath there is often collateral damage to your joints and other body parts.

Perhaps balancing your microbiome is a better solution than reaching for the glucosamine. Think of your microbiome as the top general of your immune army. It puts your immune system through basic training and determines when it goes to war.

Ideally, your immune system wins the quick battle and gets some rest, but sometimes if your microbiome keeps it on constant high alert it becomes a long, drawn-out war resulting in chronic inflammation and chronic diseases.

Are you really “getting older,” or is your microbiome just making you “feel” older because it keeps sending warnings to your immune system, ultimately leading to chronic pain?

Before you throw in the towel on your favorite activities, check your microbiome. And, if you have anything with “itis” in it, it’s possible that when you balance your microbiome the inflammation from your “itis” will be reduced.

9. Your gut is doing the talking for your mouth.
When you have low stomach acid, your mouth bacteria make it down to your GI tract.

Stomach acid is there to protect you from the bacteria in your mouth and the parasites and fungi that are in your food. If you don’t have enough of it, the bacteria in your mouth will invade your gut. This invasion is associated with, and a risk factor for, autoimmune disease and inflammation in the gut.

We are learning that low stomach acid is perhaps one of the major causes of chronic disease. This stomach acid is essential to kill mouth bacteria and help us digest our food.

What kinds of things cause low stomach acid? Stress and acid-suppressing drugs like Nexium, Zantac, and Prilosec.

10. Carbs can be protein precursors.
Rejoice! Perhaps carbs aren’t as bad as we thought (as long as your microbiome is up to the task). We can see whether some of the starches you eat are being made into amino acids by the microbiome.

Our microbiome makes 20% of our branched-chain amino acids (BCAAs) for us, and it will adapt to produce these vital BCAAs in almost any way it can.

Essentially, your microbiome is hooking up carbons and hydrogens into different formulations of BCAAs, depending on what you feed it. The microbiome is excellent at adapting and pivoting based on the food you feed it and the environment that it’s in.

So, good news: Carbs are protein precursors, as long as you have the right microbiome.

Stop Talking Sh*t Now
Your microbiome is a world-class entrepreneur that can take low-grade sources of food and turn them into valuable and usable energy.

You have a best friend and confidant within you that is working wonders to make sure you have energy and that all of your needs are met.

And, just like a best friend, if you take great care of your microbiome, it will take great care of you.

Given the research emerging daily about the microbiome and its importance to your quality of life, prioritizing the health of your microbiome is essential.

When you have a healthy microbiome, you’ll have a healthy life.

It’s now clear that some of the greatest insights for your health will come from your poop.

It’s time to stop talking sh*t and get your sh*t together. Your life may depend on it.

Viome can help you identify what your microbiome is actually doing. The combination of Viome’s metatranscriptomic technology and cutting-edge artificial intelligence is paving a brand new path forward for microbiome health.

Image Credit: WhiteDragon / Shutterstock.com

Posted in Human Robots

#432512 How Will Merging Minds and Machines ...

One of the most exciting and frightening outcomes of technological advancement is the potential to merge our minds with machines. If achieved, this would profoundly boost our cognitive capabilities. More importantly, however, it could be a revolution in human identity, emotion, spirituality, and self-awareness.

Brain-machine interface technology is already being developed by pioneers and researchers around the globe. It’s still early and today’s tech is fairly rudimentary, but it’s a fast-moving field, and some believe it will advance faster than generally expected. Futurist Ray Kurzweil has predicted that by the 2030s we will be able to connect our brains to the internet via nanobots that will “provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the internet, and otherwise greatly expand human intelligence.” Even if the advances are less dramatic, however, they’ll have significant implications.

How might this technology affect human consciousness? What about its implications for our sentience, our self-awareness, or our subjective experience of the illusion of self?

Consciousness can be hard to define, but a holistic definition often encompasses many of our most fundamental capacities, such as wakefulness, self-awareness, meta-cognition, and sense of agency. Beyond that, consciousness represents a spectrum of awareness, as seen across various species of animals. Even humans experience different levels of existential awareness.

From psychedelics to meditation, there are many tools we already use to alter and heighten our conscious experience, both temporarily and permanently. These tools have been said to contribute to a richer life, with the potential to bring experiences of beauty, love, inner peace, and transcendence. Relatively non-invasive, these tools show us what a seemingly minor imbalance of neurochemistry and conscious internal effort can do to the subjective experience of being human.

Taking this into account, what implications might emerging brain-machine interface technologies have on the “self”?

The Tools for Self-Transcendence
At the basic level, we are currently seeing the rise of “consciousness hackers” using techniques like non-invasive brain stimulation through EEG, nutrition, virtual reality, and ecstatic experiences to create environments for heightened consciousness and self-awareness. In Stealing Fire, Steven Kotler and Jamie Wheal explore this trillion-dollar altered-states economy and how innovators and thought leaders are “harnessing rare and controversial states of consciousness to solve critical challenges and outperform the competition.” Beyond enhanced productivity, these altered states expose our inner potential and give us a glimpse of a greater state of being.

Expanding consciousness through brain augmentation and implants could one day be just as accessible. Researchers are working on an array of neurotechnologies, ranging from simple, non-invasive electrode-based EEGs to invasive implants and techniques like optogenetics, where neurons are genetically reprogrammed to respond to pulses of light. We’ve already connected two brains via the internet, allowing the two to communicate, and future-focused startups are researching the possibilities too. With an eye toward advanced brain-machine interfaces, last year Elon Musk unveiled Neuralink, a company whose ultimate goal is to merge the human mind with AI through a “neural lace.”

Many technologists predict we will one day merge with and, more speculatively, upload our minds onto machines. Neuroscientist Kenneth Hayworth writes in Skeptic magazine, “All of today’s neuroscience models are fundamentally computational by nature, supporting the theoretical possibility of mind-uploading.” This might include connecting with other minds using digital networks or even uploading minds onto quantum computers, which can be in multiple states of computation at a given time.

In their book Evolving Ourselves, Juan Enriquez and Steve Gullans describe a world where evolution is no longer driven by natural processes. Instead, it is driven by human choices, through what they call unnatural selection and non-random mutation. With advancements in genetic engineering, we are indeed seeing evolution become an increasingly conscious process with an accelerated pace. This could one day apply to the evolution of our consciousness as well; we would be using our consciousness to expand our consciousness.

What Will It Feel Like?
We may be able to come up with predictions of the impact of these technologies on society, but we can only wonder what they will feel like subjectively.

It’s hard to imagine, for example, what our stream of consciousness will feel like when we can process thoughts and feelings 1,000 times faster, or how artificially intelligent brain implants will impact our capacity to love and hate. What will the illusion of “I” feel like when our consciousness is directly plugged into the internet? Overall, what impact will the process of merging with technology have on the subjective experience of being human?

The Evolution of Consciousness
In The Future Evolution of Consciousness, Thomas Lombardo points out, “We are a journey rather than a destination—a chapter in the evolutionary saga rather than a culmination. Just as probable, there will also be a diversification of species and types of conscious minds. It is also very likely that new psychological capacities, incomprehensible to us, will emerge as well.”

Humans are notorious for fearing the unknown. For any individual who has never experienced an altered state, be it spiritual or psychedelic-induced, it is difficult to comprehend the subjective experience of that state. It is why many refer to their first altered-state experience as “waking up,” wherein they didn’t even realize they were asleep.

Similarly, exponential neurotechnology represents the potential of a higher state of consciousness and a range of experiences that are unimaginable to our current default state.

Our capacity to think and feel is set by the boundaries of our biological brains. To transform and expand these boundaries is to transform and expand the first-hand experience of consciousness. Emerging neurotechnology may end up providing the awakening our species needs.

Image Credit: Peshkova / Shutterstock.com


#432487 Can We Make a Musical Turing Test?

As artificial intelligence advances, we’re encountering the same old questions. How much of what we consider to be fundamentally human can be reduced to an algorithm? Can we create something sufficiently advanced that people can no longer distinguish it from the work of a human? This, after all, is the idea behind the Turing Test, which has yet to be passed.

At first glance, you might think music is beyond the realm of algorithms. Birds can sing, and people can compose symphonies. Music is evocative; it makes us feel. Very often, our intense personal and emotional attachments to music are because it reminds us of our shared humanity. We are told that creative jobs are the least likely to be automated. Creativity seems fundamentally human.

But I think above all, we view it as reductionist sacrilege: to dissect beautiful things. “If you try to strangle a skylark / to cut it up, see how it works / you will stop its heart from beating / you will stop its mouth from singing.” A human musician wrote that; a machine might be able to string words together that are happy or sad; it might even be able to conjure up a decent metaphor from the depths of some neural network—but could it understand humanity enough to produce art that speaks to humans?

Then, of course, there’s the other side of the debate. Music, after all, has a deeply mathematical structure; you can train a machine to produce harmonics. “In the teachings of Pythagoras and his followers, music was inseparable from numbers, which were thought to be the key to the whole spiritual and physical universe,” according to Grout in A History of Western Music. You might argue that the process of musical composition cannot be reduced to a simple algorithm, yet musicians have often done so. Mozart, with his “Dice Music,” used rolls of dice to decide how to order precomposed musical fragments: creativity through an 18th-century random number generator. Algorithmic music goes back a very long way, with the first papers on the subject dating from the 1960s.
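Mozart’s dice game is simple enough to sketch in a few lines of Python. This is a toy illustration: the fragment names and table layout below are invented placeholders rather than Mozart’s actual measure tables, but the mechanism is the same. For each of 16 bars, the sum of two dice selects one precomposed fragment.

```python
import random

# Toy version of a musical dice game: each bar of a 16-bar piece is chosen
# from a lookup table indexed by the sum of two dice (2 through 12).
# The fragment labels are placeholders, not real measures.
MEASURE_TABLE = {
    bar: {total: f"fragment_{bar}_{total}" for total in range(2, 13)}
    for bar in range(16)
}

def roll_two_dice(rng):
    """Sum of two six-sided dice: 2..12, weighted toward 7."""
    return rng.randint(1, 6) + rng.randint(1, 6)

def compose_minuet(seed=None):
    """Assemble a 16-bar piece by dice roll, one fragment per bar."""
    rng = random.Random(seed)
    return [MEASURE_TABLE[bar][roll_two_dice(rng)] for bar in range(16)]

piece = compose_minuet(seed=1791)
print(piece[:4])
```

With 11 choices per bar over 16 bars, even this trivial scheme yields on the order of 10^16 distinct pieces, which is the point: the “creativity” lives in the precomposed fragments and the constraints, not the randomness.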

Then there’s the techno-enthusiast side of the argument. iTunes has 26 million songs, easily more than a century of music. A human could never listen to and learn from them all, but a machine could. It could also memorize every note of Beethoven. Music can be converted into MIDI files, a nice, chewable data format that allows even a character-by-character neural net running on your own computer to generate music. (Seriously, even I could get this thing working.)
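To see how little machinery character-by-character generation needs, here is a sketch that substitutes a character-level Markov chain for the neural net. The note-string corpus and its encoding are made up for the example (not real MIDI data), but the generation loop is the same idea: predict the next symbol from recent context, append it, repeat.

```python
import random
from collections import defaultdict

# A made-up text encoding of music, standing in for MIDI-derived training data.
CORPUS = "CDEFGABc CDEFGABc CEGc CEGc DFAc DFAc " * 20

def train(text, order=2):
    """Map each `order`-length context to the characters observed after it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        model[text[i:i + order]].append(text[i + order])
    return model

def generate(model, length=40, order=2, seed=None):
    """Sample one character at a time, feeding the output back in as context."""
    rng = random.Random(seed)
    context = rng.choice(list(model))
    out = list(context)
    for _ in range(length - order):
        nxt = rng.choice(model.get(context, [" "]))
        out.append(nxt)
        context = "".join(out[-order:])
    return "".join(out)

model = train(CORPUS)
print(generate(model, seed=7))
```

A real system would swap the Markov table for an LSTM or transformer trained on MIDI-derived text, but the sampling loop is essentially identical.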

Indeed, generating music in the style of Bach has long been a test for AI, and you can see neural networks gradually learn to imitate classical composers while trying to avoid overfitting. When an algorithm overfits, it essentially starts copying the existing music rather than taking inspiration from it to create something new and similar: a tightrope the best human artists learn to walk. Creativity doesn’t spring from nowhere; even maverick musical geniuses have their influences.

Does a machine have to be truly ‘creative’ to produce something that someone would find valuable? To what extent would listeners’ attitudes change if they thought they were hearing a human vs. an AI composition? This all suggests a musical Turing Test. Of course, it already exists. In fact, it’s run out of Dartmouth, the school that hosted that first, seminal AI summer conference. This year, the contest is bigger than ever: alongside the PoetiX, LimeriX and LyriX competitions for poetry and lyrics, there’s a DigiKidLit competition for children’s literature (although you may have reservations about exposing your children to neural-net generated content… it can get a bit surreal).

There’s also a pair of musical competitions, including one for original compositions in different genres. Key genres and styles are represented by Charlie Parker for jazz and the Bach chorales for classical music. There’s also a free composition, and a contest where a human and an AI try to improvise together—the AI must respond to a human spontaneously, in real time, and in a musically pleasing way. Quite a challenge! In all cases, if any of the generated work is indistinguishable from that of human performers, the neural net has passed the Turing Test.

Did they? Here’s part of 2017’s winning sonnet from Charese Smiley and Hiroko Bretz:

The large cabin was in total darkness.
Come marching up the eastern hill afar.
When is the clock on the stairs dangerous?
Everything seemed so near and yet so far.
Behind the wall silence alone replied.
Was, then, even the staircase occupied?

Generating the rhymes is easy enough, the sentence structure a little trickier, but what’s impressive about this sonnet is that it sticks to a single topic and appears to be a more coherent whole. I’d guess they used associated “lexical fields” of similar words to help generate something coherent. In a similar way, most of the more famous examples of AI-generated music still involve some amount of human control, even if it’s editorial; a human will build a song around an AI-generated riff, or select the most convincing Bach chorale from amidst many different samples.

We are seeing strides forward in the ability of AI to generate human voices and human likenesses. As the latter example shows, in the fake news era people have focused on the dangers of this tech—but might it also be possible to create a virtual performer, trained on a dataset of their original music? Did you ever want to hear another Beatles album, or jam with Miles Davis? Of course, these things are impossible—but could we create a similar experience that people would genuinely value? Even, to the untrained ear, something indistinguishable from the real thing?

And if it did measure up to the real thing, what would this mean? Jaron Lanier is a fascinating technology writer, a critic of strong AI, and a believer in the power of virtual reality to change the world and provide truly meaningful experiences. He’s also a composer and a musical aficionado. He pointed out in a recent interview that translation algorithms, by reducing the amount of work translators are commissioned to do, have, in some sense, profited from stolen expertise. They were trained on huge datasets purloined from human linguists and translators. If you can train an AI on someone’s creative output and it produces new music, who “owns” it?

Although companies that offer AI music tools are starting to proliferate, and some groups will argue that the musical Turing test has been passed already, AI-generated music is hardly racing to the top of the pop charts just yet. Even as the line between human-composed and AI-generated music starts to blur, there’s still a gulf between the average human and musical genius. In the next few years, we’ll see how far the current techniques can take us. It may be the case that there’s something in the skylark’s song that can’t be generated by machines. But maybe not, and then this song might need an extra verse.

Image Credit: d1sk / Shutterstock.com


#432467 Dungeons and Dragons, Not Chess and Go: ...

Everyone had died—not that you’d know it, from how they were laughing about their poor choices and bad rolls of the dice. As a social anthropologist, I study how people understand artificial intelligence (AI) and our efforts towards attaining it; I’m also a life-long fan of Dungeons and Dragons (D&D), the inventive fantasy roleplaying game. During a recent quest, when I was playing an elf ranger, the trainee paladin (or holy knight) acted according to his noble character, and announced our presence at the mouth of a dragon’s lair. The results were disastrous. But while success in D&D means “beating the bad guy,” the game is also a creative sandbox, where failure can count as collective triumph so long as you tell a great tale.

What does this have to do with AI? In computer science, games are frequently used as a benchmark for an algorithm’s “intelligence.” The late Robert Wilensky, a professor at the University of California, Berkeley and a leading figure in AI, offered one reason why this might be. Computer scientists “looked around at who the smartest people were, and they were themselves, of course,” he told the authors of Compulsive Technology: Computers as Culture (1985). “They were all essentially mathematicians by training, and mathematicians do two things—they prove theorems and play chess. And they said, hey, if it proves a theorem or plays chess, it must be smart.” No surprise that demonstrations of AI’s “smarts” have focused on the artificial player’s prowess.

Yet the games that get chosen—like Go, the main battlefield for Google DeepMind’s algorithms in recent years—tend to be tightly bounded, with set objectives and clear paths to victory or defeat. These experiences have none of the open-ended collaboration of D&D. Which got me thinking: do we need a new test for intelligence, where the goal is not simply about success, but storytelling? What would it mean for an AI to “pass” as human in a game of D&D? Instead of the Turing test, perhaps we need an elf ranger test?

Of course, this is just a playful thought experiment, but it does highlight the flaws in certain models of intelligence. First, it reveals how intelligence has to work across a variety of environments. D&D participants can inhabit many characters in many games, and the individual player can “switch” between roles (the fighter, the thief, the healer). Meanwhile, AI researchers know that it’s super difficult to get a well-trained algorithm to apply its insights in even slightly different domains—something that we humans manage surprisingly well.

Second, D&D reminds us that intelligence is embodied. In computer games, the bodily aspect of the experience might range from pressing buttons on a controller in order to move an icon or avatar (a ping-pong paddle; a spaceship; an anthropomorphic, eternally hungry, yellow sphere), to more recent and immersive experiences involving virtual-reality goggles and haptic gloves. Even without these add-ons, games can still produce biological responses associated with stress and fear (if you’ve ever played Alien: Isolation you’ll understand). In the original D&D, the players encounter the game while sitting around a table together, feeling the story and its impact. Recent research in cognitive science suggests that bodily interactions are crucial to how we grasp more abstract mental concepts. But we give minimal attention to the embodiment of artificial agents, and how that might affect the way they learn and process information.

Finally, intelligence is social. AI algorithms typically learn through multiple rounds of competition, in which successful strategies get reinforced with rewards. True, it appears that humans also evolved to learn through repetition, reward and reinforcement. But there’s an important collaborative dimension to human intelligence. In the 1930s, the psychologist Lev Vygotsky identified the interaction of an expert and a novice as an example of what became known as “scaffolded” learning, where the teacher demonstrates and then supports the learner in acquiring a new skill. In unbounded games, this cooperation is channelled through narrative. Games of It among small children can evolve from win/lose into attacks by terrible monsters, before shifting again to more complex narratives that explain why the monsters are attacking, who is the hero, and what they can do and why—narratives that aren’t always logical or even internally compatible. An AI that could engage in social storytelling is doubtless on a surer, more multifunctional footing than one that plays chess; and there’s no guarantee that chess is even a step on the road to attaining intelligence of this sort.

In some ways, this failure to look at roleplaying as a technical hurdle for intelligence is strange. D&D was a key cultural touchstone for technologists in the 1980s and the inspiration for many early text-based computer games, as Katie Hafner and Matthew Lyon point out in Where Wizards Stay Up Late: The Origins of the Internet (1996). Even today, AI researchers who play games in their free time often mention D&D specifically. So instead of beating adversaries in games, we might learn more about intelligence if we tried to teach artificial agents to play together as we do: as paladins and elf rangers.

This article was originally published at Aeon and has been republished under Creative Commons.

Image Credit: Benny Mazur / Flickr / CC BY 2.0


#432311 Everyone Is Talking About AI—But Do ...

In 2017, artificial intelligence attracted $12 billion of VC investment. We are only beginning to discover the usefulness of AI applications. Amazon recently unveiled a brick-and-mortar grocery store that has successfully supplanted cashiers and checkout lines with computer vision, sensors, and deep learning. Between the investment, the press coverage, and the dramatic innovation, “AI” has become a hot buzzword. But does it even exist yet?

At the World Economic Forum, Dr. Kai-Fu Lee, a Taiwanese venture capitalist and the founding president of Google China, remarked, “I think it’s tempting for every entrepreneur to package his or her company as an AI company, and it’s tempting for every VC to want to say ‘I’m an AI investor.’” He then observed that some of these AI bubbles could burst by the end of 2018, referring specifically to “the startups that made up a story that isn’t fulfillable, and fooled VCs into investing because they don’t know better.”

However, Dr. Lee firmly believes AI will continue to progress and will take many jobs away from workers. So, what is the difference between legitimate AI, with all of its pros and cons, and a made-up story?

If you parse through just a few stories that are allegedly about AI, you’ll quickly discover significant variation in how people define it, with a blurred line between emulated intelligence and machine learning applications.

I spoke to experts in the field of AI to try to find consensus, but the very question opens up more questions. For instance, when is it important to be accurate to a term’s original definition, and when does that commitment to accuracy amount to the splitting of hairs? It isn’t obvious, and hype is oftentimes the enemy of nuance. Additionally, there is now a vested interest in that hype—$12 billion, to be precise.

This conversation is also relevant because world-renowned thought leaders have been publicly debating the dangers posed by AI. Facebook CEO Mark Zuckerberg suggested that naysayers who attempt to "drum up these doomsday scenarios" are being negative and irresponsible. On Twitter, business magnate and OpenAI co-founder Elon Musk countered that Zuckerberg's understanding of the subject is limited. In February, Musk had a similar exchange with Harvard professor Steven Pinker, tweeting that Pinker doesn't understand the difference between functional/narrow AI and general AI.

Given the fears surrounding this technology, it’s important for the public to clearly understand the distinctions between different levels of AI so that they can realistically assess the potential threats and benefits.

As Smart As a Human?
Erik Cambria, an expert in the field of natural language processing, told me, “Nobody is doing AI today and everybody is saying that they do AI because it’s a cool and sexy buzzword. It was the same with ‘big data’ a few years ago.”

Cambria mentioned that AI, as a term, originally referenced the emulation of human intelligence. “And there is nothing today that is even barely as intelligent as the most stupid human being on Earth. So, in a strict sense, no one is doing AI yet, for the simple fact that we don’t know how the human brain works,” he said.

He added that the term "AI" is often used in reference to powerful tools for data classification. These tools are impressive, but they're on a totally different spectrum from human cognition. Additionally, Cambria has noticed people claiming that neural networks are part of the new wave of AI. This is bizarre to him because that technology already existed fifty years ago.

However, technologists no longer need to perform feature extraction themselves, and they have access to far greater computing power. All of these advancements are welcome, but it is perhaps dishonest to suggest that machines have emulated the intricacies of our cognitive processes.
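To see why Cambria calls the technology old, consider the perceptron, a single artificial neuron dating to the 1950s. The sketch below (a minimal illustration, not any company's system) "learns" the logical AND function by nudging its weights with a fixed update rule—repeated numeric adjustment dictated entirely by preexisting programming, with no understanding involved.

```python
# A single artificial neuron -- the core idea behind neural networks,
# which dates back to the perceptron of the 1950s. It "learns" logical
# AND by adjusting weights with a fixed update rule.

def step(x):
    # threshold activation: fire (1) only if the weighted sum is positive
    return 1 if x > 0 else 0

# training data: input pairs and target outputs for logical AND
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0.0, 0.0]   # weights, adjusted during training
b = 0.0          # bias term
lr = 0.1         # learning rate

for _ in range(20):                  # a few passes over the data
    for (x1, x2), target in data:
        pred = step(w[0] * x1 + w[1] * x2 + b)
        err = target - pred
        w[0] += lr * err * x1        # fixed, preexisting update rule
        w[1] += lr * err * x2
        b += lr * err

print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])
# reproduces AND: [0, 0, 0, 1]
```

Modern deep networks stack many such units and learn their own features from raw data, which is the genuine advance over hand-crafted feature extraction—but the underlying mechanism remains weighted sums updated by a fixed rule.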

“Companies are just looking at tricks to create a behavior that looks like intelligence but that is not real intelligence, it’s just a mirror of intelligence. These are expert systems that are maybe very good in a specific domain, but very stupid in other domains,” he said.

This mimicry of intelligence has inspired the public imagination. Domain-specific systems have delivered value in a wide range of industries. But those benefits have not lifted the cloud of confusion.

Assisted, Augmented, or Autonomous
When it comes to matters of scientific integrity, the issue of accurate definitions isn’t a peripheral matter. In a 1974 commencement address at the California Institute of Technology, Richard Feynman famously said, “The first principle is that you must not fool yourself—and you are the easiest person to fool.” In that same speech, Feynman also said, “You should not fool the layman when you’re talking as a scientist.” He opined that scientists should bend over backwards to show how they could be wrong. “If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing—and if they don’t want to support you under those circumstances, then that’s their decision.”

In the case of AI, this might mean that professional scientists have an obligation to clearly state that they are developing extremely powerful, controversial, profitable, and even dangerous tools, which do not constitute intelligence in any familiar or comprehensive sense.

The term “AI” may have become overhyped and confused, but there are already some efforts underway to provide clarity. A recent PwC report drew a distinction between “assisted intelligence,” “augmented intelligence,” and “autonomous intelligence.” Assisted intelligence is demonstrated by the GPS navigation programs prevalent in cars today. Augmented intelligence “enables people and organizations to do things they couldn’t otherwise do.” And autonomous intelligence “establishes machines that act on their own,” such as autonomous vehicles.

Roman Yampolskiy is an AI safety researcher who wrote the book “Artificial Superintelligence: A Futuristic Approach.” I asked him whether the broad and differing meanings might present difficulties for legislators attempting to regulate AI.

Yampolskiy explained, “Intelligence (artificial or natural) comes on a continuum and so do potential problems with such technology. We typically refer to AI which one day will have the full spectrum of human capabilities as artificial general intelligence (AGI) to avoid some confusion. Beyond that point it becomes superintelligence. What we have today and what is frequently used in business is narrow AI. Regulating anything is hard, technology is no exception. The problem is not with terminology but with complexity of such systems even at the current level.”

When asked if people should fear AI systems, Dr. Yampolskiy commented, “Since capability comes on a continuum, so do problems associated with each level of capability.” He mentioned that accidents are already reported with AI-enabled products, and as the technology advances further, the impact could spread beyond privacy concerns or technological unemployment. These concerns about the real-world effects of AI will likely take precedence over dictionary-minded quibbles. However, the issue is also about honesty versus deception.

Is This Buzzword All Buzzed Out?
Finally, I directed my questions towards a company that is actively marketing an “AI Virtual Assistant.” Carl Landers, the CMO at Conversica, acknowledged that there are a multitude of explanations for what AI is and isn’t.

He said, “My definition of AI is technology innovation that helps solve a business problem. I’m really not interested in talking about the theoretical ‘can we get machines to think like humans?’ It’s a nice conversation, but I’m trying to solve a practical business problem.”

I asked him if AI is a buzzword that inspires publicity and attracts clients. According to Landers, this was certainly true three years ago, but those effects have already started to wane. Many companies now claim to have AI in their products, so it’s less of a differentiator. However, there is still a specific intention behind the word. Landers hopes to convey that previously impossible things are now possible. “There’s something new here that you haven’t seen before, that you haven’t heard of before,” he said.

According to Brian Decker, founder of Encom Lab, machine learning algorithms only work to satisfy their preexisting programming, not out of an interior drive for better understanding. He therefore views the debate over AI as entirely semantic.

Decker stated, “A marketing exec will claim a photodiode controlled porch light has AI because it ‘knows when it is dark outside,’ while a good hardware engineer will point out that not one bit in a register in the entire history of computing has ever changed unless directed to do so according to the logic of preexisting programming.”

Although it's important for everyone to be on the same page regarding specifics and underlying meaning, products marketed as AI are already moving past these debates by creating immediate value for humans. And ultimately, humans care more about value than they do about semantic distinctions. In an interview with Quartz, Kai-Fu Lee revealed that algorithmic trading systems have already given him an 8X return over his private banking investments. "I don't trade with humans anymore," he said.

Image Credit: vrender / Shutterstock.com

Posted in Human Robots