Tag Archives: feel

#434837 In Defense of Black Box AI

Deep learning is powering some amazing new capabilities, but we find it hard to scrutinize the workings of these algorithms. Lack of interpretability in AI is a common concern, and many are trying to fix it, but is it really always necessary to know what’s going on inside these “black boxes”?

In a recent perspective piece for Science, Elizabeth Holm, a professor of materials science and engineering at Carnegie Mellon University, argued in defense of the black box algorithm. I caught up with her last week to find out more.

Edd Gent: What’s your experience with black box algorithms?

Elizabeth Holm: I got a dual PhD in materials science and engineering and scientific computing. I came to academia about six years ago and part of what I wanted to do in making this career change was to refresh and revitalize my computer science side.

I realized that computer science had changed completely. It used to be about algorithms and making codes run fast, but now it’s about data and artificial intelligence. There are the interpretable methods like random forest algorithms, where we can tell how the machine is making its decisions. And then there are the black box methods, like convolutional neural networks.

Once in a while we can find some information about their inner workings, but most of the time we have to accept their answers and kind of probe around the edges to figure out the space in which we can use them and how reliable and accurate they are.
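
To make the contrast concrete: an interpretable model like a random forest can report which inputs drove its predictions, while a deep network offers no comparable built-in summary. Here is a minimal sketch using scikit-learn (the dataset is a generic stand-in, not anything from Holm’s work):

```python
# Illustrative sketch: inspecting an interpretable model.
# The dataset is a placeholder, not from the interview.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# A random forest exposes how much each input feature contributed
# to its decisions, which is the kind of visibility Holm describes.
ranked = sorted(zip(data.feature_names, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.3f}")

# A convolutional neural network has no analogous built-in summary;
# as Holm says, we can only probe it from the outside.
```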

EG: What made you feel like you had to mount a defense of these black box algorithms?

EH: When I started talking with my colleagues, I found that the black box nature of many of these algorithms was a real problem for them. I could understand that because we’re scientists, we always want to know why and how.

It got me thinking as a bit of a contrarian, “Are black boxes all bad? Must we reject them?” Surely not, because human thought processes are fairly black box. We often rely on human thought processes that the thinker can’t necessarily explain.

It’s looking like we’re going to be stuck with these methods for a while, because they’re really helpful. They do amazing things. And so there’s a very pragmatic realization that these are the best methods we’ve got for some really important problems, and right now we’re not seeing interpretable alternatives. We’re going to have to use them, so we’d better figure out how.

EG: In what situations do you think we should be using black box algorithms?

EH: I came up with three rules. The simplest rule is: when the cost of a bad decision is small and the value of a good decision is high, it’s worth it. The example I gave in the paper is targeted advertising. If you send an ad no one wants, it doesn’t cost a lot. If you’re the receiver, it doesn’t cost a lot to get rid of it.

There are cases where the cost is high, and that’s when we choose the black box, if it’s the best option for the job. Things get a little trickier here because we have to ask “what are the costs of bad decisions, and do we really have them fully characterized?” We also have to be very careful knowing that our systems may have biases, they may have limitations in where you can apply them, and they may be breakable.

But at the same time, there are certainly domains where we’re going to test these systems so extensively that we know their performance in virtually every situation. And if their performance is better than the other methods, we need to use them. Self-driving vehicles are a significant example—it’s almost certain they’re going to have to use black box methods, and that they’re going to end up being better drivers than humans.

The third rule is the more fun one for me as a scientist, and that’s the case where the black box really enlightens us as to a new way to look at something. We have trained a black box to recognize the fracture energy of breaking a piece of metal from a picture of the broken surface. It did a really good job, even though humans can’t do this, and we don’t know how it does it.

What the computer seems to be seeing is noise. There’s a signal in that noise, and finding it is very difficult, but if we do we may find something significant to the fracture process, and that would be an awesome scientific discovery.
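
As a purely hypothetical sketch of the kind of black box Holm is describing (the interview doesn’t specify the study’s architecture or data, so everything below, from the network layout to the input size, is an assumption): an image-to-scalar regression in PyTorch takes a fracture-surface picture in and produces a single energy value out, with no human-legible reasoning in between.

```python
# Hypothetical sketch of a black-box image-regression model in the
# spirit of the fracture-energy example; the architecture, input
# size, and data below are illustrative, not the actual study's.
import torch
import torch.nn as nn

class FractureEnergyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.regressor = nn.Linear(32, 1)  # one scalar: fracture energy

    def forward(self, x):
        return self.regressor(self.features(x).flatten(1))

model = FractureEnergyNet()
fake_surfaces = torch.randn(4, 1, 128, 128)  # batch of grayscale images
print(model(fake_surfaces).shape)  # torch.Size([4, 1])
```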

EG: Do you think there’s been too much emphasis on interpretability?

EH: I think the interpretability problem is a fundamental, fascinating computer science grand challenge and there are significant issues where we need to have an interpretable model. But how I would frame it is not that there’s too much emphasis on interpretability, but rather that there’s too much dismissiveness of uninterpretable models.

I think that some of the current social and political issues surrounding some very bad black box outcomes have convinced people that all machine learning and AI should be interpretable because that will somehow solve those problems.

Asking humans to explain their rationale has not eliminated bias, or stereotyping, or bad decision-making in humans. Relying too much on interpretability perhaps puts the responsibility in the wrong place for getting better results. I can make a better black box without knowing exactly in what way the first one was bad.

EG: Looking further into the future, do you think there will be situations where humans will have to rely on black box algorithms to solve problems we can’t get our heads around?

EH: I do think so, and it’s not as much of a stretch as we think it is. For example, humans don’t design the circuit map of computer chips anymore. We haven’t for years. It’s not a black box algorithm that designs those circuits, but we’ve long since given up trying to understand a particular computer chip’s design.

With the billions of circuits in every computer chip, the human mind can’t encompass it, either in scope or just the pure time that it would take to trace every circuit. There are going to be cases where we want systems so complex that only computers, with their patience and their ability to work in very high-dimensional spaces, will be able to handle them.

So we can continue to argue about interpretability, but we need to acknowledge that we’re going to need to use black boxes. And this is our opportunity to do our due diligence to understand how to use them responsibly, ethically, and with benefits rather than harm. And that’s going to be a social conversation as well as a scientific one.

*Responses have been edited for length and style

Image Credit: Chingraph / Shutterstock.com

#434784 Killer robots already exist, and ...

Humans will always make the final decision on whether armed robots can shoot, according to a statement by the US Department of Defense. The clarification comes amid fears about a new advanced targeting system, known as ATLAS, that will use artificial intelligence in combat vehicles to target and execute threats. While the public may feel uneasy about so-called “killer robots”, the concept is nothing new – machine-gun wielding “SWORDS” robots were deployed in Iraq as early as 2007.

#434767 7 Non-Obvious Trends Shaping the Future

When you think of trends that might be shaping the future, the first things that come to mind probably have something to do with technology: Robots taking over jobs. Artificial intelligence advancing and proliferating. 5G making everything faster, connected cities making everything easier, data making everything more targeted.

Technology is undoubtedly changing the way we live, and will continue to do so—probably at an accelerating rate—in the near and far future. But there are other trends impacting the course of our lives and societies, too. They’re less obvious, and some have nothing to do with technology.

For the past nine years, entrepreneur and author Rohit Bhargava has read hundreds of articles across all types of publications, tagged and categorized them by topic, funneled frequent topics into broader trends, analyzed those trends, narrowed them down to the most significant ones, and published a book about them as part of his ‘Non-Obvious’ series. He defines a trend as “a unique curated observation of the accelerating present.”

In an encore session at South by Southwest last week (his initial talk couldn’t accommodate the hundreds of people who wanted to attend, so a repeat was scheduled), Bhargava shared details of his creative process, why it’s hard to think non-obviously, the most important trends of this year, and how to make sure they don’t get the best of you.

Thinking Differently
“Non-obvious thinking is seeing the world in a way other people don’t see it,” Bhargava said. “The secret is curating your ideas.” Curation collects ideas and presents them in a meaningful way; museum curators, for example, decide which works of art to include in an exhibit and how to present them.

For his own curation process, Bhargava uses what he calls the haystack method. Rather than searching for a needle in a haystack, he gathers ‘hay’ (ideas and stories) then uses them to locate and define a ‘needle’ (a trend). “If you spend enough time gathering information, you can put the needle into the middle of the haystack,” he said.

A big part of gathering information is looking for it in places you wouldn’t normally think to look. In his case, that means that on top of reading what everyone else reads—the New York Times, the Washington Post, the Economist—he also buys publications like Modern Farmer, Teen Vogue, and Ink magazine. “It’s like stepping into someone else’s world who’s not like me,” he said. “That’s impossible to do online because everything is personalized.”

Three common barriers make non-obvious thinking hard.

The first is unquestioned assumptions, which are facts or habits we think will never change. When James Dyson first invented the bagless vacuum, he wanted to sell the license to it, but no one believed people would want to spend more money up front on a vacuum, then never have to buy bags again. The success of Dyson’s business today shows how mistaken that assumption—that people wouldn’t adapt to a product that, at the end of the day, was far more sensible—turned out to be. “Making the wrong basic assumptions can doom you,” Bhargava said.

The second barrier to thinking differently is constant disruption. “Everything is changing as industries blend together,” Bhargava said. “The speed of change makes everyone want everything, all the time, and people expect the impossible.” We’ve come to expect every alternative to be presented to us in every moment, but in many cases this doesn’t serve us well; we’re surrounded by noise and have trouble discerning what’s valuable and authentic.

This ties into the third barrier, which Bhargava calls the believability crisis. “Constant sensationalism makes people skeptical about everything,” he said. With the advent of fake news and technology like deepfakes, we’re in a post-truth, post-fact era, and are in a constant battle to discern what’s real from what’s not.

2019 Trends
Bhargava’s efforts to see past these barriers and curate information yielded 15 trends he believes are currently shaping the future. He shared seven of them, along with thoughts on how to stay ahead of the curve.

Retro Trust
We tend to trust things we have a history with. “People like nostalgic experiences,” Bhargava said. With tech moving as fast as it is, old things are quickly getting replaced by shinier, newer, often more complex things. But not everyone’s jumping on board—and some who’ve been on board are choosing to jump off in favor of what worked for them in the past.

“We’re turning back to vinyl records and film cameras, deliberately downgrading to phones that only text and call,” Bhargava said. In a period of too much change too fast, people are craving familiarity and dependability. To capitalize on that sentiment, entrepreneurs should seek out opportunities for collaboration—how can you build a product that’s new, but feels reliable and familiar?

Muddled Masculinity
Women are increasingly taking on leadership roles and advancing in the workplace; they now own more homes than men and graduate from college at higher rates. That’s all great for us ladies—but not so great for men or, perhaps more generally, for the concept of masculinity.

“Female empowerment is causing confusion about what it means to be a man today,” Bhargava said. “Men don’t know what to do—should they say something? Would that make them an asshole? Should they keep quiet? Would that make them an asshole?”

By encouraging the non-conforming, we can help take some weight off the traditional gender roles, and their corresponding divisions and pressures.

Innovation Envy
Innovation has become an overused word, to the point that it’s thrown onto ideas and actions that aren’t really innovative at all. “We innovate by looking at someone else and doing the same,” Bhargava said. If an employee brings a radical idea to someone in a leadership role, in many companies the leadership will say they need a case study before implementing the radical idea—but if it’s already been done, it’s not innovative. “With most innovation what ends up happening is not spectacular failure, but irrelevance,” Bhargava said.

He suggests that rather than being on the defensive, companies should play offense with innovation, and when it doesn’t work “fail as if no one’s watching” (often, no one will be).

Artificial Influence
Thanks to social media and other technologies, there are a growing number of fabricated things that, despite not being real, influence how we think. “15 percent of all Twitter accounts may be fake, and there are 60 million fake Facebook accounts,” Bhargava said. There are virtual influencers and even virtual performers.

“Don’t hide the artificial ingredients,” Bhargava advised. “Some people are going to pretend it’s all real. We have to be ethical.” The creators of fabrications meant to influence the way people think, or the products they buy, or the decisions they make, should make it crystal-clear that there aren’t living, breathing people behind the avatars.

Enterprise Empathy
Another reaction to the fast pace of change these days—and the fast pace of life, for that matter—is that empathy is regaining value and even becoming a driver of innovation. Companies are searching for ways to give people a sense of reassurance. The Tesco grocery brand in the UK has a “relaxed lane” for those who don’t want to feel rushed as they check out. Starbucks opened a “signing store” in Washington DC, and most of its regular customers have learned some sign language.

“Use empathy as a principle to help yourself stand out,” Bhargava said. Besides being a good business strategy, “made with empathy” will ideally promote, well, more empathy, a quality there’s often a shortage of.

Robot Renaissance
From automating factory jobs to flipping burgers to cleaning our floors, robots have firmly taken their place in our day-to-day lives—and they’re not going away anytime soon. “There are more situations with robots than ever before,” Bhargava said. “They’re exploring underwater. They’re concierges at hotels.”

The robot revolution feels intimidating. But Bhargava suggests embracing robots with more curiosity than concern. While they may replace some tasks we don’t want replaced, they’ll also be hugely helpful in multiple contexts, from elderly care to dangerous manual tasks.

Back-storytelling
Similar to retro trust and enterprise empathy, organizations have started to tell their brand’s story to gain customer loyalty. “Stories give us meaning, and meaning is what we need in order to be able to put the pieces together,” Bhargava said. “Stories give us a way of understanding the world.”

Finding the story behind your business, brand, or even yourself, and sharing it openly, can help you connect with people, be they customers, coworkers, or friends.

Tech’s Ripple Effects
While it may not overtly sound like it, most of the trends Bhargava identified for 2019 are tied to technology, and are in fact a sort of backlash against it. Tech has made us question who to trust, how to innovate, what’s real and what’s fake, how to make the best decisions, and even what it is that makes us human.

By being aware of these trends, sharing them, and having conversations about them, we’ll help shape the way tech continues to be built, and thus the way it impacts us down the road.

Image Credit: Rohit Bhargava by Brian Smale

#434655 Purposeful Evolution: Creating an ...

More often than not, we fall into the trap of trying to predict and anticipate the future, forgetting that the future is up to us to envision and create. In the words of Buckminster Fuller, “We are called to be architects of the future, not its victims.”

But how, exactly, do we create a “good” future? What does such a future look like to begin with?

In Future Consciousness: The Path to Purposeful Evolution, Tom Lombardo analytically deconstructs how we can flourish in the flow of evolution and create a prosperous future for humanity. Scientifically informed, the book taps into themes that are constructive and profound, from both eastern and western philosophies.

As the executive director of the Center for Future Consciousness and an executive board member and fellow of the World Futures Studies Federation, Lombardo has dedicated his life and career to studying how we can create a “realistic, constructive, and ethical future.”

In a conversation with Singularity Hub, Lombardo discussed purposeful evolution, ethical use of technology, and the power of optimism.

Raya Bidshahri: Tell me more about the title of your book. What is future consciousness and what role does it play in what you call purposeful evolution?

Tom Lombardo: Humans have the unique capacity to purposefully evolve themselves because they possess future consciousness. Future consciousness contains all of the cognitive, motivational, and emotional aspects of the human mind that pertain to the future. It’s because we can imagine and think about the future that we can manipulate and direct our future evolution purposefully. Future consciousness empowers us to become self-responsible in our own evolutionary future. This is a jump in the process of evolution itself.

RB: In several places in the book, you discuss the importance of various eastern philosophies. What can we learn from the east that is often missing from western models?

TL: The key idea in the east that I have been intrigued by for decades is the Taoist Yin Yang, which is the idea that reality should be conceptualized as interdependent reciprocities.

In the west we think dualistically, or we attempt to think in terms of one end of the duality to the exclusion of the other, such as whole versus parts or consciousness versus physical matter. Yin Yang thinking is seeing how both sides of a “duality,” even though they appear to be opposites, are interdependent; you can’t have one without the other. You can’t have order without chaos, consciousness without the physical world, individuals without the whole, humanity without technology, and vice versa for all these complementary pairs.

RB: You talk about the importance of chaos and destruction in the trajectory of human progress. In your own words, “Creativity frequently involves destruction as a prelude to the emergence of some new reality.” Why is this an important principle for readers to keep in mind, especially in the context of today’s world?

TL: In order for there to be progress, there often has to be a disintegration of aspects of the old. Although progress and evolution involve a process of building up, growth isn’t entirely cumulative; it’s also transformative. Things fall apart and come back together again.

Throughout history, we have seen a transformation of what are the most dominant human professions or vocations. At some point, almost everybody worked in agriculture, but most of those agricultural activities were replaced by machines, and a lot of people moved over to industry. Now we’re seeing that jobs and functions are increasingly automated in industry, and humans are being pushed into vocations that involve higher cognitive and artistic skills, services, information technology, and so on.

RB: You raise valid concerns about the dark side of technological progress, especially when it’s combined with mass consumerism, materialism, and anti-intellectualism. How do we counter these destructive forces as we shape the future of humanity?

TL: We can counter such forces by always thoughtfully considering how our technologies are affecting the ongoing purposeful evolution of our conscious minds, bodies, and societies. We should ask ourselves what ethical values are being served by the development of various technologies.

For example, we often hear the criticism that technologies that are driven by pure capitalism degrade human life and only benefit the few people who invented and market them. So we need to also think about what good these new technologies can serve. It’s what I mean when I talk about the “wise cyborg.” A wise cyborg is somebody who uses technology to serve wisdom, or values connected with wisdom.

RB: Creating an ideal future isn’t just about progress in technology, but also progress in morality. How do we decide what a “good” future is? What are some philosophical tools we can use to determine a code of ethics that is as objective as possible?

TL: Let’s keep in mind that ethics will always have some level of subjectivity. That being said, the way to determine a good future is to base it on the best theory of reality that we have, which is that we are evolutionary beings in an evolutionary universe and we are interdependent with everything else in that universe. Our ethics should acknowledge that we are fluid and interactive.

Hence, the “good” can’t be something static, and it can’t be something that pertains to me and not everybody else. It can’t be something that only applies to humans and ignores all other life on Earth, and it must be a mode of change rather than something stable.

RB: You present a consciousness-centered approach to creating a good future for humanity. What are some of the values we should develop in order to create a prosperous future?

TL: A sense of self-responsibility for the future is critical. This means realizing that the “good future” is something we have to take upon ourselves to create; we can’t let something or somebody else do that. We need to feel responsible both for our own futures and for the future around us.

Another one is going to be an informed and hopeful optimism about the future, because both optimism and pessimism have self-fulfilling prophecy effects. If you hope for the best, you are more likely to look deeply into your reality and increase the chance of it coming out that way. In fact, all of the positive emotions that have to do with future consciousness actually make people more intelligent and creative.

Some other important character virtues are discipline and tenacity, deep purpose, the love of learning and thinking, and creativity.

RB: Are you optimistic about the future? If so, what informs your optimism?

TL: I justify my optimism the same way that I have seen Ray Kurzweil, Peter Diamandis, Kevin Kelly, and Steven Pinker justify theirs. If we look at the history of human civilization and even the history of nature, we see a progressive motion forward toward greater complexity and even greater intelligence. There are lots of ups and downs, and catastrophes along the way, but the facts of nature and human history support the long-term expectation of continued evolution into the future.

You don’t have to be unrealistic to be optimistic. It’s also, psychologically, the more empowering position. That’s the position we should take if we want to maximize the chances of our individual or collective reality turning out better.

A lot of pessimists are pessimistic because they’re afraid of the future. There are lots of reasons to be afraid, but all in all, fear disempowers, whereas hope empowers.

Image Credit: Quick Shot / Shutterstock.com

#434303 Making Superhumans Through Radical ...

Imagine trying to read War and Peace one letter at a time. The thought alone feels excruciating. But in many ways, this painful idea holds parallels to how human-machine interfaces (HMIs) force us to interact with and process data today.

Designed back in the 1970s at Xerox PARC and later refined during the 1980s by Apple, today’s HMI was originally conceived during fundamentally different times, and specifically, before people and machines were generating so much data. Fast forward to 2019, when humans are estimated to produce 44 zettabytes of data—equal to two stacks of books from here to Pluto—and we are still using the same HMI from the 1970s.

These dated interfaces are not equipped to handle today’s exponential rise in data, which has been ushered in by the rapid dematerialization of many physical products into computers and software.

Breakthroughs in perceptual and cognitive computing, especially machine learning algorithms, are enabling technology to process vast volumes of data, and in doing so, they are dramatically amplifying our brain’s abilities. Yet even with these powerful technologies that at times make us feel superhuman, the interfaces are still crippled by poor ergonomics.

Many interfaces are still designed around the concept that human interaction with technology is secondary, not instantaneous. This means that any time someone uses technology, they are inevitably multitasking, because they must simultaneously perform a task and operate the technology.

If our aim, however, is to create technology that truly extends and amplifies our mental abilities so that we can offload important tasks, the technology that helps us must not also overwhelm us in the process. We must reimagine interfaces to work in coherence with how our minds function in the world so that our brains and these tools can work together seamlessly.

Embodied Cognition
Most technology is designed to serve either the mind or the body. It is a problematic divide, because our brains use our entire body to process the world around us. Said differently, our minds and bodies do not operate distinctly. Our minds are embodied.

Studies using MRI scans have shown that when a person feels an emotion in their gut, blood actually moves to that area of the body. The body and the mind are linked in this way, sharing information back and forth continuously.

Current technology presents data to the brain differently from how the brain processes data. Our brains, for example, use sensory data to continually encode and decipher patterns within the neocortex. They do not create a linguistic label for each item, which is how the majority of machine learning systems operate, nor do they store an image associated with each of those labels.
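
That label-per-item scheme can be sketched in a few lines (the toy model and class names below are hypothetical, chosen only to illustrate the point):

```python
# Minimal sketch of label-per-item classification: the model maps
# every input to one linguistic label from a fixed list. The toy
# model is untrained and the labels are made up for illustration.
import torch
import torch.nn as nn

LABELS = ["cat", "dog", "car"]  # a fixed vocabulary of labels

classifier = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, len(LABELS)),
)

image = torch.randn(1, 3, 32, 32)  # one 32x32 RGB input
scores = classifier(image)
print(LABELS[scores.argmax(dim=1).item()])  # exactly one label out
```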

Our bodies move information through us instantaneously, in a sense “computing” at the speed of thought. What if our technology could do the same?

Using Cognitive Ergonomics to Design Better Interfaces
Well-designed physical tools, as philosopher Martin Heidegger once meditated on while using the metaphor of a hammer, seem to disappear into the “hand.” They are designed to amplify a human ability and not get in the way during the process.

The aim of physical ergonomics is to understand the mechanical movement of the human body and then adapt a physical system to amplify the human output accordingly. By understanding the movement of the body, physical ergonomics enables ergonomically sound physical affordances—or conditions—so that the mechanical movement of the body and the mechanical movement of the machine can work together harmoniously.

Cognitive ergonomics applied to HMI design uses this same idea of amplifying output, but rather than focusing on physical output, the focus is on mental output. By understanding the raw materials the brain uses to comprehend information and form an output, cognitive ergonomics allows technologists and designers to create technological affordances so that the brain can work seamlessly with interfaces and remove the interruption costs of our current devices. In doing so, the technology itself “disappears,” and a person’s interaction with technology becomes fluid and primary.

By leveraging cognitive ergonomics in HMI design, we can create a generation of interfaces that can process and present data the same way humans process real-world information: through fully sensory interfaces.

Several brain-machine interfaces are already on the path to achieving this. AlterEgo, a wearable device developed by MIT researchers, uses electrodes to detect and understand nonverbal prompts, which enables the device to read the user’s mind and act as an extension of the user’s cognition.

Another notable example is the BrainGate neural device, created by researchers at Stanford University. Just two months ago, a study was released showing that this brain implant system allowed paralyzed patients to navigate an Android tablet with their thoughts alone.

These are two extraordinary examples of what is possible for the future of HMI, but there is still a long way to go to bring cognitive ergonomics front and center in interface design.

Disruptive Innovation Happens When You Step Outside Your Existing Users
Most of today’s interfaces are designed by a narrow population, made up predominantly of white, non-disabled men who are prolific in the use of technology (you may recall The New York Times viral article from 2016, Artificial Intelligence’s White Guy Problem). If you ask this population if there is a problem with today’s HMIs, most will say no, and this is because the technology has been designed to serve them.

This lack of diversity means a limited perspective is being brought to interface design, which is problematic if we want HMI to evolve and work seamlessly with the brain. To use cognitive ergonomics in interface design, we must first gain a more holistic understanding of how people with different abilities understand the world and how they interact with technology.

Underserved groups, such as people with physical disabilities, operate on what Clayton Christensen described in The Innovator’s Dilemma as the fringe segment of a market. Developing solutions that cater to fringe groups can in fact disrupt the larger market by opening up a much larger market from the bottom.

Learning From Underserved Populations
When technology fails to serve a group of people, that group must adapt the technology to meet their needs.

The workarounds created are often ingenious, precisely because they were arrived at not by preference but out of necessity, which has forced disadvantaged users to approach the technology from a very different vantage point.

When a designer or technologist begins learning from this new viewpoint and understanding challenges through a different lens, they can bring new perspectives to design—perspectives that otherwise can go unseen.

Designers and technologists can also learn from people with physical disabilities who interact with the world by leveraging other senses that help them compensate for one they may lack. For example, some blind people use echolocation to detect objects in their environments.

The BrainPort device developed by Wicab is an incredible example of technology leveraging one human sense to serve or complement another. It captures environmental information with a wearable video camera and converts this data into soft electrical stimulation sequences that are sent to a device on the user’s tongue—the most sensitive touch receptor in the body. The user learns how to interpret the patterns felt on their tongue, and in doing so, becomes able to “see” with their tongue.
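
The pixel-to-tactile mapping at the heart of such a device can be sketched crudely (the real BrainPort’s signal processing is far more sophisticated; everything below, including the grid size and number of intensity levels, is an assumption made for illustration):

```python
# Toy sketch of sensory substitution in the spirit of BrainPort:
# average-pool a grayscale camera frame into a coarse grid and
# quantize brightness into discrete stimulation levels. The grid
# size and level count are illustrative assumptions.
import numpy as np

def frame_to_stimulation(frame, grid=20, levels=8):
    """Map a grayscale frame (H x W, values 0-255) to a grid x grid
    array of stimulation levels (0 = off, levels-1 = strongest)."""
    h, w = frame.shape
    # Crop so the frame divides evenly into grid x grid blocks.
    frame = frame[:h - h % grid, :w - w % grid]
    bh, bw = frame.shape[0] // grid, frame.shape[1] // grid
    # Average each block, then quantize to discrete levels.
    blocks = frame.reshape(grid, bh, grid, bw).mean(axis=(1, 3))
    return np.clip((blocks / 256.0 * levels).astype(int), 0, levels - 1)

frame = np.random.randint(0, 256, size=(240, 320)).astype(float)
print(frame_to_stimulation(frame).shape)  # (20, 20)
```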

Key to the future of HMI design is learning how different user groups navigate the world through senses beyond sight. To make cognitive ergonomics work, we must understand how to leverage the senses so we’re not always solely relying on our visual or verbal interactions.

Radical Inclusion for the Future of HMI
Bringing radical inclusion into HMI design is about gaining a broader lens on technology design at large, so that technology can serve everyone better.

Interestingly, cognitive ergonomics and radical inclusion go hand in hand. We can’t design our interfaces with cognitive ergonomics without bringing radical inclusion into the picture, and we also will not arrive at radical inclusion in technology so long as cognitive ergonomics are not considered.

This new mindset is the only way to usher in an era of technology design that amplifies the collective human ability to create a more inclusive future for all.

Image Credit: jamesteohart / Shutterstock.com
