
#431385 Here’s How to Get to Conscious ...

“We cannot be conscious of what we are not conscious of.” – Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind
Contrary to what the director leads you to believe, the protagonist of Ex Machina, Alex Garland’s 2015 masterpiece, isn’t Caleb, a young programmer tasked with evaluating machine consciousness. Rather, it’s his target, Ava, a breathtaking humanoid AI with a seemingly child-like naïveté and an enigmatic mind.
Like most cerebral movies, Ex Machina leaves the conclusion up to the viewer: was Ava actually conscious? In doing so, it also cleverly avoids a thorny question that has challenged most AI-centric movies to date: what is consciousness, and can machines have it?
Hollywood producers aren’t the only people stumped. As machine intelligence barrels forward at breakneck speed—not only exceeding human performance in games such as Dota 2 and Go, but doing so without the need for human expertise—the question has once more entered the scientific mainstream.
Are machines on the verge of consciousness?
This week, in a review published in the prestigious journal Science, cognitive scientists Drs. Stanislas Dehaene, Hakwan Lau and Sid Kouider of the Collège de France, University of California, Los Angeles and PSL Research University, respectively, argue: not yet, but there is a clear path forward.
The reason? Consciousness is “resolutely computational,” the authors say, in that it results from specific types of information processing, made possible by the hardware of the brain.
There is no magic juice, no extra spark—in fact, an experiential component (“what is it like to be conscious?”) isn’t even necessary to implement consciousness.
If consciousness results purely from the computations within our three-pound organ, then endowing machines with a similar quality is just a matter of translating biology to code.
Much like the way current powerful machine learning techniques heavily borrow from neurobiology, the authors write, we may be able to achieve artificial consciousness by studying the structures in our own brains that generate consciousness and implementing those insights as computer algorithms.
From Brain to Bot
Without doubt, the field of AI has greatly benefited from insights into our own minds, both in form and function.
For example, deep neural networks, the architecture of algorithms that underlie AlphaGo’s breathtaking sweep against its human competitors, are loosely based on the multi-layered biological neural networks that our brain cells self-organize into.
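For a sense of what “multi-layered” means in code, here is a drastically simplified sketch (the weights are hand-picked, purely for illustration; real deep networks learn millions of them from data):

```python
import math

# A toy "multi-layered network": each layer of units sums its weighted
# inputs and squashes the result, loosely analogous to neurons firing.
def layer(inputs, weights):
    return [1 / (1 + math.exp(-sum(w * x for w, x in zip(row, inputs))))
            for row in weights]

x = [0.5, -1.2]                                # input features
hidden = layer(x, [[0.8, -0.3], [0.1, 0.9]])   # first layer of "neurons"
output = layer(hidden, [[1.5, -2.0]])          # second layer
print(output)                                  # a single activation in (0, 1)
```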
Reinforcement learning, a type of “training” that teaches AIs to learn from millions of examples, has roots in a centuries-old technique familiar to anyone with a dog: if it moves toward the right response (or result), give a reward; otherwise ask it to try again.
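The reward principle can be sketched just as compactly. The following toy learner is not the training code behind AlphaGo or any real system; the “commands” and their reward probabilities are invented for illustration:

```python
import random

# Toy reward-driven learner: actions that earn a reward are reinforced,
# so they are chosen more often next time.
ACTIONS = ["sit", "roll_over", "fetch"]
REWARD_PROB = {"sit": 0.2, "roll_over": 0.1, "fetch": 0.9}  # hidden from the agent

value = {a: 0.0 for a in ACTIONS}  # running estimate of each action's payoff
count = {a: 0 for a in ACTIONS}

for trial in range(10_000):
    # Mostly exploit the best-known action, occasionally explore a random one.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=value.get)

    reward = 1.0 if random.random() < REWARD_PROB[action] else 0.0  # the "treat"

    # Nudge the estimate toward the observed reward (incremental average).
    count[action] += 1
    value[action] += (reward - value[action]) / count[action]

print(max(ACTIONS, key=value.get))  # almost always "fetch" after training
```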
In this sense, translating the architecture of human consciousness into machines seems like a no-brainer. There’s just one big problem.
“Nobody in AI is working on building conscious machines because we just have nothing to go on. We just don’t have a clue about what to do,” said Dr. Stuart Russell, co-author of Artificial Intelligence: A Modern Approach, in a 2015 interview with Science.
Multilayered consciousness
The hard part, long before we can consider coding machine consciousness, is figuring out what consciousness actually is.
To Dehaene and colleagues, consciousness is a multilayered construct with two “dimensions”: C1, the information readily in mind, and C2, the ability to obtain and monitor information about oneself. Both are essential to consciousness, but one can exist without the other.
Say you’re driving a car and the low fuel light comes on. Here, the perception of the fuel-tank light is C1—a mental representation that we can play with: we notice it, act upon it (refill the gas tank) and recall and speak about it at a later date (“I ran out of gas in the boonies!”).
“The first meaning we want to separate (from consciousness) is the notion of global availability,” explains Dehaene in an interview with Science. When you’re conscious of a word, your whole brain is aware of it, in the sense that you can use the information across modalities, he adds.
But C1 is not just a “mental sketchpad.” It represents an entire architecture that allows the brain to draw multiple modalities of information from our senses or from memories of related events, for example.
Unlike subconscious processing, which often relies on specific “modules” competent at a defined set of tasks, C1 is a global workspace that allows the brain to integrate information, decide on an action, and follow through until the end.
As in The Hunger Games, what we call “conscious” is whatever representation, at one point in time, wins the competition to access this mental workspace. The winners are shared among different brain computation circuits and are kept in the spotlight for the duration of decision-making to guide behavior.
Because of these features, C1 consciousness is highly stable and global—all related brain circuits are triggered, the authors explain.
For a complex machine such as an intelligent car, C1 is a first step towards addressing an impending problem, such as a low fuel light. In this example, the light itself is a type of subconscious signal: when it flashes, all of the other processes in the machine remain uninformed, and the car—even if equipped with state-of-the-art visual processing networks—passes by gas stations without hesitation.
With C1 in place, the fuel tank would alert the car computer (allowing the light to enter the car’s “conscious mind”), which in turn checks the built-in GPS to search for the next gas station.
“We think in a machine this would translate into a system that takes information out of whatever processing module it’s encapsulated in, and make it available to any of the other processing modules so they can use the information,” says Dehaene. “It’s a first sense of consciousness.”
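In code, one way to picture such a C1-style broadcast (a hypothetical sketch of the idea, not the authors’ architecture; the module names are invented) is a workspace where encapsulated modules post candidate signals and only the most salient one is shared with every subscriber:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Signal:
    source: str      # which module produced the signal
    content: str     # the representation itself
    salience: float  # how strongly it competes for the workspace

class GlobalWorkspace:
    """Toy C1-style broadcast: one winner, shared with every module."""

    def __init__(self) -> None:
        self.subscribers: List[Callable[[Signal], None]] = []
        self.candidates: List[Signal] = []

    def subscribe(self, handler: Callable[[Signal], None]) -> None:
        self.subscribers.append(handler)

    def post(self, signal: Signal) -> None:
        # A posted signal is still "encapsulated": no other module sees it yet.
        self.candidates.append(signal)

    def step(self) -> Optional[Signal]:
        # The most salient candidate wins the competition and is broadcast
        # to all modules; the losers stay subconscious and are dropped.
        if not self.candidates:
            return None
        winner = max(self.candidates, key=lambda s: s.salience)
        self.candidates.clear()
        for handler in self.subscribers:
            handler(winner)
        return winner

# Hypothetical modules in the smart-car example.
workspace = GlobalWorkspace()
workspace.subscribe(lambda s: print("navigation module received:", s.content))
workspace.subscribe(lambda s: print("speech module received:", s.content))

workspace.post(Signal("fuel_sensor", "fuel level low", salience=0.9))
workspace.post(Signal("radio", "song ended", salience=0.2))
workspace.step()  # only "fuel level low" reaches every module
```

Only the winning signal escapes its module; the losers stay “subconscious,” mirroring the competition described above.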
Meta-cognition
In a way, C1 reflects the mind’s capacity to access outside information. C2 goes introspective.
The authors define the second facet of consciousness, C2, as “meta-cognition”: reflecting on whether you know or perceive something, or whether you just made an error (“I think I may have filled my tank at the last gas station, but I forgot to keep a receipt to make sure”). This dimension reflects the link between consciousness and sense of self.
C2 is the level of consciousness that allows you to feel more or less confident about a decision when making a choice. In computational terms, it’s an algorithm that spews out the probability that a decision (or computation) is correct, even if it’s often experienced as a “gut feeling.”
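Read computationally, that might look like the sketch below (our own toy illustration, not an algorithm from the review): a decision procedure that returns its answer together with an estimated probability of being correct, and defers when that estimate is low.

```python
import math
from typing import Dict, Tuple

def classify_with_confidence(x: float,
                             prototypes: Dict[str, float]) -> Tuple[str, float]:
    """Pick the nearest prototype and report a probability of being right."""
    # Closer prototypes get exponentially more weight (a softmax over
    # negative distances), so ambiguous inputs yield low confidence.
    scores = {label: math.exp(-abs(x - p)) for label, p in prototypes.items()}
    best = max(scores, key=scores.get)
    return best, scores[best] / sum(scores.values())

# Hypothetical task: is a sensor reading "low" or "high"?
prototypes = {"low": 0.0, "high": 10.0}

for reading in (1.0, 9.5, 5.0):
    label, confidence = classify_with_confidence(reading, prototypes)
    # The C2-flavored part: act only when confident, otherwise defer.
    verdict = label if confidence > 0.8 else "not sure, gather more data"
    print(f"{reading}: {verdict} (confidence {confidence:.2f})")
```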
C2 also has its claws in memory and curiosity. These self-monitoring algorithms allow us to know what we know or don’t know—so-called “meta-memory,” responsible for that feeling of having something at the tip of your tongue. Monitoring what we know (or don’t know) is particularly important for children, says Dehaene.
“Young children absolutely need to monitor what they know in order to…inquire and become curious and learn more,” he explains.
The two aspects of consciousness synergize to our benefit: C1 pulls relevant information into our mental workspace (while discarding other “probable” ideas or solutions), while C2 helps with long-term reflection on whether the conscious thought led to a helpful response.
Going back to the low fuel light example, C1 allows the car to solve the problem in the moment—these algorithms globalize the information, so that the car becomes aware of the problem.
But to solve the problem, the car would need a “catalog of its cognitive abilities”—a self-awareness of what resources it has readily available, for example, a GPS map of gas stations.
“A car with this sort of self-knowledge is what we call having C2,” says Dehaene. Because the signal is globally available, and because it is monitored in a way that lets the machine look at itself, the car would care about the low gas light and behave as humans do—lowering fuel consumption and finding a gas station.
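Putting the two levels together for the fuel-light story (again a hypothetical sketch, with invented capability names), a C1-style broadcast could be answered by consulting a C2-style catalog of what the car knows it can do:

```python
# The car's C2-style self-model: a catalog of what it knows it can do.
CAPABILITY_CATALOG = {
    "throttle": "can lower fuel consumption",
    "gps": "can locate nearby gas stations",
}

def handle_broadcast(signal):
    """React to a C1-style globally shared signal using known capabilities."""
    actions = []
    if signal == "fuel level low":
        # Self-knowledge first: consult the catalog before committing to a plan.
        if "throttle" in CAPABILITY_CATALOG:
            actions.append("reduce fuel consumption")
        if "gps" in CAPABILITY_CATALOG:
            actions.append("route to nearest gas station")
        else:
            actions.append("warn the driver")  # no self-known way to refuel
    return actions

print(handle_broadcast("fuel level low"))
# ['reduce fuel consumption', 'route to nearest gas station']
```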
“Most present-day machine learning systems are devoid of any self-monitoring,” the authors note.
But their theory seems to be on the right track. In the few examples where a self-monitoring system was implemented—either within the structure of the algorithm or as a separate network—the AI generated “internal models that are meta-cognitive in nature, making it possible for an agent to develop a (limited, implicit, practical) understanding of itself.”
Towards conscious machines
Would a machine endowed with C1 and C2 behave as if it were conscious? Very likely: a smart car would “know” that it’s seeing something, express confidence in it, report it to others, and find the best solutions to problems. If its self-monitoring mechanisms break down, it may also suffer “hallucinations” or even experience visual illusions, much as humans do.
Thanks to C1, it would be able to access the information it has and use it flexibly, and because of C2 it would know the limits of what it knows, says Dehaene. “I think (the machine) would be conscious,” and not merely appear so to humans.
If you’re left with a feeling that consciousness is far more than global information sharing and self-monitoring, you’re not alone.
“Such a purely functional definition of consciousness may leave some readers unsatisfied,” the authors acknowledge.
“But we’re trying to take a radical stance, maybe simplifying the problem. Consciousness is a functional property, and when we keep adding functions to machines, at some point these properties will characterize what we mean by consciousness,” Dehaene concludes.
Image Credit: agsandrew / Shutterstock.com


#431301 Collective Intelligence Is the Root of ...

Many of us intuitively think of intelligence as an individual trait. As a society, we have a tendency to praise individual game-changers for accomplishments that would not be possible without their teams, often tens of thousands of people working behind the scenes to make extraordinary things happen.
Matt Ridley, best-selling author of multiple books, including The Rational Optimist: How Prosperity Evolves, challenges this view. He argues that human achievement and intelligence are entirely “networking phenomena.” In other words, intelligence is collective and emergent as opposed to individual.
When asked what scientific concept would improve everybody’s cognitive toolkit, Ridley highlights collective intelligence: “It is by putting brains together through the division of labor—through trade and specialization—that human society stumbled upon a way to raise the living standards, carrying capacity, technological virtuosity, and knowledge base of the species.”
Ridley has spent a lifetime exploring human prosperity and the factors that contribute to it. In a conversation with Singularity Hub, he redefined how we perceive intelligence and human progress.
Raya Bidshahri: The common perspective seems to be that competition is what drives innovation and, consequently, human progress. Why do you think collaboration trumps competition when it comes to human progress?
Matt Ridley: There is a tendency to think that competition is an animal instinct that comes naturally, and that collaboration is a human instinct we have to learn. I think there is no evidence for that. Both are deeply rooted in us as a species. The evidence from evolutionary biology tells us that collaboration is just as important as competition. Yet, in the end, the Darwinian perspective is quite correct: it’s usually cooperation for the purpose of competition, wherein a given group tries to achieve something more effectively than another group. But the point is that the capacity to cooperate runs very deep in our psyche.
RB: You write that “human achievement is entirely a networking phenomenon,” and we need to stop thinking about intelligence as an individual trait, and that instead we should look at what you refer to as collective intelligence. Why is that?
MR: The best way to think about it is that IQ doesn’t matter, because a hundred stupid people who are talking to each other will accomplish more than a hundred intelligent people who aren’t. It’s absolutely vital to see that everything from the manufacturing of a pencil to the manufacturing of a nuclear power station can’t be done by an individual human brain. You can’t possibly hold in your head all the knowledge you need to do these things. For the last 200,000 years we’ve been exchanging and specializing, which enables us to achieve much greater intelligence than we can as individuals.
RB: We often think of achievement and intelligence on individual terms. Why do you think it’s so counter-intuitive for us to think about collective intelligence?
MR: People are surprisingly myopic when it comes to understanding the nature of intelligence. I think it goes back to a pre-human tendency to think in terms of individual stories and actors. For example, we love to read about the famous inventor or scientist who invented or discovered something. We never tell these stories as network stories. We tell them as individual hero stories.

“It’s absolutely vital to see that everything from the manufacturing of a pencil to the manufacturing of a nuclear power station can’t be done by an individual human brain.”

This idea of a brilliant hero who saves the world in the face of every obstacle seems to speak to tribal hunter-gatherer societies, where the alpha male leads and wins. But it doesn’t resonate with how human beings have structured modern society in the last 100,000 years or so. We modern-day humans haven’t internalized a way of thinking that incorporates this definition of distributed and collective intelligence.
RB: One of the books you’re best known for is The Rational Optimist. What does it mean to be a rational optimist?
MR: My optimism is rational because it’s not based on a feeling, it’s based on evidence. If you look at the data on human living standards over the last 200 years and compare it with the way that most people actually perceive our progress during that time, you’ll see an extraordinary gap. On the whole, people seem to think that things are getting worse, but things are actually getting better.
We’ve seen the most astonishing improvements in human living standards: we’ve brought the share of people living in extreme poverty down to 9 percent, from about 70 percent when I was born. Human life expectancy is expanding by five hours a day, child mortality has fallen by two-thirds in half a century, and much more. These feats dwarf the things that are going wrong. Yet most people are quite pessimistic about the future despite the things we’ve achieved in the past.
RB: Where does this idea of collective intelligence fit in rational optimism?
MR: Underlying the idea of rational optimism was understanding what prosperity is, and why it happens to us and not to rabbits or rocks. Why are we the only species in the world that has concepts like GDP, growth rates, or living standards? My answer is that it comes back to this phenomenon of collective intelligence. The reason for a rise in living standards is innovation, and the cause of that innovation is our ability to collaborate.
The grand theme of human history is the exchange of ideas: collaborating through specialization and the division of labor. Throughout history, it’s in places with a lot of open exchange and trade that you get a lot of innovation. And indeed, there are some extraordinary episodes in human history when societies got cut off from exchange, their innovation slowed down, and they started moving backwards. One example is Tasmania, which was isolated and lost many of the technologies it started off with.
RB: Lots of people like to point out that just because the world has been getting better doesn’t guarantee it will continue to do so. How do you respond to that line of argumentation?
MR: There is a quote from Thomas Babington Macaulay in 1830, when he was fed up with the pessimists of the time saying things would only get worse. He says, “On what principle is it that with nothing but improvement behind us, we are to expect nothing but deterioration before us?” And this was back in the 1830s, when, in Britain and a few other parts of the world, we were only seeing the beginning of the rise in living standards. It’s perverse to argue that because things were getting better in the past, now they are about to get worse.

“I think it’s worth remembering that good news tends to be gradual, and bad news tends to be sudden. Hence, the good stuff is rarely going to make the news.”

Another thing to point out is that people have always said this. Every generation thought they were at the peak looking downhill. If you think about the opportunities technology is about to give us, whether it’s through blockchain, gene editing, or artificial intelligence, there is every reason to believe that 2017 is going to look like a time of absolute misery compared to what our children and grandchildren are going to experience.
RB: There seems to be a fair amount of mayhem in today’s world, and lots of valid problems to pay attention to in the news. What would you say to reassure our readers that we will push through it and continue to grow and improve as a species?
MR: I think it’s worth remembering that good news tends to be gradual, and bad news tends to be sudden. Hence, the good stuff is rarely going to make the news. It’s happening in an inexorable way, as a result of ordinary people exchanging, specializing, collaborating, and innovating, and it’s surprisingly hard to stop it.
Even if you look back to the 1940s, at the end of a world war, there was still a lot of innovation happening. In some ways it feels like we are going through a bad period now. I do worry a lot about the anti-enlightenment values that I see spreading in various parts of the world. But then I remind myself that people are working on innovative projects in the background, and these things are going to come through and push us forward.
Image Credit: Sahacha Nilkumhang / Shutterstock.com



#431142 Will Privacy Survive the Future?

Technological progress has radically transformed our concept of privacy. How we share information and display our identities has changed as we’ve migrated to the digital world.
As the Guardian states, “We now carry with us everywhere devices that give us access to all the world’s information, but they can also offer almost all the world vast quantities of information about us.” We are all leaving digital footprints as we navigate through the internet. While sometimes this information can be harmless, it’s often valuable to various stakeholders, including governments, corporations, marketers, and criminals.
The ethical debate around privacy is complex. The reality is that our definition and standards for privacy have evolved over time, and will continue to do so in the next few decades.
Implications of Emerging Technologies
Protecting privacy will only become more challenging as we experience the emergence of technologies such as virtual reality, the Internet of Things, brain-machine interfaces, and much more.
Virtual reality headsets are already gathering information about users’ locations and physical movements. In the future, all of our emotional experiences, reactions, and interactions in the virtual world could be accessed and analyzed. As virtual reality becomes more immersive and indistinguishable from physical reality, technology companies will be able to gather an unprecedented amount of data.
It doesn’t end there. The Internet of Things will be able to gather live data from our homes, cities and institutions. Drones may be able to spy on us as we live our everyday lives. As the amount of genetic data gathered increases, the privacy of our genes, too, may be compromised.
It gets even more concerning when we look farther into the future. As companies like Neuralink attempt to merge the human brain with machines, we are left with profound implications for privacy. Brain-machine interfaces by nature operate by extracting information from the brain and manipulating it in order to accomplish goals. Many parties could benefit from and take advantage of the information from the interface.
Marketing companies, for instance, would take an interest in better understanding how consumers think and, consequently, how their thoughts might be influenced. Employers could use the information to find new ways to improve productivity or even monitor their employees. There will notably be risks of “brain hacking,” which we must take extreme precaution against. However, it is important to note that lesser versions of these risks already exist, i.e., phone hacking, identity fraud, and the like.
A New Much-Needed Definition of Privacy
In many ways we are already cyborgs interfacing with technology. According to theories like the extended mind hypothesis, our technological devices are an extension of our identities. We use our phones to store memories, retrieve information, and communicate. We use powerful tools like the Hubble Telescope to extend our sense of sight. In parallel, one can argue that the digital world has become an extension of the physical world.
These technological tools are a part of who we are. This carries many ethical and societal implications. Our Facebook profiles can be processed to infer secondary information about us, such as sexual orientation, political and religious views, race, substance use, intelligence, and personality. Some argue that many of our devices may be mapping our every move. Your browsing history could be spied on and even sold on the open market.
While the argument to protect privacy and individuals’ information is valid to a certain extent, we may also have to accept the possibility that privacy will become obsolete in the future. We have inherently become more open as a society in the digital world, voluntarily sharing our identities, interests, views, and personalities.

“The question we are left with is, at what point does the tradeoff between transparency and privacy become detrimental?”

There also seems to be a tension between the positive trend towards mass transparency and the need to protect privacy. Many advocate for massive decentralization and openness of information through mechanisms like blockchain.
The question we are left with is, at what point does the tradeoff between transparency and privacy become detrimental? We want to live in a world of fewer secrets, but also don’t want to live in a world where our every move is followed (not to mention our every feeling, thought and interaction). So, how do we find a balance?
Traditionally, privacy has been used synonymously with secrecy. Many are led to believe that if you keep your personal information secret, then you’ve achieved privacy. Danny Weitzner, director of the MIT Internet Policy Research Initiative, rejects this notion and argues that this old definition of privacy is dead.
From Weitzner’s perspective, protecting privacy in the digital age means creating rules that require governments and businesses to be transparent about how they use our information. In other words, we can’t bring the business of data to an end, but we can do a better job of controlling it. If these stakeholders spy on our personal information, then we should have the right to spy on how they spy on us.
The Role of Policy and Discourse
Almost always, policy has been too slow to adapt to the societal and ethical implications of technological progress. And sometimes the wrong laws can do more harm than good. For instance, in March 2017, the US House of Representatives voted to allow internet service providers to sell your web browsing history on the open market.
More often than not, the bureaucratic nature of governance can’t keep up with exponential growth. New technologies are emerging every day and transforming society. Can we confidently claim that our world leaders, politicians, and local representatives are having these conversations and debates? Are they putting a focus on the ethical and societal implications of emerging technologies? Probably not.
We also can’t underestimate the role of public awareness and digital activism. There needs to be an emphasis on educating and engaging the general public about the complexities of these issues and the potential solutions available. The current solution may not be robust or clear, but having these discussions will get us there.
Stock Media provided by blasbike / Pond5
