Tag Archives: mental

#431385 Here’s How to Get to Conscious ...

“We cannot be conscious of what we are not conscious of.” – Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind
Contrary to what the director leads you to believe, the protagonist of Ex Machina, Alex Garland’s 2015 masterpiece, isn’t Caleb, the young programmer tasked with evaluating machine consciousness. Rather, it’s his subject Ava, a breathtaking humanoid AI with a seemingly child-like naïveté and an enigmatic mind.
Like most cerebral movies, Ex Machina leaves the conclusion up to the viewer: was Ava actually conscious? In doing so, it also cleverly avoids a thorny question that has challenged most AI-centric movies to date: what is consciousness, and can machines have it?
Hollywood producers aren’t the only people stumped. As machine intelligence barrels forward at breakneck speed—not only exceeding human performance on games such as DOTA and Go, but doing so without the need for human expertise—the question has once more entered the scientific mainstream.
Are machines on the verge of consciousness?
This week, in a review published in the prestigious journal Science, cognitive scientists Drs. Stanislas Dehaene, Hakwan Lau and Sid Kouider of the Collège de France, University of California, Los Angeles and PSL Research University, respectively, argue: not yet, but there is a clear path forward.
The reason? Consciousness is “resolutely computational,” the authors say, in that it results from specific types of information processing, made possible by the hardware of the brain.
There is no magic juice, no extra spark—in fact, an experiential component (“what is it like to be conscious?”) isn’t even necessary to implement consciousness.
If consciousness results purely from the computations within our three-pound organ, then endowing machines with a similar quality is just a matter of translating biology to code.
Much like the way current powerful machine learning techniques heavily borrow from neurobiology, the authors write, we may be able to achieve artificial consciousness by studying the structures in our own brains that generate consciousness and implementing those insights as computer algorithms.
From Brain to Bot
Without doubt, the field of AI has greatly benefited from insights into our own minds, both in form and function.
For example, deep neural networks, the architecture of algorithms that underlie AlphaGo’s breathtaking sweep against its human competitors, are loosely based on the multi-layered biological neural networks that our brain cells self-organize into.
Reinforcement learning, a type of “training” that teaches AIs to learn from millions of examples, has roots in a centuries-old technique familiar to anyone with a dog: if the learner moves toward the right response (or result), give it a reward; otherwise, have it try again.
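The dog-training loop can be sketched as a toy reward-learning routine. This is an illustrative sketch, not any production algorithm; the environment, reward function, learning rate, and exploration rate are all made up for the example:

```python
import random

random.seed(0)  # for reproducibility of the toy run

values = [0.0, 0.0]  # estimated value of each candidate response
alpha = 0.1          # learning rate

def reward(action):
    """Hypothetical environment: response 1 is the 'right' one."""
    return 1.0 if action == 1 else 0.0

for _ in range(1000):
    # occasionally explore a random response, otherwise exploit the
    # best-known one
    if random.random() < 0.1:
        action = random.randrange(2)
    else:
        action = max(range(2), key=lambda a: values[a])
    # nudge the estimate toward the observed reward: a reward reinforces
    # the response, no reward leaves it unattractive ("try again")
    values[action] += alpha * (reward(action) - values[action])
```

After enough trials the estimate for the rewarded response approaches 1 while the other stays near 0, which is the essence of learning from reward signals rather than from human expertise.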
In this sense, translating the architecture of human consciousness to machines seems like an obvious route to artificial consciousness. There’s just one big problem.
“Nobody in AI is working on building conscious machines because we just have nothing to go on. We just don’t have a clue about what to do,” said Dr. Stuart Russell, the author of Artificial Intelligence: A Modern Approach, in a 2015 interview with Science.
Multilayered consciousness
The hard part, long before we can consider coding machine consciousness, is figuring out what consciousness actually is.
To Dehaene and colleagues, consciousness is a multilayered construct with two “dimensions”: C1, the information readily in mind, and C2, the ability to obtain and monitor information about oneself. Both are essential to consciousness, but one can exist without the other.
Say you’re driving a car and the low fuel light comes on. Here, the perception of the fuel-tank light is C1—a mental representation that we can play with: we notice it, act upon it (refill the gas tank) and recall and speak about it at a later date (“I ran out of gas in the boonies!”).
“The first meaning we want to separate (from consciousness) is the notion of global availability,” explains Dehaene in an interview with Science. When you’re conscious of a word, your whole brain is aware of it, in a sense that you can use the information across modalities, he adds.
But C1 is not just a “mental sketchpad.” It represents an entire architecture that allows the brain to draw multiple modalities of information from our senses or from memories of related events, for example.
Unlike subconscious processing, which often relies on specific “modules” competent at a defined set of tasks, C1 is a global workspace that allows the brain to integrate information, decide on an action, and follow through until the end.
As in The Hunger Games, what we call “conscious” is whatever representation, at a given moment, wins the competition to access this mental workspace. The winners are shared among different brain computation circuits and kept in the spotlight for the duration of decision-making to guide behavior.
Because of these features, C1 consciousness is highly stable and global—all related brain circuits are triggered, the authors explain.
For a complex machine such as an intelligent car, C1 is a first step towards addressing an impending problem, such as a low fuel light. In this example, the light itself is a type of subconscious signal: when it flashes, all of the other processes in the machine remain uninformed, and the car—even if equipped with state-of-the-art visual processing networks—passes by gas stations without hesitation.
With C1 in place, the fuel tank would alert the car computer (allowing the light to enter the car’s “conscious mind”), which in turn checks the built-in GPS to search for the next gas station.
“We think in a machine this would translate into a system that takes information out of whatever processing module it’s encapsulated in, and make it available to any of the other processing modules so they can use the information,” says Dehaene. “It’s a first sense of consciousness.”
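A minimal sketch of that first sense of consciousness, assuming invented module names (`fuel_sensor`, `gps`, and `planner` are hypothetical): a signal produced inside one module is broadcast through a shared workspace so that every other module can use it.

```python
# Minimal sketch of a C1-style global workspace (all names illustrative):
# a signal raised by one module is broadcast to all others, instead of
# staying encapsulated where it was produced.

class Module:
    def __init__(self, name):
        self.name = name
        self.inbox = []  # signals this module has been made aware of

    def receive(self, signal):
        self.inbox.append(signal)

class GlobalWorkspace:
    def __init__(self):
        self.modules = []

    def register(self, module):
        self.modules.append(module)

    def broadcast(self, source, signal):
        # the "conscious" step: make the signal globally available
        for m in self.modules:
            if m is not source:
                m.receive(signal)

workspace = GlobalWorkspace()
fuel_sensor = Module("fuel_sensor")
gps = Module("gps")
planner = Module("planner")
for m in (fuel_sensor, gps, planner):
    workspace.register(m)

# the low-fuel signal "wins" access to the workspace...
workspace.broadcast(fuel_sensor, "low_fuel")

# ...so every other module, such as the GPS, can now act on it
assert "low_fuel" in gps.inbox and "low_fuel" in planner.inbox
```

The broadcast step is what distinguishes C1 from the subconscious case, where the low-fuel signal would stay encapsulated in the sensor module and the GPS would never learn of it.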
Meta-cognition
In a way, C1 reflects the mind’s capacity to access outside information. C2 goes introspective.
The authors define the second facet of consciousness, C2, as “meta-cognition”: reflecting on whether you know or perceive something, or whether you just made an error (“I think I may have filled my tank at the last gas station, but I forgot to keep a receipt to make sure”). This dimension reflects the link between consciousness and sense of self.
C2 is the level of consciousness that allows you to feel more or less confident about a decision when making a choice. In computational terms, it’s an algorithm that spews out the probability that a decision (or computation) is correct, even if it’s often experienced as a “gut feeling.”
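Such a confidence estimate can be illustrated with a softmax over decision scores, a common stand-in for “probability that the decision is correct.” This is an illustrative sketch, not the authors’ implementation; the scores are invented for the example.

```python
import math

def softmax(scores):
    """Convert raw decision scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def decide_with_confidence(scores):
    """Return (choice, confidence): the picked option and the
    probability the monitor assigns to that pick being correct."""
    probs = softmax(scores)
    choice = max(range(len(probs)), key=lambda i: probs[i])
    return choice, probs[choice]

# Clear-cut evidence yields high confidence; ambiguous evidence, low.
sure = decide_with_confidence([4.0, 0.5])   # strongly favors option 0
unsure = decide_with_confidence([1.1, 1.0]) # nearly a coin flip
```

Here `sure` reports a confidence above 0.9 while `unsure` hovers near 0.5, the computational analogue of a strong versus a weak “gut feeling.”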
C2 also has its claws in memory and curiosity. These self-monitoring algorithms allow us to know what we know or don’t know—so-called “meta-memory,” responsible for that feeling of having something at the tip of your tongue. Monitoring what we know (or don’t know) is particularly important for children, says Dehaene.
“Young children absolutely need to monitor what they know in order to…inquire and become curious and learn more,” he explains.
The two aspects of consciousness synergize to our benefit: C1 pulls relevant information into our mental workspace (while discarding other “probable” ideas or solutions), while C2 helps with long-term reflection on whether the conscious thought led to a helpful response.
Going back to the low fuel light example, C1 allows the car to solve the problem in the moment—these algorithms globalize the information, so that the car becomes aware of the problem.
But to solve the problem, the car would need a “catalog of its cognitive abilities”—a self-awareness of what resources it has readily available, for example, a GPS map of gas stations.
“A car with this sort of self-knowledge is what we call having C2,” says Dehaene. Because the signal is globally available, and because it is monitored in a way that lets the machine look at itself, the car would care about the low gas light and behave as humans do: lowering fuel consumption and finding a gas station.
“Most present-day machine learning systems are devoid of any self-monitoring,” the authors note.
But their theory seems to be on the right track. In the few cases where a self-monitoring system was implemented—either within the structure of the algorithm or as a separate network—the AI generated “internal models that are meta-cognitive in nature, making it possible for an agent to develop a (limited, implicit, practical) understanding of itself.”
Towards conscious machines
Would a machine endowed with C1 and C2 behave as if it were conscious? Very likely: a smartcar would “know” that it’s seeing something, express confidence in it, report it to others, and find the best solutions for problems. If its self-monitoring mechanisms break down, it may also suffer “hallucinations” or even experience visual illusions similar to humans.
Thanks to C1, it would be able to access the information it has and use it flexibly, and because of C2 it would know the limits of what it knows, says Dehaene. “I think (the machine) would be conscious,” and not merely appearing so to humans.
If you’re left with a feeling that consciousness is far more than global information sharing and self-monitoring, you’re not alone.
“Such a purely functional definition of consciousness may leave some readers unsatisfied,” the authors acknowledge.
“But we’re trying to take a radical stance, maybe simplifying the problem. Consciousness is a functional property, and when we keep adding functions to machines, at some point these properties will characterize what we mean by consciousness,” Dehaene concludes.
Image Credit: agsandrew / Shutterstock.com

Posted in Human Robots | Leave a comment

#430649 Robotherapy for children with autism

New Robotherapy for children with autism could reduce patient supervision by therapists.
05.07.2017
Autism treatments and therapies routinely make headlines, but with robot-enhanced therapies on the rise, the mental stress and physical toll these procedures take on therapists is often overlooked. Few realize how taxing autism treatment can be, not only for patients but also for those working with them.
It is against this backdrop that researchers from the Vrije Universiteit Brussel are pioneering a new technology to aid behavioural therapy, with a deliberate dual aim: using robots to boost the basic social learning skills of children with ASD while, in doing so, making the therapists’ job substantially easier.
A study just published in PALADYN – Journal of Behavioural Robotics examines the use of social robots as tools in clinical settings, addressing the challenge of increasing robot autonomy.
The growing deployment of robot-assisted therapies in recent decades means children with Autism Spectrum Disorder (ASD) can develop and nurture social behaviour and cognitive skills. Teaching skills that carry over into real life is the foremost goal of all autism therapies, including Robot-Assisted Therapy (RAT), with effectiveness always a key concern. This time, however, the scientists have set themselves the additional mission of taking the load off human therapists by letting parts of the intervention be handled by supervised yet autonomous robots.
The researchers developed a complete system of robot-enhanced therapy (RET) for children with ASD. The therapy works by teaching behaviours during repeated sessions of interactive games. Since individuals with ASD tend to be more responsive to feedback coming from an interaction with technology, robots are often used for this kind of therapy. In this approach, the social robot acts as a mediator and typically remains remote-controlled by a human operator. This technique, called Wizard of Oz, requires an additional person to operate the robot, and the robot does not record the child’s performance during the therapy. To reduce operator workload, the authors introduced a system with a supervised autonomous robot, which is able to infer the psychological disposition of the child and use it to select actions appropriate to the current state of the interaction.
Robots with supervised autonomy can substantially benefit behavioural therapy for children with ASD, diminishing the therapist’s workload on the one hand and achieving more objective measurements of therapy outcomes on the other. Yet, complex as it is, this therapy requires a multidisciplinary approach, as RET shows mixed effectiveness on the primary tasks (turn-taking, joint attention and imitation) compared to Standard Human Treatment (SHT).
The results are likely to prompt further development of robot-assisted therapy with increasing robot autonomy. Many conceptual and technical issues remain to be tackled, but it is the ethical questions that pose one of the major challenges as far as the potential and maximal degree of robot autonomy is concerned.
The article is fully available in open access to read, download and share on De Gruyter Online.
Research was conducted as part of the DREAM (Development of Robot-Enhanced therapy for children with Autism spectrum disorders) project.
DOI: 10.1515/pjbr-2017-0002
Image credit: P.G. Esteban
About the Journal: PALADYN – Journal of Behavioural Robotics is a fully peer-reviewed, electronic-only journal that publishes original, high-quality research on topics broadly related to neuronally and psychologically inspired robots and other behaving autonomous systems.
About De Gruyter Open: De Gruyter Open is a leading publisher of Open Access academic content. Publishing in all major disciplines, De Gruyter Open is home to more than 500 scholarly journals and over 100 books. The company is part of the De Gruyter Group (www.degruyter.com) and a member of the Association of Learned and Professional Society Publishers (ALPSP). De Gruyter Open’s book and journal programs have been endorsed by the international research community and some of the world’s top scientists, including Nobel laureates. The company’s mission is to make the very best in academic content freely available to scholars and lay readers alike.
The post Robotherapy for children with autism appeared first on Roboticmagazine.

Posted in Human Robots | Leave a comment

#428802 New AI Mental Health Tools Beat Human ...

About 20 percent of youth in the United States live with a mental health condition, according to the National Institute of Mental Health. That’s the bad news. The good news is that mental health professionals have smarter tools than ever before, with artificial intelligence-related technology coming to the forefront to help diagnose patients, often with much greater accuracy than humans. A new study published in the journal Suicide and Life-Threatening Behavior, for example, showed that…

Posted in Human Robots | Leave a comment