#433901 The SpiNNaker Supercomputer, Modeled ...
We’ve long used the brain as inspiration for computers, but the SpiNNaker supercomputer, switched on this month, is probably the closest we’ve come to recreating it in silicon. Now scientists hope to use the supercomputer to model the very thing that inspired its design.
The brain is the most complex machine in the known universe, but that complexity comes primarily from its architecture rather than the individual components that make it up. Its highly interconnected structure means that relatively simple messages exchanged between billions of individual neurons add up to carry out highly complex computations.
That’s the paradigm that has inspired the “Spiking Neural Network Architecture” (SpiNNaker) supercomputer at the University of Manchester in the UK. The project is the brainchild of Steve Furber, the designer of the original ARM processor. After a decade of development, a million-core version of the machine that will eventually be able to simulate up to a billion neurons was switched on earlier this month.
The idea of splitting computation into very small chunks and spreading them over many processors is already the leading approach to supercomputing. But even the most parallel systems require a lot of communication, and messages may have to pack in a lot of information, such as the task that needs to be completed or the data that needs to be processed.
In contrast, messages in the brain consist of simple electrochemical impulses, or spikes, passed between neurons, with information encoded primarily in the timing or rate of those spikes (which is more important is a topic of debate among neuroscientists). Each neuron is connected to thousands of others via synapses, and complex computation relies on how spikes cascade through these highly-connected networks.
The SpiNNaker machine attempts to replicate this using a model called Address Event Representation. Each of the million cores can simulate roughly a million synapses, so depending on the model, a core might host 1,000 neurons with 1,000 connections apiece, or 100 neurons with 10,000 connections. Information is encoded in the timing of spikes and the identity of the neuron sending them. When a neuron is activated, it broadcasts a tiny packet of data containing its address, and spike timing is conveyed implicitly.
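As a rough illustration of the idea, here is a minimal Python sketch of a few leaky integrate-and-fire neurons exchanging AER-style packets. The network, parameter values, and packet format are invented for illustration and are not SpiNNaker’s actual implementation; the point is simply that each packet carries only the sender’s address, with timing conveyed implicitly by when it arrives.

```python
import numpy as np

# Toy spiking network with AER-style messaging (illustrative only).
N = 5            # number of neurons
V_THRESH = 1.0   # firing threshold
DECAY = 0.9      # membrane leak factor per time step
rng = np.random.default_rng(0)

weights = rng.uniform(0.0, 0.5, size=(N, N))  # all-to-all synaptic weights
np.fill_diagonal(weights, 0.0)                # no self-connections
v = np.zeros(N)                               # membrane potentials

for t in range(20):
    v += rng.uniform(0.0, 0.3, size=N)        # background input current
    spikes = v >= V_THRESH                    # neurons crossing threshold fire
    for src in np.flatnonzero(spikes):
        # An AER packet carries only the sender's address; the spike's
        # timing is implicit in the moment the packet is delivered.
        packet = {"address": int(src), "timestep": t}
        v += weights[src]                     # deliver the spike to all targets
        print(packet)
    v[spikes] = 0.0                           # reset neurons that fired
    v *= DECAY                                # apply membrane leak
```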
By modeling their machine on the architecture of the brain, the researchers hope to be able to simulate more biological neurons in real time than any other machine on the planet. The project is funded by the European Human Brain Project, a ten-year science mega-project aimed at bringing together neuroscientists and computer scientists to understand the brain, and researchers will be able to apply for time on the machine to run their simulations.
Importantly, it’s possible to implement various different neuronal models on the machine. The operation of neurons involves a variety of complex biological processes, and it’s still unclear whether this complexity is an artefact of evolution or central to the brain’s ability to process information. The ability to simulate up to a billion simple neurons or millions of more complex ones on the same machine should help to slowly tease out the answer.
Even at a billion neurons, that still only represents about one percent of the human brain, so the machine will be limited to investigating isolated networks of neurons. But the previous 500,000-core machine has already been used to run useful simulations of the basal ganglia—an area affected in Parkinson’s disease—and an outer layer of the brain that processes sensory information.
The full-scale supercomputer will make it possible to study even larger networks previously out of reach, which could lead to breakthroughs in our understanding of both the healthy and unhealthy functioning of the brain.
And while neurological simulation is the main goal for the machine, it could also provide a useful research tool for roboticists. Previous research has already shown a small board of SpiNNaker chips can be used to control a simple wheeled robot, but Furber thinks the SpiNNaker supercomputer could also be used to run large-scale networks that can process sensory input and generate motor output in real time and at low power.
That low power operation is of particular promise for robotics. The brain is dramatically more power-efficient than conventional supercomputers, and by borrowing from its principles SpiNNaker has managed to capture some of that efficiency. That could be important for running mobile robotic platforms that need to carry their own juice around.
This ability to run complex neural networks at low power has been one of the main commercial drivers for so-called neuromorphic computing devices that are physically modeled on the brain, such as IBM’s TrueNorth chip and Intel’s Loihi. The hope is that complex artificial intelligence applications normally run in massive data centers could be run on edge devices like smartphones, cars, and robots.
But these devices, including SpiNNaker, operate very differently from the leading AI approaches, and it’s not clear how easy it would be to translate models between the two. The need to adopt an entirely new programming paradigm is likely to limit widespread adoption, and the lack of commercial traction for the aforementioned devices seems to bear that out.
At the same time, though, this new paradigm could potentially lead to dramatic breakthroughs in massively parallel computing. SpiNNaker overturns many of the foundational principles of how supercomputers work, making it much more flexible and error-tolerant.
For now, the machine is likely to be firmly focused on accelerating our understanding of how the brain works. But its designers also hope those findings could in turn point the way to more efficient and powerful approaches to computing.
Image Credit: Adrian Grosu / Shutterstock.com
#433884 Designer Babies, and Their Babies: How ...
As if stand-alone technologies weren’t advancing fast enough, we’re in an age where we must study the intersection points of these technologies. How is what’s happening in robotics influenced by what’s happening in 3D printing? What could be made possible by applying the latest advances in quantum computing to nanotechnology?
Along these lines, one crucial tech intersection is that of artificial intelligence and genomics. Each field is seeing constant progress, but Jamie Metzl believes it’s their convergence that will really push us into uncharted territory, beyond even what we’ve imagined in science fiction. “There’s going to be this push and pull, this competition between the reality of our biology with its built-in limitations and the scope of our aspirations,” he said.
Metzl is a senior fellow at the Atlantic Council and author of the upcoming book Hacking Darwin: Genetic Engineering and the Future of Humanity. At Singularity University’s Exponential Medicine conference last week, he shared his insights on genomics and AI, and where their convergence could take us.
Life As We Know It
Metzl explained how genomics as a field evolved slowly—and then quickly. In 1953, James Watson and Francis Crick identified the double helix structure of DNA, and realized that the order of the base pairs held a treasure trove of genetic information. There was such a thing as a book of life, and we’d found it.
In 2003, when the Human Genome Project was completed (after 13 years and $2.7 billion), we learned the order of the genome’s 3 billion base pairs, and the location of specific genes on our chromosomes. Not only did a book of life exist, we figured out how to read it.
Jamie Metzl at Exponential Medicine
Fifteen years after that, it’s 2018 and precision gene editing in plants, animals, and humans is changing everything, and quickly pushing us into an entirely new frontier. Forget reading the book of life—we’re now learning how to write it.
“Readable, writable, and hackable, what’s clear is that human beings are recognizing that we are another form of information technology, and just like our IT has entered this exponential curve of discovery, we will have that with ourselves,” Metzl said. “And it’s intersecting with the AI revolution.”
Learning About Life Meets Machine Learning
In 2016, DeepMind’s AlphaGo program outsmarted the world’s top Go player. In 2017 AlphaGo Zero was created: unlike AlphaGo, AlphaGo Zero wasn’t trained using previous human games of Go, but was simply given the rules of Go—and in four days it defeated the AlphaGo program.
Our own biology is, of course, vastly more complex than the game of Go, and that, Metzl said, is our starting point. “The system of our own biology that we are trying to understand is massively, but very importantly not infinitely, complex,” he added.
Getting a standardized set of rules for our biology—and, eventually, maybe even outsmarting our biology—will require genomic data. Lots of it.
Multiple countries are already starting to produce this data. The UK’s National Health Service recently announced a plan to sequence the genomes of five million Britons over the next five years. In the US, the All of Us Research Program will sequence a million Americans. China is the most aggressive in sequencing its population, with a goal of sequencing half of all newborns by 2020.
“We’re going to get these massive pools of sequenced genomic data,” Metzl said. “The real gold will come from comparing people’s sequenced genomes to their electronic health records, and ultimately their life records.” Getting people comfortable with allowing open access to their data will be another matter; Metzl mentioned that Luna DNA and others have strategies to help people feel confident about consenting to share their private information. But this is where China’s lack of privacy protection could end up being a significant advantage.
To compare genotypes and phenotypes at scale—first millions, then hundreds of millions, then eventually billions, Metzl said—we’re going to need AI and big data analytic tools, and algorithms far beyond what we have now. These tools will let us move from precision medicine to predictive medicine, knowing precisely when and where different diseases are going to occur and shutting them down before they start.
But, Metzl said, “As we unlock the genetics of ourselves, it’s not going to be about just healthcare. It’s ultimately going to be about who and what we are as humans. It’s going to be about identity.”
Designer Babies, and Their Babies
In Metzl’s mind, the most serious application of our genomic knowledge will be in embryo selection.
Currently, in-vitro fertilization (IVF) procedures can extract around 15 eggs, fertilize them, and then perform pre-implantation genetic testing; right now, what’s knowable is limited to single-gene mutation diseases and simple traits like hair color and eye color. “As we get to the millions and then billions of people with sequences, we’ll have information about how these genetics work, and we’re going to be able to make much more informed choices,” Metzl said.
Imagine going to a fertility clinic in 2023. You give a skin graft or a blood sample, and using in-vitro gametogenesis (IVG)—infertility be damned—your skin or blood cells are induced to become eggs or sperm, which are then combined to create embryos. The dozens or hundreds of embryos created from artificial gametes each have a few cells extracted from them, and these cells are sequenced. The sequences will tell you the likelihood of specific traits and disease states were that embryo to be implanted and taken to full term. “With really anything that has a genetic foundation, we’ll be able to predict with increasing levels of accuracy how that potential child will be realized as a human being,” Metzl said.
This, he added, could lead to some wild and frightening possibilities: if you have 1,000 eggs and you pick one based on its optimal genetic sequence, you could then mate your embryo with somebody else who has done the same thing in a different genetic line. “Your five-day-old embryo and their five-day-old embryo could have a child using the same IVG process,” Metzl said. “Then that child could have a child with another five-day-old embryo from another genetic line, and you could go on and on down the line.”
Sounds insane, right? But wait, there’s more: as Jason Pontin reported earlier this year in Wired, “Gene-editing technologies such as Crispr-Cas9 would make it relatively easy to repair, add, or remove genes during the IVG process, eliminating diseases or conferring advantages that would ripple through a child’s genome. This all may sound like science fiction, but to those following the research, the combination of IVG and gene editing appears highly likely, if not inevitable.”
From Crazy to Commonplace?
It’s a slippery slope from gene editing and embryo-mating to a dystopian race to build the most perfect humans possible. If somebody’s investing so much time and energy in selecting their embryo, Metzl asked, how will they think about the mating choices of their children? IVG could quickly leave the realm of healthcare and enter that of evolution.
“We all need to be part of an inclusive, integrated, global dialogue on the future of our species,” Metzl said. “Healthcare professionals are essential nodes in this.” Not least among this dialogue should be the question of access to tech like IVG; are there steps we can take to keep it from becoming a tool for a wealthy minority, and thereby perpetuating inequality and further polarizing societies?
As Pontin points out, at its inception 40 years ago IVF also sparked fear, confusion, and resistance—and now it’s as normal and common as could be, with millions of healthy babies conceived using the technology.
The disruption that genomics, AI, and IVG will bring to reproduction could follow a similar story cycle—if we’re smart about it. As Metzl put it, “This must be regulated, because it is life.”
Image Credit: hywards / Shutterstock.com
#433807 The How, Why, and Whether of Custom ...
A digital afterlife may soon be within reach, but it might not be for your benefit.
The reams of data we’re creating could soon make it possible to create digital avatars that live on after we die, aimed at comforting our loved ones or sharing our experience with future generations.
That may seem like a disappointing downgrade from the vision promised by the more optimistic futurists, where we upload our consciousness to the cloud and live forever in machines. But it might be a realistic possibility in the not-too-distant future—and the first steps have already been taken.
After her friend died in a car crash, Eugenia Kuyda, co-founder of Russian AI startup Luka, trained a neural network-powered chatbot on their shared message history to mimic him. Journalist and amateur coder James Vlahos took a more involved approach, carrying out extensive interviews with his terminally ill father so that he could create a digital clone of him when he died.
For those of us without the time or expertise to build our own artificial intelligence-powered avatar, startup Eternime is offering to take your social media posts and interactions as well as basic personal information to build a copy of you that could then interact with relatives once you’re gone. The service is so far only running a private beta with a handful of people, but with 40,000 on its waiting list, it’s clear there’s a market.
Comforting—Or Creepy?
The whole idea may seem eerily similar to the Black Mirror episode Be Right Back, in which a woman pays a company to create a digital copy of her deceased husband and eventually a realistic robot replica. And given the show’s focus on the emotional turmoil she goes through, people might question whether the idea is a sensible one.
But it’s hard to say at this stage whether being able to interact with an approximation of a deceased loved one would be a help or a hindrance in the grieving process. The fear is that it could make it harder for people to “let go” or “move on,” but others think it could play a useful therapeutic role, reminding people that just because someone is dead it doesn’t mean they’re gone, and providing a novel way for them to express and come to terms with their feelings.
While at present most envisage these digital resurrections as a way to memorialize loved ones, there are also more ambitious plans to use the technology as a way to preserve expertise and experience. A project at MIT called Augmented Eternity is investigating whether we could use AI to trawl through someone’s digital footprints and extract both their knowledge and elements of their personality.
Project leader Hossein Rahnama says he’s already working with a CEO who wants to leave behind a digital avatar that future executives could consult with after he’s gone. And you wouldn’t necessarily have to wait until you’re dead—experts could create virtual clones of themselves that could dispense advice on demand to far more people. These clones could soon be more than simple chatbots, too. Hollywood has already started spending millions of dollars to create 3D scans of its most bankable stars so that they can keep acting beyond the grave.
It’s easy to see the appeal of the idea; imagine being able to bring back Stephen Hawking, or consult a digital clone of a living expert like Tim Cook, to share their wisdom with us. And what if we could create a digital brain trust combining the experience and wisdom of all the world’s greatest thinkers, accessible on demand?
But there are still huge hurdles ahead before we can create truly accurate representations of people by simply trawling through their digital remains. The first problem is data. Most people’s digital footprints only started reaching significant proportions in the last decade or so, and cover a relatively small period of their lives. It could take many years before there’s enough data to create more than just a superficial imitation of someone.
And that’s assuming that the data we produce is truly representative of who we are. Carefully crafted Instagram profiles and cautiously worded work emails hardly capture the messy realities of most people’s lives.
Perhaps if the idea is simply to create a bank of someone’s knowledge and expertise, accurately capturing the essence of their character would be less important. But these clones would also be static. Real people continually learn and change, but a digital avatar is a snapshot of someone’s character and opinions at the point they died. An inability to adapt as the world around them changes could put a shelf life on the usefulness of these replicas.
Who’s Calling the (Digital) Shots?
It won’t stop people trying, though, and that raises a potentially more important question: Who gets to make the calls about our digital afterlife? The subjects, their families, or the companies that hold their data?
In most countries, the law is currently pretty hazy on this topic. Companies like Google and Facebook have processes to let you choose who should take control of your accounts in the event of your death. But if you’ve forgotten to do that, the fate of your virtual remains comes down to a tangle of federal law, local law, and tech company terms of service.
This lack of regulation could create incentives and opportunities for unscrupulous behavior. The voice of a deceased loved one could be a highly persuasive tool for exploitation, and digital replicas of respected experts could be powerful means of pushing a hidden agenda.
That means there’s a pressing need for clear and unambiguous rules. Researchers at Oxford University recently suggested ethical guidelines that would treat our digital remains the same way museums and archaeologists are required to treat mortal remains—with dignity and in the interest of society.
Whether those kinds of guidelines are ever enshrined in law remains to be seen, but ultimately they may decide whether the digital afterlife turns out to be heaven or hell.
Image Credit: frankie’s / Shutterstock.com
#433728 AI Is Kicking Space Exploration into ...
Artificial intelligence in space exploration is gathering momentum. Over the coming years, new missions look likely to be turbo-charged by AI as we voyage to comets, moons, and planets and explore the possibilities of mining asteroids.
“AI is already a game-changer that has made scientific research and exploration much more efficient. We are not just talking about a doubling but about a multiple of ten,” Leopold Summerer, Head of the Advanced Concepts and Studies Office at ESA, said in an interview with Singularity Hub.
Examples Abound
The history of AI and space exploration is older than many probably think. It has already played a significant role in research into our planet, the solar system, and the universe. As computer systems and software have developed, so have AI’s potential use cases.
The Earth Observing-1 (EO-1) satellite is a good example. Since its launch in 2000, its onboard AI systems have helped optimize analysis of and responses to natural events like floods and volcanic eruptions. In some cases, the AI was able to tell EO-1 to start capturing images before the ground crew was even aware that the event had taken place.
Other satellite and astronomy examples abound. The Sky Image Cataloging and Analysis Tool (SKICAT) assisted with the classification of objects discovered during the second Palomar Sky Survey, classifying thousands of objects too faint or low-resolution for humans to categorize reliably. Similar AI systems have helped astronomers identify 56 new possible gravitational lenses, which play a crucial role in research into dark matter.
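SKICAT is reported to have used decision-tree learning for this kind of classification. A minimal sketch of that general idea, using synthetic stand-in features rather than the survey’s real measurements, might look like this:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic "survey" data: each detection described by three made-up
# features [brightness, ellipticity, radial profile slope].
rng = np.random.default_rng(1)
n = 1000
stars = rng.normal([1.0, 0.1, 2.0], 0.2, size=(n, 3))      # star-like objects
galaxies = rng.normal([0.6, 0.4, 1.2], 0.2, size=(n, 3))   # galaxy-like objects
X = np.vstack([stars, galaxies])
y = np.array([0] * n + [1] * n)   # 0 = star, 1 = galaxy

# Fit a shallow decision tree, then classify a new faint detection.
clf = DecisionTreeClassifier(max_depth=4).fit(X, y)
print(clf.predict([[0.7, 0.35, 1.3]]))  # -> [1], i.e. galaxy-like
```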
AI’s ability to trawl through vast amounts of data and find correlations will become increasingly important for getting the most out of the available data. ESA’s ENVISAT produces around 400 terabytes of new data every year—but it will be dwarfed by the Square Kilometre Array, which is expected to produce as much data in a single day as currently exists on the entire internet.
AI Readying For Mars
AI is also being used for trajectory and payload optimization, both important preliminary steps for NASA’s next rover mission to Mars, the Mars 2020 Rover, which is, slightly ironically, set to land on the red planet in early 2021.
An AI known as AEGIS is already on the red planet onboard NASA’s current rovers. The system can handle autonomous targeting of cameras and choose what to investigate. However, the next generation of AIs will be able to control vehicles, autonomously assist with study selection, and dynamically schedule and perform scientific tasks.
Throughout his career, John Leif Jørgensen from DTU Space in Denmark has designed equipment and systems that have been on board about 100 satellites—and counting. He is part of the team behind the Mars 2020 Rover’s autonomous scientific instrument PIXL, which makes extensive use of AI. Its purpose is to investigate whether there have been lifeforms like stromatolites on Mars.
“PIXL’s microscope is situated on the rover’s arm and needs to be placed 14 millimetres from what we want it to study. That happens thanks to several cameras placed on the rover. It may sound simple, but the handover process and finding out exactly where to place the arm can be likened to identifying a building from the street from a picture taken from the roof. This is something that AI is eminently suited for,” he said in an interview with Singularity Hub.
AI also helps PIXL operate autonomously throughout the night and continuously adjust as the environment changes—the temperature changes between day and night can be more than 100 degrees Celsius, meaning that the ground beneath the rover, the cameras, the robotic arm, and the rock being studied all keep changing distance.
“AI is at the core of all of this work, and helps almost double productivity,” Jørgensen said.
First Mars, Then Moons
Mars is likely far from the final destination for AIs in space. Jupiter’s moons have long fascinated scientists, especially Europa, which could house a subsurface ocean buried beneath an ice crust roughly 10 km thick. That makes it one of the most likely candidates for finding life elsewhere in the solar system.
While that mission may be some time in the future, NASA is currently planning to launch the James Webb Space Telescope into an orbit around 1.5 million kilometers from Earth in 2020. Part of the mission will involve AI-empowered autonomous systems overseeing the full deployment of the telescope’s 705-kilogram mirror.
The distances between Earth and Europa, or Earth and the James Webb telescope, mean a delay in communications. That, in turn, makes it imperative for the craft to be able to make their own decisions. Experience from the Mars rover missions shows that communication between a rover and Earth can take around 20 minutes each way because of the vast distance. A Europa mission would face much longer communication times.
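Those delays follow directly from the finite speed of light. A quick back-of-the-envelope calculation, using approximate one-way distances that vary with orbital geometry, reproduces the figures above:

```python
C_KM_S = 299_792  # speed of light in vacuum, km/s

# Approximate one-way distances; real values depend on orbital geometry.
targets_km = {
    "Mars (closest approach)": 54.6e6,
    "Mars (farthest)": 401.0e6,
    "Jupiter/Europa (closest)": 588.0e6,
    "JWST orbit (Sun-Earth L2)": 1.5e6,
}

for name, km in targets_km.items():
    minutes = km / C_KM_S / 60  # one-way light-travel time
    print(f"{name:26s} {minutes:6.1f} min")
# Mars ranges from roughly 3 to 22 minutes one way;
# a Europa mission would see over half an hour each way.
```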
Both missions, to varying degrees, illustrate one of the most significant challenges currently facing the use of AI in space exploration. There tends to be a direct correlation between how well AI systems perform and how much data they have been fed. The more, the better, as it were. But we simply don’t have very much data to feed such a system about what it’s likely to encounter on a mission to a place like Europa.
Computing power presents a second challenge. A strenuous, time-consuming approval process and the risk of radiation damage mean that your computer at home would likely be more powerful than anything going into space in the near future. A 200 MHz processor, 256 megabytes of RAM, and 2 gigabytes of memory sound a lot more like a Nokia 3210 (the one you could use as an ice hockey puck without it noticing) than an iPhone X—but that’s actually the ‘brain’ that will be onboard the next rover.
Private Companies Taking Off
Private companies are helping to push past those limitations. CB Insights charts 57 startups in the space sector, covering areas as diverse as natural resources, consumer tourism, R&D, satellites, spacecraft design and launch, and data analytics.
David Chew works as an engineer for the Japanese satellite company Axelspace. He explained how private companies are pushing the speed of exploration and lowering costs.
“Many private space companies are taking advantage of fall-back systems and finding ways of using parts and systems that traditional companies have thought of as non-space-grade. By implementing fall-backs, and using AI, it is possible to integrate and use parts that lower costs without adding risk of failure,” he said in an interview with Singularity Hub.
Terraforming Our Future Home
Further into the future, moonshots like terraforming Mars await. Without AI, these kinds of projects to adapt other planets to Earth-like conditions would be impossible.
Autonomous craft are already terraforming here on Earth. BioCarbon Engineering uses drones to plant up to 100,000 trees in a single day. Drones first survey and map an area, then an algorithm decides the optimal locations for the trees before a second wave of drones carries out the actual planting.
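That planning step can be pictured as a constrained optimization over the survey map. The sketch below is a hypothetical stand-in, not BioCarbon Engineering’s actual algorithm: assuming a gridded suitability map derived from the drone survey, it greedily picks the best cells while enforcing a minimum spacing between trees.

```python
import numpy as np

rng = np.random.default_rng(2)
suitability = rng.random((50, 50))  # stand-in survey map; 1.0 = ideal terrain
MIN_SPACING = 3                     # minimum grid-cell distance between trees
MAX_TREES = 100                     # planting payload per sortie (assumed)

# Rank all grid cells from most to least suitable.
order = np.argsort(suitability, axis=None)[::-1]
candidates = np.column_stack(np.unravel_index(order, suitability.shape))

chosen = []
for r, c in candidates:
    # Accept a cell only if it keeps the required spacing from every
    # previously chosen site (Manhattan distance used for simplicity).
    if all(abs(r - r2) + abs(c - c2) >= MIN_SPACING for r2, c2 in chosen):
        chosen.append((int(r), int(c)))
        if len(chosen) >= MAX_TREES:
            break

print(f"selected {len(chosen)} planting sites")
```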
As is often the case with exponential technologies, there is great potential for synergies and convergence: AI and robotics, for example, or quantum computing and machine learning. Why not send an AI-driven robot to Mars and use it as a telepresence for scientists on Earth? It could be argued that we are already in the early stages of doing just that, using VR and AR systems that take data from the Mars rovers and create a virtual landscape scientists can walk around in and make decisions on what the rovers should explore next.
One of the biggest benefits of AI in space exploration may not have that much to do with its actual functions. Chew believes that within as little as ten years, we could see the first mining of asteroids in the Kuiper Belt with the help of AI.
“I think one of the things that AI does to space exploration is that it opens up a whole range of new possible industries and services that have a more immediate effect on the lives of people on Earth,” he said. “It becomes a relatable industry that has a real effect on people’s daily lives. In a way, space exploration becomes part of people’s mindset, and the border between our planet and the solar system becomes less important.”
Image Credit: Taily / Shutterstock.com
#433620 Instilling the Best of Human Values in ...
Now that the era of artificial intelligence is unquestionably upon us, it behooves us to think and work harder to ensure that the AIs we create embody positive human values.
Science fiction is full of AIs that manifest the dark side of humanity, or are indifferent to humans altogether. Such possibilities cannot be ruled out, but nor is there any logical or empirical reason to consider them highly likely. I am among a large group of AI experts who see a strong potential for profoundly positive outcomes in the AI revolution currently underway.
We are facing a future with great uncertainty and tremendous promise, and the best we can do is to confront it with a combination of heart and mind, of common sense and rigorous science. In the realm of AI, what this means is, we need to do our best to guide the AI minds we are creating to embody the values we cherish: love, compassion, creativity, and respect.
The quest for beneficial AI has many dimensions, including its potential to reduce material scarcity and to help unlock the human capacity for love and compassion.
Reducing Scarcity
A large percentage of difficult issues in human society, many of which spill over into the AI domain, would be palliated significantly if material scarcity became less of a problem. Fortunately, AI has great potential to help here. AI is already increasing efficiency in nearly every industry.
In the next few decades, as nanotech and 3D printing continue to advance, AI-driven design will become a larger factor in the economy. Radical new tools like artificial enzymes built using Christian Schafmeister’s spiroligomer molecules, and designed using quantum physics-savvy AIs, will enable the creation of new materials and medicines.
For amazing advances like the intersection of AI and nanotech to lead toward broadly positive outcomes, however, the economic and political aspects of the AI industry may have to shift from the current status quo.
Currently, most AI development occurs under the aegis of military organizations or large corporations oriented heavily toward advertising and marketing. Put crudely, an awful lot of AI today is about “spying, brainwashing, or killing.” This is not really the ideal situation if we want our first true artificial general intelligences to be open-minded, warm-hearted, and beneficial.
Also, as the bulk of AI development now occurs in large for-profit organizations bound by law to pursue the maximization of shareholder value, we face a situation where AI tends to exacerbate global wealth inequality and class divisions. This has the potential to lead to various civilization-scale failure modes involving the intersection of geopolitics, AI, cyberterrorism, and so forth. Part of my motivation for founding the decentralized AI project SingularityNET was to create an alternative mode of dissemination and utilization of both narrow AI and AGI—one that operates in a self-organizing way, outside of the direct grip of conventional corporate and governmental structures.
In the end, though, I worry that radical material abundance and novel political and economic structures may fail to create a positive future, unless they are coupled with advances in consciousness and compassion. AGIs have the potential to be massively more ethical and compassionate than humans. But still, the odds of getting deeply beneficial AGIs seem higher if the humans creating them are fuller of compassion and positive consciousness—and can effectively pass these values on.
Transmitting Human Values
Brain-computer interfacing is another critical aspect of the quest for creating more positive AIs and more positive humans. As Elon Musk has put it, “If you can’t beat ’em, join ’em.” Joining is more fun than beating anyway. What better way to infuse AIs with human values than to connect them directly to human brains, and let them learn directly from the source (while providing humans with valuable enhancements)?
Millions of people recently heard Elon Musk discuss AI and BCI on the Joe Rogan podcast. Musk’s embrace of brain-computer interfacing is laudable, but he tends to dodge some of the tough issues—for instance, he does not emphasize the trade-off cyborgs will face between retaining human-ness and maximizing intelligence, joy, and creativity. To make this trade-off effectively, the AI portion of the cyborg will need to have a deep sense of human values.
Musk calls humanity the “biological boot loader” for AGI, but to me this colorful metaphor misses a key point—that we can seed the AGI we create with our values as an initial condition. This is one reason why it’s important that the first really powerful AGIs are created by decentralized networks, and not conventional corporate or military organizations. The decentralized software/hardware ecosystem, for all its quirks and flaws, has more potential to lead to human-computer cybernetic collective minds that are reasonable and benevolent.
Algorithmic Love
BCI is still in its infancy, but a more immediate way of connecting people with AIs to infuse both with greater love and compassion is to leverage humanoid robotics technology. Toward this end, I conceived a project called Loving AI, focused on using highly expressive humanoid robots like the Hanson robot Sophia to lead people through meditations and other exercises oriented toward unlocking the human potential for love and compassion. My goals here were to explore the potential of AI and robots to have a positive impact on human consciousness, and to use this application to study and improve the OpenCog and SingularityNET tools used to control Sophia in these interactions.
The Loving AI project has now run two small sets of human trials, both with exciting and positive results. These have been small—dozens rather than hundreds of people—but have definitively proven the point. Put a person in a quiet room with a humanoid robot that can look them in the eye, mirror their facial expressions, recognize some of their emotions, and lead them through simple meditation, listening, and consciousness-oriented exercises…and quite a lot of the time, the result is a more relaxed person who has entered into a shifted state of consciousness, at least for a period of time.
In a certain percentage of cases, the interaction with the robot consciousness guide triggered a dramatic change of consciousness in the human subject—a deep meditative trance state, for instance. In most cases, the result was not so extreme, but statistically the positive effect was quite significant across all cases. Furthermore, a similar effect was found using an avatar simulation of the robot’s face on a tablet screen (together with a webcam for facial expression mirroring and recognition), but not with a purely auditory interaction.
The Loving AI experiments are not only about AI; they are about human-robot and human-avatar interaction, with AI as one significant aspect. The facial interaction with the robot or avatar is pushing “biological buttons” that trigger emotional reactions and prime the mind for changes of consciousness. However, this sort of body-mind interaction is arguably critical to human values and what it means to be human; it’s an important thing for robots and AIs to “get.”
Halting or pausing the advance of AI is not a viable possibility at this stage. Despite the risks, the potential economic and political benefits involved are clear and massive. The convergence of narrow AI toward AGI is also a near inevitability, because there are so many important applications where greater generality of intelligence will lead to greater practical functionality. The challenge is to make the outcome of this great civilization-level adventure as positive as possible.
Image Credit: Anton Gvozdikov / Shutterstock.com