
#432512 How Will Merging Minds and Machines ...

One of the most exciting and frightening outcomes of technological advancement is the potential to merge our minds with machines. If achieved, this would profoundly boost our cognitive capabilities. More importantly, however, it could be a revolution in human identity, emotion, spirituality, and self-awareness.

Brain-machine interface technology is already being developed by pioneers and researchers around the globe. It’s still early and today’s tech is fairly rudimentary, but it’s a fast-moving field, and some believe it will advance faster than generally expected. Futurist Ray Kurzweil has predicted that by the 2030s we will be able to connect our brains to the internet via nanobots that will “provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the internet, and otherwise greatly expand human intelligence.” Even if the advances are less dramatic, however, they’ll have significant implications.

How might this technology affect human consciousness? What are its implications for our sentience, our self-awareness, and our subjective experience of the illusion of self?

Consciousness can be hard to define, but a holistic definition often encompasses many of our most fundamental capacities, such as wakefulness, self-awareness, meta-cognition, and sense of agency. Beyond that, consciousness represents a spectrum of awareness, as seen across various species of animals. Even humans experience different levels of existential awareness.

From psychedelics to meditation, there are many tools we already use to alter and heighten our conscious experience, both temporarily and permanently. These tools have been said to contribute to a richer life, with the potential to bring experiences of beauty, love, inner peace, and transcendence. Relatively non-invasive, these tools show us what a seemingly minor shift in neurochemistry and conscious internal effort can do to the subjective experience of being human.

Taking this into account, what implications might emerging brain-machine interface technologies have on the “self”?

The Tools for Self-Transcendence
At the basic level, we are currently seeing the rise of “consciousness hackers” using techniques like non-invasive brain stimulation, EEG-based neurofeedback, nutrition, virtual reality, and ecstatic experiences to create environments for heightened consciousness and self-awareness. In Stealing Fire, Steven Kotler and Jamie Wheal explore this trillion-dollar altered-states economy and how innovators and thought leaders are “harnessing rare and controversial states of consciousness to solve critical challenges and outperform the competition.” Beyond enhanced productivity, these altered states expose our inner potential and give us a glimpse of a greater state of being.

Expanding consciousness through brain augmentation and implants could one day be just as accessible. Researchers are working on an array of neurotechnologies, ranging from simple, non-invasive electrode-based EEG headsets to invasive implants and techniques like optogenetics, in which neurons are genetically reprogrammed to respond to pulses of light. We’ve already connected two brains via the internet, allowing them to communicate, and future-focused startups are researching the possibilities too. With an eye toward advanced brain-machine interfaces, last year Elon Musk unveiled Neuralink, a company whose ultimate goal is to merge the human mind with AI through a “neural lace.”

Many technologists predict we will one day merge with and, more speculatively, upload our minds onto machines. Neuroscientist Kenneth Hayworth writes in Skeptic magazine, “All of today’s neuroscience models are fundamentally computational by nature, supporting the theoretical possibility of mind-uploading.” This might include connecting with other minds using digital networks or even uploading minds onto quantum computers, which can be in multiple states of computation at a given time.

In their book Evolving Ourselves, Juan Enriquez and Steve Gullans describe a world where evolution is no longer driven by natural processes. Instead, it is driven by human choices, through what they call unnatural selection and non-random mutation. With advancements in genetic engineering, we are indeed seeing evolution become an increasingly conscious process with an accelerated pace. This could one day apply to the evolution of our consciousness as well; we would be using our consciousness to expand our consciousness.

What Will It Feel Like?
We may be able to predict the impact of these technologies on society, but we can only wonder what they will feel like subjectively.

It’s hard to imagine, for example, what our stream of consciousness will feel like when we can process thoughts and feelings 1,000 times faster, or how artificially intelligent brain implants will impact our capacity to love and hate. What will the illusion of “I” feel like when our consciousness is directly plugged into the internet? Overall, what impact will the process of merging with technology have on the subjective experience of being human?

The Evolution of Consciousness
In The Future Evolution of Consciousness, Thomas Lombardo points out, “We are a journey rather than a destination—a chapter in the evolutionary saga rather than a culmination. Just as probable, there will also be a diversification of species and types of conscious minds. It is also very likely that new psychological capacities, incomprehensible to us, will emerge as well.”

Humans are notorious for fearing the unknown. For anyone who has never experienced an altered state, be it spiritual or psychedelic-induced, it is difficult to comprehend the subjective experience of that state. That is why many describe their first altered-state experience as “waking up,” as if until then they didn’t even realize they were asleep.

Similarly, exponential neurotechnology represents the potential for higher states of consciousness and a range of experiences that are unimaginable from within our current default state.

Our capacity to think and feel is set by the boundaries of our biological brains. To transform and expand these boundaries is to transform and expand the first-hand experience of consciousness. Emerging neurotechnology may end up providing the awakening our species needs.

Image Credit: Peshkova / Shutterstock.com


#432324 This Week’s Awesome Stories From ...

China Wants to Shape the Global Future of Artificial Intelligence
Will Knight | MIT Technology Review
“China’s booming AI industry and massive government investment in the technology have raised fears in the US and elsewhere that the nation will overtake international rivals in a fundamentally important technology. In truth, it may be possible for both the US and the Chinese economies to benefit from AI. But there may be more rivalry when it comes to influencing the spread of the technology worldwide. ‘I think this is the first technology area where China has a real chance to set the rules of the game,’ says Ding.”

Astronaut’s Gene Expression No Longer Same as His Identical Twin, NASA Finds
Susan Scutti | CNN
“Preliminary results from NASA’s Twins Study reveal that 7% of astronaut Scott Kelly’s genetic expression—how his genes function within cells—did not return to baseline after his return to Earth two years ago. The study looks at what happened to Kelly before, during and after he spent one year aboard the International Space Station through an extensive comparison with his identical twin, Mark, who remained on Earth.”

This Cheap 3D-Printed Home Is a Start for the 1 Billion Who Lack Shelter
Tamara Warren | The Verge
“ICON has developed a method for printing a single-story 650-square-foot house out of cement in only 12 to 24 hours, a fraction of the time it takes for new construction. If all goes according to plan, a community made up of about 100 homes will be constructed for residents in El Salvador next year. The company has partnered with New Story, a nonprofit that is vested in international housing solutions. ‘We have been building homes for communities in Haiti, El Salvador, and Bolivia,’ Alexandria Lafci, co-founder of New Story, tells The Verge.”

Our Microbiomes Are Making Scientists Question What It Means to Be Human
Rebecca Flowers | Motherboard
“Studies in genetics and Watson and Crick’s discovery of DNA gave more credence to the idea of individuality. But as scientists learn more about the microbiome, the idea of humans as a singular organism is being reconsidered: ‘There is now overwhelming evidence that normal development as well as the maintenance of the organism depend on the microorganisms…that we harbor,’ they state (others have taken this position, too).”

Stephen Hawking, Who Awed Both Scientists and the Public, Dies
Joe Palca | NPR
“Hawking was probably the best-known scientist in the world. He was a theoretical physicist whose early work on black holes transformed how scientists think about the nature of the universe. But his fame wasn’t just a result of his research. Hawking, who had a debilitating neurological disease that made it impossible for him to move his limbs or speak, was also a popular public figure and best-selling author. There was even a biopic about his life, The Theory of Everything, that won an Oscar for the actor, Eddie Redmayne, who portrayed Hawking.”

Image Credit: NASA/JPL-Caltech/STScI


#432193 Are ‘You’ Just Inside Your Skin or ...

In November 2017, a gunman entered a church in Sutherland Springs in Texas, where he killed 26 people and wounded 20 others. He escaped in his car, with police and residents in hot pursuit, before losing control of the vehicle and flipping it into a ditch. When the police got to the car, he was dead. The episode is horrifying enough without its unsettling epilogue. In the course of their investigations, the FBI reportedly pressed the gunman’s finger to the fingerprint-recognition feature on his iPhone in an attempt to unlock it. Regardless of who’s affected, it’s disquieting to think of the police using a corpse to break into someone’s digital afterlife.

Most democratic constitutions shield us from unwanted intrusions into our brains and bodies. They also enshrine our entitlement to freedom of thought and mental privacy. That’s why neurochemical drugs that interfere with cognitive functioning can’t be administered against a person’s will unless there’s a clear medical justification. Similarly, according to scholarly opinion, law-enforcement officials can’t compel someone to take a lie-detector test, because that would be an invasion of privacy and a violation of the right to remain silent.

But in the present era of ubiquitous technology, philosophers are beginning to ask whether biological anatomy really captures the entirety of who we are. Given the role they play in our lives, do our devices deserve the same protections as our brains and bodies?

After all, your smartphone is much more than just a phone. It can tell a more intimate story about you than your best friend. No other piece of hardware in history, not even your brain, contains the quality or quantity of information held on your phone: it ‘knows’ whom you speak to, when you speak to them, what you said, where you have been, your purchases, photos, biometric data, even your notes to yourself—and all this dating back years.

In 2014, the United States Supreme Court used this observation to justify the decision that police must obtain a warrant before rummaging through our smartphones. These devices “are now such a pervasive and insistent part of daily life that the proverbial visitor from Mars might conclude they were an important feature of human anatomy,” as Chief Justice John Roberts observed in his written opinion.

The Chief Justice probably wasn’t making a metaphysical point—but the philosophers Andy Clark and David Chalmers were when they argued in “The Extended Mind” (1998) that technology is actually part of us. According to traditional cognitive science, “thinking” is a process of symbol manipulation or neural computation, which gets carried out by the brain. Clark and Chalmers broadly accept this computational theory of mind, but claim that tools can become seamlessly integrated into how we think. Objects such as smartphones or notepads are often just as functionally essential to our cognition as the synapses firing in our heads. They augment and extend our minds by increasing our cognitive power and freeing up internal resources.

If accepted, the extended mind thesis threatens widespread cultural assumptions about the inviolate nature of thought, which sits at the heart of most legal and social norms. As the US Supreme Court declared in 1942: “freedom to think is absolute of its own nature; the most tyrannical government is powerless to control the inward workings of the mind.” This view has its origins in thinkers such as John Locke and René Descartes, who argued that the human soul is locked in a physical body, but that our thoughts exist in an immaterial world, inaccessible to other people. One’s inner life thus needs protecting only when it is externalized, such as through speech. Many researchers in cognitive science still cling to this Cartesian conception—only, now, the private realm of thought coincides with activity in the brain.

But today’s legal institutions are straining against this narrow concept of the mind. They are trying to come to grips with how technology is changing what it means to be human, and to devise new normative boundaries to cope with this reality. Justice Roberts might not have known about the idea of the extended mind, but it supports his wry observation that smartphones have become part of our body. If our minds now encompass our phones, we are essentially cyborgs: part-biology, part-technology. Given how our smartphones have taken over what were once functions of our brains—remembering dates, phone numbers, addresses—perhaps the data they contain should be treated on a par with the information we hold in our heads. So if the law aims to protect mental privacy, its boundaries would need to be pushed outwards to give our cyborg anatomy the same protections as our brains.

This line of reasoning leads to some potentially radical conclusions. Some philosophers have argued that when we die, our digital devices should be handled as remains: if your smartphone is a part of who you are, then perhaps it should be treated more like your corpse than your couch. Similarly, one might argue that trashing someone’s smartphone should be seen as a form of “extended” assault, equivalent to a blow to the head, rather than just destruction of property. If your memories are erased because someone attacks you with a club, a court would have no trouble characterizing the episode as a violent incident. So if someone breaks your smartphone and wipes its contents, perhaps the perpetrator should be punished as they would be if they had caused a head trauma.

The extended mind thesis also challenges the law’s role in protecting both the content and the means of thought—that is, shielding what and how we think from undue influence. Regulation bars non-consensual interference in our neurochemistry (for example, through drugs), because that meddles with the contents of our mind. But if cognition encompasses devices, then arguably they should be subject to the same prohibitions. Perhaps some of the techniques that advertisers use to hijack our attention online, to nudge our decision-making or manipulate search results, should count as intrusions on our cognitive process. Similarly, in areas where the law protects the means of thought, it might need to guarantee access to tools such as smartphones—in the same way that freedom of expression protects people’s right not only to write or speak, but also to use computers and disseminate speech over the internet.

The courts are still some way from arriving at such decisions. Besides the headline-making cases of mass shooters, there are thousands of instances each year in which police authorities try to get access to encrypted devices. Although the Fifth Amendment to the US Constitution protects individuals’ right to remain silent (and therefore not give up a passcode), judges in several states have ruled that police can forcibly use fingerprints to unlock a user’s phone. (With the new facial-recognition feature on the iPhone X, police might only need to get an unwitting user to look at her phone.) These decisions reflect the traditional concept that the rights and freedoms of an individual end at the skin.

But the concept of personal rights and freedoms that guides our legal institutions is outdated. It is built on a model of a free individual who enjoys an untouchable inner life. Now, though, our thoughts can be invaded before they have even been developed—and in a way, perhaps this is nothing new. The Nobel Prize-winning physicist Richard Feynman used to say that he thought with his notebook. Without pen and paper, a great deal of complex reflection and analysis would never have been possible. If the extended mind view is right, then even simple technologies such as these would merit recognition and protection as a part of the essential toolkit of the mind.

This article was originally published at Aeon and has been republished under Creative Commons.

Image Credit: Sergii Tverdokhlibov / Shutterstock.com


#432009 How Swarm Intelligence Is Making Simple ...

As a group, simple creatures following simple rules can display a surprising amount of complexity, efficiency, and even creativity. Known as swarm intelligence, this trait is found throughout nature, but researchers have recently begun using it to transform various fields such as robotics, data mining, medicine, and blockchains.

Ants, for example, can only perform a limited range of functions, but an ant colony can build bridges, create superhighways of food and information, wage war, and enslave other ant species—all of which are beyond the comprehension of any single ant. Likewise, schools of fish, flocks of birds, beehives, and other species exhibit behavior indicative of planning by a higher intelligence that doesn’t actually exist.

This happens through a process called stigmergy. Simply put, a small change by one group member causes other members to behave differently, leading to a new pattern of behavior.

When an ant finds a food source, it marks the path with pheromones. This attracts other ants to that path, leads them to the food source, and prompts them to mark the same path with more pheromones. Over time, the most efficient route will become the superhighway, as the faster and easier a path is, the more ants will reach the food and the more pheromones will be on the path. Thus, it looks as if a more intelligent being chose the best path, but it emerged from the tiny, simple changes made by individuals.
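That feedback loop is the heart of ant colony optimization. Below is a minimal sketch of the mechanism just described (two candidate paths, evaporation, and length-weighted pheromone deposits), with all values chosen purely for illustration rather than taken from any real colony or published algorithm.

```python
import random

# Two candidate paths from nest to food; all numbers are illustrative.
paths = {"short": 1.0, "long": 2.0}       # path lengths
pheromone = {"short": 1.0, "long": 1.0}   # start with equal pheromone
EVAPORATION = 0.1                         # fraction of pheromone lost each step

def choose_path():
    """Pick a path with probability proportional to its pheromone level."""
    total = sum(pheromone.values())
    r = random.uniform(0, total)
    cumulative = 0.0
    for name, level in pheromone.items():
        cumulative += level
        if r <= cumulative:
            return name
    return name  # fallback for floating-point edge cases

for _ in range(1000):
    chosen = choose_path()
    for name in pheromone:
        pheromone[name] *= (1 - EVAPORATION)   # all trails fade a little
    pheromone[chosen] += 1.0 / paths[chosen]   # shorter paths get reinforced more

print(pheromone)  # the short path ends up holding nearly all the pheromone
```

Run repeatedly, the positive feedback almost always locks the colony onto the short path, even though no individual “ant” ever compares the two routes.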

So what does this mean for humans? Well, a lot. In the past few decades, researchers have developed numerous algorithms and metaheuristics, such as ant colony optimization and particle swarm optimization, and they are rapidly being adopted.
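Particle swarm optimization follows the same logic with numbers instead of ants: each “particle” adjusts its position using only its own best result and the swarm’s best result. Here is a minimal one-dimensional sketch with illustrative hyperparameters, none of them drawn from a specific paper:

```python
import random

def f(x):
    """Toy objective: minimize (x - 3)^2, so the swarm should converge near 3."""
    return (x - 3) ** 2

N, STEPS = 20, 100          # swarm size and iteration count (illustrative)
W, C1, C2 = 0.7, 1.5, 1.5   # inertia, personal pull, social pull (illustrative)

pos = [random.uniform(-10, 10) for _ in range(N)]
vel = [0.0] * N
personal_best = pos[:]                 # best position each particle has found
global_best = min(pos, key=f)          # best position the whole swarm has found

for _ in range(STEPS):
    for i in range(N):
        r1, r2 = random.random(), random.random()
        # Velocity blends inertia, a pull toward the particle's own best,
        # and a pull toward the swarm's best.
        vel[i] = (W * vel[i]
                  + C1 * r1 * (personal_best[i] - pos[i])
                  + C2 * r2 * (global_best - pos[i]))
        pos[i] += vel[i]
        if f(pos[i]) < f(personal_best[i]):
            personal_best[i] = pos[i]
        if f(pos[i]) < f(global_best):
            global_best = pos[i]

print(round(global_best, 3))  # typically prints a value very close to 3.0
```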

Swarm Robotics
A swarm of robots would work on the same principles as an ant colony: each member has a simple set of rules to follow, leading to self-organization and self-sufficiency.

For example, researchers at the Georgia Robotics and InTelligent Systems (GRITS) lab created a small swarm of simple robots that can spell and play piano. The robots cannot communicate, but based solely on the positions of surrounding robots, they use a specially created algorithm to determine the optimal path to complete their task.
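The GRITS robots’ actual algorithm isn’t reproduced here, but the flavor of position-only coordination can be sketched with a classic consensus (rendezvous) rule: each robot repeatedly steps a fraction of the way toward the group’s centroid, and the swarm gathers at a single point with no messages and no leader. Everything below is a hypothetical illustration, not the lab’s code.

```python
import random

# Hypothetical sketch of position-only swarm coordination (rendezvous),
# not the GRITS lab's actual algorithm.
NUM_ROBOTS, STEPS, GAIN = 10, 200, 0.1

# Random starting positions in the plane.
robots = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(NUM_ROBOTS)]

for _ in range(STEPS):
    # Each robot only senses where the others are; no messages are exchanged.
    avg_x = sum(x for x, _ in robots) / NUM_ROBOTS
    avg_y = sum(y for _, y in robots) / NUM_ROBOTS
    # Simple rule: step a fraction of the way toward the group's centroid.
    robots = [(x + GAIN * (avg_x - x), y + GAIN * (avg_y - y)) for x, y in robots]

print(robots[0], robots[-1])  # all robots end up clustered at the same point
```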

This is also immensely useful for drone swarms.

Last February, Ehang, an aviation company out of China, created a swarm of a thousand drones that not only lit the sky with colorful, intricate displays, but demonstrated the ability to improvise and troubleshoot errors entirely autonomously.

Further, just recently, the University of Cambridge and Koc University unveiled their idea for what they call the Energy Neutral Internet of Drones. Amazingly, drones in this swarm would take the initiative to share information or energy with other drones that had not received a communication or were running low on energy.

Militaries all over the world are utilizing this as well.

Last year, the US Department of Defense announced it had successfully tested a swarm of miniature drones that could carry out complex missions cheaper and more efficiently. They claimed, “The micro-drones demonstrated advanced swarm behaviors such as collective decision-making, adaptive formation flying, and self-healing.”

Some experts estimate at least 30 nations are actively developing drone swarms—and even submersible drones—for military missions, including intelligence gathering, missile defense, precision missile strikes, and enhanced communication.

NASA also plans on deploying swarms of tiny spacecraft for space exploration, and the medical community is looking into using swarms of nanobots for precision delivery of drugs, microsurgery, targeting toxins, and biological sensors.

What If Humans Are the Ants?
The strength of any blockchain comes from the size and diversity of the community supporting it. Cryptocurrencies like Bitcoin, Ethereum, and Litecoin are driven by the people using, investing in, and, most importantly, mining them so their blockchains can function. Without an active community, or swarm, their blockchains wither away.

When viewed from a great height, a blockchain performs eerily like an ant colony in that it will naturally find the most efficient way to move vast amounts of information.

Miners compete with each other to perform the complex calculations necessary to add another block, and the winner is rewarded with the blockchain’s native currency and agreed-upon fees. Of course, the miner with more powerful computers is more likely to win the reward, which in turn strengthens that miner’s ability to mine and earn even more rewards. Over time, fewer and fewer miners will exist, as the winners are able to shoulder more of the workload more efficiently, in much the same way that ants build superhighways.
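The competition itself is easy to picture in code: each miner repeatedly hashes the block’s contents with a changing nonce until a hash matching the network’s target appears. The toy loop below illustrates that general idea only; the difficulty value and block data are made up, and real blockchains use far more elaborate rules.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4) -> int:
    """Find a nonce whose SHA-256 hash of (block data + nonce) starts with
    `difficulty` zero hex digits. Purely illustrative, not any real chain's rule."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce          # this miner "wins" the block
        nonce += 1                # otherwise keep guessing

print(mine("block 1: alice pays bob 5"))  # raising difficulty multiplies the guesses needed
```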

Further, a company called Unanimous AI has developed algorithms that allow humans to collectively make predictions. So far, the AI algorithms and their human participants have made some astoundingly accurate predictions, such as the first four winning horses of the Kentucky Derby, the Oscar winners, the Stanley Cup winners, and others. The more people involved in the swarm, the greater their predictive power will be.

To be clear, this is not a prediction based on group consensus. Rather, the swarm of humans uses software to input their opinions in real time, thus making micro-changes to the rest of the swarm and the inputs of other members.

Studies show that swarm intelligence consistently outperforms individuals and crowds working without the algorithms. While this is only the tip of the iceberg, some have suggested swarm intelligence can revolutionize how doctors diagnose a patient or how products are marketed to consumers. It might even be an essential step in truly creating AI.

While swarm intelligence is an essential part of many species’ success, it’s only a matter of time before humans harness its effectiveness as well.

Image Credit: Nature Bird Photography / Shutterstock.com


#431999 Brain-Like Chips Now Beat the Human ...

Move over, deep learning. Neuromorphic computing—the next big thing in artificial intelligence—is on fire.

Just last week, two studies individually unveiled computer chips modeled after information processing in the human brain.

The first, published in Nature Materials, found a perfect solution to deal with unpredictability at synapses—the gaps between neurons that transmit and store information. The second, published in Science Advances, further amped up the system’s computational power, filling synapses with nanoclusters of magnetic manganese to bolster information encoding.

The result? Brain-like hardware systems that compute faster—and more efficiently—than the human brain.

“Ultimately we want a chip as big as a fingernail to replace one big supercomputer,” said Dr. Jeehwan Kim, who led the first study at MIT in Cambridge, Massachusetts.

Experts are hopeful.

“The field’s full of hype, and it’s nice to see quality work presented in an objective way,” said Dr. Carver Mead, an engineer at the California Institute of Technology in Pasadena not involved in the work.

Software to Hardware
The human brain is the ultimate computational wizard. With roughly 100 billion neurons densely packed into the size of a small football, the brain can deftly handle complex computation at lightning speed using very little energy.

AI experts have taken note. The past few years saw brain-inspired algorithms that can identify faces, falsify voices, and play a variety of games at—and often above—human capability.

But software is only part of the equation. Our current computers, with their transistors and binary digital systems, aren’t equipped to run these powerful algorithms.

That’s where neuromorphic computing comes in. The idea is simple: fabricate a computer chip that mimics the brain at the hardware level. Here, data is both processed and stored within the chip in an analog manner. Each artificial synapse can accumulate and integrate small bits of information from multiple sources and fire only when it reaches a threshold—much like its biological counterpart.
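That accumulate-and-fire behavior is, in software terms, a leaky integrate-and-fire neuron. The sketch below shows the idea with made-up numbers; it models neither chip, only the threshold behavior both are trying to reproduce in hardware.

```python
# A toy leaky integrate-and-fire neuron: it sums weighted inputs,
# leaks charge over time, and emits a spike when it crosses a threshold.
THRESHOLD, LEAK = 1.0, 0.9   # illustrative values, not device measurements

def run(inputs, weights):
    potential = 0.0
    spikes = []
    for step, xs in enumerate(inputs):
        potential *= LEAK                                      # gradual decay
        potential += sum(w * x for w, x in zip(weights, xs))   # integrate inputs
        if potential >= THRESHOLD:
            spikes.append(step)   # fire
            potential = 0.0       # reset after spiking
    return spikes

# Three weak input sources; a spike only occurs once their contributions add up.
inputs = [(0.2, 0.1, 0.0), (0.3, 0.2, 0.1), (0.4, 0.3, 0.2), (0.0, 0.0, 0.0)]
print(run(inputs, weights=(1.0, 1.0, 1.0)))  # prints [2]
```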

Experts believe the speed and efficiency gains will be enormous.

For one, the chips will no longer have to transfer data between the central processing unit (CPU) and storage blocks, which wastes both time and energy. For another, like biological neural networks, neuromorphic devices can support neurons that run millions of streams of parallel computation.

A “Brain-on-a-chip”
Optimism aside, reproducing the biological synapse in hardware form hasn’t been as easy as anticipated.

Neuromorphic chips exist in many forms, but often look like a nanoscale metal sandwich. The “bread” pieces are generally made of conductive plates surrounding a switching medium—a conductive material of sorts that acts like the gap in a biological synapse.

When a voltage is applied, as in the case of data input, ions move within the switching medium, which then creates conductive streams to stimulate the downstream plate. This change in conductivity mimics the way biological neurons change their “weight,” or the strength of connectivity between two adjacent neurons.

But so far, neuromorphic synapses have been rather unpredictable. According to Kim, that’s because the switching medium is often composed of material that can’t channel ions to exact locations on the downstream plate.

“Once you apply some voltage to represent some data with your artificial neuron, you have to erase and be able to write it again in the exact same way,” explains Kim. “But in an amorphous solid, when you write again, the ions go in different directions because there are lots of defects.”

In his new study, Kim and colleagues swapped the jelly-like switching medium for silicon, a material with only a single line of defects that acts like a channel to guide ions.

The chip starts with a thin wafer of silicon etched with a honeycomb-like pattern. On top is a layer of silicon germanium—something often present in transistors—in the same pattern. This creates a funnel-like dislocation, a kind of Grand Canal that perfectly shuttles ions across the artificial synapse.

The researchers then made a neuromorphic chip containing these synapses and shot an electrical zap through them. Incredibly, the synapses’ responses varied by only four percent—far more uniform than any neuromorphic device made with an amorphous switching medium.

In a computer simulation, the team built a multi-layer artificial neural network using parameters measured from their device. After tens of thousands of training examples, their neural network correctly recognized samples 95 percent of the time, just 2 percent lower than state-of-the-art software algorithms.
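One way to appreciate why that four percent figure matters is to inject comparable variability into a software network’s weights and check how much its decisions change. The snippet below is a hypothetical experiment of that kind, not the team’s actual simulation; the network, data, and noise model are all made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy two-layer network with fixed ("trained") weights on random inputs.
W1 = rng.normal(size=(8, 4))
W2 = rng.normal(size=(4, 2))
X = rng.normal(size=(1000, 8))

def predict(w1, w2, x):
    hidden = np.maximum(x @ w1, 0)         # ReLU layer
    return np.argmax(hidden @ w2, axis=1)  # class decision

clean = predict(W1, W2, X)

# Scale every weight by a factor drawn near 1.0 with a 4% spread,
# mimicking synapse-to-synapse variation in the hardware.
noisy = predict(W1 * rng.normal(1.0, 0.04, W1.shape),
                W2 * rng.normal(1.0, 0.04, W2.shape), X)

print("decisions unchanged:", np.mean(clean == noisy))
```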

The upside? The neuromorphic chip requires much less space than the hardware that runs deep learning algorithms. Forget supercomputers—these chips could one day run complex computations right on our handheld devices.

A Magnetic Boost
Meanwhile, in Boulder, Colorado, Dr. Michael Schneider at the National Institute of Standards and Technology also realized that the standard switching medium had to go.

“There must be a better way to do this, because nature has figured out a better way to do this,” he says.

His solution? Nanoclusters of magnetic manganese.

Schneider’s chip contained two slices of superconducting electrodes made out of niobium, which channel electricity with no resistance. When researchers applied different magnetic fields to the synapse, they could control the alignment of the manganese “filling.”

The switch gave the chip a double boost. For one, by aligning the switching medium, the team could predict the ion flow and boost uniformity. For another, the magnetic manganese itself adds computational power. The chip can now encode data both in the level of electrical input and in the direction of the magnetism, without bulking up the synapse.

It seriously worked. Firing a billion times per second, the chips were several orders of magnitude faster than human neurons. Plus, the chips required just one ten-thousandth of the energy used by their biological counterparts, all while synthesizing input from nine different sources in an analog manner.

The Road Ahead
These studies show that we may be nearing a benchmark where artificial synapses match—or even outperform—their human inspiration.

But to Dr. Steven Furber, an expert in neuromorphic computing, we still have a ways to go before the chips go mainstream.

Many of the special materials used in these chips require specific temperatures, he says. The magnetic manganese chips, for example, operate only at temperatures near absolute zero, which means they need giant cooling tanks filled with liquid helium—obviously not practical for everyday use.

Another hurdle is scalability. Millions of synapses are necessary before a neuromorphic device can be used to tackle everyday problems such as facial recognition. So far, no deal.

But these problems may in fact be a driving force for the entire field. Intense competition could push teams into exploring different ideas and solutions to similar problems, much like these two studies.

If so, future chips may come in diverse flavors. Similar to our vast array of deep learning algorithms and operating systems, the computer chips of the future may also vary depending on specific requirements and needs.

It is worth developing as many different technological approaches as possible, says Furber, especially as neuroscientists increasingly understand what makes our biological synapses—the ultimate inspiration—so amazingly efficient.

Image Credit: arakio / Shutterstock.com
