If a Robot Is Conscious, Is It OK to ...
In the Star Trek: The Next Generation episode “The Measure of a Man,” Data, an android crew member of the Enterprise, is to be dismantled for research purposes unless Captain Picard can argue that Data deserves the same rights as a human being. Naturally the question arises: What is the basis upon which something has rights? What gives an entity moral standing?
The philosopher Peter Singer argues that creatures that can feel pain or suffer have a claim to moral standing. He argues that nonhuman animals have moral standing, since they can feel pain and suffer. Limiting moral standing to people alone would be a form of speciesism, something akin to racism and sexism.
Without endorsing Singer’s line of reasoning, we might wonder if it can be extended further to an android robot like Data. It would require that Data can either feel pain or suffer. And how you answer that depends on how you understand consciousness and intelligence.
As real artificial intelligence technology advances toward Hollywood’s imagined versions, the question of moral standing grows more important. If AIs have moral standing, philosophers like me reason, it could follow that they have a right to life. That means you cannot simply dismantle them, and might also mean that people shouldn’t interfere with their pursuing their goals.
Two Flavors of Intelligence and a Test
IBM’s Deep Blue was built to play chess, and it famously defeated grandmaster Garry Kasparov. But it could not do anything else. This computer had what’s called domain-specific intelligence.
On the other hand, there’s the kind of intelligence that allows for the ability to do a variety of things well. It is called domain-general intelligence. It’s what lets people cook, ski, and raise children—tasks that are related, but also very different.
Artificial general intelligence, AGI, is the term for machines that have domain-general intelligence. Arguably no machine has yet demonstrated that kind of intelligence. In the summer of 2020, the research lab OpenAI released GPT-3, a new version of its Generative Pre-trained Transformer language model. GPT-3 is a natural language processing system, trained to read and write text that people can easily understand.
It drew immediate notice, not just because of its impressive ability to mimic stylistic flourishes and put together plausible content, but also because of how far it had come from a previous version. Despite this impressive performance, GPT-3 doesn’t actually know anything beyond how to string words together in various ways. AGI remains quite far off.
Named after pioneering AI researcher Alan Turing, the Turing test helps determine when an AI is intelligent. Can a person conversing with a hidden AI tell whether it’s an AI or a human being? If they can’t, then for all practical purposes, the AI is intelligent. But this test says nothing about whether the AI might be conscious.
Two Kinds of Consciousness
There are two parts to consciousness. First, there’s the what-it’s-like-for-me aspect of an experience, the sensory part of consciousness. Philosophers call this phenomenal consciousness. It’s about how you experience a phenomenon, like smelling a rose or feeling pain.
In contrast, there’s also access consciousness. That’s the ability to report, reason, behave, and act in a coordinated and responsive manner to stimuli based on goals. For example, when I pass the soccer ball to my friend making a play on the goal, I am responding to visual stimuli, acting from prior training, and pursuing a goal determined by the rules of the game. I make the pass automatically, without conscious deliberation, in the flow of the game.
Blindsight nicely illustrates the difference between the two types of consciousness. Someone with this neurological condition might report, for example, that they cannot see anything in the left side of their visual field. But if asked to pick up a pen from an array of objects in the left side of their visual field, they can reliably do so. They cannot see the pen, yet they can pick it up when prompted—an example of access consciousness without phenomenal consciousness.
Data is an android. How do these distinctions play out with respect to him?
The Data Dilemma
The android Data demonstrates that he is self-aware in that he can monitor whether or not, for example, he is optimally charged or there is internal damage to his robotic arm.
Data is also intelligent in the general sense. He does a lot of distinct things at a high level of mastery. He can fly the Enterprise, take orders from Captain Picard and reason with him about the best path to take.
He can also play poker with his shipmates, cook, discuss topical issues with close friends, fight with enemies on alien planets, and engage in various forms of physical labor. Data has access consciousness. He would clearly pass the Turing test.
However, Data most likely lacks phenomenal consciousness—he does not, for example, delight in the scent of roses or experience pain. He embodies a supersized version of blindsight. He’s self-aware and has access consciousness—can grab the pen—but across all his senses he lacks phenomenal consciousness.
Now, if Data doesn’t feel pain, at least one of the reasons Singer offers for giving a creature moral standing is not fulfilled. But Data might fulfill the other condition of being able to suffer, even without feeling pain. Suffering might not require phenomenal consciousness the way pain essentially does.
For example, what if suffering were also understood as being thwarted from pursuing a just cause that harms no one? Suppose Data’s goal is to save his crewmate, but he can’t reach her because of damage to one of his limbs. Data’s reduction in functioning, which keeps him from saving his crewmate, is a kind of nonphenomenal suffering. He would have preferred to save the crewmate, and would be better off if he did.
In the episode, the question ends up resting not on whether Data is self-aware—that is not in doubt. Nor is it in question whether he is intelligent—he easily demonstrates that he is in the general sense. What is unclear is whether he is phenomenally conscious. Data is not dismantled because, in the end, his human judges cannot agree on the significance of consciousness for moral standing.
Should an AI Get Moral Standing?
Data is kind; he acts to support the well-being of his crewmates and those he encounters on alien planets. He obeys orders from people and appears unlikely to harm them, and he seems to protect his own existence. For these reasons he appears peaceful and easier to accept into the realm of things that have moral standing.
But what about Skynet in the Terminator movies? Or the worries recently expressed by Elon Musk about AI being more dangerous than nukes, and by Stephen Hawking on AI ending humankind?
Human beings don’t lose their claim to moral standing just because they act against the interests of another person. In the same way, you can’t automatically say that just because an AI acts against the interests of humanity or another AI it doesn’t have moral standing. You might be justified in fighting back against an AI like Skynet, but that does not take away its moral standing. If moral standing is given in virtue of the capacity to nonphenomenally suffer, then Skynet and Data both get it even if only Data wants to help human beings.
There are no artificial general intelligence machines yet. But now is the time to consider what it would take to grant them moral standing. How humanity chooses to answer the question of moral standing for nonbiological creatures will have big implications for how we deal with future AIs—whether kind and helpful like Data, or set on destruction, like Skynet.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Ico Maker / Shutterstock.com
3 Major Materials Science ...
Few recognize the vast implications of materials science.
Building today’s smartphone with 1980s technology would have cost about $110 million; the device would have required nearly 200 kilowatts of power (compared to the roughly 2 kilowatt-hours of energy a phone consumes in a year today) and stood 14 meters tall, according to Applied Materials CTO Omkaram Nalamasu.
That’s the power of materials advances. Materials science has democratized the smartphone, bringing the technology to the pockets of over 3.5 billion people. But far beyond devices and circuitry, materials science stands at the center of innumerable breakthroughs across energy, future cities, transit, and medicine. And on the front lines of Covid-19, materials scientists are forging ahead with biomaterials, nanotechnology, and other materials research to accelerate a solution.
As the name suggests, materials science is the branch devoted to the discovery and development of new materials. It’s an outgrowth of both physics and chemistry, using the periodic table as its grocery store and the laws of physics as its cookbook.
And today, we are in the middle of a materials science revolution. In this article, we’ll unpack the most important materials advancements happening now.
Let’s dive in.
The Materials Genome Initiative
In June 2011 at Carnegie Mellon University, President Obama announced the Materials Genome Initiative, a nationwide effort to use open source methods and AI to double the pace of innovation in materials science. Obama felt this acceleration was critical to the US’s global competitiveness, and held the key to solving significant challenges in clean energy, national security, and human welfare. And it worked.
By using AI to map the hundreds of millions of different possible combinations of elements—hydrogen, boron, lithium, carbon, etc.—the initiative created an enormous database that allows scientists to play a kind of improv jazz with the periodic table.
This new map of the physical world lets scientists combine elements faster than ever before and is helping them create all sorts of novel materials. And an array of new fabrication tools is further amplifying this process, allowing us to work at altogether new scales and sizes, including the atomic scale, where we’re now building materials one atom at a time.
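To make the screening idea more concrete, here is a minimal sketch of what exhaustively enumerating element combinations and ranking them with a learned model might look like. The element list is a toy subset and the predicted_stability function is an invented stand-in; a real pipeline would score candidates with models trained on the initiative's databases and with physics simulations.

```python
from itertools import combinations

# Toy subset of the periodic table; the real search space spans far more
# elements and composition ratios.
ELEMENTS = ["H", "B", "Li", "C", "N", "O", "Si", "Fe"]

def predicted_stability(combo):
    # Hypothetical stand-in for a surrogate model; real efforts learn this
    # from databases of computed and measured material properties.
    return sum(len(symbol) for symbol in combo) / (1 + len(combo))

# Enumerate every 2- and 3-element combination and keep the top candidates.
candidates = [c for r in (2, 3) for c in combinations(ELEMENTS, r)]
ranked = sorted(candidates, key=predicted_stability, reverse=True)

for combo in ranked[:5]:
    print("-".join(combo), round(predicted_stability(combo), 3))
```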
Biggest Materials Science Breakthroughs
These tools have helped create advanced materials like the carbon fiber composites used for lighter-weight vehicles, advanced alloys for more durable jet engines, and biomaterials to replace human joints. We’re also seeing breakthroughs in energy storage and quantum computing. In robotics, new materials are helping us create the artificial muscles needed for humanoid, soft robots—think Westworld in your world.
Let’s unpack some of the leading materials science breakthroughs of the past decade.
(1) Lithium-ion batteries
The lithium-ion battery, which today powers everything from our smartphones to our autonomous cars, was first proposed in the 1970s. It couldn’t make it to market until the 1990s, and didn’t begin to reach maturity until the past few years.
An exponential technology, these batteries have been dropping in price for three decades, plummeting 90 percent between 1990 and 2010, and 80 percent since. Concurrently, they’ve seen an eleven-fold increase in capacity.
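As a back-of-the-envelope check (the percentages are the article's, while the time spans of 1990 to 2010 and 2010 to 2020 are assumptions), those drops imply steady yearly price declines in the 10 to 15 percent range:

```python
def annual_decline(total_drop, years):
    # If price falls by `total_drop` over `years`, the implied constant
    # yearly decline rate is 1 - (1 - total_drop) ** (1 / years).
    return 1 - (1 - total_drop) ** (1 / years)

print(f"1990-2010: {annual_decline(0.90, 20):.1%} per year")  # about 10.9%
print(f"2010-2020: {annual_decline(0.80, 10):.1%} per year")  # about 14.9%
```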
But producing enough of them to meet demand has been an ongoing problem. Tesla has stepped up to the challenge: one of the company’s Gigafactories in Nevada churns out roughly 20 gigawatt-hours of battery storage per year, marking the first time we’ve seen lithium-ion batteries produced at this scale.
Musk predicts that 100 Gigafactories could meet the energy storage needs of the entire globe. Other companies are moving quickly to integrate the technology as well: Renault is building home energy storage systems based on its Zoe batteries, BMW is integrating 500 of its i3 battery packs into the UK’s national energy grid, and Toyota, Nissan, and Audi have all announced pilot projects.
Lithium-ion batteries will continue to play a major role in renewable energy storage, helping bring down solar and wind energy prices to compete with those of coal and gasoline.
(2) Graphene
Derived from the same graphite found in everyday pencils, graphene is a sheet of carbon just one atom thick. It is nearly weightless, but 200 times stronger than steel. Conducting electricity and dissipating heat faster than any other known substance, this super-material has transformative applications.
Graphene enables sensors, high-performance transistors, and even gel that helps neurons communicate in the spinal cord. Many flexible device screens, drug delivery systems, 3D printers, solar panels, and protective fabric use graphene.
As manufacturing costs decrease, this material has the power to accelerate advancements of all kinds.
(3) Perovskite
Right now, the “conversion efficiency” of the average solar panel—a measure of how much captured sunlight can be turned into electricity—hovers around 16 percent, at a cost of roughly $3 per watt.
Perovskite, a light-sensitive crystal and one of our newest materials, has the potential to push that figure to 66 percent, roughly double the best that silicon panels can muster.
Perovskite’s ingredients are widely available and inexpensive to combine. What do all these factors add up to? Affordable solar energy for everyone.
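A rough illustration of why efficiency matters so much for cost: consider one square meter of panel under the standard test irradiance of 1,000 watts per square meter. The $480-per-square-meter panel cost below is a hypothetical figure chosen only so that the 16 percent case reproduces the article's roughly $3 per watt; the point is how cost per watt falls as efficiency rises.

```python
IRRADIANCE = 1000          # watts of sunlight per square meter (standard test condition)
PANEL_COST_PER_M2 = 480    # hypothetical panel cost per square meter, in dollars

for efficiency in (0.16, 0.66):
    watts_out = IRRADIANCE * efficiency
    print(f"{efficiency:.0%}: {watts_out:.0f} W per square meter, "
          f"${PANEL_COST_PER_M2 / watts_out:.2f} per watt")
```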
Materials of the Nano-World
Nanotechnology is the outer edge of materials science, the point where matter manipulation gets nano-small—that’s a million times smaller than an ant, 8,000 times smaller than a red blood cell, and 2.5 times smaller than a strand of DNA.
Nanobots are machines that can be directed to produce more of themselves, or more of whatever else you’d like. And because this takes place at an atomic scale, these nanobots can pull apart any kind of material—soil, water, air—atom by atom, and use these now raw materials to construct just about anything.
Progress has been surprisingly swift in the nano-world, with a bevy of nano-products now on the market. Never want to fold clothes again? Nanoscale additives to fabrics help them resist wrinkling and staining. Don’t do windows? Not a problem! Nano-films make windows self-cleaning, anti-reflective, and capable of conducting electricity. Want to add solar to your house? We’ve got nano-coatings that capture the sun’s energy.
Nanomaterials make lighter automobiles, airplanes, baseball bats, helmets, bicycles, luggage, power tools—the list goes on. Researchers at Harvard built a nanoscale 3D printer capable of producing miniature batteries less than one millimeter wide. And if you don’t like those bulky VR goggles, researchers are now using nanotech to create smart contact lenses with a resolution six times greater than that of today’s smartphones.
And even more is coming. Right now, in medicine, drug delivery nanobots are proving especially useful in fighting cancer. Computing is a stranger story, as a bioengineer at Harvard recently stored 700 terabytes of data in a single gram of DNA.
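The core trick behind DNA storage is that each of the four bases can stand for two bits of data. Below is a minimal sketch of that mapping; real schemes, including the Harvard work, add addressing, error correction, and constraints that avoid sequences which are hard to synthesize or read, none of which is shown here.

```python
BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    bits = "".join(BASE_TO_BITS[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

strand = encode(b"hi")
print(strand, decode(strand))  # CGGACGGC b'hi' -- four bases per byte
```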
On the environmental front, scientists can take carbon dioxide from the atmosphere and convert it into super-strong carbon nanofibers for use in manufacturing. If we can do this at scale—powered by solar—a system one-tenth the size of the Sahara Desert could reduce CO2 in the atmosphere to pre-industrial levels in about a decade.
The applications are endless. And coming fast. Over the next decade, the impact of the very, very small is about to get very, very large.
Final Thoughts
With the help of artificial intelligence and quantum computing over the next decade, the discovery of new materials will accelerate exponentially.
And with these new discoveries, customized materials will grow commonplace. Future knee implants will be personalized to meet the exact needs of each body, both in terms of structure and composition.
Though invisible to the naked eye, nanoscale materials will integrate into our everyday lives, seamlessly improving medicine, energy, smartphones, and more.
Ultimately, the path to demonetization and democratization of advanced technologies starts with redesigning materials, the invisible enabler and catalyst. Our future depends on the materials we create.
(Note: This article is an excerpt from The Future Is Faster Than You Think—my new book, just released on January 28th! To get your own copy, click here!)
Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs who I coach for 3 days every January in Beverly Hills, Ca. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”
If you’d like to learn more and consider joining our 2021 membership, apply here.
(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.
(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)
This article originally appeared on diamandis.com. Read the original article here.
Image Credit: Anand Kumar from Pixabay
AI Is an Energy-Guzzler. We Need to ...
There is a saying that has emerged among the tech set in recent years: AI is the new electricity. The platitude refers to the disruptive power of artificial intelligence for driving advances in everything from transportation to predicting the weather.
Of course, the computers and data centers that support AI’s complex algorithms are very much dependent on electricity. While that may seem obvious, it may be surprising to learn that AI can be extremely power-hungry, especially when it comes to training the models that let machines recognize your face in a photo or let Alexa understand a voice command.
The scale of the problem is difficult to measure, but there have been some attempts to put hard numbers on the environmental cost.
For instance, one paper published on the open-access repository arXiv claimed that the carbon emissions for training a basic natural language processing (NLP) model—algorithms that process and understand language-based data—are equal to the CO2 produced by the average American lifestyle over two years. A more robust model required the equivalent of about 17 years’ worth of emissions.
The authors noted that about a decade ago, NLP models could do the job on a regular commercial laptop. Today, much more sophisticated AI models use specialized hardware like graphics processing units, or GPUs, a chip technology popularized by Nvidia for gaming that also proved capable of supporting computing tasks for AI.
OpenAI, a nonprofit research organization co-founded by tech prophet and profiteer Elon Musk, said that the computing power “used in the largest AI training runs has been increasing exponentially with a 3.4-month doubling time” since 2012. That’s about the time that GPUs started making their way into AI computing systems.
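To get a feel for what a 3.4-month doubling time implies, here is a quick calculation. Only the doubling time comes from OpenAI's analysis; the one-year and six-year windows are illustrative choices, not figures from their report.

```python
DOUBLING_MONTHS = 3.4

def growth_factor(months):
    # Compute grows by 2 ** (elapsed time / doubling time).
    return 2 ** (months / DOUBLING_MONTHS)

print(f"after 1 year: {growth_factor(12):,.0f}x more compute")   # about 12x
print(f"after 6 years: {growth_factor(72):,.0f}x more compute")  # into the millions
```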
Getting Smarter About AI Chip Design
While GPUs from Nvidia remain the gold standard in AI hardware today, a number of startups have emerged to challenge the company’s industry dominance. Many are building chipsets designed to work more like the human brain, an area that’s been dubbed neuromorphic computing.
One of the leading companies in this arena is Graphcore, a UK startup that has raised more than $450 million and boasts a valuation of $1.95 billion. The company’s version of the GPU is an IPU, which stands for intelligence processing unit.
To build a computer brain more akin to a human one, the big brains at Graphcore are trading the precise but time-consuming number-crunching typical of a conventional microprocessor for less precise arithmetic that is good enough for the job.
The results are essentially the same, but IPUs get the job done much quicker. Graphcore claimed it was able to train the popular BERT NLP model in just 56 hours, while tripling throughput and reducing latency by 20 percent.
An article in Bloomberg compared the approach to the “human brain shifting from calculating the exact GPS coordinates of a restaurant to just remembering its name and neighborhood.”
Graphcore’s hardware architecture also builds more memory into the processor itself, boosting efficiency because less data has to be sent back and forth between chips. That’s similar to an approach adopted by a team of researchers in Italy that recently published a paper about a new computing circuit.
The novel circuit uses a device called a memristor that can execute a mathematical function known as a regression in just one operation. The approach attempts to mimic the human brain by processing data directly within the memory.
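The basic physics is easy to sketch. In a memristor crossbar, a model's weights are stored as conductances, the inputs are applied as voltages, and Ohm's and Kirchhoff's laws deliver every weighted sum as a column current in one analog step. The toy simulation below illustrates that idea with made-up numbers; it is not the Politecnico di Milano circuit itself.

```python
import numpy as np

conductances = np.array([[0.2, 0.7],     # one memristor at each row/column crossing
                         [0.5, 0.1],
                         [0.9, 0.4]])    # 3 inputs feeding 2 outputs
voltages = np.array([1.0, 0.5, 0.25])    # the input vector, applied to the rows

# Each column current is the sum over rows of V_i * G_ij. We simulate it
# digitally here, but in the physical array the summation happens in the
# wires themselves, so the weights never move.
currents = voltages @ conductances
print(currents)
```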
Daniele Ielmini at Politecnico di Milano, co-author of the Science Advances paper, told Singularity Hub that in-memory computing has two main advantages: it eliminates data movement, the chief bottleneck of conventional digital computers, and it processes data in parallel through the intimate interactions among the currents and voltages within the memory array.
Ielmini explained that in-memory computing can have a “tremendous impact on energy efficiency of AI, as it can accelerate very advanced tasks by physical computation within the memory circuit.” He added that such “radical ideas” in hardware design will be needed in order to make a quantum leap in energy efficiency and time.
It’s Not Just a Hardware Problem
The emphasis on designing more efficient chip architecture might suggest that AI’s power hunger is essentially a hardware problem. That’s not the case, Ielmini noted.
“We believe that significant progress could be made by similar breakthroughs at the algorithm and dataset levels,” he said.
He’s not the only one.
One of the key research areas at Qualcomm’s AI research lab is energy efficiency. Max Welling, vice president of Qualcomm’s technology R&D division, has written about the need for more power-efficient algorithms. He has gone so far as to suggest that AI algorithms will be measured by the amount of intelligence they provide per joule.
One emerging area being studied, Welling wrote, is the use of Bayesian deep learning for deep neural networks.
It’s all pretty heady stuff and easily the subject of a PhD thesis. The main thing to understand in this context is that Bayesian deep learning is another attempt to mimic how the brain processes information by introducing random values into the neural network. A benefit of Bayesian deep learning is that it compresses and quantifies data in order to reduce the complexity of a neural network. In turn, that reduces the number of “steps” required to recognize a dog as a dog—and the energy required to get the right result.
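Here is a minimal sketch of that "random values" idea, with invented numbers: instead of one fixed value per weight, each connection keeps a mean and an uncertainty, and the weights are sampled afresh every time the layer runs. A real Bayesian network would also learn these distributions (for example, with variational inference) and use the learned uncertainty to prune or compress itself, which is where the energy savings come from.

```python
import numpy as np

rng = np.random.default_rng(0)

weight_mean = np.ones((4, 3)) * 0.5   # stand-in for learned means, 4 inputs -> 3 outputs
weight_std = np.full((4, 3), 0.1)     # stand-in for learned per-weight uncertainty

def bayesian_layer(x):
    # Draw a fresh set of weights on every call; predictions become a distribution.
    sampled_weights = rng.normal(weight_mean, weight_std)
    return x @ sampled_weights

x = np.ones(4)
samples = np.stack([bayesian_layer(x) for _ in range(100)])
print(samples.mean(axis=0), samples.std(axis=0))  # prediction and its spread
```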
A team at Oak Ridge National Laboratory has previously demonstrated another way to improve AI energy efficiency: converting deep learning neural networks into what’s called a spiking neural network. The researchers spiked their deep spiking neural network (DSNN) by introducing a stochastic process that adds random values, much as Bayesian deep learning does.
The DSNN imitates the way neurons interact with synapses, which send signals between brain cells. Individual “spikes” in the network indicate where to perform computations, lowering energy consumption because the network disregards computations it doesn’t need.
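A toy leaky integrate-and-fire neuron shows the event-driven idea in a few lines. This is only an illustration of the general principle, not Oak Ridge's DSNN, and every parameter value is invented: the neuron accumulates input, leaks charge over time, and triggers downstream work only when its potential crosses a threshold.

```python
import numpy as np

rng = np.random.default_rng(1)

potential, threshold, leak = 0.0, 1.0, 0.9
inputs = rng.uniform(0, 0.3, size=50)        # a stream of input currents

spike_times = []
for t, current in enumerate(inputs):
    potential = potential * leak + current   # leak, then integrate
    if potential >= threshold:               # fire only when needed...
        spike_times.append(t)
        potential = 0.0                      # ...then reset
print(f"{len(spike_times)} spikes in {len(inputs)} steps, at times {spike_times}")
```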
Oak Ridge’s system is being used by cancer researchers to scan millions of clinical reports and unearth insights on the causes and treatments of the disease.
Helping battle cancer is only one of many rewards we may reap from artificial intelligence in the future, as long as the benefits of those algorithms outweigh the costs of using them.
“Making AI more energy-efficient is an overarching objective that spans the fields of algorithms, systems, architecture, circuits, and devices,” Ielmini said.
Image Credit: analogicus from Pixabay