Less Like Us: An Alternate Theory of ...
The question of whether an artificial general intelligence will be developed in the future—and, if so, when it might arrive—is controversial. One (very uncertain) estimate suggests 2070 might be the earliest we could expect to see such technology.
Some futurists point to Moore’s Law and the increasing capacity of machine learning algorithms to suggest that a more general breakthrough is just around the corner. Others suggest that extrapolating exponential improvements in hardware is unwise, and that creating narrow algorithms that can beat humans at specialized tasks brings us no closer to a “general intelligence.”
But evolution has produced minds like the human mind at least once. Surely we could create artificial intelligence simply by copying nature, either by guided evolution of simple algorithms or wholesale emulation of the human brain.
Both of these ideas are far easier to conceive of than they are to achieve. Even emulating the 302 neurons of the nematode worm’s nervous system remains an extremely difficult engineering challenge, let alone the 86 billion neurons of a human brain.
Leaving aside these caveats, though, many people are worried about artificial general intelligence. Nick Bostrom’s influential book on superintelligence imagines it will be an agent—an intelligence with a specific goal. Once such an agent reaches a human level of intelligence, it will improve itself—increasingly rapidly as it gets smarter—in pursuit of whatever goal it has, and this “recursive self-improvement” will lead it to become superintelligent.
This “intelligence explosion” could catch humans off guard. If the initial goal is poorly specified or malicious, or if proper safety features are not in place, or if the AI decides it would prefer to do something else instead, humans may be unable to control our own creation. Bostrom gives examples of how a seemingly innocuous goal, such as “Make everyone happy,” could be misinterpreted; perhaps the AI decides to drug humanity into a happy stupor, or convert most of the world into computing infrastructure to pursue its goal.
Drexler and Comprehensive AI Services
These are increasingly familiar concerns for an AI that behaves like an agent, seeking to achieve its goal. There are dissenters to this picture of how artificial general intelligence might arise. One notable alternative point of view comes from Eric Drexler, famous for his work on molecular nanotechnology and Engines of Creation, the book that popularized it.
With respect to AI, Drexler believes that viewing an artificial intelligence as a single “agent” that acts to maximize a specific goal is too narrow, almost anthropomorphizing AI, and not a realistic model of how general intelligence is likely to arise. Instead, he proposes “Comprehensive AI Services” (CAIS) as an alternative route to artificial general intelligence.
What does this mean? Drexler’s argument is that we should look more closely at how machine learning and AI algorithms are actually being developed in the real world. The optimization effort is going into producing algorithms that can provide services and perform tasks like translation, music recommendations, classification, medical diagnoses, and so forth.
AI-driven improvements in technology, argues Drexler, will lead to a proliferation of different algorithms for improving technology and software, which can automate increasingly complicated tasks. Recursive improvement in this regime is already occurring: take the newer versions of AlphaGo, which learn to improve themselves by playing against previous versions.
Many Smart Arms, No Smart Brain
Instead of relying on some unforeseen breakthrough, the CAIS model of AI just assumes that specialized, narrow AI will continue to improve at performing each of its tasks, and the range of tasks that machine learning algorithms will be able to perform will become wider. Ultimately, once a sufficient number of tasks have been automated, the services that an AI will provide will be so comprehensive that they will resemble a general intelligence.
One could then imagine a “general” intelligence as simply an algorithm that is extremely good at matching the task you ask it to perform to the specialized service algorithm that can perform that task. Rather than acting like a single brain that strives to achieve a particular goal, the central AI would be more like a search engine, looking through the tasks it can perform to find the closest match and calling upon a series of subroutines to achieve the goal.
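This dispatcher picture can be sketched in a few lines of code. Everything below (the service names, the functions, and the string-similarity matching) is a hypothetical toy for illustration, not anything Drexler specifies:

```python
# Toy sketch of CAIS: a "general" system that is really a dispatcher,
# routing a requested task to the closest-matching specialized service.
from difflib import SequenceMatcher

# Each "service" is a narrow, specialized capability (stubs here).
SERVICES = {
    "translate text": lambda payload: f"[translation of: {payload}]",
    "recommend music": lambda payload: f"[playlist for: {payload}]",
    "classify image": lambda payload: f"[label for: {payload}]",
}

def dispatch(task, payload):
    """Route a task description to the best-matching narrow service."""
    best = max(SERVICES, key=lambda name: SequenceMatcher(None, task, name).ratio())
    return SERVICES[best](payload)

print(dispatch("translate this text", "bonjour"))
```

The point of the sketch is that nothing in it pursues a goal of its own; the “intelligence” lives entirely in the narrow services and in the quality of the matching.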
For Drexler, this is inherently a safety feature. Rather than Bostrom’s single, impenetrable, conscious, and superintelligent brain (which we must try to psychoanalyze in advance without really knowing what it will look like), we have a network of capabilities. If you don’t want your system to perform certain tasks, you can simply cut it off from access to those services. There is no superintelligent consciousness to outwit or “trap”; the system is more like an extremely high-level programming language that can respond to complicated commands by calling upon one of the myriad specialized algorithms developed by different groups.
This skirts the complex problem of consciousness and all of the sticky moral quandaries that arise in making minds that might be like ours. After all, if you could simulate a human mind, you could simulate it experiencing unimaginable pain. Black Mirror-esque dystopias, where emulated minds have no rights and are regularly “erased” or forced to labor at dull and repetitive tasks, heave into view.
Drexler argues that, in this world, there is no need to ever build a conscious algorithm. Yet it seems likely that, at some point, humans will attempt to simulate our own brains, if only in the vain pursuit of immortality, so this model cannot hold forever. Still, its proponents argue that any world in which we could develop general AI would probably also have developed superintelligent capabilities across a huge range of different tasks, such as computer programming and natural language understanding. In other words, CAIS arrives first.
The Future In Our Hands?
Drexler argues that his model already incorporates many of the ideas from general AI development. In the marketplace, algorithms compete all the time to perform these services: they undergo the same evolutionary pressures that lead to “higher intelligence,” but the behavior that’s considered superior is chosen by humans, and the nature of the “general intelligence” is far more shaped by human decision-making and human programmers. Development in AI services could still be rapid and disruptive.
But in Drexler’s case, the research and development capacity comes from humans and organizations driven by the desire to improve algorithms that are performing individualized and useful tasks, rather than from a conscious AI recursively reprogramming and improving itself.
In other words, this vision does not absolve us of the responsibility of making our AI safe; if anything, it gives us a greater degree of responsibility. As more and more complex “services” are automated, performing what used to be human jobs at superhuman speed, the economic disruption will be severe.
Equally, as machine learning is trusted to carry out more complex decisions, avoiding algorithmic bias becomes crucial. Shaping each of these individual decision-makers—and trying to predict the complex ways they might interact with each other—is no less daunting a task than specifying the goal for a hypothetical, superintelligent, God-like AI. Arguably, the consequences of the “misalignment” of these service algorithms are already multiplying around us.
The CAIS model bridges the gap between real-world AI and machine learning development, with its attendant safety considerations, and the speculative world of superintelligent agents and the problem of controlling their behavior. We should keep our minds open as to what form AI and machine learning will take, and how it will influence our societies—and we must take care to ensure that the systems we create don’t end up forcing us all to live in a world of unintended consequences.
Image Credit: MF Production/Shutterstock.com
12 Ways Big Tech Can Take Big Action on ...
Bill Gates and Mark Zuckerberg have invested $1 billion in Breakthrough Energy to fund next-generation solutions to tackle climate change. But there is a huge risk that any successful innovation will only reach the market as the world approaches 2030, at the earliest.
We now know that reducing the risk of dangerous climate change means halving global greenhouse gas emissions by that date—in just 11 years. Perhaps Gates, Zuckerberg, and all the tech giants should invest equally in innovations exploring how their own platforms—search, social media, eCommerce—can support societal behavior changes to drive down emissions.
After all, the tech giants influence the decisions of four billion consumers every day. It is time for a social contract between tech and society.
Recently, my collaborator Johan Falk and I published a report during the World Economic Forum in Davos outlining 12 ways the tech sector can contribute to supporting societal goals to stabilize Earth’s climate.
Become genuine climate guardians
Tech giants go to great lengths to show how serious they are about reducing their emissions. But I smell cognitive dissonance. Google and Microsoft are working in partnership with oil companies to develop AI tools to help maximize oil recovery. This is not the behavior of companies working flat-out to stabilize Earth’s climate. Indeed, few major tech firms have visions that indicate a stable and resilient planet might be a good goal, yet AI alone has the potential to slash greenhouse gas emissions by four percent by 2030—equivalent to the emissions of Australia, Canada, and Japan combined.
We are now developing a playbook, which we plan to publish later this year at the UN climate summit, about making it as simple as possible for a CEO to become a climate guardian.
Hey Alexa, do you care about the stability of Earth’s climate?
Increasingly, consumers are delegating their decisions to narrow artificial intelligence like Alexa and Siri. Welcome to a world of zero-click purchases.
Should algorithms and information architecture be designed to nudge consumer behavior towards low-carbon choices, for example by making these options the default? We think so. People don’t mind being nudged; in fact, they welcome efforts to make their lives better. For instance, if I want to lose weight, I know I will need all the help I can get. Let’s ‘nudge for good’ and experiment with supporting societal goals.
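As a toy illustration of such a low-carbon default, options can simply be ranked by carbon footprint so the greenest one is preselected. The option names and emissions figures below are invented for the example:

```python
# Sketch of a "nudge for good": present delivery options sorted by
# carbon footprint, so the lowest-carbon choice becomes the default.
options = [
    {"name": "next-day air", "kg_co2": 5.0},
    {"name": "ground, 5 days", "kg_co2": 0.8},
    {"name": "consolidated, 7 days", "kg_co2": 0.5},
]

def with_green_default(options):
    """Return options sorted so the greenest appears first (the default)."""
    return sorted(options, key=lambda o: o["kg_co2"])

default_choice = with_green_default(options)[0]["name"]
```

The user can still pick any option; only the default changes, which is what makes it a nudge rather than a restriction.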
Use social media for good
Facebook’s goal is to bring the world closer together. With 2.2 billion users on the platform, CEO Mark Zuckerberg can reasonably claim this goal is possible. But social media has changed the flow of information in the world, creating a lucrative industry around a toxic brown-cloud of confusion and anger, with frankly terrifying implications for democracy. This has been linked to the rise of nationalism and populism, and to the election of leaders who shun international cooperation, dismiss scientific knowledge, and reverse climate action at a moment when we need it more than ever.
Social media tools need re-engineering to help people make sense of the world, support democratic processes, and build communities around societal goals. Make this your mission.
Design for a future on Earth
Almost everything is designed with computer software, from buildings to mobile phones to consumer packaging. It is time to make zero-carbon design the new default and design products for sharing, re-use and disassembly.
The future is circular
Halving emissions in a decade will require all companies to adopt circular business models to reduce material use. Some tech companies are leading the charge. Apple has committed to becoming 100 percent circular as soon as possible. Great.
While big tech companies strive to be market leaders here, many other companies lack essential knowledge. Tech companies can support rapid adoption in different economic sectors, not least because they have the know-how to scale innovations exponentially. It makes business sense. If economies of scale drive the price of recycled steel and aluminium down, everyone wins.
Reward low-carbon consumption
eCommerce platforms can create incentives for low-carbon consumption. The world’s largest experiment in greening consumer behavior is Ant Forest, set up by Chinese fintech giant Ant Financial.
An estimated 300 million customers—similar to the population of the United States—gain points for making low-carbon choices such as walking to work, using public transport, or paying bills online. Virtual points are eventually converted into real trees. Sure, big questions remain about its true influence on emissions, but this is a space for rapid experimentation for big impact.
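The mechanism, points for low-carbon actions converted into whole trees once a threshold is crossed, can be sketched as a simple ledger. The point values and conversion threshold below are invented; Ant Forest’s real accounting is not public in this form:

```python
# Minimal sketch of an Ant Forest-style incentive scheme: actions earn
# points, and accumulated points convert into whole trees planted.
POINTS = {"walk to work": 120, "public transport": 80, "pay bill online": 50}
TREE_THRESHOLD = 1000  # hypothetical points required per tree

def tally(actions):
    """Return (total points, whole trees earned) for a list of actions."""
    total = sum(POINTS.get(action, 0) for action in actions)
    return total, total // TREE_THRESHOLD

week = ["walk to work"] * 7 + ["pay bill online"] * 4
points, trees = tally(week)
```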
Make information more useful
Science is our tool for defining reality. Scientific consensus is how we attain reliable knowledge. Even after the information revolution, reliable knowledge about the world remains fragmented and unstructured. Build the next generation of search engines to genuinely make the world’s knowledge useful for supporting societal goals.
We need to put these tools towards supporting shared world views of the state of the planet based on the best science. New AI tools being developed by startups like Iris.ai can help see through the fog. From Alexa to Google Home and Siri, the future is “Voice”, but who chooses the information source? The highest bidder? Again, the implications for climate are huge.
Create new standards for digital advertising and marketing
Half of global ad revenue will soon be online, and largely going to a small handful of companies. How about creating a novel ethical standard on what is advertised and where? Companies could consider promoting sustainable choices and healthy lifestyles and limiting advertising of high-emissions products such as cheap flights.
We are what we eat
It is no secret that tech is about to disrupt grocery. The supermarkets of the future will be built on personal consumer data. With about two billion people either obese or overweight, revolutions in choice architecture could support positive diet choices, reduce meat consumption, halve food waste and, into the bargain, slash greenhouse gas emissions.
The future of transport is not cars, it’s data
The 2020s look set to be the biggest disruption of the automobile industry since Henry Ford unveiled the Model T. Two seismic shifts are on their way.
First, electric cars now compete favorably with petrol engines on range. Growth will reach an inflection point within a year or two once prices reach parity. The death of the internal combustion engine in Europe and Asia is assured with end dates announced by China, India, France, the UK, and most of Scandinavia. Dates range from 2025 (Norway) to 2040 (UK and China).
Tech giants can accelerate the demise. Uber recently announced a passenger surcharge to help London drivers save around $1,500 a year towards the cost of an electric car.
Second, driverless cars can shift the transport economic model from ownership to service and ride sharing. A complete shift away from privately-owned vehicles is around the corner, with large implications for emissions.
Clean-energy living and working
Most buildings are barely used and inefficiently heated and cooled. Digitization can slash this waste and its corresponding emissions through measurement, monitoring, and new business models for using office space. While just a few unicorns are currently in this space, the potential is enormous. Buildings are one of the five biggest sources of emissions, yet have the potential to become clean energy producers in a distributed energy network.
Creating liveable cities
More cities are setting ambitious climate targets to halve emissions in a decade or even less. Tech companies can support this transition by driving demand for low-carbon services for their workforces and offices, but also by providing tools to help monitor emissions and act to reduce them. Google, for example, is collecting travel and other data from across cities to estimate emissions in real time. This is possible through technologies like artificial intelligence and the internet of things. But beware of smart cities that turn out to be not so smart. Efficiencies can reduce resilience when cities face crises.
It’s a Start
Of course, it will take more than tech to solve the climate crisis. But tech is a wildcard. The actions of the current tech giants and their acolytes could serve to destabilize the climate further or bring it under control.
We need a new social contract between tech companies and society to achieve societal goals. The alternative is unthinkable. Without drastic action now, climate chaos threatens to engulf us all. As this future approaches, regulators will be forced to take ever more draconian action to rein in the problem. Acting now will reduce that risk.
Note: A version of this article was originally published on World Economic Forum
Image Credit: Bruce Rolff / Shutterstock.com