Tag Archives: source
#431928 How Fast Is AI Progressing? Stanford’s ...
When? This is probably the question that futurists, AI experts, and even people with a keen interest in technology dread the most. It has proved famously difficult to predict when new developments in AI will take place. The scientists at the Dartmouth Summer Research Project on Artificial Intelligence in 1956 thought that perhaps two months would be enough to make “significant advances” in a whole range of complex problems, including computers that can understand language, improve themselves, and even understand abstract concepts.
Sixty years later, these problems remain unsolved. The AI Index, from Stanford, is an attempt to measure how much progress has been made in artificial intelligence.
The index takes a deliberately broad approach, aggregating data across many dimensions. It contains Volume of Activity metrics, which measure things like venture capital investment, attendance at academic conferences, published papers, and so on. The results are what you might expect: tenfold increases in academic activity since 1996, explosive growth in startups focused on AI, and corresponding venture capital investment. The issue with these metrics is that they measure AI hype as much as AI progress. The two might be correlated, but then again, they may not be.
The index also scrapes data from GitHub, the popular coding site that hosts more source code than any other service in the world. From it, the index tracks the amount of AI-related software people are creating, as well as interest levels in popular machine learning packages like TensorFlow and Keras. The index also keeps track of the sentiment of news articles that mention AI: surprisingly, given concerns about the apocalypse and an employment crisis, those considered “positive” outweigh the “negative” by three to one.
But again, this could all just be a measure of AI enthusiasm in general.
No one would dispute that we’re in an age of considerable AI hype, but the history of AI is littered with booms and busts in hype, growth spurts that alternate with AI winters. So the AI Index also attempts to track the progress of algorithms against a series of tasks. How well does computer vision perform at the Large Scale Visual Recognition Challenge? (Superhuman at annotating images since 2015, though systems still struggle with visual question answering, which combines natural language processing and image recognition.) Speech recognition on phone calls is almost at parity with humans.
In other narrow fields, AIs are still catching up to humans. Machine translation might be good enough that you can usually get the gist of what’s being said, but it still scores poorly on the BLEU metric for translation accuracy. The AI Index even keeps track of how well programs can do on the SAT, so if you took the test, you can compare your score to an AI’s.
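BLEU, roughly, scores a candidate translation by how many of its n-grams also appear in a reference translation, with a penalty for overly short output. Here is a minimal single-reference sketch of the idea; production implementations add smoothing and support multiple references:

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Return a Counter of all n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Minimal sentence-level BLEU: geometric mean of clipped n-gram
    precisions times a brevity penalty."""
    cand, ref = candidate.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        cand_ngrams, ref_ngrams = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((cand_ngrams & ref_ngrams).values())  # clipped counts
        total = max(sum(cand_ngrams.values()), 1)
        precisions.append(max(overlap, 1e-9) / total)  # avoid log(0)
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * geo_mean

print(bleu("the cat sat on the mat", "the cat sat on the mat"))  # → 1.0
print(bleu("a cat sat on a mat", "the cat sat on the mat"))      # low score
```

Even a near-perfect gist translation can score poorly here, since BLEU rewards exact n-gram overlap rather than meaning. Libraries such as NLTK provide full implementations.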
Measuring the performance of state-of-the-art AI systems on narrow tasks is useful and fairly easy to do. You can define a metric that’s simple to calculate, or devise a competition with a scoring system, and compare new software with old in a standardized way. Academics can always debate about the best method of assessing translation or natural language understanding. The Loebner prize, a simplified question-and-answer Turing Test, recently adopted Winograd Schema type questions, which rely on contextual understanding. AI has more difficulty with these.
Where the assessment really becomes difficult, though, is in trying to map these narrow-task performances onto general intelligence. This is hard because of a lack of understanding of our own intelligence. Computers are superhuman at chess, and now even a more complex game like Go. The braver predictors who came up with timelines thought AlphaGo’s success was faster than expected, but does this necessarily mean we’re closer to general intelligence than they thought?
Here is where it’s harder to track progress.
We can note the specialized performance of algorithms on tasks previously reserved for humans—for example, the index cites a Nature paper that shows AI can now predict skin cancer with more accuracy than dermatologists. We could even try to track one specific approach to general AI; for example, how many regions of the brain have been successfully simulated by a computer? Alternatively, we could simply keep track of the number of professions and professional tasks that can now be performed to an acceptable standard by AI.
“We are running a race, but we don’t know how to get to the endpoint, or how far we have to go.”
Progress in AI over the next few years is far more likely to resemble a gradual rising tide—as more and more tasks can be turned into algorithms and accomplished by software—rather than the tsunami of a sudden intelligence explosion or general intelligence breakthrough. Perhaps measuring the ability of an AI system to learn and adapt to the work routines of humans in office-based tasks could be possible.
The AI Index doesn’t attempt to offer a timeline for general intelligence, as this is still too nebulous and confused a concept.
Michael Wooldridge, head of computer science at the University of Oxford, notes, “The main reason general AI is not captured in the report is that neither I nor anyone else would know how to measure progress.” He is concerned about another AI winter, brought on by “charlatans and snake-oil salesmen” exaggerating the progress that has been made.
A key concern that all the experts bring up is the ethics of artificial intelligence.
Of course, you don’t need general intelligence to have an impact on society; algorithms are already transforming our lives and the world around us. After all, why are Amazon, Google, and Facebook worth any money? The experts agree on the need for an index to measure the benefits of AI, the interactions between humans and AIs, and our ability to program values, ethics, and oversight into these systems.
Barbara Grosz of Harvard champions this view, saying, “It is important to take on the challenge of identifying success measures for AI systems by their impact on people’s lives.”
For those concerned about the AI employment apocalypse, tracking the use of AI in the fields considered most vulnerable (say, self-driving cars replacing taxi drivers) would be a good idea. Society’s flexibility for adapting to AI trends should be measured, too; are we providing people with enough educational opportunities to retrain? How about teaching them to work alongside the algorithms, treating them as tools rather than replacements? The experts also note that the data suffers from being US-centric.
We are running a race, but we don’t know how to get to the endpoint, or how far we have to go. We are judging by the scenery, and by how far we’ve run already. For this reason, measuring progress is a daunting task that starts with defining progress. But the AI Index, as an annual collection of relevant information, is a good start.
Image Credit: Photobank gallery / Shutterstock.com
#431872 AI Uses Titan Supercomputer to Create ...
You don’t have to dig too deeply into the archive of dystopian science fiction to uncover the horror that intelligent machines might unleash. The Matrix and The Terminator are probably the most well-known examples of self-replicating, intelligent machines attempting to enslave or destroy humanity in the process of building a brave new digital world.
The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.
The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can generate neural networks as good if not better than any developed by a human in less than a day.
It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms loosely modeled on the human brain and known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard tasks for AI systems today.
Computing Power
Of course, Google Brain project engineers had access to only 800 graphics processing units (GPUs), a type of computer hardware that works especially well for deep learning. Nvidia, which pioneered the development of GPUs, is considered the gold standard in today’s AI hardware architecture. Titan, the supercomputer at ORNL, boasts more than 18,000 GPUs.
The ORNL research team’s algorithm, called MENNDL for Multinode Evolutionary Neural Networks for Deep Learning, isn’t designed to create AI systems that cull cute cat photos from the internet. Instead, MENNDL is a tool for testing and training thousands of potential neural networks to work on unique science problems.
That requires a different approach from the Google and Facebook AI platforms of the world, notes Steven Young, a postdoctoral research associate at ORNL who is on the team that designed MENNDL.
“We’ve discovered that those [neural networks] are very often not the optimal network for a lot of our problems, because our data, while it can be thought of as images, is different,” he explains to Singularity Hub. “These images, and the problems, have very different characteristics from object detection.”
AI for Science
One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. Fermilab researchers are interested in understanding neutrinos, high-energy subatomic particles that rarely interact with normal matter but could be a key to understanding the early formation of the universe. One Fermilab experiment involves taking a sort of “snapshot” of neutrino interactions.
The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.
In another case involving a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL improved the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.
“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.
What makes MENNDL particularly adept is its ability to define the best or most optimal hyperparameters—the key variables—to tackle a particular dataset.
“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
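MENNDL-style search can be pictured as a simple evolutionary loop: encode each candidate network as a genome of hyperparameters, score it, keep the fittest, and mutate. The toy sketch below is not ORNL’s code; it replaces the expensive training step with a stand-in fitness function that secretly favors one small configuration, echoing Young’s point that the right hyperparameters can beat sheer size:

```python
import random

random.seed(0)  # deterministic for the sketch

# Hypothetical stand-in for validation accuracy. In MENNDL, scoring a genome
# means actually training the network on scientific data; here a fake fitness
# function secretly prefers a small network: 3 layers of width 64.
def fitness(genome):
    layers, width = genome
    return -((layers - 3) ** 2 + ((width - 64) / 16) ** 2)

def mutate(genome):
    """Randomly tweak one hyperparameter of a parent genome."""
    layers, width = genome
    if random.random() < 0.5:
        layers = max(1, layers + random.choice([-1, 1]))
    else:
        width = max(8, width + random.choice([-16, 16]))
    return (layers, width)

def evolve(pop_size=20, generations=40):
    population = [(random.randint(1, 8), random.choice(range(8, 257, 8)))
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop_size // 2]  # truncation selection (elitist)
        children = [mutate(random.choice(parents))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return max(population, key=fitness)

best = evolve()
print("best (layers, width):", best)
```

The real system parallelizes this loop across thousands of GPU nodes, which is why Titan can evaluate hundreds of thousands of candidate networks in a day.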
A Virtual Data Scientist
That’s not dissimilar to the approach of a company called H2O.ai, a Silicon Valley startup that uses open-source machine learning platforms to “democratize” AI. It applies machine learning to create business solutions for Fortune 500 companies, including some of the world’s biggest banks and healthcare companies.
“Our software is more [about] pattern detection, let’s say anti-money laundering or fraud detection or which customer is most likely to churn,” Dr. Arno Candel, chief technology officer at H2O.ai, tells Singularity Hub. “And that kind of insight-generating software is what we call AI here.”
The company’s latest product, Driverless AI, promises to deliver the data scientist equivalent of a chessmaster to its customers (the company claims several such grandmasters in its employ and advisory board). In other words, the system can analyze a raw dataset and, like MENNDL, automatically identify what features should be included in the computer model to make the most of the data based on the best “chess moves” of its grandmasters.
“So we’re using those algorithms, but we’re giving them the human insights from those data scientists, and we automate their thinking,” he explains. “So we created a virtual data scientist that is relentless at trying these ideas.”
Inside the Black Box
Much as with the human brain, it’s not always possible to understand how a machine, despite being designed by humans, reaches its solutions. This lack of transparency is often referred to as the AI “black box.” Experts like Young say we can learn something about the evolutionary process of machine learning by generating millions of neural networks and seeing what works well and what doesn’t.
“You’re never going to be able to completely explain what happened, but maybe we can better explain it than we currently can today,” Young says.
Transparency is built into the “thought process” of each particular model generated by Driverless AI, according to Candel.
The computer even explains itself to the user in plain English at each decision point. There is also real-time feedback that allows users to prioritize features, or parameters, to see how the changes improve the accuracy of the model. For example, the system may include data from people in the same zip code as it creates a model to describe customer turnover.
“That’s one of the advantages of our automatic feature engineering: it’s basically mimicking human thinking,” Candel says. “It’s not just neural nets that magically come up with some kind of number, but we’re trying to make it statistically significant.”
Moving Forward
Much digital ink has been spilled over the dearth of skilled data scientists, so automating certain design aspects for developing artificial neural networks makes sense. Experts agree that automation alone won’t solve that particular problem. However, it will free computer scientists to tackle more difficult issues, such as parsing the inherent biases that exist within the data used by machine learning today.
“I think the world has an opportunity to focus more on the meaning of things and not on the laborious tasks of just fitting a model and finding the best features to make that model,” Candel notes. “By automating, we are pushing the burden back for the data scientists to actually do something more meaningful, which is think about the problem and see how you can address it differently to make an even bigger impact.”
The team at ORNL expects to make bigger impacts beginning next year, when the lab’s next supercomputer, Summit, comes online. While Summit will have only 4,600 nodes, it will sport the latest GPU technology from Nvidia and CPUs from IBM. That means it will deliver more than five times the computational performance of Titan, today the world’s fifth-most powerful supercomputer.
“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.
It’s all in a day’s work.
Image Credit: Gennady Danilkin / Shutterstock.com
#431841 The importance of iCub as a standard ...
Robotics research has benefited over the last 10 years from a standardized open-source platform for research on embodied artificial intelligence (AI): the humanoid robot iCub. Created in Italy, it is today available in laboratories across Europe, the US, South Korea, Singapore, and Japan, and more than 100 researchers worldwide contribute to developing its skills. Researchers at IIT-Istituto Italiano di Tecnologia examine the importance of such a research platform in a paper published today in Science Robotics.
#431689 Robotic Materials Will Distribute ...
The classical view of a robot as a mechanical body with a central “brain” that controls its behavior could soon be on its way out. The authors of a recent article in Science Robotics argue that future robots will have intelligence distributed throughout their bodies.
The concept, and the emerging discipline behind it, are variously referred to as “material robotics” or “robotic materials” and are essentially a synthesis of ideas from robotics and materials science. Proponents say advances in both fields are making it possible to create composite materials capable of combining sensing, actuation, computation, and communication and operating independently of a central processing unit.
Much of the inspiration for the field comes from nature, with practitioners pointing to the adaptive camouflage of the cuttlefish’s skin, the ability of bird wings to morph in response to different maneuvers, or the banyan tree’s ability to grow roots above ground to support new branches.
Adaptive camouflage and morphing wings have clear applications in the defense and aerospace sector, but the authors say similar principles could be used to create everything from smart tires able to calculate the traction needed for specific surfaces to grippers that can tailor their force to the kind of object they are grasping.
“Material robotics represents an acknowledgment that materials can absorb some of the challenges of acting and reacting to an uncertain world,” the authors write. “Embedding distributed sensors and actuators directly into the material of the robot’s body engages computational capabilities and offloads the rigid information and computational requirements from the central processing system.”
The idea of making materials more adaptive is not new, and there are already a host of “smart materials” that can respond to stimuli like heat, mechanical stress, or magnetic fields by doing things like producing a voltage or changing shape. These properties can be carefully tuned to create materials capable of a wide variety of functions such as movement, self-repair, or sensing.
The authors say synthesizing these kinds of smart materials, alongside other advanced materials like biocompatible conductors or biodegradable elastomers, is foundational to material robotics. But the approach also involves integrating many different capabilities in the same material, careful mechanical design that makes the most of those capabilities, and closing the loop between sensing and control within the materials themselves.
While there are stand-alone applications for such materials in the near term, like smart fabrics or robotic grippers, the long-term promise of the field is to distribute decision-making in future advanced robots. As they are imbued with ever more senses and capabilities, these machines will be required to shuttle huge amounts of control and feedback data to and fro, placing a strain on both their communication and computation abilities.
Materials that can process sensor data at the source and either autonomously react to it or filter the most relevant information to be passed on to the central processing unit could significantly ease this bottleneck. In a press release related to an earlier study, Nikolaus Correll, an assistant professor of computer science at the University of Colorado Boulder who is also an author of the current paper, pointed out this is a tactic used by the human body.
“The human sensory system automatically filters out things like the feeling of clothing rubbing on the skin,” he said. “An artificial skin with possibly thousands of sensors could do the same thing, and only report to a central ‘brain’ if it touches something new.”
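Correll’s skin analogy can be sketched as a local filter: each patch of material compares new readings against the last one it reported and stays silent unless something has actually changed. This is a hypothetical illustration, not code from the paper:

```python
# Illustrative sketch: a patch of "smart skin" that only forwards sensor
# readings to the central controller when they change by more than a noise
# threshold, cutting the data the central "brain" must process.
class SkinPatch:
    def __init__(self, threshold=0.05):
        self.threshold = threshold
        self.last_reported = None

    def sense(self, pressure):
        """Return the reading only if it is news; otherwise return None."""
        if (self.last_reported is None
                or abs(pressure - self.last_reported) > self.threshold):
            self.last_reported = pressure
            return pressure
        return None  # filtered locally, nothing sent upstream

patch = SkinPatch()
stream = [0.00, 0.01, 0.02, 0.50, 0.51, 0.49, 0.00]  # clothing rubs, then a poke
reported = [r for r in (patch.sense(p) for p in stream) if r is not None]
print(reported)  # → [0.0, 0.5, 0.0]
```

Of seven raw readings, only three cross the threshold and reach the "brain": the small fluctuations of clothing rubbing the skin are absorbed at the material itself.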
There are still considerable challenges to realizing this vision, though, the authors say, noting that so far the young field has only produced proof of concepts. The biggest challenge remains manufacturing robotic materials in a way that combines all these capabilities in a small enough package at an affordable cost.
Luckily, the authors note, the field can draw on convergent advances in both materials science, such as the development of new bulk materials with inherent multifunctionality, and robotics, such as the ever tighter integration of components.
And they predict that doing away with the prevailing dichotomy of “brain versus body” could lay the foundations for the emergence of “robots with brains in their bodies—the foundation of inexpensive and ubiquitous robots that will step into the real world.”
Image Credit: Anatomy Insider / Shutterstock.com
#431343 How Technology Is Driving Us Toward Peak ...
At some point in the future—and in some ways we are already seeing this—the amount of physical stuff moving around the world will peak and begin to decline. By “stuff,” I am referring to liquid fuels, coal, containers on ships, food, raw materials, products, etc.
New technologies are moving us toward “production-at-the-point-of-consumption” of energy, food, and products with reduced reliance on a global supply chain.
The trade of physical stuff has been central to globalization as we’ve known it. So, this declining movement of stuff may signal we are approaching “peak globalization.”
To be clear, even as the movement of stuff may slow, if not decline, the movement of people, information, data, and ideas around the world is growing exponentially and is likely to continue doing so for the foreseeable future.
Peak globalization may provide a pathway to preserving the best of globalization and global interconnectedness, enhancing economic and environmental sustainability, and empowering individuals and communities to strengthen democracy.
At the same time, some of the most troublesome aspects of globalization may be eased, including massive financial transfers to energy producers and loss of jobs to manufacturing platforms like China. This shift could bring relief to the “losers” of globalization and ease populist, nationalist political pressures that are roiling the developed countries.
That is quite a claim, I realize. But let me explain the vision.
New Technologies and Businesses: Digital, Democratized, Decentralized
The key factors moving us toward peak globalization and making it economically viable are new technologies and innovative businesses and business models allowing for “production-at-the-point-of-consumption” of energy, food, and products.
Exponential technologies are enabling these trends by sharply reducing the “cost of entry” for creating businesses. Driven by Moore’s Law, powerful technologies have become available to almost anyone, anywhere.
Beginning with the microchip, which has had a 100-billion-fold improvement in 40 years—10,000 times faster and 10 million times cheaper—the marginal cost of producing almost everything that can be digitized has fallen toward zero.
A hard copy of a book, for example, will always entail the cost of materials, printing, shipping, etc., even if the marginal cost falls as more copies are produced. But the marginal cost of a second digital copy, such as an e-book, streaming video, or song, is nearly zero, as it is simply a file sent over the Internet, the world’s largest copy machine.* Books are one product, but hundreds of thousands of dollars’ worth of once-physical, separate products are now jammed into our devices at little to no cost.
A smartphone alone gives half the human population access to artificial intelligence—from Siri, search, and translation to cloud computing—along with geolocation, free global video calls, digital photography, free uploads to social networks, access to global knowledge, a million apps for a huge variety of purposes, and many other capabilities that were unavailable to most people only a few years ago.
As powerful as dematerialization and demonetization are for private individuals, they’re having a stronger effect on businesses. A small team can access expensive, advanced tools that before were only available to the biggest organizations. Foundational digital platforms, such as the internet and GPS, and the platforms built on top of them by the likes of Google, Apple, Amazon, and others provide the connectivity and services democratizing business tools and driving the next generation of new startups.
“As these trends gain steam in coming decades, they’ll bleed into and fundamentally transform global supply chains.”
An AI startup, for example, doesn’t need its own server farm to train its software and provide service to customers. The team can rent computing power from Amazon Web Services. This platform model enables small teams to do big things on the cheap. And it isn’t just in software. Similar trends are happening in hardware too. Makers can 3D print or mill industrial grade prototypes of physical stuff in a garage or local maker space and send or sell designs to anyone with a laptop and 3D printer via online platforms.
These are early examples of trends that are likely to gain steam in coming decades, and as they do, they’ll bleed into and fundamentally transform global supply chains.
The old model is a series of large, connected bits of centralized infrastructure. It makes sense to mine, farm, or manufacture in bulk when the conditions, resources, machines, and expertise to do so exist in particular places and are specialized and expensive. The new model, however, enables smaller-scale production that is local and decentralized.
To see this more clearly, let’s take a look at the technological trends at work in the three biggest contributors to the global trade of physical stuff—products, energy, and food.
Products
3D printing (additive manufacturing) allows for distributed manufacturing near the point of consumption, eliminating or reducing supply chains and factory production lines.
This is possible because product designs are no longer made manifest in assembly line parts like molds or specialized mechanical tools. Rather, designs are digital and can be called up at will to guide printers. Every time a 3D printer prints, it can print a different item, so no assembly line needs to be set up for every different product. 3D printers can also print an entire finished product in one piece or reduce the number of parts of larger products, such as engines. This further lessens the need for assembly.
Because each item can be customized and printed on demand, there is no cost benefit from scaling production. No inventories. No shipping items across oceans. No carbon emissions transporting not only the final product but also all the parts in that product shipped from suppliers to manufacturer. Moreover, 3D printing builds items layer by layer with almost no waste, unlike “subtractive manufacturing” in which an item is carved out of a piece of metal, and much or even most of the material can be waste.
Finally, 3D printing is also highly scalable, from inexpensive 3D printers (several hundred dollars) for home and school use to increasingly capable and expensive printers for industrial production. There are also 3D printers being developed for printing buildings, including houses and office buildings, and other infrastructure.
3D printing of finished products is only now getting underway, and there are still challenges to overcome, such as speed, quality, and range of materials. But as methods and materials advance, the technology will likely creep into more manufactured goods.
Ultimately, 3D printing will be a general purpose technology that involves many different types of printers and materials—such as plastics, metals, and even human cells—to produce a huge range of items, from human tissue and potentially human organs to household items and a range of industrial items for planes, trains, and automobiles.
Energy
Renewable energy production is located at or relatively near the source of consumption.
Although electricity generated by solar, wind, geothermal, and other renewable sources can of course be transmitted over longer distances, it is mostly generated and consumed locally or regionally. It is not transported around the world in tankers, ships, and pipelines like petroleum, coal, and natural gas.
Moreover, the fuel itself is free—forever. There is no global price on sun or wind. The people relying on solar and wind power need not worry about price volatility and potential disruption of fuel supplies as a result of political, market, or natural causes.
Renewables have their problems, of course, including intermittency and storage, and currently they work best as a complement to other sources, especially natural gas power plants, which, unlike coal plants, can be turned on and off and modulated like a gas stove, and which emit about half the carbon of coal.
Within the next decades or so, it is likely the intermittency and storage problems will be solved or greatly mitigated. In addition, unlike coal and natural gas power plants, solar is scalable, from solar panels on individual homes or even cars and other devices, to large-scale solar farms. Solar can be connected with microgrids and even allow for autonomous electricity generation by homes, commercial buildings, and communities.
It may be several decades before fossil fuel power plants can be phased out, but the cost of renewables has been falling exponentially and, in some places, is already beginning to compete with coal and gas. Solar especially is expected to continue to increase in efficiency and decline in cost.
Given these trends in cost and efficiency, renewables should become obviously cheaper over time—if the fuel is free for solar and has to be continually purchased for coal and gas, at some point the former is cheaper than the latter. Renewables are already cheaper if externalities such as carbon emissions and environmental degradation involved in obtaining and transporting the fuel are included.
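The logic above reduces to simple arithmetic: a plant with free fuel eventually undercuts one that keeps buying fuel, once the accumulated fuel savings cover any difference in upfront cost. A back-of-the-envelope sketch with made-up numbers:

```python
# Back-of-the-envelope sketch of the "free fuel" argument. Every number here
# is an illustrative assumption, not real plant economics.
SOLAR_CAPEX = 1200      # assumed upfront cost per kW of solar
GAS_CAPEX = 900         # assumed upfront cost per kW of gas
GAS_FUEL_PER_YEAR = 60  # assumed fuel (plus externality) cost per kW per year

def cumulative_cost(capex, fuel_per_year, years):
    """Total spend after a number of years: build once, then keep buying fuel."""
    return capex + fuel_per_year * years

# First year in which the free-fuel plant is cheaper overall.
crossover = next(y for y in range(1, 51)
                 if cumulative_cost(SOLAR_CAPEX, 0, y)
                 < cumulative_cost(GAS_CAPEX, GAS_FUEL_PER_YEAR, y))
print("solar pulls ahead in year", crossover)  # → year 6 under these numbers
```

As the assumed solar capex falls, or as externalities raise the effective fuel cost, the crossover arrives sooner; that is the whole argument in miniature.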
Food
Food can be increasingly produced near the point of consumption with vertical farms and eventually with printed food and even printed or cultured meat.
These sources bring production of food very near the consumer, so transportation costs, which can be a significant portion of the cost of food to consumers, are greatly reduced. Land and water use is reduced by 95% or more, and energy use is cut by nearly 50%. In addition, fertilizers and pesticides are not required, and crops can be grown 365 days a year, whatever the weather, in more climates and latitudes than is possible today.
While it may not be practical to grow grains, corn, and other such crops in vertical farms, many vegetables and fruits can flourish in such facilities. In addition, cultured or printed meat, grown from the cells of real animals without slaughtering the animals themselves, is being developed; the big challenge is scaling up and reducing cost.
There are currently some 70 billion animals being raised for food around the world [PDF], and livestock alone accounts for about 15% of global emissions. Moreover, livestock places huge demands on land, water, and energy. Like vertical farms, cultured or printed meat could be produced with no more land than a brewery and far less water and energy.
A More Democratic Economy Goes Bottom Up
This is a very brief introduction to the technologies that can bring “production-at-the-point-of-consumption” of products, energy, and food to cities and regions.
What does this future look like? Here’s a simplified example.
Imagine a universal manufacturing facility with hundreds of 3D printers printing tens of thousands of different products on demand for the local community—rather than assembly lines in China making tens of thousands of the same product that have to be shipped all over the world since no local market can absorb all of the same product.
Nearby, a vertical farm and cultured meat facility produce much of tomorrow night’s dinner. These facilities would be powered by local or regional wind and solar. Depending on need and quality, some infrastructure and machinery, like solar panels and 3D printers, would live in these facilities and some in homes and businesses.
The facilities could be owned by a large global corporation—but still produce goods locally—or they could be franchised, or even owned and operated independently by the local population. Upkeep and management at each would provide jobs for nearby communities. Eventually, not only would global trade of parts and products diminish, but even required supplies of raw materials and feedstock would decline, since there would be less waste in production and many materials would be recycled once acquired.
“Peak globalization could be a viable pathway to an economic foundation that puts people first while building a more economically and environmentally sustainable future.”
This model suggests a shift toward a “bottom up” economy that is more democratic, locally controlled, and likely to generate more local jobs.
The global trends in democratization of technology make the vision technologically plausible. Much of this technology already exists and is improving and scaling while exponentially decreasing in cost to become available to almost anyone, anywhere.
This includes not only access to key technologies, but also to education through digital platforms available globally. Online courses are available for free, ranging from advanced physics, math, and engineering to skills training in 3D printing, solar installations, and building vertical farms. Social media platforms can enable local and global collaboration and sharing of knowledge and best practices.
These new communities of producers can be the foundation for new forms of democratic governance as they recognize and “capitalize” on the reality that control of the means of production can translate to political power. More jobs and local control could weaken populist, anti-globalization political forces as people recognize they could benefit from the positive aspects of globalization and international cooperation and connectedness while diminishing the impact of globalization’s downsides.
There are powerful vested interests that stand to lose in such a global structural shift. But this vision builds on trends that are already underway and are gaining momentum. Peak globalization could be a viable pathway to an economic foundation that puts people first while building a more economically and environmentally sustainable future.
This article was originally posted on Open Democracy (CC BY-NC 4.0). The version above was edited with the author for length and includes additions. Read the original article on Open Democracy.
* See Jeremy Rifkin, The Zero Marginal Cost Society, (New York: Palgrave Macmillan, 2014), Part II, pp. 69-154.
Image Credit: Sergey Nivens / Shutterstock.com