Tag Archives: 2014

#433895 Sci-Fi Movies Are the Secret Weapon That ...

If there’s one line that stands the test of time in Steven Spielberg’s 1993 classic Jurassic Park, it’s probably Jeff Goldblum’s exclamation, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Goldblum’s character, Dr. Ian Malcolm, was warning against the hubris of naively tinkering with dinosaur DNA in an effort to bring these extinct creatures back to life. Twenty-five years on, his words are taking on new relevance as a growing number of scientists and companies are grappling with how to tread the line between “could” and “should” in areas ranging from gene editing and real-world “de-extinction” to human augmentation, artificial intelligence and many others.

Despite growing concerns that powerful emerging technologies could lead to unexpected and wide-ranging consequences, innovators are struggling with how to develop beneficial new products while being socially responsible. Part of the answer could lie in watching more science fiction movies like Jurassic Park.

Hollywood Lessons in Societal Risks
I’ve long been interested in how innovators and others can better understand the increasingly complex landscape around the social risks and benefits associated with emerging technologies. Growing concerns over the impacts of tech on jobs, privacy, security and even the ability of people to live their lives without undue interference highlight the need for new thinking around how to innovate responsibly.

New ideas require creativity and imagination, and a willingness to see the world differently. And this is where science fiction movies can help.

Sci-fi flicks are, of course, notoriously unreliable when it comes to accurately depicting science and technology. But because their plots are often driven by the intertwined relationships between people and technology, they can be remarkably insightful in revealing social factors that affect successful and responsible innovation.

This is clearly seen in Jurassic Park. The movie provides a surprisingly good starting point for thinking about the pros and cons of modern-day genetic engineering and the growing interest in bringing extinct species back from the dead. But it also opens up conversations around the nature of complex systems that involve both people and technology, and the potential dangers of “permissionless” innovation that’s driven by power, wealth and a lack of accountability.

Similar insights emerge from a number of other movies, including Spielberg’s 2002 film Minority Report—which presaged a growing capacity for AI-enabled crime prediction and the ethical conundrums it’s raising—as well as the 2014 film Ex Machina.

As with Jurassic Park, Ex Machina centers around a wealthy and unaccountable entrepreneur who is supremely confident in his own abilities. In this case, the technology in question is artificial intelligence.

The movie tells a tale of an egotistical genius who creates a remarkably intelligent machine—but he lacks the awareness to recognize his limitations and the risks of what he’s doing. It also provides a chilling insight into the potential dangers of creating machines that know us better than we know ourselves, while not being bound by human norms or values.

The result is a sobering reminder of how, without humility and a good dose of humanity, our innovations can come back to bite us.

The technologies in Jurassic Park, Minority Report, and Ex Machina lie beyond what is currently possible. Yet these films are often close enough to emerging trends that they help reveal the dangers of irresponsible, or simply naive, innovation. This is where these and other science fiction movies can help innovators better understand the social challenges they face and how to navigate them.

Real-World Problems Worked Out On-Screen
In a recent op-ed in the New York Times, journalist Kara Swisher asked, “Who will teach Silicon Valley to be ethical?” Prompted by a growing litany of socially questionable decisions amongst tech companies, Swisher suggests that many of them need to grow up and get serious about ethics. But ethics alone are rarely enough. It’s easy for good intentions to get swamped by fiscal pressures and mired in social realities.

Elon Musk has shown that brilliant tech innovators can take ethical missteps along the way. Image Credit: AP Photo/Chris Carlson
Technology companies increasingly need to find some way to break from business as usual if they are to become more responsible. High-profile cases involving companies like Facebook and Uber as well as Tesla’s Elon Musk have highlighted the social as well as the business dangers of operating without fully understanding the consequences of people-oriented actions.

Many more companies are struggling to create socially beneficial technologies and discovering that, without the necessary insights and tools, they risk blundering about in the dark.

For instance, earlier this year, researchers from Google and DeepMind published details of an artificial intelligence-enabled system that can lip-read far better than people. According to the paper’s authors, the technology has enormous potential to improve the lives of people who have trouble speaking aloud. Yet it doesn’t take much to imagine how this same technology could threaten the privacy and security of millions—especially when coupled with long-range surveillance cameras.

Developing technologies like this in socially responsible ways requires more than good intentions or simply establishing an ethics board. People need a sophisticated understanding of the often complex dynamic between technology and society. And while, as Mozilla’s Mitchell Baker suggests, scientists and technologists engaging with the humanities can be helpful, it’s not enough.

An Easy Way into a Serious Discipline
The “new formulation” of complementary skills Baker says innovators desperately need already exists in a thriving interdisciplinary community focused on socially responsible innovation. My home institution, the School for the Future of Innovation in Society at Arizona State University, is just one part of this.

Experts within this global community are actively exploring ways to translate good ideas into responsible practices. And this includes the need for creative insights into the social landscape around technology innovation, and the imagination to develop novel ways to navigate it.

People love to come together as a movie audience. Image credit: The National Archives UK, CC BY 4.0
Here is where science fiction movies become a powerful tool for guiding innovators, technology leaders and the companies where they work. Their fictional scenarios can reveal potential pitfalls and opportunities that can help steer real-world decisions toward socially beneficial and responsible outcomes, while avoiding unnecessary risks.

And science fiction movies bring people together. By their very nature, these films are social and educational levelers. Look at who’s watching and discussing the latest sci-fi blockbuster, and you’ll often find a diverse cross-section of society. The genre can help build bridges between people who know how science and technology work, and those who know what’s needed to ensure they work for the good of society.

This is the underlying theme in my new book Films from the Future: The Technology and Morality of Sci-Fi Movies. It’s written for anyone who’s curious about emerging trends in technology innovation and how they might potentially affect society. But it’s also written for innovators who want to do the right thing and just don’t know where to start.

Of course, science fiction films alone aren’t enough to ensure socially responsible innovation. But they can help reveal some profound societal challenges facing technology innovators and possible ways to navigate them. And what better way to learn how to innovate responsibly than to invite some friends round, open the popcorn and put on a movie?

It certainly beats being blindsided by risks that, with hindsight, could have been avoided.

Andrew Maynard, Director, Risk Innovation Lab, Arizona State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Fred Mantel / Shutterstock.com

#433806 Soaring Demand for Robots is Leading to ...

While many of us might have anticipated that robots would be replacing humans in the near future, the speed at which this is happening has surged beyond what was expected. In 2014, Boston Consulting Group (BCG) estimated that the global robotics market would be worth $67 billion by the year 2025. In …

The post Soaring Demand for Robots is Leading to Shortages appeared first on TFOT.

#433785 DeepMind’s Eerie Reimagination of the ...

If a recent project from Google’s DeepMind were a recipe, you would take a pair of AI systems, images of animals, and a whole lot of computing power. Mix it all together, and you’d get a series of imagined animals dreamed up by one of the AIs. A look through the research paper about the project—or this open Google Folder of images it produced—will likely lead you to agree that the results are a mix of impressive and downright eerie.

But the eerie factor doesn’t mean the project shouldn’t be considered a success and a step forward for future uses of AI.

From GAN To BigGAN
The team behind the project consists of Andrew Brock, a PhD student at the Edinburgh Centre for Robotics and an intern at DeepMind, and DeepMind researchers Jeff Donahue and Karen Simonyan.

They used a so-called Generative Adversarial Network (GAN) to generate the images. In a GAN, two AI systems compete in a game-like manner. One AI produces images of an object or creature. The human equivalent would be drawing pictures of, for example, a dog—without necessarily knowing exactly what a dog looks like. Those images are then shown to the second AI, which has already been fed images of dogs. The second AI then tells the first one how far off its efforts were. The first one uses this information to improve its images. The two go back and forth in an iterative process, and the goal is for the first AI to become so good at creating images of dogs that the second can’t tell the difference between its creations and actual pictures of dogs.
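
To make that back-and-forth concrete, here is a minimal GAN training sketch in PyTorch. It is a toy illustration of the generator/discriminator contest described above, not the BigGAN architecture; the network sizes and the random stand-in "real" data are assumptions chosen purely for demonstration.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# The "artist": turns random noise into a fake sample.
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(),
    nn.Linear(128, data_dim),
)

# The "critic": scores how real a sample looks (higher = more real).
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.LeakyReLU(0.2),
    nn.Linear(128, 1),
)

loss_fn = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

for step in range(1000):
    real = torch.randn(32, data_dim)   # stand-in for a batch of real images
    fake = generator(torch.randn(32, latent_dim))

    # 1) Teach the discriminator to tell real from fake.
    d_loss = (loss_fn(discriminator(real), torch.ones(32, 1)) +
              loss_fn(discriminator(fake.detach()), torch.zeros(32, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # 2) Teach the generator to fool the discriminator.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```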

The team was able to draw on Google’s vast vaults of computational power to create images of a quality and lifelikeness beyond almost anything seen before. In part, this was achieved by feeding the GAN more images at a time than is usually the case. According to IFLScience, the standard is to feed about 64 images per training batch into a GAN. In this case, the research team fed about 2,000 images per batch into the system, leading to it being nicknamed BigGAN.

Their results showed that feeding the system with more images and using masses of raw computer power markedly increased the GAN’s precision and ability to create life-like renditions of the subjects it was trained to reproduce.

“The main thing these models need is not algorithmic improvements, but computational ones. […] When you increase model capacity and you increase the number of images you show at every step, you get this twofold combined effect,” Andrew Brock told Fast Company.

The Power Drain
The team used 512 of Google’s AI-focused Tensor Processing Units (TPUs) to generate 512-by-512-pixel images. Each experiment took between 24 and 48 hours to run.

That kind of computing power needs a lot of electricity. As artist and Innovator-in-Residence at the Library of Congress Jer Thorp put it, tongue in cheek, on Twitter: “The good news is that AI can now give you a more believable image of a plate of spaghetti. The bad news is that it used roughly enough energy to power Cleveland for the afternoon.”

Thorp added that, by a back-of-the-envelope calculation, it would take about 27,000 square feet of solar panels to power the computations that produced the images.
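
For flavor, here is one way such an estimate might be reconstructed. Every figure below (TPU power draw, panel output, sun-hours) is an assumed round number for illustration, not a measured value; the point is only that the order of magnitude is plausible.

```python
# Rough, assumption-laden reconstruction of a Thorp-style estimate.
tpus = 512                 # TPUs reported for the largest runs
watts_per_tpu = 200        # assumed average draw per TPU (illustrative)
hours = 48                 # upper end of the reported 24-48 hour runs

energy_kwh = tpus * watts_per_tpu * hours / 1000
# Assume a panel yields ~15 W per square foot at peak, with ~5
# effective sun-hours per day, over the two days of the run.
kwh_per_sqft = (15 * 5 / 1000) * 2
panel_sqft = energy_kwh / kwh_per_sqft

print(f"~{energy_kwh:,.0f} kWh, or roughly {panel_sqft:,.0f} sq ft of panels")
# -> ~4,915 kWh and ~32,768 sq ft: the same ballpark as Thorp's figure.
```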

BigGAN’s images have been hailed by researchers, with Oriol Vinyals, research scientist at DeepMind, rhetorically asking if these were the ‘Best GAN samples yet?’

However, they are still not perfect. The number of legs on a given creature is one example of where the BigGAN seemed to struggle. The system was good at recognizing that something like a spider has a lot of legs, but seemed unable to settle on how many ‘a lot’ was supposed to be. The same applied to dogs, especially if the images were supposed to show said dogs in motion.

Those eerie images are contrasted by other renditions that show such lifelike qualities that a human mind has a hard time identifying them as fake. Spaniels with lolling tongues, ocean scenery, and butterflies were all rendered with what looks like perfection. The same goes for an image of a hamburger that was good enough to make me stop writing because I suddenly needed lunch.

The Future Use Cases
GANs were first introduced in 2014, and given their relative youth, researchers and companies are still busy trying out possible use cases.

One possible use is image correction—making pixelated images clearer. Not only could this help your future holiday snaps, but it could be applied in industries such as space exploration. A team from the University of Michigan and the Max Planck Institute has developed a method for GANs to create images from text descriptions. At Berkeley, a research group has used GANs to create an interface that lets users change the shape, size, and design of objects, including a handbag.

For anyone who has seen a film like Wag the Dog or read 1984, the possibilities are also starkly alarming. GANs could, in other words, make fake news look more real than ever before.

For now, it seems that while not all GANs require the computational and electrical power of BigGAN, there is still some way to go before these potential use cases are fully realized. However, if there’s one lesson from Moore’s Law and exponential technology, it is that today’s technical roadblock quickly becomes tomorrow’s minor issue as technology progresses.

Image Credit: Ondrej Prosicky/Shutterstock

#432893 These 4 Tech Trends Are Driving Us ...

From a first-principles perspective, the task of feeding eight billion people boils down to converting energy from the sun into chemical energy in our bodies.

Traditionally, solar energy is converted by photosynthesis into carbohydrates in plants (i.e., biomass), which are either eaten by the vegans amongst us, or fed to animals, for those with a carnivorous preference.

Today, the process of feeding humanity is extremely inefficient.

If we could radically reinvent what we eat, and how we create that food, what might you imagine that “future of food” would look like?

In this post we’ll cover:

Vertical farms
CRISPR engineered foods
The alt-protein revolution
Farmer 3.0

Let’s dive in.

Vertical Farming
Where we grow our food…

The average American meal travels over 1,500 miles from farm to table. Wine from France, beef from Texas, potatoes from Idaho.

Imagine instead growing all of your food in a 50-story tall vertical farm in downtown LA or off-shore on the Great Lakes where the travel distance is no longer 1,500 miles but 50 miles.

Delocalized farming will minimize travel costs while maximizing freshness.

Perhaps more importantly, vertical farming also gives tomorrow’s farmer the ability to control the exact conditions of her plants year round.

Rather than allowing the vagaries of the weather and soil conditions to dictate crop quality and yield, we can now perfectly control the growing cycle.

LED lighting provides the crops with the maximum amount of light, at the perfect frequency, 24 hours a day, 7 days a week.

At the same time, sensors and robots provide the root system the exact pH and micronutrients required, while fine-tuning the temperature of the farm.
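
As a sketch of what that sensor-driven control might look like in software, consider the simple monitoring loop below. The sensor readings and actuator actions are hypothetical placeholders standing in for real hardware, and the setpoints are assumed values for illustration.

```python
# A minimal closed-loop control sketch for a hypothetical vertical farm.
import time

TARGET_PH = 6.0        # assumed target for a leafy-greens nutrient solution
TARGET_TEMP_C = 22.0   # assumed target air temperature

def read_sensors():
    # Hypothetical: a real system would query pH and temperature probes.
    return {"ph": 6.3, "temp_c": 23.5}

def control_step(r):
    actions = []
    if r["ph"] > TARGET_PH + 0.2:
        actions.append("dose pH-down solution")
    elif r["ph"] < TARGET_PH - 0.2:
        actions.append("dose pH-up solution")
    if r["temp_c"] > TARGET_TEMP_C + 1.0:
        actions.append("increase ventilation")
    return actions

for _ in range(3):                    # a real controller would loop forever
    for action in control_step(read_sensors()):
        print(action)                 # ...and drive pumps and fans instead
    time.sleep(60)                    # re-check conditions every minute
```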

Such precision farming can generate yields that are 200% to 400% above normal.

Next let’s explore how we can precision-engineer the genetic properties of the plant itself.

CRISPR and Genetically Engineered Foods
What food do we grow?

A fundamental shift is occurring in our relationship with agriculture. We are going from evolution by natural selection (Darwinism) to evolution by human direction.

CRISPR (the cutting-edge gene-editing tool) is providing a pathway for plant breeding that is more predictable, faster, and less expensive than traditional breeding methods.

Rather than our crops being subject to nature’s random, environmental whim, CRISPR unlocks our capability to modify our crops to match the available environment.

Further, using CRISPR we will be able to optimize the nutrient density of our crops, enhancing their value and volume.

CRISPR may also hold the key to eliminating common allergens from crops. As we identify the allergen gene in peanuts, for instance, we can use CRISPR to silence that gene, making the crops we raise safer for and more accessible to a rapidly growing population.

Yet another application is our ability to make plants resistant to infection or more resistant to drought or cold.

Helping to accelerate the impact of CRISPR, the USDA recently announced that genetically engineered crops will not be regulated—providing an opening for entrepreneurs to capitalize on the opportunities for optimization CRISPR enables.

CRISPR applications in agriculture are an opportunity to help a billion people and become a billionaire in the process.

Protecting crops against volatile environments, combating crop diseases and increasing nutrient values, CRISPR is a promising tool to help feed the world’s rising population.

The Alt-Protein/Lab-Grown Meat Revolution
Something like a third of the Earth’s arable land is used for raising livestock—a massive amount of land—and global demand for meat is predicted to double in the coming decade.

Today, we must grow an entire cow—all bones, skin, and internals included—to produce a steak.

Imagine if we could instead start with a single muscle stem cell and only grow the steak, without needing the rest of the cow? Think of it as cellular agriculture.

Imagine returning millions, perhaps billions, of acres of grazing land to the wilderness. This is the promise of lab-grown meats.

Lab-grown meat can also be engineered (using technology like CRISPR) to be packed with nutrients and be the healthiest, most delicious protein possible.

We’re watching this technology develop in real time. Several startups across the globe are already working to bring artificial meats to the food industry.

JUST, Inc. (previously Hampton Creek), run by my friend Josh Tetrick, has been on a mission to build a food system where everyone can get and afford delicious, nutritious food. They started by exploring 300,000+ species of plants all around the world to see how they could make food better, and now they are investing heavily in stem-cell-grown meats.

Backed by Richard Branson and Bill Gates, Memphis Meats is working on ways to produce real meat from animal cells, rather than whole animals. So far, they have produced beef, chicken, and duck using cultured cells from living animals.

As with vertical farming, transitioning production of our primary protein source to a carefully cultivated environment allows agriculture to optimize inputs (water, soil, energy, land footprint), nutrients and, importantly, taste.

Farmer 3.0
Vertical farming and cellular agriculture are reinventing how we think about our food supply chain and what food we produce.

The next question to answer is who will be producing the food?

Let’s look back at how farming evolved through history.

Farmers 0.0 (Neolithic Revolution, around 9000 BCE): The hunter-gatherer-to-agriculture transition gained momentum as humans developed the ability to domesticate plants for food production.

Farmers 1.0 (until around the 19th century): Farmers spent all day in the field performing backbreaking labor, and agriculture accounted for most jobs.

Farmers 2.0 (19th century through the mid-20th-century Green Revolution): From the invention of the first farm tractor in 1812 through today, transformative mechanical and biochemical technologies (machinery, synthetic fertilizer) boosted yields and made the job of farming easier, driving the share of US jobs in farming down to less than two percent today.

Farmers 3.0: In the near future, farmers will leverage exponential technologies (e.g., AI, networks, sensors, robotics, drones), CRISPR and genetic engineering, and new business models to solve the world’s greatest food challenges and efficiently feed the eight-billion-plus people on Earth.

An important driver of the Farmer 3.0 evolution is the delocalization of agriculture: vertical farms and urban agriculture are empowering a new breed of agriculture entrepreneurs.

Let’s take a look at an innovative incubator in Brooklyn, New York called Square Roots.

Ten farms-in-a-shipping-container in a Brooklyn parking lot make up the first Square Roots campus. Each 8-foot by 8.5-foot by 20-foot shipping container holds the growing equivalent of 2 acres of farmland and can yield more than 50 pounds of produce each week.

For 13 months, one cohort of next-generation food entrepreneurs takes part in a curriculum with foundations in farming, business, community and leadership.

The urban farming incubator raised a $5.4 million seed funding round in August 2017.

Training a new breed of entrepreneurs to apply exponential technology to growing food is essential to the future of farming.

One of our massive transformative purposes at the Abundance Group is to empower entrepreneurs to generate extraordinary wealth while creating a world of abundance. Vertical farms and cellular agriculture are key elements enabling the next generation of food and agriculture entrepreneurs.

Conclusion
Technology is driving food abundance.

We’re already seeing food become demonetized, as the graph below shows.

From 1960 to 2014, the share of total disposable income spent on food in the U.S. fell from 19 percent to under 10 percent—a dramatic decrease from the roughly 40 percent of household income spent on food in 1900.

The dropping percent of per-capita disposable income spent on food. Source: USDA, Economic Research Service, Food Expenditure Series
Ultimately, technology has enabled a massive variety of food at a significantly reduced cost and with fewer resources used for production.

We’re increasingly going to optimize and fortify the food supply chain to achieve more reliable, predictable, and nutritious ways to obtain basic sustenance.

And that means a world with abundant, nutritious, and inexpensive food for every man, woman, and child.

What an extraordinary time to be alive.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital.

Abundance-Digital is my ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Nejron Photo / Shutterstock.com

#432882 Why the Discovery of Room-Temperature ...

Superconductors are among the most bizarre and exciting materials yet discovered. Counterintuitive quantum-mechanical effects mean that, below a critical temperature, they have zero electrical resistance. This property alone is more than enough to spark the imagination.

A current that could flow forever without losing any energy means transmission of power with virtually no losses in the cables. When renewable energy sources start to dominate the grid and high-voltage transmission across continents becomes important to overcome intermittency, lossless cables will result in substantial savings.

What’s more, a superconducting wire carrying a current that never, ever diminishes would act as a perfect store of electrical energy. Unlike batteries, which degrade over time, if the resistance is truly zero, you could return to the superconductor in a billion years and find that same old current flowing through it. Energy could be captured and stored indefinitely!

With no resistance, a huge current could be passed through the superconducting wire and, in turn, produce magnetic fields of incredible power.
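
To put a number on that idea of storage: the energy held in a current-carrying coil follows the standard inductor formula, so it grows with the square of the current a lossless wire can sustain. The coil inductance and current below are illustrative assumptions, not values from any real device.

```latex
% Energy stored in a current-carrying coil (standard inductor result):
E = \tfrac{1}{2} L I^{2}
% For an assumed coil of inductance L = 10\,\mathrm{H} carrying I = 1000\,\mathrm{A}:
% E = \tfrac{1}{2}\times 10 \times (1000)^{2}
%   = 5\times 10^{6}\,\mathrm{J} \approx 1.4\,\mathrm{kWh}
```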

You could use them to levitate trains and produce astonishing accelerations, thereby revolutionizing the transport system. You could use them in power plants—replacing conventional methods which spin turbines in magnetic fields to generate electricity—and in quantum computers as the two-level system required for a “qubit,” in which the zeros and ones are replaced by current flowing clockwise or counterclockwise in a superconductor.

Arthur C. Clarke famously said that any sufficiently advanced technology is indistinguishable from magic; superconductors can certainly seem like magical devices. So, why aren’t they busy remaking the world? There’s a problem—that critical temperature.

For all known materials, it’s hundreds of degrees below freezing. Superconductors also have a critical magnetic field; beyond a certain magnetic field strength, they cease to work. There’s a tradeoff: materials with an intrinsically high critical temperature can also often provide the largest magnetic fields when cooled well below that temperature.

This has meant that superconductor applications so far have been limited to situations where you can afford to cool the components of your system to close to absolute zero: in particle accelerators and experimental nuclear fusion reactors, for example.

But even as some aspects of superconductor technology become mature in limited applications, the search for higher temperature superconductors moves on. Many physicists still believe a room-temperature superconductor could exist. Such a discovery would unleash amazing new technologies.

The Quest for Room-Temperature Superconductors
After Heike Kamerlingh Onnes discovered superconductivity by accident while attempting to prove Lord Kelvin’s theory that resistance would increase with decreasing temperature, theorists scrambled to explain the new property in the hope that understanding it might allow for room-temperature superconductors to be synthesized.

Eventually, in 1957, they came up with BCS theory, which explained some of the properties of superconductors. It also predicted that the dream of technologists, a room-temperature superconductor, could not exist; the maximum temperature for superconductivity according to BCS theory was just 30 K (around -243°C).

Then, in the 1980s, the field changed again with the discovery of unconventional, or high-temperature, superconductivity. “High temperature” is still very cold: the highest temperature for superconductivity achieved was -70°C for hydrogen sulphide at extremely high pressures. For normal pressures, -140°C is near the upper limit. Unfortunately, high-temperature superconductors—which require relatively cheap liquid nitrogen, rather than liquid helium, to cool—are mostly brittle ceramics, which are expensive to form into wires and have limited application.

Given the limitations of high-temperature superconductors, researchers continue to believe there’s a better option awaiting discovery—an incredible new material that checks boxes like superconductivity approaching room temperature, affordability, and practicality.

Tantalizing Clues
Without a detailed theoretical understanding of how this phenomenon occurs—although incremental progress happens all the time—scientists can occasionally feel like they’re taking educated guesses at materials that might be likely candidates. It’s a little like trying to guess a phone number, but with the periodic table of elements instead of digits.

Yet the prospect remains, in the words of one researcher, tantalizing. A Nobel Prize and potentially changing the world of energy and electricity is not bad for a day’s work.

Some research focuses on cuprates, complex crystals that contain layers of copper and oxygen atoms. Doped with various elements, cuprates such as mercury barium calcium copper oxide are amongst the best superconductors known today.

Research also continues into some anomalous but unexplained reports that graphite soaked in water can act as a room-temperature superconductor, but there’s no indication that this could be used for technological applications yet.

In early 2017, as part of the ongoing effort to explore the most extreme and exotic forms of matter we can create on Earth, researchers managed to compress hydrogen into a metal.

The pressure required to do this was greater than that at the core of the Earth and thousands of times higher than that at the bottom of the ocean. Some researchers in the field of condensed-matter physics doubt that metallic hydrogen was produced at all.

It’s considered possible that metallic hydrogen could be a room-temperature superconductor. But getting the samples to stick around long enough for detailed testing has proved tricky, with the diamonds containing the metallic hydrogen suffering a “catastrophic failure” under the pressure.

Superconductivity—or behavior that strongly resembles it—was also observed in yttrium barium copper oxide (YBCO) at room temperature in 2014. The only catch was that this electron transport lasted for a tiny fraction of a second and required the material to be bombarded with pulsed lasers.

Not very practical, you might say, but tantalizing nonetheless.

Other new materials display enticing properties too. The 2016 Nobel Prize in Physics was awarded for the theoretical work that characterizes topological insulators—materials that exhibit similarly strange quantum behaviors. They can be considered perfect insulators for the bulk of the material but extraordinarily good conductors in a thin layer on the surface.

Microsoft is betting on topological insulators as the key component in their attempt at a quantum computer. They’ve also been considered potentially important components in miniaturized circuitry.

A number of remarkable electronic transport properties have also been observed in new, “2D” structures—like graphene, these are materials synthesized to be as thick as a single atom or molecule. And research continues into how we can utilize the superconductors we’ve already discovered; for example, some teams are trying to develop insulating material that prevents superconducting HVDC cable from overheating.

Room-temperature superconductivity remains as elusive and exciting as it has been for over a century. It is unclear whether a room-temperature superconductor can exist, but the discovery of high-temperature superconductors is a promising indicator that unconventional and highly useful quantum effects may be discovered in completely unexpected materials.

Perhaps in the future—through artificial intelligence simulations or the serendipitous discoveries of a 21st century Kamerlingh Onnes—this little piece of magic could move into the realm of reality.

Image Credit: ktsdesign / Shutterstock.com
