Tag Archives: small

#433284 Tech Can Sustainably Feed Developing ...

In the next 30 years, virtually all net population growth will occur in urban regions of developing countries. At the same time, worldwide food production will become increasingly limited by the availability of land, water, and energy. These constraints will be further worsened by climate change and the expected addition of two billion people to today’s four billion now living in urban regions. Meanwhile, current urban food ecosystems in the developing world are inefficient and critically inadequate to meet the challenges of the future.

Combined, these trends could have catastrophic economic and political consequences. A new path forward for urban food ecosystems needs to be found. But what is that path?

New technologies, coupled with new business models and supportive government policies, can create more resilient urban food ecosystems in the coming decades. These tech-enabled systems can sustainably link rural, peri-urban (areas just outside cities), and urban producers and consumers, increase overall food production, and generate opportunities for new businesses and jobs (Figure 1).

Figure 1: The urban food value chain, from rural, peri-urban, and urban producers to end customers in urban and peri-urban markets.

Here’s a glimpse of the changes technology may bring to the systems feeding cities in the future.

A technology-linked urban food ecosystem would create unprecedented opportunities for small farms to reach wider markets and progress from subsistence farming to commercially producing niche cash crops and animal protein, such as poultry, fish, pork, and insects.

Meanwhile, new opportunities within cities will appear with the creation of vertical farms and other controlled-environment agricultural systems as well as production of plant-based and 3D printed foods and cultured meat. Uberized facilitation of production and distribution of food will reduce bottlenecks and provide new business opportunities and jobs. Off-the-shelf precision agriculture technology will increasingly be the new norm, from smallholders to larger producers.

As part of Agricultural Revolution 4.0, all this will be integrated into the larger collaborative economy—connected by digital platforms, the cloud, and the Internet of Things and powered by artificial intelligence. It will more efficiently and effectively use resources and people to connect the nexus of food, water, energy, nutrition, and human health. It will also aid in the development of a circular economy that is designed to be restorative and regenerative, minimizing waste and maximizing recycling and reuse to build economic, natural, and social capital.

In short, technology will enable transformation of urban food ecosystems, from expanded production in cities to more efficient and inclusive distribution and closer connections with rural farmers. Here’s a closer look at seven tech-driven trends that will help feed tomorrow’s cities.

1. Worldwide Connectivity: Information, Learning, and Markets
Connectivity, from simple cell phone SMS communication to internet-enabled smartphones and cloud services, is providing the platform for the increasingly powerful technologies driving a new agricultural revolution. Internet connections currently reach more than 4 billion people, about 55% of the global population, and that number will grow quickly in the coming years.

These information and communications technologies connect food producers to consumers with just-in-time data, guidance on good agricultural practices, mobile money and credit, market information and merchandising, and greater transparency and traceability of goods and services throughout the value chain. Text messages on mobile devices have become a one-stop shop for small farmers to place orders, get advice on best management practices, and access the market information they need to increase profitability.
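
To make this concrete, here is a minimal sketch of how an SMS-based market-price service might answer a farmer's query. The message format, crop names, and prices are all hypothetical; a real service such as MFarm would sit behind an SMS gateway and pull prices from live market data.

```python
# Minimal sketch of an SMS market-price service (hypothetical message format and prices).
# A real service would receive messages via an SMS gateway and use a live market feed.

MARKET_PRICES = {  # hypothetical wholesale prices per kilogram, by market
    "nairobi": {"maize": 0.38, "tomato": 0.55, "kale": 0.30},
    "mombasa": {"maize": 0.41, "tomato": 0.60, "kale": 0.33},
}

def handle_sms(message: str) -> str:
    """Parse a query like 'PRICE tomato nairobi' and return a reply text."""
    parts = message.strip().lower().split()
    if len(parts) != 3 or parts[0] != "price":
        return "Send: PRICE <crop> <market>, e.g. PRICE tomato nairobi"
    _, crop, market = parts
    price = MARKET_PRICES.get(market, {}).get(crop)
    if price is None:
        return f"No price found for {crop} in {market}."
    return f"{crop.title()} in {market.title()}: ${price:.2f}/kg today."

if __name__ == "__main__":
    print(handle_sms("PRICE tomato nairobi"))  # -> Tomato in Nairobi: $0.55/kg today.
```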

Hershey’s CocoaLink in Ghana, for example, uses text and voice messages to connect small farm producers with cocoa industry experts. Digital Green, operating in Asia and Africa, brings agricultural and management practices to small farmers in their own languages by filming successful farmers in their own communities. MFarm is a mobile app that connects Kenyan farmers with urban markets via text messaging.

2. Blockchain Technology: Greater Access to Basic Financial Services and Enhanced Food Safety
Gaining access to credit and executing financial transactions have been persistent constraints for small farm producers. Blockchain promises to help the unbanked access basic financial services.

The Gates Foundation has released an open source platform, Mojaloop, to allow software developers, banks, and financial service providers to build secure digital payment platforms at scale. Mojaloop uses secure blockchain technology to enable urban food system players in the developing world to conduct business and trade. The free software reduces the complexity and cost of building payment platforms that connect small farmers with customers, merchants, banks, and mobile money providers. Such digital financial services will allow small farm producers in the developing world to conduct business without a brick-and-mortar bank.

Blockchain is also important for traceability and transparency, helping producers meet regulatory and consumer requirements as food moves through production, post-harvest handling, shipping, processing, and distribution. Combining blockchain with RFID technologies will further enhance food safety.
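
To illustrate the traceability idea, here is a minimal sketch that hash-chains the custody events for a single crate of produce, so that tampering with any earlier record invalidates everything after it. It is a simplified stand-in for a real blockchain or RFID-integrated system, and the event fields and IDs are hypothetical.

```python
# Minimal sketch of hash-chained traceability for one shipment (simplified stand-in for a blockchain).
import hashlib
import json
import time

def add_event(chain: list, event: dict) -> None:
    """Append a custody event whose hash covers the previous record, linking the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {"event": event, "timestamp": time.time(), "prev_hash": prev_hash}
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)

def verify(chain: list) -> bool:
    """Recompute every hash; any edited record breaks the links that follow it."""
    prev_hash = "0" * 64
    for record in chain:
        if record["prev_hash"] != prev_hash:
            return False
        payload = json.dumps(
            {k: record[k] for k in ("event", "timestamp", "prev_hash")}, sort_keys=True
        ).encode()
        if hashlib.sha256(payload).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True

chain = []
add_event(chain, {"crate": "A-17", "step": "harvested", "farm": "peri-urban plot 4"})  # hypothetical IDs
add_event(chain, {"crate": "A-17", "step": "cold storage", "rfid": "tag-0042"})
add_event(chain, {"crate": "A-17", "step": "delivered", "market": "city stall 9"})
print(verify(chain))  # True; editing any earlier event makes this False
```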

3. Uberized Services: On-Demand Equipment, Storage, and More
Uberized services can advance development of the urban food ecosystem across the spectrum, from rural to peri-urban to urban food production and distribution. Whereas Uber and Airbnb enable sharing of rides and homes, the model can be extended in the developing world to include on-demand use of expensive equipment, such as farm machinery, or storage space.

This includes uberization of planting and harvesting equipment (Hello Tractor), transportation vehicles, refrigeration facilities for temporary storage of perishable products, and “cloud kitchens” (EasyAppetite in Nigeria, FoodCourt in Rwanda, and Swiggy and Zomato in India) that produce fresh meals for delivery, enabling young people with motorbikes and cell phones to become entrepreneurs or contractors delivering meals to urban customers.

Another uberized service is marketing and distributing “ugly food” or imperfect produce to reduce food waste. About a third of the world’s food goes to waste, often because of appearance; that is enough to feed two billion people. Such services supply consumers with cheaper, nutritious fruits and vegetables that would normally be discarded as culls because of imperfections in shape or size.

4. Technology for Producing Plant-Based Foods in Cities
Current agricultural production systems for red meat have a far greater detrimental impact on the environment than automobiles. We need to change diet choices through education and marketing and by developing tasty plant-based substitutes. This is not only critical for environmental sustainability, but also offers opportunities for new businesses and services.

There have been great advances in plant-based foods, like the Impossible Burger and Beyond Meat, that can satisfy the consumer’s experience and perception of meat. Rather than asking consumers to give up the experience of eating red meat, technology is enabling marketable, attractive plant-based products that could drastically reduce per capita consumption of red meat worldwide.

5. Cellular Agriculture, Lab-Grown Meat, and 3D Printed Food
Lab-grown meat, literally meat grown from cultured cells, may radically change where and how protein and food are produced, including in the cities where they are consumed. There is a wide range of innovative alternatives to traditional meats that can supplement the need for livestock, farms, and butchers. The history of innovation is about removing the bottleneck in a system, and with meat, the bottleneck is the animal. Finless Foods is a new company trying to replicate fish fillets, for example, while Memphis Meats is working on beef and poultry.

3D printing, or additive manufacturing, is a “general purpose technology” used for making plastic toys, human tissues, aircraft parts, and buildings. 3D printing can also be used to convert alternative ingredients, such as proteins from algae, beet leaves, or insects, into tasty and healthy products produced by small, inexpensive printers in home kitchens. The food can be customized for individual health needs as well as preferences. 3D printing can also contribute to the food ecosystem by making possible on-demand replacement parts, which are badly needed in the developing world for tractors, pumps, and other equipment. Catapult Design 3D prints tractor replacement parts as well as corn shellers, cart designs, prosthetic limbs, and rolling water barrels for the Indian market.

6. Alt Farming: Vertical Farms to Produce Food in Urban Centers
Urban food production will rely not only on field-grown crops, but also on food grown within cities. There are a host of new, alternative production systems using controlled-environment agriculture. These include low-cost protected poly hoop houses, greenhouses, rooftop and sack/container gardens, and vertical farming in buildings using artificial lighting. Vertical farms enable year-round production of selected crops, regardless of weather, which will be increasingly important in response to climate change, and without concern for the deteriorating soil conditions that affect crop quality and productivity. AeroFarms claims 390 times more productivity per square foot than conventional field production.

7. Biotechnology and Nanotechnology for Sustainable Intensification of Agriculture
CRISPR is a promising gene editing technology that can be used to enhance crop productivity while avoiding societal concerns about GMOs. CRISPR can accelerate traditional breeding and selection programs for developing climate-resilient, disease-resistant, higher-yielding, more nutritious crops and animals.

Plant-derived coating materials, developed with nanotechnology, can decrease waste, extend the shelf-life and transportability of fruits and vegetables, and significantly reduce post-harvest crop loss in developing countries that lack adequate refrigeration. Nanotechnology is also used in polymer seed coatings that extend shelf-life and improve germination success and yields for niche, high-value crops.

Putting It All Together
The next generation “urban food industry” will be part of the larger collaborative economy that is connected by digital platforms, the cloud, and the Internet of Things. A tech-enabled urban food ecosystem integrated with new business models and smart agricultural policies offers the opportunity for sustainable intensification (doing more with less) of agriculture to feed a rapidly growing global urban population—while also creating viable economic opportunities for rural and peri-urban as well as urban producers and value-chain players.

Image Credit: Akarawut / Shutterstock.com

Posted in Human Robots

#432671 Stuff 3.0: The Era of Programmable ...

It’s the end of a long day in your apartment in the early 2040s. You decide your work is done for the day, stand up from your desk, and yawn. “Time for a film!” you say. The house responds to your cues. The desk splits into hundreds of tiny pieces, which flow behind you and take on shape again as a couch. The computer screen you were working on flows up the wall and expands into a flat projection screen. You relax into the couch and, after a few seconds, a remote control surfaces from one of its arms.

In a few seconds flat, you’ve gone from a neatly-equipped office to a home cinema…all within the same four walls. Who needs more than one room?

This is the dream of those who work on “programmable matter.”

In his recent book about AI, Max Tegmark makes a distinction between three different levels of computational sophistication for organisms. Life 1.0 covers single-celled organisms like bacteria; here, hardware is indistinguishable from software. A bacterium’s behavior is encoded in its DNA; it cannot learn new things.

Life 2.0 is where humans live on the spectrum. We are more or less stuck with our hardware, but we can change our software by choosing to learn different things, say, Spanish instead of Italian. Much like managing space on your smartphone, your brain’s hardware will allow you to download only a certain number of packages, but, at least theoretically, you can learn new behaviors without changing your underlying genetic code.

Life 3.0 marks a step-change from this: creatures that can change both their hardware and software in something like a feedback loop. This is what Tegmark views as a true artificial intelligence—one that can learn to change its own base code, leading to an explosion in intelligence. Perhaps, with CRISPR and other gene-editing techniques, we could be using our “software” to doctor our “hardware” before too long.

Programmable matter extends this analogy to the things in our world: what if your sofa could “learn” how to become a writing desk? What if, instead of a Swiss Army knife with dozens of tool attachments, you just had a single tool that “knew” how to become any other tool you could require, on command? In the crowded cities of the future, could houses be replaced by single, OmniRoom apartments? It would save space, and perhaps resources too.

Such are the dreams, anyway.

But when engineering and manufacturing individual gadgets is already such a complex process, you can imagine that making stuff that can turn into many different items is more complicated still. Professor Skylar Tibbits at MIT referred to it as 4D printing in a TED Talk, and the website for his research group, the Self-Assembly Lab, excitedly claims, “We have also identified the key ingredients for self-assembly as a simple set of responsive building blocks, energy and interactions that can be designed within nearly every material and machining process available. Self-assembly promises to enable breakthroughs across many disciplines, from biology to material science, software, robotics, manufacturing, transportation, infrastructure, construction, the arts, and even space exploration.”

Naturally, their projects are still in the early stages, but the Self-Assembly Lab and others are genuinely exploring just the kind of science fiction applications we mooted.

For example, there’s the cell-phone self-assembly project, which brings to mind eerie, 24/7 factories where mobile phones assemble themselves from 3D printed kits without human or robotic intervention. Okay, so the phones they’re making are hardly going to fly off the shelves as fashion items, but if all you want is something that works, it could cut manufacturing costs substantially and automate even more of the process.

One of the major hurdles to overcome in making programmable matter a reality is choosing the right fundamental building blocks. There’s a very important balance to strike. To create fine details, the building blocks can’t be too big, or the rearranged matter ends up lumpy; large pieces would be useless for applications such as tools for fine manipulation, and would make it difficult to simulate a range of textures. On the other hand, if the pieces are too small, different problems arise.

Imagine a setup where each piece is a small robot. You have to contain the robot’s power source and its brain, or at least some kind of signal-generator and signal-processor, all in the same compact unit. Perhaps you can imagine that one might be able to simulate a range of textures and strengths by changing the strength of the “bond” between individual units—your desk might need to be a little bit more firm than your bed, which might be nicer with a little more give.
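
As a toy illustration of that idea, the sketch below treats each inter-unit bond as a simple spring and compares how far a surface of units would sag under the same load at a stiff “desk” setting versus a soft “bed” setting. The stiffness values and load are hypothetical, chosen only to show the principle.

```python
# Toy sketch of "programmable firmness": treat each inter-unit bond as a linear spring
# and see how far a surface of units sags under the same load at different bond stiffnesses.
# Stiffness values and load are hypothetical.

def sag_mm(load_newtons: float, bonds_in_parallel: int, stiffness_n_per_mm: float) -> float:
    """Displacement of a surface supported by identical parallel spring-like bonds (Hooke's law)."""
    total_stiffness = bonds_in_parallel * stiffness_n_per_mm
    return load_newtons / total_stiffness

load = 300.0   # roughly a person leaning on the surface, in newtons
bonds = 400    # units bonded under the loaded area

for setting, k in [("desk (stiff bonds)", 50.0), ("bed (soft bonds)", 2.0)]:
    print(f"{setting}: sags {sag_mm(load, bonds, k):.2f} mm under {load:.0f} N")
```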

Early steps toward creating this kind of matter have been taken by those who are developing modular robots. There are plenty of different groups working on this, including MIT, Lausanne, and the University of Brussels.

In the latter configuration, one individual robot acts as a centralized decision-maker, referred to as the brain unit, but additional robots can autonomously join the brain unit as and when needed to change the shape and structure of the overall system. Although the system comprises only ten units at present, it’s a proof of concept that control can be orchestrated over a modular system of robots; perhaps in the future, smaller versions of the same thing could be the components of Stuff 3.0.

You can imagine that with machine learning algorithms, such swarms of robots might be able to negotiate obstacles and respond to a changing environment more easily than an individual robot (those of you with techno-fear may read “respond to a changing environment” and imagine a robot seamlessly rearranging itself to allow a bullet to pass straight through without harm).

Speaking of robotics, the form of an ideal robot has been a subject of much debate. In fact, one of the major recent robotics competitions, DARPA’s Robotics Challenge, was won by a robot that could adapt, beating Boston Dynamics’ famous ATLAS humanoid with the simple addition of a wheel that allowed it to drive as well as walk.

Rather than building robots into a humanoid shape (only sometimes useful), allowing them to evolve and discover the ideal form for performing whatever you’ve tasked them to do could prove far more useful. This is particularly true in disaster response, where expensive robots can still be more valuable than humans, but conditions can be very unpredictable and adaptability is key.

Further afield, many futurists imagine “foglets” as the tiny nanobots that will be capable of constructing anything from raw materials, somewhat like the “Santa Claus machine.” But you don’t necessarily need anything quite so indistinguishable from magic to be useful. Programmable matter that can respond and adapt to its surroundings could be used in all kinds of industrial applications. How about a pipe that can strengthen or weaken at will, or divert its direction on command?

We’re some way off from being able to order our beds to turn into bicycles. As with many tech ideas, it may turn out that the traditional low-tech solution is far more practical and cost-effective, even as we can imagine alternatives. But as the march to put a chip in every conceivable object goes on, it seems certain that inanimate objects are about to get a lot more animated.

Image Credit: PeterVrabel / Shutterstock.com

Posted in Human Robots

#432563 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
Pedro Domingos on the Arms Race in Artificial Intelligence
Christoph Scheuermann and Bernhard Zand | Spiegel Online
“AI lowers the cost of knowledge by orders of magnitude. One good, effective machine learning system can do the work of a million people, whether it’s for commercial purposes or for cyberespionage. Imagine a country that produces a thousand times more knowledge than another. This is the challenge we are facing.”

BIOTECHNOLOGY
Gene Therapy Could Free Some People From a Lifetime of Blood Transfusions
Emily Mullin | MIT Technology Review
“A one-time, experimental treatment for an inherited blood disorder has shown dramatic results in a small study. …[Lead author Alexis Thompson] says the effect on patients has been remarkable. ‘They have been tied to this ongoing medical therapy that is burdensome and expensive for their whole lives,’ she says. ‘Gene therapy has allowed people to have aspirations and really pursue them.’ ”

ENVIRONMENT
The Revolutionary Giant Ocean Cleanup Machine Is About to Set Sail
Adele Peters | Fast Company
“By the end of 2018, the nonprofit says it will bring back its first harvest of ocean plastic from the North Pacific Gyre, along with concrete proof that the design works. The organization expects to bring 5,000 kilograms of plastic ashore per month with its first system. With a full fleet of systems deployed, it believes that it can collect half of the plastic trash in the Great Pacific Garbage Patch—around 40,000 metric tons—within five years.”

ROBOTICS
Autonomous Boats Will Be on the Market Sooner Than Self-Driving Cars
Tracey Lindeman | Motherboard
“Some unmanned watercraft…may be at sea commercially before 2020. That’s partly because automating all ships could generate a ridiculous amount of revenue. According to the United Nations, 90 percent of the world’s trade is carried by sea and 10.3 billion tons of products were shipped in 2016.”

DIGITAL CULTURE
Style Is an Algorithm
Kyle Chayka | Racked
“Confronting the Echo Look’s opaque statements on my fashion sense, I realize that all of these algorithmic experiences are matters of taste: the question of what we like and why we like it, and what it means that taste is increasingly dictated by black-box robots like the camera on my shelf.”

COMPUTING
How Apple Will Use AR to Reinvent the Human-Computer Interface
Tim Bajarin | Fast Company
“It’s in Apple’s DNA to continually deliver the ‘next’ major advancement to the personal computing experience. Its innovation in man-machine interfaces started with the Mac and then extended to the iPod, the iPhone, the iPad, and most recently, the Apple Watch. Now, get ready for the next chapter, as Apple tackles augmented reality, in a way that could fundamentally transform the human-computer interface.”

SCIENCE
Advanced Microscope Shows Cells at Work in Incredible Detail
Steve Dent | Engadget
“For the first time, scientists have peered into living cells and created videos showing how they function with unprecedented 3D detail. Using a special microscope and new lighting techniques, a team from Harvard and the Howard Hughes Medical Institute captured zebrafish immune cell interactions with unheard-of 3D detail and resolution.”

Image Credit: dubassy / Shutterstock.com

Posted in Human Robots

#432549 Your Next Pilot Could Be Drone Software

Would you get on a plane that didn’t have a human pilot in the cockpit? Half of air travelers surveyed in 2017 said they would not, even if the ticket was cheaper. Modern pilots do such a good job that almost any air accident is big news, such as the Southwest engine disintegration on April 17.

But stories of pilot drunkenness, rants, fights and distraction, however rare, are reminders that pilots are only human. Not every plane can be flown by a disaster-averting pilot, like Southwest Capt. Tammie Jo Shults or Capt. Chesley “Sully” Sullenberger. But software could change that, equipping every plane with an extremely experienced guidance system that is always learning more.

In fact, on many flights, autopilot systems already control the plane for essentially the entire flight. And software handles the most harrowing landings, when there is no visibility and the pilot cannot see enough to know where the plane is. But human pilots are still on hand as backups.

A new generation of software pilots, developed for self-flying vehicles, or drones, will soon have logged more flying hours than all humans have—ever. By combining their enormous amounts of flight data and experience, drone-control software applications are poised to quickly become the world’s most experienced pilots.

Drones That Fly Themselves
Drones come in many forms, from tiny quad-rotor copter toys to missile-firing winged planes, or even 7-ton aircraft that can stay aloft for 34 hours at a stretch.

When drones were first introduced, they were flown remotely by human operators. However, this merely substitutes a pilot on the ground for one aloft. And it requires significant communications bandwidth between the drone and control center, to carry real-time video from the drone and to transmit the operator’s commands.

Many newer drones no longer need pilots; some drones for hobbyists and photographers can now fly themselves along human-defined routes, leaving the human free to sightsee—or control the camera to get the best view.

University researchers, businesses, and military agencies are now testing larger and more capable drones that will operate autonomously. Swarms of drones can fly without needing tens or hundreds of humans to control them. And they can perform coordinated maneuvers that human controllers could never handle.

Could humans control these 1,218 drones all together?

Whether flying in swarms or alone, the software that controls these drones is rapidly gaining flight experience.

Importance of Pilot Experience
Experience is the main qualification for pilots. Even a person who wants to fly a small plane for personal and noncommercial use needs 40 hours of flying instruction before getting a private pilot’s license. Commercial airline pilots must have at least 1,000 hours before even serving as a co-pilot.

On-the-ground training and in-flight experience prepare pilots for unusual and emergency scenarios, ideally to help save lives in situations like the “Miracle on the Hudson.” But many pilots are less experienced than “Sully” Sullenberger, who saved his planeload of people with quick and creative thinking. With software, though, every plane can have on board a pilot with as much experience—if not more. A popular software pilot system, in use in many aircraft at once, could gain more flight time each day than a single human might accumulate in a year.

As someone who studies technology policy as well as the use of artificial intelligence for drones, cars, robots, and other applications, I don’t lightly suggest handing the controls over to software. But giving software pilots more control would maximize computers’ advantages over humans in training, testing, and reliability.

Training and Testing Software Pilots
Unlike people, computers will follow sets of instructions in software the same way every time. That lets developers create instructions, test reactions, and refine aircraft responses. Testing could make it far less likely, for example, that a computer would mistake the planet Venus for an oncoming jet and throw the plane into a steep dive to avoid it.
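
A toy example of what that determinism buys: a simple avoidance rule can be run against the same recorded scenarios after every software update, so a known failure case, once fixed, stays fixed. The rule, thresholds, and scenarios below are hypothetical.

```python
# Minimal sketch of why determinism helps: the same rule is tested against recorded
# scenarios and refined until it never repeats a known mistake. All values are hypothetical.

def should_dive(closing_speed_kts: float, range_nm: float) -> bool:
    """Evade only if a contact is genuinely converging and close; a distant, non-closing
    'contact' (like a bright planet on the horizon) should never trigger a dive."""
    return range_nm < 5.0 and closing_speed_kts > 100.0

# Deterministic regression tests: rerun the same scenarios after every software update.
scenarios = [
    {"name": "oncoming jet", "closing_kts": 450.0, "range_nm": 3.0, "expect": True},
    {"name": "Venus on the horizon", "closing_kts": 0.0, "range_nm": 9999.0, "expect": False},
]
for s in scenarios:
    assert should_dive(s["closing_kts"], s["range_nm"]) == s["expect"], s["name"]
print("all avoidance scenarios pass")
```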

The most significant advantage is scale: Rather than teaching thousands of individual pilots new skills, updating thousands of aircraft would require only downloading updated software.

These systems would also need to be thoroughly tested—in both real-life situations and in simulations—to handle a wide range of aviation situations and to withstand cyberattacks. But once they’re working well, software pilots are not susceptible to distraction, disorientation, fatigue, or other human impairments that can create problems or cause errors even in common situations.

Rapid Response and Adaptation
Already, aircraft regulators are concerned that human pilots are forgetting how to fly on their own and may have trouble taking over from an autopilot in an emergency.

In the “Miracle on the Hudson” event, for example, a key factor in what happened was how long it took for the human pilots to figure out what had happened—that the plane had flown through a flock of birds, which had damaged both engines—and how to respond. Rather than the approximately one minute it took the humans, a computer could have assessed the situation in seconds, potentially saving enough time that the plane could have landed on a runway instead of a river.

Aircraft damage can pose another particularly difficult challenge for human pilots: it can change how the control surfaces affect the plane’s flight. In cases where damage renders a plane uncontrollable, the result is often tragedy. A sufficiently advanced automated system could make minute changes to the aircraft’s steering and use its sensors to quickly evaluate the effects of those movements, essentially learning how to fly all over again with a damaged plane.
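
Here is a toy, one-dimensional sketch of that kind of adaptation: the controller probes with small commands, compares the sensed response with what it expected, and updates its estimate of how effective the controls still are. The plant model, gains, and “damage” factor are hypothetical; real adaptive flight control is far more involved.

```python
# Toy sketch of re-learning control effectiveness after damage, in one dimension.
# All numbers are hypothetical and chosen only to illustrate the adaptation loop.
import random

damaged_effectiveness = 0.3   # after damage, a unit of command produces much less pitch rate
estimate = 1.0                # controller's pre-damage belief about effectiveness
learning_rate = 0.2
target_rate = 2.0             # desired pitch rate (deg/s)

for step in range(30):
    command = target_rate / estimate                                     # sized using current belief
    response = damaged_effectiveness * command + random.gauss(0, 0.05)   # sensed pitch rate
    observed = response / command                                        # effectiveness implied by this probe
    estimate += learning_rate * (observed - estimate)                    # nudge belief toward the sensors

print(f"estimated effectiveness after adaptation: {estimate:.2f} (true value {damaged_effectiveness})")
```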

Boosting Public Confidence
The biggest barrier to fully automated flight is psychological, not technical. Many people may not want to trust their lives to computer systems. But they might come around when reassured that the software pilot has tens, hundreds, or thousands more hours of flight experience than any human pilot.

Other autonomous technologies, too, are progressing despite public concerns. Regulators and lawmakers are allowing self-driving cars on the roads in many states. But more than half of Americans don’t want to ride in one, largely because they don’t trust the technology. And only 17 percent of travelers around the world are willing to board a plane without a pilot. However, as more people experience self-driving cars on the road and have drones deliver them packages, it is likely that software pilots will gain in acceptance.

The airline industry will certainly be pushing people to trust the new systems: Automating pilots could save tens of billions of dollars a year. And the current pilot shortage means software pilots may be the key to having any airline service to smaller destinations.

Both Boeing and Airbus have made significant investments in automated flight technology, which would remove or reduce the need for human pilots. Boeing has actually bought a drone manufacturer and is looking to add software pilot capabilities to the next generation of its passenger aircraft. (Other tests have tried to retrofit existing aircraft with robotic pilots.)

One way to help regular passengers become comfortable with software pilots—while also helping to both train and test the systems—could be to introduce them as co-pilots working alongside human pilots. Planes would be operated by software from gate to gate, with the pilots instructed to touch the controls only if the system fails. Eventually pilots could be removed from the aircraft altogether, just like they eventually were from the driverless trains that we routinely ride in airports around the world.

This article was originally published on The Conversation. Read the original article.

Image Credit: Skycolors / Shutterstock.com

Posted in Human Robots

#432467 Dungeons and Dragons, Not Chess and Go: ...

Everyone had died—not that you’d know it, from how they were laughing about their poor choices and bad rolls of the dice. As a social anthropologist, I study how people understand artificial intelligence (AI) and our efforts towards attaining it; I’m also a life-long fan of Dungeons and Dragons (D&D), the inventive fantasy roleplaying game. During a recent quest, when I was playing an elf ranger, the trainee paladin (or holy knight) acted according to his noble character, and announced our presence at the mouth of a dragon’s lair. The results were disastrous. But while success in D&D means “beating the bad guy,” the game is also a creative sandbox, where failure can count as collective triumph so long as you tell a great tale.

What does this have to do with AI? In computer science, games are frequently used as a benchmark for an algorithm’s “intelligence.” The late Robert Wilensky, a professor at the University of California, Berkeley and a leading figure in AI, offered one reason why this might be. Computer scientists “looked around at who the smartest people were, and they were themselves, of course,” he told the authors of Compulsive Technology: Computers as Culture (1985). “They were all essentially mathematicians by training, and mathematicians do two things—they prove theorems and play chess. And they said, hey, if it proves a theorem or plays chess, it must be smart.” No surprise that demonstrations of AI’s “smarts” have focused on the artificial player’s prowess.

Yet the games that get chosen—like Go, the main battlefield for Google DeepMind’s algorithms in recent years—tend to be tightly bounded, with set objectives and clear paths to victory or defeat. These experiences have none of the open-ended collaboration of D&D. Which got me thinking: do we need a new test for intelligence, where the goal is not simply about success, but storytelling? What would it mean for an AI to “pass” as human in a game of D&D? Instead of the Turing test, perhaps we need an elf ranger test?

Of course, this is just a playful thought experiment, but it does highlight the flaws in certain models of intelligence. First, it reveals how intelligence has to work across a variety of environments. D&D participants can inhabit many characters in many games, and the individual player can “switch” between roles (the fighter, the thief, the healer). Meanwhile, AI researchers know that it’s super difficult to get a well-trained algorithm to apply its insights in even slightly different domains—something that we humans manage surprisingly well.

Second, D&D reminds us that intelligence is embodied. In computer games, the bodily aspect of the experience might range from pressing buttons on a controller in order to move an icon or avatar (a ping-pong paddle; a spaceship; an anthropomorphic, eternally hungry, yellow sphere), to more recent and immersive experiences involving virtual-reality goggles and haptic gloves. Even without these add-ons, games can still produce biological responses associated with stress and fear (if you’ve ever played Alien: Isolation you’ll understand). In the original D&D, the players encounter the game while sitting around a table together, feeling the story and its impact. Recent research in cognitive science suggests that bodily interactions are crucial to how we grasp more abstract mental concepts. But we give minimal attention to the embodiment of artificial agents, and how that might affect the way they learn and process information.

Finally, intelligence is social. AI algorithms typically learn through multiple rounds of competition, in which successful strategies get reinforced with rewards. True, it appears that humans also evolved to learn through repetition, reward and reinforcement. But there’s an important collaborative dimension to human intelligence. In the 1930s, the psychologist Lev Vygotsky identified the interaction of an expert and a novice as an example of what became called “scaffolded” learning, where the teacher demonstrates and then supports the learner in acquiring a new skill. In unbounded games, this cooperation is channelled through narrative. Games of It among small children can evolve from win/lose into attacks by terrible monsters, before shifting again to more complex narratives that explain why the monsters are attacking, who is the hero, and what they can do and why—narratives that aren’t always logical or even internally compatible. An AI that could engage in social storytelling is doubtless on a surer, more multifunctional footing than one that plays chess; and there’s no guarantee that chess is even a step on the road to attaining intelligence of this sort.

In some ways, this failure to look at roleplaying as a technical hurdle for intelligence is strange. D&D was a key cultural touchstone for technologists in the 1980s and the inspiration for many early text-based computer games, as Katie Hafner and Matthew Lyon point out in Where Wizards Stay Up Late: The Origins of the Internet (1996). Even today, AI researchers who play games in their free time often mention D&D specifically. So instead of beating adversaries in games, we might learn more about intelligence if we tried to teach artificial agents to play together as we do: as paladins and elf rangers.

This article was originally published at Aeon and has been republished under Creative Commons.

Image Credit: Benny Mazur / Flickr / CC BY 2.0

Posted in Human Robots