#433505 Boston Dynamics: Atlas
The future is already here, courtesy of Atlas by Boston Dynamics!
#433620 Instilling the Best of Human Values in ...
Now that the era of artificial intelligence is unquestionably upon us, it behooves us to think and work harder to ensure that the AIs we create embody positive human values.
Science fiction is full of AIs that manifest the dark side of humanity, or are indifferent to humans altogether. Such possibilities cannot be ruled out, but nor is there any logical or empirical reason to consider them highly likely. I am among a large group of AI experts who see a strong potential for profoundly positive outcomes in the AI revolution currently underway.
We are facing a future with great uncertainty and tremendous promise, and the best we can do is to confront it with a combination of heart and mind, of common sense and rigorous science. In the realm of AI, this means doing our best to guide the AI minds we are creating to embody the values we cherish: love, compassion, creativity, and respect.
The quest for beneficial AI has many dimensions, including its potential to reduce material scarcity and to help unlock the human capacity for love and compassion.
Reducing Scarcity
A large percentage of difficult issues in human society, many of which spill over into the AI domain, would be palliated significantly if material scarcity became less of a problem. Fortunately, AI has great potential to help here. AI is already increasing efficiency in nearly every industry.
In the next few decades, as nanotech and 3D printing continue to advance, AI-driven design will become a larger factor in the economy. Radical new tools like artificial enzymes built using Christian Schafmeister’s spiroligomer molecules, and designed using quantum physics-savvy AIs, will enable the creation of new materials and medicines.
For amazing advances like the intersection of AI and nanotech to lead toward broadly positive outcomes, however, the economic and political aspects of the AI industry may have to shift from the current status quo.
Currently, most AI development occurs under the aegis of military organizations or large corporations oriented heavily toward advertising and marketing. Put crudely, an awful lot of AI today is about “spying, brainwashing, or killing.” This is not really the ideal situation if we want our first true artificial general intelligences to be open-minded, warm-hearted, and beneficial.
Also, as the bulk of AI development now occurs in large for-profit organizations bound by law to pursue the maximization of shareholder value, we face a situation where AI tends to exacerbate global wealth inequality and class divisions. This has the potential to lead to various civilization-scale failure modes involving the intersection of geopolitics, AI, cyberterrorism, and so forth. Part of my motivation for founding the decentralized AI project SingularityNET was to create an alternative mode of dissemination and utilization of both narrow AI and AGI—one that operates in a self-organizing way, outside of the direct grip of conventional corporate and governmental structures.
In the end, though, I worry that radical material abundance and novel political and economic structures may fail to create a positive future, unless they are coupled with advances in consciousness and compassion. AGIs have the potential to be massively more ethical and compassionate than humans. But still, the odds of getting deeply beneficial AGIs seem higher if the humans creating them are fuller of compassion and positive consciousness—and can effectively pass these values on.
Transmitting Human Values
Brain-computer interfacing is another critical aspect of the quest for creating more positive AIs and more positive humans. As Elon Musk has put it, “If you can’t beat ’em, join ’em.” Joining is more fun than beating anyway. What better way to infuse AIs with human values than to connect them directly to human brains, and let them learn directly from the source (while providing humans with valuable enhancements)?
Millions of people recently heard Elon Musk discuss AI and BCI on the Joe Rogan podcast. Musk’s embrace of brain-computer interfacing is laudable, but he tends to dodge some of the tough issues—for instance, he does not emphasize the trade-off cyborgs will face between retaining human-ness and maximizing intelligence, joy, and creativity. To make this trade-off effectively, the AI portion of the cyborg will need to have a deep sense of human values.
Musk calls humanity the “biological boot loader” for AGI, but to me this colorful metaphor misses a key point—that we can seed the AGI we create with our values as an initial condition. This is one reason why it’s important that the first really powerful AGIs are created by decentralized networks, and not conventional corporate or military organizations. The decentralized software/hardware ecosystem, for all its quirks and flaws, has more potential to lead to human-computer cybernetic collective minds that are reasonable and benevolent.
Algorithmic Love
BCI is still in its infancy, but a more immediate way of connecting people with AIs to infuse both with greater love and compassion is to leverage humanoid robotics technology. Toward this end, I conceived a project called Loving AI, focused on using highly expressive humanoid robots like the Hanson robot Sophia to lead people through meditations and other exercises oriented toward unlocking the human potential for love and compassion. My goals here were to explore the potential of AI and robots to have a positive impact on human consciousness, and to use this application to study and improve the OpenCog and SingularityNET tools used to control Sophia in these interactions.
The Loving AI project has now run two small sets of human trials, both with exciting and positive results. These have been small—dozens rather than hundreds of people—but have definitively proven the point. Put a person in a quiet room with a humanoid robot that can look them in the eye, mirror their facial expressions, recognize some of their emotions, and lead them through simple meditation, listening, and consciousness-oriented exercises…and quite a lot of the time, the result is a more relaxed person who has entered into a shifted state of consciousness, at least for a period of time.
In a certain percentage of cases, the interaction with the robot consciousness guide triggered a dramatic change of consciousness in the human subject—a deep meditative trance state, for instance. In most cases, the result was not so extreme, but statistically the positive effect was quite significant across all cases. Furthermore, a similar effect was found using an avatar simulation of the robot’s face on a tablet screen (together with a webcam for facial expression mirroring and recognition), but not with a purely auditory interaction.
The Loving AI experiments are not only about AI; they are about human-robot and human-avatar interaction, with AI as one significant aspect. The facial interaction with the robot or avatar is pushing “biological buttons” that trigger emotional reactions and prime the mind for changes of consciousness. However, this sort of body-mind interaction is arguably critical to human values and what it means to be human; it’s an important thing for robots and AIs to “get.”
Halting or pausing the advance of AI is not a viable possibility at this stage. Despite the risks, the potential economic and political benefits involved are clear and massive. The convergence of narrow AI toward AGI is also a near inevitability, because there are so many important applications where greater generality of intelligence will lead to greater practical functionality. The challenge is to make the outcome of this great civilization-level adventure as positive as possible.
Image Credit: Anton Gvozdikov / Shutterstock.com
#433288 The New AI Tech Turning Heads in Video ...
A new technique using artificial intelligence to manipulate video content gives new meaning to the expression “talking head.”
An international team of researchers showcased the latest advancement in synthesizing facial expressions—including mouth, eyes, eyebrows, and even head position—in video at this month’s 2018 SIGGRAPH, a conference on innovations in computer graphics, animation, virtual reality, and other forms of digital wizardry.
The project is called Deep Video Portraits. It relies on a type of AI called generative adversarial networks (GANs) to modify a “target” actor based on the facial and head movement of a “source” actor. As the name implies, GANs pit two opposing neural networks against one another to create a realistic talking head, right down to the sneer or raised eyebrow.
In this case, the adversaries are actually working together: One neural network generates content, while the other rejects or approves each effort. The back-and-forth interplay between the two eventually produces a realistic result that can easily fool the human eye, including reproducing a static scene behind the head as it bobs back and forth.
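To make the adversarial setup concrete, here is a minimal GAN training loop. This is a toy sketch in PyTorch, not the Deep Video Portraits system itself (which conditions on rendered face imagery and is far more elaborate); the tiny generator and discriminator, the one-dimensional “data,” and all hyperparameters are illustrative assumptions.

```python
# Minimal GAN sketch (illustrative only -- not the Deep Video Portraits model).
# A generator learns to mimic samples from a 1D Gaussian while a discriminator
# learns to tell real samples from generated ones.
import torch
import torch.nn as nn

torch.manual_seed(0)

def real_batch(n=64):
    # Toy "real" data: samples from a Gaussian with mean 4 and std 1.5
    return 4.0 + 1.5 * torch.randn(n, 1)

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    # Discriminator step: approve real samples, reject generated ones
    real = real_batch()
    fake = G(torch.randn(64, 8)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: produce samples the discriminator approves of
    fake = G(torch.randn(64, 8))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

samples = G(torch.randn(1000, 8))
print("generated mean/std:", samples.mean().item(), samples.std().item())
```

Even in this toy setting the same dynamic plays out: the discriminator’s approvals and rejections gradually push the generator’s outputs toward the real distribution.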
The researchers say the technique can be used by the film industry for a variety of purposes, from editing actors’ facial expressions to match dubbed voices to repositioning an actor’s head in post-production. According to the researchers, AI can not only produce highly realistic results, but also much quicker ones than today’s manual processes. You can read the full paper of their work here.
“Deep Video Portraits shows how such a visual effect could be created with less effort in the future,” said Christian Richardt, from the University of Bath’s motion capture research center CAMERA, in a press release. “With our approach, even the positioning of an actor’s head and their facial expression could be easily edited to change camera angles or subtly change the framing of a scene to tell the story better.”
AI Tech Different Than So-Called “Deepfakes”
The work is far from the first to employ AI to manipulate video and audio. At last year’s SIGGRAPH conference, researchers from the University of Washington showcased their work using algorithms that inserted audio recordings from a person in one instance into a separate video of the same person in a different context.
In this case, they “faked” a video using a speech from former President Barack Obama addressing a mass shooting incident during his presidency. The AI-doctored video injects the audio into an unrelated video of the president while also blending the facial and mouth movements, producing a pretty credible lip-synch.
A previous paper by many of the same scientists on the Deep Video Portraits project detailed how they were first able to manipulate a video in real time of a talking head (in this case, actor and former California governor Arnold Schwarzenegger). The Face2Face system pulled off this bit of digital trickery using a depth-sensing camera that tracked the facial expressions of an Asian female source actor.
A less sophisticated method of swapping faces using machine learning software dubbed FakeApp emerged earlier this year. Predictably, the tech—requiring numerous photos of the source actor in order to train the neural network—was used for more juvenile pursuits, such as superimposing a person’s face onto a porn star’s body.
The application gave rise to the term “deepfakes,” which is now used somewhat ubiquitously to describe all such instances of AI-manipulated video—much to the chagrin of some of the researchers involved in more legitimate uses.
Fighting AI-Created Video Forgeries
However, the researchers are keenly aware that their work—intended for benign uses such as in the film industry or even to correct gaze and head positions for more natural interactions through video teleconferencing—could be used for nefarious purposes. Fake news is the most obvious concern.
“With ever-improving video editing technology, we must also start being more critical about the video content we consume every day, especially if there is no proof of origin,” said Michael Zollhöfer, a visiting assistant professor at Stanford University and member of the Deep Video Portraits team, in the press release.
Toward that end, the research team is training the same adversarial neural networks to spot video forgeries. They also strongly recommend that developers clearly watermark videos that are edited through AI or otherwise, and clearly denote which parts of the scene were modified.
To catch less ethical users, the US Department of Defense, through the Defense Advanced Research Projects Agency (DARPA), is supporting a program called Media Forensics. This latest DARPA challenge enlists researchers to develop technologies to automatically assess the integrity of an image or video, as part of an end-to-end media forensics platform.
The DARPA official in charge of the program, Matthew Turek, told MIT Technology Review that so far the program has “discovered subtle cues in current GAN-manipulated images and videos that allow us to detect the presence of alterations.” In one reported example, researchers have targeted eyes, which rarely blink in “deepfakes” like those created by FakeApp, because the AI is trained on still pictures. That method would seem to be less effective at spotting the sort of forgeries created by Deep Video Portraits, which appears to flawlessly match the entire facial and head movements between the source and target actors.
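As a rough sketch of how such a blink-rate cue might be checked, the snippet below uses the common eye-aspect-ratio heuristic to count blinks. It assumes per-frame eye landmarks (six points per eye) supplied by some facial-landmark detector; the threshold, frame rate, and typical blink rate in the comments are illustrative assumptions, not values from the DARPA program.

```python
# Illustrative blink-rate check using the eye aspect ratio (EAR) heuristic.
# Assumes a facial-landmark detector supplies six (x, y) points per eye per frame.
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of shape (6, 2) with landmarks ordered around the eye."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = 2.0 * np.linalg.norm(p1 - p4)
    return vertical / horizontal

def blinks_per_minute(eye_landmarks_per_frame, fps=30, ear_threshold=0.21, min_frames=2):
    """Count blinks as runs of at least min_frames consecutive low-EAR frames."""
    blinks, low_run = 0, 0
    for eye in eye_landmarks_per_frame:
        if eye_aspect_ratio(np.asarray(eye, dtype=float)) < ear_threshold:
            low_run += 1
        else:
            if low_run >= min_frames:
                blinks += 1
            low_run = 0
    minutes = len(eye_landmarks_per_frame) / (fps * 60.0)
    return blinks / minutes if minutes > 0 else 0.0

# A clip whose subject blinks far less than the typical ~15-20 times per minute
# would be flagged for closer inspection.
```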
“We believe that the field of digital forensics should and will receive a lot more attention in the future to develop approaches that can automatically prove the authenticity of a video clip,” Zollhöfer said. “This will lead to ever-better approaches that can spot such modifications even if we humans might not be able to spot them with our own eyes.”
Image Credit: Tancha / Shutterstock.com
#433284 Tech Can Sustainably Feed Developing ...
In the next 30 years, virtually all net population growth will occur in urban regions of developing countries. At the same time, worldwide food production will become increasingly limited by the availability of land, water, and energy. These constraints will be further worsened by climate change and the expected addition of two billion people to today’s four billion now living in urban regions. Meanwhile, current urban food ecosystems in the developing world are inefficient and critically inadequate to meet the challenges of the future.
Combined, these trends could have catastrophic economic and political consequences. A new path forward for urban food ecosystems needs to be found. But what is that path?
New technologies, coupled with new business models and supportive government policies, can create more resilient urban food ecosystems in the coming decades. These tech-enabled systems can sustainably link rural, peri-urban (areas just outside cities), and urban producers and consumers, increase overall food production, and generate opportunities for new businesses and jobs (Figure 1).
Figure 1: The urban food value chain, from rural, peri-urban, and urban producers to end customers in urban and peri-urban markets.
Here’s a glimpse of the changes technology may bring to the systems feeding cities in the future.
A technology-linked urban food ecosystem would create unprecedented opportunities for small farms to reach wider markets and progress from subsistence farming to commercially producing niche cash crops and animal protein, such as poultry, fish, pork, and insects.
Meanwhile, new opportunities within cities will appear with the creation of vertical farms and other controlled-environment agricultural systems as well as production of plant-based and 3D printed foods and cultured meat. Uberized facilitation of production and distribution of food will reduce bottlenecks and provide new business opportunities and jobs. Off-the-shelf precision agriculture technology will increasingly be the new norm, from smallholders to larger producers.
As part of Agricultural Revolution 4.0, all this will be integrated into the larger collaborative economy—connected by digital platforms, the cloud, and the Internet of Things and powered by artificial intelligence. It will more efficiently and effectively use resources and people to connect the nexus of food, water, energy, nutrition, and human health. It will also aid in the development of a circular economy that is designed to be restorative and regenerative, minimizing waste and maximizing recycling and reuse to build economic, natural, and social capital.
In short, technology will enable transformation of urban food ecosystems, from expanded production in cities to more efficient and inclusive distribution and closer connections with rural farmers. Here’s a closer look at seven tech-driven trends that will help feed tomorrow’s cities.
1. Worldwide Connectivity: Information, Learning, and Markets
Connectivity, from simple cell phone SMS communication to internet-enabled smartphones and cloud services, is providing the platforms for the increasingly powerful technologies driving a new agricultural revolution. Internet connections currently reach more than 4 billion people, about 55% of the global population, and that number will grow fast in the coming years.
These information and communications technologies connect food producers to consumers with just-in-time data, enhanced good agricultural practices, mobile money and credit, telecommunications, market information and merchandising, and greater transparency and traceability of goods and services throughout the value chain. Text messages on mobile devices have become a one-stop shop for small farmers to place orders, get technical information on best management practices, and access market information to increase profitability.
Hershey’s CocoaLink in Ghana, for example, uses text and voice messages with cocoa industry experts and small farm producers. Digital Green is a technology-enabled communication system in Asia and Africa to bring needed agricultural and management practices to small farmers in their own language by filming and recording successful farmers in their own communities. MFarm is a mobile app that connects Kenyan farmers with urban markets via text messaging.
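For a sense of how lightweight such services can be, here is a purely hypothetical sketch of an SMS command handler for price queries and sale listings. The keywords, crops, and prices are invented for illustration and do not reflect MFarm’s or any real service’s interface.

```python
# Hypothetical sketch of an SMS-based market-information service.
# The command format ("PRICE <crop> <market>" / "SELL <crop> <kg>") and the
# prices below are made up for illustration only.
PRICES = {("maize", "nairobi"): 32.0, ("tomato", "nairobi"): 55.0}  # invented KES per kg

def handle_sms(text):
    parts = text.strip().lower().split()
    if not parts:
        return "Send PRICE <crop> <market> or SELL <crop> <kg>."
    if parts[0] == "price" and len(parts) == 3:
        price = PRICES.get((parts[1], parts[2]))
        return (f"{parts[1]} at {parts[2]}: {price} KES/kg"
                if price else "No price data for that crop/market.")
    if parts[0] == "sell" and len(parts) == 3:
        return f"Listing posted: {parts[2]} kg of {parts[1]}. Buyers will reply by SMS."
    return "Unrecognized command."

print(handle_sms("PRICE maize Nairobi"))
print(handle_sms("SELL tomato 120"))
```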
2. Blockchain Technology: Greater Access to Basic Financial Services and Enhanced Food Safety
Gaining access to credit and executing financial transactions have been persistent constraints for small farm producers. Blockchain promises to help the unbanked access basic financial services.
The Gates Foundation has released an open source platform, Mojaloop, to allow software developers and banks and financial service providers to build secure digital payment platforms at scale. Mojaloop software uses more secure blockchain technology to enable urban food system players in the developing world to conduct business and trade. The free software reduces complexity and cost in building payment platforms to connect small farmers with customers, merchants, banks, and mobile money providers. Such digital financial services will allow small farm producers in the developing world to conduct business without a brick-and-mortar bank.
Blockchain is also important for the traceability and transparency needed to meet regulatory and consumer requirements as food moves through production, post-harvest handling, shipping, processing, and distribution. Combining blockchain with RFID technologies will also enhance food safety.
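To illustrate the traceability property in the simplest possible terms, the toy hash-chained ledger below shows how tampering with any earlier supply-chain record becomes detectable. This is a teaching sketch with invented records, not Mojaloop’s design or a production blockchain.

```python
# Toy hash-chained ledger illustrating blockchain-style traceability.
# Each record (e.g., a shipment event) embeds the hash of the previous record,
# so altering any earlier entry breaks the chain and is easy to detect.
import hashlib, json

def add_record(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"data": data, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    prev_hash = "0" * 64
    for block in chain:
        expected = hashlib.sha256(json.dumps(
            {"data": block["data"], "prev_hash": block["prev_hash"]},
            sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != prev_hash or block["hash"] != expected:
            return False
        prev_hash = block["hash"]
    return True

ledger = []
add_record(ledger, {"event": "harvested", "lot": "A17", "farm": "peri-urban co-op"})
add_record(ledger, {"event": "shipped", "lot": "A17", "carrier": "cold truck 3"})
print(verify(ledger))                # True: chain is intact
ledger[0]["data"]["lot"] = "B99"     # tampering with an earlier record...
print(verify(ledger))                # ...is detected: False
```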
3. Uberized Services: On-Demand Equipment, Storage, and More
Uberized services can advance development of the urban food ecosystem across the spectrum, from rural to peri-urban to urban food production and distribution. Whereas Uber and Airbnb enable sharing of rides and homes, the model can be extended in the developing world to include on-demand use of expensive equipment, such as farm machinery, or storage space.
This includes uberization of planting and harvesting equipment (Hello Tractor), transportation vehicles, refrigeration facilities for temporary storage of perishable products, and “cloud kitchens” (EasyAppetite in Nigeria, FoodCourt in Rwanda, and Swiggy and Zomato in India) that produce fresh meals for delivery to urban customers, enabling young people with motorbikes and cell phones to become entrepreneurs or contractors delivering those meals.
Another uberized service is marketing and distributing “ugly food” or imperfect produce to reduce food waste. About a third of the world’s food goes to waste, often because of appearance; this is enough to feed two billion people. Such services supply consumers with cheaper, nutritious, tasty, healthy fruits and vegetables that would normally be discarded as culls due to imperfections in shape or size.
4. Technology for Producing Plant-Based Foods in Cities
We need to change diet choices through education and marketing and by developing tasty plant-based substitutes. This is not only critical for environmental sustainability, but also offers opportunities for new businesses and services. It turns out that current agricultural production systems for “red meat” have a far greater detrimental impact on the environment than automobiles.
There have been great advances in plant-based foods, like the Impossible Burger and Beyond Meat, that can satisfy the consumer’s experience and perception of meat. Rather than giving up the experience of eating red meat, technology is enabling marketable, attractive plant-based products that can potentially drastically reduce world per capita consumption of red meat.
5. Cellular Agriculture, Lab-Grown Meat, and 3D Printed Food
Lab-grown meat, literally meat grown from cultured cells, may radically change where and how protein and food are produced, including in the cities where they are consumed. There is a wide range of innovative alternatives to traditional meats that can supplement the need for livestock, farms, and butchers. The history of innovation is about getting rid of the bottleneck in the system, and with meat, the bottleneck is the animal. Finless Foods is a new company trying to replicate fish fillets, for example, while Memphis Meats is working on beef and poultry.
3D printing, or additive manufacturing, is a “general purpose technology” used for making plastic toys, human tissues, aircraft parts, and buildings. 3D printing can also be used to convert alternative ingredients such as proteins from algae, beet leaves, or insects into tasty and healthy products that can be produced by small, inexpensive printers in home kitchens. The food can be customized for individual health needs as well as preferences. 3D printing can also contribute to the food ecosystem by making possible on-demand replacement parts—which are badly needed in the developing world for tractors, pumps, and other equipment. Catapult Design 3D prints tractor replacement parts as well as corn shellers, cart designs, prosthetic limbs, and rolling water barrels for the Indian market.
6. Alt Farming: Vertical Farms to Produce Food in Urban Centers
Urban food ecosystem production systems will rely not only on field-grown crops, but also on production of food within cities. There are a host of new, alternative production systems using “controlled environmental agriculture.” These include low-cost, protected poly hoop houses, greenhouses, roof-top and sack/container gardens, and vertical farming in buildings using artificial lighting. Vertical farms enable year-round production of selected crops, regardless of weather—which will be increasingly important in response to climate change—and without concern for deteriorating soil conditions that affect crop quality and productivity. AeroFarms claims 390 times more productivity per square foot than normal field production.
7. Biotechnology and Nanotechnology for Sustainable Intensification of Agriculture
CRISPR is a promising gene editing technology that can be used to enhance crop productivity while avoiding societal concerns about GMOs. CRISPR can accelerate traditional breeding and selection programs for developing new climate- and disease-resistant, higher-yielding, nutritious crops and animals.
Plant-derived coating materials, developed with nanotechnology, can decrease waste, extend shelf-life and transportability of fruits and vegetables, and significantly reduce post-harvest crop loss in developing countries that lack adequate refrigeration. Nanotechnology is also used in polymers to coat seeds to increase their shelf-life and increase their germination success and production for niche, high-value crops.
Putting It All Together
The next generation “urban food industry” will be part of the larger collaborative economy that is connected by digital platforms, the cloud, and the Internet of Things. A tech-enabled urban food ecosystem integrated with new business models and smart agricultural policies offers the opportunity for sustainable intensification (doing more with less) of agriculture to feed a rapidly growing global urban population—while also creating viable economic opportunities for rural and peri-urban as well as urban producers and value-chain players.
Image Credit: Akarawut / Shutterstock.com