Tag Archives: phone

#433284 Tech Can Sustainably Feed Developing ...

In the next 30 years, virtually all net population growth will occur in urban regions of developing countries. At the same time, worldwide food production will become increasingly limited by the availability of land, water, and energy. These constraints will be further exacerbated by climate change and the expected addition of two billion people to the four billion already living in urban regions. Meanwhile, current urban food ecosystems in the developing world are inefficient and critically inadequate to meet the challenges of the future.

Combined, these trends could have catastrophic economic and political consequences. A new path forward for urban food ecosystems needs to be found. But what is that path?

New technologies, coupled with new business models and supportive government policies, can create more resilient urban food ecosystems in the coming decades. These tech-enabled systems can sustainably link rural, peri-urban (areas just outside cities), and urban producers and consumers, increase overall food production, and generate opportunities for new businesses and jobs (Figure 1).

Figure 1: The urban food value chain, from rural, peri-urban, and urban producers to end customers in urban and peri-urban markets.

Here’s a glimpse of the changes technology may bring to the systems feeding cities in the future.

A technology-linked urban food ecosystem would create unprecedented opportunities for small farms to reach wider markets and progress from subsistence farming to commercially producing niche cash crops and animal protein, such as poultry, fish, pork, and insects.

Meanwhile, new opportunities within cities will appear with the creation of vertical farms and other controlled-environment agricultural systems as well as production of plant-based and 3D printed foods and cultured meat. Uberized facilitation of production and distribution of food will reduce bottlenecks and provide new business opportunities and jobs. Off-the-shelf precision agriculture technology will increasingly be the new norm, from smallholders to larger producers.

As part of Agricultural Revolution 4.0, all this will be integrated into the larger collaborative economy—connected by digital platforms, the cloud, and the Internet of Things and powered by artificial intelligence. It will more efficiently and effectively use resources and people to connect the nexus of food, water, energy, nutrition, and human health. It will also aid in the development of a circular economy that is designed to be restorative and regenerative, minimizing waste and maximizing recycling and reuse to build economic, natural, and social capital.

In short, technology will enable transformation of urban food ecosystems, from expanded production in cities to more efficient and inclusive distribution and closer connections with rural farmers. Here’s a closer look at seven tech-driven trends that will help feed tomorrow’s cities.

1. Worldwide Connectivity: Information, Learning, and Markets
Connectivity—from simple SMS messaging on basic cell phones to internet-enabled smartphones and cloud services—is providing the platform for the increasingly powerful technologies driving a new agricultural revolution. Internet connections currently reach more than 4 billion people, about 55% of the global population, and that number will grow quickly in the coming years.

These information and communications technologies connect food producers to consumers with just-in-time data, enhanced good agricultural practices, mobile money and credit, telecommunications, market information and merchandising, and greater transparency and traceability of goods and services throughout the value chain. Text messages on mobile devices have become a one-stop shop for small farmers to place orders, learn best management practices, and access the market information they need to increase profitability.

Hershey’s CocoaLink in Ghana, for example, connects cocoa industry experts and small farm producers through text and voice messages. Digital Green is a technology-enabled communication system in Asia and Africa that brings needed agricultural and management practices to small farmers in their own languages by filming successful farmers in their own communities. MFarm is a mobile app that connects Kenyan farmers with urban markets via text messaging.

2. Blockchain Technology: Greater Access to Basic Financial Services and Enhanced Food Safety
Gaining access to credit and executing financial transactions have been persistent constraints for small farm producers. Blockchain promises to help the unbanked access basic financial services.

The Gates Foundation has released an open source platform, Mojaloop, that allows software developers, banks, and financial service providers to build secure digital payment platforms at scale. Mojaloop uses blockchain technology to enable urban food system players in the developing world to conduct business and trade securely. The free software reduces the complexity and cost of building payment platforms that connect small farmers with customers, merchants, banks, and mobile money providers. Such digital financial services will allow small farm producers in the developing world to conduct business without a brick-and-mortar bank.

Blockchain also supports the traceability and transparency needed to meet regulatory and consumer requirements across production, post-harvest handling, shipping, processing, and distribution. Combining blockchain with RFID technologies will further enhance food safety.
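
To make the traceability idea concrete, here is a minimal sketch of a hash-linked record trail for one shipment. It is a toy, not Mojaloop's or any commercial platform's actual API; the event names and fields are invented for illustration.

```python
import hashlib
import json

def add_record(chain, payload):
    """Append a record whose hash covers its payload plus the previous record's hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev_hash": prev_hash}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)

def verify(chain):
    """Any tampering with an earlier record breaks every later hash link."""
    prev_hash = "0" * 64
    for record in chain:
        body = {"payload": record["payload"], "prev_hash": record["prev_hash"]}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if record["prev_hash"] != prev_hash or record["hash"] != expected:
            return False
        prev_hash = record["hash"]
    return True

# A toy farm-to-market trail (event names are illustrative).
trail = []
add_record(trail, {"event": "harvest", "producer": "smallholder-001", "crop": "tomato"})
add_record(trail, {"event": "cold_storage", "facility": "peri-urban-depot-3"})
add_record(trail, {"event": "delivered", "buyer": "urban-market-17"})

print(verify(trail))                   # True
trail[0]["payload"]["crop"] = "mango"  # tamper with the first record...
print(verify(trail))                   # ...and verification fails: False
```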

3. Uberized Services: On-Demand Equipment, Storage, and More
Uberized services can advance development of the urban food ecosystem across the spectrum, from rural to peri-urban to urban food production and distribution. Whereas Uber and Airbnb enable sharing of rides and homes, the model can be extended in the developing world to include on-demand use of expensive equipment, such as farm machinery, or storage space.

This includes uberization of planting and harvesting equipment (Hello Tractor), transportation vehicles, refrigeration facilities for temporary storage of perishable products, and “cloud kitchens” (EasyAppetite in Nigeria, FoodCourt in Rwanda, and Swiggy and Zomato in India) that produce fresh meals for delivery to urban customers, enabling young people with motorbikes and cell phones to become delivery entrepreneurs or contractors.

Another uberized service is marketing and distributing “ugly food”—imperfect produce—to reduce food waste. About a third of the world’s food goes to waste, often because of appearance; that is enough to feed two billion people. Such services supply consumers with cheaper, nutritious fruits and vegetables that would normally be discarded as culls because of imperfections in shape or size.

4. Technology for Producing Plant-Based Foods in Cities
We need to change diet choices through education and marketing and by developing tasty plant-based substitutes. This is not only critical for environmental sustainability; it also opens opportunities for new businesses and services. Current agricultural production systems for “red meat” have a far greater detrimental impact on the environment than automobiles.

There have been great advances in plant-based foods, like the Impossible Burger and Beyond Meat, that can satisfy the consumer’s experience and perception of meat. Rather than asking consumers to give up the experience of eating red meat, technology is enabling marketable, attractive plant-based products that could drastically reduce world per capita consumption of red meat.

5. Cellular Agriculture, Lab-Grown Meat, and 3D Printed Food
Lab-grown meat—literally meat grown from cultured cells—may radically change where and how protein and food are produced, including production in the cities where they are consumed. There is a wide range of innovative alternatives to traditional meats that can reduce the need for livestock, farms, and butchers. The history of innovation is about removing the bottleneck in the system, and with meat, the bottleneck is the animal. Finless Foods is a new company trying to replicate fish fillets, for example, while Memphis Meats is working on beef and poultry.

3D printing, or additive manufacturing, is a “general purpose technology” used for making plastic toys, human tissues, aircraft parts, and buildings. 3D printing can also be used to convert alternative ingredients, such as proteins from algae, beet leaves, or insects, into tasty and healthy products produced by small, inexpensive printers in home kitchens. The food can be customized for individual health needs as well as preferences. 3D printing can also contribute to the food ecosystem by making possible on-demand replacement parts—badly needed in the developing world for tractors, pumps, and other equipment. Catapult Design 3D prints tractor replacement parts as well as corn shellers, cart designs, prosthetic limbs, and rolling water barrels for the Indian market.

6. Alt Farming: Vertical Farms to Produce Food in Urban Centers
Urban food ecosystem production systems will rely not only on field-grown crops, but also on production of food within cities. There are a host of new, alternative production systems using “controlled environmental agriculture.” These include low-cost, protected poly hoop houses, greenhouses, roof-top and sack/container gardens, and vertical farming in buildings using artificial lighting. Vertical farms enable year-round production of selected crops, regardless of weather—which will be increasingly important in response to climate change—and without concern for deteriorating soil conditions that affect crop quality and productivity. AeroFarms claims 390 times more productivity per square foot than normal field production.

7. Biotechnology and Nanotechnology for Sustainable Intensification of Agriculture
CRISPR is a promising gene-editing technology that can be used to enhance crop productivity while avoiding societal concerns about GMOs. CRISPR can accelerate traditional breeding and selection programs for developing new climate- and disease-resistant, higher-yielding, nutritious crops and animals.

Plant-derived coating materials, developed with nanotechnology, can decrease waste, extend the shelf-life and transportability of fruits and vegetables, and significantly reduce post-harvest crop losses in developing countries that lack adequate refrigeration. Nanotechnology is also used in polymer seed coatings that extend shelf-life and improve germination success and production for niche, high-value crops.

Putting It All Together
The next generation “urban food industry” will be part of the larger collaborative economy that is connected by digital platforms, the cloud, and the Internet of Things. A tech-enabled urban food ecosystem integrated with new business models and smart agricultural policies offers the opportunity for sustainable intensification (doing more with less) of agriculture to feed a rapidly growing global urban population—while also creating viable economic opportunities for rural and peri-urban as well as urban producers and value-chain players.

Image Credit: Akarawut / Shutterstock.com

Posted in Human Robots

#433282 The 4 Waves of AI: Who Will Own the ...

Recently, I picked up Kai-Fu Lee’s newest book, AI Superpowers.

Kai-Fu Lee is one of the most plugged-in AI investors on the planet, managing over $2 billion across six funds and more than 300 portfolio companies in the US and China.

Drawing from his pioneering work in AI, executive leadership at Microsoft, Apple, and Google (where he served as founding president of Google China), and his founding of VC fund Sinovation Ventures, Lee shares invaluable insights about:

The four factors driving today’s AI ecosystems;
China’s extraordinary inroads in AI implementation;
Where autonomous systems are headed;
How we’ll need to adapt.

With a foothold in both Beijing and Silicon Valley, Lee looks at the power balance between Chinese and US tech behemoths—each turbocharging new applications of deep learning and sweeping up global markets in the process.

In this post, I’ll be discussing Lee’s “Four Waves of AI,” an excellent framework for understanding where AI is today and where it’s going. I’ll also feature some of the hottest Chinese tech companies leading the charge, ones worth watching right now.

I’m super excited that this Tuesday, I’ve scored the opportunity to sit down with Kai-Fu Lee to discuss his book in detail via a webinar.

With Sino-US competition heating up, who will own the future of technology?

Let’s dive in.

The First Wave: Internet AI
In this first stage of AI deployment, we’re dealing primarily with recommendation engines—algorithmic systems that learn from masses of user data to curate online content personalized to each one of us.

Think Amazon’s spot-on product recommendations, or that “Up Next” YouTube video you just have to watch before getting back to work, or Facebook ads that seem to know what you’ll buy before you do.

Powered by the data flowing through our networks, internet AI leverages the fact that users automatically label data as we browse. Clicking versus not clicking; lingering on a web page longer than we did on another; hovering over a Facebook video to see what happens at the end.

These cascades of labeled data build a detailed picture of our personalities, habits, demands, and desires: the perfect recipe for more tailored content to keep us on a given platform.
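
A minimal sketch of how those browsing signals become implicit training labels, with no explicit ratings required. The signal names and weights are made up for illustration; real platforms learn far richer representations.

```python
from collections import defaultdict

# Illustrative mapping from browsing behavior to a label weight (values are assumptions).
SIGNAL_WEIGHTS = {"click": 1.0, "long_dwell": 2.0, "watch_to_end": 3.0, "bounce": -1.0}

def implicit_labels(events):
    """Aggregate (user, item, signal) events into per-(user, item) scores a recommender can train on."""
    labels = defaultdict(float)
    for user, item, signal in events:
        labels[(user, item)] += SIGNAL_WEIGHTS.get(signal, 0.0)
    return labels

events = [
    ("u1", "video_42", "click"),
    ("u1", "video_42", "watch_to_end"),
    ("u1", "article_7", "bounce"),
]
print(dict(implicit_labels(events)))
# {('u1', 'video_42'): 4.0, ('u1', 'article_7'): -1.0}
```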

Currently, Lee estimates that Chinese and American companies stand head-to-head when it comes to deployment of internet AI. But given China’s data advantage, he predicts that Chinese tech giants will have a slight lead (60-40) over their US counterparts in the next five years.

While you’ve most definitely heard of Alibaba and Baidu, you’ve probably never stumbled upon Toutiao.

Starting out as a copycat of America’s wildly popular BuzzFeed, Toutiao reached a valuation of $20 billion by 2017, dwarfing BuzzFeed’s valuation by more than a factor of 10. But with almost 120 million daily active users, Toutiao doesn’t just stop at creating viral content.

Equipped with natural-language processing and computer vision, Toutiao’s AI engines survey a vast network of different sites and contributors, rewriting headlines to optimize for user engagement, and processing each user’s online behavior—clicks, comments, engagement time—to curate individualized news feeds for millions of consumers.

And as users grow more engaged with Toutiao’s content, the company’s algorithms get better and better at recommending content, optimizing headlines, and delivering a truly personalized feed.

It’s this kind of positive feedback loop that fuels today’s AI giants surfing the wave of internet AI.

The Second Wave: Business AI
While internet AI takes advantage of the fact that netizens are constantly labeling data via clicks and other engagement metrics, business AI jumps on the data that traditional companies have already labeled in the past.

Think banks issuing loans and recording repayment rates; hospitals archiving diagnoses, imaging data, and subsequent health outcomes; or courts noting conviction history, recidivism, and flight risk.

While we humans make predictions based on obvious root causes (strong features), AI algorithms can process thousands of weakly correlated variables (weak features) that may have much more to do with a given outcome than the usual suspects.

By scouting out hidden correlations that escape our linear cause-and-effect logic, business AI leverages labeled data to train algorithms that outperform even the most veteran of experts.
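
Here is a toy experiment on synthetic data illustrating the "weak features" point: each extra variable is only faintly correlated with the outcome, yet together they lift a simple model above one built on the obvious "strong" feature alone. This is a sketch of the principle, not any company's actual pipeline; it assumes NumPy and scikit-learn are installed, and all data are simulated.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_weak = 5000, 200

# One obvious "strong" feature plus many weakly correlated ones (all synthetic).
strong = rng.normal(size=n_samples)
weak = rng.normal(size=(n_samples, n_weak))
weak_coefs = rng.normal(scale=0.08, size=n_weak)   # each weak feature matters only a little
logit = strong + weak @ weak_coefs + rng.normal(size=n_samples)
y = (logit > 0).astype(int)

for name, X in [("strong feature only", strong.reshape(-1, 1)),
                ("strong + 200 weak features", np.hstack([strong.reshape(-1, 1), weak]))]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    print(f"{name}: test accuracy {model.score(X_te, y_te):.2f}")
```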

Apply these data-trained AI engines to banking, insurance, and legal sentencing, and you get minimized default rates, optimized premiums, and plummeting recidivism rates.

While Lee confidently places America in the lead (90-10) for business AI, China’s substantial lag in structured industry data could actually work in its favor going forward.

In industries where Chinese startups can leapfrog over legacy systems, China has a major advantage.

Take Chinese app Smart Finance, for instance.

While Americans embraced credit and debit cards in the 1970s, China was still in the throes of its Cultural Revolution, largely missing the bus on this technology.

Fast forward to 2017, and China’s mobile payment spending exceeded that of Americans by a ratio of 50 to 1. Without the competition of deeply entrenched credit cards, mobile payments were an obvious upgrade to China’s cash-heavy economy, embraced by 70 percent of China’s 753 million smartphone users by the end of 2017.

But by leapfrogging over credit cards and into mobile payments, China largely left behind the notion of credit.

And here’s where Smart Finance comes in.

An AI-powered app for microfinance, Smart Finance depends almost exclusively on its algorithms to make millions of microloans. For each potential borrower, the app simply requests access to a portion of the user’s phone data.

On the basis of variables as subtle as your typing speed and battery percentage, Smart Finance can predict with astounding accuracy your likelihood of repaying a $300 loan.
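
Conceptually, that prediction is just a learned scoring function over unconventional features. The sketch below is a toy with hand-picked weights, not Smart Finance's actual model; the feature names and numbers are hypothetical.

```python
import math

# Hypothetical behavioral features and weights -- for illustration only.
WEIGHTS = {"typing_speed_cps": 0.4, "battery_pct": 0.01, "late_night_usage_ratio": -1.2}
BIAS = -1.5

def repayment_probability(features):
    """Toy logistic score: estimated probability the applicant repays a small loan."""
    z = BIAS + sum(w * features.get(name, 0.0) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

applicant = {"typing_speed_cps": 5.2, "battery_pct": 80, "late_night_usage_ratio": 0.3}
p = repayment_probability(applicant)
print(f"p(repay) = {p:.2f}, approve $300 loan: {p >= 0.5}")
```

In a real deployment the weights would be learned from repayment histories rather than set by hand, but the structure is the same: many small signals, one probability, one lending decision.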

Such deployments of business AI and internet AI are already revolutionizing our industries and individual lifestyles. But still on the horizon lie two even more monumental waves—perception AI and autonomous AI.

The Third Wave: Perception AI
In this wave, AI gets an upgrade with eyes, ears, and myriad other senses, merging the digital world with our physical environments.

As sensors and smart devices proliferate through our homes and cities, we are on the verge of entering a trillion-sensor economy.

Companies like China’s Xiaomi are putting out millions of IoT-connected devices, and teams of researchers have already begun prototyping smart dust—solar cell- and sensor-geared particulates that can store and communicate troves of data anywhere, anytime.

As Kai-Fu explains, perception AI “will bring the convenience and abundance of the online world into our offline reality.” Sensor-enabled hardware devices will turn everything from hospitals to cars to schools into online-merge-offline (OMO) environments.

Imagine walking into a grocery store, scanning your face to pull up your most common purchases, and then picking up a virtual assistant (VA) shopping cart. Having pre-loaded your data, the cart adjusts your usual grocery list with voice input, reminds you to get your spouse’s favorite wine for an upcoming anniversary, and guides you through a personalized store route.

While we haven’t yet leveraged the full potential of perception AI, China and the US are already making incredible strides. Given China’s hardware advantage, Lee predicts China currently has a 60-40 edge over its American tech counterparts.

Now the go-to city for startups building robots, drones, wearable technology, and IoT infrastructure, Shenzhen has turned into a powerhouse for intelligent hardware, as I discussed last week. Turbocharging output of sensors and electronic parts via thousands of factories, Shenzhen’s skilled engineers can prototype and iterate new products at unprecedented scale and speed.

With the added fuel of Chinese government support and a relaxed Chinese attitude toward data privacy, China’s lead may even reach 80-20 in the next five years.

Jumping on this wave are companies like Xiaomi, which aims to turn bathrooms, kitchens, and living rooms into smart OMO environments. Having invested in 220 companies and incubated 29 startups that produce its products, Xiaomi surpassed 85 million intelligent home devices by the end of 2017, making it the world’s largest network of these connected products.

One KFC restaurant in China has even teamed up with Alipay (Alibaba’s mobile payments platform) to pioneer a ‘pay-with-your-face’ feature. Forget cash, cards, and cell phones, and let OMO do the work.

The Fourth Wave: Autonomous AI
But the most monumental—and unpredictable—wave is the fourth and final: autonomous AI.

Integrating all previous waves, autonomous AI gives machines the ability to sense and respond to the world around them, enabling AI to move and act productively.

While today’s machines can outperform us on repetitive tasks in structured and even unstructured environments (think Boston Dynamics’ humanoid Atlas or oncoming autonomous vehicles), machines with the power to see, hear, touch and optimize data will be a whole new ballgame.

Think: swarms of drones that can selectively spray and harvest entire farms with computer vision and remarkable dexterity, heat-resistant drones that can put out forest fires 100X more efficiently, or Level 5 autonomous vehicles that navigate smart roads and traffic systems all on their own.

While autonomous AI will first involve robots that create direct economic value—automating tasks on a one-to-one replacement basis—these intelligent machines will ultimately revamp entire industries from the ground up.

Kai-Fu Lee currently puts America in a commanding lead of 90-10 in autonomous AI, especially when it comes to self-driving vehicles. But Chinese government efforts are quickly ramping up the competition.

Already in China’s Zhejiang province, highway regulators and government officials have plans to build China’s first intelligent superhighway, outfitted with sensors, road-embedded solar panels and wireless communication between cars, roads and drivers.

Aimed at increasing transit efficiency by up to 30 percent while minimizing fatalities, the project may one day allow autonomous electric vehicles to continuously charge as they drive.

A similar government-fueled project involves Beijing’s new neighbor Xiong’an. Projected to take in over $580 billion in infrastructure spending over the next 20 years, Xiong’an New Area could one day become the world’s first city built around autonomous vehicles.

Baidu is already working with Xiong’an’s local government to build out this AI city with an environmental focus. Possibilities include sensor-geared cement, computer vision-enabled traffic lights, intersections with facial recognition, and parking lots turned into parks.

Lastly, Lee predicts China will almost certainly lead the charge in autonomous drones. Already, Shenzhen is home to premier drone maker DJI—a company I’ll be visiting with 24 top executives later this month as part of my annual China Platinum Trip.

Named “the best company I have ever encountered” by Chris Anderson, DJI owns an estimated 50 percent of the North American drone market, supercharged by Shenzhen’s extraordinary maker movement.

While the long-term Sino-US competitive balance in fourth wave AI remains to be seen, one thing is certain: in a matter of decades, we will witness the rise of AI-embedded cityscapes and autonomous machines that can interact with the real world and help solve today’s most pressing grand challenges.

Join Me
Webinar with Dr. Kai-Fu Lee: Dr. Kai-Fu Lee — one of the world’s most respected experts on AI — and I will discuss his latest book AI Superpowers: China, Silicon Valley, and the New World Order. Artificial Intelligence is reshaping the world as we know it. With U.S.-Sino competition heating up, who will own the future of technology? Register here for the free webinar on September 4th, 2018 from 11:00am–12:30pm PST.

Image Credit: Elena11 / Shutterstock.com

Posted in Human Robots

#432882 Why the Discovery of Room-Temperature ...

Superconductors are among the most bizarre and exciting materials yet discovered. Counterintuitive quantum-mechanical effects mean that, below a critical temperature, they have zero electrical resistance. This property alone is more than enough to spark the imagination.

A current that could flow forever without losing any energy means transmission of power with virtually no losses in the cables. When renewable energy sources start to dominate the grid and high-voltage transmission across continents becomes important to overcome intermittency, lossless cables will result in substantial savings.
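
The "virtually no losses" claim follows from the standard expression for resistive (Joule) heating in a cable carrying current I through resistance R; this is the general textbook relation, not anything specific to a particular material:

```latex
P_{\text{loss}} = I^{2} R \;\longrightarrow\; 0 \qquad \text{as } R \to 0
```

Whatever fraction of transmitted power a conventional line dissipates as heat, a genuinely zero-resistance conductor dissipates none of it.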

What’s more, a superconducting wire carrying a current that never, ever diminishes would act as a perfect store of electrical energy. Unlike batteries, which degrade over time, if the resistance is truly zero, you could return to the superconductor in a billion years and find that same old current flowing through it. Energy could be captured and stored indefinitely!

With no resistance, a huge current could be passed through the superconducting wire and, in turn, produce magnetic fields of incredible power.

You could use them to levitate trains and produce astonishing accelerations, thereby revolutionizing the transport system. You could use them in power plants—replacing conventional methods which spin turbines in magnetic fields to generate electricity—and in quantum computers as the two-level system required for a “qubit,” in which the zeros and ones are replaced by current flowing clockwise or counterclockwise in a superconductor.

Arthur C. Clarke famously said that any sufficiently advanced technology is indistinguishable from magic; superconductors can certainly seem like magical devices. So, why aren’t they busy remaking the world? There’s a problem—that critical temperature.

For all known materials, it’s hundreds of degrees below freezing. Superconductors also have a critical magnetic field; beyond a certain field strength, they cease to work. There’s a tradeoff: materials with an intrinsically high critical temperature can often sustain the largest magnetic fields, but only when cooled well below that temperature.

This has meant that superconductor applications so far have been limited to situations where you can afford to cool the components of your system to close to absolute zero: in particle accelerators and experimental nuclear fusion reactors, for example.

But even as some aspects of superconductor technology become mature in limited applications, the search for higher temperature superconductors moves on. Many physicists still believe a room-temperature superconductor could exist. Such a discovery would unleash amazing new technologies.

The Quest for Room-Temperature Superconductors
After Heike Kamerlingh Onnes discovered superconductivity by accident while attempting to prove Lord Kelvin’s theory that resistance would increase with decreasing temperature, theorists scrambled to explain the new property in the hope that understanding it might allow for room-temperature superconductors to be synthesized.

They came up with the BCS theory, which explained some of the properties of superconductors. It also predicted that the dream of technologists, a room-temperature superconductor, could not exist; the maximum temperature for superconductivity according to BCS theory was just 30 K.

Then, in the 1980s, the field changed again with the discovery of unconventional, or high-temperature, superconductivity. “High temperature” is still very cold: the highest temperature for superconductivity achieved was -70°C for hydrogen sulphide at extremely high pressures. For normal pressures, -140°C is near the upper limit. Unfortunately, high-temperature superconductors—which require relatively cheap liquid nitrogen, rather than liquid helium, to cool—are mostly brittle ceramics, which are expensive to form into wires and have limited application.

Given the limitations of high-temperature superconductors, researchers continue to believe there’s a better option awaiting discovery—an incredible new material that checks boxes like superconductivity approaching room temperature, affordability, and practicality.

Tantalizing Clues
Without a detailed theoretical understanding of how this phenomenon occurs—although incremental progress happens all the time—scientists can occasionally feel like they’re taking educated guesses at materials that might be likely candidates. It’s a little like trying to guess a phone number, but with the periodic table of elements instead of digits.

Yet the prospect remains, in the words of one researcher, tantalizing. A Nobel Prize and potentially changing the world of energy and electricity is not bad for a day’s work.

Some research focuses on cuprates, complex crystals that contain layers of copper and oxygen atoms. Cuprates doped with various elements—exotic compounds such as mercury barium calcium copper oxide—are among the best superconductors known today.

Research also continues into some anomalous but unexplained reports that graphite soaked in water can act as a room-temperature superconductor, but there’s no indication that this could be used for technological applications yet.

In early 2017, as part of the ongoing effort to explore the most extreme and exotic forms of matter we can create on Earth, researchers managed to compress hydrogen into a metal.

The pressure required to do this was greater than that at the core of the Earth and thousands of times higher than that at the bottom of the ocean. Some researchers in the field of condensed-matter physics doubt that metallic hydrogen was produced at all.

It’s considered possible that metallic hydrogen could be a room-temperature superconductor. But getting the samples to stick around long enough for detailed testing has proved tricky, with the diamonds containing the metallic hydrogen suffering a “catastrophic failure” under the pressure.

Superconductivity—or behavior that strongly resembles it—was also observed in yttrium barium copper oxide (YBCO) at room temperature in 2014. The only catch was that this electron transport lasted for a tiny fraction of a second and required the material to be bombarded with pulsed lasers.

Not very practical, you might say, but tantalizing nonetheless.

Other new materials display enticing properties too. The 2016 Nobel Prize in Physics was awarded for the theoretical work that characterizes topological insulators—materials that exhibit similarly strange quantum behaviors. They can be considered perfect insulators for the bulk of the material but extraordinarily good conductors in a thin layer on the surface.

Microsoft is betting on topological insulators as the key component in their attempt at a quantum computer. They’ve also been considered potentially important components in miniaturized circuitry.

A number of remarkable electronic transport properties have also been observed in new, “2D” structures—like graphene, these are materials synthesized to be as thick as a single atom or molecule. And research continues into how we can utilize the superconductors we’ve already discovered; for example, some teams are trying to develop insulating material that prevents superconducting HVDC cable from overheating.

Room-temperature superconductivity remains as elusive and exciting as it has been for over a century. It is unclear whether a room-temperature superconductor can exist, but the discovery of high-temperature superconductors is a promising indicator that unconventional and highly useful quantum effects may be discovered in completely unexpected materials.

Perhaps in the future—through artificial intelligence simulations or the serendipitous discoveries of a 21st century Kamerlingh Onnes—this little piece of magic could move into the realm of reality.

Image Credit: ktsdesign / Shutterstock.com

Posted in Human Robots

#432880 Google’s Duplex Raises the Question: ...

By now, you’ve probably seen Google’s new Duplex software, which promises to call people on your behalf to book appointments for haircuts and the like. As yet, it only exists in demo form, but already it seems like Google has made a big stride towards capturing a market that plenty of companies have had their eye on for quite some time. This software is impressive, but it raises questions.

Many of you will be familiar with the stilted, robotic conversations you could have with early chatbots that were, essentially, glorified menus. Instead of pressing 1 to confirm or 2 to re-enter, some of these bots would accept simple commands like “Yes” or “No,” replacing the buttons with a limited ability to recognize a few words. Using them was often far more frustrating than using a menu—there are few things more irritating than a robot saying, “Sorry, your response was not recognized.”
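
For comparison, here is a toy of that "glorified menu" pattern: rigid keyword matching with no understanding of phrasing or context, which is exactly what made those systems so frustrating. The responses are invented for illustration.

```python
# A toy "glorified menu" bot: exact keyword matching, no understanding of phrasing or context.
RESPONSES = {
    "yes": "Booking confirmed.",
    "no": "Okay, let's start over.",
}

def menu_bot(utterance: str) -> str:
    word = utterance.strip().lower()
    return RESPONSES.get(word, "Sorry, your response was not recognized.")

print(menu_bot("Yes"))                      # Booking confirmed.
print(menu_bot("yeah, that works for me"))  # Sorry, your response was not recognized.
```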

Google Duplex scheduling a hair salon appointment:

Google Duplex calling a restaurant:

Even getting the response recognized is hard enough. After all, there are countless different nuances and accents to baffle voice recognition software, and endless turns of phrase that amount to saying the same thing that can confound natural language processing (NLP), especially if you like your phrasing quirky.

You may think that standard customer-service type conversations all travel the same route, using similar words and phrasing. But when there are over 80,000 ways to order coffee, and making a mistake is frowned upon, even simple tasks require high accuracy over a huge dataset.

Advances in audio processing, neural networks, and NLP, as well as raw computing power, have meant that basic recognition of what someone is trying to say is less of an issue. Soundhound’s virtual assistant prides itself on being able to process complicated requests (perhaps needlessly complicated).

The deeper issue, as with all attempts to develop conversational machines, is one of understanding context. There are so many ways a conversation can go that attempting to construct a conversation two or three layers deep quickly runs into problems. Multiply the thousands of things people might say by the thousands they might say next, and the combinatorics of the challenge runs away from most chatbots, leaving them as either glorified menus, gimmicks, or rather bizarre to talk to.
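
The scale of the problem is easy to see with rough, illustrative numbers: if a user can say on the order of b meaningfully different things at each turn, a scripted dialogue d turns deep has to cover roughly b to the power d paths.

```latex
\underbrace{10^{3}}_{\text{turn 1}} \times \underbrace{10^{3}}_{\text{turn 2}} = 10^{6}
\ \text{paths after two turns;} \qquad
b^{d} \text{ in general, so } d = 3 \text{ already gives } 10^{9}.
```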

Yet Google, which surely remembers from Glass the risk of debuting technology prematurely—especially technology that asks you to rethink how you interact with or trust software—must have faith in Duplex to show it on the world stage. We know that startups like Semantic Machines and x.ai have received serious funding to perform very similar functions, using natural-language conversations to perform computing tasks, schedule meetings, book hotels, or purchase items.

It’s no great leap to imagine Google will soon do the same, bringing us closer to a world of onboard computing, where Lens labels the world around us and their assistant arranges it for us (all the while gathering more and more data it can convert into personalized ads). The early demos showed some clever tricks for keeping the conversation within a fairly narrow realm where the AI should be comfortable and competent, and the blog post that accompanied the release shows just how much effort has gone into the technology.

Yet given the privacy and ethics funk the tech industry finds itself in, and people’s general unease about AI, the main reaction to Duplex’s impressive demo was concern. The voice sounded too natural, bringing to mind Lyrebird and their warnings of deepfakes. You might trust “Do the Right Thing” Google with this technology, but it could usher in an era when automated robo-callers are far more convincing.

A more human-like voice may sound like a perfectly innocuous improvement, but the fact that the assistant interjects naturalistic “umm” and “mm-hm” responses to more perfectly mimic a human rubbed a lot of people the wrong way. This wasn’t just a voice assistant trying to sound less grinding and robotic; it was actively trying to deceive people into thinking they were talking to a human.

Google is running the risk of trying to get to conversational AI by going straight through the uncanny valley.

“Google’s experiments do appear to have been designed to deceive,” said Dr. Thomas King of the Oxford Internet Institute’s Digital Ethics Lab, according to Techcrunch. “Their main hypothesis was ‘can you distinguish this from a real person?’ In this case it’s unclear why their hypothesis was about deception and not the user experience… there should be some kind of mechanism there to let people know what it is they are speaking to.”

From Google’s perspective, being able to say “90 percent of callers can’t tell the difference between this and a human personal assistant” is an excellent marketing ploy, even though statistics about how many interactions are successful might be more relevant.

In fact, Duplex runs contrary to pretty much every major recommendation about ethics for the use of robotics or artificial intelligence, not to mention certain eavesdropping laws. Transparency is key to holding machines (and the people who design them) accountable, especially when it comes to decision-making.

Then there are the more subtle social issues. One prominent effect social media has had is to allow people to silo themselves; in echo chambers of like-minded individuals, it’s hard to see how other opinions exist. Technology exacerbates this by removing the evolutionary cues that go along with face-to-face interaction. Confronted with a pair of human eyes, people are more generous. Confronted with a Twitter avatar or a Facebook interface, people hurl abuse and criticism they’d never dream of using in a public setting.

Now that we can use technology to interact with ever fewer people, will it change us? Is it fair to offload the burden of dealing with a robot onto the poor human at the other end of the line, who might have to deal with dozens of such calls a day? Google has said that if the AI is in trouble, it will put you through to a human, which might help save receptionists from the hell of trying to explain a concept to dozens of dumbfounded AI assistants all day. But there’s always the risk that failures will be blamed on the person and not the machine.

As AI advances, could we end up treating the dwindling number of people in these “customer-facing” roles as the buggiest part of a fully automatic service? Will people start accusing each other of being robots on the phone, as well as on Twitter?

Google has provided plenty of reassurances about how the system will be used. They have said they will ensure that the system is identified, and it’s hardly difficult to resolve this problem; a slight change in the script from their demo would do it. For now, consumers will likely appreciate moves that make it clear whether the “intelligent agents” that make major decisions for us, that we interact with daily, and that hide behind social media avatars or phone numbers are real or artificial.

Image Credit: Besjunior / Shutterstock.com

Posted in Human Robots

#432671 Stuff 3.0: The Era of Programmable ...

It’s the end of a long day in your apartment in the early 2040s. You decide your work is done for the day, stand up from your desk, and yawn. “Time for a film!” you say. The house responds to your cues. The desk splits into hundreds of tiny pieces, which flow behind you and take on shape again as a couch. The computer screen you were working on flows up the wall and expands into a flat projection screen. You relax into the couch and, after a few seconds, a remote control surfaces from one of its arms.

In a few seconds flat, you’ve gone from a neatly-equipped office to a home cinema…all within the same four walls. Who needs more than one room?

This is the dream of those who work on “programmable matter.”

In his recent book about AI, Max Tegmark makes a distinction between three different levels of computational sophistication for organisms. Life 1.0 is single-celled organisms like bacteria; here, hardware is indistinguishable from software. The behavior of the bacteria is encoded into its DNA; it cannot learn new things.

Life 2.0 is where humans live on the spectrum. We are more or less stuck with our hardware, but we can change our software by choosing to learn different things, say, Spanish instead of Italian. Much like managing space on your smartphone, your brain’s hardware will allow you to download only a certain number of packages, but, at least theoretically, you can learn new behaviors without changing your underlying genetic code.

Life 3.0 marks a step-change from this: creatures that can change both their hardware and software in something like a feedback loop. This is what Tegmark views as a true artificial intelligence—one that can learn to change its own base code, leading to an explosion in intelligence. Perhaps, with CRISPR and other gene-editing techniques, we could be using our “software” to doctor our “hardware” before too long.

Programmable matter extends this analogy to the things in our world: what if your sofa could “learn” how to become a writing desk? What if, instead of a Swiss Army knife with dozens of tool attachments, you just had a single tool that “knew” how to become any other tool you could require, on command? In the crowded cities of the future, could houses be replaced by single, OmniRoom apartments? It would save space, and perhaps resources too.

Such are the dreams, anyway.

But when engineering and manufacturing individual gadgets is such a complex process, you can imagine that making stuff that can turn into many different items can be extremely complicated. Professor Skylar Tibbits at MIT referred to it as 4D printing in a TED Talk, and the website for his research group, the Self-Assembly Lab, excitedly claims, “We have also identified the key ingredients for self-assembly as a simple set of responsive building blocks, energy and interactions that can be designed within nearly every material and machining process available. Self-assembly promises to enable breakthroughs across many disciplines, from biology to material science, software, robotics, manufacturing, transportation, infrastructure, construction, the arts, and even space exploration.”

Naturally, their projects are still in the early stages, but the Self-Assembly Lab and others are genuinely exploring just the kind of science fiction applications we mooted.

For example, there’s the cell-phone self-assembly project, which brings to mind eerie, 24/7 factories where mobile phones assemble themselves from 3D printed kits without human or robotic intervention. Okay, so the phones they’re making are hardly going to fly off the shelves as fashion items, but if all you want is something that works, it could cut manufacturing costs substantially and automate even more of the process.

One of the major hurdles to overcome in making programmable matter a reality is choosing the right fundamental building blocks. There’s a very important balance to strike. If the pieces are too big, the rearranged matter becomes lumpy: it can’t reproduce fine details, it may be useless for applications such as tools for fine manipulation, and it struggles to simulate a range of textures. On the other hand, if the pieces are too small, different problems arise.

Imagine a setup where each piece is a small robot. You have to contain the robot’s power source and its brain, or at least some kind of signal-generator and signal-processor, all in the same compact unit. Perhaps you can imagine that one might be able to simulate a range of textures and strengths by changing the strength of the “bond” between individual units—your desk might need to be a little bit more firm than your bed, which might be nicer with a little more give.
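
A crude way to picture that "programmable firmness" idea is to treat a stack of units as springs in series, where the per-bond stiffness is the parameter the matter could tune. All of the numbers below are invented for illustration; this is a toy model, not a description of any real modular-robot hardware.

```python
def deflection_m(load_newtons, bond_stiffness_n_per_m, n_units):
    """Springs in series: compliances add, so deflection = F * (n / k)."""
    return load_newtons * n_units / bond_stiffness_n_per_m

# Same 200 N load (roughly a person leaning), same 20-unit column, different programmed stiffness.
for mode, stiffness in [("desk-mode", 2e6), ("bed-mode", 5e4)]:
    d = deflection_m(200.0, stiffness, 20)
    print(f"{mode}: deflects {d * 1000:.0f} mm under 200 N")
```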

Early steps toward creating this kind of matter have been taken by those who are developing modular robots. There are plenty of different groups working on this, including MIT, Lausanne, and the University of Brussels.

In the latter configuration, one individual robot acts as a centralized decision-maker, referred to as the brain unit, but additional robots can autonomously join the brain unit as and when needed to change the shape and structure of the overall system. Although the system is only ten units at present, it’s a proof-of-concept that control can be orchestrated over a modular system of robots; perhaps in the future, smaller versions of the same thing could be the components of Stuff 3.0.

You can imagine that with machine learning algorithms, such swarms of robots might be able to negotiate obstacles and respond to a changing environment more easily than an individual robot (those of you with techno-fear may read “respond to a changing environment” and imagine a robot seamlessly rearranging itself to allow a bullet to pass straight through without harm).

Speaking of robotics, the form of an ideal robot has been a subject of much debate. In fact, one of the major recent robotics competitions—DARPA’s Robotics Challenge—was won by a robot that could adapt, beating Boston Dynamics’ infamous ATLAS humanoid with the simple addition of a wheel that allowed it to drive as well as walk.

Rather than building robots into a humanoid shape (only sometimes useful), allowing them to evolve and discover the ideal form for performing whatever you’ve tasked them to do could prove far more useful. This is particularly true in disaster response, where expensive robots can still be more valuable than humans, but conditions can be very unpredictable and adaptability is key.

Further afield, many futurists imagine “foglets” as the tiny nanobots that will be capable of constructing anything from raw materials, somewhat like the “Santa Claus machine.” But you don’t necessarily need anything quite so indistinguishable from magic to be useful. Programmable matter that can respond and adapt to its surroundings could be used in all kinds of industrial applications. How about a pipe that can strengthen or weaken at will, or divert its direction on command?

We’re some way off from being able to order our beds to turn into bicycles. As with many tech ideas, it may turn out that the traditional low-tech solution is far more practical and cost-effective, even as we can imagine alternatives. But as the march to put a chip in every conceivable object goes on, it seems certain that inanimate objects are about to get a lot more animated.

Image Credit: PeterVrabel / Shutterstock.com

Posted in Human Robots