#435080 12 Ways Big Tech Can Take Big Action on ...
Bill Gates and Mark Zuckerberg have invested $1 billion in Breakthrough Energy to fund next-generation solutions to tackle climate change. But there is a huge risk that any successful innovation won't reach the market until the world approaches 2030, at the earliest.
We now know that reducing the risk of dangerous climate change means halving global greenhouse gas emissions by that date—in just 11 years. Perhaps Gates, Zuckerberg, and all the tech giants should invest equally in innovations to do with how their own platforms—search, social media, eCommerce—can support societal behavior changes to drive down emissions.
After all, the tech giants influence the decisions of four billion consumers every day. It is time for a social contract between tech and society.
Recently, my collaborator Johan Falk and I published a report during the World Economic Forum in Davos outlining 12 ways the tech sector can support societal goals to stabilize Earth's climate.
Become genuine climate guardians
Tech giants go to great lengths to show how serious they are about reducing their emissions. But I smell cognitive dissonance. Google and Microsoft are working in partnership with oil companies to develop AI tools to help maximize oil recovery. This is not the behavior of companies working flat-out to stabilize Earth's climate. Indeed, few major tech firms have visions that indicate a stable and resilient planet might be a good goal, even though AI alone has the potential to slash greenhouse gas emissions by four percent by 2030—equivalent to the combined emissions of Australia, Canada, and Japan.
We are now developing a playbook, which we plan to publish later this year at the UN climate summit, about making it as simple as possible for a CEO to become a climate guardian.
Hey Alexa, do you care about the stability of Earth’s climate?
Increasingly, consumers are delegating their decisions to narrow artificial intelligence like Alexa and Siri. Welcome to a world of zero-click purchases.
Should algorithms and information architecture be designed to nudge consumer behavior towards low-carbon choices, for example by making these options the default? We think so. People don’t mind being nudged; in fact, they welcome efforts to make their lives better. For instance, if I want to lose weight, I know I will need all the help I can get. Let’s ‘nudge for good’ and experiment with supporting societal goals.
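To make the idea of a low-carbon default concrete, here is a minimal sketch (the option names and emissions figures are invented for illustration) of a recommendation layer that orders choices so the lowest-carbon option is presented first, and therefore becomes the default:

```python
# Hypothetical sketch: order product options so the lowest-carbon
# choice is shown first, making it the de facto default.
def nudge_low_carbon(options):
    """Sort options by estimated kg CO2e, lowest first."""
    return sorted(options, key=lambda o: o["kg_co2e"])

options = [
    {"name": "overnight air freight", "kg_co2e": 12.4},
    {"name": "standard ground shipping", "kg_co2e": 1.1},
    {"name": "consolidated weekly delivery", "kg_co2e": 0.6},
]

default_choice = nudge_low_carbon(options)[0]
print(default_choice["name"])  # consolidated weekly delivery
```

The user can still pick any option; the nudge only changes which one requires zero clicks.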
Use social media for good
Facebook’s goal is to bring the world closer together. With 2.2 billion users on the platform, CEO Mark Zuckerberg can reasonably claim this goal is possible. But social media has changed the flow of information in the world, creating a lucrative industry around a toxic brown-cloud of confusion and anger, with frankly terrifying implications for democracy. This has been linked to the rise of nationalism and populism, and to the election of leaders who shun international cooperation, dismiss scientific knowledge, and reverse climate action at a moment when we need it more than ever.
Social media tools need re-engineering to help people make sense of the world, support democratic processes, and build communities around societal goals. Make this your mission.
Design for a future on Earth
Almost everything is designed with computer software, from buildings to mobile phones to consumer packaging. It is time to make zero-carbon design the new default and design products for sharing, re-use and disassembly.
The future is circular
Halving emissions in a decade will require all companies to adopt circular business models to reduce material use. Some tech companies are leading the charge. Apple has committed to becoming 100 percent circular as soon as possible. Great.
While big tech companies strive to be market leaders here, many other companies lack essential knowledge. Tech companies can support rapid adoption in different economic sectors, not least because they have the know-how to scale innovations exponentially. It makes business sense. If economies of scale drive the price of recycled steel and aluminum down, everyone wins.
Reward low-carbon consumption
eCommerce platforms can create incentives for low-carbon consumption. The world’s largest experiment in greening consumer behavior is Ant Forest, set up by Chinese fintech giant Ant Financial.
An estimated 300 million customers—similar to the population of the United States—gain points for making low-carbon choices such as walking to work, using public transport, or paying bills online. Virtual points are eventually converted into real trees. Sure, big questions remain about its true influence on emissions, but this is a space for rapid experimentation for big impact.
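A toy sketch of this kind of scheme (the point values and tree threshold below are invented, not Ant Forest's real scoring) shows how low-carbon actions might accumulate into real trees:

```python
# Illustrative only: invented point values for low-carbon actions,
# with enough accumulated points converting into one real tree.
POINTS = {"walk_to_work": 150, "public_transport": 80, "online_bill": 30}
POINTS_PER_TREE = 20000  # hypothetical conversion threshold

def trees_earned(actions):
    """Return (whole trees earned, total points) for a list of actions."""
    total = sum(POINTS.get(a, 0) for a in actions)
    return total // POINTS_PER_TREE, total

week = ["walk_to_work"] * 5 + ["online_bill"] * 3
trees, points = trees_earned(week)
print(points)  # 840
```

The design question for any such scheme is whether the rewarded actions actually displace higher-carbon alternatives, which is exactly the open question about Ant Forest's true impact.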
Make information more useful
Science is our tool for defining reality. Scientific consensus is how we attain reliable knowledge. Even after the information revolution, reliable knowledge about the world remains fragmented and unstructured. Build the next generation of search engines to genuinely make the world’s knowledge useful for supporting societal goals.
We need to put these tools towards supporting shared world views of the state of the planet based on the best science. New AI tools being developed by startups like Iris.ai can help see through the fog. From Alexa to Google Home and Siri, the future is “Voice”, but who chooses the information source? The highest bidder? Again, the implications for climate are huge.
Create new standards for digital advertising and marketing
Half of global ad revenue will soon be online, and largely going to a small handful of companies. How about creating a new ethical standard for what is advertised, and where? Companies could consider promoting sustainable choices and healthy lifestyles and limiting advertising of high-emissions products such as cheap flights.
We are what we eat
It is no secret that tech is about to disrupt grocery. The supermarkets of the future will be built on personal consumer data. With about two billion people either obese or overweight, revolutions in choice architecture could support positive diet choices, reduce meat consumption, halve food waste and, into the bargain, slash greenhouse gas emissions.
The future of transport is not cars, it’s data
The 2020s look set to be the biggest disruption of the automobile industry since Henry Ford unveiled the Model T. Two seismic shifts are on their way.
First, electric cars now compete favorably with petrol engines on range. Growth will reach an inflection point within a year or two once prices reach parity. The death of the internal combustion engine in Europe and Asia is assured with end dates announced by China, India, France, the UK, and most of Scandinavia. Dates range from 2025 (Norway) to 2040 (UK and China).
Tech giants can accelerate the demise. Uber recently announced a passenger surcharge to help London drivers save around $1,500 a year towards the cost of an electric car.
Second, driverless cars can shift the transport economic model from ownership to service and ride sharing. A complete shift away from privately-owned vehicles is around the corner, with large implications for emissions.
Clean-energy living and working
Most buildings are barely used and inefficiently heated and cooled. Digitization can slash this waste and its corresponding emissions through measurement, monitoring, and new business models for using office space. While just a few unicorns are currently in this space, the potential is enormous. Buildings are one of the five biggest sources of emissions, yet have the potential to become clean energy producers in a distributed energy network.
Creating liveable cities
More cities are setting ambitious climate targets to halve emissions in a decade or even less. Tech companies can support this transition by driving demand for low-carbon services for their workforces and offices, but also by providing tools to help monitor emissions and act to reduce them. Google, for example, is collecting travel and other data from across cities to estimate emissions in real time. This is possible through technologies like artificial intelligence and the internet of things. But beware of smart cities that turn out to be not so smart. Efficiencies can reduce resilience when cities face crises.
It’s a Start
Of course, it will take more than tech to solve the climate crisis. But tech is a wildcard. The actions of the current tech giants and their acolytes could serve to destabilize the climate further or bring it under control.
We need a new social contract between tech companies and society to achieve societal goals. The alternative is unthinkable. Without drastic action now, climate chaos threatens to engulf us all. As this future approaches, regulators will be forced to take ever more draconian action to rein in the problem. Acting now will reduce that risk.
Note: A version of this article was originally published on World Economic Forum
Image Credit: Bruce Rolff / Shutterstock.com
#434303 Making Superhumans Through Radical ...
Imagine trying to read War and Peace one letter at a time. The thought alone feels excruciating. But in many ways, this painful idea holds parallels to how human-machine interfaces (HMI) force us to interact with and process data today.
Designed back in the 1970s at Xerox PARC and later refined during the 1980s by Apple, today’s HMI was originally conceived during fundamentally different times, and specifically, before people and machines were generating so much data. Fast forward to 2019, when humans are estimated to produce 44 zettabytes of data—equal to two stacks of books from here to Pluto—and we are still using the same HMI from the 1970s.
These dated interfaces are not equipped to handle today’s exponential rise in data, which has been ushered in by the rapid dematerialization of many physical products into computers and software.
Breakthroughs in perceptual and cognitive computing, especially machine learning algorithms, are enabling technology to process vast volumes of data, and in doing so, they are dramatically amplifying our brain's abilities. Yet even with these powerful technologies that at times make us feel superhuman, the interfaces themselves still suffer from poor ergonomics.
Many interfaces are still designed around the concept that human interaction with technology is secondary, not instantaneous. This means that any time someone uses technology, they are inevitably multitasking, because they must simultaneously perform a task and operate the technology.
If our aim, however, is to create technology that truly extends and amplifies our mental abilities so that we can offload important tasks, the technology that helps us must not also overwhelm us in the process. We must reimagine interfaces to work in coherence with how our minds function in the world so that our brains and these tools can work together seamlessly.
Embodied Cognition
Most technology is designed to serve either the mind or the body. It is a problematic divide, because our brains use our entire body to process the world around us. Said differently, our minds and bodies do not operate distinctly. Our minds are embodied.
Studies using MRI scans have shown that when a person feels an emotion in their gut, blood actually moves to that area of the body. The body and the mind are linked in this way, sharing information back and forth continuously.
Current technology presents data to the brain differently from how the brain processes data. Our brains, for example, use sensory data to continually encode and decipher patterns within the neocortex. Our brains do not create a linguistic label for each item, which is how the majority of machine learning systems operate, nor do our brains have an image associated with each of these labels.
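To make the contrast concrete, here is a minimal sketch of what most machine learning classifiers do: map every input to one linguistic label from a fixed list via an argmax over scores. The labels and scores are invented; the point is that this labeling step is exactly what the article says brains skip.

```python
# Minimal illustration of label-based classification: the model's
# output is forced into one of a fixed set of linguistic labels.
LABELS = ["cat", "dog", "car"]

def classify(scores):
    """Return the label whose score is highest (argmax)."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return LABELS[best]

print(classify([0.1, 0.7, 0.2]))  # dog
```

Brains, by contrast, work with continuously updated sensory patterns rather than committing each percept to a discrete named category.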
Our bodies move information through us instantaneously, in a sense “computing” at the speed of thought. What if our technology could do the same?
Using Cognitive Ergonomics to Design Better Interfaces
Well-designed physical tools, as philosopher Martin Heidegger once meditated on while using the metaphor of a hammer, seem to disappear into the “hand.” They are designed to amplify a human ability and not get in the way during the process.
The aim of physical ergonomics is to understand the mechanical movement of the human body and then adapt a physical system to amplify the human output in accordance. By understanding the movement of the body, physical ergonomics enables ergonomically sound physical affordances—or conditions—so that the mechanical movement of the body and the mechanical movement of the machine can work together harmoniously.
Cognitive ergonomics applied to HMI design uses this same idea of amplifying output, but rather than focusing on physical output, the focus is on mental output. By understanding the raw materials the brain uses to comprehend information and form an output, cognitive ergonomics allows technologists and designers to create technological affordances so that the brain can work seamlessly with interfaces and remove the interruption costs of our current devices. In doing so, the technology itself “disappears,” and a person’s interaction with technology becomes fluid and primary.
By leveraging cognitive ergonomics in HMI design, we can create a generation of interfaces that can process and present data the same way humans process real-world information, meaning through fully-sensory interfaces.
Several brain-machine interfaces are already on the path to achieving this. AlterEgo, a wearable device developed by MIT researchers, uses electrodes to detect and understand nonverbal prompts, which enables the device to read the user’s mind and act as an extension of the user’s cognition.
Another notable example is the BrainGate neural device, created by researchers at Stanford University. Just two months ago, a study was released showing that this brain implant system allowed paralyzed patients to navigate an Android tablet with their thoughts alone.
These are two extraordinary examples of what is possible for the future of HMI, but there is still a long way to go to bring cognitive ergonomics front and center in interface design.
Disruptive Innovation Happens When You Step Outside Your Existing Users
Most of today’s interfaces are designed by a narrow population, made up predominantly of white, non-disabled men who are prolific in the use of technology (you may recall The New York Times viral article from 2016, Artificial Intelligence’s White Guy Problem). If you ask this population if there is a problem with today’s HMIs, most will say no, and this is because the technology has been designed to serve them.
This lack of diversity means a limited perspective is being brought to interface design, which is problematic if we want HMI to evolve and work seamlessly with the brain. To use cognitive ergonomics in interface design, we must first gain a more holistic understanding of how people with different abilities understand the world and how they interact with technology.
Underserved groups, such as people with physical disabilities, occupy what Clayton Christensen called in The Innovator's Dilemma the fringe segment of a market. Developing solutions that cater to fringe groups can in fact disrupt the larger market by opening up a much larger market from the low end.
Learning From Underserved Populations
When technology fails to serve a group of people, that group must adapt the technology to meet their needs.
The workarounds created are often ingenious, specifically because they have not been arrived at by preferences, but out of necessity that has forced disadvantaged users to approach the technology from a very different vantage point.
When a designer or technologist begins learning from this new viewpoint and understanding challenges through a different lens, they can bring new perspectives to design—perspectives that otherwise can go unseen.
Designers and technologists can also learn from people with physical disabilities who interact with the world by leveraging other senses that help them compensate for one they may lack. For example, some blind people use echolocation to detect objects in their environments.
The BrainPort device developed by Wicab is an incredible example of technology leveraging one human sense to serve or complement another. The BrainPort captures environmental information with a wearable video camera and converts this data into soft electrical stimulation sequences that are sent to a device on the user's tongue—the most sensitive touch receptor in the body. The user learns how to interpret the patterns felt on their tongue and, in doing so, becomes able to "see" with their tongue.
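The core of any such sensory-substitution pipeline is downsampling a camera frame to one intensity value per electrode. This sketch (grid size and frame values are invented; it is not Wicab's actual algorithm) averages pixel blocks of a grayscale frame into a small stimulation grid:

```python
# Hypothetical sketch of sensory substitution in the spirit of BrainPort:
# downsample a grayscale frame (list of pixel rows, values 0-255) to a
# small grid of stimulation intensities, one value per electrode.
def frame_to_grid(frame, grid_h, grid_w):
    h, w = len(frame), len(frame[0])
    bh, bw = h // grid_h, w // grid_w  # pixel block per electrode
    grid = []
    for gy in range(grid_h):
        row = []
        for gx in range(grid_w):
            block = [frame[y][x]
                     for y in range(gy * bh, (gy + 1) * bh)
                     for x in range(gx * bw, (gx + 1) * bw)]
            row.append(sum(block) // len(block))  # mean intensity
        grid.append(row)
    return grid

# An 8x8 "frame" with a bright square in the top-left corner.
frame = [[255 if (x < 4 and y < 4) else 0 for x in range(8)]
         for y in range(8)]
print(frame_to_grid(frame, 2, 2))  # [[255, 0], [0, 0]]
```

The hard part, of course, is not the downsampling but the brain's learned interpretation of the resulting patterns.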
Key to the future of HMI design is learning how different user groups navigate the world through senses beyond sight. To make cognitive ergonomics work, we must understand how to leverage the senses so we’re not always solely relying on our visual or verbal interactions.
Radical Inclusion for the Future of HMI
Bringing radical inclusion into HMI design is about gaining a broader lens on technology design at large, so that technology can serve everyone better.
Interestingly, cognitive ergonomics and radical inclusion go hand in hand. We can’t design our interfaces with cognitive ergonomics without bringing radical inclusion into the picture, and we also will not arrive at radical inclusion in technology so long as cognitive ergonomics are not considered.
This new mindset is the only way to usher in an era of technology design that amplifies the collective human ability to create a more inclusive future for all.
Image Credit: jamesteohart / Shutterstock.com
#433911 Thanksgiving Food for Thought: The Tech ...
With the Thanksgiving holiday upon us, it’s a great time to reflect on the future of food. Over the last few years, we have seen a dramatic rise in exponential technologies transforming the food industry from seed to plate. Food is important in many ways—too little or too much of it can kill us, and it is often at the heart of family, culture, our daily routines, and our biggest celebrations. The agriculture and food industries are also two of the world’s biggest employers. Let’s take a look to see what is in store for the future.
Robotic Farms
Over the last few years, we have seen a number of new companies emerge in the robotic farming industry. This includes new types of farming equipment used in arable fields, as well as indoor robotic vertical farms. In November 2017, Hands Free Hectare became the first in the world to remotely grow an arable crop. They used autonomous tractors to sow and spray crops, small rovers to take soil samples, drones to monitor crop growth, and an unmanned combine harvester to collect the crops. Since then, they’ve also grown and harvested a field of winter wheat, and have been adding additional technologies and capabilities to their arsenal of robotic farming equipment.
Indoor vertical farming is also rapidly expanding. As Engadget reported in October 2018, a number of startups are now growing crops like leafy greens, tomatoes, flowers, and herbs. These farms can grow food in urban areas, reducing transport, water, and fertilizer costs, and often don't need pesticides since they are indoors. Iron Ox, which is using robots to grow plants with navigation technology used by self-driving cars, can grow 30 times more food per acre of land using 90 percent less water than traditional farmers. Vertical farming company Plenty was recently funded by Softbank's Vision Fund, Jeff Bezos, and others to build 300 vertical farms in China.
These startups are not only succeeding in wealthy countries. Hello Tractor, an "uberized" tractor, has worked with 250,000 smallholder farms in Africa, creating both food security and tech-infused agriculture jobs. The World Food Programme's Innovation Accelerator (an impact partner of Singularity University) works with hundreds of startups aimed at creating zero hunger. One project is focused on supporting refugees in developing "food computers" in refugee camps—computerized devices that grow food while also adjusting to the conditions around them. As exponential trends drive down the costs of robotics, sensors, software, and energy, we should see robotic farming scaling around the world and becoming the main way farming takes place.
Cultured Meat
Exponential technologies are not only revolutionizing how we grow vegetables and grains, but also how we generate protein and meat. The new cultured meat industry is rapidly expanding, led by startups such as Memphis Meats, Mosa Meats, JUST Meat, Inc. and Finless Foods, and backed by heavyweight investors including DFJ, Bill Gates, Richard Branson, Cargill, and Tyson Foods.
Cultured meat is grown in a bioreactor using cells from an animal, a scaffold, and a culture. The process is humane and, potentially, scientists can make the meat healthier by adding vitamins, removing fat, or customizing it to an individual’s diet and health concerns. Another benefit is that cultured meats, if grown at scale, would dramatically reduce environmental destruction, pollution, and climate change caused by the livestock and fishing industries. Similar to vertical farms, cultured meat is produced using technology and can be grown anywhere, on-demand and in a decentralized way.
Similar to robotic farming equipment, bioreactors will also follow exponential trends, rapidly falling in cost. In fact, the first cultured meat hamburger (created by Singularity University faculty member Mark Post of Mosa Meats in 2013) cost $350,000. In 2018, Fast Company reported the cost was now about $11 per burger, and the Israeli startup Future Meat Technologies predicted they will produce beef at about $2 per pound in 2020, which will be competitive with existing prices. For those who have turkey on their mind, one can read about New Harvest's work (one of the leading think tanks and research centers for the cultured meat and cellular agriculture industry) in funding efforts to generate a nugget of cultured turkey meat.
One outstanding question is whether cultured meat is safe to eat and how it will interact with the overall food supply chain. In the US, regulators like the Food and Drug Administration (FDA) and the US Department of Agriculture (USDA) are working out their roles in this process, with the FDA overseeing the cellular process and the USDA overseeing production and labeling.
Food Processing
Tech companies are also making great headway in streamlining food processing. Norwegian company Tomra Foods was an early leader in using image recognition, sensors, artificial intelligence, and analytics to sort food more efficiently by shape; fat, protein, and moisture composition; and other food safety and quality indicators. Their technologies have improved food yield by 5-10 percent, which is significant given they own 25 percent of their market.
These advances are also not limited to large food companies. In 2016 Google reported how a small family farm in Japan built a world-class cucumber sorting device using their open-source machine learning tool TensorFlow. SU startup Impact Vision uses hyper-spectral imaging to analyze food quality, which increases revenues and reduces food waste and product recalls from contamination.
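The farm in Google's account trained a TensorFlow image classifier on cucumber photos; as a simplified stand-in for that pipeline, this sketch sorts cucumbers into quality grades from hand-picked features. The features, thresholds, and grade names are all invented for illustration:

```python
# Simplified stand-in for an ML-based produce sorter: grade cucumbers
# by hand-picked features instead of a trained image classifier.
def grade(cucumber):
    """Assign an invented quality grade from length and straightness."""
    length = cucumber["length_cm"]
    straightness = cucumber["straightness"]  # 0.0 (bent) to 1.0 (straight)
    if length >= 20 and straightness >= 0.9:
        return "A"
    if length >= 15 and straightness >= 0.7:
        return "B"
    return "C"

batch = [
    {"length_cm": 22, "straightness": 0.95},
    {"length_cm": 16, "straightness": 0.80},
    {"length_cm": 12, "straightness": 0.50},
]
print([grade(c) for c in batch])  # ['A', 'B', 'C']
```

A learned classifier replaces these hand-written thresholds with rules inferred from labeled example images, which is what made the project feasible for a small family farm.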
These examples point to a question many have on their mind: will we live in a future where a few large companies use advanced technologies to grow the majority of food on the planet, or will the falling costs of these technologies allow family farms, startups, and smaller players to take part in creating a decentralized system? Currently, the future could flow either way, but it is important for smaller companies to take advantage of the most cutting-edge technology in order to stay competitive.
Food Purchasing and Delivery
In the last year, we have also seen a number of new developments in technology improving access to food. Amazon Go is opening grocery stores in Seattle, San Francisco, and Chicago where customers use an app that allows them to pick up their products and pay without going through cashier lines. Sam’s Club is not far behind, with an app that also allows a customer to purchase goods in-store.
The market for food delivery is also growing. In 2017, Morgan Stanley estimated that the online food delivery market from restaurants could grow to $32 billion by 2021, from $12 billion in 2017. Companies like Zume are pioneering robot-powered pizza making and delivery. In addition to using robotics to create affordable high-end gourmet pizzas in their shop, they also have a pizza delivery truck that can assemble and cook pizzas while driving. Their system uses predictive analytics on past customer data to prepare pizzas for certain neighborhoods before the orders even come in. In early November 2018, the Wall Street Journal estimated that Zume is valued at up to $2.25 billion.
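The demand-anticipation idea behind that kind of system can be sketched very simply: count past orders by neighborhood and hour, then pre-prepare the top sellers for the coming hour. The data, neighborhood names, and pizza names below are invented, and Zume's real system is surely far more sophisticated:

```python
# Hedged sketch of demand anticipation: tally historical orders by
# (neighborhood, hour) and pre-prepare the top-k sellers.
from collections import Counter

past_orders = [
    ("mission", 18, "margherita"), ("mission", 18, "margherita"),
    ("mission", 18, "pepperoni"),  ("soma", 19, "veggie"),
]

def pizzas_to_prep(orders, neighborhood, hour, k=1):
    """Return the k most-ordered pizzas for this neighborhood and hour."""
    counts = Counter(pizza for (n, h, pizza) in orders
                     if n == neighborhood and h == hour)
    return [pizza for pizza, _ in counts.most_common(k)]

print(pizzas_to_prep(past_orders, "mission", 18))  # ['margherita']
```

The business risk is over-preparing when demand shifts, which is why a real system would weigh recency, weather, and events rather than raw historical counts.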
Looking Ahead
While each of these developments is promising on its own, it’s also important to note that since all these technologies are in some way digitized and connected to the internet, the various food tech players can collaborate. In theory, self-driving delivery restaurants could share data on what they are selling to their automated farm equipment, facilitating coordination of future crops. There is a tremendous opportunity to improve efficiency, lower costs, and create an abundance of healthy, sustainable food for all.
On the other hand, these technologies are also deeply disruptive. According to the Food and Agriculture Organization of the United Nations, in 2010 about one billion people, or a third of the world's workforce, worked in the farming and agricultural industries. We need to ensure these farmers are linked to new job opportunities, as well as facilitate collaboration between existing farming companies and technologists so that the industries can continue to grow and lead rather than be displaced.
Just as importantly, each of us might think about how these changes in the food industry might impact our own ways of life and culture. Thanksgiving celebrates community and sharing of food during a time of scarcity. Technology will help create an abundance of food and less need for communities to depend on one another. What are the ways that you will create community, sharing, and culture in this new world?
Image Credit: nikkytok / Shutterstock.com