Tag Archives: current

#436946 Coronavirus May Mean Automation Is ...

We’re in the midst of a public health emergency, and life as we know it has ground to a halt. The places we usually go are closed, the events we were looking forward to are canceled, and some of us have lost our jobs or fear losing them soon.

But although it may not seem like it, there are some silver linings; this crisis is bringing out the worst in some (I’m looking at you, toilet paper hoarders), but the best in many. Italians on lockdown are singing together, Spaniards on lockdown are exercising together, this entrepreneur made a DIY ventilator and put it on YouTube, and volunteers in Italy 3D printed medical valves for virus treatment at a fraction of their usual cost.

Indeed, if you want to feel like there’s still hope for humanity instead of feeling like we’re about to snowball into terribleness as a species, just look at these examples—and I’m sure there are many more out there. There’s plenty of hope and opportunity to be found in this crisis.

Peter Xing, a keynote speaker and writer on emerging technologies and associate director in technology and growth initiatives at KPMG, would agree. Xing believes the coronavirus epidemic is presenting us with ample opportunities for increased automation and remote delivery of goods and services. “The upside right now is the burgeoning platform of the digital transformation ecosystem,” he said.

In a thought-provoking talk at Singularity University’s COVID-19 virtual summit this week, Xing explained how the outbreak is accelerating our transition to a highly-automated society—and painted a picture of what the future may look like.

Confronting Scarcity
You’ve probably seen them by now—the barren shelves at your local grocery store. Whether you were in the paper goods aisle, the frozen food section, or the fresh produce area, it was clear something was amiss; the shelves were empty. One of the most inexplicable items people have been panic-bulk-buying is toilet paper.

Xing described this toilet paper scarcity as a prisoner’s dilemma, pointing out that we have a scarcity problem right now in terms of our mindset, not in terms of actual supply shortages. “It’s a prisoner’s dilemma in that we’re all prisoners in our homes right now, and we can either hoard or not hoard, and the outcomes depend on how we collaborate with each other,” he said. “But it’s not a zero-sum game.”
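Xing’s framing can be made concrete with a toy payoff matrix. This is a minimal sketch with made-up payoff values, not anything from his talk; the point is just that, unlike a zero-sum game, the combined outcome depends on how we behave:

```python
# Illustrative (made-up) payoffs for the hoarding dilemma described above.
# Each entry maps (my_choice, neighbor_choice) -> (my_payoff, neighbor_payoff);
# higher is better. In a zero-sum game the payoffs would always cancel out.
PAYOFFS = {
    ("share", "share"): (3, 3),   # everyone finds stock on the shelves
    ("share", "hoard"): (0, 4),   # the hoarder wins short-term; I lose out
    ("hoard", "share"): (4, 0),
    ("hoard", "hoard"): (1, 1),   # shelves strip bare; everyone is worse off
}

def total_welfare(my_choice, neighbor_choice):
    """Sum of both players' payoffs -- the 'not zero-sum' point."""
    mine, theirs = PAYOFFS[(my_choice, neighbor_choice)]
    return mine + theirs

# Mutual restraint beats mutual hoarding for the group as a whole.
assert total_welfare("share", "share") > total_welfare("hoard", "hoard")
```

Because total welfare varies from 2 to 6 across the four outcomes, hoarding isn’t just redistributing a fixed supply; it shrinks the pie for everyone.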

Xing referenced a CNN article about why toilet paper, of all things, is one of the items people have been panic-buying most (I, too, have been utterly baffled by this phenomenon). But maybe there’d be less panic if we knew more about the production methods and supply chain involved in manufacturing toilet paper. It turns out it’s a highly automated process (you can learn more about it in this documentary by National Geographic) and requires very few people (though it does require about 27,000 trees a day—so stop bulk-buying it! Just stop!).

The supply chain limitation here is in the raw material; we certainly can’t keep cutting down this many trees a day forever. But—somewhat ironically, given the Costco cartloads of TP people have been stuffing into their trunks and backseats—thanks to automation, toilet paper isn’t something stores are going to stop receiving anytime soon.

Automation For All
Now we have a reason to apply this level of automation to, well, pretty much everything.

Though our current situation may force us into using more robots and automated systems sooner than we’d planned, it will end up saving us money and creating opportunity, Xing believes. He cited “fast-casual” restaurants (Chipotle, Panera, etc.) as a prime example.

Currently, people in the US spend much more to eat at home than to eat at fast-casual restaurants, once you account for the cost of the food being prepared plus the value of the time spent grocery shopping, cooking, and cleaning up after meals. According to research from investment management firm ARK Invest, taking all these costs into account puts a home-cooked meal at about $12.
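The accounting behind that figure is easy to sketch. The numbers below are illustrative placeholders, not ARK Invest’s actual inputs; they just show how ingredient costs plus the value of your time can sum to roughly $12 per meal:

```python
# Rough sketch of the per-meal accounting described above.
# All figures are illustrative assumptions, not ARK Invest's actual inputs.
ingredients_cost = 5.00        # groceries per home-cooked meal, USD
minutes_spent = 35             # shopping + cooking + cleanup per meal
hourly_value_of_time = 12.00   # what an hour of your time is worth, USD

time_cost = (minutes_spent / 60) * hourly_value_of_time
total_cost_per_meal = ingredients_cost + time_cost

print(f"All-in cost per home-cooked meal: ${total_cost_per_meal:.2f}")
```

Under these assumptions the time you spend costs more than the food itself, which is exactly the margin automation in fast-casual kitchens competes against.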

That’s the same as or more than the cost of grabbing a burrito or a sandwich at the joint around the corner. As more of the repetitive, low-skill tasks involved in preparing fast-casual meals are automated, their cost will drop even further, giving us more incentive to forgo home cooking. (It’s worth noting, though, that these figures don’t account for the fact that eating at home is usually better for you, since you’re less likely to load your food with sugar, oil, or other taste-enhancing but health-destroying ingredients. Plus, there are those of us who get a nearly incomparable amount of joy from laboring over and then savoring a homemade meal.)

Now that we’re not supposed to be touching each other or touching anything anyone else has touched, but we still need to eat, automating food preparation sounds appealing (and maybe necessary). Multiple food delivery services have already implemented a contactless delivery option, where customers can choose to have their food left on their doorstep.

Besides the opportunities for in-restaurant automation, “This is an opportunity for automation to happen at the last mile,” said Xing. Delivery drones, robots, and autonomous trucks and vans could all play a part. In fact, use of delivery drones has ramped up in China since the outbreak.

Speaking of deliveries, service robots have steadily increased in numbers at Amazon; as of late 2019, the company employed around 650,000 humans and 200,000 robots—and costs have gone down as robots have gone up.

ARK Invest’s research predicts automation could add $800 billion to US GDP over the next 5 years and $12 trillion during the next 15 years. On this trajectory, GDP would end up being 40 percent higher with automation than without it.

Automating Ourselves?
This is all well and good, but what do these numbers and percentages mean for the average consumer, worker, or citizen?

“The benefits of automation aren’t being passed on to the average citizen,” said Xing. “They’re going to the shareholders of the companies creating the automation.” This is where policies like universal basic income and universal healthcare come in; in the not-too-distant future, we may see more movement toward measures like these (depending on how the election goes) that spread the benefits of automation rather than concentrating them in a few wealthy hands.

In the meantime, though, some people are benefiting from automation in ways that maybe weren’t expected. We’re in the midst of what’s probably the biggest remote-work experiment in US history, not to mention remote learning. Tools that let us digitally communicate and collaborate, like Slack, Zoom, Dropbox, and G Suite, are enabling remote work in a way that wouldn’t have been possible 20 or even 10 years ago.

In addition, Xing said, tools like DataRobot and H2O.ai are democratizing artificial intelligence by allowing almost anyone, not just data scientists or computer engineers, to run machine learning algorithms. People are codifying the steps in their own repetitive work processes and having their computers take over tasks for them.

As 3D printing gets cheaper and more accessible, it’s also being more widely adopted, and people are finding more applications (case in point: the Italians mentioned above who figured out how to cheaply print a medical valve for coronavirus treatment).

The Mother of Invention
This movement towards a more automated society has some positives: it will help us stay healthy during times like the present, it will drive down the cost of goods and services, and it will grow our GDP in the long run. But by leaning into automation, will we be enabling a future that keeps us more physically, psychologically, and emotionally distant from each other?

We’re in a crisis, and desperate times call for desperate measures. We’re sheltering in place, practicing social distancing, and trying not to touch each other. And for most of us, this is really unpleasant and difficult. We can’t wait for it to be over.

For better or worse, this pandemic will likely make us pick up the pace on our path to automation, across many sectors and processes. The solutions people implement during this crisis won’t disappear when things go back to normal (and, depending on who you talk to, things may never really go back to normal).

But let’s make sure to remember something. Even once robots are making our food and drones are delivering it, and our computers are doing data entry and email replies on our behalf, and we all have 3D printers to make anything we want at home—we’re still going to be human. And humans like being around each other. We like seeing one another’s faces, hearing one another’s voices, and feeling one another’s touch—in person, not on a screen or in an app.

No amount of automation is going to change that, and beyond lowering costs or increasing GDP, our greatest and most crucial responsibility will always be to take care of each other.

Image Credit: Gritt Zheng on Unsplash

Posted in Human Robots

#436911 Scientists Linked Artificial and ...

Scientists have linked up two silicon-based artificial neurons with a biological one across multiple countries into a fully-functional network. Using standard internet protocols, they established a chain of communication whereby an artificial neuron controls a living, biological one, and passes on the info to another artificial one.

Whoa.

We’ve talked plenty about brain-computer interfaces and novel computer chips that resemble the brain. We’ve covered how those “neuromorphic” chips could link up into tremendously powerful computing entities, using engineered communication nodes called artificial synapses.

As Moore’s Law fades, we even said that neuromorphic computing is one path toward the future of extremely powerful, low-energy artificial neural network-based computing, in hardware, that could better link up with the brain. Because the chips “speak” the brain’s language, in theory they could become neuroprosthesis hubs far more advanced and “natural” than anything currently possible.

This month, an international team put all of those ingredients together, turning theory into reality.

The three labs, scattered across Padova, Italy; Zurich, Switzerland; and Southampton, England, collaborated to create a fully self-controlled, hybrid artificial-biological neural network that communicated using biological principles, but over the internet.

The three-neuron network, linked through artificial synapses that emulate the real thing, was able to reproduce a classic neuroscience experiment that’s considered the basis of learning and memory in the brain. In other words, artificial neuron and synapse “chips” have progressed to the point where they can actually use a biological neuron intermediary to form a circuit that, at least partially, behaves like the real thing.

That’s not to say cyborg brains are coming soon. The simulation only recreated a small network that supports excitatory transmission in the hippocampus—a critical region that supports memory—and most brain functions require enormous cross-talk between numerous neurons and circuits. Nevertheless, the study is a jaw-dropping demonstration of how far we’ve come in recreating biological neurons and synapses in artificial hardware.

And perhaps one day, the currently “experimental” neuromorphic hardware will be integrated into broken biological neural circuits as bridges to restore movement, memory, personality, and even a sense of self.

The Artificial Brain Boom
One important thing: this study relies heavily on a decade of research into neuromorphic computing, or the implementation of brain functions inside computer chips.

The best-known example is perhaps IBM’s TrueNorth, which leveraged the brain’s computational principles to build a completely different computer than what we have today. Today’s computers run on a von Neumann architecture, in which memory and processing modules are physically separate. In contrast, the brain’s computing and memory are simultaneously achieved at synapses, small “hubs” on individual neurons that talk to adjacent ones.

Because memory and processing occur at the same site, biological neurons don’t have to shuttle data back and forth between processing and storage compartments, massively reducing processing time and energy use. What’s more, a neuron’s history also influences how it behaves in the future, increasing flexibility and adaptability compared to computers. With the rise of deep learning, which loosely mimics neural processing, as the prima donna of AI, the need to reduce power while boosting speed and flexible learning is becoming ever more pressing in the AI community.

Neuromorphic computing was partially born out of this need. Most chips utilize special ingredients that change their resistance (or other physical characteristics) to mimic how a neuron might adapt to stimulation. Some chips emulate a whole neuron, that is, how it responds to a history of stimulation—does it get easier or harder to fire? Others imitate synapses themselves, that is, how easily they will pass on the information to another neuron.
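To make the “whole neuron” idea concrete, here is a minimal leaky integrate-and-fire model, the textbook abstraction this class of chips implements in silicon. The leak and threshold values are arbitrary illustrations, not parameters from any real chip:

```python
# Minimal leaky integrate-and-fire model -- the kind of "whole neuron"
# behavior neuromorphic chips emulate in hardware. Parameters are
# illustrative, not taken from any particular chip.
def simulate_lif(inputs, leak=0.9, threshold=1.0):
    """Integrate inputs with leak; emit a spike (1) when the membrane
    potential crosses threshold, then reset the potential to zero."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current   # leaky integration
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0                      # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady drip of sub-threshold input accumulates into periodic spikes.
print(simulate_lif([0.4] * 10))  # -> [0, 0, 1, 0, 0, 1, 0, 0, 1, 0]
```

The history-dependence is the point: the same 0.4 input sometimes fires the neuron and sometimes doesn’t, depending on what came before.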

Although single neuromorphic chips have proven far more efficient and powerful than current computer chips at running machine learning algorithms on toy problems, so far few people have tried putting the artificial components together with biological ones in the ultimate test.

That’s what this study did.

A Hybrid Network
Still with me? Let’s talk network.

It’s gonna sound complicated, but remember: learning is the formation of neural networks, and neurons that fire together wire together. To rephrase: when learning, neurons will spontaneously organize into networks so that future instances will re-trigger the entire network. To “wire” together, downstream neurons will become more responsive to their upstream neural partners, so that even a whisper will cause them to activate. In contrast, some types of stimulation will cause the downstream neuron to “chill out” so that only an upstream “shout” will trigger downstream activation.

Both these properties—easier or harder to activate downstream neurons—are essentially how the brain forms connections. The “amping up,” in neuroscience jargon, is long-term potentiation (LTP), whereas the down-tuning is long-term depression (LTD). These two phenomena were first discovered in the rodent hippocampus nearly half a century ago, and have ever since been considered the biological basis of how the brain learns and remembers, and implicated in neurological problems such as addiction (seriously, you can’t pass Neuro 101 without learning about LTP and LTD!).
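A toy Hebbian-style update captures the LTP/LTD idea in a few lines. This is a deliberate simplification with made-up learning rates and bounds, not the plasticity rule used in the experiment:

```python
# Toy sketch of "fire together, wire together": strengthen a synapse when
# pre- and post-synaptic neurons are both active (LTP-like) and weaken it
# when only the pre-synaptic one fires (LTD-like). Rates and bounds are
# illustrative, not biological constants.
def update_weight(w, pre_active, post_active, ltp_rate=0.1, ltd_rate=0.05):
    if pre_active and post_active:
        w += ltp_rate                  # potentiation: correlated firing
    elif pre_active and not post_active:
        w -= ltd_rate                  # depression: uncorrelated firing
    return max(0.0, min(1.0, w))       # keep the weight in [0, 1]

w = 0.5
for _ in range(5):                     # repeated paired stimulation
    w = update_weight(w, pre_active=True, post_active=True)
print(f"After pairing: {w:.2f}")       # the synapse has strengthened (LTP)
```

After repeated pairing the weight saturates near its ceiling, so a downstream “whisper” now suffices; repeated unpaired firing would walk it back down instead.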

So it’s perhaps especially salient that one of the first artificial-brain hybrid networks recapitulated this classic result.

To visualize: the three-neuron network began in Switzerland, with an artificial neuron with the badass name of “silicon spiking neuron.” That neuron is linked to an artificial synapse, a “memristor” located in the UK, which is then linked to a biological rat neuron cultured in Italy. The rat neuron has a “smart” microelectrode, controlled by the artificial synapse, to stimulate it. This is the artificial-to-biological pathway.

Meanwhile, the rat neuron in Italy also has electrodes that listen in on its electrical signaling. This signaling is passed back to another artificial synapse in the UK, which is then used to control a second artificial neuron back in Switzerland. This is the biological-to-artificial return pathway. As a testament to how far we’ve come in digitizing neural signaling, all of the biological neural responses are digitized and sent over the internet to control their far-out artificial partner.
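The forward (artificial-to-biological) path can be sketched as a chain of stand-in functions. Everything here is schematic: the function names, thresholds, and synapse weight are invented for illustration, and in the real system each hop exchanged digitized spikes over standard internet protocols between the three labs:

```python
# Schematic of the three-neuron chain described above, one function per hop.
# All names and numeric values are illustrative placeholders.
def artificial_neuron_ch(stimulus):            # silicon spiking neuron, Switzerland
    return 1.0 if stimulus > 0.5 else 0.0      # spike on strong input

def memristor_synapse_uk(spike, weight=0.8):   # artificial synapse, UK
    return spike * weight                      # scales what gets passed on

def biological_neuron_it(drive):               # cultured rat neuron, Italy
    return 1.0 if drive > 0.3 else 0.0         # microelectrode drive -> firing

def forward_pass(stimulus):
    spike = artificial_neuron_ch(stimulus)
    drive = memristor_synapse_uk(spike)
    bio_response = biological_neuron_it(drive)
    # ...the response is then digitized, sent back through a second UK
    # synapse, and on to a second Swiss artificial neuron (the return path).
    return bio_response

print(forward_pass(0.9))  # strong input propagates end to end -> 1.0
print(forward_pass(0.2))  # weak input dies at the first neuron -> 0.0
```

The learning result amounts to the synapse weights in this chain changing with stimulation history, rather than staying fixed as they do in this static sketch.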

Here’s the crux: to demonstrate a functional neural network, just having the biological neuron passively “pass on” electrical stimulation isn’t enough. It has to show the capacity to learn, that is, to be able to mimic the amping up and down-tuning that are LTP and LTD, respectively.

You’ve probably guessed the results: certain stimulation patterns to the first artificial neuron in Switzerland changed how the artificial synapse in the UK operated. This, in turn, changed the stimulation to the biological neuron, so that it either amped up or toned down depending on the input.

Similarly, the response of the biological neuron altered the second artificial synapse, which then controlled the output of the second artificial neuron. Altogether, the biological and artificial components seamlessly linked up, over thousands of miles, into a functional neural circuit.

Cyborg Mind-Meld
So…I’m still picking my jaw up off the floor.

It’s utterly insane seeing a classic neuroscience learning experiment repeated with an integrated network with artificial components. That said, a three-neuron network is far from the thousands of synapses (if not more) needed to truly re-establish a broken neural circuit in the hippocampus, which DARPA has been aiming to do. And LTP/LTD has come under fire recently as the de facto brain mechanism for learning, though so far they remain cemented as neuroscience dogma.

However, this is one of the few studies where you see fields coming together. As Richard Feynman famously said, “What I cannot create, I do not understand.” Even though neuromorphic chips were built on a high-level rather than molecular-level understanding of how neurons work, the study shows that artificial versions can still synapse with their biological counterparts. We’re not just on the right path towards understanding the brain, we’re recreating it, in hardware—if just a little.

While the study doesn’t have immediate use cases, in practical terms it boosts both the neuromorphic computing and neuroprosthetics fields.

“We are very excited with this new development,” said study author Dr. Themis Prodromakis at the University of Southampton. “On one side it sets the basis for a novel scenario that was never encountered during natural evolution, where biological and artificial neurons are linked together and communicate across global networks; laying the foundations for the Internet of Neuro-electronics. On the other hand, it brings new prospects to neuroprosthetic technologies, paving the way towards research into replacing dysfunctional parts of the brain with AI chips.”

Image Credit: Gerd Altmann from Pixabay


#436530 How Smart Roads Will Make Driving ...

Roads criss-cross the landscape, but while they provide vital transport links, in many ways they represent a huge amount of wasted space. Advances in “smart road” technology could change that, creating roads that can harvest energy from cars, detect speeding, automatically weigh vehicles, and even communicate with smart cars.

“Smart city” projects are popping up in countries across the world thanks to advances in wireless communication, cloud computing, data analytics, remote sensing, and artificial intelligence. Transportation is a crucial element of most of these plans, but while much of the focus is on public transport solutions, smart roads are increasingly being seen as a crucial feature of these programs.

New technology is making it possible to tackle a host of issues including traffic congestion, accidents, and pollution, say the authors of a paper in the journal Proceedings of the Royal Society A. And they’ve outlined ten of the most promising advances under development or in planning stages that could feature on tomorrow’s roads.

Energy harvesting

A variety of energy harvesting technologies integrated into roads have been proposed as ways to power street lights and traffic signals or provide a boost to the grid. Photovoltaic panels could be built into the road surface to capture sunlight, or piezoelectric materials installed beneath the asphalt could generate current when deformed by vehicles passing overhead.

Musical roads

Countries like Japan, Denmark, the Netherlands, Taiwan, and South Korea have built roads that play music as cars pass by. By varying the spacing of rumble strips, it’s possible to produce a series of different notes as vehicles drive over them. The aim is generally to warn of hazards or help drivers keep to the speed limit.
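The physics here is simple enough to sketch: a car crossing evenly spaced strips at speed v hears a tone of frequency f = v / d, so the note is set entirely by the spacing. A hypothetical back-of-the-envelope calculation:

```python
# Back-of-the-envelope musical-road design: the pitch a car hears is its
# speed divided by the rumble-strip spacing (f = v / d). Figures are
# illustrative, not from any actual installation.
def strip_spacing_m(speed_kmh, note_hz):
    """Spacing (meters) between strips that produces note_hz at speed_kmh."""
    speed_ms = speed_kmh / 3.6     # convert km/h to m/s
    return speed_ms / note_hz

# To play concert A (440 Hz) for a car holding a 60 km/h speed limit:
spacing = strip_spacing_m(60, 440)
print(f"{spacing * 100:.1f} cm between strips")  # -> 3.8 cm between strips
```

This is also why the trick doubles as a speed-limit nudge: drive faster or slower than the design speed and the whole melody shifts audibly out of tune.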

Automatic weighing

Weigh-in-motion technology that measures vehicles’ loads as they drive slowly through a designated lane has been around since the 1970s, but more recently, high-speed weigh-in-motion tech has made it possible to measure vehicles as they travel at regular highway speeds. The latest advance is integration with automatic license plate reading and wireless communication, allowing continuous remote monitoring both to enforce weight restrictions and to track wear on roads.

Vehicle charging

The growing popularity of electric vehicles has spurred the development of technology to charge cars and buses as they drive. The most promising of these approaches is magnetic induction, which involves burying cables beneath the road to generate electromagnetic fields that a receiver device in the car then transforms into electrical power to charge batteries.

Smart traffic signs

Traffic signs aren’t always as visible as they should be, and it can often be hard to remember what all of them mean. So there are now proposals for “smart signs” that wirelessly beam a sign’s content to oncoming cars fitted with receivers, which can then alert the driver verbally or on the car’s display. The approach isn’t affected by poor weather and lighting, can be reprogrammed easily, and could do away with the need for complex sign recognition technology in future self-driving cars.

Traffic violation detection and notification

Sensors and cameras can be combined with these same smart signs to detect and automatically notify drivers of traffic violations. Because the notifications are transmitted automatically and a record is stored on the car’s black box, drivers won’t be able to deny they’ve seen the warnings or been notified of any fines.

Talking cars

Car-to-car communication technology and V2X, which lets cars share information with any other connected device, are becoming increasingly common. Inter-car communication can be used to propagate accident or traffic-jam alerts and prevent congestion, while letting vehicles communicate with infrastructure can help signals dynamically manage their timers to keep traffic flowing or automatically collect tolls.

Smart intersections

Combining sensors and cameras with object recognition systems that can detect vehicles and other road users can help increase safety and efficiency at intersections. These systems can be used to extend green lights for slower road users like pedestrians and cyclists, sense jaywalkers, give priority to emergency vehicles, and dynamically adjust light timers to optimize traffic flow. Information can even be broadcast to oncoming vehicles to highlight blind spots and potential hazards.

Automatic crash detection

There’s a “golden hour” after an accident in which the chance of saving lives is greatly increased. Vehicle communication technology can ensure that notification of a crash reaches the emergency services rapidly, and can also provide vital information about the number and type of vehicles involved, which can help emergency response planning. It can also be used to alert other drivers to slow down or stop to prevent further accidents.

Smart street lights

Street lights are increasingly being embedded with sensors, wireless connectivity, and micro-controllers to enable a variety of smart functions. These include motion activation to save energy, providing wireless access points, air quality monitoring, or parking and litter monitoring. This can also be used to send automatic maintenance requests if a light is faulty, and can even allow neighboring lights to be automatically brightened to compensate.

Image Credit: David Mark from Pixabay


#436504 20 Technology Metatrends That Will ...

In the decade ahead, waves of exponential technological advancements are stacking atop one another, eclipsing decades of breakthroughs in scale and impact.

Emerging from these waves are 20 “metatrends” likely to revolutionize entire industries (old and new), redefine tomorrow’s generation of businesses and contemporary challenges, and transform our livelihoods from the bottom up.

Among these metatrends are augmented human longevity, the surging smart economy, AI-human collaboration, urbanized cellular agriculture, and high-bandwidth brain-computer interfaces, just to name a few.

It is here that master entrepreneurs and their teams must see beyond the immediate implications of a given technology, capturing second-order, Google-sized business opportunities on the horizon.

Welcome to a new decade of runaway technological booms, historic watershed moments, and extraordinary abundance.

Let’s dive in.

20 Metatrends for the 2020s
(1) Continued increase in global abundance: The number of individuals in extreme poverty continues to drop, as the middle-income population continues to rise. This metatrend is driven by the convergence of high-bandwidth and low-cost communication, ubiquitous AI on the cloud, and growing access to AI-aided education and AI-driven healthcare. Everyday goods and services (finance, insurance, education, and entertainment) are being digitized and becoming fully demonetized, available to the rising billion on mobile devices.

(2) Global gigabit connectivity will connect everyone and everything, everywhere, at ultra-low cost: The deployment of both licensed and unlicensed 5G, plus the launch of a multitude of global satellite networks (OneWeb, Starlink, etc.), will allow for ubiquitous, low-cost communications for everyone, everywhere, not to mention the connection of trillions of devices. And today’s skyrocketing connectivity is bringing online an additional three billion individuals, driving tens of trillions of dollars into the global economy. This metatrend is driven by the convergence of low-cost space launches, hardware advancements, 5G networks, artificial intelligence, materials science, and surging computing power.

(3) The average human healthspan will increase by 10+ years: A dozen game-changing biotech and pharmaceutical solutions (currently in Phase 1, 2, or 3 clinical trials) will reach consumers this decade, adding an additional decade to the human healthspan. Technologies include stem cell supply restoration, Wnt pathway manipulation, senolytic medicines, a new generation of endo-vaccines, GDF-11, and supplementation of NMN/NAD+, among several others. And as machine learning continues to mature, AI is set to unleash countless new drug candidates, ready for clinical trials. This metatrend is driven by the convergence of genome sequencing, CRISPR technologies, AI, quantum computing, and cellular medicine.

(4) An age of capital abundance will see increasing access to capital everywhere: From 2016 to 2018 (and likely in 2019), humanity hit all-time highs in the global flow of seed capital, venture capital, and sovereign wealth fund investments. While this trend will witness some ups and downs in the wake of future recessions, it is expected to continue its overall upward trajectory. Capital abundance leads to the funding and testing of ‘crazy’ entrepreneurial ideas, which in turn accelerate innovation. Already, $300 billion in crowdfunding is anticipated by 2025, democratizing capital access for entrepreneurs worldwide. This metatrend is driven by the convergence of global connectivity, dematerialization, demonetization, and democratization.

(5) Augmented reality and the spatial web will achieve ubiquitous deployment: The combination of augmented reality (yielding Web 3.0, or the spatial web) and 5G networks (offering 100Mb/s – 10Gb/s connection speeds) will transform how we live our everyday lives, impacting every industry from retail and advertising to education and entertainment. Consumers will play, learn, and shop throughout the day in a newly intelligent, virtually overlaid world. This metatrend will be driven by the convergence of hardware advancements, 5G networks, artificial intelligence, materials science, and surging computing power.

(6) Everything is smart, embedded with intelligence: The price of specialized machine learning chips is dropping rapidly with a rise in global demand. Combined with the explosion of low-cost microscopic sensors and the deployment of high-bandwidth networks, we’re heading into a decade wherein every device becomes intelligent. Your child’s toy remembers her face and name. Your kids’ drone safely and diligently follows and videos all the children at the birthday party. Appliances respond to voice commands and anticipate your needs.

(7) AI will achieve human-level intelligence: As predicted by technologist and futurist Ray Kurzweil, artificial intelligence will reach human-level performance this decade (by 2030). Through the 2020s, AI algorithms and machine learning tools will be increasingly made open source, available on the cloud, allowing any individual with an internet connection to supplement their cognitive ability, augment their problem-solving capacity, and build new ventures at a fraction of the current cost. This metatrend will be driven by the convergence of global high-bandwidth connectivity, neural networks, and cloud computing. Every industry, spanning industrial design, healthcare, education, and entertainment, will be impacted.

(8) AI-human collaboration will skyrocket across all professions: The rise of “AI as a Service” (AIaaS) platforms will enable humans to partner with AI in every aspect of their work, at every level, in every industry. AIs will become entrenched in everyday business operations, serving as cognitive collaborators to employees—supporting creative tasks, generating new ideas, and tackling previously unattainable innovations. In some fields, partnership with AI will even become a requirement. For example: in the future, making certain diagnoses without the consultation of AI may be deemed malpractice.

(9) Most individuals will adopt a JARVIS-like “software shell” to improve their quality of life: As services like Alexa, Google Home, and Apple HomePod expand in functionality, such services will eventually travel beyond the home and become your cognitive prosthetic 24/7. Imagine a secure JARVIS-like software shell that you give permission to listen to all your conversations, read your email, monitor your blood chemistry, etc. With access to such data, these AI-enabled software shells will learn your preferences, anticipate your needs and behavior, shop for you, monitor your health, and help you problem-solve in support of your mid- and long-term goals.

(10) Globally abundant, cheap renewable energy: Continued advancements in solar, wind, geothermal, hydroelectric, nuclear, and localized grids will drive humanity towards cheap, abundant, and ubiquitous renewable energy. The price of renewables will drop below one cent per kilowatt-hour, just as storage drops below a mere three cents per kilowatt-hour, resulting in the majority displacement of fossil fuels globally. And as the world’s poorest countries are also the world’s sunniest, the democratization of both new and traditional storage technologies will grant energy abundance to those already bathed in sunlight.

(11) The insurance industry transforms from “recovery after risk” to “prevention of risk”: Today, fire insurance pays you after your house burns down; life insurance pays your next-of-kin after you die; and health insurance (which is really sick insurance) pays only after you get sick. This next decade, a new generation of insurance providers will leverage the convergence of machine learning, ubiquitous sensors, low-cost genome sequencing, and robotics to detect risk, prevent disaster, and guarantee safety before any costs are incurred.

(12) Autonomous vehicles and flying cars will redefine human travel (soon to be far faster and cheaper): Fully autonomous vehicles, car-as-a-service fleets, and aerial ride-sharing (flying cars) will be fully operational in most major metropolitan cities in the coming decade. The cost of transportation will plummet 3-4X, transforming real estate, finance, insurance, the materials economy, and urban planning. Where you live and work, and how you spend your time, will all be fundamentally reshaped by this future of human travel. Your kids and elderly parents will never drive. This metatrend will be driven by the convergence of machine learning, sensors, materials science, battery storage improvements, and ubiquitous gigabit connections.

(13) On-demand production and on-demand delivery will birth an “instant economy of things”: Urban dwellers will learn to expect “instant fulfillment” of their retail orders as drone and robotic last-mile delivery services carry products from local supply depots directly to your doorstep. Further riding the deployment of regional on-demand digital manufacturing (3D printing farms), individualized products can be obtained within hours, anywhere, anytime. This metatrend is driven by the convergence of networks, 3D printing, robotics, and artificial intelligence.

(14) Ability to sense and know anything, anytime, anywhere: We’re rapidly approaching the era wherein 100 billion sensors (the Internet of Everything) are monitoring and sensing (imaging, listening, measuring) every facet of our environments, all the time. Global imaging satellites, drones, autonomous car LIDARs, and forward-looking augmented reality (AR) headset cameras are all part of a global sensor matrix, together allowing us to know anything, anytime, anywhere. This metatrend is driven by the convergence of terrestrial, atmospheric, and space-based sensors, vast data networks, and machine learning. In this future, it’s not “what you know,” but rather “the quality of the questions you ask” that will be most important.

(15) Disruption of advertising: As AI becomes increasingly embedded in everyday life, your custom AI will soon understand what you want better than you do. In turn, we will begin to both trust and rely upon our AIs to make most of our buying decisions, turning over shopping to AI-enabled personal assistants. Your AI might make purchases based upon your past desires, current shortages, conversations you’ve allowed your AI to listen to, or by tracking where your pupils focus on a virtual interface (i.e. what catches your attention). As a result, the advertising industry—which normally competes for your attention (whether at the Super Bowl or through search engines)—will have a hard time influencing your AI. This metatrend is driven by the convergence of machine learning, sensors, augmented reality, and 5G/networks.

(16) Cellular agriculture moves from the lab into inner cities, providing high-quality protein that is cheaper and healthier: This next decade will witness the birth of the most ethical, nutritious, and environmentally sustainable protein production system devised by humankind. Stem cell-based ‘cellular agriculture’ will allow the production of beef, chicken, and fish anywhere, on-demand, with far higher nutritional content, and a vastly lower environmental footprint than traditional livestock options. This metatrend is enabled by the convergence of biotechnology, materials science, machine learning, and AgTech.

(17) High-bandwidth brain-computer interfaces (BCIs) will come online for public use: Technologist and futurist Ray Kurzweil has predicted that in the mid-2030s, we will begin connecting the human neocortex to the cloud. This next decade will see tremendous progress in that direction, first serving those with spinal cord injuries, whereby patients will regain both sensory capacity and motor control. Yet beyond assisting those with motor function loss, several BCI pioneers are now attempting to supplement their baseline cognitive abilities, a pursuit with the potential to increase their sensorium, memory, and even intelligence. This metatrend is fueled by the convergence of materials science, machine learning, and robotics.

(18) High-resolution VR will transform both retail and real estate shopping: High-resolution, lightweight virtual reality headsets will allow individuals at home to shop for everything from clothing to real estate from the convenience of their living room. Need a new outfit? Your AI knows your detailed body measurements and can whip up a fashion show featuring your avatar wearing the latest 20 designs on a runway. Want to see how your furniture might look inside a house you’re viewing online? No problem! Your AI can populate the property with your virtualized inventory and give you a guided tour. This metatrend is enabled by the convergence of VR, machine learning, and high-bandwidth networks.

(19) Increased focus on sustainability and the environment: An increase in global environmental awareness and concern over global warming will drive companies to invest in sustainability, both from a necessity standpoint and for marketing purposes. Breakthroughs in materials science, enabled by AI, will allow companies to drive tremendous reductions in waste and environmental contamination. One company’s waste will become another company’s profit center. This metatrend is enabled by the convergence of materials science, artificial intelligence, and broadband networks.

(20) CRISPR and gene therapies will minimize disease: A vast range of infectious diseases, ranging from AIDS to Ebola, will become curable. In addition, gene-editing technologies continue to advance in precision and ease of use, allowing families to treat and ultimately cure hundreds of inheritable genetic diseases. This metatrend is driven by the convergence of various biotechnologies (CRISPR, gene therapy), genome sequencing, and artificial intelligence.

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 over the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2020 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Free-Photos from Pixabay


#436484 If Machines Want to Make Art, Will ...

Assuming that the emergence of consciousness in artificial minds is possible, those minds will feel the urge to create art. But will we be able to understand it? To answer this question, we need to consider two subquestions: when does the machine become an author of an artwork? And how can we form an understanding of the art that it makes?

Empathy, we argue, is the force behind our capacity to understand works of art. Think of what happens when you are confronted with an artwork. We maintain that, to understand the piece, you use your own conscious experience to ask what could possibly motivate you to make such an artwork yourself—and then you use that first-person perspective to try to come to a plausible explanation that allows you to relate to the artwork. Your interpretation of the work will be personal and could differ significantly from the artist’s own reasons, but if we share sufficient experiences and cultural references, it might be a plausible one, even for the artist. This is why we can relate so differently to a work of art after learning that it is a forgery or imitation: the artist’s intent to deceive or imitate is very different from the attempt to express something original. Gathering contextual information before jumping to conclusions about other people’s actions—in art, as in life—can enable us to relate better to their intentions.

But the artist and you share something far more important than cultural references: you share a similar kind of body and, with it, a similar kind of embodied perspective. Our subjective human experience stems, among many other things, from being born and slowly educated within a society of fellow humans, from fighting the inevitability of our own death, from cherishing memories, from the lonely curiosity of our own mind, from the omnipresence of the needs and quirks of our biological body, and from the way it dictates the space- and time-scales we can grasp. All conscious machines will have embodied experiences of their own, but in bodies that will be entirely alien to us.

We are able to empathize with nonhuman characters or intelligent machines in human-made fiction because they have been conceived by other human beings from the only subjective perspective accessible to us: “What would it be like for a human to behave as x?” In order to understand machinic art as such—and assuming that we stand a chance of even recognizing it in the first place—we would need a way to conceive a first-person experience of what it is like to be that machine. That is something we cannot do even for beings that are much closer to us. It might very well happen that we understand some actions or artifacts created by machines of their own volition as art, but in doing so we will inevitably anthropomorphize the machine’s intentions. Art made by a machine can be meaningfully interpreted in a way that is plausible only from the perspective of that machine, and any coherent anthropomorphized interpretation will be implausibly alien from the machine perspective. As such, it will be a misinterpretation of the artwork.

But what if we grant the machine privileged access to our ways of reasoning, to the peculiarities of our perception apparatus, to endless examples of human culture? Wouldn’t that enable the machine to make art that a human could understand? Our answer is yes, but this would also make the artworks human—not authentically machinic. All examples so far of “art made by machines” are actually just straightforward examples of human art made with computers, with the artists being the computer programmers. It might seem like a strange claim: how can the programmers be the authors of the artwork if, most of the time, they can’t control—or even anticipate—the actual materializations of the artwork? It turns out that this is a long-standing artistic practice.

Suppose that your local orchestra is playing Beethoven’s Symphony No 7 (1812). Even though Beethoven will not be directly responsible for any of the sounds produced there, you would still say that you are listening to Beethoven. Your experience might depend considerably on the interpretation of the performers, the acoustics of the room, the behavior of fellow audience members or your state of mind. Those and other aspects are the result of choices made by specific individuals or of accidents happening to them. But the author of the music? Ludwig van Beethoven. Let’s say that, as a somewhat odd choice for the program, John Cage’s Imaginary Landscape No 4 (March No 2) (1951) is also played, with 24 performers controlling 12 radios according to a musical score. In this case, the responsibility for the sounds being heard should be attributed to unsuspecting radio hosts, or even to electromagnetic fields. Yet, the shaping of sounds over time—the composition—should be credited to Cage. Each performance of this piece will vary immensely in its sonic materialization, but it will always be a performance of Imaginary Landscape No 4.

Why should we change these principles when artists use computers if, in these respects at least, computer art does not bring anything new to the table? The (human) artists might not be in direct control of the final materializations, or even be able to predict them, but, despite that, they are the authors of the work. Various materializations of the same idea—in this case formalized as an algorithm—are instantiations of the same work manifesting different contextual conditions. In fact, a common use of computation in the arts is the production of variations of a process, and artists make extensive use of systems that are sensitive to initial conditions, external inputs, or pseudo-randomness to deliberately avoid repetition of outputs. Having a computer execute a procedure to build an artwork, even if using pseudo-random processes or machine-learning algorithms, is no different from throwing dice to arrange a piece of music, or pursuing innumerable variations of the same formula. After all, the idea of machines that make art has an artistic tradition that long predates the current trend of artworks made by artificial intelligence.
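The point that one fixed procedure authors many distinct materializations can be sketched in a few lines of Python. This is an illustrative toy, not anything from the article: the `draw_piece` function and its pentatonic note palette are invented here, with the random seed standing in for the contextual conditions of a particular performance.

```python
import random

# A fixed "composition": choose notes from a set palette in a set rhythm.
PALETTE = ["C", "D", "E", "G", "A"]  # a pentatonic palette

def draw_piece(seed, length=8):
    """One materialization of the same generative work.

    The procedure (the composition) never changes; only the seed
    (the circumstances of a given performance) does.
    """
    rng = random.Random(seed)  # local generator, reproducible per seed
    return [rng.choice(PALETTE) for _ in range(length)]

# Two "performances" of the same piece: the sounds vary,
# but the authored procedure is identical.
print(draw_piece(seed=1))
print(draw_piece(seed=2))
```

The same seed always reproduces the same output, while different seeds yield different variations, which is the sense in which each run is a distinct instantiation of one authored work.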

Machinic art is a term that we believe should be reserved for art made by an artificial mind’s own volition, not for that based on (or directed towards) an anthropocentric view of art. From a human point of view, machinic artworks will still be procedural, algorithmic, and computational. They will be generative, because they will be autonomous from a human artist. And they might be interactive, with humans or other systems. But they will not be the result of a human deferring decisions to a machine, because the first of those—the decision to make art—needs to be the result of a machine’s volition, intentions, and decisions. Only then will we no longer have human art made with computers, but proper machinic art.

The problem is not whether machines will or will not develop a sense of self that leads to an eagerness to create art. The problem is that if—or when—they do, they will have such a different Umwelt that we will be completely unable to relate to it from our own subjective, embodied perspective. Machinic art will always lie beyond our ability to understand it because the boundaries of our comprehension—in art, as in life—are those of the human experience.

This article was originally published at Aeon and has been republished under Creative Commons.

Image Credit: Rene Böhmer / Unsplash
