#436530 How Smart Roads Will Make Driving ...

Roads criss-cross the landscape, but while they provide vital transport links, in many ways they represent a huge amount of wasted space. Advances in “smart road” technology could change that, creating roads that can harvest energy from cars, detect speeding, automatically weigh vehicles, and even communicate with smart cars.

“Smart city” projects are popping up in countries across the world thanks to advances in wireless communication, cloud computing, data analytics, remote sensing, and artificial intelligence. Transportation is a crucial element of most of these plans, but while much of the focus is on public transport solutions, smart roads are increasingly being seen as a crucial feature of these programs.

New technology is making it possible to tackle a host of issues including traffic congestion, accidents, and pollution, say the authors of a paper in the journal Proceedings of the Royal Society A. And they’ve outlined ten of the most promising advances under development or in planning stages that could feature on tomorrow’s roads.

Energy harvesting

A variety of energy harvesting technologies integrated into roads have been proposed as ways to power street lights and traffic signals or provide a boost to the grid. Photovoltaic panels could be built into the road surface to capture sunlight, or piezoelectric materials installed beneath the asphalt could generate current when deformed by vehicles passing overhead.

Musical roads

Countries like Japan, Denmark, the Netherlands, Taiwan, and South Korea have built roads that play music as cars pass by. By varying the spacing of rumble strips, it’s possible to produce a series of different notes as vehicles drive over them. The aim is generally to warn of hazards or help drivers keep to the speed limit.
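The physics here is simple enough to sketch: at a given speed, a car strikes the strips at a rate equal to speed divided by spacing, and that strike rate is the pitch of the note the driver hears. The hedged Python sketch below, which assumes a fixed design speed and ignores tire and pavement acoustics, converts a few target notes into approximate strip spacings.

```python
# Sketch: convert target notes into rumble-strip spacings for a "musical
# road," assuming a constant design speed. Values are illustrative only.

TARGET_SPEED_KMH = 60.0          # assumed design speed for the melody
NOTE_FREQS_HZ = {                # approximate equal-temperament pitches
    "G4": 392.0,
    "A4": 440.0,
    "B4": 493.9,
    "C5": 523.3,
}

def strip_spacing_m(note: str, speed_kmh: float = TARGET_SPEED_KMH) -> float:
    """Spacing so a car at speed_kmh hits strips at the note's frequency."""
    speed_m_per_s = speed_kmh / 3.6
    return speed_m_per_s / NOTE_FREQS_HZ[note]  # strikes per second = pitch

for note in ("G4", "A4", "B4", "C5"):
    print(f"{note}: {strip_spacing_m(note) * 100:.1f} cm between strips")
```

At 60 km/h this works out to grooves a few centimeters apart, which is roughly the scale used on real musical roads.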

Automatic weighing

Weigh-in-motion technology that measures vehicles’ loads as they drive slowly through a designated lane has been around since the 1970s, but more recently, high-speed weigh-in-motion tech has made it possible to measure vehicles as they travel at regular highway speeds. The latest advance is integration with automatic license plate reading and wireless communication, allowing continuous remote monitoring both to enforce weight restrictions and to track wear on roads.
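As a rough illustration of that enforcement pipeline, the sketch below pairs a hypothetical high-speed weight reading with an automatically read plate and flags overweight vehicles; the plate numbers, vehicle classes, and weight limits are invented for the example.

```python
# Toy weigh-in-motion check: pair a high-speed weight reading with an
# automatically read plate and flag overweight vehicles for follow-up.
# Plates, classes, and limits below are hypothetical.

WEIGHT_LIMIT_KG = {"truck": 40_000, "van": 3_500}

def check_vehicle(plate: str, vehicle_class: str, measured_kg: float) -> str:
    """Return an enforcement/monitoring message for one passing vehicle."""
    limit = WEIGHT_LIMIT_KG[vehicle_class]
    if measured_kg > limit:
        return f"{plate}: overweight ({measured_kg:.0f} kg > {limit} kg), notify enforcement"
    return f"{plate}: within limit ({measured_kg:.0f} kg of {limit} kg)"

print(check_vehicle("AB-123-CD", "truck", 42_300))
print(check_vehicle("EF-456-GH", "van", 2_900))
```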

Vehicle charging

The growing popularity of electric vehicles has spurred the development of technology to charge cars and buses as they drive. The most promising of these approaches is magnetic induction, which involves burying cables beneath the road to generate electromagnetic fields that a receiver device in the car then transforms into electrical power to charge batteries.

Smart traffic signs

Traffic signs aren’t always as visible as they should be, and it can often be hard to remember what all of them mean. So there are now proposals for “smart signs” that wirelessly beam a sign’s content to oncoming cars fitted with receivers, which can then alert the driver verbally or on the car’s display. The approach isn’t affected by poor weather and lighting, can be reprogrammed easily, and could do away with the need for complex sign recognition technology in future self-driving cars.
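To make the idea concrete, here is a minimal sketch of what such a broadcast might look like in software. The message fields and the alert_driver helper are hypothetical rather than any standardized V2X format; the point is only that the sign transmits structured content and the car turns it into a spoken or on-screen alert.

```python
# Hypothetical smart-sign broadcast and in-car handler. The field names
# and logic are illustrative, not a real V2X message format.
from dataclasses import dataclass

@dataclass
class SignMessage:
    sign_id: str           # unique ID of the roadside sign
    sign_type: str         # e.g. "speed_limit", "school_zone", "road_works"
    value: str             # payload, e.g. "80" for an 80 km/h limit
    valid_distance_m: int  # how far ahead the sign applies

def alert_driver(msg: SignMessage) -> str:
    """Turn a received sign broadcast into a spoken or on-screen alert."""
    if msg.sign_type == "speed_limit":
        return f"Speed limit {msg.value} km/h for the next {msg.valid_distance_m} m"
    return f"{msg.sign_type.replace('_', ' ').title()} ahead ({msg.value})"

print(alert_driver(SignMessage("A7-123", "speed_limit", "80", 2000)))
```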

Traffic violation detection and notification

Sensors and cameras can be combined with these same smart signs to detect traffic violations and automatically notify drivers. Because the notifications are transmitted automatically and a record is stored on the car’s black box, drivers won’t be able to deny they’ve seen the warnings or been notified of any fines.

Talking cars

Car-to-car communication technology and V2X, which lets cars share information with any other connected device, are becoming increasingly common. Inter-car communication can be used to propagate accident or traffic jam alerts to prevent congestion, while letting vehicles communicate with infrastructure can help traffic signals dynamically manage their timers to keep traffic flowing or automatically collect tolls.

Smart intersections

Combining sensors and cameras with object recognition systems that can detect vehicles and other road users can help increase safety and efficiency at intersections. Such systems can be used to extend green lights for slower road users like pedestrians and cyclists, sense jaywalkers, give priority to emergency vehicles, and dynamically adjust light timers to optimize traffic flow. Information can even be broadcast to oncoming vehicles to highlight blind spots and potential hazards.
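A toy version of that control logic might look like the sketch below, which extends a green phase while slower road users are still in the crossing and holds it for emergency vehicles; the thresholds and sensor inputs are invented for illustration, and real controllers are far more sophisticated.

```python
# Illustrative intersection logic: extend the green phase for slower road
# users and give emergency vehicles priority. All thresholds are made up.

BASE_GREEN_S = 20      # default green-phase length in seconds
MAX_EXTENSION_S = 15   # cap on how much a phase can be stretched

def green_duration(pedestrians_in_crossing: int,
                   cyclists_approaching: int,
                   emergency_vehicle_detected: bool) -> int:
    """Return the green-phase length in seconds for the current cycle."""
    if emergency_vehicle_detected:
        return BASE_GREEN_S + MAX_EXTENSION_S  # hold green for the corridor
    extension = 2 * pedestrians_in_crossing + 3 * cyclists_approaching
    return BASE_GREEN_S + min(extension, MAX_EXTENSION_S)

print(green_duration(3, 1, emergency_vehicle_detected=False))  # -> 29
```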

Automatic crash detection

There’s a “golden hour” after an accident in which the chance of saving lives is greatly increased. Vehicle communication technology can ensure that notification of a crash reaches the emergency services rapidly, and can also provide vital information about the number and type of vehicles involved, which can help emergency response planning. It can also be used to alert other drivers to slow down or stop to prevent further accidents.

Smart street lights

Street lights are increasingly being embedded with sensors, wireless connectivity, and microcontrollers to enable a variety of smart functions. These include motion activation to save energy, wireless access points, and air quality, parking, and litter monitoring. The same connectivity can be used to send automatic maintenance requests when a light is faulty, and even to brighten neighboring lights automatically to compensate.
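The "neighbors compensate for a faulty light" behavior is easy to picture in sketch form. In the example below the lamp IDs, adjacency map, and brightness values are hypothetical; the point is the control idea of reporting the fault and raising the adjacent lamps.

```python
# Sketch: when a street light reports a fault, file a maintenance request
# and brighten its immediate neighbors. IDs and values are illustrative.

lights = {  # lamp_id -> {"faulty": bool, "brightness": percent}
    "lamp_a": {"faulty": False, "brightness": 60},
    "lamp_b": {"faulty": True,  "brightness": 0},
    "lamp_c": {"faulty": False, "brightness": 60},
}
neighbors = {"lamp_b": ["lamp_a", "lamp_c"]}  # adjacency along the street

def handle_faults() -> None:
    for lamp_id, state in lights.items():
        if state["faulty"]:
            print(f"Maintenance request filed for {lamp_id}")
            for n in neighbors.get(lamp_id, []):
                lights[n]["brightness"] = min(100, lights[n]["brightness"] + 30)

handle_faults()
print(lights["lamp_a"]["brightness"], lights["lamp_c"]["brightness"])  # 90 90
```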

Image Credit: Image by David Mark from Pixabay

#436507 The Weird, the Wacky, the Just Plain ...

As you know if you’ve ever been to, heard of, or read about the annual Consumer Electronics Show in Vegas, there’s no shortage of tech in any form: gadgets, gizmos, and concepts abound. You probably couldn’t see them all in a month even if you spent all day every day trying.

Given the sheer scale of the show, the number of exhibitors, and the inherent subjectivity of bestowing superlatives, it’s hard to pick out the coolest tech from CES. But I’m going to do it anyway; in no particular order, here are some of the products and concepts that I personally found most intriguing at this year’s event.

e-Novia’s Haptic Gloves
Italian startup e-Novia’s Weart glove uses a ‘sensing core’ to record tactile sensations and an ‘actuation core’ to reproduce those sensations on the wearer’s skin. Haptic gloves will bring touch to VR and AR experiences, making them that much more lifelike. The tech could also be applied to the digitization of materials and to gaming and entertainment.

e-Novia’s modular haptic glove
I expected a full glove, but in fact there were two rings that attached to my fingers. Weart co-founder Giovanni Spagnoletti explained that they’re taking a modular approach, so as to better tailor the technology to different experiences. He then walked me through a virtual reality experience that was a sort of simulated science experiment: I had to lift a glass beaker, place it on a stove, pour in an ingredient, open a safe to access some dry ice, add that, and so on. As I went through the steps, I felt the beaker heat up and cool off at the expected times, and felt the liquid moving inside, as well as the pressure of my fingertips against the numbered buttons on the safe.

A virtual (but tactile) science experiment
There was a slight delay between my taking an action and feeling the corresponding tactile sensation, but on the whole, the haptic glove definitely made the experience more realistic—and more fun. Slightly less fun but definitely more significant, Spagnoletti told me Weart is working with a medical group to bring tactile sensations to VR training for surgeons.

Sarcos Robotics’ Exoskeleton
That tire may as well be a feather
Sarcos Robotics unveiled its Guardian XO full-body exoskeleton, which it says can safely lift up to 200 pounds across an extended work session. What’s cool about this particular exoskeleton is that it’s not just a prototype; the company announced a partnership with Delta Air Lines, which will be trialing the technology for aircraft maintenance, engine repair, and luggage handling. In a demo, I watched a petite female volunteer strap into the exoskeleton and easily lift a 50-pound weight with one hand, and a Sarcos employee lift and attach a heavy propeller component; she explained that the strength-augmenting function of the exoskeleton can easily be switched on or off—and the wearer’s hands released—to facilitate multi-step tasks.

Hyundai’s Flying Taxi
Where to?
Hyundai and Uber partnered to unveil an air taxi concept. With a 49-foot wingspan, 4 lift rotors, and 4 tilt rotors, the aircraft would be manned by a pilot and could carry 4 passengers at speeds up to 180 miles per hour. The companies say you’ll be able to ride across your city in one of these by 2030—we’ll see if the regulatory environment, public opinion, and other factors outside of technological capability let that happen.

Mercedes’ Avatar Concept Car
Welcome to the future
As evident from its name, Mercedes’ sweet new Vision AVTR concept car was inspired by the movie Avatar; director James Cameron helped design it. The all-electric car has no steering wheel, transparent doors, seats made of vegan leather, and 33 reptilian-scale-like flaps on the back; its design is meant to connect the driver with both the car and the surrounding environment in a natural, seamless way.

Next-generation scrolling
Offered the chance to ‘drive’ the car, I jumped on it. Placing my hand on the center console started the engine, and within seconds it had synced to my heartbeat, which reverberated through the car. The whole dashboard, from driver door to passenger door, is one big LED display. It showed a virtual landscape I could select by holding up my hand: as I moved my hand from left to right, different images were projected onto my open palm. Closing my hand on an image selected it, and suddenly it looked like I was in the middle of a lush green mountain range. Applying slight forward pressure on the center console made the car advance in the virtual landscape; it was essentially like playing a really cool video game.

Mercedes is aiming to have a carbon-neutral production fleet by 2039, and to reduce the amount of energy it uses during production by 40 percent by 2030. It’s unclear when—or whether—the man-machine-nature connecting features of the Vision AVTR will start showing up in production, but I for one will be on the lookout.

Waverly Labs’ In-Ear Translator
Waverly Labs unveiled its Ambassador translator earlier this year and has it on display at the show. It’s worn on the ear and uses a far-field microphone array with speech recognition to translate real-time conversations in 20 different languages. Besides in-ear audio, translations can also appear as text on an app or be broadcast live in a conference environment.

It’s kind of like a giant talking earring
I stopped by the booth and tested out the translator with Waverly senior software engineer Georgiy Konovalov. We each hooked on an earpiece, and first, he spoke to me in Russian. After a delay of a couple seconds, I heard his words in—slightly robotic, but fully comprehensible—English. Then we switched: I spoke to him in Spanish, my words popped up on his phone screen in Cyrillic, and he translated them back to English for me out loud.

On the whole, the demo was pretty cool. If you’ve ever been lost in a foreign country whose language you don’t speak, imagine how handy a gadget like this would be. Let’s just hope that once they’re more widespread, these products don’t end up discouraging people from learning languages.

Not to be outdone, Google also announced updates to its Translate product, which is being deployed at information desks in JFK airport’s international terminal, in sports stadiums in Qatar, and by some large hotel chains.

Stratuscent’s Digital Nose
AI is making steady progress towards achieving human-like vision and hearing—but there’s been less work done on mimicking our sense of smell (maybe because it’s less useful in everyday applications). Stratuscent’s digital nose, which it says is based on NASA patents, uses chemical receptors and AI to identify both simple chemicals and complex scents. The company is aiming to create the world’s first comprehensive database of everyday scents, which it says it will use to make “intelligent decisions” for customers. What kind of decisions remains to be seen—and smelled.

Banner Image Credit: The Mercedes Vision AVTR concept car. Photo by Vanessa Bates Ramirez

#436504 20 Technology Metatrends That Will ...

In the decade ahead, waves of exponential technological advancements are stacking atop one another, eclipsing decades of breakthroughs in scale and impact.

Emerging from these waves are 20 “metatrends” likely to revolutionize entire industries (old and new), redefine tomorrow’s generation of businesses and contemporary challenges, and transform our livelihoods from the bottom up.

Among these metatrends are augmented human longevity, the surging smart economy, AI-human collaboration, urbanized cellular agriculture, and high-bandwidth brain-computer interfaces, just to name a few.

It is here that master entrepreneurs and their teams must see beyond the immediate implications of a given technology, capturing second-order, Google-sized business opportunities on the horizon.

Welcome to a new decade of runaway technological booms, historic watershed moments, and extraordinary abundance.

Let’s dive in.

20 Metatrends for the 2020s
(1) Continued increase in global abundance: The number of individuals in extreme poverty continues to drop, as the middle-income population continues to rise. This metatrend is driven by the convergence of high-bandwidth and low-cost communication, ubiquitous AI on the cloud, and growing access to AI-aided education and AI-driven healthcare. Everyday goods and services (finance, insurance, education, and entertainment) are being digitized and becoming fully demonetized, available to the rising billion on mobile devices.

(2) Global gigabit connectivity will connect everyone and everything, everywhere, at ultra-low cost: The deployment of both licensed and unlicensed 5G, plus the launch of a multitude of global satellite networks (OneWeb, Starlink, etc.), will allow for ubiquitous, low-cost communications for everyone, everywhere, not to mention the connection of trillions of devices. And today’s skyrocketing connectivity is bringing online an additional three billion individuals, driving tens of trillions of dollars into the global economy. This metatrend is driven by the convergence of low-cost space launches, hardware advancements, 5G networks, artificial intelligence, materials science, and surging computing power.

(3) The average human healthspan will increase by 10+ years: A dozen game-changing biotech and pharmaceutical solutions (currently in Phase 1, 2, or 3 clinical trials) will reach consumers this decade, adding an additional decade to the human healthspan. Technologies include stem cell supply restoration, Wnt pathway manipulation, senolytic medicines, a new generation of endo-vaccines, GDF-11, and supplementation of NMN/NAD+, among several others. And as machine learning continues to mature, AI is set to unleash countless new drug candidates, ready for clinical trials. This metatrend is driven by the convergence of genome sequencing, CRISPR technologies, AI, quantum computing, and cellular medicine.

(4) An age of capital abundance will see increasing access to capital everywhere: From 2016 to 2018 (and likely in 2019), humanity hit all-time highs in the global flow of seed capital, venture capital, and sovereign wealth fund investments. While this trend will witness some ups and downs in the wake of future recessions, it is expected to continue its overall upward trajectory. Capital abundance leads to the funding and testing of ‘crazy’ entrepreneurial ideas, which in turn accelerate innovation. Already, $300 billion in crowdfunding is anticipated by 2025, democratizing capital access for entrepreneurs worldwide. This metatrend is driven by the convergence of global connectivity, dematerialization, demonetization, and democratization.

(5) Augmented reality and the spatial web will achieve ubiquitous deployment: The combination of augmented reality (yielding Web 3.0, or the spatial web) and 5G networks (offering 100 Mbps to 10 Gbps connection speeds) will transform how we live our everyday lives, impacting every industry from retail and advertising to education and entertainment. Consumers will play, learn, and shop throughout the day in a newly intelligent, virtually overlaid world. This metatrend will be driven by the convergence of hardware advancements, 5G networks, artificial intelligence, materials science, and surging computing power.

(6) Everything is smart, embedded with intelligence: The price of specialized machine learning chips is dropping rapidly with a rise in global demand. Combined with the explosion of low-cost microscopic sensors and the deployment of high-bandwidth networks, we’re heading into a decade wherein every device becomes intelligent. Your child’s toy remembers her face and name. Your kids’ drone safely and diligently follows and videos all the children at the birthday party. Appliances respond to voice commands and anticipate your needs.

(7) AI will achieve human-level intelligence: As predicted by technologist and futurist Ray Kurzweil, artificial intelligence will reach human-level performance this decade (by 2030). Through the 2020s, AI algorithms and machine learning tools will be increasingly made open source, available on the cloud, allowing any individual with an internet connection to supplement their cognitive ability, augment their problem-solving capacity, and build new ventures at a fraction of the current cost. This metatrend will be driven by the convergence of global high-bandwidth connectivity, neural networks, and cloud computing. Every industry, spanning industrial design, healthcare, education, and entertainment, will be impacted.

(8) AI-human collaboration will skyrocket across all professions: The rise of “AI as a Service” (AIaaS) platforms will enable humans to partner with AI in every aspect of their work, at every level, in every industry. AIs will become entrenched in everyday business operations, serving as cognitive collaborators to employees—supporting creative tasks, generating new ideas, and tackling previously unattainable innovations. In some fields, partnership with AI will even become a requirement. For example: in the future, making certain diagnoses without the consultation of AI may be deemed malpractice.

(9) Most individuals will adopt a JARVIS-like “software shell” to improve their quality of life: As services like Alexa, Google Home, and Apple HomePod expand in functionality, they will eventually travel beyond the home and become your cognitive prosthetic 24/7. Imagine a secure JARVIS-like software shell that you give permission to listen to all your conversations, read your email, monitor your blood chemistry, etc. With access to such data, these AI-enabled software shells will learn your preferences, anticipate your needs and behavior, shop for you, monitor your health, and help you problem-solve in support of your mid- and long-term goals.

(10) Globally abundant, cheap renewable energy: Continued advancements in solar, wind, geothermal, hydroelectric, nuclear, and localized grids will drive humanity towards cheap, abundant, and ubiquitous renewable energy. The price of renewables will drop below one cent per kilowatt-hour, just as storage drops below a mere three cents per kilowatt-hour, resulting in the majority displacement of fossil fuels globally. And as the world’s poorest countries are also the world’s sunniest, the democratization of both new and traditional storage technologies will grant energy abundance to those already bathed in sunlight.

(11) The insurance industry transforms from “recovery after risk” to “prevention of risk”: Today, fire insurance pays you after your house burns down; life insurance pays your next-of-kin after you die; and health insurance (which is really sick insurance) pays only after you get sick. This next decade, a new generation of insurance providers will leverage the convergence of machine learning, ubiquitous sensors, low-cost genome sequencing, and robotics to detect risk, prevent disaster, and guarantee safety before any costs are incurred.

(12) Autonomous vehicles and flying cars will redefine human travel (soon to be far faster and cheaper): Fully autonomous vehicles, car-as-a-service fleets, and aerial ride-sharing (flying cars) will be fully operational in most major metropolitan cities in the coming decade. The cost of transportation will plummet 3-4X, transforming real estate, finance, insurance, the materials economy, and urban planning. Where you live and work, and how you spend your time, will all be fundamentally reshaped by this future of human travel. Your kids and elderly parents will never drive. This metatrend will be driven by the convergence of machine learning, sensors, materials science, battery storage improvements, and ubiquitous gigabit connections.

(13) On-demand production and on-demand delivery will birth an “instant economy of things”: Urban dwellers will learn to expect “instant fulfillment” of their retail orders as drone and robotic last-mile delivery services carry products from local supply depots directly to their doorsteps. Riding the deployment of regional, on-demand digital manufacturing (3D printing farms), consumers will be able to obtain individualized products within hours, anywhere, anytime. This metatrend is driven by the convergence of networks, 3D printing, robotics, and artificial intelligence.

(14) Ability to sense and know anything, anytime, anywhere: We’re rapidly approaching the era wherein 100 billion sensors (the Internet of Everything) are monitoring and sensing (imaging, listening, measuring) every facet of our environments, all the time. Global imaging satellites, drones, autonomous car LIDARs, and forward-looking augmented reality (AR) headset cameras are all part of a global sensor matrix, together allowing us to know anything, anytime, anywhere. This metatrend is driven by the convergence of terrestrial, atmospheric, and space-based sensors, vast data networks, and machine learning. In this future, it’s not “what you know,” but rather “the quality of the questions you ask” that will be most important.

(15) Disruption of advertising: As AI becomes increasingly embedded in everyday life, your custom AI will soon understand what you want better than you do. In turn, we will begin to both trust and rely upon our AIs to make most of our buying decisions, turning over shopping to AI-enabled personal assistants. Your AI might make purchases based upon your past desires, current shortages, conversations you’ve allowed your AI to listen to, or where your pupils focus on a virtual interface (i.e., what catches your attention). As a result, the advertising industry—which normally competes for your attention (whether at the Super Bowl or through search engines)—will have a hard time influencing your AI. This metatrend is driven by the convergence of machine learning, sensors, augmented reality, and 5G/networks.

(16) Cellular agriculture moves from the lab into inner cities, providing high-quality protein that is cheaper and healthier: This next decade will witness the birth of the most ethical, nutritious, and environmentally sustainable protein production system devised by humankind. Stem cell-based ‘cellular agriculture’ will allow the production of beef, chicken, and fish anywhere, on-demand, with far higher nutritional content, and a vastly lower environmental footprint than traditional livestock options. This metatrend is enabled by the convergence of biotechnology, materials science, machine learning, and AgTech.

(17) High-bandwidth brain-computer interfaces (BCIs) will come online for public use: Technologist and futurist Ray Kurzweil has predicted that in the mid-2030s, we will begin connecting the human neocortex to the cloud. This next decade will see tremendous progress in that direction, first serving those with spinal cord injuries, whereby patients will regain both sensory capacity and motor control. Yet beyond assisting those with motor function loss, several BCI pioneers are now attempting to supplement their baseline cognitive abilities, a pursuit with the potential to increase their sensorium, memory, and even intelligence. This metatrend is fueled by the convergence of materials science, machine learning, and robotics.

(18) High-resolution VR will transform both retail and real estate shopping: High-resolution, lightweight virtual reality headsets will allow individuals at home to shop for everything from clothing to real estate from the convenience of their living room. Need a new outfit? Your AI knows your detailed body measurements and can whip up a fashion show featuring your avatar wearing the latest 20 designs on a runway. Want to see how your furniture might look inside a house you’re viewing online? No problem! Your AI can populate the property with your virtualized inventory and give you a guided tour. This metatrend is enabled by the convergence of VR, machine learning, and high-bandwidth networks.

(19) Increased focus on sustainability and the environment: An increase in global environmental awareness and concern over global warming will drive companies to invest in sustainability, both from a necessity standpoint and for marketing purposes. Breakthroughs in materials science, enabled by AI, will allow companies to drive tremendous reductions in waste and environmental contamination. One company’s waste will become another company’s profit center. This metatrend is enabled by the convergence of materials science, artificial intelligence, and broadband networks.

(20) CRISPR and gene therapies will minimize disease: A vast range of infectious diseases, ranging from AIDS to Ebola, are now curable. In addition, gene-editing technologies continue to advance in precision and ease of use, allowing families to treat and ultimately cure hundreds of inheritable genetic diseases. This metatrend is driven by the convergence of various biotechnologies (CRISPR, gene therapy), genome sequencing, and artificial intelligence.

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2020 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University — your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Image by Free-Photos from Pixabay

#436484 If Machines Want to Make Art, Will ...

Assuming that the emergence of consciousness in artificial minds is possible, those minds will feel the urge to create art. But will we be able to understand it? To answer this question, we need to consider two subquestions: when does the machine become an author of an artwork? And how can we form an understanding of the art that it makes?

Empathy, we argue, is the force behind our capacity to understand works of art. Think of what happens when you are confronted with an artwork. We maintain that, to understand the piece, you use your own conscious experience to ask what could possibly motivate you to make such an artwork yourself—and then you use that first-person perspective to try to come to a plausible explanation that allows you to relate to the artwork. Your interpretation of the work will be personal and could differ significantly from the artist’s own reasons, but if we share sufficient experiences and cultural references, it might be a plausible one, even for the artist. This is why we can relate so differently to a work of art after learning that it is a forgery or imitation: the artist’s intent to deceive or imitate is very different from the attempt to express something original. Gathering contextual information before jumping to conclusions about other people’s actions—in art, as in life—can enable us to relate better to their intentions.

But the artist and you share something far more important than cultural references: you share a similar kind of body and, with it, a similar kind of embodied perspective. Our subjective human experience stems, among many other things, from being born and slowly educated within a society of fellow humans, from fighting the inevitability of our own death, from cherishing memories, from the lonely curiosity of our own mind, from the omnipresence of the needs and quirks of our biological body, and from the way it dictates the space- and time-scales we can grasp. All conscious machines will have embodied experiences of their own, but in bodies that will be entirely alien to us.

We are able to empathize with nonhuman characters or intelligent machines in human-made fiction because they have been conceived by other human beings from the only subjective perspective accessible to us: “What would it be like for a human to behave as x?” In order to understand machinic art as such—and assuming that we stand a chance of even recognizing it in the first place—we would need a way to conceive a first-person experience of what it is like to be that machine. That is something we cannot do even for beings that are much closer to us. It might very well happen that we understand some actions or artifacts created by machines of their own volition as art, but in doing so we will inevitably anthropomorphize the machine’s intentions. Art made by a machine can be meaningfully interpreted in a way that is plausible only from the perspective of that machine, and any coherent anthropomorphized interpretation will be implausibly alien from the machine perspective. As such, it will be a misinterpretation of the artwork.

But what if we grant the machine privileged access to our ways of reasoning, to the peculiarities of our perception apparatus, to endless examples of human culture? Wouldn’t that enable the machine to make art that a human could understand? Our answer is yes, but this would also make the artworks human—not authentically machinic. All examples so far of “art made by machines” are actually just straightforward examples of human art made with computers, with the artists being the computer programmers. It might seem like a strange claim: how can the programmers be the authors of the artwork if, most of the time, they can’t control—or even anticipate—the actual materializations of the artwork? It turns out that this is a long-standing artistic practice.

Suppose that your local orchestra is playing Beethoven’s Symphony No 7 (1812). Even though Beethoven will not be directly responsible for any of the sounds produced there, you would still say that you are listening to Beethoven. Your experience might depend considerably on the interpretation of the performers, the acoustics of the room, the behavior of fellow audience members or your state of mind. Those and other aspects are the result of choices made by specific individuals or of accidents happening to them. But the author of the music? Ludwig van Beethoven. Let’s say that, as a somewhat odd choice for the program, John Cage’s Imaginary Landscape No 4 (March No 2) (1951) is also played, with 24 performers controlling 12 radios according to a musical score. In this case, the responsibility for the sounds being heard should be attributed to unsuspecting radio hosts, or even to electromagnetic fields. Yet, the shaping of sounds over time—the composition—should be credited to Cage. Each performance of this piece will vary immensely in its sonic materialization, but it will always be a performance of Imaginary Landscape No 4.

Why should we change these principles when artists use computers if, in these respects at least, computer art does not bring anything new to the table? The (human) artists might not be in direct control of the final materializations, or even be able to predict them, but despite that, they are the authors of the work. Various materializations of the same idea—in this case formalized as an algorithm—are instantiations of the same work manifesting different contextual conditions. In fact, a common use of computation in the arts is the production of variations of a process, and artists make extensive use of systems that are sensitive to initial conditions, external inputs, or pseudo-randomness to deliberately avoid repetition of outputs. Having a computer execute a procedure to build an artwork, even if using pseudo-random processes or machine-learning algorithms, is no different from throwing dice to arrange a piece of music, or pursuing innumerable variations of the same formula. After all, the idea of machines that make art has an artistic tradition that long predates the current trend of artworks made by artificial intelligence.

Machinic art is a term that we believe should be reserved for art made by an artificial mind’s own volition, not for that based on (or directed towards) an anthropocentric view of art. From a human point of view, machinic artworks will still be procedural, algorithmic, and computational. They will be generative, because they will be autonomous from a human artist. And they might be interactive, with humans or other systems. But they will not be the result of a human deferring decisions to a machine, because the first of those—the decision to make art—needs to be the result of a machine’s volition, intentions, and decisions. Only then will we no longer have human art made with computers, but proper machinic art.

The problem is not whether machines will or will not develop a sense of self that leads to an eagerness to create art. The problem is that if—or when—they do, they will have such a different Umwelt that we will be completely unable to relate to it from our own subjective, embodied perspective. Machinic art will always lie beyond our ability to understand it because the boundaries of our comprehension—in art, as in life—are those of the human experience.

This article was originally published at Aeon and has been republished under Creative Commons.

Image Credit: Rene Böhmer / Unsplash

#436482 50+ Reasons Our Favorite Emerging ...

For most of history, technology was about atoms, the manipulation of physical stuff to extend humankind’s reach. But in the last five or six decades, atoms have partnered with bits, the elemental “particles” of the digital world as we know it today. As computing has advanced at the accelerating pace described by Moore’s Law, technological progress has become increasingly digitized.

SpaceX lands and reuses rockets, and self-driving cars do away with drivers, thanks to automation, sensors, and software. Businesses find and hire talent from anywhere in the world, and for better and worse, a notable fraction of the world learns and socializes online. From the sequencing of DNA to artificial intelligence and from 3D printing to robotics, more and more new technologies are moving at a digital pace and quickly emerging to reshape the world around us.

In 2019, stories charting the advances of some of these digital technologies consistently made headlines. Below is what is, at best, an incomplete list of some of the big stories that caught our eye this year. With so much happening, it’s likely we’ve missed some notable headlines and advances—as well as some of your personal favorites. In either case, share your thoughts and candidates for the biggest stories and breakthroughs on Facebook and Twitter.

With that said, let’s dive straight into the year.

Artificial Intelligence
No technology garnered as much attention as AI in 2019. With good reason. Intelligent computer systems are transitioning from research labs to everyday life. Healthcare, weather forecasting, business process automation, traffic congestion—you name it, and machine learning algorithms are likely beginning to work on it. Yet, AI has also been hyped up and overmarketed, and the latest round of AI technology, deep learning, is likely only one piece of the AI puzzle.

This year, OpenAI’s game-playing algorithms beat some of the world’s best Dota 2 players, DeepMind notched impressive wins in StarCraft, and Carnegie Mellon University’s Pluribus “crushed” pros at six-player Texas Hold’em.
Speaking of games, AI’s mastery of the incredibly complex game of Go prompted a former world champion to quit, stating that AI “cannot be defeated.”
But it isn’t just fun and games. Practical, powerful applications that make the best of AI’s pattern recognition abilities are on the way. Insilico Medicine, for example, used machine learning to help discover and design a new drug in just 46 days, and DeepMind is focused on using AI to crack protein folding.
Of course, AI can be a double-edged sword. When it comes to deepfakes and fake news, for example, AI makes both easier to create and detect, and early in the year, OpenAI created and announced a powerful AI text generator but delayed releasing it for fear of malicious use.
Recognizing AI’s power for good and ill, the OECD, EU, World Economic Forum, and China all took a stab at defining an ethical framework for the development and deployment of AI.

Computing Systems
Processors and chips kickstarted the digital boom and are still the bedrock of continued growth. While progress in traditional silicon-based chips continues, it’s slowing and getting more expensive. Some say we’re reaching the end of Moore’s Law. While that may be the case for traditional chips, specialized chips and entirely new kinds of computing are waiting in the wings.

In fall 2019, Google confirmed its quantum computer had achieved “quantum supremacy,” a term that means a quantum computer can perform a calculation a normal computer cannot. IBM pushed back on the claim, and it should be noted the calculation was highly specialized. But while it’s still early days, there does appear to be some real progress (and more to come).
Should quantum computing become truly practical, “the implications are staggering.” It could impact machine learning, medicine, chemistry, and materials science, just to name a few areas.
Specialized chips continue to take aim at machine learning—a giant new chip with over a trillion transistors, for example, may make machine learning algorithms significantly more efficient.
Cellular computers also saw advances in 2019 thanks to CRISPR. And the year witnessed the emergence of the first reprogrammable DNA computer and new chips inspired by the brain.
The development of hardware computing platforms is intrinsically linked to software. 2019 saw a continued move from big technology companies towards open sourcing (at least parts of) their software, potentially democratizing the use of advanced systems.

Networks
Increasing interconnectedness has, in many ways, defined the 21st century so far. Your phone is no longer just a phone. It’s access to the world’s population and accumulated knowledge—and it fits in your pocket. Pretty neat. This is all thanks to networks, which had some notable advances in 2019.

The biggest network development of the year may well be the arrival of the first 5G networks.
5G’s faster speeds promise advances across many emerging technologies.
Self-driving vehicles, for example, may become both smarter and safer thanks to 5G C-V2X networks. (Don’t worry with trying to remember that. If they catch on, they’ll hopefully get a better name.)
Wi-Fi may have heard the news and said “hold my beer,” as 2019 saw the introduction of Wi-Fi 6. Perhaps the most important upgrade, among others, is that Wi-Fi 6 ensures that the ever-growing number of network connected devices get higher data rates.
Networks also went to space in 2019, as SpaceX began launching its Starlink constellation of broadband satellites. In typical fashion, Elon Musk showed off the network’s ability to bounce data around the world by sending a Tweet.

Augmented Reality and Virtual Reality
Forget Pokemon Go (unless you want to add me as a friend in the game—in which case don’t forget Pokemon Go). 2019 saw AR and VR advance, even as Magic Leap, the most hyped of the lot, struggled to live up to outsized expectations and sell headsets.

Mixed reality AR and VR technologies, along with the explosive growth of sensor-based data about the world around us, are creating a one-to-one “Mirror World” of our physical reality—a digital world you can overlay on our own or dive into immersively thanks to AR and VR.
Facebook launched Replica, for example, which is a photorealistic virtual twin of the real world that, among other things, will help train AIs to better navigate their physical surroundings.
Our other senses (beyond eyes) may also become part of the Mirror World through the use of peripherals like a newly developed synthetic skin that aims to bring a sense of touch to VR.
AR and VR equipment is also becoming cheaper—with more producers entering the space—and more user-friendly. Instead of a wired headset requiring an expensive gaming PC, the new Oculus Quest is a wireless, self-contained step toward the mainstream.
Niche uses also continue to gain traction, from Google Glass’s Enterprise edition to the growth of AR and VR in professional education—including on-the-job-training and roleplaying emotionally difficult work encounters, like firing an employee.

Digital Biology and Biotech
The digitization of biology is happening at an incredible rate. With wild new research coming to light every year and just about every tech giant pouring money into new solutions and startups, we’re likely to see amazing advances in 2020 added to those we saw in 2019.

None were, perhaps, more visible than the success of protein-rich, plant-based substitutes for various meats. This was the year Beyond Meat was the top IPO on the NASDAQ stock exchange and people stood in line for the plant-based Impossible Whopper and KFC’s Beyond Chicken.
In the healthcare space, a report about three people with HIV who became virus-free thanks to bone marrow transplants of stem cells caused a huge stir. The research is still in relatively early stages and isn’t suitable for most people, but it does provide a glimmer of hope.
CRISPR technology, which almost deserves its own section, progressed by leaps and bounds. One tweak made CRISPR up to 50 times more accurate, while the latest new CRISPR-based system, CRISPR prime, was described as a “word processor” for gene editing.
Many areas of healthcare stand to gain from CRISPR. For instance, cancer treatment, where a first safety test showed ‘promising’ results.
CRISPR’s many potential uses, however, also include some weird and morally questionable areas, as exemplified by one of the year’s stranger CRISPR-related stories about a human-monkey hybrid embryo in China.
Incidentally, China could be poised to take the lead on CRISPR thanks to massive investments and research programs.
As a consequence of quick advances in gene editing, we are approaching a point where we will be able to design our own biology—but first we need to have a serious conversation as a society about the ethics of gene editing and what lines should be drawn.

3D Printing
3D printing has quietly been growing, both in market size and in the range of objects the printers are capable of producing. While both are impressive, perhaps the biggest story of 2019 is the printers’ increased speed.

One example was a boat that was printed in just three days, which also set three new world records for 3D printing.
3D printing is also spreading in the construction industry. In Mexico, the technology is being used to construct 50 new homes with subsidized mortgages of just $20/month.
3D printers also took care of all parts of a 640-square-meter home in Dubai.
Generally speaking, the use of 3D printing to make parts for everything from rocket engines (even entire rockets) to trains to cars illustrates the sturdiness of the technology, anno 2019.
In healthcare, 3D printing is also advancing the cause of bio-printed organs and, in one example, was used to print vascularized parts of a human heart.

Robotics
Living in Japan, I get to see Pepper, Aibo, and other robots on pretty much a daily basis. The novelty of that experience is spreading to other countries, and robots are becoming a more visible addition to both our professional and private lives.

We can’t talk about robots and 2019 without mentioning Boston Dynamics’ Spot robot, which went on sale for the general public.
Meanwhile, Google, Boston Dynamics’ former owner, rebooted their robotics division with a more down-to-earth focus on everyday uses they hope to commercialize.
SoftBank’s Pepper robot is working as a concierge and receptionist in various countries. It is also being used as a home companion. Not satisfied, Pepper rounded off 2019 by heading to the gym—to coach runners.
Indeed, there’s a growing list of sports where robots perform as well as—or better than—humans.
2019 also saw robots launch an assault on the kitchen, including the likes of Samsung’s robot chef, and invade the front yard, with iRobot’s Terra robotic lawnmower.
In the borderlands of robotics, full-body robotic exoskeletons got a bit more practical, as the (by all accounts) user-friendly, battery-powered Sarcos Robotics Guardian XO went commercial.

Autonomous Vehicles
Self-driving cars did not—if you will forgive the play on words—stay quite on track during 2019. The fallout from Uber’s 2018 fatal crash marred part of the year, while some big players ratcheted back expectations on a quick shift to the driverless future. Still, self-driving cars, trucks, and other autonomous systems did make progress this year.

Winner of my unofficial award for best name in self-driving goes to Optimus Ride. The company also illustrates that self-driving may not be about creating a one-size-fits-all solution but catering to specific markets.
Self-driving trucks had a good year, with tests across many countries and states. One of the year’s odder stories was a self-driving truck traversing the US with a delivery of butter.
A step above the competition may be the future slogan (or perhaps not) of Boeing’s self-piloted air taxi that saw its maiden test flight in 2019. It joins a growing list of companies looking to create autonomous, flying passenger vehicles.
2019 was also the year where companies seemed to go all in on last-mile autonomous vehicles. Who wins that particular competition could well emerge during 2020.

Blockchain and Digital Currencies
Bitcoin continues to be the cryptocurrency equivalent of a rollercoaster, but the underlying blockchain technology is progressing more steadily. Together, they may turn parts of our financial systems cashless and digital—though how and when remains a slightly open question.

One indication of this was Facebook’s hugely controversial announcement of Libra, its proposed cryptocurrency. The company faced immediate pushback and saw a host of partners jump ship. Still, it brought the tech into mainstream conversations as never before and is putting the pressure on governments and central banks to explore their own digital currencies.
Deloitte’s in-depth survey of the state of blockchain highlighted how the technology has moved from fintech into just about any industry you can think of.
One of the biggest issues facing the spread of many digital currencies—Bitcoin in particular, you could argue—is how much energy it consumes to mine them. 2019 saw the emergence of several new digital currencies with a much smaller energy footprint.
2019 was also a year where we saw a new kind of digital currency, stablecoins, rise to prominence. As the name indicates, stablecoins are a group of digital currencies whose price fluctuations are more stable than the likes of Bitcoin.
In a geopolitical sense, 2019 was a year of China playing catch-up. Having initially banned cryptocurrency trading, the country turned 180 degrees and announced that it was “quite close” to releasing a digital currency and a wave of blockchain programs.

Renewable Energy and Energy Storage
While not every government on the planet seems to be a fan of renewable energy, it keeps on outperforming fossil fuel after fossil fuel in places well suited to it—even without support from some of said governments.

One of the reasons for renewable energy’s continued growth is that energy efficiency levels keep on improving.
As a result, an increased number of coal plants are being forced to close due to an inability to compete, and the UK went coal-free for a record two weeks.
We are also seeing more and more financial institutions refusing to fund fossil fuel projects. One such example is the European Investment Bank.
Renewable energy’s advance is tied at the hip to the rise of energy storage, which also had a breakout 2019, in part thanks to investments from the likes of Bill Gates.
The size and capabilities of energy storage also grew in 2019. The best illustration came from Australia, where Tesla’s mega-battery proved that energy storage has reached a stage where it can prop up entire energy grids.

Image Credit: Mathew Schwartz / Unsplash
