Powerful surveillance cameras have crept into public spaces. We are filmed and photographed hundreds of times a day. To further raise the stakes, the resulting video footage is fed to new forms of artificial intelligence software that can recognize faces in real time, read license plates, even instantly detect when a particular pre-defined action or activity takes place in front of a camera.
As most modern cities have quietly become surveillance cities, the law has been slow to catch up. While we wait for robust legal frameworks to emerge, the best way to protect our civil liberties right now is to fight technology with technology. All cities should place local surveillance video into a public cloud-based data trust. Here’s how it would work.
In Public Data We Trust
To democratize surveillance, every city should implement three simple rules. First, anyone who aims a camera at public space must upload that day’s haul of raw video files (and associated camera metadata) into a cloud-based repository. Second, this cloud-based repository must have open APIs and a publicly-accessible log file that records search histories and tracks who has accessed which video files. And third, everyone in the city should be given the same level of access rights to the stored video data—no exceptions.
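As a rough sketch of rule two, here is what a publicly-readable access log might look like in code. Every name and field below is hypothetical, a toy stand-in for what would in practice be a tamper-evident cloud service:

```python
import time

# Hypothetical sketch: the trust's access log is itself public,
# so every search of the video archive leaves a visible trace.
PUBLIC_LOG = []  # in a real trust: an append-only, tamper-evident cloud store

def record_access(requester_id, query, video_ids):
    """Record a search event in the public log (rule two)."""
    entry = {
        "timestamp": time.time(),
        "requester": requester_id,  # everyone has equal access rights (rule three)
        "query": query,
        "videos_accessed": video_ids,
    }
    PUBLIC_LOG.append(entry)
    return entry

def searches_mentioning(term):
    """Anyone can ask: who has been searching for this term?"""
    return [e for e in PUBLIC_LOG if term in e["query"]]

record_access("citizen-42", "license plate ABC-123", ["cam07/2018-12-01.mp4"])
hits = searches_mentioning("ABC-123")
```

Because the log is append-only and readable by all, the act of searching is itself as transparent as the footage being searched.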
This kind of public data repository is called a “data trust.” Public data trusts are not just wishful thinking. Different types of trusts are already in successful use in Estonia and Barcelona, and have been proposed as the best way to store and manage the urban data that will be generated by Alphabet’s planned Sidewalk Labs project in Toronto.
It’s true that few people relish the thought of public video footage of themselves being looked at by strangers and friends, by ex-spouses, potential employers, divorce attorneys, and future romantic prospects. In fact, when I propose this notion in talks I give about smart cities, most people recoil in horror. Some turn red in the face and jeer at my naiveté. Others merely blink quietly in consternation.
The reason we should take this giant step towards extreme transparency is to combat the secrecy that surrounds surveillance. Openness is a powerful antidote to oppression. Edward Snowden summed it up well when he said, “Surveillance is not about public safety, it’s about power. It’s about control.”
Let Us Watch Those Watching Us
If public surveillance video were put back into the hands of the people, citizens could watch their government as it watches them. Right now, government cameras are controlled by the state. Camera locations are kept secret, and only the agencies that control the cameras get to see the footage they generate.
Because of these information asymmetries, civilians have no insight into the size and shape of the modern urban surveillance infrastructure that surrounds us, nor the uses (or abuses) of the video footage it spawns. For example, there is no swift and efficient mechanism to request a copy of video footage from the cameras that dot our downtown. Nor can we ask our city’s police force to show us a map that documents local traffic camera locations.
By exposing all public surveillance videos to the public gaze, cities could give regular people tools to assess the size, shape, and density of their local surveillance infrastructure and neighborhood “digital dragnet.” Using the meta-data that’s wrapped around video footage, citizens could geo-locate individual cameras onto a digital map to generate surveillance “heat maps.” This way people could assess whether their city’s camera density was higher in certain zip codes, or in neighborhoods populated by a dominant ethnic group.
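The heat-map idea comes down to a simple aggregation: given camera metadata records pulled from the trust (the record format below is hypothetical), counting cameras per zip code produces the density figures a heat map would visualize.

```python
from collections import Counter

# Hypothetical sketch: camera metadata scraped from the data trust,
# reduced to a per-zip-code count, the raw input for a surveillance heat map.
camera_metadata = [
    {"camera_id": "cam01", "lat": 40.71, "lon": -74.00, "zip": "10002"},
    {"camera_id": "cam02", "lat": 40.72, "lon": -74.01, "zip": "10002"},
    {"camera_id": "cam03", "lat": 40.78, "lon": -73.96, "zip": "10028"},
]

def density_by_zip(records):
    """Count cameras per zip code to expose uneven 'digital dragnets'."""
    return Counter(r["zip"] for r in records)

densities = density_by_zip(camera_metadata)
```

The same counts, grouped by census tract or median income instead of zip code, would let residents test whether camera density tracks ethnicity or wealth.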
Surveillance heat maps could be used to document which government agencies were refusing to upload their video files, or which neighborhoods were not under surveillance. Given what we already know today about the correlation between camera density, income, and social status, these “dark” camera-free regions would likely be those located near government agencies and in more affluent parts of a city.
Extreme transparency would democratize surveillance. Every city’s data trust would keep a publicly-accessible log of who’s searching for what, and whom. People could use their local data trust’s search history to check whether anyone was searching for their name, face, or license plate. As a result, clandestine spying on—and stalking of—particular individuals would become difficult to hide and simpler to prove.
Protect the Vulnerable and Exonerate the Falsely Accused
Not all surveillance video automatically works against the underdog. As the bungled (and consequently no longer secret) assassination of journalist Jamal Khashoggi demonstrated, one of the unexpected upsides of surveillance cameras has been the fact that even kings become accountable for their crimes. If opened up to the public, surveillance cameras could serve as witnesses to justice.
Video evidence has the power to protect vulnerable individuals and social groups by shedding light onto messy, unreliable (and frequently conflicting) human narratives of who did what to whom, and why. With access to a data trust, a person falsely accused of a crime could prove their innocence. By searching for their own face in video footage or downloading time/date stamped footage from a particular camera, a potential suspect could document their physical absence from the scene of a crime—no lengthy police investigation or high-priced attorney needed.
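The absence-proof query described above amounts to filtering a footage index by camera and time window. A minimal sketch, with a hypothetical index format:

```python
from datetime import datetime

# Hypothetical footage index: each clip records which camera shot it
# and the time window it covers.
footage_index = [
    {"camera": "cam07", "start": datetime(2018, 12, 1, 20, 0),
     "end": datetime(2018, 12, 1, 21, 0), "file": "cam07/2018-12-01T20.mp4"},
    {"camera": "cam07", "start": datetime(2018, 12, 1, 21, 0),
     "end": datetime(2018, 12, 1, 22, 0), "file": "cam07/2018-12-01T21.mp4"},
]

def footage_covering(index, camera, moment):
    """All clips from `camera` whose recording window contains `moment`."""
    return [c["file"] for c in index
            if c["camera"] == camera and c["start"] <= moment < c["end"]]

# Which clip shows what cam07 saw at 8:30 pm on December 1, 2018?
clips = footage_covering(footage_index, "cam07", datetime(2018, 12, 1, 20, 30))
```

A suspect could run exactly this kind of query against a camera near their actual whereabouts to document where they were at the moment of a crime.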
Given Enough Eyeballs, All Crimes Are Shallow
Placing public surveillance video into a public trust could make cities safer and would streamline routine police work. Linus Torvalds, creator of the open-source operating system Linux, famously observed that “given enough eyeballs, all bugs are shallow.” In the case of public cameras and a common data repository, Torvalds’ Law could be restated as “given enough eyeballs, all crimes are shallow.”
If thousands of citizen eyeballs were given access to a city’s public surveillance videos, local police forces could crowdsource the work of solving crimes and searching for missing persons. Unfortunately, at the present time, cities are unable to wring any social benefit from video footage of public spaces. The most formidable barrier is not government-imposed secrecy, but the fact that as cameras and computers have grown cheaper, a large and fast-growing “mom and pop” surveillance state has taken over most of the filming of public spaces.
While we fear spooky government surveillance, the reality is that we’re much more likely to be filmed by security cameras owned by shopkeepers, landlords, medical offices, hotels, homeowners, and schools. These businesses, organizations, and individuals install cameras in public areas for practical reasons—to reduce their insurance costs, to prevent lawsuits, or to combat shoplifting. In the absence of regulations governing their use, private camera owners store video footage in a wide variety of locations, for varying retention periods.
The unfortunate (and unintended) result of this informal and decentralized network of public surveillance is that video files are not easy to access, even for police officers on official business. After a crime or terrorist attack occurs, local police (or attorneys armed with a subpoena) go from door to door to manually collect video evidence. Once they have the videos in hand, their next challenge is finding the right codec to decode the dozens of different file formats they encounter so they can watch and analyze the footage.
The result of these practical barriers is that as it stands today, only people with considerable legal or political clout are able to successfully gain access into a city’s privately-owned, ad-hoc collections of public surveillance videos. Not only are cities missing the opportunity to streamline routine evidence-gathering police work, they’re missing a radically transformative benefit that would become possible once video footage from thousands of different security cameras were pooled into a single repository: the ability to apply the power of citizen eyeballs to the work of improving public safety.
Why We Need Extreme Transparency
When regular people can’t access their own surveillance videos, there can be no data justice. While we wait for the law to catch up with the reality of modern urban life, citizens and city governments should use technology to address the problem that lies at the heart of surveillance: a power imbalance between those who control the cameras and those who don’t.
Cities should permit individuals and organizations to install and deploy as many public-facing cameras as they wish, but with the mandate that camera owners must place all resulting video footage into the mercilessly bright sunshine of an open data trust. This way, cloud computing, open APIs, and artificial intelligence software can help combat abuses of surveillance and give citizens insight into who’s filming us, where, and why.
Image Credit: VladFotoMag / Shutterstock.com
Development across the entire science and technology landscape certainly didn’t slow down this year. From CRISPR babies, to the rapid decline of the crypto markets, to a new robot on Mars, to the discovery of subatomic particles that could change modern physics as we know it, there was no shortage of headline-grabbing breakthroughs and discoveries.
As 2018 comes to a close, we can pause and reflect on some of the biggest technology breakthroughs and scientific discoveries that occurred this year.
I reached out to a few Singularity University speakers and faculty across the various technology domains we cover, asking what they thought the biggest breakthrough was in their area of expertise. The question posed was:
“What, in your opinion, was the biggest development in your area of focus this year? Or, what was the breakthrough you were most surprised by in 2018?”
I can share that for me, hands down, the most surprising development I came across in 2018 was learning that a publicly-traded company that was briefly valued at over $1 billion, and has over 12,000 employees and contractors spread around the world, has no physical office space and the entire business is run and operated from inside an online virtual world. This is Ready Player One stuff happening now.
For the rest, here’s what our experts had to say.
Dr. Tiffany Vora | Faculty Director and Vice Chair, Digital Biology and Medicine, Singularity University
“That’s easy: CRISPR babies. I knew it was technically possible, and I’ve spent two years predicting it would happen first in China. I knew it was just a matter of time but I failed to predict the lack of oversight, the dubious consent process, the paucity of publicly-available data, and the targeting of a disease that we already know how to prevent and treat and that the children were at low risk of anyway.
I’m not convinced that this counts as a technical breakthrough, since one of the girls probably isn’t immune to HIV, but it sure was a surprise.”
For more, read Dr. Vora’s summary of this recent stunning news from China regarding CRISPR-editing human embryos.
Andrew Fursman | Co-Founder/CEO 1QBit, Faculty, Quantum Computing, Singularity University
“There were two last-minute holiday season surprise quantum computing funding and technology breakthroughs:
First, right before the government shutdown, a priority piece of legislation was passed that will provide $1.2 billion for quantum computing research over the next five years. Second, there’s the rise of ions as a truly viable, scalable quantum computing architecture.”
*Read this Gizmodo profile on an exciting startup in the space to learn more about this type of quantum computing.
Ramez Naam | Chair, Energy and Environmental Systems, Singularity University
“2018 had plenty of energy surprises. In solar, we saw unsubsidized prices in the sunny parts of the world at just over two cents per kWh, or less than half the price of new coal or gas electricity. In the US southwest and Texas, new solar is also now cheaper than new coal or gas. But even more shockingly, in Germany, which is one of the least sunny countries on earth (it gets less sunlight than Canada) the average bid for new solar in a 2018 auction was less than 5 US cents per kWh. That’s as cheap as new natural gas in the US, and far cheaper than coal, gas, or any other new electricity source in most of Europe.
In fact, it’s now cheaper in some parts of the world to build new solar or wind than to run existing coal plants. Think tank Carbon Tracker calculates that, over the next 10 years, it will become cheaper to build new wind or solar than to operate coal power in most of the world, including specifically the US, most of Europe, and—most importantly—India and the world’s dominant burner of coal, China.
Here comes the sun.”
GLOBAL GRAND CHALLENGES
Darlene Damm | Vice Chair, Faculty, Global Grand Challenges, Singularity University
“In 2018 we saw a lot of areas in the Global Grand Challenges move forward—advancements in robotic farming technology and cultured meat, low-cost 3D printed housing, more sophisticated types of online education expanding to every corner of the world, and governments creating new policies to deal with the ethics of the digital world. These were the areas we were watching and had predicted there would be change.
What most surprised me was to see young people, especially teenagers, start to harness technology in powerful ways and use it as a platform to make their voices heard and drive meaningful change in the world. In 2018 we saw teenagers speak out on a number of issues related to their well-being and launch digital movements around gun violence and school safety, global warming, and other environmental issues. We often talk about the harm technology can cause to young people, but on the flip side, it can be a very powerful tool for youth to start changing the world today and something I hope we see more of in the future.”
Pascal Finette | Chair, Entrepreneurship and Open Innovation, Singularity University
“Without a doubt the rapid and massive adoption of AI, specifically deep learning, across industries, sectors, and organizations. What was a curiosity for most companies at the beginning of the year has quickly made its way into the boardroom and leadership meetings, and all the way down into the innovation and IT department’s agenda. You are hard-pressed to find a mid- to large-sized company today that is not experimenting or implementing AI in various aspects of its business.
On the slightly snarkier side of answering this question: The very rapid decline in interest in blockchain (and cryptocurrencies). The blockchain party was short, ferocious, and ended earlier than most would have anticipated, with a huge hangover for some. The good news—with the hot air dissipated, we can now focus on exploring the unique use cases where blockchain does indeed offer real advantages over centralized approaches.”
*Author note: snark is welcome and appreciated
Hod Lipson | Director, Creative Machines Lab, Columbia University
“The biggest surprise for me this year in robotics was learning dexterity. For decades, roboticists have been trying to understand and imitate dexterous manipulation. We humans seem to be able to manipulate objects with our fingers with incredible ease—imagine sifting through a bunch of keys in the dark, or tossing and catching a cube. And while there has been much progress in machine perception, dexterous manipulation remained elusive.
There seemed to be something almost magical in how we humans can physically manipulate the physical world around us. Decades of research in grasping and manipulation, and millions of dollars spent on robot-hand hardware development, have brought us little progress. But in late 2018, the Berkeley OpenAI group demonstrated that this hurdle may finally succumb to machine learning as well. Given 200 years’ worth of practice, machines learned to manipulate a physical object with amazing fluidity. This might be the beginning of a new age for dexterous robotics.”
Jeremy Howard | Founding Researcher, fast.ai, Founder/CEO, Enlitic, Faculty Data Science, Singularity University
“The biggest development in machine learning this year has been the development of effective natural language processing (NLP).
The New York Times published an article last month titled “Finally, a Machine That Can Finish Your Sentence,” which argued that NLP neural networks have reached a significant milestone in capability and speed of development. The “finishing your sentence” capability mentioned in the title refers to a type of neural network called a “language model,” which is literally a model that learns how to finish your sentences.
Earlier this year, two systems (one, called ELMo, is from the Allen Institute for AI, and the other, called ULMFiT, was developed by me and Sebastian Ruder) showed that such a model could be fine-tuned to dramatically improve the state-of-the-art in nearly every NLP task that researchers study. This work was further developed by OpenAI, which in turn was greatly scaled up by Google Brain, who created a system called BERT which reached human-level performance on some of NLP’s toughest challenges.
Over the next year, expect to see fine-tuned language models used for everything from understanding medical texts to building disruptive social media troll armies.”
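The “language model” idea Howard describes is easy to illustrate with a toy example. The sketch below is a simple bigram counter, nothing like ELMo, ULMFiT, or BERT in scale or architecture, but it shows the core task: predict the next word from what came before.

```python
from collections import Counter, defaultdict

# Toy illustration of a language model: count which word follows which
# in a corpus, then predict the most likely next word. Neural language
# models do this with deep networks over vastly larger contexts and corpora.
corpus = "the cat sat on the mat . the cat ate the fish .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def finish(word):
    """Predict the most likely word to follow `word`."""
    return follows[word].most_common(1)[0][0]

# In this corpus "the" is followed by cat (2), mat (1), fish (1).
prediction = finish("the")
```

Fine-tuning, the breakthrough of 2018, means taking a model pre-trained on this next-word task over huge corpora and adapting it to a downstream task like classification, rather than training from scratch.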
Andre Wegner | Founder/CEO Authentise, Chair, Digital Manufacturing, Singularity University
“Most surprising to me was the extent and speed at which the industry finally opened up.
While previously only a few 3D printing suppliers had APIs and knew what to do with them, 2018 saw nearly every OEM (original equipment manufacturer) enabling data access and, even more surprisingly, shying away from proprietary standards and adopting MTConnect, as stalwarts such as 3D Systems and Stratasys have done. This means that in two to three years, data access to machines will be easy, commonplace, and free. The value will be in what is being done with that data.
Another example of this openness are the seemingly endless announcements of integrated workflows: GE’s announcement with most major software players to enable integrated solutions, EOS’s announcement with Siemens, and many more. It’s clear that all actors in the additive ecosystem have taken a step forward in terms of openness. The result is a faster pace of innovation, particularly in the software and data domains that are crucial to enabling comprehensive digital workflow to drive agile and resilient manufacturing.
I’m more optimistic we’ll achieve that now than I was at the end of 2017.”
SCIENCE AND DISCOVERY
Paul Saffo | Chair, Future Studies, Singularity University, Distinguished Visiting Scholar, Stanford Media-X Research Network
“The most important development in technology this year isn’t a technology, but rather the astonishing science surprises made possible by recent technology innovations. My short list includes the discovery of the “neptmoon”, a Neptune-scale moon circling a Jupiter-scale planet 8,000 light-years from us; the successful deployment of the Mars InSight lander a month ago; and the tantalizing ANITA detection (what could be a new subatomic particle which would in turn blow the standard model wide open). The highest use of invention is to support science discovery, because those discoveries in turn lead us to the future innovations that will improve the state of the world—and fire up our imaginations.”
Pablos Holman | Inventor, Hacker, Faculty, Singularity University
“Just five or ten years ago, if you’d asked any of us technologists, “What is harder for robots: eyes, or fingers?” we’d all have said eyes. Robots have extraordinary eyes now, but even in a surgical robot, the fingers are numb and don’t feel anything. Stanford robotics researchers have invented fingertips that can feel, and this will be a linchpin that allows robots to go everywhere they haven’t been yet.”
Nathana Sharma | Blockchain, Policy, Law, and Ethics, Faculty, Singularity University
“2017 was the year of peak blockchain hype. 2018 has been a year of resetting expectations and technological development, even as the broader cryptocurrency markets have faced a winter. It’s now about seeing adoption and applications that people want and need to use rise. An incredible piece of news from December 2018 is that Facebook is developing a cryptocurrency for users to make payments through WhatsApp. That’s surprisingly fast mainstream adoption of this new technology, and indicates how powerful it is.”
Neil Jacobstein | Chair, Artificial Intelligence and Robotics, Singularity University
“I think one of the most visible improvements in AI was illustrated by the Boston Dynamics Parkour video. This was not due to an improvement in brushless motors, accelerometers, or gears. It was due to improvements in AI algorithms and training data. To be fair, the video released was cherry-picked from numerous attempts, many of which ended with a crash. However, the fact that it could be accomplished at all in 2018 was a real win for both AI and robotics.”
Divya Chander | Chair, Neuroscience, Singularity University
“2018 ushered in a new era of exponential trends in non-invasive brain modulation. Changing behavior or restoring function takes on a new meaning when invasive interfaces are no longer needed to manipulate neural circuitry. The end of 2018 saw two amazing announcements: the ability to grow neural organoids (mini-brains) in a dish from neural stem cells that started expressing electrical activity, mimicking the brain function of premature babies, and the first (known) application of CRISPR to genetically alter two fetuses grown through IVF. Although this was ostensibly to provide genetic resilience against HIV infections, imagine what would happen if we started tinkering with neural circuitry and intelligence.”
Image Credit: Yurchanka Siarhei / Shutterstock.com
The human mind can be a confusing and overwhelming place. Despite incredible leaps in human progress, many of us still struggle to make our peace with our thoughts. The roots of this are complex and multifaceted. To find explanations for the global mental health epidemic, one can tap into neuroscience, psychology, evolutionary biology, or simply observe the meaningless systems that dominate our modern-day world.
This is not only the context of our reality but also that of the critically-acclaimed Netflix series, Maniac. Part psychological dark comedy, part science fiction, Maniac is a retro, futuristic, and hallucinatory trip filled with hidden symbols. Directed by Cary Joji Fukunaga, the series tells the story of two strangers who decide to participate in the final stage of a “groundbreaking” pharmaceutical trial—one that combines novel pharmaceuticals with artificial intelligence, and promises to make their emotional pain go away.
Naturally, things don’t go according to plan.
From exams used to test defense mechanisms to techniques such as cognitive behavioral therapy, the narrative is infused with genuine psychological science. As perplexing as the series may be to some viewers, many of the tools depicted actually have a strong grounding in current technological advancements.
Catalysts for Alleviating Suffering
In the therapy of Maniac, participants undergo a three-day trial wherein they ingest three pills and appear to connect their consciousness to a superintelligent AI. Each participant is hurled into the traumatic experiences imprinted in their subconscious and forced to cope with them in a series of hallucinatory and dream-like experiences.
Perhaps the most recognizable parallel that can be drawn is with the latest advancements in psychedelic therapy. Psychedelics are a class of drugs that alter the experience of consciousness, and often cause radical changes in perception and cognitive processes.
Through a process known as transient hypofrontality, the executive “over-thinking” parts of our brains get a rest, and deeper areas become more active. This experience, combined with the breakdown of the ego, is often correlated with feelings of timelessness, peacefulness, presence, unity, and above all, transcendence.
Despite being non-addictive and extremely difficult to overdose on, psychedelics were looked down on by regulators for decades, and many continue to dismiss them as “party drugs.” But in the last few years, all of this began to change.
Earlier this summer, the FDA granted breakthrough therapy designation to MDMA for the treatment of PTSD, after several phases of successful trials. Similar research has discovered that psilocybin (the psychoactive compound in magic mushrooms) combined with therapy is far more effective than traditional treatments for depression and anxiety. Today, there is a growing and overwhelming body of research showing that psychedelics such as LSD, MDMA, and psilocybin are not only effective catalysts to alleviate suffering and enhance the human condition, but are potentially the most effective tools out there.
It’s important to realize that these substances are not solutions on their own, but rather catalysts for more effective therapy. They can be groundbreaking, but only in the right context and setting.
In Maniac, the medication-assisted therapy is guided by what appears to be a super-intelligent form of artificial intelligence called the GRTA, nicknamed Gertie. Gertie, who is a “guide” in machine form, accesses the minds of the participants through what appears to be a futuristic brain-scanning technology and curates customized hallucinatory experiences with the goal of accelerating the healing process.
Such a powerful form of brain-scanning technology is not unheard of. Current levels of scanning technology are already allowing us to decipher dreams and connect three human brains, and are only growing exponentially. Though they are nowhere as advanced as Gertie (we have a long way to go before we get to this kind of general AI), we are also seeing early signs of AI therapy bots, chatbots that listen, think, and communicate with users like a therapist would.
The parallels between current advancements in mental health therapy and the methods in Maniac can be startling, and are a testament to how science fiction and the arts can be used to explore the existential implications of technology.
Not Necessarily a Dystopia
While there are many ingenious similarities between the technology in Maniac and the state of mental health therapy, it’s important to recognize the stark differences. Like many other blockbuster science fiction productions, Maniac tells a fundamentally dystopian tale.
The series tells the story of the 73rd iteration of a controversial drug trial, one that has experienced many failures and even led to various participants being braindead. The scientists appear to be evil, secretive, and driven by their own superficial agendas and deep unresolved emotional issues.
In contrast, clinicians and researchers are not only required to file an “investigational new drug application” with the FDA (and get approval) but also update the agency with safety and progress reports throughout the trial.
Furthermore, many of today’s researchers are driven by a strong desire to contribute to the well-being and progress of our species. Even more, the results of decades of research by organizations like MAPS have been exceptionally promising and aligned with positive values. While Maniac is entertaining and thought-provoking, viewers must not forget the positive potential of such advancements in mental health therapy.
Science, technology, and psychology aside, Maniac is a deep commentary on the human condition and the often disorienting states that pain us all. Within any human lifetime, suffering is inevitable. It is the disproportionate, debilitating, and unjust levels of suffering that we ought to tackle as a society. Ultimately, Maniac explores whether advancements in science and technology can help us live not a life devoid of suffering, but one where it is balanced with fulfillment.
Image Credit: xpixel / Shutterstock.com
Converging exponential technologies will transform media, advertising and the retail world. The world we see, through our digitally-enhanced eyes, will multiply and explode with intelligence, personalization, and brilliance.
This is the age of Web 3.0.
Last week, I discussed the what and how of Web 3.0 (also known as the Spatial Web), walking through its architecture and the converging technologies that enable it.
To recap, while Web 1.0 consisted of static documents and read-only data, Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens—a flat web of sensorily confined information.
During the next two to five years, the convergence of 5G, AI, a trillion sensors, and VR/AR will enable us to both map our physical world into virtual space and superimpose a digital layer onto our physical environments.
Web 3.0 is about to transform everything—from the way we learn and educate, to the way we trade (smart) assets, to our interactions with real and virtual versions of each other.
And while users grow rightly concerned about data privacy and misuse, the Spatial Web’s use of blockchain in its data and governance layer will secure and validate our online identities, protecting everything from your virtual assets to personal files.
In this second installment of the Web 3.0 series, I’ll be discussing the Spatial Web’s vast implications for a handful of industries:
News & Media Coverage
Let’s dive in.
Transforming Network News with Web 3.0
News media is big business. In 2016, global news media (including print) generated 168 billion USD in circulation and advertising revenue.
The news we listen to impacts our mindset. Listen to dystopian news on violence, disaster, and evil, and you’ll more likely be searching for a cave to hide in, rather than technology for the launch of your next business.
Today, different news media present starkly different realities of everything from foreign conflict to domestic policy. And outcomes are consequential. What reporters and news corporations decide to show or omit of a given news story plays a tremendous role in shaping the beliefs and resulting values of entire populations and constituencies.
But what if we could have an objective benchmark for today’s news, whereby crowdsourced and sensor-collected evidence allows you to tour the site of journalistic coverage, determining for yourself the most salient aspects of a story?
Enter mesh networks, AI, public ledgers, and virtual reality.
While traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.
In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
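The way a broadcast spreads through such a mesh can be sketched as a simple flooding algorithm over a graph of neighboring devices. The four-node topology below is purely illustrative:

```python
# Hypothetical sketch of mesh flooding: each node relays a broadcast to its
# neighbors, so a message reaches the whole network without any central hub.
def flood(adjacency, source):
    """Return the set of nodes a broadcast from `source` reaches."""
    reached, frontier = {source}, [source]
    while frontier:
        node = frontier.pop()
        for neighbor in adjacency.get(node, []):
            if neighbor not in reached:
                reached.add(neighbor)
                frontier.append(neighbor)
    return reached

# Four phones in a line, A-B, B-C, C-D: no access point needed.
mesh = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}
reached = flood(mesh, "A")
```

Real mesh protocols add routing, deduplication, and radio-level details, but the key property is the same: connectivity emerges from the devices themselves, hop by hop.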
Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.
Imagine a scenario in which protests break out across the country, each cluster of activists broadcasting an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram of the march in real time. Want to see and hear what the NYC-based crowds are advocating for? Throw on some VR goggles and explore the event with full access. Or cue into the southern Texan border to assess for yourself the handling of immigrant entry and border conflicts.
Take a front seat in the Capitol during tomorrow’s Senate hearing, assessing each Senator’s reactions, questions and arguments without a Fox News or CNN filter. Or if you’re short on time, switch on the holographic press conference and host 3D avatars of live-broadcasting politicians in your living room.
We often think of modern media as taking away consumer agency, feeding tailored and often partisan ideology to a complacent audience. But as wireless mesh networks and agnostic sensor data allow for immersive VR-accessible news sites, the average viewer will necessarily become an active participant in her own education about current events.
And with each of us interpreting the news according to our own values, I envision a much less polarized world. A world in which civic engagement, moderately reasoned dialogue, and shared assumptions will allow us to empathize and make compromises.
The future promises an era in which news is verified and balanced; wherein public ledgers, AI, and new web interfaces bring you into the action and respect your intelligence—not manipulate your ignorance.
Web 3.0: Reinventing Advertising
Bringing about the rise of ‘user-owned data’ and self-established permissions, Web 3.0 is poised to completely disrupt digital advertising—a global industry worth over $192 billion.
Currently, targeted advertising leverages troves of personal data and online consumer behavior to subtly nudge you toward products you might not want, or to sell you falsely advertised services that promise results they can't deliver.
With a new Web 3.0 data and governance layer, however, distributed ledger technologies will require advertisers to engage in more direct interaction with consumers, validating claims and upping transparency.
And with a data layer that allows users to own and authorize third-party use of their data, blockchain also holds extraordinary promise to slash not only data breaches and identity theft, but also covert, unauthorized advertiser bombardment.
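A user-authorized data layer of this kind might behave as sketched below. This is a deliberately simplified stand-in, not a real blockchain: it is an append-only Python log where users grant and revoke advertiser access, and every access attempt, authorized or not, is recorded for public audit. All names are hypothetical.

```python
import time

class DataPermissionLedger:
    """Append-only log of grants, revocations, and accesses — a toy
    stand-in for an on-chain user-owned-data permission layer."""
    def __init__(self):
        self.entries = []   # immutable, publicly auditable log
        self.grants = {}    # (user, advertiser) -> allowed?

    def _log(self, action, user, party):
        self.entries.append({"t": time.time(), "action": action,
                             "user": user, "party": party})

    def grant(self, user, advertiser):
        self.grants[(user, advertiser)] = True
        self._log("grant", user, advertiser)

    def revoke(self, user, advertiser):
        self.grants[(user, advertiser)] = False
        self._log("revoke", user, advertiser)

    def access(self, advertiser, user):
        allowed = self.grants.get((user, advertiser), False)
        self._log("access-ok" if allowed else "access-denied",
                  user, advertiser)
        return allowed

ledger = DataPermissionLedger()
ledger.grant("alice", "ad-co")
assert ledger.access("ad-co", "alice")       # authorized use succeeds
ledger.revoke("alice", "ad-co")
assert not ledger.access("ad-co", "alice")   # covert use is refused — and logged
```

The point of the design is the last line: even a denied access attempt leaves an audit entry, which is what turns "trust us" advertising into verifiable behavior.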
With access to crowdsourced reviews and AI-driven fact-checking, users will be able to validate advertising claims more efficiently and accurately than ever before, potentially rating and filtering out advertisers in the process. And in such a streamlined system of verified claims, sellers will face increased pressure to compete more on product and rely less on marketing.
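The rating-and-filtering step could be as simple as averaging crowdsourced accuracy scores and dropping advertisers below a user-chosen threshold. A minimal sketch, with made-up advertiser names and an assumed 1-to-5 rating scale:

```python
from statistics import mean

def filter_advertisers(reviews, threshold=3.5):
    """Keep only advertisers whose crowdsourced claim-accuracy rating
    meets the user's threshold (names and scale are illustrative)."""
    scores = {ad: mean(ratings) for ad, ratings in reviews.items()}
    return {ad: score for ad, score in scores.items() if score >= threshold}

reviews = {
    "honest-goods":  [5, 4, 5, 4],   # claims usually check out
    "miracle-cures": [1, 2, 1, 1],   # repeatedly flagged as false
}
print(filter_advertisers(reviews))   # only 'honest-goods' survives
```

In practice the scores would come from a fact-checking pipeline rather than hand-entered lists, but the filtering logic—and the competitive pressure it creates—is the same.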
But perhaps most exciting is the convergence of artificial intelligence and augmented reality.
As Spatial Web networks begin to associate digital information with physical objects and locations, products will begin to “sell themselves.” Each with built-in smart properties, products will become hyper-personalized, communicating information directly to users through Web 3.0 interfaces.
Imagine stepping into a department store in pursuit of a new web-connected fridge. As soon as you enter, your AR goggles register your location and immediately grant you access to a populated register of store products.
As you move closer to a kitchen set that catches your eye, a virtual salesperson—whether by holographic video or avatar—pops into your field of view next to the fridge you’ve been examining and begins introducing you to its various functions and features. You quickly decide you’d rather disable the avatar and get textual input instead, and preferences are reset to list appliance properties visually.
After a virtual tour of several other fridges, you decide on the one you want and seamlessly execute a smart contract, carried out by your smart wallet and the fridge. The transaction takes place in seconds, and the fridge’s blockchain-recorded ownership record has been updated.
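What makes that purchase "smart" is atomicity: payment and title transfer succeed or fail together. A toy escrow in Python illustrates the idea—the wallet balances, item ID, and ownership record here are all hypothetical stand-ins for on-chain state, not any real smart-contract platform.

```python
class SmartPurchaseContract:
    """Toy escrow: payment and ownership transfer happen as one step."""
    def __init__(self, item_id, price, seller):
        self.item_id, self.price, self.seller = item_id, price, seller
        self.ownership_record = {item_id: seller}  # stand-in for on-chain state

    def execute(self, wallet, buyer):
        if wallet.get(buyer, 0) < self.price:
            raise ValueError("insufficient funds; no state changes")
        # Atomic settlement: debit, credit, and title transfer together.
        wallet[buyer] -= self.price
        wallet[self.seller] = wallet.get(self.seller, 0) + self.price
        self.ownership_record[self.item_id] = buyer
        return self.ownership_record

wallet = {"buyer": 1500, "store": 0}
contract = SmartPurchaseContract("fridge-42", 1200, "store")
record = contract.execute(wallet, "buyer")
print(record)   # {'fridge-42': 'buyer'}
```

If the funds check fails, nothing changes at all—no partial payment, no dangling title—which is the property that lets strangers transact in seconds without an intermediary.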
Better yet, you head over to a friend’s home for dinner after moving into the neighborhood. While catching up in the kitchen, your eyes fixate on the cabinets, which quickly populate your AR glasses with a price-point and selection of colors.
But what if you’d rather not get auto-populated product info in the first place? No problem!
Now empowered with self-sovereign identities, users might be able to turn off advertising preferences entirely, turning on smart recommendations only when they want to buy a given product or need new supplies.
And with user-centric data, consumers might even sell such information to advertisers directly. Instead of Facebook or Google profiting off your data, you might earn a passive income by giving advertisers permission to personalize and market their services. Buy more, and your personal data marketplace grows in value; buy less, and your lower-valued advertising profile draws less advertiser spending.
With user-controlled data, advertisers now work on your terms, putting increased pressure on product iteration and personalizing products for each user.
This brings us to the transformative future of retail.
Personalized Retail: The Power of the Spatial Web
In a future of smart and hyper-personalized products, I might walk through a virtual game space or a digitally reconstructed Target, browsing specific categories of clothing I’ve predetermined prior to entry.
As I pick out my selection, my AI assistant hones its algorithm reflecting new fashion preferences, and personal shoppers—also visiting the store in VR—help me pair different pieces as I go.
Once my personal shopper has finished constructing various outfits, I then sit back and watch a fashion show of countless avatars of myself, each modeling style and color variations of my selection, all customizable.
After I’ve made my selection, I might choose to purchase physical versions of three outfits and virtual versions of two others for my digital avatar. Payments are made automatically as I leave the store, including a smart wallet transaction made with the personal shopper at a per-outfit rate (for only the pieces I buy).
Already, several big players have broken into the VR market. Just this year, Walmart has announced its foray into the VR space, shipping 17,000 Oculus Go VR headsets to Walmart locations across the US.
And just this past January, Walmart filed two VR shopping-related patents. In a new bid to disrupt a rapidly changing retail market, Walmart now describes a system in which users couple their VR headset with haptic gloves for an immersive in-store experience, whether at 3am in their living room or during a lunch break at the office.
But Walmart is not alone. Big e-commerce players from Amazon to Alibaba are leaping onto the scene with new software buildout to ride the impending headset revolution.
Beyond virtual reality, players like IKEA have even begun using mobile-based augmented reality to map digitally replicated furniture in your physical living room, true to dimension. And this is just the beginning….
As AR headset hardware undergoes breakneck advancements in the next two to five years, we might soon be able to project watches onto our wrists, swapping out colors, styles, brands, and price points.
Or let’s say I need a new coffee table in my office. Pulling up multiple models in AR, I can position each option using advanced hand-tracking technology and customize height and width according to my needs. Once the smart payment is triggered, the manufacturer prints my newly customized piece and flies it to my doorstep by drone. As soon as I need to assemble the pieces, overlaid digital prompts walk me through each step, and any points of confusion are relayed to a company database.
Perhaps one of the ripest industries for Spatial Web disruption, retail presents one of the greatest opportunities for profit across virtual apparel, digital malls, AI fashion startups and beyond.
In our next series iteration, I’ll be looking at the tremendous opportunities created by Web 3.0 for the Future of Work and Entertainment.
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.
Image Credit: nmedia / Shutterstock.com
If there’s one line that stands the test of time in Steven Spielberg’s 1993 classic Jurassic Park, it’s probably Jeff Goldblum’s exclamation, “Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”
Goldblum’s character, Dr. Ian Malcolm, was warning against the hubris of naively tinkering with dinosaur DNA in an effort to bring these extinct creatures back to life. Twenty-five years on, his words are taking on new relevance as a growing number of scientists and companies are grappling with how to tread the line between “could” and “should” in areas ranging from gene editing and real-world “de-extinction” to human augmentation, artificial intelligence and many others.
Despite growing concerns that powerful emerging technologies could lead to unexpected and wide-ranging consequences, innovators are struggling with how to develop beneficial new products while being socially responsible. Part of the answer could lie in watching more science fiction movies like Jurassic Park.
Hollywood Lessons in Societal Risks
I’ve long been interested in how innovators and others can better understand the increasingly complex landscape around the social risks and benefits associated with emerging technologies. Growing concerns over the impacts of tech on jobs, privacy, security and even the ability of people to live their lives without undue interference highlight the need for new thinking around how to innovate responsibly.
New ideas require creativity and imagination, and a willingness to see the world differently. And this is where science fiction movies can help.
Sci-fi flicks are, of course, notoriously unreliable when it comes to accurately depicting science and technology. But because their plots are often driven by the intertwined relationships between people and technology, they can be remarkably insightful in revealing social factors that affect successful and responsible innovation.
This is clearly seen in Jurassic Park. The movie provides a surprisingly good starting point for thinking about the pros and cons of modern-day genetic engineering and the growing interest in bringing extinct species back from the dead. But it also opens up conversations around the nature of complex systems that involve both people and technology, and the potential dangers of “permissionless” innovation that’s driven by power, wealth and a lack of accountability.
Similar insights emerge from a number of other movies, including Spielberg’s 2002 film Minority Report—which presaged a growing capacity for AI-enabled crime prediction and the ethical conundrums it’s raising—as well as the 2014 film Ex Machina.
As with Jurassic Park, Ex Machina centers around a wealthy and unaccountable entrepreneur who is supremely confident in his own abilities. In this case, the technology in question is artificial intelligence.
The movie tells the tale of an egotistical genius who creates a remarkably intelligent machine—but he lacks the awareness to recognize his own limitations and the risks of what he’s doing. It also provides a chilling insight into the potential dangers of creating machines that know us better than we know ourselves, while not being bound by human norms or values.
The result is a sobering reminder of how, without humility and a good dose of humanity, our innovations can come back to bite us.
The technologies in Jurassic Park, Minority Report, and Ex Machina lie beyond what is currently possible. Yet these films are often close enough to emerging trends that they help reveal the dangers of irresponsible, or simply naive, innovation. This is where these and other science fiction movies can help innovators better understand the social challenges they face and how to navigate them.
Real-World Problems Worked Out On-Screen
In a recent op-ed in the New York Times, journalist Kara Swisher asked, “Who will teach Silicon Valley to be ethical?” Prompted by a growing litany of socially questionable decisions amongst tech companies, Swisher suggests that many of them need to grow up and get serious about ethics. But ethics alone are rarely enough. It’s easy for good intentions to get swamped by fiscal pressures and mired in social realities.
Elon Musk has shown that brilliant tech innovators can take ethical missteps along the way. Image Credit: AP Photo/Chris Carlson
Technology companies increasingly need to find some way to break from business as usual if they are to become more responsible. High-profile cases involving companies like Facebook and Uber as well as Tesla’s Elon Musk have highlighted the social as well as the business dangers of operating without fully understanding the consequences of people-oriented actions.
Many more companies are struggling to create socially beneficial technologies and discovering that, without the necessary insights and tools, they risk blundering about in the dark.
For instance, earlier this year, researchers from Google and DeepMind published details of an artificial intelligence-enabled system that can lip-read far better than people. According to the paper’s authors, the technology has enormous potential to improve the lives of people who have trouble speaking aloud. Yet it doesn’t take much to imagine how this same technology could threaten the privacy and security of millions—especially when coupled with long-range surveillance cameras.
Developing technologies like this in socially responsible ways requires more than good intentions or simply establishing an ethics board. People need a sophisticated understanding of the often complex dynamic between technology and society. And while, as Mozilla’s Mitchell Baker suggests, scientists and technologists engaging with the humanities can be helpful, it’s not enough.
An Easy Way into a Serious Discipline
The “new formulation” of complementary skills Baker says innovators desperately need already exists in a thriving interdisciplinary community focused on socially responsible innovation. My home institution, the School for the Future of Innovation in Society at Arizona State University, is just one part of this.
Experts within this global community are actively exploring ways to translate good ideas into responsible practices. And this includes the need for creative insights into the social landscape around technology innovation, and the imagination to develop novel ways to navigate it.
People love to come together as a movie audience. Image Credit: The National Archives UK, CC BY 4.0
Here is where science fiction movies become a powerful tool for guiding innovators, technology leaders and the companies where they work. Their fictional scenarios can reveal potential pitfalls and opportunities that can help steer real-world decisions toward socially beneficial and responsible outcomes, while avoiding unnecessary risks.
And science fiction movies bring people together. By their very nature, these films are social and educational levelers. Look at who’s watching and discussing the latest sci-fi blockbuster, and you’ll often find a diverse cross-section of society. The genre can help build bridges between people who know how science and technology work, and those who know what’s needed to ensure they work for the good of society.
This is the underlying theme in my new book Films from the Future: The Technology and Morality of Sci-Fi Movies. It’s written for anyone who’s curious about emerging trends in technology innovation and how they might potentially affect society. But it’s also written for innovators who want to do the right thing and just don’t know where to start.
Of course, science fiction films alone aren’t enough to ensure socially responsible innovation. But they can help reveal some profound societal challenges facing technology innovators and possible ways to navigate them. And what better way to learn how to innovate responsibly than to invite some friends round, open the popcorn and put on a movie?
It certainly beats being blindsided by risks that, with hindsight, could have been avoided.
Andrew Maynard, Director, Risk Innovation Lab, Arizona State University
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Fred Mantel / Shutterstock.com