How the Spatial Web Will Transform Every ...
What is the future of work? Is our future one of ‘technological socialism’ (where technology takes care of our needs)? Or is our future workplace completely virtualized, where we hang out at home in our PJs while wandering the halls of our virtual corporate headquarters?
This blog looks at the future of work in the age of Web 3.0, examining scenarios in which AI, VR, and the Spatial Web converge to transform every element of our careers, from training to execution to free time.
Three weeks ago, I explored the vast implications of Web 3.0 on news, media, smart advertising, and personalized retail. And to offer a quick recap on what the Spatial Web is and how it works, let’s cover some brief history.
A Quick Recap on Web 3.0
While Web 1.0 consisted of static documents and read-only data (static web pages), Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens.
But over the next two to five years, the convergence of 5G, artificial intelligence, VR/AR, and a trillion-sensor economy will enable us to both map our physical world into virtual space and superimpose a digital data layer onto our physical environments.
Suddenly, all our information will be manipulated, stored, understood, and experienced in spatial ways.
In this third installment of the Web 3.0 series, I’ll be discussing the Spatial Web’s vast implications for:
Professional Training
Delocalized Business and the Virtual Workplace
Smart Permissions and Data Security
Let’s dive in.
Virtual Training, Real-World Results
Virtual and augmented reality have already begun disrupting the professional training market.
Leading the charge, Walmart has already implemented VR across 200 Academy training centers, running over 45 modules and simulating everything from unusual customer requests to a Black Friday shopping rush.
In September 2018, Walmart committed to a 17,000-headset order of the Oculus Go to equip every US Supercenter, neighborhood market, and discount store with VR-based employee training.
In the engineering world, Bell Helicopter is using VR to massively expedite development and testing of its latest aircraft, FCX-001. Partnering with Sector 5 Digital and HTC VIVE, Bell found it could compress a typical six-year aircraft design process into roughly six months, turning physical mock-ups into CAD-designed virtual replicas.
But beyond the design process itself, Bell is now one of a slew of companies pioneering VR pilot tests and simulations with real-world accuracy. Seated in a true-to-life virtual cockpit, pilots have now tested countless iterations of the FCX-001 in virtual flight, drawing directly onto the 3D model and enacting aircraft modifications in real-time.
And in an expansion of our virtual senses, several key players are already working on haptic feedback. In the case of VR flight, French company Go Touch VR is now partnering with software developer FlyInside on fingertip-mounted haptic tech for aviation.
Aiming to dramatically reduce the time and trouble required to VR-test pilots, the partners want to give touch-based confirmation of every switch and dial activated on virtual flights, just as one would experience in a full-sized cockpit mockup. Replicating texture, stiffness, and even the sensation of holding an object, these devices contain a suite of actuators that simulate everything from a light touch to higher-pressure contact, all controlled by gaze and finger movements.
When it comes to other high-risk simulations, virtual and augmented reality have barely scratched the surface.
Firefighters can now combat virtual wildfires with new platforms like FLAIM Trainer or TargetSolutions. And thanks to the expansion of medical AR/VR services like 3D4Medical or Echopixel, surgeons might soon perform operations on annotated organs and magnified incision sites, speeding up reaction times and vastly improving precision.
But perhaps most urgent, Web 3.0 and its VR interface will offer an immediate solution for today’s constant industry turnover and large-scale re-education demands.
VR educational facilities with exact replicas of anything from large industrial equipment to minute circuitry will soon give anyone a second chance at the 21st-century job market.
Want to be an electric autonomous vehicle mechanic at age 15? Throw on a demonetized VR module and learn by doing, testing your prototype iterations at almost zero cost and with no risk of harming others.
Want to be a plasma physicist and play around with a virtual nuclear fusion reactor? Now you’ll be able to simulate results and test out different tweaks, logging Smart Educational Record credits in the process.
As tomorrow’s career model shifts from a “one-and-done graduate degree” to lifelong education, professional VR-based re-education will allow for a continuous education loop, reducing the barrier to entry for anyone wanting to enter a new industry.
But beyond professional training and virtually enriched, real-world work scenarios, Web 3.0 promises entirely virtual workplaces and blockchain-secured authorization systems.
Rise of the Virtual Workplace and Digital Data Integrity
In addition to enabling an annual $52 billion virtual goods marketplace, the Spatial Web is also giving rise to “virtual company headquarters” and completely virtualized companies, where employees can work from home or anywhere on the planet.
Too good to be true? Check out an incredible publicly listed company called eXp Realty.
Launched on the heels of the 2008 financial crisis, eXp Realty beat the odds, going public this past May and surpassing a $1B market cap on day one of trading.
But how? Opting for a demonetized virtual model, eXp’s founder Glenn Sanford decided to ditch brick and mortar from the get-go, instead building out an online virtual campus for employees, contractors, and thousands of agents.
And after years of hosting team meetings, training seminars, and even agent discussions with potential buyers through 2D digital interfaces, eXp’s virtual headquarters went spatial.
What is eXp’s primary corporate value? FUN! And Glenn Sanford’s employees love their jobs.
In a bid to transition from 2D interfaces to immersive, 3D work experiences, virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and an extraordinary new precedent.
Forgoing any physical locations for a centralized VR campus, eXp Realty has essentially thrown out all overhead and entered a lucrative market with barely any upfront costs.
Delocalize with VR, and you can now hire anyone with internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.
Throw in the Spatial Web’s fundamental blockchain-based data layer, and now cryptographically secured virtual IDs will let you validate colleagues’ identities or any of the virtual avatars we will soon inhabit.
This becomes critically important for spatial information logs—keeping incorruptible records of who’s present at a meeting, which data each person has access to, and AI-translated reports of everything discussed and contracts agreed to.
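To make that concrete, here is a minimal sketch in Python of a tamper-evident spatial log of the kind described above: an append-only chain of hashed records. Every name and field is illustrative; no actual Spatial Web API exists yet.

```python
import hashlib
import json
import time

def record_hash(record: dict) -> str:
    """Deterministic SHA-256 over a record (sorted keys for stability)."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()

class SpatialLog:
    """Append-only, hash-chained log: who was present, with what data access.

    Each entry embeds the hash of the previous entry, so silently rewriting
    any historical record breaks every hash that follows it.
    """
    def __init__(self):
        self.entries = []

    def append(self, attendee: str, location: str, data_scope: list) -> dict:
        record = {
            "attendee": attendee,
            "location": location,
            "data_scope": data_scope,  # what this attendee may access
            "timestamp": time.time(),
            "prev_hash": self.entries[-1]["hash"] if self.entries else "GENESIS",
        }
        record["hash"] = record_hash(record)
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; any tampering breaks the chain."""
        prev = "GENESIS"
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            if entry["prev_hash"] != prev or record_hash(body) != entry["hash"]:
                return False
            prev = entry["hash"]
        return True

log = SpatialLog()
log.append("avatar:alice", "hq/conference-room-3", ["q3-contract-draft"])
log.append("avatar:bob", "hq/conference-room-3", ["q3-contract-draft", "budget"])
print(log.verify())  # True; alter any field above and this becomes False
```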
But as I discussed in a previous Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high rises too.
As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.
Imagine showing up at your building’s concierge: your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.
You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.
With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics take a fraction of the time, operations grow seamless, and corporate data will be safer than ever.
Final Thoughts
While converging technologies slash the lifespan of Fortune 500 companies, bring on the rise of vast new industries, and transform the job market, Web 3.0 is changing the way we work, where we work, and who we work with.
Life-like virtual modules are already unlocking countless professional training camps, modifiable in real-time and easily updated.
Virtual programming and blockchain-based authentication are enabling smart data logging, identity protection, and on-demand smart asset trading.
And VR/AR-accessible worlds (and corporate campuses) not only demonetize, dematerialize, and delocalize our everyday workplaces, but enrich our physical worlds with AI-driven, context-specific data.
Welcome to the Spatial Web workplace.
Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.
Image Credit: MONOPOLY919 / Shutterstock.com
How the Spatial Web Will Fix What’s ...
Converging exponential technologies will transform media, advertising and the retail world. The world we see, through our digitally-enhanced eyes, will multiply and explode with intelligence, personalization, and brilliance.
This is the age of Web 3.0.
Last week, I discussed the what and how of Web 3.0 (also known as the Spatial Web), walking through its architecture and the converging technologies that enable it.
To recap, while Web 1.0 consisted of static documents and read-only data, Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens—a flat web of sensorily confined information.
During the next two to five years, the convergence of 5G, AI, a trillion sensors, and VR/AR will enable us to both map our physical world into virtual space and superimpose a digital layer onto our physical environments.
Web 3.0 is about to transform everything—from the way we learn and educate, to the way we trade (smart) assets, to our interactions with real and virtual versions of each other.
And while users grow rightly concerned about data privacy and misuse, the Spatial Web’s use of blockchain in its data and governance layer will secure and validate our online identities, protecting everything from your virtual assets to personal files.
In this second installment of the Web 3.0 series, I’ll be discussing the Spatial Web’s vast implications for a handful of industries:
News & Media Coverage
Smart Advertising
Personalized Retail
Let’s dive in.
Transforming Network News with Web 3.0
News media is big business. In 2016, global news media (including print) generated 168 billion USD in circulation and advertising revenue.
The news we listen to impacts our mindset. Listen to dystopian news on violence, disaster, and evil, and you’ll be more likely to search for a cave to hide in than for the technology to launch your next business.
Today, different news media present starkly different realities of everything from foreign conflict to domestic policy. And outcomes are consequential. What reporters and news corporations decide to show or omit of a given news story plays a tremendous role in shaping the beliefs and resulting values of entire populations and constituencies.
But what if we could have an objective benchmark for today’s news, whereby crowdsourced and sensor-collected evidence allows you to tour the site of journalistic coverage, determining for yourself the most salient aspects of a story?
Enter mesh networks, AI, public ledgers, and virtual reality.
While traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.
In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
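As a toy illustration of that non-hierarchical property, here is a short Python simulation in which nodes know only their direct peers and a broadcast floods the network hop by hop. It assumes nothing about any real mesh protocol; it simply shows that no central router is required.

```python
from collections import deque

class MeshNode:
    """A node in a toy wireless mesh: it knows only its direct neighbors."""
    def __init__(self, name):
        self.name = name
        self.peers = set()
        self.seen = set()  # message IDs already relayed, to stop loops

    def link(self, other):
        self.peers.add(other)
        other.peers.add(self)

def flood(origin: MeshNode, msg_id: str) -> int:
    """Broadcast msg_id from origin; every node relays to its peers.
    Returns how many nodes the message reached. No hierarchy, no hub."""
    queue, reached = deque([origin]), 0
    origin.seen.add(msg_id)
    while queue:
        node = queue.popleft()
        reached += 1
        for peer in node.peers:
            if msg_id not in peer.seen:
                peer.seen.add(msg_id)
                queue.append(peer)
    return reached

# Five phones form an ad hoc chain with one cross-link; no access point needed.
a, b, c, d, e = (MeshNode(n) for n in "abcde")
a.link(b); b.link(c); c.link(d); d.link(e); b.link(e)
print(flood(a, "360-feed-0001"))  # -> 5: every node receives the broadcast
```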
Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.
Imagine a scenario in which protests break out across the country, each cluster of activists broadcasting an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram of the march in real time. Want to see and hear what the NYC-based crowds are advocating for? Throw on some VR goggles and explore the event with full access. Or cue into the southern Texan border to assess for yourself the handling of immigrant entry and border conflicts.
Take a front seat in the Capitol during tomorrow’s Senate hearing, assessing each Senator’s reactions, questions and arguments without a Fox News or CNN filter. Or if you’re short on time, switch on the holographic press conference and host 3D avatars of live-broadcasting politicians in your living room.
We often think of modern media as taking away consumer agency, feeding tailored and often partisan ideology to a complacent audience. But as wireless mesh networks and agnostic sensor data allow for immersive, VR-accessible news sites, the average viewer will necessarily become an active participant in her own education about current events.
And with each of us interpreting the news according to our own values, I envision a much less polarized world. A world in which civic engagement, moderately reasoned dialogue, and shared assumptions will allow us to empathize and make compromises.
The future promises an era in which news is verified and balanced; wherein public ledgers, AI, and new web interfaces bring you into the action and respect your intelligence—not manipulate your ignorance.
Web 3.0 Reinventing Advertising
Bringing about the rise of ‘user-owned data’ and self-established permissions, Web 3.0 is poised to completely disrupt digital advertising—a global industry worth over 192 billion USD.
Currently, targeted advertising leverages tomes of personal data and online consumer behavior to subtly engage you with products you might not want, or sell you on falsely advertised services promising inaccurate results.
With a new Web 3.0 data and governance layer, however, distributed ledger technologies will require advertisers to engage in more direct interaction with consumers, validating claims and upping transparency.
And with a data layer that allows users to own and authorize third-party use of their data, blockchain also holds extraordinary promise to slash not only data breaches and identity theft, but covert advertiser bombardment without your authorization.
Accessing crowdsourced reviews and AI-driven fact-checking, users will be able to validate advertising claims more efficiently and accurately than ever before, potentially rating and filtering out advertisers in the process. And in such a streamlined system of verified claims, sellers will face increased pressure to compete more on product and rely less on marketing.
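A minimal sketch of what such user-owned, permissioned data might look like, in Python. The vault, grant, and field names are all hypothetical; the point is simply that an advertiser reads only what the owner has explicitly sold access to.

```python
from dataclasses import dataclass, field

@dataclass
class DataVault:
    """Toy user-owned data store: advertisers see nothing without a grant."""
    owner: str
    records: dict = field(default_factory=dict)
    grants: dict = field(default_factory=dict)  # advertiser -> allowed fields

    def grant(self, advertiser: str, fields_: list, price: float):
        """Owner explicitly sells access to specific fields only."""
        self.grants[advertiser] = {"fields": set(fields_), "paid": price}

    def read(self, advertiser: str, field_name: str):
        grant = self.grants.get(advertiser)
        if not grant or field_name not in grant["fields"]:
            raise PermissionError(f"{advertiser} has no grant for '{field_name}'")
        return self.records[field_name]

vault = DataVault(owner="user:42",
                  records={"shoe_size": 10, "browsing_history": ["..."]})
vault.grant("acme-ads", fields_=["shoe_size"], price=0.50)  # paid, scoped consent
print(vault.read("acme-ads", "shoe_size"))        # -> 10
# vault.read("acme-ads", "browsing_history")      # -> PermissionError
```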
But perhaps most exciting is the convergence of artificial intelligence and augmented reality.
As Spatial Web networks begin to associate digital information with physical objects and locations, products will begin to “sell themselves.” Each with built-in smart properties, products will become hyper-personalized, communicating information directly to users through Web 3.0 interfaces.
Imagine stepping into a department store in pursuit of a new web-connected fridge. As soon as you enter, your AR goggles register your location and immediately grant you access to a populated register of store products.
As you move closer to a kitchen set that catches your eye, a virtual salesperson—whether by holographic video or avatar—pops into your field of view next to the fridge you’ve been examining and begins introducing you to its various functions and features. You quickly decide you’d rather dismiss the avatar, and your preferences are reset to list appliance properties as text instead.
After a virtual tour of several other fridges, you decide on the one you want and seamlessly execute a smart contract, carried out by your smart wallet and the fridge. The transaction takes place in seconds, and the fridge’s blockchain-recorded ownership record is updated.
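Here is a toy Python stand-in for that kind of smart transaction: payment and title move together or not at all. A real implementation would live on-chain as a smart contract; this sketch only mirrors the atomicity described above, and every identifier is invented.

```python
class OwnershipLedger:
    """Toy stand-in for a blockchain ownership record, with an audit trail."""
    def __init__(self):
        self.owner_of = {}  # asset_id -> current owner
        self.history = []   # append-only record of every event

    def register(self, asset_id, owner):
        self.owner_of[asset_id] = owner
        self.history.append(("register", asset_id, owner))

    def transfer(self, asset_id, seller, buyer, price, wallets):
        """Atomic 'smart transaction': both checks pass, or nothing changes."""
        if self.owner_of.get(asset_id) != seller:
            raise ValueError("seller does not own this asset")
        if wallets[buyer] < price:
            raise ValueError("insufficient funds")
        wallets[buyer] -= price
        wallets[seller] += price
        self.owner_of[asset_id] = buyer
        self.history.append(("transfer", asset_id, seller, buyer, price))

ledger = OwnershipLedger()
wallets = {"store": 0.0, "you": 1500.0}
ledger.register("fridge:sn-998", "store")
ledger.transfer("fridge:sn-998", "store", "you", 1199.0, wallets)
print(ledger.owner_of["fridge:sn-998"], wallets)
# -> you {'store': 1199.0, 'you': 301.0}
```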
Better yet, you head over to a friend’s home for dinner after moving into the neighborhood. While catching up in the kitchen, your eyes fixate on the cabinets, which quickly populate your AR glasses with a price-point and selection of colors.
But what if you’d rather not get auto-populated product info in the first place? No problem!
Now empowered with self-sovereign identities, users might be able to turn off advertising preferences entirely, turning on smart recommendations only when they want to buy a given product or need new supplies.
And with user-centric data, consumers might even sell such information to advertisers directly. Now, instead of Facebook or Google profiting off your data, you might earn a passive income by giving advertisers permission to personalize and market their services. Buy more, and your personal data marketplace grows in value. Buy less, and a lower-valued advertising profile causes an ebb in advertiser input.
With user-controlled data, advertisers now work on your terms, putting increased pressure on product iteration and personalizing products for each user.
This brings us to the transformative future of retail.
Personalized Retail: The Power of the Spatial Web
In a future of smart and hyper-personalized products, I might walk through a virtual game space or a digitally reconstructed Target, browsing specific categories of clothing I’ve predetermined prior to entry.
As I pick out my selection, my AI assistant hones its algorithm to reflect my new fashion preferences, and personal shoppers, also visiting the store in VR, help me pair different pieces as I go.
Once my personal shopper has finished constructing various outfits, I then sit back and watch a fashion show of countless Peter avatars with style and color variations of my selection, each customizable.
After I’ve made my selection, I might choose to purchase physical versions of three outfits and virtual versions of two others for my digital avatar. Payments are made automatically as I leave the store, including a smart wallet transaction made with the personal shopper at a per-outfit rate (for only the pieces I buy).
Already, several big players have broken into the VR market. Just this year, Walmart has announced its foray into the VR space, shipping 17,000 Oculus Go VR headsets to Walmart locations across the US.
And just this past January, Walmart filed two VR shopping-related patents. In a new bid to disrupt a rapidly changing retail market, Walmart now describes a system in which users couple their VR headset with haptic gloves for an immersive in-store experience, whether at 3am in your living room or during a lunch break at the office.
But Walmart is not alone. Big e-commerce players from Amazon to Alibaba are leaping onto the scene with new software buildout to ride the impending headset revolution.
Beyond virtual reality, players like IKEA have even begun using mobile-based augmented reality to map digitally replicated furniture in your physical living room, true to dimension. And this is just the beginning….
As AR headset hardware undergoes breakneck advancements in the next two to five years, we might soon be able to project watches onto our wrists, swapping out colors, styles, brand, and price points.
Or let’s say I need a new coffee table in my office. Pulling up multiple models in AR, I can position each option using advanced hand-tracking technology and customize height and width according to my needs. Once the smart payment is triggered, the manufacturer prints my newly customized piece and drones it to my doorstep. As I assemble the pieces, overlaid digital prompts walk me through each step, and any user confusion is relayed to a company database.
Perhaps one of the ripest industries for Spatial Web disruption, retail presents one of the greatest opportunities for profit across virtual apparel, digital malls, AI fashion startups and beyond.
In our next series iteration, I’ll be looking at the tremendous opportunities created by Web 3.0 for the Future of Work and Entertainment.
Image Credit: nmedia / Shutterstock.com
The Spatial Web Will Map Our 3D ...
The boundaries between digital and physical space are disappearing at a breakneck pace. What was once static and boring is becoming dynamic and magical.
For all of human history, looking at the world through our eyes was the same experience for everyone. Beyond the bounds of an over-active imagination, what you see is the same as what I see.
But all of this is about to change. Over the next two to five years, the world around us is about to light up with layer upon layer of rich, fun, meaningful, engaging, and dynamic data. Data you can see and interact with.
This magical future ahead is called the Spatial Web and will transform every aspect of our lives, from retail and advertising, to work and education, to entertainment and social interaction.
Massive change is underway as a result of a series of converging technologies, from 5G global networks and ubiquitous artificial intelligence, to 30+ billion connected devices (known as the IoT), each of which will generate scores of real-world data every second, everywhere.
The current AI explosion will make everything smart, autonomous, and self-programming. Blockchain and cloud-enabled services will support a secure data layer, putting data back in the hands of users and allowing us to build complex rule-based infrastructure in tomorrow’s virtual worlds.
And with the rise of online-merge-offline (OMO) environments, two-dimensional screens will no longer serve as our exclusive portal to the web. Instead, virtual and augmented reality eyewear will allow us to interface with a digitally-mapped world, richly layered with visual data.
Welcome to the Spatial Web. Over the next few months, I’ll be doing a deep dive into the Spatial Web (a.k.a. Web 3.0), covering what it is, how it works, and its vast implications across industries, from real estate and healthcare to entertainment and the future of work. In this blog, I’ll discuss the what, how, and why of Web 3.0—humanity’s first major foray into our virtual-physical hybrid selves (BTW, this year at Abundance360, we’ll be doing a deep dive into the Spatial Web with the leaders of HTC, Magic Leap, and High-Fidelity).
Let’s dive in.
What is the Spatial Web?
While we humans exist in three dimensions, our web today is flat.
The web was designed for shared information, absorbed through a flat screen. But as proliferating sensors, ubiquitous AI, and interconnected networks blur the lines between our physical and online worlds, we need a spatial web to help us digitally map a three-dimensional world.
To put Web 3.0 in context, let’s take a trip down memory lane. In the late 1980s, the newly-birthed world wide web consisted of static web pages and one-way information—a monumental system of publishing and linking information unlike any unified data system before it. To connect, we had to dial up through unstable modems and struggle through insufferably slow connection speeds.
But emerging from this revolutionary (albeit non-interactive) infodump, Web 2.0 has connected the planet more in one decade than empires did in millennia.
Granting democratized participation through newly interactive sites and applications, today’s web era has turbocharged information-sharing and created ripple effects of scientific discovery, economic growth, and technological progress on an unprecedented scale.
We’ve seen the explosion of social networking sites, wikis, and online collaboration platforms. Consumers have become creators; physically isolated users have been handed a global microphone; and entrepreneurs can now access billions of potential customers.
But if Web 2.0 took the world by storm, the Spatial Web emerging today will leave it in the dust.
While there’s no clear consensus about its definition, the Spatial Web refers to a computing environment that exists in three-dimensional space—a twinning of real and virtual realities—enabled via billions of connected devices and accessed through the interfaces of virtual and augmented reality.
In this way, the Spatial Web will enable us to both build a twin of our physical reality in the virtual realm and bring the digital into our real environments.
It’s the next era of web-like technologies:
Spatial computing technologies, like augmented and virtual reality;
Physical computing technologies, like IoT and robotic sensors;
And decentralized computing: both blockchain—which enables greater security and data authentication—and edge computing, which pushes computing power to where it’s most needed, speeding everything up.
Geared with natural language search, data mining, machine learning, and AI recommendation agents, the Spatial Web is a growing expanse of services and information, navigable with the use of ever-more-sophisticated AI assistants and revolutionary new interfaces.
Where Web 1.0 consisted of static documents and read-only data, Web 2.0 introduced multimedia content, interactive web applications, and social media on two-dimensional screens. But converging technologies are quickly transcending the laptop, and will even disrupt the smartphone in the next decade.
With the rise of wearables, smart glasses, AR / VR interfaces, and the IoT, the Spatial Web will integrate seamlessly into our physical environment, overlaying every conversation, every road, every object, conference room, and classroom with intuitively-presented data and AI-aided interaction.
Think: the Oasis in Ready Player One, where anyone can create digital personas, build and invest in smart assets, do business, complete effortless peer-to-peer transactions, and collect real estate in a virtual world.
Or imagine a virtual replica or “digital twin” of your office, each conference room authenticated on the blockchain, requiring a cryptographic key for entry.
As I’ve discussed with my good friend and “VR guru” Philip Rosedale, I’m absolutely clear that in the not-too-distant future, every physical element of every building in the world will be fully digitized, existing as a virtual incarnation, or even as N of them. “Meet me at the top of the Empire State Building?” “Sure, which one?”
This digitization of life means that suddenly every piece of information can become spatial, every environment can be smarter by virtue of AI, and every data point about me and my assets—both virtual and physical—can be reliably stored, secured, enhanced, and monetized.
In essence, the Spatial Web lets us interface with digitally-enhanced versions of our physical environment and build out entirely fictional virtual worlds—capable of running simulations, supporting entire economies, and even birthing new political systems.
But while I’ll get into the weeds of different use cases next week, let’s first concretize.
How Does It Work?
Let’s start with the stack. In the PC days, we had a database accompanied by a program that could ingest that data and present it to us as digestible information on a screen.
Then, in the early days of the web, data migrated to servers. Information was fed through a website, with which you would interface via a browser—whether Mosaic or Mozilla.
And then came the cloud.
Resident at either the edge of the cloud or on your phone, today’s rapidly proliferating apps now allow us to interact with previously read-only data, interfacing through a smartphone. But as Siri and Alexa have brought us verbal interfaces, AI-geared phone cameras can now determine your identity, and sensors are beginning to read our gestures.
And now we’re not only looking at our screens but through them, as the convergence of AI and AR begins to digitally populate our physical worlds.
While Pokémon Go sent millions of mobile game-players on virtual treasure hunts, IKEA is just one of the many companies letting you map virtual furniture within your physical home—simulating everything from cabinets to entire kitchens. No longer the one-sided recipients, we’re beginning to see through sensors, creatively inserting digital content in our everyday environments.
Let’s take a look at how the latest incarnation might work. In this new Web 3.0 stack, my personal AI would act as an intermediary, accessing public or privately-authorized data through the blockchain on my behalf, then feeding it through an interface layer composed of everything from my VR headset, to numerous wearables, to my smart environment (IoT-connected devices or even in-home robots).
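A rough sketch of that three-part flow in Python, with the data layer, personal AI, and interface layer as toy classes. The layer boundaries and method names are my assumptions, not any real framework.

```python
class DataLayer:
    """Toy stand-in for the blockchain-backed data layer with permissions."""
    def __init__(self, records, authorized):
        self.records = records        # key -> stored value
        self.authorized = authorized  # key -> set of allowed identities

    def fetch(self, key, requester):
        if requester not in self.authorized.get(key, set()):
            raise PermissionError(f"{requester} is not authorized for {key}")
        return self.records[key]

class PersonalAI:
    """The intermediary: queries the data layer on its owner's behalf."""
    def __init__(self, owner, data_layer):
        self.owner = owner
        self.data_layer = data_layer

    def query(self, key):
        return self.data_layer.fetch(key, requester=self.owner)

def render_overlay(payload):
    """Toy interface layer: a headset, wearable, or in-home robot display."""
    print(f"[AR overlay] {payload}")

data = DataLayer(
    records={"conf-room-3/schedule": "free after 2pm"},
    authorized={"conf-room-3/schedule": {"user:peter"}},
)
assistant = PersonalAI("user:peter", data)
render_overlay(assistant.query("conf-room-3/schedule"))  # [AR overlay] free after 2pm
```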
But as we attempt to build a smart world with smart infrastructure, smart supply chains, and smart everything else, we need a set of basic standards with addresses for people, places, and things. Just as today’s web relies on the TCP/IP protocol suite and other infrastructure, by which your computer is addressed and data packets are routed, we need infrastructure for the Spatial Web.
And a select group of players is already stepping in to fill this void. Proposing new structural designs for Web 3.0, some are attempting to evolve today’s web model from text-based web pages in 2D to three-dimensional AR and VR web experiences located in both digitally-mapped physical worlds and newly-created virtual ones.
With a spatial programming language analogous to HTML, imagine building a linkable address for any physical or virtual space, granting it a format that then makes it interchangeable and interoperable with all other spaces.
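No such standard exists yet, so everything in the following Python sketch is an assumption, but it shows what a linkable, interoperable spatial address might carry by analogy with a URL.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SpatialAddress:
    """Hypothetical URL-for-places: none of these fields are a real standard."""
    scheme: str   # e.g. "spatial", by analogy with "https"
    world: str    # which reality: "earth" or a named virtual world
    path: str     # hierarchical place, like a URL path
    anchor: tuple # local (x, y, z) offset within that place, in meters

    def url(self) -> str:
        x, y, z = self.anchor
        return f"{self.scheme}://{self.world}/{self.path}#{x},{y},{z}"

desk = SpatialAddress("spatial", "earth",
                      "us/ny/nyc/350-5th-ave/floor-86", (2.0, 0.5, 1.1))
twin = SpatialAddress("spatial", "vr-hq",
                      "campus/tower-a/floor-86", (2.0, 0.5, 1.1))
print(desk.url())  # spatial://earth/us/ny/nyc/350-5th-ave/floor-86#2.0,0.5,1.1
print(twin.url())  # the same spot's virtual twin gets its own linkable address
```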
But it doesn’t stop there.
As soon as we populate a virtual room with content, we then need to encode who sees it, who can buy it, who can move it…
And the Spatial Web’s eventual governing system (for posting content on a centralized grid) would allow us to address everything from the room you’re sitting in, to the chair on the other side of the table, to the building across the street.
Just as we have a DNS for the web and the purchasing of web domains, once we give addresses to spaces (akin to granting URLs), we then have the ability to identify and visit addressable locations, physical objects, individuals, or pieces of digital content in cyberspace.
And these not only apply to virtual worlds, but to the real world itself. As new mapping technologies emerge, we can now map rooms, objects, and large-scale environments into virtual space with increasing accuracy.
We might then dictate who gets to move your coffee mug in a virtual conference room, or when a team gets to use the room itself. Rules and permissions would be set in the grid, decentralized governance systems, or in the application layer.
Taken one step further, imagine then monetizing smart spaces and smart assets. If you have booked the virtual conference room, perhaps you’ll let me pay you 0.25 BTC to use it instead?
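Here is how such a permission grid and paid hand-off might look in a toy Python model. The rule format, the sublet flow, and the 0.25 BTC fee (borrowed from the example above) are all illustrative.

```python
class SmartSpace:
    """Toy permission grid for one addressable room and its objects."""
    def __init__(self, address):
        self.address = address
        self.booked_by = None
        self.rules = {}  # (principal, action, target) -> allowed?

    def allow(self, principal, action, target):
        self.rules[(principal, action, target)] = True

    def check(self, principal, action, target) -> bool:
        return self.rules.get((principal, action, target), False)

    def sublet(self, holder, taker, fee_btc, wallets):
        """Holder of the booking accepts a fee to hand the room over."""
        if self.booked_by != holder:
            raise ValueError(f"{holder} does not hold this booking")
        wallets[taker] -= fee_btc
        wallets[holder] += fee_btc
        self.booked_by = taker

room = SmartSpace("spatial://vr-hq/campus/tower-a/conf-3")
room.booked_by = "marketing-team"
room.allow("alice", "move", "coffee-mug")       # alice may move your mug...
print(room.check("bob", "move", "coffee-mug"))  # ...but bob may not -> False

wallets = {"marketing-team": 0.0, "me": 1.0}
room.sublet("marketing-team", "me", 0.25, wallets)  # the 0.25 BTC hand-off above
print(room.booked_by, wallets)  # me {'marketing-team': 0.25, 'me': 0.75}
```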
But given the Spatial Web’s enormous technological complexity, what’s allowing it to emerge now?
Why Is It Happening Now?
While countless entrepreneurs have already started harnessing blockchain technologies to build decentralized apps (or dApps), two major developments are allowing today’s birth of Web 3.0:
High-resolution wireless VR/AR headsets are finally catapulting virtual and augmented reality out of a prolonged winter.
The International Data Corporation (IDC) predicts the VR and AR headset market will reach 65.9 million units by 2022. In the next 18 months alone, 2 billion devices will be AR-enabled. And tech giants across the board have long since begun investing heavy sums.
In early 2019, HTC is releasing the VIVE Focus, a wireless self-contained VR headset. At the same time, Facebook is charging ahead with its Project Santa Cruz—the Oculus division’s next-generation standalone, wireless VR headset. And Magic Leap has finally rolled out its long-awaited Magic Leap One mixed reality headset.
Mass deployment of 5G will drive 10- to 100-gigabit connection speeds over the next six years, matching hardware progress with the speed needed to create virtual worlds.
We’ve already seen tremendous leaps in display technology. But as connectivity speeds converge with accelerating GPUs, we’ll start to experience seamless VR and AR interfaces with ever-expanding virtual worlds.
And with such democratizing speeds, every user will be able to develop in VR.
But accompanying these two catalysts is also an important shift towards the decentralized web and a demand for user-controlled data.
Converging technologies, from immutable ledgers and blockchain to machine learning, are now enabling the more direct, decentralized use of web applications and creation of user content. With no central point of control, middlemen are removed from the equation and anyone can create an address, independently interacting with the network.
Enabled by a permission-less blockchain, any user—regardless of birthplace, gender, ethnicity, wealth, or citizenship—would thus be able to establish digital assets and transfer them seamlessly, granting us a more democratized Internet.
And with data stored on distributed nodes, this also means no single point of failure. One could have multiple backups, accessible only with digital authorization, leaving users immune to any single server failure.
Implications Abound–What’s Next…
With a newly-built stack and an interface built from numerous converging technologies, the Spatial Web will transform every facet of our everyday lives—from the way we organize and access our data, to our social and business interactions, to the way we train employees and educate our children.
We’re about to start spending more time in the virtual world than ever before. Beyond entertainment or gameplay, our livelihoods, work, and even personal decisions are already becoming mediated by a web electrified with AI and newly-emerging interfaces.
In our next blog on the Spatial Web, I’ll do a deep dive into the myriad industry implications of Web 3.0, offering tangible use cases across sectors.
Image Credit: Comeback01 / Shutterstock.com
DeepMind’s Eerie Reimagination of the ...
If a recent project using Google’s DeepMind were a recipe, you would take a pair of AI systems, images of animals, and a whole lot of computing power. Mix it all together, and you’d get a series of imagined animals dreamed up by one of the AIs. A look through the research paper about the project—or this open Google Folder of images it produced—will likely lead you to agree that the results are a mix of impressive and downright eerie.
But the eerie factor doesn’t mean the project shouldn’t be considered a success and a step forward for future uses of AI.
From GAN To BigGAN
The team behind the project consists of Andrew Brock, a PhD student at the Edinburgh Center for Robotics and a DeepMind intern, along with DeepMind researchers Jeff Donahue and Karen Simonyan.
They used a so-called Generative Adversarial Network (GAN) to generate the images. In a GAN, two AI systems compete in a game-like manner. One AI produces images of an object or creature. The human equivalent would be drawing pictures of, for example, a dog—without necessarily knowing exactly what a dog looks like. Those images are then shown to the second AI, which has already been fed images of dogs. The second AI then tells the first one how far off its efforts were. The first one uses this information to improve its images. The two go back and forth in an iterative process, and the goal is for the first AI to become so good at creating images of dogs that the second can’t tell the difference between its creations and actual pictures of dogs.
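For readers who want to see the mechanics, here is a minimal GAN training loop in PyTorch, shrunk to toy scale (16-dimensional “images”) so the adversarial back-and-forth is visible. It is a generic sketch, not BigGAN’s architecture.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 16

# Generator: noise in, fake "image" out. Discriminator: image in, realness logit out.
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n=64):
    # Stand-in for "pictures of dogs": samples from a fixed target distribution.
    return torch.randn(n, data_dim) * 0.5 + 2.0

for step in range(2000):
    real = real_batch()
    fake = G(torch.randn(real.size(0), latent_dim))

    # Discriminator's turn: label real 1, fake 0, learn to tell them apart.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(real.size(0), 1)) +
              bce(D(fake.detach()), torch.zeros(real.size(0), 1)))
    loss_d.backward()
    opt_d.step()

    # Generator's turn: adjust so the discriminator scores its fakes as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(real.size(0), 1))
    loss_g.backward()
    opt_g.step()

print(G(torch.randn(4, latent_dim)).mean().item())  # should drift toward ~2.0
```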
The team was able to draw on Google’s vast vaults of computational power to create images of a quality and life-like nature that were beyond almost anything seen before. In part, this was achieved by feeding the GAN with more images than is usually the case. According to IFLScience, the standard is to feed about 64 images per subject into the GAN. In this case, the research team fed about 2,000 images per subject into the system, leading to it being nicknamed BigGAN.
Their results showed that feeding the system with more images and using masses of raw computer power markedly increased the GAN’s precision and ability to create life-like renditions of the subjects it was trained to reproduce.
“The main thing these models need is not algorithmic improvements, but computational ones. […] When you increase model capacity and you increase the number of images you show at every step, you get this twofold combined effect,” Andrew Brock told Fast Company.
The Power Drain
The team used 512 of Google’s AI-focused Tensor Processing Units (TPUs) to generate 512-by-512-pixel images. Each experiment took between 24 and 48 hours to run.
That kind of computing power needs a lot of electricity. As artist and Innovator-In-Residence at the Library of Congress Jer Thorp tongue-in-cheek put it on Twitter: “The good news is that AI can now give you a more believable image of a plate of spaghetti. The bad news is that it used roughly enough energy to power Cleveland for the afternoon.”
Thorp added that a back-of-the-envelope calculation showed that the computations needed to produce the images would require about 27,000 square feet of solar panels to power them.
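That estimate is easy to sanity-check. In the Python arithmetic below, every input (per-chip power draw, solar yield, capacity factor) is an assumption chosen purely for illustration, yet the result lands near Thorp’s figure.

```python
# Back-of-the-envelope check on Thorp's estimate. All inputs are assumptions,
# not measured values: TPU power draw and solar yield vary widely.
tpu_count        = 512    # from the experimental setup
watts_per_tpu    = 200    # assumed draw per TPU chip, W
panel_w_per_sqft = 15     # assumed peak solar output, W per sq ft
capacity_factor  = 0.25   # assumed average fraction of peak a panel delivers

compute_load_w = tpu_count * watts_per_tpu           # ~102,400 W sustained
avg_panel_w    = panel_w_per_sqft * capacity_factor  # ~3.75 W/sq ft average
sqft_needed    = compute_load_w / avg_panel_w

print(f"{sqft_needed:,.0f} sq ft of panels")  # ~27,300, near Thorp's ~27,000
```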
BigGAN’s images have been hailed by researchers, with Oriol Vinyals, research scientist at DeepMind, rhetorically asking if these were the ‘Best GAN samples yet?’
However, they are still not perfect. The number of legs on a given creature is one example of where the BigGAN seemed to struggle. The system was good at recognizing that something like a spider has a lot of legs, but seemed unable to settle on how many ‘a lot’ was supposed to be. The same applied to dogs, especially if the images were supposed to show said dogs in motion.
Those eerie images are contrasted by other renditions that show such lifelike qualities that a human mind has a hard time identifying them as fake. Spaniels with lolling tongues, ocean scenery, and butterflies were all rendered with what looks like perfection. The same goes for an image of a hamburger that was good enough to make me stop writing because I suddenly needed lunch.
The Future Use Cases
GAN networks were first introduced in 2014, and given their relative youth, researchers and companies are still busy trying out possible use cases.
One possible use is image correction—making pixelated images clearer. Not only could this help your future holiday snaps, but it could be applied in industries such as space exploration. A team from the University of Michigan and the Max Planck Institute has developed a method for GANs to create images from text descriptions. At Berkeley, a research group has used GANs to create an interface that lets users change the shape, size, and design of objects, including a handbag.
For anyone who has seen a film like Wag the Dog or read 1984, the possibilities are also starkly alarming. GANs could, in other words, make fake news look more real than ever before.
For now, it seems that while not all GANs require the computational and electrical power of the BigGAN, there is still some way to reach these potential use cases. However, if there’s one lesson from Moore’s Law and exponential technology, it is that today’s technical roadblock quickly becomes tomorrow’s minor issue as technology progresses.
Image Credit: Ondrej Prosicky/Shutterstock