#433950 How the Spatial Web Will Transform Every ...
What is the future of work? Is our future one of ‘technological socialism’ (where technology is taking care of our needs)? Or is our future workplace completely virtualized, whereby we hang out at home in our PJs while walking about our virtual corporate headquarters?
This blog looks at the future of work in the age of Web 3.0, examining scenarios in which AI, VR, and the Spatial Web converge to transform every element of our careers, from training to execution to free time.
Three weeks ago, I explored the vast implications of Web 3.0 on news, media, smart advertising, and personalized retail. And to offer a quick recap on what the Spatial Web is and how it works, let’s cover some brief history.
A Quick Recap on Web 3.0
While Web 1.0 consisted of static documents and read-only data (static web pages), Web 2.0 introduced multimedia content, interactive web applications, and participatory social media, all of these mediated by two-dimensional screens.
But over the next two to five years, the convergence of 5G, artificial intelligence, VR/AR, and a trillion-sensor economy will enable us to both map our physical world into virtual space and superimpose a digital data layer onto our physical environments.
Suddenly, all our information will be manipulated, stored, understood, and experienced in spatial ways.
In this third installment of the Web 3.0 series, I’ll be discussing the Spatial Web’s vast implications for:
Professional Training
Delocalized Business and the Virtual Workplace
Smart Permissions and Data Security
Let’s dive in.
Virtual Training, Real-World Results
Virtual and augmented reality have already begun disrupting the professional training market.
Leading the charge, Walmart has already implemented VR across 200 Academy training centers, running over 45 modules and simulating everything from unusual customer requests to a Black Friday shopping rush.
In September 2018, Walmart committed to a 17,000-headset order of the Oculus Go to equip every US Supercenter, neighborhood market, and discount store with VR-based employee training.
In the engineering world, Bell Helicopter is using VR to massively expedite development and testing of its latest aircraft, the FCX-001. Partnering with Sector 5 Digital and HTC VIVE, Bell found it could compress a typical six-year aircraft design process into just six months, turning physical mock-ups into CAD-designed virtual replicas.
But beyond the design process itself, Bell is now one of a slew of companies pioneering VR pilot tests and simulations with real-world accuracy. Seated in a true-to-life virtual cockpit, pilots have now tested countless iterations of the FCX-001 in virtual flight, drawing directly onto the 3D model and enacting aircraft modifications in real-time.
And in an expansion of our virtual senses, several key players are already working on haptic feedback. In the case of VR flight, French company Go Touch VR is now partnering with software developer FlyInside on fingertip-mounted haptic tech for aviation.
The aim is to dramatically reduce the time and trouble of VR pilot testing by giving touch-based confirmation of every switch and dial activated on virtual flights, just as one would experience in a full-sized cockpit mockup. Replicating texture, stiffness, and even the sensation of holding an object, these fingertip-mounted devices contain a suite of actuators that simulate everything from a light touch to higher-pressure contact, all controlled by gaze and finger movements.
When it comes to other high-risk simulations, virtual and augmented reality have barely scratched the surface.
Firefighters can now combat virtual wildfires with new platforms like FLAIM Trainer or TargetSolutions. And thanks to the expansion of medical AR/VR services like 3D4Medical or Echopixel, surgeons might soon perform operations on annotated organs and magnified incision sites, speeding up reaction times and vastly improving precision.
But perhaps most urgent, Web 3.0 and its VR interface will offer an immediate solution for today’s constant industry turnover and large-scale re-education demands.
VR educational facilities with exact replicas of anything from large industrial equipment to minute circuitry will soon give anyone a second chance at the 21st-century job market.
Want to be an electric, autonomous vehicle mechanic at age 15? Throw on a demonetized VR module and learn by doing, testing your prototype iterations at almost zero cost and with no risk of harming others.
Want to be a plasma physicist and play around with a virtual nuclear fusion reactor? Now you’ll be able to simulate results and test out different tweaks, logging Smart Educational Record credits in the process.
As tomorrow’s career model shifts from a “one-and-done graduate degree” to lifelong education, professional VR-based re-education will allow for a continuous education loop, reducing the barrier to entry for anyone wanting to enter a new industry.
But beyond professional training and virtually enriched, real-world work scenarios, Web 3.0 promises entirely virtual workplaces and blockchain-secured authorization systems.
Rise of the Virtual Workplace and Digital Data Integrity
In addition to enabling an annual $52 billion virtual goods marketplace, the Spatial Web is also giving rise to “virtual company headquarters” and completely virtualized companies, where employees can work from home or anywhere on the planet.
Too good to be true? Check out an incredible publicly listed company called eXp Realty.
Launched on the heels of the 2008 financial crisis, eXp Realty beat the odds, going public this past May and surpassing a $1B market cap on day one of trading.
But how? Opting for a demonetized virtual model, eXp’s founder Glenn Sanford decided to ditch brick and mortar from the get-go, instead building out an online virtual campus for employees, contractors, and thousands of agents.
And after years of hosting team meetings, training seminars, and even agent discussions with potential buyers through 2D digital interfaces, eXp’s virtual headquarters went spatial.
What is eXp’s primary corporate value? FUN! And Glenn Sanford’s employees love their jobs.
In a bid to transition from 2D interfaces to immersive, 3D work experiences, virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and an extraordinary new precedent.
Forgoing any physical locations for a centralized VR campus, eXp Realty has essentially thrown out all overhead and entered a lucrative market with barely any upfront costs.
Delocalize with VR, and you can now hire anyone with internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.
Throw in the Spatial Web’s fundamental blockchain-based data layer, and cryptographically secured virtual IDs will let you validate the identities of colleagues or of any of the virtual avatars we will soon inhabit.
This becomes critically important for spatial information logs—keeping incorruptible records of who’s present at a meeting, which data each person has access to, and AI-translated reports of everything discussed and contracts agreed to.
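To make the idea of an incorruptible attendance record a little more concrete, here is a minimal sketch in Python of a hash-chained meeting log, the basic mechanism a blockchain-style data layer builds on. The names (MeetingLog, add_entry, data_scope) are hypothetical, and a real Spatial Web system would add cryptographic signatures and distributed consensus on top; this is only an illustration under those assumptions.

```python
# Minimal sketch of a tamper-evident meeting log: each entry commits to the
# hash of the previous one, so editing any past record breaks the chain.
import hashlib
import json
import time


class MeetingLog:
    def __init__(self):
        self.entries = []

    def add_entry(self, attendee_id: str, data_scope: str) -> dict:
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "attendee": attendee_id,     # a cryptographically verified ID in a real system
            "data_scope": data_scope,    # which documents this person may access
            "timestamp": time.time(),
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record

    def verify(self) -> bool:
        """Recompute every hash; any altered entry invalidates the whole log."""
        prev_hash = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if entry["prev_hash"] != prev_hash or entry["hash"] != expected:
                return False
            prev_hash = entry["hash"]
        return True


log = MeetingLog()
log.add_entry("avatar:alice", "q3-financials")
log.add_entry("avatar:bob", "design-specs")
print(log.verify())  # True; flips to False if any earlier entry is edited
```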
But as I discussed in a previous Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high rises too.
As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.
Imagine showing up at your building’s concierge: your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.
You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once they accept, a smart transaction automatically delivers the payment to their company account.
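Here is a toy sketch of that conference-room handoff. The names (Account, RoomBooking, accept_offer) are invented for illustration; in practice the same logic would run as a smart contract on the Spatial Web’s blockchain layer rather than as in-memory Python.

```python
# Toy model of a smart booking transaction: accepting an offer atomically
# moves the fee and transfers the time slot to the bidding firm.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Account:
    company: str
    balance: float


@dataclass
class RoomBooking:
    room_id: str
    holder: Account                       # the firm that currently holds the slot
    offer_fee: float = 0.0
    offer_from: Optional[Account] = None

    def make_offer(self, bidder: Account, fee: float) -> None:
        """Another firm offers to buy the time slot."""
        self.offer_from = bidder
        self.offer_fee = fee

    def accept_offer(self) -> None:
        """On acceptance, payment and the booking transfer together."""
        if self.offer_from is None or self.offer_from.balance < self.offer_fee:
            raise ValueError("no valid offer to accept")
        self.offer_from.balance -= self.offer_fee
        self.holder.balance += self.offer_fee
        self.holder, self.offer_from, self.offer_fee = self.offer_from, None, 0.0


marketing = Account("OtherFirm Marketing", balance=1_000.0)
us = Account("Our Company", balance=500.0)

booking = RoomBooking("conf-room-7", holder=marketing)
booking.make_offer(us, fee=50.0)
booking.accept_offer()
print(booking.holder.company, marketing.balance, us.balance)
# Our Company 1050.0 450.0
```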
With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics take a fraction of the time, operations grow seamless, and corporate data will be safer than ever.
Final Thoughts
While converging technologies slash the lifespan of Fortune 500 companies, bring on the rise of vast new industries, and transform the job market, Web 3.0 is changing the way we work, where we work, and who we work with.
Life-like virtual modules are already unlocking countless professional training camps, modifiable in real-time and easily updated.
Virtual programming and blockchain-based authentication are enabling smart data logging, identity protection, and on-demand smart asset trading.
And VR/AR-accessible worlds (and corporate campuses) not only demonetize, dematerialize, and delocalize our everyday workplaces, but enrich our physical worlds with AI-driven, context-specific data.
Welcome to the Spatial Web workplace.
Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.
Image Credit: MONOPOLY919 / Shutterstock.com
#433807 The How, Why, and Whether of Custom ...
A digital afterlife may soon be within reach, but it might not be for your benefit.
The reams of data we’re creating could soon make it possible to create digital avatars that live on after we die, aimed at comforting our loved ones or sharing our experience with future generations.
That may seem like a disappointing downgrade from the vision promised by the more optimistic futurists, where we upload our consciousness to the cloud and live forever in machines. But it might be a realistic possibility in the not-too-distant future—and the first steps have already been taken.
After her friend died in a car crash, Eugenia Kuyda, co-founder of Russian AI startup Luka, trained a neural network-powered chatbot on their shared message history to mimic him. Journalist and amateur coder James Vlahos took a more involved approach, carrying out extensive interviews with his terminally ill father so that he could create a digital clone of him when he died.
For those of us without the time or expertise to build our own artificial intelligence-powered avatar, startup Eternime is offering to take your social media posts and interactions as well as basic personal information to build a copy of you that could then interact with relatives once you’re gone. The service is so far only running a private beta with a handful of people, but with 40,000 on its waiting list, it’s clear there’s a market.
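To give a flavor of the simplest version of this idea, the sketch below answers a prompt by retrieving the most similar message from a made-up message history using TF-IDF similarity. Luka and Eternime rely on far more capable neural models; this is only a minimal, assumption-laden stand-in for learning someone’s voice from their digital traces.

```python
# Minimal retrieval "chatbot": reply with the stored message most similar
# to the prompt. The sample history is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

message_history = [
    "Don't worry about the deadline, it always works out.",
    "Let's grab coffee near the office tomorrow.",
    "I loved that hiking trail we did last summer.",
]

vectorizer = TfidfVectorizer()
history_vectors = vectorizer.fit_transform(message_history)


def reply(prompt: str) -> str:
    """Return the message from the history that best matches the prompt."""
    prompt_vector = vectorizer.transform([prompt])
    scores = cosine_similarity(prompt_vector, history_vectors)[0]
    return message_history[scores.argmax()]


print(reply("Want to get coffee sometime?"))
# -> "Let's grab coffee near the office tomorrow."
```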
Comforting—Or Creepy?
The whole idea may seem eerily similar to the Black Mirror episode Be Right Back, in which a woman pays a company to create a digital copy of her deceased husband and eventually a realistic robot replica. And given the show’s focus on the emotional turmoil she goes through, people might question whether the idea is a sensible one.
But it’s hard to say at this stage whether being able to interact with an approximation of a deceased loved one would be a help or a hindrance in the grieving process. The fear is that it could make it harder for people to “let go” or “move on,” but others think it could play a useful therapeutic role, reminding people that just because someone is dead it doesn’t mean they’re gone, and providing a novel way for them to express and come to terms with their feelings.
While at present most envisage these digital resurrections as a way to memorialize loved ones, there are also more ambitious plans to use the technology as a way to preserve expertise and experience. A project at MIT called Augmented Eternity is investigating whether we could use AI to trawl through someone’s digital footprints and extract both their knowledge and elements of their personality.
Project leader Hossein Rahnama says he’s already working with a CEO who wants to leave behind a digital avatar that future executives could consult with after he’s gone. And you wouldn’t necessarily have to wait until you’re dead—experts could create virtual clones of themselves that could dispense advice on demand to far more people. These clones could soon be more than simple chatbots, too. Hollywood has already started spending millions of dollars to create 3D scans of its most bankable stars so that they can keep acting beyond the grave.
It’s easy to see the appeal of the idea; imagine if we could bring back Stephen Hawking to share his wisdom with us. And what if we could create a digital brain trust combining the experience and wisdom of all the world’s greatest thinkers, accessible on demand?
But there are still huge hurdles ahead before we could create truly accurate representations of people by simply trawling through their digital remains. The first problem is data. Most people’s digital footprints only started reaching significant proportions in the last decade or so, and cover a relatively small period of their lives. It could take many years before there’s enough data to create more than just a superficial imitation of someone.
And that’s assuming the data we produce is truly representative of who we are. Carefully crafted Instagram profiles and cautiously worded work emails hardly capture the messy realities of most people’s lives.
Perhaps if the idea is simply to create a bank of someone’s knowledge and expertise, accurately capturing the essence of their character would be less important. But these clones would also be static. Real people continually learn and change, but a digital avatar is a snapshot of someone’s character and opinions at the point they died. An inability to adapt as the world around them changes could put a shelf life on the usefulness of these replicas.
Who’s Calling the (Digital) Shots?
It won’t stop people trying, though, and that raises a potentially more important question: Who gets to make the calls about our digital afterlife? The subjects, their families, or the companies that hold their data?
In most countries, the law is currently pretty hazy on this topic. Companies like Google and Facebook have processes to let you choose who should take control of your accounts in the event of your death. But if you’ve forgotten to do that, the fate of your virtual remains comes down to a tangle of federal law, local law, and tech company terms of service.
This lack of regulation could create incentives and opportunities for unscrupulous behavior. The voice of a deceased loved one could be a highly persuasive tool for exploitation, and digital replicas of respected experts could be powerful means of pushing a hidden agenda.
That means there’s a pressing need for clear and unambiguous rules. Researchers at Oxford University recently suggested ethical guidelines that would treat our digital remains the same way museums and archaeologists are required to treat mortal remains—with dignity and in the interest of society.
Whether those kinds of guidelines are ever enshrined in law remains to be seen, but ultimately they may decide whether the digital afterlife turns out to be heaven or hell.
Image Credit: frankie’s / Shutterstock.com
#433758 DeepMind’s New Research Plan to Make ...
Making sure artificial intelligence does what we want and behaves in predictable ways will be crucial as the technology becomes increasingly ubiquitous. It’s an area frequently neglected in the race to develop products, but DeepMind has now outlined its research agenda to tackle the problem.
AI safety, as the field is known, has been gaining prominence in recent years. That’s probably at least partly down to the overzealous warnings of a coming AI apocalypse from well-meaning but underqualified pundits like Elon Musk and Stephen Hawking. But it’s also recognition of the fact that AI technology is quickly pervading all aspects of our lives, making decisions on everything from what movies we watch to whether we get a mortgage.
That’s why, back in 2016, DeepMind hired a bevy of researchers who specialize in foreseeing the unforeseen consequences of the way we build AI. And now the team has spelled out the three key domains they think require research if we’re going to build autonomous machines that do what we want.
In a new blog designed to provide updates on the team’s work, they introduce the ideas of specification, robustness, and assurance, which they say will act as the cornerstones of their future research. Specification involves making sure AI systems do what their operator intends; robustness means a system can cope with changes to its environment and attempts to throw it off course; and assurance involves our ability to understand what systems are doing and how to control them.
A classic thought experiment about how we could lose control of an AI system helps illustrate the problem of specification. Philosopher Nick Bostrom posited a hypothetical machine charged with making as many paperclips as possible. Because its creators fail to add what they might assume are obvious additional goals, like not harming people, the AI wipes out humanity so we can’t switch it off before turning all matter in the universe into paperclips.
Obviously the example is extreme, but it shows how a poorly specified goal can lead to unexpected and disastrous outcomes. Properly codifying the desires of the designer is no easy feat, though; there are often no neat ways to encompass both explicit and implicit goals in a form the machine can understand without leaving room for ambiguity, so we often rely on incomplete approximations.
The researchers note recent research by OpenAI in which an AI was trained to play a boat-racing game called CoastRunners. The game rewards players for hitting targets laid out along the race route. The AI worked out that it could get a higher score by repeatedly knocking over regenerating targets rather than actually completing the course. The blog post includes a link to a spreadsheet detailing scores of such examples.
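The toy script below captures the gist of that failure: when the reward is “points scored” rather than “course completed,” a policy that loops on a regenerating target looks strictly better to the optimizer. The policies and point values here are invented, not OpenAI’s actual CoastRunners setup.

```python
# Toy illustration of reward misspecification: the reward signal ranks the
# degenerate policy above the one the designers actually wanted.
def points_scored(policy: str, steps: int = 100) -> tuple:
    """Return (score, finished_course) for two hard-coded policies."""
    if policy == "loop_on_targets":
        # Hit the same regenerating target every 5 steps, never finish the race.
        return (steps // 5) * 10, False
    if policy == "finish_course":
        # A handful of targets along the route, plus actually completing it.
        return 3 * 10, True
    raise ValueError(policy)


for policy in ("finish_course", "loop_on_targets"):
    score, finished = points_scored(policy)
    print(f"{policy}: score={score}, finished={finished}")
# finish_course scores 30; loop_on_targets scores 200, so an agent that
# maximizes the specified reward never completes the course.
```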
Another key concern for AI designers is making their creations robust to the unpredictability of the real world. Despite their superhuman abilities on certain tasks, most cutting-edge AI systems are remarkably brittle. They tend to be trained on highly curated datasets and so can fail when faced with unfamiliar input. This can happen by accident or by design—researchers have come up with numerous ways to trick image recognition algorithms into misclassifying things, including thinking a 3D-printed turtle was actually a rifle.
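As a minimal illustration of that brittleness, the sketch below applies the smallest uniform perturbation, in the spirit of the fast gradient sign method, needed to flip a toy linear classifier’s decision. The classifier and input are random stand-ins, not the actual image models behind the turtle example.

```python
# A tiny, targeted nudge to the input flips a toy linear classifier's decision.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=100)   # weights of a toy linear classifier
x = rng.normal(size=100)   # an input it currently classifies as +1 or -1

score = w @ x
label = np.sign(score)

# For a linear model the gradient w.r.t. the input is just w, so the most
# damaging uniform per-pixel nudge follows -sign(w). Use the smallest
# magnitude that flips the decision.
epsilon = 1.01 * abs(score) / np.abs(w).sum()
x_adv = x - epsilon * label * np.sign(w)

print("original prediction:   ", np.sign(w @ x))
print("adversarial prediction:", np.sign(w @ x_adv))
print("per-pixel change:       %.4f" % epsilon)  # a small, near-invisible shift
```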
Building systems that can deal with every possible encounter may not be feasible, so a big part of making AIs more robust may be getting them to avoid risks and ensuring they can recover from errors, or that they have failsafes to ensure errors don’t lead to catastrophic failure.
And finally, we need to have ways to make sure we can tell whether an AI is performing the way we expect it to. A key part of assurance is being able to effectively monitor systems and interpret what they’re doing—if we’re basing medical treatments or sentencing decisions on the output of an AI, we’d like to see the reasoning. That’s a major outstanding problem for popular deep learning approaches, which are largely indecipherable black boxes.
The other half of assurance is the ability to intervene if a machine isn’t behaving the way we’d like. But designing a reliable off switch is tough, because most learning systems have a strong incentive to prevent anyone from interfering with their goals.
The authors don’t pretend to have all the answers, but they hope the framework they’ve come up with can help guide others working on AI safety. While it may be some time before AI is truly in a position to do us harm, hopefully early efforts like these will mean it’s built on a solid foundation that ensures it is aligned with our goals.
Image Credit: cono0430 / Shutterstock.com