Tag Archives: spaces
This week, the widely anticipated fifth season of the dystopian series Black Mirror was released on Netflix. The storylines this season are less focused on far-out scenarios and more closely aligned with current issues. With only three episodes, the season raises more questions than it answers, often leaving audiences bewildered.
The episode Smithereens explores our society’s crippling addiction to social media platforms and the monopoly they hold over our data. In Rachel, Jack and Ashley Too, we see the disruptive impact of technologies on the music and entertainment industry, and the price of fame for artists in the digital world. Like most Black Mirror episodes, these explore the sometimes disturbing implications of tech advancements on humanity.
But once again, in the midst of all the doom and gloom, the creators of the series leave us with a glimmer of hope. Aligned with Pride month, the episode Striking Vipers explores the impact of virtual reality on love, relationships, and sexual fluidity.
*The review contains a few spoilers.*
The first episode of the season, Striking Vipers, may be one of the most thought-provoking episodes in Black Mirror history. Reminiscent of previous episodes San Junipero and Hang the DJ, the writers explore the potential for technology to transform human intimacy.
The episode tells the story of two old friends, Danny and Karl, whose friendship is reignited in an unconventional way. Karl unexpectedly appears at Danny's 38th birthday party and reintroduces him to the VR version of a game they used to play years before. In the game Striking Vipers X, each player is represented by an avatar of their choice in an uncanny digital reality. Following their old tradition, Karl chooses to become the female fighter, Roxanne, and Danny takes on the role of the male fighter, Lance. The state-of-the-art VR headsets appear to use an advanced form of brain-machine interface to allow each player to be fully immersed in the virtual world, emulating all physical sensations.
To their surprise (and confusion), Danny and Karl find themselves transitioning from fist-fighting to kissing. Over the course of many games, they continue to explore a sexual and romantic relationship in the virtual world, leaving them confused and distant in the real world. The virtual and physical realities begin to blur, and so do the identities of the players with their avatars. Danny, who is married (in a heterosexual relationship) and is a father, begins to carry guilt and confusion in the real world. They both wonder if there would be any spark between them in real life.
The brain-machine interface (BMI) depicted in the episode is still science fiction, but that hasn’t stopped innovators from pushing the technology forward. Experts today are designing more intricate BMI systems while programming better algorithms to interpret the neural signals they capture. Scientists have already succeeded in enabling paralyzed patients to type with their minds, and are even allowing people to communicate with one another purely through brainwaves.
The convergence of BMIs with virtual reality and artificial intelligence could make the experience of such immersive digital realities possible. Virtual reality, too, is decreasing exponentially in cost and increasing in quality.
The narrative provides meaningful commentary on another tech area—gaming. It highlights video games not necessarily as addictive distractions, but rather as a platform for connecting with others in a deeper way. This is already very relevant. Video games like Final Fantasy are often a tool for meaningful digital connections for their players.
The Implications of Virtual Reality on Love and Relationships
The narrative of Striking Vipers raises many novel questions about the implications of immersive technologies on relationships: could the virtual world allow us a safe space to explore suppressed desires? Can virtual avatars make it easier for us to show affection to those we care about? Can a sexual or romantic encounter in the digital world be considered infidelity?
Above all, the episode explores the therapeutic possibilities of such technologies. While many fears about virtual reality had been raised in previous seasons of Black Mirror, this episode was focused on its potential. This includes the potential of immersive technology to be a source of liberation, meaningful connections, and self-exploration, as well as a tool for realizing our true identities and desires.
Once again, this is aligned with emerging trends in VR. We are seeing the rise of social VR applications and platforms that allow you to hang out with friends and family as avatars in a virtual space. The technology is allowing animated movies, such as Coco VR, to become increasingly social and interactive experiences. Considering that meaningful social interaction can alleviate depression and anxiety, such applications could contribute to well-being.
Techno-philosopher and National Geographic host Jason Silva points out that immersive media technologies can be “engines of empathy.” VR allows us to enter virtual spaces that mimic someone else’s state of mind, allowing us to empathize with the way they view the world. Silva said, “Imagine the intimacy that becomes possible when people meet and they say, ‘Hey, do you want to come visit my world? Do you want to see what it’s like to be inside my head?’”
What is most fascinating about Striking Vipers is that it explores how we may redefine love with virtual reality; we are introduced to love between virtual avatars. While this kind of love may seem confusing to audiences, it may be one of the complex implications of virtual reality on human relationships.
In many ways, the title Black Mirror couldn’t be more appropriate, as each episode serves as a mirror to the most disturbing aspects of our psyches as they get amplified through technology. However, what we see in uplifting and thought-provoking plots like Striking Vipers, San Junipero, and Hang The DJ is that technology could also amplify the most positive aspects of our humanity. This includes our powerful capacity to love.
Image Credit: Arsgera / Shutterstock.com
In the wake of the housing market collapse of 2008, one entrepreneur decided to dive right into the failing real estate industry. But this time, he didn’t buy any real estate to begin with. Instead, Glenn Sanford decided to launch the first-ever cloud-based real estate brokerage, eXp Realty.
Contracting virtual platform VirBELA to build out the company’s mega-campus in VR, eXp Realty demonstrates the power of a dematerialized workspace, throwing out hefty overhead costs and fundamentally redefining what ‘real estate’ really means. Ten years later, eXp Realty has an army of 14,000 agents across all 50 US states, 3 Canadian provinces, and 400 MLS market areas… all without a single physical office.
But VR is just one of many exponential technologies converging to revolutionize real estate and construction. As floating cities and driverless cars spread out your living options, AI and VR are together cutting out the middleman.
Already, the global construction industry is projected to surpass $12.9 trillion in 2022, and the total value of the US housing market alone grew to $33.3 trillion last year. Both vital for our daily lives, these industries will continue to explode in value, posing countless possibilities for disruption.
In this blog, I’ll be discussing the following trends:
New prime real estate locations;
Disintermediation of the real estate broker and search;
Materials science and 3D printing in construction.
Let’s dive in!
Location, Location, Location
Until today, location has been the name of the game when it comes to hunting down the best real estate. But constraints on land often drive up costs while limiting options, and urbanization is only exacerbating the problem.
Beyond the world of virtual real estate, two primary mechanisms are driving the creation of new locations.
(1) Floating Cities
Floating cities, offshore habitation hubs, have long been conceived as a solution to rising sea levels, skyrocketing urban populations, and threatened ecosystems. If successful, they will soon unlock an abundance of prime real estate, whether for scenic living, commerce, education, or recreation.
One pioneering model is that of Oceanix City, designed by Danish architect Bjarke Ingels and a host of other domain experts. Intended to adapt organically over time, Oceanix would consist of a galaxy of mass-produced, hexagonal floating modules, built as satellite “cities” off coastal urban centers and sustained by renewable energies.
While individual 4.5-acre platforms would each sustain 300 people, these hexagonal modules are designed to link into 75-acre tessellations sustaining up to 10,000 residents. Each anchored to the ocean floor using biorock, Oceanix cities are slated to be nearly closed-loop systems, with any external resources they need supplied by automated drone networks.
Electric boats or flying cars might zoom you to work, city-embedded water capture technologies would provide your water, vertical and outdoor farming would supply your family meals, and share economies would dominate the provision of goods.
AERIAL: Located in calm, sheltered waters, near coastal megacities, OCEANIX City will be an adaptable, sustainable, scalable, and affordable solution for human life on the ocean. Image Credit: OCEANIX/BIG-Bjarke Ingels Group.
Joined by countless government officials whose island nations risk submersion under rising seas, the UN is now getting on board. And just this year, seasteading began exiting the realm of science fiction and testing practical waters.
As French Polynesia seeks out robust solutions to sea level rise, its government has joined forces with the San Francisco-based Seasteading Institute. With a newly designated special economic zone and 100 acres of beachfront, this joint Floating Island Project could even see up to a dozen inhabitable structures by 2020. And what better way to fund the $60 million project than the team's upcoming ICO?
But aside from creating new locations, autonomous vehicles (AVs) and flying cars are turning previously low-demand land into the prime real estate of tomorrow.
(2) Autonomous Electric Vehicles and Flying Cars
Today, the value of a location is a function of its proximity to your workplace, your city’s central business district, the best schools, or your closest friends.
But what happens when driverless cars desensitize you to distance, or Hyperloop and flying cars decimate your commute time? Historically, every time new transit methods have hit the mainstream, tolerance for distance has opened up right alongside them, further catalyzing city spread.
And just as Hyperloop and the Boring Company aim to make your commute immaterial, autonomous vehicle (AV) ridesharing services will spread out cities in two ways: (1) by drastically reducing parking spaces needed (vertical parking decks = more prime real estate); and (2) by untethering you from the steering wheel. Want an extra two hours of sleep on the way to work? Schedule a sleeper AV and nap on your route to the office. Need a car-turned-mobile-office? No problem.
Meanwhile, aerial taxis (i.e. flying cars) will allow you to escape ground congestion entirely, delivering you from bedroom to boardroom at decimated time scales.
Already working with regulators, Uber Elevate has staked ambitious plans for its UberAIR airborne taxi project. By 2023, Uber anticipates rolling out flying taxis in its first two pilot cities, Los Angeles and Dallas. Flying between rooftop skyports, these aircraft would carry passengers at a height of 1,000 to 2,000 feet and at speeds between 100 and 200 mph. And while costs per ride are anticipated to resemble those of an Uber Black based on mileage, prices are projected to soon drop to those of an UberX.
But the true economic feat boils down to this: if I were to commute 50 to 100 kilometers, I could get two or three times the house for the same price. (Not to mention the extra living space offered up by my now-unneeded garage.)
All of a sudden, virtual reality, broadband, AVs, and high-speed vehicles are going to change where we live and where we work. Rather than crowding into a dense urban core for access to jobs and entertainment, our future of personalized, autonomous, low-cost transport opens the luxury of rural living to all without compromising the benefits of a short commute.
Once these drivers multiply your real estate options, how will you select your next home?
Disintermediation: Say Bye to Your Broker
In a future of continuous and personalized preference-tracking, why hire a human agent who knows less about your needs and desires than a personal AI?
Just as disintermediation is cutting out bankers and insurance agents, so too is it closing in on real estate brokers. Over the next decade, as AI becomes your agent, VR will serve as your medium.
To paint a more vivid picture of how this will look, over 98 percent of your home search will be conducted from the comfort of your couch through next-generation VR headgear.
Once you've verbalized your primary desires for home location, finishes, size, etc. to your personal AI, it will offer you top picks, tourable 24/7, with optional assistance from a virtual guide and constantly updated data. As a seller, this means potential buyers from two miles, or two continents, away.
Throughout each immersive VR tour, advanced eye-tracking software and a permissioned machine learning algorithm follow your gaze, further learn your likes and dislikes, and intelligently recommend other homes or commercial residences to visit.
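As a rough sketch of the kind of preference learning described above (a toy Python example with hypothetical feature names, dwell times, and weights, not any company's actual algorithm), gaze dwell time logged during a tour could be folded into a simple preference profile used to re-rank other listings:

```python
from collections import defaultdict

# Hypothetical gaze data: seconds of dwell time per feature during one VR tour
tour_dwell_seconds = {"chef_kitchen": 45, "small_backyard": 3, "open_floor_plan": 30}

def update_preferences(prefs, dwell, learning_rate=0.1):
    """Nudge each feature's weight toward the share of attention it received."""
    total = sum(dwell.values())
    for feature, seconds in dwell.items():
        prefs[feature] += learning_rate * (seconds / total)
    return prefs

def score_listing(prefs, listing_features):
    """Score a candidate listing by how well its features match learned preferences."""
    return sum(prefs.get(feature, 0.0) for feature in listing_features)

prefs = update_preferences(defaultdict(float), tour_dwell_seconds)
candidates = {
    "suburban_ranch": ["chef_kitchen", "large_backyard"],
    "downtown_loft": ["open_floor_plan", "city_view"],
}
ranked = sorted(candidates, key=lambda name: score_listing(prefs, candidates[name]),
                reverse=True)
print(ranked)  # ['suburban_ranch', 'downtown_loft'] given the dwell data above
```

A real recommendation engine would use far richer signals than this, but the loop is the same: observe attention, update a profile, re-rank the inventory.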
Curious as to what the living room might look like with a fresh coat of blue paint and a white carpet? No problem! VR programs will be able to modify rendered environments instantly, changing countless variables, from furniture materials to even the sun’s orientation. Keen to input your own furniture into a VR-rendered home? Advanced AIs could one day compile all your existing furniture, electronics, clothing, decorations, and even books, virtually organizing them across any accommodating new space.
As 3D scanning technologies make extraordinary headway, VR renditions will only grow cheaper and higher resolution. One company called Immersive Media (disclosure: I’m an investor and advisor) has a platform for 360-degree video capture and distribution, and is already exploring real estate 360-degree video.
Smaller firms like Studio 216, Vieweet, Arch Virtual, ArX Solutions, and Rubicon Media can similarly capture and render models of various properties for clients and investors to view and explore. In essence, VR real estate platforms will allow you to explore any home for sale, do the remodel, and determine if it truly is the house of your dreams.
Once you're ready to make a bid, your AI will even help estimate one, then process and submit your offer. Real estate companies like Zillow, Trulia, Move, Redfin, ZipRealty (acquired by Realogy in 2014), and many others have already invested millions in machine learning applications to make search, valuation, consulting, and property management easier, faster, and much more accurate.
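To make the bid-estimation step concrete, here is a minimal comparable-sales sketch in Python with made-up numbers; it is purely illustrative, not how Zillow or any of the companies above actually compute valuations:

```python
from statistics import median

# Hypothetical recent nearby sales: (sale_price_usd, square_feet)
comps = [
    (450_000, 1_800),
    (520_000, 2_100),
    (610_000, 2_400),
]

def estimate_bid(comps, target_sqft):
    """Median price per square foot of the comps, scaled to the target home's size."""
    price_per_sqft = median(price / sqft for price, sqft in comps)
    return price_per_sqft * target_sqft

print(round(estimate_bid(comps, target_sqft=2_000)))  # 500000 given the comps above
```

A production valuation model would of course weigh location, condition, and market trends as well, which is where the heavy machine learning investment comes in.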
But what happens if the home you desire most means starting from scratch with new construction?
New Methods and Materials for Construction
For thousands of years, we’ve been constrained by the construction materials of nature. We built bricks from naturally abundant clay and shale, used tree limbs as our rooftops and beams, and mastered incredible structures in ancient Rome with the use of cement.
But construction is now on the cusp of a materials science revolution. Today, I’d like to focus on three key materials:
Imagine if you could turn the world's greatest waste products into its most essential building blocks. Thanks to the UCLA researchers behind CO2NCRETE, we can already do this with carbon emissions.
Today, concrete production generates about five percent of all greenhouse gas (GHG) emissions. But what if concrete could instead capture those emissions? CO2NCRETE's engineers capture carbon from smokestacks and combine it with lime to create a new type of cement. The lab's 3D printers then shape the upcycled concrete into entirely new structures. Once mastered at scale, upcycled concrete will turn a former polluter into a future carbon sink.
Or what if we wanted to print new residences from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute of Advanced Architecture of Catalonia (IAAC) is already working on a solution.
In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation, and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.
Nano- and micro-materials are ushering in a new era of smart, super-strong, and self-charging buildings. While carbon nanotubes dramatically increase the strength-to-weight ratio of skyscrapers, revolutionizing their structural flexibility, nanomaterials don’t stop here.
Several research teams are pioneering silicon nanoparticles to capture everyday light flowing through our windows. Little solar cells at the edges of windows then harvest this energy for ready use. Researchers at the US National Renewable Energy Lab have developed similar smart windows. Turning into solar panels when bathed in sunlight, these thermochromic windows will power our buildings, changing color as they do.
The American Society of Civil Engineers estimates that the US needs to spend roughly $4.5 trillion to fix nationwide roads, bridges, dams, and common infrastructure by 2025. But what if infrastructure could fix itself?
Enter self-healing concrete. Engineers at Delft University have developed bio-concrete that can repair its own cracks. As head researcher Henk Jonkers explains, “What makes this limestone-producing bacteria so special is that they are able to survive in concrete for more than 200 years and come into play when the concrete is damaged. […] If cracks appear as a result of pressure on the concrete, the concrete will heal these cracks itself.”
But bio-concrete is only the beginning of self-healing technologies. As futurist architecture firms start printing plastic and carbon-fiber houses (using Branch Technologies' 3D printing technology), engineers have begun tackling self-healing plastic.
And in a bid to go smart, burgeoning construction projects have started embedding sensors for preemptive detection. Beyond materials and sensors, however, construction methods are fast colliding into robotics and 3D printing.
While some startups and research institutes have leveraged robot swarm construction (namely, Harvard’s robotic termite-like swarm of programmed constructors), others have taken to large-scale autonomous robots.
One such example involves Fastbrick Robotics. After multiple iterations, the company's Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and directs blocks through its robotic arm in accordance with a 3D model.
The Hadrian X's layhead. Image Credit: Fastbrick Robotics.
Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.
Imagine the implications. Eliminating human safety concerns and unlocking any environment, autonomous builder robots could collaboratively build massive structures in space or deep underwater habitats.
Where, how, and what we live in form a vital pillar of our everyday lives. The concept of “home” is unlikely to disappear anytime soon. At the same time, real estate and construction are two of the biggest playgrounds for technological convergence, each on the verge of revolutionary disruption.
As underlying shifts in transportation, land reclamation, and the definition of “space” (real vs. virtual) take hold, the real estate market is about to explode in value, spreading out urban centers on unprecedented scales and unlocking vast new prime “property.”
Meanwhile, converging advancements in AI and VR are fundamentally disrupting the way we design, build, and explore new residences. Just as mirror worlds create immersive, virtual real estate economies, VR tours and AI agents are absorbing both sides of the coin to entirely obliterate the middleman.
And as materials science breakthroughs meet new modes of construction, the only limits to tomorrow’s structures are those of our own imagination.
Image Credit: OCEANIX/BIG-Bjarke Ingels Group.
Convergence is accelerating disruption… everywhere! Exponential technologies are colliding into each other, reinventing products, services, and industries.
In this third installment of my Convergence Catalyzer series, I’ll be synthesizing key insights from my annual entrepreneurs’ mastermind event, Abundance 360. This five-blog series looks at 3D printing, artificial intelligence, VR/AR, energy and transportation, and blockchain.
Today, let’s dive into virtual and augmented reality.
Today’s most prominent tech giants are leaping onto the VR/AR scene, each driving forward new and upcoming product lines. Think: Microsoft’s HoloLens, Facebook’s Oculus, Amazon’s Sumerian, and Google’s Cardboard (Apple plans to release a headset by 2021).
And as plummeting prices meet exponential advancements in VR/AR hardware, this burgeoning disruptor is on its way out of the early adopters’ market and into the majority of consumers’ homes.
My good friend Philip Rosedale is my go-to expert on AR/VR and one of the foremost creators of today's most cutting-edge virtual worlds. After creating the virtual civilization Second Life in 2003, now populated by almost 1 million active users, Philip went on to co-found High Fidelity, which explores the future of next-generation shared VR.
In just the next five years, he predicts five emerging trends will take hold, together disrupting major players and birthing new ones.
Let’s dive in…
Top 5 Predictions for VR/AR Breakthroughs (2019-2024)
“If you think you kind of understand what’s going on with that tech today, you probably don’t,” says Philip. “We’re still in the middle of landing the airplane of all these new devices.”
(1) Transition from PC-based to standalone mobile VR devices
Historically, VR devices have relied on PC connections, usually involving wires and clunky hardware that restrict a user’s field of motion. However, as VR enters the dematerialization stage, we are about to witness the rapid rise of a standalone and highly mobile VR experience economy.
Oculus Go, the leading standalone mobile VR device on the market, requires only a mobile app for setup and can be transported anywhere with WiFi.
With a consumer audience in mind, the 32GB headset is priced at $200 and shares an app ecosystem with Samsung's Gear VR. Google Daydream headsets offer a similarly untethered experience, but they require a docked mobile phone rather than the Oculus Go's built-in screen.
In the AR space, standalone headsets like Microsoft's HoloLens 2 lead the way in providing tetherless experiences.
Freeing headsets from the constraints of heavy hardware will make VR/AR increasingly interactive and transportable, a seamless add-on whenever, wherever. Within a matter of years, it may be as simple as carrying lightweight VR goggles wherever you go and throwing them on at a moment’s notice.
(2) Wide field-of-view AR displays
Microsoft’s HoloLens 2 leads the AR industry in headset comfort and display quality. The most significant issue with their prior version was the limited rectangular field of view (FOV).
By implementing laser technology to create a microelectromechanical systems (MEMS) display, however, HoloLens 2 can position waveguides in front of users' eyes, directed by mirrors. Images can then be enlarged simply by shifting the angles of these mirrors. Coupled with a 47-pixels-per-degree resolution, HoloLens 2 has now doubled its predecessor's FOV. Microsoft anticipates releasing the headset by the end of this year at a $3,500 price point, first targeting businesses and eventually rolling it out to consumers.
Magic Leap provides a similar FOV but with lower resolution than the HoloLens 2. The Meta 2 boasts an even wider 90-degree FOV, but requires a cable attachment. The race to achieve the natural human 120-degree horizontal FOV continues.
“The technology to expand the field of view is going to make those devices much more usable by giving you bigger than a small box to look through,” Rosedale explains.
(3) Mapping of real world to enable persistent AR ‘mirror worlds’
‘Mirror worlds’ are alternative dimensions of reality that can blanket a physical space. While seated in your office, the floor beneath you could dissolve into a calm lake and each desk into a sailboat. In the classroom, mirror worlds would convert pencils into magic wands and tabletops into touch screens.
Pokémon Go provides an introductory glimpse into the mirror world concept and its massive potential to unite people in real action.
To create these mirror worlds, AR headsets must precisely understand the architecture of the surrounding world. Rosedale predicts the scanning accuracy of devices will improve rapidly over the next five years to make these alternate dimensions possible.
(4) 5G mobile devices reduce latency to imperceptible levels
Verizon has already launched 5G networks in Minneapolis and Chicago, compatible with the Moto Z3. Sprint plans to follow with its own 5G launch in May. Samsung, LG, Huawei, and ZTE have all announced upcoming 5G devices.
“5G is rolling out this year and it’s going to materially affect particularly my work, which is making you feel like you’re talking to somebody else directly face to face,” explains Rosedale. “5G is critical because currently the cell devices impose too much delay, so it doesn’t feel real to talk to somebody face to face on these devices.”
To operate seamlessly from anywhere on the planet, standalone VR/AR devices will require a strong 5G network. Enhancing real-time connectivity in VR/AR will transform the communication methods of tomorrow.
(5) Eye-tracking and facial expressions built in for full natural communication
Companies like Pupil Labs and Tobii provide eye tracking hardware add-ons and software to VR/AR headsets. This technology allows for foveated rendering, which renders a given scene in high resolution only in the fovea region, while the peripheral regions appear in lower resolution, conserving processing power.
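As a minimal sketch of the idea (illustrative Python only; the fovea radius, falloff, and scale values are assumptions, not any headset's actual numbers), a foveated renderer scales image detail by angular distance from the tracked gaze point:

```python
import numpy as np

def foveated_resolution_scale(angle_from_gaze_deg, fovea_radius_deg=5.0,
                              max_eccentricity_deg=60.0, min_scale=0.25):
    """Return a per-region resolution scale between min_scale and 1.0.

    Regions within fovea_radius_deg of the gaze point render at full
    resolution; the scale falls off linearly toward the periphery.
    All numbers are illustrative assumptions, not headset specifications.
    """
    falloff = np.clip(
        (angle_from_gaze_deg - fovea_radius_deg)
        / (max_eccentricity_deg - fovea_radius_deg),
        0.0, 1.0,
    )
    return 1.0 - (1.0 - min_scale) * falloff

# Angular distance (in degrees) of a few screen regions from the gaze point
angles = np.array([0.0, 3.0, 10.0, 30.0, 60.0])
print(foveated_resolution_scale(angles))
# -> [1.0, 1.0, ~0.93, ~0.66, 0.25]: full detail at the fovea, coarse at the edges
```

The savings come from shading most of the frame at the lower scale while the viewer only ever looks directly at the full-resolution region.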
As seen in the HoloLens 2, eye tracking can also be used to identify users and customize lens widths to provide a comfortable, personalized experience for each individual.
According to Rosedale, “The fundamental opportunity for both VR and AR is to improve human communication.” He points out that current VR/AR headsets miss many of the subtle yet important aspects of communication. Eye movements and microexpressions provide valuable insight into a user’s emotions and desires.
Coupled with emotion-detecting AI software, such as Affectiva, VR/AR devices might soon convey much more richly textured and expressive interactions between any two people, transcending physical boundaries and even language gaps.
As these promising trends begin to transform the market, VR/AR will undoubtedly revolutionize our lives… possibly to the point at which our virtual worlds become just as consequential and enriching as our physical world.
A boon for next-gen education, VR/AR will empower youth and adults alike with holistic learning that incorporates social, emotional, and creative components through visceral experiences, storytelling, and simulation. Traveling to another time, manipulating the insides of a cell, or even designing a new city will become daily phenomena of tomorrow’s classrooms.
In real estate, buyers will increasingly make decisions through virtual tours. Corporate offices might evolve into spaces that only exist in ‘mirror worlds’ or grow virtual duplicates for remote workers.
In healthcare, accuracy of diagnosis will skyrocket, while surgeons gain access to digital aids as they conduct life-saving procedures. Or take manufacturing, wherein training and assembly will become exponentially more efficient as visual cues guide complex tasks.
In the mere matter of a decade, VR and AR will unlock limitless applications for new and converging industries. And as virtual worlds converge with AI, 3D printing, computing advancements and beyond, today’s experience economies will explode in scale and scope. Prepare yourself for the exciting disruption ahead!
Image Credit: Mariia Korneeva / Shutterstock.com
Deep learning is powering some amazing new capabilities, but we find it hard to scrutinize the workings of these algorithms. Lack of interpretability in AI is a common concern and many are trying to fix it, but is it really always necessary to know what’s going on inside these “black boxes”?
In a recent perspective piece for Science, Elizabeth Holm, a professor of materials science and engineering at Carnegie Mellon University, argued in defense of the black box algorithm. I caught up with her last week to find out more.
Edd Gent: What’s your experience with black box algorithms?
Elizabeth Holm: I got a dual PhD in materials science and engineering and scientific computing. I came to academia about six years ago and part of what I wanted to do in making this career change was to refresh and revitalize my computer science side.
I realized that computer science had changed completely. It used to be about algorithms and making codes run fast, but now it’s about data and artificial intelligence. There are the interpretable methods like random forest algorithms, where we can tell how the machine is making its decisions. And then there are the black box methods, like convolutional neural networks.
Once in a while we can find some information about their inner workings, but most of the time we have to accept their answers and kind of probe around the edges to figure out the space in which we can use them and how reliable and accurate they are.
EG: What made you feel like you had to mount a defense of these black box algorithms?
EH: When I started talking with my colleagues, I found that the black box nature of many of these algorithms was a real problem for them. I could understand that because we’re scientists, we always want to know why and how.
It got me thinking as a bit of a contrarian, “Are black boxes all bad? Must we reject them?” Surely not, because human thought processes are fairly black box. We often rely on human thought processes that the thinker can’t necessarily explain.
It’s looking like we’re going to be stuck with these methods for a while, because they’re really helpful. They do amazing things. And so there’s a very pragmatic realization that these are the best methods we’ve got to do some really important problems, and we’re not right now seeing alternatives that are interpretable. We’re going to have to use them, so we better figure out how.
EG: In what situations do you think we should be using black box algorithms?
EH: I came up with three rules. The simplest rule is: when the cost of a bad decision is small and the value of a good decision is high, it's worth it. The example I gave in the paper is targeted advertising. If you send an ad no one wants, it doesn't cost a lot. If you're the receiver, it doesn't cost a lot to get rid of it.
There are cases where the cost is high, and that's when we choose the black box if it's the best option to do the job. Things get a little trickier here because we have to ask "what are the costs of bad decisions, and do we really have them fully characterized?" We also have to be very careful knowing that our systems may have biases, they may have limitations in where you can apply them, they may be breakable.
But at the same time, there are certainly domains where we're going to test these systems so extensively that we know their performance in virtually every situation. And if their performance is better than the other methods, we need to do it. Self-driving vehicles are a significant example—it's almost certain they're going to have to use black box methods, and that they're going to end up being better drivers than humans.
The third rule is the more fun one for me as a scientist, and that’s the case where the black box really enlightens us as to a new way to look at something. We have trained a black box to recognize the fracture energy of breaking a piece of metal from a picture of the broken surface. It did a really good job, and humans can’t do this and we don’t know why.
What the computer seems to be seeing is noise. There’s a signal in that noise, and finding it is very difficult, but if we do we may find something significant to the fracture process, and that would be an awesome scientific discovery.
EG: Do you think there’s been too much emphasis on interpretability?
EH: I think the interpretability problem is a fundamental, fascinating computer science grand challenge and there are significant issues where we need to have an interpretable model. But how I would frame it is not that there’s too much emphasis on interpretability, but rather that there’s too much dismissiveness of uninterpretable models.
I think that some of the current social and political issues surrounding some very bad black box outcomes have convinced people that all machine learning and AI should be interpretable because that will somehow solve those problems.
Asking humans to explain their rationale has not eliminated bias, or stereotyping, or bad decision-making in humans. Relying too much on interpretability perhaps puts the responsibility in the wrong place for getting better results. I can make a better black box without knowing exactly in what way the first one was bad.
EG: Looking further into the future, do you think there will be situations where humans will have to rely on black box algorithms to solve problems we can’t get our heads around?
EH: I do think so, and it’s not as much of a stretch as we think it is. For example, humans don’t design the circuit map of computer chips anymore. We haven’t for years. It’s not a black box algorithm that designs those circuit boards, but we’ve long since given up trying to understand a particular computer chip’s design.
With the billions of circuits in every computer chip, the human mind can’t encompass it, either in scope or just the pure time that it would take to trace every circuit. There are going to be cases where we want a system so complex that only the patience that computers have and their ability to work in very high-dimensional spaces is going to be able to do it.
So we can continue to argue about interpretability, but we need to acknowledge that we're going to need to use black boxes. And this is our opportunity to do our due diligence to understand how to use them responsibly, ethically, and with benefits rather than harm. And that's going to be a social conversation as well as a scientific one.
*Responses have been edited for length and style.
Image Credit: Chingraph / Shutterstock.com
Swarms of microrobots will scuttle along beneath our roads and pavements, finding and fixing leaky pipes and faulty cables. Thanks to their efforts, we could avoid the road excavations that cost billions of dollars each year, not to mention the frustrating traffic delays they cause.
That is, if a new project sponsored by the U.K. government is a success. Recent developments in the space seem to point towards a bright future for microrobots.
Microrobots Saving Billions
Each year, around 1.5 million road excavations take place across the U.K. Many are due to leaky pipes and faulty cables that necessitate excavation of road surfaces in order to fix them. The resulting repairs, alongside disruptions to traffic and businesses, are estimated to cost a whopping £6.3 billion ($8 billion).
A consortium of scientists, led by University of Sheffield Professor Kirill Horoshenkov, is planning to use microrobots to negate most of these costs. The group has received a £7.2 million ($9.2 million) grant to develop and build their bots.
According to Horoshenkov, the microrobots will come in two versions. One is an inspection bot, which will navigate along underground infrastructure and examine its condition via sonar. The inspectors will be complemented by worker bots capable of carrying out repairs with cement and adhesives or cleaning out blockages with a high-powered jet. The inspector bots will be around one centimeter long and possibly autonomous, while the worker bots will be slightly larger and steered via remote control.
If successful, it is believed the bots could potentially save the U.K. economy around £5 billion ($6.4 billion) a year.
The U.K. government has set aside a further £19 million ($24 million) for research into robots for hazardous environments, such as nuclear decommissioning, drones for oil pipeline monitoring, and artificial intelligence software to detect the need for repairs on satellites in orbit.
The Lowest-Hanging Fruit
Microrobots like the ones now under development in the U.K. have many potential advantages and use cases. Thanks to their small size they can navigate tight spaces, for example in search and rescue operations, and robot swarm technology would allow them to collaborate to perform many different functions, including in construction projects.
To date, the number of microrobots in use is relatively limited, but that could be about to change, with bots closing in on other types of inspection jobs, arguably one of the lowest-hanging fruits.
Engineering firm Rolls-Royce (not the car company, but the one that builds aircraft engines) is looking to use microrobots to inspect some of the up to 25,000 individual parts that make up an engine. The microrobots use the cockroach as a model, and Rolls-Royce believes they could save engineers time when performing the maintenance checks that can take over a month per engine.
Even Smaller Successes
Going further down in scale, recent years have seen a string of successes for nanobots. For example, a team of researchers at the Femto-ST Institute has used nanobots to build what is likely the world's smallest house (if this isn't a category at Guinness, someone needs to get on the phone with them), which stands a 'towering' 0.015 millimeters tall.
One of the areas where nanobots have shown great promise is in medicine. Several studies have shown how the minute bots are capable of delivering drugs directly into dense biological tissue, which can otherwise be highly challenging to target directly. Such delivery systems have a great potential for improving the treatment of a wide range of ailments and illnesses, including cancer.
There’s no question that the ecosystem of microrobots and nanobots is evolving. While still in their early days, the above successes point to a near-future boom in the bots we may soon refer to as our ‘littlest everyday helpers.’
Image Credit: 5nikolas5 / Shutterstock.com