#435152 The Futuristic Tech Disrupting Real ...

In the wake of the housing market collapse of 2008, one entrepreneur decided to dive right into the failing real estate industry. But this time, he didn’t buy any real estate to begin with. Instead, Glenn Sanford decided to launch the first-ever cloud-based real estate brokerage, eXp Realty.

Contracting virtual platform VirBELA to build out the company’s mega-campus in VR, eXp Realty demonstrates the power of a dematerialized workspace, throwing out hefty overhead costs and fundamentally redefining what ‘real estate’ really means. Ten years later, eXp Realty has an army of 14,000 agents across all 50 US states, 3 Canadian provinces, and 400 MLS market areas… all without a single physical office.

But VR is just one of many exponential technologies converging to revolutionize real estate and construction. As floating cities and driverless cars spread out your living options, AI and VR are together cutting out the middleman.

Already, the global construction industry is projected to surpass $12.9 trillion in 2022, and the total value of the US housing market alone grew to $33.3 trillion last year. Both vital for our daily lives, these industries will continue to explode in value, posing countless possibilities for disruption.

In this blog, I’ll be discussing the following trends:

New prime real estate locations;
Disintermediation of the real estate broker and search;
Materials science and 3D printing in construction.

Let’s dive in!

Location, Location, Location
Until now, location has been the name of the game in hunting down the best real estate. But constraints on land often drive up costs while limiting options, and urbanization is only exacerbating the problem.

Beyond the world of virtual real estate, two primary mechanisms are driving the creation of new locations.

(1) Floating Cities

Offshore habitation hubs, floating cities have long been conceived as a solution to rising sea levels, skyrocketing urban populations, and threatened ecosystems. If successful, they will soon unlock an abundance of prime real estate, whether for scenic living, commerce, education, or recreation.

One pioneering model is that of Oceanix City, designed by Danish architect Bjarke Ingels and a host of other domain experts. Intended to adapt organically over time, Oceanix would consist of a galaxy of mass-produced, hexagonal floating modules, built as satellite “cities” off coastal urban centers and sustained by renewable energies.

While individual 4.5-acre platforms would each sustain 300 people, these hexagonal modules are designed to link into 75-acre tessellations sustaining up to 10,000 residents. Each anchored to the ocean floor using biorock, Oceanix cities are slated to be closed-loop systems, as external resources are continuously supplied by automated drone networks.

Electric boats or flying cars might zoom you to work, city-embedded water-capture technologies would provide your water, vertical and outdoor farming would supply your family's meals, and share economies would dominate goods provision.

AERIAL: Located in calm, sheltered waters, near coastal megacities, OCEANIX City will be an adaptable, sustainable, scalable, and affordable solution for human life on the ocean. Image Credit: OCEANIX/BIG-Bjarke Ingels Group.
Backed by government officials whose island nations risk submersion under rising seas, the UN is now getting on board. And just this year, seasteading began exiting the realm of science fiction to test practical waters.

As French Polynesia seeks out robust solutions to sea level rise, its government has joined forces with the San Francisco-based Seasteading Institute. With a newly designated special economic zone and 100 acres of beachfront, this joint Floating Island Project could see up to a dozen inhabitable structures by 2020. And what better way to fund the $60 million project than the team's upcoming ICO?

But aside from creating new locations, autonomous vehicles (AVs) and flying cars are turning previously low-demand land into the prime real estate of tomorrow.

(2) Autonomous Electric Vehicles and Flying Cars

Today, the value of a location is a function of its proximity to your workplace, your city’s central business district, the best schools, or your closest friends.

But what happens when driverless cars desensitize you to distance, or Hyperloop and flying cars decimate your commute time? Historically, every time new transit methods have hit the mainstream, tolerance for distance has opened up right alongside them, further catalyzing city spread.

And just as Hyperloop and the Boring Company aim to make your commute immaterial, autonomous vehicle (AV) ridesharing services will spread out cities in two ways: (1) by drastically reducing parking spaces needed (vertical parking decks = more prime real estate); and (2) by untethering you from the steering wheel. Want an extra two hours of sleep on the way to work? Schedule a sleeper AV and nap on your route to the office. Need a car-turned-mobile-office? No problem.

Meanwhile, aerial taxis (i.e. flying cars) will allow you to escape ground congestion entirely, delivering you from bedroom to boardroom at decimated time scales.

Already working with regulators, Uber Elevate has staked ambitious plans for its UberAIR airborne taxi project. By 2023, Uber anticipates rolling out flying drones in its first two pilot cities, Los Angeles and Dallas. Flying between rooftop skyports, the drones would carry passengers at heights of 1,000 to 2,000 feet and speeds of 100 to 200 mph. And while costs per ride are anticipated to resemble those of an Uber Black based on mileage, prices are projected to soon drop to those of an UberX.

But the true economic feat boils down to this: if I were to commute 50 to 100 kilometers, I could get two or three times the house for the same price. (Not to mention the extra living space offered up by my now-unneeded garage.)
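The arithmetic behind that claim can be sketched with a toy price gradient (all numbers below are illustrative assumptions, not market data):

```python
# Illustrative only: if price per square meter falls with distance from
# the urban core, the same budget buys dramatically more space farther out.

budget = 600_000  # dollars (assumed)

def price_per_sqm(km_from_core):
    # Assumed decay: $8,000/sqm downtown, halving every 50 km out.
    return 8_000 * 0.5 ** (km_from_core / 50)

for km in (0, 50, 100):
    sqm = budget / price_per_sqm(km)
    print(f"{km:>3} km out: ~{sqm:,.0f} sqm for ${budget:,}")
```

Under this assumed gradient, the same budget buys twice the space at 50 km and four times at 100 km, which is the flavor of trade-off a fast commute unlocks.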

All of a sudden, virtual reality, broadband, AVs, and high-speed vehicles are going to change where we live and where we work. Rather than living in a crowded, dense urban core for access to jobs and entertainment, our future of personalized, autonomous, low-cost transport opens the luxury of rural living to all without compromising the benefits of a short commute.

Once these drivers multiply your real estate options, how will you select your next home?

Disintermediation: Say Bye to Your Broker
In a future of continuous and personalized preference-tracking, why hire a human agent who knows less about your needs and desires than a personal AI?

Just as disintermediation is cutting out bankers and insurance agents, so too is it closing in on real estate brokers. Over the next decade, as AI becomes your agent, VR will serve as your medium.

To paint a more vivid picture: over 98 percent of your home search will be conducted from the comfort of your couch through next-generation VR headgear.

Once you’ve verbalized your primary desires for home location, finishes, size, etc. to your personal AI, it will offer you top picks, tour-able 24/7, with optional assistance from a virtual guide and constantly updated data. As a seller, this means potential buyers from two miles, or two continents, away.

Throughout each immersive VR tour, advanced eye-tracking software and a permissioned machine learning algorithm follow your gaze, further learn your likes and dislikes, and intelligently recommend other homes or commercial residences to visit.
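A minimal sketch of that gaze-driven preference loop might look like the following (the feature names, dwell times, and scoring rule are all illustrative assumptions, not any real platform's API):

```python
# Toy sketch: accumulate gaze dwell time per home feature, then rank
# candidate homes by how much learned interest their features attract.

from collections import defaultdict

class GazePreferenceModel:
    def __init__(self):
        self.dwell = defaultdict(float)  # seconds of gaze per feature

    def observe(self, feature, seconds):
        """Record how long the viewer's gaze lingered on a feature."""
        self.dwell[feature] += seconds

    def score(self, home_features):
        """Score a candidate home by normalized learned interest."""
        total = sum(self.dwell.values()) or 1.0
        return sum(self.dwell[f] / total for f in home_features)

model = GazePreferenceModel()
model.observe("open_kitchen", 12.0)  # viewer lingered on the kitchen
model.observe("small_yard", 1.5)     # barely glanced at the yard
model.observe("bay_window", 6.5)

# Recommend whichever candidate best matches learned interest.
candidates = {
    "loft_a": ["open_kitchen", "bay_window"],
    "house_b": ["small_yard"],
}
best = max(candidates, key=lambda h: model.score(candidates[h]))
```

A production system would of course fold in far richer signals (pupil dilation, revisits, explicit feedback), but the core loop — observe, aggregate, re-rank — is the same.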

Curious as to what the living room might look like with a fresh coat of blue paint and a white carpet? No problem! VR programs will be able to modify rendered environments instantly, changing countless variables, from furniture materials to even the sun’s orientation. Keen to input your own furniture into a VR-rendered home? Advanced AIs could one day compile all your existing furniture, electronics, clothing, decorations, and even books, virtually organizing them across any accommodating new space.

As 3D scanning technologies make extraordinary headway, VR renditions will only grow cheaper and higher resolution. One company called Immersive Media (disclosure: I’m an investor and advisor) has a platform for 360-degree video capture and distribution, and is already exploring real estate 360-degree video.

Smaller firms like Studio 216, Vieweet, Arch Virtual, ArX Solutions, and Rubicon Media can similarly capture and render models of various properties for clients and investors to view and explore. In essence, VR real estate platforms will allow you to explore any home for sale, do the remodel, and determine if it truly is the house of your dreams.

Once you’re ready to make a bid, your AI will even help you estimate a fair offer, then process and submit it. Real estate companies like Zillow, Trulia, Move, Redfin, ZipRealty (acquired by Realogy in 2014), and many others have already invested millions in machine learning applications to make search, valuation, consulting, and property management easier, faster, and much more accurate.

But what happens if the home you desire most means starting from scratch with new construction?

New Methods and Materials for Construction
For thousands of years, we’ve been constrained by the construction materials of nature. We built bricks from naturally abundant clay and shale, used tree limbs as our rooftops and beams, and mastered incredible structures in ancient Rome with the use of cement.

But construction is now on the cusp of a materials science revolution. Today, I’d like to focus on three key materials:

Upcycled Materials

Imagine if you could turn the world’s greatest waste products into their most essential building blocks. Thanks to UCLA researchers at CO2NCRETE, we can already do this with carbon emissions.

Today, concrete production accounts for about five percent of all greenhouse gas (GHG) emissions. But what if concrete could capture greenhouse gases instead? CO2NCRETE's engineers capture carbon from smokestacks and combine it with lime to create a new type of cement. The lab's 3D printers then shape the upcycled concrete into entirely new structures. Once conquered at scale, upcycled concrete will turn a former polluter into a future conserver.
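In simplified form (my gloss of the general chemistry, not the lab's published recipe), the capture step is the classic carbonation of slaked lime, which mineralizes gaseous carbon dioxide into a solid:

Ca(OH)₂ + CO₂ → CaCO₃ + H₂O

That solid calcium carbonate is what then gets shaped into building elements, locking the captured carbon into the structure itself.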

Or what if we wanted to print new residences from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute of Advanced Architecture of Catalonia (IAAC) is already working on a solution.

In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation, and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.

Nanomaterials

Nano- and micro-materials are ushering in a new era of smart, super-strong, and self-charging buildings. While carbon nanotubes dramatically increase the strength-to-weight ratio of skyscrapers, revolutionizing their structural flexibility, nanomaterials don't stop there.

Several research teams are pioneering silicon nanoparticles to capture everyday light flowing through our windows. Little solar cells at the edges of windows then harvest this energy for ready use. Researchers at the US National Renewable Energy Lab have developed similar smart windows. Turning into solar panels when bathed in sunlight, these thermochromic windows will power our buildings, changing color as they do.

Self-Healing Infrastructure

The American Society of Civil Engineers estimates that the US needs to spend roughly $4.5 trillion to fix nationwide roads, bridges, dams, and common infrastructure by 2025. But what if infrastructure could fix itself?

Enter self-healing concrete. Engineers at Delft University have developed bio-concrete that can repair its own cracks. As head researcher Henk Jonkers explains, “What makes this limestone-producing bacteria so special is that they are able to survive in concrete for more than 200 years and come into play when the concrete is damaged. […] If cracks appear as a result of pressure on the concrete, the concrete will heal these cracks itself.”

But bio-concrete is only the beginning of self-healing technologies. As futurist architecture firms start printing plastic and carbon-fiber houses (using Branch Technologies' 3D printing technology), engineers have begun tackling self-healing plastic.

And in a bid to go smart, burgeoning construction projects have started embedding sensors for preemptive detection. Beyond materials and sensors, however, construction methods are fast colliding into robotics and 3D printing.

While some startups and research institutes have leveraged robot swarm construction (namely, Harvard’s robotic termite-like swarm of programmed constructors), others have taken to large-scale autonomous robots.

One such example involves Fastbrick Robotics. After multiple iterations, the company's Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and directs blocks through its robotic arm in accordance with a 3D model.

The Hadrian X layhead. Image Credit: Fastbrick Robotics.
Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.

Imagine the implications. Eliminating human safety concerns and unlocking any environment, autonomous builder robots could collaboratively build massive structures in space or deep underwater habitats.

Final Thoughts
Where, how, and what we live in form a vital pillar of our everyday lives. The concept of “home” is unlikely to disappear anytime soon. At the same time, real estate and construction are two of the biggest playgrounds for technological convergence, each on the verge of revolutionary disruption.

As underlying shifts in transportation, land reclamation, and the definition of “space” (real vs. virtual) take hold, the real estate market is about to explode in value, spreading out urban centers on unprecedented scales and unlocking vast new prime “property.”

Meanwhile, converging advancements in AI and VR are fundamentally disrupting the way we design, build, and explore new residences. Just as mirror worlds create immersive, virtual real estate economies, VR tours and AI agents are absorbing both sides of the coin to entirely obliterate the middleman.

And as materials science breakthroughs meet new modes of construction, the only limits to tomorrow’s structures are those of our own imagination.

Join Me
Abundance-Digital Online Community: Stay ahead of technological advancements and turn your passion into action. Abundance Digital is now part of Singularity University. Learn more.

Image Credit: OCEANIX/BIG-Bjarke Ingels Group.


#434827 AI and Robotics Are Transforming ...

During the past 50 years, the frequency of recorded natural disasters has surged nearly five-fold.

In this blog, I’ll be exploring how converging exponential technologies (AI, robotics, drones, sensors, networks) are transforming the future of disaster relief—how we can prevent disasters in the first place, and how we can get help to victims during that first golden hour in which immediate relief can save lives.

Here are the three areas of greatest impact:

AI, predictive mapping, and the power of the crowd
Next-gen robotics and swarm solutions
Aerial drones and immediate aid supply

Let’s dive in!

Artificial Intelligence and Predictive Mapping
When it comes to immediate and high-precision emergency response, data is gold.

Already, the meteoric rise of space-based networks, stratosphere-hovering balloons, and 5G telecommunications infrastructure is in the process of connecting every last individual on the planet.

Aside from democratizing the world’s information, however, this upsurge in connectivity will soon grant anyone the ability to broadcast detailed geo-tagged data, particularly those most vulnerable to natural disasters.

Armed with the power of data broadcasting and the force of the crowd, disaster victims now play a vital role in emergency response, turning a historically one-way blind rescue operation into a two-way dialogue between connected crowds and smart response systems.

With a skyrocketing abundance of data, however, comes a new paradigm: one in which we no longer face a scarcity of answers. Instead, it will be the quality of our questions that matters most.

This is where AI comes in: our mining mechanism.

In the case of emergency response, what if we could strategically map an almost endless amount of incoming data points? Or predict the dynamics of a flood and identify a tsunami’s most vulnerable targets before it even strikes? Or even amplify critical signals to trigger automatic aid by surveillance drones and immediately alert crowdsourced volunteers?

Already, a number of key players are leveraging AI, crowdsourced intelligence, and cutting-edge visualizations to optimize crisis response and multiply relief speeds.

Take One Concern, for instance. Born out of Stanford under the mentorship of leading AI expert Andrew Ng, One Concern leverages AI through analytical disaster assessment and calculated damage estimates.

Partnering with Los Angeles, San Francisco, and numerous cities in San Mateo County, the platform assigns verified, unique ‘digital fingerprints’ to every element in a city. Building robust models of each system, One Concern’s AI platform can then monitor site-specific impacts of not only climate change but each individual natural disaster, from sweeping thermal shifts to seismic movement.

This data, combined with data on city infrastructure and past disasters, is then used to predict future damage under a range of disaster scenarios, informing prevention methods and flagging structures in need of reinforcement.

Within just four years, One Concern can now make precise predictions with an 85 percent accuracy rate in under 15 minutes.

And as IoT-connected devices and intelligent hardware continue to boom, a blooming trillion-sensor economy will only serve to amplify AI’s predictive capacity, offering us immediate, preventive strategies long before disaster strikes.

Beyond natural disasters, however, crowdsourced intelligence, predictive crisis mapping, and AI-powered responses are proving just as formidable in humanitarian crises.

One extraordinary story is that of Ushahidi. When violence broke out after the 2007 Kenyan elections, one local blogger proposed a simple yet powerful question to the web: “Any techies out there willing to do a mashup of where the violence and destruction is occurring and put it on a map?”

Within days, four ‘techies’ heeded the call, building a platform that crowdsourced first-hand reports via SMS, mined the web for answers, and—with over 40,000 verified reports—sent alerts back to locals on the ground and viewers across the world.

Today, Ushahidi has been used in over 150 countries, reaching a total of 20 million people across 100,000+ deployments. Now an open-source crisis-mapping software, its V3 (or “Ushahidi in the Cloud”) is accessible to anyone, mining millions of Tweets, hundreds of thousands of news articles, and geo-tagged, time-stamped data from countless sources.

Aggregating one of the longest-running crisis maps to date, Ushahidi’s Syria Tracker has proved invaluable in the crowdsourcing of witness reports. Providing real-time geographic visualizations of all verified data, Syria Tracker has enabled civilians to report everything from missing people and relief supply needs to civilian casualties and disease outbreaks—all while evading the government’s cell network, keeping identities private, and verifying reports prior to publication.

As mobile connectivity and abundant sensors converge with AI-mined crowd intelligence, real-time awareness will only multiply in speed and scale.

Imagining the Future….

Within the next 10 years, spatial web technology might even allow us to tap into mesh networks.

As I’ve explored in a previous blog on the implications of the spatial web, while traditional networks rely on a limited set of wired access points (or wireless hotspots), a wireless mesh network can connect entire cities via hundreds of dispersed nodes that communicate with each other and share a network connection non-hierarchically.

In short, this means that individual mobile users can together establish a local mesh network using nothing but the computing power in their own devices.
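The idea can be sketched as a tiny graph problem: each device knows only its nearby peers, yet a message can hop peer-to-peer until it reaches a node with backhaul. The topology below is invented purely for illustration:

```python
# Minimal sketch of non-hierarchical mesh routing: breadth-first
# search finds the shortest multi-hop path from any phone to a
# "gateway" node that has an outside connection.

from collections import deque

links = {  # who can hear whom over short-range radio (assumed topology)
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "gateway"],
    "gateway": ["D"],
}

def route(start, goal="gateway"):
    """Shortest multi-hop path from start to the gateway, or None."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in links[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route("A"))  # ['A', 'B', 'D', 'gateway']
```

Real mesh protocols handle node churn, radio interference, and load balancing, but the non-hierarchical hop-by-hop relay is the essential property.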

Take this a step further, and a local population of strangers could collectively broadcast countless 360-degree feeds across a local mesh network.

Imagine a scenario in which armed attacks break out across disjointed urban districts. Each cluster of eyewitnesses and at-risk civilians broadcasts an aggregate of 360-degree videos, all fed through photogrammetry AIs that build out a live hologram in real time, giving family members and first responders complete information.

Or take a coastal community in the throes of torrential rainfall and failing infrastructure. Now empowered by a collective live feed, verification of data reports takes a matter of seconds, and richly-layered data informs first responders and AI platforms with unbelievable accuracy and specificity of relief needs.

By linking all the right technological pieces, we might even see the rise of automated drone deliveries. Imagine: crowdsourced intelligence is first cross-referenced with sensor data and verified algorithmically. AI is then leveraged to determine the specific needs and degree of urgency at ultra-precise coordinates. Within minutes, once approved by personnel, swarm robots rush to collect the requisite supplies, equipping size-appropriate drones with the right aid for rapid-fire delivery.
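That pipeline — cross-reference reports with sensors, verify, rank by urgency, match drones to payloads — can be sketched end-to-end (the thresholds, field names, and payload classes below are all assumptions for illustration):

```python
# Hedged sketch of the dispatch pipeline described above.

def verified(report, sensor_readings, min_corroboration=2):
    """A crowd report counts as verified if enough nearby sensors agree."""
    corroborating = [s for s in sensor_readings
                     if s["type"] == report["type"]
                     and abs(s["km"] - report["km"]) < 1.0]
    return len(corroborating) >= min_corroboration

def dispatch(reports, sensors):
    """Verify, sort by urgency, and match each request to a drone class."""
    queue = [r for r in reports if verified(r, sensors)]
    queue.sort(key=lambda r: r["urgency"], reverse=True)
    plan = []
    for r in queue:
        drone = "heavy-lift" if r["kg"] > 5 else "quadcopter"
        plan.append((drone, r["supply"], r["km"]))
    return plan

reports = [
    {"type": "flood", "km": 3.2, "urgency": 9, "kg": 12, "supply": "water"},
    {"type": "fire", "km": 8.0, "urgency": 5, "kg": 1, "supply": "medicine"},
]
sensors = [{"type": "flood", "km": 3.0}, {"type": "flood", "km": 3.9},
           {"type": "fire", "km": 8.4}]
plan = dispatch(reports, sensors)
```

Here the flood report is corroborated by two nearby sensors and triggers a heavy-lift drone, while the single-sensor fire report is held for further verification — exactly the human-approval gate the paragraph above anticipates.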

This brings us to a second critical convergence: robots and drones.

While cutting-edge drone technology revolutionizes the way we deliver aid, new breakthroughs in AI-geared robotics are paving the way for superhuman emergency responses in some of today’s most dangerous environments.

Let’s explore a few of the most disruptive examples to reach the testing phase.

First up….

Autonomous Robots and Swarm Solutions
As hardware advancements converge with exploding AI capabilities, disaster relief robots are graduating from assistance roles to fully autonomous responders at a breakneck pace.

Born out of MIT’s Biomimetic Robotics Lab, the Cheetah III is but one of many robots that may form our first line of defense in everything from earthquake search-and-rescue missions to high-risk ops in dangerous radiation zones.

Now capable of running at 6.4 meters per second, Cheetah III can even leap up to a height of 60 centimeters, autonomously determining how to avoid obstacles and jump over hurdles as they arise.

Initially designed to perform spectral inspection tasks in hazardous settings (think: nuclear plants or chemical factories), the Cheetah’s various iterations have focused on increasing its payload capacity, range of motion, and even a gripping function with enhanced dexterity.

Cheetah III and future versions are aimed at saving lives in almost any environment.

And the Cheetah III is not alone. Just this February, Tokyo Electric Power Company (TEPCO) put one of its own robots to the test. For the first time since Japan's devastating 2011 tsunami, which led to three nuclear meltdowns at the Fukushima nuclear power plant, a robot successfully examined the reactor's fuel.

Broadcasting the process with its built-in camera, the robot was able to retrieve small chunks of radioactive fuel at five of the six test sites, offering tremendous promise for long-term plans to clean up the still-deadly interior.

Also out of Japan, Mitsubishi Heavy Industries (MHi) is even using robots to fight fires with full autonomy. In a remarkable new feat, MHi’s Water Cannon Bot can now put out blazes in difficult-to-access or highly dangerous fire sites.

Delivering foam or water at 4,000 liters per minute and 1 megapascal (MPa) of pressure, the Cannon Bot and its accompanying Hose Extension Bot even form part of a greater AI-geared system to conduct reconnaissance and surveillance on larger transport vehicles.
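Those two figures imply a substantial hydraulic power output, which a quick back-of-envelope calculation makes concrete (idealized as pressure times volumetric flow, ignoring pump losses):

```python
# Back-of-envelope: hydraulic power = pressure x volumetric flow rate.

pressure_pa = 1.0e6            # 1 MPa, from the spec above
flow_m3_s = 4000 / 1000 / 60   # 4,000 L/min converted to cubic meters/second

power_kw = pressure_pa * flow_m3_s / 1000
print(f"~{power_kw:.0f} kW of hydraulic power")  # roughly 67 kW
```

Tens of kilowatts of sustained hydraulic output is well beyond what a human crew with handheld hoses can deliver, which is what makes an autonomous platform at this scale compelling.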

As wildfires grow ever more untameable, high-volume production of such bots could prove a true lifesaver. Paired with predictive AI forest fire mapping and autonomous hauling vehicles, solutions like MHi's Cannon Bot will not only save numerous lives but also prevent population displacement and paralyzing damage to our natural environment before disaster has the chance to spread.

But even in cases where emergency shelter is needed, groundbreaking (literally) robotics solutions are fast to the rescue.

After multiple iterations by Fastbrick Robotics, the Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and directs blocks through its robotic arm in accordance with a 3D model.

Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.

But what if we need to build emergency shelters from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute for Advanced Architecture of Catalonia (IAAC) is already working on a solution.

In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation, and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.

But while cutting-edge robotics unlock extraordinary new frontiers for low-cost, large-scale emergency construction, novel hardware and computing breakthroughs are also enabling robotic scale at the other extreme of the spectrum.

Again, inspired by biological phenomena, robotics specialists across the US have begun to pilot tiny robotic prototypes for locating trapped individuals and assessing infrastructural damage.

Take RoboBees, tiny Harvard-developed bots that use electrostatic adhesion to ‘perch’ on walls and even ceilings, evaluating structural damage in the aftermath of an earthquake.

Or Carnegie Mellon’s prototyped Snakebot, capable of navigating through entry points that would otherwise be completely inaccessible to human responders. Driven by AI, the Snakebot can maneuver through even the most densely-packed rubble to locate survivors, using cameras and microphones for communication.

But when it comes to fast-paced reconnaissance in inaccessible regions, miniature robot swarms have good company.

Next-Generation Drones for Instantaneous Relief Supplies
Particularly in the case of wildfires and conflict zones, autonomous drone technology is fundamentally revolutionizing the way we identify survivors in need and automate relief supply.

Not only are drones enabling high-resolution imagery for real-time mapping and damage assessment, but preliminary research shows that UAVs far outpace ground-based rescue teams in locating isolated survivors.

As presented by a team of electrical engineers from the University of Science and Technology of China, drones could even build out a mobile wireless broadband network in record time using a “drone-assisted multi-hop device-to-device” program.

And as shown during Houston’s Hurricane Harvey, drones can provide scores of predictive intel on everything from future flooding to damage estimates.

Among multiple others, a team led by Dr. Robin Murphy, Texas A&M computer science professor and director of the university's Center for Robot-Assisted Search and Rescue, flew a total of 119 drone missions over the city, using everything from small quadcopters to military-grade unmanned planes. Not only were these critical for monitoring levee infrastructure, but also for identifying those left behind by human rescue teams.

But beyond surveillance, UAVs have begun to provide lifesaving supplies across some of the most remote regions of the globe. One of the most inspiring examples to date is Zipline.

Created in 2014, Zipline has completed 12,352 life-saving drone deliveries to date. While drones are designed, tested, and assembled in California, Zipline primarily operates in Rwanda and Tanzania, hiring local operators and providing over 11 million people with instant access to medical supplies.

Providing everything from vaccines and HIV medications to blood and IV tubes, Zipline’s drones far outpace ground-based supply transport, in many instances providing life-critical blood cells, plasma, and platelets in under an hour.

But drone technology is even beginning to transcend the limited scale of medical supplies and food.

Now developing its drones under contracts with DARPA and the US Marine Corps, Logistic Gliders, Inc. has built autonomously navigating drones capable of carrying 1,800 pounds of cargo over unprecedented distances.

Built from plywood, the company's gliders are projected to cost as little as a few hundred dollars each, making them perfect candidates for high-volume remote aid deliveries, whether navigated by a pilot or self-flown in accordance with real-time disaster zone mapping.

As hardware continues to advance, autonomous drone technology coupled with real-time mapping algorithms opens no end of opportunities for aid supply, disaster monitoring, and richly layered intel previously unimaginable for humanitarian relief.

Concluding Thoughts
Perhaps one of the most consequential and impactful applications of converging technologies is their transformation of disaster relief methods.

While AI-driven intel platforms crowdsource firsthand experiential data from those on the ground, mobile connectivity and drone-supplied networks are granting newfound narrative power to those most in need.

And as a wave of new hardware advancements gives rise to robotic responders, swarm technology, and aerial drones, we are fast approaching an age of instantaneous and efficiently-distributed responses in the midst of conflict and natural catastrophes alike.

Empowered by these new tools, what might we create when everyone on the planet has the same access to relief supplies and immediate resources? In a new age of prevention and fast recovery, what futures can you envision?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Arcansel / Shutterstock.com


#432051 What Roboticists Are Learning From Early ...

You might not have heard of Hanson Robotics, but if you’re reading this, you’ve probably seen their work. They were the company behind Sophia, the lifelike humanoid avatar that’s made dozens of high-profile media appearances. Before that, they were the company behind that strange-looking robot that seemed a bit like Asimo with Albert Einstein’s head—or maybe you saw BINA48, who was interviewed for the New York Times in 2010 and featured in Jon Ronson’s books. For the sci-fi aficionados amongst you, they even made a replica of legendary author Philip K. Dick, best remembered for having books with titles like Do Androids Dream of Electric Sheep? turned into films with titles like Blade Runner.

Hanson Robotics, in other words, with their proprietary brand of life-like humanoid robots, have been playing the same game for a while. Sometimes it can be a frustrating game to watch. Anyone who gives the robot the slightest bit of thought will realize that this is essentially a chatbot, with all the limitations this implies. Indeed, even in that New York Times interview with BINA48, author Amy Harmon describes it as a frustrating experience—with “rare (but invariably thrilling) moments of coherence.” This sensation will be familiar to anyone who’s conversed with a chatbot that has a few clever responses.

The glossy surface belies the lack of real intelligence underneath; it seems, at first glance, like a much more advanced machine than it is. Peeling back that surface layer—at least for a Hanson robot—means you’re peeling back Frubber. This proprietary substance—short for “Flesh Rubber,” which is slightly nightmarish—is surprisingly complicated. Up to thirty motors are required just to control the face; they manipulate liquid cells in order to make the skin soft, malleable, and capable of a range of different emotional expressions.

A quick combinatorial glance at the 30+ motors suggests that there are millions of possible combinations; researchers identify 62 expressions they consider “human-like” in Sophia, although not everyone agrees with this assessment. Arguably, the technical expertise that went into reconstructing the range of human facial expressions far exceeds the simpler chat engine the robots use, although it’s the latter that inflates the punters’ expectations with a few pre-programmed questions in an interview.
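That back-of-the-envelope claim is easy to sanity-check. A minimal sketch, under the crude (and purely illustrative) assumption that each motor is a discrete switch; real motors move continuously, so the true configuration space is far larger still:

```python
# Sanity-check the combinatorial claim: even if each of the ~30
# face motors were a simple on/off switch, the number of distinct
# face configurations already exceeds a billion.
def motor_configurations(n_motors: int, states_per_motor: int = 2) -> int:
    """Count configurations when every motor independently takes
    one of `states_per_motor` discrete positions."""
    return states_per_motor ** n_motors

print(motor_configurations(30))      # 2**30 = 1073741824
print(motor_configurations(30, 3))   # allow low/mid/high per motor
```

So “millions of combinations” is, if anything, an understatement; the research question is which tiny subset of that space reads as human.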

Hanson Robotics’ belief is that, ultimately, a lot of how humans will eventually relate to robots is going to depend on their faces and voices, as well as on what they’re saying. “The perception of identity is so intimately bound up with the perception of the human form,” says David Hanson, company founder.

Yet anyone attempting to design a robot that won’t terrify people has to contend with the uncanny valley—that strange blend of concern and revulsion people react with when things appear to be creepily human. Between cartoonish humanoids and genuine humans lies what has often been a no-go zone in robotic aesthetics.

The uncanny valley concept originated with roboticist Masahiro Mori, who argued that roboticists should avoid trying to replicate humans exactly. Since anything that wasn’t perfect, but merely very good, would elicit an eerie feeling in humans, stopping short of full human likeness was the only way to avoid the uncanny valley. It’s probably a task made more difficult by endless streams of articles about AI taking over the world that inexplicably conflate AI with killer humanoid Terminators—which aren’t particularly likely to exist (although maybe it’s best not to push robots around too much).

The idea behind this realm of psychological horror is fairly simple, cognitively speaking.

We know how to categorize things that are unambiguously human or non-human. This is true even if they’re designed to interact with us. Consider the popularity of Aibo, Jibo, or even some robots that don’t try to resemble humans. Something that resembles a human, but isn’t quite right, is bound to evoke a fear response in the same way slightly distorted music or slightly rearranged furniture in your home will. The creature simply doesn’t fit.

You may well reject the idea of the uncanny valley entirely. David Hanson, naturally, is not a fan. In the paper Upending the Uncanny Valley, he argues that great art forms have often resembled humans, but the ultimate goal for humanoid roboticists is probably to create robots we can relate to as something closer to humans than works of art.

Meanwhile, Hanson and other scientists produce competing experiments to either demonstrate that the uncanny valley is overhyped, or to confirm it exists and probe its edges.

The classic experiment involves gradually morphing a cartoon face into a human face, via some robotic-seeming intermediaries—yet it’s in movement that the real horror of the almost-human often lies. Hanson has argued that incorporating cartoonish features may help—and, sometimes, that the uncanny valley is a generational thing which will melt away when new generations grow used to the quirks of robots. Although Hanson might dispute the severity of this effect, it’s clearly what he’s trying to avoid with each new iteration.
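At its core, the morphing stimulus in such experiments is a linear cross-dissolve between two images. A minimal sketch, where the pixel arrays and the blend parameter t are illustrative inventions, not any particular study’s stimuli:

```python
# Linear morph between a cartoon face and a human face:
# at t=0 the stimulus is fully cartoon, at t=1 fully human.
# Uncanny-valley studies probe subjects' reactions at the
# intermediate values of t, where the eeriness tends to peak.
def morph(cartoon_pixels, human_pixels, t):
    """Blend two equal-length grayscale pixel sequences."""
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must lie in [0, 1]")
    return [(1 - t) * c + t * h
            for c, h in zip(cartoon_pixels, human_pixels)]

# Build a stimulus ladder from cartoon to human in 5 steps.
cartoon = [0.0, 0.2, 0.9]
human = [0.1, 0.8, 0.4]
ladder = [morph(cartoon, human, t / 4) for t in range(5)]
```

As the text notes, static morphs only capture part of the effect; the motion of an almost-human face is where much of the horror lives, and that is harder to parameterize with a single blend value.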

Hiroshi Ishiguro is the latest of the roboticists to have dived headlong into the valley.

Building on the work of pioneers like Hanson, those who study human-robot interaction are pushing at the boundaries of robotics—but also of social science. It’s usually difficult to simulate what you don’t understand, and there’s still an awful lot we don’t understand about how we interpret the constant streams of non-verbal information that flow when you interact with people in the flesh.

Ishiguro took this imitation of human forms to extreme levels. Not only did he monitor and log the physical movements people made on videotapes, but some of his robots are based on replicas of people; the Repliee series began with a ‘replicant’ of his daughter. This involved making a rubber replica—a silicone cast—of her entire body. Future experiments were focused on creating Geminoid, a replica of Ishiguro himself.

As Ishiguro aged, he realized that it would be more effective to resemble his replica through cosmetic surgery rather than by continually creating new casts of his face, each with more lines than the last. “I decided not to get old anymore,” Ishiguro said.

We love to throw around abstract concepts and ideas: humans being replaced by machines, cared for by machines, getting intimate with machines, or even merging themselves with machines. You can take an idea like that, hold it in your hand, and examine it—dispassionately, if not without interest. But there’s a gulf between thinking about it and living in a world where human-robot interaction is not a field of academic research, but a day-to-day reality.

As the scientists studying human-robot interaction develop their robots, their replicas, and their experiments, they are making some of the first forays into that world. We might all be living there someday. Understanding ourselves—decrypting the origins of empathy and love—may be the greatest challenge to face. That is, if you want to avoid the valley.

Image Credit: Anton Gvozdikov / Shutterstock.com Continue reading

Posted in Human Robots

#431733 Why Humanoid Robots Are Still So Hard to ...

Picture a robot. In all likelihood, you just pictured a sleek metallic or chrome-white humanoid. Yet the vast majority of robots in the world around us are nothing like this; instead, they’re specialized for specific tasks. Our cultural conception of what robots are dates back to the coining of the term “robot” in the Czech play Rossum’s Universal Robots, which originally envisioned them as essentially synthetic humans.
The vision of a humanoid robot is tantalizing. There are constant efforts to create something that looks like the robots of science fiction. Recently, an old competitor in this field returned with a new model: Toyota has released what they call the T-HR3. As humanoid robots go, it appears to be pretty dexterous and have a decent grip, with a number of degrees of freedom making the movements pleasantly human.
This humanoid robot operates mostly via a remote-controlled system that allows the user to control the robot’s limbs by exerting different amounts of pressure on a framework. A VR headset completes the picture, allowing the user to control the robot’s body and teleoperate the machine. There’s no word on a price tag, but one imagines a machine with a control system this complicated won’t exactly be on your Christmas list, unless you’re a billionaire.
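Teleoperation rigs of this kind generally map the operator’s joint readings onto the robot’s joints, clamped to the machine’s mechanical limits. A hypothetical sketch of that mapping follows; none of these names, angles, or limits come from Toyota’s actual system:

```python
# Hypothetical master-slave joint mapping for a teleoperated arm:
# each joint angle read from the operator's framework is scaled,
# then clamped to the robot's mechanical limits before being sent
# as a command.
def map_joints(operator_angles, limits, scale=1.0):
    """operator_angles: degrees read from the master framework.
    limits: per-joint (lo, hi) bounds for the robot.
    Returns the clamped command for each joint."""
    commands = []
    for angle, (lo, hi) in zip(operator_angles, limits):
        target = angle * scale
        commands.append(max(lo, min(hi, target)))
    return commands

# Operator swings an elbow past the robot's range: command is clamped.
print(map_joints([30.0, 170.0], [(-90.0, 90.0), (0.0, 135.0)]))
# [30.0, 135.0]
```

The clamping step is why such systems can be “safe around humans” even with a careless operator: the robot physically cannot be commanded outside its envelope.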

Toyota is no stranger to robotics. They released a series of “Partner Robots” that had a bizarre affinity for instrument-playing but weren’t often seen doing much else. Given that they didn’t seem to have much capability beyond the automaton that Leonardo da Vinci made hundreds of years ago, they promptly vanished. If, as the name suggests, the T-HR3 is a sequel to these robots, which came out shortly after ASIMO back in 2003, it’s substantially better.
Slightly less humanoid (and perhaps the more useful for it), Toyota’s HSR-2 is a robot base on wheels with a simple mechanical arm. It brings to mind earlier machines produced by dream-factory startup Willow Garage, like the PR2. The idea of an affordable robot that could simply move around on wheels and pick up and fetch objects, and didn’t harbor too-lofty ambitions to do anything else, was quite successful.
So much so that when Robocup, the international robotics competition, looked for a platform for their robot-butler competition @Home, they chose HSR-2 for its ability to handle objects. HSR-2 has been deployed in trial runs to care for the elderly and injured, but has yet to be widely adopted for these purposes five years after its initial release. It’s telling that arguably the most successful multi-purpose humanoid robot isn’t really humanoid at all—and it’s curious that Toyota now seems to want to return to a more humanoid model a decade after they gave up on the project.
What’s unclear, as is often the case with humanoid robots, is what, precisely, the T-HR3 is actually for. The teleoperation gets around the complex problem of control by simply having the machine controlled remotely by a human. That human then handles all the sensory perception, decision-making, planning, and manipulation; essentially, the hardest problems in robotics.
The T-HR3 may not have much autonomy, and by sacrificing autonomy, you drastically cut down the robot’s possible uses. Since it can’t act alone, you need a convincing scenario that calls for a teleoperated humanoid robot, one that is less precise and vastly more expensive than simply having a person do the same job. Perhaps someday more autonomy will be developed for the robot, and the master maneuvering system that allows humans to control it will only be used in emergencies, to take over if the robot gets stuck.
Toyota’s press release says it is “a platform with capabilities that can safely assist humans in a variety of settings, such as the home, medical facilities, construction sites, disaster-stricken areas and even outer space.” In reality, it’s difficult to see such a robot being affordable or even that useful in the home or in medical facilities (unless it’s substantially stronger than humans). Equally, it certainly doesn’t seem robust enough to be deployed in disaster zones or outer space. These tasks have been mooted for robots for a very long time and few have proved up to the challenge.
Toyota’s third generation humanoid robot, the T-HR3. Image Credit: Toyota
Instead, the robot seems designed to work alongside humans. Its design, standing 1.5 meters tall, weighing 75 kilograms, and possessing 32 degrees of freedom in its body, suggests it is built to closely mimic a person, rather than a robot like Atlas, which is robust enough that you can imagine it being useful in a war zone. In this case, it might be closer to the model of the collaborative robots, or co-bots, developed by Rethink Robotics, whose many safety features, including force-sensitive feedback for the user, reduce the risk of terrible PR surrounding killer robots.
Instead, the emphasis is on graceful precision engineering: in the promo video, the robot can be seen balancing on one leg before showing off a few poised, yoga-like poses. This perhaps suggests that an application in elderly care, which Toyota has ventured into before and which was the stated aim of their simple HSR-2, might be more likely than deployment to a disaster zone.
The reason humanoid robots remain so elusive and so tempting is probably because of a simple cognitive mistake. We make two bad assumptions. First, we assume that if you build a humanoid robot, give its joints enough flexibility, throw in a little AI and perhaps some pre-programmed behaviors, then presto, it will be able to do everything humans can. When you see a robot that moves well and looks humanoid, it seems like the hardest part is done; surely this robot could do anything. The reality is never so simple.

We also make the reverse assumption: we assume that when we are finally replaced, it will be by perfect replicas of our own bodies and brains that can fulfill all the functions we used to fulfill. Perhaps, in reality, the future of robots and AI is more like its present: piecemeal, with specialized algorithms and specialized machines gradually learning to outperform humans at every conceivable task without ever looking convincingly human.
It may well be that the T-HR3 is angling towards this concept of machine learning as a platform for future research. Rather than trying to program an omni-capable robot out of the box, it will gradually learn from its human controllers. In this way, you could see the platform being used to explore the limits of what humans can teach robots to do simply by having them mimic sequences of our bodies’ motion, in the same way neural networks are testing the limits of what algorithms can learn from data. No one machine will be able to perform everything a human can, but collectively, they will vastly outperform us at anything you’d want one to do.
So when you see a new android like Toyota’s, feel free to marvel at its technical abilities and indulge in the speculation about whether it’s a PR gimmick or a revolutionary step forward along the road to human replacement. Just remember that, human-level bots or not, we’re already strolling down that road.
Image Credit: Toyota Continue reading

Posted in Human Robots

#430734 Why XPRIZE Is Asking Writers to Take Us ...

In a world of accelerating change, educating the public about the implications of technological advancements is extremely important. We can continue to write informative articles and speculate about the kind of future that lies ahead. Or instead, we can take readers on an immersive journey by using science fiction to paint vivid images of the future for society.
The XPRIZE Foundation recently announced a science fiction storytelling competition. In recent years, the organization has backed and launched a range of competitions to propel innovation in science and technology. These have been aimed at a variety of challenges, such as transforming the lives of low-literacy adults, tackling climate change, and creating water from thin air.
Their sci-fi writing competition asks participants to envision a groundbreaking future for humanity. The initiative, in partnership with Japanese airline ANA, features 22 sci-fi stories from noteworthy authors that are now live on the website. Each of these stories is from the perspective of a different passenger on a plane that travels 20 years into the future through a wormhole. Contestants will compete to tell the story of the passenger in Seat 14C.
In addition to the competition, XPRIZE has brought together a science fiction advisory council to work with the organization and imagine what the future will look like. According to Peter Diamandis, founder and executive chairman, “As the future becomes harder and harder to predict, we look forward to engaging some of the world’s most visionary storytellers to help us imagine what’s just beyond the horizon and chart a path toward a future of abundance.”
The Importance of Science Fiction
Why is an organization like XPRIZE placing just as much importance on fiction as it does on reality? As Isaac Asimov has pointed out, “Modern science fiction is the only form of literature that consistently considers the nature of the changes that face us.” While the rest of the world reports on a new invention, sci-fi authors examine how these advancements affect the human condition.
True science fiction is distinguished from pure fantasy in that everything that happens is within the bounds of the physical laws of the universe. We’ve already seen how sci-fi can inspire generations and shape the future. 3D printers, wearable technology, and smartphones were first seen in Star Trek. Targeted advertising and air touch technology were first seen in Philip K. Dick’s 1956 story “The Minority Report.” Tanning beds, robot vacuums, and flatscreen TVs were seen in The Jetsons. The internet and a world of global instant communication were predicted by Arthur C. Clarke in his work long before they became reality.
Sci-fi shows like Black Mirror or Star Trek aren’t just entertainment. They allow us to imagine and explore the influence of technology on humanity. For instance, how will artificial intelligence impact human relationships? How will social media affect privacy? What if we encounter alien life? Good sci-fi stories take us on journeys that force us to think critically about the societal impacts of technological advancements.
As sci-fi author Yaasha Moriah points out, the genre is universal because “it tackles hard questions about human nature, morality, and the evolution of society, all through the narrative of speculation about the future. If we continue to do A, will it necessarily lead to problems B and C? What implicit lessons are being taught when we insist on a particular policy? When we elevate the importance of one thing over another—say, security over privacy—what could be the potential benefits and dangers of that mentality? That’s why science fiction has such an enduring appeal. We want to explore deep questions, without being preached at. We want to see the principles in action, and observe their results.”
An Extension of STEAM Education
At its core, this genre is a harmonious symbiosis between two distinct disciplines: science and literature. It is an extension of STEAM education, an educational approach that combines science, technology, engineering, the arts, and mathematics. Storytelling with science fiction allows us to use the arts to educate and engage the public about scientific advancements and their implications.
According to the National Science Foundation, research on art-based learning of STEM, including the use of narrative writing, works “beyond expectation.” It has been shown to have a powerful impact on creative thinking, collaborative behavior and application skills.
What does it feel like to travel through a wormhole? What are some ethical challenges of AI? How could we terraform Mars? For decades, science fiction writers and producers have answered these questions through the art of storytelling.
What better way to engage more people with science and technology than through sparking their imaginations? The method makes academic subjects, many of them traditionally perceived as boring or dry, far more inspiring and engaging.
A Form of Time Travel
XPRIZE’s competition theme of traveling 20 years into the future through a wormhole is an appropriate beacon for the genre. In many ways, sci-fi is a precautionary form of time travel. Before we put a certain technology, scientific invention, or policy to use, we can envision and explore what our world would be like if we were to do so.
Sci-fi lets us explore different scenarios for the future of humanity before deciding which ones are more desirable. Some of these scenarios may be radically beyond our comfort zone. Yet when we’re faced with the seemingly impossible, we must remind ourselves that if something is within the domain of the physical laws of the universe, then it’s absolutely possible.
Stock Media provided by NASA_images / Pond5 Continue reading

Posted in Human Robots