This week, the widely anticipated fifth season of the dystopian series Black Mirror was released on Netflix. The storylines this season are less focused on far-out scenarios and more closely aligned with current issues. With only three episodes, the season raises more questions than it answers, often leaving audiences bewildered.
The episode Smithereens explores our society’s crippling addiction to social media platforms and the monopoly they hold over our data. In Rachel, Jack and Ashley Too, we see the disruptive impact of technologies on the music and entertainment industry, and the price of fame for artists in the digital world. Like most Black Mirror episodes, these explore the sometimes disturbing implications of tech advancements on humanity.
But once again, in the midst of all the doom and gloom, the creators of the series leave us with a glimmer of hope. Aligned with Pride month, the episode Striking Vipers explores the impact of virtual reality on love, relationships, and sexual fluidity.
*This review contains a few spoilers.*
The first episode of the season, Striking Vipers, may be one of the most thought-provoking episodes in Black Mirror history. Reminiscent of previous episodes San Junipero and Hang the DJ, it explores the potential for technology to transform human intimacy.
The episode tells the story of two old friends, Danny and Karl, whose friendship is reignited in an unconventional way. Karl unexpectedly appears at Danny’s 38th birthday and reintroduces him to the VR version of a game they used to play years before. In the game Striking Vipers X, each of the players is represented by an avatar of their choice in an uncanny digital reality. Following old tradition, Karl chooses to become the female fighter, Roxanne, and Danny takes on the role of the male fighter, Lance. The state-of-the-art VR headsets appear to use an advanced form of brain-machine interface to allow each player to be fully immersed in the virtual world, emulating all physical sensations.
To their surprise (and confusion), Danny and Karl find themselves transitioning from fist-fighting to kissing. Over the course of many games, they continue to explore a sexual and romantic relationship in the virtual world, one that leaves them confused and distant in the physical one. The virtual and physical realities begin to blur, as do the boundaries between the players and their avatars. Danny, who is married to a woman and is a father, begins to carry guilt and confusion into his everyday life. Both men wonder whether there would be any spark between them in real life.
The brain-machine interface (BMI) depicted in the episode is still science fiction, but that hasn’t stopped innovators from pushing the technology forward. Experts today are designing more intricate BMI systems while programming better algorithms to interpret the neural signals they capture. Scientists have already succeeded in enabling paralyzed patients to type with their minds, and are even allowing people to communicate with one another purely through brainwaves.
The convergence of BMIs with virtual reality and artificial intelligence could make the experience of such immersive digital realities possible. Virtual reality, too, is decreasing exponentially in cost and increasing in quality.
The narrative provides meaningful commentary on another tech area—gaming. It highlights video games not necessarily as addictive distractions, but rather as a platform for connecting with others in a deeper way. This is already very relevant. Video games like Final Fantasy are often a tool for meaningful digital connections for their players.
The Implications of Virtual Reality on Love and Relationships
The narrative of Striking Vipers raises many novel questions about the implications of immersive technologies for relationships: could the virtual world offer us a safe space to explore suppressed desires? Can virtual avatars make it easier for us to show affection to those we care about? Can a sexual or romantic encounter in the digital world be considered infidelity?
Above all, the episode explores the therapeutic possibilities of such technologies. While previous seasons of Black Mirror raised many fears about virtual reality, this episode focuses on its potential: immersive technology as a source of liberation, meaningful connection, and self-exploration, and as a tool for realizing our true identities and desires.
Once again, this is aligned with emerging trends in VR. We are seeing the rise of social VR applications and platforms that let you hang out with friends and family as avatars in virtual space. The technology is allowing animated films, such as Coco VR, to become increasingly social and interactive experiences. Considering that meaningful social interaction can alleviate depression and anxiety, such applications could contribute to well-being.
Techno-philosopher and National Geographic host Jason Silva points out that immersive media technologies can be “engines of empathy.” VR allows us to enter virtual spaces that mimic someone else’s state of mind, allowing us to empathize with the way they view the world. Silva said, “Imagine the intimacy that becomes possible when people meet and they say, ‘Hey, do you want to come visit my world? Do you want to see what it’s like to be inside my head?’”
What is most fascinating about Striking Vipers is that it explores how we may redefine love with virtual reality; we are introduced to love between virtual avatars. While this kind of love may seem confusing to audiences, it may be one of the complex implications of virtual reality on human relationships.
In many ways, the title Black Mirror couldn’t be more appropriate, as each episode serves as a mirror to the most disturbing aspects of our psyches as they get amplified through technology. However, what we see in uplifting and thought-provoking plots like Striking Vipers, San Junipero, and Hang The DJ is that technology could also amplify the most positive aspects of our humanity. This includes our powerful capacity to love.
Image Credit: Arsgera / Shutterstock.com
Buck Rogers had Twiki. Luke Skywalker palled around with C-3PO and R2-D2. And astronauts aboard the International Space Station (ISS) now have their own robotic companions in space—Astrobee.
A pair of the cube-shaped robots were launched to the ISS during an April re-supply mission and are currently being commissioned for use on the space station. The free-flying space robots, dubbed Bumble and Honey, are the latest generation of robotic machines to join the human crew on the ISS.
Exploration of the solar system and beyond will require autonomous machines that can assist humans with numerous tasks—or go where we cannot. NASA has said repeatedly that robots will be instrumental in future space missions to the moon, Mars, and even to the icy moon Europa.
The Astrobee robots will specifically test robotic capabilities in zero gravity, replacing the SPHERES (Synchronized Position Hold, Engage, Reorient, Experimental Satellite) robots that have been on the ISS for more than a decade to test various technologies ranging from communications to navigation.
The 18-sided SPHERES robots, each about the size of a volleyball or an oversized Dungeons and Dragons die, use CO2-based cold-gas thrusters for movement and a series of ultrasonic beacons for orientation. The Astrobee robots, by contrast, propel themselves autonomously around the interior of the ISS using electric fans, navigating with the help of six cameras.
The modular design of the Astrobee robots means they are highly plug-and-play, capable of being reconfigured with different hardware modules. The robots’ software is also open-source, encouraging scientists and programmers to develop and test new algorithms and features.
And, yes, the Astrobee robots will be busy as bees once they are fully commissioned this fall, with experiments planned to begin next year. Scientists hope to learn more about how robots can assist space crews and perform caretaking duties on spacecraft.
Robots Working Together
The Astrobee robots are expected to be joined by a familiar “face” on the ISS later this year—the humanoid robot Robonaut.
Robonaut, also known as R2, was the first US-built robot on the ISS. It joined the crew back in 2011 without legs, which were added in 2014. The upgrade never entirely worked, however: R2 experienced power failures that eventually led to its return to Earth last year for repairs. If all goes as planned, the space station’s first humanoid robot will return to the ISS to lend a hand to the astronauts and the new robotic arrivals.
In particular, NASA is interested in how the two different robotic platforms can complement each other, with an eye toward outfitting the agency’s proposed lunar orbital space station with various robots that can supplement a human crew.
“We don’t have definite plans for what would happen on the Gateway yet, but there’s a general recognition that intra-vehicular robots are important for space stations,” Astrobee technical lead Trey Smith of the NASA Intelligent Robotics Group told IEEE Spectrum. “And so, it would not be surprising to see a mobile manipulator like Robonaut, and a free flyer like Astrobee, on the Gateway.”
While the focus on R2 has been to test its capabilities in zero gravity and to use it for mundane or dangerous tasks in space, the technology enabling the humanoid robot has proven to be equally useful on Earth.
For example, R2 has amazing dexterity for a robot, with sensors, actuators, and tendons comparable to the nerves, muscles, and tendons in a human hand. Based on that design, engineers are working on a robotic glove that can help factory workers, for instance, do their jobs better while reducing the risk of repetitive injuries. R2 has also inspired development of a robotic exoskeleton for both astronauts in space and paraplegics on Earth.
Working Hard on Soft Robotics
While innovative and technologically sophisticated, Astrobee and Robonaut are typical robots in that neither one would do well in a limbo contest. In other words, most robots are limited in their flexibility and agility based on current hardware and materials.
A subfield of robotics known as soft robotics involves developing robots with highly pliant materials that mimic biological organisms in how they move. Scientists at NASA’s Langley Research Center are investigating how soft robots could help with future space exploration.
Specifically, the researchers are looking at a series of properties to understand how actuators—components responsible for moving a robotic part, such as Robonaut’s hand—can be built and used in space.
The team first 3D prints a mold and then pours a flexible material like silicone into it. Air chambers inside the resulting actuator expand and contract to move the part, powered by nothing but air.
Some of the first applications of soft robotics sound more tool-like than R2-D2-like. For example, two soft robots could connect to produce a temporary shelter for astronauts on the moon or serve as an impromptu wind shield during one of Mars’ infamous dust storms.
The idea is to use soft robots in situations that are “dangerous, dirty, or dull,” according to Jack Fitzpatrick, a NASA intern working on the soft robotics project at Langley.
Working on Mars
Of course, space robots aren’t only designed to assist humans. In many instances, they are the only option to explore even relatively close celestial bodies like Mars. Four American-made robotic rovers have been used to investigate the fourth planet from the sun since 1997.
Opportunity is perhaps the most famous, covering about 25 miles of terrain across Mars over 15 years. A dust storm knocked it out of commission last year, with NASA officially ending the mission in February.
However, the biggest and baddest of the Mars rovers, Curiosity, is still crawling across the Martian surface, sending back valuable data since 2012. The car-size robot carries 17 cameras, a laser to vaporize rocks for study, and a drill to collect samples. It is on the hunt for signs of biological life.
The next year or two could see a virtual traffic jam of robots headed to Mars. NASA’s Mars 2020 Rover is next in line to visit the Red Planet, sporting scientific gadgets like an X-ray fluorescence spectrometer for chemical analyses and ground-penetrating radar to see below the Martian surface.
This diagram shows the instrument payload for the Mars 2020 mission. Image Credit: NASA.
Meanwhile, the Europeans have teamed with the Russians on a rover called Rosalind Franklin, named after a famed British chemist, that will drill down into the Martian ground for evidence of past or present life as soon as 2021.
The Chinese are also preparing to begin searching for life on Mars using robots as soon as next year, as part of the country’s Mars Global Remote Sensing Orbiter and Small Rover program. The mission is scheduled to be the first in a series of launches that would culminate with bringing samples back from Mars to Earth.
Perhaps there is no more famous utterance in the universe of science fiction as “to boldly go where no one has gone before.” However, the fact is that human exploration of the solar system and beyond will only be possible with robots of different sizes, shapes, and sophistication.
Image Credit: NASA.
In the wake of the housing market collapse of 2008, one entrepreneur decided to dive right into the failing real estate industry. But this time, he didn’t buy any real estate to begin with. Instead, Glenn Sanford decided to launch the first-ever cloud-based real estate brokerage, eXp Realty.
Contracting virtual platform VirBELA to build out the company’s mega-campus in VR, eXp Realty demonstrates the power of a dematerialized workspace, throwing out hefty overhead costs and fundamentally redefining what ‘real estate’ really means. Ten years later, eXp Realty has an army of 14,000 agents across all 50 US states, 3 Canadian provinces, and 400 MLS market areas… all without a single physical office.
But VR is just one of many exponential technologies converging to revolutionize real estate and construction. As floating cities and driverless cars spread out your living options, AI and VR are together cutting out the middleman.
Already, the global construction industry is projected to surpass $12.9 trillion in 2022, and the total value of the US housing market alone grew to $33.3 trillion last year. Both vital for our daily lives, these industries will continue to explode in value, posing countless possibilities for disruption.
In this blog, I’ll be discussing the following trends:
New prime real estate locations;
Disintermediation of the real estate broker and search;
Materials science and 3D printing in construction.
Let’s dive in!
Location Location Location
Until now, location has been the name of the game when it comes to hunting down the best real estate. But constraints on land often drive up costs while limiting options, and urbanization is only exacerbating the problem.
Beyond the world of virtual real estate, two primary mechanisms are driving the creation of new locations.
(1) Floating Cities
Offshore habitation hubs, floating cities have long been conceived as a solution to rising sea levels, skyrocketing urban populations, and threatened ecosystems. If successful, they could soon unlock an abundance of prime real estate, whether for scenic living, commerce, education, or recreation.
One pioneering model is that of Oceanix City, designed by Danish architect Bjarke Ingels and a host of other domain experts. Intended to adapt organically over time, Oceanix would consist of a galaxy of mass-produced, hexagonal floating modules, built as satellite “cities” off coastal urban centers and sustained by renewable energies.
While individual 4.5-acre platforms would each sustain 300 people, these hexagonal modules are designed to link into 75-acre tessellations housing up to 10,000 residents. Anchored to the ocean floor with biorock, Oceanix cities are slated to be largely closed-loop systems, with any external resources delivered by automated drone networks.
Electric boats or flying cars might zoom you to work, city-embedded water capture technologies would provide your water, and while vertical and outdoor farming supply your family meal, share economies would dominate goods provision.
AERIAL: Located in calm, sheltered waters, near coastal megacities, OCEANIX City will be an adaptable, sustainable, scalable, and affordable solution for human life on the ocean. Image Credit: OCEANIX/BIG-Bjarke Ingels Group.
Joined by countless government officials whose island nations risk submersion under rising seas, the UN is now getting on board. And just this year, seasteading is exiting the realm of science fiction and testing practical waters.
As French Polynesia seeks out robust solutions to sea level rise, its government has joined forces with the San Francisco-based Seasteading Institute. With a newly designated special economic zone and 100 acres of beachfront, this joint Floating Island Project could see up to a dozen inhabitable structures by 2020. And what better way to fund the $60 million project than the team’s upcoming ICO?
But aside from creating new locations, autonomous vehicles (AVs) and flying cars are turning previously low-demand land into the prime real estate of tomorrow.
(2) Autonomous Electric Vehicles and Flying Cars
Today, the value of a location is a function of its proximity to your workplace, your city’s central business district, the best schools, or your closest friends.
But what happens when driverless cars desensitize you to distance, or Hyperloop and flying cars decimate your commute time? Historically, every time new transit methods have hit the mainstream, tolerance for distance has opened up right alongside them, further catalyzing city spread.
And just as Hyperloop and the Boring Company aim to make your commute immaterial, autonomous vehicle (AV) ridesharing services will spread out cities in two ways: (1) by drastically reducing parking spaces needed (vertical parking decks = more prime real estate); and (2) by untethering you from the steering wheel. Want an extra two hours of sleep on the way to work? Schedule a sleeper AV and nap on your route to the office. Need a car-turned-mobile-office? No problem.
Meanwhile, aerial taxis (i.e., flying cars) will allow you to escape ground congestion entirely, delivering you from bedroom to boardroom in a fraction of the time.
Already working with regulators, Uber Elevate has staked out ambitious plans for its UberAIR airborne taxi project. By 2023, Uber anticipates rolling out the service in its first two pilot cities, Los Angeles and Dallas. Flying between rooftop skyports, the aircraft would carry passengers at altitudes of 1,000 to 2,000 feet and at speeds of 100 to 200 mph. And while costs per ride are anticipated to resemble those of an Uber Black based on mileage, prices are projected to soon drop to those of an UberX.
But the true economic feat boils down to this: if I were to commute 50 to 100 kilometers, I could get two or three times the house for the same price. (Not to mention the extra living space offered up by my now-unneeded garage.)
All of a sudden, virtual reality, broadband, AVs, or high-speed vehicles are going to change where we live and where we work. So rather than living in a crowded, dense urban core for access to jobs and entertainment, our future of personalized, autonomous, low-cost transport opens the luxury of rural areas to all without compromising the benefits of a short commute.
Once these drivers multiply your real estate options, how will you select your next home?
Disintermediation: Say Bye to Your Broker
In a future of continuous and personalized preference-tracking, why hire a human agent who knows less about your needs and desires than a personal AI?
Just as disintermediation is cutting out bankers and insurance agents, so too is it closing in on real estate brokers. Over the next decade, as AI becomes your agent, VR will serve as your medium.
To paint a more vivid picture of how this will look, over 98 percent of your home search will be conducted from the comfort of your couch through next-generation VR headgear.
Once you’ve verbalized your primary desires for home location, finishes, size, etc. to your personal AI, it will offer you top picks, tour-able 24/7, with optional assistance from a virtual guide and constantly updated data. For sellers, this means potential buyers from two miles, or two continents, away.
Throughout each immersive VR tour, advanced eye-tracking software and a permissioned machine learning algorithm follow your gaze, further learn your likes and dislikes, and intelligently recommend other homes or commercial residences to visit.
Curious as to what the living room might look like with a fresh coat of blue paint and a white carpet? No problem! VR programs will be able to modify rendered environments instantly, changing countless variables, from furniture materials to even the sun’s orientation. Keen to input your own furniture into a VR-rendered home? Advanced AIs could one day compile all your existing furniture, electronics, clothing, decorations, and even books, virtually organizing them across any accommodating new space.
As 3D scanning technologies make extraordinary headway, VR renditions will only grow cheaper and higher resolution. One company called Immersive Media (disclosure: I’m an investor and advisor) has a platform for 360-degree video capture and distribution, and is already exploring real estate 360-degree video.
Smaller firms like Studio 216, Vieweet, Arch Virtual, ArX Solutions, and Rubicon Media can similarly capture and render models of various properties for clients and investors to view and explore. In essence, VR real estate platforms will allow you to explore any home for sale, do the remodel, and determine if it truly is the house of your dreams.
Once you’re ready to make a bid, your AI will even help estimate a fair offer, then process and submit it for you. Real estate companies like Zillow, Trulia, Move, Redfin, ZipRealty (acquired by Realogy in 2014), and many others have already invested millions in machine learning applications to make search, valuation, consulting, and property management easier, faster, and much more accurate.
But what happens if the home you desire most means starting from scratch with new construction?
New Methods and Materials for Construction
For thousands of years, we’ve been constrained by the construction materials of nature. We built bricks from naturally abundant clay and shale, used tree limbs as our rooftops and beams, and mastered incredible structures in ancient Rome with the use of cement.
But construction is now on the cusp of a materials science revolution. Today, I’d like to focus on a few key materials and methods.
Imagine if you could turn the world’s greatest waste products into their most essential building blocks. Thanks to UCLA researchers at CO2NCRETE, we can already do this with carbon emissions.
Today, concrete production accounts for about five percent of all greenhouse gas (GHG) emissions. But what if concrete could instead sequester those gases? CO2NCRETE engineers capture carbon from smokestacks and combine it with lime to create a new type of cement. The lab’s 3D printers then shape the upcycled concrete into entirely new structures. Once conquered at scale, upcycled concrete will turn a former polluter into a carbon sink.
Or what if we wanted to print new residences from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute of Advanced Architecture of Catalonia (IAAC) is already working on a solution.
In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation, and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.
Nano- and micro-materials are ushering in a new era of smart, super-strong, and self-charging buildings. While carbon nanotubes dramatically increase the strength-to-weight ratio of skyscrapers, revolutionizing their structural flexibility, nanomaterials don’t stop there.
Several research teams are pioneering silicon nanoparticles to capture everyday light flowing through our windows. Little solar cells at the edges of windows then harvest this energy for ready use. Researchers at the US National Renewable Energy Lab have developed similar smart windows. Turning into solar panels when bathed in sunlight, these thermochromic windows will power our buildings, changing color as they do.
The American Society of Civil Engineers estimates that the US needs to spend roughly $4.5 trillion to fix nationwide roads, bridges, dams, and common infrastructure by 2025. But what if infrastructure could fix itself?
Enter self-healing concrete. Engineers at Delft University have developed bio-concrete that can repair its own cracks. As head researcher Henk Jonkers explains, “What makes this limestone-producing bacteria so special is that they are able to survive in concrete for more than 200 years and come into play when the concrete is damaged. […] If cracks appear as a result of pressure on the concrete, the concrete will heal these cracks itself.”
But bio-concrete is only the beginning of self-healing technologies. As futurist architecture firms start printing plastic and carbon-fiber houses like the stunner seen below (using Branch Technologies’ 3D printing technology), engineers have begun tackling self-healing plastic.
And in a bid to go smart, burgeoning construction projects have started embedding sensors for preemptive detection. Beyond materials and sensors, however, construction methods are fast colliding into robotics and 3D printing.
While some startups and research institutes have leveraged robot swarm construction (namely, Harvard’s robotic termite-like swarm of programmed constructors), others have taken to large-scale autonomous robots.
One such example is Fastbrick Robotics. After multiple iterations, the company’s Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and directs blocks through its robotic arm in accordance with a 3D model.
The Hadrian X layhead. Image Credit: Fastbrick Robotics.
Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.
Imagine the implications. Eliminating human safety concerns and unlocking any environment, autonomous builder robots could collaboratively build massive structures in space or deep underwater habitats.
Where, how, and what we live in form a vital pillar of our everyday lives. The concept of “home” is unlikely to disappear anytime soon. At the same time, real estate and construction are two of the biggest playgrounds for technological convergence, each on the verge of revolutionary disruption.
As underlying shifts in transportation, land reclamation, and the definition of “space” (real vs. virtual) take hold, the real estate market is about to explode in value, spreading out urban centers on unprecedented scales and unlocking vast new prime “property.”
Meanwhile, converging advancements in AI and VR are fundamentally disrupting the way we design, build, and explore new residences. Just as mirror worlds create immersive, virtual real estate economies, VR tours and AI agents are taking over both sides of the transaction, cutting out the middleman entirely.
And as materials science breakthroughs meet new modes of construction, the only limits to tomorrow’s structures are those of our own imagination.
Image Credit: OCEANIX/BIG-Bjarke Ingels Group.
As a human you instinctively know that a leopard is closer to a cat than a motorbike, but the way we train most AI makes them oblivious to these kinds of relations. Building the concept of similarity into our algorithms could make them far more capable, writes the author of a new paper in Science Robotics.
Convolutional neural networks have revolutionized the field of computer vision to the point that machines are now outperforming humans on some of the most challenging visual tasks. But the way we train them to analyze images is very different from the way humans learn, says Atsuto Maki, an associate professor at KTH Royal Institute of Technology.
“Imagine that you are two years old and being quizzed on what you see in a photo of a leopard,” he writes. “You might answer ‘a cat’ and your parents might say, ‘yeah, not quite but similar’.”
In contrast, the way we train neural networks rarely gives that kind of partial credit. They are typically trained to have very high confidence in the correct label and to consider all incorrect labels, whether “cat” or “motorbike,” equally wrong. That’s a mistake, says Maki, because ignoring the fact that something can be “less wrong” means you’re not exploiting all of the information in the training data.
Even when models are trained this way, there will be small differences in the probabilities assigned to incorrect labels that can tell you a lot about how well the model can generalize what it has learned to unseen data.
If you show a model a picture of a leopard and it gives “cat” a probability of five percent and “motorbike” one percent, that suggests it has picked up on the fact that a cat is closer to a leopard than a motorbike is. In contrast, if the figures are the other way around, it means the model hasn’t learned the broad features that make cats and leopards similar, something that could prove helpful when analyzing new data.
If we could boost this ability to identify similarities between classes we should be able to create more flexible models better able to generalize, says Maki. And recent research has demonstrated how variations of an approach called regularization might help us achieve that goal.
Neural networks are prone to a problem called “overfitting,” which refers to a tendency to pay too much attention to tiny details and noise specific to their training set. When that happens, models will perform excellently on their training data but poorly when applied to unseen test data without these particular quirks.
Regularization is used to circumvent this problem, typically by reducing the network’s capacity to learn all this unnecessary information and therefore boost its ability to generalize to new data. Techniques are varied, but generally involve modifying the network’s structure or the strength of the weights between artificial neurons.
More recently, though, researchers have suggested new regularization approaches that work by encouraging a broader spread of probabilities across all classes. This essentially helps them capture more of the class similarities, says Maki, and therefore boosts their ability to generalize.
One such approach was devised in 2017 by Google Brain researchers, led by deep learning pioneer Geoffrey Hinton. They introduced a penalty to the training process that directly punished overconfident predictions in the model’s outputs, along with a technique called label smoothing that prevents the largest probability from becoming much larger than all the others. This made probabilities lower for correct labels and higher for incorrect ones, and it was found to boost the performance of models on varied tasks, from image classification to speech recognition.
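As a minimal sketch of the label smoothing idea (the function name and the choice of epsilon here are illustrative, not taken from the Google Brain paper), the one-hot training target is mixed with a uniform distribution over all classes, so every incorrect label keeps a small share of probability:

```python
def smooth_labels(one_hot, eps=0.1):
    """Blend a one-hot target with a uniform distribution over K classes.

    The correct class keeps most of the probability mass, while every
    incorrect class receives a small, equal share (eps / K), which
    discourages the network from making overconfident predictions.
    """
    k = len(one_hot)
    return [(1 - eps) * t + eps / k for t in one_hot]

# A three-class "leopard" target, ordered [leopard, cat, motorbike]:
target = [1.0, 0.0, 0.0]
smoothed = smooth_labels(target, eps=0.1)
# The correct class now holds about 0.93 of the mass; each wrong class about 0.03.
```

Training against `smoothed` instead of `target` leaves room for the small off-target probabilities that, as described above, carry information about class similarity.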
Another came from Maki himself in 2017 and achieves the same goal, but by suppressing high values in the model’s feature vector—the mathematical construct that describes all of an object’s important characteristics. This has a knock-on effect on the spread of output probabilities and also helped boost performance on various image classification tasks.
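To see why suppressing high feature values spreads the output probabilities, note that the softmax exaggerates gaps between large inputs. The sketch below stands in for the actual suppression mechanism with a simple rescaling (this is an illustration of the knock-on effect, not Maki's exact formulation):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def entropy(p):
    """Shannon entropy: higher means probabilities are spread more evenly."""
    return -(p * np.log(p)).sum()

# A hypothetical feature vector with one dominant entry, and the same
# vector after high values are suppressed (here simply scaled down).
features = np.array([6.0, 1.0, 0.5])
suppressed = features / 3.0

print(entropy(softmax(features)))    # low: probability mass concentrated
print(entropy(softmax(suppressed)))  # higher: probabilities spread out
```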
While it’s still early days for the approach, the fact that humans are able to exploit these kinds of similarities to learn more efficiently suggests that models that incorporate them hold promise. Maki points out that it could be particularly useful in applications such as robotic grasping, where distinguishing various similar objects is important.
Image Credit: Marianna Kalashnyk / Shutterstock.com
The energy and transportation industries are being aggressively disrupted by converging exponential technologies.
In just five days, the sun provides Earth with an energy supply exceeding all proven reserves of oil, coal, and natural gas. Capturing just 1 part in 8,000 of this available solar energy would allow us to meet 100 percent of our energy needs.
As we leverage renewable energy supplied by the sun, wind, geothermal sources, and eventually fusion, we are rapidly heading towards a future where 100 percent of our energy needs will be met by clean tech in just 30 years.
During the past 40 years, solar prices have dropped 250-fold. And as these costs plummet, solar panel capacity continues to grow exponentially.
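A quick back-of-envelope check on those figures: a 250-fold drop over 40 years corresponds to an average annual price decline of roughly 13 percent, compounded.

```python
# Back-of-envelope using the figures in the text: a 250-fold price drop
# over 40 years implies this average annual rate of decline.
fold_drop = 250
years = 40

annual_decline = 1 - fold_drop ** (-1 / years)
print(f"{annual_decline:.1%}")  # 12.9%
```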
On the heels of energy abundance, we are additionally witnessing a new transportation revolution, which sets the stage for a future of seamlessly efficient travel at lower economic and environmental costs.
Top 5 Transportation Breakthroughs (2019-2024)
Entrepreneur and inventor Ramez Naam is my go-to expert on all things energy and environment. Currently serving as the Energy Co-Chair at Singularity University, Naam is the award-winning author of five books, including the Nexus series of science fiction novels. Having spent 13 years at Microsoft, his software has touched the lives of over a billion people. Naam holds over 20 patents, including several shared with co-inventor Bill Gates.
In the next five years, he forecasts five transportation trends and five energy trends, each poised to disrupt major players and birth entirely new business models.
Let’s dive in.
Autonomous cars drive 1 billion miles on US roads. Then 10 billion
Alphabet’s Waymo alone has already reached 10 million miles driven in the US. The 600 Waymo vehicles on public roads drive a total of 25,000 miles each day, and computer simulations provide an additional 25,000 virtual cars driving constantly. Since its launch in December, the Waymo One service has transported over 1,000 pre-vetted riders in the Phoenix area.
With more training miles, the accuracy of these cars continues to improve. GM Cruise has improved its disengagement rate by 321 percent since last year, trailing close behind with only one human intervention per 5,025 miles self-driven.
Autonomous taxis as a service in top 20 US metro areas
Along with its first quarterly earnings released last week, Lyft recently announced that it would expand its Waymo partnership with the upcoming deployment of 10 autonomous vehicles in the Phoenix area. While individuals previously had to partake in Waymo’s “early rider program” prior to trying Waymo One, the Lyft partnership will allow anyone to ride in a self-driving vehicle without a prior NDA.
Strategic partnerships will grow increasingly essential between automakers, self-driving tech companies, and rideshare services. Ford is currently working with Volkswagen, and Nvidia now collaborates with Daimler (Mercedes) and Toyota. Just last week, GM Cruise raised another $1.15 billion at a $19 billion valuation as the company aims to launch a ride-hailing service this year.
“They’re going to come to the Bay Area, Los Angeles, Houston, other cities with relatively good weather,” notes Naam. “In every major city within five years in the US and in some other parts of the world, you’re going to see the ability to hail an autonomous vehicle as a ride.”
Cambrian explosion of vehicle formats
Naam explains, “If you look today at the average ridership of a taxi, a Lyft, or an Uber, it’s about 1.1 passengers plus the driver. So, why do you need a large four-seater vehicle for that?”
Small electric, autonomous pods that seat as few as two people will begin to emerge, satisfying the majority of ride-hailing demands we see today. At the same time, larger communal vehicles will appear, such as Uber Express, that will undercut even the cheapest of transportation methods—buses, trams, and the like. Finally, last-mile scooter transit (or simply short-distance walks) might connect you to communal pick-up locations.
By 2024, an unimaginably diverse range of vehicles will arise to meet every possible need, regardless of distance or destination.
Drone delivery for lightweight packages in at least one US city
Wing, the Alphabet drone delivery startup, recently became the first company to gain approval from the Federal Aviation Administration (FAA) to make deliveries in the US. Having secured approval to deliver to 100 homes in Canberra, Australia, Wing additionally plans to begin delivering goods from local businesses in the suburbs of Virginia.
The current state of drone delivery is best suited for lightweight, urgent-demand payloads like pharmaceuticals, thumb drives, or connectors. And as Amazon continues to decrease its Prime delivery times—now as speedy as a one-day turnaround in many cities—the use of drones will become essential.
Robotic factories drive onshoring of US factories… but without new jobs
The supply chain will continue to shorten and become more agile with the re-onshoring of manufacturing jobs in the US and other countries. Naam reasons that new management and software jobs will drive this shift, as these roles develop the robotics needed to manufacture goods. Equally important, these robotic factories will provide a more humane setting than many current manufacturing practices overseas.
Top 5 Energy Breakthroughs (2019-2024)
First “1 cent per kWh” deals for solar and wind signed
Ten years ago, the lowest price of solar and wind power fell between 10 and 12 cents per kilowatt hour (kWh), over twice the price of wholesale power from coal or natural gas.
Today, the gap between solar/wind power and fossil fuel-generated electricity is nearly negligible in many parts of the world. In G20 countries, fossil fuel electricity costs between 5 and 17 cents per kWh, while the average cost per kWh of solar power in the US stands at under 10 cents.
Spanish firm Solarpack Corp Technological recently won a bid in Chile for a 120 MW solar power plant supplying energy at 2.91 cents per kWh. This deal will result in an estimated 25 percent drop in energy costs for Chilean businesses by 2021.
Naam indicates, “We will see the first unsubsidized 1.0 cent solar deals in places like Chile, Mexico, the Southwest US, the Middle East, and North Africa, and we’ll see similar prices for wind in places like Mexico, Brazil, and the US Great Plains.”
Solar and wind will reach >15 percent of US electricity, and begin to drive all growth
Just over eight percent of energy in the US comes from solar and wind sources. In total, 17 percent of American energy is derived from renewable sources, while a whopping 63 percent is sourced from fossil fuels, and 17 percent from nuclear.
Last year in the U.K., twice as much energy was generated from wind as from coal. For over a week in May, the U.K. went completely coal-free, using wind and solar to supply 35 percent and 21 percent of power, respectively. While fossil fuels remain the primary electricity source, this week-long experiment highlights the disruptive potential of solar and wind power that major countries like the U.K. are beginning to embrace.
“Solar and wind are still a relatively small part of the worldwide power mix, only about six percent. Within five years, it’s going to be 15 percent in the US and close to that worldwide,” Naam predicts. “We are nearing the point where we are not building any new fossil fuel power plants.”
It will be cheaper to build new solar/wind/batteries than to run on existing coal
Last October, Northern Indiana utility company NIPSCO announced its transition from a 65 percent coal-powered state to projected coal-free status by 2028. Importantly, this decision was made purely on the basis of financials, with an estimated $4 billion in cost savings for customers. The company has already begun several initiatives in solar, wind, and batteries.
NextEra, the largest power generator in the US, has taken on a similar goal, making a deal last year to purchase roughly seven million solar panels from JinkoSolar over four years. Leading power generators across the globe have vocalized a similar economic case for renewable energy.
ICE car sales have now peaked. All car sales growth will be electric
While electric vehicles (EV) have historically been more expensive for consumers than internal combustion engine-powered (ICE) cars, EVs are cheaper to operate and maintain. The yearly cost of operating an EV in the US is about $485, less than half the $1,117 cost of operating a gas-powered vehicle.
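The long-term payoff calculation is straightforward with the operating costs quoted above. The upfront EV price premium below is a hypothetical figure chosen purely for illustration, not a number from the text:

```python
# Worked example using the operating costs quoted above; the upfront
# EV price premium is an assumed figure for illustration only.
ev_annual_cost = 485       # USD per year (from the text)
ice_annual_cost = 1117     # USD per year (from the text)
ev_price_premium = 6000    # USD, hypothetical

annual_savings = ice_annual_cost - ev_annual_cost
payback_years = ev_price_premium / annual_savings
print(annual_savings)           # 632
print(round(payback_years, 1))  # 9.5, at this assumed premium
```

As battery prices fall, the premium in the numerator shrinks toward zero and the payback period with it, which is exactly the dynamic described below.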
And as battery prices continue to shrink, the upfront costs of EVs will decline until a long-term payoff calculation is no longer required to determine which type of car is the better investment. EVs will become the obvious choice.
Many experts including Naam believe that ICE-powered vehicles peaked worldwide in 2018 and will begin to decline over the next five years, as has already been demonstrated in the past five months. At the same time, EVs are expected to quadruple their market share to 1.6 percent this year.
New storage technologies will displace Li-ion batteries for tomorrow’s most demanding applications
Lithium ion batteries have dominated the battery market for decades, but Naam anticipates new storage technologies will take hold in different contexts. Flow batteries, which can collect and store solar and wind power at large scales, will supply city grids. California’s Independent System Operator, the nonprofit that maintains the majority of the state’s power grid, recently installed a flow battery system in San Diego.
Solid-state batteries, which use entirely solid electrolytes, will supply mobile devices and cars. A growing body of competitors, including Toyota, BMW, Honda, Hyundai, and Nissan, is already working on solid-state battery technology. Compared to lithium ion batteries, these batteries offer up to six times faster charging, three times the energy density, and eight years of added lifespan.
Major advancements in transportation and energy technologies will continue to converge over the next five years. A case in point, Tesla’s recent announcement of its “robotaxi” fleet exemplifies the growing trend towards joint priority of sustainability and autonomy.
On the connectivity front, 5G and next-generation mobile networks will continue to enable the growth of autonomous fleets, many of which will soon run on renewable energy sources. This growth demands important partnerships between energy storage manufacturers, automakers, self-driving tech companies, and ridesharing services.
In the eco-realm, an increasingly clear economic calculus will catalyze consumer adoption of autonomous electric vehicles. In just five years, Naam predicts that self-driving rideshare services will be cheaper than owning a private vehicle for urban residents. By the same token, plummeting renewable energy costs will make clean electricity far more attractive than fossil fuel-derived power.
As universally optimized AI systems cut down on traffic, aggregate time spent in vehicles will plummet, while the hours spent in your (or a shared) car can be devoted to any number of activities as autonomous systems steer the way. All the while, sharing an electric vehicle will cut down not only on your carbon footprint but also on the exorbitant costs swallowed by your previous SUV. How will you spend this extra time and money? What new natural resources will fuel your everyday life?
Image Credit: welcomia / Shutterstock.com