Tag Archives: group

#435167 A Closer Look at the Robots Helping Us ...

Buck Rogers had Twiki. Luke Skywalker palled around with C-3PO and R2-D2. And astronauts aboard the International Space Station (ISS) now have their own robotic companions in space—Astrobee.

A pair of the cube-shaped robots were launched to the ISS during an April re-supply mission and are currently being commissioned for use on the space station. The free-flying space robots, dubbed Bumble and Honey, are the latest generation of robotic machines to join the human crew on the ISS.

Exploration of the solar system and beyond will require autonomous machines that can assist humans with numerous tasks—or go where we cannot. NASA has said repeatedly that robots will be instrumental in future space missions to the moon, Mars, and even to the icy moon Europa.

The Astrobee robots will specifically test robotic capabilities in zero gravity, replacing the SPHERES (Synchronized Position Hold, Engage, Reorient, Experimental Satellite) robots that have been on the ISS for more than a decade to test various technologies ranging from communications to navigation.

The 18-sided SPHERES robots, each about the size of a volleyball or an oversized Dungeons and Dragons die, use CO2-based cold-gas thrusters for movement and a series of ultrasonic beacons for orientation. The Astrobee robots, on the other hand, propel themselves autonomously around the interior of the ISS using electric fans and navigate with the help of six cameras.

The modular design of the Astrobee robots means they are highly plug-and-play, capable of being reconfigured with different hardware modules. The robots’ software is also open-source, encouraging scientists and programmers to develop and test new algorithms and features.

And, yes, the Astrobee robots will be busy as bees once they are fully commissioned this fall, with experiments planned to begin next year. Scientists hope to learn more about how robots can assist space crews and perform caretaking duties on spacecraft.

Robots Working Together
The Astrobee robots are expected to be joined by a familiar “face” on the ISS later this year—the humanoid robot Robonaut.

Robonaut, also known as R2, was the first US-built robot on the ISS. It joined the crew back in 2011 without legs, which were added in 2014. However, the legs never entirely worked, and R2 experienced power failures that eventually led to its return to Earth last year for repairs. If all goes as planned, the space station’s first humanoid robot will return to the ISS to lend a hand to the astronauts and the new robotic arrivals.

In particular, NASA is interested in how the two different robotic platforms can complement each other, with an eye toward outfitting the agency’s proposed lunar orbital space station with various robots that can supplement a human crew.

“We don’t have definite plans for what would happen on the Gateway yet, but there’s a general recognition that intra-vehicular robots are important for space stations,” Trey Smith, Astrobee technical lead in the NASA Intelligent Robotics Group, told IEEE Spectrum. “And so, it would not be surprising to see a mobile manipulator like Robonaut, and a free flyer like Astrobee, on the Gateway.”

While the focus on R2 has been to test its capabilities in zero gravity and to use it for mundane or dangerous tasks in space, the technology enabling the humanoid robot has proven to be equally useful on Earth.

For example, R2 has amazing dexterity for a robot, with sensors, actuators, and tendons comparable to the nerves, muscles, and tendons in a human hand. Based on that design, engineers are working on a robotic glove that can help factory workers, for instance, do their jobs better while reducing the risk of repetitive injuries. R2 has also inspired development of a robotic exoskeleton for both astronauts in space and paraplegics on Earth.

Working Hard on Soft Robotics
While innovative and technologically sophisticated, Astrobee and Robonaut are typical robots in that neither one would do well in a limbo contest. In other words, most robots are limited in their flexibility and agility based on current hardware and materials.

A subfield of robotics known as soft robotics involves developing robots with highly pliant materials that mimic biological organisms in how they move. Scientists at NASA’s Langley Research Center are investigating how soft robots could help with future space exploration.

Specifically, the researchers are looking at a series of properties to understand how actuators—components responsible for moving a robotic part, such as Robonaut’s hand—can be built and used in space.

The team first 3D prints a mold and then pours a flexible material like silicone into it. The resulting actuator moves as its internal air bladders, or chambers, are inflated and deflated.

Some of the first applications of soft robotics sound more tool-like than R2-D2-like. For example, two soft robots could connect to produce a temporary shelter for astronauts on the moon or serve as an impromptu wind shield during one of Mars’ infamous dust storms.

The idea is to use soft robots in situations that are “dangerous, dirty, or dull,” according to Jack Fitzpatrick, a NASA intern working on the soft robotics project at Langley.

Working on Mars
Of course, space robots aren’t only designed to assist humans. In many instances, they are the only option to explore even relatively close celestial bodies like Mars. Four American-made robotic rovers have been used to investigate the fourth planet from the sun since 1997.

Opportunity is perhaps the most famous, covering about 25 miles of terrain across Mars over 15 years. A dust storm knocked it out of commission last year, with NASA officially ending the mission in February.

However, the biggest and baddest of the Mars rovers, Curiosity, is still crawling across the Martian surface, sending back valuable data since 2012. The car-size robot carries 17 cameras, a laser to vaporize rocks for study, and a drill to collect samples. It is on the hunt for signs of biological life.

The next year or two could see a virtual traffic jam of robots to Mars. NASA’s Mars 2020 Rover is next in line to visit the Red Planet, sporting scientific gadgets like an X-ray fluorescence spectrometer for chemical analyses and ground-penetrating radar to see below the Martian surface.

This diagram shows the instrument payload for the Mars 2020 mission. Image Credit: NASA.
Meanwhile, the Europeans have teamed with the Russians on a rover called Rosalind Franklin, named after a famed British chemist, that will drill down into the Martian ground for evidence of past or present life as soon as 2021.

The Chinese are also preparing to begin searching for life on Mars using robots as soon as next year, as part of the country’s Mars Global Remote Sensing Orbiter and Small Rover program. The mission is scheduled to be the first in a series of launches that would culminate with bringing samples back from Mars to Earth.

Perhaps there is no more famous utterance in the universe of science fiction as “to boldly go where no one has gone before.” However, the fact is that human exploration of the solar system and beyond will only be possible with robots of different sizes, shapes, and sophistication.

Image Credit: NASA.

Posted in Human Robots

#435152 The Futuristic Tech Disrupting Real ...

In the wake of the housing market collapse of 2008, one entrepreneur decided to dive right into the failing real estate industry. But this time, he didn’t buy any real estate to begin with. Instead, Glenn Sanford decided to launch the first-ever cloud-based real estate brokerage, eXp Realty.

Contracting virtual platform VirBELA to build out the company’s mega-campus in VR, eXp Realty demonstrates the power of a dematerialized workspace, throwing out hefty overhead costs and fundamentally redefining what ‘real estate’ really means. Ten years later, eXp Realty has an army of 14,000 agents across all 50 US states, 3 Canadian provinces, and 400 MLS market areas… all without a single physical office.

But VR is just one of many exponential technologies converging to revolutionize real estate and construction. As floating cities and driverless cars spread out your living options, AI and VR are together cutting out the middleman.

Already, the global construction industry is projected to surpass $12.9 trillion in 2022, and the total value of the US housing market alone grew to $33.3 trillion last year. Both vital for our daily lives, these industries will continue to explode in value, posing countless possibilities for disruption.

In this blog, I’ll be discussing the following trends:

New prime real estate locations;
Disintermediation of the real estate broker and search;
Materials science and 3D printing in construction.

Let’s dive in!

Location Location Location
Until today, location has been the name of the game when it comes to hunting down the best real estate. But constraints on land often drive up costs while limiting options, and urbanization is only exacerbating the problem.

Beyond the world of virtual real estate, two primary mechanisms are driving the creation of new locations.

(1) Floating Cities

Offshore habitation hubs, floating cities have long been conceived as a solution to rising sea levels, skyrocketing urban populations, and threatened ecosystems. If successful, they will soon unlock an abundance of prime real estate, whether for scenic living, commerce, education, or recreation.

One pioneering model is that of Oceanix City, designed by Danish architect Bjarke Ingels and a host of other domain experts. Intended to adapt organically over time, Oceanix would consist of a galaxy of mass-produced, hexagonal floating modules, built as satellite “cities” off coastal urban centers and sustained by renewable energies.

While individual 4.5-acre platforms would each sustain 300 people, these hexagonal modules are designed to link into 75-acre tessellations sustaining up to 10,000 residents. Each anchored to the ocean floor using biorock, Oceanix cities are slated to run as largely closed-loop systems, with any external resources delivered by automated drone networks.

Electric boats or flying cars might zoom you to work, city-embedded water capture technologies would provide your water, vertical and outdoor farms would supply your family’s meals, and share economies would dominate goods provision.

AERIAL: Located in calm, sheltered waters, near coastal megacities, OCEANIX City will be an adaptable, sustainable, scalable, and affordable solution for human life on the ocean. Image Credit: OCEANIX/BIG-Bjarke Ingels Group.
Joined by countless government officials whose island nations risk submersion at the hands of sea level rise, the UN is now getting on board. And just this year, seasteading is exiting the realm of science fiction and testing practical waters.

As French Polynesia seeks out robust solutions to sea level rise, their government has now joined forces with the San Francisco-based Seasteading Institute. With a newly designated special economic zone and 100 acres of beachfront, this joint Floating Island Project could even see up to a dozen inhabitable structures by 2020. And what better to fund the $60 million project than the team’s upcoming ICO?

But aside from creating new locations, autonomous vehicles (AVs) and flying cars are turning previously low-demand land into the prime real estate of tomorrow.

(2) Autonomous Electric Vehicles and Flying Cars

Today, the value of a location is a function of its proximity to your workplace, your city’s central business district, the best schools, or your closest friends.

But what happens when driverless cars desensitize you to distance, or Hyperloop and flying cars decimate your commute time? Historically, every time new transit methods have hit the mainstream, tolerance for distance has opened up right alongside them, further catalyzing city spread.

And just as Hyperloop and the Boring Company aim to make your commute immaterial, autonomous vehicle (AV) ridesharing services will spread out cities in two ways: (1) by drastically reducing parking spaces needed (vertical parking decks = more prime real estate); and (2) by untethering you from the steering wheel. Want an extra two hours of sleep on the way to work? Schedule a sleeper AV and nap on your route to the office. Need a car-turned-mobile-office? No problem.

Meanwhile, aerial taxis (i.e. flying cars) will allow you to escape ground congestion entirely, delivering you from bedroom to boardroom in a fraction of the time.

Already working with regulators, Uber Elevate has staked ambitious plans for its UberAIR airborne taxi project. By 2023, Uber anticipates rolling out flying drones in its first two pilot cities, Los Angeles and Dallas. Flying between rooftop skyports, drones would carry passengers at a height of 1,000 to 2,000 feet and at speeds of 100 to 200 mph. And while costs per ride are anticipated to resemble those of an Uber Black based on mileage, prices are projected to soon drop to those of an UberX.

But the true economic feat boils down to this: if I were to commute 50 to 100 kilometers, I could get two or three times the house for the same price. (Not to mention the extra living space offered up by my now-unneeded garage.)

All of a sudden, virtual reality, broadband, AVs, and high-speed vehicles are going to change where we live and where we work. Rather than living in a crowded, dense urban core for access to jobs and entertainment, a future of personalized, autonomous, low-cost transport opens the luxury of rural areas to all without compromising the benefits of a short commute.

Once these drivers multiply your real estate options, how will you select your next home?

Disintermediation: Say Bye to Your Broker
In a future of continuous and personalized preference-tracking, why hire a human agent who knows less about your needs and desires than a personal AI?

Just as disintermediation is cutting out bankers and insurance agents, so too is it closing in on real estate brokers. Over the next decade, as AI becomes your agent, VR will serve as your medium.

To paint a more vivid picture of how this will look, over 98 percent of your home search will be conducted from the comfort of your couch through next-generation VR headgear.

Once you’ve verbalized your primary desires for home location, finishings, size, etc. to your personal AI, it will offer you top picks, tour-able 24/7, with optional assistance by a virtual guide and constantly updated data. As a seller, this means potential buyers from two miles, or two continents, away.

Throughout each immersive VR tour, advanced eye-tracking software and a permissioned machine learning algorithm follow your gaze, further learn your likes and dislikes, and intelligently recommend other homes or commercial residences to visit.
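No such recommendation engine is publicly documented, but the core loop described above—convert gaze dwell time into feature preferences, then rank listings—can be sketched in a few lines. Everything below (data, feature names, weights) is invented purely for illustration:

```python
# Toy sketch: learn feature preferences from gaze dwell times recorded
# during VR home tours, then rank unseen listings by those preferences.
# All values are hypothetical; real systems would use far richer models.

from collections import defaultdict

def learn_preferences(tours):
    """Average seconds of gaze dwell per feature across toured homes."""
    totals, counts = defaultdict(float), defaultdict(int)
    for dwell_by_feature in tours:
        for feature, seconds in dwell_by_feature.items():
            totals[feature] += seconds
            counts[feature] += 1
    return {f: totals[f] / counts[f] for f in totals}

def rank_listings(listings, prefs):
    """Score each listing by the summed preference weight of its features."""
    scored = [(sum(prefs.get(f, 0.0) for f in feats), name)
              for name, feats in listings.items()]
    return [name for _, name in sorted(scored, reverse=True)]

tours = [
    {"open_kitchen": 40.0, "garden": 5.0, "garage": 2.0},
    {"open_kitchen": 35.0, "fireplace": 12.0},
]
listings = {
    "loft_a": ["open_kitchen", "garage"],
    "cottage_b": ["garden", "fireplace"],
}
prefs = learn_preferences(tours)
print(rank_listings(listings, prefs))  # ['loft_a', 'cottage_b']
```

Here the buyer's gaze lingered longest on open kitchens, so the kitchen-heavy listing ranks first; a production system would weigh hundreds of signals, but the shape of the loop is the same.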

Curious as to what the living room might look like with a fresh coat of blue paint and a white carpet? No problem! VR programs will be able to modify rendered environments instantly, changing countless variables, from furniture materials to even the sun’s orientation. Keen to input your own furniture into a VR-rendered home? Advanced AIs could one day compile all your existing furniture, electronics, clothing, decorations, and even books, virtually organizing them across any accommodating new space.

As 3D scanning technologies make extraordinary headway, VR renditions will only grow cheaper and higher resolution. One company called Immersive Media (disclosure: I’m an investor and advisor) has a platform for 360-degree video capture and distribution, and is already exploring real estate 360-degree video.

Smaller firms like Studio 216, Vieweet, Arch Virtual, ArX Solutions, and Rubicon Media can similarly capture and render models of various properties for clients and investors to view and explore. In essence, VR real estate platforms will allow you to explore any home for sale, do the remodel, and determine if it truly is the house of your dreams.

Once you’re ready to make a bid, your AI will even help you estimate a fair offer, then process and submit it. Real estate companies like Zillow, Trulia, Move, Redfin, ZipRealty (acquired by Realogy in 2014) and many others have already invested millions in machine learning applications to make search, valuation, consulting, and property management easier, faster, and much more accurate.
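The valuation models at companies like Zillow are proprietary, but the kernel of the idea—fit price to property features, then use the fit to suggest a starting bid—can be sketched with ordinary least squares on invented comparable sales:

```python
# Hypothetical valuation sketch: fit sale price to floor area with
# ordinary least squares. Real platforms use many more features and
# far richer models; the comparable sales below are made up.

def fit_line(xs, ys):
    """Least-squares slope and intercept for y = m*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return m, my - m * mx

# Invented comparable sales: (square feet, sale price)
sqft  = [1000, 1500, 2000, 2500]
price = [200_000, 300_000, 400_000, 500_000]

m, b = fit_line(sqft, price)
estimate = m * 1800 + b            # value a 1,800 sq ft home
print(round(estimate))             # 360000 -- a starting point for a bid
```

A single-feature linear fit is the simplest possible stand-in; the point is that once valuation is a function of data, the "suggest a bid" step is just an evaluation of that function.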

But what happens if the home you desire most means starting from scratch with new construction?

New Methods and Materials for Construction
For thousands of years, we’ve been constrained by the construction materials of nature. We built bricks from naturally abundant clay and shale, used tree limbs as our rooftops and beams, and mastered incredible structures in ancient Rome with the use of cement.

But construction is now on the cusp of a materials science revolution. Today, I’d like to focus on three key materials:

Upcycled Materials

Imagine if you could turn the world’s greatest waste products into their most essential building blocks. Thanks to UCLA researchers at CO2NCRETE, we can already do this with carbon emissions.

Today, concrete production accounts for about five percent of all greenhouse gas (GHG) emissions. But what if concrete could instead capture greenhouse gases? CO2NCRETE engineers capture carbon from smokestacks and combine it with lime to create a new type of cement. The lab’s 3D printers then shape the upcycled concrete into entirely new structures. Once conquered at scale, upcycled concrete will turn a former polluter into a future conserver.

Or what if we wanted to print new residences from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute of Advanced Architecture of Catalonia (IAAC) is already working on a solution.

In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation, and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.

Nanomaterials

Nano- and micro-materials are ushering in a new era of smart, super-strong, and self-charging buildings. While carbon nanotubes dramatically increase the strength-to-weight ratio of skyscrapers, revolutionizing their structural flexibility, nanomaterials don’t stop here.

Several research teams are pioneering silicon nanoparticles to capture everyday light flowing through our windows. Little solar cells at the edges of windows then harvest this energy for ready use. Researchers at the US National Renewable Energy Lab have developed similar smart windows. Turning into solar panels when bathed in sunlight, these thermochromic windows will power our buildings, changing color as they do.

Self-Healing Infrastructure

The American Society of Civil Engineers estimates that the US needs to spend roughly $4.5 trillion to fix nationwide roads, bridges, dams, and common infrastructure by 2025. But what if infrastructure could fix itself?

Enter self-healing concrete. Engineers at Delft University have developed bio-concrete that can repair its own cracks. As head researcher Henk Jonkers explains, “What makes this limestone-producing bacteria so special is that they are able to survive in concrete for more than 200 years and come into play when the concrete is damaged. […] If cracks appear as a result of pressure on the concrete, the concrete will heal these cracks itself.”

But bio-concrete is only the beginning of self-healing technologies. As futurist architecture firms start printing plastic and carbon-fiber houses like the stunner seen below (using Branch Technologies’ 3D printing technology), engineers have begun tackling self-healing plastic.

And in a bid to go smart, burgeoning construction projects have started embedding sensors for preemptive detection. Beyond materials and sensors, however, construction methods are fast colliding into robotics and 3D printing.

While some startups and research institutes have leveraged robot swarm construction (namely, Harvard’s robotic termite-like swarm of programmed constructors), others have taken to large-scale autonomous robots.

One such example involves Fastbrick Robotics. After multiple iterations, the company’s Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and directs blocks through its robotic arm in accordance with a 3D model.

Layhead. Image Credit: Fastbrick Robotics.
Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.

Imagine the implications. Eliminating human safety concerns and unlocking any environment, autonomous builder robots could collaboratively construct massive structures in space or habitats deep underwater.

Final Thoughts
Where, how, and what we live in form a vital pillar of our everyday lives. The concept of “home” is unlikely to disappear anytime soon. At the same time, real estate and construction are two of the biggest playgrounds for technological convergence, each on the verge of revolutionary disruption.

As underlying shifts in transportation, land reclamation, and the definition of “space” (real vs. virtual) take hold, the real estate market is about to explode in value, spreading out urban centers on unprecedented scales and unlocking vast new prime “property.”

Meanwhile, converging advancements in AI and VR are fundamentally disrupting the way we design, build, and explore new residences. Just as mirror worlds create immersive, virtual real estate economies, VR tours and AI agents are absorbing both sides of the coin to entirely obliterate the middleman.

And as materials science breakthroughs meet new modes of construction, the only limits to tomorrow’s structures are those of our own imagination.

Join Me
Abundance-Digital Online Community: Stay ahead of technological advancements and turn your passion into action. Abundance Digital is now part of Singularity University. Learn more.

Image Credit: OCEANIX/BIG-Bjarke Ingels Group.


#435098 Coming of Age in the Age of AI: The ...

The first generation to grow up entirely in the 21st century will never remember a time before smartphones or smart assistants. They will likely be the first children to ride in self-driving cars, as well as the first whose healthcare and education could be increasingly turned over to artificially intelligent machines.

Futurists, demographers, and marketers have yet to agree on the specifics of what defines the next wave of humanity to follow Generation Z. That hasn’t stopped some, like Australian futurist Mark McCrindle, from coining the term Generation Alpha, denoting a sort of reboot of society in a fully-realized digital age.

“In the past, the individual had no power, really,” McCrindle told Business Insider. “Now, the individual has great control of their lives through being able to leverage this world. Technology, in a sense, transformed the expectations of our interactions.”

Technology may well impart Marvel superhero-like powers to Generation Alpha that even tech-savvy Millennials never envisioned over cups of chai latte. But the powers of machine learning, computer vision, and other disciplines under the broad category of artificial intelligence will shape this yet unformed generation more definitively than any before it.

What will it be like to come of age in the Age of AI?

The AI Doctor Will See You Now
Perhaps no other industry is adopting and using AI as much as healthcare. The term “artificial intelligence” appears in nearly 90,000 publications indexed in the biomedical PubMed database.

AI is already transforming healthcare and longevity research. Machines are helping to design drugs faster and detect disease earlier. And AI may soon influence not only how we diagnose and treat illness in children, but perhaps how we choose which children will be born in the first place.

A study published earlier this month in NPJ Digital Medicine by scientists from Weill Cornell Medicine used 12,000 photos of human embryos taken five days after fertilization to train an AI algorithm on how to tell which in vitro fertilized embryo had the best chance of a successful pregnancy based on its quality.

Investigators assigned each embryo a grade based on various aspects of its appearance. A statistical analysis then correlated that grade with the probability of success. The algorithm, dubbed Stork, was able to classify the quality of a new set of images with 97 percent accuracy.

“Our algorithm will help embryologists maximize the chances that their patients will have a single healthy pregnancy,” said Dr. Olivier Elemento, director of the Caryl and Israel Englander Institute for Precision Medicine at Weill Cornell Medicine, in a press release. “The IVF procedure will remain the same, but we’ll be able to improve outcomes by harnessing the power of artificial intelligence.”

Other medical researchers see potential in applying AI to detect possible developmental issues in newborns. Scientists in Europe, working with a Finnish AI startup that creates seizure monitoring technology, have developed a technique for detecting movement patterns that might indicate conditions like cerebral palsy.

Published last month in the journal Acta Paediatrica, the study relied on an algorithm to extract the movements of a newborn, turning them into a simplified “stick figure” that medical experts could use to more easily detect clinically relevant data.

The researchers are continuing to improve the datasets, including using 3D video recordings, and are now developing an AI-based method for determining if a child’s motor maturity aligns with its true age. Meanwhile, a study published in February in Nature Medicine discussed the potential of using AI to diagnose pediatric disease.

AI Gets Classy
After being weaned on algorithms, Generation Alpha will hit the books—about machine learning.

China is famously trying to win the proverbial AI arms race by spending billions on new technologies, with one Chinese city alone pledging nearly $16 billion to build a smart economy based on artificial intelligence.

To reach dominance by its stated goal of 2030, Chinese cities are also incorporating AI education into their school curriculum. Last year, China published its first high school textbook on AI, according to the South China Morning Post. More than 40 schools are participating in a pilot program that involves SenseTime, one of the country’s biggest AI companies.

In the US, where it seems every child has access to their own AI assistant, researchers are just beginning to understand how the ubiquity of intelligent machines will influence the ways children learn and interact with their highly digitized environments.

Sandra Chang-Kredl, associate professor in the department of education at Concordia University, told The Globe and Mail that AI could have detrimental effects on learning creativity or emotional connectedness.

Similar concerns inspired Stefania Druga, a member of the Personal Robots group at the MIT Media Lab (and former Education Teaching Fellow at SU), to study interactions between children and artificial intelligence devices in order to encourage positive interactions.

Toward that goal, Druga created Cognimates, a platform that enables children to program and customize their own smart devices such as Alexa or even a smart, functional robot. The kids can also use Cognimates to train their own AI models or even build a machine learning version of Rock Paper Scissors that gets better over time.
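The essence of a Rock Paper Scissors bot that "gets better over time" can be sketched in a few lines of Python. This is a deliberately simple frequency model that counters the opponent's most common move; the class and training loop are illustrative, not actual Cognimates code:

```python
import random
from collections import Counter

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

class RPSLearner:
    """Tracks the opponent's move frequencies and counters the most
    common one -- the simplest model that improves with experience."""
    def __init__(self):
        self.history = Counter()

    def move(self):
        if not self.history:
            return random.choice(list(BEATS))  # no data yet: guess
        favorite = self.history.most_common(1)[0][0]
        return BEATS[favorite]                 # play what beats their favorite

    def observe(self, opponent_move):
        self.history[opponent_move] += 1

bot = RPSLearner()
for _ in range(10):          # an opponent who loves rock
    bot.observe("rock")
print(bot.move())            # paper
```

A child experimenting with such a bot sees the feedback loop directly: the more rounds the model observes, the harder it becomes to beat with a predictable strategy.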

“I believe it’s important to also introduce young people to the concepts of AI and machine learning through hands-on projects so they can make more informed and critical use of these technologies,” Druga wrote in a Medium blog post.

Druga is also the founder of Hackidemia, an international organization that sponsors workshops and labs around the world to introduce kids to emerging technologies at an early age.

“I think we are in an arms race in education with the advancement of technology, and we need to start thinking about AI literacy before patterns of behaviors for children and their families settle in place,” she wrote.

AI Goes Back to School
It also turns out that AI has as much to learn from kids. More and more researchers are interested in understanding how children grasp basic concepts that still elude the most advanced machine minds.

For example, developmental psychologist Alison Gopnik has written and lectured extensively about how studying the minds of children can provide computer scientists clues on how to improve machine learning techniques.

In an interview with Vox, she described how DeepMind’s AlphaZero, though trained to be a chess master, struggles with even the simplest changes in the rules, such as allowing the bishop to move horizontally instead of vertically.

“A human chess player, even a kid, will immediately understand how to transfer that new rule to their playing of the game,” she noted. “Flexibility and generalization are something that even human one-year-olds can do but that the best machine learning systems have a much harder time with.”

Last year, the federal defense agency DARPA announced a new program aimed at improving AI by teaching it “common sense.” One of the chief strategies is to develop systems for “teaching machines through experience, mimicking the way babies grow to understand the world.”

Such an approach is also the basis of a new AI program at MIT called the MIT Quest for Intelligence.

The research leverages cognitive science to understand human intelligence, according to an article on the project in MIT Technology Review, such as exploring how young children visualize the world using their own innate 3D models.

“Children’s play is really serious business,” said Josh Tenenbaum, who leads the Computational Cognitive Science lab at MIT and heads the new program. “They’re experiments. And that’s what makes humans the smartest learners in the known universe.”

In a world increasingly driven by smart technologies, it’s good to know the next generation will be able to keep up.

Image Credit: phoelixDE / Shutterstock.com


#434865 5 AI Breakthroughs We’ll Likely See in ...

Convergence is accelerating disruption… everywhere! Exponential technologies are colliding into each other, reinventing products, services, and industries.

As AI algorithms such as Siri and Alexa can process your voice and output helpful responses, other AIs like Face++ can recognize faces. And yet others create art from scribbles, or even diagnose medical conditions.

Let’s dive into AI and convergence.

Top 5 Predictions for AI Breakthroughs (2019-2024)
My friend Neil Jacobstein is my ‘go-to expert’ in AI, with over 25 years of technical consulting experience in the field. Currently the AI and Robotics chair at Singularity University, Jacobstein is also a Distinguished Visiting Scholar in Stanford’s MediaX Program, a Henry Crown Fellow, an Aspen Institute moderator, and serves on the National Academy of Sciences Earth and Life Studies Committee. Neil predicted five trends he expects to emerge over the next five years, by 2024.

AI gives rise to new non-human pattern recognition and intelligence results

AlphaGo Zero, a machine learning program trained to play the complex game of Go, defeated its predecessor AlphaGo, the program that beat the human world champion, by 100 games to zero in 2017. But instead of learning from human play, AlphaGo Zero trained by playing against itself—a method known as reinforcement learning.

Building its own knowledge from scratch, AlphaGo Zero demonstrates a novel form of creativity, free of human bias. Even more groundbreaking, this type of AI pattern recognition allows machines to accumulate thousands of years of knowledge in a matter of hours.
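AlphaGo Zero's full method (deep networks plus Monte Carlo tree search) is far beyond a blog snippet, but the core idea of self-play reinforcement learning can be sketched on a much smaller game. The toy below is purely illustrative: a single Q-table learns the game of Nim (21 sticks, take 1–3, last stick wins) by playing both sides against itself, with each update made from the perspective of the player to move.

```python
import random

random.seed(0)

# Q[sticks][take] = estimated value of taking `take` sticks,
# from the perspective of the player currently to move.
Q = {s: {a: 0.0 for a in range(1, min(3, s) + 1)} for s in range(1, 22)}

ALPHA, EPSILON = 0.5, 0.3

def best_action(s):
    return max(Q[s], key=Q[s].get)

for episode in range(20000):
    s = random.randint(1, 21)  # random start so all states get visited
    while s > 0:
        # Epsilon-greedy move; both "players" share the same Q-table.
        if random.random() < EPSILON:
            a = random.choice(list(Q[s]))
        else:
            a = best_action(s)
        s_next = s - a
        if s_next == 0:
            target = 1.0  # taking the last stick wins
        else:
            # The opponent moves next; their best value is our loss (negamax).
            target = -max(Q[s_next].values())
        Q[s][a] += ALPHA * (target - Q[s][a])
        s = s_next
```

After training, the agent rediscovers Nim's known optimal strategy (always leave a multiple of four sticks) without ever seeing a human game—the same "knowledge from scratch" principle the article describes, at a vastly smaller scale.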

While these systems can’t answer the question “What is orange juice?” or compete with the intelligence of a fifth grader, they are growing more and more strategically complex, merging with other forms of narrow artificial intelligence. Within the next five years, who knows what successors of AlphaGo Zero will emerge, augmenting both your business functions and day-to-day life.

Doctors risk malpractice when not using machine learning for diagnosis and treatment planning

A group of Chinese and American researchers recently created an AI system that diagnoses common childhood illnesses, ranging from the flu to meningitis. Trained on electronic health records compiled from 1.3 million outpatient visits of almost 600,000 patients, the AI program produced diagnosis outcomes with unprecedented accuracy.

While the US health system does not tout the same level of accessible universal health data as some Chinese systems, we’ve made progress in implementing AI in medical diagnosis. Dr. Kang Zhang, chief of ophthalmic genetics at the University of California, San Diego, created his own system that detects signs of diabetic blindness, relying on both text and medical images.

With an eye to the future, Jacobstein has predicted that “we will soon see an inflection point where doctors will feel it’s a risk to not use machine learning and AI in their everyday practices because they don’t want to be called out for missing an important diagnostic signal.”

Quantum advantage will massively accelerate drug design and testing

Researchers estimate that there are 10^60 possible drug-like molecules—more than the number of atoms in our solar system. But today, chemists must make drug predictions based on properties influenced by molecular structure, then synthesize numerous variants to test their hypotheses.

Quantum computing could transform this time-consuming, highly costly process into an efficient, not to mention life-changing, drug discovery protocol.

“Quantum computing is going to have a major industrial impact… not by breaking encryption,” said Jacobstein, “but by making inroads into design through massive parallel processing that can exploit superposition and quantum interference and entanglement, and that can wildly outperform classical computing.”

AI accelerates both attacks on and defenses of security systems

With the incorporation of AI into almost every aspect of our lives, cyberattacks have grown increasingly threatening. “Deep attacks” can use AI-generated content to avoid both human and AI controls.

Previous examples include fake videos of former President Obama speaking fabricated sentences, and an adversarial AI fooling another algorithm into categorizing a stop sign as a 45 mph speed limit sign. Without the appropriate protections, AI systems can be manipulated to conduct any number of destructive objectives, whether ruining reputations or diverting autonomous vehicles.
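The stop-sign attack above used deep networks and careful optimization, but the underlying principle—nudging an input just enough to flip a model's decision—can be shown on a toy linear classifier. This is a minimal, illustrative sketch of the fast gradient sign method; the weights and inputs are invented for the example.

```python
# A toy linear "classifier": score(x) = w·x + b, class 1 if score > 0 else class 0.
w = [1.0, -1.0]
b = 0.0

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def classify(x):
    return 1 if score(x) > 0 else 0

def sign(v):
    return 1.0 if v > 0 else -1.0 if v < 0 else 0.0

def fgsm(x, eps):
    # For a linear model the gradient of the score w.r.t. x is just w,
    # so stepping eps against sign(w) lowers the score fastest under
    # an L-infinity budget (the fast gradient sign method).
    return [xi - eps * sign(wi) for xi, wi in zip(x, w)]

x = [2.0, 1.0]            # classified as 1 (score = 1.0)
x_adv = fgsm(x, eps=1.0)  # a small, targeted nudge flips the classification
```

A perturbation of at most 1.0 per coordinate is enough to flip the prediction—exactly the kind of small, deliberate change that makes adversarial inputs hard for humans to spot in images.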

Jacobstein’s take: “We all have security systems on our buildings, in our homes, around the healthcare system, and in air traffic control, financial organizations, the military, and intelligence communities. But we all know that these systems have been hacked periodically and we’re going to see that accelerate. So, there are major business opportunities there and there are major opportunities for you to get ahead of that curve before it bites you.”

AI design systems drive breakthroughs in atomically precise manufacturing

Just as the modern computer transformed our relationship with bits and information, AI will redefine and revolutionize our relationship with molecules and materials. AI is currently being used to discover new materials for clean-tech innovations, such as solar panels, batteries, and devices that can now conduct artificial photosynthesis.

Today, it takes about 15 to 20 years to create a single new material, according to industry experts. But as AI design systems skyrocket in capacity, these will vastly accelerate the materials discovery process, allowing us to address pressing issues like climate change at record rates. Companies like Kebotix are already on their way to streamlining the creation of chemistries and materials at the click of a button.

Atomically precise manufacturing will enable us to produce the previously unimaginable.

Final Thoughts
Within just the past three years, countries across the globe have signed into existence national AI strategies and plans for ramping up innovation. Businesses and think tanks have leaped onto the scene, hiring AI engineers and tech consultants to leverage what computer scientist Andrew Ng has even called the new ‘electricity’ of the 21st century.

As AI plays an increasingly vital role in everyday life, how will your business leverage it to keep up and build forward?

In the wake of burgeoning markets, new ventures will quickly arise, each taking advantage of untapped data sources or unmet security needs.

And as your company aims to ride the wave of AI’s exponential growth, consider the following pointers to leverage AI and disrupt yourself before disruption reaches you first:

Determine where and how you can begin collecting critical data to inform your AI algorithms
Identify time-intensive processes that can be automated and accelerated within your company
Discern which global challenges can be expedited by hyper-fast, all-knowing minds

Remember: good data is vital fuel. Well-defined problems are the best compass. And the time to start implementing AI is now.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Yurchanka Siarhei / Shutterstock.com

Posted in Human Robots

#434797 This Week’s Awesome Stories From ...

GENE EDITING
Genome Engineers Made More Than 13,000 Genome Edits in a Single Cell
Antonio Regalado | MIT Technology Review
“The group, led by gene technologist George Church, wants to rewrite genomes at a far larger scale than has currently been possible, something it says could ultimately lead to the ‘radical redesign’ of species—even humans.”

ROBOTICS
Inside Google’s Rebooted Robotics Program
Cade Metz | The New York Times
“Google’s new lab is indicative of a broader effort to bring so-called machine learning to robotics. …Many believe that machine learning—not extravagant new devices—will be the key to developing robotics for manufacturing, warehouse automation, transportation and many other tasks.”

VIDEOS
Boston Dynamics Builds the Warehouse Robot of Jeff Bezos’ Dreams
Luke Dormehl | Digital Trends
“…for anyone wondering what the future of warehouse operation is likely to look like, this offers a far more practical glimpse of the years to come than, say, a dancing dog robot. As Boston Dynamics moves toward commercializing its creations for the first time, this could turn out to be a lot closer than you might think.”

TECHNOLOGY
Europe Is Splitting the Internet Into Three
Casey Newton | The Verge
“The internet had previously been divided into two: the open web, which most of the world could access; and the authoritarian web of countries like China, which is parceled out stingily and heavily monitored. As of today, though, the web no longer feels truly worldwide. Instead we now have the American internet, the authoritarian internet, and the European internet. How does the EU Copyright Directive change our understanding of the web?”

VIRTUAL REALITY
No Man’s Sky’s Next Update Will Let You Explore Infinite Space in Virtual Reality
Taylor Hatmaker | TechCrunch
“Assuming the game runs well enough, No Man’s Sky Virtual Reality will be a far cry from gimmicky VR games that lack true depth, offering one of the most expansive—if not the most expansive—VR experiences to date.”

3D PRINTING
3D Metal Printing Tries to Break Into the Manufacturing Mainstream
Mark Anderson | IEEE Spectrum
“It’s been five or so years since 3D printing was at peak hype. Since then, the technology has edged its way into a new class of materials and started to break into more applications. Today, 3D printers are being seriously considered as a means to produce stainless steel 5G smartphones, high-strength alloy gas-turbine blades, and other complex metal parts.”

Image Credit: ale de sun / Shutterstock.com

Posted in Human Robots