
#433892 The Spatial Web Will Map Our 3D ...

The boundaries between digital and physical space are disappearing at a breakneck pace. What was once static and boring is becoming dynamic and magical.

For all of human history, looking at the world through our eyes has been the same experience for everyone. Beyond the bounds of an over-active imagination, what you see is the same as what I see.

But all of this is about to change. Over the next two to five years, the world around us is about to light up with layer upon layer of rich, fun, meaningful, engaging, and dynamic data. Data you can see and interact with.

This magical future ahead is called the Spatial Web and will transform every aspect of our lives, from retail and advertising, to work and education, to entertainment and social interaction.

Massive change is underway as a result of a series of converging technologies, from 5G global networks and ubiquitous artificial intelligence, to 30+ billion connected devices (known as the IoT), each of which will generate vast streams of real-world data every second, everywhere.

The current AI explosion will make everything smart, autonomous, and self-programming. Blockchain and cloud-enabled services will support a secure data layer, putting data back in the hands of users and allowing us to build complex rule-based infrastructure in tomorrow’s virtual worlds.

And with the rise of online-merge-offline (OMO) environments, two-dimensional screens will no longer serve as our exclusive portal to the web. Instead, virtual and augmented reality eyewear will allow us to interface with a digitally-mapped world, richly layered with visual data.

Welcome to the Spatial Web. Over the next few months, I’ll be doing a deep dive into the Spatial Web (a.k.a. Web 3.0), covering what it is, how it works, and its vast implications across industries, from real estate and healthcare to entertainment and the future of work. In this blog, I’ll discuss the what, how, and why of Web 3.0—humanity’s first major foray into our virtual-physical hybrid selves (BTW, this year at Abundance360, we’ll be doing a deep dive into the Spatial Web with the leaders of HTC, Magic Leap, and High-Fidelity).

Let’s dive in.

What is the Spatial Web?
While we humans exist in three dimensions, our web today is flat.

The web was designed for shared information, absorbed through a flat screen. But as proliferating sensors, ubiquitous AI, and interconnected networks blur the lines between our physical and online worlds, we need a spatial web to help us digitally map a three-dimensional world.

To put Web 3.0 in context, let’s take a trip down memory lane. In the early 1990s, the newly born World Wide Web consisted of static web pages and one-way information—a monumental system for publishing and linking information unlike any unified data system before it. To connect, we had to dial up through unstable modems and struggle through insufferably slow connection speeds.

But emerging from this revolutionary (albeit non-interactive) infodump, Web 2.0 has connected the planet more in one decade than empires did in millennia.

Granting democratized participation through newly interactive sites and applications, today’s web era has turbocharged information-sharing and created ripple effects of scientific discovery, economic growth, and technological progress on an unprecedented scale.

We’ve seen the explosion of social networking sites, wikis, and online collaboration platforms. Consumers have become creators; physically isolated users have been handed a global microphone; and entrepreneurs can now access billions of potential customers.

But if Web 2.0 took the world by storm, the Spatial Web emerging today will leave it in the dust.

While there’s no clear consensus about its definition, the Spatial Web refers to a computing environment that exists in three-dimensional space—a twinning of real and virtual realities—enabled via billions of connected devices and accessed through the interfaces of virtual and augmented reality.

In this way, the Spatial Web will enable us to both build a twin of our physical reality in the virtual realm and bring the digital into our real environments.

It’s the next era of web-like technologies:

Spatial computing technologies, like augmented and virtual reality;
Physical computing technologies, like IoT and robotic sensors;
And decentralized computing: both blockchain—which enables greater security and data authentication—and edge computing, which pushes computing power to where it’s most needed, speeding everything up.

Geared with natural language search, data mining, machine learning, and AI recommendation agents, the Spatial Web is a growing expanse of services and information, navigable with the use of ever-more-sophisticated AI assistants and revolutionary new interfaces.

Where Web 1.0 consisted of static documents and read-only data, Web 2.0 introduced multimedia content, interactive web applications, and social media on two-dimensional screens. But converging technologies are quickly transcending the laptop, and will even disrupt the smartphone in the next decade.

With the rise of wearables, smart glasses, AR / VR interfaces, and the IoT, the Spatial Web will integrate seamlessly into our physical environment, overlaying every conversation, every road, every object, conference room, and classroom with intuitively-presented data and AI-aided interaction.

Think: the Oasis in Ready Player One, where anyone can create digital personas, build and invest in smart assets, do business, complete effortless peer-to-peer transactions, and collect real estate in a virtual world.

Or imagine a virtual replica or “digital twin” of your office, each conference room authenticated on the blockchain, requiring a cryptographic key for entry.

As I’ve discussed with my good friend and “VR guru” Philip Rosedale, I’m absolutely clear that in the not-too-distant future, every physical element of every building in the world is going to be fully digitized, existing as a virtual incarnation—or even as many such incarnations. “Meet me at the top of the Empire State Building?” “Sure, which one?”

This digitization of life means that suddenly every piece of information can become spatial, every environment can be smarter by virtue of AI, and every data point about me and my assets—both virtual and physical—can be reliably stored, secured, enhanced, and monetized.

In essence, the Spatial Web lets us interface with digitally-enhanced versions of our physical environment and build out entirely fictional virtual worlds—capable of running simulations, supporting entire economies, and even birthing new political systems.

But while I’ll get into the weeds of different use cases next week, let’s first make this concrete.

How Does It Work?
Let’s start with the stack. In the PC days, we had a database accompanied by a program that could ingest that data and present it to us as digestible information on a screen.

Then, in the early days of the web, data migrated to servers. Information was fed through a website, with which you would interface via a browser—whether Mosaic or Mozilla.

And then came the cloud.

Resident at either the edge of the cloud or on your phone, today’s rapidly proliferating apps now allow us to interact with previously read-only data, interfacing through a smartphone. But as Siri and Alexa have brought us verbal interfaces, AI-geared phone cameras can now determine your identity, and sensors are beginning to read our gestures.

And now we’re not only looking at our screens but through them, as the convergence of AI and AR begins to digitally populate our physical worlds.

While Pokémon Go sent millions of mobile game-players on virtual treasure hunts, IKEA is just one of the many companies letting you map virtual furniture within your physical home—simulating everything from cabinets to entire kitchens. No longer one-sided recipients, we’re beginning to see through sensors, creatively inserting digital content into our everyday environments.

Let’s take a look at how the latest incarnation might work. In this new Web 3.0 stack, my personal AI would act as an intermediary, accessing public or privately-authorized data through the blockchain on my behalf, and then feed it through an interface layer composed of everything from my VR headset, to numerous wearables, to my smart environment (IoT-connected devices or even in-home robots).

But as we attempt to build a smart world with smart infrastructure, smart supply chains and smart everything else, we need a set of basic standards with addresses for people, places, and things. Just like our web today relies on the Internet Protocol (TCP/IP) and other infrastructure, by which your computer is addressed and data packets are transferred, we need infrastructure for the Spatial Web.

And a select group of players is already stepping in to fill this void. Proposing new structural designs for Web 3.0, some are attempting to evolve today’s web model from text-based web pages in 2D to three-dimensional AR and VR web experiences located in both digitally-mapped physical worlds and newly-created virtual ones.

With a spatial programming language analogous to HTML, imagine building a linkable address for any physical or virtual space, granting it a format that then makes it interchangeable and interoperable with all other spaces.
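To make the idea more concrete, here’s a minimal sketch (in Python, with entirely hypothetical field names and a made-up URI scheme, since no such standard exists yet) of what a linkable spatial address might look like as a data structure:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SpatialAddress:
    """A hypothetical, URL-like address for a physical or virtual space."""
    domain: str              # e.g. "example-office.hq"
    path: str                # e.g. "/conference-room/3a"
    world: str = "physical"  # "physical", or the name of a virtual world

    def to_uri(self) -> str:
        # Illustrative scheme only -- no such standard exists yet.
        return f"spatial://{self.world}/{self.domain}{self.path}"


room = SpatialAddress(domain="example-office.hq", path="/conference-room/3a")
print(room.to_uri())  # spatial://physical/example-office.hq/conference-room/3a
```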

But it doesn’t stop there.

As soon as we populate a virtual room with content, we then need to encode who sees it, who can buy it, who can move it…

And the Spatial Web’s eventual governing system (for posting content on a centralized grid) would allow us to address everything from the room you’re sitting in, to the chair on the other side of the table, to the building across the street.

Just as we have a DNS for the web and the purchasing of web domains, once we give addresses to spaces (akin to granting URLs), we then have the ability to identify and visit addressable locations, physical objects, individuals, or pieces of digital content in cyberspace.

And these not only apply to virtual worlds, but to the real world itself. As new mapping technologies emerge, we can now map rooms, objects, and large-scale environments into virtual space with increasing accuracy.

We might then dictate who gets to move your coffee mug in a virtual conference room, or when a team gets to use the room itself. Rules and permissions would be set in the grid, decentralized governance systems, or in the application layer.

Taken one step further, imagine then monetizing smart spaces and smart assets. If you have booked the virtual conference room, perhaps you’ll accept 0.25 BTC to let me use it instead?
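As a toy illustration of how such rules and permissions might be expressed at the application layer, here’s a short Python sketch; the class, rule names, and addresses are invented for this example and don’t correspond to any real Spatial Web framework:

```python
from dataclasses import dataclass, field


@dataclass
class SmartSpace:
    """A hypothetical virtual conference room with simple access rules."""
    address: str
    owner: str
    permissions: dict = field(default_factory=dict)  # user -> set of allowed actions

    def grant(self, user: str, *actions: str) -> None:
        self.permissions.setdefault(user, set()).update(actions)

    def can(self, user: str, action: str) -> bool:
        return user == self.owner or action in self.permissions.get(user, set())


room = SmartSpace(address="spatial://physical/example-office.hq/conference-room/3a",
                  owner="alice")
room.grant("bob", "enter", "move:coffee-mug")

print(room.can("bob", "move:coffee-mug"))   # True
print(room.can("carol", "enter"))           # False -- no permission granted

# Monetization could be layered on the same rules, e.g. granting "enter" to a
# paying user for a fixed time window once their payment is confirmed.
```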

But given the Spatial Web’s enormous technological complexity, what’s allowing it to emerge now?

Why Is It Happening Now?
While countless entrepreneurs have already started harnessing blockchain technologies to build decentralized apps (or dApps), two major developments are allowing today’s birth of Web 3.0:

(1) High-resolution wireless VR/AR headsets are finally catapulting virtual and augmented reality out of a prolonged winter.

The International Data Corporation (IDC) predicts the VR and AR headset market will reach 65.9 million units by 2022. Within the next 18 months alone, 2 billion devices will be AR-enabled. And tech giants across the board have been investing heavy sums for years.

In early 2019, HTC is releasing the VIVE Focus, a wireless self-contained VR headset. At the same time, Facebook is charging ahead with its Project Santa Cruz—the Oculus division’s next-generation standalone, wireless VR headset. And Magic Leap has finally rolled out its long-awaited Magic Leap One mixed reality headset.

(2) Mass deployment of 5G will drive 10- to 100-gigabit connection speeds in the next six years, matching hardware progress with the speeds needed to create virtual worlds.

We’ve already seen tremendous leaps in display technology. But as connectivity speeds converge with accelerating GPUs, we’ll start to experience seamless VR and AR interfaces with ever-expanding virtual worlds.

And with such democratizing speeds, every user will be able to develop in VR.

But accompanying these two catalysts is also an important shift towards the decentralized web and a demand for user-controlled data.

Converging technologies, from immutable ledgers and blockchain to machine learning, are now enabling the more direct, decentralized use of web applications and creation of user content. With no central point of control, middlemen are removed from the equation and anyone can create an address, independently interacting with the network.

Enabled by a permission-less blockchain, any user—regardless of birthplace, gender, ethnicity, wealth, or citizenship—would thus be able to establish digital assets and transfer them seamlessly, granting us a more democratized Internet.

And with data stored on distributed nodes, this also means no single point of failure. One could have multiple backups, accessible only with digital authorization, leaving users immune to any single server failure.

Implications Abound–What’s Next…
With a newly-built stack and an interface built from numerous converging technologies, the Spatial Web will transform every facet of our everyday lives—from the way we organize and access our data, to our social and business interactions, to the way we train employees and educate our children.

We’re about to start spending more time in the virtual world than ever before. Beyond entertainment or gameplay, our livelihoods, work, and even personal decisions are already becoming mediated by a web electrified with AI and newly-emerging interfaces.

In our next blog on the Spatial Web, I’ll do a deep dive into the myriad industry implications of Web 3.0, offering tangible use cases across sectors.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘on ramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Comeback01 / Shutterstock.com


#433884 Designer Babies, and Their Babies: How ...

As if stand-alone technologies weren’t advancing fast enough, we’re in an age where we must study the intersection points of these technologies. How is what’s happening in robotics influenced by what’s happening in 3D printing? What could be made possible by applying the latest advances in quantum computing to nanotechnology?

Along these lines, one crucial tech intersection is that of artificial intelligence and genomics. Each field is seeing constant progress, but Jamie Metzl believes it’s their convergence that will really push us into uncharted territory, beyond even what we’ve imagined in science fiction. “There’s going to be this push and pull, this competition between the reality of our biology with its built-in limitations and the scope of our aspirations,” he said.

Metzl is a senior fellow at the Atlantic Council and author of the upcoming book Hacking Darwin: Genetic Engineering and the Future of Humanity. At Singularity University’s Exponential Medicine conference last week, he shared his insights on genomics and AI, and where their convergence could take us.

Life As We Know It
Metzl explained how genomics as a field evolved slowly—and then quickly. In 1953, James Watson and Francis Crick identified the double helix structure of DNA, and realized that the order of the base pairs held a treasure trove of genetic information. There was such a thing as a book of life, and we’d found it.

In 2003, when the Human Genome Project was completed (after 13 years and $2.7 billion), we learned the order of the genome’s 3 billion base pairs, and the location of specific genes on our chromosomes. Not only did a book of life exist, we figured out how to read it.

Jamie Metzl at Exponential Medicine
Fifteen years after that, it’s 2018 and precision gene editing in plants, animals, and humans is changing everything, and quickly pushing us into an entirely new frontier. Forget reading the book of life—we’re now learning how to write it.

“Readable, writable, and hackable, what’s clear is that human beings are recognizing that we are another form of information technology, and just like our IT has entered this exponential curve of discovery, we will have that with ourselves,” Metzl said. “And it’s intersecting with the AI revolution.”

Learning About Life Meets Machine Learning
In 2016, DeepMind’s AlphaGo program outsmarted the world’s top Go player. In 2017 AlphaGo Zero was created: unlike AlphaGo, AlphaGo Zero wasn’t trained using previous human games of Go, but was simply given the rules of Go—and in four days it defeated the AlphaGo program.

Our own biology is, of course, vastly more complex than the game of Go, and that, Metzl said, is our starting point. “The system of our own biology that we are trying to understand is massively, but very importantly not infinitely, complex,” he added.

Getting a standardized set of rules for our biology—and, eventually, maybe even outsmarting our biology—will require genomic data. Lots of it.

Multiple countries are already starting to produce this data. The UK’s National Health Service recently announced a plan to sequence the genomes of five million Britons over the next five years. In the US, the All of Us Research Program will sequence a million Americans. China is the most aggressive in sequencing its population, with a goal of sequencing half of all newborns by 2020.

“We’re going to get these massive pools of sequenced genomic data,” Metzl said. “The real gold will come from comparing people’s sequenced genomes to their electronic health records, and ultimately their life records.” Getting people comfortable with allowing open access to their data will be another matter; Metzl mentioned that Luna DNA and others have strategies to help people get comfortable with giving consent to their private information. But this is where China’s lack of privacy protection could end up being a significant advantage.

To compare genotypes and phenotypes at scale—first millions, then hundreds of millions, then eventually billions, Metzl said—we’re going to need AI and big data analytic tools, and algorithms far beyond what we have now. These tools will let us move from precision medicine to predictive medicine, knowing precisely when and where different diseases are going to occur and shutting them down before they start.
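As a rough illustration of what comparing genotypes and phenotypes at scale looks like computationally, here’s a toy sketch that fits a simple classifier to predict a phenotype from genetic variant features. The data is synthetic and the model deliberately simplistic; real cohort studies use far richer features and far more careful statistics:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy data: 5,000 "people", 200 variant features coded as 0/1/2 allele counts.
n_people, n_variants = 5_000, 200
genotypes = rng.integers(0, 3, size=(n_people, n_variants))

# Synthetic phenotype driven by a handful of causal variants plus noise.
causal = rng.choice(n_variants, size=5, replace=False)
risk = genotypes[:, causal].sum(axis=1) + rng.normal(0, 1.5, n_people)
phenotype = (risk > np.median(risk)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    genotypes, phenotype, test_size=0.2, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```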

But, Metzl said, “As we unlock the genetics of ourselves, it’s not going to be about just healthcare. It’s ultimately going to be about who and what we are as humans. It’s going to be about identity.”

Designer Babies, and Their Babies
In Metzl’s mind, the most serious application of our genomic knowledge will be in embryo selection.

Currently, in-vitro fertilization (IVF) procedures can extract around 15 eggs, fertilize them, then do pre-implantation genetic testing; right now what’s knowable is single-gene mutation diseases and simple traits like hair color and eye color. “As we get to the millions and then billions of people with sequences, we’ll have information about how these genetics work, and we’re going to be able to make much more informed choices,” Metzl said.

Imagine going to a fertility clinic in 2023. You give a skin graft or a blood sample, and using in-vitro gametogenesis (IVG)—infertility be damned—your skin or blood cells are induced to become eggs or sperm, which are then combined to create embryos. The dozens or hundreds of embryos created from artificial gametes each have a few cells extracted from them, and these cells are sequenced. The sequences will tell you the likelihood of specific traits and disease states were that embryo to be implanted and taken to full term. “With really anything that has a genetic foundation, we’ll be able to predict with increasing levels of accuracy how that potential child will be realized as a human being,” Metzl said.

This, he added, could lead to some wild and frightening possibilities: if you have 1,000 eggs and you pick one based on its optimal genetic sequence, you could then mate your embryo with somebody else who has done the same thing in a different genetic line. “Your five-day-old embryo and their five-day-old embryo could have a child using the same IVG process,” Metzl said. “Then that child could have a child with another five-day-old embryo from another genetic line, and you could go on and on down the line.”

Sounds insane, right? But wait, there’s more: as Jason Pontin reported earlier this year in Wired, “Gene-editing technologies such as Crispr-Cas9 would make it relatively easy to repair, add, or remove genes during the IVG process, eliminating diseases or conferring advantages that would ripple through a child’s genome. This all may sound like science fiction, but to those following the research, the combination of IVG and gene editing appears highly likely, if not inevitable.”

From Crazy to Commonplace?
It’s a slippery slope from gene editing and embryo-mating to a dystopian race to build the most perfect humans possible. If somebody’s investing so much time and energy in selecting their embryo, Metzl asked, how will they think about the mating choices of their children? IVG could quickly leave the realm of healthcare and enter that of evolution.

“We all need to be part of an inclusive, integrated, global dialogue on the future of our species,” Metzl said. “Healthcare professionals are essential nodes in this.” Not least among this dialogue should be the question of access to tech like IVG; are there steps we can take to keep it from becoming a tool for a wealthy minority, and thereby perpetuating inequality and further polarizing societies?

As Pontin points out, at its inception 40 years ago IVF also sparked fear, confusion, and resistance—and now it’s as normal and common as could be, with millions of healthy babies conceived using the technology.

The disruption that genomics, AI, and IVG will bring to reproduction could follow a similar story cycle—if we’re smart about it. As Metzl put it, “This must be regulated, because it is life.”

Image Credit: hywards / Shutterstock.com


#433872 Breaking Out of the Corporate Bubble ...

For big companies, success is a blessing and a curse. You don’t get big without doing something (or many things) very right. It might start with an invention or service the world didn’t know it needed. Your product takes off, and growth brings a whole new set of logistical challenges. Delivering consistent quality, hiring the right team, establishing a strong culture, tapping into new markets, satisfying shareholders. The list goes on.

Eventually, however, what made you successful also makes you resistant to change.

You’ve built a machine for one purpose, and it’s running smoothly, but what about retooling that machine to make something new? Not so easy. Leaders of big companies know there is no future for their organizations without change. And yet, they struggle to drive it.

In their new book, Leading Transformation: How to Take Charge of Your Company’s Future, Kyle Nel, Nathan Furr, and Thomas Ramsøy aim to deliver a roadmap for corporate transformation.

The book focuses on practical tools that have worked in big companies to break down behavioral and cognitive biases, envision radical futures, and run experiments. These include using science fiction and narrative to see ahead and adopting better measures of success for new endeavors.

A thread throughout is how to envision a new future and move into that future.

We’re limited by the bubbles in which we spend the most time—the corporate bubble, the startup bubble, the nonprofit bubble. The mutually beneficial convergence of complementary bubbles, then, can be a powerful tool for kickstarting transformation. The views and experiences of one partner can challenge the accepted wisdom of the other; resources can flow into newly co-created visions and projects; and connections can be made that wouldn’t otherwise exist.

The authors call such alliances uncommon partners. In the following excerpt from the book, Made In Space, a startup building 3D printers for space, helps Lowe’s explore an in-store 3D printing system, and Lowe’s helps Made In Space expand its vision and focus.

Uncommon Partners
In a dingy conference room at NASA, five prototypical nerds, smelling of Thai food, laid out the path to printing satellites in space and buildings on distant planets. At the end of their four-day marathon, they emerged with an artifact trail that began with early prototypes for the first 3D printer on the International Space Station and ended in the additive-manufacturing future—a future much bigger than 3D printing.

In the additive-manufacturing future, we will view everything as transient, or capable of being repurposed into new things. Rather than throwing away a soda bottle or a bent nail, we will simply reprocess these things into a new hinge for the fence we are building or a light switch plate for the tool shed. Indeed, we might not even go buy bricks for the tool shed, but instead might print them from impurities pulled from the air and the dirt beneath our feet. Such a process would both capture carbon in the air to make the bricks and avoid all the carbon involved in making and then transporting traditional bricks to your house.

If it all sounds a little too science fiction, think again. Lowe’s has already been honored as a Champion of Change by the US government for its prototype system to recycle plastic (e.g., plastic bags and bottles). The future may be closer than you have imagined. But to get there, Lowe’s didn’t work alone. It had to work with uncommon partners to create the future.

Uncommon partners are the types of organizations you might not normally work with, but which can greatly help you create radical new futures. Increasingly, as new technologies emerge and old industries converge, companies are finding that working independently to create all the necessary capabilities to enter new industries or create new technologies is costly, risky, and even counterproductive. Instead, organizations are finding that they need to collaborate with uncommon partners as an ecosystem to cocreate the future together. Nathan [Furr] and his colleague at INSEAD, Andrew Shipilov, call this arrangement an adaptive ecosystem strategy and described how companies such as Lowe’s, Samsung, Mastercard, and others are learning to work differently with partners and to work with different kinds of partners to more effectively discover new opportunities. For Lowe’s, an adaptive ecosystem strategy working with uncommon partners forms the foundation of capturing new opportunities and transforming the company. Despite its increased agility, Lowe’s can’t be (and shouldn’t become) an independent additive-manufacturing, robotics-using, exosuit-building, AR-promoting, fill-in-the-blank-what’s-next-ing company in addition to being a home improvement company. Instead, Lowe’s applies an adaptive ecosystem strategy to find the uncommon partners with which it can collaborate in new territory.

To apply the adaptive ecosystem strategy with uncommon partners, start by identifying the technical or operational components required for a particular focus area (e.g., exosuits) and then sort these components into three groups. First, there are the components that are emerging organically without any assistance from the orchestrator—the leader who tries to bring together the adaptive ecosystem. Second, there are the elements that might emerge, with encouragement and support. Third are the elements that won’t happen unless you do something about it. In an adaptive ecosystem strategy, you can create regular partnerships for the first two elements—those already emerging or that might emerge—if needed. But you have to create the elements in the final category (those that won’t emerge) either with an uncommon partner or by yourself.

For example, when Lowe’s wanted to explore the additive-manufacturing space, it began a search for an uncommon partner to provide the missing but needed capabilities. Unfortunately, initial discussions with major 3D printing companies proved disappointing. The major manufacturers kept trying to sell Lowe’s 3D printers. But the vision our group had created with science fiction was not for vendors to sell Lowe’s a printer, but for partners to help the company build a system—something that would allow customers to scan, manipulate, print, and eventually recycle additive-manufacturing objects. Every time we discussed 3D printing systems with these major companies, they responded that they could do it and then tried to sell printers. When Carin Watson, one of the leading lights at Singularity University, introduced us to Made In Space (a company being incubated in Singularity University’s futuristic accelerator), we discovered an uncommon partner that understood what it meant to cocreate a system.

Initially, Made In Space had been focused on simply getting 3D printing to work in space, where you can’t rely on gravity, you can’t send up a technician if the machine breaks, and you can’t release noxious fumes into cramped spacecraft quarters. But after the four days in the conference room going over the comic for additive manufacturing, Made In Space and Lowe’s emerged with a bigger vision. The company helped lay out an artifact trail that included not only the first printer on the International Space Station but also printing system services in Lowe’s stores.

Of course, the vision for an additive-manufacturing future didn’t end there. It also reshaped Made In Space’s trajectory, encouraging the startup, during those four days in a NASA conference room, to design a bolder future. Today, some of its bold projects include the Archinaut, a system that enables satellites to build themselves while in space, a direction that emerged partly from the science fiction narrative we created around additive manufacturing.

In summary, uncommon partners help you succeed by providing you with the capabilities you shouldn’t be building yourself, as well as with fresh insights. You also help uncommon partners succeed by creating new opportunities from which they can prosper.

Helping Uncommon Partners Prosper
Working most effectively with uncommon partners can require a shift from more familiar outsourcing or partnership relationships. When working with uncommon partners, you are trying to cocreate the future, which entails a great deal more uncertainty. Because you can’t specify outcomes precisely, agreements are typically less formal than in other types of relationships, and they operate under the provisions of shared vision and trust more than binding agreement clauses. Moreover, your goal isn’t to extract all the value from the relationship. Rather, you need to find a way to share the value.

Ideally, your uncommon partners should be transformed for the better by the work you do. For example, Lowe’s uncommon partner developing the robotics narrative was a small startup called Fellow Robots. Through their work with Lowe’s, Fellow Robots transformed from a small team focused on a narrow application of robotics (which was arguably the wrong problem) to a growing company developing a very different and valuable set of capabilities: putting cutting-edge technology on top of the old legacy systems embedded at the core of most companies. Working with Lowe’s allowed Fellow Robots to discover new opportunities, and today Fellow Robots works with retailers around the world, including BevMo! and Yamada. Ultimately, working with uncommon partners should be transformative for both of you, so focus more on creating a bigger pie than on how you are going to slice up a smaller pie.

The above excerpt appears in the new book Leading Transformation: How to Take Charge of Your Company’s Future by Kyle Nel, Nathan Furr, and Thomas Ramsøy, published by Harvard Business Review Press.

Image Credit: Here / Shutterstock.com



#433852 How Do We Teach Autonomous Cars To Drive ...

Autonomous vehicles can follow the general rules of American roads, recognizing traffic signals and lane markings, noticing crosswalks and other regular features of the streets. But they work only on well-marked roads that are carefully scanned and mapped in advance.

Many paved roads, though, have faded paint, signs obscured behind trees and unusual intersections. In addition, 1.4 million miles of U.S. roads—one-third of the country’s public roadways—are unpaved, with no on-road signals like lane markings or stop-here lines. That doesn’t include miles of private roads, unpaved driveways or off-road trails.

What’s a rule-following autonomous car to do when the rules are unclear or nonexistent? And what are its passengers to do when they discover their vehicle can’t get them where they’re going?

Accounting for the Obscure
Most challenges in developing advanced technologies involve handling infrequent or uncommon situations, or events that require performance beyond a system’s normal capabilities. That’s definitely true for autonomous vehicles. Some on-road examples might be navigating construction zones, encountering a horse and buggy, or seeing graffiti that looks like a stop sign. Off-road, the possibilities include the full variety of the natural world, such as trees down over the road, flooding and large puddles—or even animals blocking the way.

At Mississippi State University’s Center for Advanced Vehicular Systems, we have taken up the challenge of training algorithms to respond to circumstances that almost never happen, are difficult to predict and are complex to create. We seek to put autonomous cars in the hardest possible scenario: driving in an area the car has no prior knowledge of, with no reliable infrastructure like road paint and traffic signs, and in an unknown environment where it’s just as likely to see a cactus as a polar bear.

Our work combines virtual technology and the real world. We create advanced simulations of lifelike outdoor scenes, which we use to train artificial intelligence algorithms to take a camera feed and classify what it sees, labeling trees, sky, open paths and potential obstacles. Then we transfer those algorithms to a purpose-built all-wheel-drive test vehicle and send it out on our dedicated off-road test track, where we can see how our algorithms work and collect more data to feed into our simulations.
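For readers curious what “training algorithms to take a camera feed and classify what it sees” looks like in code, here is a deliberately tiny sketch of per-pixel classification (semantic segmentation) in PyTorch. The network, image sizes, and random stand-in data are placeholders, not the center’s actual models or datasets:

```python
import torch
import torch.nn as nn

NUM_CLASSES = 4  # e.g. tree, sky, open path, obstacle

# A deliberately tiny fully-convolutional network for per-pixel classification.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, NUM_CLASSES, kernel_size=1),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Stand-ins for a batch of simulated camera frames and their per-pixel labels.
frames = torch.rand(8, 3, 128, 128)                    # RGB images
labels = torch.randint(0, NUM_CLASSES, (8, 128, 128))  # class index per pixel

for step in range(10):
    logits = model(frames)          # (batch, classes, height, width)
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final training loss:", loss.item())
```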

Starting Virtual
We have developed a simulator that can create a wide range of realistic outdoor scenes for vehicles to navigate through. The system generates a range of landscapes of different climates, like forests and deserts, and can show how plants, shrubs and trees grow over time. It can also simulate weather changes, sunlight and moonlight, and the accurate locations of 9,000 stars.

The system also simulates the readings of sensors commonly used in autonomous vehicles, such as lidar and cameras. Those virtual sensors collect data that feeds into neural networks as valuable training data.
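One simple way a simulator can produce lidar-like training data is to march rays outward over a terrain heightmap and record where each ray first meets the ground. The following sketch only illustrates that general idea; it bears no relation to the internals of the MSU simulator:

```python
import numpy as np


def simulate_lidar_scan(heightmap, sensor_xy, sensor_height, angles,
                        max_range=50.0, step=0.25):
    """March each ray outward over a terrain heightmap and return the range
    at which it first hits terrain (or max_range if it never does)."""
    ranges = []
    for theta in angles:
        direction = np.array([np.cos(theta), np.sin(theta)])
        r = step
        while r < max_range:
            x, y = sensor_xy + r * direction
            i, j = int(round(y)), int(round(x))
            if 0 <= i < heightmap.shape[0] and 0 <= j < heightmap.shape[1]:
                # Flat-beam approximation: a hit when terrain reaches sensor height.
                if heightmap[i, j] >= sensor_height:
                    break
            r += step
        ranges.append(min(r, max_range))
    return np.array(ranges)


terrain = np.zeros((100, 100))
terrain[60:70, 40:50] = 2.0  # a mound ahead of the vehicle
scan = simulate_lidar_scan(terrain, sensor_xy=np.array([50.0, 50.0]),
                           sensor_height=1.0,
                           angles=np.linspace(0, 2 * np.pi, 36, endpoint=False))
print(scan.round(1))
```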

Simulated desert, meadow and forest environments generated by the Mississippi State University Autonomous Vehicle Simulator. Chris Goodin, Mississippi State University, Author provided.
Building a Test Track
Simulations are only as good as their portrayals of the real world. Mississippi State University has purchased 50 acres of land on which we are developing a test track for off-road autonomous vehicles. The property is excellent for off-road testing, with unusually steep grades for our area of Mississippi—up to 60 percent inclines—and a very diverse population of plants.

We have selected certain natural features of this land that we expect will be particularly challenging for self-driving vehicles, and replicated them exactly in our simulator. That allows us to directly compare results from the simulation and real-life attempts to navigate the actual land. Eventually, we’ll create similar real and virtual pairings of other types of landscapes to improve our vehicle’s capabilities.

A road washout, as seen in real life, left, and in simulation. Chris Goodin, Mississippi State University, Author provided.
Collecting More Data
We have also built a test vehicle, called the Halo Project, which has an electric motor and sensors and computers that can navigate various off-road environments. The Halo Project car has additional sensors to collect detailed data about its actual surroundings, which can help us build virtual environments to run new tests in.

The Halo Project car can collect data about driving and navigating in rugged terrain. Beth Newman Wynn, Mississippi State University, Author provided.
Two of its lidar sensors, for example, are mounted at intersecting angles on the front of the car so their beams sweep across the approaching ground. Together, they can provide information on how rough or smooth the surface is, as well as capturing readings from grass and other plants and items on the ground.
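A rough way to turn those returns into a roughness estimate is to bin the lidar points into small ground cells and measure the spread of heights within each cell. This sketch uses synthetic points and an invented binning scheme, purely to illustrate the idea rather than the Halo Project’s actual pipeline:

```python
import numpy as np


def roughness_map(points, cell_size=0.25):
    """Estimate local surface roughness from lidar points (N x 3: x, y, z).

    Points are grouped into cell_size x cell_size ground cells; roughness for
    a cell is the standard deviation of point heights within it."""
    cells = {}
    for x, y, z in points:
        key = (int(np.floor(x / cell_size)), int(np.floor(y / cell_size)))
        cells.setdefault(key, []).append(z)
    return {key: float(np.std(zs)) for key, zs in cells.items() if len(zs) >= 3}


# Synthetic sweep: smooth ground on the left, rubble-like noise on the right.
rng = np.random.default_rng(1)
xy = rng.uniform(0, 4, size=(2000, 2))
z = np.where(xy[:, 0] < 2,
             0.01 * rng.standard_normal(2000),
             0.15 * rng.standard_normal(2000))
rough = roughness_map(np.column_stack([xy, z]))
print("mean cell roughness:", np.mean(list(rough.values())).round(3))
```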

Lidar beams intersect, scanning the ground in front of the vehicle. Chris Goodin, Mississippi State University, Author provided
We’ve seen some exciting early results from our research. For example, we have shown promising preliminary results that machine learning algorithms trained on simulated environments can be useful in the real world. As with most autonomous vehicle research, there is still a long way to go, but our hope is that the technologies we’re developing for extreme cases will also help make autonomous vehicles more functional on today’s roads.

Matthew Doude, Associate Director, Center for Advanced Vehicular Systems; Ph.D. Student in Industrial and Systems Engineering, Mississippi State University; Christopher Goodin, Assistant Research Professor, Center for Advanced Vehicular Systems, Mississippi State University, and Daniel Carruth, Assistant Research Professor and Associate Director for Human Factors and Advanced Vehicle System, Center for Advanced Vehicular Systems, Mississippi State University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Photo provided for The Conversation by Matthew Goudin / CC BY ND


#433828 Using Big Data to Give Patients Control ...

Big data, personalized medicine, artificial intelligence. String these three buzzphrases together, and what do you have?

A system that may revolutionize the future of healthcare, by bringing sophisticated health data directly to patients for them to ponder, digest, and act upon—and potentially stop diseases in their tracks.

At Singularity University’s Exponential Medicine conference in San Diego this week, Dr. Ran Balicer, director of the Clalit Research Institute in Israel, painted a futuristic picture of how big data can merge with personalized healthcare into an app-based system in which the patient is in control.

Dr. Ran Balicer at Exponential Medicine
Picture this: instead of going to a physician with your ailments, your doctor calls you with some bad news: “Within six hours, you’re going to have a heart attack. So why don’t you come into the clinic and we can fix that.” Crisis averted.

Following the treatment, you’re at home monitoring your biomarkers, lab test results, and other health information through an app with a clean, beautiful user interface. Within the app, you can observe how various health-influencing life habits—smoking, drinking, insufficient sleep—affect your risk of future cardiovascular disease by toggling their levels up or down.
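That what-if interaction boils down to re-evaluating a risk model every time a habit is toggled. Here’s a toy sketch of that loop; the weights are invented for illustration and are not clinical values:

```python
import math

# Illustrative (not clinical) log-odds weights for a cardiovascular risk score.
BASELINE_LOG_ODDS = -3.0
WEIGHTS = {
    "cigarettes_per_day": 0.06,
    "drinks_per_week": 0.03,
    "hours_sleep_deficit": 0.10,  # hours below ~7 per night
}


def risk(habits: dict) -> float:
    """Return a probability-style risk score from current habit levels."""
    log_odds = BASELINE_LOG_ODDS + sum(WEIGHTS[k] * v for k, v in habits.items())
    return 1 / (1 + math.exp(-log_odds))


current = {"cigarettes_per_day": 10, "drinks_per_week": 8, "hours_sleep_deficit": 1}
print(f"current risk:   {risk(current):.1%}")

# Toggle "stop smoking" and re-evaluate, as the app would when a slider moves.
quit_smoking = {**current, "cigarettes_per_day": 0}
print(f"after quitting: {risk(quit_smoking):.1%}")
```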

There’s more: you can also set a health goal within the app—for example, stop smoking—which automatically informs your physician. The app will then suggest pharmaceuticals to help you ditch the nicotine and automatically sends the prescription to your local drug store. You’ll also immediately find a list of nearby support groups that can help you reach your health goal.

With this hefty dose of AI, you’re in charge of your health—in fact, probably more so than under current healthcare systems.

Sound fantastical? In fact, this type of preemptive care is already being provided in some countries, including Israel, at a massive scale, said Balicer. By mining datasets with deep learning and other powerful AI tools, we can predict the future—and put it into the hands of patients.

The Israeli Advantage
In order to apply big data approaches to medicine, you first need a giant database.

Israel is ahead of the game in this regard. With decades of electronic health records aggregated within a central warehouse, Israel offers a wealth of health-related data on the scale of millions of people and billions of data points. The data is incredibly multiplex, covering lab tests, drugs, hospital admissions, medical procedures, and more.

One of Balicer’s early successes was an algorithm that predicts diabetes, which allowed the team to notify physicians to target their care. Clalit has also been busy digging into data that predicts winter pneumonia, osteoporosis, and a long list of other preventable diseases.

So far, Balicer’s predictive health system has only been tested on a pilot group of patients, but he is expecting to roll out the platform to all patients in the database in the next few months.

Truly Personalized Medicine
To Balicer, whatever a machine can do better, it should be welcomed to do. AI diagnosticians have already enjoyed plenty of successes—but their collaboration remains mostly with physicians, at a point in time when the patient is already ill.

A particularly powerful use of AI in medicine is to bring insights and trends directly to the patient, such that they can take control over their own health and medical care.

For example, take the problem of tailored drug dosing. Current drug doses are based on average results from clinical trials—the dosing is not tailored to any specific patient’s genetic and health makeup. But what if a doctor had already seen millions of other patients similar to your case, and could generate dosing recommendations more relevant to you based on that particular group of patients?

Such personalized recommendations are beyond the ability of any single human doctor. But with the help of AI, which can quickly process massive datasets to find similarities, doctors may soon be able to prescribe individually-tailored medications.
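One simple way to frame “dosing based on patients similar to you” is a nearest-neighbor lookup: find historical patients whose features resemble yours and average the doses that worked for them. The sketch below uses synthetic data and invented features, purely to make the idea concrete:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic historical patients: age, weight (kg), renal function, plus the
# dose (mg) that achieved the target effect for each of them.
features = np.column_stack([
    rng.normal(60, 12, 10_000),   # age
    rng.normal(80, 15, 10_000),   # weight
    rng.normal(90, 25, 10_000),   # creatinine clearance
])
effective_dose = (2.0 * features[:, 1] - 0.5 * features[:, 0]
                  + rng.normal(0, 10, 10_000))

scaler = StandardScaler().fit(features)
knn = NearestNeighbors(n_neighbors=50).fit(scaler.transform(features))


def recommend_dose(age, weight, crcl):
    """Average the effective doses of the 50 most similar historical patients."""
    _, idx = knn.kneighbors(scaler.transform([[age, weight, crcl]]))
    return float(effective_dose[idx[0]].mean())


print(f"suggested starting dose: {recommend_dose(55, 72, 95):.0f} mg")
```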

Tailored treatment doesn’t stop there. Another issue with pharmaceuticals and treatment regimes is that they often come with side effects: potentially health-threatening reactions that may, or may not, happen to you based on your biometrics.

Back in 2017, the New England Journal of Medicine launched the SPRINT Data Analysis Challenge, which urged physicians and data analysts to identify novel clinical findings using shared clinical trial data.

Working with Dr. Noa Dagan at the Clalit Research Institute, Balicer and team developed an algorithm that recommends whether or not a patient receives a particularly intensive treatment regime for hypertension.

Rather than simply looking at one outcome—normalized blood pressure—the algorithm takes into account an individual’s specific characteristics, laying out the treatment’s predicted benefits and harms for a particular patient.

“We built thousands of models for each patient to comprehensively understand the impact of the treatment for the individual; for example, a reduced risk for stroke and cardiovascular-related deaths could be accompanied by an increase in serious renal failure,” said Balicer. “This approach allows a truly personalized balance—allowing patients and their physicians to ultimately decide if the risks of the treatment are worth the benefits.”

This is already personalized medicine at its finest. But Balicer didn’t stop there.

We are not the sum of our biologics and medical stats, he said. A truly personalized approach needs to take a patient’s needs and goals and the sacrifices and tradeoffs they’re willing to make into account, rather than having the physician make decisions for them.

Balicer’s preventative system adds this layer of complexity by giving weights to different outcomes based on patients’ input of their own health goals. Rather than blindly following big data, the system holistically integrates the patient’s opinion to make recommendations.
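Here is a minimal sketch of how patient-supplied goal weights could be folded into such a recommendation: each predicted change in outcome risk is multiplied by a weight reflecting how much the patient cares about that outcome, and the signed sum drives the recommendation. The numbers below are invented for illustration, not outputs of the Clalit models:

```python
# Predicted absolute changes in outcome probabilities if the patient receives
# the intensive hypertension regime (negative = risk reduced).
predicted_effects = {
    "stroke": -0.030,
    "cardiovascular_death": -0.015,
    "serious_renal_failure": +0.020,
    "symptomatic_hypotension": +0.040,
}

# Patient-supplied importance weights (higher = cares more about avoiding it).
patient_weights = {
    "stroke": 1.0,
    "cardiovascular_death": 1.0,
    "serious_renal_failure": 0.8,
    "symptomatic_hypotension": 0.3,
}


def net_benefit(effects, weights):
    """Weighted sum of risk changes; positive means expected net benefit."""
    return -sum(weights[k] * delta for k, delta in effects.items())


score = net_benefit(predicted_effects, patient_weights)
print(f"net benefit score: {score:+.3f}")
print("recommend intensive treatment" if score > 0 else "recommend standard care")
```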

Balicer’s system is just one example of how AI can truly transform personalized health care. The next big challenge is to work with physicians to further optimize these systems, in a way that doctors can easily integrate them into their workflow and embrace the technology.

“Health systems will not be replaced by algorithms, rest assured,” concluded Balicer, “but health systems that don’t use algorithms will be replaced by those that do.”

Image Credit: Magic mine / Shutterstock.com
