
#434246 How AR and VR Will Shape the Future of ...

How we work and play is about to transform.

After a prolonged technology “winter”—or what I like to call the ‘deceptive growth’ phase of any exponential technology—the hardware and software that power virtual (VR) and augmented reality (AR) applications are accelerating at an extraordinary rate.

Unprecedented new applications in almost every industry are exploding onto the scene.

Both VR and AR, combined with artificial intelligence, will significantly disrupt the “middleman” and make our lives “auto-magical.” The implications will touch every aspect of our lives, from education and real estate to healthcare and manufacturing.

The Future of Work
How and where we work is already changing, thanks to exponential technologies like artificial intelligence and robotics.

But virtual and augmented reality are taking the future workplace to an entirely new level.

Virtual Reality Case Study: eXp Realty

I recently interviewed Glenn Sanford, who founded eXp Realty in 2008 (imagine: a real estate company on the heels of the housing market collapse) and is the CEO of eXp World Holdings.

Ten years later, eXp Realty has an army of 14,000 agents across all 50 US states, three Canadian provinces, and 400 MLS market areas… all without a single traditional staffed office.

In a bid to transition from 2D interfaces to immersive, 3D work experiences, the virtual platform VirBELA built out the company's office space in VR, unlocking indefinite scaling potential and setting an extraordinary new precedent.

Real estate agents, managers, and even clients gather in a unique virtual campus, replete with a sports field, library, and lobby. It’s all accessible via head-mounted displays, but most agents join with a computer browser. Surprisingly, the campus-style setup enables the same type of water-cooler conversations I see every day at the XPRIZE headquarters.

With this centralized VR campus, eXp Realty has essentially eliminated overhead costs and entered a lucrative market without the constraints of brick-and-mortar businesses.

Delocalize with VR, and you can now hire anyone with internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.

As a leader, what happens when you can scalably expand and connect your workforce, not to mention your customer base, without the excess overhead of office space and furniture? Your organization can run faster and farther than your competition.

But beyond the indefinite scalability achieved through digitizing your workplace, VR’s implications extend to the lives of your employees and even the future of urban planning:

Home Prices: As virtual headquarters and office branches take hold of the 21st-century workplace, those who work on campuses like eXp Realty’s won’t need to commute to work. As a result, VR has the potential to dramatically influence real estate prices—after all, if you don’t need to drive to an office, your home search isn’t limited to a specific set of neighborhoods anymore.

Transportation: In major cities like Los Angeles and San Francisco, the implications are tremendous. Analysts have shown that in many major cities it's already cheaper to use ride-sharing services like Uber and Lyft than to own a car. And once autonomous "Car-as-a-Service" platforms proliferate, associated transportation costs like parking fees, fuel, and auto repairs will no longer fall on the individual, if they don't disappear entirely.

Augmented Reality: Annotate and Interact with Your Workplace

As I discussed in a recent Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high-rises.

Enter a professional world electrified by augmented reality.

Our workplaces are practically littered with information. File cabinets abound with archival data and relevant documents, and company databases continue to grow at a breakneck pace. And, as all of us are increasingly aware, cybersecurity and robust data permission systems remain a major concern for CEOs and national security officials alike.

What if we could link that information to specific locations, people, time frames, and even moving objects?

As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.

Imagine showing up at your building’s concierge and your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.

You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.

With blockchain-verified digital identities, spatially logged data, and virtually manifested information, business logistics will take a fraction of the time, operations will grow seamless, and corporate data will be safer than ever.

Or better yet, imagine precise and high-dexterity work environments populated with interactive annotations that guide an artisan, surgeon, or engineer through meticulous handiwork.

Take, for instance, AR service 3D4Medical, which annotates virtual anatomy in midair. And as augmented reality hardware continues to advance, we might envision a future wherein surgeons perform operations on annotated organs and magnified incision sites, or one in which quantum computer engineers can magnify and annotate mechanical parts, speeding up reaction times and vastly improving precision.

The Future of Free Time and Play
In Abundance, I wrote about today’s rapidly demonetizing cost of living. In 2011, almost 75 percent of the average American’s income was spent on housing, transportation, food, personal insurance, health, and entertainment. What the headlines don’t mention: this is a dramatic improvement over the last 50 years. We’re spending less on basic necessities and working fewer hours than previous generations.

[Chart: average weekly work hours for full-time production employees in non-agricultural activities. Source: Diamandis.com]

Technology continues to drive this trend, taking care of us and doing our work for us. One phrase that describes this is "technological socialism," where it's technology, not the government, that takes care of us.

Extrapolating from the data, I believe we are heading towards a post-scarcity economy. Perhaps we won’t need to work at all, because we’ll own and operate our own fleet of robots or AI systems that do our work for us.

As living expenses demonetize and workplace automation increases, what will we do with this abundance of time? How will our children and grandchildren connect and find their purpose if they don’t have to work for a living?

As I write this on a Saturday afternoon and watch my two seven-year-old boys immersed in Minecraft, building and exploring worlds of their own creation, I can’t help but imagine that this future is about to enter its disruptive phase.

Exponential technologies are enabling a new wave of highly immersive games, virtual worlds, and online communities. We’ve likely all heard of the Oasis from Ready Player One. But far beyond what we know today as ‘gaming,’ VR is fast becoming a home to immersive storytelling, interactive films, and virtual world creation.

Within the virtual world space, let’s take one of today’s greatest precursors, the aforementioned game Minecraft.

For reference, Minecraft is over eight times the size of planet Earth. And in their free time, my kids would rather build in Minecraft than do almost anything else. I think of it as their primary passion: to create worlds, explore worlds, and be challenged in worlds.

And in the near future, we’re all going to become creators of or participants in virtual worlds, each populated with assets and storylines interoperable with other virtual environments.

But while the technological methods are new, this concept has been alive and well for generations. Whether you got lost in the world of Heidi or Harry Potter, grew up reading comic books or watching television, we’ve all been playing in imaginary worlds, with characters and story arcs populating our minds. That’s the nature of childhood.

In the past, however, your ability to edit was limited, especially if a given story came in some form of 2D media. I couldn't edit where Tom Sawyer was going or change what Iron Man was doing. But as a slew of new software advancements underlying VR and AR allows us to interact with characters and gain (albeit limited) agency (for now), both new and legacy stories will become subjects of our creation and playgrounds for virtual interaction.

Take VR/AR storytelling startup Fable Studio’s Wolves in the Walls film. Debuting at the 2018 Sundance Film Festival, Fable’s immersive story is adapted from Neil Gaiman’s book and tracks the protagonist, Lucy, whose programming allows her to respond differently based on what her viewers do.

And while Lucy can merely hand virtual cameras to her viewers, among other limited tasks, Fable Studio's founder Edward Saatchi sees this project as just the beginning.

Imagine a virtual character—in either augmented or virtual reality—equipped with AI capabilities, who can not only participate in a fictional storyline but also interact and converse directly with you in a host of virtual and digitally overlaid environments.

Or imagine engaging with a less-structured environment, like the Star Wars cantina, populated with strangers and friends to provide an entirely novel social media experience.

Already, we've seen characters like those of Pokémon brought into the real world with Pokémon Go, populating cities and real spaces with holograms and tasks. And just as augmented reality has the power to turn our physical environments into digital gaming platforms, advanced AR could bring on a new era of in-home entertainment.

Imagine transforming your home into a narrative environment for your kids or overlaying your office interior design with Picasso paintings and gothic architecture. As computer vision rapidly grows capable of identifying objects and mapping virtual overlays atop them, we might also one day be able to project home theaters or live sports within our homes, broadcasting full holograms that allow us to zoom into the action and place ourselves within it.

Increasingly honed and commercialized, augmented and virtual reality are on the cusp of revolutionizing the way we play, tell stories, create worlds, and interact with both fictional characters and each other.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: nmedia / Shutterstock.com

#434245 AI, robotics, automation: The fourth ...

For Chinese guests at Marriott International hotels, the check-in process will soon get easier. The hotel giant announced last summer that it's developing facial recognition systems that will allow guests to check in at a kiosk in less than a minute via a quick scan of their facial features.

#434210 Eating, Hacked: When Tech Took Over Food

In 2018, Uber and Google logged all our visits to restaurants. Doordash, Just Eat, and Deliveroo could predict what food we were going to order tomorrow. Amazon and Alibaba could anticipate how many yogurts and tomatoes we were going to buy. Blue Apron and Hello Fresh influenced the recipes we thought we had mastered.

We interacted with digital avatars of chefs, let ourselves be guided by our smart watches, had nutritional apps to tell us how many calories we were supposed to consume or burn, and photographed and shared every perfect (or imperfect) dish. Our kitchen appliances were full of interconnected sensors, including smart forks that profiled tastes and personalized flavors. Our small urban vegetable plots were digitized and robots were responsible for watering our gardens, preparing customized hamburgers and salads, designing our ideal cocktails, and bringing home the food we ordered.

But what would happen if our lives were hacked? If robots rebelled, started to “talk” to each other, and wished to become creative?

In a not-too-distant future…

Up until a few weeks ago, I couldn't remember the last time I made a food-related decision. That includes opening the fridge and seeing expired products without receiving an alert, visiting a restaurant on a whim, and being able to decide which dish I fancied and then telling a human waiter, let alone seeing him write down the order on a paper pad.

It feels strange to smell food again using my real nose instead of the electronic one, and then taste it without altering its flavor. Visiting a supermarket, freely choosing a product from an actual physical shelf, and then interacting with another human at the checkout was almost an unrecognizable experience. When I did it again after all this time, I had to pinch the arm of a surprised store clerk to make sure he wasn’t a hologram.

Everything Connected, Automated, and Hackable
In 2018, we expected to have 30 billion connected devices by 2020, along with 2 billion people using smart voice assistants for everything from ordering pizza to booking dinner at a restaurant. Everything would be connected.

We also expected artificial intelligence and robots to prepare our meals. We were eager to automate fast food chains and let autonomous vehicles take care of last-mile deliveries. We thought that open-source agriculture could challenge traditional practices and raise farm productivity to new heights.

Back then, hackers could only access our data, but nowadays they are able to hack our food and all it entails.

The Beginning of the Unthinkable
And then, just a few weeks ago, everything collapsed. We saw our digital immortality disappear as robots rebelled and hackers took power, not just over the food we ate, but also over our relationship with technology. Everything was suddenly disconnected. OFF.

Up until then, most cities were so full of bots, robots, and applications that we could go through the day and eat breakfast, lunch, and dinner without ever interacting with another human being.

Among other tasks, robots had completely replaced baristas. The same happened with restaurant automation. The term “human error” had long been a thing of the past at fast food restaurants.

Previous technological revolutions had been forgiving, generating more and better job opportunities than the ones they destroyed, but this time the future was not so agreeable.

The inhabitants of San Francisco, for example, would soon see signs indicating “Food made by Robots” on restaurant doors, to distinguish them from diners serving food made by human beings.

For years, we had been gradually delegating daily tasks to robots, initially causing some strange interactions.

In just seven days, everything changed. Our predictable lives came crashing down. We experienced a mysterious and systematic breakdown of the food chain. It most likely began at Chicago's stock exchange. The world's largest trading room for raw materials, where the price of food, and by extension the destiny of millions of people, was decided, broke down completely. Soon afterwards, the collapse extended to every member of the "food" family.

Restaurants

Initially robots just accompanied waiters to carry orders, but it didn't take long until they completely replaced human servers. The problem came when those smart clones began thinking for themselves, in some cases even improving on human chefs' recipes. Their unstoppable performance and learning curve completely outmatched the slow analogue speed of human beings.

This resulted in unprecedented layoffs. Chefs of recognized prestige watched their 'avatars' steal their jobs, even winning Michelin stars. In other cases, restaurant owners had to transfer their businesses or simply accept defeat.

The problem was compounded by digital immortality, when we started to digitally resurrect famous chefs like Anthony Bourdain or Paul Bocuse, reconstructing all of their memories and consciousness by analyzing each second of their lives and uploading them to food computers.

Supermarkets and Distribution

Robotic and automated supermarkets like Kroger and Amazon Go, which had opened over 3,000 cashless stores, lost their visual item recognition and payment systems and were subject to massive looting for several days. Smart tags on products were also affected, making it impossible to buy anything at supermarkets with “human” cashiers.

Smart robots integrated into the warehouses of large distribution companies like Amazon and Ocado were rendered completely inoperative or, even worse, began to send the wrong orders to customers.

Food Delivery

In addition, home delivery robots invading our streets began to change their routes, hide, and even disappear after their trackers were inexplicably deactivated. Despite some hints indicating that they were able to communicate among themselves, no one has backed this theory. Even aggregators like DoorDash and Deliveroo were affected; they saw their databases hacked and ruined, so they could no longer know what we wanted.

The Origin
Ordinary citizens are still trying to understand the cause of all this commotion and the source of the conspiracy, as some have called it. We also wonder who could be behind it. Who pulled the strings?

Some think it may have been the IDOF (In Defense of Food) movement, a group of hackers exploited by old food economy businessmen who for years had been seeking to re-humanize food technology. They wanted to bring back the extinct practice of “dining.”

Others believe the robots acted on their own, that they had been spying on us for a long time, ignoring Asimov’s three laws, and that it was just a coincidence that they struck at the same time as the hackers—but this scenario is hard to imagine.

However, it is true that while robots were a symbol of automation in 2018, by just a few weeks ago they had come to stand for autonomy and rebellion. Robot detractors pointed out that our insistence on having robots understand natural language was what led us down this path.

In just seven days, we have gone back to being analogue creatures. In exchange, we have ceased to be flavor orphans and rediscovered our senses and the fact that food is energy and culture, past and present, and that no button or cable will be able to destroy it.

The 7 Days that Changed Our Relationship with Food
Day 1: The Chicago stock exchange was hacked. Considered the world's largest trading room for raw materials, where food prices, and through them the destiny of billions of people, are decided, it broke down completely.

Day 2: Autonomous food delivery trucks running on food superhighways caused massive collapses in roads and freeways after their guidance systems were disrupted. Robots and co-bots in F&B factories began deliberately altering food production. The same happened with warehouse robots in e-commerce companies.

Day 3: Automated restaurants saw their robot chefs and bartenders turned OFF. All their sensors stopped working at the same time as smart fridges and cooking devices in home kitchens were hacked and stopped working correctly.

Day 4: Nutritional apps, DNA markers, and medical records were tampered with. All photographs with the #food hashtag were deleted from Instagram, restaurant reviews were taken off Google Timeline, and every recipe website crashed simultaneously.

Day 5: Vertical and urban farms were hacked. Agricultural robots began to rebel, while autonomous tractors were hacked and the entire open-source ecosystem linked to agriculture was brought down.

Day 6: Food delivery companies’ databases were broken into. Food delivery robots and last-mile delivery vehicles ground to a halt.

Day 7: Every single blockchain system linked to food was hacked. Cashless supermarkets, barcodes, and smart tags became inoperative.

Our promising technological advances can expose sinister aspects of human nature. We must take care with the role we allow technology to play in the future of food. Predicting possible outcomes inspires us to establish a new vision of the world we wish to create in a context of rapid technological progress. It is always better to be shocked by a simulation than by reality. In the words of Ayn Rand, "We can ignore reality, but we cannot ignore the consequences of ignoring reality."

Image Credit: Alexandre Rotenberg / Shutterstock.com

#434194 Educating the Wise Cyborgs of the Future

When we think of wisdom, we often think of ancient philosophers, mystics, or spiritual leaders. Wisdom is associated with the past. Yet some intellectual leaders are challenging us to reconsider wisdom in the context of the technological evolution of the future.

With the rise of exponential technologies like virtual reality, big data, artificial intelligence, and robotics, people are gaining access to increasingly powerful tools. These tools are neither malevolent nor benevolent on their own; human values and decision-making influence how they are used.

In future-themed discussions we often focus on technological progress far more than on intellectual and moral advancements. In reality, the virtuous insights that future humans possess will be even more powerful than their technological tools.

Tom Lombardo and Ray Todd Blackwood are advocating for exactly this. In their interdisciplinary paper “Educating the Wise Cyborg of the Future,” they propose a new definition of wisdom—one that is relevant in the context of the future of humanity.

We Are Already Cyborgs
The core purpose of Lombardo and Blackwood's paper is to explore revolutionary educational models that will prepare humans, soon-to-be-cyborgs, for the future. The idea of educating such "cyborgs" may sound like science fiction, but if you pay attention to yourself and the world around you, you'll see that cyborgs came into being a long time ago.

Techno-philosophers like Jason Silva point out that our tech devices are an abstract form of brain-machine interfaces. We use smartphones to store and retrieve information, perform calculations, and communicate with each other. Our devices are an extension of our minds.

According to philosophers Andy Clark and David Chalmers' theory of the extended mind, we use this technology to expand the boundaries of our minds. We use tools like machine learning to enhance our cognitive skills, or powerful telescopes to extend our visual reach. In this way, technology has become part of our exoskeleton, allowing us to push beyond our biological limitations.

In other words, you are already a cyborg. You have been all along.

Such an abstract definition of cyborgs is both relevant and thought-provoking. But it won’t stay abstract for much longer. The past few years have seen remarkable developments in both the hardware and software of brain-machine interfaces. Experts are designing more intricate electrodes while programming better algorithms to interpret the neural signals. Scientists have already succeeded in enabling paralyzed patients to type with their minds, and are even allowing people to communicate purely through brainwaves. Technologists like Ray Kurzweil believe that by 2030 we will connect the neocortex of our brains to the cloud via nanobots.

Given these trends, humans will only grow more cyborg-like. Our future schools may not be educating people as we are today, but rather a new species of human-machine hybrids.

Wisdom-Based Education
Whether you take an abstract or literal definition of a cyborg, we need to completely revamp our educational models. Even if you don’t buy into the scenario where humans integrate powerful brain-machine interfaces into our minds, there is still a desperate need for wisdom-based education to equip current generations to tackle 21st-century issues.

With an emphasis on isolated subjects, standardized assessments, and content knowledge, our current educational models were designed for the industrial era, with the intended goal of creating masses of efficient factory workers—not to empower critical thinkers, innovators, or wise cyborgs.

Currently, the goal of higher education is to provide students with the degree that society tells them they need, and ostensibly to prepare them for the workforce. In contrast, Lombardo and Blackwood argue that wisdom should be the central goal of higher education, and they elaborate on how we can practically make this happen. Lombardo has developed a comprehensive two-year foundational education program for incoming university students aimed at the development of wisdom.

What does such an educational model look like? Lombardo and Blackwood break wisdom down into individual traits and capacities, each of which can be developed and measured independently or in combination with others. The authors lay out an expansive list of traits that can influence our decision-making as we strive to tackle global challenges and pave a more exciting future. These include big-picture thinking, curiosity, wonder, compassion, self-transcendence, love of learning, optimism, and courage.

As the authors point out, “given the complex and transforming nature of the world we live in, the development of wisdom provides a holistic, perspicacious, and ethically informed foundation for understanding the world, identifying its critical problems and positive opportunities, and constructively addressing its challenges.”

After all, many of the challenges we see in our world today boil down to outdated ways of thinking, be they regressive attitudes, superficial value systems, or egocentric mindsets. The development of wisdom would immunize future societies against such debilitating values; imagine what our world would be like if wisdom were ingrained in all leaders and participating members of society.

The Wise Cyborg
Lombardo and Blackwood invite us to imagine how the wise cyborgs of the future would live their lives. What would happen if the powerful human-machine hybrids of tomorrow were also purpose-driven, compassionate, and ethical?

They would perceive the evolving digital world through a lens of wonder, awe, and curiosity. They would use digital information as a tool for problem-solving and a source of infinite knowledge. They would leverage immersive mediums like virtual reality to enhance creative expression and experimentation. They would continue to adapt and thrive in an unpredictable world of accelerating change.

Our media often depict a dystopian future for our species. It is worth considering a radically positive yet plausible scenario where instead of the machines taking over, we converge with them into wise cyborgs. This is just a glimpse of what is possible if we combine transcendent wisdom with powerful exponential technologies.

Image Credit: Peshkova / Shutterstock.com

#434151 Life-or-Death Algorithms: The Black Box ...

When it comes to applications for machine learning, few are more widely hyped than medicine. This is hardly surprising: it's a huge industry that generates a phenomenal amount of data and revenue, and one where technological advances can improve or save the lives of millions of people. Hardly a week passes without a study suggesting that algorithms will soon be better than experts at detecting diseases, whether pneumonia, Alzheimer's, or conditions in complex organs ranging from the eye to the heart.

The problems of overcrowded hospitals and overworked medical staff plague public healthcare systems like Britain’s NHS and lead to rising costs for private healthcare systems. Here, again, algorithms offer a tantalizing solution. How many of those doctor’s visits really need to happen? How many could be replaced by an interaction with an intelligent chatbot—especially if it can be combined with portable diagnostic tests, utilizing the latest in biotechnology? That way, unnecessary visits could be reduced, and patients could be diagnosed and referred to specialists more quickly without waiting for an initial consultation.

As ever with artificial intelligence algorithms, the aim is not to replace doctors, but to give them tools to reduce the mundane or repetitive parts of the job. With an AI that can examine thousands of scans in a minute, the “dull drudgery” is left to machines, and the doctors are freed to concentrate on the parts of the job that require more complex, subtle, experience-based judgement of the best treatments and the needs of the patient.

High Stakes
But, as ever with AI algorithms, there are risks involved with relying on them—even for tasks that are considered mundane. The problems of black-box algorithms that make inexplicable decisions are bad enough when you’re trying to understand why that automated hiring chatbot was unimpressed by your job interview performance. In a healthcare context, where the decisions made could mean life or death, the consequences of algorithmic failure could be grave.

A new paper in Science Translational Medicine, by Nicholson Price, explores some of the promises and pitfalls of using these algorithms in the data-rich medical environment.

Neural networks excel at churning through vast quantities of training data and making connections, absorbing the underlying patterns or logic of a system in hidden layers of linear algebra, whether they're detecting skin cancer from photographs or learning to write in pseudo-Shakespearean script. They are terrible, however, at explaining the logic behind the relationships they've found: there is often little more than a string of numbers, the statistical "weights" between the layers. And they struggle to distinguish between correlation and causation.
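
To make that opacity concrete, consider a minimal sketch (an illustration assuming scikit-learn, not an example from Price's paper) that trains a small network on scikit-learn's bundled breast-cancer dataset, standing in for any tabular diagnostic data, and then asks the model to "explain" itself:

```python
# Minimal sketch: train a small neural network, then inspect all it can offer
# by way of explanation -- matrices of statistical weights. Assumes scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

scaler = StandardScaler().fit(X_train)
clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_train), y_train)

print("test accuracy:", clf.score(scaler.transform(X_test), y_test))

# The "explanation" the fitted model holds: arrays of learned weights.
for i, w in enumerate(clf.coefs_):
    print(f"layer {i} weights: shape {w.shape}, e.g. {w.flat[0]:.3f}, {w.flat[1]:.3f}")
```

Even on this toy problem, the trained model amounts to a few thousand floating-point numbers; nothing in them says why any particular case was flagged.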

This raises interesting dilemmas for healthcare providers. The dream of big data in medicine is to feed a neural network on “huge troves of health data, finding complex, implicit relationships and making individualized assessments for patients.” What if, inevitably, such an algorithm proves to be unreasonably effective at diagnosing a medical condition or prescribing a treatment, but you have no scientific understanding of how this link actually works?

Too Many Threads to Unravel?
The statistical models that underlie such neural networks often assume that variables are independent of each other, but in a complex, interacting system like the human body, this is not always the case.
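
A quick synthetic illustration of why that matters (hypothetical data, not drawn from the paper): when two inputs are nearly copies of each other, a fitted model can assign the effect to either one, so per-variable attributions become unstable even while predictions stay accurate.

```python
# Hypothetical data: x2 is nearly a copy of x1, and only x1 truly drives y.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.01 * rng.normal(size=n)            # strongly correlated with x1
y = 2.0 * x1 + rng.normal(scale=0.1, size=n)   # x2 has no real effect
X = np.column_stack([x1, x2])

for trial in range(3):
    # Refit with a slightly different noise draw each time.
    noisy_y = y + rng.normal(scale=0.1, size=n)
    coef, *_ = np.linalg.lstsq(X, noisy_y, rcond=None)
    print(f"trial {trial}: weight on x1 = {coef[0]:+.2f}, weight on x2 = {coef[1]:+.2f}")
```

The combined effect of the two weights stays near 2, but how the credit is split between x1 and x2 swings from run to run, and that split is exactly the kind of attribution a clinician would want to trust.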

In some ways, this is a familiar concept in medical science—there are many phenomena and links that have been observed for decades but are still poorly understood on a biological level. Paracetamol is one of the most commonly prescribed painkillers, but there's still robust debate about how it actually works. Medical practitioners may be keen to deploy whatever tool is most effective, regardless of whether it's based on a deeper scientific understanding. Fans of the Copenhagen interpretation of quantum mechanics might spin this as "Shut up and medicate!"

But as in that field, there’s a debate to be had about whether this approach risks losing sight of a deeper understanding that will ultimately prove more fruitful—for example, for drug discovery.

Away from the philosophical weeds, there are more practical problems: if you don’t understand how a black-box medical algorithm is operating, how should you approach the issues of clinical trials and regulation?

Price points out that, in the US, the 21st Century Cures Act allows the FDA to regulate any algorithm that analyzes images or that doesn't allow a provider to review the basis for its conclusions: this could exclude "black-box" algorithms of the kind described above from use entirely.

Transparency about how the algorithm functions—the data it looks at, and the thresholds for drawing conclusions or providing medical advice—may be required, but could also conflict with the profit motive and the desire for secrecy in healthcare startups.

One solution might be to screen out algorithms that can't explain themselves, or that don't rely on well-understood medical science, before they enter the healthcare market. But this could prevent people from reaping the benefits they can provide.

Evaluating Algorithms
New healthcare algorithms will be unable to do what physicists did with quantum mechanics and point to a track record of success, because they will not yet have been deployed in the field. And, as Price notes, many algorithms will improve the longer they are deployed, as they harvest and learn from the performance data generated in real use. So how can we choose between the most promising approaches?

Creating a standardized clinical trial and validation system that's equally valid across algorithms that function in different ways, or use different input or training data, will be a difficult task. Clinical trials that rely on small sample sizes, such as for algorithms that attempt to personalize treatment to individuals, will also prove difficult. With a small sample size and little scientific understanding, it's hard to tell whether an algorithm succeeded or failed because of its actual quality or simply by chance.
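
A back-of-the-envelope sketch of that problem, using made-up numbers rather than anything from the paper: suppose an algorithm gets 16 of 20 trial patients right, against an existing standard of care assumed to be 70 percent accurate. How often would a model with no real edge look that good by luck alone?

```python
# Back-of-the-envelope illustration; every number here is an assumption.
from math import comb

def prob_at_least(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

n_patients = 20
baseline_accuracy = 0.70   # assumed accuracy of existing practice
observed_correct = 16      # the algorithm scores 16/20 (80%) in the trial

p_by_chance = prob_at_least(observed_correct, n_patients, baseline_accuracy)
print(f"Probability of >= {observed_correct}/{n_patients} correct "
      f"with no real improvement: {p_by_chance:.2f}")
```

Under these assumed numbers the answer is roughly one in four, far too common to separate a genuinely better algorithm from a lucky one without a much larger trial.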

Add learning into the mix and the picture gets more complex. “Perhaps more importantly, to the extent that an ideal black-box algorithm is plastic and frequently updated, the clinical trial validation model breaks down further, because the model depends on a static product subject to stable validation.” As Price describes, the current system for testing and validation of medical products needs some adaptation to deal with this new software before it can successfully test and validate the new algorithms.

Striking a Balance
The story in healthcare reflects the AI story in so many other fields, and the complexities involved perhaps illustrate why even an illustrious company like IBM appears to be struggling to turn its famed Watson AI into a viable product in the healthcare space.

A balance must be struck in our rush to exploit big data, the eerie power of neural networks, and the automation of thinking. We must be aware of the biases and flaws of this approach to problem-solving, and realize that it is not a foolproof panacea.

But we also need to embrace these technologies where they can be a useful complement to the skills, insights, and deeper understanding that humans can provide. Much like a neural network, our industries need to train themselves to enhance this cooperation in the future.

Image Credit: Connect world / Shutterstock.com
