
#431599 8 Ways AI Will Transform Our Cities by ...

How will AI shape the average North American city by 2030? A panel of experts assembled as part of a century-long study into the impact of AI thinks its effects will be profound.
The One Hundred Year Study on Artificial Intelligence is the brainchild of Eric Horvitz, technical fellow and a managing director at Microsoft Research.
Every five years a panel of experts will assess the current state of AI and its future directions. The first panel, composed of experts in AI, law, political science, policy, and economics, was launched last fall and decided to frame its report around the impact AI will have on the average American city. Here’s how they think it will affect eight key domains of city life in the next fifteen years.
1. Transportation
The speed of the transition to AI-guided transport may catch the public by surprise. Self-driving vehicles will be widely adopted by 2020, and it won’t just be cars — driverless delivery trucks, autonomous delivery drones, and personal robots will also be commonplace.
Uber-style “cars as a service” are likely to replace car ownership, which may displace public transport or see it transition toward similar on-demand approaches. Commutes will become a time to relax or work productively, encouraging people to live farther from work, which could combine with a reduced need for parking to drastically change the face of modern cities.
Mountains of data from growing numbers of sensors will allow administrators to model individuals’ movements, preferences, and goals, which could have a major impact on the design of city infrastructure.
Humans won’t be out of the loop, though. Algorithms that allow machines to learn from human input and coordinate with them will be crucial to ensuring autonomous transport operates smoothly. Getting this right will be key as this will be the public’s first experience with physically embodied AI systems and will strongly influence public perception.
2. Home and Service Robots
Robots that do things like deliver packages and clean offices will become much more common in the next 15 years. Mobile chipmakers are already squeezing the power of last century’s supercomputers into systems-on-a-chip, drastically boosting robots’ on-board computing capacity.
Cloud-connected robots will be able to share data to accelerate learning. Low-cost 3D sensors like Microsoft’s Kinect will speed the development of perceptual technology, while advances in speech comprehension will enhance robots’ interactions with humans. Robot arms in research labs today are likely to evolve into consumer devices around 2025.
But the cost and complexity of reliable hardware and the difficulty of implementing perceptual algorithms in the real world mean general-purpose robots are still some way off. Robots are likely to remain constrained to narrow commercial applications for the foreseeable future.
3. Healthcare
AI’s impact on healthcare in the next 15 years will depend more on regulation than technology. The most transformative possibilities of AI in healthcare require access to data, but the FDA has failed to find solutions to the difficult problem of balancing privacy and access to data. Implementation of electronic health records has also been poor.
If these hurdles can be cleared, AI could automate the legwork of diagnostics by mining patient records and the scientific literature. This kind of digital assistant could allow doctors to focus on the human dimensions of care while using their intuition and experience to guide the process.
At the population level, data from patient records, wearables, mobile apps, and personal genome sequencing will make personalized medicine a reality. While fully automated radiology is unlikely, access to huge datasets of medical imaging will enable training of machine learning algorithms that can “triage” or check scans, reducing the workload of doctors.
Intelligent walkers, wheelchairs, and exoskeletons will help keep the elderly active while smart home technology will be able to support and monitor them to keep them independent. Robots may begin to enter hospitals carrying out simple tasks like delivering goods to the right room or doing sutures once the needle is correctly placed, but these tasks will only be semi-automated and will require collaboration between humans and robots.
4. Education
The line between the classroom and individual learning will be blurred by 2030. Massive open online courses (MOOCs) will interact with intelligent tutors and other AI technologies to allow personalized education at scale. Computer-based learning won’t replace the classroom, but online tools will help students learn at their own pace using techniques that work for them.
AI-enabled education systems will learn individuals’ preferences, but by aggregating this data they’ll also accelerate education research and the development of new tools. Online teaching will increasingly widen educational access, making learning lifelong, enabling people to retrain, and increasing access to top-quality education in developing countries.
Sophisticated virtual reality will allow students to immerse themselves in historical and fictional worlds or explore environments and scientific objects difficult to engage with in the real world. Digital reading devices will become much smarter too, linking to supplementary information and translating between languages.
5. Low-Resource Communities
In contrast to the dystopian visions of sci-fi, by 2030 AI will help improve life for the poorest members of society. Predictive analytics will let government agencies better allocate limited resources by helping them forecast environmental hazards or building code violations. AI planning could help distribute excess food from restaurants to food banks and shelters before it spoils.
These areas are under-funded, though, so how quickly these capabilities will appear is uncertain. There are fears that machine learning could inadvertently discriminate by correlating decisions with race or gender, or with surrogate factors like zip codes. But AI programs are easier to hold accountable than humans, so they’re more likely to help weed out discrimination.
6. Public Safety and Security
By 2030 cities are likely to rely heavily on AI technologies to detect and predict crime. Automatic processing of CCTV and drone footage will make it possible to rapidly spot anomalous behavior. This will not only allow law enforcement to react quickly but also forecast when and where crimes will be committed. Fears that bias and error could lead to people being unduly targeted are justified, but well-thought-out systems could actually counteract human bias and highlight police malpractice.
Techniques like speech and gait analysis could help interrogators and security guards detect suspicious behavior. Contrary to concerns about overly pervasive law enforcement, AI is likely to make policing more targeted and therefore less overbearing.
7. Employment and Workplace
The effects of AI will be felt most profoundly in the workplace. By 2030 AI will be encroaching on skilled professionals like lawyers, financial advisers, and radiologists. As it becomes capable of taking on more roles, organizations will be able to scale rapidly with relatively small workforces.
AI is more likely to replace tasks rather than jobs in the near term, and it will also create new jobs and markets, even if it’s hard to imagine what those will be right now. While it may reduce incomes and job prospects, increasing automation will also lower the cost of goods and services, effectively making everyone richer.
These structural shifts in the economy will require political rather than purely economic responses to ensure these riches are shared. In the short run, this may include resources being pumped into education and re-training, but longer term may require a far more comprehensive social safety net or radical approaches like a guaranteed basic income.
8. Entertainment
Entertainment in 2030 will be interactive, personalized, and immeasurably more engaging than today. Breakthroughs in sensors and hardware will see virtual reality, haptics, and companion robots increasingly enter the home. Users will be able to interact with entertainment systems conversationally, and these systems will show emotion, empathy, and the ability to adapt to environmental cues like the time of day.
Social networks already allow personalized entertainment channels, but the reams of data being collected on usage patterns and preferences will allow media providers to personalize entertainment to unprecedented levels. There are concerns this could endow media conglomerates with unprecedented control over people’s online experiences and the ideas to which they are exposed.
But advances in AI will also make creating your own entertainment far easier and more engaging, whether by helping to compose music or choreograph dances using an avatar. And with the production of high-quality entertainment democratized, it will be nearly impossible to predict how fluid human tastes in entertainment will develop.
Image Credit: Asgord / Shutterstock.com

Posted in Human Robots

#431592 Reactive Content Will Get to Know You ...

The best storytellers react to their audience. They look for smiles, signs of awe, or boredom; they simultaneously and skillfully read both the story and their listeners. Kevin Brooks, a seasoned storyteller working for Motorola’s Human Interface Labs, explains, “As the storyteller begins, they must tune in to… the audience’s energy. Based on this energy, the storyteller will adjust their timing, their posture, their characterizations, and sometimes even the events of the story. There is a dialog between audience and storyteller.”
Shortly after I read the script to Melita, the latest virtual reality experience from Madrid-based immersive storytelling company Future Lighthouse, CEO Nicolas Alcalá explained to me that the piece is an example of “reactive content,” a concept he’s been working on since his days at Singularity University.

For the first time in history, we have access to technology that can merge the reactive and affective elements of oral storytelling with the affordances of digital media, weaving stunning visuals, rich soundtracks, and complex meta-narratives in a story arena that has the capability to know you more intimately than any conventional storyteller could.
It’s no exaggeration to say that the storytelling potential here is phenomenal.
In short, we can refer to content as reactive if it reads and reacts to users based on their body rhythms, emotions, preferences, and data points. Artificial intelligence is used to analyze users’ behavior or preferences to sculpt unique storylines and narratives, essentially allowing for a story that changes in real time based on who you are and how you feel.
The development of reactive content will allow those working in the industry to go one step further than simply translating the essence of oral storytelling into VR. Rather than having a narrative experience with a digital storyteller who can read you, reactive content has the potential to create an experience with a storyteller who knows you.
This means being able to subtly insert minor personal details that have a specific meaning to the viewer. When we talk to our friends we often use experiences we’ve shared in the past or knowledge of our audience to give our story as much resonance as possible. Targeting personal memories and aspects of our lives is a highly effective way to elicit emotions and aid in visualizing narratives. When you can do this with the addition of visuals, music, and characters—all lifted from someone’s past—you have the potential for overwhelmingly engaging and emotionally-charged content.
Future Lighthouse informs me that, for now, reactive content will rely primarily on biometric feedback technology such as breathing, heartbeat, and eye-tracking sensors. A simple example would be a story in which parts of the environment or soundscape change in sync with the user’s heartbeat and breathing, or characters who call you out for not paying attention.
The next step would be characters and situations that react to the user’s emotions, wherein algorithms analyze biometric information to make inferences about states of emotional arousal (“why are you so nervous?” etc.). Another example would be implementing the use of “arousal parameters,” where the audience can choose what level of “fear” they want from a VR horror story before algorithms modulate the experience using information from biometric feedback devices.
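The arousal-parameter idea described above reduces to a simple control loop: read a biometric signal, estimate arousal, and modulate the story parameter the viewer chose. The sketch below illustrates that loop in Python; the function names, resting heart rate, and scaling factor are all hypothetical, not Future Lighthouse’s actual system.

```python
def fear_intensity(heart_rate_bpm, target_fear, resting_bpm=65):
    """Scale the requested fear level down as the viewer's arousal rises.

    heart_rate_bpm: current reading from a (hypothetical) biometric sensor.
    target_fear: the 0.0-1.0 level the viewer chose before the story began.
    """
    # Rough arousal estimate: how far above resting heart rate we are.
    arousal = max(0.0, (heart_rate_bpm - resting_bpm) / resting_bpm)
    # If the viewer is already highly aroused, ease off the intensity.
    return max(0.0, min(1.0, target_fear - 0.5 * arousal))

# A calm viewer gets the full requested intensity...
print(fear_intensity(65, 0.8))   # 0.8
# ...while a racing heartbeat pulls the experience back.
print(fear_intensity(120, 0.8))
```

In a real system the raw sensor stream would be smoothed and calibrated per user, but the principle is the same: the viewer sets the target, and the biometrics continuously steer the experience toward it.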
The company’s long-term goal is to gather research on storytelling conventions and produce a catalogue of story “wireframes.” This entails distilling the basic formula of each genre so it can then be fleshed out with visuals, character traits, and soundtracks tailored for individual users based on their deep data, preferences, and biometric information.
The development of reactive content will go hand in hand with a renewed exploration of diverging, dynamic storylines, and multi-narratives, a concept that hasn’t had much impact in the movie world thus far. In theory, the idea of having a story that changes and mutates is captivating largely because of our love affair with serendipity and unpredictability, a cultural condition theorist Arthur Kroker refers to as the “hypertextual imagination.” This feeling of stepping into the unknown with the possibility of deviation from the habitual translates as a comforting reminder that our own lives can take exciting and unexpected turns at any moment.
The concept entered mainstream culture with the classic Choose Your Own Adventure book series, launched in the late ’70s, which found great success in its literary form. However, filmic takes on the theme have made somewhat less of an impression. DVDs like I’m Your Man (1998) and Switching (2003) both use scene-selection tools to determine the direction of the storyline.
A more recent example comes from Kino Industries, which claims to have developed technology that lets filmmakers produce interactive films in which viewers use smartphones to vote on which direction the narrative takes at numerous decision points throughout the film.
The main problem with diverging narrative films has been the stop-start nature of the interactive element: when I’m immersed in a story I don’t want to have to pick up a controller or remote to select what’s going to happen next. Every time the audience is given the option to take a new path (“press this button,” “vote on X, Y, Z”), the narrative—and immersion within that narrative—is temporarily halted, and it takes the mind a while to get back into this state of immersion.
Reactive content has the potential to resolve these issues by enabling passive interactivity—that is, input and output without having to pause and actively make decisions or engage with the hardware. This will result in diverging, dynamic narratives that will unfold seamlessly while being dependent on and unique to the specific user and their emotions. Passive interactivity will also remove the game feel that can often be a symptom of interactive experiences and put a viewer somewhere in the middle: still firmly ensconced in an interactive dynamic narrative, but in a much subtler way.
While reading the Melita script I was particularly struck by a scene in which the characters start to engage with the user and there’s a synchronicity between the user’s heartbeat and objects in the virtual world. As the narrative unwinds and the words of Melita’s character get more profound, parts of the landscape, which seemed to be flashing and pulsating at random, come together and start to mimic the user’s heartbeat.
In 2013, Jane Aspell of Anglia Ruskin University (UK) and Lukas Heydrich of the Swiss Federal Institute of Technology proved that a user’s sense of presence and identification with a virtual avatar could be dramatically increased by syncing the on-screen character with the heartbeat of the user. The relationship between bio-digital synchronicity, immersion, and emotional engagement is something that will surely have revolutionary narrative and storytelling potential.
Image Credit: Tithi Luadthong / Shutterstock.com


#431427 Why the Best Healthcare Hacks Are the ...

Technology has the potential to solve some of our most intractable healthcare problems. In fact, it’s already doing so, with inventions getting us closer to a medical Tricorder, progress toward 3D-printed organs, and AIs that can do point-of-care diagnosis.
No doubt these applications of cutting-edge tech will continue to push the needle on progress in medicine, diagnosis, and treatment. But what if some of the healthcare hacks we need most aren’t high-tech at all?
According to Dr. Darshak Sanghavi, this is exactly the case. In a talk at Singularity University’s Exponential Medicine last week, Sanghavi told the audience, “We often think in extremely complex ways, but I think a lot of the improvements in health at scale can be done in an analog way.”
Sanghavi is the chief medical officer and senior vice president of translation at OptumLabs, and was previously director of preventive and population health at the Center for Medicare and Medicaid Innovation, where he oversaw the development of large pilot programs aimed at improving healthcare costs and quality.
“How can we improve health at scale, not for only a small number of people, but for entire populations?” Sanghavi asked. With programs that benefit a small group of people, he explained, what tends to happen is that the average health of a population improves, but the disparities across the group worsen.
“My mantra became, ‘The denominator is everybody,’” he said. He shared details of some low-tech but crucial fixes he believes could vastly benefit the US healthcare system.
1. Regulatory Hacking
Healthcare regulations are ultimately what drive many aspects of patient care, for better or worse. Worse because the mind-boggling complexity of regulations (exhibit A: the Affordable Care Act is reportedly about 20,000 pages long) can make it hard for people to get the care they need at a cost they can afford, but better because, as Sanghavi explained, tweaking these regulations in the right way can result in across-the-board improvements in a given population’s health.
An adjustment to Medicare hospitalization rules makes for a relevant example. The code was updated to state that if people who left the hospital were re-admitted within 30 days, that hospital had to pay a penalty. The result was hospitals taking more care to ensure patients were released not only in good health, but also with a solid understanding of what they had to do to take care of themselves going forward. “Here, arguably the writing of a few lines of regulatory code resulted in a remarkable decrease in 30-day re-admissions, and the savings of several billion dollars,” Sanghavi said.
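The 30-day rule described above is, at its core, a single conditional. The sketch below shows that logic in Python; the flat 3% penalty rate is a made-up placeholder, since the real Medicare program uses a more involved, risk-adjusted formula.

```python
def readmission_penalty(days_until_readmission, base_payment, penalty_rate=0.03):
    """Flag a penalty if a discharged patient returns within 30 days.

    penalty_rate is a hypothetical fraction of the hospital's base payment,
    standing in for Medicare's actual risk-adjusted calculation.
    """
    if days_until_readmission is not None and days_until_readmission <= 30:
        return base_payment * penalty_rate
    return 0.0

print(readmission_penalty(14, 10_000))  # readmitted in 2 weeks -> penalized
print(readmission_penalty(45, 10_000))  # readmitted after 45 days -> no penalty
print(readmission_penalty(None, 10_000))  # never readmitted -> no penalty
```

The point of the example is how little "regulatory code" is involved: one threshold, applied uniformly, is enough to change hospital behavior at scale.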
2. Long-Term Focus
It’s easy to focus on healthcare hacks that have immediate, visible results—but what about fixes whose benefits take years to manifest? How can we motivate hospitals, regulators, and doctors to take action when they know they won’t see changes anytime soon?
“I call this the reality TV problem,” Sanghavi said. “Reality shows don’t really care about who’s the most talented recording artist—they care about getting the most viewers. That is exactly how we think about health care.”
Sanghavi’s team wanted to address this problem for heart attacks. They found they could reliably determine someone’s 10-year risk of having a heart attack based on a simple risk profile. Rather than monitoring patients’ cholesterol, blood pressure, weight, and other individual factors, the team took the average 10-year risk across entire provider panels, then made providers responsible for controlling those populations.
“Every percentage point you lower that risk, by hook or by crook, you get some people to stop smoking, you get some people on cholesterol medication. It’s patient-centered decision-making, and the provider then makes money. This is the world’s first predictive analytic model, at scale, that’s actually being paid for at scale,” he said.
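The panel-level accounting Sanghavi describes can be sketched in a few lines: average a 10-year risk score across a provider’s whole panel, then pay per percentage point of reduction. The per-patient risk values and the dollars-per-point rate below are invented for illustration.

```python
def panel_average_risk(patient_risks):
    """Average 10-year heart-attack risk (as a percentage) across a panel."""
    return sum(patient_risks) / len(patient_risks)

def provider_payment(baseline_risk, current_risk, dollars_per_point=500):
    """Pay the provider for each percentage point the panel's risk dropped."""
    reduction = max(0.0, baseline_risk - current_risk)
    return reduction * dollars_per_point

panel_2016 = [12.0, 30.0, 8.0, 22.0]  # per-patient 10-year risk, percent
panel_2017 = [10.0, 24.0, 8.0, 18.0]  # after smoking cessation, statins, etc.

baseline = panel_average_risk(panel_2016)  # 18.0
current = panel_average_risk(panel_2017)   # 15.0
print(provider_payment(baseline, current))  # 3 points lower -> 1500.0
```

Note how the incentive is agnostic about method, matching the “by hook or by crook” framing: any mix of interventions that lowers the panel average earns the same payment.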
3. Aligned Incentives
If hospitals are held accountable for the health of the communities they’re based in, those hospitals need to have the right incentives to follow through. “Hospitals have to spend money on community benefit, but linking that benefit to a meaningful population health metric can catalyze significant improvements,” Sanghavi said.
Darshak Sanghavi speaking at Singularity University’s 2017 Exponential Medicine Summit in San Diego, CA.
He used smoking cessation as an example. His team designed a program where hospitals were given a score (determined by the Centers for Disease Control and Prevention) based on the smoking rate in the counties where they’re located, then given monetary incentives to improve their score. Improving their score, in turn, resulted in better health for their communities, which meant fewer patients to treat for smoking-related health problems.
4. Social Determinants of Health
Social determinants of health include factors like housing, income, family, and food security. The answer to getting people to pay attention to these factors at scale, and creating aligned incentives, Sanghavi said, is “Very simple. We just have to measure it to start with, and measure it universally.”
His team was behind a $157 million pilot program called Accountable Health Communities that went live this year. The program requires that all Medicare and Medicaid beneficiaries be screened for various social determinants of health. With all that data being collected, analysts can pinpoint local trends, then target funds to address the underlying problem, whether it’s job training, drug use, or nutritional education. “You’re then free to invest the dollars where they’re needed…this is how we can improve health at scale, with very simple changes in the incentive structures that are created,” he said.
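The “measure it universally, then target funds” step amounts to aggregating screening results by locality. A minimal sketch of that aggregation is below; the counties, need categories, and records are all hypothetical, not data from the actual program.

```python
from collections import Counter

# Hypothetical screening records: (county, unmet social need flagged)
screenings = [
    ("Adams", "food insecurity"),
    ("Adams", "food insecurity"),
    ("Adams", "housing"),
    ("Baker", "job training"),
    ("Baker", "food insecurity"),
]

def top_need_by_county(records):
    """Return each county's most frequently flagged social need."""
    by_county = {}
    for county, need in records:
        by_county.setdefault(county, Counter())[need] += 1
    return {county: counts.most_common(1)[0][0]
            for county, counts in by_county.items()}

print(top_need_by_county(screenings))
# Funds for Adams would target food insecurity first.
```

Once screening is universal, this kind of tally is all it takes to see where the next dollar should go, which is exactly the low-tech leverage Sanghavi is pointing at.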
5. ‘Securitizing’ Public Health
Sanghavi’s final point tied back to his discussion of aligning incentives. As misguided as it may seem, the reality is that financial incentives can make a huge difference in healthcare outcomes, from both a patient and a provider perspective.
Sanghavi’s team did an experiment in which they created outcome benchmarks for three major health problems that exist across geographically diverse areas: smoking, adolescent pregnancy, and binge drinking. The team proposed measuring the baseline of these issues then creating what they called a social impact bond. If communities were able to lower their frequency of these conditions by a given percent within a stated period of time, they’d get paid for it.
“What that did was essentially say, ‘you have a buyer for this outcome if you can achieve it,’” Sanghavi said. “And you can try to get there in any way you like.” The program is currently in CMS clearance.
AI and Robots Not Required
Using robots to perform surgery and artificial intelligence to diagnose disease will undoubtedly benefit doctors and patients around the US and the world. But Sanghavi’s talk made it clear that our healthcare system needs much more than this, and that improving population health on a large scale is really a low-tech project—one involving more regulatory and financial innovation than technological innovation.
“The things that get measured are the things that get changed,” he said. “If we choose the right outcomes to predict long-term benefit, and we pay for those outcomes, that’s the way to make progress.”
Image Credit: Wonderful Nature / Shutterstock.com


#431424 A ‘Google Maps’ for the Mouse Brain ...

Ask any neuroscientist to draw you a neuron, and it’ll probably look something like a star with two tails: one stubby with extensive tree-like branches, the other willowy, lengthy and dotted with spindly spikes.
While a decent abstraction, this cartoonish image hides the uncomfortable truth that scientists still don’t know much about what many neurons actually look like, not to mention the extent of their connections.
But without untangling the jumbled mess of neural wires that zigzag across the brain, scientists are stumped in trying to answer one of the most fundamental mysteries of the brain: how individual neuronal threads carry and assemble information, which forms the basis of our thoughts, memories, consciousness, and self.
What if there was a way to virtually trace and explore the brain’s serpentine fibers, much like the way Google Maps allows us to navigate the concrete tangles of our cities’ highways?
Thanks to an interdisciplinary team at Janelia Research Campus, we’re on our way. Meet MouseLight, the most extensive map of the mouse brain ever attempted. The ongoing project has an ambitious goal: reconstructing thousands—if not more—of the mouse’s 70 million neurons into a 3D map. (You can play with it here!)
With map in hand, neuroscientists around the world can begin to answer how neural circuits are organized in the brain, and how information flows from one neuron to another across brain regions and hemispheres.
The first release, presented Monday at the Society for Neuroscience Annual Conference in Washington, DC, contains information about the shape and sizes of 300 neurons.
And that’s just the beginning.
“MouseLight’s new dataset is the largest of its kind,” says Dr. Wyatt Korff, director of project teams. “It’s going to change the textbook view of neurons.”

http://mouselight.janelia.org/assets/carousel/ML-Movie.mp4
Brain Atlas
MouseLight is hardly the first rodent brain atlasing project.
The Mouse Brain Connectivity Atlas at the Allen Institute for Brain Science in Seattle tracks neuron activity across small circuits in an effort to trace a mouse’s connectome—a complete atlas of how the firing of one neuron links to the next.
MICrONS (Machine Intelligence from Cortical Networks), the $100 million government-funded “moonshot,” hopes to distill brain computation into algorithms for more powerful artificial intelligence. Its first step? Brain mapping.
What makes MouseLight stand out is its scope and level of detail.
MICrONS, for example, is focused on dissecting a cubic millimeter of the mouse visual processing center. In contrast, MouseLight involves tracing individual neurons across the entire brain.
And while connectomics outlines the major connections between brain regions, the bird’s-eye view entirely misses the intricacies of each individual neuron. This is where MouseLight steps in.
Slice and Dice
With a width only a fraction of a human hair, neuron projections are hard to capture in their native state. Tug or squeeze the brain too hard, and the long, delicate branches distort or even shred into bits.
In fact, previous attempts at trying to reconstruct neurons at this level of detail topped out at just a dozen, stymied by technological hiccups and sky-high costs.
A few years ago, the MouseLight team set out to automate the entire process, with a few time-saving tweaks. Here’s how it works.
After injecting a mouse with a virus that causes a handful of neurons to produce a green-glowing protein, the team treated the brain with a sugar alcohol solution. This step “clears” the brain, transforming the beige-colored organ to translucent, making it easier for light to penetrate and boosting the signal-to-background noise ratio. The brain is then glued onto a small pedestal and ready for imaging.
Building upon an established method called “two-photon microscopy,” the team then tweaked several parameters to cut imaging time from days (or weeks) down to a fraction of that. Endearingly known as “2P” by the experts, this type of laser microscope zaps the tissue with just enough photons to light up a single plane without damaging the tissue—sharper plane, better focus, crisper image.
After taking an image, the setup activates its vibrating razor and shaves off the imaged section of the brain—a wispy slice about 200 micrometers thick. The process is repeated until the whole brain is imaged.
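The image-then-shave cycle is a simple loop: image one plane, cut away the imaged slab, repeat until the tissue runs out. The sketch below captures that loop; the 200-micrometer slice thickness comes from the article, while the 8 mm brain depth and the stand-in hardware callbacks are illustrative assumptions.

```python
SLICE_THICKNESS_UM = 200  # thickness removed per cycle, per the article

def image_whole_brain(brain_depth_um, image_section, cut_section):
    """Alternate two-photon imaging and vibratome cuts until done.

    image_section and cut_section are placeholders standing in for
    the real microscope and razor control calls.
    """
    sections = []
    depth = 0
    while depth < brain_depth_um:
        sections.append(image_section(depth))  # 2P imaging of one plane
        cut_section(SLICE_THICKNESS_UM)        # shave off the imaged slab
        depth += SLICE_THICKNESS_UM
    return sections

# Assuming roughly 8 mm of tissue, that's about 40 image/cut cycles:
n = len(image_whole_brain(8_000, lambda d: d, lambda t: None))
print(n)  # 40
```

The automation win is that nothing in the loop needs a human: the same two steps run unattended until the whole brain has passed under the lens.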
This setup made imaging 16 to 48 times faster than conventional microscopy, writes team leader Dr. Jayaram Chandrashekar, who published a version of the method early last year in eLife.
The resulting images strikingly highlight every nook and cranny of a neuronal branch, popping out against a pitch-black background. But pretty pictures come at a hefty data cost: each image takes up a whopping 20 terabytes of data—roughly the storage space of 4,000 DVDs, or 10,000 hours of movies.
Stitching individual images back into 3D is an image-processing nightmare. The MouseLight team used a combination of computational power and human prowess to complete this final step.
The reconstructed images are handed off to a mighty team of seven trained neuron trackers. With the help of tracing algorithms developed in-house and a keen eye, each member can track roughly a neuron a day—significantly less time than the week or so previously needed.
A Numbers Game
Even with just 300 fully reconstructed neurons, MouseLight has already revealed new secrets of the brain.
While it’s widely accepted that axons, the neurons’ outgoing projection, can span the entire length of the brain, these extra-long connections were considered relatively rare. (In fact, one previously discovered “giant neuron” was thought to link to consciousness because of its expansive connections).
Images captured from two-photon microscopy show an axon and dendrites protruding from a neuron’s cell body (sphere in center). Image Credit: Janelia Research Center, MouseLight project team
MouseLight blows that theory out of the water.
The data clearly shows that “giant neurons” are far more common than previously thought. For example, four neurons normally associated with taste had wiry branches that stretched all the way into brain areas that control movement and process touch.
“We knew that different regions of the brain talked to each other, but seeing it in 3D is different,” says Dr. Eve Marder at Brandeis University.
“The results are so stunning because they give you a really clear view of how the whole brain is connected.”
With a tried-and-true system in place, the team is now aiming to add 700 neurons to their collection within a year.
But appearance is only part of the story.
We can’t tell everything about a person simply by how they look. Neurons are the same: scientists can only infer so much about a neuron’s function by looking at its shape and position. The team also hopes to profile the gene expression patterns of each neuron, which could provide more hints about their roles in the brain.
MouseLight essentially dissects the neural infrastructure that allows information traffic to flow through the brain. These anatomical highways are just the foundation. Just like Google Maps, roads form only the critical first layer of the map. Street view, traffic information and other add-ons come later for a complete look at cities in flux.
The same will happen for understanding our ever-changing brain.
Image Credit: Janelia Research Campus, MouseLight project team


#431412 3 Dangerous Ideas From Ray Kurzweil

Recently, I interviewed my friend Ray Kurzweil at the Googleplex for a 90-minute webinar on disruptive and dangerous ideas, a prelude to my fireside chat with Ray at Abundance 360 this January.

Ray is my cofounder and the chancellor of Singularity University. He is also an XPRIZE trustee, a director of engineering at Google, and one of the best predictors of our exponential future.
It’s my pleasure to share with you three compelling ideas that came from our conversation.
1. The nation-state will soon be irrelevant.
Historically, we humans don’t like change. We like waking up in the morning and knowing that the world is the same as the night before.
That’s one reason why government institutions exist: to stabilize society.
But how will this change in 20 or 30 years? What role will stabilizing institutions play in a world of continuous, accelerating change?
“Institutions stick around, but they change their role in our lives,” Ray explained. “They already have. The nation-state is not as profound as it was. Religion used to direct every aspect of your life, minute to minute. It’s still important in some ways, but it’s much less important, much less pervasive. [It] plays a much smaller role in most people’s lives than it did, and the same is true for governments.”
Ray continues: “We are fantastically interconnected already. Nation-states are not islands anymore. So we’re already much more of a global community. The generation growing up today really feels like world citizens much more than ever before, because they’re talking to people all over the world, and it’s not a novelty.”
I’ve previously shared my belief that national borders have become extremely porous, with ideas, people, capital, and technology rapidly flowing between nations. In decades past, your cultural identity was tied to your birthplace. In the decades ahead, your identity will be more a function of many other external factors. If you love space, you’ll be more connected with fellow space cadets around the globe than tied to someone born next door.
2. We’ll hit longevity escape velocity before we realize we’ve hit it.
Ray and I share a passion for extending the healthy human lifespan.
I frequently discuss Ray’s concept of “longevity escape velocity”—the point at which, for every year that you’re alive, science is able to extend your life for more than a year.
Scientists are continually extending the human lifespan, helping us cure heart disease, cancer, and eventually, neurodegenerative disease. This will keep accelerating as technology improves.
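The arithmetic behind longevity escape velocity is simple enough to sketch: each calendar year costs you one year of remaining life expectancy, while medical progress adds some amount back, and once that annual gain exceeds one year, remaining life expectancy starts rising instead of falling. The numbers below (40 years remaining, a half-year gain growing 10% annually) are purely illustrative assumptions, not figures from Ray.

```python
# Toy model of "longevity escape velocity": each elapsed year subtracts
# one year of remaining life expectancy, while medical progress adds
# annual_gain(t) years back. Escape velocity is the point where the
# annual gain first exceeds one year per year.

def years_remaining(start_remaining, annual_gain, years):
    """Return the trajectory of remaining life expectancy over `years`."""
    remaining = start_remaining
    history = [remaining]
    for t in range(years):
        remaining += annual_gain(t) - 1  # -1 for the year that passed
        history.append(remaining)
    return history

# Illustrative assumption: medicine adds 0.5 years now, growing 10% a year.
gain = lambda t: 0.5 * 1.1 ** t
trajectory = years_remaining(start_remaining=40, annual_gain=gain, years=30)

# First year in which the annual gain reaches one year per year.
escape_year = next(t for t in range(30) if gain(t) >= 1)
```

Under these toy assumptions, escape velocity arrives in year 8, and by year 30 remaining life expectancy has more than doubled rather than shrunk, which is the counterintuitive point of the concept: the crossover happens quietly, well before its effects are obvious.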
During my discussion with Ray, I asked him when he expects we’ll reach escape velocity.
His answer? “I predict it’s likely just another 10 to 12 years before the general public will hit longevity escape velocity.”
“At that point, biotechnology is going to have taken over medicine,” Ray added. “The next decade is going to be a profound revolution.”
From there, Ray predicts that nanorobots will “basically finish the job of the immune system,” with the ability to seek and destroy cancerous cells and repair damaged organs.
As we head into this sci-fi-like future, your most important job for the next 15 years is to stay alive. “Wear your seatbelt until we get the self-driving cars going,” Ray jokes.
The implications for society will be profound. While the scarcity-minded in government will react by saying, “Social Security will be destroyed,” the more abundance-minded will realize that extending a person’s productive earning years from 65 to 75 or 85 would be a massive boon to GDP.
3. Technology will help us define and actualize human freedoms.
The third dangerous idea from my conversation with Ray is about how technology will enhance our humanity, not detract from it.
You may have heard critics complain that technology is making us less human and increasingly disconnected.
Ray and I share a slightly different viewpoint: that technology enables us to tap into the very essence of what it means to be human.
“I don’t think humans even have to be biological,” explained Ray. “I think humans are the species that changes who we are.”
Ray argues that this began when humans developed the earliest technologies—fire and stone tools. These tools gave people new capabilities and became extensions of our physical bodies.
At its base level, technology is the means by which we change our environment and change ourselves. This will continue, even as the technologies themselves evolve.
“People say, ‘Well, do I really want to become part machine?’ You’re not even going to notice it,” Ray says, “because it’s going to be a sensible thing to do at each point.”
Today, we take medicine to fight disease and maintain good health, and we would likely consider it irresponsible if someone refused a proven, life-saving medicine.
In the future, this will still happen, except the medicine might contain nanobots that target disease or improve your memory so you can recall things more easily.
And because this new medicine works so well for so many, public perception will change. Eventually, it will become the norm… as ubiquitous as penicillin and ibuprofen are today.
In this way, ingesting nanorobots, uploading your brain to the cloud, and using devices like smart contact lenses can help humans become, well, better at being human.
Ray sums it up: “We are the species that changes who we are to become smarter and more profound, more beautiful, more creative, more musical, funnier, sexier.”
Speaking of sexuality and beauty, Ray also sees technology expanding these concepts. “In virtual reality, you can be someone else. Right now, actually changing your gender in real reality is a pretty significant, profound process, but you could do it in virtual reality much more easily and you can be someone else. A couple could become each other and discover their relationship from the other’s perspective.”
In the 2030s, when Ray predicts sensor-laden nanorobots will be able to go inside the nervous system, virtual or augmented reality will become exceptionally realistic, enabling us to “be someone else and have other kinds of experiences.”
Why Dangerous Ideas Matter
Why is it so important to discuss dangerous ideas?
I often say that the day before something is a breakthrough, it’s a crazy idea.
By consuming and considering a steady diet of “crazy ideas,” you train yourself to think bigger and bolder, a critical requirement for making an impact.
As humans, we are linear and scarcity-minded.
As entrepreneurs, we must think exponentially and abundantly.
At the end of the day, the formula for a true breakthrough is a crazy idea you believe in, plus the passion to pursue it against all naysayers and obstacles.
Image Credit: Tithi Luadthong / Shutterstock.com

Posted in Human Robots