
#434781 What Would It Mean for AI to Become ...

As artificial intelligence systems take on more tasks and solve more problems, it’s hard to say which is rising faster: our interest in them or our fear of them. Futurist Ray Kurzweil famously predicted that “By 2029, computers will have emotional intelligence and be convincing as people.”

We don’t know how accurate this prediction will turn out to be. Even if it takes more than 10 years, though, is it really possible for machines to become conscious? If the machines Kurzweil describes say they’re conscious, does that mean they actually are?

Perhaps a more relevant question at this juncture is: what is consciousness, and how do we replicate it if we don’t understand it?

In a panel discussion at South by Southwest titled “How AI Will Design the Human Future,” experts from academia and industry discussed these questions and more.

Wait, What Is AI?
Most of AI’s recent feats—diagnosing illnesses, participating in debate, writing realistic text—involve machine learning, which uses statistics to find patterns in large datasets, then uses those patterns to make predictions. However, “AI” has been used to refer to everything from basic software automation and algorithms to advanced machine learning and deep learning.
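To make that definition concrete, here is a minimal sketch of the find-patterns-then-predict loop in Python, using scikit-learn. The tiny “patient” dataset is invented purely for illustration:

```python
# A toy version of machine learning's core loop: fit a statistical model
# to labeled examples, then use the learned pattern to predict new cases.
from sklearn.linear_model import LogisticRegression

# Invented toy data: (body temperature, heart rate) -> diagnosis.
X_train = [[37.0, 60], [39.5, 95], [36.8, 55], [40.1, 100]]
y_train = [0, 1, 0, 1]  # 0 = healthy, 1 = ill

model = LogisticRegression()
model.fit(X_train, y_train)         # find statistical patterns in the data
print(model.predict([[39.0, 90]]))  # apply those patterns to a new case
```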

“The term ‘artificial intelligence’ is thrown around constantly and often incorrectly,” said Jennifer Strong, a reporter at the Wall Street Journal and host of the podcast “The Future of Everything.” Indeed, one study found that 40 percent of European companies that claim to be working on or using AI don’t actually use it at all.

Dr. Peter Stone, associate chair of computer science at UT Austin, was the study panel chair on the 2016 One Hundred Year Study on Artificial Intelligence (or AI100) report. Based out of Stanford University, AI100 is studying and anticipating how AI will impact our work, our cities, and our lives.

“One of the first things we had to do was define AI,” Stone said. They defined it as a collection of technologies, inspired by the human brain, that can perceive their surrounding environment and figure out what actions to take given those inputs.

Modeling on the Unknown
Here’s the crazy thing about that definition (and about AI itself): we’re essentially trying to re-create the abilities of the human brain without having anything close to a thorough understanding of how the human brain works.

“We’re starting to pair our brains with computers, but brains don’t understand computers and computers don’t understand brains,” Stone said. Dr. Heather Berlin, cognitive neuroscientist and professor of psychiatry at the Icahn School of Medicine at Mount Sinai, agreed. “It’s still one of the greatest mysteries how this three-pound piece of matter can give us all our subjective experiences, thoughts, and emotions,” she said.

This isn’t to say we’re not making progress; there have been significant neuroscience breakthroughs in recent years. “This has been the stuff of science fiction for a long time, but now there’s active work being done in this area,” said Amir Husain, CEO and founder of Austin-based AI company SparkCognition.

Advances in brain-machine interfaces show just how much more we understand the brain now than we did even a few years ago. Neural implants are being used to restore communication or movement capabilities in people who’ve been impaired by injury or illness. Scientists have been able to transfer signals from the brain to prosthetic limbs and stimulate specific circuits in the brain to treat conditions like Parkinson’s, PTSD, and depression.

But much of the brain’s inner workings remain a deep, dark mystery—one that will have to be further solved if we’re ever to get from narrow AI (systems that can perform specific tasks, which is where the technology stands today) to artificial general intelligence (systems that possess the same intelligence level and learning capabilities as humans).

The biggest question that arises here, and one that’s become a popular theme across stories and films, is this: if machines achieve human-level general intelligence, does that also mean they’d be conscious?

Wait, What Is Consciousness?
As valuable as the knowledge we’ve accumulated about the brain is, it seems like nothing more than a collection of disparate facts when we try to put it all together to understand consciousness.

“If you can replace one neuron with a silicon chip that can do the same function, then replace another neuron, and another—at what point are you still you?” Berlin asked. “These systems will be able to pass the Turing test, so we’re going to need another concept of how to measure consciousness.”

Is consciousness a measurable phenomenon, though? Rather than progressing by degrees or moving through some gray area, isn’t it pretty black and white—a being is either conscious or it isn’t?

This may be an outmoded way of thinking, according to Berlin. “It used to be that only philosophers could study consciousness, but now we can study it from a scientific perspective,” she said. “We can measure changes in neural pathways. It’s subjective, but depends on reportability.”

She described three levels of consciousness: pure subjective experience (“Look, the sky is blue”), awareness of one’s own subjective experience (“Oh, it’s me that’s seeing the blue sky”), and relating one subjective experience to another (“The blue sky reminds me of a blue ocean”).

“These subjective states exist all the way down the animal kingdom. As humans we have a sense of self that gives us another depth to that experience, but it’s not necessary for pure sensation,” Berlin said.

Husain took this definition a few steps further. “It’s this self-awareness, this idea that I exist separate from everything else and that I can model myself,” he said. “Human brains have a wonderful simulator. They can propose a course of action virtually, in their minds, and see how things play out. The ability to include yourself as an actor means you’re running a computation on the idea of yourself.”

Most of the decisions we make involve envisioning different outcomes, thinking about how each outcome would affect us, and choosing which outcome we’d most prefer.

“Complex tasks you want to achieve in the world are tied to your ability to foresee the future, at least based on some mental model,” Husain said. “With that view, I as an AI practitioner don’t see a problem implementing that type of consciousness.”
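As a loose sketch of Husain’s point, here is a toy agent (every function name here is invented for illustration) that “simulates” each candidate action with a crude mental model, scores the imagined outcome, and picks the one it prefers:

```python
def simulate(state, action):
    """A stand-in 'mental model': predict the next state if an action is taken."""
    return state + action

def preference(state):
    """How much the agent likes an imagined state (here: closeness to 10)."""
    return -abs(10 - state)

def choose(state, actions):
    # Envision each outcome, evaluate how it would affect us, pick the best.
    return max(actions, key=lambda a: preference(simulate(state, a)))

print(choose(state=7, actions=[-1, 1, 2, 5]))  # -> 2, since 7 + 2 lands closest to 10
```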

Moving Forward Cautiously (But Not Too Cautiously)
To be clear, we’re nowhere near machines achieving artificial general intelligence or consciousness, and whether a “conscious machine” is possible—not to mention necessary or desirable—is still very much up for debate.

As machine intelligence continues to advance, though, we’ll need to walk the line between progress and risk management carefully.

Improving the transparency and explainability of AI systems is one crucial goal AI developers and researchers are zeroing in on. Especially in applications that could mean the difference between life and death, AI shouldn’t advance without people being able to trace how it’s making decisions and reaching conclusions.

Medicine is a prime example. “There are already advances that could save lives, but they’re not being used because they’re not trusted by doctors and nurses,” said Stone. “We need to make sure there’s transparency.” Demanding too much transparency would also be a mistake, though, because it would hinder the development of systems that could at best save lives and at worst improve efficiency and free up doctors to have more face time with patients.

Similarly, self-driving cars have great potential to reduce traffic fatalities. But even though humans cause thousands of deadly crashes every day, we’re terrified by the idea of self-driving cars that are anything less than perfect. “If we only accept autonomous cars when there’s zero probability of an accident, then we will never accept them,” Stone said. “Yet we give 16-year-olds the chance to take a road test with no idea what’s going on in their brains.”

This brings us back to the fact that, in building tech modeled after the human brain—which has evolved over millions of years—we’re working towards an end whose means we don’t fully comprehend, be it something as basic as choosing when to brake or accelerate or something as complex as measuring consciousness.

“We shouldn’t charge ahead and do things just because we can,” Stone said. “The technology can be very powerful, which is exciting, but we have to consider its implications.”

Image Credit: agsandrew / Shutterstock.com


#434772 Traditional Higher Education Is Losing ...

Should you go to graduate school? If so, why? If not, what are your alternatives? Millions of young adults across the globe—and their parents and mentors—find themselves asking these questions every year.

Earlier this month, I explored how exponential technologies are rising to meet the needs of the rapidly changing workforce.

In this blog, I’ll dive into a highly effective way to build the business acumen and skills needed to make the most significant impact in these exponential times.

To start, let’s weigh the value of graduate school against that of apprenticeship—especially during this time of extraordinarily rapid growth and the micro-diversification of careers.

The True Value of an MBA
Not all graduate schools are created equal.

For complex technical trades like medicine, engineering, and law, formal graduate-level training provides a critical foundation for safe, ethical practice (until these trades are fully augmented by artificial intelligence and automation…).

For the purposes of today’s blog, let’s focus on the value of a Master of Business Administration (MBA) degree, compared to acquiring your business acumen through various forms of apprenticeship.

The Waning of Business Degrees
Ironically, business schools are facing a tough business problem. The rapid rate of technological change, a booming job market, and the digitization of education are chipping away at the traditional graduate-level business program.

The data speaks for itself.

The Decline of Graduate School Admissions
Enrollment in two-year, full-time MBA programs in the US fell by more than one-third from 2010 to 2016.

While top business schools (e.g., Stanford, Harvard, and Wharton) were insulated from the decrease in applications in previous years, this year they too felt the waning interest in MBA programs.

Harvard Business School: 4.5 percent decrease in applications, the school’s biggest drop since 2005.
Wharton: 6.7 percent decrease in applications.
Stanford Graduate School: 4.6 percent decrease in applications.

Another signal of change began unfolding over the past week. You may have read news headlines about an emerging college admissions scam, which implicates highly selective US universities, sports coaches, parents, and students in a conspiracy to game the undergraduate admissions process.

Already, students are filing multibillion-dollar civil lawsuits arguing that the scheme has devalued their degrees or denied them a fair admissions opportunity.

MBA Graduates in the Workforce
To meet today’s business needs, startups and massive companies alike are increasingly hiring technologists, developers, and engineers in place of the MBA graduates they may have preferentially hired in the past.

While 85 percent of US employers expect to hire MBA graduates this year (a decrease from 91 percent in 2017), 52 percent of employers worldwide expect to hire graduates with a master’s in data analytics (an increase from 35 percent last year).

We’re also seeing the waning of MBA degree holders at the CEO level.

For decades, an MBA was the hallmark of upward mobility towards the C-suite of top companies.

But as exponential technologies permeate not only products but every part of the supply chain—from manufacturing and shipping to sales, marketing and customer service—that trend is changing by necessity.

Looking at the Harvard Business Review’s Top 100 CEOs in 2018 list, more CEOs on the list held engineering degrees than MBAs (34 versus 32).

There’s much more to leading innovative companies than an advanced business degree.

How Are Schools Responding?
With disruption to the advanced business education system already here, some business schools are applying notes from their own innovation classes to brace for change.

Over the past half-decade, we’ve seen schools with smaller MBA programs shut their doors in favor of advanced degrees with more specialization. This directly responds to market demand for skills in data science, supply chain, and manufacturing.

Some degrees resemble the precise skills training of technical trades. Others are very much in line with the apprenticeship models we’ll explore next.

Regardless, this new specialization strategy is working, attracting new students. Over the past decade (2006 to 2016), enrollment in specialized graduate business programs doubled.

Higher education is also seeing a preference shift toward for-profit trade schools, like coding boot camps. This shift is one of several forces pushing universities to adopt skill-specific advanced degrees.

But some schools are slow to adapt, raising the question: how and when will these legacy programs be disrupted? A survey of over 170 business school deans around the world showed that many programs are operating at a loss.

But if these schools are world-class business institutions, as advertised, why do they keep the doors open even while they lose money? The surveyed deans revealed an important insight: they keep the degree program open because of the program’s prestige.

Why Go to Business School?
Shorthand Credibility, Cognitive Biases, and Prestige
Regardless of what knowledge a person takes away from graduate school, attending one of the world’s most rigorous and elite programs gives grads external validation.

With over 55 percent of MBA applicants applying to just 6 percent of graduate business schools, we have a clear cognitive bias toward the perceived elite status of certain universities.

To the outside world, thanks to the power of cognitive biases, an advanced degree is credibility shorthand for your capabilities.

Simply passing through a top school’s filtration system signals that you have some baseline of ability and merit.

And startup success statistics tend to back up that perceived enhanced capability. Let’s take, for example, universities with the most startup unicorn founders (see the figure below).

When you consider the 320+ unicorn startups around the world today, these numbers become even more impressive. Stanford’s 18 unicorn companies account for over 5 percent of global unicorns, and Harvard is responsible for producing just under 5 percent.

Combined, just these two universities (out of over 5,000 in the US, and thousands more around the world) account for 1 in 10 of the billion-dollar private companies in the world.
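The arithmetic behind those shares is simple to verify; Harvard’s exact count isn’t given above, so roughly 15 unicorns is assumed here (consistent with “just under 5 percent”):

```latex
\text{Stanford: } \frac{18}{320} \approx 5.6\%, \qquad
\text{Harvard: } \frac{15}{320} \approx 4.7\%, \qquad
5.6\% + 4.7\% \approx 10\% \;\Rightarrow\; \text{about 1 in 10}
```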

By the numbers, the prestigious reputation of these elite business programs has a firm basis in current innovation success.

While prestige may be inherent to the degree earned by graduates from these business programs, the credibility boost from holding one of these degrees is not a guaranteed path to success in the business world.

For example, you might expect that Harvard Business School or the Stanford Graduate School of Business would come out on top when tallying up the alma maters of Fortune 500 CEOs.

It turns out that the University of Wisconsin-Madison leads the business school pack with 14 CEOs to Harvard’s 12. Beyond prestige, the success these elite business programs see translates directly into cultivating unmatched networks and relationships.

Relationships
Graduate schools—particularly at the upper echelon—are excellent at attracting sharp students.

At an elite business school, if you meet just five to ten people with extraordinary skill sets, personalities, ideas, or networks, then you have earned back your $200,000 education investment.

It’s no coincidence that some 40 percent of Silicon Valley venture capitalists are alumni of either Harvard or Stanford.

From future investors to advisors, friends, and potential business partners, relationships are critical to an entrepreneur’s success.

Apprenticeships
As we saw above, graduate business degree programs are melting away in the current wave of exponential change.

With US student debt now exceeding $1.5 trillion, there must be a more impactful alternative to attending graduate school for those starting their careers.

When I think about the most important skills I use today as an entrepreneur, writer, and strategic thinker, they didn’t come from my decade of graduate school at Harvard or MIT… they came from my experiences building real technologies and companies, and working with mentors.

Apprenticeship comes in a variety of forms; here, I’ll cover three top-of-mind approaches:

Real-world business acumen via startup accelerators
A direct apprenticeship model
The 6 D’s of mentorship

Startup Accelerators and Business Practicum
Let’s contrast the shrinking interest in MBA programs with applications to a relatively new model of business education: startup accelerators.

Startup accelerators are short-term (typically three to six months), cohort-based programs focusing on providing startup founders with the resources (capital, mentorship, relationships, and education) needed to refine their entrepreneurial acumen.

While graduate business programs have been contracting, startup accelerators are alive, well, and expanding rapidly.

In the 10 years from 2005 (when Paul Graham founded Y Combinator) through 2015, the number of startup accelerators in the US increased by more than tenfold.

The increase in startup accelerator activity hints at a larger trend: our best and brightest business minds are opting to invest their time and efforts in obtaining hands-on experience, creating tangible value for themselves and others, rather than diving into the theory often taught in business school classrooms.

The “Strike Force” Model
The Strike Force is my elite team of young entrepreneurs who work directly with me across all of my companies, travel by my side, sit in on every meeting with me, and help build businesses that change the world.

Previous Strike Force members have gone on to launch successful companies, including Bold Capital Partners, my $250 million venture capital firm.

Strike Force is an apprenticeship for the next generation of exponential entrepreneurs.

To paraphrase my good friend Tony Robbins: If you want to short-circuit the video game, find someone who’s been there and done that and is now doing something you want to one day do.

Every year, over 500,000 apprentices in the US follow this precise template. These apprentices are learning a craft they wish to master, under the mentorship of experts (skilled metal workers, bricklayers, medical technicians, electricians, and more) who have already achieved the desired result.

What if we more readily applied this model to young adults with aspirations of creating massive value through the vehicles of entrepreneurship and innovation?

For the established entrepreneur: How can you bring young entrepreneurs into your organization to create more value for your company, while also passing on your ethos and lessons learned to the next generation?

For the young, driven millennial: How can you find your mentor and convince him or her to take you on as an apprentice? What value can you create for this person in exchange for their guidance and investment in your professional development?

The 6 D’s of Mentorship
In my last blog on education, I shared how mobile device and internet penetration will transform adult literacy and basic education. Mobile phones and connectivity already create extraordinary value for entrepreneurs and young professionals looking to take their business acumen and skill set to the next level.

For all of human history up until the last decade or so, if you wanted to learn from the best and brightest in business, leadership, or strategy, you either had to track down a (likely dated) book they’d written at the local library or bookstore, or be lucky enough to meet that person for a live conversation.

Now you can access the mentorship of just about any thought leader on the planet, at any time, for free.

Thanks to the power of the internet, mentorship has digitized, demonetized, dematerialized, and democratized.

What do you want to learn about?

Investing? Leadership? Technology? Marketing? Project management?

You can access a near-infinite stream of cutting-edge tools, tactics, and lessons from thousands of top performers from nearly every field—instantaneously, and for free.

For example, every one of Warren Buffett’s letters to his Berkshire Hathaway investors over the past 40 years is available for free on a device that fits in your pocket.

The rise of audio—particularly podcasts and audiobooks—is another underestimated driving force away from traditional graduate business programs and toward apprenticeships.

Over 28 million podcast episodes are available for free. Once you identify the strong signals in the noise, you’re still left with thousands of hours of long-form podcast conversation from which to learn valuable lessons.

Whenever and wherever you want, you can learn from the world’s best. In the future, mentorship and apprenticeship will only become more personalized. Imagine accessing a high-fidelity, AI-powered avatar of Bill Gates, Richard Branson, or Arthur C. Clarke (one of my early mentors) to help guide you through your career.

Virtual mentorship and coaching are powerful education forces that are here to stay.

Bringing It All Together
The education system is rapidly changing. Traditional master’s programs for business are ebbing away in the tides of exponential technologies. Apprenticeship models are reemerging as an effective way to train tomorrow’s leaders.

In a future blog, I’ll revisit the concept of apprenticeships and other effective business school alternatives.

If you are a young, ambitious entrepreneur (or the parent of one), remember that you live in the most abundant time ever in human history to refine your craft.

Right now, you have access to world-class mentorship and cutting-edge best-practices—literally in the palm of your hand. What will you do with this extraordinary power?

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: fongbeerredhot / Shutterstock.com


#434534 To Extend Our Longevity, First We Must ...

Healthcare today is reactive, retrospective, bureaucratic, and expensive. It’s sick care, not healthcare.

But that is radically changing at an exponential rate.

Through this multi-part blog series on longevity, I’ll take a deep dive into aging, longevity, and healthcare technologies that are working together to dramatically extend the human lifespan, disrupting the $3 trillion healthcare system in the process.

I’ll begin the series by explaining the nine hallmarks of aging, as explained in this journal article. Next, I’ll break down the emerging technologies and initiatives working to combat these nine hallmarks. Finally, I’ll explore the transformative implications of dramatically extending the human health span.

In this blog I’ll cover:

Why the healthcare system is broken
Why, despite this, we live in the healthiest time in human history
The nine mechanisms of aging

Let’s dive in.

The System is Broken—Here’s the Data:

Doctors spend $210 billion per year on procedures that aren’t based on patient need, but on fear of liability.
Americans spend, on average, $8,915 per person on healthcare—more than any other country on Earth.
Prescription drugs cost around 50 percent more in the US than in other industrialized countries.
At current rates, by 2025, nearly 25 percent of the US GDP will be spent on healthcare.
It takes 12 years and $359 million, on average, to take a new drug from the lab to a patient.
Only 5 in 5,000 of these new drugs proceed to human testing. From there, only 1 of those 5 is actually approved for human use.
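Chaining those two filters together shows how narrow the funnel really is:

```latex
\underbrace{\frac{5}{5{,}000}}_{\text{reach human testing}} \times \underbrace{\frac{1}{5}}_{\text{win approval}}
\;=\; \frac{1}{5{,}000} \;=\; 0.02\% \text{ of new drug candidates reach patients}
```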

And Yet, We Live in the Healthiest Time in Human History
Consider these insights, which I adapted from Max Roser’s excellent database Our World in Data:

Right now, the countries with the lowest life expectancy in the world still have higher life expectancies than the countries with the highest life expectancy did in 1800.
In 1841, a 5-year-old had a life expectancy of 55 years. Today, a 5-year-old can expect to live 82 years—an increase of 27 years.
We’re seeing a dramatic increase in healthspan. In 1845, a newborn could expect to live to 40, while a 70-year-old could expect to live to 79. Now, people of all ages can expect to live to between 81 and 86 years old.
A century ago, 1 in 3 children died before the age of 5. As of 2015, the child mortality rate had fallen to just 4.3 percent.
The cancer mortality rate has declined 27 percent over the past 25 years.

Figure: Around the globe, life expectancy has doubled since the 1800s. | Image from Life Expectancy by Max Roser – Our World in Data / CC BY SA
Figure: A dramatic reduction in child mortality in 1800 vs. in 2015. | Image from Child Mortality by Max Roser – Our World in Data / CC BY SA
The 9 Mechanisms of Aging
*This section was adapted from CB INSIGHTS: The Future Of Aging.

Longevity, healthcare, and aging are intimately linked.

With better healthcare, we can better treat some of the leading causes of death, impacting how long we live.

By investigating how to treat diseases, we’ll inevitably better understand what causes these diseases in the first place, which directly correlates to why we age.

Following are the nine hallmarks of aging. I’ll share examples of health and longevity technologies addressing each of these later in this blog series.

Genomic instability: As we age, the environment and normal cellular processes cause damage to our genes. Activities like flying at high altitude, for example, expose us to increased radiation or free radicals. This damage compounds over the course of life and is known to accelerate aging.
Telomere attrition: Each of our chromosomes (the long strands of DNA in our cells) is capped at both ends by telomeres. These short snippets of DNA, repeated thousands of times, are designed to protect the bulk of the chromosome. Telomeres shorten as our DNA replicates; if a telomere reaches a critical shortness, the cell stops dividing, resulting in increased incidence of disease. (A toy simulation of this division limit appears after this list.)
Epigenetic alterations: Over time, environmental factors will change how genes are expressed, i.e., how certain sequences of DNA are read and the instruction set implemented.
Loss of proteostasis: Over time, different proteins in our body will no longer fold and function as they are supposed to, resulting in diseases ranging from cancer to neurological disorders.
Deregulated nutrient-sensing: Nutrient levels in the body can influence various metabolic pathways, which involve proteins like IGF-1, mTOR, sirtuins, and AMPK. Changes in the activity of these pathways have implications for longevity.
Mitochondrial dysfunction: Mitochondria (our cellular power plants) begin to decline in performance as we age. Decreased performance results in excess fatigue and other symptoms of chronic illnesses associated with aging.
Cellular senescence: As cells age, they stop dividing but are not cleared from the body; they build up and typically cause increased inflammation.
Stem cell exhaustion: As we age, our supply of stem cells begins to diminish as much as 100 to 10,000-fold in different tissues and organs. In addition, stem cells undergo genetic mutations, which reduce their quality and effectiveness at renovating and repairing the body.
Altered intercellular communication: The communication mechanisms that cells use are disrupted as cells age, resulting in decreased ability to transmit information between cells.
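As promised above, here is a toy simulation of the telomere-driven division limit from the second hallmark. The constants are illustrative placeholders, not biologically calibrated values:

```python
# Toy model of replicative senescence: a cell divides until its telomeres
# shorten past a critical threshold, then stops.
TELOMERE_START = 10_000   # base pairs at birth (illustrative)
LOSS_PER_DIVISION = 70    # base pairs lost per replication (illustrative)
CRITICAL_LENGTH = 5_000   # below this, the cell stops dividing

def divisions_until_senescence(telomere=TELOMERE_START):
    divisions = 0
    while telomere > CRITICAL_LENGTH:
        telomere -= LOSS_PER_DIVISION  # telomeres shorten as DNA replicates
        divisions += 1
    return divisions

print(divisions_until_senescence())  # 72 divisions with these made-up numbers
```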

Conclusion
Over the past 200 years, we have seen an abundance of healthcare technologies enable a massive lifespan boom.

Now, exponential technologies like artificial intelligence, 3D printing and sensors, as well as tremendous advancements in genomics, stem cell research, chemistry, and many other fields, are beginning to tackle the fundamental issues of why we age.

In the next blog in this series, we will dive into how genome sequencing and editing, along with new classes of drugs, are augmenting our biology to further extend our healthy lives.

What will you be able to achieve with an extra 30 to 50 healthy years (or longer) in your lifespan? Personally, I’m excited for a near-infinite lifespan to take on moonshots.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: David Carbo / Shutterstock.com


#434492 Black Mirror’s ‘Bandersnatch’ ...

When was the last time you watched a movie where you could control the plot?

Bandersnatch is the first interactive film in the sci-fi anthology series Black Mirror. Written by series creator Charlie Brooker and directed by David Slade, the film tells the story of young programmer Stefan Butler, who is adapting a fantasy choose-your-own-adventure novel called Bandersnatch into a video game. Throughout the film, viewers are given the power to influence Butler’s decisions, leading to diverging plots with different endings.

Like many Black Mirror episodes, this film is mind-bending, dark, and thought-provoking. In addition to reinventing cinema as we know it, it is a fascinating rumination on free will, parallel realities, and emerging technologies.

Pick Your Own Adventure
With a non-linear script, Bandersnatch is a viewing experience like no other. Throughout the film, viewers are given the option of making a decision for the protagonist. In these instances, they have 10 seconds to decide; if they don’t, a default choice is made for them. For example, early in the plot, Butler is given the choice of accepting or rejecting Tuckersoft’s offer to develop a video game, and the viewer gets to decide what he does. The decision then shapes the plot accordingly.
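For the curious, here is a minimal sketch of that mechanic: a branching story graph in which the viewer has 10 seconds to choose before a default branch is taken. The story nodes are invented placeholders (not the real script), and the stdin timeout via select assumes a POSIX terminal:

```python
import select
import sys

# Hypothetical story graph: node -> (prompt, choice-1 branch, choice-2 branch).
STORY = {
    "offer":  ("Accept Tuckersoft's offer to develop the game?", "accept", "refuse"),
    "accept": ("You join Tuckersoft. The game ships rushed...", None, None),
    "refuse": ("You keep working alone at home...", None, None),
}

def play(node="offer", timeout=10):
    while True:
        prompt, first, second = STORY[node]
        print(prompt)
        if first is None:
            break  # reached an ending: no further choices
        print(f"1) {first}  2) {second}  ({timeout}s to choose, default: 1)")
        ready, _, _ = select.select([sys.stdin], [], [], timeout)
        choice = sys.stdin.readline().strip() if ready else "1"
        node = second if choice == "2" else first  # timeout falls through to default

play()
```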

The video game Butler is developing involves moving through a graphical maze of corridors while avoiding a creature called the Pax, and at times making choices through an on-screen instruction (sound familiar?). In other words, it’s a pick-your-own-adventure video game in a pick-your-own-adventure movie.

Many viewers have ended up spending hours exploring all the different branches of the narrative (though the average viewing is 90 minutes). One user on Reddit has mapped out an entire flowchart, showing how all the different decisions (and pseudo-decisions) lead to various endings.

However, over time, Butler starts to question his own free will. It’s almost as if he’s beginning to realize that the audience is controlling him. In one branch of the narrative, he is confronted by this reality when the audience indicates to him that he is being controlled in a Netflix show: “I am watching you on Netflix. I make all the decisions for you.” Butler, as you can imagine, is horrified by this message.

But Butler isn’t the only one who has an illusion of choice. We, the seemingly powerful viewers, also appear to operate under the illusion of choice. Despite there being five main endings to the film, they are all more or less the same.

The Science Behind Bandersnatch
The premise of Bandersnatch isn’t based on fantasy, but on hard science. Free will has long been a widely debated issue in neuroscience, with reputable scientists and studies suggesting that the whole concept may be an illusion.

In the 1970s, a psychologist named Benjamin Libet conducted a series of experiments that studied voluntary decision making in humans. He found that the brain activity initiating an action, such as moving your wrist, preceded the conscious awareness of the action.

Psychologist Malcolm Gladwell theorizes that while we like to believe we spend a lot of time thinking about our decisions, our mental processes actually work rapidly, automatically, and often subconsciously, from relatively little information. In addition, thinking and making decisions are usually a byproduct of several different brain systems, such as the hippocampus, amygdala, and prefrontal cortex, working together. You are more conscious of some information processes in the brain than others.

As neuroscientist and philosopher Sam Harris points out in his book Free Will, “You did not pick your parents or the time and place of your birth. You didn’t choose your gender or most of your life experiences. You had no control whatsoever over your genome or the development of your brain. And now your brain is making choices on the basis of preferences and beliefs that have been hammered into it over a lifetime.” Like Butler, we may believe we are operating under full agency of our abilities, but we are at the mercy of many internal and external factors that influence our decisions.

Beyond free will, Bandersnatch also taps into the theory of parallel universes, a facet of the multiverse theory in astrophysics: the idea that there are universes other than our own in which the choices you didn’t make play out in alternate realities. For instance, if today you had the option of having cereal or eggs for breakfast, and you chose eggs, in a parallel universe, you chose cereal. Human history and our lives may have taken different paths in these parallel universes.

The Future of Cinema
In the future, the viewing experience will no longer be a passive one. Bandersnatch is just a glimpse into how technology is revolutionizing film as we know it, making it a more interactive and personalized experience. All the different scenarios and branches of the plot were scripted and filmed, but in the future they may be adapted in real time via artificial intelligence.

Virtual reality may allow us to play an even more active role by making us participants or characters in the film. Data from your history of preferences may be used to create a unique version of the plot that is optimized for your viewing experience.

Let’s also not underestimate the social purpose of advancing film and entertainment. Science fiction gives us the ability to create simulations of the future. Different narratives can allow us to explore how powerful technologies combined with human behavior can result in positive or negative scenarios. Perhaps in the future, science fiction will explore implications of technologies and observe human decision making in novel contexts, via AI-powered films in the virtual world.

Image Credit: andrey_l / Shutterstock.com



#434324 Big Brother Nation: The Case for ...

Powerful surveillance cameras have crept into public spaces. We are filmed and photographed hundreds of times a day. To further raise the stakes, the resulting video footage is fed to new forms of artificial intelligence software that can recognize faces in real time, read license plates, even instantly detect when a particular pre-defined action or activity takes place in front of a camera.

As most modern cities have quietly become surveillance cities, the law has been slow to catch up. While we wait for robust legal frameworks to emerge, the best way to protect our civil liberties right now is to fight technology with technology. All cities should place local surveillance video into a public cloud-based data trust. Here’s how it would work.

In Public Data We Trust
To democratize surveillance, every city should implement three simple rules. First, anyone who aims a camera at public space must upload that day’s haul of raw video files (and associated camera metadata) into a cloud-based repository. Second, this cloud-based repository must have open APIs and a publicly accessible log file that records search histories and tracks who has accessed which video files. And third, everyone in the city should be given the same level of access rights to the stored video data—no exceptions.

This kind of public data repository is called a “data trust.” Public data trusts are not just wishful thinking. Different types of trusts are already in successful use in Estonia and Barcelona, and have been proposed as the best way to store and manage the urban data that will be generated by Alphabet’s planned Sidewalk Labs project in Toronto.
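Here is one minimal sketch of what those three rules could look like in code. Every class and method name is hypothetical, an illustration of the proposal rather than an existing system:

```python
import datetime

class VideoDataTrust:
    """Toy model of a city data trust implementing the three rules above."""

    def __init__(self):
        self.videos = {}       # video_id -> (footage, camera_metadata)
        self.access_log = []   # rule 2: a publicly readable access history

    def upload(self, video_id, footage, camera_metadata):
        # Rule 1: raw footage must be deposited along with its camera metadata.
        self.videos[video_id] = (footage, camera_metadata)

    def fetch(self, video_id, requester):
        # Rule 3: identical access rights for everyone -- but every access
        # is logged, so the watchers can themselves be watched.
        self.access_log.append((datetime.datetime.utcnow(), requester, video_id))
        return self.videos.get(video_id)

    def public_log(self):
        # Anyone may inspect who has looked at which footage.
        return list(self.access_log)

trust = VideoDataTrust()
trust.upload("cam42-2019-03-01", b"...", {"lat": 40.71, "lon": -74.00})
trust.fetch("cam42-2019-03-01", requester="alice")
print(trust.public_log())  # [(timestamp, 'alice', 'cam42-2019-03-01')]
```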

It’s true that few people relish the thought of public video footage of themselves being looked at by strangers and friends, by ex-spouses, potential employers, divorce attorneys, and future romantic prospects. In fact, when I propose this notion in talks about smart cities, most people recoil in horror. Some turn red in the face and jeer at my naiveté. Others merely blink quietly in consternation.

The reason we should take this giant step towards extreme transparency is to combat the secrecy that surrounds surveillance. Openness is a powerful antidote to oppression. Edward Snowden summed it up well when he said, “Surveillance is not about public safety, it’s about power. It’s about control.”

Let Us Watch Those Watching Us
If public surveillance video were put back into the hands of the people, citizens could watch their government as it watches them. Right now, government cameras are controlled by the state. Camera locations are kept secret, and only the agencies that control the cameras get to see the footage they generate.

Because of these information asymmetries, civilians have no insight into the size and shape of the modern urban surveillance infrastructure that surrounds us, nor the uses (or abuses) of the video footage it spawns. For example, there is no swift and efficient mechanism to request a copy of video footage from the cameras that dot our downtown. Nor can we ask our city’s police force to show us a map that documents local traffic camera locations.

By exposing all public surveillance videos to the public gaze, cities could give regular people tools to assess the size, shape, and density of their local surveillance infrastructure and neighborhood “digital dragnet.” Using the metadata attached to video footage, citizens could geo-locate individual cameras onto a digital map to generate surveillance “heat maps.” This way people could assess whether their city’s camera density was higher in certain zip codes, or in neighborhoods populated by a dominant ethnic group.
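A sketch of that heat-map idea, assuming each camera’s metadata record carries latitude/longitude fields (the sample records are invented):

```python
from collections import Counter

# Invented sample of camera metadata pulled from a city's data trust.
cameras = [
    {"id": "cam-01", "lat": 40.712, "lon": -74.006},
    {"id": "cam-02", "lat": 40.713, "lon": -74.005},
    {"id": "cam-03", "lat": 40.758, "lon": -73.985},
]

def heat_map(cameras, cell=0.01):
    # Snap each camera to an integer grid cell (roughly 1 km square) and
    # count cameras per cell: higher counts mean denser surveillance.
    return Counter((int(c["lat"] / cell), int(c["lon"] / cell)) for c in cameras)

print(heat_map(cameras))  # e.g. Counter({(4071, -7400): 2, (4075, -7398): 1})
```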

Surveillance heat maps could be used to document which government agencies were refusing to upload their video files, or which neighborhoods were not under surveillance. Given what we already know today about the correlation between camera density, income, and social status, these “dark” camera-free regions would likely be those located near government agencies and in more affluent parts of a city.

Extreme transparency would democratize surveillance. Every city’s data trust would keep a publicly-accessible log of who’s searching for what, and whom. People could use their local data trust’s search history to check whether anyone was searching for their name, face, or license plate. As a result, clandestine spying on—and stalking of—particular individuals would become difficult to hide and simpler to prove.

Protect the Vulnerable and Exonerate the Falsely Accused
Not all surveillance video automatically works against the underdog. As the bungled (and consequently no longer secret) assassination of journalist Jamal Khashoggi demonstrated, one of the unexpected upsides of surveillance cameras has been the fact that even kings become accountable for their crimes. If opened up to the public, surveillance cameras could serve as witnesses to justice.

Video evidence has the power to protect vulnerable individuals and social groups by shedding light onto messy, unreliable (and frequently conflicting) human narratives of who did what to whom, and why. With access to a data trust, a person falsely accused of a crime could prove their innocence. By searching for their own face in video footage or downloading time/date stamped footage from a particular camera, a potential suspect could document their physical absence from the scene of a crime—no lengthy police investigation or high-priced attorney needed.

Given Enough Eyeballs, All Crimes Are Shallow
Placing public surveillance video into a public trust could make cities safer and would streamline routine police work. Eric Raymond, describing the open-source development model behind Linus Torvalds’ operating system Linux, famously observed that “given enough eyeballs, all bugs are shallow.” In the case of public cameras and a common data repository, this law could be restated as “given enough eyeballs, all crimes are shallow.”

If thousands of citizen eyeballs were given access to a city’s public surveillance videos, local police forces could crowdsource the work of solving crimes and searching for missing persons. Unfortunately, at the present time, cities are unable to wring any social benefit from video footage of public spaces. The most formidable barrier is not government-imposed secrecy, but the fact that as cameras and computers have grown cheaper, a large and fast-growing “mom and pop” surveillance state has taken over most of the filming of public spaces.

While we fear spooky government surveillance, the reality is that we’re much more likely to be filmed by security cameras owned by shopkeepers, landlords, medical offices, hotels, homeowners, and schools. These businesses, organizations, and individuals install cameras in public areas for practical reasons—to reduce their insurance costs, to prevent lawsuits, or to combat shoplifting. In the absence of regulations governing their use, private camera owners store video footage in a wide variety of locations, for varying retention periods.

The unfortunate (and unintended) result of this informal and decentralized network of public surveillance is that video files are not easy to access, even for police officers on official business. After a crime or terrorist attack occurs, local police (or attorneys armed with a subpoena) go from door to door to manually collect video evidence. Once they have the videos in hand, their next challenge is finding the right codec to crack the dozens of different file formats they encounter so they can watch and analyze the footage.

The result of these practical barriers is that as it stands today, only people with considerable legal or political clout are able to successfully gain access into a city’s privately-owned, ad-hoc collections of public surveillance videos. Not only are cities missing the opportunity to streamline routine evidence-gathering police work, they’re missing a radically transformative benefit that would become possible once video footage from thousands of different security cameras were pooled into a single repository: the ability to apply the power of citizen eyeballs to the work of improving public safety.

Why We Need Extreme Transparency
When regular people can’t access their own surveillance videos, there can be no data justice. While we wait for the law to catch up with the reality of modern urban life, citizens and city governments should use technology to address the problem that lies at the heart of surveillance: a power imbalance between those who control the cameras and those who don’t.

Cities should permit individuals and organizations to install and deploy as many public-facing cameras as they wish, but with the mandate that camera owners must place all resulting video footage into the mercilessly bright sunshine of an open data trust. This way, cloud computing, open APIs, and artificial intelligence software can help combat abuses of surveillance and give citizens insight into who’s filming us, where, and why.

Image Credit: VladFotoMag / Shutterstock.com
