Tag Archives: did

#434755 This Week’s Awesome Stories From ...

DeepMind and Google: The Battle to Control Artificial Intelligence
Hal Hodson | 1843
“Hassabis thought DeepMind would be a hybrid: it would have the drive of a startup, the brains of the greatest universities, and the deep pockets of one of the world’s most valuable companies. Every element was in place to hasten the arrival of AGI and solve the causes of human misery.”

Robot Valets Are Now Parking Cars in One of France’s Busiest Airports
James Vincent | The Verge
“Stanley Robotics say its system uses space much more efficiently than humans, fitting 50 percent more cars into the same area. This is thanks in part to the robots’ precision driving, but also because the system keeps track of when customers will return. This means the robots can park cars three or four deep, but then dig out the right vehicle ready for its owner’s return.”

Quantum Computing Should Supercharge This Machine-Learning Technique
Will Knight | MIT Technology Review
“Quantum computing and artificial intelligence are both hyped ridiculously. But it seems a combination of the two may indeed combine to open up new possibilities.”

Scientists Reawaken Cells From a 28,000-Year-Old Mammoth
Becky Ferreira | Motherboard
“Yuka the woolly mammoth died a long time ago, but scientists gave her cells a short second life in mouse egg cells.”

CRISPR Experts Are Calling for a Global Moratorium on Heritable Gene Editing
Niall Firth | MIT Technology Review
“We still don’t know what the majority of our genes do, so the risks of unintended consequences or so-called off-target effects—good or bad—are huge. …Changes in a genome might have unforeseen outcomes in future generations as well. ‘Attempting to reshape the species on the basis of our current state of knowledge would be hubris,’ the letter reads.”

Unleash the Full Potential of the Human Genome Project
Paul Glimcher | The Hill
“So how do the risks embedded in our genes become the diseases, the so-called phenotypes, we seek to cure or prevent? …It is not just nature, but also nurture, which leads to disease. This is something that we have known for centuries, but which we seem to have conveniently forgotten in our rush to embrace the technology of genetics. In 1990 the only thing we could measure comprehensively was genetics, so we did it. But why did we stop there?”

Image Credit: Fernanda Marin / Unsplash


#434753 Top Takeaways From The Economist ...

Over the past few years, the word ‘innovation’ has degenerated into something of a buzzword. In fact, according to Vijay Vaitheeswaran, US business editor at The Economist, it’s one of the most abused words in the English language.

The word is over-used precisely because we’re living in a great age of invention. But the pace at which those inventions are changing our lives is faster than ever, and that can be scary.

So what strategies do companies need to adopt to make sure technology leads to growth that’s not only profitable, but positive? How can business and government best collaborate? Can policymakers regulate the market without suppressing innovation? Which technologies will impact us most, and how soon?

At The Economist Innovation Summit in Chicago last week, entrepreneurs, thought leaders, policymakers, and academics shared their insights on the current state of exponential technologies, and the steps companies and individuals should be taking to ensure a tech-positive future. Here’s their expert take on the tech and trends shaping the future.

Blockchain
There’s been a lot of hype around blockchain; apparently it can be used for everything from distributing aid to refugees to voting. However, it’s too often conflated with cryptocurrencies like Bitcoin, and we haven’t heard of many use cases. Where does the technology currently stand?

Julie Sweet, chief executive of Accenture North America, emphasized that the technology is still in its infancy. “Everything we see today are pilots,” she said. The most promising of these pilots are taking place across three different areas: supply chain, identity, and financial services.

When you buy something from outside the US, Sweet explained, it goes through about 80 different parties. Seventy percent of the relevant data is replicated and prone to error, with paper-based documents often to blame. Blockchain is providing a secure way to eliminate paper in supply chains, upping accuracy and cutting costs in the process.

One of the most prominent use cases in the US is Walmart—the company has mandated that all suppliers in its leafy greens segment be on a blockchain, and its food safety has improved as a result.

Beth Devin, head of Citi Ventures’ innovation network, added “Blockchain is an infrastructure technology. It can be leveraged in a lot of ways. There’s so much opportunity to create new types of assets and securities that aren’t accessible to people today. But there’s a lot to figure out around governance.”

Open Source Technology
Are the days of proprietary technology numbered? More and more companies and individuals are making their source code publicly available, and its benefits are thus more widespread than ever before. But what are the limitations and challenges of open source tech, and where might it go in the near future?

Bob Lord, senior VP of cognitive applications at IBM, is a believer. “Open-sourcing technology helps innovation occur, and it’s a fundamental basis for creating great technology solutions for the world,” he said. However, the biggest challenge for open source right now is that companies are taking out more than they’re contributing back to the open-source world. Lord pointed out that IBM has a rule about how many lines of code employees take out relative to how many lines they put in.

Another challenge area is open governance; blockchain by its very nature should be transparent and decentralized, with multiple parties making decisions and being held accountable. “We have to embrace open governance at the same time that we’re contributing,” Lord said. He advocated for a hybrid-cloud environment where people can access public and private data and bring it together.

Augmented and Virtual Reality
Augmented and virtual reality aren’t just for fun and games anymore, and they’ll be even less so in the near future. According to Pearly Chen, vice president at HTC, they’ll also go from being two different things to being one and the same. “AR overlays digital information on top of the real world, and VR transports you to a different world,” she said. “In the near future we will not need to delineate between these two activities; AR and VR will come together naturally, and will change everything we do as we know it today.”

For that to happen, we’ll need a more ergonomically friendly device than we have today for interacting with this technology. “Whenever we use tech today, we’re multitasking,” said product designer and futurist Jody Medich. “When you’re using GPS, you’re trying to navigate in the real world and also manage this screen. Constant task-switching is killing our brain’s ability to think.” Augmented and virtual reality, she believes, will allow us to adapt technology to match our brain’s functionality.

This all sounds like a lot of fun for uses like gaming and entertainment, but what about practical applications? “Ultimately what we care about is how this technology will improve lives,” Chen said.

A few ways that could happen? Extended reality will be used to simulate hazardous real-life scenarios, reduce the time and resources needed to bring a product to market, train healthcare professionals (such as surgeons), or provide therapies for patients—not to mention education. “Think about the possibilities for children to learn about history, science, or math in ways they can’t today,” Chen said.

Quantum Computing
If there’s one technology that’s truly baffling, it’s quantum computing. Qubits, entanglement, quantum states—it’s hard to wrap our heads around these concepts, but they hold great promise. Where is the tech right now?

Mandy Birch, head of engineering strategy at Rigetti Computing, thinks quantum development is starting slowly but will accelerate quickly. “We’re at the innovation stage right now, trying to match this capability to useful applications,” she said. “Can we solve problems cheaper, better, and faster than classical computers can do?” She believes quantum’s first breakthrough will happen in two to five years, and that its highest potential is in applications like routing, supply chain, and risk optimization, followed by quantum chemistry (for materials science and medicine) and machine learning.

David Awschalom, director of the Chicago Quantum Exchange and senior scientist at Argonne National Laboratory, believes quantum communication and quantum sensing will become a reality in three to seven years. “We’ll use states of matter to encrypt information in ways that are completely secure,” he said. A quantum voting system, currently being prototyped, is one application.

Who should be driving quantum tech development? The panelists emphasized that no one entity will get very far alone. “Advancing quantum tech will require collaboration not only between business, academia, and government, but between nations,” said Linda Sapochak, division director of materials research at the National Science Foundation. She added that this doesn’t just go for the technology itself—setting up the infrastructure for quantum will be a big challenge as well.

Space
Space has always been the final frontier, and it still is—but it’s not quite as far-removed from our daily lives now as it was when Neil Armstrong walked on the moon in 1969.

The space industry was long funded almost entirely by governments and private defense contractors. But in 2009, SpaceX launched its first commercial satellite, and in the years since it has drastically cut the cost of spaceflight. More importantly, the company published its pricing, which brought transparency to a market that hadn’t seen it before.

Entrepreneurs around the world started putting together business plans, and there are now over 400 privately-funded space companies, many with consumer applications.

Chad Anderson, CEO of Space Angels and managing partner of Space Capital, pointed out that the technology floating around in space was, until recently, archaic. “A few NASA engineers saw they had more computing power in their phone than there was in satellites,” he said. “So they thought, ‘why don’t we just fly an iPhone?’” They did—and it worked.

Now companies have networks of satellites monitoring the whole planet, producing a huge amount of data that’s valuable for countless applications like agriculture, shipping, and observation. “A lot of people underestimate space,” Anderson said. “It’s already enabling our modern global marketplace.”

Next up in the space realm, he predicts, are mining and tourism.

Artificial Intelligence and the Future of Work
From the US to Europe to Asia, alarms are sounding about AI taking our jobs. What will be left for humans to do once machines can do everything—and do it better?

These fears may be unfounded, though, and are certainly exaggerated. It’s undeniable that AI and automation are changing the employment landscape (not to mention the way companies do business and the way we live our lives), but if we build these tools the right way, they’ll bring more good than harm, and more productivity than obsolescence.

Accenture’s Julie Sweet emphasized that AI alone is not what’s disrupting business and employment. Rather, it’s what she called the “triple A”: automation, analytics, and artificial intelligence. But even this fear-inducing trifecta of terms doesn’t spell doom, for workers or for companies. Accenture has automated 40,000 jobs—and hasn’t fired anyone in the process. Instead, they’ve trained and up-skilled people. The most important drivers to scale this, Sweet said, are a commitment by companies and government support (such as tax credits).

Imbuing AI with the best of human values will also be critical to its impact on our future. Tracy Frey, Google Cloud AI’s director of strategy, cited the company’s set of seven AI principles. “What’s important is the governance process that’s put in place to support those principles,” she said. “You can’t make macro decisions when you have technology that can be applied in many different ways.”

High Risks, High Stakes
This year, Vaitheeswaran said, 50 percent of the world’s population will have internet access (he added that he’s disappointed that percentage isn’t higher given the proliferation of smartphones). As technology becomes more widely available to people around the world and its influence grows even more, what are the biggest risks we should be monitoring and controlling?

Information integrity—being able to tell what’s real from what’s fake—is a crucial one. “We’re increasingly operating in siloed realities,” said Renee DiResta, director of research at New Knowledge and head of policy at Data for Democracy. “Inadvertent algorithmic amplification on social media elevates certain perspectives—what does that do to us as a society?”

Algorithms have also already been shown to perpetuate the biases of the people who create them—and those people are often wealthy, white, and male. Ensuring that technology doesn’t propagate unfair bias will be crucial to its ability to serve a diverse population, and to keep societies from becoming further polarized and inequitable. The polarization of experience that results from pronounced inequalities within countries, Vaitheeswaran pointed out, can end up undermining democracy.

We’ll also need to walk the line between privacy and utility very carefully. As Dan Wagner, founder of Civis Analytics, put it, “We want to ensure privacy as much as possible, but open access to information helps us achieve important social good.” Medicine in the US has been hampered by privacy laws; if, for example, we had more data about biomarkers around cancer, we could provide more accurate predictions and ultimately better healthcare.

But going the Chinese way—a total lack of privacy—is likely not the answer, either. “We have to be very careful about the way we bake rights and freedom into our technology,” said Alex Gladstein, chief strategy officer at Human Rights Foundation.

Technology’s risks are clearly as fraught as its potential is promising. As Gary Shapiro, chief executive of the Consumer Technology Association, put it, “Everything we’ve talked about today is simply a tool, and can be used for good or bad.”

The decisions we’re making now, at every level—from the engineers writing algorithms, to the legislators writing laws, to the teenagers writing clever Instagram captions—will determine where on the spectrum we end up.

Image Credit: Rudy Balasko / Shutterstock.com


#434534 To Extend Our Longevity, First We Must ...

Healthcare today is reactive, retrospective, bureaucratic, and expensive. It’s sick care, not healthcare.

But that is radically changing at an exponential rate.

Through this multi-part blog series on longevity, I’ll take a deep dive into aging, longevity, and healthcare technologies that are working together to dramatically extend the human lifespan, disrupting the $3 trillion healthcare system in the process.

I’ll begin the series with the nine hallmarks of aging, as described in this journal article. Next, I’ll break down the emerging technologies and initiatives working to combat these nine hallmarks. Finally, I’ll explore the transformative implications of dramatically extending the human health span.

In this blog I’ll cover:

Why the healthcare system is broken
Why, despite this, we live in the healthiest time in human history
The nine mechanisms of aging

Let’s dive in.

The System is Broken—Here’s the Data:

Doctors spend $210 billion per year on procedures that aren’t based on patient need, but on fear of liability.
Americans spend, on average, $8,915 per person on healthcare—more than any other country on Earth.
Prescription drugs cost around 50 percent more in the US than in other industrialized countries.
At current rates, by 2025, nearly 25 percent of the US GDP will be spent on healthcare.
It takes 12 years and $359 million, on average, to take a new drug from the lab to a patient.
Only 5 in 5,000 of these new drugs proceed to human testing. From there, only 1 of those 5 is actually approved for human use.
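To see just how long those odds are, here’s a quick back-of-the-envelope calculation (a minimal Python sketch using only the figures listed above):

```python
# Back-of-the-envelope odds for the drug-development funnel,
# using only the figures cited in the list above.
preclinical_candidates = 5_000
reach_human_testing = 5        # 5 in 5,000 candidates enter human trials
approved = 1                   # 1 of those 5 wins approval

p_human_trials = reach_human_testing / preclinical_candidates  # 0.001
p_overall = approved / preclinical_candidates                  # 0.0002

print(f"Odds of reaching human testing: {p_human_trials:.2%}")  # 0.10%
print(f"Odds of approval overall:       {p_overall:.2%}")       # 0.02%
```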

And Yet, We Live in the Healthiest Time in Human History
Consider these insights, which I adapted from Max Roser’s excellent database Our World in Data:

Right now, the countries with the lowest life expectancy in the world still have higher life expectancies than the countries with the highest life expectancy did in 1800.
In 1841, a 5-year-old had a life expectancy of 55 years. Today, a 5-year-old can expect to live 82 years—an increase of 27 years.
We’re seeing a dramatic increase in healthspan. In 1845, a newborn could expect to live to 40 years old, and a 70-year-old to 79. Now, people of all ages can expect to live to be 81 to 86 years old.
100 years ago, 1 of 3 children would die before the age of 5. As of 2015, child mortality had fallen to just 4.3 percent.
The cancer mortality rate has declined 27 percent over the past 25 years.

Figure: Around the globe, life expectancy has doubled since the 1800s. | Image from Life Expectancy by Max Roser – Our World in Data / CC BY SA
Figure: A dramatic reduction in child mortality between 1800 and 2015. | Image from Child Mortality by Max Roser – Our World in Data / CC BY SA
The 9 Mechanisms of Aging
*This section was adapted from CB INSIGHTS: The Future Of Aging.

Longevity, healthcare, and aging are intimately linked.

With better healthcare, we can better treat some of the leading causes of death, impacting how long we live.

By investigating how to treat diseases, we’ll inevitably better understand what causes these diseases in the first place, which directly correlates to why we age.

Following are the nine hallmarks of aging. I’ll share examples of health and longevity technologies addressing each of these later in this blog series.

Genomic instability: As we age, the environment and normal cellular processes cause damage to our genes. Activities like flying at high altitude, for example, expose us to increased radiation or free radicals. This damage compounds over the course of life and is known to accelerate aging.
Telomere attrition: Each of our chromosomes is capped by telomeres, short snippets of DNA repeated thousands of times that protect the bulk of the chromosome. Telomeres shorten each time our DNA replicates; once a telomere reaches a critical shortness, the cell stops dividing, resulting in increased incidence of disease. (A toy simulation of this countdown appears after this list.)
Epigenetic alterations: Over time, environmental factors will change how genes are expressed, i.e., how certain sequences of DNA are read and the instruction set implemented.
Loss of proteostasis: Over time, different proteins in our body will no longer fold and function as they are supposed to, resulting in diseases ranging from cancer to neurological disorders.
Deregulated nutrient-sensing: Nutrient levels in the body influence various metabolic pathways, including those involving proteins like IGF-1, mTOR, sirtuins, and AMPK. Changing the activity of these pathways has implications for longevity.
Mitochondrial dysfunction: Mitochondria (our cellular power plants) begin to decline in performance as we age. Decreased performance results in excess fatigue and other symptoms of chronic illnesses associated with aging.
Cellular senescence: As cells age, they stop dividing and cannot be removed from the body. They build up and typically cause increased inflammation.
Stem cell exhaustion: As we age, our supply of stem cells begins to diminish as much as 100 to 10,000-fold in different tissues and organs. In addition, stem cells undergo genetic mutations, which reduce their quality and effectiveness at renovating and repairing the body.
Altered intercellular communication: The communication mechanisms that cells use are disrupted as cells age, resulting in decreased ability to transmit information between cells.
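To make the telomere hallmark concrete, here is the toy Python simulation promised above. The starting length, critical threshold, and per-division loss are illustrative placeholders, not measured biological values:

```python
import random

# Toy model of telomere attrition: every cell division trims the telomere,
# and below a critical length the cell stops dividing (senescence).
# All numbers are illustrative placeholders, not measured values.
TELOMERE_START = 10_000   # starting length in base pairs
CRITICAL_LENGTH = 4_000   # below this, division stops

def divisions_until_senescence(loss_per_division=(50, 150)):
    """Count cell divisions before the telomere hits the critical length."""
    length, divisions = TELOMERE_START, 0
    while length > CRITICAL_LENGTH:
        length -= random.randint(*loss_per_division)  # bp lost per replication
        divisions += 1
    return divisions

print(divisions_until_senescence())  # typically a few dozen divisions
```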

Over the past 200 years, we have seen an abundance of healthcare technologies enable a massive lifespan boom.

Now, exponential technologies like artificial intelligence, 3D printing and sensors, as well as tremendous advancements in genomics, stem cell research, chemistry, and many other fields, are beginning to tackle the fundamental issues of why we age.

In the next blog in this series, we will dive into how genome sequencing and editing, along with new classes of drugs, are augmenting our biology to further extend our healthy lives.

What will you be able to achieve with an extra 30 to 50 healthy years (or longer) in your lifespan? Personally, I’m excited for a near-infinite lifespan to take on moonshots.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: David Carbo / Shutterstock.com


#434492 Black Mirror’s ‘Bandersnatch’ ...

When was the last time you watched a movie where you could control the plot?

Bandersnatch is the first interactive film in the sci-fi anthology series Black Mirror. Written by series creator Charlie Brooker and directed by David Slade, the film tells the story of young programmer Stefan Butler, who is adapting a fantasy choose-your-own-adventure novel called Bandersnatch into a video game. Throughout the film, viewers are given the power to influence Butler’s decisions, leading to diverging plots with different endings.

Like many Black Mirror episodes, this film is mind-bending, dark, and thought-provoking. In addition to innovating cinema as we know it, it is a fascinating rumination on free will, parallel realities, and emerging technologies.

Pick Your Own Adventure
With a non-linear script, Bandersnatch is a viewing experience like no other. Throughout the film, viewers are given the option of making a decision for the protagonist. In these instances, they have 10 seconds to make a decision; if they don’t, a default decision is made. For example, early in the plot, Butler is given the choice of accepting or rejecting Tuckersoft’s offer to develop a video game, and the viewer gets to decide what he does. The decision then shapes the plot accordingly.
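Mechanically, the film behaves like a directed graph of scenes: each node offers labeled choices, one of which is the default taken when the 10-second timer expires. Here’s a minimal Python sketch of that structure; the scene names and branch targets are invented for illustration, not taken from the actual script:

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

# Minimal sketch of a Bandersnatch-style branching narrative: each scene
# offers labeled choices, one of which is the default taken when the
# 10-second timer expires. Scene names and branches are invented here.

@dataclass
class Scene:
    title: str
    choices: Dict[str, str] = field(default_factory=dict)  # label -> scene id
    default: Optional[str] = None                          # used on timeout

scenes = {
    "offer": Scene("Tuckersoft offers Stefan a job",
                   {"accept": "office_ending", "refuse": "work_at_home"},
                   default="accept"),
    "office_ending": Scene("An early ending"),
    "work_at_home": Scene("Stefan builds the game alone"),
}

def advance(scene_id: str, choice: Optional[str]) -> str:
    """Return the next scene id; fall back to the default on timeout."""
    scene = scenes[scene_id]
    if not scene.choices:            # terminal scene: an ending
        return scene_id
    if choice not in scene.choices:  # no pick within 10 seconds
        choice = scene.default
    return scene.choices[choice]

print(advance("offer", None))  # timer ran out -> default branch "office_ending"
```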

The video game Butler is developing involves moving through a graphical maze of corridors while avoiding a creature called the Pax, and at times making choices through an on-screen instruction (sound familiar?). In other words, it’s a pick-your-own-adventure video game in a pick-your-own-adventure movie.

Many viewers have ended up spending hours exploring all the different branches of the narrative (though the average viewing is 90 minutes). One user on Reddit has mapped out an entire flowchart, showing how all the different decisions (and pseudo-decisions) lead to various endings.

However, over time, Butler starts to question his own free will. It’s almost as if he’s beginning to realize that the audience is controlling him. In one branch of the narrative, he is confronted by this reality when the audience indicates to him that he is being controlled in a Netflix show: “I am watching you on Netflix. I make all the decisions for you”. Butler, as you can imagine, is horrified by this message.

But Butler isn’t the only one who has an illusion of choice. We, the seemingly powerful viewers, also appear to operate under the illusion of choice. Despite there being five main endings to the film, they are all more or less the same.

The Science Behind Bandersnatch
The premise of Bandersnatch isn’t based on fantasy, but hard science. Free will has always been a widely-debated issue in neuroscience, with reputable scientists and studies demonstrating that the whole concept may be an illusion.

In the 1970s, a psychologist named Benjamin Libet conducted a series of experiments that studied voluntary decision making in humans. He found that brain activity initiating an action, such as moving your wrist, preceded the person’s conscious awareness of deciding to act.

Psychologist Malcolm Gladwell theorizes that while we like to believe we spend a lot of time thinking about our decisions, our mental processes actually work rapidly, automatically, and often subconsciously, from relatively little information. In addition, thinking and making decisions are usually the byproduct of several different brain systems, such as the hippocampus, amygdala, and prefrontal cortex, working together. You are more conscious of some of the brain’s information processes than of others.

As neuroscientist and philosopher Sam Harris points out in his book Free Will, “You did not pick your parents or the time and place of your birth. You didn’t choose your gender or most of your life experiences. You had no control whatsoever over your genome or the development of your brain. And now your brain is making choices on the basis of preferences and beliefs that have been hammered into it over a lifetime.” Like Butler, we may believe we are operating under full agency of our abilities, but we are at the mercy of many internal and external factors that influence our decisions.

Beyond free will, Bandersnatch also taps into the theory of parallel universes, a facet of the multiverse theory in physics. The idea is that there are parallel universes other than our own, in which all the choices you could have made play out in alternate realities. For instance, if today you had the option of having cereal or eggs for breakfast and you chose eggs, in a parallel universe you chose cereal. Human history and our lives may have taken different paths in these parallel universes.

The Future of Cinema
In the future, the viewing experience will no longer be a passive one. Bandersnatch is just a glimpse into how technology is revolutionizing film as we know it, making it a more interactive and personalized experience. All the different scenarios and branches of the plot were scripted and filmed in advance, but in the future they may be adapted in real time via artificial intelligence.

Virtual reality may allow us to play an even more active role by making us participants or characters in the film. Data from your history of preferences may be used to create a unique version of the plot, optimized for your viewing experience.

Let’s also not underestimate the social purpose of advancing film and entertainment. Science fiction gives us the ability to create simulations of the future. Different narratives can allow us to explore how powerful technologies combined with human behavior can result in positive or negative scenarios. Perhaps in the future, science fiction will explore implications of technologies and observe human decision making in novel contexts, via AI-powered films in the virtual world.

Image Credit: andrey_l / Shutterstock.com



#434324 Big Brother Nation: The Case for ...

Powerful surveillance cameras have crept into public spaces. We are filmed and photographed hundreds of times a day. To further raise the stakes, the resulting video footage is fed to new forms of artificial intelligence software that can recognize faces in real time, read license plates, even instantly detect when a particular pre-defined action or activity takes place in front of a camera.

As most modern cities have quietly become surveillance cities, the law has been slow to catch up. While we wait for robust legal frameworks to emerge, the best way to protect our civil liberties right now is to fight technology with technology. All cities should place local surveillance video into a public cloud-based data trust. Here’s how it would work.

In Public Data We Trust
To democratize surveillance, every city should implement three simple rules. First, anyone who aims a camera at public space must upload that day’s haul of raw video files (and associated camera metadata) into a cloud-based repository. Second, this cloud-based repository must have open APIs and a publicly-accessible log file that records search histories and tracks who has accessed which video files. And third, everyone in the city should be given the same level of access rights to the stored video data—no exceptions.
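As a thought experiment, these three rules translate almost directly into code. Below is a minimal Python sketch of such a repository, assuming a simple in-memory store; the class and method names are invented for illustration, not any real city’s API:

```python
import datetime

# A minimal sketch of the three rules as code, assuming a simple in-memory
# store. The names (VideoTrust, upload, search, who_searched_for) are
# invented for illustration; this is not any real city's API.

class VideoTrust:
    def __init__(self):
        self.videos = []       # rule 1: every camera's raw footage lands here
        self.access_log = []   # rule 2: every search is publicly logged

    def upload(self, owner: str, camera_id: str, lat: float, lon: float,
               footage: bytes) -> None:
        self.videos.append({"owner": owner, "camera_id": camera_id,
                            "lat": lat, "lon": lon, "footage": footage,
                            "uploaded": datetime.datetime.utcnow()})

    def search(self, requester: str, query: str) -> list:
        # rule 3: every requester gets the same access, and leaves a trace
        self.access_log.append({"who": requester, "query": query,
                                "when": datetime.datetime.utcnow()})
        return [v for v in self.videos if query in v["camera_id"]]

    def who_searched_for(self, term: str) -> list:
        """Anyone can check who has been querying a name, face, or plate."""
        return [e for e in self.access_log if term in e["query"]]
```

Because the log itself is public, a citizen could call who_searched_for with their own name or license plate to spot clandestine monitoring, a property this article returns to below.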

This kind of public data repository is called a “data trust.” Public data trusts are not just wishful thinking. Different types of trusts are already in successful use in Estonia and Barcelona, and have been proposed as the best way to store and manage the urban data that will be generated by Alphabet’s planned Sidewalk Labs project in Toronto.

It’s true that few people relish the thought of public video footage of themselves being looked at by strangers and friends, by ex-spouses, potential employers, divorce attorneys, and future romantic prospects. In fact, when I propose this notion when I give talks about smart cities, most people recoil in horror. Some turn red in the face and jeer at my naiveté. Others merely blink quietly in consternation.

The reason we should take this giant step towards extreme transparency is to combat the secrecy that surrounds surveillance. Openness is a powerful antidote to oppression. Edward Snowden summed it up well when he said, “Surveillance is not about public safety, it’s about power. It’s about control.”

Let Us Watch Those Watching Us
If public surveillance video were put back into the hands of the people, citizens could watch their government as it watches them. Right now, government cameras are controlled by the state. Camera locations are kept secret, and only the agencies that control the cameras get to see the footage they generate.

Because of these information asymmetries, civilians have no insight into the size and shape of the modern urban surveillance infrastructure that surrounds us, nor the uses (or abuses) of the video footage it spawns. For example, there is no swift and efficient mechanism to request a copy of video footage from the cameras that dot our downtown. Nor can we ask our city’s police force to show us a map that documents local traffic camera locations.

By exposing all public surveillance videos to the public gaze, cities could give regular people tools to assess the size, shape, and density of their local surveillance infrastructure and neighborhood “digital dragnet.” Using the metadata wrapped around video footage, citizens could geo-locate individual cameras onto a digital map to generate surveillance “heat maps.” This way people could assess whether their city’s camera density was higher in certain zip codes, or in neighborhoods populated by a dominant ethnic group.
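For example, once camera locations are public, a crude heat map is only a few lines of work. This sketch (with made-up coordinates) bins cameras into roughly kilometer-sized grid cells and counts them:

```python
from collections import Counter

# Sketch: bin camera locations from the trust's metadata into a coarse grid
# to build a surveillance "heat map." The camera list below is made up.
cameras = [
    {"camera_id": "cam-001", "lat": 41.8781, "lon": -87.6298},
    {"camera_id": "cam-002", "lat": 41.8790, "lon": -87.6310},
    {"camera_id": "cam-003", "lat": 41.9000, "lon": -87.6500},
]

def heat_map(cameras, cell=0.01):
    """Count cameras per grid cell of 0.01 degrees (roughly 1 km)."""
    grid = Counter()
    for cam in cameras:
        key = (round(cam["lat"] / cell), round(cam["lon"] / cell))
        grid[key] += 1
    return grid

for key, count in heat_map(cameras).most_common():
    print(key, count)   # densest cells first
```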

Surveillance heat maps could be used to document which government agencies were refusing to upload their video files, or which neighborhoods were not under surveillance. Given what we already know today about the correlation between camera density, income, and social status, these “dark” camera-free regions would likely be those located near government agencies and in more affluent parts of a city.

Extreme transparency would democratize surveillance. Every city’s data trust would keep a publicly-accessible log of who’s searching for what, and whom. People could use their local data trust’s search history to check whether anyone was searching for their name, face, or license plate. As a result, clandestine spying on—and stalking of—particular individuals would become difficult to hide and simpler to prove.

Protect the Vulnerable and Exonerate the Falsely Accused
Not all surveillance video automatically works against the underdog. As the bungled (and consequently no longer secret) assassination of journalist Jamal Khashoggi demonstrated, one of the unexpected upsides of surveillance cameras has been the fact that even kings become accountable for their crimes. If opened up to the public, surveillance cameras could serve as witnesses to justice.

Video evidence has the power to protect vulnerable individuals and social groups by shedding light onto messy, unreliable (and frequently conflicting) human narratives of who did what to whom, and why. With access to a data trust, a person falsely accused of a crime could prove their innocence. By searching for their own face in video footage or downloading time/date stamped footage from a particular camera, a potential suspect could document their physical absence from the scene of a crime—no lengthy police investigation or high-priced attorney needed.

Given Enough Eyeballs, All Crimes Are Shallow
Placing public surveillance video into a public trust could make cities safer and would streamline routine police work. Linus Torvalds, the creator of the open-source operating system Linux, famously observed that “given enough eyeballs, all bugs are shallow.” In the case of public cameras and a common data repository, Torvalds’ Law could be restated as “given enough eyeballs, all crimes are shallow.”

If thousands of citizen eyeballs were given access to a city’s public surveillance videos, local police forces could crowdsource the work of solving crimes and searching for missing persons. Unfortunately, at the present time, cities are unable to wring any social benefit from video footage of public spaces. The most formidable barrier is not government-imposed secrecy, but the fact that as cameras and computers have grown cheaper, a large and fast-growing “mom and pop” surveillance state has taken over most of the filming of public spaces.

While we fear spooky government surveillance, the reality is that we’re much more likely to be filmed by security cameras owned by shopkeepers, landlords, medical offices, hotels, homeowners, and schools. These businesses, organizations, and individuals install cameras in public areas for practical reasons—to reduce their insurance costs, to prevent lawsuits, or to combat shoplifting. In the absence of regulations governing their use, private camera owners store video footage in a wide variety of locations, for varying retention periods.

The unfortunate (and unintended) result of this informal and decentralized network of public surveillance is that video files are not easy to access, even for police officers on official business. After a crime or terrorist attack occurs, local police (or attorneys armed with a subpoena) go from door to door to manually collect video evidence. Once they have the videos in hand, their next challenge is finding the right codec for each of the dozens of different file formats they encounter so they can watch and analyze the footage.

The result of these practical barriers is that as it stands today, only people with considerable legal or political clout are able to successfully gain access into a city’s privately-owned, ad-hoc collections of public surveillance videos. Not only are cities missing the opportunity to streamline routine evidence-gathering police work, they’re missing a radically transformative benefit that would become possible once video footage from thousands of different security cameras were pooled into a single repository: the ability to apply the power of citizen eyeballs to the work of improving public safety.

Why We Need Extreme Transparency
When regular people can’t access their own surveillance videos, there can be no data justice. While we wait for the law to catch up with the reality of modern urban life, citizens and city governments should use technology to address the problem that lies at the heart of surveillance: a power imbalance between those who control the cameras and those who don’t.

Cities should permit individuals and organizations to install and deploy as many public-facing cameras as they wish, but with the mandate that camera owners must place all resulting video footage into the mercilessly bright sunshine of an open data trust. This way, cloud computing, open APIs, and artificial intelligence software can help combat abuses of surveillance and give citizens insight into who’s filming us, where, and why.

Image Credit: VladFotoMag / Shutterstock.com
