#434534 To Extend Our Longevity, First We Must ...
Healthcare today is reactive, retrospective, bureaucratic, and expensive. It’s sick care, not healthcare.
But that is radically changing at an exponential rate.
Through this multi-part blog series on longevity, I’ll take a deep dive into aging, longevity, and healthcare technologies that are working together to dramatically extend the human lifespan, disrupting the $3 trillion healthcare system in the process.
I’ll begin the series by explaining the nine hallmarks of aging, as explained in this journal article. Next, I’ll break down the emerging technologies and initiatives working to combat these nine hallmarks. Finally, I’ll explore the transformative implications of dramatically extending the human health span.
In this blog I’ll cover:
Why the healthcare system is broken
Why, despite this, we live in the healthiest time in human history
The nine mechanisms of aging
Let’s dive in.
The System is Broken—Here’s the Data:
Doctors order an estimated $210 billion per year in procedures based not on patient need, but on fear of liability.
Americans spend, on average, $8,915 per person on healthcare—more than any other country on Earth.
Prescription drugs cost around 50 percent more in the US than in other industrialized countries.
At current rates, by 2025, nearly 25 percent of the US GDP will be spent on healthcare.
It takes 12 years and $359 million, on average, to take a new drug from the lab to a patient.
Only 5 in 5,000 of these new drugs proceed to human testing. From there, only 1 of those 5 is actually approved for human use.
And Yet, We Live in the Healthiest Time in Human History
Consider these insights, which I adapted from Max Roser’s excellent database Our World in Data:
Right now, the countries with the lowest life expectancy in the world still have higher life expectancies than the countries with the highest life expectancy did in 1800.
In 1841, a 5-year-old had a life expectancy of 55 years. Today, a 5-year-old can expect to live 82 years—an increase of 27 years.
We’re seeing a dramatic increase in healthspan. In 1845, a newborn could expect to live to age 40, while a 70-year-old could expect to reach 79. Now, people of all ages can expect to live to be 81 to 86 years old.
100 years ago, 1 in 3 children would die before the age of 5. As of 2015, the child mortality rate had fallen to just 4.3 percent.
The cancer mortality rate has declined 27 percent over the past 25 years.
Figure: Around the globe, life expectancy has doubled since the 1800s. | Image from Life Expectancy by Max Roser – Our World in Data / CC BY SA
Figure: A dramatic reduction in child mortality in 1800 vs. in 2015. | Image from Child Mortality by Max Roser – Our World in Data / CC BY SA
The 9 Mechanisms of Aging
*This section was adapted from CB INSIGHTS: The Future Of Aging.
Longevity, healthcare, and aging are intimately linked.
With better healthcare, we can better treat some of the leading causes of death, impacting how long we live.
By investigating how to treat diseases, we’ll inevitably better understand what causes these diseases in the first place, which directly correlates to why we age.
Following are the nine hallmarks of aging. I’ll share examples of health and longevity technologies addressing each of these later in this blog series.
Genomic instability: As we age, the environment and normal cellular processes cause damage to our genes. Activities like flying at high altitude, for example, expose us to increased radiation or free radicals. This damage compounds over the course of life and is known to accelerate aging.
Telomere attrition: Each strand of DNA in the body (known as a chromosome) is capped by telomeres. These short snippets of DNA, repeated thousands of times, are designed to protect the bulk of the chromosome. Telomeres shorten as our DNA replicates; if a telomere reaches a certain critical shortness, a cell will stop dividing, resulting in increased incidence of disease. (A minimal simulation of this threshold dynamic follows the list below.)
Epigenetic alterations: Over time, environmental factors will change how genes are expressed, i.e., how certain sequences of DNA are read and the instruction set implemented.
Loss of proteostasis: Over time, different proteins in our body will no longer fold and function as they are supposed to, resulting in diseases ranging from cancer to neurological disorders.
Deregulated nutrient-sensing: Nutrient levels in the body can influence various metabolic pathways. Among the components of these pathways are proteins like IGF-1, mTOR, sirtuins, and AMPK. Changing the activity of these pathways has implications for longevity.
Mitochondrial dysfunction: Mitochondria (our cellular power plants) begin to decline in performance as we age. Decreased performance results in excess fatigue and other symptoms of chronic illnesses associated with aging.
Cellular senescence: As cells age, they stop dividing but are not cleared from the body. They build up and typically cause increased inflammation.
Stem cell exhaustion: As we age, our supply of stem cells diminishes as much as 100 to 10,000-fold in different tissues and organs. In addition, stem cells undergo genetic mutations, which reduce their quality and effectiveness at repairing and regenerating the body.
Altered intercellular communication: The communication mechanisms that cells use are disrupted as cells age, resulting in decreased ability to transmit information between cells.
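To make the telomere-attrition hallmark concrete, here is a minimal sketch of the threshold dynamic it describes. All the numbers are illustrative assumptions, not measured biology; the point is only that steady shortening plus a critical cutoff yields a finite division budget (the Hayflick limit).

```python
# Toy model of telomere attrition: every division trims the telomere,
# and below a critical length the cell goes senescent and stops dividing.
TELOMERE_START = 10_000    # starting length in base pairs (assumed)
TRIM_PER_DIVISION = 100    # base pairs lost per replication (assumed)
CRITICAL_LENGTH = 4_000    # senescence threshold (assumed)

def divisions_until_senescence(length: int = TELOMERE_START) -> int:
    count = 0
    while length > CRITICAL_LENGTH:
        length -= TRIM_PER_DIVISION  # each replication shortens the cap
        count += 1
    return count

print(divisions_until_senescence())  # -> 60 divisions with these numbers
```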
Conclusion
Over the past 200 years, we have seen an abundance of healthcare technologies enable a massive lifespan boom.
Now, exponential technologies like artificial intelligence, 3D printing and sensors, as well as tremendous advancements in genomics, stem cell research, chemistry, and many other fields, are beginning to tackle the fundamental issues of why we age.
In the next blog in this series, we will dive into how genome sequencing and editing, along with new classes of drugs, are augmenting our biology to further extend our healthy lives.
What will you be able to achieve with an extra 30 to 50 healthy years (or longer) in your lifespan? Personally, I’m excited for a near-infinite lifespan to take on moonshots.
Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level.
Image Credit: David Carbo / Shutterstock.com
#434532 How Microrobots Will Fix Our Roads and ...
Swarms of microrobots will scuttle along beneath our roads and pavements, finding and fixing leaky pipes and faulty cables. Thanks to their efforts, we could avoid the disruptive road work that costs billions of dollars each year—not to mention frustrating traffic delays.
That is, if a new project sponsored by the U.K. government is a success. Recent developments in the space seem to point towards a bright future for microrobots.
Microrobots Saving Billions
Each year, around 1.5 million road excavations take place across the U.K. Many are due to leaky pipes and faulty cables that necessitate excavation of road surfaces in order to fix them. The resulting repairs, alongside disruptions to traffic and businesses, are estimated to cost a whopping £6.3 billion ($8 billion).
A consortium of scientists, led by University of Sheffield Professor Kirill Horoshenkov, is planning to use microrobots to negate most of these costs. The group has received a £7.2 million ($9.2 million) grant to develop and build its bots.
According to Horoshenkov, the microrobots will come in two versions. One is an inspection bot, which will navigate along underground infrastructure and examine its condition via sonar. The inspectors will be complemented by worker bots capable of carrying out repairs with cement and adhesives or cleaning out blockages with a high-powered jet. The inspector bots will be around one centimeter long and possibly autonomous, while the worker bots will be slightly larger and steered via remote control.
If successful, it is believed the bots could potentially save the U.K. economy around £5 billion ($6.4 billion) a year.
The U.K. government has set aside a further £19 million ($24 million) for research into robots for hazardous environments, such as nuclear decommissioning, drones for oil pipeline monitoring, and artificial intelligence software to detect the need for repairs on satellites in orbit.
The Lowest-Hanging Fruit
Microrobots like the ones now under development in the U.K. have many potential advantages and use cases. Thanks to their small size they can navigate tight spaces, for example in search and rescue operations, and robot swarm technology would allow them to collaborate to perform many different functions, including in construction projects.
To date, the number of microrobots in use is relatively limited, but that could be about to change, with bots closing in on other types of inspection jobs, arguably the lowest-hanging fruit.
Engineering firm Rolls-Royce (not the car company, but the one that builds aircraft engines) is looking to use microrobots to inspect some of the up to 25,000 individual parts that make up an engine. The microrobots are modeled on the cockroach, and Rolls-Royce believes they could save engineers time when performing the maintenance checks that can take over a month per engine.
Even Smaller Successes
Going further down in scale, recent years have seen a string of successes for nanobots. For example, a team of researchers at the Femto-ST Institute has used nanobots to build what is likely the world’s smallest house (if this isn’t a category at Guinness, someone needs to get on the phone with them), which stands a ‘towering’ 0.015 millimeters.
One of the areas where nanobots have shown great promise is in medicine. Several studies have shown how the minute bots are capable of delivering drugs directly into dense biological tissue, which can otherwise be highly challenging to target directly. Such delivery systems have a great potential for improving the treatment of a wide range of ailments and illnesses, including cancer.
There’s no question that the ecosystem of microrobots and nanobots is evolving. While still in their early days, the above successes point to a near-future boom in the bots we may soon refer to as our ‘littlest everyday helpers.’
Image Credit: 5nikolas5 / Shutterstock.com
#434508 The Top Biotech and Medicine Advances to ...
2018 was bonkers for science.
From a woman who gave birth using a transplanted uterus, to the infamous CRISPR baby scandal, to forensics adopting consumer-based genealogy test kits to track down criminals, last year was a factory churning out scientific “whoa” stories with consequences for years to come.
With CRISPR still in the headlines, Britain ready to bid Europe au revoir, and multiple scientific endeavors taking off, 2019 is shaping up to be just as tumultuous.
Here are the science and health stories that may blow up in the new year. But first, a caveat: predicting the future is tough. Forecasting is the lovechild of statistics and (a good deal of) intuition, and entire disciplines have been dedicated to the endeavor. But January is the perfect time to gaze into the crystal ball for wisps of insight into the year to come. Last year we predicted the widespread approval of gene therapy products—for the most part, we nailed it. This year we’re hedging our bets with multiple predictions.
Gene Drives Used in the Wild
The concept of gene drives scares many, for good reason. Gene drives are a step up in severity (and consequences) from CRISPR and other gene-editing tools. Even with germline editing, in which the sperm, egg, or embryos are altered, gene editing affects just one genetic line—one family—at least at the beginning, before they reproduce with the general population.
Gene drives, on the other hand, have the power to wipe out entire species.
In a nutshell, they’re little bits of DNA code that help a gene transfer from parent to child with almost 100 percent probability. The “half of your DNA comes from dad, the other comes from mom” dogma? Gene drives smash that to bits.
In other words, the only time one would consider using a gene drive is to change the genetic makeup of an entire population. It sounds like the plot of a supervillain movie, but scientists have been toying around with the idea of deploying the technology—first in mosquitoes, then (potentially) in rodents.
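To see why that biased inheritance is so potent, here is a minimal population sketch. It is a deliberately crude carrier-frequency model with made-up parameters (population size, starting frequency, generation count), not a real population-genetics simulation; it only illustrates how a transmission probability near 1.0 lets a rare allele sweep through a population while an ordinary Mendelian allele stays rare.

```python
import random

def carrier_frequency(pop_size=1000, generations=10,
                      start_freq=0.01, transmission=0.5):
    """Track the fraction of carriers after some generations.

    transmission=0.5 models ordinary Mendelian inheritance from a
    carrier parent; transmission near 1.0 models a gene drive that
    copies itself onto the partner chromosome.
    """
    freq = start_freq
    for _ in range(generations):
        carriers = 0
        for _ in range(pop_size):
            # Two random parents; the child inherits the allele from
            # each carrier parent with probability `transmission`.
            p1_carrier = random.random() < freq
            p2_carrier = random.random() < freq
            if (p1_carrier and random.random() < transmission) or \
               (p2_carrier and random.random() < transmission):
                carriers += 1
        freq = carriers / pop_size
    return freq

print(carrier_frequency(transmission=0.5))   # stays rare
print(carrier_frequency(transmission=0.99))  # sweeps toward everyone
```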
By releasing just a handful of mutant mosquitoes that carry gene drives for infertility, for example, scientists could potentially wipe out entire populations that carry infectious scourges like malaria, dengue, or Zika. The technology is so potent—and dangerous—that the US Defense Advanced Research Projects Agency is shelling out $65 million to suss out how to deploy, control, counter, or even reverse the effects of tampering with ecology.
Last year, the U.N. gave a cautious go-ahead for the technology to be deployed in the wild on a limited basis. Now, the first release of a genetically modified mosquito is set for testing in Burkina Faso in Africa—the first-ever field experiment on the path toward gene drives.
The experiment will only release mosquitoes of the Anopheles genus, the main culprits in transmitting the disease. As a first step, over 10,000 male mosquitoes are set for release into the wild. These males are genetically sterile and carry no gene drive; they will help scientists examine how engineered mosquitoes survive and disperse, in preparation for deploying gene-drive-carrying mosquitoes.
Hot on the project’s heels, the nonprofit consortium Target Malaria, backed by the Bill and Melinda Gates Foundation, is engineering a gene drive called Mosq that will spread infertility across the population or kill off all female insects. Their attempt to hack the rules of inheritance—and save millions of lives in the process—is slated for 2024.
A Universal Flu Vaccine
People often brush off the flu as a mere annoyance, but the infection kills hundreds of thousands of people each year, based on the CDC’s statistical estimates.
The flu virus is as difficult a nemesis as HIV—it mutates at an extremely rapid rate, making it almost impossible to engineer effective vaccines in time. Scientists currently use data to forecast the strains most likely to explode into an epidemic and urge the public to vaccinate against those predictions. That’s partly why, on average, flu vaccines have a success rate of only roughly 50 percent—not much better than a coin toss.
Tired of relying on educated guesses, scientists have been chipping away at a universal flu vaccine that targets all strains—perhaps even those we haven’t yet identified. Often referred to as the “holy grail” in epidemiology, these vaccines try to alert our immune systems to parts of a flu virus that are least variable from strain to strain.
Last November, a first universal flu vaccine, developed by BiondVax, entered Phase 3 clinical trials, meaning it has already proven safe and effective in small trials and is now being tested in a broader population. The vaccine doesn’t rely on dead viruses, a common technique. Rather, it uses a small chain of amino acids—the chemical components that make up proteins—to stimulate the immune system into high alert.
With the government pouring $160 million into the research and several other universal candidates entering clinical trials, universal flu vaccines may finally experience a breakthrough this year.
In-Body Gene Editing Shows Further Promise
CRISPR and other gene-editing tools dominated the news last year, with both downers suggesting we may already be immune to the technology and hopeful reports of its readiness to treat inherited muscle-wasting diseases.
But what wasn’t widely broadcast was the in-body gene editing experiments that have been rolling out with gusto. Last September, Sangamo Therapeutics of Richmond, California, revealed that it had injected gene-editing enzymes into a patient in an effort to correct a genetic deficit that prevents him from breaking down complex sugars.
The effort is markedly different from the better-known CAR-T therapy, which extracts cells from the body for genetic engineering before returning them to the host. Rather, Sangamo’s treatment directly injects viruses carrying the gene-editing instructions into the body. So far, the procedure looks to be safe, though at the time of reporting it was too early to determine effectiveness.
This year the company hopes to finally answer whether it really worked.
If successful, it means that devastating genetic disorders could potentially be treated with just a few injections. With a gamut of new and more precise CRISPR and other gene-editing tools in the works, the list of treatable inherited diseases is likely to grow. And with the CRISPR baby scandal potentially dampening efforts at germline editing via regulations, in-body gene editing will likely receive more attention if Sangamo’s results return positive.
Neuralink and Other Brain-Machine Interfaces
Neuralink is the stuff of sci-fi: tiny particles implanted into the brain could link up your biological wetware with silicon hardware and the internet.
But that’s exactly what Elon Musk’s company, founded in 2016, seeks to develop: brain-machine interfaces that could tinker with your neural circuits in an effort to treat diseases or even enhance your abilities.
Last November, Musk broke his silence on the secretive company, suggesting that he may announce something “interesting” in a few months, that’s “better than anyone thinks is possible.”
Musk’s aspiration for achieving symbiosis with artificial intelligence isn’t the driving force for all brain-machine interfaces (BMIs). In the clinics, the main push is to rehabilitate patients—those who suffer from paralysis, memory loss, or other nerve damage.
2019 may be the year that BMIs and neuromodulators cut the cord in the clinics. These devices may finally work autonomously within a malfunctioning brain, applying electrical stimulation only when necessary to reduce side effects without requiring external monitoring. Or they could allow scientists to control brains with light without needing bulky optical fibers.
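A back-of-the-envelope sketch of what “working autonomously” could mean in software: a closed-loop controller that monitors a neural biomarker and delivers stimulation only when a threshold is crossed, instead of stimulating continuously. Every name and number here is a hypothetical illustration, not any device’s actual control law.

```python
def closed_loop_step(biomarker: float, threshold: float = 0.8) -> float:
    """One control cycle: return a stimulation amplitude in [0, 1].

    Stimulation fires only when the monitored signal (e.g., a tremor-
    or seizure-related feature, normalized to [0, 1]) exceeds the
    threshold; otherwise the device stays silent, reducing side
    effects relative to continuous, externally monitored stimulation.
    """
    if biomarker > threshold:
        # Graded response: stimulate harder the further past threshold.
        return min(1.0, (biomarker - threshold) / (1.0 - threshold))
    return 0.0

readings = [0.2, 0.5, 0.85, 0.95, 0.4]
print([round(closed_loop_step(r), 2) for r in readings])
# -> [0.0, 0.0, 0.25, 0.75, 0.0]
```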
Cutting the cord is just the first step to fine-tuning neurological treatments—or enhancements—to the tune of your own brain, and 2019 will keep on bringing the music.
Image Credit: angellodeco / Shutterstock.com
#434336 These Smart Seafaring Robots Have a ...
Drones. Self-driving cars. Flying robo taxis. If the headlines of the last few years are to be believed, terrestrial transportation will someday be filled with robotic conveyances and contraptions that require little input from a human beyond downloading an app.
But what about the other 70 percent of the planet’s surface—the part that’s made up of water?
Sure, there are underwater drones that can capture 4K video for the next BBC documentary. Remotely operated vehicles (ROVs) are capable of diving down thousands of meters to investigate ocean vents or repair industrial infrastructure.
Yet most of the robots on or below the water today still lean heavily on the human element to operate. That’s not surprising given the unstructured environment of the seas and the poor communication capabilities for anything moving below the waves. Autonomous underwater vehicles (AUVs) are probably the closest thing today to smart cars in the ocean, but they generally follow pre-programmed instructions.
A new generation of seafaring robots—leveraging artificial intelligence, machine vision, and advanced sensors, among other technologies—are beginning to plunge into the ocean depths. Here are some of the latest and most exciting ones.
The Transformer of the Sea
Nic Radford, chief technology officer of Houston Mechatronics Inc. (HMI), is hesitant about throwing around the word “autonomy” when talking about his startup’s star creation, Aquanaut. He prefers the term “shared control.”
Whatever you want to call it, Aquanaut seems like something out of the script of a Transformers movie. The underwater robot begins each mission in a submarine-like shape, capable of autonomously traveling up to 200 kilometers on battery power, depending on the assignment.
When Aquanaut reaches its destination—oil and gas is the first industry HMI hopes to disrupt—its four specially designed and built linear actuators go to work. Aquanaut then unfolds into a robot with a head, upper torso, and two manipulator arms, all while maintaining proper buoyancy to get its job done.
The lightbulb moment of how to engineer this transformation from submarine to robot came one day while Aquanaut’s engineers were watching the office’s stand-up desks bob up and down. The answer to the engineering challenge of the hull suddenly seemed obvious.
“We’re just gonna build a big, gigantic, underwater stand-up desk,” Radford told Singularity Hub.
Hardware wasn’t the only problem the team, composed of veteran NASA roboticists like Radford, had to solve. In order to ditch the expensive support vessels and large teams of humans required to operate traditional ROVs, Aquanaut would have to be able to sense its environment in great detail and relay that information back to headquarters using an underwater acoustic communications system that harkens back to the days of dial-up internet connections.
To tackle that low-bandwidth problem, HMI equipped Aquanaut with a machine vision system composed of acoustic, optical, and laser-based sensors. All of that dense data is compressed using technology designed in-house and transmitted to a single human operator, who controls Aquanaut with a few clicks of a mouse. In other words, no joystick required.
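HMI’s actual compression scheme is proprietary, but here is one hedged sketch of the general idea: before pushing a dense 3D scan through an acoustic link with dial-up-era bandwidth, snap points to a coarse voxel grid and transmit only the unique occupied cells. The voxel size and data below are invented for illustration.

```python
import math

def compress_point_cloud(points, voxel_size=0.05):
    """Reduce a dense point cloud to unique occupied voxels.

    points: iterable of (x, y, z) positions in meters. Returns a sorted
    list of integer voxel coordinates -- typically far fewer entries
    than raw points, at the cost of `voxel_size` spatial resolution.
    """
    occupied = {
        tuple(math.floor(c / voxel_size) for c in point)
        for point in points
    }
    return sorted(occupied)

# Three nearby scan points collapse into two voxels to transmit.
cloud = [(1.001, 2.003, 0.50), (1.002, 2.001, 0.50), (3.0, 1.0, 0.20)]
print(compress_point_cloud(cloud))
```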
“I don’t know of anyone trying to do this level of autonomy as it relates to interacting with the environment,” Radford said.
HMI raised $20 million earlier this year in Series B funding co-led by Transocean, one of the world’s largest offshore drilling contractors. That should be enough money to finish the Aquanaut prototype, which Radford said is about 99.8 percent complete. Some “high-profile” demonstrations are planned for early next year, with commercial deployments as early as 2020.
“What just gives us an incredible advantage here is that we have been born and bred on doing robotic systems for remote locations,” Radford noted. “This is my life, and I’ve bet the farm on it, and it takes this kind of fortitude and passion to see these things through, because these are not easy problems to solve.”
On Cruise Control
Meanwhile, a Boston-based startup is trying to solve the problem of making ships at sea autonomous. Sea Machines is backed by about $12.5 million in venture capital funding, with Toyota AI joining the list of investors in a $10 million Series A earlier this month.
Sea Machines is looking to the self-driving industry for inspiration, developing what it calls “vessel intelligence” systems that can be retrofitted on existing commercial vessels or installed on newly-built working ships.
For instance, the startup announced a deal earlier this year with Maersk, the world’s largest container shipping company, to deploy a system of artificial intelligence, computer vision, and LiDAR on the Danish company’s new ice-class container ship. The technology works similarly to the advanced driver-assistance systems found in automobiles to avoid hazards. The proof of concept will lay the foundation for a future autonomous collision avoidance system.
It’s not just startups making a splash in autonomous shipping. Radford noted that Rolls-Royce—yes, that Rolls-Royce—is leading the way in the development of autonomous ships. Its Intelligent Awareness system pulls in nearly every type of hyped technology on the market today: neural networks, augmented reality, virtual reality, and LiDAR.
In augmented reality mode, for example, a live feed video from the ship’s sensors can detect both static and moving objects, overlaying the scene with details about the types of vessels in the area, as well as their distance, heading, and other pertinent data.
While safety is a primary motivation for vessel automation—more than 1,100 ships have been lost over the past decade—these new technologies could make ships more efficient and less expensive to operate, according to a story in Wired about the Rolls-Royce Intelligent Awareness system.
Sea Hunt Meets Science
As Singularity Hub noted in a previous article, ocean robots can also play a critical role in saving the seas from environmental threats. One poster child that has emerged—or rather, invaded—is the spindly lionfish.
A venomous critter endemic to the Indo-Pacific region, the lionfish is now found up and down the east coast of North America and beyond. And it is voracious, eating up to 30 times its own stomach volume and reducing juvenile reef fish populations by nearly 90 percent in as little as five weeks, according to the Ocean Support Foundation.
That has made the colorful but deadly fish Public Enemy No. 1 for many marine conservationists. Both researchers and startups are developing autonomous robots to hunt down the invasive predator.
At the Worcester Polytechnic Institute, for example, students are building a spear-carrying robot that uses machine learning and computer vision to distinguish lionfish from other aquatic species. The students trained the algorithms on thousands of different images of lionfish. The result: a lionfish-killing machine that boasts an accuracy of greater than 95 percent.
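As a rough illustration of how such a classifier is typically built (a generic transfer-learning sketch, not the WPI team’s actual code), one can fine-tune a pretrained convolutional network on labeled lionfish / not-lionfish images:

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from an ImageNet-pretrained backbone and freeze its weights.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a two-class head: lionfish vs. other.
model.fc = nn.Linear(model.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

def train_epoch(train_loader):
    """train_loader: assumed DataLoader yielding (images, labels) batches."""
    model.train()
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```

With thousands of labeled images, as the students collected, a head-only fine-tune like this is often enough to push binary accuracy well past 90 percent.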
Meanwhile, a small startup called the American Marine Research Corporation, out of Pensacola, Florida, is applying similar technology to seek and destroy lionfish. Rather than spearfishing, the AMRC drone would stun and capture the lionfish, turning a profit by selling the creatures to local seafood restaurants.
Lionfish: It’s what’s for dinner.
Water Bots
A new wave of smart, independent robots is diving, swimming, and cruising across the ocean and its deepest depths. These autonomous systems aren’t necessarily designed to replace humans, but to venture where we can’t go or to improve safety at sea. And, perhaps, these latest innovations may inspire the robots that will someday plumb the depths of watery planets far from Earth.
Image Credit: Houston Mechatronics, Inc.
#434324 Big Brother Nation: The Case for ...
Powerful surveillance cameras have crept into public spaces. We are filmed and photographed hundreds of times a day. To further raise the stakes, the resulting video footage is fed to new forms of artificial intelligence software that can recognize faces in real time, read license plates, even instantly detect when a particular pre-defined action or activity takes place in front of a camera.
As most modern cities have quietly become surveillance cities, the law has been slow to catch up. While we wait for robust legal frameworks to emerge, the best way to protect our civil liberties right now is to fight technology with technology. All cities should place local surveillance video into a public cloud-based data trust. Here’s how it would work.
In Public Data We Trust
To democratize surveillance, every city should implement three simple rules. First, anyone who aims a camera at public space must upload that day’s haul of raw video files (and associated camera metadata) into a cloud-based repository. Second, this cloud-based repository must have open APIs and a publicly accessible log file that records search histories and tracks who has accessed which video files. And third, everyone in the city should be given the same level of access rights to the stored video data—no exceptions.
This kind of public data repository is called a “data trust.” Public data trusts are not just wishful thinking. Different types of trusts are already in successful use in Estonia and Barcelona, and have been proposed as the best way to store and manage the urban data that will be generated by Alphabet’s planned Sidewalk Labs project in Toronto.
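To make the three rules concrete, here is a hypothetical sketch of the two records they imply: an upload (raw footage plus camera metadata) and a public access-log entry written on every search. The field names are illustrative, not any real city’s schema.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

@dataclass
class VideoUpload:
    camera_id: str
    owner: str
    latitude: float
    longitude: float
    recorded_at: str        # ISO 8601 timestamp
    file_uri: str           # where the raw footage lives in the trust

@dataclass
class AccessLogEntry:
    searcher: str           # everyone gets the same access rights
    query: str              # e.g., a camera ID, plate, or time window
    accessed_files: list
    logged_at: str

def log_search(public_log, searcher, query, files):
    """Rule two: every search is recorded in a publicly readable log."""
    entry = AccessLogEntry(searcher, query, files,
                           datetime.now(timezone.utc).isoformat())
    public_log.append(asdict(entry))
    return entry

public_log = []
log_search(public_log, "resident-42", "camera:main-and-5th", ["clip-881.mp4"])
print(json.dumps(public_log, indent=2))
```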
It’s true that few people relish the thought of public video footage of themselves being looked at by strangers and friends, by ex-spouses, potential employers, divorce attorneys, and future romantic prospects. In fact, when I propose this notion in talks about smart cities, most people recoil in horror. Some turn red in the face and jeer at my naiveté. Others merely blink quietly in consternation.
The reason we should take this giant step towards extreme transparency is to combat the secrecy that surrounds surveillance. Openness is a powerful antidote to oppression. Edward Snowden summed it up well when he said, “Surveillance is not about public safety, it’s about power. It’s about control.”
Let Us Watch Those Watching Us
If public surveillance video were put back into the hands of the people, citizens could watch their government as it watches them. Right now, government cameras are controlled by the state. Camera locations are kept secret, and only the agencies that control the cameras get to see the footage they generate.
Because of these information asymmetries, civilians have no insight into the size and shape of the modern urban surveillance infrastructure that surrounds us, nor the uses (or abuses) of the video footage it spawns. For example, there is no swift and efficient mechanism to request a copy of video footage from the cameras that dot our downtown. Nor can we ask our city’s police force to show us a map that documents local traffic camera locations.
By exposing all public surveillance videos to the public gaze, cities could give regular people tools to assess the size, shape, and density of their local surveillance infrastructure and neighborhood “digital dragnet.” Using the metadata that’s wrapped around video footage, citizens could geo-locate individual cameras onto a digital map to generate surveillance “heat maps.” This way people could assess whether their city’s camera density was higher in certain zip codes, or in neighborhoods populated by a dominant ethnic group.
Surveillance heat maps could be used to document which government agencies were refusing to upload their video files, or which neighborhoods were not under surveillance. Given what we already know today about the correlation between camera density, income, and social status, these “dark” camera-free regions would likely be those located near government agencies and in more affluent parts of a city.
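The heat-map computation itself is simple once the trust exposes camera locations. A minimal sketch (coordinates and grid size invented for illustration): bucket camera positions from upload metadata into a coarse grid and count cameras per cell.

```python
from collections import Counter

def camera_density(cameras, cell_degrees=0.01):
    """cameras: iterable of (latitude, longitude) pairs from the trust.

    Returns a Counter mapping grid cells (roughly 1 km squares at this
    cell size) to camera counts -- ready to render as a heat map.
    """
    return Counter(
        (round(lat / cell_degrees), round(lon / cell_degrees))
        for lat, lon in cameras
    )

cams = [(40.7128, -74.0060), (40.7130, -74.0058), (40.7589, -73.9851)]
print(camera_density(cams).most_common(1))  # the densest cell first
```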
Extreme transparency would democratize surveillance. Every city’s data trust would keep a publicly-accessible log of who’s searching for what, and whom. People could use their local data trust’s search history to check whether anyone was searching for their name, face, or license plate. As a result, clandestine spying on—and stalking of—particular individuals would become difficult to hide and simpler to prove.
Protect the Vulnerable and Exonerate the Falsely Accused
Not all surveillance video automatically works against the underdog. As the bungled (and consequently no longer secret) assassination of journalist Jamal Khashoggi demonstrated, one of the unexpected upsides of surveillance cameras has been the fact that even kings become accountable for their crimes. If opened up to the public, surveillance cameras could serve as witnesses to justice.
Video evidence has the power to protect vulnerable individuals and social groups by shedding light onto messy, unreliable (and frequently conflicting) human narratives of who did what to whom, and why. With access to a data trust, a person falsely accused of a crime could prove their innocence. By searching for their own face in video footage or downloading time/date stamped footage from a particular camera, a potential suspect could document their physical absence from the scene of a crime—no lengthy police investigation or high-priced attorney needed.
Given Enough Eyeballs, All Crimes Are Shallow
Placing public surveillance video into a public trust could make cities safer and would streamline routine police work. Linus Torvalds, the developer of the open-source operating system Linux, famously observed that “given enough eyeballs, all bugs are shallow.” In the case of public cameras and a common data repository, Torvalds’ Law could be restated as “given enough eyeballs, all crimes are shallow.”
If thousands of citizen eyeballs were given access to a city’s public surveillance videos, local police forces could crowdsource the work of solving crimes and searching for missing persons. Unfortunately, at the present time, cities are unable to wring any social benefit from video footage of public spaces. The most formidable barrier is not government-imposed secrecy, but the fact that as cameras and computers have grown cheaper, a large and fast-growing “mom and pop” surveillance state has taken over most of the filming of public spaces.
While we fear spooky government surveillance, the reality is that we’re much more likely to be filmed by security cameras owned by shopkeepers, landlords, medical offices, hotels, homeowners, and schools. These businesses, organizations, and individuals install cameras in public areas for practical reasons—to reduce their insurance costs, to prevent lawsuits, or to combat shoplifting. In the absence of regulations governing their use, private camera owners store video footage in a wide variety of locations, for varying retention periods.
The unfortunate (and unintended) result of this informal and decentralized network of public surveillance is that video files are not easy to access, even for police officers on official business. After a crime or terrorist attack occurs, local police (or attorneys armed with a subpoena) go from door to door to manually collect video evidence. Once they have the videos in hand, their next challenge is finding the right codec to crack the dozens of different file formats they encounter so they can watch and analyze the footage.
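Assuming a common tool like ffmpeg is available, the codec headache is at least mechanically solvable: batch-transcode whatever formats investigators collect into one common container before analysis. The directory names here are hypothetical.

```python
import subprocess
from pathlib import Path

def normalize_footage(source_dir: str, out_dir: str) -> None:
    """Transcode every collected clip to H.264 MP4 via ffmpeg."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    for clip in Path(source_dir).iterdir():
        if clip.is_file():
            target = out / (clip.stem + ".mp4")
            subprocess.run(
                ["ffmpeg", "-i", str(clip), "-c:v", "libx264", str(target)],
                check=True,
            )

normalize_footage("collected_clips", "normalized_clips")
```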
The result of these practical barriers is that as it stands today, only people with considerable legal or political clout are able to successfully gain access into a city’s privately-owned, ad-hoc collections of public surveillance videos. Not only are cities missing the opportunity to streamline routine evidence-gathering police work, they’re missing a radically transformative benefit that would become possible once video footage from thousands of different security cameras were pooled into a single repository: the ability to apply the power of citizen eyeballs to the work of improving public safety.
Why We Need Extreme Transparency
When regular people can’t access their own surveillance videos, there can be no data justice. While we wait for the law to catch up with the reality of modern urban life, citizens and city governments should use technology to address the problem that lies at the heart of surveillance: a power imbalance between those who control the cameras and those who don’t.
Cities should permit individuals and organizations to install and deploy as many public-facing cameras as they wish, but with the mandate that camera owners must place all resulting video footage into the mercilessly bright sunshine of an open data trust. This way, cloud computing, open APIs, and artificial intelligence software can help combat abuses of surveillance and give citizens insight into who’s filming us, where, and why.
Image Credit: VladFotoMag / Shutterstock.com