Tag Archives: religion

#433717 Could an artificial intelligence be ...

Humans aren't the only people in society – at least according to the law. In the U.S., corporations have been given rights of free speech and religion. Some natural features also have person-like rights. But both of those required changes to the legal system. A new argument has laid a path for artificial intelligence systems to be recognized as people too – without any legislation, court rulings or other revisions to existing law.

Posted in Human Robots

#433689 The Rise of Dataism: A Threat to Freedom ...

What would happen if we made all of our data public—everything from wearables monitoring our biometrics, all the way to smartphones monitoring our location, our social media activity, and even our internet search history?

Would such insights into our lives simply provide companies and politicians with greater power to invade our privacy and manipulate us by using our psychological profiles against us?

A burgeoning new philosophy called dataism doesn’t think so.

In fact, this trending ideology believes that liberating the flow of data is the supreme value of the universe, and that it could be the key to unleashing the greatest scientific revolution in the history of humanity.

What Is Dataism?
First mentioned by David Brooks in his 2013 New York Times article “The Philosophy of Data,” dataism is an ethical system that has been most heavily explored and popularized by the renowned historian Yuval Noah Harari.

In his 2016 book Homo Deus, Harari described dataism as a new form of religion that celebrates the growing importance of big data.

Its core belief centers on the idea that the universe gives greater value and support to systems, individuals, and societies that contribute most heavily and efficiently to data processing. In an interview with Wired, Harari stated, “Humans were special and important because up until now they were the most sophisticated data processing system in the universe, but this is no longer the case.”

Now, big data and machine learning are proving themselves more sophisticated, and dataists believe we should hand over as much information and power to these algorithms as possible, allowing the free flow of data to unlock innovation and progress unlike anything we’ve ever seen before.

Pros: Progress and Personal Growth
When you let data run freely, it’s bound to be mixed and matched in new ways that inevitably spark progress. And as we enter the exponential future where every person is constantly connected and sharing their data, the potential for such collaborative epiphanies becomes even greater.

We can already see important increases in quality of life thanks to companies like Google. With Google Maps on your phone, your position is constantly updating on their servers. This information, combined with everyone else on the planet using a phone with Google Maps, allows your phone to inform you of traffic conditions. Based on the speed and location of nearby phones, Google can reroute you to less congested areas or help you avoid accidents. And since you trust that these algorithms have more data than you, you gladly hand over your power to them, following your GPS’s directions rather than your own.
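The rerouting logic described above can be sketched, in miniature, as a shortest-path search in which each road's cost is its length divided by the speed that nearby phones are currently reporting. This is an illustrative toy under stated assumptions, not Google's actual system; the graph shape, the speed table, and the default free-flow speed are all invented for the example.

```python
import heapq

def fastest_route(graph, speeds, start, goal):
    """Dijkstra's algorithm over travel times rather than distances.

    graph:  {node: [(neighbor, length_km), ...]}
    speeds: {(node, neighbor): observed_speed_kmh} from phone reports;
            congestion shows up as a low observed speed on an edge.
    Returns (path, travel_time_hours).
    """
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        t, u = heapq.heappop(pq)
        if u == goal:
            break
        if t > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, length in graph.get(u, []):
            speed = speeds.get((u, v), 50.0)  # assumed free-flow default
            nt = t + length / speed           # hours to traverse this edge
            if nt < dist.get(v, float("inf")):
                dist[v], prev[v] = nt, u
                heapq.heappush(pq, (nt, v))
    # walk predecessors back from the goal to reconstruct the route
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path)), dist[goal]
```

With a congested direct road (low reported speed), the search naturally prefers a longer but faster detour, which is the essence of the rerouting behavior described above.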

We can do the same sort of thing with our bodies.

Imagine, for instance, a world where each person has biosensors in their bloodstream—a possibility that is neither unlikely nor distant, considering that many people with diabetes already wear devices that constantly monitor their blood sugar levels. And let’s assume this data was freely shared with the world.

Now imagine a virus like Zika or bird flu breaks out. Thanks to this technology, an odd change in the biodata coming from a particular region alerts an artificial intelligence that feeds data to the CDC (Centers for Disease Control and Prevention). Recognizing that a pandemic could be possible, AIs begin 3D printing vaccines on demand, predicting the number of people who may be afflicted. When our personal AIs tell us the locations of the spreading epidemic and to take the vaccine just delivered by drone to our homes, are we likely to follow their instructions? Almost certainly—and if so, it’s likely millions, if not billions, of lives will have been saved.
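One plausible, and purely hypothetical, way a system could flag an odd change in biodata coming from a particular region is a simple statistical anomaly test: compare today's aggregate reading for a region against that region's own recent baseline, and raise a flag when the deviation is extreme. The biomarker, the z-score threshold, and the data shapes below are all assumptions for illustration, not anything from the article.

```python
from statistics import mean, stdev

def flag_outbreak(history, today, z_threshold=3.0):
    """Flag a region whose aggregate biomarker reading today deviates
    sharply from its own recent baseline.

    history: list of past daily regional means (e.g., body temperature in C)
    today:   today's regional mean
    Returns True when today's z-score exceeds the threshold.
    """
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return False  # flat baseline: no basis for a z-score
    z = (today - mu) / sigma
    return z > z_threshold
```

A real surveillance system would need to handle seasonality, reporting noise, and privacy, but the core signal—"this region suddenly looks unlike its own past"—can be this simple.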

But to quickly create such vaccines, we’ll also need to liberate research.

Currently, universities and companies seeking to benefit humankind with medical solutions have to pay extensively to organize clinical trials and to find people who match their needs. But if all our biodata were freely aggregated, perhaps they could simply tell an AI to “monitor all people living with cancer,” and thanks to the constant stream of data coming in from the world’s population, a machine learning program might detect patterns that point toward a cure.

As always in research, the more sample data you have, the higher the chance that such patterns will emerge. If data flows freely, then anyone in the world can act on a hunch and, without spending months of time and money hunting down the data, simply test their hypothesis.

Whether garage tinkerers, at-home scientists, or PhD students—an abundance of free data allows science to progress unhindered, each person able to operate without being slowed by a lack of data. And any progress they make is immediately liberated, becoming free data shared with anyone else who may find a use for it.

Any individual with a curious passion would have the entire world’s data at their fingertips, empowering every one of us to become an expert in any subject that inspires us. Expertise we can then share back into the data stream—a positive feedback loop spearheading progress for the entirety of humanity’s knowledge.

Such exponential gains represent a dataism utopia.

Unfortunately, our current incentives and economy also show us the tragic failures of this model.

As Harari has pointed out, the rise of dataism means that “humanism is now facing an existential challenge and the idea of ‘free will’ is under threat.”

Cons: Manipulation and Extortion
In 2017, The Economist declared that data was the most valuable resource on the planet—even more valuable than oil.

Perhaps this is because data is ‘priceless’: it represents understanding, and understanding represents control. And so, in the world of advertising and politics, having data on your consumers and voters gives you an incredible advantage.

This was evidenced by the Cambridge Analytica scandal, in which it’s believed that the Trump campaign and the architects of Brexit leveraged users’ Facebook data to create psychological profiles that enabled them to manipulate the masses.

How powerful are these psychological models?

A team that built a model similar to the one used by Cambridge Analytica reported that, with access to only 10 Facebook likes, it could judge someone’s personality as well as a coworker could. With 70 likes it rivaled a friend, with 150 likes it matched their parents’ understanding, and at 300 likes it could come to know someone better than their lover. With more likes still, it could come to know someone better than that person knows themselves.
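The tiers quoted above reduce to a simple threshold lookup. To be clear, this is not the personality model itself—only an illustration of the reported thresholds; the function name and labels are invented for the example.

```python
def judge_benchmark(n_likes):
    """Map a count of Facebook likes to the human judge whose accuracy the
    model reportedly matched (thresholds as quoted in the passage above;
    illustrative only, not the actual model)."""
    benchmarks = [
        (300, "lover"),
        (150, "parent"),
        (70, "friend"),
        (10, "coworker"),
    ]
    for threshold, judge in benchmarks:
        if n_likes >= threshold:
            return judge
    return "below reported thresholds"
```

The striking part is not the lookup, of course, but how few data points each tier requires—300 likes is a fraction of what many active users have shared.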

Proceeding With Caution
In a capitalist democracy, do we want businesses and politicians to know us better than we know ourselves?

In spite of the remarkable benefits that may result for our species by freely giving away our information, do we run the risk of that data being used to exploit and manipulate the masses towards a future without free will, where our daily lives are puppeteered by those who own our data?

It’s extremely possible.

And it’s for this reason that one of the most important conversations we’ll have as a species centers around data ownership: do we just give ownership of the data back to the users, allowing them to choose who to sell or freely give their data to? Or will that simply deter the entrepreneurial drive and cause all of the free services we use today, like Google Search and Facebook, to begin charging inaccessible prices? How much are we willing to pay for our freedom? And how much do we actually care?

If recent history has taught us anything, it’s that humans are willing to give up more privacy than they like to think. Fifteen years ago, it would have been crazy to suggest we’d all allow ourselves to be tracked by our cars, phones, and daily check-ins to our favorite neighborhood locations; but now most of us see it as a worthwhile trade for optimized commutes and dating. As we continue navigating that fine line between exploitation and innovation into a more technological future, what other trade-offs might we be willing to make?

Image Credit: graphicINmotion / Shutterstock.com


#432303 What If the AI Revolution Is Neither ...

Why does everyone assume that the AI revolution will either lead to a fiery apocalypse or a glorious utopia, and not something in between? Of course, part of this is down to the fact that you get more attention by saying “The end is nigh!” or “Utopia is coming!”

But part of it is down to how humans think about change, especially unprecedented change. Millenarianism doesn’t have anything to do with being a “millennial,” being born in the 90s and remembering Buffy the Vampire Slayer. It is a way of thinking about the future that involves a deeply ingrained sense of destiny. A definition might be: “Millenarianism is the expectation that the world as it is will be destroyed and replaced with a perfect world, that a redeemer will come to cast down the evil and raise up the righteous.”

Millenarian beliefs, then, intimately link together the ideas of destruction and creation. They involve the idea of a huge, apocalyptic, seismic shift that will destroy the fabric of the old world and create something entirely new. Similar belief systems exist in many of the world’s major religions, and also the unspoken religion of some atheists and agnostics, which is a belief in technology.

Look at some futurist beliefs around the technological Singularity. In Ray Kurzweil’s vision, the Singularity is the establishment of paradise. Everyone is rendered immortal by biotechnology that can cure our ills; our brains can be uploaded to the cloud; inequality and suffering wash away under the wave of these technologies. The “destruction of the world” is replaced by a Silicon Valley buzzword favorite: disruption. And, as with many millenarian beliefs, your mileage varies on whether this destruction paves the way for a new utopia—or simply ends the world.

There are good reasons to be skeptical and interrogative towards this way of thinking. The most compelling reason is probably that millenarian beliefs seem to be a default mode of how humans think about change; just look at how many variants of this belief have cropped up all over the world.

These beliefs are present in aspects of Christian theology, although they only really became mainstream in their modern form in the 19th and 20th centuries. Consider ideas like the Tribulation—many years of hardship and suffering—before the Rapture, when the righteous will be raised up and the evil punished. After this destruction, the world will be made anew, or humans will ascend to paradise.

Though dogmatically atheist, Marxism shares many of the same beliefs. It rests on a deterministic view of history that builds to a crescendo. In the same way that Rapture-believers look for signs that prophecies are being fulfilled, Marxists look for evidence that we’re in the late stages of capitalism. They believe that, inevitably, society will degrade and degenerate to a breaking point—just as some millenarian Christians do.

In Marxism, this is when the exploitation of the working class by the rich becomes unsustainable, and the working class bands together and overthrows the oppressors. The “tribulation” is replaced by a “revolution.” Sometimes revolutionary figures, like Lenin, or Marx himself, are heralded as messiahs who accelerate the onset of the Millennium; and their rhetoric involves utterly smashing the old system such that a new world can be built. Of course, there is judgment, when the righteous workers take what’s theirs and the evil bourgeoisie are destroyed.

Even Norse mythology has an element of this, as James Hughes points out in his essay in Nick Bostrom’s book Global Catastrophic Risks. Ragnarok involves men and gods being defeated in a final, apocalyptic battle—but because that was a little bleak, the myth adds that a new earth will arise where the survivors will live in harmony.

Judgement day is a cultural trope, too. Take the ancient Egyptians and their beliefs around the afterlife; the Lord of the underworld, Osiris, weighs the mortal’s heart against a feather. “Should the heart of the deceased prove to be heavy with wrongdoing, it would be eaten by a demon, and the hope of an afterlife vanished.”

Perhaps in the Singularity, something similar goes on. As our technology and hence our power improve, a final reckoning approaches: our hearts, as humans, will be weighed against a feather. If they prove too heavy with wrongdoing—with misguided stupidity, with arrogance and hubris, with evil—then we will fail the test, and we will destroy ourselves. But if we pass, and emerge from the Singularity and all of its threats and promises unscathed, then we will have paradise. And, like the other belief systems, there’s no room for non-believers; all of society is going to be radically altered, whether you want it to be or not, whether it benefits you or leaves you behind. A technological rapture.

It almost seems like every major development provokes this response. Nuclear weapons did, too. Either this would prove the final straw and we’d destroy ourselves, or the nuclear energy could be harnessed to build a better world. People talked at the dawn of the nuclear age about electricity that was “too cheap to meter.” The scientists who worked on the bomb often thought that with such destructive power in human hands, we’d be forced to cooperate and work together as a species.

When we see the same response over and over again to different circumstances, cropping up in different areas, whether it’s science, religion, or politics, we need to consider human biases. We like millenarian beliefs; and so when the idea of artificial intelligence outstripping human intelligence emerges, these beliefs spring up around it.

We don’t love facts. We don’t love information. We aren’t as rational as we’d like to think. We are creatures of narrative. Physicists observe the world and weave their observations into narrative theories, stories about little billiard balls whizzing around and hitting each other, or space and time that bend and curve and expand. Historians try to make sense of an endless stream of events. We rely on stories: stories that make sense of the past, justify the present, and prepare us for the future.

And as stories go, the millenarian narrative is a brilliant and compelling one. It can lead you towards social change, as in the case of the Communists, or the Buddhist uprisings in China. It can justify your present-day suffering, if you’re in the tribulation. It gives you hope that your life is important and has meaning. It gives you a sense that things are evolving in a specific direction, according to rules—not just randomly sprawling outwards in a chaotic way. It promises that the righteous will be saved and the wrongdoers will be punished, even if there is suffering along the way. And, ultimately, a lot of the time, the millenarian narrative promises paradise.

We need to be wary of the millenarian narrative when we’re considering technological developments and the Singularity and existential risks in general. Maybe this time is different, but we’ve cried wolf many times before. There is a more likely, less appealing story. Something along the lines of: there are many possibilities, none of them are inevitable, and lots of the outcomes are less extreme than you might think—or they might take far longer than you think to arrive. On the surface, it’s not satisfying. It’s so much easier to think of things as either signaling the end of the world or the dawn of a utopia—or possibly both at once. It’s a narrative we can get behind, a good story, and maybe, a nice dream.

But dig a little below the surface, and you’ll find that the millenarian beliefs aren’t always the most promising ones, because they remove human agency from the equation. If you think that, say, the malicious use of algorithms, or the control of superintelligent AI, are serious and urgent problems that are worth solving, you can’t be wedded to a belief system that insists utopia or dystopia are inevitable. You have to believe in the shades of grey—and in your own ability to influence where we might end up. As we move into an uncertain technological future, we need to be aware of the power—and the limitations—of dreams.

Image Credit: Photobank gallery / Shutterstock.com

We are a participant in the Amazon Services LLC Associates Program, an affiliate advertising program designed to provide a means for us to earn fees by linking to Amazon.com and affiliated sites.


#432027 We Read This 800-Page Report on the ...

The longevity field is bustling but still fragmented, and the “silver tsunami” is coming.

That is the takeaway of The Science of Longevity, the behemoth first volume of a four-part series offering a bird’s-eye view of the longevity industry in 2017. The report, a joint production of the Biogerontology Research Foundation, Deep Knowledge Life Science, Aging Analytics Agency, and Longevity.International, synthesizes the growing array of academic and industry ventures related to aging, healthspan, and everything in between.

This is huge, not only in scale but also in ambition. The report, totally worth a read here, will be followed by four additional volumes in 2018, covering topics ranging from the business side of longevity ventures to financial systems to potential tensions between life extension and religion.

And that’s just the first step. The team hopes to publish updated versions of the report annually, giving scientists, investors, and regulatory agencies an easy way to keep their finger on the longevity pulse.

“In 2018, ‘aging’ remains an unnamed adversary in an undeclared war. For all intents and purposes it is mere abstraction in the eyes of regulatory authorities worldwide,” the authors write.

That needs to change.

People often arrive at the field of aging from disparate areas with wildly diverse opinions and strengths. The report compiles these individual efforts at cracking aging into a systematic resource—a “periodic table” for longevity that clearly lays out emerging trends and promising interventions.

The ultimate goal? A global framework serving as a road map for the burgeoning industry. With such a framework in hand, academics and industry alike are finally poised to petition for the kind of large-scale investment and regulatory change needed to tackle aging with a unified front.

Infographic depicting many of the key research hubs and non-profits within the field of geroscience.
Image Credit: Longevity.International
The Aging Globe
The global population is rapidly aging. And our medical and social systems aren’t ready to handle this oncoming “silver tsunami.”

Take the medical field. Many age-related diseases such as Alzheimer’s lack effective treatment options. Others, including high blood pressure, stroke, lung or heart problems, require continuous medication and monitoring, placing enormous strain on medical resources.

What’s more, because disease risk rises exponentially with age, medical care for the elderly becomes a game of whack-a-mole: curing any individual disease such as cancer only increases healthy lifespan by two to three years before another one hits.

That’s why in recent years there’s been increasing support for turning the focus to the root of the problem: aging. Rather than tackling individual diseases, geroscience aims to add healthy years to our lifespan—extending “healthspan,” so to speak.

Despite this relative consensus, the field still faces a roadblock. The US FDA does not yet recognize aging as a bona fide disease. Without such a designation, scientists cannot run clinical trials that target aging itself (that said, many have used alternate measures such as age-related biomarkers or Alzheimer’s symptoms as a proxy).

Luckily, the FDA’s stance is set to change. The promising anti-aging drug metformin, for example, is already in clinical trials, examining its effect on a variety of age-related symptoms and diseases. This report, and others to follow, may help push progress along.

“It is critical for investors, policymakers, scientists, NGOs, and influential entities to prioritize the amelioration of the geriatric world scenario and recognize aging as a critical matter of global economic security,” the authors say.

Biomedical Gerontology
The causes of aging are complex, stubborn, and not all clear.

But the report lays out two main streams of intervention with already promising results.

The first is to understand the root causes of aging and stop them before damage accumulates. It’s like meddling with cogs and other inner workings of a clock to slow it down, the authors say.

The report lays out several treatments to keep an eye on.

Geroprotective drugs are a big one. Often repurposed from drugs already on the market, these traditional small-molecule drugs target a wide variety of metabolic pathways that play a role in aging. Think antioxidants, anti-inflammatories, and drugs that mimic caloric restriction, a proven way to extend healthspan in animal models.

More exciting are the emerging technologies. One is nanotechnology. Nanoparticles of carbon, “bucky-balls,” for example, have already been shown to fight viral infections and dangerous ion particles, as well as stimulate the immune system and extend lifespan in mice (though others question the validity of the results).

Blood is another promising, if surprising, fountain of youth: recent studies found that molecules in the blood of the young rejuvenate the heart, brain, and muscles of aged rodents, though many of these findings have yet to be replicated.

Rejuvenation Biotechnology
The second approach is repair and maintenance.

Rather than meddling with the inner clockwork, here we turn back the hands of the clock. The main example? Stem cell therapy.

This type of approach would especially benefit the brain, which harbors small, scattered numbers of stem cells that deplete with age. For neurodegenerative diseases like Alzheimer’s, in which neurons progressively die off, stem cell therapy could in theory replace those lost cells and mend those broken circuits.

Once a blue-sky idea, the field was propelled toward reality by the discovery of induced pluripotent stem cells (iPSCs), which let scientists turn skin and other mature cells back into a stem-like state. But to date, stem cells haven’t been widely adopted in clinics.

It’s “a toolkit of highly innovative, highly invasive technologies with clinical trials still a great many years off,” the authors say.

But there is a silver lining. The boom in 3D tissue printing offers an alternative approach to stem cells in replacing aging organs. Recent investment from the Methuselah Foundation and other institutions suggests interest remains high despite still being a ways from mainstream use.

A Disruptive Future
“We are finally beginning to see an industry emerge from mankind’s attempts to make sense of the biological chaos,” the authors conclude.

Looking through the trends, they identified several technologies rapidly gaining steam.

One is artificial intelligence, which is already used to bolster drug discovery. Machine learning may also help identify new longevity genes or bring personalized medicine to the clinic based on a patient’s records or biomarkers.

Another is senolytics, a class of drugs that kill off “zombie cells.” Over 10 prospective candidates are already in the pipeline, with some expected to enter the market in less than a decade, the authors say.

Finally, there’s the big gun—gene therapy. The treatment, unlike others mentioned, can directly target the root of any pathology. With a snip (or a swap), genetic tools can turn off damaging genes or switch on ones that promote a youthful profile. It is the most preventative technology at our disposal.

There have already been some success stories in animal models. Using gene therapy, rodents given a boost in telomerase activity, which lengthens the protective caps of DNA strands, live healthier for longer.

“Although it is the prospect farthest from widespread implementation, it may ultimately prove the most influential,” the authors say.

Ultimately, can we stop the silver tsunami before it strikes?

Perhaps not, the authors say. But we do have defenses: the technologies outlined in the report, though still immature, could one day stop the oncoming tidal wave in its tracks.

Now we just have to bring them out of the lab and into the real world. To push the transition along, the team launched Longevity.International, an online meeting ground that unites various stakeholders in the industry.

By providing scientists, entrepreneurs, investors, and policy-makers a platform for learning and discussion, the authors say, we may finally generate enough drive to implement our defenses against aging. The war has begun.

Read the report in full here, and watch out for others coming soon here. The second part of the report profiles 650 (!!!) longevity-focused research hubs, non-profits, scientists, conferences, and literature. It’s an enormously helpful resource—totally worth keeping it in your back pocket for future reference.

Image Credit: Worraket / Shutterstock.com


#431925 How the Science of Decision-Making Will ...

Neuroscientist Brie Linkenhoker believes that leaders must be better prepared for future strategic challenges by continually broadening their worldviews.
As the director of Worldview Stanford, Brie and her team produce multimedia content and immersive learning experiences to make academic research and insights accessible and usable by curious leaders. These future-focused topics are designed to help them understand the forces shaping the future.
Worldview Stanford has tackled such interdisciplinary topics as the power of minds, the science of decision-making, environmental risk and resilience, and trust and power in the age of big data.
We spoke with Brie about why understanding our biases is critical to making better decisions, particularly in a time of increasing change and complexity.

Lisa Kay Solomon: What is Worldview Stanford?
Brie Linkenhoker: Leaders and decision makers are trying to navigate this complex hairball of a planet that we live on and that requires keeping up on a lot of diverse topics across multiple fields of study and research. Universities like Stanford are where that new knowledge is being created, but it’s not getting out and used as readily as we would like, so that’s what we’re working on.
Worldview is designed to expand our individual and collective worldviews about important topics impacting our future. Your worldview is not a static thing, it’s constantly changing. We believe it should be informed by lots of different perspectives, different cultures, by knowledge from different domains and disciplines. This is more important now than ever.
At Worldview, we create learning experiences that are an amalgamation of all of those things.
LKS: One of your marquee programs is the Science of Decision Making. Can you tell us about that course and why it’s important?
BL: We tend to think about decision makers as being people in leadership positions, but every person who works in your organization, every member of your family, every member of the community is a decision maker. You have to decide what to buy, who to partner with, what government regulations to anticipate.
You have to think not just about your own decisions, but you have to anticipate how other people make decisions too. So, when we set out to create the Science of Decision Making, we wanted to help people improve their own decisions and be better able to predict, understand, anticipate the decisions of others.

“I think in another 10 or 15 years, we’re probably going to have really rich models of how we actually make decisions and what’s going on in the brain to support them.”

We realized that the only way to do that was to combine a lot of different perspectives, so we recruited experts from economics, psychology, neuroscience, philosophy, biology, and religion. We also brought in cutting-edge research on artificial intelligence and virtual reality and explored conversations about how technology is changing how we make decisions today and how it might support our decision-making in the future.
There’s no single set of answers. There are as many unanswered questions as there are answered questions.
LKS: One of the other things you explore in this course is the role of biases and heuristics. Can you explain the importance of both in decision-making?
BL: When I was a strategy consultant, executives would ask me, “How do I get rid of the biases in my decision-making or my organization’s decision-making?” And my response would be, “Good luck with that. It isn’t going to happen.”
As human beings we make, probably, thousands of decisions every single day. If we had to be actively thinking about each one of those decisions, we wouldn’t get out of our house in the morning, right?
We have to be able to do a lot of our decision-making essentially on autopilot to free up cognitive resources for more difficult decisions. So, we’ve evolved in the human brain a set of what we understand to be heuristics or rules of thumb.
And heuristics are great in, say, 95 percent of situations. It’s that five percent, or maybe even one percent, that they’re really not so great. That’s when we have to become aware of them because in some situations they can become biases.
For example, it doesn’t matter so much that we’re not aware of our rules of thumb when we’re driving to work or deciding what to make for dinner. But they can become absolutely critical in situations where a member of law enforcement is making an arrest or where you’re making a decision about a strategic investment or even when you’re deciding who to hire.
Let’s take hiring for a moment.
How many years is a hire going to impact your organization? You’re potentially looking at 5, 10, 15, 20 years. Having the right person in a role could change the future of your business entirely. That’s one of those areas where you really need to be aware of your own heuristics and biases—and we all have them. There’s no getting rid of them.
LKS: We seem to be at a time when the boundaries between different disciplines are starting to blend together. How has the advancement of neuroscience help us become better leaders? What do you see happening next?
BL: Heuristics and biases are very topical these days, thanks in part to Michael Lewis’s fantastic book, The Undoing Project, which is the story of the groundbreaking work that Nobel Prize winner Danny Kahneman and Amos Tversky did in the psychology and biases of human decision-making. Their work gave rise to the whole new field of behavioral economics.
In the last 10 to 15 years, neuroeconomics has really taken off. Neuroeconomics is the combination of behavioral economics with neuroscience. In behavioral economics, they use economic games and economic choices that have numbers associated with them and have real-world application.
For example, they ask, “How much would you spend to buy A versus B?” Or, “If I offered you X dollars for this thing that you have, would you take it or would you say no?” So, it’s trying to look at human decision-making in a format that’s easy to understand and quantify within a laboratory setting.
Now you bring neuroscience into that. You can have people doing those same kinds of tasks—making those kinds of semi-real-world decisions—in a brain scanner, and we can now start to understand what’s going on in the brain while people are making decisions. You can ask questions like, “Can I look at the signals in someone’s brain and predict what decision they’re going to make?” That can help us build a model of decision-making.
I think in another 10 or 15 years, we’re probably going to have really rich models of how we actually make decisions and what’s going on in the brain to support them. That’s very exciting for a neuroscientist.
Image Credit: Black Salmon / Shutterstock.com
