
#433689 The Rise of Dataism: A Threat to Freedom ...

What would happen if we made all of our data public—everything from wearables monitoring our biometrics, all the way to smartphones monitoring our location, our social media activity, and even our internet search history?

Would such insights into our lives simply provide companies and politicians with greater power to invade our privacy and manipulate us by using our psychological profiles against us?

A burgeoning new philosophy called dataism doesn’t think so.

In fact, this trending ideology believes that liberating the flow of data is the supreme value of the universe, and that it could be the key to unleashing the greatest scientific revolution in the history of humanity.

What Is Dataism?
First mentioned by David Brooks in his 2013 New York Times article “The Philosophy of Data,” dataism is an ethical system that has been most heavily explored and popularized by the renowned historian Yuval Noah Harari.

In his 2016 book Homo Deus, Harari described dataism as a new form of religion that celebrates the growing importance of big data.

Its core belief centers around the idea that the universe gives greater value and support to systems, individuals, and societies that contribute most heavily and efficiently to data processing. In an interview with Wired, Harari stated, “Humans were special and important because up until now they were the most sophisticated data processing system in the universe, but this is no longer the case.”

Now, big data and machine learning are proving themselves more sophisticated, and dataists believe we should hand over as much information and power to these algorithms as possible, allowing the free flow of data to unlock innovation and progress unlike anything we’ve ever seen before.

Pros: Progress and Personal Growth
When you let data run freely, it’s bound to be mixed and matched in new ways that inevitably spark progress. And as we enter the exponential future where every person is constantly connected and sharing their data, the potential for such collaborative epiphanies becomes even greater.

We can already see important increases in quality of life thanks to companies like Google. With Google Maps on your phone, your position is constantly updated on Google’s servers. That information, combined with data from everyone else on the planet using Google Maps, allows your phone to inform you of traffic conditions. Based on the speed and location of nearby phones, Google can reroute you to less congested areas or help you avoid accidents. And since you trust that these algorithms have more data than you do, you gladly hand over your power to them, following your GPS’s directions rather than your own.
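The rerouting logic described here can be sketched as shortest-path search over a road graph whose edge weights are inflated by live congestion estimates. This is a toy model, not Google’s actual system; the road network, travel times, and slowdown factors below are all invented:

```python
import heapq

def shortest_route(graph, start, goal):
    """Dijkstra's algorithm over travel times (minutes).

    graph: {node: [(neighbor, minutes), ...]}
    Returns (total_minutes, path), or (inf, []) if goal is unreachable.
    """
    queue = [(0.0, start, [start])]
    visited = set()
    while queue:
        elapsed, node, path = heapq.heappop(queue)
        if node == goal:
            return elapsed, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(queue, (elapsed + minutes, neighbor, path + [neighbor]))
    return float("inf"), []

def with_congestion(graph, slowdowns):
    """Scale each edge's travel time by a slowdown factor derived from
    aggregated phone reports (free-flow speed / observed speed)."""
    return {node: [(nbr, minutes * slowdowns.get((node, nbr), 1.0))
                   for nbr, minutes in edges]
            for node, edges in graph.items()}

# Invented road network: the direct road A->B is normally fastest...
roads = {"A": [("B", 10), ("C", 7)], "C": [("B", 6)], "B": []}
# ...but phone data shows traffic on A->B moving at a third of free-flow speed.
live = with_congestion(roads, {("A", "B"): 3.0})
minutes, route = shortest_route(live, "A", "B")
# The router now detours through C (7 + 6 = 13 minutes instead of 30).
```

The key point is that the routing algorithm itself is unchanged; only the edge weights move with the crowd’s data.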

We can do the same sort of thing with our bodies.

Imagine, for instance, a world where each person has biosensors in their bloodstreams—a not unlikely or distant possibility when considering diabetic people already wear insulin pumps that constantly monitor their blood sugar levels. And let’s assume this data was freely shared to the world.

Now imagine a virus like Zika or bird flu breaks out. Thanks to this technology, an odd change in biodata coming from a particular region flags an artificial intelligence that feeds data to the CDC (Centers for Disease Control and Prevention). Recognizing that a pandemic could be possible, AIs begin 3D printing vaccines on demand, predicting the number of people who may be afflicted. When our personal AIs tell us where the epidemic is spreading and to take the vaccine just delivered by drone to our homes, are we likely to follow their instructions? Almost certainly—and if so, it’s likely millions, if not billions, of lives will have been saved.
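A minimal sketch of the kind of anomaly flagging described above, assuming aggregated biosensor readings (body temperatures, say) grouped by region. The regions, values, and threshold are invented, and a real surveillance system would use far more robust statistics:

```python
from statistics import mean, stdev

def flag_outbreak_regions(readings, threshold=1.5):
    """Flag regions whose average biosensor reading sits far above the
    cross-region baseline, measured in standard deviations.

    readings: {region: [individual sensor values]}
    """
    region_means = {region: mean(values) for region, values in readings.items()}
    baseline = mean(region_means.values())
    spread = stdev(region_means.values())
    return [region for region, m in region_means.items()
            if spread > 0 and (m - baseline) / spread > threshold]

# Invented body-temperature data; region_e is running a fever.
readings = {
    "region_a": [36.5, 36.7], "region_b": [36.6, 36.8],
    "region_c": [36.4, 36.6], "region_d": [36.5, 36.7],
    "region_e": [38.4, 38.6],
}
flagged = flag_outbreak_regions(readings)  # -> ["region_e"]
```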

But to quickly create such vaccines, we’ll also need to liberate research.

Currently, universities and companies seeking to benefit humankind with medical solutions have to pay extensively to organize clinical trials and to find people who match their needs. But if all our biodata was freely aggregated, perhaps they could simply say “monitor all people living with cancer” to an AI, and thanks to the constant stream of data coming in from the world’s population, a machine learning program may easily be able to detect a pattern and create a cure.

As always in research, the more sample data you have, the higher the chance that such patterns will emerge. If data is flowing freely, then anyone in the world can suddenly decide they have a hunch they want to explore, and without having to spend months and months of time and money hunting down the data, they can simply test their hypothesis.

Whether garage tinkerers, at-home scientists, or PhD students—an abundance of free data allows for science to progress unhindered, each person able to operate without being slowed by lack of data. And any progress they make is immediately liberated, becoming free data shared with anyone else that may find a use for it.

Any individual with a curious passion would have the entire world’s data at their fingertips, empowering every one of us to become an expert in any subject that inspires us. Expertise we can then share back into the data stream—a positive feedback loop spearheading progress for the entirety of humanity’s knowledge.

Such exponential gains represent a dataism utopia.

Unfortunately, our current incentives and economy also show us the tragic failures of this model.

As Harari has pointed out, the rise of dataism means that “humanism is now facing an existential challenge and the idea of ‘free will’ is under threat.”

Cons: Manipulation and Extortion
In 2017, The Economist declared that data was the most valuable resource on the planet—even more valuable than oil.

Perhaps this is because data is ‘priceless’: it represents understanding, and understanding represents control. And so, in the world of advertising and politics, having data on your consumers and voters gives you an incredible advantage.

This was evidenced by the Cambridge Analytica scandal, in which it’s believed that Donald Trump and the architects of Brexit leveraged users’ Facebook data to create psychological profiles that enabled them to manipulate the masses.

How powerful are these psychological models?

A team that built a model similar to the one used by Cambridge Analytica said that with access to only 10 Facebook likes, their model could understand someone as well as a coworker could. With 70 likes it knew them as well as a friend might, with 150 likes as well as their parents, and with 300 likes better than their lovers. With more likes still, it could come to know someone better than that person knows themselves.
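Models of this kind are typically straightforward classifiers over like vectors. Below is a toy sketch using plain logistic regression, assuming a binary “extrovert” trait and a handful of hypothetical pages; it is not the actual Cambridge Analytica model, and every page and label is invented:

```python
import math

def train_logistic(X, y, epochs=500, lr=0.5):
    """Logistic regression trained by stochastic gradient descent.

    X: rows of binary like-vectors (1 = user liked that page).
    y: binary trait labels. Returns learned weights and bias.
    """
    weights, bias = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for row, label in zip(X, y):
            z = sum(w * x for w, x in zip(weights, row)) + bias
            error = 1.0 / (1.0 + math.exp(-z)) - label
            weights = [w - lr * error * x for w, x in zip(weights, row)]
            bias -= lr * error
    return weights, bias

def predict(weights, bias, row):
    z = sum(w * x for w, x in zip(weights, row)) + bias
    return 1 if 1.0 / (1.0 + math.exp(-z)) >= 0.5 else 0

# Columns = hypothetical pages; rows = users; y = an invented "extrovert" label.
X = [[1, 0, 1], [1, 1, 1], [0, 0, 1], [0, 1, 0], [0, 0, 0], [1, 1, 0]]
y = [1, 1, 0, 0, 0, 1]
weights, bias = train_logistic(X, y)
predictions = [predict(weights, bias, row) for row in X]  # matches y exactly
```

The unnerving part is not the math, which is decades old, but the scale of the training data a platform like Facebook can supply.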

Proceeding With Caution
In a capitalist democracy, do we want businesses and politicians to know us better than we know ourselves?

In spite of the remarkable benefits that may result for our species by freely giving away our information, do we run the risk of that data being used to exploit and manipulate the masses towards a future without free will, where our daily lives are puppeteered by those who own our data?

It’s extremely possible.

And it’s for this reason that one of the most important conversations we’ll have as a species centers around data ownership: do we just give ownership of the data back to the users, allowing them to choose who to sell or freely give their data to? Or will that simply deter the entrepreneurial drive and cause all of the free services we use today, like Google Search and Facebook, to begin charging inaccessible prices? How much are we willing to pay for our freedom? And how much do we actually care?

If recent history has taught us anything, it’s that humans are willing to give up more privacy than they like to think. Fifteen years ago, it would have been crazy to suggest we’d all allow ourselves to be tracked by our cars, phones, and daily check-ins to our favorite neighborhood locations; but now most of us see it as a worthwhile trade for optimized commutes and dating. As we continue navigating that fine line between exploitation and innovation into a more technological future, what other trade-offs might we be willing to make?

Image Credit: graphicINmotion / Shutterstock.com

Posted in Human Robots

#433506 MIT’s New Robot Taught Itself to Pick ...

Back in 2016, somewhere in a Google-owned warehouse, more than a dozen robotic arms spent hours quietly grasping objects of various shapes and sizes, teaching themselves how to pick up and hold the items appropriately—mimicking the way a baby gradually learns to use its hands.

Now, scientists from MIT have made a new breakthrough in machine learning: their new system can not only teach itself to see and identify objects, but also understand how best to manipulate them.

This means that, armed with the new machine learning routine referred to as “dense object nets (DON),” the robot would be capable of picking up an object that it’s never seen before, or in an unfamiliar orientation, without resorting to trial and error—exactly as a human would.

The deceptively simple ability to dexterously manipulate objects with our hands is a huge part of why humans are the dominant species on the planet. We take it for granted. Hardware innovations like the Shadow Dexterous Hand have enabled robots to softly grip and manipulate delicate objects for many years, but the software required to control these precision-engineered machines in a range of circumstances has proved harder to develop.

This was not for want of trying. The Amazon Robotics Challenge offers millions of dollars in prizes (and potentially far more in contracts, as their $775m acquisition of Kiva Systems shows) for the best dexterous robot able to pick and package items in their warehouses. The lucrative dream of a fully-automated delivery system is missing this crucial ability.

Meanwhile, the RoboCup@Home challenge—an offshoot of the popular RoboCup tournament for soccer-playing robots—aims to make everyone’s dream of having a robot butler a reality. The competition involves teams drilling their robots through simple household tasks that require social interaction or object manipulation, like helping to carry the shopping, sorting items onto a shelf, or guiding tourists around a museum.

Yet all of these endeavors have proved difficult; the tasks often have to be simplified to enable the robot to complete them at all. New or unexpected elements, such as those encountered in real life, more often than not throw the system entirely. Programming the robot’s every move in explicit detail is not a scalable solution: this can work in the highly-controlled world of the assembly line, but not in everyday life.

Computer vision is improving all the time. Neural networks, including those you train every time you prove that you’re not a robot with CAPTCHA, are getting better at sorting objects into categories, and identifying them based on sparse or incomplete data, such as when they are occluded, or in different lighting.

But many of these systems require enormous amounts of input data, which is impractical, slow to generate, and often needs to be laboriously categorized by humans. There are entirely new jobs that require people to label, categorize, and sift large bodies of data ready for supervised machine learning. This can make machine learning undemocratic. If you’re Google, you can make thousands of unwitting volunteers label your images for you with CAPTCHA. If you’re IBM, you can hire people to manually label that data. If you’re an individual or startup trying something new, however, you will struggle to access the vast troves of labeled data available to the bigger players.

This is why new systems that can potentially train themselves over time, or that allow robots to deal with situations they’ve never seen before without mountains of labeled data, are a holy grail in artificial intelligence. The work done by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is part of a new wave of “self-supervised” machine learning systems—little of the data used was labeled by humans.

The robot first inspects the new object from multiple angles, building up a 3D picture of the object with its own coordinate system. This then allows the robotic arm to identify a particular feature on the object—such as a handle, or the tongue of a shoe—from various different angles, based on its relative distance to other grid points.

This is the real innovation: the new means of representing objects to grasp as mapped-out 3D objects, with grid points and subsections of their own. Rather than using a computer vision algorithm to identify a door handle, and then activating a door handle grasping subroutine, the DON system treats all objects by making these spatial maps before classifying or manipulating them, enabling it to deal with a greater range of objects than in other approaches.
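As a toy illustration of this idea (not the actual DON code, which learns per-pixel descriptors with a deep network), matching a feature across viewpoints reduces to a nearest-neighbor search in descriptor space. All pixel coordinates and descriptor values below are made up:

```python
def nearest_descriptor(query, descriptors):
    """Return the pixel whose descriptor is closest (by squared Euclidean
    distance) to the query descriptor."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(descriptors, key=lambda pixel: dist2(descriptors[pixel], query))

# Invented 3-dimensional descriptors for two views of the same mug.
# The same physical point maps to nearly the same descriptor in each view.
view_a = {(40, 55): [0.90, 0.10, 0.30],   # handle
          (10, 20): [0.20, 0.80, 0.70]}   # rim
view_b = {(120, 64): [0.88, 0.12, 0.31],  # handle, seen from a new angle
          (70, 30):  [0.21, 0.79, 0.69]}  # rim

# Pick out the handle in the new view: no retraining, just a lookup.
handle_in_b = nearest_descriptor(view_a[(40, 55)], view_b)  # -> (120, 64)
```

Once a feature like “the handle” is findable this way from any angle, a grasping routine can target it directly.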

“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said PhD student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow student Pete Florence, alongside MIT professor Russ Tedrake. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”

Class-specific descriptors, which can be applied to the object features, can allow the robot arm to identify a mug, find the handle, and pick the mug up appropriately. Object-specific descriptors allow the robot arm to select a particular mug from a group of similar items. I’m already dreaming of a robot butler reliably picking my favourite mug when it serves me coffee in the morning.

Google’s robot arm-y was an attempt to develop a general grasping algorithm: one that could identify, categorize, and appropriately grip as many items as possible. This requires a great deal of training time and data, which is why Google parallelized their project by having 14 robot arms feed data into a single neural network brain: even then, the algorithm may fail with highly specific tasks. Specialist grasping algorithms might require less training if they’re limited to specific objects, but then your software is useless for general tasks.

As the roboticists noted, their system, with its ability to identify parts of an object rather than just a single object, is better suited to specific tasks, such as “grasp the racquet by the handle,” than Amazon Robotics Challenge robots, which identify whole objects by segmenting an image.

This work is small-scale at present. It has been tested with a few classes of objects, including shoes, hats, and mugs. Yet the use of these dense object nets as a way for robots to represent and manipulate new objects may well be another step towards the ultimate goal of generalized automation: a robot capable of performing every task a person can. If that point is reached, the question that will remain is how to cope with being obsolete.

Image Credit: Tom Buehler/CSAIL


#432893 These 4 Tech Trends Are Driving Us ...

From a first-principles perspective, the task of feeding eight billion people boils down to converting energy from the sun into chemical energy in our bodies.

Traditionally, solar energy is converted by photosynthesis into carbohydrates in plants (i.e., biomass), which are either eaten by the vegans amongst us, or fed to animals, for those with a carnivorous preference.

Today, the process of feeding humanity is extremely inefficient.

If we could radically reinvent what we eat, and how we create that food, what might you imagine that “future of food” would look like?

In this post we’ll cover:

Vertical farms
CRISPR engineered foods
The alt-protein revolution
Farmer 3.0

Let’s dive in.

Vertical Farming
Where we grow our food…

The average American meal travels over 1,500 miles from farm to table. Wine from France, beef from Texas, potatoes from Idaho.

Imagine instead growing all of your food in a 50-story tall vertical farm in downtown LA or off-shore on the Great Lakes where the travel distance is no longer 1,500 miles but 50 miles.

Delocalized farming will minimize travel costs at the same time that it maximizes freshness.

Perhaps more importantly, vertical farming also allows tomorrow’s farmer the ability to control the exact conditions of her plants year round.

Rather than allowing the vagaries of the weather and soil conditions to dictate crop quality and yield, we can now perfectly control the growing cycle.

LED lighting provides the crops with the maximum amount of light, at the perfect frequency, 24 hours a day, 7 days a week.

At the same time, sensors and robots provide the root system the exact pH and micronutrients required, while fine-tuning the temperature of the farm.
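The sensor-and-actuator loop just described is, at its core, closed-loop control. A minimal sketch, assuming a simple proportional controller and a plant where each correction acts directly on the controlled variable; real hydroponic systems would use full PID control and model the actuator dynamics:

```python
def p_step(setpoint, reading, gain=0.5):
    """One proportional-control step: correction proportional to the error."""
    return gain * (setpoint - reading)

def run_loop(setpoint, value, steps=20, gain=0.5):
    """Simulate the loop, assuming each correction applies directly to the
    controlled variable (e.g., dosing acid to lower nutrient-solution pH)."""
    history = [value]
    for _ in range(steps):
        value += p_step(setpoint, value, gain)
        history.append(value)
    return history

# Bring an invented pH reading of 7.5 down toward a 6.0 setpoint.
history = run_loop(setpoint=6.0, value=7.5)
# With gain=0.5 the error halves every tick, converging on 6.0.
```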

Such precision farming can generate yields that are 200% to 400% above normal.

Next let’s explore how we can precision-engineer the genetic properties of the plant itself.

CRISPR and Genetically Engineered Foods
What food do we grow?

A fundamental shift is occurring in our relationship with agriculture. We are going from evolution by natural selection (Darwinism) to evolution by human direction.

CRISPR (the cutting edge gene editing tool) is providing a pathway for plant breeding that is more predictable, faster and less expensive than traditional breeding methods.

Rather than our crops being subject to nature’s random, environmental whim, CRISPR unlocks our capability to modify our crops to match the available environment.

Further, using CRISPR we will be able to optimize the nutrient density of our crops, enhancing their value and volume.

CRISPR may also hold the key to eliminating common allergens from crops. As we identify the allergen gene in peanuts, for instance, we can use CRISPR to silence that gene, making the crops we raise safer for and more accessible to a rapidly growing population.

Yet another application is our ability to make plants resistant to infection or more resistant to drought or cold.

Helping to accelerate the impact of CRISPR, the USDA recently announced that genetically engineered crops will not be regulated—providing an opening for entrepreneurs to capitalize on the opportunities for optimization CRISPR enables.

CRISPR applications in agriculture are an opportunity to help a billion people and become a billionaire in the process.

Protecting crops against volatile environments, combating crop diseases and increasing nutrient values, CRISPR is a promising tool to help feed the world’s rising population.

The Alt-Protein/Lab-Grown Meat Revolution
Something like a third of the Earth’s arable land is used for raising livestock—a massive amount of land—and global demand for meat is predicted to double in the coming decade.

Today, we must grow an entire cow—all bones, skin, and internals included—to produce a steak.

Imagine if we could instead start with a single muscle stem cell and only grow the steak, without needing the rest of the cow? Think of it as cellular agriculture.

Imagine returning millions, perhaps billions, of acres of grazing land back to the wilderness? This is the promise of lab-grown meats.

Lab-grown meat can also be engineered (using technology like CRISPR) to be packed with nutrients and be the healthiest, most delicious protein possible.

We’re watching this technology develop in real time. Several startups across the globe are already working to bring artificial meats to the food industry.

JUST, Inc. (previously Hampton Creek), run by my friend Josh Tetrick, has been on a mission to build a food system where everyone can access and afford delicious, nutritious food. They started by exploring 300,000+ species of plants around the world to see how they could make food better, and are now investing heavily in stem-cell-grown meats.

Backed by Richard Branson and Bill Gates, Memphis Meats is working on ways to produce real meat from animal cells, rather than whole animals. So far, they have produced beef, chicken, and duck using cultured cells from living animals.

As with vertical farming, transitioning production of our majority protein source to a carefully cultivated environment allows for agriculture to optimize inputs (water, soil, energy, land footprint), nutrients and, importantly, taste.

Farmer 3.0
Vertical farming and cellular agriculture are reinventing how we think about our food supply chain and what food we produce.

The next question to answer is who will be producing the food?

Let’s look back at how farming evolved through history.

Farmers 0.0 (Neolithic Revolution, around 9000 BCE): The transition from hunting and gathering to agriculture gained momentum, and humans developed the ability to domesticate plants for food production.

Farmers 1.0 (until around the 19th century): Farmers spent all day in the field performing backbreaking labor, and agriculture accounted for most jobs.

Farmers 2.0 (19th century through the mid-20th-century Green Revolution): From the invention of the first farm tractor in 1812 through today, transformative mechanical and biochemical technologies (tractors, then synthetic fertilizers) boosted yields and made the job of farming easier, driving the share of US jobs in farming down to less than two percent today.

Farmers 3.0: In the near future, farmers will leverage exponential technologies (e.g., AI, networks, sensors, robotics, drones), CRISPR and genetic engineering, and new business models to solve the world’s greatest food challenges and efficiently feed the eight-billion-plus people on Earth.

An important driver of the Farmer 3.0 evolution is the delocalization of agriculture driven by vertical and urban farms. Vertical farms and urban agriculture are empowering a new breed of agriculture entrepreneurs.

Let’s take a look at an innovative incubator in Brooklyn, New York called Square Roots.

Ten shipping-container farms in a Brooklyn parking lot make up the first Square Roots campus. Each 8-foot by 8.5-foot by 20-foot shipping container can grow the equivalent of 2 acres of produce and can yield more than 50 pounds of produce each week.

For 13 months, one cohort of next-generation food entrepreneurs takes part in a curriculum with foundations in farming, business, community and leadership.

The urban farming incubator raised a $5.4 million seed funding round in August 2017.

Training a new breed of entrepreneurs to apply exponential technology to growing food is essential to the future of farming.

One of our massive transformative purposes at the Abundance Group is to empower entrepreneurs to generate extraordinary wealth while creating a world of abundance. Vertical farms and cellular agriculture are key elements enabling the next generation of food and agriculture entrepreneurs.

Conclusion
Technology is driving food abundance.

We’re already seeing food become demonetized, as the graph below shows.

From 1960 to 2014, the share of income spent on food in the U.S. fell from 19 percent to under 10 percent of total disposable income—a dramatic decrease from the roughly 40 percent of household income spent on food in 1900.

The dropping percent of per-capita disposable income spent on food. Source: USDA, Economic Research Service, Food Expenditure Series
Ultimately, technology has enabled a massive variety of food at a significantly reduced cost and with fewer resources used for production.

We’re increasingly going to optimize and fortify the food supply chain to achieve more reliable, predictable, and nutritious ways to obtain basic sustenance.

And that means a world with abundant, nutritious, and inexpensive food for every man, woman, and child.

What an extraordinary time to be alive.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital.

Abundance-Digital is my ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Nejron Photo / Shutterstock.com


#432691 Is the Secret to Significantly Longer ...

Once upon a time, a powerful Sumerian king named Gilgamesh went on a quest, as such characters often do in these stories of myth and legend. Gilgamesh had witnessed the death of his best friend, Enkidu, and, fearing a similar fate, went in search of immortality. The great king failed to find the secret of eternal life but took solace that his deeds would live well beyond his mortal years.

Fast-forward four thousand years, give or take a century, and Gilgamesh (as famous as any B-list celebrity today, despite the passage of time) would probably be heartened to learn that many others have taken up his search for longevity. Today, though, instead of battling epic monsters and the machinations of fickle gods, those seeking to enhance and extend life are cutting-edge scientists and visionary entrepreneurs who are helping unlock the secrets of human biology.

Chief among them is Aubrey de Grey, a biomedical gerontologist who founded the SENS Research Foundation, a Silicon Valley-based research organization that seeks to advance the application of regenerative medicine to age-related diseases. SENS stands for Strategies for Engineered Negligible Senescence, a term coined by de Grey to describe a broad array (seven, to be precise) of medical interventions that attempt to repair or prevent different types of molecular and cellular damage that eventually lead to age-related diseases like cancer and Alzheimer’s.

Many of the strategies focus on senescent cells, which accumulate in tissues and organs as people age. Not quite dead, senescent cells stop dividing but are still metabolically active, spewing out all sorts of proteins and other molecules that can cause inflammation and other problems. In a young body, that’s usually not a problem (and probably part of general biological maintenance), as a healthy immune system can go to work to put out most fires.

However, as we age, senescent cells continue to accumulate, and at some point the immune system retires from fire watch. Welcome to old age.

Of Mice and Men
Researchers like de Grey believe that treating the cellular underpinnings of aging could not only prevent disease but significantly extend human lifespans. How long? Well, if you’re talking to de Grey, Biblical proportions—on the order of centuries.

De Grey says that science has made great strides toward that end in the last 15 years, such as the ability to copy mitochondrial DNA to the nucleus. Mitochondria serve as the power plant of the cell but are highly susceptible to mutations that lead to cellular degeneration. Copying the mitochondrial DNA into the nucleus would help protect it from damage.

Another achievement occurred about six years ago when scientists first figured out how to kill senescent cells. That discovery led to a spate of new experiments in mice indicating that removing these ticking-time-bomb cells prevented disease and even extended their lifespans. Now the anti-aging therapy is about to be tested in humans.

“As for the next few years, I think the stream of advances is likely to become a flood—once the first steps are made, things get progressively easier and faster,” de Grey tells Singularity Hub. “I think there’s a good chance that we will achieve really dramatic rejuvenation of mice within only six to eight years: maybe taking middle-aged mice and doubling their remaining lifespan, which is an order of magnitude more than can be done today.”

Not Horsing Around
Richard G.A. Faragher, a professor of biogerontology at the University of Brighton in the United Kingdom, recently made discoveries in the lab regarding the rejuvenation of senescent cells with chemical compounds found in foods like chocolate and red wine. He hopes to apply his findings to an animal model in the future—in this case, horses.

“We have been very fortunate in receiving some funding from an animal welfare charity to look at potential treatments for older horses,” he explains to Singularity Hub in an email. “I think this is a great idea. Many aspects of the physiology we are studying are common between horses and humans.”

What Faragher and his colleagues demonstrated in a paper published in BMC Cell Biology last year was that resveralogues, chemicals based on resveratrol, were able to reactivate a protein called a splicing factor that is involved in gene regulation. Within hours, the chemicals caused the cells to rejuvenate and start dividing like younger cells.

“If treatments work in our old pony systems, then I am sure they could be translated into clinical trials in humans,” Faragher says. “How long is purely a matter of money. Given suitable funding, I would hope to see a trial within five years.”

Show Them the Money
Faragher argues that the recent breakthroughs aren’t a result of emerging technologies like artificial intelligence or the gene-editing tool CRISPR, but of a paradigm shift in how scientists understand the underpinnings of cellular aging. Solving the “aging problem” isn’t a question of technology but of money, he says.

“Frankly, when AI and CRISPR have removed cystic fibrosis, Duchenne muscular dystrophy or Gaucher syndrome, I’ll be much more willing to hear tales of amazing progress. Go fix a single, highly penetrant genetic disease in the population using this flashy stuff and then we’ll talk,” he says. “My faith resides in the most potent technological development of all: money.”

De Grey is less flippant about the role that technology will play in the quest to defeat aging. AI, CRISPR, protein engineering, advances in stem cell therapies, and immune system engineering—all will have a part.

“There is not really anything distinctive about the ways in which these technologies will contribute,” he says. “What’s distinctive is that we will need all of these technologies, because there are so many different types of damage to repair and they each require different tricks.”

It’s in the Blood
A startup in the San Francisco Bay Area believes machines can play a big role in discovering the right combination of factors that lead to longer and healthier lives—and then develop drugs that exploit those findings.

BioAge Labs raised nearly $11 million last year for its machine learning platform that crunches big data sets to find blood factors, such as proteins or metabolites, that are tied to a person’s underlying biological age. The startup claims that these factors can predict how long a person will live.

“Our interest in this comes out of research into parabiosis, where joining the circulatory systems of old and young mice—so that they share the same blood—has been demonstrated to make old mice healthier and more robust,” Dr. Eric Morgen, chief medical officer at BioAge, tells Singularity Hub.

Based on that idea, he explains, it should be possible to alter those good or bad factors to produce a rejuvenating effect.

“Our main focus at BioAge is to identify these types of factors in our human cohort data, characterize the important molecular pathways they are involved in, and then drug those pathways,” he says. “This is a really hard problem, and we use machine learning to mine these complex datasets to determine which individual factors and molecular pathways best reflect biological age.”
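BioAge’s models and datasets are proprietary, so as a minimal sketch of the underlying idea here is an ordinary least-squares fit of age against a single invented blood factor. A real platform would regress biological age on many factors at once with far richer models:

```python
def fit_linear(xs, ys):
    """Ordinary least squares for one predictor: y ~ slope*x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Invented data: level of one blood protein vs. chronological age.
factor_level = [1.0, 1.5, 2.0, 2.5, 3.0]
age = [30, 40, 50, 60, 70]
slope, intercept = fit_linear(factor_level, age)

# "Biological age" readout for a new person with factor level 2.2 (about 54).
predicted_age = slope * 2.2 + intercept
```

If such a readout runs ahead of a person’s chronological age, the factors driving it become candidate drug targets, which is the strategy Morgen describes.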

Saving for the Future
Of course, there’s no telling when any of these anti-aging therapies will come to market. That’s why Forever Labs, a biotechnology startup out of Ann Arbor, Michigan, wants your stem cells now. The company offers a service to cryogenically freeze stem cells taken from bone marrow.

The theory behind the procedure, according to Forever Labs CEO Steven Clausnitzer, is based on research showing that stem cells may be a key component for repairing cellular damage. That’s because stem cells can develop into many different cell types and can divide endlessly to replenish other cells. Clausnitzer notes that there are upwards of a thousand clinical studies looking at using stem cells to treat age-related conditions such as cardiovascular disease.

However, stem cells come with their own expiration date, which usually coincides with the age that most people start experiencing serious health problems. Stem cells harvested from bone marrow at a younger age can potentially provide a therapeutic resource in the future.

“We believe strongly that by having access to your own best possible selves, you’re going to be well positioned to lead healthier, longer lives,” he tells Singularity Hub.

“There’s a compelling argument to be made that if you started to maintain the bone marrow population, the amount of nuclear cells in your bone marrow, and to re-up them so that they aren’t declining with age, it stands to reason that you could absolutely mitigate things like cardiovascular disease and stroke and Alzheimer’s,” he adds.

Clausnitzer notes that the stored stem cells can be used today in developing therapies to treat chronic conditions such as osteoarthritis. However, the more exciting prospect—and the reason he put his own 38-year-old stem cells on ice—is that he believes future stem cell therapies can help stave off the ravages of age-related disease.

“I can start reintroducing them not to treat age-related disease but to treat the decline in the stem-cell niche itself, so that I don’t ever get an age-related disease,” he says. “I don’t think that it equates to immortality, but it certainly is a step in that direction.”

Indecisive on Immortality
The societal implications of a longer-living human species are a guessing game at this point. We do know that by mid-century, the global population aged 65 and older will reach 1.6 billion, while those older than 80 will number nearly 450 million, according to the National Academies of Sciences. If many of those people could enjoy healthy twilight years, enormous medical costs could be avoided.

Faragher is certainly working toward a future of widespread healthy aging. Human immortality, however, is another question entirely.

“The longer lifespans become, the more heavily we may need to control birth rates and thus we may have fewer new minds. This could have a heavy ‘opportunity cost’ in terms of progress,” he says.

And does anyone truly want to live forever?

“There have been happy moments in my life but I have also suffered some traumatic disappointments. No [drug] will wash those experiences out of me,” Faragher says. “I no longer view my future with unqualified enthusiasm, and I do not think I am the only middle-aged man to feel that way. I don’t think it is an accident that so many ‘immortalists’ are young.

“They should be careful what they wish for.”

Image Credit: Karim Ortiz / Shutterstock.com

Posted in Human Robots

#432568 Tech Optimists See a Golden ...

Technology evangelists dream about a future where we’re all liberated from the more mundane aspects of our jobs by artificial intelligence. Other futurists go further, imagining AI will enable us to become superhuman, enhancing our intelligence, abandoning our mortal bodies, and uploading ourselves to the cloud.

Paradise is all very well, although your mileage may vary on whether these scenarios are realistic or desirable. The real question is, how do we get there?

Economist John Maynard Keynes notably argued in favor of active intervention when an economic crisis hits, rather than waiting for the markets to settle down to a more healthy equilibrium in the long run. His rebuttal to critics was, “In the long run, we are all dead.” After all, if it takes 50 years of upheaval and economic chaos for things to return to normality, there has been an immense amount of human suffering first.

Similar problems arise with the transition to a world where AI is intimately involved in our lives. In the long term, automation of labor might benefit the human species immensely. But in the short term, it has all kinds of potential pitfalls, especially in exacerbating inequality within societies where AI takes on a larger role. A new report from the Institute for Public Policy Research has deep concerns about the future of work.

Uneven Distribution
While the report doesn’t foresee the same gloom and doom of mass unemployment that other commentators have considered, the concern is that the gains in productivity and economic benefits from AI will be unevenly distributed. In the UK, jobs that account for £290 billion worth of wages in today’s economy could potentially be automated with current technology. But these are disproportionately jobs held by people who are already suffering from social inequality.

Low-wage jobs are five times more likely to be automated than high-wage jobs. A greater proportion of jobs held by women are likely to be automated. The solution that’s often suggested is that people should simply “retrain”; but if no funding or assistance is provided, this burden is too much to bear. You can’t expect people to seamlessly transition from driving taxis to writing self-driving car software without help. As we have already seen, inequality is exacerbated when jobs that don’t require advanced education (even if they require a great deal of technical skill) are the first to go.

No Room for Beginners
Optimists say algorithms won’t replace humans, but will instead liberate us from the dull parts of our jobs. Lawyers used to have to spend hours trawling through case law to find legal precedents; now AI can identify the most relevant documents for them. Doctors no longer need to look through endless scans and perform diagnostic tests; machines can do this, leaving the decision-making to humans. This boosts productivity and provides invaluable tools for workers.

But there are issues with this rosy picture. If humans need to do less work, the economic incentive is for the boss to reduce their hours. Some of these “dull, routine” parts of the job were traditionally how people getting into the field learned the ropes: paralegals used to look through case law, but AI may render them obsolete. Even in the field of journalism, there’s now software that will rewrite press releases for publication, traditionally something close to an entry-level task. If there are no entry-level jobs, or if entry-level now requires years of training, the result is to exacerbate inequality and reduce social mobility.

Automating Our Biases
The adoption of algorithms into employment has already had negative impacts on equality. Cathy O’Neil, who holds a mathematics PhD from Harvard, raises these concerns in her excellent book Weapons of Math Destruction. She notes that algorithms designed by humans often encode the biases of the surrounding society, whether those biases concern race, gender, or sexuality.

Google’s search engine advertises more executive-level jobs to users it thinks are male. AI programs predict that black offenders are more likely to re-offend than white offenders, and those offenders receive correspondingly longer sentences. The bias needn’t have been actively programmed in; perhaps the algorithms just learn from historical data, but that means they will perpetuate historical inequalities.
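This feedback loop is easy to reproduce in miniature. The sketch below is a hypothetical demonstration on synthetic data: an ordinary classifier is trained on "historical" hiring decisions that favored one group regardless of qualification. No bias is programmed into the model itself, yet it learns to treat group membership as a predictive shortcut.

```python
# Toy demonstration (synthetic data): a classifier trained on biased
# historical outcomes reproduces that bias for equally qualified candidates.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 2000
group = rng.integers(0, 2, size=n)    # 0 or 1, e.g. a protected attribute
skill = rng.normal(size=n)            # true qualification

# Historical decisions favored group 1 regardless of skill:
hired = (skill + 1.5 * group + rng.normal(scale=0.5, size=n)) > 0.75

model = LogisticRegression().fit(np.column_stack([skill, group]), hired)

# Two candidates with identical skill but different group membership:
same_skill = np.array([[0.5, 0], [0.5, 1]])
probs = model.predict_proba(same_skill)[:, 1]
print(f"P(hired | group 0) = {probs[0]:.2f}, P(hired | group 1) = {probs[1]:.2f}")
```

The model assigns the group-1 candidate a far higher hiring probability than an identically skilled group-0 candidate — discrimination learned entirely from the training data, which is exactly the failure mode O’Neil warns about.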

Take the candidate-screening software HireVue, used by many major corporations to assess new employees. It analyzes “verbal and non-verbal cues” of candidates, comparing them to employees who historically did well. According to O’Neil, such systems are “using people’s fear and trust of mathematics to prevent them from asking questions.” With no transparency into how the algorithm generates its results, and no consensus over who is responsible for them, discrimination can occur automatically, on a massive scale.

Combine this with other demographic trends. In rich countries, people are living longer. An increasing burden will be placed on a shrinking tax base to support that elderly population. A recent study said that due to the accumulation of wealth in older generations, millennials stand to inherit more than any previous generation, but it won’t happen until they’re in their 60s. Meanwhile, those with savings and capital will benefit as the economy shifts: the stock market and GDP will grow, but wages and equality will fall, a situation that favors people who are already wealthy.

Even in the most dramatic AI scenarios, inequality is exacerbated. If someone develops a general intelligence that’s near-human or super-human, and they manage to control and monopolize it, they instantly become immensely wealthy and powerful. If the glorious technological future that Silicon Valley enthusiasts dream about is only going to serve to make the growing gaps wider and strengthen existing unfair power structures, is it something worth striving for?

What Makes a Utopia?
We urgently need to redefine our notion of progress. Philosophers worry about an AI that is misaligned—the things it seeks to maximize are not the things we want maximized. At the same time, we measure the development of our countries by GDP, not the quality of life of workers or the equality of opportunity in the society. Growing wealth with increased inequality is not progress.

Some people will take the position that there are always winners and losers in society, and that any attempt to redress its inequalities will stifle economic growth and leave everyone worse off. Others will see the coming disruption as an argument for a new economic model, based around universal basic income. Any move towards such a model will need to ensure that it is affordable, sustainable, and does not entrench a two-tier society.

Walter Scheidel’s book The Great Leveler is a huge survey of inequality across all of human history, from prehistoric cave-dwellers to the 21st century. He argues that only revolutions, wars, and other catastrophes have historically reduced inequality: a perfect example is the Black Death in Europe, which (by reducing the population and therefore the available labor supply) increased wages and reduced inequality. Meanwhile, our solution to the financial crisis of 2007–8 may have only made the problem worse.

But in a world of nuclear weapons, of biowarfare, of cyberwarfare—a world of unprecedented, complex, distributed threats—the consequences of these “safety valves” could be worse than ever before. Inequality increases the risk of global catastrophe, and global catastrophes could scupper any progress towards the techno-utopia that the utopians dream of. And a society with entrenched inequality is no utopia at all.

Image Credit: OliveTree / Shutterstock.com

Posted in Human Robots