#433284 Tech Can Sustainably Feed Developing ...

In the next 30 years, virtually all net population growth will occur in urban regions of developing countries. At the same time, worldwide food production will become increasingly limited by the availability of land, water, and energy. These constraints will be exacerbated by climate change and by the expected addition of two billion people to the four billion already living in urban regions. Meanwhile, current urban food ecosystems in the developing world are inefficient and critically inadequate to meet the challenges of the future.

Combined, these trends could have catastrophic economic and political consequences. A new path forward for urban food ecosystems needs to be found. But what is that path?

New technologies, coupled with new business models and supportive government policies, can create more resilient urban food ecosystems in the coming decades. These tech-enabled systems can sustainably link rural, peri-urban (areas just outside cities), and urban producers and consumers, increase overall food production, and generate opportunities for new businesses and jobs (Figure 1).

Figure 1: Nodes of the urban food value chain, from rural, peri-urban, and urban producers to end customers in urban and peri-urban markets.
Here’s a glimpse of the changes technology may bring to the systems feeding cities in the future.

A technology-linked urban food ecosystem would create unprecedented opportunities for small farms to reach wider markets and progress from subsistence farming to commercially producing niche cash crops and animal protein, such as poultry, fish, pork, and insects.

Meanwhile, new opportunities within cities will appear with the creation of vertical farms and other controlled-environment agricultural systems as well as production of plant-based and 3D printed foods and cultured meat. Uberized facilitation of production and distribution of food will reduce bottlenecks and provide new business opportunities and jobs. Off-the-shelf precision agriculture technology will increasingly be the new norm, from smallholders to larger producers.

As part of Agricultural Revolution 4.0, all this will be integrated into the larger collaborative economy—connected by digital platforms, the cloud, and the Internet of Things and powered by artificial intelligence. It will more efficiently and effectively use resources and people to connect the nexus of food, water, energy, nutrition, and human health. It will also aid in the development of a circular economy that is designed to be restorative and regenerative, minimizing waste and maximizing recycling and reuse to build economic, natural, and social capital.

In short, technology will enable transformation of urban food ecosystems, from expanded production in cities to more efficient and inclusive distribution and closer connections with rural farmers. Here’s a closer look at seven tech-driven trends that will help feed tomorrow’s cities.

1. Worldwide Connectivity: Information, Learning, and Markets
Connectivity, from simple SMS on basic cell phones to internet-enabled smartphones and cloud services, is providing the platform for the increasingly powerful technologies driving a new agricultural revolution. Internet connections currently reach more than 4 billion people, about 55% of the global population. That number will grow fast in coming years.

These information and communications technologies connect food producers to consumers with just-in-time data, enhanced good agricultural practices, mobile money and credit, telecommunications, market information and merchandising, and greater transparency and traceability of goods and services throughout the value chain. Text messages on mobile devices have become the one-stop-shop for small farmers to place orders, gain technology information for best management practices, and access market information to increase profitability.
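
As a rough illustration of how little infrastructure such a service needs, here is a minimal sketch of an SMS market-information gateway. It is not the implementation of MFarm, CocoaLink, or any real service; the command format and prices are invented for illustration.

```python
# Hypothetical sketch of an SMS market-information gateway for smallholders.
# The command format and prices are invented for illustration only.

PRICES_PER_KG = {"maize": 38.0, "beans": 95.0, "tomato": 60.0}  # made-up local prices


def handle_sms(message: str) -> str:
    """Answer simple keyword commands such as 'PRICE maize' or 'SELL beans 120'."""
    parts = message.strip().lower().split()
    if len(parts) == 2 and parts[0] == "price":
        crop = parts[1]
        if crop in PRICES_PER_KG:
            return f"{crop}: {PRICES_PER_KG[crop]:.2f} per kg at the city market"
        return f"no price data for {crop}"
    if len(parts) == 3 and parts[0] == "sell" and parts[2].isdigit():
        crop, kilos = parts[1], int(parts[2])
        return f"listing posted: {kilos} kg of {crop}; buyers will reply by SMS"
    return "send: PRICE <crop> or SELL <crop> <kg>"


print(handle_sms("PRICE maize"))     # maize: 38.00 per kg at the city market
print(handle_sms("SELL beans 120"))  # listing posted: 120 kg of beans; ...
```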

Hershey’s CocoaLink in Ghana, for example, uses text and voice messages with cocoa industry experts and small farm producers. Digital Green is a technology-enabled communication system in Asia and Africa to bring needed agricultural and management practices to small farmers in their own language by filming and recording successful farmers in their own communities. MFarm is a mobile app that connects Kenyan farmers with urban markets via text messaging.

2. Blockchain Technology: Greater Access to Basic Financial Services and Enhanced Food Safety
Gaining access to credit and executing financial transactions have been persistent constraints for small farm producers. Blockchain promises to help the unbanked access basic financial services.

The Gates Foundation has released an open source platform, Mojaloop, to allow software developers, banks, and financial service providers to build secure digital payment platforms at scale. Mojaloop uses blockchain technology to enable urban food system players in the developing world to conduct business and trade securely. The free software reduces the complexity and cost of building payment platforms that connect small farmers with customers, merchants, banks, and mobile money providers. Such digital financial services will allow small farm producers in the developing world to conduct business without a brick-and-mortar bank.

Blockchain is also important for the traceability and transparency needed to meet regulatory and consumer requirements during production, post-harvest handling, shipping, processing, and distribution to consumers. Combining blockchain with RFID technologies will further enhance food safety.
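
To make the traceability idea concrete, here is a minimal sketch, not any production blockchain and not Mojaloop, of a hash-linked chain of custody records: each record stores the hash of the previous one, so altering any earlier entry breaks every later link. Field names and events are hypothetical.

```python
# Minimal, illustrative hash-chained ledger for supply-chain traceability.
# Field names and events are hypothetical; real systems add signatures, consensus, etc.
import hashlib
import json


def record_hash(record: dict) -> str:
    """Deterministic SHA-256 hash of a record's contents."""
    return hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()


def append_event(chain: list, event: dict) -> None:
    """Link a new custody event to the hash of the previous record."""
    prev = chain[-1]["hash"] if chain else "genesis"
    record = {"event": event, "prev_hash": prev}
    record["hash"] = record_hash({"event": event, "prev_hash": prev})
    chain.append(record)


def verify(chain: list) -> bool:
    """Recompute every link; any tampered record breaks the chain."""
    prev = "genesis"
    for rec in chain:
        expected = record_hash({"event": rec["event"], "prev_hash": rec["prev_hash"]})
        if rec["prev_hash"] != prev or rec["hash"] != expected:
            return False
        prev = rec["hash"]
    return True


chain = []
append_event(chain, {"rfid": "TAG-001", "step": "harvested", "farm": "smallholder A"})
append_event(chain, {"rfid": "TAG-001", "step": "cold storage", "temp_c": 4})
append_event(chain, {"rfid": "TAG-001", "step": "delivered", "market": "urban retailer"})
print(verify(chain))                         # True
chain[0]["event"]["farm"] = "someone else"   # simulate tampering with the record
print(verify(chain))                         # False
```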

3. Uberized Services: On-Demand Equipment, Storage, and More
Uberized services can advance development of the urban food ecosystem across the spectrum, from rural to peri-urban to urban food production and distribution. Whereas Uber and Airbnb enable sharing of rides and homes, the model can be extended in the developing world to include on-demand use of expensive equipment, such as farm machinery, or storage space.

This includes uberization of planting and harvesting equipment (Hello Tractor), transportation vehicles, refrigeration facilities for temporary storage of perishable produce, and “cloud kitchens” (EasyAppetite in Nigeria, FoodCourt in Rwanda, and Swiggy and Zomato in India) that produce fresh meals for delivery to urban customers, enabling young people with motorbikes and cell phones to become delivery entrepreneurs or contractors.

Another uberized service is marketing and distributing “ugly food” or imperfect produce to reduce food waste. About a third of the world’s food goes to waste, often because of appearance; this is enough to feed two billion people. Such services supply consumers with cheaper, nutritious, tasty, healthy fruits and vegetables that would normally be discarded as culls due to imperfections in shape or size.

4. Technology for Producing Plant-Based Foods in Cities
We need to change diet choices through education and marketing and by developing tasty plant-based substitutes. This is not only critical for environmental sustainability, but also offers opportunities for new businesses and services. It turns out that current agricultural production systems for “red meat” have a far greater detrimental impact on the environment than automobiles.

There have been great advances in plant-based foods, like the Impossible Burger and Beyond Meat, that can satisfy the consumer’s experience and perception of meat. Rather than giving up the experience of eating red meat, technology is enabling marketable, attractive plant-based products that can potentially drastically reduce world per capita consumption of red meat.

5. Cellular Agriculture, Lab-Grown Meat, and 3D Printed Food
Lab-grown meat, literally meat grown from cultured cells, may radically change where and how protein is produced, including in the cities where it is consumed. There is a wide range of innovative alternatives to traditional meats that can supplement the need for livestock, farms, and butchers. The history of innovation is about getting rid of the bottleneck in the system, and with meat, the bottleneck is the animal. Finless Foods is a new company trying to replicate fish fillets, for example, while Memphis Meats is working on beef and poultry.

3D printing, or additive manufacturing, is a “general purpose technology” used for making plastic toys, human tissues, aircraft parts, and buildings. 3D printing can also be used to convert alternative ingredients, such as proteins from algae, beet leaves, or insects, into tasty and healthy products that can be produced by small, inexpensive printers in home kitchens. The food can be customized for individual health needs as well as preferences. 3D printing can also contribute to the food ecosystem by making possible on-demand replacement parts, which are badly needed in the developing world for tractors, pumps, and other equipment. Catapult Design 3D prints tractor replacement parts as well as corn shellers, cart designs, prosthetic limbs, and rolling water barrels for the Indian market.

6. Alt Farming: Vertical Farms to Produce Food in Urban Centers
Urban food ecosystem production systems will rely not only on field-grown crops, but also on production of food within cities. There are a host of new, alternative production systems using “controlled environmental agriculture.” These include low-cost, protected poly hoop houses, greenhouses, roof-top and sack/container gardens, and vertical farming in buildings using artificial lighting. Vertical farms enable year-round production of selected crops, regardless of weather—which will be increasingly important in response to climate change—and without concern for deteriorating soil conditions that affect crop quality and productivity. AeroFarms claims 390 times more productivity per square foot than normal field production.

7. Biotechnology and Nanotechnology for Sustainable Intensification of Agriculture
CRISPR is a promising gene-editing technology that can be used to enhance crop productivity while avoiding societal concerns about GMOs. CRISPR can accelerate traditional breeding and selection programs for developing climate-resilient, disease-resistant, higher-yielding, and more nutritious crops and animals.

Plant-derived coating materials, developed with nanotechnology, can decrease waste, extend shelf-life and transportability of fruits and vegetables, and significantly reduce post-harvest crop loss in developing countries that lack adequate refrigeration. Nanotechnology is also used in polymers to coat seeds to increase their shelf-life and increase their germination success and production for niche, high-value crops.

Putting It All Together
The next generation “urban food industry” will be part of the larger collaborative economy that is connected by digital platforms, the cloud, and the Internet of Things. A tech-enabled urban food ecosystem integrated with new business models and smart agricultural policies offers the opportunity for sustainable intensification (doing more with less) of agriculture to feed a rapidly growing global urban population—while also creating viable economic opportunities for rural and peri-urban as well as urban producers and value-chain players.

Image Credit: Akarawut / Shutterstock.com

Posted in Human Robots

#433278 Outdated Evolution: Updating Our ...

What happens when evolution shapes an animal for tribes of 150 primitive individuals living in a chaotic jungle, and then suddenly that animal finds itself living with millions of others in an engineered metropolis, their pockets all bulging with devices of godlike power?

The result, it seems, is a modern era of tension where archaic forms of governance struggle to keep up with the technological advances of their citizenry, where governmental policies act like constraining bottlenecks rather than spearheads of progress.

Simply put, our governments have failed to adapt to disruptive technologies. And if we are to regain our stability moving forward into a future of even greater disruption, it’s imperative that we understand the issues that got us into this situation and what kind of solutions we can engineer to overcome our governmental weaknesses.

Hierarchy vs. Technological Decentralization
Many of the greatest issues our governments face today come from humanity’s biologically hardwired desire for centralized hierarchies. This innate proclivity for building and navigating systems of status and rank is an evolutionary gift handed down to us by our ape ancestors, among whom each member of a community had a mental map of their social hierarchy. Their nervous systems behaved differently depending on their rank in this hierarchy, influencing their interactions in a way that ensured only the most competent ape would rise to the top to gain access to the best food and mates.

As humanity emerged and discovered the power of language, we continued this practice by ensuring that those at the top of the hierarchies, those with the greatest education and access to information, were the dominant decision-makers for our communities.

However, this kind of structured chain of power is only necessary if we’re operating in conditions of scarcity. But resources, including information, are no longer scarce.

It’s estimated that more than two-thirds of adults in the world now own a smartphone, giving the average citizen the same access to the world’s information as the leaders of our governments. And with global poverty falling from 35.5 percent to 10.9 percent over the last 25 years, our younger generations are growing up seeing automation and abundance as a likely default, where innovations like solar energy, lab-grown meat, and 3D printing are expected to become commonplace.

It’s awareness of this paradigm shift that has empowered the recent rise of decentralization. As information and access to resources become ubiquitous, there is noticeably less need for our inefficient and bureaucratic hierarchies.

For example, if blockchain can prove its feasibility for large-scale systems, it can be used to update and upgrade numerous applications to a decentralized model, including currency and voting. Such innovations would lower the risk of failing banks collapsing the economy like they did in 2008, as well as prevent corrupt politicians from using gerrymandering and long queues at polling stations to deter voter participation.

Of course, technology isn’t a magic wand that should be implemented carelessly. Facebook’s “move fast and break things” approach may well have broken American democracy in 2016, as social media played on some of humanity’s worst tendencies during an election: fear and hostility.

But if decentralized technology, like blockchain’s public ledgers, can continue to spread a sense of security and transparency throughout society, perhaps we can begin to quiet that paranoia and hyper-vigilance our brains evolved to cope with living as apes in dangerous jungles. By decentralizing our power structures, we take away the channels our outdated biological behaviors might use to enact social dominance and manipulation.

The peace of mind this creates helps to reestablish trust in our communities and in our governments. And with trust in the government increased, it’s likely we’ll see our next issue corrected.

From Business and Law to Science and Technology
A study found that 59 percent of US presidents, 68 percent of vice presidents, and 78 percent of secretaries of state were lawyers by education and occupation. That’s more than one out of every two people in the most powerful positions in the American government trained in a field dedicated to convincing other people (judges) that their perspective is true, even if they lack evidence.

And so the scientific method became less important than semantics to our leaders.

Similarly, of the 535 members of the American Congress, only 24 hold a PhD, and only two of those are in a STEM field. So far, it’s not getting better: Trump is the first president since WWII not to name a science advisor.

But if we can use technologies like blockchain to increase transparency, efficiency, and trust in the government, then the upcoming generations who understand decentralization, abundance, and exponential technologies might feel inspired enough to run for government positions. This helps solve that common problem where the smartest and most altruistic people tend to avoid government positions because they don’t want to play the semantic and deceitful game of politics.

By changing this narrative, our governments can begin to fill with techno-progressive individuals who actually understand the technologies that are rapidly reshaping our reality. And this influence of expertise is going to be crucial as our governments are forced to restructure and create new policies to accommodate the incoming disruption.

Clearing Regulations to Begin Safe Experimentation
As exponential technologies become more ubiquitous, we’re likely going to see young kids and garage tinkerers creating powerful AIs and altering genetics thanks to tools like CRISPR and free virtual reality tutorials.

This easy accessibility to such powerful technology means unexpected and rapid progress can occur almost overnight, quickly overwhelming our government’s regulatory systems.

Uber and Airbnb are two of the best examples of our government’s inability to keep up with such technology, both companies achieving market dominance before regulators were even able to consider how to handle them. And when a government has decided against them, they often still continue to operate because people simply choose to keep using the apps.

Luckily, this kind of disruption hasn’t yet posed a major existential threat. But this will change when we see companies begin developing cyborg body parts, brain-computer interfaces, nanobot health injectors, and at-home genetic engineering kits.

For this reason, it’s crucial that we have experts who understand how to update our regulations to be as flexible as is necessary to ensure we don’t create black market conditions like we’ve done with drugs. It’s better to have safe and monitored experimentation, rather than forcing individuals into seedy communities using unsafe products.

Survival of the Most Adaptable
If we hope to be an animal that survives our changing environment, we have to adapt. We cannot cling to the behaviors and systems formed thousands of years ago. We must instead acknowledge that we now exist in an ecosystem of disruptive technology, and we must evolve and update our governments if they’re going to be capable of navigating these transformative impacts.

Image Credit: mmatee / Shutterstock.com

Posted in Human Robots

#432882 Why the Discovery of Room-Temperature ...

Superconductors are among the most bizarre and exciting materials yet discovered. Counterintuitive quantum-mechanical effects mean that, below a critical temperature, they have zero electrical resistance. This property alone is more than enough to spark the imagination.

A current that could flow forever without losing any energy means transmission of power with virtually no losses in the cables. When renewable energy sources start to dominate the grid and high-voltage transmission across continents becomes important to overcome intermittency, lossless cables will result in substantial savings.
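
The “virtually no losses” claim is simply Joule heating written down: the power dissipated in a cable scales with its resistance, so zero resistance means zero resistive loss (real superconducting links still pay overheads for cooling and converters).

```latex
P_{\text{loss}} = I^{2} R
\quad\Longrightarrow\quad
R = 0 \;\Rightarrow\; P_{\text{loss}} = 0
```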

What’s more, a superconducting wire carrying a current that never, ever diminishes would act as a perfect store of electrical energy. Unlike batteries, which degrade over time, if the resistance is truly zero, you could return to the superconductor in a billion years and find that same old current flowing through it. Energy could be captured and stored indefinitely!

With no resistance, a huge current could be passed through the superconducting wire and, in turn, produce magnetic fields of incredible power.

You could use them to levitate trains and produce astonishing accelerations, thereby revolutionizing the transport system. You could use them in power plants—replacing conventional methods which spin turbines in magnetic fields to generate electricity—and in quantum computers as the two-level system required for a “qubit,” in which the zeros and ones are replaced by current flowing clockwise or counterclockwise in a superconductor.

Arthur C. Clarke famously said that any sufficiently advanced technology is indistinguishable from magic; superconductors can certainly seem like magical devices. So, why aren’t they busy remaking the world? There’s a problem—that critical temperature.

For all known materials, it’s hundreds of degrees below freezing. Superconductors also have a critical magnetic field; beyond a certain magnetic field strength, they cease to work. There’s a tradeoff: materials with an intrinsically high critical temperature can also often provide the largest magnetic fields when cooled well below that temperature.

This has meant that superconductor applications so far have been limited to situations where you can afford to cool the components of your system to close to absolute zero: in particle accelerators and experimental nuclear fusion reactors, for example.

But even as some aspects of superconductor technology become mature in limited applications, the search for higher temperature superconductors moves on. Many physicists still believe a room-temperature superconductor could exist. Such a discovery would unleash amazing new technologies.

The Quest for Room-Temperature Superconductors
After Heike Kamerlingh Onnes discovered superconductivity by accident while attempting to prove Lord Kelvin’s theory that resistance would increase with decreasing temperature, theorists scrambled to explain the new property in the hope that understanding it might allow for room-temperature superconductors to be synthesized.

They came up with the BCS theory, which explained some of the properties of superconductors. It also predicted that the dream of technologists, a room-temperature superconductor, could not exist; the maximum temperature for superconductivity according to BCS theory was just 30 K.

Then, in the 1980s, the field changed again with the discovery of unconventional, or high-temperature, superconductivity. “High temperature” is still very cold: the highest temperature for superconductivity achieved was -70°C for hydrogen sulphide at extremely high pressures. For normal pressures, -140°C is near the upper limit. Unfortunately, high-temperature superconductors—which require relatively cheap liquid nitrogen, rather than liquid helium, to cool—are mostly brittle ceramics, which are expensive to form into wires and have limited application.

Given the limitations of high-temperature superconductors, researchers continue to believe there’s a better option awaiting discovery—an incredible new material that checks boxes like superconductivity approaching room temperature, affordability, and practicality.

Tantalizing Clues
Without a detailed theoretical understanding of how this phenomenon occurs—although incremental progress happens all the time—scientists can occasionally feel like they’re taking educated guesses at materials that might be likely candidates. It’s a little like trying to guess a phone number, but with the periodic table of elements instead of digits.

Yet the prospect remains, in the words of one researcher, tantalizing. A Nobel Prize and potentially changing the world of energy and electricity is not bad for a day’s work.

Some research focuses on cuprates, complex crystals that contain layers of copper and oxygen atoms. Cuprates doped with various other elements, including exotic compounds such as mercury barium calcium copper oxide, are among the best superconductors known today.

Research also continues into some anomalous but unexplained reports that graphite soaked in water can act as a room-temperature superconductor, but there’s no indication that this could be used for technological applications yet.

In early 2017, as part of the ongoing effort to explore the most extreme and exotic forms of matter we can create on Earth, researchers managed to compress hydrogen into a metal.

The pressure required to do this was greater than that at the core of the Earth and thousands of times higher than that at the bottom of the ocean. Some researchers in the field of condensed-matter physics doubt that metallic hydrogen was produced at all.

It’s considered possible that metallic hydrogen could be a room-temperature superconductor. But getting the samples to stick around long enough for detailed testing has proved tricky, with the diamonds containing the metallic hydrogen suffering a “catastrophic failure” under the pressure.

Superconductivity—or behavior that strongly resembles it—was also observed in yttrium barium copper oxide (YBCO) at room temperature in 2014. The only catch was that this electron transport lasted for a tiny fraction of a second and required the material to be bombarded with pulsed lasers.

Not very practical, you might say, but tantalizing nonetheless.

Other new materials display enticing properties too. The 2016 Nobel Prize in Physics was awarded for the theoretical work that characterizes topological insulators—materials that exhibit similarly strange quantum behaviors. They can be considered perfect insulators for the bulk of the material but extraordinarily good conductors in a thin layer on the surface.

Microsoft is betting on topological insulators as the key component in their attempt at a quantum computer. They’ve also been considered potentially important components in miniaturized circuitry.

A number of remarkable electronic transport properties have also been observed in new, “2D” structures—like graphene, these are materials synthesized to be as thick as a single atom or molecule. And research continues into how we can utilize the superconductors we’ve already discovered; for example, some teams are trying to develop insulating material that prevents superconducting HVDC cable from overheating.

Room-temperature superconductivity remains as elusive and exciting as it has been for over a century. It is unclear whether a room-temperature superconductor can exist, but the discovery of high-temperature superconductors is a promising indicator that unconventional and highly useful quantum effects may be discovered in completely unexpected materials.

Perhaps in the future—through artificial intelligence simulations or the serendipitous discoveries of a 21st century Kamerlingh Onnes—this little piece of magic could move into the realm of reality.

Image Credit: ktsdesign / Shutterstock.com

Posted in Human Robots

#432880 Google’s Duplex Raises the Question: ...

By now, you’ve probably seen Google’s new Duplex software, which promises to call people on your behalf to book appointments for haircuts and the like. As yet, it only exists in demo form, but already it seems like Google has made a big stride towards capturing a market that plenty of companies have had their eye on for quite some time. This software is impressive, but it raises questions.

Many of you will be familiar with the stilted, robotic conversations you can have with early chatbots that are, essentially, glorified menus. Instead of pressing 1 to confirm or 2 to re-enter, some of these bots would allow for simple commands like “Yes” or “No,” replacing the buttons with limited ability to recognize a few words. Using them was often a far more frustrating experience than attempting to use a menu—there are few things more irritating than a robot saying, “Sorry, your response was not recognized.”

Google Duplex scheduling a hair salon appointment:

Google Duplex calling a restaurant:

Even getting the response recognized is hard enough. After all, there are countless different nuances and accents to baffle voice recognition software, and endless turns of phrase that amount to saying the same thing that can confound natural language processing (NLP), especially if you like your phrasing quirky.

You may think that standard customer-service type conversations all travel the same route, using similar words and phrasing. But when there are over 80,000 ways to order coffee, and making a mistake is frowned upon, even simple tasks require high accuracy over a huge dataset.

Advances in audio processing, neural networks, and NLP, as well as raw computing power, have meant that basic recognition of what someone is trying to say is less of an issue. Soundhound’s virtual assistant prides itself on being able to process complicated requests (perhaps needlessly complicated).

The deeper issue, as with all attempts to develop conversational machines, is one of understanding context. There are so many ways a conversation can go that attempting to construct a conversation two or three layers deep quickly runs into problems. Multiply the thousands of things people might say by the thousands they might say next, and the combinatorics of the challenge runs away from most chatbots, leaving them as either glorified menus, gimmicks, or rather bizarre to talk to.
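
The combinatorial point is easy to make quantitative: with a branching factor of b plausible user utterances per turn, a hand-scripted dialogue tree d turns deep has on the order of b^d paths, which is why such bots stall after two or three layers. The numbers below are purely illustrative, not measurements of any real system.

```python
# Illustrative only: growth of a hand-scripted dialogue tree.
# 'branching' = plausible distinct user responses per turn (a hypothetical figure).
branching = 1000
for depth in range(1, 5):
    paths = branching ** depth
    print(f"{depth} turn(s) deep: ~{paths:,} conversation paths to script")
# 1 turn(s) deep: ~1,000 paths ... 4 turn(s) deep: ~1,000,000,000,000 paths
```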

Yet Google, which surely remembers from Glass the risk of premature debuts, especially for technology that asks you to rethink how you interact with or trust software, must have faith in Duplex to show it on the world stage. We know that startups like Semantic Machines and x.ai have received serious funding to perform very similar functions, using natural-language conversations to perform computing tasks, schedule meetings, book hotels, or purchase items.

It’s no great leap to imagine Google will soon do the same, bringing us closer to a world of onboard computing, where Lens labels the world around us and their assistant arranges it for us (all the while gathering more and more data it can convert into personalized ads). The early demos showed some clever tricks for keeping the conversation within a fairly narrow realm where the AI should be comfortable and competent, and the blog post that accompanied the release shows just how much effort has gone into the technology.

Yet given the privacy and ethics funk the tech industry finds itself in, and people’s general unease about AI, the main reaction to Duplex’s impressive demo was concern. The voice sounded too natural, bringing to mind Lyrebird and their warnings of deepfakes. You might trust “Do the Right Thing” Google with this technology, but it could usher in an era when automated robo-callers are far more convincing.

A more human-like voice may sound like a perfectly innocuous improvement, but the fact that the assistant interjects naturalistic “umm” and “mm-hm” responses to more perfectly mimic a human rubbed a lot of people the wrong way. This wasn’t just a voice assistant trying to sound less grinding and robotic; it was actively trying to deceive people into thinking they were talking to a human.

Google is running the risk of trying to get to conversational AI by going straight through the uncanny valley.

“Google’s experiments do appear to have been designed to deceive,” said Dr. Thomas King of the Oxford Internet Institute’s Digital Ethics Lab, according to Techcrunch. “Their main hypothesis was ‘can you distinguish this from a real person?’ In this case it’s unclear why their hypothesis was about deception and not the user experience… there should be some kind of mechanism there to let people know what it is they are speaking to.”

From Google’s perspective, being able to say “90 percent of callers can’t tell the difference between this and a human personal assistant” is an excellent marketing ploy, even though statistics about how many interactions are successful might be more relevant.

In fact, Duplex runs contrary to pretty much every major recommendation about ethics for the use of robotics or artificial intelligence, not to mention certain eavesdropping laws. Transparency is key to holding machines (and the people who design them) accountable, especially when it comes to decision-making.

Then there are the more subtle social issues. One prominent effect social media has had is to allow people to silo themselves; in echo chambers of like-minded individuals, it’s hard to see how other opinions exist. Technology exacerbates this by removing the evolutionary cues that go along with face-to-face interaction. Confronted with a pair of human eyes, people are more generous. Confronted with a Twitter avatar or a Facebook interface, people hurl abuse and criticism they’d never dream of using in a public setting.

Now that we can use technology to interact with ever fewer people, will it change us? Is it fair to offload the burden of dealing with a robot onto the poor human at the other end of the line, who might have to deal with dozens of such calls a day? Google has said that if the AI is in trouble, it will put you through to a human, which might help save receptionists from the hell of trying to explain a concept to dozens of dumbfounded AI assistants all day. But there’s always the risk that failures will be blamed on the person and not the machine.

As AI advances, could we end up treating the dwindling number of people in these “customer-facing” roles as the buggiest part of a fully automatic service? Will people start accusing each other of being robots on the phone, as well as on Twitter?

Google has provided plenty of reassurances about how the system will be used. They have said they will ensure that the system is identified, and it’s hardly difficult to resolve this problem; a slight change in the script from their demo would do it. For now, consumers will likely appreciate moves that make it clear whether the “intelligent agents” that make major decisions for us, that we interact with daily, and that hide behind social media avatars or phone numbers are real or artificial.

Image Credit: Besjunior / Shutterstock.com

Posted in Human Robots

#432691 Is the Secret to Significantly Longer ...

Once upon a time, a powerful Sumerian king named Gilgamesh went on a quest, as such characters often do in these stories of myth and legend. Gilgamesh had witnessed the death of his best friend, Enkidu, and, fearing a similar fate, went in search of immortality. The great king failed to find the secret of eternal life but took solace that his deeds would live well beyond his mortal years.

Fast-forward four thousand years, give or take a century, and Gilgamesh (as famous as any B-list celebrity today, despite the passage of time) would probably be heartened to learn that many others have taken up his search for longevity. Today, though, instead of battling epic monsters and the machinations of fickle gods, those seeking to enhance and extend life are cutting-edge scientists and visionary entrepreneurs who are helping unlock the secrets of human biology.

Chief among them is Aubrey de Grey, a biomedical gerontologist who founded the SENS Research Foundation, a Silicon Valley-based research organization that seeks to advance the application of regenerative medicine to age-related diseases. SENS stands for Strategies for Engineered Negligible Senescence, a term coined by de Grey to describe a broad array (seven, to be precise) of medical interventions that attempt to repair or prevent different types of molecular and cellular damage that eventually lead to age-related diseases like cancer and Alzheimer’s.

Many of the strategies focus on senescent cells, which accumulate in tissues and organs as people age. Not quite dead, senescent cells stop dividing but are still metabolically active, spewing out all sorts of proteins and other molecules that can cause inflammation and other problems. In a young body, that’s usually not a problem (and probably part of general biological maintenance), as a healthy immune system can go to work to put out most fires.

However, as we age, senescent cells continue to accumulate, and at some point the immune system retires from fire watch. Welcome to old age.

Of Mice and Men
Researchers like de Grey believe that treating the cellular underpinnings of aging could not only prevent disease but significantly extend human lifespans. How long? Well, if you’re talking to de Grey, Biblical proportions—on the order of centuries.

De Grey says that science has made great strides toward that end in the last 15 years, such as the ability to copy mitochondrial DNA to the nucleus. Mitochondria serve as the power plant of the cell but are highly susceptible to mutations that lead to cellular degeneration. Copying the mitochondrial DNA into the nucleus would help protect it from damage.

Another achievement occurred about six years ago when scientists first figured out how to kill senescent cells. That discovery led to a spate of new experiments in mice indicating that removing these ticking-time-bomb cells prevented disease and even extended their lifespans. Now the anti-aging therapy is about to be tested in humans.

“As for the next few years, I think the stream of advances is likely to become a flood—once the first steps are made, things get progressively easier and faster,” de Grey tells Singularity Hub. “I think there’s a good chance that we will achieve really dramatic rejuvenation of mice within only six to eight years: maybe taking middle-aged mice and doubling their remaining lifespan, which is an order of magnitude more than can be done today.”

Not Horsing Around
Richard G.A. Faragher, a professor of biogerontology at the University of Brighton in the United Kingdom, recently made discoveries in the lab regarding the rejuvenation of senescent cells with chemical compounds found in foods like chocolate and red wine. He hopes to apply his findings to an animal model in the future; in this case, horses.

“We have been very fortunate in receiving some funding from an animal welfare charity to look at potential treatments for older horses,” he explains to Singularity Hub in an email. “I think this is a great idea. Many aspects of the physiology we are studying are common between horses and humans.”

What Faragher and his colleagues demonstrated in a paper published in BMC Cell Biology last year was that resveralogues, chemicals based on resveratrol, were able to reactivate a protein called a splicing factor that is involved in gene regulation. Within hours, the chemicals caused the cells to rejuvenate and start dividing like younger cells.

“If treatments work in our old pony systems, then I am sure they could be translated into clinical trials in humans,” Faragher says. “How long is purely a matter of money. Given suitable funding, I would hope to see a trial within five years.”

Show Them the Money
Faragher argues that the recent breakthroughs aren’t the result of emerging technologies like artificial intelligence or the gene-editing tool CRISPR, but of a paradigm shift in how scientists understand the underpinnings of cellular aging. Solving the “aging problem” isn’t a question of technology but of money, he says.

“Frankly, when AI and CRISPR have removed cystic fibrosis, Duchenne muscular dystrophy or Gaucher syndrome, I’ll be much more willing to hear tales of amazing progress. Go fix a single, highly penetrant genetic disease in the population using this flashy stuff and then we’ll talk,” he says. “My faith resides in the most potent technological development of all: money.”

De Grey is less flippant about the role that technology will play in the quest to defeat aging. AI, CRISPR, protein engineering, advances in stem cell therapies, and immune system engineering—all will have a part.

“There is not really anything distinctive about the ways in which these technologies will contribute,” he says. “What’s distinctive is that we will need all of these technologies, because there are so many different types of damage to repair and they each require different tricks.”

It’s in the Blood
A startup in the San Francisco Bay Area believes machines can play a big role in discovering the right combination of factors that lead to longer and healthier lives—and then develop drugs that exploit those findings.

BioAge Labs raised nearly $11 million last year for its machine learning platform that crunches big data sets to find blood factors, such as proteins or metabolites, that are tied to a person’s underlying biological age. The startup claims that these factors can predict how long a person will live.

“Our interest in this comes out of research into parabiosis, where joining the circulatory systems of old and young mice—so that they share the same blood—has been demonstrated to make old mice healthier and more robust,” Dr. Eric Morgen, chief medical officer at BioAge, tells Singularity Hub.

Based on that idea, he explains, it should be possible to alter those good or bad factors to produce a rejuvenating effect.

“Our main focus at BioAge is to identify these types of factors in our human cohort data, characterize the important molecular pathways they are involved in, and then drug those pathways,” he says. “This is a really hard problem, and we use machine learning to mine these complex datasets to determine which individual factors and molecular pathways best reflect biological age.”
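
BioAge has not published its models, so the following is only a generic sketch of the kind of approach the quote describes: regress age on blood biomarker measurements and read the fitted model’s prediction as a “biological age” estimate. All data, feature names, and the choice of model here are synthetic assumptions, not BioAge’s method.

```python
# Generic, hypothetical sketch of a "biological age" regression on blood factors.
# Synthetic data; not BioAge's features, model, or pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
age = rng.uniform(30, 90, n)                  # chronological age in years
# Three made-up blood factors that drift with age plus noise.
factors = np.column_stack([
    0.8 * age + rng.normal(0, 8, n),          # e.g. an inflammatory protein level
    50 - 0.3 * age + rng.normal(0, 5, n),     # e.g. a metabolite that declines with age
    rng.normal(0, 1, n),                      # an uninformative factor
])

X_train, X_test, y_train, y_test = train_test_split(factors, age, random_state=0)
model = Ridge(alpha=1.0).fit(X_train, y_train)

predicted_bio_age = model.predict(X_test)
# A positive gap (predicted minus chronological) would be read as "older than your years".
gap = predicted_bio_age - y_test
print(f"R^2 on held-out data: {model.score(X_test, y_test):.2f}")
print(f"Mean biological-age gap: {gap.mean():+.1f} years")
```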

Saving for the Future
Of course, there’s no telling when any of these anti-aging therapies will come to market. That’s why Forever Labs, a biotechnology startup out of Ann Arbor, Michigan, wants your stem cells now. The company offers a service to cryogenically freeze stem cells taken from bone marrow.

The theory behind the procedure, according to Forever Labs CEO Steven Clausnitzer, is based on research showing that stem cells may be a key component for repairing cellular damage. That’s because stem cells can develop into many different cell types and can divide endlessly to replenish other cells. Clausnitzer notes that there are upwards of a thousand clinical studies looking at using stem cells to treat age-related conditions such as cardiovascular disease.

However, stem cells come with their own expiration date, which usually coincides with the age that most people start experiencing serious health problems. Stem cells harvested from bone marrow at a younger age can potentially provide a therapeutic resource in the future.

“We believe strongly that by having access to your own best possible selves, you’re going to be well positioned to lead healthier, longer lives,” he tells Singularity Hub.

“There’s a compelling argument to be made that if you started to maintain the bone marrow population, the amount of nuclear cells in your bone marrow, and to re-up them so that they aren’t declining with age, it stands to reason that you could absolutely mitigate things like cardiovascular disease and stroke and Alzheimer’s,” he adds.

Clausnitzer notes that the stored stem cells can be used today in developing therapies to treat chronic conditions such as osteoarthritis. However, the more exciting prospect—and the reason he put his own 38-year-old stem cells on ice—is that he believes future stem cell therapies can help stave off the ravages of age-related disease.

“I can start reintroducing them not to treat age-related disease but to treat the decline in the stem-cell niche itself, so that I don’t ever get an age-related disease,” he says. “I don’t think that it equates to immortality, but it certainly is a step in that direction.”

Indecisive on Immortality
The societal implications of a longer-living human species are a guessing game at this point. We do know that by mid-century, the global population of those aged 65 and older will reach 1.6 billion, while those older than 80 will hit nearly 450 million, according to the National Academies of Sciences. If many of those people could enjoy healthy lives in their twilight years, an enormous medical cost could be avoided.

Faragher is certainly working toward a future where human health is ubiquitous. Human immortality is another question entirely.

“The longer lifespans become, the more heavily we may need to control birth rates and thus we may have fewer new minds. This could have a heavy ‘opportunity cost’ in terms of progress,” he says.

And does anyone truly want to live forever?

“There have been happy moments in my life but I have also suffered some traumatic disappointments. No [drug] will wash those experiences out of me,” Faragher says. “I no longer view my future with unqualified enthusiasm, and I do not think I am the only middle-aged man to feel that way. I don’t think it is an accident that so many ‘immortalists’ are young.

“They should be careful what they wish for.”

Image Credit: Karim Ortiz / Shutterstock.com

Posted in Human Robots