
#437209 A Renaissance of Genomics and Drugs Is ...

The causes of aging are extremely complex and unclear. But with longevity clinical trials increasing, more answers—and questions—are emerging than ever before.

With the dramatic demonetization of genome reading and editing over the past decade, and with Big Pharma, startups, and the FDA beginning to treat aging as a disease, we are starting to turn those answers into practical ways to extend our healthspan.

In this article, I’ll explore how genome sequencing and editing, along with new classes of anti-aging drugs, are augmenting our biology to further extend our healthy lives.

Genome Sequencing and Editing
Your genome is the software that runs your body. A sequence of 3.2 billion letters makes you “you.” These base pairs of A’s, T’s, C’s, and G’s determine your hair color, your height, your personality, your propensity for disease, your lifespan, and so on.

Until recently, it’s been very difficult to rapidly and cheaply “read” these letters—and even more difficult to understand what they mean. Since 2001, the cost to sequence a whole human genome has plummeted exponentially, outpacing Moore’s Law threefold. From an initial cost of $3.7 billion, it dropped to $10 million in 2006, and to $1,500 in 2015.

Today, the cost of genome sequencing has dropped below $600, and according to Illumina, the world’s leading sequencing company, the process will soon cost about $100 and take about an hour to complete.
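To make "outpacing Moore's Law" concrete, here's a minimal Python sketch of the implied cost-halving time, using only the dollar figures quoted above as rough point estimates (an illustrative calculation, not a rigorous cost model):

```python
import math

# Approximate whole-genome sequencing costs quoted above (year, US dollars).
costs = [(2001, 3.7e9), (2006, 10e6), (2015, 1_500), (2020, 600)]  # 2020 = "today"

def halving_time(y0, c0, y1, c1):
    """Years for the cost to halve, assuming a smooth exponential decline."""
    return (y1 - y0) / math.log2(c0 / c1)

for (y0, c0), (y1, c1) in zip(costs, costs[1:]):
    print(f"{y0}-{y1}: {c0 / c1:,.0f}-fold drop, halving every "
          f"{halving_time(y0, c0, y1, c1):.2f} years")

# Moore's Law roughly halves cost every two years; the figures above imply
# sequencing costs halved several times faster through 2015.
```

Over the 2001 to 2015 stretch, the implied halving time comes out at well under a year, versus the roughly two-year cadence of Moore's Law.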

This represents one of the most powerful and transformative technology revolutions in healthcare. When we understand your genome, we’ll be able to understand how to optimize “you.”

We’ll know the perfect foods, the perfect drugs, the perfect exercise regimen, and the perfect supplements, just for you.
We’ll understand what microbiome types, or gut flora, are ideal for you (more on this in a later article).
We’ll accurately predict how specific sedatives and medicines will impact you.
We’ll learn which diseases and illnesses you’re most likely to develop and, more importantly, how to best prevent them from developing in the first place (rather than trying to cure them after the fact).

CRISPR Gene Editing
In addition to reading the human genome, scientists can now edit genomes using CRISPR/Cas9, a tool derived from a naturally occurring biological system first described in 1987.

Short for Clustered Regularly Interspaced Short Palindromic Repeats and CRISPR-associated protein 9, the editing system was adapted from a naturally-occurring defense system found in bacteria.

Here’s how it works. The bacteria capture snippets of DNA from invading viruses (called bacteriophages) and use them to create DNA segments known as CRISPR arrays. The CRISPR arrays allow the bacteria to “remember” the viruses (or closely related ones) and defend against future invasions. If the viruses attack again, the bacteria produce RNA segments from the CRISPR arrays to target the viruses’ DNA. The bacteria then use Cas9 to cut the DNA apart, which disables the virus.
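As a toy illustration of that targeting step, here's a short Python sketch (the guide and DNA sequences are invented) of how Cas9 recognizes a cut site: a 20-letter stretch of DNA matching the guide, sitting immediately next to an "NGG" PAM motif that Cas9 requires before it cuts.

```python
# Toy illustration of CRISPR/Cas9 target recognition (sequences are invented).
# Cas9 cuts only where the DNA matches the 20-letter guide *and* the match is
# immediately followed by a PAM motif of the form "NGG" (N = any base).

GUIDE = "GATTACAGATTACAGATTAC"                   # 20-nt guide RNA (as DNA letters)
DNA = "TTTGATTACAGATTACAGATTACTGGAAACCC"         # stretch of target DNA

def find_cut_sites(dna, guide):
    """Return positions where the guide matches and an NGG PAM follows."""
    sites = []
    g = len(guide)
    for i in range(len(dna) - g - 2):
        protospacer = dna[i:i + g]
        pam = dna[i + g:i + g + 3]
        if protospacer == guide and pam.endswith("GG"):
            sites.append(i + g - 3)              # Cas9 cuts ~3 bp upstream of the PAM
    return sites

print(find_cut_sites(DNA, GUIDE))                # -> [20] for these toy sequences
```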

Most importantly, CRISPR is cheap, quick, easy to use, and more accurate than all previous gene editing methods. As a result, CRISPR/Cas9 has swept through labs around the world as the way to edit a genome. A short search in the literature will show an exponential rise in the number of CRISPR-related publications and patents.

2018: Filled With CRISPR Breakthroughs
Early results are impressive. Researchers have used CRISPR to genetically engineer cocaine resistance into mice, reverse the gene defect causing Duchenne muscular dystrophy (DMD) in dogs, and reduce genetic deafness in mice.

Already this year, CRISPR-edited immune cells have been shown to successfully kill cancer cells in human patients. Researchers have discovered ways to activate CRISPR with light and use the gene-editing technology to better understand Alzheimer’s disease progression.

With great power comes great responsibility, and the opportunity for moral and ethical dilemmas. In 2015, Chinese scientists sparked global controversy when they edited human embryo cells in the lab for the first time. Three years later, in November 2018, researcher He Jiankui announced that the first CRISPR-engineered babies—twin girls—had been born.

He Jiankui’s stated goal was to make the twins resistant to HIV. To do so, he deleted a region of CCR5, the gene encoding a receptor on the surface of white blood cells, introducing a rare, natural genetic variation that makes it more difficult for HIV to infect those cells. Because He Jiankui forged ethical review documents and misled the doctors involved, he was sentenced to three years in prison and fined $429,000 in December 2019.

Alongside the serious ethical conversations this progress demands, CRISPR will soon provide us the tools to eliminate diseases, create hardier offspring, produce new environmentally resistant crops, and even wipe out pathogens.

Senolytics, Nutraceuticals, and Pharmaceuticals
Over the arc of your life, the cells in your body divide until they reach what is known as the Hayflick limit—the number of times a normal human cell population will divide before cell division stops, typically about 50 divisions.

What normally follows is programmed cell death or destruction by the immune system. A very small fraction of cells, however, evade this fate and linger indefinitely as senescent cells. These lingering cells secrete a potent mix of molecules that triggers chronic inflammation, damages the surrounding tissue structures, and changes the behavior of nearby cells for the worse. Senescent cells appear to be one of the root causes of aging, contributing to everything from fibrosis and blood vessel calcification to localized inflammatory conditions such as osteoarthritis and diminished lung function.

Fortunately, both the scientific and entrepreneurial communities have begun to work on senolytic therapies, moving the technology for selectively destroying senescent cells out of the laboratory and into a half-dozen startup companies.

Prominent companies in the field include the following:

Unity Biotechnology is developing senolytic medicines to selectively eliminate senescent cells with an initial focus on delivering localized therapy in osteoarthritis, ophthalmology, and pulmonary disease.

Oisin Biotechnologies is pioneering a programmable gene therapy that can destroy cells based on their internal biochemistry.

SIWA Therapeutics is working on an immunotherapy approach to the problem of senescent cells.

In recent years, researchers have identified or designed a handful of senolytic compounds that can curb aging by clearing or regulating senescent cells. Two drugs that have attracted sustained research attention are rapamycin and metformin.

(1) Rapamycin

Originally isolated from bacteria found in the soil of Easter Island, rapamycin acts on the mTOR (mechanistic target of rapamycin) pathway to selectively block a key protein that facilitates cell division. Currently, rapamycin derivatives are widely used as immunosuppressants in organ and bone marrow transplants. Research now suggests that rapamycin can also prolong lifespan and enhance cognitive and immune function.

PureTech Health subsidiary resTORbio (which went public in 2018) is working on a rapamycin-based drug intended to enhance immunity and reduce infection. Their clinical-stage RTB101 drug works by inhibiting part of the mTOR pathway.

Results of the drug’s recent clinical trial include a decreased incidence of infection, an improved response to influenza vaccination, and a 30.6 percent decrease in respiratory tract infections.

Impressive, to say the least.

(2) Metformin

Metformin is a widely used generic drug that curbs glucose production in the liver and is prescribed to patients with type 2 diabetes. Researchers have found that metformin also reduces oxidative stress and inflammation, both of which otherwise increase as we age. There is strong evidence that, by damping these two processes, metformin can support cellular regeneration and dramatically slow cellular senescence.

Over 100 studies registered on ClinicalTrials.gov are currently following up on strong evidence of metformin’s protective effect against cancer.

(3) Nutraceuticals and NAD+

Beyond cellular senescence, certain critical nutrients and proteins tend to decline as a function of age. Nutraceuticals combat aging by supplementing and replenishing these declining nutrient levels.

NAD+ exists in every cell, participating in processes ranging from DNA repair to the production of the energy that powers the cell. NAD+ levels have been shown to decline as we age.

The Elysium Health supplement Basis aims to elevate NAD+ levels in the body to extend one’s lifespan. Elysium’s first clinical study reports that Basis increases NAD+ levels by a sustained 40 percent.

Conclusion
These are just a taste of the tremendous momentum that longevity and aging technology has right now. As artificial intelligence and quantum computing transform how we decode our DNA and how we discover drugs, genetics and pharmaceuticals will become truly personalized.

The next article in this series will demonstrate how artificial intelligence is converging with genetics and pharmaceuticals to transform how we approach longevity, aging, and vitality.

We are edging closer toward a dramatically extended healthspan—where 100 is the new 60. What will you create, where will you explore, and how will you spend your time if you are able to add an additional 40 healthy years to your life?

Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for three days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”

If you’d like to learn more and consider joining our 2021 membership, apply here.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.

(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)

This article originally appeared on diamandis.com. Read the original article here.

Image Credit: Arek Socha from Pixabay


#437202 Scientists Used Dopamine to Seamlessly ...

In just half a decade, neuromorphic devices—or brain-inspired computing—already seem quaint. The current darling? Artificial-biological hybrid computing, uniting both man-made computer chips and biological neurons seamlessly into semi-living circuits.

It sounds crazy, but a new study in Nature Materials shows that it’s possible to get an artificial neuron to communicate directly with a biological one using not just electricity, but dopamine—a chemical the brain naturally uses to change how neural circuits behave, best known for signaling reward.

Because these chemicals, known as “neurotransmitters,” are how biological neurons functionally link up in the brain, the study is a dramatic demonstration that it’s possible to connect artificial components with biological brain cells into a functional circuit.

The team isn’t the first to pursue hybrid neural circuits. Previously, a different team hooked up two silicon-based artificial neurons with a biological one into a circuit using electrical protocols alone. Although a powerful demonstration of hybrid computing, the study relied on only one-half of the brain’s computational ability: electrical computing.

The new study now tackles the other half: chemical computing. It adds a layer of compatibility that lays the groundwork not just for brain-inspired computers, but also for brain-machine interfaces and—perhaps—a sort of “cyborg” future. After all, if your brain can’t tell the difference between an artificial neuron and your own, could you? And even if you did, would you care?

Of course, that scenario is far in the future—if ever. For now, the team, led by Dr. Alberto Salleo, professor of materials science and engineering at Stanford University, collectively breathed a sigh of relief that the hybrid circuit worked.

“It’s a demonstration that this communication melding chemistry and electricity is possible,” said Salleo. “You could say it’s a first step toward a brain-machine interface, but it’s a tiny, tiny very first step.”

Neuromorphic Computing
The study grew from years of work into neuromorphic computing, or data processing inspired by the brain.

The blue-sky idea was inspired by the brain’s massive parallel computing capabilities, along with vast energy savings. By mimicking these properties, scientists reasoned, we could potentially turbo-charge computing. Neuromorphic devices basically embody artificial neural networks in physical form—wouldn’t hardware that mimics how the brain processes information be even more efficient and powerful?

These explorations led to novel neuromorphic chips, or artificial neurons that “fire” like biological ones. Additional work found that it’s possible to link these chips up into powerful circuits that run deep learning with ease, with bioengineered communication nodes called artificial synapses.

As a potential computing hardware replacement, these systems have proven to be incredibly promising. Yet scientists soon wondered: given their similarity to biological brains, can we use them as “replacement parts” for brains that suffer from traumatic injuries, aging, or degeneration? Can we hook up neuromorphic components to the brain to restore its capabilities?

Buzz & Chemistry
Theoretically, the answer’s yes.

But there’s a huge problem: current brain-machine interfaces only use electrical signals to mimic neural computation. The brain, in contrast, has two tricks up its sleeve: electricity and chemicals. It is electrochemical.

Within a neuron, electricity travels up its incoming branches, through the bulbous body, then down the output branches. When electrical signals reach the neuron’s outgoing “piers,” dotted along the output branch, however, they hit a snag. A small gap exists between neurons, so to get to the other side, the electrical signals generally need to be converted into little bubble ships, packed with chemicals, and set sail to the other neuronal shore.

In other words, without chemical signals, the brain can’t function normally. These neurotransmitters don’t just passively carry information. Dopamine, for example, can dramatically change how a neural circuit functions. For an artificial-biological hybrid neural system, the absence of chemistry is like nixing international cargo vessels and only sticking with land-based trains and highways.

“To emulate biological synaptic behavior, the connectivity of the neuromorphic device must be dynamically regulated by the local neurotransmitter activity,” the team said.

Let’s Get Electro-Chemical
The new study started with two neurons: the upstream, an immortalized biological cell that releases dopamine; and the downstream, an artificial neuron that the team previously introduced in 2017, made of a mix of biocompatible and electrical-conducting materials.

Rather than the classic neuron shape, picture more of a sandwich with a chunk bitten out in the middle (yup, I’m totally serious). Each of the remaining parts of the sandwich is a soft electrode, made of biological polymers. The “bitten out” part has a conductive solution that can pass on electrical signals.

The biological cell sits close to the first electrode. When activated, it dumps out boats of dopamine, which drift to the electrode and chemically react with it—mimicking the process of dopamine docking onto a biological neuron. This, in turn, generates a current that’s passed on to the second electrode through the conductive solution channel. When this current reaches the second electrode, it changes the electrode’s conductance—that is, how well it can pass on electrical information. This second step is analogous to docked dopamine “ships” changing how likely it is that a biological neuron will fire in the future.

In other words, dopamine release from the biological neuron interacts with the artificial one, so that the chemicals change how the downstream neuron behaves in a somewhat lasting way—a loose mimic of what happens inside the brain during learning.

But that’s not all. Chemical signaling is especially powerful in the brain because it’s flexible. Dopamine, for example, only grabs onto the downstream neuron for a bit before it is taken back up by the upstream neuron—recycled—or broken down. This means its effect is temporary, giving the neural circuit breathing room to readjust its activity.

The Stanford team also tried reconstructing this quirk in their hybrid circuit. They crafted a microfluidic channel that shuttles dopamine and its byproducts away from the artificial neuron for recycling once they’ve done their job.
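Putting those pieces together—dopamine release, a lasting conductance change at the artificial synapse, and microfluidic clean-up—here's a toy Python model of the behavior. All constants and the update rule are invented for illustration; they are not taken from the paper.

```python
# Toy model of the hybrid synapse's behavior (all constants invented).
# State: dopamine concentration at the electrode, and the device conductance.

dopamine = 0.0       # arbitrary concentration units at the gate electrode
conductance = 1.0    # arbitrary units; the synapse's "weight"

LEARN_RATE = 0.05    # how strongly dopamine shifts conductance (hypothetical)
FLUSH_RATE = 0.5     # fraction of dopamine the microfluidic channel removes per step

def step(release):
    """One time step: dopamine release, lasting conductance update, flushing."""
    global dopamine, conductance
    dopamine += release                   # biological cell releases dopamine
    conductance += LEARN_RATE * dopamine  # chemical reaction leaves a lasting change
    dopamine *= (1.0 - FLUSH_RATE)        # microfluidics clear dopamine for recycling

for t in range(10):
    step(release=1.0 if t < 3 else 0.0)   # three dopamine "bursts," then rest
    print(f"t={t}  dopamine={dopamine:.3f}  conductance={conductance:.3f}")

# Conductance ratchets up during the bursts and levels off as the leftover
# dopamine is flushed away—a loose mimic of a synapse that has "learned."
```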

Putting It All Together
After confirming that biological cells can survive happily on top of the artificial one, the team performed a few tests to see if the hybrid circuit could “learn.”

They used electrical methods to first activate the biological dopamine neuron, and watched the artificial one. Before the experiment, the team wasn’t quite sure what to expect. Theoretically, it made sense that dopamine would change the artificial neuron’s conductance, similar to learning. But “it was hard to know whether we’d achieve the outcome we predicted on paper until we saw it happen in the lab,” said study author Scott Keene.

On the first try, however, the team found that the burst of chemical signaling was able to change the artificial neuron’s conductance long-term, similar to the neuroscience dogma “neurons that fire together, wire together.” Activating the upstream biological neuron with chemicals also changed the artificial neuron’s conductance in a way that mimicked learning.

“That’s when we realized the potential this has for emulating the long-term learning process of a synapse,” said Keene.

Under an electron microscope, the team found that, after some calibration, the hybrid synapse recycled dopamine efficiently, on timescales similar to the brain’s. By adjusting how much dopamine accumulates at the artificial neuron, the team could loosely mimic a learning rule known as spike learning—a darling of brain-inspired machine learning.

A Hybrid Future?
Unfortunately for cyborg enthusiasts, the work is still in its infancy.

For one, the artificial neurons are still rather bulky compared to biological ones. This means that they can’t capture and translate information from a single “boat” of dopamine. It’s also unclear if, and how, a hybrid synapse could work inside a living brain. Given the billions of synapses firing away in our heads, it will be a challenge to find and replace the ones that need replacing, and to have them shape our memories and behaviors the way natural synapses do.

That said, we’re inching ever closer to full-capability artificial-biological hybrid circuits.

“The neurotransmitter-mediated neuromorphic device presented in this work constitutes a fundamental building block for artificial neural networks that can be directly modulated based on biological feedback from live neurons,” the authors concluded. “[It] is a crucial first step in realizing next-generation adaptive biohybrid interfaces.”

Image Credit: Gerd Altmann from Pixabay


#437182 MIT’s Tiny New Brain Chip Aims for AI ...

The human brain operates on roughly 20 watts of power (a third of a 60-watt light bulb) in a space the size of, well, a human head. The biggest machine learning algorithms use closer to a nuclear power plant’s worth of electricity and racks of chips to learn.

That’s not to slander machine learning, but nature may have a tip or two to improve the situation. Luckily, there’s a branch of computer chip design heeding that call. By mimicking the brain, super-efficient neuromorphic chips aim to take AI off the cloud and put it in your pocket.

The latest such chip is smaller than a piece of confetti and has tens of thousands of artificial synapses made out of memristors—chip components that can mimic their natural counterparts in the brain.

In a recent paper in Nature Nanotechnology, a team of MIT scientists say their tiny new neuromorphic chip was used to store, retrieve, and manipulate images of Captain America’s Shield and MIT’s Killian Court. Whereas images stored with existing methods tended to lose fidelity over time, the new chip’s images remained crystal clear.

“So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” Jeehwan Kim, associate professor of mechanical engineering at MIT said in a press release. “Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.”

A Brain in Your Pocket
Whereas the computers in our phones and laptops use separate digital components for processing and memory—and therefore need to shuttle information between the two—the MIT chip uses analog components called memristors that process and store information in the same place. This is similar to the way the brain works and makes memristors far more efficient. To date, however, they’ve struggled with reliability and scalability.
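To see why computing “in the same place” as memory is attractive, here's a minimal NumPy sketch of the generic memristor-crossbar idea (an illustration of the concept, not of MIT's specific chip): weights are stored as conductances, inputs arrive as voltages, and Ohm's and Kirchhoff's laws deliver a matrix-vector product as output currents in one analog step.

```python
import numpy as np

# Generic memristor-crossbar illustration (a concept sketch, not MIT's chip).
# The "memory" is the conductance of each crosspoint; the "compute" is Ohm's
# law (I = G * V) at each device plus Kirchhoff's law summing currents down
# each column—so the matrix-vector product happens where the data lives.

rng = np.random.default_rng(0)
conductances = rng.uniform(0.1, 1.0, size=(4, 3))   # 4 rows x 3 columns of memristors
voltages = np.array([0.2, 0.5, 0.1, 0.8])            # inputs applied to the 4 row lines

column_currents = conductances.T @ voltages           # read out on the 3 column lines
print(column_currents)                                 # equals the matrix-vector product
```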

To overcome these challenges, the MIT team designed a new kind of silicon-based, alloyed memristor. Ions flowing in memristors made from unalloyed materials tend to scatter as the components get smaller, meaning the signal loses fidelity and the resulting computations are less reliable. The team found an alloy of silver and copper helped stabilize the flow of silver ions between electrodes, allowing them to scale the number of memristors on the chip without sacrificing functionality.

While MIT’s new chip is promising, there’s likely a ways to go before memristor-based neuromorphic chips go mainstream. Between now and then, engineers like Kim have their work cut out for them to further scale and demonstrate their designs. But if successful, they could make for smarter smartphones and other even smaller devices.

“We would like to develop this technology further to have larger-scale arrays to do image recognition tasks,” Kim said. “And some day, you might be able to carry around artificial brains to do these kinds of tasks, without connecting to supercomputers, the internet, or the cloud.”

Special Chips for AI
The MIT work is part of a larger trend in computing and machine learning. As progress in classical chips has flagged in recent years, there’s been an increasing focus on more efficient software and specialized chips to continue pushing the pace.

Neuromorphic chips, for example, aren’t new. IBM and Intel are developing their own designs. So far, their chips have been based on groups of standard computing components, such as transistors (as opposed to memristors), arranged to imitate neurons in the brain. These chips are, however, still in the research phase.

Graphics processing units (GPUs)—chips originally developed for graphics-heavy work like video games—are the best practical example of specialized hardware for AI and were heavily used in this generation of machine learning early on. In the years since, Google, NVIDIA, and others have developed even more specialized chips that cater more specifically to machine learning.

The gains from such specialized chips are already being felt.

In a recent cost analysis of machine learning, research and investment firm ARK Invest said cost declines have far outpaced Moore’s Law. In a particular example, they found the cost to train an image recognition algorithm (ResNet-50) went from around $1,000 in 2017 to roughly $10 in 2019. The fall in cost to actually run such an algorithm was even more dramatic. It took $10,000 to classify a billion images in 2017 and just $0.03 in 2019.
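As a quick back-of-the-envelope check on how steep those declines are, here's a short, illustrative Python calculation using only the figures quoted above (simple arithmetic, not ARK's methodology):

```python
def annual_decline(cost_start, cost_end, years):
    """Implied constant yearly cost reduction; 0.90 means 90% cheaper per year."""
    return 1 - (cost_end / cost_start) ** (1 / years)

# Figures quoted above (ARK Invest, ResNet-50), 2017 -> 2019.
train = annual_decline(1_000, 10, years=2)       # training cost
infer = annual_decline(10_000, 0.03, years=2)    # classifying a billion images

print(f"Training cost fell ~{train:.0%} per year")    # ~90% per year
print(f"Inference cost fell ~{infer:.1%} per year")   # ~99.8% per year
print(f"Moore's Law pace: ~{1 - 0.5 ** 0.5:.0%} per year (halving every 2 years)")
```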

Some of these declines can be traced to better software, but according to ARK, specialized chips have improved performance by nearly 16 times in the last three years.

As neuromorphic chips—and other tailored designs—advance further in the years to come, these trends in cost and performance may continue. Eventually, if all goes to plan, we might all carry a pocket brain that can do the work of today’s best AI.

Image credit: Peng Lin


#437171 Scientists Tap the World’s Most ...

In The Hitchhiker’s Guide to the Galaxy by Douglas Adams, the haughty supercomputer Deep Thought is asked whether it can find the answer to the ultimate question concerning life, the universe, and everything. It replies that, yes, it can do it, but it’s tricky and it’ll have to think about it. When asked how long it will take it replies, “Seven-and-a-half million years. I told you I’d have to think about it.”

Real-life supercomputers are being asked somewhat less expansive questions but tricky ones nonetheless: how to tackle the Covid-19 pandemic. They’re being used in many facets of responding to the disease, including to predict the spread of the virus, to optimize contact tracing, to allocate resources and provide decisions for physicians, to design vaccines and rapid testing tools, and to understand sneezes. And the answers are needed in a rather shorter time frame than Deep Thought was proposing.

The largest number of Covid-19 supercomputing projects involves designing drugs. It’s likely to take several effective drugs to treat the disease. Supercomputers allow researchers to take a rational approach and aim to selectively muzzle proteins that SARS-CoV-2, the virus that causes Covid-19, needs for its life cycle.

The viral genome encodes proteins needed by the virus to infect humans and to replicate. Among these are the infamous spike protein that sniffs out and penetrates its human cellular target, but there are also enzymes and molecular machines that the virus forces its human subjects to produce for it. Finding drugs that can bind to these proteins and stop them from working is a logical way to go.

The Summit supercomputer at Oak Ridge National Laboratory has a peak performance of 200,000 trillion calculations per second—equivalent to about a million laptops. Image credit: Oak Ridge National Laboratory, U.S. Dept. of Energy, CC BY

I am a molecular biophysicist. My lab, at the Center for Molecular Biophysics at the University of Tennessee and Oak Ridge National Laboratory, uses a supercomputer to discover drugs. We build three-dimensional virtual models of biological molecules like the proteins used by cells and viruses, and simulate how various chemical compounds interact with those proteins. We test thousands of compounds to find the ones that “dock” with a target protein. Those compounds that fit, lock-and-key style, with the protein are potential therapies.
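In code, that screening loop boils down to “score every compound against the target, then sort.” The Python sketch below is a schematic of the workflow with a stand-in scoring function; in a real pipeline the `dock_score` placeholder would be a call to a docking engine, and the compound names here are hypothetical.

```python
import random

# Schematic virtual-screening loop. dock_score is a stand-in: a real pipeline
# would call a docking engine to estimate binding energy against the target's
# 3D structure. Compound names here are hypothetical placeholders.

compound_library = [f"compound_{i}" for i in range(10_000)]

def dock_score(compound, target="spike_protein"):
    """Fake binding score; lower (more negative) = tighter predicted binding."""
    rng = random.Random(f"{compound}|{target}")   # stable fake score per pair
    return rng.uniform(-12.0, -2.0)

# Score everything, rank, and keep the best candidates for lab testing.
ranked = sorted(compound_library, key=dock_score)
top_hits = ranked[:50]
print(top_hits[:5])
```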

The top-ranked candidates are then tested experimentally to see if they indeed do bind to their targets and, in the case of Covid-19, stop the virus from infecting human cells. The compounds are first tested in cells, then animals, and finally humans. Computational drug discovery with high-performance computing has been important in finding antiviral drugs in the past, such as the anti-HIV drugs that revolutionized AIDS treatment in the 1990s.

World’s Most Powerful Computer
Since the 1990s the power of supercomputers has increased by a factor of a million or so. Summit at Oak Ridge National Laboratory is presently the world’s most powerful supercomputer, and has the combined power of roughly a million laptops. A laptop today has roughly the same power as a supercomputer had 20-30 years ago.

However, in order to gin up speed, supercomputer architectures have become more complicated. They used to consist of single, very powerful chips on which programs would simply run faster. Now they consist of thousands of processors performing massively parallel processing in which many calculations, such as testing the potential of drugs to dock with a pathogen or cell’s proteins, are performed at the same time. Persuading those processors to work together harmoniously is a pain in the neck but means we can quickly try out a lot of chemicals virtually.
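That “many calculations at the same time” pattern is classic data parallelism: each processor docks its own slice of the compound library independently. Here's a minimal sketch using Python's multiprocessing module as a stand-in for a supercomputer's scheduler; as before, `dock_score` is a hypothetical stub, not a real docking engine.

```python
import random
from multiprocessing import Pool

# Data-parallel screening sketch: multiprocessing stands in for the thousands
# of supercomputer processors; dock_score is a hypothetical stub.

def dock_score(compound):
    rng = random.Random(compound)                # stable fake score per name
    return compound, rng.uniform(-12.0, -2.0)

if __name__ == "__main__":
    library = [f"compound_{i}" for i in range(100_000)]
    with Pool() as pool:                         # one worker per CPU core
        scored = pool.map(dock_score, library, chunksize=1_000)
    best = sorted(scored, key=lambda pair: pair[1])[:10]
    print(best)
```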

Further, researchers use supercomputers to figure out by simulation the different shapes formed by the target binding sites and then virtually dock compounds to each shape. In my lab, that procedure has produced experimentally validated hits—chemicals that work—for each of 16 protein targets that physician-scientists and biochemists have discovered over the past few years. These targets were selected because finding compounds that dock with them could result in drugs for treating different diseases, including chronic kidney disease, prostate cancer, osteoporosis, diabetes, thrombosis and bacterial infections.

Scientists are using supercomputers to find ways to disable the various proteins—including the infamous spike protein (green protrusions)—produced by SARS-CoV-2, the virus responsible for Covid-19. Image credit: Thomas Splettstoesser scistyle.com, CC BY-ND

Billions of Possibilities
So which chemicals are being tested for Covid-19? A first approach is trying out drugs that already exist for other indications and that we have a pretty good idea are reasonably safe. That’s called “repurposing,” and if it works, regulatory approval will be quick.

But repurposing isn’t necessarily being done in the most rational way. One idea researchers are considering is that drugs that work against protein targets of some other virus, such as the flu, hepatitis or Ebola, will automatically work against Covid-19, even when the SARS-CoV-2 protein targets don’t have the same shape.

Our own work has now expanded to about 10 targets on SARS-CoV-2, and we’re also looking at human protein targets for disrupting the virus’s attack on human cells. Top-ranked compounds from our calculations are being tested experimentally for activity against the live virus. Several of these have already been found to be active.

The best approach is to check whether repurposed compounds will actually bind to their intended target. To that end, my lab published a preliminary report of a supercomputer-driven docking study of a repurposing compound database in mid-February. The study ranked 8,000 compounds in order of how well they bind to the viral spike protein. This paper triggered the establishment of a high-performance computing consortium against our viral enemy, announced by President Trump in March. Several of our top-ranked compounds are now in clinical trials.

Also, we and others are venturing out into the wild world of new drug discovery for Covid-19—looking for compounds that have never been tried as drugs before. Databases of billions of these compounds exist, all of which could probably be synthesized in principle but most of which have never been made. Billion-compound docking is a tailor-made task for massively parallel supercomputing.

Dawn of the Exascale Era
Work will be helped by the arrival of the next big machine at Oak Ridge, called Frontier, planned for next year. Frontier should be about 10 times more powerful than Summit, heralding the “exascale” supercomputing era—machines capable of 1,000,000,000,000,000,000 (10^18) calculations per second. Ten times Summit’s roughly 2 × 10^17 calculations per second puts Frontier at about 2 × 10^18, comfortably past that threshold.

Although some fear supercomputers will take over the world, for the time being, at least, they are humanity’s servants, which means that they do what we tell them to. Different scientists have different ideas about how to calculate which drugs work best—some prefer artificial intelligence, for example—so there’s quite a lot of arguing going on.

Hopefully, scientists armed with the most powerful computers in the world will, sooner rather than later, find the drugs needed to tackle Covid-19. If they do, then their answers will be of more immediate benefit, if less philosophically tantalizing, than the answer to the ultimate question provided by Deep Thought, which was, maddeningly, simply 42.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image credit: NIH/NIAID


#437157 A Human-Centric World of Work: Why It ...

Long before coronavirus appeared and shattered our pre-existing “normal,” the future of work was a widely discussed and debated topic. We’ve watched automation slowly but surely expand its capabilities and take over more jobs, and we’ve wondered what artificial intelligence will eventually be capable of.

The pandemic swiftly turned the working world on its head, putting millions of people out of a job and forcing millions more to work remotely. But essential questions remain largely unchanged: we still want to make sure we’re not replaced, we want to add value, and we want an equitable society where different types of work are valued fairly.

To address these issues—as well as how the pandemic has impacted them—this week Singularity University held a digital summit on the future of work. Forty-three speakers from multiple backgrounds, countries, and sectors of the economy shared their expertise on everything from work in developing markets to why we shouldn’t want to go back to the old normal.

Gary Bolles, SU’s chair for the Future of Work, kicked off the discussion with his thoughts on a future of work that’s human-centric, including why it matters and how to build it.

What Is Work?
“Work” seems like a straightforward concept to define, but since it’s constantly shifting shape over time, let’s make sure we’re on the same page. Bolles defined work, very basically, as human skills applied to problems.

“It doesn’t matter if it’s a dirty floor or a complex market entry strategy or a major challenge in the world,” he said. “We as humans create value by applying our skills to solve problems in the world.” You can think of the problems that need solving as the demand and human skills as the supply, and the two are in constant oscillation, including, every few decades or centuries, a massive shift.

We’re in the midst of one of those shifts right now (and we already were, long before the pandemic). Skills that have long been in demand are declining. The World Economic Forum’s 2018 Future of Jobs report listed things like manual dexterity, management of financial and material resources, and quality control and safety awareness as declining skills. Meanwhile, skills the next generation will need include analytical thinking and innovation, emotional intelligence, creativity, and systems analysis.

Along Came a Pandemic
With the outbreak of coronavirus and its spread around the world, the demand side of work shrunk; all the problems that needed solving gave way to the much bigger, more immediate problem of keeping people alive. As a result, tens of millions of people around the world are out of work—and those are just the ones being counted. Millions more in seasonal or gig jobs, or working in informal economies, are now without work too.

“This is our opportunity to focus,” Bolles said. “How do we help people re-engage with work? And make it better work, a better economy, and a better set of design heuristics for a world that we all want?”

Bolles posed five key questions—some spurred by impact of the pandemic—on which future of work conversations should focus to make sure it’s a human-centric future.

1. What does an inclusive world of work look like? Rather than seeing our current systems of work as immutable, we need to actually understand those systems and how we want to change them.

2. How can we increase the value of human work? We know that robots and software are going to be fine in the future—but for humans to be fine, we need to design for that very intentionally.

3. How can entrepreneurship help create a better world of work? In many economies the new value that’s created often comes from younger companies; how do we nurture entrepreneurship?

4. What will the intersection of workplace and geography look like? A large percentage of the global workforce is now working from home; what could some of the outcomes of that be? How does gig work fit in?

5. How can we ensure a healthy evolution of work and life? The health and the protection of those at risk is why we shut down our economies, but we need to find a balance that allows people to work while keeping them safe.

Problem-Solving Doesn’t End
The end result these questions are driving towards, and our overarching goal, is maximizing human potential. “If we come up with ways we can continue to do that, we’ll have a much more beneficial future of work,” Bolles said. “We should all be talking about where we can have an impact.”

One small silver lining? We had plenty of problems to solve in the world before ever hearing about coronavirus, and now we have even more. Is the pace of automation accelerating due to the virus? Yes. Are companies finding more ways to automate their processes in order to keep people from getting sick? They are.

But we have a slew of new problems on our hands, and we’re not going to stop needing human skills to solve them (not to mention the new problems that will surely emerge as second- and third-order effects of the shutdowns). If Bolles’ definition of work holds up, we’ve got ours cut out for us.

In an article from April titled The Great Reset, Bolles outlined three phases of the unemployment slump (we’re currently still in the first phase) and what we should be doing to minimize the damage. “The evolution of work is not about what will happen 10 to 20 years from now,” he said. “It’s about what we could be doing differently today.”

Watch Bolles’ talk and those of dozens of other experts for more insights into building a human-centric future of work here.

Image Credit: www_slon_pics from Pixabay
