China and AI: What the World Can Learn ...
In 2017, China announced its ambition to become the world leader in artificial intelligence (AI) by 2030. While the US still leads in absolute terms, China appears to be making more rapid progress than either the US or the EU, and central and local government spending on AI in China is estimated to be in the tens of billions of dollars.
The move has led—at least in the West—to warnings of a global AI arms race and concerns about the growing reach of China’s authoritarian surveillance state. But treating China as a “villain” in this way is both overly simplistic and potentially costly. While there are undoubtedly aspects of the Chinese government’s approach to AI that are highly concerning and rightly should be condemned, it’s important that this does not cloud all analysis of China’s AI innovation.
The world needs to engage seriously with China’s AI development and take a closer look at what’s really going on. The story is complex, and it’s important to highlight where China is making promising advances in useful AI applications, to challenge common misconceptions, and to caution against problematic uses.
Nesta has explored the broad spectrum of AI activity in China—the good, the bad, and the unexpected.
The Good
China’s approach to AI development and implementation is fast-paced and pragmatic, oriented towards finding applications which can help solve real-world problems. Rapid progress is being made in the field of healthcare, for example, as China grapples with providing easy access to affordable and high-quality services for its aging population.
Applications include “AI doctor” chatbots, which help to connect communities in remote areas with experienced consultants via telemedicine; machine learning to speed up pharmaceutical research; and the use of deep learning for medical image processing, which can help with the early detection of cancer and other diseases.
Since the outbreak of Covid-19, medical AI applications have surged as Chinese researchers and tech companies have rushed to combat the virus by speeding up screening, diagnosis, and new drug development. AI tools used in Wuhan, China, to tackle Covid-19 by accelerating CT scan diagnosis are now being used in Italy and have also been offered to the NHS in the UK.
The Bad
But there are also elements of China’s use of AI that are seriously concerning. Positive advances in practical AI applications that are benefiting citizens and society don’t detract from the fact that China’s authoritarian government is also using AI and citizens’ data in ways that violate privacy and civil liberties.
Most disturbingly, reports and leaked documents have revealed the government’s use of facial recognition technologies to enable the surveillance and detention of Muslim ethnic minorities in China’s Xinjiang province.
The emergence of opaque social governance systems that lack accountability mechanisms is also a cause for concern.
In Shanghai’s “smart court” system, for example, AI-generated assessments are used to help with sentencing decisions. But it is difficult for defendants to assess the tool’s potential biases, the quality of the data, and the soundness of the algorithm, making it hard for them to challenge the decisions made.
China’s experience reminds us of the need for transparency and accountability when it comes to AI in public services. Systems must be designed and implemented in ways that are inclusive and protect citizens’ digital rights.
The Unexpected
Commentators have often interpreted the State Council’s 2017 Artificial Intelligence Development Plan as an indication that China’s AI mobilization is a top-down, centrally planned strategy.
But a closer look at the dynamics of China’s AI development reveals the importance of local government in implementing innovation policy. Municipal and provincial governments across China are establishing cross-sector partnerships with research institutions and tech companies to create local AI innovation ecosystems and drive rapid research and development.
Beyond the thriving major cities of Beijing, Shanghai, and Shenzhen, efforts to develop successful innovation hubs are also underway in other regions. A promising example is the city of Hangzhou, in Zhejiang Province, which has established an “AI Town,” clustering together the tech company Alibaba, Zhejiang University, and local businesses to work collaboratively on AI development. China’s local ecosystem approach could offer interesting insights to policymakers in the UK aiming to boost research and innovation outside the capital and tackle longstanding regional economic imbalances.
China’s accelerating AI innovation deserves the world’s full attention, but it is unhelpful to reduce all the many developments into a simplistic narrative about China as a threat or a villain. Observers outside China need to engage seriously with the debate and make more of an effort to understand—and learn from—the nuances of what’s really happening.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Dominik Vanyi on Unsplash
A Renaissance of Genomics and Drugs Is ...
The causes of aging are extremely complex and unclear. But with longevity clinical trials increasing, more answers—and questions—are emerging than ever before.
With the dramatic demonetization of genome reading and editing over the past decade, and Big Pharma, startups, and the FDA starting to face aging as a disease, we are starting to turn those answers into practical ways to extend our healthspan.
In this article, I’ll explore how genome sequencing and editing, along with new classes of anti-aging drugs, are augmenting our biology to further extend our healthy lives.
Genome Sequencing and Editing
Your genome is the software that runs your body. A sequence of 3.2 billion letters makes you “you.” These base pairs of A’s, T’s, C’s, and G’s determine your hair color, your height, your personality, your propensity for disease, your lifespan, and so on.
Until recently, it’s been very difficult to rapidly and cheaply “read” these letters—and even more difficult to understand what they mean. Since 2001, the cost to sequence a whole human genome has plummeted exponentially, outpacing Moore’s Law threefold. From an initial cost of $3.7 billion, it dropped to $10 million in 2006, and to $1,500 in 2015.
Today, the cost of genome sequencing has dropped below $600, and according to Illumina, the world’s leading sequencing company, the process will soon cost about $100 and take about an hour to complete.
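As a rough sanity check on the “outpacing Moore’s Law threefold” claim, the short Python sketch below computes how often sequencing cost halved between the price points quoted above (the 2020 date attached to the sub-$600 figure is our assumption):

```python
# Halving times for sequencing cost, to compare against Moore's Law
# (~2 years per halving). Price points are the approximate figures
# quoted in the text; the 2020 date for sub-$600 is an assumption.
import math

price_points = [(2001, 3.7e9), (2006, 10e6), (2015, 1500), (2020, 600)]

for (y0, c0), (y1, c1) in zip(price_points, price_points[1:]):
    halvings = math.log2(c0 / c1)    # number of times the cost halved
    print(f"{y0}-{y1}: cost halved every {(y1 - y0) / halvings:.2f} years")
```

Over 2001 to 2006, for example, the cost halved roughly every 0.6 years, about three times the pace of Moore’s Law’s two-year halving cadence.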
This represents one of the most powerful and transformative technology revolutions in healthcare. When we understand your genome, we’ll be able to understand how to optimize “you.”
We’ll know the perfect foods, the perfect drugs, the perfect exercise regimen, and the perfect supplements, just for you.
We’ll understand what microbiome types, or gut flora, are ideal for you (more on this in a later article).
We’ll accurately predict how specific sedatives and medicines will impact you.
We’ll learn which diseases and illnesses you’re most likely to develop and, more importantly, how to best prevent them from developing in the first place (rather than trying to cure them after the fact).
CRISPR Gene Editing
In addition to reading the human genome, scientists can now edit a genome using CRISPR/Cas9, a biological system first observed in bacteria in 1987.
Short for Clustered Regularly Interspaced Short Palindromic Repeats and CRISPR-associated protein 9, the editing system was adapted from a naturally occurring defense mechanism found in bacteria.
Here’s how it works. The bacteria capture snippets of DNA from invading viruses (bacteriophages) and use them to create DNA segments known as CRISPR arrays. The CRISPR arrays allow the bacteria to “remember” the viruses (or closely related ones) and defend against future invasions. If the viruses attack again, the bacteria produce RNA segments from the CRISPR arrays to target the viruses’ DNA. The bacteria then use Cas9 to cut the DNA apart, which disables the virus.
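As a loose illustration of the targeting step, the toy sketch below scans a DNA string for a 20-letter guide sequence followed by an “NGG” PAM motif (the tag Cas9 requires next to its target) and reports the cut site three bases upstream of the PAM. It is a simplification, not a biological model: the real guide is RNA and tolerates some mismatches, and all sequences here are invented.

```python
# Toy model of Cas9 target recognition: find the guide (protospacer)
# immediately followed by an NGG PAM, and report the blunt cut site,
# which falls 3 bp upstream of the PAM. Sequences are invented.
import re

def find_cas9_cut_sites(dna: str, guide: str) -> list[int]:
    pattern = re.escape(guide) + r"[ACGT]GG"    # protospacer + NGG PAM
    return [m.start() + len(guide) - 3          # cut 3 bp before the PAM
            for m in re.finditer(pattern, dna)]

viral_dna = "TTACGGATCCGTTAGCAAGCTGACTTGGAAGGCATT"  # hypothetical phage DNA
guide     = "GATCCGTTAGCAAGCTGACT"                  # 20-nt spacer from a CRISPR array
print(find_cas9_cut_sites(viral_dna, guide))        # -> [22]
```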
Most importantly, CRISPR is cheap, quick, easy to use, and more accurate than all previous gene editing methods. As a result, CRISPR/Cas9 has swept through labs around the world as the way to edit a genome. A short search in the literature will show an exponential rise in the number of CRISPR-related publications and patents.
2018: Filled With CRISPR Breakthroughs
Early results are impressive. Researchers have used CRISPR to genetically engineer cocaine resistance into mice, reverse the gene defect causing Duchenne muscular dystrophy (DMD) in dogs, and reduce genetic deafness in mice.
Already this year, CRISPR-edited immune cells have been shown to successfully kill cancer cells in human patients. Researchers have discovered ways to activate CRISPR with light and use the gene-editing technology to better understand Alzheimer’s disease progression.
With great power comes great responsibility, and the opportunity for moral and ethical dilemmas. In 2015, Chinese scientists sparked global controversy when they first edited human embryo cells in the lab with the goal of modifying genes that would make the child resistant to smallpox, HIV, and cholera. Three years later, in November 2018, researcher He Jiankui informed the world that the first CRISPR-edited babies, twin girls, had been born.
To accomplish this, He Jiankui deleted a region of CCR5, the gene encoding a receptor on the surface of white blood cells, introducing a rare, natural genetic variation that makes it more difficult for HIV to infect its favorite target. Because he forged ethical review documents and misled doctors in the process, he was sentenced to three years in prison and fined $429,000 last December.
Coupled with significant ethical conversations necessary for progress, CRISPR will soon provide us the tools to eliminate diseases, create hardier offspring, produce new environmentally resistant crops, and even wipe out pathogens.
Senolytics, Nutraceuticals, and Pharmaceuticals
Over the arc of your life, the cells in your body divide until they reach the Hayflick limit: the number of times a normal human cell population will divide before cell division stops, typically about 50 divisions.
What normally follows next is programmed cell death or destruction by the immune system. A very small fraction of cells, however, become senescent cells and evade this fate to linger indefinitely. These lingering cells secrete a potent mix of molecules that triggers chronic inflammation, damages the surrounding tissue structures, and changes the behavior of nearby cells for the worse. Senescent cells appear to be one of the root causes of aging, causing everything from fibrosis and blood vessel calcification to localized inflammatory conditions such as osteoarthritis to diminished lung function.
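As a crude numerical picture of that process (every number here is invented for illustration), suppose each cell that exhausts its roughly 50 divisions has a small chance of evading clearance:

```python
# Toy model of senescent-cell accumulation. Assume each cell that has
# exhausted its ~50 divisions (the Hayflick limit) has a small, assumed
# 1% chance of evading clearance and lingering as a senescent cell.
import random

P_LINGER = 0.01        # assumed probability of evading clearance
random.seed(1)
exhausted = 100_000    # cells that have reached the Hayflick limit
senescent = sum(random.random() < P_LINGER for _ in range(exhausted))
print(f"{senescent} of {exhausted:,} exhausted cells linger as senescent")
```

Even a 1 percent escape rate leaves about a thousand lingering cells in this toy population, hinting at how their inflammatory secretions can add up over a lifetime.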
Fortunately, both the scientific and entrepreneurial communities have begun to work on senolytic therapies, moving the technology for selectively destroying senescent cells out of the laboratory and into a half-dozen startup companies.
Prominent companies in the field include the following:
Unity Biotechnology is developing senolytic medicines to selectively eliminate senescent cells with an initial focus on delivering localized therapy in osteoarthritis, ophthalmology, and pulmonary disease.
Oisin Biotechnologies is pioneering a programmable gene therapy that can destroy cells based on their internal biochemistry.
SIWA Therapeutics is working on an immunotherapy approach to the problem of senescent cells.
In recent years, researchers have identified or designed a handful of compounds that may curb aging by targeting or regulating senescent cells. Two drugs that have gained mainstream research traction are rapamycin and metformin, though both act by modulating aging pathways and dampening the harmful signaling of senescent cells rather than by killing those cells directly.
(1) Rapamycin
Originally isolated from soil bacteria found on Easter Island, rapamycin acts on the mTOR (mechanistic target of rapamycin) pathway to selectively block a key protein that facilitates cell division. Currently, rapamycin derivatives are widely used for immunosuppression in organ and bone marrow transplants. Research in animal models now suggests that low doses can prolong lifespan and enhance cognitive and immune function.
PureTech Health subsidiary resTORbio (which went public in 2018) is working on a rapamycin-based drug intended to enhance immunity and reduce infection. Their clinical-stage RTB101 drug works by inhibiting part of the mTOR pathway.
Results of the drug’s recent clinical trial include decreased incidence of infection, improved influenza vaccination response, and a 30.6 percent decrease in respiratory tract infections.
Impressive, to say the least.
(2) Metformin
Metformin is a widely used generic drug that curbs glucose production by the liver in type 2 diabetes patients. Researchers have found that metformin also reduces oxidative stress and inflammation, both of which otherwise increase as we age. There is strong evidence that metformin can augment cellular regeneration and dramatically mitigate cellular senescence by reducing both oxidative stress and inflammation.
Over 100 studies registered on ClinicalTrials.gov are currently following up on strong evidence of metformin’s protective effect against cancer.
(3) Nutraceuticals and NAD+
Beyond cellular senescence, certain critical nutrients and proteins tend to decline as a function of age. Nutraceuticals combat aging by supplementing and replenishing these declining nutrient levels.
NAD+ (nicotinamide adenine dinucleotide) exists in every cell, participating in processes ranging from DNA repair to the creation of the energy vital for cellular function. NAD+ levels have been shown to decline as we age.
The Elysium Health Basis supplement aims to elevate NAD+ levels in the body to extend one’s lifespan. Elysium’s first clinical study reports that Basis increases NAD+ levels consistently by a sustained 40 percent.
Conclusion
These examples are just a taste of the tremendous momentum that longevity and aging technology has right now. As artificial intelligence and quantum computing transform how we decode our DNA and how we discover drugs, genetics and pharmaceuticals will become truly personalized.
The next article in this series will demonstrate how artificial intelligence is converging with genetics and pharmaceuticals to transform how we approach longevity, aging, and vitality.
We are edging closer toward a dramatically extended healthspan—where 100 is the new 60. What will you create, where will you explore, and how will you spend your time if you are able to add an additional 40 healthy years to your life?
Join Me
(1) A360 Executive Mastermind: If you’re an exponentially and abundance-minded entrepreneur who would like coaching directly from me, consider joining my Abundance 360 Mastermind, a highly selective community of 360 CEOs and entrepreneurs whom I coach for 3 days every January in Beverly Hills, CA. Through A360, I provide my members with context and clarity about how converging exponential technologies will transform every industry. I’m committed to running A360 for the course of an ongoing 25-year journey as a “countdown to the Singularity.”
If you’d like to learn more and consider joining our 2021 membership, apply here.
(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs—those who want to get involved and play at a higher level. Click here to learn more.
(Both A360 and Abundance-Digital are part of Singularity University—your participation opens you to a global community.)
This article originally appeared on diamandis.com. Read the original article here.
Image Credit: Arek Socha from Pixabay
MIT’s Tiny New Brain Chip Aims for AI ...
The human brain operates on roughly 20 watts of power (a third of a 60-watt light bulb) in a space the size of, well, a human head. The biggest machine learning algorithms use closer to a nuclear power plant’s worth of electricity and racks of chips to learn.
That’s not to slander machine learning, but nature may have a tip or two to improve the situation. Luckily, there’s a branch of computer chip design heeding that call. By mimicking the brain, super-efficient neuromorphic chips aim to take AI off the cloud and put it in your pocket.
The latest such chip is smaller than a piece of confetti and has tens of thousands of artificial synapses made out of memristors—chip components that can mimic their natural counterparts in the brain.
In a recent paper in Nature Nanotechnology, a team of MIT scientists says its tiny new neuromorphic chip was used to store, retrieve, and manipulate images of Captain America’s shield and MIT’s Killian Court. Whereas images stored with existing methods tended to lose fidelity over time, the new chip’s images remained crystal clear.
“So far, artificial synapse networks exist as software. We’re trying to build real neural network hardware for portable artificial intelligence systems,” Jeehwan Kim, associate professor of mechanical engineering at MIT, said in a press release. “Imagine connecting a neuromorphic device to a camera on your car, and having it recognize lights and objects and make a decision immediately, without having to connect to the internet. We hope to use energy-efficient memristors to do those tasks on-site, in real-time.”
A Brain in Your Pocket
Whereas the computers in our phones and laptops use separate digital components for processing and memory—and therefore need to shuttle information between the two—the MIT chip uses analog components called memristors that process and store information in the same place. This is similar to the way the brain works and makes memristors far more efficient. To date, however, they’ve struggled with reliability and scalability.
To overcome these challenges, the MIT team designed a new kind of silicon-based, alloyed memristor. Ions flowing in memristors made from unalloyed materials tend to scatter as the components get smaller, meaning the signal loses fidelity and the resulting computations are less reliable. The team found an alloy of silver and copper helped stabilize the flow of silver ions between electrodes, allowing them to scale the number of memristors on the chip without sacrificing functionality.
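One way to picture why merging memory and processing pays off: in a memristor crossbar, a weight matrix is stored as the conductance of each row-column crossing, and applying input voltages to the rows yields, via Ohm’s and Kirchhoff’s laws, column currents that already equal the matrix-vector product. The minimal numerical sketch below captures that idea in the abstract; it is not a model of MIT’s actual device.

```python
# Idealized analog matrix-vector multiply on a memristor crossbar.
# Each crossing stores a weight as a conductance G[i, j]; driving row i
# with voltage V[i] sends current G[i, j] * V[i] into column j, and
# Kirchhoff's current law sums each column: I[j] = sum_i G[i, j] * V[i].
import numpy as np

rng = np.random.default_rng(0)
G = rng.uniform(0.1, 1.0, size=(4, 3))  # conductances (siemens): 4 rows x 3 cols
V = np.array([0.2, 0.0, 0.5, 0.1])      # voltages on the four row lines (volts)

I = G.T @ V   # column currents (amps): the product, computed "in memory"
print(I)
```

The multiply-accumulate happens where the data is stored, which is exactly the shuttling overhead that separate processor-and-memory designs pay for.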
While MIT’s new chip is promising, there’s likely a ways to go before memristor-based neuromorphic chips go mainstream. Between now and then, engineers like Kim have their work cut out for them to further scale and demonstrate their designs. But if successful, they could make for smarter smartphones and other even smaller devices.
“We would like to develop this technology further to have larger-scale arrays to do image recognition tasks,” Kim said. “And some day, you might be able to carry around artificial brains to do these kinds of tasks, without connecting to supercomputers, the internet, or the cloud.”
Special Chips for AI
The MIT work is part of a larger trend in computing and machine learning. As progress in classical chips has flagged in recent years, there’s been an increasing focus on more efficient software and specialized chips to continue pushing the pace.
Neuromorphic chips, for example, aren’t new. IBM and Intel are developing their own designs. So far, their chips have been based on groups of standard computing components, such as transistors (as opposed to memristors), arranged to imitate neurons in the brain. These chips are, however, still in the research phase.
Graphics processing units (GPUs)—chips originally developed for graphics-heavy work like video games—are the best practical example of specialized hardware for AI and were heavily used early in this generation of machine learning. In the years since, Google, NVIDIA, and others have developed even more specialized chips that cater more specifically to machine learning.
The gains from such specialized chips are already being felt.
In a recent cost analysis of machine learning, research and investment firm ARK Invest said cost declines have far outpaced Moore’s Law. In one example, they found the cost to train an image recognition algorithm (ResNet-50) fell from around $1,000 in 2017 to roughly $10 in 2019. The fall in the cost to actually run such an algorithm was even more dramatic: it took $10,000 to classify a billion images in 2017 and just $0.03 in 2019.
Some of these declines can be traced to better software, but according to ARK, specialized chips have improved performance by nearly 16 times in the last three years.
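Running the arithmetic on those figures shows how far ahead of a Moore’s Law baseline (cost halving roughly every two years) the declines are:

```python
# Annualized cost declines implied by the ARK figures quoted above.
examples = {
    "training (ResNet-50)":  (1_000.0, 10.0, 2017, 2019),
    "inference (1B images)": (10_000.0, 0.03, 2017, 2019),
}

for name, (c0, c1, y0, y1) in examples.items():
    annual = (c1 / c0) ** (1 / (y1 - y0))   # cost fraction remaining per year
    print(f"{name}: {c0 / c1:,.0f}x cheaper, ~{1 - annual:.1%} drop per year")

moore = 0.5 ** (1 / 2)  # Moore's Law baseline: halving every ~2 years
print(f"Moore's Law baseline: ~{1 - moore:.1%} drop per year")
```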
As neuromorphic chips—and other tailored designs—advance further in the years to come, these trends in cost and performance may continue. Eventually, if all goes to plan, we might all carry a pocket brain that can do the work of today’s best AI.
Image credit: Peng Lin