Tag Archives: brain
#439280 Google and Harvard Unveil the Largest ...
Last Tuesday, teams from Google and Harvard published an intricate map of every cell and connection in a cubic millimeter of the human brain.
The mapped region encompasses the various layers and cell types of the cerebral cortex, a region of brain tissue associated with higher-level cognition, such as thinking, planning, and language. According to Google, it’s the largest brain map at this level of detail to date, and it’s freely available to scientists (and the rest of us) online. (Really. Go here. Take a stroll.)
To make the map, the teams sliced donated tissue into 5,300 sections, each 30 nanometers thick, and imaged them with a scanning electron microscope at a resolution of 4 nanometers. The resulting 225 million images were computationally aligned and stitched back into a 3D digital representation of the region. Machine learning algorithms segmented individual cells and classified synapses, axons, dendrites, and other structures, and humans checked their work. (The team posted a preprint paper about the map on bioRxiv.)
Last year, Google and the Janelia Research Campus of the Howard Hughes Medical Institute made headlines when they similarly mapped a portion of a fruit fly brain. That map, at the time the largest yet, covered some 25,000 neurons and 20 million synapses. Beyond the shift to human tissue, itself notable, the new map includes tens of thousands of neurons and 130 million synapses. It takes up 1.4 petabytes of disk space.
By comparison, over three decades’ worth of satellite images of Earth by NASA’s Landsat program require 1.3 petabytes of storage. Collections of brain images on the smallest scales are like “a world in a grain of sand,” the Allen Institute’s Clay Reid told Nature, quoting William Blake in reference to an earlier map of the mouse brain.
All that, however, is but a millionth of the human brain. Which is to say, a similarly detailed map of the entire thing remains years away. Still, the work shows how fast the field is moving. A map of this scale and detail would have been unimaginable a few decades ago.
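For the curious, the scaling is easy to run yourself. A back-of-the-envelope sketch in Python, taking the article's figures at face value:

```python
# Back-of-the-envelope scaling, using the figures above: 1.4 petabytes
# for a cubic millimeter that is roughly one millionth of a human brain.
PETABYTE = 10**15  # bytes

sample_bytes = 1.4 * PETABYTE   # the new Google/Harvard map
brain_fraction = 1e-6           # the sample is ~one millionth of a brain

whole_brain_bytes = sample_bytes / brain_fraction
print(f"{whole_brain_bytes / 10**21:.1f} zettabytes")
# -> 1.4 zettabytes, on the order of a million Landsat archives
```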
How to Map a Brain
The study of the brain’s cellular circuitry is known as connectomics.
Obtaining the human connectome, or the wiring diagram of a whole brain, is a moonshot akin to the Human Genome Project. And like sequencing the genome, at first it seemed an impossible feat.
The only complete connectomes are for simple creatures: the nematode worm (C. elegans) and the larva of a sea creature called C. intestinalis. There’s a very good reason for that. Until recently, the mapping process was time-consuming and costly.
Researchers mapping C. elegans in the 1980s used a film camera attached to an electron microscope to image slices of the worm, then reconstructed the neurons and synaptic connections by hand, like a maddeningly difficult three-dimensional puzzle. C. elegans has only 302 neurons and roughly 7,000 synapses, but the rough draft of its connectome took 15 years, and a final draft took another 20. Clearly, this approach wouldn’t scale.
What’s changed? In short, automation.
These days the images themselves are, of course, digital. A process known as focused ion beam milling shaves the face of the tissue away a few nanometers at a time. After each layer is vaporized, an electron microscope images the newly exposed surface; the cycle of milling and imaging repeats until all that's left of the slice of tissue is a nanometer-resolution digital copy. It's a far cry from the days of Kodachrome.
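In code terms, the acquisition cycle is a simple loop, even if the physics isn't. A minimal sketch, where mill_layer and acquire_image are hypothetical stand-ins for the instrument-control calls a real FIB-SEM system would provide:

```python
LAYER_THICKNESS_NM = 30  # tissue removed per milling pass (per the article)

def digitize_block(acquire_image, mill_layer, total_depth_nm):
    """Alternate imaging and milling until the tissue block is consumed.

    acquire_image() returns a 2D electron-microscope image of the
    exposed face; mill_layer(nm) vaporizes that face with the ion beam.
    The returned stack of images is the 3D digital copy of the block.
    """
    volume = []
    depth = 0
    while depth < total_depth_nm:
        volume.append(acquire_image())   # image the freshly exposed face
        mill_layer(LAYER_THICKNESS_NM)   # shave it away to expose the next
        depth += LAYER_THICKNESS_NM
    return volume
```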
But maybe the most dramatic improvement is what happens after scientists complete that pile of images.
Instead of assembling them by hand, algorithms take over. Their first job is ordering the imaged slices. Then they do something that was impossible until the last decade: they line up the images just so, tracing the path of cells and synapses between them and thus building a 3D model. Humans still proofread the results, but they don’t do the hardest bit anymore. (Even the proofreading can be crowdsourced. Renowned neuroscientist and connectomics proponent Sebastian Seung, for example, created a game called Eyewire, where thousands of volunteers review structures.)
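The article doesn't detail the alignment algorithms, which in practice are sophisticated, learned, and elastic. But the core problem, estimating how one slice image is offset from the next, has a classic solution worth seeing. A minimal phase-correlation sketch using nothing but NumPy:

```python
import numpy as np

def phase_correlation(a, b):
    """Estimate the circular shift (dy, dx) such that
    np.roll(b, (dy, dx), axis=(0, 1)) best matches image a.

    A pure translation only changes the phase of an image's Fourier
    transform, so normalizing away the magnitude leaves a spectrum
    whose inverse FFT is a sharp peak at the offset.
    """
    cross = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    cross /= np.abs(cross) + 1e-12  # keep phase, discard magnitude
    corr = np.fft.ifft2(cross).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around to negative shifts.
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, a.shape))
```

Real pipelines also have to handle warped, torn, and stained tissue, not just clean translations, which is where the machine learning earns its keep.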
“It’s truly beautiful to look at,” Harvard’s Jeff Lichtman, whose lab collaborated with Google on the new map, told Nature in 2019. The programs can trace out neurons faster than the team can churn out image data, he said. “We’re not able to keep up with them. That’s a great place to be.”
But Why…?
In a 2010 TED talk, Seung told the audience you are your connectome. Reconstruct the connections and you reconstruct the mind itself: memories, experience, and personality.
But connectomics has not been without controversy over the years.
Not everyone believes mapping the connectome at this level of detail is necessary for a deep understanding of the brain. And, especially in the field’s earlier, more artisanal past, researchers worried the scale of resources required simply wouldn’t yield comparably valuable (or timely) results.
“I don’t need to know the precise details of the wiring of each cell and each synapse in each of those brains,” neuroscientist Anthony Movshon said in 2019. “What I need to know, instead, is the organizational principles that wire them together.” These, Movshon believes, can likely be inferred from observations at lower resolutions.
Also, a static snapshot of the brain’s physical connections doesn’t necessarily explain how those connections are used in practice.
“A connectome is necessary, but not sufficient,” some scientists have said over the years. Indeed, it may be in the combination of brain maps—including functional, higher-level maps that track signals flowing through neural networks in response to stimuli—that the brain’s inner workings will be illuminated in the sharpest detail.
Still, the C. elegans connectome has proven to be a foundational building block for neuroscience over the years. And the growing speed of mapping is beginning to suggest goals that once seemed impractical may actually be within reach in the coming decades.
Are We There Yet?
Seung has said that when he first started out he estimated it’d take a million years for a person to manually trace all the connections in a cubic millimeter of human cortex. The whole brain, he further inferred, would take on the order of a trillion years.
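The arithmetic behind that inference is stark (a human brain is roughly a million cubic millimeters):

```python
years_per_mm3 = 1e6      # Seung's estimate for hand-tracing one cubic millimeter
brain_volume_mm3 = 1e6   # approximate volume of a whole human brain

print(f"{years_per_mm3 * brain_volume_mm3:.0e} person-years")  # 1e+12: a trillion
```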
That’s why automation and algorithms have been so crucial to the field.
Janelia’s Gerry Rubin told Stat he and his team have overseen a 1,000-fold increase in mapping speed since they began work on the fruit fly connectome in 2008. The full connectome—the first part of which was completed last year—may arrive in 2022.
Other groups are working on other animals, like octopuses; comparing how different forms of intelligence are wired up, they say, may prove particularly rich ground for discovery.
The full connectome of a mouse, a project already underway, may follow the fruit fly by the end of the decade. Rubin estimates going from mouse to human would need another million-fold jump in mapping speed. But he points to the trillion-fold increase in DNA sequencing speed since 1973 to show such dramatic technical improvements aren’t unprecedented.
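As a purely illustrative extrapolation, if mapping speed were to improve at the same average pace DNA sequencing has managed since 1973, that million-fold jump would take roughly two decades:

```python
import math

# Illustrative only: assumes steady exponential progress and dates the
# sequencing comparison from 1973 to roughly when this piece was written.
sequencing_gain = 1e12                       # trillion-fold, per Rubin
years_elapsed = 2021 - 1973                  # ~48 years
doubling_time = years_elapsed / math.log2(sequencing_gain)  # ~1.2 years

mapping_gain = 1e6                           # mouse-to-human jump needed
print(f"{math.log2(mapping_gain) * doubling_time:.0f} years")  # ~24
```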
The genome may be an apt comparison in another way too. Even after sequencing the first human genome, it’s taken many years to scale genomics to the point we can more fully realize its potential. Perhaps the same will be true of connectomics.
Even as the technology opens new doors, it may take time to understand and make use of all it has to offer.
“I believe people were impatient about what [connectomes] would provide,” Joshua Vogelstein, cofounder of the Open Connectome Project, told The Verge last year. “The amount of time between a good technology being seeded, and doing actual science using that technology is often approximately 15 years. Now it’s 15 years later and we can start doing science.”
Proponents hope brain maps will yield new insights into how the brain works—from thinking to emotion and memory—and how to better diagnose and treat brain disorders. Others, Google among them no doubt, hope to glean insights that could lead to more efficient computing (the brain is astonishing in this respect) and powerful artificial intelligence.
There’s no telling exactly what scientists will find as, neuron by synapse, they map the inner workings of our minds—but it seems all but certain great discoveries await.
Image Credit: Google / Harvard
#439164 Advancing AI With a Supercomputer: A ...
Building a computer that can support artificial intelligence at the scale and complexity of the human brain will be a colossal engineering effort. Now researchers at the National Institute of Standards and Technology have outlined how they think we’ll get there.
How, when, and whether we’ll ever create machines that can match our cognitive capabilities is a topic of heated debate among both computer scientists and philosophers. One of the most contentious questions is the extent to which the solution needs to mirror our best example of intelligence so far: the human brain.
Rapid advances in AI powered by deep neural networks—which despite their name operate very differently than the brain—have convinced many that we may be able to achieve “artificial general intelligence” without mimicking the brain’s hardware or software.
Others think we’re still missing fundamental aspects of how intelligence works, and that the best way to fill the gaps is to borrow from nature. For many that means building “neuromorphic” hardware that more closely mimics the architecture and operation of biological brains.
The problem is that the existing computer technology we have at our disposal looks very different from biological information processing systems, and operates on completely different principles. For a start, modern computers are digital and neurons are analog. And although both rely on electrical signals, they come in very different flavors, and the brain also uses a host of chemical signals to carry out processing.
Now though, researchers at NIST think they’ve found a way to combine existing technologies in a way that could mimic the core attributes of the brain. Using their approach, they outline a blueprint for a “neuromorphic supercomputer” that could not only match, but surpass the physical limits of biological systems.
The key to their approach, outlined in Applied Physics Letters, is a combination of electronics and optical technologies. The logic is that electronics are great at computing, while optical systems can transmit information at the speed of light, so combining them is probably the best way to mimic the brain’s excellent computing and communication capabilities.
It’s not a new idea, but so far getting our best electronic and optical hardware to gel has proven incredibly tough. The team thinks they’ve found a potential workaround: dropping the temperature of the system to negative 450 degrees Fahrenheit (about 5 kelvin).
While that might seem to only complicate matters, it actually opens up a host of new hardware possibilities. There are a bunch of high-performance electronic and optical components that only work at these frigid temperatures, like superconducting electronics, single-photon detectors, and silicon LEDs.
The researchers propose using these components to build artificial neurons that operate more like their biological cousins than conventional computer components, firing off electrical impulses, or spikes, rather than shuttling numbers around.
Each neuron has thousands of artificial synapses made from single photon detectors, which pick up optical messages from other neurons. These incoming signals are combined and processed by superconducting circuits, and once they cross a certain threshold a silicon LED is activated, sending an optical impulse to all downstream neurons.
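The paper's superconducting circuit designs go far deeper than this article can, but the behavior just described maps neatly onto a classic integrate-and-fire neuron. A toy sketch, with every name and number illustrative rather than taken from the paper:

```python
class OptoNeuron:
    """Toy integrate-and-fire neuron echoing the design described above:
    single-photon detections accumulate as potential, and crossing a
    threshold fires the 'LED', sending a photon to downstream neurons."""

    def __init__(self, threshold=20.0, leak=0.95):
        self.potential = 0.0
        self.threshold = threshold  # detections needed to fire
        self.leak = leak            # decay between time steps
        self.downstream = []        # neurons this neuron's LED shines on

    def receive_photon(self, weight=1.0):
        self.potential += weight    # a synapse's photon detector clicked

    def step(self):
        self.potential *= self.leak
        if self.potential >= self.threshold:
            self.potential = 0.0    # reset after firing
            for target in self.downstream:
                target.receive_photon()  # optical fan-out to the network
```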
The researchers envisage combining millions of these neurons on 300-millimeter silicon wafers and then stacking the wafers to create a highly interconnected network that mimics the architecture of the brain, with short-range connections dealt with by optical waveguides on each chip and long-range ones dealt with by fiber optic cables.
They acknowledge that the need to cryogenically cool the entire device is a challenge. But they say the improved power efficiency of their design should cancel out the cost of this cooling, and a system on the scale of the human brain should require no more power or space than a modern supercomputer. They also point out that there is significant R&D going into cryogenically cooled quantum computers, which they could likely piggyback off of.
Some of the basic components of the system have already been experimentally demonstrated by the researchers, though they admit there’s still a long way to go to put all the pieces together. While many of these components are compatible with standard electronics fabrication, finding ways to manufacture them cheaply and integrate them will be a mammoth task.
Perhaps more important is the question of what kind of software the machine would run. It’s designed to implement “spiking neural networks” similar to those found in the brain, but our understanding of biological neural networks is still rudimentary, and our ability to mimic them is even worse. While both scientists and tech companies have been experimenting with the approach, it is still far less capable than deep learning.
Given the enormous engineering challenge involved in building a device of this scale, it may be a while before this blueprint makes it off the drawing board. But the proposal is an intriguing new chapter in the hunt for artificial general intelligence.
Image Credit: InspiredImages from Pixabay
#439110 Robotic Exoskeletons Could One Day Walk ...
Engineers, using artificial intelligence and wearable cameras, now aim to help robotic exoskeletons walk by themselves.
Increasingly, researchers around the world are developing lower-body exoskeletons to help people walk. These are essentially walking robots users can strap to their legs to help them move.
One problem with such exoskeletons: They often depend on manual controls to switch from one mode of locomotion to another, such as from sitting to standing, or standing to walking, or walking on the ground to walking up or down stairs. Relying on joysticks or smartphone apps every time you want to change the way you move can prove awkward and mentally taxing, says Brokoslaw Laschowski, a robotics researcher at the University of Waterloo in Canada.
Scientists are working on automated ways to help exoskeletons recognize when to switch locomotion modes — for instance, using sensors attached to legs that can detect bioelectric signals sent from your brain to your muscles telling them to move. However, this approach comes with a number of challenges, such as how skin conductivity can change as a person’s skin gets sweatier or dries off.
Now several research groups are experimenting with a new approach: fitting exoskeleton users with wearable cameras to provide the machines with vision data that will let them operate autonomously. Artificial intelligence (AI) software can analyze this data to recognize stairs, doors, and other features of the surrounding environment and calculate how best to respond.
Laschowski leads the ExoNet project, the first open-source database of high-resolution wearable camera images of human locomotion scenarios. It holds more than 5.6 million images of indoor and outdoor real-world walking environments. The team used this data to train deep-learning algorithms; their convolutional neural networks can already automatically recognize different walking environments with 73 percent accuracy “despite the large variance in different surfaces and objects sensed by the wearable camera,” Laschowski notes.
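The article doesn't specify ExoNet's architecture, but the standard recipe for this kind of image classifier is transfer learning from a pretrained backbone. A minimal PyTorch sketch, with the class count and setup purely illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 6  # illustrative: level ground, stairs up/down, doors, etc.

# Start from an ImageNet-pretrained convolutional network and retrain
# its final layer to label walking environments from camera frames.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(frames, labels):
    """One gradient step on a batch of (camera frame, environment) pairs."""
    optimizer.zero_grad()
    loss = loss_fn(model(frames), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```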
According to Laschowski, a potential limitation of their work is its reliance on conventional 2-D images, whereas depth cameras could also capture potentially useful distance data. He and his collaborators ultimately chose not to rely on depth cameras for a number of reasons, including the fact that the accuracy of depth measurements typically degrades in outdoor lighting and with increasing distance, he says.
In similar work, researchers in North Carolina had volunteers with cameras either mounted on their eyeglasses or strapped onto their knees walk through a variety of indoor and outdoor settings to capture the kind of image data exoskeletons might use to see the world around them. The aim? “To automate motion,” says Edgar Lobaton, an electrical engineering researcher at North Carolina State University. He says they are focusing on how AI software might reduce uncertainty due to factors such as motion blur or overexposed images “to ensure safe operation. We want to ensure that we can really rely on the vision and AI portion before integrating it into the hardware.”
In the future, Laschowski and his colleagues will focus on improving the accuracy of their environmental analysis software with low computational and memory storage requirements, which are important for onboard, real-time operations on robotic exoskeletons. Lobaton and his team also seek to account for uncertainty introduced into their visual systems by movement.
Ultimately, the ExoNet researchers want to explore how AI software can transmit commands to exoskeletons so they can perform tasks such as climbing stairs or avoiding obstacles based on a system’s analysis of a user’s current movements and the upcoming terrain. With autonomous cars as inspiration, they are seeking to develop autonomous exoskeletons that can handle the walking task without human input, Laschowski says.
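Neither article spells out the control logic, but one plausible safety-first pattern, consistent with the user override Laschowski describes next, is to gate automatic mode switches on classifier confidence. A hypothetical sketch:

```python
CONFIDENCE_THRESHOLD = 0.90  # illustrative: only act on confident predictions

def choose_mode(class_probs, current_mode, user_override=None):
    """Map classifier output to a locomotion mode, deferring to the user.

    class_probs: dict like {"level_ground": 0.05, "stairs_up": 0.93, ...}
    user_override: a mode the user selected manually, or None.
    """
    if user_override is not None:        # the user always wins
        return user_override
    mode, confidence = max(class_probs.items(), key=lambda kv: kv[1])
    if confidence >= CONFIDENCE_THRESHOLD:
        return mode                      # confident: switch automatically
    return current_mode                  # uncertain: keep the current mode
```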
However, Laschowski adds, “User safety is of the utmost importance, especially considering that we're working with individuals with mobility impairments,” resulting perhaps from advanced age or physical disabilities.
“The exoskeleton user will always have the ability to override the system should the classification algorithm or controller make a wrong decision.”
#439077 How Scientists Grew Human Muscles in Pig ...
The little pigs bouncing around the lab looked exceedingly normal. Yet their adorable exterior hid a remarkable secret: each piglet carried two different sets of genes. For now, both sets came from their own species. But one day, one of those sets may be human.
The piglets are chimeras—creatures with intermingled sets of genes, as if multiple entities were seamlessly mashed together. Named after the Greek lion-goat-serpent monsters, chimeras may hold the key to an endless supply of human organs and tissues for transplant. The crux is growing these human parts in another animal—one close enough in size and function to our own.
Last week, a team from the University of Minnesota unveiled two mind-bending chimeras. One was a litter of joyous little piglets, each propelled by muscles grown from a different pig. The other was pig embryos, transplanted into surrogate pigs, that developed human muscles for more than 20 days.
The study, led by Drs. Mary and Daniel Garry at the University of Minnesota, had a therapeutic point: engineering a way to replace lost muscle, especially the muscles around our skeletons that allow us to move and navigate the world. Trauma and injury, such as from firearm wounds or car crashes, can damage muscle tissue beyond the point of repair. Unfortunately, muscles are also stubborn in that donor tissue from cadavers doesn’t usually “take” at the injury site. For now, there are no effective treatments for severe muscle death, called volumetric muscle loss.
The new human-pig hybrids are designed to tackle this problem. Muscle wasting aside, the study also points to a clever “hack” that increases the amount of human tissue inside a growing pig embryo.
If further improved, the technology could “provide an unlimited supply of organs for transplantation,” said Dr. Mary Garry to Inverse. What’s more, because the human tissue can be sourced from patients themselves, the risk of rejection by the immune system is relatively low—even when grown inside a pig.
“The shortage of organs for heart transplantation, vascular grafting, and skeletal muscle is staggering,” said Garry. Human-animal chimeras could have a “seismic impact” that transforms organ transplantation and helps solve the organ shortage crisis.
That is, if society accepts the idea of a semi-humanoid pig.
Wait…But How?
The new study took a page from previous chimera recipes.
The main ingredients and steps go like this: first, you need an embryo that lacks the ability to develop a tissue or organ. This leaves an “empty slot” of sorts that you can fill with another set of genes—pig, human, or even monkey.
Second, you need to fine-tune the recipe so that the embryos “take” the new genes, incorporating them into their bodies as if they were their own. Third, the new genes activate to instruct the growing embryo to make the necessary tissue or organs without harming the overall animal. Finally, the foreign genes need to stay put, without cells migrating to another body part—say, the brain.
Not exactly straightforward, eh? The piglets are technological wonders that mix cutting-edge gene editing with cloning technologies.
The team went for two chimeras: one with two sets of pig genes, the other with a pig and human mix. Both started with a pig embryo that can’t make its own skeletal muscles (those are the muscles surrounding your bones). Using CRISPR, the gene-editing Swiss Army Knife, they snipped out three genes that are absolutely necessary for those muscles to develop. Like hitting a bullseye with three arrows simultaneously, it’s already a technological feat.
Here’s the really clever part: the muscles around your bones have a slightly different genetic makeup than the ones that line your blood vessels or the ones that pump your heart. While the resulting pig embryos had severe muscle deformities as they developed, their hearts beat as normal. This means the gene editing cut only impacted skeletal muscles.
Then came step two: replacing the missing genes. Using a microneedle, the team injected a fertilized and slightly developed pig egg—called a blastomere—into the embryo. If left on its natural course, a blastomere eventually develops into another embryo. This step “smashes” the two sets of genes together, with the newcomer filling the muscle void. The hybrid embryo was then placed into a surrogate, and roughly four months later, chimeric piglets were born.
Equipped with foreign DNA, the little guys nevertheless seemed totally normal, nosing around the lab and running everywhere without obvious clumsy stumbles. Under the microscope, their “xenomorph” muscles were indistinguishable from run-of-the-mill muscle tissue: no signs of damage or inflammation, and as stretchy and tough as muscles usually are. What’s more, the foreign DNA seemed to have developed only into muscles, even though it was present across the body. Extensive fishing experiments found no trace of the injected set of genes inside blood vessels or the brain.
A Better Human-Pig Hybrid
Confident in their recipe, the team next repeated the experiment with human cells, with a twist. Instead of using controversial human embryonic stem cells, which are derived from early-stage embryos, they relied on induced pluripotent stem cells (iPSCs). These are skin cells that have been reverted back into a stem cell state.
Unlike previous attempts at making human chimeras, the team then scoured the genetic landscape of how pig and human embryos develop to find any genetic “brakes” that could derail the process. One gene, TP53, stood out and was promptly eliminated with CRISPR.
This approach provides a way for future studies to similarly increase the efficiency of interspecies chimeras, the team said.
The human-pig embryos were then carefully grown inside surrogate pigs for less than a month, and extensively analyzed. By day 20, the hybrids had already grown detectable human skeletal muscle. Similar to the pig-pig chimeras, the team didn’t detect any signs that the human genes had sprouted cells that would eventually become neurons or other non-muscle cells.
For now, human-animal chimeras are not allowed to grow to term, in part to stem the theoretical possibility of engineering humanoid hybrid animals (shudder). Even so, the specter of a sentient human-pig chimera is something the team specifically addressed. Through multiple experiments, they found no trace of human genes in the embryos’ brain stem cells 20 and 27 days into development. Similarly, human donor genes were absent in cells that would become the hybrid embryos’ reproductive cells.
Despite bioethical quandaries and legal restrictions, human-animal chimeras have taken off, both as a source of insight into human brain development and a well of personalized organs and tissues for transplant. In 2019, Japan lifted its ban on developing human brain cells inside animal embryos, as well as its limit on growing such embryos to term—to global controversy. There’s also the question of animal welfare, given that hybrid clones will essentially become involuntary organ donors.
As the debates rage on, scientists are nevertheless pushing the limits of human-animal chimeras, while treading as carefully as possible.
“Our data…support the feasibility of the generation of these interspecies chimeras, which will serve as a model for translational research or, one day, as a source for xenotransplantation,” the team said.
Image Credit: Christopher Carson on Unsplash