Tag Archives: the brain

#439280 Google and Harvard Unveil the Largest ...

Last Tuesday, teams from Google and Harvard published an intricate map of every cell and connection in a cubic millimeter of the human brain.

The mapped region encompasses the various layers and cell types of the cerebral cortex, a region of brain tissue associated with higher-level cognition, such as thinking, planning, and language. According to Google, it’s the largest brain map at this level of detail to date, and it’s freely available to scientists (and the rest of us) online. (Really. Go here. Take a stroll.)

To make the map, the teams sliced donated tissue into 5,300 sections, each 30 nanometers thick, and imaged them with a scanning electron microscope at a resolution of 4 nanometers. The resulting 225 million images were computationally aligned and stitched into a 3D digital representation of the region. Machine learning algorithms segmented individual cells and classified synapses, axons, dendrites, and other structures, and humans checked their work. (The team posted a preprint paper about the map on bioRxiv.)
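At its core, the align-and-stitch step estimates how each imaged slice is offset from its neighbor. Below is a minimal toy sketch of that idea using FFT-based phase correlation on synthetic images; the real pipeline handles warping, tiling, and vastly larger data, and none of the names here come from the paper.

```python
import numpy as np

def estimate_shift(fixed, moving):
    """Estimate the integer (row, col) translation that aligns `moving`
    to `fixed`, via FFT-based phase correlation."""
    f = np.fft.fft2(fixed)
    m = np.fft.fft2(moving)
    cross_power = np.conj(f) * m
    cross_power /= np.abs(cross_power) + 1e-12  # keep phase only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap peaks past the halfway point back to negative shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic "slices": a random texture and a translated copy of it.
rng = np.random.default_rng(0)
slice_a = rng.random((128, 128))
slice_b = np.roll(slice_a, shift=(5, -3), axis=(0, 1))  # known offset

print(estimate_shift(slice_a, slice_b))  # recovers (5, -3)
```

In a real pipeline this per-pair estimate feeds a global optimization so that millions of tiles agree with each other at once.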

Last year, Google and the Janelia Research Campus of the Howard Hughes Medical Institute made headlines when they similarly mapped a portion of a fruit fly brain. That map, at the time the largest yet, covered some 25,000 neurons and 20 million synapses. The new map, notable also for targeting the human brain, includes tens of thousands of neurons and 130 million synapses. It takes up 1.4 petabytes of disk space.

By comparison, over three decades’ worth of satellite images of Earth by NASA’s Landsat program require 1.3 petabytes of storage. Collections of brain images on the smallest scales are like “a world in a grain of sand,” the Allen Institute’s Clay Reid told Nature, quoting William Blake in reference to an earlier map of the mouse brain.

All that, however, is but a millionth of the human brain. Which is to say, a similarly detailed map of the entire thing remains years away. Still, the work shows how fast the field is moving. A map of this scale and detail would have been unimaginable a few decades ago.

How to Map a Brain
The study of the brain’s cellular circuitry is known as connectomics.

Obtaining the human connectome, or the wiring diagram of a whole brain, is a moonshot akin to sequencing the human genome. And like the genome, it at first seemed an impossible feat.

The only complete connectomes are for simple creatures: the nematode worm (C. elegans) and the larva of a sea creature called C. intestinalis. There’s a very good reason for that. Until recently, the mapping process was time-consuming and costly.

Researchers mapping C. elegans in the 1980s used a film camera attached to an electron microscope to image slices of the worm, then reconstructed the neurons and synaptic connections by hand, like a maddeningly difficult three-dimensional puzzle. C. elegans has only 302 neurons and roughly 7,000 synapses, but the rough draft of its connectome took 15 years, and a final draft took another 20. Clearly, this approach wouldn’t scale.

What’s changed? In short, automation.

These days the images themselves are, of course, digital. A process known as focused ion beam milling shaves down each slice of tissue a few nanometers at a time. After one layer is vaporized, an electron microscope images the newly exposed layer. The imaged layer is then shaved away by the ion beam and the next one imaged, until all that’s left of the slice of tissue is a nanometer-resolution digital copy. It’s a far cry from the days of Kodachrome.
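The mill-image-repeat cycle described above can be sketched as a simple loop; the arrays and dimensions below are toy stand-ins, not real instrument parameters.

```python
import numpy as np

# Toy scale: a real run images billions of pixels per exposed face.
N_LAYERS, HEIGHT, WIDTH = 100, 256, 256

rng = np.random.default_rng(1)
tissue = rng.random((N_LAYERS, HEIGHT, WIDTH))  # stand-in tissue block
block = tissue.copy()

volume = []                         # the growing digital copy
while block.shape[0] > 0:
    volume.append(block[0].copy())  # microscope images the exposed face
    block = block[1:]               # ion beam vaporizes that layer
volume = np.stack(volume)

# The physical block is consumed; only the digital copy remains.
print(volume.shape)  # (100, 256, 256)
```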

But maybe the most dramatic improvement is what happens after scientists complete that pile of images.

Instead of assembling them by hand, algorithms take over. Their first job is ordering the imaged slices. Then they do something that was impossible until the last decade. They line up the images just so, tracing the path of cells and synapses between them and thus building a 3D model. Humans still proofread the results, but they don’t do the hardest bit anymore. (Even the proofreading can be crowdsourced. Renowned neuroscientist and connectomics proponent Sebastian Seung, for example, created a game called Eyewire, where thousands of volunteers review structures.)

“It’s truly beautiful to look at,” Harvard’s Jeff Lichtman, whose lab collaborated with Google on the new map, told Nature in 2019. The programs can trace out neurons faster than the team can churn out image data, he said. “We’re not able to keep up with them. That’s a great place to be.”

But Why…?
In a 2010 TED talk, Seung told the audience you are your connectome. Reconstruct the connections and you reconstruct the mind itself: memories, experience, and personality.

But connectomics has not been without controversy over the years.

Not everyone believes mapping the connectome at this level of detail is necessary for a deep understanding of the brain. And, especially in the field’s earlier, more artisanal past, researchers worried the scale of resources required simply wouldn’t yield comparably valuable (or timely) results.

“I don’t need to know the precise details of the wiring of each cell and each synapse in each of those brains,” neuroscientist Anthony Movshon said in 2019. “What I need to know, instead, is the organizational principles that wire them together.” These, Movshon believes, can likely be inferred from observations at lower resolutions.

Also, a static snapshot of the brain’s physical connections doesn’t necessarily explain how those connections are used in practice.

“A connectome is necessary, but not sufficient,” some scientists have said over the years. Indeed, it may be in the combination of brain maps—including functional, higher-level maps that track signals flowing through neural networks in response to stimuli—that the brain’s inner workings will be illuminated in the sharpest detail.

Still, the C. elegans connectome has proven to be a foundational building block for neuroscience over the years. And the growing speed of mapping is beginning to suggest goals that once seemed impractical may actually be within reach in the coming decades.

Are We There Yet?
Seung has said that when he first started out he estimated it’d take a million years for a person to manually trace all the connections in a cubic millimeter of human cortex. The whole brain, he further inferred, would take on the order of a trillion years.
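Those two figures are consistent with each other, given that the human brain’s volume is on the order of a million cubic millimeters (an approximation, not a number from the article):

```python
# Back-of-the-envelope check of Seung's estimates.
years_per_mm3 = 1e6           # manual tracing: a million years per mm^3
brain_volume_mm3 = 1.2e6      # human brain volume, roughly (assumed)

total_years = years_per_mm3 * brain_volume_mm3
print(f"{total_years:.1e}")   # 1.2e+12 -> on the order of a trillion years
```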

That’s why automation and algorithms have been so crucial to the field.

Janelia’s Gerry Rubin told Stat he and his team have overseen a 1,000-fold increase in mapping speed since they began work on the fruit fly connectome in 2008. The full connectome—the first part of which was completed last year—may arrive in 2022.

Other groups are working on other animals, like octopuses, reasoning that comparisons of how different forms of intelligence are wired up may prove particularly rich ground for discovery.

The full connectome of a mouse, a project already underway, may follow the fruit fly by the end of the decade. Rubin estimates going from mouse to human would need another million-fold jump in mapping speed. But he points to the trillion-fold increase in DNA sequencing speed since 1973 to show such dramatic technical improvements aren’t unprecedented.

The genome may be an apt comparison in another way too. Even after sequencing the first human genome, it’s taken many years to scale genomics to the point we can more fully realize its potential. Perhaps the same will be true of connectomics.

Even as the technology opens new doors, it may take time to understand and make use of all it has to offer.

“I believe people were impatient about what [connectomes] would provide,” Joshua Vogelstein, cofounder of the Open Connectome Project, told The Verge last year. “The amount of time between a good technology being seeded, and doing actual science using that technology is often approximately 15 years. Now it’s 15 years later and we can start doing science.”

Proponents hope brain maps will yield new insights into how the brain works—from thinking to emotion and memory—and how to better diagnose and treat brain disorders. Others, Google among them no doubt, hope to glean insights that could lead to more efficient computing (the brain is astonishing in this respect) and powerful artificial intelligence.

There’s no telling exactly what scientists will find as, neuron by synapse, they map the inner workings of our minds—but it seems all but certain great discoveries await.

Image Credit: Google / Harvard

Posted in Human Robots

#439077 How Scientists Grew Human Muscles in Pig ...

The little pigs bouncing around the lab looked exceedingly normal. Yet their adorable exterior hid a remarkable secret: each piglet carried two different sets of genes. For now, both sets came from their own species. But one day, one of those sets may be human.

The piglets are chimeras—creatures with intermingled sets of genes, as if multiple entities were seamlessly mashed together. Named after the Greek lion-goat-serpent monsters, chimeras may hold the key to an endless supply of human organs and tissues for transplant. The crux is growing these human parts in another animal—one close enough in size and function to our own.

Last week, a team from the University of Minnesota unveiled two mind-bending chimeras. One was a litter of joyous little piglets, each propelled by muscles grown from a different pig. The other was pig embryos, transplanted into surrogate pigs, that developed human muscle for more than 20 days.

The study, led by Drs. Mary and Daniel Garry at the University of Minnesota, had a therapeutic point: engineering a brilliant way to replace muscle loss, especially for the muscles around our skeletons that allow us to move and navigate the world. Trauma and injury, such as from firearm wounds or car crashes, can damage muscle tissue beyond the point of repair. Unfortunately, muscles are also stubborn in that donor tissue from cadavers doesn’t usually “take” at the injury site. For now, there are no effective treatments for severe muscle death, called volumetric muscle loss.

The new human-pig hybrids are designed to tackle this problem. Muscle wasting aside, the study also points to a clever “hack” that increases the amount of human tissue inside a growing pig embryo.

If further improved, the technology could “provide an unlimited supply of organs for transplantation,” said Dr. Mary Garry to Inverse. What’s more, because the human tissue can be sourced from patients themselves, the risk of rejection by the immune system is relatively low—even when grown inside a pig.

“The shortage of organs for heart transplantation, vascular grafting, and skeletal muscle is staggering,” said Garry. Human-animal chimeras could have a “seismic impact” that transforms organ transplantation and helps solve the organ shortage crisis.

That is, if society accepts the idea of a semi-humanoid pig.

Wait…But How?
The new study took a page from previous chimera recipes.

The main ingredients and steps go like this: first, you need an embryo that lacks the ability to develop a tissue or organ. This leaves an “empty slot” of sorts that you can fill with another set of genes—pig, human, or even monkey.

Second, you need to fine-tune the recipe so that the embryos “take” the new genes, incorporating them into their bodies as if they were their own. Third, the new genes activate to instruct the growing embryo to make the necessary tissue or organs without harming the overall animal. Finally, the foreign genes need to stay put, without cells migrating to another body part—say, the brain.

Not exactly straightforward, eh? The piglets are technological wonders that mix cutting-edge gene editing with cloning technologies.

The team went for two chimeras: one with two sets of pig genes, the other with a pig and human mix. Both started with a pig embryo that can’t make its own skeletal muscles (those are the muscles surrounding your bones). Using CRISPR, the gene-editing Swiss Army knife, they snipped out three genes that are absolutely necessary for those muscles to develop. Like hitting a bullseye with three arrows simultaneously, it’s already a technological feat.

Here’s the really clever part: the muscles around your bones have a slightly different genetic makeup than the ones that line your blood vessels or the ones that pump your heart. While the resulting pig embryos had severe muscle deformities as they developed, their hearts beat as normal. This means the gene-editing cuts only impacted skeletal muscles.

Then came step two: replacing the missing genes. Using a microneedle, the team injected cells from a fertilized and slightly developed pig egg, called blastomeres, into the embryo. Left to their natural course, those cells would eventually develop into another embryo. This step “smashes” the two sets of genes together, with the newcomer filling the muscle void. The hybrid embryo was then placed into a surrogate, and roughly four months later, chimeric piglets were born.

Equipped with foreign DNA, the little guys nevertheless seemed totally normal, nosing around the lab and running everywhere without obvious clumsy stumbles. Under the microscope, their “xenomorph” muscles were indistinguishable from run-of-the-mill muscle tissue—no signs of damage or inflammation, and as stretchy and tough as muscles usually are. What’s more, the foreign DNA seemed to have only developed into muscles, even though those donor cells were prevalent across the body. Extensive fishing experiments found no trace of the injected set of genes inside blood vessels or the brain.

A Better Human-Pig Hybrid
Confident in their recipe, the team next repeated the experiment with human cells, with a twist. Instead of using controversial human embryonic stem cells, which are derived from early-stage embryos, they relied on induced pluripotent stem cells (iPSCs). These are skin cells that have been reverted back into a stem cell state.

Unlike previous attempts at making human chimeras, the team then scoured the genetic landscape of how pig and human embryos develop to find any genetic “brakes” that could derail the process. One gene, TP53, stood out and was promptly eliminated with CRISPR.

This approach provides a way for future studies to similarly increase the efficiency of interspecies chimeras, the team said.

The human-pig embryos were then carefully grown inside surrogate pigs for less than a month, and extensively analyzed. By day 20, the hybrids had already grown detectable human skeletal muscle. Similar to the pig-pig chimeras, the team didn’t detect any signs that the human genes had sprouted cells that would eventually become neurons or other non-muscle cells.

For now, human-animal chimeras are not allowed to grow to term, in part to stem the theoretical possibility of engineering humanoid hybrid animals (shudder). However, the possibility of a sentient human-pig chimera is something the team specifically addressed. Through multiple experiments, they found no trace of human genes in the embryos’ brain stem cells 20 and 27 days into development. Similarly, human donor genes were absent in cells that would become the hybrid embryos’ reproductive cells.

Despite bioethical quandaries and legal restrictions, human-animal chimeras have taken off, both as a source of insight into human brain development and a well of personalized organs and tissues for transplant. In 2019, Japan lifted its ban on developing human brain cells inside animal embryos, as well as the term limit—to global controversy. There’s also the question of animal welfare, given that hybrid clones will essentially become involuntary organ donors.

As the debates rage on, scientists are nevertheless pushing the limits of human-animal chimeras, while treading as carefully as possible.

“Our data…support the feasibility of the generation of these interspecies chimeras, which will serve as a model for translational research or, one day, as a source for xenotransplantation,” the team said.

Image Credit: Christopher Carson on Unsplash


#439070 Are Digital Humans the Next Step in ...

In the fictional worlds of film and TV, artificial intelligence has been depicted as so advanced that it is indistinguishable from humans. But what if we’re actually getting closer to a world where AI is capable of thinking and feeling?

Tech company UneeQ is embarking on that journey with its “digital humans.” These avatars act as visual interfaces for customer service chatbots, virtual assistants, and other applications. UneeQ’s digital humans appear lifelike not only in terms of language and tone of voice, but also because of facial movements: raised eyebrows, a tilt of the head, a smile, even a wink. They transform a transaction into an interaction: creepy yet astonishing, human, but not quite.

What lies beneath UneeQ’s digital humans? Their 3D faces are modeled on actual human features. Speech recognition enables the avatar to understand what a person is saying, and natural language processing is used to craft a response. Before the avatar utters a word, specific emotions and facial expressions are encoded within the response.

UneeQ may be part of a larger trend towards humanizing computing. ObEN’s digital avatars serve as virtual identities for celebrities, influencers, gaming characters, and other entities in the media and entertainment industry. Meanwhile, Soul Machines is taking a more biological approach, with a “digital brain” that simulates aspects of the human brain to modulate the emotions “felt” and “expressed” by its “digital people.” Amelia is employing a similar methodology in building its “digital employees.” It emulates parts of the brain involved with memory to respond to queries and, with each interaction, learns to deliver more engaging and personalized experiences.

Shiwali Mohan, an AI systems scientist at the Palo Alto Research Center, is skeptical of these digital beings. “They’re humanlike in their looks and the way they sound, but that in itself is not being human,” she says. “Being human is also how you think, how you approach problems, and how you break them down; and that takes a lot of algorithmic design. Designing for human-level intelligence is a different endeavor than designing graphics that behave like humans. If you think about the problems we’re trying to design these avatars for, we might not need something that looks like a human—it may not even be the right solution path.”

And even if these avatars appear near-human, they still evoke an uncanny valley feeling. “If something looks like a human, we have high expectations of them, but they might behave differently in ways that humans just instinctively know how other humans react. These differences give rise to the uncanny valley feeling,” says Mohan.

Yet the demand is there, with Amelia seeing high adoption of its digital employees across the financial, health care, and retail sectors. “We find that banks and insurance companies, which are so risk-averse, are leading the adoption of such disruptive technologies because they understand that the risk of non-adoption is much greater than the risk of early adoption,” says Chetan Dube, Amelia’s CEO. “Unless they innovate their business models and make them much more efficient digitally, they might be left behind.” Dube adds that the COVID-19 pandemic has accelerated adoption of digital employees in health care and retail as well.

Amelia, Soul Machines, and UneeQ are taking their digital beings a step further, enabling organizations to create avatars themselves using low-code or no-code platforms: Digital Employee Builder for Amelia, Creator for UneeQ, and Digital DNA Studio for Soul Machines. Unreal Engine, a game engine developed by Epic Games, is doing the same with MetaHuman Creator, a tool that allows anyone to create photorealistic digital humans. “The biggest motivation for Digital Employee Builder is to democratize AI,” Dube says.

Mohan is cautious about this approach. “AI has problems with bias creeping in from data sets and into the way it speaks. The AI community is still trying to figure out how to measure and counter that bias,” she says. “[Companies] have to have an AI expert on board that can recommend the right things to build for.”

Despite being wary of the technology, Mohan supports the purpose behind these virtual beings and is optimistic about where they’re headed. “We do need these tools that support humans in different kinds of things. I think the vision is the pro, and I’m behind that vision,” she says. “As we develop more sophisticated AI technology, we would then have to implement novel ways of interacting with that technology. Hopefully, all of that is designed to support humans in their goals.”


#439051 ‘Neutrobots’ smuggle drugs ...

A team of researchers from the Harbin Institute of Technology along with partners at the First Affiliated Hospital of Harbin Medical University, both in China, has developed a tiny robot that can ferry cancer drugs through the blood-brain barrier (BBB) without setting off an immune reaction. In their paper published in the journal Science Robotics, the group describes their robot and tests with mice. Junsun Hwang and Hongsoo Choi, with the Daegu Gyeongbuk Institute of Science and Technology in Korea, have published a Focus piece in the same journal issue on the work done by the team in China.


#439042 How Scientists Used Ultrasound to Read ...

Thanks to neural implants, mind reading is no longer science fiction.

As I’m writing this sentence, a tiny chip with arrays of electrodes could sit on my brain, listening in on the crackling of my neurons firing as my hands dance across the keyboard. Sophisticated algorithms could then decode these electrical signals in real time. My brain’s inner language to plan and move my fingers could then be used to guide a robotic hand to do the same. Mind-to-machine control, voilà!

Yet as the name implies, even the most advanced neural implant has a problem: it’s an implant. For electrodes to reliably read the brain’s electrical chatter, they need to pierce through the brain’s protective membrane and into brain tissue. Danger of infection aside, over time, damage accumulates around the electrodes, distorting their signals or even rendering them unusable.

Now, researchers from Caltech have paved a way to read the brain without any physical contact. Key to their device is a relatively new superstar in neuroscience: functional ultrasound, which uses sound waves to capture activity in the brain.

In monkeys, the technology could reliably predict their eye movement and hand gestures after just a single trial—without the usual lengthy training process needed to decode a movement. If adopted by humans, the new mind-reading tech represents a triple triumph: it requires minimal surgery and minimal learning, but yields maximal resolution for brain decoding. For people who are paralyzed, it could be a paradigm shift in how they control their prosthetics.

“We pushed the limits of ultrasound neuroimaging and were thrilled that it could predict movement,” said study author Dr. Sumner Norman.

To Dr. Krishna Shenoy at Stanford, who was not involved, the study will finally put ultrasound “on the map as a brain-machine interface technique. Adding to this toolkit is spectacular,” he said.

Breaking the Sound Barrier
Using sound to decode brain activity might seem preposterous, but ultrasound has had quite the run in medicine. You’ve probably heard of its most common use: taking photos of a fetus in pregnancy. The technique uses a transducer, which emits ultrasound pulses into the body and finds boundaries in tissue structure by analyzing the sound waves that bounce back.

Roughly a decade ago, neuroscientists realized they could adapt the tech for brain scanning. Rather than directly measuring the brain’s electrical chatter, it looks at a proxy—blood flow. When certain brain regions or circuits are active, the brain requires much more energy, which is provided by increased blood flow. In this way, functional ultrasound works similarly to functional MRI, but at a far higher resolution—roughly ten times, the authors said. Plus, people don’t have to lie very still in an expensive, claustrophobic magnet.

“A key question in this work was: If we have a technique like functional ultrasound that gives us high-resolution images of the brain’s blood flow dynamics in space and over time, is there enough information from that imaging to decode something useful about behavior?” said study author Dr. Mikhail Shapiro.

There’s plenty of reasons for doubt. As the new kid on the block, functional ultrasound has some known drawbacks. A major one: it gives a far less direct signal than electrodes. Previous studies show that, with multiple measurements, it can provide a rough picture of brain activity. But is that enough detail to guide a robotic prosthesis?

One-Trial Wonder
The new study put functional ultrasound to the ultimate test: could it reliably detect movement intention in monkeys? Because their brains are the most similar to ours, rhesus macaque monkeys are often the critical step before a brain-machine interface technology is adapted for humans.

The team first inserted small ultrasound transducers into the skulls of two rhesus monkeys. While it sounds intense, the surgery doesn’t penetrate the brain or its protective membrane; it’s only on the skull. Compared to electrodes, this means the brain itself isn’t physically harmed.

The device is linked to a computer, which controls the direction of sound waves and captures signals from the brain. For this study, the team aimed the pulses at the posterior parietal cortex, part of the brain’s motor system, which plans movement. If right now you’re thinking about scrolling down this page, that’s the brain region already activated, before your fingers actually perform the movement.

Then came the tests. The first looked at eye movements—something pretty necessary before planning actual body movements without tripping all over the place. Here, the monkeys learned to focus on a central dot on a computer screen. A second dot, either left or right, then flashed. The monkeys’ task was to flick their eyes to the most recent dot. It’s something that seems easy for us, but requires sophisticated brain computation.

The second task was more straightforward. Rather than just moving their eyes to the second target dot, the monkeys learned to grab and manipulate a joystick to move a cursor to that target.

Using brain imaging to decode the mind and control movement. Image Credit: S. Norman, Caltech
As the monkeys learned, so did the device. Ultrasound data capturing brain activity was fed into a sophisticated machine learning algorithm to guess the monkeys’ intentions. Here’s the kicker: once trained, using data from just a single trial, the algorithm was able to correctly predict the monkeys’ actual eye movement—whether left or right—with roughly 78 percent accuracy. The accuracy for correctly maneuvering the joystick was even higher, at nearly 90 percent.
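The study’s actual decoder is more sophisticated, but the single-trial idea can be illustrated with a toy nearest-centroid classifier on synthetic “blood-flow” feature vectors; every name and number below is an illustrative assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(42)
n_voxels = 200  # a toy "ultrasound image" flattened to a feature vector

# Synthetic data: distinct average blood-flow patterns per direction.
left_mean = rng.normal(0, 1, n_voxels)
right_mean = rng.normal(0, 1, n_voxels)

def simulate_trial(mean):
    """One noisy single-trial imaging snapshot."""
    return mean + rng.normal(0, 0.5, n_voxels)

train_left = np.stack([simulate_trial(left_mean) for _ in range(50)])
train_right = np.stack([simulate_trial(right_mean) for _ in range(50)])

# "Training": store the average activity pattern per intended direction.
centroids = {"left": train_left.mean(0), "right": train_right.mean(0)}

def decode(trial):
    """Classify one unseen trial by its nearest class centroid."""
    return min(centroids, key=lambda c: np.linalg.norm(trial - centroids[c]))

# Single-trial prediction on a fresh, held-out trial.
print(decode(simulate_trial(left_mean)))  # "left"
```

The point of the sketch is the workflow, not the classifier: once the average patterns are learned, a single new trial is enough to make a prediction.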

That’s crazy accurate, and very much needed for a mind-controlled prosthetic. If you’re using a mind-controlled cursor or limb, the last thing you’d want is to have to imagine the movement multiple times before you actually click the web button, grab the door handle, or move your robotic leg.

Even more impressive is the resolution. Sound waves might seem like a blunt instrument, but with focused ultrasound, it’s possible to measure brain activity at a resolution of 100 microns—a region containing roughly 10 neurons.

A Cyborg Future?
Before you start worrying about scientists blasting your brain with sound waves to hack your mind, don’t worry. The new tech still requires skull surgery, meaning that a small chunk of skull needs to be removed. However, the brain itself is spared. This means that compared to electrodes, ultrasound could cause less damage and potentially read the mind for far longer than anything currently possible.

There are downsides. Focused ultrasound is far younger than any electrode-based neural implants, and can’t yet reliably decode 360-degree movement or fine finger movements. For now, the tech requires a wire to link the device to a computer, which is off-putting to many people and will prevent widespread adoption. Add to that the inherent downside of focused ultrasound, which lags behind electrical recordings by roughly two seconds.

All that aside, however, the tech is just tiptoeing into a future where minds and machines seamlessly connect. Ultrasound can penetrate the skull, though not yet at the resolution needed for imaging and decoding brain activity. The team is already working with human volunteers with traumatic brain injuries, who had to have a piece of their skulls removed, to see how well ultrasound works for reading their minds.

“What’s most exciting is that functional ultrasound is a young technique with huge potential. This is just our first step in bringing high performance, less invasive brain-machine interface to more people,” said Norman.

Image Credit: Free-Photos / Pixabay
