A few years back, DeepMind’s Demis Hassabis famously prophesied that AI and neuroscience would positively feed into each other in a “virtuous circle.” If realized, this would fundamentally expand our insight into intelligence, both machine and human.
We’ve already seen some proofs of concept, at least in the brain-to-AI direction. For example, memory replay, a biological mechanism that fortifies our memories during sleep, also boosted AI learning when abstractly appropriated into deep learning models. Reinforcement learning, loosely based on our motivation circuits, is now behind some of AI’s most powerful tools.
Hassabis is about to be proven right again.
Last week, two studies independently tapped into the power of artificial neural networks (ANNs) to solve a 70-year-old neuroscience mystery: how does our visual system perceive reality?
The first, published in Cell, used generative networks to evolve DeepDream-like images that hyper-activate complex visual neurons in monkeys. These machine artworks are pure nightmare fuel to the human eye; but together, they revealed a fundamental “visual hieroglyph” that may form a basic rule for how we piece together visual stimuli to process sight into perception.
In the second study, a team used a deep ANN model—one thought to mimic biological vision—to synthesize new patterns tailored to control certain networks of visual neurons in the monkey brain. When directly shown to monkeys, the team found that the machine-generated artworks could reliably activate predicted populations of neurons. Future improved ANN models could allow even better control, giving neuroscientists a powerful noninvasive tool to study the brain. The work was published in Science.
The individual results, though fascinating, aren’t necessarily the point. Rather, they illustrate how scientists are now striving to complete the virtuous circle: tapping AI to probe natural intelligence. Vision is only the beginning—the tools can potentially be expanded into other sensory domains. And the more we understand about natural brains, the better we can engineer artificial ones.
It’s a “great example of leveraging artificial intelligence to study organic intelligence,” commented Dr. Roman Sandler at Kernel.co on Twitter.
ANNs and biological vision have quite the history.
In the late 1950s, the legendary neuroscientist duo David Hubel and Torsten Wiesel became some of the first to use mathematical equations to understand how neurons in the brain work together.
In a series of experiments—many using cats—the team carefully dissected the structure and function of the visual cortex. Using myriad images, they revealed that vision is processed in a hierarchy: neurons in “earlier” brain regions, those closer to the eyes, tend to activate when they “see” simple patterns such as lines. As we move deeper into the brain, from the early V1 to the inferotemporal (IT) cortex, a nub located slightly behind our ears, neurons increasingly respond to more complex or abstract patterns, including faces, animals, and objects. The discovery led some scientists to call certain IT neurons “Jennifer Aniston cells,” which fire in response to pictures of the actress regardless of lighting, angle, or haircut. That is, IT neurons somehow distill visual information into the “gist” of things.
That’s not trivial. How complex neural connections increasingly abstract what we see into what we think we see—what we perceive—is also a central question in machine vision: how can we teach machines to transform numbers encoding stimuli into dots, lines, and angles that eventually form “perceptions” and “gists”? The answer could transform self-driving cars, facial recognition, and other computer vision applications as they learn to better generalize.
Hubel and Wiesel’s Nobel-prize-winning studies heavily influenced the birth of ANNs and deep learning. Many early “feed-forward” ANN architectures were based on our visual system; even today, the idea of increasing layers of abstraction—for perception or reasoning—guides computer scientists in building AI that can better generalize. The early romance between vision and deep learning is perhaps the bond that kicked off our current AI revolution.
It only seems fair that AI would feed back into vision neuroscience.
Hieroglyphs and Controllers
In the Cell study, a team led by Dr. Margaret Livingstone at Harvard Medical School tapped into generative networks to unravel IT neurons’ complex visual alphabet.
Scientists have long known that neurons in earlier visual regions (V1) tend to fire in response to “grating patches” oriented in certain ways. Using a limited set of these patches like letters, V1 neurons can “express a visual sentence” and represent any image, said Dr. Arash Afraz at the National Institutes of Health, who was not involved in the study.
But how IT neurons operate remained a mystery. Here, the team used a combination of genetic algorithms and deep generative networks to “evolve” computer art for every studied neuron. In seven monkeys, the team implanted electrodes into various parts of the visual IT region so that they could monitor the activity of individual neurons.
The team showed each monkey an initial set of 40 images. They then picked the top 10 images that stimulated the highest neural activity and combined them with 30 new images to “evolve” the next generation of images. After 250 generations, the technique, dubbed XDREAM, generated a slew of images that mashed up contorted face-like shapes with lines, gratings, and abstract shapes.
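As a rough illustration of that closed evolutionary loop, here is a minimal sketch in Python. It is not the actual XDREAM code: the “images” are short stand-in latent vectors, and the hypothetical `neuron_response` function substitutes a simple similarity score for the firing rate of a real recorded neuron.

```python
import random

random.seed(0)

# Hypothetical stand-in for the recorded neuron: it "fires" more strongly the
# closer a latent code is to a hidden preferred pattern. In the real
# experiment this score was a monkey IT neuron's actual firing rate.
PREFERRED = [0.7, -0.2, 0.5, 0.1]

def neuron_response(latent):
    return -sum((a - b) ** 2 for a, b in zip(latent, PREFERRED))

def mutate(latent, rate=0.1):
    return [g + random.gauss(0, rate) for g in latent]

def crossover(a, b):
    return [random.choice(pair) for pair in zip(a, b)]

def evolve(generations=250, pop_size=40, elite=10):
    pop = [[random.uniform(-1, 1) for _ in range(len(PREFERRED))]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=neuron_response, reverse=True)
        parents = pop[:elite]  # the 10 most activating "images" survive
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - elite)]  # 30 new candidates
        pop = parents + children
    return max(pop, key=neuron_response)

best = evolve()
```

With the 40-strong population, top-10 selection, and 250 generations described above, the loop reliably homes in on whatever pattern the stand-in “neuron” prefers—without anyone ever writing down what that pattern is.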
This image shows the evolution of an optimum image for stimulating a visual neuron in a monkey. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“The evolved images look quite counter-intuitive,” explained Afraz. Some clearly show detailed structures that resemble natural images, while others show complex structures that can’t be characterized by our puny human brains.
This figure shows natural images (right) and images evolved by neurons in the inferotemporal cortex of a monkey (left). Image Credit: Ponce, Xiao, and Schade et al. – Cell.
“What started to emerge during each experiment were pictures that were reminiscent of shapes in the world but were not actual objects in the world,” said study author Carlos Ponce. “We were seeing something that was more like the language cells use with each other.”
This image was evolved by a neuron in the inferotemporal cortex of a monkey using AI. Image Credit: Ponce, Xiao, and Schade et al. – Cell.
Although IT neurons don’t seem to use a simple letter alphabet, they do rely on a vast array of characters, like hieroglyphs or Chinese characters, “each loaded with more information,” said Afraz.
The adaptive nature of XDREAM turns it into a powerful tool to probe the inner workings of our brains—particularly for revealing discrepancies between biology and models.
The Science study, led by Dr. James DiCarlo at MIT, takes a similar approach. Using ANNs to generate new patterns and images, the team was able to selectively predict and independently control neuron populations in a high-level visual region called V4.
“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” said study author Dr. Pouya Bashivan. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”
It suggests that our current ANN models for visual computation “implicitly capture a great deal of visual knowledge” which we can’t really describe, but which the brain uses to turn visual information into perception, the authors said. By testing AI-generated images on biological vision, the team also concluded that today’s ANNs have achieved a degree of understanding and generalization. The results could potentially help engineer even more accurate ANN models of biological vision, which in turn could feed back into machine vision.
“One thing is clear already: Improved ANN models … have led to control of a high-level neural population that was previously out of reach,” the authors said. “The results presented here have likely only scratched the surface of what is possible with such implemented characterizations of the brain’s neural networks.”
To Afraz, the power of AI here is to find cracks in human perception—both our computational models of sensory processes, as well as our evolved biological software itself. AI can be used “as a perfect adversarial tool to discover design cracks” of IT, said Afraz, such as finding computer art that “fools” a neuron into thinking the object is something else.
“As artificial intelligence researchers develop models that work as well as the brain does—or even better—we will still need to understand which networks are more likely to behave safely and further human goals,” said Ponce. “More efficient AI can be grounded by knowledge of how the brain works.”
Image Credit: Sangoiri / Shutterstock.com
According to some scientists, humans really do have a sixth sense. There’s nothing supernatural about it: the sense of proprioception tells you about the relative positions of your limbs and the rest of your body. Close your eyes, block out all sound, and you can still use this internal “map” of your external body to locate your muscles and body parts – you have an innate sense of the distances between them, and the perception of how they’re moving, above and beyond your sense of touch.
This sense is invaluable for allowing us to coordinate our movements. In humans, the brain integrates senses including touch, heat, and the tension in muscle spindles to allow us to build up this map.
Replicating this complex sense has posed a great challenge for roboticists. We can imagine simulating the sense of sight with cameras, sound with microphones, or touch with pressure-pads. Robots with chemical sensors could be far more accurate than us in smell and taste, but building in proprioception, the robot’s sense of itself and its body, is far more difficult, and is a large part of why humanoid robots are so tricky to get right.
Simultaneous localization and mapping (SLAM) software allows robots to use their own senses to build up a picture of their surroundings and environment, but they’d need a keen sense of the position of their own bodies to interact with it. If something unexpected happens, or in dark environments where primary senses are not available, robots can struggle to keep track of their own position and orientation. For human-robot interaction, wearable robotics, and delicate applications like surgery, tiny differences can be extremely important.
In the case of hard robotics, this is generally solved by using a series of strain and pressure sensors in each joint, which allow the robot to determine how its limbs are positioned. That works fine for rigid robots with a limited number of joints, but for softer, more flexible robots, this information is limited. Roboticists are faced with a dilemma: a vast, complex array of sensors for every degree of freedom in the robot’s movement, or limited skill in proprioception?
New techniques, often involving new arrays of sensory material and machine-learning algorithms to fill in the gaps, are starting to tackle this problem. Take the work of Thomas George Thuruthel and colleagues in Pisa and San Diego, who draw inspiration from human proprioception. In a new paper in Science Robotics, they describe the use of soft sensors distributed through a robotic finger at random. This scattered placement echoes the way sensory receptors are distributed throughout human and animal bodies, rather than relying on feedback from a limited number of fixed positions.
The sensors allow the soft robot to react to touch and pressure in many different locations, forming a map of itself as it contorts into complicated positions. The machine-learning algorithm serves to interpret the signals from the randomly-distributed sensors: as the finger moves around, it’s observed by a motion capture system. After training, the robot’s neural network can associate the feedback from the sensors with the position of the finger detected in the motion-capture system, which can then be discarded. The robot observes its own motions to understand the shapes that its soft body can take, and translates them into the language of these soft sensors.
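A toy sketch of this train-then-discard idea, with invented numbers throughout: a single bend angle stands in for the finger’s pose, each simulated sensor responds through an unknown gain and offset, and a plain linear readout fitted by gradient descent substitutes for the paper’s neural network. Motion capture appears only as the source of training labels.

```python
import random

random.seed(0)

# Hypothetical soft finger: each of six randomly placed sensors reports the
# bend angle through an unknown gain and offset, plus a little noise.
N_SENSORS = 6
GAINS = [random.uniform(0.5, 1.5) for _ in range(N_SENSORS)]
OFFSETS = [random.uniform(-0.2, 0.2) for _ in range(N_SENSORS)]

def read_sensors(angle):
    return [g * angle + o + random.gauss(0, 0.01)
            for g, o in zip(GAINS, OFFSETS)]

# Training phase: pair raw sensor readings with motion-capture angle labels.
data = [(read_sensors(a), a)
        for a in (random.uniform(0.0, 1.5) for _ in range(200))]

# Fit a linear readout by stochastic gradient descent (a stand-in for the
# paper's neural network).
w, b, lr = [0.0] * N_SENSORS, 0.0, 0.01
for _ in range(300):
    for x, y in data:
        err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]
        b -= lr * err

def estimate_angle(readings):
    # Deployment phase: the motion-capture rig is gone; only sensors remain.
    return sum(wi * xi for wi, xi in zip(w, readings)) + b
```

Once the readout is fitted, the robot can recover its own pose from sensor signals alone, which is exactly the point of discarding the motion-capture system after training.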
“The advantages of our approach are the ability to predict complex motions and forces that the soft robot experiences (which is difficult with traditional methods) and the fact that it can be applied to multiple types of actuators and sensors,” said Michael Tolley of the University of California San Diego. “Our method also includes redundant sensors, which improves the overall robustness of our predictions.”
The use of machine learning lets the roboticists come up with a reliable model for this complex, non-linear system of motions for the actuators, something difficult to do by directly calculating the expected motion of the soft-bot. It also resembles the human system of proprioception, built on redundant sensors that change and shift in position as we age.
In Search of a Perfect Arm
Another approach to training robots in using their bodies comes from Robert Kwiatkowski and Hod Lipson of Columbia University in New York. In their paper “Task-agnostic self-modeling machines,” also recently published in Science Robotics, they describe a new type of robotic arm.
Robotic arms and hands are getting increasingly dexterous, but training them to grasp a large array of objects and perform many different tasks can be an arduous process. It’s also an extremely valuable skill to get right: Amazon is highly interested in the perfect robot arm. Google hooked together an array of over a dozen robot arms so that they could share information about grasping new objects, in part to cut down on training time.
Individually training a robot arm to perform every individual task takes time and reduces the adaptability of your robot: either you need an ML algorithm with a huge dataset of experiences, or, even worse, you need to hard-code thousands of different motions. Kwiatkowski and Lipson attempt to overcome this by developing a robotic system that has a “strong sense of self”: a model of its own size, shape, and motions.
They do this using deep machine learning. The robot begins with no prior knowledge of its own shape or the underlying physics of its motion. It then executes a thousand random trajectories, recording the motion of its arm. Kwiatkowski and Lipson compare this to a baby in its first year of life observing the motions of its own hands and limbs, fascinated by picking up and manipulating objects.
Again, once the robot has trained itself to interpret these signals and build up a robust model of its own body, it’s ready for the next stage. Using that deep-learning algorithm, the researchers then ask the robot to design strategies to accomplish simple pick-up and place and handwriting tasks. Rather than laboriously and narrowly training itself for each individual task, limiting its abilities to a very narrow set of circumstances, the robot can now strategize how to use its arm for a much wider range of situations, with no additional task-specific training.
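The babble-then-plan procedure can be caricatured in a few lines of Python. The two-link arm, its link lengths, and the nearest-neighbor “self-model” are all hypothetical simplifications; the actual work fits a deep network rather than a lookup over raw experience.

```python
import math
import random

random.seed(1)

# Hypothetical two-joint planar arm standing in for the robot; its link
# lengths are "physics" the learner never sees directly.
L1, L2 = 1.0, 0.7

def real_arm(t1, t2):
    # Where the physical hand actually ends up for a given joint command.
    return (L1 * math.cos(t1) + L2 * math.cos(t1 + t2),
            L1 * math.sin(t1) + L2 * math.sin(t1 + t2))

# Step 1: motor babbling -- execute random joint commands, record outcomes.
experience = []
for _ in range(1000):
    cmd = (random.uniform(0, math.pi), random.uniform(0, math.pi))
    experience.append((cmd, real_arm(*cmd)))

# Step 2: here the recorded experience *is* the crude self-model (the paper
# fits a deep network instead). Step 3: to reach a brand-new target, consult
# the model for the command whose remembered outcome lies closest -- no
# task-specific training and no trial-and-error on the real arm.
def plan_reach(target):
    cmd, _ = min(experience,
                 key=lambda e: (e[1][0] - target[0]) ** 2
                             + (e[1][1] - target[1]) ** 2)
    return cmd

target = (1.2, 0.8)
reached = real_arm(*plan_reach(target))
```

The same self-model serves any reachable target, which mirrors the paper’s point: one round of self-observation replaces many rounds of per-task training.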
In a further experiment, the researchers replaced part of the arm with a “deformed” component, intended to simulate what might happen if the robot was damaged. The robot can then detect that something’s up and “reconfigure” itself, reconstructing its self-model by going through the training exercises once again; it was then able to perform the same tasks with only a small reduction in accuracy.
Machine learning techniques are opening up the field of robotics in ways we’ve never seen before. Combining them with our understanding of how humans and other animals are able to sense and interact with the world around us is bringing robotics closer and closer to becoming truly flexible and adaptable, and, eventually, omnipresent.
But before they can get out and shape the world, as these studies show, they will need to understand themselves.
Image Credit: jumbojan / Shutterstock.com
A new technique using artificial intelligence to manipulate video content gives new meaning to the expression “talking head.”
An international team of researchers showcased the latest advancement in synthesizing facial expressions—including mouth, eyes, eyebrows, and even head position—in video at this month’s 2018 SIGGRAPH, a conference on innovations in computer graphics, animation, virtual reality, and other forms of digital wizardry.
The project is called Deep Video Portraits. It relies on a type of AI called generative adversarial networks (GANs) to modify a “target” actor based on the facial and head movement of a “source” actor. As the name implies, GANs pit two opposing neural networks against one another to create a realistic talking head, right down to the sneer or raised eyebrow.
In this case, the adversaries are actually working together: One neural network generates content, while the other rejects or approves each effort. The back-and-forth interplay between the two eventually produces a realistic result that can easily fool the human eye, including reproducing a static scene behind the head as it bobs back and forth.
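To make the interplay concrete, here is a deliberately tiny, hypothetical GAN on one-dimensional numbers rather than video frames: the “real” data are samples around 3.0, the generator learns a single offset, and the discriminator is one logistic unit. Real systems train full networks on both sides, but the alternating update pattern is the same.

```python
import math
import random

random.seed(3)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

b = 0.0          # generator: g(z) = 0.5 * z + b, so it learns only a mean
w, c = 0.0, 0.0  # discriminator: d(x) = sigmoid(w * x + c)

for _ in range(1000):
    # Discriminator steps: push d(real) toward 1 and d(fake) toward 0.
    for _ in range(5):
        real = random.gauss(3.0, 0.5)
        fake = 0.5 * random.gauss(0.0, 1.0) + b
        dr, df = sigmoid(w * real + c), sigmoid(w * fake + c)
        w += 0.1 * ((1 - dr) * real - df * fake)
        c += 0.1 * ((1 - dr) - df)

    # Generator step: nudge b so the discriminator rates fakes as real.
    fake = 0.5 * random.gauss(0.0, 1.0) + b
    df = sigmoid(w * fake + c)
    b += 0.02 * (1 - df) * w

# By the end, generated samples g(z) cluster near the real data's mean of
# 3.0, at which point the discriminator can no longer tell them apart.
```

The back-and-forth is the whole trick: the discriminator’s rejections are the only training signal the generator ever receives.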
The researchers say the technique can be used by the film industry for a variety of purposes, from editing facial expressions of actors for matching dubbed voices to repositioning an actor’s head in post-production. AI can not only produce highly realistic results, but much quicker ones compared to the manual processes used today, according to the researchers. You can read the full paper of their work here.
“Deep Video Portraits shows how such a visual effect could be created with less effort in the future,” said Christian Richardt, from the University of Bath’s motion capture research center CAMERA, in a press release. “With our approach, even the positioning of an actor’s head and their facial expression could be easily edited to change camera angles or subtly change the framing of a scene to tell the story better.”
AI Tech Different Than So-Called “Deepfakes”
The work is far from the first to employ AI to manipulate video and audio. At last year’s SIGGRAPH conference, researchers from the University of Washington showcased their work using algorithms that inserted audio recordings from a person in one instance into a separate video of the same person in a different context.
In this case, they “faked” a video using a speech from former President Barack Obama addressing a mass shooting incident during his presidency. The AI-doctored video injects the audio into an unrelated video of the president while also blending the facial and mouth movements, creating a pretty credible job of lip synching.
A previous paper by many of the same scientists on the Deep Video Portraits project detailed how they were first able to manipulate a video in real time of a talking head (in this case, actor and former California governor Arnold Schwarzenegger). The Face2Face system pulled off this bit of digital trickery using a depth-sensing camera that tracked the facial expressions of an Asian female source actor.
A less sophisticated method of swapping faces using machine learning software dubbed FakeApp emerged earlier this year. Predictably, the tech—requiring numerous photos of the source actor in order to train the neural network—was used for more juvenile pursuits, such as injecting a person’s face onto a porn star.
The application gave rise to the term “deepfakes,” which is now used somewhat ubiquitously to describe all such instances of AI-manipulated video—much to the chagrin of some of the researchers involved in more legitimate uses.
Fighting AI-Created Video Forgeries
However, the researchers are keenly aware that their work—intended for benign uses such as in the film industry or even to correct gaze and head positions for more natural interactions through video teleconferencing—could be used for nefarious purposes. Fake news is the most obvious concern.
“With ever-improving video editing technology, we must also start being more critical about the video content we consume every day, especially if there is no proof of origin,” said Michael Zollhöfer, a visiting assistant professor at Stanford University and member of the Deep Video Portraits team, in the press release.
Toward that end, the research team is training the same adversarial neural networks to spot video forgeries. They also strongly recommend that developers clearly watermark videos that are edited through AI or otherwise, and denote clearly what part and element of the scene was modified.
To catch less ethical users, the US Department of Defense, through the Defense Advanced Research Projects Agency (DARPA), is supporting a program called Media Forensics. This latest DARPA challenge enlists researchers to develop technologies to automatically assess the integrity of an image or video, as part of an end-to-end media forensics platform.
The DARPA official in charge of the program, Matthew Turek, told MIT Technology Review that so far the program has “discovered subtle cues in current GAN-manipulated images and videos that allow us to detect the presence of alterations.” In one reported example, researchers have targeted eyes, which rarely blink in “deepfakes” like those created by FakeApp, because the AI is trained on still pictures. That method would seem less effective at spotting the sort of forgeries created by Deep Video Portraits, which appears to flawlessly match the entire facial and head movements between the source and target actors.
“We believe that the field of digital forensics should and will receive a lot more attention in the future to develop approaches that can automatically prove the authenticity of a video clip,” Zollhöfer said. “This will lead to ever-better approaches that can spot such modifications even if we humans might not be able to spot them with our own eyes.”
Image Credit: Tancha / Shutterstock.com
Once upon a time, a powerful Sumerian king named Gilgamesh went on a quest, as such characters often do in these stories of myth and legend. Gilgamesh had witnessed the death of his best friend, Enkidu, and, fearing a similar fate, went in search of immortality. The great king failed to find the secret of eternal life but took solace that his deeds would live well beyond his mortal years.
Fast-forward four thousand years, give or take a century, and Gilgamesh (as famous as any B-list celebrity today, despite the passage of time) would probably be heartened to learn that many others have taken up his search for longevity. Today, though, instead of battling epic monsters and the machinations of fickle gods, those seeking to enhance and extend life are cutting-edge scientists and visionary entrepreneurs who are helping unlock the secrets of human biology.
Chief among them is Aubrey de Grey, a biomedical gerontologist who founded the SENS Research Foundation, a Silicon Valley-based research organization that seeks to advance the application of regenerative medicine to age-related diseases. SENS stands for Strategies for Engineered Negligible Senescence, a term coined by de Grey to describe a broad array (seven, to be precise) of medical interventions that attempt to repair or prevent different types of molecular and cellular damage that eventually lead to age-related diseases like cancer and Alzheimer’s.
Many of the strategies focus on senescent cells, which accumulate in tissues and organs as people age. Not quite dead, senescent cells stop dividing but are still metabolically active, spewing out all sorts of proteins and other molecules that can cause inflammation and other problems. In a young body, that’s usually not a problem (and probably part of general biological maintenance), as a healthy immune system can go to work to put out most fires.
However, as we age, senescent cells continue to accumulate, and at some point the immune system retires from fire watch. Welcome to old age.
Of Mice and Men
Researchers like de Grey believe that treating the cellular underpinnings of aging could not only prevent disease but significantly extend human lifespans. How long? Well, if you’re talking to de Grey, Biblical proportions—on the order of centuries.
De Grey says that science has made great strides toward that end in the last 15 years, such as the ability to copy mitochondrial DNA to the nucleus. Mitochondria serve as the power plant of the cell but are highly susceptible to mutations that lead to cellular degeneration. Copying the mitochondrial DNA into the nucleus would help protect it from damage.
Another achievement occurred about six years ago when scientists first figured out how to kill senescent cells. That discovery led to a spate of new experiments in mice indicating that removing these ticking-time-bomb cells prevented disease and even extended their lifespans. Now the anti-aging therapy is about to be tested in humans.
“As for the next few years, I think the stream of advances is likely to become a flood—once the first steps are made, things get progressively easier and faster,” de Grey tells Singularity Hub. “I think there’s a good chance that we will achieve really dramatic rejuvenation of mice within only six to eight years: maybe taking middle-aged mice and doubling their remaining lifespan, which is an order of magnitude more than can be done today.”
Not Horsing Around
Richard G.A. Faragher, a professor of biogerontology at the University of Brighton in the United Kingdom, recently made discoveries in the lab regarding the rejuvenation of senescent cells with chemical compounds found in foods like chocolate and red wine. He hopes to apply his findings to an animal model in the future—in this case, horses.
“We have been very fortunate in receiving some funding from an animal welfare charity to look at potential treatments for older horses,” he explains to Singularity Hub in an email. “I think this is a great idea. Many aspects of the physiology we are studying are common between horses and humans.”
What Faragher and his colleagues demonstrated in a paper published in BMC Cell Biology last year was that resveralogues, chemicals based on resveratrol, were able to reactivate a protein called a splicing factor that is involved in gene regulation. Within hours, the chemicals caused the cells to rejuvenate and start dividing like younger cells.
“If treatments work in our old pony systems, then I am sure they could be translated into clinical trials in humans,” Faragher says. “How long is purely a matter of money. Given suitable funding, I would hope to see a trial within five years.”
Show Them the Money
Faragher argues that the recent breakthroughs aren’t a result of emerging technologies like artificial intelligence or the gene-editing tool CRISPR, but of a paradigm shift in how scientists understand the underpinnings of cellular aging. Solving the “aging problem” isn’t a question of technology but of money, he says.
“Frankly, when AI and CRISPR have removed cystic fibrosis, Duchenne muscular dystrophy or Gaucher syndrome, I’ll be much more willing to hear tales of amazing progress. Go fix a single, highly penetrant genetic disease in the population using this flashy stuff and then we’ll talk,” he says. “My faith resides in the most potent technological development of all: money.”
De Grey is less flippant about the role that technology will play in the quest to defeat aging. AI, CRISPR, protein engineering, advances in stem cell therapies, and immune system engineering—all will have a part.
“There is not really anything distinctive about the ways in which these technologies will contribute,” he says. “What’s distinctive is that we will need all of these technologies, because there are so many different types of damage to repair and they each require different tricks.”
It’s in the Blood
A startup in the San Francisco Bay Area believes machines can play a big role in discovering the right combination of factors that lead to longer and healthier lives—and then develop drugs that exploit those findings.
BioAge Labs raised nearly $11 million last year for its machine learning platform that crunches big data sets to find blood factors, such as proteins or metabolites, that are tied to a person’s underlying biological age. The startup claims that these factors can predict how long a person will live.
“Our interest in this comes out of research into parabiosis, where joining the circulatory systems of old and young mice—so that they share the same blood—has been demonstrated to make old mice healthier and more robust,” Dr. Eric Morgen, chief medical officer at BioAge, tells Singularity Hub.
Based on that idea, he explains, it should be possible to alter those good or bad factors to produce a rejuvenating effect.
“Our main focus at BioAge is to identify these types of factors in our human cohort data, characterize the important molecular pathways they are involved in, and then drug those pathways,” he says. “This is a really hard problem, and we use machine learning to mine these complex datasets to determine which individual factors and molecular pathways best reflect biological age.”
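In miniature, and on entirely made-up data, that mining step might look like the sketch below: build a synthetic cohort in which two of five hypothetical blood factors are constructed to track age, then rank every factor by how strongly it correlates with age. BioAge’s actual models and datasets are, of course, far richer than a correlation ranking.

```python
import random

random.seed(2)

# Entirely synthetic cohort for illustration: 300 people, five hypothetical
# blood factors. Factors 0 and 3 are constructed to track chronological age;
# the others are pure noise.
ages = []
factors = [[] for _ in range(5)]
for _ in range(300):
    age = random.uniform(20, 90)
    vals = [random.gauss(0, 1) for _ in range(5)]
    vals[0] += 0.05 * age  # this factor rises with age
    vals[3] -= 0.03 * age  # this one falls with age
    ages.append(age)
    for series, v in zip(factors, vals):
        series.append(v)

def pearson(xs, ys):
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# The mining step, in miniature: rank factors by how strongly they track age.
ranking = sorted(range(5), key=lambda i: -abs(pearson(factors[i], ages)))
```

On this toy cohort the two planted factors surface at the top of the ranking; the hard part in practice is that real blood factors interact through shared molecular pathways rather than varying independently.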
Saving for the Future
Of course, there’s no telling when any of these anti-aging therapies will come to market. That’s why Forever Labs, a biotechnology startup out of Ann Arbor, Michigan, wants your stem cells now. The company offers a service to cryogenically freeze stem cells taken from bone marrow.
The theory behind the procedure, according to Forever Labs CEO Steven Clausnitzer, is based on research showing that stem cells may be a key component for repairing cellular damage. That’s because stem cells can develop into many different cell types and can divide endlessly to replenish other cells. Clausnitzer notes that there are upwards of a thousand clinical studies looking at using stem cells to treat age-related conditions such as cardiovascular disease.
However, stem cells come with their own expiration date, which usually coincides with the age that most people start experiencing serious health problems. Stem cells harvested from bone marrow at a younger age can potentially provide a therapeutic resource in the future.
“We believe strongly that by having access to your own best possible selves, you’re going to be well positioned to lead healthier, longer lives,” he tells Singularity Hub.
“There’s a compelling argument to be made that if you started to maintain the bone marrow population, the amount of nuclear cells in your bone marrow, and to re-up them so that they aren’t declining with age, it stands to reason that you could absolutely mitigate things like cardiovascular disease and stroke and Alzheimer’s,” he adds.
Clausnitzer notes that the stored stem cells can be used today in developing therapies to treat chronic conditions such as osteoarthritis. However, the more exciting prospect—and the reason he put his own 38-year-old stem cells on ice—is that he believes future stem cell therapies can help stave off the ravages of age-related disease.
“I can start reintroducing them not to treat age-related disease but to treat the decline in the stem-cell niche itself, so that I don’t ever get an age-related disease,” he says. “I don’t think that it equates to immortality, but it certainly is a step in that direction.”
Indecisive on Immortality
The societal implications of a longer-living human species are a guessing game at this point. We do know that by mid-century, the global population of those aged 65 and older will reach 1.6 billion, while those older than 80 will hit nearly 450 million, according to the National Academies of Sciences. If many of those people could enjoy healthy lives in their twilight years, enormous medical costs could be avoided.
Faragher is certainly working toward a future where human health is ubiquitous. Human immortality is another question entirely.
“The longer lifespans become, the more heavily we may need to control birth rates and thus we may have fewer new minds. This could have a heavy ‘opportunity cost’ in terms of progress,” he says.
And does anyone truly want to live forever?
“There have been happy moments in my life but I have also suffered some traumatic disappointments. No [drug] will wash those experiences out of me,” Faragher says. “I no longer view my future with unqualified enthusiasm, and I do not think I am the only middle-aged man to feel that way. I don’t think it is an accident that so many ‘immortalists’ are young.
“They should be careful what they wish for.”
Image Credit: Karim Ortiz / Shutterstock.com