In just half a decade, neuromorphic devices, or brain-inspired computing, already seem quaint. The current darling? Artificial-biological hybrid computing, which seamlessly unites man-made computer chips and biological neurons into semi-living circuits.
It sounds crazy, but a new study in Nature Materials shows that it's possible to get an artificial neuron to communicate directly with a biological one using not just electricity, but dopamine, a chemical the brain naturally uses to change how neural circuits behave, best known for signaling reward.
Because these chemicals, known as “neurotransmitters,” are how biological neurons functionally link up in the brain, the study is a dramatic demonstration that it’s possible to connect artificial components with biological brain cells into a functional circuit.
The team isn’t the first to pursue hybrid neural circuits. Previously, a different team hooked up two silicon-based artificial neurons with a biological one into a circuit using electrical protocols alone. Although a powerful demonstration of hybrid computing, the study relied on only one-half of the brain’s computational ability: electrical computing.
The new study now tackles the other half: chemical computing. It adds a layer of compatibility that lays the groundwork not just for brain-inspired computers, but also for brain-machine interfaces and—perhaps—a sort of “cyborg” future. After all, if your brain can’t tell the difference between an artificial neuron and your own, could you? And even if you did, would you care?
Of course, that scenario is far in the future—if ever. For now, the team, led by Dr. Alberto Salleo, professor of materials science and engineering at Stanford University, collectively breathed a sigh of relief that the hybrid circuit worked.
“It’s a demonstration that this communication melding chemistry and electricity is possible,” said Salleo. “You could say it’s a first step toward a brain-machine interface, but it’s a tiny, tiny very first step.”
The study grew from years of work into neuromorphic computing, or data processing inspired by the brain.
The blue-sky idea was inspired by the brain’s massive parallel computing capabilities, along with vast energy savings. By mimicking these properties, scientists reasoned, we could potentially turbo-charge computing. Neuromorphic devices basically embody artificial neural networks in physical form—wouldn’t hardware that mimics how the brain processes information be even more efficient and powerful?
These explorations led to novel neuromorphic chips, or artificial neurons that “fire” like biological ones. Additional work found that it’s possible to link these chips up into powerful circuits that run deep learning with ease, with bioengineered communication nodes called artificial synapses.
As a potential computing hardware replacement, these systems have proven to be incredibly promising. Yet scientists soon wondered: given their similarity to biological brains, can we use them as “replacement parts” for brains that suffer from traumatic injuries, aging, or degeneration? Can we hook up neuromorphic components to the brain to restore its capabilities?
Buzz & Chemistry
Theoretically, the answer’s yes.
But there's a huge problem: current brain-machine interfaces rely on electrical signals alone to mimic neural computation. The brain, in contrast, has two tricks up its sleeve: it computes with both electricity and chemicals. It's electrochemical.
Within a neuron, electricity travels up its incoming branches, through the bulbous body, then down the output branches. When electrical signals reach the neuron’s outgoing “piers,” dotted along the output branch, however, they hit a snag. A small gap exists between neurons, so to get to the other side, the electrical signals generally need to be converted into little bubble ships, packed with chemicals, and set sail to the other neuronal shore.
In other words, without chemical signals, the brain can’t function normally. These neurotransmitters don’t just passively carry information. Dopamine, for example, can dramatically change how a neural circuit functions. For an artificial-biological hybrid neural system, the absence of chemistry is like nixing international cargo vessels and only sticking with land-based trains and highways.
“To emulate biological synaptic behavior, the connectivity of the neuromorphic device must be dynamically regulated by the local neurotransmitter activity,” the team said.
Let’s Get Electro-Chemical
The new study started with two neurons: the upstream, an immortalized biological cell that releases dopamine; and the downstream, an artificial neuron that the team previously introduced in 2017, made of a mix of biocompatible and electrical-conducting materials.
Rather than the classic neuron shape, picture more of a sandwich with a chunk bitten out in the middle (yup, I’m totally serious). Each of the remaining parts of the sandwich is a soft electrode, made of biological polymers. The “bitten out” part has a conductive solution that can pass on electrical signals.
The biological cell sits close to the first electrode. When activated, it dumps out boats of dopamine, which drift to the electrode and chemically react with it—mimicking the process of dopamine docking onto a biological neuron. This, in turn, generates a current that’s passed on to the second electrode through the conductive solution channel. When this current reaches the second electrode, it changes the electrode’s conductance—that is, how well it can pass on electrical information. This second step is analogous to docked dopamine “ships” changing how likely it is that a biological neuron will fire in the future.
In other words, dopamine release from the biological neuron interacts with the artificial one, so that the chemicals change how the downstream neuron behaves in a somewhat lasting way—a loose mimic of what happens inside the brain during learning.
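As a loose illustration of this two-step process (and emphatically not the authors' actual device model), a toy simulation might look like the sketch below. Every constant, function name, and equation here is hypothetical, chosen only to show the shape of the behavior: a dopamine burst transfers charge at the first electrode, and that charge nudges the second electrode's conductance upward toward a ceiling.

```python
# Toy sketch of a dopamine-mediated hybrid synapse (illustrative only;
# all numbers and formulas are made up, not taken from the study).

def dopamine_to_charge(molecules, charge_per_molecule=1.6e-19):
    """Step 1: dopamine oxidizing at the first electrode transfers charge
    (in coulombs), which drives a current through the conductive channel."""
    return molecules * charge_per_molecule

def update_conductance(g, charge, gain=1e9, g_max=1e-3):
    """Step 2: the transferred charge raises the second electrode's
    conductance g (in siemens), saturating at g_max -- a loose stand-in
    for a lasting, learning-like change."""
    return min(g_max, g + gain * charge * (1 - g / g_max))

g = 1e-5  # initial conductance (hypothetical)
for pulse in range(5):               # five bursts of dopamine release
    q = dopamine_to_charge(1e6)      # a million molecules per burst (made up)
    g = update_conductance(g, q)
    print(f"after pulse {pulse + 1}: conductance = {g:.3e} S")
```

Each successive pulse raises the conductance a little less, so the toy synapse strengthens with repeated activity while staying bounded, mirroring the saturating, quasi-permanent change the team observed.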
But that's not all. Chemical signaling is especially powerful in the brain because it's flexible. Dopamine, for example, only grabs onto the downstream neuron for a bit before it's taken back up by the upstream neuron to be recycled, or broken down entirely. This means its effect is temporary, giving the neural circuit breathing room to readjust its activity.
The Stanford team also tried reconstructing this quirk in their hybrid circuit. They crafted a microfluidic channel that shuttles dopamine and its byproducts away from the artificial neuron for recycling once they've done their job.
Putting It All Together
After confirming that biological cells could survive happily on top of the artificial components, the team performed a few tests to see if the hybrid circuit could "learn."
They used electrical methods to first activate the biological dopamine neuron, and watched the artificial one. Before the experiment, the team wasn’t quite sure what to expect. Theoretically, it made sense that dopamine would change the artificial neuron’s conductance, similar to learning. But “it was hard to know whether we’d achieve the outcome we predicted on paper until we saw it happen in the lab,” said study author Scott Keene.
On the first try, however, the team found that the burst of chemical signaling changed the artificial neuron's conductance long-term, echoing the neuroscience dogma that "neurons that fire together, wire together." Activating the upstream biological neuron chemically, rather than electrically, changed the artificial neuron's conductance in a similar learning-like way.
“That’s when we realized the potential this has for emulating the long-term learning process of a synapse,” said Keene.
Imaging under an electron microscope, the team found that, similar to its biological counterpart, the hybrid synapse could efficiently recycle dopamine on timescales similar to the brain's after some calibration. By playing with how much dopamine accumulates at the artificial neuron, the team found they could loosely mimic a spike-based learning rule, a darling of machine learning inspired by the brain's computation.
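For readers curious what such a spike-based learning rule can look like, here's a minimal sketch of spike-timing-dependent plasticity (STDP), a classic rule of this family. The parameters are illustrative placeholders, not values from the study: a connection strengthens when the upstream neuron fires just before the downstream one, and weakens when the order flips.

```python
import math

def stdp_weight_change(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Classic STDP curve. dt (milliseconds) is the time between the
    upstream and downstream spikes: dt > 0 means upstream fired first
    (potentiation), dt < 0 means it fired after (depression). The
    amplitudes and time constant are illustrative."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # strengthen the synapse
    elif dt < 0:
        return -a_minus * math.exp(dt / tau)   # weaken the synapse
    return 0.0

for dt in (-40, -10, 10, 40):
    print(f"dt = {dt:+d} ms -> weight change {stdp_weight_change(dt):+.4f}")
```

The key property, which the hybrid synapse loosely echoes, is that timing matters: closely spaced spikes produce large, lasting changes, while widely spaced ones barely register.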
A Hybrid Future?
Unfortunately for cyborg enthusiasts, the work is still in its infancy.
For one, the artificial neurons are still rather bulky compared to biological ones. This means they can't capture and translate information from a single "boat" of dopamine. It's also unclear if, and how, a hybrid synapse would work inside a living brain. Given the billions of synapses firing away in our heads, it'll be a challenge to find and replace the ones that need replacing, and to have the replacements shape memory and behavior the way natural synapses do.
That said, we’re inching ever closer to full-capability artificial-biological hybrid circuits.
“The neurotransmitter-mediated neuromorphic device presented in this work constitutes a fundamental building block for artificial neural networks that can be directly modulated based on biological feedback from live neurons,” the authors concluded. “[It] is a crucial first step in realizing next-generation adaptive biohybrid interfaces.”
Image Credit: Gerd Altmann from Pixabay
Long before coronavirus appeared and shattered our pre-existing “normal,” the future of work was a widely discussed and debated topic. We’ve watched automation slowly but surely expand its capabilities and take over more jobs, and we’ve wondered what artificial intelligence will eventually be capable of.
The pandemic swiftly turned the working world on its head, putting millions of people out of a job and forcing millions more to work remotely. But essential questions remain largely unchanged: we still want to make sure we’re not replaced, we want to add value, and we want an equitable society where different types of work are valued fairly.
To address these issues—as well as how the pandemic has impacted them—this week Singularity University held a digital summit on the future of work. Forty-three speakers from multiple backgrounds, countries, and sectors of the economy shared their expertise on everything from work in developing markets to why we shouldn’t want to go back to the old normal.
Gary Bolles, SU’s chair for the Future of Work, kicked off the discussion with his thoughts on a future of work that’s human-centric, including why it matters and how to build it.
What Is Work?
“Work” seems like a straightforward concept to define, but since it’s constantly shifting shape over time, let’s make sure we’re on the same page. Bolles defined work, very basically, as human skills applied to problems.
“It doesn’t matter if it’s a dirty floor or a complex market entry strategy or a major challenge in the world,” he said. “We as humans create value by applying our skills to solve problems in the world.” You can think of the problems that need solving as the demand and human skills as the supply, and the two are in constant oscillation, including, every few decades or centuries, a massive shift.
We’re in the midst of one of those shifts right now (and we already were, long before the pandemic). Skills that have long been in demand are declining. The World Economic Forum’s 2018 Future of Jobs report listed things like manual dexterity, management of financial and material resources, and quality control and safety awareness as declining skills. Meanwhile, skills the next generation will need include analytical thinking and innovation, emotional intelligence, creativity, and systems analysis.
Along Came a Pandemic
With the outbreak of coronavirus and its spread around the world, the demand side of work shrank; the problems that needed solving gave way to the much bigger, more immediate problem of keeping people alive. As a result, tens of millions of people around the world are out of work, and those are just the ones being counted; millions more in seasonal or gig jobs, or in informal economies, are without work too.
“This is our opportunity to focus,” Bolles said. “How do we help people re-engage with work? And make it better work, a better economy, and a better set of design heuristics for a world that we all want?”
Bolles posed five key questions, some spurred by the impact of the pandemic, on which future of work conversations should focus to make sure it's a human-centric future.
1. What does an inclusive world of work look like? Rather than seeing our current systems of work as immutable, we need to actually understand those systems and how we want to change them.
2. How can we increase the value of human work? We know that robots and software are going to be fine in the future—but for humans to be fine, we need to design for that very intentionally.
3. How can entrepreneurship help create a better world of work? In many economies the new value that’s created often comes from younger companies; how do we nurture entrepreneurship?
4. What will the intersection of workplace and geography look like? A large percentage of the global workforce is now working from home; what could some of the outcomes of that be? How does gig work fit in?
5. How can we ensure a healthy evolution of work and life? The health and the protection of those at risk is why we shut down our economies, but we need to find a balance that allows people to work while keeping them safe.
Problem-Solving Doesn’t End
The end result these questions are driving towards, and our overarching goal, is maximizing human potential. “If we come up with ways we can continue to do that, we’ll have a much more beneficial future of work,” Bolles said. “We should all be talking about where we can have an impact.”
One small silver lining? We had plenty of problems to solve in the world before ever hearing about coronavirus, and now we have even more. Is the pace of automation accelerating due to the virus? Yes. Are companies finding more ways to automate their processes in order to keep people from getting sick? They are.
But we have a slew of new problems on our hands, and we're not going to stop needing human skills to solve them (not to mention the new problems that will surely emerge as second- and third-order effects of the shutdowns). If Bolles' definition of work holds up, we've got our work cut out for us.
In an article from April titled The Great Reset, Bolles outlined three phases of the unemployment slump (we’re currently still in the first phase) and what we should be doing to minimize the damage. “The evolution of work is not about what will happen 10 to 20 years from now,” he said. “It’s about what we could be doing differently today.”
Watch Bolles’ talk and those of dozens of other experts for more insights into building a human-centric future of work here.
Image Credit: www_slon_pics from Pixabay
Creativity is a trait that sets humans apart from other species. We alone have the ability to make music and art that speak to our experiences or illuminate truths about our world. But suddenly, humans' artistic abilities have some competition, and from a decidedly non-human source.
Over the last couple of years there have been some remarkable examples of art produced by deep learning algorithms. They have challenged our elusive definitions of creativity and put into perspective how professionals can use artificial intelligence to enhance their abilities and push beyond known boundaries.
But when creativity is the result of code written by a programmer, running on a model designed by a software engineer and trained on private and public datasets, how do we assign ownership of AI-generated content, particularly artwork? The stakes are real: McKinsey estimates AI will generate $3.5 to $5.8 trillion in value annually across various sectors.
In 2018, a portrait christened Edmond de Belamy was made by a French art collective called Obvious. The group used a database of 15,000 portraits from the 1300s to the 1900s to train a deep learning algorithm to produce a unique portrait, which sold for $432,500 at a New York auction. Similarly, a program called Aiva, trained on thousands of classical compositions, has released albums whose pieces are being used by ad agencies and in movies.
The datasets used by these algorithms were different, but behind both was a programmer who translated brush strokes or musical notes into lines of code, and a data scientist or engineer who fitted and "curated" the datasets used for the model. There could also have been user-based input, and the output may be biased toward certain styles or unintentionally infringe on similar pieces of art. This shows that many collaborators with distinct roles go into producing AI-generated content, and it's important to discuss how they can protect their proprietary interests.
A perspective article published in Nature Machine Intelligence by Jason K. Eshraghian in March looks into how AI artists and their collaborators should assess ownership, laying out guiding principles that are "only applicable for as long as AI does not have legal personhood, the way humans and corporations are accorded."
Before looking at how collaborators can protect their interests, it's useful to understand the basic requirements of copyright law. The artwork in question must be an "original work of authorship fixed in a tangible medium." Given this principle, the author asked whether it's possible for AI to exercise creativity, skill, or any other indicator of originality. The answer, for now, is a straightforward no. AI's range of creativity doesn't yet meet the standard used by the US Copyright Office, which holds that copyright law protects the "fruits of intellectual labor founded in the creative powers of the mind."
Due to the current limitations of narrow AI, it must have some form of initial input that helps develop its ability to create. At the moment AI is a tool that can be used to produce creative work in the same way that a video camera is a tool used to film creative content. Video producers don’t need to comprehend the inner workings of their cameras; as long as their content shows creativity and originality, they have a proprietary claim over their creations.
The same concept applies to programmers developing a neural network. As long as the dataset they use as input yields an original and creative result, it will be protected by copyright law; they don't need to understand the underlying mathematics, which in this case often involves black-box algorithms whose inner workings are impossible to fully analyze.
Will robots and algorithms eventually be treated as creative sources able to own copyrights? The author pointed to the recent patent case of Warner-Lambert Co Ltd versus Generics where Lord Briggs, Justice of the Supreme Court of the UK, determined that “the court is well versed in identifying the governing mind of a corporation and, when the need arises, will no doubt be able to do the same for robots.”
In the meantime, Dr. Eshraghian suggests four guiding principles to allow artists who collaborate with AI to protect themselves.
First, programmers need to document their process through online code repositories like GitHub or BitBucket.
Second, data engineers should also document and catalog their datasets and the process they used to curate their models, indicating selectivity in their criteria as much as possible to demonstrate their involvement and creativity.
Third, in cases where user data is utilized, the engineer should “catalog all runs of the program” to distinguish the data selection process. This could be interpreted as a way of determining whether user-based input has a right to claim the copyright too.
Finally, the output should avoid infringing on others’ content through methods like reverse image searches and version control, as mentioned above.
AI-generated artwork is still a very new concept, and the ambiguous copyright laws around it give a lot of flexibility to AI artists and programmers worldwide. The guiding principles Eshraghian lays out will hopefully shed some light on the legislation we’ll eventually need for this kind of art, and start an important conversation between all the stakeholders involved.
Image Credit: Wikimedia Commons