Tag Archives: answer
#437466 How Future AI Could Recognize a Kangaroo ...
AI is continuously taking on new challenges, from detecting deepfakes (which, incidentally, are also made using AI) to winning at poker to giving synthetic biology experiments a boost. These impressive feats result partly from the huge datasets the systems are trained on. That training is costly and time-consuming, and it yields AIs that can really only do one thing well.
For example, to train an AI to differentiate between a picture of a dog and one of a cat, it’s fed thousands—if not millions—of labeled images of dogs and cats. A child, on the other hand, can see a dog or cat just once or twice and remember which is which. How can we make AIs learn more like children do?
A team at the University of Waterloo in Ontario has an answer: change the way AIs are trained.
Here’s the thing about the datasets normally used to train AI—besides being huge, they’re highly specific. A picture of a dog can only be a picture of a dog, right? But what about a really small dog with a long-ish tail? That sort of dog, while still being a dog, looks more like a cat than, say, a fully-grown Golden Retriever.
It’s this concept that the Waterloo team’s methodology is based on. They described their work in a paper published on the pre-print (or non-peer-reviewed) server arXiv last month. Teaching an AI system to identify a new class of objects using just one example is what they call “one-shot learning.” But they take it a step further, focusing on “less than one shot learning,” or LO-shot learning for short.
LO-shot learning consists of a system learning to classify various categories based on a number of examples that’s smaller than the number of categories. That’s not the most straightforward concept to wrap your head around, so let’s go back to the dogs and cats example. Say you want to teach an AI to identify dogs, cats, and kangaroos. How could that possibly be done without several clear examples of each animal?
The key, the Waterloo team says, is in what they call soft labels. Unlike hard labels, which label a data point as belonging to one specific class, soft labels tease out the relationship or degree of similarity between that data point and multiple classes. In the case of an AI trained on only dogs and cats, a third class of objects, say, kangaroos, might be described as 60 percent like a dog and 40 percent like a cat (I know—kangaroos probably aren’t the best animal to have thrown in as a third category).
“Soft labels can be used to represent training sets using fewer prototypes than there are classes, achieving large increases in sample efficiency over regular (hard-label) prototypes,” the paper says. Translation? Tell an AI a kangaroo is some fraction cat and some fraction dog—both of which it’s seen and knows well—and it’ll be able to identify a kangaroo without ever having seen one.
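To make that concrete, here is a minimal illustrative sketch (my own toy encoding, not taken from the paper): a hard label commits a training example to a single class, while a soft label spreads its weight across the classes the system already knows.

```python
import numpy as np

# The two classes the system has actually seen during training.
known_classes = ["dog", "cat"]

# Hard label: this training image is a dog, and nothing else.
hard_label_dog = np.array([1.0, 0.0])

# Soft label: a kangaroo described only through its resemblance to the
# known classes -- 60 percent dog-like, 40 percent cat-like, as in the
# article's (admittedly rough) example.
soft_label_kangaroo = np.array([0.6, 0.4])

# Both are probability distributions over the known classes.
assert np.isclose(hard_label_dog.sum(), 1.0)
assert np.isclose(soft_label_kangaroo.sum(), 1.0)
```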
If the soft labels are nuanced enough, you could theoretically teach an AI to identify a large number of categories based on a much smaller number of training examples.
The paper’s authors use a simple machine learning algorithm called k-nearest neighbors (kNN) to explore this idea more in depth. The algorithm operates under the assumption that similar things are most likely to exist near each other; if you go to a dog park, there will be lots of dogs but no cats or kangaroos. Go to the Australian grasslands and there’ll be kangaroos but no cats or dogs. And so on.
To train a kNN algorithm to differentiate between categories, you choose specific features to represent each category (e.g., for animals, weight or size). With one feature on the x-axis and the other on the y-axis, the algorithm creates a graph where similar data points cluster together. A boundary line divides the categories, and it's pretty straightforward for the algorithm to decide which side of the line new data points fall on.
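Combining the two ideas, here is a minimal, hypothetical sketch of a distance-weighted, soft-label kNN. It is written in the spirit of the paper's soft-label prototype approach rather than as a faithful reimplementation, and all the numbers are invented: two prototype points carry soft labels over three classes, and a query point between them can be assigned the third class even though no prototype of that class exists.

```python
import numpy as np

# Two prototype points in a 2D feature space (say, weight vs. size), each
# carrying a soft label over THREE classes. Values are purely illustrative.
prototypes = np.array([[1.0, 1.0],        # mostly-dog prototype
                       [4.0, 4.0]])       # mostly-cat prototype
soft_labels = np.array([[0.6, 0.0, 0.4],  # P(dog), P(cat), P(kangaroo)
                        [0.0, 0.6, 0.4]])
classes = ["dog", "cat", "kangaroo"]

def classify(query):
    """Blend the prototypes' soft labels, weighting nearer prototypes more."""
    dists = np.linalg.norm(prototypes - query, axis=1)
    weights = 1.0 / (dists + 1e-9)
    blended = weights @ soft_labels / weights.sum()
    return classes[int(np.argmax(blended))], np.round(blended, 2)

print(classify(np.array([1.2, 0.9])))  # near the first prototype -> "dog"
print(classify(np.array([2.5, 2.5])))  # midway -> "kangaroo", a class with no prototype
```

With nuanced enough soft labels, the same trick scales to many more classes than prototypes, which is exactly the "less than one shot" regime the paper explores.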
The Waterloo team kept it simple and used colored points plotted on a 2D graph. Using the colors and their locations on the graphs, the team created synthetic datasets and accompanying soft labels. One of the simpler graphs is pictured below, along with soft labels in the form of pie charts.
Image Credit: Ilia Sucholutsky & Matthias Schonlau
When the team had the algorithm plot the boundary lines of the different colors based on these soft labels, it was able to split the plot up into more colors than the number of data points it was given in the soft labels.
While the results are encouraging, the team acknowledges that they’re just the first step, and there’s much more exploration of this concept yet to be done. The kNN algorithm is one of the least complex models out there; what might happen when LO-shot learning is applied to a far more complex algorithm? Also, to apply it, you still need to distill a larger dataset down into soft labels.
One idea the team is already working on is having other algorithms generate the soft labels for the algorithm that’s going to be trained using LO-shot; manually designing soft labels won’t always be as easy as splitting up some pie charts into different colors.
LO-shot’s potential for reducing the amount of training data needed to yield working AI systems is promising. Besides reducing the cost and the time required to train new models, the method could also make AI more accessible to industries, companies, or individuals who don’t have access to large datasets—an important step for democratization of AI.
Image Credit: pen_ash from Pixabay
#437357 Algorithms Workers Can’t See Are ...
“I’m sorry, Dave. I’m afraid I can’t do that.” HAL’s cold, if polite, refusal to open the pod bay doors in 2001: A Space Odyssey has become a defining warning about putting too much trust in artificial intelligence, particularly if you work in space.
In the movies, when a machine decides to be the boss (or humans let it) things go wrong. Yet despite myriad dystopian warnings, control by machines is fast becoming our reality.
Algorithms—sets of instructions to solve a problem or complete a task—now drive everything from browser search results to better medical care.
They are helping design buildings. They are speeding up trading on financial markets, making and losing fortunes in micro-seconds. They are calculating the most efficient routes for delivery drivers.
In the workplace, self-learning algorithmic computer systems are being introduced by companies to assist in areas such as hiring, setting tasks, measuring productivity, evaluating performance, and even terminating employment: “I’m sorry, Dave. I’m afraid you are being made redundant.”
Giving self-learning algorithms the responsibility to make and execute decisions affecting workers is called “algorithmic management.” It carries a host of risks in depersonalizing management systems and entrenching pre-existing biases.
At an even deeper level, perhaps, algorithmic management entrenches a power imbalance between management and worker. Algorithms are closely guarded secrets. Their decision-making processes are hidden. It’s a black-box: perhaps you have some understanding of the data that went in, and you see the result that comes out, but you have no idea of what goes on in between.
Algorithms at Work
Here are a few examples of algorithms already at work.
At Amazon’s fulfillment center in south-east Melbourne, they set the pace for “pickers,” who have timers on their scanners showing how long they have to find the next item. As soon as they scan that item, the timer resets for the next. All at a “not quite walking, not quite running” speed.
Or how about AI determining your success in a job interview? More than 700 companies have trialed such technology. US developer HireVue says its software speeds up the hiring process by 90 percent by having applicants answer identical questions and then scoring them according to language, tone, and facial expressions.
Granted, human assessments during job interviews are notoriously flawed. Algorithms, however, can also be biased. The classic example is the COMPAS software used by US judges, probation, and parole officers to rate a person’s risk of re-offending. In 2016 a ProPublica investigation showed the algorithm was heavily discriminatory, incorrectly classifying black subjects as higher risk 45 percent of the time, compared with 23 percent for white subjects.
How Gig Workers Cope
Algorithms do what their code tells them to do. The problem is this code is rarely available. This makes them difficult to scrutinize, or even understand.
Nowhere is this more evident than in the gig economy. Uber, Lyft, Deliveroo, and other platforms could not exist without algorithms allocating, monitoring, evaluating, and rewarding work.
Over the past year, Uber Eats’ bicycle couriers and drivers, for instance, have blamed unexplained changes to the algorithm for slashing their jobs and incomes.
Riders can’t be 100 percent sure it was all down to the algorithm. But that’s part of the problem. The fact that those who depend on the algorithm don’t know one way or the other has a powerful influence on them.
This is a key result from our interviews with 58 food-delivery couriers. Most knew their jobs were allocated by an algorithm (via an app). They knew the app collected data. What they didn’t know was how data was used to award them work.
In response, they developed a range of strategies (or educated guesses about how) to “win” more jobs, such as accepting gigs as quickly as possible and waiting in “magic” locations. Ironically, these attempts to please the algorithm often meant losing the very flexibility that was one of the attractions of gig work.
The information asymmetry created by algorithmic management has two profound effects. First, it threatens to entrench systemic biases, the type of discrimination hidden within the COMPAS algorithm for years. Second, it compounds the power imbalance between management and worker.
Our data also confirmed others’ findings that it is almost impossible to complain about the decisions of the algorithm. Workers often do not know the exact basis of those decisions, and there’s no one to complain to anyway. When Uber Eats bicycle couriers asked why their incomes had plummeted, for example, the company’s responses simply advised them that “we have no manual control over how many deliveries you receive.”
Broader Lessons
When algorithmic management operates as a “black box,” one of the consequences is that it can become an indirect control mechanism. Thus far under-appreciated by Australian regulators, this control mechanism has enabled platforms to mobilize a reliable and scalable workforce while avoiding employer responsibilities.
“The absence of concrete evidence about how the algorithms operate”, the Victorian government’s inquiry into the “on-demand” workforce notes in its report, “makes it hard for a driver or rider to complain if they feel disadvantaged by one.”
The report, published in June, also found it is “hard to confirm if concern over algorithm transparency is real.”
But it is precisely the fact it is hard to confirm that’s the problem. How can we start to even identify, let alone resolve, issues like algorithmic management?
Fair conduct standards to ensure transparency and accountability are a start. One example is the Fair Work initiative, led by the Oxford Internet Institute. The initiative is bringing together researchers with platforms, workers, unions, and regulators to develop global principles for work in the platform economy. This includes “fair management,” which focuses on how transparent the results and outcomes of algorithms are for workers.
Understanding of the impact of algorithms on all forms of work is still in its infancy. It demands greater scrutiny and research. Without human oversight based on agreed principles, we risk inviting HAL into our workplaces.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: PickPik
#437202 Scientists Used Dopamine to Seamlessly ...
In just half a decade, neuromorphic devices—or brain-inspired computing—already seem quaint. The current darling? Artificial-biological hybrid computing, uniting both man-made computer chips and biological neurons seamlessly into semi-living circuits.
It sounds crazy, but a new study in Nature Materials shows that it’s possible to get an artificial neuron to communicate directly with a biological one using not just electricity, but dopamine—a chemical the brain naturally uses to change how neural circuits behave, most known for signaling reward.
Because these chemicals, known as “neurotransmitters,” are how biological neurons functionally link up in the brain, the study is a dramatic demonstration that it’s possible to connect artificial components with biological brain cells into a functional circuit.
The team isn’t the first to pursue hybrid neural circuits. Previously, a different team hooked up two silicon-based artificial neurons with a biological one into a circuit using electrical protocols alone. Although a powerful demonstration of hybrid computing, the study relied on only one-half of the brain’s computational ability: electrical computing.
The new study now tackles the other half: chemical computing. It adds a layer of compatibility that lays the groundwork not just for brain-inspired computers, but also for brain-machine interfaces and—perhaps—a sort of “cyborg” future. After all, if your brain can’t tell the difference between an artificial neuron and your own, could you? And even if you did, would you care?
Of course, that scenario is far in the future—if ever. For now, the team, led by Dr. Alberto Salleo, professor of materials science and engineering at Stanford University, collectively breathed a sigh of relief that the hybrid circuit worked.
“It’s a demonstration that this communication melding chemistry and electricity is possible,” said Salleo. “You could say it’s a first step toward a brain-machine interface, but it’s a tiny, tiny very first step.”
Neuromorphic Computing
The study grew from years of work into neuromorphic computing, or data processing inspired by the brain.
The blue-sky idea was inspired by the brain’s massive parallel computing capabilities, along with vast energy savings. By mimicking these properties, scientists reasoned, we could potentially turbo-charge computing. Neuromorphic devices basically embody artificial neural networks in physical form—wouldn’t hardware that mimics how the brain processes information be even more efficient and powerful?
These explorations led to novel neuromorphic chips, or artificial neurons that “fire” like biological ones. Additional work found that it’s possible to link these chips up into powerful circuits that run deep learning with ease, with bioengineered communication nodes called artificial synapses.
As a potential computing hardware replacement, these systems have proven to be incredibly promising. Yet scientists soon wondered: given their similarity to biological brains, can we use them as “replacement parts” for brains that suffer from traumatic injuries, aging, or degeneration? Can we hook up neuromorphic components to the brain to restore its capabilities?
Buzz & Chemistry
Theoretically, the answer’s yes.
But there’s a huge problem: current brain-machine interfaces only use electrical signals to mimic neural computation. The brain, in contrast, has two tricks up its sleeve: it signals with both electricity and chemicals; in other words, it’s electrochemical.
Within a neuron, electricity travels up its incoming branches, through the bulbous body, then down the output branches. When electrical signals reach the neuron’s outgoing “piers,” dotted along the output branch, however, they hit a snag. A small gap exists between neurons, so to get to the other side, the electrical signals generally need to be converted into little bubble ships, packed with chemicals, and set sail to the other neuronal shore.
In other words, without chemical signals, the brain can’t function normally. These neurotransmitters don’t just passively carry information. Dopamine, for example, can dramatically change how a neural circuit functions. For an artificial-biological hybrid neural system, the absence of chemistry is like nixing international cargo vessels and only sticking with land-based trains and highways.
“To emulate biological synaptic behavior, the connectivity of the neuromorphic device must be dynamically regulated by the local neurotransmitter activity,” the team said.
Let’s Get Electro-Chemical
The new study started with two neurons: the upstream, an immortalized biological cell that releases dopamine; and the downstream, an artificial neuron that the team previously introduced in 2017, made of a mix of biocompatible and electrical-conducting materials.
Rather than the classic neuron shape, picture more of a sandwich with a chunk bitten out in the middle (yup, I’m totally serious). Each of the remaining parts of the sandwich is a soft electrode, made of biological polymers. The “bitten out” part has a conductive solution that can pass on electrical signals.
The biological cell sits close to the first electrode. When activated, it dumps out boats of dopamine, which drift to the electrode and chemically react with it—mimicking the process of dopamine docking onto a biological neuron. This, in turn, generates a current that’s passed on to the second electrode through the conductive solution channel. When this current reaches the second electrode, it changes the electrode’s conductance—that is, how well it can pass on electrical information. This second step is analogous to docked dopamine “ships” changing how likely it is that a biological neuron will fire in the future.
In other words, dopamine release from the biological neuron interacts with the artificial one, so that the chemicals change how the downstream neuron behaves in a somewhat lasting way—a loose mimic of what happens inside the brain during learning.
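As a purely illustrative toy model (invented here, not the device physics reported in the paper), you can picture the artificial neuron’s conductance as a state variable that gets nudged a little each time dopamine reacts at the electrode, and that holds its new value between events:

```python
# Toy sketch of a "lasting" conductance change: each dopamine event nudges the
# artificial neuron's conductance upward, and the change persists afterward.
# The update rule and numbers are invented for illustration only.
conductance = 1.0                         # arbitrary starting value (a.u.)
step = 0.05                               # effect of a single dopamine event
dopamine_events = [1, 1, 0, 1, 0, 0, 1]   # 1 = upstream cell released dopamine

history = []
for released in dopamine_events:
    if released:
        conductance *= (1 + step)         # cumulative, lasting change
    history.append(round(conductance, 3))

print(history)  # conductance ratchets up with activity and stays put between events
```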
But that’s not all. Chemical signaling is especially powerful in the brain because it’s flexible. Dopamine, for example, only grabs onto the downstream neurons for a bit before it returns to the upstream neuron to be recycled, or is destroyed. This means that its effect is temporary, giving the neural circuit breathing room to readjust its activity.
The Stanford team also tried reconstructing this quirk in their hybrid circuit. They crafted a microfluidic channel that shuttles both dopamine and its byproducts away from the artificial neuron for recycling once they’ve done their job.
Putting It All Together
After confirming that biological cells can survive happily on top of the artificial one, the team performed a few tests to see if the hybrid circuit could “learn.”
They used electrical methods to first activate the biological dopamine neuron, and watched the artificial one. Before the experiment, the team wasn’t quite sure what to expect. Theoretically, it made sense that dopamine would change the artificial neuron’s conductance, similar to learning. But “it was hard to know whether we’d achieve the outcome we predicted on paper until we saw it happen in the lab,” said study author Scott Keene.
On the first try, however, the team found that the burst of chemical signaling was able to change the artificial neuron’s conductance long-term, similar to the neuroscience dogma “neurons that fire together, wire together.” Activating the upstream biological neuron with chemicals also changed the artificial neuron’s conductance in a way that mimicked learning.
“That’s when we realized the potential this has for emulating the long-term learning process of a synapse,” said Keene.
Imaging the device under an electron microscope, the team found that, after some calibration, the hybrid synapse could efficiently recycle dopamine on timescales similar to those of its biological counterpart. By playing with how much dopamine accumulates at the artificial neuron, the team found it could loosely mimic a learning rule called spike learning—a darling of machine learning inspired by the brain’s computation.
A Hybrid Future?
Unfortunately for cyborg enthusiasts, the work is still in its infancy.
For one, the artificial neurons are still rather bulky compared to biological ones. This means that they can’t capture and translate information from a single “boat” of dopamine. It’s also unclear if, and how, a hybrid synapse could work inside a living brain. Given the billions of synapses firing away in our heads, it’ll be a challenge to find and replace only those that need replacing, and to have the substitutes shape our memories and behaviors the way their natural counterparts do.
That said, we’re inching ever closer to full-capability artificial-biological hybrid circuits.
“The neurotransmitter-mediated neuromorphic device presented in this work constitutes a fundamental building block for artificial neural networks that can be directly modulated based on biological feedback from live neurons,” the authors concluded. “[It] is a crucial first step in realizing next-generation adaptive biohybrid interfaces.”
Image Credit: Gerd Altmann from Pixabay
#437171 Scientists Tap the World’s Most ...
In The Hitchhiker’s Guide to the Galaxy by Douglas Adams, the haughty supercomputer Deep Thought is asked whether it can find the answer to the ultimate question concerning life, the universe, and everything. It replies that, yes, it can do it, but it’s tricky and it’ll have to think about it. When asked how long it will take it replies, “Seven-and-a-half million years. I told you I’d have to think about it.”
Real-life supercomputers are being asked somewhat less expansive questions but tricky ones nonetheless: how to tackle the Covid-19 pandemic. They’re being used in many facets of responding to the disease, including to predict the spread of the virus, to optimize contact tracing, to allocate resources and provide decisions for physicians, to design vaccines and rapid testing tools, and to understand sneezes. And the answers are needed in a rather shorter time frame than Deep Thought was proposing.
The largest number of Covid-19 supercomputing projects involves designing drugs. It’s likely to take several effective drugs to treat the disease. Supercomputers allow researchers to take a rational approach and aim to selectively muzzle proteins that SARS-CoV-2, the virus that causes Covid-19, needs for its life cycle.
The viral genome encodes proteins needed by the virus to infect humans and to replicate. Among these are the infamous spike protein that sniffs out and penetrates its human cellular target, but there are also enzymes and molecular machines that the virus forces its human subjects to produce for it. Finding drugs that can bind to these proteins and stop them from working is a logical way to go.
The Summit supercomputer at Oak Ridge National Laboratory has a peak performance of 200,000 trillion calculations per second—equivalent to about a million laptops. Image credit: Oak Ridge National Laboratory, U.S. Dept. of Energy, CC BY
I am a molecular biophysicist. My lab, at the Center for Molecular Biophysics at the University of Tennessee and Oak Ridge National Laboratory, uses a supercomputer to discover drugs. We build three-dimensional virtual models of biological molecules like the proteins used by cells and viruses, and simulate how various chemical compounds interact with those proteins. We test thousands of compounds to find the ones that “dock” with a target protein. Those compounds that fit, lock-and-key style, with the protein are potential therapies.
The top-ranked candidates are then tested experimentally to see if they indeed do bind to their targets and, in the case of Covid-19, stop the virus from infecting human cells. The compounds are first tested in cells, then animals, and finally humans. Computational drug discovery with high-performance computing has been important in finding antiviral drugs in the past, such as the anti-HIV drugs that revolutionized AIDS treatment in the 1990s.
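Stripped of all the physics, the screening loop is conceptually simple: score every compound against a target, rank the scores, and send the best candidates to the wet lab. The sketch below is a deliberately toy version; the scoring function is a placeholder standing in for real docking software and has nothing to do with the lab’s actual pipeline, and the compound names are hypothetical.

```python
# Toy virtual-screening loop: score a compound library against one target,
# rank by score, keep the top hits for experimental testing. The scoring
# function below is a placeholder, not a real docking calculation.

def docking_score(compound: str, target: str) -> float:
    """Placeholder: pretend that lower scores mean tighter binding."""
    return (sum(ord(ch) for ch in compound + target) % 997) / 100.0

compound_library = [f"compound_{i:05d}" for i in range(10_000)]  # hypothetical IDs
target = "SARS-CoV-2 spike protein"

ranked = sorted(compound_library, key=lambda c: docking_score(c, target))
top_candidates = ranked[:100]    # these would move on to cell-based assays
print(top_candidates[:5])
```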
World’s Most Powerful Computer
Since the 1990s the power of supercomputers has increased by a factor of a million or so. Summit at Oak Ridge National Laboratory is presently the world’s most powerful supercomputer, and has the combined power of roughly a million laptops. A laptop today has roughly the same power as a supercomputer had 20-30 years ago.
However, in order to gin up speed, supercomputer architectures have become more complicated. They used to consist of single, very powerful chips on which programs would simply run faster. Now they consist of thousands of processors performing massively parallel processing in which many calculations, such as testing the potential of drugs to dock with a pathogen or cell’s proteins, are performed at the same time. Persuading those processors to work together harmoniously is a pain in the neck but means we can quickly try out a lot of chemicals virtually.
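Because each compound can be scored independently of the others, a screening loop like the one sketched earlier parallelizes almost trivially, which is exactly the kind of workload those thousands of processors are built for. Here is a minimal stand-in using Python’s standard library; a real supercomputer job would distribute the work across many nodes rather than a handful of local processes, and the scoring function is again a toy placeholder.

```python
# Minimal parallel-screening sketch: split a hypothetical compound library
# into batches, score the batches in separate worker processes, then merge
# and rank. The scoring function is a toy placeholder, not real docking.
from concurrent.futures import ProcessPoolExecutor

compounds = [f"cmpd_{i:06d}" for i in range(100_000)]   # hypothetical library

def toy_score(compound: str) -> float:
    return (sum(ord(ch) for ch in compound) % 997) / 100.0  # lower = "binds better"

def score_batch(batch):
    return [(c, toy_score(c)) for c in batch]

if __name__ == "__main__":
    batches = [compounds[i:i + 10_000] for i in range(0, len(compounds), 10_000)]
    with ProcessPoolExecutor() as pool:                  # one batch per worker process
        scored = [pair for chunk in pool.map(score_batch, batches) for pair in chunk]
    scored.sort(key=lambda pair: pair[1])
    print(scored[:5])                                    # best-ranked compounds overall
```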
Further, researchers use supercomputers to figure out by simulation the different shapes formed by the target binding sites and then virtually dock compounds to each shape. In my lab, that procedure has produced experimentally validated hits—chemicals that work—for each of 16 protein targets that physician-scientists and biochemists have discovered over the past few years. These targets were selected because finding compounds that dock with them could result in drugs for treating different diseases, including chronic kidney disease, prostate cancer, osteoporosis, diabetes, thrombosis and bacterial infections.
Scientists are using supercomputers to find ways to disable the various proteins—including the infamous spike protein (green protrusions)—produced by SARS-CoV-2, the virus responsible for Covid-19. Image credit: Thomas Splettstoesser scistyle.com, CC BY-ND
Billions of Possibilities
So which chemicals are being tested for Covid-19? A first approach is trying out drugs that already exist for other indications and that we have a pretty good idea are reasonably safe. That’s called “repurposing,” and if it works, regulatory approval will be quick.
But repurposing isn’t necessarily being done in the most rational way. One idea researchers are considering is that drugs that work against protein targets of some other virus, such as the flu, hepatitis or Ebola, will automatically work against Covid-19, even when the SARS-CoV-2 protein targets don’t have the same shape.
The best approach is to check if repurposed compounds will actually bind to their intended target. To that end, my lab published a preliminary report of a supercomputer-driven docking study of a repurposing compound database in mid-February. The study ranked 8,000 compounds in order of how well they bind to the viral spike protein. This paper triggered the establishment of a high-performance computing consortium against our viral enemy, announced by President Trump in March. Several of our top-ranked compounds are now in clinical trials.
Our own work has now expanded to about 10 targets on SARS-CoV-2, and we’re also looking at human protein targets for disrupting the virus’s attack on human cells. Top-ranked compounds from our calculations are being tested experimentally for activity against the live virus. Several of these have already been found to be active.
Also, we and others are venturing out into the wild world of new drug discovery for Covid-19—looking for compounds that have never been tried as drugs before. Databases of billions of these compounds exist, all of which could probably be synthesized in principle but most of which have never been made. Billion-compound docking is a tailor-made task for massively parallel supercomputing.
Dawn of the Exascale Era
Work will be helped by the arrival of the next big machine at Oak Ridge, called Frontier, planned for next year. Frontier should be about 10 times more powerful than Summit. Frontier will herald the “exascale” supercomputing era, meaning machines capable of 1,000,000,000,000,000,000 calculations per second.
Although some fear supercomputers will take over the world, for the time being, at least, they are humanity’s servants, which means that they do what we tell them to. Different scientists have different ideas about how to calculate which drugs work best—some prefer artificial intelligence, for example—so there’s quite a lot of arguing going on.
Hopefully, scientists armed with the most powerful computers in the world will, sooner rather than later, find the drugs needed to tackle Covid-19. If they do, then their answers will be of more immediate benefit, if less philosophically tantalizing, than the answer to the ultimate question provided by Deep Thought, which was, maddeningly, simply 42.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image credit: NIH/NIAID
#437150 AI Is Getting More Creative. But Who ...
Creativity is a trait that sets humans apart from other species. We alone have the ability to make music and art that speak to our experiences or illuminate truths about our world. But suddenly, humans’ artistic abilities have some competition—and from a decidedly non-human source.
Over the last couple of years there have been some remarkable examples of art produced by deep learning algorithms. They have challenged our already elusive definition of creativity and shown how professionals can use artificial intelligence to enhance their abilities and produce work beyond familiar boundaries.
But when creativity is the result of code written by a programmer, using a framework provided by a software engineer, and fed private and public datasets, how do we assign ownership of AI-generated content, and particularly of artwork? The stakes are considerable: McKinsey estimates AI will generate $3.5 trillion to $5.8 trillion in value annually across various sectors.
In 2018, a portrait christened Edmond de Belamy was created by a French art collective called Obvious. The group used a database of 15,000 portraits painted between the 1300s and the 1900s to train a deep learning algorithm to produce a unique portrait. The painting sold for $432,500 at a New York auction. Similarly, a program called Aiva, trained on thousands of classical compositions, has released albums whose pieces are being used by ad agencies and in movies.
The datasets used by these algorithms were different, but behind both there was a programmer who changed the brush strokes or musical notes into lines of code and a data scientist or engineer who fitted and “curated” the datasets to use for the model. There could also have been user-based input, and the output may be biased towards certain styles or unintentionally infringe on similar pieces of art. This shows that there are many collaborators with distinct roles in producing AI-generated content, and it’s important to discuss how they can protect their proprietary interests.
A perspective article published in Nature Machine Intelligence by Jason K. Eshraghian in March looks into how AI artists and the collaborators involved should assess their ownership, laying out some guiding principles that are “only applicable for as long as AI does not have legal personhood, the way humans and corporations are accorded.”
Before looking at how collaborators can protect their interests, it’s useful to understand the basic requirements of copyright law. The artwork in question must be an “original work of authorship fixed in a tangible medium.” Given this principle, the author asked whether it’s possible for AI to exercise creativity, skill, or any other indicator of originality. The answer is still straightforward—no—or at least not yet. Currently, AI’s range of creativity doesn’t exceed the standard used by the US Copyright Office, which states that copyright law protects the “fruits of intellectual labor founded in the creative powers of the mind.”
Due to the current limitations of narrow AI, it must have some form of initial input that helps develop its ability to create. At the moment AI is a tool that can be used to produce creative work in the same way that a video camera is a tool used to film creative content. Video producers don’t need to comprehend the inner workings of their cameras; as long as their content shows creativity and originality, they have a proprietary claim over their creations.
The same concept applies to programmers developing a neural network. As long as the dataset they use as input yields an original and creative result, it will be protected by copyright law; they don’t need to understand the high-level mathematics, which in this case often amounts to black-box algorithms whose output is impossible to fully analyze.
Will robots and algorithms eventually be treated as creative sources able to own copyrights? The author pointed to the recent patent case of Warner-Lambert Co Ltd versus Generics where Lord Briggs, Justice of the Supreme Court of the UK, determined that “the court is well versed in identifying the governing mind of a corporation and, when the need arises, will no doubt be able to do the same for robots.”
In the meantime, Dr. Eshraghian suggests four guiding principles to allow artists who collaborate with AI to protect themselves.
First, programmers need to document their process through online code repositories like GitHub or Bitbucket.
Second, data engineers should also document and catalog their datasets and the process they used to curate their models, indicating selectivity in their criteria as much as possible to demonstrate their involvement and creativity.
Third, in cases where user data is utilized, the engineer should “catalog all runs of the program” to distinguish the data selection process. This could be interpreted as a way of determining whether user-based input has a right to claim the copyright too.
Finally, the output should avoid infringing on others’ content through methods like reverse image searches and version control, as mentioned above.
AI-generated artwork is still a very new concept, and the ambiguous copyright laws around it give a lot of flexibility to AI artists and programmers worldwide. The guiding principles Eshraghian lays out will hopefully shed some light on the legislation we’ll eventually need for this kind of art, and start an important conversation between all the stakeholders involved.
Image Credit: Wikimedia Commons