#437150 AI Is Getting More Creative. But Who ...
Creativity is a trait that sets humans apart from other species. We alone have the ability to make music and art that speak to our experiences or illuminate truths about our world. But suddenly, humans’ artistic abilities have some competition, and from a decidedly non-human source.
Over the last couple of years there have been some remarkable examples of art produced by deep learning algorithms. They have challenged the notion that creativity defies definition, and they have shown how professionals can use artificial intelligence to augment their abilities and push beyond familiar boundaries.
But when creativity is the result of code written by a programmer, running in a framework built by a software engineer, and trained on private and public datasets, how do we assign ownership of AI-generated content, and particularly of artwork? The stakes are considerable: McKinsey estimates AI will generate $3.5 to $5.8 trillion in value annually across various sectors.
In 2018, a portrait christened Edmond de Belamy was created by a French art collective called Obvious. The collective used a database of 15,000 portraits painted between the 1300s and the 1900s to train a deep learning algorithm to produce a unique portrait, which sold for $432,500 at a New York auction. Similarly, a program called Aiva, trained on thousands of classical compositions, has released albums whose pieces are being used by ad agencies and in movies.
The datasets used by these algorithms were different, but behind both there was a programmer who translated the brushstrokes or musical notes into lines of code, and a data scientist or engineer who fitted and “curated” the datasets used by the model. There may also have been user-based input, and the output may be biased toward certain styles or may unintentionally infringe on similar pieces of art. In short, many collaborators with distinct roles are involved in producing AI-generated content, and it’s important to discuss how each of them can protect their proprietary interests.
A perspective article published in Nature Machine Intelligence by Jason K. Eshraghian in March looks into how AI artists and the collaborators involved should assess their ownership, laying out some guiding principles that are “only applicable for as long as AI does not have legal personhood, the way humans and corporations are accorded.”
Before looking at how collaborators can protect their interests, it’s useful to understand the basic requirements of copyright law. The artwork in question must be an “original work of authorship fixed in a tangible medium.” Given this principle, the author asked whether it’s possible for AI to exercise creativity, skill, or any other indicator of originality. The answer is still straightforward—no—or at least not yet. Currently, AI’s range of creativity doesn’t exceed the standard used by the US Copyright Office, which states that copyright law protects the “fruits of intellectual labor founded in the creative powers of the mind.”
Given the current limitations of narrow AI, an algorithm must have some form of initial input that develops its ability to create. At the moment AI is a tool that can be used to produce creative work, in the same way that a video camera is a tool used to film creative content. Video producers don’t need to comprehend the inner workings of their cameras; as long as their content shows creativity and originality, they have a proprietary claim over their creations.
The same concept applies to programmers developing a neural network. As long as the dataset they use as input yields an original and creative result, it will be protected by copyright law; they don’t need to understand the underlying mathematics, which in this case often amounts to a black-box model whose inner workings are effectively impossible to analyze.
Will robots and algorithms eventually be treated as creative sources able to own copyrights? The author pointed to the recent patent case of Warner-Lambert Co Ltd versus Generics, in which Lord Briggs, Justice of the Supreme Court of the UK, determined that “the court is well versed in identifying the governing mind of a corporation and, when the need arises, will no doubt be able to do the same for robots.”
In the meantime, Dr. Eshraghian suggests four guiding principles to allow artists who collaborate with AI to protect themselves.
First, programmers need to document their process through online code repositories like GitHub or Bitbucket.
Second, data engineers should likewise document and catalog their datasets and the process they used to curate them, making their selection criteria as explicit as possible to demonstrate their involvement and creativity.
Third, in cases where user data is utilized, the engineer should “catalog all runs of the program” to document the data selection process. This could also be read as a way of determining whether contributors of user-based input have a claim to the copyright as well. (A minimal sketch of what such cataloging might look like appears after the final principle below.)
Finally, the output should avoid infringing on others’ content through methods like reverse image searches and version control, as mentioned above.
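To make the second and third principles concrete, here is a minimal, hypothetical sketch of how a programmer might catalog each run of a generative model: every run is logged with a dataset fingerprint, the exact code version, and the hyperparameters used, so each output can later be traced back to its inputs. The file names, record fields, and helper functions are illustrative assumptions, not anything prescribed in Eshraghian’s article.

```python
import hashlib
import json
import subprocess
from datetime import datetime, timezone
from pathlib import Path

CATALOG = Path("run_catalog.jsonl")  # append-only log, one JSON record per run

def dataset_fingerprint(paths):
    """Hash the training files so the exact dataset used can be proven later."""
    digest = hashlib.sha256()
    for path in sorted(paths):
        digest.update(Path(path).read_bytes())
    return digest.hexdigest()

def current_commit():
    """Record the exact code version (assumes the project lives in a git repo)."""
    result = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
    )
    return result.stdout.strip()

def log_run(dataset_paths, hyperparams, output_file):
    """Append one record describing a single generation run to the catalog."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "dataset_sha256": dataset_fingerprint(dataset_paths),
        "code_commit": current_commit(),
        "hyperparams": hyperparams,
        "output_file": str(output_file),
    }
    with CATALOG.open("a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: log one portrait-generation run.
log_run(
    dataset_paths=["portraits/0001.png", "portraits/0002.png"],
    hyperparams={"model": "dcgan", "epochs": 300, "seed": 42},
    output_file="outputs/portrait_042.png",
)
```

Each line of run_catalog.jsonl then ties one output file to a specific dataset hash and code commit, which is exactly the kind of paper trail these principles call for.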
AI-generated artwork is still a very new concept, and the ambiguous copyright laws around it give a lot of flexibility to AI artists and programmers worldwide. The guiding principles Eshraghian lays out will hopefully shed some light on the legislation we’ll eventually need for this kind of art, and start an important conversation between all the stakeholders involved.
Image Credit: Wikimedia Commons
#436426 Video Friday: This Robot Refuses to Fall ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.
In case you somehow missed the massive Skydio 2 review we posted earlier this week, the first batches of the drone are now shipping. Each drone gets a lot of attention before it goes out the door, and here’s a behind-the-scenes clip of the process.
[ Skydio ]
Sphero RVR is one of the 15 robots on our robot gift guide this year. Here’s a new video Sphero just released showing some of the things you can do with the robot.
[ RVR ]
NimbRo-OP2 has some impressive recovery skills from the obligatory research-motivated robot abuse.
[ NimbRo ]
Teams seeking to qualify for the Virtual Urban Circuit of the Subterranean Challenge can access practice worlds to test their approaches prior to submitting solutions for the competition. This video previews three of the practice environments.
[ DARPA SubT ]
Stretchable skin-like robots that can be rolled up and put in your pocket have been developed by a University of Bristol team using a new way of embedding artificial muscles and electrical adhesion into soft materials.
[ Bristol ]
Happy Holidays from ABB!
Helping New York celebrate the festive season, twelve ABB robots are interacting with visitors to Bloomingdale’s iconic holiday celebration at its 59th Street flagship store. ABB’s robots are the main attraction in three of Bloomingdale’s twelve holiday window displays at Lexington and Third Avenue, as ABB demonstrates the potential for its robotics and automation technology to revolutionize visual merchandising and make the retail experience more dynamic and whimsical.
[ ABB ]
We introduce pelican eel–inspired dual-morphing architectures that embody quasi-sequential behaviors of origami unfolding and skin stretching in response to fluid pressure. In the proposed system, fluid paths were enclosed and guided by a set of entirely stretchable origami units that imitate the morphing principle of the pelican eel’s stretchable and foldable frames. This geometric and elastomeric design of fluid networks, in which fluid pressure acts in the direction that the whole body deploys first, resulted in a quasi-sequential dual-morphing response. To verify the effectiveness of our design rule, we built an artificial creature mimicking a pelican eel and reproduced biomimetic dual-morphing behavior.
And here’s a real pelican eel:
[ Science Robotics ]
Delft Dynamics’ updated anti-drone system involves a tether, mid-air net gun, and even a parachute.
[ Delft Dynamics ]
Teleoperation is a great way of helping robots with complex tasks, especially if you can do it through motion capture. But what if you’re teleoperating a non-anthropomorphic robot? Columbia’s ROAM Lab is working on it.
[ Paper ] via [ ROAM Lab ]
I don’t know how I missed this video last year because it’s got a steely robot hand squeezing a cute lil’ chick.
[ MotionLib ] via [ RobotStart ]
In this video we present results of a trajectory generation method for autonomous overtaking of unexpected obstacles in a dynamic urban environment. In these settings, blind spots can arise from perception limitations, for example when overtaking unexpected objects in the vehicle’s ego lane on a two-way street. In this case, a human driver would first make sure that the opposite lane is free and that there is enough room to execute the maneuver, and would then pull into the opposite lane to complete it. We consider the practical problem of autonomous overtaking when the coverage of the perception system is impaired due to occlusion.
[ Paper ]
New weirdness from Toio!
[ Toio ]
Palo Alto City Library won a technology innovation award! Watch to see how Senior Librarian Dan Lou is using Misty to enhance their technology programs to inspire and educate customers.
[ Misty Robotics ]
We consider the problem of reorienting a rigid object of arbitrary known shape on a table using a two-finger pinch gripper. The reorienting problem is challenging because of its non-smoothness and high dimensionality. In this work, we focus on solving reorienting using pivoting, in which we allow the grasped object to rotate between the fingers. Pivoting decouples the gripper rotation from the object motion, making it possible to reorient an object under strict robot workspace constraints.
[ CMU ]
How can a mobile robot be a good pedestrian without bumping into you on the sidewalk? It’s hard for a robot to navigate in crowded environments, since the flow of foot traffic follows implicit social rules. Researchers from MIT developed an algorithm that teaches mobile robots to maneuver in crowds of people while respecting their natural behavior.
[ Roboy Research Reviews ]
What happens when humans and robots make art together? In this awe-inspiring talk, artist Sougwen Chung shows how she “taught” her artistic style to a machine — and shares the results of their collaboration after making an unexpected discovery: robots make mistakes, too. “Part of the beauty of human and machine systems is their inherent, shared fallibility,” she says.
[ TED ]
Last month at the Cooper Union in New York City, IEEE TechEthics hosted a public panel session on the facts and misperceptions of autonomous vehicles, part of the IEEE TechEthics Conversations Series. The speakers were: Jason Borenstein from Georgia Tech; Missy Cummings from Duke University; Jack Pokrzywa from SAE; and Heather M. Roff from Johns Hopkins Applied Physics Laboratory. The panel was moderated by Mark A. Vasquez, program manager for IEEE TechEthics.
[ IEEE TechEthics ]
Two videos this week from Lex Fridman’s AI podcast: Noam Chomsky, and Whitney Cummings.
[ AI Podcast ]
This week’s CMU RI Seminar comes from Jeff Clune at the University of Wyoming, on “Improving Robot and Deep Reinforcement Learning via Quality Diversity and Open-Ended Algorithms.”
Quality Diversity (QD) algorithms are those that seek to produce a diverse set of high-performing solutions to problems. I will describe them and a number of their positive attributes. I will then summarize our Nature paper on how they, when combined with Bayesian Optimization, produce a learning algorithm that enables robots, after being damaged, to adapt in 1-2 minutes in order to continue performing their mission, yielding state-of-the-art robot damage recovery. I will next describe our QD-based Go-Explore algorithm, which dramatically improves the ability of deep reinforcement learning algorithms to solve previously unsolvable problems wherein reward signals are sparse, meaning that intelligent exploration is required. Go-Explore solves Montezuma’s Revenge, considered by many to be a major AI research challenge. Finally, I will motivate research into open-ended algorithms, which seek to innovate endlessly, and introduce our POET algorithm, which generates its own training challenges while learning to solve them, automatically creating a curriculum for robots to learn an expanding set of diverse skills. POET creates and solves challenges that are unsolvable with traditional deep reinforcement learning techniques.
[ CMU RI ]
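For readers new to QD, here is a minimal, illustrative sketch of a MAP-Elites-style archive, the mechanism at the core of many quality diversity algorithms: candidate solutions compete only against others that land in the same behavior-descriptor bin, so the archive accumulates a diverse set of high performers rather than a single champion. The toy fitness and behavior functions below are assumptions for illustration, not the talk’s actual benchmarks.

```python
import random

GRID = 10  # number of behavior-descriptor bins in the archive

def evaluate(genome):
    """Toy evaluation: return (fitness, behavior descriptor in [0, 1))."""
    fitness = -sum((g - 0.5) ** 2 for g in genome)  # peak when all genes are 0.5
    behavior = (sum(genome) / len(genome)) % 1.0    # mean gene value, wrapped to [0, 1)
    return fitness, behavior

def map_elites(iterations=10_000, genome_len=8):
    archive = {}  # bin index -> (fitness, genome): one elite per behavior niche
    for _ in range(iterations):
        if archive:
            # Mutate a randomly chosen elite already in the archive.
            _, parent = random.choice(list(archive.values()))
            child = [g + random.gauss(0, 0.1) for g in parent]
        else:
            # Seed the archive with a random genome.
            child = [random.random() for _ in range(genome_len)]
        fitness, behavior = evaluate(child)
        cell = min(int(behavior * GRID), GRID - 1)
        # Niche-local competition: the child only replaces the elite in its own bin.
        if cell not in archive or fitness > archive[cell][0]:
            archive[cell] = (fitness, child)
    return archive

elites = map_elites()
for cell in sorted(elites):
    print(f"bin {cell}: best fitness {elites[cell][0]:.4f}")
```

The systems described in the talk layer more machinery on top of an archive like this (Bayesian optimization for damage recovery, exploration and robustification phases in Go-Explore), but niche-local competition of this kind is the shared core.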