Tag Archives: poet

#436426 Video Friday: This Robot Refuses to Fall ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

In case you somehow missed the massive Skydio 2 review we posted earlier this week, the first batches of the drone are now shipping. Each drone gets a lot of attention before it goes out the door, and here’s a behind-the-scenes clip of the process.

[ Skydio ]

Sphero RVR is one of the 15 robots on our robot gift guide this year. Here’s a new video Sphero just released showing some of the things you can do with the robot.

[ RVR ]

NimbRo-OP2 has some impressive recovery skills from the obligatory research-motivated robot abuse.

[ NimbRo ]

Teams seeking to qualify for the Virtual Urban Circuit of the Subterranean Challenge can access practice worlds to test their approaches prior to submitting solutions for the competition. This video previews three of the practice environments.

[ DARPA SubT ]

Stretchable skin-like robots that can be rolled up and put in your pocket have been developed by a University of Bristol team using a new way of embedding artificial muscles and electrical adhesion into soft materials.

[ Bristol ]

Happy Holidays from ABB!

Helping New York celebrate the festive season, twelve ABB robots are interacting with visitors to Bloomingdale’s iconic holiday celebration at its 59th Street flagship store. ABB’s robots are the main attraction in three of Bloomingdale’s twelve holiday window displays at Lexington and Third Avenue, as ABB demonstrates the potential for its robotics and automation technology to revolutionize visual merchandising and make the retail experience more dynamic and whimsical.

[ ABB ]

We introduce pelican eel–inspired dual-morphing architectures that embody quasi-sequential behaviors of origami unfolding and skin stretching in response to fluid pressure. In the proposed system, fluid paths were enclosed and guided by a set of entirely stretchable origami units that imitate the morphing principle of the pelican eel’s stretchable and foldable frames. This geometric and elastomeric design of fluid networks, in which fluid pressure acts in the direction that the whole body deploys first, resulted in a quasi-sequential dual-morphing response. To verify the effectiveness of our design rule, we built an artificial creature mimicking a pelican eel and reproduced biomimetic dual-morphing behavior.

And here’s a real pelican eel:

[ Science Robotics ]

Delft Dynamics’ updated anti-drone system involves a tether, mid-air net gun, and even a parachute.

[ Delft Dynamics ]

Teleoperation is a great way of helping robots with complex tasks, especially if you can do it through motion capture. But what if you’re teleoperating a non-anthropomorphic robot? Columbia’s ROAM Lab is working on it.

[ Paper ] via [ ROAM Lab ]

I don’t know how I missed this video last year because it’s got a steely robot hand squeezing a cute lil’ chick.

[ MotionLib ] via [ RobotStart ]

In this video we present results of a trajectory generation method for autonomous overtaking of unexpected obstacles in a dynamic urban environment. In these settings, blind spots can arise from perception limitations, for example when overtaking unexpected objects in the vehicle’s ego lane on a two-way street. In this case, a human driver would first make sure that the opposite lane is free and that there is enough room to successfully execute the maneuver, and would then cut into the opposite lane to carry it out. We consider the practical problem of autonomous overtaking when the coverage of the perception system is impaired due to occlusion.

[ Paper ]

New weirdness from Toio!

[ Toio ]

Palo Alto City Library won a technology innovation award! Watch to see how Senior Librarian Dan Lou is using Misty to enhance their technology programs to inspire and educate customers.

[ Misty Robotics ]

We consider the problem of reorienting a rigid object with arbitrary known shape on a table using a two-finger pinch gripper. The reorienting problem is challenging because of its non-smoothness and high dimensionality. In this work, we focus on solving reorienting using pivoting, in which we allow the grasped object to rotate between fingers. Pivoting decouples the gripper rotation from the object motion, making it possible to reorient an object under strict robot workspace constraints.

[ CMU ]

How can a mobile robot be a good pedestrian without bumping into you on the sidewalk? Navigating crowded environments is hard for a robot because the flow of pedestrian traffic follows implied social rules. But researchers from MIT developed an algorithm that teaches mobile robots to maneuver in crowds of people while respecting their natural behavior.

[ Roboy Research Reviews ]

What happens when humans and robots make art together? In this awe-inspiring talk, artist Sougwen Chung shows how she “taught” her artistic style to a machine — and shares the results of their collaboration after making an unexpected discovery: robots make mistakes, too. “Part of the beauty of human and machine systems is their inherent, shared fallibility,” she says.

[ TED ]

Last month at the Cooper Union in New York City, IEEE TechEthics hosted a public panel session on the facts and misperceptions of autonomous vehicles, part of the IEEE TechEthics Conversations Series. The speakers were: Jason Borenstein from Georgia Tech; Missy Cummings from Duke University; Jack Pokrzywa from SAE; and Heather M. Roff from Johns Hopkins Applied Physics Laboratory. The panel was moderated by Mark A. Vasquez, program manager for IEEE TechEthics.

[ IEEE TechEthics ]

Two videos this week from Lex Fridman’s AI podcast: Noam Chomsky, and Whitney Cummings.

[ AI Podcast ]

This week’s CMU RI Seminar comes from Jeff Clune at the University of Wyoming, on “Improving Robot and Deep Reinforcement Learning via Quality Diversity and Open-Ended Algorithms.”

Quality Diversity (QD) algorithms are those that seek to produce a diverse set of high-performing solutions to problems. I will describe them and a number of their positive attributes. I will then summarize our Nature paper on how they, when combined with Bayesian Optimization, produce a learning algorithm that enables robots, after being damaged, to adapt in 1-2 minutes in order to continue performing their mission, yielding state-of-the-art robot damage recovery. I will next describe our QD-based Go-Explore algorithm, which dramatically improves the ability of deep reinforcement learning algorithms to solve previously unsolvable problems wherein reward signals are sparse, meaning that intelligent exploration is required. Go-Explore solves Montezuma’s Revenge, considered by many to be a major AI research challenge. Finally, I will motivate research into open-ended algorithms, which seek to innovate endlessly, and introduce our POET algorithm, which generates its own training challenges while learning to solve them, automatically creating curricula for robots to learn an expanding set of diverse skills. POET creates and solves challenges that are unsolvable with traditional deep reinforcement learning techniques.
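
As a rough illustration of the QD idea (this is not code from the talk; the toy objective, descriptor, and parameters are invented), a minimal MAP-Elites-style loop keeps the best solution found in each behavior bin, so the archive accumulates a diverse set of high performers rather than a single optimum:

```python
import random

def map_elites(evaluate, descriptor, n_iters=2000, n_bins=10, seed=0):
    """Minimal MAP-Elites sketch: an archive keyed by behavior bin,
    where each bin stores the fittest solution seen for that behavior."""
    rng = random.Random(seed)
    archive = {}  # bin index -> (fitness, solution)
    for _ in range(n_iters):
        if archive and rng.random() < 0.9:
            # Mutate an elite chosen at random from the archive.
            _, parent = rng.choice(list(archive.values()))
        else:
            parent = rng.uniform(0.0, 1.0)  # random restart
        x = min(max(parent + rng.gauss(0, 0.1), 0.0), 1.0)
        b = min(int(descriptor(x) * n_bins), n_bins - 1)
        f = evaluate(x)
        # Replace the cell's elite only if this solution is fitter.
        if b not in archive or f > archive[b][0]:
            archive[b] = (f, x)
    return archive

# Toy problem: fitness is a bumpy 1-D function; the behavior
# descriptor is simply the solution's position in [0, 1].
archive = map_elites(evaluate=lambda x: 1 - (x - 0.5) ** 2,
                     descriptor=lambda x: x)
```

The payoff is that the archive holds a decent solution for many different behaviors at once, which is what allows a damaged robot to fall back on a qualitatively different but still workable gait.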

[ CMU RI ]

Posted in Human Robots

#436258 For Centuries, People Dreamed of a ...

This is part six of a six-part series on the history of natural language processing.

In February of this year, OpenAI, one of the foremost artificial intelligence labs in the world, announced that a team of researchers had built a powerful new text generator called the Generative Pre-Trained Transformer 2, or GPT-2 for short. The researchers used an unsupervised machine learning algorithm to train their system, which exhibited a broad set of natural language processing (NLP) capabilities, including reading comprehension, machine translation, and the ability to generate long strings of coherent text.

But as is often the case with NLP technology, the tool held both great promise and great peril. Researchers and policy makers at the lab were concerned that their system, if widely released, could be exploited by bad actors and misappropriated for “malicious purposes.”

The people of OpenAI, which defines its mission as “discovering and enacting the path to safe artificial general intelligence,” were concerned that GPT-2 could be used to flood the Internet with fake text, thereby degrading an already fragile information ecosystem. For this reason, OpenAI decided that it would not release the full version of GPT-2 to the public or other researchers.

GPT-2 is an example of a technique in NLP called language modeling, whereby the computational system internalizes a statistical blueprint of a text so it’s able to mimic it. Just like the predictive text on your phone—which selects words based on words you’ve used before—GPT-2 can look at a string of text and then predict what the next word is likely to be based on the probabilities inherent in that text.

GPT-2 can be seen as a descendant of the statistical language modeling that the Russian mathematician A. A. Markov developed in the early 20th century (covered in part three of this series).

What’s different with GPT-2, though, is the scale of the textual data modeled by the system. Whereas Markov analyzed a string of 20,000 letters to create a rudimentary model that could predict the likelihood of the next letter of a text being a consonant or a vowel, GPT-2 was trained on the text of 8 million web pages linked from Reddit to predict what the next word might be within that entire dataset.

And whereas Markov manually trained his model by counting only two parameters—vowels and consonants—GPT-2 used cutting-edge machine learning algorithms to do linguistic analysis with over 1.5 billion parameters, burning through huge amounts of computational power in the process.
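
Markov’s two-parameter model is simple enough to reproduce in a few lines. The sketch below (a toy illustration on an arbitrary English sample, not Markov’s original Pushkin text) counts vowel/consonant transitions and estimates the probability that the next letter is a vowel:

```python
from collections import Counter

def vowel_consonant_model(text):
    """Estimate P(next letter is a vowel | current letter's class),
    in the spirit of Markov's hand-counted two-class model."""
    # Classify each letter as vowel (V) or consonant (C); skip non-letters.
    classes = ["V" if ch in "aeiou" else "C"
               for ch in text.lower() if ch.isalpha()]
    transitions = Counter(zip(classes, classes[1:]))
    probs = {}
    for current in ("V", "C"):
        total = transitions[(current, "V")] + transitions[(current, "C")]
        if total:
            probs[current] = transitions[(current, "V")] / total
    return probs

probs = vowel_consonant_model("the quick brown fox jumps over the lazy dog")
# probs["C"] is the estimated chance that a vowel follows a consonant
```

GPT-2 works on the same predict-the-next-symbol principle; the difference is that its "table" of statistics is a neural network with billions of learned weights rather than a two-by-two count.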

The results were impressive. In their blog post, OpenAI reported that GPT-2 could generate synthetic text in response to prompts, mimicking whatever style of text it was shown. If you prompt the system with a line of William Blake’s poetry, it can generate a line back in the Romantic poet’s style. If you prompt the system with a cake recipe, you get a newly invented recipe in response.

Perhaps the most compelling feature of GPT-2 is that it can answer questions accurately. For example, when OpenAI researchers asked the system, “Who wrote the book The Origin of Species?”—it responded: “Charles Darwin.” While the system responds accurately only some of the time, the feature does seem to be a limited realization of Gottfried Leibniz’s dream of a language-generating machine that could answer any and all human questions (described in part two of this series).

After observing the power of the new system in practice, OpenAI elected not to release the fully trained model. In the lead up to its release in February, there had been heightened awareness about “deepfakes”—synthetic images and videos, generated via machine learning techniques, in which people do and say things they haven’t really done and said. Researchers at OpenAI worried that GPT-2 could be used to essentially create deepfake text, making it harder for people to trust textual information online.

Responses to this decision varied. On one hand, OpenAI’s caution prompted an overblown reaction in the media, with articles about the “dangerous” technology feeding into the Frankenstein narrative that often surrounds developments in AI.

Others took issue with OpenAI’s self-promotion, with some even suggesting that OpenAI purposefully exaggerated GPT-2’s power in order to create hype—while contravening a norm in the AI research community, where labs routinely share data, code, and pre-trained models. As machine learning researcher Zachary Lipton tweeted, “Perhaps what's *most remarkable* about the @OpenAI controversy is how *unremarkable* the technology is. Despite their outsize attention & budget, the research itself is perfectly ordinary—right in the main branch of deep learning NLP research.”

OpenAI stood by its decision to release only a limited version of GPT-2, but has since released larger models for other researchers and the public to experiment with. As yet, there has been no reported case of a widely distributed fake news article generated by the system. But there have been a number of interesting spin-off projects, including GPT-2 poetry and a webpage where you can prompt the system with questions yourself.

There’s even a Reddit group populated entirely with text produced by GPT-2-powered bots. Mimicking humans on Reddit, the bots have long conversations about a variety of topics, including conspiracy theories and Star Wars movies.

This bot-powered conversation may signify the new condition of life online, where language is increasingly created by a combination of human and non-human agents, and where maintaining the distinction between human and non-human, despite our best efforts, is increasingly difficult.

The idea of using rules, mechanisms, and algorithms to generate language has inspired people in many different cultures throughout history. But it’s in the online world that this powerful form of wordcraft may really find its natural milieu—in an environment where the identity of speakers becomes more ambiguous, and perhaps, less relevant. It remains to be seen what the consequences will be for language, communication, and our sense of human identity, which is so bound up with our ability to speak in natural language.

This is the sixth installment of a six-part series on the history of natural language processing. Last week’s post explained how an innocent Microsoft chatbot turned instantly racist on Twitter.

You can also check out our prior series on the untold history of AI.

Posted in Human Robots

#430286 Artificial Intelligence Predicts Death ...

Do not go gentle into that good night,
Old age should burn and rave at close of day;
Rage, rage against the dying of the light.
Welsh poet Dylan Thomas’ famous lines are a passionate plea to fight against the inevitability of death. While the sentiment is poetic, the reality is far more prosaic. We are all going to die someday at a time and place that will likely remain a mystery to us until the very end.
Or maybe not.
Researchers are now applying artificial intelligence, particularly machine learning and computer vision, to predict when someone may die. The ultimate goal is not to play the role of Grim Reaper, like in the macabre sci-fi Machine of Death universe, but to treat or even prevent chronic diseases and other illnesses.
The latest research into this application of AI to precision medicine used an off-the-shelf machine-learning platform to analyze 48 chest CT scans. The computer was able to predict which patients would die within five years with 69 percent accuracy. That’s about as good as any human doctor.
The results were published in the Nature journal Scientific Reports by a team led by the University of Adelaide.
In an email interview with Singularity Hub, lead author Dr. Luke Oakden-Rayner, a radiologist and PhD student, says that one of the obvious benefits of using AI in precision medicine is to identify health risks earlier and potentially intervene.
Less obvious, he adds, is the promise of speeding up longevity research.
“Currently, most research into chronic disease and longevity requires long periods of follow-up to detect any difference between patients with and without treatment, because the diseases progress so slowly,” he explains. “If we can quantify the changes earlier, not only can we identify disease while we can intervene more effectively, but we might also be able to detect treatment response much sooner.”
That could lead to faster and cheaper treatments, he adds. “If we could cut a year or two off the time it takes to take a treatment from lab to patient, that could speed up progress in this area substantially.”
AI has a heart
In January, researchers at Imperial College London published results that suggested AI could predict heart failure and death better than a human doctor. The research, published in the journal Radiology, involved creating virtual 3D hearts of about 250 patients that could simulate cardiac function. AI algorithms then went to work to learn what features would serve as the best predictors. The system relied on MRIs, blood tests, and other data for its analyses.
In the end, the machine was faster and better at assessing risk of pulmonary hypertension—about 73 percent versus 60 percent.
The researchers say the technology could be applied to predict outcomes of other heart conditions in the future. “We would like to develop the technology so it can be used in many heart conditions to complement how doctors interpret the results of medical tests,” says study co-author Dr. Tim Dawes in a press release. “The goal is to see if better predictions can guide treatment to help people to live longer.”
AI getting smarter
These sorts of applications with AI to precision medicine are only going to get better as the machines continue to learn, just like any medical school student.
Oakden-Rayner says his team is still building its ideal dataset as it moves forward with its research, but has already improved predictive accuracy to 75 to 80 percent by including information such as age and sex.
“I think there is an upper limit on how accurate we can be, because there is always going to be an element of randomness,” he says, replying to how well AI will be able to pinpoint individual human mortality. “But we can be much more precise than we are now, taking more of each individual’s risks and strengths into account. A model combining all of those factors will hopefully account for more than 80 percent of the risk of near-term mortality.”
Others are even more optimistic about how quickly AI will transform this aspect of the medical field.
“Predicting remaining life span for people is actually one of the easiest applications of machine learning,” Dr. Ziad Obermeyer tells STAT News. “It requires a unique set of data where we have electronic records linked to information about when people died. But once we have that for enough people, you can come up with a very accurate predictor of someone’s likelihood of being alive one month out, for instance, or one year out.”
Obermeyer co-authored a paper last year with Dr. Ezekiel Emanuel in the New England Journal of Medicine called “Predicting the Future—Big Data, Machine Learning, and Clinical Medicine.”
AI still has much to learn
Experts like Obermeyer and Oakden-Rayner agree that advances will come swiftly, but there is still much work to be done.
For one thing, there’s plenty of data out there to mine, but it’s still a bit of a mess. For example, the images needed to train machines still need to be processed to make them useful. “Many groups around the world are now spending millions of dollars on this task, because this appears to be the major bottleneck for successful medical AI,” Oakden-Rayner says.
In the interview with STAT News, Obermeyer says data is fragmented across the health system, so linking information and creating comprehensive datasets will take time and money. He also notes that while there is much excitement about the use of AI in precision medicine, there’s been little activity in testing the algorithms in a clinical setting.
“It’s all very well and good to say you’ve got an algorithm that’s good at predicting. Now let’s actually port them over to the real world in a safe and responsible and ethical way and see what happens,” he says in STAT News.
AI is no accident
Preventing a fatal disease is one thing. But preventing fatal accidents with AI?
That’s what US and Indian researchers set out to do when they looked over the disturbing number of deaths occurring from people taking selfies. The team identified 127 people who died while posing for a self-taken photo over a two-year period.
Based on a combination of text, images and location, the machine learned to identify a selfie as potentially dangerous or not. Running more than 3,000 annotated selfies collected on Twitter through the software resulted in 73 percent accuracy.
“The combination of image-based and location-based features resulted in the best accuracy,” they reported.
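As a sketch of that feature-fusion idea (every feature name and value below is hypothetical; the study’s actual features came from tweet text, image content, and location data), one could concatenate per-modality scores into a single vector and train a simple classifier on labeled examples:

```python
import math

def train_logreg(samples, labels, lr=0.5, epochs=200):
    """Tiny hand-rolled logistic regression (per-sample gradient
    descent) over fused feature vectors."""
    n = len(samples[0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y  # gradient of the log-loss with respect to z
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Hypothetical fused features per selfie:
# [image_cliff_score, near_water_flag, normalized_elevation]
X = [[0.9, 1.0, 0.8], [0.8, 0.0, 0.9], [0.1, 0.0, 0.2], [0.2, 1.0, 0.1]]
y = [1, 1, 0, 0]  # 1 = dangerous
w, b = train_logreg(X, y)
train_acc = sum(predict(w, b, x) == t for x, t in zip(X, y)) / len(y)
```

Fusing modalities this way lets the classifier exploit correlations a single feature group would miss, which is consistent with the researchers’ finding that the combined features scored best.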
What’s next? A sort of selfie early warning system. “One of the directions that we are working on is to have the camera give the user information about [whether or not a particular location is] dangerous, with some score attached to it,” says Ponnurangam Kumaraguru, a professor at Indraprastha Institute of Information Technology in Delhi, in a story by Digital Trends.
AI and the future
This discussion raises the question: Do we really want to know when we’re going to die?
According to at least one paper published in Psychological Review earlier this year, the answer is a resounding “no.” Nearly nine out of 10 people in Germany and Spain who were quizzed about whether they would want to know about their future, including death, said they would prefer to remain ignorant.
Obermeyer sees it differently, at least when it comes to people living with life-threatening illness.
“[O]ne thing that those patients really, really want and aren’t getting from doctors is objective predictions about how long they have to live,” he tells Marketplace public radio. “Doctors are very reluctant to answer those kinds of questions, partly because, you know, you don’t want to be wrong about something so important. But also partly because there’s a sense that patients don’t want to know. And in fact, that turns out not to be true when you actually ask the patients.”
Stock Media provided by photocosma / Pond5

Posted in Human Robots