#436426 Video Friday: This Robot Refuses to Fall ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.
In case you somehow missed the massive Skydio 2 review we posted earlier this week, the first batches of the drone are now shipping. Each drone gets a lot of attention before it goes out the door, and here’s a behind-the-scenes clip of the process.
[ Skydio ]
Sphero RVR is one of the 15 robots on our robot gift guide this year. Here’s a new video Sphero just released showing some of the things you can do with the robot.
[ RVR ]
NimbRo-OP2 has some impressive recovery skills from the obligatory research-motivated robot abuse.
[ NimbRo ]
Teams seeking to qualify for the Virtual Urban Circuit of the Subterranean Challenge can access practice worlds to test their approaches prior to submitting solutions for the competition. This video previews three of the practice environments.
[ DARPA SubT ]
Stretchable skin-like robots that can be rolled up and put in your pocket have been developed by a University of Bristol team using a new way of embedding artificial muscles and electrical adhesion into soft materials.
[ Bristol ]
Happy Holidays from ABB!
Helping New York celebrate the festive season, twelve ABB robots are interacting with visitors to Bloomingdale’s iconic holiday celebration at its 59th Street flagship store. ABB’s robots are the main attraction in three of Bloomingdale’s twelve holiday window displays at Lexington and Third Avenue, as ABB demonstrates the potential for its robotics and automation technology to revolutionize visual merchandising and make the retail experience more dynamic and whimsical.
[ ABB ]
We introduce pelican eel–inspired dual-morphing architectures that embody quasi-sequential behaviors of origami unfolding and skin stretching in response to fluid pressure. In the proposed system, fluid paths were enclosed and guided by a set of entirely stretchable origami units that imitate the morphing principle of the pelican eel’s stretchable and foldable frames. This geometric and elastomeric design of fluid networks, in which fluid pressure acts in the direction that the whole body deploys first, resulted in a quasi-sequential dual-morphing response. To verify the effectiveness of our design rule, we built an artificial creature mimicking a pelican eel and reproduced biomimetic dual-morphing behavior.
And here’s a real pelican eel:
[ Science Robotics ]
Delft Dynamics’ updated anti-drone system involves a tether, mid-air net gun, and even a parachute.
[ Delft Dynamics ]
Teleoperation is a great way of helping robots with complex tasks, especially if you can do it through motion capture. But what if you’re teleoperating a non-anthropomorphic robot? Columbia’s ROAM Lab is working on it.
[ Paper ] via [ ROAM Lab ]
I don’t know how I missed this video last year, because it’s got a steely robot hand squeezing a cute lil’ chick.
[ MotionLib ] via [ RobotStart ]
In this video we present results of a trajectory generation method for autonomous overtaking of unexpected obstacles in a dynamic urban environment. In these settings, blind spots can arise from perception limitations, for example when overtaking unexpected objects in the vehicle’s ego lane on a two-way street. In this case, a human driver would first make sure that the opposite lane is free and that there is enough room to complete the maneuver, and only then cut into the opposite lane to execute it. We consider the practical problem of autonomous overtaking when the coverage of the perception system is impaired due to occlusion.
[ Paper ]
New weirdness from Toio!
[ Toio ]
Palo Alto City Library won a technology innovation award! Watch to see how Senior Librarian Dan Lou is using Misty to enhance their technology programs to inspire and educate customers.
[ Misty Robotics ]
We consider the problem of reorienting a rigid object with arbitrary known shape on a table using a two-finger pinch gripper. The reorienting problem is challenging because of its non-smoothness and high dimensionality. In this work, we focus on solving reorienting using pivoting, in which we allow the grasped object to rotate between the fingers. Pivoting decouples the gripper rotation from the object motion, making it possible to reorient an object under strict robot workspace constraints.
[ CMU ]
How can a mobile robot be a good pedestrian without bumping into you on the sidewalk? Navigating crowded environments is hard for a robot because the flow of pedestrian traffic follows implicit social rules. Researchers from MIT developed an algorithm that teaches mobile robots to maneuver in crowds of people while respecting their natural behavior.
[ Roboy Research Reviews ]
What happens when humans and robots make art together? In this awe-inspiring talk, artist Sougwen Chung shows how she “taught” her artistic style to a machine — and shares the results of their collaboration after making an unexpected discovery: robots make mistakes, too. “Part of the beauty of human and machine systems is their inherent, shared fallibility,” she says.
[ TED ]
Last month at the Cooper Union in New York City, IEEE TechEthics hosted a public panel session on the facts and misperceptions of autonomous vehicles, part of the IEEE TechEthics Conversations Series. The speakers were: Jason Borenstein from Georgia Tech; Missy Cummings from Duke University; Jack Pokrzywa from SAE; and Heather M. Roff from Johns Hopkins Applied Physics Laboratory. The panel was moderated by Mark A. Vasquez, program manager for IEEE TechEthics.
[ IEEE TechEthics ]
Two videos this week from Lex Fridman’s AI podcast: Noam Chomsky and Whitney Cummings.
[ AI Podcast ]
This week’s CMU RI Seminar comes from Jeff Clune at the University of Wyoming, on “Improving Robot and Deep Reinforcement Learning via Quality Diversity and Open-Ended Algorithms.”
Quality Diversity (QD) algorithms are those that seek to produce a diverse set of high-performing solutions to problems. I will describe them and a number of their positive attributes. I will then summarize our Nature paper on how they, when combined with Bayesian Optimization, produce a learning algorithm that enables robots, after being damaged, to adapt in 1-2 minutes in order to continue performing their mission, yielding state-of-the-art robot damage recovery. I will next describe our QD-based Go-Explore algorithm, which dramatically improves the ability of deep reinforcement learning algorithms to solve previously unsolvable problems wherein reward signals are sparse, meaning that intelligent exploration is required. Go-Explore solves Montezuma’s Revenge, considered by many to be a major AI research challenge. Finally, I will motivate research into open-ended algorithms, which seek to innovate endlessly, and introduce our POET algorithm, which generates its own training challenges while learning to solve them, automatically creating a curriculum for robots to learn an expanding set of diverse skills. POET creates and solves challenges that are unsolvable with traditional deep reinforcement learning techniques.
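For readers unfamiliar with QD methods, the algorithm behind the Nature damage-recovery result is MAP-Elites, which keeps an archive of the best solution found for each niche of a behavior space rather than a single champion. Below is a minimal Python sketch of that loop on a toy 2D task; the task, bin resolution, and mutation scale are illustrative choices of ours, not details from the talk.

```python
import random

# Toy MAP-Elites: evolve 2D genomes in [0, 1]^2, binning them by a
# "behavior descriptor" so the archive holds diverse high performers.
BINS = 10  # archive resolution per behavior dimension

def evaluate(genome):
    """Return (fitness, behavior bin) for a toy task peaking at (0.3, 0.7)."""
    x, y = genome
    fitness = -((x - 0.3) ** 2) - ((y - 0.7) ** 2)
    behavior = (min(int(x * BINS), BINS - 1), min(int(y * BINS), BINS - 1))
    return fitness, behavior

def mutate(genome, sigma=0.1):
    """Gaussian perturbation, clipped to the unit square."""
    return tuple(min(1.0, max(0.0, g + random.gauss(0, sigma))) for g in genome)

archive = {}  # behavior bin -> (fitness, genome)

def try_insert(genome):
    """Keep the genome if it beats the current elite of its behavior bin."""
    fitness, behavior = evaluate(genome)
    if behavior not in archive or fitness > archive[behavior][0]:
        archive[behavior] = (fitness, genome)

for _ in range(100):        # seed the archive with random solutions
    try_insert((random.random(), random.random()))

for _ in range(10000):      # main QD loop: perturb a random elite
    _, parent = random.choice(list(archive.values()))
    try_insert(mutate(parent))

print(f"{len(archive)} diverse elites; best fitness "
      f"{max(f for f, _ in archive.values()):.4f}")
```

The key design choice is the archive: because a child is kept whenever it beats the incumbent of its own behavior bin, the search pressure favors a diverse set of high performers rather than one global optimum.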
[ CMU RI ]
#436261 AI and the future of work: The prospects ...
AI experts gathered at MIT last week, with the aim of predicting the role artificial intelligence will play in the future of work. Will it be the enemy of the human worker? Will it prove to be a savior? Or will it be just another innovation—like electricity or the internet?
As IEEE Spectrum previously reported, this conference (“AI and the Future of Work Congress”), held at MIT’s Kresge Auditorium, offered sometimes pessimistic outlooks on the job- and industry-destroying path that AI and automation seem to be taking: Self-driving technology will put truck drivers out of work; smart law clerk algorithms will put paralegals out of work; robots will (continue to) put factory and warehouse workers out of work.
Andrew McAfee, co-director of MIT’s Initiative on the Digital Economy, said even just in the past couple years, he’s noticed a shift in the public’s perception of AI. “I remember from previous versions of this conference, it felt like we had to make the case that we’re living in a period of accelerating change and that AI’s going to have a big impact,” he said. “Nobody had to make that case today.”
Elisabeth Reynolds, executive director of MIT’s Task Force on the Work of the Future, noted that following the path of least resistance is not a viable way forward. “If we do nothing, we’re in trouble,” she said. “The future will not take care of itself. We have to do something about it.”
Panelists and speakers spoke about championing productive uses of AI in the workplace, which ultimately benefit both employees and customers.
As one example, Zeynep Ton, professor at MIT Sloan School of Management, highlighted retailer Sam’s Club’s recent rollout of a program called Sam’s Garage. Previously, customers shopping for tires for their car spent somewhere between 30 and 45 minutes with a Sam’s Club associate, paging through manuals and looking up specs on websites.
But with an AI algorithm, they were able to cut that spec-hunting time down to 2.2 minutes. “Now instead of wasting their time trying to figure out the different tires, they can field the different options and talk about which one would work best [for the customer],” she said. “This is a great example of solving a real problem, including [enhancing] the experience of the associate as well as the customer.”
“We think of it as an AI-first world that’s coming,” said Scott Prevost, VP of engineering at Adobe. Prevost said AI agents in Adobe’s software will behave something like a creative assistant or intern who will take care of more mundane tasks for you.
Prevost cited an internal survey of Adobe customers that found 74 percent of respondents’ time was spent doing repetitive work—the kind that might be automated by an AI script or smart agent.
“It used to be you’d have the resources to work on three ideas [for a creative pitch or presentation],” Prevost said. “But if the AI can do a lot of the production work, then you can have 10 or 100. Which means you can actually explore some of the further out ideas. It’s also lowering the bar for everyday people to create really compelling output.”
In addition to changing the nature of work, noted a number of speakers at the event, AI is also directly transforming the workforce.
Jacob Hsu, CEO of the recruitment company Catalyte, spoke about using AI as a job placement tool. The company seeks to fill myriad positions, including auto mechanics, baristas, and office workers, with its sights on candidates including young people and mid-career job changers. To find them, it advertises on Craigslist, social media, and traditional media.
The prospects who sign up with Catalyte take a battery of tests. The company’s AI algorithms then match each prospect’s skills with the field best suited for their talents.
“We want to be like the Harry Potter Sorting Hat,” Hsu said.
Guillermo Miranda, IBM’s global head of corporate social responsibility, said IBM has increasingly been hiring based not on credentials but on skills. For instance, he said, as much as 50 percent of the company’s new hires in some divisions do not have a traditional four-year college degree. “As a company, we need to be much more clear about hiring by skills,” he said. “It takes discipline. It takes conviction. It takes a little bit of enforcing with H.R. by the business leaders. But if you hire by skills, it works.”
Ardine Williams, Amazon’s VP of workforce development, said the e-commerce giant has been experimenting with developing skills of the employees at its warehouses (a.k.a. fulfillment centers) with an eye toward putting them in a position to get higher-paying work with other companies.
She described an agreement Amazon made at its Dallas fulfillment center with aircraft maker Sikorsky, which had been experiencing a shortage of skilled workers for its nearby factory. Amazon offered its employees free certification training that qualified them to seek higher-paying work at Sikorsky.
“I do that because now I have an attraction mechanism—like a G.I. Bill,” Williams said. The program is also available only to employees who have worked at least a year with Amazon, so it encourages medium-term retention while ultimately moving workers up the wage ladder.
Radha Basu, CEO of AI data company iMerit, said her firm aggressively hires from the pool of women and under-resourced minority communities in the U.S. and India. The company specializes in turning unstructured data (e.g. video or audio feeds) into tagged and annotated data for machine learning, natural language processing, or computer vision applications.
“There is a motivation with these young people to learn these things,” she said. “It comes with no baggage.”
Alastair Fitzpayne, executive director of The Aspen Institute’s Future of Work Initiative, said the future of work ultimately means, in bottom-line terms, the future of human capital. “We have an R&D tax credit,” he said. “We’ve had it for decades. It provides credit for companies that make new investment in research and development. But we have nothing on the human capital side that’s analogous.”
So a company that makes a big investment in worker training does it on its own dime, without any of the tax benefits it might accrue if it, say, spent that money on new equipment or new technology. Fitzpayne said a simple tweak to the R&D tax credit could make a big difference by incentivizing new investment in worker training. That would still mean Amazon’s pre-existing worker training programs—at a company that already famously pays no taxes—would not count.
“We need a different way of developing new technologies,” said Daron Acemoglu, MIT Institute Professor of Economics. He pointed to the clean energy sector as an example. First a consensus around the problem needs to emerge. Then a broadly agreed-upon set of goals and measurements needs to be developed (e.g., that AI and automation would, for instance, create at least X new jobs for every Y jobs that it eliminates).
Then it just needs to be implemented.
“We need to build a consensus that, along the path we’re following at the moment, there are going to be increasing problems for labor,” Acemoglu said. “We need a mindset change. That it is not just about minimizing costs or maximizing tax benefits, but really worrying about what kind of society we’re creating and what kind of environment we’re creating if we keep on just automating and [eliminating] good jobs.”
#436258 For Centuries, People Dreamed of a ...
This is part six of a six-part series on the history of natural language processing.
In February of this year, OpenAI, one of the foremost artificial intelligence labs in the world, announced that a team of researchers had built a powerful new text generator called the Generative Pre-trained Transformer 2, or GPT-2 for short. The researchers trained their system on a simple unsupervised objective, predicting the next word in an enormous corpus of text, and it acquired a broad set of natural language processing (NLP) capabilities, including reading comprehension, machine translation, and the ability to generate long strings of coherent text.
But as is often the case with NLP technology, the tool held both great promise and great peril. Researchers and policy makers at the lab were concerned that their system, if widely released, could be exploited by bad actors and misappropriated for “malicious purposes.”
The people of OpenAI, which defines its mission as “discovering and enacting the path to safe artificial general intelligence,” were concerned that GPT-2 could be used to flood the Internet with fake text, thereby degrading an already fragile information ecosystem. For this reason, OpenAI decided that it would not release the full version of GPT-2 to the public or other researchers.
GPT-2 is an example of a technique in NLP called language modeling, whereby the computational system internalizes a statistical blueprint of a text so it’s able to mimic it. Just like the predictive text on your phone—which selects words based on words you’ve used before—GPT-2 can look at a string of text and then predict what the next word is likely to be based on the probabilities inherent in that text.
GPT-2 can be seen as a descendant of the statistical language modeling that the Russian mathematician A. A. Markov developed in the early 20th century (covered in part three of this series).
What’s different with GPT-2, though, is the scale of the textual data modeled by the system. Whereas Markov analyzed a string of 20,000 letters to create a rudimentary model that could predict the likelihood of the next letter of a text being a consonant or a vowel, GPT-2 was trained on 8 million web pages, gathered via outbound links on Reddit, to predict the next word given all of the text that came before it.
And whereas Markov manually trained his model by counting only two categories of letters—vowels and consonants—GPT-2 used cutting-edge machine learning algorithms to do linguistic analysis with over 1.5 billion parameters, burning through huge amounts of computational power in the process.
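To make this lineage concrete, here is a minimal sketch, in Python, of a Markov-style language model. It generalizes Markov’s two-category scheme to full character-to-character counts, but the principle is the same one GPT-2 scales up: tally how often each symbol follows another, then sample the next symbol from those frequencies. The sample text and generation length are arbitrary choices for illustration.

```python
import random
from collections import Counter, defaultdict

# Count, for every character in a sample text, how often each
# other character immediately follows it.
text = "the quick brown fox jumps over the lazy dog and the cat"
counts = defaultdict(Counter)
for prev, nxt in zip(text, text[1:]):
    counts[prev][nxt] += 1

def next_char(prev):
    """Sample the next character from the observed frequencies."""
    options = counts[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

# Generate 40 characters, one prediction at a time.
char, out = "t", ["t"]
for _ in range(40):
    char = next_char(char)
    out.append(char)
print("".join(out))
```

GPT-2 does conceptually the same job, except that the table of counts is replaced by 1.5 billion learned parameters and the context stretches far beyond the single preceding symbol.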
The results were impressive. In their blog post, OpenAI reported that GPT-2 could generate synthetic text in response to prompts, mimicking whatever style of text it was shown. If you prompt the system with a line of William Blake’s poetry, it can generate a line back in the Romantic poet’s style. If you prompt the system with a cake recipe, you get a newly invented recipe in response.
Perhaps the most compelling feature of GPT-2 is that it can answer questions accurately. For example, when OpenAI researchers asked the system, “Who wrote the book The Origin of Species?”—it responded: “Charles Darwin.” The system responds accurately only some of the time, but even so, the feature seems like a limited realization of Gottfried Leibniz’s dream of a language-generating machine that could answer any and all human questions (described in part two of this series).
After observing the power of the new system in practice, OpenAI elected not to release the fully trained model. In the lead-up to its release in February, there had been heightened awareness about “deepfakes”—synthetic images and videos, generated via machine learning techniques, in which people do and say things they haven’t really done and said. Researchers at OpenAI worried that GPT-2 could be used to essentially create deepfake text, making it harder for people to trust textual information online.
Responses to this decision varied. On one hand, OpenAI’s caution prompted an overblown reaction in the media, with articles about the “dangerous” technology feeding into the Frankenstein narrative that often surrounds developments in AI.
Others took issue with OpenAI’s self-promotion, with some even suggesting that OpenAI purposefully exaggerated GPT-2’s power in order to create hype—while contravening a norm in the AI research community, where labs routinely share data, code, and pre-trained models. As machine learning researcher Zachary Lipton tweeted, “Perhaps what's *most remarkable* about the @OpenAI controversy is how *unremarkable* the technology is. Despite their outsize attention & budget, the research itself is perfectly ordinary—right in the main branch of deep learning NLP research.”
OpenAI stood by its decision to release only a limited version of GPT-2, but has since released larger models for other researchers and the public to experiment with. As yet, there has been no reported case of a widely distributed fake news article generated by the system. But there have been a number of interesting spin-off projects, including GPT-2 poetry and a webpage where you can prompt the system with questions yourself.
There’s even a Reddit group populated entirely with text produced by GPT-2-powered bots. Mimicking humans on Reddit, the bots have long conversations about a variety of topics, including conspiracy theories and Star Wars movies.
This bot-powered conversation may signify the new condition of life online, where language is increasingly created by a combination of human and non-human agents, and where maintaining the distinction between human and non-human, despite our best efforts, is increasingly difficult.
The idea of using rules, mechanisms, and algorithms to generate language has inspired people in many different cultures throughout history. But it’s in the online world that this powerful form of wordcraft may really find its natural milieu—in an environment where the identity of speakers becomes more ambiguous, and perhaps, less relevant. It remains to be seen what the consequences will be for language, communication, and our sense of human identity, which is so bound up with our ability to speak in natural language.
This is the sixth installment of a six-part series on the history of natural language processing. Last week’s post explained how an innocent Microsoft chatbot turned instantly racist on Twitter.
You can also check out our prior series on the untold history of AI.
#436256 Alphabet Is Developing a Robot to Take ...
Robots excel at carrying out specialized tasks in controlled environments, but put them in your average office and they’d be lost. Alphabet wants to change that by developing what they call the Everyday Robot, which could learn to help us out with our daily chores.
For a long time most robots were painstakingly hand-coded to carry out their functions, but since the deep learning revolution earlier this decade there’s been a growing effort to imbue them with AI that lets them learn new tasks through experience.
That’s led to some impressive breakthroughs, like a robotic hand nimble enough to solve a Rubik’s cube and a robotic arm that can accurately toss bananas across a room.
And it turns out Alphabet’s early-stage research and development division, Alphabet X, has also secretly been using similar machine learning techniques to develop robots adaptable enough to carry out a range of tasks in cluttered and unpredictable human environments like homes and offices.
The robots they’ve built combine a wheeled base with a single arm and a head full of sensors (including LIDAR) for 3D scanning, borrowed from Alphabet’s self-driving car division, Waymo.
For now, though, they’re largely restricted to sorting trash for recycling, project leader Hans Peter Brondmo writes in a blog post. While that might sound mundane, identifying different kinds of trash, grasping them, and moving them to the correct bin is still difficult for a robot to do consistently. Some of the robots also have to navigate around the office to sort trash at various recycling stations.
Alphabet says even its human staff were getting it wrong 20 percent of the time, but after several months of training, the robots have managed to get that down to 3.5 percent.
Every day, 30 robots toil away sorting trash in what’s been dubbed the “playpen,” and every night thousands of virtual robots continue to practice in simulation. That pooled experience is used to update the robots’ control algorithms overnight, and all the robots share what they learn with one another through a process called collaborative learning.
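As a loose illustration of that cadence, here is a toy Python sketch: a daytime fleet and a much larger nighttime simulated fleet both feed experience into one shared policy that is updated overnight. Every name and number here is a stand-in for illustration; this is not Alphabet’s actual training code, and the “policy” is reduced to a single scalar gain.

```python
import random

# Day/night fleet-learning cadence: real robots gather experience by day,
# simulated robots practice at scale by night, and all of it updates a
# single shared policy ("collaborative learning").
weight = 0.5  # stand-in for the shared policy's parameters

def act(observation):
    """The shared policy: a single learned gain applied to the input."""
    return weight * observation

def rollout():
    """One pretend sorting episode: a list of (action, succeeded) pairs."""
    return [(act(random.random()), random.random() < 0.8) for _ in range(10)]

for day in range(7):
    experience = []
    for _robot in range(30):      # daytime: the real "playpen" fleet
        experience += rollout()
    for _sim in range(1000):      # nighttime: thousands of virtual robots
        experience += rollout()
    # Overnight update: nudge the shared parameters toward actions that
    # succeeded, then the new policy goes back out to every robot.
    good = [a for a, ok in experience if ok]
    if good:
        weight += 0.01 * (sum(good) / len(good) - weight)
    print(f"day {day}: shared policy weight now {weight:.3f}")
```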
The process isn’t flawless, though. Wired’s Tom Simonite, who observed the project, notes that while the robots exhibit some uncannily smart behaviors, like stirring piles of trash to make it easier to grab specific items, they also frequently miss or fumble the objects they’re trying to grasp.
Nonetheless, the project’s leaders are happy with their progress so far. The hope is that robots that can learn from little more than experience in complex environments like an office will be a first step toward general-purpose machines that can pick up a variety of useful skills to assist humans.
Taking that next step will be the major test of the project. So far there’s been limited evidence that experience gained by robots in one task can be transferred to learning another. That’s something the group hopes to demonstrate next year.
And it seems there may be more robot news coming out of Alphabet X soon. The group has several other robotics “moonshots” in the pipeline, built on technology and talent transferred over in 2016 from the remains of a broadly unsuccessful splurge on robotics startups by former Google executive Andy Rubin.
Whether this robotics renaissance at Alphabet will finally help robots break into our homes and offices remains to be seen, but with the resources they have at hand, they just may be able to make it happen.
Image Credit: Everyday Robot, Alphabet X