#436426 Video Friday: This Robot Refuses to Fall ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
Robotic Arena – January 25, 2020 – Wrocław, Poland
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.
In case you somehow missed the massive Skydio 2 review we posted earlier this week, the first batches of the drone are now shipping. Each drone gets a lot of attention before it goes out the door, and here’s a behind-the-scenes clip of the process.
[ Skydio ]
Sphero RVR is one of the 15 robots on our robot gift guide this year. Here’s a new video Sphero just released showing some of the things you can do with the robot.
[ RVR ]
NimbRo-OP2 has some impressive recovery skills from the obligatory research-motivated robot abuse.
[ NimbRo ]
Teams seeking to qualify for the Virtual Urban Circuit of the Subterranean Challenge can access practice worlds to test their approaches prior to submitting solutions for the competition. This video previews three of the practice environments.
[ DARPA SubT ]
Stretchable skin-like robots that can be rolled up and put in your pocket have been developed by a University of Bristol team using a new way of embedding artificial muscles and electrical adhesion into soft materials.
[ Bristol ]
Happy Holidays from ABB!
Helping New York celebrate the festive season, twelve ABB robots are interacting with visitors to Bloomingdale’s iconic holiday celebration at its 59th Street flagship store. ABB’s robots are the main attraction in three of Bloomingdale’s twelve holiday window displays at Lexington and Third Avenue, as ABB demonstrates the potential for its robotics and automation technology to revolutionize visual merchandising and make the retail experience more dynamic and whimsical.
[ ABB ]
We introduce pelican eel–inspired dual-morphing architectures that embody quasi-sequential behaviors of origami unfolding and skin stretching in response to fluid pressure. In the proposed system, fluid paths were enclosed and guided by a set of entirely stretchable origami units that imitate the morphing principle of the pelican eel’s stretchable and foldable frames. This geometric and elastomeric design of fluid networks, in which fluid pressure acts in the direction that the whole body deploys first, resulted in a quasi-sequential dual-morphing response. To verify the effectiveness of our design rule, we built an artificial creature mimicking a pelican eel and reproduced biomimetic dual-morphing behavior.
And here’s a real pelican eel:
[ Science Robotics ]
Delft Dynamics’ updated anti-drone system involves a tether, mid-air net gun, and even a parachute.
[ Delft Dynamics ]
Teleoperation is a great way of helping robots with complex tasks, especially if you can do it through motion capture. But what if you’re teleoperating a non-anthropomorphic robot? Columbia’s ROAM Lab is working on it.
[ Paper ] via [ ROAM Lab ]
I don’t know how I missed this video last year because it’s got a steely robot hand squeezing a cute lil’ chick.
[ MotionLib ] via [ RobotStart ]
In this video we present results of a trajectory generation method for autonomous overtaking of unexpected obstacles in a dynamic urban environment. In these settings, blind spots can arise from perception limitations, for example when overtaking unexpected objects in the vehicle’s ego lane on a two-way street. In this case, a human driver would first make sure that the opposite lane is free and that there is enough room, and then cut into the opposite lane to execute the maneuver. We consider the practical problem of autonomous overtaking when the coverage of the perception system is impaired due to occlusion.
[ Paper ]
New weirdness from Toio!
[ Toio ]
Palo Alto City Library won a technology innovation award! Watch to see how Senior Librarian Dan Lou is using Misty to enhance their technology programs to inspire and educate customers.
[ Misty Robotics ]
We consider the problem of reorienting a rigid object with arbitrary known shape on a table using a two-finger pinch gripper. The reorienting problem is challenging because of its non-smoothness and high dimensionality. In this work, we focus on solving reorienting using pivoting, in which we allow the grasped object to rotate between fingers. Pivoting decouples the gripper rotation from the object motion, making it possible to reorient an object under strict robot workspace constraints.
[ CMU ]
How can a mobile robot be a good pedestrian without bumping into you on the sidewalk? Navigating crowded environments is hard for a robot because the flow of foot traffic follows implied social rules. Researchers from MIT have developed an algorithm that teaches mobile robots to maneuver in crowds of people while respecting their natural behavior.
[ Roboy Research Reviews ]
What happens when humans and robots make art together? In this awe-inspiring talk, artist Sougwen Chung shows how she “taught” her artistic style to a machine — and shares the results of their collaboration after making an unexpected discovery: robots make mistakes, too. “Part of the beauty of human and machine systems is their inherent, shared fallibility,” she says.
[ TED ]
Last month at the Cooper Union in New York City, IEEE TechEthics hosted a public panel session on the facts and misperceptions of autonomous vehicles, part of the IEEE TechEthics Conversations Series. The speakers were: Jason Borenstein from Georgia Tech; Missy Cummings from Duke University; Jack Pokrzywa from SAE; and Heather M. Roff from Johns Hopkins Applied Physics Laboratory. The panel was moderated by Mark A. Vasquez, program manager for IEEE TechEthics.
[ IEEE TechEthics ]
Two videos this week from Lex Fridman’s AI podcast: Noam Chomsky, and Whitney Cummings.
[ AI Podcast ]
This week’s CMU RI Seminar comes from Jeff Clune at the University of Wyoming, on “Improving Robot and Deep Reinforcement Learning via Quality Diversity and Open-Ended Algorithms.”
Quality Diversity (QD) algorithms are those that seek to produce a diverse set of high-performing solutions to problems. I will describe them and a number of their positive attributes. I will then summarize our Nature paper on how they, when combined with Bayesian Optimization, produce a learning algorithm that enables robots, after being damaged, to adapt in 1-2 minutes in order to continue performing their mission, yielding state-of-the-art robot damage recovery. I will next describe our QD-based Go-Explore algorithm, which dramatically improves the ability of deep reinforcement learning algorithms to solve previously unsolvable problems wherein reward signals are sparse, meaning that intelligent exploration is required. Go-Explore solves Montezuma’s Revenge, considered by many to be a major AI research challenge. Finally, I will motivate research into open-ended algorithms, which seek to innovate endlessly, and introduce our POET algorithm, which generates its own training challenges while learning to solve them, automatically creating a curriculum for robots to learn an expanding set of diverse skills. POET creates and solves challenges that are unsolvable with traditional deep reinforcement learning techniques.
[ CMU RI ]
#436403 Why Your 5G Phone Connection Could Mean ...
Will getting full bars on your 5G connection mean getting caught out by sudden weather changes?
The question may strike you as hypothetical, nonsensical even, but it is at the core of ongoing disputes between meteorologists and telecommunications companies. Everyone else, including you and me, is caught in the middle, wanting both 5G’s faster connection speeds and precise information about our increasingly unpredictable weather. So why can’t we have both?
Perhaps we can, but because of the way 5G networks function, it may take some special technology—specifically, artificial intelligence.
The Bandwidth Worries
Around the world, the first 5G networks are already being rolled out. The networks use a variety of frequencies to transmit data to and from devices at speeds up to 100 times faster than existing 4G networks.
One of the frequency bands used lies between 24.25 and 24.45 gigahertz (GHz). In a recent FCC auction, telecommunications companies paid a combined $2 billion for the 5G usage rights for this spectrum in the US.
However, meteorologists are concerned that transmissions near the lower end of that range can interfere with their ability to accurately measure water vapor in the atmosphere. Wired reported that acting chief of the National Oceanic and Atmospheric Administration (NOAA), Neil Jacobs, told the US House Subcommittee on the Environment that 5G interference could substantially cut the amount of weather data satellites can gather. As a result, forecast accuracy could drop by as much as 30 percent.
Among the consequences could be less time to prepare for hurricanes, and it may become harder to predict storms’ paths. Due to the interconnectedness of weather patterns, measurement issues in one location can affect other areas too. Lack of accurate atmospheric data from the US could, for example, lead to less accurate forecasts for weather patterns over Europe.
The Numbers Game
Water vapor emits a faint signal at 23.8 GHz. Weather satellites measure the signals, and the data is used to gauge atmospheric humidity levels. Meteorologists have expressed concern that 5G signals in the same range can disturb those readings. The issue is that it would be nigh on impossible to tell whether a signal is water vapor or an errant 5G signal.
Furthermore, 5G disturbances in other frequency bands could make forecasting even more difficult. Rain and snow emit signals around 36-37 GHz, the 50.2-50.4 GHz band is used to measure atmospheric temperatures, and 86-92 GHz is used to measure clouds and ice. All of the above are under consideration for international 5G signals. Some have warned that the wider consequences could set weather forecasting back to the 1980s.
Telecommunications companies and interest organizations have argued back, saying that weather sensors aren’t as susceptible to interference as meteorologists fear, and that 5G devices and signals will produce much less interference with weather forecasts than organizations like NOAA predict. Since very little scientific research has been carried out to examine the claims of either party, we seem stuck in a ‘wait and see’ situation.
To offset some of the possible effects, the two groups have tried to reach a consensus on a noise buffer between the 5G transmissions and water-vapor signals. It could be likened to limiting the noise from busy roads or loud sound systems to avoid bothering neighboring buildings.
The World Meteorological Organization was looking to establish a -55 decibel watts buffer. In Europe, regulators are locked in on a -42 decibel watts buffer for 5G base stations. For comparison, the US Federal Communications Commission has advocated for a -20 decibel watts buffer, which would, in reality, allow more than 150 times more noise than the European proposal.
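The claim that the FCC proposal allows "more than 150 times more noise" follows directly from how decibel watts work: dBW is a logarithmic scale, so a difference of Δ dB corresponds to a linear power ratio of 10^(Δ/10). A minimal sketch to check the arithmetic (the buffer values come from the proposals above; the helper function name is my own):

```python
def dbw_ratio(limit_a_dbw: float, limit_b_dbw: float) -> float:
    """Linear power ratio implied by two decibel-watt noise limits."""
    return 10 ** ((limit_a_dbw - limit_b_dbw) / 10)

# FCC proposal (-20 dBW) vs. European proposal (-42 dBW):
# a 22 dB gap corresponds to roughly a 158x power ratio,
# i.e. "more than 150 times more noise."
print(round(dbw_ratio(-20, -42)))
```

The same conversion shows the WMO's -55 dBW proposal would be stricter than the European -42 dBW figure by a further factor of about 20.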
How AI Could Help
Much of the conversation about 5G’s possible influence on future weather predictions is centered around mobile phones. However, the phones are far from the only systems that will be receiving and transmitting signals on 5G. Self-driving cars and the Internet of Things are two other technologies that could soon be heavily reliant on faster wireless signals.
Densely populated areas are likely going to be the biggest emitters of 5G signals, leading to a suggestion to only gather water-vapor data over oceans.
Another option is to develop artificial intelligence (AI) approaches to clean or process weather data. AI is playing an increasing role in weather forecasting. For example, in 2016 IBM bought The Weather Company for $2 billion. The goal was to combine the two companies’ models and data in IBM’s Watson to create more accurate forecasts. AI would also be able to predict increases or drops in business revenues due to weather changes. Monsanto has also been investing in AI for forecasting, in this case to provide agriculturally-related weather predictions.
Smartphones may also provide a piece of the weather forecasting puzzle. Studies have shown how data from thousands of smartphones can help to increase the accuracy of storm predictions, as well as estimates of storms’ force.
“Weather stations cost a lot of money,” Cliff Mass, an atmospheric scientist at the University of Washington in Seattle, told Inside Science, adding, “If there are already 20 million smartphones, you might as well take advantage of the observation system that’s already in place.”
Smartphones may not be the solution when it comes to finding new ways of gathering the atmospheric data on water vapor that 5G could disrupt. But it does go to show that some technologies open new doors, while at the same time, others shut them.
Image Credit: Image by Free-Photos from Pixabay
#436261 AI and the future of work: The prospects ...
AI experts gathered at MIT last week, with the aim of predicting the role artificial intelligence will play in the future of work. Will it be the enemy of the human worker? Will it prove to be a savior? Or will it be just another innovation—like electricity or the internet?
As IEEE Spectrum previously reported, this conference (“AI and the Future of Work Congress”), held at MIT’s Kresge Auditorium, offered sometimes pessimistic outlooks on the job- and industry-destroying path that AI and automation seems to be taking: Self-driving technology will put truck drivers out of work; smart law clerk algorithms will put paralegals out of work; robots will (continue to) put factory and warehouse workers out of work.
Andrew McAfee, co-director of MIT’s Initiative on the Digital Economy, said even just in the past couple years, he’s noticed a shift in the public’s perception of AI. “I remember from previous versions of this conference, it felt like we had to make the case that we’re living in a period of accelerating change and that AI’s going to have a big impact,” he said. “Nobody had to make that case today.”
Elisabeth Reynolds, executive director of MIT’s Task Force on the Work of the Future, noted that following the path of least resistance is not a viable way forward. “If we do nothing, we’re in trouble,” she said. “The future will not take care of itself. We have to do something about it.”
Panelists and speakers spoke about championing productive uses of AI in the workplace, which ultimately benefit both employees and customers.
As one example, Zeynep Ton, professor at MIT Sloan School of Management, highlighted retailer Sam’s Club’s recent rollout of a program called Sam’s Garage. Previously customers shopping for tires for their car spent somewhere between 30 and 45 minutes with a Sam’s Club associate paging through manuals and looking up specs on websites.
But with an AI algorithm, they were able to cut that spec hunting time down to 2.2 minutes. “Now instead of wasting their time trying to figure out the different tires, they can field the different options and talk about which one would work best [for the customer],” she said. “This is a great example of solving a real problem, including [enhancing] the experience of the associate as well as the customer.”
“We think of it as an AI-first world that’s coming,” said Scott Prevost, VP of engineering at Adobe. Prevost said AI agents in Adobe’s software will behave something like a creative assistant or intern who will take care of more mundane tasks for you.
“We need a mindset change. That it is not just about minimizing costs or maximizing tax benefits, but really worrying about what kind of society we’re creating and what kind of environment we’re creating if we keep on just automating and [eliminating] good jobs.”
—Daron Acemoglu, MIT Institute Professor of Economics
Prevost cited an internal survey of Adobe customers that found 74 percent of respondents’ time was spent doing repetitive work—the kind that might be automated by an AI script or smart agent.
“It used to be you’d have the resources to work on three ideas [for a creative pitch or presentation],” Prevost said. “But if the AI can do a lot of the production work, then you can have 10 or 100. Which means you can actually explore some of the further out ideas. It’s also lowering the bar for everyday people to create really compelling output.”
In addition to changing the nature of work, noted a number of speakers at the event, AI is also directly transforming the workforce.
Jacob Hsu, CEO of the recruitment company Catalyte, spoke about using AI as a job placement tool. The company seeks to fill myriad positions, including auto mechanics, baristas, and office workers, with its sights on candidates including young people and mid-career job changers. To find them, it advertises on Craigslist, social media, and traditional media.
The prospects who sign up with Catalyte take a battery of tests. The company’s AI algorithms then match each prospect’s skills with the field best suited for their talents.
“We want to be like the Harry Potter Sorting Hat,” Hsu said.
Guillermo Miranda, IBM’s global head of corporate social responsibility, said IBM has increasingly been hiring based not on credentials but on skills. For instance, he said, as much as 50 percent of the company’s new hires in some divisions do not have a traditional four-year college degree. “As a company, we need to be much more clear about hiring by skills,” he said. “It takes discipline. It takes conviction. It takes a little bit of enforcing with H.R. by the business leaders. But if you hire by skills, it works.”
Ardine Williams, Amazon’s VP of workforce development, said the e-commerce giant has been experimenting with developing skills of the employees at its warehouses (a.k.a. fulfillment centers) with an eye toward putting them in a position to get higher-paying work with other companies.
She described an agreement Amazon had made in its Dallas fulfillment center with aircraft maker Sikorsky, which had been experiencing a shortage of skilled workers for its nearby factory. So Amazon offered its employees free certification training to seek higher-paying work at Sikorsky.
“I do that because now I have an attraction mechanism—like a G.I. Bill,” Williams said. The program is also available only to employees who have worked at least a year with Amazon, so it encourages medium-term retention while ultimately moving workers up the wage ladder.
Radha Basu, CEO of AI data company iMerit, said her firm aggressively hires from the pool of women and under-resourced minority communities in the U.S. and India. The company specializes in turning unstructured data (e.g. video or audio feeds) into tagged and annotated data for machine learning, natural language processing, or computer vision applications.
“There is a motivation with these young people to learn these things,” she said. “It comes with no baggage.”
Alastair Fitzpayne, executive director of The Aspen Institute’s Future of Work Initiative, said the future of work ultimately means, in bottom-line terms, the future of human capital. “We have an R&D tax credit,” he said. “We’ve had it for decades. It provides credit for companies that make new investment in research and development. But we have nothing on the human capital side that’s analogous.”
So a company that makes a big investment in worker training does it on its own dime, without any of the tax benefits it might accrue if it, say, spent the money on new equipment or new technology. Fitzpayne said a simple tweak to the R&D tax credit could make a big difference by incentivizing new investment in worker training. Even so, Amazon’s pre-existing worker training programs—for a company that already famously pays no taxes—would not count.
“We need a different way of developing new technologies,” said Daron Acemoglu, MIT Institute Professor of Economics. He pointed to the clean energy sector as an example. First a consensus around the problem needs to emerge. Then a broadly agreed-upon set of goals and measurements needs to be developed (e.g., that AI and automation would, for instance, create at least X new jobs for every Y jobs that it eliminates).
Then it just needs to be implemented.
“We need to build a consensus that, along the path we’re following at the moment, there are going to be increasing problems for labor,” Acemoglu said. “We need a mindset change. That it is not just about minimizing costs or maximizing tax benefits, but really worrying about what kind of society we’re creating and what kind of environment we’re creating if we keep on just automating and [eliminating] good jobs.”
#436258 For Centuries, People Dreamed of a ...
This is part six of a six-part series on the history of natural language processing.
In February of this year, OpenAI, one of the foremost artificial intelligence labs in the world, announced that a team of researchers had built a powerful new text generator called the Generative Pre-Trained Transformer 2, or GPT-2 for short. The researchers used a machine learning algorithm to train their system on a broad set of natural language processing (NLP) capabilities, including reading comprehension, machine translation, and the ability to generate long strings of coherent text.
But as is often the case with NLP technology, the tool held both great promise and great peril. Researchers and policy makers at the lab were concerned that their system, if widely released, could be exploited by bad actors and misappropriated for “malicious purposes.”
The people of OpenAI, which defines its mission as “discovering and enacting the path to safe artificial general intelligence,” were concerned that GPT-2 could be used to flood the Internet with fake text, thereby degrading an already fragile information ecosystem. For this reason, OpenAI decided that it would not release the full version of GPT-2 to the public or other researchers.
GPT-2 is an example of a technique in NLP called language modeling, whereby the computational system internalizes a statistical blueprint of a text so it’s able to mimic it. Just like the predictive text on your phone—which selects words based on words you’ve used before—GPT-2 can look at a string of text and then predict what the next word is likely to be based on the probabilities inherent in that text.
GPT-2 can be seen as a descendant of the statistical language modeling that the Russian mathematician A. A. Markov developed in the early 20th century (covered in part three of this series).
What’s different with GPT-2, though, is the scale of the textual data modeled by the system. Whereas Markov analyzed a string of 20,000 letters to create a rudimentary model that could predict the likelihood of the next letter of a text being a consonant or a vowel, GPT-2 was trained on 8 million documents scraped from links shared on Reddit to predict what the next word might be within that entire dataset.
And whereas Markov manually trained his model by counting only two parameters—vowels and consonants—GPT-2 used cutting-edge machine learning algorithms to do linguistic analysis with over 1.5 billion parameters, burning through huge amounts of computational power in the process.
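The contrast can be made concrete with a toy version of Markov's count-based approach: tally vowel/consonant transitions in a text and turn the counts into next-letter probabilities. This is my own illustrative sketch, using English vowels rather than the Russian text Markov actually analyzed, and it is emphatically not how GPT-2 works; GPT-2 is a neural network, not a count-based model.

```python
from collections import Counter

def transition_probs(text: str) -> dict:
    """Estimate P(next letter is a vowel | current letter's class)
    from raw bigram counts, mirroring Markov's vowel/consonant analysis."""
    classify = lambda c: "V" if c in "aeiou" else "C"
    letters = [classify(c) for c in text.lower() if c.isalpha()]
    pairs = Counter(zip(letters, letters[1:]))  # adjacent-class bigrams
    probs = {}
    for prev in ("V", "C"):
        total = pairs[(prev, "V")] + pairs[(prev, "C")]
        if total:
            probs[prev] = pairs[(prev, "V")] / total
    return probs

print(transition_probs("onegin is a novel in verse"))
```

Two counted "parameters" versus over a billion learned ones: the gap between this sketch and GPT-2 is the story of a century of language modeling.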
The results were impressive. In their blog post, OpenAI reported that GPT-2 could generate synthetic text in response to prompts, mimicking whatever style of text it was shown. If you prompt the system with a line of William Blake’s poetry, it can generate a line back in the Romantic poet’s style. If you prompt the system with a cake recipe, you get a newly invented recipe in response.
Perhaps the most compelling feature of GPT-2 is that it can answer questions accurately. For example, when OpenAI researchers asked the system, “Who wrote the book The Origin of Species?”—it responded: “Charles Darwin.” While only able to respond accurately some of the time, the feature does seem to be a limited realization of Gottfried Leibniz’s dream of a language-generating machine that could answer any and all human questions (described in part two of this series).
After observing the power of the new system in practice, OpenAI elected not to release the fully trained model. In the lead up to its release in February, there had been heightened awareness about “deepfakes”—synthetic images and videos, generated via machine learning techniques, in which people do and say things they haven’t really done and said. Researchers at OpenAI worried that GPT-2 could be used to essentially create deepfake text, making it harder for people to trust textual information online.
Responses to this decision varied. On one hand, OpenAI’s caution prompted an overblown reaction in the media, with articles about the “dangerous” technology feeding into the Frankenstein narrative that often surrounds developments in AI.
Others took issue with OpenAI’s self-promotion, with some even suggesting that OpenAI purposefully exaggerated GPT-2’s power in order to create hype—while contravening a norm in the AI research community, where labs routinely share data, code, and pre-trained models. As machine learning researcher Zachary Lipton tweeted, “Perhaps what's *most remarkable* about the @OpenAI controversy is how *unremarkable* the technology is. Despite their outsize attention & budget, the research itself is perfectly ordinary—right in the main branch of deep learning NLP research.”
OpenAI stood by its decision to release only a limited version of GPT-2, but has since released larger models for other researchers and the public to experiment with. As yet, there has been no reported case of a widely distributed fake news article generated by the system. But there have been a number of interesting spin-off projects, including GPT-2 poetry and a webpage where you can prompt the system with questions yourself.
There’s even a Reddit group populated entirely with text produced by GPT-2-powered bots. Mimicking humans on Reddit, the bots have long conversations about a variety of topics, including conspiracy theories and Star Wars movies.
This bot-powered conversation may signify the new condition of life online, where language is increasingly created by a combination of human and non-human agents, and where maintaining the distinction between human and non-human, despite our best efforts, is increasingly difficult.
The idea of using rules, mechanisms, and algorithms to generate language has inspired people in many different cultures throughout history. But it’s in the online world that this powerful form of wordcraft may really find its natural milieu—in an environment where the identity of speakers becomes more ambiguous, and perhaps, less relevant. It remains to be seen what the consequences will be for language, communication, and our sense of human identity, which is so bound up with our ability to speak in natural language.
This is the sixth installment of a six-part series on the history of natural language processing. Last week’s post explained how an innocent Microsoft chatbot turned instantly racist on Twitter.
You can also check out our prior series on the untold history of AI.