#435784 Amazon Uses 800 Robots to Run This ...
At Amazon’s re:MARS conference in Las Vegas today, who else but Amazon is introducing two new robots designed to make its fulfillment centers even more fulfilling. Xanthus (named after a mythological horse that could very briefly talk but let’s not read too much into that) is a completely redesigned drive unit, one of the robotic mobile bases that carries piles of stuff around for humans to pick from. It has a thinner profile, a third of the parts, costs half as much, and can wear different modules on top to perform a much wider variety of tasks than its predecessor.
Pegasus (named after a mythological horse that could fly but let’s not read too much into that either) is also a mobile robot, but much smaller than Xanthus, designed to help the company quickly and accurately sort individual packages. For Amazon, it’s a completely new large-scale robotic system involving tightly coordinated fleets of robots tossing boxes down chutes, and it’s just as fun to watch as it sounds.
Amazon has already deployed 800 Pegasus units at a sorting facility in the United States, adding to its newly updated total of 200,000 robotic drive units worldwide.
If the Pegasus system looks familiar, it’s because other warehouse automation companies have had something that’s at least superficially very similar up and running for years.
Photo: Amazon
Pegasus is one of Amazon’s new warehouse robots, equipped with a conveyor belt on top and used in the company’s sorting facilities.
But the most interesting announcement that Amazon made, kind of low key and right at the end of their re:MARS talk, is that they’re working on ways of making some of their mobile robots actually collaborative, leveraging some of the technology that they acquired from Boulder, Colo.-based warehouse robotics startup Canvas Technology earlier this year:
“With our recent acquisition of Canvas, we expect to be able to combine this drive platform with AI and autonomous mobility capabilities, and for the first time, allow our robots to move outside of our robotic drive fields, and interact collaboratively with our associates to do a number of mobility tasks,” said Brad Porter, VP of robotics at Amazon.
At the moment, Amazon’s robots are physically separated from humans except for one highly structured station where the human only interacts with the robot in one or two very specific ways. We were told a few months ago that Amazon would like to have mobile robots that are able to move things through the areas of fulfillment centers that have people in them, but that they’re (quite rightly) worried about the safety aspects of having robots and humans work around each other. Other companies are already doing this on a smaller scale, and it means developing a reliable safety system that can handle randomly moving humans, environmental changes, and all kinds of other stuff. It’s much more difficult than having a nice, clean, roped-off area to work in where a wayward human would be an exception rather than just another part of the job.
Photo: Canvas Technology
A robot created by Canvas Technology, a Boulder, Colo.-based warehouse robotics startup acquired by Amazon earlier this year.
It now seems like Canvas has provided the secret sauce that Amazon needed to start implementing this level of autonomy. As for what it’s going to look like, our best guess is that Amazon is going to have to do a little bit more than slap some extra sensors onto Xanthus or Pegasus, if for no other reason than the robots will almost certainly need more ground clearance to let them operate away from the reliably flat floors that they’re accustomed to. We’re expecting to see them performing many of the tasks that companies like Fetch Robotics and OTTO Motors are doing already—moving everything from small boxes to large pallets to keep humans from having to waste time walking.
Of course, this all feeds back into what drives Amazon more than anything else: efficiency. And for better or worse, humans are not uniquely good at moving things from place to place, so it’s no surprise that Amazon wants to automate that, too. The good news is that, at least for now, Amazon still needs humans to babysit all those robots.
[ Amazon ]
#435769 The Ultimate Optimization Problem: How ...
Lucas Joppa thinks big. Even while gazing down into his cup of tea in his modest office on Microsoft’s campus in Redmond, Washington, he seems to see the entire planet bobbing in there like a spherical tea bag.
As Microsoft’s first chief environmental officer, Joppa came up with the company’s AI for Earth program, a five-year effort that’s spending US $50 million on AI-powered solutions to global environmental challenges.
The program is not just about specific deliverables, though. It’s also about mindset, Joppa told IEEE Spectrum in an interview in July. “It’s a plea for people to think about the Earth in the same way they think about the technologies they’re developing,” he says. “You start with an objective. So what’s our objective function for Earth?” (In computer science, an objective function describes the parameter or parameters you are trying to maximize or minimize for optimal results.)
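To make the term concrete (this toy example is ours, not Joppa's), an objective function is simply the quantity an optimizer drives down or up. A few lines of plain Python show the idea with a crude gradient descent:

```python
# Toy "objective function": the value an optimizer tries to minimize.
# The function and its minimum are invented purely for illustration.

def objective(x, y):
    # Minimized at x = 3, y = -1
    return (x - 3) ** 2 + (y + 1) ** 2

# Crude gradient descent, enough to show the mechanics
x, y = 0.0, 0.0
lr = 0.1  # learning rate (step size)
for _ in range(200):
    dx = 2 * (x - 3)  # partial derivative with respect to x
    dy = 2 * (y + 1)  # partial derivative with respect to y
    x, y = x - lr * dx, y - lr * dy

print(round(x, 3), round(y, 3))  # prints 3.0 -1.0
```

Maximizing is the same trick with the sign flipped; Joppa's question is what such a function should even measure when the "parameters" are uses of the Earth's surface.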
Photo: Microsoft
Lucas Joppa
AI for Earth launched in December 2017, and Joppa’s team has since given grants to more than 400 organizations around the world. In addition to receiving funding, some grantees get help from Microsoft’s data scientists and access to the company’s computing resources.
In a wide-ranging interview about the program, Joppa described his vision of the “ultimate optimization problem”—figuring out which parts of the planet should be used for farming, cities, wilderness reserves, energy production, and so on.
Every square meter of land and water on Earth has an infinite number of possible utility functions. It’s the job of Homo sapiens to describe our overall objective for the Earth. Then it’s the job of computers to produce optimization results that are aligned with the human-defined objective.
I don’t think we’re close at all to being able to do this. I think we’re closer from a technology perspective—being able to run the model—than we are from a social perspective—being able to make decisions about what the objective should be. What do we want to do with the Earth’s surface?
Such questions are increasingly urgent, as climate change has begun reshaping our planet and our societies. Global average surface temperatures have already risen by about 1 degree Celsius above preindustrial levels, according to the Intergovernmental Panel on Climate Change.
Today, people all around the world participated in a “climate strike,” with young people leading the charge and demanding a global transition to renewable energy. On Monday, world leaders will gather in New York for the United Nations Climate Action Summit, where they’re expected to present plans to limit warming to 1.5 degrees Celsius.
Joppa says such summit discussions should aim for a truly holistic solution.
We talk about how to solve climate change. There’s a higher-order question for society: What climate do we want? What output from nature do we want and desire? If we could agree on those things, we could put systems in place for optimizing our environment accordingly. Instead we have this scattered approach, where we try for local optimization. But the sum of local optimizations is never a global optimization.
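A toy sketch of that point (the land uses, payoffs, and emissions penalty are all invented for illustration): two regions that each greedily maximize their own payoff can end up worse off than the same regions optimized jointly under a shared constraint.

```python
# Illustrative only: two regions each choose a land use. Chosen locally,
# each maximizes its own payoff; chosen jointly, a shared penalty on
# combined emissions changes the best answer. All numbers are made up.
from itertools import product

options = {  # land use -> (individual payoff, emissions)
    "farm":   (10, 8),
    "forest": (4, 0),
    "city":   (12, 9),
}

def total_value(choices):
    payoff = sum(options[c][0] for c in choices)
    emissions = sum(options[c][1] for c in choices)
    # Quadratic penalty: combined emissions hurt everyone
    return payoff - 0.1 * emissions ** 2

# Local optimization: each region independently picks its highest payoff
local = ("city", "city")  # 12 is the best individual payoff

# Global optimization: exhaustive search over joint choices
best = max(product(options, repeat=2), key=total_value)

print(best, total_value(best) > total_value(local))
```

Here the jointly chosen plan ("forest", "forest") beats the sum of the two locally optimal choices, which is exactly the structure Joppa is pointing at, scaled down from a planet to two squares.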
There’s increasing interest in using artificial intelligence to tackle global environmental problems. New sensing technologies enable scientists to collect unprecedented amounts of data about the planet and its denizens, and AI tools are becoming vital for interpreting all that data.
The 2018 report “Harnessing AI for the Earth,” produced by the World Economic Forum and the consulting company PwC, discusses ways that AI can be used to address six of the world’s most pressing environmental challenges: climate change, biodiversity, healthy oceans, water security, clean air, and disaster resilience.
Many of the proposed applications involve better monitoring of human and natural systems, as well as modeling applications that would enable better predictions and more efficient use of natural resources.
Joppa says that AI for Earth is taking a two-pronged approach, funding efforts to collect and interpret vast amounts of data alongside efforts that use that data to help humans make better decisions. And that’s where the global optimization engine would really come in handy.
For any location on earth, you should be able to go and ask: What’s there, how much is there, and how is it changing? And more importantly: What should be there?
On land, the data is really only interesting for the first few hundred feet. Whereas in the ocean, the depth dimension is really important.
We need a planet with sensors, with roving agents, with remote sensing. Otherwise our decisions aren’t going to be any good.
AI for Earth isn’t going to create such an online portal within five years, Joppa stresses. But he hopes the projects that he’s funding will contribute to making such a portal possible—eventually.
We’re asking ourselves: What are the fundamental missing layers in the tech stack that would allow people to build a global optimization engine? Some of them are clear, some are still opaque to me.
By the end of five years, I’d like to have identified these missing layers, and have at least one example of each of the components.
Some of the projects that AI for Earth has funded seem to fit that desire. Examples include SilviaTerra, which used satellite imagery and AI to create a map of the 92 billion trees in forested areas across the United States. There’s also OceanMind, a non-profit that detects illegal fishing and helps marine authorities enforce compliance. Platforms like Wildbook and iNaturalist enable citizen scientists to upload pictures of animals and plants, aiding conservation efforts and research on biodiversity. And FarmBeats aims to enable data-driven agriculture with low-cost sensors, drones, and cloud services.
It’s not impossible to imagine putting such services together into an optimization engine that knows everything about the land, the water, and the creatures who live on planet Earth. Then we’ll just have to tell that engine what we want to do about it.
Editor’s note: This story is published in cooperation with more than 250 media organizations and independent journalists that have focused their coverage on climate change ahead of the UN Climate Action Summit. IEEE Spectrum’s participation in the Covering Climate Now partnership builds on our past reporting about this global issue.
#435687 Humanoid Robots Teach Coping Skills to ...
Photo: Rob Felt
IEEE Senior Member Ayanna Howard with one of the interactive androids that help children with autism improve their social and emotional engagement.
THE INSTITUTE
Children with autism spectrum disorder can have a difficult time expressing their emotions and can be highly sensitive to sound, sight, and touch. That sometimes restricts their participation in everyday activities, leaving them socially isolated. Occupational therapists can help them cope better, but the time they’re able to spend is limited and the sessions tend to be expensive.
Roboticist Ayanna Howard, an IEEE senior member, has been using interactive androids to guide children with autism on ways to socially and emotionally engage with others—as a supplement to therapy. Howard is chair of the School of Interactive Computing and director of the Human-Automation Systems Lab at Georgia Tech. She helped found Zyrobotics, a Georgia Tech VentureLab startup that is working on AI and robotics technologies to engage children with special needs. Last year Forbes named Howard, Zyrobotics’ chief technology officer, one of the Top 50 U.S. Women in Tech.
In a recent study, Howard and other researchers explored how robots might help children navigate sensory experiences. The experiment involved 18 participants between the ages of 4 and 12; five had autism, and the rest were meeting typical developmental milestones. Two humanoid robots were programmed to express boredom, excitement, nervousness, and 17 other emotional states. As children explored stations set up for hearing, seeing, smelling, tasting, and touching, the robots modeled what the socially acceptable responses should be.
“If a child’s expression is one of happiness or joy, the robot will have a corresponding response of encouragement,” Howard says. “If there are aspects of frustration or sadness, the robot will provide input to try again.” The study suggested that many children with autism exhibit stronger levels of engagement when the robots interact with them at such sensory stations.
It is one of many robotics projects Howard has tackled. She has designed robots for researching glaciers, and she is working on assistive robots for the home, as well as an exoskeleton that can help children who have motor disabilities.
Howard spoke about her work during the Ethics in AI: Impacts of (Anti?) Social Robotics panel session held in May at the IEEE Vision, Innovation, and Challenges Summit in San Diego. You can watch the session on IEEE.tv.
The next IEEE Vision, Innovation, and Challenges Summit and Honors Ceremony will be held on 15 May 2020 at the JW Marriott Parq Vancouver hotel, in Vancouver.
In this interview with The Institute, Howard talks about how she got involved with assistive technologies, the need for a more diverse workforce, and ways IEEE has benefited her career.
FOCUS ON ACCESSIBILITY
Howard was inspired to work on technology that can improve accessibility in 2008 while teaching high school students at a summer camp devoted to science, technology, engineering, and math.
“A young lady with a visual impairment attended camp. The robot programming tools being used at the camp weren’t accessible to her,” Howard says. “As an engineer, I want to fix problems when I see them, so we ended up designing tools to enable access to programming tools that could be used in STEM education.
“That was my starting motivation, and this theme of accessibility has expanded to become a main focus of my research. One of the things about this world of accessibility is that when you start interacting with kids and parents, you discover another world out there of assistive technologies and how robotics can be used for good in education as well as therapy.”
DIVERSITY OF THOUGHT
The Institute asked Howard why it’s important to have a more diverse STEM workforce and what could be done to increase the number of women and others from underrepresented groups.
“The makeup of the current engineering workforce isn’t necessarily representative of the world, which is composed of different races, cultures, ages, disabilities, and socio-economic backgrounds,” Howard says. “We’re creating products used by people around the globe, so we have to ensure they’re being designed for a diverse population. As IEEE members, we also need to engage with people who aren’t engineers, and we don’t do that enough.”
Educational institutions are doing a better job of increasing diversity in areas such as gender, she says, adding that more work is needed because the enrollment numbers still aren’t representative of the population and the gains don’t necessarily carry through after graduation.
“There has been an increase in the number of underrepresented minorities and females going into engineering and computer science,” she says, “but data has shown that their numbers are not sustained in the workforce.”
ROLE MODEL
Underrepresented groups on today’s college campuses are now numerous enough to form communities, so the lack of engineering role models, while still a concern on campuses, is more acute for preuniversity students, Howard says.
“Depending on where you go to school, you may not know what an engineer does or even consider engineering as an option,” she says, “so there’s still a big disconnect there.”
Howard has been involved for many years in math- and science-mentoring programs for at-risk high school girls. She tells them to find what they’re passionate about and combine it with math and science to create something. She also advises them not to let anyone tell them that they can’t.
Howard’s father is an engineer. She says he neither encouraged nor discouraged her from becoming one, but when she broke something, he would show her how to fix it and talk her through the process. Along the way, he taught her the logical way of thinking she says all engineers have.
“When I would try to explain something, he would quiz me and tell me to ‘think more logically,’” she says.
Howard earned a bachelor’s degree in engineering from Brown University, in Providence, R.I., and received master’s and doctoral degrees in electrical engineering from the University of Southern California. Before joining the faculty of Georgia Tech in 2005, she worked at NASA’s Jet Propulsion Laboratory at the California Institute of Technology for more than a decade as a senior robotics researcher and deputy manager in the Office of the Chief Scientist.
ACTIVE VOLUNTEER
Howard’s father was also an IEEE member, but that’s not why she joined the organization. She says she signed up as a student because “that was something that you just did. Plus, my student membership fee was subsidized.”
She kept the membership as a grad student because of the discounted rates members receive on conferences.
Those conferences have had an impact on her career. “They allow you to understand what the state of the art is,” she says. “Back then you received a printed conference proceeding and reading through it was brutal, but by attending it in person, you got a 15-minute snippet about the research.”
Howard is an active volunteer with the IEEE Robotics and Automation and the IEEE Systems, Man, and Cybernetics societies, holding many positions and serving on several committees. She is also featured in the IEEE Impact Creators campaign. These members were selected because they inspire others to innovate for a better tomorrow.
“I value IEEE for its community,” she says. “One of the nice things about IEEE is that it’s international.”