Tag Archives: story

#429385 Robots Learning To Pick Things Up As ...

Robots Learning To Pick Things Up As Babies Do
Carnegie Mellon Unleashing Lots of Robots to Push, Poke, Grab Objects
Babies learn about their world by pushing and poking objects, putting them in their mouths and throwing them. Carnegie Mellon University scientists are taking a similar approach to teach robots how to recognize and grasp objects around them.
Manipulation remains a major challenge for robots and has become a bottleneck for many applications. But researchers at CMU’s Robotics Institute have shown that by allowing robots to spend hundreds of hours poking, grabbing and otherwise physically interacting with a variety of objects, those robots can teach themselves how to pick up objects.
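What "teaching themselves" means in practice is a large-scale trial-and-error loop: the robot looks at the scene, attempts a grasp, and records whether anything ended up in its gripper. The sketch below illustrates that loop in Python; the robot and camera interfaces, the thresholds and the grasp parameterization are illustrative assumptions, not CMU's actual software.

import random

MIN_OBJECT_WIDTH = 0.005  # metres; if the gripper closes tighter than this, the grasp failed

def collect_grasp_data(robot, camera, num_attempts=10000):
    """Attempt random grasps and label each one by whether the gripper
    actually closed on an object (self-supervision, no human labels)."""
    dataset = []
    for _ in range(num_attempts):
        image = camera.capture()               # view of the tabletop before the grasp
        x = random.uniform(0.0, 1.0)           # normalized grasp position
        y = random.uniform(0.0, 1.0)
        angle = random.uniform(0.0, 180.0)     # gripper rotation in degrees
        robot.move_to(x, y, angle)
        robot.close_gripper()
        robot.lift()
        # Success if the fingers stopped before closing completely,
        # i.e. something is being held between them.
        success = robot.gripper_width() > MIN_OBJECT_WIDTH
        dataset.append((image, (x, y, angle), int(success)))
        robot.release_and_reset()
    return dataset

A network trained on the resulting (image, grasp, success) triples can then be used to predict which grasps are likely to succeed on objects the robot has never handled.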
In their latest findings, presented last fall at the European Conference on Computer Vision, they showed that robots gained a deeper visual understanding of objects when they were able to manipulate them.
The researchers, led by Abhinav Gupta, assistant professor of robotics, are now scaling up this approach, with help from a three-year, $1.5 million “focused research award” from Google.
“We will use dozens of different robots, including one- and two-armed robots and even drones, to learn about the world and actions that can be performed in the world,” Gupta said. “The cost of robots has come down significantly in recent years, enabling us to unleash lots of robots to collect an unprecedented amount of data on physical interactions.”
Gupta said the shortcomings of previous approaches to robot manipulation were apparent during the Defense Advanced Research Projects Agency’s Robotics Challenge in 2015. Some of the world’s most advanced robots, designed to respond to natural or manmade emergencies, had difficulty with tasks such as opening doors or unplugging and re-plugging an electrical cable.
“Our robots still cannot understand what they see and their action and manipulation capabilities pale in comparison to those of a two-year-old,” Gupta said.
For decades, visual perception and robotic control have been studied separately. Visual perception developed with little consideration of physical interaction, and most manipulation and planning frameworks can’t cope with perception failures. Gupta predicts that allowing the robot to explore perception and action simultaneously, like a baby, can help overcome these failures.
“Psychological studies have shown that if people can’t affect what they see, their visual understanding of that scene is limited,” said Lerrel Pinto, a Ph.D. student in robotics in Gupta’s research group. “Interaction with the real world exposes a lot of visual dynamics.”
Robots are slow learners, however, requiring hundreds of hours of interaction to learn how to pick up objects. And because robots have previously been expensive and often unreliable, researchers relying on this data-driven approach have long suffered from “data starvation.”
Scaling up the learning process will help address this data shortage. Pinto said much of the work by the CMU group has been done using a two-armed Baxter robot with a simple, two-fingered manipulator. Using more and different robots, including those with more sophisticated hands, will enrich manipulation databases.
Meanwhile, the success of this research approach has inspired other research groups in academia, as well as Google with its own array of robots, to adopt it and help expand these databases even further.
“If you can get the data faster, you can try a lot more things — different software frameworks, different algorithms,” Pinto said. And once one robot learns something, it can be shared with all robots.
In addition to Gupta and Pinto, the research team for the Google-funded project includes Martial Hebert, director of the Robotics Institute; Deva Ramanan, associate professor of robotics; and Ruslan Salakhutdinov, associate professor of machine learning and director of artificial intelligence research at Apple. The Office of Naval Research and the National Science Foundation also sponsor this research.
About Carnegie Mellon University: Carnegie Mellon (www.cmu.edu) is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the arts. More than 13,000 students in the university’s seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation.
5000 Forbes Ave.
Pittsburgh, PA 15213
412-268-2900
Fax: 412-268-6929
Contact: Byron Spice
412-268-9068
bspice@cs.cmu.edu

Posted in Human Robots

#429382 Portable and Wearable Obstacle Detection

LETI ANNOUNCES PROJECT TO ADAPT OBSTACLE-DETECTION TECHNOLOGY USED IN AUTONOMOUS CARS FOR MULTIPLE USES

INSPEX to Combine Knowhow of Nine European Organizations to Create
Portable and Wearable Spatial-Exploration Systems

GRENOBLE, France – Feb. 2, 2017 – Leti, a technology research institute of CEA Tech, today announced a European project to develop a portable and wearable, multisensor and low-power spatial-exploration and obstacle-detection system for all conditions of weather and visibility.

The INSPEX system will adapt obstacle-detection capabilities common in autonomous cars for portable and wearable applications, including guidance for the visually impaired and blind, robotics, drones and smart manufacturing. It will be used for real-time, 3D detection, location and warning of obstacles under all environmental conditions. These include smoke, dust, fog, heavy rain/snow, and darkness, and in indoor and outdoor environments with unknown stationary and mobile obstacles.
Credit: Leti
Drawing on the expertise and technologies of the nine partners in the three-year project, the system will be based on state-of-the-art range sensors such as LiDAR, UWB radar and MEMS ultrasound.

Coordinated by Leti, INSPEX will miniaturize and reduce the power consumption of these sensors to ease their integration in the new system. They will then be co-integrated with an inertial measurement unit (IMU), environmental sensing, wireless communications, signal-and-data processing, power-efficient data fusion and user interface, all in a miniature, low-power system designed to operate within wider smart and Internet of Things environments.
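As a rough illustration of the kind of data fusion described above, the sketch below combines range readings from several sensors into a single obstacle estimate using inverse-variance weighting; the sensor names and noise figures are assumptions for illustration, not INSPEX specifications.

def fuse_ranges(readings):
    """readings: list of (distance_m, std_dev_m) pairs, one per sensor.
    Returns the inverse-variance weighted distance and its combined uncertainty."""
    weights = [1.0 / (sigma ** 2) for _, sigma in readings]
    total = sum(weights)
    fused = sum(w * d for w, (d, _) in zip(weights, readings)) / total
    fused_sigma = (1.0 / total) ** 0.5
    return fused, fused_sigma

# Example: LiDAR, UWB radar and MEMS ultrasound each report a range to the
# nearest obstacle, with different noise levels.
lidar      = (2.41, 0.02)   # (metres, standard deviation)
uwb_radar  = (2.55, 0.10)
ultrasound = (2.38, 0.05)

distance, uncertainty = fuse_ranges([lidar, uwb_radar, ultrasound])
print(f"obstacle at {distance:.2f} m ± {uncertainty:.2f} m")

A real system would fuse full point clouds and track obstacles over time, but the principle of trusting each sensor in proportion to its reliability under the current conditions is the same.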

The main project demonstrator will embed the INSPEX system in a white cane for the visually impaired and provide 3D spatial audio feedback on obstacle location.

“Sophisticated obstacle-detection systems such as those in autonomous vehicles are typically large and heavy, have high power consumption and require large computational capabilities,” said Suzanne Lesecq, project coordinator at Leti. “The INSPEX team will work together to miniaturize and adapt this technology for individual and personal applications, which will require even greater capability for all-conditions obstacle detection. The project is a strong example of European innovation to bring leading-edge technology to a broader segment of users.”

In addition to applications for the visually impaired, drones and robots, the INSPEX system’s application domains are expected to include:
Human mobility – First responders, disabled persons
Instrumentation – Distance-measuring tools
Smart homes and factories – Assembly machines, security surveillance systems

Joining Leti in the project are:
University of Manchester, UK
Cork Institute of Technology, Ireland
STMicroelectronics SRL, Italy
Swiss Center for Electronics and Microtechnology CSEM, Switzerland
Tyndall National Institute University College Cork, Ireland
University of Namur ASBL, Belgium
GoSense, France
SensL Technologies Ltd., Ireland
The INSPEX demonstrator will integrate the INSPEX mobile detection device into a white cane for the visually impaired. For this application, an Augmented Reality Audio Interface will be integrated to provide spatial 3D sound feedback using extra-auricular earphones. This feedback will take into account head attitude, tracked by an AHRS (attitude and heading reference system) in the headset, to provide 3D spatial sound feedback of an obstacle’s real direction and range. The context-aware communications will integrate the user with wider smart environments such as smart traffic lights, navigation beacons and ID tags associated with IoT objects. The user’s mobile device will allow integration with, for example, mapping apps.
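To make the head-tracked audio feedback concrete, the following sketch shows one way an obstacle's bearing and range could be converted into a head-relative cue using the yaw reported by the AHRS; the angle conventions and the distance-to-volume mapping are illustrative assumptions, not the INSPEX interface.

def head_relative_bearing(obstacle_bearing_deg, head_yaw_deg):
    """Both angles are measured clockwise from north in the world frame.
    Returns the obstacle azimuth relative to where the head points,
    wrapped to [-180, 180) so the audio engine can pan left or right."""
    return (obstacle_bearing_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

def audio_cue(obstacle_bearing_deg, obstacle_range_m, head_yaw_deg):
    """Build a simple spatial-audio cue: direction from the head-relative
    azimuth, loudness falling off with distance out to about 5 metres."""
    azimuth = head_relative_bearing(obstacle_bearing_deg, head_yaw_deg)
    volume = max(0.0, 1.0 - obstacle_range_m / 5.0)
    return {"azimuth_deg": azimuth, "volume": volume}

# An obstacle 30 degrees east of north at 2 m, while the user faces due east (90 degrees):
print(audio_cue(30.0, 2.0, 90.0))   # azimuth -60 -> the cue is rendered to the user's left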

About Leti
Leti, a technology research institute at CEA Tech, is a global leader in miniaturization technologies enabling smart, energy-efficient and secure solutions for industry. Founded in 1967, Leti pioneers micro- and nanotechnologies, tailoring differentiating applicative solutions for global companies, SMEs and startups. Leti tackles critical challenges in healthcare, energy and digital migration. From sensors to data processing and computing solutions, Leti’s multidisciplinary teams deliver solid expertise, leveraging world-class pre-industrialization facilities. With a staff of more than 1,900, a portfolio of 2,700 patents, 91,500 sq. ft. of cleanroom space and a clear IP policy, the institute is based in Grenoble, France, and has offices in Silicon Valley and Tokyo. Leti has launched 60 startups and is a member of the Carnot Institutes network. This year, the institute celebrates its 50th anniversary. Follow us on www.leti.fr/en and @CEA_Leti.
CEA Tech is the technology research branch of the French Alternative Energies and Atomic Energy Commission (CEA), a key player in innovative R&D, defense & security, nuclear energy, technological research for industry and fundamental science. In 2015, Thomson Reuters identified CEA as the most innovative research organization in the world.

Press Contact

Sarah-Lyle Dampoux
Mahoney l Lyle
+33 6 74 93 23 47
sldampoux@mahoneylyle.com


Posted in Human Robots

#429377 Robot to carry your things

PIAGGIO GROUP REINVENTS THE FUTURE OF TRANSPORTATION WITH INTRODUCTION OF GITA, THE FIRST OFFERING BY PIAGGIO FAST FORWARD

Piaggio Fast Forward to Host Launch Event in Boston

Boston, MA – January 30, 2017 – Piaggio Group, the largest European manufacturer of two-wheel motor vehicles and a leader in light mobility founded over 130 years ago, has launched Piaggio Fast Forward, a newly-established company based in the U.S. formed to pioneer the future of light mobility by fundamentally rethinking the movement of people and goods.
Image Credit: Piaggio Group
“We have established Piaggio Fast Forward to reinvent the future of mobility, and this is best accomplished with PFF as a separate entity, but with the backing and experience of the Piaggio Group,” said Mr. Michele Colaninno, Chairman of the Board of PFF. “You can expect to find Piaggio Fast Forward blurring the lines between transportation, robotics, and urban environments.”
Piaggio Fast Forward is based in Boston, Massachusetts, and is led by CEO Jeffrey Schnapp, Chief Creative Officer Greg Lynn, Chief Operating Officer Sasha Hoffman and Chief Design Research Officer Dr. Beth Altringer. PFF’s Board of Advisors includes Nicholas Negroponte (Founder, MIT Media Lab), John Hoke (VP Global Design, Nike), Doug Brent (VP Technology Innovation, Trimble), and Jeff Linnell (former Director of Robotics, Google).
“The transportation and robotics industries tend to focus on optimizing tasks and displacing labor,” according to Jeffrey Schnapp, CEO of Piaggio Fast Forward. “We are developing products that augment and extend human capabilities, instead of simply seeking to replace them.”
Image Credit: Piaggio Group
Introducing Gita
Piaggio Fast Forward’s first offering is Gita (pronounced “jee-ta,” Italian for “short trip”), an autonomous vehicle that extends a person’s cargo-carrying abilities. Gita is an intelligent vehicle with a communicative personality. It learns and navigates indoors and out with the oversight and decision-making of humans. Gita is able to follow a human operator or move autonomously in a mapped environment.

Gita is 26 inches tall, has a cargo carrying capacity of 40 lbs, and a maximum speed of 22 mph. Gita is designed to travel at human speeds with human agility. Gita has a zero turning radius and is designed to accompany people at speeds from a crawl, to a walk, to a jog, to riding a bike. Instead of deciding to use an automobile or truck to transport 40 pounds worth of packages, Piaggio Fast Forward wants to help people walk, run, pedal and skate through life with the assistance of a family of vehicles like Gita.
Image Credit: Piaggio Group
“Think about how much more freely you would be able to move from one point to another if lugging cumbersome items was removed from the equation,” added Schnapp. “Gita frees up the human hand to focus on complex and creative everyday tasks by taking over mundane transportation chores. You can also send your Gita off on missions while you are busy doing something more pressing.”
PFF will be deploying Gita in a variety of B2B pilot programs in the near term, with an eye toward future consumer applications.
Press Reception and Demonstration
Piaggio Fast Forward will privately debut its product on Thursday, February 2nd to an exclusive group of partners and select members of the media.
When: Thursday, February 2, 2017, from 5:45 – 9:00 p.m.
Where: Cambridge, MA
Whom: Executives from PFF will be available for interviews, as will Piaggio Fast Forward Chairman Michele Colaninno and Piaggio Group Chairman and CEO Roberto Colaninno
RSVP: Reporters may request an invite at press@piaggiofastforward.com
About Piaggio Fast Forward
PFF was founded in 2015 by the Piaggio Group to pioneer the intelligent movement of people and goods at just the right scale: larger than aerial drones but smaller than self-driving cars and trucks. The company’s mission is to help people to move better, further, faster, and more enjoyably. We build robots and lightweight transportation solutions that travel behind and beside people on the move. In the present era of machine intelligence, autonomy, and ubiquitous networks, we seek to promote more vibrant cities filled with pedestrians, cyclists, and skaters whose mobility is enhanced by new varieties of smart vehicles. PFF is based in Boston, Massachusetts. For more information, please visit www.piaggiofastforward.com or follow the company @P_F_F.
About Piaggio Group
The Piaggio Group is the largest European manufacturer of two-wheel motor vehicles and one of the world leaders in its sector. The Group is also a major international player on the commercial vehicle market. Established in 1884 by Rinaldo Piaggio, since 2003 the Piaggio Group has been controlled by Immsi S.p.A. (IMS.MI), an industrial holding listed on the Italian stock exchange and headed by Chairman Roberto Colaninno. Immsi’s Chief Executive Officer and MD is Michele Colaninno. Roberto Colaninno is the Chairman and Chief Executive Officer of the Piaggio Group; Matteo Colaninno is Deputy Chairman. Piaggio (PIA.MI) has been listed on the Italian stock exchange since 2006. The Piaggio Group product range includes scooters, motorcycles and mopeds from 50 to 1,400 cc marketed under the Piaggio, Vespa, Gilera, Aprilia, Moto Guzzi, Derbi and Scarabeo brands. The Group also operates in the three- and four-wheel light transport sector with commercial vehicles.

Tim Smith
Element Public Relations
415-350-3019
tsmith@elementpr.com

Posted in Human Robots

#429373 Video Friday: A Humanoid in the Kitchen, ...

Your weekly selection of awesome robot videos

Posted in Human Robots

#429372 The Surprisingly Simple Invention That ...

Today, sewing relies on the low-tech power of human hands, but soon that may not be the case. Human workers are still needed for the final steps of making clothes, in order to align fabrics and correctly feed them into sewing machines. If robots could do that instead, shock waves of change would surely ripple through global supply chains and disrupt the lives of millions of low-wage earners in the developing world.
For better or worse, plenty of technologists, researchers, and companies are working on the challenge—but so far, getting robots to navigate the imprecisions of flimsy textile materials that easily bend has proven elusive.
One promising solution, though, has come from an unlikely place: the sleepless brain of Jonathan Zornow, a young freelance web developer with no previous background in robotics, manufacturing, or the apparel business. His project, Sewbo, recently demonstrated the world’s first robotically sewn garment, and the inspiration came while watching a late-night Science Channel show called How It’s Made.
“I would watch How It’s Made to help me fall asleep at night, because I found it meditative and soothing to see the machines performing constant repetitive tasks,” Zornow told me. “In one episode they did blue jeans. In this case there were no machines doing serene repetitive motions. There were people involved every step along the way, and rather than help me fall asleep, it kept me up.”
It’s true that in almost every scene of the blue jeans episode, there’s the distinctive presence of human fingertips. Zornow got stuck on the question of why there weren’t robots performing the stitching, and what clicked for him while watching the show was the realization of just how difficult it is for robots to handle a material that bends.

"The standard approach to robotic sewing has been to counter the complexity of working with fabrics, with equally complex machines."

Robots are good at tasks that are precisely the same each time, but with clothing, the flimsiness adds a layer of complexity robots can’t yet handle.
The solution Zornow then came up with is almost laughably simple. Instead of pouring millions of dollars into fancier robots, he decided to find a way to stiffen the clothes in order to make them suitable for robotic machines.
“When I looked into it, it seemed that the standard approach to robotic sewing has been to counter the complexity of working with fabrics, with equally complex machines,” he told me. Instead, Zornow removed the complexity by making the fabrics stiff.
He decided to use a water-soluble polymer that washes out in warm water and can temporarily stiffen fabric, meaning the robot handles a precisely defined object when stitching the garment. The inspiration came to him while reading a Make Magazine article about water-soluble support structures for 3D printers.
To prove out the concept, Zornow rented an off-the-shelf robotic arm, which he used to demonstrate the approach. Last year he formally announced the project as the first time a robot had sewn a garment. He’s now pursuing the project full time to turn it into a business, and was even invited to join a robotics manufacturing consortium sponsored by the US Department of Defense.

The eye-catching part of Zornow’s breakthrough isn’t so much the technology used or the specific idea. Other promising high-tech solutions are being pursued, and some groups have even tried, unsuccessfully, to automate sewing by stiffening fabric with starch.
What’s stunning is that since the 1980s, hundreds of millions of dollars have been spent trying to automate garment sewing, yet a single curious kid empowered by random ideas from the internet was able to address the challenge with a different (and relatively low-tech) approach. And beyond simply having the idea, Zornow had access to the robotic tools to actually pursue the concept, which due to cost would have been the stuff of dreams just a few years ago.
On display is a phenomenon new to recent history, where big-idea breakthroughs can come as much from individuals with access to the internet as from a heavily funded research lab inside a government or corporation. It’s also an age where experts are being upended by radical outsiders, where leading biotechnology startups are founded by aerospace engineers, and where amateurs can routinely outperform experts in data science tasks.
In the age of the internet, ideas spread with accelerating force, and a single kid in Seattle can try to reshape a $1 trillion global industry.
Whether Zornow’s solution will actually make meaningful headway into the apparel industry is uncertain, but ultimately his triumph is in showing that good ideas come from strange places.
Image Credit: Sewbo

Posted in Human Robots