Tag Archives: robotics

#429389 Visual Processing System

The article below is by our reader Kyle Stuart, in which he introduces his work:
The world's only optical artificial intelligence system
The visual processing system (VPS) is an artificial intelligence system that operates in a way very similar to the human eye and brain.
Like the human eye and brain, the VPS consists of two key parts: the image sensor and the image processing unit (IPU). The machine outputs, whatever they might be, can be thought of as the machine's body, whether that body has wings and motors, wheels, or is simply a PC program or app.
Image Credit: Kyle Stuart. The basic layout of a VPS, depicting the two key components, the image sensor and the IPU, along with an output control unit fixed to a motherboard. The output control unit comprises the robotics circuits used to regulate your machine's outputs.

The VPS receives optical data from the image sensor, which sends that data to the IPU for processing. The key component of the VPS is the optical capacitor, which acts like the transistor in a computer chip: it processes the optical data, and the result triggers the machine outputs. On sight of your dropped keys, for example, the IPU of a humanoid helper robot might trigger its machine outputs, its appendages and vocal system, to call out, walk over to your dropped keys, pick them up, and hand them to you.
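As a thought experiment, the sensor-to-IPU-to-output flow described above can be sketched in software. All class and function names here are hypothetical illustrations; the VPS itself is described as optical hardware built around optical capacitors, not software.

```python
# Minimal software sketch of the pipeline: image sensor -> IPU -> outputs.
# Names are hypothetical; this only illustrates the data flow.

class ImageSensor:
    def capture(self):
        return [0] * 64  # placeholder for raw optical data

class IPU:
    """Routes detections in a frame to registered output handlers."""
    def __init__(self):
        self.handlers = {}

    def on_detect(self, label, handler):
        self.handlers[label] = handler

    def process(self, frame, detected):
        # A real IPU would run detection on the frame itself; here the
        # detection result is passed in to keep the sketch short.
        if detected in self.handlers:
            self.handlers[detected]()

sensor, ipu = ImageSensor(), IPU()
actions = []
ipu.on_detect("dropped_keys", lambda: actions.append("pick up keys"))
ipu.process(sensor.capture(), detected="dropped_keys")
print(actions)  # ['pick up keys']
```

The point of the sketch is the decoupling: the sensor only produces data, the IPU only maps detections to actions, and the output control unit (here, the handler) is the only part that knows about the machine's body.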
This is just one example of how a VPS can be produced for a robotics or automation system. It can be produced for any type of robotics system, to perform any function you can see, in any form you can imagine. You could literally build a three-legged VPS that throws mud at grizzly bears, but only when a unicorn is in view, using a tennis racket attached to a pogo stick; or a drone that monitors traffic, reports incidents, communicates with stranded motorists, and lands when it notices traffic dying down.
Given the range of systems the VPS can be produced for, the VPS is available to anyone with an idea for a robot, as a patent portfolio for licensed manufacture. Should you have an idea for a robot or automation system, you can license the VPS patent portfolio and produce that robot for the market.
Please call inventor Kyle Stuart in Australia on +61 497 551 391 should you wish to speak to someone.

The post Visual Processing System appeared first on Roboticmagazine.

Posted in Human Robots

#429385 Robots Learning To Pick Things Up As ...

Robots Learning To Pick Things Up As Babies Do
Carnegie Mellon Unleashing Lots of Robots to Push, Poke, Grab Objects
Babies learn about their world by pushing and poking objects, putting them in their mouths and throwing them. Carnegie Mellon University scientists are taking a similar approach to teach robots how to recognize and grasp objects around them.
Manipulation remains a major challenge for robots and has become a bottleneck for many applications. But researchers at CMU’s Robotics Institute have shown that by allowing robots to spend hundreds of hours poking, grabbing and otherwise physically interacting with a variety of objects, those robots can teach themselves how to pick up objects.
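The trial-and-error data collection described above can be sketched as a simple loop: the robot samples a grasp, labels the attempt by whether the object was actually lifted, and accumulates self-supervised training examples. All names here are hypothetical; the CMU work uses real robot trials and learned visual models, not this toy simulation.

```python
# Sketch of self-supervised grasp data collection: sample a grasp,
# execute it, and record the outcome as the training label.
import random

def try_grasp(angle_deg):
    """Stand-in for executing a grasp and reading a success signal
    (e.g., gripper force after lifting). Purely simulated here."""
    return random.random() < 0.3  # pretend ~30% of random grasps succeed

def collect_grasp_dataset(num_trials, seed=0):
    random.seed(seed)
    dataset = []
    for _ in range(num_trials):
        angle = random.uniform(0.0, 180.0)         # sampled grasp parameter
        dataset.append((angle, try_grasp(angle)))  # self-supervised label
    return dataset

data = collect_grasp_dataset(1000)
successes = sum(1 for _, ok in data if ok)
print(f"{successes} successful grasps out of {len(data)} trials")
```

The appeal of this setup is that labels come for free from the robot's own sensors, so the dataset grows with robot-hours rather than human annotation effort, which is exactly why scaling up the number of robots helps.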
In their latest findings, presented last fall at the European Conference on Computer Vision, they showed that robots gained a deeper visual understanding of objects when they were able to manipulate them.
The researchers, led by Abhinav Gupta, assistant professor of robotics, are now scaling up this approach, with help from a three-year, $1.5 million “focused research award” from Google.
“We will use dozens of different robots, including one- and two-armed robots and even drones, to learn about the world and actions that can be performed in the world,” Gupta said. “The cost of robots has come down significantly in recent years, enabling us to unleash lots of robots to collect an unprecedented amount of data on physical interactions.”
Gupta said the shortcomings of previous approaches to robot manipulation were apparent during the Defense Advanced Research Projects Agency’s Robotics Challenge in 2015. Some of the world’s most advanced robots, designed to respond to natural or manmade emergencies, had difficulty with tasks such as opening doors or unplugging and re-plugging an electrical cable.
“Our robots still cannot understand what they see and their action and manipulation capabilities pale in comparison to those of a two-year-old,” Gupta said.
For decades, visual perception and robotic control have been studied separately. Visual perception developed with little consideration of physical interaction, and most manipulation and planning frameworks can’t cope with perception failures. Gupta predicts that allowing the robot to explore perception and action simultaneously, like a baby, can help overcome these failures.
“Psychological studies have shown that if people can’t affect what they see, their visual understanding of that scene is limited,” said Lerrel Pinto, a Ph.D. student in robotics in Gupta’s research group. “Interaction with the real world exposes a lot of visual dynamics.”
Robots are slow learners, however, requiring hundreds of hours of interaction to learn how to pick up objects. And because robots have previously been expensive and often unreliable, researchers relying on this data-driven approach have long suffered from “data starvation.”
Scaling up the learning process will help address this data shortage. Pinto said much of the work by the CMU group has been done using a two-armed Baxter robot with a simple, two-fingered manipulator. Using more and different robots, including those with more sophisticated hands, will enrich manipulation databases.
Meanwhile, the success of this research approach has inspired other groups, in academia and at Google with its own array of robots, to adopt it and help expand databases even further.
“If you can get the data faster, you can try a lot more things — different software frameworks, different algorithms,” Pinto said. And once one robot learns something, it can be shared with all robots.
In addition to Gupta and Pinto, the research team for the Google-funded project includes Martial Hebert, director of the Robotics Institute; Deva Ramanan, associate professor of robotics; and Ruslan Salakhutdinov, associate professor of machine learning and director of artificial intelligence research at Apple. The Office of Naval Research and the National Science Foundation also sponsor this research.
About Carnegie Mellon University: Carnegie Mellon (www.cmu.edu) is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the arts. More than 13,000 students in the university’s seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation.
5000 Forbes Ave.
Pittsburgh, PA 15213
Fax: 412-268-6929
Contact: Byron Spice
The post Robots Learning To Pick Things Up As Babies Do appeared first on Roboticmagazine.

Posted in Human Robots

#429382 Portable and Wearable Obstacle Detection


INSPEX to Combine Knowhow of Nine European Organizations to Create
Portable and Wearable Spatial-Exploration Systems

GRENOBLE, France – Feb. 2, 2017 – Leti, a technology research institute of CEA Tech, today announced a European project to develop a portable and wearable, multisensor and low-power spatial-exploration and obstacle-detection system for all conditions of weather and visibility.

The INSPEX system will adapt obstacle-detection capabilities common in autonomous cars for portable and wearable applications, including guidance for the visually impaired and blind, robotics, drones and smart manufacturing. It will be used for real-time 3D detection, location and warning of obstacles under all environmental conditions, including smoke, dust, fog, heavy rain and snow, and darkness, in indoor and outdoor environments with unknown stationary and mobile obstacles.
Image Credit: Leti. Applying the expertise and technologies of the nine partners in the three-year project, the system will be based on state-of-the-art range sensors such as LiDAR, UWB radar and MEMS ultrasound.

Coordinated by Leti, INSPEX will miniaturize and reduce the power consumption of these sensors to ease their integration in the new system. They will then be co-integrated with an inertial measurement unit (IMU), environmental sensing, wireless communications, signal-and-data processing, power-efficient data fusion and user interface, all in a miniature, low-power system designed to operate within wider smart and Internet of Things environments.
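One standard way to combine range readings from heterogeneous sensors, as the data-fusion stage above must, is inverse-variance weighting: trust each sensor in proportion to its precision. INSPEX's actual power-efficient fusion algorithm is not described in the release, so the following is only an illustrative sketch with made-up numbers.

```python
# Inverse-variance fusion of range readings from several sensors.
# The fused estimate weights each reading by 1/variance, so the most
# precise sensor dominates, and the fused variance is always smaller
# than any single sensor's.

def fuse_ranges(readings):
    """readings: list of (range_m, variance) pairs from different sensors."""
    weights = [1.0 / var for _, var in readings]
    fused = sum(w * r for (r, _), w in zip(readings, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)
    return fused, fused_var

# Hypothetical readings: LiDAR (precise), UWB radar (noisier but works in
# fog), MEMS ultrasound (short range) all seeing the same obstacle.
readings = [(2.05, 0.01), (2.20, 0.09), (1.95, 0.04)]
dist, var = fuse_ranges(readings)
print(round(dist, 3), round(var, 4))
```

This also hints at why the multisensor design helps in adverse conditions: when fog degrades the LiDAR, its variance grows, its weight shrinks, and the radar and ultrasound readings automatically take over.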

The main project demonstrator will embed the INSPEX system in a white cane for the visually impaired and provide 3D spatial audio feedback on obstacle location.

“Sophisticated obstacle-detection systems such as those in autonomous vehicles are typically large and heavy, have high power consumption and require large computational capabilities,” said Suzanne Lesecq, project coordinator at Leti. “The INSPEX team will work together to miniaturize and adapt this technology for individual and personal applications, which will require even greater capability for all-conditions obstacle detection. The project is a strong example of European innovation to bring leading-edge technology to a broader segment of users.”

In addition to applications for the visually impaired, drones and robots, the INSPEX system application domains are expected to include:
Human mobility – First responders, disabled persons
Instrumentation – Distance-measuring tools
Smart homes and factories – Assembly machines, security surveillance systems

Joining Leti in the project are:
University of Manchester, UK
Cork Institute of Technology, Ireland
STMicroelectronics SRL, Italy
Swiss Center for Electronics and Microtechnology CSEM, Switzerland
Tyndall National Institute University College Cork, Ireland
University of Namur ASBL, Belgium
GoSense, France
SensL Technologies Ltd., Ireland
The INSPEX demonstrator will integrate the INSPEX mobile detection device into a white cane for the visually impaired. For this application, an augmented-reality audio interface will be integrated to provide spatial 3D sound feedback using extra-auricular earphones. This feedback will take into account head attitude, tracked by an attitude and heading reference system (AHRS) in the headset, to provide 3D spatial sound feedback of an obstacle’s real direction and range. The context-aware communications will integrate the user with wider smart environments such as smart traffic lights, navigation beacons and ID tags associated with IoT objects. The user’s mobile device will allow integration with, for example, mapping apps.
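The head-attitude compensation described above amounts to converting an obstacle's world bearing into a head-relative direction before rendering the 3D sound. A minimal sketch, simplified to yaw only (a real AHRS reports full 3D attitude) and using illustrative function names not taken from INSPEX:

```python
# Convert an obstacle's world-frame bearing into a head-relative angle
# for spatial audio rendering, using the headset AHRS yaw.

def head_relative_bearing(obstacle_bearing_deg, head_yaw_deg):
    """Angle of the obstacle relative to where the user is facing,
    normalized to (-180, 180]: positive means to the user's right."""
    rel = (obstacle_bearing_deg - head_yaw_deg) % 360.0
    if rel > 180.0:
        rel -= 360.0
    return rel

# Obstacle due east (90 deg); user facing north (0 deg) -> 90 deg right
print(head_relative_bearing(90.0, 0.0))    # 90.0
# User turns to face east -> obstacle now straight ahead
print(head_relative_bearing(90.0, 90.0))   # 0.0
# Obstacle behind and to the left
print(head_relative_bearing(350.0, 90.0))  # -100.0
```

Recomputing this every time the AHRS updates is what keeps the audio cue anchored to the obstacle's real direction as the user turns their head.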

About Leti
Leti, a technology research institute at CEA Tech, is a global leader in miniaturization technologies enabling smart, energy-efficient and secure solutions for industry. Founded in 1967, Leti pioneers micro- and nanotechnologies, tailoring differentiating applicative solutions for global companies, SMEs and startups. Leti tackles critical challenges in healthcare, energy and digital migration. From sensors to data processing and computing solutions, Leti’s multidisciplinary teams deliver solid expertise, leveraging world-class pre-industrialization facilities. With a staff of more than 1,900, a portfolio of 2,700 patents, 91,500 sq. ft. of cleanroom space and a clear IP policy, the institute is based in Grenoble, France, and has offices in Silicon Valley and Tokyo. Leti has launched 60 startups and is a member of the Carnot Institutes network. This year, the institute celebrates its 50th anniversary. Follow us on www.leti.fr/en and @CEA_Leti.
CEA Tech is the technology research branch of the French Alternative Energies and Atomic Energy Commission (CEA), a key player in innovative R&D, defense & security, nuclear energy, technological research for industry and fundamental science. In 2015, Thomson Reuters identified CEA as the most innovative research organization in the world.

Press Contact
Sarah-Lyle Dampoux
Mahoney Lyle
+33 6 74 93 23 47

The post Portable and Wearable Obstacle Detection appeared first on Roboticmagazine.

Posted in Human Robots

#429377 Robot to carry your things

PIAGGIO GROUP REINVENTS THE FUTURE OF TRANSPORTATION WITH INTRODUCTION OF GITA, THE FIRST OFFERING BY PIAGGIO FAST FORWARD
Piaggio Fast Forward to Host Launch Event in Boston
Boston, MA – January 30, 2017 – Piaggio Group, the largest European manufacturer of two-wheel motor vehicles and a leader in light mobility founded over 130 years ago, has launched Piaggio Fast Forward, a newly established company based in the U.S. formed to pioneer the future of light mobility by fundamentally rethinking the movement of people and goods.
Image Credit: Piaggio Group
“We have established Piaggio Fast Forward to reinvent the future of mobility, and this is best accomplished with PFF as a separate entity, but with the backing and experience of the Piaggio Group,” said Mr. Michele Colaninno, Chairman of the Board of PFF. “You can expect to find Piaggio Fast Forward blurring the lines between transportation, robotics, and urban environments.”
Piaggio Fast Forward is based in Boston, Massachusetts, and is led by CEO Jeffrey Schnapp, Chief Creative Officer Greg Lynn, Chief Operating Officer Sasha Hoffman and Chief Design Research Officer Dr. Beth Altringer. PFF’s Board of Advisors includes Nicholas Negroponte (Founder, MIT Media Lab), John Hoke (VP Global Design, Nike), Doug Brent (VP Technology Innovation, Trimble), and Jeff Linnell (former Director of Robotics, Google).
“The transportation and robotics industries tend to focus on optimizing tasks and displacing labor,” according to Jeffrey Schnapp, CEO of Piaggio Fast Forward. “We are developing products that augment and extend human capabilities, instead of simply seeking to replace them.”
Image Credit: Piaggio Group
Introducing Gita
Piaggio Fast Forward’s first offering is Gita (pronounced “jee-ta,” Italian for a “short trip”), an autonomous vehicle that extends a person’s cargo-carrying abilities. Gita is an intelligent vehicle with a communicative personality. It learns and navigates indoors and out with the oversight and decision-making of humans. Gita is able to follow a human operator or move autonomously in a mapped environment.

Gita is 26 inches tall, has a cargo carrying capacity of 40 lbs, and a maximum speed of 22 mph. Gita is designed to travel at human speeds with human agility. Gita has a zero turning radius and is designed to accompany people at speeds from a crawl, to a walk, to a jog, to riding a bike. Instead of deciding to use an automobile or truck to transport 40 pounds worth of packages, Piaggio Fast Forward wants to help people walk, run, pedal and skate through life with the assistance of a family of vehicles like Gita.
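The follow-the-operator behavior described above can be illustrated with a simple proportional controller: match the leader's pace while holding a set following distance, capped at the vehicle's 22 mph maximum. This is a hypothetical sketch, not Piaggio Fast Forward's actual control law; all parameters are invented.

```python
# Hypothetical proportional speed controller for a follow-behind vehicle:
# command = leader speed + gain * (gap error), clamped to [0, max speed].

MAX_SPEED_MPH = 22.0  # Gita's stated maximum speed

def follow_speed(distance_m, target_gap_m=1.5, leader_speed_mph=3.0, gain=2.0):
    """Speed command given the measured distance to the person ahead.
    Positive gap error means the vehicle is falling behind."""
    error = distance_m - target_gap_m
    speed = leader_speed_mph + gain * error
    return max(0.0, min(MAX_SPEED_MPH, speed))

print(follow_speed(1.5))   # at target gap: match the leader -> 3.0
print(follow_speed(3.0))   # fell behind: speed up -> 6.0
print(follow_speed(0.5))   # too close: slow down -> 1.0
```

Because the correction is proportional to the gap error, the same law tracks a leader at a crawl or at cycling pace; the clamp is what keeps the command within the vehicle's physical limits.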
Image Credit: Piaggio Group
“Think about how much more freely you would be able to move from one point to another if lugging cumbersome items was removed from the equation,” added Schnapp. “Gita frees up the human hand to focus on complex and creative everyday tasks by taking over mundane transportation chores. You can also send your Gita off on missions while you are busy doing something more pressing.”
PFF will be deploying Gita in a variety of B2B pilot programs in the near term, with an eye toward future consumer applications.
Press Reception and Demonstration
Piaggio Fast Forward will privately debut their product on Thursday, February 2nd to an exclusive group of partners and select members of the media.
When: Thursday, February 2, 2017 from 5:45 – 9:00 p.m.
Where: Cambridge, MA
Whom: Executives from PFF will be available for interviews, as will Piaggio Fast Forward Chairman Michele Colaninno and Piaggio Group Chairman and CEO Roberto Colaninno
RSVP: Reporters may request an invite at press@piaggiofastforward.com
About Piaggio Fast Forward
PFF was founded in 2015 by the Piaggio Group to pioneer the intelligent movement of people and goods at just the right scale: larger than aerial drones but smaller than self-driving cars and trucks. The company’s mission is to help people to move better, further, faster, and more enjoyably. We build robots and lightweight transportation solutions that travel behind and beside people on the move. In the present era of machine intelligence, autonomy, and ubiquitous networks, we seek to promote more vibrant cities filled with pedestrians, cyclists, and skaters whose mobility is enhanced by new varieties of smart vehicles. PFF is based in Boston, Massachusetts. For more information, please visit www.piaggiofastforward.com or follow the company @P_F_F
About Piaggio Group
The Piaggio Group is the largest European manufacturer of two-wheel motor vehicles and one of the world leaders in its sector. The Group is also a major international player on the commercial vehicle market. Established in 1884 by Rinaldo Piaggio, since 2003 the Piaggio Group has been controlled by Immsi S.p.A. (IMS.MI), an industrial holding listed on the Italian stock exchange and headed by Chairman Roberto Colaninno. Immsi’s Chief Executive Officer and MD is Michele Colaninno. Roberto Colaninno is the Chairman and Chief Executive Officer of the Piaggio Group, and Matteo Colaninno is Deputy Chairman. Piaggio (PIA.MI) has been listed on the Italian stock exchange since 2006. The Piaggio Group product range includes scooters, motorcycles and mopeds from 50 to 1,400 cc marketed under the Piaggio, Vespa, Gilera, Aprilia, Moto Guzzi, Derbi and Scarabeo brands. The Group also operates in the three- and four-wheel light transport sector with commercial vehicles.

Tim Smith
Element Public Relations
The post Robot to carry your things appeared first on Roboticmagazine.

Posted in Human Robots

#429373 Video Friday: A Humanoid in the Kitchen, ...

Your weekly selection of awesome robot videos.

Posted in Human Robots