Tag Archives: sensor

#431987 OptoForce Industrial Robot Sensors

OptoForce Sensors Providing Industrial Robots with a “Sense of Touch” to Advance Manufacturing Automation

Global efforts to expand the capabilities of industrial robots are on the rise, as the demand from manufacturing companies to strengthen their operations and improve performance grows.

Hungary-based OptoForce, with a North American office in Charlotte, North Carolina, is one company that continues to support organizations with new robotic capabilities, as evidenced by its several new applications released in 2017.

The company, a leading robotics technology provider of multi-axis force and torque sensors, delivers 6-degree-of-freedom force and torque measurement for industrial automation and provides sensors for most currently used industrial robots.

It recently developed and brought to market three new applications for KUKA industrial robots.

The new applications are hand guiding, presence detection, and center pointing, and they will be used by both end users and systems integrators. Each application is summarized below, along with what it provides for KUKA robots and a video demonstration of how it operates.

Photo By: www.optoforce.com

Hand Guiding: With OptoForce’s Hand Guiding application, KUKA robots can be moved easily and smoothly by hand in an assigned direction and along a selected route. This video shows specifically how to program the robot for hand guiding.

Presence Detection: This application allows KUKA robots to detect the presence of a specific object and to find the object even if it has moved. Visit here to learn more about presence detection.

Center Pointing: With this application, the OptoForce sensor helps the KUKA robot find the center point of an object by giving the robot a sense of touch. This solution also works with glossy metal objects whose position a vision system would not be able to determine. This video shows in detail how the center pointing application works.
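OptoForce demonstrates these applications in videos rather than published code, but the touch-based center pointing idea can be illustrated with a minimal, self-contained sketch: probe the part from opposite sides until a contact-force threshold is crossed, then take the midpoint of the two contact positions. Everything below (the toy force model, step size, and threshold values) is a hypothetical illustration, not OptoForce’s or KUKA’s actual software.

```python
# Hypothetical illustration of touch-based center finding along one axis.
# A "part" occupies [part_min, part_max]; pressing on it produces contact force.

CONTACT_FORCE_N = 5.0   # force threshold that counts as "touching" the part
STEP_M = 0.0005         # probe step size in meters


def contact_force(tool_pos, part_min, part_max):
    """Toy force model: zero in free space, large once the tool presses on the part."""
    return 50.0 if part_min <= tool_pos <= part_max else 0.0


def probe_until_contact(start, direction, part_min, part_max):
    """Step from `start` along `direction` (+1 or -1) until the force threshold is exceeded."""
    pos = start
    while contact_force(pos, part_min, part_max) < CONTACT_FORCE_N:
        pos += direction * STEP_M
    return pos


def find_center(approach_left, approach_right, part_min, part_max):
    """Touch the part from both sides and return the midpoint of the two contact points."""
    left_edge = probe_until_contact(approach_left, +1, part_min, part_max)
    right_edge = probe_until_contact(approach_right, -1, part_min, part_max)
    return 0.5 * (left_edge + right_edge)


if __name__ == "__main__":
    # A part spanning 0.10 m to 0.14 m: probing from both sides recovers its center.
    print(find_center(0.0, 0.25, part_min=0.10, part_max=0.14))  # ~0.12
```

The same two-sided probing, repeated along a second axis, gives a 2D center point; a real implementation would drive the robot in force-controlled moves and take the threshold check from the 6-axis sensor readings.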

The company’s CEO explained how these applications help KUKA robots and industrial automation.

Photo By: www.optoforce.com

“OptoForce’s new applications for KUKA robots pave the way for substantial improvements in industrial automation for both end users and systems integrators,” said Ákos Dömötör, CEO of OptoForce. “Our 6-axis force/torque sensors are combined with highly functional hardware and a comprehensive software package, which include the pre-programmed industrial applications. Essentially, we’re adding a ‘sense of touch’ to KUKA robot arms, enabling these robots to have abilities similar to a human hand, and opening up numerous new capabilities in industrial automation.”

Along with these new applications recently released for KUKA robots, OptoForce sensors are also being used by various companies on numerous industrial robots and manufacturing automation projects around the world. Examples of other uses include: path recording, polishing plastic and metal, box insertion, placing pins in holes, stacking/destacking, palletizing, and metal part sanding.

Specifically, some of the projects currently underway by companies include: plastic parting line removal; obstacle detection for a major car manufacturing company; and a center point insertion application for a car part supplier, where the task of the robot is to insert a mirror, completely centered, onto a side mirror housing.

For more information, visit www.optoforce.com.

This post was provided by: OptoForce

The post OptoForce Industrial Robot Sensors appeared first on Roboticmagazine.

Posted in Human Robots

#431958 The Next Generation of Cameras Might See ...

You might be really pleased with the camera technology in your latest smartphone, which can recognize your face and take slow-mo video in ultra-high definition. But these technological feats are just the start of a larger revolution that is underway.

The latest camera research is shifting away from increasing the number of megapixels towards fusing camera data with computational processing. By that, we don’t mean the Photoshop style of processing where effects and filters are added to a picture, but rather a radical new approach where the incoming data may not actually look like an image at all. It only becomes an image after a series of computational steps that often involve complex mathematics and modeling of how light travels through the scene or the camera.

This additional layer of computational processing magically frees us from the chains of conventional imaging techniques. One day we may not even need cameras in the conventional sense anymore. Instead we will use light detectors that only a few years ago we would never have considered of any use for imaging. And they will be able to do incredible things, like seeing through fog, looking inside the human body, and even seeing behind walls.

Single Pixel Cameras
One extreme example is the single pixel camera, which relies on a beautifully simple principle. Typical cameras use lots of pixels (tiny sensor elements) to capture a scene that is likely illuminated by a single light source. But you can also do things the other way around, capturing information from many light sources with a single pixel.

To do this you need a controlled light source, for example a simple data projector that illuminates the scene one spot at a time or with a series of different patterns. For each illumination spot or pattern, you then measure the amount of light reflected and add everything together to create the final image.
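As a rough illustration of that reconstruction step (a minimal, noise-free sketch, not any particular research group’s code), the NumPy example below projects random binary patterns onto a toy scene, records a single intensity value per pattern, and correlates the measurements with the patterns to recover an estimate of the image:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth scene the single-pixel "camera" will image (a 16x16 test patch).
h, w = 16, 16
scene = np.zeros((h, w))
scene[4:12, 6:10] = 1.0

# Random binary illumination patterns, one per measurement.
n_patterns = 4000
patterns = rng.integers(0, 2, size=(n_patterns, h, w)).astype(float)

# The single pixel records one number per pattern: the total reflected light.
measurements = np.einsum("phw,hw->p", patterns, scene)

# Correlation-based reconstruction: weight each (mean-subtracted) pattern by its
# (mean-subtracted) measurement and average. More patterns give a cleaner estimate.
estimate = np.einsum(
    "p,phw->hw", measurements - measurements.mean(), patterns - patterns.mean(axis=0)
) / n_patterns

print(np.round(estimate / estimate.max(), 1))  # bright block roughly where the scene is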

Clearly the disadvantage of taking a photo in this way is that you have to send out lots of illumination spots or patterns in order to produce one image (which would take just one snapshot with a regular camera). But this form of imaging would allow you to create otherwise impossible cameras, for example ones that work at wavelengths of light beyond the visible spectrum, where good detectors cannot be made into cameras.

These cameras could be used to take photos through fog or thick falling snow. Or they could mimic the eyes of some animals and automatically increase an image’s resolution (the amount of detail it captures) depending on what’s in the scene.

It is even possible to capture images from light particles that have never interacted with the object we want to photograph. This would take advantage of the idea of “quantum entanglement,” that two particles can be connected in a way that means whatever happens to one happens to the other, even if they are a long distance apart. This has intriguing possibilities for looking at objects whose properties might change when lit up, such as the eye. For example, does a retina look the same when in darkness as in light?

Multi-Sensor Imaging
Single-pixel imaging is just one of the simplest innovations in upcoming camera technology and relies, on the face of it, on the traditional concept of what forms a picture. But we are currently witnessing a surge of interest in systems that use lots of information of which traditional techniques collect only a small part.

This is where we could use multi-sensor approaches that involve many different detectors pointed at the same scene. The Hubble telescope was a pioneering example of this, producing pictures made from combinations of many different images taken at different wavelengths. But now you can buy commercial versions of this kind of technology, such as the Lytro camera, which collects information about light intensity and direction on the same sensor to produce images that can be refocused after they have been taken.

The next generation camera will probably look something like the Light L16 camera, which features ground-breaking technology based on more than ten different sensors. Their data are combined using a computer to provide a 50 MB, re-focusable and re-zoomable, professional-quality image. The camera itself looks like a very exciting Picasso interpretation of a crazy cell-phone camera.

Yet these are just the first steps towards a new generation of cameras that will change the way in which we think of and take images. Researchers are also working hard on the problem of seeing through fog, seeing behind walls, and even imaging deep inside the human body and brain.

All of these techniques rely on combining images with models that explain how light travels through or around different substances.

Another interesting approach that is gaining ground relies on artificial intelligence to “learn” to recognize objects from the data. These techniques are inspired by learning processes in the human brain and are likely to play a major role in future imaging systems.

Single photon and quantum imaging technologies are also maturing to the point that they can take pictures with incredibly low light levels and videos at incredibly fast speeds reaching a trillion frames per second. This is enough even to capture images of light itself traveling across a scene.

Some of these applications might require a little time to fully develop, but we now know that the underlying physics should allow us to solve these and other problems through a clever combination of new technology and computational ingenuity.

This article was originally published on The Conversation. Read the original article.

Image Credit: Sylvia Adams / Shutterstock.com

Posted in Human Robots

#431916 3-D-printed underwater vortex sensor ...

A new study has shown that a fully 3D-printed whisker sensor made of polyurethane, graphene, and copper tape can detect underwater vortexes with very high sensitivity. The simple design, mechanical reliability, and low-cost fabrication method contribute to the important commercial implications of this versatile new sensor, as described in an article in Soft Robotics.

Posted in Human Robots

#431790 FT 300 force torque sensor

Robotiq Updates FT 300 Sensitivity For High Precision Tasks With Universal Robots
Force Torque Sensor feeds data to Universal Robots Force Mode
Quebec City, Canada, November 13, 2017 – Robotiq launches a 10 times more sensitive version of its FT 300 Force Torque Sensor. With Plug + Play integration on all Universal Robots, the FT 300 performs highly repeatable precision force control tasks such as finishing, product testing, assembly and precise part insertion.
This force torque sensor comes with updated free URCap software able to feed data to the Universal Robots Force Mode. “This new feature allows the user to perform precise force insertion assembly and many finishing applications where force control with high sensitivity is required,” explains Robotiq CTO Jean-Philippe Jobin*.
The URCap also includes a new calibration routine. “We’ve integrated a step-by-step procedure that guides the user through the process, which takes less than 2 minutes” adds Jobin. “A new dashboard also provides real-time force and moment readings on all 6 axes. Moreover, pre-built programming functions are now embedded in the URCap for intuitive programming.”
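The release itself contains no code, but the general idea behind a calibration (tare) step for a 6-axis force/torque signal can be sketched generically; `read_wrench` below is a hypothetical placeholder for whatever driver supplies the raw data, not Robotiq’s URCap API:

```python
import numpy as np


def calibrate_bias(read_wrench, n_samples=200):
    """Average a batch of no-load readings to estimate the sensor bias (tare).

    `read_wrench` is any callable returning a 6-vector [Fx, Fy, Fz, Mx, My, Mz].
    """
    samples = np.array([read_wrench() for _ in range(n_samples)])
    return samples.mean(axis=0)


def read_calibrated(read_wrench, bias):
    """Return the bias-corrected wrench to feed into a force-control loop."""
    return np.asarray(read_wrench()) - bias


if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def raw_wrench():
        # Simulated raw sensor: a constant offset plus measurement noise.
        return np.array([1.2, -0.4, 9.8, 0.01, 0.02, 0.0]) + rng.normal(0, 0.05, 6)

    bias = calibrate_bias(raw_wrench)
    print(np.round(read_calibrated(raw_wrench, bias), 2))  # ~zeros under no load
```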
See some of the FT 300’s new capabilities in the following demo videos:
#1 How to calibrate with the FT 300 URCap Dashboard
#2 Linear search demo
#3 Path recording demo
Visit the FT 300 webpage or get a quote here
Get the FT 300 specs here
Get more info in the FAQ
Get free Skills to accelerate robot programming of force control tasks.
Get free robot cell deployment resources on leanrobotics.org
* Available with Universal Robots CB3.1 controller only
About Robotiq
Robotiq’s Lean Robotics methodology and products enable manufacturers to deploy productive robot cells across their factory. They leverage the Lean Robotics methodology for faster time to production and increased productivity from their robots. Production engineers standardize on Robotiq’s Plug + Play components for their ease of programming, built-in integration, and adaptability to many processes. They rely on the Flow software suite to accelerate robot projects and optimize robot performance once in production.
Robotiq is the humans behind the robots: an employee-owned business with a passionate team and an international partner network.
Media contact
David Maltais, Communications and Public Relations Coordinator
d.maltais@robotiq.com
1-418-929-2513
Press Release Provided by: Robotiq.Com
The post FT 300 force torque sensor appeared first on Roboticmagazine.

Posted in Human Robots

#431689 Robotic Materials Will Distribute ...

The classical view of a robot as a mechanical body with a central “brain” that controls its behavior could soon be on its way out. The authors of a recent article in Science Robotics argue that future robots will have intelligence distributed throughout their bodies.
The concept, and the emerging discipline behind it, are variously referred to as “material robotics” or “robotic materials” and are essentially a synthesis of ideas from robotics and materials science. Proponents say advances in both fields are making it possible to create composite materials capable of combining sensing, actuation, computation, and communication and operating independently of a central processing unit.
Much of the inspiration for the field comes from nature, with practitioners pointing to the adaptive camouflage of the cuttlefish’s skin, the ability of bird wings to morph in response to different maneuvers, or the banyan tree’s ability to grow roots above ground to support new branches.
Adaptive camouflage and morphing wings have clear applications in the defense and aerospace sector, but the authors say similar principles could be used to create everything from smart tires able to calculate the traction needed for specific surfaces to grippers that can tailor their force to the kind of object they are grasping.
“Material robotics represents an acknowledgment that materials can absorb some of the challenges of acting and reacting to an uncertain world,” the authors write. “Embedding distributed sensors and actuators directly into the material of the robot’s body engages computational capabilities and offloads the rigid information and computational requirements from the central processing system.”
The idea of making materials more adaptive is not new, and there are already a host of “smart materials” that can respond to stimuli like heat, mechanical stress, or magnetic fields by doing things like producing a voltage or changing shape. These properties can be carefully tuned to create materials capable of a wide variety of functions such as movement, self-repair, or sensing.
The authors say synthesizing these kinds of smart materials, alongside other advanced materials like biocompatible conductors or biodegradable elastomers, is foundational to material robotics. But the approach also involves integration of many different capabilities in the same material, careful mechanical design to make the most of mechanical capabilities, and closing the loop between sensing and control within the materials themselves.
While there are stand-alone applications for such materials in the near term, like smart fabrics or robotic grippers, the long-term promise of the field is to distribute decision-making in future advanced robots. As they are imbued with ever more senses and capabilities, these machines will be required to shuttle huge amounts of control and feedback data to and fro, placing a strain on both their communication and computation abilities.
Materials that can process sensor data at the source and either autonomously react to it or filter the most relevant information to be passed on to the central processing unit could significantly ease this bottleneck. In a press release related to an earlier study, Nikolaus Correll, an assistant professor of computer science at the University of Colorado Boulder who is also an author of the current paper, pointed out this is a tactic used by the human body.
“The human sensory system automatically filters out things like the feeling of clothing rubbing on the skin,” he said. “An artificial skin with possibly thousands of sensors could do the same thing, and only report to a central ‘brain’ if it touches something new.”
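As a rough sketch of that filtering idea (an illustration only, not code from the Science Robotics paper), a skin-like sensor array could keep a slowly adapting per-sensor baseline and report upstream only the readings that deviate from it:

```python
import numpy as np


class FilteringSkinPatch:
    """Toy model of a sensorized skin patch that reports only novel contacts.

    Each sensor keeps a running baseline (e.g., the steady pressure of clothing);
    only readings that deviate from that baseline by more than `threshold`
    are passed on to the central controller.
    """

    def __init__(self, n_sensors, threshold=0.5, smoothing=0.99):
        self.baseline = np.zeros(n_sensors)
        self.threshold = threshold
        self.smoothing = smoothing

    def update(self, readings):
        readings = np.asarray(readings, dtype=float)
        novelty = np.abs(readings - self.baseline)
        events = np.flatnonzero(novelty > self.threshold)
        # Slowly fold current readings into the baseline so that constant
        # stimuli (like fabric resting on the skin) stop being reported.
        self.baseline = self.smoothing * self.baseline + (1 - self.smoothing) * readings
        return events  # indices of sensors worth reporting to the central "brain"


patch = FilteringSkinPatch(n_sensors=4)
print(patch.update([0.0, 0.0, 2.0, 0.0]))  # -> [2]: a new contact on sensor 2
```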
There are still considerable challenges to realizing this vision, though, the authors say, noting that so far the young field has only produced proof of concepts. The biggest challenge remains manufacturing robotic materials in a way that combines all these capabilities in a small enough package at an affordable cost.
Luckily, the authors note, the field can draw on convergent advances in both materials science, such as the development of new bulk materials with inherent multifunctionality, and robotics, such as the ever tighter integration of components.
And they predict that doing away with the prevailing dichotomy of “brain versus body” could lay the foundations for the emergence of “robots with brains in their bodies—the foundation of inexpensive and ubiquitous robots that will step into the real world.”
Image Credit: Anatomy Insider / Shutterstock.com

Posted in Human Robots