
#431987 OptoForce Industrial Robot Sensors

OptoForce Sensors Providing Industrial Robots with a “Sense of Touch” to Advance Manufacturing Automation

Global efforts to expand the capabilities of industrial robots are on the rise, as the demand from manufacturing companies to strengthen their operations and improve performance grows.

Hungary-based OptoForce, with a North American office in Charlotte, North Carolina, is one company that continues to support organizations with new robotic capabilities, as evidenced by its several new applications released in 2017.

The company, a leading robotics technology provider of multi-axis force and torque sensors, delivers six-degree-of-freedom force and torque measurement for industrial automation, and provides sensors for most currently used industrial robots.

It recently developed and brought to market three new applications for KUKA industrial robots.

The new applications are hand guiding, presence detection, and center pointing, and will be used by both end users and systems integrators. Each application is summarized below, along with what it provides for KUKA robots and a video demonstration showing how it operates.


Hand Guiding: With OptoForce’s Hand Guiding application, KUKA robots can easily and smoothly move in an assigned direction and selected route. This video shows specifically how to program the robot for hand guiding.

Presence Detection: This application allows KUKA robots to detect the presence of a specific object and to find the object even if it has moved. Visit here to learn more about presence detection.

Center Pointing: With this application, the OptoForce sensor helps the KUKA robot find the center point of an object by providing the robot with a sense of touch. This solution also works with glossy metal objects where a vision system would not be able to define its position. This video shows in detail how the center pointing application works.

The company’s CEO explained how these applications help KUKA robots and industrial automation.

“OptoForce’s new applications for KUKA robots pave the way for substantial improvements in industrial automation for both end users and systems integrators,” said Ákos Dömötör, CEO of OptoForce. “Our 6-axis force/torque sensors are combined with highly functional hardware and a comprehensive software package, which include the pre-programmed industrial applications. Essentially, we’re adding a ‘sense of touch’ to KUKA robot arms, enabling these robots to have abilities similar to a human hand, and opening up numerous new capabilities in industrial automation.”

Along with these new applications recently released for KUKA robots, OptoForce sensors are also being used by various companies on numerous industrial robots and manufacturing automation projects around the world. Examples of other uses include: path recording, polishing plastic and metal, box insertion, placing pins in holes, stacking/destacking, palletizing, and metal part sanding.

Specifically, some of the projects currently underway by companies include: plastic parting line removal; obstacle detection for a major car manufacturing company; and a center point insertion application for a car part supplier, where the robot’s task is to insert a mirror, completely centered, onto a side mirror housing.

For more information, visit www.optoforce.com.

This post was provided by: OptoForce

The post OptoForce Industrial Robot Sensors appeared first on Roboticmagazine.

Posted in Human Robots

#431839 The Hidden Human Workforce Powering ...

The tech industry touts its ability to automate tasks and remove slow and expensive humans from the equation. But in the background, a lot of the legwork of training machine learning systems, solving problems software can’t, and cleaning up its mistakes is still done by people.
This was highlighted recently when Expensify, which promises to automatically scan photos of receipts to extract data for expense reports, was criticized for sending customers’ personally identifiable receipts to workers on Amazon’s Mechanical Turk (MTurk) crowdsourcing platform.
The company uses text analysis software to read the receipts, but if the automated system falls down then the images are passed to a human for review. While entrusting this job to random workers on MTurk was maybe not so wise—and the company quickly stopped after the furor—the incident brought to light that this kind of human safety net behind AI-powered services is actually very common.
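The pattern described here, automated extraction with a human fallback when confidence is low, can be sketched roughly as follows. The function names, the confidence threshold, and the returned values are illustrative assumptions, not Expensify's actual implementation:

```python
from dataclasses import dataclass
from typing import Optional

# Assumed cutoff; a real system would tune this empirically.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class ExtractionResult:
    total: Optional[float]   # extracted receipt total, if any
    confidence: float        # 0.0-1.0: how sure the text-analysis step is

def ocr_extract(receipt_image: bytes) -> ExtractionResult:
    """Placeholder for the automated text-analysis step."""
    # A real pipeline would run OCR and field parsing here.
    return ExtractionResult(total=42.50, confidence=0.62)

def human_review(receipt_image: bytes) -> ExtractionResult:
    """Placeholder for routing the image to a human worker queue."""
    return ExtractionResult(total=42.50, confidence=1.0)

def process_receipt(receipt_image: bytes) -> ExtractionResult:
    result = ocr_extract(receipt_image)
    if result.confidence < CONFIDENCE_THRESHOLD:
        # Automation "falls down": escalate to a person instead of guessing.
        result = human_review(receipt_image)
    return result
```

The key design point is the threshold check: the service only pays for human labor on the fraction of inputs the model cannot handle, which is why this safety net stays invisible to most users.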
As Wired notes, similar services like Ibotta and Receipt Hog that collect receipt information for marketing purposes also use crowdsourced workers. In a similar vein, while most users might assume their Facebook newsfeed is governed by faceless algorithms, the company has been ramping up the number of human moderators it employs to catch objectionable content that slips through the net, as has YouTube. Twitter also has thousands of human overseers.
Humans aren’t always witting contributors either. The old text-based reCAPTCHA problems Google used to distinguish humans from machines were actually simultaneously helping the company digitize books by getting humans to interpret hard-to-read text.
“Every product that uses AI also uses people,” Jeffrey Bigham, a crowdsourcing expert at Carnegie Mellon University, told Wired. “I wouldn’t even say it’s a backstop so much as a core part of the process.”
Some companies are not shy about their use of crowdsourced workers. Startup Eloquent Labs wants to insert them between customer service chatbots and the human agents who step in when the machines fail. Often the AI isn’t certain what a particular query means, and an MTurk worker can step in and quickly classify it faster and more cheaply than a service agent.
Fashion retailer Gilt provides “pre-emptive shipping,” which uses data analytics to predict what people will buy to get products to them faster. The company uses MTurk workers to provide subjective critiques of clothing that feed into their models.
MTurk isn’t the only player. Companies like CloudFactory and CrowdFlower provide crowdsourced human manpower tailored to particular niches, and some companies prefer to maintain their own communities of workers. Unbabel uses an army of 50,000 humans to check and edit the translations its artificial intelligence system produces for customers.
Most of the time these human workers aren’t just filling in the gaps, they’re also helping to train the machine learning component of these companies’ services by providing new examples of how to solve problems. Other times humans aren’t used “in-the-loop” with AI systems, but to prepare data sets they can learn from by labeling images, text, or audio.
It’s even possible to use crowdsourced workers to carry out tasks typically tackled by machine learning, such as large-scale image analysis and forecasting.
Zooniverse gets citizen scientists to classify images of distant galaxies or videos of animals to help academics analyze large data sets too complex for computers. Almanis creates forecasts on everything from economics to politics with impressive accuracy by giving those who sign up to the website incentives for backing the correct answer to a question. Researchers have used MTurkers to power a chatbot, and there’s even a toolkit for building algorithms to control this human intelligence called TurKit.
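Turning many noisy human judgments into one usable label is the core aggregation step behind platforms like these. A minimal sketch of majority-vote aggregation follows; the redundant-assignment setup and the galaxy labels are illustrative assumptions (production systems like Zooniverse also weight workers by their track record):

```python
from collections import Counter

def majority_vote(labels: list) -> str:
    """Pick the label most workers agreed on; ties go to the label seen first."""
    return Counter(labels).most_common(1)[0][0]

# Each item was shown to several workers; their votes sometimes disagree.
worker_labels = {
    "galaxy_001.png": ["spiral", "spiral", "elliptical"],
    "galaxy_002.png": ["elliptical", "elliptical", "elliptical"],
}

# Collapse redundant judgments into one consensus label per item.
consensus = {item: majority_vote(votes) for item, votes in worker_labels.items()}
```

Redundancy is the design trade-off: asking three or five workers per item costs more but lets simple voting cancel out individual mistakes.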
So what does this prominent role for humans in AI services mean? Firstly, it suggests that many tools people assume are powered by AI may in fact be relying on humans. This has obvious privacy implications, as the Expensify story highlighted, but should also raise concerns about whether customers are really getting what they pay for.
One example of this is IBM’s Watson for Oncology, which is marketed as a data-driven AI system for providing cancer treatment recommendations. But an investigation by STAT highlighted that it’s actually largely driven by recommendations from a handful of (admittedly highly skilled) doctors at Memorial Sloan Kettering Cancer Center in New York.
Secondly, humans intervening in AI-run processes also suggests AI is still largely helpless without us, which is somewhat comforting to know among all the doomsday predictions of AI destroying jobs. At the same time, though, much of this crowdsourced work is monotonous, poorly paid, and isolating.
As machines trained by human workers get better at all kinds of tasks, this kind of piecemeal work filling in the increasingly small gaps in their capabilities may get more common. While tech companies often talk about AI augmenting human intelligence, for many it may actually end up being the other way around.
Image Credit: kentoh / Shutterstock.com

Posted in Human Robots

#431592 Reactive Content Will Get to Know You ...

The best storytellers react to their audience. They look for smiles, signs of awe, or boredom; they simultaneously and skillfully read both the story and their sitters. Kevin Brooks, a seasoned storyteller working for Motorola’s Human Interface Labs, explains, “As the storyteller begins, they must tune in to… the audience’s energy. Based on this energy, the storyteller will adjust their timing, their posture, their characterizations, and sometimes even the events of the story. There is a dialog between audience and storyteller.”
Shortly after I read the script to Melita, the latest virtual reality experience from Madrid-based immersive storytelling company Future Lighthouse, CEO Nicolas Alcalá explained to me that the piece is an example of “reactive content,” a concept he’s been working on since his days at Singularity University.

For the first time in history, we have access to technology that can merge the reactive and affective elements of oral storytelling with the affordances of digital media, weaving stunning visuals, rich soundtracks, and complex meta-narratives in a story arena that has the capability to know you more intimately than any conventional storyteller could.
It’s no exaggeration to say that the storytelling potential here is phenomenal.
In short, we can refer to content as reactive if it reads and reacts to users based on their body rhythms, emotions, preferences, and data points. Artificial intelligence is used to analyze users’ behavior or preferences to sculpt unique storylines and narratives, essentially allowing for a story that changes in real time based on who you are and how you feel.
The development of reactive content will allow those working in the industry to go one step further than simply translating the essence of oral storytelling into VR. Rather than having a narrative experience with a digital storyteller who can read you, reactive content has the potential to create an experience with a storyteller who knows you.
This means being able to subtly insert minor personal details that have a specific meaning to the viewer. When we talk to our friends we often use experiences we’ve shared in the past or knowledge of our audience to give our story as much resonance as possible. Targeting personal memories and aspects of our lives is a highly effective way to elicit emotions and aid in visualizing narratives. When you can do this with the addition of visuals, music, and characters—all lifted from someone’s past—you have the potential for overwhelmingly engaging and emotionally-charged content.
Future Lighthouse informs me that for now, reactive content will rely primarily on biometric feedback technology such as breathing, heartbeat, and eye tracking sensors. A simple example would be a story in which parts of the environment or soundscape change in sync with the user’s heartbeat and breathing, or characters who call you out for not paying attention.
The next step would be characters and situations that react to the user’s emotions, wherein algorithms analyze biometric information to make inferences about states of emotional arousal (“why are you so nervous?” etc.). Another example would be implementing the use of “arousal parameters,” where the audience can choose what level of “fear” they want from a VR horror story before algorithms modulate the experience using information from biometric feedback devices.
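As a rough illustration of how such an "arousal parameter" might work, a render loop could map the viewer's heart rate onto a scene intensity value, clamped to the fear level the viewer chose up front. Everything here, the function name, the 0-to-1 scale, and the bpm normalization, is a hypothetical sketch, not Future Lighthouse's actual algorithm:

```python
def fear_intensity(heart_rate_bpm: float,
                   resting_bpm: float = 65.0,
                   user_fear_cap: float = 0.7) -> float:
    """Map an elevated heart rate to a 0..1 scene intensity,
    clamped to the fear level the viewer opted into."""
    # Normalize: resting pulse maps to 0; ~60 bpm above resting maps to 1.
    arousal = (heart_rate_bpm - resting_bpm) / 60.0
    arousal = max(0.0, min(1.0, arousal))
    # Never exceed the "fear" setting the viewer chose before the story began.
    return min(arousal, user_fear_cap)

# Example frame: the viewer's pulse has climbed to 110 bpm,
# so the scene darkens up to (but not past) their chosen cap.
intensity = fear_intensity(110.0)
```

The clamp is the interesting part: biometrics drive the experience moment to moment, but the viewer's explicit preference sets a hard ceiling, keeping the adaptation passive without taking control away from them.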
The company’s long-term goal is to gather research on storytelling conventions and produce a catalogue of story “wireframes.” This entails distilling the basic formula to different genres so they can then be fleshed out with visuals, character traits, and soundtracks that are tailored for individual users based on their deep data, preferences, and biometric information.
The development of reactive content will go hand in hand with a renewed exploration of diverging, dynamic storylines, and multi-narratives, a concept that hasn’t had much impact in the movie world thus far. In theory, the idea of having a story that changes and mutates is captivating largely because of our love affair with serendipity and unpredictability, a cultural condition theorist Arthur Kroker refers to as the “hypertextual imagination.” This feeling of stepping into the unknown with the possibility of deviation from the habitual translates as a comforting reminder that our own lives can take exciting and unexpected turns at any moment.
The concept entered mainstream culture with the classic Choose Your Own Adventure book series, launched in the late 1970s, which had great success in its literary form. However, filmic takes on the theme have made somewhat less of an impression. DVDs like I’m Your Man (1998) and Switching (2003) both use scene selection tools to determine the direction of the storyline.
A more recent example comes from Kino Industries, who claim to have developed the technology to allow filmmakers to produce interactive films in which viewers can use smartphones to quickly vote on which direction the narrative takes at numerous decision points throughout the film.
The main problem with diverging narrative films has been the stop-start nature of the interactive element: when I’m immersed in a story I don’t want to have to pick up a controller or remote to select what’s going to happen next. Every time the audience is given the option to take a new path (“press this button,” “vote on X, Y, Z”), the narrative, and immersion within that narrative, is temporarily halted, and it takes the mind a while to get back into this state of immersion.
Reactive content has the potential to resolve these issues by enabling passive interactivity—that is, input and output without having to pause and actively make decisions or engage with the hardware. This will result in diverging, dynamic narratives that will unfold seamlessly while being dependent on and unique to the specific user and their emotions. Passive interactivity will also remove the game feel that can often be a symptom of interactive experiences and put a viewer somewhere in the middle: still firmly ensconced in an interactive dynamic narrative, but in a much subtler way.
While reading the Melita script I was particularly struck by a scene in which the characters start to engage with the user and there’s a synchronicity between the user’s heartbeat and objects in the virtual world. As the narrative unwinds and the words of Melita’s character get more profound, parts of the landscape, which seemed to be flashing and pulsating at random, come together and start to mimic the user’s heartbeat.
In 2013, Jane Aspell of Anglia Ruskin University (UK) and Lukas Heydrich of the Swiss Federal Institute of Technology showed that a user’s sense of presence and identification with a virtual avatar could be dramatically increased by syncing the on-screen character with the heartbeat of the user. The relationship between bio-digital synchronicity, immersion, and emotional engagement is something that will surely have revolutionary narrative and storytelling potential.
Image Credit: Tithi Luadthong / Shutterstock.com

Posted in Human Robots

#430640 RE2 Robotics Receives Air Force Funding ...

PITTSBURGH, PA – June 21, 2017 – RE2 Robotics announced today that the Company was selected by the Air Force to develop a drop-in robotic system to rapidly convert a variety of traditionally manned aircraft to robotically piloted, autonomous aircraft under the Small Business Innovation Research (SBIR) program. This robotic system, named “Common Aircraft Retrofit for Novel Autonomous Control” (CARNAC), will operate the aircraft similarly to a human pilot and will not require any modifications to the aircraft.
Automation and autonomy have broad value to the Department of Defense with the potential to enhance system performance of existing platforms, reduce costs, and enable new missions and capabilities, especially with reduced human exposure to dangerous or life-threatening situations. The CARNAC project leverages existing aviation assets and advances in vehicle automation technologies to develop a cutting-edge drop-in robotic flight system.
During the program, RE2 Robotics will demonstrate system architecture feasibility, humanoid-like robotic manipulation capabilities, vision-based flight-status recognition, and cognitive architecture-based decision making.
“Our team is excited to incorporate the Company’s robotic manipulation expertise with proven technologies in applique systems, vision processing algorithms, and decision making to create a customized application that will allow a wide variety of existing aircraft to be outfitted with a robotic pilot,” stated Jorgen Pedersen, president and CEO of RE2 Robotics. “By creating a drop-in robotic pilot, we have the ability to insert autonomy into and expand the capabilities of not only traditionally manned air vehicles, but ground and underwater vehicles as well. This application will open up a whole new market for our mobile robotic manipulator systems.”
###
About RE2 Robotics
RE2 Robotics develops mobile robotic technologies that enable robot users to remotely interact with their world from a safe distance, whether on the ground, in the air, or underwater. RE2 creates interoperable robotic manipulator arms with human-like performance, intuitive human-robot interfaces, and advanced autonomy software for mobile robotics. For more information, please visit www.resquared.com or call 412.681.6382.
Media Contact: RE2 Public Relations, pr@resquared.com, 412.681.6382.
The post RE2 Robotics Receives Air Force Funding to Develop Robotic Pilot appeared first on Roboticmagazine.

Posted in Human Robots