Tag Archives: Performance

#431790 FT 300 force torque sensor

Robotiq Updates FT 300 Sensitivity For High Precision Tasks With Universal Robots
Force Torque Sensor feeds data to Universal Robots Force Mode
Quebec City, Canada, November 13, 2017 – Robotiq launches a 10 times more sensitive version of its FT 300 Force Torque Sensor. With Plug + Play integration on all Universal Robots, the FT 300 performs highly repeatable precision force control tasks such as finishing, product testing, assembly and precise part insertion.
This force torque sensor comes with an updated free URCap software package that feeds data to the Universal Robots Force Mode. “This new feature allows the user to perform precise force insertion assembly and many finishing applications where force control with high sensitivity is required,” explains Robotiq CTO Jean-Philippe Jobin*.
The URCap also includes a new calibration routine. “We’ve integrated a step-by-step procedure that guides the user through the process, which takes less than 2 minutes,” adds Jobin. “A new dashboard also provides real-time force and moment readings on all 6 axes. Moreover, pre-built programming functions are now embedded in the URCap for intuitive programming.”
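For readers who want to picture what such a force-controlled task looks like in practice, here is a minimal sketch of a force-guarded insertion loop. It is illustrative only: read_wrench is a hypothetical stand-in (simulated below), not Robotiq's URCap API or Universal Robots' actual Force Mode interface.

```python
import random

CONTACT_THRESHOLD_N = 5.0  # stop pressing once this contact force is felt

def read_wrench(depth_mm):
    """Hypothetical sensor read (Fx, Fy, Fz, Tx, Ty, Tz) in N and Nm.
    Simulated here: free space until a surface at 8 mm, then stiff contact."""
    fz = 0.0 if depth_mm < 8.0 else 20.0 * (depth_mm - 8.0)
    return (0.0, 0.0, fz + random.gauss(0.0, 0.2), 0.0, 0.0, 0.0)

def guarded_insert(step_mm=0.1, max_depth_mm=25.0):
    """Step downward until the measured Fz exceeds the contact threshold."""
    depth = 0.0
    while depth < max_depth_mm:
        _, _, fz, *_ = read_wrench(depth)
        if abs(fz) > CONTACT_THRESHOLD_N:
            print(f"Contact at {depth:.1f} mm (Fz = {fz:.2f} N)")
            return depth
        depth += step_mm  # a real cell would command the robot to move here
    raise RuntimeError("No contact detected within travel range")

if __name__ == "__main__":
    guarded_insert()
```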
See some of the FT 300’s new capabilities in the following demo videos:
#1 How to calibrate with the FT 300 URCap Dashboard
#2 Linear search demo
#3 Path recording demo
Visit the FT 300 webpage or get a quote here
Get the FT 300 specs here
Get more info in the FAQ
Get free Skills to accelerate robot programming of force control tasks.
Get free robot cell deployment resources on leanrobotics.org
* Available with Universal Robots CB3.1 controller only
About Robotiq
Robotiq’s Lean Robotics methodology and products enable manufacturers to deploy productive robot cells across their factory. They leverage the Lean Robotics methodology for faster time to production and increased productivity from their robots. Production engineers standardize on Robotiq’s Plug + Play components for their ease of programming, built-in integration, and adaptability to many processes. They rely on the Flow software suite to accelerate robot projects and optimize robot performance once in production.
Robotiq is the humans behind the robots: an employee-owned business with a passionate team and an international partner network.
Media contact
David Maltais, Communications and Public Relations Coordinator
d.maltais@robotiq.com
1-418-929-2513
Press Release Provided by: Robotiq.Com


#431553 This Week’s Awesome Stories From ...

ROBOTS
Boston Dynamics’ Atlas Robot Does Backflips Now and It’s Full-Tilt Insane
Matt Simon | Wired
“To be clear: Humanoids aren’t supposed to be able to do this. It’s extremely difficult to make a bipedal robot that can move effectively, much less kick off a tumbling routine.”

TRANSPORTATION
This Is the Tesla Semi Truck
Zac Estrada | The Verge
“What Tesla has done today is shown that it wants to invigorate a segment, rather than just make something to comply with more stringent emissions regulations… And in the process, it’s trying to do for heavy-duty commercial vehicles what it did for luxury cars—plough forward in its own lane.”
PRIVACY AND SECURITY
Should Facebook Notify Readers When They’ve Been Fed Disinformation?
Austin Carr | Fast Company
“It would be, Reed suggested, the social network equivalent of a newspaper correction—only one that, with the tech companies’ expansive data, could actually reach its intended audience, like, say, the 250,000-plus Facebook users who shared the debunked YourNewsWire.com story.”
BRAIN HEALTH
Brain Implant Boosts Memory for First Time Ever
Kristin Houser | NBC News
“Once implanted in the volunteers, Song’s device could collect data on their brain activity during tests designed to stimulate either short-term memory or working memory. The researchers then determined the pattern associated with optimal memory performance and used the device’s electrodes to stimulate the brain following that pattern during later tests.”
COMPUTING
Yale Professors Race Google and IBM to the First Quantum Computer
Cade Metz | New York Times
“Though Quantum Circuits is using the same quantum method as its bigger competitors, Mr. Schoelkopf argued that his company has an edge because it is tackling the problem differently. Rather than building one large quantum machine, it is constructing a series of tiny machines that can be networked together. He said this will make it easier to correct errors in quantum calculations—one of the main difficulties in building one of these complex machines.”
Image Credit: Tesla Motors


#431385 Here’s How to Get to Conscious ...

“We cannot be conscious of what we are not conscious of.” – Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind
Despite what the director leads you to believe, the protagonist of Ex Machina, Alex Garland’s 2015 masterpiece, isn’t Caleb, a young programmer tasked with evaluating machine consciousness. Rather, it’s his target Ava, a breathtaking humanoid AI with a seemingly child-like naïveté and an enigmatic mind.
Like most cerebral movies, Ex Machina leaves the conclusion up to the viewer: was Ava actually conscious? In doing so, it also cleverly avoids a thorny question that has challenged most AI-centric movies to date: what is consciousness, and can machines have it?
Hollywood producers aren’t the only people stumped. As machine intelligence barrels forward at breakneck speed—not only exceeding human performance on games such as DOTA and Go, but doing so without the need for human expertise—the question has once more entered the scientific mainstream.
Are machines on the verge of consciousness?
This week, in a review published in the prestigious journal Science, cognitive scientists Drs. Stanislas Dehaene, Hakwan Lau and Sid Kouider of the Collège de France, University of California, Los Angeles and PSL Research University, respectively, argue: not yet, but there is a clear path forward.
The reason? Consciousness is “resolutely computational,” the authors say, in that it results from specific types of information processing, made possible by the hardware of the brain.
There is no magic juice, no extra spark—in fact, an experiential component (“what is it like to be conscious?”) isn’t even necessary to implement consciousness.
If consciousness results purely from the computations within our three-pound organ, then endowing machines with a similar quality is just a matter of translating biology to code.
Much like the way current powerful machine learning techniques heavily borrow from neurobiology, the authors write, we may be able to achieve artificial consciousness by studying the structures in our own brains that generate consciousness and implementing those insights as computer algorithms.
From Brain to Bot
Without doubt, the field of AI has greatly benefited from insights into our own minds, both in form and function.
For example, deep neural networks, the architecture of algorithms that underlie AlphaGo’s breathtaking sweep against its human competitors, are loosely based on the multi-layered biological neural networks that our brain cells self-organize into.
Reinforcement learning, a type of “training” that teaches AIs to learn from millions of examples, has roots in a centuries-old technique familiar to anyone with a dog: if it moves toward the right response (or result), give a reward; otherwise ask it to try again.
In this sense, translating the architecture of human consciousness to machines seems like a no-brainer towards artificial consciousness. There’s just one big problem.
“Nobody in AI is working on building conscious machines because we just have nothing to go on. We just don’t have a clue about what to do,” said Dr. Stuart Russell, author of Artificial Intelligence: A Modern Approach, in a 2015 interview with Science.
Multilayered consciousness
The hard part, long before we can consider coding machine consciousness, is figuring out what consciousness actually is.
To Dehaene and colleagues, consciousness is a multilayered construct with two “dimensions”: C1, the information readily in mind, and C2, the ability to obtain and monitor information about oneself. Both are essential to consciousness, but one can exist without the other.
Say you’re driving a car and the low fuel light comes on. Here, the perception of the fuel-tank light is C1—a mental representation that we can play with: we notice it, act upon it (refill the gas tank) and recall and speak about it at a later date (“I ran out of gas in the boonies!”).
“The first meaning we want to separate (from consciousness) is the notion of global availability,” explains Dehaene in an interview with Science. When you’re conscious of a word, your whole brain is aware of it, in a sense that you can use the information across modalities, he adds.
But C1 is not just a “mental sketchpad.” It represents an entire architecture that allows the brain to draw multiple modalities of information from our senses or from memories of related events, for example.
Unlike subconscious processing, which often relies on specific “modules” competent at a defined set of tasks, C1 is a global workspace that allows the brain to integrate information, decide on an action, and follow through until the end.
Like The Hunger Games, what we call “conscious” is whatever representation, at one point in time, wins the competition to access this mental workspace. The winners are shared among different brain computation circuits and are kept in the spotlight for the duration of decision-making to guide behavior.
Because of these features, C1 consciousness is highly stable and global—all related brain circuits are triggered, the authors explain.
For a complex machine such as an intelligent car, C1 is a first step towards addressing an impending problem, such as a low fuel light. In this example, the light itself is a type of subconscious signal: when it flashes, all of the other processes in the machine remain uninformed, and the car—even if equipped with state-of-the-art visual processing networks—passes by gas stations without hesitation.
With C1 in place, the fuel tank would alert the car computer (allowing the light to enter the car’s “conscious mind”), which in turn checks the built-in GPS to search for the next gas station.
“We think in a machine this would translate into a system that takes information out of whatever processing module it’s encapsulated in, and make it available to any of the other processing modules so they can use the information,” says Dehaene. “It’s a first sense of consciousness.”
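The review offers no implementation, but the gist of C1-style global availability can be sketched in a few lines of toy code. Everything below—signal names, salience scores, modules—is invented for illustration, not drawn from the paper:

```python
# Toy sketch of a global workspace: the most salient signal wins the
# competition and is broadcast to every module, instead of remaining
# trapped in the local module that produced it.

class GlobalWorkspace:
    def __init__(self):
        self.modules = {}

    def register(self, name, handler):
        self.modules[name] = handler

    def broadcast(self, signals):
        winner = max(signals, key=lambda s: s["salience"])  # the competition
        for handler in self.modules.values():  # global availability
            handler(winner)

ws = GlobalWorkspace()
ws.register("gps", lambda s: print("GPS: routing to nearest gas station")
            if s["type"] == "fuel_low" else None)
ws.register("speech", lambda s: print(f"Speech: reporting '{s['type']}'"))

ws.broadcast([
    {"type": "fuel_low", "salience": 0.9},    # the dashboard light
    {"type": "radio_song", "salience": 0.4},  # loses the competition
])
```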
Meta-cognition
In a way, C1 reflects the mind’s capacity to access outside information. C2 goes introspective.
The authors define the second facet of consciousness, C2, as “meta-cognition”: reflecting on whether you know or perceive something, or whether you just made an error (“I think I may have filled my tank at the last gas station, but I forgot to keep a receipt to make sure”). This dimension reflects the link between consciousness and sense of self.
C2 is the level of consciousness that allows you to feel more or less confident about a decision when making a choice. In computational terms, it’s an algorithm that spews out the probability that a decision (or computation) is correct, even if it’s often experienced as a “gut feeling.”
C2 also has its claws in memory and curiosity. These self-monitoring algorithms allow us to know what we know or don’t know—so-called “meta-memory,” responsible for that feeling of having something at the tip of your tongue. Monitoring what we know (or don’t know) is particularly important for children, says Dehaene.
“Young children absolutely need to monitor what they know in order to…inquire and become curious and learn more,” he explains.
The two aspects of consciousness synergize to our benefit: C1 pulls relevant information into our mental workspace (while discarding other “probable” ideas or solutions), while C2 helps with long-term reflection on whether the conscious thought led to a helpful response.
Going back to the low fuel light example, C1 allows the car to solve the problem in the moment—these algorithms globalize the information, so that the car becomes aware of the problem.
But to solve the problem, the car would need a “catalog of its cognitive abilities”—a self-awareness of what resources it has readily available, for example, a GPS map of gas stations.
“A car with this sort of self-knowledge is what we call having C2,” says Dehaene. Because the signal is globally available and because it’s being monitored in a way that the machine is looking at itself, the car would care about the low gas light and behave like humans do—lower fuel consumption and find a gas station.
“Most present-day machine learning systems are devoid of any self-monitoring,” the authors note.
But their theory seems to be on the right track. In the few cases where a self-monitoring system has been implemented—either within the structure of the algorithm or as a separate network—the AI has generated “internal models that are meta-cognitive in nature, making it possible for an agent to develop a (limited, implicit, practical) understanding of itself.”
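As a toy illustration of self-monitoring (again, my sketch, not the authors' implementation), a decision procedure can report its own estimated probability of being correct and defer when that estimate is low:

```python
import math

def decide(evidence):
    """Return (decision, confidence) from a scalar evidence value.
    Confidence is a logistic function of the strength of the evidence."""
    decision = evidence > 0
    confidence = 1.0 / (1.0 + math.exp(-abs(evidence)))
    return decision, confidence

for e in (3.2, 0.1, -2.5):
    d, c = decide(e)
    if c < 0.6:
        # Knowing what it doesn't know: defer instead of acting.
        print(f"evidence={e:+.1f}: unsure (p={c:.2f}), gather more data")
    else:
        print(f"evidence={e:+.1f}: decide {d} (p={c:.2f})")
```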
Towards conscious machines
Would a machine endowed with C1 and C2 behave as if it were conscious? Very likely: a smartcar would “know” that it’s seeing something, express confidence in it, report it to others, and find the best solutions for problems. If its self-monitoring mechanisms break down, it may also suffer “hallucinations” or even experience visual illusions similar to humans.
Thanks to C1 it would be able to access the information it has and use it flexibly, and because of C2 it would know the limits of what it knows, says Dehaene. “I think (the machine) would be conscious,” and not merely appear so to humans.
If you’re left with a feeling that consciousness is far more than global information sharing and self-monitoring, you’re not alone.
“Such a purely functional definition of consciousness may leave some readers unsatisfied,” the authors acknowledge.
“But we’re trying to take a radical stance, maybe simplifying the problem. Consciousness is a functional property, and when we keep adding functions to machines, at some point these properties will characterize what we mean by consciousness,” Dehaene concludes.
Image Credit: agsandrew / Shutterstock.com


#431171 SceneScan: Real-Time 3D Depth Sensing ...

Nerian Introduces a High-Performance Successor for the Proven SP1 System
Stereo vision, which is the three-dimensional perception of our environment with two sensors, like our eyes, is a well-known technology. As a passive method – there is no need to emit light in the visible or invisible spectral range – this technology can open up new possibilities for three-dimensional perception, even under difficult conditions.
But as so often, the devil is in the details: for most applications, a software implementation on standard PCs – and even on graphics processors – is too slow. Another complicating factor is that these hardware platforms are expensive and not energy-efficient. The solution is to use specialized image processing hardware instead. A programmable logic device – a so-called FPGA – can greatly accelerate image processing.
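To make the performance argument concrete, here is what the software path looks like with OpenCV's standard block matcher on a synthetic stereo pair. The camera parameters are assumed values, and this CPU-bound approach is exactly the kind of implementation that struggles to reach real-time rates:

```python
import numpy as np
import cv2

# Synthetic stereo pair: the right image is the left image shifted
# horizontally by a known disparity, so we can check the result.
rng = np.random.default_rng(0)
left = (rng.random((240, 320)) * 255).astype(np.uint8)
true_disp = 16
right = np.roll(left, -true_disp, axis=1)

# Classic CPU block matching (SAD over local windows).
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disp = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point

valid = disp > 0
print("median disparity (px):", float(np.median(disp[valid])))  # ~16

# Depth follows from disparity: Z = f * B / d.
f_px, baseline_m = 400.0, 0.10  # assumed focal length and baseline
print("implied depth (m):", f_px * baseline_m / float(np.median(disp[valid])))
```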
As a technology leader, Nerian Vision Technologies has been following this path successfully for the past two years with the SP1 stereo vision system, which has enabled completely new applications in the fields of robotics, automation technology, medical technology, autonomous driving and other domains. Now the company introduces two successors:
SceneScan and SceneScan Pro. Real eye-catchers in a double sense: stereo vision in an elegant design! More important, of course, are the significantly improved inner workings of the two new models compared to their predecessor. The new hardware allows processing rates of up to 100 frames per second at resolutions of up to 3 megapixels, which leaves the SP1 far behind:
Photo Credit: Nerian Vision Technologies – www.nerian.com

The table illustrates the difference: while SceneScan Pro offers the highest possible computing power and is designed for the most demanding applications, SceneScan has been cost-reduced for applications with lower requirements. Customers can thus optimize their embedded vision solution both in terms of cost and technology.
The new duo is completed by Nerian’s proven Karmin stereo cameras. Of course, industrial USB3 Vision cameras by other manufacturers are also supported. This combination not only supports the above-mentioned applications even better, but also facilitates completely new and innovative ones. If required, customer-specific adaptations are also possible.
Contact
Nerian Vision Technologies
Owner: Dr. Konstantin Schauwecker
Gotenstr. 9
70771 Leinfelden-Echterdingen
Germany
Phone: +49 711 / 2195 9414
Email: service@nerian.com
Website: http://nerian.com
Press Release Authored By: Nerian Vision Technologies


#431058 How to Make Your First Chatbot With the ...

You’re probably wondering what Game of Thrones has to do with chatbots and artificial intelligence. Before I explain this weird connection, I need to warn you that this article may contain some serious spoilers. Continue with your reading only if you are a passionate GoT follower, who watches new episodes immediately after they come out.
Why are chatbots so important anyway?
According to the study “When Will AI Exceed Human Performance?,” researchers believe there is a 50% chance that artificial intelligence could outperform humans at all tasks by around the year 2060. This technology has already replaced dozens of customer service and sales positions and helped businesses make substantial savings.
Apart from the obvious business advantages, chatbot creation can be fun. You can create an artificial personality with a strong attitude and a unique set of traits and flaws. It’s like creating a new character for your favorite TV show. That’s why I decided to explain the most important elements of the chatbot creation process by using the TV characters we all know and love (or hate).
Why Game of Thrones?
Game of Thrones is the most popular TV show in the world. More than 10 million viewers watched the seventh season premiere, and you have probably seen internet users fanatically discussing the series’ characters, storyline, and possible endings.
Apart from writing about chatbots, I’m also a GoT fanatic, and I will base this chatbot on one of the characters from my favorite series. But before you find out the name of my bot, you should read a few lines about incredible free tools that allow us to build chatbots without coding.
Are chatbots expensive?
Today, you can create a chatbot even if you don’t know how to code. Most chatbot building platforms offer at least one free plan that allows you to use basic functionalities, create your bot, deploy it to Facebook Messenger, and analyze its performance. Free plans usually allow your bot to talk to a limited number of users.
Why should you personalize your bot?
Every platform will ask you to write a bot’s name before you start designing conversations. You will also be able to add the bot’s photograph and bio. Personalizing your bot is the only way to ensure that you will stick to the same personality and storyline throughout the building process. Users often see chatbots as people, and by giving your bot an identity, you will make sure that it doesn’t sound like it has multiple personality disorder.
I think connecting my chatbot with a GoT character will help readers understand the process of chatbot creation.
And the name of our GoT chatbot is…
…Cersei. She is mean, pragmatic, and fearless, and she would do anything to stay on the Iron Throne. Many people would rather hang out with Daenerys or Jon Snow. These characters are honest, noble, and good-hearted, which means their actions are often predictable.
Cersei, on the other hand, is the queen of intrigues. As the meanest and the most vengeful character in the series, she has an evil plan for everybody who steps on her toes. While viewers can easily guess where Jon and Daenerys stand, there are dozens of questions they would like to ask Cersei. But before we start talking to our bot, we need to build her personality by using the most basic elements of chatbot interaction.
Choosing the bot’s name on Botsify.
Welcome / Greeting Message
The welcome message is the greeting Cersei says to every commoner who clicks on the ‘start conversation’ button. She is not a welcoming person (ask Sansa), except if you are a banker from Braavos. Her introductory message may sound something like this:
“Dear {{user_full_name}}, My name is Cersei of the House Lannister, the First of Her Name, Queen of the Andals and the First Men, Protector of the Seven Kingdoms. You can ask me questions, and I will answer them. If the question is not worth answering, I will redirect you to Ser Gregor Clegane, who will give you a step-by-step course on how to talk to the Queen of Westeros.”
Creating the welcome message on Chatfuel
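Under the hood, the {{user_full_name}} placeholder is simply substituted from the user's profile before the message is sent. A tiny sketch (with hypothetical field names):

```python
# The platform fills profile fields into the template before sending.
WELCOME = ("Dear {user_full_name}, My name is Cersei of the House "
           "Lannister, the First of Her Name. You can ask me questions, "
           "and I will answer them.")

print(WELCOME.format(user_full_name="Podrick Payne"))
```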
Default Message / Answer
In the bot game, users, bots, and their creators often need to learn from failed attempts and mistakes. The default message is the text Cersei will send whenever you ask her a question she doesn’t understand. Knowing Cersei, it would sound something like this:
“Ser Gregor, please escort {{user_full_name}} to the dungeon.”
Creating default message on Botsify
Menu
To avoid calling out the Mountain every time someone asks her a question, Cersei might give you a few (safe) options to choose from. The best way to do this is with a menu function. We can classify the questions people want to ask Cersei into several categories:

Iron Throne
Relationship with Jaime — OK, this isn’t a “safe option”; get ready to get up close and personal with Ser Gregor Clegane.
War plans
Euron Greyjoy

After users choose a menu item, Cersei can give them a default response on the topic or set up a plot that will make their lives miserable. Knowing Cersei, she will probably go for the second option.
Adding chatbot menu on Botsify
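Conceptually, a menu is just a fixed mapping from buttons to canned responses, unlike the free-text matching described below. A toy sketch of such a dispatch (responses invented for illustration):

```python
# Each menu button maps directly to a prepared answer.
MENU = {
    "Iron Throne": "The throne is mine by right and by conquest.",
    "Relationship with Jaime": "Ser Gregor will see you out.",
    "War plans": "A queen does not share her plans with commoners.",
    "Euron Greyjoy": "A useful fleet, attached to a useless man.",
}

def on_menu_click(option):
    return MENU.get(option, "Choose one of the options offered.")

print(on_menu_click("War plans"))
```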
Stories / Blocks
This feature allows us to build a longer Cersei-to-user interaction. The structure of stories and blocks is different on every chatbot platform, but most of them use keywords and phrases for finding out the user’s intention.

Keywords — where the bot recognizes a certain keyword within the user’s reply. Users who have chosen the ‘war plans’ option might ask Cersei how she is planning to defeat Daenerys’s dragons. We can add ‘dragon’ and ‘dragons’ as keywords and connect them with an answer that will sound something like this:

“Dragons are not invulnerable as you may think. Maester Qyburn is developing a weapon that will bring them down for good!”
Adding keywords on Chatfuel
People may also ask her about White Walkers, for example: “Do you plan to join Daenerys and Jon Snow in a fight against the White Walkers?” After we add ‘White Walker’ and ‘White Walkers’ to the keyword list, Cersei will answer:
“White Walkers? Do you think the Queen of Westeros has enough free time to think about creatures from fairy tales and legends?”
Adding Keywords on Botsify

Phrases — more complex expressions that the bot can be trained to recognize. Many people would like to ask Cersei if she’s going to marry Euron Greyjoy after the war ends. We can add ‘Euron’ as a keyword, but then we won’t be sure what answer the user is expecting. Instead, we can use the phrase ‘(Will you) marry Euron Greyjoy (after the war?)’. Just to be sure, we should also add a few alternative phrases like ‘(Do you plan on) marrying Euron Greyjoy (after the war),’ ‘(Will you) end up with Euron Greyjoy (after the war?)’, ‘(Will) Euron Greyjoy be the new King?’ etc. Cersei would probably answer this inquiry in her own style:

“Of course not, Euron is a useful idiot. I will use his fleet and send him back to the Iron Islands, where he belongs.”
Adding phrases on Botsify
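If you're curious what the platform does behind the scenes, the keyword logic above boils down to something like the following sketch (illustrative only, not Chatfuel's or Botsify's actual matching engine):

```python
# Each rule pairs a set of trigger keywords with a prepared answer;
# anything unmatched falls through to the default message.
RULES = [
    ({"dragon", "dragons"},
     "Dragons are not invulnerable as you may think. Maester Qyburn is "
     "developing a weapon that will bring them down for good!"),
    ({"white walker", "white walkers"},
     "White Walkers? Do you think the Queen of Westeros has enough free "
     "time to think about creatures from fairy tales and legends?"),
]

DEFAULT = "Ser Gregor, please escort {user} to the dungeon."

def reply(message, user="stranger"):
    text = message.lower()
    for keywords, answer in RULES:
        if any(k in text for k in keywords):
            return answer
    return DEFAULT.format(user=user)

print(reply("How will you defeat the dragons?"))
print(reply("Tell me about the Night King", user="Jon Snow"))  # default
```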
Forms
We have already asked Cersei several questions, and now she would like to ask us something. She can do so by using the form/user input feature. Most tools allow us to add a question and the criteria for checking the user’s answer. If the user provides an answer that complies with the predefined format (like an email address, phone number, or ZIP code), the bot will identify and extract it. If the answer doesn’t fit the predefined criteria, the bot will notify the user and ask him/her to try again.
If Cersei were to ask you a question, she would probably want to know your address so she could send her guards to fill your basement with barrels of wildfire.
Creating forms on Botsify
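Behind the scenes, checking an answer against a predefined format is mostly pattern matching. Here is an illustrative sketch with deliberately simplified email and ZIP code patterns:

```python
import re

# Simplified validation patterns; real platforms use stricter checks.
PATTERNS = {
    "email": re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"),
    "zip":   re.compile(r"^\d{5}(-\d{4})?$"),
}

def validate(kind, answer):
    """Return True if the answer matches the expected format."""
    return bool(PATTERNS[kind].match(answer.strip()))

for attempt in ("not-an-address", "commoner@kingslanding.gov"):
    if validate("email", attempt):
        print(f"Thank you. The Queen's guards will write to {attempt}.")
    else:
        print("That does not look like an email address. Try again.")
```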
Templates
If you have problems building your first chatbot, templates can help you create the basic conversation structure. Unfortunately, not all platforms offer this feature for free. Snatchbot currently has the most comprehensive list of free templates. There you can choose a pre-built layout. The template selection ranges from simple FAQ bots to ones created for a specific industry, like banking, airline, healthcare, or e-commerce.
Choosing templates on Snatchbot
Plugins
Most tools also provide plugins that can be used for making the conversations more meaningful. These plugins allow Cersei to send images, audio and video files. She can unleash her creativity and make you suffer by sending you her favorite GoT execution videos.

With the help of integrations, Cersei can talk to you on Facebook Messenger, Telegram, WeChat, Slack, and many other communication apps. She can also sell her fan gear and ask you for donations by integrating in-bot payments from PayPal accounts. Her sales pitch will probably sound something like this:
“Gold wins wars! Would you rather invest your funds in a member of a respected family, who always pays her debts, or in the chaotic war endeavor of a crazy revolutionary, whose strength lies in three flying lizards? If your pockets are full of gold, you are already on my side. Now you can complete your checkout on PayPal.”
Chatbot building is now easier than ever, and even small businesses are starting to reap the incredible benefits of artificial intelligence. If you still don’t believe that chatbots can replace customer service representatives, I suggest you try to develop a bot based on your favorite TV show, movie, or book character and talk with him/her for a while. This way, you will be able to understand the concept behind this amazing technology and use it to improve your business.
Now I’m off to talk to Cersei. Maybe she will feed me some Season 8 spoilers.
This article was originally published by Chatbots Magazine. Read the original post here.
Image credits for screenshots in post: Branislav Srdanovic
Banner stock media provided by new_vision_studio / Pond5
