
#431592 Reactive Content Will Get to Know You ...

The best storytellers react to their audience. They look for smiles, signs of awe, or boredom; they simultaneously and skillfully read both the story and their listeners. Kevin Brooks, a seasoned storyteller working for Motorola’s Human Interface Labs, explains, “As the storyteller begins, they must tune in to… the audience’s energy. Based on this energy, the storyteller will adjust their timing, their posture, their characterizations, and sometimes even the events of the story. There is a dialog between audience and storyteller.”
Shortly after I read the script for Melita, the latest virtual reality experience from Madrid-based immersive storytelling company Future Lighthouse, CEO Nicolas Alcalá explained to me that the piece is an example of “reactive content,” a concept he’s been working on since his days at Singularity University.

For the first time in history, we have access to technology that can merge the reactive and affective elements of oral storytelling with the affordances of digital media, weaving stunning visuals, rich soundtracks, and complex meta-narratives in a story arena that has the capability to know you more intimately than any conventional storyteller could.
It’s no exaggeration to say that the storytelling potential here is phenomenal.
In short, we can refer to content as reactive if it reads and reacts to users based on their body rhythms, emotions, preferences, and data points. Artificial intelligence is used to analyze users’ behavior or preferences to sculpt unique storylines and narratives, essentially allowing for a story that changes in real time based on who you are and how you feel.
The development of reactive content will allow those working in the industry to go one step further than simply translating the essence of oral storytelling into VR. Rather than having a narrative experience with a digital storyteller who can read you, reactive content has the potential to create an experience with a storyteller who knows you.
This means being able to subtly insert minor personal details that have a specific meaning to the viewer. When we talk to our friends we often use experiences we’ve shared in the past or knowledge of our audience to give our story as much resonance as possible. Targeting personal memories and aspects of our lives is a highly effective way to elicit emotions and aid in visualizing narratives. When you can do this with the addition of visuals, music, and characters—all lifted from someone’s past—you have the potential for overwhelmingly engaging and emotionally-charged content.
Future Lighthouse informs me that, for now, reactive content will rely primarily on biometric feedback technology such as breathing, heartbeat, and eye-tracking sensors. A simple example would be a story in which parts of the environment or soundscape change in sync with the user’s heartbeat and breathing, or characters who call you out for not paying attention.
The next step would be characters and situations that react to the user’s emotions, wherein algorithms analyze biometric information to make inferences about states of emotional arousal (“why are you so nervous?” etc.). Another example would be implementing the use of “arousal parameters,” where the audience can choose what level of “fear” they want from a VR horror story before algorithms modulate the experience using information from biometric feedback devices.
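To make the “arousal parameter” idea concrete, here is a minimal sketch in Python of how such a control loop might work—purely illustrative, with an invented heart-rate-to-arousal mapping rather than anything Future Lighthouse has described:

# Hypothetical sketch: the viewer picks a target "fear" level, and a
# control loop nudges scene intensity up or down from live heart-rate data.

def arousal_from_heart_rate(heart_rate_bpm, baseline_bpm=70.0):
    # Crude proxy: 0.0 at resting rate, 1.0 at baseline + 50 bpm.
    return max(0.0, min(1.0, (heart_rate_bpm - baseline_bpm) / 50.0))

def next_intensity(current, target_fear, heart_rate_bpm):
    # Ease the scene toward the chosen fear level, backing off when the
    # viewer's measured arousal already exceeds it.
    error = target_fear - arousal_from_heart_rate(heart_rate_bpm)
    return max(0.0, min(1.0, current + 0.1 * error))

intensity = 0.2
for hr in [72, 80, 95, 110, 118]:  # simulated heart-rate samples
    intensity = next_intensity(intensity, target_fear=0.7, heart_rate_bpm=hr)
    print(f"heart rate {hr} bpm -> scene intensity {intensity:.2f}")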
The company’s long-term goal is to gather research on storytelling conventions and produce a catalogue of story “wireframes.” This entails distilling the basic formula to different genres so they can then be fleshed out with visuals, character traits, and soundtracks that are tailored for individual users based on their deep data, preferences, and biometric information.
The development of reactive content will go hand in hand with a renewed exploration of diverging, dynamic storylines, and multi-narratives, a concept that hasn’t had much impact in the movie world thus far. In theory, the idea of having a story that changes and mutates is captivating largely because of our love affair with serendipity and unpredictability, a cultural condition theorist Arthur Kroker refers to as the “hypertextual imagination.” This feeling of stepping into the unknown with the possibility of deviation from the habitual translates as a comforting reminder that our own lives can take exciting and unexpected turns at any moment.
The concept entered mainstream culture with the classic Choose Your Own Adventure book series, launched in the late 70s, which was hugely successful in its literary form. Filmic takes on the theme, however, have made less of an impression. DVDs like I’m Your Man (1998) and Switching (2003) both used scene selection tools to determine the direction of the storyline.
A more recent example comes from Kino Industries, who claim to have developed the technology to allow filmmakers to produce interactive films in which viewers can use smartphones to quickly vote on which direction the narrative takes at numerous decision points throughout the film.
The main problem with diverging narrative films has been the stop-start nature of the interactive element: when I’m immersed in a story I don’t want to have to pick up a controller or remote to select what’s going to happen next. Every time the audience is given the option to take a new path (“press this button”, “vote on X, Y, Z”) the narrative— and immersion within that narrative—is temporarily halted, and it takes the mind a while to get back into this state of immersion.
Reactive content has the potential to resolve these issues by enabling passive interactivity—that is, input and output without having to pause and actively make decisions or engage with the hardware. This will result in diverging, dynamic narratives that will unfold seamlessly while being dependent on and unique to the specific user and their emotions. Passive interactivity will also remove the game feel that can often be a symptom of interactive experiences and put a viewer somewhere in the middle: still firmly ensconced in an interactive dynamic narrative, but in a much subtler way.
While reading the Melita script I was particularly struck by a scene in which the characters start to engage with the user and there’s a synchronicity between the user’s heartbeat and objects in the virtual world. As the narrative unwinds and the words of Melita’s character get more profound, parts of the landscape, which seemed to be flashing and pulsating at random, come together and start to mimic the user’s heartbeat.
In 2013, Jane Aspell of Anglia Ruskin University (UK) and Lukas Heydrich of the Swiss Federal Institute of Technology showed that a user’s sense of presence and identification with a virtual avatar could be dramatically increased by syncing the on-screen character with the user’s heartbeat. The relationship between bio-digital synchronicity, immersion, and emotional engagement is something that will surely have revolutionary narrative and storytelling potential.
Image Credit: Tithi Luadthong / Shutterstock.com


#431389 Tech Is Becoming Emotionally ...

Many people get frustrated with technology when it malfunctions or is counterintuitive. The last thing people might expect is for that same technology to pick up on their emotions and engage with them differently as a result.
All of that is now changing. Computers are increasingly able to figure out what we’re feeling—and it’s big business.
A recent report predicts that the global affective computing market will grow from $12.2 billion in 2016 to $53.98 billion by 2021. The report by research and consultancy firm MarketsandMarkets observed that enabling technologies have already been adopted in a wide range of industries and noted a rising demand for facial feature extraction software.
Affective computing is also referred to as emotion AI or artificial emotional intelligence. Although many people are still unfamiliar with the category, researchers in academia have already discovered a multitude of uses for it.
At the University of Tokyo, Professor Toshihiko Yamasaki decided to develop a machine learning system that evaluates the quality of TED Talk videos. Of course, a TED Talk is only considered to be good if it resonates with a human audience. On the surface, this would seem too qualitatively abstract for computer analysis. But Yamasaki wanted his system to watch videos of presentations and predict user impressions. Could a machine learning system accurately evaluate the emotional persuasiveness of a speaker?
Yamasaki and his colleagues came up with a method that analyzed correlations and “multimodal features including linguistic as well as acoustic features” in a dataset of 1,646 TED Talk videos. The experiment was successful. The method obtained “a statistically significant macro-average accuracy of 93.3 percent, outperforming several competitive baseline methods.”
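The paper’s exact feature set and model aren’t reproduced here, but the general recipe—concatenate per-talk linguistic and acoustic features and train a classifier on impression labels—can be sketched in a few lines of Python (synthetic data, invented feature names):

# Not Yamasaki's actual pipeline -- a sketch of the general multimodal recipe.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_talks = 200
linguistic = rng.normal(size=(n_talks, 10))  # e.g., word-usage and sentiment stats
acoustic = rng.normal(size=(n_talks, 6))     # e.g., pitch and pause statistics
X = np.hstack([linguistic, acoustic])        # one "multimodal" vector per talk
y = rng.integers(0, 2, size=n_talks)         # 1 = viewers rated it "persuasive"

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())  # ~0.5 here: the data is random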
A machine was able to predict whether or not a person would emotionally connect with other people. In their report, the authors noted that these findings could be used for recommendation purposes and also as feedback to the presenters, in order to improve the quality of their public presentation. However, the usefulness of affective computing goes far beyond the way people present content. It may also transform the way they learn it.
Researchers from North Carolina State University explored the connection between students’ affective states and their ability to learn. Their software was able to accurately predict the effectiveness of online tutoring sessions by analyzing the facial expressions of participating students. The software tracked fine-grained facial movements such as eyebrow raising, eyelid tightening, and mouth dimpling to determine engagement, frustration, and learning. The authors concluded that “analysis of facial expressions has great potential for educational data mining.”
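As a toy illustration of the idea (the NC State software is far more sophisticated, and the weights below are invented), one could map action-unit intensities to an engagement score like this:

# Illustrative only: a linear score over facial action-unit intensities.
ACTION_UNIT_WEIGHTS = {
    "brow_raise": 0.5,    # attention / surprise
    "lid_tighten": -0.6,  # often read as frustration
    "mouth_dimple": 0.3,  # engagement cue
}

def engagement_score(au_intensities):
    """au_intensities: dict mapping action-unit name -> intensity in [0, 1]."""
    return sum(ACTION_UNIT_WEIGHTS.get(au, 0.0) * value
               for au, value in au_intensities.items())

print(engagement_score({"brow_raise": 0.8, "lid_tighten": 0.1}))  # 0.34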
This type of technology is increasingly being used within the private sector. Affectiva is a Boston-based company that makes emotion recognition software. When asked to comment on this emerging technology, Gabi Zijderveld, chief marketing officer at Affectiva, explained in an interview for this article, “Our software measures facial expressions of emotion. So basically all you need is our software running and then access to a camera so you can basically record a face and analyze it. We can do that in real time or we can do this by looking at a video and then analyzing data and sending it back to folks.”
The technology has particular relevance for the advertising industry.
Zijderveld said, “We have products that allow you to measure how consumers or viewers respond to digital content…you could have a number of people looking at an ad, you measure their emotional response so you aggregate the data and it gives you insight into how well your content is performing. And then you can adapt and adjust accordingly.”
Zijderveld explained that this is the first market where the company got traction. However, they have since packaged up their core technology in software development kits or SDKs. This allows other companies to integrate emotion detection into whatever they are building.
By licensing its technology to others, Affectiva is now rapidly expanding into a wide variety of markets, including gaming, education, robotics, and healthcare. The core technology is also used in human resources for the purposes of video recruitment. The software analyzes the emotional responses of interviewees, and that data is factored into hiring decisions.
Richard Yonck is founder and president of Intelligent Future Consulting and the author of a book about our relationship with technology. “One area I discuss in Heart of the Machine is the idea of an emotional economy that will arise as an ecosystem of emotionally aware businesses, systems, and services are developed. This will rapidly expand into a multi-billion-dollar industry, leading to an infrastructure that will be both emotionally responsive and potentially exploitive at personal, commercial, and political levels,” said Yonck, in an interview for this article.
According to Yonck, these emotionally-aware systems will “better anticipate needs, improve efficiency, and reduce stress and misunderstandings.”
Affectiva is uniquely positioned to profit from this “emotional economy.” The company has already created the world’s largest emotion database. “We’ve analyzed a little bit over 4.7 million faces in 75 countries,” said Zijderveld. “This is data first and foremost, it’s data gathered with consent. So everyone has opted in to have their faces analyzed.”
The vastness of that database is essential for deep learning approaches. The software would be inaccurate if the data was inadequate. According to Zijderveld, “If you don’t have massive amounts of data of people of all ages, genders, and ethnicities, then your algorithms are going to be pretty biased.”
This massive database has already revealed cultural insights into how people express emotion. Zijderveld explained, “Obviously everyone knows that women are more expressive than men. But our data confirms that, but not only that, it can also show that women smile longer. They tend to smile more often. There’s also regional differences.”
Yonck believes that affective computing will inspire unimaginable forms of innovation and that change will happen at a fast pace.
He explained, “As businesses, software, systems, and services develop, they’ll support and make possible all sorts of other emotionally aware technologies that couldn’t previously exist. This leads to a spiral of increasingly sophisticated products, just as happened in the early days of computing.”
Those who are curious about affective technology will soon be able to interact with it.
Hubble Connected unveiled the Hubble Hugo at multiple trade shows this year. Hugo is billed as “the world’s first smart camera,” with emotion AI video analytics powered by Affectiva. The product can identify individuals, figure out how they’re feeling, receive voice commands, video monitor your home, and act as a photographer and videographer of events. Media can then be transmitted to the cloud. The company’s website describes Hugo as “a fun pal to have in the house.”
Although he sees the potential for improved efficiencies and expanding markets, Richard Yonck cautions that AI technology is not without its pitfalls.
“It’s critical that we understand we are headed into very unknown territory as we develop these systems, creating problems unlike any we’ve faced before,” said Yonck. “We should put our focus on ensuring AI develops in a way that represents our human values and ideals.”
Image Credit: Kisan / Shutterstock.com


#431385 Here’s How to Get to Conscious ...

“We cannot be conscious of what we are not conscious of.” – Julian Jaynes, The Origin of Consciousness in the Breakdown of the Bicameral Mind
Contrary to what the director leads you to believe, the protagonist of Ex Machina, Alex Garland’s 2015 masterpiece, isn’t Caleb, a young programmer tasked with evaluating machine consciousness. Rather, it’s his target Ava, a breathtaking humanoid AI with a seemingly child-like naïveté and an enigmatic mind.
Like most cerebral movies, Ex Machina leaves the conclusion up to the viewer: was Ava actually conscious? In doing so, it also cleverly avoids a thorny question that has challenged most AI-centric movies to date: what is consciousness, and can machines have it?
Hollywood producers aren’t the only people stumped. As machine intelligence barrels forward at breakneck speed—not only exceeding human performance on games such as DOTA and Go, but doing so without the need for human expertise—the question has once more entered the scientific mainstream.
Are machines on the verge of consciousness?
This week, in a review published in the prestigious journal Science, cognitive scientists Drs. Stanislas Dehaene, Hakwan Lau and Sid Kouider of the Collège de France, University of California, Los Angeles and PSL Research University, respectively, argue: not yet, but there is a clear path forward.
The reason? Consciousness is “resolutely computational,” the authors say, in that it results from specific types of information processing, made possible by the hardware of the brain.
There is no magic juice, no extra spark—in fact, an experiential component (“what is it like to be conscious?”) isn’t even necessary to implement consciousness.
If consciousness results purely from the computations within our three-pound organ, then endowing machines with a similar quality is just a matter of translating biology to code.
Much like the way current powerful machine learning techniques heavily borrow from neurobiology, the authors write, we may be able to achieve artificial consciousness by studying the structures in our own brains that generate consciousness and implementing those insights as computer algorithms.
From Brain to Bot
Without doubt, the field of AI has greatly benefited from insights into our own minds, both in form and function.
For example, deep neural networks, the architecture of algorithms that underlie AlphaGo’s breathtaking sweep against its human competitors, are loosely based on the multi-layered biological neural networks that our brain cells self-organize into.
Reinforcement learning, a type of “training” that teaches AIs to learn from millions of examples, has roots in a centuries-old technique familiar to anyone with a dog: if it moves toward the right response (or result), give a reward; otherwise ask it to try again.
In this sense, translating the architecture of human consciousness to machines seems like a no-brainer towards artificial consciousness. There’s just one big problem.
“Nobody in AI is working on building conscious machines because we just have nothing to go on. We just don’t have a clue about what to do,” said Dr. Stuart Russell, co-author of Artificial Intelligence: A Modern Approach, in a 2015 interview with Science.
Multilayered consciousness
The hard part, long before we can consider coding machine consciousness, is figuring out what consciousness actually is.
To Dehaene and colleagues, consciousness is a multilayered construct with two “dimensions”: C1, the information readily in mind, and C2, the ability to obtain and monitor information about oneself. Both are essential to consciousness, but one can exist without the other.
Say you’re driving a car and the low fuel light comes on. Here, the perception of the fuel-tank light is C1—a mental representation that we can play with: we notice it, act upon it (refill the gas tank) and recall and speak about it at a later date (“I ran out of gas in the boonies!”).
“The first meaning we want to separate (from consciousness) is the notion of global availability,” explains Dehaene in an interview with Science. When you’re conscious of a word, your whole brain is aware of it, in a sense that you can use the information across modalities, he adds.
But C1 is not just a “mental sketchpad.” It represents an entire architecture that allows the brain to draw multiple modalities of information from our senses or from memories of related events, for example.
Unlike subconscious processing, which often relies on specific “modules” competent at a defined set of tasks, C1 is a global workspace that allows the brain to integrate information, decide on an action, and follow through until the end.
Like The Hunger Games, what we call “conscious” is whatever representation, at one point in time, wins the competition to access this mental workspace. The winners are shared among different brain computation circuits and are kept in the spotlight for the duration of decision-making to guide behavior.
Because of these features, C1 consciousness is highly stable and global—all related brain circuits are triggered, the authors explain.
For a complex machine such as an intelligent car, C1 is a first step towards addressing an impending problem, such as a low fuel light. In this example, the light itself is a type of subconscious signal: when it flashes, all of the other processes in the machine remain uninformed, and the car—even if equipped with state-of-the-art visual processing networks—passes by gas stations without hesitation.
With C1 in place, the fuel tank would alert the car computer (allowing the light to enter the car’s “conscious mind”), which in turn checks the built-in GPS to search for the next gas station.
“We think in a machine this would translate into a system that takes information out of whatever processing module it’s encapsulated in, and make it available to any of the other processing modules so they can use the information,” says Dehaene. “It’s a first sense of consciousness.”
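Dehaene’s description maps naturally onto a “blackboard” architecture. The toy Python sketch below (our illustration, not code from the paper) has specialist modules post candidate representations; the most salient one wins the competition and is broadcast to every module:

class Module:
    def __init__(self, name):
        self.name = name
        self.inbox = []  # globally broadcast messages land here

    def receive(self, message):
        self.inbox.append(message)

class GlobalWorkspace:
    def __init__(self, modules):
        self.modules = modules
        self.candidates = []  # (salience, message) pairs awaiting competition

    def post(self, salience, message):
        self.candidates.append((salience, message))

    def broadcast(self):
        # The most salient representation "enters consciousness"...
        salience, winner = max(self.candidates)
        self.candidates.clear()
        # ...and becomes globally available to every processing module.
        for module in self.modules:
            module.receive(winner)
        return winner

gps, planner = Module("gps"), Module("planner")
workspace = GlobalWorkspace([gps, planner])
workspace.post(0.2, "lane drift detected")
workspace.post(0.9, "fuel level low")
print(workspace.broadcast())  # "fuel level low" now reaches every module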
Meta-cognition
In a way, C1 reflects the mind’s capacity to access outside information. C2 goes introspective.
The authors define the second facet of consciousness, C2, as “meta-cognition”: reflecting on whether you know or perceive something, or whether you just made an error (“I think I may have filled my tank at the last gas station, but I forgot to keep a receipt to make sure”). This dimension reflects the link between consciousness and sense of self.
C2 is the level of consciousness that allows you to feel more or less confident about a decision when making a choice. In computational terms, it’s an algorithm that spews out the probability that a decision (or computation) is correct, even if it’s often experienced as a “gut feeling.”
C2 also has its claws in memory and curiosity. These self-monitoring algorithms allow us to know what we know or don’t know—so-called “meta-memory,” responsible for that feeling of having something at the tip of your tongue. Monitoring what we know (or don’t know) is particularly important for children, says Dehaene.
“Young children absolutely need to monitor what they know in order to…inquire and become curious and learn more,” he explains.
The two aspects of consciousness synergize to our benefit: C1 pulls relevant information into our mental workspace (while discarding other “probable” ideas or solutions), while C2 helps with long-term reflection on whether the conscious thought led to a helpful response.
Going back to the low fuel light example, C1 allows the car to solve the problem in the moment—these algorithms globalize the information, so that the car becomes aware of the problem.
But to solve the problem, the car would need a “catalog of its cognitive abilities”—a self-awareness of what resources it has readily available, for example, a GPS map of gas stations.
“A car with this sort of self-knowledge is what we call having C2,” says Dehaene. Because the signal is globally available and because it’s being monitored in a way that the machine is looking at itself, the car would care about the low gas light and behave like humans do—lower fuel consumption and find a gas station.
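Extending the toy sketch above, C2-style self-monitoring might look like the following—a confidence check plus a catalog of the system’s own abilities (hypothetical names, not from the review):

# C2 sketch: act only when the system both trusts its perception and
# knows (from a self-model) that it has a way to respond.
ABILITY_CATALOG = {"find_gas_station": True, "refuel_itself": False}

def handle(event, confidence):
    if confidence < 0.5:
        return "gather more sensor data"           # "I'm not sure I saw that"
    if event == "fuel level low":
        if ABILITY_CATALOG["find_gas_station"]:
            return "route to nearest gas station"  # knows what it can do
        return "alert the driver"                  # knows what it cannot do
    return "continue"

print(handle("fuel level low", confidence=0.9))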
“Most present-day machine learning systems are devoid of any self-monitoring,” the authors note.
But their theory seems to be on the right track. In the few cases where a self-monitoring system has been implemented—either within the structure of the algorithm or as a separate network—the AI has generated “internal models that are meta-cognitive in nature, making it possible for an agent to develop a (limited, implicit, practical) understanding of itself.”
Towards conscious machines
Would a machine endowed with C1 and C2 behave as if it were conscious? Very likely: a smartcar would “know” that it’s seeing something, express confidence in it, report it to others, and find the best solutions for problems. If its self-monitoring mechanisms break down, it may also suffer “hallucinations” or even experience visual illusions similar to humans.
Thanks to C1, it would be able to use the information it has flexibly, and because of C2, it would know the limits of what it knows, says Dehaene. “I think (the machine) would be conscious,” and not merely appear so to humans.
If you’re left with a feeling that consciousness is far more than global information sharing and self-monitoring, you’re not alone.
“Such a purely functional definition of consciousness may leave some readers unsatisfied,” the authors acknowledge.
“But we’re trying to take a radical stance, maybe simplifying the problem. Consciousness is a functional property, and when we keep adding functions to machines, at some point these properties will characterize what we mean by consciousness,” Dehaene concludes.
Image Credit: agsandrew / Shutterstock.com


#426831 Industrial robot runtime programming

Article provided by: www.robotct.ru
In this article, runtime programming is understood as the process of creating an executable program for a robot controller (hereinafter referred to as “robot”) on an external controller. The robot executes the program iteratively, receiving a minimal executable command, or a small batch of commands, at each step. In other words, in runtime programming the executable program is sent to the robot in portions; the robot never has, stores, or knows the entire executable program beforehand. Such an approach allows creating an abstract, parameterized executable program, which the external device generates “on the fly”, i.e., during runtime.
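As a minimal sketch of this idea (in Python, with an invented robot interface rather than any vendor’s real protocol), the external controller’s side of the loop might look like this:

# Points are generated on the fly and sent one at a time, so the complete
# program never exists on the robot controller itself.

def generate_square(cx, cy, size):
    half = size / 2.0
    for dx, dy in [(-1, -1), (-1, 1), (1, 1), (1, -1), (-1, -1)]:
        yield (cx + dx * half, cy + dy * half)

def run(robot, points):
    for x, y in points:
        robot.send_move(x, y)       # send ONE executable command...
        robot.wait_until_reached()  # ...and block until the robot is done

class FakeRobot:  # stand-in for a real connection to the controller
    def send_move(self, x, y):
        print(f"move to ({x:.1f}, {y:.1f})")
    def wait_until_reached(self):
        pass

run(FakeRobot(), generate_square(100.0, 50.0, size=40.0))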
Below is a description and a real example of how runtime programming works.
Typically, a program for a robot is a sequence of positions of the robot’s manipulator. Each position is characterized by the position of the TCP (Tool Center Point), the point at the tip of the tool mounted on the manipulator. By default, the TCP is at the center of the robot’s flange, but its position can be adjusted, and it is often set so that the TCP coincides with the tip of the mounted tool. Therefore, when programming, the TCP position in space is specified, and the robot determines the positions of the manipulator’s joints itself. Further in this article, we will use the term “TCP position”, in other words, the point that the robot shall arrive at.

The program for the robot may also contain control logic (branching, loops), simple mathematical operations, and commands for controlling peripheral devices via analog and digital inputs/outputs. In the proposed approach to runtime programming, a standard PC is used as the external controller. The PC can run powerful software that ensures the necessary level of abstraction (OOP and other paradigms) and tools that make developing complex logic fast and easy (high-level programming languages). The robot itself deals only with the logic that is critical to response rate and that requires the reliability of an industrial controller, for example, a prompt and adequate response to an emergency. Control of the peripherals connected to the robot is simply “proxied” by the robot to the PC, allowing the PC to activate or deactivate the corresponding signals on the robot, much like toggling the pins of an Arduino.

As noted earlier, runtime programming enables sending the program to the robot in portions: usually a set of output signal states and several points, or even a single point. Thus, the TCP trajectory performed by the robot may be built dynamically, and different parts of it may belong to different technological processes, or even to different robots (connected to the same external controller) where a group of robots works together.
For example, the robot moves to one working area and performs the required operations, then to the next one, then to yet another, and then back to the first, and so on. In the different working areas, the robot performs operations belonging to different technological processes, whose programs are executed in parallel threads on the external controller; the controller allocates the robot to processes that do not require its constant presence. This mechanism is similar to the way an OS allocates processor time (an execution resource) to various threads, except that executors are not bound to threads for the whole period of program execution.
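In Python, the scheduling analogy can be made concrete with a lock that plays the role of the “execution resource” (a toy illustration; a real controller would do much more bookkeeping):

import threading

robot_lock = threading.Lock()  # the robot as a shared execution resource

def process(name, batches):
    for batch in batches:
        with robot_lock:  # the robot is allocated to this process...
            for point in batch:
                print(f"{name}: move to {point}")
        # ...and released here, so another process may claim it

a = threading.Thread(target=process, args=("welding", [[(0, 0), (0, 10)]]))
b = threading.Thread(target=process, args=("painting", [[(5, 5), (6, 5)]]))
a.start(); b.start(); a.join(); b.join()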
A little more theory, and we will proceed to practice.
Description of the existing methods of programming industrial robots.
Setting aside the runtime programming approach introduced in this article, two ways of programming industrial robots are usually identified: online and offline programming.
In online programming, the programmer interacts directly with the robot at its place of use. Using the remote control, or by physically moving it, the tool (TCP) mounted on the flange of the robot is brought to each desired point.
The advantage of this method is its low barrier to entry: one does not have to know anything about programming; it is enough to specify the sequence of robot positions.
An important disadvantage is the time required, which grows significantly once the program reaches even a few dozen (not to mention thousands of) points, or whenever the program is subsequently modified. In addition, while being taught, the robot cannot be used for work.
Offline programming, as the name implies, takes place away from the robot and its controller. The executable program is developed in a programming environment on a PC, after which it is loaded into the robot in its entirety. However, programming tools for such development are not included in the basic delivery set of the robot; they are additional options purchased separately, and expensive on the whole.
The advantage of offline programming is that the robot may remain in production and keep working while the program is being developed; the robot is only needed to debug finished programs. There is no need to travel to the automation site and program the robot in person.
A great disadvantage of existing offline programming environments is their high cost. Besides, they cannot dynamically distribute the executable program to different robots.
As an example, let us consider creating a robot program in runtime mode that writes a job ad with a marker.

Result:

ATTENTION! The video is not an advertisement; the vacancy has been filled. The article was written after the video had become outdated, to show the proposed approach to programming.

The written text:
HELLO, PEOPLE! WE NEED A DEVELOPER TO CREATE A WEB INTERFACE OF OUR KNOWLEDGE SYSTEM.
THIS WAY WE WILL BE ABLE TO GET KNOWLEDGE FROM YOU HUMANOIDS.
AND, FINALLY, WE’LL BE ABLE TO CONQUER AND IMPROVE THIS WORLD

READ MORE: HTTP://ROBOTCT.COM/HI
SINCERELY YOURS, SKYNET =^-^=
To make the robot write this text, it was necessary to send over 1,700 points to the robot.
For comparison, the spoiler below contains a screenshot of a program that draws a square, written from the robot’s remote control. It has only 5 points (lines 4-8); each point is a complete expression and takes one line. The manipulator traverses the four points and returns to the starting point upon completion.
The screenshot of the remote control with the executable program:

If the whole text were written this way, it would take at least 1,700 lines of code, one line per point. What if you have to change the text, the height of the characters, or the distance between them? Edit all 1,700 point lines? This contradicts the spirit of automation!
So, let’s proceed to the solution…
We have a FANUC LR Mate 200iD robot with an R-30iB series controller cabinet. The robot has a preconfigured TCP at the tip of the marker, and the coordinate system of the desktop is set up, so we can send coordinates directly, without worrying about transforming them from the coordinate system of the table into the coordinate system of the robot.
To implement the program that sends the coordinates to the robot and calculates the absolute values of each point, we will use the RCML programming language, which supports this robot and, importantly, is free for anyone to use.
Let’s describe each letter with points, using relative coordinates inside the frame in which the letter will be inscribed, rather than real-space coordinates. Each letter will be drawn by a separate function that receives the character’s sequence number in the line, the line number, and the letter size as input parameters, and sends the robot a set of points with absolute coordinates calculated for each point.
To write a text, we will have to call a series of functions that draw the letters in the order in which they appear in the text. RCML has a meager set of tools for working with strings, so we will write an external Python script that generates the RCML program—essentially, just the sequence of function calls corresponding to the sequence of letters.
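The real generator lives in the repository linked below; a simplified reconstruction of the idea might look like this in Python (the emitted call syntax and the naming scheme for punctuation are our approximations, not the repository’s actual conventions):

TEXT = "HELLO, PEOPLE!"
CHARS_PER_LINE = 20

lines = ["function main() {"]
for i, ch in enumerate(TEXT):
    if ch == " ":
        continue  # blank cell: nothing to draw
    col, row = i % CHARS_PER_LINE, i // CHARS_PER_LINE
    # e.g. draw_A(col, row); punctuation gets an invented sym_<code> name
    name = ch if ch.isalnum() else f"sym_{ord(ch)}"
    lines.append(f"\tdraw_{name}({col}, {row});")
lines.append("}")
print("\n".join(lines))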
The whole code is available in repository: rct_paint_words
Let us consider the output file in more detail; execution starts from the function main():

Spoiler: “Let us consider the code for drawing a letter, for example, letter A:”
function robot_fanuc::draw_A(x_cell,y_cell){
// Setting the marker to the point, the coordinates of the point are 5% along X and 95% along Y within the letter frame
robot->setPoint(x_cell, y_cell, 5, 95);
// Drawing a line
robot->movePoint(x_cell, y_cell, 50, 5);
// Drawing the second line
robot->movePoint(x_cell, y_cell, 95, 95);
// We get the "roof" of the letter: /\

// Moving the marker lifted from the table to draw the cross line
robot->setPoint(x_cell, y_cell, 35, 50);
// Drawing the cross-line
robot->movePoint(x_cell, y_cell, 65, 50);
// Lifting the marker from the table to move to the next letter
robot->marker_up();
}
End of spoiler

Spoiler: “The functions of moving the marker to the point, with or without lifting, are also very simple:”
// Moving the lifted marker to the point, or setting the point to start drawing
function robot_fanuc::setPoint(x_cell, y_cell, x_percent, y_percent){
// Calculating the absolute coordinates
x = calculate_absolute_coords_x(x_cell, x_percent);
y = calculate_absolute_coords_y(y_cell, y_percent);

robot->marker_up(); // Lifting the marker from the table
robot->marker_move(x,y); // Moving
robot->marker_down(); // Lowering the marker to the table
}

// Moving the marker to the point without lifting, or actually drawing
function robot_fanuc::movePoint(x_cell, y_cell, x_percent, y_percent){
x = calculate_absolute_coords_x(x_cell, x_percent);
y = calculate_absolute_coords_y(y_cell, y_percent);
// Here everything is clear
robot->marker_move(x,y);
}
End of spoiler

Spoiler: Functions marker_up, marker_down, marker_move contain only the code of sending the changed part of the TCP point coordinates (Z or XY) to the robot.
function robot_fanuc::marker_up(){
robot->set_real_di("z", SAFE_Z);
er = robot->sendMoveSignal();
if (er != 0){
system.echo("error marker up\n");
throw er;
}
}

function robot_fanuc::marker_down(){
robot->set_real_di("z", START_Z);
er = robot->sendMoveSignal();
if (er != 0){
system.echo("error marker down\n");
throw er;
}
}

function robot_fanuc::marker_move(x,y){
robot->set_real_di("x", x);
robot->set_real_di("y", y);
er = robot->sendMoveSignal();
if (er != 0){
system.echo("error marker move\n");
throw er;
}
}
End of spoiler

All configuration constants, including the size of the letters, their number per line, etc., were put into a separate file.
Spoiler: “Configuration file”
define CHAR_HEIGHT_MM 50 // Character height in mm
define CHAR_WIDTH_PERCENT 60 // Character width in percentage of height

define SAFE_Z -20 // Safe position of the tip of the marker along the z-axis
define START_Z 0 // Working position of the tip of the marker along the z-axis

// Working area border
define BORDER_Y 120
define BORDER_X 75

// ON/OFF signals
define ON 1
define OFF 0

// Pauses between sending certain signals, milliseconds
define _SIGNAL_PAUSE_MILLISEC 50
define _OFF_PAUSE_MILLISEC 200

// Euler angles of the initial marker position
define START_W -179.707 // Roll
define START_P -2.500 // Pitch
define START_R 103.269 // Yaw

// Euler angles of marker turn
define SECOND_W -179.704
define SECOND_P -2.514
define SECOND_R -14.699

define CHAR_OFFSET_MM 4 // Spacing between letters

define UFRAME 4 // Table number
define UTOOL 2 // Tool number
define PAYLOAD 4 // Load number
define SPEED 100 // Speed
define CNT 0 // Movement smoothness parameter
define ROTATE_SPEED // Speed in turn

define HOME_PNS 4 // The number of the PNS program for home position return
End of spoiler
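The listings above call calculate_absolute_coords_x/y without showing them. The arithmetic is straightforward; a plausible Python rendering, using the configuration constants above, would be (our reconstruction, not code from the repository):

CHAR_HEIGHT_MM = 50
CHAR_WIDTH_PERCENT = 60
CHAR_OFFSET_MM = 4
BORDER_X = 75
BORDER_Y = 120

CHAR_WIDTH_MM = CHAR_HEIGHT_MM * CHAR_WIDTH_PERCENT / 100.0  # 30 mm

def calculate_absolute_coords_x(x_cell, x_percent):
    # Left edge of the letter's cell, plus the offset within the cell.
    cell_origin = BORDER_X + x_cell * (CHAR_WIDTH_MM + CHAR_OFFSET_MM)
    return cell_origin + CHAR_WIDTH_MM * x_percent / 100.0

def calculate_absolute_coords_y(y_cell, y_percent):
    cell_origin = BORDER_Y + y_cell * (CHAR_HEIGHT_MM + CHAR_OFFSET_MM)
    return cell_origin + CHAR_HEIGHT_MM * y_percent / 100.0

print(calculate_absolute_coords_x(1, 50))  # X of the center of column 1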

In total, we’ve got about 300 lines of high-level code that took no more than an hour to develop and write.
If the problem had been solved in the “straightforward” manner, by online programming point by point, it would have taken more than 9 hours (approximately 20-25 seconds per point, given that there are over 1,700 points). In that case, the developer’s sufferings are unimaginable :), especially upon discovering that he forgot the indents between the frames the letters are inscribed in, or got the height of the letters wrong so that the text did not fit.
Conclusion:
The use of runtime programming is one of the ways to create executable software. The advantages of this approach include the following:
The possibility of writing and debugging programs without the need to stop the robot, thus minimizing the downtime for changeover.
A parameterized executable program that’s easy to edit.
Dynamic activation and deactivation of robots within an active technological task, and cooperation between robots from various manufacturers.
Thus, with runtime programming, an executable command may be described so that any robot within the working group can execute it, or written for a particular robot that will be the only one to execute it.
However, this approach has one significant limitation: the robot may misinterpret or simply ignore the displacement smoothing instruction (CNT), since when only the current point is sent, the robot knows nothing about the next one and cannot calculate the smoothed trajectory for passing the current point.
Spoiler: “What is trajectory smoothing?”
When moving the robot’s tool, two parameters may be adjusted:
Travel speed
Level of smoothing
Travel speed sets the speed of the tool travel in mm/sec.
Level of smoothing (CNT) allows the robot to pass a group of points along a blended trajectory without stopping exactly at each one; the higher the value, the further the trajectory may deviate from the intermediate points.

End of spoiler

The danger of using this instruction in runtime mode is that the robot reports arrival at a smoothed target point while it is actually still moving towards it; it does so in order to request the next point and calculate the smoothing. Evidently, it is impossible to know exactly where the robot is when it passes such a point. Moreover, the tool on the manipulator may need to be activated at a specific point: the robot will signal that the point has been reached when it actually has not, and the tool will be enabled before it is needed. In the best case, the robot will simply ignore the CNT instruction (depending on the model).
This may be fixed by sending two or more points at a time, so that a CNT point is never the last one; however, this increases program complexity and the burden on the programmer.
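A sketch of that workaround in Python (hypothetical point format; illustration only): group points into batches so that a smoothed point is always followed, within its batch, by the next point the robot needs for the blend calculation.

def batch_points(points):
    """points: (x, y, cnt) tuples; cnt > 0 means 'smooth through this point'.
    Group them so a smoothed point is never the last one in its batch."""
    batches, current = [], []
    for x, y, cnt in points:
        current.append((x, y, cnt))
        if cnt == 0:  # an exact point: safe place to end a batch
            batches.append(current)
            current = []
    if current:  # trailing smoothed points: flush them anyway
        batches.append(current)
    return batches

print(batch_points([(0, 0, 0), (10, 0, 50), (10, 10, 0)]))
# -> [[(0, 0, 0)], [(10, 0, 50), (10, 10, 0)]]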
Article provided by: robotct.ru
Photo Credits: Robotct.ru

