Tag Archives: dynamic

#431592 Reactive Content Will Get to Know You ...

The best storytellers react to their audience. They look for smiles, signs of awe, or boredom; they simultaneously and skillfully read both the story and their sitters. Kevin Brooks, a seasoned storyteller working for Motorola’s Human Interface Labs, explains, “As the storyteller begins, they must tune in to… the audience’s energy. Based on this energy, the storyteller will adjust their timing, their posture, their characterizations, and sometimes even the events of the story. There is a dialog between audience and storyteller.”
Shortly after I read the script to Melita, the latest virtual reality experience from Madrid-based immersive storytelling company Future Lighthouse, CEO Nicolas Alcalá explained to me that the piece is an example of “reactive content,” a concept he’s been working on since his days at Singularity University.

For the first time in history, we have access to technology that can merge the reactive and affective elements of oral storytelling with the affordances of digital media, weaving stunning visuals, rich soundtracks, and complex meta-narratives in a story arena that has the capability to know you more intimately than any conventional storyteller could.
It’s no exaggeration to say that the storytelling potential here is phenomenal.
In short, we can refer to content as reactive if it reads and reacts to users based on their body rhythms, emotions, preferences, and data points. Artificial intelligence is used to analyze users’ behavior or preferences to sculpt unique storylines and narratives, essentially allowing for a story that changes in real time based on who you are and how you feel.
The development of reactive content will allow those working in the industry to go one step further than simply translating the essence of oral storytelling into VR. Rather than having a narrative experience with a digital storyteller who can read you, reactive content has the potential to create an experience with a storyteller who knows you.
This means being able to subtly insert minor personal details that have a specific meaning to the viewer. When we talk to our friends we often use experiences we’ve shared in the past or knowledge of our audience to give our story as much resonance as possible. Targeting personal memories and aspects of our lives is a highly effective way to elicit emotions and aid in visualizing narratives. When you can do this with the addition of visuals, music, and characters—all lifted from someone’s past—you have the potential for overwhelmingly engaging and emotionally-charged content.
Future Lighthouse informs me that for now, reactive content will rely primarily on biometric feedback technology such as breathing, heartbeat, and eye-tracking sensors. A simple example would be a story in which parts of the environment or soundscape change in sync with the user’s heartbeat and breathing, or characters who call you out for not paying attention.
The next step would be characters and situations that react to the user’s emotions, wherein algorithms analyze biometric information to make inferences about states of emotional arousal (“why are you so nervous?” etc.). Another example would be implementing the use of “arousal parameters,” where the audience can choose what level of “fear” they want from a VR horror story before algorithms modulate the experience using information from biometric feedback devices.
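None of this code is public, but the basic idea of an “arousal parameter” is easy to sketch. The short Python fragment below is purely illustrative: the resting/maximum heart-rate constants, the 0–1 “fear” scale, and the blending rule are all assumptions for the sake of the example, not Future Lighthouse’s implementation.

# Illustrative sketch only: map a chosen fear setting plus live heart-rate
# readings to an intensity value a VR horror scene could use. The constants
# and the mapping are hypothetical.

RESTING_HR = 70.0   # assumed resting heart rate, beats per minute
MAX_HR = 140.0      # assumed upper bound for the mapping

def arousal_level(heart_rate_bpm: float) -> float:
    """Normalize a heart-rate reading to a 0..1 arousal estimate."""
    span = MAX_HR - RESTING_HR
    return max(0.0, min(1.0, (heart_rate_bpm - RESTING_HR) / span))

def scene_intensity(chosen_fear: float, heart_rate_bpm: float,
                    gain: float = 0.5) -> float:
    """Blend the user's chosen fear level with measured arousal.

    If the user is already highly aroused, the experience backs off;
    if they are calm, it ramps up toward the chosen level.
    """
    arousal = arousal_level(heart_rate_bpm)
    return max(0.0, min(1.0, chosen_fear + gain * (chosen_fear - arousal)))

if __name__ == "__main__":
    # User asked for a moderately scary experience (0.6 out of 1.0).
    for hr in (72, 95, 130):
        print(hr, round(scene_intensity(0.6, hr), 2))

In a real system the heart-rate value would come from a biometric sensor stream rather than a hard-coded loop, and the intensity value would drive lighting, sound, or character behavior.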
The company’s long-term goal is to gather research on storytelling conventions and produce a catalogue of story “wireframes.” This entails distilling the basic formula to different genres so they can then be fleshed out with visuals, character traits, and soundtracks that are tailored for individual users based on their deep data, preferences, and biometric information.
The development of reactive content will go hand in hand with a renewed exploration of diverging, dynamic storylines, and multi-narratives, a concept that hasn’t had much impact in the movie world thus far. In theory, the idea of having a story that changes and mutates is captivating largely because of our love affair with serendipity and unpredictability, a cultural condition theorist Arthur Kroker refers to as the “hypertextual imagination.” This feeling of stepping into the unknown with the possibility of deviation from the habitual translates as a comforting reminder that our own lives can take exciting and unexpected turns at any moment.
The concept entered mainstream culture with the classic Choose Your Own Adventure book series, launched in the late 70s, which enjoyed great success in its literary form. Filmic takes on the theme, however, have made somewhat less of an impression. DVDs like I’m Your Man (1998) and Switching (2003) both use scene-selection tools to determine the direction of the storyline.
A more recent example comes from Kino Industries, who claim to have developed the technology to allow filmmakers to produce interactive films in which viewers can use smartphones to quickly vote on which direction the narrative takes at numerous decision points throughout the film.
The main problem with diverging narrative films has been the stop-start nature of the interactive element: when I’m immersed in a story I don’t want to have to pick up a controller or remote to select what’s going to happen next. Every time the audience is given the option to take a new path (“press this button”, “vote on X, Y, Z”) the narrative— and immersion within that narrative—is temporarily halted, and it takes the mind a while to get back into this state of immersion.
Reactive content has the potential to resolve these issues by enabling passive interactivity—that is, input and output without having to pause and actively make decisions or engage with the hardware. This will result in diverging, dynamic narratives that will unfold seamlessly while being dependent on and unique to the specific user and their emotions. Passive interactivity will also remove the game feel that can often be a symptom of interactive experiences and put a viewer somewhere in the middle: still firmly ensconced in an interactive dynamic narrative, but in a much subtler way.
While reading the Melita script I was particularly struck by a scene in which the characters start to engage with the user and there’s a synchronicity between the user’s heartbeat and objects in the virtual world. As the narrative unwinds and the words of Melita’s character get more profound, parts of the landscape, which seemed to be flashing and pulsating at random, come together and start to mimic the user’s heartbeat.
In 2013, Jane Aspell of Anglia Ruskin University (UK) and Lukas Heydrich of the Swiss Federal Institute of Technology proved that a user’s sense of presence and identification with a virtual avatar could be dramatically increased by syncing the on-screen character with the heartbeat of the user. The relationship between bio-digital synchronicity, immersion, and emotional engagement is something that will surely have revolutionary narrative and storytelling potential.
Image Credit: Tithi Luadthong / Shutterstock.com

Posted in Human Robots | Leave a comment

#431081 How the Intelligent Home of the Future ...

As Dorothy famously said in The Wizard of Oz, there’s no place like home. Home is where we go to rest and recharge. It’s familiar, comfortable, and our own. We take care of our homes by cleaning and maintaining them, and fixing things that break or go wrong.
What if our homes, on top of giving us shelter, could also take care of us in return?
According to Chris Arkenberg, this could be the case in the not-so-distant future. As part of Singularity University’s Experts On Air series, Arkenberg gave a talk called “How the Intelligent Home of The Future Will Care For You.”
Arkenberg is a research and strategy lead at Orange Silicon Valley, and was previously a research fellow at the Deloitte Center for the Edge and a visiting researcher at the Institute for the Future.
Arkenberg told the audience that there’s an evolution going on: homes are going from being smart to being connected, and will ultimately become intelligent.
Market Trends
Intelligent home technologies are just now budding, but broader trends point to huge potential for their growth. We as consumers already expect continuous connectivity wherever we go—what do you mean my phone won’t get reception in the middle of Yosemite? What do you mean the smart TV is down and I can’t stream Game of Thrones?
As connectivity has evolved from a privilege to a basic expectation, Arkenberg said, we’re also starting to have a better sense of what it means to give up our data in exchange for services and conveniences. It’s so easy to click a few buttons on Amazon and have stuff show up at your front door a few days later—never mind that data about your purchases gets recorded and aggregated.
“Right now we have single devices that are connected,” Arkenberg said. “Companies are still trying to show what the true value is and how durable it is beyond the hype.”

Connectivity is the basis of an intelligent home. To take a dumb object and make it smart, you get it online. Belkin’s Wemo, for example, lets users control lights and appliances wirelessly and remotely, and can be paired with Amazon Echo or Google Home for voice-activated control.
Speaking of voice-activated control, Arkenberg pointed out that physical interfaces are evolving, too, to the point that we’re actually getting rid of interfaces entirely, or transitioning to ‘soft’ interfaces like voice or gesture.
Drivers of Change
Consumers are open to smart home tech and companies are working to provide it. But what are the drivers making this tech practical and affordable? Arkenberg said there are three big ones:
Computation: Computers have gotten exponentially more powerful over the past few decades. If it wasn’t for processors that could handle massive quantities of information, nothing resembling an Echo or Alexa would even be possible. Artificial intelligence and machine learning are powering these devices, and they hinge on computing power too.
Sensors: “There are more things connected now than there are people on the planet,” Arkenberg said. Market research firm Gartner estimates there are 8.4 billion connected things currently in use. Wherever digital can replace hardware, it’s doing so. Cheaper sensors mean we can connect more things, which can then connect to each other.
Data: “Data is the new oil,” Arkenberg said. “The top companies on the planet are all data-driven giants. If data is your business, though, then you need to keep finding new ways to get more and more data.” Home assistants are essentially data collection systems that sit in your living room and collect data about your life. That data in turn sets up the potential of machine learning.
Colonizing the Living Room
Alexa and Echo can turn lights on and off, and Nest can help you be energy-efficient. But beyond these, what does an intelligent home really look like?
Arkenberg’s vision of an intelligent home uses sensing, data, connectivity, and modeling to manage resource efficiency, security, productivity, and wellness.
Autonomous vehicles provide an interesting comparison: they’re surrounded by sensors that constantly map the world, building dynamic models to understand the change around them and thereby predict what comes next. Might we want this to become a model for our homes, too? By making them smart and connecting them, Arkenberg said, they’d become “more biological.”
There are already several products on the market that fit this description. RainMachine uses weather forecasts to adjust home landscape watering schedules. Neurio monitors energy usage, identifies areas where waste is happening, and makes recommendations for improvement.
These are small steps in connecting our homes with knowledge systems and giving them the ability to understand and act on that knowledge.
He sees the homes of the future being equipped with digital ears (in the form of home assistants, sensors, and monitoring devices) and digital eyes (in the form of facial recognition technology and machine vision to recognize who’s in the home). “These systems are increasingly able to interrogate emotions and understand how people are feeling,” he said. “When you push more of this active intelligence into things, the need for us to directly interface with them becomes less relevant.”
Could our homes use these same tools to benefit our health and wellness? FREDsense uses bacteria to create electrochemical sensors that can be applied to home water systems to detect contaminants. If that’s not personal enough for you, get a load of this: ClinicAI can be installed in your toilet bowl to monitor and evaluate your biowaste. What’s the point, you ask? Early detection of colon cancer and other diseases.
What if one day, your toilet’s biowaste analysis system could link up with your fridge, so that when you opened it it would tell you what to eat, and how much, and at what time of day?
Roadblocks to Intelligence
“The connected and intelligent home is still a young category trying to establish value, but the technological requirements are now in place,” Arkenberg said. We’re already used to living in a world of ubiquitous computation and connectivity, and we have entrained expectations about things being connected. For the intelligent home to become a widespread reality, its value needs to be established and its challenges overcome.
One of the biggest challenges will be getting used to the idea of continuous surveillance. We’ll get convenience and functionality if we give up our data, but how far are we willing to go? “Establishing security and trust is going to be a big challenge moving forward,” Arkenberg said.
There are also cost and reliability, interoperability and fragmentation of devices, and, conversely, what Arkenberg called ‘platform lock-on,’ where you’d end up relying on a single provider’s system and be unable to integrate devices from other brands.
Ultimately, Arkenberg sees homes being able to learn about us, manage our scheduling and transit, watch our moods and our preferences, and optimize our resource footprint while predicting and anticipating change.
“This is the really fascinating provocation of the intelligent home,” Arkenberg said. “And I think we’re going to start to see this play out over the next few years.”
Sounds like a home Dorothy wouldn’t recognize, in Kansas or anywhere else.
Stock Media provided by adam121 / Pond5

Posted in Human Robots | Leave a comment

#428140 Singapore International Robotics Expo

Singapore International Robo Expo debuts as the robotics sector is poised for accelerated growth

In partnership with Experia Events, the Singapore Industrial Automation Association sets its sights on boosting the robotics solutions industry with this strategic global platform for innovation and technology

Singapore, 18 October 2016 – The first Singapore International Robo Expo (SIRE), organised by Experia Events and co-organised by the Singapore Industrial Automation Association (SIAA), will be held from 1 to 2 November 2016, at Sands Expo and Convention Centre, Marina Bay Sands.

Themed Forging the Future of Robotics Solutions, SIRE will comprise an exhibition, product demonstrations, networking sessions and conferences. SIRE aims to be the global platform for governments, the private sector, and academia to engage in dialogue, share industry best practices, network, forge partnerships, and explore funding opportunities for the adoption of robotics solutions.

“SIRE debuts at a time when robotics has been gaining traction around the world due to the need for automation and better productivity. The latest World Robotics Report by the International Federation of Robotics has also identified Singapore as a market with one of the highest robot densities in manufacturing – giving us more opportunities for further development in this field, as well as its extension into the services sector.

“With the S$450 million pledged by the Singapore government to the National Robotics Programme to develop the industry over the next three years, SIRE is aligned with these goals to cultivate the adoption of robotics and support the growing industry. As an association, we are constantly looking for ways to encourage robotics adoption, foster collaboration among partners, and provide funding support for our members. SIRE is precisely the strategic platform for this,” said Mr Oliver Tian, President, SIAA.

SIRE has attracted strong interest from institutes of higher learning (IHLs), research institutes, local and international enterprises, with innovation and technology applicable for a vast range of industries from manufacturing to healthcare.

ST Kinetics, the Title Sponsor for the inaugural edition of the event, is one of the key exhibitors, together with other leading industry players such as ABB, Murata, Panasonic, SICK Pte Ltd, and Tech Avenue amongst others. Emerging SMEs such as H3 Dynamics, Design Tech Technologies and SMP Robotics Singapore will also showcase their innovations at the exhibition. Participating research institute, A*STAR’s SIMTech, and other IHLs supporting the event include Ngee Ann Polytechnic, Republic Polytechnic and the Institute of Technical Education (ITE).

Visitors will also be able to view “live” demonstrations at the Demo Zone and come up close with the latest innovations and technologies. Some of the key highlights at the zone include the world’s only fully autonomous outdoor security robot, developed by SMP Robotics Singapore, as well as ABB’s YuMi (IRB 14000), a collaborative robot designed to work safely in close collaboration and proximity with humans. Dynamic Stabilization Systems, SIMTech and Design Tech will also be demonstrating the capabilities of their robotic innovations at the zone.

At the Singapore International Robo Convention, key speakers representing regulators, industry leaders and academia will come together, exchange insights and engage in discourse to address the various aspects of robotic and automation technology, industry trends and case studies of robotics solutions. There will also be a session discussing the details of the Singapore National Robotics Programme led by Mr Haryanto Tan, Head, Precision Engineering Cluster Group, EDB Singapore.

SIRE will also host the France-Singapore Innovation Days in collaboration with Business France, the national agency supporting the international development of the French economy. The organisation will lead a delegation of 20 key French companies to explore business and networking opportunities with Singapore firms, and conduct specialized workshops.

To foster a deeper appreciation of robotics and inspire the next generation of robotics and automation experts, the event will also host students from institutes of higher learning on Education Day on 2 November. Students will be able to immerse themselves in the exciting developments of the robotics industry and get a sampling of how robotics can be applied in real-world settings by visiting the exhibits and interacting with representatives from participating companies.

Mr Leck Chet Lam, Managing Director, Experia Events, says, “SIRE will be a game changer for the industry. We are expecting the industry’s best and new-to-market players to showcase their innovations, which could potentially add value to the operations across a wide spectrum of industry sectors, from manufacturing to retail and service, and healthcare. We also hope to inspire the robotics and automation experts of tomorrow with our Education Day programme.

“Experia Events prides itself as a company that organises strategic events for the global stage, featuring thought leaders and working with the industries’ best. It is an honour for us to be partnering SIAA, a recognised body and key player in the robotics industry. We are privileged to be able to help elevate Singapore’s robotics industry through SIRE and are pulling out all the stops to ensure that the event will be a resounding success.”

SIRE is supported by Strategic Partner, IE Singapore as well as agencies including EDB Singapore, GovTech Singapore, InfoComm Media Development Authority, A*STAR’s SIMTech, and Spring Singapore.

###

For further enquiries, please contact:

Marilyn Ho
Experia Events Pte Ltd
Director, Communications
Tel: +65 6595 6130
Email: marilynho@experiaevents.com

Genevieve Yeo
Experia Events Pte Ltd
Assistant Manager, Communications
Tel: +65 6595 6131
Email: genevieveyeo@experiaevents.com
The post Singapore International Robotics Expo appeared first on Roboticmagazine.

Posted in Human Robots | Leave a comment

#426831 Industrial robot runtime programming

Article provided by: www.robotct.ru
In this article, runtime programming is understood as the process of creating an executable program for a robot controller (hereinafter referred to as “robot”) on an external controller. In this case the robot executes the program iteratively: the external controller sends it the minimum executable command, or a small batch of commands, at a time. In other words, in runtime programming the executable program is sent to the robot in portions, so the robot does not have, store, or know the entire executable program beforehand. Such an approach allows creating an abstract parameterized executable program, which is generated by the external device “on the fly”, i.e., during runtime.
Under the cut, there is the description and a real example of how runtime programming works.
Typically, a program for a robot is a sequence of positions of the robot manipulator. Each of these positions is characterized by the TCP (Tool Center Point) position, the point of the tip of the tool mounted on the manipulator (by default, TCP is in the center of the robot’s flange, see the picture below, but its position may be adjusted, and it often coincides with the tip of the tool mounted on the manipulator). Therefore, when programming, the TCP position in space is usually specified, and the robot determines the positions of the manipulator’s joints itself. Further in this article, we will use the term “TCP position”, or, in other words, the point that the robot shall arrive at.

The program for the robot may also contain control logic (branching, loops), simple mathematical operations, and commands for controlling peripheral devices – analog and digital inputs/outputs. In the proposed approach to runtime programming, a standard PC is used as the external controller, which can run powerful software that provides the necessary level of abstraction (OOP and other paradigms) and tools that ensure speed and ease of developing complex logic (high-level programming languages). The robot itself only has to deal with the logic that is critical to response rate and requires the reliability of an industrial controller, for example a prompt and adequate response to an emergency situation. The control of the peripherals connected to the robot is simply “proxied” by the robot to the PC, allowing the PC to activate or deactivate the corresponding signals on the robot; it is similar to controlling the pins (“legs”) of an Arduino.
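As a rough illustration of what the external controller does, here is a minimal sketch in Python of the send-one-command-at-a-time loop. The wire protocol (“MOVE …”/“DONE”, “DO …”), the helper names, and the IP address are invented for the example; they are not RCML’s or FANUC’s actual interface.

# Illustrative only: a minimal "runtime programming" loop on the external PC.
# The text protocol ("MOVE x y z" / "DONE", "DO n state") is made up for the
# example; a real robot controller defines its own interface.
import socket

class RobotLink:
    def __init__(self, host: str, port: int):
        self.sock = socket.create_connection((host, port))

    def _request(self, line: str) -> str:
        self.sock.sendall((line + "\n").encode())
        return self.sock.recv(1024).decode().strip()

    def move_to(self, x: float, y: float, z: float) -> None:
        # Send a single point and block until the robot reports completion.
        reply = self._request(f"MOVE {x:.3f} {y:.3f} {z:.3f}")
        if reply != "DONE":
            raise RuntimeError(f"move failed: {reply}")

    def set_output(self, channel: int, state: bool) -> None:
        # "Proxied" digital output: the robot just switches the signal.
        self._request(f"DO {channel} {1 if state else 0}")

if __name__ == "__main__":
    robot = RobotLink("192.168.0.10", 6000)   # hypothetical address
    # The program exists only on the PC; the robot sees one command at a time.
    for point in [(300, 0, 50), (300, 100, 50), (400, 100, 50)]:
        robot.move_to(*point)
    robot.set_output(1, True)   # e.g. switch a marker actuator or gripper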

As noted earlier, runtime programming enables sending the program to the robot in portions. Usually a set of output-signal states and several points, or even a single point, is sent. Thus the trajectory of the TCP movement performed by the robot may be built dynamically, and its parts may belong to different technological processes, or even to different robots (connected to the same external controller) where a group of robots works.
For example, the robot moves to one working area, performs the required operations, then moves to the next one, then to yet another, and then back to the first, and so on. In different working areas the robot performs operations belonging to different technological processes, whose programs are executed in parallel threads on the external controller; the controller allocates the robot to the different processes, none of which requires the robot’s constant presence. This mechanism is similar to the way an OS allocates processor time (the execution resource) to various threads, with the difference that executors are not bound to threads for the whole period of program execution.
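A toy sketch of that allocation idea in Python: two technological processes run as threads on the external controller and take turns claiming a single robot through a lock. The Robot stand-in, the points, and the timing are placeholders, not part of RCML.

# Illustrative only: two processes share one robot, the way an OS hands a CPU
# to whichever thread needs it next.
import threading, time

robot_lock = threading.Lock()

def process_a(robot):
    for cycle in range(3):
        with robot_lock:                  # claim the robot for one operation
            robot.move_to(100, 0, 50)
            robot.move_to(100, 50, 50)
        time.sleep(2.0)                   # process A does robot-free work here

def process_b(robot):
    for cycle in range(3):
        with robot_lock:
            robot.move_to(400, 200, 50)
        time.sleep(1.0)                   # process B also frees the robot

class FakeRobot:
    def move_to(self, x, y, z):
        print(f"moving to ({x}, {y}, {z})")
        time.sleep(0.2)

if __name__ == "__main__":
    robot = FakeRobot()
    threads = [threading.Thread(target=process_a, args=(robot,)),
               threading.Thread(target=process_b, args=(robot,))]
    for t in threads: t.start()
    for t in threads: t.join()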
A little more theory, and we will proceed to practice.
Description of the existing methods of programming industrial robots.
Setting aside the runtime programming approach introduced in this article, two ways of programming industrial robots are usually identified: online and offline programming.
The process of online programming occurs with direct interaction between the programmer and the robot at the place of use. Using the remote control (teach pendant), or by physically moving the manipulator, the tool (TCP) mounted on the robot’s flange is moved to the desired point.
The advantage of this method is its ease of approach: one does not have to know anything about programming; it is enough to record the sequence of robot positions.
An important disadvantage of this approach is how time-consuming it becomes once the program grows to several dozen (not to mention thousands of) points, or when the program is subsequently modified. In addition, during such teaching the robot cannot be used for work.
The process of offline programming, as the name implies, occurs away from the robot and its controller. The executable program is developed in a programming environment on a PC, after which it is loaded into the robot in its entirety. However, the programming tools for such development are not included in the robot’s basic delivery set; they are additional options that must be purchased separately and are, on the whole, expensive.
The advantage of offline programming is that the robot may be used in production and may work, while the program is being developed. The robot is only needed to debug ready programs. There is no need to go to the automation object and program the robot in person.
A major disadvantage of existing offline programming environments is their high cost. Besides, they offer no way to dynamically distribute the executable program across different robots.
As an example, let us consider creating a robot program in runtime mode that writes a job ad with a marker.

Result:

ATTENTION! The video is not an advertisement; the vacancy is closed. The article was written after the video had become obsolete, to show the proposed approach to programming.

The written text:
HELLO, PEOPLE! WE NEED A DEVELOPER TO CREATE A WEB INTERFACE OF OUR KNOWLEDGE SYSTEM.
THIS WAY WE WILL BE ABLE TO GET KNOWLEDGE FROM YOU HUMANOIDS.
AND, FINALLY, WE’LL BE ABLE TO CONQUER AND IMPROVE THIS WORLD

READ MORE: HTTP://ROBOTCT.COM/HI
SINCERELY YOURS, SKYNET =^-^=
To make the robot write this text, it was necessary to send over 1,700 points to the robot.
For comparison, below is a screenshot of a program that draws a square, written on the robot’s remote control. It has only 5 points (lines 4-8); each point is in fact a complete expression and takes one line. The manipulator traverses each of the four points and returns to the starting point upon completion.
The screenshot of the remote control with the executable program:

If the text-writing program were written this way, it would take at least 1,700 lines of code, one line per point. What if you have to change the text, or the height of the characters, or the distance between them? Edit all 1,700 point lines? This contradicts the spirit of automation!
So, let’s proceed to the solution…
We have a FANUC LR Mate 200iD robot with an R-30iB series cabinet controller. The robot has a preconfigured TCP at the marker tip and a user coordinate frame on the tabletop, so we can send coordinates directly, without worrying about transforming them from the table’s coordinate system into the robot’s.
To implement the program that sends coordinates to the robot and calculates the absolute values of each point, we will use the RCML programming language, which supports this robot and, importantly, is free for anyone to use.
Let’s describe each letter with points, but in relative coordinates inside the frame in which the letter will be inscribed, rather than in real space coordinates. Each letter will be drawn by a separate function that receives the character’s position in the line, the line number, and the letter size as input parameters, and sends the robot a set of points with calculated absolute coordinates.
To write a text, we will have to call a series of functions that draw the letters in the order in which they appear in the text. RCML has a meager set of tools for working with strings, so we will write an external Python script that generates the RCML program – essentially, just the sequence of function calls corresponding to the sequence of letters.
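The repository linked below contains the real generator; the following Python fragment is only a minimal sketch of the idea. The call syntax, the draw_<letter> naming convention, and the characters-per-line limit are assumptions for illustration, not the repository code.

# Minimal sketch of the generator idea: turn a text string into a sequence of
# RCML-style function calls, one per printable character. The call syntax and
# the line-length limit are assumptions.
CHARS_PER_LINE = 20

def generate_calls(text: str):
    """Yield one (assumed) RCML call per printable character."""
    for i, ch in enumerate(text.upper()):
        col, row = i % CHARS_PER_LINE, i // CHARS_PER_LINE
        if ch.isalnum():                      # spaces and punctuation skipped
            yield f"\trobot->draw_{ch}({col}, {row});"

if __name__ == "__main__":
    # The main()/robot-binding boilerplate around these calls is omitted here.
    print("\n".join(generate_calls("HELLO PEOPLE")))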
The whole code is available in repository: rct_paint_words
Let us consider the output file in more detail; execution starts from the main() function:

Let us consider the code for drawing a letter, for example, the letter A:
function robot_fanuc::draw_A(x_cell, y_cell){
    // Setting the marker to the point; the coordinates are 5% along X and 95% along Y within the letter frame
    robot->setPoint(x_cell, y_cell, 5, 95);
    // Drawing a line
    robot->movePoint(x_cell, y_cell, 50, 5);
    // Drawing the second line
    robot->movePoint(x_cell, y_cell, 95, 95);
    // We get the "roof" /\

    // Moving the marker lifted from the table to draw the cross line
    robot->setPoint(x_cell, y_cell, 35, 50);
    // Drawing the cross-line
    robot->movePoint(x_cell, y_cell, 65, 50);
    // Lifting the marker from the table to move to the next letter
    robot->marker_up();
}

The functions of moving the marker to a point, with or without lifting, are also very simple:
// Moving the lifted marker to the point, or setting the point to start drawing
function robot_fanuc::setPoint(x_cell, y_cell, x_percent, y_precent){
    // Calculating the absolute coordinates
    x = calculate_absolute_coords_x(x_cell, x_percent);
    y = calculate_absolute_coords_y(y_cell, y_precent);

    robot->marker_up();      // Lifting the marker from the table
    robot->marker_move(x,y); // Moving
    robot->marker_down();    // Lowering the marker to the table
}

// Moving the marker to the point without lifting, or actually drawing
function robot_fanuc::movePoint(x_cell, y_cell, x_percent, y_precent){
    // Calculating the absolute coordinates
    x = calculate_absolute_coords_x(x_cell, x_percent);
    y = calculate_absolute_coords_y(y_cell, y_precent);

    // Here everything is clear
    robot->marker_move(x,y);
}
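The article does not show calculate_absolute_coords_x/y. A plausible reading, given the per-cell percentages above and the millimetre constants in the configuration file further below, is a simple cell-plus-percentage mapping; the Python sketch below is an assumption about that geometry, not the repository code.

# Assumed geometry for calculate_absolute_coords_x/y: each character occupies
# a cell of CHAR_WIDTH x CHAR_HEIGHT mm, and a point is given as a percentage
# of that cell. The constants mirror the configuration file; the frame origin
# and axis directions are guesses.
CHAR_HEIGHT_MM = 50
CHAR_WIDTH_MM = CHAR_HEIGHT_MM * 60 // 100   # CHAR_WIDTH_PERCENT = 60
CHAR_OFFSET_MM = 4                           # spacing between letters

def calculate_absolute_coords_x(x_cell: int, x_percent: float) -> float:
    """Left edge of the cell plus a fraction of the cell width."""
    cell_origin = x_cell * (CHAR_WIDTH_MM + CHAR_OFFSET_MM)
    return cell_origin + CHAR_WIDTH_MM * x_percent / 100.0

def calculate_absolute_coords_y(y_cell: int, y_percent: float) -> float:
    """Top edge of the line plus a fraction of the cell height."""
    cell_origin = y_cell * (CHAR_HEIGHT_MM + CHAR_OFFSET_MM)
    return cell_origin + CHAR_HEIGHT_MM * y_percent / 100.0

if __name__ == "__main__":
    # The apex of letter A in cell (0, 0): 50% across, 5% down in draw_A terms.
    print(calculate_absolute_coords_x(0, 50), calculate_absolute_coords_y(0, 5))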

Functions marker_up, marker_down, and marker_move contain only the code that sends the changed part of the TCP point coordinates (Z or XY) to the robot:
function robot_fanuc::marker_up(){
    robot->set_real_di("z", SAFE_Z);
    er = robot->sendMoveSignal();
    if (er != 0){
        system.echo("error marker up\n");
        throw er;
    }
}

function robot_fanuc::marker_down(){
    robot->set_real_di("z", START_Z);
    er = robot->sendMoveSignal();
    if (er != 0){
        system.echo("error marker down\n");
        throw er;
    }
}

function robot_fanuc::marker_move(x,y){
    robot->set_real_di("x", x);
    robot->set_real_di("y", y);
    er = robot->sendMoveSignal();
    if (er != 0){
        system.echo("error marker move\n");
        throw er;
    }
}

All configuration constants, including the size of the letters, their number per line, etc., were put into a separate file.
The configuration file:
define CHAR_HEIGHT_MM 50 // Character height in mm
define CHAR_WIDTH_PERCENT 60 // Character width in percentage of height

define SAFE_Z -20 // Safe position of the tip of the marker along the z-axis
define START_Z 0 // Working position of the tip of the marker along the z-axis

// Working area border
define BORDER_Y 120
define BORDER_X 75

// ON/OFF signals
define ON 1
define OFF 0

// Pauses between sending certain signals, milliseconds
define _SIGNAL_PAUSE_MILLISEC 50
define _OFF_PAUSE_MILLISEC 200

// Euler angles of the initial marker position
define START_W -179.707 // Roll
define START_P -2.500 // Pitch
define START_R 103.269 // Yaw

// Euler angles of marker turn
define SECOND_W -179.704
define SECOND_P -2.514
define SECOND_R -14.699

define CHAR_OFFSET_MM 4 // Spacing between letters

define UFRAME 4 // Table number
define UTOOL 2 // Tool number
define PAYLOAD 4 // Load number
define SPEED 100 // Speed
define CNT 0 // Movement smoothness parameter
define ROTATE_SPEED // Speed in turn

define HOME_PNS 4 // The number of the PNS program for home position return

In total, we’ve got about 300 lines of high-level code that took no more than an hour to develop and write.
If the problem had been solved in the “straightforward” manner by online programming point by point, it would have taken more than 9 hours (approximately 20-25 seconds per point, given that there are over 1,700 points). In that case the developer’s suffering is unimaginable :), especially on discovering that he had forgotten the spacing between the frames the letters are inscribed in, or that the letter height was wrong and the text did not fit.
Conclusion:
The use of runtime programming is one of the ways to create executable software. The advantages of this approach include the following:
The possibility of writing and debugging programs without the need to stop the robot, thus minimizing the downtime for changeover.
A parameterized executable program that’s easy to edit.
Dynamic activation and deactivation of robots within the active technological task, and cooperation between robots from various manufacturers.
Thus, with runtime programming, an executable command may be described so that any robot within the working group can execute it, or written for a particular robot that will be the only one to execute it.
However, this approach has one significant limitation: the robot either misinterprets or simply ignores the displacement smoothing instruction (CNT), since when only the current point is sent, the robot knows nothing about the next one and cannot calculate a smoothed trajectory through the current point.
What is trajectory smoothing?
When moving the robot’s tool, two parameters may be adjusted:
Travel speed
Level of smoothing
Travel speed sets the speed of the tool travel in mm/sec.
Level of smoothing (CNT) determines how closely the tool must approach each point in a group: instead of stopping exactly at every point, the robot blends the trajectory and cuts the corner near the intermediate points.


The danger of using this instruction in runtime mode is that the robot reports arrival at a smoothed target point while it is in fact still moving towards it; the robot does this in order to request the next point and calculate the smoothing. Evidently, it is impossible to know exactly where the robot is when it passes such a point, and tool activation on the manipulator may be required at a specific point. The robot will send a signal that the point has been reached when it actually has not, so the tool will be enabled before it is needed. In the best case, the robot will simply ignore the CNT instruction (depending on the model).
This may be fixed by sending 2 or more points at a time, where the CNT point is not the last one; however, this increases program complexity and the burden on the programmer.
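A sketch of that batching idea in Python: points are grouped so that a point carrying a non-zero CNT (smoothing) value is never the last one in the batch the robot receives. The send_points() callable, the tuple format, and the batching policy are hypothetical, not RCML’s actual API.

# Illustrative sketch of the batching workaround for CNT smoothing.
def stream_in_batches(points, send_points, batch_size=5):
    """points: iterable of (x, y, z, cnt) tuples; send_points: callable."""
    batch = []
    for point in points:
        batch.append(point)
        cnt = point[3]
        # Cut the batch only at a fine-stop point (cnt == 0), so every
        # smoothed point already has its successor available for blending.
        if len(batch) >= batch_size and cnt == 0:
            send_points(batch)
            batch = []
    if batch:
        # Tail flush: the final point of the program has no successor anyway.
        send_points(batch)

if __name__ == "__main__":
    demo = [(0, 0, 0, 0), (10, 0, 0, 50), (10, 10, 0, 50), (0, 10, 0, 0),
            (20, 10, 0, 0)]
    stream_in_batches(demo, lambda batch: print("send", batch), batch_size=2)

The price, as noted above, is extra bookkeeping on the external controller and a small look-ahead delay before each smoothed point can be dispatched.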
Article provided by: robotct.ru
Photo Credits: Robotct.ru

The post Industrial robot runtime programming appeared first on Roboticmagazine.

Posted in Human Robots | Leave a comment