Tag Archives: describe

#431671 The Doctor in the Machine: How AI Is ...

Artificial intelligence has received its fair share of hype recently. However, it’s hype that’s well-founded: IDC predicts worldwide spending on AI and cognitive computing will reach a whopping $46 billion (with a “b”) by 2020, and all the tech giants are jumping on board faster than you can say “ROI.” But what is AI, exactly?
According to Hilary Mason, AI today is being misused as a catch-all term for “any system that uses data to do anything.” But it’s so much more than that. A truly artificially intelligent system is one that learns on its own, one that’s capable of crunching copious amounts of data in order to form associations and intelligently mimic actual human behavior.
It’s what powers the technology anticipating our next online purchase (Amazon), or the virtual assistant that deciphers our voice commands with incredible accuracy (Siri), or even the hipster-friendly recommendation engine that helps you discover new music before your friends do (Pandora). But AI is moving past these consumer-pleasing “nice-to-haves” and getting down to serious business: saving our butts.
Much in the same way robotics entered manufacturing, AI is making its mark in healthcare by automating mundane, repetitive tasks. This is especially true in the case of detecting cancer. By leveraging the power of deep learning, algorithms can now be trained to distinguish between sets of pixels in an image that represent cancer and sets that don’t—not unlike how Facebook’s image recognition software tags pictures of our friends without us having to type in their names first. This software can then scour millions of medical images (MRIs, CT scans, etc.) in a single day to detect anomalies at a scale humans just aren’t capable of. That’s huge.
As if that wasn’t enough, these algorithms are constantly learning and evolving, getting better at making these associations with each new data set that gets fed to them. Radiology, dermatology, and pathology will experience a giant upheaval as tech giants and startups alike jump in to bring these deep learning algorithms to a hospital near you.
In fact, some already are: the FDA recently gave their seal of approval for an AI-powered medical imaging platform that helps doctors analyze and diagnose heart anomalies. This is the first time the FDA has approved a machine learning application for use in a clinical setting.
But how efficient is AI compared to humans, really? Well, aside from the obvious fact that software programs don’t get bored or distracted or have to check Facebook every twenty minutes, AI is exponentially better than us at analyzing data.
Take, for example, IBM’s Watson. Watson analyzed genomic data from both tumor cells and healthy cells and was ultimately able to glean actionable insights in a mere 10 minutes. Compare that to the 160 hours it would have taken a human to analyze that same data. Diagnoses aside, AI is also being leveraged in pharmaceuticals to aid in the very time-consuming grunt work of discovering new drugs, and all the big players are getting involved.
But AI is far from being just a behind-the-scenes player. Gartner recently predicted that by 2025, 50 percent of the population will rely on AI-powered “virtual personal health assistants” for their routine primary care needs. What this means is that consumer-facing voice and chat-operated “assistants” (think Siri or Cortana) would, in effect, serve as a central hub of interaction for all our connected health devices and the algorithms crunching all our real-time biometric data. These assistants would keep us apprised of our current state of well-being, acting as a sort of digital facilitator for our personal health objectives and an always-on health alert system that would notify us when we actually need to see a physician.
Slowly, and thanks to the tsunami of data and advancements in self-learning algorithms, healthcare is transitioning from a reactive model to more of a preventative model—and it’s completely upending the way care is delivered. Whether Elon Musk’s dystopian outlook on AI holds any weight or not is yet to be determined. But one thing’s certain: for the time being, artificial intelligence is saving our lives.
Image Credit: Jolygon / Shutterstock.com


#431315 Better Than Smart Speakers? Japan Is ...

While American internet giants are developing speakers, Japanese companies are working on robots and holograms. They all share a common goal: to create the future platform for the Internet of Things (IoT) and smart homes.
Names like Bocco, EMIEW3, Xperia Assistant, and Gatebox may not ring a bell for most people outside of Japan, but Sony, Hitachi, Sharp, and Softbank most certainly do. Those companies, along with Japanese start-ups, are the ones behind that short list of names, having developed robots, robot concepts, and even holograms.
While there are distinct differences between the various systems, they share the potential to act as a remote control for IoT devices and smart homes. It is a very different direction than that taken by companies like Google, Amazon, and Apple, who have so far focused on building IoT speaker systems.
Bocco robot. Image Credit: Yukai Engineering
“Technology companies are pursuing the platform—or smartphone if you will—for IoT. My impression is that Japanese companies—and Japanese consumers—prefer that such a platform should not just be an object, but a companion,” says Kosuke Tatsumi, designer at Yukai Engineering, a startup that has developed the Bocco robot system.
At Hitachi, a spokesperson said that the company’s human symbiotic service robot, EMIEW3, is currently in the field, doing proof-of-value tests at customer sites to investigate needs and potential solutions. This could include working as an interactive control system for the Internet of Things:
“EMIEW3 is able to communicate with humans, thus receive instructions, and as it is connected to a robotics IT platform, it is very much capable of interacting with IoT-based systems,” the spokesperson said.
The power of speech is getting feet
Gartner analysis predicts that there will be 8.4 billion internet-connected devices—collectively making up the Internet of Things—by the end of 2017. 5.2 billion of those devices are in the consumer category. By the end of 2020, the number of IoT devices will rise to 12.8 billion—and that is just in the consumer category.
As a child of the 80s, I can vividly remember how much fun it was to have separate remote controls for the TV, the VCR, and the stereo. I can easily imagine my internet-connected refrigerator, thermostat, television, and toaster all trying to work out which of them I’m talking to and what I want done.
The consensus seems to be that speech will be the way we interact with many, if not most, IoT devices, and that some form of virtual assistant will serve as the IoT platform, or remote control. Almost everything else is still up for grabs, despite an early surge for speaker-based systems like those from Amazon, Google, and Apple.
Why robots could rule
Famous android creator and robot scientist Dr. Hiroshi Ishiguro sees the interaction between humans and the AI embedded in speakers or robots as central to both approaches. From there, the approaches differ greatly.
Image Credit: Hiroshi Ishiguro Laboratories
“It is about more than the difference of form. Speaking to an Amazon Echo is not a natural kind of interaction for humans. That is part of what we in Japan are creating in many human-like robot systems,” he says. “The human brain is constructed to recognize and interact with humans. This is part of why it makes sense to focus on developing the body for the AI mind as well as the AI mind itself. In a way, you can describe it as the difference between developing an assistant, which could be said to be what many American companies are currently doing, and a companion, which is more the focus here in Japan.”
Another advantage is that robots are more kawaii—a multifaceted Japanese word that can be translated as “cute”—than speakers are. This makes it easy for people to relate to them and forgive them.
“People are more willing to forgive children when they make mistakes, and the same is true with a robot like Bocco, which is designed to look kawaii and childlike,” Kosuke Tatsumi explains.
Japanese robots and holograms with IoT-control capabilities
So, what exactly do these robot and hologram companions look like, what can they do, and who’s making them? Here are seven examples of Japanese companies working to go a step beyond smart speakers with personable robots and holograms.
1. In 2016 Sony’s mobile division demonstrated the Xperia Agent concept robot that recognizes individual users, is voice controlled, and can do things like control your television and receive calls from services like Skype.

2. Sharp launched its Home Assistant at CES 2016: a robot-like, voice-controlled assistant that can control, among other things, air conditioning units and televisions. Sharp has also launched a robotic phone called RoBoHon.
3. Gatebox has created a holographic virtual assistant. Cynics will say that it is primarily the expression of an otaku (Japanese for nerd) dream of living with a manga heroine. Gatebox is, however, able to control things like lights, TVs, and other systems through API integration. It also provides its owner with weather-related advice like “remember your umbrella, it looks like it will rain later.” Gatebox can be controlled by voice, gesture, or via an app.
4. Hitachi’s EMIEW3 robot is designed to assist people in businesses and public spaces. It is connected via the cloud to a robotics IT platform that acts as a “remote brain.” Hitachi is currently investigating business use cases for EMIEW3, which could include the role of a controlling platform for IoT devices.

5. Softbank’s Pepper robot has been used by Avatarion as a platform for controlling medical IoT devices such as smart thermometers. The company has also developed various in-house systems that enable Pepper to control IoT devices like a coffee machine: a user simply asks Pepper to brew a cup of coffee, and it starts the machine.
6. Yukai Engineering’s Bocco registers when a person (e.g., young child) comes home and acts as a communication center between that person and other members of the household (e.g., parent still at work). The company is working on integrating voice recognition, voice control, and having Bocco control things like the lights and other connected IoT devices.
7. Last year Toyota launched the Kirobo Mini, a companion robot which aims to, among other things, help its owner by suggesting “places to visit, routes for travel, and music to listen to” during the drive.

Today, Japan. Tomorrow…?
One of the key questions is whether this emerging phenomenon is a purely Japanese thing, and whether the country’s love of robots makes it fundamentally different. Japan is, after all, a country where new units of Softbank’s Pepper robot routinely sell out in minutes and the RoBoHon robot-phone has its own cafe nights in Tokyo.
It is a country where TV introduces you to friendly, helpful robots like Doraemon and Astro Boy. I, on the other hand, first met robots in the shape of Arnold Schwarzenegger’s Terminator and struggled to work out why robots seemed intent on permanently borrowing things like clothes and motorcycles, not to mention why they hated people called Sarah.
However, research suggests that a big part of the reason the Japanese seem to like robots is a combination of exposure and positive experiences that leads to greater acceptance of them. As robots spread to more and more industries—and into our homes—our acceptance of them will grow.
The argument is also backed by a project by Avatarion, which used Softbank’s Nao-robot as a classroom representative for children who were in the hospital.
“What we found was that the other children quickly adapted to interacting with the robot and treating it as the physical representation of the child who was in hospital. They accepted it very quickly,” Thierry Perronnet, General Manager of Avatarion, explains.
His company has also developed solutions where Softbank’s Pepper robot is used as an in-home nurse and controls various medical IoT devices.
If robots end up becoming our preferred method for controlling IoT devices, it is by no means certain that said robots will be coming from Japan.
“I think that the goal for both Japanese and American companies—including the likes of Google, Amazon, Microsoft, and Apple—is to create human-like interaction. For this to happen, technology needs to evolve and adapt to us and how we are used to interacting with others, in other words, have a more human form. Humans’ speed of evolution cannot keep up with technology’s, so it must be the technology that changes,” Dr. Ishiguro says.
Image Credit: Sony Mobile Communications


#430706 Researchers describe pneumatic actuator ...

A new study demonstrated the design, potential applications, and advantages of an innovative multi-chambered soft pneumatic actuator. Researchers described how the actuator generates cyclical motion and characterized its trajectory and the force it exerts in an article published in Soft Robotics.


#428172 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE: Conscious Exotica
Murray Shanahan | Aeon
"In what follows I attempt to…[describe] the structure of the space of possible minds, in two dimensions: the capacity for consciousness and the human-likeness of behavior. Implicit in this mapping seems to be the possibility of forms of consciousness so alien that we would not recognize them."

ROBOTICS: How Analog and Neuromorphic Chips Will Rule the Robotic Age
Shahin Farshchi | IEEE Spectrum
"Why the move to analog…"


#426831 Industrial robot runtime programming

Article provided by: www.robotct.ru
In this article, runtime programming means the process of creating an executable program for a robot controller (hereinafter referred to as the “robot”) on an external controller. The robot executes the program iteratively: the external controller sends it the minimum executable command, or a small batch of commands, at a time. In other words, in runtime programming the executable program is sent to the robot in portions, so the robot does not have, store, or know the entire executable program beforehand. Such an approach makes it possible to create an abstract, parameterized executable program that the external device generates “on the fly”, i.e., during runtime.
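To make the idea concrete, here is a minimal sketch in Python of such an external controller loop. It is an illustration only: the robot object and its send_point()/wait_done() methods are hypothetical stand-ins for whatever transport connects the PC and the robot controller (in the real project described below, RCML plays this role for a FANUC robot).

# Minimal sketch of runtime programming: the trajectory is generated
# on the fly and sent to the robot one point (one "portion") at a time;
# the robot never stores the whole program.
# send_point()/wait_done() are hypothetical placeholder methods.
def draw_square(robot, x0, y0, side_mm):
    corners = [
        (x0, y0),
        (x0 + side_mm, y0),
        (x0 + side_mm, y0 + side_mm),
        (x0, y0 + side_mm),
        (x0, y0),                  # return to the starting point
    ]
    for x, y in corners:
        robot.send_point(x, y)     # send the next portion of the program
        robot.wait_done()          # wait until the robot reports arrival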
Below is a description and a real example of how runtime programming works.
Typically, a program for a robot is a sequence of positions of the robot manipulator. Each of these positions is characterized by the position of the TCP (Tool Center Point), the point at the tip of the tool mounted on the manipulator. (By default, the TCP is at the center of the robot’s flange, see the picture below, but its position may be adjusted, and it is usually aligned with the tip of the tool mounted on the manipulator.) Therefore, when programming, the TCP position in space is usually specified, and the robot works out the positions of the manipulator’s joints itself. Further in this article we will use the term “TCP position”, or, in other words, the point that the robot shall move to.

The program for the robot may also contain control logic (branching, loops), simple mathematical operations, and commands for controlling peripheral devices via analog and digital inputs/outputs. In the proposed approach to runtime programming, a standard PC is used as the external controller. The PC can run powerful software that provides the necessary level of abstraction (OOP and other paradigms) and tools that make developing complex logic fast and convenient (high-level programming languages). The robot itself is left with only the logic that is critical to response time or that requires the reliability of an industrial controller, for example, a prompt and adequate response to an emergency situation. Control of the peripherals connected to the robot is simply “proxied” by the robot to the PC, allowing the PC to activate or deactivate the corresponding signals on the robot, somewhat like toggling the pins of an Arduino.
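As a hedged illustration of this proxying (the set_digital_output() call below is a placeholder, not an actual RCML or FANUC API; it reuses the hypothetical robot object from the sketch above):

# The PC keeps the high-level logic and simply asks the robot to flip
# the corresponding output when a peripheral has to be switched.
def pick_part(robot):
    robot.send_point(150.0, 200.0)        # move the TCP over the part
    robot.wait_done()
    robot.set_digital_output(1, True)     # close the gripper wired to the robot's output 1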

As noted earlier, runtime programming makes it possible to send the program to the robot in portions. Usually, a set of output signal states and several points, or even a single point, is sent at a time. Thus, the trajectory of the TCP movement performed by the robot may be built dynamically, and different parts of it may belong to different technological processes, or even to different robots (connected to the same external controller) when a group of robots works together.
For example, the robot moves to one working area and performs the required operations, then moves to the next one, then to yet another, and then back to the first one, and so on. In the different working areas the robot performs operations belonging to different technological processes, whose programs run in parallel threads on the external controller; the controller allocates the robot to those processes that do not require its constant presence. This mechanism is similar to the way an OS allocates processor time (an execution resource) to various threads, except that executors are not bound to particular threads for the whole period of program execution.
A little more theory, and we will proceed to practice.
Description of the existing methods of programming industrial robots.
Setting aside the runtime programming approach introduced in this article, two ways of programming industrial robots are usually distinguished: offline and online programming.
Online programming takes place with the programmer interacting directly with the robot at its place of use. Using the remote control, or by physically moving the manipulator, the tool (TCP) mounted on the flange of the robot is brought to the desired points.
The advantage of this method is how easy it is to get started with robot programming. One does not have to know anything about programming; it is enough to specify the sequence of robot positions.
An important disadvantage of this approach is the sharp increase in time required once the program grows to even a few dozen (let alone thousands of) points, or when it is subsequently modified. In addition, while it is being taught this way, the robot cannot be used for work.
Offline programming, as the name implies, takes place away from the robot and its controller. The executable program is developed in a programming environment on a PC, after which it is loaded into the robot in its entirety. However, programming tools for such development are not included in the robot’s basic delivery set; they are additional options that must be purchased separately and are expensive on the whole.
The advantage of offline programming is that the robot may keep working in production while the program is being developed; it is only needed to debug finished programs. There is also no need to go to the automation site and program the robot in person.
A great disadvantage of the existing offline programming environments is their high cost. Besides, it is impossible to dynamically distribute the executable program among different robots.
As an example, let us consider creating a robot program in runtime mode that writes out a job ad with a marker.

Result:

ATTENTION! The video is not an advertisement; the vacancy is closed. The article was written after the video had become outdated, to illustrate the proposed approach to programming.

The written text:
HELLO, PEOPLE! WE NEED A DEVELOPER TO CREATE A WEB INTERFACE OF OUR KNOWLEDGE SYSTEM.
THIS WAY WE WILL BE ABLE TO GET KNOWLEDGE FROM YOU HUMANOIDS.
AND, FINALLY, WE’LL BE ABLE TO CONQUER AND IMPROVE THIS WORLD

READ MORE: HTTP://ROBOTCT.COM/HI
SINCERELY YOURS, SKYNET =^-^=
To make the robot write this text, it was necessary to send over 1,700 points to the robot.
As an example, the spoiler below contains a screenshot of a program, written on the robot’s remote control, that draws a square. It has only 5 points (lines 4-8); each point is in fact a complete statement and takes one line. The manipulator traverses each of the four corner points and returns to the starting point upon completion.
The screenshot of the remote control with the executable program:

If the text-writing program were written this way, it would take at least 1,700 lines of code, one line per point. And what if you had to change the text, or the height of the characters, or the spacing between them? Edit all 1,700 point lines? That contradicts the spirit of automation!
So, let’s proceed to the solution…
We have a FANUC LR Mate 200iD robot with an R-30iB cabinet controller. The robot has a preconfigured TCP at the tip of the marker and a coordinate system set up on the desktop, so we can send coordinates directly, without worrying about transforming them from the table’s coordinate system into the robot’s.
To implement the program that sends the coordinates to the robot and calculates the absolute values of each point, we will use the RCML programming language, which supports this robot and, importantly, is free for anyone to use.
Let us describe each letter with points, but in relative coordinates inside the frame in which the letter will be inscribed, rather than in real-space coordinates. Each letter will be drawn by a separate function that receives the character’s sequence number in the line, the line number, and the letter size as input parameters, and sends the robot a set of points with absolute coordinates calculated for each one.
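A hedged sketch of that relative-to-absolute conversion in Python (constant names mirror the configuration file shown further below; the exact layout math of the project’s calculate_absolute_coords_* helpers is an assumption):

# Convert a point given as percentages inside a letter's frame into
# absolute table coordinates in mm. The grid layout is an assumed
# illustration of the idea, not the project's exact formula.
CHAR_HEIGHT_MM = 50
CHAR_WIDTH_PERCENT = 60
CHAR_OFFSET_MM = 4
BORDER_X = 75
BORDER_Y = 120

CHAR_WIDTH_MM = CHAR_HEIGHT_MM * CHAR_WIDTH_PERCENT / 100.0

def absolute_coords(x_cell, y_cell, x_percent, y_percent):
    # Origin of the frame for the character in column x_cell, row y_cell
    frame_x = BORDER_X + x_cell * (CHAR_WIDTH_MM + CHAR_OFFSET_MM)
    frame_y = BORDER_Y + y_cell * (CHAR_HEIGHT_MM + CHAR_OFFSET_MM)
    # Position inside the frame, given as percentages of its width and height
    x = frame_x + CHAR_WIDTH_MM * x_percent / 100.0
    y = frame_y + CHAR_HEIGHT_MM * y_percent / 100.0
    return x, y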
To write the text, we have to call a series of functions that draw the letters in the order in which they appear in the text. RCML has a meager set of tools for working with strings, so we will write an external Python script that generates the RCML program, essentially just the sequence of function calls corresponding to the sequence of letters, as sketched below.
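A minimal sketch of such a generator (the draw_<letter> function names match the RCML code shown below; the call syntax, output file name, and exact formatting of the project’s script are assumptions):

# Generate an RCML main() that calls one draw_<letter> function per
# character, passing the character's column and row in the text grid.
# Punctuation and line wrapping are handled by the real script; here
# only plain letters are emitted to keep the sketch short.
text_lines = ["HELLO, PEOPLE!", "WE NEED A DEVELOPER"]

out = ["function main(){"]
for row, line in enumerate(text_lines):
    for col, ch in enumerate(line):
        if not ch.isalpha():
            continue
        out.append("\trobot_fanuc::draw_%s(%d, %d);" % (ch, col, row))
out.append("}")

with open("letters.rcml", "w") as f:
    f.write("\n".join(out))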
The whole code is available in repository: rct_paint_words
Let us consider the output file in more detail; execution starts from the function main():

Spoiler: “Let us consider the code for drawing a letter, for example, letter A:”
function robot_fanuc::draw_A(x_cell,y_cell){
// Setting the marker to the point, the coordinates of the point are 5% along X and 95% along Y within the letter frame
robot->setPoint(x_cell, y_cell, 5, 95);
// Drawing a line
robot->movePoint(x_cell, y_cell, 50, 5);
// Drawing the second line
robot->movePoint(x_cell, y_cell, 95, 95);
// Together these two lines form the “roof” of the letter A: /\

// Moving the marker lifted from the table to draw the cross line
robot->setPoint(x_cell, y_cell, 35, 50);
// Drawing the cross-line
robot->movePoint(x_cell, y_cell, 65, 50);
// Lifting the marker from the table to move to the next letter
robot->marker_up();
}
End of spoiler

Spoiler: “The functions of moving the marker to the point, with or without lifting, are also very simple:”
// Moving the lifted marker to the point, or setting the point to start drawing
function robot_fanuc::setPoint(x_cell, y_cell, x_percent, y_percent){
	// Calculating the absolute coordinates
	x = calculate_absolute_coords_x(x_cell, x_percent);
	y = calculate_absolute_coords_y(y_cell, y_percent);

	robot->marker_up();      // Lifting the marker from the table
	robot->marker_move(x,y); // Moving
	robot->marker_down();    // Lowering the marker to the table
}

// Moving the marker to the point without lifting, or actually drawing
function robot_fanuc::movePoint(x_cell, y_cell, x_percent, y_percent){
	// Calculating the absolute coordinates
	x = calculate_absolute_coords_x(x_cell, x_percent);
	y = calculate_absolute_coords_y(y_cell, y_percent);

	// Here everything is clear
	robot->marker_move(x,y);
}
End of spoiler

Spoiler: “Functions marker_up, marker_down, and marker_move contain only the code that sends the changed part of the TCP point coordinates (Z or XY) to the robot.”
function robot_fanuc::marker_up(){
	robot->set_real_di("z", SAFE_Z);
	er = robot->sendMoveSignal();
	if (er != 0){
		system.echo("error marker up\n");
		throw er;
	}
}

function robot_fanuc::marker_down(){
	robot->set_real_di("z", START_Z);
	er = robot->sendMoveSignal();
	if (er != 0){
		system.echo("error marker down\n");
		throw er;
	}
}

function robot_fanuc::marker_move(x,y){
	robot->set_real_di("x", x);
	robot->set_real_di("y", y);
	er = robot->sendMoveSignal();
	if (er != 0){
		system.echo("error marker move\n");
		throw er;
	}
}
End of spoiler

All configuration constants, including the size of the letters, their number per line, etc., were put into a separate file.
Spoiler: “Configuration file”
define CHAR_HEIGHT_MM 50 // Character height in mm
define CHAR_WIDTH_PERCENT 60 // Character width in percentage of height

define SAFE_Z -20 // Safe position of the tip of the marker along the z-axis
define START_Z 0 // Working position of the tip of the marker along the z-axis

// Working area border
define BORDER_Y 120
define BORDER_X 75

// ON/OFF signals
define ON 1
define OFF 0

// Pauses between sending certain signals, milliseconds
define _SIGNAL_PAUSE_MILLISEC 50
define _OFF_PAUSE_MILLISEC 200

// Euler angles of the initial marker position
define START_W -179.707 // Roll
define START_P -2.500 // Pitch
define START_R 103.269 // Yaw

// Euler angles of marker turn
define SECOND_W -179.704
define SECOND_P -2.514
define SECOND_R -14.699

define CHAR_OFFSET_MM 4 // Spacing between letters

define UFRAME 4 // Table number
define UTOOL 2 // Tool number
define PAYLOAD 4 // Load number
define SPEED 100 // Speed
define CNT 0 // Movement smoothness parameter
define ROTATE_SPEED // Speed in turn

define HOME_PNS 4 // The number of the PNS program for home position return
End of spoiler

In total, we’ve got about 300 lines of high-level code that took no more than an hour to develop and write.
If the problem had been solved in the “straightforward” manner, by online programming point by point, it would have taken more than 9 hours (at roughly 20-25 seconds per point for over 1,700 points, i.e., at least 1,700 × 20 s ≈ 9.4 hours). In that case, the developer’s suffering is hard to imagine :), especially upon discovering that he had forgotten the spacing between the frames the letters were inscribed in, or that the height of the letters was wrong and the text did not fit.
Conclusion:
The use of runtime programming is one of the ways to create executable programs for robots. The advantages of this approach include the following:
The possibility of writing and debugging programs without the need to stop the robot, thus minimizing the downtime for changeover.
A parameterized executable program that’s easy to edit.
Dynamic activation and deactivation of robots within an active technological task, and cooperation between robots from various manufacturers.
Thus, with runtime programming, an executable command may be described so that any robot within the working group can execute it, or it may be written for a particular robot that will be the only one to execute it.
However, this approach has one significant limitation: the robot misinterprets, or simply ignores, the displacement smoothing instruction (CNT), because when only the current point is sent, the robot knows nothing about the next one and therefore cannot calculate a smoothed trajectory for passing the current point.
Spoiler: “What is trajectory smoothing?”
When moving the robot’s tool, two parameters may be adjusted:
Travel speed
Level of smoothing
Travel speed sets the speed of the tool travel in mm/sec.
Level of smoothing (CNT) allows the robot to pass through a group of points without stopping exactly at each one, rounding off the trajectory so that the distance travelled between the extreme points of the group is minimized.

End of spoiler

The danger of using this instruction in runtime mode is that the robot reports arrival at a smoothed point while it is in fact still moving towards it. The robot does this in order to request the next point and calculate the smoothing. It is therefore impossible to know exactly where the robot is when it passes such a point; moreover, the tool on the manipulator may need to be activated at a specific point. The robot will report that the point has been reached when it actually has not, and the tool will be switched on earlier than needed. In the best case, the robot will simply ignore the CNT instruction (depending on the model).
This may be worked around by sending two or more points at a time, so that the CNT point is not the last one in the batch, as sketched below; however, this increases program complexity and the burden on the programmer.
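A hedged sketch of that workaround, reusing the hypothetical send_point()/wait_done() interface from the earlier sketches and an assumed cnt parameter modeled on FANUC’s CNT values (0 = exact stop, 100 = maximum smoothing):

# Send the smoothed point together with the point that follows it, so
# the controller always knows the next target and can round the corner;
# only the last point of the batch is an exact stop.
def send_smoothed_pair(robot, smoothed, following):
    (x1, y1), (x2, y2) = smoothed, following
    robot.send_point(x1, y1, cnt=100)   # pass near this point without stopping
    robot.send_point(x2, y2, cnt=0)     # stop exactly at the last point of the batch
    robot.wait_done()                   # wait for the whole batch to be executed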
Article provided by: robotct.ru
Photo Credits: Robotct.ru

The post Industrial robot runtime programming appeared first on Roboticmagazine.
