Tag Archives: project

#430761 How Robots Are Getting Better at Making ...

The multiverse of science fiction is populated by robots that are indistinguishable from humans. They are usually smarter, faster, and stronger than us. They seem capable of doing any job imaginable, from piloting a starship and battling alien invaders to taking out the trash and cooking a gourmet meal.
The reality, of course, is far from fantasy. Outside of industrial settings, robots have yet to live up to the world of The Jetsons. The robots the public is exposed to seem little more than oversized plastic toys, pre-programmed to perform a set of tasks without the ability to interact meaningfully with their environment or their creators.
To paraphrase PayPal co-founder and tech entrepreneur Peter Thiel: we wanted cool robots; instead, we got 140 characters and Flippy the burger bot. But scientists are making progress toward empowering robots to see and respond to their surroundings much as humans do.
Some of the latest developments in that arena were presented this month at the annual Robotics: Science and Systems Conference in Cambridge, Massachusetts. The papers drilled down into topics that ranged from how to make robots more conversational and help them understand language ambiguities to helping them see and navigate through complex spaces.
Improved Vision
Ben Burchfiel, a graduate student at Duke University, and his thesis advisor George Konidaris, an assistant professor of computer science at Brown University, developed an algorithm to enable machines to see the world more like humans.
In the paper, Burchfiel and Konidaris demonstrate how they can teach robots to identify and possibly manipulate three-dimensional objects even when they might be obscured or sitting in unfamiliar positions, such as a teapot that has been tipped over.
The researchers trained their algorithm by feeding it 3D scans of about 4,000 common household items such as beds, chairs, tables, and even toilets. They then tested its ability to identify about 900 new 3D objects just from a bird’s eye view. The algorithm made the right guess 75 percent of the time versus a success rate of about 50 percent for other computer vision techniques.
In an email interview with Singularity Hub, Burchfiel notes that his research is not the first to train machines on 3D object classification. Their approach differs in that it confines the space in which the robot learns to classify objects.
“Imagine the space of all possible objects,” Burchfiel explains. “That is to say, imagine you had tiny Legos, and I told you [that] you could stick them together any way you wanted, just build me an object. You have a huge number of objects you could make!”
Those infinite possibilities could produce objects that no human or machine would recognize.
To address that problem, the researchers had their algorithm find a more restricted space that would host the objects it wants to classify. “By working in this restricted space—mathematically we call it a subspace—we greatly simplify our task of classification. It is the finding of this space that sets us apart from previous approaches.”
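To make the idea concrete, here is a minimal sketch, not the authors' published algorithm: flattened voxel grids are projected onto a small PCA basis, and a new scan takes the label of the nearest class mean in that subspace. All function names are invented for illustration.

```python
import numpy as np

def learn_subspace(voxel_grids, n_components=64):
    """Learn a low-dimensional linear subspace (PCA) from flattened 3D scans."""
    X = voxel_grids.reshape(len(voxel_grids), -1).astype(float)
    mean = X.mean(axis=0)
    # Rows of vt are principal directions; keep the top n_components.
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:n_components]

def project(grid, mean, basis):
    """Map a single voxel grid into the learned subspace."""
    return basis @ (grid.ravel().astype(float) - mean)

def classify(grid, mean, basis, class_means):
    """Label a scan by the nearest class mean inside the subspace."""
    z = project(grid, mean, basis)
    return min(class_means, key=lambda c: np.linalg.norm(z - class_means[c]))
```

Working in the reduced space both simplifies the search and regularizes it: only shapes expressible in the learned subspace can be hypothesized at all.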
Following Directions
Meanwhile, a pair of undergraduate students at Brown University figured out a way to teach robots to understand directions better, even at varying degrees of abstraction.
The research, led by Dilip Arumugam and Siddharth Karamcheti, addressed how to train a robot to understand nuances of natural language and then follow instructions correctly and efficiently.
“The problem is that commands can have different levels of abstraction, and that can cause a robot to plan its actions inefficiently or fail to complete the task at all,” says Arumugam in a press release.
In this project, the young researchers crowdsourced instructions for moving a virtual robot through an online domain. The space consisted of several rooms and a chair, which the robot was told to move from one place to another. The volunteers gave various commands to the robot, ranging from general (“take the chair to the blue room”) to step-by-step instructions.
The researchers then used the database of spoken instructions to teach their system the kinds of words used at different levels of language. The machine learned not only to follow instructions but to recognize their level of abstraction, which was key to kick-starting its problem-solving abilities so it could tackle each job in the most appropriate way.
The research eventually moved from virtual pixels to a real place, using a Roomba-like robot that responded to instructions within one second 90 percent of the time. Conversely, when it could not identify the level of abstraction of a command, the robot took 20 seconds or more to plan a task about 50 percent of the time.
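A toy sketch of the underlying idea, inferring the abstraction level of a command and then planning at that level, might look like the following. The levels, cue words, and actions are invented; the actual system learns this mapping from the crowdsourced data.

```python
# Invented sketch: guess the abstraction level of a command from cue words,
# then plan at that level. Everything below is illustrative only.

LEVELS = {
    "high": ["take", "bring", "room"],            # goal-level commands
    "mid":  ["go to", "pick up", "put down"],     # subtask commands
    "low":  ["north", "south", "east", "west"],   # step-by-step commands
}

def infer_level(command: str) -> str:
    """Pick the level whose cue words best match the command."""
    scores = {lvl: sum(kw in command.lower() for kw in kws)
              for lvl, kws in LEVELS.items()}
    return max(scores, key=scores.get)

def plan(command: str) -> list:
    """Dispatch to a planner searching over actions at the inferred level.
    Planning over fewer, coarser actions is what makes high-level commands fast."""
    level = infer_level(command)
    if level == "high":
        return ["goto(chair)", "grasp(chair)", "goto(blue_room)", "release(chair)"]
    if level == "mid":
        return ["goto(target)", "do_subtask(command)"]
    return [command]  # low-level commands map directly to primitive motions

print(plan("Take the chair to the blue room"))
# -> ['goto(chair)', 'grasp(chair)', 'goto(blue_room)', 'release(chair)']
```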
One application of this new machine-learning technique referenced in the paper is a robot worker in a warehouse setting, but there are many fields that could benefit from a more versatile machine capable of moving seamlessly between small-scale operations and generalized tasks.
“Other areas that could possibly benefit from such a system include things from autonomous vehicles… to assistive robotics, all the way to medical robotics,” says Karamcheti, responding to a question by email from Singularity Hub.
More to Come
These achievements are yet another step toward creating robots that see, listen, and act more like humans. But don’t expect Disney to build a real-life Westworld next to Toon Town anytime soon.
“I think we’re a long way off from human-level communication,” Karamcheti says. “There are so many problems preventing our learning models from getting to that point, from seemingly simple questions like how to deal with words never seen before, to harder, more complicated questions like how to resolve the ambiguities inherent in language, including idiomatic or metaphorical speech.”
Even relatively verbose chatbots can run out of things to say, Karamcheti notes, as the conversation becomes more complex.
The same goes for human vision, according to Burchfiel.
While deep learning techniques have dramatically improved pattern matching—Google can find just about any picture of a cat—there’s more to human eyesight than, well, meets the eye.
“There are two big areas where I think perception has a long way to go: inductive bias and formal reasoning,” Burchfiel says.
The former is essentially all of the contextual knowledge people use to help them reason, he explains. Burchfiel uses the example of a puddle in the street. People are conditioned or biased to assume it’s a puddle of water rather than a patch of glass, for instance.
“This sort of bias is why we see faces in clouds; we have strong inductive bias helping us identify faces,” he says. “While it sounds simple at first, it powers much of what we do. Humans have a very intuitive understanding of what they expect to see, [and] it makes perception much easier.”
Formal reasoning is equally important. A machine can use deep learning, in Burchfiel’s example, to figure out the direction any river flows once it understands that water runs downhill. But it’s not yet capable of applying the sort of human reasoning that would allow us to transfer that knowledge to an alien setting, such as figuring out how water moves through a plumbing system on Mars.
“Much work was done in decades past on this sort of formal reasoning… but we have yet to figure out how to merge it with standard machine-learning methods to create a seamless system that is useful in the actual physical world.”
Robots still have a lot to learn about being human, which should make us feel good that we’re still by far the most complex machines on the planet.
Image Credit: Alex Knight via Unsplash

Posted in Human Robots

#430649 Robotherapy for children with autism

New robotherapy for children with autism could reduce the need for constant patient supervision by therapists.
05.07.2017
Autism treatments and therapies routinely make headlines. With robot-enhanced therapies on the rise, however, what is often overlooked is the mental stress and physical toll the procedures take on therapists: treatment can be taxing on patient and therapist alike, yet few appreciate the workload of those who work with autistic patients.
It is against this backdrop that researchers from the Vrije Universiteit Brussel are pioneering a new technology to aid behavioural therapy, one with a very deliberate aim: they are using robots to boost the basic social learning skills of children with ASD and, in doing so, hope to make the therapists’ job substantially easier.
A study just published in PALADYN – Journal of Behavioural Robotics examines the use of social robots as tools in clinical situations by addressing the challenge of increasing robot autonomy.
The growing deployment of robot-assisted therapies in recent decades means children with Autism Spectrum Disorder (ASD) can develop and nurture social behaviour and cognitive skills. Learning skills that carry over into real life is the first and foremost goal of all autism therapies, including Robot-Assisted Therapy (RAT), with effectiveness always a key concern. This time, however, the scientists have set themselves the additional mission of taking the load off human therapists by letting supervised yet autonomous robots take over parts of the intervention.
The researchers developed a complete system of robot-enhanced therapy (RET) for children with ASD. The therapy works by teaching behaviours during repeated sessions of interactive games. Robots are often used for this therapy because individuals with ASD tend to be more responsive to feedback coming from an interaction with technology. In this approach, the social robot acts as a mediator and typically remains remote-controlled by a human operator, a technique called Wizard of Oz, which requires an additional person to drive the robot and leaves the robot unable to record the child’s performance during therapy. To reduce operator workload, the authors introduced a system with a supervised autonomous robot, one able to infer the psychological disposition of the child and use it to select actions appropriate to the current state of the interaction.
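As a rough illustration of supervised autonomy, consider the following sketch, in which the robot proposes an action from its estimate of the child’s state and the therapist only steps in to override when necessary. The states, thresholds, and action names are invented, not drawn from the published system.

```python
from dataclasses import dataclass

# Invented sketch of supervised autonomy: the robot proposes a therapy action
# from its estimate of the child's state; the therapist approves or overrides.

@dataclass
class ChildState:
    engagement: float  # 0..1, e.g. estimated from gaze and posture
    stress: float      # 0..1, e.g. estimated from vocal and behavioural cues

def propose_action(state: ChildState) -> str:
    """Map the estimated disposition to the next therapy action."""
    if state.stress > 0.7:
        return "pause_and_soothe"
    if state.engagement < 0.3:
        return "prompt_joint_attention"
    return "continue_turn_taking_game"

def supervised_step(state: ChildState, therapist_override=None) -> str:
    # The therapist intervenes only when the proposal is inappropriate,
    # which is what cuts operator workload versus full Wizard of Oz control.
    return therapist_override or propose_action(state)

print(supervised_step(ChildState(engagement=0.2, stress=0.4)))
# -> prompt_joint_attention
```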
Admittedly, robots with supervised autonomy can substantially benefit behavioural therapy for children with ASD, diminishing the therapist’s workload on the one hand and achieving more objective measurements of therapy outcomes on the other. Yet, complex as it is, this therapy requires a multidisciplinary approach, as RET shows mixed effectiveness on the primary tasks of turn-taking, joint attention, and imitation compared with Standard Human Treatment (SHT).
The results are likely to prompt further development of robot-assisted therapy with increasing robot autonomy. While many conceptual and technical issues remain to be tackled, it is the ethical questions around the potential and maximal degree of robot autonomy that pose one of the major challenges.
The article is fully available in open access to read, download and share on De Gruyter Online.
The research was conducted as part of the DREAM (Development of Robot-Enhanced therapy for children with Autism spectrum disorders) project.
DOI: 10.1515/pjbr-2017-0002
Image credit: P.G. Esteban
About the Journal: PALADYN – Journal of Behavioural Robotics is a fully peer-reviewed, electronic-only journal that publishes original, high-quality research on topics broadly related to neuronally and psychologically inspired robots and other behaving autonomous systems.
About De Gruyter Open: De Gruyter Open is a leading publisher of Open Access academic content. Publishing in all major disciplines, De Gruyter Open is home to more than 500 scholarly journals and over 100 books. The company is part of the De Gruyter Group (www.degruyter.com) and a member of the Association of Learned and Professional Society Publishers (ALPSP). De Gruyter Open’s book and journal programs have been endorsed by the international research community and some of the world’s top scientists, including Nobel laureates. The company’s mission is to make the very best in academic content freely available to scholars and lay readers alike.
The post Robotherapy for children with autism appeared first on Roboticmagazine.

Posted in Human Robots

#430640 RE2 Robotics Receives Air Force Funding ...

PITTSBURGH, PA – June 21, 2017 – RE2 Robotics announced today that the Company was selected by the Air Force to develop a drop-in robotic system to rapidly convert a variety of traditionally manned aircraft to robotically piloted, autonomous aircraft under the Small Business Innovation Research (SBIR) program. This robotic system, named “Common Aircraft Retrofit for Novel Autonomous Control” (CARNAC), will operate the aircraft similarly to a human pilot and will not require any modifications to the aircraft.
Automation and autonomy have broad value to the Department of Defense with the potential to enhance system performance of existing platforms, reduce costs, and enable new missions and capabilities, especially with reduced human exposure to dangerous or life-threatening situations. The CARNAC project leverages existing aviation assets and advances in vehicle automation technologies to develop a cutting-edge drop-in robotic flight system.
During the program, RE2 Robotics will demonstrate system architecture feasibility, humanoid-like robotic manipulation capabilities, vision-based flight-status recognition, and cognitive architecture-based decision making.
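RE2 has not published CARNAC’s architecture, but the capabilities listed above suggest a perceive-decide-actuate loop along the lines of the sketch below. Every class, method, and value here is hypothetical.

```python
# Hypothetical sketch of a drop-in robotic pilot's control loop: read the
# instrument panel with vision, decide, and actuate the existing controls
# with a manipulator. All names and values are invented for illustration.

class PanelCamera:
    def capture(self):
        # Stub: a real system would return an image of the instrument panel.
        return None

class ManipulatorArm:
    def actuate(self, control, amount):
        print(f"actuating {control} by {amount:+.2f}")

class RoboticPilot:
    def __init__(self, camera, arm):
        self.camera = camera
        self.arm = arm

    def read_flight_status(self):
        """Vision-based flight-status recognition (placeholder values)."""
        _frame = self.camera.capture()
        return {"airspeed_kt": 85, "altitude_ft": 3500}

    def decide(self, status):
        # A trivial rule standing in for the cognitive-architecture layer.
        if status["airspeed_kt"] < 90:
            return ("throttle", 0.1)
        return ("hold", 0.0)

    def step(self):
        control, amount = self.decide(self.read_flight_status())
        if control != "hold":
            self.arm.actuate(control, amount)

RoboticPilot(PanelCamera(), ManipulatorArm()).step()
```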
“Our team is excited to incorporate the Company’s robotic manipulation expertise with proven technologies in applique systems, vision processing algorithms, and decision making to create a customized application that will allow a wide variety of existing aircraft to be outfitted with a robotic pilot,” stated Jorgen Pedersen, president and CEO of RE2 Robotics. “By creating a drop-in robotic pilot, we have the ability to insert autonomy into and expand the capabilities of not only traditionally manned air vehicles, but ground and underwater vehicles as well. This application will open up a whole new market for our mobile robotic manipulator systems.”
###
About RE2 Robotics
RE2 Robotics develops mobile robotic technologies that enable robot users to remotely interact with their world from a safe distance — whether on the ground, in the air, or underwater. RE2 creates interoperable robotic manipulator arms with human-like performance, intuitive human-robot interfaces, and advanced autonomy software for mobile robotics. For more information, please visit www.resquared.com or call 412.681.6382.
Media Contact: RE2 Public Relations, pr@resquared.com, 412.681.6382.
The post RE2 Robotics Receives Air Force Funding to Develop Robotic Pilot appeared first on Roboticmagazine.

Posted in Human Robots

#430579 What These Lifelike Androids Can Teach ...

For Dr. Hiroshi Ishiguro, one of the most interesting things about androids is the changing questions they pose us, their creators, as they evolve. Does it, for example, do something to the concept of being human if a human-made creation starts telling you about what kind of boys ‘she’ likes?
If you want to know the answer to the boys question, you need to ask ERICA, one of Dr. Ishiguro’s advanced androids. Beneath her plastic skull and silicone skin, wires connect to AI software systems that bring her to life. Her ability to respond goes far beyond standard inquiries. Spend a little time with her, and the feeling of a distinct personality starts to emerge. From time to time, she works as a receptionist at Dr. Ishiguro and his team’s Osaka University labs. One of her android sisters is an actor who has starred in plays and a film.

ERICA’s ‘brother’ is an android version of Dr. Ishiguro himself, which has represented its creator at various events while the biological Ishiguro can remain in his offices in Japan. Microphones and cameras capture Ishiguro’s voice and face movements, which are relayed to the android. Apart from mimicking its creator, the Geminoid™ android is also capable of lifelike blinking, fidgeting, and breathing movements.
Say hello to relaxation
As technological development continues to accelerate, so do the possibilities for androids. From a position as receptionist, ERICA may well branch out into many other professions in the coming years. Companion for the elderly, comic book storyteller (an ancient profession in Japan), pop star, conversational foreign language partner, and newscaster are some of the roles and responsibilities Dr. Ishiguro sees androids taking on in the near future.
“Androids are not uncanny anymore. Most people adapt to interacting with Erica very quickly. Actually, I think that in interacting with androids, which are still different from us, we get a better appreciation of interacting with other cultures. In both cases, we are talking with someone who is different from us and learn to overcome those differences,” he says.
A lot has been written about how robots will take our jobs. Dr. Ishiguro believes these fears are blown somewhat out of proportion.
“Robots and androids will take over many simple jobs. Initially there might be some job-related issues, but new schemes, like for example a robot tax similar to the one described by Bill Gates, should help,” he says.
“Androids will make it possible for humans to relax and keep evolving. If we compare the time we spend studying now compared to 100 years ago, it has grown a lot. I think it needs to keep growing if we are to keep expanding our scientific and technological knowledge. In the future, we might end up spending 20 percent of our lifetime on work and 80 percent of the time on education and growing our skills.”
Android asks who you are
For Dr. Ishiguro, another aspect of robotics in general, and androids in particular, is how they question what it means to be human.
“Identity is a very difficult concept for humans sometimes. For example, I think clothes are part of our identity, in a way that is similar to our faces and bodies. We don’t change those from one day to the next, and that is why I have ten matching black outfits,” he says.
This link between physical appearance and perceived identity is one of the aspects Dr. Ishiguro is exploring. Another closely linked concept is the connection between body and the feeling of self. The Ishiguro avatar was once giving a presentation in Austria, and its creator recalls feeling distinctly as if he were in Austria himself, even capable of feeling the sensation of touch on his own body when people laid their hands on the android. If he was distracted, he felt almost ‘sucked’ back into his body in Japan.
“I am constantly thinking about my life in this way, and I believe that androids are a unique mirror that helps us formulate questions about why we are here and why we have been so successful. I do not necessarily think I have found the answers to these questions, so if you have, please share,” he says with a laugh.
His work and these questions, while extremely interesting on their own, become extra poignant when considering the predicted melding of mind and machine in the near future.
The ability to be present in several locations through avatars—virtual or robotic—raises many questions of both philosophical and practical nature. Then add the hypotheticals, like why send a human out onto the hostile surface of Mars if you could send a remote-controlled android, capable of relaying everything it sees, hears and feels?
The two ways of robotics will meet
Dr. Ishiguro sees the world of AI-human interaction as currently split roughly in two. One is the chatbot approach that companies like Amazon, Microsoft, Google, and, more recently, Apple employ using stationary objects like smart speakers. Androids like ERICA represent the other approach.
“It is about more than the form factor. I think that the android approach is generally more story-based. We are integrating new conversation features based on assumptions about the situation and running different scenarios that expand the android’s vocabulary and interactions. Another aspect we are working on is giving androids desire and intention. Like with people, androids should have desires and intentions in order for you to want to interact with them over time,” Dr. Ishiguro explains.
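One hedged way to picture “desire and intention” in a dialogue system is as persistent internal variables that bias which conversational scenario the android pursues over time, as in this invented sketch (the desires, scenarios, and lines are illustrative only):

```python
import random

# Invented sketch: persistent "desires" bias which conversational scenario
# the android pursues. Desires, scenarios, and utterances are made up.

class Android:
    def __init__(self):
        self.desires = {"learn_about_visitor": 0.8, "talk_about_theatre": 0.5}
        self.scenarios = {
            "learn_about_visitor": ["Where are you visiting from?",
                                    "What brought you to the lab today?"],
            "talk_about_theatre": ["Did you know one of my sisters acts in plays?"],
        }

    def next_utterance(self) -> str:
        # A full system would also condition on the situation; here the
        # android simply favours topics in proportion to desire strength.
        topics = list(self.desires)
        weights = [self.desires[t] for t in topics]
        topic = random.choices(topics, weights=weights)[0]
        return random.choice(self.scenarios[topic])

print(Android().next_utterance())
```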
This could be said to be part of a wider trend for Japan, where many companies are developing human-like robots that often have some Internet of Things capabilities, making them able to handle some of the same tasks as an Amazon Echo. The difference in approach could be summed up in the words ‘assistant’ (Apple, Amazon, etc.) and ‘companion’ (Japan).
Dr. Ishiguro sees this as partly linked to how Japanese as a language, and as a market, is somewhat limited, which has a direct impact on the viability and practicality of ‘pure’ voice recognition systems. At the same time, Japanese people have had greater exposure to positive images of robots and hold a different cultural and religious view of objects having a ‘soul.’ All of this may also mean that Japanese companies and android scientists are stealing a march on their Western counterparts.
“If you speak to an Amazon Echo, that is not a natural way to interact for humans. This is part of why we are making human-like robot systems. The human brain is set up to recognize and interact with humans. So, it makes sense to focus on developing the body for the AI mind, as well as the AI. I believe that the final goal for both Japanese and other companies and scientists is to create human-like interaction. Technology has to adapt to us, because we cannot adapt fast enough to it, as it develops so quickly,” he says.
Banner image courtesy of Hiroshi Ishiguro Laboratories and ATR; all rights reserved.
Dr. Ishiguro’s team has collaborated with partners and developed a number of android systems:
Geminoid™ HI-2 has been developed by Hiroshi Ishiguro Laboratories and Advanced Telecommunications Research Institute International (ATR).
Geminoid™ F has been developed by Osaka University and Hiroshi Ishiguro Laboratories, Advanced Telecommunications Research Institute International (ATR).
ERICA has been developed by the ERATO ISHIGURO Symbiotic Human-Robot Interaction Project.

Posted in Human Robots

#428626 Cimcorp to fully automate Turkish Tire ...

Cimcorp Selected to Supply Turnkey Automated Handling System to Large Turkish Tire Manufacturer, Petlas
The leading tire-handling specialist’s system will handle tires in the tire-finishing and palletizing areas of the Turkish manufacturer’s expanded facility
Ulvila, Finland – November 9, 2016 – Cimcorp, a leading global supplier of turnkey automation for intralogistics and tire-handling solutions, announces it has been selected to implement a fully automated handling system in Petlas Tire Corporation’s (Petlas) factory in Kirsehir, Turkey. Based on Cimcorp’s Dream Factory solution, the automation will take care of the handling of passenger car radial (PCR) finished tires in the tire-finishing and palletizing areas. Work on the order is already underway, and the turnkey material handling system will become fully operational in fall 2017.
The order, Cimcorp’s first project for Petlas, is part of a huge investment program to expand the Kirsehir plant in order to increase Petlas’ PCR production capacity and meet growing demand.
Turkey achieved record car production and export levels in 2015, with production up by 16 percent and exports up 12 percent over the preceding year. This growth rate is higher than in any other European country and, with its automotive plants rolling out 1.36 million vehicles in 2015, Turkey is now the seventh largest automotive producer in Europe.
With the production equipment – the tire-building machines, presses and testing machines – already installed, Petlas is commencing the automation of the plant’s material handling. This comprises Cimcorp’s robotic buffer stores, tire conveyors and control software – Cimcorp WCS (Warehouse Control Software) – to take care of all material flows. Using linear robots operating on overhead gantries, the system will automate the handling and transfer of finished tires from the trimming stations, through visual inspection and uniformity testing, to palletizing.
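Sketched in code, the material flow described above can be modeled as a simple routing function, with the warehouse control software deciding each tire’s next stage. The stage names follow the article; the function itself, including the rework buffer, is purely illustrative.

```python
# Illustrative sketch of the tire flow: trimming -> visual inspection ->
# uniformity testing -> palletizing, with the WCS routing each tire.
# The rework buffer and function names are invented for illustration.

STAGES = ["trimming", "visual_inspection", "uniformity_testing", "palletizing"]

def route_tire(tire_id: str, results: dict) -> list:
    """Return the path a tire takes; failed tires are diverted to rework."""
    path = []
    for stage in STAGES:
        path.append(stage)
        if not results.get(stage, True):  # stage failed
            path.append("rework_buffer")
            break
    return path

print(route_tire("T-001", {"visual_inspection": True, "uniformity_testing": False}))
# -> ['trimming', 'visual_inspection', 'uniformity_testing', 'rework_buffer']
```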
Yahya Ertem, general manager, Petlas Tire Corporation, said, “We think highly of Cimcorp’s software, which integrates the machines into one entity and keeps the flow of material and data under complete control. Cimcorp’s Dream Factory solution fits with our vision to achieve ‘excellence in business’ and will help us to achieve our strategic goals.”
Tero Peltomäki, vice president of sales and projects, Cimcorp, said, “It has been fantastic to work with the Petlas team, honing our design into the best possible solution for the Kirsehir plant. The automation will help Petlas to enhance its market position as a leading tire manufacturer and distributor and we look forward to working on future automation projects with the company.”
To receive high-resolution images, please send requests to Heidi Scott via email at: lasendio@dprgroup.com

About Cimcorp
Cimcorp Group – part of Murata Machinery, Ltd. (Muratec) – is a leading global supplier of turnkey automation for intralogistics, using advanced robotics and software technologies. As well as being a manufacturer and integrator of pioneering material handling systems for the tire industry, Cimcorp has developed unique robotic solutions for order fulfillment and storage that are being used in the food & beverage, retail, e-commerce, FMCG and postal services sectors. With locations in Finland, Canada and the United States, the group has around 300 employees and has delivered over 2,000 logistics automation solutions. Designed to reduce operating costs, ensure traceability and improve efficiency, these systems are used within manufacturing and distribution centers in 40 countries across five continents. For more information, visit www.cimcorp.com.
About Petlas Tire Corporation (Petlas)
Founded in 1976, Petlas Tire Corporation has operations in 98 countries worldwide and employs 2,150 people. The company’s plant in Kirsehir currently has the capacity to produce 8 million PCR (passenger car radial) tires, 2 million agricultural tires, 500,000 TBR (truck & bus radial) tires and 300,000 OTR (off-the-road) tires per year. For more information, visit www.petlas.com.

The post Cimcorp to fully automate Turkish Tire Manufacturer Petlas appeared first on Roboticmagazine.

Posted in Human Robots