Tag Archives: news

#429950 Veo Gives Robots ‘Eyes and a Brain’ ...

The robots are coming.
Actually, they’re already here. Machines are learning to do tasks they’ve never done before, from locating and retrieving goods from a shelf to driving cars to performing surgery. In manufacturing environments, robots can place an object with millimeter precision over and over, lift hundreds of pounds without getting tired, and repeat the same action constantly for hundreds of hours.
But let’s not give robots all the glory just yet. A lot of things that are easy for humans are still hard or impossible for robots. A three-year-old child, for example, can differentiate between a dog and a cat, or intuitively scoot over when said dog or cat jumps into its play space. A computer can’t do either of these simple actions.
So how do we take the best robots have to offer and the best humans have to offer and combine them to reach new levels of output and performance?
That’s the question engineers at Veo Robotics are working to answer. At Singularity University’s Exponential Manufacturing Summit last week, Clara Vu, Veo’s cofounder and VP of Engineering, shared some of her company’s initiatives and why they’re becoming essential to today’s manufacturing world.
"Our system…essentially gives a robot arm 'eyes and a brain,'" Vu said. "Our system can understand the space, see what's around the robot, reason about it, and then control the robot so [it] can safely interact with people."

Why we’re awesome
If you think about it, we humans are pretty amazing creatures. Vu pointed out that the human visual system has wide range, precise focus, depth-rich color, and three dimensions. Our hands have five independently articulated fingers, 29 joints, 34 muscles, and 123 tendons—and they're all covered in skin, a finely-grained material sensitive to force, temperature, and touch.
Not only do we have all these tools, we have millions of years of evolution behind us that have taught us the best ways to use them. We use them for a huge variety of tasks, and we can adapt them to quickly changing environments.
Most robots, on the other hand, know how to do one task, the same way, over and over again. Move the assembly line six inches to the right or make the load two pounds lighter, and a robot won’t be able to adapt and carry on.
Like oil and water
In today’s manufacturing environment, humans and robots don’t mix—they’re so different that it’s hard for them to work together. This leaves manufacturing engineers designing processes either entirely for robots, or entirely without them. But what if the best way to, say, attach a door to a refrigerator is to have a robot lift it, a human guide it into place, the robot put it down, and the human tighten its hinges?
Sounds simple enough, but with the big, dumb robots we have today, that’s close to impossible—and the manufacturing environment is evolving in a direction that will make it harder, not easier. “As the number of different things we want to make increases and the time between design and production decreases, we’ll want more flexibility in our processes, and it will be more difficult to use automation effectively,” Vu said.
Smaller, lighter, smarter
For people and robots to work together safely and efficiently, robots need to get smaller, lighter, and most importantly, smarter. “Autonomy is exactly what we need here,” Vu said. “At its core, autonomy is about the ability to perceive, decide and act independently.” An autonomous robot, she explained, needs to be able to answer questions like ‘where am I?’, ‘what's going on around me?’, ‘what actions are safe?’, and ‘what actions will bring me closer to my goal?’
Veo engineers are working on a responsive system to bring spatial awareness to robots. Depth-sensing cameras give the robot visual coverage, and its software learns to differentiate between the objects around it, to the point that it can be aware of the size and location of everything in its area. It can then be programmed to adjust its behavior to changes in its environment—if a human shows up where a human isn’t supposed to be, the robot can stop what it’s doing to make sure the human doesn’t get hurt.
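The behavior described—slowing or stopping based on how close a detected person is—can be sketched as a simple decision function. This is an illustrative toy, not Veo's implementation; the function name and distance thresholds are assumptions made up for the example.

```python
# Hypothetical sketch of proximity-based robot behavior (not Veo's actual
# code): map the closest detected human's distance to an operating mode.

def choose_robot_mode(min_human_distance_m: float) -> str:
    """Return a behavior mode given the nearest human's distance in meters."""
    if min_human_distance_m < 0.5:      # person inside the danger zone: halt
        return "stop"
    elif min_human_distance_m < 1.5:    # person nearby: reduce speed
        return "slow"
    return "full_speed"                 # workspace clear: operate normally

# Example: a person steps to 1.2 m from the arm, so the robot slows down.
mode = choose_robot_mode(1.2)
```

A real system would derive the distance from the depth cameras' point cloud and re-evaluate it many times per second.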
3D sensors will also play a key part in the system, and Vu mentioned the importance of their declining costs. “Ten years ago, the only 3D sensors that were available were 3D lidars that cost tens of thousands of dollars. Today, because of advances in consumer applications like gaming and gesture recognition, it's possible to get 3D time-of-flight chipsets for well under a hundred dollars, in quantity. These sensors give us exactly the kind of data we need to solve this problem,” she said.
3D sensors wouldn’t be very helpful without computers that can do something useful with all the data they collect. “Multiple sensors monitoring a large 3D area means millions of points that have to be processed in real time,” Vu noted. “Today's CPUs, and in particular GPUs, which can perform thousands of computations in parallel, are up to the task.”
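To give a feel for the scale involved, here is a minimal sketch of screening millions of 3D points for proximity to a robot. The point cloud here is randomly generated stand-in data, and the 0.5 m safety radius is an assumed value; vectorized CPU math already handles this per frame, and GPUs push throughput much further.

```python
import numpy as np

# Simulated sensor data: two million 3D points scattered around the cell.
rng = np.random.default_rng(0)
points = rng.uniform(-3.0, 3.0, size=(2_000_000, 3))
robot_base = np.array([0.0, 0.0, 0.0])

# Distance from every point to the robot base, computed in one vectorized pass.
distances = np.linalg.norm(points - robot_base, axis=1)

# Keep only the points that intrude into an assumed 0.5 m safety radius.
intruders = points[distances < 0.5]
```

A monitoring system would run a pass like this on every frame from every sensor and feed the result into the robot's speed and stop logic.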
A seamless future
Veo’s technology can be integrated with pre-existing robots of various sizes, types, and functionalities. The company is currently testing its prototypes with manufacturing partners, and is aiming to deploy in 2019.
Vu told the audience that industrial robots have a projected compound annual growth rate of 13 percent through 2019, and though collaborative robots account for just a small fraction of the installed base, their projected growth rate through 2019 is 67 percent.
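Compound annual growth rates add up quickly. As a worked example of what those percentages mean (the base values below are made-up illustrative numbers, not figures from the talk):

```python
def project(base: float, cagr: float, years: int) -> float:
    """Size after compounding an annual growth rate for a number of years."""
    return base * (1 + cagr) ** years

# Hypothetical installed bases, for illustration only:
industrial = project(100.0, 0.13, 3)    # 13% CAGR over 3 years -> ~144.3
collaborative = project(10.0, 0.67, 3)  # 67% CAGR over 3 years -> ~46.6
```

At 67 percent annual growth, a market more than quadruples in three years, which is why collaborative robots draw so much attention despite their small current share.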
Vu concluded with her vision of a future of seamless robot-human interaction. “We want to allow manufacturers to combine the creativity, flexibility, judgment and dexterity of humans with the strength, speed and precision of industrial robots,” she said. “We believe this will give manufacturers new tools to meet the growing needs of the modern economy.”
Image Credit: Shutterstock

Posted in Human Robots

#429921 This Week’s Awesome Stories From ...

ROBOTICS
San Francisco Considers Ban on Sidewalk Delivery Robots
Steven Musil | CNET
"San Francisco is considering legislation that would put the brakes on delivery robots rolling across the city's sidewalks. The robots, once confined to sci-fi movies, have rolled into real-world testing. But they would be banned from San Francisco streets under legislation supervisor Norman Yee introduced on Tuesday. He told the San Francisco Chronicle that he initially considered regulating the robots but soon concluded rules would be unenforceable."
PRIVACY & SECURITY
A Massive Ransomware 'Explosion' Is Hitting Targets All Over the World
Joseph Cox | Motherboard
"WannaCry acts like a typical piece of ransomware, locking down computers and demanding bitcoin in exchange for decrypting the files. But the speed at which WanaCrypt0r has spread is alarming. In a few hours, the malware had already infected victims in 11 countries, including Russia, Turkey, Germany, Vietnam, and the Philippines, according to MalwareHunterTeam."
ENVIRONMENT
This Giant Smog Vacuum Cleaner in China Actually Works
Adele Peters | Fast Company
"When the tower—which was designed by artist Daan Roosegaarde in 2015, and temporarily installed in Beijing in 2016—sucks in surrounding air in an open field, the test found that it can capture 70% of PM10, tiny particles of pollution that can lodge in the lungs. When the filtered air is released, mixing with the dirty air around it, the result is air with up to a 45% reduction in PM10 pollution within 20 meters of the tower."
SPACE
Made In Space Releases Video Renderings of Archinaut 3D Printer; CEO Andrew Rush Tells Us More About the Project
Clare Scott | 3D Print
"The two-year endeavor involves the construction of a massive 3D printer, equipped with a robotic arm, that is capable of fabricating structures in the middle of outer space. The Archinaut 3D printer is being developed by Made In Space, known for the production of the first 3D printer ever to go into space, as well as its follow-up, the Additive Manufacturing Facility, now in operation on the International Space Station."
ARTIFICIAL INTELLIGENCE
Google Reveals a Powerful New AI Chip and Supercomputer
Will Knight | MIT Technology Review
"CEO Sundar Pichai announced a new computer processor designed to perform the kind of machine learning that has taken the industry by storm in recent years. The announcement reflects how rapidly artificial intelligence is transforming Google itself, and it is the surest sign yet that the company plans to lead the development of every relevant aspect of software and hardware."
Image Source: Made In Space

Posted in Human Robots

#429796 JR2, the Collaborative Mobile ...

JR2 Mobile Manipulator. Photo Credit: Robotnik

Robotnik, in collaboration with Gaitech and Smokie Robotics, has developed its new mobile manipulator, JR2. The new robot will be announced at the 2017 IEEE International Conference on Robotics and Automation (ICRA, May 29 to June 3, 2017, in Singapore).

JR2 is a fully autonomous, completely integrated, industrial-grade collaborative mobile manipulator. Its main advantages are integrated software, a wide range of tutorials and examples in ROS, an omnidirectional base, a competitive price, and high speed. This innovative collaborative mobile manipulator is specially designed for the development of industrial applications. JR2 is the ideal robot for logistics and industrial mobile manipulation: pick & place, pick & feed, fetch & carry, etc.

JR2's base is an omnidirectional platform using four high-power motorized wheels, and it is able to carry payloads of up to 100 kg. Furthermore, JR2 mounts a 6-DOF, high-quality industrial collaborative arm that can handle payloads of up to 5 kg (completely extended). The arm can mount almost any standard end effector, including 2/3-finger servo-grippers and a range of cameras and sensors.

Technical specifications

Platform
Dimensions: 800 x 550 x 420 mm
Weight: 125 kg
Payload: 100 kg
Speed: 3 m/s
Enclosure class: IP54
Autonomy: 8 h
Batteries: LiFePO4, 15 Ah @ 48 V
Traction motors: 4 x 500 W
Temperature range: 0 ºC to 45 ºC
Range finders: 10 m / 20 m (optional)
Arm: AUBO-I5
Weight: 24 kg
Payload: 5 kg
Reach: 924.5 mm

Gripper (optional): WSG-50
Weight: 1.15 kg
Repeatability: ±0.03 mm
Stroke per finger: 55 mm
Control

JR2 uses the ROS open architecture. The robot's software includes a navigation system as well as an HMI for mission planning, diagnostics, and remote control. The JR2 model is available in ROS, including fully configured MoveIt! packages.

For more information: JR2

Robotnik Automation, established in 2002, is a European reference company in mobile service robotics. Gaitech Technology is an innovative company focused on robotics that develops advanced products based on ROS. Smokie Robotics manufactures lightweight collaborative robots.
For further information, contact María Benítez: mbenitez@robotnik.es
Robotnik © | C/ Ciutat de Barcelona, 3-A | 46988 – Valencia | Tel. +34 96 147 54 00 | info@robotnik.es
www.robotnik.eu
The post JR2, the Collaborative Mobile Manipulator appeared first on Roboticmagazine.

Posted in Human Robots

#429793 AI and Robotic Process Automation for ...

ARTIFICIAL INTELLIGENCE AND ROBOTIC PROCESS AUTOMATION: IS THE LEGAL SECTOR READY?
The legal profession is in a period of turbulence and transformation. Automation and technology are increasingly used in the legal sector to improve efficiency and reduce costs. Artificial intelligence (AI) and robotic process automation (RPA) are being billed as the next industrial revolution, and they will change the legal profession in ways never seen before.

Is the legal sector ready for Artificial Intelligence?
Many law firms choose not to be early adopters of new technologies. But cognitive technologies are already improving service to clients, reducing costs, and creating new opportunities for firms.
What can you expect in upcoming years?
Cognitive technologies in the law are prompting an ever-greater demand from clients for cheaper, faster, better services. What does that mean?
Cheaper – Save money on resources and manpower by implementing the latest technology.
Faster – Companies measure cycle time, time to market, and other indicators of speed throughout their businesses, and increasingly expect their lawyers to do the same.
Better – This is critical. Big companies face regulatory and operational complexity for which traditional legal services on the medieval, master craftsman model are simply inadequate.
How to respond to innovation?
To meet those needs in a world with an overwhelming choice of technologies, C5’s inaugural conference on AI and RPA in the Legal Sector will give you the information you need to decide which technology is right for you and your firm.

This conference will bring together law firms, in-house counsel, and legal tech companies, providing key insights into the latest technologies and how you can implement them to drive innovation, enhance your performance, and save time and money. Designed specifically with lawyers in mind, it will cut out the technical jargon, giving you digestible, easy-to-understand information on how technology can benefit your daily work.

Gain insight into:

Understanding what legal technology is available to your practice
How technologies can increase efficiency and reduce costs
How to put together a business case for implementing an innovation strategy and getting buy-in from the relevant people
Risks in implementing new technologies and who is liable if something goes wrong
Impact of emerging technologies on current charging models
Hear from your clients how they view the use of technologies and how it can give you a competitive advantage

Would you like to find out more? Visit C5’s AI and RPA in the Legal Sector website
Robotic Magazine is also partnering with this conference. Get a 15% discount: quote D15-999-RM17 when registering.
The post AI and Robotic Process Automation for Legal Sector appeared first on Roboticmagazine.

Posted in Human Robots

#429790 4 Keys to Making the Robots of Our ...

“The robots of reality are starting to get a lot closer to the robots of our imagination,” said Sarah Bergbreiter, an image of a fast-moving, multi-jointed search and rescue robot displayed on the big screen behind her.
In her talk on advanced robotics at Singularity University’s Exponential Manufacturing Summit in Boston, Bergbreiter elaborated on how modern robots have already come to resemble the most fantastic robots humans have imagined over the past few decades. She also shared her vision of what’s ahead.

Bergbreiter joined the University of Maryland, College Park in 2008 as an Assistant Professor of Mechanical Engineering, with a joint appointment in the Institute for Systems Research. She received the DARPA Young Faculty Award in 2008, the NSF CAREER Award in 2011, and the Presidential Early Career Award for Scientists and Engineers (PECASE) Award in 2013 for her research on engineering robotic systems down to sub-millimeter size scales.
Below are four key areas Bergbreiter thinks roboticists need to hone to make sure their robots add maximum value to our jobs, our homes, and our lives.
1. Focus on how they interact with humans
At the Tesla plant in Fremont, California, there are dozens of robots, but they’re all caged off from people, with robots and employees performing completely separate tasks. Robots programmed to perform a task or series of tasks over and over are already widespread, but enabling robots to work with people is still a major manufacturing challenge.
Robots need to be able to understand what people are doing, and vice-versa. How do we get robots to understand social cues and display them back to us?
The Advanced Robotics for Manufacturing Institute (ARM Institute) focuses on collaborative robotics, or robots complementing a person’s job to enhance productivity. The institute’s mission is to lower the barriers for companies to adopt robotics technology, and in the process, bring currently off-shored production back onshore.
Robots that work with people rather than instead of people will not only save jobs, they’ll bring new advances in efficiency and innovation—but we need to keep people in the equation as we develop them.
2. Make them softer
When you picture a robot, whether it currently exists or is a product of your imagination, you’re most likely picturing a rigid machine with a lot of right angles and not much squishiness or pliability. That’s because the field of soft robotics is just starting to take off, with the first-ever completely soft autonomous robot unveiled in December 2016.
One of the problems with traditional robots is that they tend to be clunky and heavy and their movement is limited. Soft robots can do things rigid robots can’t, like more precisely manipulate objects, climb, grow, or stretch.

Having robots perform these actions is useful across a variety of settings, from exoskeletons—which are beginning to be used to augment people in a manufacturing context—to rescue robots that could grasp and turn a valve or climb through rubble in places humans can’t access.
Soft robots are also more compliant and safer around humans; if you can touch a robot, there’s a lot more you can do in terms of programming it. And the best part is, making robots soft actually lowers their cost. This will enable robotic manufacturing in places that couldn’t do it before.
3. Give soft robots sensors
Soft robots have a lot of advantages over rigid ones, but they’re still stuck with one major drawback: they’re harder to control. Soft sensors are thus a crucial research area in robotics right now.
San Francisco startup Pneubotics builds robots out of fabric and air, with the goal of creating machines that can interact with and react to the world. Their robots move by shifting air among compartments inside the fabric. To improve their precision and reactive capability, they’ll be equipped with sensors tailored to their function or task.
And there is some progress there. Recently, University of Minnesota researchers said they’ve created a process to 3D print flexible sensors. Something like this may act as a kind of “skin” for future robots.
Sensors will allow soft robots with their expanded capabilities to take on the precision of rigid robots, bringing the best of these two robotics worlds together for completely new applications.
4. Connect them
When we think of robots putting together cars or zooming around a warehouse to find a product, we often assume each individual robot is “smart.” That doesn’t have to be the case, though.
Robots can now network and interact with the cloud, eliminating the need for individual robots to be smart. The computation for the 45,000 robots Amazon uses in their warehouses happens in a central system, meaning not all 45,000 bots need to house all that computation inside their own “heads”—they just need to be able to coordinate with the system.
Especially for large-scale operations like this, it’s cheaper and more efficient to have ‘dumb’ robots taking instructions from a single centralized piece of software than to equip every robot with more advanced software and hardware of its own.
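The centralized pattern can be sketched in a few lines: the robots only report their positions and follow instructions, while all the task logic lives in one dispatcher. This is an illustrative toy, not Amazon's actual system; the class, task format, and nearest-task policy are all made up for the example.

```python
class Dispatcher:
    """Central coordinator: holds all tasks and assigns them to 'dumb' robots."""

    def __init__(self):
        self.tasks = []        # list of (item, shelf_location) pairs
        self.assignments = {}  # robot_id -> assigned task

    def add_task(self, item, location):
        self.tasks.append((item, location))

    def assign(self, robot_id, robot_pos):
        """Hand the robot the task whose shelf is nearest (Manhattan distance)."""
        if not self.tasks:
            return None
        nearest = min(
            self.tasks,
            key=lambda t: abs(t[1][0] - robot_pos[0]) + abs(t[1][1] - robot_pos[1]),
        )
        self.tasks.remove(nearest)
        self.assignments[robot_id] = nearest
        return nearest

# Two pending pick tasks; the robot at (4, 3) is closest to the widget shelf.
d = Dispatcher()
d.add_task("widget", (5, 2))
d.add_task("gadget", (0, 9))
task = d.assign("bot-7", (4, 3))
```

All the robot itself needs is the ability to drive to a coordinate and report back, which is what keeps per-unit hardware cheap.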
We are moving towards a manufacturing environment where robots will both work closely with humans and be able to do things in less-structured environments without human intervention.
As Bergbreiter said in closing, “It’s a fascinating time for robots.”
Image Credit: Shutterstock

Posted in Human Robots