Tag Archives: automation

#431599 8 Ways AI Will Transform Our Cities by ...

How will AI shape the average North American city by 2030? A panel of experts assembled as part of a century-long study into the impact of AI thinks its effects will be profound.
The One Hundred Year Study on Artificial Intelligence is the brainchild of Eric Horvitz, technical fellow and a managing director at Microsoft Research.
Every five years a panel of experts will assess the current state of AI and its future directions. The first panel, composed of experts in AI, law, political science, policy, and economics, was launched last fall and decided to frame its report around the impact AI will have on the average American city. Here’s how they think it will affect eight key domains of city life in the next fifteen years.
1. Transportation
The speed of the transition to AI-guided transport may catch the public by surprise. Self-driving vehicles will be widely adopted by 2020, and it won’t just be cars — driverless delivery trucks, autonomous delivery drones, and personal robots will also be commonplace.
Uber-style “cars as a service” are likely to replace car ownership, which may displace public transport or see it transition towards similar on-demand approaches. Commutes will become a time to relax or work productively, encouraging people to live farther from work, which could combine with a reduced need for parking to drastically change the face of modern cities.
Mountains of data from increasing numbers of sensors will allow administrators to model individuals’ movements, preferences, and goals, which could have a major impact on the design of city infrastructure.
Humans won’t be out of the loop, though. Algorithms that allow machines to learn from human input and coordinate with them will be crucial to ensuring autonomous transport operates smoothly. Getting this right will be key as this will be the public’s first experience with physically embodied AI systems and will strongly influence public perception.
2. Home and Service Robots
Robots that do things like deliver packages and clean offices will become much more common in the next 15 years. Mobile chipmakers are already squeezing the power of last century’s supercomputers into systems-on-a-chip, drastically boosting robots’ on-board computing capacity.
Cloud-connected robots will be able to share data to accelerate learning. Low-cost 3D sensors like Microsoft’s Kinect will speed the development of perceptual technology, while advances in speech comprehension will enhance robots’ interactions with humans. Robot arms in research labs today are likely to evolve into consumer devices around 2025.
But the cost and complexity of reliable hardware and the difficulty of implementing perceptual algorithms in the real world mean general-purpose robots are still some way off. Robots are likely to remain constrained to narrow commercial applications for the foreseeable future.
3. Healthcare
AI’s impact on healthcare in the next 15 years will depend more on regulation than technology. The most transformative possibilities of AI in healthcare require access to data, but the FDA has failed to find solutions to the difficult problem of balancing privacy and access to data. Implementation of electronic health records has also been poor.
If these hurdles can be cleared, AI could automate the legwork of diagnostics by mining patient records and the scientific literature. This kind of digital assistant could allow doctors to focus on the human dimensions of care while using their intuition and experience to guide the process.
At the population level, data from patient records, wearables, mobile apps, and personal genome sequencing will make personalized medicine a reality. While fully automated radiology is unlikely, access to huge datasets of medical imaging will enable training of machine learning algorithms that can “triage” or check scans, reducing the workload of doctors.
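As an illustration of the kind of “triage” the report envisions, a sketch like the following (hypothetical code, not any deployed system) shows how scans might be split into review queues by a model’s abnormality score, so that radiologists see the most suspicious cases first:

```python
def triage_scans(scores, urgent_threshold=0.8, normal_threshold=0.1):
    """Split scans into review queues by a model's abnormality score.

    `scores` maps scan IDs to a probability-like score in [0, 1] produced
    by some upstream classifier (assumed here, not specified).
    """
    urgent, routine, low_priority = [], [], []
    for scan_id, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        if score >= urgent_threshold:
            urgent.append(scan_id)        # flag for immediate radiologist review
        elif score >= normal_threshold:
            routine.append(scan_id)       # normal reading queue
        else:
            low_priority.append(scan_id)  # likely normal, double-checked later
    return urgent, routine, low_priority

# Example with scores from a hypothetical chest X-ray model
queues = triage_scans({"scan_a": 0.95, "scan_b": 0.4, "scan_c": 0.02})
```

The point of the design is that the model never issues a diagnosis; it only reorders the doctor’s workload, which is exactly the human-in-the-loop role the report anticipates.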
Intelligent walkers, wheelchairs, and exoskeletons will help keep the elderly active while smart home technology will be able to support and monitor them to keep them independent. Robots may begin to enter hospitals carrying out simple tasks like delivering goods to the right room or doing sutures once the needle is correctly placed, but these tasks will only be semi-automated and will require collaboration between humans and robots.
4. Education
The line between the classroom and individual learning will be blurred by 2030. Massive open online courses (MOOCs) will interact with intelligent tutors and other AI technologies to allow personalized education at scale. Computer-based learning won’t replace the classroom, but online tools will help students learn at their own pace using techniques that work for them.
AI-enabled education systems will learn individuals’ preferences, but by aggregating this data they’ll also accelerate education research and the development of new tools. Online teaching will increasingly widen educational access, making learning lifelong, enabling people to retrain, and increasing access to top-quality education in developing countries.
Sophisticated virtual reality will allow students to immerse themselves in historical and fictional worlds or explore environments and scientific objects difficult to engage with in the real world. Digital reading devices will become much smarter too, linking to supplementary information and translating between languages.
5. Low-Resource Communities
In contrast to the dystopian visions of sci-fi, by 2030 AI will help improve life for the poorest members of society. Predictive analytics will let government agencies better allocate limited resources by helping them forecast environmental hazards or building code violations. AI planning could help distribute excess food from restaurants to food banks and shelters before it spoils.
These areas are under-funded, though, so it is uncertain how quickly these capabilities will appear. There are fears that poorly designed machine learning could inadvertently discriminate by correlating things with race or gender, or with surrogate factors like zip codes. But AI programs are easier to hold accountable than humans, so they’re more likely to help weed out discrimination.
6. Public Safety and Security
By 2030 cities are likely to rely heavily on AI technologies to detect and predict crime. Automatic processing of CCTV and drone footage will make it possible to rapidly spot anomalous behavior. This will not only allow law enforcement to react quickly but also forecast when and where crimes will be committed. Fears that bias and error could lead to people being unduly targeted are justified, but well-thought-out systems could actually counteract human bias and highlight police malpractice.
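The “anomalous behavior” detection described above boils down to flagging observations that deviate sharply from a location’s own baseline. A toy z-score sketch (illustrative only, with made-up location names, and nothing like a real policing system) captures the idea:

```python
from statistics import mean, stdev

def anomalous_locations(counts, z_threshold=3.0):
    """Flag locations whose latest event count deviates sharply from the norm.

    `counts` maps a location to a history of per-hour event counts, with the
    most recent hour last. A location is flagged when its latest count sits
    more than `z_threshold` standard deviations above its own baseline.
    """
    flagged = []
    for loc, history in counts.items():
        baseline, current = history[:-1], history[-1]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (current - mu) / sigma > z_threshold:
            flagged.append(loc)
    return flagged

# A sudden spike at "station_plaza" stands out against its own history
spikes = anomalous_locations({
    "station_plaza": [2, 3, 2, 4, 3, 20],
    "market_street": [5, 6, 5, 7, 6, 6],
})
```

Even this toy shows where bias can creep in: the baseline is learned from historical data, so over-policed areas generate more “normal” events and skew their own thresholds.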
Techniques like speech and gait analysis could help interrogators and security guards detect suspicious behavior. Contrary to concerns about overly pervasive law enforcement, AI is likely to make policing more targeted and therefore less overbearing.
7. Employment and Workplace
The effects of AI will be felt most profoundly in the workplace. By 2030 AI will be encroaching on skilled professionals like lawyers, financial advisers, and radiologists. As it becomes capable of taking on more roles, organizations will be able to scale rapidly with relatively small workforces.
AI is more likely to replace tasks rather than jobs in the near term, and it will also create new jobs and markets, even if it’s hard to imagine what those will be right now. While it may reduce incomes and job prospects, increasing automation will also lower the cost of goods and services, effectively making everyone richer.
These structural shifts in the economy will require political rather than purely economic responses to ensure these riches are shared. In the short run, this may include resources being pumped into education and re-training, but longer term may require a far more comprehensive social safety net or radical approaches like a guaranteed basic income.
8. Entertainment
Entertainment in 2030 will be interactive, personalized, and immeasurably more engaging than today. Breakthroughs in sensors and hardware will see virtual reality, haptics, and companion robots increasingly enter the home. Users will be able to interact with entertainment systems conversationally, and these systems will show emotion, empathy, and the ability to adapt to environmental cues like the time of day.
Social networks already allow personalized entertainment channels, but the reams of data being collected on usage patterns and preferences will allow media providers to personalize entertainment to unprecedented levels. There are concerns this could endow media conglomerates with unprecedented control over people’s online experiences and the ideas to which they are exposed.
But advances in AI will also make creating your own entertainment far easier and more engaging, whether by helping to compose music or choreograph dances using an avatar. With the production of high-quality entertainment democratized, it is nearly impossible to predict how fluid human tastes in entertainment will develop.
Image Credit: Asgord / Shutterstock.com

Posted in Human Robots

#431178 Soft Robotics Releases Development Kit ...

Cambridge, MA – Soft Robotics Inc., which has built a fundamentally new class of robotic grippers, announced the release of its expanded and upgraded Soft Robotics Development Kit, SRDK 2.0.

The Soft Robotics Development Kit 2.0 comes complete with:

Robot tool flange mounting plate
4-, 5- and 6-position hub plates
Tool Center Point
Soft Robotics Control Unit G2
6 rail-mounted, four-accordion actuator modules
Custom pneumatic manifold
Mounting hardware and accessories

Where the SRDK 1.0 included five four-accordion actuator modules and the ability to create a gripper containing two to five actuators, the SRDK 2.0 contains six four-accordion actuator modules plus a six-position hub, allowing users to configure six-actuator test tools. This expands use of the Development Kit to larger product applications such as large bagged and pouched items, IV bags, bags of nuts, bread, and other food items.

SRDK 2.0 also contains an upgraded Soft Robotics Control Unit (SRCU G2) – the proprietary system that controls all software and hardware in one turnkey pneumatic operation. The upgraded SRCU features new software with a cleaner, user-friendly interface and an IP65 rating. Highly intuitive, the software can store up to eight grip profiles and allows very precise adjustments to actuation and vacuum.
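The press release does not document the SRCU’s software, but the idea of storing a small, bounded set of grip profiles, each pairing an actuation pressure with a vacuum setting, can be sketched as follows (all names, fields, and units here are hypothetical, not Soft Robotics’ API):

```python
from dataclasses import dataclass

@dataclass
class GripProfile:
    name: str
    actuation_kpa: float  # positive pressure used to close the actuators (hypothetical unit)
    vacuum_kpa: float     # negative pressure used to open them (hypothetical unit)

class ProfileStore:
    """Holds at most eight named grip profiles, mirroring the stated SRCU limit."""
    MAX_PROFILES = 8

    def __init__(self):
        self._profiles = {}

    def save(self, profile):
        # Overwriting an existing name is allowed; adding a ninth profile is not.
        if profile.name not in self._profiles and len(self._profiles) >= self.MAX_PROFILES:
            raise ValueError("store is full: delete a profile first")
        self._profiles[profile.name] = profile

    def load(self, name):
        return self._profiles[name]

store = ProfileStore()
store.save(GripProfile("iv_bag", actuation_kpa=35.0, vacuum_kpa=-20.0))
```

The fixed-size store is what makes the workflow practical on a turnkey controller: operators switch between tuned presets per product rather than re-dialing pressures by hand.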

Also new with the release of SRDK 2.0, is the introduction of several accessory kits that will allow for an expanded number of configurations and product applications available for testing.

Accessory Kit 1 – For SRDK 1.0 users only – includes the six-position hub and the four-accordion actuators now included in SRDK 2.0
Accessory Kit 2 – For SRDK 1.0 or 2.0 users – includes two-accordion actuators
Accessory Kit 3 – For SRDK 1.0 or 2.0 users – includes three-accordion actuators

The shorter two- and three-accordion actuators provide increased stability for high-speed applications, greater placement precision, and higher grip force, and are optimized for gripping small, shallow objects.

Designed to plug and play with any existing robot currently on the market, the Soft Robotics Development Kit 2.0 lets end users and OEM integrators customize, test, and validate their ideal Soft Robotics solution with their own equipment, in their own environment.

Once an ideal solution has been found, the Soft Robotics team will take those exact specifications and build a production-grade tool for implementation on the manufacturing line. And it doesn’t end there: because the kit is fully reusable, the process – configure, test, validate, build, production – can start over as many times as needed.

See the new SRDK 2.0 on display for the first time at PACK EXPO Las Vegas, September 25 – 27, 2017 in Soft Robotics booth S-5925.

Learn more about the Soft Robotics Development Kit at www.softroboticsinc.com/srdk.
Photo Credit: Soft Robotics – www.softroboticsinc.com
###
About Soft Robotics
Soft Robotics designs and builds soft robotic gripping systems and automation solutions
that can grasp and manipulate items of varying size, shape and weight. Spun out of the
Whitesides Group at Harvard University, Soft Robotics is the only company to be
commercializing this groundbreaking and proprietary technology platform. Today, the
company is a global enterprise solving previously off-limits automation challenges for
customers in food & beverage, advanced manufacturing and ecommerce. Soft Robotics’
engineers are building an ecosystem of robots, control systems, data and machine
learning to enable the workplace of the future. For more information, please visit
www.softroboticsinc.com.

Media contact:
Jennie Kondracki
The Kondracki Group, LLC
262-501-4507
jennie@kondrackigroup.com
The post Soft Robotics Releases Development Kit 2.0 appeared first on Roboticmagazine.


#431171 SceneScan: Real-Time 3D Depth Sensing ...

Nerian Introduces a High-Performance Successor for the Proven SP1 System
Stereo vision, which is the three-dimensional perception of our environment with two sensors like our eyes, is a well-known technology. As a passive method – there is no need to emit light in the visible or invisible spectral range – this technology can open up new possibilities for three-dimensional perception, even under difficult conditions.
But as so often, the devil is in the details: for most applications, a software implementation on standard PCs, or even on graphics processors, is too slow. A further complication is that these hardware platforms are expensive and not energy-efficient. The solution is to instead use specialized hardware for image processing. A programmable logic device – a so-called FPGA – can greatly accelerate the image processing.
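The workload such an FPGA accelerates is disparity estimation: finding, for each pixel in the left image, the horizontal offset of its best match in the right image, from which depth follows by triangulation as depth = f·B/d. A minimal block-matching sketch along one scanline (illustrative only; camera parameters are made up, and real systems use far more sophisticated pipelines) shows the per-pixel search that makes the problem so compute-hungry:

```python
def disparity_row(left_row, right_row, window=3, max_disp=16):
    """Naive 1-D block matching along one scanline using SAD cost.

    For each valid pixel in the left row, return the disparity d whose
    window in the right row (shifted left by d) has the lowest sum of
    absolute differences. Real stereo matchers compare 2-D windows over
    full images; this is the same idea reduced to a single row.
    """
    disparities = []
    for x in range(window, len(left_row) - window):
        patch = left_row[x - window:x + window + 1]
        best_d, best_cost = 0, float("inf")
        for d in range(min(max_disp, x - window) + 1):
            candidate = right_row[x - d - window:x - d + window + 1]
            cost = sum(abs(a - b) for a, b in zip(patch, candidate))
            if cost < best_cost:
                best_d, best_cost = d, cost
        disparities.append(best_d)
    return disparities

def depth_from_disparity(d, focal_px=800.0, baseline_m=0.25):
    """Triangulate: depth = f * B / d (hypothetical camera parameters)."""
    return focal_px * baseline_m / d if d > 0 else float("inf")
```

Because every pixel runs an independent search over candidate disparities, the algorithm parallelizes naturally, which is exactly why FPGAs outperform general-purpose CPUs on this task.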
As a technology leader, Nerian Vision Technologies has been following this path successfully for the past two years with the SP1 stereo vision system, which has enabled completely new applications in the fields of robotics, automation technology, medical technology, autonomous driving and other domains. Now the company introduces two successors:
SceneScan and SceneScan Pro. Real eye-catchers in a double sense: stereo vision in an elegant design! But more important, of course, are the significantly improved inner workings of the two new models in comparison to their predecessor. The new hardware allows processing rates of up to 100 frames per second at resolutions of up to 3 megapixels, which leaves the SP1 far behind:
Photo Credit: Nerian Vision Technologies – www.nerian.com

The table illustrates the difference: while SceneScan Pro has the highest possible computing power and is designed for the most demanding applications, SceneScan has been cost-reduced for applications with lower requirements. Customers can thus optimize their embedded vision solution both in terms of cost and technology.
The new duo is completed by Nerian’s proven Karmin stereo cameras. Of course, industrial USB3 Vision cameras from other manufacturers are also supported. This combination not only supports the above-mentioned applications even better, but also facilitates completely new and innovative ones. If required, customer-specific adaptations are also possible.
Contact
Nerian Vision Technologies
Owner: Dr. Konstantin Schauwecker
Gotenstr. 9
70771 Leinfelden-Echterdingen
Germany
Phone: +49 711 / 2195 9414
Email: service@nerian.com
Website: http://nerian.com
Press Release Authored By: Nerian Vision Technologies
Photo Credit: Nerian Vision Technologies – www.nerian.com
The post SceneScan: Real-Time 3D Depth Sensing Through Stereo Vision appeared first on Roboticmagazine.


#431154 The Future of Technology – Robotics in ...

Introduction: Now that our technological level has progressed as far as it has, the greatest amount of work is being put into the field of robotics as it directly pertains to home automation and the improvement of technology which already exists in a household. Robotics is seeing a lot of changes, since its technology and …


#431130 Innovative Collaborative Robot sets new ...

Press Release by: HMK
As the trend of Industry 4.0 takes the world by storm, collaborative robots and smart factories are becoming the latest hot topic. At this year’s PPMA show, HMK will demonstrate the world’s first collaborative robot with built-in vision recognition from Techman Robot.
The new TM5 Cobot from HMK merges systems that usually function separately in conventional robots; it is the only collaborative robot to incorporate simple programming, a fully integrated vision system, and the latest safety standards in a single unit.
With capabilities including direction identification, self-calibration of coordinates, and visual task operation enabled by built-in vision, the TM5 can fine-tune itself to actual conditions at any time, accomplishing complex processes that used to demand the integration of various pieces of equipment. It requires less manpower and time to recalibrate when objects or coordinates move, significantly improving flexibility and reducing maintenance cost.
Photo Credit: hmkdirect.com
Simple
Programming could not be easier. TM-Flow, an easy-to-use flowchart program, runs on any tablet, PC, or laptop over a wireless link to the TM control box, and complex automation tasks can be realised in minutes. Clever teach functions and wizards also allow hand-guided programming and easy incorporation of operations such as palletising, de-palletising, and conveyor tracking.
Smart
The TM5 is the only cobot to feature a full-colour vision package as standard, mounted on the wrist of the robot and fully supported within TM-Flow. The result allows users to easily integrate the robot into the application without complex tooling or the need for expensive add-on vision hardware and programming.
Safe
The recently CE-marked TM5 incorporates the new ISO/TS 15066 guidelines on safety in collaborative robot systems, which cover four types of collaborative operation:
a) Safety-rated monitored stop
b) Hand guiding
c) Speed and separation monitoring
d) Power and force limiting
Safety hardware inputs also allow the Cobot to be integrated into wider safety systems.
When you add EtherCAT and Modbus network connectivity, I/O expansion options, IoT-ready network access, and ex-stock delivery, the TM5 sets a new benchmark for this evolving robotics sector.
The TM5 is available with two payload options, 4 kg and 6 kg, with a reach of 900 mm and 700 mm respectively, both with a positioning repeatability of 0.05 mm.
HMK will be showcasing the new TM5 Cobot at this year’s PPMA show at the NEC; visit stand F102 to get hands-on with the Cobot and experience the innovative and intuitive graphic HMI and hand-guiding features.
For more information contact HMK on 01260 279411, email sales@hmkdirect.com or visit www.hmkdirect.com
Photo Credit: hmkdirect.com
The post Innovative Collaborative Robot sets new benchmark appeared first on Roboticmagazine.
