Tag Archives: practical
As Dorothy famously said in The Wizard of Oz, there’s no place like home. Home is where we go to rest and recharge. It’s familiar, comfortable, and our own. We take care of our homes by cleaning and maintaining them, and fixing things that break or go wrong.
What if our homes, on top of giving us shelter, could also take care of us in return?
According to Chris Arkenberg, this could be the case in the not-so-distant future. As part of Singularity University’s Experts On Air series, Arkenberg gave a talk called “How the Intelligent Home of The Future Will Care For You.”
Arkenberg is a research and strategy lead at Orange Silicon Valley, and was previously a research fellow at the Deloitte Center for the Edge and a visiting researcher at the Institute for the Future.
Arkenberg told the audience that there’s an evolution going on: homes are going from being smart to being connected, and will ultimately become intelligent.
Intelligent home technologies are just now budding, but broader trends point to huge potential for their growth. We as consumers already expect continuous connectivity wherever we go—what do you mean my phone won’t get reception in the middle of Yosemite? What do you mean the smart TV is down and I can’t stream Game of Thrones?
As connectivity has evolved from a privilege to a basic expectation, Arkenberg said, we’re also starting to have a better sense of what it means to give up our data in exchange for services and conveniences. It’s so easy to click a few buttons on Amazon and have stuff show up at your front door a few days later—never mind that data about your purchases gets recorded and aggregated.
“Right now we have single devices that are connected,” Arkenberg said. “Companies are still trying to show what the true value is and how durable it is beyond the hype.”
Connectivity is the basis of an intelligent home. To take a dumb object and make it smart, you get it online. Belkin’s Wemo, for example, lets users control lights and appliances wirelessly and remotely, and can be paired with Amazon Echo or Google Home for voice-activated control.
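In software terms, a connected switch is just a device whose state can be read and written over the network, plus a front end that maps commands onto it. The sketch below is a hypothetical model of that idea; the class, method, and command names are invented for illustration and are not Belkin's, Amazon's, or Google's actual APIs.

```python
# Hypothetical sketch of a connected plug with a tiny voice-command layer.
# Class and command names are illustrative, not any vendor's real API.

class SmartPlug:
    def __init__(self, name):
        self.name = name
        self.is_on = False

    def turn_on(self):
        self.is_on = True

    def turn_off(self):
        self.is_on = False


def handle_utterance(utterance, plugs):
    """Very naive intent matching: find a plug name and an on/off verb."""
    words = utterance.lower().split()
    for plug in plugs:
        if plug.name in words:
            if "on" in words:
                plug.turn_on()
                return f"{plug.name} is now on"
            if "off" in words:
                plug.turn_off()
                return f"{plug.name} is now off"
    return "sorry, no matching device"


lamp = SmartPlug("lamp")
print(handle_utterance("turn the lamp on", [lamp]))  # lamp is now on
```

A real system replaces the naive word matching with a speech-to-intent pipeline and speaks to the plug over a protocol such as Wi-Fi or Zigbee, but the read-state/write-state shape stays the same.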
Speaking of voice-activated control, Arkenberg pointed out that physical interfaces are evolving, too, to the point that we’re actually getting rid of interfaces entirely, or transitioning to ‘soft’ interfaces like voice or gesture.
Drivers of change
Consumers are open to smart home tech and companies are working to provide it. But what are the drivers making this tech practical and affordable? Arkenberg said there are three big ones:
Computation: Computers have gotten exponentially more powerful over the past few decades. If it wasn’t for processors that could handle massive quantities of information, nothing resembling an Echo or Alexa would even be possible. Artificial intelligence and machine learning are powering these devices, and they hinge on computing power too.
Sensors: “There are more things connected now than there are people on the planet,” Arkenberg said. Market research firm Gartner estimates there are 8.4 billion connected things currently in use. Wherever digital can replace hardware, it’s doing so. Cheaper sensors mean we can connect more things, which can then connect to each other.
Data: “Data is the new oil,” Arkenberg said. “The top companies on the planet are all data-driven giants. If data is your business, though, then you need to keep finding new ways to get more and more data.” Home assistants are essentially data collection systems that sit in your living room and collect data about your life. That data in turn sets up the potential of machine learning.
Colonizing the Living Room
Alexa and Echo can turn lights on and off, and Nest can help you be energy-efficient. But beyond these, what does an intelligent home really look like?
Arkenberg’s vision of an intelligent home uses sensing, data, connectivity, and modeling to manage resource efficiency, security, productivity, and wellness.
Autonomous vehicles provide an interesting comparison: they're surrounded by sensors that constantly map the world, building dynamic models of the changes around them and thereby predicting what comes next. Might we want this to become a model for our homes, too? By making them smart and connecting them, Arkenberg said, they'd become "more biological."
There are already several products on the market that fit this description. RainMachine uses weather forecasts to adjust home landscape watering schedules. Neurio monitors energy usage, identifies areas where waste is happening, and makes recommendations for improvement.
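RainMachine's exact algorithm isn't public; a minimal sketch of the general idea, scaling a baseline watering time by how much of the plants' water need the forecast rain already covers, might look like this. The scaling rule and the 10 mm default are illustrative assumptions, not the product's real logic.

```python
# Illustrative sketch: scale a baseline watering schedule by forecast rain.
# The scaling rule and default water need are assumptions, not
# RainMachine's actual algorithm.

def adjusted_watering_minutes(baseline_minutes, forecast_rain_mm, needed_mm=10.0):
    """Reduce watering in proportion to how much of the plants' water
    need the forecast rain already covers."""
    coverage = min(forecast_rain_mm / needed_mm, 1.0)
    return round(baseline_minutes * (1.0 - coverage), 1)

print(adjusted_watering_minutes(30, 0))   # 30.0 (no rain: water fully)
print(adjusted_watering_minutes(30, 5))   # 15.0 (rain covers half the need)
print(adjusted_watering_minutes(30, 12))  # 0.0  (rain covers everything)
```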
These are small steps in connecting our homes with knowledge systems and giving them the ability to understand and act on that knowledge.
Arkenberg sees the homes of the future being equipped with digital ears (in the form of home assistants, sensors, and monitoring devices) and digital eyes (in the form of facial recognition technology and machine vision to recognize who's in the home). "These systems are increasingly able to interrogate emotions and understand how people are feeling," he said. "When you push more of this active intelligence into things, the need for us to directly interface with them becomes less relevant."
Could our homes use these same tools to benefit our health and wellness? FREDsense uses bacteria to create electrochemical sensors that can be applied to home water systems to detect contaminants. If that’s not personal enough for you, get a load of this: ClinicAI can be installed in your toilet bowl to monitor and evaluate your biowaste. What’s the point, you ask? Early detection of colon cancer and other diseases.
What if one day, your toilet’s biowaste analysis system could link up with your fridge, so that when you opened it it would tell you what to eat, and how much, and at what time of day?
Roadblocks to intelligence
“The connected and intelligent home is still a young category trying to establish value, but the technological requirements are now in place,” Arkenberg said. We’re already used to living in a world of ubiquitous computation and connectivity, and we have entrained expectations about things being connected. For the intelligent home to become a widespread reality, its value needs to be established and its challenges overcome.
One of the biggest challenges will be getting used to the idea of continuous surveillance. "We'll get convenience and functionality if we give up our data, but how far are we willing to go? Establishing security and trust is going to be a big challenge moving forward," Arkenberg said.
There are also questions of cost and reliability, of interoperability and device fragmentation, and, conversely, of what Arkenberg called 'platform lock-on': relying on a single provider's system and being unable to integrate devices from other brands.
Ultimately, Arkenberg sees homes being able to learn about us, manage our scheduling and transit, watch our moods and our preferences, and optimize our resource footprint while predicting and anticipating change.
“This is the really fascinating provocation of the intelligent home,” Arkenberg said. “And I think we’re going to start to see this play out over the next few years.”
Sounds like a home Dorothy wouldn’t recognize, in Kansas or anywhere else.
Stock Media provided by adam121 / Pond5
For Dr. Hiroshi Ishiguro, one of the most interesting things about androids is the changing questions they pose us, their creators, as they evolve. Does it, for example, do something to the concept of being human if a human-made creation starts telling you about what kind of boys ‘she’ likes?
If you want to know the answer to the boys question, you need to ask ERICA, one of Dr. Ishiguro’s advanced androids. Beneath her plastic skull and silicone skin, wires connect to AI software systems that bring her to life. Her ability to respond goes far beyond standard inquiries. Spend a little time with her, and the feeling of a distinct personality starts to emerge. From time to time, she works as a receptionist at Dr. Ishiguro and his team’s Osaka University labs. One of her android sisters is an actor who has starred in plays and a film.
ERICA’s ‘brother’ is an android version of Dr. Ishiguro himself, which has represented its creator at various events while the biological Ishiguro can remain in his offices in Japan. Microphones and cameras capture Ishiguro’s voice and face movements, which are relayed to the android. Apart from mimicking its creator, the Geminoid™ android is also capable of lifelike blinking, fidgeting, and breathing movements.
Say hello to relaxation
As technological development continues to accelerate, so do the possibilities for androids. From a position as receptionist, ERICA may well branch out into many other professions in the coming years. Companion for the elderly, comic book storyteller (an ancient profession in Japan), pop star, conversational foreign language partner, and newscaster are some of the roles and responsibilities Dr. Ishiguro sees androids taking on in the near future.
“Androids are not uncanny anymore. Most people adapt to interacting with Erica very quickly. Actually, I think that in interacting with androids, which are still different from us, we get a better appreciation of interacting with other cultures. In both cases, we are talking with someone who is different from us and learn to overcome those differences,” he says.
A lot has been written about how robots will take our jobs. Dr. Ishiguro believes these fears are blown somewhat out of proportion.
“Robots and androids will take over many simple jobs. Initially there might be some job-related issues, but new schemes, like for example a robot tax similar to the one described by Bill Gates, should help,” he says.
“Androids will make it possible for humans to relax and keep evolving. If we compare the time we spend studying now compared to 100 years ago, it has grown a lot. I think it needs to keep growing if we are to keep expanding our scientific and technological knowledge. In the future, we might end up spending 20 percent of our lifetime on work and 80 percent of the time on education and growing our skills.”
Android asks who you are
For Dr. Ishiguro, another aspect of robotics in general, and androids in particular, is how they question what it means to be human.
“Identity is a very difficult concept for humans sometimes. For example, I think clothes are part of our identity, in a way that is similar to our faces and bodies. We don’t change those from one day to the next, and that is why I have ten matching black outfits,” he says.
This link between physical appearance and perceived identity is one of the aspects Dr. Ishiguro is exploring. Another, closely linked, is the connection between body and sense of self. When the Ishiguro avatar was once giving a presentation in Austria, its creator recalls feeling distinctly as if he were in Austria himself, even feeling the sensation of touch on his own body when people laid their hands on the android. If he was distracted, he felt almost 'sucked' back into his body in Japan.
“I am constantly thinking about my life in this way, and I believe that androids are a unique mirror that helps us formulate questions about why we are here and why we have been so successful. I do not necessarily think I have found the answers to these questions, so if you have, please share,” he says with a laugh.
His work and these questions, while extremely interesting on their own, become extra poignant when considering the predicted melding of mind and machine in the near future.
The ability to be present in several locations through avatars—virtual or robotic—raises many questions of both philosophical and practical nature. Then add the hypotheticals, like why send a human out onto the hostile surface of Mars if you could send a remote-controlled android, capable of relaying everything it sees, hears and feels?
The two ways of robotics will meet
Dr. Ishiguro sees the world of AI-human interaction as currently roughly split into two. One is the chat-bot approach that companies like Amazon, Microsoft, Google, and recently Apple, employ using stationary objects like speakers. Androids like ERICA represent another approach.
“It is about more than the form factor. I think that the android approach is generally more story-based. We are integrating new conversation features based on assumptions about the situation and running different scenarios that expand the android’s vocabulary and interactions. Another aspect we are working on is giving androids desire and intention. Like with people, androids should have desires and intentions in order for you to want to interact with them over time,” Dr. Ishiguro explains.
This could be said to be part of a wider trend for Japan, where many companies are developing human-like robots that often have some Internet of Things capabilities, making them able to handle some of the same tasks as an Amazon Echo. The difference in approach could be summed up in the words ‘assistant’ (Apple, Amazon, etc.) and ‘companion’ (Japan).
Dr. Ishiguro sees this as partly linked to how Japanese, as both a language and a market, is somewhat limited, which has a direct impact on the viability and practicality of 'pure' voice recognition systems. At the same time, Japanese people have had greater exposure to positive images of robots, and hold a different cultural and religious view of objects having a 'soul.' It may also mean that Japanese companies and android scientists are stealing a march on their western counterparts.
“If you speak to an Amazon Echo, that is not a natural way to interact for humans. This is part of why we are making human-like robot systems. The human brain is set up to recognize and interact with humans. So, it makes sense to focus on developing the body for the AI mind, as well as the AI. I believe that the final goal for both Japanese and other companies and scientists is to create human-like interaction. Technology has to adapt to us, because we cannot adapt fast enough to it, as it develops so quickly,” he says.
Banner image courtesy of Hiroshi Ishiguro Laboratories, ATR all rights reserved.
Dr. Ishiguro’s team has collaborated with partners and developed a number of android systems:
Geminoid™ HI-2 has been developed by Hiroshi Ishiguro Laboratories and Advanced Telecommunications Research Institute International (ATR).
Geminoid™ F has been developed by Osaka University and Hiroshi Ishiguro Laboratories, Advanced Telecommunications Research Institute International (ATR).
ERICA has been developed by ERATO ISHIGURO Symbiotic Human-Robot Interaction Project.
Spherical Induction Motor Eliminates Robot’s Mechanical Drive System
PITTSBURGH— More than a decade ago, Ralph Hollis invented the ballbot, an elegantly simple robot whose tall, thin body glides atop a sphere slightly smaller than a bowling ball. The latest version, called SIMbot, has an equally elegant motor with just one moving part: the ball.
The only other active moving part of the robot is the body itself.
The spherical induction motor (SIM) invented by Hollis, a research professor in Carnegie Mellon University's Robotics Institute, and Masaaki Kumagai, a professor of engineering at Tohoku Gakuin University in Tagajo, Japan, eliminates the mechanical drive systems that previous ballbots used. Because of this extreme mechanical simplicity, SIMbot requires less routine maintenance and is less likely to suffer mechanical failures.
The new motor can move the ball in any direction using only electronic controls. These movements keep SIMbot’s body balanced atop the ball.
Early comparisons between SIMbot and a mechanically driven ballbot suggest the new robot is capable of similar speed — about 1.9 meters per second, or the equivalent of a very fast walk — but is not yet as efficient, said Greg Seyfarth, a former member of Hollis’ lab who recently completed his master’s degree in robotics.
Induction motors are nothing new; they use magnetic fields to induce electric current in the motor’s rotor, rather than through an electrical connection. What is new here is that the rotor is spherical and, thanks to some fancy math and advanced software, can move in any combination of three axes, giving it omnidirectional capability. In contrast to other attempts to build a SIM, the design by Hollis and Kumagai enables the ball to turn all the way around, not just move back and forth a few degrees.
Though Hollis said it is too soon to compare the cost of the experimental motor with conventional motors, he said long-range trends favor the technologies at its heart.
“This motor relies on a lot of electronics and software,” he explained. “Electronics and software are getting cheaper. Mechanical systems are not getting cheaper, or at least not as fast as electronics and software are.”
SIMbot’s mechanical simplicity is a significant advance for ballbots, a type of robot that Hollis maintains is ideally suited for working with people in human environments. Because the robot’s body dynamically balances atop the motor’s ball, a ballbot can be as tall as a person, but remain thin enough to move through doorways and in between furniture. This type of robot is inherently compliant, so people can simply push it out of the way when necessary. Ballbots also can perform tasks such as helping a person out of a chair, helping to carry parcels and physically guiding a person.
Until now, moving the ball to maintain the robot’s balance has relied on mechanical means. Hollis’ ballbots, for instance, have used an “inverse mouse ball” method, in which four motors actuate rollers that press against the ball so that it can move in any direction across a floor, while a fifth motor controls the yaw motion of the robot itself.
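The kinematics of such a mechanical drive can be sketched simply: each roller must supply the component of the desired ball surface velocity along its own drive direction. The mapping below is an illustrative simplification (roller placement angles and sign conventions are assumptions, not CMU's actual controller):

```python
import math

# Illustrative inverse-mouse-ball kinematics: map a desired ball surface
# velocity (vx, vy) to surface speeds for four rollers spaced 90 degrees
# apart around the ball's equator. Placement and signs are assumptions.

def roller_speeds(vx, vy):
    """Each roller drives the ball along the tangent at its contact point;
    its required surface speed is the projection of (vx, vy) onto that
    tangent direction."""
    angles = [0.0, math.pi / 2, math.pi, 3 * math.pi / 2]  # contact positions
    speeds = []
    for a in angles:
        tx, ty = -math.sin(a), math.cos(a)  # tangent (drive) direction
        speeds.append(vx * tx + vy * ty)
    return speeds

# Drive the ball purely in +x: the two side rollers do all the work,
# the front/back pair stays idle (zero speed).
print([round(s, 3) + 0.0 for s in roller_speeds(1.0, 0.0)])  # [0.0, -1.0, 0.0, 1.0]
```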
“But the belts that drive the rollers wear out and need to be replaced,” said Michael Shomin, a Ph.D. student in robotics. “And when the belts are replaced, the system needs to be recalibrated.” He said the new motor’s solid-state system would eliminate that time-consuming process.
The rotor of the spherical induction motor is a precisely machined hollow iron ball with a copper shell. Current is induced in the ball with six laminated steel stators, each with three-phase wire windings. The stators are positioned just next to the ball and are oriented slightly off vertical.
The six stators generate travelling magnetic waves in the ball, causing the ball to move in the direction of the wave. The direction of the magnetic waves can be steered by altering the currents in the stators.
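The steering principle rests on the standard three-phase relation: currents offset by 120 degrees in windings whose axes are offset by 120 degrees sum to a field vector that rotates at the electrical frequency. The toy calculation below uses a simplified planar two-pole geometry with unit amplitudes; it illustrates the principle, not the SIM's actual six-stator winding model.

```python
import math

# Toy illustration of the traveling-field principle behind the SIM:
# three windings carry sinusoidal currents offset by 120 degrees, and
# the resulting field vector rotates at the electrical frequency.
# Geometry and amplitudes are simplified assumptions.

def field_angle(t, omega):
    """Sum the field contributions of three coils whose axes are spaced
    120 degrees apart, each driven by a current phase-shifted by 120 deg."""
    bx = by = 0.0
    for k in range(3):
        coil_axis = 2 * math.pi * k / 3
        current = math.cos(omega * t - 2 * math.pi * k / 3)
        bx += current * math.cos(coil_axis)
        by += current * math.sin(coil_axis)
    return math.atan2(by, bx)

omega = 2 * math.pi * 50  # 50 Hz electrical frequency
print(round(field_angle(0.0, omega), 3) + 0.0)   # angle at t = 0
print(round(field_angle(0.005, omega), 3))       # a quarter cycle later
```

Reversing the phase sequence reverses the rotation; in the spherical case, re-weighting the currents across the six stators steers the traveling wave, and with it the ball, in any direction.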
Hollis and Kumagai jointly designed the motor. Ankit Bhatia, a Ph.D. student in robotics, and Olaf Sassnick, a visiting scientist from Salzburg University of Applied Sciences, adapted it for use in ballbots.
Getting rid of the mechanical drive eliminates a lot of the friction of previous ballbot models, but virtually all friction could be eliminated by eventually installing an air bearing, Hollis said. The robot body would then be separated from the motor ball with a cushion of air, rather than passive rollers.
“Even without optimizing the motor’s performance, SIMbot has demonstrated impressive performance,” Hollis said. “We expect SIMbot technology will make ballbots more accessible and more practical for wide adoption.”
The National Science Foundation and, in Japan, Grants-in-Aid for Scientific Research (KAKENHI) supported this research. A report on the work was presented at the May IEEE International Conference on Robotics and Automation in Stockholm, Sweden.
Video by: Carnegie Mellon University
About Carnegie Mellon University: Carnegie Mellon (www.cmu.edu) is a private, internationally ranked research university with programs in areas ranging from science, technology and business, to public policy, the humanities and the arts. More than 13,000 students in the university’s seven schools and colleges benefit from a small student-to-faculty ratio and an education characterized by its focus on creating and implementing solutions for real problems, interdisciplinary collaboration and innovation.
Carnegie Mellon University
5000 Forbes Ave.
Pittsburgh, PA 15213
Contact: Byron Spice, 412-268-9068
For immediate release: October 4, 2016
The post Omnidirectional Mobile Robot Has Just Two Moving Parts appeared first on Roboticmagazine.
RPA and Artificial Intelligence Summit unites the needs of the 250,000-strong SSON and PEX Network communities. It brings together those furthest along the maturity curve in automated and intelligent service innovation, such as Vodafone, Barclays, ENGIE, and NHS Wales Shared Services, with those who are just starting out, such as SAB Miller, LV, and National Grid, for a frank and open discussion of the best ways to compete in the digital business era.
Combining scores of practical end-user case studies, multiple conference streams on human workforce augmentation across the front and back offices, and over 15 hours of interactive sessions and networking, this is your one-stop shop for building the value-adding, scalable, intelligent processes that meet the business needs of the future.
Event Website: http://www.rpaandaisummit.com/
Discount Code for Robotic Magazine: VIP_ROBOTICSMAG
(Allows 20% off all standard practitioner passes)
The post RPA and Artificial Intelligence Summit appeared first on Roboticmagazine.
In line with current requirements in this age of digital manufacturing, the scientists from Stuttgart are presenting an intelligent interplay of different exhibits at AUTOMATICA in Munich from 21 to 24 June 2016. Covering the fields of people at the workplace, products and automation, and IT infrastructure and networking, the exhibits demonstrate the added value offered by a production plant geared towards Industrie 4.0.
At the Fraunhofer IPA booth, the four cornerstones of Industrie 4.0 can be experienced in various ways within the overall context of a digital production: a wide range of cyber-physical systems, a participatory platform, the Internet of things and services, and a portal with intuitive man-machine interfaces for interacting with the manufacturing system. With the aid of exhibits interacting intelligently with the cloud, visitors can comprehend the solutions offered by the research institute for various segments of the value chain. These range from singularization, through (partially) automated assembly processes and workpiece transportation, right up to networking components with the IT infrastructure. The services are relevant not only to users and decision-makers in manufacturing enterprises but also to their suppliers: for planning, operating, and optimizing production plants, as well as developing innovative industrial components, machines, and systems.
Source: Fraunhofer IPA, photo: Rainer Bez
Focus on the federative platform »Virtual Fort Knox«
Ever since 2012, Fraunhofer IPA has been working together with medium-sized enterprises on an open platform for manufacturing companies called »Virtual Fort Knox«. Under the motto »manufacturing-IT-as-a-service«, various applications (services) can make production data on the platform usable by any end-device. Joachim Seidelmann, head of DigiTools at Fraunhofer IPA, formulated the declared goal as follows: »On the one hand, we want to implement Industrie 4.0 concepts that enable users to increase their production efficiency. On the other hand, together with our industrial customers, we want to answer the question: which digital solutions can be integrated meaningfully into my product or production plant in order to develop new business models?«
At the AUTOMATICA, »Virtual Fort Knox« plays a key role: various demonstrators are linked via the platform. As with a real production plant, a wide range of near real-time status and process data is collected in the system for direct processing. The huge advantage, in particular for small and medium-sized enterprises, is that users can access the information from applications via an output medium of their choice. This does away with the need to procure and maintain a suitable IT environment. Furthermore, the user is billed for the use of the software and hardware on a »pay-as-you-go« basis, thus avoiding fixed costs.
Multiple benefits from the cloud for robotics
The basic technical requirement for an Industrie 4.0 environment is that all equipment with integrated sensors and controllers has to be networked as a cyber-physical system (CPS). A typical example of a CPS is a robot system, such as the bin-picking IPA demonstrator on show at the booth. The manufacturer-independent software bp3™ enables the robot to locate objects rapidly and reliably and plan trajectories for numerous workpieces. A further exhibit presents the advantages of a software package that can be used in conjunction with nearly all types and makes of robot to perform numerous assembly tasks. For the first time, complex tasks that were previously carried out manually, such as assembling switching cabinets, can now be taught intuitively by non-experts and thus be automated cost-effectively.
Through their connection to the cloud architecture, the potential of both software solutions is extended: thanks to the central data pool containing information on workpieces or program modules for direct implementation – so-called skills – robot systems can be put into operation and maintained more efficiently than in the past, components replaced easier and all processes traced and controlled centrally. This not only makes robot systems more adaptable but also significantly speeds up retrofitting to accommodate new product variants. Via a range of services, the cloud also offers new software functions. Similarly, locally-optimized processes can be played back to the cloud, thus enabling all networked robot systems to benefit from once-only program changes.
Source: Fraunhofer IPA, photo: Rainer Bez
For flexible transport solutions, the IPA experts have developed »Cloud Navigation«. Its advantages are demonstrated at the booth with the aid of two mobile, self-navigating systems. Since both automated guided vehicles (AGVs), or multiple AGVs in an industrial context, supply their locally-acquired data to a central point, the entire fleet benefits from more accurate localization and more efficient path planning. The AGVs could then act as »lean clients«: because computation-intensive navigation algorithms could be outsourced to the cloud server, they would require less hardware on board while retaining a high degree of navigation intelligence. External sensors, e.g. from the manufacturing environment, could be integrated as well, and navigation functions provided in the form of services.
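The fleet-sharing idea can be sketched as merging each vehicle's locally observed obstacle evidence into one shared map, so every AGV plans against the union of what any vehicle has seen. The data structures below are a minimal illustration, not the real Cloud Navigation interface.

```python
# Minimal sketch of fleet map sharing: each AGV reports grid cells it
# observed as occupied, and the cloud merges them into one shared map.
# The data structures are illustrative, not the real Cloud Navigation API.

def merge_observations(fleet_reports):
    """fleet_reports: {agv_id: set of (x, y) cells seen as occupied}.
    Returns the shared occupancy set every AGV can plan against."""
    shared = set()
    for cells in fleet_reports.values():
        shared |= cells
    return shared

reports = {
    "agv_1": {(2, 3), (2, 4)},  # saw a pallet near one station
    "agv_2": {(2, 4), (7, 1)},  # overlapping view plus a new obstacle
}
shared_map = merge_observations(reports)
print(sorted(shared_map))  # [(2, 3), (2, 4), (7, 1)]
```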
Controlling and optimizing processes
A further key element of Industrie 4.0 is the continuous monitoring of all process steps. This is achieved with »Smart System Optimization« developed at Fraunhofer IPA, which can be implemented without the need for expert IT knowledge. The mobile system collects and automatically analyzes near real-time component and process data using intelligent cameras, generally installed singly at each production station. The system not only detects process aberrations and their cause but also identifies possible losses or bottlenecks. In areas where »Smart System Optimization« is utilized, companies can increase their efficiency by more than ten percent. Moreover, with the intelligent workpiece carrier »smartWT«, single workpieces can also contribute towards process monitoring. Integrated sensors constantly gather logistics and process data relevant to quality and transmit the information wirelessly to the cloud. The user has access to current data at all times and can intervene as required, thus improving production quality and throughput.
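Fraunhofer does not publish the detection logic behind »Smart System Optimization«; in its simplest form, such a monitor flags a station when a new measurement falls far outside the recent statistics for that process step. The threshold rule below is an illustrative assumption, not the IPA system.

```python
import statistics

# Illustrative process monitor: flag a measurement as an aberration when
# it deviates from recent history by more than k standard deviations.
# The threshold and window handling are assumptions, not the IPA system.

def is_aberration(history, new_value, k=3.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return abs(new_value - mean) > k * stdev

cycle_times = [12.1, 12.0, 12.2, 11.9, 12.1, 12.0]  # seconds per station cycle
print(is_aberration(cycle_times, 12.1))  # False: normal cycle
print(is_aberration(cycle_times, 14.5))  # True: possible jam or bottleneck
```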
As far as the IT infrastructure is concerned, Fraunhofer IPA also has a solution tailored to the demands of an adaptable production plant: with the software »Sense&Act«, companies can devise individual rules to network production equipment. Modifications to the IT, as well as extensions and new interfaces, can be realized with little effort. The software uses sensor data to monitor production, for example to detect system errors, and under certain circumstances it initiates specific action, such as notifying the user or implementing a measure on the robotic system. Sensors and actuators are swiftly configured via an intuitive user interface, or shared throughout the company and evaluated.
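Conceptually, such rules are condition-action pairs evaluated against incoming sensor readings. The toy version below shows the shape of that idea; all names and reading keys are invented for illustration and are not the Sense&Act API.

```python
# Toy rule engine in the spirit of sensor-to-action networking: each rule
# pairs a condition on sensor readings with an action. Names and keys are
# invented for illustration, not the Sense&Act API.

notifications = []

rules = [
    ("overtemp",
     lambda r: r["spindle_temp_c"] > 80,
     lambda r: notifications.append("notify: spindle overheating")),
    ("jam",
     lambda r: r["conveyor_rpm"] == 0 and r["motor_current_a"] > 5,
     lambda r: notifications.append("act: stop feeder, flag jam")),
]

def evaluate(readings, rules):
    """Fire the action of every rule whose condition matches the readings."""
    for name, condition, action in rules:
        if condition(readings):
            action(readings)

evaluate({"spindle_temp_c": 85, "conveyor_rpm": 900, "motor_current_a": 3}, rules)
evaluate({"spindle_temp_c": 60, "conveyor_rpm": 0, "motor_current_a": 7}, rules)
print(notifications)
# ['notify: spindle overheating', 'act: stop feeder, flag jam']
```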
Relieving the burden on humans and analyzing data usefully
Even in Industrie 4.0 environments, closely involving people and their abilities in production brings significant advantages. How this can be achieved, even for burdensome tasks and in view of demographic change, is demonstrated by the first work exoskeleton capable of aiding the worker with overhead tasks. The assembly workplace is linked to the IT infrastructure and adapts automatically to the worker's individual body measurements as well as to the required assembly process. This reduces set-up times and takes strain off the worker. Additionally, a workplace analysis developed at Fraunhofer IPA quantifies how effectively the solution reduces the employee workload and optimizes production processes.
All the exhibits in the Industrie 4.0 environment have one thing in common: they continuously collect information in the sense of »Smart Data«, which can then be used to optimize production. Visitors to the booth can see for themselves how the IPA experts put the sensor and status data acquired from an exhibit to good use through visualization and analysis. The information gained from intelligent data analyses also suggests approaches for new business models based on usage data, such as service intervals tailored to individual requirements or adaptations of product portfolios to suit customer needs.
Source: Fraunhofer IPA, photo: Rainer Bez
Basic research on Industrie 4.0 with TRUMPF
A cooperation initiative with TRUMPF demonstrates that theory alone is not enough: in the summer of 2015, the Ditzingen-based company, one of the world's leading manufacturers of machine tools for flexible sheet metal processing and industrial lasers, entered into a five-year strategic cooperation with Fraunhofer IPA. The aim of the cooperation is to anchor knowledge from current research on Industrie 4.0 in sheet metal processing. In the so-called »flexible sheet metal processing lab«, workers from TRUMPF and Fraunhofer IPA are jointly developing innovative solutions for the production technologies of tomorrow. Initial starter projects address the areas of »intralogistics«, »service-oriented machines«, and »autonomous production«; the content of the cooperation will be developed further as it progresses, with new project topics added regularly.
Ulrich Schneider, project manager at Fraunhofer IPA, will be reporting on the joint cooperation scheme during AUTOMATICA on 24th June at 11 a.m. Together with Dr. Martin Landherr, he will be presenting the project under the title »Think in business models, work in cooperations – cooperation with TRUMPF as a practical example of the Application Center Industrie 4.0«. He will also talk about the Application Center Industrie 4.0, which is located at the Fraunhofer Institute Center. This is a test environment for industrial research that unites cyber-physical systems with a real manufacturing environment.
Source: Fraunhofer IPA, photo: Rainer Bez
The post Fraunhofer IPA demonstrates the advantages of networked manufacturing components appeared first on Roboticmagazine.