
#431178 Soft Robotics Releases Development Kit ...

Cambridge, MA – Soft Robotics Inc., which has built a fundamentally new class of robotic grippers, announced the release of its expanded and upgraded Soft Robotics Development Kit, SRDK 2.0.

The Soft Robotics Development Kit 2.0 comes complete with:

Robot tool flange mounting plate
4-, 5- and 6-position hub plates
Tool Center Point
Soft Robotics Control Unit G2
6 rail-mounted, 4-accordion actuator modules
Custom pneumatic manifold
Mounting hardware and accessories

Where SRDK 1.0 included five 4-accordion actuator modules and the ability to create a gripper containing two to five actuators, SRDK 2.0 contains six 4-accordion actuator modules plus a six-position hub, allowing users to configure six-actuator test tools. This expands use of the Development Kit to larger product applications, such as large bagged and pouched items, IV bags, bags of nuts, bread, and other food items.

SRDK 2.0 also contains an upgraded Soft Robotics Control Unit (SRCU G2) – the proprietary system that controls all software and hardware in one turnkey pneumatic operation. The upgraded SRCU features new software with a cleaner, user-friendly interface and an IP65 rating. Highly intuitive, the software can store up to eight grip profiles and allows for very precise adjustments to actuation and vacuum.

Also new with the release of SRDK 2.0 is the introduction of several accessory kits that allow for an expanded number of configurations and product applications available for testing.

Accessory Kit 1 – For SRDK 1.0 users only – includes the six-position hub and 4-accordion actuators now included in SRDK 2.0
Accessory Kit 2 – For SRDK 1.0 or 2.0 users – includes 2-accordion actuators
Accessory Kit 3 – For SRDK 1.0 or 2.0 users – includes 3-accordion actuators

The shorter 2- and 3-accordion actuators provide increased stability for high-speed applications, increased placement precision, and higher grip force capabilities, and are optimized for gripping small, shallow objects.

Designed to plug and play with any existing robot currently on the market, the Soft Robotics Development Kit 2.0 allows end users and OEM integrators to customize, test, and validate their ideal Soft Robotics solution, with their own equipment, in their own environment.

Once an ideal solution has been found, the Soft Robotics team will take those exact specifications and build a production-grade tool for implementation into the manufacturing line. And, it doesn’t end there. Created to be fully reusable, the process – configure, test, validate, build, production – can start over again as many times as needed.

See the new SRDK 2.0 on display for the first time at PACK EXPO Las Vegas, September 25 – 27, 2017 in Soft Robotics booth S-5925.

Learn more about the Soft Robotics Development Kit at www.softroboticsinc.com/srdk.
Photo Credit: Soft Robotics – www.softroboticsinc.com
###
About Soft Robotics
Soft Robotics designs and builds soft robotic gripping systems and automation solutions
that can grasp and manipulate items of varying size, shape and weight. Spun out of the
Whitesides Group at Harvard University, Soft Robotics is the only company to be
commercializing this groundbreaking and proprietary technology platform. Today, the
company is a global enterprise solving previously off-limits automation challenges for
customers in food & beverage, advanced manufacturing and ecommerce. Soft Robotics’
engineers are building an ecosystem of robots, control systems, data and machine
learning to enable the workplace of the future. For more information, please visit
www.softroboticsinc.com.

Media contact:
Jennie Kondracki
The Kondracki Group, LLC
262-501-4507
jennie@kondrackigroup.com
The post Soft Robotics Releases Development Kit 2.0 appeared first on Roboticmagazine.

Posted in Human Robots

#431165 Intel Jumps Into Brain-Like Computing ...

The brain has long inspired the design of computers and their software. Now Intel has become the latest tech company to decide that mimicking the brain’s hardware could be the next stage in the evolution of computing.
On Monday the company unveiled an experimental “neuromorphic” chip called Loihi. Neuromorphic chips are microprocessors whose architecture is configured to mimic the biological brain’s network of neurons and the connections between them called synapses.
While neural networks—the in vogue approach to artificial intelligence and machine learning—are also inspired by the brain and use layers of virtual neurons, they are still implemented on conventional silicon hardware such as CPUs and GPUs.
The main benefit of mimicking the architecture of the brain on a physical chip, say neuromorphic computing’s proponents, is energy efficiency—the human brain runs on roughly 20 watts. The “neurons” in neuromorphic chips act as both processor and memory, removing the need to shuttle data back and forth between separate units, as traditional chips must. Each neuron also only needs to be powered while it’s firing.

At present, most machine learning is done in data centers due to the massive energy and computing requirements. Creating chips that capture some of nature’s efficiency could allow AI to be run directly on devices like smartphones, cars, and robots.
This is exactly the kind of application Michael Mayberry, managing director of Intel’s research arm, touts in a blog post announcing Loihi. He talks about CCTV cameras that can run image recognition to identify missing persons or traffic lights that can track traffic flow to optimize timing and keep vehicles moving.
There’s still a long way to go before that happens though. According to Wired, so far Intel has only been working with prototypes, and the first full-size version of the chip won’t be built until November.
Once complete, it will feature 130,000 neurons and 130 million synaptic connections split between 128 computing cores. The device will be 1,000 times more energy-efficient than standard approaches, according to Mayberry, but more impressive are claims the chip will be capable of continuous learning.
Intel’s newly launched self-learning neuromorphic chip.
Normally deep learning works by training a neural network on giant datasets to create a model that can then be applied to new data. The Loihi chip will combine training and inference on the same chip, which will allow it to learn on the fly, constantly updating its models and adapting to changing circumstances without having to be deliberately re-trained.
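Loihi’s internals aren’t public, but the contrast between batch training and on-the-fly learning can be illustrated with an ordinary online learner in Python. The toy perceptron below is purely a sketch of the concept—each incoming sample is used for inference and for a weight update in the same step, so the model keeps adapting as the data stream changes:

```python
class OnlinePerceptron:
    """A toy online learner: it updates its weights after every sample,
    rather than being trained once on a batch and then frozen."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features
        self.b = 0.0
        self.lr = lr

    def predict(self, x):
        s = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1 if s > 0 else 0

    def update(self, x, label):
        # Inference and training happen in the same step: predict,
        # compare with the true label, and nudge the weights.
        error = label - self.predict(x)
        if error:
            self.w = [wi + self.lr * error * xi for wi, xi in zip(self.w, x)]
            self.b += self.lr * error


# Stream of (features, label) pairs for a simple OR-like concept.
model = OnlinePerceptron(n_features=2)
stream = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)] * 20
for x, y in stream:
    model.update(x, y)  # the model adapts sample by sample

print([model.predict(x) for x, _ in stream[:4]])
# → [0, 1, 1, 1]
```

If the underlying concept drifted mid-stream, the same loop would gradually re-fit the weights without any explicit retraining phase—the property Intel claims for Loihi, here shown only in software.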
A select group of universities and research institutions will be the first to get their hands on the new chip in the first half of 2018, but Mayberry said it could be years before it’s commercially available. Whether commercialization happens at all may largely depend on whether early adopters can get the hardware to solve any practically useful problems.
So far neuromorphic computing has struggled to gain traction outside the research community. IBM released a neuromorphic chip called TrueNorth in 2014, but the device has yet to showcase any commercially useful applications.
Lee Gomes summarizes the hurdles facing neuromorphic computing excellently in IEEE Spectrum. One is that deep learning can run on very simple, low-precision hardware that can be optimized to use very little power, which suggests complicated new architectures may struggle to find purchase.
It’s also not easy to transfer deep learning approaches developed on conventional chips over to neuromorphic hardware, and even Intel Labs chief scientist Narayan Srinivasa admitted to Forbes that Loihi wouldn’t work well with some deep learning models.
Finally, there’s considerable competition in the quest to develop new computer architectures specialized for machine learning. GPU vendors Nvidia and AMD have pivoted to take advantage of this newfound market and companies like Google and Microsoft are developing their own in-house solutions.
Intel, for its part, isn’t putting all its eggs in one basket. Last year it bought two companies building chips for specialized machine learning—Movidius and Nervana—and this was followed up with the $15 billion purchase of self-driving car chip- and camera-maker Mobileye.
And while the jury is still out on neuromorphic computing, it makes sense for a company eager to position itself as the AI chipmaker of the future to have its fingers in as many pies as possible. There are a growing number of voices suggesting that despite its undoubted power, deep learning alone will not allow us to imbue machines with the kind of adaptable, general intelligence humans possess.
What new approaches will get us there are hard to predict, but it’s entirely possible they will only work on hardware that closely mimics the one device we already know is capable of supporting this kind of intelligence—the human brain.
Image Credit: Intel


#431159 How Close Is Turing’s Dream of ...

The quest for conversational artificial intelligence has been a long one.
When Alan Turing, the father of modern computing, racked his considerable brains for a test that would truly indicate that a computer program was intelligent, he landed on this area. If a computer could convince a panel of human judges that they were talking to a human—if it could hold a convincing conversation—then it would indicate that artificial intelligence had advanced to the point where it was indistinguishable from human intelligence.
This gauntlet was thrown down in 1950 and, so far, no computer program has managed to pass the Turing test.
There have been some very notable failures, however: Joseph Weizenbaum, as early as 1966—when computers were still programmed with large punch-cards—developed a piece of natural language processing software called ELIZA. ELIZA was a machine intended to respond to human conversation by pretending to be a psychotherapist; you can still talk to her today.
Talking to ELIZA is a little strange. She’ll often rephrase things you’ve said back at you: so, for example, if you say “I’m feeling depressed,” she might say “Did you come to me because you are feeling depressed?” When she’s unsure about what you’ve said, ELIZA will usually respond with “I see,” or perhaps “Tell me more.”
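Weizenbaum’s original was written in MAD-SLIP and had a much richer rule set, but the core mechanism—pattern matching plus a template that reflects the user’s words back—can be sketched in a few lines of Python. The rules below are illustrative stand-ins, not his originals:

```python
import re

# Minimal ELIZA-style rules: each pattern captures part of the user's
# statement, and a template reflects it back as a question.
RULES = [
    (re.compile(r"i'?m feeling (.+)", re.IGNORECASE),
     "Did you come to me because you are feeling {0}?"),
    (re.compile(r"i am (.+)", re.IGNORECASE),
     "How long have you been {0}?"),
    (re.compile(r"i (?:want|need) (.+)", re.IGNORECASE),
     "Why do you want {0}?"),
]

# Stock replies used when no rule matches.
FALLBACKS = ["I see.", "Tell me more."]


def respond(utterance, turn=0):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    # No rule matched: fall back to a canned response.
    return FALLBACKS[turn % len(FALLBACKS)]


print(respond("I'm feeling depressed"))
# → Did you come to me because you are feeling depressed?
print(respond("The weather is nice today", turn=1))
# → Tell me more.
```

Nothing here models meaning at all—which is exactly why the warmth people felt toward ELIZA surprised Weizenbaum so much.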
For the first few lines of dialogue, especially if you treat her as your therapist, ELIZA can be convincingly human. This was something Weizenbaum noticed and was slightly alarmed by: people were willing to treat the algorithm as more human than it really was. Before long, even though some of the test subjects knew ELIZA was just a machine, they were opening up with some of their deepest feelings and secrets. They were pouring out their hearts to a machine. When Weizenbaum’s secretary spoke to ELIZA, even though she knew it was a fairly simple computer program, she still insisted Weizenbaum leave the room.
Part of the unexpected reaction ELIZA generated may be because people are more willing to open up to a machine, feeling they won’t be judged, even if the machine is ultimately powerless to do or say anything to really help. The ELIZA effect was named for this computer program: the tendency of humans to anthropomorphize machines, or think of them as human.

Weizenbaum himself, who later became deeply suspicious of the influence of computers and artificial intelligence in human life, was astonished that people were so willing to believe his script was human. He wrote, “I had not realized…that extremely short exposures to a relatively simple computer program could induce powerful delusional thinking in quite normal people.”

“Consciously, you know you’re talking to a big block of code stored somewhere out there in the ether. But subconsciously, you might feel like you’re interacting with a human.”

The ELIZA effect may have disturbed Weizenbaum, but it has intrigued and fascinated others for decades. Perhaps you’ve noticed it in yourself, when talking to an AI like Siri, Alexa, or Google Assistant—the occasional response can seem almost too real. Consciously, you know you’re talking to a big block of code stored somewhere out there in the ether. But subconsciously, you might feel like you’re interacting with a human.
Yet the ELIZA effect, as enticing as it is, has proved a source of frustration for people who are trying to create conversational machines. Natural language processing has proceeded in leaps and bounds since the 1960s. Now you can find friendly chatbots like Mitsuku—which has frequently won the Loebner Prize, awarded to the machine that comes closest to passing the Turing test—that aim to have a response to everything you might say.
In the commercial sphere, Facebook has opened up its Messenger program and provided software for people and companies to design their own chatbots. The idea is simple: why have an app for, say, ordering pizza when you can just chatter to a robot through your favorite messenger app and make the order in natural language, as if you were telling your friend to get it for you?
Startups like Semantic Machines hope their AI assistant will be able to interact with you just like a secretary or PA would, but with an unparalleled ability to retrieve information from the internet. They may soon be there.
But people who engineer chatbots—both in the social and commercial realm—encounter a common problem: the users, perhaps subconsciously, assume the chatbots are human and become disappointed when they’re not able to have a normal conversation. Frustration with miscommunication can often stem from raised initial expectations.
So far, no machine has really been able to crack the problem of context retention—understanding what’s been said before, referring back to it, and crafting responses based on the point the conversation has reached. Even Mitsuku will often struggle to remember the topic of conversation beyond a few lines of dialogue.

“For everything you say, there could be hundreds of responses that would make sense. When you travel a layer deeper into the conversation, those factors multiply until you end up with vast numbers of potential conversations.”

This is, of course, understandable. Conversation can be almost unimaginably complex. For everything you say, there could be hundreds of responses that would make sense. When you travel a layer deeper into the conversation, those factors multiply until—like possible games of Go or chess—you end up with vast numbers of potential conversations.
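To put rough numbers on that explosion (the branching factor here is hypothetical, chosen only for illustration): if every utterance admits about a hundred sensible replies, the number of distinct conversation paths is the branching factor raised to the depth of the exchange.

```python
def conversation_paths(branching, depth):
    """Distinct conversation paths for a fixed branching factor
    (plausible replies per turn) and conversation depth (turns)."""
    return branching ** depth


# With ~100 sensible replies per turn, even short exchanges
# produce astronomically many possible conversations.
for depth in (1, 2, 5, 10):
    print(f"{depth:>2} turns: {conversation_paths(100, depth):.3e} paths")
```

At ten turns that is 10^20 paths—orders of magnitude beyond what scripted answers could ever enumerate, which is why chatbot designers narrow the topic instead.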
But that hasn’t deterred people from trying. Most recently, tech giant Amazon has joined in, in an effort to make its AI voice assistant, Alexa, friendlier. It has been running the Alexa Prize competition, which offers a cool $500,000 to the winning AI—and a bonus of a million dollars to any team that can create a ‘socialbot’ capable of sustaining a conversation with human users for 20 minutes on a variety of themes.
Topics Alexa likes to chat about include science and technology, politics, sports, and celebrity gossip. The finalists were recently announced: chatbots from universities in Prague, Edinburgh, and Seattle. Finalists were chosen according to the ratings from Alexa users, who could trigger the socialbots into conversation by saying “Hey Alexa, let’s chat,” although the reviews for the socialbots weren’t always complimentary.
By narrowing down the fields of conversation to a specific range of topics, the Alexa Prize has cleverly started to get around the problem of context—just as commercially available chatbots hope to do. It’s much easier to model an interaction that goes a few layers into the conversational topic if you’re limiting those topics to a specific field.
Developing a machine that can hold almost any conversation with a human interlocutor convincingly might be difficult. It might even be a problem that requires artificial general intelligence to truly solve, rather than the previously employed approaches of scripted answers or neural networks that associate inputs with responses.
But a machine that can have meaningful interactions that people might value and enjoy could be just around the corner. The Alexa Prize winner will be announced in November. The ELIZA effect might mean we will relate to machines sooner than we’d thought.
So, go well, little socialbots. If you ever want to discuss the weather or what the world will be like once you guys take over, I’ll be around. Just don’t start a therapy session.
Image Credit: Shutterstock


#431142 Will Privacy Survive the Future?

Technological progress has radically transformed our concept of privacy. How we share information and display our identities has changed as we’ve migrated to the digital world.
As the Guardian states, “We now carry with us everywhere devices that give us access to all the world’s information, but they can also offer almost all the world vast quantities of information about us.” We are all leaving digital footprints as we navigate through the internet. While sometimes this information can be harmless, it’s often valuable to various stakeholders, including governments, corporations, marketers, and criminals.
The ethical debate around privacy is complex. The reality is that our definition and standards for privacy have evolved over time, and will continue to do so in the next few decades.
Implications of Emerging Technologies
Protecting privacy will only become more challenging as we experience the emergence of technologies such as virtual reality, the Internet of Things, brain-machine interfaces, and much more.
Virtual reality headsets are already gathering information about users’ locations and physical movements. In the future, our emotional experiences, reactions, and interactions in the virtual world will all be open to access and analysis. As virtual reality becomes more immersive and indistinguishable from physical reality, technology companies will be able to gather an unprecedented amount of data.
It doesn’t end there. The Internet of Things will be able to gather live data from our homes, cities and institutions. Drones may be able to spy on us as we live our everyday lives. As the amount of genetic data gathered increases, the privacy of our genes, too, may be compromised.
It gets even more concerning when we look farther into the future. As companies like Neuralink attempt to merge the human brain with machines, we are left with powerful implications for privacy. Brain-machine interfaces by nature operate by extracting information from the brain and manipulating it in order to accomplish goals. There are many parties that can benefit and take advantage of the information from the interface.
Marketing companies, for instance, would take an interest in better understanding how consumers think, and consequently in influencing those thoughts. Employers could use the information to find new ways to improve productivity, or even to monitor their employees. There will notably be risks of “brain hacking,” which we must take extreme precautions against. However, it is important to note that lesser versions of these risks already exist, i.e., phone hacking, identity fraud, and the like.
A New Much-Needed Definition of Privacy
In many ways we are already cyborgs interfacing with technology. According to theories like the extended mind hypothesis, our technological devices are an extension of our identities. We use our phones to store memories, retrieve information, and communicate. We use powerful tools like the Hubble Telescope to extend our sense of sight. In parallel, one can argue that the digital world has become an extension of the physical world.
These technological tools are a part of who we are. This has led to many ethical and societal implications. Our Facebook profiles can be processed to infer secondary information about us, such as sexual orientation, political and religious views, race, substance use, intelligence, and personality. Some argue that many of our devices may be mapping our every move. Your browsing history could be spied on and even sold in the open market.
While the argument to protect privacy and individuals’ information is valid to a certain extent, we may also have to accept the possibility that privacy will become obsolete in the future. We have inherently become more open as a society in the digital world, voluntarily sharing our identities, interests, views, and personalities.

“The question we are left with is, at what point does the tradeoff between transparency and privacy become detrimental?”

There also seems to be a contradiction between the positive trend towards mass transparency and the need to protect privacy. Many advocate for a massive decentralization and openness of information through mechanisms like blockchain.
The question we are left with is, at what point does the tradeoff between transparency and privacy become detrimental? We want to live in a world of fewer secrets, but also don’t want to live in a world where our every move is followed (not to mention our every feeling, thought and interaction). So, how do we find a balance?
Traditionally, privacy is used synonymously with secrecy. Many are led to believe that if you keep your personal information secret, then you’ve accomplished privacy. Danny Weitzner, director of the MIT Internet Policy Research Initiative, rejects this notion and argues that this old definition of privacy is dead.
From Weitzner’s perspective, protecting privacy in the digital age means creating rules that require governments and businesses to be transparent about how they use our information. In other words, we can’t bring the business of data to an end, but we can do a better job of controlling it. If these stakeholders spy on our personal information, then we should have the right to spy on how they spy on us.
The Role of Policy and Discourse
Almost always, policy has been too slow to adapt to the societal and ethical implications of technological progress. And sometimes the wrong laws can do more harm than good. For instance, in March, the US House of Representatives voted to allow internet service providers to sell your web browsing history on the open market.
More often than not, the bureaucratic nature of governance can’t keep up with exponential growth. New technologies are emerging every day and transforming society. Can we confidently claim that our world leaders, politicians, and local representatives are having these conversations and debates? Are they putting a focus on the ethical and societal implications of emerging technologies? Probably not.
We also can’t underestimate the role of public awareness and digital activism. There needs to be an emphasis on educating and engaging the general public about the complexities of these issues and the potential solutions available. The current solution may not be robust or clear, but having these discussions will get us there.
Stock Media provided by blasbike / Pond5


#431130 Innovative Collaborative Robot sets new ...

Press Release by: HMK
As the trend of Industry 4.0 takes the world by storm, collaborative robots and smart factories are becoming the latest hot topic. At this year’s PPMA show, HMK will demonstrate the world’s first collaborative robot with built-in vision recognition from Techman Robot.
The new TM5 Cobot from HMK merges systems that usually function separately in conventional robots; it is the only collaborative robot to incorporate simple programming, a fully integrated vision system, and the latest safety standards in a single unit.
With capabilities including direction identification, self-calibration of coordinates, and visual task operation enabled by built-in vision, the TM5 can fine-tune itself to actual conditions at any time to accomplish complex processes that used to demand the integration of various equipment. It requires less manpower and time to recalibrate when objects or coordinates move, significantly improving flexibility and reducing maintenance costs.
Photo Credit: hmkdirect.com
Simple. Programming could not be easier. Using an easy-to-use flow-chart program, TM-Flow, which runs on any tablet, PC, or laptop over a wireless link to the TM control box, complex automation tasks can be realised in minutes. Clever teach functions and wizards also allow hand-guided programming and easy incorporation of operations such as palletising, de-palletising, and conveyor tracking.
Smart. The TM5 is the only cobot to feature a full-colour vision package as standard, mounted on the wrist of the robot and fully supported within TM-Flow. The result allows users to easily integrate the robot into the application, without complex tooling or the need for expensive add-on vision hardware and programming.
Safe. The recently CE-marked TM5 now incorporates the new ISO/TS 15066 guidelines on safety in collaborative robot systems, which cover four types of collaborative operation:
a) Safety-rated monitored stop
b) Hand guiding
c) Speed and separation monitoring
d) Power and force limiting
Safety hardware inputs also allow the Cobot to be integrated into wider safety systems.
When you add EtherCAT and Modbus network connectivity, I/O expansion options, IoT-ready network access, and ex-stock delivery, the TM5 sets a new benchmark for this evolving robotics sector.
The TM5 is available with two payload options, 4 kg and 6 kg, with a reach of 900 mm and 700 mm respectively, both with a positioning repeatability of 0.05 mm.
HMK will be showcasing the new TM5 Cobot at this year’s PPMA show at the NEC; visit stand F102 to get hands-on with the Cobot and experience the innovative and intuitive graphic HMI and hand-guiding features.
For more information contact HMK on 01260 279411, email sales@hmkdirect.com or visit www.hmkdirect.com
The post Innovative Collaborative Robot sets new benchmark appeared first on Roboticmagazine.
