One year ago, we wrote about some “high-tech” warehouse robots from Amazon that appeared to be anything but. It was confusing, honestly, to see not just hardware that looked dated but concepts about how robots should work in warehouses that seemed dated as well. Obviously we’d expected a company like Amazon to be at the forefront of developing robotic technology to make their fulfillment centers safer and more efficient. So it’s a bit of a relief that Amazon has just announced several new robotics projects that rely on sophisticated autonomy to do useful, valuable warehouse tasks.
The highlight of the announcement is Proteus, which is like one of Amazon’s Kiva shelf-transporting robots that’s smart enough (and safe enough) to transition from a highly structured environment to a moderately structured environment, an enormous challenge for any mobile robot.
Proteus is our first fully autonomous mobile robot. Historically, it’s been difficult to safely incorporate robotics in the same physical space as people. We believe Proteus will change that while remaining smart, safe, and collaborative.
Proteus autonomously moves through our facilities using advanced safety, perception, and navigation technology developed by Amazon. The robot was built to be automatically directed to perform its work and move around employees—meaning it has no need to be confined to restricted areas. It can operate in a manner that augments simple, safe interaction between technology and people—opening up a broader range of possible uses to help our employees—such as the lifting and movement of GoCarts, the nonautomated, wheeled transports used to move packages through our facilities.
I assume that moving these GoCarts around is a significant task within Amazon’s warehouse, because last year, one of the robots that Amazon introduced (and that we were most skeptical of) was designed to do exactly that. It was called Scooter, and it was this massive mobile system that required manual loading and could move only a few carts to the same place at the same time, which seemed like a super weird approach for Amazon, as I explained at the time:
We know Amazon already understands that a great way of moving carts around is by using much smaller robots that can zip underneath a cart, lift it up, and carry it around with them. Obviously, the Kiva drive units only operate in highly structured environments, but other AMR companies are making this concept work on the warehouse floor just fine.
From what I can make out from the limited information available, Proteus shows that Amazon is not, in fact, behind the curve with autonomous mobile robots (AMRs) and has actually been doing what makes sense all along, while for some reason occasionally showing us videos of other robots like Scooter and Bert in order to (I guess?) keep their actually useful platforms secret.
Anyway, Proteus looks to be one of Amazon’s newer Kiva mobile bases combined with the sensing and intelligence that allow AMRs to operate in semistructured warehouse environments alongside moderately trained humans. Its autonomy seems to be enabled by a combination of stereo-vision sensors and several planar lidars at the front and sides, a good combination for both safety and effective indoor localization in environments with a bunch of reliably static features.
I’m particularly impressed with the emphasis on human-robot interaction with Proteus, which often seems to be a secondary concern for robots designed for work in industry. The “eyes” are expressive in a minimalist sort of way, and while the front of the robot is very functional in appearance, the arrangement of the sensors and light bar also manages to give it a sort of endearingly serious face. That green light that the robot projects in front of itself also seems to be designed for human interaction—I haven’t seen any sensors that use light like that, but it seems like an effective way of letting a human know that the robot is active and moving. Overall, I think it’s cute, although very much not in a “let’s try to make this robot look cute” way, which is good.
What we’re not seeing with Proteus is all of the software infrastructure required to make it work effectively. Don’t get me wrong—making this hardware cost effective and reliable enough that Amazon can scale to however many robots it wants to scale to (likely a frighteningly large number) is a huge achievement. But there’s also all that fleet-management stuff that gets much more complicated once you have robots autonomously moving things around an active warehouse full of fragile humans who need to be both collaborated with and avoided.
Proteus is certainly the star of the show here, but Amazon did also introduce a couple of new robotic systems. One is Cardinal:
The movement of heavy packages, as well as the reduction of twisting and turning motions by employees, are areas we continually look to automate to help reduce risk of injury. Enter Cardinal, the robotic work cell that uses advanced artificial intelligence (AI) and computer vision to nimbly and quickly select one package out of a pile of packages, lift it, read the label, and precisely place it in a GoCart to send the package on the next step of its journey. Cardinal reduces the risk of employee injuries by handling tasks that require lifting and turning of large or heavy packages or complicated packing in a confined space.
There’s also a new system for transferring pods from containers to adorable little container-hauling robots, designed to minimize the number of times that humans have to reach up or down or sideways:
It’s amazing to look at this kind of thing and realize the amount of effort that Amazon is putting in to maximize the efficiency of absolutely everything surrounding the (so far) very hard-to-replace humans in their fulfillment centers. There’s still nothing that can do a better job than our combination of eyes, brains, and hands when it comes to rapidly and reliably picking random things out of things and putting them into other things, but the sooner Amazon can solve that problem, the sooner the humans that those eyes and brains and hands belong to will be able to direct their attention to more creative and fulfilling tasks. Or that’s the idea, anyway.
Amazon says it expects Proteus to start off moving carts around in specific areas, with the hope that it’ll eventually automate cart movements in its warehouses as much as possible. And Cardinal is still in prototype form, but Amazon hopes that it’ll be deployed in fulfillment centers by next year.
Researchers at IIT-Istituto Italiano di Tecnologia (Italian Institute of Technology) and INAIL (Italian Workers’ Compensation Authority) have designed and created innovative prototypes of wearable robotic exoskeletons for industrial use, to make work in the industrial and manufacturing sectors safer. Via electric motors and artificial intelligence algorithms, these wearable robotic devices will assist workers engaged in the most physically demanding tasks, reducing the effort required by up to 40 percent and lowering the rate of workplace accidents and chronic occupational disorders. Researchers are starting to test the prototypes in real scenarios and plan further development to reach the technological level required to bring them to market in a few years.
During the Automate 2022 trade show on June 6-9 in Detroit, Southwest Research Institute is introducing new automation technology that allows industrial robots to visually classify work and autonomously perform tasks.
Humans are unrivaled in the area of cognition. After all, no other species has sent probes to other planets, produced lifesaving vaccines, or created poetry. How information is processed in the human brain to make this possible is a question that has drawn endless fascination, yet no definitive answers.
Our understanding of brain function has changed over the years. But current theoretical models describe the brain as a “distributed information-processing system.” This means it has distinct components that are tightly networked through the brain’s wiring. To interact with each other, regions exchange information through a system of input and output signals.
However, this is only a small part of a more complex picture. In a study published last week in Nature Neuroscience, using evidence from different species and multiple neuroscientific disciplines, we show that there isn’t just one type of information processing in the brain. How information is processed also differs between humans and other primates, which may explain why our species’ cognitive abilities are so superior.
We borrowed concepts from what’s known as the mathematical framework of information theory—the study of measuring, storing, and communicating digital information, which is crucial to technology such as the internet and artificial intelligence—to track how the brain processes information. We found that different brain regions in fact use different strategies to interact with each other.
Some brain regions exchange information with others in a very stereotypical way, using input and output. This ensures that signals get across in a reproducible and dependable manner. This is the case for areas that are specialized for sensory and motor functions (such as processing sound, visual, and movement information).
Take the eyes, for example, which send signals to the back of the brain for processing. Much of the information sent is duplicated, because each eye provides its own copy. Half of this information, in other words, is not needed. So we call this type of input-output information processing “redundant.”
But the redundancy provides robustness and reliability; it is what enables us to still see with only one eye. This capability is essential for survival. In fact, it is so crucial that the connections between these brain regions are anatomically hard-wired in the brain, a bit like a telephone landline.
However, not all information provided by the eyes is redundant. Combining information from both eyes ultimately enables the brain to process depth and distance between objects. This is the basis for many kinds of 3D glasses at the cinema.
This is an example of a fundamentally different way of processing information, in a way that is greater than the sum of its parts. We call this type of information processing—when complex signals from across different brain networks are integrated—“synergistic.”
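The distinction between redundancy and synergy can be made concrete with a small information-theoretic sketch. This is an illustration of the concepts only, not the method used in the study (which applies a far more sophisticated decomposition to neuroimaging time series); the variable names and the binary-signal setup are assumptions chosen for simplicity. Two copies of the same signal are redundant: a second copy adds nothing. An XOR combination is synergistic: neither input alone is informative, but together they determine the output completely.

```python
import numpy as np
from collections import Counter

def entropy(samples):
    # Shannon entropy in bits, estimated from discrete samples
    counts = np.array(list(Counter(samples).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def mutual_info(xs, ys):
    # I(X;Y) = H(X) + H(Y) - H(X,Y)
    joint = list(zip(xs, ys))
    return entropy(xs) + entropy(ys) - entropy(joint)

rng = np.random.default_rng(0)
a = rng.integers(0, 2, 10000)  # one "eye"
b = rng.integers(0, 2, 10000)  # an independent signal

# Redundancy: two inputs carrying the same signal. One input alone
# already tells us everything the pair does about the target (~1 bit);
# the duplicate copy adds nothing.
target = a
mi_one_copy = mutual_info(a, target)
mi_two_copies = mutual_info(list(zip(a, a)), target)

# Synergy: XOR. Each input alone carries ~0 bits about the output,
# but the two inputs together determine it completely (~1 bit).
xor = a ^ b
mi_single = mutual_info(a, xor)
mi_pair = mutual_info(list(zip(a, b)), xor)

print(mi_one_copy, mi_two_copies)  # both ~1 bit: redundant
print(mi_single, mi_pair)          # ~0 alone, ~1 together: synergistic
```

The "greater than the sum of its parts" phrasing in the text corresponds to the last pair of numbers: the whole (both inputs) carries information that no part carries on its own.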
Synergistic processing is most prevalent in brain regions that support a wide range of more complex cognitive functions, such as attention, learning, working memory, and social and numerical cognition. It is not hardwired in the sense that it can change in response to our experiences, connecting different networks in different ways. This facilitates the combination of information.
Such areas where lots of synergy takes place—mostly in the front and middle of the cortex (the brain’s outer layer)—integrate different sources of information from the entire brain. They are therefore more widely and efficiently connected with the rest of the brain than the regions which deal with primary sensory and movement-related information.
High-synergy areas that support integration of information also typically have lots of synapses, the microscopic connections that enable nerve cells to communicate.
Is Synergy What Makes Us Special?
We wanted to know whether this ability to accumulate and build information through complex networks across the brain is different between humans and other primates, which are close relatives of ours in evolutionary terms.
To find out, we looked at brain imaging data and genetic analyses of different species. We found that synergistic interactions account for a higher proportion of total information flow in the human brain than in the brains of macaque monkeys. In contrast, the brains of both species are equal in terms of how much they rely on redundant information.
However, we also looked specifically at the prefrontal cortex, an area in the front of the brain that supports more advanced cognitive functioning. In macaques, redundant information processing is more prevalent in this region, whereas in humans it is a synergy-heavy area.
The prefrontal cortex has also undergone significant expansion with evolution. When we examined data from chimpanzee brains, we found that the more a region of the human brain had expanded during evolution in size relative to its counterpart in the chimp, the more this region relied on synergy.
We also looked at genetic analyses from human donors. This showed that brain regions associated with processing synergistic information are more likely to express genes that are uniquely human and related to brain development and function, such as intelligence.
This led us to the conclusion that additional human brain tissue, acquired as a result of evolution, may be primarily dedicated to synergy. In turn, it is tempting to speculate that the advantages of greater synergy may, in part, explain our species’ additional cognitive capabilities. Synergy may add an important piece to the puzzle of human brain evolution, which was previously missing.
Ultimately, our work reveals how the human brain navigates the trade-off between reliability and integration of information; we need both. Importantly, the framework we developed holds the promise of critical new insights into a wide array of neuroscientific questions, from those about general cognition to disorders.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: Gerrit Bril from Pixabay
Researchers at Shanghai Jiao Tong University, University of Oxford, and the Tencent Robotics X Lab have recently introduced a configuration-aware policy for safely controlling mobile robotic arms. This policy, introduced in a paper pre-published on arXiv, can help to better guide the movements of a robotic arm, while also reducing the risk that it will collide with objects and other obstacles in its vicinity.