Tag Archives: robot

#440357 This Autonomous Delivery Robot Has ...

Autonomous delivery was already on multiple companies’ research and development agenda before the pandemic, but when people stopped wanting to leave their homes it took on a whole new level of urgency (and potential for profit). Besides the fact that the pandemic doesn’t seem to be subsiding—note the continuous parade of new Greek-letter variants—our habits have been altered in a lasting way, with more people shopping online and getting groceries and other items delivered to their homes.

This week Nuro, a robotics company based in Mountain View, California, unveiled what it hopes will be a big player in last-mile delivery. The company’s third-generation autonomous delivery vehicle has some impressive features, and some clever ones—like external airbags that deploy if the vehicle hits a pedestrian (which hopefully won’t happen too often, if ever).

Despite being about 20 percent narrower than the average sedan, the delivery bot has 27 cubic feet of space inside; for comparison’s sake, the tiny Smart ForTwo has 12.4 cubic feet of cargo space, while the Tesla Model S has 26. It can carry up to 500 pounds and move at a speed of 45 miles per hour.

Image Credit: Nuro
Nuro has committed to minimizing its environmental footprint—the delivery bot runs on batteries, and according to the press release, the company will use 100 percent renewable electricity from wind farms in Texas to power the fleet (though it’s unclear how they’ll do this, as Texas is pretty far from northern California, and that’s where the vehicles will initially be operating; Nuro likely buys credits that go towards expanding wind energy in Texas).

Nuro’s first delivery bot was unveiled in 2018, followed by a second iteration in 2019. The company recently partnered with 7-Eleven to do autonomous deliveries in its hometown (Mountain View) using this second iteration, called the R2—though in the initial phase of the service, deliveries will be made by autonomous Priuses.

The newest version of the bot is equipped with sensors that can tell the difference between a pile of leaves and an animal, as well as how many pedestrians are standing at a crosswalk in dense fog. Nuro says the vehicle “was designed to feel like a friendly member of the community.” This sounds a tad dystopian—it is, after all, an autonomous robot on wheels—but the intention is in the right place. Customers will retrieve their orders and interact with the bot using a large exterior touchscreen.

Whether an optimal future is one where any product we desire can be delivered to our door within hours or minutes is a debate all its own, but it seems that’s the direction we’re heading in. Nuro will have plenty of competition in the last-mile delivery market, potentially including an Amazon system that releases multiple small wheeled robots from a large truck (Amazon patented the concept last year, but there’s been no further word about whether they’re planning to trial it). Nuro is building a manufacturing facility and test track in Nevada, and is currently in the pre-production phase.

Image Credit: Nuro

Posted in Human Robots

#440324 A Robot for the Worst Job in the ...

As COVID-19 stresses global supply chains, the logistics industry is looking to automation to help keep workers safe and boost their efficiency. But there are many warehouse operations that don’t lend themselves to traditional automation—namely, tasks where the inputs and outputs of a process aren’t always well defined and can’t be completely controlled. A new generation of robots with the intelligence and flexibility to handle the kind of variation that people take in stride is entering warehouse environments. A prime example is Stretch, a new robot from Boston Dynamics that can move heavy boxes where they need to go just as fast as an experienced warehouse worker.

Stretch’s design is somewhat of a departure from the humanoid and quadrupedal robots that Boston Dynamics is best known for, such as Atlas and Spot. With its single massive arm, a gripper packed with sensors and an array of suction cups, and an omnidirectional mobile base, Stretch can transfer boxes that weigh as much as 50 pounds (23 kilograms) from the back of a truck to a conveyor belt at a rate of 800 boxes per hour. An experienced human worker can move boxes at a similar rate, but not all day long, whereas Stretch can go for 16 hours before recharging. And this kind of work is punishing on the human body, especially when heavy boxes have to be moved from near a trailer’s ceiling or floor.

“Truck unloading is one of the hardest jobs in a warehouse, and that's one of the reasons we're starting there with Stretch,” says Kevin Blankespoor, senior vice president of warehouse robotics at Boston Dynamics. Blankespoor explains that Stretch isn’t meant to replace people entirely; the idea is that multiple Stretch robots could make a human worker an order of magnitude more efficient. “Typically, you’ll have two people unloading each truck. Where we want to get with Stretch is to have one person unloading four or five trucks at the same time, using Stretches as tools.”

All Stretch needs is to be shown the back of a trailer packed with boxes, and it’ll autonomously go to work, placing each box on a conveyor belt one by one until the trailer is empty. People are still there to make sure that everything goes smoothly, and they can step in if Stretch runs into something that it can’t handle, but their full-time job becomes robot supervision instead of lifting heavy boxes all day.

“No one wants to do receiving.” —Matt Beane, UCSB
Achieving this level of reliable autonomy with Stretch has taken Boston Dynamics years of work, building on decades of experience developing robots that are strong, fast, and agile. Besides the challenge of building a high-performance robotic arm, the company also had to solve some problems that people find trivial but are difficult for robots, like looking at a wall of closely packed brown boxes and being able to tell where one stops and another begins.

Safety is also a focus, says Blankespoor, explaining that Stretch follows the standards for mobile industrial robots set by the American National Standards Institute and the Robotics Industry Association. That the robot operates inside a truck or trailer also helps to keep Stretch safely isolated from people working nearby, and at least for now, the trailer opening is fenced off while the robot is inside.

Stretch is optimized for moving boxes, a task that’s required throughout a warehouse. Boston Dynamics hopes that over the longer term the robot will be flexible enough to put its box-moving expertise to use wherever it’s needed. In addition to unloading trucks, Stretch has the potential to unload boxes from pallets, put boxes on shelves, build orders out of multiple boxes from different places in a warehouse, and ultimately load boxes onto trucks, a much more difficult problem than unloading due to the planning and precision required.

“Where we want to get with Stretch is to have one person unloading four or five trucks at the same time.” —Kevin Blankespoor, Boston Dynamics
In the short term, unloading a trailer (part of a warehouse job called “receiving”) is the best place for a robot like Stretch, agrees Matt Beane, who studies work involving robotics and AI at the University of California, Santa Barbara. “No one wants to do receiving,” he says. “It’s dangerous, tiring, and monotonous.”

But Beane, who for the last two years has led a team of field researchers in a nationwide study of automation in warehousing, points out that there may be important nuances to the job that a robot such as Stretch will probably miss, like interacting with the people who are working other parts of the receiving process. “There's subtle, high-bandwidth information being exchanged about boxes that humans down the line use as key inputs to do their job effectively, and I will be singularly impressed if Stretch can match that.”

Boston Dynamics spent much of 2021 turning Stretch from a prototype, built largely from pieces designed for Atlas and Spot, into a production-ready system that will begin shipping to a select group of customers in 2022, with broader sales expected in 2023. For Blankespoor, that milestone will represent just the beginning. He feels that such robots are poised to have an enormous impact on the logistics industry. “Despite the success of automation in manufacturing, warehouses are still almost entirely manually operated—we’re just starting to see a new generation of robots that can handle the variation you see in a warehouse, and that’s what we’re excited about with Stretch.”

Posted in Human Robots

#440297 Moving toward the first flying humanoid ...

Researchers at the Italian Institute of Technology (IIT) have recently been exploring a fascinating idea, that of creating humanoid robots that can fly. To efficiently control the movements of flying robots, objects or vehicles, however, researchers require systems that can reliably estimate the intensity of the thrust produced by propellers, which allow them to move through the air.

Posted in Human Robots

#440269 Children as Social Robot Designers

The robot in the picture above is called YOLO, which stands for “your own living object.” It’s pretty weird looking—not like something you or I would design if someone were to tell us to design a social robot, right? And that’s because YOLO is a robot that was designed by, and for, children. Not adults making the kind of robot that they think kids will want, mind you, but actual children doing the designing from scratch.
Getting children to design a robot was not easy. It took years to take YOLO from concept to physical reality, incorporating considerations for simplicity, cost, and a level of durability that’s compatible with open-ended play. The end result was something completely different, and also something that was very effective at helping kids tell better stories.

Human-centered design of robots can be very challenging, because once you’ve gotten a robot to a point where you’re ready for user testing, the kinds of changes that you can easily implement are typically pretty minor. On the other hand, if you try to get user input earlier, you’re generally restricted to things like interviews or questionnaires or asking people to look at images or animations, and none of that stuff is very reliable at providing useful feedback in the same way that interaction with a physical robot would be.
When you start talking about children, things get even more difficult, because even those questionnaires don’t work nearly as well as they would with an adult. This is a big problem because social robots are potentially (I would argue) very valuable for children as tools for education and social development. We’re not there with social robots yet, of course, but understanding how to design robots for kids is, if not step one, still one of the things that we need to figure out early.

What’s so cool about YOLO is how unapologetically child-centric the entire process is; there doesn’t seem to be even a little bit of adult going on in that robot. But it took input from 142 children to get to this point. For the humans working on this, YOLO has been quite a journey.

Patrícia Alves-Oliveira
For more details on YOLO and the thought and process behind the design, we spoke with YOLO’s creator, Patrícia Alves-Oliveira.
IEEE Spectrum: What are your general impressions of the current generation of social robots for children?
Patrícia Alves-Oliveira: Social robots seem to be the new generation of toys for children. Toys, in general, are the most important tool in a child’s life because while manipulating them, children learn to explore the world. Toys are the very first tool children use to express their own emotions and thoughts. The way we use tools, and toys in the case of children, can deeply influence and transform how we learn and experience the world. If we think of social robots for children as the new generation of tools, it feels that children can have access to a richer playset which enables them to be stimulated in new ways that traditional toys cannot elicit. Consequently, they can experience the world in richer ways.
What kind of toys are social robots?
Social robots can include robots for play and robots for learning. Robots for play usually take the form of an animal or a doll, and often include human-like features, such as eyes and a mouth. When children use robots for play, they are not only being entertained, but also stimulated. For example, they learn about problem solving, conflict resolution, and social-emotional skills while playing. This is possible because the robot can play back to them, resulting in bi-directional play, or social play. The play does not only depend on the child’s imagination, but also on the relationship that they can build with this artificial embodied system.
By contrast, robots for learning are deliberately designed to teach children something specific. They’re a tangible way of learning about abstract concepts that otherwise would be hard to digest. For example, children can use robots to learn about geometry—perhaps they need to program a robot to make a right angle, so they learn that 90º means a right angle, and that if the robot continues making right turns, it can actually make the shape of a rectangle. These types of robots are generally less animal-like or human-like, and instead have a more practical shape, such as a cube on wheels.
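The geometry lesson described above can be sketched in code. This is a hypothetical, turtle-style illustration (the `Robot` class is invented here, not any real educational-robot API): after four right turns of 90º each, the robot faces its original heading again, and equal opposite side lengths trace out a rectangle.

```python
class Robot:
    """A minimal turtle-style robot for illustrating the geometry example."""

    def __init__(self):
        self.heading = 0  # degrees, 0 = starting direction
        self.path = []    # list of (heading, distance) segments driven

    def forward(self, distance):
        # Record the segment driven at the current heading.
        self.path.append((self.heading % 360, distance))

    def turn_right(self):
        # A right angle: exactly 90 degrees.
        self.heading += 90


robot = Robot()
for side in (10, 5, 10, 5):  # equal opposite sides -> a rectangle
    robot.forward(side)
    robot.turn_right()

# Four right angles add up to 360 degrees, so the robot ends up
# facing the way it started, having traced a closed rectangle.
```

The point a child discovers is exactly the one in the loop: four 90º turns return the robot to its original heading.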
Can you describe what makes YOLO different?
YOLO is a robot whose purpose is to stimulate creativity in children during play. To do this, YOLO uses two techniques called “mirroring” and “contrasting.” These two techniques derive from creativity research and serve to develop convergent and divergent thinking, which are two different types of creative thought that we all have.
When YOLO uses mirroring, it means the robot mimics the same play patterns that children perform while manipulating the robot. So, if children move the robot to the right, YOLO memorizes this movement, and then mimics it. If we imagine what this means within a storytelling context, we can imagine a child moving the robot to the right because “the robot is going to school.” As the robot mimics this movement, the child understands that as a “yes, YOLO is continuing on to school.” The mirroring technique stimulates convergent thinking, related to the elaboration and exploration of details about one single idea.
When YOLO uses contrasting, it means the robot is contrasting the child’s play pattern. Taking the same school example, if a child moves the robot to the right to signal it’s going to school, the robot will then contrast that movement by moving to the left. In the context of storytelling, the child might think “oh, there is something the robot is afraid of in school!” This will shift the story, and children are likely to make a shift in their story narratives. The contrast technique stimulates divergent thinking, as children need to incorporate the novel behavior of the robot in their story in such a way that the story continues to make sense.
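The two techniques can be reduced to a very small sketch. This is not Alves-Oliveira's actual code; representing a movement as a simple direction string is an assumption made purely for illustration.

```python
# Opposite directions, used by the contrasting technique.
OPPOSITE = {"left": "right", "right": "left",
            "forward": "backward", "backward": "forward"}


def mirror(last_move):
    """Mirroring: repeat the child's play pattern (stimulates convergent thinking)."""
    return last_move


def contrast(last_move):
    """Contrasting: respond with the opposite movement (stimulates divergent thinking)."""
    return OPPOSITE[last_move]
```

In the school example from the interview, the child moves the robot right; `mirror("right")` keeps the story going in the same direction, while `contrast("right")` returns `"left"`, nudging the child to invent a reason for the reversal.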

Note: the following video is an example of an original story created by a Portuguese child playing with YOLO. To protect the privacy of the child, the story was animated by an artist.

Why is this approach with YOLO valuable?
The value of YOLO lies in its interaction with children. The main idea is that through the interaction between children and YOLO, children’s creative abilities can be stimulated. While this robot uses creativity stimulation techniques, it’s also simple enough for children to play with it as a normal toy. YOLO incorporates both the benefits of social robotics combined with already known benefits of traditional toys. Overall, when creating stories with YOLO, children have more original ideas.
The reason why we were able to create such a robot is because children were involved in the entire design of the robot from the beginning. My major design principle for YOLO was that it can fit a child’s world. The way I accomplished this was to test and test and test YOLO with children until the robot was as natural to use as a toy.
If you had been asked to design a robot to perform YOLO's function without the input of children, how do you think the robot would be different?
I believe YOLO would be very different. But I also believe that YOLO would be very different if another sample of children was brought in during the design process. My key insight has been that even if the way the robot looks changes, the essence of YOLO’s interactions with children would remain stable. This is because we can keep the key aspects of YOLO’s operation constant, including creativity provocation, open-ended play, ease of use, and abstract non-anthropomorphic design. This means that YOLO can have a different shape and express itself using different physical behaviors, while still using the same creativity techniques.
Why is avoiding anthropomorphism important?
The main reason is that it’s much harder to fulfill expectations if we design a robot that looks like a human, because people will expect the robot to behave as a human, which is never the case due to technological shortcomings. Staying away from designing human-like robots also ended up being a blessing in disguise, as this opens new spaces when thinking of robots: if not like us, how should a robot look? I think this question is beautiful, and can open doors for many creative and innovative applications of robots.
When you design with children, how do you separate physical robot design, and robot purpose and functionality?
The design of YOLO began with the basic idea of moving a cube around. Many children’s toys have geometric shapes, so I started by building paper cubes using origami techniques. The paper cubes I built were of different sizes, and I asked children to create a story using the cubes as their characters. I then just observed the children as they played with the cubes, and based on their play, I made design decisions about YOLO.

For example, I started noticing that while grabbing the paper cubes, children would grasp the edges of the cube, making the edges round with use. I realized that these edges were not ideal for grasping and so I gave the children a new design with rounded edges. This is when the shape of the robot started being designed.
To design YOLO’s behaviors, I listened to how children told stories while using the paper cubes. They generally attributed specific personality traits to their paper cube characters. One character was grumpy, another was shy, and so on. This led me to think that YOLO should also exhibit personality as a way to engage children in their storytelling, and I started reading about personality research, specifically looking at non-verbal behaviors associated with personality expression. The next step was to translate the personality requirements into the robot’s movements. For a grumpy personality, YOLO would move fast and with high amplitude movements. For a shy robot, YOLO would move slowly and with low amplitude movements, almost saying “don’t look at me, I am not here!”

A series of images showing different iterations of the YOLO robot, from sketches, to paper models, to designs of different sizes and shapes. Image Credit: Patrícia Alves-Oliveira
How much iteration did it take to arrive at YOLO’s final design?
I refined the robot a lot, changed the shape, the size, and the multimodal expression. I always included children to test every change, and I drew from their behavior as inspiration for the next design iteration. To give an example, at some point YOLO was making abstract sounds as an expressive modality. When children interacted with this prototype version, they completely ignored the sounds and talked on top of the robot, sometimes yelling over it. This indicated that the sounds of the robot were obstructing their own expression in the story, so sounds were removed, and YOLO is now a silent robot.
Another example was the inclusion of touch as a social feature. When I was testing the first actuated prototype, YOLO would start moving while children were holding it, and the movements of the robot scared children. I remember one child actually said, “the robot does not like me, it wants to go away.” After this testing session, I incorporated a touch sensor in YOLO, so when it recognizes touch, YOLO does not move because a child is holding and playing with it. YOLO starts moving only when the touch sensor does not recognize touch anymore, to ensure that the play of children is not interrupted. This made a huge difference in the flow of the interaction.
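The touch-gated behavior described here has a simple core: movement is suppressed while the sensor reports contact, and resumes once the child lets go. The sketch below is a hypothetical illustration of that logic; the class and method names are invented and are not YOLO's real API.

```python
class TouchGatedDrive:
    """Suppress robot movement while a child is holding the robot."""

    def __init__(self):
        self.held = False

    def on_touch(self, sensed):
        # Called whenever the touch sensor's reading changes.
        self.held = sensed

    def may_move(self):
        # Movement is allowed only when the robot is not being held,
        # so the robot never "runs away" mid-play.
        return not self.held


drive = TouchGatedDrive()
drive.on_touch(True)   # child picks YOLO up: stay still
drive.on_touch(False)  # child lets go: movement may resume
```

The design point is that the gate lives in the control loop, not in the child's hands: the robot keeps its autonomy but yields it automatically whenever it is being played with.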

Close up images of the YOLO robot showing wheels, a touch sensor, and a forest of small plastic whiskers with LEDs under them growing out of the robot’s head. Image Credit: Patrícia Alves-Oliveira
How do you incorporate multiple children (who may feel and want different things) into the design process for a single robot?

I’ve included children from 7 to 9 years old in the design of YOLO. Within this specific age group, children are in the same developmental stage, where they attach abstract concepts to concrete situations and use objects locally present as a tool to learn and understand the world. Although there are differences related to a child’s personality and preferences, the way children use, manipulate, and understand objects is quite similar at this age. This was also true in the context of the robot’s design. Knowing the developmental stage of children helped me choose the activities and materials to use during the design sessions to best define the design requirements for the robot.
While designing YOLO, I faced a question that I think many designers face: When is the robot design finished? It is easy to fall into feature creep and keep adding and adding and adding features. To avoid this, I focused on design principles rather than designing features. The distinction between a principle and a feature is that when you design for a feature, you add or remove the exact option the user is asking for. This can make the design short-sighted.
Take the example of a child getting scared if YOLO started moving while they were holding it. Designing for features would mean removing the ability of the robot to move. However, if we design for principles, we require a deeper understanding of the interaction between the child and the robot. Once we have this, it’s clear that the main problem here was not the robot moving, but the timing of the robot staying still. The child does not necessarily want a stationary robot, but rather a robot they can have some control over; so instead of removing the navigation possibility, we added a touch sensor that stops the robot from moving while it is being held.
Do you think this long process was worthwhile?
Building YOLO the way I did was a 4-year journey that enabled me to explore many aspects of how robots can be used to nurture intrinsic human abilities, such as creativity. While I was designing and building YOLO, I faced many questions, such as: What does it mean to build a robot for creativity? What methods do I need to develop to successfully include children in the design of the robot? How can I measure the success of using YOLO for creativity stimulation? Being able to answer all of these questions was a valuable part of designing and fabricating YOLO, and I feel that as we answer these questions with a robot like YOLO, we are also contributing to many other fields, such as psychology, design, and engineering more broadly.
A paper detailing the design of YOLO received the best paper award in the design track of the HRI 2021 conference, and you can see Patrícia Alves-Oliveira give a talk about her research here.

Posted in Human Robots

#439404 Walker X by UBTECH

Walker X is the latest version of UBTECH Robotics’ groundbreaking bipedal humanoid robot.

Posted in Human Robots