Tag Archives: robot

#439879 Teaching robots to think like us: Brain ...

Can intelligence be taught to robots? Advances in physical reservoir computing, a technology that makes sense of brain signals, could contribute to creating artificial intelligence machines that think like us.

#439816 This Bipedal Drone Robot Can Walk, Fly, ...

Most animals are limited to either walking, flying, or swimming, with a handful of lucky species whose physiology allows them to cross over. A new robot took inspiration from them, and can fly like a bird just as well as it can walk like a (weirdly awkward, metallic, tiny) person. It also happens to be able to skateboard and slackline, two skills most humans will never pick up.

Described in a paper published this week in Science Robotics, the robot is named Leo, short for Leonardo, which in turn is short for LEgs ONboARD drOne. The name makes it sound like a drone with legs, but Leo has a somewhat humanoid shape, with multi-joint legs, propeller thrusters that look like arms, a “body” that contains its motors and electronics, and a dome-shaped protective helmet.

Leo was built by a team at Caltech, and they were particularly interested in how the robot would transition between walking and flying. The team notes that they studied the way birds use their legs to generate thrust when they take off, and applied similar principles to the robot. In a video that shows Leo approaching a staircase, taking off, and gliding over the stairs to land near the bottom, the robot’s motions are seamlessly graceful.

“There is a similarity between how a human wearing a jet suit controls their legs and feet when landing or taking off and how LEO uses synchronized control of distributed propeller-based thrusters and leg joints,” said Soon-Jo Chung, one of the paper’s authors and a professor at Caltech. “We wanted to study the interface of walking and flying from the dynamics and control standpoint.”

Leo walks at a speed of 20 centimeters (7.87 inches) per second, but can move faster by mixing in some flying with the walking. How wide our steps are, where we place our feet, and where our torsos are in relation to our legs all help us balance when we walk. The robot uses its propellers to help it balance, while its leg actuators move it forward.

To teach the robot to slackline—which is much harder than walking on a balance beam—the team overrode its foot contact sensors with a fixed virtual foot contact centered just underneath it, because the sensors weren’t able to detect the line. The propellers played a big part as well, helping keep Leo upright and balanced.
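The override is easier to see in code. Below is a minimal, hypothetical Python sketch of the idea; the function and variable names are invented for illustration and are not taken from the Caltech software.

```python
# Hypothetical sketch of the "virtual foot contact" override described above.
# Names and interfaces are placeholders, not the team's actual code.

def contact_point_for_balance(sensor_contact, body_position, slackline_mode):
    """Return the foot contact point the balance controller should use.

    sensor_contact: (x, y, z) from the real foot sensor, or None when the
                    thin slackline is not detected.
    body_position:  (x, y, z) estimate of the robot's torso position.
    slackline_mode: True when the contact sensors should be overridden.
    """
    if slackline_mode or sensor_contact is None:
        # Pretend the foot is touching a fixed point directly underneath
        # the body, so the balance loop always has a consistent pivot.
        x, y, _ = body_position
        return (x, y, 0.0)
    return sensor_contact
```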

For the robot to ride a skateboard, the team broke the process down into two distinct components: controlling the steering angle and controlling the skateboard’s acceleration and deceleration. Placing Leo’s legs in specific spots on the board made it tilt to enable steering, and forward acceleration was achieved by moving the bot’s center of mass backward while pitching the body forward at the same time.
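As a rough, hypothetical sketch of that two-part decomposition (the functions and gains below are invented for illustration, not taken from the paper), the controller can be written as two independent mappings: one from a desired turn rate to a lean of the legs, and one from a desired acceleration to a center-of-mass shift plus a body pitch.

```python
# Hypothetical two-part skateboard controller following the decomposition
# described above. Gains and units are illustrative placeholders.

def steering_command(desired_turn_rate, k_lean=0.5):
    """Map a desired turn rate (rad/s) to a lateral leg lean angle (rad).

    Leaning shifts the feet on the deck, tilting the board to steer."""
    return k_lean * desired_turn_rate

def acceleration_command(desired_accel, k_com=0.02, k_pitch=0.3):
    """Map a desired acceleration (m/s^2) to (com_shift_m, body_pitch_rad).

    Shifting the center of mass backward while pitching the body forward
    produces forward acceleration on the board."""
    return (-k_com * desired_accel, k_pitch * desired_accel)

# Example: a gentle right turn while speeding up slightly.
lean = steering_command(desired_turn_rate=0.2)
com_shift, body_pitch = acceleration_command(desired_accel=0.1)
```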

So besides being cool (and a little creepy), what’s the goal of developing a robot like Leo? The paper authors see robots like Leo enabling a range of robotic missions that couldn’t be carried out by ground or aerial robots.

“Perhaps the most well-suited applications for Leo would be the ones that involve physical interactions with structures at a high altitude, which are usually dangerous for human workers and call for a substitution by robotic workers,” the paper’s authors said. Examples could include high-voltage line inspection, painting tall bridges or other high-up surfaces, inspecting building roofs or oil refinery pipes, or landing sensitive equipment on an extraterrestrial object.

Next up for Leo is an upgrade to its performance via a more rigid leg design, which will help support the robot’s weight and increase the thrust force of its propellers. The team also wants to make Leo more autonomous, and plans to add a drone landing control algorithm to its software, ultimately aiming for the robot to be able to decide where and when to walk versus fly.

Leo hasn’t quite achieved the wow factor of Boston Dynamics’ dancing robots (or its Atlas that can do parkour), but it’s on its way.

Image Credit: Caltech Center for Autonomous Systems and Technologies/Science Robotics

#439768 DARPA SubT Finals: Robot Operator Wisdom

Each of the DARPA Subterranean Challenge teams is allowed to bring up to 20 people to the Louisville Mega Cavern for the final event. Of those 20 people, only five can accompany the robots to the course staging area to set up the robots. And of those five, just one person can be what DARPA calls the Human Supervisor.

The Human Supervisor role, which most teams refer to as Robot Operator, is the only person allowed to interface with the robots while they're on the course. Or, it's probably more accurate to say that the team's base station computer is the only thing allowed to interface with robots on the course, and the human operator is the only person allowed to use the base station. The operator can talk to their teammates at the staging area, but that's about it—the rest of the team can't even look at the base station screens.
Robot operator is a unique job that can be different for each team, depending on what kinds of robots that team has deployed, how autonomous those robots are, and what strategy the team is using during the competition. On the second day of the SubT preliminary competition, we talked with robot operators from all eight Systems Track teams to learn more about their robots, exactly what they do during the competition runs, and their approach to autonomy.
“DARPA is interested in approaches that are highly autonomous without the need for substantive human interventions; capable of remotely mapping and/or navigating complex and dynamic terrain; and able to operate with degraded and unreliable communication links. The team is permitted to have a single Human Supervisor at a Base Station… The Human Supervisor is permitted to view, access, and/or analyze both course data and status data. Only the Human Supervisor is permitted to use wireless communications with the systems during the competition run.”
DARPA's idea here is that most of the robots competing in SubT will be mostly autonomous most of the time, hence their use of “supervisor” rather than “operator.” Requiring substantial human-in-the-loop control is problematic for a couple of reasons—first, direct supervision requires constant communication, and we've seen how problematic communication can be on the SubT course. And second, operation means the need for a skilled and experienced operator, which is fine if you're a SubT team that's been practicing for years, but could be impractical for a system of robots that's being deployed operationally.
So how are teams making the robot operator role work, and how close are they to being robot supervisors instead? I went around the team garages on the second day of preliminary runs and asked each team operator the same three questions about their roles. I also asked the operators, “What is one question I should ask the next operator I talk to?” I added this as a bonus question, with each operator answering a question suggested by a different team operator.
Team Robotika
Robot Operator: Martin Dlouhy
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
This is the third time we've participated in a SubT event; we've tried various robots, small ones, bigger ones, but for us, these two robots seem to be optimal. Because we are flying from the Czech Republic, the robots have to fit in our checked luggage. We also don't have the smaller robots or the drones that we had, because like three weeks ago we didn't even know if we would be allowed to enter the United States. So this is optimal for what we can bring to the competition, and we would like to demonstrate that we can do something with a simple solution.
Once your team of robots is on the course, what do you do during the run?
We have two robots, so it's easier than for some other teams. When the robots are in network range, I have some small tools to locally analyze data to help find artifacts that are hard for the robots to see, like the cellphone or the gas source. If everything goes fine, I basically don't have to be there. We've been more successful in the Virtual SubT competition because over half our team are software developers. We've really pushed hard to make the Virtual and System software as close as possible, and in Virtual, it's fully autonomous from beginning to end. There's one step that I do manually as operator—the robots have neural networks to recognize artifacts, but it's on me to click confirm to submit the artifact reports to DARPA.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
I would actually like an operator-less solution, and we could run it, but it's still useful to have a human operator—it's safer for the robot, because it's obvious to a human when the robot is not doing well.
Bonus operator question: What are the lowest and highest level decisions you have to make?
The lowest level is, I open the code and change it on the fly. I did it yesterday to change some of the safety parameters. I do this all the time, it's normal. The highest level is asking the team, “guys, how are we going to run our robots today.”
Team MARBLE
Robot Operator: Dan Riley
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We've been using the Huskies [wheeled robots] since the beginning of the competition, it's a reliable platform with a lot of terrain capability. It's a workhorse that can do a lot of stuff. We were also using a tank-like robot at one time, but we had traversability issues so we decided to drop that one for this competition. We also had UAVs, because there's a lot of value in not having to worry about the ground while getting to areas that you can't get to with a ground robot, but unfortunately we had to drop that too because of the number of people and time that we had. We decided to focus on what we knew we could do well, and make sure that our baseline system was super solid. And we added the Spot robots within the last two months mostly to access areas that the Huskies can't, like going up and down stairs and tricky terrain. It's fast, and we really like it.
Our team of robots is closely related to our deployment strategy. The way our planner and multi-robot coordination works is that the first robot really just plows through the course looking for big frontiers and new areas, and then subsequent robots will fill in the space behind looking for more detail. So we deploy the Spots first to push the environment since they're faster than the Huskies, and the Huskies will follow along and fill in the communications network.
We know we don't want to run five robots tomorrow. Before we got here, we saw the huge cavern and thought that running more robots would be better. But based on the first couple runs, we now know that the space inside is much smaller, so we think four robots is good.
Once your team of robots is on the course, what do you do during the run?
The main thing I'm watching for is artifact reports from robots. While I'm waiting for artifact reports, I'm monitoring where the robots are going, and mainly I want to see them going to new areas. If I see them backtracking or going where another robot has explored already, I have the ability to send them new goal points in another area. When I get an artifact report, I look at the image to verify that it's a good report. For objects that may not be visible, like the cell phone [which has to be detected through the wireless signal it emits], if it's early in the mission I'll generally wait and see if I get any other reports from another robot on it. The localization isn't great on those artifacts, so once I do submit, if it doesn't score, I have to look around to find an area where it might be. For instance, we found this giant room with lots of shelves and stuff, and that's a great place to put a cell phone, and sure enough, that's where the cell phone was.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
We pride ourselves on our autonomy. From the very beginning, that was our goal, and actually in earlier competitions I had very little control over the robot, I could not even send it a goal point. All I was getting was reports—it was a one-way street of information. I might have been able to stop the robot, but that was about it. Later on, we added the goal point capability and an option to drive the robot if I need to take over to get it out of a situation.
I'm actually the lead for our Virtual Track team as well, and that's already decision-free. We're running the exact same software stack on our robots, and the only difference is that the virtual system also does artifact reporting. Honestly, I'd say that we're more effective having the human be able to make some decisions, but the exact same system works pretty well without having any human at all.
Bonus operator question: How much sleep did you get last night?
I got eight hours, and I could have had more, except I sat around watching TV for a while. We stressed ourselves out a lot during the first two competitions, and we had so many problems. It was horrible, so we said, “we're not doing that again!” A lot of our problems started with the setup and launching phase, just getting the robots started up and ready to go and out of the gate. So we spent a ton of time making sure that our startup procedures were all automated. And when you're able to start up easily, things just go well.
Team Explorer
Robot Operator: Chao Cao
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We tried to diversify our robots for the different kinds of environments in the challenge. We have wheeled vehicles, aerial vehicles, and legged vehicles (Spot robots). Our wheeled vehicles are different sizes; two are relatively big and one is smaller, and two are articulated in the middle to give them better mobility performance in rough terrain. Our smaller drones can be launched from the bigger ground robots, and we have a larger drone with better battery life and more payload.
In total, there are 11 robots, which is quite a lot to be managed by a single human operator under a constrained time limit, but if we manage those robots well, we can explore quite a large three dimensional area.
Once your team of robots is on the course, what do you do during the run?
Most of the time, to be honest, it's like playing a video game. It's about allocating resources to gain rewards (which in this case are artifacts) by getting the robots spread out to maximize coverage of the course. I'm monitoring the status of the robots, where they're at, and what they're doing. Most of the time I rely on the autonomy of the robots, including for exploration, coordination between multiple robots, and detecting artifacts. But there are still times when the robots might need my help, for example yesterday one of the bigger robots got itself stuck in the cave branch but I was able to intervene and get it to drive out.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
Humans have a semantic understanding of the environment. Just by looking at a camera image, I can predict what an environment will be like and how risky it will be, but robots don't have that kind of higher level decision capability. So I might want a specific kind of robot to go into a specific kind of environment based on what I see, and I can redirect robots to go into areas that are a better fit for them. For me as an operator, at least from my personal experience, I think it's still quite challenging for robots to perform this kind of semantic understanding, and I still have to make those decisions.
Bonus operator question: What is your flow for decision making?
Before each run, we'll have a discussion among all the team members to figure out a rough game plan, including a deployment sequence—which robots go first, should the drones be launched from the ground vehicles or from the staging area. During the run, things are changing, and I have to make decisions based on the environment. I'll talk to the pit crew about what I can see through the base station, and then I'll make an initial proposal based on my instincts for what I think we should do. But I'm very focused during the run and have a lot of tasks to do, so my teammates will think about time constraints and how conservative we want to be and where other robots are because I can't think through all of those possibilities, and then they'll give me feedback. Usually this back and forth is quick and smooth.

The Robot Operator is the only person allowed to interface with the robots while they're on the course—the operator pretty much controls the entire run by themselves. Image: DARPA
Team CTU-CRAS-NORLAB
Robot Operator: Vojtech Salnsky
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We chose many different platforms. We have some tracked robots, wheeled robots, Spot robots, and some other experimental UGVs [small hexapods and one big hexapod], and every UGV has a different ability to traverse terrain, and we are trying to cover all possible locomotion types to be able to traverse anything on the course. Besides the UGVs, we're using UAVs as well that are able to go through both narrow corridors and bigger spaces.
We brought a large number of robots, but the number that we're using, about ten, is enough to be able to explore a large part of the environment. Deploying more would be really hard for the pit crew of only five people, and there isn't enough space for more robots.
Once your team of robots is on the course, what do you do during the run?
It differs run by run, but the robots are mostly autonomous, so they decide where to go and I'm looking for artifact detections uploaded by the robots and approving or disapproving them. If I see that a robot is stuck somewhere, I can help it decide where to go. If it looks like a robot may lose communications, I can move some robots to make a chain from other robots to extend our network. I can do high level direction for exploration, but I don't have to—the robots are updating their maps and making decisions to best explore the whole environment.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
Terrain assessment is subtle. At a higher level, the operator has to decide where to send a walking robot and where to send a rolling robot. It's tiny details on the ground and a feeling about the environment that help the operator make those decisions, and that is not done autonomously.
Bonus operator question: How much bandwidth do you have?
I'm on the edge. I have a map, I have some subsampled images, I have detections, I have topological maps, but it would be better to have everything in 4K and dense point clouds.
Team CSIRO Data61
Robot Operator: Brendan Tidd
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We've got three robot types that are here today—Spot legged robots, big tracked robots called Titans, and drones. The legged ones have been pretty amazing, especially for urban environments with narrow stairs and doorways. The tracked robots are really good in the tricky terrain of cave environments. And the drones can obviously add situational awareness from higher altitudes and detect those high artifacts.
Once your team of robots is on the course, what do you do during the run?
We use the term “operator” but I'm actually supervising. Our robots are all autonomous, they all know how to divide and conquer, they're all going to optimize exploring for depth, trying to split up where they can and not get in each other's way. In particular the Spots and the Titans have a special relationship where the Titan will give way to the Spot if they ever cross paths, for obvious reasons. So my role during the run is to coordinate node placement, that's something that we haven't automated—we've got a lot of information that comes back that I use to decide on good places to put nodes, and probably the next step is to automate that process. I also decide where to launch the drone. The launch itself is one click, but it still requires me to know where a good place is. If everything goes right, in general the robots will just do their thing.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
The node drop thing is vital, but I think it's quite a complex thing to automate because there are so many different aspects to consider. The node mesh is very dynamic, it's affected by all the robots that are around it and obviously by the environment. Similarly, the drone launch, but that requires the robots to know when it's worth it to launch a drone. So those two things, but also pushing on the nav stack to make sure it can handle the crazy stuff. And I guess the other side is the detection. It's not a trivial thing knowing what's a false positive or not, that's a hard thing to automate.
Bonus operator question: How stressed are you, knowing that it's just you controlling all the robots during the run?
Coping with that is a thing! I've got music playing when I'm operating, I actually play in a metal band and we get on stage sometimes and the feeling is very similar, so it's really helpful to have the music there. But also the team, you know? I'm confident in our system, and if I wasn't, that would really affect my mental state. But we test a lot, and all that preparedness helps with the stress.
Team CoSTAR
Robot Operator: Kyohei Otsu
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We have wheeled vehicles, legged vehicles, and aerial drones, so we can cover many terrains, handle stairs, and fly over obstacles. We picked three completely different mobility systems to be able to use many different strategies. The robots can autonomously adjust their roles by themselves; some explore, some help with communication for other robots. The number of robots we use depends on the environment—yesterday we deployed seven robots onto the course because we assumed that the environment would be huge, but it's a bit smaller than we expected, so we'll adapt our number to fit that environment.
Once your team of robots is on the course, what do you do during the run?
Our robots are autonomous, and I think we have very good autonomy software. During setup the robots need some operator attention; I have to make sure that everything is working including sensors, mobility systems, and all the algorithms. But after that, once I send the robot into the course, I totally forget about it and focus on another robot. Sometimes I intervene to better distribute our team of robots—that's something that a human is good at, using prior knowledge to understand the environment. And I look at artifact reports, that's most of my job.
In the first phases of the Subterranean Challenge, we were getting low level information from the robots and sometimes using low level commands. But as the project proceeded and our technology matured, we found that it was too difficult for the operator, so we added functionality for the robot to make all of those low level decisions, and the operator just deals with high level decisions.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible? [answered by CoSTAR co-Team Lead Joel Burdick]
Two things: the system reports that it thinks it found an artifact, and the operator has to confirm yes or no. He has to also confirm that the location seems right. The other thing is that our multi-robot coordination isn't as sophisticated as it could be, so the operator may have to retask robots to different areas. If we had another year, we'd be much closer to automating those things.
Bonus operator question: Would you prefer if your system was completely autonomous and your job was not necessary?
Yeah, I'd prefer that!
Team Coordinated Robotics
Robot Operator: Kevin Knoedler
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
The ideal mix in my mind is a fleet of small drones with lidar, but they are very hard to test, and very hard to get right. Ground vehicles aren't necessarily easier to get right, but they're easier to test, and if you can test something, you're a lot more likely to succeed. So that's really the big difference with the team of robots we have here.
Once your team of robots is on the course, what do you do during the run?
Some of the robots have an automatic search function where if they find something they report back, and what I'd like to be doing is just monitoring. But, the search function only works in larger areas. So right now the goal is for me to drive them through the narrow areas, get them into the wider areas, and let them go, but getting them to that search area is something that I mostly need to do manually one at a time.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
Ideally, the robots would be able to get through those narrow areas on their own. It's actually a simpler problem to solve than larger areas, it's just not where we focused our effort.
Bonus operator question: How many interfaces do you use to control your robots?
We have one computer with two monitors, one controller, and that's it.
Team CERBERUS
Robot Operator: Marco Tranzatto
Tell me about the team of robots that you're operating and why you think it's the optimal team for exploring underground environments.
We have a mix of legged and flying robots, supported by a rover carrying a wireless antenna. The idea is to take legged robots into harsh environments where wheeled robots may not perform as well, combined with aerial scouts that can explore the environment fast to provide initial situational awareness to the operator so that I can decide where to deploy the legged machines. So the goal is to combine the legged and flying robots in a unified mission to give as much information as possible to the human operator. We also had some bigger robots, but we found them to be a bit too big for the environment that DARPA has prepared for us, so we're not going to deploy them.
Once your team of robots is on the course, what do you do during the run?
We use two main modes: one is fully autonomous on the robots, and the other one is supervised autonomy where I have an overview of what the robots are doing and can override specific actions. Based on the high level information that I can see, I can decide to control a single robot to give it a manual waypoint to reposition it to a different frontier inside the environment. I can go from high level control down to giving these single commands, but the commands are still relatively high level, like “go here and explore.” Each robot has artifact scoring capabilities, and all these artifact detections are sent to the base station once the robot is in communication range, and the human operator has to say, “okay this looks like a possible artifact so I accept it” and then can submit the position either as reported by the robot or the optimized position reported by the mapping server.
What autonomous decisions would you like your robots to be able to make that they aren't currently making, and what would it take to make that possible?
Each robot is autonomous by itself. But the cooperation between robots is still like… The operator has to set bounding boxes to tell each robot where to explore. The operator has a global overview, and then inside these boxes, the robots are autonomous. So I think at the moment in our pipeline, we still need a centralized human supervisor to say which robot explores in which direction. We are close to automating this, but we're not there yet.
Bonus operator question: What is one thing you would add to make your life as an operator easier?
I would like to have a more centralized way to give commands to the robots. At the moment I need to select each robot and give it a specific command. It would be very helpful to have a centralized map where I can tell a robot to, say, explore in a given area while considering data from a different robot. This was in our plan, but we didn't manage to deploy it yet.

#439766 Understanding human-robot interaction ...

Robotic body-weight support (BWS) devices can play a key role in helping people with neurological disorders improve their walking. The team that developed the advanced body-weight support device RYSEN in 2018 has since gained more fundamental insight into BWS, but also concludes that improvement in this field is necessary. They find that recommendations for optimal therapy settings have to be customized to each device, and that developers should be more aware of the interaction between the patient and the device. The researchers published the results of their evaluation in Science Robotics on Wednesday, September 22.

#439700 Video Friday: Robot Gecko Smashes Face ...

Your weekly selection of awesome robot videos
Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – [Online Event]
IROS 2021 – September 27–October 1, 2021 – [Online Event]
ROSCon 2021 – October 20-21, 2021 – [Online Event]
Let us know if you have suggestions for next week, and enjoy today's videos.
The incredible title of this paper is “Tails stabilize landing of gliding geckos crashing head-first into tree trunks.” No hype here at all: geckos really do glide, they really do crash head-first into tree trunks, and they really do rely on their tails for post-landing stabilization and look ridiculous while doing it.

Their gecko-inspired robot features a soft torso with a tail that can be taken off and put back on. When the front foot hits a surface, the robot is programmed to bend its tail, mimicking the reflex that Jusufi previously discovered in climbing geckos. The contact signal is processed by a microcontroller on the shoulder, which activates a motor to pull on a tendon, pressing the tail into the wall to slow the head-over-heels pitch-back.
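In control terms, that reflex is just an event-triggered loop: a foot-contact signal causes the shoulder microcontroller to tension the tail tendon for a fraction of a second. The Python sketch below is a hypothetical illustration of that loop; the function names, polling rate, and timing are invented, not taken from the actual firmware.

```python
# Hypothetical sketch of the gecko robot's tail reflex described above.
import time

def front_foot_touched() -> bool:
    # Placeholder: on real hardware this would read the foot contact sensor.
    return False

def pull_tail_tendon(duration_s: float) -> None:
    # Placeholder: would drive the tail motor, pressing the tail against the
    # wall to slow the head-over-heels pitch-back after a head-first landing.
    time.sleep(duration_s)

def reflex_loop(poll_hz: float = 1000.0) -> None:
    """Poll the contact sensor and fire the tail reflex when the foot lands."""
    while True:
        if front_foot_touched():
            pull_tail_tendon(0.1)  # brief, fast tail press
        time.sleep(1.0 / poll_hz)
```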

“Nature has many unexpected, elegant solutions to engineering problems—and this is wonderfully illustrated by the way geckos can use their tails to turn a head-first collision into a successful perching maneuver. Landing from flight is difficult, and we hope our findings will lead to new techniques for robot mobility—sometimes crashes are helpful,” says Robert Siddall.
[ Paper ] via [ UC Berkeley ]
Thanks, Robert!
The subterranean stage is being set for the DARPA Subterranean Challenge Final Event at Louisville's Mega Cavern. The event is the culmination of a vision to revolutionize search and rescue using robots in underground domains. Tune in Sept 21-24 on SubTV.
I'll be there!
[ SubT ]
Remote work has been solved thanks to Robovie-Z.

[ Vstone ]
The best part of this video is not the tube-launched net-firing drone-hunting drone, it's the logo of the giant chameleon perched on top of a Humvee firing its tongue at a bug while being attacked by bats for some reason.

[ Dynetics ]
I'm pretty sure this is an old video, but any robot named “Schmoobot” has a place in Video Friday.

LET ME TAKE YOU TO THE LOCATION OF JUICES
[ Ballbot ]
Some more recent videos on Ballbot, and we're very happy that it's still an active research platform!

The CMU ballbot using its whole-body controller to maintain balance on top of its ball while tracking a circular motion, balancing a red cup of water in its right hand and an empty water bottle in its left hand.
[ Ballbot ]
On Aug. 18, 2021, the MQ-25 T1 test asset refueled a U.S. Navy E-2D Hawkeye command-and-control aircraft. This is the unmanned aerial refueler's second refueling mission.
Not to throw shade here, but I think the robot plane landed a little bit better than the human piloted plane.
[ Boeing ]
We proposed a method to wirelessly drive multiple soft actuators by laser projection. Laser projection enables both wireless energy supply and the selection of target actuators. Thus, we do not need additional components such as electric circuits and batteries to achieve simple and scalable implementation of multiple soft actuators.
[ Takefumi Hiraki ]
Thanks, Fan!
In this video, we demonstrated the motion of our biped robot “Robovie-Z”, which we used to enter the “ROBO-ONE Ultimate Action” contest.
[ Robo-One ]
Some impressive performance here, but that poor drone is overstuffed.

[ RISLab ]
Proximity sensors and analog circuits are all it takes for fairly high-performance manipulation.

[ Keisuke Koyama ]
Thanks, Fan!
This video showcases an LP control algorithm producing both gravitational load compensation and cuff force amplification via whole-body exoskeleton forces. Parts of the video include an additional 25 lb payload (a weight on the back).
[ UT Austin HCRL ]
An overview of Tertill, the solar-powered weeding robot for home gardens. Watch Joe Jones, the inventor of Tertill (and Roomba!), talk about the robot and how and where it works.
[ Tertill ]
One small step: integrating our Extend AMAS VR software to operate a Universal Robots UR5e. This VR application combines volumetric telepresence technology with an interactive digital twin to provide an intuitive interface for non-robotics experts to teleoperate or program the robot from a remote location over the internet.
[ Extend Robotics ]
Enrollment is open for a pair of online courses taught by Christoph Bartneck that'll earn you a Professional Certificate in Human-Robot Interaction. While the website really wants you to think that it costs $448.20, if you register you can skip the fee and take the courses for free! The book is free, too. I have no idea how they can afford to do this, but good on them, right?

[ edX ]
Thanks, Christoph!
