
#435646 Video Friday: Kiki Is a New Social Robot ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

The DARPA Subterranean Challenge tunnel circuit takes place in just a few weeks, and we’ll be there!

[ DARPA SubT ]

In this time-lapse video, the robotic arm on NASA’s Mars 2020 rover handily maneuvers an 88-pound (40-kilogram) sensor-laden turret as it moves from a deployed to a stowed configuration.

If you haven’t read our interview with Matt Robinson, now would be a great time, since he’s one of the folks at JPL who designed this arm.

[ Mars 2020 ]

Kiki is a small, white, stationary social robot with an evolving personality that promises to be your friend. It costs $800 and is currently on Kickstarter.

The Kickstarter page is filled with the same type of overpromising that we’ve seen with other (now very dead) social robots: Kiki is “conscious,” “understands your feelings,” and “loves you back.” Oof. That said, we’re happy to see more startups trying to succeed in this space, which is certainly one of the toughest in consumer electronics, and hopefully they’ve been learning from the recent string of failures. And we have to say Kiki is a cute robot. Its overall design, especially the body mechanics and expressive face, looks neat. And kudos to the team—the company was founded by two ex-Googlers, Mita Yun and Jitu Das—for including the “unedited prototype videos,” which help counterbalance the hype.

Another thing that Kiki has going for it is that everything runs on the robot itself. This simplifies privacy and means that the robot won’t partially die on you if the company behind it goes under, but it also limits how clever the robot will be able to be. The Kickstarter campaign is already over a third funded, so… we’ll see.

[ Kickstarter ]

When your UAV isn’t enough UAV, so you put a UAV on your UAV.

[ CanberraUAV ]

ABB’s YuMi is testing ATMs because a human trying to do this task would go broke almost immediately.

[ ABB ]

DJI has a fancy new FPV system that features easy setup, digital HD streaming at up to 120 FPS, and <30ms latency.

If it looks expensive, that’s because it costs $930 with the remote included.

[ DJI ]

Honeybee Robotics has recently developed a regolith excavation and rock cleaning system for NASA JPL’s PUFFER rovers. This system, called POCCET (PUFFER-Oriented Compact Cleaning and Excavation Tool), uses compressed gas to perform all excavation and cleaning tasks. Weighing less than 300 grams with potential for further mass reduction, POCCET can be used not just on the Moon, but on other Solar System bodies such as asteroids, comets, and even Mars.

[ Honeybee Robotics ]

DJI’s 2019 RoboMaster tournament, which takes place this month in Shenzhen, looks like it’ll be fun to watch, with plenty of action and rules that are easy to understand.

[ RoboMaster ]

Robots and baked goods are an automatic Video Friday inclusion.

Wow, I want a cupcake right now.

[ Soft Robotics ]

The ICRA 2019 Best Paper Award went to Michelle A. Lee at Stanford, for “Making Sense of Vision and Touch: Self-Supervised Learning of Multimodal Representations for Contact-Rich Tasks.”

The ICRA video is here, and you can find the paper at the link below.

[ Paper ] via [ RoboHub ]

Cobalt Robotics put out a bunch of marketing-y videos this week, but this one is reasonably interesting, even if you’re familiar with what they’re doing over there.

[ Cobalt Robotics ]

RightHand Robotics launched RightPick2 with a gala event, which looked like fun as long as you were really, really into robots.

[ RightHand Robotics ]

Thanks Jeff!

This video presents a framework for whole-body control applied to the assistive robotic system EDAN. We show how the proposed method can be used for a task like opening, passing through, and closing a door, and we show the efficiency of whole-body coordination when controlling the end effector with respect to a fixed reference. We also show how easily the system can be manually maneuvered by direct interaction with the end effector, without the need for an extra input device.

[ DLR ]

You’ll probably need to turn on auto-translated subtitles for most of this, but it’s worth it for the adorable little single-seat robotic car designed to help people get around airports.

[ ZMP ]

In this week’s episode of Robots in Depth, Per speaks with Gonzalo Rey from Moog about their fancy 3D printed integrated hydraulic actuators.

Gonzalo talks about how Moog got started with hydraulic control, taking part in the space program and early robotics development. He shares how Moog’s technology is used in fly-by-wire systems in aircraft and in flow control in deep space probes. They have even reached Mars.

[ Robots in Depth ]

#435640 Video Friday: This Wearable Robotic Tail ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
CLAWAR 2019 – August 26-28, 2019 – Kuala Lumpur, Malaysia
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Lakshmi Nair from Georgia Tech describes some fascinating research towards robots that can create their own tools, as presented at ICRA this year:

Using a novel capability to reason about shape, function, and attachment of unrelated parts, researchers have for the first time successfully trained an intelligent agent to create basic tools by combining objects.

The breakthrough comes from Georgia Tech’s Robot Autonomy and Interactive Learning (RAIL) research lab and is a significant step toward enabling intelligent agents to devise more advanced tools that could prove useful in hazardous – and potentially life-threatening – environments.

[ Lakshmi Nair ]

Victor Barasuol, from the Dynamic Legged Systems Lab at IIT, wrote in to share some new research on their HyQ quadruped that enables sensorless shin collision detection. This helps the robot navigate unstructured environments, and also mitigates all those painful shin strikes, because ouch.

This will be presented later this month at the International Conference on Climbing and Walking Robots (CLAWAR) in Kuala Lumpur, Malaysia.

[ IIT ]

Thanks Victor!

You used to have a tail, you know—as an embryo, about a month into your development. All mammals used to have tails, and now we just have useless tailbones, which don’t help us with balancing even a little bit. BRING BACK THE TAIL!

The tail, created by Junichi Nabeshima, Kouta Minamizawa, and MHD Yamen Saraiji from Keio University’s Graduate School of Media Design, was presented at SIGGRAPH 2019 Emerging Technologies.

[ Paper ] via [ Gizmodo ]

The noises in this video are fantastic.

[ ESA ]

Apparently the industrial revolution wasn’t a thorough enough beatdown of human knitting, because the robots are at it again.

[ MIT CSAIL ]

Skydio’s drones just keep getting more and more impressive. Now if only they’d make one that I can afford…

[ Skydio ]

The only thing more fun than watching robots is watching people react to robots.

[ SEER ]

There aren’t any robots in this video, but it’s robotics-related research, and very soothing to watch.

[ Stanford ]

#autonomousicecreamtricycle

In case it wasn’t clear, which it wasn’t, this is a Roboy project. And if you didn’t understand that first video, you definitely won’t understand this second one:

Whatever that t-shirt is at the end (Roboy in sunglasses puking rainbows…?), I need one.

[ Roboy ]

By adding electronics and computation technology to a simple cane that has been around since ancient times, a team of researchers at Columbia Engineering has transformed it into a 21st-century robotic device that can provide light-touch walking assistance to the elderly and others with impaired mobility.

The light-touch robotic cane, called CANINE, acts as a cane-like mobile assistant. The device improves the individual’s proprioception, or self-awareness in space, during walking, which in turn improves stability and balance.

[ ROAR Lab ]

During the second field experiment for DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program, which took place at Fort Benning, Georgia, teams of autonomous air and ground robots tested tactics on a mission to isolate an urban objective. Similar to the way a firefighting crew establishes a boundary around a burning building, they first identified locations of interest and then created a perimeter around the focal point.

[ DARPA ]

I think there’s a bit of new footage here of Ghost Robotics’ Vision 60 quadruped walking around without sensors on unstructured terrain.

[ Ghost Robotics ]

If you’re as tired of passenger drone hype as I am, there’s absolutely no need to watch this video of NEC’s latest hover test.

[ AP ]

As researchers teach robots to perform more and more complex tasks, the need for realistic simulation environments is growing. Existing techniques for closing the reality gap by approximating real-world physics often require extensive real world data and/or thousands of simulation samples. This paper presents TuneNet, a new machine learning-based method to directly tune the parameters of one model to match another using an iterative residual tuning technique. TuneNet estimates the parameter difference between two models using a single observation from the target and minimal simulation, allowing rapid, accurate and sample-efficient parameter estimation.

The system can be trained via supervised learning over an auto-generated simulated dataset. We show that TuneNet can perform system identification, even when the true parameter values lie well outside the distribution seen during training, and demonstrate that simulators tuned with TuneNet outperform existing techniques for predicting rigid body motion. Finally, we show that our method can estimate real-world parameter values, allowing a robot to perform sim-to-real task transfer on a dynamic manipulation task unseen during training. We are also making a baseline implementation of our code available online.
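
The iterative residual tuning loop is simple enough to sketch in a few lines. Below is a minimal, hypothetical illustration: an estimator predicts the parameter difference between a simulated and a target observation, the difference is applied, and the model is re-simulated. In TuneNet the estimator is a trained neural network; here it’s faked with a finite-difference least-squares step, and the toy simulate function is ours, not the paper’s.

```python
# Minimal sketch of iterative residual tuning, in the spirit of TuneNet.
# Everything here (simulate, predict_delta) is our own toy stand-in, not
# the authors' code: the real system trains a neural network to predict
# the parameter difference; we fake that with a finite-difference step.
import numpy as np

def simulate(param, n=50):
    """Toy simulator: an exponential decay trace governed by one
    physical parameter (think of it as a damping coefficient)."""
    t = np.linspace(0.0, 1.0, n)
    return np.exp(-param * t)

def predict_delta(obs_current, obs_target, param, eps=1e-4):
    """Estimate the parameter *difference* between two models from a
    single pair of observations (TuneNet learns this mapping)."""
    grad = (simulate(param + eps) - simulate(param)) / eps
    return float(grad @ (obs_target - obs_current) / (grad @ grad))

def tune(param, obs_target, iterations=10):
    """Iteratively estimate the gap, apply it, and re-simulate."""
    for _ in range(iterations):
        param += predict_delta(simulate(param), obs_target, param)
    return param

true_param = 2.5
estimate = tune(1.0, simulate(true_param))
print(f"estimated parameter: {estimate:.3f} (true value: {true_param})")
```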

[ Paper ]

Here’s an update on what GITAI has been up to with their telepresence astronaut-replacement robot.

[ GITAI ]

Curiosity captured this 360-degree panorama of a location on Mars called “Teal Ridge” on June 18, 2019. This location is part of a larger region the rover has been exploring called the “clay-bearing unit” on the side of Mount Sharp, which is inside Gale Crater. The scene is presented with a color adjustment that approximates white balancing to resemble how the rocks and sand would appear under daytime lighting conditions on Earth.

[ MSL ]

Some updates (in English) on ROS from ROSCon France. The first is a keynote from Brian Gerkey:

And this second video is from Omri Ben-Bassat, about how to keep your Anki Vector alive using ROS:

All of the ROSCon FR talks are available on Vimeo.

[ ROSCon FR ]

#435605 All of the Winners in the DARPA ...

The first competitive event in the DARPA Subterranean Challenge concluded last week—hopefully you were able to follow along on the livestream, on Twitter, or with some of the articles that we’ve posted about the event. We’ll have plenty more to say about how things went for the SubT teams, but while they take a bit of a (well earned) rest, we can take a look at the winning teams as well as who won DARPA’s special superlative awards for the competition.

First Place: Team Explorer (25/40 artifacts found)
With their rugged, reliable robots featuring giant wheels and the ability to drop communications nodes, Team Explorer was in the lead from day 1, scoring in double digits on every single run.

Second Place: Team CoSTAR (11/40 artifacts found)
Team CoSTAR had one of the more diverse lineups of robots, and they switched up which robots they decided to send into the mine as they learned more about the course.

Third Place: Team CTU-CRAS (10/40 artifacts found)
While many teams came to SubT with DARPA funding, Team CTU-CRAS was self-funded, making them eligible for a special $200,000 Tunnel Circuit prize.

DARPA also awarded a bunch of “superlative awards” after SubT:

Most Accurate Artifact: Team Explorer

To score a point, teams had to submit the location of an artifact that was correct to within 5 meters of the artifact itself. However, DARPA was tracking the artifact locations with much higher precision—for example, the “zero” point on the backpack artifact was the center of the label on the front, which DARPA tracked to the millimeter. Team Explorer managed to return the location of a backpack with an error of just 0.18 meter, which is kind of amazing.
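
For a concrete sense of the scoring rule, here’s an illustrative check of whether a reported artifact location earns a point. This is our own sketch, not DARPA’s scoring code, and the coordinates are hypothetical, sized to land near Explorer’s 0.18-meter error.

```python
# Illustrative only — not DARPA's scoring code. A report earns a point
# if it lands within 5 meters (straight-line) of the surveyed artifact.
import math

def score_report(reported, ground_truth, radius_m=5.0):
    """Return the report's error in meters and whether it scores."""
    error = math.dist(reported, ground_truth)
    return error, error <= radius_m

# Hypothetical coordinates for a backpack artifact, in meters.
error, scored = score_report((10.00, 4.10, -2.00), (10.15, 4.00, -2.05))
print(f"error = {error:.2f} m, point scored: {scored}")
```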

Down to the Wire: Team CSIRO Data61

With just an hour to find as many artifacts as possible, teams had to find the right balance between sending robots off to explore and bringing them back into communication range to download artifact locations. Team CSIRO Data61 cut it pretty close, sliding their final point in with a mere 22 seconds to spare.

Most Distinctive Robots: Team Robotika

Team Robotika had some of the quirkiest and most recognizable robots, which DARPA recognized with the “Most Distinctive” award. Robotika told us that part of the reason for that distinctiveness was practical—having a robot that was effectively in two parts meant that they could disassemble it so that it would fit in the baggage compartment of an airplane, very important for a team based in the Czech Republic.

Most Robots Per Person: Team Coordinated Robotics

Kevin Knoedler, who won NASA’s Space Robotics Challenge entirely by himself, brought his own personal swarm of drones to SubT. With a ratio of seven robots to one human, Kevin was almost certainly the hardest working single human at the challenge.

Fan Favorite: Team NCTU

The Fan Favorite award went to the team that was most popular on Twitter (with the #SubTChallenge hashtag), and it may or may not be the case that I personally tweeted enough about Team NCTU’s blimp to win them this award. It’s also true that whenever we asked anyone on other teams what their favorite robot was (besides their own, of course), the blimp was overwhelmingly popular. So either way, the award is well deserved.

DARPA shared this little behind-the-scenes clip of the blimp in action (sort of), showing what happened to the poor thing when the mine ventilation system was turned on between runs and DARPA staff had to chase it down and rescue it:

The thing to keep in mind about the results of the Tunnel Circuit is that unlike past DARPA robotics challenges (like the DRC), they don’t necessarily indicate how things will go in the Urban or Cave circuits, because those environments will be very different. Explorer did a great job with a team of rugged wheeled vehicles, which turned out to be ideal for navigating through mines, but they’re likely going to need to change things up substantially for the rest of the challenges, where the terrain will be much more complex.

DARPA hasn’t provided any details on the location of the Urban Circuit yet; all we know is that it’ll be sometime in February 2020. This gives teams just six months to take all the lessons that they learned from the Tunnel Circuit and update their hardware, software, and strategies. What were those lessons, and what do teams plan to do differently next year? Check back next week, and we’ll tell you.

[ DARPA SubT ]

#435591 Video Friday: This Robotic Thread Could ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 2, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Eight engineering students from ETH Zurich are working on a year-long focus project to develop a multimodal robot called Dipper, which can fly, swim, dive underwater, and manage that difficult air-water transition:

The robot uses one motor to selectively drive either a propeller or a marine screw, depending on whether it’s in flight or not. We’re told that getting the robot to autonomously make the water-to-air transition is still a work in progress, but that within a few weeks things should be much smoother.

[ Dipper ]

Thanks Simon!

Giving a jellyfish a hug without stressing them out is exactly as hard as you think, but Harvard’s robot will make sure that all jellyfish get the emotional (and physical) support that they need.

The gripper’s six “fingers” are composed of thin, flat strips of silicone with a hollow channel inside, bonded to a layer of flexible but stiffer polymer nanofibers. The fingers are attached to a rectangular, 3D-printed plastic “palm” and, when their channels are filled with water, curl in the direction of the nanofiber-coated side. Each finger exerts an extremely low amount of pressure — about 0.0455 kPa, or less than one-tenth of the pressure of a human’s eyelid on their eye. By contrast, current state-of-the-art soft marine grippers, which are used to capture delicate but more robust animals than jellyfish, exert about 1 kPa.

The gripper was successfully able to trap each jellyfish against the palm of the device, and the jellyfish were unable to break free from the fingers’ grasp until the gripper was depressurized. The jellyfish showed no signs of stress or other adverse effects after being released, and the fingers were able to open and close roughly 100 times before showing signs of wear and tear.

[ Harvard ]

MIT engineers have developed a magnetically steerable, thread-like robot that can actively glide through narrow, winding pathways, such as the labyrinthine vasculature of the brain. In the future, this robotic thread may be paired with existing endovascular technologies, enabling doctors to remotely guide the robot through a patient’s brain vessels to quickly treat blockages and lesions, such as those that occur in aneurysms and stroke.

[ MIT ]

See NASA’s next Mars rover quite literally coming together inside a clean room at the Jet Propulsion Laboratory. This behind-the-scenes look at what goes into building and preparing a rover for Mars, including extensive tests in simulated space environments, was captured from March to July 2019. The rover is expected to launch to the Red Planet in summer 2020 and touch down in February 2021.

The Mars 2020 rover doesn’t have a name yet, but you can give it one! As long as you’re not too old! Which you probably are!

[ Mars 2020 ]

I desperately wish that we could watch this next video at normal speed, not just slowed down, but it’s quite impressive anyway.

Here’s one more video from the Namiki Lab showing some high speed tracking with a pair of very enthusiastic robotic cameras:

[ Namiki Lab ]

Normally, tedious modeling of mechanics, electronics, and information science is required to understand how insects’ or robots’ moving parts coordinate smoothly to take them places. But in a new study, biomechanics researchers at the Georgia Institute of Technology boiled down the sprints of cockroaches to handy principles and equations they then used to make a test robot amble about better.

[ Georgia Tech ]

More magical obstacle-dodging footage from Skydio’s still secret new drone.

We’ve been hard at work extending the capabilities of our upcoming drone, giving you ways to get the control you want without the stress of crashing. The result is you can fly in ways, and get shots, that would simply be impossible any other way. How about flying through obstacles at full speed, backwards?

[ Skydio ]

This is a cute demo with Misty:

[ Misty Robotics ]

We’ve seen pieces of hardware like this before, but always made out of hard materials—a soft version is certainly something new.

Utilizing vacuum power and soft material actuators, we have developed a soft reconfigurable surface (SRS) with multi-modal control and performance capabilities. The SRS is composed of a square grid array of linear vacuum-powered soft pneumatic actuators (linear V-SPAs), built into plug-and-play modules which enable the arrangement, consolidation, and control of many degrees of freedom (DoF).
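
As a rough illustration of what commanding such a grid might look like, here’s our own invented toy, not RRL’s interface: it maps a desired surface shape over a small module array to per-actuator vacuum setpoints, assuming actuator stroke scales with vacuum level.

```python
# Our own invented toy, not RRL's interface: map a desired surface shape
# over a 4x4 grid of modules to per-actuator vacuum setpoints. The linear
# actuator model (stroke proportional to vacuum) is an assumption.
import numpy as np

GRID = (4, 4)            # 4x4 array of plug-and-play modules
MAX_STROKE_M = 0.03      # assumed full-vacuum stroke of one linear V-SPA

def target_height(x, y):
    """Desired surface height at each module: a gentle central dome."""
    return MAX_STROKE_M * np.exp(-((x - 1.5) ** 2 + (y - 1.5) ** 2) / 2.0)

xs, ys = np.meshgrid(np.arange(GRID[0]), np.arange(GRID[1]))
heights = np.clip(target_height(xs, ys), 0.0, MAX_STROKE_M)
setpoints = heights / MAX_STROKE_M   # 0 = ambient pressure, 1 = full vacuum
print(np.round(setpoints, 2))
```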

[ RRL ]

The EksoVest is not really a robot, but it’ll make you a cyborg! With super strength!

“This is NOT intended to give you super strength but instead give you super endurance and reduce fatigue so that you have more energy and less soreness at the end of your shift.”

Drat!

[ EksoVest ]

We have created a solution for parents, grandparents, and their children who are living separated. This is an amazing tool to stay connected from a distance through the intimacy that comes through interactive play with a child. For parents who travel for work, deployed military, and families spread across the country, the Cushybot One is much more than a toy; it is the opportunity for maintaining a deep connection with your young child from a distance.

Hmm.

I think the concept here is great, but it’s going to be a serious challenge to successfully commercialize.

[ Indiegogo ]

What happens when you equip RVR with a parachute and send it off a cliff? Watch this episode of RVR Launchpad to find out – then go Behind the Build to see how we (eventually) accomplished this high-flying feat.

[ Sphero ]

These omnidirectional crawler robots aren’t new, but that doesn’t keep them from being fun to watch.

[ NEDO ] via [ Impress ]

We’ll finish up the week with a couple of past ICRA and IROS keynote talks—one by Gill Pratt on “The Reliability Challenges of Autonomous Driving,” and the other from Peter Hart, on “Making Shakey.”

[ IEEE RAS ]

#435589 Construction Robots Learn to Excavate by ...

Pavel Savkin remembers the first time he watched a robot imitate his movements. Minutes earlier, the engineer had finished “showing” the robotic excavator its new goal by directing its movements manually. Now, running on software Savkin helped design, the robot was reproducing his movements, gesture for gesture. “It was like there was something alive in there—but I knew it was me,” he said.

Savkin is the CTO of SE4, a robotics software project that styles itself the “driver” of a fleet of robots that will eventually build human colonies in space. For now, SE4 is focused on creating software that can help developers communicate with robots, rather than on building hardware of its own.

The Tokyo-based startup showed off an industrial arm from Universal Robots that was running SE4’s proprietary software at SIGGRAPH in July. SE4’s demonstration at the Los Angeles innovation conference drew the company’s largest audience yet. The robot, nicknamed Squeezie, stacked real blocks as directed by SE4 research engineer Nathan Quinn, who wore a VR headset and used handheld controls to “show” Squeezie what to do.

As Quinn manipulated blocks in a virtual 3D space, the software learned a set of ordered instructions to be carried out in the real world. That order is essential for remote operations, says Quinn. To build remotely, developers need a way to communicate instructions to robotic builders on location. In the age of digital construction and industrial robotics, giving a computer a blueprint for what to build is a well-explored art. But operating on a distant object—especially under conditions that humans haven’t experienced themselves—presents challenges that only real-time communication with operators can solve.

The problem is that, in an unpredictable setting, even simple tasks require not only instruction from an operator, but constant feedback from the changing environment. Five years ago, the Swedish fiber network provider umea.net (part of the private Umeå Energy utility) took advantage of the virtual reality boom to promote its high-speed connections with the help of a viral video titled “Living with Lag: An Oculus Rift Experiment.” The video is still circulated in VR and gaming circles.

In the experiment, volunteers donned headgear that replaced their real-time biological senses of sight and sound with camera and audio feeds of their surroundings—both set at a 3-second delay. Thus equipped, volunteers attempt to complete everyday tasks like playing ping-pong, dancing, cooking, and walking on a beach, with decidedly slapstick results.

At outer-orbit intervals, including SE4’s dream of construction projects on Mars, the limiting factor in communication speed is not an artificial delay, but the laws of physics. The shifting relative positions of Earth and Mars mean that communications between the planets—even at the speed of light—can take anywhere from 3 to 22 minutes.
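
Those numbers are easy to sanity-check. Here’s a quick back-of-the-envelope calculation using rough figures for the Earth-Mars distance at closest and farthest approach; the exact values vary from one alignment to the next.

```python
# Back-of-the-envelope check on the 3-to-22-minute figure; distances
# are approximate and vary with the planets' relative positions.
C_KM_PER_S = 299_792          # speed of light
DISTANCES_KM = {"closest": 54.6e6, "farthest": 401e6}

for label, d in DISTANCES_KM.items():
    minutes = d / C_KM_PER_S / 60
    print(f"{label}: one-way light delay of about {minutes:.1f} minutes")
```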

A long-distance relationship

Imagine trying to manage a construction project from across an ocean without the benefit of intelligent workers: sending a ship to an unknown world with a construction crew and blueprints for a log cabin, and four months later receiving a letter back asking how to cut down a tree. The parallel problem in long-distance construction with robots, according to SE4 CEO Lochlainn Wilson, is that automation relies on predictability. “Every robot in an industrial setting today is expecting a controlled environment.”

Platforms for applying AR and VR systems to teach tasks to artificial intelligences, as SE4 does, are already proliferating in manufacturing, healthcare, and defense. But all of the related communications systems are bound by physics and, specifically, the speed of light.

The same fundamental limitation applies in space. “Our communications are light-based, whether they’re radio or optical,” says Laura Seward Forczyk, a planetary scientist and consultant for space startups. “If you’re going to Mars and you want to communicate with your robot or spacecraft there, you need to have it act semi- or mostly-independently so that it can operate without commands from Earth.”

Semantic control

That’s exactly what SE4 aims to do. By teaching robots to group micro-movements into logical units—like all the steps to building a tower of blocks—the Tokyo-based startup lets robots make simple relational judgments that would allow them to receive a full set of instruction modules at once and carry them out in order. This sidesteps the latency issue in real-time bilateral communications that could hamstring a project or at least make progress excruciatingly slow.

The key to the platform, says Wilson, is the team’s proprietary operating software, “Semantic Control.” Just as in linguistics and philosophy, “semantics” refers to meaning itself, and meaning is the key to a robot’s ability to make even the smallest decisions on its own. “A robot can scan its environment and give [raw data] to us, but it can’t necessarily identify the objects around it and what they mean,” says Wilson.

That’s where human intelligence comes in. As part of the demonstration phase, the human operator of an SE4-controlled machine “annotates” each object in the robot’s vicinity with meaning. By labeling objects in the VR space with useful information—like which objects are building material and which are rocks—the operator helps the robot make sense of its real 3D environment before the building begins.
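
To make the idea concrete, here’s a purely hypothetical sketch of what annotated objects and ordered instruction modules might look like as data. SE4 hasn’t published its format, so every name and field below is invented for illustration.

```python
# A purely hypothetical sketch of annotated objects plus ordered
# instruction modules; SE4 hasn't published its format, so every
# name and field here is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class AnnotatedObject:
    """An object the operator labeled with meaning in the VR scene."""
    object_id: str
    category: str    # e.g., "building_material" vs. "rock"
    pose: tuple      # (x, y, z) in the robot's frame

@dataclass
class InstructionModule:
    """A logical unit grouping micro-movements, executed in order."""
    name: str
    steps: list = field(default_factory=list)

# A demonstration compiles into ordered modules that can be uplinked
# in one batch, rather than streamed with a 3-to-22-minute lag.
blocks = [AnnotatedObject(f"block_{i}", "building_material", (float(i), 0.0, 0.0))
          for i in range(3)]
plan = [InstructionModule("pick", ["approach", "grasp", "lift"]),
        InstructionModule("place", ["move_above_target", "lower", "release"])]
for obj in blocks:
    for module in plan:
        print(f"{module.name} {obj.object_id}: {' -> '.join(module.steps)}")
```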

Giving robots the tools to deal with a changing environment is an important step toward allowing the AI to be truly independent, but it’s only an initial step. “We’re not letting it do absolutely everything,” said Quinn. “Our robot is good at moving an object from point A to point B, but it doesn’t know the overall plan.” Wilson adds that delegating environmental awareness and raw mechanical power to separate agents is the optimal relationship for a mixed human-robot construction team; it “lets humans do what they’re good at, while robots do what they do best.”

This story was updated on 4 September 2019.
