
#435757 Robotic Animal Agility

An off-shore wind power platform, somewhere in the North Sea, on a freezing cold night, with howling winds and waves crashing against the impressive structure. An imperturbable ANYmal is quietly conducting its inspection.

ANYmal, a medium-sized, dog-like quadruped robot, walks down the stairs, lifts a “paw” to open doors or to call the elevator, and trots along corridors. Darkness is no problem: it knows the place perfectly, having 3D-mapped it. Its laser sensors keep it informed of its precise path, location, and potential obstacles. It conducts its inspection across several rooms. Its cameras zoom in on counters, recording the measurements displayed. Its thermal sensors record the temperature of machines and equipment, and its ultrasound microphone checks for potential gas leaks. The robot also inspects lever positions as well as the correct positioning of regulatory fire extinguishers. As the electronic buzz of its motors resumes, it carries on working tirelessly.

After a little over two hours of inspection, the robot returns to its docking station for recharging. It will soon head back out to conduct its next solitary patrol. ANYmal played alongside Mulder and Scully in the “X-Files” TV series*, but it is in no way a Hollywood robot. It genuinely exists and surveillance missions are part of its very near future.

Off-shore oil platforms are the first test fields and probably the first actual application of ANYmal. ©ANYbotics

This quadruped robot was designed by ANYbotics, a spinoff of the Swiss Federal Institute of Technology in Zurich (ETH Zurich). Made of carbon fibre and aluminium, it weighs about thirty kilos. It is fully ruggedised, water- and dust-proof (IP67). A Kevlar belly protects its main body, which carries its powerful brain, batteries, network device, power management system and navigational systems.

ANYmal was designed for all types of terrain, including rubble, sand or snow. It has been field tested on industrial sites and is at ease with new obstacles to overcome (and it can even get up after a fall). Depending on its mission, its batteries last 2 to 4 hours.

On its jointed legs, protected by rubber pads, it can walk (at the speed of human steps), trot, climb, curl up on itself to crawl, carry a load or even jump and dance. It is the need to move on all surfaces that drove its designers to choose a quadruped. “Biped robots are not easy to stabilise, especially on irregular terrain” explains Dr Péter Fankhauser, co-founder and chief business development officer of ANYbotics. “Wheeled or tracked robots can carry heavy loads, but they are bulky and less agile. Flying drones are highly mobile, but cannot carry loads, handle objects or operate in bad weather conditions. We believe that quadrupeds combine the optimal characteristics, both in terms of mobility and versatility.”

The source of inspiration for the team behind the project, the Robotic Systems Lab at ETH Zurich, is a champion of agility on rugged terrain: the mountain goat. “We are of course still a long way off” says Fankhauser. “However, it remains our longer-term objective.”

The first prototype, ALoF, was designed back in 2009. It was still rather slow, very rigid and clumsy – more of a proof of concept than a robot ready for application. In 2012, StarlETH, fitted with spring joints, could hop, jump and climb. It was with this robot that the team started participating in 2014 in ARGOS, a full-scale challenge launched by the Total oil group. The idea was to present a robot capable of autonomously inspecting an off-shore drilling station.

Up against dozens of competitors, the ETH Zurich team was the only one to enter the competition with a quadrupedal robot. They didn’t win, but the multiple field tests grew ever more convincing – especially because, during the challenge, the team designed new joints with elastic actuators made in-house. These joints, inspired by tendons and muscles, are compact, sealed and include their own custom control electronics. They can directly regulate joint torque, position and impedance. Thanks to this innovation, the team could enter the same competition with a new version of its robot, ANYmal, fitted with three joints on each leg.
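Joint impedance control of this kind is typically realised as a virtual spring and damper around a desired joint state, plus a feedforward torque. Here is a minimal sketch of such a law; the gains and names are illustrative assumptions, not ANYbotics’ actual ANYdrive firmware.

```python
# Minimal joint impedance-control sketch: the commanded torque emulates
# a virtual spring-damper between measured and desired joint states.
# Gains and names are illustrative assumptions, not ANYdrive firmware.
from dataclasses import dataclass

@dataclass
class ImpedanceGains:
    stiffness: float  # N*m/rad, virtual spring
    damping: float    # N*m*s/rad, virtual damper

def impedance_torque(q, qd, q_des, qd_des, tau_ff, gains):
    """Torque command for one joint.

    q, qd         -- measured joint position (rad) and velocity (rad/s)
    q_des, qd_des -- desired position and velocity
    tau_ff        -- feedforward torque, e.g. gravity compensation
    """
    spring = gains.stiffness * (q_des - q)
    damper = gains.damping * (qd_des - qd)
    return tau_ff + spring + damper

# A compliant leg joint absorbs disturbances; a stiff one tracks tightly.
soft = ImpedanceGains(stiffness=20.0, damping=0.5)
stiff = ImpedanceGains(stiffness=200.0, damping=5.0)
print(impedance_torque(0.10, 0.0, 0.0, 0.0, 0.0, soft))   # about -2.0 N*m
print(impedance_torque(0.10, 0.0, 0.0, 0.0, 0.0, stiff))  # about -20.0 N*m
```

Lowering the stiffness lets a leg yield on impact with uneven ground, which is one reason compliant joints suit rough terrain.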

The ARGOS experience confirmed the relevance of the selected means of locomotion. “Our robot is lighter, takes up less space on site and it is less noisy” says Fankhauser. “It also overcomes bigger obstacles than larger wheeled or tracked robots!” As ANYmal generated public interest and its transformation into a genuine product seemed more than possible, the startup ANYbotics was launched in 2016. It sells not only the robot, but also its revolutionary joints, called ANYdrive.

Today, ANYmal is not yet ready for sale to companies. However, ANYbotics has a growing number of partnerships with several industries, testing the robot for a few days or several weeks, for all types of tasks. Last October, for example, ANYmal navigated its way through the dark sewage system of the city of Zurich in order to test its capacity to help workers in similar difficult, repetitive and even dangerous tasks.

Why such an early interest among companies? “Because many companies want to integrate robots into their maintenance tasks” answers Fankhauser. “With ANYmal, they can actually evaluate its feasibility and plan their strategy. Eventually, both the architecture and the equipment of buildings could be rethought to be adapted to these maintenance robots”.

ANYmal requires ruggedised, sealed and extremely reliable interconnection solutions, such as LEMO. ©ANYbotics

Through field demonstrations and testing, ANYbotics can gather masses of information (up to 50,000 measurements are recorded every second during each test!). “It helps us to shape the product.” In due time, the startup will be ready to deliver a commercial product that really caters to companies’ needs.

Inspection and surveillance tasks on industrial sites are not the only applications considered. The startup is also thinking of agricultural inspections – with its onboard sensors, ANYmal is capable of mapping its environment, measuring biomass and even taking soil samples. In the longer term, it could also be used for search and rescue operations. The robot can already be switched to “remote control” mode at any time and can be easily tele-operated. It is also capable of live audio and video transmission.

The transition from prototype to marketed product will involve a number of further developments. These include increasing ANYmal’s agility and speed, extending its capacity to map large-scale environments, improving safety, security and user handling, and integrating the system with the customer’s data management software. It will also be necessary to enhance the robot’s reliability “so that it can work for days, weeks, or even months without human supervision.” All required certifications will have to be obtained. The locomotion system, which triggered the whole business, is only one of many considerations for ANYbotics.

Designed for extreme environments, ANYmal is untroubled by smoke and can walk through snow, rubble or water. ©ANYbotics

The startup is not alone. In fact, it has sold ANYmal robots to a dozen major universities, which use them to develop their robotics know-how. The startup has also founded ANYmal Research, a community whose members include the Toyota Research Institute, the German Aerospace Center and the computer company Nvidia. Members have full access to ANYmal’s control software, simulations and documentation. Sharing has boosted both software and hardware ideas and developments (built on ROS, the open-source Robot Operating System) – in particular payload variations, which provide expandability and scalability. For instance, one of the universities has added a robotic arm that enables ANYmal to grasp or handle objects and open doors.

Among possible applications, ANYbotics mentions entertainment. It is not only about playing in more films or TV series, but rather about participating in various attractions (trade shows, museums, etc.). “ANYmal is so novel that it attracts a great amount of interest” confirms Fankhauser with a smile. “Whenever we present it somewhere, people gather around.”

Videos of these events show a fascinated and sometimes slightly fearful audience when ANYmal gets too close. Is it fear of the “bad robot”? “This fear does exist, and we are happy to be able to use ANYmal to promote public awareness of robotics and robots.” Reminiscent of a young dog, ANYmal is well suited to the purpose.

However, Péter Fankhauser tempers the image of humans and sophisticated robots living together. “In the coming years, robots will continue to work in the background, as they have for a long time in factories. Then they will be used in public places in a selective and targeted way, for instance for dangerous missions. We will need to wait another ten years before animal-like robots such as ANYmal share our everyday lives!”

At the Consumer Electronics Show (CES) in Las Vegas in January, Continental, the German automotive supplier, used robots to demonstrate last-mile delivery. It showed ANYmal getting out of an autonomous vehicle with a parcel, climbing onto the front porch, lifting a paw to ring the doorbell and depositing the parcel before getting back into the vehicle. This futuristic image seems very close indeed.

*X-Files, season 11, episode 7, aired in February 2018.


#435742 This ‘Useless’ Social Robot ...

The recent high-profile failures of some home social robots (and the companies behind them) have made it even more challenging than before to develop robots in that space. And it was challenging enough to begin with—making a robot that can autonomously interact with random humans in their homes over a long period of time, for a price that people can afford, is extraordinarily difficult. However, the massive amount of initial interest in robots like Jibo, Kuri, Vector, and Buddy proves that people do want these things, or at least think they do, and while that’s the case, there’s incentive for other companies to give social home robots a try.

One of those companies is Zoetic, founded in 2017 by Mita Yun and Jitu Das, both ex-Googlers. Their robot, Kiki, is more or less exactly what you’d expect from a social home robot: It’s cute, white, roundish, has big eyes, promises that it will be your “robot sidekick,” and is not cheap: It’s on Kickstarter for $800. Kiki is among what appears to be a tentative second wave of social home robots, whose designers have (presumably) had a chance to take everything they learned from the social home robot pioneers and use it to make things better this time around.

Kiki’s Kickstarter video is, again, more or less exactly what you’d expect from a social home robot crowdfunding campaign:

We won’t get into all of the details on Kiki in this article (the Kickstarter page has tons of information), but a few distinguishing features:

Each Kiki will develop its own personality over time through its daily interactions with its owner, other people, and other Kikis.
Interacting with Kiki is more abstract than with most robots—it can understand some specific words and phrases, and will occasionally use a few specific words itself, but otherwise it mostly listens to your tone of voice and responds with sounds rather than speech.
Kiki doesn’t move on its own, but it can operate for up to two hours away from its charging dock.
Depending on how you treat Kiki, it can get depressed or neurotic. It also needs to be fed, which you can do by drawing different kinds of food in the app.
Everything Kiki does runs on-board the robot. It has Wi-Fi connectivity for updates, but doesn’t rely on the cloud for anything in real-time, meaning that your data stays on the robot and that the robot will continue to function even if its remote service shuts down.

It’s hard to say whether features like these are unique enough to help Kiki be successful where other social home robots haven’t been, so we spoke with Zoetic co-founder Mita Yun and asked her why she believes that Kiki is going to be the social home robot that makes it.

IEEE Spectrum: What’s your background?

Mita Yun: I was an only child growing up, and so I always wanted something like Doraemon or Totoro. Something that when you come home it’s there to greet you, not just because it’s programmed to do that but because it’s actually actively happy to see you, and only you. I was so interested in this that I went to study robotics at CMU and then after I graduated I joined Google and worked there for five years. I tended to go for the more risky and more fun projects, but they always got cancelled—the first project I joined was called Android at Home, and then I joined Google Glass, and then I joined a team called Robots for Kids. That project was building educational robots, and then I just realized that when we’re adding technology to something, to a product, we’re actually taking the life away somehow, and the kids were more connected with stuffed animals compared to the educational robots we were building. That project was also cancelled, and in 2017, I left with a coworker of mine (Jitu Das) to bring this dream into reality. And now we’re building Kiki.

“Jibo was Alexa plus cuteness equals $800, and I feel like that equation doesn’t work for most people, and that eventually killed the company. So, for Kiki, we are actually building something very different. We’re building something that’s completely useless”
—Mita Yun, Zoetic

You started working on Kiki in 2017, when things were already getting challenging for Jibo—why did you decide to start developing a social home robot at that point?

I thought Jibo was great. It had a special magical way of moving, and it was such a new idea that you could have this robot with embodiment and it can actually be your assistant. The problem with Jibo, in my opinion, was that it took too long to fulfill the orders. It took them three to four years to actually manufacture, because it was a very complex piece of hardware, and then during that period of time Alexa and Google Home came out, and they started selling these voice systems for $30 and then you have Jibo for $800. Jibo was Alexa plus cuteness equals $800, and I feel like that equation doesn’t work for most people, and that eventually killed the company. So, for Kiki, we are actually building something very different. We’re building something that’s completely useless.

Can you elaborate on “completely useless?”

I feel like people are initially connected with robots because they remind them of a character. And it’s the closest we can get to a character other than an organic character like an animal. So we’re connected to a character like when we have a robot in a mall that’s roaming around, even if it looks really ugly, like if it doesn’t have eyes, people still take selfies with it. Why? Because they think it’s a character. And humans are just hardwired to love characters and love stories. With Kiki, we just wanted to build a character that’s alive, we don’t want to have a character do anything super useful.

I understand why other robotics companies are adding Alexa integration to their robots, and I think that’s great. But the dream I had, and the understanding I have about robotics technology, is that for a consumer robot especially, it is very very difficult for the robot to justify its price through usefulness. And then there’s also research showing that the more useless something is, the easier it is to have an emotional connection, so that’s why we want to keep Kiki very useless.

What kind of character are you creating with Kiki?

The whole design principle around Kiki is we want to make it a very vulnerable character. In terms of its status at home, it’s not going to be higher or equal status as the owner, but slightly lower status than the human, and it’s vulnerable and needs you to take care of it in order to grow up into a good personality robot.

We don’t let Kiki speak full English sentences, because whenever it does that, people are going to think it’s at least as intelligent as a baby, which is impossible for robots at this point. And we also don’t let it move around, because when you have it move around, people are going to think, “I’m going to call Kiki’s name, and then Kiki will come to me.” But that is actually very difficult to build. And then also we don’t have any voice integration, so it doesn’t tell you about the stock market price and so on.

Photo: Zoetic

Kiki is designed to be “vulnerable,” and it needs you to take care of it so it can “grow up into a good personality robot,” according to its creators.

That sounds similar to what Mayfield did with Kuri, emphasizing an emotional connection rather than specific functionality.

It is very similar, but one of the key differences from Kuri, I think, is that Kuri started with a Kobuki base, and then it’s wrapped into a cute shell, and they added sounds. So Kuri started with utility in mind—navigation is an important part of Kuri, so they started with that challenge. For Kiki, we started with the eyes. The entire thing started with the character itself.

How will you be able to convince your customers to spend $800 on a robot that you’ve described as “useless” in some ways?

Because it’s useless, it’s actually easier to convince people, because it provides you with an emotional connection. I think Kiki is not a utility-driven product, so the adoption cycle is different. For a functional product, it’s very easy to pick up, because you can justify it by saying “I’m going to pay this much and then my life can become this much more efficient.” But it’s also very easy to be replaced and forgotten. For an emotion-driven product, it’s slower to pick up, but once people actually pick it up, they’re going to be hooked—they get connected with it, and they’re willing to invest more into taking care of the robot so it will grow up to be smarter.

Maintaining value over time has been another challenge for social home robots. How will you make sure that people don’t get bored with Kiki after a few weeks?

Of course Kiki has limits in what it can do. We can combine the eyes, the facial expression, the motors, and lights and sounds, but is it going to be constantly entertaining? So we think of it this way: imagine a human were actually puppeteering Kiki—could Kiki stay interesting if a human were puppeteering it and interacting with the owner? So I think what makes a robot interesting is not just the physical expressions, but the layer in between: how the robot conveys its intentions and emotions.

For example, if you come into the room and then Kiki decides it will turn the other direction, ignore you, and then you feel like, huh, why did the robot do that to me? Did I do something wrong? And then maybe you will come up to it and you will try to figure out why it did that. So, even though Kiki can only express in four different dimensions, it can still make things very interesting, and then when its strategies change, it makes it feel like a new experience.

There’s also an explore and exploit process going on. Kiki wants to make you smile, and it will try different things. It could try to chase its tail, and if you smile, Kiki learns that this works and will exploit it. But maybe after doing it three times, you no longer find it funny, because you’re bored of it, and then Kiki will observe your reactions and be motivated to explore a new strategy.
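In reinforcement-learning terms, what Yun describes is a multi-armed bandit with non-stationary rewards: behaviors that earn smiles get exploited, but their estimated payoff decays with repetition as the owner gets bored, pushing the robot back toward exploration. Here is a minimal epsilon-greedy sketch of that loop; every name in it is an illustrative assumption, not Kiki’s actual software.

```python
import random

# Epsilon-greedy bandit over a set of behaviors. Reward estimates decay
# each time a behavior is used, modeling the owner getting bored.
# Illustrative sketch only; names are assumptions, not Kiki's software.
behaviors = ["chase_tail", "blink_eyes", "chirp", "wiggle"]
value = {b: 0.5 for b in behaviors}  # estimated chance of earning a smile
EPSILON, LEARNING_RATE, BOREDOM = 0.2, 0.3, 0.9

def pick_behavior():
    if random.random() < EPSILON:          # explore: try something new
        return random.choice(behaviors)
    return max(behaviors, key=value.get)   # exploit: repeat what works

def update(behavior, owner_smiled):
    reward = 1.0 if owner_smiled else 0.0
    value[behavior] += LEARNING_RATE * (reward - value[behavior])
    value[behavior] *= BOREDOM  # a repeated trick loses its novelty

# One interaction step (a random stand-in replaces the smile detector):
b = pick_behavior()
update(b, owner_smiled=random.random() < 0.5)
print(b, value)
```

The boredom decay is what keeps the loop from settling permanently on one trick: even a reliable smile-earner loses estimated value with use, until exploring something new looks better.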

Photo: Zoetic

Kiki’s creators are hoping that an emotionally engaging robot will make it easier for people to get attached to it and be willing to spend time taking care of it.

A particular risk with crowdfunding a robot like this is setting expectations unreasonably high. The emphasis on personality and emotional engagement with Kiki seems like it may be very difficult for the robot to live up to in practice.

I think we invested more than most robotics companies into really building out Kiki’s personality, because that is the single most important thing to us. For Jibo a lot of the focus was in the assistant, and for Kuri, it’s more in the movement. For Kiki, it’s very much in the personality.

I feel like when most people talk about personality, they’re mainly talking about expression. With Kiki, it’s not just in the expression itself, not just in the voice or the eyes or the output layer, it’s in the layer in between—when Kiki receives input, how will it make decisions about what to do? We actually don’t think the personality of Kiki is categorizable, which is why I feel like Kiki has a deeper implementation of how personalities should work. And you’re right, Kiki doesn’t really understand why you’re feeling a certain way, it just reads your facial expressions. It’s maybe not your best friend, but maybe closer to your little guinea pig robot.

Photo: Zoetic

The team behind Kiki paid particular attention to its eyes, and designed the robot to always face the person that it is interacting with.

Is that where you’d put Kiki on the scale of human to pet?

Kiki is definitely not human, we want to keep it very far away from human. And it’s also not a dog or cat. When we were designing Kiki, we took inspiration from mammals because humans are deeply connected to mammals since we’re mammals ourselves. And specifically we’re connected to predator animals. With prey animals, their eyes are usually on the sides of their heads, because they need to see different angles. A predator animal needs to hunt, they need to focus. Cats and dogs are predator animals. So with Kiki, that’s why we made sure the eyes are on one side of the face and the head can actuate independently from the body and the body can turn so it’s always facing the person that it’s paying attention to.

I feel like Kiki probably does more than a plant. It does more than a fish, because a fish doesn’t look you in the eyes. It’s not as smart as a cat or a dog, so I would just put it in this guinea pig kind of category.

What have you found so far when running user studies with Kiki?

When we were first designing Kiki we went through a whole series of prototypes. One of the earlier prototypes of Kiki looked like a CRT, like a very old monitor, and when we were testing that with people they didn’t even want to touch it. Kiki’s design inspiration actually came from an airplane, with a very angular, futuristic look, but based on user feedback we made it more round and more friendly to the touch. The lights were another feature request from the users, which adds another layer of expressivity to Kiki, and they wanted to see multiple Kikis working together with different personalities. Users also wanted different looks for Kiki, to make it look like a deer or a unicorn, for example, and we actually did take that into consideration because it doesn’t look like any particular mammal. In the future, you’ll be able to have different ears to make it look like completely different animals.

There has been a lot of user feedback that we didn’t implement—I believe we should observe the users’ reactions and feedback but not listen to their advice. The users shouldn’t be our product designers, because if you test Kiki with 10 users, eight of them will tell you they want Alexa in it. But we’re never going to add Alexa integration to Kiki, because that’s not what it’s meant to do.

While it’s far too early to tell whether Kiki will be a long-term success, the Kickstarter campaign is currently over 95 percent funded with 8 days to go, and 34 robots are still available for a May 2020 delivery.

[ Kickstarter ]


#435722 Stochastic Robots Use Randomness to ...

The idea behind swarm robots is to replace discrete, expensive, breakable uni-tasking components with a whole bunch of much simpler, cheaper, and replaceable robots that can work together to do the same sorts of tasks. Unfortunately, all of those swarm robots end up needing their own computing and communications and stuff if you want to get them to do what you want them to do.

A different approach to swarm robotics is to use a swarm of much cheaper robots that are far less intelligent. In fact, they may not have to be intelligent at all, if you can rely on their physical characteristics to drive them instead. These swarms are “stochastic,” meaning that their motions are randomly determined, but if you’re clever and careful, you can still get them to do specific things.

Georgia Tech has developed some little swarm robots called “smarticles” that can’t really do much at all on their own, but once you put them together into a jumble, their randomness can actually accomplish something.

Honestly, calling these particle robots “smart” might be giving them a bit too much credit, because they’re actually kind of dumb and strictly speaking not capable of all that much on their own. A single smarticle weighs 35 grams, and consists of some little 3D-printed flappy bits attached to servos, plus an Arduino Pro Mini, a battery, and a light or sound sensor. When its little flappy bits are activated, each smarticle can move slightly, but a single one mostly just moves around in a square and then will gradually drift in a mostly random direction over time.

It gets more interesting when you throw a whole bunch of smarticles into a constrained area. A small collection of five or 10 smarticles constrained together forms a “supersmarticle,” but besides being in close proximity to one another, the smarticles within the supersmarticle aren’t communicating or anything like that. As far as each smarticle is concerned, it’s independent, but weirdly, a jumble of them can work together without working together.

“These are very rudimentary robots whose behavior is dominated by mechanics and the laws of physics,” said Dan Goldman, a Dunn Family Professor in the School of Physics at the Georgia Institute of Technology.

The researchers noticed that if one small robot stopped moving, perhaps because its battery died, the group of smarticles would begin moving in the direction of that stalled robot. Graduate student Ross Warkentin learned he could control the movement by adding photo sensors to the robots that halt the arm flapping when a strong beam of light hits one of them.

“If you angle the flashlight just right, you can highlight the robot you want to be inactive, and that causes the ring to lurch toward or away from it, even though no robots are programmed to move toward the light,” Goldman said. “That allowed steering of the ensemble in a very rudimentary, stochastic way.”

It turns out that it’s possible to model this behavior, and to control a supersmarticle with enough fidelity to steer it through a maze. And while these particular smarticles aren’t all that small, strictly speaking, the idea is to develop techniques that will work when robots are scaled way, way down, to the point where you can’t physically fit useful computing in them at all.
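One way to build intuition for the phototactic steering is to model the supersmarticle’s center of mass as a random walk: the active robots’ flapping contributes zero-mean noise, while the stalled robot contributes a small constant bias toward itself, standing in for the mechanical asymmetry its inert body creates. The toy simulation below reproduces the qualitative drift under those (strongly simplified) assumptions; it is not the researchers’ actual model.

```python
import math
import random

# Toy model: the supersmarticle's center of mass takes a random walk
# driven by the active robots' flapping (zero-mean noise), plus a small
# constant bias toward the inactive robot, standing in for the
# mechanical asymmetry a stalled body creates. Strongly simplified;
# not the researchers' actual model.
def simulate(steps=5000, noise=1.0, bias=0.05, seed=0):
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    stalled_at = (10.0, 0.0)  # position of the inactive smarticle
    for _ in range(steps):
        angle = rng.uniform(0.0, 2.0 * math.pi)  # random flapping push
        x += noise * math.cos(angle)
        y += noise * math.sin(angle)
        dx, dy = stalled_at[0] - x, stalled_at[1] - y
        d = math.hypot(dx, dy) or 1.0
        x += bias * dx / d                       # weak pull toward the
        y += bias * dy / d                       # stalled robot
    return x, y

print(simulate())  # ends up displaced toward the stalled robot on average
```

Even a tiny bias eventually dominates the zero-mean noise, which is why aiming a flashlight at one robot is enough to steer the whole ensemble.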

The researchers are also working on some other concepts, like these:

Image: Science Robotics

The Georgia Tech researchers envision stochastic robot swarms that don’t have a perfectly defined shape or delineation but are capable of self-propulsion, relying on the ensemble-level behaviors that lead to collective locomotion. In such a robot, the researchers say, groups of largely generic agents may be able to achieve complex goals, as observed in biological collectives.

Er, yeah. I’m…not sure I really want there to be a bipedal humanoid robot built out of a bunch of tiny robots. Like, that seems creepy somehow, you know? I’m totally okay with slugs, but let’s not get crazy.

“A robot made of robots: Emergent transport and control of a smarticle ensemble,” by William Savoie, Thomas A. Berrueta, Zachary Jackson, Ana Pervan, Ross Warkentin, Shengkai Li, Todd D. Murphey, Kurt Wiesenfeld, and Daniel I. Goldman from the Georgia Institute of Technology, appears in the current issue of Science Robotics.


#435716 Watch This Drone Explode Into Maple Seed ...

As useful as conventional fixed-wing and quadrotor drones have become, they still tend to be relatively complicated, expensive machines that you really want to be able to use more than once. When a one-way trip is all that you have in mind, you want something simple, reliable, and cheap, and we’ve seen a bunch of different designs for drone gliders that more or less fulfill those criteria.

For an even simpler gliding design, you want to minimize both airframe mass and control surfaces, and the maple tree provides some inspiration in the form of samara, those distinctive seed pods that whirl to the ground in the fall. Samara are essentially just an unbalanced wing that spins, and while the natural ones don’t steer, adding an actuated flap to the robotic version and moving it at just the right time results in enough controllability to aim for a specific point on the ground.

Roboticists at the Singapore University of Technology and Design (SUTD) have been experimenting with samara-inspired drones, and in a new paper in IEEE Robotics and Automation Letters they explore what happens if you attach five of the drones together and then separate them in mid air.

Image: Singapore University of Technology and Design

The drone with all five wings attached (top left), and details of the individual wings: (a) smaller 44.9-gram wing for semi-indoor testing; (b) larger 83.4-gram wing able to carry a Pixracer, GPS, and magnetometer for directional control experiments.

Fundamentally, a samara design acts as a decelerator for an aerial payload. You can think of it like a parachute: It makes sure that whatever you toss out of an airplane gets to the ground intact rather than just smashing itself to bits on impact. Steering is possible, but you don’t get a lot of stability or precision control. The RA-L paper describes one solution to this, which is to collaboratively use five drones at once in a configuration that looks a bit like a helicopter rotor.

And once the multi-drone is right where you want it, the five individual samara drones can split off all at once, heading out on their own missions. It's quite a sight:

The concept features a collaborative autorotation in the initial stage of drop whereby several wings are attached to each other to form a rotor hub. The combined form achieves higher rotational energy and a collaborative control strategy is possible. Once closer to the ground, they can exit the collaborative form and continue to descend to unique destinations. A section of each wing forms a flap and a small actuator changes its pitch cyclically. Since all wing-flaps can actuate simultaneously in collaborative mode, better maneuverability is possible, hence higher resistance against environmental conditions. The vertical and horizontal speeds can be controlled to a certain extent, allowing it to navigate towards a target location and land softly.
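The steering principle is essentially a helicopter’s cyclic pitch: because the wing is spinning, deflecting the flap hardest at one chosen azimuth each revolution biases lift toward one side, producing a net lateral force in that direction. Here is a minimal sketch of such a once-per-revolution flap command; the parameter names are assumptions, not the SUTD flight code.

```python
import math

# Once-per-revolution ("cyclic") flap command for an autorotating wing.
# Deflecting the flap hardest at a chosen azimuth biases lift on one
# side of the rotation, producing a net lateral force in that direction.
# Parameter names are assumptions, not the SUTD flight code.
def flap_deflection(azimuth, steer_heading, amplitude_deg=10.0,
                    collective_deg=0.0):
    """Flap angle (deg) given the wing's current azimuth (rad).

    steer_heading  -- direction (rad) to push the descent toward
    collective_deg -- constant offset, trades descent rate for spin rate
    """
    return collective_deg + amplitude_deg * math.cos(azimuth - steer_heading)

# One revolution sampled every 45 degrees, steering toward heading 0:
for step in range(8):
    az = step * math.pi / 4.0
    print(f"azimuth {math.degrees(az):5.1f} deg -> "
          f"flap {flap_deflection(az, 0.0):+5.1f} deg")
```

Setting the amplitude to zero recovers a straight vertical descent, and actuating all five flaps in phase in collaborative mode is consistent with the quoted passage’s claim of better maneuverability.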

The samara autorotating wing drones themselves could conceivably carry small payloads like sensors or emergency medical supplies, with the small-scale versions in the video able to handle an extra 30 grams of payload. While they might not have as much capacity as a traditional fixed-wing glider, they have the advantage of being able to descend vertically, and they can perform better than a parachute thanks to their ability to steer. The researchers plan on improving the design of their little drones, with the goal of increasing the rotation speed and improving the control performance of both the individual drones and the multi-wing collaborative version.

“Dynamics and Control of a Collaborative and Separating Descent of Samara Autorotating Wings,” by Shane Kyi Hla Win, Luke Soe Thura Win, Danial Sufiyan, Gim Song Soh, and Shaohui Foong from Singapore University of Technology and Design, appears in the current issue of IEEE Robotics and Automation Letters.
[ SUTD ]



#435703 FarmWise Raises $14.5 Million to Teach ...

We humans spend most of our time getting hungry or eating, which must be really inconvenient for the people who have to produce food for everyone. For a sustainable and tasty future, we’ll need to make the most of what we’ve got by growing more food with less effort, and that’s where the robots can help us out a little bit.

FarmWise, a California-based startup, is looking to enhance farming efficiency by automating everything from seeding to harvesting, starting with the worst task of all: weeding. And they’ve just raised US $14.5 million to do it.

FarmWise’s autonomous, AI-enabled robots are designed to solve farmers’ most pressing challenges by performing a variety of farming functions – starting with weeding, and providing personalized care to every plant they touch. Using machine learning models, computer vision and high-precision mechanical tools, FarmWise’s sophisticated robots cleanly pick weeds from fields, leaving crops with the best opportunity to thrive while eliminating harmful chemical inputs. To date, FarmWise’s robots have efficiently removed weeds from more than 10 million plants.
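FarmWise hasn’t published its software stack, but the description above implies a classic perception-actuation loop: detect plants in each camera frame, classify crop versus weed, and drive the mechanical tool only to confident weed detections. Below is a hedged sketch of that loop; every name, type, and threshold in it is a placeholder assumption.

```python
from dataclasses import dataclass
from typing import List

# Hypothetical perception-actuation loop for a mechanical weeder.
# Everything here is a placeholder assumption, not FarmWise's stack.
@dataclass
class Plant:
    x_mm: float        # position in the weeding tool's working frame
    y_mm: float
    is_weed: bool      # classifier verdict: weed vs. crop
    confidence: float

def detect_plants(frame) -> List[Plant]:
    # Stand-in for a trained vision model; returns canned detections here.
    return [Plant(120.0, 40.0, True, 0.97),
            Plant(300.0, 55.0, False, 0.99),  # a crop: leave it alone
            Plant(410.0, 10.0, True, 0.62)]   # low confidence: skip it

class Tool:  # stand-in for the real motion hardware
    def move_to(self, x, y): print(f"move to ({x}, {y}) mm")
    def extract(self):       print("extract weed")

def weed_one_pass(frame, tool, min_confidence=0.9):
    for p in detect_plants(frame):
        # Act only on confident weed detections; never risk a crop.
        if p.is_weed and p.confidence >= min_confidence:
            tool.move_to(p.x_mm, p.y_mm)
            tool.extract()

weed_one_pass(frame=None, tool=Tool())
```

The asymmetric confidence threshold reflects the economics: a missed weed costs little, while yanking a crop plant destroys revenue.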

FarmWise is not the first company to work on large mobile farming robots. A few years ago, we wrote about DeepField Robotics and their giant weed-punching robot. But considering how many humans there are, and how often we tend to get hungry, it certainly seems like there’s plenty of opportunity to go around.

Photo: FarmWise

FarmWise is collecting massive amounts of data about every single plant in an entire field, which is something that hasn’t been possible before. Above, one of the robots at a farm in Salinas Valley, Calif.

Weeding is just one thing that farm robots are able to do. FarmWise is collecting massive amounts of data about every single plant in an entire field, practically on the per-leaf level, which is something that hasn’t been possible before. Data like this could be used for all sorts of things, but generally, the long-term hope is that robots could tend to every single plant individually—weeding them, fertilizing them, telling them what good plants they are, and then mercilessly yanking them out of the ground at absolute peak ripeness. It’s not realistic to do this with human labor, but it’s the sort of data-intensive and monotonous task that robots could be ideal for.

The question with robots like this is not necessarily whether they can do the job that they were created for, because generally, they can—farms are structured enough environments that they lend themselves to autonomous robots, and the tasks are relatively well defined. The issue right now, I think, is whether robots are really time- and cost-effective for farmers. Capable robots are an expensive investment, and even if there is a shortage of human labor, will robots perform well enough to convince farmers to adopt the technology? That’s a solid maybe, and here’s hoping that FarmWise can figure out how to make it work.

[ FarmWise ]
