Tag Archives: gravity

#439495 Legged Robots Do Surprisingly Well in ...

Here on Earth, we’re getting good enough at legged robots that we’re starting to see a transition from wheels to legs for challenging environments, especially environments with some uncertainty as to exactly what kind of terrain your robot might encounter. Beyond Earth, we’re still heavily reliant on wheeled vehicles, but even that might be starting to change. While wheels do pretty well on the Moon and on Mars, there are lots of other places to explore, like smaller moons and asteroids. And there, it’s not just terrain that’s a challenge: it’s gravity.

In low gravity environments, any robot moving over rough terrain risks entering a flight phase. Perhaps an extended flight phase, depending on how low the gravity is, which can be dangerous to robots that aren’t prepared for it. Researchers at the Robotic Systems Lab at ETH Zurich have been doing some experiments with the SpaceBok quadruped, and they’ve published a paper in IEEE T-RO showing that it’s possible to teach SpaceBok to effectively bok around in low gravity environments while using its legs to reorient itself during flight, exhibiting “cat-like jumping and landing” behaviors through vigorous leg-wiggling.

Also, while I’m fairly certain that “bok” is not a verb that means “to move dynamically in low gravity using legs,” I feel like that’s what it should mean. Sort of like pronk, except in space. Let’s make it so!

Just look at that robot bok!

This reorientation technique was developed using deep reinforcement learning, and then transferred from simulation to a real SpaceBok robot, albeit in two degrees of freedom rather than three. The real challenge with this method is just how complicated things get when you start wiggling multiple limbs in the air to reach a specific configuration, since the dynamics here are (as the paper puts it) “highly non-linear,” and even simulating everything accurately enough proved difficult. What you see in the simulation, incidentally, is an environment similar to Ceres, the largest object in the asteroid belt (and officially a dwarf planet), which has a surface gravity of 0.03g.
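
To get an intuition for how leg-wiggling can reorient a free-floating body at all, consider a minimal planar model: with zero total angular momentum, sweeping the legs while they’re extended rotates the body one way, and swinging them back while they’re retracted undoes less of that rotation, leaving a small net turn per cycle. Here’s a sketch of that idea in Python; this is not the paper’s learned controller, and all of the inertia values are invented for illustration.

```python
# Minimal planar sketch of momentum-conserving reorientation: a body plus
# legs that sweep through an angle while "extended" (high inertia) and
# swing back while "retracted" (low inertia). All values are illustrative,
# not SpaceBok's actual parameters.
import math

I_BODY = 1.0               # body inertia about the rotation axis, kg*m^2
I_EXT = 0.25               # leg inertia when extended, kg*m^2
I_RET = 0.05               # leg inertia when retracted, kg*m^2
STROKE = math.radians(60)  # leg sweep per half-cycle, rad

def net_rotation_per_cycle() -> float:
    """Net body rotation from one extend-sweep / retract-return cycle.

    With zero total angular momentum, a leg sweep of dphi rotates the
    body by -I_leg / (I_body + I_leg) * dphi.
    """
    power_stroke = -I_EXT / (I_BODY + I_EXT) * STROKE
    return_stroke = -I_RET / (I_BODY + I_RET) * (-STROKE)
    return power_stroke + return_stroke

per_cycle = net_rotation_per_cycle()
print(f"net rotation per cycle: {math.degrees(per_cycle):.1f} deg")
print(f"cycles to turn 90 deg: {math.radians(90) / abs(per_cycle):.0f}")
```

The real problem is far harder (three dimensions, joint limits, strongly coupled non-linear dynamics), which is exactly why the authors reached for reinforcement learning rather than a hand-derived policy.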

Although SpaceBok has “space” right in the name, it’s not especially optimized for this particular kind of motion. As the video shows, having an actuated hip joint could make the difference between a reliable soft landing and, uh, not. Not landing softly is a big deal, because an uncontrolled bounce could send the robot flying huge distances, which is what happened to the Philae lander on comet 67P/Churyumov–Gerasimenko back in 2014.

For more details on SpaceBok’s space bokking, we spoke with the paper’s first author, Nikita Rudin, via email.

IEEE Spectrum: Why are legs ideal for mobility in low gravity environments?

Rudin: In low gravity environments, rolling on wheels becomes more difficult because of reduced traction. However, legs can exploit the low gravity and use high jumps to move efficiently. With high jumps, you can also clear large obstacles along the way, which is harder to do in higher gravity.

Were there unique challenges to training your controller in 2D and 3D relative to training controllers for terrestrial legged robot motion?

The main challenge is the long flight phase, which is not present in terrestrial locomotion. In Earth gravity, robots (and animals) use reaction forces from the ground to balance. During a jump, they don't usually need to re-orient themselves. In low gravity, by contrast, we have extended flight phases (multiple seconds) and only short contacts with the ground. The robot needs to be able to re-orient and balance in the air; otherwise, a small disturbance at the moment of the jump will slowly flip the robot. In short, in low gravity there is a new control problem that can be neglected on Earth.
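
That “multiple seconds” figure is easy to sanity-check with ballistic arithmetic: a hop with vertical takeoff speed v stays aloft for 2v/g and peaks at v²/(2g). A quick sketch, using an assumed round-number takeoff speed rather than any actual SpaceBok spec:

```python
# Time aloft and apex height for a ballistic hop: t = 2*v/g, h = v^2/(2*g).
# The takeoff speed is an assumed round number, not a SpaceBok parameter.
G_EARTH = 9.81  # m/s^2
bodies = {"Earth": G_EARTH, "Moon": 0.166 * G_EARTH, "Ceres": 0.03 * G_EARTH}
v = 1.5         # vertical takeoff speed, m/s

for name, g in bodies.items():
    print(f"{name:5s}: airborne {2 * v / g:5.1f} s, apex {v**2 / (2 * g):4.2f} m")
```

On Ceres, the same gentle hop that lasts a third of a second on Earth keeps the robot airborne for about ten seconds, which is plenty of time for a small takeoff disturbance to flip it.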

Besides the addition of a hip joint, what other modifications would you like to make to the robot to enhance its capabilities? Would a tail be useful, for example? Or very heavy shoes?

A tail is a very interesting idea, and heavy shoes would definitely help; however, they increase the total weight, which is costly in space. We actually add some minor weight to the feet already (in the paper we analyze the effect of these weights). Another interesting addition would be a joint in the center of the robot, allowing it to do cat-like backbone torsion.

How does the difficulty of this problem change as the gravity changes?

With changing gravity, you change the importance of mid-air re-orientation compared to ground contacts. For locomotion, low gravity is harder, for the reasons above. However, if the robot is dropped and needs to perform a flip before landing, higher gravity is harder because you have less time for the whole process.

What are you working on next?

We have a few ideas for the next projects, including a legged robot specifically designed and certified for space, and exploring cat-like re-orientation on Earth with smaller, faster robots. We would also like to simulate a zero-g environment on Earth by dropping the robot from a few dozen meters into a safety net, and of course, a parabolic flight is still very much one of our objectives. However, we will probably need a smaller robot there as well.

Cat-Like Jumping and Landing of Legged Robots in Low Gravity Using Deep Reinforcement Learning, by Nikita Rudin, Hendrik Kolvenbach, Vassilios Tsounis, and Marco Hutter from ETH Zurich, is published in IEEE Transactions on Robotics.

Posted in Human Robots

#439089 Ingenuity’s Chief Pilot Explains How ...

On April 11, the Mars helicopter Ingenuity will take to the skies of Mars for the first time. It will do so fully autonomously, out of necessity—the time delay between Ingenuity’s pilots at the Jet Propulsion Laboratory and Jezero Crater on Mars makes manual or even supervisory control impossible. So the best that the folks at JPL can do is practice as much as they can in simulation, and then hope that the helicopter can handle everything on its own.

Here on Earth, simulation is a critical tool for many robotics applications, because it doesn’t rely on access to expensive hardware, is non-destructive, and can be run in parallel and at faster-than-real-time speeds to focus on solving specific problems. Once you think you’ve gotten everything figured out in simulation, you can always give it a try on the real robot and see how close you came. If it works in real life, great! And if not, well, you can tweak some stuff in the simulation and try again.

For the Mars helicopter, simulation is much more important, and the stakes are much higher. Testing the Mars helicopter under conditions matching what it’ll find on Mars is not physically possible on Earth. JPL has flown engineering models in Martian atmospheric conditions, and it has used an actuated tether to mimic Mars gravity, but there’s just no way to know what it’ll be like flying on Mars until they’ve actually flown on Mars. With that in mind, the Ingenuity team has been relying heavily on simulation, since that’s one of the best tools they have to prepare for their Martian flights. We talked with Ingenuity’s Chief Pilot, Håvard Grip, to learn how it all works.

Ingenuity Facts:
Body Size: a box of tissues

Brains: Qualcomm Snapdragon 801

Weight: 1.8 kilograms

Propulsion: Two 1.2m carbon fiber rotors

Navigation sensors: VGA camera, laser altimeter, inclinometer

Ingenuity is scheduled to make its first flight no earlier than April 11. Before liftoff, the Ingenuity team will conduct a variety of pre-flight checks, including verifying the responsiveness of the control system and spinning the blades up to full speed (2,537 rpm) without lifting off. If everything looks good, the first flight will consist of a 1 meter per second climb to 3 meters, 30 seconds of hover at 3 meters while rotating in place a bit, and then a descent to landing. If Ingenuity pulls this off, its entire mission will be considered a success. There will be more flights over the next few weeks, but all it takes is one to prove that autonomous helicopter flight on Mars is possible.
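
Laid out as a timeline, that plan is strikingly short. Here’s the profile as described above, encoded as a small sketch; the descent rate isn’t given in the article, so it’s assumed to mirror the 1 meter per second climb.

```python
# First-flight profile from the article: climb 3 m at 1 m/s, hover for 30 s
# while yawing, then descend. The descent speed is an assumption.
CLIMB_M, CLIMB_SPEED_MPS = 3.0, 1.0
phases = [
    ("climb to 3 m", CLIMB_M / CLIMB_SPEED_MPS),      # 3 s
    ("hover + yaw turn", 30.0),
    ("descend and land", CLIMB_M / CLIMB_SPEED_MPS),  # assumed symmetric
]
for name, duration_s in phases:
    print(f"{name:18s} {duration_s:4.0f} s")
print(f"total time aloft: ~{sum(t for _, t in phases):.0f} s")
```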

Last month, we spoke with Mars Helicopter Operations Lead Tim Canham about Ingenuity’s hardware, software, and autonomy, but we wanted to know more about how the Ingenuity team has been using simulation for everything from vehicle design to flight planning. To answer our questions, we talked with JPL’s Håvard Grip, who led the development of Ingenuity’s navigation and flight control systems. Grip also has the title of Ingenuity Chief Pilot, which is pretty awesome. He summarizes this role as “operating the flight control system to make the helicopter do what we want it to do.”

IEEE Spectrum: Can you tell me about the simulation environment that JPL uses for Ingenuity’s flight planning?

Håvard Grip: We developed a Mars helicopter simulation ourselves at JPL, based on a multi-body simulation framework that’s also developed at JPL, called DARTS/DSHELL. That's a system that has been in development at JPL for about 30 years now, and it's been used in a number of missions. And so we took that multibody simulation framework, and based on it we built our own Mars helicopter simulation, put together our own rotor model, our own aerodynamics models, and everything else that's needed in order to simulate a helicopter. We also had a lot of help from the rotorcraft experts at NASA Ames and NASA Langley.

Image: NASA/JPL

Ingenuity in JPL’s flight simulator.

Without being able to test on Mars, how much validation are you able to do of what you’re seeing in simulation?

We can do a fair amount, but it requires a lot of planning. When we made our first real prototype (with a full-size rotor that looked like what we were thinking of putting on Mars) we first spent a lot of time designing it and using simulation tools to guide that design, and when we were sufficiently confident that we were close enough, and that we understood enough about it, then we actually built the thing and designed a whole suite of tests in a vacuum chamber where we could replicate Mars atmospheric conditions. And those tests were before we tried to fly the helicopter—they were specifically targeted at what we call system identification, which has to do with figuring out what the true properties, the true dynamics of a system are, compared to what we assumed in our models. So then we got to see how well our models did, and in the places where they needed adjustment, we could go back and do that.
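
System identification in miniature looks something like the sketch below: pick a parameterized model, excite the system, and find the parameter value that best explains the measurements. Here the “model” is a single first-order lag and the data is synthetic; the real campaign fit far richer rotor and vehicle models.

```python
# Toy system identification: fit a first-order time constant to noisy
# step-response data by least-squares grid search. Purely illustrative.
import numpy as np

dt, tau_true = 0.01, 0.15
t = np.arange(0.0, 2.0, dt)
measured = 1.0 - np.exp(-t / tau_true) + np.random.normal(0.0, 0.01, t.size)

candidates = np.linspace(0.05, 0.5, 200)
sse = [np.sum((1.0 - np.exp(-t / tau) - measured) ** 2) for tau in candidates]
tau_fit = candidates[int(np.argmin(sse))]
print(f"identified time constant: {tau_fit:.3f} s (true value {tau_true} s)")
```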

The simulation work that we really started after that very first initial lift test, that’s what allowed us to unlock all of the secrets to building a helicopter that can fly on Mars.
—Håvard Grip, Ingenuity Chief Pilot

We did a lot of this kind of testing. It was a big campaign, in several stages. But there are of course things that you can't fully replicate, and you do depend on simulation to tie things together. For example, we can't truly replicate Martian gravity on Earth. We can replicate the atmosphere, but not the gravity, and so we have to do various things when we fly—either make the helicopter very light, or we have to help it a little bit by pulling up on it with a string to offload some of the weight. These things don't fully replicate what it will be like on Mars. We also can't simultaneously replicate the Mars aerodynamic environment and the physical and visual surroundings that the helicopter will be flying in. These are places where simulation tools definitely come in handy, with the ability to do full flight tests from A to B, with the helicopter taking off from the ground, running the flight software that it will be running on board, simulating the images that the navigation camera takes of the ground below as it flies, feeding that back into the flight software, and then controlling it.

To what extent can simulation really compensate for the kinds of physical testing that you can’t do on Earth?

It gives you a few different possibilities. We can take certain tests on Earth where we replicate key elements of the environment, like the atmosphere or the visual surroundings for example, and you can validate your simulation on those parameters that you can test on Earth. Then, you can combine those things in simulation, which gives you the ability to set up arbitrary scenarios and do lots and lots of tests. We can Monte Carlo things, we can do a flight a thousand times in a row, with small perturbations of various parameters and tease out what our sensitivities are to those things. And those are the kinds of things that you can't do with physical tests, both because you can't fully replicate the environment and also because of the resources that would be required to do the same thing a thousand times in a row.
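
In code, that kind of campaign boils down to a loop: perturb, fly, score, repeat. Below is a deliberately tiny stand-in, with a one-dimensional hover in place of a full flight and invented uncertainty magnitudes; it is a sketch of the method, not JPL’s simulation.

```python
# Toy Monte Carlo campaign: fly the same simulated "flight" many times with
# random perturbations and count successes. The flight here is a 1-D hover
# on Mars with a PD controller; uncertainty magnitudes are invented.
import random

def simulate_flight(mass_err: float, wind_accel: float) -> bool:
    """Return True if the final altitude error stays within tolerance."""
    g, dt = 3.71, 0.02             # Mars gravity (m/s^2), timestep (s)
    z, vz, target = 0.0, 0.0, 3.0
    for _ in range(1000):          # 20 s of simulated flight at 50 Hz
        a_cmd = g + 12.0 * (target - z) - 6.0 * vz  # PD + gravity feedforward
        a = a_cmd / (1.0 + mass_err) - g + wind_accel
        vz += a * dt
        z += vz * dt
    return abs(z - target) < 0.25

N = 1000
ok = sum(simulate_flight(random.gauss(0.0, 0.03),   # ~3% mass uncertainty
                         random.gauss(0.0, 0.3))    # gust acceleration, m/s^2
         for _ in range(N))
print(f"{ok}/{N} perturbed flights stayed within tolerance")
```

Sweeping the perturbation magnitudes one at a time, rather than drawing them all from fixed distributions, is how you would tease out which parameters the system is most sensitive to.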

Because there are limits to the physical testing we can do on Earth, there are elements where we know there's more uncertainty. On those aspects where the uncertainty is high, we tried to build in enough margin that we can handle a range of things. And simulation gives you the ability to then maybe play with those parameters, and put them at their outer limits, and test them beyond where the real parameters are going to be to make sure that you have robustness even in those extreme cases.

How do you make sure you’re not relying on simulation too much, especially since in some ways it’s your only option?

It’s about anchoring it in real data, and we’ve done a lot of that with our physical testing. I think what you’re referring to is making your simulation too perfect, and we’re careful to model the things that matter. For example, the simulated sensors that we use have realistic levels of simulated noise and bias in them, the navigation camera images have realistic levels of degradation, we have realistic disturbances from wind gusts. If you don’t properly account for those things, then you’re missing important details. So, we try to be as accurate as we can, and to capture that by overbounding in areas where we have a high degree of uncertainty.
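
A simulated sensor with “realistic levels” of noise and bias can be as simple as the sketch below; overbounding just means deliberately choosing those corruption figures larger than your best estimate of the real hardware’s. All numbers here are invented.

```python
# Corrupting a perfect simulated measurement with a bias and Gaussian noise.
import random

def simulated_altimeter(true_altitude_m: float,
                        bias_m: float = 0.05,       # invented figure
                        noise_std_m: float = 0.03   # overbounded on purpose
                        ) -> float:
    """Altitude measurement as the flight software would actually see it."""
    return true_altitude_m + bias_m + random.gauss(0.0, noise_std_m)
```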

What kinds of simulated challenges have you put the Mars helicopter through, and how do you decide how far to push those challenges?

One example is that we can simulate going over rougher terrain. We can push that, and see how far we can go and still have the helicopter behave the way that we want it to. Or we can inject levels of noise that maybe the real sensors don't see, but you want to just see how far you can push things and make sure that it's still robust.

Where we put the limits on this and what we consider to be realistic is often a challenge. We consider this on a case by case basis—if you have a sensor that you're dealing with, you try to do testing with it to characterize it and understand its performance as much as possible, and you build a level of confidence in it that allows you to find the proper balance.

When it comes to things like terrain roughness, it's a little bit of a different thing, because we're actually picking where we're flying the helicopter. We have made that choice, and we know what the terrain looks like around us, so we don’t have to wonder about that anymore.

Image: NASA/JPL-Caltech/University of Arizona

Satellite image of the Ingenuity flight area.

The way that we’re trying to approach this operationally is that we should be done with the engineering at this point. We’re not depending on going back and resimulating things, other than a few checks here and there.

Are there any examples of things you learned as part of the simulation process that resulted in changes to the hardware or mission?

You know, it’s been a journey. One of the early things that we discovered as part of modeling the helicopter was that the rotor dynamics were quite different for a helicopter on Mars, in particular with respect to how the rotor responds to the up and down bending of the blades because they’re not perfectly rigid. That motion is a very important influence on the overall flight dynamics of the helicopter, and what we discovered as we started modeling was that this motion is damped much less on Mars. Under-damped oscillatory things like that, you kind of figure might pose a control issue, and that is the case here: if you just naively design it as you might a helicopter on Earth, without taking this into account, you could have a system where the response to control inputs becomes very sluggish. So that required changes to the vehicle design from some of the very early concepts, and it led us to make a rotor that’s extremely light and rigid.
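
You can see why low damping worries a control designer from a textbook second-order mode, whose step-response overshoot depends only on the damping ratio ζ, via exp(-πζ/√(1-ζ²)). The values below are illustrative stand-ins; the real rotor analysis is a coupled multi-body problem, not a single mode.

```python
# Step-response overshoot of a second-order mode vs. damping ratio.
import math

def overshoot_pct(zeta: float) -> float:
    """Percent overshoot for an underdamped second-order system."""
    if zeta >= 1.0:
        return 0.0
    return 100.0 * math.exp(-math.pi * zeta / math.sqrt(1.0 - zeta**2))

for label, zeta in [("Earth-like flap damping", 0.7),
                    ("Mars-like flap damping", 0.1)]:
    print(f"{label}: zeta = {zeta:.1f} -> ~{overshoot_pct(zeta):.0f}% overshoot")
```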

The design cycle for the Mars helicopter—it’s not like we could just build something and take it out to the back yard and try it and then come back and tweak it if it doesn’t work. It’s a much bigger effort to build something and develop a test program where you have to use a vacuum chamber to test it. So you really want to get as close as possible up front, on your first iteration, and not have to go back to the drawing board on the basic things.

So how close were you able to get on your first iteration of the helicopter design?

[This video shows] a very early demo which was done more or less just assuming that things were going to behave as they would on Earth, and that we’d be able to fly in a Martian atmosphere just spinning the rotor faster and having a very light helicopter. We were basically just trying to demonstrate that we could produce enough lift. You can see the helicopter hopping around, with someone trying to joystick it, but it turned out to be very hard to control. This was prior to doing any of the modeling that I talked about earlier. But once we started seriously focusing on the modeling and simulation, we then went on to build a prototype vehicle which had a full-size rotor that’s very close to the rotor that will be flying on Mars. One difference is that prototype had cyclic control only on the lower rotor, and later we added cyclic control on the upper rotor as well, and that decision was informed in large part by the work we did in simulation—we’d put in the kinds of disturbances that we thought we might see on Mars, and decided that we needed to have the extra control authority.

How much room do you think there is for improvement in simulation, and how could that help you in the future?

The tools that we have were definitely sufficient for doing the job that we needed to do in terms of building a helicopter that can fly on Mars. But simulation is a compute-intensive thing, and so I think there’s definitely room for higher fidelity simulation if you have the compute power to do so. For a future Mars helicopter, you could get some benefits by more closely coupling together high-fidelity aerodynamic models with larger multi-body models, and doing that in a fast way, where you can iterate quickly. There’s certainly more potential for optimizing things.

Photo: NASA/JPL-Caltech

Ingenuity preparing for flight.

Watching Ingenuity’s first flight take place will likely be much like watching the Perseverance landing—we’ll be able to follow along with the Ingenuity team while they send commands to the helicopter and receive data back, although the time delay will mean that any kind of direct control won’t be possible. If everything goes the way it’s supposed to, there will hopefully be some preliminary telemetry from Ingenuity saying so, but it sounds like we’ll likely have to wait until April 12 before we get pictures or video of the flight itself.

Because Mars doesn’t care what time it is on Earth, the flight will actually be taking place very early on April 12, with the JPL Mission Control livestream starting at 3:30 a.m. EDT (12:30 a.m. PDT).

Posted in Human Robots

#438553 New Drone Software Handles Motor ...

Good as some drones are becoming at obstacle avoidance, accidents do still happen. And as far as robots go, drones are very much on the fragile side of things. Any sort of significant contact between a drone and almost anything else usually results in a catastrophic, out-of-control spin followed by a death plunge to the ground. Bad times. Bad, expensive times.

A few years ago, we saw some interesting research into software that can keep the most common drone form factor, the quadrotor, aloft and controllable even after the failure of one motor. The big caveat to that software was that it relied on GPS for state estimation, meaning that without a GPS signal, the drone couldn’t get the information it needs to keep itself under control. In a paper recently accepted to IEEE Robotics and Automation Letters (RA-L), researchers at the University of Zurich report that they have developed a vision-based system that brings state estimation completely on board. The upshot: potentially any drone with some software and a camera can keep itself safe even under the most challenging conditions.

A few years ago, we wrote about first author Sihao Sun’s work on high-speed controlled flight of a quadrotor with a non-functional motor. But that innovation relied on an external motion capture system. Since then, Sun has moved from TU Delft to Davide Scaramuzza’s lab at UZH, and it looks like he’s been able to combine his work on controlled spinning flight with the Robotics and Perception Group’s expertise in vision. Now, a downward-facing camera is all it takes for a spinning drone to remain stable and controllable:

Remember, this software isn’t just about guarding against motor failure. Drone motors don’t often just up and fail on their own, in either hardware or software. But they do represent the most likely point of failure for any drone, because when you run into something, what ultimately causes your drone to crash is usually damage to a motor or a propeller that leads to loss of control.

The reason earlier solutions relied on GPS is that a spinning drone needs some method of state estimation—that is, to be closed-loop controllable, the drone needs a reasonable understanding of its position and how that position is changing over time. GPS is an easy way to take care of this, but GPS is also an external system that doesn’t work everywhere. A state estimation system that’s completely internal to the drone itself is much more fail-safe, and Sun got his onboard system to work through visual feature tracking with a downward-facing camera, even as the drone spins at over 20 rad/s.
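
The building block here, tracking visual features from frame to frame of a downward-facing camera, is standard fare, and a minimal version fits in a few lines of OpenCV. The sketch below only computes mean image motion between two frames; the actual system fuses such measurements with inertial data into a full state estimate, which is the hard part at 20 rad/s.

```python
# Minimal feature tracking between two grayscale frames with OpenCV's
# Lucas-Kanade optical flow. Returns mean pixel motion, a crude proxy
# for camera motion. Illustrative only.
import cv2
import numpy as np

def mean_image_motion(prev_gray: np.ndarray, gray: np.ndarray):
    corners = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                      qualityLevel=0.01, minDistance=8)
    if corners is None:
        return None
    moved, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, gray,
                                                   corners, None)
    ok = status.flatten() == 1
    if not ok.any():
        return None
    return (moved[ok] - corners[ok]).reshape(-1, 2).mean(axis=0)
```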

While the system works well enough with a regular downward-facing camera—something that many consumer drones are equipped with for stabilization purposes—replacing it with an event camera (you remember event cameras, right?) makes the performance even better, especially in low light.

For more details on this, including what you’re supposed to do with a rapidly spinning partially disabled quadrotor (as well as what it’ll take to make this a standard feature on consumer hardware), we spoke with Sihao Sun via email.

IEEE Spectrum: What usually happens when a drone spinning this fast lands? Is there any way to do it safely?

Sihao Sun: Our experience shows that we can safely land the drone while it is spinning. When the range sensor measurements are lower than a threshold (around 10 cm, indicating that the drone is close to the ground), we switch off the rotors. During the landing procedure, despite the fast spinning motion, the thrust direction oscillates around the gravity vector, thus the drone touches the ground with its legs without damaging other components.
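
As control logic, that landing rule is refreshingly simple. A minimal sketch, with the ~10 cm threshold taken from Sun’s answer and everything else (names, structure) invented:

```python
# Cut the motors once the range sensor says the ground is ~10 cm away,
# and let the spinning vehicle settle onto its legs.
CUTOFF_RANGE_M = 0.10

def update_motor_state(range_to_ground_m: float, motors_on: bool) -> bool:
    """One tick of the landing logic; returns the new motor on/off state."""
    if motors_on and range_to_ground_m < CUTOFF_RANGE_M:
        return False  # switch rotors off just above the ground
    return motors_on
```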

Can your system handle more than one motor failure?

Yes, the system can also handle the failure of two opposing rotors. However, if two adjacent rotors or more than two rotors fail, our method cannot save the quadrotor. Some research has shown that it is possible to control a quadrotor with only one remaining rotor. But the drone requires a very special inertial property, which is hard to satisfy in real applications.

How different is your system's performance from a similar system that relies on GPS, in a favorable environment?

In a favorable environment, our system outperforms those relying on GPS signals because it obtains better position estimates. Since a damaged quadrotor spins fast, the accelerometer readings are largely affected by centrifugal forces. When the GPS signal is lost or degraded, a drone relying on GPS needs to integrate these biased accelerometer measurements for position estimation, leading to large position estimation errors. Feeding these erroneous estimates to the flight controller can easily crash the drone.
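
The failure mode Sun describes compounds quadratically: a constant accelerometer bias b, integrated twice, produces a position error of 0.5·b·t². With centrifugal forces from a fast spin inflating that bias, the numbers get ugly quickly; the bias magnitude below is an assumed value.

```python
# Position error from double-integrating a constant accelerometer bias:
# error(t) = 0.5 * b * t^2. The bias magnitude is illustrative.
b = 0.5  # residual accelerometer bias, m/s^2
for t in (1, 5, 10, 30):
    print(f"after {t:2d} s: position error ~ {0.5 * b * t**2:6.1f} m")
```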

When you say that your solution requires “only onboard sensors and computation,” are those requirements specialized, or would they be generally compatible with the current generation of recreational and commercial quadrotors?

We use an NVIDIA Jetson TX2 to run our solution, which includes two parts: the control algorithm and the vision-based state estimation algorithm. The control algorithm is lightweight; thus, we believe that it is compatible with the current generation of quadrotors. On the other hand, the vision-based state estimation requires relatively more computational resources, which may not be affordable for cheap recreational platforms. But this is not an issue for commercial quadrotors because many of them have more powerful processors than a TX2.

What else can event cameras be used for, in recreational or commercial applications?

Many drone applications can benefit from event cameras, especially those in high-speed or low-light conditions, such as autonomous drone racing, cave exploration, and drone delivery at night. Event cameras also consume very little power, which is a significant advantage for energy-critical missions, such as planetary aerial vehicles for Mars exploration. Regarding space applications, we are currently collaborating with JPL to explore the use of event cameras to address the key limitations of standard cameras for the next Mars helicopter.

[ UZH RPG ]

Posted in Human Robots

#437460 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
A Radical New Technique Lets AI Learn With Practically No Data
Karen Hao | MIT Technology Review
“Shown photos of a horse and a rhino, and told a unicorn is something in between, [children] can recognize the mythical creature in a picture book the first time they see it. …Now a new paper from the University of Waterloo in Ontario suggests that AI models should also be able to do this—a process the researchers call ‘less than one’-shot, or LO-shot, learning.”

FUTURE
Artificial General Intelligence: Are We Close, and Does It Even Make Sense to Try?
Will Douglas Heaven | MIT Technology Review
“A machine that could think like a person has been the guiding vision of AI research since the earliest days—and remains its most divisive idea. …So why is AGI controversial? Why does it matter? And is it a reckless, misleading dream—or the ultimate goal?”

HEALTH
The Race for a Super-Antibody Against the Coronavirus
Apoorva Mandavilli | The New York Times
“Dozens of companies and academic groups are racing to develop antibody therapies. …But some scientists are betting on a dark horse: Prometheus, a ragtag group of scientists who are months behind in the competition—and yet may ultimately deliver the most powerful antibody.”

SPACE
How to Build a Spacecraft to Save the World
Daniel Oberhaus | Wired
“The goal of the Double Asteroid Redirection Test, or DART, is to slam the [spacecraft] into a small asteroid orbiting a larger asteroid 7 million miles from Earth. …It should be able to change the asteroid’s orbit just enough to be detectable from Earth, demonstrating that this kind of strike could nudge an oncoming threat out of Earth’s way. Beyond that, everything is just an educated guess, which is exactly why NASA needs to punch an asteroid with a robot.”

TRANSPORTATION
Inside Gravity’s Daring Mission to Make Jetpacks a Reality
Oliver Franklin-Wallis | Wired
“The first time someone flies a jetpack, a curious thing happens: just as their body leaves the ground, their legs start to flail. …It’s as if the vestibular system can’t quite believe what’s happening. This isn’t natural. Then suddenly, thrust exceeds weight, and—they’re aloft. …It’s that moment, lift-off, that has given jetpacks an enduring appeal for over a century.”

FUTURE OF FOOD
Inside Singapore’s Huge Bet on Vertical Farming
Megan Tatum | MIT Technology Review
“…to cram all [of Singapore’s] gleaming towers and nearly 6 million people into a land mass half the size of Los Angeles, it has sacrificed many things, including food production. Farms make up no more than 1% of its total land (in the United States it’s 40%), forcing the small city-state to shell out around $10 billion each year importing 90% of its food. Here was an example of technology that could change all that.”

COMPUTING
The Effort to Build the Mathematical Library of the Future
Kevin Hartnett | Quanta
“Digitizing mathematics is a longtime dream. The expected benefits range from the mundane—computers grading students’ homework—to the transcendent: using artificial intelligence to discover new mathematics and find new solutions to old problems.”

Image credit: Kevin Mueller / Unsplash

Posted in Human Robots