#435662 Video Friday: This 3D-Printed ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
Let us know if you have suggestions for next week, and enjoy today’s videos.

We’re used to seeing bristle bots about the size of a toothbrush head (which is not a coincidence), but Georgia Tech has downsized them, with some interesting benefits.

Researchers have created a new type of tiny 3D-printed robot that moves by harnessing vibration from piezoelectric actuators, ultrasound sources or even tiny speakers. Swarms of these “micro-bristle-bots” might work together to sense environmental changes, move materials – or perhaps one day repair injuries inside the human body.

The prototype robots respond to different vibration frequencies depending on their configurations, allowing researchers to control individual bots by adjusting the vibration. Approximately two millimeters long – about the size of the world’s smallest ant – the bots can cover four times their own length in a second despite the physical limitations of their small size.

“We are working to make the technology robust, and we have a lot of potential applications in mind,” said Azadeh Ansari, an assistant professor in the School of Electrical and Computer Engineering at the Georgia Institute of Technology. “We are working at the intersection of mechanics, electronics, biology and physics. It’s a very rich area and there’s a lot of room for multidisciplinary concepts.”

[ Georgia Tech ]

Most consumer drones are “multi-copters,” meaning that they have a series of rotors or propellers that allow them to hover like helicopters. But having rotors severely limits their energy efficiency, which means that they can’t easily carry heavy payloads or fly for long periods of time. To get the best of both worlds, drone designers have tried to develop “hybrid” fixed-wing drones that can fly as efficiently as airplanes, while still taking off and landing vertically like multi-copters.

These drones are extremely hard to control because of the complexity of dealing with their flight dynamics, but a team from MIT CSAIL aims to make the customization process easier, with a new system that allows users to design drones of different sizes and shapes that can nimbly switch between hovering and gliding – all by using a single controller.

In future work, the team plans to try to further increase the drone’s maneuverability by improving its design. The model doesn’t yet fully take into account complex aerodynamic effects between the propeller’s airflow and the wings. And lastly, their method trained the copter with “yaw velocity” set at zero, which means that it cannot currently perform sharp turns.

[ Paper ] via [ MIT ]

We’re not quite at the point where we can 3D print entire robots, but UCSD is getting us closer.

The UC San Diego researchers’ insight was twofold. They turned to a commercially available printer for the job (the Stratasys Objet350 Connex3, a workhorse in many robotics labs). In addition, they realized that one of the materials used by the 3D printer is made of carbon particles that can conduct electricity when connected to a power source. So the roboticists used the black resin to manufacture complex sensors embedded within robotic parts made of clear polymer. They designed and manufactured several prototypes, including a gripper.

When stretched, the sensors failed at approximately the same strain as human skin. But the polymers the 3D printer uses are not designed to conduct electricity, so their performance is not optimal. The 3D printed robots also require a lot of post-processing before they can be functional, including careful washing to clean up impurities and drying.

However, researchers remain optimistic that in the future, materials will improve and make 3D printed robots equipped with embedded sensors much easier to manufacture.

[ UCSD ]

Congrats to Team Homer from the University of Koblenz-Landau, who won the RoboCup@Home world championship in Sydney!

[ Team Homer ]

When you’ve got a robot with both wheels and legs, motion planning is complicated. IIT has developed a new planner for CENTAURO that takes advantage of the different ways that the robot is able to get past obstacles.

[ Centauro ]

Thanks Dimitrios!

If you constrain a problem tightly enough, you can solve it even with a relatively simple robot. Here’s an example of an experimental breakfast robot named “Loraine” that can cook eggs, bacon, and potatoes using what looks to be zero sensing at all, just moving to different positions and actuating its gripper.

There’s likely to be enough human work required in the prep here to make the value that the robot adds questionable at best, but it’s a good example of how you can make a relatively complex task robot-compatible as long as you set it up in just the right way.

[ Connected Robotics ] via [ RobotStart ]

It’s been a while since we’ve seen a ball bot, and I’m not sure that I’ve ever seen one with a manipulator on it.

[ ETH Zurich RSL ]

Soft Robotics’ new mini fingers are able to pick up taco shells without shattering them, which as far as I can tell is 100 percent impossible for humans to do.

[ Soft Robotics ]

Yes, Starship’s wheeled robots can climb curbs, and indeed they have a pretty neat way of doing it.

[ Starship ]

Last year we posted a long interview with Christoph Bartneck about his research into robots and racism, and here’s a nice video summary of the work.

[ Christoph Bartneck ]

Canada’s contribution to the Lunar Gateway will be a smart robotic system that includes a next-generation robotic arm known as Canadarm3, as well as equipment and specialized tools. Using cutting-edge software and advances in artificial intelligence, this highly autonomous system will be able to maintain, repair, and inspect the Gateway, capture visiting vehicles, relocate Gateway modules, help astronauts during spacewalks, and enable science both in lunar orbit and on the surface of the Moon.

[ CSA ]

An interesting demo of how Misty can integrate sound localization with other services.

[ Misty Robotics ]

The third and last period of H2020 AEROARMS project has brought the final developments in industrial inspection and maintenance tasks, such as the crawler retrieval and deployment (DLR) or the industrial validation in stages like a refinery or a cement factory.

[ Aeroarms ]

The Guardian S remote visual inspection and surveillance robot navigates a disaster training site to demonstrate its advanced maneuverability, long-range wireless communications and extended run times.

[ Sarcos ]

This appears to be a cake frosting robot and I wish I had like 3 more hours of this to share:

Also here is a robot that picks fried chicken using a curiously successful technique:

[ Kazumichi Moriyama ]

This isn’t strictly robots, but professor Hiroshi Ishii, associate director of the MIT Media Lab, gave a fascinating SIGCHI Lifetime Achievement Talk that’s absolutely worth your time.

[ Tangible Media Group ]


#435658 Video Friday: A Two-Armed Robot That ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
Let us know if you have suggestions for next week, and enjoy today’s videos.

I’m sure you’ve seen this video already because you read this blog every day, but if you somehow missed it because you were skiing across Antarctica (the only valid excuse we’re accepting today), here’s our video introducing HMI’s Aquanaut transforming robot submarine.

And after you recover from all that frostbite, make sure to read our in-depth feature article here.

[ Aquanaut ]

Last week we complained about not having seen a ballbot with a manipulator, so Roberto from CMU shared a new video of their ballbot, featuring a pair of 7-DoF arms.

We should learn more at Humanoids 2019.

[ CMU ]

Thanks Roberto!

The FAA is making it easier for recreational drone pilots to get near-realtime approval to fly in lightly controlled airspace.

[ LAANC ]

Self-reconfigurable modular robots are usually composed of multiple modules with uniform docking interfaces that can be transformed into different configurations by themselves. The reconfiguration planning problem is finding what sequence of reconfiguration actions is required to transform one arrangement of modules into another. We present a novel reconfiguration planning algorithm for modular robots. The algorithm compares the initial configuration with the goal configuration efficiently. The reconfiguration actions can be executed in a distributed manner, so that each module can efficiently finish its reconfiguration task, which results in a global reconfiguration for the system. Finally, the algorithm is demonstrated on real modular robots and some example reconfiguration tasks are provided.
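
The CKbot planner itself is more sophisticated (it has to respect connectivity and docking constraints), but the core idea of diffing two configurations can be sketched in a few lines of Python. This toy example, with made-up module names and lattice coordinates, simply compares an initial and a goal arrangement and lists which modules need to relocate; it is an illustration of the concept, not the team's actual algorithm.

```python
# Toy configuration diff for a lattice-style modular robot.
# Illustrative sketch only -- not the CKbot planner shown in the video.

def plan_reconfiguration(initial, goal):
    """initial, goal: dicts mapping module_id -> (x, y, z) lattice cell."""
    occupied_now = set(initial.values())
    occupied_goal = set(goal.values())

    cells_to_vacate = occupied_now - occupied_goal   # cells that must be emptied
    cells_to_fill = sorted(occupied_goal - occupied_now)  # cells that must be occupied

    # Modules sitting on cells that are not part of the goal shape must move.
    movers = [m for m, cell in initial.items() if cell in cells_to_vacate]

    # Greedily pair each mover with a cell that still needs filling; each pair
    # could then be handed to the module itself for distributed execution.
    return list(zip(movers, cells_to_fill))


if __name__ == "__main__":
    initial = {"A": (0, 0, 0), "B": (1, 0, 0), "C": (2, 0, 0)}
    goal    = {"A": (0, 0, 0), "B": (1, 0, 0), "C": (1, 1, 0)}
    print(plan_reconfiguration(initial, goal))
    # [('C', (1, 1, 0))] -> module C relocates; A and B stay put.
```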

[ CKbot ]

A nice design of a gripper that uses a passive thumb of sorts to pick up flat objects from flat surfaces.

[ Paper ] via [ Laval University ]

I like this video of a palletizing robot from Kawasaki because in the background you can see a human doing the exact same job and obviously not enjoying it.

[ Kawasaki ]

This robot cleans and “brings joy and laughter.” What else do we need?

I do appreciate that all the robots are named Leo, and that they’re also all female.

[ LionsBot ]

This is less of a dishwashing robot and more of a dishsorting robot, but we’ll forgive it because it doesn’t drop a single dish.

[ TechMagic ]

Thanks Ryosuke!

A slight warning here that the robot in the following video (which costs something like $180,000) appears “naked” in some scenes, none of which are strictly objectionable, we hope.

Beautifully slim and delicate life-size motion figures are ideal avatars for expressing emotions to customers in various arts, content, and businesses. We can provide a system that integrates not only motion figures but all moving devices.

[ Speecys ]

The best way to operate a Husky with a pair of manipulators on it is to become the robot.

[ UT Austin ]

The FlyJacket drone control system from EPFL has been upgraded so that it can yank you around a little bit.

In several fields of human-machine interaction, haptic guidance has proven to be an effective training tool for enhancing user performance. This work presents the results of psychophysical and motor learning studies that were carried out with human participants to assess the effect of cable-driven haptic guidance for a task involving aerial robotic teleoperation. The guidance system was integrated into an exosuit, called the FlyJacket, that was developed to control drones with torso movements. Results for the Just Noticeable Difference (JND) and from the Stevens Power Law suggest that the perception of force on the users’ torso scales linearly with the amplitude of the force exerted through the cables and that the perceived force is close to the magnitude of the stimulus. Motor learning studies reveal that this form of haptic guidance improves user performance in training, but this improvement is not retained when participants are evaluated without guidance.
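
For readers curious what fitting Stevens' power law actually involves, here's a minimal sketch with invented data (the numbers are not from the FlyJacket study). It estimates the exponent relating applied cable force to perceived force; an exponent near 1 corresponds to the linear perception the abstract describes.

```python
# Fit Stevens' power law  P = k * S**a  to (stimulus, perceived) force pairs.
# Data invented for illustration only.
import numpy as np

stimulus = np.array([1.0, 2.0, 4.0, 6.0, 8.0])    # cable force applied, N
perceived = np.array([1.1, 2.0, 3.9, 6.2, 7.8])   # reported magnitude

# Taking logs turns the power law into a line: log P = a*log S + log k
a, log_k = np.polyfit(np.log(stimulus), np.log(perceived), 1)
print(f"exponent a = {a:.2f}, scale k = {np.exp(log_k):.2f}")
# a close to 1.0 means perceived force grows linearly with applied force.
```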

[ EPFL ]

The SAND Challenge is an opportunity for small businesses to compete in an autonomous unmanned aerial vehicle (UAV) competition to help NASA address safety-critical risks associated with flying UAVs in the national airspace. Set in a post-natural disaster scenario, SAND will push the envelope of aviation.

[ NASA ]

Legged robots have the potential to traverse diverse and rugged terrain. To find a safe and efficient navigation path and to carefully select individual footholds, it is useful to predict properties of the terrain ahead of the robot. In this work, we propose a method to collect data from robot-terrain interaction and associate it with images, and then train a neural network to predict terrain properties from images.
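
The video doesn't show code, but the image-to-terrain-property idea boils down to supervised regression. Below is a minimal PyTorch sketch; the architecture, patch size, and the idea of predicting a single scalar (say, friction or sinkage) are assumptions for illustration, not the RSL system.

```python
# Minimal image -> terrain-property regressor, sketched with PyTorch.
# Architecture and labels are illustrative; the real system is more involved.
import torch
import torch.nn as nn

class TerrainNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # e.g. a friction or sinkage estimate

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = TerrainNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One training step on a dummy batch of 64x64 image patches whose labels
# would come from logged robot-terrain interaction (random here).
images = torch.rand(8, 3, 64, 64)
labels = torch.rand(8, 1)
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print(f"training loss: {loss.item():.4f}")
```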

[ RSL ]

Misty wants to be your new receptionist.

[ Misty Robotics ]

For years, we’ve been pointing out that while new Roombas have lots of great features, older Roombas still do a totally decent job of cleaning your floors. This video is a performance comparison between the newest Roomba (the S9+) and the original 2002 Roomba (!), and the results will surprise you. Or maybe they won’t.

[ Vacuum Wars ]

Lex Fridman from MIT interviews Chris Urmson, who was involved in some of the earliest autonomous vehicle projects, Google’s original self-driving car among them, and is currently CEO of Aurora Innovation.

Chris Urmson was the CTO of the Google Self-Driving Car team, a key engineer and leader behind the Carnegie Mellon autonomous vehicle entries in the DARPA Grand Challenges, and the winner of the DARPA Urban Challenge. Today he is the CEO of Aurora Innovation, an autonomous vehicle software company he started with Sterling Anderson, the former director of Tesla Autopilot, and Drew Bagnell, Uber’s former autonomy and perception lead.

[ AI Podcast ]

In this week’s episode of Robots in Depth, Per speaks with Lael Odhner from RightHand Robotics.

Lael Odhner is a co-founder of RightHand Robotics, which is developing a gripper based on the combination of control and soft, compliant parts to get better grasping of objects. Their work focuses on grasping and manipulating everyday human objects in everyday environments. This mimics how human hands combine control and flexibility to grasp objects with great dexterity.

The combination of control and compliance makes the RightHand Robotics gripper very lightweight and affordable. The compliance makes it easier to grasp objects of unknown shape and differs from the way industrial robots usually grip. The compliance also helps in more unstructured environments, where contact with the object and its surroundings cannot be exactly predicted.

[ RightHand Robotics ] via [ Robots in Depth ]


#435656 Will AI Be Fashion Forward—or a ...

The narrative that often accompanies most stories about artificial intelligence these days is how machines will disrupt any number of industries, from healthcare to transportation. It makes sense. After all, technology already drives many of the innovations in these sectors of the economy.

But sneakers and the red carpet? The decidedly low-tech fashion industry would seem to be one of the last to turn over its creative direction to data scientists and machine learning algorithms.

However, big brands, e-commerce giants, and numerous startups are betting that AI can ingest data and spit out Chanel. Maybe it’s not surprising, given that fashion is partly about buzz and trends—and there’s nothing more buzzy and trendy in the world of tech today than AI.

In its annual survey of the $3 trillion fashion industry, consulting firm McKinsey predicted that while AI didn’t hit a “critical mass” in 2018, it would increasingly influence the business of everything from design to manufacturing.

“Fashion as an industry really has been so slow to understand its potential roles interwoven with technology. And, to be perfectly honest, the technology doesn’t take fashion seriously.” This comment comes from Zowie Broach, head of fashion at London’s Royal College of Art, who as a self-described “old fashioned” designer has embraced the disruptive nature of technology—with some caveats.

Co-founder in the late 1990s of the avant-garde fashion label Boudicca, Broach has always seen tech as a tool for designers, even setting up a website for the company circa 1998, way before an online presence became, well, fashionable.

Broach told Singularity Hub that while she is generally optimistic about the future of technology in fashion—the designer has avidly been consuming old sci-fi novels over the last few years—there are still a lot of difficult questions to answer about the interface of algorithms, art, and apparel.

For instance, can AI do what the great designers of the past have done? Fashion was “about designing, it was about a narrative, it was about meaning, it was about expression,” according to Broach.

AI that designs products based on data gleaned from human behavior can potentially tap into the Pavlovian response in consumers in order to make money, Broach noted. But is that channeling creativity, or just digitally dabbling in basic human brain chemistry?

She is concerned about people retaining control of the process, whether we’re talking about their data or their designs. But being empowered with the insights machines could provide into, for example, the geographical nuances of fashion between Dubai, Moscow, and Toronto is thrilling.

“What is it that we want the future to be from a fashion, an identity, and design perspective?” she asked.

Off on the Right Foot
Silicon Valley and some of the biggest brands in the industry offer a few answers about where AI and fashion are headed (though not at the sort of depths that address Broach’s broader questions of aesthetics and ethics).

Take what is arguably the biggest brand in fashion, at least by market cap but probably not by the measure of appearances on Oscar night: Nike. The $100 billion shoe company just gobbled up an AI startup called Celect to bolster its data analytics and optimize its inventory. In other words, Nike hopes it will be able to figure out what’s hot and what’s not in a particular location to stock its stores more efficiently.

The company is going even further with Nike Fit, a foot-scanning platform using a smartphone camera that applies AI techniques from fields like computer vision and machine learning to find the best fit for each person’s foot. The algorithms then identify and recommend the appropriately sized and shaped shoe in different styles.

No doubt the next step will be to 3D print personalized and on-demand sneakers at any store.

San Francisco-based startup ThirdLove is trying to bring a similar approach to bra sizes. Its 20-member data team, Fortune reported, has developed the Fit Finder quiz that uses machine learning algorithms to help pick just the right garment for every body type.

Data scientists are also a big part of the team at Stitch Fix, a former San Francisco startup that went public in 2017 and today sports a market cap of more than $2 billion. The online “personal styling” company uses hundreds of algorithms to not only make recommendations to customers, but to help design new styles and even manage the subscription-based supply chain.

Future of Fashion
E-commerce giant Amazon has thrown its own considerable resources into developing AI applications for retail fashion—with mixed results.

One notable attempt involved a “styling assistant” that came with the company’s Echo Look camera and helped people catalog and manage their wardrobes, even helping pick out each day’s attire. The company more recently revisited the direct-to-consumer side of AI with an app called StyleSnap, which matches clothes and accessories uploaded to the site with the retailer’s vast inventory and recommends similar styles.

Behind the curtains, Amazon is going even further. A team of researchers in Israel has developed algorithms that can deduce whether a particular look is stylish based on a few labeled images. Another group at the company’s San Francisco research center was working on tech that could generate new designs of items based on images of a particular style the algorithms were trained on.

“I will say that the accumulation of many new technologies across the industry could manifest in a highly specialized style assistant, far better than the examples we’ve seen today. However, the most likely thing is that the least sexy of the machine learning work will become the most impactful, and the public may never hear about it.”

That prediction is from an online interview with Leanne Luce, a fashion technology blogger and product manager at Google who recently wrote a book called, succinctly enough, Artificial Intelligence and Fashion.

Data Meets Design
Academics are also sticking their beakers into AI and fashion. Researchers at the University of California, San Diego, and Adobe Research have previously demonstrated that neural networks, a type of AI designed to mimic some aspects of the human brain, can be trained to generate (i.e., design) new product images to match a buyer’s preference, much like the team at Amazon.

Meanwhile, scientists at Hong Kong Polytechnic University are working with China’s answer to Amazon, Alibaba, on developing a FashionAI Dataset to help machines better understand fashion. The effort will focus on how algorithms approach certain building blocks of design, what are called “key points” such as neckline and waistline, and “fashion attributes” like collar types and skirt styles.
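
In code terms, one annotation in a dataset like this reduces to a handful of named key-point coordinates plus categorical attributes per garment image. Here's a rough sketch of what such a record might look like; the field names and values are illustrative guesses, not the FashionAI dataset's actual schema.

```python
# Rough sketch of a fashion-annotation record: key points plus attributes.
# Field names and values are illustrative, not the real FashionAI schema.
from dataclasses import dataclass, field
from typing import Dict, Tuple

@dataclass
class GarmentAnnotation:
    image_path: str
    # "Key points": named landmarks with (x, y) pixel coordinates.
    keypoints: Dict[str, Tuple[int, int]] = field(default_factory=dict)
    # "Fashion attributes": categorical design labels.
    attributes: Dict[str, str] = field(default_factory=dict)

example = GarmentAnnotation(
    image_path="images/dress_00042.jpg",
    keypoints={"neckline_left": (212, 80), "neckline_right": (298, 82),
               "waistline_left": (190, 310), "waistline_right": (320, 312)},
    attributes={"collar_type": "v-neck", "skirt_style": "a-line"},
)
print(example.attributes["collar_type"])
```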

The man largely behind the university’s research team is Calvin Wong, a professor and associate head of Hong Kong Polytechnic University’s Institute of Textiles and Clothing. His group has also developed an “intelligent fabric defect detection system” called WiseEye for quality control, reducing the chance of producing substandard fabric by 90 percent.

Wong and company also recently inked an agreement with RCA to establish an AI-powered design laboratory, though the details of that venture have yet to be worked out, according to Broach.

One hope is that such collaborations will not just get at the technological challenges of using machines in creative endeavors like fashion, but will also address the more personal relationships humans have with their machines.

“I think who we are, and how we use AI in fashion, as our identity, is not a superficial skin. It’s very, very important for how we define our future,” Broach said.

Image Credit: Inspirationfeed / Unsplash


#435597 Water Jet Powered Drone Takes Off With ...

At ICRA 2015, the Aerial Robotics Lab at Imperial College London presented a concept for a multimodal flying and swimming robot called AquaMAV. The really difficult thing about a flying and swimming robot isn’t so much the transition from the first to the second, since you can manage that even if your robot is completely dead (thanks to gravity), but rather the other way: going from water to air, ideally in a stable and repeatable way. The AquaMAV concept solved this by basically just applying as much concentrated power as possible to the problem, using a jet thruster to hurl the robot out of the water with quite a bit of velocity to spare.

In a paper appearing in Science Robotics this week, the roboticists behind AquaMAV present a fully operational robot that uses a solid-fuel powered chemical reaction to generate an explosion that powers the robot into the air.

The 2015 version of AquaMAV, which was mostly just some very vintage-looking computer renderings and a little bit of hardware, used a small cylinder of CO2 to power its water jet thruster. This worked pretty well, but the mass and complexity of the storage and release mechanism for the compressed gas weren’t all that practical for a flying robot designed for long-term autonomy. It’s a familiar challenge, especially for pneumatically powered soft robots: how do you efficiently generate gas on demand, especially if you need a lot of pressure all at once?

An explosion propels the drone out of the water
There’s one obvious way of generating large amounts of pressurized gas all at once, and that’s explosions. We’ve seen robots use explosive thrust for mobility before, at a variety of scales, and it’s very effective as long as you can both properly harness the explosion and generate the fuel with a minimum of fuss, and this latest version of AquaMAV manages to do both:

The water jet coming out of the back of this robot aircraft is propelled by a gas explosion. The gas comes from the reaction between a little bit of calcium carbide powder stored inside the robot and water. Water is mixed with the powder one drop at a time, producing acetylene gas, which is piped into a combustion chamber along with air and water. When ignited, the acetylene-air mixture explodes, forcing the water out of the combustion chamber and providing up to 51 N of thrust, which is enough to launch the 160-gram robot 26 meters up and over the water at 11 m/s. It takes just 50 mg of calcium carbide (mixed with 3 drops of water) to generate enough acetylene for each explosion, and both air and water are, of course, readily available. With 0.2 g of calcium carbide powder on board, the robot has enough fuel for multiple jumps, and the jump is powerful enough that the robot can get airborne even under fairly aggressive sea conditions.
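
Those numbers are fun to sanity-check. Here's a quick back-of-the-envelope sketch using only the figures quoted above, plus the standard CaC2 + 2 H2O → C2H2 + Ca(OH)2 stoichiometry (our assumption, not something stated in the paper excerpt).

```python
# Back-of-envelope numbers for the AquaMAV jump, using figures from the text.
thrust_n = 51.0          # peak thrust, N
mass_kg = 0.160          # robot mass, kg
g = 9.81

thrust_to_weight = thrust_n / (mass_kg * g)
print(f"thrust-to-weight ratio: {thrust_to_weight:.1f}")   # ~32, plenty for a water exit

# CaC2 + 2 H2O -> C2H2 + Ca(OH)2  (standard stoichiometry, assumed here)
carbide_g = 0.050        # 50 mg of calcium carbide per shot
molar_mass_cac2 = 64.1   # g/mol
mol_acetylene = carbide_g / molar_mass_cac2
litres_at_stp = mol_acetylene * 22.4
print(f"acetylene per shot: ~{mol_acetylene*1000:.2f} mmol "
      f"(~{litres_at_stp*1000:.0f} mL of gas at STP)")

# Kinetic energy at the quoted 11 m/s exit velocity
v = 11.0
print(f"kinetic energy at launch: ~{0.5*mass_kg*v**2:.1f} J")
```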

Image: Science Robotics

The robot can transition from a floating state to an airborne jetting phase and back to floating (A). A 3D model render of the underside of the robot (B) shows the electronics capsule. The capsule contains the fuel tank (C), where calcium carbide reacts with air and water to propel the vehicle.

Next step: getting the robot to fly autonomously
Providing adequate thrust is just one problem that needs to be solved when attempting to conquer the water-air transition with a fixed-wing robot. The overall design of the robot itself is a challenge as well, because the optimal design and balance for the robot is quite different in each phase of operation, as the paper describes:

For the vehicle to fly in a stable manner during the jetting phase, the center of mass must be a significant distance in front of the center of pressure of the vehicle. However, to maintain a stable floating position on the water surface and the desired angle during jetting, the center of mass must be located behind the center of buoyancy. For the gliding phase, a fine balance between the center of mass and the center of pressure must be struck to achieve static longitudinal flight stability passively. During gliding, the center of mass should be slightly forward from the wing’s center of pressure.
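
Those three constraints are easy to express as a toy check. The sketch below measures positions as distances from the nose along the fuselage (smaller means further forward); all the numbers and the 2 cm "slightly forward" margin are invented purely to illustrate the orderings the paper describes.

```python
# Toy check of the AquaMAV balance constraints described in the paper.
# Positions are metres from the nose along the fuselage; values are invented.

def check_balance(com, cop_body, cob, cop_wing):
    return {
        # Jetting: centre of mass well ahead of the vehicle's centre of pressure.
        "jetting":  com < cop_body,
        # Floating: centre of mass behind the centre of buoyancy.
        "floating": com > cob,
        # Gliding: centre of mass slightly ahead of the wing's centre of pressure.
        "gliding":  0.0 < (cop_wing - com) < 0.02,
    }

print(check_balance(com=0.105, cop_body=0.180, cob=0.095, cop_wing=0.115))
# {'jetting': True, 'floating': True, 'gliding': True}
```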

The current version is mostly optimized for the jetting phase of flight, and doesn’t have any active flight control surfaces yet, but the researchers are optimistic that if they added some they’d have no problem getting the robot to fly autonomously. It’s just a glider at the moment, but a low-power propeller is the obvious step after that, and to get really fancy, a switchable gearbox could enable efficient movement on water as well as in the air. Long-term, the idea is that robots like these would be useful for tasks like autonomous water sampling over large areas, but I’d personally be satisfied with a remote controlled version that I could take to the beach.

“Consecutive aquatic jump-gliding with water-reactive fuel,” by R. Zufferey, A. Ortega Ancel, A. Farinha, R. Siddall, S. F. Armanini, M. Nasr, R. V. Brahmal, G. Kennedy, and M. Kovac from Imperial College in London, is published in the current issue of Science Robotics.


#435106 Could Artificial Photosynthesis Help ...

Plants are the planet’s lungs, but they’re struggling to keep up due to rising CO2 emissions and deforestation. Engineers are giving them a helping hand, though, by augmenting their capacity with new technology and creating artificial substitutes to help them clean up our atmosphere.

Imperial College London, one of the UK’s top engineering schools, recently announced that it was teaming up with startup Arborea to build the company’s first outdoor pilot of its BioSolar Leaf cultivation system at the university’s White City campus in West London.

Arborea is developing large solar panel-like structures that house microscopic plants and can be installed on buildings or open land. The plants absorb light and carbon dioxide as they photosynthesize, removing greenhouse gases from the air and producing organic material, which can be processed to extract valuable food additives like omega-3 fatty acids.

The idea of growing algae to produce useful materials isn’t new, but Arborea’s pitch seems to be flexibility and affordability. The more conventional approach is to grow algae in open ponds, which are less efficient and open to contamination, or in photo-bioreactors, which typically require CO2 to be piped in rather than getting it from the air and can be expensive to run.

There’s little detail on how the technology deals with issues like nutrient supply and harvesting or how efficient it is. The company claims it can remove carbon dioxide as fast as 100 trees using the surface area of just a single tree, but there’s no published research to back that up, and it’s hard to compare the surface area of flat panels to that of a complex object like a tree. If you flattened out every inch of a tree’s surface it would cover a surprisingly large area.

Nonetheless, the ability to install these panels directly on buildings could present a promising way to soak up the huge amount of CO2 produced in our cities by transport and industry. And Arborea isn’t the only one trying to give plants a helping hand.

For decades researchers have been working on ways to use light-activated catalysts to split water into oxygen and hydrogen fuel, and more recently there have been efforts to fuse this with additional processes to combine the hydrogen with carbon from CO2 to produce all kinds of useful products.

Most notably, in 2016 Harvard researchers showed that water-splitting catalysts could be augmented with bacteria that combine the resulting hydrogen with CO2 to create oxygen and biomass, fuel, or other useful products. The approach was more efficient than plants at turning CO2 into fuel and was built using cheap materials, but turning it into a commercially viable technology will take time.

Not everyone is looking to mimic or borrow from biology in their efforts to suck CO2 out of the atmosphere. There’s been a recent glut of investment in startups working on direct-air capture (DAC) technology, which had previously been written off for using too much power and space to be practical. The looming climate change crisis appears to be rewriting some of those assumptions, though.

Most approaches aim to use the concentrated CO2 to produce synthetic fuels or other useful products, creating a revenue stream that could help improve their commercial viability. But we look increasingly likely to surpass the safe greenhouse gas limits, so attention is instead turning to carbon-negative technologies.

That means capturing CO2 from the air and then putting it into long-term storage. One way could be to grow lots of biomass and then bury it, mimicking the process that created fossil fuels in the first place. Or DAC plants could pump the CO2 they produce into deep underground wells.

But the former would take up unreasonably large amounts of land to make a significant dent in emissions, while the latter would require huge amounts of already scant and expensive renewable power. According to a recent analysis, artificial photosynthesis could sidestep these issues because it’s up to five times more efficient than its natural counterpart and could be cheaper than DAC.

Whether the technology will develop quickly enough for it to be deployed at scale and in time to mitigate the worst effects of climate change remains to be seen. Emissions reductions certainly present a more sure-fire way to deal with the problem, but nonetheless, cyborg plants could soon be a common sight in our cities.

Image Credit: GiroScience / Shutterstock.com
