Tag Archives: Visual
#437978 How Mirroring the Architecture of the ...
While AI can carry out some impressive feats when trained on millions of data points, the human brain can often learn from a tiny number of examples. New research shows that borrowing architectural principles from the brain can help AI get closer to our visual prowess.
The prevailing wisdom in deep learning research is that the more data you throw at an algorithm, the better it will learn. And in the era of Big Data, that’s easier than ever, particularly for the large data-centric tech companies carrying out a lot of the cutting-edge AI research.
Today’s largest deep learning models, like OpenAI’s GPT-3 and Google’s BERT, are trained on billions of data points, and even more modest models require large amounts of data. Collecting these datasets and investing the computational resources to crunch through them is a major bottleneck, particularly for less well-resourced academic labs.
It also means today’s AI is far less flexible than natural intelligence. While a human only needs to see a handful of examples of an animal, a tool, or some other category of object to be able to pick it out again, most AI systems need to be trained on many examples of an object in order to recognize it.
There is an active sub-discipline of AI research aimed at what is known as “one-shot” or “few-shot” learning, where algorithms are designed to be able to learn from very few examples. But these approaches are still largely experimental, and they can’t come close to matching the fastest learner we know—the human brain.
This prompted a pair of neuroscientists to see if they could design an AI that could learn from few data points by borrowing principles from how we think the brain solves this problem. In a paper in Frontiers in Computational Neuroscience, they explained that the approach significantly boosts AI’s ability to learn new visual concepts from few examples.
“Our model provides a biologically plausible way for artificial neural networks to learn new visual concepts from a small number of examples,” Maximilian Riesenhuber, from Georgetown University Medical Center, said in a press release. “We can get computers to learn much better from few examples by leveraging prior learning in a way that we think mirrors what the brain is doing.”
Several decades of neuroscience research suggest that the brain’s ability to learn so quickly depends on its ability to use prior knowledge to understand new concepts based on little data. When it comes to visual understanding, this can rely on similarities of shape, structure, or color, but the brain can also leverage abstract visual concepts thought to be encoded in a brain region called the anterior temporal lobe (ATL).
“It is like saying that a platypus looks a bit like a duck, a beaver, and a sea otter,” said paper co-author Joshua Rule, from the University of California Berkeley.
The researchers decided to try and recreate this capability by using similar high-level concepts learned by an AI to help it quickly learn previously unseen categories of images.
Deep learning algorithms work by getting layers of artificial neurons to learn increasingly complex features of an image or other data type, which are then used to categorize new data. For instance, early layers will look for simple features like edges, while later ones might look for more complex features like noses or faces, or even higher-level characteristics.
First they trained the AI on 2.5 million images across 2,000 different categories from the popular ImageNet dataset. They then extracted features from various layers of the network, including the very last layer before the output layer. They refer to these as “conceptual features” because they are the highest-level features learned, and most similar to the abstract concepts that might be encoded in the ATL.
They then used these different sets of features to train the AI to learn new concepts based on 2, 4, 8, 16, 32, 64, and 128 examples. They found that the AI using the conceptual features performed much better than versions trained on lower-level features when examples were scarce, though the gap shrank as more training examples were provided.
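To make that pipeline concrete, here is a minimal sketch of reusing high-level features for few-shot classification. It assumes an off-the-shelf ImageNet-pretrained ResNet standing in for the paper’s network and uses a simple nearest-prototype rule; the helper names and classifier choice are illustrative, not the authors’ exact method.

```python
# A minimal sketch (not the paper's exact pipeline): reuse the penultimate-layer
# features of an ImageNet-pretrained network to learn new categories from a
# handful of examples. Backbone choice and nearest-prototype rule are assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

backbone = models.resnet18(pretrained=True)  # stand-in for the ImageNet-trained network
backbone.fc = torch.nn.Identity()            # drop the classifier; keep high-level features
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed(image_paths):
    """Map image files to high-level ("conceptual") feature vectors."""
    with torch.no_grad():
        batch = torch.stack([preprocess(Image.open(p).convert("RGB")) for p in image_paths])
        return backbone(batch)

def class_prototypes(examples_by_class):
    """examples_by_class: {"platypus": [paths], ...} with only a few paths per class."""
    return {name: embed(paths).mean(dim=0) for name, paths in examples_by_class.items()}

def classify(image_path, prototypes):
    """Assign a new image to the class whose prototype is most similar."""
    query = embed([image_path])[0]
    return max(prototypes, key=lambda c: torch.cosine_similarity(query, prototypes[c], dim=0))
```

With only a few example images per new category, the pretrained embedding does most of the work; the classifier on top can stay extremely simple.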
While the researchers admit the challenge they set their AI was relatively simple and only covers one aspect of the complex process of visual reasoning, they said that using a biologically plausible approach to solving the few-shot problem opens up promising new avenues in both neuroscience and AI.
“Our findings not only suggest techniques that could help computers learn more quickly and efficiently, they can also lead to improved neuroscience experiments aimed at understanding how people learn so quickly, which is not yet well understood,” Riesenhuber said.
As the researchers note, the human visual system is still the gold standard when it comes to understanding the world around us. Borrowing from its design principles might turn out to be a profitable direction for future research.
Image Credit: Gerd Altmann from Pixabay
#437957 Meet Assembloids, Mini Human Brains With ...
It’s not often that a twitching, snowman-shaped blob of 3D human tissue makes someone’s day.
But when Dr. Sergiu Pasca at Stanford University witnessed the tiny movement, he knew his lab had achieved something special. You see, the blob was assembled from three lab-grown chunks of human tissue: a mini-brain, a mini-spinal cord, and a mini-muscle. Each individual component, churned to eerie humanoid perfection inside bubbling incubators, is already a work of scientific genius. But Pasca took the extra step, marinating the three components together inside a soup of nutrients.
The result was a bizarre, Lego-like human tissue that replicates the basic circuits behind how we decide to move. Without external prompting, when churned together like ice cream, the three ingredients physically linked up into a fully functional circuit. The 3D mini-brain, through the information highway formed by the artificial spinal cord, was able to make the lab-grown muscle twitch on demand.
In other words, if you think isolated mini-brains—known formally as brain organoids—floating in a jar are creepy, upgrade your nightmares. The next big thing in probing the brain is assembloids: free-floating brain circuits that combine brain tissue with an external output.
The end goal isn’t to freak people out. Rather, it’s to recapitulate our nervous system, from input to output, inside the controlled environment of a Petri dish. An autonomous, living brain-spinal cord-muscle entity is an invaluable model for figuring out how our own brains direct the intricate muscle movements that allow us to stay upright, walk, or type on a keyboard.
It’s a stepping stone toward more dexterous brain-machine interfaces, and a model for understanding when brain-muscle connections fail—as in devastating conditions like Lou Gehrig’s disease or Parkinson’s, where people slowly lose muscle control due to the gradual death of neurons that control muscle function. Assembloids are a sort of “mini-me,” a workaround for testing potential treatments on a simple “replica” of a person rather than directly on a human.
From Organoids to Assembloids
The miniature snippet of the human nervous system has been a long time in the making.
It all started in 2014, when Dr. Madeleine Lancaster, then a post-doc at Stanford, grew a shockingly intricate 3D replica of human brain tissue inside a whirling incubator. Radically different from standard cell cultures, which grind up brain tissue and reconstruct it as a flat network of cells, Lancaster’s 3D brain organoids were incredibly sophisticated in their recapitulation of the human brain during development. Subsequent studies further solidified their similarity to the developing brain of a fetus—not just in terms of neuron types, but also their connections and structure.
With the finding that these mini-brains sparked with electrical activity, bioethicists increasingly raised red flags that the blobs of human brain tissue—no larger than a pea—could harbor the potential to develop a sense of awareness if further matured and given external input and output.
Despite these concerns, brain organoids became an instant hit. Because they’re made of human tissue—often taken from actual human patients and converted into stem-cell-like states—organoids harbor the same genetic makeup as their donors. This makes it possible to study perplexing conditions such as autism, schizophrenia, or other brain disorders in a dish. What’s more, because they’re grown in the lab, it’s possible to genetically edit the mini-brains to test potential genetic culprits in the search for a cure.
Yet mini-brains had an Achilles’ heel: not all were made the same. Rather, depending on the region of the brain that was reverse engineered, the cells had to be persuaded by different cocktails of chemical soups and maintained in isolation. It was a stark contrast to our own developing brains, where regions are connected through highways of neural networks and work in tandem.
Pasca faced the problem head-on. Betting on the brain’s self-assembling capacity, his team hypothesized that it might be possible to grow different mini-brains, each reflecting a different brain region, and have them fuse together into a synchronized band of neuron circuits to process information. Last year, his idea paid off.
In one mind-blowing study, his team grew two separate portions of the brain into blobs, one representing the cortex, the other a deeper part of the brain known to control reward and movement, called the striatum. Shockingly, when put together, the two blobs of human brain tissue fused into a functional couple, automatically establishing neural highways that resulted in one of the most sophisticated recapitulations of a human brain. Pasca crowned this tissue-engineering crème-de-la-crème “assembloids,” a portmanteau of “assemble” and “organoids.”
“We have demonstrated that regionalized brain spheroids can be put together to form fused structures called brain assembloids,” said Pasca at the time. “[They] can then be used to investigate developmental processes that were previously inaccessible.”
And if that’s possible for wiring up a lab-grown brain, why wouldn’t it work for larger neural circuits?
Assembloids, Assemble
The new study is the fruition of that idea.
The team started with human skin cells, scraped off of eight healthy people, and transformed them into a stem-cell-like state, called iPSCs. These cells have long been touted as a breakthrough for personalized medical treatment, because each reflects the genetic makeup of its original host.
Using two separate cocktails, the team then generated mini-brains and mini-spinal cords from these iPSCs. The two components were placed together “in close proximity” for three days inside a lab incubator, gently floating around each other in an intricate dance. To the team’s surprise, under the microscope and using tracers that glow in the dark, they saw highways of branches extending from one organoid to the other like arms in a tight embrace. When stimulated with electricity, the links fired up, suggesting that the connections weren’t just for show—they were capable of transmitting information.
“We made the parts,” said Pasca, “but they knew how to put themselves together.”
Then came the ménage à trois. Once the mini-brain and spinal cord formed their double-decker ice cream scoop, the team overlaid them onto a layer of muscle cells—cultured separately into a human-like muscular structure. The end result was a somewhat bizarre and silly-looking snowman, made of three oddly shaped balls.
Yet against all odds, the brain-spinal cord assembly reached out to the lab-grown muscle. Using a variety of tools, including measuring muscle contraction, the team found that this utterly Frankenstein-like snowman was able to make the muscle component contract—in a way similar to how our muscles twitch when needed.
“Skeletal muscle doesn’t usually contract on its own,” said Pasca. “Seeing that first twitch in a lab dish immediately after cortical stimulation is something that’s not soon forgotten.”
When tested for longevity, the contraption lasted for up to 10 weeks without any sort of breakdown. Far from a one-shot wonder, the isolated circuit worked even better the longer each component was connected.
Pasca isn’t the first to give mini-brains an output channel. Last year, the queen of brain organoids, Lancaster, chopped up mature mini-brains into slices, which were then linked to muscle tissue through a cultured spinal cord. Assembloids are a step up, showing that it’s possible to automatically sew multiple nerve-linked structures together, such as brain and muscle, sans slicing.
The question is what happens when these assembloids become more sophisticated, edging ever closer to the inherent wiring that powers our movements. Pasca’s study targets outputs, but what about inputs? Can we wire input channels, such as retinal cells, to mini-brains that have a rudimentary visual cortex to process those examples? Learning, after all, depends on examples of our world, which are processed inside computational circuits and delivered as outputs—potentially, muscle contractions.
To be clear, few would argue that today’s mini-brains are capable of any sort of consciousness or awareness. But as mini-brains get increasingly more sophisticated, at what point can we consider them a sort of AI, capable of computation or even something that mimics thought? We don’t yet have an answer—but the debates are on.
Image Credit: christitzeimaging.com / Shutterstock.com
#437924 How a Software Map of the Entire Planet ...
“3D map data is the scaffolding of the 21st century.”
–Edward Miller, Founder, Scape Technologies, UK
Covered in cameras, sensors, and a distinctly spaceship-looking laser system, Google’s autonomous vehicles were easy to spot when they first hit public roads in 2015. The key hardware ingredient is a spinning laser fixed to the roof, called lidar, which provides the car with a pair of eyes to see the world. Lidar works by sending out beams of light and measuring the time they take to bounce off objects and return to the source. By timing the light’s journey, these depth-sensing systems construct fully 3D maps of their surroundings.
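As a back-of-the-envelope illustration of that time-of-flight principle (not any particular sensor’s firmware), the distance calculation is simply the speed of light times the round-trip time, halved:

```python
# A back-of-the-envelope sketch of the time-of-flight idea behind lidar:
# distance is the speed of light multiplied by the round-trip time, divided by
# two because the pulse travels out and back.
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A pulse returning after ~66.7 nanoseconds hit something roughly 10 meters away.
print(distance_from_round_trip(66.7e-9))  # ~10.0
```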
3D maps like these are essentially software copies of the real world. They will be crucial to the development of a wide range of emerging technologies including autonomous driving, drone delivery, robotics, and a fast-approaching future filled with augmented reality.
Like other rapidly improving technologies, lidar is moving quickly through its development cycle. What was once an expensive technology on the roof of a well-funded research project is becoming cheaper, more capable, and readily available to consumers. Lidar is already available to early-adopting owners of the iPhone 12 Pro, and at some point it will likely come standard on most mobile devices.
Consumer lidar represents the inevitable shift from wealthy tech companies generating our world’s map data to a more scalable, crowd-sourced approach. To develop the repository for its Street View product, Google reportedly spent $1-2 billion sending cars across continents photographing every street. Compare that to a live-mapping service like Waze, which uses crowd-sourced data from its millions of users to generate accurate, real-time traffic conditions. Though these maps serve different functions, one is a static, expensive, unchanging map of the world while the other is dynamic, real-time, and constructed by users themselves.
Soon millions of people may be scanning everything from bedrooms to neighborhoods, resulting in 3D maps of significant quality. An online search for lidar room scans demonstrates just how richly textured these three-dimensional maps are compared to anything we’ve had before. With lidar and other depth-sensing systems, we now have the tools to create exact software copies of everywhere and everything on earth.
At some point, likely aided by crowdsourcing initiatives, these maps will become living, breathing, real-time representations of the world. Some refer to this idea as a “digital twin” of the planet. In a feature cover story, Kevin Kelly, the cofounder of Wired magazine, calls this concept the “mirrorworld,” a one-to-one software map of everything.
So why is that such a big deal? Take augmented reality as an example.
Of all the emerging industries dependent on such a map, none are more invested in seeing this concept emerge than those within the AR landscape. Apple, for example, is not-so-secretly developing a pair of AR glasses, which they hope will deliver a mainstream turning point for the technology.
For Apple’s AR devices to work as anticipated, they will require virtual maps of the world, a concept AR insiders call the “AR cloud,” which is synonymous with the “mirrorworld” concept. These maps will be two things. First, they will be a tool that creators use to place AR content in very specific locations; like a world canvas to paint on. Second, they will help AR devices both locate and understand the world around them so they can render content in a believable way.
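As a rough sketch of those two roles (placing content at a precise pose, and letting a localized device query what is nearby), here is a hypothetical anchor structure; the field names, classes, and query radius are illustrative assumptions, not any vendor’s actual AR cloud API.

```python
# A hypothetical sketch of the two roles of an AR cloud map: anchoring content
# to a precise pose in a shared coordinate frame, and letting a localized device
# query nearby anchors to render. Field names and query radius are illustrative.
from dataclasses import dataclass, field

@dataclass
class Anchor:
    content_id: str                            # what to render (trading hours, a menu, a Pikachu...)
    position: tuple                            # (x, y, z) in the shared map's frame, meters
    orientation: tuple = (0.0, 0.0, 0.0, 1.0)  # quaternion (x, y, z, w)

@dataclass
class ARCloudMap:
    anchors: list = field(default_factory=list)

    def place(self, anchor: Anchor) -> None:
        """Creators pin content to a precise real-world pose."""
        self.anchors.append(anchor)

    def nearby(self, device_position, radius_m=25.0):
        """Devices that have localized against the map ask what to render around them."""
        def dist(a):
            return sum((p - q) ** 2 for p, q in zip(a.position, device_position)) ** 0.5
        return [a for a in self.anchors if dist(a) <= radius_m]

world = ARCloudMap()
world.place(Anchor("cafe-trading-hours", (12.4, 0.0, -3.1)))
print(world.nearby(device_position=(10.0, 0.0, -2.0)))
```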
Imagine walking down a street wanting to check the trading hours of a local business. Instead of pulling out your phone to do a tedious search online, you conduct the equivalent of a visual Google search simply by gazing at the store. Though a trivial example, the AR cloud represents an entirely non-trivial new way of organizing the world’s information. Access to knowledge can be shifted away from the faraway monitors in our pocket to its relevant real-world location.
Ultimately this describes a blurring of physical and digital infrastructure. Our public and private spaces will thus be composed equally of both.
No example demonstrates this idea better than Pokémon Go. The game is straightforward enough; users capture virtual characters scattered around the real world. Today, the game relies on traditional GPS technology to place its characters, but GPS is accurate only to within a few meters of a location. For a car navigating a highway or for roughly locating Pikachus in the world, that level of precision is sufficient. For drone deliveries, driverless cars, or placing a Pikachu in a specific location, say on a tree branch in a park, GPS isn’t accurate enough. As astonishing as it may seem, many experimental AR cloud concepts, even entirely mapped cities, are accurate down to the centimeter.
Niantic, the $4 billion publisher behind Pokémon Go, is aggressively developing a crowd-sourced approach to building better AR cloud maps by encouraging its users to scan the world for it. Its recent acquisition of 6D.ai, a mapping software company built on Victor Prisacariu’s work at the University of Oxford’s Active Vision Lab, signals Niantic’s ambition to compete with the tech giants in this space.
With 6D.ai’s technology, Niantic is developing the in-house ability to generate its own 3D maps while gaining a better semantic understanding of the world. Rather than just knowing there’s a temporary collection of orange cones in a certain location, for example, the game may one day understand the meaning behind them: a temporary construction zone, where no Pokémon should spawn, so players aren’t drawn to the site.
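To illustrate that kind of semantic filtering, here is a hypothetical sketch in which candidate spawn points are checked against labels attached to a 3D map; the labels, rules, and function names are invented for illustration and are not Niantic’s actual system.

```python
# A hypothetical sketch of semantic spawn filtering: candidate spawn points are
# checked against labels attached to the 3D map, so areas tagged as, say, a
# temporary construction zone are excluded. Labels, rules, and names are invented.
BLOCKED_LABELS = {"construction_zone", "private_property", "active_roadway"}

def allowed_spawn_points(candidates, semantic_map):
    """candidates: [(x, y, z), ...]; semantic_map: {(x, y, z): label}."""
    return [p for p in candidates
            if semantic_map.get(p, "public_space") not in BLOCKED_LABELS]

spots = [(1, 0, 2), (5, 0, 3)]
labels = {(1, 0, 2): "construction_zone"}   # orange cones detected and labeled here
print(allowed_spawn_points(spots, labels))  # [(5, 0, 3)]
```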
Niantic is not the only company working on this. Many of the big tech firms you would expect have entire teams focused on map data. Facebook, for example, recently acquired the UK-based Scape Technologies, a computer vision startup mapping entire cities with centimeter precision.
As our digital maps of the world improve, expect a relentless and justified discussion of privacy concerns as well. How will society react to the idea of a real-time 3D map of their bedroom living on a Facebook or Amazon server? Those horrified by the use of facial recognition AI being used in public spaces are unlikely to find comfort in the idea of a machine-readable world subject to infinite monitoring.
The ability to build high-precision maps of the world could reshape the way we engage with our planet and promises to be one of the biggest technology developments of the next decade. While these maps may stay hidden as behind-the-scenes infrastructure powering much flashier technologies that capture the world’s attention, they will soon prop up large portions of our technological future.
Keep that in mind when a car with no driver is sharing your road.
Image credit: sergio souza / Pexels
#437905 New Deep Learning Method Helps Robots ...
One of the biggest things standing in the way of the robot revolution is robots’ inability to adapt. That may be about to change, though, thanks to a new approach that blends pre-learned skills on the fly to tackle new challenges.
Put a robot in a tightly-controlled environment and it can quickly surpass human performance at complex tasks, from building cars to playing table tennis. But throw these machines a curve ball and they’re in trouble—just check out this compilation of some of the world’s most advanced robots coming unstuck in the face of notoriously challenging obstacles like sand, steps, and doorways.
The reason robots tend to be so fragile is that the algorithms that control them are often manually designed. If they encounter a situation the designer didn’t think of, which is almost inevitable in the chaotic real world, then they simply don’t have the tools to react.
Rapid advances in AI have provided a potential workaround by letting robots learn how to carry out tasks instead of relying on hand-coded instructions. A particularly promising approach is deep reinforcement learning, where the robot interacts with its environment through a process of trial-and-error and is rewarded for carrying out the correct actions. Over many repetitions it can use this feedback to learn how to accomplish the task at hand.
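For a concrete, if toy, picture of that trial-and-error loop, here is a minimal tabular Q-learning sketch; real robot controllers use deep networks over rich sensor data, but the reward-driven update below is the same basic idea.

```python
# A toy sketch of the trial-and-error loop: the agent tries actions, receives a
# reward, and nudges its estimates toward actions that pay off. The corridor
# environment and parameters are invented purely for illustration.
import random

N_STATES, ACTIONS = 5, ["left", "right"]
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Toy corridor: reaching the rightmost cell earns the only reward."""
    nxt = min(state + 1, N_STATES - 1) if action == "right" else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount, exploration rate
for _ in range(500):
    state, done = 0, False
    while not done:
        action = (random.choice(ACTIONS) if random.random() < epsilon
                  else max(ACTIONS, key=lambda a: q[(state, a)]))
        nxt, reward, done = step(state, action)
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = nxt

print(max(ACTIONS, key=lambda a: q[(0, a)]))  # the learned policy prefers "right"
```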
But the approach requires huge amounts of data to solve even simple tasks. And most of the things we would want a robot to do are actually made up of many smaller tasks—for instance, delivering a parcel involves learning how to pick an object up, how to walk, how to navigate, and how to pass an object to someone else, among other things.
Training all these sub-tasks simultaneously is hugely complex and far beyond the capabilities of most current AI systems, so many experiments so far have focused on narrow skills. Some have tried to train AI on multiple skills separately and then use an overarching system to flip between these expert sub-systems, but these approaches still can’t adapt to completely new challenges.
Building off this research, though, scientists have now created a new AI system that can blend together expert sub-systems specialized for a specific task. In a paper in Science Robotics, they explain how this allows a four-legged robot to improvise new skills and adapt to unfamiliar challenges in real time.
The technique, dubbed multi-expert learning architecture (MELA), relies on a two-stage training approach. First the researchers used a computer simulation to train two neural networks to carry out two separate tasks: trotting and recovering from a fall.
They then used the models these two networks learned as seeds for eight other neural networks specialized for more specific motor skills, like rolling over or turning left or right. The eight “expert networks” were trained simultaneously along with a “gating network,” which learns how to combine these experts to solve challenges.
Because the gating network synthesizes the expert networks rather than switching them on sequentially, MELA is able to come up with blends of different experts that allow it to tackle problems none could solve alone.
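A simplified mixture-of-experts-style sketch of that idea is below: a gating network reads the robot’s state and outputs blending weights, and the final action is a weighted combination of every expert’s output rather than a hard switch between them. The dimensions and layer sizes are illustrative assumptions, and this is not the authors’ exact MELA implementation.

```python
# A simplified mixture-of-experts-style sketch, not the authors' exact MELA
# implementation: a gating network reads the robot's state and outputs blending
# weights, and the action is a weighted combination of every expert's output.
import torch
import torch.nn as nn

class GatedExperts(nn.Module):
    def __init__(self, state_dim=32, action_dim=12, n_experts=8, hidden=64):
        super().__init__()
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, action_dim))
            for _ in range(n_experts)
        ])
        self.gate = nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                                  nn.Linear(hidden, n_experts))

    def forward(self, state):
        weights = torch.softmax(self.gate(state), dim=-1)               # (batch, n_experts)
        actions = torch.stack([e(state) for e in self.experts], dim=1)  # (batch, n_experts, action_dim)
        return (weights.unsqueeze(-1) * actions).sum(dim=1)             # blended joint targets

policy = GatedExperts()
joint_feedback = torch.randn(1, 32)   # stand-in for proprioceptive state
print(policy(joint_feedback).shape)   # torch.Size([1, 12])
```

Because the weights vary continuously with the state, the blend itself can express behaviors no single expert was trained on, which is the adaptability the paragraph above describes.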
The authors liken the approach to training people in how to play soccer. You start out by getting them to do drills on individual skills like dribbling, passing, or shooting. Once they’ve mastered those, they can then intelligently combine them to deal with more dynamic situations in a real game.
After training the algorithm in simulation, the researchers uploaded it to a four-legged robot and subjected it to a battery of tests, both indoors and outdoors. The robot was able to adapt quickly to tricky surfaces like gravel or pebbles, and could quickly recover from being repeatedly pushed over before continuing on its way.
There’s still some way to go before the approach could be adapted for real-world commercially useful robots. For a start, MELA currently isn’t able to integrate visual perception or a sense of touch; it simply relies on feedback from the robot’s joints to tell it what’s going on around it. The more tasks you ask the robot to master, the more complex and time-consuming the training will get.
Nonetheless, the new approach points towards a promising way to make multi-skilled robots become more than the sum of their parts. As much fun as it is, laughing at compilations of clumsy robots may soon be a thing of the past.
Image Credit: Yang et al., Science Robotics
#437869 Video Friday: Japan’s Gundam Robot ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
ACRA 2020 – December 8-10, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today’s videos.
Another BIG step for Japan’s Gundam project.
[ Gundam Factory ]
We present an interactive design system that allows users to create sculpting styles and fabricate clay models using a standard 6-axis robot arm. Given a general mesh as input, the user iteratively selects sub-areas of the mesh through decomposition and embeds the design expression into an initial set of toolpaths by modifying key parameters that affect the visual appearance of the sculpted surface finish. We demonstrate the versatility of our approach by designing and fabricating different sculpting styles over a wide range of clay models.
[ Disney Research ]
China’s Chang’e-5 completed the drilling, sampling and sealing of lunar soil at 04:53 BJT on Wednesday, marking the first automatic sampling on the Moon, the China National Space Administration (CNSA) announced Wednesday.
[ CCTV ]
Red Hat’s been putting together an excellent documentary on Willow Garage and ROS, and all five parts have just been released. We posted Part 1 a little while ago, so here’s Part 2 and Part 3.
Parts 4 and 5 are at the link below!
[ Red Hat ]
Congratulations to ANYbotics on a well-deserved raise!
ANYbotics has origins in the Robotic Systems Lab at ETH Zurich, and ANYmal’s heritage can be traced back at least as far as StarlETH, which we first met at ICRA 2013.
[ ANYbotics ]
Most conventional robots work with 0.05-0.1 mm accuracy. Such accuracy requires high-end components like low-backlash gears, high-resolution encoders, complicated CNC parts, powerful motor drives, etc. In combination, these add up to an expensive solution that is either unaffordable or unnecessary for many applications. As a result, we founded Apicoo Robotics to provide our customers with solutions at a much lower cost and higher stability.
[ Apicoo Robotics ]
The Skydio 2 is an incredible drone that can take incredible footage fully autonomously, but it definitely helps if you do incredible things in incredible places.
[ Skydio ]
Jueying is the first domestically developed perceptive quadruped robot for industrial applications and scenarios. It can work alongside (or replace) humans to reach any place that can be reached. It has superior environmental adaptability, excellent dynamic balance, and precise environmental perception. By carrying functional modules for different application scenarios in its safe payload area, the quadruped robot’s mobility can be organically integrated with commercialized functional modules, providing solutions for smart factories, smart parks, scene display, and public safety.
[ DeepRobotics ]
We have developed a semi-autonomous quadruped robot, called LASER-D (Legged-Agile-Smart-Efficient Robot for Disinfection), for performing disinfection in cluttered environments. The robot is equipped with a spray-based disinfection system and leverages its body motion to control the spray action without the need for an extra stabilization mechanism. The system includes an image-processing capability to verify disinfected regions with high accuracy. This allows the robot to carry out effective disinfection tasks while safely traversing cluttered environments, climbing stairs and slopes, and navigating slippery surfaces.
[ USC Viterbi ]
We propose the “multi-vision hand”, in which a number of small high-speed cameras are mounted on the robot hand of a common 7 degrees-of-freedom robot. Also, we propose visual-servoing control by using a multi-vision system that combines the multi-vision hand and external fixed high-speed cameras. The target task was ball catching motion, which requires high-speed operation. In the proposed catching control, the catch position of the ball, which is estimated by the external fixed high-speed cameras, is corrected by the multi-vision hand in real-time.
More details available through IROS on-demand.
[ Namiki Laboratory ]
Shunichi Kurumaya wrote in to share his work on PneuFinger, a pneumatically actuated compliant robotic gripping system.
[ Nakamura Lab ]
Thanks Shunichi!
Motivated by insights into the human teaching process, we introduce a method for incorporating unstructured natural language into imitation learning. At training time, the expert can provide demonstrations along with verbal descriptions in order to describe the underlying intent, e.g., “Go to the large green bowl.” The training process then interrelates the different modalities to encode the correlations between language, perception, and motion. The resulting language-conditioned visuomotor policies can be conditioned at run time on new human commands and instructions, which allows for more fine-grained control over the trained policies while also reducing situational ambiguity.
[ ASU ]
Thanks Heni!
Gita is on sale for the holidays for only $2,000.
[ Gita ]
This video introduces a computational approach for routing thin artificial muscle actuators through hyperelastic soft robots, in order to achieve a desired deformation behavior. Provided with a robot design, and a set of example deformations, we continuously co-optimize the routing of actuators, and their actuation, to approximate example deformations as closely as possible.
[ Disney Research ]
Researchers and mountain rescuers in Switzerland are making huge progress in the field of autonomous drones as the technology becomes more in-demand for global search-and-rescue operations.
[ SWI ]
This short clip of the Ghost Robotics V60 features an interesting, if awkward-looking, righting behavior at the end.
[ Ghost Robotics ]
Europe’s Rosalind Franklin ExoMars rover has a younger ‘sibling’, ExoMy. The blueprints and software for this mini-version of the full-size Mars explorer are available for free so that anyone can 3D print, assemble and program their own ExoMy.
[ ESA ]
The holiday season is here, and with the added impact of Covid-19 consumer demand is at an all-time high. Berkshire Grey is the partner that today’s leading organizations turn to when it comes to fulfillment automation.
[ Berkshire Grey ]
Until very recently, the vast majority of studies and reports on the use of cargo drones for public health were almost exclusively focused on the technology. The driving interest was the range these drones could travel, how much they could carry, and how they worked. Little to no attention was paid to the human side of these projects. Community perception, community engagement, consent, and stakeholder feedback were rarely if ever addressed. This webinar presents the findings from a very recent study that finally sheds some light on the human side of drone delivery projects.
[ WeRobotics ]