Tag Archives: vision

#437765 Video Friday: Massive Robot Joins ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AWS Cloud Robotics Summit – August 18-19, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
IROS 2020 – October 25-29, 2020 – Las Vegas, Nevada
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today’s videos.

Here are some professional circus artists messing around with an industrial robot for fun, like you do.

The acrobats are part of Östgötateatern, a Swedish theatre group, and the chair bit got turned into its own act, called “The Last Fish.” But apparently the Swedish Work Environment Authority didn’t like that an industrial robot—a large ABB robotic arm—was being used in an artistic performance, arguing that the same safety measures that apply in a factory setting would apply on stage. In other words, the robot had to operate inside a protective cage and humans could not physically interact with it.

When told that their robot had to be removed, the acrobats went to court. And won! At least that’s what we understand from this Swedish press release. The court in Linköping, in southern Sweden, ruled that the safety measures taken by the theater had been sufficient. The group had worked with a local robotics firm, Dyno Robotics, to program the manipulator and learn how to interact with it as safely as possible. The robot—which the acrobats say is the eighth member of their troupe—will now be allowed to return.

[ Östgötateatern ]

Houston Mechatronics’ Aquanaut continues to be awesome, even in the middle of a pandemic. It’s taken the big step (big swim?) out of NASA’s swimming pool and into open water.

[ HMI ]

Researchers from Carnegie Mellon University and Facebook AI Research have created a navigation system for robots powered by common sense. The technique uses machine learning to teach robots how to recognize objects and understand where they’re likely to be found in a house. The result allows the machines to search more strategically.
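
The details are in the paper, but the core trick (bias the search toward rooms where the target object is statistically likely to appear) can be sketched in a few lines of Python; the rooms, priors, and cost weighting below are illustrative assumptions, not values from the CMU/Facebook system.

```python
# Illustrative sketch: choose which room to search first for a target object
# by combining object-location priors with travel cost. The priors and rooms
# here are made-up placeholders, not the CMU/Facebook model.

# P(object | room), e.g. learned from annotated house scans (assumed values)
OBJECT_ROOM_PRIOR = {
    "mug":   {"kitchen": 0.55, "living_room": 0.25, "bedroom": 0.10, "bathroom": 0.10},
    "towel": {"bathroom": 0.60, "bedroom": 0.20, "kitchen": 0.15, "living_room": 0.05},
}

def rank_rooms(target, travel_cost, cost_weight=0.1):
    """Score rooms by prior probability minus a travel-cost penalty."""
    prior = OBJECT_ROOM_PRIOR[target]
    scores = {room: p - cost_weight * travel_cost.get(room, 0.0)
              for room, p in prior.items()}
    return sorted(scores, key=scores.get, reverse=True)

if __name__ == "__main__":
    # Travel costs (e.g., meters of path length) from the robot's current pose.
    costs = {"kitchen": 8.0, "living_room": 2.0, "bedroom": 5.0, "bathroom": 6.0}
    print(rank_rooms("mug", costs))  # ['living_room', 'kitchen', 'bedroom', 'bathroom']
```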

[ CMU ]

Cassie manages 2.1 m/s, which is uncomfortably fast in a couple of different ways.

Next, untethered. After that, running!

[ Michigan Robotics ]

Engineers at Caltech have designed a new data-driven method to control the movement of multiple robots through cluttered, unmapped spaces, so they do not run into one another.

Multi-robot motion coordination is a fundamental robotics problem with applications ranging from urban search and rescue to the control of fleets of self-driving cars to formation-flying in cluttered environments. Two key challenges make multi-robot coordination difficult: first, robots moving in new environments must make split-second decisions about their trajectories despite having incomplete data about their future path; second, the presence of larger numbers of robots in an environment makes their interactions increasingly complex (and more prone to collisions).

To overcome these challenges, Soon-Jo Chung, Bren Professor of Aerospace, and Yisong Yue, professor of computing and mathematical sciences, along with Caltech graduate student Benjamin Rivière (MS ’18), postdoctoral scholar Wolfgang Hönig, and graduate student Guanya Shi, developed a multi-robot motion-planning algorithm called “Global-to-Local Safe Autonomy Synthesis,” or GLAS, which imitates a complete-information planner with only local information, and “Neural-Swarm,” a swarm-tracking controller augmented to learn complex aerodynamic interactions in close-proximity flight.
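
The "global-to-local" part of GLAS is, at heart, imitation learning: a planner with complete information generates demonstrations, and a policy that sees only each robot's local neighborhood is trained to reproduce them. Here is a minimal sketch of that data-generation step under toy assumptions; the planner, observation radius, and robot count are placeholders, not the Caltech implementation.

```python
# Sketch of the "global-to-local" imitation idea behind GLAS (toy assumptions).
# A complete-information planner produces demonstrations; a policy that only
# sees a robot's local neighborhood would then be trained to imitate them.
import numpy as np

N_ROBOTS, OBS_RADIUS = 4, 2.0  # illustrative values, not the paper's

def global_planner_step(positions, goals):
    """Stand-in for a centralized planner: step each robot toward its goal.
    (The real planner would also resolve inter-robot conflicts.)"""
    direction = goals - positions
    norms = np.linalg.norm(direction, axis=1, keepdims=True) + 1e-9
    return 0.1 * direction / norms  # per-robot velocity commands

def local_observation(i, positions, goals):
    """What robot i is allowed to see: its own goal offset plus the relative
    positions of neighbors within OBS_RADIUS."""
    rel = positions - positions[i]
    dists = np.linalg.norm(rel, axis=1)
    neighbors = rel[(dists > 0) & (dists < OBS_RADIUS)]
    return {"goal_offset": goals[i] - positions[i], "neighbors": neighbors}

# Generate (local observation, expert action) pairs for imitation learning.
rng = np.random.default_rng(0)
positions = rng.uniform(0, 5, size=(N_ROBOTS, 2))
goals = rng.uniform(0, 5, size=(N_ROBOTS, 2))
dataset = []
for _ in range(100):  # 100 demonstration steps
    actions = global_planner_step(positions, goals)
    for i in range(N_ROBOTS):
        dataset.append((local_observation(i, positions, goals), actions[i]))
    positions = positions + actions

print(f"collected {len(dataset)} (observation, action) training pairs")
# In GLAS these pairs would train a neural network policy with a safety module;
# this sketch only shows the data-generation side of the pipeline.
```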

[ Caltech ]

Fetch Robotics’ Freight robot is now hauling around pulsed xenon UV lamps to autonomously disinfect spaces with UV-A, UV-B, and UV-C, all at the same time.

[ SmartGuard UV ]

When you’re a vertically symmetrical quadruped robot, there is no upside-down.

[ Ghost Robotics ]

In the virtual world, the objects you pick up do not exist: you can see that cup or pen, but it does not feel like you’re touching them. That presented a challenge to EPFL professor Herbert Shea. Drawing on his extensive experience with silicone-based muscles and motors, Shea wanted to find a way to make virtual objects feel real. “With my team, we’ve created very small, thin and fast actuators,” explains Shea. “They are millimeter-sized capsules that use electrostatic energy to inflate and deflate.” The capsules have an outer insulating membrane made of silicone enclosing an inner pocket filled with oil. Each bubble is surrounded by four electrodes that can close like a zipper. When a voltage is applied, the electrodes are pulled together, causing the center of the capsule to swell like a blister. It is an ingenious system because the capsules, known as HAXELs, can move not only up and down, but also side to side and around in a circle. “When they are placed under your fingers, it feels as though you are touching a range of different objects,” says Shea.

[ EPFL ]

Through the simple trick of reversing motors on impact, a quadrotor can land much more reliably on slopes.

[ Sherbrooke ]

Turtlebot delivers candy at Harvard.

I <3 Turtlebot SO MUCH

[ Harvard ]

Traditional drone controllers are a little bit counterintuitive: one stick means forwards and backwards while the other means up and down, but both sticks move along the same physical axis. How does that make sense?! Here’s a remote that gives you actual z-axis control instead.

[ Fenics ]

Thanks Ashley!

Lio is a mobile robot platform with a multifunctional arm explicitly designed for human-robot interaction and personal care assistant tasks. The robot has already been deployed in several health care facilities, where it is functioning autonomously, assisting staff and patients on an everyday basis.

[ F&P Robotics ]

This video shows a ground vehicle autonomously exploring and mapping a multi-story garage building and a connected patio on the Carnegie Mellon University campus. The vehicle runs onboard state estimation and mapping leveraging range, vision, and inertial sensing, local planning for collision avoidance, and terrain analysis. All processing is real-time, with no post-processing involved. The vehicle drives at 2 m/s throughout the exploration run. This work was developed for the DARPA Subterranean Challenge.

[ CMU ]

Raytheon UK’s flagship STEM programme, the Quadcopter Challenge, gives 14- to 15-year-olds the chance to participate in a hands-on, STEM-based engineering challenge to build a fully operational quadcopter. Each team is provided with an identical kit of parts, tools and instructions to build and customise their quadcopter, whilst Raytheon UK STEM Ambassadors provide mentoring, technical support and deliver bite-size learning modules to support the build.

[ Raytheon ]

A video on some of the research work that is being carried out at The Australian Centre for Field Robotics, University of Sydney.

[ University of Sydney ]

Jeannette Bohg, assistant professor of computer science at Stanford University, gave one of the Early Career Award Keynotes at RSS 2020.

[ RSS 2020 ]

Adam Savage remembers Grant Imahara.

[ Tested ]


#437763 Peer Review of Scholarly Research Gets ...

In the world of academics, peer review is considered the only credible validation of scholarly work. Although the process has its detractors, evaluation of academic research by a cohort of contemporaries has endured for over 350 years, with “relatively minor changes.” However, peer review may be set to undergo its biggest revolution ever—the integration of artificial intelligence.

Open-access publisher Frontiers has debuted an AI tool called the Artificial Intelligence Review Assistant (AIRA), which purports to eliminate much of the grunt work associated with peer review. Since the beginning of June 2020, every one of the 11,000-plus submissions Frontiers received has been run through AIRA, which is integrated into its collaborative peer-review platform. This also makes it accessible to external users, accounting for some 100,000 editors, authors, and reviewers. Altogether, this helps “maximize the efficiency of the publishing process and make peer-review more objective,” says Kamila Markram, founder and CEO of Frontiers.

AIRA’s interactive online platform, which is a first of its kind in the industry, has been in development for three years. It performs three broad functions, explains Daniel Petrariu, director of project management: assessing the quality of the manuscript, assessing the quality of peer review, and recommending editors and reviewers. At the initial validation stage, the AI can make up to 20 recommendations and flag potential issues, including language quality, plagiarism, integrity of images, conflicts of interest, and so on. “This happens almost instantly and with [high] accuracy, far beyond the rate at which a human could be expected to complete a similar task,” Markram says.

“We have used a wide variety of machine-learning models for a diverse set of applications, including computer vision, natural language processing, and recommender systems,” says Markram. This includes simple bag-of-words models, as well as more sophisticated deep-learning ones. AIRA also leverages a large knowledge base of publications and authors.
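
Frontiers hasn’t published AIRA’s internals, but a bag-of-words check of the sort Markram mentions is easy to prototype. The toy scikit-learn pipeline below is a generic illustration with invented training snippets, not AIRA’s model.

```python
# Toy illustration of a bag-of-words manuscript-quality classifier of the kind
# Markram describes; the training snippets and labels are invented, and the
# real AIRA models are far larger and trained on curated editorial data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "The results demonstrate a statistically significant improvement over baseline.",
    "We here is showing result very good of experiment done by us many time.",
    "Methods were validated against an independent replication cohort.",
    "Conclusion are proving that hypothesis true because data say so clearly.",
]
labels = [1, 0, 1, 0]  # 1 = acceptable language quality, 0 = needs editing

model = make_pipeline(CountVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

new_text = ["This manuscript describe results which is obtain from three experiment."]
print(model.predict(new_text))  # toy output; a real model needs far more data
```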

Markram notes that, to address issues of possible AI bias, “We…[build] our own datasets and [design] our own algorithms. We make sure no statistical biases appear in the sampling of training and testing data. For example, when building a model to assess language quality, scientific fields are equally represented so the model isn’t biased toward any specific topic.” Feedback from domain experts, including the errors they catch, is captured and used as additional training data for the machine- and deep-learning models. “By regularly re-training, we make sure our models improve in terms of accuracy and stay up-to-date.”

The AI’s job is to flag concerns; humans make the final decisions, says Petrariu. As an example, he cites image manipulation detection—something AI is super-efficient at but that is nearly impossible for a human to perform with the same accuracy. “About 10 percent of our flagged images have some sort of problem,” he adds. “[In academic publishing] nobody has done this kind of comprehensive check [using AI] before,” says Petrariu. AIRA, he adds, facilitates Frontiers’ mission to make science open and knowledge accessible to all.


#437758 Remotely Operated Robot Takes Straight ...

Roboticists love hard problems. Challenges like the DRC and SubT have helped (and are still helping) to catalyze major advances in robotics, but not all hard problems require a massive amount of DARPA funding—sometimes, a hard problem can just be something very specific that’s really hard for a robot to do, especially relative to the ease with which a moderately trained human might be able to do it. Catching a ball. Putting a peg in a hole. Or using a straight razor to shave someone’s face without Sweeney Todd-izing them.

This particular roboticist who sees straight-razor face shaving as a hard problem that robots should be solving is John Peter Whitney, who we first met back at IROS 2014 in Chicago when (working at Disney Research) he introduced an elegant fluidic actuator system. These actuators use tubes containing a fluid (like air or water) to transmit forces from a primary robot to a secondary robot in a very efficient way that also allows for either compliance or very high fidelity force feedback, depending on the compressibility of the fluid.

Photo: John Peter Whitney/Northeastern University

Barber meets robot: Boston-based barber Jesse Cabbage [top, right] observes the machine created by roboticist John Peter Whitney. Before testing the robot on Whitney’s face, they used his arm for a quick practice [bottom].

Whitney is now at Northeastern University, in Boston, and he recently gave a talk at the RSS workshop on “Reacting to Contact,” where he suggested that straight razor shaving would be an interesting and valuable problem for robotics to work toward, due to its difficulty and requirement for an extremely high level of both performance and reliability.

Now, a straight razor is sort of like a safety razor, except with the safety part removed, which in fact does make it significantly less safe for humans, much less robots. Also not ideal for those worried about safety is that as part of the process the razor ends up in distressingly close proximity to things like the artery that is busily delivering your brain’s entire supply of blood, which is very close to the top of the list of things that most people want to keep blades very far away from. But that didn’t stop Whitney from putting his whiskers where his mouth is and letting his robotic system mediate the ministrations of a professional barber. It’s not an autonomous robotic straight-razor shave (because Whitney is not totally crazy), but it’s a step in that direction, and requires that the hardware Whitney developed be dead reliable.

Perhaps that was a poor choice of words. But, rest assured that Whitney lived long enough to answer our questions after. Here’s the video; it’s part of a longer talk, but it should start in the right spot, at about 23:30.

If Whitney looked a little bit nervous to you, that’s because he was. “This was the first time I’d ever been shaved by someone (something?!) else with a straight razor,” he told us, and while having a professional barber at the helm was some comfort, “the lack of feeling and control on my part was somewhat unsettling.” Whitney says that the barber, Jesse Cabbage of Dentes Barbershop in Somerville, Mass., was surprised by how well he could feel the tactile sensations being transmitted from the razor. “That’s one of the reasons we decided to make this video,” Whitney says. “I can’t show someone how something feels, so the next best thing is to show a delicate task that either from experience or intuition makes it clear to the viewer that the system must have these properties—otherwise the task wouldn’t be possible.”

And as for when Whitney might be comfortable getting shaved by a robotic system without a human in the loop? It’s going to take a lot of work, as do most other hard problems in robotics. “There are two parts to this,” he explains. “One is fault-tolerance of the components themselves (software, electronics, etc.) and the second is the quality of the perception and planning algorithms.”

He offers a comparison to self-driving cars, in which similar (or greater) risks are incurred: “To learn how to perceive, interpret, and adapt, we need a very high-fidelity model of the problem, or a wealth of data and experience, or both,” he says. “But in the case of shaving we are greatly lacking in both!” He continues with the analogy: “I think there is a natural progression—the community started with autonomous driving of toy cars on closed courses and worked up to real cars carrying human passengers; in robotic manipulation we are beginning to move out of the ‘toy car’ stage and so I think it’s good to target high-consequence hard problems to help drive progress.”


Of course, the ultimate goal here is much more general than the creation of a dedicated straight razor shaving robot; it’s a challenge that includes a host of sub-goals that will benefit robotics more generally. This particular hardware system Whitney is developing is actually a testbed for exploring MRI-compatible remote needle biopsy, and he and his students are collaborating with Brigham and Women’s Hospital in Boston on adapting this technology to prostate biopsy and ablation procedures. They’re also exploring how delicate touch can be used as a way to map an environment and localize within it, especially where using vision may not be a good option. “These traits and behaviors are especially interesting for applications where we must interact with delicate and uncertain environments,” says Whitney. “Medical robots, assistive and rehabilitation robots and exoskeletons, and shared-autonomy teleoperation for delicate tasks.”

A paper with more details on this robotic system, “Series Elastic Force Control for Soft Robotic Fluid Actuators,” is available on arXiv.
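
For readers curious what series elastic force control amounts to, here is a minimal generic sketch: the force at the output is inferred from the deflection of a compliant element of known stiffness, and a proportional-integral loop servos that force to a setpoint. The stiffness, gains, and load model are illustrative assumptions, not values from Whitney’s paper.

```python
# Minimal sketch of series-elastic force control (generic, not Whitney's code).
# Force is estimated from the elastic element's deflection, F = k * (x_motor - x_load);
# a PI loop drives the motor so the estimated force tracks the commanded force.

K_SPRING = 500.0     # N/m, assumed stiffness of the elastic element
KP, KI = 0.02, 0.05  # assumed PI gains
DT = 0.001           # 1 kHz control loop

def simulate(force_setpoint=2.0, steps=3000):
    x_motor, x_load, integral = 0.0, 0.0, 0.0
    for _ in range(steps):
        force_est = K_SPRING * (x_motor - x_load)    # sensed via deflection
        error = force_setpoint - force_est
        integral += error * DT
        motor_velocity = KP * error + KI * integral  # PI force controller
        x_motor += motor_velocity * DT
        # Toy load: the environment yields slightly under the applied force.
        x_load += 0.0005 * force_est * DT
    return force_est

print(f"steady-state force: {simulate():.2f} N (setpoint 2.0 N)")
```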


#437745 Video Friday: Japan’s Giant Gundam ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AWS Cloud Robotics Summit – August 18-19, 2020 – [Online Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online Conference]
IROS 2020 – October 25-29, 2020 – Las Vegas, Nev., USA
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

It’s coming together—literally! Japan’s giant Gundam appears nearly finished and ready for its first steps. In a recent video, Gundam Factory Yokohama, which is constructing the 18-meter-tall, 25-ton walking robot, provided an update on the project. The video shows the Gundam getting its head attached—after being blessed by Shinto priests.

In the video update, they say the project is “steadily progressing” and further details will be announced around the end of September.

[ Gundam Factory Yokohama ]

Creating robots with emotional personalities will transform the usability of robots in the real world. As previous emotive social robots are mostly based on statically stable robots whose mobility is limited, this work develops an animation-to-real-world pipeline that enables dynamic bipedal robots that can twist, wiggle, and walk to behave with emotions.

So that’s where Cassie’s eyes go.

[ Berkeley ]

Now that the DARPA SubT Cave Circuit is all virtual, here’s a good reminder of how it’ll work.

[ SubT ]

Since July 20, anyone 11 years of age or older must wear a mask in closed public places in France. This measure is also highly recommended in many European, African, and Persian Gulf countries. To support businesses and public places, SoftBank Robotics Europe unveils a new feature with Pepper: AI Face Mask Detection.

[ Softbank ]

University of Michigan researchers are developing new origami-inspired methods for designing, fabricating, and actuating micro-robots using heat. These improvements will expand the mechanical capabilities of the tiny bots, allowing them to fold into more complex shapes.

[ University of Michigan ]

The Suzumori Endo Lab at Tokyo Tech has created various types of IPMC robots, fabricated using novel 3D fabrication methods.

[ Suzumori Endo Lab ]

The most explode-y of drones manages not to explode this time.

[ SpaceX ]

At Amazon, we’re constantly innovating to support our employees, customers, and communities as effectively as possible. As our fulfillment and delivery teams have been hard at work supplying customers with items during the pandemic, Amazon’s robotics team has been working behind the scenes to re-engineer bots and processes to increase safety in our fulfillment centers.

While some folks are able to do their jobs at home with just a laptop and internet connection, it’s not that simple for other employees at Amazon, including those who spend their days building and testing robots. Some engineers have turned their homes into R&D labs to continue building these new technologies to better serve our customers and employees. Their creativity and resourcefulness to keep our important programs going is inspiring.

[ Amazon ]

Australian Army soldiers from 2nd/14th Light Horse Regiment (Queensland Mounted Infantry) demonstrated the PD-100 Black Hornet Nano unmanned aerial vehicle during a training exercise at Shoalwater Bay Training Area, Queensland, on 4 May 2018.

This robot has been around for a long time—maybe 10 years or more? It makes you wonder what the next generation will look like, and if they can manage to make it even smaller.

[ FLIR ]

Event-based cameras are bio-inspired vision sensors whose pixels work independently from each other and respond asynchronously to brightness changes, with microsecond resolution. Their advantages make it possible to tackle challenging scenarios in robotics, such as high-speed and high dynamic range scenes. We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig.
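
For a concrete sense of the data involved: each event is just a tuple (x, y, timestamp, polarity), and a common first step in event-based pipelines is to accumulate a short time window of events into an image-like frame. The snippet below illustrates that representation only; it is not the authors’ stereo visual-odometry method.

```python
# Generic illustration of event-camera data: each event is (x, y, t, polarity).
# A common preprocessing step is to accumulate a time window of events into an
# image-like frame; this is not the stereo visual-odometry pipeline itself.
import numpy as np

WIDTH, HEIGHT = 64, 48  # toy sensor resolution (assumed)

def accumulate_events(events, t_start, t_end):
    """Sum event polarities per pixel over [t_start, t_end)."""
    frame = np.zeros((HEIGHT, WIDTH), dtype=np.int32)
    for x, y, t, polarity in events:
        if t_start <= t < t_end:
            frame[y, x] += 1 if polarity else -1
    return frame

# Synthetic events: a bright edge sweeping left to right, one column every 0.1 ms.
events = [(x, y, x * 1e-4, True) for x in range(WIDTH) for y in range(HEIGHT)]
frame = accumulate_events(events, t_start=0.0, t_end=0.005)
print("pixels that fired in the first 5 ms:", int((frame != 0).sum()))
```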

[ Paper ] via [ HKUST ]

Emys can help keep kindergarteners sitting still for a long time, which is no small feat!

[ Emys ]

Introducing the RoboMaster EP Core, an advanced educational robot that was built to take learning to the next level and provides an all-in-one solution for STEAM-based classrooms everywhere, offering AI and programming projects for students of all ages and experience levels.

[ DJI ]

Dutch food company Heemskerk uses ABB robots to automate its order picking. The new solution reduces the amount of time fresh produce spends in the supply chain, extending its shelf life, minimizing wastage, and creating a more sustainable solution for the fresh food industry.

[ ABB ]

This week’s episode of Pass the Torque features NASA’s Satellite Servicing Projects Division (NExIS) Robotics Engineer, Zakiya Tomlinson.

[ NASA ]

Massachusetts has been challenging Silicon Valley as the robotics capital of the United States. They’re not winning, yet. But they’re catching up.

[ MassTech ]

San Francisco-based Formant is letting anyone remotely take its Spot robot for a walk. Watch The Robot Report editors, based in Boston, take Spot for a walk around Golden Gate Park.

You can apply for this experience through Formant at the link below.

[ Formant ] via [ TRR ]

Thanks Steve!

An Institute for Advanced Study Seminar on “Theoretical Machine Learning,” featuring Peter Stone from UT Austin.

For autonomous robots to operate in the open, dynamically changing world, they will need to be able to learn a robust set of skills from relatively little experience. This talk begins by introducing Grounded Simulation Learning as a way to bridge the so-called reality gap between simulators and the real world in order to enable transfer learning from simulation to a real robot. It then introduces two new algorithms for imitation learning from observation that enable a robot to mimic demonstrated skills from state-only trajectories, without any knowledge of the actions selected by the demonstrator. Connections to theoretical advances in off-policy reinforcement learning will be highlighted throughout.
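
As a rough illustration of the imitation-from-observation idea (not the specific algorithms in the talk), the robot can learn an inverse dynamics model from its own interaction, use it to infer the missing actions from state-only demonstrations, and then behavior-clone the result. Below is a toy one-dimensional sketch, with an invented linear system standing in for the real dynamics.

```python
# Sketch of imitation from observation (state-only demonstrations); a generic
# illustration, not the algorithms presented in the talk.
import numpy as np

# 1. Demonstrations contain only states, e.g. a 1-D point mass sliding to a goal.
demo_states = np.linspace(0.0, 1.0, 21)  # s_0 ... s_20, no actions given

# 2. The robot learns an inverse dynamics model from its own interaction:
#    given (s_t, s_{t+1}), predict the action that caused the transition.
#    For this toy linear system s_{t+1} = s_t + 0.05 * a, the model is exact.
def inverse_dynamics(s_t, s_next):
    return (s_next - s_t) / 0.05

# 3. Apply the inverse model to the state-only demo to recover pseudo-actions;
#    the resulting (state, action) pairs can then be behavior-cloned as usual.
pseudo_actions = inverse_dynamics(demo_states[:-1], demo_states[1:])
pairs = list(zip(demo_states[:-1], pseudo_actions))
print(f"recovered action at s={pairs[10][0]:.2f}: {pairs[10][1]:.2f}")  # ~1.00
```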

[ IAS ]


#437697 These Underwater Drones Use Water ...

Yi Chao likes to describe himself as an “armchair oceanographer” because he got incredibly seasick the one time he spent a week aboard a ship. So it’s maybe not surprising that the former NASA scientist has a vision for promoting remote study of the ocean on a grand scale by enabling underwater drones to recharge on the go using his company’s energy-harvesting technology.

Many of the robotic gliders and floating sensor stations currently monitoring the world’s oceans are effectively treated as disposable devices, because the research community has only limited ships and funding with which to retrieve drones after they’ve accomplished their mission of beaming data back home. That’s not only a waste of money, but may also contribute to a growing assortment of abandoned lithium-ion batteries polluting the ocean with their leaking toxic materials—a decidedly unsustainable approach to studying the secrets of the underwater world.

“Our goal is to deploy our energy harvesting system to use renewable energy to power those robots,” says Chao, president and CEO of the startup Seatrec. “We’re going to save one battery at a time, so hopefully we’re not going to dispose of more toxic batteries in the ocean.”

Chao’s California-based startup claims that its SL1 Thermal Energy Harvesting System can already cut the cost of using robotic probes for oceanographic data collection by an order of magnitude. The startup is working on adapting its system to work with autonomous underwater gliders. And it has partnered with defense giant Northrop Grumman to develop an underwater recharging station for oceangoing drones that incorporates Northrop Grumman’s self-insulating electrical connector capable of operating while the powered electrical contacts are submerged.

Seatrec’s energy-harvesting system works by taking advantage of how certain substances transition from solid to liquid, or from liquid to gas, as they heat up. The company’s technology harnesses the pressure changes that result from such phase changes in order to generate electricity.

Image: Seatrec

To make the phase changes happen, Seatrec’s solution taps the temperature differences between warmer water at the ocean surface and colder water at the ocean depths. Even a relatively simple robotic probe can generate additional electricity by changing its buoyancy to either float at the surface or sink down into the colder depths.
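
To get an order-of-magnitude feel for what such a thermal gradient can provide, one can bound the harvest with the Carnot efficiency across the thermocline and the latent heat of a typical phase-change material. The numbers below are generic textbook values, not Seatrec’s specifications.

```python
# Back-of-the-envelope bound on energy per dive cycle from an ocean thermal
# gradient. All numbers are generic illustrative values, not Seatrec's specs.

T_SURFACE = 298.0    # K (~25 °C warm surface water)
T_DEEP    = 277.0    # K (~4 °C at depth)
LATENT_HEAT = 200e3  # J/kg, typical paraffin phase-change material
PCM_MASS    = 2.0    # kg of phase-change material (assumed)

carnot_limit = 1.0 - T_DEEP / T_SURFACE           # ideal efficiency bound
heat_per_cycle = LATENT_HEAT * PCM_MASS           # heat absorbed as the PCM melts
work_upper_bound = carnot_limit * heat_per_cycle  # no real device reaches this

print(f"Carnot limit across the thermocline: {carnot_limit:.1%}")
print(f"Upper bound on work per melt/freeze cycle: {work_upper_bound/1e3:.1f} kJ")
# Roughly 28 kJ, or about 8 Wh; practical harvesters recover only a fraction
# of this bound, but even a fraction is meaningful for a low-power float.
```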

By attaching an external energy-harvesting module, Seatrec has already begun transforming robotic probes into assets that can be recharged and reused more affordably than sending out a ship each time to retrieve the probes. This renewable energy approach could keep such drones going almost indefinitely barring electrical or mechanical failures. “We just attach the backpack to the robots, we give them a cable providing power, and they go into the ocean,” Chao explains.

The early buyers of Seatrec’s products are primarily academic researchers who use underwater drones to collect oceanographic data. But the startup has also attracted military and government interest. It has already received small business innovation research contracts from both the U.S. Office of Naval Research and National Oceanic and Atmospheric Administration (NOAA).

Seatrec has also won two $10,000 prizes under the Powering the Blue Economy: Ocean Observing Prize administered by the U.S. Department of Energy and NOAA. The prizes awarded during the DISCOVER Competition phase back in March 2020 included one prize split with Northrop Grumman for the joint Mission Unlimited UUV Station concept. The startup and defense giant are currently looking for a robotics company to partner with for the DEVELOP Competition phase of the Ocean Observing Prize that will offer a total of $3 million in prizes.

In the long run, Seatrec hopes its energy-harvesting technology can support commercial ventures such as the aquaculture industry that operates vast underwater farms. The technology could also support underwater drones carrying out seabed surveys that pave the way for deep sea mining ventures, although those are not without controversy because of their projected environmental impacts.

Among all the possible applications, Chao seems especially enthusiastic about the prospect of Seatrec’s renewable power technology enabling underwater drones and floaters to collect oceanographic data for much longer periods of time. He spent the better part of two decades working at the NASA Jet Propulsion Laboratory in Pasadena, Calif., where he helped develop a satellite designed for monitoring the Earth’s oceans. But he and the JPL engineering team that developed Seatrec’s core technology believe that swarms of underwater drones can provide a continuous monitoring network to truly begin understanding the oceans in depth.

The COVID-19 pandemic has slowed production and delivery of Seatrec’s products somewhat given local shutdowns and supply chain disruptions. Still, the startup has been able to continue operating in part because it’s considered to be a defense contractor that is operating an essential manufacturing facility. Seatrec’s engineers and other staff members are working in shifts to practice social distancing.

“Rather than building one or two for the government, we want to scale up to build thousands, hundreds of thousands, hopefully millions, so we can improve our understanding and provide that data to the community,” Chao says.
