Tag Archives: modeling

#438613 Video Friday: Digit Takes a Hike

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

HRI 2021 – March 8-11, 2021 – [Online Conference]
RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30 – June 5, 2021 – Xi'an, China
Let us know if you have suggestions for next week, and enjoy today's videos.

It's winter in Oregon, so everything is damp, all the time. No problem for Digit!

Also the case for summer in Oregon.

[ Agility Robotics ]

While other organisms form collective flocks, schools, or swarms for such purposes as mating, predation, and protection, the Lumbriculus variegatus worms are unusual in their ability to braid themselves together to accomplish tasks that unconnected individuals cannot. A new study reported by researchers at the Georgia Institute of Technology describes how the worms self-organize to act as entangled “active matter,” creating surprising collective behaviors whose principles have been applied to help blobs of simple robots evolve their own locomotion.

No, this doesn't squick me out at all, why would it.

[ Georgia Tech ]

A few years ago, we wrote about Zhifeng Huang's jet-foot equipped bipedal robot, and he's been continuing to work on it to the point where it can now step over gaps that are an absolutely astonishing 147% of its leg length.

[ Paper ]

Thanks Zhifeng!

The Inception Drive is a novel, ultra-compact design for an Infinitely Variable Transmission (IVT) that uses nested pulleys to adjust the gear ratio between input and output shafts. This video shows the first proof-of-concept prototype for a “Fully Balanced” design, where the spinning masses within the drive are completely balanced to reduce vibration, thereby allowing the drive to operate more efficiently and at higher speeds than achievable on an unbalanced design.

As shown in this video, the Inception Drive can change both the speed and direction of rotation of the output shaft while keeping the direction and speed of the input shaft constant. This ability to adjust speed and direction within such a compact package makes the Inception Drive a compelling choice for machine designers in a wide variety of fields, including robotics, automotive, and renewable-energy generation.

[ SRI ]

Robots with kinematic loops are known to have superior mechanical performance. However, these loops make their modeling and control challenging, which prevents more widespread use. In this paper, we describe a versatile Inverse Kinematics (IK) formulation for the retargeting of expressive motions onto mechanical systems with loops.
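
The paper's full formulation handles the loop-closure constraints; the backbone of most IK retargeting, though, is a Jacobian-based iterative solve. Here's a minimal damped-least-squares sketch on a toy 2-link planar arm — an illustration of the general technique, not Disney's implementation, with invented link lengths and gains:

```python
# Damped-least-squares IK on a toy 2-link planar arm. Generic textbook
# technique for illustration; not the Disney formulation, which
# additionally enforces loop-closure constraints.
import numpy as np

L1, L2 = 1.0, 1.0  # link lengths (arbitrary toy values)

def fk(q):
    """Forward kinematics: joint angles -> end-effector (x, y)."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jac(q):
    """Analytic Jacobian of fk with respect to the joint angles."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def ik(target, q=np.zeros(2), damping=0.1, iters=200):
    for _ in range(iters):
        err = target - fk(q)
        if np.linalg.norm(err) < 1e-6:
            break
        J = jac(q)
        # dq = J^T (J J^T + lambda^2 I)^-1 err; the damping term keeps
        # the solve well-behaved near singular configurations.
        q = q + J.T @ np.linalg.solve(J @ J.T + damping**2 * np.eye(2), err)
    return q

print(np.round(fk(ik(np.array([1.2, 0.8]))), 3))  # ~[1.2 0.8]
```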

[ Disney Research ]

Watch Engineered Arts put together one of its Mesmer robots in a not at all uncanny way.

[ Engineered Arts ]

There's been a bunch of interesting research into vision-based tactile sensing recently; here's some from Van Ho at JAIST:

[ Paper ]

Thanks Van!

This is really more of an automated system than a robot, but these little levitating pucks are very very slick.

ACOPOS 6D is based on the principle of magnetic levitation: Shuttles with integrated permanent magnets float over the surface of electromagnetic motor segments. The modular motor segments are 240 x 240 millimeters in size and can be arranged freely in any shape. A variety of shuttle sizes carry payloads of 0.6 to 14 kilograms and reach speeds of up to 2 meters per second. They can move freely in two-dimensional space, rotate and tilt along three axes and offer precise control over the height of levitation. Altogether, that gives them six controllable degrees of freedom.

[ ACOPOS ]

Navigation and motion control of a robot to a destination are tasks that have historically been performed with the assumption that contact with the environment is harmful. This makes sense for rigid-bodied robots where obstacle collisions are fundamentally dangerous. However, because many soft robots have bodies that are low-inertia and compliant, obstacle contact is inherently safe. We find that a planner that takes into account and capitalizes on environmental contact produces paths that are more robust to uncertainty than a planner that avoids all obstacle contact.

[ CHARM Lab ]

The quadrotor experts at UZH have been really cranking it up recently.

Aerodynamic forces render accurate high-speed trajectory tracking with quadrotors extremely challenging. These complex aerodynamic effects become a significant disturbance at high speeds, introducing large positional tracking errors, and are extremely difficult to model. To fly at high speeds, feedback control must be able to account for these aerodynamic effects in real-time. This necessitates a modelling procedure that is both accurate and efficient to evaluate. Therefore, we present an approach to model aerodynamic effects using Gaussian Processes, which we incorporate into a Model Predictive Controller to achieve efficient and precise real-time feedback control, leading to up to 70% reduction in trajectory tracking error at high speeds. We verify our method by extensive comparison to a state-of-the-art linear drag model in synthetic and real-world experiments at speeds of up to 14 m/s and accelerations beyond 4 g.
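
The core idea — fit a GP to the residual between measured dynamics and what a nominal model predicts, then query it inside the controller — can be sketched in a few lines. This is an illustrative toy using scikit-learn, with made-up data and hyperparameters; it is not the UZH implementation, which models the full 3D aerodynamic residual inside an MPC:

```python
# Hedged sketch of GP-augmented dynamics: fit a Gaussian Process to the
# residual acceleration a nominal quadrotor model cannot explain, then
# query its mean for feedback control. Data and hyperparameters are
# invented for illustration.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Pretend training data: body-frame speed (m/s) -> residual acceleration
# (m/s^2) left over after subtracting the nominal thrust+gravity model.
# Here we fake a quadratic drag effect plus noise.
v_train = rng.uniform(0.0, 14.0, size=(200, 1))
a_resid = -0.05 * v_train**2 + 0.05 * rng.standard_normal((200, 1))

gp = GaussianProcessRegressor(
    kernel=RBF(length_scale=5.0) + WhiteKernel(noise_level=1e-3),
    normalize_y=True,
)
gp.fit(v_train, a_resid.ravel())

# Inside the control loop, the predicted residual would be added to the
# nominal dynamics at each candidate state:
mean, std = gp.predict(np.array([[12.0]]), return_std=True)
print(f"residual accel at 12 m/s: {mean[0]:.3f} +/- {std[0]:.3f}")
```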

[ Paper ]

I have not heard much from Harvest Automation over the last couple years and their website was last updated in 2016, but I guess they're selling robots in France, so that's good?

[ Harvest Automation ]

Last year, Clearpath Robotics introduced a ROS package for Spot which enables robotics developers to leverage ROS capabilities out-of-the-box. Here at OTTO Motors, we thought it would be a compelling test case to see just how easy it would be to integrate Spot into our test fleet of OTTO materials handling robots.

[ OTTO Motors ]

Video showcasing recent robotics activities at PRISMA Lab, coordinated by Prof. Bruno Siciliano, at Università di Napoli Federico II.

[ PRISMA Lab ]

Thanks Fan!

State estimation framework developed by team CoSTAR for the DARPA Subterranean Challenge, where the team took second place in the Tunnel Circuit and first place in the Urban Circuit.

[ Paper ]

Highlights from the 2020 ROS Industrial conference.

[ ROS Industrial ]

Thanks Thilo!

Not robotics, but entertaining anyway. From the CHI 1995 Technical Video Program, “The Tablet Newspaper: a Vision for the Future.”

[ CHI 1995 ]

This week's GRASP on Robotics seminar comes from Allison Okamura at Stanford, on “Wearable Haptic Devices for Ubiquitous Communication.”

Haptic devices allow touch-based information transfer between humans and intelligent systems, enabling communication in a salient but private manner that frees other sensory channels. For such devices to become ubiquitous, their physical and computational aspects must be intuitive and unobtrusive. We explore the design of a wide array of haptic feedback mechanisms, ranging from devices that can be actively touched by the fingertips to multi-modal haptic actuation mounted on the arm. We demonstrate how these devices are effective in virtual reality, human-machine communication, and human-human communication.

[ UPenn ]

#437872 AlphaFold Proves That AI Can Crack ...

Any successful implementation of artificial intelligence hinges on asking the right questions in the right way. That’s what the British AI company DeepMind (a subsidiary of Alphabet) accomplished when it used its neural network to tackle one of biology’s grand challenges, the protein-folding problem. Its neural net, known as AlphaFold, was able to predict the 3D structures of proteins based on their amino acid sequences with unprecedented accuracy.

AlphaFold’s predictions at the 14th Critical Assessment of protein Structure Prediction (CASP14) were accurate to within an atom’s width for most of the proteins. The competition consisted of blindly predicting the structure of proteins that have only recently been experimentally determined—with some still awaiting determination.

Called the building blocks of life, proteins consist of 20 different amino acids in various combinations and sequences. A protein's biological function is tied to its 3D structure. Therefore, knowledge of the final folded shape is essential to understanding how a specific protein works—how it interacts with other biomolecules, how it may be controlled or modified, and so on. “Being able to predict structure from sequence is the first real step towards protein design,” says Janet M. Thornton, director emeritus of the European Bioinformatics Institute. It also has enormous benefits in understanding disease-causing pathogens. For instance, at the moment the structures of only about 18 of the 26 proteins in the SARS-CoV-2 virus are known.

Predicting a protein’s 3D structure is a computational nightmare. In 1969 Cyrus Levinthal estimated that there are 10^300 possible conformational combinations for a single protein, which would take longer than the age of the known universe to evaluate by brute force calculation. AlphaFold can do it in a few days.
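
A quick back-of-the-envelope shows just how hopeless brute force is; the sampling rate below is an invented, absurdly generous assumption:

```python
# Levinthal's-paradox arithmetic. Only the 10^300 estimate comes from
# the text; the evaluation rate is an invented, optimistic figure.
conformations = 10**300        # Levinthal's 1969 estimate
samples_per_second = 1e15      # assume a petahertz-rate evaluator
age_of_universe_s = 4.3e17     # ~13.8 billion years in seconds

universe_ages = conformations / samples_per_second / age_of_universe_s
print(f"{universe_ages:.1e} ages of the universe")  # ~2.3e267
```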

As scientific breakthroughs go, AlphaFold’s discovery is right up there with the likes of James Watson and Francis Crick’s DNA double-helix model, or, more recently, Jennifer Doudna and Emmanuelle Charpentier’s CRISPR-Cas9 genome editing technique.

How did a team that just a few years ago was teaching an AI to master a 3,000-year-old game end up training one to answer a question plaguing biologists for five decades? That, says Briana Brownell, data scientist and founder of the AI company PureStrategy, is the beauty of artificial intelligence: The same kind of algorithm can be used for very different things.

“Whenever you have a problem that you want to solve with AI,” she says, “you need to figure out how to get the right data into the model—and then the right sort of output that you can translate back into the real world.”

DeepMind’s success, she says, wasn’t so much a function of picking the right neural nets but rather “how they set up the problem in a sophisticated enough way that the neural network-based modeling [could] actually answer the question.”

AlphaFold showed promise in 2018, when DeepMind introduced a previous iteration of their AI at CASP13, achieving the highest accuracy among all participants. The team had trained it to model target shapes from scratch, without using previously solved proteins as templates.

For 2020 they deployed new deep learning architectures into the AI, using an attention-based model that was trained end-to-end. Attention in a deep learning network refers to a component that manages and quantifies the interdependence between the input and output elements, as well as between the input elements themselves.
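
In its most common “scaled dot-product” form, that component fits in a dozen lines. Here is a generic NumPy sketch of the mechanism described above — not DeepMind's code or AlphaFold's actual architecture:

```python
# Generic scaled dot-product self-attention: each output row is a
# weighted mix of the value rows, with weights reflecting how strongly
# each query matches each key. A sketch of the mechanism only.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # pairwise match strengths
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # row-wise softmax
    return w @ V

# Toy example: a "sequence" of 4 elements with 8-dimensional features.
x = np.random.default_rng(0).standard_normal((4, 8))
print(attention(x, x, x).shape)                   # (4, 8)
```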

The system was trained on public datasets of the approximately 170,000 known experimental protein structures in addition to databases with protein sequences of unknown structures.

“If you look at the difference between their entry two years ago and this one, the structure of the AI system was different,” says Brownell. “This time, they’ve figured out how to translate the real world into data … [and] created an output that could be translated back into the real world.”

Like any AI system, AlphaFold may need to contend with biases in the training data. For instance, Brownell says, AlphaFold is using available information about protein structure that has been measured in other ways. However, there are also many proteins with as yet unknown 3D structures. Therefore, she says, a bias could conceivably creep in toward those kinds of proteins that we have more structural data for.

Thornton says it’s difficult to predict how long it will take for AlphaFold’s breakthrough to translate into real-world applications.

“We only have experimental structures for about 10 per cent of the 20,000 proteins [in] the human body,” she says. “A powerful AI model could unveil the structures of the other 90 per cent.”

Apart from increasing our understanding of human biology and health, she adds, “it is the first real step toward… building proteins that fulfill a specific function. From protein therapeutics to biofuels or enzymes that eat plastic, the possibilities are endless.”

#437864 Video Friday: Jet-Powered Flying ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRA 2020 – June 1-15, 2020 – [Virtual Conference]
RSS 2020 – July 12-16, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today’s videos.

ICRA 2020, the world’s best, biggest, longest virtual robotics conference ever, kicked off last Sunday with an all-star panel on a critical topic: “COVID-19: How Can Roboticists Help?”

Watch other ICRA keynotes on IEEE.tv.

We’re getting closer! Well, kinda. iRonCub, the jet-powered flying humanoid, is still a simulation for now, but not only are the simulations getting better—the researchers have begun testing real jet engines!

This video shows the latest results on Aerial Humanoid Robotics obtained by the Dynamic Interaction Control Lab at the Italian Institute of Technology. The video simulates the robot and jet dynamics; the jet model uses the results obtained in the paper “Modeling, Identification and Control of Model Jet Engines for Jet Powered Robotics” published in IEEE Robotics and Automation Letters.

This video presents the paper entitled “Modeling, Identification and Control of Model Jet Engines for Jet Powered Robotics,” published in IEEE Robotics and Automation Letters (Volume 5, Issue 2, April 2020), pages 2070–2077. Preprint at https://arxiv.org/pdf/1909.13296.pdf

[ IIT ]

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with new tools to let robots better perceive what they’re interacting with: the ability to see and classify items, and a softer, delicate touch.

[ MIT CSAIL ]

UBTECH’s anti-epidemic solutions greatly relieve the workload of front-line medical staff and cut the consumption of personal protective equipment (PPE).

[ UBTECH ]

We demonstrate a method to assess the concrete deterioration in sewers by performing a tactile inspection motion with a sensorized foot of a legged robot.

[ THING ] via [ ANYmal Research ]

Get a closer look at the Virtual competition of the Urban Circuit and how teams can use the simulated environments to better prepare for the physical courses of the Subterranean Challenge.

[ SubT ]

Roboticists at the University of California San Diego have developed flexible feet that can help robots walk up to 40 percent faster on uneven terrain, such as pebbles and wood chips. The work has applications for search-and-rescue missions as well as space exploration.

[ UCSD ]

Thanks Ioana!

Tsuki is a ROS-enabled, highly dynamic quadruped robot developed by Lingkang Zhang.

And as far as we know, Lingkang is still chasing it.

[ Quadruped Tsuki ]

Thanks Lingkang!

Watch this.

This video shows an impressive demo of YuMi’s superior precision, using precise servo gripper fingers and a vacuum suction tool to pick up extremely small parts inside a mechanical watch. The video is not a final application used in production; it is a demo of how such an application could be implemented.

[ ABB ]

Meet Presso, the “5-minute dry cleaning robot.” Can you really call this a robot? We’re not sure. The company says it uses “soft robotics to hold the garment correctly, then clean, sanitize, press and dry under 5 minutes.” The machine was initially designed for use in the hospitality industry, but after adding a disinfectant function for COVID-19, it is now being used on movie and TV sets.

[ Presso ]

The next Mars rover launches next month (!), and here’s a look at some of the instruments on board.

[ JPL ]

Embodied Lead Engineer, Peter Teel, describes why we chose to build Moxie’s computing system from scratch and what makes it so unique.

[ Embodied ]

I did not know that this is where Pepper’s e-stop is. Nice design!

[ Softbank Robotics ]

The state of the art in swarm robotics lacks systems capable of absolute decentralization and is hence unable to mimic complex biological swarm systems consisting of simple units. Our research interconnects the fields of swarm robotics and computer vision, and introduces a novel use of the vision-based method UVDAR for mutual localization in swarm systems, allowing for the absolute decentralization found among biological swarm systems. The developed methodology allows us to deploy real-world aerial swarming systems with robots directly localizing each other instead of communicating their states via a communication network, which is a typical bottleneck of current state-of-the-art systems.

[ CVUT ]

I’m almost positive I could not do this task.

It’s easy to pick up objects using YuMi’s integrated vacuum functionality. It also supports ABB’s Conveyor Tracking and Pickmaster 3 functionality, enabling it to track a moving conveyor and pick up objects using vision—perfect for consumer product handling applications.

[ ABB ]

Cycling safety gestures, such as hand signals and shoulder checks, are an essential part of safe manoeuvring on the road. Child cyclists, in particular, might have difficulties performing safety gestures on the road or even forget about them, given the lack of cycling experience, road distractions and differences in motor and perceptual-motor abilities compared with adults. To support them, we designed two methods to remind about safety gestures while cycling. The first method employs an icon-based reminder in heads-up display (HUD) glasses and the second combines vibration on the handlebar and ambient light in the helmet. We investigated the performance of both methods in a controlled test-track experiment with 18 children using a mid-size tricycle, augmented with a set of sensors to recognize children’s behavior in real time. We found that both systems are successful in reminding children about safety gestures and have their unique advantages and disadvantages.

[ Paper ]

Nathan Sam and Robert “Red” Jensen fabricate and fly a Prandtl-M aircraft at NASA’s Armstrong Flight Research Center in California. The aircraft is the second of three prototypes of varying sizes to provide scientists with options to fly sensors in the Martian atmosphere to collect weather and landing site information for future human exploration of Mars.

[ NASA ]

This is clever: In order to minimize time spent labeling datasets, you can use radar to identify other vehicles, not because the radar can actually recognize other vehicles, but because the radar can recognize other stuff that’s big and moving, which turns out to be almost as good.

[ ICRA Paper ]

Happy 10th birthday to the Natural Robotics Lab at the University of Sheffield.

[ NRL ]

#437824 Video Friday: These Giant Robots Are ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

ACRA 2020 – December 8-10, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today's videos.

“Who doesn’t love giant robots?”

Luma is a towering 8-metre snail which transforms spaces with its otherworldly presence. Another piece, Triffid, stands at 6 metres, and its flexible end sweeps high over audiences’ heads like an enchanted plant. The movement of the creatures is inspired by the flexible, wiggling and contorting motions of the animal kingdom and is designed to provoke instinctive reactions and emotions from the people that meet them. Air Giants is a new creative robotic studio founded in 2020. They are based in Bristol, UK, and comprise a small team of artists, roboticists and software engineers. The studio is passionate about creating emotionally effective motion at a scale which is thought-provoking and transporting, as well as expanding the notion of what large robots can be used for.

Here’s a behind-the-scenes look and more on how the creatures work.

[ Air Giants ]

Thanks Emma!

If the idea of a very expensive sensor payload being submerged in a lake makes you as uncomfortable as it makes me, this is not the video for you.

[ ANYbotics ]

As the pandemic drags on, health measures grow increasingly stringent and many companies continue to promote working from home. Pepper will allow you to keep in touch with your relatives or even your colleagues.

[ Softbank ]

Fairly impressive footwork from Tencent Robotics.

Although, LittleDog was doing that like a decade ago:

[ Tencent ]

It's been long enough since I've been able to go out for boba tea that a robotic boba tea kiosk seems like a reasonable thing to get for my living room.

[ Bobacino ] via [ Gizmodo ]

Road construction and maintenance is challenging and dangerous work. Pioneer Industrial Systems has spent over twenty years designing custom robotic systems for industrial manufacturers around the world. These robotic systems greatly improve safety and increase efficiency. Now they’re taking that expertise on the road, with the Robotic Maintenance Vehicle. This base unit can be mounted on a truck or trailer, and utilizes various modules to perform a variety of road maintenance tasks.

[ Pioneer ]

The Extend Robotics arm uses cloud-based teleoperation software featuring human-like dexterity and intelligence, with applications in healthcare, utilities, and energy.

[ Extend Robotics ]

ARC, short for “AI, Robot, Cloud,” includes the latest algorithms and high precision data required for human-robot coexistence. Now with ultra-low latency networks, many robots can simultaneously become smarter, just by connecting to ARC. “ARC Eye” serves as the eyes for all robots, accurately determining the current location and route even indoors where there is no GPS access. “ARC Brain” is the computing system shared simultaneously by all robots, which plans and processes movement, localization, and task performance for the robot.

[ Naver Labs ]

How can we re-imagine urban infrastructures with cutting-edge technologies? Listen to this webinar from Ger Baron, Amsterdam’s CTO, and Senseable City Lab’s researchers, on how MIT and Amsterdam Institute for Advanced Metropolitan Solutions (AMS Institute) are reimagining Amsterdam’s canals with the first fleet of autonomous boats.

[ MIT ]

Join Guy Burroughes in this webinar recording to hear about Spot, the robot dog created by Boston Dynamics, and how RACE plan to use it in nuclear decommissioning and beyond.

[ UKAEA ]

This GRASP on Robotics seminar comes from Marco Pavone at Stanford University, “On Safe and Efficient Human-robot interactions via Multimodal Intent Modeling and Reachability-based Safety Assurance.”

In this talk I will present a decision-making and control stack for human-robot interactions by using autonomous driving as a motivating example. Specifically, I will first discuss a data-driven approach for learning multimodal interaction dynamics between robot-driven and human-driven vehicles based on recent advances in deep generative modeling. Then, I will discuss how to incorporate such a learned interaction model into a real-time, interaction-aware decision-making framework. The framework is designed to be minimally interventional; in particular, by leveraging backward reachability analysis, it ensures safety even when other cars defy the robot's expectations without unduly sacrificing performance. I will present recent results from experiments on a full-scale steer-by-wire platform, validating the framework and providing practical insights. I will conclude the talk by providing an overview of related efforts from my group on infusing safety assurances in robot autonomy stacks equipped with learning-based components, with an emphasis on adding structure within robot learning via control-theoretical and formal methods.

[ UPenn ]

Autonomous Systems Failures: Who is Legally and Morally Responsible? Sponsored by Northwestern University’s Law and Technology Initiative and AI@NU, the event was moderated by Dan Linna and included Northwestern Engineering's Todd Murphey, University of Washington Law Professor Ryan Calo, and Google Senior Research Scientist Madeleine Clare Elish.

[ Northwestern ]

#437671 Video Friday: Researchers 3D Print ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online]
IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

The Giant Gundam in Yokohama is actually way cooler than I thought it was going to be.

[ Gundam Factory ] via [ YouTube ]

A new 3D-printing method will make it easier to manufacture and control the shape of soft robots, artificial muscles and wearable devices. Researchers at UC San Diego show that by controlling the printing temperature of liquid crystal elastomer, or LCE, they can control the material’s degree of stiffness and ability to contract—also known as degree of actuation. What’s more, they are able to change the stiffness of different areas in the same material by exposing it to heat.

[ UCSD ]

Thanks Ioana!

This is the first successful reactive stepping test on our new torque-controlled biped robot named Bolt. The robot has 3 active degrees of freedom per leg and one passive joint in the ankle. Since there is no active ankle joint, the robot relies only on step location and timing adaptation to stabilize its motion. Not only can the robot perform stepping without active ankles, it is also capable of rejecting external disturbances, as we show in this video.

[ ODRI ]

The curling robot “Curly” is the first AI-based robot to demonstrate competitive curling skills in a real icy environment with high uncertainties. Scientists from seven different Korean research institutions, including Prof. Klaus-Robert Müller, head of the machine-learning group at TU Berlin and guest professor at Korea University, have developed the AI-based curling robot.

[ TU Berlin ]

MoonRanger, a small robotic rover being developed by Carnegie Mellon University and its spinoff Astrobotic, has completed its preliminary design review in preparation for a 2022 mission to search for signs of water at the moon’s south pole. Red Whittaker explains why the new MoonRanger Lunar Explorer design is innovative and different from prior planetary rovers.

[ CMU ]

Cobalt’s security robot can now navigate unmodified elevators, which is an impressive feat.

Also, EXTERMINATE!

[ Cobalt ]

OrionStar, the robotics company invested in by Cheetah Mobile, announced the Robotic Coffee Master. Incorporating 3,000 hours of AI learning, 30,000 hours of robotic arm testing and machine vision training, the Robotic Coffee Master can perform complex brewing techniques, such as curves and spirals, with millimeter-level stability and accuracy (reset error ≤ 0.1mm).

[ Cheetah Mobile ]

DARPA OFFensive Swarm-Enabled Tactics (OFFSET) researchers recently tested swarms of autonomous air and ground vehicles at the Leschi Town Combined Arms Collective Training Facility (CACTF), located at Joint Base Lewis-McChord (JBLM) in Washington. The Leschi Town field experiment is the fourth of six planned experiments for the OFFSET program, which seeks to develop large-scale teams of collaborative autonomous systems capable of supporting ground forces operating in urban environments.

[ DARPA ]

Here are some highlights from Team Explorer’s SubT Urban competition back in February.

[ Team Explorer ]

Researchers with the Skoltech Intelligent Space Robotics Laboratory have developed a system that allows easy interaction with a micro-quadcopter with LEDs that can be used for light-painting. The researchers used a 92x92x29 mm Crazyflie 2.0 quadrotor that weighs just 27 grams, equipped with a light reflector and an array of controllable RGB LEDs. The control system consists of a glove equipped with an inertial measurement unit (IMU; an electronic device that tracks the movement of a user’s hand), and a base station that runs a machine learning algorithm.

[ Skoltech ]

“DeKonBot” is the prototype of a cleaning and disinfection robot for potentially contaminated surfaces in buildings such as door handles, light switches or elevator buttons. While other cleaning robots often spray the cleaning agents over a large area, DeKonBot autonomously identifies the surface to be cleaned.

[ Fraunhofer IPA ]

On Oct. 20, the OSIRIS-REx mission will perform the first attempt of its Touch-And-Go (TAG) sample collection event. Not only will the spacecraft navigate to the surface using innovative navigation techniques, but it could also collect the largest sample since the Apollo missions.

[ NASA ]

With all the robotics research that seems to happen in places where snow is more of an occasional novelty or annoyance, it’s good to see NORLAB taking things more seriously.

[ NORLAB ]

Telexistence’s Model-T robot works very slowly, but very safely, restocking shelves.

[ Telexistence ] via [ YouTube ]

Roboy 3.0 will be unveiled next month!

[ Roboy ]

KUKA ready2_educate is your training cell for hands-on education in robotics. It is especially aimed at schools, universities and company training facilities. The training cell is a complete starter package and your perfect partner for entry into robotics.

[ KUKA ]

A UPenn GRASP Lab Special Seminar on Data Driven Perception for Autonomy, presented by Dapo Afolabi from UC Berkeley.

Perception systems form a crucial part of autonomous and artificial intelligence systems since they convert data about the relationship between an autonomous system and its environment into meaningful information. Perception systems can be difficult to build since they may involve modeling complex physical systems or other autonomous agents. In such scenarios, data driven models may be used to augment physics based models for perception. In this talk, I will present work making use of data driven models for perception tasks, highlighting the benefit of such approaches for autonomous systems.

[ GRASP Lab ]

A Maryland Robotics Center Special Robotics Seminar on Underwater Autonomy, presented by Ioannis Rekleitis from the University of South Carolina.

This talk presents an overview of algorithmic problems related to marine robotics, with a particular focus on increasing the autonomy of robotic systems in challenging environments. I will talk about vision-based state estimation and mapping of underwater caves. An application to monitoring coral reefs will also be discussed. I will also talk about several vehicles used at the University of South Carolina, such as drifters and underwater and surface vehicles. In addition, a short overview of the current projects will be discussed. The work that I will present has a strong algorithmic flavour, while it is validated in real hardware. Experimental results from several testing campaigns will be presented.

[ MRC ]

This week’s CMU RI Seminar comes from Scott Niekum at UT Austin, on Scaling Probabilistically Safe Learning to Robotics.

Before learning robots can be deployed in the real world, it is critical that probabilistic guarantees can be made about the safety and performance of such systems. This talk focuses on new developments in three key areas for scaling safe learning to robotics: (1) a theory of safe imitation learning; (2) scalable reward inference in the absence of models; (3) efficient off-policy policy evaluation. The proposed algorithms offer a blend of safety and practicality, making a significant step towards safe robot learning with modest amounts of real-world data.
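
As a taste of the third ingredient, off-policy policy evaluation is classically done with importance sampling: reweight returns collected under a behavior policy by how likely the evaluation policy would have been to produce them. A minimal textbook sketch — the generic estimator, not the speaker's specific method:

```python
# Ordinary importance-sampling estimator for off-policy policy
# evaluation: estimate the return of an evaluation policy pi_e from
# trajectories collected under a behavior policy pi_b. Generic
# textbook estimator for illustration only.
import numpy as np

def is_estimate(trajectories, pi_e, pi_b):
    """trajectories: list of [(state, action, reward), ...] lists.
    pi_e(a, s), pi_b(a, s): action probabilities under each policy."""
    values = []
    for traj in trajectories:
        ratio, ret = 1.0, 0.0
        for (s, a, r) in traj:
            ratio *= pi_e(a, s) / pi_b(a, s)  # cumulative likelihood ratio
            ret += r
        values.append(ratio * ret)            # reweighted return
    return np.mean(values)

# Toy check: two actions, pi_b uniform, pi_e always picks action 0.
pi_b = lambda a, s: 0.5
pi_e = lambda a, s: 1.0 if a == 0 else 0.0
trajs = [[(0, 0, 1.0)], [(0, 1, 0.0)]]
print(is_estimate(trajs, pi_e, pi_b))  # -> 1.0, the value of always-0
```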

[ CMU RI ]
