#437872 AlphaFold Proves That AI Can Crack ...

Any successful implementation of artificial intelligence hinges on asking the right questions in the right way. That’s what the British AI company DeepMind (a subsidiary of Alphabet) accomplished when it used its neural network to tackle one of biology’s grand challenges, the protein-folding problem. Its neural net, known as AlphaFold, was able to predict the 3D structures of proteins based on their amino acid sequences with unprecedented accuracy.

AlphaFold’s predictions at the 14th Critical Assessment of protein Structure Prediction (CASP14) were accurate to within an atom’s width for most of the proteins. The competition consisted of blindly predicting the structure of proteins that have only recently been experimentally determined—with some still awaiting determination.

Called the building blocks of life, proteins consist of 20 different amino acids in various combinations and sequences. A protein's biological function is tied to its 3D structure. Therefore, knowledge of the final folded shape is essential to understanding how a specific protein works—how it interacts with other biomolecules, how it may be controlled or modified, and so on. “Being able to predict structure from sequence is the first real step towards protein design,” says Janet M. Thornton, director emeritus of the European Bioinformatics Institute. It also has enormous benefits in understanding disease-causing pathogens. For instance, at the moment the structures of only about 18 of the 26 proteins in the SARS-CoV-2 virus are known.

Predicting a protein’s 3D structure is a computational nightmare. In 1969 Cyrus Levinthal estimated that there are 10^300 possible conformational combinations for a single protein, which would take longer than the age of the known universe to evaluate by brute-force calculation. AlphaFold can do it in a few days.
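
For a sense of just how hopeless brute force is, here is a quick back-of-the-envelope check in Python; the evaluation rate is an illustrative assumption, not a figure from the article:

```python
# Back-of-the-envelope check of Levinthal's estimate. The sampling rate
# below is an illustrative assumption, not a number from the article.
conformations = 10**300                    # Levinthal's 1969 estimate
rate = 10**15                              # assume 10^15 conformations evaluated per second
seconds_needed = conformations / rate      # ~1e285 seconds
age_of_universe_s = 4.4e17                 # ~13.8 billion years, in seconds
print(seconds_needed / age_of_universe_s)  # ~2e267 times the age of the universe
```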

As scientific breakthroughs go, AlphaFold’s discovery is right up there with the likes of James Watson and Francis Crick’s DNA double-helix model, or, more recently, Jennifer Doudna and Emmanuelle Charpentier’s CRISPR-Cas9 genome editing technique.

How did a team that just a few years ago was teaching an AI to master a 3,000-year-old game end up training one to answer a question plaguing biologists for five decades? That, says Briana Brownell, data scientist and founder of the AI company PureStrategy, is the beauty of artificial intelligence: The same kind of algorithm can be used for very different things.

“Whenever you have a problem that you want to solve with AI,” she says, “you need to figure out how to get the right data into the model—and then the right sort of output that you can translate back into the real world.”

DeepMind’s success, she says, wasn’t so much a function of picking the right neural nets but rather “how they set up the problem in a sophisticated enough way that the neural network-based modeling [could] actually answer the question.”

AlphaFold showed promise in 2018, when DeepMind introduced a previous iteration of its AI at CASP13, achieving the highest accuracy among all participants. The team had trained its neural network to model target shapes from scratch, without using previously solved proteins as templates.

For 2020, the team deployed new deep-learning architectures in the AI, using an attention-based model that was trained end-to-end. Attention in a deep-learning network refers to a component that manages and quantifies the interdependence between the input and output elements, as well as among the input elements themselves.
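
For readers who want to see that idea concretely, here is a minimal sketch of generic scaled dot-product self-attention. The dimensions and weights are toy assumptions, and this is not AlphaFold's actual architecture:

```python
# Minimal sketch of scaled dot-product self-attention, the generic
# mechanism described above. Not AlphaFold's actual architecture;
# all dimensions and weights here are toy assumptions.
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (n_tokens, d_model) inputs; Wq/Wk/Wv: learned projection matrices."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv              # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])       # pairwise interdependence
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)            # softmax: attention weights
    return w @ V                                  # each output mixes all inputs

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                       # e.g., 5 residues, 8-dim features
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)        # (5, 8)
```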

The system was trained on public datasets of the approximately 170,000 known experimental protein structures, in addition to databases of protein sequences whose structures are unknown.

“If you look at the difference between their entry two years ago and this one, the structure of the AI system was different,” says Brownell. “This time, they’ve figured out how to translate the real world into data … [and] created an output that could be translated back into the real world.”

Like any AI system, AlphaFold may need to contend with biases in its training data. For instance, Brownell says, AlphaFold relies on available information about protein structures that have been measured in other ways. However, many proteins still have unknown 3D structures. Therefore, she says, a bias could conceivably creep in toward the kinds of proteins for which we have more structural data.

Thornton says it’s difficult to predict how long it will take for AlphaFold’s breakthrough to translate into real-world applications.

“We only have experimental structures for about 10 per cent of the 20,000 proteins [in] the human body,” she says. “A powerful AI model could unveil the structures of the other 90 per cent.”

Apart from increasing our understanding of human biology and health, she adds, “it is the first real step toward… building proteins that fulfill a specific function. From protein therapeutics to biofuels or enzymes that eat plastic, the possibilities are endless.”

#437869 Video Friday: Japan’s Gundam Robot ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ACRA 2020 – December 8-10, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today’s videos.

Another BIG step for Japan’s Gundam project.

[ Gundam Factory ]

We present an interactive design system that allows users to create sculpting styles and fabricate clay models using a standard 6-axis robot arm. Given a general mesh as input, the user iteratively selects sub-areas of the mesh through decomposition and embeds the design expression into an initial set of toolpaths by modifying key parameters that affect the visual appearance of the sculpted surface finish. We demonstrate the versatility of our approach by designing and fabricating different sculpting styles over a wide range of clay models.

[ Disney Research ]

China’s Chang’e-5 completed the drilling, sampling and sealing of lunar soil at 04:53 BJT on Wednesday, marking the country’s first automatic sampling on the Moon, the China National Space Administration (CNSA) announced Wednesday.

[ CCTV ]

Red Hat’s been putting together an excellent documentary on Willow Garage and ROS, and all five parts have just been released. We posted Part 1 a little while ago, so here’s Part 2 and Part 3.

Parts 4 and 5 are at the link below!

[ Red Hat ]

Congratulations to ANYbotics on a well-deserved raise!

ANYbotics has origins in the Robotic Systems Lab at ETH Zurich, and ANYmal’s heritage can be traced back at least as far as StarlETH, which we first met at ICRA 2013.

[ ANYbotics ]

Most conventional robots work with 0.05–0.1 mm accuracy. Such accuracy requires high-end components like low-backlash gears, high-resolution encoders, complicated CNC parts, powerful motor drives, etc. In combination, these add up to an expensive solution that is either unaffordable or unnecessary for many applications. As a result, we founded Apicoo Robotics to provide our customers with solutions at a much lower cost and higher stability.

[ Apicoo Robotics ]

The Skydio 2 is an incredible drone that can take incredible footage fully autonomously, but it definitely helps if you do incredible things in incredible places.

[ Skydio ]

Jueying is the first domestically developed sensitive quadruped robot for industrial applications and scenarios. It can work in coordination with (or in place of) humans to reach any place a person can reach. It has superior environmental adaptability, excellent dynamic balance, and precise environmental perception capabilities. By carrying functional modules for different application scenarios in its safe payload area, the quadruped robot’s mobility can be organically integrated with commercial functional modules, providing solutions for smart factories, smart parks, scene display, and public safety.

[ DeepRobotics ]

We have developed a semi-autonomous quadruped robot, called LASER-D (Legged-Agile-Smart-Efficient Robot for Disinfection), for performing disinfection in cluttered environments. The robot is equipped with a spray-based disinfection system and leverages its body motion to control the spray action without the need for an extra stabilization mechanism. The system includes an image-processing capability to verify disinfected regions with high accuracy. This allows the robot to successfully carry out effective disinfection tasks while safely traversing cluttered environments, climbing stairs and slopes, and navigating slippery surfaces.

[ USC Viterbi ]

We propose the “multi-vision hand,” in which a number of small high-speed cameras are mounted on the hand of a common 7-degrees-of-freedom robot. We also propose visual-servoing control using a multi-vision system that combines the multi-vision hand with external fixed high-speed cameras. The target task was a ball-catching motion, which requires high-speed operation. In the proposed catching control, the catch position of the ball, estimated by the external fixed high-speed cameras, is corrected by the multi-vision hand in real time.
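
As a rough illustration of that correction step, the toy sketch below blends a global estimate from the fixed cameras with a local refinement from the hand-mounted cameras; the blend rule and gain are assumptions for illustration, not the authors' actual controller:

```python
# Toy sketch of the correction idea described above: a global catch-point
# estimate from fixed cameras is refined by hand-mounted cameras.
# The blend rule and gain are illustrative assumptions.
import numpy as np

def fuse_estimates(p_global, p_local, alpha=0.7):
    """Weighted blend; trust the close-range hand cameras more (alpha)."""
    return alpha * p_local + (1.0 - alpha) * p_global

p_global = np.array([0.42, 0.10, 0.95])   # catch point from fixed cameras (m)
p_local = np.array([0.45, 0.08, 0.93])    # refinement from hand cameras (m)
print(fuse_estimates(p_global, p_local))  # target sent to the arm controller
```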

More details available through IROS on-demand.

[ Namiki Laboratory ]

Shunichi Kurumaya wrote in to share his work on PneuFinger, a pneumatically actuated compliant robotic gripping system.

[ Nakamura Lab ]

Thanks Shunichi!

Motivated by insights into the human teaching process, we introduce a method for incorporating unstructured natural language into imitation learning. At training time, the expert can provide demonstrations along with verbal descriptions in order to describe the underlying intent, e.g., “Go to the large green bowl.” The training process, then, interrelates the different modalities to encode the correlations between language, perception, and motion. The resulting language-conditioned visuomotor policies can be conditioned at run time on new human commands and instructions, which allows for more fine-grained control over the trained policies while also reducing situational ambiguity.
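
To make “language-conditioned” concrete, here is a minimal sketch that fuses an image encoding with an instruction encoding to produce a motor command. Every module, dimension, and name is an illustrative assumption, not the authors' actual model:

```python
# Minimal sketch of a language-conditioned visuomotor policy, assuming
# generic encoders; not the ASU authors' actual architecture.
import torch
import torch.nn as nn

class LanguageConditionedPolicy(nn.Module):
    def __init__(self, img_dim=64, lang_dim=32, action_dim=7):
        super().__init__()
        self.vision = nn.Sequential(      # toy image encoder
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, img_dim))
        self.language = nn.GRU(input_size=300, hidden_size=lang_dim,
                               batch_first=True)  # word vectors in, sentence code out
        self.head = nn.Sequential(        # fuse modalities, emit motor command
            nn.Linear(img_dim + lang_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim))

    def forward(self, image, instruction):
        v = self.vision(image)
        _, h = self.language(instruction)          # h: (1, batch, lang_dim)
        return self.head(torch.cat([v, h[-1]], dim=-1))

policy = LanguageConditionedPolicy()
img = torch.randn(1, 3, 64, 64)          # camera frame
instr = torch.randn(1, 6, 300)           # instruction as a sequence of word vectors
action = policy(img, instr)              # (1, 7) motor command
```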

[ ASU ]

Thanks Heni!

Gita is on sale for the holidays for only $2,000.

[ Gita ]

This video introduces a computational approach for routing thin artificial muscle actuators through hyperelastic soft robots, in order to achieve a desired deformation behavior. Provided with a robot design, and a set of example deformations, we continuously co-optimize the routing of actuators, and their actuation, to approximate example deformations as closely as possible.

[ Disney Research ]

Researchers and mountain rescuers in Switzerland are making huge progress in the field of autonomous drones as the technology becomes more in-demand for global search-and-rescue operations.

[ SWI ]

This short clip of the Ghost Robotics V60 features an interesting, if awkward looking, righting behavior at the end.

[ Ghost Robotics ]

Europe’s Rosalind Franklin ExoMars rover has a younger ‘sibling’, ExoMy. The blueprints and software for this mini-version of the full-size Mars explorer are available for free so that anyone can 3D print, assemble and program their own ExoMy.

[ ESA ]

The holiday season is here, and with the added impact of Covid-19, consumer demand is at an all-time high. Berkshire Grey is the partner that today’s leading organizations turn to when it comes to fulfillment automation.

[ Berkshire Grey ]

Until very recently, the vast majority of studies and reports on the use of cargo drones for public health were almost exclusively focused on the technology. The driving interest was the range these drones could travel, how much they could carry, and how they worked. Little to no attention was paid to the human side of these projects. Community perception, community engagement, consent, and stakeholder feedback were rarely if ever addressed. This webinar presents the findings from a very recent study that finally sheds some light on the human side of drone delivery projects.

[ WeRobotics ]

#437864 Video Friday: Jet-Powered Flying ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRA 2020 – June 1-15, 2020 – [Virtual Conference]
RSS 2020 – July 12-16, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today’s videos.

ICRA 2020, the world’s best, biggest, longest virtual robotics conference ever, kicked off last Sunday with an all-star panel on a critical topic: “COVID-19: How Can Roboticists Help?”

Watch other ICRA keynotes on IEEE.tv.

We’re getting closer! Well, kinda. iRonCub, the jet-powered flying humanoid, is still a simulation for now, but not only are the simulations getting better—the researchers have begun testing real jet engines!

This video shows the latest results on Aerial Humanoid Robotics obtained by the Dynamic Interaction Control Lab at the Italian Institute of Technology. The video simulates robot and jet dynamics, where the latter uses the results obtained in the paper “Modeling, Identification and Control of Model Jet Engines for Jet Powered Robotics” published in IEEE Robotics and Automation Letters.

This video presents the paper “Modeling, Identification and Control of Model Jet Engines for Jet Powered Robotics,” published in IEEE Robotics and Automation Letters (Volume 5, Issue 2, April 2020), pages 2070–2077. Preprint: https://arxiv.org/pdf/1909.13296.pdf

[ IIT ]

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) came up with new tools to let robots better perceive what they’re interacting with: the ability to see and classify items, and a softer, delicate touch.

[ MIT CSAIL ]

UBTECH’s anti-epidemic solutions greatly relieve the workload of front-line medical staff and cut the consumption of personal protective equipment (PPE).

[ UBTECH ]

We demonstrate a method to assess the concrete deterioration in sewers by performing a tactile inspection motion with a sensorized foot of a legged robot.

[ THING ] via [ ANYmal Research ]

Get a closer look at the Virtual competition of the Urban Circuit and how teams can use the simulated environments to better prepare for the physical courses of the Subterranean Challenge.

[ SubT ]

Roboticists at the University of California San Diego have developed flexible feet that can help robots walk up to 40 percent faster on uneven terrain, such as pebbles and wood chips. The work has applications for search-and-rescue missions as well as space exploration.

[ UCSD ]

Thanks Ioana!

Tsuki is a ROS-enabled, highly dynamic quadruped robot developed by Lingkang Zhang.

And as far as we know, Lingkang is still chasing it.

[ Quadruped Tsuki ]

Thanks Lingkang!

Watch this.

This video shows an impressive demo of YuMi’s superior precision, using precise servo gripper fingers and a vacuum suction tool to pick up extremely small parts inside a mechanical watch. The video is not of a final application used in production; it is a demo of how such an application can be implemented.

[ ABB ]

Meet Presso, the “5-minute dry cleaning robot.” Can you really call this a robot? We’re not sure. The company says it uses “soft robotics to hold the garment correctly, then clean, sanitize, press and dry under 5 minutes.” The machine was initially designed for use in the hospitality industry, but after adding a disinfectant function for COVID-19, it is now being used on movie and TV sets.

[ Presso ]

The next Mars rover launches next month (!), and here’s a look at some of the instruments on board.

[ JPL ]

Embodied Lead Engineer, Peter Teel, describes why we chose to build Moxie’s computing system from scratch and what makes it so unique.

[ Embodied ]

I did not know that this is where Pepper’s e-stop is. Nice design!

[ Softbank Robotics ]

The state of the art in swarm robotics lacks systems capable of absolute decentralization and is hence unable to mimic complex biological swarm systems consisting of simple units. Our research interconnects the fields of swarm robotics and computer vision, and introduces a novel use of the vision-based method UVDAR for mutual localization in swarm systems, allowing for the absolute decentralization found among biological swarm systems. The developed methodology allows us to deploy real-world aerial swarming systems with robots directly localizing each other instead of communicating their states via a communication network, which is a typical bottleneck of current state-of-the-art systems.

[ CVUT ]

I’m almost positive I could not do this task.

It’s easy to pick up objects using YuMi’s integrated vacuum functionality. YuMi also supports ABB’s Conveyor Tracking and PickMaster 3 functionality, enabling it to track a moving conveyor and pick up objects using vision. Perfect for consumer products handling applications.

[ ABB ]

Cycling safety gestures, such as hand signals and shoulder checks, are an essential part of safe manoeuvring on the road. Child cyclists, in particular, might have difficulties performing safety gestures on the road or even forget about them, given the lack of cycling experience, road distractions and differences in motor and perceptual-motor abilities compared with adults. To support them, we designed two methods to remind about safety gestures while cycling. The first method employs an icon-based reminder in heads-up display (HUD) glasses and the second combines vibration on the handlebar and ambient light in the helmet. We investigated the performance of both methods in a controlled test-track experiment with 18 children using a mid-size tricycle, augmented with a set of sensors to recognize children’s behavior in real time. We found that both systems are successful in reminding children about safety gestures and have their unique advantages and disadvantages.

[ Paper ]

Nathan Sam and Robert “Red” Jensen fabricate and fly a Prandtl-M aircraft at NASA’s Armstrong Flight Research Center in California. The aircraft is the second of three prototypes of varying sizes to provide scientists with options to fly sensors in the Martian atmosphere to collect weather and landing site information for future human exploration of Mars.

[ NASA ]

This is clever: In order to minimize time spent labeling datasets, you can use radar to identify other vehicles, not because the radar can actually recognize other vehicles, but because the radar can recognize other stuff that’s big and moving, which turns out to be almost as good.
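
A toy sketch of that weak-labeling trick follows; the thresholds and data format are invented for illustration (see the ICRA paper for the real method):

```python
# Toy sketch of the weak-labeling idea described above: large, moving radar
# returns are treated as probable vehicles and used to auto-label frames.
# Thresholds and data format are invented for illustration.
def weak_vehicle_labels(radar_returns, min_extent_m=1.5, min_speed_mps=2.0):
    """Keep returns big enough and fast enough to plausibly be vehicles."""
    return [r for r in radar_returns
            if r["extent"] >= min_extent_m and abs(r["speed"]) >= min_speed_mps]

returns = [{"extent": 4.2, "speed": 13.0},   # almost certainly a vehicle
           {"extent": 0.4, "speed": 1.2}]    # clutter: too small and slow
print(weak_vehicle_labels(returns))          # keeps only the first return
```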

[ ICRA Paper ]

Happy 10th birthday to the Natural Robotics Lab at the University of Sheffield.

[ NRL ]

#437859 We Can Do Better Than Human-Like Hands ...

One strategy for designing robots that are capable in anthropomorphic environments is to make the robots themselves as anthropomorphic as possible. It makes sense—for example, there are stairs all over the place because humans have legs, and legs are good at stairs, so if we give robots legs like humans, they’ll be good at stairs too, right? We also see this tendency when it comes to robotic grippers, because robots need to grip things that have been optimized for human hands.

Despite some amazing robotic hands inspired by the biology of our own human hands, there are also opportunities for creativity in gripper designs that do things human hands are not physically capable of. At ICRA 2020, researchers from Stanford University presented a paper on the design of a robotic hand that has fingers made of actuated rollers, allowing it to manipulate objects in ways that would tie your fingers into knots.

While it’s got a couple fingers, this prototype “roller grasper” hand tosses anthropomorphic design out the window in favor of unique methods of in-hand manipulation. The roller grasper does share some features with other grippers designed for in-hand manipulation using active surfaces (like conveyor belts embedded in fingers), but what’s new and exciting here is that those articulated active roller fingertips (or whatever non-anthropomorphic name you want to give them) provide active surfaces that are steerable. This means that the hand can grasp objects and rotate them without having to resort to complex sequences of finger repositioning, which is how humans do it.

Photo: Stanford University

Picking something flat off of a table, always tricky for robotic hands (and sometimes for human hands as well), is a breeze thanks to the fingertip rollers.

Each of the hand’s fingers has three actuated degrees of freedom, which result in several different ways in which objects can be grasped and manipulated. Picking something flat off of a table, always tricky for robotic hands (and sometimes for human hands as well), is a breeze thanks to the fingertip rollers. The motion of an object in this gripper isn’t quite holonomic, meaning that it can’t arbitrarily reorient things without sometimes going through intermediate steps. And it’s also not compliant in the way that many other grippers are, limiting some types of grasps. This particular design probably won’t replace every gripper out there, but it’s particularly skilled at some specific kinds of manipulations in a way that makes it unique.

We should be clear that it’s not the intent of this paper (or of this article!) to belittle five-fingered robotic hands—the point is that there are lots of things that you can do with totally different hand designs, and just because humans use one kind of hand doesn’t mean that robots need to do the same if they want to match (or exceed) some specific human capabilities. If we could make robotic hands with five fingers that had all of the actuation and sensing and control that our own hands do, that would be amazing, but it’s probably decades away. In the meantime, there are plenty of different designs to explore.

And speaking of exploring different designs, these same folks are already at work on version two of their hand, which replaces the fingertip rollers with fingertip balls:

For more on this new version of the hand (among other things), we spoke with lead author Shenli Yuan via email. And the ICRA page is here if you have questions of your own.

IEEE Spectrum: Human hands are often seen as the standard for manipulation. Given that adding degrees of freedom that human hands don’t have (as in your work) can make robotic hands more capable than ours in many ways, do you think we should still think of human hands as something to try and emulate?

Shenli Yuan: Yes, definitely. Not only because human hands have great manipulation capability, but because we’re constantly surrounded by objects that were designed and built specifically to be manipulated by the human hand. Anthropomorphic robot hands are still worth investigating, and still have a long way to go before they truly match the dexterity of a human hand. The design we came up with is an exploration of what unique capabilities may be achieved if we are not bound by the constraints of anthropomorphism, and what a biologically impossible mechanism may achieve in robotic manipulation. In addition, for lots of tasks, it isn’t necessarily optimal to try and emulate the human hand. Perhaps in 20 to 50 years when robot manipulators are much better, they won’t look like the human hand that much. The design constraints for robotics and biology have points in common (like mechanical wear, finite tendon stiffness) but also major differences (like continuous rotation for robots and fewer heat-dissipation problems for humans).

“For lots of tasks, it isn’t necessarily optimal to try and emulate the human hand. Perhaps in 20 to 50 years when robot manipulators are much better, they won’t look like the human hand that much.”
—Shenli Yuan, Stanford University

What are some manipulation capabilities of human hands that are the most difficult to replicate with your system?

There are a few things that come to mind. It cannot perform a power grasp (using the whole hand for grasping, as opposed to a pinch grasp that uses only the fingertips), which is something that can be easily done by human hands. It cannot move or rotate objects instantaneously in arbitrary directions or about arbitrary axes, though the human hand is somewhat limited in this respect as well. It also cannot perform gaiting. That being said, these limitations exist largely because this grasper only has 9 degrees of freedom, as opposed to the human hand, which has more than 20. We don’t think of this grasper as a replacement for anthropomorphic hands, but rather as a way to provide unique capabilities without all of the complexity associated with a highly actuated, humanlike hand.

What’s the most surprising or impressive thing that your hand is able to do?

The most impressive feature is that it can rotate objects continuously, which is typically difficult or inefficient for humanlike robot hands. Something really surprising was that we put most of our energy into the design and analysis of the grasper, and the control strategy we implemented for demonstrations is very simple. This simple control strategy works surprisingly well with very little tuning or trial-and-error.

With this many degrees of freedom, how complicated is it to get the hand to do what you want it to do?

The number of degrees of freedom is actually not what makes controlling it difficult. Most of the difficulties we encountered were actually due to the rolling contact between the rollers and the object during manipulation. The rolling behavior can be viewed as constantly breaking and re-establishing contacts between the rollers and objects; this very dynamic behavior introduces uncertainties in controlling our grasper. Specifically, it was difficult to estimate the velocity of each contact point with the object, which changes based on object and finger position, object shape (especially curvature), and slip/no slip.

What more can you tell us about Roller Grasper V2?

Roller Grasper V2 has spherical rollers, while V1 has cylindrical rollers. We realized that cylindrical rollers are very good at manipulating objects when the rollers and the object form line contacts, but they can be unstable when the grasp geometry doesn’t allow for a line contact between each roller and the grasped object. Spherical rollers solve that problem by allowing predictable points of contact regardless of how a surface is oriented.

The parallelogram mechanism of Roller Grasper V1 makes the pivot axis offset a bit from the center of the roller, which made our control and analysis more challenging. The kinematics of the Roller Grasper V2 is simpler. The base joint intersects with the finger, which intersects with the pivot joint, and the pivot joint intersects with the roller joint. Its symmetrical design and simpler kinematics make our control and analysis a lot more straightforward. Roller Grasper V2 also has a larger pivot range of 180 degrees, while V1 is limited to 90 degrees.

In terms of control, we implemented more sophisticated control strategies (including a hand-crafted control strategy and an imitation learning based strategy) for the grasper to perform autonomous in-hand manipulation.

“Design of a Roller-Based Dexterous Hand for Object Grasping and Within-Hand Manipulation,” by Shenli Yuan, Austin D. Epps, Jerome B. Nowak, and J. Kenneth Salisbury from Stanford University is being presented at ICRA 2020.


#437857 Video Friday: Robotic Third Hand Helps ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ICRA 2020 – June 1-15, 2020 – [Virtual Conference]
RSS 2020 – July 12-16, 2020 – [Virtual Conference]
CLAWAR 2020 – August 24-26, 2020 – [Virtual Conference]
ICUAS 2020 – September 1-4, 2020 – Athens, Greece
ICRES 2020 – September 28-29, 2020 – Taipei, Taiwan
ICSR 2020 – November 14-16, 2020 – Golden, Colorado
Let us know if you have suggestions for next week, and enjoy today’s videos.

We are seeing some exciting advances in the development of supernumerary robotic limbs. But one thing about this technology remains a major challenge: How do you control the extra limb if your own hands are busy—say, if you’re carrying a package? MIT researchers at Professor Harry Asada’s lab have an idea. They are using subtle finger movements in sensorized gloves to control the supernumerary limb. The results are promising, and they’ve demonstrated a waist-mounted arm with a qb SoftHand that can help you with doors, elevators, and even handshakes.

[ Paper ]

ROBOPANDA

Fluid-actuated soft robots, or fluidic elastomer actuators, have shown great potential in robotic applications where large compliance and safe interaction are dominant concerns. They have been widely studied in wearable robotics, prosthetics, and rehabilitation in recent years. However, such soft robots and actuators are tethered to a bulky pump and controlled by various valves, limiting their applications to a small confined space. In this study, we report a new and effective approach to fluidic power actuation that is untethered; easy to design, fabricate, and control; and allows various modes of actuation. In the proposed approach, a sealed elastic tube filled with fluid (gas or liquid) is segmented by adaptors. When twisting a segment, two major effects can be observed: (1) the twisted segment exhibits a contraction force, and (2) other segments inflate or deform according to their constraint patterns.

[ Paper ]

And now: “Magnetic cilia carpets.”

[ ETH Zurich ]

To adhere to government recommendations while maintaining requirements for social distancing during the COVID-19 pandemic, Yaskawa Motoman is now utilizing an HC10DT collaborative robot to take individual employee temperatures. Named “Covie,” the robotic solution and its software were designed and fabricated through a combined effort by Yaskawa Motoman’s Technology Advancement Team (TAT) and Product Solutions Group (PSG), as well as a group of robotics students from the University of Dayton.

They should have programmed it to nod if your temperature was normal, and smack you upside the head while yelling “GO HOME” if it wasn’t.

[ Yaskawa ]

Driving slowly on pre-defined routes, ZMP’s RakuRo autonomous vehicle helps people with mobility challenges enjoy cherry blossoms in Japan.

RakuRo costs about US $1,000 per month to rent, but ZMP suggests that facilities or groups of ~10 people could get together and share one, which makes the cost much more reasonable.

[ ZMP ]

Jessy Grizzle from the Dynamic Legged Locomotion Lab at the University of Michigan writes:

Our lab closed on March 20, 2020 under the State of Michigan’s “Stay Home, Stay Safe” order. For a 24-hour period, it seemed that our labs would be “sanitized” during our absence. Since we had no idea what that meant, we decided that Cassie Blue needed to “Stay Home, Stay Safe” as well. We loaded up a very expensive robot and took her off campus. On May 26, we were allowed to re-open our laboratory. After thoroughly cleaning the lab, disinfecting tools and surfaces, developing and getting approval for new safe operation procedures, we then re-organized our work areas to respect social distancing requirements and brought Cassie back to the laboratory.

During the roughly two months we were working remotely, the lab’s members got a lot done. Papers were written, dissertation proposals were composed, and plans for a new course, ROB 101, Computational Linear Algebra, were developed with colleagues. In addition, one of us (Yukai Gong) found the lockdown to his liking! He needed the long period of quiet to work through some new ideas for how to control 3D bipedal robots.

[ Michigan Robotics ]

Thanks Jessy and Bruce!

You can tell that this video of how Pepper has been useful during COVID-19 is not focused on the United States, since it refers to the pandemic in past tense.

[ Softbank Robotics ]

NASA’s water-seeking robotic Moon rover just booked a ride to the Moon’s South Pole. Astrobotic of Pittsburgh, Pennsylvania, has been selected to deliver the Volatiles Investigating Polar Exploration Rover, or VIPER, to the Moon in 2023.

[ NASA ]

This could be the most impressive robotic gripper demo I have ever seen.

[ Soft Robotics ]

Whiz, an autonomous vacuum sweeper, brings innovation to the cleaning industry by automating tedious tasks for your team. Easy to train and easy to use, Whiz works with your staff to deliver a high-quality clean while increasing efficiency and productivity.

[ Softbank Robotics ]

About 40 seconds into this video, a robot briefly chases a goose.

[ Ghost Robotics ]

SwarmRail is a new concept for rail-guided omnidirectional mobile robot systems. It aims for a highly flexible production process in the factory of the future by opening up the available work space from above. This means that transport and manipulation tasks can be carried out by floor- and ceiling-bound robot systems. The special feature of the system is the combination of omnidirectionally mobile units with a grid-shaped rail network, which is characterized by passive crossings and a continuous gap between the running surfaces of the rails. Through this gap, a manipulator operating below the rail can be connected to a mobile unit traveling on the rail.

[ DLRRMC ]

RightHand Robotics (RHR), a leader in providing robotic piece-picking solutions, has partnered with PALTAC Corporation, Japan’s largest wholesaler of consumer packaged goods. The collaboration introduces RightHand’s newest piece-picking solution to the Japanese market, with multiple workstations installed in PALTAC’s newest facility, RDC Saitama, which opened in 2019 in Sugito, Saitama Prefecture, Japan.

[ RightHand Robotics ]

From ICRA 2020, a debate on the “Future of Robotics Research,” addressing such issues as “robotics research is over-reliant on benchmark datasets and simulation” and “robots designed for personal or household use have failed because of fundamental misunderstandings of Human-Robot Interaction (HRI).”

[ Robotics Debates ]

MassRobotics has a series of interviews where robotics celebrities are interviewed by high school students. The students are perhaps a little awkward (remember being in high school?), but it’s honest and the questions are interesting. The first two interviews are with Laurie Leshin, who worked on space robots at NASA and is now President of Worcester Polytechnic Institute, and Colin Angle, founder and CEO of iRobot.

[ MassRobotics ]

Thanks Andrew!

In this episode of the Voices from DARPA podcast, Dr. Timothy Chung, a program manager since 2016 in the agency’s Tactical Technology Office, delves into his robotics and autonomous technology programs – the Subterranean (SubT) Challenge and OFFensive Swarm-Enabled Tactics (OFFSET). From robot soccer to live-fly experimentation programs involving dozens of unmanned aircraft systems (UASs), he explains how he aims to assist humans heading into unknown environments via advances in collaborative autonomy and robotics.

[ DARPA ]