Tag Archives: framework

#436482 50+ Reasons Our Favorite Emerging ...

For most of history, technology was about atoms, the manipulation of physical stuff to extend humankind’s reach. But in the last five or six decades, atoms have partnered with bits, the elemental “particles” of the digital world as we know it today. As computing has advanced at the accelerating pace described by Moore’s Law, technological progress has become increasingly digitized.

SpaceX lands and reuses rockets and self-driving cars do away with drivers thanks to automation, sensors, and software. Businesses find and hire talent from anywhere in the world, and for better and worse, a notable fraction of the world learns and socializes online. From the sequencing of DNA to artificial intelligence and from 3D printing to robotics, more and more new technologies are moving at a digital pace and quickly emerging to reshape the world around us.

In 2019, stories charting the advances of some of these digital technologies consistently made headlines. Below is what is, at best, an incomplete list of some of the big stories that caught our eye this year. With so much happening, it’s likely we’ve missed some notable headlines and advances—as well as some of your personal favorites. Either way, share your thoughts and candidates for the biggest stories and breakthroughs on Facebook and Twitter.

With that said, let’s dive straight into the year.

Artificial Intelligence
No technology garnered as much attention as AI in 2019. With good reason. Intelligent computer systems are transitioning from research labs to everyday life. Healthcare, weather forecasting, business process automation, traffic congestion—you name it, and machine learning algorithms are likely beginning to work on it. Yet, AI has also been hyped up and overmarketed, and the latest round of AI technology, deep learning, is likely only one piece of the AI puzzle.

This year, OpenAI’s game-playing algorithms beat some of the world’s best Dota 2 players, DeepMind notched impressive wins in StarCraft II, and Carnegie Mellon University’s Pluribus “crushed” pros at six-player Texas Hold’em.
Speaking of games, AI’s mastery of the incredibly complex game of Go prompted a former world champion to quit, stating that AI “cannot be defeated.”
But it isn’t just fun and games. Practical, powerful applications that make the most of AI’s pattern recognition abilities are on the way. Insilico Medicine, for example, used machine learning to help discover and design a new drug in just 46 days, and DeepMind is focused on using AI to crack protein folding.
Of course, AI can be a double-edged sword. When it comes to deepfakes and fake news, for example, AI makes them both easier to create and easier to detect, and early in the year, OpenAI created and announced a powerful AI text generator but delayed releasing it for fear of malicious use.
Recognizing AI’s power for good and ill, the OECD, EU, World Economic Forum, and China all took a stab at defining an ethical framework for the development and deployment of AI.

Computing Systems
Processors and chips kickstarted the digital boom and are still the bedrock of continued growth. While progress in traditional silicon-based chips continues, it’s slowing and getting more expensive. Some say we’re reaching the end of Moore’s Law. While that may be the case for traditional chips, specialized chips and entirely new kinds of computing are waiting in the wings.

In fall 2019, Google confirmed its quantum computer had achieved “quantum supremacy,” a term meaning a quantum computer has performed a calculation that is practically out of reach for classical machines. IBM pushed back on the claim, and it should be noted the calculation was highly specialized. But while it’s still early days, there does appear to be some real progress (and more to come).
Should quantum computing become truly practical, “the implications are staggering.” It could impact machine learning, medicine, chemistry, and materials science, just to name a few areas.
Specialized chips continue to take aim at machine learning—a giant new chip with over a trillion transistors, for example, may make machine learning algorithms significantly more efficient.
Cellular computers also saw advances in 2019 thanks to CRISPR. And the year witnessed the emergence of the first reprogrammable DNA computer and new chips inspired by the brain.
The development of hardware computing platforms is intrinsically linked to software. 2019 saw big technology companies continue to open source (at least parts of) their software, potentially democratizing the use of advanced systems.

Networks
Increasing interconnectedness has, in many ways, defined the 21st century so far. Your phone is no longer just a phone. It’s access to the world’s population and accumulated knowledge—and it fits in your pocket. Pretty neat. This is all thanks to networks, which had some notable advances in 2019.

The biggest network development of the year may well be the arrival of the first 5G networks.
5G’s faster speeds promise advances across many emerging technologies.
Self-driving vehicles, for example, may become both smarter and safer thanks to 5G C-V2X networks. (Don’t worry about trying to remember that. If they catch on, they’ll hopefully get a better name.)
Wi-Fi may have heard the news and said “hold my beer,” as 2019 saw the introduction of Wi-Fi 6. Perhaps the most important upgrade, among others, is that Wi-Fi 6 ensures that the ever-growing number of network-connected devices get higher data rates.
Networks also went to space in 2019, as SpaceX began launching its Starlink constellation of broadband satellites. In typical fashion, Elon Musk showed off the network’s ability to bounce data around the world by sending a Tweet.

Augmented Reality and Virtual Reality
Forget Pokemon Go (unless you want to add me as a friend in the game—in which case don’t forget Pokemon Go). 2019 saw AR and VR advance, even as Magic Leap, the most hyped of the lot, struggled to live up to outsized expectations and sell headsets.

Mixed reality AR and VR technologies, along with the explosive growth of sensor-based data about the world around us, are creating a one-to-one “Mirror World” of our physical reality—a digital world you can overlay on the physical one or dive into immersively thanks to AR and VR.
Facebook launched Replica, for example, which is a photorealistic virtual twin of the real world that, among other things, will help train AIs to better navigate their physical surroundings.
Our other senses (beyond sight) may also become part of the Mirror World through peripherals like a newly developed synthetic skin that aims to bring a sense of touch to VR.
AR and VR equipment is also becoming cheaper—with more producers entering the space—and more user-friendly. Instead of a wired headset requiring an expensive gaming PC, the new Oculus Quest is a wireless, self-contained step toward the mainstream.
Niche uses also continue to gain traction, from Google Glass’s Enterprise edition to the growth of AR and VR in professional education—including on-the-job-training and roleplaying emotionally difficult work encounters, like firing an employee.

Digital Biology and Biotech
The digitization of biology is happening at an incredible rate. With wild new research coming to light every year and just about every tech giant pouring money into new solutions and startups, we’re likely to see amazing advances in 2020 added to those we saw in 2019.

None were, perhaps, more visible than the success of protein-rich, plant-based substitutes for various meats. This was the year Beyond Meat became the top-performing IPO on the Nasdaq and people stood in line for the plant-based Impossible Whopper and KFC’s Beyond Fried Chicken.
In the healthcare space, a report about three people with HIV who became virus free thanks to bone marrow transplants of stem cells caused a huge stir. The research is still in relatively early stages, and isn’t suitable for most people, but it does provide a glimmer of hope.
CRISPR technology, which almost deserves its own section, progressed by leaps and bounds. One tweak made CRISPR up to 50 times more accurate, while the newest CRISPR-based system, prime editing, was described as a “word processor” for gene editing.
Many areas of healthcare stand to gain from CRISPR. For instance, cancer treatment, where a first safety test showed ‘promising’ results.
CRISPR’s many potential uses, however, also include some weird and morally questionable areas, as exemplified by one of the year’s stranger CRISPR-related stories about a human-monkey hybrid embryo in China.
Incidentally, China could be poised to take the lead on CRISPR thanks to massive investments and research programs.
As a consequence of quick advances in gene editing, we are approaching a point where we will be able to design our own biology—but first we need to have a serious conversation as a society about the ethics of gene editing and what lines should be drawn.

3D Printing
3D printing has quietly been growing in both market size and the range of objects its printers are capable of producing. While both are impressive, perhaps the biggest story of 2019 was the printers’ increased speed.

One example was a boat that was printed in just three days, which also set three new world records for 3D printing.
3D printing is also spreading in the construction industry. In Mexico, the technology is being used to construct 50 new homes with subsidized mortgages of just $20/month.
3D printers were also responsible for most parts of a 640-square-meter building in Dubai.
Generally speaking, the use of 3D printing to make parts for everything from rocket engines (even entire rockets) to trains to cars illustrates how robust the technology had become by 2019.
In healthcare, 3D printing is also advancing the cause of bio-printed organs and, in one example, was used to print vascularized parts of a human heart.

Robotics
Living in Japan, I get to see Pepper, Aibo, and other robots on pretty much a daily basis. The novelty of that experience is spreading to other countries, and robots are becoming a more visible addition to both our professional and private lives.

We can’t talk about robots and 2019 without mentioning Boston Dynamics’ Spot robot, which went on sale for the general public.
Meanwhile, Google, Boston Dynamics’ former owner, rebooted its robotics division with a more down-to-earth focus on everyday uses it hopes to commercialize.
SoftBank’s Pepper robot is working as a concierge and receptionist in various countries. It is also being used as a home companion. Not satisfied, Pepper rounded off 2019 by heading to the gym—to coach runners.
Indeed, there’s a growing list of sports where robots perform as well—or better—than humans.
2019 also saw robots launch an assault on the kitchen, including the likes of Samsung’s robot chef, and invade the front yard, with iRobot’s Terra robotic lawnmower.
In the borderlands of robotics, full-body robotic exoskeletons got a bit more practical, as the (by all accounts) user-friendly, battery-powered Sarcos Robotics Guardian XO went commercial.

Autonomous Vehicles
Self-driving cars did not—if you will forgive the play on words—quite stay on track during 2019. The fallout from Uber’s 2018 fatal crash marred part of the year, while some big players ratcheted back expectations on a quick shift to the driverless future. Still, self-driving cars, trucks, and other autonomous systems did make progress this year.

My unofficial award for best name in self-driving goes to Optimus Ride. The company also illustrates that self-driving may not be about creating a one-size-fits-all solution but about catering to specific markets.
Self-driving trucks had a good year, with tests across many countries and states. One of the year’s odder stories was a self-driving truck traversing the US with a delivery of butter.
“A step above the competition” may (or may not) be the future slogan of Boeing’s self-piloted air taxi, which saw its maiden test flight in 2019. Boeing joins a growing list of companies looking to create autonomous, flying passenger vehicles.
2019 was also the year where companies seemed to go all in on last-mile autonomous vehicles. Who wins that particular competition could well emerge during 2020.

Blockchain and Digital Currencies
Bitcoin continues to be the cryptocurrency equivalent of a rollercoaster, but the underlying blockchain technology is progressing more steadily. Together, they may turn parts of our financial systems cashless and digital—though how and when remain open questions.

One indication of this was Facebook’s hugely controversial announcement of Libra, its proposed cryptocurrency. The company faced immediate pushback and saw a host of partners jump ship. Still, it brought the tech into mainstream conversations as never before and is putting pressure on governments and central banks to explore their own digital currencies.
Deloitte’s in-depth survey of the state of blockchain highlighted how the technology has moved from fintech into just about any industry you can think of.
One of the biggest issues facing the spread of many digital currencies—Bitcoin in particular, you could argue—is how much energy mining them consumes. 2019 saw the emergence of several new digital currencies with a much smaller energy footprint.
2019 was also a year where we saw a new kind of digital currency, stablecoins, rise to prominence. As the name indicates, stablecoins are a group of digital currencies whose prices fluctuate far less than the likes of Bitcoin.
In a geopolitical sense, 2019 was a year of China playing catch-up. Having initially cracked down on cryptocurrencies, the country turned 180 degrees and announced that it was “quite close” to releasing a digital currency, alongside a wave of blockchain programs.

Renewable Energy and Energy Storage
While not every government on the planet seems to be a fan of renewable energy, it keeps on outperforming fossil fuel after fossil fuel in places well suited to it—even without support from some of said governments.

One of the reasons for renewable energy’s continued growth is that energy efficiency levels keep on improving.
As a result, an increased number of coal plants are being forced to close due to an inability to compete, and the UK went coal-free for a record two weeks.
We are also seeing more and more financial institutions refusing to fund fossil fuel projects. One such example is the European Investment Bank.
Renewable energy’s advance is tied at the hip to the rise of energy storage, which also had a breakout 2019, in part thanks to investments from the likes of Bill Gates.
The size and capabilities of energy storage also grew in 2019. The best illustration came from Australia, where Tesla’s mega-battery proved that energy storage has reached a stage where it can prop up entire energy grids.

Image Credit: Mathew Schwartz / Unsplash

Posted in Human Robots

#436215 Help Rescuers Find Missing Persons With ...

There’s a definite sense that robots are destined to become a critical part of search and rescue missions and disaster relief efforts, working alongside humans to help first responders move faster and more efficiently. And we’ve seen all kinds of studies that include the claim “this robot could potentially help with disaster relief,” to varying degrees of plausibility.

But it takes a long time, and a lot of extra effort, for academic research to actually become anything useful—especially for first responders, where there isn’t a lot of financial incentive for further development.

It turns out that if you actually ask first responders what they most need for disaster relief, they’re not necessarily interested in the latest and greatest robotic platform or other futuristic technology. They’re using commercial off-the-shelf drones, often consumer-grade ones, because they’re simple and cheap and great at surveying large areas. The challenge is doing something useful with all of the imagery that these drones collect. Computer vision algorithms could help with that, as long as those algorithms are readily accessible and nearly effortless to use.

The IEEE Robotics and Automation Society and the Center for Robotic-Assisted Search and Rescue (CRASAR) at Texas A&M University have launched a contest to bridge this gap between the kinds of tools that roboticists and computer vision researchers might call “basic” and a system that’s useful to first responders in the field. It’s a simple and straightforward idea, and somewhat surprising that no one had thought of it before now. And if you can develop such a system, it’s worth some cash.

CRASAR does already have a Computer Vision Emergency Response Toolkit (created right after Hurricane Harvey), which includes a few pixel filters and some edge and corner detectors. Through this contest, you can get paid your share of a $3,000 prize pool for adding some other excessively basic tools, including:

Image enhancement through histogram equalization, which can be applied to electro-optical (visible light cameras) and thermal imagery

Color segmentation for a range

Grayscale segmentation for a range in a thermal image

If it seems like this contest is really not that hard, that’s because it isn’t. “The first thing to understand about this contest is that strictly speaking, it’s really not that hard,” says Robin Murphy, director of CRASAR. “This contest isn’t necessarily about coming up with algorithms that are brand new, or even state-of-the-art, but rather algorithms that are functional and reliable and implemented in a way that’s immediately [usable] by inexperienced users in the field.”

Murphy readily admits that some of what needs to be done is not particularly challenging at all, but that’s not the point—the point is to make these functionalities accessible to folks who have better things to do than solve these problems themselves, as Murphy explains.

“A lot of my research is driven by problems that I’ve seen in the field that you’d think somebody would have solved, but apparently not. More than half of this is available in OpenCV, but who’s going to find it, download it, learn Python, that kind of thing? We need to get these tools into an open framework. We’re happy if you take libraries that already exist (just don’t steal code)—not everything needs to be rewritten from scratch. Just use what’s already there. Some of it may seem too simple, because it IS that simple. It already exists and you just need to move some code around.”
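To make that concrete, here is a minimal sketch of the three first-category tools using OpenCV in Python. The function names and example thresholds are illustrative choices on our part, not anything specified by the contest, and a winning entry would still need to wrap logic like this in an interface a responder can use in the field.

```python
import cv2
import numpy as np

def enhance(path):
    """Histogram equalization for an EO or thermal image (read as grayscale)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.equalizeHist(img)

def color_segment(path, lo_hsv, hi_hsv):
    """Keep only pixels whose HSV values fall within [lo_hsv, hi_hsv];
    e.g. lo_hsv=(100, 50, 50), hi_hsv=(130, 255, 255) picks out blues."""
    img = cv2.imread(path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo_hsv), np.array(hi_hsv))
    return cv2.bitwise_and(img, img, mask=mask)

def gray_segment(path, lo, hi):
    """Keep only pixels within a grayscale intensity band, e.g. a thermal range."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    mask = cv2.inRange(img, np.array([lo]), np.array([hi]))
    return cv2.bitwise_and(img, img, mask=mask)
```

As Murphy says, the filters themselves are the easy part; the value is in packaging them so a responder never has to touch the code.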

If you want to get very slightly more complicated, there’s a second category that involves a little bit of math:

Coders must provide a system that does the following for each nadir image in a set:

Reads the geotag embedded in the .jpg
Overlays a USNG grid for a user-specified interval (e.g., every 50, 100, or 200 meters)
Gives the GPS coordinates of each pixel if a cursor is rolled over the image
Given a set of images with the GPS or USNG coordinate and a bounding box, finds all images in the set that have a pixel intersecting that location
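As a rough sketch of just the first requirement, reading the embedded geotag, here is what that might look like in Python with Pillow. This is illustrative only, not contest-ready code; the USNG grid overlay and per-pixel coordinates would additionally need the image footprint and a projection library such as pyproj, which we leave aside.

```python
from PIL import Image
from PIL.ExifTags import TAGS, GPSTAGS

def read_geotag(path):
    """Return (lat, lon) in decimal degrees from a JPEG's EXIF GPS tags,
    or None if the image carries no geotag. Assumes Pillow's rational
    EXIF values, which convert cleanly with float()."""
    exif = Image.open(path)._getexif()
    if not exif:
        return None
    gps = {}
    for tag_id, value in exif.items():
        if TAGS.get(tag_id) == "GPSInfo":
            gps = {GPSTAGS.get(k, k): v for k, v in value.items()}
    if "GPSLatitude" not in gps or "GPSLongitude" not in gps:
        return None

    def to_decimal(dms, ref):
        # EXIF stores degrees/minutes/seconds; fold into signed decimal degrees.
        degrees, minutes, seconds = (float(x) for x in dms)
        value = degrees + minutes / 60.0 + seconds / 3600.0
        return -value if ref in ("S", "W") else value

    lat = to_decimal(gps["GPSLatitude"], gps.get("GPSLatitudeRef", "N"))
    lon = to_decimal(gps["GPSLongitude"], gps.get("GPSLongitudeRef", "E"))
    return lat, lon
```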

The final category awards prizes to anyone who comes up with anything else that turns out to be useful. Or, more specifically, “entrants can submit any algorithm they believe will be of value.” Whether or not it’s actually of value will be up to a panel of judges that includes both first responders and computer vision experts. More detailed rules can be found here, along with sample datasets that you can use for testing.

The contest deadline is 16 December, so you’ve got about a month to submit an entry. Winners will be announced at the beginning of January.

Posted in Human Robots

#436186 Video Friday: Invasion of the Mini ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

There will be a Mini-Cheetah Workshop (sponsored by Naver Labs) a year from now at IROS 2020 in Las Vegas. Mini-Cheetahs for everyone!

That’s just a rendering, of course, but this isn’t:

[ MCW ]

I was like 95 percent sure that the Urban Circuit of the DARPA SubT Challenge was going to be in something very subway station-y. Oops!

In the Subterranean (SubT) Challenge, teams deploy autonomous ground and aerial systems to attempt to map, identify, and report artifacts along competition courses in underground environments. The artifacts represent items a first responder or service member may encounter in unknown underground sites. This video provides a preview of the Urban Circuit event location. The Urban Circuit is scheduled for February 18-27, 2020, at Satsop Business Park west of Olympia, Washington.

[ SubT ]

Researchers at SEAS and the Wyss Institute for Biologically Inspired Engineering have developed a resilient RoboBee powered by soft artificial muscles that can crash into walls, fall onto the floor, and collide with other RoboBees without being damaged. It is the first microrobot powered by soft actuators to achieve controlled flight.

To solve the problem of power density, the researchers built upon the electrically-driven soft actuators developed in the lab of David Clarke, the Extended Tarr Family Professor of Materials. These soft actuators are made using dielectric elastomers, soft materials with good insulating properties, that deform when an electric field is applied. By improving the electrode conductivity, the researchers were able to operate the actuator at 500 Hertz, on par with the rigid actuators used previously in similar robots.

Next, the researchers aim to increase the efficiency of the soft-powered robot, which still lags far behind more traditional flying robots.

[ Harvard ]

We present a system for fast and robust handovers with a robot character, together with a user study investigating the effect of robot speed and reaction time on perceived interaction quality. The system can match and exceed human speeds and confirms that users prefer human-level timing.

In a 3×3 user study, we vary the speed of the robot and add variable sensorimotor delays. We evaluate the social perception of the robot using the Robot Social Attribute Scale (RoSAS). Inclusion of a small delay, mimicking the delay of the human sensorimotor system, leads to an improvement in perceived qualities over both no delay and long delay conditions. Specifically, with no delay the robot is perceived as more discomforting and with a long delay, it is perceived as less warm.

[ Disney Research ]

When cars are autonomous, they’re not going to be able to pump themselves full of gas. Or, more likely, electrons. Kuka has the solution.

[ Kuka ]

This looks like fun, right?

[ Robocoaster ]

NASA is leading the way in the use of On-orbit Servicing, Assembly, and Manufacturing to enable large, persistent, upgradable, and maintainable spacecraft. This video was developed by the Advanced Concepts Lab (ACL) at NASA Langley Research Center.

[ NASA ]

The noisiest workshop by far at Humanoids last month was Musical Interactions With Humanoids, the end result of which was this:

[ Workshop ]

IROS is an IEEE event, and in furthering the IEEE mission to benefit humanity through technological innovation, IROS is doing a great job. But don’t take it from us – we are joined by IEEE President-Elect Professor Toshio Fukuda to find out a bit more about the impact events like IROS can have, as well as examine some of the issues around intelligent robotics and systems – from privacy to transparency of the systems at play.

[ IROS ]

Speaking of IROS, we hope you’ve been enjoying our coverage. We have already featured Harvard’s strange sea-urchin-inspired robot and a Japanese quadruped that can climb vertical ladders, with more stories to come over the next several weeks.

In the meantime, enjoy these 10 videos from the conference (as usual, we’re including the title, authors, and abstract for each—if you’d like more details about any of these projects, let us know and we’ll find out more for you).

“A Passive Closing, Tendon Driven, Adaptive Robot Hand for Ultra-Fast, Aerial Grasping and Perching,” by Andrew McLaren, Zak Fitzgerald, Geng Gao, and Minas Liarokapis from the University of Auckland, New Zealand.

Current grasping methods for aerial vehicles are slow, inaccurate and they cannot adapt to any target object. Thus, they do not allow for on-the-fly, ultra-fast grasping. In this paper, we present a passive closing, adaptive robot hand design that offers ultra-fast, aerial grasping for a wide range of everyday objects. We investigate alternative uses of structural compliance for the development of simple, adaptive robot grippers and hands and we propose an appropriate quick release mechanism that facilitates an instantaneous grasping execution. The quick release mechanism is triggered by a simple distance sensor. The proposed hand utilizes only two actuators to control multiple degrees of freedom over three fingers and it retains the superior grasping capabilities of adaptive grasping mechanisms, even under significant object pose or other environmental uncertainties. The hand achieves a grasping time of 96 ms, a maximum grasping force of 56 N and it is able to secure objects of various shapes at high speeds. The proposed hand can serve as the end-effector of grasping capable Unmanned Aerial Vehicle (UAV) platforms and it can offer perching capabilities, facilitating autonomous docking.

“Unstructured Terrain Navigation and Topographic Mapping With a Low-Cost Mobile Cuboid Robot,” by Andrew S. Morgan, Robert L. Baines, Hayley McClintock, and Brian Scassellati from Yale University, USA.

Current robotic terrain mapping techniques require expensive sensor suites to construct an environmental representation. In this work, we present a cube-shaped robot that can roll through unstructured terrain and construct a detailed topographic map of the surface that it traverses in real time with low computational and monetary expense. Our approach devolves many of the complexities of locomotion and mapping to passive mechanical features. Namely, rolling movement is achieved by sequentially inflating latex bladders that are located on four sides of the robot to destabilize and tip it. Sensing is achieved via arrays of fine plastic pins that passively conform to the geometry of underlying terrain, retracting into the cube. We developed a topography by shade algorithm to process images of the displaced pins to reconstruct terrain contours and elevation. We experimentally validated the efficacy of the proposed robot through object mapping and terrain locomotion tasks.

“Toward a Ballbot for Physically Leading People: A Human-Centered Approach,” by Zhongyu Li and Ralph Hollis from Carnegie Mellon University, USA.

This work presents a new human-centered method for indoor service robots to provide people with physical assistance and active guidance while traveling through congested and narrow spaces. As most previous work is robot-centered, this paper develops an end-to-end framework which includes a feedback path of the measured human positions. The framework combines a planning algorithm and a human-robot interaction module to guide the led person to a specified planned position. The approach is deployed on a person-size dynamically stable mobile robot, the CMU ballbot. Trials were conducted where the ballbot physically led a blindfolded person to safely navigate in a cluttered environment.

“Achievement of Online Agile Manipulation Task for Aerial Transformable Multilink Robot,” by Fan Shi, Moju Zhao, Tomoki Anzai, Keita Ito, Xiangyu Chen, Kei Okada, and Masayuki Inaba from the University of Tokyo, Japan.

Transformable aerial robots are favorable in aerial manipulation tasks for their flexible ability to change configuration during the flight. By assuming robot keeping in the mild motion, the previous researches sacrifice aerial agility to simplify the complex non-linear system into a single rigid body with a linear controller. In this paper, we present a framework towards agile swing motion for the transformable multi-links aerial robot. We introduce a computational-efficient non-linear model predictive controller and joints motion primitive framework to achieve agile transforming motions and validate with a novel robot named HYRURS-X. Finally, we implement our framework under a table tennis task to validate the online and agile performance.

“Small-Scale Compliant Dual Arm With Tail for Winged Aerial Robots,” by Alejandro Suarez, Manuel Perez, Guillermo Heredia, and Anibal Ollero from the University of Seville, Spain.

Winged aerial robots represent an evolution of aerial manipulation robots, replacing the multirotor vehicles by fixed or flapping wing platforms. The development of this morphology is motivated in terms of efficiency, endurance and safety in some inspection operations where multirotor platforms may not be suitable. This paper presents a first prototype of compliant dual arm as preliminary step towards the realization of a winged aerial robot capable of perching and manipulating with the wings folded. The dual arm provides 6 DOF (degrees of freedom) for end effector positioning in a human-like kinematic configuration, with a reach of 25 cm (half-scale w.r.t. the human arm), and 0.2 kg weight. The prototype is built with micro metal gear motors, measuring the joint angles and the deflection with small potentiometers. The paper covers the design, electronics, modeling and control of the arms. Experimental results in test-bench validate the developed prototype and its functionalities, including joint position and torque control, bimanual grasping, the dynamic equilibrium with the tail, and the generation of 3D maps with laser sensors attached at the arms.

“A Novel Small-Scale Turtle-inspired Amphibious Spherical Robot,” by Huiming Xing, Shuxiang Guo, Liwei Shi, Xihuan Hou, Yu Liu, Huikang Liu, Yao Hu, Debin Xia, and Zan Li from Beijing Institute of Technology, China.

This paper describes a novel small-scale turtle-inspired Amphibious Spherical Robot (ASRobot) to accomplish exploration tasks in the restricted environment, such as amphibious areas and narrow underwater cave. A Legged, Multi-Vectored Water-Jet Composite Propulsion Mechanism (LMVWCPM) is designed with four legs, one of which contains three connecting rod parts, one water-jet thruster and three joints driven by digital servos. Using this mechanism, the robot is able to walk like amphibious turtles on various terrains and swim flexibly in submarine environment. A simplified kinematic model is established to analyze crawling gaits. With simulation of the crawling gait, the driving torques of different joints contributed to the choice of servos and the size of links of legs. Then we also modeled the robot in water and proposed several underwater locomotion. In order to assess the performance of the proposed robot, a series of experiments were carried out in the lab pool and on flat ground using the prototype robot. Experiments results verified the effectiveness of LMVWCPM and the amphibious control approaches.

“Advanced Autonomy on a Low-Cost Educational Drone Platform,” by Luke Eller, Theo Guerin, Baichuan Huang, Garrett Warren, Sophie Yang, Josh Roy, and Stefanie Tellex from Brown University, USA.

PiDrone is a quadrotor platform created to accompany an introductory robotics course. Students build an autonomous flying robot from scratch and learn to program it through assignments and projects. Existing educational robots do not have significant autonomous capabilities, such as high-level planning and mapping. We present a hardware and software framework for an autonomous aerial robot, in which all software for autonomy can run onboard the drone, implemented in Python. We present an Unscented Kalman Filter (UKF) for accurate state estimation. Next, we present an implementation of Monte Carlo (MC) Localization and Fast-SLAM for Simultaneous Localization and Mapping (SLAM). The performance of UKF, localization, and SLAM is tested and compared to ground truth, provided by a motion-capture system. Our evaluation demonstrates that our autonomous educational framework runs quickly and accurately on a Raspberry Pi in Python, making it ideal for use in educational settings.

“FlightGoggles: Photorealistic Sensor Simulation for Perception-driven Robotics using Photogrammetry and Virtual Reality,” by Winter Guerra, Ezra Tal, Varun Murali, Gilhyun Ryou and Sertac Karaman from the Massachusetts Institute of Technology, USA.

FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in flight in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s) in flight. While a vehicle is in flight in the FlightGoggles virtual reality environment, exteroceptive sensors are rendered synthetically in real time while all complex dynamics are generated organically through natural interactions of the vehicle. The FlightGoggles framework allows for researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. FlightGoggles is distributed as open-source software along with the photorealistic graphics assets for several simulation environments, under the MIT license at http://flightgoggles.mit.edu.

“An Autonomous Quadrotor System for Robust High-Speed Flight Through Cluttered Environments Without GPS,” by Marc Rigter, Benjamin Morrell, Robert G. Reid, Gene B. Merewether, Theodore Tzanetos, Vinay Rajur, KC Wong, and Larry H. Matthies from University of Sydney, Australia; NASA Jet Propulsion Laboratory, California Institute of Technology, USA; and Georgia Institute of Technology, USA.

Robust autonomous flight without GPS is key to many emerging drone applications, such as delivery, search and rescue, and warehouse inspection. These and other applications require accurate trajectory tracking through cluttered static environments, where GPS can be unreliable, while high-speed, agile flight can increase efficiency. We describe the hardware and software of a quadrotor system that meets these requirements with onboard processing: a custom 300 mm wide quadrotor that uses two wide-field-of-view cameras for visual-inertial motion tracking and relocalization to a prior map. Collision-free trajectories are planned offline and tracked online with a custom tracking controller. This controller includes compensation for drag and variability in propeller performance, enabling accurate trajectory tracking, even at high speeds where aerodynamic effects are significant. We describe a system identification approach that identifies quadrotor-specific parameters via maximum likelihood estimation from flight data. Results from flight experiments are presented, which 1) validate the system identification method, 2) show that our controller with aerodynamic compensation reduces tracking error by more than 50% in both horizontal flights at up to 8.5 m/s and vertical flights at up to 3.1 m/s compared to the state-of-the-art, and 3) demonstrate our system tracking complex, aggressive, trajectories.

“Morphing Structure for Changing Hydrodynamic Characteristics of a Soft Underwater Walking Robot,” by Michael Ishida, Dylan Drotman, Benjamin Shih, Mark Hermes, Mitul Luhar, and Michael T. Tolley from the University of California, San Diego (UCSD) and University of Southern California, USA.

Existing platforms for underwater exploration and inspection are often limited to traversing open water and must expend large amounts of energy to maintain a position in flow for long periods of time. Many benthic animals overcome these limitations using legged locomotion and have different hydrodynamic profiles dictated by different body morphologies. This work presents an underwater legged robot with soft legs and a soft inflatable morphing body that can change shape to influence its hydrodynamic characteristics. Flow over the morphing body separates behind the trailing edge of the inflated shape, so whether the protrusion is at the front, center, or back of the robot influences the amount of drag and lift. When the legged robot (2.87 N underwater weight) needs to remain stationary in flow, an asymmetrically inflated body resists sliding by reducing lift on the body by 40% (from 0.52 N to 0.31 N) at the highest flow rate tested while only increasing drag by 5.5% (from 1.75 N to 1.85 N). When the legged robot needs to walk with flow, a large inflated body is pushed along by the flow, causing the robot to walk 16% faster than it would with an uninflated body. The body shape significantly affects the ability of the robot to walk against flow as it is able to walk against 0.09 m/s flow with the uninflated body, but is pushed backwards with a large inflated body. We demonstrate that the robot can detect changes in flow velocity with a commercial force sensor and respond by morphing into a hydrodynamically preferable shape.

Posted in Human Robots

#436180 Bipedal Robot Cassie Cal Learns to ...

There’s no particular reason why knowing how to juggle would be a useful skill for a robot. Despite this, robots are frequently taught how to juggle things. Blind robots can juggle, humanoid robots can juggle, and even drones can juggle. Why? Because juggling is hard, man! You have to think about a bunch of different things at once, and also do a bunch of different things at once, which this particular human at least finds to be overly stressful. While juggling may not stress robots out, it does require carefully coordinated sensing and computing and actuation, which means that it’s as good a task as any (and a more entertaining task than most) for testing the capabilities of your system.

UC Berkeley’s Cassie Cal robot, which consists of two legs and what could be called a torso if you were feeling charitable, has just learned to juggle by bouncing a ball on what would be her head if she had one of those. The idea is that if Cassie can juggle while balancing at the same time, she’ll be better able to do other things that require dynamic multitasking, too. And if that doesn’t work out, she’ll still be able to join the circus.

Cassie’s juggling is assisted by an external motion capture system that tracks the location of the ball, but otherwise everything is autonomous. Cassie is able to juggle the ball by leaning forwards and backwards, left and right, and moving up and down. She does this while maintaining her own balance, which is the whole point of this research—successfully executing two dynamic behaviors that may sometimes be at odds with one another. The end goal here is not to make a better juggling robot, but rather to explore dynamic multitasking, a skill that robots will need in order to be successful in human environments.

This work is from the Hybrid Robotics Lab at UC Berkeley, led by Koushil Sreenath, and is being done by Katherine Poggensee, Albert Li, Daniel Sotsaikich, Bike Zhang, and Prasanth Kotaru.

For a bit more detail, we spoke with Albert Li via email.

Image: UC Berkeley

UC Berkeley’s Cassie Cal getting ready to juggle.

IEEE Spectrum: What would be involved in getting Cassie to juggle without relying on motion capture?

Albert Li: Our motivation for starting off with motion capture was to first address the control challenge of juggling on a biped without worrying about implementing the perception. We actually do have a ball detector working on a camera, which would mean we wouldn’t have to rely on the motion capture system. However, we need to mount the camera in a way that it would provide the best upwards field of view, and we also have to develop a reliable estimator. The estimator is particularly important because when the ball gets close enough to the camera, we actually can’t track the ball and have to assume our dynamic models describe its motion accurately enough until it bounces back up.

What keeps Cassie from juggling indefinitely?

There are a few factors that affect how long Cassie can sustain a juggle. While in simulation the paddle exhibits homogeneous properties like its stiffness and damping, in reality every surface has anisotropic contact properties. So, there are parts of the paddle which may be better for juggling than others (and importantly, react differently than modeled). These differences in contact are also exacerbated due to how the paddle is cantilevered when mounted on Cassie. When the ball hits these areas, it leads to a larger than expected error in a juggle. Due to the small size of the paddle, the ball may then just hit the paddle’s edge and end the juggling run. Over a very long run, this is a likely occurrence. Additionally, some large juggling errors could cause Cassie’s feet to slip slightly, which ends up changing the stable standing position over time. Since this version of the controller assumes Cassie is stationary, this change in position eventually leads to poor juggles and failure.

Would Cassie be able to juggle while walking (or hovershoe-ing)?

Walking (and hovershoe-ing) while juggling is a far more challenging problem and is certainly a goal for future research. Some of these challenges include getting the paddle to precise poses to juggle the ball while also moving to avoid any destabilizing effects of stepping incorrectly. The number of juggles per step of walking could also vary and make the mathematics of the problem more challenging. The controller goal is also more involved. While the current goal of the juggling controller is to juggle the ball to a static apex position, with a walking juggling controller, we may instead want to hit the ball forwards and also walk forwards to bounce it, juggle the ball along a particular path, etc. Solving such challenges would be the main thrusts of the follow-up research.

Can you give an example of a practical task that would be made possible by using a controller like this?

Studying juggling means studying contact behavior and leveraging our models of it to achieve a known objective. Juggling could also be used to study predictable post-contact flight behavior. Consider the scenario where a robot is attempting to make a catch, but fails, letting the ball bounce off of its hand, and then recovering the catch. This behavior could also be intentional: It is often easier to first execute a bounce to direct the target and then perform a subsequent action. For example, volleyball players could in principle directly hit a spiked ball back, but almost always bump the ball back up and then return it.

Even beyond this motivating example, the kinds of models we employ to get juggling working are more generally applicable to any task that involves contact, which could include tasks besides bouncing like sliding and rolling. For example, clearing space on a desk by pushing objects to the side may be preferable than individually manipulating each and every object on it.

You mention collaborative juggling or juggling multiple balls—is that something you’ve tried yet? Can you talk a bit more about what you’re working on next?

We haven’t yet started working on collaborative or multi-ball juggling, but that’s also a goal for future work. Juggling multiple balls statically is probably the most reasonable next goal, but presents additional challenges. For instance, you have to encode a notion of juggling urgency (if the second ball isn’t hit hard enough, you have less time to get the first ball up before you get back to the second one).

On the other hand, collaborative human-robot juggling requires a more advanced decision-making framework. To get robust multi-agent juggling, the robot will need to employ some sort of probabilistic model of the expected human behavior (are they likely to move somewhere? Are they trying to catch the ball high or low? Is it safe to hit the ball back?). In general, developing such human models is difficult since humans are fairly unpredictable and often don’t exhibit rational behavior. This will be a focus of future work.

[ Hybrid Robotics Lab ]

Posted in Human Robots

#436114 Video Friday: Transferring Human Motion ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

We are very sad to say that MIT professor emeritus Woodie Flowers has passed away. Flowers will be remembered for (among many other things, like co-founding FIRST) the MIT 2.007 course that he began teaching in the mid-1970s, famous for its student competitions.

These competitions got a bunch of well-deserved publicity over the years; here’s one from 1985:

And the 2.007 competitions are still going strong—this year’s theme was Moonshot, and you can watch a replay of the event here.

[ MIT ]

Looks like Aibo is getting wireless integration with Hitachi appliances, which turns out to be pretty cute:

What is this magical box where you push a button and 60 seconds later fluffy pancakes come out?!

[ Aibo ]

LiftTiles are a “modular and reconfigurable room-scale shape display” that can turn your floor and walls into on-demand structures.

[ LiftTiles ]

Ben Katz, a grad student in MIT’s Biomimetics Robotics Lab, has been working on these beautiful desktop-sized Furuta pendulums:

That’s a crowdfunding project I’d pay way too much for.

[ Ben Katz ]

A clever bit of cable manipulation from MIT, using GelSight tactile sensors.

[ Paper ]

A useful display of industrial autonomy on ANYmal from the Oxford Robotics Group.

This video is of a demonstration for the ORCA Robotics Hub showing the ANYbotics ANYmal robot carrying out industrial inspection using autonomy software from Oxford Robotics Institute.

[ ORCA Hub ] via [ DRS ]

Thanks Maurice!

Meet Katie Hamilton, a software engineer at NASA’s Ames Research Center, who got into robotics because she wanted to help people with daily life. Katie writes code for robots, like Astrobee, who are assisting astronauts with routine tasks on the International Space Station.

[ NASA Astrobee ]

Transferring human motion to a mobile robotic manipulator and ensuring safe physical human-robot interaction are crucial steps towards automating complex manipulation tasks in human-shared environments. In this work we present a robot whole-body teleoperation framework for human motion transfer. We validate our approach through several experiments using the TIAGo robot, showing this could be an easy way for a non-expert to teach a rough manipulation skill to an assistive robot.

[ Paper ]

This is pretty cool looking for an autonomous boat, but we’ll see if they can build a real one by 2020 since at the moment it’s just an average rendering.

[ ProMare ]

I had no idea that asparagus grows like this. But, sure does make it easy for a robot to harvest.

[ Inaho ]

Skip to 2:30 in this Pepper unboxing video to hear the noise it makes when tickled.

[ HIT Lab NZ ]

In this interview, Jean Paul Laumond discusses his movement from mathematics to robotics and his career contributions to the field, especially in regards to motion planning and anthropomorphic motion. Describing his involvement at CNRS and in other robotics projects, such as HILARE, he comments on the distinction in perception between the robotics approach and a mathematics one.

[ IEEE RAS History ]

Here’s a couple of videos from the CMU Robotics Institute archives, showing some of the work that took place over the last few decades.

[ CMU RI ]

In this episode of the Artificial Intelligence Podcast, Lex Fridman speaks with David Ferrucci from IBM about Watson and (you guessed it) artificial intelligence.

David Ferrucci led the team that built Watson, the IBM question-answering system that beat the top humans in the world at the game of Jeopardy. He is also the Founder, CEO, and Chief Scientist of Elemental Cognition, a company working to engineer AI systems that understand the world the way people do. This conversation is part of the Artificial Intelligence podcast.

[ AI Podcast ]

This week’s CMU RI Seminar is by Pieter Abbeel from UC Berkeley, on “Deep Learning for Robotics.”

Programming robots remains notoriously difficult. Equipping robots with the ability to learn would bypass the need for what otherwise often ends up being time-consuming, task-specific programming. This talk will describe recent progress in deep reinforcement learning (robots learning through their own trial and error), in apprenticeship learning (robots learning from observing people), and in meta-learning for action (robots learning to learn). This work has led to new robotic capabilities in manipulation, locomotion, and flight, with the same approach underlying advances in each of these domains.

[ CMU RI ]

Posted in Human Robots