Tag Archives: think

#439004 Video Friday: A Walking, Wheeling ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

RoboSoft 2021 – April 12-16, 2021 – [Online Conference]
ICRA 2021 – May 30–June 5, 2021 – Xi'an, China

Let us know if you have suggestions for next week, and enjoy today's videos.

This is a pretty terrible video, I think because it was harvested from WeChat, which is where Tencent decided to premiere its new quadruped robot.

Not bad, right? Its name is Max, it has a top speed of 25 kph thanks to its elbow wheels, and we know almost nothing else about it.

[ Tencent ]

Thanks Fan!

Can't bring yourself to mask-shame others? Build a robot to do it for you instead!

[ GitHub ]

Researchers at Georgia Tech have recently developed an entirely soft, long-stroke electromagnetic actuator using liquid metal, compliant magnetic composites, and silicone polymers. The robot was inspired by the motion of the Xenia coral, which pulses its polyps to circulate oxygen under water to promote photosynthesis.

In this work, power applied to soft coils generates an electromagnetic field, which causes the internal compliant magnet to move upward. This forces the squishy silicone linkages to convert linear motion into rotational motion, with an arc length of up to 42 mm and a bandwidth of up to 30 Hz. This highly deformable, fast, and long-stroke actuator topology can be utilized for a variety of applications, from biomimicry to fully soft grasping to wearables.

[ Paper ] via [ Georgia Tech ]

Thanks Noah!

Jueying Mini Lite may look a little like a Boston Dynamics Spot, but according to DeepRobotics, its coloring is based on Bruce Lee's Kung Fu clothes.

[ DeepRobotics ]

Henrique writes, “I would like to share with you the supplementary video of our recent work accepted to ICRA 2021. The video features a quadruped and a full-size humanoid performing dynamic jumps, after a brief animated intro of what direct transcription is. Me and my colleagues have put a lot of hard work into this, and I am very proud of the results.”

Making big robots jump is definitely something to be proud of!
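
For readers who haven't met the term, direct transcription turns a trajectory-optimization problem into one large nonlinear program: the states and controls at a series of knot points become decision variables, and the robot's dynamics are enforced as constraints between them. Below is a minimal, hypothetical sketch on a 1D point mass (a toy stand-in of my own, not the quadruped or humanoid formulation from the paper):

```python
# Minimal direct-transcription sketch: a 1D point mass "jumping" to a target
# height. Toy illustration only -- not the formulation used in the paper.
import numpy as np
from scipy.optimize import minimize

N, h = 30, 0.02           # knot points and time step
n_x, n_u = 2, 1           # state = [height, velocity], control = thrust

def unpack(z):
    x = z[: N * n_x].reshape(N, n_x)
    u = z[N * n_x :].reshape(N, n_u)
    return x, u

def objective(z):
    _, u = unpack(z)
    return h * np.sum(u ** 2)               # minimize control effort

def dynamics_defects(z):
    x, u = unpack(z)
    g = 9.81
    defects = []
    for k in range(N - 1):
        xdot = np.array([x[k, 1], u[k, 0] - g])        # point-mass dynamics
        defects.append(x[k + 1] - (x[k] + h * xdot))   # Euler collocation
    return np.concatenate(defects)

def boundary(z):
    x, _ = unpack(z)
    # start at rest on the ground, end 0.5 m up at rest
    return np.array([x[0, 0], x[0, 1], x[-1, 0] - 0.5, x[-1, 1]])

z0 = np.zeros(N * (n_x + n_u))
result = minimize(
    objective, z0, method="SLSQP",
    constraints=[{"type": "eq", "fun": dynamics_defects},
                 {"type": "eq", "fun": boundary}],
)
x_opt, u_opt = unpack(result.x)
print("solved:", result.success, "| apex height:", round(x_opt[:, 0].max(), 3))
```

Scaling the same idea up to a quadruped or a full-size humanoid mostly means swapping in the full rigid-body dynamics and contact constraints, which is where the hard work behind results like this lives.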

[ SLMC Edinburgh ]

Thanks Henrique!

The finals of the Powered Exoskeleton Race for Cybathlon Global 2020.

[ Cybathlon ]

Thanks Fan!

It's nice that every once in a while, the world can get excited about science and robots.

[ NASA ]

Playing the Imperial March over footage of an army of black quadrupeds may not be sending quite the right message.

[ Unitree ]

Kod*lab PhD students Abriana Stewart-Height, Diego Caporale, and Wei-Hsi Chen, with former Kod*lab student Garrett Wenger, were on set in the summer of 2019 to operate RHex for the filming of Lapsis, a first feature film by director and screenwriter Noah Hutton.

[ Kod*lab ]

In class 2.008, Design and Manufacturing II, mechanical engineering students at MIT learn the fundamental principles of manufacturing at scale by designing and producing their own yo-yos. Instructors stress the importance of sustainable practices in the global supply chain.

[ MIT ]

A short history of robotics, from ABB.

[ ABB ]

In this paper, we propose a whole-body planning framework that unifies dynamic locomotion and manipulation tasks by formulating a single multi-contact optimal control problem. This is demonstrated in a set of real hardware experiments done in free-motion, such as base or end-effector pose tracking, and while pushing/pulling a heavy resistive door. Robustness against model mismatches and external disturbances is also verified during these test cases.

[ Paper ]

This paper presents PANTHER, a real-time perception-aware (PA) trajectory planner in dynamic environments. PANTHER plans trajectories that avoid dynamic obstacles while also keeping them in the sensor field of view (FOV) and minimizing the blur to aid in object tracking.

Extensive hardware experiments in unknown dynamic environments with all the computation running onboard are presented, with velocities of up to 5.8 m/s, and with relative velocities (with respect to the obstacles) of up to 6.3 m/s. The only sensors used are an IMU, a forward-facing depth camera, and a downward-facing monocular camera.

[ MIT ]

With our SaaS solution, we enable robots to inspect industrial facilities. One of the robots our software supports is the Boston Dynamics Spot robot. In this video we demonstrate how autonomous industrial inspection with the Boston Dynamics Spot robot is performed with our teach-and-repeat solution.

[ Energy Robotics ]

In this week’s episode of Tech on Deck, learn about our first technology demonstration sent to Station: The Robotic Refueling Mission. This tech demo helped us develop the tools and techniques needed to robotically refuel a satellite in space, an important capability for space exploration.

[ NASA ]

At Covariant we are committed to research and development that will bring AI Robotics to the real world. As a part of this, we believe it's important to educate individuals on how these exciting innovations will make a positive, fundamental and global impact for years to come. In this presentation, our co-founder Pieter Abbeel breaks down his thoughts on the current state of play for AI robotics.

[ Covariant ]

How do you fly a helicopter on Mars? It takes Ingenuity and Perseverance. During this technology demo, Farah Alibay and Tim Canham will get into the details of how these craft will manage this incredible task.

[ NASA ]

Complex real-world environments continue to present significant challenges for fielding robotic teams, which often face expansive spatial scales, difficult and dynamic terrain, degraded environmental conditions, and severe communication constraints. Breakthrough technologies call for integrated solutions across autonomy, perception, networking, mobility, and human teaming thrusts. As such, the DARPA OFFSET program and the DARPA Subterranean Challenge seek novel approaches and new insights for discovering and demonstrating these innovative technologies, to help close critical gaps for robotic operations in complex urban and underground environments.

[ UPenn ]

#439000 Can AI Stop People From Believing Fake ...

Machine learning algorithms provide a way to detect misinformation based on writing style and how articles are shared.

On topics as varied as climate change and the safety of vaccines, you will find a wave of misinformation all over social media. Trust in conventional news sources may seem lower than ever, but researchers are working on ways to give people more insight on whether they can believe what they read. Researchers have been testing artificial intelligence (AI) tools that could help filter legitimate news. But how trustworthy is AI when it comes to stopping the spread of misinformation?

Researchers at the Rensselaer Polytechnic Institute (RPI) and the University of Tennessee collaborated to study the role of AI in helping people identify whether the news they’re reading is legitimate or not.

The research paper, “Tailoring Heuristics and Timing AI Interventions for Supporting News Veracity Assessments,” was published in Computers in Human Behavior Reports. It discussed how the crowdsourcing marketplace Amazon Mechanical Turk (AMT) can be used to identify misinformation in fresh news, and how specific heuristics, which are rules of thumb used to process information and judge its veracity, can support that assessment. In other words, heuristics are essentially “shortcuts for decisions,” explained Dorit Nevo, an associate professor at RPI’s Lally School of Management and a lead author of the paper.

The study found that AI would be successful in flagging false stories only if the reader did not already have an opinion on the topic, Nevo said. When study subjects were set in their beliefs, confirmation bias kept them from reassessing their views.

Nevo said the first part of the project focused on whether subjects could detect misinformation around climate change and vaccines like the one designed to prevent chicken pox. Then, beginning in April 2020, her team studied how people responded to news related to COVID-19.

“With COVID-19, there was a significant difference,” Nevo said. They found that about 72 percent of respondents could identify misinformation about the coronavirus without heuristic clues, and roughly 93 percent could be convinced by the researchers’ heuristics that the content was fake.

Examples of heuristic clues include text with too many capital letters or the use of strong language, Nevo said.

There were two types of heuristics mentioned in the team’s paper: objective heuristics and source heuristics. They put a statement at the top of each article the subjects read; it instructed them to read the article and indicate whether they believed its central thesis.

“We either put a statement that says the AI finds this article reliable and accurate based on the objective heuristics, or we said the AI finds the source reliable,” Nevo said. “So that's the source heuristic.”

In her research on heuristics, Nevo found that people’s thinking takes one of two paths: The first path is to read the article, think about it and decide if they believe it; the second is to consider the source and what others think about the news, and decide whether to believe it before reading it.

Image: Dorit Nevo/RPI/IEEE Spectrum

Researchers at RPI studied the role of heuristics and AI in determining whether people found news credible.

Another research paper, “Timing Matters When Correcting Fake News,” published in the Proceedings of the National Academy of Sciences by researchers at Harvard University, differed from the RPI study in its findings. While Nevo and her collaborators found that it’s easier to convince people that a story is fake news before they read it, the Harvard researchers, led by Nadia M. Brashier, a psychologist and neuroscientist, discovered that a fact-check can correct misinformation even after people have read the headline. When study subjects saw true or false labels after reading a headline, “subsequent misclassification” dropped by 25.3 percent compared with untagged headlines, Brashier and her team found.

In the end, fighting misinformation will require both computing and human efforts such as policy changes, says Benjamin D. Horne, an assistant professor of Information Sciences at the University of Tennessee and one of Nevo’s co-authors. He says the RPI-Tennessee work was inspired by AI tools he designed previously. Horne was a research assistant at RPI, where he developed machine learning (ML) algorithms that can detect partial truths as well as decontextualized truths and out-of-date information.

“Our algorithms are trained on source-level behavior, both when using the textual content of an article and the network of other news sources that it draws news from,” Horne said. “We have found that these two types of features together are quite good at distinguishing between sources labeled as reliable or unreliable by external news source ratings.”

The machine learning algorithms analyze the writing style and the content-sharing behavior of news outlets, Horne said. Researchers trained a supervised ML algorithm called Random Forest, a classification algorithm that uses decision trees.
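
As a rough illustration of what the writing-style half of such a pipeline can look like, here is a minimal, hypothetical sketch of a Random Forest source classifier in scikit-learn. The toy articles, labels, and TF-IDF features below are invented stand-ins; the researchers' actual feature set and training data are not reproduced here.

```python
# Hypothetical sketch: classify a news source as reliable or unreliable from
# text style, using TF-IDF features and a Random Forest (an ensemble of
# decision trees). The tiny dataset below is invented purely for illustration.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline

articles = [
    "SHOCKING!!! You WON'T believe what they are hiding from you!!!",
    "The city council approved the transit budget after a two-hour public hearing.",
    "MIRACLE cure BANNED by doctors -- share before it's deleted!!!",
    "Researchers reported the results in a peer-reviewed journal on Tuesday.",
]
labels = ["unreliable", "reliable", "unreliable", "reliable"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),           # crude proxy for writing style
    RandomForestClassifier(n_estimators=200, random_state=0),
)
clf.fit(articles, labels)

print(clf.predict(["BREAKING!!! The TRUTH they don't want you to see"]))
```

In practice, as the quote above notes, these text features would be combined with network features describing which other outlets a source draws news from.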

AI for Detecting Fake News

So, what’s the potential for AI to be successful in detecting misinformation?

“The tools we have developed, and other tools developed in this area, have fairly high accuracy in lab settings,” says Horne. “For example, our most recent technical work showed around 83% accuracy in predicting when the source of a news article is reliable or unreliable.”

Despite the effectiveness of algorithms, old-fashioned fact-checking by journalists will still be required to combat fake news. AI could filter the information for fact-checkers to verify, according to Horne.

“AI tools are great at dealing with high quantities of information at fast speeds but lack the nuanced analysis that a journalist or fact-checker can provide,” Horne said. “I see a future where the two work together.”

#438982 Quantum Computing and Reinforcement ...

Deep reinforcement learning is having a superstar moment.

Powering smarter robots. Simulating human neural networks. Trouncing physicians at medical diagnoses and crushing humanity’s best gamers at Go and Atari. While far from achieving the flexible, quick thinking that comes naturally to humans, this powerful machine learning idea seems unstoppable as a harbinger of better thinking machines.

Except there’s a massive roadblock: these algorithms take forever to run. Because the concept behind them is based on trial and error, a reinforcement learning AI “agent” only learns after being rewarded for its correct decisions. For complex problems, the time it takes an AI agent to try and fail to learn a solution can quickly become untenable.

But what if you could try multiple solutions at once?

This week, an international collaboration led by Dr. Philip Walther at the University of Vienna took the “classic” concept of reinforcement learning and gave it a quantum spin. They designed a hybrid AI that relies on both quantum and run-of-the-mill classic computing, and showed that—thanks to quantum quirkiness—it could simultaneously screen a handful of different ways to solve a problem.

The result is a reinforcement learning AI that learned over 60 percent faster than its non-quantum-enabled peers. This is one of the first tests that shows adding quantum computing can speed up the actual learning process of an AI agent, the authors explained.

Although only challenged with a “toy problem” in the study, the hybrid AI, once scaled, could impact real-world problems such as building an efficient quantum internet. The setup “could readily be integrated within future large-scale quantum communication networks,” the authors wrote.

The Bottleneck
Learning from trial and error comes intuitively to our brains.

Say you’re trying to navigate a new convoluted campground without a map. The goal is to get from the communal bathroom back to your campsite. Dead ends and confusing loops abound. We tackle the problem by deciding to turn either left or right at every branch in the road. One will get us closer to the goal; the other leads to a half hour of walking in circles. Eventually, our brain chemistry rewards correct decisions, so we gradually learn the correct route. (If you’re wondering…yeah, true story.)

Reinforcement learning AI agents operate in a similar trial-and-error way. As a problem becomes more complex, the number of trials needed, and the time each one takes, skyrockets.
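
To make that scaling concrete, here is a deliberately dumb toy of my own (not the setup used in the study): an agent that learns only from an occasional reward signal has to make ever more blind guesses as the number of options grows.

```python
# Toy illustration of the trial-and-error bottleneck: an n-armed bandit in which
# exactly one arm pays off, and the agent guesses blindly until it is rewarded.
import random

def guesses_until_reward(n_arms, seed):
    rng = random.Random(seed)
    rewarding_arm = rng.randrange(n_arms)
    guesses = 0
    while True:
        guesses += 1
        if rng.randrange(n_arms) == rewarding_arm:   # the reward arrives only here
            return guesses

for n in (4, 16, 64, 256):
    avg = sum(guesses_until_reward(n, s) for s in range(500)) / 500
    print(f"{n:4d} options -> about {avg:.0f} guesses before the first reward")
```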

“Even in a moderately realistic environment, it may simply take too long to rationally respond to a given situation,” explained study author Dr. Hans Briegel at the Universität Innsbruck in Austria, who previously led efforts to speed up AI decision-making using quantum mechanics. If there’s pressure that allows “only a certain time for a response, an agent may then be unable to cope with the situation and to learn at all,” he wrote.

Many attempts have tried speeding up reinforcement learning. Giving the AI agent a short-term “memory.” Tapping into neuromorphic computing, which better resembles the brain. In 2014, Briegel and colleagues showed that a “quantum brain” of sorts can help propel an AI agent’s decision-making process after learning. But speeding up the learning process itself has eluded our best attempts.

The Hybrid AI
The new study went straight for that previously untenable jugular.

The team’s key insight was to tap into the best of both worlds—quantum and classical computing. Rather than building an entire reinforcement learning system using quantum mechanics, they turned to a hybrid approach that could prove to be more practical. Here, the AI agent uses quantum weirdness as it’s trying out new approaches—the “trial” in trial and error. The system then passes the baton to a classical computer to give the AI its reward—or not—based on its performance.

At the heart of the quantum “trial” process is a quirk called superposition. Stay with me. Our computers are powered by electrons, which can represent only two states—0 or 1. Quantum mechanics is far weirder, in that photons (particles of light) can simultaneously be both 0 and 1, with a slightly different probability of “leaning towards” one or the other.

This noncommittal oddity is part of what makes quantum computing so powerful. Take our reinforcement learning example of navigating a new campsite. In our classic world, we—and our AI—need to decide between turning left or right at an intersection. In a quantum setup, however, the AI can (in a sense) turn left and right at the same time. So when searching for the correct path back to home base, the quantum system has a leg up in that it can simultaneously explore multiple routes, making it far faster than conventional, consecutive trial and error.
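
One standard way to see where that speedup comes from is amplitude amplification, sketched below as a generic toy in plain numpy (this is textbook Grover-style search, not the photonic scheme used in the paper): finding the one correct path among N candidates takes roughly √N amplification steps rather than the ~N blind guesses of classical trial and error.

```python
# Toy amplitude amplification (Grover-style search) over N candidate "paths".
# Classical blind guessing needs ~N tries on average; this needs ~sqrt(N) steps.
import numpy as np

N = 64                                      # number of candidate paths
correct = 42                                # index of the one correct path
state = np.full(N, 1 / np.sqrt(N))          # uniform superposition over all paths

steps = int(round(np.pi / 4 * np.sqrt(N)))  # about 6 steps for N = 64
for _ in range(steps):
    state[correct] *= -1                    # oracle: mark the correct path
    state = 2 * state.mean() - state        # diffusion: inversion about the mean

print(f"{steps} steps, probability of measuring the correct path: "
      f"{state[correct] ** 2:.3f}")         # close to 1
```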

“As a consequence, an agent that can explore its environment in superposition will learn significantly faster than its classical counterpart,” said Briegel.

It’s not all theory. To test out their idea, the team turned to a programmable chip called a nanophotonic processor. Think of it as a CPU-like computer chip, but it processes particles of light—photons—rather than electricity. These light-powered chips have been a long time in the making. Back in 2017, for example, a team from MIT built a fully optical neural network into an optical chip to bolster deep learning.

The chips aren’t all that exotic. Nanophotonic processors act kind of like our eyeglasses, which can carry out complex calculations that transform light that passes through them. In the glasses case, they let people see better. For a light-based computer chip, it allows computation. Rather than using electrical cables, the chips use “wave guides” to shuttle photons and perform calculations based on their interactions.
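
The workhorse element in such chips is the Mach-Zehnder interferometer: two waveguides coupled by beam splitters, with a tunable phase shifter in between. Here is a generic numpy toy of that building block (my own illustration, not the processor used in the study), showing how setting a phase steers light between the two output waveguides:

```python
# Toy Mach-Zehnder interferometer: 50/50 splitter -> phase shifter -> 50/50 splitter.
# Tuning the phase routes optical power between the two output waveguides, which
# is the basic "knob" a programmable nanophotonic processor turns.
import numpy as np

splitter = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)    # 50/50 beam splitter

def mzi(phase):
    shifter = np.diag([np.exp(1j * phase), 1.0])         # phase on the upper arm
    return splitter @ shifter @ splitter

light_in = np.array([1.0, 0.0])                          # light enters the top waveguide
for phase in (0.0, np.pi / 2, np.pi):
    powers = np.abs(mzi(phase) @ light_in) ** 2
    print(f"phase {phase:.2f} rad -> output powers {powers.round(2)}")
```

Meshes of many such interferometers, each with its own programmable phase, are what let these chips implement larger transformations on light.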

The “error” or “reward” part of the new hardware comes from a classical computer. The nanophotonic processor is coupled to a traditional computer, where the latter provides the quantum circuit with feedback—that is, whether to reward a solution or not. This setup, the team explains, allows them to more objectively judge any speed-ups in learning in real time.

In this way, a hybrid reinforcement learning agent alternates between quantum and classical computing, trying out ideas in wibbly-wobbly “multiverse” land while obtaining feedback in grounded, classic physics “normality.”

A Quantum Boost
In simulations using 10,000 AI agents and actual experimental data from 165 trials, the hybrid approach, when challenged with a more complex problem, showed a clear leg up.

The key word is “complex.” The team found that if an AI agent has a high chance of figuring out the solution anyway—as for a simple problem—then classical computing works pretty well. The quantum advantage blossoms when the task becomes more complex or difficult, allowing quantum mechanics to fully flex its superposition muscles. For these problems, the hybrid AI was 63 percent faster at learning a solution compared to traditional reinforcement learning, decreasing its learning effort from 270 guesses to 100.

Now that scientists have shown a quantum boost for reinforcement learning speeds, the race for next-generation computing is even more lit. Photonics hardware required for long-range light-based communications is rapidly shrinking, while improving signal quality. The partial-quantum setup could “aid specifically in problems where frequent search is needed, for example, network routing problems” that are prevalent in keeping the internet running smoothly, the authors wrote. With a quantum boost, reinforcement learning may be able to tackle far more complex problems—those in the real world—than currently possible.

“We are just at the beginning of understanding the possibilities of quantum artificial intelligence,” said lead author Walther.

Image Credit: Oleg Gamulinskiy from Pixabay

#438807 Visible Touch: How Cameras Can Help ...

The dawn of the robot revolution is already here, and it is not the dystopian nightmare we imagined. Instead, it comes in the form of social robots: Autonomous robots in homes and schools, offices and public spaces, able to interact with humans and other robots in a socially acceptable, human-perceptible way to resolve tasks related to core human needs.

To design social robots that “understand” humans, robotics scientists are delving into the psychology of human communication. Researchers from Cornell University posit that embedding the sense of touch in social robots could teach them to detect physical interactions and gestures. They describe a way of doing so by relying not on touch but on vision.

A USB camera inside the robot captures shadows of hand gestures on the robot’s surface and classifies them with machine-learning software. They call this method ShadowSense, which they define as a modality between vision and touch, bringing “the high resolution and low cost of vision-sensing to the close-up sensory experience of touch.”

Touch-sensing in social or interactive robots is usually achieved with force sensors or capacitive sensors, says study co-author Guy Hoffman of the Sibley School of Mechanical and Aerospace Engineering at Cornell University. The drawback of those conventional approaches is that, even to achieve coarse spatial resolution, many sensors are needed in a small area.

However, working with non-rigid, inflatable robots, Hoffman and his co-researchers installed a consumer-grade USB camera to which they attached a fisheye lens for a wider field of vision.

“Given that the robot is already hollow, and has a soft and translucent skin, we could do touch interaction by looking at the shadows created by people touching the robot,” says Hoffman. They used deep neural networks to interpret the shadows. “And we were able to do it with very high accuracy,” he says. The robot was able to interpret six different gestures, including one- or two-handed touch, pointing, hugging and punching, with an accuracy of 87.5 to 96 percent, depending on the lighting.

This is not the first time that computer vision has been used for tactile sensing, though the scale and application of ShadowSense is unique. “Photography has been used for touch mainly in robotic grasping,” says Hoffman. By contrast, Hoffman and collaborators wanted to develop a sense that could be “felt” across the whole of the device.

The potential applications for ShadowSense include mobile robot guidance using touch, and interactive screens on soft robots. A third concerns privacy, especially in home-based social robots. “We have another paper currently under review that looks specifically at the ability to detect gestures that are further away [from the robot’s skin],” says Hoffman. This way, users would be able to cover their robot’s camera with a translucent material and still allow it to interpret actions and gestures from shadows. Thus, even though it’s prevented from capturing a high-resolution image of the user or their surrounding environment, using the right kind of training datasets, the robot can continue to monitor some kinds of non-tactile activities.

In its current iteration, Hoffman says, ShadowSense doesn’t do well in low-light conditions. Environmental noise, or shadows from surrounding objects, also interfere with image classification. Relying on one camera also means a single point of failure. “I think if this were to become a commercial product, we would probably [have to] work a little bit better on image detection,” says Hoffman.

As it was, the researchers used transfer learning—reusing a pre-trained deep-learning model in a new problem—for image analysis. “One of the problems with multi-layered neural networks is that you need a lot of training data to make accurate predictions,” says Hoffman. “Obviously, we don’t have millions of examples of people touching a hollow, inflatable robot. But we can use pre-trained networks trained on general images, which we have billions of, and we only retrain the last layers of the network using our own dataset.”
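
In code, that recipe is only a few lines. The sketch below is a generic torchvision example under assumptions of mine (a ResNet-18 backbone and random stand-in data); the specific architecture, dataset, and training details from the ShadowSense paper are not reproduced here.

```python
# Generic transfer-learning sketch: freeze an ImageNet-pretrained backbone and
# retrain only the final layer for a small 6-class gesture dataset.
# Illustrative only -- not the specific network or data used in ShadowSense.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():                   # freeze all pretrained layers
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 6)      # new head: 6 gesture classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One toy training step on random stand-in data (a real dataset of shadow
# images and gesture labels would go here).
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 6, (8,))
optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
print("toy step loss:", loss.item())
```

Freezing the backbone keeps the number of trainable parameters small, which is what makes a modest dataset of shadow images workable.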

#438774 The World’s First 3D Printed School ...

3D printed houses have been popping up all over the map. Some are hive-shaped, some can float, some are up for sale. Now this practical, cost-cutting technology is being employed for another type of building: a school.

Located on the island of Madagascar, the project is a collaboration between San Francisco-based architecture firm Studio Mortazavi and Thinking Huts, a nonprofit whose mission is to increase global access to education through 3D printing. The school will be built on the campus of a university in Fianarantsoa, a city in the south central area of the island nation.

According to the World Economic Forum, lack of physical infrastructure is one of the biggest barriers to education. Building schools requires not only funds, human capital, and building materials, but also community collaboration and ongoing upkeep and maintenance. For people to feel good about sending their kids to school each day, the buildings should be conveniently located, appealing, comfortable to spend several hours in, and of course safe. All of this is harder to accomplish than you might think, especially in low-income areas.

Because of its comparatively low cost and quick turnaround time, 3D printing has been lauded as a possible solution to housing shortages and a tool to aid in disaster relief. Cost details of the Madagascar school haven’t been released, but if 3D printed houses can go up in a day for under $10,000 or list at a much lower price than their non-3D-printed neighbors, it’s safe to say that 3D printing a school is likely substantially cheaper than building it through traditional construction methods.

The school’s modular design resembles a honeycomb, where as few or as many nodes as needed can be linked together. Each node consists of a room with two bathrooms, a closet, and a front and rear entrance. The Fianarantsoa school will have just one node to start with, but because local technologists will participate in the building process, they’ll learn the ins and outs of 3D printing and subsequently be able to add new nodes or build similar schools in other areas.

Artist rendering of the completed school. Image Credit: Studio Mortazavi/Thinking Huts

The printer for the project is coming from Hyperion Robotics, a Finnish company that specializes in 3D printing solutions for reinforced concrete. The building’s walls will be made of layers of a special cement mixture that Thinking Huts says emits less carbon dioxide than traditional concrete. The roof, doors, and windows will be sourced locally, and the whole process can be completed in less than a week, another major advantage over traditional building methods.

“We can build these schools in less than a week, including the foundation and all the electrical and plumbing work that’s involved,” said Amir Mortazavi, lead architect on the project. “Something like this would typically take months, if not even longer.”

The roof of the building will be equipped with solar panels to provide the school with power, and in a true melding of modern technology and traditional design, the pattern of its walls is based on Malagasy textiles.

Thinking Huts considered seven different countries for its first school, and ended up choosing Madagascar for the pilot based on its need for education infrastructure, stable political outlook, opportunity for growth, and renewable energy potential. However, the team is hoping the pilot will be the first of many similar projects across multiple countries. “We can use this as a case study,” Mortazavi said. “Then we can go to other countries around the world and train the local technologists to use the 3D printer and start a nonprofit there to be able to build schools.”

Construction of the school will take place in the latter half of this year, with hopes of getting students into the classroom as soon as the pandemic is no longer a major threat to the local community’s health.

Image Credit: Studio Mortazavi/Thinking Huts
