#437577 A Swarm of Cyborg Cockroaches That Lives ...

Digital Nature Group at the University of Tsukuba in Japan is working towards a “post ubiquitous computing era consisting of seamless combination of computational resources and non-computational resources.” By “non-computational resources,” they mean leveraging the natural world, which for better or worse includes insects.

At small scales, the capabilities of insects far exceed the capabilities of robots. I get that. And I get that turning cockroaches into an army of insect cyborgs could be useful in a variety of ways. But what makes me fundamentally uncomfortable is the idea that “in the future, they’ll appear out of nowhere without us recognizing it, fulfilling their tasks and then hiding.” In other words, you’ll have cyborg cockroaches hiding all over your house, all the time.

Warning: This article contains video of cockroaches being modified with cybernetic implants that some people may find upsetting.

Remotely controlling cockroaches isn’t a new idea, and it’s a fairly simple one. By stimulating the left or right antenna nerves of the cockroach, you can make it think that it’s running into something, and get it to turn in the opposite direction. Add wireless connectivity, some fiducial markers, an overhead camera system, and a bunch of cyborg cockroaches, and you have a resilient swarm that can collaborate on tasks. The researchers suggest that the swarm could be used as a display (by making each cockroach into a pixel), to transport objects, or to draw things. There’s also some mention of “input or haptic interfaces or an audio device,” which frankly sounds horrible.
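To make the steering trick concrete, here is a minimal Python sketch of the closed-loop logic it implies. This is an illustration only, not the researchers’ actual system: the stimulate_antenna interface and the pose format from the overhead camera are hypothetical stand-ins.

```python
import math

def steer_toward(target_xy, roach_pose, stimulate_antenna, deadband_deg=10):
    """Turn a cyborg roach toward a target by spoofing antenna contact.

    Stimulating one antenna makes the roach think it has bumped into
    something on that side, so it turns the opposite way.
    """
    x, y, heading_deg = roach_pose  # pose from fiducial marker + overhead camera
    bearing_deg = math.degrees(math.atan2(target_xy[1] - y, target_xy[0] - x))
    error = (bearing_deg - heading_deg + 180) % 360 - 180  # signed heading error

    if error > deadband_deg:        # target is off to the left...
        stimulate_antenna("right")  # ...fake an obstacle on the right to turn left
    elif error < -deadband_deg:     # target is off to the right...
        stimulate_antenna("left")   # ...fake an obstacle on the left to turn right
    # inside the deadband: no stimulation, let the roach run straight
```

Run one such update per camera frame for each roach, and the swarm converges on its assigned goal positions.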

There are many other swarm robotics platforms that can do what you’re seeing these cyborg roaches do, but according to the researchers, cockroaches let you take advantage of their impressive ruggedness, efficiency, high power-to-weight ratio, and mobility. They’re a lot messier (yay biology!), but they can also feed themselves, meaning that whenever you don’t need the swarm to perform some task for you, you can deactivate the control system and let them scurry off to find crumbs in dark places. And when you need them again, turn the control system on and experience the nightmare of your cyborg cockroach swarm reassembling itself from all over your house.

While we’re on the subject of cockroach hacking, we would be doing you a disservice if we didn’t share some of project leader Yuga Tsukuda’s other projects. Here’s a cockroach-powered clock, about which the researchers note that “it is difficult to control the cockroaches when trying to control them by electrical stimulation because they move spontaneously. However, by cutting off the head and removing the brain, they do not move spontaneously and the control by the computer becomes easy.” So, zombie cockroaches. Good then.

And if that’s not enough for you, how about this:

The researchers describe this project as an “attempt to use cockroaches for makeup by sticking them on the face.” They stick electrodes into the cockroaches to make them wiggle their legs when electrical stimulation is applied. And the peacock feathers? They “make the cockroach movement bigger, and create a cosmic mystery.”

#437571 Video Friday: Snugglebot Is What We All ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
Robotica 2020 – November 10-14, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Bay Area Robotics Symposium – November 20, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today's videos.

Snugglebot is what we all need right now.

[ Snugglebot ]

In his video message on his prayer intention for November, Pope Francis urges that progress in robotics and artificial intelligence (AI) be oriented “towards respecting the dignity of the person and of Creation.”

[ Vatican News ]

KaPOW!

Apparently it's supposed to do that—the disruptor flies off backwards to reduce recoil on the robot, and has its own parachute to keep it from going too far.

[ Ghost Robotics ]

Animals have many muscles, receptors, and neurons that compose feedback loops. In this study, we designed artificial muscles, receptors, and neurons without any microprocessors or software-based controllers. We imitated the reflex rule observed in cat walking experiments; as a result, the Pneumatic Brainless Robot II produced a running motion (a leg trajectory and a gait pattern) through the interaction between the body, the ground, and the artificial reflexes. We envision that the simple reflex circuit we discovered will be a candidate for a minimal model for describing the principles of animal locomotion.

Find the paper, “Brainless Running: A Quasi-quadruped Robot with Decentralized Spinal Reflexes by Solely Mechanical Devices,” on IROS On-Demand.

[ IROS ]

Thanks Yoichi!

I have no idea what these guys are saying, but they're talking about robots that serve chocolate!

The Zotter Schokoladen Manufaktur experience world, run by managing director Josef Zotter, draws more than 270,000 visitors annually. Since March 2019, this world of chocolate in Bergl near Riegersburg, Austria, has been enriched by a new attraction: the world’s first chocolate and praline robot from KUKA delights young and old alike, serving up chocolate and pralines to guests according to their personal taste.

[ Zotter ]

This paper proposes a systematic solution that uses an unmanned aerial vehicle (UAV) to aggressively and safely track an agile target. The solution properly handles challenging situations in which the target’s intent and the dense environment are unknown to the UAV. The proposed solution is integrated into an onboard quadrotor system, and we fully test the system in challenging real-world tracking missions. Moreover, benchmark comparisons validate that the proposed method surpasses state-of-the-art methods in time efficiency and tracking effectiveness.

[ FAST Lab ]

Southwest Research Institute developed a cable management system for collaborative robotics, or “cobots.” Dress packs used on cobots can create problems when cables are too tight (e-stops) or loose (tangling). SwRI developed ADDRESS, or the Adaptive DRESing System, to provide smarter cobot dress packs that address e-stops and tangling.

[ SWRI ]

A quick demonstration of the acoustic contact sensor in the RBO Hand 2. An embedded microphone records the sound inside the pneumatic finger. Depending on which part of the finger makes contact, the sound is a little bit different. We created a sensor that recognizes these small changes and predicts the contact location from the sound. The visualization on the left shows the recorded sound (top) and which of the nine contact classes the sensor is currently predicting (bottom).
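For a rough sense of how such a sound-to-contact-location sensor could be built, here is a generic supervised-learning sketch. It is not the TU Berlin pipeline; train_clips, train_labels, and new_clip are assumed to be your own labeled recordings.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def spectral_features(clip, n_bins=64):
    """Average magnitude spectrum of a mono audio clip -> fixed-size vector."""
    spectrum = np.abs(np.fft.rfft(clip))
    bands = np.array_split(spectrum, n_bins)  # coarse bands, so clip length doesn't matter
    return np.array([band.mean() for band in bands])

# X: one feature vector per recording; y: which of the nine finger regions was touched
X = np.stack([spectral_features(clip) for clip in train_clips])
y = np.array(train_labels)  # integers 0..8

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500).fit(X, y)
predicted_region = clf.predict([spectral_features(new_clip)])[0]
```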

[ TU Berlin ]

The MAVLab won the prize for the “most innovative design” in the IMAV 2018 indoor competition, in which drones had to fly through windows and gates and follow a predetermined flight path. The prize was awarded for the demonstration of a fully autonomous version of the “DelFly Nimble,” a tailless flapping-wing drone.

In order to fly by itself, the DelFly Nimble was equipped with a single, small camera and a small processor allowing onboard vision processing and control. The jury of international experts in the field praised the agility and autonomous flight capabilities of the DelFly Nimble.

[ MAVLab ]

A reactive walking controller for the Open Dynamic Robot Initiative's skinny quadruped.

[ ODRI ]

Mobile service robots are already able to recognize people and objects while navigating autonomously through their operating environments. But what is the ideal position of the robot to interact with a user? To solve this problem, Fraunhofer IPA developed an approach that connects navigation, 3D environment modeling, and person detection to find the optimal goal pose for HRI.

[ Fraunhofer ]

Yaskawa has been in robotics for a very, very long time.

[ Yaskawa ]

Black in Robotics IROS launch event, featuring Carlotta Berry.

[ Black in Robotics ]

What is AI? I have no idea! But these folks have some opinions.

[ MIT ]

Aerial-based Observations of Volcanic Emissions (ABOVE) is an international collaborative project that is changing the way we sample volcanic gas emissions. Harnessing recent advances in drone technology, unoccupied aerial systems (UAS) in the ABOVE fleet are able to acquire aerial measurements of volcanic gases directly from within previously inaccessible volcanic plumes. In May 2019, a team of 30 researchers undertook an ambitious field deployment to two volcanoes – Tavurvur (Rabaul) and Manam in Papua New Guinea – both amongst the most prodigious emitters of sulphur dioxide on Earth, and yet lacking any measurements of how much carbon they emit to the atmosphere.

[ ABOVE ]

A talk from IHMC's Robert Griffin for ICCAS 2020, including a few updates on their Nadia humanoid.

[ IHMC ]

#437491 3.2 Billion Images and 720,000 Hours of ...

Twitter over the weekend “tagged” as manipulated a video showing US Democratic presidential candidate Joe Biden supposedly forgetting which state he’s in while addressing a crowd.

Biden’s “hello Minnesota” greeting contrasted with prominent signage reading “Tampa, Florida” and “Text FL to 30330.”

The Associated Press’s fact check confirmed the signs were added digitally and the original footage was indeed from a Minnesota rally. But by the time the misleading video was removed, it had already racked up more than one million views, The Guardian reports.

A FALSE video claiming Biden forgot what state he was in was viewed more than 1 million times on Twitter in the past 24 hours

In the video, Biden says “Hello, Minnesota.”

The event did indeed happen in MN — signs on stage read MN

But false video edited signs to read Florida pic.twitter.com/LdHQVaky8v

— Donie O'Sullivan (@donie) November 1, 2020

If you use social media, the chances are you see (and forward) some of the more than 3.2 billion images and 720,000 hours of video shared daily. When faced with such a glut of content, how can we know what’s real and what’s not?

While one part of the solution is an increased use of content verification tools, it’s equally important we all boost our digital media literacy. Ultimately, one of the best lines of defense—and the only one you can control—is you.

Seeing Shouldn’t Always Be Believing
Misinformation (when you accidentally share false content) and disinformation (when you intentionally share it) in any medium can erode trust in civil institutions such as news organizations, coalitions and social movements. However, fake photos and videos are often the most potent.

For those with a vested political interest, creating, sharing and/or editing false images can distract, confuse and manipulate viewers to sow discord and uncertainty (especially in already polarized environments). Posters and platforms can also make money from the sharing of fake, sensationalist content.

Only 11-25 percent of journalists globally use social media content verification tools, according to the International Centre for Journalists.

Could You Spot a Doctored Image?
Consider this photo of Martin Luther King Jr.

Dr. Martin Luther King Jr. Giving the middle finger #DopeHistoricPics pic.twitter.com/5W38DRaLHr

— Dope Historic Pics (@dopehistoricpic) December 20, 2013

This altered image clones part of the background over King Jr’s finger, so it looks like he’s flipping off the camera. It has been shared as genuine on Twitter, Reddit, and white supremacist websites.

In the original 1964 photo, King flashed the “V for victory” sign after learning the US Senate had passed the civil rights bill.

“Those who love peace must learn to organize as effectively as those who love war.”
Dr. Martin Luther King Jr.

This photo was taken on June 19th, 1964, showing Dr King giving a peace sign after hearing that the civil rights bill had passed the senate. @snopes pic.twitter.com/LXHmwMYZS5

— Willie's Reserve (@WilliesReserve) January 21, 2019

Beyond adding or removing elements, there’s a whole category of photo manipulation in which images are fused together.

Earlier this year, a photo of an armed man was photoshopped by Fox News, which overlaid the man onto other scenes without disclosing the edits, the Seattle Times reported.

You mean this guy who’s been photoshopped into three separate photos released by Fox News? pic.twitter.com/fAXpIKu77a

— Zander Yates ザンダーイェーツ (@ZanderYates) June 13, 2020

Similarly, the image below was shared thousands of times on social media in January, during Australia’s Black Summer bushfires. The AFP’s fact check confirmed it is not authentic and is actually a combination of several separate photos.

Image is more powerful than screams of Greta. A silent girl is holding a koala. She looks straight at you from the waters of the ocean where they found a refuge. She is wearing a breathing mask. A wall of fire is behind them. I do not know the name of the photographer #Australia pic.twitter.com/CrTX3lltdh

— EVC Music (@EVCMusicUK) January 6, 2020

Fully and Partially Synthetic Content
Online, you’ll also find sophisticated “deepfake” videos showing (usually famous) people saying or doing things they never did. Less advanced versions can be created using apps such as Zao and Reface.

Or, if you don’t want to use your photo for a profile picture, you can default to one of several websites offering hundreds of thousands of AI-generated, photorealistic images of people.

These people don’t exist; they’re just images generated by artificial intelligence. Generated Photos, CC BY

Editing Pixel Values and the (not so) Simple Crop
Cropping can greatly alter the context of a photo, too.

We saw this in 2017, when a US government employee edited official pictures of Donald Trump’s inauguration to make the crowd appear bigger, according to The Guardian. The staffer cropped out the empty space “where the crowd ended” for a set of pictures for Trump.

Views of the crowds at the inaugurations of former US President Barack Obama in 2009 (left) and President Donald Trump in 2017 (right). AP

But what about edits that only alter pixel values such as color, saturation, or contrast?

One historical example illustrates the consequences of this. In 1994, Time magazine’s cover considerably “darkened” OJ Simpson’s police mugshot. This added fuel to a case already plagued by racial tension, to which the magazine responded, “No racial implication was intended, by Time or by the artist.”

Tools for Debunking Digital Fakery
For those of us who don’t want to be duped by visual mis/disinformation, there are tools available—although each comes with its own limitations (something we discuss in our recent paper).

Invisible digital watermarking has been proposed as a solution. However, it isn’t widespread and requires buy-in from both content publishers and distributors.

Reverse image search (such as Google’s) is often free and can be helpful for identifying earlier, potentially more authentic copies of images online. That said, it’s not foolproof because it:

Relies on unedited copies of the media already being online.
Doesn’t search the entire web.
Doesn’t always allow filtering by publication time. Some reverse image search services such as TinEye support this function, but Google’s doesn’t.
Returns only exact matches or near-matches, so it’s not thorough. For instance, editing an image and then flipping its orientation can fool Google into thinking it’s an entirely different one, as the sketch below illustrates.
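To see why a simple flip is enough, here is a toy Python sketch using an “average hash,” a common perceptual-hashing scheme. It is illustrative only (Google’s actual system is far more sophisticated), and photo.jpg is a placeholder for any image on disk.

```python
from PIL import Image

def average_hash(img, size=8):
    """64-bit perceptual hash: bit i is 1 where pixel i is brighter than the mean."""
    gray = img.convert("L").resize((size, size), Image.LANCZOS)
    pixels = list(gray.getdata())
    mean = sum(pixels) / len(pixels)
    return sum(1 << i for i, p in enumerate(pixels) if p > mean)

original = Image.open("photo.jpg")
flipped = original.transpose(Image.FLIP_LEFT_RIGHT)

h1, h2 = average_hash(original), average_hash(flipped)
hamming = bin(h1 ^ h2).count("1")  # 0 for identical images
print(f"Hamming distance: {hamming}/64")  # typically large after a flip
```

The flip reorders the pixels, so the two hashes differ in many bit positions, and a matcher keyed on hash distance no longer recognizes the image.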

Most Reliable Tools Are Sophisticated
Meanwhile, manual forensic detection methods for visual mis/disinformation focus mostly on edits visible to the naked eye, or rely on examining features that aren’t included in every image (such as shadows). They’re also time-consuming, expensive, and require specialized expertise.

Still, you can access work in this field by visiting sites such as Snopes.com—which has a growing repository of “fauxtography.”

Computer vision and machine learning also offer relatively advanced detection capabilities for images and videos. But they too require technical expertise to operate and understand.

Moreover, improving them involves using large volumes of “training data,” but the image repositories used for this usually don’t contain the real-world images seen in the news.

If you use an image verification tool such as the REVEAL project’s image verification assistant, you might need an expert to help interpret the results.

The good news, however, is that before turning to any of the above tools, there are some simple questions you can ask yourself to potentially figure out whether a photo or video on social media is fake. Think:

Was it originally made for social media?
How widely and for how long was it circulated?
What responses did it receive?
Who were the intended audiences?

Quite often, the logical conclusions drawn from the answers will be enough to weed out inauthentic visuals. You can access the full list of questions, put together by Manchester Metropolitan University experts, here.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Simon Steinberger from Pixabay

#437276 Cars Will Soon Be Able to Sense and ...

Imagine you’re on your daily commute to work, driving along a crowded highway while trying to resist looking at your phone. You’re already a little stressed out because you didn’t sleep well, woke up late, and have an important meeting in a couple hours, but you just don’t feel like your best self.

Suddenly another car cuts you off, coming way too close to your front bumper as it changes lanes. Your already-simmering emotions leap into overdrive, and you lay on the horn and shout curses no one can hear.

Except someone—or, rather, something—can hear: your car. Hearing your angry words, aggressive tone, and raised voice, and seeing your furrowed brow, the onboard computer goes into “soothe” mode, as it’s been programmed to do when it detects that you’re angry. It plays relaxing music at just the right volume, releases a puff of light lavender-scented essential oil, and maybe even says some meditative quotes to calm you down.

What do you think—creepy? Helpful? Awesome? Weird? Would you actually calm down, or get even more angry that a car is telling you what to do?

Scenarios like this (maybe without the lavender oil part) may not be imaginary for much longer, especially if companies working to integrate emotion-reading artificial intelligence into new cars have their way. And it wouldn’t just be a matter of your car soothing you when you’re upset—depending on what sort of regulations are enacted, the car’s sensors, camera, and microphone could collect all kinds of data about you and sell it to third parties.

Computers and Feelings
Just as AI systems can be trained to tell the difference between a picture of a dog and one of a cat, they can learn to differentiate between an angry tone of voice or facial expression and a happy one. In fact, there’s a whole branch of machine intelligence devoted to creating systems that can recognize and react to human emotions; it’s called affective computing.

Emotion-reading AIs learn what different emotions look and sound like from large sets of labeled data; “smile = happy,” “tears = sad,” “shouting = angry,” and so on. The most sophisticated systems can likely even pick up on the micro-expressions that flash across our faces before we consciously have a chance to control them, as detailed by Daniel Goleman in his groundbreaking book Emotional Intelligence.
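Under the hood, this is ordinary supervised learning. A minimal sketch of the recipe (not Affectiva’s pipeline) might look like the following, assuming a labeled dataset of precomputed facial features; the load_labeled_faces loader is hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# X: (n_samples, n_features) facial features, e.g. landmark distances per frame
# y: emotion labels such as "happy", "sad", "angry"
X, y = load_labeled_faces()  # hypothetical loader for a labeled dataset

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
model = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```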

Affective computing company Affectiva, a spinoff from MIT Media Lab, says its algorithms are trained on 5,313,751 face videos (videos of people’s faces as they do an activity, have a conversation, or react to stimuli) representing about 2 billion facial frames. Fascinatingly, Affectiva claims its software can even account for cultural differences in emotional expression (for example, it’s more normalized in Western cultures to be very emotionally expressive, whereas Asian cultures tend to favor stoicism and politeness), as well as gender differences.

But Why?
As reported in Motherboard, companies like Affectiva, Cerence, Xperi, and Eyeris have plans in the works to partner with automakers and install emotion-reading AI systems in new cars. Regulations passed last year in Europe and a bill just introduced this month in the US Senate are helping make the idea of “driver monitoring” less weird, mainly by emphasizing the safety benefits of preemptive warning systems for tired or distracted drivers (remember that part in the beginning about sneaking glances at your phone? Yeah, that).

Drowsiness and distraction can’t really be called emotions, though—so why are they being lumped under an umbrella that has a lot of other implications, including what many may consider an eerily Big Brother-esque violation of privacy?

Our emotions, in fact, are among the most private things about us, since we are the only ones who know their true nature. We’ve developed the ability to hide and disguise our emotions, and this can be a useful skill at work, in relationships, and in scenarios that require negotiation or putting on a game face.

And I don’t know about you, but I’ve had more than one good cry in my car. It’s kind of the perfect place for it: private, secluded, soundproof.

Putting systems into cars that can recognize and collect data about our emotions, under the guise of preventing accidents caused by distraction or drowsiness, seems a bit like a bait and switch.

A Highway to Privacy Invasion?
European regulations will help keep driver data from being used for any purpose other than ensuring a safer ride. But the US is lagging behind on the privacy front, with car companies largely free from any enforceable laws that would keep them from using driver data as they please.

Affectiva lists the following as use cases for occupant monitoring in cars: personalizing content recommendations, providing alternate route recommendations, adapting environmental conditions like lighting and heating, and understanding user frustration with virtual assistants and designing those assistants to be emotion-aware so that they’re less frustrating.

Our phones already do the first two (though, granted, we’re not supposed to look at them while we drive—but most cars now let you use Bluetooth to display your phone’s content on the dashboard), and the third is simply a matter of reaching a hand out to turn a dial or press a button. The last seems like a solution for a problem that wouldn’t exist without said… solution.

Despite how unnecessary and unsettling it may seem, though, emotion-reading AI isn’t going away, in cars or other products and services where it might provide value.

Besides automotive AI, Affectiva also makes software for clients in the advertising space. With consent, the built-in camera on users’ laptops records them while they watch ads, gauging their emotional response, what kind of marketing is most likely to engage them, and how likely they are to buy a given product. Emotion-recognition tech is also being used or considered for use in mental health applications, call centers, fraud monitoring, and education, among others.

In a 2015 TED talk, Affectiva co-founder Rana El-Kaliouby told her audience that we’re living in a world increasingly devoid of emotion, and her goal was to bring emotions back into our digital experiences. Soon they’ll be in our cars, too; whether the benefits will outweigh the costs remains to be seen.

Image Credit: Free-Photos from Pixabay

#437236 Why We Need Mass Automation to ...

The scale of goods moving around the planet at any moment is staggering. Raw materials are dug up in one country, spun into parts and pieces in another, and assembled into products in a third. Crossing oceans and continents, they find their way to a local store or direct to your door.

Magically, a roll of toilet paper, power tool, or tube of toothpaste is there just when you need it.

Even more staggering is that this whole system, the global supply chain, works so well that it’s effectively invisible most of the time. Until now, that is. The pandemic has thrown a floodlight on the inner workings of this modern wonder—and it’s exposed massive vulnerabilities.

The e-commerce supply chain is an instructive example. As the world went into lockdown, and everything non-essential went online, demand for digital fulfillment skyrocketed.

Even under “normal” conditions, most e-commerce warehouses were struggling to meet demand. But Covid-19 has further strained the ability to cope with shifting supply, an unprecedented tidal wave of orders, and labor shortages. Local stores are running out of key products. Online grocers and e-commerce platforms are suspending some home deliveries, restricting online purchases of certain items, and limiting new customers. The whole system is being severely tested.

Why? Despite an abundance of 21st century technology, we’re stuck in the 20th century.

Today’s supply chain consists of fleets of ships, trucks, warehouses, and importantly, people scattered around the world. While there are some notable instances of advanced automation, the overwhelming majority of work is still manual, resembling a sort of human-powered bucket brigade, with people wandering around warehouses or standing alongside conveyor belts. Each package of diapers or bottle of detergent ordered by an online customer might be touched dozens of times by warehouse workers before finding its way into a box delivered to a home.

The pandemic has proven the critical need for innovation, driven by increased demand, concerns about the health and safety of workers, and the need for traceability and safety of products and services.

At the 2020 World Economic Forum, there was much discussion about the ongoing societal transformation in which humans and machines work in tandem, automating and augmenting the way we get things done. At the time, pre-pandemic, debate trended toward skepticism and fear of job losses, with some even questioning the ethics and need for these technologies.

Now, we see things differently. To make the global supply chain more resilient to shocks like Covid-19, we must look to technology.

Perfecting the Global Supply Chain: The Massive ‘Matter Router’
Technology has faced and overcome similar challenges in the past.

World War II, for example, drove innovation in techniques for rapid production of many products on a large scale, including penicillin. We went from the availability of one dose of the drug in 1941 to four million sterile packages of it every month four years later.

Similarly, today’s companies, big and small, are looking to automation, robotics, and AI to meet the pandemic head on. These technologies are crucial to scaling the infrastructure that will fulfill most of the world’s e-commerce and food distribution needs.

You can think of this new infrastructure as a rapidly evolving “matter router” that will employ increasingly complex robotic systems to move products more freely and efficiently.

Robots powered by specialized AI software, for example, are already learning to adapt to changes in the environment, using the most recent advances in industrial robotics and machine learning. When customers suddenly need to order dramatically new items, these robots don’t need to stop or be reprogrammed. They can perform new tasks by learning from experience using low-cost camera systems and deep learning for visual and image recognition.

These more flexible robots can work around the clock, helping make facilities less sensitive to sudden changes in workforce and customer demand and strengthening the supply chain.

Today, e-commerce is roughly 12% of retail sales in the US and is expected to rise well beyond 25% within the decade, fueled by changes in buying habits. However, analysts have begun to consider whether the current crisis might cause permanent jumps in those numbers, as it has in the past (for instance with the SARS epidemic in China in 2003). Whatever happens, the larger supply chain will benefit from greater, more flexible automation, especially during global crises.

We must create what Hamza Mudassir of the University of Cambridge calls a “resilient ecosystem that links multiple buyers with multiple vendors, across a mesh of supply chains.” This ecosystem must be backed by robust, efficient, and scalable automation that uses robotics, autonomous vehicles, and the Internet of Things to help track the flow of goods through the supply chain.
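As a purely illustrative example of what “tracking the flow of goods” can mean in code, here is a minimal Python sketch of a tracking-event record; the field names and statuses are assumptions, not any real system’s schema.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class TrackingEvent:
    shipment_id: str  # barcode/RFID identity of the goods
    location: str     # warehouse, truck, or port that emitted the scan
    status: str       # e.g. "received", "picked", "loaded", "delivered"
    timestamp: datetime

def latest_position(events, shipment_id):
    """The most recent event for a shipment is its current place in the chain."""
    mine = [e for e in events if e.shipment_id == shipment_id]
    return max(mine, key=lambda e: e.timestamp)

event = TrackingEvent("SKU-42", "warehouse-09", "picked",
                      datetime.now(timezone.utc))
```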

The good news? We can accomplish this with technologies we have today.

Image Credit: Guillaume Bolduc / Unsplash
