Category Archives: Human Robots

Everything about Humanoid Robots and Androids

#439884 This Spooky, Bizarre Haunted House Was ...

AI is slowly getting more creative, and as it does it’s raising questions about the nature of creativity itself, who owns works of art made by computers, and whether conscious machines will make art humans can understand. In the spooky spirit of Halloween, one engineer used an AI to produce a very specific, seasonal kind of “art”: a haunted house.

It’s not a brick-and-mortar house you can walk through, unfortunately; like so many things these days, it’s virtual, and was created by research scientist and writer Janelle Shane. Shane runs a machine learning humor blog called AI Weirdness where she writes about the “sometimes hilarious, sometimes unsettling ways that machine learning algorithms get things wrong.”

For the virtual haunted house, Shane used CLIP, a neural network built by OpenAI, and VQGAN, a neural network architecture that combines convolutional neural networks (which are typically used for images) with transformers (which are typically used for language).

CLIP (short for Contrastive Language–Image Pre-training) learns visual concepts from natural language supervision, using images paired with their text descriptions to rate how well a given image matches a phrase. Because it learns from this kind of supervision rather than task-specific labels, CLIP supports zero-shot learning: it can recognize objects and categories it was never explicitly trained to identify.
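The scoring mechanism at the heart of this is easy to sketch. The snippet below is a minimal example, assuming the publicly released CLIP weights and the Hugging Face transformers API rather than Shane's exact setup, that rates how well one image matches competing phrases:

```python
# A minimal sketch of CLIP-style image-phrase scoring. The image path is
# hypothetical; the model name refers to OpenAI's public CLIP release.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("victorian_house.jpg")  # hypothetical input image
phrases = ["a haunted Victorian house", "a Victorian house"]

inputs = processor(text=phrases, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the image's similarity to each phrase; softmax
# turns the scores into relative match probabilities.
probs = outputs.logits_per_image.softmax(dim=-1)
for phrase, p in zip(phrases, probs[0]):
    print(f"{phrase}: {p.item():.3f}")
```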

The phrase Shane focused on for this experiment was “haunted Victorian house,” starting with a photo of a regular Victorian house then letting the AI use its feedback to modify the image with details it associated with the word “haunted.”
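Conceptually, that feedback loop is gradient ascent on CLIP's match score: decode an image from a latent code, score it against the phrase, and nudge the latent to score higher. The sketch below is schematic rather than Shane's actual pipeline; `decoder` and `clip_score` are runnable placeholders standing in for the real VQGAN decoder and CLIP scorer, and the latent shape is assumed:

```python
import torch

# Placeholder stand-ins so the sketch runs; the real loop uses VQGAN's
# decoder and CLIP's image/text encoders in their place.
decoder = torch.nn.Sequential(torch.nn.Conv2d(256, 3, 1), torch.nn.Sigmoid())

def clip_score(image: torch.Tensor, phrase: str) -> torch.Tensor:
    return image.mean()  # a real CLIP model returns image-phrase similarity

latent = torch.randn(1, 256, 16, 16, requires_grad=True)  # assumed shape
optimizer = torch.optim.Adam([latent], lr=0.05)

for step in range(200):
    image = decoder(latent)                 # latent code -> candidate image
    loss = -clip_score(image, "haunted Victorian house")
    optimizer.zero_grad()
    loss.backward()                         # CLIP's feedback, as gradients
    optimizer.step()                        # make the image "more haunted"
```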

Image Credit: Josephyurko, CC BY-SA 4.0
The results are somewhat ghoulish, though also perplexing. In the first iteration, the home’s wood has turned to stone, the windows are covered in something that could be cobwebs, the cloudy sky has a dramatic tilt to it, and there appears to be fire on the house’s lower level.

Image Credit: Janelle Shane, AI Weirdness
Shane then upped the ante and instructed the model to create an “extremely haunted” Victorian house. The second iteration looks a little more haunted, but also a little less like a house in general, partly because there appears to be a piece of night sky under the house’s roof near its center.

Image Credit: Janelle Shane, AI Weirdness
Shane then tried taking the word “haunted” out of the instructions, and things just got more bizarre from there. She wrote in her blog post about the project, “Apparently CLIP has learned that if you want to make things less haunted, add flowers, street lights, and display counters full of snacks.”

Image Credit: Janelle Shane, AI Weirdness
“All the AI’s changes tend to make the house make less sense,” Shane said. “That’s because it’s easier for it to look at tiny details like mist than the big picture like how a house fits together. In a lot of what AI does, it’s working on the level of surface details rather than deeper meaning.”

Shane’s description matches up with where AI stands as a field. Despite impressive progress in fields like protein folding, RNA structure, natural language processing, and more, AI has not yet approached “general intelligence” and is still very much in the “narrow” domain. Researcher Melanie Mitchell argues that common fallacies in the field, like using human language to describe machine intelligence, are hampering its advancement; computers don’t really “learn” or “understand” in the way humans do, and adjusting the language we use to describe AI systems could help do away with some of the misunderstandings around their capabilities.

Shane’s haunted house is a clear example of this lack of understanding, and a playful reminder that we should move cautiously in allowing machines to make decisions with real-world impact.

Banner Image Credit: Janelle Shane, AI Weirdness

Posted in Human Robots

#439882 Robot umpires are coming to baseball. ...

Baseball fans know the bitter heartbreak of calls that don't go their way, especially a ball that should've been a strike. And with advances in technology including computer vision, artificial intelligence, and the ubiquity of Wi-Fi, it would be easier than ever for baseball officials to replace humans with robotic umpires.

Posted in Human Robots

#439879 Teaching robots to think like us: Brain ...

Can intelligence be taught to robots? Advances in physical reservoir computing, a technology that makes sense of brain signals, could contribute to creating artificial intelligence machines that think like us.

Posted in Human Robots

#439875 Not So Mysterious After All: Researchers ...

The deep learning neural networks at the heart of modern artificial intelligence are often described as “black boxes” whose inner workings are inscrutable. But new research calls that idea into question, with significant implications for privacy.

Unlike traditional software whose functions are predetermined by a developer, neural networks learn how to process or analyze data by training on examples. They do this by continually adjusting the strength of the links between their many neurons.
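As a toy illustration of that link-adjusting process, a single training step in PyTorch computes how each connection contributed to the error on one example, then nudges it accordingly (a minimal sketch, not any particular production system):

```python
import torch

net = torch.nn.Sequential(
    torch.nn.Linear(2, 8), torch.nn.ReLU(), torch.nn.Linear(8, 1)
)
opt = torch.optim.SGD(net.parameters(), lr=0.1)

x, y = torch.tensor([[0.5, -1.0]]), torch.tensor([[1.0]])  # one example
loss = (net(x) - y).pow(2).mean()  # how wrong is the network right now?
opt.zero_grad()
loss.backward()  # attribute the error to each connection
opt.step()       # strengthen or weaken each connection accordingly
```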

By the end of this process, the way they make decisions is tied up in a tangled network of connections that can be impossible to follow. As a result, it’s often assumed that even if you have access to the model itself, it’s more or less impossible to work out the data that the system was trained on.

But a pair of recent papers have brought this assumption into question, according to MIT Technology Review, by showing that two very different techniques can be used to identify the data a model was trained on. This could have serious implications for AI systems trained on sensitive information like health records or financial data.

The first approach takes aim at generative adversarial networks (GANs), the AI systems behind deepfake images. These systems are increasingly being used to create synthetic faces that are supposedly completely unrelated to real people.

But researchers from the University of Caen Normandy in France showed that they could easily link generated faces from a popular model to real people whose data had been used to train the GAN. They did this by getting a second facial recognition model to compare the generated faces against training samples to spot if they shared the same identity.

The images aren’t an exact match, as the GAN has modified them, but the researchers found multiple examples where generated faces were clearly linked to images in the training data. In a paper describing the research, they point out that in many cases the generated face is simply the original face in a different pose.
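The matching step itself is conceptually simple, and the sketch below shows the general idea rather than the authors' exact pipeline: embed faces with a face-recognition model and flag generated images whose embeddings sit suspiciously close to a training image. Here `embed` is a runnable placeholder for a real face encoder, and the threshold is illustrative:

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed(image: np.ndarray) -> np.ndarray:
    # Placeholder so the sketch runs; a real pipeline would use a trained
    # face-recognition encoder here.
    return image.reshape(-1)

THRESHOLD = 0.8  # illustrative; real systems tune this on verification data

generated = np.random.rand(64, 64)
training_faces = [np.random.rand(64, 64) for _ in range(100)]

leaks = [i for i, face in enumerate(training_faces)
         if cosine(embed(generated), embed(face)) > THRESHOLD]
print(f"training images sharing an identity with the generated face: {leaks}")
```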

While the approach is specific to face-generation GANs, the researchers point out that similar ideas could be applied to things like biometric data or medical images. Another, more general approach to reverse engineering neural nets, though, can recover training data directly.

A group from Nvidia has shown that they can infer the data a model was trained on without ever seeing examples of that training data. They used an approach called model inversion, which effectively runs the neural net in reverse. The technique is often used to analyze neural networks, but using it to recover the input data had only been achieved on simple networks under very specific sets of assumptions.

In a recent paper, the researchers described how they were able to scale the approach to large networks by splitting the problem up and carrying out inversions on each of the networks’ layers separately. With this approach, they were able to recreate training data images using nothing but the models themselves.
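Stripped to its essentials, inverting a layer is an optimization problem: find an input that reproduces an observed activation. The toy sketch below inverts a single linear layer; the contribution of the Nvidia work was scaling this idea layer by layer across a large network, so take this only as an illustration of the core step:

```python
import torch

layer = torch.nn.Linear(784, 128)  # stand-in for one trained layer
target = torch.randn(1, 128)       # activation this layer produced on real data

x = torch.zeros(1, 784, requires_grad=True)  # candidate recovered input
opt = torch.optim.Adam([x], lr=0.1)

for _ in range(1000):
    # How far is the layer's output on x from the observed activation?
    loss = torch.nn.functional.mse_loss(layer(x), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
# x now approximates an input that yields the observed activation.
```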

While carrying out either attack is a complex process that requires intimate access to the model in question, both highlight the fact that AIs may not be the black boxes we thought they were, and determined attackers could extract potentially sensitive information from them.

Given that it’s becoming increasingly easy to reverse engineer someone else’s model using your own AI, the requirement to have access to the neural network isn’t even that big of a barrier.

The problem isn’t restricted to image-based algorithms. Last year, researchers from a consortium of tech companies and universities showed that they could extract news headlines, JavaScript code, and personally identifiable information from the large language model GPT-2.

These issues are only going to become more pressing as AI systems push their way into sensitive areas like health, finance, and defense. There are some solutions on the horizon, such as differential privacy, where models are trained on the statistical features of aggregated data rather than individual data points, or homomorphic encryption, an emerging paradigm that makes it possible to compute directly on encrypted data.
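To give a flavor of the differential-privacy idea, here is a hedged, DP-SGD-style sketch (simplified to per-parameter clipping, with illustrative constants): each record's gradient is clipped and noised before it touches the model, bounding what any single data point can reveal.

```python
import torch

CLIP_NORM, NOISE_STD, LR = 1.0, 0.5, 0.01
model = torch.nn.Linear(10, 1)
x, y = torch.randn(1, 10), torch.randn(1, 1)  # one individual's record

loss = (model(x) - y).pow(2).mean()
loss.backward()

with torch.no_grad():
    for p in model.parameters():
        # Clip this record's gradient so it can't dominate the update...
        scale = torch.clamp(CLIP_NORM / (p.grad.norm() + 1e-6), max=1.0)
        p.grad *= scale
        # ...then add calibrated noise to mask its exact contribution.
        p.grad += NOISE_STD * CLIP_NORM * torch.randn_like(p.grad)
        p -= LR * p.grad
```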

But these approaches are still a long way from being standard practice, so for the time being, entrusting your data to the black box of AI may not be as safe as you think.

Image Credit: Connect world / Shutterstock.com

Posted in Human Robots

#439870 Video Friday: TurtleBot 4

Video Friday is your weekly selection of awesome robotics videos, collected by your friends at IEEE Spectrum robotics. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

Silicon Valley Robot Block Party – October 23, 2021 – Oakland, CA, USA
SSRR 2021 – October 25-27, 2021 – New York, NY, USA

Let us know if you have suggestions for next week, and enjoy today's videos.
We'll have more details on this next week, but there's a new TurtleBot, hooray!

Brought to you by iRobot (providing the base in the form of the new Create 3), Clearpath, and Open Robotics.
[ Clearpath ]
Cognitive Pilot's autonomous tech is now being integrated into production Kirovets K-7M tractors, and they've got big plans: “The third phase of the project envisages a fully self-driving tractor control mode without the need for human involvement. It includes group autonomous operation with a 'leader', the movement of a group of self-driving tractors on non-public roads, the autonomous movement of a robo-tractor paired with a combine harvester not equipped with an autonomous control system, and the use of an expanded set of farm implements with automated control and functionality to monitor their condition during operation.”

[ Cognitive Pilot ]
Thanks, Andrey!
Since the start of the year, Opteran has been working incredibly hard to deliver against our technology milestones and we're delighted to share the first video of our technology in action. In the video you can see Hopper, our robot dog (named after Grace Hopper, a pioneer of computer programming) moving around a course using components of Opteran Natural Intelligence, [rather than] a trained deep learning neural net. Our small development kit (housing an FPGA), sitting on top of the robot dog, guides Hopper, using Opteran See to provide 360 degrees of stabilised vision, and Opteran Sense to sense objects and avoid collisions.
[ Opteran ]
If you weren't paying any attention to the DARPA SubT Challenge and are now afraid to ask about it, here are two recap videos from DARPA.

[ DARPA SubT ]
A new control system, designed by researchers in MIT's Improbable AI Lab and demonstrated using MIT's robotic mini cheetah, enables four-legged robots to traverse uneven terrain in real time.
[ MIT ]
Using a mix of 3D-printed plastic and metal parts, a full-scale replica of NASA's Volatiles Investigating Polar Exploration Rover, or VIPER, was built inside a clean room at NASA's Johnson Space Center in Houston. The activity served as a dress rehearsal for the flight version, which is scheduled for assembly in the summer of 2022.
[ NASA ]
What if you could have 100x more information about your industrial sites? Agile mobile robots like Spot bring sensors to your assets in order to collect data and generate critical insights on asset health so you can optimize performance. Dynamic sensing unlocks flexible and reliable data capture for improved site awareness, safety, and efficiency.
[ Boston Dynamics ]
Fish in Washington are getting some help navigating through culverts under roads, thanks to a robot developed by University of Washington students Greg Joyce and Qishi Zhou. “HydroCUB is designed to operate from a distance through a 300-foot-long cable that supplies power to the rover and transmits video back to the operator. The goal is for the Washington State Department of Transportation, which proposed the idea, to use the tool to look for vegetation, cracks, debris and other potential 'fish-barriers' in culverts.”

[ UW ]
Thanks, Sarah!
NASA's Perseverance Mars rover carries two microphones that are directly recording sounds on the Red Planet, including the Ingenuity helicopter and the rover itself at work. For the very first time, these audio recordings offer a new way to experience the planet. Earth and Mars have different atmospheres, which affects the way sound is heard. Justin Maki, a scientist at NASA's Jet Propulsion Laboratory, and Nina Lanza, a scientist at Los Alamos National Laboratory, explain some of the notable audio recorded on Mars in this video.
[ JPL ]
A new kind of fiber developed by researchers at MIT and in Sweden can be made into cloth that senses how much it is being stretched or compressed, and then provides immediate tactile feedback in the form of pressure or vibration. Such fabrics, the team suggests, could be used in garments that help train singers or athletes to better control their breathing, or that help patients recovering from disease or surgery to recover their normal breathing patterns.
[ MIT ]
Partnering with Epitomical, Extend Robotics has developed a mobile manipulator and a perception system to let anyone operate it intuitively through a VR interface, over a wireless network.
[ Extend Robotics ]
Here are a couple of videos from Matei Ciocarlie at the Columbia University ROAM lab talking about embodied intelligence for manipulation.

[ ROAM Lab ]
The AirLab at CMU has been hosting an excellent series on SLAM. You should subscribe to their YouTube channel, but here are a couple of their more recent talks.

[ Tartan SLAM Series ]
Robots as Companions invites Sougwen Chung and Madeline Gannon, two artists and researchers whose practices not only involve various types of robots but actually include them as collaborators and companions, to join Maria Yablonina (Daniels Faculty) in conversation. Through their work, they challenge the notion of a robot as an obedient task-execution device, question the ethos of robot arms as tools of industrial production and automation, and ask us to consider the robot as an equal participant in the creative process.
[ UofT ]
These two talks come from the IEEE RAS Seasonal School on Rehabilitation and Assistive Technologies based on Soft Robotics.

[ SofTech-Rehab ]

Posted in Human Robots