Tag Archives: smile

#439070 Are Digital Humans the Next Step in ...

In the fictional worlds of film and TV, artificial intelligence has been depicted as so advanced that it is indistinguishable from humans. But what if we’re actually getting closer to a world where AI is capable of thinking and feeling?

Tech company UneeQ is embarking on that journey with its “digital humans.” These avatars act as visual interfaces for customer service chatbots, virtual assistants, and other applications. UneeQ’s digital humans appear lifelike not only in terms of language and tone of voice, but also because of facial movements: raised eyebrows, a tilt of the head, a smile, even a wink. They transform a transaction into an interaction: creepy yet astonishing, human, but not quite.

What lies beneath UneeQ’s digital humans? Their 3D faces are modeled on actual human features. Speech recognition enables the avatar to understand what a person is saying, and natural language processing is used to craft a response. Before the avatar utters a word, specific emotions and facial expressions are encoded within the response.
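
In rough pseudocode, that pipeline looks something like the sketch below. This is only a minimal illustration of the flow described above, not UneeQ’s actual software; every function name and emotion tag here is a hypothetical stand-in.

```python
# Minimal sketch of the pipeline described above -- not UneeQ's API.
# All function names and emotion/expression tags are hypothetical.
from dataclasses import dataclass

@dataclass
class AvatarResponse:
    text: str        # what the avatar will say
    emotion: str     # e.g. "warm", "apologetic"
    expression: str  # e.g. "smile", "raised_eyebrows", "tilt_head"

def transcribe(audio: bytes) -> str:
    """Speech recognition: audio in, text out (placeholder)."""
    return "I think I was charged twice this month."

def generate_reply(utterance: str) -> str:
    """Natural language processing: craft a text response (placeholder)."""
    return "I'm sorry about that. Let me take a look at your account."

def annotate(reply: str) -> AvatarResponse:
    """Encode emotion and facial expression before the avatar speaks."""
    if "sorry" in reply.lower():
        return AvatarResponse(reply, "apologetic", "tilt_head")
    return AvatarResponse(reply, "warm", "smile")

def handle_turn(audio: bytes) -> AvatarResponse:
    return annotate(generate_reply(transcribe(audio)))

print(handle_turn(b"..."))  # -> AvatarResponse(..., emotion='apologetic', ...)
```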

UneeQ may be part of a larger trend towards humanizing computing. ObEN’s digital avatars serve as virtual identities for celebrities, influencers, gaming characters, and other entities in the media and entertainment industry. Meanwhile, Soul Machines is taking a more biological approach, with a “digital brain” that simulates aspects of the human brain to modulate the emotions “felt” and “expressed” by its “digital people.” Amelia is employing a similar methodology in building its “digital employees.” It emulates parts of the brain involved with memory to respond to queries and, with each interaction, learns to deliver more engaging and personalized experiences.

Shiwali Mohan, an AI systems scientist at the Palo Alto Research Center, is skeptical of these digital beings. “They’re humanlike in their looks and the way they sound, but that in itself is not being human,” she says. “Being human is also how you think, how you approach problems, and how you break them down; and that takes a lot of algorithmic design. Designing for human-level intelligence is a different endeavor than designing graphics that behave like humans. If you think about the problems we’re trying to design these avatars for, we might not need something that looks like a human—it may not even be the right solution path.”

And even if these avatars appear near-human, they still evoke an uncanny valley feeling. “If something looks like a human, we have high expectations of them, but they might behave differently in ways that humans just instinctively know how other humans react. These differences give rise to the uncanny valley feeling,” says Mohan.

Yet the demand is there, with Amelia seeing high adoption of its digital employees across the financial, health care, and retail sectors. “We find that banks and insurance companies, which are so risk-averse, are leading the adoption of such disruptive technologies because they understand that the risk of non-adoption is much greater than the risk of early adoption,” says Chetan Dube, Amelia’s CEO. “Unless they innovate their business models and make them much more efficient digitally, they might be left behind.” Dube adds that the COVID-19 pandemic has accelerated adoption of digital employees in health care and retail as well.

Amelia, Soul Machines, and UneeQ are taking their digital beings a step further, enabling organizations to create avatars themselves using low-code or no-code platforms: Digital Employee Builder for Amelia, Creator for UneeQ, and Digital DNA Studio for Soul Machines. Unreal Engine, a game engine developed by Epic Games, is doing the same with MetaHuman Creator, a tool that allows anyone to create photorealistic digital humans. “The biggest motivation for Digital Employee Builder is to democratize AI,” Dube says.

Mohan is cautious about this approach. “AI has problems with bias creeping in from data sets and into the way it speaks. The AI community is still trying to figure out how to measure and counter that bias,” she says. “[Companies] have to have an AI expert on board that can recommend the right things to build for.”

Despite being wary of the technology, Mohan supports the purpose behind these virtual beings and is optimistic about where they’re headed. “We do need these tools that support humans in different kinds of things. I think the vision is the pro, and I’m behind that vision,” she says. “As we develop more sophisticated AI technology, we would then have to implement novel ways of interacting with that technology. Hopefully, all of that is designed to support humans in their goals.”


#437935 Start the New Year Right: By Watching ...

I don’t need to tell you that 2020 was a tough year. There was almost nothing good about it, and we saw it off with a “good riddance” and hopes for a better 2021. But robotics company Boston Dynamics took a different approach to closing out the year: when all else fails, why not dance?

The company released a video last week that I dare you to watch without laughing—or at the very least, cracking a pretty big smile. Because, well, dancing robots are funny. And it’s not just one dancing robot, it’s four of them: two humanoid Atlas bots, one four-legged Spot, and one Handle, a bot-on-wheels built for materials handling.

The robots’ killer moves look almost too smooth and coordinated to be real, leading many to speculate that the video was computer-generated. But if you can trust Elon Musk, there’s no CGI here.

This is not CGI https://t.co/VOivE97vPR

— Elon Musk (@elonmusk) December 29, 2020

Boston Dynamics has gone through a lot of changes in the last ten years: it was acquired by Google in 2013, then sold to Japanese conglomerate SoftBank in 2017, before being acquired again by Hyundai just a few weeks ago for $1.1 billion. But this isn’t the first time the company has taught a robot to dance and made a video for all the world to enjoy; Spot tore up the floor to “Uptown Funk” back in 2018.

Four-legged Spot went commercial in June, with a hefty price tag of $74,500, and was put to some innovative pandemic-related uses, including remotely measuring patients’ vital signs and reminding people to social distance.

Hyundai plans to implement its newly-acquired robotics prowess for everything from service and logistics robots to autonomous driving and smart factories.

They’ll have their work cut out for them. Besides being hilarious, kind of heartwarming, and kind of creepy all at once, the robots’ new routine is pretty impressive from an engineering standpoint. Compare it to a 2016 video of Atlas trying to pick up a box (I know it’s a machine with no feelings, but it’s hard not to feel a little bit bad for it, isn’t it?), and it’s clear Boston Dynamics’ technology has made huge strides. It wouldn’t be surprising if, in two years’ time, we see a video of a flash mob of robots whose routine includes partner dancing and back flips (which, admittedly, Atlas can already do).

In the meantime, though, this one is pretty entertaining—and not a bad note on which to start the new year.

Image Credit: Boston Dynamics


#437701 Robotics, AI, and Cloud Computing ...

IBM must be brimming with confidence about its new automated system for performing chemical synthesis, because Big Blue just walked twenty or so journalists through a live demo of the complex technology in a virtual room.

IBM even had one of the journalists choose the molecule for the demo: a molecule used in a potential Covid-19 treatment. We then watched as the system synthesized and tested the molecule and delivered its analysis in a PDF document that we all saw on that journalist’s screen. It all worked; again, that’s confidence.

The complex system is based upon technology IBM started developing three years ago that uses artificial intelligence (AI) to predict chemical reactions. In August 2018, IBM made this service available via the Cloud and dubbed it RXN for Chemistry.

Now, the company has added a new wrinkle to its Cloud-based AI: robotics. This new and improved system is no longer named simply RXN for Chemistry, but RoboRXN for Chemistry.

All of the journalists assembled for this live demo of RoboRXN could watch as the robotic system executed the various steps, such as transferring a small amount of a reagent into the reactor and then adding the solvent. The robotic system carried out the entire set of procedures—completing the synthesis and analysis of the molecule—in eight steps.

Image: IBM Research

IBM RXN helps predict chemical reaction outcomes or design retrosynthesis in seconds.

In regular practice, a user will be able to suggest a combination of molecules they would like to test. The AI will pick up the order and task a robotic system to run the reactions necessary to produce and test the molecule. Users will be provided analyses of how well their molecules performed.
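
As a client-side workflow, that round trip might look like the following sketch. This is a hypothetical interface written for illustration, not IBM’s real API; the function names, job states, and target molecule are all placeholders.

```python
# Hypothetical client-side view of the workflow above -- not IBM's real API.
import time

def submit_synthesis(target_smiles: str) -> str:
    """Suggest a molecule to make and test; returns a job id (stub)."""
    return "job-42"

def job_status(job_id: str) -> dict:
    """Poll the service; a robot runs the reactions asynchronously (stub)."""
    return {"state": "done", "report_url": "https://example.com/report.pdf"}

job = submit_synthesis("CC(=O)OCC")  # placeholder target (ethyl acetate)
while (status := job_status(job))["state"] != "done":
    time.sleep(30)  # the robotic run takes real lab time
print("analysis report:", status["report_url"])
```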

Back in March of this year, Silicon Valley-based startup Strateos demonstrated a similar system it had developed. That system also employed a robotic system to help researchers working from the Cloud create new chemical compounds. However, what distinguishes IBM’s system is its incorporation of a third element: the AI.

The backbone of IBM’s AI model is a machine-learning translation method that treats chemistry like language translation: it converts reactants and reagents to products, using the simplified molecular-input line-entry system (SMILES) representation to describe chemical entities.
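
To make the language analogy concrete, here is a toy example of our own (not IBM’s code) showing how a reaction written in SMILES can be split into the “words” of a source and target sentence for a translation model; the tokenizer is deliberately simplified.

```python
import re

# A reaction in SMILES notation: dot-separated reactants >> product.
# Example chosen for illustration: acetic acid + ethanol -> ethyl acetate.
reaction = "CC(=O)O.CCO>>CC(=O)OCC"

# Deliberately simplified SMILES tokenizer (real ones handle far more cases).
TOKEN = re.compile(r"\[[^\]]+\]|Br|Cl|[A-Za-z()=#.+\-0-9]")

src, tgt = reaction.split(">>")
print(TOKEN.findall(src))  # source "sentence": reactant/reagent tokens
print(TOKEN.findall(tgt))  # target "sentence": product tokens
```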

IBM has also leveraged an automatic, data-driven strategy to ensure the quality of its data. Researchers there used millions of chemical reactions to teach the AI system chemistry, but contained within that data set were errors. So, how did IBM clean this so-called noisy data to eliminate the potential for bad models?

According to Alessandra Toniato, a researcher at IBM Zurich, the team implemented what they dubbed the “forgetting experiment.”

Toniato explains that, in this approach, they asked the AI model how sure it was that the chemical examples it was given were examples of correct chemistry. When faced with this choice, the AI identified chemistry that it had “never learnt,” “forgotten six times,” or “never forgotten.” Those that were “never forgotten” were clean examples, and in this way the team was able to clean the data the AI had been presented with.
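
The bookkeeping behind such a forgetting experiment can be sketched as follows. This is our illustration of the general idea, not IBM’s implementation: record whether each training example is predicted correctly after every epoch, then bucket examples by how often they flip from correct back to incorrect.

```python
import numpy as np

# correct[e, i] = 1 if example i was classified correctly after epoch e.
# Toy values standing in for real per-epoch evaluations.
correct = np.array([
    [0, 0, 1, 1],  # after epoch 1
    [0, 1, 1, 0],  # after epoch 2
    [0, 1, 1, 1],  # after epoch 3
])

never_learnt = correct.sum(axis=0) == 0
# A "forgetting event" is a correct -> incorrect flip between epochs.
forget_counts = ((correct[:-1] == 1) & (correct[1:] == 0)).sum(axis=0)
never_forgotten = (forget_counts == 0) & ~never_learnt

print(np.where(never_learnt)[0])     # [0]: candidates for bad chemistry
print(forget_counts)                 # [0 0 0 1]: per-example forgetting
print(np.where(never_forgotten)[0])  # [1 2]: kept as clean training data
```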

While AI has always been part of RXN for Chemistry, the robotics is the newest element. The main benefit of turning the reactions over to a robotic system is that it should free chemists from the often tedious process of designing a synthesis from scratch, says Matteo Manica, a research staff member in Cognitive Health Care and Life Sciences at IBM Research Zürich.

“In this demo, you could see how the system is synergistic between a human and AI,” said Manica. “Combine that with the fact that we can run all these processes with a robotic system 24/7 from anywhere in the world, and you can see how it will really help to speed up the whole process.”

There appear to be two business models that IBM is pursuing with its latest technology. One is to deploy the entire system on a company’s premises. The other is to offer licenses for private Cloud installations.

Photo: Michael Buholzer

Teodoro Laino of IBM Research Europe.

“From a business perspective you can think of having a system like we demonstrated being replicated on premises within companies or research groups that would like to have the technology at their disposal,” says Teodoro Laino, a distinguished research staff member and manager at IBM Research Europe. “On the other hand, we are also pushing to bring the entire system to a service level.”

Just as IBM is brimming with confidence about its new technology, the company also has grand aspirations for it.

Laino adds: “Our aim is to provide chemical services across the world, a sort of Amazon of chemistry, where instead of looking for chemistry already in stock, you are asking for chemistry on demand.”



#437628 Video Friday: An In-Depth Look at Mesmer ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

AUVSI EXPONENTIAL 2020 – October 5-8, 2020 – [Online]
IROS 2020 – October 25-29, 2020 – [Online]
ROS World 2020 – November 12, 2020 – [Online]
CYBATHLON 2020 – November 13-14, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

Bear Robotics, a robotics and artificial intelligence company, and SoftBank Robotics Group, a leading robotics manufacturer and solutions provider, have collaborated to bring a new robot named Servi to the food service and hospitality field.

[ Bear Robotics ]

A literal in-depth look at Engineered Arts’ Mesmer android.

[ Engineered Arts ]

Is your robot running ROS? Is it connected to the Internet? Are you actually in control of it right now? Are you sure?

I appreciate how the researchers admitted to finding two of their own robots as part of the scan, a Baxter and a drone.

[ Brown ]

Smile Robotics describes this as “(possibly) world’s first full-autonomous clear-up-the-table robot.”

We’re not qualified to make a judgement on the world firstness, but personally I hate clearing tables, so this robot has my vote.

Smile Robotics founder and CEO Takashi Ogura, along with chief engineer Mitsutaka Kabasawa and engineer Kazuya Kobayashi, are former Google roboticists. Ogura also worked at SCHAFT. Smile says its robot uses ROS and is controlled by a framework written mainly in Rust, adding: “We are hiring Rustacean Roboticists!”

[ Smile Robotics ]

We’re not entirely sure why, but Panasonic has released plans for an Internet of Things system for hamsters.

We devised a recipe for a “small animal healthcare device” that can measure the weight and activity of small animals, along with the temperature and humidity of their housing environment, to help manage their health. The device visualizes an animal’s health status and living environment to promote early detection of disease. Imagining the scene where such a healthcare device is actually used for a beloved small animal, we hope to help overcome the current difficult situation through manufacturing.

[ Panasonic ] via [ RobotStart ]

Researchers at Yale have developed a robotic fabric, a breakthrough that could lead to such innovations as adaptive clothing, self-deploying shelters, or lightweight shape-changing machinery.

The researchers focused on processing functional materials into fiber form so they could be integrated into fabrics while retaining their advantageous properties. For example, they made variable-stiffness fibers out of an epoxy embedded with particles of Field’s metal, an alloy that liquefies at relatively low temperatures. When cool, the particles are solid metal and make the material stiffer; when warm, the particles melt into liquid and make the material softer.

[ Yale ]

In collaboration with Armasuisse and SBB, RSL demonstrated the use of a teleoperated Menzi Muck M545 to clean up a rock slide in Central Switzerland. The machine can be operated from a teleoperation platform with visual and motion feedback. The walking excavator features an active chassis that can adapt to uneven terrain.

[ ETHZ RSL ]

An international team of JKU researchers is continuing to develop their vision for robots made out of soft materials. A new article in the journal “Communications Materials” demonstrates how these kinds of soft machines use weak magnetic fields to move very quickly. A triangle-shaped robot can roll itself through the air at high speed and walk forward when exposed to an alternating in-plane square-wave magnetic field (3.5 mT, 1.5 Hz). The robot is 18 mm in diameter and 80 µm thick. A six-arm robot can grab, transport, and release non-magnetic objects, such as a polyurethane foam cube, under the control of a permanent magnet.

Okay but tell me more about that cute sheep.

[ JKU ]

Interbotix has this “research level robotic crawler,” which both looks mean and runs ROS, a dangerous combination.

And here’s how it all came together:

[ Interbotix ]

I guess if you call them “loitering missile systems” rather than “drones that blow things up” people are less likely to get upset?

[ AeroVironment ]

In this video, we show a planner for a master dual-arm robot to manipulate tethered tools with an assistant dual-arm robot’s help. The assistant robot provides assistance to the master robot by manipulating the tool cable and avoiding collisions. The provided assistance allows the master robot to perform tool placements on the robot workspace table to regrasp the tool, which would typically fail since the tool cable tension may change the tool positions. It also allows the master robot to perform tool handovers, which would normally cause entanglements or collisions with the cable and the environment without the assistance.

[ Harada Lab ]

This video shows a flexible and robust robotic system for autonomous drawing on 3D surfaces. The system takes 2D drawing strokes and a 3D target surface (mesh or point clouds) as input. It maps the 2D strokes onto the 3D surface and generates a robot motion to draw the mapped strokes using visual recognition, grasp pose reasoning, and motion planning.

[ Harada Lab ]

Weekly mobility test. This time the Warthog takes on a fallen tree. Will it cross it? The answer is in the video!

And the answer is: kinda?

[ NORLAB ]

One of the advantages of walking machines is their ability to apply forces in all directions and of various magnitudes to the environment. Many of the multi-legged robots are equipped with point contact feet as these simplify the design and control of the robot. The iStruct project focuses on the development of a foot that allows extensive contact with the environment.

[ DFKI ]

An urgent medical transport was simulated in NASA’s second Systems Integration and Operationalization (SIO) demonstration Sept. 28 with partner Bell Textron Inc. Bell used the remotely-piloted APT 70 to conduct a flight representing an urgent medical transport mission. It is envisioned in the future that an operational APT 70 could provide rapid medical transport for blood, organs, and perishable medical supplies (payload up to 70 pounds). The APT 70 is estimated to move three times as fast as ground transportation.

Always a little suspicious when the video just shows the drone flying, and sitting on the ground, but not that tricky transition between those two states.

[ NASA ]

A Lockheed Martin Robotics Seminar on “Socially Assistive Mobile Robots,” by Yi Guo from Stevens Institute of Technology.

The use of autonomous mobile robots in human environments is on the rise. Assistive robots have been seen in real-world environments, such as robot guides in airports, robot police in public parks, and patrolling robots in supermarkets. In this talk, I will first present current research activities conducted in the Robotics and Automation Laboratory at Stevens. I’ll then focus on robot-assisted pedestrian regulation, where pedestrian flows are regulated and optimized through passive human-robot interaction.

[ UMD ]

This week’s CMU RI Seminar is by CMU’s Zachary Manchester, on “The World’s Tiniest Space Program.”

The aerospace industry has experienced a dramatic shift over the last decade: Flying a spacecraft has gone from something only national governments and large defense contractors could afford to something a small startup can accomplish on a shoestring budget. A virtuous cycle has developed where lower costs have led to more launches and the growth of new markets for space-based data. However, many barriers remain. This talk will focus on driving these trends to their ultimate limit by harnessing advances in electronics, planning, and control to build spacecraft that cost less than a new smartphone and can be deployed in large numbers.

[ CMU RI ]


#437276 Cars Will Soon Be Able to Sense and ...

Imagine you’re on your daily commute to work, driving along a crowded highway while trying to resist looking at your phone. You’re already a little stressed out because you didn’t sleep well, woke up late, and have an important meeting in a couple of hours; you just don’t feel like your best self.

Suddenly another car cuts you off, coming way too close to your front bumper as it changes lanes. Your already-simmering emotions leap into overdrive, and you lay on the horn and shout curses no one can hear.

Except someone—or, rather, something—can hear: your car. Hearing your angry words, aggressive tone, and raised voice, and seeing your furrowed brow, the onboard computer goes into “soothe” mode, as it’s been programmed to do when it detects that you’re angry. It plays relaxing music at just the right volume, releases a puff of light lavender-scented essential oil, and maybe even says some meditative quotes to calm you down.

What do you think—creepy? Helpful? Awesome? Weird? Would you actually calm down, or get even more angry that a car is telling you what to do?

Scenarios like this (maybe without the lavender oil part) may not be imaginary for much longer, especially if companies working to integrate emotion-reading artificial intelligence into new cars have their way. And it wouldn’t just be a matter of your car soothing you when you’re upset—depending on what sort of regulations are enacted, the car’s sensors, camera, and microphone could collect all kinds of data about you and sell it to third parties.

Computers and Feelings
Just as AI systems can be trained to tell the difference between a picture of a dog and one of a cat, they can learn to differentiate between an angry tone of voice or facial expression and a happy one. In fact, there’s a whole branch of machine intelligence devoted to creating systems that can recognize and react to human emotions; it’s called affective computing.

Emotion-reading AIs learn what different emotions look and sound like from large sets of labeled data; “smile = happy,” “tears = sad,” “shouting = angry,” and so on. The most sophisticated systems can likely even pick up on the micro-expressions that flash across our faces before we consciously have a chance to control them, as detailed by Daniel Goleman in his groundbreaking book Emotional Intelligence.
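
Stripped to its essentials, this is ordinary supervised learning. The toy sketch below, with made-up features and labels (real systems train on millions of labeled face videos), shows the “smile = happy” mapping as a tiny classifier.

```python
# Toy illustration of "smile = happy" style supervised learning.
# Features and labels are invented for the example.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical per-face features: [mouth_corner_lift, brow_lowering, tears]
X = [
    [0.9, 0.1, 0.0],  # broad smile
    [0.8, 0.0, 0.0],
    [0.1, 0.9, 0.0],  # furrowed brow
    [0.0, 0.8, 0.1],
    [0.1, 0.2, 1.0],  # crying
]
y = ["happy", "happy", "angry", "angry", "sad"]

clf = DecisionTreeClassifier(random_state=0).fit(X, y)
print(clf.predict([[0.85, 0.05, 0.0]]))  # -> ['happy']
```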

Affective computing company Affectiva, a spinoff from MIT Media Lab, says its algorithms are trained on 5,313,751 face videos (videos of people’s faces as they do an activity, have a conversation, or react to stimuli) representing about 2 billion facial frames. Fascinatingly, Affectiva claims its software can even account for cultural differences in emotional expression (for example, it’s more normalized in Western cultures to be very emotionally expressive, whereas Asian cultures tend to favor stoicism and politeness), as well as gender differences.

But Why?
As reported in Motherboard, companies like Affectiva, Cerence, Xperi, and Eyeris have plans in the works to partner with automakers and install emotion-reading AI systems in new cars. Regulations passed last year in Europe and a bill just introduced this month in the US Senate are helping make the idea of “driver monitoring” less weird, mainly by emphasizing the safety benefits of preemptive warning systems for tired or distracted drivers (remember that part in the beginning about sneaking glances at your phone? Yeah, that).

Drowsiness and distraction can’t really be called emotions, though—so why are they being lumped under an umbrella that has a lot of other implications, including what many may consider an eerily Big Brother-esque violation of privacy?

Our emotions, in fact, are among the most private things about us, since we are the only ones who know their true nature. We’ve developed the ability to hide and disguise our emotions, and this can be a useful skill at work, in relationships, and in scenarios that require negotiation or putting on a game face.

And I don’t know about you, but I’ve had more than one good cry in my car. It’s kind of the perfect place for it; private, secluded, soundproof.

Putting systems into cars that can recognize and collect data about our emotions under the guise of preventing accidents caused by distraction or drowsiness, then, seems a bit like a bait and switch.

A Highway to Privacy Invasion?
European regulations will help keep driver data from being used for any purpose other than ensuring a safer ride. But the US is lagging behind on the privacy front, with car companies largely free from any enforceable laws that would keep them from using driver data as they please.

Affectiva lists the following as use cases for occupant monitoring in cars: personalizing content recommendations, providing alternate route recommendations, adapting environmental conditions like lighting and heating, and understanding user frustration with virtual assistants and designing those assistants to be emotion-aware so that they’re less frustrating.

Our phones already do the first two (though, granted, we’re not supposed to look at them while we drive—but most cars now let you use Bluetooth to display your phone’s content on the dashboard), and the third is simply a matter of reaching a hand out to turn a dial or press a button. The last seems like a solution for a problem that wouldn’t exist without said… solution.

Despite how unnecessary and unsettling it may seem, though, emotion-reading AI isn’t going away, in cars or other products and services where it might provide value.

Besides automotive AI, Affectiva also makes software for clients in the advertising space. With consent, the built-in camera on users’ laptops records them while they watch ads, gauging their emotional response, what kind of marketing is most likely to engage them, and how likely they are to buy a given product. Emotion-recognition tech is also being used or considered for use in mental health applications, call centers, fraud monitoring, and education, among others.

In a 2015 TED talk, Affectiva co-founder Rana El-Kaliouby told her audience that we’re living in a world increasingly devoid of emotion, and her goal was to bring emotions back into our digital experiences. Soon they’ll be in our cars, too; whether the benefits will outweigh the costs remains to be seen.

Image Credit: Free-Photos from Pixabay
