Tag Archives: talk

#437562 Video Friday: Aquanaut Robot Takes to ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Bay Area Robotics Symposium – November 20, 2020 – [Online]
ACRA 2020 – December 8-10, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today's videos.

To prepare the Perseverance rover for its date with Mars, NASA’s Mars 2020 mission team conducted a wide array of tests to help ensure a successful entry, descent and landing at the Red Planet. From parachute verification in the world’s largest wind tunnel, to hazard avoidance practice in Death Valley, California, to wheel drop testing at NASA’s Jet Propulsion Laboratory and much more, every system was put through its paces to get ready for the big day. The Perseverance rover is scheduled to land on Mars on February 18, 2021.

[ JPL ]

Awesome to see Aquanaut—the “underwater transformer” we wrote about last year—take to the ocean!

Also their new website has SHARKS on it.

[ HMI ]

Nature has inspired engineers at UNSW Sydney to develop a soft fabric robotic gripper which behaves like an elephant's trunk to grasp, pick up and release objects without breaking them.

[ UNSW ]

Collaborative robots offer increased interaction capabilities at relatively low cost but, in contrast to their industrial counterparts, they inevitably lack precision. We address this problem by relying on a dual-arm system with laser-based sensing to measure relative poses between objects of interest and compensate for pose errors coming from robot proprioception.

[ Paper ]
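For a rough sense of what that kind of compensation involves, here is a minimal sketch (not the paper's actual method) using plain 2D homogeneous transforms: if external laser sensing reports a different object pose than the robot's proprioception, the commanded pose is corrected by the transform between the two.

```python
# Toy sketch of pose-error compensation, assuming NumPy only.
# 2D homogeneous transforms stand in for the full 6-DOF case in the paper.
import numpy as np

def se2(x, y, theta):
    """Build a 2D homogeneous transform from a translation and rotation."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x], [s, c, y], [0, 0, 1]])

T_proprio = se2(0.50, 0.20, 0.00)   # where proprioception says the object is
T_sensed  = se2(0.52, 0.19, 0.03)   # where laser sensing says it actually is

# Corrective transform: maps poses expressed in the proprioceptive frame
# onto the externally sensed frame.
T_correction = T_sensed @ np.linalg.inv(T_proprio)

T_command   = se2(0.50, 0.20, 0.00)  # nominal grasp pose from planning
T_corrected = T_correction @ T_command
print(np.round(T_corrected, 3))      # lands on the sensed object pose
```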

Developed by NAVER LABS, with Korea University of Technology & Education (Koreatech), the robot arm now features an added waist, extending the available workspace, as well as a sensor head that can perceive objects. It has also been equipped with a robot hand, the “BLT Gripper,” that can switch between various grasping methods.

[ NAVER Labs ]

In case you were still wondering why SoftBank acquired Aldebaran and Boston Dynamics:

[ RobotStart ]

DJI's new Mini 2 drone is here with a commercial so hip it makes my teeth scream.

[ DJI ]

Using simple materials, such as plastic struts and cardboard rolls, the first prototype of the RBO Hand 3 is already capable of grasping a large range of different objects thanks to its opposable thumb.

The RBO Hand 3 performs an edge grasp before handing over the object to a person. The hand actively exploits constraints in the environment (the tabletop) for grasping the object. Thanks to its compliance, this interaction is safe and robust.

[ TU Berlin ]

Flyability's Elios 2 helped researchers inspect Reactor Five at the Chernobyl nuclear disaster site in order to determine whether any uranium was present. Prior to this mission, Reactor Five had not been investigated since the disaster in April of 1986.

[ Flyability ]

Thanks Zacc!

SOTO 2 is here! Together with our development partners from the industry, we have greatly enhanced the SOTO prototype over the last two years. With the new version of the robot, Industry 4.0 will become a great deal more real: SOTO brings materials to the assembly line, just-in-time and completely autonomously.

[ Magazino ]

A drone that can fly sustainably for long distances over land and water, and can land almost anywhere, will be able to serve a wide range of applications. There are already drones that fly using ‘green’ hydrogen, but they either fly very slowly or cannot land vertically. That’s why researchers at TU Delft, together with the Royal Netherlands Navy and the Netherlands Coastguard, developed a hydrogen-powered drone that is capable of vertical take-off and landing whilst also being able to fly horizontally efficiently for several hours, much like regular aircraft. The drone uses a combination of hydrogen and batteries as its power source.

[ MAVLab ]

The National Nuclear User Facility for Hot Robotics (NNUF-HR) is an EPSRC funded facility to support UK academia and industry to deliver ground-breaking, impactful research in robotics and artificial intelligence for application in extreme and challenging nuclear environments.

[ NNUF ]

At the Karolinska University Laboratory in Sweden, an innovation project based around an ABB collaborative robot has increased efficiency and created a better working environment for lab staff.

[ ABB ]

What I find interesting about DJI's enormous new agricultural drone is that it's got a spinning obstacle-detecting sensor that's a radar, not a lidar.

Also worth noting is that it seems to detect the telephone pole, but not the support wire that you can see in the video feed, although the visualization does make it seem like it can spot the power lines above.

[ DJI ]

Josh Pieper has spent the last year building his own quadruped, and you can see what he's been up to in just 12 minutes.

[ mjbots ]

Thanks Josh!

Dr. Ryan Eustice, TRI Senior Vice President of Automated Driving, delivers a keynote speech — “The Road to Vehicle Automation, a Toyota Guardian Approach” — to SPIE's Future Sensing Technologies 2020. During the presentation, Eustice provides his perspective on the current state of automated driving, summarizes TRI's Guardian approach — which amplifies human drivers, rather than replacing them — and reviews TRI's recent developments in core AD capabilities.

[ TRI ]

Two excellent talks this week from UPenn GRASP Lab, from Ruzena Bajcsy and Vijay Kumar.

A panel discussion on the future of robotics and societal challenges with Dr. Ruzena Bajcsy as a Roboticist and Founder of the GRASP Lab.

In this talk I will describe the role of the White House Office of Science and Technology Policy in supporting science and technology research and education, and the lessons I learned while serving in the office. I will also identify a few opportunities at the intersection of technology and policy and broad societal challenges.

[ UPenn ]

The IROS 2020 “Perception, Learning, and Control for Autonomous Agile Vehicles” workshop is all online—here's the intro, but you can click through for a playlist that includes videos of the entire program, and slides are available as well.

[ NYU ]

Posted in Human Robots

#437293 These Scientists Just Completed a 3D ...

Human brain maps are a dime a dozen these days. Maps that detail neurons in a certain region. Maps that draw out functional connections between those cells. Maps that dive deeper into gene expression. Or even meta-maps that combine all of the above.

But have you ever wondered: how well do those maps represent my brain? After all, no two brains are alike. And if we’re ever going to reverse-engineer the brain as a computer simulation—as Europe’s Human Brain Project is trying to do—shouldn’t we ask whose brain they’re hoping to simulate?

Enter a new kind of map: the Julich-Brain, a probabilistic map of human brains that accounts for individual differences using a computational framework. Rather than generating a static PDF of a brain map, the Julich-Brain atlas is also dynamic, in that it continuously changes to incorporate more recent brain mapping results. So far, the map has data from over 24,000 thinly sliced sections from 23 postmortem brains covering most years of adulthood at the cellular level. But the atlas can also continuously adapt to progress in mapping technologies to aid brain modeling and simulation, and link to other atlases and alternatives.

In other words, rather than “just another” human brain map, the Julich-Brain atlas is its own neuromapping API—one that could unite previous brain-mapping efforts with more modern methods.

“It is exciting to see how far the combination of brain research and digital technologies has progressed,” said Dr. Katrin Amunts of the Institute of Neuroscience and Medicine at Research Centre Jülich in Germany, who spearheaded the study.

The Old Dogma
The Julich-Brain atlas embraces traditional brain-mapping while also yanking the field into the 21st century.

First, the new atlas includes the brain’s cytoarchitecture, or how brain cells are organized. As brain maps go, these kinds of maps are the oldest and most fundamental. Rather than exploring how neurons talk to each other functionally—which is all the rage these days with connectome maps—cytoarchitecture maps draw out the physical arrangement of neurons.

Like a census, these maps literally capture how neurons are distributed in the brain, what they look like, and how they layer within and between different brain regions.

Because neurons aren’t packed together the same way between different brain regions, this provides a way to parse the brain into areas that can be further studied. When we say the brain’s “memory center,” the hippocampus, or its “emotion center,” the amygdala, these distinctions are based on cytoarchitectural maps.

Some may call this type of mapping “boring.” But cytoarchitecture maps form the very basis of any sort of neuroscience understanding. Like hand-drawn maps from early explorers sailing to the western hemisphere, these maps provide the brain’s geographical patterns from which we try to decipher functional connections. If brain regions are cities, then cytoarchitecture maps chart those cities and the highways that link them, while functional maps attempt to capture the trading and other activity that flows along those highways.

You might’ve heard of the most common cytoarchitecture map used today: the Brodmann map from 1909 (yup, that old), which divided the brain into classical regions based on the cells’ morphology and location. The map, while impactful, wasn’t able to account for brain differences between people. More recent brain-mapping technologies have allowed us to dig deeper into neuronal differences and divide the brain into more regions—180 areas in the cortex alone, compared with 43 in the original Brodmann map.

The new study took inspiration from that age-old map and transformed it into a digital ecosystem.

A Living Atlas
Work began on the Julich-Brain atlas in the mid-1990s, with a little help from the crowd.

The preparation of human tissue and its microstructural mapping, analysis, and data processing is incredibly labor-intensive, the authors lamented, making it impossible to do for the whole brain at high resolution in just one lab. To build their “Google Earth” for the brain, the team hooked up with EBRAINS, a shared computing platform set up by the Human Brain Project to promote collaboration between neuroscience labs in the EU.

First, the team acquired MRI scans of 23 postmortem brains, sliced the brains into wafer-thin sections, and scanned and digitized them. They corrected distortions from the chopping using data from the MRI scans and then lined up neurons in consecutive sections—picture putting together a 3D puzzle—to reconstruct the whole brain. Overall, the team had to analyze 24,000 brain sections, which prompted them to build a computational management system for individual brain sections—a win, because they could now track individual donor brains too.
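As a rough illustration of the “3D puzzle” step, here is a toy sketch of aligning consecutive digitized sections by estimating the translation between neighboring slices. It assumes scikit-image and SciPy are available and ignores the nonlinear distortion correction and MRI shape reference the real pipeline relies on.

```python
# Toy sketch: stack consecutive 2D sections by registering each slice
# to its (already aligned) predecessor. Not the Julich-Brain pipeline.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_stack(sections):
    """sections: list of 2D arrays, one per histological slice."""
    aligned = [sections[0]]
    for sec in sections[1:]:
        offset, _, _ = phase_cross_correlation(aligned[-1], sec)  # (dy, dx)
        aligned.append(nd_shift(sec, offset))                     # undo the offset
    return np.stack(aligned)                                      # crude 3D volume

# Example: a bright blob that drifts slightly from slice to slice.
rng = np.random.default_rng(1)
base = np.zeros((128, 128))
base[40:60, 50:70] = 1.0
stack = [nd_shift(base, rng.integers(-5, 6, size=2)) for _ in range(10)]
volume = align_stack(stack)
print(volume.shape)  # (10, 128, 128)
```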

Their method was quite clever. They first mapped their results to a brain template from a single person, called the MNI-Colin27 template. Because the reference brain was extremely detailed, this allowed the team to better figure out the location of brain cells and regions in a particular anatomical space.

However, MNI-Colin27’s brain isn’t your or my brain—or any of the brains the team analyzed. To dilute any of Colin’s potential brain quirks, the team also mapped their dataset onto an “average brain,” dubbed the ICBM2009c (catchy, I know).

This step allowed the team to “standardize” their results with everything else from the Human Connectome Project and the UK Biobank, kind of like adding their Google Maps layer to the existing map. To highlight individual brain differences, the team overlaid their dataset on existing ones, and looked for differences in the cytoarchitecture.
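Conceptually, the probabilistic part of the atlas boils down to averaging: once every donor brain is registered to the same reference space, each voxel can be scored by the fraction of donors in which it fell inside a given delineated area. The snippet below is a toy NumPy sketch of that idea using made-up masks, not the Julich workflow itself.

```python
# Toy cytoarchitectonic probability map: average binary area masks
# from all donors after registration to a common reference space.
import numpy as np

n_donors = 23
grid = (64, 64, 64)                       # toy reference grid, not real resolution
rng = np.random.default_rng(0)

# Hypothetical masks: masks[i] is True where "area X" was delineated in donor i.
masks = np.zeros((n_donors, *grid), dtype=bool)
x, y, z = np.ogrid[:grid[0], :grid[1], :grid[2]]
for i in range(n_donors):
    cx, cy, cz = rng.integers(28, 36, size=3)   # area center jitters between donors
    masks[i] = (x - cx) ** 2 + (y - cy) ** 2 + (z - cz) ** 2 < 8 ** 2

prob_map = masks.mean(axis=0)             # voxelwise probability, values in [0, 1]

# Core voxels present in every donor vs. fringe voxels present in only some:
print((prob_map == 1.0).sum(), ((prob_map > 0) & (prob_map < 1.0)).sum())
```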

The microscopic architecture of neurons changes between two areas (dotted line), forming the basis of different identifiable brain regions. To account for individual differences, the team also calculated a probability map (right hemisphere). Image credit: Forschungszentrum Juelich / Katrin Amunts

Based on structure alone, the brains were both remarkably different and shockingly similar at the same time. For example, the cortex—the outermost layer of the brain—differed physically across donor brains of different ages and sexes. The area that diverged most between people was Broca’s region, which is traditionally linked to speech production. In contrast, parts of the visual cortex were almost identical between the brains.

The Brain-Mapping Future
Rather than relying on the brain’s visible “landmarks,” which can still differ between people, the probabilistic map is far more precise, the authors said.

What’s more, the map could also pool as-yet-unmapped regions in the cortex—about 30 percent or so—into “gap maps,” providing neuroscientists with a better idea of what still needs to be understood.

“New maps are continuously replacing gap maps with progress in mapping while the process is captured and documented … Consequently, the atlas is not static but rather represents a ‘living map,’” the authors said.

Thanks to its structurally sound architecture down to individual cells, the atlas can contribute to brain modeling and simulation down the line—especially for personalized brain models for neurological disorders such as seizures. Researchers can also use the framework for other species, and they can even incorporate new data-crunching processors into the workflow, such as mapping brain regions using artificial intelligence.

Fundamentally, the goal is to build shared resources to better understand the brain. “[These atlases] help us—and more and more researchers worldwide—to better understand the complex organization of the brain and to jointly uncover how things are connected,” the authors said.

Image credit: Richard Watts, PhD, University of Vermont and Fair Neuroimaging Lab, Oregon Health and Science University

Posted in Human Robots

#437276 Cars Will Soon Be Able to Sense and ...

Imagine you’re on your daily commute to work, driving along a crowded highway while trying to resist looking at your phone. You’re already a little stressed out because you didn’t sleep well, woke up late, and have an important meeting in a couple hours, but you just don’t feel like your best self.

Suddenly another car cuts you off, coming way too close to your front bumper as it changes lanes. Your already-simmering emotions leap into overdrive, and you lay on the horn and shout curses no one can hear.

Except someone—or, rather, something—can hear: your car. Hearing your angry words, aggressive tone, and raised voice, and seeing your furrowed brow, the onboard computer goes into “soothe” mode, as it’s been programmed to do when it detects that you’re angry. It plays relaxing music at just the right volume, releases a puff of light lavender-scented essential oil, and maybe even says some meditative quotes to calm you down.

What do you think—creepy? Helpful? Awesome? Weird? Would you actually calm down, or get even more angry that a car is telling you what to do?

Scenarios like this (maybe without the lavender oil part) may not be imaginary for much longer, especially if companies working to integrate emotion-reading artificial intelligence into new cars have their way. And it wouldn’t just be a matter of your car soothing you when you’re upset—depending on what sort of regulations are enacted, the car’s sensors, camera, and microphone could collect all kinds of data about you and sell it to third parties.

Computers and Feelings
Just as AI systems can be trained to tell the difference between a picture of a dog and one of a cat, they can learn to differentiate between an angry tone of voice or facial expression and a happy one. In fact, there’s a whole branch of machine intelligence devoted to creating systems that can recognize and react to human emotions; it’s called affective computing.

Emotion-reading AIs learn what different emotions look and sound like from large sets of labeled data; “smile = happy,” “tears = sad,” “shouting = angry,” and so on. The most sophisticated systems can likely even pick up on the micro-expressions that flash across our faces before we consciously have a chance to control them, as detailed by Daniel Goleman in his groundbreaking book Emotional Intelligence.
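At its core, that training loop is ordinary supervised learning: show a model labeled examples, fit a classifier, then ask it to label new input. The sketch below illustrates the idea with a handful of hypothetical, hand-picked features (mouth curvature, brow furrow, voice pitch variance) and scikit-learn; real affective-computing systems learn far richer representations directly from video and audio.

```python
# Minimal supervised-learning sketch of the "labels -> classifier" idea.
# Features are hypothetical stand-ins, not what any commercial system uses.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [mouth_curvature, brow_furrow, voice_pitch_variance]
X_train = np.array([
    [0.9, 0.1, 0.2],   # broad smile, relaxed brow, calm voice
    [0.1, 0.9, 0.8],   # flat mouth, furrowed brow, raised voice
    [0.2, 0.3, 0.1],   # neutral face, quiet voice
    [0.8, 0.2, 0.3],
    [0.0, 0.8, 0.9],
    [0.3, 0.2, 0.2],
])
y_train = ["happy", "angry", "neutral", "happy", "angry", "neutral"]

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# New frame: furrowed brow, loud and variable voice -> most likely "angry".
print(clf.predict([[0.15, 0.85, 0.95]]))
```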

Affective computing company Affectiva, a spinoff from MIT Media Lab, says its algorithms are trained on 5,313,751 face videos (videos of people’s faces as they do an activity, have a conversation, or react to stimuli) representing about 2 billion facial frames. Fascinatingly, Affectiva claims its software can even account for cultural differences in emotional expression (for example, it’s more normalized in Western cultures to be very emotionally expressive, whereas Asian cultures tend to favor stoicism and politeness), as well as gender differences.

But Why?
As reported in Motherboard, companies like Affectiva, Cerence, Xperi, and Eyeris have plans in the works to partner with automakers and install emotion-reading AI systems in new cars. Regulations passed last year in Europe and a bill just introduced this month in the US Senate are helping make the idea of “driver monitoring” less weird, mainly by emphasizing the safety benefits of preemptive warning systems for tired or distracted drivers (remember that part in the beginning about sneaking glances at your phone? Yeah, that).

Drowsiness and distraction can’t really be called emotions, though—so why are they being lumped under an umbrella that has a lot of other implications, including what many may consider an eerily Big Brother-esque violation of privacy?

Our emotions, in fact, are among the most private things about us, since we are the only ones who know their true nature. We’ve developed the ability to hide and disguise our emotions, and this can be a useful skill at work, in relationships, and in scenarios that require negotiation or putting on a game face.

And I don’t know about you, but I’ve had more than one good cry in my car. It’s kind of the perfect place for it; private, secluded, soundproof.

Putting systems into cars that can recognize and collect data about our emotions under the guise of preventing accidents caused by distraction or drowsiness, then, seems a bit like a bait and switch.

A Highway to Privacy Invasion?
European regulations will help keep driver data from being used for any purpose other than ensuring a safer ride. But the US is lagging behind on the privacy front, with car companies largely free from any enforceable laws that would keep them from using driver data as they please.

Affectiva lists the following as use cases for occupant monitoring in cars: personalizing content recommendations, providing alternate route recommendations, adapting environmental conditions like lighting and heating, and understanding user frustration with virtual assistants and designing those assistants to be emotion-aware so that they’re less frustrating.

Our phones already do the first two (though, granted, we’re not supposed to look at them while we drive—but most cars now let you use Bluetooth to display your phone’s content on the dashboard), and the third is simply a matter of reaching a hand out to turn a dial or press a button. The last seems like a solution for a problem that wouldn’t exist without said… solution.

Despite how unnecessary and unsettling it may seem, though, emotion-reading AI isn’t going away, in cars or other products and services where it might provide value.

Besides automotive AI, Affectiva also makes software for clients in the advertising space. With consent, the built-in camera on users’ laptops records them while they watch ads, gauging their emotional response, what kind of marketing is most likely to engage them, and how likely they are to buy a given product. Emotion-recognition tech is also being used or considered for use in mental health applications, call centers, fraud monitoring, and education, among others.

In a 2015 TED talk, Affectiva co-founder Rana El-Kaliouby told her audience that we’re living in a world increasingly devoid of emotion, and her goal was to bring emotions back into our digital experiences. Soon they’ll be in our cars, too; whether the benefits will outweigh the costs remains to be seen.

Image Credit: Free-Photos from Pixabay

Posted in Human Robots

#437265 This Russian Firm’s Star Designer Is ...

Imagine discovering a new artist or designer—whether visual art, fashion, music, or even writing—and becoming a big fan of her work. You follow her on social media, eagerly anticipate new releases, and chat about her talent with your friends. It’s not long before you want to know more about this creative, inspiring person, so you start doing some research. It’s strange, but there doesn’t seem to be any information about the artist’s past online; you can’t find out where she went to school or who her mentors were.

After some more digging, you find out something totally unexpected: your beloved artist is actually not a person at all—she’s an AI.

Would you be amused? Annoyed? Baffled? Impressed? Probably some combination of all these. If you wanted to ask someone who’s had this experience, you could talk to clients of the biggest multidisciplinary design company in Russia, Art.Lebedev Studio (I know, the period confused me at first too). The studio passed off an AI designer as human for more than a year, and no one caught on.

They gave the AI a human-sounding name—Nikolay Ironov—and it participated in more than 20 different projects that included designing brand logos and building brand identities. According to the studio’s website, several of the logos the AI made attracted “considerable public interest, media attention, and discussion in online communities” due to their unique style.

So how did an AI learn to create such buzz-worthy designs? It was trained using hand-drawn vector images each associated with one or more themes. To start a new design, someone enters a few words describing the client, such as what kind of goods or services they offer. The AI uses those words to find associated images and generate various starter designs, which then go through another series of algorithms that “touch them up.” A human designer then selects the best options to present to the client.
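A very loose sketch of that generate-then-curate pipeline might look like the following; every function, tag, and asset name here is hypothetical, standing in for the studio's actual (unpublished) system.

```python
# Hypothetical sketch of a keyword -> assets -> candidates -> touch-up pipeline.
# Nothing here is Art.Lebedev's real implementation.
import random

ASSET_LIBRARY = {                 # hand-drawn vector assets tagged with themes
    "coffee": ["cup.svg", "bean.svg", "steam.svg"],
    "bakery": ["wheat.svg", "loaf.svg"],
    "tech":   ["circuit.svg", "hexagon.svg"],
}

def find_assets(brief_keywords):
    """Collect assets whose theme tags match the client brief."""
    return [a for kw in brief_keywords for a in ASSET_LIBRARY.get(kw, [])]

def generate_candidates(assets, n=5, seed=42):
    """Produce rough starter designs by combining assets with layout and palette choices."""
    rng = random.Random(seed)
    palettes = ["warm", "cool", "mono"]
    layouts = ["stacked", "inline", "badge"]
    return [{"asset": rng.choice(assets),
             "palette": rng.choice(palettes),
             "layout": rng.choice(layouts)} for _ in range(n)]

def touch_up(design):
    """Second pass of algorithms: normalize spacing, snap to a grid, and so on."""
    return {**design, "grid_aligned": True}

brief = ["coffee", "bakery"]
candidates = [touch_up(d) for d in generate_candidates(find_assets(brief))]
for c in candidates:              # a human designer picks the best to show the client
    print(c)
```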

“These systems combined together provide users with the experience of instantly converting a client’s text brief into a corporate identity design pack archive. Within seconds,” said Sergey Kulinkovich, the studio’s art director. He added that clients liked Nikolay Ironov’s work before finding out he was an AI (and liked the media attention their brands got after Ironov’s identity was revealed even more).

Ironov joins a growing group of AI “artists” that are starting to raise questions about the nature of art and creativity. Where do creative ideas come from? What makes a work of art truly great? And when more than one person is involved in making art, who should own the copyright?

Art.Lebedev is far from the first design studio to employ artificial intelligence; Mailchimp is using AI to let businesses design multi-channel marketing campaigns without human designers, and Adobe is marketing its new Sensei product as an AI design assistant.

While art made by algorithms can be unique and impressive, though, there’s one caveat that’s important to keep in mind when we worry about human creativity being rendered obsolete. Here’s the thing: AIs still depend on people to not only program them, but feed them a set of training data on which their intelligence and output are based. Depending on the size and nature of an AI’s input data, its output will look pretty different from that of a similar system, and a big part of the difference will be due to the people that created and trained the AIs.

Admittedly, Nikolay Ironov does outshine his human counterparts in a handful of ways; as the studio’s website points out, he can handle real commercial tasks effectively, he doesn’t sleep, get sick, or have “crippling creative blocks,” and he can complete tasks in a matter of seconds.

Given these superhuman capabilities, then, why even keep human designers on staff? As detailed above, it will be a while before creative firms really need to consider this question on a large scale; for now, it still takes a hard-working creative human to make a fast-producing creative AI.

Image Credit: Art.Lebedev

Posted in Human Robots

#437157 A Human-Centric World of Work: Why It ...

Long before coronavirus appeared and shattered our pre-existing “normal,” the future of work was a widely discussed and debated topic. We’ve watched automation slowly but surely expand its capabilities and take over more jobs, and we’ve wondered what artificial intelligence will eventually be capable of.

The pandemic swiftly turned the working world on its head, putting millions of people out of a job and forcing millions more to work remotely. But essential questions remain largely unchanged: we still want to make sure we’re not replaced, we want to add value, and we want an equitable society where different types of work are valued fairly.

To address these issues—as well as how the pandemic has impacted them—this week Singularity University held a digital summit on the future of work. Forty-three speakers from multiple backgrounds, countries, and sectors of the economy shared their expertise on everything from work in developing markets to why we shouldn’t want to go back to the old normal.

Gary Bolles, SU’s chair for the Future of Work, kicked off the discussion with his thoughts on a future of work that’s human-centric, including why it matters and how to build it.

What Is Work?
“Work” seems like a straightforward concept to define, but since it’s constantly shifting shape over time, let’s make sure we’re on the same page. Bolles defined work, very basically, as human skills applied to problems.

“It doesn’t matter if it’s a dirty floor or a complex market entry strategy or a major challenge in the world,” he said. “We as humans create value by applying our skills to solve problems in the world.” You can think of the problems that need solving as the demand and human skills as the supply, and the two are in constant oscillation, including, every few decades or centuries, a massive shift.

We’re in the midst of one of those shifts right now (and we already were, long before the pandemic). Skills that have long been in demand are declining. The World Economic Forum’s 2018 Future of Jobs report listed things like manual dexterity, management of financial and material resources, and quality control and safety awareness as declining skills. Meanwhile, skills the next generation will need include analytical thinking and innovation, emotional intelligence, creativity, and systems analysis.

Along Came a Pandemic
With the outbreak of coronavirus and its spread around the world, the demand side of work shrank; all the problems that needed solving gave way to the much bigger, more immediate problem of keeping people alive. But as a result, tens of millions of people around the world are out of work—and those are just the ones being counted, a fraction of the true total. There are additional millions in seasonal or gig jobs, or who work in informal economies, now without work too.

“This is our opportunity to focus,” Bolles said. “How do we help people re-engage with work? And make it better work, a better economy, and a better set of design heuristics for a world that we all want?”

Bolles posed five key questions—some spurred by impact of the pandemic—on which future of work conversations should focus to make sure it’s a human-centric future.

1. What does an inclusive world of work look like? Rather than seeing our current systems of work as immutable, we need to actually understand those systems and how we want to change them.

2. How can we increase the value of human work? We know that robots and software are going to be fine in the future—but for humans to be fine, we need to design for that very intentionally.

3. How can entrepreneurship help create a better world of work? In many economies the new value that’s created often comes from younger companies; how do we nurture entrepreneurship?

4. What will the intersection of workplace and geography look like? A large percentage of the global workforce is now working from home; what could some of the outcomes of that be? How does gig work fit in?

5. How can we ensure a healthy evolution of work and life? The health and the protection of those at risk is why we shut down our economies, but we need to find a balance that allows people to work while keeping them safe.

Problem-Solving Doesn’t End
The end result these questions are driving towards, and our overarching goal, is maximizing human potential. “If we come up with ways we can continue to do that, we’ll have a much more beneficial future of work,” Bolles said. “We should all be talking about where we can have an impact.”

One small silver lining? We had plenty of problems to solve in the world before ever hearing about coronavirus, and now we have even more. Is the pace of automation accelerating due to the virus? Yes. Are companies finding more ways to automate their processes in order to keep people from getting sick? They are.

But we have a slew of new problems on our hands, and we’re not going to stop needing human skills to solve them (not to mention the new problems that will surely emerge as second- and third-order effects of the shutdowns). If Bolles’ definition of work holds up, we’ve got ours cut out for us.

In an article from April titled The Great Reset, Bolles outlined three phases of the unemployment slump (we’re currently still in the first phase) and what we should be doing to minimize the damage. “The evolution of work is not about what will happen 10 to 20 years from now,” he said. “It’s about what we could be doing differently today.”

Watch Bolles’ talk and those of dozens of other experts for more insights into building a human-centric future of work here.

Image Credit: www_slon_pics from Pixabay

Posted in Human Robots