Tag Archives: special

#436491 The Year’s Most Fascinating Tech ...

Last Saturday we took a look at some of the most-read Singularity Hub articles from 2019. This week, we’re featuring some of our favorite articles from the last year. As opposed to short pieces about what’s happening, these are long reads about why it matters and what’s coming next. Some of them make the news while others frame the news, go deep on big ideas, go behind the scenes, or explore the human side of technological progress.

We hope you find them as fascinating, inspiring, and illuminating as we did.

DeepMind and Google: The Battle to Control Artificial Intelligence
Hal Hodson | 1843
“[DeepMind cofounder and CEO Demis] Hassabis thought DeepMind would be a hybrid: it would have the drive of a startup, the brains of the greatest universities, and the deep pockets of one of the world’s most valuable companies. Every element was in place to hasten the arrival of [artificial general intelligence] and solve the causes of human misery.”

The Most Powerful Person in Silicon Valley
Katrina Brooker | Fast Company
“Billionaire Masayoshi Son—not Elon Musk, Jeff Bezos, or Mark Zuckerberg—has the most audacious vision for an AI-powered utopia where machines control how we live. And he’s spending hundreds of billions of dollars to realize it. Are you ready to live in Masa World?”

AR Will Spark the Next Big Tech Platform—Call It Mirrorworld
Kevin Kelly | Wired
“Eventually this melded world will be the size of our planet. It will be humanity’s greatest achievement, creating new levels of wealth, new social problems, and uncountable opportunities for billions of people. There are no experts yet to make this world; you are not late.”

Behind the Scenes of a Radical New Cancer Cure
Ilana Yurkiewicz | Undark
“I remember the first time I watched a patient get his Day 0 infusion. It felt anti-climactic. The entire process took about 15 minutes. The CAR-T cells are invisible to the naked eye, housed in a small plastic bag containing clear liquid. ‘That’s it?’ my patient asked when the nurse said it was over. The infusion part is easy. The hard part is everything that comes next.”

The Promise and Price of Cellular Therapies
Siddhartha Mukherjee | The New Yorker
“We like to imagine medical revolutions as, well, revolutionary—propelled forward through leaps of genius and technological innovation. But they are also evolutionary, nudged forward through the optimization of design and manufacture.”

Impossible Foods’ Rising Empire of Almost Meat
Chris Ip | Engadget
“Impossible says it wants to ultimately create a parallel universe of ersatz animal products from steak to eggs. …Yet as Impossible ventures deeper into the culinary uncanny valley, it also needs society to discard a fundamental cultural idea that dates back millennia and accept a new truth: Meat doesn’t have to come from animals.”

Inside the Amazon Warehouse Where Humans and Machines Become One
Matt Simon | Wired
“Seen from above, the scale of the system is dizzying. My robot, a little orange slab known as a ‘drive’ (or more formally and mythically, Pegasus), is just one of hundreds of its kind swarming a 125,000-square-foot ‘field’ pockmarked with chutes. It’s a symphony of electric whirring, with robots pausing for one another at intersections and delivering their packages to the slides.”

Boston Dynamics’ Robots Are Preparing to Leave the Lab—Is the World Ready?
James Vincent | The Verge
“After decades of kicking machines in parking lots, the company is set to launch its first ever commercial bot later this year: the quadrupedal Spot. It’s a crucial test for a company that’s spent decades pursuing long-sighted R&D. And more importantly, the success—or failure—of Spot will tell us a lot about our own robot future. Are we ready for machines to walk among us?”

I Cut the ‘Big Five’ Tech Giants From My Life. It Was Hell
Kashmir Hill | Gizmodo
“Critics of the big tech companies are often told, ‘If you don’t like the company, don’t use its products.’ I did this experiment to find out if that is possible, and I found out that it’s not—with the exception of Apple. …These companies are unavoidable because they control internet infrastructure, online commerce, and information flows.”

Why I (Still) Love Tech: In Defense of a Difficult Industry
Paul Ford | Wired
“The mysteries of software caught my eye when I was a boy, and I still see it with the same wonder, even though I’m now an adult. Proudshamed, yes, but I still love it, the mess of it, the code and toolkits, down to the pixels and the processors, and up to the buses and bridges. I love the whole made world. But I can’t deny that the miracle is over, and that there is an unbelievable amount of work left for us to do.”

The Peculiar Blindness of Experts
David Epstein | The Atlantic
“In business, esteemed (and lavishly compensated) forecasters routinely are wildly wrong in their predictions of everything from the next stock-market correction to the next housing boom. Reliable insight into the future is possible, however. It just requires a style of thinking that’s uncommon among experts who are certain that their deep knowledge has granted them a special grasp of what is to come.”

The Most Controversial Tree in the World
Rowan Jacobson | Pacific Standard
“…we are all GMOs, the beneficiaries of freakishly unlikely genetic mash-ups, and the real Island of Dr. Moreau is that blue-green botanical garden positioned third from the sun. Rather than changing the nature of nature, as I once thought, this might just be the very nature of nature.”

How an Augmented Reality Game Escalated Into Real-World Spy Warfare
Elizabeth Ballou | Vice
“In Ingress, players accept that every park and train station could be the site of an epic showdown, but that’s only the first step. The magic happens when other people accept that, too. When players feel like that magic is real, there are few limits to what they’ll do or where they’ll go for the sake of the game.”

The Shady Cryptocurrency Boom on the Post-Soviet Frontier
Hannah Lucinda Smith | Wired
“…although the tourists won’t guess it as they stand at Kuchurgan’s gates, admiring how the evening light reflects off the silver plaque of Lenin, this plant is pumping out juice to a modern-day gold rush: a cryptocurrency boom that is underway all across the former Soviet Union, from the battlefields of eastern Ukraine to time-warp enclaves like Transnistria and freshly annexed Crimea.”

Scientists Are Totally Rethinking Animal Cognition
Ross Andersen | The Atlantic
“This idea that animals are conscious was long unpopular in the West, but it has lately found favor among scientists who study animal cognition. …For many scientists, the resonant mystery is no longer which animals are conscious, but which are not.”

I Wrote This on a 30-Year-Old Computer
Ian Bogost | The Atlantic
“[Back then] computing was an accompaniment to life, rather than the sieve through which all ideas and activities must filter. That makes using this 30-year-old device a surprising joy, one worth longing for on behalf of what it was at the time, rather than for the future it inaugurated.”

Image Credit: Wes Hicks / Unsplash

#436403 Why Your 5G Phone Connection Could Mean ...

Will getting full bars on your 5G connection mean getting caught out by sudden weather changes?

The question may strike you as hypothetical, nonsensical even, but it is at the core of ongoing disputes between meteorologists and telecommunications companies. Everyone else, including you and me, is caught in the middle, wanting both 5G’s faster connection speeds and precise information about our increasingly unpredictable weather. So why can’t we have both?

Perhaps we can, but because of the way 5G networks function, it may take some special technology—specifically, artificial intelligence.

The Bandwidth Worries
Around the world, the first 5G networks are already being rolled out. The networks use a variety of frequencies to transmit data to and from devices at speeds up to 100 times faster than existing 4G networks.

One of the frequency bands in use lies between 24.25 and 24.45 gigahertz (GHz). In a recent FCC auction, telecommunications companies paid a combined $2 billion for the 5G usage rights to this slice of spectrum in the US.

However, meteorologists are concerned that transmissions near the lower end of that range can interfere with their ability to accurately measure water vapor in the atmosphere. Wired reported that Neil Jacobs, acting chief of the National Oceanic and Atmospheric Administration (NOAA), told the US House Subcommittee on the Environment that 5G interference could substantially cut the amount of weather data satellites can gather. As a result, forecast accuracy could drop by as much as 30 percent.

Among the consequences could be less time to prepare for hurricanes, and it may become harder to predict storms’ paths. Due to the interconnectedness of weather patterns, measurement issues in one location can affect other areas too. Lack of accurate atmospheric data from the US could, for example, lead to less accurate forecasts for weather patterns over Europe.

The Numbers Game
Water vapor emits a faint signal at 23.8 GHz. Weather satellites measure those emissions, and the data is used to gauge atmospheric humidity levels. Meteorologists are concerned that 5G transmissions in the neighboring band can bleed over and disturb those readings. The issue is that there is no way to tell whether a given signal comes from water vapor or from an errant 5G transmission.

Furthermore, 5G disturbances in other frequency bands could make forecasting even more difficult. Rain and snow emit signals around 36-37 GHz, the 50.2-50.4 GHz band is used to measure atmospheric temperatures, and the 86-92 GHz band to monitor clouds and ice. All of these bands are under consideration for international 5G use. Some have warned that the wider consequences could set weather forecasting back to the 1980s.

Telecommunications companies and industry groups have argued back, saying that weather sensors aren’t as susceptible to interference as meteorologists fear, and that 5G devices and signals will produce far less interference with weather forecasts than organizations like NOAA predict. Since very little scientific research has been carried out to test either side’s claims, we seem stuck in a ‘wait and see’ situation.

To offset some of the possible effects, the two groups have tried to reach a consensus on a noise buffer between the 5G transmissions and water-vapor signals. It could be likened to limiting the noise from busy roads or loud sound systems to avoid bothering neighboring buildings.

The World Meteorological Organization was looking to establish a -55 decibel watt (dBW) buffer. In Europe, regulators have settled on a -42 dBW buffer for 5G base stations. For comparison, the US Federal Communications Commission has advocated a -20 dBW buffer, which would in effect allow more than 150 times as much noise as the European proposal.
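That “150 times” figure follows directly from the arithmetic of decibels, which are logarithmic. A quick back-of-the-envelope check (my calculation, not the article’s):

```python
# Decibel watts are logarithmic, so the ratio of allowed noise power between two
# limits is 10 ** (difference_in_dB / 10).
fcc_limit_dbw = -20   # FCC proposal
eu_limit_dbw = -42    # European proposal

ratio = 10 ** ((fcc_limit_dbw - eu_limit_dbw) / 10)
print(f"FCC limit allows ~{ratio:.0f}x the noise power of the EU limit")  # ~158x
```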

How AI Could Help
Much of the conversation about 5G’s possible influence on future weather predictions is centered around mobile phones. However, the phones are far from the only systems that will be receiving and transmitting signals on 5G. Self-driving cars and the Internet of Things are two other technologies that could soon be heavily reliant on faster wireless signals.

Densely populated areas will likely be the biggest emitters of 5G signals, which has led to a suggestion that water-vapor data be gathered only over the oceans.

Another option is to develop artificial intelligence (AI) approaches to clean or process weather data. AI is playing an increasing role in weather forecasting. For example, in 2016 IBM bought The Weather Company for $2 billion. The goal was to combine the two companies’ models and data in IBM’s Watson to create more accurate forecasts. The AI would also be able to predict increases or drops in business revenues due to weather changes. Monsanto has also been investing in AI for forecasting, in this case to provide agriculture-related weather predictions.

Smartphones may also provide a piece of the weather forecasting puzzle. Studies have shown that data from thousands of smartphones can help increase the accuracy of storm predictions, including how strong those storms will be.

“Weather stations cost a lot of money,” Cliff Mass, an atmospheric scientist at the University of Washington in Seattle, told Inside Science, adding, “If there are already 20 million smartphones, you might as well take advantage of the observation system that’s already in place.”

Smartphones may not be the solution when it comes to finding new ways of gathering the atmospheric water-vapor data that 5G could disrupt. But they do go to show that while some technologies open new doors, others shut them.

Image Credit: Free-Photos from Pixabay

#436220 How Boston Dynamics Is Redefining Robot ...

Gif: Bob O’Connor/IEEE Spectrum

With their jaw-dropping agility and animal-like reflexes, Boston Dynamics’ bioinspired robots have always seemed to have no equal. But that preeminence hasn’t stopped the company from pushing its technology to new heights, sometimes literally. Its latest crop of legged machines can trudge up and down hills, clamber over obstacles, and even leap into the air like a gymnast. There’s no denying their appeal: Every time Boston Dynamics uploads a new video to YouTube, it quickly racks up millions of views. These are probably the first robots you could call Internet stars.

Spot

Photo: Bob O’Connor

HEIGHT: 84 cm

WEIGHT: 25 kg

SPEED: 5.76 km/h

SENSING: Stereo cameras, inertial measurement unit, position/force sensors

ACTUATION: 12 DC motors

POWER: Battery (90 minutes per charge)

Boston Dynamics, once owned by Google’s parent company, Alphabet, and now by the Japanese conglomerate SoftBank, has long been secretive about its designs. Few publications have been granted access to its Waltham, Mass., headquarters, near Boston. But one morning this past August, IEEE Spectrum got in. We were given permission to do a unique kind of photo shoot that day. We set out to capture the company’s robots in action—running, climbing, jumping—by using high-speed cameras coupled with powerful strobes. The results you see on this page: freeze-frames of pure robotic agility.

We also used the photos to create interactive views, which you can explore online on our Robots Guide. These interactives let you spin the robots 360 degrees, or make them walk and jump on your screen.

Boston Dynamics has amassed a minizoo of robotic beasts over the years, with names like BigDog, SandFlea, and WildCat. When we visited, we focused on the two most advanced machines the company has ever built: Spot, a nimble quadruped, and Atlas, an adult-size humanoid.

Spot can navigate almost any kind of terrain while sensing its environment. Boston Dynamics recently made it available for lease, with plans to manufacture something like a thousand units per year. It envisions Spot, or even packs of them, inspecting industrial sites, carrying out hazmat missions, and delivering packages. And its YouTube fame has not gone unnoticed: Even entertainment is a possibility, with Cirque du Soleil auditioning Spot as a potential new troupe member.

“It’s really a milestone for us going from robots that work in the lab to these that are hardened for work out in the field,” Boston Dynamics CEO Marc Raibert says in an interview.

Atlas

Photo: Bob O’Connor

HEIGHT: 150 cm

WEIGHT: 80 kg

SPEED: 5.4 km/h

SENSING: Lidar and stereo vision

ACTUATION: 28 hydraulic actuators

POWER: Battery

Our other photographic subject, Atlas, is Boston Dynamics’ biggest celebrity. This 150-centimeter-tall (4-foot-11-inch-tall) humanoid is capable of impressive athletic feats. Its actuators are driven by a compact yet powerful hydraulic system that the company engineered from scratch. The unique system gives the 80-kilogram (176-pound) robot the explosive strength needed to perform acrobatic leaps and flips that hardly seem possible for such a large humanoid. Atlas has inspired a string of parody videos on YouTube and more than a few jokes about a robot takeover.

While Boston Dynamics excels at making robots, it has yet to prove that it can sell them. Ever since its founding in 1992 as a spin-off from MIT, the company has been an R&D-centric operation, with most of its early funding coming from U.S. military programs. The emphasis on commercialization seems to have intensified after the acquisition by SoftBank, in 2017. SoftBank’s founder and CEO, Masayoshi Son, is known to love robots—and profits.

The launch of Spot is a significant step for Boston Dynamics as it seeks to “productize” its creations. Still, Raibert says his long-term goals have remained the same: He wants to build machines that interact with the world dynamically, just as animals and humans do. Has anything changed at all? Yes, one thing, he adds with a grin. In his early career as a roboticist, he used to write papers and count his citations. Now he counts YouTube views.

In the Spotlight

Photo: Bob O’Connor

Boston Dynamics designed Spot as a versatile mobile machine suitable for a variety of applications. The company has not announced how much Spot will cost, saying only that it is being made available to select customers, who will be able to lease the robot. A payload bay lets you add up to 14 kilograms of extra hardware to the robot’s back. One of the accessories that Boston Dynamics plans to offer is a 6-degrees-of-freedom arm, which will allow Spot to grasp objects and open doors.

Super Senses

Photo: Bob O’Connor

Spot’s hardware is almost entirely custom-designed. It includes powerful processing boards for control as well as sensor modules for perception. The sensors are located on the front, rear, and sides of the robot’s body. Each module consists of a pair of stereo cameras, a wide-angle camera, and a texture projector, which enhances 3D sensing in low light. The sensors allow the robot to use the navigation method known as SLAM, or simultaneous localization and mapping, to get around autonomously.
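The article doesn’t go into the math behind those stereo modules, but the basic geometry of stereo depth sensing is standard: depth falls out of the disparity between the left and right images. A minimal sketch with made-up camera parameters (not Spot’s actual optics):

```python
def stereo_depth(disparity_px, focal_length_px, baseline_m):
    """Classic pinhole-stereo relation: depth = focal_length * baseline / disparity."""
    return focal_length_px * baseline_m / disparity_px

# Hypothetical numbers for illustration only: with a 0.1 m baseline and a 600-pixel
# focal length, a feature that shifts 12 pixels between the two images sits ~5 m away.
print(stereo_depth(disparity_px=12, focal_length_px=600, baseline_m=0.1))  # 5.0
```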

Stepping Up

Photo: Bob O’Connor

In addition to its autonomous behaviors, Spot can also be steered by a remote operator with a game-style controller. But even when in manual mode, the robot still exhibits a high degree of autonomy. If there’s an obstacle ahead, Spot will go around it. If there are stairs, Spot will climb them. The robot goes into these operating modes and then performs the related actions completely on its own, without any input from the operator. To go down a flight of stairs, Spot walks backward, an approach Boston Dynamics says provides greater stability.

Funky Feet

Gif: Bob O’Connor/IEEE Spectrum

Spot’s legs are powered by 12 custom DC motors, each geared down to provide high torque. The robot can walk forward, sideways, and backward, and trot at a top speed of 1.6 meters per second. It can also turn in place. Other gaits include crawling and pacing. In one wildly popular YouTube video, Spot shows off its fancy footwork by dancing to the pop hit “Uptown Funk.”
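For reference, that trotting speed lines up with the spec box above: 1.6 meters per second × 3.6 = 5.76 kilometers per hour.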

Robot Blood

Photo: Bob O’Connor

Atlas is powered by a hydraulic system consisting of 28 actuators. These actuators are basically cylinders filled with pressurized fluid that can drive a piston with great force. Their high performance is due in part to custom servo valves that are significantly smaller and lighter than the aerospace models that Boston Dynamics had been using in earlier designs. Though not visible from the outside, the innards of an Atlas are filled with these hydraulic actuators as well as the lines of fluid that connect them. When one of those lines ruptures, Atlas bleeds the hydraulic fluid, which happens to be red.
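To get a feel for why hydraulics deliver that kind of force, it helps to work through the basic cylinder relation, force = pressure × piston area. The numbers below are illustrative assumptions, not Boston Dynamics figures:

```python
import math

# Hypothetical cylinder: a 3 cm bore running at 21 MPa (~3,000 psi, a common
# industrial hydraulic pressure). Neither figure comes from Boston Dynamics.
pressure_pa = 21e6
bore_m = 0.03

piston_area_m2 = math.pi * (bore_m / 2) ** 2
force_n = pressure_pa * piston_area_m2
print(f"{force_n / 1000:.1f} kN")  # ~14.8 kN, roughly 1.5 metric tons of force from one small cylinder
```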

Next Generation

Gif: Bob O’Connor/IEEE Spectrum

The current version of Atlas is a thorough upgrade of the original model, which was built for the DARPA Robotics Challenge in 2015. The newest robot is lighter and more agile. Boston Dynamics used industrial-grade 3D printers to make key structural parts, giving the robot a greater strength-to-weight ratio than earlier designs. The next-gen Atlas can also do something that its predecessor, famously, could not: It can get up after a fall.

Walk This Way

Photo: Bob O’Connor

To control Atlas, an operator provides general steering via a manual controller while the robot uses its stereo cameras and lidar to adjust to changes in the environment. Atlas can also perform certain tasks autonomously. For example, if you add special bar-code-type tags to cardboard boxes, Atlas can pick them up and stack them or place them on shelves.

Biologically Inspired

Photos: Bob O’Connor

Atlas’s control software doesn’t explicitly tell the robot how to move its joints, but rather it employs mathematical models of the underlying physics of the robot’s body and how it interacts with the environment. Atlas relies on its whole body to balance and move. When jumping over an obstacle or doing acrobatic stunts, the robot uses not only its legs but also its upper body, swinging its arms to propel itself just as an athlete would.
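The article doesn’t spell out those models, but a common textbook building block for this style of physics-based balance control is the linear inverted pendulum: the center of mass accelerates away from the point where the feet press on the ground, so the controller keeps shifting that contact point to catch the fall. A minimal sketch under those simplifying assumptions (not Boston Dynamics’ actual controller):

```python
# Linear inverted pendulum: x_ddot = (g / z) * (x - p), where x is the center-of-mass
# position, z its (assumed constant) height, and p the center of pressure under the feet.
g, z, dt = 9.81, 0.9, 0.01
x, v = 0.05, 0.0           # start with the center of mass 5 cm off-center

for _ in range(300):       # simulate 3 seconds
    p = 1.5 * x + 0.5 * v  # simple feedback: push the contact point ahead of the fall
    a = (g / z) * (x - p)
    v += a * dt
    x += v * dt

print(f"center-of-mass offset after 3 s: {x:.4f} m")  # decays toward zero: balance recovered
```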

This article appears in the December 2019 print issue as “By Leaps and Bounds.”

#436190 What Is the Uncanny Valley?

Have you ever encountered a lifelike humanoid robot or a realistic computer-generated face that seemed a bit off or unsettling, though you couldn’t quite explain why?

Take for instance AVA, one of the “digital humans” created by New Zealand tech startup Soul Machines as an on-screen avatar for Autodesk. Watching a lifelike digital being such as AVA can be both fascinating and disconcerting. AVA expresses empathy through her demeanor and movements: slightly raised brows, a tilt of the head, a nod.

By meticulously rendering every lash and line in its avatars, Soul Machines aimed to create a digital human that is virtually indistinguishable from a real one. But to many, rather than looking natural, AVA actually looks creepy. There’s something about it being almost human but not quite that can make people uneasy.

Like AVA, many other ultra-realistic avatars, androids, and animated characters appear stuck in a disturbing in-between world: They are so lifelike and yet they are not “right.” This zone of strangeness is known as the uncanny valley.

Uncanny Valley: Definition and History
The uncanny valley is a concept first introduced in the 1970s by Masahiro Mori, then a professor at the Tokyo Institute of Technology. The term describes Mori’s observation that as robots appear more humanlike, they become more appealing—but only up to a certain point. Upon reaching the uncanny valley, our affinity descends into a feeling of strangeness, a sense of unease, and a tendency to be scared or freaked out.

Image: Masahiro Mori

The uncanny valley as depicted in Masahiro Mori’s original graph: As a robot’s human likeness [horizontal axis] increases, our affinity towards the robot [vertical axis] increases too, but only up to a certain point. For some lifelike robots, our response to them plunges, and they appear repulsive or creepy. That’s the uncanny valley.
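As a toy illustration of the shape Mori sketched (the numbers are invented, not data from his essay), affinity climbs with human likeness, plunges in the “almost human” zone, and recovers for a fully human appearance:

```python
# Invented (likeness, affinity) pairs tracing the general shape of Mori's curve:
# a steady rise, a plunge in the "almost human" zone, and a recovery at the end.
curve = [(0.0, 0.0), (0.3, 0.3), (0.6, 0.6), (0.75, 0.7),
         (0.85, -0.4),  # the valley: figures that are lifelike but "not quite right"
         (1.0, 0.9)]    # a healthy human
for likeness, affinity in curve:
    bar = "#" * int((affinity + 0.5) * 20)
    print(f"likeness {likeness:.2f}  affinity {affinity:+.2f}  {bar}")
```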

In his seminal essay for Japanese journal Energy, Mori wrote:

I have noticed that, in climbing toward the goal of making robots appear human, our affinity for them increases until we come to a valley, which I call the uncanny valley.

Later in the essay, Mori describes the uncanny valley by using an example—the first prosthetic hands:

One might say that the prosthetic hand has achieved a degree of resemblance to the human form, perhaps on a par with false teeth. However, when we realize the hand, which at first sight looked real, is in fact artificial, we experience an eerie sensation. For example, we could be startled during a handshake by its limp boneless grip together with its texture and coldness. When this happens, we lose our sense of affinity, and the hand becomes uncanny.

In an interview with IEEE Spectrum, Mori explained how he came up with the idea for the uncanny valley:

“Since I was a child, I have never liked looking at wax figures. They looked somewhat creepy to me. At that time, electronic prosthetic hands were being developed, and they triggered in me the same kind of sensation. These experiences had made me start thinking about robots in general, which led me to write that essay. The uncanny valley was my intuition. It was one of my ideas.”

Uncanny Valley Examples
To better illustrate how the uncanny valley works, here are some examples of the phenomenon. Prepare to be freaked out.

1. Telenoid

Photo: Hiroshi Ishiguro/Osaka University/ATR

Taking the top spot in the “creepiest” rankings of IEEE Spectrum’s Robots Guide, Telenoid is a robotic communication device designed by Japanese roboticist Hiroshi Ishiguro. Its bald head, lifeless face, and lack of limbs make it seem more alien than human.

2. Diego-san

Photo: Andrew Oh/Javier Movellan/Calit2

Engineers and roboticists at the University of California San Diego’s Machine Perception Lab developed this robot baby to help parents better communicate with their infants. At 1.2 meters (4 feet) tall and weighing 30 kilograms (66 pounds), Diego-san is a big baby—bigger than an average 1-year-old child.

“Even though the facial expression is sophisticated and intuitive in this infant robot, I still perceive a false smile when I’m expecting the baby to appear happy,” says Angela Tinwell, a senior lecturer at the University of Bolton in the U.K. and author of The Uncanny Valley in Games and Animation. “This, along with a lack of detail in the eyes and forehead, can make the baby appear vacant and creepy, so I would want to avoid those ‘dead eyes’ rather than interacting with Diego-san.”

3. Geminoid HI

Photo: Osaka University/ATR/Kokoro

Another one of Ishiguro’s creations, Geminoid HI is his android replica. He even took hair from his own scalp to put onto his robot twin. Ishiguro says he created Geminoid HI to better understand what it means to be human.

4. Sophia

Photo: Mikhail Tereshchenko/TASS/Getty Images

Designed by David Hanson of Hanson Robotics, Sophia is one of the most famous humanoid robots. Like Soul Machines’ AVA, Sophia displays a range of emotional expressions and is equipped with natural language processing capabilities.

5. Anthropomorphized felines

The uncanny valley doesn’t only happen with robots that adopt a human form. The 2019 live-action versions of the animated film The Lion King and the musical Cats brought the uncanny valley to the forefront of pop culture. To some fans, the photorealistic computer animations of talking lions and singing cats that mimic human movements were just creepy.

Are you feeling that eerie sensation yet?

Uncanny Valley: Science or Pseudoscience?
Despite our continued fascination with the uncanny valley, its validity as a scientific concept is highly debated. The uncanny valley was never actually proposed as a scientific concept, yet it has often been criticized in that light.

Mori himself said in his IEEE Spectrum interview that he didn’t explore the concept from a rigorous scientific perspective but as more of a guideline for robot designers:

Pointing out the existence of the uncanny valley was more of a piece of advice from me to people who design robots rather than a scientific statement.

Karl MacDorman, an associate professor of human-computer interaction at Indiana University who has long studied the uncanny valley, interprets the classic graph not as expressing Mori’s theory but as a heuristic for learning the concept and organizing observations.

“I believe his theory is instead expressed by his examples, which show that a mismatch in the human likeness of appearance and touch or appearance and motion can elicit a feeling of eeriness,” MacDorman says. “In my own experiments, I have consistently reproduced this effect within and across sense modalities. For example, a mismatch in the human realism of the features of a face heightens eeriness; a robot with a human voice or a human with a robotic voice is eerie.”

How to Avoid the Uncanny Valley
Unless you intend to create creepy characters or evoke a feeling of unease, you can follow certain design principles to avoid the uncanny valley. “The effect can be reduced by not creating robots or computer-animated characters that combine features on different sides of a boundary—for example, human and nonhuman, living and nonliving, or real and artificial,” MacDorman says.

To make a robot or avatar more realistic and move it beyond the valley, Tinwell says to ensure that a character’s facial expressions match its emotive tones of speech, and that its body movements are responsive and reflect its hypothetical emotional state. Special attention must also be paid to facial elements such as the forehead, eyes, and mouth, which depict the complexities of emotion and thought. “The mouth must be modeled and animated correctly so the character doesn’t appear aggressive or portray a ‘false smile’ when they should be genuinely happy,” she says.

For Christoph Bartneck, an associate professor at the University of Canterbury in New Zealand, the goal is not to avoid the uncanny valley, but to avoid bad character animations or behaviors, stressing the importance of matching the appearance of a robot with its ability. “We’re trained to spot even the slightest divergence from ‘normal’ human movements or behavior,” he says. “Hence, we often fail in creating highly realistic, humanlike characters.”

But he warns that the uncanny valley appears to be more of an uncanny cliff. “We find the likability to increase and then crash once robots become humanlike,” he says. “But we have never observed them ever coming out of the valley. You fall off and that’s it.”

#436079 Video Friday: This Humanoid Robot Will ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

Northeast Robotics Colloquium – October 12, 2019 – Philadelphia, Pa., USA
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

What’s better than a robotics paper with “dynamic” in the title? A robotics paper with “highly dynamic” in the title. From Sangbae Kim’s lab at MIT, the latest exploits of Mini Cheetah:

Yes I’d very much like one please. Full paper at the link below.

[ Paper ] via [ MIT ]

A humanoid robot serving you ice cream—on his own ice cream bike: What a delicious vision!

[ Roboy ]

The Roomba “i” series and “s” series vacuums have just gotten an update that lets you set “keep out” zones, which is super useful. Tell your robot where not to go!

I feel bad, that Roomba was probably just hungry 🙁

[ iRobot ]

We wrote about Voliro’s tilt-rotor hexcopter a couple years ago, and now it’s off doing practical things, like spray painting a building pretty much the same color that it was before.

[ Voliro ]

Thanks Mina!

Here’s a clever approach for bin-picking problematic objects, like shiny things: Just grab a whole bunch, and then sort out what you need on a nice robot-friendly table.

It might take a little bit longer, but what do you care, you’re probably off sipping a cocktail with a little umbrella in it on a beach somewhere.

[ Harada Lab ]

A unique combination of the IRB 1200 and YuMi industrial robots that use vision, AI and deep learning to recognize and categorize trash for recycling.

[ ABB ]

Measuring glacial movements in-situ is a challenging, but necessary task to model glaciers and predict their future evolution. However, installing GPS stations on ice can be dangerous and expensive when not impossible in the presence of large crevasses. In this project, the ASL develops UAVs for dropping and recovering lightweight GPS stations over inaccessible glaciers to record the ice flow motion. This video shows the results of first tests performed at Gorner glacier, Switzerland, in July 2019.

[ EPFL ]

Turns out Tertills actually do a pretty great job fighting weeds.

Plus, they leave all those cute lil’ Tertill tracks.

[ Franklin Robotics ]

The online autonomous navigation and semantic mapping experiment presented [below] is conducted with the Cassie Blue bipedal robot at the University of Michigan. The sensors attached to the robot include an IMU, a 32-beam LiDAR and an RGB-D camera. The whole online process runs in real-time on a Jetson Xavier and a laptop with an i7 processor.

The resulting map is so precise that it looks like we are doing real-time SLAM (simultaneous localization and mapping). In fact, the map is based on dead-reckoning via the InvEKF.

[ GTSAM ] via [ University of Michigan ]

UBTECH has announced an upgraded version of its Meebot, which is 30 percent bigger and comes with more sensors and programmable eyes.

[ UBTECH ]

ABB’s research team will be working with medical staff, scientists, and engineers to develop non-surgical medical robotics systems, including logistics and next-generation automated laboratory technologies. The team will develop robotics solutions that will help eliminate bottlenecks in laboratory work and address the global shortage of skilled medical staff.

[ ABB ]

In this video, Ian and Chris go through Misty’s SDK, discussing the languages we’ve included, the tools that make it easy for you to get started quickly, a quick rundown of how to run the skills you build, plus what’s ahead on the Misty SDK roadmap.

[ Misty Robotics ]

My guess is that this was not one of iRobot’s testing environments for the Roomba.

You know, that’s actually super impressive. And maybe if they threw one of the self-emptying Roombas in there, it would be a viable solution to the entire problem.

[ How Farms Work ]

Part of WeRobotics’ Flying Labs network, Panama Flying Labs is a local knowledge hub catalyzing social good and empowering local experts. Through training and workshops, demonstrations and missions, the Panama Flying Labs team leverages the power of drones, data, and AI to promote entrepreneurship, build local capacity, and confront the pressing social challenges faced by communities in Panama and across Central America.

[ Panama Flying Labs ]

Go on a virtual flythrough of the NIOSH Experimental Mine, one of two courses used in the recent DARPA Subterranean Challenge Tunnel Circuit Event held 15-22 August, 2019. The data used for this partial flythrough tour were collected using 3D LIDAR sensors similar to the sensors commonly used on autonomous mobile robots.

[ SubT ]

Special thanks to PBS, Mark Knobil, Joe Seamans, Stan Brandorff, and the many others who produced this program in 1991.

It features Reid Simmons (and his 1-year-old son), David Wettergreen, Red Whittaker, Mac Macdonald, Omead Amidi, and other Field Robotics Center alumni building the planetary walker prototype called Ambler. The team gets ready for an important demo for NASA.

[ CMU RI ]

As art and technology merge, roboticist Madeline Gannon explores the frontiers of human-robot interaction across the arts, sciences and society, and explores what this could mean for the future.

[ Sonar+D ]
