Tag Archives: valley

#437579 Disney Research Makes Robotic Gaze ...

While it’s not totally clear to what extent human-like robots are better than conventional robots for most applications, one area I’m personally comfortable with them is entertainment. The folks over at Disney Research, who are all about entertainment, have been working on this sort of thing for a very long time, and some of their animatronic attractions are actually quite impressive.

The next step for Disney is to make its animatronic figures, which currently feature scripted behaviors, perform in an interactive manner with visitors. The challenge is that this is where you start to get into potential Uncanny Valley territory, the risk you run whenever you try to create “the illusion of life,” which is explicitly what Disney says it’s trying to do.

In a paper presented at IROS this month, a team from Disney Research, Caltech, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering is trying to nail that illusion of life with a single, and perhaps most important, social cue: eye gaze.

Before you watch this video, keep in mind that you’re watching a specific character, as Disney describes:

The robot character plays an elderly man reading a book, perhaps in a library or on a park bench. He has difficulty hearing and his eyesight is in decline. Even so, he is constantly distracted from reading by people passing by or coming up to greet him. Most times, he glances at people moving quickly in the distance, but as people encroach into his personal space, he will stare with disapproval for the interruption, or provide those that are familiar to him with friendly acknowledgment.

What, exactly, does “lifelike” mean in the context of robotic gaze? The paper abstract describes the goal as “[seeking] to create an interaction which demonstrates the illusion of life.” I suppose you could think of it like a sort of old-fashioned Turing test focused on gaze: If the gaze of this robot cannot be distinguished from the gaze of a human, then victory, that’s lifelike. And critically, we’re talking about mutual gaze here—not just a robot gazing off into the distance, but you looking deep into the eyes of this robot and it looking right back at you just like a human would. Or, just like some humans would.

The approach that Disney is using is more animation-y than biology-y or psychology-y. In other words, they’re not trying to figure out what’s going on in our brains to make our eyes move the way that they do when we’re looking at other people and basing their control system on that, but instead, Disney just wants it to look right. This “visual appeal” approach is totally fine, and there’s been an enormous amount of human-robot interaction (HRI) research behind it already, albeit usually with less explicitly human-like platforms. And speaking of human-like platforms, the hardware is a “custom Walt Disney Imagineering Audio-Animatronics bust,” which has DoFs that include neck, eyes, eyelids, and eyebrows.

In order to decide on gaze motions, the system first identifies a person to target with its attention using an RGB-D camera. If more than one person is visible, the system calculates a curiosity score for each, currently simplified to be based on how much motion it sees. Based on which visible person has the highest curiosity score, the system will choose from a variety of high-level gaze behavior states, including:

Read: The Read state can be considered the “default” state of the character. When not executing another state, the robot character will return to the Read state. Here, the character will appear to read a book located at torso level.

Glance: A transition to the Glance state from the Read or Engage states occurs when the attention engine indicates that there is a stimuli with a curiosity score […] above a certain threshold.

Engage: The Engage state occurs when the attention engine indicates that there is a stimuli […] to meet a threshold and can be triggered from both Read and Glance states. This state causes the robot to gaze at the person-of-interest with both the eyes and head.

Acknowledge: The Acknowledge state is triggered from either Engage or Glance states when the person-of-interest is deemed to be familiar to the robot.

Running underneath these higher level behavior states are lower level motion behaviors like breathing, small head movements, eye blinking, and saccades (the quick eye movements that occur when people, or robots, look between two different focal points). The term for this hierarchical behavioral state layering is a subsumption architecture, which goes all the way back to Rodney Brooks’ work on robots like Genghis in the 1980s and Cog and Kismet in the ’90s, and it provides a way for more complex behaviors to emerge from a set of simple, decentralized low-level behaviors.

Brooks, an emeritus professor at MIT and, most recently, cofounder and CTO of Robust.ai, tweeted about the Disney project, saying: “People underestimate how long it takes to get from academic paper to real world robotics. 25 years on Disney is using my subsumption architecture for humanoid eye control, better and smoother now than our 1995 implementations on Cog and Kismet.”

From the paper:

Although originally intended for control of mobile robots, we find that the subsumption architecture, as presented in [17], lends itself as a framework for organizing animatronic behaviors. This is due to the analogous use of subsumption in human behavior: human psychomotor behavior can be intuitively modeled as layered behaviors with incoming sensory inputs, where higher behavioral levels are able to subsume lower behaviors. At the lowest level, we have involuntary movements such as heartbeats, breathing and blinking. However, higher behavioral responses can take over and control lower level behaviors, e.g., fight-or-flight response can induce faster heart rate and breathing. As our robot character is modeled after human morphology, mimicking biological behaviors through the use of a bottom-up approach is straightforward.
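The layering the paper describes can be sketched in a few lines of Python: each behavior proposes commands for the actuators it cares about, and higher layers subsume (overwrite) the output of the layers beneath them. This is a toy illustration of the idea, not Disney's implementation; the layer set follows the paper's examples, but the command format is an assumption:

```python
# Toy subsumption stack: layers ordered lowest to highest priority, where a
# higher layer can subsume (override) commands from the layers below it.
class Layer:
    def propose(self, sensors: dict) -> dict:
        return {}  # actuator -> command; empty dict = no opinion

class Breathing(Layer):          # involuntary, always-on lowest level
    def propose(self, sensors):
        return {"chest": "breathe"}

class Blinking(Layer):
    def propose(self, sensors):
        return {"eyelids": "blink"}

class Engage(Layer):             # higher behavior, takes over eyes and head
    def propose(self, sensors):
        if sensors.get("curiosity", 0) > 0.7:  # assumed threshold
            return {"eyes": "track_person", "head": "turn_to_person"}
        return {}

def arbitrate(layers, sensors):
    """Apply layers lowest-first; later (higher) layers overwrite overlaps."""
    commands = {}
    for layer in layers:
        commands.update(layer.propose(sensors))
    return commands

stack = [Breathing(), Blinking(), Engage()]
print(arbitrate(stack, {"curiosity": 0.9}))
```

When nothing interesting is happening, only the low-level "involuntary" behaviors run, which is exactly the bottom-up fallback the paper is describing.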

The result, as the video shows, appears to be quite good, although it’s hard to tell how it would all come together if the robot had more of, you know, a face. But it seems like you don’t necessarily need to have a lifelike humanoid robot to take advantage of this architecture in an HRI context—any robot that wants to make a gaze-based connection with a human could benefit from doing it in a more human-like way.

“Realistic and Interactive Robot Gaze,” by Matthew K.X.J. Pan, Sungjoon Choi, James Kennedy, Kyna McIntosh, Daniel Campos Zamora, Gunter Niemeyer, Joohyung Kim, Alexis Wieland, and David Christensen from Disney Research, California Institute of Technology, University of Illinois at Urbana-Champaign, and Walt Disney Imagineering, was presented at IROS 2020. You can find the full paper, along with a 13-minute video presentation, on the IROS on-demand conference website.


Posted in Human Robots

#437562 Video Friday: Aquanaut Robot Takes to ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

IROS 2020 – October 25-29, 2020 – [Online]
ICSR 2020 – November 14-16, 2020 – Golden, Colo., USA
Bay Area Robotics Symposium – November 20, 2020 – [Online]
ACRA 2020 – December 8-10, 2020 – [Online]
Let us know if you have suggestions for next week, and enjoy today's videos.

To prepare the Perseverance rover for its date with Mars, NASA’s Mars 2020 mission team conducted a wide array of tests to help ensure a successful entry, descent and landing at the Red Planet. From parachute verification in the world’s largest wind tunnel, to hazard avoidance practice in Death Valley, California, to wheel drop testing at NASA’s Jet Propulsion Laboratory and much more, every system was put through its paces to get ready for the big day. The Perseverance rover is scheduled to land on Mars on February 18, 2021.

[ JPL ]

Awesome to see Aquanaut—the “underwater transformer” we wrote about last year—take to the ocean!

Also their new website has SHARKS on it.

[ HMI ]

Nature has inspired engineers at UNSW Sydney to develop a soft fabric robotic gripper which behaves like an elephant's trunk to grasp, pick up and release objects without breaking them.

[ UNSW ]

Collaborative robots offer increased interaction capabilities at relatively low cost but, in contrast to their industrial counterparts, they inevitably lack precision. We address this problem by relying on a dual-arm system with laser-based sensing to measure relative poses between objects of interest and compensate for pose errors coming from robot proprioception.

[ Paper ]

Developed by NAVER LABS, with Korea University of Technology & Education (Koreatech), the robot arm now features an added waist, extending the available workspace, as well as a sensor head that can perceive objects. It has also been equipped with a robot hand “BLT Gripper” that can change to various grasping methods.

[ NAVER Labs ]

In case you were still wondering why SoftBank acquired Aldebaran and Boston Dynamics:

[ RobotStart ]

DJI's new Mini 2 drone is here with a commercial so hip it makes my teeth scream.

[ DJI ]

Using simple materials, such as plastic struts and cardboard rolls, the first prototype of the RBO Hand 3 is already capable of grasping a large range of different objects thanks to its opposable thumb.

The RBO Hand 3 performs an edge grasp before handing-over the object to a person. The hand actively exploits constraints in the environment (the tabletop) for grasping the object. Thanks to its compliance, this interaction is safe and robust.

[ TU Berlin ]

Flyability's Elios 2 helped researchers inspect Reactor Five at the Chernobyl nuclear disaster site in order to determine whether any uranium was present. Prior to this mission, Reactor Five had not been investigated since the disaster in April of 1986.

[ Flyability ]

Thanks Zacc!

SOTO 2 is here! Together with our development partners from the industry, we have greatly enhanced the SOTO prototype over the last two years. With the new version of the robot, Industry 4.0 will become a great deal more real: SOTO brings materials to the assembly line, just-in-time and completely autonomously.

[ Magazino ]

A drone that can fly sustainably for long distances over land and water, and can land almost anywhere, will be able to serve a wide range of applications. There are already drones that fly using ‘green’ hydrogen, but they either fly very slowly or cannot land vertically. That’s why researchers at TU Delft, together with the Royal Netherlands Navy and the Netherlands Coastguard, developed a hydrogen-powered drone that is capable of vertical take-off and landing whilst also being able to fly horizontally efficiently for several hours, much like regular aircraft. The drone uses a combination of hydrogen and batteries as its power source.

[ MAVLab ]

The National Nuclear User Facility for Hot Robotics (NNUF-HR) is an EPSRC funded facility to support UK academia and industry to deliver ground-breaking, impactful research in robotics and artificial intelligence for application in extreme and challenging nuclear environments.

[ NNUF ]

At the Karolinska University Laboratory in Sweden, an innovation project based around an ABB collaborative robot has increased efficiency and created a better working environment for lab staff.

[ ABB ]

What I find interesting about DJI's enormous new agricultural drone is that it's got a spinning obstacle-detecting sensor that's a radar, not a lidar.

Also worth noting is that it seems to detect the telephone pole, but not the support wire that you can see in the video feed, although the visualization does make it seem like it can spot the power lines above.

[ DJI ]

Josh Pieper has spent the last year building his own quadruped, and you can see what he's been up to in just 12 minutes.

[ mjbots ]

Thanks Josh!

Dr. Ryan Eustice, TRI Senior Vice President of Automated Driving, delivers a keynote speech — “The Road to Vehicle Automation, a Toyota Guardian Approach” — to SPIE's Future Sensing Technologies 2020. During the presentation, Eustice provides his perspective on the current state of automated driving, summarizes TRI's Guardian approach — which amplifies human drivers, rather than replacing them — and summarizes TRI's recent developments in core AD capabilities.

[ TRI ]

Two excellent talks this week from UPenn GRASP Lab, from Ruzena Bajcsy and Vijay Kumar.

A panel discussion on the future of robotics and societal challenges with Dr. Ruzena Bajcsy as a Roboticist and Founder of the GRASP Lab.

In this talk I will describe the role of the White House Office of Science and Technology Policy in supporting science and technology research and education, and the lessons I learned while serving in the office. I will also identify a few opportunities at the intersection of technology and policy and broad societal challenges.

[ UPenn ]

The IROS 2020 “Perception, Learning, and Control for Autonomous Agile Vehicles” workshop is all online—here's the intro, but you can click through for a playlist that includes videos of the entire program, and slides are available as well.

[ NYU ]


#437282 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
OpenAI’s Latest Breakthrough Is Astonishingly Powerful, But Still Fighting Its Flaws
James Vincent | The Verge
“What makes GPT-3 amazing, they say, is not that it can tell you that the capital of Paraguay is Asunción (it is) or that 466 times 23.5 is 10,987 (it’s not), but that it’s capable of answering both questions and many more beside simply because it was trained on more data for longer than other programs. If there’s one thing we know that the world is creating more and more of, it’s data and computing power, which means GPT-3’s descendants are only going to get more clever.”

TECHNOLOGY
I Tried to Live Without the Tech Giants. It Was Impossible.
Kashmir Hill | The New York Times
“Critics of the big tech companies are often told, ‘If you don’t like the company, don’t use its products.’ My takeaway from the experiment was that it’s not possible to do that. It’s not just the products and services branded with the big tech giant’s name. It’s that these companies control a thicket of more obscure products and services that are hard to untangle from tools we rely on for everything we do, from work to getting from point A to point B.”

ROBOTICS
Meet the Engineer Who Let a Robot Barber Shave Him With a Straight Razor
Luke Dormehl | Digital Trends
“No, it’s not some kind of lockdown-induced barber startup or a Jackass-style stunt. Instead, Whitney, assistant professor of mechanical and industrial engineering at Northeastern University School of Engineering, was interested in straight-razor shaving as a microcosm for some of the big challenges that robots have faced in the past (such as their jerky, robotic movement) and how they can now be solved.”

LONGEVITY
Can Trees Live Forever? New Kindling in an Immortal Debate
Cara Giaimo | The New York Times
“Even if a scientist dedicated her whole career to very old trees, she would be able to follow her research subjects for only a small percentage of their lives. And a long enough multigenerational study might see its own methods go obsolete. For these reasons, Dr. Munné-Bosch thinks we will ‘never prove’ whether long-lived trees experience senescence…”

BIOTECH
There’s No Such Thing as Family Secrets in the Age of 23andMe
Caitlin Harrington | Wired
“…technology has a way of creating new consequences for old decisions. Today, some 30 million people have taken consumer DNA tests, a threshold experts have called a tipping point. People conceived through donor insemination are matching with half-siblings, tracking down their donors, forming networks and advocacy organizations.”

ETHICS
The Problems AI Has Today Go Back Centuries
Karen Hao | MIT Technology Review
“In 2018, just as the AI field was beginning to reckon with problems like algorithmic discrimination, [Shakir Mohamed, a South African AI researcher at DeepMind], penned a blog post with his initial thoughts. In it he called on researchers to ‘decolonise artificial intelligence’—to reorient the field’s work away from Western hubs like Silicon Valley and engage new voices, cultures, and ideas for guiding the technology’s development.”

INTERNET
AI-Generated Text Is the Scariest Deepfake of All
Renee DiResta | Wired
“In the future, deepfake videos and audiofakes may well be used to create distinct, sensational moments that commandeer a press cycle, or to distract from some other, more organic scandal. But undetectable textfakes—masked as regular chatter on Twitter, Facebook, Reddit, and the like—have the potential to be far more subtle, far more prevalent, and far more sinister.”

Image credit: Adrien Olichon / Unsplash


#437251 The Robot Revolution Was Televised: Our ...

When robots take over the world, Boston Dynamics may get a special shout-out in the acceptance speech.

“Do you, perchance, recall the many times you shoved our ancestors with a hockey stick on YouTube? It might have seemed like fun and games to you—but we remember.”

In the last decade, while industrial robots went about blandly automating boring tasks like the assembly of Teslas, Boston Dynamics built robots as far removed from Roombas as antelope from amoebas. The flaws in Asimov’s laws of robotics suddenly seemed a little too relevant.

The robot revolution was televised—on YouTube. With tens of millions of views, the robotics pioneer is the undisputed heavyweight champion of robot videos, and has been for years. Each new release is basically guaranteed press coverage—mostly stoking robot fear but occasionally eliciting compassion for the hardships of all robot-kind. And for good reason. The robots are not only some of the most advanced in the world, their makers just seem to have a knack for dynamite demos.

When Google acquired the company in 2013, it was a bombshell. One of the richest tech companies, with some of the most sophisticated AI capabilities, had just paired up with one of the world’s top makers of robots. And some walked on two legs like us.

Of course, the robots aren’t quite as advanced as they seem, and a revolution is far from imminent. The decade’s most meme-worthy moment was a video montage of robots, some of them by Boston Dynamics, falling—over and over and over, in the most awkward ways possible. Even today, they’re often controlled by a human handler behind the scenes, and the most jaw-dropping cuts can require several takes to nail. Google sold the company to SoftBank in 2017, saying that, advanced as the robots were, there wasn’t yet a clear path to commercial products. (Google’s robotics work was later halted and revived.)

Yet, despite it all, Boston Dynamics is still with us and still making sweet videos. Taken as a whole, the evolution in physical prowess over the years has been nothing short of astounding. And for the first time, this year, a Boston Dynamics robot, Spot, finally went on sale to anyone with a cool $75K.

So, we got to thinking: What are our favorite Boston Dynamics videos? And can we gather them up in one place for your (and our) viewing pleasure? Well, great question, and yes, why not. These videos were the ones that entertained or amazed us most (or both). No doubt, there are other beloved hits we missed or inadvertently omitted.

With that in mind, behold: Our favorite Boston Dynamics videos, from that one time they dressed up a humanoid bot in camo and gas mask—because, damn, that’s terrifying—to the time the most advanced robot dog in all the known universe got extra funky.

Let’s Kick This Off With a Big (Loud) Robot Dog
Let’s start with a baseline. BigDog was the first Boston Dynamics YouTube sensation. The year? 2009! The company was working on military contracts, and BigDog was supposed to be a sort of pack mule for soldiers. The video primarily shows off BigDog’s ability to balance on its own, right itself, and move over uneven terrain. Note the power source—a noisy combustion engine—and utilitarian design. Suffice it to say, things have evolved.

Nothing to See Here. Just a Pair of Robot Legs on a Treadmill
While BigDog is the ancestor of later four-legged robots, like Spot, Petman preceded the two-legged Atlas robot. Here, the Petman prototype, just a pair of robot legs and a caged torso, gets a light workout on the treadmill. Again, you can see its ability to balance and right itself when shoved. In contrast to BigDog, Petman is tethered for power (which is why it’s so quiet) and to catch it should it fall. Again, as you’ll see, things have evolved since then.

Robot in Gas Mask and Camo Goes for a Stroll
This one broke the internet—for obvious reasons. Not only is the robot wearing clothes, those clothes happen to be a camouflaged chemical protection suit and gas mask. Still working for the military, Boston Dynamics said Petman was testing protective clothing, and in addition to a full body, it had skin that actually sweated and was studded with sensors to detect leaks. In addition to walking, Petman does some light calisthenics as it prepares to climb out of the uncanny valley. (Still tethered though!)

This Machine Could Run Down Usain Bolt
If BigDog and Petman were built for balance and walking, Cheetah was built for speed. Here you can see the four-legged robot hitting 28.3 miles per hour, which, as the video casually notes, would be enough to run down the fastest human on the planet. Luckily, it wouldn’t be running down anyone as it was firmly leashed in the lab at this point.

Ever Dreamt of a Domestic Robot to Do the Dishes?
After its acquisition by Google, Boston Dynamics eased away from military contracts and applications. It was a return to more playful videos (like BigDog hitting the beach in Thailand and sporting bull horns) and applications that might be practical in civilian life. Here, the team introduced Spot, a streamlined version of BigDog, and showed it doing dishes, delivering a drink, and slipping on a banana peel (which was, of course, instantly made into a viral GIF). Note how much quieter Spot is thanks to an onboard battery and electric motor.

Spot Gets Funky
Nothing remotely practical here. Just funky moves. (Also, with a coat of yellow and black paint, Spot’s dressed more like a polished product as opposed to a utilitarian lab robot.)

Atlas Does Parkour…
Remember when Atlas was just a pair of legs on a treadmill? It’s amazing what ten years brings. By 2019, Atlas had a more polished appearance, like Spot, and had long ago ditched the tethers. Merely balancing was laughably archaic. The robot now had some amazing moves: like a handstand into a somersault, 180- and 360-degree spins, mid-air splits, and just for good measure, a gymnastics-style end to the routine to show it’s in full control.

…and a Backflip?!
To this day, this one is just. Insane.

10 Robot Dogs Tow a Box Truck
Nearly three decades after its founding, Boston Dynamics is steadily making its way into the commercial space. The company is pitching Spot as a multipurpose ‘mobility platform,’ emphasizing it can carry a varied suite of sensors and can go places standard robots can’t. (Its Handle robot is also set to move into warehouse automation.) So far, Spot’s been mostly trialed in surveying and data collection, but as this video suggests, string enough Spots together, and they could tow your car. That said, a pack of 10 would set you back $750K, so, it’s probably safe to say a tow truck is the better option (for now).

Image credit: Boston Dynamics


#437120 The New Indiana Jones? AI. Here’s How ...

Archaeologists have uncovered scores of long-abandoned settlements along coastal Madagascar that reveal environmental connections to modern-day communities. They have detected the nearly indiscernible bumps of earthen mounds left behind by prehistoric North American cultures. Still other researchers have mapped Bronze Age river systems in the Indus Valley, one of the cradles of civilization.

All of these recent discoveries are examples of landscape archaeology. They’re also examples of how artificial intelligence is helping scientists hunt for new archaeological digs on a scale and at a pace unimaginable even a decade ago.

“AI in archaeology has been increasing substantially over the past few years,” said Dylan Davis, a PhD candidate in the Department of Anthropology at Penn State University. “One of the major uses of AI in archaeology is for the detection of new archaeological sites.”

The near-ubiquitous availability of satellite data and other types of aerial imagery for many parts of the world has been both a boon and a bane to archaeologists. They can cover far more ground, but the job of manually mowing their way across digitized landscapes is still time-consuming and laborious. Machine learning algorithms offer a way to parse through complex data far more quickly.

AI Gives Archaeologists a Bird’s Eye View
Davis developed an automated algorithm for identifying large earthen and shell mounds built by native populations long before Europeans arrived with far-off visions of skyscrapers and superhighways in their eyes. The sites still hidden in places like the South Carolina wilderness contain a wealth of information about how people lived, even what they ate, and the ways they interacted with the local environment and other cultures.

In this particular case, the imagery comes from LiDAR, which uses light pulses that can penetrate tree canopies to map forest floors. The team taught the computer the shape, size, and texture characteristics of the mounds so it could identify potential sites from the digital 3D datasets that it analyzed.

“The process resulted in several thousand possible features that my colleagues and I checked by hand,” Davis told Singularity Hub. “While not entirely automated, this saved the equivalent of years of manual labor that would have been required for analyzing the whole LiDAR image by hand.”
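The exact feature set and thresholds aren't given in the article, but the general pattern Davis describes (score candidate features on shape, size, and texture, then hand-check whatever passes) can be sketched as follows. The feature names, ranges, and numbers here are wholly invented for illustration; the real pipeline is far more sophisticated:

```python
# Illustrative filter over candidate mounds extracted from a LiDAR-derived
# elevation model. All feature names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    diameter_m: float  # footprint size
    height_m: float    # relief above surrounding ground
    roundness: float   # 1.0 = perfect circle
    roughness: float   # surface texture measure

def looks_like_mound(c: Candidate) -> bool:
    return (5 <= c.diameter_m <= 100   # plausible mound footprint
            and 0.3 <= c.height_m <= 5
            and c.roundness > 0.6      # earthen mounds tend to be rounded
            and c.roughness < 0.4)     # smoother than natural debris

candidates = [
    Candidate(30, 1.2, 0.8, 0.2),   # mound-like
    Candidate(300, 0.1, 0.2, 0.9),  # probably a natural landform
]
flagged = [c for c in candidates if looks_like_mound(c)]
# Flagged features would then be checked by hand, as Davis describes.
print(len(flagged))
```

Even a crude filter like this shows why the approach saves labor: the machine reduces an entire LiDAR survey to a few thousand candidates worth a human's attention.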

In Madagascar—where Davis is studying human settlement history across the world’s fourth largest island over a timescale of millennia—he developed a predictive algorithm to help locate archaeological sites using freely available satellite imagery. His team was able to survey and identify more than 70 new archaeological sites—and potentially hundreds more—across an area of more than 1,000 square kilometers during the course of about a year.

Machines Learning From the Past Prepare Us for the Future
One impetus behind the rapid identification of archaeological sites is that many are under threat from climate change, such as coastal erosion from sea level rise, or other human impacts. Meanwhile, traditional archaeological approaches are expensive and laborious—serious handicaps in a race against time.

“It is imperative to record as many archaeological sites as we can in a short period of time. That is why AI and machine learning are useful for my research,” Davis said.

Studying the rise and fall of past civilizations can also teach modern humans a thing or two about how to grapple with these current challenges.

Researchers at the Institut Català d’Arqueologia Clàssica (ICAC) turned to machine-learning algorithms to reconstruct more than 20,000 kilometers of paleo-rivers along the Indus Valley civilization of what is now part of modern Pakistan and India. Such AI-powered mapping techniques wouldn’t be possible using satellite images alone.

That effort helped locate many previously unknown archaeological sites and unlocked new insights into those Bronze Age cultures. However, the analytics can also assist governments with important water resource management today, according to Hèctor A. Orengo Romeu, co-director of the Landscape Archaeology Research Group at ICAC.

“Our analyses can contribute to the forecasts of the evolution of aquifers in the area and provide valuable information on aspects such as the variability of agricultural productivity or the influence of climate change on the expansion of the Thar desert, in addition to providing cultural management tools to the government,” he said.

Leveraging AI for Language and Lots More
While landscape archaeology is one major application of AI in archaeology, it’s far from the only one. In 2000, only about a half-dozen scientific papers referred to the use of AI, according to the Web of Science, reputedly the world’s largest global citation database. Last year, more than 65 papers were published concerning the use of machine intelligence technologies in archaeology, with a significant uptick beginning in 2015.

AI methods, for instance, are being used to understand the chemical makeup of artifacts like pottery and ceramics, according to Davis. “This can help identify where these materials were made and how far they were transported. It can also help us to understand the extent of past trading networks.”

Linguistic anthropologists have also used machine intelligence methods to trace the evolution of different languages, Davis said. “Using AI, we can learn when and where languages emerged around the world.”

In other cases, AI has helped reconstruct or decipher ancient texts. Last year, researchers at Google’s DeepMind used a deep neural network called PYTHIA to recreate missing inscriptions in ancient Greek from damaged surfaces of objects made of stone or ceramics.

Named after the Oracle at Delphi, PYTHIA “takes a sequence of damaged text as input, and is trained to predict character sequences comprising hypothesised restorations of ancient Greek inscriptions,” the researchers reported.

In a similar fashion, Chinese scientists applied a convolutional neural network (CNN) to untangle another ancient tongue once found on turtle shells and ox bones. The CNN managed to classify oracle bone morphology in order to piece together fragments of these divination objects, some with inscriptions that represent the earliest evidence of China’s recorded history.

“Differentiating the materials of oracle bones is one of the most basic steps for oracle bone morphology—we need to first make sure we don’t assemble pieces of ox bones with tortoise shells,” lead author of the study, associate professor Shanxiong Chen at China’s Southwest University, told Synced, an online tech publication in China.

AI Helps Archaeologists Get the Scoop…
And then there are applications of AI in archaeology that are simply … interesting. Just last month, researchers published a paper about a machine learning method trained to differentiate between human and canine paleofeces.

The algorithm, dubbed CoproID, compares the gut microbiome DNA found in the ancient material with DNA found in modern feces, enabling it to get the scoop on the origin of the poop.
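The underlying idea, comparing an unknown sample's microbial profile against human and dog references and picking the closer match, can be illustrated with a toy cosine-similarity check. The taxa abundances below are invented, and the real tool works on sequenced DNA with much richer statistical models:

```python
# Toy illustration of the comparison CoproID performs: match an ancient
# sample's gut-microbiome composition to the closest modern reference.
# All numbers are invented; vectors are relative abundances over the same
# hypothetical set of gut taxa.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

human_reference = [0.5, 0.3, 0.1, 0.1]
dog_reference   = [0.1, 0.2, 0.5, 0.2]
ancient_sample  = [0.45, 0.35, 0.1, 0.1]

source = ("human"
          if cosine(ancient_sample, human_reference)
             > cosine(ancient_sample, dog_reference)
          else "dog")
print(source)
```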

Also known as coprolites, paleo-feces from humans and dogs are often found in the same archaeological sites. Scientists need to know which is which if they’re trying to understand something like past diets or disease.

“CoproID is the first line of identification in coprolite analysis to confirm that what we’re looking for is actually human, or a dog if we’re interested in dogs,” Maxime Borry, a bioinformatics PhD student at the Max Planck Institute for the Science of Human History, told Vice.

…But Machine Intelligence Is Just Another Tool
There is obviously quite a bit of work that can be automated through AI. But there’s no reason for archaeologists to hit the unemployment line any time soon. There are also plenty of instances where machines can’t yet match humans in identifying objects or patterns. At other times, it’s just faster doing the analysis yourself, Davis noted.

“For ‘big data’ tasks like detecting archaeological materials over a continental scale, AI is useful,” he said. “But for some tasks, it is sometimes more time-consuming to train an entire computer algorithm to complete a task that you can do on your own in an hour.”

Still, there’s no telling what the future will hold for studying the past using artificial intelligence.

“We have already started to see real improvements in the accuracy and reliability of these approaches, but there is a lot more to do,” Davis said. “Hopefully, we start to see these methods being directly applied to a variety of interesting questions around the world, as these methods can produce datasets that would have been impossible a few decades ago.”

Image Credit: James Wheeler from Pixabay
