
#438779 Meet Catfish Charlie, the CIA’s ...

Photo: CIA Museum

CIA roboticists designed Catfish Charlie to take water samples undetected. Why they wanted a spy fish for such a purpose remains classified.

In 1961, Tom Rogers of the Leo Burnett Agency created Charlie the Tuna, a jive-talking cartoon mascot and spokesfish for the StarKist brand. The popular ad campaign ran for several decades, and its catchphrase “Sorry, Charlie” quickly hooked itself into the American lexicon.

When the CIA’s Office of Advanced Technologies and Programs started conducting some fish-focused research in the 1990s, Charlie must have seemed like the perfect code name. Except that the CIA’s Charlie was a catfish. And it was a robot.

More precisely, Charlie was an unmanned underwater vehicle (UUV) designed to surreptitiously collect water samples. Its handler controlled the fish via a line-of-sight radio handset. Not much has been revealed about the fish’s construction except that its body contained a pressure hull, ballast system, and communications system, while its tail housed the propulsion. At 61 centimeters long, Charlie wouldn’t set any biggest-fish records. (Some species of catfish can grow to 2 meters.) Whether Charlie reeled in any useful intel is unknown, as details of its missions are still classified.

For exploring watery environments, nothing beats a robot
The CIA was far from alone in its pursuit of UUVs, nor was it the first agency to pursue them. In the United States, such research began in earnest in the 1950s, with the U.S. Navy’s funding of technology for deep-sea rescue and salvage operations. Other projects looked at sea drones for surveillance and scientific data collection.

Aaron Marburg, a principal electrical and computer engineer who works on UUVs at the University of Washington’s Applied Physics Laboratory, notes that the world’s oceans are largely off-limits to crewed vessels. “The nature of the oceans is that we can only go there with robots,” he told me in a recent Zoom call. To explore those uncharted regions, he said, “we are forced to solve the technical problems and make the robots work.”

Image: Thomas Wells/Applied Physics Laboratory/University of Washington

An oil painting commemorates SPURV, a series of underwater research robots built by the University of Washington’s Applied Physics Lab. In nearly 400 deployments, no SPURVs were lost.

One of the earliest UUVs happens to sit in the hall outside Marburg’s office: the Self-Propelled Underwater Research Vehicle, or SPURV, developed at the Applied Physics Laboratory beginning in the late ’50s. SPURV’s original purpose was to gather data on the physical properties of the sea, in particular temperature and sound velocity. Unlike Charlie, with its fishy exterior, SPURV had a utilitarian torpedo shape that was more in line with its mission. Just over 3 meters long, it could dive to 3,600 meters, had a top speed of 2.5 m/s, and operated for 5.5 hours on a battery pack. Data was recorded on magnetic tape, later transferred to a photosensitive paper strip recorder or other computer-compatible media, and then plotted using an IBM 1130.
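
Those figures imply a respectable patrol range for a late-1950s machine. Here is a minimal back-of-the-envelope sketch in Python, assuming (an optimistic upper bound not stated in the article) that SPURV could hold its top speed for the full battery life:

```python
# Upper-bound range estimate from the SPURV figures quoted above.
# Assumption: top speed (2.5 m/s) sustained for the entire
# 5.5-hour battery endurance -- an optimistic simplification.

top_speed_m_per_s = 2.5
endurance_hours = 5.5

range_km = top_speed_m_per_s * endurance_hours * 3600 / 1000
print(f"Maximum range: about {range_km:.0f} km")  # ~50 km
```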

Over time, SPURV’s instrumentation grew more capable, and the scope of the project expanded. In one study, for example, SPURV carried a fluorometer to measure the dispersion of dye in the water, to support wake studies. The project was so successful that additional SPURVs were developed, eventually completing nearly 400 missions by the time the program ended in 1979.

Working on underwater robots, Marburg says, means balancing technical risks and mission objectives against constraints on funding and other resources. Support for purely speculative research in this area is rare. The goal, then, is to build UUVs that are simple, effective, and reliable. “No one wants to write a report to their funders saying, ‘Sorry, the batteries died, and we lost our million-dollar robot fish in a current,’ ” Marburg says.

A robot fish called SoFi
Since SPURV, there have been many other unmanned underwater vehicles, of various shapes and sizes and for various missions, developed in the United States and elsewhere. UUVs and their autonomous cousins, AUVs, are now routinely used for scientific research, education, and surveillance.

At least a few of these robots have been fish-inspired. In the mid-1990s, for instance, engineers at MIT worked on a RoboTuna, also nicknamed Charlie. Modeled loosely on a bluefin tuna, it had a propulsion system that mimicked the tail fin of a real fish. This was a big departure from the screws or propellers used on UUVs like SPURV. But this Charlie never swam on its own; it was always tethered to a bank of instruments. The MIT group’s next effort, a RoboPike called Wanda, overcame this limitation and swam freely, but never learned to avoid running into the sides of its tank.

Fast-forward 25 years, and a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled SoFi, a decidedly more fishy robot designed to swim next to real fish without disturbing them. Controlled by a retrofitted Super Nintendo handset, SoFi could dive more than 15 meters, control its own buoyancy, and swim around for up to 40 minutes between battery charges. Observing that SoFi’s creators tested their robot fish in the gorgeous waters off Fiji, IEEE Spectrum’s Evan Ackerman noted, “Part of me is convinced that roboticists take on projects like these…because it’s a great way to justify a trip somewhere exotic.”

SoFi, Wanda, and both Charlies are all examples of biomimetics, a term coined in 1974 to describe the study of biological mechanisms, processes, structures, and substances. Biomimetics looks to nature to inspire design.

Sometimes, the resulting technology proves to be more efficient than its natural counterpart, as Richard James Clapham discovered while researching robotic fish for his Ph.D. at the University of Essex, in England. Under the supervision of robotics expert Huosheng Hu, Clapham studied the swimming motion of Cyprinus carpio, the common carp. He then developed four robots that incorporated carplike swimming, the most capable of which was iSplash-II. When tested under ideal conditions—that is, a tank 5 meters long, 2 meters wide, and 1.5 meters deep—iSplash-II achieved a maximum velocity of 11.6 body lengths per second (or about 3.7 m/s). That’s faster than a real carp, which averages a top velocity of 10 body lengths per second. But iSplash-II fell short of the peak performance of a fish darting quickly to avoid a predator.
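
To make the speed comparison concrete, here is a quick conversion sketch in Python. The ~0.32-meter body length is not given in the article; it is inferred from the quoted figures (3.7 m/s at 11.6 body lengths per second):

```python
# Convert swimming speeds from body lengths per second to m/s.
# Assumption: iSplash-II's body length (~0.32 m) is back-calculated
# from the article's own figures, not a published specification.

body_length_m = 3.7 / 11.6  # ~0.32 m, inferred

def to_m_per_s(body_lengths_per_s: float) -> float:
    """Convert a speed in body lengths per second to meters per second."""
    return body_lengths_per_s * body_length_m

print(f"iSplash-II: {to_m_per_s(11.6):.2f} m/s")  # ~3.70 m/s
print(f"Real carp:  {to_m_per_s(10.0):.2f} m/s")  # ~3.19 m/s
```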

Of course, swimming in a test pool or placid lake is one thing; surviving the rough and tumble of a breaking wave is another matter. The latter is something that roboticist Kathryn Daltorio has explored in depth.

Daltorio, an assistant professor at Case Western Reserve University and codirector of the Center for Biologically Inspired Robotics Research there, has studied the movements of cockroaches, earthworms, and crabs for clues on how to build better robots. After watching a crab navigate from the sandy beach to shallow water without being thrown off course by a wave, she was inspired to create an amphibious robot with tapered, curved feet that could dig into the sand. This design allowed her robot to withstand forces up to 138 percent of its body weight.

Photo: Nicole Graf

This robotic crab created by Case Western’s Kathryn Daltorio imitates how real crabs grab the sand to avoid being toppled by waves.

In her designs, Daltorio is following architect Louis Sullivan’s famous maxim: Form follows function. She isn’t trying to imitate the aesthetics of nature—her robot bears only a passing resemblance to a crab—but rather the best functionality. She looks at how animals interact with their environments and steals evolution’s best ideas.

And yet, Daltorio admits, there is also a place for realistic-looking robotic fish, because they can capture the imagination and spark interest in robotics as well as nature. And unlike a hyperrealistic humanoid, a robotic fish is unlikely to fall into the creepiness of the uncanny valley.

In writing this column, I was delighted to come across plenty of recent examples of such robotic fish. Ryomei Engineering, a subsidiary of Mitsubishi Heavy Industries, has developed several: a robo-coelacanth, a robotic gold koi, and a robotic carp. The coelacanth was designed as an educational tool for aquariums, to present a lifelike specimen of a rarely seen fish that is often known only from its fossil record. Meanwhile, engineers at the University of Kitakyushu in Japan created Tai-robot-kun, a credible-looking sea bream. And a team at Evologics, based in Berlin, came up with the BOSS manta ray.

Whatever their official purpose, these nature-inspired robocreatures can inspire us in return. UUVs that open up new and wondrous vistas on the world’s oceans can extend humankind’s ability to explore. We create them, and they enhance us, and that strikes me as a very fair and worthy exchange.

This article appears in the March 2021 print issue as “Catfish, Robot, Swimmer, Spy.”

About the Author
Allison Marsh is an associate professor of history at the University of South Carolina and codirector of the university’s Ann Johnson Institute for Science, Technology & Society.


#434767 7 Non-Obvious Trends Shaping the Future

When you think of trends that might be shaping the future, the first things that come to mind probably have something to do with technology: Robots taking over jobs. Artificial intelligence advancing and proliferating. 5G making everything faster, connected cities making everything easier, data making everything more targeted.

Technology is undoubtedly changing the way we live, and will continue to do so—probably at an accelerating rate—in the near and far future. But there are other trends impacting the course of our lives and societies, too. They’re less obvious, and some have nothing to do with technology.

For the past nine years, entrepreneur and author Rohit Bhargava has read hundreds of articles across all types of publications, tagged and categorized them by topic, funneled frequent topics into broader trends, analyzed those trends, narrowed them down to the most significant ones, and published a book about them as part of his ‘Non-Obvious’ series. He defines a trend as “a unique curated observation of the accelerating present.”

In an encore session at South by Southwest last week (hundreds of people who wanted to attend his initial talk couldn’t fit in the room, so a re-do was scheduled), Bhargava shared details of his creative process, why it’s hard to think non-obviously, the most important trends of this year, and how to make sure they don’t get the best of you.

Thinking Differently
“Non-obvious thinking is seeing the world in a way other people don’t see it,” Bhargava said. “The secret is curating your ideas.” Curation collects ideas and presents them in a meaningful way; museum curators, for example, decide which works of art to include in an exhibit and how to present them.

For his own curation process, Bhargava uses what he calls the haystack method. Rather than searching for a needle in a haystack, he gathers ‘hay’ (ideas and stories) then uses them to locate and define a ‘needle’ (a trend). “If you spend enough time gathering information, you can put the needle into the middle of the haystack,” he said.

A big part of gathering information is looking for it in places you wouldn’t normally think to look. In his case, that means that on top of reading what everyone else reads—the New York Times, the Washington Post, the Economist—he also buys publications like Modern Farmer, Teen Vogue, and Ink magazine. “It’s like stepping into someone else’s world who’s not like me,” he said. “That’s impossible to do online because everything is personalized.”

Three common barriers make non-obvious thinking hard.

The first is unquestioned assumptions, which are facts or habits we think will never change. When James Dyson first invented the bagless vacuum, he wanted to sell the license to it, but no one believed people would want to spend more money up front on a vacuum and then never have to buy bags again. The success of Dyson’s business today shows how mistaken that assumption—that people wouldn’t adapt to a product that, at the end of the day, was far more sensible—turned out to be. “Making the wrong basic assumptions can doom you,” Bhargava said.

The second barrier to thinking differently is constant disruption. “Everything is changing as industries blend together,” Bhargava said. “The speed of change makes everyone want everything, all the time, and people expect the impossible.” We’ve come to expect every alternative to be presented to us in every moment, but in many cases this doesn’t serve us well; we’re surrounded by noise and have trouble discerning what’s valuable and authentic.

This ties into the third barrier, which Bhargava calls the believability crisis. “Constant sensationalism makes people skeptical about everything,” he said. With the advent of fake news and technology like deepfakes, we’re in a post-truth, post-fact era, and are in a constant battle to discern what’s real from what’s not.

2019 Trends
Bhargava’s efforts to see past these barriers and curate information yielded 15 trends he believes are currently shaping the future. He shared seven of them, along with thoughts on how to stay ahead of the curve.

Retro Trust
We tend to trust things we have a history with. “People like nostalgic experiences,” Bhargava said. With tech moving as fast as it is, old things are quickly getting replaced by shinier, newer, often more complex things. But not everyone’s jumping on board—and some who’ve been on board are choosing to jump off in favor of what worked for them in the past.

“We’re turning back to vinyl records and film cameras, deliberately downgrading to phones that only text and call,” Bhargava said. In a period of too much change too fast, people are craving familiarity and dependability. To capitalize on that sentiment, entrepreneurs should seek out opportunities for collaboration—how can you build a product that’s new, but feels reliable and familiar?

Muddled Masculinity
Women have increasingly taken on leadership roles and advanced in the workplace; they now own more homes than men and graduate from college at higher rates. That’s all great for us ladies—but not so great for men or, perhaps more generally, for the concept of masculinity.

“Female empowerment is causing confusion about what it means to be a man today,” Bhargava said. “Men don’t know what to do—should they say something? Would that make them an asshole? Should they keep quiet? Would that make them an asshole?”

By encouraging the non-conforming, we can help take some weight off the traditional gender roles, and their corresponding divisions and pressures.

Innovation Envy
Innovation has become an overused word, to the point that it’s thrown onto ideas and actions that aren’t really innovative at all. “We innovate by looking at someone else and doing the same,” Bhargava said. If an employee brings a radical idea to someone in a leadership role, in many companies the leadership will say they need a case study before implementing it—but if it’s already been done, it’s not innovative. “With most innovation what ends up happening is not spectacular failure, but irrelevance,” Bhargava said.

He suggests that rather than being on the defensive, companies should play offense with innovation, and when it doesn’t work “fail as if no one’s watching” (often, no one will be).

Artificial Influence
Thanks to social media and other technologies, there are a growing number of fabricated things that, despite not being real, influence how we think. “15 percent of all Twitter accounts may be fake, and there are 60 million fake Facebook accounts,” Bhargava said. There are virtual influencers and even virtual performers.

“Don’t hide the artificial ingredients,” Bhargava advised. “Some people are going to pretend it’s all real. We have to be ethical.” The creators of fabrications meant to influence the way people think, or the products they buy, or the decisions they make, should make it crystal-clear that there aren’t living, breathing people behind the avatars.

Enterprise Empathy
Another reaction to the fast pace of change these days—and the fast pace of life, for that matter—is that empathy is regaining value and even becoming a driver of innovation. Companies are searching for ways to give people a sense of reassurance. The Tesco grocery brand in the UK has a “relaxed lane” for those who don’t want to feel rushed as they check out. Starbucks opened a “signing store” in Washington DC, and most of its regular customers have learned some sign language.

“Use empathy as a principle to help yourself stand out,” Bhargava said. Besides being a good business strategy, “made with empathy” will ideally promote, well, more empathy, a quality there’s often a shortage of.

Robot Renaissance
From automating factory jobs to flipping burgers to cleaning our floors, robots have firmly taken their place in our day-to-day lives—and they’re not going away anytime soon. “There are more situations with robots than ever before,” Bhargava said. “They’re exploring underwater. They’re concierges at hotels.”

The robot revolution feels intimidating. But Bhargava suggests embracing robots with more curiosity than concern. While they may replace some tasks we don’t want replaced, they’ll also be hugely helpful in multiple contexts, from elderly care to dangerous manual tasks.

Back-storytelling
Similar to retro trust and enterprise empathy, organizations have started to tell their brand’s story to gain customer loyalty. “Stories give us meaning, and meaning is what we need in order to be able to put the pieces together,” Bhargava said. “Stories give us a way of understanding the world.”

Finding the story behind your business, brand, or even yourself, and sharing it openly, can help you connect with people, be they customers, coworkers, or friends.

Tech’s Ripple Effects
While it may not overtly sound like it, most of the trends Bhargava identified for 2019 are tied to technology, and are in fact a sort of backlash against it. Tech has made us question who to trust, how to innovate, what’s real and what’s fake, how to make the best decisions, and even what it is that makes us human.

By being aware of these trends, sharing them, and having conversations about them, we’ll help shape the way tech continues to be built, and thus the way it impacts us down the road.

Image Credit: Rohit Bhargava by Brian Smale


#433939 The Promise—and Complications—of ...

Every year, for just a few days in a major city, small teams of roboticists get to live the dream: ordering around their own personal robot butlers. In carefully constructed replicas of a restaurant scene or a domestic setting, these robots perform any number of simple algorithmic tasks. “Get the can of beans from the shelf. Greet the visitors to the museum. Help the humans with their shopping. Serve the customers at the restaurant.”

This is RoboCup@Home, the annual tournament where teams of roboticists put their autonomous service robots to the test in practical domestic applications. The tasks seem simple and mundane, but considering the technology required reveals that they’re really not.

The Robot Butler Contest
Say you want a robot to fetch items in the supermarket. In a crowded, noisy environment, the robot must understand your commands, ask for clarification, and map out and navigate an unfamiliar environment, avoiding obstacles and people as it does so. Then it must recognize the product you requested, perhaps in a cluttered environment, perhaps in an unfamiliar orientation. It has to grasp that product appropriately—recall that there are entire multimillion-dollar competitions dedicated just to developing robots that can grasp a range of objects—and then return it to you.

It’s a job so simple that a child could do it—and so complex that teams of smart roboticists can spend weeks programming and engineering, and still end up struggling to complete simplified versions of this task. Of course, the child has the advantage of millions of years of evolutionary research and development, while the first robots that could even begin these tasks were only developed in the 1970s.

Even bearing this in mind, Robocup @ Home can feel like a place where futurist expectations come crashing into technologist reality. You dream of a smooth-voiced, sardonic JARVIS who’s already made your favorite dinner when you come home late from work; you end up shouting “remember the biscuits” at a baffled, ungainly droid in aisle five.

Caring for the Elderly
Famously, Japan is one of the most robo-enthusiastic nations in the world; it’s the nation that stunned us all with ASIMO in 2000, and several studies have been conducted into the phenomenon. It’s no surprise, then, that humanoid robotics should be seriously considered as a solution to the crisis of the aging population. The Japanese government, as part of its robot strategy, has already invested $44 million in their development.

Toyota’s Human Support Robot (HSR-2) is a simple but programmable robot with a single arm; it can be remote-controlled to pick up objects and can monitor patients. HSR-2 has become the default robot for use in RoboCup@Home tournaments, at least in tasks that involve manipulating objects.

Alongside this, Toyota is working on exoskeletons to assist people in walking after strokes. It may surprise you to learn that nurses suffer back injuries more than any other occupation, at roughly three times the rate of construction workers, due to the day-to-day work of lifting patients. Toyota has a Care Assist robot/exoskeleton designed to fix precisely this problem by helping care workers with the heavy lifting.

The Home of the Future
The enthusiasm for domestic robotics is easy to understand and, in fact, many startups already sell robots marketed as domestic helpers in some form or another. In general, though, they skirt the immensely complicated task of building a fully capable humanoid robot—a task that even Google’s skunk-works department gave up on, at least until recently.

It’s plain to see why: far more research and development is needed before these domestic robots could be used reliably and at a reasonable price. Consumers with expectations inflated by years of science fiction saturation might find themselves frustrated as the robots fail to perform basic tasks.

Instead, domestic robotics efforts fall into one of two categories. There are robots specialized to perform a domestic task, like iRobot’s Roomba, which stuck to vacuuming and became the most successful domestic robot of all time by far.

The tasks need not necessarily be simple, either: the impressive but expensive automated kitchen uses the world’s most dexterous hands to cook meals, provided it can recognize the ingredients. Other robots focus on human-robot interaction, like Jibo: they essentially package the abilities of a voice assistant like Siri, Cortana, or Alexa to respond to simple questions and perform online tasks in a friendly, dynamic robot exterior.

In this way, the future of domestic automation starts to look a lot more like smart homes than a robot or domestic servant. General robotics is difficult in the same way that general artificial intelligence is difficult; competing with humans, the great all-rounders, is a challenge. Getting superhuman performance at a more specific task, however, is feasible and won’t cost the earth.

Individual startups without the financial might of a Google or an Amazon can develop specialized robots, like Seven Dreamers’ laundry robot, and hope that one day it will form part of a network of autonomous robots that each have a role to play in the household.

Domestic Bliss?
The Smart Home has been a staple of futurist expectations for a long time, to the extent that movies featuring smart homes out of control are already a cliché. But critics of the smart home—and of the internet of things more generally—tend to argue that, more often than not, software just adds an additional layer of things that can break, in exchange for minimal added convenience. A toaster that can short-circuit is bad enough, but a toaster that can refuse to serve you toast because its firmware is updating is something else entirely.

That’s before you even get into the security vulnerabilities, which are all the more important when devices are installed in your home and capable of interacting with the people who live there. The idea of a smart watch that lets you keep an eye on your children might sound like something a security-conscious parent would like: a smart watch that can be hacked to track children, listen in on their surroundings, and even fool them into thinking a call is coming from their parents is the stuff of nightmares.

Key to many of these problems is the lack of standardization for security protocols, and even the products themselves. The idea of dozens of startups each developing a highly specialized piece of robotics to perform a single domestic task sounds great in theory, until you realize the potential hazards and pitfalls of getting dozens of incompatible devices to work together on the same system.

It seems inevitable that there are yet more layers of domestic drudgery that can be automated away, decades after the first generation of time-saving domestic devices like the dishwasher and vacuum cleaner became mainstream. With projected market values into the billions and trillions of dollars, there is no shortage of industry interest in ironing out these kinks. But, for now at least, the answer to the question: “Where’s my robot butler?” is that it is gradually, painstakingly learning how to sort through groceries.

Image Credit: Nonchanon / Shutterstock.com


#433506 MIT’s New Robot Taught Itself to Pick ...

Back in 2016, somewhere in a Google-owned warehouse, more than a dozen robotic arms spent hours on end quietly grasping objects of various shapes and sizes, teaching themselves how to pick up and hold the items appropriately—mimicking the way a baby gradually learns to use its hands.

Now, scientists from MIT have made a new breakthrough in machine learning: their new system can not only teach itself to see and identify objects, but also understand how best to manipulate them.

This means that, armed with the new machine learning routine referred to as “dense object nets (DON),” the robot would be capable of picking up an object that it’s never seen before, or in an unfamiliar orientation, without resorting to trial and error—exactly as a human would.

The deceptively simple ability to dexterously manipulate objects with our hands is a huge part of why humans are the dominant species on the planet. We take it for granted. Hardware innovations like the Shadow Dexterous Hand have enabled robots to softly grip and manipulate delicate objects for many years, but the software required to control these precision-engineered machines in a range of circumstances has proved harder to develop.

This was not for want of trying. The Amazon Robotics Challenge offers millions of dollars in prizes (and potentially far more in contracts, as Amazon’s $775 million acquisition of Kiva Systems shows) for the best dexterous robot able to pick and package items in their warehouses. The lucrative dream of a fully automated delivery system is missing this crucial ability.

Meanwhile, the RoboCup@Home challenge—an offshoot of the popular RoboCup tournament for soccer-playing robots—aims to make everyone’s dream of having a robot butler a reality. The competition involves teams drilling their robots through simple household tasks that require social interaction or object manipulation, like helping to carry the shopping, sorting items onto a shelf, or guiding tourists around a museum.

Yet all of these endeavors have proved difficult; the tasks often have to be simplified to enable the robot to complete them at all. New or unexpected elements, such as those encountered in real life, more often than not throw the system entirely. Programming the robot’s every move in explicit detail is not a scalable solution: this can work in the highly controlled world of the assembly line, but not in everyday life.

Computer vision is improving all the time. Neural networks, including those you train every time you prove that you’re not a robot with CAPTCHA, are getting better at sorting objects into categories, and identifying them based on sparse or incomplete data, such as when they are occluded, or in different lighting.

But many of these systems require enormous amounts of input data, which is impractical, slow to generate, and often needs to be laboriously categorized by humans. There are entirely new jobs that require people to label, categorize, and sift large bodies of data ready for supervised machine learning. This can make machine learning undemocratic. If you’re Google, you can make thousands of unwitting volunteers label your images for you with CAPTCHA. If you’re IBM, you can hire people to manually label that data. If you’re an individual or startup trying something new, however, you will struggle to access the vast troves of labeled data available to the bigger players.

This is why new systems that can potentially train themselves over time or that allow robots to deal with situations they’ve never seen before without mountains of labelled data are a holy grail in artificial intelligence. The work done by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is part of a new wave of “self-supervised” machine learning systems—little of the data used was labeled by humans.

The robot first inspects the new object from multiple angles, building up a 3D picture of the object with its own coordinate system. This then allows the robotic arm to identify a particular feature on the object—such as a handle, or the tongue of a shoe—from various different angles, based on its relative distance to other grid points.

This is the real innovation: the new means of representing objects to grasp as mapped-out 3D objects, with grid points and subsections of their own. Rather than using a computer vision algorithm to identify a door handle, and then activating a door handle grasping subroutine, the DON system treats all objects by making these spatial maps before classifying or manipulating them, enabling it to deal with a greater range of objects than in other approaches.
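
For the technically curious, here is a minimal sketch of how that kind of dense-descriptor lookup might work, written in Python with NumPy. It is illustrative only: the descriptor image would come from the trained network, and the descriptor dimension, image size, and random placeholder arrays are assumptions, not details from the MIT paper.

```python
import numpy as np

# Illustrative sketch of dense-descriptor matching (all sizes assumed).
# descriptor_image[v, u] holds the D-dimensional descriptor that the
# network assigns to pixel (u, v); here it is random placeholder data.
D = 8                                       # assumed descriptor dimension
H, W = 480, 640                             # assumed image size
descriptor_image = np.random.rand(H, W, D)  # stand-in for network output
reference_descriptor = np.random.rand(D)    # saved earlier from, say, a mug handle

def find_best_match(desc_img: np.ndarray, ref: np.ndarray) -> tuple:
    """Return the (u, v) pixel whose descriptor is closest to the reference."""
    dists = np.linalg.norm(desc_img - ref, axis=-1)  # per-pixel L2 distance
    v, u = np.unravel_index(np.argmin(dists), dists.shape)
    return int(u), int(v)

u, v = find_best_match(descriptor_image, reference_descriptor)
print(f"Grasp target at pixel ({u}, {v})")
```

Because the same reference descriptor can be matched in any new view, the robot can find “the handle” again even when the mug has moved or rotated—which is exactly the ability described in the quote below.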

“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said PhD student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow student Pete Florence, alongside MIT professor Russ Tedrake. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”

Class-specific descriptors, which can be applied to the object features, can allow the robot arm to identify a mug, find the handle, and pick the mug up appropriately. Object-specific descriptors allow the robot arm to select a particular mug from a group of similar items. I’m already dreaming of a robot butler reliably picking my favorite mug when it serves me coffee in the morning.

Google’s robot arm-y was an attempt to develop a general grasping algorithm: one that could identify, categorize, and appropriately grip as many items as possible. This requires a great deal of training time and data, which is why Google parallelized their project by having 14 robot arms feed data into a single neural network brain: even then, the algorithm may fail with highly specific tasks. Specialist grasping algorithms might require less training if they’re limited to specific objects, but then your software is useless for general tasks.

As the roboticists noted, their system, with its ability to identify parts of an object rather than just a single object, is better suited to specific tasks, such as “grasp the racquet by the handle,” than Amazon Robotics Challenge robots, which identify whole objects by segmenting an image.

This work is small-scale at present. It has been tested with a few classes of objects, including shoes, hats, and mugs. Yet the use of these dense object nets as a way for robots to represent and manipulate new objects may well be another step towards the ultimate goal of generalized automation: a robot capable of performing every task a person can. If that point is reached, the question that will remain is how to cope with being obsolete.

Image Credit: Tom Buehler/CSAIL


#432884 This Week’s Awesome Stories From ...

ROBOTICS
Boston Dynamics’ SpotMini Robot Dog Goes on Sale in 2019
Stephen Shankland | CNET
“The company has 10 SpotMini prototypes now and will work with manufacturing partners to build 100 this year, said company co-founder and President Marc Raibert at a TechCrunch robotics conference Friday. ‘That’s a prelude to getting into a higher rate of production’ in anticipation of sales next year, he said. Who’ll buy it? Probably not you.”


SPACE
Made In Space Wins NASA Contract for Next-Gen ‘Vulcan’ Manufacturing System
Mike Wall | Space.com
“’The Vulcan hybrid manufacturing system allows for flexible augmentation and creation of metallic components on demand with high precision,’ Mike Snyder, Made In Space chief engineer and principal investigator, said in a statement. …When Vulcan is ready to go, Made In Space aims to demonstrate the technology on the ISS, showing Vulcan’s potential usefulness for a variety of exploration missions.”

ARTIFICIAL INTELLIGENCE
Duplex Shows Google Failing at Ethical and Creative AI Design
Natasha Lomas | TechCrunch
“But while the home crowd cheered enthusiastically at how capable Google had seemingly made its prototype robot caller—with Pichai going on to sketch a grand vision of the AI saving people and businesses time—the episode is worryingly suggestive of a company that views ethics as an after-the-fact consideration. One it does not allow to trouble the trajectory of its engineering ingenuity.”

DESIGN
What Artists Can Teach Us About Making Technology More Human
Elizabeth Stinson | Wired
“For the last year, Park, along with the artist Sougwen Chung and dancers Jason Oremus and Garrett Coleman of the dance collective Hammerstep, have been working out of Bell Labs as part of a residency called Experiments in Art and Technology. The year-long residency, a collaboration between Bell Labs and the New Museum’s incubator, New Inc, culminated in ‘Only Human,’ a recently-opened exhibition at Mana where the artists’ pieces will be on display through the end of May.”

GOVERNANCE
The White House Says a New AI Task Force Will Protect Workers and Keep America First
Will Knight | MIT Technology Review
“The meeting and the select committee signal that the administration takes the impact of artificial intelligence seriously. This has not always been apparent. In his campaign speeches, Trump suggested reviving industries that have already been overhauled by automation. The Treasury secretary, Steven Mnuchin, also previously said that the idea of robots and AI taking people’s jobs was ‘not even on my radar screen.’”

Image Credit: Tithi Luadthong / Shutterstock.com
