Tag Archives: receive

#437301 The Global Work Crisis: Automation, the ...

The alarm bell rings. You open your eyes, come to your senses, and slide from dream state to consciousness. You hit the snooze button, and eventually crawl out of bed to the start of yet another working day.

This daily narrative is experienced by billions of people all over the world. We work, we eat, we sleep, and we repeat. As our lives pass day by day, the beating drums of the weekly routine take over and years pass until we reach our goal of retirement.

A Crisis of Work
We repeat the routine so that we can pay our bills, set our kids up for success, and provide for our families. And after a while, we start to forget what we would do with our lives if we didn’t have to go back to work.

In the end, we look back at our careers and reflect on what we’ve achieved. It may have been the hundreds of human interactions we’ve had; the thousands of emails read and replied to; the millions of minutes of physical labor—all to keep the global economy ticking along.

According to Gallup’s World Poll, only 15 percent of people worldwide are actually engaged with their jobs. The current state of “work” is not working for most people. In fact, it seems we as a species are trapped by a global work crisis, which condemns people to cast away their time just to get by in their day-to-day lives.

Technologies like artificial intelligence and automation may help relieve the work burdens of millions of people—but to benefit from their impact, we need to start changing our social structures and the way we think about work now.

The Specter of Automation
Automation has been ongoing since the Industrial Revolution. In recent decades it has taken on a more elegant guise, first with physical robots in production plants, and more recently with software automation entering most offices.

The driving goal behind much of this automation has always been productivity and hence, profits: technology that can act as a multiplier on what a single human can achieve in a day is of huge value to any company. Powered by this strong financial incentive, the quest for automation is growing ever more pervasive.

But if automation accelerates or even continues at its current pace and there aren’t strong social safety nets in place to catch the people who are negatively impacted (such as by losing their jobs), there could be a host of knock-on effects, including more concentrated wealth among a shrinking elite, more strain on government social support, an increase in depression and drug dependence, and even violent social unrest.

It seems as though we are rushing headlong into a major crisis, driven by the engine of accelerating automation. But what if instead of automation challenging our fragile status quo, we view it as the solution that can free us from the shackles of the Work Crisis?

The Way Out
In order to undertake this paradigm shift, we need to consider what society could potentially look like, as well as the problems associated with making this change. In the context of this crisis, our primary aim should be a system where people are not obligated to work to generate the means to survive. This removal of work should not threaten access to food, water, shelter, education, healthcare, energy, or human value. In our current system, work is the gatekeeper to these essentials: one can only access them (and even then, often in a limited form) if one has a “job” that affords them.

Changing this system is thus a monumental task. This comes with two primary challenges: providing people without jobs with financial security, and ensuring they maintain a sense of their human value and worth. There are several measures that could be implemented to help meet these challenges, each with important steps for society to consider.

Universal basic income (UBI)

UBI is rapidly gaining support. It would, in effect, make people shareholders in the fruits of automation, distributing those gains more broadly.

UBI trials have been conducted in various countries around the world, including Finland, Kenya, and Spain. The findings have generally shown positive effects on participants’ health and well-being, and no evidence that UBI disincentivizes work, a common concern among the idea’s critics. The most recent popular voice for UBI has been that of former US presidential candidate Andrew Yang, who now runs a non-profit called Humanity Forward.

UBI could also remove wasteful bureaucracy from administering welfare payments (since everyone receives the same amount, there’s no need to screen for false claims), promote the pursuit of projects aligned with people’s skill sets and passions, and help quantify the value of tasks not recognized by economic measures like Gross Domestic Product (GDP), such as looking after children and the elderly at home.

How a UBI could be initiated with political will and social backing, and how governments would pay for it, has been hotly debated by economists and UBI enthusiasts. Variables like how much the UBI payments should be, whether to implement taxes such as Yang’s proposed value-added tax (VAT), whether to replace existing welfare payments, the impact on inflation, and the impact on “jobs” from people who would otherwise look for work all require additional discussion. However, some have predicted that UBI is inevitable as a result of automation.

Universal healthcare

Another major component of any society is the healthcare of its citizens. A move away from work would further require the implementation of a universal healthcare system to decouple healthcare from jobs. Currently in the US, and indeed many other economies, healthcare is tied to employment.

Universal healthcare such as Australia’s Medicare lends weight to the adage “prevention is better than cure”: the US spends far more on healthcare per capita than Australia does. Decoupling healthcare from work would itself be an advance in how healthcare is thought about. A healthier population brings further benefits, including less time and money spent on “sick-care,” and healthy people are more likely, and more able, to achieve their full potential.

Reshape the economy away from work-based value

One of the greatest challenges in a departure from work is for people to find value elsewhere in life. Many people view their identities as being inextricably tied to their jobs, and life without a job is therefore a threat to one’s sense of existence. This presents a shift that must be made at both a societal and personal level.

A person can only seek alternate value in life when afforded the time to do so. To this end, we need to start reducing “work-for-a-living” hours towards zero, which is a trend we are already seeing in Europe. This should not come at the cost of reducing wages pro rata, but rather could be complemented by UBI or additional schemes where people receive dividends for work done by automation. This transition makes even more sense when coupled with the idea of deviating from using GDP as a measure of societal growth, and instead adopting a well-being index based on universal human values like health, community, happiness, and peace.

The crux of this issue is transitioning away from the view that work gives life meaning and that life is about using work to survive, towards living a life that is itself fulfilling and meaningful. This speaks directly to Maslow’s hierarchy of needs, where work largely addresses physiological and safety needs such as shelter, food, and financial well-being. More people should have a chance to grow beyond these basic needs and engage in self-actualization and transcendence.

The question is largely around what would provide people with a sense of value, and the answers would differ as much as people do: self-mastery, building relationships and contributing to community growth, fostering creativity, and even engaging in the enjoyable aspects of existing jobs could all come into play.

Universal education

With a move towards a society that promotes the values of living a good life, the education system would have to evolve as well. Researchers have long argued for a more nimble education system, but universities and even most online courses currently exist for the dominant purpose of ensuring people are adequately skilled to contribute to the economy. These “job factories” only exacerbate the Work Crisis. In fact, the response often given by educational institutions to the challenge posed by automation is to find new ways of upskilling students, such as ensuring they are all able to code. As alluded to earlier, this is a limited and unimaginative solution to the problem we are facing.

Instead, education should be centered on helping people acknowledge the current crisis of work and automation, teaching them how to derive value that is decoupled from work, and enabling them to embrace progress as we transition to the new economy.

Disrupting the Status Quo
While we seldom stop to think about it, much of the suffering faced by humanity is brought about by the systemic foe that is the Work Crisis. The way we think about work has brought society far and enabled tremendous developments, but at the same time it has failed many people. Now the status quo is threatened by those very developments as we progress to an era where machines are likely to take over many job functions.

This impending paradigm shift could be a threat to the stability of our fragile system, but only if it is not fully anticipated. If we prepare for it appropriately, it could instead be the key not just to our survival, but to a better future for all.

Image Credit: mostafa meraji from Pixabay

#437230 How Drones and Aerial Vehicles Could ...

Drones, personal flying vehicles, and air taxis may be part of our everyday life in the very near future. Drones and air taxis will create new means of mobility and new transport routes, and drones will be used for surveillance, for delivery, and in the construction sector as it moves towards automation.

The introduction of these aerial craft into cities will require the built environment to change dramatically. Drones and other new aerial vehicles will require landing pads, charging points, and drone ports. They could usher in new styles of building, and lead to more sustainable design.

My research explores the impact of aerial vehicles on urban design, mapping out possible future trajectories.

An Aerial Age
Already, civilian drones vary widely in size and complexity. They can carry a range of items, from high-resolution cameras, delivery mechanisms, and thermal imaging technology to speakers and scanners. In the public sector, drones are used in disaster response and by the fire service to tackle fires that could endanger firefighters.

During the coronavirus pandemic, drones have been used by the police to enforce lockdown. Drones normally used in agriculture have sprayed disinfectant over cities. In the UK, drone delivery trials are taking place to carry medical items to the Isle of Wight.

Alongside drones, our future cities could also be populated by vertical takeoff and landing craft (VTOL), used as private vehicles and air taxis.

These vehicles are familiar to sci-fi fans. The late Syd Mead’s illustrations of the Spinner VTOL craft in the film Blade Runner captured the popular imagination, and the screens for the Spinners in Blade Runner 2049 created by Territory Studio provided a careful design fiction of the experience of piloting these types of vehicle.

Now, though, these flying vehicles are a reality. A number of companies are developing eVTOL craft with electric multi-rotor jets, and a whole new motorsport is being established around them.

These aircraft have the potential to change our cities. However, they need to be tested extensively in urban airspace. A study conducted by Airbus found that public concerns about VTOL use focused on the safety of those on the ground and noise emissions.

New Cities
The widespread adoption of drones and VTOL will lead to new architecture and infrastructure. Existing buildings will require adaptations: landing pads, solar photovoltaic panels for energy efficiency, charging points for delivery drones, and landscaping to mitigate noise emissions.

A number of companies are already trialing drone delivery services. Existing buildings will need to be adapted to accommodate these new networks, and new design principles will have to be implemented in future ones.

The architect Saúl Ajuria Fernández has developed a design for a delivery drone port hub. This drone port acts like a beehive where drones recharge and collect parcels for distribution. Architectural firm Humphreys & Partners’ Pier 2, a design for a modular apartment building of the future, includes a cantilevered drone port for delivery services.

The Norman Foster Foundation has designed a drone port for delivery of medical supplies and other items for rural communities in Rwanda. The structure is also intended to function as a space for the public to congregate, as well as to receive training in robotics.

Drones may also help the urban environment become more sustainable. Researchers at the University of Stuttgart have developed a re-configurable architectural roof canopy system deployed by drones. By adjusting to follow the direction of the sun, the canopy provides shade and reduces reliance on ventilation systems.

Demand for air taxis and personal flying vehicles will develop where other transport systems fail. The Airbus research found that of the cities surveyed, the highest demand for VTOLs was in Los Angeles and Mexico City, urban areas famous for traffic pollution. To accommodate these aerial vehicles, urban space will need to transform to include landing pads, airport-like infrastructure, and recharge points.

Furthermore, this whole logistics system in lower airspace (below 500 feet), or what I term “hover space,” will need an urban traffic management system. One great example of how this hover space could work can be seen in a speculative project from design studio Superflux in their Drone Aviary project. A number of drones with different functions move around an urban area in a network, following different paths at varying heights.

We are at a critical period in urban history, faced by climatic breakdown and pandemic. Drones and aerial vehicles can be part of a profound rethink of the urban environment.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: NASA

#436184 Why People Demanded Privacy to Confide ...

This is part four of a six-part series on the history of natural language processing.

Between 1964 and 1966, Joseph Weizenbaum, a German American computer scientist at MIT’s artificial intelligence lab, developed the first-ever chatbot [PDF].

While there were already some rudimentary digital language generators in existence—programs that could spit out somewhat coherent lines of text—Weizenbaum’s program was the first designed explicitly for interactions with humans. The user could type in some statement or set of statements in their normal language, press enter, and receive a response from the machine. As Weizenbaum explained, his program made “certain kinds of natural-language conversation between man and computer possible.”

He named the program Eliza after Eliza Doolittle, the working-class hero of George Bernard Shaw’s Pygmalion who learns how to talk with an upper-class accent. The new Eliza was written for the 36-bit IBM 7094, an early transistorized mainframe computer, in a programming language that Weizenbaum developed called MAD-SLIP.

Because computer time was a valuable resource, Eliza could only be run via a time-sharing system; the user interacted with the program remotely via an electric typewriter and printer. When the user typed in a sentence and pressed enter, a message was sent to the mainframe computer. Eliza scanned the message for the presence of a keyword and used it in a new sentence to form a response that was sent back, printed out, and read by the user.

To encourage ongoing dialogue, Weizenbaum designed Eliza to simulate the conversational style of a Rogerian psychotherapist. The program would take something the user said and reflect it back with a question, as can be seen in the example below. (Notice how the program takes keywords like “boyfriend” and “depressed” and re-uses them.)

Human: Men are all alike.

Computer: IN WHAT WAY?

Human: They're always bugging us about something or other.

Computer: CAN YOU THINK OF A SPECIFIC EXAMPLE?

Human: Well, my boyfriend made me come here.

Computer: YOUR BOYFRIEND MADE YOU COME HERE

Human: He says I'm depressed much of the time.

Computer: I AM SORRY TO HEAR YOU ARE DEPRESSED

Human: It's true. I am unhappy.

Weizenbaum chose this mode of dialogue for Eliza because it gave the impression that the computer understood what was being said without having to offer anything new to the conversation. It created the illusion of comprehension and engagement in a mere 200 lines of code.
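
The mechanism is simple enough to sketch in a few lines of modern Python. The toy program below illustrates the keyword-and-reflection idea described above; it is not a reconstruction of the original MAD-SLIP script, and the keywords, templates, and fallback lines are invented for the example.

```python
import random

# Toy illustration of Eliza-style keyword spotting and reflection.
# The script below is invented for this example; the real Eliza used
# a much richer set of rules.

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

KEYWORD_RULES = {
    "boyfriend": "YOUR BOYFRIEND {rest}",
    "depressed": "I AM SORRY TO HEAR YOU ARE DEPRESSED",
    "alike": "IN WHAT WAY?",
    "always": "CAN YOU THINK OF A SPECIFIC EXAMPLE?",
}

FALLBACKS = ["PLEASE GO ON.", "TELL ME MORE.", "WHY DO YOU SAY THAT?"]

def reflect(text: str) -> str:
    """Swap first-person words for second-person ones ("me" -> "you")."""
    return " ".join(REFLECTIONS.get(word, word) for word in text.split())

def respond(sentence: str) -> str:
    """Scan the input for a keyword and build a reply around it."""
    words = sentence.lower().rstrip(".!?").split()
    for i, word in enumerate(words):
        if word in KEYWORD_RULES:
            rest = reflect(" ".join(words[i + 1:]))
            return KEYWORD_RULES[word].format(rest=rest).upper()
    return random.choice(FALLBACKS)

print(respond("Well, my boyfriend made me come here."))
# prints: YOUR BOYFRIEND MADE YOU COME HERE
```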

To test Eliza’s capacity to engage an interlocutor, Weizenbaum invited students and colleagues into his office and let them chat with the machine while he looked on. He noticed, with some concern, that during their brief interactions with Eliza, many users began forming emotional attachments to the algorithm. They would open up to the machine and confess problems they were facing in their lives and relationships.

Even more surprising was that this sense of intimacy persisted even after Weizenbaum described how the machine worked and explained that it didn’t really understand anything that was being said. Weizenbaum was most troubled when his secretary, who had watched him build the program from scratch over many months, insisted that he leave the room so she could talk to Eliza in private.

For Weizenbaum, this experiment with Eliza made him question an idea that Alan Turing had proposed in 1950 about machine intelligence. In his paper, entitled “Computing Machinery and Intelligence,” Turing suggested that if a computer could conduct a convincingly human conversation in text, one could assume it was intelligent—an idea that became the basis of the famous Turing Test.

But Eliza demonstrated that convincing communication between a human and a machine could take place even if comprehension only flowed from one side: The simulation of intelligence, rather than intelligence itself, was enough to fool people. Weizenbaum called this the Eliza effect, and believed it was a type of “delusional thinking” that humanity would collectively suffer from in the digital age. This insight was a profound shock for Weizenbaum, and one that came to define his intellectual trajectory over the next decade.

In 1976, he published Computer Power and Human Reason: From Judgment to Calculation [PDF], which offered a long meditation on why people are willing to believe that a simple machine might be able to understand their complex human emotions.

In this book, he argues that the Eliza effect signifies a broader pathology afflicting “modern man.” In a world conquered by science, technology, and capitalism, people had grown accustomed to viewing themselves as isolated cogs in a large and uncaring machine. In such a diminished social world, Weizenbaum reasoned, people had grown so desperate for connection that they put aside their reason and judgment in order to believe that a program could care about their problems.

Weizenbaum spent the rest of his life developing this humanistic critique of artificial intelligence and digital technology. His mission was to remind people that their machines were not as smart as they were often said to be. And that even though it sometimes appeared as though they could talk, they were never really listening.

This is the fourth installment of a six-part series on the history of natural language processing. Last week’s post described Andrey Markov and Claude Shannon’s painstaking efforts to create statistical models of language for text generation. Come back next Monday for part five, “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Conversation.”

You can also check out our prior series on the untold history of AI.

#436149 Blue Frog Robotics Answers (Some of) Our ...

In September of 2015, Buddy the social home robot closed its Indiegogo crowdfunding campaign more than 600 percent over its funding goal. A thousand people pledged for a robot originally scheduled to be delivered in December of 2016. But nearly three years later, the future of Buddy is still unclear. Last May, Blue Frog Robotics asked for forgiveness from its backers and announced the launch of an “equity crowdfunding campaign” to try to raise the additional funding necessary to deliver the robot in April of 2020.

By the time the crowdfunding campaign launched in August, the delivery date had slipped again, to September 2020, even as Blue Frog attempted to draw investors by estimating that sales of Buddy would “increase from 2000 robots in 2020 to 20,000 in 2023.” Blue Frog’s most recent communication with backers, in September, mentions a new CTO and a North American office, but does little to reassure backers of Buddy that they’ll ever be receiving their robot.

Backers of the robot are understandably concerned about the future of Buddy, so we sent a series of questions to the founder and CEO of Blue Frog Robotics, Rodolphe Hasselvander.

We’ve edited this interview slightly for clarity, but we should also note that Hasselvander was unable to provide answers to every question. In particular, we asked for some basic information about Blue Frog’s near-term financial plans, on which the entire future of Buddy seems to depend. We’ve left those questions in the interview anyway, along with Hasselvander’s response.

1. At this point, how much additional funding is necessary to deliver Buddy to backers?
2. Assuming funding is successful, when can backers expect to receive Buddy?
3. What happens if the fundraising goal is not met?
4. You estimate that sales of Buddy will increase 10x over three years. What is this estimate based on?

Rodolphe Hasselvander: Regarding questions 1-4: unfortunately, as we are fundraising under Regulation D, we do not comment on prospects, customer data, sales forecasts, or figures. Please refer to our press release here for information about the fundraising.

5. Do you feel that you are currently being transparent enough about this process to satisfy backers?
6. Buddy’s launch date has moved from April 2020 to September 2020 over the last four months. Why should backers remain confident about Buddy’s schedule?

Since the last newsletter, we haven’t changed our communication: the backers will be the first to receive their Buddy, and we plan an official launch in September 2020.

7. What is the goal of My Buddy World?

At Blue Frog, we think that matching a great product with a big market can only happen through continual experimentation, iteration, and incorporation of customer feedback. That’s why we created the forum My Buddy World. It has been designed for our Buddy Community to join us, discuss the world’s first emotional robot, and create with us. The objective is to deepen our conversation with Buddy’s fans and users, stay agile in testing our hypotheses, and validate our product-market fit. We trust the value of collaboration. Behind Buddy, there is a team of roboticists, engineers, and programmers that are eager to know more about our consumers’ needs and are excited to work with them to create the perfect human/robot experience.

8. How is the current version of Buddy different from the 2015 version that backers pledged for during the successful crowdfunding campaign, in both hardware and software?

We have completely revised some parts of Buddy, as well as replaced and/or added more accurate and reliable components, to ensure we fully satisfy our customers’ requirements for a mature and high-quality robot from day one. We sourced more innovative components to make sure that Buddy has the most up-to-date technologies, such as four microphones, a high-definition thermal matrix, a 3D camera, an 8-megapixel RGB camera, time-of-flight sensors, and touch sensors.
If you want more info, we just posted an article about what Buddy is here.

9. Will the version of Buddy that ships to backers in 2020 do everything that was shown in the original crowdfunding video?

Concerning the capabilities of Buddy shown in the video published on YouTube, I confirm that Buddy will be able to do everything you can see: patrolling autonomously and securing your home, telepresence, mathematics applications, interactive stories for children, IoT/smart home management, face recognition, alarm clock, reminders, message/photo sharing, music, hands-free calls, people following, games like hide and seek (and more). In addition, everyone will be able to create their own apps thanks to the “BuddyLab” application.

10. What makes you confident that Buddy will be successful when Jibo, Kuri, and other social robots have not?

Consumer robotics is a new market. Some people think it is a tough one. But we, at Blue Frog Robotics, believe it is a path of learning, understanding, and finding new ways to serve consumers. Here are the five key factors that will make Buddy successful.

1) A market-fit robot

Blue Frog Robotics is a consumer-centric company. We know that a successful business model and a compelling market fit for Buddy must come from solving consumers’ frustrations and problems in a way that’s new and exciting. We started from there.

By leveraging existing research and syndicated consumer data sets to understand our customers’ needs and aspirations, we learned that creating a robot is not about the best tech innovation and features, but always about how well technology becomes a service to one’s basic human needs and assets: convenience, connection, security, fun, self-improvement, and time. To answer these consumer needs and wants, we designed an all-in-one robot with four vital capabilities: intelligence, emotionality, mobility, and customization.

With his multi-purpose brain, he addresses a broad range of needs in modern-day life, from securing homes to carrying out his owners’ daily activities, from helping people with disabilities to educating children, from entertaining to just becoming a robot friend.

Buddy is a disruptive, innovative robot that is about to transform the way we live, learn, utilize information, play, and even care about our health.

2) Endless possibilities

One of the major advantages of Buddy is his adaptability. Beyond being adorable, playful, and talkative, and accompanying anyone in their daily life at home whether they are comfortable with technology or not, he offers applications via his platform to engage his owners in a wide range of activities. From fitness to cooking, from health monitoring to education, from games to meditation, the combination of intelligence, sensors, mobility, and a multi-touch panel opens endless possibilities for consumers and organizations to adapt their Buddy to their own needs.

3) An affordable price

Buddy will be the first robot combining smart, social, and mobile capabilities and a developed platform with a personality to enter the U.S. market at an affordable price.

Our competitors are social or assistant robots, but rarely both. Competitors differentiate themselves by features: mobile or non-mobile; by shape: humanoid or not; by skills: social versus smart; by targeting a specific domain like entertainment, retail assistance, eldercare, or education for children; and by price. Regarding our seven competitors: Moorebot, Elli-Q, and Olly are not mobile; Lynx and Nao are in the toy category; Pepper is above $10k, targeting the B2B market; and finally, Temi can’t be considered an emotional robot.

Buddy remains highly differentiated as an all-in-one, best-in-class experience, covering his owners’ needs for social interaction and assistance at each stage of their lives, at an affordable price.

The price range of Buddy will be between US $1700 and $2000.

4) A winning business model

Buddy’s great business model combines hardware, software, and services, and provides game-changing convenience for consumers, organizations, and developers.

Buddy offers a multi-sided value proposition focused on three vertical markets: direct consumers, corporations (healthcare, education, hospitality), and developers. The model creates engagement and sustained usage and produces stable and diverse cash flow.

5) A Passion for people and technology

From day one, we have always believed in the power of our dream: to bring the services and the fun of an emotional robot into every house, every hospital, and every care home. Each day, we refuse to think that we are stuck or limited; we work hard to make Buddy a reality that will help people all over the world and make them smile.

While we certainly appreciate Hasselvander’s consistent optimism and obvious enthusiasm, we’re obligated to point out that some of our most important questions were not directly answered. We haven’t learned anything that makes us all that much more confident that Blue Frog will be able to successfully deliver Buddy this time. Hasselvander also didn’t address our specific question about whether he feels like Blue Frog’s communication strategy with backers has been adequate, which is particularly relevant considering that over the four months between the last two newsletters, Buddy’s launch date slipped by six months.

At this point, all we can do is hope that the strategy Blue Frog has chosen will be successful. We’ll let you know as soon as we learn more.

[ Buddy ]

#435726 This Is the Most Powerful Robot Arm Ever ...

Last month, engineers at NASA’s Jet Propulsion Laboratory wrapped up the installation of the Mars 2020 rover’s 2.1-meter-long robot arm. This is the most powerful arm ever installed on a Mars rover. Even though the Mars 2020 rover shares much of its design with Curiosity, the new arm was redesigned to be able to do much more complex science, drilling into rocks to collect samples that can be stored for later recovery.

JPL is well known for developing robots that do amazing work in incredibly distant and hostile environments. The Opportunity Mars rover, to name just one example, had a 90-day planned mission but remained operational for 5,498 days in a robot unfriendly place full of dust and wild temperature swings where even the most basic maintenance or repair is utterly impossible. (Its twin rover, Spirit, operated for 2,269 days.)

To learn more about the process behind designing robotic systems that are capable of feats like these, we talked with Matt Robinson, one of the engineers who designed the Mars 2020 rover’s new robot arm.

The Mars 2020 rover (which will be officially named through a public contest which opens this fall) is scheduled to launch in July of 2020, landing in Jezero Crater on February 18, 2021. The overall design is similar to the Mars Science Laboratory (MSL) rover, named Curiosity, which has been exploring Gale Crater on Mars since August 2012, except Mars 2020 will be a bit bigger and capable of doing even more amazing science. It will outweigh Curiosity by about 150 kilograms, but it’s otherwise about the same size, and uses the same type of radioisotope thermoelectric generator for power. Upgraded aluminum wheels will be more durable than Curiosity’s wheels, which have suffered significant wear. Mars 2020 will land on Mars in the same way that Curiosity did, with a mildly insane descent to the surface from a rocket-powered hovering “skycrane.”

Photo: NASA/JPL-Caltech

Last month, engineers at NASA's Jet Propulsion Laboratory installed the main robotic arm on the Mars 2020 rover. Measuring 2.1 meters long, the arm will allow the rover to work as a human geologist would: by holding and using science tools with its turret.

Mars 2020 really steps it up when it comes to science. The most interesting new capability (besides serving as the base station for a highly experimental autonomous helicopter) is that the rover will be able to take surface samples of rock and soil, put them into tubes, seal the tubes up, and then cache the tubes on the surface for later retrieval (and potentially return to Earth for analysis). Collecting the samples is the job of a drill on the end of the robot arm that can be equipped with a variety of interchangeable bits, but the arm holds a number of other instruments as well. A “turret” can swap between the drill, a mineral identification sensor suite called SHERLOC, and an X-ray spectrometer and camera called PIXL. Fundamentally, most of Mars 2020’s science work is going to depend on the arm and the hardware that it carries, both in terms of close-up surface investigations and collecting samples for caching.

Matt Robinson is the Deputy Delivery Manager for the Sample Caching System on the Mars 2020 rover, which covers the robotic arm itself, the drill at the end of the arm, and the sample caching system within the body of the rover that manages the samples. Robinson has been at JPL since 2001, and he’s worked on the Mars Phoenix Lander mission as the robotic arm flight software developer and robotic arm test and operations engineer, as well as on Curiosity as the robotic arm test and operations lead engineer.

We spoke with Robinson about how the Mars 2020 arm was designed, and what it’s like to be building robots for exploring other planets.

IEEE Spectrum: How’d you end up working on robots at JPL?

Matt Robinson: When I was a grad student, my focus was on vision-based robotics research, so the kinds of things they do at JPL, or that we do at JPL now, were right within my wheelhouse. One of my advisors in grad school had a former student who was out here at JPL, so that’s how I made the contact. But I was very excited to come to JPL—as a young grad student working in robotics, space robotics was where it’s at.

For a robotics engineer, working in space is kind of the gold standard. You’re working in a challenging environment and you have to be prepared for any type of eventuality that may occur. And when you send your robot out to space, there’s no getting it back.

Once the rover arrives on Mars and you receive pictures back from it operating, there’s no greater feeling. You’ve built something that is now working 200+ million miles away. It’s an awesome experience! I have to pinch myself sometimes with the job I do. Working at JPL on space robotics is the holy grail for a roboticist.

What’s different about designing an arm for a rover that will operate on Mars?

We spent over five years designing, manufacturing, assembling, and testing the arm. Scientists have defined the high-level goals for what the mission has to do—acquire core samples and process them for return, carry science instruments on the arm to help determine what rocks to sample, and so on. We, as engineers, define the next level of requirements that support those goals.

When you’re building a robotic arm for another planet, you want to design something that is robust to the environment as well as robust from a fault-protection standpoint. On Mars, we’re talking about an environment where the temperature can vary 100 degrees Celsius over the course of the day, so it’s very challenging thermally. With force sensing, for instance, that’s a major problem. Force sensors aren’t typically designed to operate or even survive in the temperature ranges that we’re talking about. So a lot of effort has to go into force sensor design and testing.

And then there’s a do-no-harm aspect—you’re sending this piece of hardware 200 million miles away, and you can’t get it back, so you want to make sure your hardware and software are robust and cannot do any harm to the system. It’s definitely a change in mindset from a terrestrial robot, where if you make a mistake, you can repair it.

How do you decide how much redundancy is enough?

That’s always a big question. It comes down to a couple of things, typically: mass and volume. You have a certain amount of mass that’s allocated to the robotic arm and we have a volume that it has to fit within, so those are often the drivers of the amount of redundancy that you can fit. We also have a lot of experience with sending arms to other planets, and at the beginning of projects, we establish a number of requirements that the design has to meet, and that’s where the redundancy is captured.

How much is the design of the arm driven by this need for redundancy, as opposed to trying to pack in all of the instrumentation that you want to have on there to do as much science as possible?

The requirements were driven by a couple of things. We knew roughly how big the instruments on the end of the arm were going to be, so the arm design is partially driven by that, because as the instruments get bigger and heavier, the arm has to get bigger and stronger. We have our coring drill at the end of the arm, and coring requires a certain level of force, so the arm has to be strong enough to do that. Those all became requirements that drove the design of the arm. On top of that, there was also that this arm also has to operate within the Martian environment, so you have things like the temperature changes and thermal expansion—you have to design for that as well. It’s a combination of both, really.

You were a test engineer for the arm used on the MSL rover. What did you learn from Spirit and Opportunity that informed the design of the arm on Curiosity?

Spirit and Opportunity did not have any force sensing on the robotic arm. We had contact sensors that were good enough. Spirit and Opportunity’s arms were used to place instruments; that’s all they had to do, primarily. When you’re talking about actually acquiring samples, it’s not a matter of just placing the tool; you also have to apply forces to the environment. And once you start doing that, you really need a force sensor to protect you, and also to determine how much load to apply. So that was a big theme, a big difference between MSL and Spirit and Opportunity.

The size grew a lot too. If you look at Spirit and Opportunity, they’re the size of a riding lawnmower. Curiosity and the Mars 2020 rovers are the size of a small car. The Spirit and Opportunity arm was under a meter long, and the 2020 arm is twice that, and it has to apply forces that are much higher than the Spirit and Opportunity arm. From Curiosity to 2020, the payload of the arm grew by 50 percent, but the mass of the arm did not grow a whole lot, because our mass budget was kind of tight. We had to design an arm that was stronger, that had more capability, without adding more mass. That was a big challenge. We were fairly efficient on Curiosity, but on 2020, we sharpened the pencil even more.

Photo: NASA/JPL-Caltech

Three generations of Mars rovers developed at NASA’s Jet Propulsion Laboratory. Front and center: Sojourner rover, which landed on Mars in 1997 as part of the Mars Pathfinder Project. Left: Mars Exploration Rover Project rover (Spirit and Opportunity), which landed on Mars in 2004. Right: Mars Science Laboratory rover (Curiosity), which landed on Mars in August 2012.

MSL used its arm to drill into rocks like Mars 2020 will—how has the experience of operating MSL on Mars changed your thinking on how to make that work?

On MSL, the force sensor was used primarily for fault protection, just to protect the arm from being overloaded. [When drilling] we used a stiffness model of the arm to apply the force. The force sensor was only used in case you overloaded, and that’s very different from doing active force control, where you’re actually using the force sensor in a control loop.

On Mars 2020, we’re taking it to the next step, using the force sensor to actually actively control the level of force, both for pushing on the ground and for doing bit exchange. That’s a key point because fault protection to prevent damage usually has larger error bars. When you’re trying to actually push on the environment to apply force, and you’re doing active force control, the force sensor has to be significantly more accurate.

So a big thing that we learned on MSL—it was the first time we’d actually flown a force sensor, and we learned a lot about how to design and test force sensors to be used on the surface of Mars.
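
To make that distinction concrete, here is a minimal sketch, in Python, of the two ways a force sensor can be used. The gains, thresholds, and function names are invented for illustration; this is not JPL flight software.

```python
TARGET_FORCE_N = 120.0  # desired contact force (invented value)
FORCE_LIMIT_N = 400.0   # fault-protection threshold (invented value)
KP = 0.0005             # proportional gain, meters per newton (invented)

def fault_protection_step(measured_force_n: float) -> None:
    """MSL-style use: the sensor only guards against overload."""
    if abs(measured_force_n) > FORCE_LIMIT_N:
        raise RuntimeError("Force limit exceeded; halting arm motion")

def active_force_control_step(measured_force_n: float,
                              position_m: float) -> float:
    """Mars 2020-style use: the sensor closes the loop on applied force.

    Returns an updated position command that nudges the arm toward the
    target contact force. A real controller would add filtering, rate
    limits, and integral action; this is only the bare concept.
    """
    fault_protection_step(measured_force_n)      # protection still runs
    error_n = TARGET_FORCE_N - measured_force_n  # distance from target
    return position_m + KP * error_n             # push harder or back off
```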

How do you effectively test the Mars 2020 arm on Earth?

That’s a good question. The arm was designed to operate on either Earth or Mars. It’s strong enough to do both. We also have a stiffness model of the arm, which allows us to compensate for differences in gravity. For testing, we make two copies of the robotic arm. We have our copy that we’re going to fly to Mars, which is what we call our flight model, and we have our engineering model. They’re effectively duplicates of each other. The engineering arm stays on Earth, so even once we’ve sent the flight model to Mars, we can continue to test. And if something were to happen, if say a drill bit got stuck in the ground on Mars, we could try to replicate those conditions on Earth with our engineering model arm, and use that to test out different scenarios to overcome the problem.
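
As a rough illustration of how a stiffness model supports Earth-based testing, the sketch below uses a linear compliance model to predict how much extra the arm sags under Earth gravity compared with Mars gravity. The compliance matrix and effective tip mass are made-up placeholders, not the actual arm's parameters.

```python
import numpy as np

G_EARTH = 9.81  # m/s^2
G_MARS = 3.71   # m/s^2

def tip_deflection(compliance_m_per_n: np.ndarray,
                   load_n: np.ndarray) -> np.ndarray:
    """Linear stiffness model: deflection = compliance @ load."""
    return compliance_m_per_n @ load_n

# Placeholder 3x3 compliance and a vertical gravity load from an
# assumed 30 kg effective tip mass (both invented for the example).
compliance = np.diag([2e-6, 2e-6, 5e-6])            # meters per newton
load_earth = np.array([0.0, 0.0, -30.0 * G_EARTH])
load_mars = load_earth * (G_MARS / G_EARTH)

# The extra sag seen only on Earth; subtracting it from commanded
# positions makes Earth tests match predictions for Mars.
correction = tip_deflection(compliance, load_earth - load_mars)
```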

How much autonomy will the arm have?

We have different levels of autonomy. We have pretty high-level flight software; for instance, we have a command that just says “dock,” which moves the arm and does all the force control to dock the arm with the carousel. For surface interaction, we have stereo cameras on the rover, and those cameras allow us to generate 3D terrain models. Using those 3D terrain models, scientists can select a target on that surface, and then we can position the arm on the target.

Scientists like to select the particular sample targets, because there are very specific types of rocks they’re looking to sample. On 2020, we’re providing the next level of autonomy: the rover can drive up to an area and at least do the initial surveying of that area, so the scientists can select the specific target. The way that would happen is, if there’s an area off in the distance that the scientists find potentially interesting, the rover will autonomously drive up to it, deploy the arm, and take all the pictures so that we can generate those 3D terrain models; then the next day the scientists can pick the specific target they want. It’s really cool.
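
As a loose sketch of that targeting workflow, the code below turns a scientist's pick on a stereo-derived heightmap into a 3D point the arm can be sent to. The grid resolution, frame conventions, and standoff value are assumptions for the example, not the mission's actual pipeline.

```python
import numpy as np

GRID_RES_M = 0.01  # assumed 1 cm per heightmap cell

def pixel_to_arm_target(heightmap_m: np.ndarray, row: int, col: int,
                        standoff_m: float = 0.05) -> np.ndarray:
    """Map a (row, col) pick on the heightmap to an (x, y, z) target.

    The returned point hovers standoff_m above the surface so the arm
    can approach safely and close the gap under force control.
    """
    x = col * GRID_RES_M
    y = row * GRID_RES_M
    z = heightmap_m[row, col] + standoff_m
    return np.array([x, y, z])

# Example: a synthetic 200x200 terrain patch with a gentle slope.
terrain = np.fromfunction(lambda r, c: 0.001 * r, (200, 200))
target = pixel_to_arm_target(terrain, row=120, col=80)
```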

JPL is famous for making robots that operate for far longer than NASA necessarily plans for. What’s it like designing hardware and software for a system that will (hopefully) become part of that legacy?

The way that I look at it is, when you’re building an arm that’s going to go to another planet, you have to think about all the things that could go wrong and build something robust enough to survive all of that. It’s not that we’re trying to overdesign arms so that they’ll end up lasting much, much longer; it’s that, given all the things you can encounter in a fairly unknown environment, and the level of robustness you have to design in, it just so happens that we end up with designs that last a lot longer than they need to. Which is great, but we’re not held to that, although we’re very excited when we see them last that long. Without any calibration, without any maintenance, it’s amazing. They show their wear over time, but they still operate. It’s super exciting and very inspirational to see.

[ Mars 2020 Rover ]
