Tag Archives: Simulation

#436184 Why People Demanded Privacy to Confide ...

This is part four of a six-part series on the history of natural language processing.

Between 1964 and 1966, Joseph Weizenbaum, a German American computer scientist at MIT’s artificial intelligence lab, developed the first-ever chatbot [PDF].

While there were already some rudimentary digital language generators in existence—programs that could spit out somewhat coherent lines of text—Weizenbaum’s program was the first designed explicitly for interactions with humans. The user could type in some statement or set of statements in their normal language, press enter, and receive a response from the machine. As Weizenbaum explained, his program made “certain kinds of natural-language conversation between man and computer possible.”

He named the program Eliza after Eliza Doolittle, the working-class hero of George Bernard Shaw’s Pygmalion who learns how to talk with an upper-class accent. The new Eliza was written for the 36-bit IBM 7094, an early transistorized mainframe computer, in a programming language that Weizenbaum developed called MAD-SLIP.

Because computer time was a valuable resource, Eliza could only be run via a time-sharing system; the user interacted with the program remotely via an electric typewriter and printer. When the user typed in a sentence and pressed enter, a message was sent to the mainframe computer. Eliza scanned the message for the presence of a keyword and used it in a new sentence to form a response that was sent back, printed out, and read by the user.

To encourage ongoing dialogue, Weizenbaum designed Eliza to simulate the conversational style of a Rogerian psychotherapist. The program would take something the user said and reflect it back as a question, as can be seen in the example below. (Notice how the program picks up keywords like “boyfriend” and “depressed” and reuses them.)

Human: Men are all alike.

Computer: IN WHAT WAY?

Human: They're always bugging us about something or other.

Computer: CAN YOU THINK OF A SPECIFIC EXAMPLE?

Human: Well, my boyfriend made me come here.

Computer: YOUR BOYFRIEND MADE YOU COME HERE

Human: He says I'm depressed much of the time.

Computer: I AM SORRY TO HEAR YOU ARE DEPRESSED

Human: It's true. I am unhappy.

Weizenbaum chose this mode of dialogue for Eliza because it gave the impression that the computer understood what was being said without having to offer anything new to the conversation. It created the illusion of comprehension and engagement in a mere 200 lines of code.
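
For readers curious about what those 200 lines were actually doing, here is a minimal sketch in Python of the keyword-and-reflection trick described above. It is not Weizenbaum’s MAD-SLIP code; the rules and pronoun table are illustrative stand-ins for his much richer script, but the mechanism is the same: match a keyword, swap the pronouns, and hand the user’s own words back as a question.

import random
import re

# Pronoun swaps so the user's words can be mirrored back ("my" -> "your", etc.)
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

# Keyword rules, checked in order; the final rule is a catch-all.
RULES = [
    (r".*\bi am (.*)", ["I AM SORRY TO HEAR YOU ARE {0}", "WHY DO YOU THINK YOU ARE {0}?"]),
    (r".*\bmy (.*)", ["YOUR {0}", "TELL ME MORE ABOUT YOUR {0}"]),
    (r".*", ["IN WHAT WAY?", "CAN YOU THINK OF A SPECIFIC EXAMPLE?"]),
]

def reflect(fragment):
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(sentence):
    text = sentence.lower().strip(" .!?")
    for pattern, templates in RULES:
        match = re.match(pattern, text)
        if match:
            reflected = [reflect(group) for group in match.groups()]
            return random.choice(templates).format(*reflected).upper()

print(respond("My boyfriend made me come here."))   # e.g. YOUR BOYFRIEND MADE YOU COME HERE
print(respond("It's true. I am unhappy."))          # e.g. I AM SORRY TO HEAR YOU ARE UNHAPPY

Even this toy version comes close to reproducing the transcript above, which is the point: the illusion needs very little machinery.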

To test Eliza’s capacity to engage an interlocutor, Weizenbaum invited students and colleagues into his office and let them chat with the machine while he looked on. He noticed, with some concern, that during their brief interactions with Eliza, many users began forming emotional attachments to the algorithm. They would open up to the machine and confess problems they were facing in their lives and relationships.

Even more surprising was that this sense of intimacy persisted even after Weizenbaum described how the machine worked and explained that it didn’t really understand anything that was being said. Weizenbaum was most troubled when his secretary, who had watched him build the program from scratch over many months, insisted that he leave the room so she could talk to Eliza in private.

This experiment with Eliza led Weizenbaum to question an idea that Alan Turing had proposed in 1950 about machine intelligence. In his paper “Computing Machinery and Intelligence,” Turing suggested that if a computer could conduct a convincingly human conversation in text, it could be assumed to be intelligent—an idea that became the basis of the famous Turing Test.

But Eliza demonstrated that convincing communication between a human and a machine could take place even if comprehension only flowed from one side: The simulation of intelligence, rather than intelligence itself, was enough to fool people. Weizenbaum called this the Eliza effect, and believed it was a type of “delusional thinking” that humanity would collectively suffer from in the digital age. This insight was a profound shock for Weizenbaum, and one that came to define his intellectual trajectory over the next decade.

In 1976, he published Computer Power and Human Reason: From Judgment to Calculation [PDF], which offered a long meditation on why people are willing to believe that a simple machine might be able to understand their complex human emotions.

In this book, he argues that the Eliza effect signifies a broader pathology afflicting “modern man.” In a world conquered by science, technology, and capitalism, people had grown accustomed to viewing themselves as isolated cogs in a large and uncaring machine. In such a diminished social world, Weizenbaum reasoned, people had grown so desperate for connection that they put aside their reason and judgment in order to believe that a program could care about their problems.

Weizenbaum spent the rest of his life developing this humanistic critique of artificial intelligence and digital technology. His mission was to remind people that their machines were not as smart as they were often said to be. And that even though it sometimes appeared as though they could talk, they were never really listening.

This is the fourth installment of a six-part series on the history of natural language processing. Last week’s post described Andrey Markov and Claude Shannon’s painstaking efforts to create statistical models of language for text generation. Come back next Monday for part five, “In 2016, Microsoft’s Racist Chatbot Revealed the Dangers of Conversation.”

You can also check out our prior series on the untold history of AI.

#436180 Bipedal Robot Cassie Cal Learns to ...

There’s no particular reason why knowing how to juggle would be a useful skill for a robot. Despite this, robots are frequently taught how to juggle things. Blind robots can juggle, humanoid robots can juggle, and even drones can juggle. Why? Because juggling is hard, man! You have to think about a bunch of different things at once, and also do a bunch of different things at once, which this particular human at least finds to be overly stressful. While juggling may not stress robots out, it does require carefully coordinated sensing and computing and actuation, which means that it’s as good a task as any (and a more entertaining task than most) for testing the capabilities of your system.

UC Berkeley’s Cassie Cal robot, which consists of two legs and what could be called a torso if you were feeling charitable, has just learned to juggle by bouncing a ball on what would be her head if she had one of those. The idea is that if Cassie can juggle while balancing at the same time, she’ll be better able to do other things that require dynamic multitasking, too. And if that doesn’t work out, she’ll still be able to join the circus.

Cassie’s juggling is assisted by an external motion capture system that tracks the location of the ball, but otherwise everything is autonomous. Cassie is able to juggle the ball by leaning forwards and backwards, left and right, and moving up and down. She does this while maintaining her own balance, which is the whole point of this research—successfully executing two dynamic behaviors that may sometimes be at odds with one another. The end goal here is not to make a better juggling robot, but rather to explore dynamic multitasking, a skill that robots will need in order to be successful in human environments.

This work is from the Hybrid Robotics Lab at UC Berkeley, led by Koushil Sreenath, and is being done by Katherine Poggensee, Albert Li, Daniel Sotsaikich, Bike Zhang, and Prasanth Kotaru.

For a bit more detail, we spoke with Albert Li via email.

Image: UC Berkeley

UC Berkeley’s Cassie Cal getting ready to juggle.

IEEE Spectrum: What would be involved in getting Cassie to juggle without relying on motion capture?

Albert Li: Our motivation for starting off with motion capture was to first address the control challenge of juggling on a biped without worrying about implementing the perception. We actually do have a ball detector working on a camera, which would mean we wouldn’t have to rely on the motion capture system. However, we need to mount the camera in a way that provides the best upward field of view, and we also have to develop a reliable estimator. The estimator is particularly important because when the ball gets close enough to the camera, we actually can’t track the ball and have to assume our dynamic models describe its motion accurately enough until it bounces back up.
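
As a rough illustration of what that estimator has to do while the camera is blind, here is a sketch that simply propagates the last tracked ball state under gravity until tracking resumes. This is our simplification, not the Berkeley team’s code; the drag-free model, update rate, and numbers are assumptions.

import numpy as np

G = np.array([0.0, 0.0, -9.81])  # gravity in the world frame, m/s^2

def predict_ball_state(position, velocity, dt):
    """Propagate the ball state forward by dt seconds, assuming
    gravity-only (drag-free) flight while no measurement is available."""
    new_position = position + velocity * dt + 0.5 * G * dt**2
    new_velocity = velocity + G * dt
    return new_position, new_velocity

# Example: ball last seen 0.4 m above the paddle, falling at 2 m/s,
# predicted at 100 Hz until it comes back into view.
p, v = np.array([0.0, 0.0, 0.4]), np.array([0.0, 0.0, -2.0])
for _ in range(10):
    p, v = predict_ball_state(p, v, dt=0.01)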

What keeps Cassie from juggling indefinitely?

There are a few factors that affect how long Cassie can sustain a juggle. While in simulation the paddle exhibits homogeneous properties like its stiffness and damping, in reality every surface has anisotropic contact properties. So, there are parts of the paddle which may be better for juggling than others (and importantly, react differently than modeled). These differences in contact are also exacerbated due to how the paddle is cantilevered when mounted on Cassie. When the ball hits these areas, it leads to a larger than expected error in a juggle. Due to the small size of the paddle, the ball may then just hit the paddle’s edge and end the juggling run. Over a very long run, this is a likely occurrence. Additionally, some large juggling errors could cause Cassie’s feet to slip slightly, which ends up changing the stable standing position over time. Since this version of the controller assumes Cassie is stationary, this change in position eventually leads to poor juggles and failure.
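
The stiffness and damping Li mentions are the parameters of the kind of spring-damper contact model commonly used in such simulations. A minimal sketch, with illustrative constants rather than anything identified on the real paddle:

def paddle_contact_force(penetration_m, penetration_rate_m_s,
                         stiffness=5000.0, damping=50.0):
    """Normal force from a spring-damper model of the paddle surface:
    a spring term for how far the ball has pressed in, plus a damping
    term for how fast the penetration is changing."""
    if penetration_m <= 0.0:
        return 0.0  # no contact, no force
    force = stiffness * penetration_m + damping * penetration_rate_m_s
    return max(force, 0.0)  # contact can push the ball away but never pull it back

print(paddle_contact_force(0.002, 0.1))  # ball 2 mm into the paddle, still compressing

The real surface reacts differently at different spots on the cantilevered paddle, which is exactly the model mismatch Li describes.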

Would Cassie be able to juggle while walking (or hovershoe-ing)?

Walking (and hovershoe-ing) while juggling is a far more challenging problem and is certainly a goal for future research. Some of these challenges include getting the paddle to precise poses to juggle the ball while also moving to avoid any destabilizing effects of stepping incorrectly. The number of juggles per step of walking could also vary and make the mathematics of the problem more challenging. The controller goal is also more involved. While the current goal of the juggling controller is to juggle the ball to a static apex position, with a walking juggling controller, we may instead want to hit the ball forwards and also walk forwards to bounce it, juggle the ball along a particular path, etc. Solving such challenges would be the main thrusts of the follow-up research.
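
The “static apex position” goal mentioned above comes down to simple projectile arithmetic: to send the ball back up to an apex height h above the paddle, the ball has to leave the paddle with upward speed sqrt(2gh). The sketch below works that out and backs out a rough paddle speed; the restitution value and the light-ball assumption are ours, not numbers from the lab.

import math

def upward_speed_for_apex(apex_height_m, g=9.81):
    """Ball speed needed just after impact to reach the desired apex,
    ignoring air drag and spin."""
    return math.sqrt(2.0 * g * apex_height_m)

def required_paddle_speed(ball_speed_in, apex_height_m, restitution=0.8):
    """Rough upward paddle speed, treating the paddle as much heavier than
    the ball, so the outgoing ball speed is about
    paddle_speed + restitution * (ball_speed_in + paddle_speed)."""
    v_out = upward_speed_for_apex(apex_height_m)
    return (v_out - restitution * ball_speed_in) / (1.0 + restitution)

print(upward_speed_for_apex(1.0))        # ~4.43 m/s to reach a 1 m apex
print(required_paddle_speed(4.0, 1.0))   # paddle speed for a ball arriving at 4 m/s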

Can you give an example of a practical task that would be made possible by using a controller like this?

Studying juggling means studying contact behavior and leveraging our models of it to achieve a known objective. Juggling could also be used to study predictable post-contact flight behavior. Consider the scenario where a robot attempts to make a catch but fails, lets the ball bounce off its hand, and then recovers the catch. This behavior could also be intentional: It is often easier to first execute a bounce to direct the target and then perform a subsequent action. For example, volleyball players could in principle directly hit a spiked ball back, but almost always bump the ball back up and then return it.

Even beyond this motivating example, the kinds of models we employ to get juggling working are more generally applicable to any task that involves contact, which could include tasks besides bouncing, like sliding and rolling. For example, clearing space on a desk by pushing objects to the side may be preferable to individually manipulating each and every object on it.

You mention collaborative juggling or juggling multiple balls—is that something you’ve tried yet? Can you talk a bit more about what you’re working on next?

We haven’t yet started working on collaborative or multi-ball juggling, but that’s also a goal for future work. Juggling multiple balls statically is probably the most reasonable next goal, but presents additional challenges. For instance, you have to encode a notion of juggling urgency (if the second ball isn’t hit hard enough, you have less time to get the first ball up before you get back to the second one).

On the other hand, collaborative human-robot juggling requires a more advanced decision-making framework. To get robust multi-agent juggling, the robot will need to employ some sort of probabilistic model of the expected human behavior (are they likely to move somewhere? Are they trying to catch the ball high or low? Is it safe to hit the ball back?). In general, developing such human models is difficult since humans are fairly unpredictable and often don’t exhibit rational behavior. This will be a focus of future work.

[ Hybrid Robotics Lab ]

#436174 How Selfish Are You? It Matters for ...

Our personalities impact almost everything we do, from the career path we choose to the way we interact with others to how we spend our free time.

But what about the way we drive—could personality be used to predict whether a driver will cut someone off, speed, or, say, zoom through a yellow light instead of braking?

There must be something to the idea that those of us who are more mild-mannered are likely to drive a little differently than the more assertive among us. At least, that’s what a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is betting on.

“Working with and around humans means figuring out their intentions to better understand their behavior,” said graduate student Wilko Schwarting, lead author on the paper published this week in Proceedings of the National Academy of Sciences. “People’s tendencies to be collaborative or competitive often spill over into how they behave as drivers. In this paper we sought to understand if this was something we could actually quantify.”

The team is building a model that classifies drivers according to how selfish or selfless they are, then uses that classification to help predict how drivers will behave on the road. Ideally, the system will help improve safety for self-driving cars by integrating a degree of ‘humanity’ into how their software perceives its surroundings; right now, to that software, human drivers and their cars are just objects, not much different from a tree or a sign.

But unlike trees and signs, humans have behavioral patterns and motivations. For greater success on roads that are still dominated by us mercurial humans, the CSAIL team believes, driverless cars should take our personalities into account.

How Selfish Are You?
How important is your own well-being to you vs. the well-being of other people? It’s a hard question to answer without specifying who the other people are; your answer would likely differ if we’re talking about your friends, loved ones, strangers, or people you actively dislike.

In social psychology, social value orientation (SVO) refers to people’s preferences for allocating resources between themselves and others. The two broad categories people can fall into are pro-social (people who are more cooperative, and expect cooperation from others) and pro-self (pretty self-explanatory: “Me first!”).

Based on drivers’ behavior in two different road scenarios—merging and making a left turn—the CSAIL team’s model classified drivers as pro-social or egoistic. Slowing down to let someone merge into your lane in front of you would earn you a pro-social classification, while cutting someone off or not slowing down to allow a left turn would make you egoistic.
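
Social value orientation is usually expressed as an angle that weights how much an agent values its own reward against everyone else’s: a purely selfish driver sits near 0 degrees, a cooperative one closer to 45. The sketch below shows that idea in miniature; it is not the CSAIL model, and the reward numbers and classification threshold are illustrative.

import math

def svo_utility(reward_self, reward_other, svo_angle_deg):
    """Utility a driver with a given SVO angle assigns to an outcome:
    0 degrees weights only their own reward, 45 degrees weights both equally."""
    phi = math.radians(svo_angle_deg)
    return math.cos(phi) * reward_self + math.sin(phi) * reward_other

def classify_driver(estimated_svo_deg, prosocial_threshold_deg=22.5):
    """Coarse two-way label of the kind described above (threshold is illustrative)."""
    return "prosocial" if estimated_svo_deg >= prosocial_threshold_deg else "egoistic"

# Letting someone merge costs the driver a little time but helps the other car.
# Only a sufficiently prosocial driver sees that trade as worthwhile.
print(svo_utility(reward_self=-1.0, reward_other=3.0, svo_angle_deg=10))   # negative: won't yield
print(svo_utility(reward_self=-1.0, reward_other=3.0, svo_angle_deg=45))   # positive: yields
print(classify_driver(45))                                                 # prosocial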

On the Road
The system then uses these classifications to model and predict drivers’ behavior. The team demonstrated that using their model, errors in predicting the behavior of other cars were reduced by 25 percent.

In a left-turn simulation, for example, their car would wait when an approaching car had an egoistic driver, but go ahead and make the turn when the other driver was prosocial. Similarly, if a self-driving car is trying to merge into the left lane and it’s identified the drivers in that lane as egoistic, it will assume they won’t slow down to let it in, and will wait to merge behind them. If, on the other hand, the self-driving car knows that the human drivers in the left lane are prosocial, it will attempt to merge between them since they’re likely to let it in.
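
In code, the behavior described above amounts to a small decision rule conditioned on the predicted driver type. The toy version below is our illustration, not the paper’s planner, and the yielding probabilities are placeholder numbers.

# Assumed, illustrative probabilities that a driver of each type will yield.
YIELD_PROBABILITY = {"prosocial": 0.8, "egoistic": 0.2}

def left_turn_decision(oncoming_driver_type, min_yield_probability=0.5):
    """Turn across an oncoming car only if its driver is predicted likely to yield."""
    if YIELD_PROBABILITY[oncoming_driver_type] >= min_yield_probability:
        return "make the turn"
    return "wait"

def merge_decision(driver_behind_gap_type, min_yield_probability=0.5):
    """Merge into a gap only if the car behind the gap is predicted to let us in."""
    if YIELD_PROBABILITY[driver_behind_gap_type] >= min_yield_probability:
        return "merge into the gap"
    return "wait and merge behind"

print(left_turn_decision("egoistic"))    # wait
print(merge_decision("prosocial"))       # merge into the gap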

So how does this all translate to better safety?

It’s essentially a starting point for imbuing driverless cars with some of the abilities and instincts that are innate to humans. If you’re driving down the highway and you see a car swerving outside its lane, you’ll probably distance yourself from that car because you know it’s more likely to cause an accident. Our senses take in information we can immediately interpret and act on, and this includes predictions about what might happen based on observations of what just happened. Our observations can clue us in to a driver’s personality (the swerver must be careless) or simply to the circumstances of a given moment (the swerver was texting).

But right now, self-driving cars assume all human drivers behave the same way, and they have no mechanism for incorporating observations about behavioral differences between drivers into their decisions.

“Creating more human-like behavior in autonomous vehicles (AVs) is fundamental for the safety of passengers and surrounding vehicles, since behaving in a predictable manner enables humans to understand and appropriately respond to the AV’s actions,” said Schwarting.

Though it may feel a bit unsettling to think of an algorithm lumping you into a category and driving accordingly around you, maybe it’s less unsettling than thinking of self-driving cars as pre-programmed, oblivious robots unable to adapt to different driving styles.

The team’s next step is to apply their model to pedestrians, bikes, and other agents frequently found in driving environments. They also plan to look into other robotic systems that act among people, like household robots, and to integrate social value orientation into their algorithms.

Image Credit: Free-Photos from Pixabay

#435779 This Robot Ostrich Can Ride Around on ...

Proponents of legged robots say that they make sense because legs are often required to go where humans go. Proponents of wheeled robots say, “Yeah, that’s great, but watch how fast and efficient my robot is compared to yours.” Some robots try to take advantage of both wheels and legs with hybrid designs like whegs or wheeled feet, but a simpler and more versatile solution is to do what humans do, and just take advantage of wheels when you need them.

We’ve seen a few experiments with this. The University of Michigan managed to convince Cassie to ride a Segway, with mostly positive (but occasionally quite negative) results. Segways and hoverboard-like systems can provide wheeled mobility for legged robots over flat terrain, but they can’t handle things like stairs, which is kind of the whole point of having a robot with legs anyway.

Image: UC Berkeley

From left: a Segway, a hoverboard, and hovershoes, with complexity in terms of user control increasing from left to right.

At UC Berkeley’s Hybrid Robotics Lab, led by Koushil Sreenath, researchers have taken things a step further. They are teaching their Cassie bipedal robot (called Cassie Cal) to wheel around on a pair of hovershoes. Hovershoes are like hoverboards that have been chopped in half, resulting in a pair of motorized single-wheel skates. You balance on the skates, and control them by leaning forwards and backwards and left and right, which causes each skate to accelerate or decelerate in an attempt to keep itself upright. It’s not easy to get these things to work, even for a human, but by adding a sensor package to Cassie the UC Berkeley researchers have managed to get it to zip around campus fully autonomously.
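
Each hovershoe is, in effect, a one-wheeled inverted pendulum: tilt it forward and the wheel has to speed up to get back underneath the rider. A generic sketch of that balance loop is below; the gains and the simple PD structure are our assumptions, not the hovershoe firmware or Berkeley’s controller.

def wheel_acceleration_command(pitch_rad, pitch_rate_rad_s, kp=60.0, kd=8.0):
    """PD feedback on the tilt of a single hovershoe: a forward lean
    (positive pitch) commands forward wheel acceleration to drive the
    wheel back under the rider."""
    return kp * pitch_rad + kd * pitch_rate_rad_s

# Cassie steers by deliberately biasing this loop: leaning both shoes
# forward speeds up, and leaning one shoe more than the other turns.
print(wheel_acceleration_command(pitch_rad=0.05, pitch_rate_rad_s=0.0))  # gentle forward lean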

Remember, Cassie is operating autonomously here—it’s performing vSLAM (with an Intel RealSense) and doing all of its own computation onboard in real time. Watching it jolt across that cracked sidewalk is particularly impressive, especially considering that it only has pitch control over its ankles and can’t roll its feet to maintain maximum contact with the hovershoes. But you can see the advantage that this particular platform offers to a robot like Cassie, including the ability to handle stairs. Stairs in one direction, anyway.

It’s a testament to the robustness of UC Berkeley’s controller that they were willing to let the robot operate untethered and outside, and it sounds like they’re thinking long-term about how legged robots on wheels would be real-world useful:

Our feedback control and autonomous system allow for swift movement through urban environments to aid in everything from food delivery to security and surveillance to search and rescue missions. This work can also help with transportation in large factories and warehouses.

For more details, we spoke with the UC Berkeley students (Shuxiao Chen, Jonathan Rogers, and Bike Zhang) via email.

IEEE Spectrum: How representative of Cassie’s real-world performance is what we see in the video? What happens when things go wrong?

Cassie’s real-world performance is similar to what we see in the video. Cassie can ride the hovershoes successfully all around the campus. Our current controller allows Cassie to robustly ride the hovershoes and rejects various perturbations. At present, one of the failure modes is when the hovershoe rolls to the side—this happens when it goes sideways down a step or encounters a large obstacle on one side of it, causing it to roll over. Under these circumstances, Cassie doesn’t have sufficient control authority (due to the thin narrow feet) to get the hovershoe back on its wheel.

The Hybrid Robotics Lab has been working on robots that walk over challenging terrain—how do wheeled platforms like hovershoes fit in with that?

Surprisingly, this research is related to our prior work on walking on discrete terrain. While locomotion using legs is efficient when traveling over rough and discrete terrain, wheeled locomotion is more efficient when traveling over flat continuous terrain. Enabling legged robots to ride on various micro-mobility platforms will offer multimodal locomotion capabilities, improving the efficiency of locomotion over various terrains.

Our current research furthers the locomotion ability for bipedal robots over continuous terrains by using a wheeled platform. In the long run, we would like to develop multi-modal locomotion strategies based on our current and prior work to allow legged robots to robustly and efficiently locomote in our daily life.

Photo: UC Berkeley

In their experiments, the UC Berkeley researchers say Cassie proved quite capable of riding the hovershoes over rough and uneven terrain, including going down stairs.

How long did it take to train Cassie to use the hovershoes? Are there any hovershoe skills that Cassie is better at than an average human?

We spent about eight months to develop our whole system, including a controller, a path planner, and a vision system. This involved developing mathematical models of Cassie and the hovershoes, setting up a dynamical simulation, figuring out how to interface and communicate with various sensors and Cassie, and doing several experiments to slowly improve performance. In contrast, a human with a good sense of balance needs a few hours to learn to use the hovershoes. A human who has never used skates or skis will probably need a longer time.

A human can easily turn in place on the hovershoes, while Cassie cannot do this motion currently due to our algorithm requiring a non-zero forward speed in order to turn. However, Cassie is much better at riding the hovershoes over rough and uneven terrain including riding the hovershoes down some stairs!

What would it take to make Cassie faster or more agile on the hovershoes?

While Cassie can currently move at a decent pace on the hovershoes and navigate obstacles, Cassie’s ability to avoid obstacles at rapid speeds is constrained by the sensing, the controller, and the onboard computation. To enable Cassie to dynamically weave around obstacles at high speeds exhibiting agile motions, we need to make progress on different fronts.

We need planners that take into account the entire dynamics of the Cassie-Hovershoe system and rapidly generate dynamically-feasible trajectories; we need controllers that tightly coordinate all the degrees-of-freedom of Cassie to dynamically move while balancing on the hovershoes; we need sensors that are robust to motion-blur artifacts caused due to fast turns; and we need onboard computation that can execute our algorithms at real-time speeds.

What are you working on next?

We are working on enabling more aggressive movements for Cassie on the hovershoes by fully exploiting Cassie’s dynamics. We are working on approaches that enable us to easily go beyond hovershoes to other challenging micro-mobility platforms. We are working on enabling Cassie to step onto and off from wheeled platforms such as hovershoes. We would like to create a future of multi-modal locomotion strategies for legged robots to enable them to efficiently help people in our daily life.

“Feedback Control for Autonomous Riding of Hovershoes by a Cassie Bipedal Robot,” by Shuxiao Chen, Jonathan Rogers, Bike Zhang, and Koushil Sreenath from the Hybrid Robotics Lab at UC Berkeley, has been submitted to IEEE Robotics and Automation Letters, with the option to be presented at the 2019 IEEE RAS International Conference on Humanoid Robots.

#435640 Video Friday: This Wearable Robotic Tail ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
CLAWAR 2019 – August 26-28, 2019 – Kuala Lumpur, Malaysia
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Lakshmi Nair from Georgia Tech describes some fascinating research towards robots that can create their own tools, as presented at ICRA this year:

Using a novel capability to reason about shape, function, and attachment of unrelated parts, researchers have for the first time successfully trained an intelligent agent to create basic tools by combining objects.

The breakthrough comes from Georgia Tech’s Robot Autonomy and Interactive Learning (RAIL) research lab and is a significant step toward enabling intelligent agents to devise more advanced tools that could prove useful in hazardous – and potentially life-threatening – environments.

[ Lakshmi Nair ]

Victor Barasuol, from the Dynamic Legged Systems Lab at IIT, wrote in to share some new research on their HyQ quadruped that enables sensorless shin collision detection. This helps the robot navigate unstructured environments, and also mitigates all those painful shin strikes, because ouch.

This will be presented later this month at the International Conference on Climbing and Walking Robots (CLAWAR) in Kuala Lumpur, Malaysia.

[ IIT ]

Thanks Victor!

You used to have a tail, you know—as an embryo, about a month into your development. All mammals used to have tails, and now we just have useless tailbones, which don’t help us with balancing even a little bit. BRING BACK THE TAIL!

The tail, created by Junichi Nabeshima, Kouta Minamizawa, and MHD Yamen Saraiji from Keio University’s Graduate School of Media Design, was presented at SIGGRAPH 2019 Emerging Technologies.

[ Paper ] via [ Gizmodo ]

The noises in this video are fantastic.

[ ESA ]

Apparently the industrial revolution wasn’t a thorough enough beatdown of human knitting, because the robots are at it again.

[ MIT CSAIL ]

Skydio’s drones just keep getting more and more impressive. Now if only they’d make one that I can afford…

[ Skydio ]

The only thing more fun than watching robots is watching people react to robots.

[ SEER ]

There aren’t any robots in this video, but it’s robotics-related research, and very soothing to watch.

[ Stanford ]

#autonomousicecreamtricycle

In case it wasn’t clear, which it wasn’t, this is a Roboy project. And if you didn’t understand that first video, you definitely won’t understand this second one:

Whatever that t-shirt is at the end (Roboy in sunglasses puking rainbows…?) I need one.

[ Roboy ]

By adding electronics and computation technology to a simple cane that has been around since ancient times, a team of researchers at Columbia Engineering have transformed it into a 21st century robotic device that can provide light-touch assistance in walking to the aged and others with impaired mobility.

The light-touch robotic cane, called CANINE, acts as a cane-like mobile assistant. The device improves the individual’s proprioception, or self-awareness in space, during walking, which in turn improves stability and balance.

[ ROAR Lab ]

During the second field experiment for DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program, which took place at Fort Benning, Georgia, teams of autonomous air and ground robots tested tactics on a mission to isolate an urban objective. Similar to the way a firefighting crew establishes a boundary around a burning building, they first identified locations of interest and then created a perimeter around the focal point.

[ DARPA ]

I think there’s a bit of new footage here of Ghost Robotics’ Vision 60 quadruped walking around without sensors on unstructured terrain.

[ Ghost Robotics ]

If you’re as tired of passenger drone hype as I am, there’s absolutely no need to watch this video of NEC’s latest hover test.

[ AP ]

As researchers teach robots to perform more and more complex tasks, the need for realistic simulation environments is growing. Existing techniques for closing the reality gap by approximating real-world physics often require extensive real world data and/or thousands of simulation samples. This paper presents TuneNet, a new machine learning-based method to directly tune the parameters of one model to match another using an iterative residual tuning technique. TuneNet estimates the parameter difference between two models using a single observation from the target and minimal simulation, allowing rapid, accurate and sample-efficient parameter estimation.

The system can be trained via supervised learning over an auto-generated simulated dataset. We show that TuneNet can perform system identification, even when the true parameter values lie well outside the distribution seen during training, and demonstrate that simulators tuned with TuneNet outperform existing techniques for predicting rigid body motion. Finally, we show that our method can estimate real-world parameter values, allowing a robot to perform sim-to-real task transfer on a dynamic manipulation task unseen during training. We are also making a baseline implementation of our code available online.
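
The core idea of iterative residual tuning can be written in a few lines: instead of regressing simulation parameters directly, a network repeatedly predicts a correction to the current guess from the mismatch between the target observation and a fresh simulation. The loop below is a schematic based on the description above; simulate, tune_net, and the toy stand-ins at the bottom are our placeholders, not the released implementation.

import numpy as np

def iterative_residual_tuning(params, target_observation, simulate, tune_net,
                              num_iterations=10):
    """Schematic TuneNet-style loop: each step runs one cheap simulation
    under the current parameters and applies the network's predicted
    parameter residual."""
    for _ in range(num_iterations):
        sim_observation = simulate(params)
        delta = tune_net(target_observation, sim_observation)  # predicted correction
        params = params + delta
    return params

# Toy stand-ins so the loop runs: the "observation" is the parameter itself,
# and the "network" is replaced by a scaled difference of observations.
true_params = np.array([2.0])
simulate = lambda p: p.copy()
tune_net = lambda obs_target, obs_sim: 0.5 * (obs_target - obs_sim)
print(iterative_residual_tuning(np.array([0.0]), simulate(true_params), simulate, tune_net))
# converges toward [2.0]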

[ Paper ]

Here’s an update on what GITAI has been up to with their telepresence astronaut-replacement robot.

[ GITAI ]

Curiosity captured this 360-degree panorama of a location on Mars called “Teal Ridge” on June 18, 2019. This location is part of a larger region the rover has been exploring called the “clay-bearing unit” on the side of Mount Sharp, which is inside Gale Crater. The scene is presented with a color adjustment that approximates white balancing to resemble how the rocks and sand would appear under daytime lighting conditions on Earth.

[ MSL ]

Some updates (in English) on ROS from ROSCon France. The first is a keynote from Brian Gerkey:

And this second video is from Omri Ben-Bassat, about how to keep your Anki Vector alive using ROS:

All of the ROSCon FR talks are available on Vimeo.

[ ROSCon FR ]
