Tag Archives: mobile

#435669 Watch World Champion Soccer Robots Take ...

RoboCup 2019 took place earlier this month down in Sydney, Australia. While there are many different events, including RoboCup@Home, RoboCup Rescue, and a bunch of different soccer leagues, one of the most compelling is the middle-size league (MSL), where mobile robots, each about the size of a fire hydrant, play soccer using a regulation FIFA ball. The robots are fully autonomous, making their own decisions in real time about when to dribble, pass, and shoot.

The long-term goal of RoboCup is this:

By the middle of the 21st century, a team of fully autonomous humanoid robot soccer players shall win a soccer game, complying with the official rules of FIFA, against the winner of the most recent World Cup.

While the robots are certainly not there yet, they're definitely getting closer.

Even if you’re not a particular fan of soccer, it’s impressive to watch the robots coordinate with each other, setting up multiple passes and changing tactics on the fly in response to the movements of the other team. And the ability of these robots to shoot accurately is world-class (like, human world-class), as they’re seemingly able to put the ball in whatever corner of the goal they choose with split-second timing.

The final match was between Tech United from Eindhoven University of Technology in the Netherlands (whose robots are called TURTLE) and Team Water from Beijing Information Science & Technology University. Without spoiling it, I can tell you that the game was tied in the final few seconds, sending it into overtime. You can watch the entire match on YouTube, or a 5-minute commentated highlight video here:

It’s become a bit of a tradition to have the winning MSL robots play a team of what looks to be inexperienced adult humans wearing long pants and dress shoes.

The fact that the robots managed to score even once is pretty awesome, and it also looks like the robots are playing very conservatively (more so than the humans) so as not to accidentally injure any of us fragile meatbags with our spindly little legs. I get that RoboCup wants its first team of robots that can beat a human World Cup winning team to be humanoids, but at the moment, the MSL robots are where all the skill is.

To get calibrated on the state of the art for humanoid soccer robots, here’s the adult size final, Team Nimbro from the University of Bonn in Germany versus Team Sweaty from Offenburg University in Germany:

Yup, still a lot of falling over.

There’s lots more RoboCup on YouTube: Some channels to find more matches include the official RoboCup 2019 channel, and Tech United Eindhoven’s channel, which has both live English commentary and some highlight videos.

[ RoboCup 2019 ]

Posted in Human Robots

#435660 Toyota Research Developing New ...

With the Olympics taking place next year in Japan, Toyota is (among other things) stepping up its robotics game to help provide “mobility for all.” We know that Toyota’s HSR will be doing work there, along with a few other mobile systems, but the Toyota Research Institute (TRI) has just announced a new telepresence robot called the T-TR1, featuring an absolutely massive screen designed to give you a near-lifesize virtual presence.

T-TR1 is a virtual mobility/tele-presence robot developed by Toyota Research Institute in the United States. It is equipped with a camera atop a large, near-lifesize display.
By projecting an image of a user from a remote location, the robot will help that person feel more physically present at the robot’s location.
With T-TR1, Toyota will give people who are physically unable to attend events such as the Games a chance to attend virtually, with an on-screen presence that enables conversation between the two locations.

TRI isn’t ready to share much more detail on this system yet (we asked, of course), but we can infer some things from the video and the rest of the info that’s out there. For example, that ball on top is a 360-degree camera (it looks a lot like an Insta360 Pro), giving the remote user as good an awareness of their surroundings as they’d have if they were there in person. There are multiple 3D-sensing systems, including at least two depth cameras plus a lidar at the base. It’s not at all clear whether the robot is autonomous or semi-autonomous (using the sensors for automated obstacle avoidance, say), and since the woman on the other end of the robot does not seem to be controlling it at all during the demo, it’s hard to make an educated guess about the level of autonomy, or even how it’s supposed to be controlled.

We really like that enormous screen—despite the fact that telepresence now requires pants. It adds to the embodiment that makes independent telepresence robots useful. It’s also nice that the robot can move fast enough to keep up with a person walking briskly. Hopefully it’s safe for it to move at that speed in an environment more realistic than a carpeted, half-empty conference room, although it’ll probably have to leverage all of those sensors to do so. The other challenge for the T-TR1 will be bandwidth—even assuming that all of the sensor data processing is done on-robot, 360 cameras are huge bandwidth hogs, plus there’s the primary (presumably high-quality) feed from the main camera, and then the video of the user coming the other way. It’s a lot of data in a very latency-sensitive application, and the robot will presumably be operating in places where connectivity is a challenge due to crowds. This has always been a problem for telepresence robots—no matter how amazing your robot is, the experience will often, for better or worse, be defined by Internet connections that you may have no control over.
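A quick back-of-envelope budget makes the bandwidth concern concrete. Every per-stream bitrate below is our own illustrative assumption (Toyota hasn't published specs), but the totals are in the right ballpark for this class of robot:

```python
# Rough link budget for a telepresence robot like the T-TR1.
# All bitrates here are illustrative assumptions, not Toyota's numbers.

STREAMS_MBPS = {
    "360_camera_uplink": 30.0,    # stitched 4K-class 360 video often runs 20-50 Mbps
    "main_camera_uplink": 8.0,    # a 1080p H.264-class primary feed
    "user_video_downlink": 10.0,  # the near-lifesize display wants a high-quality feed
    "telemetry_and_control": 0.5, # sensor status plus drive commands
}

def total_mbps(streams):
    """Sum per-stream bitrates into one sustained-link requirement."""
    return sum(streams.values())

if __name__ == "__main__":
    total = total_mbps(STREAMS_MBPS)
    print(f"Required sustained bandwidth: {total:.1f} Mbps")
```

Even this modest total is hard to guarantee, at teleoperation-grade latency, on a venue network shared with an Olympic-sized crowd.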

We should emphasize that Toyota has only released the bare minimum of information about the T-TR1, although we’re told that we can expect more as the 2020 Olympics approach: opening ceremonies are one year from today.

[ TRI ]


#435640 Video Friday: This Wearable Robotic Tail ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
CLAWAR 2019 – August 26-28, 2019 – Kuala Lumpur, Malaysia
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-November 1, 2019 – Beijing, China
ROSCon 2019 – October 31-November 1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Lakshmi Nair from Georgia Tech describes some fascinating research towards robots that can create their own tools, as presented at ICRA this year:

Using a novel capability to reason about shape, function, and attachment of unrelated parts, researchers have for the first time successfully trained an intelligent agent to create basic tools by combining objects.

The breakthrough comes from Georgia Tech’s Robot Autonomy and Interactive Learning (RAIL) research lab and is a significant step toward enabling intelligent agents to devise more advanced tools that could prove useful in hazardous – and potentially life-threatening – environments.

[ Lakshmi Nair ]

Victor Barasuol, from the Dynamic Legged Systems Lab at IIT, wrote in to share some new research on their HyQ quadruped that enables sensorless shin collision detection. This helps the robot navigate unstructured environments, and also mitigates all those painful shin strikes, because ouch.

This will be presented later this month at the International Conference on Climbing and Walking Robots (CLAWAR) in Kuala Lumpur, Malaysia.

[ IIT ]

Thanks Victor!

You used to have a tail, you know—as an embryo, about a month into your development. All mammals used to have tails, and now we just have useless tailbones, which don’t help us with balancing even a little bit. BRING BACK THE TAIL!

The tail, created by Junichi Nabeshima, Kouta Minamizawa, and MHD Yamen Saraiji from Keio University’s Graduate School of Media Design, was presented at SIGGRAPH 2019 Emerging Technologies.

[ Paper ] via [ Gizmodo ]

The noises in this video are fantastic.

[ ESA ]

Apparently the industrial revolution wasn’t a thorough enough beatdown of human knitting, because the robots are at it again.

[ MIT CSAIL ]

Skydio’s drones just keep getting more and more impressive. Now if only they’d make one that I can afford…

[ Skydio ]

The only thing more fun than watching robots is watching people react to robots.

[ SEER ]

There aren’t any robots in this video, but it’s robotics-related research, and very soothing to watch.

[ Stanford ]

#autonomousicecreamtricycle

In case it wasn’t clear, which it wasn’t, this is a Roboy project. And if you didn’t understand that first video, you definitely won’t understand this second one:

Whatever that t-shirt is at the end (Roboy in sunglasses puking rainbows…?), I need one.

[ Roboy ]

By adding electronics and computation technology to a simple cane that has been around since ancient times, a team of researchers at Columbia Engineering has transformed it into a 21st century robotic device that can provide light-touch walking assistance to older adults and others with impaired mobility.

The light-touch robotic cane, called CANINE, acts as a cane-like mobile assistant. The device improves the individual’s proprioception, or self-awareness in space, during walking, which in turn improves stability and balance.

[ ROAR Lab ]

During the second field experiment for DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program, which took place at Fort Benning, Georgia, teams of autonomous air and ground robots tested tactics on a mission to isolate an urban objective. Similar to the way a firefighting crew establishes a boundary around a burning building, they first identified locations of interest and then created a perimeter around the focal point.

[ DARPA ]

I think there’s a bit of new footage here of Ghost Robotics’ Vision 60 quadruped walking around without sensors on unstructured terrain.

[ Ghost Robotics ]

If you’re as tired of passenger drone hype as I am, there’s absolutely no need to watch this video of NEC’s latest hover test.

[ AP ]

As researchers teach robots to perform more and more complex tasks, the need for realistic simulation environments is growing. Existing techniques for closing the reality gap by approximating real-world physics often require extensive real world data and/or thousands of simulation samples. This paper presents TuneNet, a new machine learning-based method to directly tune the parameters of one model to match another using an iterative residual tuning technique. TuneNet estimates the parameter difference between two models using a single observation from the target and minimal simulation, allowing rapid, accurate and sample-efficient parameter estimation.

The system can be trained via supervised learning over an auto-generated simulated dataset. We show that TuneNet can perform system identification, even when the true parameter values lie well outside the distribution seen during training, and demonstrate that simulators tuned with TuneNet outperform existing techniques for predicting rigid body motion. Finally, we show that our method can estimate real-world parameter values, allowing a robot to perform sim-to-real task transfer on a dynamic manipulation task unseen during training. We are also making a baseline implementation of our code available online.
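The core idea of iterative residual tuning can be sketched in a few lines. In this toy version, a hypothetical one-parameter bouncing-ball simulator stands in for the physics engine, and a finite-difference estimator stands in for TuneNet's learned network, which predicts the parameter *difference* from a single target observation:

```python
# Toy sketch of iterative residual tuning in the spirit of TuneNet:
# repeatedly estimate how far a simulator's parameter is from the target
# system's, using one observation, and apply the correction. The learned
# network from the paper is replaced by a finite-difference stand-in.

def simulate(restitution, drop_height=1.0):
    # Rebound height of a dropped ball: h' = e^2 * h
    return restitution ** 2 * drop_height

def estimate_delta(theta, obs_target, eps=1e-4):
    """Stand-in for the learned residual model: predict how much to
    change theta so the simulated observation matches the target."""
    obs_sim = simulate(theta)
    grad = (simulate(theta + eps) - obs_sim) / eps
    return (obs_target - obs_sim) / grad

def tune(theta0, obs_target, iters=10):
    theta = theta0
    for _ in range(iters):
        theta += estimate_delta(theta, obs_target)
    return theta

if __name__ == "__main__":
    obs = simulate(0.75)               # one observation of the "real" system
    est = tune(theta0=0.4, obs_target=obs)
    print(f"estimated restitution: {est:.4f}")
```

The real method learns the delta-predictor from simulated data, which is what buys the sample efficiency; the iterative structure of the loop is the same.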

[ Paper ]

Here’s an update on what GITAI has been up to with their telepresence astronaut-replacement robot.

[ GITAI ]

Curiosity captured this 360-degree panorama of a location on Mars called “Teal Ridge” on June 18, 2019. This location is part of a larger region the rover has been exploring called the “clay-bearing unit” on the side of Mount Sharp, which is inside Gale Crater. The scene is presented with a color adjustment that approximates white balancing to resemble how the rocks and sand would appear under daytime lighting conditions on Earth.

[ MSL ]

Some updates (in English) on ROS from ROSCon France. The first is a keynote from Brian Gerkey:

And this second video is from Omri Ben-Bassat, about how to keep your Anki Vector alive using ROS:

All of the ROSCon FR talks are available on Vimeo.

[ ROSCon FR ]


#435541 This Giant AI Chip Is the Size of an ...

People say size doesn’t matter, but when it comes to AI the makers of the largest computer chip ever beg to differ. There are plenty of question marks about the gargantuan processor, but its unconventional design could herald an innovative new era in silicon design.

Computer chips specialized to run deep learning algorithms are a booming area of research as hardware limitations begin to slow progress, and both established players and startups are vying to build the successor to the GPU, the specialized graphics chip that has become the workhorse of the AI industry.

On Monday Californian startup Cerebras came out of stealth mode to unveil an AI-focused processor that turns conventional wisdom on its head. For decades chip makers have been focused on making their products ever-smaller, but the Wafer Scale Engine (WSE) is the size of an iPad and features 1.2 trillion transistors, 400,000 cores, and 18 gigabytes of on-chip memory.

The Cerebras Wafer-Scale Engine (WSE) is the largest chip ever built. It measures 46,225 square millimeters and includes 1.2 trillion transistors. Optimized for artificial intelligence compute, the WSE is shown here for comparison alongside the largest graphics processing unit. Image Credit: Used with permission from Cerebras Systems.
There is a method to the madness, though. Currently, getting enough cores to run really large-scale deep learning applications means connecting banks of GPUs together. But shuffling data between these chips is a major drain on speed and energy efficiency because the wires connecting them are relatively slow.

Building all 400,000 cores into the same chip should get round that bottleneck, but there are reasons it’s not been done before, and Cerebras has had to come up with some clever hacks to get around those obstacles.

Regular computer chips are manufactured using a process called photolithography to etch transistors onto the surface of a silicon wafer. Wafers are typically about 12 inches across, so multiple chips are built onto each one at once and then split up afterwards. But at 8.5 inches across, the WSE uses the entire wafer for a single chip.

The problem is that while for standard chip-making processes any imperfections in manufacturing will at most lead to a few processors out of several hundred having to be ditched, for Cerebras it would mean scrapping the entire wafer. To get around this the company built in redundant circuits so that even if there are a few defects, the chip can route around them.
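A simple Poisson yield model shows why the redundancy is non-negotiable. The defect density below is an illustrative assumption (0.1 defects per square centimeter is a ballpark figure for a mature process, not TSMC's actual number); the chip area comes from Cerebras' own 46,225 mm² figure:

```python
import math

# Poisson yield: the chance a chip of area A has zero defects at
# density D is exp(-D * A). At wafer scale that's effectively zero,
# so the design must tolerate defects instead of avoiding them.

DEFECTS_PER_CM2 = 0.1            # illustrative assumption
WSE_AREA_CM2 = 46225 / 100       # 46,225 mm^2, per Cerebras
CORES = 400_000

# All-or-nothing yield without any redundancy:
monolithic_yield = math.exp(-DEFECTS_PER_CM2 * WSE_AREA_CM2)

# Expected defects per wafer, and the fraction of cores a
# defect-tolerant design must be able to route around:
expected_defects = DEFECTS_PER_CM2 * WSE_AREA_CM2
spare_fraction_needed = expected_defects / CORES

print(f"defect-free yield without redundancy: {monolithic_yield:.2e}")
print(f"expected defects per wafer: {expected_defects:.0f}")
print(f"fraction of cores lost: {spare_fraction_needed:.5%}")
```

Under these assumptions a defect-free wafer is essentially impossible, yet the expected losses amount to only a few dozen of the 400,000 cores, so a small pool of spares plus re-routable interconnect absorbs them.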

The other big issue with a giant chip is the enormous amount of heat the processors can kick off—so the company has had to design a proprietary water-cooling system. That, along with the fact that no one makes connections and packaging for giant chips, means the WSE won’t be sold as a stand-alone component, but as part of a pre-packaged server incorporating the cooling technology.

There are no details on costs or performance so far, but some customers have already been testing prototypes, and according to Cerebras results have been promising. CEO and co-founder Andrew Feldman told Fortune that early tests show they are reducing training time from months to minutes.

We’ll have to wait until the first systems ship to customers in September to see if those claims stand up. But Feldman told ZDNet that the design of their chip should help spur greater innovation in the way engineers design neural networks. Many cornerstones of this process—for instance, tackling data in batches rather than individual data points—are guided more by the hardware limitations of GPUs than by machine learning theory, but their chip will do away with many of those obstacles.

Whether that turns out to be the case or not, the WSE might be the first indication of an innovative new era in silicon design. When Google announced its AI-focused Tensor Processing Unit in 2016, it was a wake-up call for chipmakers that some out-of-the-box thinking would be needed to square the slowing of Moore’s Law with skyrocketing demand for computing power.

It’s not just tech giants’ AI server farms driving innovation. At the other end of the spectrum, the desire to embed intelligence in everyday objects and mobile devices is pushing demand for AI chips that can run on tiny amounts of power and squeeze into the smallest form factors.

These trends have spawned renewed interest in everything from brain-inspired neuromorphic chips to optical processors, but the WSE also shows that there might be mileage in simply taking a sideways look at some of the other design decisions chipmakers have made in the past rather than just pumping ever more transistors onto a chip.

This gigantic chip might be the first exhibit in a weird and wonderful new menagerie of exotic, AI-inspired silicon.

Image Credit: Used with permission from Cerebras Systems.


#435520 These Are the Meta-Trends Shaping the ...

Life is pretty different now than it was 20 years ago, or even 10 years ago. It’s sort of exciting, and sort of scary. And hold onto your hat, because it’s going to keep changing—even faster than it already has been.

The good news is, maybe there won’t be too many big surprises, because the future will be shaped by trends that have already been set in motion. According to Singularity University co-founder and XPRIZE founder Peter Diamandis, a lot of these trends are unstoppable—but they’re also pretty predictable.

At SU’s Global Summit, taking place this week in San Francisco, Diamandis outlined some of the meta-trends he believes are key to how we’ll live our lives and do business in the (not too distant) future.

Increasing Global Abundance
Resources are becoming more abundant all over the world, and fewer people are seeing their lives limited by scarcity. “It’s hard for us to realize this as we see crisis news, but what people have access to is more abundant than ever before,” Diamandis said. Products and services are becoming cheaper and thus available to more people, and having more resources then enables people to create more, thus producing even more resources—and so on.

Need evidence? The proportion of the world’s population living in extreme poverty is currently lower than it’s ever been. The average human life expectancy is longer than it’s ever been. The costs of day-to-day needs like food, energy, transportation, and communications are on a downward trend.

Take energy. In most of the world, though its costs are decreasing, it’s still a fairly precious commodity; we turn off our lights and our air conditioners when we don’t need them (ideally, both to save money and to avoid wastefulness). But the cost of solar energy has plummeted, and the storage capacity of batteries is improving, and solar technology is steadily getting more efficient. Bids for new solar power plants in the past few years have broken each other’s records for lowest cost per kilowatt hour.

“We’re not far from a penny per kilowatt hour for energy from the sun,” Diamandis said. “And if you’ve got energy, you’ve got water.” Desalination, for one, will be much more widely feasible once the cost of the energy needed for it drops.
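The energy-to-water link is easy to put numbers on. Reverse-osmosis seawater desalination needs roughly 3-4 kWh per cubic meter of fresh water; the electricity prices below are illustrative, with the second one being Diamandis' "penny per kilowatt hour":

```python
# Energy component of desalinated water cost under falling solar prices.
# 3.5 kWh/m^3 is typical for a modern reverse-osmosis plant; the
# electricity prices are illustrative assumptions.

KWH_PER_M3 = 3.5

def energy_cost_per_m3(price_per_kwh):
    """Energy cost of one cubic meter of desalinated water, in USD."""
    return KWH_PER_M3 * price_per_kwh

today = energy_cost_per_m3(0.05)   # ~5 cents/kWh, a cheap grid rate
future = energy_cost_per_m3(0.01)  # a penny per kilowatt hour

print(f"energy cost per m^3 today:  ${today:.3f}")
print(f"energy cost per m^3 future: ${future:.3f}")
```

At a penny per kilowatt hour, energy all but disappears from the cost of a cubic meter of fresh water, which is roughly a person's daily domestic use in much of the world.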

Knowledge is perhaps the most crucial resource that’s going from scarce to abundant. All the world’s knowledge is now at the fingertips of anyone who has a mobile phone and an internet connection—and the number of people connected is only going to grow. “Everyone is being connected at gigabit connection speeds, and this will be transformative,” Diamandis said. “We’re heading towards a world where anyone can know anything at any time.”

Increasing Capital Abundance
It’s not just goods, services, and knowledge that are becoming more plentiful. Money is, too—particularly money for business. “There’s more and more capital available to invest in companies,” Diamandis said. As a result, more people are getting the chance to bring their world-changing ideas to life.

Venture capital investments reached a new record of $130 billion in 2018, up from $84 billion in 2017—and that’s just in the US. Globally, VC funding grew 21 percent from 2017 to a total of $207 billion in 2018.

Through crowdfunding, any person in any part of the world can present their idea and ask for funding. That funding can come in the form of a loan, an equity investment, a reward, or an advanced purchase of the proposed product or service. “Crowdfunding means it doesn’t matter where you live, if you have a great idea you can get it funded by people from all over the world,” Diamandis said.

All this is making a difference; the number of unicorns—privately-held startups valued at over $1 billion—currently stands at an astounding 360.

One of the reasons why the world is getting better, Diamandis believes, is because entrepreneurs are trying more crazy ideas—not ideas that are reasonable or predictable or linear, but ideas that seem absurd at first, then eventually end up changing the world.

Everyone and Everything, Connected
As already noted, knowledge is becoming abundant thanks to the proliferation of mobile phones and wireless internet; everyone’s getting connected. In the next decade or sooner, connectivity will reach every person in the world. 5G is being tested and offered for the first time this year, and companies like Google, SpaceX, OneWeb, and Amazon are racing to develop global satellite internet constellations, whether by launching 12,000 satellites, as SpaceX’s Starlink is doing, or by floating giant balloons into the stratosphere like Google’s Project Loon.

“We’re about to reach a period of time in the next four to six years where we’re going from half the world’s people being connected to the whole world being connected,” Diamandis said. “What happens when 4.2 billion new minds come online? They’re all going to want to create, discover, consume, and invent.”

And it doesn’t stop at connecting people. Things are becoming more connected too. “By 2020 there will be over 20 billion connected devices and more than one trillion sensors,” Diamandis said. By 2030, those projections go up to 500 billion devices and 100 trillion sensors. Think about it: there are home devices like refrigerators, TVs, dishwashers, digital assistants, and even toasters. There’s city infrastructure, from stoplights to cameras to public transportation like buses and bike sharing. It’s all getting smart and connected.

Soon we’ll be adding autonomous cars to the mix, and an unimaginable glut of data to go with them. Every turn, every stop, every acceleration will be a data point. Some cars already collect over 25 gigabytes of data per hour, Diamandis said, and car data is projected to generate $750 billion of revenue by 2030.
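For scale, that 25 gigabytes per hour converts to a hefty sustained data rate, which is one reason most of this data will have to be processed on the car rather than streamed off it live (decimal gigabytes assumed):

```python
# Convert the quoted per-car data volume into a sustained bitrate.
GB_PER_HOUR = 25

bytes_per_second = GB_PER_HOUR * 1e9 / 3600
megabits_per_second = bytes_per_second * 8 / 1e6

print(f"{megabits_per_second:.1f} Mbps sustained, per car")
```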

“You’re going to start asking questions that were never askable before, because the data is now there to be mined,” he said.

Increasing Human Intelligence
Indeed, we’ll have data on everything we could possibly want data on. We’ll also soon have what Diamandis calls just-in-time education, where 5G combined with artificial intelligence and augmented reality will allow you to learn something in the moment you need it. “It’s not going and studying, it’s where your AR glasses show you how to do an emergency surgery, or fix something, or program something,” he said.

We’re also at the beginning of massive investments in research working towards connecting our brains to the cloud. “Right now, everything we think, feel, hear, or learn is confined in our synaptic connections,” Diamandis said. What will it look like when that’s no longer the case? Companies like Kernel, Neuralink, Open Water, Facebook, Google, and IBM are all investing billions of dollars into brain-machine interface research.

Increasing Human Longevity
One of the most important problems we’ll use our newfound intelligence to solve is that of our own health and mortality, making 100 years old the new 60—then eventually, 120 or 150.

“Our bodies were never evolved to live past age 30,” Diamandis said. “You’d go into puberty at age 13 and have a baby, and by the time you were 26 your baby was having a baby.”

Seeing how drastically our lifespans have changed over time makes you wonder what aging even is; is it natural, or is it a disease? Many companies are treating it as one, and using technologies like senolytics, CRISPR, and stem cell therapy to try to cure it. Scaffolds of human organs can now be 3D printed then populated with the recipient’s own stem cells so that their bodies won’t reject the transplant. Companies are testing small-molecule pharmaceuticals that can stop various forms of cancer.

“We don’t truly know what’s going on inside our bodies—but we can,” Diamandis said. “We’re going to be able to track our bodies and find disease at stage zero.”

Chins Up
The world is far from perfect—that’s not hard to see. What’s less obvious but just as true is that we’re living in an amazing time. More people are coming together, and they have more access to information, and that information moves faster, than ever before.

“I don’t think any of us understand how fast the world is changing,” Diamandis said. “Most people are fearful about the future. But we should be excited about the tools we now have to solve the world’s problems.”

Image Credit: spainter_vfx / Shutterstock.com
