Tag Archives: forward

#435664 Swarm Robots Mimic Ant Jaws to Flip and ...

Small robots are appealing because they’re simple, cheap, and it’s easy to make a lot of them. Unfortunately, being simple and cheap means that each robot individually can’t do a whole lot. To make up for this, you can do what insects do—leverage that simplicity and low cost to make a huge swarm of simple robots, and together, they can cooperate to carry out relatively complex tasks.

Using insects as an example does set a bit of an unfair expectation for the poor robots, since insects are (let’s be honest) generally smarter and much more versatile than a robot at their scale could ever hope to be. Most robots with insect-like capabilities (like DASH and its family) are really too big and complex to be turned into swarms, because building a vast number of small robots rules out components like motors, which are simply too expensive.

The question, then, is how to make a swarm of inexpensive small robots with insect-like mobility that don’t need motors to get around, and Jamie Paik’s Reconfigurable Robotics Lab at EPFL has an answer, inspired by trap-jaw ants.

Let’s talk about trap-jaw ants for just a second, because they’re insane. You can read this 2006 paper about them if you’re particularly interested in insane ants (and who isn’t!), but if you just want to hear the insane bit, it’s that trap-jaw ants can fire themselves into the air by biting the ground (!). In just 0.06 milliseconds, their half-millimeter-long mandibles can close at a top speed of 64 meters per second, which works out to an acceleration of about 100,000 g’s. Biting the ground causes the ant’s head to snap back with a force of 300 times the ant’s body weight, which launches the ant upwards. The ants can fly 8 cm vertically and up to 15 cm horizontally—a lot for an ant that’s just a few millimeters long.
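That acceleration figure is easy to sanity-check from the quoted speed and strike time; here is the back-of-envelope arithmetic as a few lines of Python:

```python
# Sanity check of the quoted figures: reaching 64 m/s from rest in 0.06 ms.
G = 9.81         # standard gravity, m/s^2
dv = 64.0        # mandible tip speed, m/s
dt = 0.06e-3     # strike duration, s

accel = dv / dt          # ~1.1e6 m/s^2
print(accel / G)         # ~1.1e5 -> about 100,000 g's, as quoted
```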


EPFL’s robots, called Tribots, look nothing at all like trap-jaw ants, which personally I am fine with. They’re about 5 cm tall, weigh 10 grams each, and can be built on a flat sheet and then folded into a tripod shape, origami-style. Or maybe it’s kirigami, because there’s some cutting involved. The Tribots are fully autonomous, meaning they have onboard power and control, including proximity sensors that allow them to detect objects and avoid them.

Photo: Marc Delachaux/EPFL

EPFL researchers Zhenishbek Zhakypov and Jamie Paik.

Avoiding objects is where the trap-jaw ants come in. Using two different shape-memory actuators (a spring and a latch, similar to how the ant’s jaw works), the Tribots can move around using a bunch of different techniques that adapt to the terrain they’re on (a toy gait-selection sketch follows the list), including:

Vertical jumping for height
Horizontal jumping for distance
Somersault jumping to clear obstacles
Walking on textured terrain with short hops (called “flic-flac” walking)
Crawling on flat surfaces
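As a toy illustration of how such gait selection might look in software, here is a minimal sketch. The gait names come from the article; the sensor inputs, thresholds, and interface are my assumptions, not the RRL controller:

```python
# Toy gait selection for a Tribot-style robot. Gait names are from the
# article; inputs, thresholds, and the interface are assumptions.
from enum import Enum, auto

class Gait(Enum):
    VERTICAL_JUMP = auto()    # jump for height
    HORIZONTAL_JUMP = auto()  # jump for distance
    SOMERSAULT = auto()       # flip over an obstacle
    FLIC_FLAC = auto()        # short hops on textured terrain
    CRAWL = auto()            # flat surfaces

def select_gait(obstacle_height_cm: float, gap_length_cm: float,
                surface_textured: bool) -> Gait:
    """Pick a locomotion mode from (assumed) proximity-sensor readings."""
    if obstacle_height_cm > 10.0:     # too tall to jump onto: flip over it
        return Gait.SOMERSAULT
    if obstacle_height_cm > 2.0:
        return Gait.VERTICAL_JUMP
    if gap_length_cm > 5.0:
        return Gait.HORIZONTAL_JUMP
    return Gait.FLIC_FLAC if surface_textured else Gait.CRAWL
```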

Here’s the robot in action:

Tribot’s maximum vertical jump is 14 cm (2.5 times its height), and horizontally it can jump about 23 cm (almost 4 times its length). Tribot is actually quite efficient in these movements, with a cost of transport much lower than that of similarly sized robots, on par with insects themselves.
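Cost of transport is the standard dimensionless metric behind that efficiency claim: energy consumed divided by weight times distance traveled. A minimal helper, with illustrative (not measured) example numbers:

```python
# Cost of transport: energy spent per unit weight per unit distance.
# Lower is more efficient; this is the metric behind the comparison above.
def cost_of_transport(energy_j: float, mass_kg: float, distance_m: float,
                      g: float = 9.81) -> float:
    return energy_j / (mass_kg * g * distance_m)

# e.g., a 10 g robot spending 0.05 J to travel 0.23 m (values illustrative):
print(cost_of_transport(0.05, 0.010, 0.23))  # ~2.2, dimensionless
```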

Working together, small groups of Tribots can complete tasks that a single robot couldn’t do alone. One example is pushing a heavy object a set distance. It turns out that you need five Tribots for this task—a leader robot, two worker robots, a monitor robot to measure the distance that the object has been pushed, and then a messenger robot to relay communications around the object.

Image: EPFL

Five Tribots collaborate to move an object to a desired position, using coordination between a leader, two workers, a monitor, and a messenger robot. The leader orders the two worker robots to push the object while the monitor measures the relative position of the object. As the object blocks the two-way link between the leader and the monitor, the messenger maintains the communication link.
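As a rough sketch of that division of labor, here is a toy coordination loop. The roles follow the caption; the callback interface and relay function are assumptions for illustration, not the published controller:

```python
# Toy version of the five-robot coordination described in the caption.
# Roles come from the paper; the interfaces are assumptions.
def relay_via_messenger(reading: float) -> float:
    # The object blocks the direct leader<->monitor link, so the messenger
    # forwards the monitor's measurement to the leader.
    return reading

def push_object(goal_cm: float, command_workers, monitor_reading) -> None:
    """Leader loop: command_workers() tells worker_1/worker_2 to push one
    step; monitor_reading() returns the object's measured displacement."""
    pushed = 0.0
    while pushed < goal_cm:
        command_workers()                              # leader -> workers
        pushed = relay_via_messenger(monitor_reading())
```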

The researchers acknowledge that the current version of the hardware is limited in pretty much every way (mobility, sensing, and computation), but it does a reasonable job of demonstrating what’s possible with the concept. The plan going forward is to automate fabrication in order to enable on-demand, “push-button-manufactured” robots.

“Designing minimal and scalable insect-inspired multi-locomotion millirobots,” by Zhenishbek Zhakypov, Kazuaki Mori, Koh Hosoda, and Jamie Paik from EPFL and Osaka University, is published in the current issue of Nature.
[ RRL ] via [ EPFL ]


#435656 Will AI Be Fashion Forward—or a ...

The narrative that accompanies most stories about artificial intelligence these days is how machines will disrupt any number of industries, from healthcare to transportation. It makes sense. After all, technology already drives many of the innovations in these sectors of the economy.

But sneakers and the red carpet? The decidedly low-tech fashion industry would seem to be one of the last to turn over its creative direction to data scientists and machine learning algorithms.

However, big brands, e-commerce giants, and numerous startups are betting that AI can ingest data and spit out Chanel. Maybe it’s not surprising, given that fashion is partly about buzz and trends—and there’s nothing more buzzy and trendy in the world of tech today than AI.

In its annual survey of the $3 trillion fashion industry, consulting firm McKinsey predicted that while AI didn’t hit a “critical mass” in 2018, it would increasingly influence the business of everything from design to manufacturing.

“Fashion as an industry really has been so slow to understand its potential roles interwoven with technology. And, to be perfectly honest, the technology doesn’t take fashion seriously.” This comment comes from Zowie Broach, head of fashion at London’s Royal College of Art, who as a self-described “old fashioned” designer has embraced the disruptive nature of technology—with some caveats.

Co-founder in the late 1990s of the avant-garde fashion label Boudicca, Broach has always seen tech as a tool for designers, even setting up a website for the company circa 1998, way before an online presence became, well, fashionable.

Broach told Singularity Hub that while she is generally optimistic about the future of technology in fashion—the designer has avidly been consuming old sci-fi novels over the last few years—there are still a lot of difficult questions to answer about the interface of algorithms, art, and apparel.

For instance, can AI do what the great designers of the past have done? Fashion was “about designing, it was about a narrative, it was about meaning, it was about expression,” according to Broach.

AI that designs products based on data gleaned from human behavior can potentially tap into the Pavlovian response in consumers in order to make money, Broach noted. But is that channeling creativity, or just digitally dabbling in basic human brain chemistry?

She is concerned about people retaining control of the process, whether we’re talking about their data or their designs. But she finds it thrilling to be empowered with the insights machines could provide into, for example, the geographical nuances of fashion among Dubai, Moscow, and Toronto.

“What is it that we want the future to be from a fashion, an identity, and design perspective?” she asked.

Off on the Right Foot
Silicon Valley and some of the biggest brands in the industry offer a few answers about where AI and fashion are headed (though not at the sort of depths that address Broach’s broader questions of aesthetics and ethics).

Take what is arguably the biggest brand in fashion, at least by market cap but probably not by the measure of appearances on Oscar night: Nike. The $100 billion shoe company just gobbled up an AI startup called Celect to bolster its data analytics and optimize its inventory. In other words, Nike hopes it will be able to figure out what’s hot and what’s not in a particular location to stock its stores more efficiently.

The company is going even further with Nike Fit, a foot-scanning platform using a smartphone camera that applies AI techniques from fields like computer vision and machine learning to find the best fit for each person’s foot. The algorithms then identify and recommend the appropriately sized and shaped shoe in different styles.
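The final recommendation step can be imagined as a nearest-neighbor lookup against a size chart. Here is a minimal sketch; the chart values and matching rule are illustrative assumptions, not Nike’s actual method:

```python
# Hypothetical last step of a foot-scanning pipeline: nearest match of a
# measured foot against a size chart. All values here are made up.
SIZE_CHART_MM = {  # US size -> (foot length, foot width), illustrative
    8: (250, 98), 9: (258, 100), 10: (266, 102), 11: (274, 104),
}

def recommend_size(length_mm: float, width_mm: float) -> int:
    return min(SIZE_CHART_MM, key=lambda s:
               (SIZE_CHART_MM[s][0] - length_mm) ** 2 +
               (SIZE_CHART_MM[s][1] - width_mm) ** 2)

print(recommend_size(263.0, 101.0))  # -> 10, the closest chart entry
```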

No doubt the next step will be to 3D print personalized and on-demand sneakers at any store.

San Francisco-based startup ThirdLove is trying to bring a similar approach to bra sizes. Its 20-member data team, Fortune reported, has developed the Fit Finder quiz that uses machine learning algorithms to help pick just the right garment for every body type.

Data scientists are also a big part of the team at Stitch Fix, a former San Francisco startup that went public in 2017 and today sports a market cap of more than $2 billion. The online “personal styling” company uses hundreds of algorithms to not only make recommendations to customers, but to help design new styles and even manage the subscription-based supply chain.

Future of Fashion
E-commerce giant Amazon has thrown its own considerable resources into developing AI applications for retail fashion—with mixed results.

One notable attempt involved a “styling assistant” that came with the company’s Echo Look camera, which helped people catalog and manage their wardrobes, even helping pick out each day’s attire. The company more recently revisited the direct-to-consumer side of AI with an app called StyleSnap, which matches clothes and accessories uploaded to the site with the retailer’s vast inventory and recommends similar styles.
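Under the hood, a StyleSnap-like feature presumably reduces to image retrieval: embed the uploaded photo and rank catalog items by similarity. A minimal sketch, with the embeddings assumed to already exist:

```python
# Sketch of StyleSnap-style matching: rank catalog items by cosine
# similarity to the uploaded photo's embedding. How Amazon actually
# computes embeddings is not public; these are random placeholders.
import numpy as np

def top_matches(query_vec: np.ndarray, catalog_vecs: np.ndarray, k: int = 3):
    q = query_vec / np.linalg.norm(query_vec)
    c = catalog_vecs / np.linalg.norm(catalog_vecs, axis=1, keepdims=True)
    return np.argsort(-(c @ q))[:k]  # indices of the k most similar items

catalog = np.random.rand(100, 128)        # 100 items, 128-d embeddings
print(top_matches(np.random.rand(128), catalog))
```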

Behind the curtains, Amazon is going even further. A team of researchers in Israel has developed algorithms that can deduce whether a particular look is stylish based on a few labeled images. Another group at the company’s San Francisco research center was working on tech that could generate new designs of items based on images of a particular style the algorithms were trained on.

“I will say that the accumulation of many new technologies across the industry could manifest in a highly specialized style assistant, far better than the examples we’ve seen today. However, the most likely thing is that the least sexy of the machine learning work will become the most impactful, and the public may never hear about it.”

That prediction is from an online interview with Leanne Luce, a fashion technology blogger and product manager at Google who recently wrote a book called, succinctly enough, Artificial Intelligence and Fashion.

Data Meets Design
Academics are also sticking their beakers into AI and fashion. Researchers at the University of California, San Diego, and Adobe Research have previously demonstrated that neural networks, a type of AI designed to mimic some aspects of the human brain, can be trained to generate (i.e., design) new product images to match a buyer’s preference, much like the team at Amazon.
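In highly simplified form, preference-conditioned generation feeds a noise vector plus a user-preference vector into a decoder that produces an image. Here is a toy PyTorch sketch; the architecture and dimensions are assumptions for illustration, not taken from the paper:

```python
# Toy preference-conditioned generator in the spirit of the UCSD/Adobe
# work: map (noise, preference vector) to an image. Sizes are illustrative.
import torch
import torch.nn as nn

class CondGenerator(nn.Module):
    def __init__(self, noise_dim=64, pref_dim=16, img_pixels=32 * 32 * 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(noise_dim + pref_dim, 256), nn.ReLU(),
            nn.Linear(256, img_pixels), nn.Tanh(),  # pixel values in [-1, 1]
        )

    def forward(self, noise, pref):
        return self.net(torch.cat([noise, pref], dim=1))

g = CondGenerator()
img = g(torch.randn(1, 64), torch.randn(1, 16))  # one flat 32x32 RGB image
```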

Meanwhile, scientists at Hong Kong Polytechnic University are working with China’s answer to Amazon, Alibaba, on developing a FashionAI Dataset to help machines better understand fashion. The effort will focus on how algorithms approach certain building blocks of design: so-called “key points” such as neckline and waistline, and “fashion attributes” like collar types and skirt styles.
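A single annotation in such a dataset might pair landmark coordinates with categorical attributes. The record below is a guess at the shape of the data based on the description above, not the actual FashionAI schema:

```python
# Guess at a FashionAI-style annotation record, built from the "key points"
# and "attributes" described above; not the real dataset schema.
record = {
    "image": "dress_00123.jpg",
    "keypoints": {                     # (x, y) pixel landmarks
        "neckline_left":  (112, 60),
        "neckline_right": (148, 62),
        "waistline_left":  (98, 210),
        "waistline_right": (160, 212),
    },
    "attributes": {
        "collar_type": "v_neck",       # one of several collar categories
        "skirt_style": "a_line",
    },
}
```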

The man largely behind the university’s research team is Calvin Wong, a professor and associate head of Hong Kong Polytechnic University’s Institute of Textiles and Clothing. His group has also developed an “intelligent fabric defect detection system” called WiseEye for quality control, reducing the chance of producing substandard fabric by 90 percent.
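At its simplest, camera-based fabric inspection can be caricatured as comparing frames against a defect-free reference. A toy sketch, with the method and threshold assumed rather than taken from WiseEye:

```python
# Caricature of fabric inspection: flag frames that deviate strongly from
# a defect-free reference image. Not how WiseEye actually works.
import numpy as np

def flag_defect(frame: np.ndarray, reference: np.ndarray,
                threshold: float = 25.0) -> bool:
    diff = np.abs(frame.astype(float) - reference.astype(float))
    return float(diff.mean()) > threshold  # large mean deviation -> flag it
```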

Wong and company also recently inked an agreement with RCA to establish an AI-powered design laboratory, though the details of that venture have yet to be worked out, according to Broach.

One hope is that such collaborations will not just get at the technological challenges of using machines in creative endeavors like fashion, but will also address the more personal relationships humans have with their machines.

“I think who we are, and how we use AI in fashion, as our identity, is not a superficial skin. It’s very, very important for how we define our future,” Broach said.

Image Credit: Inspirationfeed / Unsplash


#435642 Drone X Challenge 2020

Krypto Labs opens applications for Drone X Challenge 2020 Phase II, a US$1.5+ Million Global Challenge (US$1 Million Final Prize and US$500,000+ in R&D Grants)

In its most rewarding initiative to date, Krypto Labs, the global innovation hub with a unique ecosystem for funding groundbreaking startups, has announced the opening of Phase II of Drone X Challenge (DXC) 2020, the global multimillion-dollar challenge that is pushing the frontiers of innovation in drone technologies, focusing on high payload capacity and high flight endurance.

Drone X Challenge 2020 is open to entrepreneurs, startups, researchers, university students, and established companies. Teams applying for Drone X Challenge 2020 Phase II will have to develop a drone system capable of achieving the minimum endurance and payload for the category they are applying to.

Categories:

Battery-powered fixed-wing drones
Hybrid/hydrocarbon-powered fixed-wing drones
Battery-powered multi-rotor drones
Hybrid/hydrocarbon-powered multi-rotor drones

Drone X Challenge 2020 is divided into three phases plus a final event, with a US$1 million final prize. The outstanding applications that meet the requirements of Phase II will collectively receive US$300,000 in R&D grants.

Phase I, which required applicants to submit a technical proposal detailing the design of a drone capable of meeting the minimum payload and endurance requirements, awarded US$320,000 in R&D grants to its shortlisted teams.

The shortlisted teams of Drone X Challenge 2020 Phase I are:

RigiTech from Switzerland
Forward Robotics from Canada
Industrial Technology Research Institute (ITRI) from Taiwan
KopterKraft from Germany
DV8 Tech from USA
Richen Power from China
Vulcan UAV Ltd from the UK

Dr. Saleh Al Hashemi, Managing Director of Krypto Labs said: “This competition aligns with our efforts in contributing to the development of drone technology globally. We aim to redefine the way drone technologies are impacting our lives, and Krypto Labs is proud to be leading the way in the region by supporting startups, established companies, and industries involved in the field of drone development. By catalyzing and supporting these cutting-edge solutions, we aim to continue leveraging disruptive technologies that can create value and make an impact.”

For more information about Drone X Challenge 2020, please visit https://dronexchallenge2020.com.


#435634 Robot Made of Clay Can Sculpt Its Own ...

We’re very familiar with a wide variety of transforming robots—whether for submarines or drones, transformation is a way of making a single robot adaptable to different environments or tasks. Usually, these robots are restricted to a discrete number of configurations—perhaps two or three different forms—because of the constraints imposed by the rigid structures that robots are typically made of.

Soft robotics has the potential to change all this, with robots that don’t have fixed forms but instead can transform themselves into whatever shape will enable them to do what they need to do. At ICRA in Montreal earlier this year, researchers from Yale University demonstrated a creative approach toward a transforming robot powered by string and air, with a body made primarily out of clay.

Photo: Evan Ackerman

The robot is actuated by two different kinds of “skin,” one layered on top of another. There’s a locomotion skin, made of a pattern of pneumatic bladders that can roll the robot forward or backward when the bladders are inflated sequentially. On top of that is the morphing skin, which is cable-driven, and can sculpt the underlying material into a variety of shapes, including spheres, cylinders, and dumbbells. The robot itself consists of both of those skins wrapped around a chunk of clay, with the actuators driven by offboard power and control. Here it is in action:

The Yale researchers have been experimenting with morphing robots that use foams and tensegrity structures for their bodies, but those materials provide a “restoring force,” springing back into their original shape once the actuation stops. Clay is different because it holds whatever shape it’s formed into, making the robot more energy efficient. And if the dumbbell shape stops being useful, the morphing layer can just squeeze it back into a cylinder or a sphere.
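Going back to the locomotion skin for a moment: the sequential-inflation rolling gait is simple enough to sketch as a control loop. Here is a toy version, assuming a hypothetical valve interface rather than the Yale team’s actual offboard setup:

```python
# Toy control loop for the rolling gait: inflate the bladders one after
# another so the body keeps tipping forward. The valve interface is
# hypothetical, for illustration only.
import time

def roll(valves, forward: bool = True, step_s: float = 0.5, cycles: int = 3):
    order = valves if forward else list(reversed(valves))
    for _ in range(cycles):
        for valve in order:
            valve.inflate()
            time.sleep(step_s)   # let the body tip over the inflated bladder
            valve.deflate()
```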

While this robot, and the sample transformation shown in the video, are relatively simplistic, the researchers suggest some ways in which a more complex version could be used in the future:

Photo: IEEE Xplore

This robot’s morphing skin sculpts its clay body into different shapes.

Applications where morphing and locomotion might serve as complementary functions are abundant. For the example skins presented in this work, a search-and-rescue operation could use the clay as a medium to hold a payload such as sensors or transmitters. More broadly, applications include resource-limited conditions where supply chains for materiel are sparse. For example, the morphing sequence shown in Fig. 4 [above] could be used to transform from a rolling sphere to a pseudo-jointed robotic arm. With such a morphing system, it would be possible to robotically morph matter into different forms to perform different functions.


“Morphing Robots Using Robotic Skins That Sculpt Clay,” by Dylan S. Shah, Michelle C. Yuen, Liana G. Tilton, Ellen J. Yang, and Rebecca Kramer-Bottiglio from Yale University, was presented at ICRA 2019 in Montreal.

[ Yale Faboratory ]



#435632 DARPA Subterranean Challenge: Tunnel ...

The Tunnel Circuit of the DARPA Subterranean Challenge starts later this week at the NIOSH research mine just outside of Pittsburgh, Pennsylvania. From 15-22 August, 11 teams will send robots into a mine that they've never seen before, with the goal of making maps and locating items. All DARPA SubT events involve tunnels of one sort or another, but in this case, the “Tunnel Circuit” refers to mines as opposed to urban underground areas or natural caves. This month’s challenge is the first of three discrete events leading up to a huge final event in August of 2021.

While the Tunnel Circuit competition will be closed to the public, and media are only allowed access for a single day (which we'll be at, of course), DARPA has provided a substantial amount of information about what teams will be able to expect. We also have details from the SubT Integration Exercise, called STIX, which was a completely closed event that took place back in April. STIX was aimed at giving some teams (and DARPA) a chance to practice in a real tunnel environment.

For more general background on SubT, here are some articles to get you all caught up:

SubT: The Next DARPA Challenge for Robotics

Q&A with DARPA Program Manager Tim Chung

Meet The First Nine Teams

It makes sense to take a closer look at what happened at April's STIX exercise, because it is (probably) very similar to what teams will experience in the upcoming Tunnel Circuit. STIX took place at Edgar Experimental Mine in Colorado, and while no two mines are the same (and many are very, very different), there are enough similarities for STIX to have been a valuable experience for teams. Here's an overview video of the exercise from DARPA:

DARPA has also put together a much more detailed walkthrough of the STIX mine exercise, which gives you a sense of just how vast, complicated, and (frankly) challenging for robots the mine environment is:

So, that's the kind of thing that teams had to deal with back in April. Since the event was an exercise, rather than a competition, DARPA didn't really keep score, and wouldn't comment on the performance of individual teams. We've been trolling YouTube for STIX footage, though, to get a sense of how things went, and we found a few interesting videos.

Here's a nice overview from Team CERBERUS, which used drones plus an ANYmal quadruped:

Team CTU-CRAS also used drones, along with a tracked robot:

Team Robotika was brave enough to post video of a “fatal failure” experienced by its wheeled robot; the poor little bot gets rescued at about 7:00 in case you get worried:

So that was STIX. But what about the Tunnel Circuit competition this week? Here's a course preview video from DARPA:

It sort of looks like the NIOSH mine might be a bit less dusty than the Edgar mine was, but it could also be wetter and muddier. It’s hard to tell, because we’re just getting a few snapshots of what’s probably an enormous area with kilometers of tunnels that the robots will have to explore. But DARPA has promised “constrained passages, sharp turns, large drops/climbs, inclines, steps, ladders, and mud, sand, and/or water.” Combine that with the serious challenge to communications imposed by the mine itself, and robots will have to be both physically capable and almost entirely autonomous. Which is, of course, exactly what DARPA is looking to test with this challenge.

Lastly, we had a chance to catch up with Tim Chung, Program Manager for the Subterranean Challenge at DARPA, and ask him a few brief questions about STIX and what we have to look forward to this week.

IEEE Spectrum: How did STIX go?

Tim Chung: It was a lot of fun! I think it gave a lot of the teams a great opportunity to really get a taste of what these types of real world environments look like, and also what DARPA has in store for them in the SubT Challenge. STIX I saw as an experiment—a learning experience for all the teams involved (as well as the DARPA team) so that we can continue our calibration.

What do you think teams took away from STIX, and what do you think DARPA took away from STIX?

I think the thing that teams took away was that, when DARPA hosts a challenge, we have very audacious visions for what the art of the possible is. And that's what we want—in my mind, the purpose of a DARPA Grand Challenge is to provide that inspiration of, ‘Holy cow, someone thinks we can do this!’ So I do think the teams walked away with a better understanding of what DARPA's vision is for the capabilities we're seeking in the SubT Challenge, and hopefully walked away with a better understanding of the technical, physical, even maybe mental challenges of doing this in the wild— which will all roll back into how they think about the problem, and how they develop their systems.

This was a collaborative exercise, so the DARPA field team was out there interacting with the other engineers, figuring out what their strengths and weaknesses and needs might be, and even understanding how to handle the robots themselves. That will help [strengthen] connections between these university teams and DARPA going forward. Across the board, I think that collaborative spirit is something we really wish to encourage, and something that the DARPA folks were able to take away.

What do we have to look forward to during the Tunnel Circuit?

The vision here is that the Tunnel Circuit is representative of one of the three subterranean subdomains, along with urban and cave. Characteristics of all of these three subdomains will be mashed together in an epic final course, so that teams will have to face hints of tunnel once again in that final event.

Without giving too much away, the NIOSH mine will be similar to the Edgar mine in that it's a human-made environment that supports mining operations and research. But of course, every site is different, and these differences, I think, will provide good opportunities for the teams to shine.

Again, we'll be visiting the NIOSH mine in Pennsylvania during the Tunnel Circuit and will post as much as we can from there. But if you’re an actual participant in the Subterranean Challenge, please tweet me @BotJunkie so that I can follow and help share live updates.

[ DARPA Subterranean Challenge ]
