Tag Archives: LED

#438014 Meet Blueswarm, a Smart School of ...

Anyone who’s seen an undersea nature documentary has marveled at the complex choreography that schooling fish display, a darting, synchronized ballet with a cast of thousands.

Those instinctive movements have inspired researchers at the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) and the Wyss Institute for Biologically Inspired Engineering. The results could improve the performance and dependability of not just underwater robots, but also other machines that require decentralized locomotion and organization, such as self-driving cars and robots for space exploration.

The fish collective called Blueswarm was created by a team led by Radhika Nagpal, whose lab is a pioneer in self-organizing systems. The oddly adorable robots can sync their movements like biological fish, taking cues from their plastic-bodied neighbors with no external controls required. Nagpal told IEEE Spectrum that this marks a milestone, demonstrating complex 3D behaviors with implicit coordination in underwater robots.

“Insights from this research will help us develop future miniature underwater swarms that can perform environmental monitoring and search in visually-rich but fragile environments like coral reefs,” Nagpal said. “This research also paves a way to better understand fish schools, by synthetically recreating their behavior.”

The research is published in Science Robotics, with Florian Berlinger as first author. Berlinger said the “Bluebot” robots integrate a trio of blue LED lights, a lithium-polymer battery, a pair of cameras, a Raspberry Pi computer and four controllable fins within a 3D-printed hull. The fish-eye lens cameras detect the LEDs of their fellow swimmers, and apply a custom algorithm to calculate distance, direction and heading.
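The paper's exact detection pipeline isn't reproduced here, but the underlying geometry is simple enough to sketch: if a robot knows the physical spacing between a neighbor's LEDs, the pixel gap between the detected blobs gives range via a pinhole-camera model, and the blobs' position in the frame gives bearing. The Python snippet below is an illustrative approximation only; the focal length, LED spacing, field of view, and the assumption that two LEDs sit one above the other are invented values, not Blueswarm's.

```python
# Minimal sketch (not Blueswarm's actual code): estimating the range and
# bearing of a neighboring robot from two detected LED blobs, assuming a
# simple pinhole camera model and a known vertical spacing between the LEDs.

FOCAL_LENGTH_PX = 700.0   # assumed camera focal length, in pixels
LED_SPACING_M = 0.086     # assumed physical distance between the two LEDs
IMAGE_CENTER_X = 320.0    # assumed image width of 640 px
HFOV_DEG = 180.0          # fish-eye lenses cover a very wide field of view

def estimate_neighbor(led_top, led_bottom):
    """led_top / led_bottom are (x, y) pixel coordinates of the two blue LEDs."""
    pixel_gap = abs(led_bottom[1] - led_top[1])
    if pixel_gap == 0:
        return None  # LEDs overlap: neighbor too far away or detection failed
    # Pinhole model: apparent size shrinks linearly with distance.
    distance_m = FOCAL_LENGTH_PX * LED_SPACING_M / pixel_gap
    # Bearing: horizontal offset from the image center, mapped to an angle
    # (a crude linear mapping; a real fish-eye lens needs proper calibration).
    mid_x = (led_top[0] + led_bottom[0]) / 2.0
    bearing_deg = (mid_x - IMAGE_CENTER_X) / IMAGE_CENTER_X * (HFOV_DEG / 2.0)
    return distance_m, bearing_deg

print(estimate_neighbor((310, 200), (312, 260)))  # roughly 1 m away, slightly to the left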

Based on that simple production and detection of LED light, the team proved that Blueswarm could self-organize behaviors, including aggregation, dispersal and circle formation—basically, swimming in clockwise synchrony. Researchers also simulated a successful search mission, an autonomous Finding Nemo. Using their dispersion algorithm, the robot school spread out until one could detect a red light in the tank. Its blue LEDs then flashed, triggering the aggregation algorithm to gather the school around it. Such a robot swarm might prove valuable in search-and-rescue missions at sea, covering miles of open water and reporting back to its mates.
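Blueswarm's actual control laws aren't published in the excerpt above, but the decentralized logic can be sketched in a few lines: each robot reacts only to the neighbors it can see, steering away from them to disperse and toward a flashing robot to aggregate. The toy simulation below, with made-up sensing radius, speeds and robot count, shows roughly how such local rules produce collective behavior without a central controller.

```python
import random

# Toy simulation (not the published Blueswarm controller) of the behaviors
# described above. Each robot reacts only to what it can see: it steers away
# from nearby neighbors to disperse, and swims toward a flashing robot to
# aggregate. All constants here are illustrative assumptions.

SENSE_RADIUS = 3.0   # how far a robot can see a neighbor's steady LEDs
STEP = 0.2           # distance moved per control step

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def move_toward(me, goal, sign=1.0):
    d = dist(me, goal) or 1.0
    return (me[0] + sign * STEP * (goal[0] - me[0]) / d,
            me[1] + sign * STEP * (goal[1] - me[1]) / d)

def disperse(robots):
    out = []
    for i, me in enumerate(robots):
        near = [r for j, r in enumerate(robots) if j != i and dist(me, r) < SENSE_RADIUS]
        if not near:
            out.append(me)  # nobody in sight: hold position
            continue
        centroid = (sum(r[0] for r in near) / len(near),
                    sum(r[1] for r in near) / len(near))
        out.append(move_toward(me, centroid, sign=-1.0))  # swim away from the crowd
    return out

swarm = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(7)]
for _ in range(40):
    swarm = disperse(swarm)          # search phase: spread out over the tank

# Suppose robot 0 has spotted the red light; it flashes its LEDs, which the
# others can see from across the tank, and the school gathers around it.
finder = swarm[0]
for _ in range(60):
    swarm = [finder] + [move_toward(r, finder) for r in swarm[1:]]
print("school gathered within", round(max(dist(r, finder) for r in swarm), 2), "units of the finder")
```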

“Each Bluebot implicitly reacts to its neighbors’ positions,” Berlinger said. The fish—RoboCod, perhaps?—also integrate a Wi-Fi module that allows new behaviors to be uploaded remotely. The lab’s previous efforts include a 1,000-strong army of “Kilobots,” and a robotic construction crew inspired by termites. Both projects operated in two-dimensional space. But a 3D environment like air or water poses a tougher challenge for sensing and movement.

In nature, Berlinger notes, there’s no scaly CEO to direct the school’s movements. Nor do fish communicate their intentions. Instead, so-called “implicit coordination” guides the school’s collective behavior, with individual members executing high-speed moves based on what they see their neighbors doing. That decentralized, autonomous organization has long fascinated scientists, including in robotics.

“In these situations, it really benefits you to have a highly autonomous robot swarm that is self-sufficient. By using implicit rules and 3D visual perception, we were able to create a system with a high degree of autonomy and flexibility underwater where things like GPS and WiFi are not accessible.”

Berlinger adds the research could one day translate to anything that requires decentralized robots, from self-driving cars and Amazon warehouse vehicles to exploration of faraway planets, where high latency makes it impossible to transmit commands quickly. Today’s semi-autonomous cars face their own technical hurdles in reliably sensing and responding to their complex environments, including when foul weather obscures onboard sensors or road markers, or when they can’t fix position via GPS. An entire subset of autonomous-car research involves vehicle-to-vehicle (V2V) communications that could give cars a hive mind to guide individual or collective decisions—avoiding snarled traffic, driving safely in tight convoys, or taking group evasive action during a crash that’s beyond their sensory range.

“Once we have millions of cars on the road, there can’t be one computer orchestrating all the traffic, making decisions that work for all the cars,” Berlinger said.

The miniature robots could also work long hours in places that are inaccessible to human divers or even to large tethered robots. Nagpal said the synthetic swimmers could monitor and collect data on reefs or underwater infrastructure 24/7, and work their way into tiny places without disturbing fragile equipment or ecosystems.

“If we could be as good as fish in that environment, we could collect information and be non-invasive, in cluttered environments where everything is an obstacle,” Nagpal said. Continue reading

Posted in Human Robots

#437988 Bio-inspired robotics: Learning ...

It is a high-speed movement: within fractions of a second, the mouthparts of the dragonfly larva spring forward to seize its prey. For decades, researchers had assumed that this action must be driven primarily by hydraulic pressure. Now, for the first time, scientists at Kiel University (CAU) have completely decrypted the biomechanical functional principle of what is known as the labial mask of dragonfly larvae. A vital contribution to this discovery came from the team led by Dr. Sebastian Büsse of the Zoological Institute, which developed a bio-inspired robot that adapts the operating principle of the complex mouthparts in order to test its own hypothesis—the technology used here could lead to a significant enhancement of agile robot systems. The results of the ambitious research project were published on Wednesday 20 January in the renowned specialist journal Science Robotics. Continue reading

Posted in Human Robots

#437964 How Explainable Artificial Intelligence ...

The field of artificial intelligence has created computers that can drive cars, synthesize chemical compounds, fold proteins, and detect high-energy particles at a superhuman level.

However, these AI algorithms cannot explain the thought processes behind their decisions. A computer that masters protein folding and also tells researchers more about the rules of biology is much more useful than a computer that folds proteins without explanation.

Therefore, AI researchers like me are now turning our efforts toward developing AI algorithms that can explain themselves in a manner that humans can understand. If we can do this, I believe that AI will be able to uncover and teach people new facts about the world that have not yet been discovered, leading to new innovations.

Learning From Experience
One field of AI, called reinforcement learning, studies how computers can learn from their own experiences. In reinforcement learning, an AI explores the world, receiving positive or negative feedback based on its actions.
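For readers who want to see that feedback loop in code, here is a minimal tabular Q-learning example on a toy task. It is only a sketch of the general idea; the author's own research relies on deep reinforcement learning at far larger scale, and every constant below is an arbitrary choice.

```python
import random

# Minimal tabular Q-learning sketch of the explore-and-get-feedback loop
# described above. Toy task: walk along positions 0..5 and reach position 5.

N_STATES, GOAL = 6, 5
ACTIONS = [-1, +1]                       # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(500):
    state = 0
    while state != GOAL:
        # Explore occasionally, otherwise act greedily on current estimates.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else -0.01   # the feedback signal
        # Nudge the value estimate toward reward plus discounted future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
        state = next_state

# The learned policy: which direction the agent prefers in each position.
print({s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)})
```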

This approach has led to algorithms that have independently learned to play chess at a superhuman level and prove mathematical theorems without any human guidance. In my work as an AI researcher, I use reinforcement learning to create AI algorithms that learn how to solve puzzles such as the Rubik’s Cube.

Through reinforcement learning, AIs are independently learning to solve problems that even humans struggle to figure out. This has got me and many other researchers thinking less about what AI can learn and more about what humans can learn from AI. A computer that can solve the Rubik’s Cube should be able to teach people how to solve it, too.

Peering Into the Black Box
Unfortunately, the minds of superhuman AIs are currently out of reach to us humans. AIs make terrible teachers and are what we in the computer science world call “black boxes.”

AI simply spits out solutions without giving reasons for them. Computer scientists have been trying for decades to open this black box, and recent research has shown that many AI algorithms actually do think in ways that are similar to humans. For example, a computer trained to recognize animals will learn about different types of eyes and ears and will put this information together to correctly identify the animal.

The effort to open up the black box is called explainable AI. My research group at the AI Institute at the University of South Carolina is interested in developing explainable AI. To accomplish this, we work heavily with the Rubik’s Cube.

The Rubik’s Cube is basically a pathfinding problem: Find a path from point A—a scrambled Rubik’s Cube—to point B—a solved Rubik’s Cube. Other pathfinding problems include navigation, theorem proving and chemical synthesis.
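To make the pathfinding framing concrete, here is a small best-first (A*) search on the 8-puzzle, a much simpler cousin of the Rubik's Cube whose states fit in a few lines of code. The structure mirrors what a cube solver does; this sketch just uses a hand-written Manhattan-distance heuristic rather than anything learned, and is not the author's algorithm.

```python
import heapq

# Illustrative A* search from a scrambled state (point A) to the solved
# state (point B) on the 8-puzzle. The Rubik's Cube version has the same
# shape, only with cube states, cube moves, and a far stronger heuristic.

GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)   # 0 is the blank tile

def neighbors(state):
    """All states reachable by sliding one tile into the blank."""
    i = state.index(0)
    r, c = divmod(i, 3)
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < 3 and 0 <= nc < 3:
            j = nr * 3 + nc
            s = list(state)
            s[i], s[j] = s[j], s[i]
            yield tuple(s)

def heuristic(state):
    """Sum of Manhattan distances of tiles from their goal positions."""
    total = 0
    for idx, tile in enumerate(state):
        if tile:
            goal_idx = tile - 1
            total += abs(idx // 3 - goal_idx // 3) + abs(idx % 3 - goal_idx % 3)
    return total

def solve(start):
    frontier = [(heuristic(start), 0, start, [start])]
    seen = {start: 0}
    while frontier:
        _, cost, state, path = heapq.heappop(frontier)
        if state == GOAL:
            return path                       # the sequence of states from A to B
        for nxt in neighbors(state):
            if nxt not in seen or cost + 1 < seen[nxt]:
                seen[nxt] = cost + 1
                heapq.heappush(frontier,
                               (cost + 1 + heuristic(nxt), cost + 1, nxt, path + [nxt]))
    return None

print(len(solve((1, 2, 3, 4, 5, 6, 0, 7, 8))) - 1, "moves")   # a 2-move scramble
```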

My lab has set up a website where anyone can see how our AI algorithm solves the Rubik’s Cube; however, a person would be hard-pressed to learn how to solve the cube from this website. This is because the computer cannot tell you the logic behind its solutions.

Solutions to the Rubik’s Cube can be broken down into a few generalized steps—the first step, for example, could be to form a cross while the second step could be to put the corner pieces in place. While the Rubik’s Cube itself has over 10 to the 19th power possible combinations, a generalized step-by-step guide is very easy to remember and is applicable in many different scenarios.

Approaching a problem by breaking it down into steps is often the default manner in which people explain things to one another. The Rubik’s Cube naturally fits into this step-by-step framework, which gives us the opportunity to open the black box of our algorithm more easily. Creating AI algorithms that have this ability could allow people to collaborate with AI and break down a wide variety of complex problems into easy-to-understand steps.

A step-by-step refinement approach can make it easier for humans to understand why AIs do the things they do. Forest Agostinelli, CC BY-ND

Collaboration Leads to Innovation
Our process starts with using one’s own intuition to define a step-by-step plan thought to potentially solve a complex problem. The algorithm then looks at each individual step and gives feedback about which steps are possible, which are impossible and ways the plan could be improved. The human then refines the initial plan using the advice from the AI, and the process repeats until the problem is solved. The hope is that the person and the AI will eventually converge to a kind of mutual understanding.
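The group's actual interface isn't described in enough detail here to reproduce, but the shape of that loop can be illustrated on a toy problem: the human proposes intermediate subgoals, and an automated checker reports which steps it can bridge and how, so the human can revise the plan. Every number, move set and function name in the snippet below is invented for illustration; the real system works on cube states and learned models, not arithmetic.

```python
from collections import deque

# Illustrative sketch of the human/AI plan-refinement loop described above.
# Toy puzzle: reach a target number from 1 using only the moves +1 and *2.
# The human supplies intermediate subgoals; the checker says whether each
# step is achievable and, if so, which moves bridge it.

MOVES = {"+1": lambda x: x + 1, "*2": lambda x: x * 2}

def bridge(start, goal, max_depth=8):
    """Bounded breadth-first search from one subgoal to the next."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        value, path = queue.popleft()
        if value == goal:
            return path
        if len(path) >= max_depth:
            continue
        for name, fn in MOVES.items():
            nxt = fn(value)
            if nxt <= goal * 2 and nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [name]))
    return None  # this step of the plan cannot be achieved within the bound

def review_plan(subgoals):
    feedback = []
    for a, b in zip(subgoals, subgoals[1:]):
        path = bridge(a, b)
        if path is None:
            verdict = "not reachable within the search bound"
        else:
            verdict = "achievable via " + ", ".join(path)
        feedback.append((a, b, verdict))
    return feedback

# First draft of a human plan; the checker flags the step it cannot bridge,
# and the revised plan in the second pass fixes it -- the loop described above.
for plan in ([1, 6, 3, 24], [1, 6, 12, 24]):
    print(plan, "->", review_plan(plan))
```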

Currently, our algorithm is able to consider a human plan for solving the Rubik’s Cube, suggest improvements to the plan, recognize plans that do not work and find alternatives that do. In doing so, it gives feedback that leads to a step-by-step plan for solving the Rubik’s Cube that a person can understand. Our team’s next step is to build an intuitive interface that will allow our algorithm to teach people how to solve the Rubik’s Cube. Our hope is to generalize this approach to a wide range of pathfinding problems.

People are intuitive in a way unmatched by any AI, but machines far outstrip us in computational power and algorithmic rigor. This back and forth between man and machine utilizes the strengths of both. I believe this type of collaboration will shed light on previously unsolved problems in everything from chemistry to mathematics, leading to new solutions, intuitions and innovations that may otherwise have been out of reach.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Serg Antonov / Unsplash Continue reading

Posted in Human Robots

#437884 Hyundai Buys Boston Dynamics for Nearly ...

This morning just after 3 a.m. ET, Boston Dynamics sent out a media release confirming that Hyundai Motor Group has acquired a controlling interest in the company that values Boston Dynamics at US $1.1 billion:

Under the agreement, Hyundai Motor Group will hold an approximately 80 percent stake in Boston Dynamics and SoftBank, through one of its affiliates, will retain an approximately 20 percent stake in Boston Dynamics after the closing of the transaction.

The release is very long, but does have some interesting bits—we’ll go through them, and talk about what this might mean for both Boston Dynamics and Hyundai.

We’ve asked Boston Dynamics for comment, but they’ve been unusually quiet for the last few days (I wonder why!). So at this point just keep in mind that the only things we know for sure are the ones in the release. If (when?) we hear anything from either Boston Dynamics or Hyundai, we’ll update this post.

The first thing to be clear on is that the acquisition is split between Hyundai Motor Group’s affiliates, including Hyundai Motor, Hyundai Mobis, and Hyundai Glovis. Hyundai Motor makes cars, Hyundai Mobis makes car parts and seems to be doing some autonomous stuff as well, and Hyundai Glovis does logistics. There are many other groups that share the Hyundai name, but they’re separate entities, at least on paper. For example, there’s a Hyundai Robotics, but that’s part of Hyundai Heavy Industries, a different company than Hyundai Motor Group. But for this article, when we say “Hyundai,” we’re talking about Hyundai Motor Group.

What’s in it for Hyundai?
Let’s get into the press release, which is filled with press release-y terms like “synergies” and “working together”—you can view the whole thing here—but still has some parts that convey useful info.

By establishing a leading presence in the field of robotics, the acquisition will mark another major step for Hyundai Motor Group toward its strategic transformation into a Smart Mobility Solution Provider. To propel this transformation, Hyundai Motor Group has invested substantially in development of future technologies, including in fields such as autonomous driving technology, connectivity, eco-friendly vehicles, smart factories, advanced materials, artificial intelligence (AI), and robots.

If Hyundai wants to be a “Smart Mobility Solution Provider” with a focus on vehicles, it really seems like there’s a whole bunch of other ways they could have spent most of a billion dollars that would get them there quicker. Will Boston Dynamics’ expertise help them develop autonomous driving technology? Sure, I guess, but why not just buy an autonomous car startup instead? Boston Dynamics is more about “robots,” which happens to be dead last on the list above.

There was some speculation a couple of weeks ago that Hyundai was going to try and leverage Boston Dynamics to make a real version of this hybrid wheeled/legged concept car, so if that’s what Hyundai means by “Smart Mobility Solution Provider,” then I suppose the Boston Dynamics acquisition makes more sense. Still, I think that’s unlikely, because it’s just a concept car, after all.

In addition to “smart mobility,” which seems like a longer-term goal for Hyundai, the company also mentions other, more immediate benefits from the acquisition:

Advanced robotics offer opportunities for rapid growth with the potential to positively impact society in multiple ways. Boston Dynamics is the established leader in developing agile, mobile robots that have been successfully integrated into various business operations. The deal is also expected to allow Hyundai Motor Group and Boston Dynamics to leverage each other’s respective strengths in manufacturing, logistics, construction and automation.

“Successfully integrated” might be a little optimistic here. They’re talking about Spot, of course, but I think the best you could say at this point is that Spot is in the middle of some promising pilot projects. Whether it’ll be successfully integrated in the sense that it’ll have long-term commercial usefulness and value remains to be seen. I’m optimistic about this as well, but Spot is definitely not there yet.

What does probably hold a lot of value for Hyundai is getting Spot, Pick, and perhaps even Handle into that “manufacturing, logistics, construction” stuff. This is the bread and butter for robots right now, and Boston Dynamics has plenty of valuable technology to offer in those spaces.

Photo: Bob O’Connor

Boston Dynamics is selling Spot for $74,500, shipping included.

Betting on Spot and Pick
With Boston Dynamics founder Marc Raibert’s transition to Chairman of the company, the CEO position is now occupied by Robert Playter, the long-time VP of engineering and more recently COO at Boston Dynamics. Here’s his statement from the release:

“Boston Dynamics’ commercial business has grown rapidly as we’ve brought to market the first robot that can automate repetitive and dangerous tasks in workplaces designed for human-level mobility. We and Hyundai share a view of the transformational power of mobility and look forward to working together to accelerate our plans to enable the world with cutting edge automation, and to continue to solve the world’s hardest robotics challenges for our customers.”

Whether Spot is in fact “the first robot that can automate repetitive and dangerous tasks in workplaces designed for human-level mobility” on the market is perhaps something that could be argued against, although I won’t. Whether or not it was the first robot that can do these kinds of things, it’s definitely not the only robot that can do these kinds of things, and going forward, it’s going to be increasingly challenging for Spot to maintain its uniqueness.

For a long time, Boston Dynamics totally owned the quadruped space. Now, they’re one company among many—ANYbotics and Unitree are just two examples of other quadrupeds that are being successfully commercialized. Spot is certainly very capable and easy to use, and we shouldn’t underestimate the effort required to create a robot as complex as Spot that can be commercially used and supported. But it’s not clear how long they’ll maintain that advantage, with much more affordable platforms coming out of Asia, and other companies offering some unique new capabilities.

Photo: Boston Dynamics

Boston Dynamics’ Handle is an all-electric robot featuring a leg-wheel hybrid mobility system, a manipulator arm with a vacuum gripper, and a counterbalancing tail.

Boston Dynamics’ picking system, which stemmed from their 2019 acquisition of Kinema Systems, faces the same kinds of challenges—it’s very good, but it’s not totally unique.

Boston Dynamics produces highly capable mobile robots with advanced mobility, dexterity and intelligence, enabling automation in difficult, dangerous, or unstructured environments. The company launched sales of its first commercial robot, Spot in June of 2020 and has since sold hundreds of robots in a variety of industries, such as power utilities, construction, manufacturing, oil and gas, and mining. Boston Dynamics plans to expand the Spot product line early next year with an enterprise version of the robot with greater levels of autonomy and remote inspection capabilities, and the release of a robotic arm, which will be a breakthrough in mobile manipulation.

Boston Dynamics is also entering the logistics automation market with the industry leading Pick, a computer vision-based depalletizing solution, and will introduce a mobile robot for warehouses in 2021.

Huh. We’ll be trying to figure out what “greater levels of autonomy” means, as well as whether the “mobile robot for warehouses” is Handle, or something more like an autonomous mobile robot (AMR) platform. I’d honestly be surprised if Handle was ready for work outside of Boston Dynamics next year, and it’s hard to imagine how Boston Dynamics could leverage their expertise into the AMR space with something that wouldn’t just seem… dull, compared to what they usually do. I hope to be surprised, though!

A new deep-pocketed benefactor

Hyundai Motor Group’s decision to acquire Boston Dynamics is based on its growth potential and wide range of capabilities.

“Wide range of capabilities” we get, but that other phrase, “growth potential,” has a heck of a lot wrapped up in it. At the moment, Boston Dynamics is nowhere near profitable, as far as we know. SoftBank acquired Boston Dynamics in 2017 for between one hundred and two hundred million dollars, and over the last three years they’ve poured hundreds of millions more into Boston Dynamics.

Hyundai’s 80 percent stake just means that they’ll need to take over the majority of that support, and perhaps even increase it if Boston Dynamics’ growth is one of their primary goals. Hyundai can’t have a reasonable expectation that Boston Dynamics will be profitable any time soon; they’re selling Spots now, but it’s an open question whether Spot will manage to find a scalable niche in which it’ll be useful in the sort of volume that will make it a sustainable commercial success. And even if it does become a success, it seems unlikely that Spot by itself will make a significant dent in Boston Dynamics’ burn rate anytime soon. Boston Dynamics will have more products of course, but it’s going to take a while, and Hyundai will need to support them in the interim.


It’s become clear that to sustain itself, Boston Dynamics needs a benefactor with very deep pockets and a long time horizon. Initially, Boston Dynamics’ business model (or whatever you want to call it) was to do bespoke projects for defense-ish folks like DARPA, but from what we understand Boston Dynamics stopped that sort of work after Google acquired them back in 2013. From one perspective, that government funding did exactly what it was supposed to do, which was to fund the development of legged robots through low TRLs (technology readiness levels) to the point where they could start to explore commercialization.

The question now, though, is whether Hyundai is willing to let Boston Dynamics undertake the kinds of low-TRL, high-risk projects that led from BigDog to LS3 to Spot, and from PETMAN to DRC Atlas to the current Atlas. So will Hyundai be cool about the whole thing and be the sort of benefactor that’s willing to give Boston Dynamics the resources that they need to keep doing what they’re doing, without having to answer too many awkward questions about things like practicality and profitability? Hyundai can certainly afford to do this, but so could SoftBank, and Google—the question is whether Hyundai will want to, over the length of time that’s required for the development of the kind of ultra-sophisticated robotics hardware that Boston Dynamics specializes in.

To put it another way: Depending on whether Hyundai’s perspective on Boston Dynamics is as a company that does research or a company that makes robots that are useful and profitable, it may be difficult for Boston Dynamics to justify the cost to develop the next Atlas, when the current one still seems so far from commercialization.

Google, SoftBank, now Hyundai

Boston Dynamics possesses multiple key technologies for high-performance robots equipped with perception, navigation, and intelligence.

Hyundai Motor Group’s AI and Human Robot Interaction (HRI) expertise is highly synergistic with Boston Dynamics’s 3D vision, manipulation, and bipedal/quadruped expertise.

As it turns out, Hyundai Motors does have its own robotics lab, called Hyundai Motors Robotics Lab. Their website is not all that great, but here’s a video from last year:

I’m not entirely clear on what Hyundai means when they use the word “synergistic” when they talk about their robotics lab and Boston Dynamics, but it’s a little bit concerning. Usually, when a big company buys a little company that specializes in something that the big company is interested in, the idea is that the little company, to some extent, will be absorbed into the big company to give them some expertise in that area. Historically, however, Boston Dynamics has been highly resistant to this, maintaining its post-acquisition independence and appearing to be very reluctant to do anything besides what it wants to do, at whatever pace it wants to do it, and as much on its own as possible.

From what we understand, Boston Dynamics didn’t integrate particularly well with Google’s robotics push in 2013, and we haven’t seen much evidence that SoftBank’s experience was much different. The most direct benefit to SoftBank (or at least the most visible one) was the addition of a fleet of Spot robots to the SoftBank Hawks baseball team cheerleading squad, along with a single (that we know about) choreographed gymnastics routine from an Atlas robot that was only shown on video.

And honestly, if you were a big manufacturing company with a bunch of money and you wanted to build up your own robotics program quickly, you’d probably have much better luck picking up some smaller robotics companies that are a bit less individualistic, would probably be more amenable to integration, and would cost way less than a billion dollars. And if integration is ultimately Hyundai’s goal, we’ll be very sad, because it’ll likely signal the end of Boston Dynamics doing the unfettered crazy stuff that we’ve grown to love.

Photo: Bob O’Connor

Possibly the most agile humanoid robot ever built, Atlas can run, climb, jump over obstacles, and even get up after a fall.

Boston Dynamics contemplates its future

The release ends by saying that the transaction is “subject to regulatory approvals and other customary closing conditions” and “is expected to close by June of 2021.” Again, you can read the whole thing here.

My initial reaction is that, despite the “synergies” described by Hyundai, it’s certainly not immediately obvious why the company wants to own 80 percent of Boston Dynamics. I’d also like a better understanding of how they arrived at the $1.1 billion valuation. I’m not saying this because I don’t believe in what Boston Dynamics is doing or in the inherent value of the company, because I absolutely do, albeit perhaps in a slightly less tangible sense. But when you start tossing around numbers like these, a big pile of expectations inevitably comes along with them. I hope that Boston Dynamics is unique enough that the kinds of rules that normally apply to robotics companies (or companies in general) can be set aside, at least somewhat, but I also worry that what made Boston Dynamics great was the explicit funding for the kinds of radical ideas that eventually resulted in robots like Atlas and Spot.

Can Hyundai continue giving Boston Dynamics the support and freedom that they need to keep doing the kinds of things that have made them legendary? I certainly hope so. Continue reading

Posted in Human Robots

#437848 UX Design – A Competitive ...

UX-led approach for your robotics design and build

UX design differentiates great products from average products, and helps you stand out from the rest. Now, in the age of robotics, your teams may be creating products with no blueprint to build on.

Find out why a UX-led approach for the design and build of your products is critical, learn from the masters at getting this right the first time, speed up the time it takes you to get to market, and deliver products users love.
Download UX design – Your Competitive Advantage today. Continue reading

Posted in Human Robots