#435748 Video Friday: This Robot Is Like a ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
RSS 2019 – June 22-26, 2019 – Freiburg, Germany
Hamlyn Symposium on Medical Robotics – June 23-26, 2019 – London, U.K.
ETH Robotics Summer School – June 27-July 1, 2019 – Zurich, Switzerland
MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.
It’s been a while since we last spoke to Joe Jones, the inventor of Roomba, about his solar-powered, weed-killing robot, called Tertill, which he was launching as a Kickstarter project. Tertill is now available for purchase (US $300) and is shipping right now.
[ Tertill ]
Usually, we don’t post videos that involve drone use that looks to be either illegal or unsafe. These flights over the protests in Hong Kong are almost certainly both. However, it’s also a unique perspective on the scale of these protests.
[ Team BlackSheep ]
ICYMI: iRobot announced this week that it has acquired Root Robotics.
[ iRobot ]
This Boston Dynamics parody video went viral this week.
The CGI is good, but the gratuitous violence, even against a fake robot, is a bit much.
This is still our favorite Boston Dynamics parody video:
[ Corridor ]
Biomedical Engineering Department Head Bin He and his team have developed the first-ever successful non-invasive mind-controlled robotic arm to continuously track a computer cursor.
[ CMU ]
Organic chemists, prepare to meet your replacement:
Automated chemical synthesis carries great promises of safety, efficiency and reproducibility for both research and industry laboratories. Current approaches are based on specifically-designed automation systems, which present two major drawbacks: (i) existing apparatus must be modified to be integrated into the automation systems; (ii) such systems are not flexible and would require substantial re-design to handle new reactions or procedures. In this paper, we propose a system based on a robot arm which, by mimicking the motions of human chemists, is able to perform complex chemical reactions without any modifications to the existing setup used by humans. The system is capable of precise liquid handling, mixing, filtering, and is flexible: new skills and procedures could be added with minimum effort. We show that the robot is able to perform a Michael reaction, reaching a yield of 34%, which is comparable to that obtained by a junior chemist (undergraduate student in Chemistry).
[ arXiv ] via [ NTU ]
So yeah, ICRA 2019 was huge and awesome. Here are some brief highlights.
[ Montreal Gazette ]
For about US $5, this drone will deliver raw meat and beer to you if you live on an uninhabited island in Tokyo Bay.
[ Nikkei ]
The Smart Microsystems Lab at Michigan State University has a new version of their Autonomous Surface Craft. It’s autonomous, open source, and awfully hard to sink.
[ SML ]
As drone shows go, this one is pretty good.
[ CCTV ]
Here’s a remote controlled robot shooting stuff with a very large gun.
[ HDT ]
Over a period of three quarters (September 2018 through May 2019), we’ve had the opportunity to work with five graduating University of Denver students as they brought their idea for a Misty II arm extension to life.
[ Misty Robotics ]
If you wonder how it looks to inspect burners and superheaters of a boiler with an Elios 2, here you are! This inspection was performed by Svenska Elektrod in a peat-fired boiler for Vattenfall in Sweden. Enjoy!
[ Flyability ]
The newest Soft Robotics technology, mGrip mini fingers, is made for tight spaces, small packaging, and delicate items, giving limitless opportunities for your applications.
[ Soft Robotics ]
What if legged robots were able to generate dynamic motions in real time while interacting with a complex environment? Such technology would represent a significant step toward the deployment of legged systems in real-world scenarios. This means being able to replace humans in the execution of dangerous tasks and to collaborate with them in industrial applications.
This workshop aims to bring together researchers from all the relevant communities in legged locomotion such as: numerical optimization, machine learning (ML), model predictive control (MPC) and computational geometry in order to chart the most promising methods to address the above-mentioned scientific challenges.
[ Num Opt Wkshp ]
Army researchers recently teamed with the U.S. Marine Corps to fly and test 3-D printed quadcopter prototypes at the Marine Corps Air Ground Combat Center in Twentynine Palms, Calif.
[ CCDC ARL ]
Lex Fridman’s Artificial Intelligence podcast featuring Rosalind Picard.
[ AI Podcast ]
In this week’s episode of Robots in Depth, Per Sjöborg speaks with Christian Guttmann, executive director of the Nordic Artificial Intelligence Institute.
Christian Guttmann talks about AI and wanting to understand intelligence well enough to recreate it. Christian has been focusing on AI in healthcare and has recently started communicating the opportunities and challenges of artificial intelligence to the general public, something that the host, Per Sjöborg, is also very passionate about. We also get to hear about the Nordic Artificial Intelligence Institute and the work it does to inform all parts of society about AI.
[ Robots in Depth ]
#435707 AI Agents Startle Researchers With ...
After 25 million games, the AI agents playing hide-and-seek with each other had mastered four basic game strategies. The researchers expected that part.
After a total of 380 million games, the AI players developed strategies that the researchers didn’t know were possible in the game environment—which the researchers had themselves created. That was the part that surprised the team at OpenAI, a research company based in San Francisco.
The AI players learned everything via a machine learning technique known as reinforcement learning. In this learning method, AI agents start out by taking random actions. Sometimes those random actions produce desired results, which earn them rewards. Via trial-and-error on a massive scale, they can learn sophisticated strategies.
In the context of games, this process can be abetted by having the AI play against another version of itself, ensuring that the opponents will be evenly matched. It also locks the AI into a process of one-upmanship, where any new strategy that emerges forces the opponent to search for a countermeasure. Over time, this “self-play” amounted to what the researchers call an “auto-curriculum.”
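To make that loop concrete, here is a minimal, hypothetical Python sketch of trial-and-error learning with self-play. This is purely illustrative, not OpenAI’s code: two agents play a toy hider-versus-seeker matching game against each other, pick actions epsilon-greedily, and nudge their value estimates toward whatever sparse reward each round produces. The toy game, the action set, and all parameter values are assumptions made for the example.

```python
# Illustrative sketch only (not OpenAI's code): two agents learn by trial and
# error while playing against each other ("self-play") in a toy matching game.
import random

ACTIONS = [0, 1]  # in the real game these would be rich physical actions

def choose(q, epsilon=0.1):
    """Epsilon-greedy: usually exploit the best-known action, sometimes explore."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: q[a])

def train(rounds=50_000, alpha=0.05):
    q_hider = {a: 0.0 for a in ACTIONS}   # hider's value estimate per action
    q_seeker = {a: 0.0 for a in ACTIONS}  # seeker's value estimate per action
    for _ in range(rounds):
        a_h = choose(q_hider)
        a_s = choose(q_seeker)
        # Sparse, zero-sum reward: the seeker "wins" the round only on a match.
        r_seeker = 1.0 if a_h == a_s else -1.0
        r_hider = -r_seeker
        # Trial-and-error update: nudge each value estimate toward the reward seen.
        q_hider[a_h] += alpha * (r_hider - q_hider[a_h])
        q_seeker[a_s] += alpha * (r_seeker - q_seeker[a_s])
    return q_hider, q_seeker

if __name__ == "__main__":
    print(train())
```

Because both players keep adapting, every exploit one side discovers changes the problem the other side faces, which is the self-play dynamic the researchers describe as an auto-curriculum.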
According to OpenAI researcher Igor Mordatch, this experiment shows that self-play “is enough for the agents to learn surprising behaviors on their own—it’s like children playing with each other.”
Reinforcement learning is a hot field of AI research right now. OpenAI’s researchers used the technique to train a team of bots to play the video game Dota 2, and those bots squashed a world-champion human team last April. The Alphabet subsidiary DeepMind has used it to triumph in the ancient board game Go and the video game StarCraft.
Aniruddha Kembhavi, a researcher at the Allen Institute for Artificial Intelligence (AI2) in Seattle, says games such as hide-and-seek offer a good way for AI agents to learn “foundational skills.” He worked on a team that taught their AllenAI to play Pictionary with humans, viewing the gameplay as a way for the AI to work on common sense reasoning and communication. “We are, however, quite far away from being able to translate these preliminary findings in highly simplified environments into the real world,” says Kembhavi.
Illustration: OpenAI
AI agents construct a fort during a hide-and-seek game developed by OpenAI.
In OpenAI’s game of hide-and-seek, both the hiders and the seekers received a reward only if they won the game, leaving the AI players to develop their own strategies. Within a simple 3D environment containing walls, blocks, and ramps, the players first learned to run around and chase each other (strategy 1). The hiders next learned to move the blocks around to build forts (2), and then the seekers learned to move the ramps (3), enabling them to jump inside the forts. Then the hiders learned to move all the ramps into their forts before the seekers could use them (4).
The two strategies that surprised the researchers came next. First the seekers learned that they could jump onto a box and “surf” it over to a fort (5), allowing them to jump in—a maneuver that the researchers hadn’t realized was physically possible in the game environment. So as a final countermeasure, the hiders learned to lock all the boxes into place (6) so they weren’t available for use as surfboards.
Illustration: OpenAI
An AI agent uses a nearby box to surf its way into a competitor’s fort.
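As a rough sketch of the win-only reward structure described above (OpenAI’s actual reward shaping may differ in detail, and the function below is an illustrative assumption), each team would be scored purely on the round’s outcome, with no partial credit for intermediate behaviors:

```python
def team_rewards(any_hider_spotted: bool) -> dict:
    """Hypothetical win-only reward for one round of hide-and-seek.

    Assumption for illustration: the seekers win the round if they spotted
    any hider; otherwise the hiders win. There is deliberately no partial
    credit for intermediate behaviors such as building forts or moving ramps.
    """
    seeker_reward = 1.0 if any_hider_spotted else -1.0
    return {"hiders": -seeker_reward, "seekers": seeker_reward}

if __name__ == "__main__":
    print(team_rewards(True))   # {'hiders': -1.0, 'seekers': 1.0}
    print(team_rewards(False))  # {'hiders': 1.0, 'seekers': -1.0}
```

Under a reward like this, fort-building, ramp use, and box surfing are never rewarded directly; they emerge only because they eventually change who wins.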
In this circumstance, having AI agents behave in an unexpected way wasn’t a problem: They found different paths to their rewards, but didn’t cause any trouble. However, you can imagine situations in which the outcome would be rather serious. Robots acting in the real world could do real damage. And then there’s Nick Bostrom’s famous example of a paper clip factory run by an AI, whose goal is to make as many paper clips as possible. As Bostrom told IEEE Spectrum back in 2014, the AI might realize that “human bodies consist of atoms, and those atoms could be used to make some very nice paper clips.”
Bowen Baker, another member of the OpenAI research team, notes that it’s hard to predict all the ways an AI agent will act inside an environment—even a simple one. “Building these environments is hard,” he says. “The agents will come up with these unexpected behaviors, which will be a safety problem down the road when you put them in more complex environments.”
AI researcher Katja Hofmann at Microsoft Research Cambridge, in England, has seen a lot of gameplay by AI agents: She started a competition that uses Minecraft as the playing field. She says the emergent behavior seen in this game, and in prior experiments by other researchers, shows that games can be useful for studies of safe and responsible AI.
“I find demonstrations like this, in games and game-like settings, a great way to explore the capabilities and limitations of existing approaches in a safe environment,” says Hofmann. “Results like these will help us develop a better understanding of how to validate and debug reinforcement learning systems–a crucial step on the path towards real-world applications.”
Baker says there’s also a hopeful takeaway from the surprises in the hide-and-seek experiment. “If you put these agents into a rich enough environment they will find strategies that we never knew were possible,” he says. “Maybe they can solve problems that we can’t imagine solutions to.”
#435703 FarmWise Raises $14.5 Million to Teach ...
We humans spend most of our time getting hungry or eating, which must be really inconvenient for the people who have to produce food for everyone. For a sustainable and tasty future, we’ll need to make the most of what we’ve got by growing more food with less effort, and that’s where the robots can help us out a little bit.
FarmWise, a California-based startup, is looking to enhance farming efficiency by automating everything from seeding to harvesting, starting with the worst task of all: weeding. And they’ve just raised US $14.5 million to do it.
FarmWise’s autonomous, AI-enabled robots are designed to solve farmers’ most pressing challenges by performing a variety of farming functions – starting with weeding, and providing personalized care to every plant they touch. Using machine learning models, computer vision and high-precision mechanical tools, FarmWise’s sophisticated robots cleanly pick weeds from fields, leaving crops with the best opportunity to thrive while eliminating harmful chemical inputs. To date, FarmWise’s robots have efficiently removed weeds from more than 10 million plants.
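That description amounts to a perceive-classify-actuate loop. Below is a minimal, hypothetical Python sketch of the idea (not FarmWise’s actual software; the Detection class, its fields, and the 0.9 confidence threshold are assumptions for illustration): a vision model scores each detected plant, and the mechanical tool is sent only to plants confidently classified as weeds.

```python
# Hypothetical sketch (not FarmWise's software) of a weed-vs-crop decision loop.
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Detection:
    """One plant spotted by the onboard camera (illustrative fields only)."""
    position_mm: Tuple[float, float]  # plant location in the tool's work area
    weed_probability: float           # score from a trained image classifier

def weeding_pass(detections: List[Detection],
                 actuate_tool: Callable[[Tuple[float, float]], None],
                 threshold: float = 0.9) -> int:
    """Mechanically remove plants classified as weeds with high confidence."""
    removed = 0
    for det in detections:
        if det.weed_probability >= threshold:  # treat as a weed only when confident
            actuate_tool(det.position_mm)      # drive the weeding tool to the plant
            removed += 1
    return removed

if __name__ == "__main__":
    sample = [Detection((10.0, 25.0), 0.97), Detection((40.0, 5.0), 0.12)]
    print(weeding_pass(sample, actuate_tool=lambda pos: print("weeding at", pos)))
```

In a sketch like this, most of the difficulty lives in the classifier and the mechanical tool rather than in the decision logic itself.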
FarmWise is not the first company to work on large mobile farming robots. A few years ago, we wrote about DeepField Robotics and their giant weed-punching robot. But considering how many humans there are, and how often we tend to get hungry, it certainly seems like there’s plenty of opportunity to go around.
Photo: FarmWise
FarmWise is collecting massive amounts of data about every single plant in an entire field, which is something that hasn’t been possible before. Above, one of the robots at a farm in Salinas Valley, Calif.
Weeding is just one thing that farm robots are able to do. FarmWise is collecting massive amounts of data about every single plant in an entire field, practically on the per-leaf level, which is something that hasn’t been possible before. Data like this could be used for all sorts of things, but generally, the long-term hope is that robots could tend to every single plant individually—weeding them, fertilizing them, telling them what good plants they are, and then mercilessly yanking them out of the ground at absolute peak ripeness. It’s not realistic to do this with human labor, but it’s the sort of data-intensive and monotonous task that robots could be ideal for.
The question with robots like this is not necessarily whether they can do the job that they were created for, because generally, they can—farms are structured enough environments that they lend themselves to autonomous robots, and the tasks are relatively well defined. The issue right now, I think, is whether robots are really time- and cost-effective for farmers. Capable robots are an expensive investment, and even if there is a shortage of human labor, will robots perform well enough to convince farmers to adopt the technology? That’s a solid maybe, and here’s hoping that FarmWise can figure out how to make it work.
[ FarmWise ]