Category Archives: Human Robots

Everything about Humanoid Robots and Androids

#439674 Cerebras Upgrades Trillion-Transistor ...

Much of the recent progress in AI has come from building ever-larger neural networks. A new chip powerful enough to handle “brain-scale” models could turbo-charge this approach.

Chip startup Cerebras leaped into the limelight in 2019 when it came out of stealth to reveal a 1.2-trillion-transistor chip. The dinner-plate-sized chip, called the Wafer Scale Engine, was the world’s largest computer chip. Earlier this year Cerebras unveiled the Wafer Scale Engine 2 (WSE-2), which more than doubled the number of transistors to 2.6 trillion.

Now the company has outlined a series of innovations that mean its latest chip can train a neural network with up to 120 trillion parameters. For reference, OpenAI’s revolutionary GPT-3 language model contains 175 billion parameters. The largest neural network trained to date, built by Google, had 1.6 trillion parameters.

“Larger networks, such as GPT-3, have already transformed the natural language processing landscape, making possible what was previously unimaginable,” said Cerebras CEO and co-founder Andrew Feldman in a press release.

“The industry is moving past 1 trillion parameter models, and we are extending that boundary by two orders of magnitude, enabling brain-scale neural networks with 120 trillion parameters.”

The genius of Cerebras’ approach is that rather than taking a silicon wafer and splitting it up to make hundreds of smaller chips, it makes a single massive one. While your average GPU will have a few hundred cores, the WSE-2 has 850,000. Because they’re all on the same hunk of silicon, they can work together far more seamlessly.

This makes the chip ideal for tasks that require huge numbers of operations to happen in parallel, which includes both deep learning and various supercomputing applications. And earlier this week at the Hot Chips conference, the company unveiled new technology that is pushing the WSE-2’s capabilities even further.

A major challenge for large neural networks is shuttling around all the data involved in their calculations. Most chips have a limited amount of memory on-chip, and every time data has to be shuffled in and out it creates a bottleneck, which limits the practical size of networks.

The WSE-2 already has an enormous 40 gigabytes of on-chip memory, which means it can hold even the largest of today’s networks. But the company has also built an external unit called MemoryX that provides up to 2.4 petabytes of high-performance memory, which is so tightly integrated it behaves as if it were on-chip.
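
For a sense of scale, here is a rough back-of-the-envelope estimate, in Python, of the memory a 120-trillion-parameter model would occupy; the 2 bytes per weight and 18 bytes of per-parameter training state are illustrative assumptions, not figures from Cerebras.

```python
# Rough scale check. The per-parameter byte counts are illustrative
# assumptions, not Cerebras figures: 2-byte FP16 weights plus roughly
# 18 bytes of gradients and optimizer state per parameter.
PARAMS = 120e12                 # 120 trillion parameters
WEIGHT_BYTES = 2                # FP16 weights (assumed)
STATE_BYTES = 18                # gradients + optimizer moments (assumed)
MEMORYX_BYTES = 2.4e15          # 2.4 petabytes of MemoryX capacity

total_bytes = PARAMS * (WEIGHT_BYTES + STATE_BYTES)
print(f"Weights alone:            {PARAMS * WEIGHT_BYTES / 1e15:.2f} PB")  # ~0.24 PB
print(f"Weights + training state: {total_bytes / 1e15:.2f} PB")            # ~2.40 PB
print(f"Fits within MemoryX:      {total_bytes <= MEMORYX_BYTES}")         # True
```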

Cerebras has also revamped its approach to the data it shuffles around. Previously the guts of the neural network would be stored on the chip, and only the training data would be fed in. Now, though, the weights of the connections between the network’s neurons are kept in the MemoryX unit and streamed in during training.
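
The weight-streaming idea can be sketched in a few lines of Python. This is only a conceptual illustration of keeping weights in an external store and pulling them in as each layer needs them; the tiny two-layer network and all names here are assumptions, not Cerebras’ software stack.

```python
import numpy as np

# Conceptual sketch only: weights live in an external "MemoryX" store and are
# streamed in per layer, while the batch's activations stay "on chip".
# Names, sizes, and the two-layer network are illustrative assumptions.
rng = np.random.default_rng(0)
memory_x = {                            # stand-in for the external memory unit
    "layer0": rng.normal(scale=0.1, size=(16, 32)),
    "layer1": rng.normal(scale=0.1, size=(32, 1)),
}

def train_step(x, y, lr=0.01):
    # Forward pass: stream each layer's weights in; activations stay local.
    w0 = memory_x["layer0"]
    a1 = np.tanh(x @ w0)
    w1 = memory_x["layer1"]
    a2 = np.tanh(a1 @ w1)

    # Backward pass: compute gradients and write the updates back to the
    # external store, where the full set of weights is kept.
    delta2 = (a2 - y) * (1 - a2 ** 2)                   # output-layer error
    memory_x["layer1"] = w1 - lr * (a1.T @ delta2)
    delta1 = (delta2 @ w1.T) * (1 - a1 ** 2)
    memory_x["layer0"] = w0 - lr * (x.T @ delta1)
    return float(np.mean((a2 - y) ** 2))                # squared-error loss

x = rng.normal(size=(8, 16))
y = rng.normal(size=(8, 1))
loss = train_step(x, y)
```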

By combining these two innovations, the company says, they can train networks two orders of magnitude larger than anything that exists today. Other advances announced at the same time include the ability to run extremely sparse (and therefore efficient) neural networks, and a new communication system dubbed SwarmX that makes it possible to link up to 192 chips to create a combined total of 163 million cores.
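
The core count follows directly from the figures above, and a tiny calculation also hints at why sparsity matters; the 90 percent sparsity level and the one multiply-accumulate per parameter are illustrative assumptions.

```python
# Core count claimed for a full SwarmX cluster (figures from the article).
cores_per_wse2 = 850_000
chips = 192
print(f"Total cores: {cores_per_wse2 * chips:,}")        # 163,200,000

# Why sparsity helps (illustrative assumptions: one multiply-accumulate per
# parameter per token, and hardware that can skip zero weights entirely).
params = 120e12
sparsity = 0.90                                          # 90% of weights are zero
print(f"Dense MACs per token:  {params:.2e}")
print(f"Sparse MACs per token: {params * (1 - sparsity):.2e}")
```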

How much all this cutting-edge technology will cost and who is in a position to take advantage of it is unclear. “This is highly specialized stuff,” Mike Demler, a senior analyst with the Linley Group, told Wired. “It only makes sense for training the very largest models.”

While the size of AI models has been increasing rapidly, it’s likely to be years before anyone can push the WSE-2 to its limits. And despite the insinuations in Cerebras’ press material, just because the parameter count roughly matches the number of synapses in the brain, that doesn’t mean the new chip will be able to run models anywhere close to the brain’s complexity or performance.

There’s a major debate in AI circles today over whether we can achieve general artificial intelligence by simply building larger neural networks, or whether this will require new theoretical breakthroughs. So far, increasing parameter counts has led to pretty consistent jumps in performance. A two-order-of-magnitude improvement over today’s largest models would undoubtedly be significant.

It’s still far from clear whether that trend will hold out, but Cerebras’ new chip could get us considerably closer to an answer.

Image Credit: Cerebras

Posted in Human Robots

#439662 An Army of Grain-harvesting Robots ...

The field of automated precision agriculture is based on one concept—autonomous driving technologies that guide vehicles through GPS navigation. Fifteen years ago, when high-accuracy GPS became available for civilian use, farmers thought things would be simple: Put a GPS receiver station at the edge of the field, configure a route for a tractor or a combine harvester, and off you go, dear robot!

Practice has shown, however, that this kind of carefree field cultivation is inefficient and dangerous. It works only in ideal fields, which are almost never encountered in real life. If there's a log or a rock in the field, or a couple of village paramours dozing in the rye under the sun, the tractor will run right over them. And not all countries have reliable satellite coverage—in agricultural markets like Kazakhstan, coverage can be unstable. This is why, if you want safe and efficient farming, you need to equip your vehicle with sensors and an artificial intelligence that can see and understand its surroundings instead of blindly following GPS navigation instructions.

The Cognitive Agro Pilot system lets a human operator focus on harvesting rather than driving. An integrated display and control system in the cab handles driving based on a video feed from a single low-resolution camera, no GPS or Internet connectivity required. Cognitive Pilot

You might think that GPS navigation is ideal for automated agriculture, since the task facing the operator of a farm vehicle like a combine harvester is simply to drive around the field in a serpentine pattern, mowing down all the wheat or whatever crop it is filled with. But reality is far different. There are hundreds of things operators must watch even as they keep their eyes fastened to the edge of the field to ensure that they move alongside it with fine precision. An agricultural combine is not dissimilar to a church organ in terms of its operational complexity. When a combine operator works with an assistant, one of them steers along the crop edge, while the other controls the reel, the fan, the threshing drum, and the harvesting process in general. In Soviet times, there were two operators in a combine crew, but now there is only one. This means choosing between safe driving and efficient harvesting. And since you can't harvest grain without moving, driving becomes the top priority, and the efficiency of the harvesting process tends to suffer.

Harvesting efficiency is especially important in Eastern Europe, where farming is high risk and there is only one harvest a year. The season starts in March and farmers don't rest until the autumn, when they have only two weeks to harvest the crops. If something goes wrong, every day they miss may lead to a loss of 10 percent of the yield. If a driver does a poor job of harvesting or gets drunk and crashes the machine, precious time is lost—hours or even days. About 90 percent of the combine operator's time is spent making sure that the combine is driving exactly along the edge of the unharvested crop to maximize efficiency without missing any of the crop. But this is the most unpleasant part of the driving, and due to fatigue at the end of the shift, operators typically leave nearly a meter at the edge of each row uncut. These steering errors account for a 25 percent overall increase in harvesting time. Our technology allows combine operators to delegate the driving so that they can instead focus on optimizing harvesting quality.

Add to this the fact that the skilled combine operator is a dying breed. Professional education has declined, and the young people joining the labor force aren't up to the same standard. Though the same can be said of most manual trades, this effect creates a great demand for our robotic system, the Cognitive Agro Pilot.

Developing AI systems is in my genome. My father, Anatoly Uskov, was on the first team of AI program developers at the System Research Institute of the Russian Academy of Sciences. Their program, named Kaissa, became the world computer chess champion in 1974. Two decades later, after the collapse of the Soviet Union, the Systems Research Institute's AI laboratories formed the foundation of my company, Cognitive Technologies. Our first business was developing optical character recognition software used by companies including HP, Oracle, and Samsung, and our success allowed us to support an R&D team of mathematicians and programmers conducting fundamental research in the field of computer vision and adjacent areas.

In 2012, we added a group of mathematicians developing neural networks. Later that year, this group proudly introduced me to their creation: Vasya, a football-playing toy car with a camera for an eye. “One-eyed Vasya” could recognize a ball among other objects in our long office hallway, and push it around. The robot was a massive distraction for everyone working on that floor, as employees went out into the hallway and started “testing” the car by tripping it up and blocking its way to the ball with obstacles. Meanwhile, the algorithm showed stable performance. Politely swerving around obstacles, the car kept on looking for the ball and pushing it. It almost gave an impression of a living creature, and this was our “eureka” moment—why don't we try doing the same with something larger and more useful?

A combine driven by the Cognitive Agro Pilot harvests grain while a human supervises from the driver's seat. Cognitive Pilot

After initially experimenting with large heavy-duty trucks, we realized that the agricultural sector doesn't have the major legal and regulatory constraints that road transport has in Russia and elsewhere. Since our priority was to develop a commercially viable product, we set up a business unit called Cognitive Pilot that develops add-on autonomy for combine harvesters, which are the machines used to harvest the vast majority of grain crops (including corn, wheat, barley, oats, and rye) on large farms.

Just five years ago, it was impossible to use video-content analysis to operate agricultural machinery at this level of automation because there weren't any fully functional neural networks that could detect the borders of a crop strip or see any obstacles in it.

At first, we considered combining GPS with visual data analysis, but it didn't take us long to realize that visual analytics alone is enough. For a GPS steering system to work, you need to prepare a map in advance, install a base station for corrections, or purchase a package of signals. It also requires pressing a lot of buttons in a lot of menus, and combine operators have very little appreciation for user interfaces. What we offer is a camera and a box stuffed with processing power and neural networks. As soon as the camera and the box are mounted and connected to the combine's control system, we're good to go. Once in the field, the newly installed Cognitive Agro Pilot says: “Hurray, we're in the field,” asks the driver for permission to take over, and starts driving. Five years from now, we predict that all combine harvesters will be equipped with a computer vision–based autopilot capable of controlling every aspect of harvesting crops.

From a single video stream, Cognitive Agro Pilot's neural networks are able to identify crops, cleared ground, static obstacles, and moving obstacles like people or other vehicles. Cognitive Pilot

Getting to this point has meant solving some fascinating challenges. We realized we would be facing an immense diversity of field scenes that our neural network must be trained to understand. Working with farmers in the early stages of the project, we found that the same crops can look completely different in different climatic zones. To prepare for mass production of our system, we tried to compile a highly diversified data set covering various fields and crops, starting with videos filmed on several farms across Russia under different weather and lighting conditions. But it soon became evident that we needed a more adaptable solution.

We decided to use a coarse-to-fine approach to train our networks for autonomous driving. The initial version is improved with each new client, as we obtain additional data on different locations and crops. We use this data to make our networks more accurate and reliable, employing unsupervised domain adaptation to recalibrate them in a short time by adding carefully randomized noise and distortions to the training images to make the networks more robust. Humans are still needed to help with semantic segmentation on new varieties of crops. Thanks to this approach, we have now obtained highly resilient all-purpose networks suitable for use on over a dozen different crops grown across Eastern Europe.
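
As a rough illustration of the kind of randomized noise and distortion described above, here is a minimal augmentation routine in Python; the specific transforms and their ranges are generic assumptions, since the article does not detail Cognitive Pilot's pipeline.

```python
import numpy as np

# Illustrative sketch of randomized training-image distortion of the kind
# described above (the exact augmentations Cognitive Pilot uses are not
# specified in the article; these are common, generic choices).

def augment(image, rng):
    """image: float32 array in [0, 1], shape (H, W, 3)."""
    out = image.copy()

    # Photometric jitter: random brightness and contrast shifts mimic
    # different weather and lighting conditions.
    brightness = rng.uniform(-0.15, 0.15)
    contrast = rng.uniform(0.8, 1.2)
    out = np.clip((out - 0.5) * contrast + 0.5 + brightness, 0.0, 1.0)

    # Sensor noise: additive Gaussian noise makes the network less brittle
    # to dust, grain, and compression artifacts.
    out = np.clip(out + rng.normal(0.0, 0.02, size=out.shape), 0.0, 1.0)

    # Geometric variation: a random horizontal flip changes which side of
    # the frame the crop edge appears on.
    if rng.random() < 0.5:
        out = out[:, ::-1, :]
    return out.astype(np.float32)

rng = np.random.default_rng(42)
frame = rng.random((480, 640, 3)).astype(np.float32)   # stand-in for a camera frame
augmented = augment(frame, rng)
```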

The way the Cognitive Agro Pilot drives a combine is similar to how a human driver does it. That is, our unique competitive edge is the system's ability to see and understand the situation in the field much as a human would, so it maintains full efficiency in collaboration with human drivers. At the end of the day, it all comes down to economics. One human-driven combine can harvest around 20 hectares of crops during one shift. When Cognitive Agro Pilot does the driving, the operators' workload is considerably lower: They don't get tired, can make fewer stops, and take fewer breaks. In practical terms, it means harvesting around 25 to 30 hectares per shift. For a business owner, it means that two combines equipped with our system deliver the performance of three combines without it.
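
A quick sanity check of that claim, using the figures above:

```python
# Figures from the article: ~20 hectares per shift with manual driving,
# 25-30 hectares per shift with the Cognitive Agro Pilot doing the steering.
manual = 20
assisted_low, assisted_high = 25, 30

print(f"Two assisted combines: {2 * assisted_low}-{2 * assisted_high} ha/shift")  # 50-60
print(f"Three manual combines: {3 * manual} ha/shift")                            # 60
```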

While the combine drives itself, the human operator can make adjustments to the harvesting system to maximize speed and efficiency. Cognitive Pilot

On the market now there are some separate developments from various agricultural-harvesting companies. But each of their autonomous features is done as a separate function—driving along a field edge, driving along a row, and so on. We haven't yet seen another industrial system that can drive completely with computer vision, but one-eyed Vasya showed us that this was possible. And so as we thought about cost optimization and solving the task with a minimum set of devices, we decided that for a farmer's AI-based robot assistant, one camera is enough.

The Cognitive Agro Pilot's primary sensor is a single 2-megapixel color video camera that can see a wide area in front of the vehicle, mounted on a bracket near one of the combine's side mirrors. A control unit with an Nvidia Jetson TX2 computer module is mounted inside the cab, with an integrated display and driver interface. This control unit contains the main stack of autonomy algorithms, processes the video feed, and issues commands to the combine's hydraulic systems for control of steering, acceleration, and braking. A display in the cab provides the interface for the driver and displays warnings and settings. We are not tied to any particular brand; our retrofit kit will work with any combine harvester model available in the farmer's fleet. For a combine more than five years old, interfacing with its control system may not be quite so easy (sometimes an additional steering-angle sensor is required), but the installation and calibration can still usually be done within one day, and it takes just 10 minutes to train a new driver.
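
To make the architecture concrete, here is a high-level sketch of one iteration of a vision-only steering loop of this kind; the segmentation stub, the proportional steering rule, and every name in it are illustrative stand-ins, not Cognitive Pilot's actual software.

```python
import numpy as np

rng = np.random.default_rng(0)

def segment_crop_edge(frame):
    """Stand-in for the neural network: return the estimated crop-edge column
    (in pixels) for each image row, faked here as a noisy vertical line."""
    h, w = frame.shape[:2]
    return np.full(h, 0.6 * w) + rng.normal(0.0, 2.0, size=h)

def steering_command(frame, target_fraction=0.6, gain=0.005):
    """Proportional steering: keep the detected crop edge at a fixed horizontal
    position in the image, i.e. at a fixed lateral offset from the combine."""
    h, w = frame.shape[:2]
    error_px = float(np.mean(segment_crop_edge(frame))) - target_fraction * w
    return float(np.clip(gain * error_px, -1.0, 1.0))   # normalized steering output

# One loop iteration: camera frame in, steering command out. In the real
# system the command would go to the combine's hydraulic steering.
frame = np.zeros((480, 640, 3), dtype=np.uint8)          # stand-in camera frame
command = steering_command(frame)
```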

Our vision-based system drives the combine, so the operator can focus on the harvest and adjusting the process to the specific features of the crop. The Cognitive Agro Pilot does all of the steering and maintains a precise distance between rows, minimizing gaps. It looks for obstacles, categorizes them, and forecasts their trajectory if they're moving. If there is time, it warns the driver to avoid the obstacles, or it decides to drive around them or slow down. It also coordinates its movement with a grain truck and with other combines when it is part of a formation. The only time that the operator is routinely required to drive is to turn the combine around at the end of a run. If you need to turn, go ahead—the Cognitive Agro Pilot releases the controls and starts looking for a new crop edge. As soon as it finds one, the robot says: “Let me do the driving, man.” You push the button, and it takes over. Everything is simple and intuitive. And since a run is normally up to 5 kilometers long, these turns account for less than 1 percent of a driver's workload.
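
The handover and obstacle behavior described above can be summarized as a small state machine; the states, thresholds, and messages below are assumptions chosen for illustration, not Cognitive Pilot's actual decision logic.

```python
from enum import Enum, auto

# Illustrative sketch only: states, thresholds, and decision rules are
# assumptions, not Cognitive Pilot's real control logic.

class Mode(Enum):
    AUTONOMOUS = auto()       # system steers along the crop edge
    MANUAL = auto()           # driver steers, e.g. turning at the end of a run

def step(mode, crop_edge_found, obstacle, driver_confirms):
    """One decision step. `obstacle` is None or a dict with an estimated
    time-to-impact in seconds and a 'moving' flag."""
    if mode is Mode.AUTONOMOUS:
        if not crop_edge_found:
            return Mode.MANUAL, "release controls: end of run, driver turns"
        if obstacle is not None:
            if obstacle["time_to_impact"] > 8.0:
                return Mode.AUTONOMOUS, "warn driver, plan to drive around"
            return Mode.AUTONOMOUS, "slow down or stop for obstacle"
        return Mode.AUTONOMOUS, "hold crop edge, minimize gap"
    # MANUAL: offer to take over once a new crop edge is detected.
    if crop_edge_found and driver_confirms:
        return Mode.AUTONOMOUS, "driver confirmed: resume autonomous steering"
    return Mode.MANUAL, "driver steering"

mode, action = step(Mode.AUTONOMOUS, crop_edge_found=True,
                    obstacle={"time_to_impact": 12.0, "moving": True},
                    driver_confirms=False)
```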

During our pilot project last year, the yield from the same fields increased by 3 to 5 percent due to the ability of the harvester to maintain the cut width without leaving unharvested areas. It increased an additional 3 percent simply because the operators had time to more closely monitor what was going on in front of them, optimizing the harvesting performance. With our copilot, drivers' workloads are very low. They start the system, let go of the steering wheel, and can concentrate on controlling the machinery or checking commodity prices on their phones. Harvesting weeks are a real ordeal for combine drivers, who get no rest except for some sleep at night. In one month they need to earn enough for the upcoming six, so they are exhausted. However, the drivers who were using our solution realized they even had some energy left, and those who chose to work long hours said they could easily work 2 hours more than usual.

Gaining 10 or 15 percent more working hours over the course of the harvest may sound negligible, but it means that a driver has three extra days to harvest the crops. Consequently, if there are days of bad weather (like rain that causes the grain to germinate or fall down), the probability of keeping the crop yield high is a lot greater. And since combine operators get paid by harvested volume, using our system helps them make more money. Ultimately, both drivers and managers say unanimously that harvesting has become easier, and typically the cost of the system (about US $10,000) is paid off in just one season. Combine drivers quickly get the hang of our technology—after the first few days, many drivers either start to trust in our robot as an almighty intelligence, or decide to test it to death. Some get the misconception that our robots think like humans and are a little disappointed to see that our system underperforms at night and has trouble driving in dust when multiple combines are driving in file. Even though humans can have problems in these situations also, operators would grumble: “How can it not see?” A human driver understands that the distance to the combine ahead is about 10 meters and that they are traveling at a constant speed. The dust cloud will blow away in a minute, and everything will be fine. No need to brake. Alex, the driver of the combine ahead, definitely won't brake. Or will he? Since the system hasn't spent years alongside Alex and cannot use life experience to predict his actions, it stops the combine and releases the controls. This is where human intelligence once again wins out over AI.

Turns at the end of each run are also left to human intelligence, for now. This feature never failed to amaze combine drivers but turned out to be the most challenging during tests: The immense width of the header means that a huge number of hypotheses about objects beyond the line of sight of our single camera need to be factored in. To automate this feature, we're waiting for the completion of tests on rugged terrain. We are also experimenting with our own synthetic-aperture radar technology, which can see crop edges and crop rows as radio-frequency images. This does not add much to the total solution cost, and we plan to use radar for advanced versions of our “agrodroids” intended for work in low visibility and at night.

Robot in Disguise

It takes just four parts to transform almost any human-driven combine harvester into a robot. A camera [1] mounted on a side-view mirror watches the field ahead, sending a video stream to a combined computing unit, display, and driver interface [2] in the driver's cab. A neural network analyzes the video to find crop edges and obstacles, and sends commands to the hydraulic unit [3] to control the combine. For older combines, a steering sensor [4] mounted inside a wheel provides directional feedback for precision driving. While Cognitive Pilot's system takes care of the driving, it's the job of the human operator in the cab to optimize the performance of the header [5] to harvest the crop efficiently. Cognitive Pilot

During the summer and autumn of 2020, more than 350 autonomous combines equipped with the Cognitive Agro Pilot system drove across over 160,000 hectares of fields and helped their human supervisors harvest more than 720,000 tonnes of crops from Kaliningrad on the Baltic Sea to Vladivostok in the Russian Far East. Our robots worked more than 230,000 hours and covered more than 950,000 autonomous kilometers last year. And by the end of 2021, our system will be available in the United States and South America.

Most farmers and end users of our solutions may have heard about driverless cars in the news or seen the words “neural network” a couple of times, but that about sums up their AI experience. So it is fascinating to hear them say things like “Look how well the segmentation has worked!” or “The neural network is doing great!” in the driver's cab.

Changing the technological paradigm takes time, so we ensure the widest possible compatibility of our solutions with existing machinery. Undoubtedly, as farmers adapt to the current innovations, we will continuously increase the autonomy of all types of machinery for all kinds of tasks.

A few years ago, I studied the work of the United Nations mission in Rwanda dealing with the issues of chronic child malnutrition. I will never forget the photographs of emaciated children. It made me think of the famine that gripped a besieged Leningrad during World War II. Some of my relatives died there and their diaries are a testament to the fact that there are few endings more horrible than death from starvation. I believe that robotic automation and AI enhancement of agricultural machinery used in high-risk farming areas or regions with a shortage of skilled workers should be the highest priority for all governments concerned with providing an adequate response to the global food-security challenges.

This article appears in the September 2021 print issue as “On Russian Farms, the Robotic Revolution Has Begun.”

Posted in Human Robots

#439658 Video Friday: Constant Gardener

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):

DARPA SubT Finals – September 21-23, 2021 – Louisville, KY, USA
WeRobot 2021 – September 23-25, 2021 – [Online Event]
IROS 2021 – September 27-October 1, 2021 – [Online Event]
ROSCon 2021 – October 20-21, 2021 – [Online Event]

Let us know if you have suggestions for next week, and enjoy today's videos.
It's been a hectic couple of days, so here are some retired industrial robots quietly drawing lines in sand.

[ Constant Gardener ] via [ RobotStart ]
Engineers at MIT and Shanghai Jiao Tong University have designed a soft, lightweight, and potentially low-cost neuroprosthetic hand. Amputees who tested the artificial limb performed daily activities such as zipping a suitcase, pouring a carton of juice, and petting a cat, just as well, and in some cases better than, more rigid neuroprosthetics.

[ MIT ]
To perceive the world around it, the Waymo Driver uses a single, integrated system comprised of lidars, cameras, and radars that enable it to see 360 degrees in every direction, day or night, and as far as three football fields away. This powerful sensing suite makes it easy for The Waymo Driver to navigate the complex scenarios it comes across multiple times a day while driving in San Francisco, like safely maneuvering around three, large double-parked vehicles on a narrow street.

[ Waymo ]
“Robot, stand up” – Oscar Constanza, 16, gives the order and slowly but surely a large frame strapped to his body lifts him up and he starts walking. Fastened to his shoulders, chest, waist, knees and feet, the exoskeleton allows Oscar – who has a genetic neurological condition that means his nerves do not send enough signals to his legs – to walk across the room and turn around.

[ Wandercraft ] via [ Reuters ]
Thanks Antonio!
Nothing super crazy in this video of Spot, but it's always interesting to pay close attention to some of the mobility challenges that the robot effortlessly manages, like the ladder, or that wobbly board.

[ Boston Dynamics ]
This video shows the evolution of a dynamic quadruped robot Panther. During my Ph.D. study, one of the most rewarding experiences is to improve upon the Panther robot. However, publication videos only show success, and the process of advancement (including failures and lessons) is rarely shared among the robotics community. This video, therefore, serves as complementary material showcasing the inglorious yet authentic aspect of research.

RIGHT. ON.
[ Yanran Ding ]
Thanks Fan!
This paper proposes the design of a robotic gripper motivated by the bin-picking problem, where a variety of objects need to be picked from cluttered bins. The presented gripper design focuses on an enveloping cage-like approach, which surrounds the object with three hooked fingers, and then presses into the object with a movable palm. The fingers are flexible and imbue grasps with some elasticity, helping to conform to objects and, crucially, adding friction to cases where an object cannot be caged.

[ Paper ]
Tin Lun Lam writes, “Recently, we have upgraded FreeBOT (a kind of Freeform Modular Self-reconfigurable Robot) such that they can detect the connection configuration dynamically without using any external sensing system. It is a very important milestone for our ongoing work to make FreeBOT fully autonomous.”

[ CUHK ]
Thanks Tin Lun!
Dusty Robotics develops robot-powered tools for the modern construction workforce. Our FieldPrinter automated layout robot turns BIM models into fully constructible layouts. This digital layout process shortens schedules, eliminates rework, and enables projects to finish faster at lower cost.

[ Dusty Robotics ]
NASA's Curiosity rover explores Mount Sharp, a 5-mile-tall (8-kilometer-tall) mountain within the basin of Gale Crater on Mars. Curiosity's Deputy Project Scientist, Abigail Fraeman of NASA's Jet Propulsion Laboratory in Southern California, gives viewers a descriptive tour of Curiosity's location. The panorama was captured by the rover's Mast Camera, or Mastcam, on July 3, 2021, the 3,167th Martian day, or sol, of its mission.

[ JPL ]
Robot arm manages to not kill plants. Or people!

[ HydroCobotics ]
Thanks Fan!
One Step Closer to Mapping Icy Moons Like Europa, Enceladus – Astrobotic tested AstroNav in Alaska to demonstrate precision landing and hazard detection on icy moons in the outer solar system.

[ Astrobotic ]
Researchers at Oak Ridge National Laboratory developed a robotic disassembly system for used electric vehicle batteries to make the process safer, more efficient, and less costly, while supporting recycling of critical materials and reducing waste.

[ ORNL ]
In a partnership with ANYbotics, Vale highlights its commitment to becoming one of the safest and most reliable mining companies in the world. The results showed that ANYmal helps reduce exposure to hazardous conditions and integrates seamlessly into Vale's team to autonomously perform routine inspections and deliver improved reporting during operations and periods of downtime.

[ ANYbotics ]
Thanks Cheila!
Adapted to the Spirit as an optional payload module, Exyn's industry-leading autonomous software, ExynAI, provides unprecedented 3D LIDAR mapping in GPS-denied environments. Now with Level 4 Autonomy and advanced data collection software, this payload enables volumetric autonomous navigation, superior security encryption, and increased speed and agility.

#SkinnyCopter
[ Ascent ]
At the Karolinska University Laboratory in Sweden, an innovation project based around an ABB collaborative robot has increased efficiency and created a better working environment for lab staff.

[ ABB ]
Alex from Berich Masonry shares his experience as a new member of the masonry community and his positive impressions of Construction Robotics' MULE, a lift-assist solution that can help him stay safer and healthier throughout his career.

[ Construction Robotics ]
Older adults sharing what it's like to live with ElliQ, a personal care companion, for the past two years.

[ ElliQ ]

Posted in Human Robots

#439652 Robot Could Operate a Docking Station ...

Picture, if you will, a cargo rocket launching into space and docking on the International Space Station. The rocket maneuvers up to the station and latches on with an airtight seal so that supplies can be transferred. Now imagine a miniaturized version of that process happening inside your body.
Researchers today announced that they have built a robotic system capable of this kind of supply drop, and which functions entirely inside the gut. The system involves an insulin delivery robot that is surgically implanted in the abdomen, and swallowable magnetic capsules that resupply the robot with insulin.
The robot's developers, based in Italy, tested their system in three diabetic pigs. The system successfully controlled the pigs' blood glucose levels for several hours, according to results published today in the journal Science Robotics.
“Maybe it's scary to think about a docking station inside the body, but it worked,” says Arianna Menciassi, an author of the paper and a professor of biomedical robotics and bioengineering at Sant'Anna School of Advanced Studies in Pisa, Italy.
In her team's system, a device the size of a flip phone is surgically implanted along the abdominal wall and interfaced with the small intestine. The device delivers insulin into the fluid in that space. When the implant's reservoir runs low on medication, a magnetic, insulin-filled capsule shuttles in to refill it.
Here's how the refill procedure would theoretically work in humans: The patient swallows the capsule just like a pill, and it moves through the digestive system naturally until it reaches a section of the small intestine where the implant has been placed. Using magnetic fields, the implant draws the capsule toward it, rotates it, and docks it in the correct position. The implant then punches the capsule with a retractable needle and pumps the insulin into its reservoir. The needle must also punch through a thin layer of intestinal tissue to reach the capsule.
In all, the implant contains four actuators that control the docking, needle punching, reservoir volume and aspiration, and pump. The motor responsible for docking rotates a magnet to maneuver the capsule into place. The design was inspired by industrial clamping systems and pipe-inspecting robots, the authors say.
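Putting the refill procedure together as an ordered sequence over the four actuator groups named above gives something like the following sketch; the class, method names, and quantities are illustrative stand-ins, not the implant's actual firmware.
```python
from dataclasses import dataclass

# Illustrative sketch of the refill sequence described above, using the four
# actuator groups the authors name (docking, needle, reservoir, pump). All
# names and quantities here are stand-ins, not the implant's actual firmware.

@dataclass
class ImplantState:
    reservoir_units: float = 5.0        # insulin left in the implant (assumed units)
    reservoir_capacity: float = 100.0
    capsule_units: float = 100.0        # insulin carried by a docked capsule

def refill_from_capsule(state: ImplantState) -> ImplantState:
    # 1. Docking: rotate the internal magnet to pull the swallowed capsule in
    #    and hold it in position against the intestinal wall.
    print("docking magnet: capture and orient capsule")

    # 2. Needle: extend the retractable needle through the thin layer of
    #    intestinal tissue and into the capsule.
    print("needle: extend into capsule")

    # 3 & 4. Reservoir and pump: aspirate insulin from the capsule until the
    #    implant's reservoir is full or the capsule is empty.
    transfer = min(state.capsule_units,
                   state.reservoir_capacity - state.reservoir_units)
    state.reservoir_units += transfer
    state.capsule_units -= transfer
    print(f"pump: transferred {transfer:.0f} units")

    # Retract the needle and release the capsule so it continues through the
    # digestive tract and is excreted.
    print("needle: retract; docking magnet: release capsule")
    return state

refill_from_capsule(ImplantState())
```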
After the insulin is delivered, the implant releases the capsule, allowing it to continue naturally through the digestive tract to be excreted from the body. The magnetic fields that control docking and release of the capsule are controlled wirelessly by an external programming device, and can be turned on or off. The implant's battery is wirelessly charged by an external device.
This kind of delivery system could prove useful to people with type 1 diabetes, especially those who must inject insulin into their bodies multiple times a day. Insulin pumps are available commercially, but these require external hardware that delivers the drug through a tube or needle that penetrates the body. Implantable insulin pumps are also available, but those devices have to be refilled by a tube that protrudes from the body, inviting bacterial infections; those systems have not proven popular.
A fully implantable system refilled by a pill would eliminate the need for protruding tubes and hardware, says Menciassi. Such a system could prove useful in delivering drugs for other diseases too, such as chemotherapy to people with ovarian, pancreatic, gastric, and colorectal cancers, the authors report.
As a next step, the authors are working on sealing the implanted device more robustly. “We observed in some pigs that [bodily] fluids are entering inside the robot,” says Menciassi. Some of the leaks are likely occurring during docking when the needle comes out of the implant, she says. The leaks did not occur when the team previously tested the device in water, but the human body, she notes, is much more complex.

Posted in Human Robots

#439646 Elon Musk Has No Idea What He’s Doing ...

Yesterday, at the end of Tesla's AI Day, Elon Musk introduced a concept for “Tesla Bot,” a 125 lb, 5'8″ tall electromechanically actuated autonomous bipedal “general purpose” humanoid robot. By “concept,” I mean that Musk showed some illustrations and talked about his vision for the robot, which struck me as, let's say, somewhat naïve. Based on the content of a six-minute long presentation, it seems as though Musk believes that someone (Tesla, suddenly?) should just go make an autonomous humanoid robot already—like, the technology exists, so why not do it?

To be fair, Musk did go out and do more or less exactly that for electric cars and reusable rockets. But humanoid robots are much different, and much more complicated. With rockets, well, we already had rockets. And with electric cars, we already had cars, batteries, sensors, and the DARPA competitions to build on. I don't say this to minimize what Musk has done with SpaceX and Tesla, but rather to emphasize that humanoid robotics is a very different challenge.

Unlike rockets or cars, humanoid robots aren't an existing technology that needs an ambitious vision plus a team of clever people plus sustained financial investment. With humanoid robotics, there are many more problems to solve, the problems are harder, and we're much farther away from practical solutions. Lots of very smart people have been actively working on these things for decades, and there's still a laundry list of fundamental breakthroughs in hardware and especially software that are probably necessary to make Musk's vision happen.

Are these fundamental breakthroughs impossible for Tesla? Not impossible, no. But from listening to what Elon Musk said today, I don't think he has any idea what getting humanoid robots to do useful stuff actually involves. Let's talk about why.

Watch the presentation if you haven't yet, and then let's go through what Musk talks about.

Okay, here we go!
“Our cars are semi-sentient robots on wheels.”

I don't know what that even means. Semi-sentient? Sure, whatever, a cockroach is semi-sentient I guess, although the implicit suggestion that these robots are therefore somehow part of the way towards actual sentience is ridiculous. Besides, autonomous cars live in a highly constrained action space within a semi-constrained environment, and Tesla cars in particular have plenty of well-known issues with their autonomy.

“With the full self-driving computer, essentially the inference engine on the car (which we'll keep evolving, obviously) and Dojo, and all the neural nets recognizing the world, understanding how to navigate through the world, it kind of makes sense to put that onto a humanoid form.”
Yes, because that's totally how it works. Look, the neural networks in a Tesla (the car) are trained to recognize the world from a car's perspective. They look for things that cars need to understand, and they have absolutely no idea about anything else, which can cause all kinds of problems for them. Same with navigation: autonomous cars navigate through a world that consists of roads and road-related stuff. You can't just “put that” onto a humanoid robot and have any sort of expectation that it'll be useful, unless all you want it to do is walk down the middle of the street and obey traffic lights. Also, the suggestion here seems to be that “AI for general purpose robotics” can be solved by just throwing enough computing power at it, which as far as I'm aware is not even remotely how that works, especially with physical robots.

“[Tesla] is also quite good at sensors and batteries and actuators. So, we think we'll probably have a prototype sometime next year.”
It's plausible that by spending enough money, Tesla could construct a humanoid robot with batteries, actuators, and computers in a similar design to what Musk has described. Can Tesla do it by sometime next year like Musk says they can? Sure, why not. But the hard part is not building a robot, it's getting that robot to do useful stuff, and I think Musk is way out of his depth here. People without a lot of experience in robotics often seem to think that once you've built the robot, you've solved most of the problem, so they focus on mechanical things like actuators and what it'll look like and how much it can lift and whatever. But that's backwards, and the harder problems come after you've got a robot that's mechanically functional.

What the heck does “human-level hands” mean?

“It's intended to navigate through a world built for humans…”
This is one of the few good reasons to make a humanoid robot, and I'm not even sure that by itself, it's a good enough reason to do so. But in any case, the word “intended” is doing a lot of heavy lifting here. The implications of a world built for humans include an almost infinite variety of different environments, full of all kinds of robot-unfriendly things, not to mention the safety aspects of an inherently unstable 125 lb robot.

I feel like I have a pretty good handle on the current state of the art in humanoid robotics, and if you visit this site regularly, you probably do too. Companies like Boston Dynamics and Agility Robotics have been working on robots that can navigate through human environments for literally decades, and it's still a super hard problem. I don't know why Musk thinks that he can suddenly do better.

For anyone wondering why I Tweeted “Elon Musk has no idea what getting humanoid robots to do useful stuff actually… https://t.co/5uei4LIpyF
— Evan Ackerman (@BotJunkie), August 20, 2021

The “human-level hands” that you see annotated in Musk's presentation above are a good example of why I think Musk doesn't really grasp how much work this robot is going to be. What does “human-level hands” even mean? If we're talking about five-fingered hands with human-equivalent sensing and dexterity, those do exist (sort of), although they're generally fragile and expensive. It would take an enormous engineering effort to make hands like that into something practical just from a hardware perspective, which is why nobody has bothered—most robots use much simpler, much more robust two or three finger grippers instead. Could Tesla solve this problem? I have no doubt that they could, given enough time and money. But they've also got every other part of the robot to deal with. And even if you can make the hardware robust enough to be useful, you've still got to come up with all of the software to make it work. Again, we're talking about huge problems within huge problems at a scale that it seems like Musk hasn't considered.

“…And eliminate dangerous, repetitive, and boring tasks.”

Great. This is what robots should be doing. But as Musk himself knows, it's easy to say that robots will eliminate dangerous, repetitive, and boring tasks, and much more difficult to actually get them to do it—not because the robots aren't capable, but because humans are far more capable. We set a very high bar for performance and versatility in ways that aren't always obvious, and even when they are obvious, robots may not be able to replicate them effectively.

[Musk makes jokes about robots going rogue.]

Uh, okay.

“Things I think that are hard about having a really useful humanoid robot are, can it navigate through the world without being explicitly trained, without explicit line-by-line instructions? Can you talk to it and say, 'please pick up that bolt and attach it to the car with that wrench?' 'Please go to the store and get me the following groceries?' That kind of thing.”
Robots can already navigate through the world without “explicit line-by-line instructions” when they have a pretty good idea of what “the world” consists of. If the world is “roads” or “my apartment” or “this specific shopping mall,” that's probably a 95%+ solved problem, keeping in mind that the last 5% gets ridiculously complicated. But if you start talking about “my apartment plus any nearby grocery store along with everything between my apartment and that grocery store,” that's a whole lot of not necessarily well structured or predictable space.

And part of that challenge is just physically moving through those spaces. Are there stairs? Heavy doors? Crosswalks? Lots of people? These are complicated enough environments for those small wheeled sidewalk delivery robots with humans in the loop, never mind a (hypothetical) fully autonomous bipedal humanoid that is also carrying objects. And going into a crowded grocery store and picking things up off of shelves and putting them into a basket or a cart that then has to be pushed safely? These are cutting edge unsolved robotics problems, and we've barely seen this kind of thing happen with industrial arms on wheeled bases, even in a research context. Heck, even “pick up that bolt” is not an easy thing for a robot to do right now, if it wasn't specifically designed for that task.

“This I think will be quite profound, because what is the economy—at the foundation, it is labor. So, what happens when there is no shortage of labor? This is why I think long term there will need to be universal basic income. But not right now, because this robot doesn't work.”

Economics is well beyond my area of expertise, but as Musk says, until the robot works, this is all moot.

“AI for General Purpose Robotics.” Sure.

It's possible, even likely, that Tesla will build some sort of Tesla Bot by sometime next year, as Musk says. I think that it won't look all that much like the concept images in this presentation. I think that it'll be able to stand up, and perhaps walk. Maybe withstand a shove or two and do some basic object recognition and grasping. And I think after that, progress will be slow. I don't think Tesla will catch up with Boston Dynamics or Agility Robotics. Maybe they'll end up with the equivalent of ASIMO: a PR tool that can do impressive demos but is ultimately not all that useful.

Part of what bothers me so much about all this is how Musk's vision for the Tesla Bot implies that he's going to just casually leapfrog all of the roboticists who have been working towards useful humanoids for decades. Musk assumes that he will be able to wander into humanoid robot development and do what nobody else has yet been able to do: build a useful general purpose humanoid. I doubt Musk intended it this way, but I feel like he's backhandedly suggesting that the challenges with humanoids aren't actually that hard, and that if other people were cleverer, or worked harder, or threw more money at the problem, then we would have had general purpose humanoids already.
I think he's wrong. But if Tesla ends up investing time and money into solving some really hard robotics problems, perhaps they'll have some success that will help move the entire field forward. And I'd call that a win.

Posted in Human Robots