Tag Archives: mobile

#437471 How Giving Robots a Hybrid, Human-Like ...

Squeezing a lot of computing power into robots without using up too much space or energy is a constant battle for their designers. But a new approach that mimics the structure of the human brain could provide a workaround.

The capabilities of most of today’s mobile robots are fairly rudimentary, but giving them the smarts to do their jobs is still a serious challenge. Controlling a body in a dynamic environment takes a surprising amount of processing power, which requires both real estate for chips and considerable amounts of energy to power them.

As robots get more complex and capable, those demands are only going to increase. Today’s most powerful AI systems run in massive data centers across far more chips than can realistically fit inside a machine on the move. And the slow death of Moore’s Law suggests we can’t rely on conventional processors getting significantly more efficient or compact anytime soon.

That prompted a team from the University of Southern California to resurrect an idea from more than 40 years ago: mimicking the human brain’s division of labor between two complementary structures. While the cerebrum is responsible for higher cognitive functions like vision, hearing, and thinking, the cerebellum integrates sensory data and governs movement, balance, and posture.

When the idea was first proposed, the technology didn’t exist to make it a reality. But in a paper recently published in Science Robotics, the researchers describe a hybrid system for an inverted pendulum robot that combines analog circuits to control motion with digital circuits to govern perception and decision-making.

“Through this cooperation of the cerebrum and the cerebellum, the robot can conduct multiple tasks simultaneously with a much shorter latency and lower power consumption,” write the researchers.

The type of robot the researchers were experimenting with looks essentially like a pole balancing on a pair of wheels. Such robots have a broad range of applications, from hoverboards to warehouse logistics—Boston Dynamics’ recently unveiled Handle robot operates on the same principles. Keeping them stable is notoriously tough, but the new approach significantly outperformed all-digital control schemes by radically improving the speed and efficiency of the underlying computations.

Key to bringing the idea to life was the recent emergence of memristors—electrical components whose resistance depends on the history of the current that has passed through them, allowing them to combine computing and memory in one place in a way similar to how biological neurons operate.
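
To make that memory effect concrete, here is a minimal simulation sketch of an idealized memristor using the classic linear drift model, not anything taken from the paper itself; all parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch of an idealized memristor (linear drift model).
# All parameter values below are illustrative assumptions.
R_ON, R_OFF = 100.0, 16e3   # resistance when fully doped / undoped (ohms)
D = 10e-9                   # device thickness (meters)
MU = 1e-14                  # dopant mobility (m^2 / (V * s))

def simulate(voltages, dt, w0=0.5 * D):
    """Return the current drawn by the memristor for a voltage waveform.

    The internal state w integrates past current, so the device's
    resistance depends on its history: memory and computation in one place.
    """
    w, currents = w0, []
    for v in voltages:
        r = R_ON * (w / D) + R_OFF * (1 - w / D)           # state-dependent resistance
        i = v / r
        w = np.clip(w + MU * (R_ON / D) * i * dt, 0.0, D)  # drift of the doped region
        currents.append(i)
    return np.array(currents)

# A sinusoidal drive traces the pinched hysteresis loop that is the
# memristor's signature: the same voltage yields different currents
# depending on what came before.
t = np.arange(0.0, 1e-3, 1e-6)
currents = simulate(np.sin(2 * np.pi * 5e3 * t), dt=1e-6)
```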

The researchers used memristors to build one analog circuit that runs the algorithm integrating data from the robot’s accelerometer and gyroscope—crucial for detecting the angle and velocity of its body—and another that controls its motion. One key advantage of this setup is that the signals from the sensors are already analog, so it does away with the extra circuitry needed to convert them into digital form, saving both space and power.
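
For intuition, here is a minimal digital sketch of the kind of computation those two circuits perform: a complementary filter that fuses gyroscope and accelerometer readings into a tilt estimate, plus a simple PD balance law. The filter constant, loop rate, and gains are illustrative assumptions; the paper realizes these operations directly in analog hardware.

```python
import math

ALPHA = 0.98  # trust placed in the integrated gyro signal vs. the accelerometer
DT = 0.001    # loop period in seconds (1 kHz, an assumption for this sketch)

def fuse_tilt(angle, gyro_rate, accel_x, accel_z):
    """One sensor-fusion step: integrate the gyro, correct its drift
    using the gravity direction seen by the accelerometer."""
    gyro_angle = angle + gyro_rate * DT          # short-term: integrate angular rate
    accel_angle = math.atan2(accel_x, accel_z)   # long-term: gravity reference
    return ALPHA * gyro_angle + (1 - ALPHA) * accel_angle

KP, KD = 40.0, 2.0  # illustrative PD gains for the balance loop

def balance_command(angle, gyro_rate):
    """Wheel torque command that pushes the tilt angle back toward zero."""
    return -(KP * angle + KD * gyro_rate)
```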

More importantly, though, the analog system is an order of magnitude faster and more energy-efficient than a standard all-digital system, the authors report. This not only slashes the power requirements, it also cuts the processing loop from 3,000 microseconds to just 6 microseconds. That significantly improves the robot’s stability: it takes just one second to settle into a steady state, compared with more than three seconds on the digital-only platform.

At the minute this is just a proof of concept. The robot the researchers have built is small and rudimentary, and the algorithms being run on the analog circuit are fairly basic. But the principle is a promising one, and there is currently a huge amount of R&D going into neuromorphic and memristor-based analog computing hardware.

As often turns out to be the case, it seems like we can’t go too far wrong by mimicking the best model of computation we have found so far: our own brains.

Image Credit: Photos Hobby / Unsplash

Posted in Human Robots

#437416 Robotics firm expands autonomous data ...

Back in 2013, local Brooklyn papers were excitedly reporting on a new initiative aimed at getting residents involved in cleaning up the highly polluted Gowanus Canal. Brooklyn Atlantis, as the project was known, was the brainchild of NYU Tandon Professor of Mechanical and Aerospace Engineering Maurizio Porfiri, who envisioned building and launching robotic boats to collect water-quality data and capture images of the infamous canal, which citizen scientists would then view and help classify.

Those robotic boats ultimately led to the formation of the company Manifold Robotics, which aimed to further develop the unmanned surface vehicles (USVs) with sensor technology. (The fledgling company received support from PowerBridgeNY, a collaborative initiative to bring university research to market.) More recently, the startup has branched out to develop a mobile data collection platform that allows unmanned aerial vehicles (UAVs) to operate safely in the sky near power lines.

Posted in Human Robots

#437407 Nvidia’s Arm Acquisition Brings the ...

Artificial intelligence and mobile computing have been two of the most disruptive technologies of this century. The unification of the two companies that made them possible could have wide-ranging consequences for the future of computing.

California-based Nvidia’s graphics processing units (GPUs) have powered the deep learning revolution ever since Google researchers discovered in 2011 that they could run neural networks far more efficiently than conventional CPUs. UK company Arm’s energy-efficient chip designs have dominated the mobile and embedded computing markets for even longer.

Now the two will join forces after the American company announced a $40 billion deal to buy Arm from its Japanese owner, Softbank. In a press release announcing the deal, Nvidia touted its potential to rapidly expand the reach of AI into all areas of our lives.

“In the years ahead, trillions of computers running AI will create a new internet-of-things that is thousands of times larger than today’s internet-of-people,” said Nvidia founder and CEO Jensen Huang. “Uniting NVIDIA’s AI computing capabilities with the vast ecosystem of Arm’s CPU, we can advance computing from the cloud, smartphones, PCs, self-driving cars and robotics, to edge IoT, and expand AI computing to every corner of the globe.”

There are good reasons to believe the hype. The two companies are absolutely dominant in their respective fields—Nvidia’s GPUs support more than 97 percent of AI computing infrastructure offered by big cloud service providers, and Arm’s chips power more than 90 percent of smartphones. And there’s little overlap in their competencies, which means the relationship could be a truly symbiotic one.

“I think the deal ‘fits like a glove’ in that Arm plays in areas that Nvidia does not or isn’t that successful, while NVIDIA plays in many places Arm doesn’t or isn’t that successful,” analyst Patrick Moorhead wrote in Forbes.

One of the most obvious directions would be to expand Nvidia’s AI capabilities to the kind of low-power edge devices that Arm excels in. There’s growing demand for AI in devices like smartphones, wearables, cars, and drones, where transmitting data to the cloud for processing is undesirable either for reasons of privacy or speed.

But there might also be fruitful exchanges in the other direction. Huang told Moorhead a major focus would be bringing Arm’s expertise in energy efficiency to the data center. That’s a big concern for technology companies whose electricity bills and green credentials are taking a battering thanks to the huge amounts of energy required to run millions of computer chips around the clock.

The deal may not be plain sailing, though, most notably due to the two companies’ differing business models. While Nvidia sells ready-made processors, Arm creates chip designs and licenses them to other companies, which customize them to their particular hardware needs. It operates on an open-license basis whereby any company with the necessary cash can access its designs.

As a result, its designs are found in products built by hundreds of companies that license its innovations, including Apple, Samsung, Huawei, Qualcomm, and even Nvidia. Some, including two of the company’s co-founders, have raised concerns that the purchase by Nvidia, which competes with many of these other companies, could harm the neutrality that has been central to its success.

It’s possible this could push more companies towards RISC-V, an open-source technology developed by researchers at the University of California, Berkeley, that rivals Arm’s and is not owned by any one company. However, there are plenty of reasons why most companies still prefer Arm over the less feature-rich open-source option, and it might take a considerable push to convince Arm’s customers to jump ship.

The deal will also have to navigate some thorny political issues. Unions, politicians, and business leaders in the UK have voiced concerns that it could lead to the loss of high-tech jobs, and government sources have suggested conditions could be placed on the deal.

Regulators in other countries could also put a spanner in the works. China is concerned that if Arm becomes US-owned, many of the Chinese companies that rely on its technology could become victims of export restrictions as the China-US trade war drags on. South Korea is also wary that the deal could create a new technology juggernaut that could dent Samsung’s growth in similar areas.

Nvidia has made commitments to keep Arm’s headquarters in the UK, which it says should lessen concerns around jobs and export restrictions. It’s also pledged to open a new world-class technology center in Cambridge and build a state-of-the-art AI supercomputer powered by Arm’s chips there. Whether the deal goes through still hangs in the balance, but if it does, it could spur a whole new wave of AI innovation.

Image Credit: Nvidia

Posted in Human Robots

#437165 A smarter way of building with mobile ...

Researchers are working with a mobile robotic platform called Husky A200 that could be used for autonomous logistics tasks on construction sites. This mobile robot is one of many projects pursued by the Fraunhofer Italia Innovation Engineering Center to advance the cause of digitalization in construction and bridge the gap between robotics and the building industry. Researchers at this center, based in Bolzano, Italy, are developing a software interface that will enable mobile robots to find their way around construction sites.

Posted in Human Robots

#436984 Robots to the Rescue: How They Can Help ...

As the coronavirus pandemic forces people to keep their distance, could this be robots’ time to shine? A group of scientists think so, and they’re calling for robots to do the “dull, dirty, and dangerous jobs” of infectious disease management.

Social distancing has emerged as one of the most effective strategies for slowing the spread of COVID-19, but it’s also bringing many jobs to a standstill and severely restricting our daily lives. And unfortunately, the one group that can’t rely on its protective benefits is the medical and emergency services workers we’re relying on to save us.

Robots could be a solution, according to the editorial board of Science Robotics, by helping replace humans in a host of critical tasks, from disinfecting hospitals to collecting patient samples and automating lab tests.

According to the authors, the key areas where robots could help are clinical care, logistics, and reconnaissance, which refers to tasks like identifying the infected or making sure people comply with quarantines or social distancing requirements. Outside of the medical sphere, robots could also help keep the economy and infrastructure going by standing in for humans in factories or vital utilities like waste management or power plants.

When it comes to clinical care, robots can play important roles in disease prevention, diagnosis and screening, and patient care, the researchers say. Robots have already been widely deployed to disinfect hospitals and other public spaces, either by using UV light that kills bugs or by repurposing agricultural robots and drones to spray disinfectant, reducing the exposure of cleaning staff to potentially contaminated surfaces. They are also being used to carry out crucial deliveries of food and medication without exposing humans.

But they could also play an important role in tracking the disease, say the researchers. Thermal cameras combined with image recognition algorithms are already being used to detect potential cases at places like airports, but incorporating them into mobile robots or drones could greatly expand the coverage of screening programs.
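
As a rough illustration of the screening idea, here is a minimal sketch that flags a thermal camera frame containing a patch of elevated apparent skin temperature. The fever threshold, patch size, and synthetic frame are all illustrative assumptions; real deployments pair this with face detection and careful radiometric calibration.

```python
import numpy as np

FEVER_C = 38.0       # assumed skin-temperature threshold in Celsius
MIN_HOT_PIXELS = 25  # ignore single-pixel noise; patch size is an assumption

def flag_frame(frame_celsius: np.ndarray) -> bool:
    """Return True if the frame contains enough above-threshold pixels."""
    return int((frame_celsius > FEVER_C).sum()) >= MIN_HOT_PIXELS

# Synthetic 240x320 frame: ambient skin readings plus a simulated fever patch.
frame = 36.5 + 0.2 * np.random.randn(240, 320)
frame[100:110, 150:160] = 38.6
print(flag_frame(frame))  # -> True
```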

A more complex challenge—but one that could significantly reduce medical workers’ exposure to the virus—would be to design robots that could automate the collection of nasal swabs used to test for COVID-19. Similarly, automated blood collection for tests could be of significant help, and researchers are already investigating using ultrasound to help robots locate veins to draw blood from.

Convincing people it’s safe to let a robot stick a swab up their nose or jab a needle in their arm might be a hard sell right now, but a potentially more realistic scenario would be to get robots to carry out laboratory tests on collected samples to reduce exposure to lab technicians. Commercial laboratory automation systems already exist, so this might be a more achievable near-term goal.

Not all solutions need to be automated, though. While autonomous systems will be helpful for reducing the workload of stretched health workers, remote systems can still provide useful distancing. Remote-controlled robotic systems are already becoming increasingly common in the delicate business of surgery, so it would be entirely feasible to create remote systems to carry out more prosaic medical tasks.

Such systems would make it possible for experts to contribute remotely in many different places without having to travel. And robotic systems could combine medical tasks like patient monitoring with equally important social interaction for people who may have been shut off from human contact.

In a teleconference last week, Guang-Zhong Yang, a medical roboticist from Carnegie Mellon University and founding editor of Science Robotics, highlighted the importance of including both doctors and patients in the design of these robots to ensure they are safe and effective, but also to make sure people trust them to observe social protocols and not invade their privacy.

But Yang also stressed the importance of putting the pieces in place to enable the rapid development and deployment of solutions. During the 2015 Ebola outbreak, the White House Office of Science and Technology Policy and the National Science Foundation organized workshops to identify where robotics could help deal with epidemics.

But once the threat receded, attention shifted elsewhere, and by the time the next pandemic came around little progress had been made on potential solutions. The result is that it’s unclear how much help robots will really be able to provide to the COVID-19 response.

That means it’s crucial to invest in a sustained research effort into this field, say the paper’s authors, with more funding and multidisciplinary research partnerships between government agencies and industry so that next time around we will be prepared.

“These events are rare and then it’s just that people start to direct their efforts to other applications,” said Yang. “So I think this time we really need to nail it, because without a sustained approach to this, history will repeat itself and robots won’t be ready.”

Image Credit: ABB’s YuMi collaborative robot. Image courtesy of ABB

Posted in Human Robots