Category Archives: Human Robots

Everything about Humanoid Robots and Androids

#435023 Inflatable Robot Astronauts and How to ...

The typical cultural image of a robot (a steel, chrome, humanoid bucket of bolts) is often far from the reality of cutting-edge robotics research. There are difficulties, both social and technological, in realizing the science-fiction image of a robot, let alone one that can actually help around the house. Often, spending a fortune on a humanoid robot that performs dozens of tasks quite badly is simply less appropriate than building a cheaper design optimized for one specific situation.

A team of scientists from Brigham Young University has received funding from NASA to investigate an inflatable robot called, improbably, King Louie. The robot was developed by Pneubotics, who have a long track record in the world of soft robotics.

In space, weight is at a premium. The world watched in awe and amusement when Commander Chris Hadfield sang “Space Oddity” from the International Space Station—but launching that guitar into space likely cost around $100,000. A good price for launching payload into outer space is on the order of $10,000 per pound ($22,000/kg).

For that price, it would cost a cool $1.7 million to launch Boston Dynamics’ famous ATLAS robot to the International Space Station, and its bulk would be inconvenient in the cramped living quarters available. By contrast, an inflatable robot like King Louie is substantially lighter and can simply be deflated and folded away when not in use. The robot can be manufactured from cheap, lightweight, and flexible materials, and minor damage is easy to repair.

Inflatable Robots Under Pressure
The concept of inflatable robots is not new: indeed, earlier prototypes of King Louie were exhibited back in 2013 at Google I/O’s After Hours, flailing away at each other in a boxing ring. Sparks might fly in fights between traditional robots, but the aim here was to demonstrate that the robots are passively safe: the soft, inflatable figures won’t accidentally smash delicate items when moving around.

Health and safety regulations are part of the reason why robots don’t work alongside humans more often, but soft robots would be far safer to use in healthcare or around children (whose first instinct, according to BYU’s promotional video, is either to hug or punch King Louie). It’s also much harder to have nightmarish fantasies of robotic domination with these friendlier softbots: Terminator would’ve been a much shorter franchise if Skynet’s droids were inflatable.

Robotic exoskeletons are increasingly used for physical rehabilitation therapies, as well as for industrial purposes. As countries like Japan seek to care for their aging populations with robots and alleviate the burden on nurses, who suffer from some of the highest rates of back injuries of any profession, soft robots will become increasingly attractive for use in healthcare.

Precision and Proprioception
The main issue is one of control. Rigid, metallic robots may be more expensive and more dangerous, but the simple fact of their rigidity makes it easier to map out and control the precise motions of each of the robot’s limbs, digits, and actuators. Individual motors attached to these rigid robots can allow for a great many degrees of freedom—individual directions in which parts of the robot can move—and precision control.

For example, ATLAS has 28 degrees of freedom, while Shadow’s dexterous robot hand alone has 20. This is much harder to do with an inflatable robot, for precisely the same reasons that make it safer. Without hard and rigid bones, other methods of control must be used.

In the case of King Louie, the robot is made up of many expandable air chambers. An air compressor changes the pressure in these chambers, allowing them to expand and contract, a design that harks back to some of the earliest pneumatic automata. Pairs of chambers act antagonistically, like muscles, such that when one chamber “tenses,” another relaxes, giving King Louie, for example, four degrees of freedom in each of its arms.
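The antagonistic-pair idea can be sketched in a few lines. This is a hypothetical illustration only: the chamber names, base pressure, gain, and the linear pressure-to-angle model are my assumptions, not BYU’s or Pneubotics’ actual control code.

```python
# Hypothetical sketch of antagonistic pneumatic actuation. All numbers and
# the linear model are illustrative assumptions.

def antagonistic_pressures(target_angle, base_pressure=50.0, gain=10.0):
    """Map a desired joint angle (radians) to a (flexor, extensor)
    pressure pair. Raising one chamber's pressure while lowering the
    other's bends the joint, like a pair of opposing muscles."""
    delta = gain * target_angle
    flexor = base_pressure + delta    # this chamber "tenses"
    extensor = base_pressure - delta  # this chamber relaxes
    return flexor, extensor

# Bending the joint raises one pressure and lowers the other symmetrically.
f, e = antagonistic_pressures(0.5)
```

At a target angle of zero both chambers sit at the base pressure; any bend trades pressure from one chamber to the other, which is why one pair of chambers yields one degree of freedom.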

The robot is also surprisingly strong. Professor Killpack, who works at BYU on the project, estimates that its payload is comparable to other humanoid robots on the market, like Rethink Robotics’ Baxter (RIP).

Proprioception, that sixth sense that allows us to map out and control our own bodies and muscles in fine detail, is being enhanced for a wider range of soft, flexible robots with the use of machine learning algorithms connected to input from a whole host of sensors on the robot’s body.

Part of the reason this is so complicated with soft, flexible robots is that the shape and “map” of the robot’s body can change; that’s the whole point. But this means that every time King Louie is inflated, its body is a slightly different shape; when it becomes deformed, for example due to picking up objects, the shape changes again, and the complex ways in which the fabric can twist and bend are far more difficult to model and sense than the behavior of the rigid metal of King Louie’s hard counterparts. When you’re looking for precision, seemingly-small changes can be the difference between successfully holding an object or dropping it.

Learning to Move
Researchers at BYU are therefore spending a great deal of time on how to control the soft-bot well enough to make it comparably useful. One method uses the commercial tracking technology from the Vive VR system: moving the hand-held game controller provides a continuous position target, with feedback, for the robot’s arm. Since the tracking software provides an estimate of the robot’s joint angles and keeps correcting until the arm is properly aligned, this kind of feedback control is likely to work despite small changes in the robot’s shape.
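The reason this feedback scheme tolerates shape changes can be seen in a minimal sketch. The proportional controller, gain, and first-order arm response below are illustrative assumptions, not the team’s actual software; the point is that acting on the *measured* error each step means the loop converges even if the arm’s response drifts.

```python
# Minimal feedback-control sketch (assumed proportional controller and
# simplified arm response, for illustration only).

def track_to_target(target, angle=0.0, kp=0.5, tol=1e-3, max_steps=100):
    """Drive an estimated joint angle toward a target via feedback.
    Each iteration uses the tracker's current error estimate, so small
    changes in the robot's shape only change how fast it converges."""
    for _ in range(max_steps):
        error = target - angle   # error reported by the tracking system
        if abs(error) < tol:
            break
        angle += kp * error      # nudge the arm toward the target
    return angle

final = track_to_target(1.2)  # converges to within tol of 1.2
```

Contrast this with open-loop control, which would need an exact model of the inflated body: here no model is needed, only a running error measurement.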

The other technologies the researchers are looking into for their softbot include arrays of flexible, tactile sensors to place on the softbot’s skin, and minimizing the complex cross-talk between these arrays to get coherent information about the robot’s environment. As with some of the new proprioception research, the project is looking into neural networks as a means of modeling the complicated dynamics—the motion and response to forces—of the softbot. This method relies on large amounts of observational data, mapping how the robot is inflated and how it moves, rather than explicitly understanding and solving the equations that govern its motion—which hopefully means the methods can work even as the robot changes.
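The data-driven idea can be made concrete with a toy example. The real project uses neural networks on far richer sensor data; the linear least-squares fit below is only a minimal stand-in, and the pressures, noise level, and hidden “true” dynamics are invented for illustration.

```python
import numpy as np

# Toy stand-in for learning a soft robot's dynamics from observation:
# fit a model mapping chamber pressures to a joint angle from examples,
# instead of deriving and solving the equations of motion.
rng = np.random.default_rng(0)
pressures = rng.uniform(40.0, 60.0, size=(200, 2))  # logged chamber pressures
true_w = np.array([0.05, -0.05])                    # assumed hidden dynamics
angles = pressures @ true_w + rng.normal(0.0, 0.01, 200)  # noisy observations

# Learn the mapping purely from data; no physics equations required.
w, *_ = np.linalg.lstsq(pressures, angles, rcond=None)

predicted = pressures @ w  # model's predicted joint angles
```

If the robot’s behavior shifts, the same fitting procedure can simply be rerun on fresh observations, which is exactly the appeal of the learning-based approach over a hand-derived model.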

There’s still a long way to go before soft and inflatable robots can be controlled well enough to perform all the tasks they might be used for. Ultimately, no single robotic design is likely to be perfect for every situation.

Nevertheless, research like this gives us hope that one day, inflatable robots could be useful tools, or even companions, at which point the advertising slogans write themselves: Don’t let them down, and they won’t let you down!

Image Credit: Brigham Young University.

Posted in Human Robots

#435021 Dreams of ubiquitous social robots still ...

Hopes that the tech industry was on the cusp of rolling personal robots into homes are dimming now that several once-promising consumer robotics companies have shut down.

Posted in Human Robots

#435014 Consumer Robotics Company Anki Abruptly ...

After last-minute funding fell through, Anki becomes the latest consumer robotics company to close.

Posted in Human Robots

#434865 5 AI Breakthroughs We’ll Likely See in ...

Convergence is accelerating disruption… everywhere! Exponential technologies are colliding into each other, reinventing products, services, and industries.

AI assistants such as Siri and Alexa can process your voice and output helpful responses, while other AIs like Face++ can recognize faces. Still others create art from scribbles, or even diagnose medical conditions.

Let’s dive into AI and convergence.

Top 5 Predictions for AI Breakthroughs (2019-2024)
My friend Neil Jacobstein is my ‘go-to expert’ in AI, with over 25 years of technical consulting experience in the field. Currently the AI and Robotics chair at Singularity University, Jacobstein is also a Distinguished Visiting Scholar in Stanford’s MediaX Program, a Henry Crown Fellow, an Aspen Institute moderator, and serves on the National Academy of Sciences Earth and Life Studies Committee. Neil predicted five trends he expects to emerge over the next five years, by 2024.

AI gives rise to new non-human pattern recognition and intelligence results

AlphaGo Zero, a machine learning program trained to play the complex game of Go, defeated its predecessor AlphaGo (the system that beat world champion Lee Sedol in 2016) by 100 games to zero. Instead of learning from human play, AlphaGo Zero trained by playing against itself, a method known as self-play reinforcement learning.

Building its own knowledge from scratch, AlphaGo Zero demonstrates a novel form of creativity, free of human bias. Even more groundbreaking, this type of AI pattern recognition allows machines to accumulate thousands of years of knowledge in a matter of hours.
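The self-play idea can be illustrated with a toy, vastly simplified sketch. The game (Nim: take 1–3 objects from a pile of 10, last to take wins), the tabular values, and all parameters are my assumptions for illustration; AlphaGo Zero’s actual training combines deep neural networks with tree search and is nothing like this scale.

```python
import random

# Toy self-play learner for Nim: the agent starts with zero knowledge and
# improves only by playing games against itself (no human examples).
random.seed(0)
Q = {}  # (pile, move) -> estimated value for the player about to move

def best_move(pile):
    moves = [m for m in (1, 2, 3) if m <= pile]
    return max(moves, key=lambda m: Q.get((pile, m), 0.0))

for episode in range(5000):
    pile, history = 10, []
    while pile > 0:
        moves = [m for m in (1, 2, 3) if m <= pile]
        # Mostly play greedily, sometimes explore a random move.
        m = random.choice(moves) if random.random() < 0.2 else best_move(pile)
        history.append((pile, m))
        pile -= m
    # The last mover won; credit alternates back through both "players".
    reward = 1.0
    for pile_before, m in reversed(history):
        key = (pile_before, m)
        Q[key] = Q.get(key, 0.0) + 0.1 * (reward - Q.get(key, 0.0))
        reward = -reward
```

With enough self-play the policy tends toward the known optimal strategy of leaving the opponent at a multiple of four, knowledge the agent built entirely from scratch, which is the essence of the approach described above.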

While these systems can’t answer the question “What is orange juice?” or compete with the intelligence of a fifth grader, they are growing more and more strategically complex, merging with other forms of narrow artificial intelligence. Within the next five years, who knows what successors of AlphaGo Zero will emerge, augmenting both your business functions and day-to-day life.

Doctors risk malpractice when not using machine learning for diagnosis and treatment planning

A group of Chinese and American researchers recently created an AI system that diagnoses common childhood illnesses, ranging from the flu to meningitis. Trained on electronic health records compiled from 1.3 million outpatient visits of almost 600,000 patients, the AI program produced diagnosis outcomes with unprecedented accuracy.

While the US health system does not tout the same level of accessible universal health data as some Chinese systems, we’ve made progress in implementing AI in medical diagnosis. Dr. Kang Zhang, chief of ophthalmic genetics at the University of California, San Diego, created his own system that detects signs of diabetic blindness, relying on both text and medical images.

With an eye to the future, Jacobstein has predicted that “we will soon see an inflection point where doctors will feel it’s a risk to not use machine learning and AI in their everyday practices because they don’t want to be called out for missing an important diagnostic signal.”

Quantum advantage will massively accelerate drug design and testing

Researchers estimate that there are 10⁶⁰ possible drug-like molecules, more than the number of atoms in our solar system. But today, chemists must make drug predictions based on properties influenced by molecular structure, then synthesize numerous variants to test their hypotheses.

Quantum computing could transform this time-consuming, highly costly process into an efficient, not to mention life-changing, drug discovery protocol.

“Quantum computing is going to have a major industrial impact… not by breaking encryption,” said Jacobstein, “but by making inroads into design through massive parallel processing that can exploit superposition and quantum interference and entanglement, and that can wildly outperform classical computing.”

AI accelerates security systems’ vulnerability and defense

With the incorporation of AI into almost every aspect of our lives, cyberattacks have grown increasingly threatening. “Deep attacks” can use AI-generated content to avoid both human and AI controls.

Previous examples include fake videos of former President Obama speaking fabricated sentences, and an adversarial AI fooling another algorithm into categorizing a stop sign as a 45 mph speed limit sign. Without the appropriate protections, AI systems can be manipulated to conduct any number of destructive objectives, whether ruining reputations or diverting autonomous vehicles.
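How a tiny, targeted perturbation can flip a classifier’s output can be shown on a toy model. This is a generic fast-gradient-style sketch on an invented linear classifier, not the published stop-sign attack; every name and number here is an assumption for illustration.

```python
import numpy as np

# Toy adversarial example: a small per-coordinate change, aimed along the
# classifier's own weights, flips the predicted label.
rng = np.random.default_rng(1)
w = rng.normal(size=50)       # toy linear classifier: label = sign(w . x)
x = w / np.linalg.norm(w)     # an input the classifier confidently labels +1

def predict(v):
    return 1 if w @ v > 0 else -1

# Budget chosen just large enough to flip this input's score; each
# coordinate moves by at most eps, against the score's gradient.
eps = 2 * (w @ x) / np.abs(w).sum()
x_adv = x - eps * np.sign(w)

# predict(x) == 1, but predict(x_adv) == -1: the label flips.
```

The unsettling point mirrors the stop-sign result: the perturbation is small and structured, invisible as noise to a human, yet decisive for the model, which is why defenses against such “deep attacks” matter.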

Jacobstein’s take: “We all have security systems on our buildings, in our homes, around the healthcare system, and in air traffic control, financial organizations, the military, and intelligence communities. But we all know that these systems have been hacked periodically and we’re going to see that accelerate. So, there are major business opportunities there and there are major opportunities for you to get ahead of that curve before it bites you.”

AI design systems drive breakthroughs in atomically precise manufacturing

Just as the modern computer transformed our relationship with bits and information, AI will redefine and revolutionize our relationship with molecules and materials. AI is currently being used to discover new materials for clean-tech innovations, such as solar panels, batteries, and devices that can now conduct artificial photosynthesis.

Today, it takes about 15 to 20 years to create a single new material, according to industry experts. But as AI design systems skyrocket in capacity, these will vastly accelerate the materials discovery process, allowing us to address pressing issues like climate change at record rates. Companies like Kebotix are already on their way to streamlining the creation of chemistries and materials at the click of a button.

Atomically precise manufacturing will enable us to produce the previously unimaginable.

Final Thoughts
Within just the past three years, countries across the globe have signed into existence national AI strategies and plans for ramping up innovation. Businesses and think tanks have leaped onto the scene, hiring AI engineers and tech consultants to leverage what computer scientist Andrew Ng has called the ‘new electricity’ of the 21st century.

As AI plays an increasingly vital role in everyday life, how will your business leverage it to keep up and build forward?

In the wake of burgeoning markets, new ventures will quickly arise, each taking advantage of untapped data sources or unmet security needs.

And as your company aims to ride the wave of AI’s exponential growth, consider the following pointers to leverage AI and disrupt yourself before it reaches you first:

Determine where and how you can begin collecting critical data to inform your AI algorithms
Identify time-intensive processes that can be automated and accelerated within your company
Discern which global challenges can be expedited by hyper-fast, all-knowing minds

Remember: good data is vital fuel. Well-defined problems are the best compass. And the time to start implementing AI is now.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: Yurchanka Siarhei / Shutterstock.com

Posted in Human Robots

#434857 It’s 2019 – where’s my ...

I loved the “Thundercats” cartoon as a child, watching cat-like humanoids fighting the forces of evil. Whenever their leader was in trouble, he'd unleash the Sword of Omens to gain “sight beyond sight,” the ability to see events happening at faraway places, or bellow “Thunder, Thunder, Thunder, Thundercats, Hooo!” to instantaneously summon his allies to his location to join the fight. What kid didn't want those superpowers?

Posted in Human Robots