
#433776 Why We Should Stop Conflating Human and ...

It’s common to hear phrases like ‘machine learning’ and ‘artificial intelligence’ and believe that somehow, someone has managed to replicate a human mind inside a computer. This, of course, is untrue—but part of the reason this idea is so pervasive is that the metaphor of human learning and intelligence has been quite useful in explaining machine learning and artificial intelligence.

Indeed, some AI researchers maintain a close link with the neuroscience community, and inspiration runs in both directions. But the metaphor can be a hindrance to people trying to explain machine learning to those less familiar with it. One of the biggest risks of conflating human and machine intelligence is that we start to hand over too much agency to machines. For those of us working with software, it’s essential that we remember the agency is human—it’s humans who build these systems, after all.

It’s worth unpacking the key differences between machine and human intelligence. While there are certainly similarities, it’s by looking at what makes them different that we can better grasp how artificial intelligence works, and how we can build and use it effectively.

Neural Networks
Central to the metaphor that links human and machine learning is the concept of a neural network. The biggest difference between a human brain and an artificial neural net is the sheer scale of the brain’s neural network. What’s crucial is that it’s not simply the number of neurons in the brain (which reach into the billions), but more precisely, the mind-boggling number of connections between them.

But the issue runs deeper than questions of scale. The human brain is qualitatively different from an artificial neural network for two other important reasons: the connections that power it are analogue, not digital, and the neurons themselves aren’t uniform (as they are in an artificial neural network).

This is why the brain is such a complex thing. Even the most complex artificial neural network, while often difficult to interpret and unpack, has an underlying architecture and principles guiding it (this is what we’re trying to do, so let’s construct the network like this…).
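That uniformity and intentional architecture can be made concrete. Below is a minimal sketch in Python (with invented layer sizes and random weights, standing in for no particular system) of a tiny feed-forward network: every artificial neuron is the same small formula, and the structure is fixed by the engineer before any learning happens.

```python
import math
import random

random.seed(0)

def make_layer(n_inputs, n_neurons):
    """Every artificial neuron has the same uniform form: a weight vector and a bias."""
    return [([random.uniform(-1, 1) for _ in range(n_inputs)], 0.0)
            for _ in range(n_neurons)]

def forward(layer, inputs):
    """Each neuron computes the same function: a weighted sum squashed by a sigmoid."""
    return [1 / (1 + math.exp(-(sum(w * x for w, x in zip(weights, inputs)) + bias)))
            for weights, bias in layer]

# The architecture is a design decision made up front:
# 3 inputs -> 4 hidden neurons -> 1 output.
hidden = make_layer(3, 4)
output = make_layer(4, 1)

result = forward(output, forward(hidden, [0.5, -0.2, 0.1]))
print(result)  # a single value between 0 and 1
```

Nothing here resembles the brain's billions of non-uniform, analogue connections; the point is precisely how legible and deliberate the artificial version is.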

Intricate as they may be, neural networks in AIs are engineered with a specific outcome in mind. The human mind, however, doesn’t have the same degree of intentionality in its engineering. Yes, it should help us do all the things we need to do to stay alive, but it also allows us to think critically and creatively in a way that doesn’t need to be programmed.

The Beautiful Simplicity of AI
The fact that artificial intelligence systems are so much simpler than the human brain is, ironically, what enables AIs to deal with far greater computational complexity than we can.

Artificial neural networks can hold much more information and data than the human brain, largely due to the type of data that is stored and processed in a neural network. It is discrete and specific, like an entry on an Excel spreadsheet.

In the human brain, data doesn’t have this same discrete quality. So while an artificial neural network can process very specific data at an incredible scale, it isn’t able to process information in the rich and multidimensional manner a human brain can. This is the key difference between an engineered system and the human mind.

Despite years of research, the human mind still remains somewhat opaque. This is because the brain’s analogue synaptic connections are almost impossible to map onto the digital connections within an artificial neural network.

Speed and Scale
Consider what this means in practice. The relative simplicity of an AI allows it to do a very complex task very well, and very quickly. A human brain simply can’t process data at scale and speed in the way AIs need to if they’re, say, translating speech to text, or processing a huge set of oncology reports.

Essential to the way AI works in both these contexts is that it breaks data and information down into tiny constituent parts. For example, it could break sounds down into phonetic text, which could then be translated into full sentences, or break images into pieces to understand the rules of how a huge set of them is composed.

Humans often do a similar thing, and this is the point at which machine learning is most like human learning; like algorithms, humans break data or information into smaller chunks in order to process it.

But there’s a reason for this similarity. This breakdown process is engineered into every neural network by a human engineer. What’s more, the way this process is designed will be down to the problem at hand. How an artificial intelligence system breaks down a data set is its own way of ‘understanding’ it.

Even while running a highly complex algorithm unsupervised, the parameters of how an AI learns—how it breaks data down in order to process it—are always set from the start.

Human Intelligence: Defining Problems
Human intelligence doesn’t have this set of limitations, which is what makes us so much more effective at problem-solving. It’s the human ability to ‘create’ problems that makes us so good at solving them. There’s an element of contextual understanding and decision-making in the way humans approach problems.

AIs might be able to unpack problems or find new ways into them, but they can’t define the problem they’re trying to solve.

Algorithmic insensitivity has come into focus in recent years, with an increasing number of scandals around bias in AI systems. Of course, this bias originates with the people who build the algorithms, but it underlines the point that algorithmic biases can only be identified by human intelligence.

Human and Artificial Intelligence Should Complement Each Other
We must remember that artificial intelligence and machine learning aren’t simply things that ‘exist’ that we can no longer control. They are built, engineered, and designed by us. This mindset puts us in control of the future, and makes algorithms even more elegant and remarkable.

Image Credit: Liu zishan/Shutterstock

Posted in Human Robots

#433689 The Rise of Dataism: A Threat to Freedom ...

What would happen if we made all of our data public—everything from wearables monitoring our biometrics, all the way to smartphones monitoring our location, our social media activity, and even our internet search history?

Would such insights into our lives simply provide companies and politicians with greater power to invade our privacy and manipulate us by using our psychological profiles against us?

A burgeoning new philosophy called dataism doesn’t think so.

In fact, adherents of this trending ideology believe that liberating the flow of data is the supreme value of the universe, and that it could be the key to unleashing the greatest scientific revolution in the history of humanity.

What Is Dataism?
First mentioned by David Brooks in his 2013 New York Times article “The Philosophy of Data,” dataism is an ethical system that has been most heavily explored and popularized by renowned historian Yuval Noah Harari.

In his 2016 book Homo Deus, Harari described dataism as a new form of religion that celebrates the growing importance of big data.

Its core belief centers around the idea that the universe gives greater value and support to systems, individuals, and societies that contribute most heavily and efficiently to data processing. In an interview with Wired, Harari stated, “Humans were special and important because up until now they were the most sophisticated data processing system in the universe, but this is no longer the case.”

Now, big data and machine learning are proving themselves more sophisticated, and dataists believe we should hand over as much information and power to these algorithms as possible, allowing the free flow of data to unlock innovation and progress unlike anything we’ve ever seen before.

Pros: Progress and Personal Growth
When you let data run freely, it’s bound to be mixed and matched in new ways that inevitably spark progress. And as we enter the exponential future where every person is constantly connected and sharing their data, the potential for such collaborative epiphanies becomes even greater.

We can already see important increases in quality of life thanks to companies like Google. With Google Maps on your phone, your position is constantly updating on their servers. This information, combined with everyone else on the planet using a phone with Google Maps, allows your phone to inform you of traffic conditions. Based on the speed and location of nearby phones, Google can reroute you to less congested areas or help you avoid accidents. And since you trust that these algorithms have more data than you, you gladly hand over your power to them, following your GPS’s directions rather than your own.

We can do the same sort of thing with our bodies.

Imagine, for instance, a world where each person has biosensors in their bloodstreams—a not unlikely or distant possibility when considering diabetic people already wear insulin pumps that constantly monitor their blood sugar levels. And let’s assume this data was freely shared to the world.

Now imagine a virus like Zika or the bird flu breaks out. Thanks to this technology, an odd change in biodata coming from a particular region flags an artificial intelligence that feeds data to the CDC (Centers for Disease Control and Prevention). Recognizing that a pandemic could be possible, AIs begin 3D printing vaccines on demand, predicting the number of people who may be afflicted. When our personal AIs tell us the locations of the spreading epidemic and to take the vaccine just delivered by drone to our homes, are we likely to follow their instructions? Almost certainly—and if so, it’s likely millions, if not billions, of lives will have been saved.

But to quickly create such vaccines, we’ll also need to liberate research.

Currently, universities and companies seeking to benefit humankind with medical solutions have to pay extensively to organize clinical trials and to find people who match their needs. But if all our biodata was freely aggregated, perhaps they could simply say “monitor all people living with cancer” to an AI, and thanks to the constant stream of data coming in from the world’s population, a machine learning program may easily be able to detect a pattern and create a cure.

As always in research, the more sample data you have, the higher the chance that such patterns will emerge. If data is flowing freely, then anyone in the world can suddenly decide they have a hunch they want to explore, and without having to spend months and months of time and money hunting down the data, they can simply test their hypothesis.

Whether garage tinkerers, at-home scientists, or PhD students—an abundance of free data allows for science to progress unhindered, each person able to operate without being slowed by lack of data. And any progress they make is immediately liberated, becoming free data shared with anyone else who may find a use for it.

Any individual with a curious passion would have the entire world’s data at their fingertips, empowering every one of us to become an expert in any subject that inspires us. Expertise we can then share back into the data stream—a positive feedback loop spearheading progress for the entirety of humanity’s knowledge.

Such exponential gains represent a dataism utopia.

Unfortunately, our current incentives and economy also show us the tragic failures of this model.

As Harari has pointed out, the rise of dataism means that “humanism is now facing an existential challenge and the idea of ‘free will’ is under threat.”

Cons: Manipulation and Extortion
In 2017, The Economist declared that data was the most valuable resource on the planet—even more valuable than oil.

Perhaps this is because data is ‘priceless’: it represents understanding, and understanding represents control. And so, in the world of advertising and politics, having data on your consumers and voters gives you an incredible advantage.

This was evidenced by the Cambridge Analytica scandal, in which it’s believed that Donald Trump and the architects of Brexit leveraged users’ Facebook data to create psychological profiles that enabled them to manipulate the masses.

How powerful are these psychological models?

A team that built a model similar to the one used by Cambridge Analytica said their model could understand someone as well as a coworker could with access to only 10 Facebook likes. With 70 likes they could know the person as well as a friend would, with 150 likes they could match their parents’ understanding, and at 300 likes they could even come to know someone better than their lover. With more likes still, the model could come to know someone better than that person knows themselves.

Proceeding With Caution
In a capitalist democracy, do we want businesses and politicians to know us better than we know ourselves?

In spite of the remarkable benefits that may result for our species by freely giving away our information, do we run the risk of that data being used to exploit and manipulate the masses towards a future without free will, where our daily lives are puppeteered by those who own our data?

It’s extremely possible.

And it’s for this reason that one of the most important conversations we’ll have as a species centers around data ownership: do we just give ownership of the data back to the users, allowing them to choose who to sell or freely give their data to? Or will that simply deter the entrepreneurial drive and cause all of the free services we use today, like Google Search and Facebook, to begin charging inaccessible prices? How much are we willing to pay for our freedom? And how much do we actually care?

If recent history has taught us anything, it’s that humans are willing to give up more privacy than they like to think. Fifteen years ago, it would have been crazy to suggest we’d all allow ourselves to be tracked by our cars, phones, and daily check-ins to our favorite neighborhood locations; but now most of us see it as a worthwhile trade for optimized commutes and dating. As we continue navigating that fine line between exploitation and innovation into a more technological future, what other trade-offs might we be willing to make?

Image Credit: graphicINmotion / Shutterstock.com


#433655 First-Ever Grad Program in Space Mining ...

Maybe they could call it the School of Space Rock: A new program being offered at the Colorado School of Mines (CSM) will educate post-graduate students on the nuts and bolts of extracting and using valuable materials such as rare metals and frozen water from space rocks like asteroids or the moon.

Officially called Space Resources, the graduate-level program is reputedly the first of its kind in the world to offer a course in the emerging field of space mining. Heading the program is Angel Abbud-Madrid, director of the Center for Space Resources at Mines, a well-known engineering school located in Golden, Colorado, where Molson Coors taps Rocky Mountain spring water for its earthly brews.

The first semester for the new discipline began last month. While Abbud-Madrid didn’t immediately respond to an interview request, Singularity Hub did talk to Chris Lewicki, president and CEO of Planetary Resources, a space mining company whose founders include Peter Diamandis, Singularity University co-founder.

A former NASA engineer who worked on multiple Mars missions, Lewicki says the Space Resources program at CSM, with its multidisciplinary focus on science, economics, and policy, will help students be light years ahead of their peers in the nascent field of space mining.

“I think it’s very significant that they’ve started this program,” he said. “Having students with that kind of background exposure just allows them to be productive on day one instead of having to kind of fill in a lot of things for them.”

Who would be attracted to apply for such a program? There are many professionals who could be served by a post-baccalaureate certificate, master’s degree, or even Ph.D. in Space Resources, according to Lewicki. Certainly aerospace engineers and planetary scientists would be among the faces in the classroom.

“I think it’s [also] people who have an interest in what I would call maybe space robotics,” he said. Lewicki is referring not only to the classic example of robotic arms like the Canadarm2, which lends a hand to astronauts aboard the International Space Station, but other types of autonomous platforms.

One example might be Planetary Resources’ own Arkyd-6, a small, autonomous satellite called a CubeSat launched earlier this year to test different technologies that might be used for deep-space exploration of resources. The proof-of-concept was as much a test for the technology—such as the first space-based use of a mid-wave infrared imager to detect water resources—as it was for being able to work in space on a shoestring budget.

“We really proved that doing one of these billion-dollar science missions to deep space can be done for a lot less if you have a very focused goal, and if you kind of cut a lot of corners and then put some commercial approaches into those things,” Lewicki said.

A Trillion-Dollar Industry
Why space mining? There are at least a trillion reasons.

Astrophysicist Neil deGrasse Tyson famously said that the first trillionaire will be the “person who exploits the natural resources on asteroids.” That’s because asteroids—rocky remnants from the formation of our solar system more than four billion years ago—harbor precious metals, ranging from platinum and gold to iron and nickel.

For instance, one future target of exploration by NASA—an asteroid dubbed 16 Psyche, orbiting the sun in the asteroid belt between Mars and Jupiter—is worth an estimated $10,000 quadrillion. It’s a number so mind-bogglingly big that it would crash the global economy, if someone ever figured out how to tow it back to Earth without literally crashing it into the planet.

Living Off the Land
Space mining isn’t just about getting rich. Many argue that humanity’s ability to extract resources in space, especially water that can be refined into rocket fuel, will be a key technology to extend our reach beyond near-Earth space.

The presence of frozen water around the frigid polar regions of the moon, for example, represents an invaluable source to power future deep-space missions. Splitting H2O into its component elements of hydrogen and oxygen would provide a nearly inexhaustible source of rocket fuel. Today, it costs $10,000 to put a pound of payload in Earth orbit, according to NASA.

Until more advanced rocket technology is developed, the moon looks to be the best bet for serving as the launching pad to Mars and beyond.

Moon Versus Asteroid
However, Lewicki notes that despite the moon’s proximity and our more intimate familiarity with its pockmarked surface, that doesn’t mean a lunar mission to extract resources is any easier than a multi-year journey to a fast-moving asteroid.

For one thing, fighting gravity to and from the moon is no easy feat, as the moon has a significantly stronger gravitational field than an asteroid. Another challenge is that the frozen water is located in permanently shadowed lunar craters, meaning space miners can’t rely on solar-powered equipment and would need some other external energy source.

And then there’s the fact that moon craters might just be the coldest places in the solar system. NASA’s Lunar Reconnaissance Orbiter found temperatures plummeted as low as 26 Kelvin, or more than minus 400 degrees Fahrenheit. In comparison, the coldest temperatures on Earth have been recorded near the South Pole in Antarctica—about minus 148 degrees F.

“We don’t operate machines in that kind of thermal environment,” Lewicki said of the extreme temperatures detected in the permanent dark regions of the moon. “Antarctica would be a balmy desert island compared to a lunar polar crater.”

Of course, no one knows quite what awaits us in the asteroid belt. Answers may soon be forthcoming. Last week, the Japan Aerospace Exploration Agency landed two small, hopping rovers on an asteroid called Ryugu. Meanwhile, NASA hopes to retrieve a sample from the near-Earth asteroid Bennu when its OSIRIS-REx mission makes contact at the end of this year.

No Bucks, No Buck Rogers
Visionaries like Elon Musk and Jeff Bezos talk about colonies on Mars, with millions of people living and working in space. The reality is that there’s probably a reason Buck Rogers was set in the 25th century: It’s going to take a lot of money and a lot of time to realize those sci-fi visions.

Or, as Lewicki put it: “No bucks, no Buck Rogers.”

The cost of operating in outer space can be prohibitive. Planetary Resources itself is grappling with raising additional funding, with reports this year about layoffs and even a possible auction of company assets.

Still, Lewicki is confident that despite economic and technical challenges, humanity will someday exceed even the boldest dreamers—skyscrapers on the moon, interplanetary trips to Mars—as judged against today’s engineering marvels.

“What we’re doing is going to be very hard, very painful, and almost certainly worth it,” he said. “Who would have thought that there would be a job for a space miner that you could go to school for, even just five or ten years ago. Things move quickly.”

Image Credit: M-SUR / Shutterstock.com


#433646 Was This Man a Bronze-Age Cyborg? His ...

Treasure hunters in Switzerland have unearthed a hand-some artifact: a 3,500-year-old bronze hand outfitted with a gold cuff, Swiss archaeologists announced last week.


#433634 This Robotic Skin Makes Inanimate ...

In Goethe’s poem “The Sorcerer’s Apprentice,” made world-famous by its adaptation in Disney’s Fantasia, a lazy apprentice, left to fetch water, uses magic to bewitch a broom into performing his chores for him. Now, new research from Yale has opened up the possibility of being able to animate—and automate—household objects by fitting them with a robotic skin.

Yale’s Soft Robotics lab, the Faboratory, is led by Professor Rebecca Kramer-Bottiglio, and has long investigated the possibilities associated with new kinds of manufacturing. While the typical image of a robot is hard, cold steel and rigid movements, soft robotics aims to create something more flexible and versatile. After all, the human body is made up of soft, flexible surfaces, and the world is designed for us. Soft, deformable robots could change shape to adapt to different tasks.

When designing a robot, key components are the robot’s sensors, which allow it to perceive its environment, and its actuators, the electrical or pneumatic motors that allow the robot to move and interact with its environment.

Consider your hand, which has temperature and pressure sensors, but also muscles as actuators. The omni-skins, as the Science Robotics paper dubs them, combine sensors and actuators, embedding them into an elastic sheet. The robotic skins are moved by pneumatic actuators or shape-memory alloys that can bounce back into shape. If the skin is then wrapped around a soft, deformable object, moving it with the actuators can allow the object to crawl along a surface.

The key to the design here is flexibility: rather than adding chips, sensors, and motors into every household object to turn them into individual automatons, the same skin can be used for many purposes. “We can take the skins and wrap them around one object to perform a task—locomotion, for example—and then take them off and put them on a different object to perform a different task, such as grasping and moving an object,” said Kramer-Bottiglio. “We can then take those same skins off that object and put them on a shirt to make an active wearable device.”

The task is then to dream up applications for the omni-skins. Initially, you might imagine demanding a stuffed toy to fetch the remote control for you, or animating a sponge to wipe down kitchen surfaces—but this is just the beginning. The scientists attached the skins to a soft tube and camera, creating a worm-like robot that could compress itself and crawl into small spaces for rescue missions. The same skins could then be worn by a person to sense their posture. One could easily imagine this being adapted into a soft exoskeleton for medical or industrial purposes: for example, helping with rehabilitation after an accident or injury.

The initial motivating factor for creating the robots was an environment where space and weight are at a premium, and humans are forced to improvise with whatever’s at hand: outer space. Kramer-Bottiglio originally began the work after NASA put out a call for soft robotics systems for use by astronauts. Instead of wasting valuable rocket payload by sending up a heavy metal droid like ATLAS to fetch items or perform repairs, soft robotic skins with modular sensors could be adapted for a range of different uses spontaneously.

By reassembling components in the soft robotic skin, a crumpled ball of paper could provide the chassis for a robot that performs repairs on the spaceship, or explores the lunar surface. The dynamic compression provided by the robotic skin could be used for g-suits to protect astronauts when they rapidly accelerate or decelerate.

“One of the main things I considered was the importance of multi-functionality, especially for deep space exploration where the environment is unpredictable. The question is: How do you prepare for the unknown unknowns? … Given the design-on-the-fly nature of this approach, it’s unlikely that a robot created using robotic skins will perform any one task optimally,” Kramer-Bottiglio said. “However, the goal is not optimization, but rather diversity of applications.”

There are still problems to resolve. Many of the videos of the skins show that they rely on an external power supply. Creating new, smaller batteries that can power wearable devices has been a focus of cutting-edge materials science research for some time. Much of the lab’s expertise is in creating flexible, stretchable electronics that can be deformed by the actuators without breaking the circuitry. In the future, the team hopes to streamline the production process; if the components could be 3D printed, then the skins could be created when needed.

In addition, robotic hardware that’s capable of performing an impressive range of precise motions is quite an advanced technology. The software to control those robots, and enable them to perform a variety of tasks, is quite another challenge. With soft robots, it can become even more complex to design that control software, because the body itself can change shape and deform as the robot moves. The same set of programmed motions, then, can produce different results depending on the environment.

“Let’s say I have a soft robot with four legs that crawls along the ground, and I make it walk up a hard slope,” Dr. David Howard, who works on robotics at CSIRO in Australia, explained to ABC.

“If I make that slope out of gravel and I give it the same control commands, the actual body is going to deform in a different way, and I’m not necessarily going to know what that is.”

Despite these and other challenges, research like that at the Faboratory still hopes to redefine how we think of robots and robotics. Instead of a robot that imitates a human and manipulates objects, the objects themselves will become programmable matter, capable of moving autonomously and carrying out a range of tasks. Futurists speculate about a world where most objects are automated to some degree and can assemble and repair themselves, or are even built entirely of tiny robots.

The tale of the Sorcerer’s Apprentice was first written in 1797, at the dawn of the industrial revolution, over a century before the word “robot” was even coined. Yet more and more roboticists aim to prove Arthur C. Clarke’s maxim: any sufficiently advanced technology is indistinguishable from magic.

Image Credit: Joran Booth, The Faboratory
