
#434643 Sensors and Machine Learning Are Giving ...

According to some scientists, humans really do have a sixth sense. There’s nothing supernatural about it: the sense of proprioception tells you about the relative positions of your limbs and the rest of your body. Close your eyes, block out all sound, and you can still use this internal “map” of your external body to locate your muscles and body parts – you have an innate sense of the distances between them, and the perception of how they’re moving, above and beyond your sense of touch.

This sense is invaluable for allowing us to coordinate our movements. In humans, the brain integrates senses including touch, heat, and the tension in muscle spindles to allow us to build up this map.

Replicating this complex sense has posed a great challenge for roboticists. We can imagine simulating the sense of sight with cameras, sound with microphones, or touch with pressure pads. Robots with chemical sensors could far outperform us in smell and taste, but building in proprioception, a robot’s sense of itself and its own body, is far more difficult, and it is a large part of why humanoid robots are so tricky to get right.

Simultaneous localization and mapping (SLAM) software allows robots to use their own senses to build up a picture of their surroundings, but they need a keen sense of the position of their own bodies to interact with that environment. If something unexpected happens, or if primary senses are unavailable, as in dark environments, robots can struggle to keep track of their own position and orientation. For human-robot interaction, wearable robotics, and delicate applications like surgery, tiny differences can be extremely important.

Piecemeal Solutions
In the case of hard robotics, this is generally solved by using a series of strain and pressure sensors in each joint, which allow the robot to determine how its limbs are positioned. That works fine for rigid robots with a limited number of joints, but for softer, more flexible robots, this information is limited. Roboticists are faced with a dilemma: a vast, complex array of sensors for every degree of freedom in the robot’s movement, or limited skill in proprioception?

New techniques, often involving new arrays of sensory material and machine-learning algorithms to fill in the gaps, are starting to tackle this problem. Take the work of Thomas George Thuruthel and colleagues in Pisa and San Diego, who draw inspiration from the proprioception of humans. In a new paper in Science Robotics, they describe the use of soft sensors distributed through a robotic finger at random. This placement is much like the constant adaptation of sensors in humans and animals, rather than relying on feedback from a limited number of positions.

The sensors allow the soft robot to react to touch and pressure in many different locations, forming a map of itself as it contorts into complicated positions. The machine-learning algorithm serves to interpret the signals from the randomly distributed sensors: as the finger moves around, it is observed by a motion-capture system. Once the robot’s neural network has been trained, it can associate the feedback from the sensors with the position of the finger detected by the motion-capture system, and the motion capture can then be discarded. The robot observes its own motions to understand the shapes its soft body can take and translates them into the language of these soft sensors.
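A minimal sketch of what such a training setup could look like is below; this is not the authors’ code, and the sensor count, pose dimension, and training data are placeholder assumptions. A small neural network learns to regress the finger’s pose, supervised by motion capture, from the raw sensor signals alone.

```python
# A sketch only: maps readings from randomly placed soft sensors to the
# finger pose reported by motion capture. Sensor count, pose dimension,
# and the data itself are hypothetical placeholders.
import torch
import torch.nn as nn

N_SENSORS = 12   # randomly embedded soft sensors (assumed count)
POSE_DIM = 3     # e.g., fingertip x, y, z from the motion-capture system

model = nn.Sequential(
    nn.Linear(N_SENSORS, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, POSE_DIM),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Placeholder tensors stand in for the recorded training set.
sensor_readings = torch.randn(1000, N_SENSORS)
mocap_poses = torch.randn(1000, POSE_DIM)

for epoch in range(200):
    pred = model(sensor_readings)                     # pose from sensors alone
    loss = nn.functional.mse_loss(pred, mocap_poses)  # supervised by mocap
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# After training, the motion-capture rig is no longer needed: the robot
# estimates its own pose directly from its embedded sensors.
```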

“The advantages of our approach are the ability to predict complex motions and forces that the soft robot experiences (which is difficult with traditional methods) and the fact that it can be applied to multiple types of actuators and sensors,” said Michael Tolley of the University of California San Diego. “Our method also includes redundant sensors, which improves the overall robustness of our predictions.”

The use of machine learning lets the roboticists come up with a reliable model of this complex, non-linear system of actuator motions, something difficult to do by directly calculating the expected motion of the soft robot. It also resembles the human system of proprioception, built on redundant sensors that change and shift in position as we age.

In Search of a Perfect Arm
Another approach to training robots in using their bodies comes from Robert Kwiatkowski and Hod Lipson of Columbia University in New York. In their paper “Task-agnostic self-modeling machines,” also recently published in Science Robotics, they describe a new type of robotic arm.

Robotic arms and hands are getting increasingly dexterous, but training them to grasp a large array of objects and perform many different tasks can be an arduous process. It’s also an extremely valuable skill to get right: Amazon is highly interested in the perfect robot arm. Google hooked together an array of over a dozen robot arms so that they could share information about grasping new objects, in part to cut down on training time.

Training a robot arm separately for every individual task takes time and reduces its adaptability: either you need a machine-learning algorithm with a huge dataset of experiences, or, even worse, you need to hard-code thousands of different motions. Kwiatkowski and Lipson attempt to overcome this by developing a robotic system that has a “strong sense of self”: a model of its own size, shape, and motions.

They do this using deep machine learning. The robot begins with no prior knowledge of its own shape or the underlying physics of its motion. It then performs a series of a thousand random trajectories, recording the motion of its arm. Kwiatkowski and Lipson compare this to a baby in its first year of life observing the motions of its own hands and limbs, fascinated by picking up and manipulating objects.
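As an illustration of this self-modeling stage, here is a minimal sketch under assumptions of my own (state and command dimensions, network size, and stand-in data; the paper’s actual setup may differ): random “motor babbling” data is used to fit a forward model that predicts the arm’s next state from its current state and command.

```python
# A sketch only: fit a forward self-model on random "motor babbling" data.
# STATE_DIM and ACTION_DIM are assumed sizes, and torch.randn stands in
# for the ~1,000 recorded random trajectories.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 8, 4

self_model = nn.Sequential(
    nn.Linear(STATE_DIM + ACTION_DIM, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, STATE_DIM),   # predicts the arm's next state
)

states = torch.randn(1000, STATE_DIM)       # observed arm states
actions = torch.randn(1000, ACTION_DIM)     # random motor commands
next_states = torch.randn(1000, STATE_DIM)  # resulting states

opt = torch.optim.Adam(self_model.parameters(), lr=1e-3)
for epoch in range(500):
    pred = self_model(torch.cat([states, actions], dim=1))
    loss = nn.functional.mse_loss(pred, next_states)
    opt.zero_grad()
    loss.backward()
    opt.step()
```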

Again, once the robot has trained itself to interpret these signals and build up a robust model of its own body, it’s ready for the next stage. Using that deep-learning model, the researchers then ask the robot to devise strategies for simple pick-and-place and handwriting tasks. Rather than laboriously training itself for each individual task, limiting its abilities to a narrow set of circumstances, the robot can now strategize how to use its arm across a much wider range of situations, with no additional task-specific training.
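To make that strategizing step concrete, here is one simple, hypothetical way a learned self-model can be used for planning; it reuses names from the sketch above and is offered as an illustration, not as the paper’s planner. The idea: sample candidate action sequences, roll them forward in imagination through the self-model, and execute the sequence whose predicted outcome lands closest to the goal.

```python
# A sketch only, reusing names from the sketch above; this is not the
# paper's planner. Random shooting: imagine many action sequences with
# the self-model and execute the best one.
import torch

def plan(self_model, state, goal, horizon=10, n_candidates=256):
    """Pick the action sequence whose imagined end state is nearest the goal."""
    best_actions, best_cost = None, float("inf")
    with torch.no_grad():
        for _ in range(n_candidates):
            actions = torch.randn(horizon, ACTION_DIM)  # candidate sequence
            s = state.clone()
            for a in actions:                           # imagined rollout
                s = self_model(torch.cat([s, a]))
            cost = torch.norm(s - goal).item()          # distance to goal
            if cost < best_cost:
                best_cost, best_actions = cost, actions
    return best_actions  # execute on the real arm, then re-plan
```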

Damage Control
In a further experiment, the researchers replaced part of the arm with a “deformed” component, intended to simulate what might happen if the robot were damaged. The robot detected that something was wrong and “reconfigured” itself, reconstructing its self-model by running through the training exercises once again; it was then able to perform the same tasks with only a small reduction in accuracy.
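A hedged sketch of how such damage detection might look in the framework sketched above: when the self-model’s predictions start disagreeing with what the body actually does, the robot concludes its body has changed and reruns its self-modeling routine. The threshold and all names are assumptions.

```python
# A sketch only: flag possible damage when the self-model's prediction
# error on recent movements exceeds a tolerance, then retrain.
# ERROR_THRESHOLD and all names are assumptions, not from the paper.
import torch

ERROR_THRESHOLD = 0.5

def body_has_changed(self_model, states, actions, next_states):
    with torch.no_grad():
        pred = self_model(torch.cat([states, actions], dim=1))
        error = torch.nn.functional.mse_loss(pred, next_states).item()
    return error > ERROR_THRESHOLD  # True -> rerun the self-modeling routine
```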

Machine learning techniques are opening up the field of robotics in ways we’ve never seen before. Combining them with our understanding of how humans and other animals are able to sense and interact with the world around us is bringing robotics closer and closer to becoming truly flexible and adaptable, and, eventually, omnipresent.

But as these studies show, before robots can get out and shape the world, they will need to understand themselves.

Image Credit: jumbojan / Shutterstock.com


#434623 The Great Myth of the AI Skills Gap

One of the most contentious debates in technology is around the question of automation and jobs. At issue is whether advances in automation, specifically with regards to artificial intelligence and robotics, will spell trouble for today’s workers. This debate is played out in the media daily, and passions run deep on both sides of the issue. In the past, however, automation has created jobs and increased real wages.

A widespread concern with the current scenario is that the workers most likely to be displaced by technology lack the skills needed to do the new jobs that same technology will create.

Let’s look at this concern in detail. Those who fear automation will hurt workers start by pointing out that jobs span a wide range, from low-pay, low-skill work to high-pay, high-skill work.

They then point out that technology primarily creates high-paying jobs at the top of this range, like geneticists.

Meanwhile, technology destroys low-wage, low-skill jobs, like those in fast-food restaurants.

Then, those who are worried about this dynamic often pose the question, “Do you really think a fast-food worker is going to become a geneticist?”

They worry that we are about to face a huge amount of systemic permanent unemployment, as the unskilled displaced workers are ill-equipped to do the jobs of tomorrow.

It is important to note that both sides of the debate are in agreement at this point. Unquestionably, technology destroys low-skilled, low-paying jobs while creating high-skilled, high-paying ones.

So, is that the end of the story? As a society are we destined to bifurcate into two groups, those who have training and earn high salaries in the new jobs, and those with less training who see their jobs vanishing to machines? Is this latter group forever locked out of economic plenty because they lack training?

No.

The question, “Can a fast food worker become a geneticist?” is where the error comes in. Fast food workers don’t become geneticists. What happens is that a college biology professor becomes a geneticist. Then a high-school biology teacher gets the college job. Then the substitute teacher gets hired on full-time to fill the high school teaching job. All the way down.

The question is not whether those in the lowest-skilled jobs can do the high-skilled work. Instead the question is, “Can everyone do a job just a little harder than the job they have today?” If so, and I believe very deeply that this is the case, then every time technology creates a new job “at the top,” everyone gets a promotion.

This isn’t just an academic theory—it’s 200 years of economic history in the west. For 200 years, with the exception of the Great Depression, unemployment in the US has been between 2 percent and 13 percent. Always. Europe’s range is a bit wider, but not much.

If I graphed 200 years of unemployment rates and asked you to find where the assembly line took over manufacturing, where steam power rapidly replaced animal power, or where industry adopted electricity at lightning speed, you wouldn’t be able to find those spots. They aren’t even blips in the unemployment record.

You don’t even have to look back as far as the assembly line to see this happening. It has happened non-stop for 200 years. Every fifty years, we lose about half of all jobs, and this has been pretty steady since 1800.

How is it that for 200 years we have lost half of all jobs every half century, but never has this process caused unemployment? Not only has it not caused unemployment, but during that time, we have had full employment against the backdrop of rising wages.

How can wages rise while half of all jobs are constantly being destroyed? Simple. Because new technology always increases worker productivity. It creates new jobs, like web designer and programmer, while destroying low-wage backbreaking work. When this happens, everyone along the way gets a better job.

Our current situation isn’t any different than the past. The nature of technology has always been to create high-skilled jobs and increase worker productivity. This is good news for everyone.

People often ask me what their children should study to make sure they have a job in the future. I usually say it doesn’t really matter. If I knew everything I know now and went back to the mid-1980s, what could I have taken in high school to make me better prepared for today? There is only one class, and it wasn’t computer science. It was typing. Who would have guessed?

The great skill is being able to learn new things, and luckily, we all have it. In fact, it is our singular ability as a species. What I do in my day-to-day job consists largely of skills I have learned as the years have passed. In my experience, if you ask people at any job level, “Would you like a slightly more challenging job that pays a little more?” almost everyone says yes.

That’s all it has taken for us to collectively get here today, and that’s all we need going forward.

Image Credit: Lightspring / Shutterstock.com


#434534 To Extend Our Longevity, First We Must ...

Healthcare today is reactive, retrospective, bureaucratic, and expensive. It’s sick care, not healthcare.

But that is radically changing at an exponential rate.

Through this multi-part blog series on longevity, I’ll take a deep dive into aging, longevity, and healthcare technologies that are working together to dramatically extend the human lifespan, disrupting the $3 trillion healthcare system in the process.

I’ll begin the series by explaining the nine hallmarks of aging, as explained in this journal article. Next, I’ll break down the emerging technologies and initiatives working to combat these nine hallmarks. Finally, I’ll explore the transformative implications of dramatically extending the human health span.

In this blog I’ll cover:

Why the healthcare system is broken
Why, despite this, we live in the healthiest time in human history
The nine mechanisms of aging

Let’s dive in.

The System is Broken—Here’s the Data:

Doctors spend $210 billion per year on procedures that aren’t based on patient need, but on fear of liability.
Americans spend, on average, $8,915 per person on healthcare—more than any other country on Earth.
Prescription drugs cost around 50 percent more in the US than in other industrialized countries.
At current rates, by 2025, nearly 25 percent of the US GDP will be spent on healthcare.
It takes 12 years and $359 million, on average, to take a new drug from the lab to a patient.
Only 5 in 5,000 of these new drugs proceed to human testing. From there, only 1 of those 5 is actually approved for human use.
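Multiplying those last two odds together, only about 1 in 5,000 candidate drugs (0.02 percent) makes it all the way from the lab to an approved treatment.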

And Yet, We Live in the Healthiest Time in Human History
Consider these insights, which I adapted from Max Roser’s excellent database Our World in Data:

Right now, the countries with the lowest life expectancy in the world still have higher life expectancies than the countries with the highest life expectancy did in 1800.
In 1841, a 5-year-old had a life expectancy of 55 years. Today, a 5-year-old can expect to live 82 years—an increase of 27 years.
We’re seeing a dramatic increase in healthspan. In 1845, a newborn could expect to live to 40, while a 70-year-old could expect to reach 79. Today, people of all ages can expect to live to between 81 and 86 years old.
100 years ago, 1 in 3 children would die before the age of 5. As of 2015, the child mortality rate had fallen to just 4.3 percent.
The cancer mortality rate has declined 27 percent over the past 25 years.

Figure: Around the globe, life expectancy has doubled since the 1800s. | Image from Life Expectancy by Max Roser – Our World in Data / CC BY SA
Figure: A dramatic reduction in child mortality in 1800 vs. in 2015. | Image from Child Mortality by Max Roser – Our World in Data / CC BY SA
The 9 Mechanisms of Aging
*This section was adapted from CB INSIGHTS: The Future Of Aging.

Longevity, healthcare, and aging are intimately linked.

With better healthcare, we can better treat some of the leading causes of death, impacting how long we live.

By investigating how to treat diseases, we’ll inevitably better understand what causes them in the first place, which is directly linked to why we age.

Following are the nine hallmarks of aging. I’ll share examples of health and longevity technologies addressing each of these later in this blog series.

Genomic instability: As we age, the environment and normal cellular processes cause damage to our genes. Activities like flying at high altitude, for example, expose us to increased radiation or free radicals. This damage compounds over the course of life and is known to accelerate aging.
Telomere attrition: Each DNA strand in the body (known as a chromosome) is capped by telomeres. These short snippets of DNA, repeated thousands of times, are designed to protect the bulk of the chromosome. Telomeres shorten as our DNA replicates; if a telomere reaches a certain critical shortness, a cell will stop dividing, resulting in increased incidence of disease.
Epigenetic alterations: Over time, environmental factors will change how genes are expressed, i.e., how certain sequences of DNA are read and the instruction set implemented.
Loss of proteostasis: Over time, different proteins in our body will no longer fold and function as they are supposed to, resulting in diseases ranging from cancer to neurological disorders.
Deregulated nutrient-sensing: Nutrient levels in the body can influence various metabolic pathways. Among the affected parts of these pathways are proteins like IGF-1, mTOR, sirtuins, and AMPK. Changes in the activity of these pathways have implications for longevity.
Mitochondrial dysfunction: Mitochondria (our cellular power plants) begin to decline in performance as we age. Decreased performance results in excess fatigue and other symptoms of chronic illnesses associated with aging.
Cellular senescence: As cells age, they stop dividing and cannot be removed from the body. They build up and typically cause increased inflammation.
Stem cell exhaustion: As we age, our supply of stem cells begins to diminish as much as 100 to 10,000-fold in different tissues and organs. In addition, stem cells undergo genetic mutations, which reduce their quality and effectiveness at renovating and repairing the body.
Altered intercellular communication: The communication mechanisms that cells use are disrupted as cells age, resulting in decreased ability to transmit information between cells.

Conclusion
Over the past 200 years, we have seen an abundance of healthcare technologies enable a massive lifespan boom.

Now, exponential technologies like artificial intelligence, 3D printing and sensors, as well as tremendous advancements in genomics, stem cell research, chemistry, and many other fields, are beginning to tackle the fundamental issues of why we age.

In the next blog in this series, we will dive into how genome sequencing and editing, along with new classes of drugs, are augmenting our biology to further extend our healthy lives.

What will you be able to achieve with an extra 30 to 50 healthy years (or longer) in your lifespan? Personally, I’m excited for a near-infinite lifespan to take on moonshots.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: David Carbo / Shutterstock.com


#434532 How Microrobots Will Fix Our Roads and ...

Swarms of microrobots will scuttle along beneath our roads and pavements, finding and fixing leaky pipes and faulty cables. Thanks to their efforts, we could avoid the disruptive road work that costs billions of dollars each year—not to mention frustrating traffic delays.

That is, if a new project sponsored by the U.K. government is a success. Recent developments in the space seem to point towards a bright future for microrobots.

Microrobots Saving Billions
Each year, around 1.5 million road excavations take place across the U.K. Many are due to leaky pipes and faulty cables that necessitate excavation of road surfaces in order to fix them. The resulting repairs, alongside disruptions to traffic and businesses, are estimated to cost a whopping £6.3 billion ($8 billion).

A consortium of scientists, led by University of Sheffield Professor Kirill Horoshenkov, is planning to use microrobots to negate most of these costs. The group has received a £7.2 million ($9.2 million) grant to develop and build the bots.

According to Horoshenkov, the microrobots will come in two versions. One is an inspection bot, which will navigate along underground infrastructure and examine its condition via sonar. The inspectors will be complemented by worker bots capable of carrying out repairs with cement and adhesives or cleaning out blockages with a high-powered jet. The inspector bots will be around one centimeter long and possibly autonomous, while the worker bots will be slightly larger and steered via remote control.

If successful, it is believed the bots could potentially save the U.K. economy around £5 billion ($6.4 billion) a year.

The U.K. government has set aside a further £19 million ($24 million) for research into robots for hazardous environments, such as nuclear decommissioning, drones for oil pipeline monitoring, and artificial intelligence software to detect the need for repairs on satellites in orbit.

The Lowest-Hanging Fruit
Microrobots like the ones now under development in the U.K. have many potential advantages and use cases. Thanks to their small size they can navigate tight spaces, for example in search and rescue operations, and robot swarm technology would allow them to collaborate to perform many different functions, including in construction projects.

To date, the number of microrobots in use is relatively limited, but that could be about to change, with bots closing in on inspection jobs, arguably the lowest-hanging fruit.

Engineering firm Rolls-Royce (not the car company, but the one that builds aircraft engines) is looking to use microrobots to inspect some of the up to 25,000 individual parts that make up an engine. The microrobots use the cockroach as a model, and Rolls-Royce believes they could save engineers time when performing the maintenance checks that can take over a month per engine.

Even Smaller Successes
Going further down in scale, recent years have seen a string of successes for nanobots. For example, a team of researchers at the Femto-ST Institute has used nanobots to build what is likely the world’s smallest house (if this isn’t a category at Guinness, someone needs to get on the phone with them), which stands a ‘towering’ 0.015 millimeters.

One of the areas where nanobots have shown great promise is in medicine. Several studies have shown how the minute bots are capable of delivering drugs directly into dense biological tissue, which can otherwise be highly challenging to target directly. Such delivery systems have a great potential for improving the treatment of a wide range of ailments and illnesses, including cancer.

There’s no question that the ecosystem of microrobots and nanobots is evolving. While still in their early days, the above successes point to a near-future boom in the bots we may soon refer to as our ‘littlest everyday helpers.’

Image Credit: 5nikolas5 / Shutterstock.com


#434324 Big Brother Nation: The Case for ...

Powerful surveillance cameras have crept into public spaces. We are filmed and photographed hundreds of times a day. To further raise the stakes, the resulting video footage is fed to new forms of artificial intelligence software that can recognize faces in real time, read license plates, even instantly detect when a particular pre-defined action or activity takes place in front of a camera.

As most modern cities have quietly become surveillance cities, the law has been slow to catch up. While we wait for robust legal frameworks to emerge, the best way to protect our civil liberties right now is to fight technology with technology. All cities should place local surveillance video into a public cloud-based data trust. Here’s how it would work.

In Public Data We Trust
To democratize surveillance, every city should implement three simple rules. First, anyone who aims a camera at public space must upload that day’s haul of raw video files (and associated camera metadata) into a cloud-based repository. Second, this cloud-based repository must have open APIs and a publicly accessible log file that records search histories and tracks who has accessed which video files. And third, everyone in the city should be given the same level of access rights to the stored video data—no exceptions.
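As a thought experiment, the three rules could be captured in code along these lines; every type, field, and function name here is my own assumption, since no concrete API is specified:

```python
# A thought-experiment sketch of the three rules; all names and fields
# are assumptions, not a specified API.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class VideoRecord:
    camera_id: str
    lat: float            # camera metadata uploaded with the footage (rule 1)
    lon: float
    recorded_at: datetime
    video_uri: str        # that day's raw video file

@dataclass
class AccessLogEntry:
    accessor_id: str      # who searched or viewed
    query: str
    video_uri: str
    accessed_at: datetime

PUBLIC_ACCESS_LOG: list[AccessLogEntry] = []  # publicly readable, append-only (rule 2)

def fetch_video(accessor_id: str, query: str, record: VideoRecord) -> str:
    """Everyone gets the same access rights (rule 3), and every access
    lands in the public log (rule 2)."""
    PUBLIC_ACCESS_LOG.append(AccessLogEntry(
        accessor_id, query, record.video_uri, datetime.now(timezone.utc)))
    return record.video_uri
```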

This kind of public data repository is called a “data trust.” Public data trusts are not just wishful thinking. Different types of trusts are already in successful use in Estonia and Barcelona, and have been proposed as the best way to store and manage the urban data that will be generated by Alphabet’s planned Sidewalk Labs project in Toronto.

It’s true that few people relish the thought of public video footage of themselves being looked at by strangers and friends, by ex-spouses, potential employers, divorce attorneys, and future romantic prospects. In fact, when I propose this notion in talks about smart cities, most people recoil in horror. Some turn red in the face and jeer at my naiveté. Others merely blink quietly in consternation.

The reason we should take this giant step towards extreme transparency is to combat the secrecy that surrounds surveillance. Openness is a powerful antidote to oppression. Edward Snowden summed it up well when he said, “Surveillance is not about public safety, it’s about power. It’s about control.”

Let Us Watch Those Watching Us
If public surveillance video were put back into the hands of the people, citizens could watch their government as it watches them. Right now, government cameras are controlled by the state. Camera locations are kept secret, and only the agencies that control the cameras get to see the footage they generate.

Because of these information asymmetries, civilians have no insight into the size and shape of the modern urban surveillance infrastructure that surrounds us, nor the uses (or abuses) of the video footage it spawns. For example, there is no swift and efficient mechanism to request a copy of video footage from the cameras that dot our downtown. Nor can we ask our city’s police force to show us a map that documents local traffic camera locations.

By exposing all public surveillance videos to the public gaze, cities could give regular people tools to assess the size, shape, and density of their local surveillance infrastructure and neighborhood “digital dragnet.” Using the metadata that’s wrapped around video footage, citizens could geo-locate individual cameras onto a digital map to generate surveillance “heat maps,” as sketched below. This way people could assess whether their city’s camera density was higher in certain zip codes, or in neighborhoods populated by a dominant ethnic group.
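Here is a sketch of that heat-map computation, with made-up coordinates and an arbitrary grid resolution:

```python
# A sketch of the heat-map idea: bin camera coordinates (pulled from video
# metadata) into a grid so dense and camera-free zones stand out.
# Coordinates and grid resolution here are made up.
import numpy as np

# (latitude, longitude) of each geo-located public camera
cameras = np.array([
    [40.7128, -74.0060],
    [40.7130, -74.0055],
    [40.7306, -73.9866],
])

heatmap, lat_edges, lon_edges = np.histogram2d(
    cameras[:, 0], cameras[:, 1], bins=50)  # camera count per grid cell

# High-count cells reveal the densest "digital dragnet"; empty cells mark
# camera-free (or non-reporting) neighborhoods worth a closer look.
```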

Surveillance heat maps could be used to document which government agencies were refusing to upload their video files, or which neighborhoods were not under surveillance. Given what we already know today about the correlation between camera density, income, and social status, these “dark” camera-free regions would likely be those located near government agencies and in more affluent parts of a city.

Extreme transparency would democratize surveillance. Every city’s data trust would keep a publicly-accessible log of who’s searching for what, and whom. People could use their local data trust’s search history to check whether anyone was searching for their name, face, or license plate. As a result, clandestine spying on—and stalking of—particular individuals would become difficult to hide and simpler to prove.

Protect the Vulnerable and Exonerate the Falsely Accused
Not all surveillance video automatically works against the underdog. As the bungled (and consequently no longer secret) assassination of journalist Jamal Khashoggi demonstrated, one of the unexpected upsides of surveillance cameras has been the fact that even kings become accountable for their crimes. If opened up to the public, surveillance cameras could serve as witnesses to justice.

Video evidence has the power to protect vulnerable individuals and social groups by shedding light onto messy, unreliable (and frequently conflicting) human narratives of who did what to whom, and why. With access to a data trust, a person falsely accused of a crime could prove their innocence. By searching for their own face in video footage or downloading timestamped footage from a particular camera, a potential suspect could document their physical absence from the scene of a crime—no lengthy police investigation or high-priced attorney needed.

Given Enough Eyeballs, All Crimes Are Shallow
Placing public surveillance video into a public trust could make cities safer and would streamline routine police work. Linus Torvalds, creator of the open-source operating system Linux, famously observed that “given enough eyeballs, all bugs are shallow.” In the case of public cameras and a common data repository, Torvalds’ Law could be restated as “given enough eyeballs, all crimes are shallow.”

If thousands of citizen eyeballs were given access to a city’s public surveillance videos, local police forces could crowdsource the work of solving crimes and searching for missing persons. Unfortunately, at the present time, cities are unable to wring any social benefit from video footage of public spaces. The most formidable barrier is not government-imposed secrecy, but the fact that as cameras and computers have grown cheaper, a large and fast-growing “mom and pop” surveillance state has taken over most of the filming of public spaces.

While we fear spooky government surveillance, the reality is that we’re much more likely to be filmed by security cameras owned by shopkeepers, landlords, medical offices, hotels, homeowners, and schools. These businesses, organizations, and individuals install cameras in public areas for practical reasons—to reduce their insurance costs, to prevent lawsuits, or to combat shoplifting. In the absence of regulations governing their use, private camera owners store video footage in a wide variety of locations, for varying retention periods.

The unfortunate (and unintended) result of this informal and decentralized network of public surveillance is that video files are not easy to access, even for police officers on official business. After a crime or terrorist attack occurs, local police (or attorneys armed with a subpoena) go from door to door to manually collect video evidence. Once they have the videos in hand, their next challenge is finding the right codec to decode the dozens of different file formats they encounter so they can watch and analyze the footage.

The result of these practical barriers is that, as it stands today, only people with considerable legal or political clout can successfully gain access to a city’s privately owned, ad hoc collections of public surveillance videos. Not only are cities missing the opportunity to streamline routine evidence-gathering police work, they’re missing a radically transformative benefit that would become possible once video footage from thousands of different security cameras was pooled into a single repository: the ability to apply the power of citizen eyeballs to the work of improving public safety.

Why We Need Extreme Transparency
When regular people can’t access their own surveillance videos, there can be no data justice. While we wait for the law to catch up with the reality of modern urban life, citizens and city governments should use technology to address the problem that lies at the heart of surveillance: a power imbalance between those who control the cameras and those who don’t.

Cities should permit individuals and organizations to install and deploy as many public-facing cameras as they wish, but with the mandate that camera owners must place all resulting video footage into the mercilessly bright sunshine of an open data trust. This way, cloud computing, open APIs, and artificial intelligence software can help combat abuses of surveillance and give citizens insight into who’s filming us, where, and why.

Image Credit: VladFotoMag / Shutterstock.com
