Tag Archives: eyes

#431682 Oxford Study Says Alien Life Would ...

The alternative universe known as science fiction has given our culture a menagerie of alien species. From overstuffed teddy bears like Ewoks and Wookiees to terrifying nightmares such as Alien and Predator, our collective imagination of what form alien life from another world may take has been irrevocably imprinted by Hollywood.
It might all be possible, or all these bug-eyed critters might turn out to be just B-movie versions of how real extraterrestrials will appear if and when they finally make the evening news.
One thing for certain is that aliens from another world will be shaped by the same evolutionary forces as here on Earth—natural selection. That’s the conclusion of a team of scientists from the University of Oxford in a study published this month in the International Journal of Astrobiology.
A complex alien that comprises a hierarchy of entities, where each lower-level collection of entities has aligned evolutionary interests. Image Credit: Helen S. Cooper/University of Oxford.
The researchers suggest that evolutionary theory—famously put forth by Charles Darwin in his seminal book On the Origin of Species 158 years ago this month—can be used to make some predictions about alien species. In particular, the team argues that extraterrestrials will undergo natural selection, because that is the only process by which organisms can adapt to their environment.
“Adaptation is what defines life,” lead author Samuel Levin tells Singularity Hub.
While it’s likely that NASA or some SpaceX-like private venture will eventually kick over a few space rocks and discover microbial life in the not-too-distant future, the sorts of aliens Levin and his colleagues are interested in describing are more complex: the kind of organisms that only natural selection can build.
A quick evolutionary theory 101 refresher: Natural selection is the process by which certain traits are favored over others in a given population. For example, take a group of brown and green beetles. It just so happens that birds prefer foraging on green beetles, allowing more brown beetles to survive and reproduce than the more delectable green ones. Eventually, if these population pressures persist, brown beetles will become the dominant type. Brown wins, green loses.
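The selection dynamic in the beetle example can be sketched as a toy simulation. The predation rate and carrying capacity here are invented purely for illustration:

```python
def simulate_selection(brown, green, generations, green_predation=0.5):
    """Toy natural selection: birds eat a fraction of the green beetles
    each generation, then survivors repopulate to a fixed carrying capacity."""
    for _ in range(generations):
        green = int(green * (1 - green_predation))  # birds prefer green beetles
        total = brown + green
        if total == 0:
            break
        brown = round(1000 * brown / total)  # repopulate to 1,000 beetles,
        green = 1000 - brown                 # in proportion to survivors
    return brown, green

brown, green = simulate_selection(brown=500, green=500, generations=10)
# Brown comes to dominate the population, just as in the example above.
```

Starting from an even split, a persistent predation pressure drives the green beetles toward extinction within a handful of generations.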
And just as human beings are the result of millions of years of adaptations—eyes and thumbs, for example—aliens will similarly be constructed from parts that were once free living but through time came together to work as one organism.
“Life has so many intricate parts, so much complexity, for that to happen (randomly),” Levin explains. “It’s too complex and too many things working together in a purposeful way for that to happen by chance, as how certain molecules come about. Instead you need a process for making it, and natural selection is that process.”
Just don’t expect ET to show up as a bipedal humanoid with a large head and almond-shaped eyes, Levin says.
“They can be built from entirely different chemicals and so visually, superficially, unfamiliar,” he explains. “They will have passed through the same evolutionary history as us. To me, that’s way cooler and more exciting than them having two legs.”
Need for Data
Seth Shostak, a lead astronomer at the SETI Institute and host of the organization’s Big Picture Science radio show, wrote that while the argument is interesting, it doesn’t answer the question of ET’s appearance.
Shostak argues that a more productive approach would invoke convergent evolution, where similar environments lead to similar adaptations, at least assuming a range of Earth-like conditions such as liquid oceans and thick atmospheres. For example, an alien species that evolved in a liquid environment would evolve a streamlined body to move through water.
“Happenstance and the specifics of the environment will produce variations on an alien species’ planet as it has on ours, and there’s really no way to predict these,” Shostak concludes. “Alas, an accurate cosmic bestiary cannot be written by the invocation of biological mechanisms alone. We need data. That requires more than simply thinking about alien life. We need to actually discover it.”
Search is On
The search is on. On one hand, the task seems easy enough: There are at least 100 billion planets in the Milky Way alone, and at least 20 percent of those are likely to be capable of producing a biosphere. Even if the evolution of life is exceedingly rare—take the Oxford paper’s conservative estimate of 0.001 percent, or about 200,000 planets—you have to like the odds.
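The back-of-the-envelope arithmetic behind that 200,000 figure is straightforward:

```python
planets_in_milky_way = 100e9     # at least 100 billion planets
habitable_fraction = 0.20        # at least 20 percent could host a biosphere
life_fraction = 0.001 / 100      # the paper's conservative 0.001 percent

planets_with_life = planets_in_milky_way * habitable_fraction * life_fraction
print(round(planets_with_life))  # 200000 planets in this galaxy alone
```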
Of course, it’s not that easy by a billion light years.
Planet hunters can’t even agree on what signatures of life they should focus on. The idea is that where there’s smoke there’s fire. In the case of an alien world home to biological life, astrobiologists are searching for the presence of “biosignature gases,” vapors that could only be produced by alien life.
As Quanta Magazine reported, scientists do this by measuring a planet’s atmosphere against starlight. Gases in the atmosphere absorb certain frequencies of starlight, offering a clue as to what is brewing around a particular planet.
The presence of oxygen would seem to be a biological no-brainer, but there are instances where a planet can produce a false positive, meaning non-biological processes are responsible for the exoplanet’s oxygen. Scientists like Sara Seager, an astrophysicist at MIT, have argued there are plenty of examples of other types of gases produced by organisms right here on Earth that could also produce the smoking gun, er, planet.

Life as We Know It
Indeed, the existence of Earth-bound extremophiles—organisms that defy conventional wisdom about where life can exist, such as in the vacuum of space—offers another clue as to what kind of aliens we might eventually meet.
Lynn Rothschild, an astrobiologist and synthetic biologist in the Earth Science Division at NASA’s Ames Research Center in Silicon Valley, takes extremophiles as a baseline and then supersizes them through synthetic biology.
For example, say a bacterium is capable of surviving at 120 degrees Celsius. Rothschild’s lab might tweak its DNA to see if it could metabolize at 150 degrees. The idea, as she explains, is to expand the envelope for life without ever getting into a rocket ship.

While researchers may not always agree on the “where” and “how” and “what” of the search for extraterrestrial life, most do share one belief: Alien life must be out there.
“It would shock me if there weren’t [extraterrestrials],” Levin says. “There are few things that would shock me more than to find out there aren’t any aliens…If I had to bet on it, I would bet on the side of there being lots and lots of aliens out there.”
Image Credit: NASA

Posted in Human Robots

#431543 China Is an Entrepreneurial Hotbed That ...

Last week, Eric Schmidt, chairman of Alphabet, predicted that China will rapidly overtake the US in artificial intelligence…in as little as five years.
Last month, China announced plans to open a $10 billion quantum computing research center in 2020.
Bottom line, China is aggressively investing in exponential technologies, pursuing a bold goal of becoming the global AI superpower by 2030.
Based on what I’ve observed from China’s entrepreneurial scene, I believe they have a real shot at hitting that goal.
As I described in a previous tech blog, I recently traveled to China with a group of my Abundance 360 members, where I was hosted by my friend Kai-Fu Lee, the founder, chairman, and CEO of Sinovation Ventures.
On one of our first nights, Kai-Fu invited us to a special dinner at Da Dong Roast, which specializes in Peking duck, where we shared an 18-course meal.
The meal was amazing, and Kai-Fu’s dinner conversation provided us priceless insights on Chinese entrepreneurs.
Three topics opened my eyes. Here’s the wisdom I’d like to share with you.
1. The Entrepreneurial Culture in China
Chinese entrepreneurship has exploded onto the scene and changed significantly over the past 10 years.
In my opinion, one significant way that Chinese entrepreneurs vary from their American counterparts is in work ethic. The mantra I found in the startups I visited in Beijing and Shanghai was “9-9-6”—meaning the employees only needed to work from 9 am to 9 pm, 6 days a week.
Another concept Kai-Fu shared over dinner was the almost ‘dictatorial’ leadership of the founder/CEO. In China, it’s not uncommon for the founder/CEO to own the majority of the company, or at least 30–40 percent. It’s also the case that what the CEO says is gospel. Period, no debate. There is no minority or dissenting opinion. When the CEO says “march,” the company asks, “which way?”
When Kai-Fu started Sinovation (his $1 billion+ venture fund), there were few active angel investors. Today, China has a rich ecosystem of angel, venture capital, and government-funded innovation parks.
As venture capital in China has evolved, so too has the mindset of the entrepreneur.
Kai-Fu recalled an early investment he made in which, after an unfortunate streak, the entrepreneur came to him, almost in tears, apologizing for losing his money and promising he would earn it back for him in another way. Kai-Fu comforted the entrepreneur and said there was no such need.
Only a few years later, the situation was vastly different. Another entrepreneur on a similar unfortunate streak came to Kai-Fu and told him he had only $2 million left of his initial $12 million investment. He saw no value in returning the money; instead, he would spend the last $2 million on a final push to see if the company could succeed. If he failed, he promised, he would remember what Kai-Fu had done for him and perhaps give Sinovation an opportunity to invest in his next company.
2. Chinese Companies Are No Longer Just ‘Copycats’
During dinner, Kai-Fu lamented that 10 years ago, it would be fair to call Chinese companies copycats of American companies. Five years ago, the claim would be controversial. Today, however, Kai-Fu is clear that claim is entirely false.
While smart Chinese startups will still look at what American companies are doing and build on trends, today it’s becoming a wise business practice for American tech giants to analyze Chinese companies. Many of the new features in Facebook’s Messenger, for instance, very closely mirror Tencent’s WeChat.
Interestingly, tight government controls in China have actually spurred innovation. Take TV, for example, a highly regulated industry. Because of this regulation, most entertainment in China is consumed on the internet or by phone. Game shows, reality shows, and more are centered entirely online.
Kai-Fu told us about one of his investments in a company that helps create Chinese singing sensations. They take girls in from a young age, school them, and regardless of talent, help build their presence and brand as singers. Once ready, these singers are pushed across all the available platforms, and superstars are born. The company recognizes its role in this superstar status, though, which is why it takes a 50 percent cut of all earnings.
This company is just one example of how Chinese entrepreneurs take advantage of China’s unique position, market, and culture.
3. China’s Artificial Intelligence Play
Kai-Fu wrapped up his talk with a brief introduction to the expansive AI industry in China. I previously discussed Face++, a Sinovation investment that is creating radically efficient facial recognition technology. Face++ is light years ahead of anyone else globally at recognition in live video. However, Face++ is just one of the incredible advances in AI coming out of China.
Baidu, one of China’s most valuable tech companies, started out as just a search company. However, they now run one of the country’s leading self-driving car programs.
Baidu’s goal is to create a software suite atop existing hardware that not only controls all self-driving aspects of a vehicle but also provides additional services such as HD mapping.
Another interesting application came from another of Sinovation’s investments, Smart Finance Group (SFG). Because most payments in China are mobile (made through WeChat or Alipay) rather than by credit card, only ~20 percent of the population has a credit history. This makes it very difficult for individuals in China to acquire a loan.
SFG’s mobile application takes in user data (as much as the user allows) and, based on the information provided, uses an AI agent to create a financial profile with the power to offer an instant loan. This loan can be deposited directly into their WeChat or Alipay account and is typically approved in minutes. Unlike American loan companies, they avoid default and long-term debt by only providing a one-month loan with 10% interest. Borrow $200, and you pay back $220 by the following month.
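Assuming the terms the article describes (a flat one-month loan at 10 percent), the repayment math is simply:

```python
def one_month_repayment(principal, monthly_rate=0.10):
    """Total owed after one month at the flat rate described above."""
    return round(principal * (1 + monthly_rate), 2)

print(one_month_repayment(200))  # 220.0, matching the article's example
```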
Artificial intelligence is exploding in China, and Kai-Fu believes it will touch every single industry.
The only constant is change, and the rate of change is constantly increasing.
In the next 10 years, we’ll see tremendous changes on the geopolitical front and the global entrepreneurial scene caused by technological empowerment.
China is an entrepreneurial hotbed that cannot be ignored. I’m monitoring it closely. Are you?
Image Credit: anekoho / Shutterstock.com


#431377 The Farms of the Future Will Be ...

Swarms of drones buzz overhead, while robotic vehicles crawl across the landscape. Orbiting satellites snap high-resolution images of the scene far below. Not one human being can be seen in the pre-dawn glow spreading across the land.
This isn’t some post-apocalyptic vision of the future à la The Terminator. This is a snapshot of the farm of the future. Every phase of the operation—from seed to harvest—may someday be automated, without the need to ever get one’s fingernails dirty.
In fact, it’s science fiction already being engineered into reality. Today, robots empowered with artificial intelligence can zap weeds with preternatural precision, while autonomous tractors move with tireless efficiency across the farmland. Satellites can assess crop health from outer space, providing gobs of data to help produce the sort of business intelligence once accessible only to Fortune 500 companies.
“Precision agriculture is on the brink of a new phase of development involving smart machines that can operate by themselves, which will allow production agriculture to become significantly more efficient. Precision agriculture is becoming robotic agriculture,” said professor Simon Blackmore last year during a conference in Asia on the latest developments in robotic agriculture. Blackmore is head of engineering at Harper Adams University and head of the National Centre for Precision Farming in the UK.
It’s Blackmore’s university that recently showcased what may someday be possible. The project, dubbed Hands Free Hectare and led by researchers from Harper Adams and private industry, farmed one hectare (about 2.5 acres) of spring barley without one person ever setting foot in the field.
The team re-purposed, re-wired and roboticized farm equipment ranging from a Japanese tractor to a 25-year-old combine. Drones served as scouts to survey the operation and collect samples to help the team monitor the progress of the barley. At the end of the season, the robo farmers harvested about 4.5 tons of barley at a price tag of £200,000.

“This project aimed to prove that there’s no technological reason why a field can’t be farmed without humans working the land directly now, and we’ve done that,” said Martin Abell, mechatronics researcher for Precision Decisions, which partnered with Harper Adams, in a press release.
I, Robot Farmer
The Harper Adams experiment is the latest example of how machines are disrupting the agricultural industry. Around the same time that the Hands Free Hectare combine was harvesting barley, Deere & Company announced it would acquire a startup called Blue River Technology for a reported $305 million.
Blue River has developed a “see-and-spray” system that combines computer vision and artificial intelligence to discriminate between crops and weeds. It hits the former with fertilizer and blasts the latter with herbicides with such precision that it can eliminate 90 percent of the chemicals used in conventional agriculture.
It’s not just farmland that’s getting a helping hand from robots. A California company called Abundant Robotics, spun out of the nonprofit research institute SRI International, is developing robots capable of picking apples with vacuum-like arms that suck the fruit straight off the trees in the orchards.
“Traditional robots were designed to perform very specific tasks over and over again. But the robots that will be used in food and agricultural applications will have to be much more flexible than what we’ve seen in automotive manufacturing plants in order to deal with natural variation in food products or the outdoor environment,” Dan Harburg, an associate at venture capital firm Anterra Capital who previously worked at a Massachusetts-based startup making a robotic arm capable of grabbing fruit, told AgFunder News.
“This means ag-focused robotics startups have to design systems from the ground up, which can take time and money, and their robots have to be able to complete multiple tasks to avoid sitting on the shelf for a significant portion of the year,” he noted.
Eyes in the Sky
It will take more than an army of robotic tractors to grow a successful crop. The farm of the future will rely on drones, satellites, and other airborne instruments to provide data about the crops on the ground.
Companies like Descartes Labs, for instance, employ machine learning to analyze satellite imagery to forecast soy and corn yields. The Los Alamos, New Mexico startup collects five terabytes of data every day from multiple satellite constellations, including those operated by NASA and the European Space Agency. Combined with weather readings and other real-time inputs, Descartes Labs can predict cornfield yields with 99 percent accuracy. Its AI platform can even assess crop health from infrared readings.
The US agency DARPA recently granted Descartes Labs $1.5 million to monitor and analyze wheat yields in the Middle East and Africa. The idea is that accurate forecasts may help identify regions at risk of crop failure, which could lead to famine and political unrest. Another company called TellusLabs out of Somerville, Massachusetts also employs machine learning algorithms to predict corn and soy yields with similar accuracy from satellite imagery.
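At its core, this kind of forecasting is regression from image-derived features to yield. Here is a minimal sketch of the idea; every number below is invented for illustration, and real systems fit far richer models to terabytes of imagery:

```python
import numpy as np

# Invented toy data: (vegetation index from imagery, rainfall in mm) -> yield.
X = np.array([
    [0.55, 450],
    [0.60, 500],
    [0.70, 520],
    [0.65, 480],
    [0.75, 550],
])
y = np.array([140.0, 150.0, 170.0, 158.0, 182.0])  # bushels per acre

# Add an intercept column and solve ordinary least squares.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_yield(veg_index, rainfall_mm):
    return coef @ [1.0, veg_index, rainfall_mm]
```

Greener, wetter fields predict higher yields under this toy fit; the production versions add weather feeds, soil data, and far more sophisticated learners.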
Farmers don’t have to reach orbit to get insights on their cropland. A startup in Oakland, Ceres Imaging, produces high-resolution imagery from multispectral cameras flown across fields aboard small planes. The snapshots capture the landscape at different wavelengths, revealing problems like water stress and providing estimates of chlorophyll and nitrogen levels. The geo-tagged images mean farmers can easily locate areas that need to be addressed.
Growing From the Inside
Even the best intelligence—whether from drones, satellites, or machine learning algorithms—will be challenged to predict the unpredictable issues posed by climate change. That’s one reason more and more companies are betting the farm on what’s called controlled environment agriculture. Today, that doesn’t just mean fancy greenhouses, but everything from warehouse-sized, automated vertical farms to grow rooms run by robots, located not in the emptiness of Kansas or Nebraska but smack dab in the middle of the main streets of America.
Proponents of these new concepts argue these high-tech indoor farms can produce much higher yields while drastically reducing water usage and synthetic inputs like fertilizer and herbicides.
Iron Ox, out of San Francisco, is developing one-acre urban greenhouses that will be operated by robots and reportedly capable of producing the equivalent of 30 acres of farmland. Powered by artificial intelligence, a team of three robots will run the entire operation of planting, nurturing, and harvesting the crops.
Vertical farming startup Plenty, also based in San Francisco, uses AI to automate its operations, and got a $200 million vote of confidence from the SoftBank Vision Fund earlier this year. The company claims its system uses only 1 percent of the water consumed in conventional agriculture while producing 350 times as much produce. Plenty is part of a new crop of urban-oriented farms, including Bowery Farming and AeroFarms.
“What I can envision is locating a larger scale indoor farm in the economically disadvantaged food desert, in order to stimulate a broader economic impact that could create jobs and generate income for that area,” said Dr. Gary Stutte, an expert in space agriculture and controlled environment agriculture, in an interview with AgFunder News. “The indoor agriculture model is adaptable to becoming an engine for economic growth and food security in both rural and urban food deserts.”
Still, the model is not without its own challenges and criticisms. Most of what these farms can produce falls into the “leafy greens” category and often comes with a premium price, which seems antithetical to the proposed mission of creating oases in the food deserts of cities. While water usage may be minimized, the electricity required to power the operation, especially the LEDs (which played a huge part in revolutionizing indoor agriculture), is not cheap.
Still, all of these advances, from robo farmers to automated greenhouses, may need to be part of a future where nearly 10 billion people will inhabit the planet by 2050. An oft-quoted statistic from the Food and Agriculture Organization of the United Nations says the world must boost food production by 70 percent to meet the needs of the population. Technology may not save the world, but it will help feed it.
Image Credit: Valentin Valkov / Shutterstock.com


#431238 AI Is Easy to Fool—Why That Needs to ...

Con artistry is one of the world’s oldest and most innovative professions, and it may soon have a new target. Research suggests artificial intelligence may be uniquely susceptible to tricksters, and as its influence in the modern world grows, attacks against it are likely to become more common.
The root of the problem lies in the fact that artificial intelligence algorithms learn about the world in very different ways than people do, and so slight tweaks to the data fed into these algorithms can throw them off completely while remaining imperceptible to humans.
Much of the research into this area has been conducted on image recognition systems, in particular those relying on deep learning neural networks. These systems are trained by showing them thousands of examples of images of a particular object until they can extract common features that allow them to accurately spot the object in new images.
But the features they extract are not necessarily the same high-level features a human would be looking for, like the word STOP on a sign or a tail on a dog. These systems analyze images at the individual pixel level to detect patterns shared between examples. These patterns can be obscure combinations of pixel values, in small pockets or spread across the image, that would be impossible to discern for a human, but highly accurate at predicting a particular object.

What this means is that by identifying these patterns and overlaying them over a different image, an attacker can trick the object recognition algorithm into seeing something that isn’t there, without these alterations being obvious to a human. This kind of manipulation is known as an “adversarial attack.”
Early attempts to trick image recognition systems this way required access to the algorithm’s inner workings to decipher these patterns. But in 2016 researchers demonstrated a “black box” attack that enabled them to trick such a system without knowing its inner workings.
By feeding the system doctored images and seeing how it classified them, they were able to work out what it was focusing on and therefore generate images they knew would fool it. Importantly, the doctored images were not obviously different to human eyes.
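The white-box version of this idea can be illustrated with a toy linear "classifier" on a fake 8×8 image. Everything here is invented for illustration; real attacks target deep networks, but the mechanism is the same: a tiny, carefully aimed nudge to every pixel drags the score across the decision boundary.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy linear "image classifier" over a flattened 8x8 image:
# positive score -> class "dog", negative -> class "not dog".
w = rng.normal(size=64)
classify = lambda image: "dog" if w @ image > 0 else "not dog"

image = rng.normal(size=64)
if classify(image) != "dog":
    image = -image               # make sure the clean image reads "dog"

# Adversarial perturbation: push every pixel a tiny amount against the
# weights, just enough to drag the score below the decision boundary.
score = w @ image
eps = 1.1 * score / np.abs(w).sum()
adversarial = image - eps * np.sign(w)

print(classify(image), "->", classify(adversarial))
```

The per-pixel change is a small fraction of the typical pixel value, which is why such perturbations are imperceptible to a human while being decisive for the model.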
These approaches were tested by feeding doctored image data directly into the algorithm, but more recently, similar approaches have been applied in the real world. Last year it was shown that printouts of doctored images that were then photographed on a smartphone successfully tricked an image classification system.
Another group showed that wearing specially designed, psychedelically colored spectacles could trick a facial recognition system into thinking people were celebrities. In August, scientists showed that adding stickers to stop signs in particular configurations could cause a neural net designed to spot them to misclassify the signs.
These last two examples highlight some of the potential nefarious applications for this technology. Getting a self-driving car to miss a stop sign could cause an accident, either for insurance fraud or to do someone harm. If facial recognition becomes increasingly popular for biometric security applications, being able to pose as someone else could be very useful to a con artist.
Unsurprisingly, there are already efforts to counteract the threat of adversarial attacks. In particular, it has been shown that deep neural networks can be trained to detect adversarial images. One study from the Bosch Center for AI demonstrated such a detector, an adversarial attack that fools the detector, and a training regime for the detector that nullifies the attack, hinting at the kind of arms race we are likely to see in the future.
While image recognition systems provide an easy-to-visualize demonstration, they’re not the only machine learning systems at risk. The techniques used to perturb pixel data can be applied to other kinds of data too.

Chinese researchers showed that adding specific words to a sentence or misspelling a word can completely throw off machine learning systems designed to analyze what a passage of text is about. Another group demonstrated that garbled sounds played over speakers could make a smartphone running the Google Now voice command system visit a particular web address, which could be used to download malware.
This last example points toward one of the more worrying and probable near-term applications for this approach: bypassing cybersecurity defenses. The industry is increasingly using machine learning and data analytics to identify malware and detect intrusions, but these systems are also highly susceptible to trickery.
At this summer’s DEF CON hacking convention, a security firm demonstrated they could bypass anti-malware AI using a similar approach to the earlier black box attack on the image classifier, but super-powered with an AI of their own.
Their system fed malicious code to the antivirus software and then noted the score it was given. It then used genetic algorithms to iteratively tweak the code until it was able to bypass the defenses while maintaining its function.
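In miniature, the loop they describe looks something like the following hill-climbing caricature. The "detector" and "functionality" checks here are invented stand-ins, and a real genetic algorithm would maintain a population with crossover rather than a single mutating candidate:

```python
import random

random.seed(1)
SIGNATURE = [1, 1, 1, 1]  # stand-in byte pattern the "antivirus" flags

def detector_score(code):
    """Stand-in antivirus score: how many signature matches it sees."""
    return sum(code[i:i + 4] == SIGNATURE for i in range(len(code) - 3))

def still_functional(code):
    """Stand-in constraint: the payload must keep at least eight set bits."""
    return sum(code) >= 8

code = [1] * 16  # initial "malware", heavily flagged by the detector
for _ in range(500):
    child = code[:]
    child[random.randrange(len(child))] ^= 1  # mutate one random bit
    if still_functional(child) and detector_score(child) <= detector_score(code):
        code = child  # keep mutations that preserve function and lower the score
```

The attacker never needs to see inside the detector; the score alone steers the search, which is exactly what made the black-box demonstration so alarming.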
All the approaches noted so far are focused on tricking pre-trained machine learning systems, but another approach of major concern to the cybersecurity industry is that of “data poisoning.” This is the idea that introducing false data into a machine learning system’s training set will cause it to start misclassifying things.
This could be particularly challenging for things like anti-malware systems that are constantly being updated to take into account new viruses. A related approach bombards systems with data designed to generate false positives so the defenders recalibrate their systems in a way that then allows the attackers to sneak in.
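A minimal sketch of poisoning, using a toy threshold detector on invented data: by slipping a few extreme but "approved" samples into the training set, the attacker widens the model's notion of normal until the real attack no longer stands out.

```python
import statistics

def train_detector(benign_samples):
    """Flag anything more than three standard deviations from 'normal'."""
    mu = statistics.mean(benign_samples)
    sigma = statistics.stdev(benign_samples)
    return lambda x: abs(x - mu) > 3 * sigma  # True = flagged as malicious

benign = [10.0, 11.0, 9.5, 10.5, 10.2, 9.8]
clean = train_detector(benign)

# Data poisoning: a few extreme "benign" samples sneak into the training set.
poisoned = train_detector(benign + [30.0, 35.0, 40.0])

attack = 25.0
print(clean(attack), poisoned(attack))  # the poisoned model misses the attack
```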
How likely it is that these approaches will be used in the wild will depend on the potential reward and the sophistication of the attackers. Most of the techniques described above require high levels of domain expertise, but it’s becoming ever easier to access training materials and tools for machine learning.
Simpler versions of machine learning have been at the heart of email spam filters for years, and spammers have developed a host of innovative workarounds to circumvent them. As machine learning and AI increasingly embed themselves in our lives, the rewards for learning how to trick them will likely outweigh the costs.
Image Credit: Nejron Photo / Shutterstock.com


#431203 Could We Build a Blade Runner-Style ...

The new Blade Runner sequel will return us to a world where sophisticated androids made with organic body parts can match the strength and emotions of their human creators. As someone who builds biologically inspired robots, I’m interested in whether our own technology will ever come close to matching the “replicants” of Blade Runner 2049.
The reality is that we’re a very long way from building robots with human-like abilities. But advances in so-called soft robotics show a promising way forward for technology that could be a new basis for the androids of the future.
From a scientific point of view, the real challenge is replicating the complexity of the human body. Each one of us is made up of trillions of cells, and we have no idea how to build a machine of comparable complexity that is indistinguishable from a human. The most complex machines today, such as the world’s largest airliner, the Airbus A380, are composed of millions of parts. To match the complexity of the human body, we would need to scale that up about a million times.
There are currently three different ways that engineering is making the border between humans and robots more ambiguous. Unfortunately, these approaches are only starting points and are not yet even close to the world of Blade Runner.
There are human-like robots built from scratch by assembling artificial sensors, motors, and computers to resemble the human body and motion. However, extending current human-like robots would not bring Blade Runner-style androids closer to humans, because every artificial component, from sensors to motors, is still hopelessly primitive compared to its biological counterpart.
There is also cyborg technology, where the human body is enhanced with machines such as robotic limbs and wearable and implantable devices. This technology is similarly very far away from matching our own body parts.
Finally, there is the technology of genetic manipulation, where an organism’s genetic code is altered to modify that organism’s body. Although we have been able to identify and manipulate individual genes, we still have a limited understanding of how an entire human emerges from genetic code. As such, we don’t know the degree to which we can actually program code to design everything we wish.
Soft robotics: a way forward?
But we might be able to move robotics closer to the world of Blade Runner by pursuing other technologies and, in particular, by turning to nature for inspiration. The field of soft robotics is a good example. In the last decade or so, robotics researchers have been making considerable efforts to make robots soft, deformable, squishable, and flexible.
This technology is inspired by the fact that 90% of the human body is made from soft substances such as skin, hair, and tissues. Most of the fundamental functions of our body rely on soft parts that can change shape, from the heart and lungs pumping fluid around the body to the lenses of our eyes changing shape to focus. Cells even change shape to trigger division, self-healing and, ultimately, the evolution of the body.
The softness of our bodies is the origin of all the functionality they need to stay alive. So being able to build soft machines would at least bring us a step closer to the robotic world of Blade Runner. Recent technological advances include artificial hearts made of soft functional materials that pump fluid through deformation. Similarly, soft wearable gloves can make hand grasping stronger. And “epidermal electronics” has enabled us to tattoo electronic circuits onto our skin.
Softness is the keyword that brings humans and technologies closer together. Sensors, motors, and computers can suddenly be integrated into human bodies once they become soft, and the border between us and external devices becomes ambiguous, just as soft contact lenses have become part of our eyes.
Nevertheless, the hardest challenge is making the individual parts of a soft robot body physically adaptable: able to self-heal, grow, and differentiate. After all, in biological systems every part of a living organism is itself alive, which is what makes our bodies so adaptable and evolvable. That same property could one day make machines indistinguishable from ourselves.
It is impossible to predict when the robotic world of Blade Runner might arrive, and if it does, it will probably be very far in the future. But as long as the desire to build machines indistinguishable from humans is there, the current trends of robotic revolution could make it possible to achieve that dream.
This article was originally published on The Conversation. Read the original article.
Image Credit: Dariush M / Shutterstock.com
