Category Archives: Human Robots

Everything about Humanoid Robots and Androids

#429725 First cable-driven robot that prints ...

Together with the Institute for Advanced Architecture of Catalonia (IAAC), Tecnalia has developed the first cable-driven robot that allows large parts and even small buildings to be created in situ. The innovative technology combines the latest advances in robotics, digital manufacturing and 3-D printing. Continue reading

Posted in Human Robots

#429724 Science Has Outgrown the Human Mind and ...

The duty of man who investigates the writings of scientists, if learning the truth is his goal, is to make himself an enemy of all that he reads and … attack it from every side. He should also suspect himself as he performs his critical examination of it, so that he may avoid falling into either prejudice or leniency.
– Ibn al-Haytham (965-1040 CE)
Science is in the midst of a data crisis. Last year, there were more than 1.2 million new papers published in the biomedical sciences alone, bringing the total number of peer-reviewed biomedical papers to over 26 million. However, the average scientist reads only about 250 papers a year. Meanwhile, the quality of the scientific literature has been in decline. Some recent studies found that the majority of biomedical papers were irreproducible.
The twin challenges of too much quantity and too little quality are rooted in the finite neurological capacity of the human mind. Scientists are deriving hypotheses from a smaller and smaller fraction of our collective knowledge and consequently, more and more, asking the wrong questions, or asking ones that have already been answered. Also, human creativity seems to depend increasingly on the stochasticity of previous experiences – particular life events that allow a researcher to notice something others do not. Although chance has always been a factor in scientific discovery, it is currently playing a much larger role than it should.
One promising strategy to overcome the current crisis is to integrate machines and artificial intelligence in the scientific process. Machines have greater memory and higher computational capacity than the human brain. Automation of the scientific process could greatly increase the rate of discovery. It could even begin another scientific revolution. That huge possibility hinges on an equally huge question: can scientific discovery really be automated?
I believe it can, using an approach that we have known about for centuries. The answer to this question can be found in the work of Sir Francis Bacon, the 17th-century English philosopher and a key progenitor of modern science.
The earliest iterations of the scientific method can be traced back many centuries to Muslim thinkers such as Ibn al-Haytham, who emphasised both empiricism and experimentation. However, it was Bacon who first formalised the scientific method and made it a subject of study. In his book Novum Organum (1620), he proposed a model for discovery that is still known as the Baconian method. He argued against syllogistic logic for scientific synthesis, which he considered to be unreliable. Instead, he proposed an approach in which relevant observations about a specific phenomenon are systematically collected, tabulated and objectively analysed using inductive logic to generate generalisable ideas. In his view, truth could be uncovered only when the mind is free from incomplete (and hence false) axioms.
The Baconian method attempted to remove logical bias from the process of observation and conceptualisation, by delineating the steps of scientific synthesis and optimising each one separately. Bacon’s vision was to leverage a community of observers to collect vast amounts of information about nature and tabulate it into a central record accessible to inductive analysis. In Novum Organum, he wrote: ‘Empiricists are like ants; they accumulate and use. Rationalists spin webs like spiders. The best method is that of the bee; it is somewhere in between, taking existing material and using it.’
The Baconian method is rarely used today. It proved too laborious and extravagantly expensive; its technological applications were unclear. However, at the time the formalisation of a scientific method marked a revolutionary advance. Before it, science was metaphysical, accessible only to a few learned men, mostly of noble birth. By rejecting the authority of the ancient Greeks and delineating the steps of discovery, Bacon created a blueprint that would allow anyone, regardless of background, to become a scientist.
Bacon’s insights also revealed an important hidden truth: the discovery process is inherently algorithmic. It is the outcome of a finite number of steps that are repeated until a meaningful result is uncovered. Bacon explicitly used the word ‘machine’ in describing his method. His scientific algorithm has three essential components: first, observations have to be collected and integrated into the total corpus of knowledge; second, the new observations are used to generate new hypotheses; third, the hypotheses are tested through carefully designed experiments.
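To make the algorithmic structure concrete, here is a minimal sketch of that three-step loop in Python. It is a toy, assuming simulated measurements in place of real instruments, and every function in it is an invented stand-in; it illustrates only the shape of the observe-hypothesise-test cycle.

```python
import random

# A toy rendering of Bacon's three-step loop. Every function is a
# hypothetical stand-in for real instruments and real inductive analysis.

def collect_observations():
    """Step 1: gather raw observations (simulated measurements here)."""
    return [random.gauss(0.0, 1.0) for _ in range(10)]

def generate_hypotheses(corpus):
    """Step 2: induce a candidate generalisation from the whole corpus."""
    mean = sum(corpus) / len(corpus)
    return [("population mean is near", round(mean, 2))]

def run_experiment(hypothesis):
    """Step 3: test a hypothesis against fresh, independent data."""
    _, predicted = hypothesis
    fresh = [random.gauss(0.0, 1.0) for _ in range(10)]
    observed = sum(fresh) / len(fresh)
    return abs(predicted - observed) < 0.5   # crude acceptance criterion

corpus = []
for _ in range(5):                                   # repeat the cycle
    corpus.extend(collect_observations())            # observe
    for hypothesis in generate_hypotheses(corpus):   # hypothesise
        if run_experiment(hypothesis):               # experiment
            print("supported:", hypothesis)
```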
If science is algorithmic, then it must have the potential for automation. This futuristic dream has eluded information and computer scientists for decades, in large part because the three main steps of scientific discovery occupy different planes. Observation is sensual; hypothesis-generation is mental; and experimentation is mechanical. Automating the scientific process will require the effective incorporation of machines in each step, and in all three feeding into each other without friction. Nobody has yet figured out how to do that.
Experimentation has seen the most substantial recent progress. For example, the pharmaceutical industry commonly uses automated high-throughput platforms for drug design. Startups such as Transcriptic and Emerald Cloud Lab, both in California, are building systems to automate almost every physical task that biomedical scientists do. Scientists can submit their experiments online, where they are converted to code and fed into robotic platforms that carry out a battery of biological experiments. These solutions are most relevant to disciplines that require intensive experimentation, such as molecular biology and chemical engineering, but analogous methods can be applied in other data-intensive fields, and even extended to theoretical disciplines.
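To give a sense of what ‘converted to code’ can mean, here is a purely hypothetical sketch of an experiment expressed as structured data that a robotic platform could execute. It is not the actual interface of Transcriptic or Emerald Cloud Lab; every operation name and parameter below is invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class Step:
    operation: str   # e.g. "transfer", "incubate", "measure" (invented names)
    params: dict

# A hypothetical assay protocol: move a reagent, incubate, then read out.
protocol = [
    Step("transfer", {"source": "reagent_plate/A1", "dest": "assay_plate/B2",
                      "volume_ul": 50}),
    Step("incubate", {"target": "assay_plate", "temp_c": 37, "minutes": 30}),
    Step("measure",  {"target": "assay_plate", "mode": "absorbance",
                      "wavelength_nm": 600}),
]

for step in protocol:   # a robotic platform would dispatch each step
    print(step.operation, step.params)
```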
Automated hypothesis-generation is less advanced, but the work of Don Swanson in the 1980s provided an important step forward. He demonstrated the existence of hidden links between unrelated ideas in the scientific literature; using a simple deductive logical framework, he could connect papers from various fields with no citation overlap. In this way, Swanson was able to hypothesise a novel link between dietary fish oil and Raynaud’s syndrome without conducting any experiments or being an expert in either field. Other, more recent approaches, such as those of Andrey Rzhetsky at the University of Chicago and Albert-László Barabási at Northeastern University, rely on mathematical modelling and graph theory: they draw on large datasets in which knowledge is represented as a network, with concepts as nodes and the relationships between them as links. Novel hypotheses then show up as undiscovered links between nodes.
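As a toy illustration of that idea, the sketch below builds a small concept graph with networkx and scores missing links by the overlap of their endpoints’ neighbourhoods. The graph is invented, loosely echoing Swanson’s fish-oil example; high-scoring non-edges play the role of candidate hypotheses.

```python
import networkx as nx

# Invented concept graph: nodes are concepts, edges are relationships
# already reported in the literature.
G = nx.Graph()
G.add_edges_from([
    ("fish oil", "blood viscosity"),
    ("fish oil", "platelet aggregation"),
    ("Raynaud's syndrome", "blood viscosity"),
    ("Raynaud's syndrome", "platelet aggregation"),
    ("Raynaud's syndrome", "vasoconstriction"),
])

# Score every absent edge by the Jaccard overlap of its endpoints'
# neighbourhoods; the best-scoring pairs are candidate hypotheses.
candidates = nx.jaccard_coefficient(G)
for u, v, score in sorted(candidates, key=lambda t: -t[2])[:3]:
    print(f"candidate link: {u} <-> {v} (score {score:.2f})")
```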
The most challenging step in the automation process is how to collect reliable scientific observations on a large scale. There is currently no central data bank that holds humanity’s total scientific knowledge on an observational level. Natural language processing has advanced to the point at which it can automatically extract not only relationships but also context from scientific papers. However, major scientific publishers have placed severe restrictions on text-mining. More important, the text of papers is biased towards the scientist’s interpretations (or misconceptions), and it contains synthesised complex concepts and methodologies that are difficult to extract and quantify.
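As a rough sketch of the extraction step, the snippet below pulls subject-verb-object triples out of a parsed sentence with spaCy. A real literature-mining pipeline would be far more elaborate, and the example sentence is invented.

```python
import spacy  # assumes `python -m spacy download en_core_web_sm` has been run

nlp = spacy.load("en_core_web_sm")
doc = nlp("Fish oil reduces blood viscosity.")

# For each verb, pair its grammatical subjects with its direct objects,
# expanding each head word to the full phrase beneath it.
for token in doc:
    if token.pos_ == "VERB":
        subjects = [c for c in token.children if c.dep_ == "nsubj"]
        objects = [c for c in token.children if c.dep_ == "dobj"]
        for subj in subjects:
            for obj in objects:
                print(" ".join(t.text for t in subj.subtree),
                      "--[", token.lemma_, "]->",
                      " ".join(t.text for t in obj.subtree))
```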
Nevertheless, recent advances in computing and networked databases make the Baconian method practical for the first time in history. And even before scientific discovery can be automated, embracing Bacon’s approach could prove valuable at a time when pure reductionism is reaching the edge of its usefulness.
Human minds simply cannot reconstruct highly complex natural phenomena efficiently enough in the age of big data. A modern Baconian method that incorporates reductionist ideas through data-mining, but then analyses this information through inductive computational models, could transform our understanding of the natural world. Such an approach would enable us to generate novel hypotheses that have higher chances of turning out to be true, to test those hypotheses, and to fill gaps in our knowledge. It would also provide a much-needed reminder of what science is supposed to be: truth-seeking, anti-authoritarian, and limitlessly free.

This article was originally published at Aeon and has been republished under Creative Commons.
Banner Image Credit: Portrait of Sir Francis Bacon by John Vanderbank/Wikimedia Commons

Posted in Human Robots

#429717 Can Artificial Intelligence Learn Racism ...

Artificial intelligence systems that learn from human language acquire the same gender and racial biases as people, according to a new study. Continue reading

Posted in Human Robots

#429715 Bad News: Artificial Intelligence Is ...

Artificial intelligence picks up human biases from language, a new study finds. Continue reading

Posted in Human Robots

#429714 This Is the Dawn of Brain Tech, But How ...

What distinguishes Elon Musk as an entrepreneur is that every venture he takes on stems from a bold and inspiring vision for the future of our species. Not long ago, Musk announced a new company, Neuralink, with the goal of merging the human mind with AI. Given Musk’s track record of accomplishing the seemingly impossible, the world is bound to pay extra attention when he says he wants to connect our brains to computers.
Neuralink is registered as a medical company in California. With further details yet to be announced, it will attempt to create a “neural lace,” which is a brain-machine interface that can be implanted directly into our brains to monitor and enhance them.
In the short run, this technology has medical applications and may be used to treat paralysis or diseases like Parkinson’s. In the coming decades, it could allow us to exponentially boost our mental abilities or even digitize human consciousness. Fundamentally, it is a step towards the convergence of humans and machines and maybe a leap in human progress—one that could address various challenges we face.
Current state of research
Musk isn’t the first or only person who wants to connect brains to machines. Another tech entrepreneur, Bryan Johnson, founded the startup Kernel in 2016 to explore brain-machine interfaces along similar lines, and the scientific community has been making strides in recent years.
Earlier this month, researchers in Switzerland announced that paralyzed primates could walk again with the assistance of a neuroprosthetic system. And CNN reported that a man paralyzed from the shoulders down had regained use of his right hand with a brain-machine interface.
The past few years have seen remarkable developments in both the hardware and software of brain-machine interfaces. Experts are designing more intricate electrodes while programming better algorithms to interpret the neural signals. Scientists have already succeeded in enabling paralyzed patients to type with their minds, and are even allowing brains to communicate with one another purely through brainwaves. So far, most of these successful applications have been in enabling motor control or very basic communication in individuals with brain injuries.
There remain, however, many challenges to the ongoing development of brain-machine interfaces (BMIs).
For one, the most powerful and precise BMIs require invasive surgery. Another challenge is implementing robust algorithms that can interpret the complex interactions of the brain’s 86 billion neurons. Most progress has also been one-directional: brain to machine. We have yet to develop BMIs that can provide us with sensory information or allow us to feel the subjective experience of tactile sensations such as touch, temperature or pain. (Although there has been progress in giving prosthetics users a sense of touch via electrodes attached to nerves in their arm.)
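To give a flavor of the decoding step, here is a deliberately simplified sketch that fits a linear map from simulated firing rates to a two-dimensional cursor velocity by least squares. Everything in it is simulated; real decoders work with far more channels and far richer models.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_neurons = 500, 32

# Simulated training data: spike counts per time bin, plus the cursor
# velocity the subject intended during each bin.
tuning = rng.normal(size=(n_neurons, 2))               # hidden neuron tuning
rates = rng.poisson(5.0, size=(n_samples, n_neurons))  # spike counts per bin
velocity = rates @ tuning + rng.normal(0.0, 0.1, size=(n_samples, 2))

# Fit a linear decoder W by least squares so that velocity ~= rates @ W.
W, *_ = np.linalg.lstsq(rates, velocity, rcond=None)

# Decode a fresh bin of neural activity into an intended movement.
new_rates = rng.poisson(5.0, size=(1, n_neurons))
print("decoded velocity:", new_rates @ W)
```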
There is also the general challenge that our understanding of the brain is in its infancy. We have a long way to go before we fully understand how and where various functions such as cognition, perception and self-awareness arise. To enhance or integrate machines with these functions, we need to understand their physical underpinnings. Designing interfaces that can communicate with individual neurons and safely integrate with existing biological networks requires a great amount of medical innovation.
However, it’s important to remember this technology is rapidly advancing.
The rise of cyborgs
Hollywood often depicts a dystopian future where machines and humans go to war. Instead, however, we are seeing hints of a future where human and machine converge.
In many ways, we are already cyborgs.
Futurists like Jason Silva point out that our devices are an abstract form of brain-machine interface. We use smartphones to store and retrieve information, perform calculations and communicate with each other. According to philosophers Andy Clark and David Chalmers’ theory of the extended mind, we use technology to expand the boundaries of the human mind beyond our skulls. We use tools like machine learning to enhance our cognitive skills or powerful telescopes to enhance our visual reach. Technology has become a part of our exoskeleton, allowing us to push beyond our limitations.
Musk has pointed out that the merger of biological and machine intelligence may also be necessary if we are to remain “economically valuable.” Brain-machine interfaces could allow us to better reap the benefits of advancing artificial intelligence. With increasing automation of jobs, this could be a way to keep up with machines that perform tasks far more efficiently than we can.
Technologist Ray Kurzweil believes that by the 2030s we will connect the neocortex of our brains to the cloud via nanobots. He points out that the neocortex is the source of all “beauty, love and creativity and intelligence in the world.” Notably, Bill Gates and others have called Kurzweil the best predictor of future technologies, owing to his record of accurate forecasts.
Whether Kurzweil is right or things take longer than expected, our current trajectory suggests we’ll get there eventually. What might such a future look like when it arrives?
We could scale our intelligence and imagination a thousand-fold. It would radically disrupt how we think, feel and communicate. Transferring our thoughts and feelings directly to others’ brains could re-define human sociality and intimacy. Ultimately, uploading our entire selves into machines could allow us to transcend our biological skins and become digitally immortal.
The implications are truly profound, and many questions remain unanswered. What will the subjective experience of human consciousness feel like when our minds are digitized? How will we prevent our digital brains from getting hacked and overwritten with unwanted thoughts? How do we ensure access to brain-machine interfaces for all, not just the wealthy?
As Peter Diamandis says, “If this future becomes reality, connected humans are going to change everything. We need to discuss the implications in order to make the right decisions now so that we are prepared for the future.”
Image Credit: Shutterstock

Posted in Human Robots