Tag Archives: kind

#431958 The Next Generation of Cameras Might See ...

You might be really pleased with the camera technology in your latest smartphone, which can recognize your face and take slow-mo video in ultra-high definition. But these technological feats are just the start of a larger revolution that is underway.

The latest camera research is shifting away from increasing the number of mega-pixels and towards fusing camera data with computational processing. By that, we don’t mean the Photoshop style of processing, where effects and filters are added to a picture, but rather a radical new approach where the incoming data may not actually look like an image at all. It only becomes an image after a series of computational steps that often involve complex mathematics and modeling of how light travels through the scene or the camera.

This additional layer of computational processing magically frees us from the chains of conventional imaging techniques. One day we may not even need cameras in the conventional sense anymore. Instead we will use light detectors that, only a few years ago, we would never have considered useful for imaging. And they will be able to do incredible things, like see through fog, inside the human body, and even behind walls.

Single Pixel Cameras
One extreme example is the single pixel camera, which relies on a beautifully simple principle. Typical cameras use lots of pixels (tiny sensor elements) to capture a scene that is likely illuminated by a single light source. But you can also do things the other way around, capturing information from many light sources with a single pixel.

To do this you need a controlled light source, for example a simple data projector that illuminates the scene one spot at a time or with a series of different patterns. For each illumination spot or pattern, you then measure the amount of light reflected and add everything together to create the final image.
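In essence, each spot or pattern yields one linear measurement of the scene, and the image is recovered by solving the resulting system of equations. The following minimal numerical sketch illustrates the idea (the image size, random binary patterns, and least-squares solver are illustrative assumptions, not any particular lab’s pipeline):

```python
import numpy as np

# Minimal single-pixel imaging sketch: illuminate with known patterns,
# record one total-intensity value per pattern, then solve for the image.
rng = np.random.default_rng(0)
n = 16                      # reconstruct an n x n image
scene = rng.random((n, n))  # stand-in for the unknown scene

# Each row is one illumination pattern, flattened into a vector.
num_patterns = n * n
patterns = rng.integers(0, 2, size=(num_patterns, n * n)).astype(float)

# The single pixel records the total reflected light for each pattern,
# which is just the inner product of the pattern with the scene.
measurements = patterns @ scene.ravel()

# Reconstruction: solve patterns @ image = measurements.
solution, *_ = np.linalg.lstsq(patterns, measurements, rcond=None)
image = solution.reshape(n, n)
print(np.allclose(image, scene, atol=1e-6))  # True: the scene is recovered
```

Note that this sketch needs as many measurements as there are pixels, which is exactly the drawback discussed next.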

Clearly the disadvantage of taking a photo this way is that you have to send out lots of illumination spots or patterns to produce one image (which would take just one snapshot with a regular camera). But this form of imaging would allow you to create otherwise impossible cameras, for example ones that work at wavelengths of light beyond the visible spectrum, where good detectors cannot be made into cameras.

These cameras could be used to take photos through fog or thick falling snow. Or they could mimic the eyes of some animals and automatically increase an image’s resolution (the amount of detail it captures) depending on what’s in the scene.

It is even possible to capture images using light particles that have never interacted with the object we want to photograph. This takes advantage of the idea of “quantum entanglement”: that two particles can be connected in a way that means whatever happens to one happens to the other, even if they are a long distance apart. This has intriguing possibilities for looking at objects whose properties might change when lit up, such as the eye. For example, does a retina look the same in darkness as in light?

Multi-Sensor Imaging
Single-pixel imaging is just one of the simplest innovations in upcoming camera technology and relies, on the face of it, on the traditional concept of what forms a picture. But we are currently witnessing a surge of interest in systems that use lots of information about a scene that traditional techniques only collect a small part of.

This is where we could use multi-sensor approaches that involve many different detectors pointed at the same scene. The Hubble telescope was a pioneering example of this, producing pictures made from combinations of many different images taken at different wavelengths. But now you can buy commercial versions of this kind of technology, such as the Lytro camera that collects information about light intensity and direction on the same sensor, to produce images that can be refocused after the image has been taken.
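The refocusing trick can be illustrated with a toy “shift-and-add” computation: each sub-aperture view of the light field is shifted in proportion to its viewpoint offset and the results are averaged, with the shift scale selecting the focal plane. This is a hypothetical sketch of the general light-field principle, not the Lytro pipeline; the views, offsets, and alpha parameter are all assumptions:

```python
import numpy as np

def refocus(views, offsets, alpha):
    """Synthetic refocus by shift-and-add.

    views:   list of 2D arrays, sub-aperture images of the same scene
    offsets: (du, dv) viewpoint offset for each view
    alpha:   refocus parameter selecting the virtual focal plane
    """
    acc = np.zeros_like(views[0], dtype=float)
    for view, (du, dv) in zip(views, offsets):
        # Shift each view in proportion to its viewpoint offset.
        shift = (round(alpha * du), round(alpha * dv))
        acc += np.roll(view, shift, axis=(0, 1))
    return acc / len(views)

# Objects at the depth selected by alpha line up across views and stay sharp;
# everything at other depths is averaged over misaligned copies and blurs out.
```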

The next generation camera will probably look something like the Light L16 camera, which features ground-breaking technology based on more than ten different sensors. Their data are combined using a computer to provide a 50 MB, re-focusable and re-zoomable, professional-quality image. The camera itself looks like a very exciting Picasso interpretation of a crazy cell-phone camera.

Yet these are just the first steps towards a new generation of cameras that will change the way in which we think of and take images. Researchers are also working hard on the problem of seeing through fog, seeing behind walls, and even imaging deep inside the human body and brain.

All of these techniques rely on combining images with models that explain how light travels through or around different substances.

Another interesting approach that is gaining ground relies on artificial intelligence to “learn” to recognize objects from the data. These techniques are inspired by learning processes in the human brain and are likely to play a major role in future imaging systems.

Single photon and quantum imaging technologies are also maturing to the point that they can take pictures at incredibly low light levels and capture videos at incredibly fast speeds, reaching a trillion frames per second. This is enough to even capture images of light itself traveling across a scene.

Some of these applications might require a little time to fully develop, but we now know that the underlying physics should allow us to solve these and other problems through a clever combination of new technology and computational ingenuity.

This article was originally published on The Conversation. Read the original article.

Image Credit: Sylvia Adams / Shutterstock.com

Posted in Human Robots

#431920 If We Could Engineer Animals to Be as ...

Advances in neural implants and genetic engineering suggest that in the not-too-distant future we may be able to boost human intelligence. If that’s true, could we—and should we—bring our animal cousins along for the ride?
Human brain augmentation made headlines last year after several tech firms announced ambitious efforts to build neural implant technology. Duke University neuroscientist Mikhail Lebedev told me in July it could be decades before these devices have applications beyond the strictly medical.
But he said the technology, as well as other pharmacological and genetic engineering approaches, will almost certainly allow us to boost our mental capacities at some point in the next few decades.
Whether this kind of cognitive enhancement is a good idea or not, and how we should regulate it, are matters of heated debate among philosophers, futurists, and bioethicists, but for some it has raised the question of whether we could do the same for animals.
There’s already tantalizing evidence of the idea’s feasibility. As detailed in BBC Future, a group from MIT found that mice that were genetically engineered to express the human FOXP2 gene linked to learning and speech processing picked up maze routes faster. Another group at Wake Forest University studying Alzheimer’s found that neural implants could boost rhesus monkeys’ scores on intelligence tests.
The concept of “animal uplift” is most famously depicted in the Planet of the Apes movie series, whose planet-conquering protagonists are likely to put most people off the idea. But proponents are less pessimistic about the outcomes.
Science fiction author David Brin popularized the concept in his “Uplift” series of novels, in which humans share the world with various other intelligent animals that all bring their own unique skills, perspectives, and innovations to the table. “The benefits, after a few hundred years, could be amazing,” he told Scientific American.
Others, like George Dvorsky, the director of the Rights of Non-Human Persons program at the Institute for Ethics and Emerging Technologies, go further and claim there is a moral imperative. He told the Boston Globe that denying augmentation technology to animals would be just as unethical as excluding certain groups of humans.
Others are less convinced. Forbes’ Alex Knapp points out that developing the technology to uplift animals will likely require lots of very invasive animal research that will cause huge suffering to the animals it purports to help. This is problematic enough with normal animals, but could be even more morally dubious when applied to ones whose cognitive capacities have been enhanced.
The whole concept could also be based on a fundamental misunderstanding of the nature of intelligence. Humans are prone to seeing intelligence as a single, self-contained metric that progresses in a linear way with humans at the pinnacle.
In an opinion piece in Wired arguing against the likelihood of superhuman artificial intelligence, Kevin Kelly points out that science has no single dimension with which to rank the intelligence of different species. Each one combines a bundle of cognitive capabilities, some of which are well below our own and others of which are superhuman. He uses the example of the squirrel, which can remember the precise locations of thousands of acorns for years.
Uplift efforts may end up being less about boosting intelligence and more about making animals more human-like. That represents “a kind of benevolent colonialism” that assumes being more human-like is a good thing, Paul Graham Raven, a futures researcher at the University of Sheffield in the United Kingdom, told the Boston Globe. There’s scant evidence that’s the case, and it’s easy to see how a chimpanzee with the mind of a human might struggle to adjust.
There are also fundamental barriers that may make it difficult to achieve human-level cognitive capabilities in animals, no matter how advanced brain augmentation technology gets. In 2013 Swedish researchers selectively bred small fish called guppies for bigger brains. This made them smarter, but growing the energy-intensive organ meant the guppies developed smaller guts and produced fewer offspring to compensate.
This highlights the fact that uplifting animals may require more than just changes to their brains, possibly a complete rewiring of their physiology that could prove far more technically challenging than human brain augmentation.
Our intelligence is intimately tied to our evolutionary history—our brains are bigger than other animals’; opposable thumbs allow us to use tools; our vocal cords make complex communication possible. No matter how much you augment a cow’s brain, it still couldn’t use a screwdriver or talk to you in English because it simply doesn’t have the machinery.
Finally, from a purely selfish point of view, even if it does become possible to create a level playing field between us and other animals, it may not be a smart move for humanity. There’s no reason to assume animals would be any more benevolent than we are, having evolved in the same ‘survival of the fittest’ crucible that we have. And given our already endless capacity to divide ourselves along national, religious, or ethnic lines, conflict between species seems inevitable.
We’re already likely to face considerable competition from smart machines in the coming decades if you believe the hype around AI. So maybe adding a few more intelligent species to the mix isn’t the best idea.
Image Credit: Ron Meijer / Shutterstock.com

Posted in Human Robots

#431906 Low-Cost Soft Robot Muscles Can Lift 200 ...

Jerky mechanical robots are staples of science fiction, but to seamlessly integrate into everyday life they’ll need the precise yet powerful motor control of humans. Now scientists have created a new class of artificial muscles that could soon make that a reality.
The advance is the latest breakthrough in the field of soft robotics. Scientists are increasingly designing robots using soft materials that more closely resemble biological systems, which can be more adaptable and better suited to working in close proximity to humans.
One of the main challenges has been creating soft components that match the power and control of the rigid actuators that drive mechanical robots—things like motors and pistons. Now researchers at the University of Colorado Boulder have built a series of low-cost artificial muscles—as little as 10 cents per device—using soft plastic pouches filled with electrically insulating liquids that contract with the force and speed of mammalian skeletal muscles when a voltage is applied to them.

Three different designs of the so-called hydraulically amplified self-healing electrostatic (HASEL) actuators were detailed in two papers in the journals Science and Science Robotics last week. They could carry out a variety of tasks, from gently picking up delicate objects like eggs or raspberries to lifting objects many times their own weight, such as a gallon of water, at rapid repetition rates.
“We draw our inspiration from the astonishing capabilities of biological muscle,” Christoph Keplinger, an assistant professor at UC Boulder and senior author of both papers, said in a press release. “Just like biological muscle, HASEL actuators can reproduce the adaptability of an octopus arm, the speed of a hummingbird and the strength of an elephant.”
The artificial muscles work by applying a voltage to hydrogel electrodes on either side of pouches filled with liquid insulators, which can be as simple as canola oil. This creates an attraction between the two electrodes, pulling them together and displacing the liquid. This causes a change of shape that can push or pull levers, arms or any other articulated component.
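The driving force can be estimated from first principles: the electrostatic (Maxwell) pressure squeezing the pouch grows with the square of the applied field. The numbers below are illustrative assumptions, not values from the papers, but they show why kilovolt drive voltages produce muscle-like pressures:

```python
# Back-of-the-envelope estimate of the electrostatic pressure in a
# HASEL-style actuator. All values are assumed for illustration.
EPS0 = 8.854e-12   # vacuum permittivity, F/m
eps_r = 3.0        # assumed relative permittivity of the liquid dielectric
voltage = 8_000.0  # assumed drive voltage, V
gap = 1e-4         # assumed electrode separation, m (0.1 mm)

e_field = voltage / gap                       # electric field, V/m
pressure = 0.5 * EPS0 * eps_r * e_field ** 2  # Maxwell stress, Pa

print(f"field ~ {e_field:.1e} V/m, pressure ~ {pressure / 1e3:.0f} kPa")
# ~85 kPa: the same order of magnitude as mammalian skeletal muscle stress
```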
The design is essentially a synthesis of two leading approaches to actuating soft robots. Pneumatic and hydraulic actuators that pump fluids around have been popular due to their high forces, easy fabrication and ability to mimic a variety of natural motions. But they tend to be bulky and relatively slow.
Dielectric elastomer actuators apply an electric field across a solid insulating layer to make it flex. These can mimic the responsiveness of biological muscle. But they are not very versatile and can also fail catastrophically, because the high voltages required can cause a bolt of electricity to blast through the insulator, destroying it. The likelihood of this happening increases in line with the size of their electrodes, which makes it hard to scale them up. By combining the two approaches, researchers get the best of both worlds, with the power, versatility and easy fabrication of a fluid-based system and the responsiveness of electrically-powered actuators.
One of the designs holds particular promise for robotics applications, as it behaves a lot like biological muscle. The so-called Peano-HASEL actuators are made up of multiple rectangular pouches connected in series, which allows them to contract linearly, just like real muscle. They can lift more than 200 times their own weight, and because they are electrically powered, they exceed the flexing speed of human muscle.
As the name suggests, the HASEL actuators are also self-healing. They are still prone to the same kind of electrical damage as dielectric elastomer actuators, but the liquid insulator is able to immediately self-heal by redistributing itself and regaining its insulating properties.
The muscles can even monitor the amount of strain they’re under to provide the same kind of feedback biological systems would. The muscle’s capacitance—its ability to store an electric charge—changes as the device stretches, which makes it possible to power the arm while simultaneously measuring what position it’s in.
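As a rough illustration of how this self-sensing works, the actuator can be treated as a parallel-plate capacitor whose electrode overlap changes as the pouch deforms, so a capacitance reading can be inverted into an estimate of actuator position. The geometry and permittivity below are purely hypothetical:

```python
# Toy self-sensing model: position is inferred from measured capacitance
# using the parallel-plate formula C = eps0 * eps_r * A / d.
EPS0 = 8.854e-12  # vacuum permittivity, F/m
eps_r = 3.0       # assumed dielectric permittivity
width = 0.02      # assumed electrode width, m
gap = 1e-4        # assumed dielectric thickness, m

def capacitance(overlap_length):
    """Forward model: capacitance for a given electrode overlap (m)."""
    return EPS0 * eps_r * (width * overlap_length) / gap

def overlap_from_capacitance(c_measured):
    """Inverse model: recover electrode overlap from a capacitance reading."""
    return c_measured * gap / (EPS0 * eps_r * width)

c = capacitance(0.015)              # simulate a reading at 15 mm overlap
print(overlap_from_capacitance(c))  # 0.015: position recovered
```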
The researchers say this could imbue robots with a similar sense of proprioception or body-awareness to that found in plants and animals. “Self-sensing allows for the development of closed-loop feedback controllers to design highly advanced and precise robots for diverse applications,” Shane Mitchell, a PhD student in Keplinger’s lab and an author on both papers, said in an email.
The researchers say the high voltages required are an ongoing challenge, though they’ve already designed devices in the lab that use a fifth of the voltage of those featured in the recent papers.
In most of their demonstrations, these soft actuators were being used to power rigid arms and levers, pointing to the fact that future robots are likely to combine both rigid and soft components, much like animals do. The potential applications for the technology range from more realistic prosthetics to much more dextrous robots that can work easily alongside humans.
It will take some work before these devices appear in commercial robots. But the combination of high performance with simple and inexpensive fabrication methods means other researchers are likely to jump in, so innovation could be rapid.
Image Credit: Keplinger Research Group/University of Colorado

Posted in Human Robots

#431899 Darker Still: Black Mirror’s New ...

The key difference between science fiction and fantasy is that science fiction is grounded in scientific fact, which makes it at least possible, while fantasy is not. This is where Black Mirror is both an entertaining and terrifying work of science fiction. Created by Charlie Brooker, the anthology series tells cautionary tales of emerging technology that could one day be an integral part of our everyday lives.
While watching the often alarming episodes, one can’t help but recognize the eerie similarities to some of the tech tools that are already abundant in our lives today. In fact, many previous Black Mirror predictions are already becoming reality.
The latest season of Black Mirror was arguably darker than ever. This time, Brooker seemed to focus on the ethical implications of one particular area: neurotechnology.
Emerging Neurotechnology
Warning: The remainder of this article may contain spoilers from Season 4 of Black Mirror.
Most of the storylines from season four revolve around neurotechnology and brain-machine interfaces. They are set in a world where people have the power to upload their consciousness onto machines, have fully immersive experiences in virtual reality, merge their minds with other minds, record others’ memories, and even track what others are thinking, feeling, and doing.
How can all this ever be possible? Well, these capabilities are already being developed by pioneers and researchers globally. Early last year, Elon Musk unveiled Neuralink, a company whose goal is to merge the human mind with AI through a neural lace. We’ve already connected two brains via the internet, allowing one brain to communicate with another. Various research teams have been able to develop mechanisms for “reading minds” or reconstructing memories of individuals via devices. The list goes on.
With many of the technologies we see in Black Mirror it’s not a question of if, but when. Futurist Ray Kurzweil has predicted that by the 2030s we will be able to upload our consciousness onto the cloud via nanobots that will “provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the internet, and otherwise greatly expand human intelligence.” While other experts continue to challenge Kurzweil on the exact year we’ll accomplish this feat, with the current exponential growth of our technological capabilities, we’re on track to get there eventually.
Ethical Questions
As always, technology is only half the conversation. Equally fascinating are the many ethical and moral questions this topic raises.
For instance, with the increasing convergence of artificial intelligence and virtual reality, we have to ask ourselves whether our morality from the physical world transfers equally into the virtual world. The first episode of season four, USS Callister, tells the story of a VR pioneer, Robert Daly, who creates breakthrough AI and VR to satisfy his personal frustrations and sexual urges. He uses the DNA of his coworkers (and their children) to re-create them digitally in his virtual world, to which he escapes to torture them, while their real-world counterparts remain oblivious.
Audiences are left asking themselves: should what happens in the digital world be considered any less “real” than the physical world? How do we know if the individuals in the virtual world (who are ultimately based on algorithms) have true feelings or sentiments? Have they been developed to exhibit characteristics associated with suffering, or can they really feel suffering? Fascinatingly, these questions point to the hard problem of consciousness—the question of if, why, and how a given physical process generates the specific experience it does—which remains a major mystery in neuroscience.
Towards the end of USS Callister, the hostages of Daly’s virtual world attempt to escape through suicide, by committing an act that will delete the code that allows them to exist. This raises yet another mind-boggling ethical question: if we “delete” code that constitutes a digital being, should that be considered murder (or suicide, in this case)? Why shouldn’t it? When we murder someone we are, in essence, taking away their capacity to live and to be, without their consent. By unplugging a self-aware AI, wouldn’t we be violating its basic right to live in the same way? Does AI, as code, even have rights?
Brain implants can also have a radical impact on our self-identity and how we define the word “I”. In the episode Black Museum, instead of witnessing just one horror, we get a series of scares in little segments. One of those segments tells the story of a father who attempts to reincarnate the mother of his child by uploading her consciousness into his mind and allowing her to live in his head (essentially giving him multiple personality disorder). In this way, she can experience special moments with their son.
With “no privacy for him, and no agency for her” the good intention slowly goes very wrong. This story raises a critical question: should we be allowed to upload consciousness into limited bodies? Even more, if we are to upload our minds into “the cloud,” at what point do we lose our individuality to become one collective being?
These questions can form the basis of hours of debate, but we’re just getting started. There are no right or wrong answers with many of these moral dilemmas, but we need to start having such discussions.
The Downside of Dystopian Sci-Fi
Like last season’s San Junipero, one episode of the series, Hang the DJ, had an uplifting ending. Yet the overwhelming majority of the stories in Black Mirror continue to focus on the darkest side of human nature, feeding into the pre-existing paranoia of the general public. There is certainly some value in this; it’s important to be aware of the dangers of technology. After all, what better way to explore these dangers before they occur than through speculative fiction?
A big takeaway from every tale told in the series is that the greatest threat to humanity does not come from technology, but from ourselves. Technology itself is not inherently good or evil; it all comes down to how we choose to use it as a society. So for those of you who are techno-paranoid, beware, for it’s not the technology you should fear, but the humans who get their hands on it.
While we can paint negative visions for the future, though, it is also important to paint positive ones. The kind of visions we set for ourselves have the power to inspire and motivate generations. Many people are inherently pessimistic when thinking about the future, and that pessimism in turn can shape their contributions to humanity.
While utopia may not exist, the future of our species could and should be one of solving global challenges, abundance, prosperity, liberation, and cosmic transcendence. Now that would be a thrilling episode to watch.
Image Credit: Billion Photos / Shutterstock.com

Posted in Human Robots

#431872 AI Uses Titan Supercomputer to Create ...

You don’t have to dig too deeply into the archive of dystopian science fiction to uncover the horror that intelligent machines might unleash. The Matrix and The Terminator are probably the most well-known examples of self-replicating, intelligent machines attempting to enslave or destroy humanity in the process of building a brave new digital world.
The prospect of artificially intelligent machines creating other artificially intelligent machines took a big step forward in 2017. However, we’re far from the runaway technological singularity futurists are predicting by mid-century or earlier, let alone murderous cyborgs or AI avatar assassins.
The first big boost this year came from Google. The tech giant announced it was developing automated machine learning (AutoML), writing algorithms that can do some of the heavy lifting by identifying the right neural networks for a specific job. Now researchers at the Department of Energy’s Oak Ridge National Laboratory (ORNL), using the most powerful supercomputer in the US, have developed an AI system that can generate neural networks as good as, if not better than, any developed by a human, in less than a day.
It can take months for the brainiest, best-paid data scientists to develop deep learning software, which sends data through a complex web of mathematical algorithms. The system is modeled after the human brain and known as an artificial neural network. Even Google’s AutoML took weeks to design a superior image recognition system, one of the more standard operations for AI systems today.
Computing Power
Of course, Google Brain project engineers only had access to 800 graphics processing units (GPUs), a type of computer hardware that works especially well for deep learning. Nvidia, which pioneered the development of GPUs, is considered the gold standard in today’s AI hardware architecture. Titan, the supercomputer at ORNL, boasts more than 18,000 GPUs.
The ORNL research team’s algorithm, called MENNDL for Multinode Evolutionary Neural Networks for Deep Learning, isn’t designed to create AI systems that cull cute cat photos from the internet. Instead, MENNDL is a tool for testing and training thousands of potential neural networks to work on unique science problems.
That requires a different approach from the Google and Facebook AI platforms of the world, notes Steven Young, a postdoctoral research associate at ORNL who is on the team that designed MENNDL.
“We’ve discovered that those [neural networks] are very often not the optimal network for a lot of our problems, because our data, while it can be thought of as images, is different,” he explains to Singularity Hub. “These images, and the problems, have very different characteristics from object detection.”
AI for Science
One application of the technology involved a particle physics experiment at the Fermi National Accelerator Laboratory. Fermilab researchers are interested in understanding neutrinos, high-energy subatomic particles that rarely interact with normal matter but could be a key to understanding the early formation of the universe. One Fermilab experiment involves taking a sort of “snapshot” of neutrino interactions.
The team wanted the help of an AI system that could analyze and classify Fermilab’s detector data. MENNDL evaluated 500,000 neural networks in 24 hours. Its final solution proved superior to custom models developed by human scientists.
In another case involving a collaboration with St. Jude Children’s Research Hospital in Memphis, MENNDL improved the error rate of a human-designed algorithm for identifying mitochondria inside 3D electron microscopy images of brain tissue by 30 percent.
“We are able to do better than humans in a fraction of the time at designing networks for these sort of very different datasets that we’re interested in,” Young says.
What makes MENNDL particularly adept is its ability to identify the optimal hyperparameters—the key variables—for tackling a particular dataset.
“You don’t always need a big, huge deep network. Sometimes you just need a small network with the right hyperparameters,” Young says.
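The evolutionary strategy behind this kind of search can be sketched in a few lines: maintain a population of candidate hyperparameter sets, score each, and breed mutated copies of the best performers. The search space, mutation scheme, and toy fitness function below are illustrative assumptions, not MENNDL’s actual implementation (which evaluates full networks across thousands of GPUs):

```python
import random

# Hypothetical hyperparameter search space.
SEARCH_SPACE = {
    "layers": [2, 3, 4, 5],
    "units": [32, 64, 128, 256],
    "learning_rate": [1e-4, 1e-3, 1e-2],
}

def random_candidate():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(candidate):
    # Copy a parent and re-randomize one hyperparameter.
    child = dict(candidate)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def evolve(fitness, generations=20, pop_size=16, keep=4):
    """fitness maps a candidate dict to a score (higher is better),
    e.g. the validation accuracy of the network it describes."""
    population = [random_candidate() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:keep]
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - keep)]
    return max(population, key=fitness)

# Toy stand-in for a real training-and-validation run:
best = evolve(lambda c: -abs(c["units"] - 128))
print(best)  # typically converges on candidates with units == 128
```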
A Virtual Data Scientist
That’s not dissimilar to the approach of a company called H2O.ai, a startup out of Silicon Valley that uses open source machine learning platforms to “democratize” AI. It applies machine learning to create business solutions for Fortune 500 companies, including some of the world’s biggest banks and healthcare companies.
“Our software is more [about] pattern detection, let’s say anti-money laundering or fraud detection or which customer is most likely to churn,” Dr. Arno Candel, chief technology officer at H2O.ai, tells Singularity Hub. “And that kind of insight-generating software is what we call AI here.”
The company’s latest product, Driverless AI, promises to deliver the data scientist equivalent of a chessmaster to its customers (the company claims several such grandmasters in its employ and advisory board). In other words, the system can analyze a raw dataset and, like MENNDL, automatically identify what features should be included in the computer model to make the most of the data based on the best “chess moves” of its grandmasters.
“So we’re using those algorithms, but we’re giving them the human insights from those data scientists, and we automate their thinking,” he explains. “So we created a virtual data scientist that is relentless at trying these ideas.”
Inside the Black Box
Not unlike how the human brain reaches a conclusion, it’s not always possible to understand how a machine, despite being designed by humans, reaches its own solutions. The lack of transparency is often referred to as the AI “black box.” Experts like Young say we can learn something about the evolutionary process of machine learning by generating millions of neural networks and seeing what works well and what doesn’t.
“You’re never going to be able to completely explain what happened, but maybe we can better explain it than we currently can today,” Young says.
Transparency is built into the “thought process” of each particular model generated by Driverless AI, according to Candel.
The computer even explains itself to the user in plain English at each decision point. There is also real-time feedback that allows users to prioritize features, or parameters, to see how the changes improve the accuracy of the model. For example, the system may include data from people in the same zip code as it creates a model to describe customer turnover.
“That’s one of the advantages of our automatic feature engineering: it’s basically mimicking human thinking,” Candel says. “It’s not just neural nets that magically come up with some kind of number, but we’re trying to make it statistically significant.”
Moving Forward
Much digital ink has been spilled over the dearth of skilled data scientists, so automating certain design aspects for developing artificial neural networks makes sense. Experts agree that automation alone won’t solve that particular problem. However, it will free computer scientists to tackle more difficult issues, such as parsing the inherent biases that exist within the data used by machine learning today.
“I think the world has an opportunity to focus more on the meaning of things and not on the laborious tasks of just fitting a model and finding the best features to make that model,” Candel notes. “By automating, we are pushing the burden back for the data scientists to actually do something more meaningful, which is think about the problem and see how you can address it differently to make an even bigger impact.”
The team at ORNL expects it can also make bigger impacts beginning next year when the lab’s next supercomputer, Summit, comes online. While Summit will boast only 4,600 nodes, it will sport the latest and greatest GPU technology from Nvidia and CPUs from IBM. That means it will deliver more than five times the computational performance of Titan, the world’s fifth-most powerful supercomputer today.
“We’ll be able to look at much larger problems on Summit than we were able to with Titan and hopefully get to a solution much faster,” Young says.
It’s all in a day’s work.
Image Credit: Gennady Danilkin / Shutterstock.com

Posted in Human Robots