We all have scars, and each one tells a story. Tales of tomfoolery, tales of haphazardness, or in my case, tales of stupidity.
Whether the cause of your scar was a push-bike accident, a lack of concentration while cutting onions, or simply the byproduct of an active lifestyle, the experience was likely extremely painful and distressing. Not to mention the long and vexatious recovery period, stretching out for weeks and months after the actual event!
Cast your minds back to that time. How you longed for instant relief from your discomfort! How you longed to have your capabilities restored in an instant!
Well, materials that can heal themselves in an instant may not be far from becoming a reality—and a family of them known as elastomers holds the key.
“Elastomer” is essentially a big, fancy word for rubber. However, elastomers have one unique property—they are capable of returning to their original form after being vigorously stretched and deformed.
This unique property of elastomers has caught the eye of many scientists around the world, particularly those working in the field of robotics. The reason? An elastomer can be encouraged to return to its original shape, in many cases by simply applying heat. The implication is the quick and cost-effective repair of “wounds”—cuts, tears, and punctures to the soft, elastomer-based appendages of a robot’s exoskeleton.
Researchers from Vrije Universiteit Brussel in Belgium have been toying with the technique, and with remarkable success. The team built a robotic hand with fingers made of a type of elastomer. They found that cuts and punctures were indeed able to repair themselves simply by applying heat to the affected area.
How long does the healing process take? In this instance, about a day. Now that’s a lot shorter than the weeks and months of recovery time we typically need for a flesh wound, during which we are unable to write, play the guitar, or do the dishes. If you consider the latter to be a bad thing…
However, this isn’t the first time scientists have played around with elastomers and examined their self-healing properties. Another team of scientists, headed up by Cheng-Hui Li and Chao Wang, discovered a different type of elastomer that exhibited autonomous self-healing properties. Just to help you picture this stuff, the material closely resembles animal muscle—strong, flexible, and elastic, with self-healing powers to boot.
Advancements in the world of self-healing elastomers, or rubbers, may also affect the lives of everyday motorists. Researchers from the Harvard John A. Paulson School of Engineering and Applied Sciences (SEAS) have developed a self-healing rubber material that could be used to make tires that repair their own punctures.
This time the mechanism of self-healing doesn’t involve heat. Rather, it is related to a physical phenomenon associated with the rubber’s unique structure. Normally, when a large enough stress is applied to a typical rubber, there is catastrophic failure at the focal point of that stress. The self-healing rubber the researchers created, on the other hand, distributes that same stress evenly over a network of “crazes”—which are like cracks connected by strands of fiber.
Here’s the interesting part. Not only does this unique physical characteristic of the rubber prevent catastrophic failure, it facilitates self-repair. According to Harvard researchers, when the stress is released, the material snaps back to its original form and the crazes heal.
This wonder material could be used in any number of rubber-based products.
Professor Jinrong Wu, of Sichuan University, China, and co-author of the study, happened to single out tires: “Imagine that we could use this material as one of the components to make a rubber tire… If you have a cut through the tire, this tire wouldn’t have to be replaced right away. Instead, it would self-heal while driving, enough to give you leeway to avoid dramatic damage,” said Wu.
So where to from here? Well, self-healing elastomers could have any number of applications. According to an article published by Quartz, the material could be used in artificial limbs, giving them a measure of structural integrity so they don’t end up looking like a tattered mess after years of regular use.
Or perhaps a sort of elastomer-based hybrid skin is on the horizon. A skin in which wounds heal instantly. And recovery time, unlike your regular old human skin of yesteryear, is significantly slashed. Furthermore, this future skin might eliminate those little reminders we call scars.
For those with poor judgment skills, this spells an end to disquieting reminders of our own stupidity.
Image Credit: Vrije Universiteit Brussel / Prof. Dr. ir. Bram Vanderborght
“The Six Ds are a chain reaction of technological progression, a road map of rapid development that always leads to enormous upheaval and opportunity.”
–Peter Diamandis and Steven Kotler, Bold
We live in incredible times. News travels the globe in an instant. Music, movies, games, communication, and knowledge are ever-available on always-connected devices. From biotechnology to artificial intelligence, powerful technologies that were once only available to huge organizations and governments are becoming more accessible and affordable thanks to digitization.
The potential for entrepreneurs to disrupt industries and corporate behemoths to unexpectedly go extinct has never been greater.
One hundred or fifty or even twenty years ago, disruption meant coming up with a product or service people needed but didn’t have yet, then finding a way to produce it with higher quality and lower costs than your competitors. This entailed hiring hundreds or thousands of employees, having a large physical space to put them in, and waiting years or even decades for hard work to pay off and products to come to fruition.
“Technology is disrupting traditional industrial processes, and they’re never going back.”
But thanks to digital technologies developing at exponential rates of change, the landscape of 21st-century business has taken on a dramatically different look and feel.
The structure of organizations is changing. Instead of thousands of employees and large physical plants, modern start-ups are small organizations focused on information technologies. They dematerialize what was once physical and create new products and revenue streams in months, sometimes weeks.
It no longer takes a huge corporation to have a huge impact.
Technology is disrupting traditional industrial processes, and they’re never going back. This disruption is filled with opportunity for forward-thinking entrepreneurs.
The secret to positively impacting the lives of millions of people is understanding and internalizing the growth cycle of digital technologies. This growth cycle takes place in six key steps, which Peter Diamandis calls the Six Ds of Exponentials: digitization, deception, disruption, demonetization, dematerialization, and democratization.
According to Diamandis, cofounder and chairman of Singularity University and founder and executive chairman of XPRIZE, when something is digitized it begins to behave like an information technology.
Newly digitized products develop at an exponential pace instead of a linear one, fooling onlookers at first before going on to disrupt companies and whole industries. Before you know it, something that was once expensive and physical is an app that costs a buck.
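The deceptive phase is easier to see with a toy calculation. The numbers below are purely illustrative: both curves start from the same tiny base of 0.01, one growing linearly and one doubling each step.

```python
# Hypothetical numbers, purely to illustrate the "deceptive" phase:
# both curves start at 0.01; one grows linearly, one doubles each step.
linear = [0.01 * (step + 1) for step in range(30)]
exponential = [0.01 * 2 ** step for step in range(30)]

for step in (6, 12, 24, 29):
    print(f"step {step:2d}: linear={linear[step]:8.2f}  "
          f"exponential={exponential[step]:14.2f}")
```

Several doublings in, the exponential curve still sits below 1.0, which is exactly when onlookers dismiss it; a couple of dozen doublings later it has left the linear curve behind by many orders of magnitude.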
Newspapers and CDs are two obvious recent examples. The entertainment and media industries are still dealing with the aftermath of digitization as they attempt to transform and update old practices tailored to a bygone era. But it won’t end with digital media. As more of the economy is digitized—from medicine to manufacturing—industries will hop on an exponential curve and be similarly disrupted.
Diamandis’s Six Ds are critical to understanding and planning for this disruption.
The Six Ds of Exponentials are Digitized, Deceptive, Disruptive, Demonetized, Dematerialized, and Democratized.
Diamandis uses the contrasting fates of Kodak and Instagram to illustrate the power of the Six Ds and exponential thinking.
Kodak invented the digital camera in 1975, but didn’t invest heavily in the new technology, instead sticking with what had always worked: traditional cameras and film. In 1996, Kodak had a $28 billion market capitalization with 95,000 employees.
But the company didn’t pay enough attention to how digitization of their core business was changing it; people were no longer taking pictures in the same way and for the same reasons as before.
After a downward spiral, Kodak went bankrupt in 2012. That same year, Facebook acquired Instagram, a digital photo sharing app, which at the time was a startup with 13 employees. The acquisition’s price tag? $1 billion. And Instagram had been founded only 18 months earlier.
The most ironic piece of this story is that Kodak invented the digital camera; they took the first step toward overhauling the photography industry and ushering it into the modern age, but they were unwilling to disrupt their existing business by taking a risk in what was then uncharted territory. So others did it instead.
The same can happen with any technology that’s just getting off the ground. It’s easy to stop pursuing it in the early part of the exponential curve, when development appears to be moving slowly. But failing to follow through only gives someone else the chance to do it instead.
The Six Ds are a road map showing what can happen when an exponential technology is born. Not every phase is easy, but the results give even small teams the power to change the world in a faster and more impactful way than traditional business ever could.
Image Credit: Mohammed Tareq / Shutterstock
The tech industry touts its ability to automate tasks and remove slow and expensive humans from the equation. But in the background, a lot of the legwork training machine learning systems, solving problems software can’t, and cleaning up its mistakes is still done by people.
This was highlighted recently when Expensify, which promises to automatically scan photos of receipts to extract data for expense reports, was criticized for sending customers’ personally identifiable receipts to workers on Amazon’s Mechanical Turk (MTurk) crowdsourcing platform.
The company uses text analysis software to read the receipts, but if the automated system falls down then the images are passed to a human for review. While entrusting this job to random workers on MTurk was maybe not so wise—and the company quickly stopped after the furor—the incident brought to light that this kind of human safety net behind AI-powered services is actually very common.
As Wired notes, similar services like Ibotta and Receipt Hog that collect receipt information for marketing purposes also use crowdsourced workers. In a similar vein, while most users might assume their Facebook newsfeed is governed by faceless algorithms, the company has been ramping up the number of human moderators it employs to catch objectionable content that slips through the net, as has YouTube. Twitter also has thousands of human overseers.
Humans aren’t always witting contributors either. The old text-based reCAPTCHA problems Google once used to distinguish humans from machines were simultaneously helping the company digitize books by getting humans to interpret hard-to-read text.
“Every product that uses AI also uses people,” Jeffrey Bigham, a crowdsourcing expert at Carnegie Mellon University, told Wired. “I wouldn’t even say it’s a backstop so much as a core part of the process.”
Some companies are not shy about their use of crowdsourced workers. Startup Eloquent Labs wants to insert them between customer service chatbots and the human agents who step in when the machines fail. Often the AI isn’t sure what a particular message means, and an MTurk worker can step in to classify it faster and more cheaply than a full service agent could.
Fashion retailer Gilt provides “pre-emptive shipping,” which uses data analytics to predict what people will buy to get products to them faster. The company uses MTurk workers to provide subjective critiques of clothing that feed into their models.
MTurk isn’t the only player. Companies like CloudFactory and CrowdFlower provide crowdsourced human manpower tailored to particular niches, and some companies prefer to maintain their own communities of workers. Unbabel uses an army of 50,000 humans to check and edit the translations its artificial intelligence system produces for customers.
Most of the time these human workers aren’t just filling in the gaps; they’re also helping to train the machine learning component of these companies’ services by providing new examples of how to solve problems. Other times humans aren’t used “in the loop” with AI systems at all, but to prepare the data sets those systems learn from by labeling images, text, or audio.
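The division of labor described above (software keeps the confident cases, humans handle the rest, and the humans’ answers become new training data) can be sketched roughly as follows. Every class and function name here is hypothetical, not any company’s actual API:

```python
from dataclasses import dataclass, field

@dataclass
class HumanInTheLoop:
    """Sketch of the fallback pattern: the model keeps confident
    predictions; anything uncertain is routed to a human, and the
    human's answer is saved as a new labeled training example.
    Every name here is hypothetical."""
    threshold: float = 0.9
    training_examples: list = field(default_factory=list)

    def classify(self, item, model_predict, ask_human):
        label, confidence = model_predict(item)
        if confidence >= self.threshold:
            return label
        # The model is unsure: fall back to a human worker.
        human_label = ask_human(item)
        # The human's answer becomes training data for the model.
        self.training_examples.append((item, human_label))
        return human_label

# Toy stand-ins for the model and the crowd worker.
def model_predict(item):
    return ("receipt", 0.95) if "total" in item else ("unknown", 0.4)

loop = HumanInTheLoop()
print(loop.classify("total: $12.50", model_predict, lambda i: "receipt"))
print(loop.classify("blurry image", model_predict, lambda i: "receipt"))
print(len(loop.training_examples))  # only the uncertain case was queued
```

The design choice worth noticing is the queue of human answers: it is what turns a stopgap safety net into a training pipeline that shrinks the gaps over time.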
It’s even possible to use crowdsourced workers to carry out tasks typically tackled by machine learning, such as large-scale image analysis and forecasting.
Zooniverse gets citizen scientists to classify images of distant galaxies or videos of animals to help academics analyze large data sets too complex for computers. Almanis creates forecasts on everything from economics to politics with impressive accuracy by giving those who sign up to the website incentives for backing the correct answer to a question. Researchers have used MTurkers to power a chatbot, and there’s even a toolkit for building algorithms to control this human intelligence called TurKit.
So what does this prominent role for humans in AI services mean? Firstly, it suggests that many tools people assume are powered by AI may in fact be relying on humans. This has obvious privacy implications, as the Expensify story highlighted, but should also raise concerns about whether customers are really getting what they pay for.
One example of this is IBM’s Watson for Oncology, which is marketed as a data-driven AI system for providing cancer treatment recommendations. But an investigation by STAT highlighted that it’s actually largely driven by recommendations from a handful of (admittedly highly skilled) doctors at Memorial Sloan Kettering Cancer Center in New York.
Secondly, humans intervening in AI-run processes also suggests AI is still largely helpless without us, which is somewhat comforting to know among all the doomsday predictions of AI destroying jobs. At the same time, though, much of this crowdsourced work is monotonous, poorly paid, and isolating.
As machines trained by human workers get better at all kinds of tasks, this kind of piecemeal work filling in the increasingly small gaps in their capabilities may get more common. While tech companies often talk about AI augmenting human intelligence, for many it may actually end up being the other way around.
Image Credit: kentoh / Shutterstock.com
The first time Dr. Blake Richards heard about deep learning, he was convinced that he wasn’t just looking at a technique that would revolutionize artificial intelligence. He also knew he was looking at something fundamental about the human brain.
That was the early 2000s, and Richards was taking a course with Dr. Geoff Hinton at the University of Toronto. Hinton, a pioneer architect of the algorithm that would later take the world by storm, was offering an introductory course on his learning method inspired by the human brain.
The key words here are “inspired by.” Despite Richards’ conviction, the odds were stacked against him. The human brain, as it happens, seems to lack a critical function that’s programmed into deep learning algorithms. On the surface, the algorithms were violating basic biological facts already proven by neuroscientists.
But what if, superficial differences aside, deep learning and the brain are actually compatible?
Now, in a new study published in eLife, Richards, working with DeepMind, proposed a new algorithm based on the biological structure of neurons in the neocortex. Also known as the cortex, this outermost region of the brain is home to higher cognitive functions such as reasoning, prediction, and flexible thought.
The team networked their artificial neurons together into a multi-layered network and challenged it with a classic computer vision task—identifying hand-written numbers.
The new algorithm performed well. But the kicker is that it analyzed the learning examples in a way that’s characteristic of deep learning algorithms, even though it was completely based on the brain’s fundamental biology.
“Deep learning is possible in a biological framework,” concludes the team.
Because the model is only a computer simulation at this point, Richards hopes to pass the baton to experimental neuroscientists, who could actively test whether the algorithm operates in an actual brain.
If so, the data could then be passed back to computer scientists to work out the next generation of massively parallel and low-energy algorithms to power our machines.
It’s a first step towards merging the two fields back into a “virtuous circle” of discovery and innovation.
The Blame Game
While you’ve probably heard of deep learning’s recent wins against humans in the game of Go, you might not know the nitty-gritty behind the algorithm’s operations.
In a nutshell, deep learning relies on an artificial neural network with virtual “neurons.” Like a towering skyscraper, the network is structured into hierarchies: lower-level neurons process aspects of an input—for example, a horizontal or vertical stroke that eventually forms the number four—whereas higher-level neurons extract more abstract aspects of the number four.
To teach the network, you give it examples of what you’re looking for. The signal propagates forward in the network (like climbing up a building), where each neuron works to fish out something fundamental about the number four.
Like children trying to learn a skill the first time, initially the network doesn’t do so well. It spits out what it thinks a universal number four should look like—think a Picasso-esque rendition.
But here’s where the learning occurs: the algorithm compares the output with the ideal output, and computes the difference between the two (dubbed “error”). This error is then “backpropagated” throughout the entire network, telling each neuron: hey, this is how far off you were, so try adjusting your computation closer to the ideal.
Millions of examples and tweakings later, the network inches closer to the desired output and becomes highly proficient at the trained task.
This error signal is crucial for learning. Without efficient “backprop,” the network doesn’t know which of its neurons are off kilter. By assigning blame, the AI can better itself.
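The loop described above can be sketched with a deliberately tiny network: one input, one hidden neuron, one output, and a single hypothetical training example. This is a minimal illustration of the forward pass, the error, and backpropagation, not a production implementation:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# A deliberately tiny network: one input, one hidden neuron, one output.
w1 = random.uniform(-1, 1)   # input -> hidden weight
w2 = random.uniform(-1, 1)   # hidden -> output weight
x, target = 1.0, 0.25        # a single hypothetical training example
lr = 0.5

for step in range(2000):
    # Forward pass: the signal climbs up the network.
    h = sigmoid(w1 * x)
    y = sigmoid(w2 * h)
    # Error: how far the output is from the ideal.
    error = y - target
    # Backward pass: the error is propagated back down, assigning
    # blame to each weight via the chain rule.
    grad_y = error * y * (1 - y)        # blame at the output neuron
    grad_w2 = grad_y * h
    grad_h = grad_y * w2 * h * (1 - h)  # blame passed to the hidden neuron
    grad_w1 = grad_h * x
    w1 -= lr * grad_w1
    w2 -= lr * grad_w2

print(round(sigmoid(w2 * sigmoid(w1 * x)), 3))  # close to the 0.25 target
```

The line computing `grad_h` is the crux: it is the error signal traveling backward through the very same weight `w2` that carried the signal forward, which is exactly the requirement the brain seems unable to meet.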
The brain does this too. How? We have no clue.
What’s clear, though, is that the deep learning solution doesn’t work in the brain.
Backprop is a pretty needy function. It requires very specific infrastructure to work as expected.
For one, each neuron in the network has to receive the error feedback. But in the brain, neurons are only connected to a few downstream partners (if that). For backprop to work in the brain, early-level neurons need to be able to receive information from billions of connections in their downstream circuits—a biological impossibility.
And while certain deep learning algorithms adopt a more local form of backprop, passing error only between neighboring neurons, they still require the forward and backward connections between those neurons to be symmetric. This hardly ever occurs in the brain’s synapses.
More recent algorithms adopt a slightly different strategy, implementing a separate feedback pathway that helps the neurons figure out errors locally. While this is more biologically plausible, the brain doesn’t have a separate computational network dedicated to the blame game.
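One such separate-pathway scheme, known in the literature as feedback alignment, sends errors backward through fixed random weights instead of the symmetric copies backprop requires. Below is a minimal sketch of that idea (not the algorithm from the eLife paper); the network size, data, and learning rate are arbitrary choices for illustration:

```python
import math
import random

random.seed(1)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Tiny network: 1 input -> 2 hidden units -> 1 linear output.
w1 = [random.uniform(-1, 1) for _ in range(2)]  # input -> hidden
w2 = [random.uniform(-1, 1) for _ in range(2)]  # hidden -> output
# Feedback alignment: errors travel back through FIXED random
# weights b, instead of the symmetric copy of w2 backprop demands.
b = [random.uniform(-1, 1) for _ in range(2)]

data = [(-1.0, 0.2), (1.0, 0.8)]  # hypothetical (input, target) pairs
lr = 0.2
for _ in range(3000):
    for x, t in data:
        h = [sigmoid(w * x) for w in w1]                # forward pass
        y = sum(wi * hi for wi, hi in zip(w2, h))
        err = y - t
        for i in range(2):
            w2[i] -= lr * err * h[i]
            # Hidden-layer blame uses the random b[i], NOT w2[i]:
            grad_h = err * b[i] * h[i] * (1 - h[i])
            w1[i] -= lr * grad_h * x

for x, t in data:
    y = sum(w2[i] * sigmoid(w1[i] * x) for i in range(2))
    print(f"x={x:+.0f}  target={t}  output={y:.3f}")
```

Because the backward weights never mirror the forward ones, no synapse needs to know its partner’s strength, which is what makes the scheme more palatable biologically.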
What it does have are neurons with intricate structures, unlike the uniform “balls” currently used in deep learning.
The team took inspiration from pyramidal cells that populate the human cortex.
“Most of these neurons are shaped like trees, with ‘roots’ deep in the brain and ‘branches’ close to the surface,” says Richards. “What’s interesting is that these roots receive a different set of inputs than the branches that are way up at the top of the tree.”
This is an illustration of a multi-compartment neural network model for deep learning. Left: Reconstruction of pyramidal neurons from mouse primary visual cortex. Right: Illustration of simplified pyramidal neuron models. Image Credit: CIFAR
Curiously, the structure of neurons often turns out to be “just right” for efficiently cracking a computational problem. Take the processing of sensations: the bottoms of pyramidal neurons are right smack where they need to be to receive sensory input, whereas the tops are conveniently placed to transmit feedback errors.
Could this intricate structure be evolution’s solution to channeling the error signal?
The team set up a multi-layered neural network based on previous algorithms. But rather than having uniform neurons, they gave those in middle layers—sandwiched between the input and output—compartments, just like real neurons.
When trained with hand-written digits, the algorithm performed much better than a single-layered network, despite lacking a way to perform classical backprop. The cell-like structure itself was sufficient to assign error: the error signals at one end of the neuron are naturally kept separate from input at the other end.
Then, at the right moment, the neuron brings both sources of information together to find the best solution.
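To make the compartment idea concrete, here is a toy structural sketch (an illustration of the principle described above, not the model from the paper): feedforward input lands in a “basal” compartment, top-down error lands in an “apical” compartment, and the two are only combined when the neuron updates its weight.

```python
from dataclasses import dataclass

@dataclass
class TwoCompartmentNeuron:
    """Toy structural sketch, not the model from the paper: the basal
    compartment integrates feedforward input, the apical compartment
    accumulates top-down feedback, and the two stay separate until
    the neuron computes its weight update."""
    weight: float = 0.5
    basal: float = 0.0   # driven by sensory / lower-layer input
    apical: float = 0.0  # driven by top-down error feedback

    def receive_input(self, x):
        self.basal = self.weight * x   # local forward computation

    def receive_feedback(self, error_signal):
        self.apical = error_signal     # kept apart from the input end

    def plasticity_update(self, lr=0.1):
        # "At the right moment" the two compartments are integrated:
        # the apical error gates how the basal drive changes the weight.
        self.weight -= lr * self.apical * self.basal
        return self.weight

n = TwoCompartmentNeuron()
n.receive_input(1.0)     # forward drive arrives at the basal dendrites
n.receive_feedback(0.3)  # error signal arrives at the apical tuft
print(n.plasticity_update())
```

The point of the sketch is the separation itself: no backprop machinery is needed because the cell’s geometry keeps the error signal and the input signal from mixing until the update step.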
There’s some biological evidence for this: neuroscientists have long known that the neuron’s input branches perform local computations, which can be integrated with signals that propagate backwards from the so-called output branch.
However, we don’t yet know if this is the brain’s way of dealing blame—a question that Richards urges neuroscientists to test out.
What’s more, the network parsed the problem in a way eerily similar to traditional deep learning algorithms: it took advantage of its multi-layered structure to extract progressively more abstract “ideas” about each number.
“[This is] the hallmark of deep learning,” the authors explain.
The Deep Learning Brain
Without doubt, there will be more twists and turns to the story as computer scientists incorporate more biological details into AI algorithms.
One aspect that Richards and team are already eyeing is a top-down predictive function, in which signals from higher levels directly influence how lower levels respond to input.
Feedback from upper levels doesn’t just provide error signals; it could also be nudging lower processing neurons towards a “better” activity pattern in real-time, says Richards.
The network doesn’t yet outperform other non-biologically derived (but “brain-inspired”) deep networks. But that’s not the point.
“Deep learning has had a huge impact on AI, but, to date, its impact on neuroscience has been limited,” the authors say.
Now neuroscientists have a lead they can experimentally test: that the structure of neurons underlies nature’s own deep learning algorithm.
“What we might see in the next decade or so is a real virtuous cycle of research between neuroscience and AI, where neuroscience discoveries help us to develop new AI and AI can help us interpret and understand our experimental data in neuroscience,” says Richards.
Image Credit: christitzeimaging.com / Shutterstock.com