#431110 Soft robotics: self-contained soft ...
Researchers at Columbia Engineering have solved a long-standing issue in the creation of untethered soft robots whose actions and movements mimic natural biological systems. A group in the Creative Machines lab led by Hod Lipson, professor of mechanical engineering, has developed a 3D-printable synthetic soft muscle, a one-of-a-kind artificial active tissue with intrinsic expansion ability that does not require the external compressor or high-voltage equipment that previous artificial muscles did. The new material has a strain density (expansion per gram) 15 times greater than that of natural muscle, and it can lift 1,000 times its own weight.
#430868 These 7 Forces Are Changing the World at ...
It was the Greek philosopher Heraclitus who first said, “The only thing that is constant is change.”
He was onto something. But even he would likely be left speechless at the scale and pace of change the world has experienced in the past 100 years—not to mention the past 10.
Since 1917, the global population has gone from 1.9 billion people to 7.5 billion. Life expectancy has more than doubled in many developing countries and risen significantly in developed countries. In 1917 only eight percent of homes had phones—in the form of landline telephones—while today more than seven in ten Americans own a smartphone—in effect, a supercomputer that fits in their pocket.
And things aren’t going to slow down anytime soon. In a talk at Singularity University’s Global Summit this week in San Francisco, SU cofounder and chairman Peter Diamandis told the audience, “Tomorrow’s speed of change will make today look like we’re crawling.” He then shared his point of view about some of the most important factors driving this accelerating change.
Peter Diamandis at Singularity University’s Global Summit in San Francisco.
Computation
In 1965, Gordon Moore (cofounder of Intel) predicted computer chips would double in power and halve in cost every 18 to 24 months. What became known as Moore’s Law turned out to be accurate, and today affordable computer chips contain a billion or more transistors spaced just nanometers apart.
That means computers can do exponentially more calculations per second than they could thirty, twenty, or ten years ago—and at a dramatically lower cost. This in turn means we can generate a lot more information, and use computers for all kinds of applications they wouldn’t have been able to handle in the past (like diagnosing rare forms of cancer, for example).
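As a rough sanity check on the scale of that change, here is a back-of-the-envelope calculation using only the 18-to-24-month doubling period quoted above:

```python
# Doublings implied by Moore's Law over a given span of years,
# for the 18- and 24-month doubling periods cited above.
def doublings(years, months_per_doubling):
    return (years * 12) // months_per_doubling

for period in (18, 24):
    n = doublings(30, period)
    print(f"{period}-month doubling: {n} doublings in 30 years "
          f"-> roughly {2 ** n:,}x the transistors")
```

Even at the slower 24-month pace, 30 years implies 15 doublings—a factor of about 33,000—which is why a chip from the mid-1980s and one from the mid-2010s are hard to compare at all.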
Convergence
Increased computing power is the basis for myriad technological advances, which are themselves converging in ways we couldn't have imagined a couple of decades ago. As new technologies mature, the interactions between them create opportunities that accelerate the pace of change far more than any single technology could on its own.
A breakthrough in biotechnology, for example, might spring from a crucial development in artificial intelligence. An advance in solar energy could come about by applying concepts from nanotechnology.
Interface Moments
Technology is becoming more accessible even to the most non-techy among us. The internet was once the domain of scientists and coders, but these days anyone can make their own web page, and browsers make those pages easily searchable. Now, interfaces are opening up areas like robotics or 3D printing.
As Diamandis put it, “You don’t need to know how to code to 3D print an attachment for your phone. We’re going from mind to materialization, from intentionality to implication.”
Artificial intelligence is what Diamandis calls “the ultimate interface moment,” enabling everyone who can speak their mind to connect and leverage exponential technologies.
Connectivity
Today there are about three billion people around the world connected to the internet—that’s up from 1.8 billion in 2010. But projections show that by 2025 there will be eight billion people connected. This is thanks to a race between tech billionaires to wrap the Earth in internet; Elon Musk’s SpaceX has plans to launch a network of 4,425 satellites to get the job done, while Google’s Project Loon is using giant polyethylene balloons for the task.
These projects will enable five billion new minds to come online, and those minds will have access to exponential technologies via interface moments.
Sensors
Diamandis predicts that after we establish a 5G network with speeds of 10–100 Gbps, a proliferation of sensors will follow, to the point that there’ll be around 100,000 sensors per city block. These sensors will be equipped with the most advanced AI, and the combination of these two will yield an incredible amount of knowledge.
“By 2030 we’re heading towards 100 trillion sensors,” Diamandis said. “We’re heading towards a world in which we’re going to be able to know anything we want, anywhere we want, anytime we want.” He added that tens of thousands of drones will hover over every major city.
Intelligence
“If you think there’s an arms race going on for AI, there’s also one for HI—human intelligence,” Diamandis said. He explained that if a genius had been born in a remote village 100 years ago, he or she would likely never have gained access to the resources needed to put those gifts to widely productive use. But that’s about to change.
Private companies as well as military programs are working on brain-machine interfaces, with the ultimate aim of uploading the human mind. The focus in the future will be on increasing intelligence of individuals as well as companies and even countries.
Wealth Concentration
A final crucial factor driving mass acceleration is the increase in wealth concentration. “We’re living in a time when there’s more wealth in the hands of private individuals, and they’re willing to take bigger risks than ever before,” Diamandis said. Billionaires like Mark Zuckerberg, Jeff Bezos, Elon Musk, and Bill Gates are putting millions of dollars towards philanthropic causes that will benefit not only themselves, but humanity at large.
What It All Means
One of the biggest implications of the rate at which the world is changing, Diamandis said, is that the cost of everything is trending towards zero. We are heading towards abundance, and the evidence lies in the reduction of extreme poverty we’ve already seen and will continue to see at an even more rapid rate.
Listening to Diamandis’ optimism, it’s hard not to find it contagious.
“The world is becoming better at an extraordinary rate,” he said, pointing out the rises in literacy, democracy, vaccinations, and life expectancy, and the concurrent decreases in child mortality, birth rate, and poverty.
“We’re alive during a pivotal time in human history,” he concluded. “There is nothing we don’t have access to.”
Stock Media provided by seanpavonephoto / Pond5
#430761 How Robots Are Getting Better at Making ...
The multiverse of science fiction is populated by robots that are indistinguishable from humans. They are usually smarter, faster, and stronger than us. They seem capable of doing any job imaginable, from piloting a starship and battling alien invaders to taking out the trash and cooking a gourmet meal.
The reality, of course, is far from fantasy. Outside of industrial settings, robots have yet to live up to The Jetsons. The robots the public is exposed to seem little more than oversized plastic toys, pre-programmed to perform a set of tasks without the ability to interact meaningfully with their environment or their creators.
To paraphrase PayPal co-founder and tech entrepreneur Peter Thiel: we wanted cool robots; instead we got 140 characters and Flippy the burger bot. But scientists are making progress toward empowering robots to see and respond to their surroundings much as humans do.
Some of the latest developments in that arena were presented this month at the annual Robotics: Science and Systems Conference in Cambridge, Massachusetts. The papers drilled down into topics that ranged from how to make robots more conversational and help them understand language ambiguities to helping them see and navigate through complex spaces.
Improved Vision
Ben Burchfiel, a graduate student at Duke University, and his thesis advisor George Konidaris, an assistant professor of computer science at Brown University, developed an algorithm to enable machines to see the world more like humans.
In the paper, Burchfiel and Konidaris demonstrate how they can teach robots to identify and possibly manipulate three-dimensional objects even when they might be obscured or sitting in unfamiliar positions, such as a teapot that has been tipped over.
The researchers trained their algorithm by feeding it 3D scans of about 4,000 common household items such as beds, chairs, tables, and even toilets. They then tested its ability to identify about 900 new 3D objects from a bird’s-eye view alone. The algorithm made the right guess 75 percent of the time, versus a success rate of about 50 percent for other computer vision techniques.
In an email interview with Singularity Hub, Burchfiel notes that his research is not the first to train machines on 3D object classification. What sets their approach apart is that it confines the space in which the robot learns to classify objects.
“Imagine the space of all possible objects,” Burchfiel explains. “That is to say, imagine you had tiny Legos, and I told you [that] you could stick them together any way you wanted, just build me an object. You have a huge number of objects you could make!”
The infinite possibilities could result in an object no human or machine might recognize.
To address that problem, the researchers had their algorithm find a more restricted space that would host the objects it wants to classify. “By working in this restricted space—mathematically we call it a subspace—we greatly simplify our task of classification. It is the finding of this space that sets us apart from previous approaches.”
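The paper itself develops a more sophisticated method, but the core idea—learn a low-dimensional subspace per object class, then classify a new object by which subspace reconstructs it best—can be sketched with synthetic data. Everything below (the toy "voxel" vectors, class names, and dimensions) is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for flattened 3D voxel grids: each class varies around a
# center along a few latent directions (purely synthetic data).
def make_class(center, n=50, dim=200, latent=3):
    basis = rng.normal(size=(latent, dim))
    coeffs = rng.normal(size=(n, latent))
    return center + 0.1 * coeffs @ basis

centers = {"chair": rng.normal(size=200), "table": rng.normal(size=200)}
train = {name: make_class(c) for name, c in centers.items()}

# Fit a k-dimensional subspace per class with an SVD (a simple stand-in
# for the learned restricted space described in the article).
def fit_subspace(X, k=3):
    mean = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, vt[:k]

models = {name: fit_subspace(X) for name, X in train.items()}

# Classify by smallest reconstruction error after projecting onto each
# class's subspace.
def classify(x):
    def recon_error(mean, basis):
        projected = mean + (x - mean) @ basis.T @ basis
        return np.linalg.norm(x - projected)
    return min(models, key=lambda name: recon_error(*models[name]))

print(classify(train["chair"][0]))  # "chair"
```

Because each class occupies only a thin slice of the full space, a query object reconstructs well in the right class's subspace and poorly in the others—which is the simplification Burchfiel describes.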
Following Directions
Meanwhile, a pair of undergraduate students at Brown University figured out a way to teach robots to understand directions better, even at varying degrees of abstraction.
The research, led by Dilip Arumugam and Siddharth Karamcheti, addressed how to train a robot to understand nuances of natural language and then follow instructions correctly and efficiently.
“The problem is that commands can have different levels of abstraction, and that can cause a robot to plan its actions inefficiently or fail to complete the task at all,” says Arumugam in a press release.
In this project, the young researchers crowdsourced instructions for moving a virtual robot through an online domain. The space consisted of several rooms and a chair, which the robot was instructed to move from one room to another. The volunteers gave the robot commands at various levels of abstraction, ranging from the general (“take the chair to the blue room”) to step-by-step directions.
The researchers then used the database of spoken instructions to teach their system to understand the kinds of words used in different levels of language. The machine learned to not only follow instructions but to recognize the level of abstraction. That was key to kickstart its problem-solving abilities to tackle the job in the most appropriate way.
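A crude way to get a feel for the abstraction-recognition piece is a bag-of-words classifier over labeled commands. This is only an illustration—the example commands, labels, and scoring below are invented, and the actual system uses a much richer language-grounding model:

```python
import math
from collections import Counter

# Invented training commands labeled with an abstraction level.
data = [
    ("take the chair to the blue room", "high"),
    ("bring the chair into the blue room", "high"),
    ("put the chair in the red room", "high"),
    ("go north one step", "low"),
    ("move forward then turn left", "low"),
    ("turn right and go two steps", "low"),
]

counts = {"high": Counter(), "low": Counter()}  # word counts per level
totals = {"high": 0, "low": 0}
for text, label in data:
    for word in text.split():
        counts[label][word] += 1
        totals[label] += 1

vocab = {w for c in counts.values() for w in c}

def level(command):
    """Pick the level with the highest add-one-smoothed log-likelihood."""
    def score(label):
        return sum(
            math.log((counts[label][w] + 1) / (totals[label] + len(vocab)))
            for w in command.split()
        )
    return max(counts, key=score)

print(level("take the chair to the red room"))  # "high"
print(level("go forward one step"))             # "low"
```

Knowing whether a command is a high-level goal or a low-level step is what lets a planner choose between searching for a whole route and simply executing the next move.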
The research eventually moved from virtual pixels to a real place, using a Roomba-like robot that was able to respond to instructions within one second 90 percent of the time. By contrast, when the robot could not identify a command’s level of abstraction, it took 20 or more seconds to plan a task about 50 percent of the time.
One application of this new machine-learning technique referenced in the paper is a robot worker in a warehouse setting, but there are many fields that could benefit from a more versatile machine capable of moving seamlessly between small-scale operations and generalized tasks.
“Other areas that could possibly benefit from such a system include things from autonomous vehicles… to assistive robotics, all the way to medical robotics,” says Karamcheti, responding to a question by email from Singularity Hub.
More to Come
These achievements are yet another step toward creating robots that see, listen, and act more like humans. But don’t expect Disney to build a real-life Westworld next to Toon Town anytime soon.
“I think we’re a long way off from human-level communication,” Karamcheti says. “There are so many problems preventing our learning models from getting to that point, from seemingly simple questions like how to deal with words never seen before, to harder, more complicated questions like how to resolve the ambiguities inherent in language, including idiomatic or metaphorical speech.”
Even relatively verbose chatbots can run out of things to say, Karamcheti notes, as the conversation becomes more complex.
The same goes for human vision, according to Burchfiel.
While deep learning techniques have dramatically improved pattern matching—Google can find just about any picture of a cat—there’s more to human eyesight than, well, meets the eye.
“There are two big areas where I think perception has a long way to go: inductive bias and formal reasoning,” Burchfiel says.
The former is essentially all of the contextual knowledge people use to help them reason, he explains. Burchfiel uses the example of a puddle in the street. People are conditioned or biased to assume it’s a puddle of water rather than a patch of glass, for instance.
“This sort of bias is why we see faces in clouds; we have strong inductive bias helping us identify faces,” he says. “While it sounds simple at first, it powers much of what we do. Humans have a very intuitive understanding of what they expect to see, [and] it makes perception much easier.”
Formal reasoning is equally important. A machine can use deep learning, in Burchfiel’s example, to figure out the direction any river flows once it understands that water runs downhill. But it’s not yet capable of applying the sort of human reasoning that would allow us to transfer that knowledge to an alien setting, such as figuring out how water moves through a plumbing system on Mars.
“Much work was done in decades past on this sort of formal reasoning… but we have yet to figure out how to merge it with standard machine-learning methods to create a seamless system that is useful in the actual physical world.”
Robots still have a lot to learn about being human, which should make us feel good that we’re still by far the most complex machines on the planet.
Image Credit: Alex Knight via Unsplash