Tag Archives: mit

#432884 This Week’s Awesome Stories From ...

ROBOTICS
Boston Dynamics’ SpotMini Robot Dog Goes on Sale in 2019
Stephen Shankland | CNET
“The company has 10 SpotMini prototypes now and will work with manufacturing partners to build 100 this year, said company co-founder and President Marc Raibert at a TechCrunch robotics conference Friday. ‘That’s a prelude to getting into a higher rate of production’ in anticipation of sales next year, he said. Who’ll buy it? Probably not you.”

SPACE
Made In Space Wins NASA Contract for Next-Gen ‘Vulcan’ Manufacturing System
Mike Wall | Space.com
“’The Vulcan hybrid manufacturing system allows for flexible augmentation and creation of metallic components on demand with high precision,’ Mike Snyder, Made In Space chief engineer and principal investigator, said in a statement. …When Vulcan is ready to go, Made In Space aims to demonstrate the technology on the ISS, showing Vulcan’s potential usefulness for a variety of exploration missions.”

ARTIFICIAL INTELLIGENCE
Duplex Shows Google Failing at Ethical and Creative AI Design
Natasha Lomas | TechCrunch
“But while the home crowd cheered enthusiastically at how capable Google had seemingly made its prototype robot caller—with Pichai going on to sketch a grand vision of the AI saving people and businesses time—the episode is worryingly suggestive of a company that views ethics as an after-the-fact consideration. One it does not allow to trouble the trajectory of its engineering ingenuity.”

DESIGN
What Artists Can Teach Us About Making Technology More Human
Elizabeth Stinson | Wired
“For the last year, Park, along with the artist Sougwen Chung and dancers Jason Oremus and Garrett Coleman of the dance collective Hammerstep, have been working out of Bell Labs as part of a residency called Experiments in Art and Technology. The year-long residency, a collaboration between Bell Labs and the New Museum’s incubator, New Inc, culminated in ‘Only Human,’ a recently-opened exhibition at Mana where the artists’ pieces will be on display through the end of May.”

GOVERNANCE
The White House Says a New AI Task Force Will Protect Workers and Keep America First
Will Knight | MIT Technology Review
“The meeting and the select committee signal that the administration takes the impact of artificial intelligence seriously. This has not always been apparent. In his campaign speeches, Trump suggested reviving industries that have already been overhauled by automation. The Treasury secretary, Steven Mnuchin, also previously said that the idea of robots and AI taking people’s jobs was ‘not even on my radar screen.’”

Image Credit: Tithi Luadthong / Shutterstock.com

Posted in Human Robots

#432671 Stuff 3.0: The Era of Programmable ...

It’s the end of a long day in your apartment in the early 2040s. You decide your work is done for the day, stand up from your desk, and yawn. “Time for a film!” you say. The house responds to your cues. The desk splits into hundreds of tiny pieces, which flow behind you and take on shape again as a couch. The computer screen you were working on flows up the wall and expands into a flat projection screen. You relax into the couch and, after a few seconds, a remote control surfaces from one of its arms.

In a few seconds flat, you’ve gone from a neatly equipped office to a home cinema…all within the same four walls. Who needs more than one room?

This is the dream of those who work on “programmable matter.”

In his recent book about AI, Max Tegmark makes a distinction between three different levels of computational sophistication for organisms. Life 1.0 consists of single-celled organisms like bacteria; here, hardware is indistinguishable from software. A bacterium’s behavior is encoded in its DNA; it cannot learn new things.

Life 2.0 is where humans live on the spectrum. We are more or less stuck with our hardware, but we can change our software by choosing to learn different things, say, Spanish instead of Italian. Much like managing space on your smartphone, your brain’s hardware will allow you to download only a certain number of packages, but, at least theoretically, you can learn new behaviors without changing your underlying genetic code.

Life 3.0 marks a step-change from this: creatures that can change both their hardware and software in something like a feedback loop. This is what Tegmark views as a true artificial intelligence—one that can learn to change its own base code, leading to an explosion in intelligence. Perhaps, with CRISPR and other gene-editing techniques, we could be using our “software” to doctor our “hardware” before too long.

Programmable matter extends this analogy to the things in our world: what if your sofa could “learn” how to become a writing desk? What if, instead of a Swiss Army knife with dozens of tool attachments, you just had a single tool that “knew” how to become any other tool you could require, on command? In the crowded cities of the future, could houses be replaced by single, OmniRoom apartments? It would save space, and perhaps resources too.

Such are the dreams, anyway.

But engineering and manufacturing even a single gadget is a complex process, so you can imagine that making stuff able to turn into many different items is harder still. Professor Skylar Tibbits at MIT referred to it as 4D printing in a TED Talk, and the website for his research group, the Self-Assembly Lab, excitedly claims, “We have also identified the key ingredients for self-assembly as a simple set of responsive building blocks, energy and interactions that can be designed within nearly every material and machining process available. Self-assembly promises to enable breakthroughs across many disciplines, from biology to material science, software, robotics, manufacturing, transportation, infrastructure, construction, the arts, and even space exploration.”

Naturally, their projects are still in the early stages, but the Self-Assembly Lab and others are genuinely exploring just the kind of science fiction applications we mooted.

For example, there’s the cell-phone self-assembly project, which brings to mind eerie, 24/7 factories where mobile phones assemble themselves from 3D printed kits without human or robotic intervention. Okay, so the phones they’re making are hardly going to fly off the shelves as fashion items, but if all you want is something that works, it could cut manufacturing costs substantially and automate even more of the process.

One of the major hurdles to overcome in making programmable matter a reality is choosing the right fundamental building blocks. There’s a very important balance to strike. If the pieces are too big, the rearranged matter ends up lumpy: you lose fine detail, which rules out applications like tools for fine manipulation, and it becomes difficult to simulate a range of textures. On the other hand, if the pieces are too small, different problems can arise.

Imagine a setup where each piece is a small robot. You have to contain the robot’s power source and its brain, or at least some kind of signal-generator and signal-processor, all in the same compact unit. Perhaps you can imagine that one might be able to simulate a range of textures and strengths by changing the strength of the “bond” between individual units—your desk might need to be a little bit more firm than your bed, which might be nicer with a little more give.
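As a toy illustration of that bond-strength idea (every class, name, and number here is hypothetical, not taken from any real programmable-matter system), you could model each unit’s connections as adjustable stiffness values and retune the whole assembly at once:

```python
# Toy model: programmable-matter units joined by bonds of adjustable
# stiffness. "Desk" mode broadcasts stiff bonds; "bed" mode soft ones.
# All classes and values are hypothetical illustrations.

class Unit:
    def __init__(self, uid):
        self.uid = uid
        self.bonds = {}  # neighbor uid -> stiffness (notional units)

    def bond_to(self, other, stiffness):
        # Bonds are symmetric: record the link on both units.
        self.bonds[other.uid] = stiffness
        other.bonds[self.uid] = stiffness

def set_mode(units, stiffness):
    """Broadcast a new stiffness to every bond in the assembly."""
    for u in units:
        for neighbor in u.bonds:
            u.bonds[neighbor] = stiffness

# Assemble a small chain of five units.
units = [Unit(i) for i in range(5)]
for a, b in zip(units, units[1:]):
    a.bond_to(b, stiffness=1.0)

set_mode(units, stiffness=50.0)   # firm, desk-like
desk_stiffness = units[0].bonds[1]

set_mode(units, stiffness=2.0)    # compliant, bed-like
bed_stiffness = units[0].bonds[1]
```

A real system would of course involve physics, actuation, and communication latency; the sketch only shows the control idea of one global parameter changing the felt properties of the whole object.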

Early steps toward creating this kind of matter have been taken by those who are developing modular robots. There are plenty of different groups working on this, including teams at MIT, in Lausanne, and at the University of Brussels.

In the latter configuration, one individual robot acts as a centralized decision-maker, referred to as the brain unit, but additional robots can autonomously join the brain unit as and when needed to change the shape and structure of the overall system. Although the system is only ten units at present, it’s a proof-of-concept that control can be orchestrated over a modular system of robots; perhaps in the future, smaller versions of the same thing could be the components of Stuff 3.0.
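The brain-unit scheme described above can be sketched in a few lines (a minimal toy, assuming a hypothetical `BrainUnit` roster; no real modular-robot API is being depicted): one coordinator tracks which modules have joined and hands each a target position whenever the desired shape changes.

```python
# Minimal sketch of a "brain unit" coordinating a modular robot swarm.
# The brain keeps a roster of attached modules and assigns each one a
# target slot when the desired shape changes. Entirely hypothetical.

class BrainUnit:
    def __init__(self):
        self.members = []            # IDs of attached modules

    def attach(self, module_id):
        # Modules join autonomously, as and when needed.
        if module_id not in self.members:
            self.members.append(module_id)

    def reshape(self, shape):
        """Map each member to a position in the requested shape.

        `shape` is a list of target coordinates; if there are more
        members than slots, the extras simply stay unassigned.
        """
        assignments = {}
        for module_id, pos in zip(self.members, shape):
            assignments[module_id] = pos
        return assignments

brain = BrainUnit()
for i in range(4):
    brain.attach(i)

# Ask the collective to form an L-shape (toy grid coordinates).
plan = brain.reshape([(0, 0), (0, 1), (0, 2), (1, 0)])
```

The point of the centralized design is that only one unit needs global knowledge; the others just execute their assigned moves, which is what makes scaling the roster up or down cheap.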

You can imagine that with machine learning algorithms, such swarms of robots might be able to negotiate obstacles and respond to a changing environment more easily than an individual robot (those of you with techno-fear may read “respond to a changing environment” and imagine a robot seamlessly rearranging itself to allow a bullet to pass straight through without harm).

Speaking of robotics, the form of an ideal robot has been a subject of much debate. In fact, one of the major recent robotics competitions, DARPA’s Robotics Challenge, was won by a robot that could adapt: it beat Boston Dynamics’ famous ATLAS humanoid with the simple addition of wheels that allowed it to drive as well as walk.

Rather than building robots into a humanoid shape (only sometimes useful), allowing them to evolve and discover the ideal form for performing whatever you’ve tasked them to do could prove far more useful. This is particularly true in disaster response, where expensive robots can still be more valuable than humans, but conditions can be very unpredictable and adaptability is key.

Further afield, many futurists imagine “foglets” as the tiny nanobots that will be capable of constructing anything from raw materials, somewhat like the “Santa Claus machine.” But you don’t necessarily need anything quite so indistinguishable from magic to be useful. Programmable matter that can respond and adapt to its surroundings could be used in all kinds of industrial applications. How about a pipe that can strengthen or weaken at will, or divert its direction on command?

We’re some way off from being able to order our beds to turn into bicycles. As with many tech ideas, it may turn out that the traditional low-tech solution is far more practical and cost-effective, even as we can imagine alternatives. But as the march to put a chip in every conceivable object goes on, it seems certain that inanimate objects are about to get a lot more animated.

Image Credit: PeterVrabel / Shutterstock.com


#432563 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
Pedro Domingos on the Arms Race in Artificial Intelligence
Christoph Scheuermann and Bernhard Zand | Spiegel Online
“AI lowers the cost of knowledge by orders of magnitude. One good, effective machine learning system can do the work of a million people, whether it’s for commercial purposes or for cyberespionage. Imagine a country that produces a thousand times more knowledge than another. This is the challenge we are facing.”

BIOTECHNOLOGY
Gene Therapy Could Free Some People From a Lifetime of Blood Transfusions
Emily Mullin | MIT Technology Review
“A one-time, experimental treatment for an inherited blood disorder has shown dramatic results in a small study. …[Lead author Alexis Thompson] says the effect on patients has been remarkable. ‘They have been tied to this ongoing medical therapy that is burdensome and expensive for their whole lives,’ she says. ‘Gene therapy has allowed people to have aspirations and really pursue them.’ ”

ENVIRONMENT
The Revolutionary Giant Ocean Cleanup Machine Is About to Set Sail
Adele Peters | Fast Company
“By the end of 2018, the nonprofit says it will bring back its first harvest of ocean plastic from the North Pacific Gyre, along with concrete proof that the design works. The organization expects to bring 5,000 kilograms of plastic ashore per month with its first system. With a full fleet of systems deployed, it believes that it can collect half of the plastic trash in the Great Pacific Garbage Patch—around 40,000 metric tons—within five years.”

ROBOTICS
Autonomous Boats Will Be on the Market Sooner Than Self-Driving Cars
Tracey Lindeman | Motherboard
“Some unmanned watercraft…may be at sea commercially before 2020. That’s partly because automating all ships could generate a ridiculous amount of revenue. According to the United Nations, 90 percent of the world’s trade is carried by sea and 10.3 billion tons of products were shipped in 2016.”

DIGITAL CULTURE
Style Is an Algorithm
Kyle Chayka | Racked
“Confronting the Echo Look’s opaque statements on my fashion sense, I realize that all of these algorithmic experiences are matters of taste: the question of what we like and why we like it, and what it means that taste is increasingly dictated by black-box robots like the camera on my shelf.”

COMPUTING
How Apple Will Use AR to Reinvent the Human-Computer Interface
Tim Bajarin | Fast Company
“It’s in Apple’s DNA to continually deliver the ‘next’ major advancement to the personal computing experience. Its innovation in man-machine interfaces started with the Mac and then extended to the iPod, the iPhone, the iPad, and most recently, the Apple Watch. Now, get ready for the next chapter, as Apple tackles augmented reality, in a way that could fundamentally transform the human-computer interface.”

SCIENCE
Advanced Microscope Shows Cells at Work in Incredible Detail
Steve Dent | Engadget
“For the first time, scientists have peered into living cells and created videos showing how they function with unprecedented 3D detail. Using a special microscope and new lighting techniques, a team from Harvard and the Howard Hughes Medical Institute captured zebrafish immune cell interactions with unheard-of 3D detail and resolution.”

Image Credit: dubassy / Shutterstock.com


#432482 This Week’s Awesome Stories From ...

CYBERNETICS
A Brain-Boosting Prosthesis Moves From Rats to Humans
Robbie Gonzalez | WIRED
“Today, their proof-of-concept prosthetic lives outside a patient’s head and connects to the brain via wires. But in the future, Hampson hopes, surgeons could implant a similar apparatus entirely within a person’s skull, like a neural pacemaker. It could augment all manner of brain functions—not just in victims of dementia and brain injury, but healthy individuals, as well.”

ARTIFICIAL INTELLIGENCE
Here’s How the US Needs to Prepare for the Age of Artificial Intelligence
Will Knight | MIT Technology Review
“The Trump administration has abandoned this vision and has no intention of devising its own AI plan, say those working there. They say there is no need for an AI moonshot, and that minimizing government interference is the best way to make sure the technology flourishes… That looks like a huge mistake. If it essentially ignores such a technological transformation, the US might never make the most of an opportunity to reboot its economy and kick-start both wage growth and job creation. Failure to plan could also cause the birthplace of AI to lose ground to international rivals.”

BIOMIMICRY
Underwater GPS Inspired by Shrimp Eyes
Jeremy Hsu | IEEE Spectrum
“A few years ago, U.S. and Australian researchers developed a special camera inspired by the eyes of mantis shrimp that can see the polarization patterns of light waves, which resemble those in a rope being waved up and down. That means the bio-inspired camera can detect how light polarization patterns change once the light enters the water and gets deflected or scattered.”

POLITICS & TECHNOLOGY
‘The Business of War’: Google Employees Protest Work for the Pentagon
Scott Shane and Daisuke Wakabayashi | The New York Times
“Thousands of Google employees, including dozens of senior engineers, have signed a letter protesting the company’s involvement in a Pentagon program that uses artificial intelligence to interpret video imagery and could be used to improve the targeting of drone strikes.

The letter, which is circulating inside Google and has garnered more than 3,100 signatures, reflects a culture clash between Silicon Valley and the federal government that is likely to intensify as cutting-edge artificial intelligence is increasingly employed for military purposes. ‘We believe that Google should not be in the business of war,’ says the letter, addressed to Sundar Pichai, the company’s chief executive. It asks that Google pull out of Project Maven, a Pentagon pilot program, and announce a policy that it will not ‘ever build warfare technology.’ (Read the text of the letter.)”

CYBERNETICS
MIT’s New Headset Reads the ‘Words in Your Head’
Brian Heater | TechCrunch
“A team at MIT has been working on just such a device, though the hardware design, admittedly, doesn’t go too far toward removing that whole self-consciousness bit from the equation. AlterEgo is a head-mounted—or, more properly, jaw-mounted—device that’s capable of reading neuromuscular signals through built-in electrodes. The hardware, as MIT puts it, is capable of reading ‘words in your head.’”

Image Credit: christitzeimaging.com / Shutterstock.com


#432421 Cheetah III robot preps for a role as a ...

If you were to ask someone to name a new technology that emerged from MIT in the 21st century, there's a good chance they would name the robotic cheetah. Developed by the MIT Department of Mechanical Engineering's Biomimetic Robotics Lab under the direction of Associate Professor Sangbae Kim, the quadruped MIT Cheetah has made headlines for its dynamic legged gait, speed, jumping ability, and biomimetic design.
