
#438886 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
This Chip for AI Works Using Light, Not Electrons
Will Knight | Wired
“As demand for artificial intelligence grows, so does hunger for the computer power needed to keep AI running. Lightmatter, a startup born at MIT, is betting that AI’s voracious hunger will spawn demand for a fundamentally different kind of computer chip—one that uses light to perform key calculations. ‘Either we invent new kinds of computers to continue,’ says Lightmatter CEO Nick Harris, ‘or AI slows down.’”

BIOTECH
With This CAD for Genomes, You Can Design New Organisms
Eliza Strickland | IEEE Spectrum
“Imagine being able to design a new organism as easily as you can design a new integrated circuit. That’s the ultimate vision behind the computer-aided design (CAD) program being developed by the GP-write consortium. ‘We’re taking the same things we’d do for design automation in electronics, and applying them to biology,’ says Doug Densmore, an associate professor of electrical and computer engineering at Boston University.”

BIOLOGY
Hey, So These Sea Slugs Decapitate Themselves and Grow New Bodies
Matt Simon | Wired
“That’s right: It pulled a Deadpool. Just a few hours after its self-decapitation, the head began dragging itself around to feed. After a day, the neck wound had closed. After a week, it started to regenerate a heart. In less than a month, the whole body had grown back, and the disembodied slug was embodied once more.”

INTERNET
Move Over, Deep Nostalgia, This AI App Can Make Kim Jong-un Sing ‘I Will Survive’
Helen Sullivan | The Guardian
“If you’ve ever wanted to know what it might be like to see Kim Jong-un let loose at karaoke, your wish has been granted, thanks to an app that lets users turn photographs of anyone—or anything remotely resembling a face—into uncanny AI-powered videos of them lip syncing famous songs.”

ENERGY
GM Unveils Plans for Lithium-Metal Batteries That Could Boost EV Range
Steve Dent | Engadget
“GM has released more details about its next-generation Ultium batteries, including plans for lithium-metal (Li-metal) technology to boost performance and energy density. The automaker announced that it has signed an agreement to work with SolidEnergy Systems (SES), an MIT spinoff developing prototype Li-metal batteries with nearly double the capacity of current lithium-ion cells.”

TECHNOLOGY
Xi’s Gambit: China Plans for a World Without American Technology
Paul Mozur and Steven Lee Myers | The New York Times
“China is freeing up tens of billions of dollars for its tech industry to borrow. It is cataloging the sectors where the United States or others could cut off access to crucial technologies. And when its leaders released their most important economic plans last week, they laid out their ambitions to become an innovation superpower beholden to none.”

SCIENCE
Imaginary Numbers May Be Essential for Describing Reality
Charlie Wood | Wired
“…physicists may have just shown for the first time that imaginary numbers are, in a sense, real. A group of quantum theorists designed an experiment whose outcome depends on whether nature has an imaginary side. Provided that quantum mechanics is correct—an assumption few would quibble with—the team’s argument essentially guarantees that complex numbers are an unavoidable part of our description of the physical universe.”

PHILOSOPHY
What Is Life? Its Vast Diversity Defies Easy Definition
Carl Zimmer | Quanta
“‘It is commonly said,’ the scientists Frances Westall and André Brack wrote in 2018, ‘that there are as many definitions of life as there are people trying to define it.’ …As an observer of science and of scientists, I find this behavior strange. It is as if astronomers kept coming up with new ways to define stars. …With scientists adrift in an ocean of definitions, philosophers rowed out to offer lifelines.”

Image Credit: Kir Simakov / Unsplash


#437351 Human or Humanoid?

Humanoids illustrating how the gap between man and machine is shrinking almost every day.


#438809 This Week’s Awesome Tech Stories From ...

ARTIFICIAL INTELLIGENCE
Facebook’s New AI Teaches Itself to See With Less Human Help
Will Knight | Wired
“Peer inside an AI algorithm and you’ll find something constructed using data that was curated and labeled by an army of human workers. Now, Facebook has shown how some AI algorithms can learn to do useful work with far less human help. The company built an algorithm that learned to recognize objects in images with little help from labels.”

CULTURE
New AI ‘Deep Nostalgia’ Brings Old Photos, Including Very Old Ones, to Life
Kim Lyons | The Verge
“The Deep Nostalgia service, offered by online genealogy company MyHeritage, uses AI licensed from D-ID to create the effect that a still photo is moving. It’s kinda like the iOS Live Photos feature, which adds a few seconds of video to help smartphone photographers find the best shot. But Deep Nostalgia can take photos from any camera and bring them to ‘life.’”

COMPUTING
Could ‘Topological Materials’ Be a New Medium For Ultra-Fast Electronics?
Charles Q. Choi | IEEE Spectrum
“Potential future transistors that can exceed Moore’s law may rely on exotic materials called ‘topological matter’ in which electricity flows across surfaces only, with virtually no dissipation of energy. And now new findings suggest these special topological materials might one day find use in high-speed, low-power electronics and in quantum computers.”

ENERGY
A Chinese Province Could Ban Bitcoin Mining to Cut Down Energy Use
Dharna Noor | Gizmodo
“Since energy prices in Inner Mongolia are particularly low, many bitcoin miners have set up shop there specifically. The region is the third-largest mining site in China. Because the grid is heavily coal-powered, however, that’s led to skyrocketing emissions, putting it in conflict with President Xi Jinping’s promise last September to have China reach peak carbon emissions by 2030 at the latest and achieve carbon neutrality before 2060.”

VIRTUAL REALITY
Mesh Is Microsoft’s Vision for Sending Your Hologram Back to the Office
Sam Rutherford | Gizmodo
“With Mesh, Microsoft is hoping to create a virtual environment capable of sharing data, 3D models, avatars, and more—basically, the company wants to upgrade the traditional remote-working experience with the power of AR and VR. In the future, Microsoft is planning for something it’s calling ‘holoportation,’ which will allow Mesh devices to create photorealistic digital avatars of your body that can appear in virtual spaces anywhere in the world—assuming you’ve been invited, of course.”

SPACE
Rocket Lab Could Be SpaceX’s Biggest Rival
Neel V. Patel | MIT Technology Review
“At 40 meters tall and able to carry 20 times the weight that Electron can, [the new] Neutron [rocket] is being touted by Rocket Lab as its entry into markets for large satellite and mega-constellation launches, as well as future robotics missions to the moon and Mars. Even more tantalizing, Rocket Lab says Neutron will be designed for human spaceflight as well.”

SCIENCE
Can Alien Smog Lead Us to Extraterrestrial Civilizations?
Meghan Herbst | Wired
“Kopparapu is at the forefront of an emerging field in astronomy that is aiming to identify technosignatures, or technological markers we can search for in the cosmos. No longer conceptually limited to radio signals, astronomers are looking for ways we could identify planets or other spacefaring objects by looking for things like atmospheric gases, lasers, and even hypothetical sun-encircling structures called Dyson spheres.”

DIGITAL CURRENCIES
China Charges Ahead With a National Digital Currency
Nathaniel Popper and Cao Li | The New York Times
“China has charged ahead with a bold effort to remake the way that government-backed money works, rolling out its own digital currency with different qualities than cash or digital deposits. The country’s central bank, which began testing eCNY last year in four cities, recently expanded those trials to bigger cities such as Beijing and Shanghai, according to government presentations.”

Image Credit: Leon Seibert / Unsplash


#437299 Human-Robot Communication

Stefanie Tellex, an assistant professor in the Computer Science Department at Brown University, explains how robots will soon seamlessly use natural language to communicate with humans.


#438807 Visible Touch: How Cameras Can Help ...

The dawn of the robot revolution is already here, and it is not the dystopian nightmare we imagined. Instead, it comes in the form of social robots: Autonomous robots in homes and schools, offices and public spaces, able to interact with humans and other robots in a socially acceptable, human-perceptible way to resolve tasks related to core human needs.

To design social robots that “understand” humans, robotics scientists are delving into the psychology of human communication. Researchers from Cornell University posit that embedding the sense of touch in social robots could teach them to detect physical interactions and gestures. They describe a way of doing so by relying not on touch but on vision.

A USB camera inside the robot captures shadows of hand gestures on the robot’s surface and classifies them with machine-learning software. They call this method ShadowSense, which they define as a modality between vision and touch, bringing “the high resolution and low cost of vision-sensing to the close-up sensory experience of touch.”

Touch-sensing in social or interactive robots is usually achieved with force sensors or capacitive sensors, says study co-author Guy Hoffman of the Sibley School of Mechanical and Aerospace Engineering at Cornell University. The drawback to that approach is that, even to achieve coarse spatial resolution, many sensors are needed in a small area.

However, working with non-rigid, inflatable robots, Hoffman and his co-researchers installed a consumer-grade USB camera to which they attached a fisheye lens for a wider field of view.

“Given that the robot is already hollow, and has a soft and translucent skin, we could do touch interaction by looking at the shadows created by people touching the robot,” says Hoffman. They used deep neural networks to interpret the shadows. “And we were able to do it with very high accuracy,” he says. The robot was able to interpret six different gestures, including one- or two-handed touch, pointing, hugging and punching, with an accuracy of 87.5 to 96 percent, depending on the lighting.
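
To make that pipeline concrete, here is a minimal sketch, in Python with OpenCV and PyTorch, of how a single frame from an internal USB camera could be classified into a handful of shadow-gesture classes. The gesture labels, network choice, preprocessing, and camera index are illustrative assumptions for demonstration, not the ShadowSense authors’ actual code.

```python
# Illustrative sketch: classify one camera frame into one of six
# shadow-gesture classes with a convolutional network.
# Labels, model, and preprocessing are assumptions, not the authors' code.
import cv2
import torch
import torchvision.models as models
import torchvision.transforms as T

# Hypothetical gesture labels, loosely following the six gestures reported.
GESTURES = ["touch_one_hand", "touch_two_hands", "pointing",
            "hugging", "punching", "no_contact"]

# Pretrained backbone (torchvision >= 0.13) with a new 6-class head.
# In practice the retrained weights would be loaded here via load_state_dict.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = torch.nn.Linear(model.fc.in_features, len(GESTURES))
model.eval()

preprocess = T.Compose([
    T.ToPILImage(),
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def classify_frame(frame_bgr):
    """Return the predicted gesture label for one BGR camera frame."""
    rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
    batch = preprocess(rgb).unsqueeze(0)   # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)
    return GESTURES[int(logits.argmax(dim=1))]

# Grab one frame from the internal USB camera (device index 0 assumed).
cap = cv2.VideoCapture(0)
ok, frame = cap.read()
cap.release()
if ok:
    print(classify_frame(frame))
```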

This is not the first time that computer vision has been used for tactile sensing, though the scale and application of ShadowSense is unique. “Photography has been used for touch mainly in robotic grasping,” says Hoffman. By contrast, Hoffman and collaborators wanted to develop a sense that could be “felt” across the whole of the device.

The potential applications for ShadowSense include mobile robot guidance using touch, and interactive screens on soft robots. A third concerns privacy, especially in home-based social robots. “We have another paper currently under review that looks specifically at the ability to detect gestures that are further away [from the robot’s skin],” says Hoffman. This way, users could cover the robot’s camera with a translucent material and still allow it to interpret actions and gestures from shadows. Even though the covering prevents the camera from capturing a high-resolution image of the user or their surroundings, the robot can, given the right training datasets, continue to monitor some kinds of non-tactile activity.

In its current iteration, Hoffman says, ShadowSense doesn’t do well in low-light conditions. Environmental noise, or shadows from surrounding objects, also interfere with image classification. Relying on one camera also means a single point of failure. “I think if this were to become a commercial product, we would probably [have to] work a little bit better on image detection,” says Hoffman.

As it was, the researchers used transfer learning—reusing a pre-trained deep-learning model in a new problem—for image analysis. “One of the problems with multi-layered neural networks is that you need a lot of training data to make accurate predictions,” says Hoffman. “Obviously, we don’t have millions of examples of people touching a hollow, inflatable robot. But we can use pre-trained networks trained on general images, which we have billions of, and we only retrain the last layers of the network using our own dataset.”
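
As a rough illustration of that transfer-learning recipe, the sketch below freezes a network pretrained on general images and retrains only its final layer on a small folder of shadow-gesture images. The dataset path, architecture, and hyperparameters are assumptions for demonstration, not the team’s actual setup.

```python
# Illustrative transfer-learning sketch: freeze a pretrained backbone and
# retrain only the final layer on a small shadow-gesture dataset.
# Dataset path, model, and hyperparameters are assumptions.
import torch
import torchvision.models as models
import torchvision.transforms as T
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader

NUM_GESTURES = 6

transform = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Expects one folder per gesture class, e.g. shadow_data/pointing/*.jpg
dataset = ImageFolder("shadow_data", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():      # freeze the pretrained backbone
    param.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, NUM_GESTURES)  # new head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(5):                # a few epochs can suffice when only
    for images, labels in loader:     # the small final layer is trained
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Because only the final layer’s weights are updated, the small, specialized dataset of shadow images goes much further than it would if the whole network were trained from scratch.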
