Tag Archives: time

#435593 AI at the Speed of Light

Neural networks shine at tough problems such as facial and voice recognition, but conventional electronic versions are limited in speed and hungry for power. In theory, optics could beat digital electronic computers at the matrix calculations used in neural networks. However, optical systems have been held back by their inability to perform some complex calculations that have required electronics. Now new experiments show that all-optical neural networks can tackle those problems.

The key attraction of neural networks is their massive interconnections among processors, comparable to the complex interconnections among neurons in the brain. This lets them perform many operations simultaneously, like the human brain does when looking at faces or listening to speech, making them more efficient for facial and voice recognition than traditional electronic computers that execute one instruction at a time.
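The workload behind all this is plain matrix arithmetic. As a rough sketch (a generic fully connected layer, not any specific system described in this article), a single matrix multiplication evaluates every connection in a layer simultaneously, which is exactly the operation optics could in principle perform in parallel:

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy fully connected layer: 4 inputs, 3 "neurons".
# Every input connects to every neuron, so one matrix-vector
# product evaluates all 12 connections at once.
weights = rng.normal(size=(3, 4))
bias = np.zeros(3)

def layer(x):
    # The matrix-vector product is the massively parallel step;
    # the max() is the nonlinearity (here, a ReLU).
    return np.maximum(weights @ x + bias, 0.0)

x = np.array([1.0, 0.5, -0.2, 0.3])
y = layer(x)
print(y.shape)  # one output per neuron: (3,)
```

Stacking many such layers, each a large matrix multiplication, is where electronic chips burn power and where the parallelism of optics becomes attractive.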

Today's electronic neural networks have reached eight million neurons, but their future use in artificial intelligence may be limited by their high power usage and limited parallelism in connections. Optical connections through lenses are inherently parallel. The lens in your eye simultaneously focuses light from across your field of view onto the retina in the back of your eye, where an array of light-sensing nerve cells registers it. Each cell then relays its signal to neurons in the brain that process the visual signals to show us an image.

Glass lenses process optical signals by focusing light, which performs a complex mathematical operation called a Fourier transform that preserves the information in the original scene but rearranges it completely. One use of Fourier transforms is converting time variations in signal intensity into a plot of the frequencies present in the signal. The military used this trick in the 1950s to convert raw radar return signals recorded by an aircraft in flight into a three-dimensional image of the landscape viewed by the plane. Today that conversion is done electronically, but the vacuum-tube computers of the 1950s were not up to the task.
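That frequency-unpacking role of the Fourier transform is easy to see numerically. The sketch below is a generic illustration (not the radar processing described above): it builds a signal from two tones and recovers their frequencies with a discrete Fourier transform.

```python
import numpy as np

# Sample a signal containing 5 Hz and 12 Hz tones for one second.
fs = 100  # samples per second
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

# The Fourier transform rearranges the same information:
# time variations in, frequency content out.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)

# The two strongest components sit exactly at the tone frequencies.
peaks = sorted(freqs[np.argsort(spectrum)[-2:]])
print(peaks)  # -> [5.0, 12.0]
```

A lens does the two-dimensional version of this in a single pass of light, with no arithmetic at all.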

Development of neural networks for artificial intelligence started with electronics, but those applications have been limited by slow processing and the need for extensive computing resources. Some researchers have developed hybrid neural networks, in which optics perform simple linear operations, but electronics perform more complex nonlinear calculations. Now two groups have demonstrated simple all-optical neural networks that do all processing with light.

In May, Wolfram Pernice of the Institute of Physics at the University of Münster in Germany and colleagues reported testing an all-optical “neuron” in which signals change target materials between liquid and solid states, an effect that has been used for optical data storage. They demonstrated nonlinear processing, and produced output pulses like those from organic neurons. They then produced an integrated photonic circuit that incorporated four optical neurons operating at different wavelengths, each of which connected to 15 optical synapses. The photonic circuit contained more than 140 components and could recognize simple optical patterns. The group wrote that their device is scalable, and that the technology promises “access to the high speed and high bandwidth inherent to optical systems, thus enabling the direct processing of optical telecommunication and visual data.”

Now a group at the Hong Kong University of Science and Technology reports in Optica that they have made an all-optical neural network based on a different process, electromagnetically induced transparency, in which incident light affects how atoms shift between quantum-mechanical energy levels. The process is nonlinear and can be triggered by very weak light signals, says Shengwang Du, a physics professor and coauthor of the paper.

In their demonstration, they illuminated rubidium-85 atoms cooled by lasers to about 10 microkelvin (10 millionths of a degree above absolute zero). Although the technique may seem unusually complex, Du said the system was the most accessible one in the lab that could produce the desired effects. “As a pure quantum atomic system [it] is ideal for this proof-of-principle experiment,” he says.

Next, they plan to scale up the demonstration using a hot atomic vapor cell, which is less expensive, does not require time-consuming preparation of cold atoms, and can be integrated with photonic chips. Du says the major challenges are reducing the cost of the nonlinear processing medium and increasing the scale of the all-optical neural network for more complex tasks.

“Their demonstration seems valid,” says Volker Sorger, an electrical engineer at George Washington University in Washington who was not involved in either demonstration. He says the all-optical approach is attractive because it offers very high parallelism, but the update rate is limited to about 100 hertz because of the liquid crystals used in their test, and he is not completely convinced their approach can be scaled error-free.

Posted in Human Robots

#435589 Construction Robots Learn to Excavate by ...

Pavel Savkin remembers the first time he watched a robot imitate his movements. Minutes earlier, the engineer had finished “showing” the robotic excavator its new goal by directing its movements manually. Now, running on software Savkin helped design, the robot was reproducing his movements, gesture for gesture. “It was like there was something alive in there—but I knew it was me,” he said.

Savkin is the CTO of SE4, a robotics software project that styles itself the “driver” of a fleet of robots that will eventually build human colonies in space. For now, SE4 is focused on creating software that can help developers communicate with robots, rather than on building hardware of its own.
The Tokyo-based startup showed off an industrial arm from Universal Robots that was running SE4’s proprietary software at SIGGRAPH in July. SE4’s demonstration at the Los Angeles innovation conference drew the company’s largest audience yet. The robot, nicknamed Squeezie, stacked real blocks as directed by SE4 research engineer Nathan Quinn, who wore a VR headset and used handheld controls to “show” Squeezie what to do.

As Quinn manipulated blocks in a virtual 3D space, the software learned a set of ordered instructions to be carried out in the real world. That order is essential for remote operations, says Quinn. To build remotely, developers need a way to communicate instructions to robotic builders on location. In the age of digital construction and industrial robotics, giving a computer a blueprint for what to build is a well-explored art. But operating on a distant object—especially under conditions that humans haven’t experienced themselves—presents challenges that only real-time communication with operators can solve.

The problem is that, in an unpredictable setting, even simple tasks require not only instruction from an operator, but constant feedback from the changing environment. Five years ago, the Swedish fiber network provider umea.net (part of the private Umeå Energy utility) took advantage of the virtual reality boom to promote its high-speed connections with the help of a viral video titled “Living with Lag: An Oculus Rift Experiment.” The video is still circulated in VR and gaming circles.

In the experiment, volunteers donned headgear that replaced their real-time biological senses of sight and sound with camera and audio feeds of their surroundings—both set at a 3-second delay. Thus equipped, volunteers attempt to complete everyday tasks like playing ping-pong, dancing, cooking, and walking on a beach, with decidedly slapstick results.

At interplanetary distances, including SE4’s dream of construction projects on Mars, the limiting factor in communication speed is not an artificial delay, but the laws of physics. The shifting relative positions of Earth and Mars mean that communications between the planets—even at the speed of light—can take anywhere from 3 to 22 minutes.
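Those round numbers are easy to check. Using commonly cited closest and farthest Earth-Mars separations (roughly 54.6 million and 401 million kilometers; figures assumed here for illustration, not taken from SE4), the one-way light time lands in the 3-to-22-minute range quoted above:

```python
C_KM_S = 299_792.458  # speed of light in vacuum, km/s

# Approximate Earth-Mars separation at closest and farthest approach.
closest_km = 54.6e6
farthest_km = 401.0e6

for label, distance_km in [("closest", closest_km), ("farthest", farthest_km)]:
    minutes = distance_km / C_KM_S / 60  # one-way signal travel time
    print(f"{label}: one-way delay of about {minutes:.1f} minutes")
# closest: about 3.0 minutes; farthest: about 22.3 minutes
```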

A long-distance relationship

Imagine trying to manage a construction project from across an ocean without the benefit of intelligent workers: sending a ship to an unknown world with a construction crew and blueprints for a log cabin, and four months later receiving a letter back asking how to cut down a tree. The parallel problem in long-distance construction with robots, according to SE4 CEO Lochlainn Wilson, is that automation relies on predictability. “Every robot in an industrial setting today is expecting a controlled environment.”
Platforms for applying AR and VR systems to teach tasks to artificial intelligences, as SE4 does, are already proliferating in manufacturing, healthcare, and defense. But all of the related communications systems are bound by physics and, specifically, the speed of light.
The same fundamental limitation applies in space. “Our communications are light-based, whether they’re radio or optical,” says Laura Seward Forczyk, a planetary scientist and consultant for space startups. “If you’re going to Mars and you want to communicate with your robot or spacecraft there, you need to have it act semi- or mostly-independently so that it can operate without commands from Earth.”

Semantic control
That’s exactly what SE4 aims to do. By teaching robots to group micro-movements into logical units—like all the steps to building a tower of blocks—the Tokyo-based startup lets robots make simple relational judgments that would allow them to receive a full set of instruction modules at once and carry them out in order. This sidesteps the latency issue in real-time bilateral communications that could hamstring a project or at least make progress excruciatingly slow.
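In outline, the idea is to ship the robot a whole ordered plan rather than stream individual commands across the delay. The sketch below is our own minimal illustration of that pattern; the class and step names are hypothetical and do not reflect SE4's actual software.

```python
from dataclasses import dataclass, field

@dataclass
class InstructionModule:
    """A logical unit grouping ordered micro-movements,
    e.g. all the steps to stack one block."""
    name: str
    steps: list = field(default_factory=list)

def execute_plan(modules, act):
    """Carry out every module in order, locally, with no
    round trip to a distant operator between steps."""
    log = []
    for module in modules:
        for step in module.steps:
            act(step)  # delegate the raw motion to the hardware
            log.append((module.name, step))
    return log

# A toy two-module plan, received in a single transmission.
plan = [
    InstructionModule("pick", ["move_to_block", "close_gripper"]),
    InstructionModule("place", ["move_to_target", "open_gripper"]),
]
executed = execute_plan(plan, act=lambda step: None)
print(len(executed))  # 4 steps, executed in order without waiting on Earth
```

The point of the pattern is that only the plan crosses the light-speed gap; the step-by-step feedback loop stays local to the robot.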
The key to the platform, says Wilson, is the team’s proprietary operating software, “Semantic Control.” Just as in linguistics and philosophy, “semantics” refers to meaning itself, and meaning is the key to a robot’s ability to make even the smallest decisions on its own. “A robot can scan its environment and give [raw data] to us, but it can’t necessarily identify the objects around it and what they mean,” says Wilson.

That’s where human intelligence comes in. As part of the demonstration phase, the human operator of an SE4-controlled machine “annotates” each object in the robot’s vicinity with meaning. By labeling objects in the VR space with useful information—like which objects are building material and which are rocks—the operator helps the robot make sense of its real 3D environment before the building begins.

Giving robots the tools to deal with a changing environment is an important step toward allowing the AI to be truly independent, but it’s only an initial step. “We’re not letting it do absolutely everything,” said Quinn. “Our robot is good at moving an object from point A to point B, but it doesn’t know the overall plan.” Wilson adds that delegating environmental awareness and raw mechanical power to separate agents is the optimal relationship for a mixed human-robot construction team; it “lets humans do what they’re good at, while robots do what they do best.”

This story was updated on 4 September 2019.

Posted in Human Robots

#435583 Soft Self-Healing Materials for Robots ...

If there’s one thing we know about robots, it’s that they break. They break, like, literally all the time. The software breaks. The hardware breaks. The bits that you think could never, ever, ever possibly break end up breaking just when you need them not to break the most, and then you have to try to explain what happened to your advisor who’s been standing there watching your robot fail and then stay up all night fixing the thing that seriously was not supposed to break.

While most of this is just a fundamental characteristic of robots that can’t be helped, the European Commission is funding a project called SHERO (Self HEaling soft RObotics) to try and solve at least some of those physical robot-breaking problems through the use of structural materials that can autonomously heal themselves over and over again.

SHERO is a three-year, €3 million collaboration between Vrije Universiteit Brussel, University of Cambridge, École Supérieure de Physique et de Chimie Industrielles de la ville de Paris (ESPCI-Paris), and Swiss Federal Laboratories for Materials Science and Technology (Empa). As the name SHERO suggests, the goal of the project is to develop soft materials that can completely recover from the kinds of damage that robots are likely to suffer in day-to-day operations, as well as the occasional more extreme accident.

Most materials, especially soft materials, are fixable somehow, whether it’s with super glue or duct tape. But fixing things involves a human first identifying when they’re broken, and then performing a potentially skill-, labor-, time-, and money-intensive task. SHERO’s soft materials will, eventually, make this entire process autonomous, allowing robots to self-identify damage and initiate healing on their own.

Photos: SHERO Project

The damaged robot finger [top] can operate normally after healing itself.

How the self-healing material works
What these self-healing materials can do is really pretty amazing. The researchers are actually developing two different types—the first one heals itself when there’s an application of heat, either internally or externally, which gives some control over when and how the healing process starts. For example, if the robot is handling stuff that’s dirty, you’d want to get it cleaned up before healing it so that dirt doesn’t become embedded in the material. This could mean that the robot either takes itself to a heating station, or it could activate some kind of embedded heating mechanism to be more self-sufficient.

The second kind of self-healing material is autonomous, in that it will heal itself at room temperature without any additional input, and is probably more suitable for relatively minor scrapes and cracks. Here are some numbers about how well the healing works:

Autonomous self-healing polymers do not require heat. They can heal damage at room temperature. Developing soft robotic systems from autonomous self-healing polymers excludes the need of additional heating devices… The healing however takes some time. The healing efficiency after 3 days, 7 days and 14 days is respectively 62 percent, 91 percent and 97 percent.

This material was used to develop a healable soft pneumatic hand. Relevant large cuts can be healed entirely without the need of external heat stimulus. Depending on the size of the damage and even more on the location of damage, the healing takes only seconds or up to a week. Damage on locations on the actuator that are subjected to very small stresses during actuation was healed instantaneously. Larger damages, like cutting the actuator completely in half, took 7 days to heal. But even this severe damage could be healed completely without the need of any external stimulus.
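Those efficiency figures happen to line up with a simple first-order recovery curve, efficiency(t) = 1 - exp(-t/tau). This is our own back-of-envelope model, not one the researchers propose; fitting the single time constant to the 3-day point reproduces the later measurements reasonably well:

```python
import math

# Reported healing efficiencies for the room-temperature polymer.
reported = {3: 0.62, 7: 0.91, 14: 0.97}

# Fit one time constant to the 3-day measurement:
# 0.62 = 1 - exp(-3 / tau)  =>  tau = -3 / ln(0.38)  (about 3.1 days)
tau = -3 / math.log(1 - reported[3])

for days, measured in reported.items():
    predicted = 1 - math.exp(-days / tau)
    print(f"day {days:2d}: measured {measured:.0%}, model {predicted:.0%}")
```

The model predicts roughly 90 percent at day 7 and 99 percent at day 14, close to the 91 and 97 percent reported.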

Applications of self-healing robots
Both of these materials can be mixed together, and their mechanical properties can be customized so that the structure that they’re a part of can be tuned to move in different ways. The researchers also plan on introducing flexible conductive sensors into the material, which will help sense damage as well as providing position feedback for control systems. A lot of development will happen over the next few years, and for more details, we spoke with Bram Vanderborght at Vrije Universiteit in Brussels.

IEEE Spectrum: How easy or difficult or expensive is it to produce these materials? Will they add significant cost to robotic grippers?

Bram Vanderborght: They are definitely more expensive materials, but it’s also a matter of size of production. At the moment, we’ve made a few kilograms of the material (enough to make several demonstrators), and the price already dropped significantly from when we ordered 100 grams of the material in the first phase of the project. So probably the cost of the gripper will be higher [than a regular gripper], but you won’t need to replace the gripper as often as other grippers that need to be replaced due to wear, so it can be an advantage.

Moreover due to the method of 3D printing the material, the surface is smoother and airtight (so no post-processing is required to make it airtight). Also, the smooth surface is better to avoid contamination for food handling, for example.

In commercial or industrial applications, gradual fatigue seems to be a more common issue than more abrupt trauma like cuts. How well does the self-healing work to improve durability over long periods of time?

We did not test for gradual fatigue over very long times. But both macroscopic and microscopic damage can be healed. So hopefully it can provide an answer here as well.

Image: SHERO Project

After developing a self-healing robot gripper, the researchers plan to use similar materials to build parts that can be used as the skeleton of robots, allowing them to repair themselves on a regular basis.

How much does the self-healing capability restrict the material properties? What are the limits for softness or hardness or smoothness or other characteristics of the material?

Typically the mechanical properties of networked polymers are much better than thermoplastics. Our material is a networked polymer but in which the crosslinks are reversible. We can change quite a lot of parameters in the design of the materials. So we can develop very stiff (fracture strain at 1.24 percent) and very elastic materials (fracture strain at 450 percent). The big advantage that our material has is we can mix it to have intermediate properties. Moreover, at the interface of the materials with different mechanical properties, we have the same chemical bonds, so the interface is perfect. With other materials, you may need to glue them together, which creates local stresses and a weak spot.

When the material heals itself, is it less structurally sound in that spot? Can it heal damage that happens to the same spot over and over again?

In theory we can heal it an infinite amount of times. When the wound is not perfectly aligned, of course in that spot it will become weaker. Also too high temperatures lead to irreversible bonds, and impurities lead to weak spots.

Besides grippers and skins, what other potential robotics applications would this technology be useful for?

Most self-healing materials available now are used for coatings. What we are developing are structural components, so the mechanical properties of the material need to be good for such applications. So maybe part of the skeleton of the robot can be developed with such materials to make it lighter, since it can be designed for regular repair. And for exceptional loads, it breaks and can be repaired like our human body.

[ SHERO Project ]

Posted in Human Robots

#435541 This Giant AI Chip Is the Size of an ...

People say size doesn’t matter, but when it comes to AI the makers of the largest computer chip ever beg to differ. There are plenty of question marks about the gargantuan processor, but its unconventional design could herald an innovative new era in silicon design.

Computer chips specialized to run deep learning algorithms are a booming area of research as hardware limitations begin to slow progress, and both established players and startups are vying to build the successor to the GPU, the specialized graphics chip that has become the workhorse of the AI industry.

On Monday Californian startup Cerebras came out of stealth mode to unveil an AI-focused processor that turns conventional wisdom on its head. For decades chip makers have been focused on making their products ever-smaller, but the Wafer Scale Engine (WSE) is the size of an iPad and features 1.2 trillion transistors, 400,000 cores, and 18 gigabytes of on-chip memory.

The Cerebras Wafer-Scale Engine (WSE) is the largest chip ever built. It measures 46,225 square millimeters and includes 1.2 trillion transistors. Optimized for artificial intelligence compute, the WSE is shown here for comparison alongside the largest graphics processing unit. Image Credit: Used with permission from Cerebras Systems.
There is a method to the madness, though. Currently, getting enough cores to run really large-scale deep learning applications means connecting banks of GPUs together. But shuffling data between these chips is a major drain on speed and energy efficiency because the wires connecting them are relatively slow.

Building all 400,000 cores into the same chip should get round that bottleneck, but there are reasons it’s not been done before, and Cerebras has had to come up with some clever hacks to get around those obstacles.

Regular computer chips are manufactured using a process called photolithography to etch transistors onto the surface of a wafer of silicon. The wafers are inches across, so multiple chips are built onto them at once and then split up afterwards. But at 8.5 inches across, the WSE uses the entire wafer for a single chip.
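The headline numbers hang together arithmetically. Taking the figures quoted in this article at face value (46,225 square millimeters of silicon, 400,000 cores, 18 gigabytes of on-chip memory), the die is a 215-millimeter square, matching the 8.5-inch description, with about 45 kilobytes of memory per core:

```python
import math

# Figures as quoted for the Wafer Scale Engine.
area_mm2 = 46_225
cores = 400_000
memory_bytes = 18e9  # 18 GB of on-chip memory

side_mm = math.sqrt(area_mm2)    # side length of a square die
side_in = side_mm / 25.4         # millimeters to inches
mem_per_core_kb = memory_bytes / cores / 1e3

print(f"die side: {side_mm:.0f} mm ({side_in:.1f} in)")
print(f"memory per core: {mem_per_core_kb:.0f} KB")
# die side: 215 mm (8.5 in); memory per core: 45 KB
```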

The problem is that while for standard chip-making processes any imperfections in manufacturing will at most lead to a few processors out of several hundred having to be ditched, for Cerebras it would mean scrapping the entire wafer. To get around this the company built in redundant circuits so that even if there are a few defects, the chip can route around them.

The other big issue with a giant chip is the enormous amount of heat the processors can give off—so the company has had to design a proprietary water-cooling system. That, along with the fact that no one makes connections and packaging for giant chips, means the WSE won’t be sold as a stand-alone component, but as part of a pre-packaged server incorporating the cooling technology.

There are no details on costs or performance so far, but some customers have already been testing prototypes, and according to Cerebras results have been promising. CEO and co-founder Andrew Feldman told Fortune that early tests show they are reducing training time from months to minutes.

We’ll have to wait until the first systems ship to customers in September to see if those claims stand up. But Feldman told ZDNet that the design of their chip should help spur greater innovation in the way engineers design neural networks. Many cornerstones of this process—for instance, tackling data in batches rather than individual data points—are guided more by the hardware limitations of GPUs than by machine learning theory, but their chip will do away with many of those obstacles.

Whether that turns out to be the case or not, the WSE might be the first indication of an innovative new era in silicon design. When Google announced its AI-focused Tensor Processing Unit in 2016, it was a wake-up call for chipmakers that we need some out-of-the-box thinking to square the slowing of Moore’s Law with skyrocketing demand for computing power.

It’s not just tech giants’ AI server farms driving innovation. At the other end of the spectrum, the desire to embed intelligence in everyday objects and mobile devices is pushing demand for AI chips that can run on tiny amounts of power and squeeze into the smallest form factors.

These trends have spawned renewed interest in everything from brain-inspired neuromorphic chips to optical processors, but the WSE also shows that there might be mileage in simply taking a sideways look at some of the other design decisions chipmakers have made in the past rather than just pumping ever more transistors onto a chip.

This gigantic chip might be the first exhibit in a weird and wonderful new menagerie of exotic, AI-inspired silicon.

Image Credit: Used with permission from Cerebras Systems.

Posted in Human Robots

#435528 The Time for AI Is Now. Here’s Why

You hear a lot these days about the sheer transformative power of AI.

There’s pure intelligence: DeepMind’s algorithms readily beat humans at Go and StarCraft, and DeepStack triumphs over humans at no-limit hold’em poker. Often, these silicon brains generate gameplay strategies that don’t resemble anything from a human mind.

There’s astonishing speed: algorithms routinely surpass radiologists in diagnosing breast cancer, eye disease, and other ailments visible from medical imaging, essentially collapsing decades of expert training down to a few months.

Although AI’s silent touch is mainly felt today in the technological, financial, and health sectors, its impact across industries is rapidly spreading. At the Singularity University Global Summit in San Francisco this week, Neil Jacobstein, Chair of AI and Robotics, painted a picture of a better AI-powered future for humanity that is already here.

Thanks to cloud-based cognitive platforms, sophisticated AI tools like deep learning are no longer relegated to academic labs. For startups looking to tackle humanity’s grand challenges, the tools to efficiently integrate AI into their missions are readily available. The progress of AI is massively accelerating—to the point you need help from AI to track its progress, joked Jacobstein.

Now is the time to consider how AI can impact your industry, and in the process, begin to envision a beneficial relationship with our machine coworkers. As Jacobstein stressed in his talk, the future of a brain-machine mindmeld is a collaborative intelligence that augments our own. “AI is reinventing the way we invent,” he said.

AI’s Rapid Revolution
Machine learning and other AI-based methods may seem academic and abstruse. But Jacobstein pointed out that there are already plenty of real-world AI application frameworks.

Their secret? Rather than coding from scratch, smaller companies—with big visions—are tapping into cloud-based solutions such as Google’s TensorFlow, Microsoft’s Azure, or Amazon’s AWS to kick off their AI journey. These platforms act as all-in-one solutions that not only clean and organize data, but also contain built-in security and drag-and-drop coding that allow anyone to experiment with complicated machine learning algorithms.

Google Cloud’s Anthos, for example, lets anyone migrate data from other servers—IBM Watson or AWS, for example—so users can leverage different computing platforms and algorithms to transform data into insights and solutions.

Rather than coding from scratch, it’s already possible to hop onto a platform and play around with it, said Jacobstein. That’s key: this democratization of AI is how anyone can begin exploring solutions to problems we didn’t even know we had, or those long thought improbable.

The acceleration is only continuing. Much of AI’s mind-bending pace is thanks to a massive infusion of funding. Microsoft recently injected $1 billion into OpenAI, the Elon Musk venture that engineers socially responsible artificial general intelligence (AGI).

The other revolution is in hardware, and Google, IBM, and NVIDIA—among others—are racing to manufacture computing chips tailored to machine learning.

Democratizing AI is like the birth of the printing press. Mechanical printing allowed anyone to become an author; today, an iPhone lets anyone film a movie masterpiece.

However, this diffusion of AI into the fabric of our lives means tech explorers need to bring skepticism to their AI solutions, giving them a dose of empathy, nuance, and humanity.

A Path Towards Ethical AI
The democratization of AI is a double-edged sword: as more people wield the technology’s power in real-world applications, problems embedded in deep learning threaten to disrupt those very judgment calls.

Much of the press on the dangers of AI focuses on superintelligence—AI that’s more adept at learning than humans—taking over the world, said Jacobstein. But the near-term threat, and a far more insidious one, is humans misusing the technology.

Deepfakes, for example, allow AI rookies to paste one person’s head on a different body or put words into a person’s mouth. As the panel said, it pays to think of AI as a cybersecurity problem: one with currently shaky accountability, daunting complexity, and persistent failures of diversity and bias.

Take bias. Thanks to progress in natural language processing, Google Translate works nearly perfectly today, so much so that many consider the translation problem solved. Not true, the panel said. One famous example is how the algorithm translates gender-neutral terms like “doctor” into “he” and “nurse” into “she.”

These biases reflect our own, and it’s not just a data problem. To truly engineer objective AI systems, ones stripped of our society’s biases, we need to ask who is developing these systems, and consult those who will be impacted by the products. In addition to gender, racial bias is also rampant. For example, one recent report found that a supposedly objective crime-predicting system was trained on falsified data, resulting in outputs that further perpetuate corrupt police practices. Another study from Google just this month found that their hate speech detector more often labeled innocuous tweets from African-Americans as “obscene” compared to tweets from people of other ethnicities.

We often think of building AI as purely an engineering job, the panelists agreed. But similar to gene drives, germ-line genome editing, and other transformative—but dangerous—tools, AI needs to grow under the consultation of policymakers and other stakeholders. It pays to start young: educating newer generations on AI biases will mold malleable minds early, alerting them to the problem of bias and potentially mitigating risks.

As panelist Tess Posner from AI4ALL said, AI is rocket fuel for ambition. If young minds set out using the tools of AI to tackle their chosen problems, while fully aware of its inherent weaknesses, we can begin to build an AI-embedded future that is widely accessible and inclusive.

The bottom line: people who will be impacted by AI need to be in the room at the conception of an AI solution. People will be displaced by the new technology, and ethical AI has to consider how to mitigate human suffering during the transition. Just because AI looks like “magic fairy dust doesn’t mean that you’re home free,” the panelists said. You, the sentient human, bear the burden of being responsible for how you decide to approach the technology.

The time for AI is now. Let’s make it ethical.

Image Credit: GrAI / Shutterstock.com

Posted in Human Robots