
#433506 MIT’s New Robot Taught Itself to Pick ...

Back in 2016, somewhere in a Google-owned warehouse, more than a dozen robotic arms spent hours on end quietly grasping objects of various shapes and sizes, teaching themselves how to pick up and hold the items appropriately—mimicking the way a baby gradually learns to use its hands.

Now, scientists from MIT have made a new breakthrough in machine learning: their new system can not only teach itself to see and identify objects, but also understand how best to manipulate them.

This means that, armed with the new machine learning routine referred to as “dense object nets (DON),” the robot would be capable of picking up an object that it’s never seen before, or in an unfamiliar orientation, without resorting to trial and error—exactly as a human would.

The deceptively simple ability to dexterously manipulate objects with our hands is a huge part of why humans are the dominant species on the planet. We take it for granted. Hardware innovations like the Shadow Dexterous Hand have enabled robots to softly grip and manipulate delicate objects for many years, but the software required to control these precision-engineered machines in a range of circumstances has proved harder to develop.

This was not for want of trying. The Amazon Robotics Challenge offers millions of dollars in prizes (and potentially far more in contracts, as their $775m acquisition of Kiva Systems shows) for the best dexterous robot able to pick and package items in their warehouses. The lucrative dream of a fully-automated delivery system is missing this crucial ability.

Meanwhile, the RoboCup@Home challenge—an offshoot of the popular RoboCup tournament for soccer-playing robots—aims to make everyone’s dream of having a robot butler a reality. The competition involves teams drilling their robots through simple household tasks that require social interaction or object manipulation, like helping to carry the shopping, sorting items onto a shelf, or guiding tourists around a museum.

Yet all of these endeavors have proved difficult; the tasks often have to be simplified to enable the robot to complete them at all. New or unexpected elements, such as those encountered in real life, more often than not throw the system entirely. Programming the robot’s every move in explicit detail is not a scalable solution: this can work in the highly-controlled world of the assembly line, but not in everyday life.

Computer vision is improving all the time. Neural networks, including those you train every time you prove that you’re not a robot with CAPTCHA, are getting better at sorting objects into categories, and identifying them based on sparse or incomplete data, such as when they are occluded, or in different lighting.

But many of these systems require enormous amounts of input data, which is impractical, slow to generate, and often needs to be laboriously categorized by humans. There are entirely new jobs that require people to label, categorize, and sift large bodies of data ready for supervised machine learning. This can make machine learning undemocratic. If you’re Google, you can make thousands of unwitting volunteers label your images for you with CAPTCHA. If you’re IBM, you can hire people to manually label that data. If you’re an individual or startup trying something new, however, you will struggle to access the vast troves of labeled data available to the bigger players.

This is why new systems that can potentially train themselves over time, or that allow robots to deal with situations they’ve never seen before without mountains of labeled data, are a holy grail in artificial intelligence. The work done by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) is part of a new wave of “self-supervised” machine learning systems—little of the data used was labeled by humans.

The robot first inspects the new object from multiple angles, building up a 3D picture of the object with its own coordinate system. This then allows the robotic arm to identify a particular feature on the object—such as a handle, or the tongue of a shoe—from various angles, based on its relative distance to other grid points.

This is the real innovation: a new means of representing objects to be grasped as mapped-out 3D entities, with grid points and subsections of their own. Rather than using a computer vision algorithm to identify a door handle and then activating a door-handle-grasping subroutine, the DON system builds these spatial maps for every object before classifying or manipulating it, enabling it to deal with a greater range of objects than other approaches can.
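To make that concrete: a trained DON maps every pixel of an image to a descriptor vector, so once the descriptor of an interesting point (a mug handle, the tongue of a shoe) has been stored from one view, finding the same point in a new view reduces to a nearest-neighbor lookup over the new image’s descriptors. Here is a minimal sketch of that lookup step in Python/NumPy; the shapes, names, and toy data are invented for illustration and are not taken from the MIT code.

```python
import numpy as np

def best_match(descriptor_map: np.ndarray, reference: np.ndarray) -> tuple:
    """Return the (row, col) pixel whose descriptor is closest to `reference`.

    descriptor_map: (H, W, D) array of dense descriptors, one per pixel,
                    as produced by a dense-object-net-style network.
    reference: (D,) descriptor of a point of interest (e.g. a mug handle)
               saved from an earlier view of the object.
    """
    # Squared L2 distance from every pixel's descriptor to the reference.
    dists = np.sum((descriptor_map - reference) ** 2, axis=-1)  # (H, W)
    return np.unravel_index(np.argmin(dists), dists.shape)

# Toy usage: a random 480x640 "descriptor image" with 16-dim descriptors.
H, W, D = 480, 640, 16
desc = np.random.rand(H, W, D).astype(np.float32)
handle = desc[200, 300]            # pretend this pixel is the handle
print(best_match(desc, handle))    # -> (200, 300)
```

Because the lookup keys on the descriptor rather than on pixel coordinates, the same stored vector finds the handle again even after the mug has been moved or rotated, provided the network has learned view-invariant descriptors.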

“Many approaches to manipulation can’t identify specific parts of an object across the many orientations that object may encounter,” said PhD student Lucas Manuelli, who wrote a new paper about the system with lead author and fellow student Pete Florence, alongside MIT professor Russ Tedrake. “For example, existing algorithms would be unable to grasp a mug by its handle, especially if the mug could be in multiple orientations, like upright, or on its side.”

Class-specific descriptors, which can be applied to object features, allow the robot arm to identify a mug, find the handle, and pick the mug up appropriately. Object-specific descriptors allow the robot arm to select a particular mug from a group of similar items. I’m already dreaming of a robot butler reliably picking my favorite mug when it serves me coffee in the morning.

Google’s robot arm-y was an attempt to develop a general grasping algorithm: one that could identify, categorize, and appropriately grip as many items as possible. This requires a great deal of training time and data, which is why Google parallelized their project by having 14 robot arms feed data into a single neural network brain: even then, the algorithm may fail with highly specific tasks. Specialist grasping algorithms might require less training if they’re limited to specific objects, but then your software is useless for general tasks.

As the roboticists noted, their system, with its ability to identify parts of an object rather than just a single object, is better suited to specific tasks, such as “grasp the racquet by the handle,” than Amazon Robotics Challenge robots, which identify whole objects by segmenting an image.

This work is small-scale at present. It has been tested with a few classes of objects, including shoes, hats, and mugs. Yet the use of these dense object nets as a way for robots to represent and manipulate new objects may well be another step towards the ultimate goal of generalized automation: a robot capable of performing every task a person can. If that point is reached, the question that will remain is how to cope with being obsolete.

Image Credit: Tom Buehler/CSAIL


#433474 How to Feed Global Demand for ...

“You really can’t justify tuna in Chicago as a source of sustenance.” That’s according to Dr. Sylvia Earle, a National Geographic Society Explorer who was the first female chief scientist at NOAA. She came to the Good Food Institute’s Good Food Conference to deliver a call to action around global food security, agriculture, environmental protection, and the future of consumer choice.

It seems like all options should be on the table to feed an exploding population threatened by climate change. But Dr. Earle, who is faculty at Singularity University, drew a sharp distinction between seafood for sustenance versus seafood as a choice. “There is this widespread claim that we must take large numbers of wildlife from the sea in order to have food security.”

A few minutes later, Dr. Earle directly addressed those of us in the audience. “We know the value of a dead fish,” she said. That’s market price. “But what is the value of a live fish in the ocean?”

That’s when my mind blew open. What is the value—or put another way, the cost—of using the ocean as a major source of protein for humans? How do you put a number on that? Are we talking about dollars and cents, or about something far larger?

Dr. Liz Specht of the Good Food Institute drew the audience’s attention to a strange imbalance. Currently, about half of the world’s yearly seafood supply comes from aquaculture; the other half is wild caught. It’s hard to imagine half of your meat coming directly from the forests and the plains, isn’t it? And yet half of the world’s seafood comes from direct harvesting of the oceans, by way of massive overfishing, a terrible toll from bycatch, a widespread lack of regulation and enforcement, and even human rights violations such as slavery.

The search for solutions is on, from both within the fishing industry and from external agencies such as governments and philanthropists. Could there be another way?

Makers of plant-based seafood and clean seafood think they know how to feed the global demand for seafood without harming the ocean. These companies are part of a larger movement harnessing technology to reduce our reliance on wild and domesticated animals—and all the environmental, economic, and ethical issues that come with it.

Producers of plant-based seafood (20 or so currently) are working to capture the taste, texture, and nutrition of conventional seafood without being constrained by geography or the health of a local marine population. As with plant-based meat, makers of plant-based seafood are harnessing food science and advances in chemistry, biology, and engineering to make great food. The industry’s strategy? Start with what the consumer wants, and then figure out how to achieve that great taste through technology.

So how does plant-based seafood taste? Pretty good, as it turns out. (The biggest benefit of a food-oriented conference is that your mouth is always full!)

I sampled “tuna” salad made from Good Catch Food’s fish-free tuna, which is sourced from legumes; the texture was nearly indistinguishable from that of flaked albacore tuna, and there was no lingering fishy taste to overpower my next bite. In a blind taste test, I probably wouldn’t have known that I was eating a plant-based seafood alternative. Next I reached for Ocean Hugger Food’s Ahimi, a tomato-based alternative to raw tuna. I adore Hawaiian poke, so I was pleasantly surprised when my Ahimi-based poke captured the bite of ahi tuna. It wasn’t quite as delightfully fatty as raw tuna, but with wild tuna populations struggling to recover from a 97% decline in numbers from 40 years ago, Ahimi is a giant stride in the right direction.

These plant-based alternatives aren’t the only game in town, however.

The clean meat industry, which has also been called “cultured meat” or “cellular agriculture,” isn’t seeking to lure consumers away from animal protein. Instead, cells are sampled from live animals and grown in bioreactors—meaning that no animal is slaughtered to produce real meat.

Clean seafood is poised to piggyback off platforms developed for clean meat; growing fish cells in the lab should rely on the same processes as growing meat cells. I know of four companies currently focusing on seafood (Finless Foods, Wild Type, BlueNalu, and Seafuture Sustainable Biotech), and a few more are likely to emerge from stealth mode soon.

Importantly, there’s likely not much difference between growing clean seafood from the top or the bottom of the food chain. Tuna, for example, are top predators that must grow for at least 10 years before they’re suitable as food. Each year, a tuna consumes thousands of pounds of other fish, shellfish, and plankton. That “long tail of groceries,” said Dr. Earle, “is a pretty expensive choice.” Excitingly, clean tuna would “level the trophic playing field,” as Dr. Specht pointed out.

All this is only the beginning of what might be possible.

Combining synthetic biology with clean meat and seafood means that future products could be personalized for individual taste preferences or health needs, by reprogramming the DNA of the cells in the lab. Industries such as bioremediation and biofuels likely have a lot to teach us about sourcing new ingredients and flavors from algae and marine plants. By harnessing rapid advances in automation, robotics, sensors, machine vision, and other big-data analytics, the manufacturing and supply chains for clean seafood could be remarkably safe and robust. Clean seafood would be just that: clean, without pathogens, parasites, or the plastic threatening to fill our oceans, meaning that you could enjoy it raw.

What about price? Dr. Mark Post, a pioneer in clean meat who is also faculty at Singularity University, estimated that 80% of clean-meat production costs come from the expensive medium in which cells are grown—and some ingredients in the medium are themselves sourced from animals, which misses the point of clean meat. Plus, to grow a whole cut of food, like a fish fillet, the cells need to be coaxed into a complex 3D structure with various cell types like muscle cells and fat cells. These two technical challenges must be solved before clean meat and seafood give consumers the experience they want, at the price they want.

In this respect clean seafood has an unusual edge. Most of what we know about growing animal cells in the lab comes from the research and biomedical industries (from tissue engineering, for example)—but growing cells to replace an organ has different constraints than growing cells for food. The link between clean seafood and biomedicine is less direct, empowering innovators to throw out dogma and find novel reagents, protocols, and equipment to grow seafood that captures the tastes, textures, smells, and overall experience of dining by the ocean.

Asked to predict when we’ll be seeing clean seafood in the grocery store, Lou Cooperhouse, the CEO of BlueNalu, explained that the challenges aren’t only in the lab: marketing, sales, distribution, and communication with consumers are all critical. As Niya Gupta, the founder of Fork & Goode, said, “The question isn’t ‘can we do it’, but ‘can we sell it’?”

The good news is that the clean meat and seafood industry is highly collaborative; there are at least two dozen companies in the space, and they’re all talking to each other. “This is an ecosystem,” said Dr. Uma Valeti, the co-founder of Memphis Meats. “We’re not competing with each other.” It will likely be at least a decade before science, business, and regulation enable clean meat and seafood to routinely appear on restaurant menus, let alone market shelves.

Until then, think carefully about your food choices. Meditate on Dr. Earle’s question: “What is the real cost of that piece of halibut?” Or chew on this from Dr. Ricardo San Martin, of the Sutardja Center at the University of California, Berkeley: “Food is a system of meanings, not an object.” What are you saying when you choose your food, about your priorities and your values and how you want the future to look? Do you think about animal welfare? Most ethical regulations don’t extend to marine life, and if you don’t think that ocean creatures feel pain, consider the lobster.

Seafood is largely an acquired taste, since most of us don’t live near the water. Imagine a future in which children grow up loving the taste of delicious seafood but without hurting a living animal, the ocean, or the global environment.

Do more than imagine. As Dr. Earle urged us, “Convince the public at large that this is a really cool idea.”

| Widely available | Medium availability | Emerging |
| --- | --- | --- |
| Gardein | Ahimi (Ocean Hugger) | New Wave Foods |
| Sophie’s Kitchen | Cedar Lake | To-funa Fish |
| Quorn | SoFine Foods | Seamore |
| Vegetarian Plus | Akua | Good Catch |
| Heritage | Hungry Planet | Odontella |
| Loma Linda | Heritage Health Food | Terramino Foods |
| The Vegetarian Butcher | May Wah | |
| VBites | | |

Table based on Figure 5 of the report “An Ocean of Opportunity: Plant-based and clean seafood for sustainable oceans without sacrifice,” from The Good Food Institute.

Image Credit: Tono Balaguer / Shutterstock.com


#433288 The New AI Tech Turning Heads in Video ...

A new technique using artificial intelligence to manipulate video content gives new meaning to the expression “talking head.”

An international team of researchers showcased the latest advancement in synthesizing facial expressions—including mouth, eyes, eyebrows, and even head position—in video at this month’s 2018 SIGGRAPH, a conference on innovations in computer graphics, animation, virtual reality, and other forms of digital wizardry.

The project is called Deep Video Portraits. It relies on a type of AI called generative adversarial networks (GANs) to modify a “target” actor based on the facial and head movement of a “source” actor. As the name implies, GANs pit two opposing neural networks against one another to create a realistic talking head, right down to the sneer or raised eyebrow.

In this case, the adversaries are actually working together: One neural network generates content, while the other rejects or approves each effort. The back-and-forth interplay between the two eventually produces a realistic result that can easily fool the human eye, including reproducing a static scene behind the head as it bobs back and forth.
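The adversarial recipe itself is compact enough to sketch in code. The snippet below is a generic GAN training step in PyTorch, purely to illustrate the generate-and-judge loop described above; it is not the Deep Video Portraits architecture, whose networks and losses are far more elaborate, and all sizes and names here are invented for the example.

```python
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

# Generator: turns random noise into an image-shaped vector.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)
# Discriminator: scores an input as real (high logit) or fake (low logit).
D = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real: torch.Tensor) -> None:
    n = real.size(0)
    ones, zeros = torch.ones(n, 1), torch.zeros(n, 1)

    # Discriminator step: approve the real batch, reject a generated one.
    fake = G(torch.randn(n, latent_dim)).detach()  # no gradient into G here
    loss_d = bce(D(real), ones) + bce(D(fake), zeros)
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator approve a fake batch.
    loss_g = bce(D(G(torch.randn(n, latent_dim))), ones)
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()

train_step(torch.randn(32, img_dim))  # one step on a toy "real" batch
```

Each call to `train_step` nudges the discriminator toward telling real from fake and the generator toward producing fakes the discriminator approves; repeated over many batches, that tug-of-war is what drives the generated output toward realism.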

The researchers say the technique can be used by the film industry for a variety of purposes, from editing actors’ facial expressions to match dubbed voices to repositioning an actor’s head in post-production. According to the researchers, AI can not only produce highly realistic results, but also much quicker ones than the manual processes used today. You can read their full paper here.

“Deep Video Portraits shows how such a visual effect could be created with less effort in the future,” said Christian Richardt, from the University of Bath’s motion capture research center CAMERA, in a press release. “With our approach, even the positioning of an actor’s head and their facial expression could be easily edited to change camera angles or subtly change the framing of a scene to tell the story better.”

AI Tech Different From So-Called “Deepfakes”
The work is far from the first to employ AI to manipulate video and audio. At last year’s SIGGRAPH conference, researchers from the University of Washington showcased work that used algorithms to insert audio recordings of a person into a separate video of that same person speaking in a different context.

In this case, they “faked” a video using a speech from former President Barack Obama addressing a mass shooting incident during his presidency. The AI-doctored video injects the audio into an unrelated video of the president while also blending the facial and mouth movements, creating a pretty credible job of lip synching.

A previous paper by many of the same scientists on the Deep Video Portraits project detailed how they were first able to manipulate a video in real time of a talking head (in this case, actor and former California governor Arnold Schwarzenegger). The Face2Face system pulled off this bit of digital trickery using a depth-sensing camera that tracked the facial expressions of an Asian female source actor.

A less sophisticated method of swapping faces using a machine learning software dubbed FakeApp emerged earlier this year. Predictably, the tech—requiring numerous photos of the source actor in order to train the neural network—was used for more juvenile pursuits, such as injecting a person’s face onto a porn star.

The application gave rise to the term “deepfakes,” which is now used somewhat ubiquitously to describe all such instances of AI-manipulated video—much to the chagrin of some of the researchers involved in more legitimate uses.

Fighting AI-Created Video Forgeries
However, the researchers are keenly aware that their work—intended for benign uses such as in the film industry or even to correct gaze and head positions for more natural interactions through video teleconferencing—could be used for nefarious purposes. Fake news is the most obvious concern.

“With ever-improving video editing technology, we must also start being more critical about the video content we consume every day, especially if there is no proof of origin,” said Michael Zollhöfer, a visiting assistant professor at Stanford University and member of the Deep Video Portraits team, in the press release.

Toward that end, the research team is training the same adversarial neural networks to spot video forgeries. They also strongly recommend that developers watermark videos that are edited with AI or other tools, and clearly denote which parts of the scene were modified.

To catch less ethical users, the US Department of Defense, through the Defense Advanced Research Projects Agency (DARPA), is supporting a program called Media Forensics. This latest DARPA challenge enlists researchers to develop technologies to automatically assess the integrity of an image or video, as part of an end-to-end media forensics platform.

Matthew Turek, the DARPA official in charge of the program, told MIT Technology Review that so far it has “discovered subtle cues in current GAN-manipulated images and videos that allow us to detect the presence of alterations.” In one reported example, researchers targeted the eyes, which rarely blink in “deepfakes” like those created by FakeApp because the AI is trained on still pictures. That method would seem less effective at spotting the sort of forgeries created by Deep Video Portraits, which appears to flawlessly match the entire facial and head movements between the source and target actors.
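To give a flavor of how a blink-based check might work: one published approach, the eye aspect ratio (EAR) of Soukupová and Čech, measures how open an eye is from six landmark points and counts dips below a threshold as blinks. The sketch below assumes a separate facial-landmark detector has already supplied those six points per frame; the threshold and landmark layout follow that paper’s convention, and none of this is DARPA’s actual tooling.

```python
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """Eye aspect ratio from six landmarks (Soukupova & Cech, 2016).

    eye: (6, 2) array of (x, y) points ordered around the eye:
         indices 0 and 3 are the horizontal corners; (1, 5) and (2, 4)
         are vertical pairs on the upper and lower lids.
    """
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return (v1 + v2) / (2.0 * h)

def blinks_per_minute(ear_series, threshold=0.2, fps=30.0) -> float:
    """Count open-to-closed transitions of the EAR signal over a clip."""
    closed = np.asarray(ear_series) < threshold
    blinks = np.sum(closed[1:] & ~closed[:-1])  # rising edges of "closed"
    minutes = len(ear_series) / (fps * 60.0)
    return blinks / minutes

# A clip whose subject's EAR never dips (no blinks at all) would score far
# below the human norm of roughly 15-20 blinks per minute, flagging a fake.
```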

“We believe that the field of digital forensics should and will receive a lot more attention in the future to develop approaches that can automatically prove the authenticity of a video clip,” Zollhöfer said. “This will lead to ever-better approaches that can spot such modifications even if we humans might not be able to spot them with our own eyes.”

Image Credit: Tancha / Shutterstock.com


#433280 This Week’s Awesome Stories From ...

TECHNOLOGY
Google Turns 20: How an Internet Search Engine Reshaped the World
Editorial Staff | The Verge
“No technology company is arguably more responsible for shaping the modern internet, and modern life, than Google. The company that started as a novel search engine now manages eight products with more than 1 billion users each.”

FUTURE
Why Technology Favors Tyranny
Yuval Noah Harari | The Atlantic
“It is undoubtable…that the technological revolutions now gathering momentum will in the next few decades confront humankind with the hardest trials it has yet encountered.”

ARTIFICIAL INTELLIGENCE
AI Can Recognize Images, But Can It Understand This Headline?
Gregory Barber | Wired
“In 2012, artificial intelligence researchers revealed a big improvement in computers’ ability to recognize images by feeding a neural network millions of labeled images from a database called ImageNet. …In other arenas of AI research, like understanding language, similar models have proved elusive. But recent research from fast.ai, OpenAI, and the Allen Institute for AI suggests a potential breakthrough, with more robust language models that can help researchers tackle a range of unsolved problems.”

COMPUTING
Quantum Computing Is Almost Ready for Business, Startup Says
Sean Captain | Fast Company
“Rigetti is now inviting customers to apply for free access to these systems, toward the goal of developing a real-world application that achieves quantum advantage. As an extra incentive, the first to make it wins a $1 million prize.”

SCIENCE FICTION
How Realistic Are Sci-Fi Spaceships? An Expert Ranks Your Favorites
Chris Taylor | Mashable
“For all the villainous Borg’s supposed efficiency, their vast six-sided planet-threatening vessel is a massive waste of space. The Death Star may cost an estimated $852 quadrillion in steel alone, but that figure would be far higher if it employed any other shape. That’s no moon—it’s a highly efficient use of surface area.”

Image Credit: Tithi Luadthong / Shutterstock.com


#432882 Why the Discovery of Room-Temperature ...

Superconductors are among the most bizarre and exciting materials yet discovered. Counterintuitive quantum-mechanical effects mean that, below a critical temperature, they have zero electrical resistance. This property alone is more than enough to spark the imagination.

A current that could flow forever without losing any energy means transmission of power with virtually no losses in the cables. When renewable energy sources start to dominate the grid and high-voltage transmission across continents becomes important to overcome intermittency, lossless cables will result in substantial savings.
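To see how big a deal that is, recall the standard resistive-loss formula for a line carrying current I through total resistance R:

```latex
P_{\text{loss}} = I^{2} R, \qquad I = \frac{P_{\text{delivered}}}{V}
```

With purely illustrative numbers: delivering 1 GW at 800 kV means a current of 1,250 A, and over a line with a total resistance of 10 Ω that gives 1,250² × 10 ≈ 15.6 MW dissipated as heat, roughly 1.6 percent of the load. Set R = 0, as in a superconductor, and the loss term vanishes identically at any current.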

What’s more, a superconducting wire carrying a current that never, ever diminishes would act as a perfect store of electrical energy. Unlike batteries, which degrade over time, if the resistance is truly zero, you could return to the superconductor in a billion years and find that same old current flowing through it. Energy could be captured and stored indefinitely!

With no resistance, a huge current could be passed through the superconducting wire and, in turn, produce magnetic fields of incredible power.

You could use them to levitate trains and produce astonishing accelerations, thereby revolutionizing the transport system. You could use them in power plants—replacing conventional methods which spin turbines in magnetic fields to generate electricity—and in quantum computers as the two-level system required for a “qubit,” in which the zeros and ones are replaced by current flowing clockwise or counterclockwise in a superconductor.
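On the qubit point specifically: in such a superconducting loop the two persistent-current directions serve as the computational basis states, and the device can occupy a superposition of both. Written schematically (a textbook form, not tied to any particular device):

```latex
|\psi\rangle = \alpha\,|\circlearrowleft\rangle + \beta\,|\circlearrowright\rangle,
\qquad |\alpha|^{2} + |\beta|^{2} = 1
```

Measuring the loop collapses it to a definite circulation direction with probabilities |α|² and |β|².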

Arthur C. Clarke famously said that any sufficiently advanced technology is indistinguishable from magic; superconductors can certainly seem like magical devices. So, why aren’t they busy remaking the world? There’s a problem—that critical temperature.

For all known materials at ordinary pressures, it’s well over a hundred degrees below freezing. Superconductors also have a critical magnetic field: beyond a certain field strength, they cease to work. There’s a tradeoff: materials with an intrinsically high critical temperature can often sustain the largest magnetic fields, but only when cooled well below that temperature.

This has meant that superconductor applications so far have been limited to situations where you can afford to cool the components of your system to close to absolute zero: in particle accelerators and experimental nuclear fusion reactors, for example.

But even as some aspects of superconductor technology become mature in limited applications, the search for higher temperature superconductors moves on. Many physicists still believe a room-temperature superconductor could exist. Such a discovery would unleash amazing new technologies.

The Quest for Room-Temperature Superconductors
After Heike Kamerlingh Onnes discovered superconductivity by accident while attempting to prove Lord Kelvin’s theory that resistance would increase with decreasing temperature, theorists scrambled to explain the new property in the hope that understanding it might allow for room-temperature superconductors to be synthesized.

They came up with the BCS theory, which explained some of the properties of superconductors. It also predicted that the dream of technologists, a room-temperature superconductor, could not exist; the maximum temperature for superconductivity according to BCS theory was just 30 K.

Then, in the 1980s, the field changed again with the discovery of unconventional, or high-temperature, superconductivity. “High temperature” is still very cold: the highest temperature for superconductivity achieved was -70°C for hydrogen sulphide at extremely high pressures. For normal pressures, -140°C is near the upper limit. Unfortunately, high-temperature superconductors—which require relatively cheap liquid nitrogen, rather than liquid helium, to cool—are mostly brittle ceramics, which are expensive to form into wires and have limited application.

Given the limitations of high-temperature superconductors, researchers continue to believe there’s a better option awaiting discovery—an incredible new material that checks boxes like superconductivity approaching room temperature, affordability, and practicality.

Tantalizing Clues
Without a detailed theoretical understanding of how this phenomenon occurs—although incremental progress happens all the time—scientists can occasionally feel like they’re taking educated guesses at materials that might be likely candidates. It’s a little like trying to guess a phone number, but with the periodic table of elements instead of digits.

Yet the prospect remains, in the words of one researcher, tantalizing. A Nobel Prize and potentially changing the world of energy and electricity is not bad for a day’s work.

Some research focuses on cuprates, complex crystals that contain layers of copper and oxygen atoms. Cuprates doped with various other elements, such exotic compounds as mercury barium calcium copper oxide, are among the best superconductors known today.

Research also continues into some anomalous but unexplained reports that graphite soaked in water can act as a room-temperature superconductor, but there’s no indication that this could be used for technological applications yet.

In early 2017, as part of the ongoing effort to explore the most extreme and exotic forms of matter we can create on Earth, researchers managed to compress hydrogen into a metal.

The pressure required to do this was greater than that at the core of the Earth and thousands of times higher than that at the bottom of the ocean. Some researchers in the field of condensed-matter physics doubt that metallic hydrogen was produced at all.

It’s considered possible that metallic hydrogen could be a room-temperature superconductor. But getting the samples to stick around long enough for detailed testing has proved tricky, with the diamonds containing the metallic hydrogen suffering a “catastrophic failure” under the pressure.

Superconductivity—or behavior that strongly resembles it—was also observed in yttrium barium copper oxide (YBCO) at room temperature in 2014. The only catch was that this electron transport lasted for a tiny fraction of a second and required the material to be bombarded with pulsed lasers.

Not very practical, you might say, but tantalizing nonetheless.

Other new materials display enticing properties too. The 2016 Nobel Prize in Physics was awarded for the theoretical work that characterizes topological insulators—materials that exhibit similarly strange quantum behaviors. They can be considered perfect insulators for the bulk of the material but extraordinarily good conductors in a thin layer on the surface.

Microsoft is betting on topological insulators as the key component in their attempt at a quantum computer. They’ve also been considered potentially important components in miniaturized circuitry.

A number of remarkable electronic transport properties have also been observed in new, “2D” structures—like graphene, these are materials synthesized to be as thick as a single atom or molecule. And research continues into how we can utilize the superconductors we’ve already discovered; for example, some teams are trying to develop insulating material that prevents superconducting HVDC cable from overheating.

Room-temperature superconductivity remains as elusive and exciting as it has been for over a century. It is unclear whether a room-temperature superconductor can exist, but the discovery of high-temperature superconductors is a promising indicator that unconventional and highly useful quantum effects may be discovered in completely unexpected materials.

Perhaps in the future—through artificial intelligence simulations or the serendipitous discoveries of a 21st century Kamerlingh Onnes—this little piece of magic could move into the realm of reality.

Image Credit: ktsdesign / Shutterstock.com
