
#431828 This Self-Driving AI Is Learning to ...

I don’t have to open the doors of AImotive’s white 2015 Prius to see that it’s not your average car. This particular Prius has been christened El Capitan, the name written below the rear doors, and two small cameras are mounted on top of the car. Bundles of wire snake out from them, as well as from the two additional cameras on the car’s hood and trunk.
Inside is where things really get interesting, though. The trunk holds a computer the size of a microwave, and a large monitor covers the passenger glove compartment and dashboard. The center console has three switches labeled “Allowed,” “Error,” and “Active.”
Budapest-based AImotive is working to provide scalable self-driving technology alongside big players like Waymo and Uber in the autonomous vehicle world. On a highway test ride with CEO Laszlo Kishonti near the company’s office in Mountain View, California, I got a glimpse of just how complex that world is.
Camera-Based Feedback System
AImotive’s approach to autonomous driving is a little different from that of some of the best-known systems. For starters, they’re using cameras, not lidar, as primary sensors. “The traffic system is visual and the cost of cameras is low,” Kishonti said. “A lidar can recognize when there are people near the car, but a camera can differentiate between, say, an elderly person and a child. Lidar’s resolution isn’t high enough to recognize the subtle differences of urban driving.”
Image Credit: AImotive
The company’s aiDrive software feeds data from the camera sensors into its algorithms for hierarchical decision-making, grouped under four concurrent activities: recognition, location, motion, and control.
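To make that division of labor concrete, here is a minimal Python sketch of how a camera-first stack organized around those four activities might fit together. Every class, function, and value below is an illustrative assumption, not AImotive’s actual API.

```python
from dataclasses import dataclass

# Hypothetical sketch of a hierarchical driving pipeline built around the
# four activities named in the article. Names and outputs are invented.

@dataclass
class Detection:
    label: str        # e.g. "pedestrian", "lane_marking", "vehicle"
    position: tuple   # (x, y) in the vehicle's reference frame
    confidence: float

def recognition(camera_frames):
    """Detect and classify objects in the raw camera images."""
    return [Detection("vehicle", (12.0, -1.5), 0.97)]  # placeholder output

def location(detections, map_data):
    """Place the vehicle and the detected objects on a map."""
    return {"ego_lane": 2, "objects": detections}

def motion(scene):
    """Predict how surrounding objects will move and plan a trajectory."""
    return {"target_speed": 29.0, "target_lane": scene["ego_lane"]}

def control(plan):
    """Convert the planned trajectory into steering/throttle commands."""
    return {"steering_rad": 0.0, "throttle": 0.3}

# The four stages run concurrently in a real system; shown sequentially here.
def drive_step(camera_frames, map_data):
    detections = recognition(camera_frames)
    scene = location(detections, map_data)
    plan = motion(scene)
    return control(plan)
```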
Kishonti pointed out that lidar has already gotten more cost-efficient, and will only continue to do so.
“Ten years ago, lidar was best because there wasn’t enough processing power to do all the calculations by AI. But the cost of running AI is decreasing,” he said. “In our approach, computer vision and AI processing are key, and for safety, we’ll have fallback sensors like radar or lidar.”
aiDrive currently runs on Nvidia chips, which Kishonti noted were originally designed for graphics, and are not terribly efficient given how power-hungry they are. “We’re planning to substitute lower-cost, lower-energy chips in the next six months,” he said.
Testing in Virtual Reality
Waymo recently announced its fleet has now driven four million miles autonomously. That’s a lot of miles, and hard to compete with. But AImotive isn’t trying to compete, at least not by logging more real-life test miles. Instead, the company is doing 90 percent of its testing in virtual reality. “This is what truly differentiates us from competitors,” Kishonti said.
He outlined the three main benefits of VR testing: it can simulate scenarios too dangerous for the real world (such as hitting something), too costly (not every company has Waymo’s funds to run hundreds of cars on real roads), or too time-consuming (like waiting for rain, snow, or other weather conditions to occur naturally and repeatedly).
“Real-world traffic testing is very skewed towards the boring miles,” he said. “What we want to do is test all the cases that are hard to solve.”
On a screen that looked not unlike multiple games of Mario Kart, he showed me the simulator. Cartoon cars cruised down winding streets, outfitted with all the real-world surroundings: people, trees, signs, other cars. As I watched, a furry kangaroo suddenly hopped across one screen. “Volvo had an issue in Australia,” Kishonti explained. “A kangaroo’s movement is different from that of other animals, since it hops instead of running.” Talk about cases that are hard to solve.
AImotive is currently testing around 1,000 simulated scenarios every night, with a steadily rising curve of successful tests. These scenarios are broken down into features, and the car’s behavior around those features is fed into a neural network. As the algorithms learn more features, the level of complexity the vehicles can handle goes up.
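As a rough sketch of what such a nightly regression run might look like in Python, where the scenario set, simulator hook, and pass rates are all invented for illustration:

```python
import random

# Hedged sketch of a nightly simulation regression: scenarios are executed
# in a virtual world, broken into labeled features, and tracked over time.
SCENARIOS = [
    {"name": "truck_cut_in", "features": ["merge", "braking"]},
    {"name": "kangaroo_crossing", "features": ["animal", "hopping_motion"]},
    {"name": "bridge_shadow", "features": ["low_light", "lane_tracking"]},
]

def run_in_simulator(scenario):
    # Stand-in for a real VR run; returns whether the virtual car
    # completed the scenario without a safety violation.
    return random.random() > 0.2

def nightly_run(scenarios, repetitions=300):
    results = {}
    for scenario in scenarios:
        passes = sum(run_in_simulator(scenario) for _ in range(repetitions))
        results[scenario["name"]] = passes / repetitions
    return results

if __name__ == "__main__":
    for name, rate in nightly_run(SCENARIOS).items():
        print(f"{name}: {rate:.1%} success")
```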
On the Road
After Kishonti and his colleagues filled me in on the details of their product, it was time to test it out. A safety driver sat in the driver’s seat, a computer operator in the passenger seat, and Kishonti and I in back. The driver maintained full control of the car until we merged onto the highway. Then he flicked the “Allowed” switch, his copilot pressed the “Active” switch, and he took his hands off the wheel.
What happened next, you ask?
A few things. El Capitan was going exactly the speed limit—65 miles per hour—which meant all the other cars were passing us. When a car merged in front of us or cut us off, El Cap braked accordingly (if a little abruptly). The monitor displayed the feed from each of the car’s cameras, plus multiple data fields and a simulation where a blue line marked the center of the lane, measured by the cameras tracking the lane markings on either side.
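The geometry behind that blue line is simple in principle: given the lateral positions of both lane markings, the lane center is just their midpoint. Here is a toy version with made-up distances; a real system fits full curves and filters the estimate over time.

```python
# Minimal sketch of camera-based lane centering: the target path is the
# midpoint of the two detected lane markings. Distances are illustrative.

def lane_center_offset(left_marking_m, right_marking_m):
    """Lateral offset (meters) from the car to the lane center.

    left_marking_m / right_marking_m: signed lateral distances from the
    camera to each detected marking (left negative, right positive).
    """
    return (left_marking_m + right_marking_m) / 2.0

# Left line 1.9 m to our left, right line 1.5 m to our right: the lane
# center sits 0.2 m to our left, so the car should nudge left.
print(round(lane_center_offset(-1.9, 1.5), 3))  # -0.2
```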
I noticed El Cap wobbling out of our lane a bit, but it wasn’t until two things happened in a row that I felt a little nervous: first we went under a bridge, then a truck pulled up next to us, both bridge and truck casting a complete shadow over our car. At that point El Cap lost it, and we swerved haphazardly to the right, narrowly missing the truck’s rear wheels. The safety driver grabbed the steering wheel and took back control of the car.
What happened, Kishonti explained, was that the shadows made it hard for the car’s cameras to see the lane markings. This was a new scenario the algorithm hadn’t previously encountered. If we’d only gone under a bridge or only been next to the truck for a second, El Cap may not have had so much trouble, but the two events happening in a row really threw the car for a loop—almost literally.
“This is a new scenario we’ll add to our testing,” Kishonti said. He added that another way for the algorithm to handle this type of scenario, rather than basing its speed and positioning on the lane markings, is to mimic nearby cars. “The human eye would see that other cars are still moving at the same speed, even if it can’t see details of the road,” he said.
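In code, that fallback might amount to a confidence check along these lines. The threshold, inputs, and behavior below are purely illustrative assumptions, not AImotive’s logic:

```python
# Hedged sketch of the fallback Kishonti describes: when lane-marking
# confidence drops (shadows, glare), hold position relative to nearby
# traffic instead of the unreliable markings.

LANE_CONFIDENCE_FLOOR = 0.6   # invented threshold

def lateral_target(lane_confidence, lane_center_offset, lead_car_offset):
    """Pick the steering reference for this control cycle."""
    if lane_confidence >= LANE_CONFIDENCE_FLOOR:
        return lane_center_offset       # trust the camera's lane estimate
    if lead_car_offset is not None:
        return lead_car_offset          # mimic surrounding traffic
    return 0.0                          # hold heading and slow down

print(lateral_target(0.3, -0.2, 0.05))  # in shadow: follow the lead car
```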
After another brief—and thankfully uneventful—hands-off cruise down the highway, the safety driver took over, exited the highway, and drove us back to the office.
Driving into the Future
I climbed out of the car feeling amazed not only that self-driving cars are possible, but that driving is possible at all. I squint when driving into a tunnel, swerve to avoid hitting a stray squirrel, and brake gradually at stop signs—all without consciously thinking to do so. On top of learning to steer, brake, and accelerate, self-driving software has to incorporate our brains’ and bodies’ unconscious (but crucial) reactions, like our pupils dilating to let in more light so we can see in a tunnel.
Despite all the progress of machine learning, artificial intelligence, and computing power, I have a wholly renewed appreciation for the thing that’s been in charge of driving up till now: the human brain.
Kishonti seemed to feel similarly. “I don’t think autonomous vehicles in the near future will be better than the best drivers,” he said. “But they’ll be better than the average driver. What we want to achieve is safe, good-quality driving for everyone, with scalability.”
AImotive is currently working with American tech firms and with car and truck manufacturers in Europe, China, and Japan.
Image Credit: Alex Oakenman / Shutterstock.com


#431817 This Week’s Awesome Stories From ...

BITCOIN
Bitcoin Is a Delusion That Could Conquer the World
Derek Thompson | The Atlantic
“What seems most certain is that the future of money will test our conventional definitions—of currencies, of bubbles, and of initial offerings. What’s happening this month with bitcoin feels like an unsustainable paroxysm. But it’s foolish to try to develop rational models for when such a market will correct itself. Prices, like currencies, are collective illusions.”
SPACE
This Engineer Is Building a DIY Mars Habitat in His Backyard
Daniel Oberhaus | Motherboard
“For over a year, Raymond and his wife have been running a fully operational, self-sustaining ‘Mars habitat’ in their backyard. They’ve personally sunk around $200,000 into the project and anticipate spending several thousand more before they’re finished. The habitat is the subject of a popular YouTube channel maintained by Raymond, where he essentially LARPs the 2015 Matt Damon film The Martian for an audience of over 20,000 loyal followers.”
INTERNET
The FCC Just Voted to Repeal Its Net Neutrality Rules, in a Sweeping Act of Deregulation
Brian Fung | The Washington Post
“The 3-2 vote, which was along party lines, enabled the FCC’s Republican chairman, Ajit Pai, to follow through on his promise to repeal the government’s 2015 net neutrality rules, which required Internet providers to treat all websites, large and small, equally.”
GENDER EQUALITY
Sexism’s National Reckoning and the Tech Women Who Blazed the Trail
Tekla S. Perry | IEEE Spectrum
“Cassidy and other women in tech who spoke during the one-day event stressed that the watershed came not because women finally broke the silence about sexual harassment, whatever Time’s editors may believe. The change came because the women were finally listened to and the bad actors faced repercussions.”
FUTURE
These Technologies Will Shape the Future, According to One of Silicon Valley’s Top VC Firms
Daniel Terdiman | Fast Company
“The question, then, is what are the technologies that are going to drive the future. At Andreessen Horowitz, a picture of that future, at least the next 10 years or so, is coming into focus. During a recent firm summit, Evans laid out his vision for the most significant tech opportunities of the next decade. On the surface, the four areas he identifies–autonomy, mixed-reality, cryptocurrencies, and artificial intelligence–aren’t entirely surprises.”
Image Credit: Solfer / Shutterstock.com


#431790 FT 300 force torque sensor

Robotiq Updates FT 300 Sensitivity For High Precision Tasks With Universal Robots
Force Torque Sensor feeds data to Universal Robots force mode
Quebec City, Canada, November 13, 2017 – Robotiq launches a 10 times more sensitive version of its FT 300 Force Torque Sensor. With Plug + Play integration on all Universal Robots, the FT 300 performs highly repeatable precision force control tasks such as finishing, product testing, assembly and precise part insertion.
This force torque sensor comes with updated free URCap software able to feed data to the Universal Robots Force Mode. “This new feature allows the user to perform precise force insertion assembly and many finishing applications where force control with high sensitivity is required,” explains Robotiq CTO Jean-Philippe Jobin*.
The URCap also includes a new calibration routine. “We’ve integrated a step-by-step procedure that guides the user through the process, which takes less than 2 minutes,” adds Jobin. “A new dashboard also provides real-time force and moment readings on all 6 axes. Moreover, pre-built programming functions are now embedded in the URCap for intuitive programming.”
See some of the FT 300’s new capabilities in the following demo videos:
#1 How to calibrate with the FT 300 URCap Dashboard
#2 Linear search demo
#3 Path recording demo
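For a sense of what a force-guided linear search like the one in demo #2 involves, here is a rough Python sketch. The read_wrench and move_z helpers are hypothetical stand-ins for the sensor and robot drivers; in practice the FT 300 feeds Universal Robots’ Force Mode through the URCap rather than a hand-rolled loop like this.

```python
import time

# Illustrative linear search: creep along one axis until the force sensor
# registers contact, then stop. Thresholds and helpers are assumptions.

CONTACT_FORCE_N = 5.0   # illustrative contact threshold
STEP_M = 0.0005         # 0.5 mm per step

def read_wrench():
    """Return (Fx, Fy, Fz, Tx, Ty, Tz) from the force torque sensor."""
    raise NotImplementedError  # provided by the sensor driver

def move_z(delta_m):
    """Jog the tool along Z by delta_m meters."""
    raise NotImplementedError  # provided by the robot controller

def linear_search(max_travel_m=0.05):
    traveled = 0.0
    while traveled < max_travel_m:
        fz = read_wrench()[2]
        if abs(fz) >= CONTACT_FORCE_N:
            return traveled          # contact found
        move_z(-STEP_M)
        traveled += STEP_M
        time.sleep(0.008)            # ~125 Hz loop
    return None                      # no contact within travel limit
```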
Visit the FT 300 webpage or get a quote here
Get the FT 300 specs here
Get more info in the FAQ
Get free Skills to accelerate robot programming of force control tasks.
Get free robot cell deployment resources on leanrobotics.org
* Available with Universal Robots CB3.1 controller only
About Robotiq
Robotiq’s Lean Robotics methodology and products enable manufacturers to deploy productive robot cells across their factory. They leverage the Lean Robotics methodology for faster time to production and increased productivity from their robots. Production engineers standardize on Robotiq’s Plug + Play components for their ease of programming, built-in integration, and adaptability to many processes. They rely on the Flow software suite to accelerate robot projects and optimize robot performance once in production.
Robotiq is the humans behind the robots: an employee-owned business with a passionate team and an international partner network.
Media contact
David Maltais, Communications and Public Relations Coordinator
d.maltais@robotiq.com
1-418-929-2513
Press Release Provided by: Robotiq.Com


#431733 Why Humanoid Robots Are Still So Hard to ...

Picture a robot. In all likelihood, you just pictured a sleek metallic or chrome-white humanoid. Yet the vast majority of robots in the world around us are nothing like this; instead, they’re specialized for specific tasks. Our cultural conception of what robots are dates back to the coining of the term robot in Karel Čapek’s Czech play, Rossum’s Universal Robots, which originally envisioned them as essentially synthetic humans.
The vision of a humanoid robot is tantalizing. There are constant efforts to create something that looks like the robots of science fiction. Recently, an old competitor in this field returned with a new model: Toyota has released what they call the T-HR3. As humanoid robots go, it appears to be pretty dexterous and have a decent grip, with a number of degrees of freedom making the movements pleasantly human.
This humanoid robot operates mostly via a remote-controlled system that allows the user to control the robot’s limbs by exerting different amounts of pressure on a framework. A VR headset completes the picture, allowing the user to control the robot’s body and teleoperate the machine. There’s no word on a price tag, but one imagines a machine with a control system this complicated won’t exactly be on your Christmas list, unless you’re a billionaire.
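Conceptually, that master-slave loop maps the operator’s frame onto the robot’s joints while reflecting the robot’s contact forces back into the frame. Toyota has not published the T-HR3’s control interface, so the sketch below is pure speculation, with invented names and gains:

```python
# Speculative sketch of a bilateral teleoperation step: the robot mirrors
# the operator's posture, and contact forces are fed back as resistance
# in the master frame so the operator "feels" what the robot touches.

MIRROR_GAIN = 1.0        # how strictly the robot copies the operator
FEEDBACK_GAIN = 0.05     # how strongly contact forces push back

def teleop_step(master_joint_angles, robot_contact_forces):
    # Command the robot to mirror the operator's joint angles...
    commands = [MIRROR_GAIN * angle for angle in master_joint_angles]
    # ...and convert measured contact forces into frame resistance.
    resistance = [FEEDBACK_GAIN * f for f in robot_contact_forces]
    return commands, resistance

cmds, feedback = teleop_step([0.1, -0.4, 0.9], [0.0, 2.5, 0.0])
```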

Toyota is no stranger to robotics. They released a series of “Partner Robots” that had a bizarre affinity for instrument-playing but weren’t often seen doing much else. Given that they didn’t seem to have much capability beyond the automaton that Leonardo da Vinci made hundreds of years ago, they promptly vanished. If, as the name suggests, the T-HR3 is a sequel to these robots, which came out shortly after ASIMO back in 2003, it’s substantially better.
Slightly less humanoid (and perhaps the more useful for it), Toyota’s HSR-2 is a robot base on wheels with a simple mechanical arm. It brings to mind earlier machines produced by dream-factory startup Willow Garage, like the PR2. The idea of an affordable robot that could simply move around on wheels and pick up and fetch objects, and didn’t harbor too-lofty ambitions to do anything else, was quite successful.
So much so that when RoboCup, the international robotics competition, looked for a platform for its robot-butler competition @Home, it chose the HSR-2 for its ability to handle objects. The HSR-2 has been deployed in trial runs to care for the elderly and injured, but has yet to be widely adopted for these purposes five years after its initial release. It’s telling that arguably the most successful multi-purpose humanoid robot isn’t really humanoid at all—and it’s curious that Toyota now seems to want to return to a more humanoid model a decade after it gave up on the project.
What’s unclear, as is often the case with humanoid robots, is what, precisely, the T-HR3 is actually for. The teleoperation gets around the complex problem of control by simply having the machine controlled remotely by a human. That human then handles all the sensory perception, decision-making, planning, and manipulation; essentially, the hardest problems in robotics.
The T-HR3 may not have a great deal of autonomy, and by sacrificing autonomy you drastically cut down the robot’s possible uses. Since it can’t act alone, you need a convincing scenario that calls for a teleoperated humanoid robot that’s less precise and vastly more expensive than simply getting a person to do the same job. Perhaps someday more autonomy will be developed, and the master maneuvering system that allows humans to control the robot will be used only in emergencies, if it gets stuck.
Toyota’s press release says it is “a platform with capabilities that can safely assist humans in a variety of settings, such as the home, medical facilities, construction sites, disaster-stricken areas and even outer space.” In reality, it’s difficult to see such a robot being affordable or even that useful in the home or in medical facilities (unless it’s substantially stronger than humans). Equally, it certainly doesn’t seem robust enough to be deployed in disaster zones or outer space. These tasks have been mooted for robots for a very long time and few have proved up to the challenge.
Toyota’s third generation humanoid robot, the T-HR3. Image Credit: Toyota
Instead, the robot seems designed to work alongside humans. Its design, standing 1.5 meters tall, weighing 75 kilograms, and possessing 32 degrees of freedom in its body, suggests it is built to closely mimic a person, rather than a robot like ATLAS, which is robust enough that you can imagine it being useful in a war zone. In this case, it might be closer to the model of the collaborative robots, or cobots, developed by Rethink Robotics, whose many safety features, including force-sensitive feedback for the user, reduce the risk of the terrible PR surrounding killer robots.
Instead, the emphasis is on graceful precision engineering: in the promo video, the robot can be seen balancing on one leg before showing off a few poised, yoga-like poses. This suggests that an application in elderly care, which Toyota has ventured into before and which was the stated aim of its simple HSR-2, might be more likely than deployment to a disaster zone.
The reason humanoid robots remain so elusive and so tempting is probably because of a simple cognitive mistake. We make two bad assumptions. First, we assume that if you build a humanoid robot, give its joints enough flexibility, throw in a little AI and perhaps some pre-programmed behaviors, then presto, it will be able to do everything humans can. When you see a robot that moves well and looks humanoid, it seems like the hardest part is done; surely this robot could do anything. The reality is never so simple.

We also make the reverse assumption: we assume that when we are finally replaced, it will be by perfect replicas of our own bodies and brains that can fulfill all the functions we used to fulfill. Perhaps, in reality, the future of robots and AI is more like its present: piecemeal, with specialized algorithms and specialized machines gradually learning to outperform humans at every conceivable task without ever looking convincingly human.
It may well be that the T-HR3 is angling towards this concept of machine learning as a platform for future research. Rather than trying to program an omni-capable robot out of the box, it will gradually learn from its human controllers. In this way, you could see the platform being used to explore the limits of what humans can teach robots to do simply by having them mimic sequences of our bodies’ motion, in the same way the exploitation of neural networks is testing the limits of training algorithms on data. No one machine will be able to perform everything a human can, but collectively, they will vastly outperform us at anything you’d want one to do.
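The simplest version of that mimicry is behavioral cloning: treat recorded teleoperation sessions as supervised pairs of sensor state and operator command, and fit a policy to them. The sketch below uses a linear least-squares “policy” on synthetic data purely to illustrate the idea; it is a generic technique, not a description of Toyota’s research plans.

```python
import numpy as np

# Toy behavioral cloning: learn to imitate logged teleop commands.

def collect_demonstrations(n=1000, state_dim=32, action_dim=32):
    # Stand-in for logged teleop sessions: joint states paired with
    # the operator's commanded next pose (synthetic data here).
    states = np.random.randn(n, state_dim)
    actions = states * 0.9 + 0.05 * np.random.randn(n, action_dim)
    return states, actions

def fit_linear_policy(states, actions):
    # Least-squares fit of actions from states: the simplest possible
    # "policy"; a real system would train a neural network instead.
    weights, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return weights

states, actions = collect_demonstrations()
policy = fit_linear_policy(states, actions)
predicted = states @ policy   # imitated commands for the seen states
```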
So when you see a new android like Toyota’s, feel free to marvel at its technical abilities and indulge in the speculation about whether it’s a PR gimmick or a revolutionary step forward along the road to human replacement. Just remember that, human-level bots or not, we’re already strolling down that road.
Image Credit: Toyota


#431690 Oxford Study Says Alien Life Would ...

The alternative universe known as science fiction has given our culture a menagerie of alien species. From overstuffed teddy bears like Ewoks and Wookiees to terrifying nightmares such as Alien and Predator, our collective imagination of what form alien life from another world may take has been irrevocably imprinted by Hollywood.
It might all be possible, or all these bug-eyed critters might turn out to be just B-movie versions of how real extraterrestrials will appear if and when they finally make the evening news.
One thing seems certain: aliens from another world will be shaped by the same evolutionary force at work here on Earth—natural selection. That’s the conclusion of a team of scientists from the University of Oxford in a study published this month in the International Journal of Astrobiology.
A complex alien that comprises a hierarchy of entities, where each lower-level collection of entities has aligned evolutionary interests.
Image Credit: Helen S. Cooper/University of Oxford
The researchers suggest that evolutionary theory—famously put forth by Charles Darwin in his seminal book On the Origin of Species 158 years ago this month—can be used to make some predictions about alien species. In particular, the team argues that extraterrestrials will undergo natural selection, because that is the only process by which organisms can adapt to their environment.
“Adaptation is what defines life,” lead author Samuel Levin tells Singularity Hub.
While it’s likely that NASA or some SpaceX-like private venture will eventually kick over a few space rocks and discover microbial life in the not-too-distant future, the sorts of aliens Levin and his colleagues are interested in describing are more complex. That’s because natural selection is at work.
A quick evolutionary theory 101 refresher: Natural selection is the process by which certain traits are favored over others in a given population. For example, take a group of brown and green beetles. It just so happens that birds prefer foraging on green beetles, allowing more brown beetles to survive and reproduce than the more delectable green ones. Eventually, if these population pressures persist, brown beetles will become the dominant type. Brown wins, green loses.
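The dynamic is easy to see in a toy simulation. The predation rates and population cap below are invented purely to illustrate how selection shifts the population:

```python
import random

# Toy model of the beetle example: green beetles are eaten more often,
# so brown becomes dominant over generations. Rates are illustrative.

def generation(population, green_predation=0.5, brown_predation=0.2):
    survivors = [b for b in population
                 if random.random() > (green_predation if b == "green"
                                       else brown_predation)]
    # Survivors each leave two offspring of their own color.
    offspring = [color for color in survivors for _ in range(2)]
    random.shuffle(offspring)          # avoid ordering bias before capping
    return offspring[:1000]            # environment supports ~1000 beetles

pop = ["green"] * 500 + ["brown"] * 500
for _ in range(20):
    pop = generation(pop)
print(pop.count("brown") / len(pop))   # approaches 1.0: brown wins
```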
And just as human beings are the result of millions of years of adaptations—eyes and thumbs, for example—aliens will similarly be constructed from parts that were once free living but through time came together to work as one organism.
“Life has so many intricate parts, so much complexity, for that to happen (randomly),” Levin explains. “It’s too complex and too many things working together in a purposeful way for that to happen by chance, as how certain molecules come about. Instead you need a process for making it, and natural selection is that process.”
Just don’t expect ET to show up as a bipedal humanoid with a large head and almond-shaped eyes, Levin says.
“They can be built from entirely different chemicals and so visually, superficially, unfamiliar,” he explains. “They will have passed through the same evolutionary history as us. To me, that’s way cooler and more exciting than them having two legs.”
Need for Data
Seth Shostak, senior astronomer at the SETI Institute and host of the organization’s Big Picture Science radio show, wrote that while the argument is interesting, it doesn’t answer the question of ET’s appearance.
Shostak argues that a more productive approach would invoke convergent evolution, where similar environments lead to similar adaptations, at least assuming a range of Earth-like conditions such as liquid oceans and thick atmospheres. For example, an alien species that evolved in a liquid environment would evolve a streamlined body to move through water.
“Happenstance and the specifics of the environment will produce variations on an alien species’ planet as it has on ours, and there’s really no way to predict these,” Shostak concludes. “Alas, an accurate cosmic bestiary cannot be written by the invocation of biological mechanisms alone. We need data. That requires more than simply thinking about alien life. We need to actually discover it.”
Search Is On
The search is on. On one hand, the task seems easy enough: There are at least 100 billion planets in the Milky Way alone, and at least 20 percent of those are likely to be capable of producing a biosphere. Even if the evolution of life is exceedingly rare—take a conservative estimate of 0.001 percent, or 200,000 planets, as proposed by the Oxford paper—you have to like the odds.
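The arithmetic behind that 200,000 figure checks out:

```python
# Back-of-envelope check of the numbers quoted above.
milky_way_planets = 100e9      # at least 100 billion planets
biosphere_capable = 0.20       # at least 20 percent of them
life_evolves = 0.00001         # the paper's conservative 0.001 percent

print(round(milky_way_planets * biosphere_capable * life_evolves))  # 200000
```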
Of course, it’s not that easy by a billion light years.
Planet hunters can’t even agree on what signatures of life they should focus on. The idea is that where there’s smoke there’s fire. In the case of an alien world home to biological life, astrobiologists are searching for the presence of “biosignature gases,” vapors that could only be produced by alien life.
As Quanta Magazine reported, scientists do this by measuring a planet’s atmosphere against starlight. Gases in the atmosphere absorb certain frequencies of starlight, offering a clue as to what is brewing around a particular planet.
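In simplified form, the method compares the star’s spectrum during a planetary transit with its spectrum when the planet is out of the way; wavelengths where extra light goes missing point to absorbing gases. The synthetic spectra below are stand-ins purely to illustrate the comparison:

```python
import numpy as np

# Toy transit spectroscopy: flag wavelengths where the in-transit
# spectrum dips below what the opaque planet disk alone would block.

wavelengths = np.linspace(0.5, 5.0, 500)          # microns
out_of_transit = np.ones_like(wavelengths)        # normalized starlight
in_transit = out_of_transit - 0.010               # opaque planet disk
# Add an O2-like absorption dip near 0.76 microns (synthetic):
in_transit -= 0.002 * np.exp(-((wavelengths - 0.76) / 0.02) ** 2)

absorption = out_of_transit - in_transit
candidates = wavelengths[absorption > 0.011]      # deeper than the bare disk
print(candidates.min(), candidates.max())         # band around 0.76 microns
```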
The presence of oxygen would seem to be a biological no-brainer, but there are instances where a planet can produce a false positive, meaning non-biological processes are responsible for the exoplanet’s oxygen. Scientists like Sara Seager, an astrophysicist at MIT, have argued there are plenty of examples of other types of gases produced by organisms right here on Earth that could also produce the smoking gun, er, planet.

Life as We Know It
Indeed, the existence of Earth-bound extremophiles—organisms that defy conventional wisdom about where life can exist, such as in the vacuum of space—offers another clue as to what kind of aliens we might eventually meet.
Lynn Rothschild, an astrobiologist and synthetic biologist in the Earth Science Division at NASA’s Ames Research Center in Silicon Valley, takes extremophiles as a baseline and then supersizes them through synthetic biology.
For example, say a bacterium is capable of surviving at 120 degrees Celsius. Rothschild’s lab might tweak the organism’s DNA to see if it could metabolize at 150 degrees. The idea, as she explains, is to expand the envelope for life without ever getting into a rocket ship.

While researchers may not always agree on the “where” and “how” and “what” of the search for extraterrestrial life, most do share one belief: Alien life must be out there.
“It would shock me if there weren’t [extraterrestrials],” Levin says. “There are few things that would shock me more than to find out there aren’t any aliens…If I had to bet on it, I would bet on the side of there being lots and lots of aliens out there.”
Image Credit: NASA
