#436094 Agility Robotics Unveils Upgraded Digit ...
Last time we saw Agility Robotics’ Digit biped, it was picking up a box from a Ford delivery van and autonomously dropping it off on a porch, all while managing not to trip over stairs, grass, or small children. As a demo, it was pretty impressive, but of course there’s an enormous gap between making a video of a robot completing one successful autonomous delivery and letting that robot out into the semi-structured world and expecting it to reliably do a good job.
Agility Robotics is aware of this, of course, and over the last six months they’ve been making substantial improvements to Digit to make it more capable and robust. A new video posted today shows what’s new with the latest version of Digit—Digit v2.
We appreciate Agility Robotics forgoing music in the video, which lets us hear exactly what Digit sounds like in operation. The most noticeable changes are in Digit’s feet, torso, and arms, and I was particularly impressed to see Digit reposition the box on the table before grasping it to make sure it could get a good grip. Otherwise, it’s hard to tell what’s new, so we asked Agility Robotics’ CEO Damion Shelton to get us up to speed.
IEEE Spectrum: Can you summarize the differences between Digit v1 and v2? We’re particularly interested in the new feet.
Damion Shelton: The feet now include a roll degree of freedom, so that Digit can resist lateral forces without needing to side step. This allows Digit v2 to balance on one foot statically, which Digit v1 and Cassie could not do. The larger foot also dramatically decreases load per unit area, for improved performance on very soft surfaces like sand.
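To make that balance claim concrete: a legged robot can balance statically only when the ground projection of its center of mass stays inside the support polygon of whatever is touching the ground. A line-contact foot encloses no area, which is why a roll degree of freedom plus a larger sole turns single-foot standing from impossible into routine. Here is a minimal sketch of that test with a rectangular foot; all dimensions are invented for illustration and this is not Agility’s code:

```python
def statically_stable(com_xy, foot_center_xy, foot_length, foot_width):
    """True if the center-of-mass ground projection falls inside the
    rectangular footprint of the stance foot (a simplified support polygon)."""
    dx = abs(com_xy[0] - foot_center_xy[0])
    dy = abs(com_xy[1] - foot_center_xy[1])
    return dx <= foot_length / 2 and dy <= foot_width / 2

# A zero-width (line-contact) foot fails for any lateral CoM offset,
# which is why robots without ankle roll must side-step instead.
print(statically_stable((0.02, 0.01), (0.0, 0.0), 0.24, 0.10))  # True
print(statically_stable((0.02, 0.01), (0.0, 0.0), 0.24, 0.0))   # False
```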
The perception stack includes four Intel RealSense cameras used for obstacle detection and pick/place, plus the lidar. In Digit v1, the perception systems were brought up incrementally over time for development purposes. In Digit v2, all perception systems are active from the beginning and tied to a dedicated computer. The perception system is used for a number of additional things beyond manipulation, which we’ll start to show in the next few weeks.
The torso changes are a bit more behind-the-scenes. All of the electronics in it are now fully custom, thermally managed, and environmentally sealed. We’ve also included power and Ethernet to a payload bay that can fit either a NUC or Jetson module (or other customer payload).
What exactly are we seeing in the video in terms of Digit’s autonomous capabilities?
At the moment this is a demonstration of shared autonomy. Picking and placing the box is fully autonomous. Balance and footstep placement are fully autonomous, but guidance and obstacle avoidance are under local teleop. It’s no longer a radio controller as in early videos; we’re not ready to reveal our current controller design but it’s a reasonably significant upgrade. This is v2 hardware, so there’s one more full version in development prior to the 2020 launch, which will expand the autonomy envelope significantly.
What are some unique features or capabilities of Digit v2 that might not be obvious from the video?
For those who’ve used Cassie robots, the power-up and power-down ergonomics are a lot more user friendly. Digit can be disassembled into carry-on-luggage-sized pieces (give or take) in under five minutes for easy transport. The battery charges in situ using a normal laptop-style charger.
I’m curious about this “stompy” sort of gait that we see in Digit and many other bipedal robots—are there significant challenges or drawbacks to implementing a more human-like (and presumably quieter) heel-toe gait?
There are no drawbacks other than increased complexity in controls and foot design. With Digit v2, the larger surface area helps with the noise, and v2 has similar or better passive-dynamic performance as compared to Cassie or Digit v1. The foot design is brand new, and new behaviors like heel-toe are an active area of development.
How close is Digit v2 to a system that you’d be comfortable operating commercially?
We’re on track for a 2020 launch for Digit v3. Changes from v2 to v3 are mostly bug-fix in nature, with a few regulatory upgrades like full battery certification. Safety is a major concern for us, and we have launch customers that will be operating Digit in a safe environment, with a phased approach to relaxing operational constraints. Digit operates almost exclusively under force control (as with cobots more generally), but at the moment we’ll err on the side of caution during operation until we have the stats to back up safety and reliability. The legged robot industry has too much potential for us to screw it up by behaving irresponsibly.
It will be a while before Digit (or any other humanoid robot) is operating fully autonomously in crowds of people, but there are so many large market opportunities (think indoor factory/warehouse environments) to address prior to that point that we expect to mature the operational safety side of things well in advance of having saturated the more robot-tolerant markets.
[ Agility Robotics ]
#435806 Boston Dynamics’ Spot Robot Dog ...
Boston Dynamics is announcing this morning that Spot, its versatile quadruped robot, is now for sale. The machine’s animal-like behavior regularly electrifies crowds at tech conferences, and like other Boston Dynamics robots, Spot is a YouTube sensation whose videos amass millions of views.
Now anyone interested in buying a Spot—or a pack of them—can go to the company’s website and submit an order form. But don’t pull out your credit card just yet. Spot may cost as much as a luxury car, and it is not really available to consumers. The initial sale, described as an “early adopter program,” is targeting businesses. Boston Dynamics wants to find customers in select industries and help them deploy Spots in real-world scenarios.
“What we’re doing is the productization of Spot,” Boston Dynamics CEO Marc Raibert tells IEEE Spectrum. “It’s really a milestone for us going from robots that work in the lab to these that are hardened for work out in the field.”
Boston Dynamics has always been a secretive company, but last month, in preparation for launching Spot (formerly SpotMini), it allowed our photographers into its headquarters in Waltham, Mass., for a special shoot. In that session, we captured Spot and also Atlas—the company’s highly dynamic humanoid—in action, walking, climbing, and jumping.
You can see Spot’s photo interactives on our Robots Guide. (The Atlas interactives will appear in coming weeks.)
Gif: Bob O’Connor/Robots.ieee.org
And if you’re in the market for a robot dog, here’s everything we know about Boston Dynamics’ plans for Spot.
Who can buy a Spot?
If you’re interested in one, you should go to Boston Dynamics’ website and take a look at the information the company requires from potential buyers. Again, the focus is on businesses. Boston Dynamics says it wants to get Spots out to initial customers that “either have a compelling use case or a development team that we believe can do something really interesting with the robot,” says VP of business development Michael Perry. “Just because of the scarcity of the robots that we have, we’re going to have to be selective about which partners we start working together with.”
What can Spot do?
As you’ve probably seen in the YouTube videos, Spot can walk, trot, avoid obstacles, climb stairs, and much more. The robot’s hardware is almost completely custom, with powerful compute boards for control and five sensor modules located on every side of Spot’s body, allowing it to survey the space around itself from any direction. The legs are driven by 12 custom motors, each with a gear reduction, giving the robot a top speed of 1.6 meters per second. The robot can operate for 90 minutes on a charge. In addition to the basic configuration, you can attach up to 14 kilograms of extra hardware to a payload interface. Among the payload packages Boston Dynamics plans to offer are a six-degrees-of-freedom arm, a version of which can be seen in some of the YouTube videos, and a ring of cameras called SpotCam that could be used to create Street View–type images inside buildings.
Image: Boston Dynamics
How do you control Spot?
Learning to drive the robot using its gaming-style controller “takes 15 seconds,” says CEO Marc Raibert. He explains that while teleoperating Spot, you may not realize that the robot is doing a lot of the work. “You don’t really see what that is like until you’re operating the joystick and you go over a box and you don’t have to do anything,” he says. “You’re practically just thinking about what you want to do and the robot takes care of everything.” The control methods have evolved significantly since the company’s first quadruped robots, machines like BigDog and LS3. “The control in those days was much more monolithic, and now we have what we call a sequential composition controller,” Raibert says, “which lets the system have control of the dynamics in a much broader variety of situations.” That means that every time one of Spot’s feet touches or leaves the ground, the robot enters a different contact state that changes its basic physical behavior, and the controller adjusts accordingly. “Our controller is designed to understand what that state is and have different controls depending upon the case,” he says.
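Raibert’s description of sequential composition maps naturally onto a dispatch pattern: one control law per discrete contact state, switched whenever a foot makes or breaks contact. The following is a toy sketch of that idea only, with invented states and behaviors; it is not Boston Dynamics’ controller:

```python
def flight_control(body_state):
    """No feet loaded: swing the legs toward the predicted footholds."""
    return "swing legs toward predicted touchdown points"

def stance_control(body_state):
    """At least one foot loaded: balance the body through the legs in contact."""
    return "regulate ground-reaction forces to keep the body level"

# One entry per contact state of the four feet (front-left, front-right,
# hind-left, hind-right); anything not listed defaults to stance control.
CONTROLLERS = {
    (False, False, False, False): flight_control,
}

def select_controller(feet_in_contact):
    return CONTROLLERS.get(tuple(feet_in_contact), stance_control)

controller = select_controller([True, False, True, False])  # trotting pair down
print(controller({"pitch": 0.0, "roll": 0.0}))
```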
How much does Spot cost?
Boston Dynamics would not give us specific details about pricing, saying only that potential customers should contact them for a quote and that there is going to be a leasing option. It’s understandable: As with any expensive and complex product, prices can vary on a case by case basis and depend on factors such as configuration, availability, level of support, and so forth. When we pressed the company for at least an approximate base price, Perry answered: “Our general guidance is that the total cost of the early adopter program lease will be less than the price of a car—but how nice a car will depend on the number of Spots leased and how long the customer will be leasing the robot.”
Can Spot do mapping and SLAM out of the box?
The robot’s perception system includes cameras and 3D sensors (there is no lidar), used to avoid obstacles and sense the terrain so it can climb stairs and walk over rubble. It’s also used to create 3D maps. According to Boston Dynamics, the first software release will offer just teleoperation. But a second release, to be available in the next few weeks, will enable more autonomous behaviors. For example, it will be able to do mapping and autonomous navigation, similar to what the company demonstrated in a video last year: you drive the robot through an environment, create a 3D point cloud of that environment, and then set waypoints within the resulting map for Spot to go out and execute as a mission. For customers that have their own autonomy stack and are interested in using it on Spot, Boston Dynamics made it “as plug and play as possible in terms of how third-party software integrates into Spot’s system,” Perry says. This is done mainly via an API.
How does Spot’s API work?
Boston Dynamics built an API so that customers can create application-level products with Spot without having to deal with low-level control processes. “Rather than going and building joint-level kinematic access to the robot,” Perry explains, “we created a high-level API and SDK that allows people who are used to Web app development or development of missions for drones to use that same scope, and they’ll be able to build applications for Spot.”
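Boston Dynamics hasn’t published the SDK’s details here, so the following is a purely hypothetical sketch of what “application-level rather than joint-level” access tends to look like in practice: the client requests behaviors, and the robot’s onboard controllers handle balance and footstep placement. Every name below is invented for illustration and is not Boston Dynamics’ API:

```python
class SpotAppClient:
    """Hypothetical application-level client. Callers request behaviors;
    they never see joints, torques, or kinematics."""

    def __init__(self, hostname: str):
        self.hostname = hostname  # e.g. the robot's address on a local network

    def stand(self) -> None:
        print("request: stand (onboard controller handles balance)")

    def go_to(self, x: float, y: float, heading: float) -> None:
        print(f"request: walk to ({x}, {y}), heading {heading} rad "
              "(onboard planner handles footsteps and obstacles)")

robot = SpotAppClient("spot.local")
robot.stand()
robot.go_to(3.0, 1.5, heading=0.0)
```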
What applications should we see first?
Boston Dynamics envisions Spot as a platform: a versatile mobile robot that companies can use to build applications based on their needs. What types of applications? The company says the best way to find out is to put Spot in the hands of as many users as possible and let them develop the applications. Some possibilities include performing remote data collection and light manipulation in construction sites; monitoring sensors and infrastructure at oil and gas sites; and carrying out dangerous missions such as bomb disposal and hazmat inspections. There are also other promising areas such as security, package delivery, and even entertainment. “We have some initial guesses about which markets could benefit most from this technology, and we’ve been engaging with customers doing proof-of-concept trials,” Perry says. “But at the end of the day, that value story is really going to be determined by people going out and exploring and pushing the limits of the robot.”
Photo: Bob O'Connor
How many Spots have been produced?
Last June, Boston Dynamics said it was planning to build about a hundred Spots by the end of the year, eventually ramping up production to a thousand units per year by the middle of this year. The company admits that it is not quite there yet. It has built close to a hundred beta units, which it has used to test and refine the final design. This version is now being mass manufactured, but the company is still “in the early tens of robots,” Perry says.
How did Boston Dynamics test Spot?
The company has tested the robots during proof-of-concept trials with customers, and at least one is already using Spot to survey construction sites. The company has also done reliability tests at its facility in Waltham, Mass. “We drive around, not quite day and night, but hundreds of miles a week, so that we can collect reliability data and find bugs,” Raibert says.
What about competitors?
In recent years, there’s been a proliferation of quadruped robots that will compete in the same space as Spot. The most prominent of these is ANYmal, from ANYbotics, a Swiss company that spun out of ETH Zurich. Other quadrupeds include Vision from Ghost Robotics, used by one of the teams in the DARPA Subterranean Challenge; and Laikago and Aliengo from Unitree Robotics, a Chinese startup. Raibert views the competition as a positive thing. “We’re excited to see all these companies out there helping validate the space,” he says. “I think we’re more in competition with finding the right need [that robots can satisfy] than we are with the other people building the robots at this point.”
Why is Boston Dynamics selling Spot now?
Boston Dynamics has long been an R&D-centric firm, with most of its early funding coming from military programs, but it says commercializing robots has always been a goal. Productizing its machines probably accelerated when the company was acquired by Google’s parent company, Alphabet, which had an ambitious (and now apparently very dead) robotics program. The commercial focus likely continued after Alphabet sold Boston Dynamics to SoftBank, whose famed CEO, Masayoshi Son, is known for his love of robots—and profits.
Which should I buy, Spot or Aibo?
Don’t laugh. We’ve gotten emails from individuals interested in purchasing a Spot for personal use after seeing our stories on the robot. Alas, Spot is not a bigger, fancier Aibo pet robot. It’s an expensive, industrial-grade machine that requires development and maintenance. If you’re, say, Jeff Bezos, you could probably convince Boston Dynamics to sell you one, but otherwise the company will prioritize businesses.
What’s next for Boston Dynamics?
On the commercial side of things, other than Spot, Boston Dynamics is interested in the logistics space. Earlier this year it announced the acquisition of Kinema Systems, a startup that had developed vision sensors and deep-learning software to enable industrial robot arms to locate and move boxes. There’s also Handle, the mobile robot on whegs (wheels + legs), that can pick up and move packages. Boston Dynamics is hiring both in Waltham, Mass., and Mountain View, Calif., where Kinema was located.
Okay, can I watch a cool video now?
During our visit to Boston Dynamics’ headquarters last month, we saw Atlas and Spot performing some cool new tricks that we unfortunately are not allowed to tell you about. We hope that, although the company is putting a lot of energy and resources into its commercial programs, Boston Dynamics will still find plenty of time to improve its robots, build new ones, and of course, keep making videos. [Update: The company has just released a new Spot video, which we’ve embedded at the top of the post.][Update 2: We should have known. Boston Dynamics sure knows how to create buzz for itself: It has just released a second video, this time of Atlas doing some of those tricks we saw during our visit and couldn’t tell you about. Enjoy!]
[ Boston Dynamics ]
#435791 To Fly Solo, Racing Drones Have a Need ...
Drone racing’s ultimate vision of quadcopters weaving nimbly through obstacle courses has attracted far less excitement and investment than self-driving cars aimed at reshaping ground transportation. But the U.S. military and defense industry are betting on autonomous drone racing as the next frontier for developing AI so that it can handle high-speed navigation within tight spaces without human intervention.
The autonomous drone challenge requires split-second decision-making with six degrees of freedom instead of a car’s mere two degrees of road freedom. One research team developing the AI necessary for controlling autonomous racing drones is the Robotics and Perception Group at the University of Zurich in Switzerland. In late May, the Swiss researchers were among nine teams revealed to be competing in the two-year AlphaPilot open innovation challenge sponsored by U.S. aerospace company Lockheed Martin. The winning team will walk away with up to $2.25 million for beating other autonomous racing drones and a professional human drone pilot in head-to-head competitions.
“I think it is important to first point out that having an autonomous drone to finish a racing track at high speeds or even beating a human pilot does not imply that we can have autonomous drones [capable of] navigating in real-world, complex, unstructured, unknown environments such as disaster zones, collapsed buildings, caves, tunnels or narrow pipes, forests, military scenarios, and so on,” says Davide Scaramuzza, a professor of robotics and perception at the University of Zurich and ETH Zurich. “However, the robust and computationally efficient state estimation algorithms, control, and planning algorithms developed for autonomous drone racing would represent a starting point.”
The nine teams that made the cut—from a pool of 424 AlphaPilot applicants—will compete in four 2019 racing events organized under the Drone Racing League’s Artificial Intelligence Robotic Racing Circuit, says Keith Lynn, program manager for AlphaPilot at Lockheed Martin. To ensure an apples-to-apples comparison of each team’s AI secret sauce, each AlphaPilot team will upload its AI code into identical, specially built drones that have the NVIDIA Xavier GPU at the core of the onboard computing hardware.
“Lockheed Martin is offering mentorship to the nine AlphaPilot teams to support their AI tech development and innovations,” says Lynn. The company “will be hosting a week-long Developers Summit at MIT in July, dedicated to workshopping and improving AlphaPilot teams’ code,” he added. He notes that each team will retain the intellectual property rights to its AI code.
The AlphaPilot challenge takes inspiration from older autonomous drone racing events hosted by academic researchers, Scaramuzza says. He credits Hyungpil Moon, a professor of robotics and mechanical engineering at Sungkyunkwan University in South Korea, for having organized the annual autonomous drone racing competition at the International Conference on Intelligent Robots and Systems since 2016.
It’s no easy task to create and train AI that can perform high-speed flight through complex environments by relying on visual navigation. One big challenge comes from how drones can accelerate hard, take sharp turns, fly sideways, zigzag, and even perform backflips. That means camera images can suddenly appear tilted or even upside down during flight. Motion blur may occur when a drone flies very close to structures at high speed and camera pixels collect light from multiple directions. Both cameras and visual software can also struggle to compensate for sudden changes between light and dark parts of an environment.
To lend AI a helping hand, Scaramuzza’s group recently published a drone racing dataset that includes realistic training data taken from a drone flown by a professional pilot in both indoor and outdoor spaces. The data, which includes complicated aerial maneuvers such as backflips, flight sequences that cover hundreds of meters, and flight speeds of up to 83 kilometers per hour, was presented at the 2019 IEEE International Conference on Robotics and Automation.
The drone racing dataset also includes data captured by the group’s special bioinspired event cameras that can detect changes in motion on a per-pixel basis within microseconds. By comparison, ordinary cameras need milliseconds (each millisecond being 1,000 microseconds) to compare motion changes in each image frame. The event cameras have already proven capable of helping drones nimbly dodge soccer balls thrown at them by the Swiss lab’s researchers.
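To make the frames-versus-events distinction concrete: an event camera outputs an asynchronous stream of per-pixel tuples (x, y, timestamp, polarity) instead of full images. Below is a minimal sketch of accumulating such events over a short window into a motion image, using synthetic data rather than anything from the Zurich group’s pipeline:

```python
import numpy as np

# Each event: (x, y, timestamp_us, polarity), where polarity is +1 for a
# per-pixel brightness increase and -1 for a decrease.
events = [(10, 12, 5, +1), (11, 12, 9, +1), (40, 3, 14, -1)]

def accumulate(events, width=64, height=48, window_us=1000):
    """Sum event polarities over the trailing time window into an image;
    edges of moving objects show up with microsecond latency."""
    img = np.zeros((height, width))
    t_end = events[-1][2]
    for x, y, t, polarity in events:
        if t_end - t <= window_us:
            img[y, x] += polarity
    return img

frame = accumulate(events)
print(frame[12, 10:12])  # [1. 1.]
```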
The Swiss group’s work on the racing drone dataset received funding in part from the U.S. Defense Advanced Research Projects Agency (DARPA), which acts as the U.S. military’s special R&D arm for more futuristic projects. Specifically, the funding came from DARPA’s Fast Lightweight Autonomy program that envisions small autonomous drones capable of flying at high speeds through cluttered environments without GPS guidance or communication with human pilots.
Such speedy drones could serve as military scouts checking out dangerous buildings or alleys. They could also someday help search-and-rescue teams find people trapped in semi-collapsed buildings or lost in the woods. Being able to fly at high speed without crashing into things also makes a drone more efficient at all sorts of tasks by making the most of limited battery life, Scaramuzza says. After all, most drone battery life gets used up by the need to hover in flight and doesn’t get drained much by flying faster.
Even if AI manages to conquer the drone racing obstacle courses, that would be only the end of the beginning of the technology’s development. What would still be required? Scaramuzza specifically singled out handling low-visibility conditions involving smoke, dust, fog, rain, snow, fire, and hail as some of the biggest challenges for vision-based algorithms and AI in complex real-life environments.
“I think we should develop and release datasets containing smoke, dust, fog, rain, fire, etc. if we want to allow using autonomous robots to complement human rescuers in saving people’s lives after an earthquake or natural disaster in the future,” Scaramuzza says.
#435765 The Four Converging Technologies Giving ...
How each of us sees the world is about to change dramatically.
For all of human history, the experience of looking at the world was roughly the same for everyone. But boundaries between the digital and physical are beginning to fade.
The world around us is gaining layer upon layer of digitized, virtually overlaid information—making it rich, meaningful, and interactive. As a result, our respective experiences of the same environment are becoming vastly different, personalized to our goals, dreams, and desires.
Welcome to Web 3.0, or the Spatial Web. In version 1.0, static documents and read-only interactions limited the internet to one-way exchanges. Web 2.0 provided quite an upgrade, introducing multimedia content, interactive web pages, and participatory social media. Yet, all this was still mediated by two-dimensional screens.
Today, we are witnessing the rise of Web 3.0, riding the convergence of high-bandwidth 5G connectivity, rapidly evolving AR eyewear, an emerging trillion-sensor economy, and powerful artificial intelligence.
As a result, we will soon be able to superimpose digital information atop any physical surrounding—freeing our eyes from the tyranny of the screen, immersing us in smart environments, and making our world endlessly dynamic.
In the third post of our five-part series on augmented reality, we will explore the convergence of AR, AI, sensors, and blockchain and dive into the implications through a key use case in manufacturing.
A Tale of Convergence
Let’s deconstruct everything beneath the sleek AR display.
It all begins with graphics processing units (GPUs)—electric circuits that perform rapid calculations to render images. (GPUs can be found in mobile phones, game consoles, and computers.)
However, because AR requires such extensive computing power, single GPUs will not suffice. Instead, blockchain can now enable distributed GPU processing power, and blockchains specifically dedicated to AR holographic processing are on the rise.
Next up, cameras and sensors will aggregate real-time data from any environment to seamlessly integrate physical and virtual worlds. Meanwhile, body-tracking sensors are critical for aligning a user’s self-rendering in AR with a virtually enhanced environment. Depth sensors then provide data for 3D spatial maps, while cameras absorb more surface-level, detailed visual input. In some cases, sensors might even collect biometric data, such as heart rate and brain activity, to incorporate health-related feedback in our everyday AR interfaces and personal recommendation engines.
The next step in the pipeline involves none other than AI. Processing enormous volumes of data instantaneously, embedded AI algorithms will power customized AR experiences in everything from artistic virtual overlays to personalized dietary annotations.
In retail, AIs will use your purchasing history, current closet inventory, and possibly even mood indicators to display digitally rendered items most suitable for your wardrobe, tailored to your measurements.
In healthcare, smart AR glasses will provide physicians with immediately accessible and maximally relevant information (parsed from the entirety of a patient’s medical records and current research) to aid in accurate diagnoses and treatments, freeing doctors to engage in the more human-centric tasks of establishing trust, educating patients and demonstrating empathy.
Image Credit: PHD Ventures.
Convergence in Manufacturing
One of the nearest-term use cases of AR is manufacturing, as large producers begin dedicating capital to enterprise AR headsets. And over the next ten years, AR will converge with AI, sensors, and blockchain to multiply manufacturer productivity and employee experience.
(1) Convergence with AI
In initial application, digital guides superimposed on production tables will vastly improve employee accuracy and speed, while minimizing error rates.
Already, the International Air Transport Association (IATA), whose airlines supply 82 percent of air travel, has implemented industrial tech company Atheer’s AR headsets in cargo management. IATA reported a 30 percent improvement in cargo handling speed and a 90 percent reduction in errors.
With similar success, Boeing brought Skylight’s smart AR glasses to the runway, where they are now used in the manufacturing of hundreds of airplanes. The aerospace giant has seen a 25 percent drop in production time and near-zero error rates.
Beyond cargo management and air travel, however, smart AR headsets will also enable on-the-job training without reducing the productivity of other workers or sacrificing hardware. Jaguar Land Rover, for instance, implemented Bosch’s Re’flekt One AR solution to gear technicians with “x-ray” vision: allowing them to visualize the insides of Range Rover Sport vehicles without removing any dashboards.
And as enterprise capabilities continue to soar, AIs will soon become the go-to experts, offering support to manufacturers in need of assembly assistance. Instant guidance and real-time feedback will dramatically reduce production downtime, boost overall output, and even help customers struggling with DIY assembly at home.
Perhaps one of the most profitable business opportunities, AR guidance through centralized AI systems will also serve to mitigate supply chain inefficiencies at extraordinary scale. Coordinating moving parts, eliminating the need for manned scanners at each checkpoint, and directing traffic within warehouses, joint AI-AR systems will vastly improve workflow while overseeing quality assurance.
After its initial implementation of AR “vision picking” in 2015, leading courier company DHL recently announced it would continue to use Google’s newest smart lens in warehouses across the world. Motivated by the initial group’s reported 15 percent jump in productivity, DHL’s decision is part of the logistics giant’s $300 million investment in new technologies.
And as direct-to-consumer e-commerce fundamentally transforms the retail sector, supply chain optimization will only grow increasingly vital. AR could very well prove the definitive step for gaining a competitive edge in delivery speeds.
As explained by Vital Enterprises CEO Ash Eldritch, “All these technologies that are coming together around artificial intelligence are going to augment the capabilities of the worker and that’s very powerful. I call it Augmented Intelligence. The idea is that you can take someone of a certain skill level and by augmenting them with artificial intelligence via augmented reality and the Internet of Things, you can elevate the skill level of that worker.”
Already, large producers like Goodyear, thyssenkrupp, and Johnson Controls are using the Microsoft HoloLens 2—priced at $3,500 per headset—for manufacturing and design purposes.
Perhaps the most heartening outcome of the AI-AR convergence is that, rather than replacing humans in manufacturing, AR is an ideal interface for human collaboration with AI. And as AI merges with human capital, prepare to see exponential improvements in productivity, professional training, and product quality.
(2) Convergence with Sensors
On the hardware front, these AI-AR systems will require a mass proliferation of sensors to detect the external environment and apply computer vision in AI decision-making.
To measure depth, for instance, some scanning depth sensors project a structured pattern of infrared light dots onto a scene, detecting and analyzing the reflected light to generate 3D maps of the environment. Stereoscopic imaging, using two lenses, is also commonly used for depth measurement. But leading devices like Microsoft’s HoloLens 2 and Intel’s RealSense 400-series cameras implement a newer method called “phased time-of-flight” (ToF).
In ToF sensing, the HoloLens 2 fires numerous lasers, each with 100 milliwatts (mW) of power, in quick bursts. The distance to nearby objects is then measured by how far the phase of the returning light has shifted relative to the emitted signal. That phase difference reveals the location of each object within the field of view, which enables accurate hand tracking and surface reconstruction.
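The arithmetic behind phase-based ToF is compact enough to show directly. Light returning from an object at distance d travels 2d, so at modulation frequency f its phase shifts by 2πf · (2d/c); inverting that gives distance from phase. The modulation frequency below is an illustrative value, not the HoloLens 2’s actual one:

```python
from math import pi

C = 299_792_458.0  # speed of light, m/s

def tof_distance(phase_shift_rad, mod_freq_hz):
    """Phase-based time of flight: phase = 2*pi*f * (2d/c),
    so d = c * phase / (4*pi*f)."""
    return C * phase_shift_rad / (4 * pi * mod_freq_hz)

# A quarter-cycle phase shift at an assumed 100 MHz modulation:
print(tof_distance(pi / 2, 100e6))  # ~0.375 m

# Phase wraps every 2*pi, so a single frequency is unambiguous only out to
# c / (2f); "phased" sensors combine frequencies to extend that range.
print(C / (2 * 100e6))  # ~1.5 m
```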
With a far lower computing power requirement, the phased ToF sensor is also more durable than a stereoscopic rig, which relies on the precise alignment of two prisms. The phased ToF sensor’s silicon base also makes it easy to mass-produce, rendering the HoloLens 2 a far better candidate for widespread consumer adoption.
To apply inertial measurement—typically used in airplanes and spacecraft—the HoloLens 2 additionally uses a built-in accelerometer, gyroscope, and magnetometer. Further equipped with four “environment understanding cameras” that track head movements, the headset also uses a 2.4MP HD photographic video camera and ambient light sensor that work in concert to enable advanced computer vision.
For natural viewing experiences, sensor-supplied gaze tracking increasingly creates depth in digital displays. Nvidia’s work on its Foveated AR Display, for instance, brings the primary foveal area into focus while peripheral regions fall into a softer background, mimicking natural visual perception and concentrating computing power on the area that needs it most.
Gaze-tracking sensors are also slated to grant users control over their (now immersive) screens without any hand gestures. Simple visual cues, such as staring at an object for more than three seconds, will activate commands instantaneously.
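A dwell-to-select trigger of this kind reduces to a small state machine over timestamped gaze samples. Here is a minimal sketch using the three-second threshold from the example above; the data format is invented for illustration:

```python
def dwell_select(gaze_samples, dwell_s=3.0):
    """Return the gazed-at target once it has been held for dwell_s seconds.
    gaze_samples: iterable of (timestamp_s, target_id) pairs."""
    start_t, current = None, None
    for t, target in gaze_samples:
        if target != current:
            start_t, current = t, target  # gaze moved; restart the timer
        elif t - start_t >= dwell_s:
            return current  # dwell threshold reached: activate the command
    return None

samples = [(0.0, "valve"), (1.0, "valve"), (2.0, "valve"), (3.1, "valve")]
print(dwell_select(samples))  # "valve"
```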
And our manufacturing example above is not the only one. Stacked convergence of blockchain, sensors, AI and AR will disrupt almost every major industry.
Take healthcare, for example, wherein biometric sensors will soon customize users’ AR experiences. Already, MIT Media Lab’s Deep Reality group has created an underwater VR relaxation experience that responds to real-time brain activity detected by a modified version of the Muse EEG headband. The experience even adapts to users’ biometric data, from heart rate to electrodermal activity (input from an Empatica E4 wristband).
Now rapidly dematerializing, sensors will converge with AR to improve physical-digital surface integration, intuitive hand and eye controls, and an increasingly personalized augmented world. Keep an eye on companies like MicroVision, now making tremendous leaps in sensor technology.
While I’ll be doing a deep dive into sensor applications across each industry in our next blog, it’s critical to first discuss how we might power sensor- and AI-driven augmented worlds.
(3) Convergence with Blockchain
Because AR requires much more compute power than typical 2D experiences, centralized GPUs and cloud computing systems are hard at work to provide the necessary infrastructure. Nonetheless, the workload is taxing and blockchain may prove the best solution.
A major player in this pursuit, Otoy aims to create the largest distributed GPU network in the world, called the Render Network (RNDR). Built on the Ethereum blockchain specifically for holographic media, and currently in beta testing, the network is set to make AR deployment far more accessible.
Alphabet Chairman Eric Schmidt, an investor in Otoy’s network, has even said, “I predicted that 90% of computing would eventually reside in the web based cloud… Otoy has created a remarkable technology which moves that last 10%—high-end graphics processing—entirely to the cloud. This is a disruptive and important achievement. In my view, it marks the tipping point where the web replaces the PC as the dominant computing platform of the future.”
Leveraging the crowd, RNDR allows anyone with a GPU to contribute their power to the network for a commission of up to $300 a month in RNDR tokens. These can then be redeemed in cash or used to create users’ own AR content.
In a double win, Otoy’s blockchain network and similar iterations not only allow designers to profit when not using their GPUs, but also democratize the experience for newer artists in the field.
And beyond these networks’ power suppliers, distributing GPU processing power will allow more manufacturing companies to access AR design tools and customize learning experiences. By further dispersing content creation across a broad network of individuals, blockchain also has the valuable potential to boost AR hardware investment across a number of industry beneficiaries.
On the consumer side, startups like Scanetchain are also entering the blockchain-AR space for a different reason. Allowing users to scan items with their smartphone, Scanetchain’s app provides access to a trove of information, from manufacturer and price, to origin and shipping details.
Based on NEM (a peer-to-peer cryptocurrency that implements a blockchain consensus algorithm), the app aims to make information far more accessible and, in the process, create a social network of purchasing behavior. Users earn tokens by watching ads, and all transactions are hashed into blocks and securely recorded.
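The “hashed into blocks and securely recorded” step is ordinary blockchain bookkeeping rather than anything NEM-specific. A minimal sketch of the idea follows; these are not NEM’s actual data structures, and the record fields are invented:

```python
import hashlib
import json

def hash_block(prev_hash, transactions):
    """Each block's hash covers the previous block's hash, so altering any
    recorded transaction invalidates every block that follows it."""
    payload = json.dumps({"prev": prev_hash, "txs": transactions},
                         sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

genesis = hash_block("0" * 64, [])
block_1 = hash_block(genesis, [
    {"user": "alice", "scanned": "sku-123", "tokens_earned": 2},  # invented record
])
print(block_1[:16])
```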
The writing is on the wall—our future of brick-and-mortar retail will largely lean on blockchain to create the necessary digital links.
Final Thoughts
Integrating AI into AR creates an “auto-magical” manufacturing pipeline that will fundamentally transform the industry, cutting down on marginal costs, reducing inefficiencies and waste, and maximizing employee productivity.
Bolstering the AI-AR convergence, sensor technology is already blurring the boundaries between our augmented and physical worlds, soon to be near-undetectable. While intuitive hand and eye motions dictate commands in a hands-free interface, biometric data is poised to customize each AR experience to be far more in touch with our mental and physical health.
And underpinning it all, distributed computing power with blockchain networks like RNDR will democratize AR, boosting global consumer adoption at plummeting price points.
As AR soars in importance—whether in retail, manufacturing, entertainment, or beyond—the stacked convergence discussed above merits significant investment over the next decade. The augmented world is only just getting started.
Join Me
(1) A360 Executive Mastermind: Want even more context about how converging exponential technologies will transform your business and industry? Consider joining Abundance 360, a highly selective community of 360 exponentially minded CEOs, who are on a 25-year journey with me—or as I call it, a “countdown to the Singularity.” If you’d like to learn more and consider joining our 2020 membership, apply here.
(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.
This article originally appeared on Diamandis.com
Image Credit: Funky Focus / Pixabay
#435757 Robotic Animal Agility
An off-shore wind power platform, somewhere in the North Sea, on a freezing cold night, with howling winds and waves crashing against the impressive structure. An imperturbable ANYmal is quietly conducting its inspection.
ANYmal, a medium sized dog-like quadruped robot, walks down the stairs, lifts a “paw” to open doors or to call the elevator and trots along corridors. Darkness is no problem: it knows the place perfectly, having 3D-mapped it. Its laser sensors keep it informed about its precise path, location and potential obstacles. It conducts its inspection across several rooms. Its cameras zoom in on counters, recording the measurements displayed. Its thermal sensors record the temperature of machines and equipment and its ultrasound microphone checks for potential gas leaks. The robot also inspects lever positions as well as the correct positioning of regulatory fire extinguishers. As the electronic buzz of its engines resumes, it carries on working tirelessly.
After a little over two hours of inspection, the robot returns to its docking station for recharging. It will soon head back out to conduct its next solitary patrol. ANYmal played alongside Mulder and Scully in the “X-Files” TV series*, but it is in no way a Hollywood robot. It genuinely exists and surveillance missions are part of its very near future.
Off-shore oil platforms are the first test fields and probably the first actual application of ANYmal. ©ANYbotics
This quadruped robot was designed by ANYbotics, a spinoff of the Swiss Federal Institute of Technology in Zurich (ETH Zurich). Made of carbon fibre and aluminium, it weighs about thirty kilos. It is fully ruggedised, water- and dust-proof (IP-67). A kevlar belly protects its main body, carrying its powerful brain, batteries, network device, power management system and navigational systems.
ANYmal was designed for all types of terrain, including rubble, sand or snow. It has been field tested on industrial sites and is at ease with new obstacles to overcome (and it can even get up after a fall). Depending on its mission, its batteries last 2 to 4 hours.
On its jointed legs, protected by rubber pads, it can walk (at the speed of human steps), trot, climb, curl upon itself to crawl, carry a load or even jump and dance. It is the need to move on all surfaces that has driven its designers to choose a quadruped. “Biped robots are not easy to stabilise, especially on irregular terrain” explains Dr Péter Fankhauser, co-founder and chief business development officer of ANYbotics. “Wheeled or tracked robots can carry heavy loads, but they are bulky and less agile. Flying drones are highly mobile, but cannot carry load, handle objects or operate in bad weather conditions. We believe that quadrupeds combine the optimal characteristics, both in terms of mobility and versatility.”
The source of inspiration for the team behind the project, the Robotic Systems Lab of ETH Zurich, is a champion of agility on rugged terrain: the mountain goat. “We are of course still a long way off,” says Fankhauser. “However, it remains our objective for the longer term.”
The first prototype, ALoF, was designed back in 2009. It was still rather slow, very rigid, and clumsy: more of a proof of concept than a robot ready for application. In 2012, StarlETH, fitted with spring joints, could hop, jump, and climb. It was with this robot that the team began participating, in 2014, in ARGOS, a full-scale challenge launched by the Total oil group. The idea was to present a robot capable of inspecting an off-shore drilling station autonomously.
Up against dozens of competitors, the ETH Zurich team was the only one to enter the competition with a quadruped. They didn’t win, but the multiple field tests grew ever more convincing, especially because, during the challenge, the team designed new joints with elastic actuators made in-house. These joints, inspired by tendons and muscles, are compact, sealed, and include their own custom control electronics. They can regulate joint torque, position, and impedance directly. Thanks to this innovation, the team could enter the same competition with a new version of its robot, ANYmal, fitted with three such joints on each leg.
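Regulating torque, position, and impedance in one actuator usually comes down to a control law that makes the joint behave like a programmable spring and damper around a desired motion. The following is a generic sketch of that impedance law, not ANYdrive’s firmware, and the gains are invented:

```python
def impedance_torque(q, qd, q_des, qd_des, stiffness, damping, tau_ff=0.0):
    """Joint torque command: a virtual spring-damper around the desired
    trajectory plus an optional feed-forward term."""
    return stiffness * (q_des - q) + damping * (qd_des - qd) + tau_ff

# Compliant setting for absorbing ground impacts...
print(impedance_torque(q=0.10, qd=0.0, q_des=0.0, qd_des=0.0,
                       stiffness=20.0, damping=1.5))   # -2.0 N·m
# ...versus a stiff setting for precise positioning of the same joint.
print(impedance_torque(q=0.10, qd=0.0, q_des=0.0, qd_des=0.0,
                       stiffness=200.0, damping=8.0))  # -20.0 N·m
```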
The ARGOS experience confirmed the relevance of the selected means of locomotion. “Our robot is lighter, takes up less space on site, and it is less noisy,” says Fankhauser. “It also overcomes bigger obstacles than larger wheeled or tracked robots!” As ANYmal generated public interest and its transformation into a genuine product seemed more than possible, the startup ANYbotics was launched in 2016. It sells not only the robot itself but also its revolutionary joints, called ANYdrive.
Today, ANYmal is not yet ready for sale to companies. However, ANYbotics has a growing number of partnerships with several industries, testing the robot for a few days or several weeks, for all types of tasks. Last October, for example, ANYmal navigated its way through the dark sewage system of the city of Zurich in order to test its capacity to help workers in similar difficult, repetitive and even dangerous tasks.
Why such an early interest among companies? “Because many companies want to integrate robots into their maintenance tasks” answers Fankhauser. “With ANYmal, they can actually evaluate its feasibility and plan their strategy. Eventually, both the architecture and the equipment of buildings could be rethought to be adapted to these maintenance robots”.
ANYmal requires ruggedised, sealed and extremely reliable interconnection solutions, such as LEMO. ©ANYbotics
Through field demonstrations and testing, ANYbotics can gather masses of information (up to 50,000 measurements are recorded every second during each test!). “It helps us to shape the product.” In due time, the startup will be ready to deliver a commercial product which really caters for companies’ needs.
Inspection and surveillance tasks on industrial sites are not the only applications considered. The startup is also thinking of agricultural inspections: with its onboard sensors, ANYmal is capable of mapping its environment, measuring biomass, and even taking soil samples. In the longer term, it could also be used for search-and-rescue operations. The robot can already be switched to “remote control” mode at any time and is easily tele-operated; it is also capable of live audio and video transmission.
The transition from prototype to marketed product will involve a number of further developments. These include increasing ANYmal’s agility and speed, extending its capacity to map large-scale environments, improving safety, security, and user handling, and integrating the system with the customer’s data management software. It will also be necessary to enhance the robot’s reliability “so that it can work for days, weeks, or even months without human supervision.” All required certifications will have to be obtained. The locomotion system, which triggered the whole business, is only one of ANYbotics’ many considerations.
Designed for extreme environments, ANYmal is untroubled by smoke and can walk through snow, rubble, or water. ©ANYbotics
The startup is not all alone. In fact, it has sold ANYmal robots to a dozen major universities, which use them to develop their own know-how in robotics. The startup has also founded ANYmal Research, a community whose members include the Toyota Research Institute, the German Aerospace Center, and the computer company Nvidia. Members have full access to ANYmal’s control software, simulations, and documentation. Sharing has boosted both software and hardware ideas and developments (built on ROS, the open-source Robot Operating System), in particular payload variations that provide expandability and scalability. For instance, one of the universities uses a robotic arm that enables ANYmal to grasp or handle objects and open doors.
Among possible applications, ANYbotics mentions entertainment. It is not only about appearing in more films or TV series, but also about participating in various attractions (trade shows, museums, etc.). “ANYmal is so novel that it attracts a great amount of interest,” confirms Fankhauser with a smile. “Whenever we present it somewhere, people gather around.”
Videos of these events show a fascinated, and sometimes slightly fearful, audience when ANYmal gets too close. Is it fear of the “bad robot”? “This fear exists indeed and we are happy to be able to use ANYmal also to promote public awareness towards robotics and robots.” Reminiscent of a young dog, ANYmal is truly suited to the purpose.
However, Péter Fankhauser tempers the image of humans and sophisticated robots living side by side. “These coming years, robots will continue to work in the background, like they have for a long time in factories. Then, they will be used in public places in a selective and targeted way, for instance for dangerous missions. We will need to wait another ten years before animal-like robots such as ANYmal share our everyday lives!”
At the Consumer Electronics Show (CES) in Las Vegas in January, Continental, the German automotive manufacturing company, used robots to demonstrate a last-mile delivery. It showed ANYmal getting out of an autonomous vehicle with a parcel, climbing onto the front porch, lifting a paw to ring the doorbell, depositing the parcel before getting back into the vehicle. This futuristic image seems very close indeed.
*X-Files, season 11, episode 7, aired in February 2018