Tag Archives: time

#431171 SceneScan: Real-Time 3D Depth Sensing ...

Nerian Introduces a High-Performance Successor for the Proven SP1 System
Stereo vision, which is the three-dimensional perception of our environment with two sensors like our eyes, is a well-known technology. As a passive method – there is no need to emit light in the visible or invisible spectral range – this technology can open up new possibilities for three-dimensional perception, even under difficult conditions.
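The underlying geometry is simple: once two rectified cameras have found matching pixels, the horizontal offset (disparity) between them maps directly to metric depth. The sketch below illustrates this standard pinhole stereo relation in Python with hypothetical camera parameters; it is a minimal illustration of the principle, not Nerian's FPGA implementation.

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_length_px, baseline_m):
    """Convert a disparity map (in pixels) to metric depth.

    Uses the standard pinhole stereo relation: depth = f * B / d.
    Pixels with zero (invalid) disparity map to infinity.
    """
    disparity_px = np.asarray(disparity_px, dtype=np.float64)
    with np.errstate(divide="ignore"):
        depth_m = (focal_length_px * baseline_m) / disparity_px
    return depth_m

# Hypothetical camera parameters, for illustration only
focal_length_px = 1400.0   # focal length in pixels
baseline_m = 0.25          # distance between the two cameras in meters
disparity = np.array([[70.0, 35.0],
                      [14.0, 0.0]])

print(disparity_to_depth(disparity, focal_length_px, baseline_m))
# [[ 5. 10.]
#  [25. inf]]
```

The hard part in practice is not this conversion but finding the matching pixels for every point at high frame rates, which is exactly the workload the paragraph below says overwhelms general-purpose hardware.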
But, as so often, the devil is in the details: for most applications, software implementations on standard PCs, and even on graphics processors, are too slow. Another complicating factor is that these hardware platforms are expensive and not energy-efficient. The solution is to instead use specialized hardware for image processing. A programmable logic device – a so-called FPGA – can greatly accelerate the image processing.
As a technology leader, Nerian Vision Technologies has been following this path successfully for the past two years with the SP1 stereo vision system, which has enabled completely new applications in the fields of robotics, automation technology, medical technology, autonomous driving and other domains. Now the company introduces two successors:
SceneScan and SceneScan Pro. Real eye-catchers in a double sense: stereo vision in an elegant design! More important, of course, are the significantly improved inner workings of the two new models in comparison to their predecessor. The new hardware allows processing rates of up to 100 frames per second at resolutions of up to 3 megapixels, which leaves the SP1 far behind:
Photo Credit: Nerian Vision Technologies – www.nerian.com

The table illustrates the difference: while SceneScan Pro has the highest possible computing power and is designed for the most demanding applications, SceneScan has been cost-reduced for applications with lower requirements. The customer can thus optimize his embedded vision solution both in terms of costs and technology.
The new duo is completed by Nerian’s proven Karmin stereo cameras. Of course, industrial USB3 Vision cameras by other manufacturers are also supported. This combination not only supports the above-mentioned applications even better, but also facilitates completely new and innovative ones. If required, customer-specific adaptations are also possible.
Contact
Nerian Vision Technologies
Owner: Dr. Konstantin Schauwecker
Gotenstr. 9
70771 Leinfelden-Echterdingen
Germany
Phone: +49 711 / 2195 9414
Email: service@nerian.com
Website: http://nerian.com
Press Release Authored By: Nerian Vision Technologies
Photo Credit: Nerian Vision Technologies – www.nerian.com
The post SceneScan: Real-Time 3D Depth Sensing Through Stereo Vision appeared first on Roboticmagazine.

Posted in Human Robots

#431170 This Week’s Awesome Stories From ...

AUGMENTED REALITY
ZED Mini Turns Rift and Vive Into an AR Headset From the Future
Ben Lang | Road to VR
“When attached, the camera provides stereo pass-through video and real-time depth and environment mapping, turning the headsets into dev kits emulating the capabilities of high-end AR headsets of the future. The ZED Mini will launch in November.”
ROBOTICS
Life-Size Humanoid Robot Is Designed to Fall Over (and Over and Over)
Evan Ackerman | IEEE Spectrum
“The researchers came up with a new strategy for not worrying about falls: not worrying about falls. Instead, they’ve built their robot from the ground up with an armored structure that makes it totally okay with falling over and getting right back up again.”
SPACE
Russia Will Team up With NASA to Build a Lunar Space Station
Anatoly Zak | Popular Mechanics
“NASA and its partner agencies plan to begin the construction of the modular habitat known as the Deep-Space Gateway in orbit around the Moon in the early 2020s. It will become the main destination for astronauts for at least a decade, extending human presence beyond the Earth’s orbit for the first time since the end of the Apollo program in 1972. Launched on NASA’s giant SLS rocket and serviced by the crews of the Orion spacecraft, the outpost would pave the way to a mission to Mars in the 2030s.”
TRANSPORTATION
Dubai Starts Testing Crewless Two-Person ‘Flying Taxis’
Thuy Ong | The Verge
“The drone was uncrewed and hovered 200 meters high during the test flight, according to Reuters. The AAT, which is about two meters high, was supplied by specialist German manufacturer Volocopter, known for its eponymous helicopter drone hybrid with 18 rotors…Dubai has a target for autonomous transport to account for a quarter of total trips by 2030.”
AUTONOMOUS CARS
Toyota Is Trusting a Startup for a Crucial Part of Its Newest Self-Driving Cars
Johana Bhuiyan | Recode
“Toyota unveiled the next generation of its self-driving platform today, which features more accurate object detection technology and mapping, among other advancements. These test cars—which Toyota is testing on both a closed driving course and on some public roads—will also be using Luminar’s lidar sensors, or radars that use lasers to detect the distance to an object.”
Image Credit: KHIUS / Shutterstock.com

Posted in Human Robots

#431155 What It Will Take for Quantum Computers ...

Quantum computers could give the machine learning algorithms at the heart of modern artificial intelligence a dramatic speed up, but how far off are we? An international group of researchers has outlined the barriers that still need to be overcome.
This year has seen a surge of interest in quantum computing, driven in part by Google’s announcement that it will demonstrate “quantum supremacy” by the end of 2017. That means solving a problem beyond the capabilities of normal computers, which the company predicts will take 49 qubits—the quantum computing equivalent of bits.
As impressive as such a feat would be, the demonstration is likely to be on an esoteric problem that stacks the odds heavily in the quantum processor’s favor, and getting quantum computers to carry out practically useful calculations will take a lot more work.
But these devices hold great promise for solving problems in fields as diverse as cryptography and weather forecasting. One application people are particularly excited about is whether they could be used to supercharge the machine learning algorithms already transforming the modern world.
The potential is summarized in a recent review paper in the journal Nature written by a group of experts from the emerging field of quantum machine learning.
“Classical machine learning methods such as deep neural networks frequently have the feature that they can both recognize statistical patterns in data and produce data that possess the same statistical patterns: they recognize the patterns that they produce,” they write.
“This observation suggests the following hope. If small quantum information processors can produce statistical patterns that are computationally difficult for a classical computer to produce, then perhaps they can also recognize patterns that are equally difficult to recognize classically.”
Because of the way quantum computers work—taking advantage of strange quantum mechanical effects like entanglement and superposition—algorithms running on them should in principle be able to solve problems much faster than the best known classical algorithms, a phenomenon known as quantum speedup.
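To make those terms concrete, the toy linear-algebra sketch below (plain NumPy, not a quantum machine learning algorithm) builds the two ingredients the paragraph mentions: a Hadamard gate puts one qubit into superposition, and a CNOT gate entangles it with a second qubit, producing a Bell state whose measurement outcomes are perfectly correlated.

```python
import numpy as np

# Single-qubit basis states and gates
zero = np.array([1.0, 0.0])
H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard: creates superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])        # entangling two-qubit gate

# Put qubit 0 into superposition, then entangle it with qubit 1
state = np.kron(H @ zero, zero)   # (|0> + |1>)/sqrt(2) combined with |0>
state = CNOT @ state              # Bell state (|00> + |11>)/sqrt(2)

print(state)                      # [0.707 0.    0.    0.707]
print(np.abs(state) ** 2)         # measurement probabilities: 50% |00>, 50% |11>
```

Simulating n qubits this way needs vectors of length 2^n, which is precisely why classical hardware cannot keep up and why a genuine quantum speedup is attractive.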
Designing these algorithms is tricky work, but the authors of the review note that there has been significant progress in recent years. They highlight multiple quantum algorithms exhibiting quantum speedup that could act as subroutines, or building blocks, for quantum machine learning programs.
We still don’t have the hardware to implement these algorithms, but according to the researchers the challenge is a technical one, and clear paths to overcoming it exist. More challenging, they say, are four fundamental conceptual problems that could limit the applicability of quantum machine learning.
The first two are the input and output problems. Quantum computers, unsurprisingly, deal with quantum data, but the majority of the problems humans want to solve relate to the classical world. Translating significant amounts of classical data into the quantum systems can take so much time it can cancel out the benefits of the faster processing speeds, and the same is true of reading out the solution at the end.
The input problem could be mitigated to some extent by the development of quantum random access memory (qRAM)—the equivalent of the RAM a conventional computer uses to give the machine quick access to its working memory. A qRAM can be configured to store classical data but allow the quantum computer to access all that information simultaneously as a superposition, which is required for a variety of quantum algorithms. But the authors note this is still a considerable engineering challenge and may not be sustainable for big data problems.
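As a rough illustration of why the input problem matters, the hypothetical sketch below shows amplitude encoding: packing an n-dimensional classical vector into the amplitudes of roughly log2(n) qubits. Even in this classical simulation the data must be padded and normalized, and on real hardware preparing such a state efficiently is exactly the engineering challenge qRAM is meant to address.

```python
import numpy as np

def amplitude_encode(x):
    """Map a classical vector onto the amplitudes of a quantum state.

    An n-dimensional vector needs only ceil(log2(n)) qubits, but it must
    be padded to a power-of-two length and normalized to unit norm before
    it can serve as a valid state vector.
    """
    x = np.asarray(x, dtype=np.float64)
    dim = 1 << int(np.ceil(np.log2(len(x))))   # next power of two
    padded = np.zeros(dim)
    padded[: len(x)] = x
    norm = np.linalg.norm(padded)
    if norm == 0:
        raise ValueError("cannot encode the zero vector")
    return padded / norm

data = [3.0, 1.0, 4.0, 1.0, 5.0]               # 5 features -> 3 qubits (8 amplitudes)
state = amplitude_encode(data)
print(state)                                   # normalized amplitude vector
print(np.sum(state ** 2))                      # probabilities sum to 1.0
```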
Closely related to the input/output problem is the costing problem. At present, the authors say very little is known about how many gates—or operations—a quantum machine learning algorithm will require to solve a given problem when operated on real-world devices. It’s expected that on highly complex problems they will offer considerable improvements over classical computers, but it’s not clear how big problems have to be before this becomes apparent.
Finally, whether or when these advantages kick in may be hard to prove, something the authors call the benchmarking problem. Claiming that a quantum algorithm can outperform any classical machine learning approach requires extensive testing against these other techniques, which may not be feasible.
They suggest that this could be sidestepped by lowering the standards quantum machine learning algorithms are currently held to. This makes sense, as it doesn’t really matter whether an algorithm is intrinsically faster than all possible classical ones, as long as it’s faster than all the existing ones.
Another way of avoiding some of these problems is to apply these techniques directly to quantum data, the actual states generated by quantum systems and processes. The authors say this is probably the most promising near-term application for quantum machine learning and has the added benefit that any insights can be fed back into the design of better hardware.
“This would enable a virtuous cycle of innovation similar to that which occurred in classical computing, wherein each generation of processors is then leveraged to design the next-generation processors,” they conclude.
Image Credit: archy13 / Shutterstock.com

Posted in Human Robots

#431142 Will Privacy Survive the Future?

Technological progress has radically transformed our concept of privacy. How we share information and display our identities has changed as we’ve migrated to the digital world.
As the Guardian states, “We now carry with us everywhere devices that give us access to all the world’s information, but they can also offer almost all the world vast quantities of information about us.” We are all leaving digital footprints as we navigate through the internet. While sometimes this information can be harmless, it’s often valuable to various stakeholders, including governments, corporations, marketers, and criminals.
The ethical debate around privacy is complex. The reality is that our definition and standards for privacy have evolved over time, and will continue to do so in the next few decades.
Implications of Emerging Technologies
Protecting privacy will only become more challenging as we experience the emergence of technologies such as virtual reality, the Internet of Things, brain-machine interfaces, and much more.
Virtual reality headsets are already gathering information about users’ locations and physical movements. In the future, all of our emotional experiences, reactions, and interactions in the virtual world will be accessible for analysis. As virtual reality becomes more immersive and indistinguishable from physical reality, technology companies will be able to gather an unprecedented amount of data.
It doesn’t end there. The Internet of Things will be able to gather live data from our homes, cities and institutions. Drones may be able to spy on us as we live our everyday lives. As the amount of genetic data gathered increases, the privacy of our genes, too, may be compromised.
It gets even more concerning when we look farther into the future. As companies like Neuralink attempt to merge the human brain with machines, we are left with powerful implications for privacy. Brain-machine interfaces by nature operate by extracting information from the brain and manipulating it in order to accomplish goals. There are many parties that can benefit and take advantage of the information from the interface.
Marketing companies, for instance, would take an interest in better understanding how consumers think, and consequently in modifying those thoughts. Employers could use the information to find new ways to improve productivity or even monitor their employees. There will notably be risks of “brain hacking,” which we must take extreme precautions against. However, it is important to note that lesser versions of these risks already exist, e.g., phone hacking, identity fraud, and the like.
A New Much-Needed Definition of Privacy
In many ways we are already cyborgs interfacing with technology. According to theories like the extended mind hypothesis, our technological devices are an extension of our identities. We use our phones to store memories, retrieve information, and communicate. We use powerful tools like the Hubble Telescope to extend our sense of sight. In parallel, one can argue that the digital world has become an extension of the physical world.
These technological tools are a part of who we are. This has led to many ethical and societal implications. Our Facebook profiles can be processed to infer secondary information about us, such as sexual orientation, political and religious views, race, substance use, intelligence, and personality. Some argue that many of our devices may be mapping our every move. Your browsing history could be spied on and even sold on the open market.
While the argument to protect privacy and individuals’ information is valid to a certain extent, we may also have to accept the possibility that privacy will become obsolete in the future. We have inherently become more open as a society in the digital world, voluntarily sharing our identities, interests, views, and personalities.

“The question we are left with is, at what point does the tradeoff between transparency and privacy become detrimental?”

There also seems to be a contradiction with the positive trend towards mass transparency and the need to protect privacy. Many advocate for a massive decentralization and openness of information through mechanisms like blockchain.
The question we are left with is, at what point does the tradeoff between transparency and privacy become detrimental? We want to live in a world of fewer secrets, but also don’t want to live in a world where our every move is followed (not to mention our every feeling, thought and interaction). So, how do we find a balance?
Traditionally, privacy is used synonymously with secrecy. Many are led to believe that if you keep your personal information secret, then you’ve accomplished privacy. Danny Weitzner, director of the MIT Internet Policy Research Initiative, rejects this notion and argues that this old definition of privacy is dead.
From Weitzner’s perspective, protecting privacy in the digital age means creating rules that require governments and businesses to be transparent about how they use our information. In other words, we can’t bring the business of data to an end, but we can do a better job of controlling it. If these stakeholders spy on our personal information, then we should have the right to spy on how they spy on us.
The Role of Policy and Discourse
Almost always, policy has been too slow to adapt to the societal and ethical implications of technological progress. And sometimes the wrong laws can do more harm than good. For instance, in March, the US House of Representatives voted to allow internet service providers to sell your web browsing history on the open market.
More often than not, the bureaucratic nature of governance can’t keep up with exponential growth. New technologies are emerging every day and transforming society. Can we confidently claim that our world leaders, politicians, and local representatives are having these conversations and debates? Are they putting a focus on the ethical and societal implications of emerging technologies? Probably not.
We also can’t underestimate the role of public awareness and digital activism. There needs to be an emphasis on educating and engaging the general public about the complexities of these issues and the potential solutions available. The current solution may not be robust or clear, but having these discussions will get us there.
Stock Media provided by blasbike / Pond5

Posted in Human Robots

#431130 Innovative Collaborative Robot sets new ...

Press Release by: HMK
As the trend of Industry 4.0 takes the world by storm, collaborative robots and smart factories are becoming the latest hot topic. At this year’s PPMA show, HMK will demonstrate the world’s first collaborative robot with built-in vision recognition from Techman Robot.
The new TM5 Cobot from HMK merges systems that usually function separately in conventional robots: it is the only collaborative robot to incorporate simple programming, a fully integrated vision system and the latest safety standards in a single unit.
With capabilities including direction identification, self-calibration of coordinates and visual task operation enabled by built-in vision, the TM5 can fine-tune itself in accordance with actual conditions at any time to accomplish complex processes that used to demand the integration of various pieces of equipment. It requires less manpower and time to recalibrate when objects or coordinates move, and thus significantly improves flexibility while reducing maintenance costs.
Photo Credit: hmkdirect.com
Simple
Programming could not be easier. TM-Flow, an easy-to-use flow-chart programming environment, runs on any tablet, PC or laptop over a wireless link to the TM control box, so complex automation tasks can be realised in minutes. Clever teach functions and wizards also allow hand-guided programming and easy incorporation of operations such as palletising, de-palletising and conveyor tracking.
Smart
The TM5 is the only cobot to feature a full-colour vision package as standard, mounted on the wrist of the robot and fully supported within TM-Flow. As a result, users can easily integrate the robot into their application without complex tooling or the need for expensive add-on vision hardware and programming.
Safe
The recently CE-marked TM5 now incorporates the new ISO/TS 15066 guidelines on safety in collaborative robot systems, which cover four types of collaborative operation:
a) Safety-rated monitored stop
b) Hand guiding
c) Speed and separation monitoring
d) Power and force limiting
Safety hardware inputs also allow the Cobot to be integrated into wider safety systems.
When you add EtherCAT and Modbus network connectivity, I/O expansion options, IoT-ready network access and ex-stock delivery, the TM5 sets a new benchmark for this evolving robotics sector.
The TM5 is available with two payload options, 4 kg and 6 kg, with a reach of 900 mm and 700 mm respectively, both with a positioning repeatability of 0.05 mm.
HMK will be showcasing the new TM5 Cobot at this year’s PPMA show at the NEC; visit stand F102 to get hands-on with the Cobot and experience the innovative and intuitive graphic HMI and hand-guiding features.
For more information contact HMK on 01260 279411, email sales@hmkdirect.com or visit www.hmkdirect.com
Photo Credit: hmkdirect.com
The post Innovative Collaborative Robot sets new benchmark appeared first on Roboticmagazine.

Posted in Human Robots