Tag Archives: mechanics

#435313 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
Microsoft Invests $1 Billion in OpenAI to Pursue Holy Grail of Artificial Intelligence
James Vincent | The Verge
“‘The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity,’ said [OpenAI cofounder] Sam Altman. ‘Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI.’”

ROBOTICS
UPS Wants to Go Full-Scale With Its Drone Deliveries
Eric Adams | Wired
“If UPS gets its way, it’ll be known for vehicles other than its famous brown vans. The delivery giant is working to become the first commercial entity authorized by the Federal Aviation Administration to use autonomous delivery drones without any of the current restrictions that have governed the aerial testing it has done to date.”

SYNTHETIC BIOLOGY
Scientists Can Finally Build Feedback Circuits in Cells
Megan Molteni | Wired
“Network a few LOCKR-bound molecules together, and you’ve got a circuit that can control a cell’s functions the same way a PID computer program automatically adjusts the pitch of a plane. With the right key, you can make cells glow or blow themselves apart. You can send things to the cell’s trash heap or zoom them to another cellular zip code.”

ENERGY
Carbon Nanotubes Could Increase Solar Efficiency to 80 Percent
David Grossman | Popular Mechanics
“Obviously, that sort of efficiency rating is unheard of in the world of solar panels. But even though a proof of concept is a long way from being used in the real world, any further developments in the nanotubes could bolster solar panels in ways we haven’t seen yet.”

FUTURE
What Technology Is Most Likely to Become Obsolete During Your Lifetime?
Daniel Kolitz | Gizmodo
“Old technology seldom just goes away. Whiteboards and LED screens join chalk blackboards, but don’t eliminate them. Landline phones get scarce, but not phones. …And the technologies that seem to be the most outclassed may come back as the cult objects of aficionados—the vinyl record, for example. All this is to say that no one can tell us what will be obsolete in fifty years, but probably a lot less will be obsolete than we think.”

NEUROSCIENCE
The Human Brain Project Hasn’t Lived Up to Its Promise
Ed Yong | The Atlantic
“The HBP, then, is in a very odd position, criticized for being simultaneously too grandiose and too narrow. None of the skeptics I spoke with was dismissing the idea of simulating parts of the brain, but all of them felt that such efforts should be driven by actual research questions. …Countless such projects could have been funded with the money channeled into the HBP, which explains much of the furor around the project.”

Image Credit: Aron Van de Pol / Unsplash

Posted in Human Robots

#434151 Life-or-Death Algorithms: The Black Box ...

When it comes to applications for machine learning, few can be more widely hyped than medicine. This is hardly surprising: it’s a huge industry that generates a phenomenal amount of data and revenue, where technological advances can improve or save the lives of millions of people. Hardly a week passes without a study that suggests algorithms will soon be better than experts at detecting pneumonia or Alzheimer’s, or at spotting diseases in complex organs ranging from the eye to the heart.

The problems of overcrowded hospitals and overworked medical staff plague public healthcare systems like Britain’s NHS and lead to rising costs for private healthcare systems. Here, again, algorithms offer a tantalizing solution. How many of those doctor’s visits really need to happen? How many could be replaced by an interaction with an intelligent chatbot—especially if it can be combined with portable diagnostic tests, utilizing the latest in biotechnology? That way, unnecessary visits could be reduced, and patients could be diagnosed and referred to specialists more quickly without waiting for an initial consultation.

As ever with artificial intelligence algorithms, the aim is not to replace doctors, but to give them tools to reduce the mundane or repetitive parts of the job. With an AI that can examine thousands of scans in a minute, the “dull drudgery” is left to machines, and the doctors are freed to concentrate on the parts of the job that require more complex, subtle, experience-based judgement of the best treatments and the needs of the patient.

High Stakes
But, as ever with AI algorithms, there are risks involved with relying on them—even for tasks that are considered mundane. The problems of black-box algorithms that make inexplicable decisions are bad enough when you’re trying to understand why that automated hiring chatbot was unimpressed by your job interview performance. In a healthcare context, where the decisions made could mean life or death, the consequences of algorithmic failure could be grave.

A new paper in Science Translational Medicine, by Nicholson Price, explores some of the promises and pitfalls of using these algorithms in the data-rich medical environment.

Neural networks excel at churning through vast quantities of training data and making connections, absorbing a system’s underlying patterns or logic into hidden layers of linear algebra, whether the task is detecting skin cancer from photographs or learning to write pseudo-Shakespearean script. They are terrible, however, at explaining the underlying logic behind the relationships they’ve found: there is often little more than a string of numbers, the statistical “weights” between the layers. They struggle to distinguish between correlation and causation.
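The “string of numbers” problem is visible even at toy scale. Below is a minimal, illustrative sketch in plain Python (not any production medical system): a single-neuron model learns logical AND perfectly, yet everything it has “learned” is three bare floating-point numbers that explain nothing on their own.

```python
import math
import random

# Train a single "neuron" (logistic unit) to mimic logical AND.
random.seed(0)
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = random.random(), random.random(), random.random()

def predict(x1, x2):
    # Sigmoid activation squashes the weighted sum into (0, 1).
    return 1 / (1 + math.exp(-(w1 * x1 + w2 * x2 + b)))

# Plain per-sample gradient descent on squared error.
for _ in range(5000):
    for (x1, x2), y in data:
        p = predict(x1, x2)
        grad = (p - y) * p * (1 - p)  # chain rule through the sigmoid
        w1 -= 0.5 * grad * x1
        w2 -= 0.5 * grad * x2
        b -= 0.5 * grad

# The model is now perfect on its task...
print([round(predict(x1, x2)) for (x1, x2), _ in data])  # [0, 0, 0, 1]
# ...but its entire "understanding" is three opaque numbers.
print(round(w1, 2), round(w2, 2), round(b, 2))
```

Scale those three numbers up to millions of weights across dozens of layers and you have the interpretability problem in full: the model works, but the parameters themselves offer no medical explanation.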

This raises interesting dilemmas for healthcare providers. The dream of big data in medicine is to feed a neural network on “huge troves of health data, finding complex, implicit relationships and making individualized assessments for patients.” What if, inevitably, such an algorithm proves to be unreasonably effective at diagnosing a medical condition or prescribing a treatment, but you have no scientific understanding of how this link actually works?

Too Many Threads to Unravel?
The statistical models that underlie such neural networks often assume that variables are independent of each other, but in a complex, interacting system like the human body, this is not always the case.

In some ways, this is a familiar concept in medical science—there are many phenomena and links which have been observed for decades but are still poorly understood on a biological level. Paracetamol is one of the most commonly prescribed painkillers, but there’s still robust debate about how it actually works. Medical practitioners may be keen to deploy whatever tool is most effective, regardless of whether it’s based on a deeper scientific understanding. Fans of the Copenhagen interpretation of quantum mechanics might spin this as “Shut up and medicate!”

But as in that field, there’s a debate to be had about whether this approach risks losing sight of a deeper understanding that will ultimately prove more fruitful—for example, for drug discovery.

Away from the philosophical weeds, there are more practical problems: if you don’t understand how a black-box medical algorithm is operating, how should you approach the issues of clinical trials and regulation?

Price points out that, in the US, the 21st Century Cures Act allows the FDA to regulate any algorithm that analyzes images or that doesn’t allow a provider to review the basis for its conclusions: this could completely exclude “black-box” algorithms of the kind described above from use.

Transparency about how the algorithm functions—the data it looks at, and the thresholds for drawing conclusions or providing medical advice—may be required, but could also conflict with the profit motive and the desire for secrecy in healthcare startups.

One solution might be to screen algorithms that can’t explain themselves, or don’t rely on well-understood medical science, from use before they enter the healthcare market. But this could prevent people from reaping the benefits that they can provide.

Evaluating Algorithms
New healthcare algorithms will be unable to do what physicists did with quantum mechanics and point to a track record of success, because they will not yet have been deployed in the field. And, as Price notes, many algorithms will improve the longer they’re deployed, as they harvest and learn from real-world performance data. So how can we choose between the most promising approaches?

Creating a standardized clinical trial and validation system that’s equally valid across algorithms that function in different ways, or use different input or training data, will be a difficult task. Clinical trials that rely on small sample sizes, such as for algorithms that attempt to personalize treatment to individuals, will also prove difficult. With a small sample size and little scientific understanding, it’s hard to tell whether the algorithm succeeded or failed because it’s bad at its job or by chance.
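A back-of-the-envelope binomial calculation illustrates the problem. Assume, purely for illustration, that a skill-free algorithm guesses correctly 50 percent of the time (a deliberate simplification; real trials compare against standard of care, not a coin flip):

```python
from math import comb

def p_at_least(k, n, p=0.5):
    """Probability of at least k successes in n independent trials."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# "The algorithm got 7 of 10 patients right" sounds impressive, but a
# coin-flipping baseline clears that same bar about 17 percent of the time.
print(round(p_at_least(7, 10), 3))  # 0.172

# The same 70 percent hit rate over 100 patients is another story entirely.
print(p_at_least(70, 100) < 0.001)  # True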

Add learning into the mix and the picture gets more complex. “Perhaps more importantly, to the extent that an ideal black-box algorithm is plastic and frequently updated, the clinical trial validation model breaks down further, because the model depends on a static product subject to stable validation.” As Price describes, the current system for testing and validation of medical products needs some adaptation to deal with this new software before it can successfully test and validate the new algorithms.

Striking a Balance
The story in healthcare reflects the AI story in so many other fields, and the complexities involved perhaps illustrate why even an illustrious company like IBM appears to be struggling to turn its famed Watson AI into a viable product in the healthcare space.

A balance must be struck in our rush to exploit big data, the eerie power of neural networks, and the automation of thinking. We must be aware of the biases and flaws of this approach to problem-solving, and realize that it is not a panacea.

But we also need to embrace these technologies where they can be a useful complement to the skills, insights, and deeper understanding that humans can provide. Much like a neural network, our industries need to train themselves to enhance this cooperation in the future.

Image Credit: Connect world / Shutterstock.com


#432249 New Malicious AI Report Outlines Biggest ...

Everyone’s talking about deep fakes: audio-visual imitations of people, generated by increasingly powerful neural networks, that will soon be indistinguishable from the real thing. Politicians are regularly laid low by scandals that arise from audio-visual recordings. Watch the footage researchers have generated of Barack Obama from his speeches, or listen to Lyrebird’s voice impersonations. You could easily, today or in the very near future, create a forgery convincing enough to fool most observers. What would that do to politics?

Once the internet is flooded with plausible-seeming tapes and recordings of this sort, how are we going to decide what’s real and what isn’t? Democracy, and our ability to respond to threats, is already hampered by a lack of agreement on the facts. Once we can’t believe the evidence of our own senses anymore, we’re in serious trouble. Ultimately, you can dream up all kinds of utterly terrifying possibilities for these deep fakes, from fake news to blackmail.

How to solve the problem? Some have suggested that media websites like Facebook or Twitter should carry software that probes every video to see if it’s a deep fake or not and labels the fakes. But this will prove computationally intensive. Plus, imagine a case where we have such a system, and a fake is “verified as real” by news media algorithms that have been fooled by clever hackers.

The other alternative is even more dystopian: you can prove something isn’t true simply by always having an alibi. Lawfare describes a “solution” where those concerned about deep fakes have all of their movements and interactions recorded. So to avoid being blackmailed or having your reputation ruined, you just consent to some company engaging in 24/7 surveillance of everything you say or do and having total power over that information. What could possibly go wrong?

The point is, in the same way that you don’t need human-level, general AI or humanoid robotics to create systems that can cause disruption in the world of work, you also don’t need a general intelligence to threaten security and wreak havoc on society. AI researcher Andrew Ng says that worrying about the risks from superintelligent AI is like “worrying about overpopulation on Mars.” Yet there are clearly risks that arise even from the simple algorithms we have today.

The looming issue of deep fakes is just one of the threats considered by the new malicious AI report, which has co-authors from the Future of Humanity Institute and the Centre for the Study of Existential Risk (among other organizations). They limit their focus to the technologies of the next five years.

Some of the concerns the report explores are enhancements to familiar threats.

Automated hacking can get better, smarter, and algorithms can adapt to changing security protocols. “Phishing emails,” where people are scammed by impersonating someone they trust or an official organization, could be generated en masse and made more realistic by scraping data from social media. Standard phishing works by sending such a great volume of emails that even a very low success rate can be profitable. Spear phishing aims at specific targets by impersonating family members, but can be labor intensive. If AI algorithms enable every phishing scam to become sharper in this way, more people are going to get gouged.

Then there are novel threats that come from our own increasing use of and dependence on artificial intelligence to make decisions.

These algorithms may be smart in some ways, but as any human knows, computers are utterly lacking in common sense; they can be fooled. A rather scary application is adversarial examples. Machine learning algorithms are often used for image recognition. But it’s possible, if you know a little about how the algorithm is structured, to construct the perfect level of noise to add to an image and fool the machine. Two images can be almost completely indistinguishable to the human eye. But by adding some cleverly calculated noise, the hackers can fool the algorithm into thinking an image of a panda is really an image of a gibbon (in the OpenAI example). Research conducted by OpenAI demonstrates that you can fool algorithms even by printing out examples on stickers.
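A linear toy model makes the trick concrete. This sketch is purely illustrative (real attacks such as the fast gradient sign method perturb image pixels along the gradient of a deep network’s loss), but the principle is identical: nudge every input feature a tiny amount in whichever direction hurts the current classification most.

```python
import math

# Stand-in for a trained image classifier: score > 0 -> "panda",
# score < 0 -> "gibbon" (labels borrowed from the OpenAI example;
# the weights and inputs here are made up for illustration).
w = [30.0, -20.0]  # frozen "trained" weights

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x))

def classify(x):
    return "panda" if score(x) > 0 else "gibbon"

x = [0.40, 0.59]  # a legitimate input
print(classify(x))  # panda

# Gradient-sign step: for a linear model the gradient of the score with
# respect to the input is just w, so subtracting eps * sign(w) pushes
# the score down as fast as possible per unit of change.
eps = 0.01
x_adv = [xi - eps * math.copysign(1, wi) for xi, wi in zip(x, w)]

print(classify(x_adv))  # gibbon
print(round(max(abs(a - b) for a, b in zip(x, x_adv)), 3))  # 0.01
```

Each feature moved by only 0.01, yet the predicted label flipped. In a deep image model the same tiny per-pixel budget is invisible to the human eye.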

Now imagine that instead of tricking a computer into thinking that a panda is actually a gibbon, you fool it into thinking that a stop sign isn’t there, or that the back of someone’s car is really a nice open stretch of road. In the adversarial example case, the images are almost indistinguishable to humans. By the time anyone notices the road sign has been “hacked,” it could already be too late.

As OpenAI freely admits, working out whether we’d be able to tame a superintelligent AI is a hard problem. It looks all the more difficult when you realize some of our best algorithms can be fooled by stickers; even “modern simple algorithms can behave in ways we do not intend.”

There are ways to defend against these attacks.

Adversarial training can generate lots of adversarial examples and explicitly train the algorithm not to be fooled by them—but it’s costly in terms of time and computation, and puts you in an arms race with hackers. Many strategies for defending against adversarial examples haven’t proved adaptive enough; correcting against vulnerabilities one at a time is too slow. Moreover, it demonstrates a point that can be lost in the AI hype: algorithms can be fooled in ways we didn’t anticipate. If we don’t learn about these vulnerabilities until the algorithms are everywhere, serious disruption can occur. And no matter how careful you are, some vulnerabilities are likely to remain to be exploited, even if it takes years to find them.
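A minimal sketch of adversarial training in plain Python (illustrative only; real systems do this with deep networks and far larger perturbation budgets): every gradient step also trains on a worst-case perturbed copy of the input, so the learned boundary keeps an eps-sized safety margin.

```python
import math
import random

random.seed(1)
# Two well-separated 2D classes; we want a model that stays correct even
# if an attacker shifts each input feature by up to eps.
data = [([random.gauss(0, 0.05), random.gauss(0, 0.05)], 0) for _ in range(20)] \
     + [([random.gauss(1, 0.05), random.gauss(1, 0.05)], 1) for _ in range(20)]
w, b, eps, lr = [0.0, 0.0], 0.0, 0.1, 0.5

def p_of(x):
    return 1 / (1 + math.exp(-(w[0] * x[0] + w[1] * x[1] + b)))

def attack(x, y):
    # Gradient-sign perturbation: move each feature by eps in the
    # direction that increases the model's loss on (x, y).
    g = p_of(x) - y  # d(log-loss)/d(score)
    return [xi + eps * math.copysign(1, g * wi) if wi != 0 else xi
            for xi, wi in zip(x, w)]

for _ in range(200):
    for x, y in data:
        # Train on the clean input AND its current worst-case perturbation.
        for xt in (x, attack(x, y)):
            g = p_of(xt) - y
            w = [wi - lr * g * xi for wi, xi in zip(w, xt)]
            b -= lr * g

robust = all((p_of(attack(x, y)) > 0.5) == (y == 1) for x, y in data)
print(robust)  # True: eps-sized attacks on the training points now fail
```

The catch described above is visible even here: the defense is only as good as the attack inside the loop. An adversary with a different perturbation strategy, or a larger eps, restarts the arms race.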

Just look at the Meltdown and Spectre vulnerabilities, which went undetected for more than 20 years and could enable hackers to steal personal information. Ultimately, the more blind faith we put into algorithms and computers—without understanding the opaque inner mechanics of how they work—the more vulnerable we will be to these forms of attack. And, as China dreams of using AI to predict crimes and enhance the police force, the potential for unjust arrests can only increase.

This is before you get into the truly nightmarish territory of “killer robots”—not the Terminator, but instead autonomous or consumer drones which could potentially be weaponized by bad actors and used to conduct attacks remotely. Some reports have indicated that terrorist organizations are already trying to do this.

As with any form of technology, new powers for humanity come with new risks. And, as with any form of technology, closing Pandora’s box will prove very difficult.

Somewhere between the excessively hyped prospects of AI that will do everything for us and AI that will destroy the world lies reality: a complex, ever-changing set of risks and rewards. The writers of the malicious AI report note that one of their key motivations is ensuring that the benefits of new technology can be delivered to people as quickly, but as safely, as possible. In the rush to exploit the potential for algorithms and create 21st-century infrastructure, we must ensure we’re not building in new dangers.

Image Credit: lolloj / Shutterstock.com


#431170 This Week’s Awesome Stories From ...

AUGMENTED REALITY
ZED Mini Turns Rift and Vive Into an AR Headset From the Future
Ben Lang | Road to VR
“When attached, the camera provides stereo pass-through video and real-time depth and environment mapping, turning the headsets into dev kits emulating the capabilities of high-end AR headsets of the future. The ZED Mini will launch in November.”

ROBOTICS
Life-Size Humanoid Robot Is Designed to Fall Over (and Over and Over)
Evan Ackerman | IEEE Spectrum
“The researchers came up with a new strategy for not worrying about falls: not worrying about falls. Instead, they’ve built their robot from the ground up with an armored structure that makes it totally okay with falling over and getting right back up again.”

SPACE
Russia Will Team up With NASA to Build a Lunar Space Station
Anatoly Zak | Popular Mechanics
“NASA and its partner agencies plan to begin the construction of the modular habitat known as the Deep-Space Gateway in orbit around the Moon in the early 2020s. It will become the main destination for astronauts for at least a decade, extending human presence beyond the Earth’s orbit for the first time since the end of the Apollo program in 1972. Launched on NASA’s giant SLS rocket and serviced by the crews of the Orion spacecraft, the outpost would pave the way to a mission to Mars in the 2030s.”

TRANSPORTATION
Dubai Starts Testing Crewless Two-Person ‘Flying Taxis’
Thuy Ong | The Verge
“The drone was uncrewed and hovered 200 meters high during the test flight, according to Reuters. The AAT, which is about two meters high, was supplied by specialist German manufacturer Volocopter, known for its eponymous helicopter drone hybrid with 18 rotors…Dubai has a target for autonomous transport to account for a quarter of total trips by 2030.”

AUTONOMOUS CARS
Toyota Is Trusting a Startup for a Crucial Part of Its Newest Self-Driving Cars
Johana Bhuiyan | Recode
“Toyota unveiled the next generation of its self-driving platform today, which features more accurate object detection technology and mapping, among other advancements. These test cars—which Toyota is testing on both a closed driving course and on some public roads—will also be using Luminar’s lidar sensors, or radars that use lasers to detect the distance to an object.”

Image Credit: KHIUS / Shutterstock.com


#430630 CORE2 consumer robot controller by ...

Hardware, software and cloud for fast robot prototyping and development
Kraków, Poland, June 27th, 2017 – Robotic development platform creator Husarion has launched its next-generation dedicated robot controller CORE2. Available now on the Crowd Supply crowdfunding platform, CORE2 enables the rapid prototyping and development of consumer and service robots. It’s especially suitable for engineers designing commercial appliances, as well as for robotics students and hobbyists. Whether the next robotic idea is a tiny rover that penetrates tunnels, a surveillance drone, or a room-sized 3D printer, the CORE2 can serve as the brains behind it.
Photo Credit: Husarion – www.husarion.com
Husarion’s platform greatly simplifies robot development, making it as easy as creating a website. It provides engineers with embedded hardware, preconfigured software and easy online management. From the simple, proof-of-concept prototypes made with LEGO® Mindstorms to complex designs ready for mass manufacturing, the core technology stays the same throughout the process, shortening the time to market significantly. It’s designed as an innovation for the consumer robotics industry similar to what Arduino or Raspberry Pi were to the Maker Movement.

“We are on the verge of a consumer robotics revolution”, says Dominik Nowak, CEO of Husarion. “Big industrial businesses have long been utilizing robots, but until very recently the consumer side hasn’t seen that many of them. This is starting to change now with the democratization of tools, the Maker Movement and technology maturing. We believe Husarion is uniquely positioned for the upcoming boom, offering robot developers a holistic solution and lowering the barrier of entry to the market.”

The hardware part of the platform is the Husarion CORE2 board, a computer that interfaces directly with motors, servos, encoders or sensors. It’s powered by an ARM® CORTEX-M4 CPU, features 42x I/O ports and can support up to 4x DC motors and 6x servomechanisms. Wireless connectivity is provided by a built-in Wi-Fi module.
Photo Credit: Husarion – www.husarion.com
The Husarion CORE2-ROS is an alternative configuration with a Raspberry Pi 3 ARMv8-powered board layered on top, running a custom Linux distribution with the Robot Operating System (ROS) preinstalled. It allows users to tap into the rich set of modules and building tools already available for ROS. Real-time capabilities and high computing power enable advanced use cases, such as fully autonomous devices.

Developing software for CORE2-powered robots is easy. Husarion provides a Web IDE, allowing engineers to program their connected robots directly from the browser. There’s also an offline SDK and a convenient extension for Visual Studio Code. The open-source hFramework library, based on a real-time operating system, masks the complexity of interface communication behind an elegant, easy-to-use API.

CORE2 also works with Arduino libraries, which can be used with no modifications at all through the compatibility layer of the hFramework API.
Photo Credit: Husarion – www.husarion.com
For online access, programming and control, Husarion provides its dedicated Cloud. By registering the CORE2-powered robot at https://cloud.husarion.com, developers can update firmware online, build a custom Web control UI and share control of their device with anyone.

Starting at $89, Husarion CORE2 and CORE2-ROS controllers are now on sale through Crowd Supply.

Husarion also offers complete development kits, extra servo controllers and additional modules for compatibility with LEGO® Mindstorms or Makeblock® mechanics. For more information, please visit: https://www.crowdsupply.com/husarion/core2.

Key points:
A dedicated robot hardware controller, with built-in interfaces for sensors, servos, DC motors and encoders
Programming with free tools: online (via Husarion Cloud Web IDE) or offline (Visual Studio Code extension)
Compatible with ROS, provides C++ 11 open-source programming framework based on RTOS
Husarion Cloud: control, program and share robots, with customizable control UI
Allows faster development and more advanced robotics than general maker boards like Arduino or Raspberry Pi

About Husarion
Husarion was founded in 2013 in Kraków, Poland. In 2015, Husarion successfully financed a Kickstarter campaign for RoboCORE, the company’s first-generation controller. The company delivers a fast prototyping platform for consumer robots. Thanks to Husarion’s hardware modules, efficient programming tools and cloud management, engineers can rapidly develop and iterate on their robot ideas. Husarion simplifies the development of connected, commercial robots ready for mass production and provides kits for academic education.

For more information, visit: https://husarion.com/.

Media contact:

Piotr Sarota
public relations consultant
SAROTA PR – public relations agency
phone: +48 12 684 12 68
mobile: +48 606 895 326
email: piotr(at)sarota.pl
http://www.sarota.pl/
Jakub Misiura
public relations specialist
phone: +48 12 349 03 52
mobile: +48 696 778 568
email: jakub.misiura(at)sarota.pl


The post CORE2 consumer robot controller by Husarion appeared first on Roboticmagazine.
