Tag Archives: programming

#432311 Everyone Is Talking About AI—But Do ...

In 2017, artificial intelligence attracted $12 billion of VC investment. We are only beginning to discover the usefulness of AI applications. Amazon recently unveiled a brick-and-mortar grocery store that has successfully supplanted cashiers and checkout lines with computer vision, sensors, and deep learning. Between the investment, the press coverage, and the dramatic innovation, “AI” has become a hot buzzword. But does it even exist yet?

At the World Economic Forum Dr. Kai-Fu Lee, a Taiwanese venture capitalist and the founding president of Google China, remarked, “I think it’s tempting for every entrepreneur to package his or her company as an AI company, and it’s tempting for every VC to want to say ‘I’m an AI investor.’” He then observed that some of these AI bubbles could burst by the end of 2018, referring specifically to “the startups that made up a story that isn’t fulfillable, and fooled VCs into investing because they don’t know better.”

However, Dr. Lee firmly believes AI will continue to progress and will take many jobs away from workers. So, what is the difference between legitimate AI, with all of its pros and cons, and a made-up story?

If you parse through just a few stories that are allegedly about AI, you’ll quickly discover significant variation in how people define it, with a blurred line between emulated intelligence and machine learning applications.

I spoke to experts in the field of AI to try to find consensus, but the very question opens up more questions. For instance, when is it important to be accurate to a term’s original definition, and when does that commitment to accuracy amount to the splitting of hairs? It isn’t obvious, and hype is oftentimes the enemy of nuance. Additionally, there is now a vested interest in that hype—$12 billion, to be precise.

This conversation is also relevant because world-renowned thought leaders have been publicly debating the dangers posed by AI. Facebook CEO Mark Zuckerberg suggested that naysayers who attempt to “drum up these doomsday scenarios” are being negative and irresponsible. On Twitter, business magnate and OpenAI co-founder Elon Musk countered that Zuckerberg’s understanding of the subject is limited. In February, Elon Musk engaged again in a similar exchange with Harvard professor Steven Pinker. Musk tweeted that Pinker doesn’t understand the difference between functional/narrow AI and general AI.

Given the fears surrounding this technology, it’s important for the public to clearly understand the distinctions between different levels of AI so that they can realistically assess the potential threats and benefits.

As Smart As a Human?
Erik Cambria, an expert in the field of natural language processing, told me, “Nobody is doing AI today and everybody is saying that they do AI because it’s a cool and sexy buzzword. It was the same with ‘big data’ a few years ago.”

Cambria mentioned that AI, as a term, originally referenced the emulation of human intelligence. “And there is nothing today that is even barely as intelligent as the most stupid human being on Earth. So, in a strict sense, no one is doing AI yet, for the simple fact that we don’t know how the human brain works,” he said.

He added that the term “AI” is often used in reference to powerful tools for data classification. These tools are impressive, but they’re on a totally different spectrum from human cognition. Additionally, Cambria has noticed people claiming that neural networks are part of the new wave of AI. This strikes him as odd, because that technology already existed fifty years ago.

However, technologists no longer need to perform feature extraction by hand, and they have access to far greater computing power. These advancements are welcome, but it is perhaps dishonest to suggest that machines have emulated the intricacies of our cognitive processes.
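To make that shift concrete, here is a purely illustrative sketch (all data, thresholds, and function names are invented for this example): an engineer can hard-code a classification rule by hand, or a simple perceptron can learn equivalent weights directly from labeled raw inputs, with no manually engineered feature.

```python
# Illustrative contrast: a hand-crafted feature rule vs. a model
# that learns its own decision boundary from raw data.

def hand_crafted_classify(point):
    # An engineer inspects the data and manually encodes the rule:
    # label is 1 when the coordinates sum past a threshold.
    return 1 if point[0] + point[1] > 1.0 else 0

def train_perceptron(data, labels, epochs=50, lr=0.1):
    # The perceptron discovers its own weights from raw inputs;
    # no feature was engineered by hand.
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x0, x1), y in zip(data, labels):
            pred = 1 if w[0] * x0 + w[1] * x1 + b > 0 else 0
            err = y - pred
            w[0] += lr * err * x0
            w[1] += lr * err * x1
            b += lr * err
    return w, b

def perceptron_classify(point, w, b):
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0

data = [(0.0, 0.0), (0.2, 0.3), (0.9, 0.8), (1.2, 1.5), (0.1, 0.4), (1.0, 1.0)]
labels = [hand_crafted_classify(p) for p in data]  # ground truth from the rule

w, b = train_perceptron(data, labels)
# After training, the learned weights reproduce the hand-crafted rule
# on the training points.
for p, y in zip(data, labels):
    assert perceptron_classify(p, w, b) == y
```

The perceptron here is exactly the decades-old technology Cambria refers to; what has changed is the data and compute wrapped around it, not the core idea.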

“Companies are just looking at tricks to create a behavior that looks like intelligence but that is not real intelligence, it’s just a mirror of intelligence. These are expert systems that are maybe very good in a specific domain, but very stupid in other domains,” he said.

This mimicry of intelligence has inspired the public imagination. Domain-specific systems have delivered value in a wide range of industries. But those benefits have not lifted the cloud of confusion.

Assisted, Augmented, or Autonomous
When it comes to matters of scientific integrity, the issue of accurate definitions isn’t a peripheral matter. In a 1974 commencement address at the California Institute of Technology, Richard Feynman famously said, “The first principle is that you must not fool yourself—and you are the easiest person to fool.” In that same speech, Feynman also said, “You should not fool the layman when you’re talking as a scientist.” He opined that scientists should bend over backwards to show how they could be wrong. “If you’re representing yourself as a scientist, then you should explain to the layman what you’re doing—and if they don’t want to support you under those circumstances, then that’s their decision.”

In the case of AI, this might mean that professional scientists have an obligation to clearly state that they are developing extremely powerful, controversial, profitable, and even dangerous tools, which do not constitute intelligence in any familiar or comprehensive sense.

The term “AI” may have become overhyped and confused, but there are already some efforts underway to provide clarity. A recent PwC report drew a distinction between “assisted intelligence,” “augmented intelligence,” and “autonomous intelligence.” Assisted intelligence is demonstrated by the GPS navigation programs prevalent in cars today. Augmented intelligence “enables people and organizations to do things they couldn’t otherwise do.” And autonomous intelligence “establishes machines that act on their own,” such as autonomous vehicles.

Roman Yampolskiy is an AI safety researcher who wrote the book “Artificial Superintelligence: A Futuristic Approach.” I asked him whether the broad and differing meanings might present difficulties for legislators attempting to regulate AI.

Yampolskiy explained, “Intelligence (artificial or natural) comes on a continuum and so do potential problems with such technology. We typically refer to AI which one day will have the full spectrum of human capabilities as artificial general intelligence (AGI) to avoid some confusion. Beyond that point it becomes superintelligence. What we have today and what is frequently used in business is narrow AI. Regulating anything is hard, technology is no exception. The problem is not with terminology but with complexity of such systems even at the current level.”

When asked if people should fear AI systems, Dr. Yampolskiy commented, “Since capability comes on a continuum, so do problems associated with each level of capability.” He mentioned that accidents are already reported with AI-enabled products, and as the technology advances further, the impact could spread beyond privacy concerns or technological unemployment. These concerns about the real-world effects of AI will likely take precedence over dictionary-minded quibbles. However, the issue is also about honesty versus deception.

Is This Buzzword All Buzzed Out?
Finally, I directed my questions towards a company that is actively marketing an “AI Virtual Assistant.” Carl Landers, the CMO at Conversica, acknowledged that there are a multitude of explanations for what AI is and isn’t.

He said, “My definition of AI is technology innovation that helps solve a business problem. I’m really not interested in talking about the theoretical ‘can we get machines to think like humans?’ It’s a nice conversation, but I’m trying to solve a practical business problem.”

I asked him if AI is a buzzword that inspires publicity and attracts clients. According to Landers, this was certainly true three years ago, but those effects have already started to wane. Many companies now claim to have AI in their products, so it’s less of a differentiator. However, there is still a specific intention behind the word. Landers hopes to convey that previously impossible things are now possible. “There’s something new here that you haven’t seen before, that you haven’t heard of before,” he said.

According to Brian Decker, founder of Encom Lab, machine learning algorithms only work to satisfy their preexisting programming, not out of an interior drive for better understanding. He therefore views the debate over AI as an entirely semantic one.

Decker stated, “A marketing exec will claim a photodiode controlled porch light has AI because it ‘knows when it is dark outside,’ while a good hardware engineer will point out that not one bit in a register in the entire history of computing has ever changed unless directed to do so according to the logic of preexisting programming.”
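Decker’s porch light translates directly into code. In this minimal, hypothetical sketch (the threshold and sensor units are invented), the light “knows it is dark” only because a programmer wrote the comparison in advance:

```python
# Decker's porch light as code: the "intelligence" is nothing more
# than a fixed threshold chosen by an engineer.

DARKNESS_THRESHOLD = 50  # arbitrary sensor units, picked in advance

def porch_light_on(photodiode_reading):
    # No bit here ever changes except as directed by this
    # preexisting logic: lower reading means less light.
    return photodiode_reading < DARKNESS_THRESHOLD

assert porch_light_on(10) is True    # night: sensor reads low
assert porch_light_on(200) is False  # daylight: sensor reads high
```

Whether one calls that comparison “AI” is precisely the semantic argument Decker describes.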

Although it’s important for everyone to be on the same page regarding specifics and underlying meaning, AI-powered products are already powering past these debates by creating immediate value for humans. And ultimately, humans care more about value than they do about semantic distinctions. In an interview with Quartz, Kai-Fu Lee revealed that algorithmic trading systems have already given him an 8X return over his private banking investments. “I don’t trade with humans anymore,” he said.

Image Credit: vrender / Shutterstock.com

Posted in Human Robots

#432036 The Power to Upgrade Our Own Biology Is ...

Upgrading our biology may sound like science fiction, but attempts to improve humanity actually date back thousands of years. Every day, we enhance ourselves through seemingly mundane activities such as exercising, meditating, or consuming performance-enhancing drugs like caffeine or Adderall. However, the tools with which we upgrade our biology are improving at an accelerating rate and becoming increasingly invasive.

In recent decades, we have developed a wide array of powerful methods, such as genetic engineering and brain-machine interfaces, that are redefining our humanity. In the short run, such enhancement technologies have medical applications and may be used to treat many diseases and disabilities. Additionally, in the coming decades, they could allow us to boost our physical abilities or even digitize human consciousness.

What’s New?
Many futurists argue that our devices, such as our smartphones, are already an extension of our cortex and in many ways an abstract form of enhancement. According to philosophers Andy Clark and David Chalmers’ theory of extended mind, we use technology to expand the boundaries of the human mind beyond our skulls.

One can argue that having access to a smartphone enhances one’s cognitive capacities and is an indirect form of enhancement in its own right, a kind of abstract brain-machine interface. Beyond that, wearable devices and computers are already on the market, and people such as athletes use them to boost their progress.

However, these interfaces are becoming less abstract.

Not long ago, Elon Musk announced a new company, Neuralink, with the goal of merging the human mind with AI. The past few years have seen remarkable developments in both the hardware and software of brain-machine interfaces. Experts are designing more intricate electrodes while programming better algorithms to interpret neural signals. Scientists have already succeeded in enabling paralyzed patients to type with their minds, and are even allowing brains to communicate with one another purely through brainwaves.

Ethical Challenges of Enhancement
There are many social and ethical implications of such advancements.

One of the most fundamental issues with cognitive and physical enhancement techniques is that they contradict the very definition of merit and success that society has relied on for millennia. Many forms of performance-enhancing drugs have been considered “cheating” for the longest time.

But perhaps we ought to revisit some of our fundamental assumptions as a society.

For example, we like to credit hard work and talent in a fair manner, where “fair” generally implies that individuals have acted in ways that merit their rewards. If you are talented and successful, it is considered to be because you chose to work hard and take advantage of the opportunities available to you. But by these standards, how much of our accomplishments can we truly take credit for?

For instance, the genetic lottery can have an enormous impact on an individual’s predisposition and personality, which can in turn affect factors such as motivation, reasoning skills, and other mental abilities. Many people are born with a natural ability or a physique that gives them an advantage in a particular area or predisposes them to learn faster. But is it justified to reward someone for excellence if their genes had a pivotal role in their path to success?

Beyond that, there are already many ways in which we take “shortcuts” to better mental performance. Seemingly mundane activities like drinking coffee, meditating, exercising, or sleeping well can boost one’s performance in any given area and are tolerated by society. Even the use of language can have positive physical and psychological effects on the human brain, which can be liberating to the individual and immensely beneficial to society at large. And let’s not forget that some of us are born with far more access to literacy and education than others.

Given all these reasons, one could argue that cognitive abilities and talents are currently derived more from uncontrollable factors and luck than we like to admit. If anything, technologies like brain-machine interfaces can enhance individual autonomy and allow one a choice of how capable they become.

As Karim Jebari points out, if a certain characteristic or trait is required to perform a particular role and an individual lacks this trait, would it be wrong to implement the trait through brain-machine interfaces or genetic engineering? How is this different from any conventional form of learning or acquiring a skill? If anything, this would be removing limitations on individuals that result from factors outside their control, such as a biological predisposition (or even traits induced by traumatic experiences) to act or perform in a certain way.

Another major ethical concern is equality. As with any other emerging technology, there are valid concerns that cognitive enhancement tech will benefit only the wealthy, thus exacerbating current inequalities. This is where public policy and regulations can play a pivotal role in the impact of technology on society.

Enhancement technologies can either contribute to inequality or allow us to solve it. Educating and empowering the under-privileged can happen at a much more rapid rate, helping the overall rate of human progress accelerate. The “normal range” for human capacity and intelligence, however it is defined, could shift dramatically towards more positive trends.

Many have also raised concerns over the negative applications of government-led biological enhancement, including eugenics-like movements and super-soldiers. Naturally, there are also issues of safety, security, and well-being, especially within the early stages of experimentation with enhancement techniques.

Brain-machine interfaces, for instance, could have implications for autonomy. The interface uses information extracted from the brain to stimulate or modify systems in order to accomplish a goal. This process can be enhanced by implementing an artificial intelligence system on the interface itself, which opens up the possibility of a third party manipulating individuals’ personalities, emotions, and desires through the interface.

A Tool For Transcendence
It’s important to discuss these risks, not so that we begin to fear and avoid such technologies, but so that we continue to advance in a way that minimizes harm and allows us to optimize the benefits.

Stephen Hawking notes that “with genetic engineering, we will be able to increase the complexity of our DNA, and improve the human race.” Indeed, the potential advantages of modifying biology are revolutionary. Doctors would gain access to a powerful tool to tackle disease, allowing us to live longer and healthier lives. We might be able to extend our lifespan and tackle aging, perhaps a critical step to becoming a space-faring species. We may begin to modify the brain’s building blocks to become more intelligent and capable of solving grand challenges.

In their book Evolving Ourselves, Juan Enriquez and Steve Gullans describe a world where evolution is no longer driven by natural processes. Instead, it is driven by human choices, through what they call unnatural selection and non-random mutation. Human enhancement is bringing us closer to such a world—it could allow us to take control of our evolution and truly shape the future of our species.

Image Credit: GrAl/ Shutterstock.com


#431790 FT 300 force torque sensor

Robotiq Updates FT 300 Sensitivity for High-Precision Tasks With Universal Robots
Force Torque Sensor feeds data to Universal Robots Force Mode
Quebec City, Canada, November 13, 2017 – Robotiq launches a 10 times more sensitive version of its FT 300 Force Torque Sensor. With Plug + Play integration on all Universal Robots, the FT 300 performs highly repeatable precision force control tasks such as finishing, product testing, assembly and precise part insertion.
This force torque sensor comes with updated free URCap software able to feed data to the Universal Robots Force Mode. “This new feature allows the user to perform precise force insertion assembly and many finishing applications where force control with high sensitivity is required,” explains Robotiq CTO Jean-Philippe Jobin*.
The URCap also includes a new calibration routine. “We’ve integrated a step-by-step procedure that guides the user through the process, which takes less than 2 minutes,” adds Jobin. “A new dashboard also provides real-time force and moment readings on all 6 axes. Moreover, pre-built programming functions are now embedded in the URCap for intuitive programming.”
See some of the FT 300’s new capabilities in the following demo videos:
#1 How to calibrate with the FT 300 URCap Dashboard
#2 Linear search demo
#3 Path recording demo
Visit the FT 300 webpage or get a quote here
Get the FT 300 specs here
Get more info in the FAQ
Get free Skills to accelerate robot programming of force control tasks.
Get free robot cell deployment resources on leanrobotics.org
* Available with Universal Robots CB3.1 controller only
About Robotiq
Robotiq’s Lean Robotics methodology and products enable manufacturers to deploy productive robot cells across their factory. They leverage the Lean Robotics methodology for faster time to production and increased productivity from their robots. Production engineers standardize on Robotiq’s Plug + Play components for their ease of programming, built-in integration, and adaptability to many processes. They rely on the Flow software suite to accelerate robot projects and optimize robot performance once in production.
Robotiq is the humans behind the robots: an employee-owned business with a passionate team and an international partner network.
Media contact
David Maltais, Communications and Public Relations Coordinator
Press Release Provided by: Robotiq.Com
The post FT 300 force torque sensor appeared first on Roboticmagazine.


#431130 Innovative Collaborative Robot sets new ...

Press Release by: HMK
As the trend of Industry 4.0 takes the world by storm, collaborative robots and smart factories are becoming the latest hot topic. At this year’s PPMA show, HMK will demonstrate the world’s first collaborative robot with built-in vision recognition from Techman Robot.
The new TM5 Cobot from HMK merges systems that usually function separately in conventional robots. It is the only collaborative robot to incorporate simple programming, a fully integrated vision system, and the latest safety standards in a single unit.
With capabilities including direction identification, self-calibration of coordinates, and visual task operation enabled by built-in vision, the TM5 can fine-tune itself to actual conditions at any time, accomplishing complex processes that used to demand the integration of various pieces of equipment. It requires less manpower and time to recalibrate when objects or coordinates move, significantly improving flexibility and reducing maintenance costs.
Photo Credit: hmkdirect.com
Simple
Programming could not be easier. Using an easy-to-use flow-chart program, TM-Flow runs on any tablet, PC, or laptop over a wireless link to the TM control box, so complex automation tasks can be realised in minutes. Clever teach functions and wizards also allow hand-guided programming and easy incorporation of operations such as palletising, de-palletising, and conveyor tracking.
Smart
The TM5 is the only cobot to feature a full-colour vision package as standard, mounted on the wrist of the robot and fully supported within TM-Flow. The result allows users to easily integrate the robot into the application without complex tooling or the need for expensive add-on vision hardware and programming.
Safe
The recently CE-marked TM5 now incorporates the new ISO/TS 15066 guidelines on safety in collaborative robot systems, which cover four types of collaborative operation:
a) Safety-rated monitored stop
b) Hand guiding
c) Speed and separation monitoring
d) Power and force limiting
Safety hardware inputs also allow the Cobot to be integrated into wider safety systems.
When you add EtherCAT and Modbus network connectivity, I/O expansion options, IoT-ready network access, and ex-stock delivery, the TM5 sets a new benchmark for this evolving robotics sector.
The TM5 is available with two payload options, 4 kg and 6 kg, with a reach of 900 mm and 700 mm respectively, both with a positioning repeatability of 0.05 mm.
HMK will be showcasing the new TM5 Cobot at this year’s PPMA show at the NEC; visit stand F102 to get hands-on with the Cobot and experience the innovative and intuitive graphic HMI and hand-guiding features.
For more information contact HMK on 01260 279411, email sales@hmkdirect.com or visit www.hmkdirect.com
The post Innovative Collaborative Robot sets new benchmark appeared first on Roboticmagazine.


#430854 Get a Live Look Inside Singularity ...

Singularity University’s (SU) second annual Global Summit begins today in San Francisco, and the Singularity Hub team will be there to give you a live look inside the event, exclusive speaker interviews, and articles on great talks.
Whereas SU’s other summits each focus on a specific field or industry, Global Summit is a broad look at emerging technologies and how they can help solve the world’s biggest challenges.
Talks will cover the latest in artificial intelligence, the brain and technology, augmented and virtual reality, space exploration, the future of work, the future of learning, and more.
We’re bringing three full days of live Facebook programming, streaming on Singularity Hub’s Facebook page, complete with 30+ speaker interviews, tours of the EXPO innovation hall, and tech demos. You can also livestream main stage talks at Singularity University’s Facebook page.
Interviews include Peter Diamandis, cofounder and chairman of Singularity University; Sylvia Earle, National Geographic explorer-in-residence; Esther Wojcicki, founder of the Palo Alto High Media Arts Center; Bob Richards, founder and CEO of Moon Express; Matt Oehrlein, cofounder of MegaBots; and Craig Newmark, founder of Craigslist and the Craig Newmark Foundation.
Pascal Finette, SU vice president of startup solutions, and Alison Berman, SU staff writer and digital producer, will host the show, and Lisa Kay Solomon, SU chair of transformational practices, will put on a special daily segment on exponential leadership with thought leaders.
Make sure you don’t miss anything by ‘liking’ the Singularity Hub and Singularity University Facebook pages and turn on notifications from both pages so you know when we go live. And to get a taste of what’s in store, check out the below selection of stories from last year’s event.
Are We at the Edge of a Second Sexual Revolution?
By Vanessa Bates Ramirez
“Brace yourself, because according to serial entrepreneur Martin Varsavsky, all our existing beliefs about procreation are about to be shattered again…According to Varsavsky, the second sexual revolution will decouple procreation from sex, because sex will no longer be the best way to make babies.”
VR Pioneer Chris Milk: Virtual Reality Will Mirror Life Like Nothing Else Before
By Jason Ganz
“Milk is already a legend in the VR community…But [he] is just getting started. His company Within has plans to help shape the language we use for virtual reality storytelling. Because let’s be clear, VR storytelling is still very much in its infancy. This fact makes it even crazier there are already VR films out there that can inspire and captivate on such a profound level. And we’re only going up from here.”
7 Key Factors Driving the Artificial Intelligence Revolution
By David Hill
“Jacobstein calmly and optimistically assures that this revolution isn’t going to disrupt humans completely, but usher in a future in which there’s a symbiosis between human and machine intelligence. He highlighted 7 factors driving this revolution.”
Are There Other Intelligent Civilizations Out There? Two Views on the Fermi Paradox
By Alison Berman
“Cliché or not, when I stare up at the sky, I still wonder if we’re alone in the galaxy. Could there be another technologically advanced civilization out there? During a panel discussion on space exploration at Singularity University’s Global Summit, Jill Tarter, the Bernard M. Oliver chair at the SETI Institute, was asked to explain the Fermi paradox and her position on it. Her answer was pretty brilliant.”
Engineering Will Soon Be ‘More Parenting Than Programming’
By Sveta McShane
“In generative design, the user states desired goals and constraints and allows the computer to generate entire designs, iterations and solution sets based on those constraints. It is, in fact, a lot like parents setting boundaries for their children’s activities. The user basically says, ‘Yes, it’s ok to do this, but it’s not ok to do that.’ The resulting solutions are ones you might never have thought of on your own.”
Biohacking Will Let You Connect Your Body to Anything You Want
By Vanessa Bates Ramirez
“How many cyborgs did you see during your morning commute today? I would guess at least five. Did they make you nervous? Probably not; you likely didn’t even realize they were there…[Hannes] Sjoblad said that the cyborgs we see today don’t look like Hollywood prototypes; they’re regular people who have integrated technology into their bodies to improve or monitor some aspect of their health.”
Peter Diamandis: We’ll Radically Extend Our Lives With New Technologies
By Jason Dorrier
“[Diamandis] said humans aren’t the longest-lived animals. Other species have multi-hundred-year lifespans. Last year, a study “dating” Greenland sharks found they can live roughly 400 years. Though the technique isn’t perfectly precise, they estimated one shark to be about 392. Its approximate birthday was 1624…Diamandis said he asked himself: If these animals can live centuries—why can’t I?”
