Tag Archives: recognition

#431165 Intel Jumps Into Brain-Like Computing ...

The brain has long inspired the design of computers and their software. Now Intel has become the latest tech company to decide that mimicking the brain’s hardware could be the next stage in the evolution of computing.
On Monday the company unveiled an experimental “neuromorphic” chip called Loihi. Neuromorphic chips are microprocessors whose architecture is configured to mimic the biological brain’s network of neurons and the connections between them called synapses.
While neural networks—the in vogue approach to artificial intelligence and machine learning—are also inspired by the brain and use layers of virtual neurons, they are still implemented on conventional silicon hardware such as CPUs and GPUs.
The main benefit of mimicking the brain’s architecture on a physical chip, say neuromorphic computing’s proponents, is energy efficiency—the human brain runs on roughly 20 watts. The “neurons” in neuromorphic chips act as both processor and memory, removing the need to shuttle data back and forth between separate units, as traditional chips must. Each neuron also only needs to be powered while it’s firing.
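The energy argument can be made concrete with a toy model. Below is a minimal leaky integrate-and-fire neuron sketch; it illustrates the general spiking-neuron idea only, not Loihi’s actual (and more elaborate) neuron model, and all parameters are made up:

```python
# Minimal leaky integrate-and-fire (LIF) neuron sketch -- a simplification
# of the spiking model neuromorphic chips implement in silicon.
# Threshold and leak values here are illustrative, not Loihi's.

def simulate_lif(input_currents, threshold=1.0, leak=0.9):
    """Integrate input over time; emit a spike (1) only when the membrane
    potential crosses threshold, then reset. Between spikes the neuron is
    idle -- the property that saves energy in hardware."""
    potential = 0.0
    spikes = []
    for current in input_currents:
        potential = potential * leak + current  # leaky integration
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

print(simulate_lif([0.3, 0.4, 0.5, 0.0, 0.9, 0.6]))
```

The neuron stays silent for most time steps and does work only at the two spikes, which is the intuition behind the efficiency claim.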

At present, most machine learning is done in data centers due to the massive energy and computing requirements. Creating chips that capture some of nature’s efficiency could allow AI to be run directly on devices like smartphones, cars, and robots.
This is exactly the kind of application Michael Mayberry, managing director of Intel’s research arm, touts in a blog post announcing Loihi. He talks about CCTV cameras that can run image recognition to identify missing persons or traffic lights that can track traffic flow to optimize timing and keep vehicles moving.
There’s still a long way to go before that happens though. According to Wired, so far Intel has only been working with prototypes, and the first full-size version of the chip won’t be built until November.
Once complete, it will feature 130,000 neurons and 130 million synaptic connections split between 128 computing cores. The device will be 1,000 times more energy-efficient than standard approaches, according to Mayberry, but more impressive are claims the chip will be capable of continuous learning.
Intel’s newly launched self-learning neuromorphic chip.
Normally deep learning works by training a neural network on giant datasets to create a model that can then be applied to new data. The Loihi chip will combine training and inference on the same chip, which will allow it to learn on the fly, constantly updating its models and adapting to changing circumstances without having to be deliberately re-trained.
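The contrast between batch retraining and learning on the fly can be sketched in a few lines. This toy example updates a one-parameter model one observation at a time; it stands in for the general idea of online learning, not for Loihi’s actual spike-based on-chip learning rules:

```python
# Toy sketch of online (continual) learning vs. frozen batch models:
# instead of training once and deploying a fixed model, update the
# model after every observation. A one-parameter linear model y = w * x
# keeps the idea visible; real on-chip rules (e.g., spike-timing-
# dependent plasticity) work very differently.

def online_update(weight, x, target, lr=0.1):
    """One stochastic-gradient step on squared error for y = w * x."""
    prediction = weight * x
    error = prediction - target
    return weight - lr * error * x  # adapt immediately, no retraining pass

weight = 0.0
stream = [(1.0, 2.0), (2.0, 4.1), (1.5, 2.9), (3.0, 6.2)]  # streaming data
for x, target in stream:
    weight = online_update(weight, x, target)
print(round(weight, 3))  # weight drifts toward the true slope (~2)
```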
A select group of universities and research institutions will be the first to get their hands on the new chip in the first half of 2018, but Mayberry said it could be years before it’s commercially available. Whether commercialization happens at all may largely depend on whether early adopters can get the hardware to solve any practically useful problems.
So far neuromorphic computing has struggled to gain traction outside the research community. IBM released a neuromorphic chip called TrueNorth in 2014, but the device has yet to showcase any commercially useful applications.
Writing in IEEE Spectrum, Lee Gomes gives an excellent summary of the hurdles facing neuromorphic computing. One is that deep learning can run on very simple, low-precision hardware that can be optimized to use very little power, which suggests complicated new architectures may struggle to find purchase.
It’s also not easy to transfer deep learning approaches developed on conventional chips over to neuromorphic hardware, and even Intel Labs chief scientist Narayan Srinivasa admitted to Forbes Loihi wouldn’t work well with some deep learning models.
Finally, there’s considerable competition in the quest to develop new computer architectures specialized for machine learning. GPU vendors Nvidia and AMD have pivoted to take advantage of this newfound market and companies like Google and Microsoft are developing their own in-house solutions.
Intel, for its part, isn’t putting all its eggs in one basket. Last year it bought two companies building chips for specialized machine learning—Movidius and Nervana—and this was followed up with the $15 billion purchase of self-driving car chip- and camera-maker Mobileye.
And while the jury is still out on neuromorphic computing, it makes sense for a company eager to position itself as the AI chipmaker of the future to have its fingers in as many pies as possible. There are a growing number of voices suggesting that despite its undoubted power, deep learning alone will not allow us to imbue machines with the kind of adaptable, general intelligence humans possess.
What new approaches will get us there are hard to predict, but it’s entirely possible they will only work on hardware that closely mimics the one device we already know is capable of supporting this kind of intelligence—the human brain.
Image Credit: Intel

Posted in Human Robots

#431130 Innovative Collaborative Robot sets new ...

Press Release by: HMK
As the trend of Industry 4.0 takes the world by storm, collaborative robots and smart factories are becoming the latest hot topic. At this year’s PPMA show, HMK will demonstrate the world’s first collaborative robot with built-in vision recognition from Techman Robot.
The new TM5 Cobot from HMK merges systems that usually function separately in conventional robots: it is the only collaborative robot to incorporate simple programming, a fully integrated vision system, and the latest safety standards in a single unit.
With capabilities including direction identification, self-calibration of coordinates, and visual task operation enabled by its built-in vision, the TM5 can fine-tune itself to actual conditions at any time, accomplishing complex processes that used to demand the integration of various equipment. It requires less manpower and time to recalibrate when objects or coordinates move, significantly improving flexibility and reducing maintenance costs.
Photo Credit: hmkdirect.com
Simple
Programming could not be easier. Using the easy flow-chart program TM-Flow, which runs on any tablet, PC, or laptop over a wireless link to the TM control box, complex automation tasks can be realised in minutes. Clever teach functions and wizards also allow hand-guided programming and easy incorporation of operations such as palletising, de-palletising, and conveyor tracking.
Smart
The TM5 is the only cobot to feature a full-colour vision package as standard, mounted on the wrist of the robot and fully supported within TM-Flow. The result allows users to easily integrate the robot into their application without complex tooling or the need for expensive add-on vision hardware and programming.
Safe
The recently CE-marked TM5 now incorporates the new ISO/TS 15066 guidelines on safety in collaborative robot systems, which cover four types of collaborative operation:
a) Safety-rated monitored stop
b) Hand guiding
c) Speed and separation monitoring
d) Power and force limiting
Safety hardware inputs also allow the Cobot to be integrated into wider safety systems.
When you add EtherCat and Modbus network connectivity and I/O expansion options, IoT ready network access and ex-stock delivery, the TM5 sets a new benchmark for this evolving robotics sector.
The TM5 is available with two payload options, 4 kg and 6 kg, with a reach of 900 mm and 700 mm respectively, both with positioning repeatability of 0.05 mm.
HMK will be showcasing the new TM5 Cobot at this year’s PPMA show at the NEC; visit stand F102 to get hands-on with the Cobot and experience its innovative, intuitive graphic HMI and hand-guiding features.
For more information contact HMK on 01260 279411, email sales@hmkdirect.com or visit www.hmkdirect.com
The post Innovative Collaborative Robot sets new benchmark appeared first on Roboticmagazine.

Posted in Human Robots

#431081 How the Intelligent Home of the Future ...

As Dorothy famously said in The Wizard of Oz, there’s no place like home. Home is where we go to rest and recharge. It’s familiar, comfortable, and our own. We take care of our homes by cleaning and maintaining them, and fixing things that break or go wrong.
What if our homes, on top of giving us shelter, could also take care of us in return?
According to Chris Arkenberg, this could be the case in the not-so-distant future. As part of Singularity University’s Experts On Air series, Arkenberg gave a talk called “How the Intelligent Home of The Future Will Care For You.”
Arkenberg is a research and strategy lead at Orange Silicon Valley, and was previously a research fellow at the Deloitte Center for the Edge and a visiting researcher at the Institute for the Future.
Arkenberg told the audience that there’s an evolution going on: homes are going from being smart to being connected, and will ultimately become intelligent.
Market Trends
Intelligent home technologies are just now budding, but broader trends point to huge potential for their growth. We as consumers already expect continuous connectivity wherever we go—what do you mean my phone won’t get reception in the middle of Yosemite? What do you mean the smart TV is down and I can’t stream Game of Thrones?
As connectivity has evolved from a privilege to a basic expectation, Arkenberg said, we’re also starting to have a better sense of what it means to give up our data in exchange for services and conveniences. It’s so easy to click a few buttons on Amazon and have stuff show up at your front door a few days later—never mind that data about your purchases gets recorded and aggregated.
“Right now we have single devices that are connected,” Arkenberg said. “Companies are still trying to show what the true value is and how durable it is beyond the hype.”

Connectivity is the basis of an intelligent home. To take a dumb object and make it smart, you get it online. Belkin’s Wemo, for example, lets users control lights and appliances wirelessly and remotely, and can be paired with Amazon Echo or Google Home for voice-activated control.
Speaking of voice-activated control, Arkenberg pointed out that physical interfaces are evolving, too, to the point that we’re actually getting rid of interfaces entirely, or transitioning to ‘soft’ interfaces like voice or gesture.
Drivers of Change
Consumers are open to smart home tech and companies are working to provide it. But what are the drivers making this tech practical and affordable? Arkenberg said there are three big ones:
Computation: Computers have gotten exponentially more powerful over the past few decades. If it weren’t for processors that could handle massive quantities of information, nothing resembling an Echo or Alexa would even be possible. Artificial intelligence and machine learning are powering these devices, and they hinge on computing power too.
Sensors: “There are more things connected now than there are people on the planet,” Arkenberg said. Market research firm Gartner estimates there are 8.4 billion connected things currently in use. Wherever digital can replace hardware, it’s doing so. Cheaper sensors mean we can connect more things, which can then connect to each other.
Data: “Data is the new oil,” Arkenberg said. “The top companies on the planet are all data-driven giants. If data is your business, though, then you need to keep finding new ways to get more and more data.” Home assistants are essentially data collection systems that sit in your living room and collect data about your life. That data in turn sets up the potential of machine learning.
Colonizing the Living Room
Alexa and Echo can turn lights on and off, and Nest can help you be energy-efficient. But beyond these, what does an intelligent home really look like?
Arkenberg’s vision of an intelligent home uses sensing, data, connectivity, and modeling to manage resource efficiency, security, productivity, and wellness.
Autonomous vehicles provide an interesting comparison: they’re surrounded by sensors that constantly map the world, building dynamic models that understand the change around the vehicle and thereby predict what comes next. Might we want this to become a model for our homes, too? By making them smart and connecting them, Arkenberg said, they’d become “more biological.”
There are already several products on the market that fit this description. RainMachine uses weather forecasts to adjust home landscape watering schedules. Neurio monitors energy usage, identifies areas where waste is happening, and makes recommendations for improvement.
These are small steps in connecting our homes with knowledge systems and giving them the ability to understand and act on that knowledge.
He sees the homes of the future being equipped with digital ears (in the form of home assistants, sensors, and monitoring devices) and digital eyes (in the form of facial recognition technology and machine vision to recognize who’s in the home). “These systems are increasingly able to interrogate emotions and understand how people are feeling,” he said. “When you push more of this active intelligence into things, the need for us to directly interface with them becomes less relevant.”
Could our homes use these same tools to benefit our health and wellness? FREDsense uses bacteria to create electrochemical sensors that can be applied to home water systems to detect contaminants. If that’s not personal enough for you, get a load of this: ClinicAI can be installed in your toilet bowl to monitor and evaluate your biowaste. What’s the point, you ask? Early detection of colon cancer and other diseases.
What if one day, your toilet’s biowaste analysis system could link up with your fridge, so that when you opened it, it would tell you what to eat, how much, and at what time of day?
Roadblocks to Intelligence
“The connected and intelligent home is still a young category trying to establish value, but the technological requirements are now in place,” Arkenberg said. We’re already used to living in a world of ubiquitous computation and connectivity, and we have entrained expectations about things being connected. For the intelligent home to become a widespread reality, its value needs to be established and its challenges overcome.
One of the biggest challenges will be getting used to the idea of continuous surveillance. “We’ll get convenience and functionality if we give up our data, but how far are we willing to go? Establishing security and trust is going to be a big challenge moving forward,” Arkenberg said.
There’s also cost and reliability, interoperability and fragmentation of devices, or conversely, what Arkenberg called ‘platform lock-on,’ where you’d end up relying on only one provider’s system and be unable to integrate devices from other brands.
Ultimately, Arkenberg sees homes being able to learn about us, manage our scheduling and transit, watch our moods and our preferences, and optimize our resource footprint while predicting and anticipating change.
“This is the really fascinating provocation of the intelligent home,” Arkenberg said. “And I think we’re going to start to see this play out over the next few years.”
Sounds like a home Dorothy wouldn’t recognize, in Kansas or anywhere else.
Stock Media provided by adam121 / Pond5

Posted in Human Robots

#430743 Teaching Machines to Understand, and ...

We humans are swamped with text. It’s not just news and other timely information: Regular people are drowning in legal documents. The problem is so bad we mostly ignore it. Every time a person uses a store’s loyalty rewards card or connects to an online service, his or her activities are governed by the equivalent of hundreds of pages of legalese. Most people pay no attention to these massive documents, often labeled “terms of service,” “user agreement,” or “privacy policy.”
These are just part of a much wider societal problem of information overload. There is so much data stored—exabytes of it, as much as all the words ever spoken in human history—that it’s humanly impossible to read and interpret it all. Often, we narrow down our pool of information by choosing particular topics or issues to pay attention to. But it’s important to actually know the meaning and contents of the legal documents that govern how our data is stored and who can see it.
As computer science researchers, we are working on ways artificial intelligence algorithms could digest these massive texts and extract their meaning, presenting it in terms regular people can understand.
Can computers understand text?
Computers store data as 0s and 1s—data that cannot be directly understood by humans. They interpret these data as instructions for displaying text, sound, images, or videos that are meaningful to people. But can computers actually understand the language, not only presenting the words but also their meaning?
One way to find out is to ask computers to summarize their knowledge in ways that people can understand and find useful. It would be best if AI systems could process text quickly enough to help people make decisions as they are needed—for example, when you’re signing up for a new online service and are asked to agree with the site’s privacy policy.
What if a computerized assistant could digest all that legal jargon in a few seconds and highlight key points? Perhaps a user could even tell the automated assistant to pay particular attention to certain issues, like when an email address is shared, or whether search engines can index personal posts. Companies could use this capability, too, to analyze contracts or other lengthy documents.
To do this sort of work, we need to combine a range of AI technologies, including machine learning algorithms that take in large amounts of data and independently identify connections among them; knowledge representation techniques to express and interpret facts and rules about the world; speech recognition systems to convert spoken language to text; and human language comprehension programs that process the text and its context to determine what the user is telling the system to do.
Examining privacy policies
A modern internet-enabled life today more or less requires trusting for-profit companies with private information (like physical and email addresses, credit card numbers and bank account details) and personal data (photos and videos, email messages and location information).
These companies’ cloud-based systems typically keep multiple copies of users’ data as part of backup plans to prevent service outages. That means there are more potential targets—each data center must be securely protected both physically and electronically. Of course, internet companies recognize customers’ concerns and employ security teams to protect users’ data. But the specific and detailed legal obligations they undertake to do that are found in their impenetrable privacy policies. No regular human—and perhaps even no single attorney—can truly understand them.
In our study, we ask computers to summarize the terms and conditions regular users say they agree to when they click “Accept” or “Agree” buttons for online services. We downloaded the publicly available privacy policies of various internet companies, including Amazon AWS, Facebook, Google, HP, Oracle, PayPal, Salesforce, Snapchat, Twitter, and WhatsApp.
Summarizing meaning
Our software examines the text and uses information extraction techniques to identify key information specifying the legal rights, obligations and prohibitions identified in the document. It also uses linguistic analysis to identify whether each rule applies to the service provider, the user or a third-party entity, such as advertisers and marketing companies. Then it presents that information in clear, direct, human-readable statements.
For example, our system identified one aspect of Amazon’s privacy policy as telling a user, “You can choose not to provide certain information, but then you might not be able to take advantage of many of our features.” Another aspect of that policy was described as “We may also collect technical information to help us identify your device for fraud prevention and diagnostic purposes.”
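As a rough illustration of this kind of extraction (not the researchers’ actual system, which uses far richer linguistic analysis), a toy classifier might map modal verbs to rule types and opening pronouns to parties:

```python
import re

# Toy sketch of modal-verb-based policy extraction: map each sentence to a
# (party, rule-type) pair. The modal and party vocabularies here are
# invented for illustration.

MODALS = {
    "may": "right", "can": "right", "might": "right",
    "must": "obligation", "will": "obligation", "shall": "obligation",
    "may not": "prohibition", "must not": "prohibition",
    "cannot": "prohibition",
}
PARTIES = {"we": "provider", "you": "user"}  # everything else -> third party

def classify(sentence):
    s = sentence.lower()
    # Try longer modal phrases first so "must not" beats "must".
    for modal in sorted(MODALS, key=len, reverse=True):
        if re.search(r"\b" + modal + r"\b", s):
            party = PARTIES.get(s.split()[0], "third party")
            return (party, MODALS[modal])
    return None

print(classify("We may collect technical information about your device."))
print(classify("You must not share your account credentials."))
```

Even this crude version shows how sentences in legalese can be reduced to structured statements about who is permitted, required, or forbidden to do something.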

We also found, with the help of the summarizing system, that privacy policies often include rules for third parties—companies that aren’t the service provider or the user—that people might not even know are involved in data storage and retrieval.
The largest number of rules in privacy policies—43 percent—apply to the company providing the service. Just under a quarter of the rules—24 percent—create obligations for users and customers. The rest of the rules govern behavior by third-party services or corporate partners, or could not be categorized by our system.

The next time you click the “I Agree” button, be aware that you may be agreeing to share your data with other hidden companies who will be analyzing it.
We are continuing to improve our ability to succinctly and accurately summarize complex privacy policy documents in ways that people can understand and use to assess the risks associated with using a service.

This article was originally published on The Conversation. Read the original article.

Posted in Human Robots

#430668 Why Every Leader Needs to Be Obsessed ...

This article is part of a series exploring the skills leaders must learn to make the most of rapid change in an increasingly disruptive world. The first article in the series, “How the Most Successful Leaders Will Thrive in an Exponential World,” broadly outlines four critical leadership skills—futurist, technologist, innovator, and humanitarian—and how they work together.
Today’s post, part five in the series, takes a more detailed look at leaders as technologists. Be sure to check out part two of the series, “How Leaders Dream Boldly to Bring New Futures to Life,” part three, “How All Leaders Can Make the World a Better Place,” and part four, “How Leaders Can Make Innovation Everyone’s Day Job.”
In the 1990s, Tower Records was the place to get new music. Successful and popular, the California chain spread far and wide, and in 1998, they took on $110 million in debt to fund aggressive further expansion. This wasn’t, as it turns out, the best of timing.
The first portable digital music player went on sale the same year. The following year brought Napster, a file sharing service allowing users to freely share music online. By 2000, Napster hosted 20 million users swapping songs. Then in 2001, Apple’s iPod and iTunes arrived, and when the iTunes Music Store opened in 2003, Apple sold over a million songs the first week.
As music was digitized, hard copies began to go out of style, and sales and revenue declined.
Tower first filed for bankruptcy in 2004 and again (for the last time) in 2006. The internet wasn’t the only reason for Tower’s demise. Mismanagement and price competition from electronics retailers like Best Buy also played a part. Still, today, the vast majority of music is purchased or streamed entirely online, and record stores are for the most part a niche market.
The writing was on the wall, but those impacted most had trouble reading it.
Why is it difficult for leaders to see technological change coming and right the ship before it’s too late? Why did Tower go all out on expansion just as the next big thing took the stage?
This is one story of many. Digitization has moved beyond music and entertainment, and now many big retailers operating physical stores are struggling to stay relevant. Meanwhile, the pace of change is accelerating, and new potentially disruptive technologies are on the horizon.
More than ever, leaders need to develop a strong understanding of and perspective on technology. They need to survey new innovations, forecast their pace, gauge the implications, and adopt new tools and strategy to change course as an industry shifts, not after it’s shifted.
Simply, leaders need to adopt the mindset of a technologist. Here’s what that means.
Survey the Landscape
Nurturing curiosity is the first step to understanding technological change. To know how technology might disrupt your industry, you have to know what’s in the pipeline and identify which new inventions are directly or indirectly related to your industry.
Becoming more technologically minded takes discipline and focus as well as unstructured time to explore the non-obvious connections between what is right in front of us and what might be. It requires a commitment to ongoing learning and discovery.
Read outside your industry and comfort zone, not just Fast Company and Wired, but Science and Nature to expand your horizons. Identify experts with the ability to demystify specific technology areas—many have a solid following on Twitter or a frequently cited blog.
But it isn’t all about reading. Consider going where the change is happening too.
Visit one of the technology hubs around the world or a local university research lab in your own back yard. Or bring the innovation to you by building an internal exploration lab stocked with the latest technologies, creating a technology advisory board, hosting an internal innovation challenge, or a local pitch night where aspiring entrepreneurs can share their newest ideas.
You might even ask the crowd by inviting anyone to suggest what innovation is most likely to disrupt your product, service, or sector. And don’t hesitate to engage younger folks—the digital natives all around you—by asking questions about what technology they are using or excited about. Consider going on a field trip with them to see how they use technology in different aspects of their lives. Invite the seasoned executives on your team to explore long-term “reverse mentoring” with someone who can expose them to the latest technology and teach them to use it.
Whatever your strategy, the goal should be to develop a healthy obsession with technology.
By exploring fresh perspectives outside traditional work environments and then giving ourselves permission to see how these new ideas might influence existing products and strategies, we have a chance to be ready for what we’re not ready for—but is likely right around the corner.
Estimate the Pace of Progress
The next step is forecasting when a technology will mature.
One of the most challenging aspects of the changes underway is that in many technology arenas, we are quickly moving from a linear to an exponential pace. It is hard enough to envision what is needed in an industry buffeted by progress that is changing 10% per year, but what happens when technological progress doubles annually? That is another world altogether.
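The difference between those two regimes compounds dramatically. A quick back-of-the-envelope calculation with illustrative numbers:

```python
# Illustrative comparison of 10%-per-year progress vs. annual doubling.
# After a decade, steady 10% compounding yields ~2.6x total improvement,
# while doubling every year yields 1024x.

years = 10
steady = 1.10 ** years      # 10% improvement per year, compounded
doubling = 2.0 ** years     # exponential: capability doubles annually

print(f"10%/year after {years} years: {steady:.1f}x")
print(f"doubling after {years} years: {doubling:.0f}x")
```

A decade of steady improvement barely changes the landscape; a decade of doubling makes it unrecognizable, which is why forecasting the pace matters as much as spotting the technology.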
This kind of change can be deceiving. For example, machine learning and big data are finally reaching critical momentum after more than twenty years of being right around the corner. The advances in applications like speech and image recognition that we’ve seen in recent years dwarf what came before and many believe we’ve just begun to understand the implications.
Even as we begin to embrace disruptive change in one technology arena, far more exciting possibilities unfold when we explore how multiple arenas are converging.
Artificial intelligence and big data are great examples. As Hod Lipson, professor of Mechanical Engineering and Data Science at Columbia University and co-author of Driverless: Intelligent Cars and the Road Ahead, says, “AI is the engine, but big data is the fuel. They need each other.”
This convergence paired with an accelerating pace makes for surprising applications.
To keep his research lab agile and open to new uses of advancing technologies, Lipson routinely asks his PhD students, “How might AI disrupt this industry?” to prompt development of applications across a wide spectrum of sectors from healthcare to agriculture to food delivery.
Explore the Consequences
New technology inevitably gives rise to new ethical, social, and moral questions that we have never faced before. Rather than bury our heads in the sand, as leaders we must explore the full range of potential consequences of whatever is underway or still to come.
We can add AI to kids’ toys, like Mattel’s Hello Barbie, or use cutting-edge gene-editing technology like CRISPR-Cas9 to select for preferred gene sequences beyond basic health. But just because we can do something doesn’t mean we should.
Take time to listen to skeptics and understand the risks posed by technology.
Elon Musk, Stephen Hawking, Steve Wozniak, Bill Gates, and other well-known names in science and technology have expressed concern in the media and via open letters about the risks posed by AI. Microsoft’s CEO, Satya Nadella, has even argued tech companies shouldn’t build artificial intelligence systems that will replace people rather than making them more productive.
Exploring unintended consequences goes beyond having a Plan B for when something goes wrong. It requires broadening our view of what we’re responsible for. Beyond customers, shareholders, and the bottom line, we should understand how our decisions may impact employees, communities, the environment, our broader industry, and even our competitors.
The minor inconvenience of mitigating these risks now is far better than the alternative. Create forums to listen to and value voices outside of the board room and C-Suite. Seek out naysayers, ethicists, community leaders, wise elders, and even neophytes—those who may not share our preconceived notions of right and wrong or our narrow view of our role in the larger world.
The question isn’t: If we build it, will they come? It’s now: If we can build it, should we?
Adopt New Technologies and Shift Course
The last step is hardest. Once you’ve identified a technology (or technologies) as a potential disruptor and understand the implications, you need to figure out how to evolve your organization to make the most of the opportunity. Simply recognizing disruption isn’t enough.
Take today’s struggling brick-and-mortar retail business. Online shopping isn’t new. Amazon isn’t a plucky startup. Both have been changing how we buy stuff for years. And yet many who still own and operate physical stores—perhaps most prominently, Sears—are now on the brink of bankruptcy.
There’s hope though. Netflix began as a DVD delivery service in the 90s, but quickly realized its core business didn’t have staying power. It would have been laughable to stream movies when Netflix was founded. Still, computers and bandwidth were advancing fast. In 2007, the company added streaming to its subscription. Even then it wasn’t a totally compelling product.
But Netflix clearly saw a streaming future would likely end their DVD business.
In recent years, faster connection speeds, a growing content library, and the company’s entrance into original programming have given Netflix streaming the upper hand over DVDs. Since 2011, DVD subscriptions have steadily declined. Yet the company itself is doing fine. Why? It anticipated the shift to streaming and acted on it.
Never Stop Looking for the Next Big Thing
Technology is and will increasingly be a driver of disruption, destabilizing entrenched businesses and entire industries while also creating new markets and value not yet imagined.
When faced with the rapidly accelerating pace of change, many companies still default to old models and established practices. Leading like a technologist requires vigilant understanding of potential sources of disruption—what might make your company’s offering obsolete? The answers may not always be perfectly clear. What’s most important is relentlessly seeking them.
Stock Media provided by MJTierney / Pond5

Posted in Human Robots