Tag Archives: factory

#433400 A Model for the Future of Education, and ...

As kids worldwide head back to school, I’d like to share my thoughts on the future of education.

Bottom line, how we educate our kids needs to radically change given the massive potential of exponential tech (e.g. artificial intelligence and virtual reality).

Without question, the number one driver for education is inspiration. As such, if you have a kid age 8–18, you’ll want to get your hands on an incredibly inspirational novel written by my dear friend Ray Kurzweil called Danielle: Chronicles of a Superheroine.

Danielle offers boys and girls a role model of a young woman who uses smart technologies and super-intelligence to partner with her friends to solve some of the world’s greatest challenges. It’s perfect to inspire anyone to pursue their moonshot.

Without further ado, let’s dive into the future of educating kids, and a summary of my white paper thoughts….

Just last year, edtech (education technology) investments surpassed a record high of 9.5 billion USD—up 30 percent from the year before.

Already valued at over half a billion USD, the AI in education market is set to surpass 6 billion USD by 2024.

And we’re now seeing countless new players enter the classroom, from a Soul Machines AI teacher specializing in energy use and sustainability to smart “lab schools” with personalized curricula.

As my two boys enter 1st grade, I continue asking myself: given that most elementary schools haven’t changed in many decades (perhaps a century), what do I want my kids to learn? How do I think about elementary school during an exponential era?

This post covers five subjects related to elementary school education:

Five Issues with Today’s Elementary Schools
Five Guiding Principles for Future Education
An Elementary School Curriculum for the Future
Exponential Technologies in our Classroom
Mindsets for the 21st Century

Excuse the length of this post, but if you have kids, the details might be meaningful. If you don’t, then next week’s post will return to normal length and another fun subject.

Also, if you’d like to see my detailed education “white paper,” you can view or download it here.

Let’s dive in…

Five Issues With Today’s Elementary Schools
There are probably lots of issues with today’s traditional elementary schools, but I’ll just choose a few that bother me most.

Grading: In the traditional education system, you start at an “A,” and every time you get something wrong, your score gets lower and lower. At best it’s demotivating, and at worst it has nothing to do with the world you occupy as an adult. In the gaming world (e.g. Angry Birds), it’s just the opposite. You start with zero and every time you come up with something right, your score gets higher and higher.
Sage on the Stage: Most classrooms have a teacher up in front of class lecturing to a classroom of students, half of whom are bored and half of whom are lost. The one-teacher-fits-all model comes from an era of scarcity where great teachers and schools were rare.
Relevance: When I think back to elementary and secondary school, I realize how much of what I learned was never actually useful later in life, and how many of my critical lessons for success I had to pick up on my own (I don’t know about you, but I haven’t ever actually had to factor a polynomial in my adult life).
Imagination, Coloring inside the Lines: Probably of greatest concern to me is the factory-worker, industrial-era origin of today’s schools. Programs are so rigidly structured around rote memorization that they squash the originality out of most children. I’m reminded that “the day before something is truly a breakthrough, it’s a crazy idea.” Where do we pursue crazy ideas in our schools? Where do we foster imagination?
Boring: If learning in school is a chore, boring, or emotionless, then the most important driver of human learning, passion, is disengaged. Having our children memorize facts and figures, sit passively in class, and take mundane standardized tests completely defeats the purpose.

An average of 7,200 students drop out of high school each day, totaling 1.3 million each year. This means only 69 percent of students who start high school finish four years later. And over 50 percent of these high school dropouts name boredom as the number one reason they left.

Five Guiding Principles for Future Education
I imagine a relatively near-term future in which robotics and artificial intelligence will allow any of us, from ages 8 to 108, to easily and quickly find answers, create products, or accomplish tasks, all simply by expressing our desires.

From ‘mind to manufactured in moments.’ In short, we’ll be able to do and create almost whatever we want.

In this future, what attributes will be most critical for our children to learn to become successful in their adult lives? What’s most important for educating our children today?

For me it’s about passion, curiosity, imagination, critical thinking, and grit.

Passion: You’d be amazed at how many people don’t have a mission in life… a calling… something to jolt them out of bed every morning. The most valuable resource for humanity is the persistent and passionate human mind, so creating a future of passionate kids is vitally important. For my 7-year-old boys, I want to support them in finding their passion or purpose… something that is uniquely theirs. In the same way, the Apollo program and Star Trek drove my early love for all things space, and that passion drove me to learn and do.
Curiosity: Curiosity is something innate in kids, yet something most adults lose over the course of their lives. Why? In a world of Google, robots, and AI, raising a kid who is constantly asking questions and running “what if” experiments can be extremely valuable. In an age of machine learning, massive data, and a trillion sensors, it will be the quality of your questions that matters most.
Imagination: Entrepreneurs and visionaries imagine the world (and the future) they want to live in, and then they create it. Kids happen to be some of the most imaginative humans around… it’s critical that they know how important and liberating imagination can be.
Critical Thinking: In a world flooded with often-conflicting ideas, baseless claims, misleading headlines, negative news, and misinformation, learning the skill of critical thinking helps find the signal in the noise. This principle is perhaps the most difficult to teach kids.
Grit/Persistence: Grit is defined as “passion and perseverance in pursuit of long-term goals,” and it has recently been widely acknowledged as one of the most important predictors of and contributors to success.

Teaching your kids not to give up, to keep trying, and to keep trying new ideas for something they are truly passionate about achieving is critical. Much of my personal success has come from such stubbornness. I joke that both XPRIZE and the Zero Gravity Corporation were “overnight successes after 10 years of hard work.”

So given those five basic principles, what would an elementary school curriculum look like? Let’s take a look…

An Elementary School Curriculum for the Future
Over the last 30 years, I’ve had the pleasure of starting two universities, International Space University (1987) and Singularity University (2007). My favorite part of co-founding both institutions was designing and implementing the curriculum. Along those lines, the following is my first shot at the type of curriculum I’d love my own boys to be learning.

I’d love your thoughts; I’ll be looking for them here: https://www.surveymonkey.com/r/DDRWZ8R

For the purpose of illustration, I’ll speak about ‘courses’ or ‘modules,’ but in reality these are just elements that would ultimately be woven together throughout the course of K-6 education.

Module 1: Storytelling/Communications

When I think about the skill that has served me best in life, it’s been my ability to present my ideas in the most compelling fashion possible, to get others on board, and to support an idea’s birth and growth in an innovative direction. In my adult life, as an entrepreneur and a CEO, it’s been my ability to communicate clearly and tell compelling stories that has allowed me to create the future. I don’t think this lesson can start too early in life. So imagine a module, year after year, where our kids learn the art and practice of formulating and pitching their ideas: the best of oration and storytelling. Perhaps children in this class would watch TED presentations, or maybe they’d put together their own TEDx for kids. Ultimately, it’s about practice and getting comfortable with putting yourself and your ideas out there and overcoming any fears of public speaking.

Module 2: Passions

A modern school should help our children find and explore their passion(s). Passion is the greatest gift of self-discovery. It is a source of interest and excitement, and is unique to each child.

The key to finding passion is exposure: allowing kids to experience as many adventures, careers, and passionate adults as possible. Historically, this was limited by the reality of geography and cost, and implemented by having local moms and dads present in class about their careers. “Hi, I’m Alan, Billy’s dad, and I’m an accountant. Accountants are people who…”

But in a world of YouTube and virtual reality, the ability for our children to explore 500 different possible careers or passions during their K-6 education becomes not only possible but compelling. I imagine a module where children share their newest passion each month, sharing videos (or VR experiences) and explaining what they love and what they’ve learned.

Module 3: Curiosity & Experimentation

Einstein famously said, “I have no special talent. I am only passionately curious.” Curiosity is innate in children, and often lost later in life. Arguably, curiosity is responsible for all major scientific and technological advances; it’s the desire of an individual to know the truth.

Coupled with curiosity is the process of experimentation and discovery: asking questions, creating and testing a hypothesis, and repeating experiments until the truth is found. As I’ve studied the most successful entrepreneurs and entrepreneurial companies, from Google and Amazon to Uber, their success is significantly due to their relentless use of experimentation to define their products and services.

Here I imagine a module which instills in children the importance of curiosity and gives them permission to say, “I don’t know, let’s find out.”

Further, a monthly module that teaches children how to design and execute valid and meaningful experiments. Imagine children who learn the skill of asking a question, proposing a hypothesis, designing an experiment, gathering the data, and then reaching a conclusion.
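To make that loop concrete, here is a minimal, purely illustrative Python sketch (the “fair die” question and the tolerance threshold are my own choices, not part of any curriculum): it poses a question, states a hypothesis, gathers data through repeated trials, and reaches a conclusion.

```python
import random
from collections import Counter

# Question: "Is this six-sided die fair?"
# Hypothesis: each face should come up roughly 1/6 of the time.

def run_experiment(rolls=600):
    """Gather data: roll the (simulated) die many times and tally the results."""
    return Counter(random.randint(1, 6) for _ in range(rolls))

def reach_conclusion(data, rolls, tolerance=0.05):
    """Compare each face's observed frequency against the hypothesized 1/6."""
    expected = 1 / 6
    for face in range(1, 7):
        observed = data[face] / rolls
        if abs(observed - expected) > tolerance:
            print(f"Face {face}: observed {observed:.2f} vs expected {expected:.2f}")
            return "Hypothesis questionable -- design a follow-up experiment."
    return "The data are consistent with the hypothesis of a fair die."

if __name__ == "__main__":
    n = 600
    data = run_experiment(n)
    print(dict(data))
    print(reach_conclusion(data, n))
```

Even a toy like this walks a child through the full question, hypothesis, data, and conclusion cycle.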

Module 4: Persistence/Grit

Doing anything big, bold, and significant in life is hard work. You can’t just give up when the going gets rough. The mindset of persistence, of grit, is a learned behavior I believe can be taught at an early age, especially when it’s tied to pursuing a child’s passion.

I imagine a curriculum that, each week, studies the career of a great entrepreneur and highlights their story of persistence. It would highlight the individuals and companies that stuck with it, iterated, and ultimately succeeded.

Further, I imagine a module that combines persistence and experimentation in gameplay, such as that found in Dean Kamen’s FIRST LEGO league, where 4th graders (and up) research a real-world problem such as food safety, recycling, energy, and so on, and are challenged to develop a solution. They also must design, build, and program a robot using LEGO MINDSTORMS®, then compete on a tabletop playing field.

Module 5: Technology Exposure

In a world of rapidly accelerating technology, understanding how technologies work, what they do, and their potential for benefiting society is, in my humble opinion, critical to a child’s future. Technology and coding (more on this below) are the new “lingua franca” of tomorrow.

In this module, I imagine teaching (age appropriate) kids through play and demonstration. Giving them an overview of exponential technologies such as computation, sensors, networks, artificial intelligence, digital manufacturing, genetic engineering, augmented/virtual reality, and robotics, to name a few. This module is not about making a child an expert in any technology; it’s about giving them the language of these new tools and a conceptual overview of how they might use such a technology in the future. The goal here is to get them excited, give them demonstrations that make the concepts stick, and then to let their imaginations run.

Module 6: Empathy

Empathy, defined as “the ability to understand and share the feelings of another,” has been recognized as one of the most critical skills for our children today. And while much has been written about it, and there are great practices for instilling it at home and in school, today’s new tools can accelerate the process.

Virtual reality isn’t just about video games anymore. Artists, activists, and journalists now see the technology’s potential to be an empathy engine, one that can shine spotlights on everything from the Ebola epidemic to what it’s like to live in Gaza. And Jeremy Bailenson has been at the vanguard of investigating VR’s power for good.

For more than a decade, Bailenson’s lab at Stanford has been studying how VR can make us better people. Through the power of VR, volunteers at the lab have felt what it is like to be Superman (to see if it makes them more helpful), a cow (to reduce meat consumption), and even a coral (to learn about ocean acidification).

Silly as they might seem, these sorts of VR scenarios could be more effective than the traditional public service ad at changing how people behave. Afterwards, participants waste less paper. They save more money for retirement. They’re nicer to the people around them. And this could have consequences for how we teach and train everyone from cliquey teenagers to high court judges.

Module 7: Ethics/Moral Dilemmas

Related to empathy, and equally important, is the goal of infusing kids with a moral compass. Over a year ago, I toured a special school created by Elon Musk (the Ad Astra school) for his five boys (age 9 to 14). One element that is persistent in that small school of under 40 kids is the conversation about ethics and morals, a conversation manifested by debating real-world scenarios that our kids may one day face.

Here’s an example of the sort of gameplay/roleplay that I heard about at Ad Astra, and that might be implemented in a module on morals and ethics. Imagine a small town on a lake, in which the majority of the town is employed by a single factory. But that factory has been polluting the lake and killing all the life. What do you do? Shutting down the factory would mean that everyone loses their jobs. On the other hand, keeping the factory open means the lake is destroyed and everything living in it dies. This kind of regular and routine conversation/gameplay allows the children to see the world in a critically important fashion.

Module 8: The 3R Basics (Reading, wRiting & aRithmetic)

There’s no question that young children entering kindergarten need the basics of reading, writing, and math. The only question is: what’s the best way for them to get it? We all grew up in the classic mode of a teacher at the chalkboard, books, and homework at night. But I would argue that such teaching approaches are long outdated, now replaced by apps, gameplay, and the concept of the flipped classroom.

Pioneered by high school teachers Jonathan Bergmann and Aaron Sams in 2007, the flipped classroom reverses the sequence of events from that of the traditional classroom.

Students view lecture materials, usually in the form of video lectures, as homework prior to coming to class. In-class time is reserved for activities such as interactive discussions or collaborative work, all performed under the guidance of the teacher.

The benefits are clear:

Students can consume lectures at their own pace, viewing the video again and again until they get the concept, or fast-forwarding if the information is obvious.
The teacher is present while students apply new knowledge. Moving homework into class time gives teachers insight into which concepts, if any, their students are struggling with and helps them adjust the class accordingly.
The flipped classroom produces tangible results: 71 percent of teachers who flipped their classes noticed improved grades, and 80 percent reported improved student attitudes as a result.

Module 9: Creative Expression & Improvisation

Every single one of us is creative. It’s human nature to be creative… the thing is that we each might have different ways of expressing our creativity.

We must encourage kids to discover and to develop their creative outlets early. In this module, imagine showing kids the many different ways creativity is expressed, from art to engineering to music to math, and then guiding them as they choose the area (or areas) they are most interested in. Critically, teachers (or parents) can then develop unique lessons for each child based on their interests, thanks to open education resources like YouTube and the Khan Academy. If my child is interested in painting and robots, a teacher or AI could scour the web and put together a custom lesson set from videos/articles where the best painters and roboticists in the world share their skills.

Adapting to change is critical for success, especially in our constantly changing world today. Improvisation is a skill that can be learned, and we need to be teaching it early.

In most collegiate “improv” classes, the core of great improvisation is the “Yes, and…” mindset. When acting out a scene, one actor might introduce a new character or idea, completely changing the context of the scene. It’s critical that the other actors in the scene say “Yes, and…”, accepting the new reality and then adding something new of their own.

Imagine playing similar role-play games in elementary schools, where a teacher gives the students a scene/context and constantly changes variables, forcing them to adapt and play.

Module 10: Coding

Computer science opens more doors for students than any other discipline in today’s world. Learning even the basics will help students in virtually any career, from architecture to zoology.

Coding is an important tool for computer science, in the way that arithmetic is a tool for doing mathematics and words are a tool for English. Coding creates software, but computer science is a broad field encompassing deep concepts that go well beyond coding.

Every 21st-century student should also have a chance to learn about algorithms, how to make an app, or how the internet works. Computational thinking can allow even preschoolers to grasp concepts like algorithms, recursion, and heuristics; even if they don’t understand the terms, they’ll learn the basic concepts.
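As a rough illustration of how those three words can show up in one kid-sized program, here is a toy Python sketch of the classic number-guessing game (the example is mine, not drawn from any particular curriculum): the algorithm is “keep narrowing the range,” the heuristic is “always guess the middle,” and the recursion is the function calling itself on the smaller range.

```python
def guess_number(low, high, secret, guesses=0):
    """Guess a secret number between low and high.

    Algorithm: repeatedly narrow the range until the guess is right.
    Heuristic: always guess the middle (halves the search space each time).
    Recursion: the function calls itself on the smaller range.
    """
    guess = (low + high) // 2
    if guess == secret:
        return guesses + 1
    if guess < secret:
        return guess_number(guess + 1, high, secret, guesses + 1)
    return guess_number(low, guess - 1, secret, guesses + 1)

if __name__ == "__main__":
    # Finds 73 between 1 and 100 in a handful of guesses.
    print(guess_number(1, 100, secret=73))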

There are more than 500,000 open jobs in computing right now, representing the number one source of new wages in the US, and these jobs are projected to grow at twice the rate of all other jobs.

Coding is fun! Beyond the practical reasons for learning how to code, there’s the fact that creating a game or animation can be really fun for kids.

Module 11: Entrepreneurship & Sales

At its core, entrepreneurship is about identifying a problem (an opportunity), developing a vision on how to solve it, and working with a team to turn that vision into reality. I mentioned Elon’s school, Ad Astra: here, again, entrepreneurship is a core discipline where students create and actually sell products and services to each other and the school community.

You could recreate this basic exercise with a group of kids in lots of fun ways to teach them the basic lessons of entrepreneurship.

Related to entrepreneurship is sales. In my opinion, we need to be teaching sales to every child at an early age. Being able to “sell” an idea (again related to storytelling) has been a critical skill in my career, and it is a competency that many people simply never learned.

The lemonade stand has been a classic, though somewhat meager, lesson in sales from past generations, where a child sits on a street corner and tries to sell homemade lemonade for $0.50 to people passing by. I’d suggest we step the game up and take a more active approach in gamifying sales, and maybe having the classroom create a Kickstarter, Indiegogo or GoFundMe campaign. The experience of creating a product or service and successfully selling it will create an indelible memory and give students the tools to change the world.

Module 12: Language

A little over a year ago, I spent a week in China meeting with parents whose focus on kids’ education is extraordinary. One of the areas I found fascinating is how some of the most advanced parents are teaching their kids new languages: through games. On the tablet, the kids are allowed to play games, but only in French. A child’s desire to win fully engages them and drives their learning rapidly.

Beyond games, there’s virtual reality. We know that full immersion is what it takes to become fluent (at least later in life). A semester abroad in France or Italy, and you’ve got a great handle on the language and the culture. But what about for an eight-year-old?

Imagine a module where for an hour each day, the children spend their time walking around Italy in a VR world, hanging out with AI-driven game characters who teach them, engage them, and share the culture and the language in the most personalized and compelling fashion possible.

Exponential Technologies for Our Classrooms
If you’ve attended Abundance 360 or Singularity University, or followed my blogs, you’ll probably agree with me that the way our children will learn is going to fundamentally transform over the next decade.

Here’s an overview of the top five technologies that will reshape the future of education:

Tech 1: Virtual Reality (VR) can make learning truly immersive. Research has shown that we remember 20 percent of what we hear, 30 percent of what we see, and up to 90 percent of what we do or simulate. Virtual reality delivers that last scenario exceptionally well. VR enables students to simulate flying through the bloodstream while learning about different cells they encounter, or travel to Mars to inspect the surface for life.

To make this a reality, Google Cardboard just launched its Pioneer Expeditions product. Under this program, thousands of schools around the world have gotten a kit containing everything a teacher needs to take his or her class on a virtual trip. While data on VR use in K-12 schools and colleges have yet to be gathered, the steady growth of the market is reflected in the surge of companies (including zSpace, Alchemy VR and Immersive VR Education) solely dedicated to providing schools with packaged education curriculum and content.

Add to VR a related technology called augmented reality (AR), and experiential education really comes alive. Imagine wearing an AR headset that is able to superimpose educational lessons on top of real-world experiences. Interested in botany? As you walk through a garden, the AR headset superimposes the name and details of every plant you see.

Tech 2: 3D Printing is allowing students to bring their ideas to life. Never mind the computer on every desktop (or a tablet for every student); that’s a given. In the near future, teachers and students will want or have a 3D printer on the desk to help them learn core science, technology, engineering and mathematics (STEM) principles. Bre Pettis, of MakerBot Industries, in a grand but practical vision, sees a 3D printer on every school desk in America. “Imagine if you had a 3D printer instead of a LEGO set when you were a kid; what would life be like now?” asks Mr. Pettis. You could print your own mini-figures, your own blocks, and you could iterate on new designs as quickly as your imagination would allow. MakerBots are now in over 5,000 K-12 schools across the US.

Taking this one step further, you could imagine having a 3D file for most entries in Wikipedia, allowing you to print out and study an object you can only read about or visualize in VR.

Tech 3: Sensors & Networks. An explosion of sensors and networks is going to connect everyone at gigabit speeds, making access to rich video available at all times. At the same time, sensors continue to shrink in size and power consumption, becoming embedded in everything. One benefit will be the connection of sensor data with machine learning and AI (below), such that a child’s drifting attention or confusion can be easily measured and communicated. The result would be a representation of the information through an alternate modality or at a different speed.

Tech 4: Machine Learning is making learning adaptive and personalized. No two students are identical—they have different modes of learning (by reading, seeing, hearing, doing), come from different educational backgrounds, and have different intellectual capabilities and attention spans. Advances in machine learning and the surging adaptive learning movement are seeking to solve this problem. Companies like Knewton and Dreambox have over 15 million students on their respective adaptive learning platforms. Soon, every education application will be adaptive, learning how to personalize the lesson for a specific student. There will be adaptive quizzing apps, flashcard apps, textbook apps, simulation apps and many more.
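As a toy illustration of the adaptive idea (a sketch of the general concept only, not how Knewton’s or Dreambox’s platforms actually work), imagine a drill that nudges difficulty up after a correct answer and down after a mistake:

```python
import random

# A toy adaptive drill: difficulty (the size of the numbers) rises after a
# correct answer and falls after a mistake, loosely mimicking how adaptive
# platforms tune item difficulty per student. Purely illustrative; real
# systems use far richer student models than a single "level" integer.

def make_question(level):
    """Build an addition question whose operands scale with the level."""
    a, b = random.randint(1, 10 * level), random.randint(1, 10 * level)
    return f"{a} + {b} = ?", a + b

def adaptive_drill(rounds=5, level=1):
    for _ in range(rounds):
        prompt, answer = make_question(level)
        try:
            reply = int(input(f"[level {level}] {prompt} "))
        except ValueError:
            reply = None
        if reply == answer:
            level += 1                 # correct: step the difficulty up
        else:
            level = max(1, level - 1)  # wrong: step the difficulty down
            print(f"The answer was {answer}.")

if __name__ == "__main__":
    adaptive_drill()
```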

Tech 5: Artificial Intelligence or “An AI Teaching Companion.” Neal Stephenson’s book The Diamond Age presents a fascinating piece of educational technology called “A Young Lady’s Illustrated Primer.”

As described by Beat Schwendimann, “The primer is an interactive book that can answer a learner’s questions (spoken in natural language), teach through allegories that incorporate elements of the learner’s environment, and presents contextual just-in-time information.

“The primer includes sensors that monitor the learner’s actions and provide feedback. The learner is in a cognitive apprenticeship with the book: The primer models a certain skill (through allegorical fairy tale characters), which the learner then imitates in real life.

“The primer follows a learning progression with increasingly more complex tasks. The educational goals of the primer are humanist: To support the learner to become a strong and independently thinking person.”

The primer, an individualized AI teaching companion, is the result of technological convergence and is beautifully described by YouTuber CGP Grey in his video Digital Aristotle: Thoughts on the Future of Education.

Your AI companion will have unlimited access to information on the cloud and will deliver it at the optimal speed to each student in an engaging, fun way. This AI will demonetize and democratize education, be available to everyone for free (just like Google), and offer the best education to the wealthiest and poorest children on the planet equally.

This AI companion is not a tutor who spouts facts, figures and answers, but a player on the side of the student, there to help him or her learn, and in so doing, learn how to learn better. The AI is always alert, watching for signs of frustration and boredom that may precede quitting, for signs of curiosity or interest that tend to indicate active exploration, and for signs of enjoyment and mastery, which might indicate a successful learning experience.

Ultimately, we’re heading towards a vastly more educated world. We are truly living during the most exciting time to be alive.

Mindsets for the 21st Century
Finally, it’s important for me to discuss mindsets. How we think about the future colors how we learn and what we do. I’ve written extensively about the importance of an abundance and exponential mindset for entrepreneurs and CEOs. I also think that attention to mindset in our elementary schools, when a child is shaping the mental “operating system” for the rest of their life, is even more important.

As such, I would recommend that a school adopt a set of principles that teach and promote a number of mindsets in the fabric of their programs.

Many “mindsets” are important to promote. Here are a couple to consider:

Nurturing Optimism & An Abundance Mindset:
We live in a competitive world, and kids experience a significant amount of pressure to perform. When they fall short, they feel deflated. We all fail at times; that’s part of life. If we want to raise “can-do” kids who can work through failure and come out stronger for it, it’s wise to nurture optimism. Optimistic kids are more willing to take healthy risks, are better problem-solvers, and experience positive relationships. You can nurture optimism in your school by starting each day by focusing on gratitude (what each child is grateful for), or a “positive focus” in which each student takes 30 seconds to talk about what they are most excited about, or what recent event was positively impactful to them. (NOTE: I start every meeting inside my Strike Force team with a positive focus.)

Finally, helping students understand (through data and graphs) that the world is in fact getting better (see my first book, Abundance: The Future is Better Than You Think) will help them counter the continuous stream of negative news coming through our media.

When kids feel confident in their abilities and excited about the world, they are willing to work harder and be more creative.

Tolerance for Failure:
Tolerating failure is a difficult lesson to learn and a difficult lesson to teach. But it is critically important to succeeding in life.

Astro Teller, who runs Google’s innovation branch “X,” talks a lot about encouraging failure. At X, they regularly try to “kill” their ideas. If they are successful in killing an idea, and thus “failing,” they save lots of time, money and resources. The ideas they can’t kill survive and develop into billion-dollar businesses. The key is that each time an idea is killed, Astro rewards the team, literally, with cash bonuses. Their failure is celebrated and they become heroes.

This should be reproduced in the classroom: kids should try to be critical of their best ideas (learn critical thinking), then they should be celebrated for ‘successfully failing,’ perhaps with cake, balloons, confetti, and lots of Silly String.

Join Me & Get Involved!
Abundance Digital Online Community: I have created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance Digital. This is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: sakkarin sapu / Shutterstock.com

Posted in Human Robots

#431790 FT 300 force torque sensor

Robotiq Updates FT 300 Sensitivity for High-Precision Tasks With Universal Robots
Force Torque Sensor feeds data to Universal Robots Force Mode
Quebec City, Canada, November 13, 2017 – Robotiq launches a 10 times more sensitive version of its FT 300 Force Torque Sensor. With Plug + Play integration on all Universal Robots, the FT 300 performs highly repeatable precision force control tasks such as finishing, product testing, assembly and precise part insertion.
This force torque sensor comes with updated free URCap software that can feed data to the Universal Robots Force Mode. “This new feature allows the user to perform precise force insertion assembly and many finishing applications where force control with high sensitivity is required,” explains Robotiq CTO Jean-Philippe Jobin*.
The URCap also includes a new calibration routine. “We’ve integrated a step-by-step procedure that guides the user through the process, which takes less than 2 minutes” adds Jobin. “A new dashboard also provides real-time force and moment readings on all 6 axes. Moreover, pre-built programming functions are now embedded in the URCap for intuitive programming.”
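For readers curious what consuming those six-axis readings might look like in code, here is a hypothetical Python sketch. The controller address, the port number, and the “(Fx, Fy, Fz, Mx, My, Mz)” line format are assumptions made purely for illustration; consult Robotiq’s documentation for the actual interface rather than relying on this.

```python
import socket

# Hypothetical sketch: reading the six force/torque values (Fx, Fy, Fz,
# Mx, My, Mz) that the FT 300 streams through the robot controller.
# The IP address, port, and line format below are assumptions, not the
# documented Robotiq interface.

ROBOT_IP = "192.168.1.10"   # assumed UR controller address
STREAM_PORT = 63351          # assumed data-stream port

def read_wrench(sock):
    """Read one line from the stream and parse it into six floats."""
    line = sock.recv(1024).decode("ascii").strip().strip("()")
    return [float(v) for v in line.split(",")]

if __name__ == "__main__":
    with socket.create_connection((ROBOT_IP, STREAM_PORT), timeout=2.0) as s:
        fx, fy, fz, mx, my, mz = read_wrench(s)
        print(f"Force  [N]:  {fx:.2f} {fy:.2f} {fz:.2f}")
        print(f"Moment [Nm]: {mx:.2f} {my:.2f} {mz:.2f}")
```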
See some of the FT 300’s new capabilities in the following demo videos:
#1 How to calibrate with the FT 300 URCap Dashboard
#2 Linear search demo
#3 Path recording demo
Visit the FT 300 webpage or get a quote here
Get the FT 300 specs here
Get more info in the FAQ
Get free Skills to accelerate robot programming of force control tasks.
Get free robot cell deployment resources on leanrobotics.org
* Available with Universal Robots CB3.1 controller only
About Robotiq
Robotiq’s Lean Robotics methodology and products enable manufacturers to deploy productive robot cells across their factory. They leverage the Lean Robotics methodology for faster time to production and increased productivity from their robots. Production engineers standardize on Robotiq’s Plug + Play components for their ease of programming, built-in integration, and adaptability to many processes. They rely on the Flow software suite to accelerate robot projects and optimize robot performance once in production.
Robotiq is the humans behind the robots: an employee-owned business with a passionate team and an international partner network.
Media contact
David Maltais, Communications and Public Relations Coordinator
d.maltais@robotiq.com
1-418-929-2513
////
Press Release Provided by: Robotiq.Com
The post FT 300 force torque sensor appeared first on Roboticmagazine.

Posted in Human Robots

#431733 Why Humanoid Robots Are Still So Hard to ...

Picture a robot. In all likelihood, you just pictured a sleek metallic or chrome-white humanoid. Yet the vast majority of robots in the world around us are nothing like this; instead, they’re specialized for specific tasks. Our cultural conception of what robots are dates back to the coining of the term “robot” in the Czech play Rossum’s Universal Robots, which originally envisioned them as essentially synthetic humans.
The vision of a humanoid robot is tantalizing. There are constant efforts to create something that looks like the robots of science fiction. Recently, an old competitor in this field returned with a new model: Toyota has released what they call the T-HR3. As humanoid robots go, it appears to be pretty dexterous and have a decent grip, with a number of degrees of freedom making the movements pleasantly human.
This humanoid robot operates mostly via a remote-controlled system that allows the user to control the robot’s limbs by exerting different amounts of pressure on a framework. A VR headset completes the picture, allowing the user to control the robot’s body and teleoperate the machine. There’s no word on a price tag, but one imagines a machine with a control system this complicated won’t exactly be on your Christmas list, unless you’re a billionaire.

Toyota is no stranger to robotics. They released a series of “Partner Robots” that had a bizarre affinity for instrument-playing but weren’t often seen doing much else. Given that they didn’t seem to have much capability beyond the automaton that Leonardo da Vinci made hundreds of years ago, they promptly vanished. If, as the name suggests, the T-HR3 is a sequel to these robots, which came out shortly after ASIMO back in 2003, it’s substantially better.
Slightly less humanoid (and perhaps the more useful for it), Toyota’s HSR-2 is a robot base on wheels with a simple mechanical arm. It brings to mind earlier machines produced by dream-factory startup Willow Garage like the PR-2. The idea of an affordable robot that could simply move around on wheels and pick up and fetch objects, and didn’t harbor too-lofty ambitions to do anything else, was quite successful.
So much so that when RoboCup, the international robotics competition, looked for a platform for their robot-butler competition @Home, they chose HSR-2 for its ability to handle objects. HSR-2 has been deployed in trial runs to care for the elderly and injured, but has yet to be widely adopted for these purposes five years after its initial release. It’s telling that arguably the most successful multi-purpose humanoid robot isn’t really humanoid at all—and it’s curious that Toyota now seems to want to return to a more humanoid model a decade after they gave up on the project.
What’s unclear, as is often the case with humanoid robots, is what, precisely, the T-HR3 is actually for. The teleoperation gets around the complex problem of control by simply having the machine controlled remotely by a human. That human then handles all the sensory perception, decision-making, planning, and manipulation; essentially, the hardest problems in robotics.
There may not be a great deal of autonomy for the T-HR3, but by sacrificing autonomy, you drastically cut down the uses of the robot. Since it can’t act alone, you need a convincing scenario where you need a teleoperated humanoid robot that’s less precise and vastly more expensive than just getting a person to do the same job. Perhaps someday more autonomy will be developed for the robot, and the master maneuvering system that allows humans to control it will only be used in emergencies to control the robot if it gets stuck.
Toyota’s press release says it is “a platform with capabilities that can safely assist humans in a variety of settings, such as the home, medical facilities, construction sites, disaster-stricken areas and even outer space.” In reality, it’s difficult to see such a robot being affordable or even that useful in the home or in medical facilities (unless it’s substantially stronger than humans). Equally, it certainly doesn’t seem robust enough to be deployed in disaster zones or outer space. These tasks have been mooted for robots for a very long time and few have proved up to the challenge.
Toyota’s third generation humanoid robot, the T-HR3. Image Credit: Toyota
Instead, the robot seems designed to work alongside humans. Its design, standing 1.5 meters tall, weighing 75 kilograms, and possessing 32 degrees of freedom in its body, suggests it is built to closely mimic a person, rather than a robot like ATLAS which is robust enough that you can imagine it being useful in a war zone. In this case, it might be closer to the model of the collaborative robots or co-bots developed by Rethink Robotics, whose tons of safety features, including force-sensitive feedback for the user, reduce the risk of terrible PR surrounding killer robots.
Instead the emphasis is on graceful precision engineering: in the promo video, the robot can be seen balancing on one leg before showing off a few poised, yoga-like poses. This perhaps suggests that an application in elderly care, which Toyota has ventured into before and which was the stated aim of their simple HSR-2, might be more likely than deployment to a disaster zone.
The reason humanoid robots remain so elusive and so tempting is probably because of a simple cognitive mistake. We make two bad assumptions. First, we assume that if you build a humanoid robot, give its joints enough flexibility, throw in a little AI and perhaps some pre-programmed behaviors, then presto, it will be able to do everything humans can. When you see a robot that moves well and looks humanoid, it seems like the hardest part is done; surely this robot could do anything. The reality is never so simple.

We also make the reverse assumption: we assume that when we are finally replaced, it will be by perfect replicas of our own bodies and brains that can fulfill all the functions we used to fulfill. Perhaps, in reality, the future of robots and AI is more like its present: piecemeal, with specialized algorithms and specialized machines gradually learning to outperform humans at every conceivable task without ever looking convincingly human.
It may well be that the T-HR3 is angling towards this concept of machine learning as a platform for future research. Rather than trying to program an omni-capable robot out of the box, it will gradually learn from its human controllers. In this way, you could see the platform being used to explore the limits of what humans can teach robots to do simply by having them mimic sequences of our bodies’ motion, in the same way the exploitation of neural networks is testing the limits of training algorithms on data. No one machine will be able to perform everything a human can, but collectively, they will vastly outperform us at anything you’d want one to do.
So when you see a new android like Toyota’s, feel free to marvel at its technical abilities and indulge in the speculation about whether it’s a PR gimmick or a revolutionary step forward along the road to human replacement. Just remember that, human-level bots or not, we’re already strolling down that road.
Image Credit: Toyota

Posted in Human Robots

#431603 What We Can Learn From the Second Life ...

For every new piece of technology that gets developed, you can usually find people saying it will never be useful. The president of the Michigan Savings Bank in 1903, for example, said, “The horse is here to stay but the automobile is only a novelty—a fad.” It’s equally easy to find people raving about whichever new technology is at the peak of the Gartner Hype Cycle, which tracks the buzz around these newest developments and attempts to temper predictions. When technologies emerge, there are all kinds of uncertainties, from the actual capacity of the technology to its use cases in real life to the price tag.
Eventually the dust settles, and some technologies get widely adopted, to the extent that they can become “invisible”; people take them for granted. Others fall by the wayside as gimmicky fads or impractical ideas. Picking which horses to back is the difference between Silicon Valley millions and Betamax pub-quiz-question obscurity. For a while, it seemed that Google had—for once—backed the wrong horse.
Google Glass emerged from Google X, the ubiquitous tech giant’s much-hyped moonshot factory, where highly secretive researchers work on the sci-fi technologies of the future. Self-driving cars and artificial intelligence are the more mundane end for an organization that apparently once looked into jetpacks and teleportation.
The original smart glasses, Google Glass, went on sale in 2013 for $1,500 as prototypes for Google’s acolytes: around 8,000 early adopters. Users could control the glasses with a touchpad, or, after activating them by tilting the head back, with voice commands. Audio relay—as with several wearable products—is via bone conduction, which transmits sound by vibrating the skull bones of the user. This was going to usher in the age of augmented reality, the next best thing to having a chip implanted directly into your brain.
On the surface, it seemed to be a reasonable proposition. People had dreamed about augmented reality for a long time—an onboard, JARVIS-style computer giving you extra information and instant access to communications without even having to touch a button. After smartphone ubiquity, it looked like a natural step forward.
Instead, there was a backlash. People may be willing to give their data up to corporations, but they’re less pleased with the idea that someone might be filming them in public. The worst aspect of smartphones is trying to talk to people who are distractedly scrolling through their phones. There’s a famous analogy in Revolutionary Road about an old couple’s loveless marriage: the husband tunes out his wife’s conversation by turning his hearing aid down to zero. To many, Google Glass seemed to provide us with a whole new way to ignore each other in favor of our Twitter feeds.
Then there’s the fact that, whether because we’re not used to them or for more lasting reasons, people wearing AR tech often look very silly. Put all this together with a lack of early functionality, the high price (do you really feel comfortable wearing a $1,500 computer?), and a killer pun for the users—Glassholes—and the final recipe wasn’t great for Google.
Google Glass was quietly dropped from sale in 2015, with the ominous slogan “Thanks for exploring with us” posted on Google’s website. Reminding the Glass users that they had always been referred to as “explorers”—beta-testing a product, in many ways—it perhaps signaled less enthusiasm for wearables than the original Google Glass skydive might have suggested.
In reality, Google went back to the drawing board. Not with the technology per se, although it has improved in the intervening years, but with the uses behind the technology.
Under what circumstances would you actually need a Google Glass? When would it genuinely be preferable to a smartphone that can do many of the same things and more? Beyond simply being a fashion item, which Google Glass decidedly was not, even the most tech-evangelical of us need a convincing reason to splash $1,500 on a wearable computer that’s less socially acceptable and less easy to use than the machine you’re probably reading this on right now.
Enter the Google Glass Enterprise Edition.
Piloted in factories during the years that Google Glass was dormant, and now roaring back to life and commercially available, the Google Glass relaunch got under way in earnest in July of 2017. The difference here was the specific audience: workers in factories who need hands-free computing because they need to use their hands at the same time.
In this niche application, wearable computers can become invaluable. A new employee can be trained with pre-programmed material that explains how to perform actions in real time, while instructions can be relayed straight into a worker’s eyeline without them needing to check a phone or switch to email.
Medical devices have long been a dream application for Google Glass. You can imagine a situation where people receive real-time information during surgery, or are augmented by artificial intelligence that provides additional diagnostic information or questions in response to a patient’s symptoms. The quest to develop a healthcare AI, which can provide recommendations in response to natural language queries, is on. The famously untidy doctor’s handwriting—and the associated death toll—could be avoided if the glasses could take dictation straight into a patient’s medical records. All of this is far more useful than allowing people to check Facebook hands-free while they’re riding the subway.
Google’s “Lens” application indicates another use for Google Glass that hadn’t quite matured when the original was launched: the Lens processes images and provides information about them. You can look at text and have it translated in real time, or look at a building or sign and receive additional information. Image processing, either through neural networks hooked up to a cloud database or some other means, is the frontier that enables driverless cars and similar technology to exist. Hook this up to a voice-activated assistant relaying information to the user, and you have your killer application: real-time annotation of the world around you. It’s this functionality that just wasn’t ready yet when Google launched Glass.
Amazon’s recent announcement that they want to integrate Alexa into a range of smart glasses indicates that the tech giants aren’t ready to give up on wearables yet. Perhaps, in time, people will become used to voice activation and interaction with their machines, at which point smart glasses with bone conduction will genuinely be more convenient than a smartphone.
But in many ways, the real lesson from the initial failure—and promising second life—of Google Glass is a simple question that developers of any smart technology, from the Internet of Things through to wearable computers, must answer. “What can this do that my smartphone can’t?” Find your answer, as the Enterprise Edition did, as Lens might, and you find your product.
Image Credit: Hattanas / Shutterstock.com

Posted in Human Robots

#431371 Amazon Is Quietly Building the Robots of ...

Science fiction is the siren song of hard science. How many innocent young students have been lured into complex, abstract science, technology, engineering, or mathematics because of a reckless and irresponsible exposure to Arthur C. Clarke at a tender age? Yet Arthur C. Clarke has a very famous quote: “Any sufficiently advanced technology is indistinguishable from magic.”
It’s the prospect of making that… ahem… magic leap that entices so many people into STEM in the first place. A magic leap that would change the world. How about, for example, having humanoid robots? They could match us in dexterity and speed, perceive the world around them as we do, and be programmed to do, well, more or less anything we can do.
Such a technology would change the world forever.
But how will it arrive? While true sci-fi robots won’t get here right away, the pieces are coming together, and the company best developing them at the moment is Amazon. Where others have struggled to succeed, Amazon has been quietly progressing. Notably, Amazon has more than just a dream; it has the most practical of reasons driving it into robotics.
This practicality matters. Technological development rarely proceeds by magic; it’s a process filled with twists, turns, dead-ends, and financial constraints. New technologies often have to answer questions like “What is this good for, are you being realistic?” A good strategy, then, can be to build something more limited than your initial ambition, but useful for a niche market. That way, you can produce a prototype, have a reasonable business plan, and turn a profit within a decade. You might call these “stepping stone” applications that allow for new technologies to be developed in an economically viable way.
You need something you can sell to someone, soon: that’s how you get investment in your idea. It’s this model that iRobot, developers of the Roomba, used: migrating from military prototypes to robotic vacuum cleaners to become the “boring, successful robot company.” Compare this to Willow Garage, a genius factory if ever there was one: they clearly had ambitions towards a general-purpose, multi-functional robot. They built an impressive device—PR2—and programmed the operating system, ROS, that is still the industry and academic standard to this day.
But since they were unable to sell their robot for much less than $250,000, it was never likely to be a profitable business. This is why Willow Garage is no more, and many workers at the company went into telepresence robotics. Telepresence is essentially videoconferencing with a fancy robot attached to move the camera around. It uses some of the same software (for example, navigation and mapping) without requiring you to solve difficult problems of full autonomy for the robot, or manipulating its environment. It’s certainly one of the stepping-stone areas that various companies are investigating.
Another approach is to go to the people with very high research budgets: the military.
This was the Boston Dynamics approach, and their incredible achievements in bipedal locomotion saw them getting snapped up by Google. There was a great deal of excitement and speculation about Google’s “nightmare factory” whenever a new slick video of a futuristic militarized robot surfaced. But Google broadly backed away from Replicant, their robotics program, and Boston Dynamics was sold. This was partly due to PR concerns over the Terminator-esque designs, but partly because they didn’t see the robotics division turning a profit. They hadn’t found their stepping stones.
This is where Amazon comes in. Why Amazon? First off, they just announced that their profits are up by 30 percent, and yet the company is well-known for their constantly-moving Day One philosophy where a great deal of the profits are reinvested back into the business. But lots of companies have ambition.
One thing Amazon has that few other corporations have, as well as big financial resources, is viable stepping stones for developing the technologies needed for this sort of robotics to become a reality. They already employ 100,000 robots: these are of the “pragmatic, boring, useful” kind that we’ve profiled, which move around the shelves in warehouses. These robots are allowing Amazon to develop localization and mapping software for robots that can autonomously navigate in the simple warehouse environment.
But their ambitions don’t end there. The Amazon Robotics Challenge is a multi-million dollar competition, open to university teams, to produce a robot that can pick and package items in warehouses. The problem of grasping and manipulating a range of objects is not a solved one in robotics, so this work is still done by humans—yet it’s absolutely fundamental for any sci-fi dream robot.
Google, for example, attempted to solve this problem by hooking up 14 robot hands to machine learning algorithms and having them grasp thousands of objects. Although results were promising, the 10 to 20 percent failure rate for grasps is too high for warehouse use. This is a perfect stepping stone for Amazon; should they crack the problem, they will likely save millions in logistics.
Another area where humanoid robotics (especially bipedal locomotion, or walking) has been seriously suggested is the last-mile delivery problem. Amazon has shown willingness to be creative in this department with their notorious drone delivery service. In other words, it’s all very well to have your self-driving car or van deliver packages to people’s doors, but who puts the package on the doorstep? It’s difficult for wheeled robots to navigate the full range of built environments that exist. That’s why bipedal robots like CASSIE, developed by Oregon State, may one day be used to deliver parcels.
Again: no one more than Amazon stands to profit from cracking this technology. The line from robotics research to profit is very clear.
So, perhaps one day Amazon will have robots that can move around and manipulate their environments. But they’re also working on intelligence that will guide those robots and make them truly useful for a variety of tasks. Amazon has an AI, or at least the framework for an AI: it’s called Alexa, and it’s in tens of millions of homes. The Alexa Prize, another multi-million-dollar competition, is attempting to make Alexa more social.
To develop a conversational AI, at least using the current methods of machine learning, you need data on tens of millions of conversations. You need to understand how people will try to interact with the AI. Amazon has access to this in Alexa, and they’re using it. As owners of the leading voice-activated personal assistant, they have an ecosystem of developers creating apps for Alexa. It will be integrated with the smart home and the Internet of Things. It is a very marketable product, a stepping stone for robot intelligence.
What’s more, the company can benefit from its huge sales infrastructure. For Amazon, having an AI in your home is ideal, because it can persuade you to buy more products through its website. Unlike companies like Google, Amazon has an easy way to make a direct profit from IoT devices, which could fuel funding.
For a humanoid robot to be truly useful, though, it will need vision and intelligence. It will have to understand and interpret its environment, and react accordingly. The way humans learn about our environment is by getting out and seeing it. This is something that, for example, an Alexa coupled to smart glasses would be very capable of doing. There are rumors that Alexa’s AI will soon be used in security cameras, which is an ideal stepping stone task to train an AI to process images from its environment, truly perceiving the world and any threats it might contain.
It’s a slight exaggeration to say that Amazon is in the process of building a secret robot army. The gulf between our sci-fi vision of robots that can intelligently serve us, rather than mindlessly assemble cars, is still vast. But in quietly assembling many of the technologies needed for intelligent, multi-purpose robotics—and with the unique stepping stones they have along the way—Amazon might just be poised to leap that gulf. As if by magic.
Image Credit: Denis Starostin / Shutterstock.com

Posted in Human Robots