Tag Archives: interactive

#435822 The Internet Is Coming to the Rest of ...

People surf it. Spiders crawl it. Gophers navigate it.

Now, a leading group of cognitive biologists and computer scientists want to make the tools of the Internet accessible to the rest of the animal kingdom.

Dubbed the Interspecies Internet, the project aims to provide intelligent animals such as elephants, dolphins, magpies, and great apes with a means to communicate with one another and with people online.

And through artificial intelligence, virtual reality, and other digital technologies, researchers hope to crack the code of all the chirps, yips, growls, and whistles that underpin animal communication.

Oh, and musician Peter Gabriel is involved.

“We can use data analysis and technology tools to give non-humans a lot more choice and control,” the former Genesis frontman, dressed in his signature Nehru-style collar shirt and loose, open waistcoat, told IEEE Spectrum at the inaugural Interspecies Internet Workshop, held Monday in Cambridge, Mass. “This will be integral to changing our relationship with the natural world.”

The workshop was a long time in the making.

Eighteen years ago, Gabriel visited a primate research center in Atlanta, Georgia, where he jammed with two bonobos, a male named Kanzi and his half-sister Panbanisha. It was the first time either bonobo had sat at a piano, and both displayed an exquisite sense of musical timing and melody.

Gabriel seemed to be speaking to the great apes through his synthesizer. It was a shock to the man who once sang “Shock the Monkey.”

“It blew me away,” he says.

Add in the bonobos’ ability to communicate by pointing to abstract symbols, Gabriel notes, and “you’d have to be deaf, dumb, and very blind not to notice language being used.”

Gabriel eventually teamed up with Internet protocol co-inventor Vint Cerf, cognitive psychologist Diana Reiss, and IoT pioneer Neil Gershenfeld to propose building an Interspecies Internet. Presented in a 2013 TED Talk as an “idea in progress,” the concept proved to be ahead of the technology.

“It wasn’t ready,” says Gershenfeld, director of MIT’s Center for Bits and Atoms. “It needed to incubate.”

So, over the past six years, the architects of the Dolittlesque initiative have pursued two small pilot projects, one for dolphins and one for chimpanzees.

At her Hunter College lab in New York City, Reiss developed what she calls the D-Pad—a touchpad for dolphins.

Reiss had been trying for years to create an underwater touchscreen with which to probe the cognition and communication skills of bottlenose dolphins. But “it was a nightmare coming up with something that was dolphin-safe and would work,” she says.

Her first attempt emitted too much heat. A Wii-like system of gesture recognition proved too difficult to install in the dolphin tanks.

Eventually, she joined forces with Rockefeller University biophysicist Marcelo Magnasco and invented an optical detection system in which images are projected through an underwater viewing window onto a glass panel while infrared sensors register the dolphins’ touches, allowing the dolphins to play specially designed apps, including one dubbed Whack-a-Fish.

Meanwhile, in the United Kingdom, Gabriel worked with Alison Cronin, director of the ape rescue center Monkey World, to test the feasibility of using FaceTime with chimpanzees.

The chimps engaged with the technology, Cronin reported at this week’s workshop. However, our hominid cousins proved as adept at videotelephonic discourse as my three-year-old son is at video chatting with his grandparents—which is to say, there was a lot of pass-the-banana-through-the-screen and other silly games, and not much meaningful conversation.

“We can use data analysis and technology tools to give non-humans a lot more choice and control.”
—Peter Gabriel

The buggy, rudimentary attempt at interspecies online communication—what Cronin calls her “Max Headroom experiment”—shows that building the Interspecies Internet will not be as simple as giving out Skype-enabled tablets to smart animals.

“There are all sorts of problems with creating a human-centered experience for another animal,” says Gabriel Miller, director of research and development at the San Diego Zoo.

Miller has been working on animal-focused sensory tools such as an “Elephone” (for elephants) and a “Joybranch” (for birds), but it’s not easy to design efficient interactive systems for other creatures—and for the Interspecies Internet to be successful, Miller points out, “that will be super-foundational.”

Researchers are making progress on natural language processing of animal tongues. Through a non-profit organization called the Earth Species Project, former Firefox designer Aza Raskin and early Twitter engineer Britt Selvitelle are applying deep learning algorithms developed for unsupervised machine translation of human languages to fashion a Rosetta Stone–like tool capable of interpreting the vocalizations of whales, primates, and other animals.
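To make that approach concrete, here is a minimal sketch of the kind of embedding-space alignment that underlies unsupervised machine translation: an orthogonal (Procrustes) mapping between two toy embedding spaces. It illustrates the general technique only, not the Earth Species Project's code; the synthetic "human" and "whale" embeddings are placeholders, and in a truly unsupervised setting the anchor pairs would be bootstrapped (for example, adversarially) rather than given.

```python
# Minimal sketch: align two vocalization-embedding spaces with an orthogonal
# (Procrustes) mapping, the core refinement step in unsupervised translation
# pipelines. Anchor pairs are assumed here; unsupervised methods bootstrap them.
import numpy as np

rng = np.random.default_rng(0)
human_vecs = rng.normal(size=(1000, 64))        # hypothetical "human call" embeddings
true_map = np.linalg.qr(rng.normal(size=(64, 64)))[0]
whale_vecs = human_vecs @ true_map              # hypothetical "whale call" embeddings

# 200 assumed anchor pairs (same row index in both spaces).
anchors = slice(0, 200)
X, Y = human_vecs[anchors], whale_vecs[anchors]

# Orthogonal Procrustes: W = argmin ||XW - Y||_F subject to W^T W = I.
U, _, Vt = np.linalg.svd(X.T @ Y)
W = U @ Vt

# "Translate" a human-side embedding and find its nearest whale-side neighbor.
query = human_vecs[123] @ W
nearest = np.argmin(np.linalg.norm(whale_vecs - query, axis=1))
print(nearest)  # 123 if the mapping was recovered
```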

Inspired by the scientists who first documented the complex sonic arrangements of humpback whales in the 1960s—a discovery that ushered in the modern marine conservation movement—Selvitelle hopes that an AI-powered animal translator can have a similar effect on environmentalism today.

“A lot of shifts happen when someone who doesn’t have a voice gains a voice,” he says.

A challenge with this sort of AI software remains verification and validation. Normally, machine-learning algorithms are benchmarked against a human expert, but who is to say if a cybernetic translation of a sperm whale’s clicks is accurate or not?

One could back-translate an English expression into sperm whale-ese and then into English again. But with the great apes, there might be a better option.
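As a rough illustration of that round-trip idea, here is a toy sketch of a back-translation consistency check. The two translation functions are placeholders standing in for real learned models; only the scoring scaffold is the point.

```python
# Sketch of the round-trip check described above: translate English -> "whale",
# translate back, and score how much of the original survives. The translate_*
# functions are placeholders for whatever learned models would fill these roles.
from difflib import SequenceMatcher

def translate_to_whale(text: str) -> str:
    # Placeholder "model": reverse the string to stand in for a real translator.
    return text[::-1]

def translate_to_english(clicks: str) -> str:
    # Placeholder inverse model.
    return clicks[::-1]

def round_trip_score(sentence: str) -> float:
    """Similarity between the original sentence and its back-translation (0..1)."""
    restored = translate_to_english(translate_to_whale(sentence))
    return SequenceMatcher(None, sentence.lower(), restored.lower()).ratio()

print(round_trip_score("the pod is heading north"))  # 1.0 for this toy inverse pair
```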

According to primatologist Sue Savage-Rumbaugh, expertly trained bonobos could serve as bilingual interpreters, translating the argot of apes into the parlance of people, and vice versa.

Not just any trained ape will do, though. They have to grow up in a mixed Pan/Homo environment, as Kanzi and Panbanisha were.

“If I can have a chat with a cow, maybe I can have more compassion for it.”
—Jeremy Coller

Those bonobos were raised effectively from birth both by Savage-Rumbaugh, who taught the animals to understand spoken English and to communicate via hundreds of different pictographic “lexigrams,” and a bonobo mother named Matata that had lived for six years in the Congolese rainforests before her capture.

Unlike all other research primates—which are brought into captivity as infants, reared by human caretakers, and have limited exposure to their natural cultures or languages—those apes thus grew up fluent in both bonobo and human.

Panbanisha died in 2012, but Kanzi, aged 38, is still going strong, living at an ape sanctuary in Des Moines, Iowa. Researchers continue to study his cognitive abilities—Francine Dolins, a primatologist at the University of Michigan-Dearborn, is running one study in which Kanzi and other apes hunt rabbits and forage for fruit through avatars on a touchscreen. Kanzi could, in theory, be recruited to check the accuracy of any Google Translate–like app for bonobo hoots, barks, grunts, and cries.

Alternatively, Kanzi could simply provide Internet-based interpreting services for our two species. He’s already proficient at video chatting with humans, notes Emily Walco, a PhD student at Harvard University who has personally Skyped with Kanzi. “He was super into it,” Walco says.

And if wild bonobos in Central Africa can be coaxed to gather around a computer screen, Savage-Rumbaugh is confident Kanzi could communicate with them that way. “It can all be put together,” she says. “We can have an Interspecies Internet.”

“Both the technology and the knowledge had to advance,” Savage-Rumbaugh notes. However, now, “the techniques that we learned could really be extended to a cow or a pig.”

That’s music to the ears of Jeremy Coller, a private equity specialist whose foundation partially funded the Interspecies Internet Workshop. Coller is passionate about animal welfare and has devoted much of his philanthropic efforts toward the goal of ending factory farming.

At the workshop, his foundation announced the creation of the Coller Doolittle Prize, a US $100,000 award to help fund further research related to the Interspecies Internet. (A working group also formed to synthesize plans for the emerging field, to facilitate future event planning, and to guide testing of shared technology platforms.)

Why would a multi-millionaire with no background in digital communication systems or cognitive psychology research want to back the initiative? For Coller, the motivation boils down to interspecies empathy.

“If I can have a chat with a cow,” he says, “maybe I can have more compassion for it.”

An abridged version of this post appears in the September 2019 print issue as “Elephants, Dolphins, and Chimps Need the Internet, Too.”

Posted in Human Robots

#435765 The Four Converging Technologies Giving ...

How each of us sees the world is about to change dramatically.

For all of human history, the experience of looking at the world was roughly the same for everyone. But boundaries between the digital and physical are beginning to fade.

The world around us is gaining layer upon layer of digitized, virtually overlaid information—making it rich, meaningful, and interactive. As a result, our respective experiences of the same environment are becoming vastly different, personalized to our goals, dreams, and desires.

Welcome to Web 3.0, or the Spatial Web. In version 1.0, static documents and read-only interactions limited the internet to one-way exchanges. Web 2.0 provided quite an upgrade, introducing multimedia content, interactive web pages, and participatory social media. Yet, all this was still mediated by two-dimensional screens.

Today, we are witnessing the rise of Web 3.0, riding the convergence of high-bandwidth 5G connectivity, rapidly evolving AR eyewear, an emerging trillion-sensor economy, and powerful artificial intelligence.

As a result, we will soon be able to superimpose digital information atop any physical surrounding—freeing our eyes from the tyranny of the screen, immersing us in smart environments, and making our world endlessly dynamic.

In the third post of our five-part series on augmented reality, we will explore the convergence of AR, AI, sensors, and blockchain and dive into the implications through a key use case in manufacturing.

A Tale of Convergence
Let’s deconstruct everything beneath the sleek AR display.

It all begins with graphics processing units (GPUs)—electronic circuits that perform rapid calculations to render images. (GPUs can be found in mobile phones, game consoles, and computers.)

However, because AR requires such extensive computing power, single GPUs will not suffice. Instead, blockchain can now enable distributed GPU processing power, and blockchains specifically dedicated to AR holographic processing are on the rise.

Next up, cameras and sensors will aggregate real-time data from any environment to seamlessly integrate physical and virtual worlds. Meanwhile, body-tracking sensors are critical for aligning a user’s self-rendering in AR with a virtually enhanced environment. Depth sensors then provide data for 3D spatial maps, while cameras absorb more surface-level, detailed visual input. In some cases, sensors might even collect biometric data, such as heart rate and brain activity, to incorporate health-related feedback in our everyday AR interfaces and personal recommendation engines.

The next step in the pipeline involves none other than AI. Processing enormous volumes of data instantaneously, embedded AI algorithms will power customized AR experiences in everything from artistic virtual overlays to personalized dietary annotations.

In retail, AIs will use your purchasing history, current closet inventory, and possibly even mood indicators to display digitally rendered items most suitable for your wardrobe, tailored to your measurements.
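As a purely illustrative sketch of the kind of ranking such a system might perform, the toy code below scores catalog items against assumed purchase-history weights, wardrobe contents, and a mood signal. Every field and weight here is invented; a production system would learn these from data.

```python
# Toy sketch of the ranking described above. All values are invented for
# illustration; a real system would use learned models, not hand-set weights.
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    style: str          # e.g. "casual", "formal"
    color: str
    fits_measurements: bool

purchase_history_styles = {"casual": 0.7, "formal": 0.3}  # assumed preference weights
wardrobe_colors = {"black", "navy"}                        # colors already owned
mood = "upbeat"                                            # hypothetical mood signal

def score(item: Item) -> float:
    s = purchase_history_styles.get(item.style, 0.0)
    s += 0.2 if item.color not in wardrobe_colors else 0.0   # favor something new
    s += 0.1 if mood == "upbeat" and item.color == "yellow" else 0.0
    return s if item.fits_measurements else 0.0

catalog = [Item("rain jacket", "casual", "yellow", True),
           Item("blazer", "formal", "navy", True),
           Item("hoodie", "casual", "black", False)]
for item in sorted(catalog, key=score, reverse=True):
    print(item.name, round(score(item), 2))
```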

In healthcare, smart AR glasses will provide physicians with immediately accessible and maximally relevant information (parsed from the entirety of a patient’s medical records and current research) to aid in accurate diagnoses and treatments, freeing doctors to engage in the more human-centric tasks of establishing trust, educating patients and demonstrating empathy.

Image Credit: PHD Ventures.
Convergence in Manufacturing
One of the nearest-term use cases of AR is manufacturing, as large producers begin dedicating capital to enterprise AR headsets. And over the next ten years, AR will converge with AI, sensors, and blockchain to multiply manufacturer productivity and employee experience.

(1) Convergence with AI
In initial application, digital guides superimposed on production tables will vastly improve employee accuracy and speed, while minimizing error rates.

The International Air Transport Association (IATA), whose member airlines supply 82 percent of air travel, has already implemented industrial tech company Atheer’s AR headsets in cargo management. IATA reported a whopping 30 percent improvement in cargo handling speed and no less than a 90 percent reduction in errors.

With similar success rates, Boeing brought Skylight’s smart AR glasses to the runway, now used in the manufacturing of hundreds of airplanes. Sure enough—the aerospace giant has now seen a 25 percent drop in production time and near-zero error rates.

Beyond cargo management and air travel, however, smart AR headsets will also enable on-the-job training without reducing the productivity of other workers or sacrificing hardware. Jaguar Land Rover, for instance, implemented Bosch’s Re’flekt One AR solution to gear technicians with “x-ray” vision: allowing them to visualize the insides of Range Rover Sport vehicles without removing any dashboards.

And as enterprise capabilities continue to soar, AIs will soon become the go-to experts, offering support to manufacturers in need of assembly assistance. Instant guidance and real-time feedback will dramatically reduce production downtime, boost overall output, and even help customers struggling with DIY assembly at home.

Perhaps one of the most profitable business opportunities, AR guidance through centralized AI systems will also serve to mitigate supply chain inefficiencies at extraordinary scale. Coordinating moving parts, eliminating the need for manned scanners at each checkpoint, and directing traffic within warehouses, joint AI-AR systems will vastly improve workflow while overseeing quality assurance.

After its initial implementation of AR “vision picking” in 2015, leading courier company DHL recently announced it would continue to use Google’s newest smart lens in warehouses across the world. Motivated by the initial group’s reported 15 percent jump in productivity, DHL’s decision is part of the logistics giant’s $300 million investment in new technologies.

And as direct-to-consumer e-commerce fundamentally transforms the retail sector, supply chain optimization will only grow increasingly vital. AR could very well prove the definitive step for gaining a competitive edge in delivery speeds.

As explained by Vital Enterprises CEO Ash Eldritch, “All these technologies that are coming together around artificial intelligence are going to augment the capabilities of the worker and that’s very powerful. I call it Augmented Intelligence. The idea is that you can take someone of a certain skill level and by augmenting them with artificial intelligence via augmented reality and the Internet of Things, you can elevate the skill level of that worker.”

Already, large producers like Goodyear, thyssenkrupp, and Johnson Controls are using the Microsoft HoloLens 2—priced at $3,500 per headset—for manufacturing and design purposes.

Perhaps the most heartening outcome of the AI-AR convergence is that, rather than replacing humans in manufacturing, AR is an ideal interface for human collaboration with AI. And as AI merges with human capital, prepare to see exponential improvements in productivity, professional training, and product quality.

(2) Convergence with Sensors
On the hardware front, these AI-AR systems will require a mass proliferation of sensors to detect the external environment and apply computer vision in AI decision-making.

To measure depth, for instance, some scanning depth sensors project a structured pattern of infrared light dots onto a scene, detecting and analyzing reflected light to generate 3D maps of the environment. Stereoscopic imaging, using two lenses, has also been commonly used for depth measurements. But leading devices like Microsoft’s HoloLens 2 and Intel’s RealSense 400-series cameras implement a new method called “phased time-of-flight” (ToF).

In ToF sensing, the HoloLens 2 uses numerous lasers, each with 100 milliwatts (mW) of power, in quick bursts. The distance between nearby objects and the headset wearer is then measured by how much the phase of the returning light has shifted relative to the emitted signal. That phase difference reveals the location of each object within the field of view, enabling accurate hand-tracking and surface reconstruction.
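For readers who want the arithmetic behind that, here is a small worked example of the standard continuous-wave ToF relationship between phase shift and distance. The 100 MHz modulation frequency is an assumed round number for illustration, not a HoloLens 2 specification.

```python
# Worked example of the phase-to-distance relationship in continuous-wave
# time-of-flight sensing: d = c * delta_phi / (4 * pi * f_mod).
import math

C = 299_792_458.0          # speed of light, m/s
F_MOD = 100e6              # assumed modulation frequency, Hz

def distance_from_phase(delta_phi_rad: float) -> float:
    """Distance in meters implied by a measured phase shift (radians)."""
    return C * delta_phi_rad / (4 * math.pi * F_MOD)

# A phase shift of pi/2 at 100 MHz corresponds to roughly 0.37 m.
print(round(distance_from_phase(math.pi / 2), 3))

# The unambiguous range is one full cycle: c / (2 * f_mod), about 1.5 m here.
print(round(C / (2 * F_MOD), 3))
```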

With a far lower computing power requirement, the phased ToF sensor is also more durable than stereoscopic sensing, which relies on the precise alignment of two prisms. The phased ToF sensor’s silicon base also makes it easily mass-produced, rendering the HoloLens 2 a far better candidate for widespread consumer adoption.

To apply inertial measurement—typically used in airplanes and spacecraft—the HoloLens 2 additionally uses a built-in accelerometer, gyroscope, and magnetometer. Further equipped with four “environment understanding cameras” that track head movements, the headset also uses a 2.4MP HD photographic video camera and ambient light sensor that work in concert to enable advanced computer vision.

For natural viewing experiences, sensor-supplied gaze tracking increasingly creates depth in digital displays. Nvidia’s work on Foveated AR Display, for instance, brings the primary foveal area into focus, while peripheral regions fall into a softer background—mimicking natural visual perception and concentrating computing power on the area that needs it most.

Gaze tracking sensors are also slated to grant users control over their (now immersive) screens without any hand gestures. Simple visual cues, such as staring at an object for more than three seconds, will activate commands instantaneously.
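A dwell-based trigger of that sort is simple to express in code. The sketch below uses the three-second threshold mentioned above on simulated gaze samples; a real system would stream samples from an eye tracker.

```python
# Minimal sketch of dwell-based gaze activation: if the gaze stays on the same
# target for more than three seconds, fire the command. Gaze samples here are
# simulated (timestamp, target) pairs.
DWELL_SECONDS = 3.0

def watch_gaze(samples):
    """samples: iterable of (timestamp, target_id or None). Yields activated targets."""
    current, since = None, None
    for t, target in samples:
        if target != current:
            current, since = target, t          # gaze moved: restart the timer
        elif target is not None and t - since >= DWELL_SECONDS:
            yield target                        # held long enough: trigger command
            current, since = None, None         # reset so it fires once per dwell

# Simulated 0.5 s gaze samples: 3.5 s on "lamp", then looking away.
samples = [(i * 0.5, "lamp" if i < 8 else None) for i in range(12)]
print(list(watch_gaze(samples)))   # ['lamp']
```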

And our manufacturing example above is not the only one. Stacked convergence of blockchain, sensors, AI and AR will disrupt almost every major industry.

Take healthcare, for example, wherein biometric sensors will soon customize users’ AR experiences. Already, MIT Media Lab’s Deep Reality group has created an underwater VR relaxation experience that responds to real-time brain activity detected by a modified version of the Muse EEG. The experience even adapts to users’ biometric data, from heart rate to electrodermal activity (input from an Empatica E4 wristband).

Now rapidly dematerializing, sensors will converge with AR to improve physical-digital surface integration, intuitive hand and eye controls, and an increasingly personalized augmented world. Keep an eye on companies like MicroVision, now making tremendous leaps in sensor technology.

While I’ll be doing a deep dive into sensor applications across each industry in our next blog, it’s critical to first discuss how we might power sensor- and AI-driven augmented worlds.

(3) Convergence with Blockchain
Because AR requires much more compute power than typical 2D experiences, centralized GPUs and cloud computing systems are hard at work to provide the necessary infrastructure. Nonetheless, the workload is taxing and blockchain may prove the best solution.

A major player in this pursuit, Otoy aims to create the largest distributed GPU network in the world, called the Render Network (RNDR). Built specifically on the Ethereum blockchain for holographic media, and currently in beta testing, the network is set to make AR deployment far more accessible.

Alphabet Chairman Eric Schmidt (an investor in Otoy’s network), has even said, “I predicted that 90% of computing would eventually reside in the web based cloud… Otoy has created a remarkable technology which moves that last 10%—high-end graphics processing—entirely to the cloud. This is a disruptive and important achievement. In my view, it marks the tipping point where the web replaces the PC as the dominant computing platform of the future.”

Leveraging the crowd, RNDR allows anyone with a GPU to contribute processing power to the network for a commission of up to $300 a month in RNDR tokens, which can then be redeemed for cash or used to create the contributor’s own AR content.

In a double win, Otoy’s blockchain network and similar iterations not only allow designers to profit when not using their GPUs, but also democratize the experience for newer artists in the field.

And beyond these networks’ power suppliers, distributing GPU processing power will allow more manufacturing companies to access AR design tools and customize learning experiences. By further dispersing content creation across a broad network of individuals, blockchain also has the valuable potential to boost AR hardware investment across a number of industry beneficiaries.

On the consumer side, startups like Scanetchain are also entering the blockchain-AR space for a different reason. Allowing users to scan items with their smartphone, Scanetchain’s app provides access to a trove of information, from manufacturer and price, to origin and shipping details.

Based on NEM (a peer-to-peer cryptocurrency that implements a blockchain consensus algorithm), the app aims to make information far more accessible and, in the process, create a social network of purchasing behavior. Users earn tokens by watching ads, and all transactions are hashed into blocks and securely recorded.

The writing is on the wall: the future of brick-and-mortar retail will lean heavily on blockchain to create the necessary digital links.

Final Thoughts
Integrating AI into AR creates an “auto-magical” manufacturing pipeline that will fundamentally transform the industry, cutting down on marginal costs, reducing inefficiencies and waste, and maximizing employee productivity.

Bolstering the AI-AR convergence, sensor technology is already blurring the boundaries between our augmented and physical worlds, soon to be near-undetectable. While intuitive hand and eye motions dictate commands in a hands-free interface, biometric data is poised to customize each AR experience to be far more in touch with our mental and physical health.

And underpinning it all, distributed computing power with blockchain networks like RNDR will democratize AR, boosting global consumer adoption at plummeting price points.

As AR soars in importance—whether in retail, manufacturing, entertainment, or beyond—the stacked convergence discussed above merits significant investment over the next decade. The augmented world is only just getting started.

Join Me
(1) A360 Executive Mastermind: Want even more context about how converging exponential technologies will transform your business and industry? Consider joining Abundance 360, a highly selective community of 360 exponentially minded CEOs, who are on a 25-year journey with me—or as I call it, a “countdown to the Singularity.” If you’d like to learn more and consider joining our 2020 membership, apply here.

Share this with your friends, especially if they are interested in any of the areas outlined above.

(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.

This article originally appeared on Diamandis.com

Image Credit: Funky Focus / Pixabay

Posted in Human Robots

#435731 Video Friday: NASA Is Sending This ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, UK
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, PA, USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

The big news today is that NASA is sending a robot to Saturn’s moon Titan. A flying robot. The Dragonfly mission will launch in 2026 and arrive in 2034, but you knew that already, because last January, we posted a detailed article about the concept from the Applied Physics Lab at Johns Hopkins University. And now it’s not a concept anymore, yay!

Again, read all the details, plus an interview, in our 2018 article.

[ NASA ]

A robotic gripping arm that uses engineered bacteria to “taste” for a specific chemical has been developed by engineers at the University of California, Davis, and Carnegie Mellon University. The gripper is a proof-of-concept for biologically-based soft robotics.

The new device uses a biosensing module based on E. coli bacteria engineered to respond to the chemical IPTG by producing a fluorescent protein. The bacterial cells reside in wells with a flexible, porous membrane that allows chemicals to enter but keeps the cells inside. This biosensing module is built into the surface of a flexible gripper on a robotic arm, so the gripper can “taste” the environment through its fingers.

When IPTG crosses the membrane into the chamber, the cells fluoresce and electronic circuits inside the module detect the light. The electrical signal travels to the gripper’s control unit, which can decide whether to pick something up or release it.

[ UC Davis ]
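To make the control flow described above concrete, here is a hedged sketch of the decision step: read the photodetector watching the engineered E. coli and choose a gripper action. The threshold and the release-on-detection policy are illustrative assumptions, not details of the UC Davis and CMU device.

```python
# Hedged sketch of the gripper's decision step. Values and policy are invented.
FLUORESCENCE_THRESHOLD = 0.6   # assumed normalized photodetector reading

def gripper_command(fluorescence: float, holding: bool) -> str:
    """Return 'release', 'grip', or 'hold' from one biosensor reading."""
    chemical_detected = fluorescence >= FLUORESCENCE_THRESHOLD
    if holding and chemical_detected:
        return "release"       # e.g., drop objects that "taste" of IPTG
    if not holding and not chemical_detected:
        return "grip"
    return "hold"

for reading in (0.1, 0.8):
    print(reading, gripper_command(reading, holding=True))
# 0.1 hold
# 0.8 release
```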

The Toyota Research Institute (TRI) is taking on the hard problems in manipulation research toward making human-assist robots reliable and robust. Dr. Russ Tedrake, TRI Vice President of Robotics Research, explains how we are exploring the challenges and addressing the reliability gap by using a robot loading dishes in a dishwasher as an example task.

[ TRI ]

The Tactile Telerobot is the world’s first haptic telerobotic system that transmits realistic touch feedback to an operator located anywhere in the world. It is the product of a collaboration between Shadow Robot Company, HaptX, and SynTouch. All Nippon Airways funded the project’s initial research and development.

What’s really unique about this is the HaptX tactile feedback system, which is something we’ve been following for several years now. It’s one of the most magical tech experiences I’ve ever had, and you can read about it here and here.

[ HaptX ]

Thanks Andrew!

I love how snake robots can emulate some of the fanciest moves of real snakes, and then also do bonkers things that real snakes never do.

[ Matsuno Lab ]

Here are a couple interesting videos from the Human-Robot Interaction Lab at Tufts.

A robot is instructed to perform an action and cannot do it due to lack of sensors. But when another robot is placed nearby, it can execute the instruction by tacitly tapping into the other robot’s mind and using that robot’s sensors for its own actions. Yes, it’s automatic, and yes, it’s the BORG!

Two Nao robots are instructed to perform a dance and are able to do it right after instruction. Moreover, they can switch roles immediately, and even a third, different robot (a PR2) can perform the dance right away, demonstrating the ability of our DIARC architecture to learn quickly and share the knowledge with any type of robot running the architecture.

Compared to Nao, PR2 just sounds… depressed.

[ HRI Lab ]

This work explores the problem of robot tool construction – creating tools from parts available in the environment. We advance the state-of-the-art in robotic tool construction by introducing an approach that enables the robot to construct a wider range of tools with greater computational efficiency. Specifically, given an action that the robot wishes to accomplish and a set of building parts available to the robot, our approach reasons about the shape of the parts and potential ways of attaching them, generating a ranking of part combinations that the robot then uses to construct and test the target tool. We validate our approach on the construction of five tools using a physical 7-DOF robot arm.

[ RAIL Lab ] via [ RSS ]
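For a sense of what "generating a ranking of part combinations" might look like in its simplest form, here is a toy sketch that scores head-handle pairings by assumed shape suitability and attachment plausibility. The scoring tables are invented placeholders, not the RAIL lab's actual reasoning system.

```python
# Toy sketch: enumerate part pairs, score shape fit and attachment plausibility,
# and test candidates in ranked order. All scores are invented placeholders.
from itertools import permutations

parts = ["flat plate", "long handle", "coil", "foam block"]

shape_fit = {      # assumed suitability of a part as the "head" for a scooping action
    "flat plate": 0.9, "coil": 0.4, "foam block": 0.3, "long handle": 0.1,
}
attach_score = {   # assumed ease of attaching (head, handle)
    ("flat plate", "long handle"): 0.8,
    ("coil", "long handle"): 0.6,
    ("foam block", "long handle"): 0.5,
}

def candidate_score(head: str, handle: str) -> float:
    return shape_fit.get(head, 0.0) * attach_score.get((head, handle), 0.1)

ranked = sorted(permutations(parts, 2), key=lambda p: candidate_score(*p), reverse=True)
for head, handle in ranked[:3]:
    print(f"try attaching {head} to {handle}: score {candidate_score(head, handle):.2f}")
```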

We like Magazino’s approach to warehouse picking: constrain the problem to something you can reliably solve, like shoeboxes.

Magazino has announced a new pricing model for their robots. You pay 55k euros for the robot itself, and then after that, all you pay to keep the robot working is 6 cents per pick, so the robot is only costing you money for the work that it actually does.

[ Magazino ]

Thanks Florin!

Human-robot collaboration is happening in factories worldwide, yet few smaller businesses are using it, due to high costs or the difficulty of customization. Elephant Robotics, a new player from Shenzhen, the Silicon Valley of Asia, has set its sights on helping smaller businesses gain access to smart robotics. The company created the Catbot (a collaborative robotic arm) that offers high efficiency and flexibility to various industries.

The Catbot is set to help with everything from education projects, photography, and massage to serving as a personal barista or co-playing a tabletop game. The customizations are endless. To increase its flexibility, the Catbot is extremely easy to program, from high-precision tasks up to heavier, larger-scale projects.

[ Elephant Robotics ]

Thanks Johnson!

Dronistics, an EPFL spin-off, has been testing out their enclosed delivery drone in the Dominican Republic through a partnership with WeRobotics.

[ WeRobotics ]

QTrobot is an expressive humanoid robot designed to help children with autism spectrum disorder and children with special educational needs learn new skills. QTrobot uses simple, exaggerated facial expressions combined with interactive games and stories to help children improve their emotional skills. It helps children learn about and better understand emotions, and teaches them strategies to handle their emotions more effectively.

[ LuxAI ]

Here’s a typical day in the life of a Tertill solar-powered autonomous weed-destroying robot.

$300, now shipping from Franklin Robotics.

[ Tertill ]

PAL Robotics is excited to announce a new TIAGo with two arms, TIAGo++! After carefully listening to the robotics community’s needs, we used TIAGo’s modularity to integrate two 7-DoF arms into our mobile manipulator. TIAGo++ can help you swiftly accomplish your research goals, opening endless possibilities in mobile manipulation.

[ PAL Robotics ]

Thanks Jack!

You’ve definitely already met the Cobalt security robot, but Toyota AI Ventures just threw a pile of money at them and would therefore like you to experience this re-introduction:

[ Cobalt Robotics ] via [ Toyota AI ]

ROSIE is a mobile manipulator kit from HEBI Robotics. And if you don’t like ROSIE, the modular nature of HEBI’s hardware means that you can take her apart and make something more interesting.

[ HEBI Robotics ]

Learn about Kawasaki Robotics’ second addition to their line of duAro dual-arm collaborative robots, duAro2. This model offers an extended vertical reach (550 mm) and an increased payload capacity (3 kg/arm).

[ Kawasaki Robotics ]

Drone Delivery Canada has partnered with Peel Region Paramedics to pilot its proprietary drone delivery platform, enabling rapid first-responder support via drone with the goal of reducing response times and potentially saving lives.

[ Drone Delivery Canada ]

In this week’s episode of Robots in Depth, Per speaks with Harri Ketamo, from Headai.

Harri Ketamo talks about AI and how he aims to mimic human decision-making with algorithms. Harri has done a lot of AI for computer games, creating opponents that are entertaining to play against. It is easy to develop a very bad or a very good opponent, but designing one that behaves like a human, is entertaining to play against, and can still be beaten is quite hard. He talks about how AI in computer games is an important storytelling tool and a key part of making a game entertaining to play.

This work led him into other parts of the AI field. Harri thinks that we sometimes have a problem separating what is real from the kind of storytelling he knows from gaming AI. He calls for critical analysis of AI and says that data has to be used to verify AI decisions and results.

[ Robots in Depth ]

Thanks Per!

Posted in Human Robots

#435687 Humanoid Robots Teach Coping Skills to ...

Photo: Rob Felt

IEEE Senior Member Ayanna Howard with one of the interactive androids that help children with autism improve their social and emotional engagement.

THE INSTITUTE: Children with autism spectrum disorder can have a difficult time expressing their emotions and can be highly sensitive to sound, sight, and touch. That sometimes restricts their participation in everyday activities, leaving them socially isolated. Occupational therapists can help them cope better, but the time they’re able to spend is limited and the sessions tend to be expensive.

Roboticist Ayanna Howard, an IEEE senior member, has been using interactive androids to guide children with autism on ways to socially and emotionally engage with others—as a supplement to therapy. Howard is chair of the School of Interactive Computing and director of the Human-Automation Systems Lab at Georgia Tech. She helped found Zyrobotics, a Georgia Tech VentureLab startup that is working on AI and robotics technologies to engage children with special needs. Last year Forbes named Howard, Zyrobotics’ chief technology officer, one of the Top 50 U.S. Women in Tech.

In a recent study, Howard and other researchers explored how robots might help children navigate sensory experiences. The experiment involved 18 participants between the ages of 4 and 12; five had autism, and the rest were meeting typical developmental milestones. Two humanoid robots were programmed to express boredom, excitement, nervousness, and 17 other emotional states. As children explored stations set up for hearing, seeing, smelling, tasting, and touching, the robots modeled what the socially acceptable responses should be.

“If a child’s expression is one of happiness or joy, the robot will have a corresponding response of encouragement,” Howard says. “If there are aspects of frustration or sadness, the robot will provide input to try again.” The study suggested that many children with autism exhibit stronger levels of engagement when the robots interact with them at such sensory stations.
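In code, the affect-to-response pattern Howard describes could be as simple as a lookup from detected emotion to robot utterance. The sketch below is a simplified illustration, not the study's actual protocol, and the labels and phrasings are invented.

```python
# Toy sketch of mapping a child's detected affect to a robot response.
RESPONSES = {
    "happiness": "That looks great, keep exploring!",
    "joy": "That looks great, keep exploring!",
    "frustration": "That's okay. Let's try it again together.",
    "sadness": "That's okay. Let's try it again together.",
}

def robot_response(detected_affect: str) -> str:
    # Fall back to a neutral prompt for any unmodeled emotional state.
    return RESPONSES.get(detected_affect.lower(), "I'm here with you. What do you notice?")

print(robot_response("joy"))
print(robot_response("frustration"))
```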

It is one of many robotics projects Howard has tackled. She has designed robots for researching glaciers, and she is working on assistive robots for the home, as well as an exoskeleton that can help children who have motor disabilities.

Howard spoke about her work during the Ethics in AI: Impacts of (Anti?) Social Robotics panel session held in May at the IEEE Vision, Innovation, and Challenges Summit in San Diego. You can watch the session on IEEE.tv.

The next IEEE Vision, Innovation, and Challenges Summit and Honors Ceremony will be held on 15 May 2020 at the JW Marriott Parq Vancouver hotel, in Vancouver.

In this interview with The Institute, Howard talks about how she got involved with assistive technologies, the need for a more diverse workforce, and ways IEEE has benefited her career.

FOCUS ON ACCESSIBILITY
Howard was inspired to work on technology that can improve accessibility in 2008 while teaching high school students at a summer camp devoted to science, technology, engineering, and math.

“A young lady with a visual impairment attended camp. The robot programming tools being used at the camp weren’t accessible to her,” Howard says. “As an engineer, I want to fix problems when I see them, so we ended up designing tools to enable access to programming tools that could be used in STEM education.

“That was my starting motivation, and this theme of accessibility has expanded to become a main focus of my research. One of the things about this world of accessibility is that when you start interacting with kids and parents, you discover another world out there of assistive technologies and how robotics can be used for good in education as well as therapy.”

DIVERSITY OF THOUGHT
The Institute asked Howard why it’s important to have a more diverse STEM workforce and what could be done to increase the number of women and others from underrepresented groups.

“The makeup of the current engineering workforce isn’t necessarily representative of the world, which is composed of different races, cultures, ages, disabilities, and socio-economic backgrounds,” Howard says. “We’re creating products used by people around the globe, so we have to ensure they’re being designed for a diverse population. As IEEE members, we also need to engage with people who aren’t engineers, and we don’t do that enough.”

Educational institutions are doing a better job of increasing diversity in areas such as gender, she says, adding that more work is needed because the enrollment numbers still aren’t representative of the population and the gains don’t necessarily carry through after graduation.

“There has been an increase in the number of underrepresented minorities and females going into engineering and computer science,” she says, “but data has shown that their numbers are not sustained in the workforce.”

ROLE MODEL
Because there are more underrepresented groups on today’s college campuses that can form a community, the lack of engineering role models—although a concern on campuses—is more extreme for preuniversity students, Howard says.

“Depending on where you go to school, you may not know what an engineer does or even consider engineering as an option,” she says, “so there’s still a big disconnect there.”

Howard has been involved for many years in math- and science-mentoring programs for at-risk high school girls. She tells them to find what they’re passionate about and combine it with math and science to create something. She also advises them not to let anyone tell them that they can’t.

Howard’s father is an engineer. She says he never encouraged or discouraged her to become one, but when she broke something, he would show her how to fix it and talk her through the process. Along the way, he taught her a logical way of thinking she says all engineers have.

“When I would try to explain something, he would quiz me and tell me to ‘think more logically,’” she says.

Howard earned a bachelor’s degree in engineering from Brown University, in Providence, R.I., then received a master’s degree and a doctorate in electrical engineering from the University of Southern California. Before joining the faculty of Georgia Tech in 2005, she worked at NASA’s Jet Propulsion Laboratory at the California Institute of Technology for more than a decade as a senior robotics researcher and deputy manager in the Office of the Chief Scientist.

ACTIVE VOLUNTEER
Howard’s father was also an IEEE member, but that’s not why she joined the organization. She says she signed up when she was a student because, “that was something that you just did. Plus, my student membership fee was subsidized.”

She kept the membership as a grad student because of the discounted rates members receive on conferences.

Those conferences have had an impact on her career. “They allow you to understand what the state of the art is,” she says. “Back then you received a printed conference proceeding and reading through it was brutal, but by attending it in person, you got a 15-minute snippet about the research.”

Howard is an active volunteer with the IEEE Robotics and Automation and the IEEE Systems, Man, and Cybernetics societies, holding many positions and serving on several committees. She is also featured in the IEEE Impact Creators campaign. These members were selected because they inspire others to innovate for a better tomorrow.

“I value IEEE for its community,” she says. “One of the nice things about IEEE is that it’s international.”

Posted in Human Robots

#435640 Video Friday: This Wearable Robotic Tail ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
CLAWAR 2019 – August 26-28, 2019 – Kuala Lumpur, Malaysia
IEEE Africon 2019 – September 25-27, 2019 – Accra, Ghana
ISRR 2019 – October 6-10, 2019 – Hanoi, Vietnam
Ro-Man 2019 – October 14-18, 2019 – New Delhi, India
Humanoids 2019 – October 15-17, 2019 – Toronto, Canada
ARSO 2019 – October 31-1, 2019 – Beijing, China
ROSCon 2019 – October 31-1, 2019 – Macau
IROS 2019 – November 4-8, 2019 – Macau
Let us know if you have suggestions for next week, and enjoy today’s videos.

Lakshmi Nair from Georgia Tech describes some fascinating research towards robots that can create their own tools, as presented at ICRA this year:

Using a novel capability to reason about shape, function, and attachment of unrelated parts, researchers have for the first time successfully trained an intelligent agent to create basic tools by combining objects.

The breakthrough comes from Georgia Tech’s Robot Autonomy and Interactive Learning (RAIL) research lab and is a significant step toward enabling intelligent agents to devise more advanced tools that could prove useful in hazardous – and potentially life-threatening – environments.

[ Lakshmi Nair ]

Victor Barasuol, from the Dynamic Legged Systems Lab at IIT, wrote in to share some new research on their HyQ quadruped that enables sensorless shin collision detection. This helps the robot navigate unstructured environments, and also mitigates all those painful shin strikes, because ouch.

This will be presented later this month at the International Conference on Climbing and Walking Robots (CLAWAR) in Kuala Lumpur, Malaysia.

[ IIT ]

Thanks Victor!

You used to have a tail, you know—as an embryo, about a month in to your development. All mammals used to have tails, and now we just have useless tailbones, which don’t help us with balancing even a little bit. BRING BACK THE TAIL!

The tail, created by Junichi Nabeshima, Kouta Minamizawa, and MHD Yamen Saraiji from Keio University’s Graduate School of Media Design, was presented at SIGGRAPH 2019 Emerging Technologies.

[ Paper ] via [ Gizmodo ]

The noises in this video are fantastic.

[ ESA ]

Apparently the industrial revolution wasn’t a thorough enough beatdown of human knitting, because the robots are at it again.

[ MIT CSAIL ]

Skydio’s drones just keep getting more and more impressive. Now if only they’d make one that I can afford…

[ Skydio ]

The only thing more fun than watching robots is watching people react to robots.

[ SEER ]

There aren’t any robots in this video, but it’s robotics-related research, and very soothing to watch.

[ Stanford ]

#autonomousicecreamtricycle

In case it wasn’t clear, which it wasn’t, this is a Roboy project. And if you didn’t understand that first video, you definitely won’t understand this second one:

Whatever that t-shirt is at the end (Roboy in sunglasses puking rainbows…?) I need one.

[ Roboy ]

By adding electronics and computation technology to a simple cane that has been around since ancient times, a team of researchers at Columbia Engineering has transformed it into a 21st-century robotic device that can provide light-touch walking assistance to the elderly and others with impaired mobility.

The light-touch robotic cane, called CANINE, acts as a cane-like mobile assistant. The device improves the individual’s proprioception, or self-awareness in space, during walking, which in turn improves stability and balance.

[ ROAR Lab ]

During the second field experiment for DARPA’s OFFensive Swarm-Enabled Tactics (OFFSET) program, which took place at Fort Benning, Georgia, teams of autonomous air and ground robots tested tactics on a mission to isolate an urban objective. Similar to the way a firefighting crew establishes a boundary around a burning building, they first identified locations of interest and then created a perimeter around the focal point.

[ DARPA ]

I think there’s a bit of new footage here of Ghost Robotics’ Vision 60 quadruped walking around without sensors on unstructured terrain.

[ Ghost Robotics ]

If you’re as tired of passenger drone hype as I am, there’s absolutely no need to watch this video of NEC’s latest hover test.

[ AP ]

As researchers teach robots to perform more and more complex tasks, the need for realistic simulation environments is growing. Existing techniques for closing the reality gap by approximating real-world physics often require extensive real world data and/or thousands of simulation samples. This paper presents TuneNet, a new machine learning-based method to directly tune the parameters of one model to match another using an iterative residual tuning technique. TuneNet estimates the parameter difference between two models using a single observation from the target and minimal simulation, allowing rapid, accurate and sample-efficient parameter estimation.

The system can be trained via supervised learning over an auto-generated simulated dataset. We show that TuneNet can perform system identification, even when the true parameter values lie well outside the distribution seen during training, and demonstrate that simulators tuned with TuneNet outperform existing techniques for predicting rigid body motion. Finally, we show that our method can estimate real-world parameter values, allowing a robot to perform sim-to-real task transfer on a dynamic manipulation task unseen during training. We are also making a baseline implementation of our code available online.

[ Paper ]
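As a rough illustration of the iterative residual tuning loop described above, the toy sketch below repeatedly compares a target observation with a simulator's output and applies an estimated parameter correction. TuneNet learns that correction with a neural network; here a finite-difference (Gauss-Newton) estimate stands in so the example stays self-contained.

```python
# Toy sketch of iterative residual tuning: compare an observation of the "real"
# system with the simulator, estimate a parameter correction, apply it, repeat.
import numpy as np

def simulate(drag: float) -> np.ndarray:
    """Horizontal positions of a decelerating object over 10 steps (toy dynamics)."""
    v, x, xs = 5.0, 0.0, []
    for _ in range(10):
        v *= (1.0 - drag)
        x += v * 0.1
        xs.append(x)
    return np.array(xs)

true_drag = 0.30
observation = simulate(true_drag)        # single observation from the "target" system

theta = 0.05                             # initial simulator parameter
for _ in range(20):                      # iterative residual updates
    residual = observation - simulate(theta)
    eps = 1e-4
    sensitivity = (simulate(theta + eps) - simulate(theta)) / eps
    delta = np.dot(sensitivity, residual) / np.dot(sensitivity, sensitivity)
    theta += delta                       # apply the estimated correction
print(round(theta, 3))                   # converges near 0.30
```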

Here’s an update on what GITAI has been up to with their telepresence astronaut-replacement robot.

[ GITAI ]

Curiosity captured this 360-degree panorama of a location on Mars called “Teal Ridge” on June 18, 2019. This location is part of a larger region the rover has been exploring called the “clay-bearing unit” on the side of Mount Sharp, which is inside Gale Crater. The scene is presented with a color adjustment that approximates white balancing to resemble how the rocks and sand would appear under daytime lighting conditions on Earth.

[ MSL ]

Some updates (in English) on ROS from ROSCon France. The first is a keynote from Brian Gerkey:

And this second video is from Omri Ben-Bassat, about how to keep your Anki Vector alive using ROS:

All of the ROSCon FR talks are available on Vimeo.

[ ROSCon FR ]

Posted in Human Robots