#435765 The Four Converging Technologies Giving ...
How each of us sees the world is about to change dramatically.
For all of human history, the experience of looking at the world was roughly the same for everyone. But boundaries between the digital and physical are beginning to fade.
The world around us is gaining layer upon layer of digitized, virtually overlaid information—making it rich, meaningful, and interactive. As a result, our respective experiences of the same environment are becoming vastly different, personalized to our goals, dreams, and desires.
Welcome to Web 3.0, or the Spatial Web. In version 1.0, static documents and read-only interactions limited the internet to one-way exchanges. Web 2.0 provided quite an upgrade, introducing multimedia content, interactive web pages, and participatory social media. Yet, all this was still mediated by two-dimensional screens.
Today, we are witnessing the rise of Web 3.0, riding the convergence of high-bandwidth 5G connectivity, rapidly evolving AR eyewear, an emerging trillion-sensor economy, and powerful artificial intelligence.
As a result, we will soon be able to superimpose digital information atop any physical surrounding—freeing our eyes from the tyranny of the screen, immersing us in smart environments, and making our world endlessly dynamic.
In the third post of our five-part series on augmented reality, we will explore the convergence of AR, AI, sensors, and blockchain and dive into the implications through a key use case in manufacturing.
A Tale of Convergence
Let’s deconstruct everything beneath the sleek AR display.
It all begins with graphics processing units (GPUs)—electronic circuits that perform rapid calculations to render images. (GPUs can be found in mobile phones, game consoles, and computers.)
However, because AR requires such extensive computing power, single GPUs will not suffice. Instead, blockchain can now enable distributed GPU processing power, and blockchains specifically dedicated to AR holographic processing are on the rise.
Next up, cameras and sensors will aggregate real-time data from any environment to seamlessly integrate physical and virtual worlds. Meanwhile, body-tracking sensors are critical for aligning a user’s self-rendering in AR with a virtually enhanced environment. Depth sensors then provide data for 3D spatial maps, while cameras absorb more surface-level, detailed visual input. In some cases, sensors might even collect biometric data, such as heart rate and brain activity, to incorporate health-related feedback in our everyday AR interfaces and personal recommendation engines.
The next step in the pipeline involves none other than AI. Processing enormous volumes of data instantaneously, embedded AI algorithms will power customized AR experiences in everything from artistic virtual overlays to personalized dietary annotations.
In retail, AIs will use your purchasing history, current closet inventory, and possibly even mood indicators to display digitally rendered items most suitable for your wardrobe, tailored to your measurements.
In healthcare, smart AR glasses will provide physicians with immediately accessible and maximally relevant information (parsed from the entirety of a patient’s medical records and current research) to aid in accurate diagnoses and treatments, freeing doctors to engage in the more human-centric tasks of establishing trust, educating patients and demonstrating empathy.
Image Credit: PHD Ventures.
Convergence in Manufacturing
One of the nearest-term use cases of AR is manufacturing, as large producers begin dedicating capital to enterprise AR headsets. And over the next ten years, AR will converge with AI, sensors, and blockchain to multiply manufacturer productivity and improve the employee experience.
(1) Convergence with AI
In initial applications, digital guides superimposed on production tables will vastly improve employee accuracy and speed, while minimizing error rates.
Already, the International Air Transport Association (IATA) — whose member airlines account for 82 percent of air travel — has implemented industrial tech company Atheer’s AR headsets in cargo management. The results came quickly: IATA reported a whopping 30 percent improvement in cargo handling speed and no less than a 90 percent reduction in errors.
With similar success, Boeing brought smart AR glasses running Upskill’s Skylight software to its production lines, where they are now used in the manufacturing of hundreds of airplanes. Sure enough, the aerospace giant has seen a 25 percent drop in production time and near-zero error rates.
Beyond cargo management and air travel, however, smart AR headsets will also enable on-the-job training without reducing the productivity of other workers or sacrificing hardware. Jaguar Land Rover, for instance, implemented Bosch’s Re’flekt One AR solution to gear technicians with “x-ray” vision: allowing them to visualize the insides of Range Rover Sport vehicles without removing any dashboards.
And as enterprise capabilities continue to soar, AIs will soon become the go-to experts, offering support to manufacturers in need of assembly assistance. Instant guidance and real-time feedback will dramatically reduce production downtime, boost overall output, and even help customers struggling with DIY assembly at home.
Perhaps one of the most profitable business opportunities, AR guidance through centralized AI systems will also serve to mitigate supply chain inefficiencies at extraordinary scale. Coordinating moving parts, eliminating the need for manned scanners at each checkpoint, and directing traffic within warehouses, joint AI-AR systems will vastly improve workflow while overseeing quality assurance.
After its initial implementation of AR “vision picking” in 2015, leading courier company DHL recently announced it would continue to use Google’s newest smart glasses in warehouses across the world. Motivated by the pilot group’s reported 15 percent jump in productivity, DHL’s decision is part of the logistics giant’s $300 million investment in new technologies.
And as direct-to-consumer e-commerce fundamentally transforms the retail sector, supply chain optimization will only grow increasingly vital. AR could very well prove the definitive step for gaining a competitive edge in delivery speeds.
As explained by Vital Enterprises CEO Ash Eldritch, “All these technologies that are coming together around artificial intelligence are going to augment the capabilities of the worker and that’s very powerful. I call it Augmented Intelligence. The idea is that you can take someone of a certain skill level and by augmenting them with artificial intelligence via augmented reality and the Internet of Things, you can elevate the skill level of that worker.”
Already, large producers like Goodyear, thyssenkrupp, and Johnson Controls are using the Microsoft HoloLens 2—priced at $3,500 per headset—for manufacturing and design purposes.
Perhaps the most heartening outcome of the AI-AR convergence is that, rather than replacing humans in manufacturing, AR is an ideal interface for human collaboration with AI. And as AI merges with human capital, prepare to see exponential improvements in productivity, professional training, and product quality.
(2) Convergence with Sensors
On the hardware front, these AI-AR systems will require a mass proliferation of sensors to detect the external environment and apply computer vision in AI decision-making.
To measure depth, for instance, some scanning depth sensors project a structured pattern of infrared light dots onto a scene, then detect and analyze the reflected light to generate 3D maps of the environment. Stereoscopic imaging, which uses two offset lenses, is also commonly used for depth measurement. But leading devices like Microsoft’s HoloLens 2 and Intel’s RealSense 400-series cameras implement a newer method called “phased time-of-flight” (ToF).
In ToF sensing, the HoloLens 2 fires numerous lasers, each with 100 milliwatts (mW) of power, in quick bursts, then measures the phase shift between the emitted light and the light reflected back to the headset. Because that phase difference encodes the light’s round-trip travel time, it reveals the distance to each object within the field of view, enabling accurate hand-tracking and surface reconstruction.
With a far lower computing power requirement, the phased ToF sensor is also more durable than stereoscopic sensors, which rely on the precise alignment of two prisms. The phased ToF sensor’s silicon base also makes it easy to mass-produce, rendering the HoloLens 2 a far better candidate for widespread consumer adoption.
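For the technically curious, the two depth-sensing approaches boil down to simple geometry and wave physics. Here is a minimal Python sketch of both relationships; the focal length, baseline, and modulation frequency are illustrative values, not the specs of any shipping headset.

```python
import math

C = 3.0e8  # speed of light, m/s

def stereo_depth(focal_px, baseline_m, disparity_px):
    """Stereoscopic imaging: depth from the pixel offset (disparity) of the
    same point seen through two lenses a known baseline apart: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def tof_depth(phase_shift_rad, mod_freq_hz):
    """Phased time-of-flight: the phase shift of the returned light encodes
    the round-trip travel time: Z = c * phi / (4 * pi * f_mod)."""
    return C * phase_shift_rad / (4 * math.pi * mod_freq_hz)

# Illustrative numbers only, not headset specifications:
print(stereo_depth(focal_px=700, baseline_m=0.1, disparity_px=35))  # 2.0 m
print(tof_depth(phase_shift_rad=math.pi / 2, mod_freq_hz=50e6))     # 0.75 m
```

Note the ToF trade-off: a higher modulation frequency sharpens depth resolution but shortens the unambiguous range, since the measured phase wraps around every half modulation wavelength.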
To apply inertial measurement—typically used in airplanes and spacecraft—the HoloLens 2 additionally uses a built-in accelerometer, gyroscope, and magnetometer. Further equipped with four “environment understanding cameras” that track head movements, the headset also uses a 2.4MP HD photographic video camera and ambient light sensor that work in concert to enable advanced computer vision.
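How do those inertial readings combine into one stable head pose? A common approach is a complementary filter, which trusts the gyroscope over short timescales and the accelerometer over long ones. Below is a minimal sketch with an illustrative blend factor; this is a textbook technique, not Microsoft’s published algorithm.

```python
def complementary_filter(pitch_prev, gyro_rate, accel_pitch, dt, alpha=0.98):
    """Blend the integrated gyro reading (smooth, but drifts over time) with
    the accelerometer's gravity-derived pitch (noisy, but drift-free)."""
    return alpha * (pitch_prev + gyro_rate * dt) + (1 - alpha) * accel_pitch

# e.g., at 100 Hz: pitch = complementary_filter(pitch, gyro, accel_pitch, dt=0.01)
```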
For natural viewing experiences, sensor-supplied gaze tracking increasingly creates depth in digital displays. Nvidia’s work on Foveated AR Display, for instance, brings the primary foveal area into focus, while peripheral regions fall into a softer background—mimicking natural visual perception and concentrating computing power on the area that needs it most.
Gaze-tracking sensors are also slated to grant users control over their (now immersive) screens without any hand gestures. Simple visual cues, even staring at an object for more than three seconds, will activate commands instantaneously.
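Dwell-based selection of this kind is straightforward to implement: keep a timer on whatever the gaze ray currently hits, and fire once the dwell threshold passes. Here is a sketch; the three-second threshold comes from above, while the object IDs and callback are hypothetical.

```python
import time

DWELL_SECONDS = 3.0  # dwell threshold mentioned above

class DwellSelector:
    """Fire a command once gaze rests on the same object past the threshold."""
    def __init__(self):
        self.target, self.since, self.fired = None, 0.0, False

    def update(self, gazed_object, activate):
        """Call every frame with whatever the gaze ray hits (or None)."""
        now = time.monotonic()
        if gazed_object != self.target:
            # Gaze moved: retarget and restart the dwell timer.
            self.target, self.since, self.fired = gazed_object, now, False
        elif self.target is not None and not self.fired \
                and now - self.since >= DWELL_SECONDS:
            self.fired = True  # fire only once per dwell
            activate(self.target)

# e.g., call selector.update("thermostat", activate=print) on each frame
```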
And our manufacturing example above is not the only one. Stacked convergence of blockchain, sensors, AI and AR will disrupt almost every major industry.
Take healthcare, for example, wherein biometric sensors will soon customize users’ AR experiences. Already, MIT Media Lab’s Deep Reality group has created an underwater VR relaxation experience that responds to real-time brain activity detected by a modified version of the Muse EEG. The experience even adapts to users’ biometric data, from heart rate to electrodermal activity (input from an Empatica E4 wristband).
Now rapidly dematerializing, sensors will converge with AR to improve physical-digital surface integration, intuitive hand and eye controls, and an increasingly personalized augmented world. Keep an eye on companies like MicroVision, now making tremendous leaps in sensor technology.
While I’ll be doing a deep dive into sensor applications across each industry in our next blog, it’s critical to first discuss how we might power sensor- and AI-driven augmented worlds.
(3) Convergence with Blockchain
Because AR requires much more compute power than typical 2D experiences, centralized GPUs and cloud computing systems are hard at work to provide the necessary infrastructure. Nonetheless, the workload is taxing and blockchain may prove the best solution.
A major player in this pursuit, Otoy aims to create the largest distributed GPU network in the world, called the Render Network (RNDR). Built on the Ethereum blockchain specifically for holographic media, and currently in beta testing, the network is set to make AR deployment far more accessible.
Former Alphabet executive chairman Eric Schmidt (an investor in Otoy’s network) has even said, “I predicted that 90% of computing would eventually reside in the web based cloud… Otoy has created a remarkable technology which moves that last 10%—high-end graphics processing—entirely to the cloud. This is a disruptive and important achievement. In my view, it marks the tipping point where the web replaces the PC as the dominant computing platform of the future.”
Leveraging the crowd, RNDR allows anyone with a GPU to contribute their power to the network for a commission of up to $300 a month in RNDR tokens. These can then be redeemed for cash or used to create users’ own AR content.
In a double win, Otoy’s blockchain network and similar iterations not only allow designers to profit when not using their GPUs, but also democratize the experience for newer artists in the field.
And beyond these networks’ power suppliers, distributing GPU processing power will allow more manufacturing companies to access AR design tools and customize learning experiences. By further dispersing content creation across a broad network of individuals, blockchain also has the valuable potential to boost AR hardware investment across a number of industry beneficiaries.
On the consumer side, startups like Scanetchain are also entering the blockchain-AR space for a different reason. Allowing users to scan items with their smartphone, Scanetchain’s app provides access to a trove of information, from manufacturer and price, to origin and shipping details.
Based on NEM (a peer-to-peer cryptocurrency that implements a blockchain consensus algorithm), the app aims to make information far more accessible and, in the process, create a social network of purchasing behavior. Users earn tokens by watching ads, and all transactions are hashed into blocks and securely recorded.
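“Hashed into blocks” refers to the standard chaining construction: each block commits to its predecessor’s hash, so tampering with any recorded transaction invalidates every block after it. Here is a toy illustration of the general idea, not Scanetchain’s or NEM’s actual implementation.

```python
import hashlib, json, time

def make_block(transactions, prev_hash):
    """Commit a batch of transactions to the chain by hashing the previous
    block's hash into the new block's header."""
    header = {"time": time.time(), "prev": prev_hash, "tx": transactions}
    digest = hashlib.sha256(json.dumps(header, sort_keys=True).encode()).hexdigest()
    return {"header": header, "hash": digest}

genesis = make_block([], "0" * 64)
scan = make_block([{"item": "sneakers", "data": "maker, price, origin"}], genesis["hash"])
# Altering any earlier transaction changes that block's hash and breaks every link after it.
```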
The writing is on the wall—our future of brick-and-mortar retail will largely lean on blockchain to create the necessary digital links.
Final Thoughts
Integrating AI into AR creates an “auto-magical” manufacturing pipeline that will fundamentally transform the industry, cutting down on marginal costs, reducing inefficiencies and waste, and maximizing employee productivity.
Bolstering the AI-AR convergence, sensor technology is already blurring the boundaries between our augmented and physical worlds, soon to be near-undetectable. While intuitive hand and eye motions dictate commands in a hands-free interface, biometric data is poised to customize each AR experience to be far more in touch with our mental and physical health.
And underpinning it all, distributed computing power with blockchain networks like RNDR will democratize AR, boosting global consumer adoption at plummeting price points.
As AR soars in importance—whether in retail, manufacturing, entertainment, or beyond—the stacked convergence discussed above merits significant investment over the next decade. The augmented world is only just getting started.
Join Me
(1) A360 Executive Mastermind: Want even more context about how converging exponential technologies will transform your business and industry? Consider joining Abundance 360, a highly selective community of 360 exponentially minded CEOs, who are on a 25-year journey with me—or as I call it, a “countdown to the Singularity.” If you’d like to learn more and consider joining our 2020 membership, apply here.
Share this with your friends, especially if they are interested in any of the areas outlined above.
(2) Abundance-Digital Online Community: I’ve also created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is Singularity University’s ‘onramp’ for exponential entrepreneurs — those who want to get involved and play at a higher level. Click here to learn more.
This article originally appeared on Diamandis.com
Image Credit: Funky Focus / Pixabay
#435757 Robotic Animal Agility
An offshore wind power platform, somewhere in the North Sea, on a freezing cold night, with howling winds and waves crashing against the impressive structure. An imperturbable ANYmal is quietly conducting its inspection.
ANYmal, a medium-sized dog-like quadruped robot, walks down the stairs, lifts a “paw” to open doors or to call the elevator, and trots along corridors. Darkness is no problem: it knows the place perfectly, having 3D-mapped it. Its laser sensors keep it informed about its precise path, location, and potential obstacles. It conducts its inspection across several rooms. Its cameras zoom in on counters, recording the measurements displayed. Its thermal sensors record the temperature of machines and equipment, and its ultrasound microphone checks for potential gas leaks. The robot also inspects lever positions as well as the correct positioning of regulatory fire extinguishers. As the electronic buzz of its motors resumes, it carries on working tirelessly.
After a little over two hours of inspection, the robot returns to its docking station for recharging. It will soon head back out to conduct its next solitary patrol. ANYmal played alongside Mulder and Scully in the “X-Files” TV series*, but it is in no way a Hollywood robot. It genuinely exists and surveillance missions are part of its very near future.
Offshore oil platforms: the first test fields and probably the first actual application of ANYmal. ©ANYbotics
This quadruped robot was designed by ANYbotics, a spinoff of the Swiss Federal Institute of Technology in Zurich (ETH Zurich). Made of carbon fibre and aluminium, it weighs about thirty kilos. It is fully ruggedised, water- and dust-proof (IP67). A Kevlar belly protects its main body, carrying its powerful brain, batteries, network device, power management system and navigational systems.
ANYmal was designed for all types of terrain, including rubble, sand or snow. It has been field tested on industrial sites and is at ease with new obstacles to overcome (and it can even get up after a fall). Depending on its mission, its batteries last 2 to 4 hours.
On its jointed legs, protected by rubber pads, it can walk (at the speed of human steps), trot, climb, curl upon itself to crawl, carry a load or even jump and dance. It is the need to move on all surfaces that has driven its designers to choose a quadruped. “Biped robots are not easy to stabilise, especially on irregular terrain” explains Dr Péter Fankhauser, co-founder and chief business development officer of ANYbotics. “Wheeled or tracked robots can carry heavy loads, but they are bulky and less agile. Flying drones are highly mobile, but cannot carry load, handle objects or operate in bad weather conditions. We believe that quadrupeds combine the optimal characteristics, both in terms of mobility and versatility.”
What served as a source of inspiration for the team behind the project, the Robotic Systems Lab of ETH Zurich, is a champion of agility on rugged terrain: the mountain goat. “We are of course still a long way off,” says Fankhauser. “However, it remains our objective in the longer term.”
The first prototype, ALoF, was designed back in 2009. It was still rather slow, very rigid and clumsy – more of a proof of concept than a robot ready for application. In 2012, StarlETH, fitted with spring joints, could hop, jump and climb. It was with this robot that the team started participating, in 2014, in ARGOS, a full-scale challenge launched by the Total oil group. The idea was to present a robot capable of autonomously inspecting an offshore drilling station.
Up against dozens of competitors, the ETH Zurich team was the only one to enter the competition with a quadruped robot. They didn’t win, but the multiple field tests grew ever more convincing. Especially because, during the challenge, the team designed new joints with elastic actuators made in-house. These joints, inspired by tendons and muscles, are compact, sealed and include their own custom control electronics. They can regulate joint torque, position and impedance directly. Thanks to this innovation, the team could enter the same competition with a new version of its robot, ANYmal, fitted with three joints on each leg.
The ARGOS experience confirmed the relevance of the selected means of locomotion. “Our robot is lighter, takes up less space on site, and is less noisy,” says Fankhauser. “It also overcomes bigger obstacles than larger wheeled or tracked robots!” As ANYmal generated public interest and its transformation into a genuine product seemed more than possible, the startup ANYbotics was launched in 2016. It sold not only its robot, but also its revolutionary joints, called ANYdrive.
Today, ANYmal is not yet ready for sale to companies. However, ANYbotics has a growing number of partnerships with several industries, testing the robot for a few days or several weeks, for all types of tasks. Last October, for example, ANYmal navigated its way through the dark sewage system of the city of Zurich in order to test its capacity to help workers in similar difficult, repetitive and even dangerous tasks.
Why such an early interest among companies? “Because many companies want to integrate robots into their maintenance tasks” answers Fankhauser. “With ANYmal, they can actually evaluate its feasibility and plan their strategy. Eventually, both the architecture and the equipment of buildings could be rethought to be adapted to these maintenance robots”.
ANYmal requires ruggedised, sealed and extremely reliable interconnection solutions, such as LEMO. ©ANYbotics
Through field demonstrations and testing, ANYbotics can gather masses of information (up to 50,000 measurements are recorded every second during each test!). “It helps us to shape the product.” In due time, the startup will be ready to deliver a commercial product that truly caters to companies’ needs.
Inspection and surveillance tasks on industrial sites are not the only applications considered. The startup is also thinking of agricultural inspections – with its onboard sensors, ANYmal is capable of mapping its environment, measuring biomass and even taking soil samples. In the longer term, it could also be used for search and rescue operations. By the way, the robot can already be switched to “remote control” mode at any time and can be easily tele-operated. It is also capable of live audio and video transmission.
The transition from the prototype to the marketed product stage will involve a number of further developments. These include increasing ANYmal’s agility and speed, extending its capacity to map large-scale environments, improving safety, security and user handling, and integrating the system with the customer’s data management software. It will also be necessary to enhance the robot’s reliability “so that it can work for days, weeks, or even months without human supervision.” All required certifications will have to be obtained. The locomotion system, which had triggered the whole business, is only one of many considerations for ANYbotics.
Designed for extreme environments, ANYmal is untroubled by smoke and can walk in snow, through rubble or in water. ©ANYbotics
The startup is not all alone. In fact, it has sold ANYmal robots to a dozen major universities, who use them to develop their know-how in robotics. The startup has also founded ANYmal Research, a community including members such as Toyota Research Institute, the German Aerospace Center and the computer company Nvidia. Members have full access to ANYmal’s control software, simulations and documentation. Sharing has boosted both software and hardware ideas and developments (built on ROS, the open-source Robot Operating System), in particular payload variations that provide expandability and scalability. For instance, one of the universities uses a robotic arm which enables ANYmal to grasp or handle objects and open doors.
Among possible applications, ANYbotics mentions entertainment. It is not only about playing in more films or TV series, but rather about participating in various attractions (trade shows, museums, etc.). “ANYmal is so novel that it attracts a great amount of interest” confirms Fankhauser with a smile. “Whenever we present it somewhere, people gather around.”
Videos of these events show a fascinated and sometimes slightly fearful audience, when ANYmal gets too close to them. Is it fear of the “bad robot”? “This fear exists indeed and we are happy to be able to use ANYmal also to promote public awareness towards robotics and robots.” Reminiscent of a young dog, ANYmal is truly adapted for the purpose.
However, Péter Fankhauser tempers the image of humans and sophisticated robots living together. “In the coming years, robots will continue to work in the background, as they have for a long time in factories. Then, they will be used in public places in a selective and targeted way, for instance for dangerous missions. We will need to wait another ten years before animal-like robots, such as ANYmal, share our everyday lives!”
At the Consumer Electronics Show (CES) in Las Vegas in January, Continental, the German automotive manufacturing company, used robots to demonstrate a last-mile delivery. It showed ANYmal getting out of an autonomous vehicle with a parcel, climbing onto the front porch, lifting a paw to ring the doorbell, depositing the parcel before getting back into the vehicle. This futuristic image seems very close indeed.
*X-Files, season 11, episode 7, aired in February 2018
#435742 This ‘Useless’ Social Robot ...
The recent high-profile failures of some home social robots (and the companies behind them) have made it even more challenging than before to develop robots in that space. And it was challenging enough to begin with—making a robot that can autonomously interact with random humans in their homes over a long period of time for a price that people can afford is extraordinarily difficult. However, the massive amount of initial interest in robots like Jibo, Kuri, Vector, and Buddy proves that people do want these things, or at least think they do, and while that’s the case, there’s incentive for other companies to give social home robots a try.
One of those companies is Zoetic, founded in 2017 by Mita Yun and Jitu Das, both ex-Googlers. Their robot, Kiki, is more or less exactly what you’d expect from a social home robot: It’s cute, white, roundish, has big eyes, promises that it will be your “robot sidekick,” and is not cheap: It’s on Kickstarter for $800. Kiki is among what appears to be a sort of tentative second wave of social home robots, where designers have (presumably) had a chance to take everything that they learned from the social home robot pioneers and use it to make things better this time around.
Kiki’s Kickstarter video is, again, more or less exactly what you’d expect from a social home robot crowdfunding campaign:
We won’t get into all of the details on Kiki in this article (the Kickstarter page has tons of information), but a few distinguishing features:
Each Kiki will develop its own personality over time through its daily interactions with its owner, other people, and other Kikis.
Interacting with Kiki is more abstract than with most robots—it can understand some specific words and phrases, and will occasionally use a specific word or two of its own, but otherwise it’s mostly listening to your tone of voice and responding with sounds rather than speech.
Kiki doesn’t move on its own, but it can operate for up to two hours away from its charging dock.
Depending on how you treat Kiki, it can get depressed or neurotic. It also needs to be fed, which you can do by drawing different kinds of food in the app.
Everything Kiki does runs on-board the robot. It has Wi-Fi connectivity for updates, but doesn’t rely on the cloud for anything in real-time, meaning that your data stays on the robot and that the robot will continue to function even if its remote service shuts down.
It’s hard to say whether features like these are unique enough to help Kiki be successful where other social home robots haven’t been, so we spoke with Zoetic co-founder Mita Yun and asked her why she believes that Kiki is going to be the social home robot that makes it.
IEEE Spectrum: What’s your background?
Mita Yun: I was an only child growing up, and so I always wanted something like Doraemon or Totoro. Something that when you come home it’s there to greet you, not just because it’s programmed to do that but because it’s actually actively happy to see you, and only you. I was so interested in this that I went to study robotics at CMU and then after I graduated I joined Google and worked there for five years. I tended to go for the more risky and more fun projects, but they always got cancelled—the first project I joined was called Android at Home, and then I joined Google Glass, and then I joined a team called Robots for Kids. That project was building educational robots, and then I just realized that when we’re adding technology to something, to a product, we’re actually taking the life away somehow, and the kids were more connected with stuffed animals compared to the educational robots we were building. That project was also cancelled, and in 2017, I left with a coworker of mine (Jitu Das) to bring this dream into reality. And now we’re building Kiki.
You started working on Kiki in 2017, when things were already getting challenging for Jibo—why did you decide to start developing a social home robot at that point?
I thought Jibo was great. It had a special magical way of moving, and it was such a new idea that you could have this robot with embodiment and it can actually be your assistant. The problem with Jibo, in my opinion, was that it took too long to fulfill the orders. It took them three to four years to actually manufacture, because it was a very complex piece of hardware, and then during that period of time Alexa and Google Home came out, and they started selling these voice systems for $30 and then you have Jibo for $800. Jibo was Alexa plus cuteness equals $800, and I feel like that equation doesn’t work for most people, and that eventually killed the company. So, for Kiki, we are actually building something very different. We’re building something that’s completely useless.
Can you elaborate on “completely useless?”
I feel like people are initially connected with robots because they remind them of a character. And it’s the closest we can get to a character other than an organic character like an animal. So we’re connected to a character like when we have a robot in a mall that’s roaming around, even if it looks really ugly, like if it doesn’t have eyes, people still take selfies with it. Why? Because they think it’s a character. And humans are just hardwired to love characters and love stories. With Kiki, we just wanted to build a character that’s alive, we don’t want to have a character do anything super useful.
I understand why other robotics companies are adding Alexa integration to their robots, and I think that’s great. But the dream I had, and the understanding I have about robotics technology, is that for a consumer robot especially, it is very very difficult for the robot to justify its price through usefulness. And then there’s also research showing that the more useless something is, the easier it is to have an emotional connection, so that’s why we want to keep Kiki very useless.
What kind of character are you creating with Kiki?
The whole design principle around Kiki is we want to make it a very vulnerable character. In terms of its status at home, it’s not going to be higher or equal status as the owner, but slightly lower status than the human, and it’s vulnerable and needs you to take care of it in order to grow up into a good personality robot.
We don’t let Kiki speak full English sentences, because whenever it does that, people are going to think it’s at least as intelligent as a baby, which is impossible for robots at this point. And we also don’t let it move around, because when you have it move around, people are going to think, “I’m going to call Kiki’s name, and then Kiki will come to me.” But that is actually very difficult to build. And then also we don’t have any voice integration so it doesn’t tell you about the stock market price and so on.
Photo: Zoetic
Kiki is designed to be “vulnerable,” and it needs you to take care of it so it can “grow up into a good personality robot,” according to its creators.
That sounds similar to what Mayfield did with Kuri, emphasizing an emotional connection rather than specific functionality.
It is very similar, but one of the key differences from Kuri, I think, is that Kuri started with a Kobuki base, and then it’s wrapped into a cute shell, and they added sounds. So Kuri started with utility in mind—navigation is an important part of Kuri, so they started with that challenge. For Kiki, we started with the eyes. The entire thing started with the character itself.
How will you be able to convince your customers to spend $800 on a robot that you’ve described as “useless” in some ways?
Because it’s useless, it’s actually easier to convince people, because it provides you with an emotional connection. I think Kiki is not a utility-driven product, so the adoption cycle is different. For a functional product, it’s very easy to pick up, because you can justify it by saying “I’m going to pay this much and then my life can become this much more efficient.” But it’s also very easy to be replaced and forgotten. For an emotional-driven product, it’s slower to pick up, but once people actually pick it up, they’re going to be hooked—they get connected with it, and they’re willing to invest more into taking care of the robot so it will grow up to be smarter.
Maintaining value over time has been another challenge for social home robots. How will you make sure that people don’t get bored with Kiki after a few weeks?
Of course Kiki has limits in what it can do. We can combine the eyes, the facial expression, the motors, and lights and sounds, but is it going to be constantly entertaining? So we think of this as, imagine if a human is actually puppeteering Kiki—can Kiki stay interesting if a human is puppeteering it and interacting with the owner? So I think what makes a robot interesting is not just in the physical expressions, but the part in between that and the robot conveying its intentions and emotions.
For example, if you come into the room and then Kiki decides it will turn the other direction, ignore you, and then you feel like, huh, why did the robot do that to me? Did I do something wrong? And then maybe you will come up to it and you will try to figure out why it did that. So, even though Kiki can only express in four different dimensions, it can still make things very interesting, and then when its strategies change, it makes it feel like a new experience.
There’s also an explore and exploit process going on. Kiki wants to make you smile, and it will try different things. It could try to chase its tail, and if you smile, Kiki learns that this works and will exploit it. But maybe after doing it three times, you no longer find it funny, because you’re bored of it, and then Kiki will observe your reactions and be motivated to explore a new strategy.
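What Yun describes is essentially a multi-armed bandit with non-stationary rewards: each behavior is an arm, a smile is the payoff, and boredom means payoffs decay. Here is a minimal epsilon-greedy sketch of that loop, with hypothetical behavior names; Zoetic hasn’t published its actual algorithm.

```python
import random

EPSILON = 0.2  # fraction of the time Kiki explores a random behavior

# Estimated smile rate per behavior; all start unknown.
strategies = {"chase_tail": 0.0, "spin": 0.0, "chirp": 0.0}

def pick_strategy():
    """Mostly exploit the best-known behavior, sometimes explore a new one."""
    if random.random() < EPSILON:
        return random.choice(list(strategies))
    return max(strategies, key=strategies.get)

def record_reaction(strategy, smiled):
    """Recency-weighted update: a behavior that stops earning smiles
    (boredom) loses its lead, and exploration takes over again."""
    reward = 1.0 if smiled else 0.0
    strategies[strategy] += 0.3 * (reward - strategies[strategy])
```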
Photo: Zoetic
Kiki’s creators are hoping that, with an emotionally engaging robot, it will be easier for people to get attached to it and willing to spend time taking care of it.
A particular risk with crowdfunding a robot like this is setting expectations unreasonably high. The emphasis on personality and emotional engagement with Kiki seems like it may be very difficult for the robot to live up to in practice.
I think we invested more than most robotics companies into really building out Kiki’s personality, because that is the single most important thing to us. For Jibo a lot of the focus was in the assistant, and for Kuri, it’s more in the movement. For Kiki, it’s very much in the personality.
I feel like when most people talk about personality, they’re mainly talking about expression. With Kiki, it’s not just in the expression itself, not just in the voice or the eyes or the output layer, it’s in the layer in between—when Kiki receives input, how will it make decisions about what to do? We actually don’t think the personality of Kiki is categorizable, which is why I feel like Kiki has a deeper implementation of how personalities should work. And you’re right, Kiki doesn’t really understand why you’re feeling a certain way, it just reads your facial expressions. It’s maybe not your best friend, but maybe closer to your little guinea pig robot.
Photo: Zoetic
The team behind Kiki paid particular attention to its eyes, and designed the robot to always face the person that it is interacting with.
Is that where you’d put Kiki on the scale of human to pet?
Kiki is definitely not human, we want to keep it very far away from human. And it’s also not a dog or cat. When we were designing Kiki, we took inspiration from mammals because humans are deeply connected to mammals since we’re mammals ourselves. And specifically we’re connected to predator animals. With prey animals, their eyes are usually on the sides of their heads, because they need to see different angles. A predator animal needs to hunt, they need to focus. Cats and dogs are predator animals. So with Kiki, that’s why we made sure the eyes are on one side of the face and the head can actuate independently from the body and the body can turn so it’s always facing the person that it’s paying attention to.
I feel like Kiki probably does more than a plant. It does more than a fish, because a fish doesn’t look you in the eyes. It’s not as smart as a cat or a dog, so I would just put it in this guinea pig kind of category.
What have you found so far when running user studies with Kiki?
When we were first designing Kiki we went through a whole series of prototypes. One of the earlier prototypes of Kiki looked like a CRT, like a very old monitor, and when we were testing that with people they didn’t even want to touch it. Kiki’s design inspiration actually came from an airplane, with a very angular, futuristic look, but based on user feedback we made it more round and more friendly to the touch. The lights were another feature request from the users, which adds another layer of expressivity to Kiki, and they wanted to see multiple Kikis working together with different personalities. Users also wanted different looks for Kiki, to make it look like a deer or a unicorn, for example, and we actually did take that into consideration because it doesn’t look like any particular mammal. In the future, you’ll be able to have different ears to make it look like completely different animals.
There has been a lot of user feedback that we didn’t implement—I believe we should observe the users’ reactions and feedback but not listen to their advice. The users shouldn’t be our product designers, because if you test Kiki with 10 users, eight of them will tell you they want Alexa in it. But we’re never going to add Alexa integration to Kiki because that’s not what it’s meant to do.
While it’s far too early to tell whether Kiki will be a long-term success, the Kickstarter campaign is currently over 95 percent funded with 8 days to go, and 34 robots are still available for a May 2020 delivery.
[ Kickstarter ]
#435731 Video Friday: NASA Is Sending This ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):
MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, UK
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, PA, USA
Let us know if you have suggestions for next week, and enjoy today’s videos.
The big news today is that NASA is sending a robot to Saturn’s moon Titan. A flying robot. The Dragonfly mission will launch in 2026 and arrive in 2034, but you knew that already, because last January, we posted a detailed article about the concept from the Applied Physics Lab at Johns Hopkins University. And now it’s not a concept anymore, yay!
Again, read all the details plus an interview in our 2018 article.
[ NASA ]
A robotic gripping arm that uses engineered bacteria to “taste” for a specific chemical has been developed by engineers at the University of California, Davis, and Carnegie Mellon University. The gripper is a proof-of-concept for biologically-based soft robotics.
The new device uses a biosensing module based on E. coli bacteria engineered to respond to the chemical IPTG by producing a fluorescent protein. The bacterial cells reside in wells with a flexible, porous membrane that allows chemicals to enter but keeps the cells inside. This biosensing module is built into the surface of a flexible gripper on a robotic arm, so the gripper can “taste” the environment through its fingers.
When IPTG crosses the membrane into the chamber, the cells fluoresce and electronic circuits inside the module detect the light. The electrical signal travels to the gripper’s control unit, which can decide whether to pick something up or release it.
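In control terms, the biosensing module reduces to a threshold test on the photodetector signal. Here is a hypothetical sketch of that decision logic; the threshold, signal names, and action mapping are illustrative, not taken from the researchers’ implementation.

```python
FLUORESCENCE_THRESHOLD = 0.6  # illustrative, normalized photodetector reading

def gripper_decision(photodetector_reading, holding_object):
    """Map the biosensor's light signal to a pick/release/hold action."""
    tastes_of_iptg = photodetector_reading >= FLUORESCENCE_THRESHOLD
    if holding_object and tastes_of_iptg:
        return "release"  # let go of objects carrying the target chemical
    if not holding_object and not tastes_of_iptg:
        return "pick"     # nothing detected, safe to grasp
    return "hold"         # otherwise keep the current state
```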
[ UC Davis ]
The Toyota Research Institute (TRI) is taking on the hard problems in manipulation research toward making human-assist robots reliable and robust. Dr. Russ Tedrake, TRI Vice President of Robotics Research, explains how we are exploring the challenges and addressing the reliability gap by using a robot loading dishes in a dishwasher as an example task.
[ TRI ]
The Tactile Telerobot is the world’s first haptic telerobotic system that transmits realistic touch feedback to an operator located anywhere in the world. It is the product of joint collaboration between Shadow Robot Company, HaptX, and SynTouch. All Nippon Airways funded the project’s initial research and development.
What’s really unique about this is the HaptX tactile feedback system, which is something we’ve been following for several years now. It’s one of the most magical tech experiences I’ve ever had, and you can read about it here and here.
[ HaptX ]
Thanks Andrew!
I love how snake robots can emulate some of the fanciest moves of real snakes, and then also do bonkers things that real snakes never do.
[ Matsuno Lab ]
Here are a couple interesting videos from the Human-Robot Interaction Lab at Tufts.
A robot is instructed to perform an action and cannot do it due to lack of sensors. But when another robot is placed nearby, it can execute the instruction by tacitly tapping into the other robot’s mind and using that robot’s sensors for its own actions. Yes, it’s automatic, and yes, it’s the BORG!
Two Nao robots are instructed to perform a dance and are able to do it right after instruction. Moreover, they can switch roles immediately, and even a third different PR2 robot can perform the dance right away, demonstrating the ability of our DIARC architecture to learn quickly and share the knowledge with any type of robot running the architecture.
Compared to Nao, PR2 just sounds… depressed.
[ HRI Lab ]
This work explores the problem of robot tool construction – creating tools from parts available in the environment. We advance the state-of-the-art in robotic tool construction by introducing an approach that enables the robot to construct a wider range of tools with greater computational efficiency. Specifically, given an action that the robot wishes to accomplish and a set of building parts available to the robot, our approach reasons about the shape of the parts and potential ways of attaching them, generating a ranking of part combinations that the robot then uses to construct and test the target tool. We validate our approach on the construction of five tools using a physical 7-DOF robot arm.
[ RAIL Lab ] via [ RSS ]
We like Magazino’s approach to warehouse picking: constrain the problem to something you can reliably solve, like shoeboxes.
Magazino has announced a new pricing model for its robots. You pay 55,000 euros for the robot itself, and after that, all you pay to keep the robot working is 6 cents per pick, so the robot only costs you money for the work it actually does.
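The pricing is easy to reason about: total cost is a flat 55,000 euros plus 0.06 euros per pick. A quick sketch, with an illustrative workload:

```python
def total_cost_eur(picks):
    """Magazino's stated model: 55,000 euros up front plus 6 cents per pick."""
    return 55_000 + 0.06 * picks

print(total_cost_eur(1_000 * 365))  # one year at 1,000 picks/day: 76,900.0 euros
```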
[ Magazino ]
Thanks Florin!
Human-robot collaboration is happening in factories worldwide, yet few smaller businesses use it, due to high costs or the difficulty of customization. Elephant Robotics, a new player from Shenzhen, the Silicon Valley of Asia, has set its sights on helping smaller businesses gain access to smart robotics. They created the Catbot (a collaborative robotic arm) that offers high efficiency and flexibility to various industries.
The Catbot is set to help with everything from education projects, photography, and massage to serving as a personal barista or co-playing a table game. The customizations are endless. To increase its flexibility of use, the Catbot is extremely easy to program, from high-precision tasks up to larger-scale projects.
[ Elephant Robotics ]
Thanks Johnson!
Dronistics, an EPFL spin-off, has been testing out their enclosed delivery drone in the Dominican Republic through a partnership with WeRobotics.
[ WeRobotics ]
QTrobot is an expressive humanoid robot designed to help children with autism spectrum disorder and children with special educational needs learn new skills. QTrobot uses simple and exaggerated facial expressions, combined with interactive games and stories, to help children improve their emotional skills. It helps children learn about and better understand emotions, and teaches them strategies to handle their emotions more effectively.
[ LuxAI ]
Here’s a typical day in the life of a Tertill solar-powered autonomous weed-destroying robot.
$300, now shipping from Franklin Robotics.
[ Tertill ]
PAL Robotics is excited to announce a new TIAGo with two arms, TIAGo++! After carefully listening to the robotics community’s needs, we used TIAGo’s modularity to integrate two 7-DoF arms into our mobile manipulator. TIAGo++ can help you swiftly accomplish your research goals, opening endless possibilities in mobile manipulation.
[ PAL Robotics ]
Thanks Jack!
You’ve definitely already met the Cobalt security robot, but Toyota AI Ventures just threw a pile of money at them and would therefore like you to experience this re-introduction:
[ Cobalt Robotics ] via [ Toyota AI ]
ROSIE is a mobile manipulator kit from HEBI Robotics. And if you don’t like ROSIE, the modular nature of HEBI’s hardware means that you can take her apart and make something more interesting.
[ HEBI Robotics ]
Learn about Kawasaki Robotics’ second addition to their line of duAro dual-arm collaborative robots, duAro2. This model offers an extended vertical reach (550 mm) and an increased payload capacity (3 kg/arm).
[ Kawasaki Robotics ]
Drone Delivery Canada has partnered with Peel Region Paramedics to pilot its proprietary drone delivery platform for rapid first-responder support, with the goal of reducing response times and potentially saving lives.
[ Drone Delivery Canada ]
In this week’s episode of Robots in Depth, Per speaks with Harri Ketamo, from Headai.
Harri Ketamo talks about AI and how he aims to mimic human decision making with algorithms. Harri has done a lot of AI for computer games, creating opponents that are entertaining to play against. It is easy to develop a very bad or a very good opponent, but designing an opponent that behaves like a human, is entertaining to play against, and that you can beat is quite hard. He talks about how AI in computer games is a very important storytelling tool and an important part of making a game entertaining to play.
This work led him into other parts of the AI field. Harri thinks that we sometimes have a problem separating what is real from the type of storytelling he knows from gaming AI. He calls for critical analysis of AI and says that data has to be used to verify AI decisions and results.
[ Robots in Depth ]
Thanks Per!
#435714 Universal Robots Introduces Its ...
Universal Robots, already the dominant force in collaborative robots, is flexing its muscles in an effort to further expand its reach in the cobots market. The Danish company is introducing today the UR16e, its strongest robotic arm yet, with a payload capability of 16 kilograms (35.3 lbs), reach of 900 millimeters, and repeatability of +/- 0.05 mm.
Universal says the new “heavy duty payload cobot” will allow customers to automate a broader range of processes, including packaging and palletizing, nut and screw driving, and high-payload and CNC machine tending.
In early 2015, Universal introduced the UR3, its smallest robot, which joined the UR5 and the flagship UR10, offering a payload capability of 3, 5, and 10 kg, respectively. Now the company is going in the other direction, announcing a bigger, stronger arm.
“With Universal joining its competitors in extending the reach and payload capacity of its cobots, a new standard of capability is forming,” Rian Whitton, a senior analyst at ABI Research, in London, tweeted.
Like its predecessors, the UR16e is part of Universal’s e-Series platform, which features 6 degrees of freedom and force/torque sensing on the tool flange. The UR family of cobots has stood out from the competition by being versatile in a variety of applications and, most important, easy to deploy and program. Universal didn’t release the UR16e’s price, saying only that it is about 10 percent higher than that of the UR10e, which is about $50,000, depending on the configuration.
Jürgen von Hollen, president of Universal Robots, says the company decided to launch the UR16e after studying the market and talking to customers about their needs. “What came out of that process is we understood payload was a true barrier for a lot of customers,” he tells IEEE Spectrum. The 16 kg payload will be particularly useful for applications that require mounting specialized tools on the arm to perform tasks like screw driving and machine tending, he explains. Customers that could benefit from such applications include manufacturing, material handling, and automotive companies.
“We’ve added the payload, and that will open up that market for us,” von Hollen says.
The difference between Universal and Rethink
Universal has grown by leaps and bounds since its founding in 2008. By 2015, it had sold more than 5,000 robots; that number was close to 40,000 as of last year. During the same period, revenue more than doubled, from about $100 million to $234 million. At a time when a string of robot makers have shuttered, including most notably Rethink Robotics, a cobots pioneer and Universal’s biggest rival, Universal finds itself in an enviable position, having amassed a commanding market share, estimated at 50 to 60 percent.
About Rethink, von Hollen says the Boston-based company was a “good competitor,” helping disseminate the advantages and possibilities of cobots. “When Rethink basically ended it was more of a negative than a positive, from my perspective,” he says. In his view, a major difference between the two companies is that Rethink focused on delivering full-fledged applications to customers, whereas Universal focused on delivering a product to the market and letting the system integrators and sales partners deploy the robots to the customer base.
“We’ve always been very focused on delivering the product, whereas I think Rethink was much more focused on applications, very early on, and they added a level of complexity to their company that made it become very de-focused,” he says.
The collaborative robots market: massive growth
And yet, despite its success, Universal is still tiny when you compare it to the giants of industrial automation, which include companies like ABB, Fanuc, Yaskawa, and Kuka, with revenue in the billions of dollars. Although some of these companies have added cobots to their product portfolios—ABB’s YuMi, for example—that market represents a drop in the bucket when you consider global robot sales: The size of the cobots market was estimated at $700 million in 2018, whereas the global market for industrial robot systems (including software, peripherals, and system engineering) is close to $50 billion.
Von Hollen notes that cobots are expected to go through an impressive growth curve—nearly 50 percent year after year until 2025, when sales will reach between $9 billion and $12 billion. If Universal can maintain its dominance and capture a big slice of that market, it’ll add up to a nice sum. To get there, Universal is not alone: It is backed by U.S. electronics testing equipment maker Teradyne, which acquired Universal in 2015 for $285 million.
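That projection is internally consistent: compounding the $700 million figure from 2018 at roughly 50 percent a year lands at the top of the quoted range by 2025.

```python
market = 0.7  # cobot market in billions of dollars, 2018
for year in range(2019, 2026):
    market *= 1.5  # ~50 percent year-over-year growth
print(f"{market:.1f}")  # ~12.0 (billions of dollars by 2025)
```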
“The amount of resources we invest year over year matches the growth we had on sales,” von Hollen says. Universal currently has more than 650 employees, most based at its headquarters in Odense, Denmark, and the rest scattered in 27 offices in 18 countries. “No other company [in the cobots segment] is so focused on one product.”
[ Universal Robots ]