Tag Archives: 2013

#431603 What We Can Learn From the Second Life ...

For every new piece of technology that gets developed, you can usually find people saying it will never be useful. The president of the Michigan Savings Bank in 1903, for example, said, “The horse is here to stay but the automobile is only a novelty—a fad.” It’s equally easy to find people raving about whichever new technology is at the peak of the Gartner Hype Cycle, which tracks the buzz around these newest developments and attempts to temper predictions. When technologies emerge, there are all kinds of uncertainties, from the actual capacity of the technology to its use cases in real life to the price tag.
Eventually the dust settles, and some technologies get widely adopted, to the extent that they can become “invisible”; people take them for granted. Others fall by the wayside as gimmicky fads or impractical ideas. Picking which horses to back is the difference between Silicon Valley millions and Betamax pub-quiz-question obscurity. For a while, it seemed that Google had—for once—backed the wrong horse.
Google Glass emerged from Google X, the ubiquitous tech giant’s much-hyped moonshot factory, where highly secretive researchers work on the sci-fi technologies of the future. Self-driving cars and artificial intelligence are the more mundane end for an organization that apparently once looked into jetpacks and teleportation.
The original smart glasses, Google Glass, went on sale in 2013 for $1,500 as prototypes for Google’s acolytes, around 8,000 early adopters. Users could control the glasses with a touchpad or, after waking them by tilting the head back, with voice commands. Audio relay—as with several wearable products—is via bone conduction, which transmits sound by vibrating the bones of the user’s skull. This was going to usher in the age of augmented reality, the next best thing to having a chip implanted directly into your brain.
On the surface, it seemed to be a reasonable proposition. People had dreamed about augmented reality for a long time—an onboard, JARVIS-style computer giving you extra information and instant access to communications without even having to touch a button. After smartphone ubiquity, it looked like a natural step forward.
Instead, there was a backlash. People may be willing to give their data up to corporations, but they’re less pleased with the idea that someone might be filming them in public. The worst aspect of smartphones is trying to talk to people who are distractedly scrolling through their phones. There’s a famous analogy in Revolutionary Road about an old couple’s loveless marriage: the husband tunes out his wife’s conversation by turning his hearing aid down to zero. To many, Google Glass seemed to provide us with a whole new way to ignore each other in favor of our Twitter feeds.
Then there’s the fact that, whether because we’re not used to them or for some more permanent reason, people wearing AR tech often look very silly. Put all this together with a lack of early functionality, the high price (do you really feel comfortable wearing a $1,500 computer?), and a killer pun for the users—Glassholes—and the final recipe wasn’t great for Google.
Google Glass was quietly dropped from sale in 2015, with an ominous slogan posted on Google’s website: “Thanks for exploring with us.” Reminding Glass users that they had always been referred to as “explorers”—beta testers of a product, in many ways—it perhaps signaled less enthusiasm for wearables than the original Google Glass skydive might have suggested.
In reality, Google went back to the drawing board. Not with the technology per se, although it has improved in the intervening years, but with the uses behind the technology.
Under what circumstances would you actually need a Google Glass? When would it genuinely be preferable to a smartphone that can do many of the same things and more? Beyond simply being a fashion item, which Google Glass decidedly was not, even the most tech-evangelical of us need a convincing reason to splash $1,500 on a wearable computer that’s less socially acceptable and less easy to use than the machine you’re probably reading this on right now.
Enter the Google Glass Enterprise Edition.
Piloted in factories during the years that consumer Glass lay dormant, the Enterprise Edition is now commercially available, and the relaunch got under way in earnest in July 2017. The difference here was the specific audience: workers in factories who need hands-free computing because they need to use their hands at the same time.
In this niche application, wearable computers can become invaluable. A new employee can be trained with pre-programmed material that explains how to perform actions in real time, while instructions can be relayed straight into a worker’s eyeline without them needing to check a phone or switch to email.
Medical devices have long been a dream application for Google Glass. You can imagine a situation where people receive real-time information during surgery, or are augmented by artificial intelligence that provides additional diagnostic information or questions in response to a patient’s symptoms. The quest to develop a healthcare AI, which can provide recommendations in response to natural language queries, is on. The famously untidy doctor’s handwriting—and the associated death toll—could be avoided if the glasses could take dictation straight into a patient’s medical records. All of this is far more useful than allowing people to check Facebook hands-free while they’re riding the subway.
Google’s “Lens” application indicates another use for Google Glass that hadn’t quite matured when the original was launched: the Lens processes images and provides information about them. You can look at text and have it translated in real time, or look at a building or sign and receive additional information. Image processing, either through neural networks hooked up to a cloud database or some other means, is the frontier that enables driverless cars and similar technology to exist. Hook this up to a voice-activated assistant relaying information to the user, and you have your killer application: real-time annotation of the world around you. It’s this functionality that just wasn’t ready yet when Google launched Glass.
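To make that pipeline concrete, here is a minimal C++ sketch of the annotation loop described above: a camera frame goes through image recognition, a cloud-style lookup, and finally spoken output. Every function is a hypothetical stub; no real Glass, Lens, or cloud API is implied.

```cpp
#include <iostream>
#include <string>

// Hypothetical sketch of the real-time annotation pipeline: camera frame ->
// image recognition -> cloud-style lookup -> spoken output. All stubs.
std::string recognizeObject(const std::string& cameraFrame) {
    (void)cameraFrame;              // a neural-network classifier would go here
    return "Eiffel Tower";
}

std::string lookUpFacts(const std::string& label) {
    return label + ": completed in 1889, about 330 m tall";  // stand-in for a cloud knowledge base
}

void speakViaBoneConduction(const std::string& text) {
    std::cout << "[audio] " << text << "\n";                  // stand-in for the bone-conduction audio relay
}

int main() {
    std::string frame = "<camera frame>";
    speakViaBoneConduction(lookUpFacts(recognizeObject(frame)));
}
```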
Amazon’s recent announcement that they want to integrate Alexa into a range of smart glasses indicates that the tech giants aren’t ready to give up on wearables yet. Perhaps, in time, people will become used to voice activation and interaction with their machines, at which point smart glasses with bone conduction will genuinely be more convenient than a smartphone.
But in many ways, the real lesson from the initial failure—and promising second life—of Google Glass is a simple question that developers of any smart technology, from the Internet of Things through to wearable computers, must answer. “What can this do that my smartphone can’t?” Find your answer, as the Enterprise Edition did, as Lens might, and you find your product.
Image Credit: Hattanas / Shutterstock.com

Posted in Human Robots

#431592 Reactive Content Will Get to Know You ...

The best storytellers react to their audience. They look for smiles, signs of awe, or boredom; they simultaneously and skillfully read both the story and their sitters. Kevin Brooks, a seasoned storyteller working for Motorola’s Human Interface Labs, explains, “As the storyteller begins, they must tune in to… the audience’s energy. Based on this energy, the storyteller will adjust their timing, their posture, their characterizations, and sometimes even the events of the story. There is a dialog between audience and storyteller.”
Shortly after I read the script to Melita, the latest virtual reality experience from Madrid-based immersive storytelling company Future Lighthouse, CEO Nicolas Alcalá explained to me that the piece is an example of “reactive content,” a concept he’s been working on since his days at Singularity University.

For the first time in history, we have access to technology that can merge the reactive and affective elements of oral storytelling with the affordances of digital media, weaving stunning visuals, rich soundtracks, and complex meta-narratives in a story arena that has the capability to know you more intimately than any conventional storyteller could.
It’s no exaggeration to say that the storytelling potential here is phenomenal.
In short, we can refer to content as reactive if it reads and reacts to users based on their body rhythms, emotions, preferences, and data points. Artificial intelligence is used to analyze users’ behavior or preferences to sculpt unique storylines and narratives, essentially allowing for a story that changes in real time based on who you are and how you feel.
The development of reactive content will allow those working in the industry to go one step further than simply translating the essence of oral storytelling into VR. Rather than having a narrative experience with a digital storyteller who can read you, reactive content has the potential to create an experience with a storyteller who knows you.
This means being able to subtly insert minor personal details that have a specific meaning to the viewer. When we talk to our friends we often use experiences we’ve shared in the past or knowledge of our audience to give our story as much resonance as possible. Targeting personal memories and aspects of our lives is a highly effective way to elicit emotions and aid in visualizing narratives. When you can do this with the addition of visuals, music, and characters—all lifted from someone’s past—you have the potential for overwhelmingly engaging and emotionally-charged content.
Future Lighthouse informs me that, for now, reactive content will rely primarily on biometric feedback technology such as breathing, heartbeat, and eye-tracking sensors. A simple example would be a story in which parts of the environment or soundscape change in sync with the user’s heartbeat and breathing, or characters who call you out for not paying attention.
The next step would be characters and situations that react to the user’s emotions, wherein algorithms analyze biometric information to make inferences about states of emotional arousal (“why are you so nervous?” etc.). Another example would be “arousal parameters,” where the audience chooses what level of “fear” they want from a VR horror story before algorithms modulate the experience using information from biometric feedback devices.
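As a rough illustration of how an arousal parameter might be wired up, here is a minimal C++ sketch that combines a viewer-chosen fear level with a live heart-rate reading into a single scare-intensity value. This is not Future Lighthouse’s pipeline; the names, thresholds, and mapping are all invented for the example.

```cpp
#include <algorithm>
#include <iostream>

// Hypothetical sketch: combine a user-chosen fear level (0..1) with a live
// heart-rate reading to produce a scare intensity the experience can act on.
struct BiometricSample {
    double heartRateBpm;   // from a heart-rate sensor
    double breathsPerMin;  // from a breathing sensor
};

// Map heart rate into a rough 0..1 arousal estimate (60 bpm calm, 120 bpm highly aroused).
double estimateArousal(const BiometricSample& s) {
    double a = (s.heartRateBpm - 60.0) / 60.0;
    return std::clamp(a, 0.0, 1.0);
}

// Back off when the viewer is already highly aroused; otherwise push toward the requested fear level.
double scareIntensity(double requestedFear, double arousal) {
    return std::clamp(requestedFear * (1.0 - 0.5 * arousal), 0.0, 1.0);
}

int main() {
    BiometricSample s{105.0, 22.0};
    double fearSetting = 0.8;  // chosen by the viewer before the story starts
    std::cout << "scare intensity: " << scareIntensity(fearSetting, estimateArousal(s)) << "\n";
}
```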
The company’s long-term goal is to gather research on storytelling conventions and produce a catalogue of story “wireframes.” This entails distilling the basic formula to different genres so they can then be fleshed out with visuals, character traits, and soundtracks that are tailored for individual users based on their deep data, preferences, and biometric information.
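The wireframe idea can be pictured as a simple data structure: a genre-level template whose slots are later filled from a user’s data and preferences. The sketch below is purely illustrative; the field names are hypothetical and not drawn from Future Lighthouse’s catalogue.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical sketch of a story "wireframe": the distilled formula for a
// genre, later fleshed out with visuals and music chosen for an individual
// viewer. All field names are invented for illustration.
struct StoryBeat {
    std::string role;        // e.g. "inciting incident", "reversal", "climax"
    double targetArousal;    // arousal level the beat is designed to hit (0..1)
};

struct StoryWireframe {
    std::string genre;               // e.g. "horror", "romance"
    std::vector<StoryBeat> beats;    // the genre's basic formula
};

struct PersonalizedStory {
    StoryWireframe frame;
    std::vector<std::string> visuals;     // picked from the viewer's data and preferences
    std::vector<std::string> soundtrack;  // tailored to the viewer's tastes
};

int main() {
    StoryWireframe horror{"horror", {{"inciting incident", 0.3}, {"climax", 0.9}}};
    PersonalizedStory story{horror, {"childhood street at dusk"}, {"slowed-down lullaby"}};
    std::cout << story.frame.genre << " wireframe with " << story.frame.beats.size() << " beats\n";
}
```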
The development of reactive content will go hand in hand with a renewed exploration of diverging, dynamic storylines and multi-narratives, a concept that hasn’t had much impact in the movie world thus far. In theory, the idea of having a story that changes and mutates is captivating largely because of our love affair with serendipity and unpredictability, a cultural condition that theorist Arthur Kroker refers to as the “hypertextual imagination.” This feeling of stepping into the unknown with the possibility of deviation from the habitual translates as a comforting reminder that our own lives can take exciting and unexpected turns at any moment.
The concept entered mainstream culture with the classic Choose Your Own Adventure book series, launched in the late 70s, which had great success in its literary form. Filmic takes on the theme, however, have made somewhat less of an impression. DVDs like I’m Your Man (1998) and Switching (2003) both use scene-selection tools to determine the direction of the storyline.
A more recent example comes from Kino Industries, who claim to have developed the technology to allow filmmakers to produce interactive films in which viewers can use smartphones to quickly vote on which direction the narrative takes at numerous decision points throughout the film.
The main problem with diverging narrative films has been the stop-start nature of the interactive element: when I’m immersed in a story I don’t want to have to pick up a controller or remote to select what’s going to happen next. Every time the audience is given the option to take a new path (“press this button,” “vote on X, Y, Z”), the narrative—and immersion within that narrative—is temporarily halted, and it takes the mind a while to get back into this state of immersion.
Reactive content has the potential to resolve these issues by enabling passive interactivity—that is, input and output without having to pause and actively make decisions or engage with the hardware. This will result in diverging, dynamic narratives that will unfold seamlessly while being dependent on and unique to the specific user and their emotions. Passive interactivity will also remove the game feel that can often be a symptom of interactive experiences and put a viewer somewhere in the middle: still firmly ensconced in an interactive dynamic narrative, but in a much subtler way.
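A minimal sketch of what passive branching could look like in code: the next scene is chosen from inferred biometric state rather than an explicit button press. The scene names, thresholds, and the two-signal viewer model are assumptions made up for the example.

```cpp
#include <iostream>
#include <string>

// Hypothetical sketch of passive branching: the next scene is chosen from
// biometric state rather than an explicit menu or button press.
struct ViewerState {
    double arousal;    // 0..1, inferred from heart rate / breathing
    double attention;  // 0..1, inferred from eye tracking
};

std::string nextScene(const ViewerState& v) {
    if (v.attention < 0.3) return "character_calls_out_viewer";  // a character notices you drifting
    if (v.arousal > 0.7)  return "quiet_interlude";              // ease off when the viewer is overwhelmed
    return "main_storyline_continues";
}

int main() {
    ViewerState v{0.8, 0.9};
    std::cout << nextScene(v) << "\n";  // prints "quiet_interlude"
}
```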
While reading the Melita script I was particularly struck by a scene in which the characters start to engage with the user and there’s a synchronicity between the user’s heartbeat and objects in the virtual world. As the narrative unwinds and the words of Melita’s character get more profound, parts of the landscape, which seemed to be flashing and pulsating at random, come together and start to mimic the user’s heartbeat.
In 2013, Jane Aspell of Anglia Ruskin University (UK) and Lukas Heydrich of the Swiss Federal Institute of Technology proved that a user’s sense of presence and identification with a virtual avatar could be dramatically increased by syncing the on-screen character with the heartbeat of the user. The relationship between bio-digital synchronicity, immersion, and emotional engagement is something that will surely have revolutionary narrative and storytelling potential.
Image Credit: Tithi Luadthong / Shutterstock.com

Posted in Human Robots

#430630 CORE2 consumer robot controller by ...

Hardware, software and cloud for fast robot prototyping and development
Kraków, Poland, June 27th, 2017 – Robotic development platform creator Husarion has launched CORE2, its next-generation dedicated robot controller. Available now on the Crowd Supply crowdfunding platform, CORE2 enables rapid prototyping and development of consumer and service robots. It’s especially suitable for engineers designing commercial appliances, as well as for robotics students and hobbyists. Whether the next robotic idea is a tiny rover that penetrates tunnels, a surveillance drone, or a room-sized 3D printer, CORE2 can serve as the brains behind it.
Photo Credit: Husarion – www.husarion.com
Husarion’s platform greatly simplifies robot development, making it as easy as creating a website. It provides engineers with embedded hardware, preconfigured software, and easy online management. From simple proof-of-concept prototypes made with LEGO® Mindstorms to complex designs ready for mass manufacturing, the core technology stays the same throughout the process, shortening the time to market significantly. It’s designed to be for the consumer robotics industry what Arduino and Raspberry Pi were for the Maker Movement.

“We are on the verge of a consumer robotics revolution”, says Dominik Nowak, CEO of Husarion. “Big industrial businesses have long been utilizing robots, but until very recently the consumer side hasn’t seen that many of them. This is starting to change now with the democratization of tools, the Maker Movement and technology maturing. We believe Husarion is uniquely positioned for the upcoming boom, offering robot developers a holistic solution and lowering the barrier of entry to the market.”

The hardware part of the platform is the Husarion CORE2 board, a computer that interfaces directly with motors, servos, encoders, or sensors. It’s powered by an ARM® Cortex-M4 CPU, features 42 I/O ports, and can support up to 4 DC motors and 6 servomechanisms. Wireless connectivity is provided by a built-in Wi-Fi module.
Photo Credit: Husarion – www.husarion.com
The Husarion CORE2-ROS is an alternative configuration with a Raspberry Pi 3 ARMv8-powered board layered on top, running a custom Linux distribution with the Robot Operating System (ROS) preinstalled. It allows users to tap into the rich set of modules and building tools already available for ROS. Real-time capabilities and high computing power enable advanced use cases, such as fully autonomous devices.
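For readers unfamiliar with ROS, the kind of module you would run on the CORE2-ROS’s Raspberry Pi layer looks like the standard roscpp publisher below. This is generic ROS tutorial-style code rather than anything Husarion-specific; the node and topic names are arbitrary examples.

```cpp
// Minimal, generic ROS (roscpp) publisher node of the kind that could run on
// the CORE2-ROS's Raspberry Pi layer. Standard ROS boilerplate; the topic
// name and message content are arbitrary.
#include <ros/ros.h>
#include <std_msgs/String.h>

int main(int argc, char** argv) {
    ros::init(argc, argv, "core2_status_publisher");
    ros::NodeHandle nh;
    ros::Publisher pub = nh.advertise<std_msgs::String>("robot_status", 10);

    ros::Rate rate(1);  // publish once per second
    while (ros::ok()) {
        std_msgs::String msg;
        msg.data = "CORE2 alive";
        pub.publish(msg);
        ros::spinOnce();
        rate.sleep();
    }
    return 0;
}
```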

Developing software for CORE2-powered robots is easy. Husarion provides a web IDE, allowing engineers to program their connected robots directly from within the browser. There’s also an offline SDK and a convenient extension for Visual Studio Code. The open-source hFramework library, built on a real-time operating system (RTOS), masks the complexity of interface communication behind an elegant, easy-to-use API.
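For flavor, here is a short sketch in the style of Husarion’s hFramework examples. The identifiers used (hMain, hMotA, hLED1, sys.delay) are recalled from typical Husarion sample code and should be treated as assumptions rather than a verified listing of the API.

```cpp
// Sketch in the style of Husarion's hFramework examples. The identifiers
// (hMain, hMotA, hLED1, sys.delay) are assumptions based on typical Husarion
// sample code, not a verified API listing.
#include "hFramework.h"

void hMain() {
    hMotA.setPower(300);      // spin a DC motor on port A at modest power
    while (true) {
        hLED1.toggle();       // blink an on-board LED as a heartbeat
        sys.delay(500);       // milliseconds
    }
}
```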

CORE2 also works with Arduino libraries, which can be used with no modifications at all through the compatibility layer of the hFramework API.
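In practice, that should mean a stock Arduino-style sketch like the one below runs unchanged; which physical CORE2 pin LED_BUILTIN maps to is assumed here for illustration rather than taken from Husarion’s documentation.

```cpp
// Stock Arduino-style blink sketch, of the kind the press release says runs
// unchanged through CORE2's compatibility layer. The physical pin behind
// LED_BUILTIN on CORE2 is an assumption.
void setup() {
    pinMode(LED_BUILTIN, OUTPUT);
}

void loop() {
    digitalWrite(LED_BUILTIN, HIGH);
    delay(500);
    digitalWrite(LED_BUILTIN, LOW);
    delay(500);
}
```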
Photo Credit: Husarion – www.husarion.com
For online access, programming, and control, Husarion provides its dedicated Cloud. By registering a CORE2-powered robot at https://cloud.husarion.com, developers can update firmware online, build a custom web control UI, and share control of their device with anyone.

Starting at $89, Husarion CORE2 and CORE2-ROS controllers are now on sale through Crowd Supply.

Husarion also offers complete development kits, extra servo controllers and additional modules for compatibility with LEGO® Mindstorms or Makeblock® mechanics. For more information, please visit: https://www.crowdsupply.com/husarion/core2.

Key points:
A dedicated robot hardware controller, with built-in interfaces for sensors, servos, DC motors, and encoders
Programming with free tools: online (via Husarion Cloud Web IDE) or offline (Visual Studio Code extension)
Compatible with ROS; provides an open-source C++11 programming framework based on an RTOS
Husarion Cloud: control, program and share robots, with customizable control UI
Allows faster development and more advanced robotics than general maker boards like Arduino or Raspberry Pi

About Husarion
Husarion was founded in 2013 in Kraków, Poland. In 2015, Husarion ran a successful Kickstarter campaign for RoboCORE, the company’s first-generation controller. The company delivers a fast prototyping platform for consumer robots. Thanks to Husarion’s hardware modules, efficient programming tools, and cloud management, engineers can rapidly develop and iterate on their robot ideas. Husarion simplifies the development of connected, commercial robots ready for mass production and provides kits for academic education.

For more information, visit: https://husarion.com/.

Media contact:

Piotr Sarota, public relations consultant
SAROTA PR – public relations agency
phone: +48 12 684 12 68
mobile: +48 606 895 326
email: piotr(at)sarota.pl
http://www.sarota.pl/
Jakub Misiura, public relations specialist
phone: +48 12 349 03 52
mobile: +48 696 778 568
email: jakub.misiura(at)sarota.pl


The post CORE2 consumer robot controller by Husarion appeared first on Roboticmagazine.

Posted in Human Robots