Tag Archives: studio

#434246 How AR and VR Will Shape the Future of ...

How we work and play is about to transform.

After a prolonged technology “winter”—or what I like to call the ‘deceptive growth’ phase of any exponential technology—the hardware and software that power virtual reality (VR) and augmented reality (AR) applications are accelerating at an extraordinary rate.

Unprecedented new applications in almost every industry are exploding onto the scene.

Both VR and AR, combined with artificial intelligence, will significantly disrupt the “middleman” and make our lives “auto-magical.” The implications will touch every aspect of our lives, from education and real estate to healthcare and manufacturing.

The Future of Work
How and where we work is already changing, thanks to exponential technologies like artificial intelligence and robotics.

But virtual and augmented reality are taking the future workplace to an entirely new level.

Virtual Reality Case Study: eXp Realty

I recently interviewed Glenn Sanford, who founded eXp Realty in 2008 (imagine: a real estate company on the heels of the housing market collapse) and is the CEO of eXp World Holdings.

Ten years later, eXp Realty has an army of 14,000 agents across all 50 US states, three Canadian provinces, and 400 MLS market areas… all without a single traditional staffed office.

In a bid to transition from 2D interfaces to immersive, 3D work experiences, the virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and setting an extraordinary new precedent.

Real estate agents, managers, and even clients gather in a unique virtual campus, replete with a sports field, library, and lobby. It’s all accessible via head-mounted displays, but most agents join with a computer browser. Surprisingly, the campus-style setup enables the same type of water-cooler conversations I see every day at the XPRIZE headquarters.

With this centralized VR campus, eXp Realty has essentially eliminated overhead costs and entered a lucrative market without the constraints of brick-and-mortar businesses.

Delocalize with VR, and you can now hire anyone with internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.

As a leader, what happens when you can scalably expand and connect your workforce, not to mention your customer base, without the excess overhead of office space and furniture? Your organization can run faster and farther than your competition.

But beyond the indefinite scalability achieved through digitizing your workplace, VR’s implications extend to the lives of your employees and even the future of urban planning:

Home Prices: As virtual headquarters and office branches take hold of the 21st-century workplace, those who work on campuses like eXp Realty’s won’t need to commute to work. As a result, VR has the potential to dramatically influence real estate prices—after all, if you don’t need to drive to an office, your home search isn’t limited to a specific set of neighborhoods anymore.

Transportation: In major cities like Los Angeles and San Francisco, the implications are tremendous. Analysts have shown that in many major cities it’s already cheaper to use ride-sharing services like Uber and Lyft than to own a car. And once autonomous “Car-as-a-Service” platforms proliferate, associated transportation costs like parking fees, fuel, and auto repairs will no longer fall on the individual, if they don’t disappear entirely.

Augmented Reality: Annotate and Interact with Your Workplace

As I discussed in a recent Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high-rises.

Enter a professional world electrified by augmented reality.

Our workplaces are practically littered with information. File cabinets abound with archival data and relevant documents, and company databases continue to grow at a breakneck pace. And, as all of us are increasingly aware, cybersecurity and robust data permission systems remain a major concern for CEOs and national security officials alike.

What if we could link that information to specific locations, people, time frames, and even moving objects?

As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.

Imagine showing up at your building’s concierge desk as your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.

You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.

With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics will take a fraction of the time, operations will grow seamless, and corporate data will be safer than ever.

Or better yet, imagine precise and high-dexterity work environments populated with interactive annotations that guide an artisan, surgeon, or engineer through meticulous handiwork.

Take, for instance, AR service 3D4Medical, which annotates virtual anatomy in midair. And as augmented reality hardware continues to advance, we might envision a future wherein surgeons perform operations on annotated organs and magnified incision sites, or one in which quantum computer engineers can magnify and annotate mechanical parts, speeding up reaction times and vastly improving precision.

The Future of Free Time and Play
In Abundance, I wrote about today’s rapidly demonetizing cost of living. In 2011, almost 75 percent of the average American’s income was spent on housing, transportation, food, personal insurance, health, and entertainment. What the headlines don’t mention: this is a dramatic improvement over the last 50 years. We’re spending less on basic necessities and working fewer hours than previous generations.

Chart depicts the average weekly work hours for full-time production employees in non-agricultural activities. Source: Diamandis.com

Technology continues to drive this trend, taking care of us and doing our work for us. One phrase that describes this is “technological socialism,” where it’s technology, not the government, that takes care of us.

Extrapolating from the data, I believe we are heading towards a post-scarcity economy. Perhaps we won’t need to work at all, because we’ll own and operate our own fleet of robots or AI systems that do our work for us.

As living expenses demonetize and workplace automation increases, what will we do with this abundance of time? How will our children and grandchildren connect and find their purpose if they don’t have to work for a living?

As I write this on a Saturday afternoon and watch my two seven-year-old boys immersed in Minecraft, building and exploring worlds of their own creation, I can’t help but imagine that this future is about to enter its disruptive phase.

Exponential technologies are enabling a new wave of highly immersive games, virtual worlds, and online communities. We’ve likely all heard of the Oasis from Ready Player One. But far beyond what we know today as ‘gaming,’ VR is fast becoming a home to immersive storytelling, interactive films, and virtual world creation.

Within the virtual world space, let’s take one of today’s greatest precursors, the aforementioned game Minecraft.

For reference, Minecraft’s explorable world is over eight times the size of planet Earth. And in their free time, my kids would rather build in Minecraft than do almost anything else. I think of it as their primary passion: to create worlds, explore worlds, and be challenged in worlds.

And in the near future, we’re all going to become creators of or participants in virtual worlds, each populated with assets and storylines interoperable with other virtual environments.

But while the technological methods are new, this concept has been alive and well for generations. Whether you got lost in the world of Heidi or Harry Potter, grew up reading comic books or watching television, we’ve all been playing in imaginary worlds, with characters and story arcs populating our minds. That’s the nature of childhood.

In the past, however, your ability to edit was limited, especially if a given story came in some form of 2D media. I couldn’t change where Tom Sawyer was going or what Iron Man was doing. But as a slew of new software advancements underlying VR and AR allows us to interact with characters and gain agency (albeit limited, for now), both new and legacy stories will become subjects of our creation and playgrounds for virtual interaction.

Take VR/AR storytelling startup Fable Studio’s Wolves in the Walls film. Debuting at the 2018 Sundance Film Festival, Fable’s immersive story is adapted from Neil Gaiman’s book and tracks the protagonist, Lucy, whose programming allows her to respond differently based on what her viewers do.

And while Lucy can do little more than hand virtual cameras to her viewers, among other limited tasks, Fable Studio’s founder Edward Saatchi sees this project as just the beginning.

Imagine a virtual character—in either augmented or virtual reality—equipped with AI capabilities, who can not only participate in a fictional storyline but also interact and converse directly with you across a host of virtual and digitally overlaid environments.

Or imagine engaging with a less-structured environment, like the Star Wars cantina, populated with strangers and friends to provide an entirely novel social media experience.

Already, we’ve seen characters like those of Pokémon brought into the real world with Pokémon Go, populating cities and real spaces with holograms and tasks. And just as augmented reality has the power to turn our physical environments into digital gaming platforms, advanced AR could usher in a new era of in-home entertainment.

Imagine transforming your home into a narrative environment for your kids or overlaying your office interior design with Picasso paintings and gothic architecture. As computer vision rapidly grows capable of identifying objects and mapping virtual overlays atop them, we might also one day be able to project home theaters or live sports within our homes, broadcasting full holograms that allow us to zoom into the action and place ourselves within it.

Increasingly honed and commercialized, augmented and virtual reality are on the cusp of revolutionizing the way we play, tell stories, create worlds, and interact with both fictional characters and each other.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: nmedia / Shutterstock.com

Posted in Human Robots

#432181 Putting AI in Your Pocket: MIT Chip Cuts ...

Neural networks are powerful things, but they need a lot of juice. Engineers at MIT have now developed a new chip that cuts neural nets’ power consumption by up to 95 percent, potentially allowing them to run on battery-powered mobile devices.

Smartphones these days are getting truly smart, with ever more AI-powered services like digital assistants and real-time translation. But typically the neural nets crunching the data for these services are in the cloud, with data from smartphones ferried back and forth.

That’s not ideal, as it requires a lot of communication bandwidth and means potentially sensitive data is being transmitted and stored on servers outside the user’s control. But the huge amounts of energy needed to power the GPUs neural networks run on make it impractical to implement them in devices that run on limited battery power.

Engineers at MIT have now designed a chip that cuts that power consumption by up to 95 percent by dramatically reducing the need to shuttle data back and forth between a chip’s memory and processors.

Neural nets consist of thousands of interconnected artificial neurons arranged in layers. Each neuron receives input from multiple neurons in the layer below it, and if the combined input passes a certain threshold, it transmits an output to multiple neurons in the layer above it. The strength of the connection between neurons is governed by a weight, which is set during training.

This means that for every neuron, the chip has to retrieve the input data for a particular connection and the connection weight from memory, multiply them, store the result, and then repeat the process for every input. That requires a lot of data to be moved around, expending a lot of energy.
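
To make the cost concrete, here’s a minimal sketch in C++ (with illustrative names of my own, not the MIT team’s code) of the multiply-accumulate loop a conventional chip runs for a single neuron. The point to notice is that every pass through the loop pulls an input value and a weight out of memory:

```cpp
#include <vector>
#include <cstddef>

// Illustrative sketch: one neuron's output on a conventional architecture.
// Every loop iteration fetches an input and a weight from memory,
// multiplies them, and accumulates the result. That data movement, not
// the arithmetic itself, dominates the energy cost.
float neuron_output(const std::vector<float>& inputs,
                    const std::vector<float>& weights,
                    float threshold) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < inputs.size(); ++i) {
        sum += inputs[i] * weights[i];  // two memory reads per connection
    }
    // Fire only if the combined input passes the threshold.
    return sum > threshold ? sum : 0.0f;
}
```

Multiply that loop by thousands of neurons and millions of connections per input, and the memory traffic quickly becomes the dominant energy cost.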

The new MIT chip does away with that, instead computing all the inputs in parallel within the memory using analog circuits. That significantly reduces the amount of data that needs to be shoved around and results in major energy savings.

The approach requires the connection weights to be binary rather than spanning a range of values, but previous theoretical work had suggested this wouldn’t dramatically impact accuracy, and the researchers found the chip’s results were generally within two to three percent of a conventional non-binary neural net running on a standard computer.
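
To illustrate what binary weights buy (again a hedged sketch with my own naming, not the researchers’ code): when each weight is restricted to +1 or -1, the multiply-accumulate collapses into pure addition and subtraction, which is exactly the kind of bulk operation an analog memory array can compute in place:

```cpp
#include <vector>
#include <cstddef>

// Illustrative sketch: with binary weights (+1 / -1), the dot product
// needs no multiplications at all -- each weight simply decides whether
// the corresponding input is added or subtracted.
float binary_neuron_output(const std::vector<float>& inputs,
                           const std::vector<bool>& weight_is_positive,
                           float threshold) {
    float sum = 0.0f;
    for (std::size_t i = 0; i < inputs.size(); ++i) {
        sum += weight_is_positive[i] ? inputs[i] : -inputs[i];
    }
    return sum > threshold ? sum : 0.0f;
}
```

In the MIT chip this accumulation happens physically inside the memory array rather than in a software loop, but the arithmetic being eliminated is the same.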

This isn’t the first time researchers have created chips that carry out processing in memory to reduce the power consumption of neural nets, but it’s the first time the approach has been used to run powerful convolutional neural networks popular for image-based AI applications.

“The results show impressive specifications for the energy-efficient implementation of convolution operations with memory arrays,” Dario Gil, vice president of artificial intelligence at IBM, said in a statement.

“It certainly will open the possibility to employ more complex convolutional neural networks for image and video classifications in IoT [the internet of things] in the future.”

It’s not just research groups working on this, though. The desire to get AI smarts into devices like smartphones, household appliances, and all kinds of IoT devices is driving the who’s who of Silicon Valley to pile into low-power AI chips.

Apple has already integrated its Neural Engine into the iPhone X to power things like its facial recognition technology, and Amazon is rumored to be developing its own custom AI chips for the next generation of its Echo digital assistant.

The big chip companies are also increasingly pivoting towards supporting advanced capabilities like machine learning, which has forced them to make their devices ever more energy-efficient. Earlier this year Arm unveiled two new chips: the Arm Machine Learning processor, aimed at general AI tasks from translation to facial recognition, and the Arm Object Detection processor for detecting things like faces in images.

Qualcomm’s latest mobile chip, the Snapdragon 845, features a GPU and is heavily focused on AI. The company has also released the Snapdragon 820E, which is aimed at drones, robots, and industrial devices.

Going a step further, IBM and Intel are developing neuromorphic chips whose architectures are inspired by the human brain and its incredible energy efficiency. That could theoretically allow IBM’s TrueNorth and Intel’s Loihi to run powerful machine learning on a fraction of the power of conventional chips, though they are both still highly experimental at this stage.

Getting these chips to run neural nets as powerful as those found in cloud services without burning through batteries too quickly will be a big challenge. But at the current pace of innovation, it doesn’t look like it will be too long before you’ll be packing some serious AI power in your pocket.

Image Credit: Blue Planet Studio / Shutterstock.com

Posted in Human Robots

#431078 This Year’s Awesome Robot Stories From ...

Each week we scour the web for great articles and fascinating advances across our core topics, from AI to biotech and the brain. But robots have a special place in our hearts. This week, we took a look back at 2017 so far and unearthed a few favorite robots for your reading and viewing pleasure.
Tarzan the Swinging Robot Could Be the Future of Farming
Mariella Moon | Engadget
“Tarzan will be able to swing over crops using its 3D-printed claws and parallel guy-wires stretched over fields. It will then take measurements and pictures of each plant with its built-in camera while suspended…While it may take some time to achieve that goal, the researchers plan to start testing the robot soon.”

Grasping Robots Compete to Rule Amazon’s Warehouses
Tom Simonite | Wired
“Robots able to help with so-called picking tasks would boost Amazon’s efficiency—and make it much less reliant on human workers. It’s why the company has invited a motley crew of mechanical arms, grippers, suction cups—and their human handlers—to Nagoya, Japan, this week to show off their manipulation skills.”

Robots Learn to Speak Body Language
Alyssa Pagano | IEEE Spectrum
“One notable feature of the OpenPose system is that it can track not only a person’s head, torso, and limbs but also individual fingers. To do that, the researchers used CMU’s Panoptic Studio, a dome lined with 500 cameras, where they captured body poses at a variety of angles and then used those images to build a data set.”

I Watched Two Robots Chat Together on Stage at a Tech Event
Jon Russell | TechCrunch
“The robots in question are Sophia and Han, and they belong to Hanson Robotics, a Hong Kong-based company that is developing and deploying artificial intelligence in humanoids. The duo took to the stage at Rise in Hong Kong with Hanson Robotics’ Chief Scientist Ben Goertzel directing the banter. The conversation, which was partially scripted, wasn’t as slick as the human-to-human panels at the show, but it was certainly a sight to behold for the packed audience.”

How This Japanese Robotics Master Is Building Better, More Human Androids
Harry McCracken | Fast Company
“On the tech side, making a robot look and behave like a person involves everything from electronics to the silicone Ishiguro’s team uses to simulate skin. ‘We have a technology to precisely control pneumatic actuators,’ he says, noting, as an example of what they need to re-create, that ‘the human shoulder has four degrees of freedom.’”
Stock Media provided by Besjunior / Pond5

Posted in Human Robots

#430630 CORE2 consumer robot controller by ...

Hardware, software and cloud for fast robot prototyping and development
Kraków, Poland, June 27th, 2017 – Robotic development platform creator Husarion has launched CORE2, its next-generation dedicated robot controller. Available now on the Crowd Supply crowdfunding platform, CORE2 enables the rapid prototyping and development of consumer and service robots. It’s especially suitable for engineers designing commercial appliances, as well as for robotics students and hobbyists. Whether the next robotic idea is a tiny rover that penetrates tunnels, a surveillance drone, or a room-sized 3D printer, CORE2 can serve as the brains behind it.
Photo Credit: Husarion – www.husarion.com
Husarion’s platform greatly simplifies robot development, making it as easy as creating a website. It provides engineers with embedded hardware, preconfigured software and easy online management. From simple proof-of-concept prototypes made with LEGO® Mindstorms to complex designs ready for mass manufacturing, the core technology stays the same throughout the process, shortening the time to market significantly. It’s designed to be an innovation for the consumer robotics industry similar to what Arduino or Raspberry Pi were to the Maker Movement.

“We are on the verge of a consumer robotics revolution”, says Dominik Nowak, CEO of Husarion. “Big industrial businesses have long been utilizing robots, but until very recently the consumer side hasn’t seen that many of them. This is starting to change now with the democratization of tools, the Maker Movement and technology maturing. We believe Husarion is uniquely positioned for the upcoming boom, offering robot developers a holistic solution and lowering the barrier of entry to the market.”

The hardware part of the platform is the Husarion CORE2 board, a computer that interfaces directly with motors, servos, encoders or sensors. It’s powered by an ARM® Cortex-M4 CPU, features 42x I/O ports and can support up to 4x DC motors and 6x servomechanisms. Wireless connectivity is provided by a built-in Wi-Fi module.
Photo Credit: Husarion – www.husarion.com
The Husarion CORE2-ROS is an alternative configuration with a Raspberry Pi 3 ARMv8-powered board layered on top, running a custom Linux distribution with the Robot Operating System (ROS) preinstalled. It allows users to tap into the rich sets of modules and building tools already available for ROS. Real-time capabilities and high computing power enable advanced use cases, such as fully autonomous devices.

Developing software for CORE2-powered robots is easy. Husarion provides a Web IDE, allowing engineers to program their connected robots directly from within the browser. There’s also an offline SDK and a convenient extension for Visual Studio Code. The open-source hFramework library, based on a real-time operating system (RTOS), masks the complexity of interface communication behind an elegant, easy-to-use API.

CORE2 also works with Arduino libraries, which can be used with no modifications at all through the compatibility layer of the hFramework API.
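
For a flavor of what that looks like in practice, here’s a minimal motor-control sketch in hFramework’s C++ style. The identifiers (hMain, hMotA.setPower, sys.delay) are drawn from Husarion’s published examples as best I recall them, so treat the exact API as an assumption to verify against the current documentation:

```cpp
#include <hFramework.h>

// Minimal hFramework-style sketch (identifiers assumed from Husarion's
// published examples -- verify against the current API docs).
// Ramps a DC motor on port A up in steps, then stops and repeats.
void hMain()
{
    for (;;) {
        for (int power = 0; power <= 1000; power += 100) {
            hMotA.setPower(power);  // set PWM power on motor port A
            sys.delay(200);         // wait 200 ms between steps
        }
        hMotA.setPower(0);          // stop the motor
        sys.delay(1000);            // pause before the next ramp
    }
}
```

Because CORE2 also accepts Arduino libraries through that compatibility layer, the same program could pull in, say, a sensor driver written for Arduino without modification.
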
Photo Credit: Husarion – www.husarion.com
For online access, programming and control, Husarion provides its dedicated Cloud. By registering a CORE2-powered robot at https://cloud.husarion.com, developers can update firmware online, build a custom Web control UI and share control of their device with anyone.

Starting at $89, Husarion CORE2 and CORE2-ROS controllers are now on sale through Crowd Supply.

Husarion also offers complete development kits, extra servo controllers and additional modules for compatibility with LEGO® Mindstorms or Makeblock® mechanics. For more information, please visit: https://www.crowdsupply.com/husarion/core2.

Key points:
A dedicated robot hardware controller, with built-in interfaces for sensors, servos, DC motors and encoders
Programming with free tools: online (via Husarion Cloud Web IDE) or offline (Visual Studio Code extension)
Compatible with ROS; provides an open-source C++11 programming framework based on RTOS
Husarion Cloud: control, program and share robots, with a customizable control UI
Allows faster development and more advanced robotics than general maker boards like Arduino or Raspberry Pi

About Husarion
Husarion was founded in 2013 in Kraków, Poland. In 2015, Husarion successfully funded a Kickstarter campaign for RoboCORE, the company’s first-generation controller. The company delivers a fast prototyping platform for consumer robots. Thanks to Husarion’s hardware modules, efficient programming tools and cloud management, engineers can rapidly develop and iterate on their robot ideas. Husarion simplifies the development of connected, commercial robots ready for mass production and provides kits for academic education.

For more information, visit: https://husarion.com/.

Media contact:

Piotr Sarota, public relations consultant
SAROTA PR – public relations agency
phone: +48 12 684 12 68
mobile: +48 606 895 326
email: piotr(at)sarota.pl
http://www.sarota.pl/

Jakub Misiura, public relations specialist
phone: +48 12 349 03 52
mobile: +48 696 778 568
email: jakub.misiura(at)sarota.pl


The post CORE2 consumer robot controller by Husarion appeared first on Roboticmagazine.

Posted in Human Robots