Tag Archives: logistics

#434843 This Week’s Awesome Stories From ...

ARTIFICIAL INTELLIGENCE
OpenAI’s Dota 2 AI Steamrolls World Champion e-Sports Team With Back-to-Back Victories
Nick Statt | The Verge
“…[OpenAI cofounder and CEO, Sam Altman] tells me there probably does not exist a video game out there right now that a system like OpenAI Five can’t eventually master at a level beyond human capability. For the broader AI industry, mastering video games may soon become passé, simple table stakes required to prove your system can learn fast and act in a way required to tackle tougher, real-world tasks with more meaningful benefits.”

ROBOTICS
Boston Dynamics Debuts the Production Version of SpotMini
Brian Heater, Catherine Shu | TechCrunch
“SpotMini is the first commercial robot Boston Dynamics is set to release, but as we learned earlier, it certainly won’t be the last. The company is looking to its wheeled Handle robot in an effort to push into the logistics space. It’s a super-hot category for robotics right now. Notably, Amazon recently acquired Colorado-based startup Canvas to add to its own arm of fulfillment center robots.”

NEUROSCIENCE
Scientists Restore Some Brain Cell Functions in Pigs Four Hours After Death
Joel Achenbach | The Washington Post
“The ethicists say this research can blur the line between life and death, and could complicate the protocols for organ donation, which rely on a clear determination of when a person is dead and beyond resuscitation.”

BIOTECH
How Scientists 3D Printed a Tiny Heart From Human Cells
Yasmin Saplakoglu | Live Science
“Though the heart is much smaller than a human’s (it’s only the size of a rabbit’s), and there’s still a long way to go until it functions like a normal heart, the proof-of-concept experiment could eventually lead to personalized organs or tissues that could be used in the human body…”

SPACE
The Next Clash of Silicon Valley Titans Will Take Place in Space
Luke Dormehl | Digital Trends
“With bold plans that call for thousands of new satellites being put into orbit and astronomical costs, it’s going to be fascinating to observe the next phase of the tech platform battle being fought not on our desktops or mobile devices in our pockets, but outside of Earth’s atmosphere.”

FUTURE HISTORY
The Images That Could Help Rebuild Notre-Dame Cathedral
Alexis C. Madrigal | The Atlantic
“…in 2010, [Andrew] Tallon, an art professor at Vassar, took a Leica ScanStation C10 to Notre-Dame and, with the assistance of Columbia’s Paul Blaer, began to painstakingly scan every piece of the structure, inside and out. …Over five days, they positioned the scanner again and again—50 times in all—to create an unmatched record of the reality of one of the world’s most awe-inspiring buildings, represented as a series of points in space.”

AUGMENTED REALITY
Mapping Our World in 3D Will Let Us Paint Streets With Augmented Reality
Charlotte Jee | MIT Technology Review
“Scape wants to use its location services to become the underlying infrastructure upon which driverless cars, robotics, and augmented-reality services sit. ‘Our end goal is a one-to-one map of the world covering everything,’ says Miller. ‘Our ambition is to be as invisible as GPS is today.’”

Image Credit: VAlex / Shutterstock.com

Posted in Human Robots

#434818 Watch These Robots Do Tasks You Thought ...

Robots have been masters of manufacturing at speed and precision for decades, but give them a seemingly simple task like stacking shelves, and they quickly get stuck. That’s changing, though, as engineers build systems that can take on the deceptively tricky tasks most humans can do with their eyes closed.

Boston Dynamics is famous for dramatic reveals of robots performing mind-blowing feats that also leave you scratching your head as to what the market is—think the bipedal Atlas doing backflips or Spot the galloping robot dog.

Last week, the company released a video of a robot called Handle that looks like an ostrich on wheels carrying out the seemingly mundane task of stacking boxes in a warehouse.

It might seem like a step backward, but this is exactly the kind of practical task robots have long struggled with. While the speed and precision of industrial robots has seen them take over many functions in modern factories, they’re generally limited to highly prescribed tasks carried out in meticulously controlled environments.

That’s because despite their mechanical sophistication, most are still surprisingly dumb. They can carry out precision welding on a car or rapidly assemble electronics, but only by rigidly following a prescribed set of motions. Moving cardboard boxes around a warehouse might seem simple to a human, but it actually involves a variety of tasks machines still find pretty difficult—perceiving your surroundings, navigating, and interacting with objects in a dynamic environment.

But the release of this video suggests Boston Dynamics thinks these kinds of applications are close to prime time. Last week the company doubled down by announcing the acquisition of start-up Kinema Systems, which builds computer vision systems for robots working in warehouses.

It’s not the only company making strides in this area. On the same day the video went live, Google unveiled a robot arm called TossingBot that can pick random objects from a box and quickly toss them into another container beyond its reach, which could prove very useful for sorting items in a warehouse. The machine can train on new objects in just an hour or two, and can pick and toss up to 500 items an hour with better accuracy than any of the humans who tried the task.

And an apple-picking robot built by Abundant Robotics is currently on New Zealand farms navigating between rows of apple trees using LIDAR and computer vision to single out ripe apples before using a vacuum tube to suck them off the tree.

In most cases, advances in machine learning and computer vision brought about by the recent AI boom are the keys to these rapidly improving capabilities. Robots have historically had to be painstakingly programmed by humans to solve each new task, but deep learning is making it possible for them to quickly train themselves on a variety of perception, navigation, and dexterity tasks.

It’s not been simple, though, and the application of deep learning in robotics has lagged behind other areas. A major limitation is that the process typically requires huge amounts of training data. That’s fine when you’re dealing with image classification, but when that data needs to be generated by real-world robots it can make the approach impractical. Simulations offer the possibility to run this training faster than real time, but it’s proved difficult to translate policies learned in virtual environments into the real world.

Recent years have seen significant progress on these fronts, though, and the increasing integration of modern machine learning with robotics. In October, OpenAI imbued a robotic hand with human-level dexterity by training an algorithm in a simulation using reinforcement learning before transferring it to the real-world device. The key to ensuring the translation went smoothly was injecting random noise into the simulation to mimic some of the unpredictability of the real world.
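The noise-injection idea is often called domain randomization. A minimal sketch in Python conveys the gist; the parameter names and ranges here are purely illustrative, not OpenAI's actual configuration:

```python
import random

# Illustrative simulator parameters and ranges; not OpenAI's actual values.
def randomized_world():
    """Sample perturbed simulator settings for one training episode."""
    return {
        "friction": random.uniform(0.7, 1.3),      # surface friction multiplier
        "object_mass": random.uniform(0.8, 1.2),   # object mass multiplier
        "motor_delay": random.uniform(0.0, 0.05),  # actuation latency, seconds
    }

def training_worlds(episodes):
    """Each episode sees a differently perturbed world, so the learned policy
    cannot overfit to one exact simulation and transfers better to reality."""
    return [randomized_world() for _ in range(episodes)]
```

Because no two episodes share identical physics, the policy is forced to learn behaviors robust to the variation, which is exactly the slack it needs when confronted with the real world.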

And just a couple of weeks ago, MIT researchers demonstrated a new technique that let a robot arm learn to manipulate new objects with far less training data than is usually required. By getting the algorithm to focus on a few key points on the object necessary for picking it up, the system could learn to pick up a previously unseen object after seeing only a few dozen examples (rather than the hundreds or thousands typically required).

How quickly these innovations will trickle down to practical applications remains to be seen, but a number of startups as well as logistics behemoth Amazon are developing robots designed to flexibly pick and place the wide variety of items found in your average warehouse.

Whether the economics of using robots to replace humans at these kinds of menial tasks makes sense yet is still unclear. The collapse of collaborative robotics pioneer Rethink Robotics last year suggests there are still plenty of challenges.

But at the same time, the number of robotic warehouses is expected to leap from 4,000 today to 50,000 by 2025. It may not be long until robots are muscling in on tasks we’ve long assumed only humans could do.

Image Credit: Visual Generation / Shutterstock.com

Posted in Human Robots

#434636 Using Advanced Technology to Increase ...

The 1SHIFT Logistics platform, developed by LiteLink Technologies, shows what can happen when the technology industry meets the logistics industry. 1SHIFT Logistics uses advanced technology such as artificial intelligence and geolocation to coordinate all three major parts of the logistics process: the shipper, the carrier, and the delivery site. When most people think …

The post Using Advanced Technology to Increase Supply Chain Efficiency appeared first on TFOT.

Posted in Human Robots

#434246 How AR and VR Will Shape the Future of ...

How we work and play is about to transform.

After a prolonged technology “winter”—or what I like to call the ‘deceptive growth’ phase of any exponential technology—the hardware and software that power virtual (VR) and augmented reality (AR) applications are accelerating at an extraordinary rate.

Unprecedented new applications in almost every industry are exploding onto the scene.

Both VR and AR, combined with artificial intelligence, will significantly disrupt the “middleman” and make our lives “auto-magical.” The implications will touch every aspect of our lives, from education and real estate to healthcare and manufacturing.

The Future of Work
How and where we work is already changing, thanks to exponential technologies like artificial intelligence and robotics.

But virtual and augmented reality are taking the future workplace to an entirely new level.

Virtual Reality Case Study: eXp Realty

I recently interviewed Glenn Sanford, who founded eXp Realty in 2008 (imagine: a real estate company on the heels of the housing market collapse) and is the CEO of eXp World Holdings.

Ten years later, eXp Realty has an army of 14,000 agents across all 50 US states, three Canadian provinces, and 400 MLS market areas… all without a single traditional staffed office.

In a bid to transition from 2D interfaces to immersive, 3D work experiences, virtual platform VirBELA built out the company’s office space in VR, unlocking indefinite scaling potential and setting an extraordinary new precedent.

Real estate agents, managers, and even clients gather in a unique virtual campus, replete with a sports field, library, and lobby. It’s all accessible via head-mounted displays, but most agents join with a computer browser. Surprisingly, the campus-style setup enables the same type of water-cooler conversations I see every day at the XPRIZE headquarters.

With this centralized VR campus, eXp Realty has essentially thrown out overhead costs and entered a lucrative market without the same constraints of brick-and-mortar businesses.

Delocalize with VR, and you can now hire anyone with internet access (right next door or on the other side of the planet), redesign your corporate office every month, throw in an ocean-view office or impromptu conference room for client meetings, and forget about guzzled-up hours in traffic.

As a leader, what happens when you can scalably expand and connect your workforce, not to mention your customer base, without the excess overhead of office space and furniture? Your organization can run faster and farther than your competition.

But beyond the indefinite scalability achieved through digitizing your workplace, VR’s implications extend to the lives of your employees and even the future of urban planning:

Home Prices: As virtual headquarters and office branches take hold of the 21st-century workplace, those who work on campuses like eXp Realty’s won’t need to commute to work. As a result, VR has the potential to dramatically influence real estate prices—after all, if you don’t need to drive to an office, your home search isn’t limited to a specific set of neighborhoods anymore.

Transportation: In major cities like Los Angeles and San Francisco, the implications are tremendous. Analysts have shown that in many major cities it’s already cheaper to use ride-sharing services like Uber and Lyft than to own a car. And once autonomous “Car-as-a-Service” platforms proliferate, associated transportation costs like parking fees, fuel, and auto repairs will shift away from the individual, if not disappear entirely.
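The underlying comparison is simple arithmetic. The figures below are purely illustrative assumptions, not the analysts' actual data:

```python
# Purely illustrative annual figures (USD); not taken from any analyst report.
def annual_ownership_cost(payments, insurance, parking, fuel, repairs):
    return payments + insurance + parking + fuel + repairs

def annual_rideshare_cost(trips_per_week, avg_fare):
    return trips_per_week * avg_fare * 52

own = annual_ownership_cost(4000, 1500, 3000, 1800, 700)  # 11000
ride = annual_rideshare_cost(12, 15.0)                    # 9360.0
```

With assumptions in this ballpark, a dozen ride-share trips a week already undercut ownership, and removing the driver from the fare would widen the gap further.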

Augmented Reality: Annotate and Interact with Your Workplace

As I discussed in a recent Spatial Web blog, not only will Web 3.0 and VR advancements allow us to build out virtual worlds, but we’ll soon be able to digitally map our real-world physical offices or entire commercial high-rises.

Enter a professional world electrified by augmented reality.

Our workplaces are practically littered with information. File cabinets abound with archival data and relevant documents, and company databases continue to grow at a breakneck pace. And, as all of us are increasingly aware, cybersecurity and robust data permission systems remain a major concern for CEOs and national security officials alike.

What if we could link that information to specific locations, people, time frames, and even moving objects?

As data gets added and linked to any given employee’s office, conference room, or security system, we might then access online-merge-offline environments and information through augmented reality.

Imagine showing up at your building’s concierge and your AR glasses automatically check you into the building, authenticating your identity and pulling up any reminders you’ve linked to that specific location.
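Under the hood, a scenario like that reduces to data keyed by physical location. A minimal sketch, with every identifier hypothetical:

```python
# Hypothetical location-keyed annotation store; identifiers are illustrative.
annotations = {}

def link(location_id, note):
    """Attach a note to a physical location."""
    annotations.setdefault(location_id, []).append(note)

def on_arrival(location_id):
    """What AR glasses might surface on recognizing a location."""
    return annotations.get(location_id, [])

link("hq-lobby", "Badge renewal due Friday")
```

A production system would key on precise spatial coordinates and authenticated identities rather than string labels, but the core pattern, data indexed by place and retrieved on arrival, is the same.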

You stop by a friend’s office, and his smart security system lets you know he’ll arrive in an hour. Need to book a public conference room that’s already been scheduled by another firm’s marketing team? Offer to pay them a fee and, once accepted, a smart transaction will automatically deliver a payment to their company account.

With blockchain-verified digital identities, spatially logged data, and virtually manifest information, business logistics take a fraction of the time, operations grow seamless, and corporate data will be safer than ever.

Or better yet, imagine precise and high-dexterity work environments populated with interactive annotations that guide an artisan, surgeon, or engineer through meticulous handiwork.

Take, for instance, AR service 3D4Medical, which annotates virtual anatomy in midair. And as augmented reality hardware continues to advance, we might envision a future wherein surgeons perform operations on annotated organs and magnified incision sites, or one in which quantum computer engineers can magnify and annotate mechanical parts, speeding up reaction times and vastly improving precision.

The Future of Free Time and Play
In Abundance, I wrote about today’s rapidly demonetizing cost of living. In 2011, almost 75 percent of the average American’s income was spent on housing, transportation, food, personal insurance, health, and entertainment. What the headlines don’t mention: this is a dramatic improvement over the last 50 years. We’re spending less on basic necessities and working fewer hours than previous generations.

Chart depicts the average weekly work hours for full-time production employees in non-agricultural activities. Source: Diamandis.com data
Technology continues to change this, increasingly taking care of us and doing our work for us. One phrase that describes this is “technological socialism,” where it’s technology, not the government, that takes care of us.

Extrapolating from the data, I believe we are heading towards a post-scarcity economy. Perhaps we won’t need to work at all, because we’ll own and operate our own fleet of robots or AI systems that do our work for us.

As living expenses demonetize and workplace automation increases, what will we do with this abundance of time? How will our children and grandchildren connect and find their purpose if they don’t have to work for a living?

As I write this on a Saturday afternoon and watch my two seven-year-old boys immersed in Minecraft, building and exploring worlds of their own creation, I can’t help but imagine that this future is about to enter its disruptive phase.

Exponential technologies are enabling a new wave of highly immersive games, virtual worlds, and online communities. We’ve likely all heard of the Oasis from Ready Player One. But far beyond what we know today as ‘gaming,’ VR is fast becoming a home to immersive storytelling, interactive films, and virtual world creation.

Within the virtual world space, let’s take one of today’s greatest precursors, the aforementioned game Minecraft.

For reference, Minecraft is over eight times the size of planet Earth. And in their free time, my kids would rather build in Minecraft than almost any other activity. I think of it as their primary passion: to create worlds, explore worlds, and be challenged in worlds.

And in the near future, we’re all going to become creators of or participants in virtual worlds, each populated with assets and storylines interoperable with other virtual environments.

But while the technological methods are new, this concept has been alive and well for generations. Whether you got lost in the world of Heidi or Harry Potter, grew up reading comic books or watching television, we’ve all been playing in imaginary worlds, with characters and story arcs populating our minds. That’s the nature of childhood.

In the past, however, your ability to edit was limited, especially if a given story came in some form of 2D media. I couldn’t edit where Tom Sawyer was going or change what Iron Man was doing. But as a slew of new software advancements underlying VR and AR allow us to interact with characters and gain (albeit limited) agency (for now), both new and legacy stories will become subjects of our creation and playgrounds for virtual interaction.

Take VR/AR storytelling startup Fable Studio’s Wolves in the Walls film. Debuting at the 2018 Sundance Film Festival, Fable’s immersive story is adapted from Neil Gaiman’s book and tracks the protagonist, Lucy, whose programming allows her to respond differently based on what her viewers do.

And while Lucy can merely hand virtual cameras to her viewers among other limited tasks, Fable Studio’s founder Edward Saatchi sees this project as just the beginning.

Imagine a virtual character—either in augmented or virtual reality—geared with AI capabilities, that now can not only participate in a fictional storyline but interact and dialogue directly with you in a host of virtual and digitally overlayed environments.

Or imagine engaging with a less-structured environment, like the Star Wars cantina, populated with strangers and friends to provide an entirely novel social media experience.

Already, we’ve seen characters like that of Pokémon brought into the real world with Pokémon Go, populating cities and real spaces with holograms and tasks. And just as augmented reality has the power to turn our physical environments into digital gaming platforms, advanced AR could bring on a new era of in-home entertainment.

Imagine transforming your home into a narrative environment for your kids or overlaying your office interior design with Picasso paintings and gothic architecture. As computer vision rapidly grows capable of identifying objects and mapping virtual overlays atop them, we might also one day be able to project home theaters or live sports within our homes, broadcasting full holograms that allow us to zoom into the action and place ourselves within it.

Increasingly honed and commercialized, augmented and virtual reality are on the cusp of revolutionizing the way we play, tell stories, create worlds, and interact with both fictional characters and each other.

Join Me
Abundance-Digital Online Community: I’ve created a Digital/Online community of bold, abundance-minded entrepreneurs called Abundance-Digital. Abundance-Digital is my ‘onramp’ for exponential entrepreneurs – those who want to get involved and play at a higher level. Click here to learn more.

Image Credit: nmedia / Shutterstock.com

Posted in Human Robots

#433954 The Next Great Leap Forward? Combining ...

The Internet of Things is a popular vision of objects with internet connections sending information back and forth to make our lives easier and more comfortable. It’s emerging in our homes, through everything from voice-controlled speakers to smart temperature sensors. To improve our fitness, smart watches and Fitbits are telling online apps how much we’re moving around. And across entire cities, interconnected devices are doing everything from increasing the efficiency of transport to flood detection.

In parallel, robots are steadily moving outside the confines of factory lines. They’re starting to appear as guides in shopping malls and on cruise ships, for instance. As prices fall and artificial intelligence (AI) and mechanical technology continue to improve, we will get more and more used to robots making independent decisions in our homes, streets, and workplaces.

Here lies a major opportunity. Robots become considerably more capable with internet connections. There is a growing view that the next evolution of the Internet of Things will be to incorporate them into the network, opening up thrilling possibilities along the way.

Home Improvements
Even simple robots become useful when connected to the internet—getting updates about their environment from sensors, say, or learning about their users’ whereabouts and the status of appliances in the vicinity. This lets them lend their bodies, eyes, and ears to give an otherwise impersonal smart environment a user-friendly persona. This can be particularly helpful for people at home who are older or have disabilities.

We recently unveiled a futuristic apartment at Heriot-Watt University to work on such possibilities. One of a handful of such test sites around the EU, the apartment focuses on people with special needs, and on how robots can help them by interacting with connected devices in a smart home.

Suppose a doorbell rings that has smart video features. A robot could find the person in the home by accessing their location via sensors, then tell them who is at the door and why. Or it could help make video calls to family members or a professional carer—including allowing them to make virtual visits by acting as a telepresence platform.

Equally, it could offer protection. It could inform them the oven has been left on, for example—phones or tablets are less reliable for such tasks because they can be misplaced or not heard.
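A scenario like the oven alert amounts to the robot subscribing to appliance events. Here is a toy in-process stand-in for a real smart-home protocol such as MQTT; the topic names and payloads are illustrative:

```python
# Toy in-process event bus standing in for a real smart-home protocol
# (e.g. MQTT); topic names and payloads are illustrative only.
subscribers = {}

def subscribe(topic, handler):
    subscribers.setdefault(topic, []).append(handler)

def publish(topic, payload):
    for handler in subscribers.get(topic, []):
        handler(payload)

alerts = []
subscribe("kitchen/oven",
          lambda state: alerts.append("Oven is still on!") if state == "on" else None)

publish("kitchen/oven", "on")  # the robot now knows to warn its user
```

In a real deployment the robot would connect to a broker over the network, but the pattern is identical: appliances publish state changes, and the robot reacts by seeking out its user.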

Similarly, the robot could raise the alarm if its user appears to be in difficulty.

Of course, voice-assistant devices like Alexa or Google Home can offer some of the same services. But robots are far better at moving, sensing, and interacting with their environment. They can also engage their users by pointing at objects or acting more naturally, using gestures or facial expressions. These “social abilities” create bonds that are crucially important for making users more accepting of the support and making it more effective.

To help incentivize the various EU test sites, our apartment also hosts the likes of the European Robotic League Service Robot Competition—a sort of Champions League for robots geared to special needs in the home. This brought academics from around Europe to our laboratory for the first time in January this year. Their robots were tested in tasks like welcoming visitors to the home, turning the oven off, and fetching objects for their users; and a German team from Koblenz University won with a robot called Lisa.

Robots Offshore
There are comparable opportunities in the business world. Oil and gas companies are looking at the Internet of Things, for example; experimenting with wireless sensors to collect information such as temperature, pressure, and corrosion levels to detect and possibly predict faults in their offshore equipment.
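The fault-detection side of this can be conveyed with a simple trailing-statistics check. The window size and threshold below are illustrative, not any operator's actual method:

```python
import statistics

def flag_anomalies(readings, window=5, threshold=3.0):
    """Flag indices whose reading deviates from the trailing mean by more
    than `threshold` trailing standard deviations. A toy stand-in for the
    predictive-maintenance models the article alludes to."""
    flagged = []
    for i in range(window, len(readings)):
        recent = readings[i - window:i]
        mean = statistics.mean(recent)
        spread = statistics.pstdev(recent) or 1e-9  # guard against flat data
        if abs(readings[i] - mean) > threshold * spread:
            flagged.append(i)
    return flagged
```

A sudden pressure spike stands out against the recent baseline and can be flagged for a robot (or a person) to investigate before it becomes a failure.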

In the future, robots could be alerted to problem areas by sensors to go and check the integrity of pipes and wells, and to make sure they are operating as efficiently and safely as possible. Or they could place sensors in parts of offshore equipment that are hard to reach, or help to calibrate them or replace their batteries.

Projects like the ORCA Hub, a £36m effort led by the Edinburgh Centre for Robotics that brings together leading experts and over 30 industry partners, are developing such systems. The aim is to reduce the costs and the risks of humans working in remote hazardous locations.

ORCA tests a drone robot. Image Credit: ORCA
Working underwater is particularly challenging, since radio waves don’t travel well under the sea. Underwater autonomous vehicles and sensors usually communicate using acoustic waves, which are many times slower (about 1,500 meters per second vs. 300,000,000 meters per second for radio waves). Acoustic communication devices are also much more expensive than those used above the water.
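The latency gap follows directly from those propagation speeds; a quick calculation over a one-kilometer link makes it concrete:

```python
SOUND_IN_SEAWATER_M_S = 1_500   # approximate speed of acoustic waves in seawater
RADIO_M_S = 300_000_000         # approximate speed of radio waves

def one_way_latency(distance_m, speed_m_s):
    """Propagation delay for a signal crossing the given distance."""
    return distance_m / speed_m_s

acoustic = one_way_latency(1_000, SOUND_IN_SEAWATER_M_S)  # ~0.67 seconds
radio = one_way_latency(1_000, RADIO_M_S)                 # ~3.3 microseconds
```

Two-thirds of a second each way, before any bandwidth limits, is why coordinating underwater vehicles over acoustic links is so much harder than coordinating drones by radio.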

This academic project is developing a new generation of low-cost acoustic communication devices, and trying to make underwater sensor networks more efficient. It should help sensors and underwater autonomous vehicles to do more together in future—repair and maintenance work similar to what is already possible above the water, plus other benefits such as helping vehicles to communicate with one another over longer distances and tracking their location.

Beyond oil and gas, there is similar potential in sector after sector. There are equivalents in nuclear power, for instance, and in cleaning and maintaining the likes of bridges and buildings. My colleagues and I are also looking at possibilities in areas such as farming, manufacturing, logistics, and waste.

First, however, the research communities around the Internet of Things and robotics need to properly share their knowledge and expertise. They are often isolated from one another in different academic fields. More effort is needed to build a joint community, for example through dedicated workshops like those we organized at the European Robotics Forum and IoT Week in 2017.

To the same end, industry and universities need to look at setting up joint research projects. It is particularly important to address safety and security issues—hackers taking control of a robot and using it to spy or cause damage, for example. Such issues could make customers wary and ruin a market opportunity.

We also need systems that can work together, rather than in isolated applications. That way, new and more useful services can be quickly and effectively introduced with no disruption to existing ones. If we can solve such problems and unite robotics and the Internet of Things, it genuinely has the potential to change the world.

Mauro Dragone, Assistant Professor, Cognitive Robotics, Multiagent systems, Internet of Things, Heriot-Watt University

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credit: Willyam Bradberry/Shutterstock.com

Posted in Human Robots