#436186 Video Friday: Invasion of the Mini ...
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!):
DARPA SubT Urban Circuit – February 18-27, 2020 – Olympia, Wash., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.
There will be a Mini-Cheetah Workshop (sponsored by Naver Labs) a year from now at IROS 2020 in Las Vegas. Mini-Cheetahs for everyone!
That’s just a rendering, of course, but this isn’t:
[ MCW ]
I was like 95 percent sure that the Urban Circuit of the DARPA SubT Challenge was going to be in something very subway station-y. Oops!
In the Subterranean (SubT) Challenge, teams deploy autonomous ground and aerial systems to attempt to map, identify, and report artifacts along competition courses in underground environments. The artifacts represent items a first responder or service member may encounter in unknown underground sites. This video provides a preview of the Urban Circuit event location. The Urban Circuit is scheduled for February 18-27, 2020, at Satsop Business Park west of Olympia, Washington.
[ SubT ]
Researchers at SEAS and the Wyss Institute for Biologically Inspired Engineering have developed a resilient RoboBee powered by soft artificial muscles that can crash into walls, fall onto the floor, and collide with other RoboBees without being damaged. It is the first microrobot powered by soft actuators to achieve controlled flight.
To solve the problem of power density, the researchers built upon the electrically-driven soft actuators developed in the lab of David Clarke, the Extended Tarr Family Professor of Materials. These soft actuators are made using dielectric elastomers, soft materials with good insulating properties, that deform when an electric field is applied. By improving the electrode conductivity, the researchers were able to operate the actuator at 500 Hertz, on par with the rigid actuators used previously in similar robots.
Next, the researchers aim to increase the efficiency of the soft-powered robot, which still lags far behind more traditional flying robots.
[ Harvard ]
We present a system for fast and robust handovers with a robot character, together with a user study investigating the effect of robot speed and reaction time on perceived interaction quality. The system can match and exceed human speeds and confirms that users prefer human-level timing.
In a 3×3 user study, we vary the speed of the robot and add variable sensorimotor delays. We evaluate the social perception of the robot using the Robot Social Attribute Scale (RoSAS). Inclusion of a small delay, mimicking the delay of the human sensorimotor system, leads to an improvement in perceived qualities over both no delay and long delay conditions. Specifically, with no delay the robot is perceived as more discomforting and with a long delay, it is perceived as less warm.
[ Disney Research ]
When cars are autonomous, they’re not going to be able to pump themselves full of gas. Or, more likely, electrons. Kuka has the solution.
[ Kuka ]
This looks like fun, right?
[ Robocoaster ]
NASA is leading the way in the use of On-orbit Servicing, Assembly, and Manufacturing to enable large, persistent, upgradable, and maintainable spacecraft. This video was developed by the Advanced Concepts Lab (ACL) at NASA Langley Research Center.
[ NASA ]
The noisiest workshop by far at Humanoids last month was Musical Interactions With Humanoids, the end result of which was this:
[ Workshop ]
IROS is an IEEE event, and in furthering the IEEE mission to benefit humanity through technological innovation, IROS is doing a great job. But don’t take it from us – we are joined by IEEE President-Elect Professor Toshio Fukuda to find out a bit more about the impact events like IROS can have, as well as examine some of the issues around intelligent robotics and systems – from privacy to transparency of the systems at play.
[ IROS ]
Speaking of IROS, we hope you’ve been enjoying our coverage. We have already featured Harvard’s strange sea-urchin-inspired robot and a Japanese quadruped that can climb vertical ladders, with more stories to come over the next several weeks.
In the meantime, enjoy these 10 videos from the conference (as usual, we’re including the title, authors, and abstract for each—if you’d like more details about any of these projects, let us know and we’ll find out more for you).
“A Passive Closing, Tendon Driven, Adaptive Robot Hand for Ultra-Fast, Aerial Grasping and Perching,” by Andrew McLaren, Zak Fitzgerald, Geng Gao, and Minas Liarokapis from the University of Auckland, New Zealand.
Current grasping methods for aerial vehicles are slow, inaccurate and they cannot adapt to any target object. Thus, they do not allow for on-the-fly, ultra-fast grasping. In this paper, we present a passive closing, adaptive robot hand design that offers ultra-fast, aerial grasping for a wide range of everyday objects. We investigate alternative uses of structural compliance for the development of simple, adaptive robot grippers and hands and we propose an appropriate quick release mechanism that facilitates an instantaneous grasping execution. The quick release mechanism is triggered by a simple distance sensor. The proposed hand utilizes only two actuators to control multiple degrees of freedom over three fingers and it retains the superior grasping capabilities of adaptive grasping mechanisms, even under significant object pose or other environmental uncertainties. The hand achieves a grasping time of 96 ms, a maximum grasping force of 56 N and it is able to secure objects of various shapes at high speeds. The proposed hand can serve as the end-effector of grasping capable Unmanned Aerial Vehicle (UAV) platforms and it can offer perching capabilities, facilitating autonomous docking.
“Unstructured Terrain Navigation and Topographic Mapping With a Low-Cost Mobile Cuboid Robot,” by Andrew S. Morgan, Robert L. Baines, Hayley McClintock, and Brian Scassellati from Yale University, USA.
Current robotic terrain mapping techniques require expensive sensor suites to construct an environmental representation. In this work, we present a cube-shaped robot that can roll through unstructured terrain and construct a detailed topographic map of the surface that it traverses in real time with low computational and monetary expense. Our approach devolves many of the complexities of locomotion and mapping to passive mechanical features. Namely, rolling movement is achieved by sequentially inflating latex bladders that are located on four sides of the robot to destabilize and tip it. Sensing is achieved via arrays of fine plastic pins that passively conform to the geometry of underlying terrain, retracting into the cube. We developed a topography by shade algorithm to process images of the displaced pins to reconstruct terrain contours and elevation. We experimentally validated the efficacy of the proposed robot through object mapping and terrain locomotion tasks.
“Toward a Ballbot for Physically Leading People: A Human-Centered Approach,” by Zhongyu Li and Ralph Hollis from Carnegie Mellon University, USA.
This work presents a new human-centered method for indoor service robots to provide people with physical assistance and active guidance while traveling through congested and narrow spaces. As most previous work is robot-centered, this paper develops an end-to-end framework which includes a feedback path of the measured human positions. The framework combines a planning algorithm and a human-robot interaction module to guide the led person to a specified planned position. The approach is deployed on a person-size dynamically stable mobile robot, the CMU ballbot. Trials were conducted where the ballbot physically led a blindfolded person to safely navigate in a cluttered environment.
“Achievement of Online Agile Manipulation Task for Aerial Transformable Multilink Robot,” by Fan Shi, Moju Zhao, Tomoki Anzai, Keita Ito, Xiangyu Chen, Kei Okada, and Masayuki Inaba from the University of Tokyo, Japan.
Transformable aerial robots are favorable in aerial manipulation tasks for their flexible ability to change configuration during flight. By assuming the robot stays in mild motion, previous research sacrifices aerial agility to simplify the complex non-linear system into a single rigid body with a linear controller. In this paper, we present a framework for agile swing motion with a transformable multilink aerial robot. We introduce a computationally efficient non-linear model predictive controller and a joint motion primitive framework to achieve agile transforming motions, and validate them on a novel robot named HYRURS-X. Finally, we implement our framework in a table tennis task to validate its online, agile performance.
“Small-Scale Compliant Dual Arm With Tail for Winged Aerial Robots,” by Alejandro Suarez, Manuel Perez, Guillermo Heredia, and Anibal Ollero from the University of Seville, Spain.
Winged aerial robots represent an evolution of aerial manipulation robots, replacing the multirotor vehicles by fixed or flapping wing platforms. The development of this morphology is motivated in terms of efficiency, endurance and safety in some inspection operations where multirotor platforms may not be suitable. This paper presents a first prototype of a compliant dual arm as a preliminary step towards the realization of a winged aerial robot capable of perching and manipulating with the wings folded. The dual arm provides 6 DOF (degrees of freedom) for end effector positioning in a human-like kinematic configuration, with a reach of 25 cm (half-scale w.r.t. the human arm), and 0.2 kg weight. The prototype is built with micro metal gear motors, measuring the joint angles and the deflection with small potentiometers. The paper covers the design, electronics, modeling and control of the arms. Experimental results on a test bench validate the developed prototype and its functionalities, including joint position and torque control, bimanual grasping, the dynamic equilibrium with the tail, and the generation of 3D maps with laser sensors attached at the arms.
“A Novel Small-Scale Turtle-inspired Amphibious Spherical Robot,” by Huiming Xing, Shuxiang Guo, Liwei Shi, Xihuan Hou, Yu Liu, Huikang Liu, Yao Hu, Debin Xia, and Zan Li from Beijing Institute of Technology, China.
This paper describes a novel small-scale turtle-inspired Amphibious Spherical Robot (ASRobot) to accomplish exploration tasks in restricted environments, such as amphibious areas and narrow underwater caves. A Legged, Multi-Vectored Water-Jet Composite Propulsion Mechanism (LMVWCPM) is designed with four legs, one of which contains three connecting rod parts, one water-jet thruster and three joints driven by digital servos. Using this mechanism, the robot is able to walk like amphibious turtles on various terrains and swim flexibly in submarine environments. A simplified kinematic model is established to analyze crawling gaits. Simulation of the crawling gait informed the driving torques of the different joints, which guided the choice of servos and the sizing of the leg links. We also modeled the robot in water and proposed several underwater locomotion modes. To assess the performance of the proposed robot, a series of experiments were carried out in a lab pool and on flat ground using the prototype robot. Experimental results verified the effectiveness of LMVWCPM and the amphibious control approaches.
“Advanced Autonomy on a Low-Cost Educational Drone Platform,” by Luke Eller, Theo Guerin, Baichuan Huang, Garrett Warren, Sophie Yang, Josh Roy, and Stefanie Tellex from Brown University, USA.
PiDrone is a quadrotor platform created to accompany an introductory robotics course. Students build an autonomous flying robot from scratch and learn to program it through assignments and projects. Existing educational robots do not have significant autonomous capabilities, such as high-level planning and mapping. We present a hardware and software framework for an autonomous aerial robot, in which all software for autonomy can run onboard the drone, implemented in Python. We present an Unscented Kalman Filter (UKF) for accurate state estimation. Next, we present an implementation of Monte Carlo (MC) Localization and Fast-SLAM for Simultaneous Localization and Mapping (SLAM). The performance of UKF, localization, and SLAM is tested and compared to ground truth, provided by a motion-capture system. Our evaluation demonstrates that our autonomous educational framework runs quickly and accurately on a Raspberry Pi in Python, making it ideal for use in educational settings.
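The localization techniques named in that abstract (UKF, Monte Carlo Localization) are standard textbook methods. As a rough illustration of the Monte Carlo idea only—this is a hypothetical 1D particle filter sketch with a single assumed landmark, not code from the PiDrone project—the update loop looks roughly like this:

```python
import numpy as np

# Illustrative 1D Monte Carlo localization sketch (hypothetical, not the PiDrone code).
# Particles are candidate robot positions; each step moves them with the control input,
# re-weights them against a range measurement to a known landmark, and resamples.

def mc_localization_step(particles, control, measurement, landmark,
                         motion_noise=0.05, sensor_noise=0.1):
    # Motion update: apply the control input with added noise to every particle.
    particles = particles + control + np.random.normal(0.0, motion_noise, particles.shape)

    # Measurement update: weight particles by how well the predicted range to the
    # landmark matches the observed range.
    predicted = np.abs(landmark - particles)
    weights = np.exp(-0.5 * ((predicted - measurement) / sensor_noise) ** 2) + 1e-12
    weights /= weights.sum()

    # Resample particles in proportion to their weights.
    idx = np.random.choice(len(particles), size=len(particles), p=weights)
    return particles[idx]

# Usage: 500 particles spread over a 10 m corridor, one landmark assumed at 7 m.
particles = np.random.uniform(0.0, 10.0, 500)
particles = mc_localization_step(particles, control=0.5, measurement=4.0, landmark=7.0)
print("estimated position:", particles.mean())
```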
“FlightGoggles: Photorealistic Sensor Simulation for Perception-driven Robotics using Photogrammetry and Virtual Reality,” by Winter Guerra, Ezra Tal, Varun Murali, Gilhyun Ryou and Sertac Karaman from the Massachusetts Institute of Technology, USA.
FlightGoggles is a photorealistic sensor simulator for perception-driven robotic vehicles. The key contributions of FlightGoggles are twofold. First, FlightGoggles provides photorealistic exteroceptive sensor simulation using graphics assets generated with photogrammetry. Second, it provides the ability to combine (i) synthetic exteroceptive measurements generated in silico in real time and (ii) vehicle dynamics and proprioceptive measurements generated in motio by vehicle(s) in flight in a motion-capture facility. FlightGoggles is capable of simulating a virtual-reality environment around autonomous vehicle(s) in flight. While a vehicle is in flight in the FlightGoggles virtual reality environment, exteroceptive sensors are rendered synthetically in real time while all complex dynamics are generated organically through natural interactions of the vehicle. The FlightGoggles framework allows for researchers to accelerate development by circumventing the need to estimate complex and hard-to-model interactions such as aerodynamics, motor mechanics, battery electrochemistry, and behavior of other agents. The ability to perform vehicle-in-the-loop experiments with photorealistic exteroceptive sensor simulation facilitates novel research directions involving, e.g., fast and agile autonomous flight in obstacle-rich environments, safe human interaction, and flexible sensor selection. FlightGoggles has been utilized as the main test for selecting nine teams that will advance in the AlphaPilot autonomous drone racing challenge. We survey approaches and results from the top AlphaPilot teams, which may be of independent interest. FlightGoggles is distributed as open-source software along with the photorealistic graphics assets for several simulation environments, under the MIT license at http://flightgoggles.mit.edu.
“An Autonomous Quadrotor System for Robust High-Speed Flight Through Cluttered Environments Without GPS,” by Marc Rigter, Benjamin Morrell, Robert G. Reid, Gene B. Merewether, Theodore Tzanetos, Vinay Rajur, KC Wong, and Larry H. Matthies from University of Sydney, Australia; NASA Jet Propulsion Laboratory, California Institute of Technology, USA; and Georgia Institute of Technology, USA.
Robust autonomous flight without GPS is key to many emerging drone applications, such as delivery, search and rescue, and warehouse inspection. These and other applications require accurate trajectory tracking through cluttered static environments, where GPS can be unreliable, while high-speed, agile flight can increase efficiency. We describe the hardware and software of a quadrotor system that meets these requirements with onboard processing: a custom 300 mm wide quadrotor that uses two wide-field-of-view cameras for visual-inertial motion tracking and relocalization to a prior map. Collision-free trajectories are planned offline and tracked online with a custom tracking controller. This controller includes compensation for drag and variability in propeller performance, enabling accurate trajectory tracking, even at high speeds where aerodynamic effects are significant. We describe a system identification approach that identifies quadrotor-specific parameters via maximum likelihood estimation from flight data. Results from flight experiments are presented, which 1) validate the system identification method, 2) show that our controller with aerodynamic compensation reduces tracking error by more than 50% in both horizontal flights at up to 8.5 m/s and vertical flights at up to 3.1 m/s compared to the state-of-the-art, and 3) demonstrate our system tracking complex, aggressive trajectories.
“Morphing Structure for Changing Hydrodynamic Characteristics of a Soft Underwater Walking Robot,” by Michael Ishida, Dylan Drotman, Benjamin Shih, Mark Hermes, Mitul Luhar, and Michael T. Tolley from the University of California, San Diego (UCSD) and University of Southern California, USA.
Existing platforms for underwater exploration and inspection are often limited to traversing open water and must expend large amounts of energy to maintain a position in flow for long periods of time. Many benthic animals overcome these limitations using legged locomotion and have different hydrodynamic profiles dictated by different body morphologies. This work presents an underwater legged robot with soft legs and a soft inflatable morphing body that can change shape to influence its hydrodynamic characteristics. Flow over the morphing body separates behind the trailing edge of the inflated shape, so whether the protrusion is at the front, center, or back of the robot influences the amount of drag and lift. When the legged robot (2.87 N underwater weight) needs to remain stationary in flow, an asymmetrically inflated body resists sliding by reducing lift on the body by 40% (from 0.52 N to 0.31 N) at the highest flow rate tested while only increasing drag by 5.5% (from 1.75 N to 1.85 N). When the legged robot needs to walk with flow, a large inflated body is pushed along by the flow, causing the robot to walk 16% faster than it would with an uninflated body. The body shape significantly affects the ability of the robot to walk against flow as it is able to walk against 0.09 m/s flow with the uninflated body, but is pushed backwards with a large inflated body. We demonstrate that the robot can detect changes in flow velocity with a commercial force sensor and respond by morphing into a hydrodynamically preferable shape.
#435656 Will AI Be Fashion Forward—or a ...
The narrative that often accompanies most stories about artificial intelligence these days is how machines will disrupt any number of industries, from healthcare to transportation. It makes sense. After all, technology already drives many of the innovations in these sectors of the economy.
But sneakers and the red carpet? The decidedly low-tech fashion industry would seem to be one of the last to turn over its creative direction to data scientists and machine learning algorithms.
However, big brands, e-commerce giants, and numerous startups are betting that AI can ingest data and spit out Chanel. Maybe it’s not surprising, given that fashion is partly about buzz and trends—and there’s nothing more buzzy and trendy in the world of tech today than AI.
In its annual survey of the $3 trillion fashion industry, consulting firm McKinsey predicted that while AI didn’t hit a “critical mass” in 2018, it would increasingly influence the business of everything from design to manufacturing.
“Fashion as an industry really has been so slow to understand its potential roles interwoven with technology. And, to be perfectly honest, the technology doesn’t take fashion seriously.” This comment comes from Zowie Broach, head of fashion at London’s Royal College of Art, who as a self-described “old fashioned” designer has embraced the disruptive nature of technology—with some caveats.
Co-founder in the late 1990s of the avant-garde fashion label Boudicca, Broach has always seen tech as a tool for designers, even setting up a website for the company circa 1998, way before an online presence became, well, fashionable.
Broach told Singularity Hub that while she is generally optimistic about the future of technology in fashion—the designer has avidly been consuming old sci-fi novels over the last few years—there are still a lot of difficult questions to answer about the interface of algorithms, art, and apparel.
For instance, can AI do what the great designers of the past have done? Fashion was “about designing, it was about a narrative, it was about meaning, it was about expression,” according to Broach.
AI that designs products based on data gleaned from human behavior can potentially tap into the Pavlovian response in consumers in order to make money, Broach noted. But is that channeling creativity, or just digitally dabbling in basic human brain chemistry?
She is concerned about people retaining control of the process, whether we’re talking about their data or their designs. But being empowered with the insights machines could provide into, for example, the geographical nuances of fashion between Dubai, Moscow, and Toronto is thrilling.
“What is it that we want the future to be from a fashion, an identity, and design perspective?” she asked.
Off on the Right Foot
Silicon Valley and some of the biggest brands in the industry offer a few answers about where AI and fashion are headed (though not at the sort of depths that address Broach’s broader questions of aesthetics and ethics).
Take what is arguably the biggest brand in fashion, at least by market cap but probably not by the measure of appearances on Oscar night: Nike. The $100 billion shoe company just gobbled up an AI startup called Celect to bolster its data analytics and optimize its inventory. In other words, Nike hopes it will be able to figure out what’s hot and what’s not in a particular location to stock its stores more efficiently.
The company is going even further with Nike Fit, a foot-scanning platform using a smartphone camera that applies AI techniques from fields like computer vision and machine learning to find the best fit for each person’s foot. The algorithms then identify and recommend the appropriately sized and shaped shoe in different styles.
No doubt the next step will be to 3D print personalized and on-demand sneakers at any store.
San Francisco-based startup ThirdLove is trying to bring a similar approach to bra sizes. Its 20-member data team, Fortune reported, has developed the Fit Finder quiz that uses machine learning algorithms to help pick just the right garment for every body type.
Data scientists are also a big part of the team at Stitch Fix, a former San Francisco startup that went public in 2017 and today sports a market cap of more than $2 billion. The online “personal styling” company uses hundreds of algorithms to not only make recommendations to customers, but to help design new styles and even manage the subscription-based supply chain.
Future of Fashion
E-commerce giant Amazon has thrown its own considerable resources into developing AI applications for retail fashion—with mixed results.
One notable attempt involved a “styling assistant” that came with the company’s Echo Look camera and helped people catalog and manage their wardrobes, even helping pick out each day’s attire. The company more recently revisited the direct consumer side of AI with an app called StyleSnap, which matches clothes and accessories uploaded to the site with the retailer’s vast inventory and recommends similar styles.
Behind the curtains, Amazon is going even further. A team of researchers in Israel has developed algorithms that can deduce whether a particular look is stylish based on a few labeled images. Another group at the company’s San Francisco research center was working on tech that could generate new designs of items based on images of a particular style the algorithms were trained on.
“I will say that the accumulation of many new technologies across the industry could manifest in a highly specialized style assistant, far better than the examples we’ve seen today. However, the most likely thing is that the least sexy of the machine learning work will become the most impactful, and the public may never hear about it.”
That prediction is from an online interview with Leanne Luce, a fashion technology blogger and product manager at Google who recently wrote a book called, succinctly enough, Artificial Intelligence and Fashion.
Data Meets Design
Academics are also sticking their beakers into AI and fashion. Researchers at the University of California, San Diego, and Adobe Research have previously demonstrated that neural networks, a type of AI designed to mimic some aspects of the human brain, can be trained to generate (i.e., design) new product images to match a buyer’s preference, much like the team at Amazon.
Meanwhile, scientists at Hong Kong Polytechnic University are working with China’s answer to Amazon, Alibaba, on developing a FashionAI Dataset to help machines better understand fashion. The effort will focus on how algorithms approach certain building blocks of design, what are called “key points” such as neckline and waistline, and “fashion attributes” like collar types and skirt styles.
The man largely behind the university’s research team is Calvin Wong, a professor and associate head of Hong Kong Polytechnic University’s Institute of Textiles and Clothing. His group has also developed an “intelligent fabric defect detection system” called WiseEye for quality control, reducing the chance of producing substandard fabric by 90 percent.
Wong and company also recently inked an agreement with RCA to establish an AI-powered design laboratory, though the details of that venture have yet to be worked out, according to Broach.
One hope is that such collaborations will not just get at the technological challenges of using machines in creative endeavors like fashion, but will also address the more personal relationships humans have with their machines.
“I think who we are, and how we use AI in fashion, as our identity, is not a superficial skin. It’s very, very important for how we define our future,” Broach said.
Image Credit: Inspirationfeed / Unsplash
#435172 DARPA’s New Project Is Investing ...
When Elon Musk and DARPA both hop aboard the cyborg hypetrain, you know brain-machine interfaces (BMIs) are about to achieve the impossible.
BMIs, already the stuff of science fiction, facilitate crosstalk between biological wetware and external computers, turning human users into literal cyborgs. Yet mind-controlled robotic arms, microelectrode “nerve patches”, or “memory Band-Aids” are still purely experimental medical treatments for those with nervous system impairments.
With the Next-Generation Nonsurgical Neurotechnology (N3) program, DARPA is looking to expand BMIs to the military. This month, the project tapped six academic teams to engineer radically different BMIs to hook up machines to the brains of able-bodied soldiers. The goal is to ditch surgery altogether—while minimizing any biological interventions—to link up brain and machine.
Rather than microelectrodes, which are currently surgically inserted into the brain to hijack neural communication, the project is looking to acoustic signals, electromagnetic waves, nanotechnology, genetically-enhanced neurons, and infrared beams for their next-gen BMIs.
It’s a radical departure from current protocol, with potentially thrilling—or devastating—impact. Wireless BMIs could dramatically boost bodily functions of veterans with neural damage or post-traumatic stress disorder (PTSD), or allow a single soldier to control swarms of AI-enabled drones with his or her mind. Or, similar to the Black Mirror episode Men Against Fire, it could cloud the perception of soldiers, distancing them from the emotional guilt of warfare.
When trickled down to civilian use, these new technologies are poised to revolutionize medical treatment. Or they could galvanize the transhumanist movement with an inconceivably powerful tool that fundamentally alters society—for better or worse.
Here’s what you need to know.
Radical Upgrades
The four-year N3 program focuses on two main aspects: noninvasive and “minutely” invasive neural interfaces to both read and write into the brain.
Because noninvasive technologies sit on the scalp, their sensors and stimulators will likely measure entire networks of neurons, such as those controlling movement. These systems could then allow soldiers to remotely pilot robots in the field—drones, rescue bots, or carriers like Boston Dynamics’ BigDog. The system could even boost multitasking prowess—mind-controlling multiple weapons at once—similar to how able-bodied humans can operate a third robotic arm in addition to their own two.
In contrast, minutely invasive technologies allow scientists to deliver nanotransducers without surgery: for example, an injection of a virus carrying light-sensitive sensors, or other chemical, biotech, or self-assembled nanobots that can reach individual neurons and control their activity independently without damaging sensitive tissue. The proposed use for these technologies isn’t yet well-specified, but as animal experiments have shown, controlling the activity of single neurons at multiple points is sufficient to program artificial memories of fear, desire, and experiences directly into the brain.
“A neural interface that enables fast, effective, and intuitive hands-free interaction with military systems by able-bodied warfighters is the ultimate program goal,” DARPA wrote in its funding brief, released early last year.
The only technologies that will be considered must have a viable path toward eventual use in healthy human subjects.
“Final N3 deliverables will include a complete integrated bidirectional brain-machine interface system,” the project description states. This doesn’t just include hardware, but also new algorithms tailored to these systems, demonstrated in a “Department of Defense-relevant application.”
The Tools
Right off the bat, the usual tools of the BMI trade, including microelectrodes, MRI, or transcranial magnetic stimulation (TMS), are off the table. These popular technologies rely on surgery or heavy machinery, or require the person to sit very still—conditions unlikely in the real world.
The six teams will tap into three different kinds of natural phenomena for communication: magnetism, light beams, and acoustic waves.
Dr. Jacob Robinson at Rice University, for example, is combining genetic engineering, infrared laser beams, and nanomagnets for a bidirectional system. The $18 million project, MOANA (Magnetic, Optical and Acoustic Neural Access device), uses viruses to deliver two extra genes into the brain. One encodes a protein that sits on top of neurons and emits infrared light when the cell activates. Red and infrared light can penetrate through the skull. This lets a skull cap, embedded with light emitters and detectors, pick up these signals for subsequent decoding. Ultra-fast and ultra-sensitive photodetectors will further allow the cap to ignore scattered light and tease out relevant signals emanating from targeted portions of the brain, the team explained.
The other new gene helps write commands into the brain. This protein tethers iron nanoparticles to the neurons’ activation mechanism. Using magnetic coils on the headset, the team can then remotely stimulate magnetic super-neurons to fire while leaving others alone. Although the team plans to start in cell cultures and animals, their goal is to eventually transmit a visual image from one person to another. “In four years we hope to demonstrate direct, brain-to-brain communication at the speed of thought and without brain surgery,” said Robinson.
Other projects in N3 are just as ambitious.
The Carnegie Mellon team, for example, plans to use ultrasound waves to pinpoint light interaction in targeted brain regions, which can then be measured through a wearable “hat.” To write into the brain, they propose a flexible, wearable electrical mini-generator that counterbalances the noisy effect of the skull and scalp to target specific neural groups.
Similarly, a group at Johns Hopkins is also measuring light path changes in the brain to correlate them with regional brain activity to “read” wetware commands.
The Teledyne Scientific & Imaging group, in contrast, is turning to tiny light-powered “magnetometers” to detect small, localized magnetic fields that neurons generate when they fire, and match these signals to brain output.
The nonprofit Battelle team gets even fancier with its “BrainSTORMS” nanotransducers: magnetic nanoparticles wrapped in a piezoelectric shell. The shell can convert electrical signals from neurons into magnetic ones and vice versa. This allows external transceivers to wirelessly pick up the transformed signals and stimulate the brain through a bidirectional highway.
The nanotransducers can be delivered into the brain through a nasal spray or other non-invasive methods, and magnetically guided towards targeted brain regions. When no longer needed, they can once again be steered out of the brain and into the bloodstream, where the body can excrete them without harm.
Four-Year Miracle
Mind-blown? Yeah, same. However, the challenges facing the teams are enormous.
DARPA’s stated goal is to hook up at least 16 sites in the brain with the BMI, with a lag of less than 50 milliseconds—on the scale of average human visual perception. That’s crazy high resolution for devices sitting outside the brain, both in space and time. Brain tissue, blood vessels, and the scalp and skull are all barriers that scatter and dissipate neural signals. All six teams will need to figure out the least computationally-intensive ways to fish out relevant brain signals from background noise, and triangulate them to the appropriate brain region to decipher intent.
In the long run, four years and an average $20 million per project isn’t much to potentially transform our relationship with machines—for better or worse. DARPA, to its credit, is keenly aware of potential misuse of remote brain control. The program is under the guidance of a panel of external advisors with expertise in bioethical issues. And although DARPA’s focus is on enabling able-bodied soldiers to better tackle combat challenges, it’s hard to argue that wireless, non-invasive BMIs won’t also benefit those most in need: veterans and other people with debilitating nerve damage. To this end, the program is heavily engaging the FDA to ensure it meets safety and efficacy regulations for human use.
Will we be there in just four years? I’m skeptical. But these electrical, optical, acoustic, magnetic, and genetic BMIs, as crazy as they sound, seem inevitable.
“DARPA is preparing for a future in which a combination of unmanned systems, AI, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone,” said Al Emondi, the N3 program manager.
The question is, now that we know what’s in store, how should the rest of us prepare?
Image Credit: With permission from DARPA N3 project.
#434636 Using Advanced Technology to Increase ...
The 1SHIFT Logistics platform, developed by LiteLink Technologies, shows what can happen when the technology industry meets the logistics industry. 1SHIFT Logistics uses advanced technology such as artificial intelligence and geolocation to coordinate all three major parts of the logistics process: the shipper, the carrier and the delivery site. When most people think …
#434336 These Smart Seafaring Robots Have a ...
Drones. Self-driving cars. Flying robo taxis. If the headlines of the last few years are to be believed, terrestrial transportation will someday be filled with robotic conveyances and contraptions that will require little input from a human other than to download an app.
But what about the other 70 percent of the planet’s surface—the part that’s made up of water?
Sure, there are underwater drones that can capture 4K video for the next BBC documentary. Remotely operated vehicles (ROVs) are capable of diving down thousands of meters to investigate ocean vents or repair industrial infrastructure.
Yet most of the robots on or below the water today still lean heavily on the human element to operate. That’s not surprising given the unstructured environment of the seas and the poor communication capabilities for anything moving below the waves. Autonomous underwater vehicles (AUVs) are probably the closest thing today to smart cars in the ocean, but they generally follow pre-programmed instructions.
A new generation of seafaring robots—leveraging artificial intelligence, machine vision, and advanced sensors, among other technologies—are beginning to plunge into the ocean depths. Here are some of the latest and most exciting ones.
The Transformer of the Sea
Nic Radford, chief technology officer of Houston Mechatronics Inc. (HMI), is hesitant about throwing around the word “autonomy” when talking about his startup’s star creation, Aquanaut. He prefers the term “shared control.”
Whatever you want to call it, Aquanaut seems like something out of the script of a Transformers movie. The underwater robot begins each mission in a submarine-like shape, capable of autonomously traveling up to 200 kilometers on battery power, depending on the assignment.
When Aquanaut reaches its destination—oil and gas is the primary industry HMI hopes to disrupt to start—its four specially-designed and built linear actuators go to work. Aquanaut then unfolds into a robot with a head, upper torso, and two manipulator arms, all while maintaining proper buoyancy to get its job done.
The lightbulb moment of how to engineer this transformation from submarine to robot came one day while Aquanaut’s engineers were watching the office’s stand-up desks bob up and down. The answer to the engineering challenge of the hull suddenly seemed obvious.
“We’re just gonna build a big, gigantic, underwater stand-up desk,” Radford told Singularity Hub.
Hardware wasn’t the only problem the team, comprised of veteran NASA roboticists like Radford, had to solve. In order to ditch the expensive support vessels and large teams of humans required to operate traditional ROVs, Aquanaut would have to be able to sense its environment in great detail and relay that information back to headquarters using an underwater acoustics communications system that harkens back to the days of dial-up internet connections.
To tackle that problem of low bandwidth, HMI equipped Aquanaut with a machine vision system comprised of acoustic, optical, and laser-based sensors. All of that dense data is compressed using in-house designed technology and transmitted to a single human operator who controls Aquanaut with a few clicks of a mouse. In other words, no joystick required.
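To get a feel for why aggressive compression matters over an acoustic link, here is a toy sketch of one common bandwidth-reduction step (voxel downsampling of a dense point cloud before transmission). It is purely illustrative and is not HMI’s in-house technology; the voxel size and packet format are assumptions:

```python
import numpy as np

# Toy illustration (not HMI's pipeline): reduce a dense 3D point cloud to a coarse
# occupancy summary before sending it over a low-bandwidth acoustic link.

def compress_point_cloud(points, voxel_size=0.25):
    """Voxel-downsample an (N, 3) float32 point cloud into int16 voxel indices."""
    voxel_ids = np.floor(points / voxel_size).astype(np.int16)
    occupied = np.unique(voxel_ids, axis=0)  # one entry per occupied voxel
    return occupied                          # ~6 bytes per voxel vs. 12 bytes per raw point

points = np.random.uniform(-10, 10, size=(100_000, 3)).astype(np.float32)
packet = compress_point_cloud(points)
print(f"raw: {points.nbytes} bytes, compressed: {packet.nbytes} bytes "
      f"({packet.nbytes / points.nbytes:.1%} of original)")
```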
“I don’t know of anyone trying to do this level of autonomy as it relates to interacting with the environment,” Radford said.
HMI got $20 million earlier this year in Series B funding co-led by Transocean, one of the world’s largest offshore drilling contractors. That should be enough money to finish the Aquanaut prototype, which Radford said is about 99.8 percent complete. Some “high-profile” demonstrations are planned for early next year, with commercial deployments as early as 2020.
“What just gives us an incredible advantage here is that we have been born and bred on doing robotic systems for remote locations,” Radford noted. “This is my life, and I’ve bet the farm on it, and it takes this kind of fortitude and passion to see these things through, because these are not easy problems to solve.”
On Cruise Control
Meanwhile, a Boston-based startup is trying to solve the problem of making ships at sea autonomous. Sea Machines is backed by about $12.5 million in venture capital funding, with Toyota AI Ventures joining the list of investors in a $10 million Series A earlier this month.
Sea Machines is looking to the self-driving industry for inspiration, developing what it calls “vessel intelligence” systems that can be retrofitted on existing commercial vessels or installed on newly-built working ships.
For instance, the startup announced a deal earlier this year with Maersk, the world’s largest container shipping company, to deploy a system of artificial intelligence, computer vision, and LiDAR on the Danish company’s new ice-class container ship. The technology works similar to advanced driver-assistance systems found in automobiles to avoid hazards. The proof of concept will lay the foundation for a future autonomous collision avoidance system.
It’s not just startups making a splash in autonomous shipping. Radford noted that Rolls Royce—yes, that Rolls Royce—is leading the way in the development of autonomous ships. Its Intelligence Awareness system pulls in nearly every type of hyped technology on the market today: neural networks, augmented reality, virtual reality, and LiDAR.
In augmented reality mode, for example, a live feed video from the ship’s sensors can detect both static and moving objects, overlaying the scene with details about the types of vessels in the area, as well as their distance, heading, and other pertinent data.
While safety is a primary motivation for vessel automation—more than 1,100 ships have been lost over the past decade—these new technologies could make ships more efficient and less expensive to operate, according to a story in Wired about the Rolls Royce Intelligence Awareness system.
Sea Hunt Meets Science
As Singularity Hub noted in a previous article, ocean robots can also play a critical role in saving the seas from environmental threats. One poster child that has emerged—or, rather, invaded—is the spindly lionfish.
A venomous critter endemic to the Indo-Pacific region, the lionfish is now found up and down the east coast of North America and beyond. And it is voracious, eating up to 30 times its own stomach volume and reducing juvenile reef fish populations by nearly 90 percent in as little as five weeks, according to the Ocean Support Foundation.
That has made the colorful but deadly fish Public Enemy No. 1 for many marine conservationists. Both researchers and startups are developing autonomous robots to hunt down the invasive predator.
At the Worcester Polytechnic Institute, for example, students are building a spear-carrying robot that uses machine learning and computer vision to distinguish lionfish from other aquatic species. The students trained the algorithms on thousands of different images of lionfish. The result: a lionfish-killing machine that boasts an accuracy of greater than 95 percent.
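For readers curious what that kind of pipeline generally looks like, here is a hedged sketch of the standard transfer-learning recipe (fine-tuning a pretrained CNN on folders of labeled photos). It is illustrative only, not the WPI team’s actual code, and the data directory layout is an assumption:

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Hypothetical sketch of the general approach: fine-tune a pretrained CNN to separate
# "lionfish" from "other" reef species, with photos arranged in ImageFolder layout:
# data/train/lionfish/..., data/train/other/...

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: lionfish vs. other

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```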
Meanwhile, a small startup called the American Marine Research Corporation out of Pensacola, Florida, is applying similar technology to seek and destroy lionfish. Rather than spearfishing, the AMRC drone would stun and capture the lionfish, turning a profit by selling the creatures to local seafood restaurants.
Lionfish: It’s what’s for dinner.
Water Bots
A new wave of smart, independent robots is diving, swimming, and cruising across the ocean and its deepest depths. These autonomous systems aren’t necessarily designed to replace humans, but to venture where we can’t go or to improve safety at sea. And, perhaps, these latest innovations may inspire the robots that will someday plumb the depths of watery planets far from Earth.
Image Credit: Houston Mechatronics, Inc.