#434823 The Tangled Web of Turning Spider Silk ...

Spider-Man is one of the most popular superheroes of all time. It’s a bit surprising given that one of the more common phobias is arachnophobia—a debilitating fear of spiders.

Perhaps more fantastical is that young Peter Parker, a brainy high school science nerd, seemingly developed overnight the famous web-shooters and the synthetic spider silk that he uses to swing across the cityscape like Tarzan through the jungle.

That’s because scientists have been trying for decades to replicate spider silk, a material that is five times stronger than steel, among its many superpowers. In recent years, researchers have been untangling the protein-based fiber’s structure down to the molecular level, leading to new insights and new potential for eventual commercial uses.

The applications for such a material seem nearly endless. There are the more futuristic visions, like enabling robotic “muscles” for human-like movement or ensnaring real-life villains with a Spider-Man-like web. Near-term applications could include biomedical uses, such as bandages and adhesives, and replacement textiles for everything from rope to seat belts to parachutes.

Spinning Synthetic Spider Silk
Randy Lewis has been studying the properties of spider silk and developing methods for producing it synthetically for more than three decades. In the 1990s, his research team cloned the first spider silk gene and was the first to identify and sequence the proteins that make up the six different silks that web slingers make. Each has different mechanical properties.

“So our thought process was that you could take that information and begin to understand what made them strong and what makes them stretchy, and why some are very stretchy and some are not stretchy at all, and some are stronger and some are weaker,” explained Lewis, a biology professor at Utah State University and director of the Synthetic Spider Silk Lab, in an interview with Singularity Hub.

Spiders are naturally territorial and cannibalistic, so any attempt to farm silk naturally would likely end in an orgy of arachnid violence. Instead, Lewis and company have genetically modified different organisms to produce spider silk synthetically, including inserting a couple of web-making genes into the genetic code of goats. The goats’ milk contains spider silk proteins.

The lab also produces synthetic spider silk through a fermentation process not entirely dissimilar to brewing beer, but using genetically modified bacteria to make the desired spider silk proteins. A similar technique has been used for years to make a key enzyme in cheese production. More recently, companies are using transgenic bacteria to make meat and milk proteins, entirely bypassing animals in the process.

The same fermentation technology is used by a chic startup called Bolt Threads outside of San Francisco that has raised more than $200 million for fashionable fibers made out of synthetic spider silk it calls Microsilk. (The company is also developing a second leather-like material, Mylo, using the underground root structure of mushrooms known as mycelium.)

Lewis’ lab also uses transgenic silkworms to produce a kind of composite material made up of the domesticated insect’s own silk proteins and those of spider silk. “Those have some fairly impressive properties,” Lewis said.

The researchers are even experimenting with genetically modified alfalfa. One of the big advantages there is that once the spider silk protein has been extracted, the remaining protein could be sold as livestock feed. “That would bring the cost of spider silk protein production down significantly,” Lewis said.

Building a Better Web
Producing synthetic spider silk isn’t the problem, according to Lewis, but the ability to do it at scale commercially remains a sticking point.

Another challenge is “weaving” the synthetic spider silk into usable products that can take advantage of the material’s marvelous properties.

“It is possible to make silk proteins synthetically, but it is very hard to assemble the individual proteins into a fiber or other material forms,” said Markus Buehler, head of the Department of Civil and Environmental Engineering at MIT, in an email to Singularity Hub. “The spider has a complex spinning duct in which silk proteins are exposed to physical forces, chemical gradients, the combination of which generates the assembly of molecules that leads to silk fibers.”

Buehler recently co-authored a paper in the journal Science Advances that found dragline spider silk exhibits different properties in response to changes in humidity that could eventually have applications in robotics.

Specifically, spider silk suddenly contracts and twists above a certain level of relative humidity, exerting enough force to “potentially be competitive with other materials being explored as actuators—devices that move to perform some activity such as controlling a valve,” according to a press release.

Studying Spider Silk Up Close
Recent studies at the molecular level are helping scientists learn more about the unique properties of spider silk, which may help researchers develop materials with extraordinary capabilities.

For example, scientists at Arizona State University used magnetic resonance tools and other instruments to image the abdomen of a black widow spider. They produced what they called the first molecular-level model of spider silk protein fiber formation, providing insights on the nanoparticle structure. The research was published last October in Proceedings of the National Academy of Sciences.

A cross section of the abdomen of a black widow (Latrodectus hesperus) spider used in this study at Arizona State University. Image Credit: Samrat Amin.

Also in 2018, a study presented in Nature Communications described a sort of molecular clamp that binds the silk protein building blocks, which are called spidroins. The researchers observed for the first time that the clamp self-assembles in a two-step process, contributing to the extensibility, or stretchiness, of spider silk.

Another team put the spider silk of a brown recluse under an atomic force microscope, discovering that each strand, already 1,000 times thinner than a human hair, is made up of thousands of nanostrands. That helps explain its extraordinary tensile strength, though technique is also a factor, as the brown recluse uses a special looping method to reinforce its silk strands. The study also appeared last year in the journal ACS Macro Letters.

Making Spider Silk Stick
Buehler said his team is now trying to develop better and faster predictive methods to design silk proteins using artificial intelligence.

“These new methods allow us to generate new protein designs that do not naturally exist and which can be explored to optimize certain desirable properties like torsional actuation, strength, bioactivity—for example, tissue engineering—and others,” he said.

Meanwhile, Lewis’ lab has discovered a method that allows it to solubilize spider silk protein in what is essentially a water-based solution, eschewing acids or other toxic compounds that are normally used in the process.

That enables the researchers to develop materials beyond fiber, including adhesives that “are better than an awful lot of the current commercial adhesives,” Lewis said, as well as coatings that could be used to dampen vibrations, for example.

“We’re making gels for various kinds of tissue regeneration, as well as drug delivery, and things like that,” he added. “So we’ve expanded the use profile from something beyond fibers to something that is a much more extensive portfolio of possible kinds of materials.”

And, yes, there are even designs at the Synthetic Spider Silk Lab for developing a Spider-Man web-slinger material. The US Navy is interested in non-destructive ways of disabling an enemy vessel, such as fouling its propeller. The project also includes producing synthetic proteins from the hagfish, an eel-like critter that exudes a gelatinous slime when threatened.

Lewis said that while the potential for spider silk is certainly headline-grabbing, he cautioned that much of the hype is not focused on the unique mechanical properties that could lead to advances in healthcare and other industries.

“We want to see spider silk out there because it’s a unique material, not because it’s got marketing appeal,” he said.

Image Credit: mycteria / Shutterstock.com

#434818 Watch These Robots Do Tasks You Thought ...

Robots have been masters of manufacturing at speed and precision for decades, but give them a seemingly simple task like stacking shelves, and they quickly get stuck. That’s changing, though, as engineers build systems that can take on the deceptively tricky tasks most humans can do with their eyes closed.

Boston Dynamics is famous for dramatic reveals of robots performing mind-blowing feats that also leave you scratching your head as to what the market is—think the bipedal Atlas doing backflips or Spot the galloping robot dog.

Last week, the company released a video of a robot called Handle that looks like an ostrich on wheels carrying out the seemingly mundane task of stacking boxes in a warehouse.

It might seem like a step backward, but this is exactly the kind of practical task robots have long struggled with. While the speed and precision of industrial robots have seen them take over many functions in modern factories, they’re generally limited to highly prescribed tasks carried out in meticulously controlled environments.

That’s because despite their mechanical sophistication, most are still surprisingly dumb. They can carry out precision welding on a car or rapidly assemble electronics, but only by rigidly following a prescribed set of motions. Moving cardboard boxes around a warehouse might seem simple to a human, but it actually involves a variety of tasks machines still find pretty difficult—perceiving your surroundings, navigating, and interacting with objects in a dynamic environment.

But the release of this video suggests Boston Dynamics thinks these kinds of applications are close to prime time. Last week the company doubled down by announcing the acquisition of start-up Kinema Systems, which builds computer vision systems for robots working in warehouses.

It’s not the only company making strides in this area. On the same day the video went live, Google unveiled a robot arm called TossingBot that can pick random objects from a box and quickly toss them into another container beyond its reach, which could prove very useful for sorting items in a warehouse. The machine can train on new objects in just an hour or two, and can pick and toss up to 500 items an hour with better accuracy than any of the humans who tried the task.

And an apple-picking robot built by Abundant Robotics is currently on New Zealand farms navigating between rows of apple trees using LIDAR and computer vision to single out ripe apples before using a vacuum tube to suck them off the tree.

In most cases, advances in machine learning and computer vision brought about by the recent AI boom are the keys to these rapidly improving capabilities. Robots have historically had to be painstakingly programmed by humans to solve each new task, but deep learning is making it possible for them to quickly train themselves on a variety of perception, navigation, and dexterity tasks.

It’s not been simple, though, and the application of deep learning in robotics has lagged behind other areas. A major limitation is that the process typically requires huge amounts of training data. That’s fine when you’re dealing with image classification, but when that data needs to be generated by real-world robots it can make the approach impractical. Simulations offer the possibility to run this training faster than real time, but it’s proved difficult to translate policies learned in virtual environments into the real world.

Recent years have seen significant progress on these fronts, though, along with the increasing integration of modern machine learning into robotics. In October, OpenAI imbued a robotic hand with human-level dexterity by training an algorithm in a simulation using reinforcement learning before transferring it to the real-world device. The key to ensuring the translation went smoothly was injecting random noise into the simulation to mimic some of the unpredictability of the real world.
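
To make the “random noise” trick concrete, here is a minimal Python sketch of the idea, often called domain randomization: the simulator’s physical parameters are resampled for every training episode so the learned policy never overfits to one idealized world. The simulator and policy interfaces and the parameter ranges are hypothetical placeholders, not OpenAI’s actual code.

```python
import random

def randomized_sim_params():
    """Resample physical parameters for each episode (illustrative ranges only)."""
    return {
        "object_mass_kg": random.uniform(0.3, 0.7),
        "finger_friction": random.uniform(0.5, 1.5),
        "joint_damping": random.uniform(0.8, 1.2),
        "camera_offset_m": [random.gauss(0.0, 0.005) for _ in range(3)],
        "action_delay_steps": random.randint(0, 3),
    }

def train(policy, simulator, episodes=10_000):
    """Reinforcement-learning loop over many slightly different 'worlds'."""
    for _ in range(episodes):
        simulator.reset(**randomized_sim_params())  # a perturbed world each episode
        rollout = simulator.run(policy)             # collect experience
        policy.update(rollout)                      # e.g., a PPO-style policy update
```

A policy that succeeds across all of these perturbed simulations is far more likely to survive the messiness of real hardware.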

And just a couple of weeks ago, MIT researchers demonstrated a new technique that let a robot arm learn to manipulate new objects with far less training data than is usually required. By getting the algorithm to focus on a few key points on the object necessary for picking it up, the system could learn to pick up a previously unseen object after seeing only a few dozen examples (rather than the hundreds or thousands typically required).
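
The keypoint idea can be sketched just as simply: rather than learning a grasp from raw pixels, the system predicts a handful of semantic 3D points on the object and derives the grasp geometrically from them. The keypoint names and mug geometry below are invented for illustration; this is not the MIT team’s code.

```python
import numpy as np

def grasp_from_keypoints(keypoints_xyz: dict) -> dict:
    """Turn two predicted rim keypoints into a simple top-down grasp."""
    left = np.asarray(keypoints_xyz["rim_left"], dtype=float)
    right = np.asarray(keypoints_xyz["rim_right"], dtype=float)
    width = float(np.linalg.norm(right - left))
    return {
        "position": (left + right) / 2.0,                # grasp midpoint
        "closing_axis": (right - left) / max(width, 1e-6),  # gripper closing direction
        "opening_width": width,
    }

# A previously unseen mug, with keypoints (in meters) from a learned detector
print(grasp_from_keypoints({"rim_left": [0.10, 0.02, 0.15],
                            "rim_right": [0.18, 0.02, 0.15]}))
```

Because only the keypoints have to be learned, a few dozen labeled examples can be enough to generalize to new objects in the same category.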

How quickly these innovations will trickle down to practical applications remains to be seen, but a number of startups as well as logistics behemoth Amazon are developing robots designed to flexibly pick and place the wide variety of items found in your average warehouse.

Whether the economics of using robots to replace humans at these kinds of menial tasks makes sense yet is still unclear. The collapse of collaborative robotics pioneer Rethink Robotics last year suggests there are still plenty of challenges.

But at the same time, the number of robotic warehouses is expected to leap from 4,000 today to 50,000 by 2025. It may not be long until robots are muscling in on tasks we’ve long assumed only humans could do.

Image Credit: Visual Generation / Shutterstock.com

#434759 To Be Ethical, AI Must Become ...

As over-hyped as artificial intelligence is—everyone’s talking about it, few fully understand it, it might leave us all unemployed but also solve all the world’s problems—its list of accomplishments is growing. AI can now write realistic-sounding text, give a debating champ a run for his money, diagnose illnesses, and generate fake human faces—among much more.

After training these systems on massive datasets, their creators essentially just let them do their thing to arrive at certain conclusions or outcomes. The problem is that more often than not, even the creators don’t know exactly why they’ve arrived at those conclusions or outcomes. There’s no easy way to trace a machine learning system’s rationale, so to speak. The further we let AI go down this opaque path, the more likely we are to end up somewhere we don’t want to be—and may not be able to come back from.

In a panel at the South by Southwest interactive festival last week titled “Ethics and AI: How to plan for the unpredictable,” experts in the field shared their thoughts on building more transparent, explainable, and accountable AI systems.

Not New, but Different
Ryan Welsh, founder and director of explainable AI startup Kyndi, pointed out that having knowledge-based systems perform advanced tasks isn’t new; he cited logistical, scheduling, and tax software as examples. What’s new is the learning component, our inability to trace how that learning occurs, and the ethical implications that could result.

“Now we have these systems that are learning from data, and we’re trying to understand why they’re arriving at certain outcomes,” Welsh said. “We’ve never actually had this broad society discussion about ethics in those scenarios.”

Rather than continuing to build AIs with opaque inner workings, engineers must start focusing on explainability, which Welsh broke down into three subcategories. Transparency and interpretability come first, and refer to being able to find the units of high influence in a machine learning network, as well as the weights of those units and how they map to specific data and outputs.

Then there’s provenance: knowing where something comes from. In an ideal scenario, for example, OpenAI’s new text generator would be able to generate citations in its text that reference academic (and human-created) papers or studies.

Explainability itself is the highest and final bar and refers to a system’s ability to explain itself in natural language to the average user by being able to say, “I generated this output because x, y, z.”
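
For the first of those bars, transparency and interpretability, a toy example helps: in a simple linear model, the “units of high influence” and their weights can be read off directly, while deep networks need attribution techniques to answer the same question. The loan-approval features and data below are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["income", "debt_ratio", "years_employed", "prior_defaults"]
X = np.random.rand(200, 4)                    # stand-in training data
y = (X[:, 0] - X[:, 3] > 0.2).astype(int)     # stand-in approval labels

model = LogisticRegression().fit(X, y)

# Which inputs influenced the outcome, and by how much?
for name, weight in sorted(zip(features, model.coef_[0]), key=lambda p: -abs(p[1])):
    print(f"{name:>15}: weight {weight:+.2f}")
```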

“Humans are unique in our ability and our desire to ask why,” said Josh Marcuse, executive director of the Defense Innovation Board, which advises Department of Defense senior leaders on innovation. “The reason we want explanations from people is so we can understand their belief system and see if we agree with it and want to continue to work with them.”

Similarly, we need to have the ability to interrogate AIs.

Two Types of Thinking
Welsh explained that one big barrier standing in the way of explainability is the tension between the deep learning community and the symbolic AI community, which see themselves as two different paradigms and historically haven’t collaborated much.

Symbolic or classical AI focuses on concepts and rules, while deep learning is centered around perceptions. In human thought this is the difference between, for example, deciding to pass a soccer ball to a teammate who is open (you make the decision because conceptually you know that only open players can receive passes), and registering that the ball is at your feet when someone else passes it to you (you’re taking in information without making a decision about it).

“Symbolic AI has abstractions and representation based on logic that’s more humanly comprehensible,” Welsh said. To truly mimic human thinking, AI needs to be able to both perceive information and conceptualize it. An example of perception (deep learning) in an AI is recognizing numbers within an image, while conceptualization (symbolic learning) would give those numbers a hierarchical order and extract rules from the hierarchy (4 is greater than 3, and 5 is greater than 4, therefore 5 is also greater than 3).
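
That digits example fits in a few lines of code: a learned perception step maps pixels to numbers, and a symbolic step applies explicit, human-readable rules to them. The recognize_digit stub below stands in for any trained classifier.

```python
def recognize_digit(image) -> int:
    """Perception (deep learning): in practice a trained CNN maps pixels to a digit.
    Stubbed out here as a placeholder."""
    raise NotImplementedError

def greater_than(a: int, b: int) -> bool:
    """Conceptualization (symbolic): an explicit, inspectable rule."""
    return a > b

# The ordering knowledge lives in the rules, not the pixels:
# 5 > 4 and 4 > 3, therefore 5 > 3.
assert greater_than(5, 4) and greater_than(4, 3) and greater_than(5, 3)
```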

Explainability comes in when the system can say, “I saw a, b, and c, and based on that decided x, y, or z.” DeepMind and others have recently published papers emphasizing the need to fuse the two paradigms together.

Implications Across Industries
One of the most prominent fields where AI ethics will come into play, and where the transparency and accountability of AI systems will be crucial, is defense. Marcuse said, “We’re accountable beings, and we’re responsible for the choices we make. Bringing in tech or AI to a battlefield doesn’t strip away that meaning and accountability.”

In fact, he added, rather than worrying about how AI might degrade human values, people should be asking how the tech could be used to help us make better moral choices.

It’s also important not to conflate AI with autonomy—a worst-case scenario that springs to mind is an intelligent destructive machine on a rampage. But in fact, Marcuse said, in the defense space, “We have autonomous systems today that don’t rely on AI, and most of the AI systems we’re contemplating won’t be autonomous.”

The US Department of Defense released its 2018 artificial intelligence strategy last month. It includes developing a robust and transparent set of principles for defense AI, investing in research and development for AI that’s reliable and secure, continuing to fund research in explainability, advocating for a global set of military AI guidelines, and finding ways to use AI to reduce the risk of civilian casualties and other collateral damage.

Though these were designed with defense-specific aims in mind, Marcuse said, their implications extend across industries. “The defense community thinks of their problems as being unique, that no one deals with the stakes and complexity we deal with. That’s just wrong,” he said. Making high-stakes decisions with technology is widespread; safety-critical systems are key to aviation, medicine, and self-driving cars, to name a few.

Marcuse believes the Department of Defense can invest in AI safety in a way that has far-reaching benefits. “We all depend on technology to keep us alive and safe, and no one wants machines to harm us,” he said.

A Creation Superior to Its Creator
That said, we’ve come to expect technology to meet our needs in just the way we want, all the time—servers must never be down, GPS had better not take us on a longer route, Google must always produce the answer we’re looking for.

With AI, though, our expectations of perfection may be less reasonable.

“Right now we’re holding machines to superhuman standards,” Marcuse said. “We expect them to be perfect and infallible.” Take self-driving cars. They’re conceived, built, and programmed by people, and people as a whole generally aren’t great drivers—just look at traffic accident death rates to confirm that. But the few times self-driving cars have had fatal accidents, there’s been an ensuing uproar and backlash against the industry, as well as talk of implementing more restrictive regulations.

This can be extrapolated to ethics more generally. We as humans have the ability to explain our decisions, but many of us aren’t very good at doing so. As Marcuse put it, “People are emotional, they confabulate, they lie, they’re full of unconscious motivations. They don’t pass the explainability test.”

Why, then, should explainability be the standard for AI?

Even if humans aren’t good at explaining our choices, at least we can try, and we can answer questions that probe at our decision-making process. A deep learning system can’t do this yet, so working towards being able to identify which input data the systems are triggering on to make decisions—even if the decisions and the process aren’t perfect—is the direction we need to head.

Image Credit: a-image / Shutterstock.com

#434753 Top Takeaways From The Economist ...

Over the past few years, the word ‘innovation’ has degenerated into something of a buzzword. In fact, according to Vijay Vaitheeswaran, US business editor at The Economist, it’s one of the most abused words in the English language.

The word is overused precisely because we’re living in a great age of invention. But the pace at which those inventions are changing our lives is fast, unfamiliar, and, for many, scary.

So what strategies do companies need to adopt to make sure technology leads to growth that’s not only profitable, but positive? How can business and government best collaborate? Can policymakers regulate the market without suppressing innovation? Which technologies will impact us most, and how soon?

At The Economist Innovation Summit in Chicago last week, entrepreneurs, thought leaders, policymakers, and academics shared their insights on the current state of exponential technologies, and the steps companies and individuals should be taking to ensure a tech-positive future. Here’s their expert take on the tech and trends shaping the future.

Blockchain
There’s been a lot of hype around blockchain; apparently it can be used for everything from distributing aid to refugees to voting. However, it’s too often conflated with cryptocurrencies like Bitcoin, and we haven’t heard of many use cases. Where does the technology currently stand?

Julie Sweet, chief executive of Accenture North America, emphasized that the technology is still in its infancy. “Everything we see today are pilots,” she said. The most promising of these pilots are taking place across three different areas: supply chain, identity, and financial services.

When you buy something from outside the US, Sweet explained, it goes through about 80 different parties. 70 percent of the relevant data is replicated and is prone to error, with paper-based documents often to blame. Blockchain is providing a secure way to eliminate paper in supply chains, upping accuracy and cutting costs in the process.
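
A rough sketch shows why a shared ledger helps with the paper problem: if every handoff is recorded as a block whose hash commits to the previous block, a record that is later altered quietly breaks the chain for every party to see. This is a generic hash-chain illustration, not the actual system Accenture or Walmart uses.

```python
import hashlib, json, time

def add_record(chain, payload):
    """Append a shipment event; its hash covers the previous block."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    block = {"payload": payload, "prev_hash": prev_hash, "timestamp": time.time()}
    block["hash"] = hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()
    chain.append(block)
    return chain

chain = []
add_record(chain, {"party": "grower",  "event": "lettuce harvested", "lot": "A17"})
add_record(chain, {"party": "shipper", "event": "container loaded",  "lot": "A17"})
# Editing the first record later changes its hash and visibly breaks the link.
```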

One of the most prominent use cases in the US is Walmart—the company has mandated that all suppliers in its leafy greens segment be on a blockchain, and its food safety has improved as a result.

Beth Devin, head of Citi Ventures’ innovation network, added, “Blockchain is an infrastructure technology. It can be leveraged in a lot of ways. There’s so much opportunity to create new types of assets and securities that aren’t accessible to people today. But there’s a lot to figure out around governance.”

Open Source Technology
Are the days of proprietary technology numbered? More and more companies and individuals are making their source code publicly available, and its benefits are thus more widespread than ever before. But what are the limitations and challenges of open source tech, and where might it go in the near future?

Bob Lord, senior VP of cognitive applications at IBM, is a believer. “Open-sourcing technology helps innovation occur, and it’s a fundamental basis for creating great technology solutions for the world,” he said. However, the biggest challenge for open source right now is that companies are taking out more than they’re contributing back to the open-source world. Lord pointed out that IBM has a rule about how many lines of code employees take out relative to how many lines they put in.

Another challenge area is open governance; blockchain by its very nature should be transparent and decentralized, with multiple parties making decisions and being held accountable. “We have to embrace open governance at the same time that we’re contributing,” Lord said. He advocated for a hybrid-cloud environment where people can access public and private data and bring it together.

Augmented and Virtual Reality
Augmented and virtual reality aren’t just for fun and games anymore, and they’ll be even less so in the near future. According to Pearly Chen, vice president at HTC, they’ll also go from being two different things to being one and the same. “AR overlays digital information on top of the real world, and VR transports you to a different world,” she said. “In the near future we will not need to delineate between these two activities; AR and VR will come together naturally, and will change everything we do as we know it today.”

For that to happen, we’ll need a more ergonomically friendly device than we have today for interacting with this technology. “Whenever we use tech today, we’re multitasking,” said product designer and futurist Jody Medich. “When you’re using GPS, you’re trying to navigate in the real world and also manage this screen. Constant task-switching is killing our brain’s ability to think.” Augmented and virtual reality, she believes, will allow us to adapt technology to match our brain’s functionality.

This all sounds like a lot of fun for uses like gaming and entertainment, but what about practical applications? “Ultimately what we care about is how this technology will improve lives,” Chen said.

A few ways that could happen? Extended reality will be used to simulate hazardous real-life scenarios, reduce the time and resources needed to bring a product to market, train healthcare professionals (such as surgeons), or provide therapies for patients—not to mention education. “Think about the possibilities for children to learn about history, science, or math in ways they can’t today,” Chen said.

Quantum Computing
If there’s one technology that’s truly baffling, it’s quantum computing. Qubits, entanglement, quantum states—it’s hard to wrap our heads around these concepts, but they hold great promise. Where is the tech right now?

Mandy Birch, head of engineering strategy at Rigetti Computing, thinks quantum development is starting slowly but will accelerate quickly. “We’re at the innovation stage right now, trying to match this capability to useful applications,” she said. “Can we solve problems cheaper, better, and faster than classical computers can do?” She believes quantum’s first breakthrough will happen in two to five years, and that its highest potential is in applications like routing, supply chain, and risk optimization, followed by quantum chemistry (for materials science and medicine) and machine learning.

David Awschalom, director of the Chicago Quantum Exchange and senior scientist at Argonne National Laboratory, believes quantum communication and quantum sensing will become a reality in three to seven years. “We’ll use states of matter to encrypt information in ways that are completely secure,” he said. A quantum voting system, currently being prototyped, is one application.

Who should be driving quantum tech development? The panelists emphasized that no one entity will get very far alone. “Advancing quantum tech will require collaboration not only between business, academia, and government, but between nations,” said Linda Sapochak, division director of materials research at the National Science Foundation. She added that this doesn’t just go for the technology itself—setting up the infrastructure for quantum will be a big challenge as well.

Space
Space has always been the final frontier, and it still is—but it’s not quite as far-removed from our daily lives now as it was when Neil Armstrong walked on the moon in 1969.

The space industry has always been funded by governments and private defense contractors. But in 2009, SpaceX launched its first commercial satellite, and in subsequent years the company has drastically cut the cost of spaceflight. More importantly, it published its pricing, which brought transparency to a market that hadn’t seen it before.

Entrepreneurs around the world started putting together business plans, and there are now over 400 privately-funded space companies, many with consumer applications.

Chad Anderson, CEO of Space Angels and managing partner of Space Capital, pointed out that the technology floating around in space was, until recently, archaic. “A few NASA engineers saw they had more computing power in their phone than there was in satellites,” he said. “So they thought, ‘why don’t we just fly an iPhone?’” They did—and it worked.

Now companies have networks of satellites monitoring the whole planet, producing a huge amount of data that’s valuable for countless applications like agriculture, shipping, and observation. “A lot of people underestimate space,” Anderson said. “It’s already enabling our modern global marketplace.”

Next up in the space realm, he predicts, are mining and tourism.

Artificial Intelligence and the Future of Work
From the US to Europe to Asia, alarms are sounding about AI taking our jobs. What will be left for humans to do once machines can do everything—and do it better?

These fears may be unfounded, though, and are certainly exaggerated. It’s undeniable that AI and automation are changing the employment landscape (not to mention the way companies do business and the way we live our lives), but if we build these tools the right way, they’ll bring more good than harm, and more productivity than obsolescence.

Accenture’s Julie Sweet emphasized that AI alone is not what’s disrupting business and employment. Rather, it’s what she called the “triple A”: automation, analytics, and artificial intelligence. But even this fear-inducing trifecta of terms doesn’t spell doom, for workers or for companies. Accenture has automated 40,000 jobs—and hasn’t fired anyone in the process. Instead, they’ve trained and up-skilled people. The most important drivers to scale this, Sweet said, are a commitment by companies and government support (such as tax credits).

Imbuing AI with the best of human values will also be critical to its impact on our future. Tracy Frey, Google Cloud AI’s director of strategy, cited the company’s set of seven AI principles. “What’s important is the governance process that’s put in place to support those principles,” she said. “You can’t make macro decisions when you have technology that can be applied in many different ways.”

High Risks, High Stakes
This year, Vaitheeswaran said, 50 percent of the world’s population will have internet access (he added that he’s disappointed that percentage isn’t higher given the proliferation of smartphones). As technology becomes more widely available to people around the world and its influence grows even more, what are the biggest risks we should be monitoring and controlling?

Information integrity—being able to tell what’s real from what’s fake—is a crucial one. “We’re increasingly operating in siloed realities,” said Renee DiResta, director of research at New Knowledge and head of policy at Data for Democracy. “Inadvertent algorithmic amplification on social media elevates certain perspectives—what does that do to us as a society?”

Algorithms have also already been shown to perpetuate the biases of the people who create them—and those people are often wealthy, white, and male. Ensuring that technology doesn’t propagate unfair bias will be crucial to its ability to serve a diverse population, and to keep societies from becoming further polarized and inequitable. The polarization of experience that results from pronounced inequalities within countries, Vaitheeswaran pointed out, can end up undermining democracy.

We’ll also need to walk the line between privacy and utility very carefully. As Dan Wagner, founder of Civis Analytics, put it, “We want to ensure privacy as much as possible, but open access to information helps us achieve important social good.” Medicine in the US has been hampered by privacy laws; if, for example, we had more data about biomarkers around cancer, we could provide more accurate predictions and ultimately better healthcare.

But going the Chinese way—a total lack of privacy—is likely not the answer, either. “We have to be very careful about the way we bake rights and freedom into our technology,” said Alex Gladstein, chief strategy officer at Human Rights Foundation.

Technology’s risks are clearly as fraught as its potential is promising. As Gary Shapiro, chief executive of the Consumer Technology Association, put it, “Everything we’ve talked about today is simply a tool, and can be used for good or bad.”

The decisions we’re making now, at every level—from the engineers writing algorithms, to the legislators writing laws, to the teenagers writing clever Instagram captions—will determine where on the spectrum we end up.

Image Credit: Rudy Balasko / Shutterstock.com

#434336 These Smart Seafaring Robots Have a ...

Drones. Self-driving cars. Flying robo taxis. If the headlines of the last few years are to be believed, terrestrial transportation will someday be filled with robotic conveyances and contraptions that require little more input from a human than downloading an app.

But what about the other 70 percent of the planet’s surface—the part that’s made up of water?

Sure, there are underwater drones that can capture 4K video for the next BBC documentary. Remotely operated vehicles (ROVs) are capable of diving down thousands of meters to investigate ocean vents or repair industrial infrastructure.

Yet most of the robots on or below the water today still lean heavily on the human element to operate. That’s not surprising given the unstructured environment of the seas and the poor communication capabilities for anything moving below the waves. Autonomous underwater vehicles (AUVs) are probably the closest thing today to smart cars in the ocean, but they generally follow pre-programmed instructions.

A new generation of seafaring robots—leveraging artificial intelligence, machine vision, and advanced sensors, among other technologies—are beginning to plunge into the ocean depths. Here are some of the latest and most exciting ones.

The Transformer of the Sea
Nic Radford, chief technology officer of Houston Mechatronics Inc. (HMI), is hesitant about throwing around the word “autonomy” when talking about his startup’s star creation, Aquanaut. He prefers the term “shared control.”

Whatever you want to call it, Aquanaut seems like something out of the script of a Transformers movie. The underwater robot begins each mission in a submarine-like shape, capable of autonomously traveling up to 200 kilometers on battery power, depending on the assignment.

When Aquanaut reaches its destination—oil and gas is the primary industry HMI hopes to disrupt to start—its four specially-designed and built linear actuators go to work. Aquanaut then unfolds into a robot with a head, upper torso, and two manipulator arms, all while maintaining proper buoyancy to get its job done.

The lightbulb moment of how to engineer this transformation from submarine to robot came one day while Aquanaut’s engineers were watching the office’s stand-up desks bob up and down. The answer to the engineering challenge of the hull suddenly seemed obvious.

“We’re just gonna build a big, gigantic, underwater stand-up desk,” Radford told Singularity Hub.

Hardware wasn’t the only problem the team, composed of veteran NASA roboticists like Radford, had to solve. In order to ditch the expensive support vessels and large teams of humans required to operate traditional ROVs, Aquanaut would have to be able to sense its environment in great detail and relay that information back to headquarters using an underwater acoustic communications system that harkens back to the days of dial-up internet connections.

To tackle that problem of low bandwidth, HMI equipped Aquanaut with a machine vision system made up of acoustic, optical, and laser-based sensors. All of that dense data is compressed using in-house designed technology and transmitted to a single human operator who controls Aquanaut with a few clicks of a mouse. In other words, no joystick required.
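
To get a feel for the bandwidth problem: underwater acoustic modems typically move data on the order of kilobits per second, so dense sensor output has to be compressed and chunked into small packets before it goes topside. The sketch below illustrates that general pipeline; the packet size and the use of zlib are assumptions for illustration, not HMI’s in-house compression.

```python
import zlib

ACOUSTIC_PAYLOAD_BYTES = 256  # assumed per-packet payload for a slow acoustic modem

def prepare_for_uplink(sensor_frame: bytes) -> list:
    """Compress one sensor frame and split it into modem-sized packets."""
    compressed = zlib.compress(sensor_frame, 9)
    return [compressed[i:i + ACOUSTIC_PAYLOAD_BYTES]
            for i in range(0, len(compressed), ACOUSTIC_PAYLOAD_BYTES)]

# A highly compressible stand-in for one frame of sonar/telemetry data
frame = bytes(1_000_000)
packets = prepare_for_uplink(frame)
print(f"{len(frame)} bytes -> {len(packets)} acoustic packets")
```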

“I don’t know of anyone trying to do this level of autonomy as it relates to interacting with the environment,” Radford said.

HMI got $20 million earlier this year in Series B funding co-led by Transocean, one of the world’s largest offshore drilling contractors. That should be enough money to finish the Aquanaut prototype, which Radford said is about 99.8 percent complete. Some “high-profile” demonstrations are planned for early next year, with commercial deployments as early as 2020.

“What just gives us an incredible advantage here is that we have been born and bred on doing robotic systems for remote locations,” Radford noted. “This is my life, and I’ve bet the farm on it, and it takes this kind of fortitude and passion to see these things through, because these are not easy problems to solve.”

On Cruise Control
Meanwhile, a Boston-based startup is trying to solve the problem of making ships at sea autonomous. Sea Machines is backed by about $12.5 million in venture capital funding, with Toyota AI joining the list of investors in a $10 million Series A earlier this month.

Sea Machines is looking to the self-driving industry for inspiration, developing what it calls “vessel intelligence” systems that can be retrofitted on existing commercial vessels or installed on newly-built working ships.

For instance, the startup announced a deal earlier this year with Maersk, the world’s largest container shipping company, to deploy a system of artificial intelligence, computer vision, and LiDAR on the Danish company’s new ice-class container ship. The technology works similar to advanced driver-assistance systems found in automobiles to avoid hazards. The proof of concept will lay the foundation for a future autonomous collision avoidance system.

It’s not just startups making a splash in autonomous shipping. Radford noted that Rolls-Royce—yes, that Rolls-Royce—is leading the way in the development of autonomous ships. Its Intelligence Awareness system pulls in nearly every type of hyped technology on the market today: neural networks, augmented reality, virtual reality, and LiDAR.

In augmented reality mode, for example, a live feed video from the ship’s sensors can detect both static and moving objects, overlaying the scene with details about the types of vessels in the area, as well as their distance, heading, and other pertinent data.
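
The range and bearing figures in such an overlay are straightforward to compute once a target’s position is known: given the ship’s own coordinates and a detected vessel’s coordinates, standard great-circle formulas do the rest. A minimal sketch (the coordinates are made up, and this is generic navigation math, not Rolls-Royce’s software):

```python
import math

def distance_and_bearing(lat1, lon1, lat2, lon2, radius_m=6_371_000):
    """Great-circle (haversine) distance and initial bearing from point 1 to point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlat, dlon = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dlat / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlon / 2) ** 2
    distance = 2 * radius_m * math.asin(math.sqrt(a))
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360
    return distance, bearing

# Own ship versus a detected vessel (made-up coordinates)
print(distance_and_bearing(55.70, 12.60, 55.72, 12.65))
```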

While safety is a primary motivation for vessel automation—more than 1,100 ships have been lost over the past decade—these new technologies could make ships more efficient and less expensive to operate, according to a story in Wired about the Rolls-Royce Intelligence Awareness system.

Sea Hunt Meets Science
As Singularity Hub noted in a previous article, ocean robots can also play a critical role in saving the seas from environmental threats. One poster child that has emerged—or rather, invaded—is the spindly lionfish.

A venomous critter endemic to the Indo-Pacific region, the lionfish is now found up and down the east coast of North America and beyond. And it is voracious, eating up to 30 times its own stomach volume and reducing juvenile reef fish populations by nearly 90 percent in as little as five weeks, according to the Ocean Support Foundation.

That has made the colorful but deadly fish Public Enemy No. 1 for many marine conservationists. Both researchers and startups are developing autonomous robots to hunt down the invasive predator.

At the Worcester Polytechnic Institute, for example, students are building a spear-carrying robot that uses machine learning and computer vision to distinguish lionfish from other aquatic species. The students trained the algorithms on thousands of different images of lionfish. The result: a lionfish-killing machine that boasts an accuracy of greater than 95 percent.
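
A minimal transfer-learning sketch of that kind of classifier: fine-tune a pretrained vision model on two classes, lionfish versus everything else. The folder layout, hyperparameters, and choice of ResNet are assumptions for illustration (and assume a recent torchvision), not details of the WPI students’ system.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Assumed layout: data/lionfish/... and data/other_fish/... (hypothetical paths)
tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_set = datasets.ImageFolder("data", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # lionfish vs. everything else

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```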

Meanwhile, a small startup called the American Marine Research Corporation, out of Pensacola, Florida, is applying similar technology to seek and destroy lionfish. Rather than spearfishing, the AMRC drone would stun and capture the lionfish, turning a profit by selling the creatures to local seafood restaurants.

Lionfish: It’s what’s for dinner.

Water Bots
A new wave of smart, independent robots is diving, swimming, and cruising across the ocean and its deepest depths. These autonomous systems aren’t necessarily designed to replace humans, but to venture where we can’t go or to improve safety at sea. And, perhaps, these latest innovations may inspire the robots that will someday plumb the depths of watery planets far from Earth.

Image Credit: Houston Mechatronics, Inc.
