Tag Archives: special

#435687 Humanoid Robots Teach Coping Skills to ...

Photo: Rob Felt

IEEE Senior Member Ayanna Howard with one of the interactive androids that help children with autism improve their social and emotional engagement.

THE INSTITUTE: Children with autism spectrum disorder can have a difficult time expressing their emotions and can be highly sensitive to sound, sight, and touch. That sometimes restricts their participation in everyday activities, leaving them socially isolated. Occupational therapists can help them cope better, but the time they’re able to spend is limited and the sessions tend to be expensive.

Roboticist Ayanna Howard, an IEEE senior member, has been using interactive androids to guide children with autism on ways to socially and emotionally engage with others—as a supplement to therapy. Howard is chair of the School of Interactive Computing and director of the Human-Automation Systems Lab at Georgia Tech. She helped found Zyrobotics, a Georgia Tech VentureLab startup that is working on AI and robotics technologies to engage children with special needs. Last year Forbes named Howard, Zyrobotics’ chief technology officer, one of the Top 50 U.S. Women in Tech.

In a recent study, Howard and other researchers explored how robots might help children navigate sensory experiences. The experiment involved 18 participants between the ages of 4 and 12; five had autism, and the rest were meeting typical developmental milestones. Two humanoid robots were programmed to express boredom, excitement, nervousness, and 17 other emotional states. As children explored stations set up for hearing, seeing, smelling, tasting, and touching, the robots modeled what the socially acceptable responses should be.

“If a child’s expression is one of happiness or joy, the robot will have a corresponding response of encouragement,” Howard says. “If there are aspects of frustration or sadness, the robot will provide input to try again.” The study suggested that many children with autism exhibit stronger levels of engagement when the robots interact with them at such sensory stations.

It is one of many robotics projects Howard has tackled. She has designed robots for researching glaciers, and she is working on assistive robots for the home, as well as an exoskeleton that can help children who have motor disabilities.

Howard spoke about her work during the Ethics in AI: Impacts of (Anti?) Social Robotics panel session held in May at the IEEE Vision, Innovation, and Challenges Summit in San Diego. You can watch the session on IEEE.tv.

The next IEEE Vision, Innovation, and Challenges Summit and Honors Ceremony will be held on 15 May 2020 at the JW Marriott Parq Vancouver hotel, in Vancouver.

In this interview with The Institute, Howard talks about how she got involved with assistive technologies, the need for a more diverse workforce, and ways IEEE has benefited her career.

FOCUS ON ACCESSIBILITY
Howard was inspired to work on technology that can improve accessibility in 2008 while teaching high school students at a summer camp devoted to science, technology, engineering, and math.

“A young lady with a visual impairment attended camp. The robot programming tools being used at the camp weren’t accessible to her,” Howard says. “As an engineer, I want to fix problems when I see them, so we ended up designing tools to enable access to programming tools that could be used in STEM education.

“That was my starting motivation, and this theme of accessibility has expanded to become a main focus of my research. One of the things about this world of accessibility is that when you start interacting with kids and parents, you discover another world out there of assistive technologies and how robotics can be used for good in education as well as therapy.”

DIVERSITY OF THOUGHT
The Institute asked Howard why it’s important to have a more diverse STEM workforce and what could be done to increase the number of women and others from underrepresented groups.

“The makeup of the current engineering workforce isn’t necessarily representative of the world, which is composed of different races, cultures, ages, disabilities, and socio-economic backgrounds,” Howard says. “We’re creating products used by people around the globe, so we have to ensure they’re being designed for a diverse population. As IEEE members, we also need to engage with people who aren’t engineers, and we don’t do that enough.”

Educational institutions are doing a better job of increasing diversity in areas such as gender, she says, adding that more work is needed because the enrollment numbers still aren’t representative of the population and the gains don’t necessarily carry through after graduation.

“There has been an increase in the number of underrepresented minorities and females going into engineering and computer science,” she says, “but data has shown that their numbers are not sustained in the workforce.”

ROLE MODEL
Because underrepresented groups on today’s college campuses are now numerous enough to form their own communities, the lack of engineering role models—although still a concern on campuses—is more acute for preuniversity students, Howard says.

“Depending on where you go to school, you may not know what an engineer does or even consider engineering as an option,” she says, “so there’s still a big disconnect there.”

Howard has been involved for many years in math- and science-mentoring programs for at-risk high school girls. She tells them to find what they’re passionate about and combine it with math and science to create something. She also advises them not to let anyone tell them that they can’t.

Howard’s father is an engineer. She says he neither encouraged nor discouraged her from becoming one, but when she broke something, he would show her how to fix it and talk her through the process. Along the way, he taught her the logical way of thinking she says all engineers have.

“When I would try to explain something, he would quiz me and tell me to ‘think more logically,’” she says.

Howard earned a bachelor’s degree in engineering from Brown University, in Providence, R.I., and then received her master’s and doctoral degrees in electrical engineering from the University of Southern California. Before joining the faculty of Georgia Tech in 2005, she worked at NASA’s Jet Propulsion Laboratory at the California Institute of Technology for more than a decade as a senior robotics researcher and deputy manager in the Office of the Chief Scientist.

ACTIVE VOLUNTEER
Howard’s father was also an IEEE member, but that’s not why she joined the organization. She says she signed up when she was a student because “that was something that you just did. Plus, my student membership fee was subsidized.”

She kept the membership as a grad student because of the discounted rates members receive on conferences.

Those conferences have had an impact on her career. “They allow you to understand what the state of the art is,” she says. “Back then you received a printed conference proceeding and reading through it was brutal, but by attending it in person, you got a 15-minute snippet about the research.”

Howard is an active volunteer with the IEEE Robotics and Automation and the IEEE Systems, Man, and Cybernetics societies, holding many positions and serving on several committees. She is also featured in the IEEE Impact Creators campaign. These members were selected because they inspire others to innovate for a better tomorrow.

“I value IEEE for its community,” she says. “One of the nice things about IEEE is that it’s international.”


#435605 All of the Winners in the DARPA ...

The first competitive event in the DARPA Subterranean Challenge concluded last week—hopefully you were able to follow along on the livestream, on Twitter, or with some of the articles that we’ve posted about the event. We’ll have plenty more to say about how things went for the SubT teams, but while they take a bit of a (well-earned) rest, we can take a look at the winning teams as well as who won DARPA’s special superlative awards for the competition.

First Place: Team Explorer (25/40 artifacts found)
With their rugged, reliable robots featuring giant wheels and the ability to drop communications nodes, Team Explorer was in the lead from day 1, scoring in double digits on every single run.

Second Place: Team CoSTAR (11/40 artifacts found)
Team CoSTAR had one of the more diverse lineups of robots, and they switched up which robots they decided to send into the mine as they learned more about the course.

Third Place: Team CTU-CRAS (10/40 artifacts found)
While many teams came to SubT with DARPA funding, Team CTU-CRAS was self-funded, making them eligible for a special $200,000 Tunnel Circuit prize.

DARPA also awarded a bunch of “superlative awards” after SubT:

Most Accurate Artifact: Team Explorer

To score a point, teams had to submit the location of an artifact that was correct to within 5 meters of the artifact itself. However, DARPA was tracking the artifact locations with much higher precision—for example, the “zero” point on the backpack artifact was the center of the label on the front, which DARPA tracked to the millimeter. Team Explorer managed to return the location of a backpack with an error of just 0.18 meter, which is kind of amazing.
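For a concrete sense of what that scoring rule amounts to, here is a minimal sketch (not DARPA’s actual scoring code; the coordinates are made up) of the 5-meter distance check, with an Explorer-like error plugged in:

```python
# Illustrative only -- not DARPA's scoring software. A submitted artifact location
# earns a point if its 3D distance from the surveyed ground-truth point is 5 m or less.
import math

def scores_point(reported_xyz, ground_truth_xyz, threshold_m=5.0):
    """Return the localization error in meters and whether it earns a point."""
    error = math.dist(reported_xyz, ground_truth_xyz)
    return error, error <= threshold_m

# Hypothetical coordinates chosen so the error comes out to roughly 0.18 m.
error, ok = scores_point((12.10, -3.40, 1.05), (12.00, -3.55, 1.03))
print(f"error = {error:.2f} m, scores a point: {ok}")
```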

Down to the Wire: Team CSIRO Data61

With just an hour to find as many artifacts as possible, teams had to find the right balance between sending robots off to explore and bringing them back into communication range to download artifact locations. Team CSIRO Data61 cut it especially close, sliding their final point in with a mere 22 seconds to spare.

Most Distinctive Robots: Team Robotika

Team Robotika had some of the quirkiest and most recognizable robots, which DARPA recognized with the “Most Distinctive” award. Robotika told us that part of the reason for that distinctiveness was practical—having a robot that was effectively in two parts meant that they could disassemble it so that it would fit in the baggage compartment of an airplane, very important for a team based in the Czech Republic.

Most Robots Per Person: Team Coordinated Robotics

Kevin Knoedler, who won NASA’s Space Robotics Challenge entirely by himself, brought his own personal swarm of drones to SubT. With a ratio of seven robots to one human, Kevin was almost certainly the hardest working single human at the challenge.

Fan Favorite: Team NCTU

Photo: Evan Ackerman/IEEE Spectrum

The Fan Favorite award went to the team that was most popular on Twitter (with the #SubTChallenge hashtag), and it may or may not be the case that I personally tweeted enough about Team NCTU’s blimp to win them this award. It’s also true that whenever we asked anyone on other teams what their favorite robot was (besides their own, of course), the blimp was overwhelmingly popular. So either way, the award is well deserved.

DARPA shared this little behind-the-scenes clip of the blimp in action (sort of), showing what happened to the poor thing when the mine ventilation system was turned on between runs and DARPA staff had to chase it down and rescue it:

The thing to keep in mind about the results of the Tunnel Circuit is that unlike past DARPA robotics challenges (like the DRC), they don’t necessarily indicate how things are going to go for the Urban or Cave circuits because of how different things are going to be. Explorer did a great job with a team of rugged wheeled vehicles, which turned out to be ideal for navigating through mines, but they’re likely going to need to change things up substantially for the rest of the challenges, where the terrain will be much more complex.

DARPA hasn’t provided any details on the location of the Urban Circuit yet; all we know is that it’ll be sometime in February 2020. This gives teams just six months to take all the lessons that they learned from the Tunnel Circuit and update their hardware, software, and strategies. What were those lessons, and what do teams plan to do differently next year? Check back next week, and we’ll tell you.

[ DARPA SubT ]


#435601 New Double 3 Robot Makes Telepresence ...

Today, Double Robotics is announcing Double 3, the latest major upgrade to its line of consumer(ish) telepresence robots. We had a (mostly) fantastic time testing out Double 2 back in 2016. One of the things that we found out back then was that it takes a lot of practice to remotely drive the robot around. Double 3 solves this problem by leveraging the substantial advances in 3D sensing and computing that have taken place over the past few years, giving their new robot a level of intelligence that promises to make telepresence more accessible for everyone.

Double 2’s iPad has been replaced by “a fully integrated solution”—which is a fancy way of saying a dedicated 9.7-inch touchscreen and a whole bunch of other stuff. That other stuff includes an NVIDIA Jetson TX2 AI computing module, a beamforming six-microphone array, an 8-watt speaker, a pair of 13-megapixel cameras (wide angle and zoom) on a tilting mount, five ultrasonic rangefinders, and most excitingly, a pair of Intel RealSense D430 depth sensors.

It’s those new depth sensors that really make Double 3 special. The D430 modules each use a pair of stereo cameras and a pattern projector to generate 1280 x 720 depth data at ranges from 0.2 to 10 meters. Double 3 uses all of this high-quality depth data to locate obstacles, but at this point it still doesn’t drive completely autonomously. Instead, it presents the remote operator with a slick, augmented-reality view of drivable areas in the form of a grid of dots. You just click where you want the robot to go, and it will skillfully take itself there while avoiding obstacles (including dynamic ones) and related mishaps along the way.
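Double Robotics hasn’t published how it builds that grid of dots, but conceptually the step comes down to classifying depth points as floor or obstacle and exposing only clear floor cells as clickable waypoints. A rough sketch of that idea, with the coordinate convention and all thresholds assumed for illustration:

```python
# Conceptual sketch only -- not Double Robotics' implementation. Points are assumed
# to be in a floor-aligned robot frame (x forward, y left, z up, in meters), as you
# might get after transforming a depth camera's point cloud.
def drivable_grid(points, cell=0.25, floor_tol=0.05, min_clearance=0.30, max_range=5.0):
    """Return (x, y) centers of cells that contain floor and nothing protruding above it."""
    cells = {}
    for x, y, z in points:
        if not 0.0 < x <= max_range:
            continue
        key = (round(x / cell), round(y / cell))
        saw_floor, saw_obstacle = cells.get(key, (False, False))
        cells[key] = (saw_floor or abs(z) <= floor_tol,    # point lies on the floor plane
                      saw_obstacle or z > min_clearance)   # point sticks up into the robot's path
    return [(i * cell, j * cell) for (i, j), (has_floor, has_obstacle) in cells.items()
            if has_floor and not has_obstacle]

cloud = [(1.0, 0.0, 0.01), (1.0, 0.25, 0.02), (1.5, 0.0, 0.45)]
print(drivable_grid(cloud))  # the cell containing the 0.45 m-tall point is excluded
```

Clicking one of the surviving dots would then hand that cell to the robot’s local planner as a goal.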

This effectively offloads the most stressful part of telepresence—not running into stuff—from the remote user to the robot itself, which is the way it should be. That makes it that much easier to encourage people to utilize telepresence for the first time. The way the system is implemented through augmented reality is particularly impressive, I think. It looks like it’s intuitive enough for an inexperienced user without being restrictive, and is a clever way of mitigating even significant amounts of lag.

Otherwise, Double 3’s mobility system is exactly the same as the one featured on Double 2. In fact, you can stick a Double 3 head on a Double 2 body and it instantly becomes a Double 3. Double Robotics is thoughtfully offering this to current Double 2 owners as a significantly more affordable upgrade option than buying a whole new robot.

For more details on all of Double 3's new features, we spoke with the co-founders of Double Robotics, Marc DeVidts and David Cann.

IEEE Spectrum: Why use this augmented reality system instead of just letting the user click on a regular camera image? Why make things more visually complicated, especially for new users?

Marc DeVidts and David Cann: One of the things that we realized about nine months ago when we got this whole thing working was that without the mixed reality for driving, it was really too magical of an experience for the customer. Even us—we had a hard time understanding whether the robot could really see obstacles and understand where the floor is and that kind of thing. So, we said “What would be the best way of communicating this information to the user?” And the right way to do it ended up drawing the graphics directly onto the scene. It’s really awesome—we have a full, real time 3D scene with the depth information drawn on top of it. We’re starting with some relatively simple graphics, and we’ll be adding more graphics in the future to help the user understand what the robot is seeing.

How robust is the vision system when it comes to obstacle detection and avoidance? Does it work with featureless surfaces, IR absorbent surfaces, in low light, in direct sunlight, etc?

We’ve looked at all of those cases, and one of the reasons that we’re going with the RealSense is the projector that helps us to see blank walls. We also found that having two sensors—one facing the floor and one facing forward—gives us a great coverage area. Having ultrasonic sensors in there as well helps us to detect anything that we can't see with the cameras. They're sort of a last safety measure, especially useful for detecting glass.

It seems like there’s a lot more that you could do with this sensing and mapping capability. What else are you working on?

We're starting with this semi-autonomous driving variant, and we're doing a private beta of full mapping. So, we’re going to do full SLAM of your environment that will be mapped by multiple robots at the same time while you're driving, and then you'll be able to zoom out to a map and click anywhere and it will drive there. That's where we're going with it, but we want to take baby steps to get there. It's the obvious next step, I think, and there are a lot more possibilities there.

Do you expect developers to be excited for this new mapping capability?

We're using a very powerful computer in the robot, an NVIDIA Jetson TX2 running Ubuntu. There's room to grow. It’s actually really exciting to be able to see, in real time, the 3D pose of the robot along with all of the depth data that gets transformed in real time into one view that gives you a full map. Having all of that data and just putting those pieces together and getting everything to work has been a huge feat in and of itself.

We have an extensive API for developers to do custom implementations, either for telepresence or other kinds of robotics research. Our system isn't running ROS, but we're going to be adding ROS adapters for all of our hardware components.

Telepresence robots depend heavily on wireless connectivity, which is usually not something that telepresence robotics companies like Double have direct control over. Have you found that connectivity has been getting significantly better since you first introduced Double?

When we started in 2013, we had a lot of customers that didn’t have WiFi in their hallways, just in the conference rooms. We very rarely hear about customers having WiFi connectivity issues these days. The bigger issue we see is when people are calling into the robot from home, where they don't have proper traffic management on their home network. The robot doesn't need a ton of bandwidth, but it does need consistent, low latency bandwidth. And so, if someone else in the house is watching Netflix or something like that, it’s going to saturate your connection. But for the most part, it’s gotten a lot better over the last few years, and it’s no longer a big problem for us.

Do you think 5G will make a significant difference to telepresence robots?

We’ll see. We like the low latency possibilities and the better bandwidth, but it's all going to be a matter of what kind of reception you get. LTE can be great, if you have good reception; it’s all about where the tower is. I’m pretty sure that WiFi is going to be the primary thing for at least the next few years.

DeVidts also mentioned that an unfortunate side effect of the new depth sensors is that hanging a t-shirt on your Double to give it some personality will likely render it partially blind, so that's just something to keep in mind. To make up for this, you can switch around the colorful trim surrounding the screen, which is nowhere near as fun.

When the Double 3 is ready for shipping in late September, US $2,000 will get you the new head with all the sensors and stuff, which seamlessly integrates with your Double 2 base. Buying Double 3 straight up (with the included charging dock) will run you $4,000. This is by no means an inexpensive robot, and my impression is that it’s not really designed for individual consumers. But for commercial, corporate, healthcare, or education applications, $4k for a robot as capable as the Double 3 is really quite a good deal—especially considering the kinds of use cases for which it’s ideal.

[ Double Robotics ]


#435152 The Futuristic Tech Disrupting Real ...

In the wake of the housing market collapse of 2008, one entrepreneur decided to dive right into the failing real estate industry. But this time, he didn’t buy any real estate to begin with. Instead, Glenn Sanford decided to launch the first-ever cloud-based real estate brokerage, eXp Realty.

Contracting virtual platform VirBELA to build out the company’s mega-campus in VR, eXp Realty demonstrates the power of a dematerialized workspace, throwing out hefty overhead costs and fundamentally redefining what ‘real estate’ really means. Ten years later, eXp Realty has an army of 14,000 agents across all 50 US states, 3 Canadian provinces, and 400 MLS market areas… all without a single physical office.

But VR is just one of many exponential technologies converging to revolutionize real estate and construction. As floating cities and driverless cars spread out your living options, AI and VR are together cutting out the middleman.

Already, the global construction industry is projected to surpass $12.9 trillion in 2022, and the total value of the US housing market alone grew to $33.3 trillion last year. Both vital for our daily lives, these industries will continue to explode in value, posing countless possibilities for disruption.

In this blog, I’ll be discussing the following trends:

New prime real estate locations;
Disintermediation of the real estate broker and search;
Materials science and 3D printing in construction.

Let’s dive in!

Location Location Location
Until today, location has been the name of the game when it comes to hunting down the best real estate. But constraints on land often drive up costs while limiting options, and urbanization is only exacerbating the problem.

Beyond the world of virtual real estate, two primary mechanisms are driving the creation of new locations.

(1) Floating Cities

Offshore habitation hubs, floating cities have long been conceived as a solution to rising sea levels, skyrocketing urban populations, and threatened ecosystems. If successful, they will soon unlock an abundance of prime real estate, whether for scenic living, commerce, education, or recreation.

One pioneering model is that of Oceanix City, designed by Danish architect Bjarke Ingels and a host of other domain experts. Intended to adapt organically over time, Oceanix would consist of a galaxy of mass-produced, hexagonal floating modules, built as satellite “cities” off coastal urban centers and sustained by renewable energies.

While individual 4.5-acre platforms would each sustain 300 people, these hexagonal modules are designed to link into 75-acre tessellations sustaining up to 10,000 residents. Each anchored to the ocean floor using biorock, Oceanix cities are slated to be closed-loop systems, as external resources are continuously supplied by automated drone networks.

Electric boats or flying cars might zoom you to work, city-embedded water-capture technologies would provide your water, vertical and outdoor farming would supply your family’s meals, and sharing economies would dominate the provision of goods.

AERIAL: Located in calm, sheltered waters, near coastal megacities, OCEANIX City will be an adaptable, sustainable, scalable, and affordable solution for human life on the ocean. Image Credit: OCEANIX/BIG-Bjarke Ingels Group.
Backed by government officials whose island nations risk submersion as sea levels rise, the UN is now getting on board. And just this year, seasteading began moving out of the realm of science fiction and into practical waters.

As French Polynesia seeks out robust solutions to sea level rise, its government has joined forces with the San Francisco-based Seasteading Institute. With a newly designated special economic zone and 100 acres of beachfront, this joint Floating Island Project could even see up to a dozen inhabitable structures by 2020. And what better way to fund the $60 million project than the team’s upcoming ICO?

But aside from creating new locations, autonomous vehicles (AVs) and flying cars are turning previously low-demand land into the prime real estate of tomorrow.

(2) Autonomous Electric Vehicles and Flying Cars

Today, the value of a location is a function of its proximity to your workplace, your city’s central business district, the best schools, or your closest friends.

But what happens when driverless cars desensitize you to distance, or Hyperloop and flying cars decimate your commute time? Historically, every time new transit methods have hit the mainstream, tolerance for distance has opened up right alongside them, further catalyzing city spread.

And just as Hyperloop and the Boring Company aim to make your commute immaterial, autonomous vehicle (AV) ridesharing services will spread out cities in two ways: (1) by drastically reducing parking spaces needed (vertical parking decks = more prime real estate); and (2) by untethering you from the steering wheel. Want an extra two hours of sleep on the way to work? Schedule a sleeper AV and nap on your route to the office. Need a car-turned-mobile-office? No problem.

Meanwhile, aerial taxis (i.e. flying cars) will allow you to escape ground congestion entirely, delivering you from bedroom to boardroom at decimated time scales.

Already working with regulators, Uber Elevate has staked out ambitious plans for its UberAIR airborne taxi project. By 2023, Uber anticipates rolling out flying drones in its first two pilot cities, Los Angeles and Dallas. Flying between rooftop skyports, the drones would carry passengers at a height of 1,000 to 2,000 feet and at speeds of 100 to 200 mph. And while costs per ride are anticipated to resemble those of an Uber Black based on mileage, prices are projected to soon drop to those of an UberX.
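To put those cruise speeds in perspective, here is a back-of-the-envelope calculation (it ignores boarding, skyport access, and routing, so real door-to-door times would be longer):

```python
# Rough flight times at the quoted 100-200 mph cruise speeds.
KM_PER_MILE = 1.609

def flight_minutes(distance_km, speed_mph):
    return distance_km / (speed_mph * KM_PER_MILE) * 60

for speed_mph in (100, 200):
    t50, t100 = flight_minutes(50, speed_mph), flight_minutes(100, speed_mph)
    print(f"{speed_mph} mph: 50 km in ~{t50:.0f} min, 100 km in ~{t100:.0f} min")
# 100 mph: 50 km in ~19 min, 100 km in ~37 min
# 200 mph: 50 km in ~9 min, 100 km in ~19 min
```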

But the true economic feat boils down to this: if I were to commute 50 to 100 kilometers, I could get two or three times the house for the same price. (Not to mention the extra living space offered up by my now-unneeded garage.)

All of a sudden, virtual reality, broadband, AVs, or high-speed vehicles are going to change where we live and where we work. So rather than living in a crowded, dense urban core for access to jobs and entertainment, our future of personalized, autonomous, low-cost transport opens the luxury of rural areas to all without compromising the benefits of a short commute.

Once these drivers multiply your real estate options, how will you select your next home?

Disintermediation: Say Bye to Your Broker
In a future of continuous and personalized preference-tracking, why hire a human agent who knows less about your needs and desires than a personal AI?

Just as disintermediation is cutting out bankers and insurance agents, so too is it closing in on real estate brokers. Over the next decade, as AI becomes your agent, VR will serve as your medium.

To paint a more vivid picture of how this will look, over 98 percent of your home search will be conducted from the comfort of your couch through next-generation VR headgear.

Once you’ve verbalized your primary desires for home location, finishings, size, etc. to your personal AI, it will offer you top picks, tour-able 24/7, with optional assistance by a virtual guide and constantly updated data. As a seller, this means potential buyers from two miles, or two continents, away.

Throughout each immersive VR tour, advanced eye-tracking software and a permissioned machine learning algorithm follow your gaze, further learn your likes and dislikes, and intelligently recommend other homes or commercial residences to visit.
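The article doesn’t name an algorithm, but the recommendation step could be as simple as an online classifier trained on gaze dwell times; here is a toy sketch in that spirit, with every feature, number, and label invented for illustration:

```python
# Toy sketch of a permissioned, gaze-driven preference model -- not any vendor's system.
# Each training example is seconds of gaze dwell in (kitchen, living room, garden, garage)
# during a VR tour, labeled by whether the user later shortlisted the home.
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss")  # online logistic regression

tours = [([120, 45, 10, 5], 1),   # lingered in the kitchen -> shortlisted
         ([15, 30, 5, 90], 0),    # mostly the garage -> passed
         ([90, 60, 40, 10], 1)]

for dwell, shortlisted in tours:  # update incrementally after every completed tour
    model.partial_fit([dwell], [shortlisted], classes=[0, 1])

candidate = [100, 50, 20, 5]      # predicted dwell profile for an unseen listing
print("recommend this home?", bool(model.predict([candidate])[0]))
```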

Curious as to what the living room might look like with a fresh coat of blue paint and a white carpet? No problem! VR programs will be able to modify rendered environments instantly, changing countless variables, from furniture materials to even the sun’s orientation. Keen to input your own furniture into a VR-rendered home? Advanced AIs could one day compile all your existing furniture, electronics, clothing, decorations, and even books, virtually organizing them across any accommodating new space.

As 3D scanning technologies make extraordinary headway, VR renditions will only grow cheaper and higher resolution. One company called Immersive Media (disclosure: I’m an investor and advisor) has a platform for 360-degree video capture and distribution, and is already exploring real estate 360-degree video.

Smaller firms like Studio 216, Vieweet, Arch Virtual, ArX Solutions, and Rubicon Media can similarly capture and render models of various properties for clients and investors to view and explore. In essence, VR real estate platforms will allow you to explore any home for sale, do the remodel, and determine if it truly is the house of your dreams.

Once you’re ready to make a bid, your AI will even help you estimate it, then process and submit your offer. Real estate companies like Zillow, Trulia, Move, Redfin, ZipRealty (acquired by Realogy in 2014) and many others have already invested millions in machine learning applications to make search, valuation, consulting, and property management easier, faster, and much more accurate.

But what happens if the home you desire most means starting from scratch with new construction?

New Methods and Materials for Construction
For thousands of years, we’ve been constrained by the construction materials of nature. We built bricks from naturally abundant clay and shale, used tree limbs as our rooftops and beams, and mastered incredible structures in ancient Rome with the use of cement.

But construction is now on the cusp of a materials science revolution. Today, I’d like to focus on three key materials:

Upcycled Materials

Imagine if you could turn the world’s greatest waste products into their most essential building blocks. Thanks to UCLA researchers at CO2NCRETE, we can already do this with carbon emissions.

Today, concrete produces about five percent of all greenhouse gas (GHG) emissions. But what if concrete could instead conserve greenhouse emissions? CO2NCRETE engineers capture carbon from smokestacks and combine it with lime to create a new type of cement. The lab’s 3D printers then shape the upcycled concrete to build entirely new structures. Once conquered at scale, upcycled concrete will turn a former polluter into a future conserver.
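For context, the basic chemistry behind this kind of carbon upcycling is mineral carbonation: captured CO2 reacts with hydrated lime to form calcium carbonate, locking the carbon into a solid. In simplified form (assuming the lime is calcium hydroxide):

```latex
% Simplified overall carbonation reaction
\mathrm{Ca(OH)_2 + CO_2 \;\longrightarrow\; CaCO_3 + H_2O}
```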

Or what if we wanted to print new residences from local soil at hand? Marking an extraordinary convergence between robotics and 3D printing, the Institute of Advanced Architecture of Catalonia (IAAC) is already working on a solution.

In a major feat for low-cost construction in remote zones, IAAC has found a way to convert almost any soil into a building material with three times the tensile strength of industrial clay. Offering myriad benefits, including natural insulation, low GHG emissions, fire protection, air circulation, and thermal mediation, IAAC’s new 3D printed native soil can build houses on-site for as little as $1,000.

Nanomaterials

Nano- and micro-materials are ushering in a new era of smart, super-strong, and self-charging buildings. While carbon nanotubes dramatically increase the strength-to-weight ratio of skyscrapers, revolutionizing their structural flexibility, nanomaterials don’t stop here.

Several research teams are pioneering silicon nanoparticles to capture everyday light flowing through our windows. Little solar cells at the edges of windows then harvest this energy for ready use. Researchers at the US National Renewable Energy Lab have developed similar smart windows. Turning into solar panels when bathed in sunlight, these thermochromic windows will power our buildings, changing color as they do.

Self-Healing Infrastructure

The American Society of Civil Engineers estimates that the US needs to spend roughly $4.5 trillion to fix nationwide roads, bridges, dams, and common infrastructure by 2025. But what if infrastructure could fix itself?

Enter self-healing concrete. Engineers at Delft University have developed bio-concrete that can repair its own cracks. As head researcher Henk Jonkers explains, “What makes this limestone-producing bacteria so special is that they are able to survive in concrete for more than 200 years and come into play when the concrete is damaged. […] If cracks appear as a result of pressure on the concrete, the concrete will heal these cracks itself.”

But bio-concrete is only the beginning of self-healing technologies. As futurist architecture firms start printing plastic and carbon-fiber houses like the stunner seen below (using Branch Technologies’ 3D printing technology), engineers have begun tackling self-healing plastic.

And in a bid to go smart, burgeoning construction projects have started embedding sensors for preemptive detection of damage. Beyond materials and sensors, however, construction methods are fast colliding with robotics and 3D printing.

While some startups and research institutes have leveraged robot swarm construction (namely, Harvard’s robotic termite-like swarm of programmed constructors), others have taken to large-scale autonomous robots.

One such example involves Fastbrick Robotics. After multiple iterations, the company’s Hadrian X end-to-end bricklaying robot can now autonomously build a fully livable, 180-square-meter home in under three days. Using a laser-guided robotic attachment, the all-in-one brick-loaded truck simply drives to a construction site and directs blocks through its robotic arm in accordance with a 3D model.
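Fastbrick hasn’t published its planning software, but the “build from a 3D model” step reduces to ordering block placements so that every block rests on completed work; here is a highly simplified sketch, with the model format invented for illustration:

```python
# Highly simplified illustration -- not Fastbrick's software. The "3D model" here is
# just a list of block placements; the planner orders them bottom-up, course by course,
# so each block is supported before the arm places it.
from typing import NamedTuple

class Block(NamedTuple):
    x: float      # meters along the wall
    y: float      # meters across the slab
    course: int   # 0 = bottom course

def build_order(model):
    """Return blocks lowest course first, sweeping across each course in a fixed order."""
    return sorted(model, key=lambda b: (b.course, b.y, b.x))

wall = [Block(0.0, 0.0, 1), Block(0.5, 0.0, 0), Block(0.0, 0.0, 0)]
for step, block in enumerate(build_order(wall), start=1):
    print(f"step {step}: place block at ({block.x}, {block.y}), course {block.course}")
```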

Image Credit: Fastbrick Robotics.
Meeting verified building standards, Hadrian and similar solutions hold massive promise in the long term, deployable across post-conflict refugee sites and regions recovering from natural catastrophes.

Imagine the implications. Eliminating human safety concerns and unlocking any environment, autonomous builder robots could collaboratively build massive structures in space or deep underwater habitats.

Final Thoughts
Where we live, how we live, and what we live in form a vital pillar of our everyday lives. The concept of “home” is unlikely to disappear anytime soon. At the same time, real estate and construction are two of the biggest playgrounds for technological convergence, each on the verge of revolutionary disruption.

As underlying shifts in transportation, land reclamation, and the definition of “space” (real vs. virtual) take hold, the real estate market is about to explode in value, spreading out urban centers on unprecedented scales and unlocking vast new prime “property.”

Meanwhile, converging advancements in AI and VR are fundamentally disrupting the way we design, build, and explore new residences. Just as mirror worlds create immersive, virtual real estate economies, VR tours and AI agents are absorbing both sides of the coin to entirely obliterate the middleman.

And as materials science breakthroughs meet new modes of construction, the only limits to tomorrow’s structures are those of our own imagination.

Join Me
Abundance-Digital Online Community: Stay ahead of technological advancements and turn your passion into action. Abundance Digital is now part of Singularity University. Learn more.

Image Credit: OCEANIX/BIG-Bjarke Ingels Group.


#434823 The Tangled Web of Turning Spider Silk ...

Spider-Man is one of the most popular superheroes of all time. It’s a bit surprising given that one of the more common phobias is arachnophobia—a debilitating fear of spiders.

Perhaps more fantastical is that young Peter Parker, a brainy high school science nerd, seemingly developed overnight the famous web-shooters and the synthetic spider silk that he uses to swing across the cityscape like Tarzan through the jungle.

That’s because scientists have been trying for decades to replicate spider silk, a material that is five times stronger than steel, among its many superpowers. In recent years, researchers have been untangling the protein-based fiber’s structure down to the molecular level, leading to new insights and new potential for eventual commercial uses.

The applications for such a material seem near endless. There are the more futuristic visions, like enabling robotic “muscles” for human-like movement or ensnaring real-life villains with a Spider-Man-like web. Near-term applications could include biomedical products, such as bandages and adhesives, and replacement textiles for everything from rope to seat belts to parachutes.

Spinning Synthetic Spider Silk
Randy Lewis has been studying the properties of spider silk and developing methods for producing it synthetically for more than three decades. In the 1990s, his research team was behind cloning the first spider silk gene, as well as the first to identify and sequence the proteins that make up the six different silks that web slingers make. Each has different mechanical properties.

“So our thought process was that you could take that information and begin to understand what made them strong and what makes them stretchy, and why some are very stretchy and some are not stretchy at all, and some are stronger and some are weaker,” explained Lewis, a biology professor at Utah State University and director of the Synthetic Spider Silk Lab, in an interview with Singularity Hub.

Spiders are naturally territorial and cannibalistic, so any intention to farm silk naturally would likely end in an orgy of arachnid violence. Instead, Lewis and company have genetically modified different organisms to produce spider silk synthetically, including inserting a couple of web-making genes into the genetic code of goats. The goats’ milk contains spider silk proteins.

The lab also produces synthetic spider silk through a fermentation process not entirely dissimilar to brewing beer, but using genetically modified bacteria to make the desired spider silk proteins. A similar technique has been used for years to make a key enzyme in cheese production. More recently, companies are using transgenic bacteria to make meat and milk proteins, entirely bypassing animals in the process.

The same fermentation technology is used by a chic startup called Bolt Threads outside of San Francisco that has raised more than $200 million for fashionable fibers made out of synthetic spider silk it calls Microsilk. (The company is also developing a second leather-like material, Mylo, using the underground root structure of mushrooms known as mycelium.)

Lewis’ lab also uses transgenic silkworms to produce a kind of composite material made up of the domesticated insect’s own silk proteins and those of spider silk. “Those have some fairly impressive properties,” Lewis said.

The researchers are even experimenting with genetically modified alfalfa. One of the big advantages there is that once the spider silk protein has been extracted, the remaining protein could be sold as livestock feed. “That would bring the cost of spider silk protein production down significantly,” Lewis said.

Building a Better Web
Producing synthetic spider silk isn’t the problem, according to Lewis, but the ability to do it at scale commercially remains a sticking point.

Another challenge is “weaving” the synthetic spider silk into usable products that can take advantage of the material’s marvelous properties.

“It is possible to make silk proteins synthetically, but it is very hard to assemble the individual proteins into a fiber or other material forms,” said Markus Buehler, head of the Department of Civil and Environmental Engineering at MIT, in an email to Singularity Hub. “The spider has a complex spinning duct in which silk proteins are exposed to physical forces, chemical gradients, the combination of which generates the assembly of molecules that leads to silk fibers.”

Buehler recently co-authored a paper in the journal Science Advances that found dragline spider silk exhibits different properties in response to changes in humidity that could eventually have applications in robotics.

Specifically, spider silk suddenly contracts and twists above a certain level of relative humidity, exerting enough force to “potentially be competitive with other materials being explored as actuators—devices that move to perform some activity such as controlling a valve,” according to a press release.

Studying Spider Silk Up Close
Recent studies at the molecular level are helping scientists learn more about the unique properties of spider silk, which may help researchers develop materials with extraordinary capabilities.

For example, scientists at Arizona State University used magnetic resonance tools and other instruments to image the abdomen of a black widow spider. They produced what they called the first molecular-level model of spider silk protein fiber formation, providing insights on the nanoparticle structure. The research was published last October in Proceedings of the National Academy of Sciences.

A cross section of the abdomen of a black widow (Latrodectus hesperus) spider used in this study at Arizona State University. Image Credit: Samrat Amin.
Also in 2018, a study presented in Nature Communications described a sort of molecular clamp that binds the silk protein building blocks, which are called spidroins. The researchers observed for the first time that the clamp self-assembles in a two-step process, contributing to the extensibility, or stretchiness, of spider silk.

Another team put the spider silk of a brown recluse under an atomic force microscope, discovering that each strand, already 1,000 times thinner than a human hair, is made up of thousands of nanostrands. That helps explain its extraordinary tensile strength, though technique is also a factor, as the brown recluse uses a special looping method to reinforce its silk strands. The study also appeared last year in the journal ACS Macro Letters.
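As a quick sanity check on “1,000 times thinner than a human hair” (assuming a typical hair diameter of 50 to 100 micrometers):

```python
# Rough arithmetic only; hair diameters vary widely, so this is an order-of-magnitude check.
for hair_um in (50, 100):
    strand_um = hair_um / 1000                                        # 1,000 times thinner
    print(f"hair {hair_um} um -> strand ~{strand_um * 1000:.0f} nm")  # expressed in nanometers
# 50 um hair -> ~50 nm strand; 100 um hair -> ~100 nm strand (i.e., nanoscale)
```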

Making Spider Silk Stick
Buehler said his team is now trying to develop better and faster predictive methods to design silk proteins using artificial intelligence.

“These new methods allow us to generate new protein designs that do not naturally exist and which can be explored to optimize certain desirable properties like torsional actuation, strength, bioactivity—for example, tissue engineering—and others,” he said.

Meanwhile, Lewis’ lab has discovered a method that allows it to solubilize spider silk protein in what is essentially a water-based solution, eschewing acids or other toxic compounds that are normally used in the process.

That enables the researchers to develop materials beyond fiber, including adhesives that “are better than an awful lot of the current commercial adhesives,” Lewis said, as well as coatings that could be used to dampen vibrations, for example.

“We’re making gels for various kinds of tissue regeneration, as well as drug delivery, and things like that,” he added. “So we’ve expanded the use profile from something beyond fibers to something that is a much more extensive portfolio of possible kinds of materials.”

And, yes, there are even designs at the Synthetic Spider Silk Lab for a Spider-Man-style web-slinger material. The US Navy is interested in non-destructive ways of disabling an enemy vessel, such as fouling its propeller. The project also includes producing synthetic proteins from the hagfish, an eel-like critter that exudes a gelatinous slime when threatened.

Lewis said that while the potential for spider silk is certainly headline-grabbing, he cautioned that much of the hype is not focused on the unique mechanical properties that could lead to advances in healthcare and other industries.

“We want to see spider silk out there because it’s a unique material, not because it’s got marketing appeal,” he said.

Image Credit: mycteria / Shutterstock.com
