Tag Archives: mind

#432880 Google’s Duplex Raises the Question: ...

By now, you’ve probably seen Google’s new Duplex software, which promises to call people on your behalf to book appointments for haircuts and the like. As yet, it only exists in demo form, but already it seems like Google has made a big stride towards capturing a market that plenty of companies have had their eye on for quite some time. This software is impressive, but it raises questions.

Many of you will be familiar with the stilted, robotic conversations you could have with early chatbots that were, essentially, glorified menus. Instead of pressing 1 to confirm or 2 to re-enter, some of these bots would accept simple spoken commands like “Yes” or “No,” replacing the buttons with a limited ability to recognize a few words. Using them was often far more frustrating than using a menu—there are few things more irritating than a robot saying, “Sorry, your response was not recognized.”
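The “glorified menu” pattern can be sketched in a few lines. This is a toy illustration, not any vendor’s actual IVR software: the bot recognizes only a couple of fixed keywords and rejects everything else, which is exactly why talking to one felt so brittle.

```python
# Toy "glorified menu" bot: exact keyword matching stands in for speech
# recognition. Anything outside the tiny vocabulary is rejected.
RESPONSES = {
    "yes": "Appointment confirmed.",
    "no": "Okay, please re-enter your selection.",
}

def menu_bot(utterance: str) -> str:
    """Map a caller's utterance to a canned reply, or fail."""
    key = utterance.strip().lower()
    return RESPONSES.get(key, "Sorry, your response was not recognized.")

print(menu_bot("Yes"))         # the one phrasing the bot accepts
print(menu_bot("yeah, sure"))  # any off-script phrasing fails
```

The failure mode is obvious: a caller who says “yeah, sure” means yes, but the bot has no way to know that without real natural language processing.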

Google Duplex scheduling a hair salon appointment:

Google Duplex calling a restaurant:

Even getting the response recognized is hard enough. After all, there are countless different nuances and accents to baffle voice recognition software, and endless turns of phrase that amount to saying the same thing that can confound natural language processing (NLP), especially if you like your phrasing quirky.

You may think that standard customer-service type conversations all travel the same route, using similar words and phrasing. But when there are over 80,000 ways to order coffee, and making a mistake is frowned upon, even simple tasks require high accuracy over a huge dataset.

Advances in audio processing, neural networks, and NLP, as well as raw computing power, have meant that basic recognition of what someone is trying to say is less of an issue. Soundhound’s virtual assistant prides itself on being able to process complicated requests (perhaps needlessly complicated).

The deeper issue, as with all attempts to develop conversational machines, is one of understanding context. There are so many ways a conversation can go that attempting to construct a conversation two or three layers deep quickly runs into problems. Multiply the thousands of things people might say by the thousands they might say next, and the combinatorics of the challenge runs away from most chatbots, leaving them as either glorified menus, gimmicks, or rather bizarre to talk to.
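The combinatorial blow-up is easy to quantify: if a caller can say any of N distinct things at each turn, the number of possible conversation paths is N raised to the number of turns. The branching figure of 1,000 below is illustrative, not a measurement.

```python
# Count distinct paths through a conversation tree where every turn
# can branch in `utterances_per_turn` different directions.
def conversation_paths(utterances_per_turn: int, turns: int) -> int:
    return utterances_per_turn ** turns

# Even a modest 1,000 plausible utterances per turn overwhelms a
# scripted bot after just three exchanges.
for depth in (1, 2, 3):
    print(depth, conversation_paths(1000, depth))
# depths 1-3 give 1,000; 1,000,000; 1,000,000,000 possible paths
```

A hand-scripted bot cannot enumerate a billion three-turn paths, which is why chatbots either stay shallow (glorified menus) or need statistical models that generalize across phrasings.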

Yet Google, which surely remembers from Glass the risk of premature debuts for technology, especially the kind that asks you to rethink how you interact with or trust in software, must have faith in Duplex to show it on the world stage. We know that startups like Semantic Machines and x.ai have received serious funding to perform very similar functions, using natural-language conversations to perform computing tasks, schedule meetings, book hotels, or purchase items.

It’s no great leap to imagine Google will soon do the same, bringing us closer to a world of onboard computing, where Lens labels the world around us and its assistant arranges it for us (all the while gathering more and more data it can convert into personalized ads). The early demos showed some clever tricks for keeping the conversation within a fairly narrow realm where the AI should be comfortable and competent, and the blog post that accompanied the release shows just how much effort has gone into the technology.

Yet given the privacy and ethics funk the tech industry finds itself in, and people’s general unease about AI, the main reaction to Duplex’s impressive demo was concern. The voice sounded too natural, bringing to mind Lyrebird and their warnings of deepfakes. You might trust “Do the Right Thing” Google with this technology, but it could usher in an era when automated robo-callers are far more convincing.

A more human-like voice may sound like a perfectly innocuous improvement, but the fact that the assistant interjects naturalistic “umm” and “mm-hm” responses to more perfectly mimic a human rubbed a lot of people the wrong way. This wasn’t just a voice assistant trying to sound less grinding and robotic; it was actively trying to deceive people into thinking they were talking to a human.

Google is running the risk of trying to get to conversational AI by going straight through the uncanny valley.

“Google’s experiments do appear to have been designed to deceive,” said Dr. Thomas King of the Oxford Internet Institute’s Digital Ethics Lab, according to TechCrunch. “Their main hypothesis was ‘can you distinguish this from a real person?’ In this case it’s unclear why their hypothesis was about deception and not the user experience… there should be some kind of mechanism there to let people know what it is they are speaking to.”

From Google’s perspective, being able to say “90 percent of callers can’t tell the difference between this and a human personal assistant” is an excellent marketing ploy, even though statistics about how many interactions are successful might be more relevant.

In fact, Duplex runs contrary to pretty much every major recommendation about ethics for the use of robotics or artificial intelligence, not to mention certain eavesdropping laws. Transparency is key to holding machines (and the people who design them) accountable, especially when it comes to decision-making.

Then there are the more subtle social issues. One prominent effect social media has had is to allow people to silo themselves; in echo chambers of like-minded individuals, it’s hard to see how other opinions exist. Technology exacerbates this by removing the evolutionary cues that go along with face-to-face interaction. Confronted with a pair of human eyes, people are more generous. Confronted with a Twitter avatar or a Facebook interface, people hurl abuse and criticism they’d never dream of using in a public setting.

Now that we can use technology to interact with ever fewer people, will it change us? Is it fair to offload the burden of dealing with a robot onto the poor human at the other end of the line, who might have to deal with dozens of such calls a day? Google has said that if the AI is in trouble, it will put you through to a human, which might help save receptionists from the hell of trying to explain a concept to dozens of dumbfounded AI assistants all day. But there’s always the risk that failures will be blamed on the person and not the machine.

As AI advances, could we end up treating the dwindling number of people in these “customer-facing” roles as the buggiest part of a fully automatic service? Will people start accusing each other of being robots on the phone, as well as on Twitter?

Google has provided plenty of reassurances about how the system will be used. They have said they will ensure that the system is identified, and it’s hardly difficult to resolve this problem; a slight change in the script from their demo would do it. For now, consumers will likely appreciate moves that make it clear whether the “intelligent agents” that make major decisions for us, that we interact with daily, and that hide behind social media avatars or phone numbers are real or artificial.

Image Credit: Besjunior / Shutterstock.com

Posted in Human Robots

#432671 Stuff 3.0: The Era of Programmable ...

It’s the end of a long day in your apartment in the early 2040s. You decide your work is done for the day, stand up from your desk, and yawn. “Time for a film!” you say. The house responds to your cues. The desk splits into hundreds of tiny pieces, which flow behind you and take on shape again as a couch. The computer screen you were working on flows up the wall and expands into a flat projection screen. You relax into the couch and, after a few seconds, a remote control surfaces from one of its arms.

In a few seconds flat, you’ve gone from a neatly equipped office to a home cinema…all within the same four walls. Who needs more than one room?

This is the dream of those who work on “programmable matter.”

In his recent book about AI, Max Tegmark makes a distinction between three different levels of computational sophistication for organisms. Life 1.0 is single-celled organisms like bacteria; here, hardware is indistinguishable from software. The behavior of the bacteria is encoded into its DNA; it cannot learn new things.

Life 2.0 is where humans live on the spectrum. We are more or less stuck with our hardware, but we can change our software by choosing to learn different things, say, Spanish instead of Italian. Much like managing space on your smartphone, your brain’s hardware will allow you to download only a certain number of packages, but, at least theoretically, you can learn new behaviors without changing your underlying genetic code.

Life 3.0 marks a step-change from this: creatures that can change both their hardware and software in something like a feedback loop. This is what Tegmark views as a true artificial intelligence—one that can learn to change its own base code, leading to an explosion in intelligence. Perhaps, with CRISPR and other gene-editing techniques, we could be using our “software” to doctor our “hardware” before too long.

Programmable matter extends this analogy to the things in our world: what if your sofa could “learn” how to become a writing desk? What if, instead of a Swiss Army knife with dozens of tool attachments, you just had a single tool that “knew” how to become any other tool you could require, on command? In the crowded cities of the future, could houses be replaced by single, OmniRoom apartments? It would save space, and perhaps resources too.

Such are the dreams, anyway.

But engineering and manufacturing a single gadget is already a complex process, so you can imagine that making stuff that can turn into many different items is extremely complicated. Professor Skylar Tibbits at MIT referred to it as 4D printing in a TED Talk, and the website for his research group, the Self-Assembly Lab, excitedly claims, “We have also identified the key ingredients for self-assembly as a simple set of responsive building blocks, energy and interactions that can be designed within nearly every material and machining process available. Self-assembly promises to enable breakthroughs across many disciplines, from biology to material science, software, robotics, manufacturing, transportation, infrastructure, construction, the arts, and even space exploration.”

Naturally, their projects are still in the early stages, but the Self-Assembly Lab and others are genuinely exploring just the kind of science fiction applications we mooted.

For example, there’s the cell-phone self-assembly project, which brings to mind eerie, 24/7 factories where mobile phones assemble themselves from 3D printed kits without human or robotic intervention. Okay, so the phones they’re making are hardly going to fly off the shelves as fashion items, but if all you want is something that works, it could cut manufacturing costs substantially and automate even more of the process.

One of the major hurdles to overcome in making programmable matter a reality is choosing the right fundamental building blocks, and there’s a very important balance to strike. To create fine details, the pieces can’t be too big, or the rearranged matter will be lumpy; oversized blocks would be useless for certain applications, such as tools for fine manipulation, and might also struggle to simulate a range of textures. On the other hand, if the pieces are too small, different problems can arise.

Imagine a setup where each piece is a small robot. You have to contain the robot’s power source and its brain, or at least some kind of signal-generator and signal-processor, all in the same compact unit. Perhaps you can imagine that one might be able to simulate a range of textures and strengths by changing the strength of the “bond” between individual units—your desk might need to be a little bit more firm than your bed, which might be nicer with a little more give.
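One way to picture the tunable “bond” idea is as a per-pair stiffness setting that the swarm can rewrite on demand. The sketch below is purely illustrative: the `Unit` class, `bond` method, and the stiffness scale are all invented for this example and do not correspond to any real modular-robot API.

```python
# Illustrative model of programmable units whose pairwise "bond"
# stiffness can be retuned to mimic firmer or softer materials.
from dataclasses import dataclass, field

@dataclass
class Unit:
    name: str
    bonds: dict = field(default_factory=dict)  # neighbor name -> stiffness

    def bond(self, other: "Unit", stiffness: float) -> None:
        """Set a symmetric bond between two units (0 = loose, 1 = rigid)."""
        self.bonds[other.name] = stiffness
        other.bonds[self.name] = stiffness

def average_stiffness(units: list) -> float:
    """Rough proxy for how 'firm' the assembled object feels overall."""
    values = [s for u in units for s in u.bonds.values()]
    return sum(values) / len(values)

a, b = Unit("a"), Unit("b")
a.bond(b, 0.9)   # desk-like: nearly rigid
print(average_stiffness([a, b]))
a.bond(b, 0.3)   # bed-like: more give
print(average_stiffness([a, b]))
```

The point of the toy is only that the same two units can present as “desk” or “bed” by rewriting one number, which is the essence of the firmness argument above.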

Early steps toward creating this kind of matter have been taken by those who are developing modular robots. There are plenty of different groups working on this, including MIT, Lausanne, and the University of Brussels.

In the Brussels group’s system, one individual robot acts as a centralized decision-maker, referred to as the brain unit, but additional robots can autonomously join the brain unit as and when needed to change the shape and structure of the overall system. Although the system currently comprises only ten units, it’s a proof of concept that control can be orchestrated over a modular system of robots; perhaps in the future, smaller versions of the same thing could be the components of Stuff 3.0.

You can imagine that with machine learning algorithms, such swarms of robots might be able to negotiate obstacles and respond to a changing environment more easily than an individual robot (those of you with techno-fear may read “respond to a changing environment” and imagine a robot seamlessly rearranging itself to allow a bullet to pass straight through without harm).

Speaking of robotics, the form of an ideal robot has been a subject of much debate. In fact, one of the major recent robotics competitions—DARPA’s Robotics Challenge—was won by a robot that could adapt, beating Boston Dynamics’ infamous ATLAS humanoid with the simple addition of a wheel that allowed it to drive as well as walk.

Rather than building robots into a humanoid shape (only sometimes useful), allowing them to evolve and discover the ideal form for performing whatever you’ve tasked them to do could prove far more useful. This is particularly true in disaster response, where expensive robots can still be more valuable than humans, but conditions can be very unpredictable and adaptability is key.

Further afield, many futurists imagine “foglets” as the tiny nanobots that will be capable of constructing anything from raw materials, somewhat like the “Santa Claus machine.” But you don’t necessarily need anything quite so indistinguishable from magic to be useful. Programmable matter that can respond and adapt to its surroundings could be used in all kinds of industrial applications. How about a pipe that can strengthen or weaken at will, or divert its direction on command?

We’re some way off from being able to order our beds to turn into bicycles. As with many tech ideas, it may turn out that the traditional low-tech solution is far more practical and cost-effective, even as we can imagine alternatives. But as the march to put a chip in every conceivable object goes on, it seems certain that inanimate objects are about to get a lot more animated.

Image Credit: PeterVrabel / Shutterstock.com

Posted in Human Robots

#432539 10 Amazing Things You Can Learn From ...

Hardly a day goes by without a research study or article published talking sh*t—or more precisely, talking about the gut microbiome. When it comes to cutting-edge innovations in medicine, all signs point to the microbiome. Maybe we should have listened to Hippocrates: “All disease begins in the gut.”

Your microbiome is mostly located in your gut and contains trillions of little guys and gals called microbes. If you want to optimize your health, biohack your body, make progress against chronic disease, or know which foods are right for you—almost all of this information can be found in your microbiome.

My company, Viome, offers technology to measure your microscopic organisms and their behavior at a molecular level. Think of it as the Instagram of your inner world. A snapshot of what’s happening inside your body. New research about the microbiome is changing our understanding of who we are as humans and how the human body functions.

It turns out the microbiome may be mission control for your body and mind. Your healthy microbiome is part best friend, part power converter, part engine, and part pharmacist. At Viome, we’re working to analyze these microbial functions and recommend a list of personalized food and supplements to keep these internal complex machines in a finely tuned balance.

We now have more information than ever before about what your microbiome is doing, and it’s going to help you and the rest of the world do a whole lot better. The new insights emerging from microbiome research are changing our perception of what keeps us healthy and what makes us sick. This new understanding of the microbiome activities may put an end to conflicting food advice and make fad diets a thing of the past.

What are these new insights showing us? The information is nothing short of mind-blowing. The value of your poop just got an upgrade.

Here are some of the amazing things we’ve learned from our work at Viome.

1. Was Popeye wrong? Why “health food” isn’t necessarily healthy.
Each week a new fad diet is released, discussed, and followed, with the newest “research” anointing some superfood that everyone should eat. But, too often, the fad diet is just a regurgitation of what worked for one person and shouldn’t be followed by everyone else.

For example, we’ve been told to eat our greens and that greens and nuts are “anti-inflammatory,” but this is actually not always true. Spinach, bran, rhubarb, beets, nuts, and nut butters all contain oxalate. We now know that oxalate-containing food can be harmful, unless you have the microbes present that can metabolize it into a non-harmful substance.

Thirty percent of Viome customers do not have the microbes to metabolize oxalates properly. In other words, “healthy foods” like spinach are actually not healthy for these people.

Looks like not everyone should follow Popeye’s food plan.

2. Aren’t foods containing “antioxidants” always good for everyone?
Just like oxalates, polyphenols in foods are usually considered very healthy, but unless you have microbes that utilize specific polyphenols, you may not get the full benefit from them. One example is a substance found in these foods called ellagic acid. We can detect whether your microbiome is metabolizing ellagic acid and converting it into urolithin A. It is only the urolithin A that has anti-inflammatory and antioxidant effects. Without the microbes to do this conversion, you will not benefit from the ellagic acid in foods.

Examples: Walnuts, raspberries, pomegranate, blackberries, pecans, and cranberries all contain ellagic acid.

We have analyzed tens of thousands of people, and only about 50% of the people actually benefit from eating more foods containing ellagic acid.

3. You’re probably eating too much protein (and it may be causing inflammation).
When you think high-protein diet, you think paleo, keto, and high-performance diets.

Protein is considered good for you. It helps build muscle and provide energy—but if you eat too much, it can cause inflammation and decrease longevity.

We can analyze the activity of your microbiome to determine if you are eating too much protein that feeds protein-fermenting bacteria like Alistipes putredinis and Tannerella forsythia, and if these organisms are producing harmful substances such as ammonia, hydrogen sulfide, p-cresol, or putrescine. These substances can damage your gut lining and lead to things like leaky gut.

4. Something’s fishy. Are “healthy foods” causing heart disease?
Choline in certain foods can get converted by bacteria into a substance called trimethylamine (TMA) that is associated with heart disease when it gets absorbed into your body and converted to TMAO. However, TMA conversion doesn’t happen in individuals without these types of bacteria in their microbiome.

We can see the TMA production pathways and many of the gammaproteobacteria that do this conversion.

What foods contain choline? Liver, salmon, chickpeas, split peas, eggs, navy beans, peanuts, and many others.

Before you decide to go full-on pescatarian or paleo, you may want to check if your microbiome is producing TMA with that salmon or steak.

5. Hold up, Iron Man. We can see inflammation from too much iron.
Minerals like iron in your food can, in certain inflammatory microbial environments, promote the growth of pathogens like Escherichia, Shigella, and Salmonella.

Maybe it wasn’t just that raw chicken that gave you food poisoning, but your toxic microbiome that made you sick.

On the other hand, when you don’t have enough iron, you could become anemic, leading to weakness and shortness of breath.

So, just like Iron Man, it’s about finding your balance so that you can fly.

6. Are you anxious or stressed? Your poop will tell you.
Our gut and brain are connected via the vagus nerve. A large majority of neurotransmitters are either produced or consumed by our microbiome. In fact, some 90% of all serotonin (a feel-good neurotransmitter) is produced by your gut microbiome and not by your brain.

When you have a toxic microbiome that’s producing a large amount of toxins like hydrogen sulfide, the lining of your gut starts to deteriorate into what’s known as leaky gut. Think of leaky gut as your gut not having healthy borders or boundaries. And when this happens, all kinds of disease can emerge. When the barrier of the gut breaks down, it starts a chain reaction causing low-grade chronic inflammation—which has been identified as a potential source of depression and higher levels of anxiety, in addition to many other chronic diseases.

We’re not saying you shouldn’t meditate, but if you want to get the most out of your meditation and really reduce your stress levels, make sure you are eating the right food that promotes a healthy microbiome.

7. Your microbiome is better than Red Bull.
If you want more energy, get your microbiome back into balance.

No, you don’t need three pots of coffee to keep you going; you just need a balanced microbiome.

Your microbiome is responsible for calorie extraction, or creating energy, through pathways such as the tricarboxylic acid (TCA) cycle. Our bodies depend on the energy that our microbiome produces.

How much energy we get from our food is dependent on how efficient our microbiome is at converting the food into energy. High-performing microbiomes are excellent at converting food into energy. This is great when you are an athlete and need the extra energy, but if you don’t use up the energy it may be the source of some of those unwanted pounds.

If the microbes can’t or won’t metabolize the glucose (sugar) that you eat, it will be stored as fat. If the microbes are extracting too many calories from your food, or producing lipopolysaccharides (LPS) and causing metabolic endotoxemia that leads to activation of toll-like receptors and insulin resistance, you may end up storing what you eat as fat.

Think of your microbiome as Doc Brown’s car from the future—it can take pretty much anything and turn it into fuel if it’s strong and resilient enough.

8. We can see your joint pain in your poop.
Got joint pain? Your microbiome can tell you why.

Lipopolysaccharide (LPS) is a key pro-inflammatory molecule made by some of your microbes. If your microbes are making too much LPS, it can wreak havoc on your immune system by putting it into overdrive. When your immune system goes on the warpath there is often collateral damage to your joints and other body parts.

Perhaps balancing your microbiome is a better solution than reaching for the glucosamine. Think of your microbiome as the top general of your immune army. It puts your immune system through basic training and determines when it goes to war.

Ideally, your immune system wins the quick battle and gets some rest, but sometimes if your microbiome keeps it on constant high alert it becomes a long, drawn-out war resulting in chronic inflammation and chronic diseases.

Are you really “getting older” or is your microbiome just making you “feel” older because it keeps giving warnings to your immune system ultimately leading to chronic pain?

Before you throw in the towel on your favorite activities, check your microbiome. And, if you have anything with “itis” in it, it’s possible that when you balance your microbiome the inflammation from your “itis” will be reduced.

9. Your gut is doing the talking for your mouth.
When you have low stomach acid, bacteria from your mouth can make it down to your GI tract.

Stomach acid is there to protect you from the bacteria in your mouth and the parasites and fungi that are in your food. If you don’t have enough of it, the bacteria in your mouth will invade your gut. This invasion is associated with and a risk factor for autoimmune disease and inflammation in the gut.

We are learning that low stomach acid is perhaps one of the major causes of chronic disease. This stomach acid is essential to kill mouth bacteria and help us digest our food.

What kinds of things cause low stomach acid? Stress and antacids like Nexium, Zantac, and Prilosec.

10. Carbs can be protein precursors.
Rejoice! Perhaps carbs aren’t as bad as we thought (as long as your microbiome is up to the task). We can see if some of the starches you eat can be made into amino acids by the microbiome.

Our microbiome makes 20% of our branched-chain amino acids (BCAAs) for us, and it will adapt to make these vital BCAAs for us in almost any way it can.

Essentially, your microbiome is hooking up carbons and hydrogens into different formulations of BCAAs, depending on what you feed it. The microbiome is excellent at adapting and pivoting based on the food you feed it and the environment that it’s in.

So, good news: Carbs are protein precursors, as long as you have the right microbiome.

Stop Talking Sh*t Now
Your microbiome is a world-class entrepreneur that can take low-grade sources of food and turn them into valuable and usable energy.

You have a best friend and confidant within you that is working wonders to make sure you have energy and that all of your needs are met.

And, just like a best friend, if you take great care of your microbiome, it will take great care of you.

Given the research emerging daily about the microbiome and its importance on your quality of life, prioritizing the health of your microbiome is essential.

When you have a healthy microbiome, you’ll have a healthy life.

It’s now clear that some of the greatest insights for your health will come from your poop.

It’s time to stop talking sh*t and get your sh*t together. Your life may depend on it.

Viome can help you identify what your microbiome is actually doing. The combination of Viome’s metatranscriptomic technology and cutting-edge artificial intelligence is paving a brand new path forward for microbiome health.

Image Credit: WhiteDragon / Shutterstock.com

Posted in Human Robots

#432512 How Will Merging Minds and Machines ...

One of the most exciting and frightening outcomes of technological advancement is the potential to merge our minds with machines. If achieved, this would profoundly boost our cognitive capabilities. More importantly, however, it could be a revolution in human identity, emotion, spirituality, and self-awareness.

Brain-machine interface technology is already being developed by pioneers and researchers around the globe. It’s still early and today’s tech is fairly rudimentary, but it’s a fast-moving field, and some believe it will advance faster than generally expected. Futurist Ray Kurzweil has predicted that by the 2030s we will be able to connect our brains to the internet via nanobots that will “provide full-immersion virtual reality from within the nervous system, provide direct brain-to-brain communication over the internet, and otherwise greatly expand human intelligence.” Even if the advances are less dramatic, however, they’ll have significant implications.

How might this technology affect human consciousness? What about its implications for our sentience, self-awareness, or subjective experience of the illusion of self?

Consciousness can be hard to define, but a holistic definition often encompasses many of our most fundamental capacities, such as wakefulness, self-awareness, meta-cognition, and sense of agency. Beyond that, consciousness represents a spectrum of awareness, as seen across various species of animals. Even humans experience different levels of existential awareness.

From psychedelics to meditation, there are many tools we already use to alter and heighten our conscious experience, both temporarily and permanently. These tools have been said to contribute to a richer life, with the potential to bring experiences of beauty, love, inner peace, and transcendence. Relatively non-invasive, these tools show us what a seemingly minor imbalance of neurochemistry and conscious internal effort can do to the subjective experience of being human.

Taking this into account, what implications might emerging brain-machine interface technologies have on the “self”?

The Tools for Self-Transcendence
At the basic level, we are currently seeing the rise of “consciousness hackers” using techniques like non-invasive brain stimulation through EEG, nutrition, virtual reality, and ecstatic experiences to create environments for heightened consciousness and self-awareness. In Stealing Fire, Steven Kotler and Jamie Wheal explore this trillion-dollar altered-states economy and how innovators and thought leaders are “harnessing rare and controversial states of consciousness to solve critical challenges and outperform the competition.” Beyond enhanced productivity, these altered states expose our inner potential and give us a glimpse of a greater state of being.

Expanding consciousness through brain augmentation and implants could one day be just as accessible. Researchers are working on an array of neurotechnologies as simple and non-invasive as electrode-based EEGs to invasive implants and techniques like optogenetics, where neurons are genetically reprogrammed to respond to pulses of light. We’ve already connected two brains via the internet, allowing the two to communicate, and future-focused startups are researching the possibilities too. With an eye toward advanced brain-machine interfaces, last year Elon Musk unveiled Neuralink, a company whose ultimate goal is to merge the human mind with AI through a “neural lace.”

Many technologists predict we will one day merge with and, more speculatively, upload our minds onto machines. Neuroscientist Kenneth Hayworth writes in Skeptic magazine, “All of today’s neuroscience models are fundamentally computational by nature, supporting the theoretical possibility of mind-uploading.” This might include connecting with other minds using digital networks or even uploading minds onto quantum computers, which can be in multiple states of computation at a given time.

In their book Evolving Ourselves, Juan Enriquez and Steve Gullans describe a world where evolution is no longer driven by natural processes. Instead, it is driven by human choices, through what they call unnatural selection and non-random mutation. With advancements in genetic engineering, we are indeed seeing evolution become an increasingly conscious process with an accelerated pace. This could one day apply to the evolution of our consciousness as well; we would be using our consciousness to expand our consciousness.

What Will It Feel Like?
We may be able to come up with predictions of the impact of these technologies on society, but we can only wonder what they will feel like subjectively.

It’s hard to imagine, for example, what our stream of consciousness will feel like when we can process thoughts and feelings 1,000 times faster, or how artificially intelligent brain implants will impact our capacity to love and hate. What will the illusion of “I” feel like when our consciousness is directly plugged into the internet? Overall, what impact will the process of merging with technology have on the subjective experience of being human?

The Evolution of Consciousness
In The Future Evolution of Consciousness, Thomas Lombardo points out, “We are a journey rather than a destination—a chapter in the evolutionary saga rather than a culmination. Just as probable, there will also be a diversification of species and types of conscious minds. It is also very likely that new psychological capacities, incomprehensible to us, will emerge as well.”

Humans are notorious for fearing the unknown. For anyone who has never experienced an altered state, be it spiritual or psychedelic-induced, it is difficult to comprehend the subjective experience of that state. This is why many refer to their first altered-state experience as “waking up”: until then, they didn’t realize they were asleep.

Similarly, exponential neurotechnology represents the potential for higher states of consciousness and a range of experiences unimaginable from our current default state.

Our capacity to think and feel is set by the boundaries of our biological brains. To transform and expand these boundaries is to transform and expand the first-hand experience of consciousness. Emerging neurotechnology may end up providing the awakening our species needs.

Image Credit: Peshkova / Shutterstock.com

Posted in Human Robots

#432352 Watch This Lifelike Robot Fish Swim ...

Earth’s oceans are having a rough go of it these days. On top of being the repository for millions of tons of plastic waste, global warming is affecting the oceans and upsetting marine ecosystems in potentially irreversible ways.

Coral bleaching, for example, occurs when warming water temperatures or other stress factors cause coral to cast off the algae that live on them. The coral goes from lush and colorful to white and bare, and sometimes dies off altogether. This has a ripple effect on the surrounding ecosystem.

Warmer water temperatures have also prompted many species of fish to move closer to the north or south poles, disrupting fisheries and altering undersea environments.

To keep these issues in check or, better yet, try to address and improve them, it’s crucial for scientists to monitor what’s going on in the water. A paper released last week by a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) unveiled a new tool for studying marine life: a biomimetic soft robotic fish, dubbed SoFi, that can swim with, observe, and interact with real fish.

SoFi isn’t the first robotic fish to hit the water, but it is the most advanced robot of its kind. Here’s what sets it apart.

It swims in three dimensions
Up until now, most robotic fish could only swim forward at a given water depth, advancing at a steady speed. SoFi blows older models out of the water. It’s equipped with side fins called dive planes, which move to adjust its angle and allow it to turn, dive downward, or head closer to the surface. Its density and thus its buoyancy can also be adjusted by compressing or decompressing air in an inner compartment.
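That buoyancy trick is just Archimedes’ principle at work: expanding the air compartment increases the volume of water the robot displaces, so it rises; compressing the air lets it sink. Here is a minimal sketch of the idea, not SoFi’s actual control code, with all masses and volumes chosen purely for illustration:

```python
# Illustrative sketch of buoyancy control via an adjustable air compartment.
# None of these numbers come from the MIT paper; they are assumptions.

WATER_DENSITY = 1025.0  # kg/m^3, typical seawater
G = 9.81                # m/s^2

def net_buoyant_force(mass_kg, hull_volume_m3, air_volume_m3):
    """Archimedes: upward force equals the weight of displaced water
    minus the robot's own weight. A larger air compartment displaces
    more water, pushing the net force positive (robot rises)."""
    displaced = hull_volume_m3 + air_volume_m3
    return WATER_DENSITY * G * displaced - mass_kg * G

# A hypothetical 1.58 kg robot with a 1.5-liter hull: small changes in
# the air compartment flip it between sinking and rising.
for air_ml in (20, 40, 60):
    f = net_buoyant_force(1.58, 1.5e-3, air_ml * 1e-6)
    print(f"{air_ml} mL of air -> net vertical force {f:+.2f} N")
```

In a real robot, a depth controller would servo the compartment volume around the neutral point rather than jump between fixed settings.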

“To our knowledge, this is the first robotic fish that can swim untethered in three dimensions for extended periods of time,” said CSAIL PhD candidate Robert Katzschmann, lead author of the study. “We are excited about the possibility of being able to use a system like this to get closer to marine life than humans can get on their own.”

The team took SoFi to the Rainbow Reef in Fiji to test out its swimming skills, and the robo fish didn’t disappoint—it was able to swim at depths of over 50 feet for 40 continuous minutes. What keeps it swimming? A lithium polymer battery just like the one that powers our smartphones.

It’s remote-controlled… by Super Nintendo
SoFi has sensors to help it see what’s around it, but it doesn’t have a mind of its own yet. Rather, it’s controlled by a nearby scuba-diving human, who can send it commands related to speed, diving, and turning. The best part? The commands come from an actual repurposed (and waterproofed) Super Nintendo controller. What’s not to love?

Image Credit: MIT CSAIL
Previous robotic fish built by this team had to be tethered to a boat, so the fact that SoFi can swim independently is a pretty big deal. Communication between the fish and the diver was most successful when the two were less than 10 meters apart.

It looks real, sort of
SoFi’s side fins are a bit stiff, and its camera may not pass for natural—but otherwise, it looks a lot like a real fish. This is mostly thanks to the way its tail moves; a motor pumps water between two chambers in the tail, and as one chamber fills, the tail bends towards that side, then towards the other side as water is pumped into the other chamber. The result is a motion that closely mimics the way fish swim. Not only that, the hydraulic system can change the water flow to get different tail movements that let SoFi swim at varying speeds; its average speed is around half a body length (21.7 centimeters) per second.
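The pumping cycle described above amounts to an oscillating tail angle: the tail bends toward whichever chamber is fuller, and the pump alternates which one that is. A minimal sketch of that motion, assuming a sinusoidal fill cycle and an illustrative maximum bend angle (neither is taken from the MIT paper):

```python
import math

# Hedged sketch of the two-chamber tail mechanism, not MIT's code:
# the left/right chamber fill difference oscillates, and the tail
# bend angle follows it. MAX_BEND_DEG and the beat frequency are
# assumed values for illustration.

MAX_BEND_DEG = 30.0  # assumed maximum tail deflection

def tail_angle(t, freq_hz=1.4, amplitude=1.0):
    """Tail angle in degrees at time t (seconds). 'amplitude' (0..1)
    stands in for how much water the pump moves per stroke; reducing
    it gives a smaller bend and a slower swimming speed."""
    fill_difference = amplitude * math.sin(2 * math.pi * freq_hz * t)
    return MAX_BEND_DEG * fill_difference

# One second of tail motion, sampled at 10 Hz:
trajectory = [tail_angle(t / 10) for t in range(11)]
print([f"{a:+.1f}" for a in trajectory])
```

Varying the flow rate or stroke amplitude, as the hydraulic system does, changes how hard each tail beat pushes, which is how SoFi reaches different speeds around its average of half a body length per second.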

Besides looking neat, it’s important that SoFi look lifelike so it can blend in with marine life rather than scare real fish away, letting it get close enough to observe them.

“A robot like this can help explore the reef more closely than current robots, both because it can get closer more safely for the reef and because it can be better accepted by the marine species,” said Cecilia Laschi, a biorobotics professor at the Sant’Anna School of Advanced Studies in Pisa, Italy.

Just keep swimming
It sounds like this fish is nothing short of a regular Nemo. But its creators aren’t quite finished yet.

They’d like SoFi to be able to swim faster, so they’ll work on improving the robo fish’s pump system and streamlining its body and tail design. They also plan to tweak SoFi’s camera to help it follow real fish.

“We view SoFi as a first step toward developing almost an underwater observatory of sorts,” said CSAIL director Daniela Rus. “It has the potential to be a new type of tool for ocean exploration and to open up new avenues for uncovering the mysteries of marine life.”

The CSAIL team plans to make a whole school of SoFis to help biologists learn more about how marine life is reacting to environmental changes.

Image Credit: MIT CSAIL

Posted in Human Robots