Over the past few decades, computer scientists have been trying to train robots to tackle a variety of tasks, including house chores and manufacturing processes. One of the most renowned strategies used to train robots on manual tasks is imitation learning.
In 2020, scientists made global headlines by creating “xenobots”—tiny “programmable” living things made of several thousand frog stem cells.
These pioneer xenobots could move around in fluids, and scientists claimed they could be useful for monitoring radioactivity, pollutants, drugs, or diseases. Early xenobots survived for up to ten days.
A second wave of xenobots, created in early 2021, showed unexpected new properties. These included self-healing and longer life. They also showed a capacity to cooperate in swarms, for example by massing into groups.
Last week, the same team of biology, robotics, and computer scientists unveiled a new kind of xenobot. Like previous xenobots, they were created using artificial intelligence to virtually test billions of prototypes, sidestepping the lengthy trial-and-error process in the lab. But the latest xenobots have a crucial difference: this time, they can self-replicate.
Hang On, What? They Can Self-Replicate?!
The new xenobots are a bit like Pac-Man. As they swim around they can gobble up other frog stem cells and assemble new xenobots just like themselves. They can sustain this process for several generations.
But they don’t reproduce in a traditional biological sense. Instead, they fashion the groups of frog cells into the right shape, using their “mouths.” Ironically, the recently extinct Australian gastric-brooding frog uniquely gave birth to babies through its mouth.
The latest advance brings scientists a step closer to creating organisms that can self-replicate indefinitely. Is this as much of a Pandora’s Box as it sounds like?
Conceptually, human-designed self-replication is not new. In 1966, the influential mathematician John von Neumann discussed “self-reproducing automata.” Famously, Eric Drexler, the US engineer credited with founding the field of nanotechnology, referred to the potential of “grey goo” in his 1986 book Engines of Creation. He envisaged nanobots that replicated incessantly and devoured their surroundings, transforming everything into a sludge made of themselves.
Although Drexler subsequently regretted coining the term, his thought experiment has frequently been used to warn about the risks of developing new biological matter.
In 2002, without the help of AI, an artificial polio virus created from tailor-made DNA sequences became capable of self-replication. Although the synthetic virus was confined to a lab, it was able to infect and kill mice.
Possibilities and Benefits
The researchers who created the new xenobots say their main value is in demonstrating advances in biology, AI, and robotics.
Future robots made from organic materials might be more eco-friendly, because they could be designed to decompose rather than persist. They might help address health problems in humans, animals, and the environment. They might contribute to regenerative medicine or cancer therapy.
Xenobots could also inspire art and new perspectives on life. Strangely, xenobot “offspring” are made in their parents’ image, but are not made of or from them. As such, they replicate without truly reproducing in the biological sense.
Perhaps there are alien life forms that assemble their “children” from objects in the world around them, rather than from their own bodies?
What Are the Risks?
It might be natural to have instinctive reservations about xenobot research. One xenobot researcher said there is a “moral imperative” to study these self-replicating systems, yet the research team also recognizes legal and ethical concerns with their work.
Centuries ago, English philosopher Francis Bacon raised the idea that some research is too dangerous to do. While we don’t believe that’s the case for current xenobots, it may be so for future developments.
Any hostile use of xenobots, or the use of AI to design DNA sequences that would give rise to deliberately dangerous synthetic organisms, is banned under international agreements including the Biological Weapons Convention, the 1925 Geneva Protocol, and the Chemical Weapons Convention.
However, the use of these creations outside of warfare is less clearly regulated.
The interdisciplinary nature of these advances, including AI, robotics, and biology, makes them hard to regulate. But it is still important to consider potentially dangerous uses.
There is a useful precedent here. In 2017, the US national academies of science and medicine published a joint report on the burgeoning science of human genome editing.
It outlined conditions under which scientists should be allowed to edit human genes in ways that allow the changes to be passed on to subsequent generations. It advised this work should be limited to “compelling purposes of treating or preventing serious disease or disability,” and even then only with stringent oversight.
Both the US and UK now allow human gene editing under specific circumstances. But creating new organisms that could perpetuate themselves was far beyond the scope of these reports.
Looking Into the Future
Although xenobots are not currently made from human embryos or stem cells, it is conceivable they could be. Their creation raises similar regulatory questions about creating and modifying self-perpetuating life forms.
At present, xenobots do not live long and only replicate for a few generations. Still, as the researchers say, living matter can behave in unforeseen ways, and these will not necessarily be benign.
We should also consider potential impacts on the non-human world. Human, animal, and environmental health are intimately linked, and organisms introduced by humans can wreak inadvertent havoc on ecosystems.
What limits should we place on science to avoid a real-life “grey goo” scenario? It’s too early to be completely prescriptive. But regulators, scientists, and society should carefully weigh up the risks and rewards.
This article is republished from The Conversation under a Creative Commons license. Read the original article.
Image Credit: An AI-designed “parent” organism (C shape; red) beside stem cells that have been compressed into a ball (“offspring”; green). Douglas Blackiston and Sam Kriegman
This is a sponsored article brought to you by NYU Tandon School of Engineering.
New York University’s Tandon School of Engineering is on the eve of launching a new robotics initiative that promises to take a unique approach to both research and teaching as engineering and academic disciplines, and build upon decades of robotics at the school.
As the details are being finalized after years of planning, we had the opportunity to talk to four roboticists serving as the principal organizers of the new initiative, which will build on existing Tandon strengths across more than a dozen robotics faculty, and eventually will seek to incorporate additional researchers from Tandon and across other NYU schools.
These researchers were recruited to NYU Tandon as part of a “cluster hire” over two years from 2017-2019 to anchor Tandon Dean Jelena Kovačević’s vision for cross-department collaboration in a wide-ranging robotics program.
While their work frequently intersects and they often collaborate on projects, each of these researchers approaches robotics from a unique point of view.
Giuseppe Loianno, with a background in perception, learning, and control for autonomous robotics, explores robot autonomy, especially for drones and other airborne robots. He leads the Agile Robotics and Perception Lab (ARPL) and is also a member of NYU WIRELESS and NYU CUSP. The lab performs fundamental and applied research in robot autonomy, creating agile autonomous machines that operate in unstructured and dynamically changing environments without relying on any external infrastructure, and that improve their autonomous behaviors by learning from experience. With projects like Aerial Co-Workers, supported by the NSF in partnership with Atashzar and Feng, as well as collaborations with the Army Research Laboratory and several industry partners, his lab is also investigating ways to make robots more agile and more collaborative, both with each other and with humans. Read more about his work in an IEEE Spectrum article published earlier this year.
S. Farokh Atashzar has devoted much of his professional career to developing cyber-physical systems and robotics for medical and wellness applications, with a current focus on telesurgery and telerehabilitation combined with next-generation telecommunications capabilities. He recently received an equipment donation from the Intuitive Foundation comprising a da Vinci Research Kit, a surgical system that will allow his team to devise means by which a surgeon in one location can operate on a patient located somewhere else: a different city, region, or even continent. As part of his work leading the Medical Robotics and Interactive Intelligent Technologies (MERIIT) Laboratory within NYU WIRELESS and NYU CUSP, he is also working on cutting-edge human-machine interface technologies enabling neuro-to-device capabilities, with direct applications to exoskeletal devices, next-generation prosthetics, and rehabilitation robots. He has developed active collaborations with the NYU School of Medicine and the US Food and Drug Administration (FDA). His research is supported by the National Science Foundation.
Ludovic Righetti leads the Machines in Motion Laboratory at NYU Tandon. There, his team invents algorithms to make robots that walk and manipulate objects more autonomous, versatile, and safe to interact with. His novel approaches to machine learning and optimal control could lead to robots that “understand” when and how to interact with their environment and various objects, varying strength, forces, and more based on an object’s material, function, and purpose. Besides creating new possibilities in the field of autonomous machines, he is making robots accessible to more researchers with the Solo 8 and Solo 12 projects, low-cost, open-source alternatives to prohibitively expensive quadruped robots. His lab’s work at NYU WIRELESS, at the intersection of robotics and wireless telecommunications, includes devising cloud-based whole-body control of legged robots over a 5G link.
Chen Feng has brought his background in civil, electrical, and geospatial engineering to bear on computer vision and robotic perception applications for construction and manufacturing. With funding support from both the NSF and the C2SMART Tier 1 University Transportation Center at NYU Tandon, he has applied his expertise in visual simultaneous localization and mapping (vSLAM) and deep learning to develop technologies for autonomous driving, assisted living, and construction robotics, holding several patents on algorithmic processes for these applications. As head of the multidisciplinary research group Automation and Intelligence for Civil Engineering (AI4CE), he is advancing robot vision and machine learning through multidisciplinary, use-inspired basic research. One example, Collective Additive Manufacturing, is a collaborative project aimed at developing both the theories and the system by which a team of autonomous mobile robots could jointly print large-scale 3D structures. Another collaborative project, ARM4MOD, aims at streamlining modular building construction from design to fabrication to installation using quadruped robots that can project sophisticated visual maps over physical surfaces. He is also affiliated with the Center for Urban Science + Progress (CUSP).
A New Robotics Initiative
Here is our interview with the four researchers.
Q: Can you talk about how you found your way to Tandon and what the impetus was for you to be part of launching a new robotics initiative? How did you all find each other and how did the idea for the initiative develop?
While several of us have appointments in other departments, the four of us all have appointments in mechanical engineering, and we work a lot together already. We all joined NYU Tandon within a couple of years of each other and have collaborated from the very beginning. There are three things we share in common that are forming the groundwork for this new initiative.
One thing we have all believed in from the beginning is a shared robotics facility between the four of us and incoming faculty that will join us next year. We want to have experimental facilities that are more than just the sum of our labs. As a result, the school is now investing in a new facility where we will have a bit more than 4,000 square feet of experimental space. The four of us designed it not as distinct labs, but as a truly collective space that allows us to do more than what each lab can do individually.
Another thing we all share is that the four of us are working on the algorithmic foundations of robotics, ranging from control, planning, learning, human-robot interaction, and perception.
And finally, we all work on complementary applications of robotics that can meaningfully be applied to improve the lives of people.
We came to Tandon knowing that there is strong enthusiasm and support from the leadership, combined with plans for a unique shared space. From the very first day of joining NYU, we started a conversation about how we can collaborate on different aspects of this new initiative. Being jointly appointed between multiple departments allows us to be the bridge for collaboration between various sectors of Tandon focusing on robotics.
My vision regarding the initiative is that it is not just about the space and the summation of what we do. Creating a shared physical hub leads to a convolution of what we do. It's the work that we do together and our interactions that result in novel projects, new concepts, new visions.
It was also very important for me to be at a university that has a strong connection between engineering and medicine. Thus, NYU is one of the best candidates. NYU is also designed to be part of the fabric of the city—the fact that New York City is our campus makes it very unique and special. We make robots for a future smart and connected society.
One thing that makes our hires interesting is that we were all jointly appointed. For example, Ludovic, Giuseppe, and Farokh are jointly appointed within the electrical and mechanical engineering departments, I am jointly appointed within civil and mechanical engineering departments and we are all part of a couple of different centers. For example, all four of us are part of the NYU Center for Urban Science + Progress (CUSP), and then there are other centers, such as NYU WIRELESS and C2SMART that some of us are also part of. This is important because the future of robotics involves much more than robotics — it’s the intersection of robotics and advanced AI, robotics and wireless communications, robotics and biomedical engineering, robotics and civil engineering, and so on.
The fact that we are already connected to so many existing departments and centers at NYU is essential, because we are aware of what's happening in different departments and we are already collaborating with a very large number of faculty. This design helps us leverage our different networks and resources at Tandon collectively toward identifying possible collaborations and applying all the resources towards the success of this initiative, this space, and this work.
Before joining NYU Tandon, I was at the University of Pennsylvania and looking for the next step in my career. Definitely, NYU Tandon was and still is the right place with a strong enthusiasm for robotics, a nice future perspective with the possibility to have a strong future research and technological impact, a supportive environment, and within a multi-cultural city environment with tremendous potential for growth as well as strong external collaborations.
Another aspect to the launching of the initiative here that I think is important, and is going to play a major role in the science, is the fact that we are in New York City. The tech ecosystem is growing here. There are indications that in the next five years, it's probably going to be the same or even more vibrant than places like Silicon Valley. This represents a huge opportunity for us because this gives us the ability to create startups and interact with worlds that are in and outside of academia. For example, we are in proximity to the Brooklyn Navy Yard, which can act as an incubator and has lots of space that is available for us to leverage.
Additionally, what makes this initiative unique is that we are bringing together new and futuristic visions for the school, and how we approach research and education that can impact the next 10 years of robotics.
Strong Focus on Collaboration
Q: What do you see as being a unique characteristic of an NYU Tandon Robotics Initiative? How do you see it differing from other university robotics research centers and initiatives?
You can’t compare us with larger robotics institutes in terms of size. However, what is unique is the physical space and shared infrastructure. In our research we want to be able to reproduce situations that you would encounter in a city. So, we’re looking at how we can take the robots out of the labs and test them in real-world situations across the city.
The other thing that we’ve all alluded to is that this initiative has been and will be truly collaborative. We share grant money, we’re all on each other's PhD students committees. We conceive new classes together; we launched a robotics minor together.
We are four young roboticists with minimum overlap—just enough to effectively collaborate. This works amazingly well because we all completely understand our shared scientific language, but the applications are quite different.
Something that makes us different is the educational program that we put together. We have a unique number of robotics courses. I believe there are very few schools in this country that can provide this many graduate-level and PhD-level robotics courses. I think collectively we have seven graduate-level robotics courses. We also have a minor in robotics, in which we are teaching four different robotics courses for undergraduate students.
Our vision is not just for the research, but also in terms of the education we provide, and how research and education influence each other in a bidirectional manner. We are designing quite a broad and in-depth program of research-informed education for undergraduates, masters and PhD students that will grow over time.
Before coming to Tandon, I was working in industry as a research scientist at Mitsubishi Electric Research Laboratories (MERL) in Cambridge. I was looking to go back to academia because I like to interact with students, and I wanted to do more fundamental robotics research that really requires academic freedom.
Also, I was trained in the civil engineering department. I got my PhD in civil engineering working on construction robotics. Not many schools had invested in construction robotics when I graduated with my PhD, so I had almost given up on returning to academia in this field. And then suddenly, I saw that NYU was investing in this area, becoming one of the first few civil engineering departments in the country to invest in construction automation.
This ability to focus on construction automation is something that really makes the initiative appealing to me. Historically, construction has been a low-tech industry that has not benefited from automation technologies. But now, with the new Federal infrastructure bill that recently passed, the whole nation will be spending a lot of money on infrastructure. We will witness growing opportunities for automation and robotics that can help in terms of maintaining and renewing the nation’s civil infrastructure. I feel that the robotics initiative at NYU places us in a unique position compared to other big robotics-focused schools because we're located in this geographically interesting area. Being in a dense, urban environment gives us lots of real-world civil infrastructure problems that enable us to think about how we can use robotics to improve these infrastructure projects to ultimately improve the quality of life for the citizens living here. So that's something very exciting to me, and I think it's something unique.
As I said earlier, because of where we are located, opportunities are just around the corner—there are a lot of investors in New York, and we are in a site that's close to space that can be directly used to establish startups like our Future Labs and the Brooklyn Navy Yard.
The unique aspect of this is how we connect our research that starts from foundational concepts and is then applied to urban problems.
Robotics Applications…and Challenges
Q: Can you talk about both the academic and commercial landscape of robotics today, and what challenges each of you feel need to be addressed in these areas to better promote the development of robotics in general?
Many people see that robotics can be useful, but for really small tasks. They don't see the benefits in terms of a long-term perspective.
New York City is a unique setting that's not available anywhere else in the U.S., maybe the world, for demonstrating how robotics can be useful not only in general life tasks, but also applied to complex urban scenarios where many dimensions are coming together. We just saw the pandemic, and construction is changing. You need lots of monitoring and inspection for guaranteeing security for example, and at the same time for infrastructure monitoring. It’s a really unique type of setting where all these aspects come together in a multicultural urban environment. There’s lots of potential energy in terms of manpower, minds, capital, and space that, grouped together, can really show robotics’ bigger potential. I think that makes this a really interesting place for investors to see what’s next in potential technologies.
There’s a lot of hype around robotics. There are companies that are promising things that we know are probably not going to happen because we do not have the technology to get robots to be fully-autonomous in unstructured environments. Matching the promise of what we say will happen and what actually will happen is important. We need to have a reality check about what we can actually do or will reasonably be able to do in the near future to not disappoint the public and the industry with unrealistic expectations.
Robots are really good in settings where you can control the environment. Robots become much less capable when they are autonomous in an environment that we do not have control over. This means when you have people around, when you have the mess surrounding a construction site, when you have disasters and things like that, they are not very reliable yet.
We need to translate what we do and how we formulate problems so that we are working towards meaningful and credible automation. We need to create critical robotics that are actually useful for people, useful in real settings, not just making promises, but actually trying to think about how we actually solve concrete problems people have. From that point of view, we need to have a dual view — before we commercialize things we develop, let’s also ask: How is it going to impact people's lives? Is this actually going to make their life better or is it not?
One problem in robotics is that people are either overestimating the capability of the current technology, or they are underestimating it. We think our initiative can help uniquely solve this problem through our educational approach. We're bringing STEM students together from all different backgrounds to take our robotics courses. Through these courses they can better understand what the technology can do and what it cannot do right now, and what it may be able to do in the near future versus a long time from now.
By helping them to better understand what the technology is capable of, it will help in terms of setting realistic commercialization expectations. They will not over-promise, and I think this is healthier for the robotics industry in the long run.
A big part of commercialization or industry-focused research is basically testing and evaluating the performance of systems, and to conduct that testing and evaluation you need infrastructure; you need expensive infrastructure in the middle of a city to be connected to what's happening in the city right now. I think that puts our initiative in a unique position.
Getting Robots to “Do Stuff”
Q: Could each of you share a particular area of your research that you’re excited about that will become early areas of interest for the new initiative?
Something that I feel strongly about is a concept called mobile 3D printing, which really needs the support of robotics. The idea is we want to do 3D printing, like concrete 3D printing, using mobile robots and mobile manipulators.
We can even think about sending these robot printers to the Moon and Mars to fabricate bases for us. This is something that Ludovic and I have been working on over the past few years. We're working on establishing the theoretical foundations of this, and it does have lots of commercial potential as well. This is something I'm really passionate about and I want to spend my time solving this problem.
I work mainly in the area of aerial robotics. Our main goal is to make our robots unmanned, smaller, more agile, resilient, and collaborative. We're looking at problems related to safe and fast navigation in unknown environments, and how multiple vehicles can collaborate with each other. For example, not only in terms of swarms but also in terms of how multiple vehicles can physically interact with each other for problems related to transportation or manipulation of large objects, and how they can collaborate with each other and with humans.
My main goal is really to improve the autonomy of these kinds of machines and make them smarter and faster, and more collaborative. And this has an impact in a wide range of problems related to security, search & rescue, to transportation of goods. You can imagine, for example, transportation of goods after a natural disaster or even urban delivery, which now is done using basically ground vehicles. One day, it can potentially be done using autonomous drones.
I am really interested in understanding the algorithmics of movement. I am working on understanding how we get a robot to move and to “do stuff” reliably. So doing stuff means I'm working with big robots, robots that can walk around either as quadrupeds or bipeds. Not only walk around, but also use objects and do any type of tasks in any type of environment. And if something goes wrong, they can figure it out, maybe learn from it and improve over time. How do we solve it? I'm very excited about figuring this out.
I am interested in human-centric robotics in the field of medical robotics. So, my lab has three main focuses. One main focus, which is more fundamental, is autonomous networked robotics. We are trying to connect robots over networks, and we work on a multi-agent network of robots and understand how the distributed delay can affect the reliability, efficacy, and performance and how we can use local autonomy to share the performance between the machine and human. In this context we work on artificial intelligence, nonlinear control, and information theory.
The second part of my work is rehabilitation for patients with stroke and spinal cord injuries, and how we can make a robotic system that can help them regain the lost sensorimotor functionality. In this context we will focus on making an algorithmic bridge between neurorobot intelligence and human cognition.
The third aspect of my work is surgical robotics. When it comes to surgical robotics, I'm interested in autonomous surgery. Like any other technology in its infancy (e.g., autonomous driving 15 years ago), autonomous surgery may still sound like science fiction at this time. But while it’s not yet happening on the scale that we want, it will happen in the future, when access to surgeons is limited (space operations, for example).
Robots to Improve Lives
Q: With each of you bringing a different background in robotics into this new initiative, can you talk about how you see the initiative being organized in general, and how each of your areas of expertise will play a role in the greater whole?
We want to avoid rigid pre-defined groupings that might limit the potential of the initiative’s work. The shared physical space will make it possible for students and faculty to better collaborate with each other and see each other’s work, which is currently a challenge because we are still in separate physical spaces.
In a shared physical space, people can engage and start a collaboration involving the cross-fertilization of each other’s expertise to explore new concepts and make new contributions to science and technology.
Students are already in a unique position at NYU because they don’t just get classes in robotics and engineering. They have a large portfolio of classes, such as in AI, in the medical field, in mathematics, in the humanities, and so on because the NYU network is quite large.
One way we are thinking about the space being loosely organized is around the functions of the robots that we hope to bring to reality. We have a field robotics area, an aerial robotics area, a service robot area, a healthcare robotics area. These are different functional areas. We are purposefully, however, not thinking of these as distinct groups. It’s different functional areas of expertise operating together in a shared space to deliver on the shared vision of this initiative: to improve people's lives in the city.
In their efforts to create smart robots, AI researchers have understandably tended to focus on the brains. But a group from MIT say AI can help us design better bodies for them too, and we should be doing both in parallel.
For a robot to solve a task, its brain and its body have to sync up perfectly to get the job done. That means that an effective AI controller that’s good at piloting one kind of body won’t necessarily work well for one that’s very different.
The standard approach is to simply design a robot body—either by hand or using AI design tools—and then train an AI to control it. But an even better solution is to carry out both processes simultaneously so that the control AI can give feedback on how changes to the body make it easier or more difficult to solve the problem.
This is known as co-design, and it’s not entirely new. But running these two optimization processes in parallel is very complicated, and it can take a long time to reach a useful solution. Because the design algorithm has to try out thousands of different configurations, the approach only works in simulation, and typically, researchers have to build a testing environment from scratch or heavily adapt existing robot training simulations.
All this takes a lot of work, which has led to most co-design environments focusing on a small number of simple tasks. And because most have been developed by separate groups, it’s not easy to compare results across them.
In an attempt to solve these problems, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has created a co-design simulator called Evolution Gym that allows researchers to test out their approaches on a wide range of tasks and terrains using a highly customizable robot design framework. The simulator has also been designed so that groups with fewer computing resources can still use it.
“With Evolution Gym we’re aiming to push the boundaries of algorithms for machine learning and artificial intelligence,” MIT’s Jagdeep Bhatia said in a press release. “By creating a large-scale benchmark that focuses on speed and simplicity, we not only create a common language for exchanging ideas and results within the reinforcement learning and co-design space, but also enable researchers without state-of-the-art compute resources to contribute to algorithmic development in these areas.”
For simplicity, the simulator, which will be presented at the Conference on Neural Information Processing Systems this week, only works in two dimensions. The team has designed 30 unique tasks, which include things like walking, jumping over obstacles, carrying or pulling objects, and crawling under barriers, and researchers can also design their own challenges.
The environment allows design algorithms to build robots by linking together squares that can be soft, rigid, or actuators—essentially muscles that enable the rest of the robot to move. An AI system then learns how to pilot this body and gives the design algorithm feedback on how good it was at different tasks.
By repeating this process many times the two algorithms can reach the best possible combination of body layout and control system to solve the challenge.
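The loop described above can be sketched in a few lines of code. This is a deliberately toy illustration, not Evolution Gym's actual API: the body is a 2D grid of cell types, a stand-in "controller training" step replaces the physics-based reinforcement learning, and all function names and the reward surface are invented for the example.

```python
import random

# Cell types in an Evolution Gym-style body grid (values are illustrative).
EMPTY, RIGID, SOFT, ACTUATOR = 0, 1, 2, 3

def random_body(rows=5, cols=5, rng=random):
    """Outer-loop proposal: sample a 2D body of empty/rigid/soft/actuator cells."""
    return [[rng.choice([EMPTY, RIGID, SOFT, ACTUATOR]) for _ in range(cols)]
            for _ in range(rows)]

def train_controller(body, rng=random, trials=20):
    """Inner loop: a stand-in for reinforcement learning.

    We search over a scalar 'gain' under a toy reward that favors bodies
    with many actuators and a well-matched gain. A real co-design system
    would instead run many episodes of RL in a physics simulator.
    """
    n_actuators = sum(cell == ACTUATOR for row in body for cell in row)
    best_gain, best_reward = None, float("-inf")
    for _ in range(trials):
        gain = rng.uniform(0.0, 1.0)
        reward = n_actuators * (1.0 - abs(gain - 0.5))  # toy reward surface
        if reward > best_reward:
            best_gain, best_reward = gain, reward
    return best_gain, best_reward

def co_design(generations=30, rng=random):
    """Propose bodies, train a controller for each, keep the best pair."""
    best_body, best_ctrl, best_reward = None, None, float("-inf")
    for _ in range(generations):
        body = random_body(rng=rng)
        ctrl, reward = train_controller(body, rng=rng)
        if reward > best_reward:
            best_body, best_ctrl, best_reward = body, ctrl, reward
    return best_body, best_ctrl, best_reward

body, controller, reward = co_design()
print(f"best reward found: {reward:.2f}")
```

The point of the sketch is the structure, not the numbers: the design search and the control search are nested, so every candidate body is scored by how well a trained controller can exploit it, which is what distinguishes co-design from designing the body first and training the brain afterward.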
To set some benchmarks for their simulator, the researchers tried out three different design algorithms working in conjunction with a deep reinforcement learning algorithm that learned to control the robots through many rounds of trial and error.
The co-designed bots performed well on the simpler tasks, like walking or carrying things, but struggled with tougher challenges, like catching and lifting, suggesting there’s plenty of scope for advances in co-design algorithms. Nonetheless, the AI-designed bots outperformed ones designed by humans on almost every task.
Intriguingly, many of the co-design bots took on similar shapes to real animals. One evolved to resemble a galloping horse, while another, set the task of climbing up a chimney, evolved arms and legs and clambered up somewhat like a monkey.
The simulator has been open-sourced and is free to use, and the team’s hope is that other researchers will now come and try out their co-design algorithms on the platform, which will make it easier to compare results.
“Evolution Gym is part of a growing awareness in the AI community that the body and brain are equal partners in supporting intelligent behavior,” the University of Vermont’s Josh Bongard said in the press release. “There is so much to do in figuring out what forms this partnership can take. Gym is likely to be an important tool in working through these kinds of questions.”
Image Credit: MIT CSAIL via YouTube
Engineered Arts, a robot maker based in the UK, is showing off its latest creation at this year's CES 2022. Called Ameca, the robot displays what appear to be the most human-like facial expressions of any robot to date. On its webpage, the company calls Ameca “The Future Face of Robotics.”