Tag Archives: mentioned

#437610 How Intel’s OpenBot Wants to Make ...

You could make a pretty persuasive argument that the smartphone represents the single fastest area of technological progress we’re going to experience for the foreseeable future. Every six months or so, there’s something with better sensors, more computing power, and faster connectivity. Many different areas of robotics are benefiting from this on a component level, but over at Intel Labs, they’re taking a more direct approach with a project called OpenBot that turns US $50 worth of hardware and your phone into a mobile robot that can support “advanced robotics workloads such as person following and real-time autonomous navigation in unstructured environments.”

This work aims to address two key challenges in robotics: accessibility and scalability. Smartphones are ubiquitous and are becoming more powerful by the year. We have developed a combination of hardware and software that turns smartphones into robots. The resulting robots are inexpensive but capable. Our experiments have shown that a $50 robot body powered by a smartphone is capable of person following and real-time autonomous navigation. We hope that the presented work will open new opportunities for education and large-scale learning via thousands of low-cost robots deployed around the world.

Smartphones point to many possibilities for robotics that we have not yet exploited. For example, smartphones also provide a microphone, speaker, and screen, which are not commonly found on existing navigation robots. These may enable research and applications at the confluence of human-robot interaction and natural language processing. We also expect the basic ideas presented in this work to extend to other forms of robot embodiment, such as manipulators, aerial vehicles, and watercraft.

One of the interesting things about this idea is how not-new it is. The highest-profile phone robot was likely the $150 Romo, from Romotive, which raised a not-insignificant amount of money on Kickstarter in 2012 and 2013 for a little mobile chassis that accepted one of three different iPhone models and could be controlled via another device or operated somewhat autonomously. It featured “computer vision, autonomous navigation, and facial recognition” capabilities, but was really designed to be a toy. Lack of compatibility hampered Romo a bit, and there wasn’t a lot that it could actually do once the novelty wore off.

As impressive as smartphone hardware was in a robotics context (even back in 2013), we’re obviously way, way beyond that now, and OpenBot figures that smartphones now have enough clout and connectivity that turning them into mobile robots is a good idea. You know, again. We asked Intel Labs’ Matthias Müller why now was the right time to launch OpenBot, and he mentioned things like the existence of a large maker community with broad access to 3D printing as well as open source software that makes broader development easier.

And of course, there’s the smartphone hardware: “Smartphones have become extremely powerful and feature dedicated AI processors in addition to CPUs and GPUs,” says Müller. “Almost everyone owns a very capable smartphone now. There has been a big boost in sensor performance, especially in cameras, and a lot of the recent developments for VR applications are well aligned with robotic requirements for state estimation.” OpenBot has been tested with 10 recent Android phones, and since camera placement tends to be similar and USB-C is becoming the charging and communications standard, compatibility is less of an issue nowadays.

Image: OpenBot

Intel researchers created this table comparing OpenBot to other wheeled robot platforms, including Amazon’s DeepRacer, MIT’s Duckiebot, iRobot’s Create 2, and Thymio. The top group includes robots based on RC trucks; the bottom group includes navigation robots for deployment at scale and in education. Note that the cost of the smartphone needed for OpenBot is not included in this comparison.

If you’d like an OpenBot of your own, you don’t need to know all that much about robotics hardware or software. For the hardware, you probably need some basic mechanical and electronics experience—think Arduino project level. The software is a little more complicated; there’s a pretty good walkthrough to get some relatively sophisticated behaviors (like autonomous person following) up and running, but things rapidly degenerate into a command line interface that could be intimidating for new users. We did ask why OpenBot isn’t ROS-based, to leverage the robustness and reach of that community, and Müller said that ROS “adds unnecessary overhead,” although “if someone insists on using ROS with OpenBot, it should not be very difficult.”
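Under the hood, the division of labor is simple: the phone handles sensing and computation, and the body’s Arduino-class microcontroller executes low-level motor commands sent over a USB serial link. As a rough illustration of that pattern, here’s a hypothetical differential-drive command loop in Python; this is not OpenBot’s actual Android code, and the port name, baud rate, and command format are all assumptions:

```python
# Hypothetical sketch of a phone-to-body control link in the OpenBot style.
# The real project uses an Android app and Arduino firmware; the "c<left>,<right>"
# command format, port name, and baud rate here are illustrative assumptions.
import time

import serial  # pyserial


def drive(port: serial.Serial, left: float, right: float) -> None:
    """Send a differential-drive command with wheel speeds in [-1, 1]."""
    left_pwm = int(max(-1.0, min(1.0, left)) * 255)
    right_pwm = int(max(-1.0, min(1.0, right)) * 255)
    port.write(f"c{left_pwm},{right_pwm}\n".encode("ascii"))


if __name__ == "__main__":
    # The robot body enumerates as a USB serial device on the controlling machine.
    with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as body:
        drive(body, 0.5, 0.5)   # forward at half speed
        time.sleep(1.0)
        drive(body, 0.5, -0.5)  # spin in place
        time.sleep(1.0)
        drive(body, 0.0, 0.0)   # stop
```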

Without building OpenBot to explicitly be part of an existing ecosystem, the challenge going forward is to make sure that the project is consistently supported, lest it wither and die like so many similar robotics projects have before it. “We are committed to the OpenBot project and will do our best to maintain it,” Müller assures us. “We have a good track record. Other projects from our group (e.g. CARLA, Open3D, etc.) have also been maintained for several years now.” The inherently open source nature of the project certainly helps, although it can be tricky to rely too much on community contributions, especially when something like this is first starting out.

The OpenBot folks at Intel, we’re told, are already working on a “bigger, faster and more powerful robot body that will be suitable for mass production,” which would certainly help entice more people into giving this thing a go. They’ll also be focusing on documentation, which is probably the most important but least exciting part of building a low-cost, community-focused platform like this. And as soon as they’ve put together a way for us actual novices to turn our phones into robots that can do cool stuff for cheap, we’ll definitely let you know.

Posted in Human Robots

#437150 AI Is Getting More Creative. But Who ...

Creativity is a trait that sets humans apart from other species. We alone have the ability to make music and art that speak to our experiences or illuminate truths about our world. But suddenly, humans’ artistic abilities have some competition—and from a decidedly non-human source.

Over the last couple of years there have been some remarkable examples of art produced by deep learning algorithms. They have challenged our already-elusive definition of creativity and put into perspective how professionals can use artificial intelligence to enhance their abilities and produce work beyond known boundaries.

But when creativity is the result of code written by a programmer, running on a framework built by a software engineer, and drawing on private and public datasets, how do we assign ownership of AI-generated content, and particularly of artwork? McKinsey estimates AI will generate $3.5 to $5.8 trillion in value annually across various sectors.

In 2018, a portrait christened Edmond de Belamy was created by a French art collective called Obvious. The collective used a database of 15,000 portraits painted between the 1300s and the 1900s to train a deep learning algorithm to produce a unique portrait. The painting sold for $432,500 at a New York auction. Similarly, a program called Aiva, trained on thousands of classical compositions, has released albums whose pieces are being used by ad agencies and movies.

The datasets used by these algorithms were different, but behind both there was a programmer who changed the brush strokes or musical notes into lines of code and a data scientist or engineer who fitted and “curated” the datasets to use for the model. There could also have been user-based input, and the output may be biased towards certain styles or unintentionally infringe on similar pieces of art. This shows that there are many collaborators with distinct roles in producing AI-generated content, and it’s important to discuss how they can protect their proprietary interests.

A perspective article published in Nature Machine Intelligence by Jason K. Eshraghian in March looks into how AI artists and the collaborators involved should assess their ownership, laying out some guiding principles that are “only applicable for as long as AI does not have legal personhood, the way humans and corporations are accorded.”

Before looking at how collaborators can protect their interests, it’s useful to understand the basic requirements of copyright law. The artwork in question must be an “original work of authorship fixed in a tangible medium.” Given this principle, the author asked whether it’s possible for AI to exercise creativity, skill, or any other indicator of originality. The answer is still straightforward—no—or at least not yet. Currently, AI’s range of creativity doesn’t exceed the standard used by the US Copyright Office, which states that copyright law protects the “fruits of intellectual labor founded in the creative powers of the mind.”

Due to the current limitations of narrow AI, it must have some form of initial input that helps develop its ability to create. At the moment AI is a tool that can be used to produce creative work in the same way that a video camera is a tool used to film creative content. Video producers don’t need to comprehend the inner workings of their cameras; as long as their content shows creativity and originality, they have a proprietary claim over their creations.

The same concept applies to programmers developing a neural network. As long as the dataset they use as input yields an original and creative result, it will be protected by copyright law; they don’t need to understand the underlying mathematics, which in this case often amounts to black-box algorithms whose inner workings are impossible to fully analyze.

Will robots and algorithms eventually be treated as creative sources able to own copyrights? The author pointed to the recent patent case of Warner-Lambert Co Ltd versus Generics where Lord Briggs, Justice of the Supreme Court of the UK, determined that “the court is well versed in identifying the governing mind of a corporation and, when the need arises, will no doubt be able to do the same for robots.”

In the meantime, Dr. Eshraghian suggests four guiding principles to allow artists who collaborate with AI to protect themselves.

First, programmers need to document their process through online code repositories like GitHub or Bitbucket.

Second, data engineers should also document and catalog their datasets and the process they used to curate their models, indicating selectivity in their criteria as much as possible to demonstrate their involvement and creativity.

Third, in cases where user data is utilized, the engineer should “catalog all runs of the program” to distinguish the data selection process; a minimal sketch of what such cataloging might look like follows these principles. This could be interpreted as a way of determining whether user-based input has a right to claim the copyright too.

Finally, the output should avoid infringing on others’ content through methods like reverse image searches and version control, as mentioned above.
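To make the second and third principles concrete, here’s a minimal, hypothetical sketch of run cataloging in Python: it fingerprints the curated dataset and appends one record per run of the program, leaving a verifiable trail of what data and settings produced which output. The file names and record fields are illustrative assumptions, not anything prescribed by the paper.

```python
# Hypothetical provenance log in the spirit of the second and third principles:
# fingerprint the curated dataset and append one record per run of the program.
# The file names and record fields are illustrative, not prescribed by the paper.
import hashlib
import json
import time
from pathlib import Path


def dataset_fingerprint(dataset_dir: str) -> str:
    """Hash every file in the curated dataset so its exact state is on record."""
    digest = hashlib.sha256()
    for path in sorted(Path(dataset_dir).rglob("*")):
        if path.is_file():
            digest.update(path.name.encode())
            digest.update(path.read_bytes())
    return digest.hexdigest()


def log_run(dataset_dir: str, config: dict, log_file: str = "runs.jsonl") -> None:
    """Append one JSON record per run: when it ran, on what data, with what settings."""
    record = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "dataset_sha256": dataset_fingerprint(dataset_dir),
        "config": config,
    }
    with open(log_file, "a") as f:
        f.write(json.dumps(record) + "\n")


if __name__ == "__main__":
    # "portraits/" stands in for whatever curated dataset the model trains on.
    log_run("portraits/", {"model": "gan", "epochs": 300, "seed": 42})
```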

AI-generated artwork is still a very new concept, and the ambiguous copyright laws around it give a lot of flexibility to AI artists and programmers worldwide. The guiding principles Eshraghian lays out will hopefully shed some light on the legislation we’ll eventually need for this kind of art, and start an important conversation between all the stakeholders involved.

Image Credit: Wikimedia Commons

Posted in Human Robots

#436946 Coronavirus May Mean Automation Is ...

We’re in the midst of a public health emergency, and life as we know it has ground to a halt. The places we usually go are closed, the events we were looking forward to are canceled, and some of us have lost our jobs or fear losing them soon.

But although it may not seem like it, there are some silver linings; this crisis is bringing out the worst in some (I’m looking at you, toilet paper hoarders), but the best in many. Italians on lockdown are singing together, Spaniards on lockdown are exercising together, this entrepreneur made a DIY ventilator and put it on YouTube, and volunteers in Italy 3D printed medical valves for virus treatment at a fraction of their usual cost.

Indeed, if you want to feel like there’s still hope for humanity instead of feeling like we’re about to snowball into terribleness as a species, just look at these examples—and I’m sure there are many more out there. There’s plenty of hope and opportunity to be found in this crisis.

Peter Xing, a keynote speaker and writer on emerging technologies and associate director in technology and growth initiatives at KPMG, would agree. Xing believes the coronavirus epidemic is presenting us with ample opportunities for increased automation and remote delivery of goods and services. “The upside right now is the burgeoning platform of the digital transformation ecosystem,” he said.

In a thought-provoking talk at Singularity University’s COVID-19 virtual summit this week, Xing explained how the outbreak is accelerating our transition to a highly-automated society—and painted a picture of what the future may look like.

Confronting Scarcity
You’ve probably seen them by now—the barren shelves at your local grocery store. Whether you were in the paper goods aisle, the frozen food section, or the fresh produce area, it was clear something was amiss; the shelves were empty. One of the most inexplicable items people have been panic-bulk-buying is toilet paper.

Xing described this toilet paper scarcity as a prisoner’s dilemma, pointing out that we have a scarcity problem right now in terms of our mindset, not in terms of actual supply shortages. “It’s a prisoner’s dilemma in that we’re all prisoners in our homes right now, and we can either hoard or not hoard, and the outcomes depend on how we collaborate with each other,” he said. “But it’s not a zero-sum game.”

Xing referenced a CNN article about why toilet paper, of all things, is one of the items people have been panic-buying most (I, too, have been utterly baffled by this phenomenon). But maybe there’d be less panic if we knew more about the production methods and supply chain involved in manufacturing toilet paper. It turns out it’s a highly automated process (you can learn more about it in this documentary by National Geographic) and requires very few people (though it does require about 27,000 trees a day—so stop bulk-buying it! Just stop!).

The supply chain limitation here is in the raw material; we certainly can’t keep cutting down this many trees a day forever. But—somewhat ironically, given the Costco cartloads of TP people have been stuffing into their trunks and backseats—thanks to automation, toilet paper isn’t something stores are going to stop receiving anytime soon.

Automation For All
Now we have a reason to apply this level of automation to, well, pretty much everything.

Though our current situation may force us into using more robots and automated systems sooner than we’d planned, it will end up saving us money and creating opportunity, Xing believes. He cited “fast-casual” restaurants (Chipotle, Panera, etc.) as a prime example.

Currently, people in the US spend much more to eat at home than to eat in fast-casual restaurants, once you take into account the cost of the food being prepared plus the value of the time spent cooking, grocery shopping, and cleaning up after meals. According to research from investment management firm ARK Invest, taking all these costs into account puts a home-cooked meal at about $12.
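ARK’s figure is easier to evaluate as a back-of-the-envelope calculation. The inputs below are illustrative assumptions rather than ARK’s actual model, but they show how grocery costs plus the value of your time can plausibly add up to $12 a meal:

```python
# Back-of-the-envelope version of the home-cooked meal cost argument.
# All inputs are illustrative assumptions, not ARK Invest's actual figures.
ingredients_per_meal = 4.00     # dollars of groceries per serving
minutes_per_meal = 30           # shopping + cooking + cleanup, amortized per meal
value_of_time_per_hour = 16.00  # what an hour of your time is worth to you

time_cost = (minutes_per_meal / 60) * value_of_time_per_hour
total_cost = ingredients_per_meal + time_cost
print(f"Effective cost per home-cooked meal: ${total_cost:.2f}")  # -> $12.00
```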

That’s the same as or more than the cost of grabbing a burrito or a sandwich at the joint around the corner. As more of the repetitive, low-skill tasks involved in preparing fast-casual meals are automated, their cost will drop even more, giving us more incentive to forgo home cooking. (But it’s worth noting that these figures don’t take into account that eating at home is, in most cases, better for you, since you’re less likely to fill your food with sugar, oil, or various other taste-enhancing but health-destroying ingredients—plus, there are those of us who get a nearly incomparable amount of joy from laboring over and then savoring a homemade meal.)

Now that we’re not supposed to be touching each other or touching anything anyone else has touched, but we still need to eat, automating food preparation sounds appealing (and maybe necessary). Multiple food delivery services have already implemented a contactless delivery option, where customers can choose to have their food left on their doorstep.

Besides the opportunities for in-restaurant automation, “This is an opportunity for automation to happen at the last mile,” said Xing. Delivery drones, robots, and autonomous trucks and vans could all play a part. In fact, use of delivery drones has ramped up in China since the outbreak.

Speaking of deliveries, service robots have steadily increased in numbers at Amazon; as of late 2019, the company employed around 650,000 humans and 200,000 robots—and costs have gone down as robots have gone up.

ARK Invest’s research predicts automation could add $800 billion to US GDP over the next 5 years and $12 trillion during the next 15 years. On this trajectory, GDP would end up being 40 percent higher with automation than without it.

Automating Ourselves?
This is all well and good, but what do these numbers and percentages mean for the average consumer, worker, or citizen?

“The benefits of automation aren’t being passed on to the average citizen,” said Xing. “They’re going to the shareholders of the companies creating the automation.” This is where policies like universal basic income and universal healthcare come in; in the not-too-distant future, we may see more movement toward measures like these (depending on how the election goes) that spread the benefit of automation out rather than concentrating it in a few wealthy hands.

In the meantime, though, some people are benefiting from automation in ways that maybe weren’t expected. We’re in the midst of what’s probably the biggest remote-work experiment in US history, not to mention remote learning. Tools that let us digitally communicate and collaborate, like Slack, Zoom, Dropbox, and G Suite, are enabling remote work in a way that wouldn’t have been possible 20 or even 10 years ago.

In addition, Xing said, tools like DataRobot and H2O.ai are democratizing artificial intelligence by allowing almost anyone, not just data scientists or computer engineers, to run machine learning algorithms. People are codifying the steps in their own repetitive work processes and having their computers take over tasks for them.
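What “codifying the steps in a repetitive work process” looks like in practice can be as small as the hypothetical script below, which rolls a folder of weekly CSV reports into a single summary. The folder layout and column names are assumptions for illustration:

```python
# Hypothetical example of codifying a repetitive office task: every week,
# collect the team's CSV reports from a folder and roll them into one summary.
# The "weekly_reports/" folder and the "name"/"hours" columns are assumptions.
import csv
from pathlib import Path


def summarize_reports(report_dir: str, out_file: str = "summary.csv") -> None:
    """Accumulate per-person hours across every report in the folder."""
    totals = {}
    for report in Path(report_dir).glob("*.csv"):
        with open(report, newline="") as f:
            for row in csv.DictReader(f):
                totals[row["name"]] = totals.get(row["name"], 0.0) + float(row["hours"])
    with open(out_file, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["name", "total_hours"])
        writer.writerows(sorted(totals.items()))


if __name__ == "__main__":
    summarize_reports("weekly_reports/")
```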

As 3D printing gets cheaper and more accessible, it’s also being more widely adopted, and people are finding more applications (case in point: the Italians mentioned above who figured out how to cheaply print a medical valve for coronavirus treatment).

The Mother of Invention
This movement towards a more automated society has some positives: it will help us stay healthy during times like the present, it will drive down the cost of goods and services, and it will grow our GDP in the long run. But by leaning into automation, will we be enabling a future that keeps us more physically, psychologically, and emotionally distant from each other?

We’re in a crisis, and desperate times call for desperate measures. We’re sheltering in place, practicing social distancing, and trying not to touch each other. And for most of us, this is really unpleasant and difficult. We can’t wait for it to be over.

For better or worse, this pandemic will likely make us pick up the pace on our path to automation, across many sectors and processes. The solutions people implement during this crisis won’t disappear when things go back to normal (and, depending on who you talk to, things may never fully go back to normal).

But let’s make sure to remember something. Even once robots are making our food and drones are delivering it, and our computers are doing data entry and email replies on our behalf, and we all have 3D printers to make anything we want at home—we’re still going to be human. And humans like being around each other. We like seeing one another’s faces, hearing one another’s voices, and feeling one another’s touch—in person, not on a screen or in an app.

No amount of automation is going to change that, and beyond lowering costs or increasing GDP, our greatest and most crucial responsibility will always be to take care of each other.

Image Credit: Gritt Zheng on Unsplash

Posted in Human Robots

#435804 New AI Systems Are Here to Personalize ...

The narratives about automation and its impact on jobs range from urgent to hopeful, with everything in between. Regardless of where you land, it’s hard to argue against the idea that technologies like AI and robotics will change our economy and the nature of work in the coming years.

A recent World Economic Forum report noted that some estimates show automation could displace 75 million jobs by 2022, while at the same time creating 133 million new roles. While these estimates predict a net positive for the number of new jobs in the coming decade, displaced workers will need to learn new skills to adapt to the changes. If employees can’t be retrained quickly for jobs in the changing economy, society is likely to face some degree of turmoil.

According to Bryan Talebi, CEO and founder of AI education startup Ahura AI, the same technologies erasing and creating jobs can help workers bridge the gap between the two.

Ahura is developing a product to capture biometric data from adult learners who are using computers to complete online education programs. The goal is to feed this data to an AI system that can modify and adapt their program to optimize for the most effective teaching method.

While the prospect of a computer recording and scrutinizing a learner’s behavioral data will surely generate unease across a society growing more aware and uncomfortable with digital surveillance, some people may look past such discomfort if they experience improved learning outcomes. Users of the system would, in theory, have their own personalized instruction shaped specifically for their unique learning style.

And according to Talebi, their systems are showing some promise.

“Based on our early tests, our technology allows people to learn three to five times faster than traditional education,” Talebi told me.

Currently, Ahura’s system uses the video camera and microphone that come standard on the laptops, tablets, and mobile devices most students are using for their learning programs.

With the computer’s camera, Ahura can capture facial movements and microexpressions, measure eye movements, and track a fidget score (a measure of how much a student moves while learning). The microphone tracks voice sentiment, and the AI leverages natural language processing to review the learner’s word usage.
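The article doesn’t reveal how Ahura actually computes its fidget score, but a crude version of such a motion metric can be built from simple frame differencing on the webcam feed. The OpenCV sketch below is purely illustrative, not Ahura’s method:

```python
# Illustrative stand-in for a "fidget score": mean absolute pixel change
# between consecutive webcam frames, averaged over a short window.
# This is not Ahura's actual algorithm, which the article does not describe.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # the standard laptop webcam the article mentions
prev = None
motion = []

for _ in range(300):  # roughly 10 seconds at 30 fps
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.GaussianBlur(gray, (21, 21), 0)  # suppress sensor noise
    if prev is not None:
        motion.append(np.mean(cv2.absdiff(prev, gray)))
    prev = gray

cap.release()
fidget_score = float(np.mean(motion)) if motion else 0.0
print(f"Fidget score over the window: {fidget_score:.2f}")
```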

From this collection of data Ahura can, according to Talebi, identify the optimal way to deliver content to each individual.

For some users that might mean a video tutorial is the best style of learning, while others may benefit more from some form of experiential or text-based delivery.

“The goal is to alter the format of the content in real time to optimize for attention and retention of the information,” said Talebi. One of Ahura’s main goals is to reduce the frequency with which students switch from their learning program to distractions like social media.

“We can now predict with a 60 percent confidence interval ten seconds before someone switches over to Facebook or Instagram. There’s a lot of work to do to get that up to a 95 percent level, so I don’t want to overstate things, but that’s a promising indication that we can work to cut down on the amount of context-switching by our students,” Talebi said.
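Framed as a machine learning problem, the prediction Talebi describes is a binary classification over windows of recent biometric features: will this student switch to social media within the next ten seconds? The sketch below uses synthetic data and invented feature names; it illustrates the shape of the task, not Ahura’s model:

```python
# Toy framing of "predict a switch to social media ten seconds out" as
# binary classification. Features and labels are synthetic; the feature
# names are invented for illustration and are not Ahura's.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Each row: [fidget_score, gaze_dispersion, voice_sentiment, words_per_minute]
X = rng.normal(size=(1000, 4))
# Pretend that fidgeting plus a wandering gaze tend to precede a switch.
y = ((X[:, 0] + X[:, 1]) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print(f"Held-out accuracy: {clf.score(X_test, y_test):.2f}")
# A deployed system would act on clf.predict_proba(window) ten seconds ahead.
```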

Talebi repeatedly mentioned his ambition to leverage the same design principles used by Facebook, Twitter, and others to increase the time users spend on those platforms, but instead use them to design more compelling and even addictive education programs that can compete for attention with social media.

But the notion that Ahura’s system could one day be used to create compelling or addictive education necessarily presses against a set of justified fears surrounding data privacy. Growing anxiety surrounding the potential to misuse user data for social manipulation is widespread.

“Of course there is a real danger, especially because we are collecting so much data about our users which is specifically connected to how they consume content. And because we are looking so closely at the ways people interact with content, it’s incredibly important that this technology never be used for propaganda or to sell things to people,” Talebi tried to assure me.

Unsurprisingly (and worryingly), using this AI system to sell products to people is exactly where some investors’ ambitions immediately turn once they learn about the company’s capabilities, according to Talebi. During our discussion Talebi regularly cited the now infamous example of Cambridge Analytica, the political consulting firm hired by the Trump campaign to run a psychographically targeted persuasion campaign on the US population during the most recent presidential election.

“It’s important that we don’t use this technology in those ways. We’re aware that things can go sideways, so we’re hoping to put up guardrails to ensure our system is helping and not harming society,” Talebi said.

Talebi will surely need to take real action on such a claim, but he says the company is in the process of identifying a structure for an ethics review board—one that carries significant influence, with voting authority similar to that of the executive team and the regular board.

“Our goal is to build an ethics review board that has teeth, is diverse in both gender and background but also in thought and belief structures. The idea is to have our ethics review panel ensure we’re building things ethically,” he said.

Data privacy appears to be an important issue for Talebi, who occasionally referenced a major competitor in the space based in China. According to a recent article from MIT Tech Review outlining the astonishing growth of AI-powered education platforms in China, data privacy concerns may be less severe there than in the West.

Ahura is currently developing upgrades to an early alpha-stage prototype, but is already capturing data from students at at least one Ivy League school and a variety of other places. Their next step is to roll out a working beta version to over 200,000 users as part of a partnership with an unnamed corporate client who will be measuring the platform’s efficacy against a control group.

Going forward, Ahura hopes to add to its suite of biometric data capture by including things like pupil dilation and facial flushing, heart rate, sleep patterns, or whatever else may give their system an edge in improving learning outcomes.

As information technologies increasingly automate work, it’s likely we’ll also see rapid changes to our labor systems. It’s also looking increasingly likely that those same technologies will be used to improve our ability to give people the right skills when they need them. It may be one way to address the challenges automation is sure to bring.

Image Credit: Gerd Altmann / Pixabay

Posted in Human Robots

#435748 Video Friday: This Robot Is Like a ...

Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We’ll also be posting a weekly calendar of upcoming robotics events for the next few months; here’s what we have so far (send us your events!):

RSS 2019 – June 22-26, 2019 – Freiburg, Germany
Hamlyn Symposium on Medical Robotics – June 23-26, 2019 – London, U.K.
ETH Robotics Summer School – June 27-July 1, 2019 – Zurich, Switzerland
MARSS 2019 – July 1-5, 2019 – Helsinki, Finland
ICRES 2019 – July 29-30, 2019 – London, U.K.
DARPA SubT Tunnel Circuit – August 15-22, 2019 – Pittsburgh, Pa., USA
Let us know if you have suggestions for next week, and enjoy today’s videos.

It’s been a while since we last spoke to Joe Jones, the inventor of Roomba, about his solar-powered, weed-killing robot, called Tertill, which he was launching as a Kickstarter project. Tertill is now available for purchase (US $300) and is shipping right now.

[ Tertill ]

Usually, we don’t post videos that involve drone use that looks to be either illegal or unsafe. These flights over the protests in Hong Kong are almost certainly both. However, it’s also a unique perspective on the scale of these protests.

[ Team BlackSheep ]

ICYMI: iRobot announced this week that it has acquired Root Robotics.

[ iRobot ]

This Boston Dynamics parody video went viral this week.

The CGI is good but the gratuitous violence—even if it’s against a fake robot—is a bit too much?

This is still our favorite Boston Dynamics parody video:

[ Corridor ]

Biomedical Engineering Department Head Bin He and his team have developed the first-ever successful non-invasive mind-controlled robotic arm to continuously track a computer cursor.

[ CMU ]

Organic chemists, prepare to meet your replacement:

Automated chemical synthesis carries great promises of safety, efficiency and reproducibility for both research and industry laboratories. Current approaches are based on specifically-designed automation systems, which present two major drawbacks: (i) existing apparatus must be modified to be integrated into the automation systems; (ii) such systems are not flexible and would require substantial re-design to handle new reactions or procedures. In this paper, we propose a system based on a robot arm which, by mimicking the motions of human chemists, is able to perform complex chemical reactions without any modifications to the existing setup used by humans. The system is capable of precise liquid handling, mixing, filtering, and is flexible: new skills and procedures could be added with minimum effort. We show that the robot is able to perform a Michael reaction, reaching a yield of 34%, which is comparable to that obtained by a junior chemist (undergraduate student in Chemistry).

[ arXiv ] via [ NTU ]

So yeah, ICRA 2019 was huge and awesome. Here are some brief highlights.

[ Montreal Gazette ]

For about US $5, this drone will deliver raw meat and beer to you if you live on an uninhabited island in Tokyo Bay.

[ Nikkei ]

The Smart Microsystems Lab at Michigan State University has a new version of their Autonomous Surface Craft. It’s autonomous, open source, and awfully hard to sink.

[ SML ]

As drone shows go, this one is pretty good.

[ CCTV ]

Here’s a remote controlled robot shooting stuff with a very large gun.

[ HDT ]

Over a period of three quarters (September 2018 through May 2019), we’ve had the opportunity to work with five graduating University of Denver students as they brought their idea for a Misty II arm extension to life.

[ Misty Robotics ]

If you wonder how it looks to inspect burners and superheaters of a boiler with an Elios 2, here you are! This inspection was performed by Svenska Elektrod in a peat-fired boiler for Vattenfall in Sweden. Enjoy!

[ Flyability ]

The newest Soft Robotics technology, mGrip mini fingers, is made for tight spaces, small packaging, and delicate items, giving limitless opportunities for your applications.

[ Soft Robotics ]

What if legged robots were able to generate dynamic motions in real-time while interacting with a complex environment? Such technology would represent a significant step toward the deployment of legged systems in real world scenarios. This means being able to replace humans in the execution of dangerous tasks and to collaborate with them in industrial applications.

This workshop aims to bring together researchers from all the relevant communities in legged locomotion such as: numerical optimization, machine learning (ML), model predictive control (MPC) and computational geometry in order to chart the most promising methods to address the above-mentioned scientific challenges.

[ Num Opt Wkshp ]

Army researchers teamed with the U.S. Marine Corps to fly and test 3D-printed quadcopter prototypes at the Marine Corps Air Ground Combat Center in 29 Palms, California, recently.

[ CCDC ARL ]

Lex Fridman’s Artificial Intelligence podcast featuring Rosalind Picard.

[ AI Podcast ]

In this week’s episode of Robots in Depth, Per speaks with Christian Guttmann, executive director of the Nordic Artificial Intelligence Institute.

Christian Guttmann talks about AI and wanting to understand intelligence enough to recreate it. Christian has been focusing on AI in healthcare and has recently started to communicate the opportunities and challenges in artificial intelligence to the general public. This is something that the host Per Sjöborg is also very passionate about. We also get to hear about the Nordic AI institute and the work it does to inform all parts of society about AI.

[ Robots in Depth ]

Posted in Human Robots