As with any new technology, terms can enter people’s vernacular before they are fully understood. “We should also always be really careful, I think, about the use of the term AI, and in this context you don’t want to suggest that this anchor is actually exhibiting any intelligence, because it’s not, it’s just like a kind of very sophisticated digital puppet,” Knight said.
AI may have created the Xinhua anchor and its voice, but the anchor itself cannot think, Ali Shafti, a research associate in robotics and AI at Imperial College London, told CNBC by phone. “What actually creates those images and the movement of the lips and the voice of this anchor is using algorithms that are related to artificial intelligence. But to call this an AI anchor is slightly overselling it.”
Defining AI isn’t straightforward, Shafti said. “The term itself is usually defined as a non-human device or algorithm being able to do behaviors and actions that are possible only for a person of human intelligence, or maybe not even possible for humans, so above human intelligence,” he said.
“People will probably misunderstand this as the anchor itself being intelligent, that it’s like a human and can react to situations with intelligent behavior, which is not the case. It is basically a puppet running a script. It can read a script. It can do (that) very convincingly, and the fact that it looks so convincing, that’s the AI, but not what it says and does,” Shafti added.
The abilities and dangers of AI can be overstated, Shafti said. “As a person who does research in AI and robotics, I think we need to be very careful with how we explain AI to the general public. There is already fear and negative thinking on the subject among the general public. And it is based on what is being said by people like Elon Musk, and by the movies, films and series that people see. It is not realistic. It is being oversold, and it is necessary for people to understand what it is that we are researching and what it is that we are trying to do.”