
Research Highlights: Using Theory of Mind to Improve Human Confidence in Artificial Intelligence

Artificial intelligence (AI) systems are pervasive in modern society, spanning low-risk interactions such as movie recommendations and chatbots to high-risk environments such as medical diagnostics, self-driving cars, drones, and military operations. But building human trust in these systems remains a significant challenge, particularly because the systems themselves cannot explain, in a way that humans understand, how a recommendation or decision was made. This lack of trust becomes especially problematic in critical situations involving finance or healthcare, where AI decisions can have life-changing consequences.

To address this problem, eXplainable artificial intelligence (XAI) has become an active area of research for both scientists and industry. XAI develops models that generate explanations intended to shed light on the underlying mechanisms of AI systems, bringing transparency to the process and making the results easier to interpret for both expert and non-expert end users.

New research by a team of UCLA scientists aims to boost human confidence in these increasingly common systems by dramatically improving XAI. Their study, “CX-ToM: Counterfactual Explanations with Theory of Mind to Improve Human Confidence in Image Recognition Models,” was recently published in the journal iScience.

“Humans can easily become overwhelmed by too many or overly detailed explanations. Our interactive communication process helps the machine understand the human user and identify user-specific content for explanation,” says Song-Chun Zhu, the project’s principal investigator and a professor of statistics and computer science at UCLA.

Zhu and his team at UCLA set out to improve existing XAI models by framing explanation generation as an iterative process of human-machine communication. They use a theory of mind (ToM) framework to drive this dialogue. ToM explicitly tracks three important aspects at every turn of the dialogue: (a) human intent (or curiosity); (b) the human’s understanding of the machine; and (c) the machine’s understanding of the human user.
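To make the structure of such a loop concrete, here is a minimal, hypothetical Python sketch (not the authors’ code): the `ToMState`, `dialogue`, `infer_intent`, and `explain` names are illustrative assumptions, standing in for the real intent-inference and explanation models used in CX-ToM.

```python
from dataclasses import dataclass, field
from typing import Callable, Iterable, Iterator

@dataclass
class ToMState:
    """The three aspects tracked at every turn of the dialogue."""
    human_intent: str = "unknown"                                    # (a) the user's current curiosity
    human_beliefs_about_machine: list = field(default_factory=list)  # (b) what the user has learned so far
    machine_beliefs_about_human: dict = field(default_factory=dict)  # (c) the machine's estimate of the user

def dialogue(questions: Iterable[str],
             infer_intent: Callable[[str], str],
             explain: Callable[[str, ToMState], str]) -> Iterator[str]:
    """Iterative explanation loop: refresh the ToM state, then answer each question."""
    state = ToMState()
    for q in questions:
        state.human_intent = infer_intent(q)                      # update (a)
        answer = explain(q, state)                                # tailor the explanation to the tracked state
        state.human_beliefs_about_machine.append(answer)          # update (b)
        state.machine_beliefs_about_human[q] = state.human_intent # update (c)
        yield answer

# Toy usage with stub components standing in for real models:
if __name__ == "__main__":
    questions = ["Why was this image labeled 'dog'?", "Why was it not labeled 'wolf'?"]
    answers = dialogue(
        questions,
        infer_intent=lambda q: "counterfactual" if "not" in q else "attribution",
        explain=lambda q, s: f"[{s.human_intent}] explanation for: {q}",
    )
    for a in answers:
        print(a)
```

The point of the sketch is only that the explanation at each turn is conditioned on the evolving ToM state, rather than generated once in isolation.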

“In our framework, we let the machine and the human user solve a collaborative task, but the mind of the machine and the mind of the user each have only partial knowledge of the environment. Therefore, the machine and the user must communicate with each other in a dialogue, using their partial knowledge; otherwise they would not be able to solve the collaborative task optimally,” said Arjun Reddy Akula, a UCLA Ph.D. student who led this work in Professor Zhu’s group. “Our work will enable human users without AI expertise to engage with, understand, and place greater trust in AI-based systems. We believe that our ToM-based interactive framework offers a new way of thinking when designing XAI solutions.”

The group’s latest development is the culmination of five years of research in their UCLA lab.
