A PROBABILISTIC CHECKING MODEL FOR EFFECTIVE EXPLAINABILITY BASED ON PERSONALITY TRAITS

Title: A PROBABILISTIC CHECKING MODEL FOR EFFECTIVE EXPLAINABILITY BASED ON PERSONALITY TRAITS.
Name(s): Alharbi, Mohammed N., author
Huang, Shihong, thesis advisor
Florida Atlantic University, degree grantor
Department of Computer and Electrical Engineering and Computer Science
College of Engineering and Computer Science
Type of Resource: text
Genre: Electronic Thesis Or Dissertation
Date Created: 2022
Date Issued: 2022
Publisher: Florida Atlantic University
Place of Publication: Boca Raton, Fla.
Physical Form: application/pdf
Extent: 118 p.
Language(s): English
Abstract/Description: It is becoming increasingly important for an autonomous system to be able to explain its actions to humans in order to improve trust and enhance human-machine collaboration. However, providing the most appropriate kind of explanation (in terms of its length, format, presentation mode, and timing) is critical to its effectiveness. Explanation entails costs, such as the time it takes the system to explain and the human to comprehend and respond. Therefore, the actual improvement that explanations bring to human-system tasks (if any) is not always obvious, particularly given various forms of uncertainty in the system's knowledge about humans. In this research, we propose an approach to address this issue. The key idea is a structured framework that allows a system to model and reason about human personality traits as critical elements guiding proper explanation in human-system collaboration. In particular, we focus on two concerns, the modality and the amount of explanation, in order to optimize the explanation experience and improve overall human-system utility. Our models use probabilistic modeling and analysis (PRISM-games) to determine at run time the most effective explanation under uncertainty. To demonstrate our approach, we introduce a self-adaptive system called Grid, a virtual game, and the Stock Prediction Engine (SPE), which allow an automated system and a human to collaborate on the game and on stock investments. Our evaluation of these exemplars, through simulation, demonstrates that a human subject's performance and the overall human-system utility are improved when explanations take the psychology of human personality traits into account.
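
For readers unfamiliar with PRISM-games, the following is a minimal, hypothetical sketch of the kind of stochastic multiplayer game the abstract alludes to: the system (one player) picks an explanation modality, the human (the other player) understands it with a trait-dependent probability, and a reward structure trades comprehension benefit against explanation cost. The action names, probabilities, and reward values here are invented for illustration and are not taken from the dissertation.

// Illustrative PRISM-games model (hypothetical; not the dissertation's actual model)
smg

// Comprehension probabilities per modality; in the proposed framework these
// would be derived from a model of the human's personality traits.
const double p_text   = 0.8;
const double p_visual = 0.6;

player sys [give_text], [give_visual] endplayer
player env [respond_t], [respond_v], [finish], [loop] endplayer

module game
  s  : [0..4] init 0;   // 0=choose, 1=text shown, 2=visual shown, 3=responded, 4=done
  ok : bool init false; // did the human understand the explanation?

  [give_text]   s=0 -> (s'=1);
  [give_visual] s=0 -> (s'=2);
  [respond_t]   s=1 -> p_text   : (s'=3)&(ok'=true) + (1-p_text)   : (s'=3);
  [respond_v]   s=2 -> p_visual : (s'=3)&(ok'=true) + (1-p_visual) : (s'=3);
  [finish]      s=3 -> (s'=4);
  [loop]        s=4 -> (s'=4);
endmodule

rewards "utility"
  [give_text]   true : 2;   // cost of producing and reading a text explanation
  [give_visual] true : 1;   // cost of a visual explanation
  [finish]      ok   : 10;  // benefit when the explanation was understood
endrewards

Querying, for example, <<sys>> R{"utility"}max=? [ F s=4 ] asks PRISM-games for the system strategy (here, the choice of explanation modality) that maximizes expected utility, which is the style of run-time analysis under uncertainty that the abstract describes.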
Identifier: FA00013894 (IID)
Degree granted: Dissertation (Ph.D.)--Florida Atlantic University, 2022.
Collection: FAU Electronic Theses and Dissertations Collection
Note(s): Includes bibliography.
Subject(s): Human-computer interaction
Probabilistic modeling
Human-machine systems
Affective computing
Persistent Link to This Record: http://purl.flvc.org/fau/fd/FA00013894
Use and Reproduction: Copyright © is held by the author with permission granted to Florida Atlantic University to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Use and Reproduction: http://rightsstatements.org/vocab/InC/1.0/
Host Institution: FAU
Is Part of Series: Florida Atlantic University Digital Library Collections.