Personalized Explainable AI: Dynamic Adjustment of Explanations for Novice and Expert Users
DOI: 10.29303/jppipa.v11i9.12586
Published: 2025-09-30
Abstract
Explainable Artificial Intelligence (XAI) has emerged as a crucial aspect of building trust and transparency in AI-driven systems. However, existing explanation methods often apply a uniform approach, overlooking the diverse backgrounds and expertise levels of users. This paper proposes a personalized explainable AI framework that dynamically adjusts the complexity, depth, and presentation of machine-generated explanations according to the user's expertise—be it novice or expert. By integrating user modeling and adaptive explanation strategies, the system can deliver tailored information that enhances user understanding, satisfaction, and decision-making. We evaluate the proposed approach through experiments involving participants with varying expertise levels interacting with AI-based decision systems. The results show that adaptive explanations significantly improve comprehension for both novice and expert users compared to static, one-size-fits-all explanations. These findings highlight the importance of user-centered design in XAI and suggest practical pathways for future implementation in real-world applications.
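To make the adaptation idea concrete, below is a minimal, illustrative Python sketch of the kind of mechanism the abstract describes: a simple user model carrying an inferred expertise score, and a routine that renders the same post-hoc feature attributions either as a plain-language summary for novices or as a full ranked attribution table for experts. All names, the 0.5 threshold, and the attribution input format are assumptions made for illustration; this is not the authors' implementation.

from dataclasses import dataclass

@dataclass
class UserModel:
    # Minimal user model: inferred expertise on a 0.0 (novice) to 1.0 (expert) scale.
    expertise: float

def adapt_explanation(attributions: dict, user: UserModel) -> str:
    # `attributions` maps feature names to signed importance scores, e.g. produced by
    # a post-hoc attribution method such as SHAP or LIME (assumed input format).
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    if user.expertise < 0.5:
        # Novice view: plain-language summary of the single most influential feature.
        name, score = ranked[0]
        direction = "increased" if score > 0 else "decreased"
        return f"The prediction was driven mainly by '{name}', which {direction} the score."
    # Expert view: full ranked attribution table with signed numeric weights.
    rows = "\n".join(f"  {name:<20s} {score:+.3f}" for name, score in ranked)
    return "Feature attributions (sorted by magnitude):\n" + rows

if __name__ == "__main__":
    attributions = {"blood_pressure": 0.42, "age": -0.18, "bmi": 0.07}
    print(adapt_explanation(attributions, UserModel(expertise=0.2)))  # novice rendering
    print(adapt_explanation(attributions, UserModel(expertise=0.9)))  # expert rendering

In a complete system of the kind the abstract outlines, the expertise score would be estimated from interaction data or a short onboarding questionnaire rather than set by hand, and the rendering step could also vary modality (text, visualizations, counterfactuals) in addition to depth.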
Keywords:
Adaptive Explanations; AI Interpretability; Human-Centered AI; Transparency; User Expertise; User Modeling
License
Copyright (c) 2025 Afrizal Zein

This work is licensed under a Creative Commons Attribution 4.0 International License.