Explainable Artificial Intelligence (XAI) has emerged as a crucial aspect of building trust and transparency in AI-driven systems. However, existing explanation methods often apply a uniform approach, overlooking the diverse backgrounds and expertise levels of users. This paper proposes a personalized explainable AI framework that dynamically adjusts the complexity, depth, and presentation of machine-generated explanations according to the user's expertise—be it novice or expert. By integrating user modeling and adaptive explanation strategies, the system can deliver tailored information that enhances user understanding, satisfaction, and decision-making. We evaluate the proposed approach through experiments involving participants with varying expertise levels interacting with AI-based decision systems. The results show that adaptive explanations significantly improve comprehension for both novice and expert users compared to static, one-size-fits-all explanations. These findings highlight the importance of user-centered design in XAI and suggest practical pathways for future implementation in real-world applications.
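The adaptive mechanism described above can be illustrated with a minimal sketch. This is not the paper's implementation; the two-level expertise model, the function names, and the thresholding on top features are all assumptions made purely for illustration.

```python
# Hypothetical sketch of an adaptive explanation selector (illustrative only;
# names and the two-level expertise model are assumptions, not the paper's code).
from dataclasses import dataclass

@dataclass
class UserProfile:
    expertise: str  # assumed levels: "novice" or "expert"

def explain(prediction: str, importances: dict, user: UserProfile) -> str:
    """Adjust explanation depth and detail to the user's expertise level."""
    # Rank features by absolute attribution weight.
    ranked = sorted(importances.items(), key=lambda kv: -abs(kv[1]))
    if user.expertise == "novice":
        # Novices get a short, plain-language rationale citing the top feature.
        top_feature, _ = ranked[0]
        return f"The system predicted '{prediction}' mainly because of {top_feature}."
    # Experts get the full ranked attribution breakdown.
    detail = ", ".join(f"{f}={w:+.2f}" for f, w in ranked)
    return f"Prediction '{prediction}'; feature attributions: {detail}."
```

A user-modeling component would populate `UserProfile` (e.g. from a questionnaire or interaction history), and the same underlying attribution scores would then be rendered at different depths for different users.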