TRUSTWORTHY AI: EXPLAINABILITY & FAIRNESS IN LARGE-SCALE DECISION SYSTEMS
DOI: https://doi.org/10.63125/3w9v5e52

Keywords: Trustworthy Artificial Intelligence, Explainability, Fairness, Decision Systems, Human-Centered Outcomes

Abstract
This study examined the critical roles of explainability and fairness in advancing trustworthy artificial intelligence (AI) within large-scale decision systems. As AI technologies increasingly shape consequential decisions in domains such as healthcare, finance, employment, and judicial processes, ensuring transparency, equity, and legitimacy has become paramount. Drawing on a comprehensive review of 152 peer-reviewed studies, this research synthesized conceptual foundations, methodological advancements, and empirical findings to build a robust framework for understanding how explainability and fairness jointly contribute to trustworthiness. A quantitative research design was employed, incorporating large-scale datasets and multi-phase statistical analyses to evaluate how explanation fidelity, stability, and sparsity influence comprehension, trust, and perceived fairness, and how fairness interventions affect model performance and equity outcomes. Results demonstrated that explanation fidelity significantly enhanced user comprehension, while stability strongly predicted trust, highlighting the importance of consistent and faithful explanations in shaping user confidence. Fairness metrics such as demographic parity and equal opportunity gaps were powerful predictors of perceived fairness, and reductions in these disparities substantially increased user acceptance of AI decisions. Interaction analyses revealed that combining counterfactual explanations with fairness constraints produced synergistic effects, improving both equity and trust without excessively compromising predictive performance. The study also quantified trade-offs, showing that fairness interventions slightly reduced accuracy but delivered substantial gains in legitimacy and social acceptability. Human-centered outcomes such as trust and reliance were closely linked to technical measures, illustrating that the social impact of AI is deeply intertwined with its design.
By integrating findings across technical, ethical, and behavioural dimensions, this study contributed new empirical evidence and theoretical insights into how explainability and fairness shape trustworthy AI. The results provide a comprehensive foundation for designing, evaluating, and governing AI systems that are transparent, equitable, and socially aligned in large-scale decision-making contexts.
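To make the fairness metrics named in the abstract concrete, the following is a minimal sketch of how the demographic parity gap and equal opportunity gap are conventionally computed for a binary classifier with a binary protected attribute. The function names and the toy data are illustrative assumptions, not drawn from the study itself.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between the two groups."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equal_opportunity_gap(y_true, y_pred, group):
    """Absolute difference in true-positive rates between the two groups."""
    y_true, y_pred, group = map(np.asarray, (y_true, y_pred, group))
    tpr = lambda g: y_pred[(group == g) & (y_true == 1)].mean()
    return abs(tpr(0) - tpr(1))

# Hypothetical labels, predictions, and group membership for illustration
y_true = [1, 1, 0, 0, 1, 1, 0, 0]
y_pred = [1, 0, 1, 0, 1, 1, 0, 0]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

print(demographic_parity_gap(y_pred, group))         # both groups: 2/4 positive -> 0.0
print(equal_opportunity_gap(y_true, y_pred, group))  # TPRs 1/2 vs 2/2 -> 0.5
```

A gap of zero means parity on that criterion; the study reports that shrinking these gaps increased perceived fairness and user acceptance.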
