The rapid evolution of Artificial Intelligence (AI) has reshaped social, economic, and cultural landscapes, yet AI development often prioritizes technical efficiency over human values. This study proposes the Human-Centered AI Integration Framework, a multidisciplinary model that unites ethical, psychological, and computational perspectives to promote inclusive and responsible AI innovation. Using a mixed-methods Design Science Research (DSR) approach, the study gathered data from a literature review, user surveys, and analyses of existing AI systems to identify gaps between ethical principles, user perceptions, and algorithmic design. The proposed framework consists of three interrelated layers: the Ethical Layer, emphasizing fairness, accountability, and transparency; the Psychological Layer, focusing on trust, empathy, and the human experience; and the Computational Layer, ensuring algorithmic integrity through bias mitigation and explainability. Evaluation by interdisciplinary experts confirms that the model effectively bridges human values and technical implementation, enhancing trust, inclusivity, and transparency across AI systems. This research contributes to the growing discourse on responsible AI by providing a holistic foundation for designing systems that are not only intelligent and efficient but also empathetic, equitable, and aligned with human well-being.