The integration of artificial intelligence (AI) into recruitment processes has transformed hiring practices, yet its ethical implications remain contested. This phenomenological study investigates job candidates’ lived experiences with AI-driven tools, focusing on perceptions of algorithmic bias and procedural fairness. Through in-depth interviews with 20 participants subjected to AI-powered resume screening, video interviews, and gamified assessments, the study uncovers recurring themes of opacity, demographic disparities, emotional dehumanization, and procedural injustice. Findings reveal that candidates, particularly those from marginalized groups, perceive AI systems as less transparent and more exclusionary than human evaluators, fostering distrust and emotional distress. The research highlights how algorithmic tools often replicate systemic inequities under the guise of neutrality, disproportionately affecting individuals with non-Western names, accents, or non-normative identities. By centering candidate voices, this study advocates for human-centered AI redesign, emphasizing participatory audits, transparency mechanisms, and accountability frameworks. These insights contribute to the discourse on ethical HR technologies, urging policymakers and organizations to prioritize equity and dignity in the automation of recruitment.