This study explores the ethical implications of using Artificial Intelligence (AI) in continuous recruitment systems, with a specific focus on algorithmic bias against candidates from marginalized backgrounds in Makassar, Indonesia. Through a qualitative approach involving semi-structured interviews with HR practitioners, developers, and job seekers, the research reveals a concerning gap between technological advancement and ethical accountability. Participants from marginalized groups reported experiences of exclusion and invisibility, often receiving no transparency or feedback during the recruitment process. Meanwhile, most HR professionals and developers lacked awareness of how algorithmic models can replicate societal inequalities. The findings suggest that AI systems, if left unchecked, risk reinforcing discrimination rather than fostering equal opportunity. However, the study also uncovers a growing willingness among local stakeholders to engage in ethical reform and to collaborate toward more inclusive AI design. This research contributes to the discourse on fairness and accountability in digital hiring practices, offering actionable insights for socially responsible AI integration.