Generative AI tools are increasingly used for AI-mediated Informal Digital Learning of English (AI-IDLE), yet research has paid limited attention to how learners negotiate ethical responsibility and algorithmic bias during everyday, out-of-class use, particularly in value-oriented faculties. This qualitative descriptive case study examined ethical awareness and perceived algorithmic bias in AI-IDLE among Sharia Faculty students at UIN Raden Intan Lampung. Twelve undergraduates were recruited through purposive maximum-variation sampling. Data were generated from semi-structured interviews and a curated set of anonymized AI interaction artifacts (e.g., prompts and outputs for explanation, drafting, and translation), supported by brief artifact walkthroughs where feasible. Data were analyzed through reflexive thematic analysis with iterative coding, memoing, and cross-source comparison to strengthen interpretive transparency. Findings showed that ethical awareness operated as context-sensitive boundary-making: students distinguished AI as a learning scaffold from AI as a substitute for intellectual labor, tightened limits on assessment-adjacent tasks, and reclaimed authorship through substantive rewriting to preserve voice and stance. Ethical reasoning also included privacy stewardship through data minimization in prompts, although awareness varied. Regarding bias, students most often noted Western-centric framing in value-laden examples and language standardization that diluted pragmatic or culturally embedded meanings. Fluent outputs produced an authority effect, but many participants reported mitigation routines such as verification, requesting multiple viewpoints, and re-prompting for Indonesian/Islamic contextualization. The findings imply that AI-IDLE guidance should integrate ethical boundary-setting, privacy-aware prompting, and critical AI literacy practices to support responsible and culturally responsive informal language learning.
Copyright © 2026