The phenomenon of digital agency workers confiding in ChatGPT reflects a significant shift in human-AI communication patterns within contemporary urban work environments. This study explores how emotional interactions with AI are experienced and interpreted by digital workers facing workplace stress and emotional pressure. Guided by a phenomenology-informed qualitative approach, data were gathered through in-depth, semi-structured interviews with four purposively selected informants who actively use ChatGPT for emotional expression, and thematic analysis was employed to identify patterns in their experiences. Findings reveal three major themes: (1) ChatGPT functions as a safe, non-judgmental emotional space where workers can express feelings without social risk; (2) parasocial relationships emerge as users personalize and anthropomorphize the AI, treating it as a companion or friend; and (3) early signs of emotional dependency appear, characterized by anxiety when access is disrupted and by cognitive reliance on the AI for decision-making. Theoretically, this study contributes to the Human-Machine Communication (HMC) literature by demonstrating that ChatGPT serves as a symbolic actor in digital affective ecosystems, operating through machine agency and artificial emotional awareness (AEA). Practically, the findings highlight the need for affective AI literacy to maintain balanced human-technology relationships. The study concludes that while AI can provide valuable support for emotional regulation, users must develop critical awareness of potential dependency risks and of the limitations of artificial empathy relative to authentic human connection.