This paper presents the implementation of the core functionalities of a Hearing Support System (HSS) and validates its engineering feasibility. The system addresses the limitations of conventional hearing aids, namely their limited support for personalized calibration and environmental adaptation. The proposed HSS is a smartphone application-based system with three key functions: personalized settings derived from individual audiogram profiles, environment-specific presets, and real-time speech translation with textual display. For the system's auxiliary output, an implementation of a Hangul (Korean) text display is presented. A comparative analysis between a low-cost ESP32-based implementation (using bitmap fonts) and a Raspberry Pi-based counterpart (using vector fonts) empirically demonstrates that vector fonts are necessary to enable font scaling, which is crucial for users with low vision. For speech recognition, the study transforms one-dimensional time-series audio waveforms into two-dimensional 'sound images,' specifically spectrograms, which serve as input to a Convolutional Neural Network (CNN). In conclusion, this research successfully prototyped the core functionalities of the HSS at the Proof of Concept (PoC) level using existing tools, confirming their integration feasibility. Nevertheless, several key tasks remain for practical deployment: refining the preset functionality, removing dependencies on external APIs, and fundamentally improving speech recognition performance by adopting deeper CNN architectures.
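The waveform-to-spectrogram transformation described above can be sketched as follows. This is a minimal NumPy illustration, not the paper's actual pipeline: the frame length, hop size, window choice, and log compression are illustrative assumptions, and a real system would feed the resulting 2-D array into a trained CNN.

```python
import numpy as np

def spectrogram(wave, frame_len=256, hop=128):
    """Convert a 1-D audio waveform into a 2-D magnitude spectrogram
    (a 'sound image') suitable as CNN input.
    frame_len and hop are hypothetical parameters."""
    window = np.hanning(frame_len)          # taper each frame to reduce spectral leakage
    n_frames = 1 + (len(wave) - frame_len) // hop
    frames = np.stack([wave[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # Magnitude of the real FFT of each frame: one column of the image per frame
    spec = np.abs(np.fft.rfft(frames, axis=1))
    # Log compression, commonly applied before CNN input
    return np.log1p(spec).T                 # shape: (freq_bins, time_frames)

# Example: one second of a 440 Hz tone sampled at 16 kHz
fs = 16000
t = np.arange(fs) / fs
img = spectrogram(np.sin(2 * np.pi * 440 * t))
print(img.shape)  # 2-D 'sound image': frequency axis x time axis
```

In practice, a library routine such as `scipy.signal.spectrogram` would typically replace this hand-rolled version; the point here is only that the 1-D signal becomes a 2-D array on which standard image-oriented CNN layers can operate.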