Nguyen, Huu-Khanh
Unknown Affiliation

Published: 2 Documents

Articles

Parameter-efficient fine-tuning of small language models for code generation: a comparative study of Gemma, Qwen 2.5 and Llama 3.2
Nguyen, Van-Viet; Nguyen, The-Vinh; Nguyen, Huu-Khanh; Vu, Duc-Quang
International Journal of Electrical and Computer Engineering (IJECE) Vol 16, No 1: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v16i1.pp278-287

Abstract

Large language models (LLMs) have demonstrated impressive capabilities in code generation; however, their high computational demands, privacy limitations, and challenges in edge deployment restrict their practical use in domain-specific applications. This study explores the effectiveness of parameter-efficient fine-tuning for small language models (SLMs) with fewer than 3 billion parameters. We adopt a hybrid approach that combines low-rank adaptation (LoRA) with 4-bit quantization (QLoRA) to reduce fine-tuning costs while preserving semantic consistency. Experiments on the CodeAlpaca-20k dataset reveal that SLMs fine-tuned with this method outperform larger baseline models, including the Phi-3 Mini 4K base model, on ROUGE-L. Notably, applying our approach to the LLaMA 3 3B and Qwen2.5 3B models yielded performance improvements of 54% and 55%, respectively, over their untuned counterparts. We evaluate models from major artificial intelligence (AI) providers, namely Google (Gemma 2B), Meta (LLaMA 3 1B/3B), and Alibaba (Qwen2.5 1.5B/3B), and show that parameter-efficient fine-tuning enables them to serve as cost-effective, high-performing alternatives to larger LLMs. These findings highlight the potential of SLMs as scalable solutions for domain-specific software engineering tasks, supporting the broader adoption and democratization of neural code synthesis.
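
The fine-tuning recipe the abstract describes (LoRA adapters trained on top of a 4-bit quantized base model) can be sketched with the Hugging Face transformers, peft, and bitsandbytes libraries. The checkpoint name, LoRA rank, target modules, and other hyperparameters below are illustrative assumptions, not the configuration reported in the paper.

# Minimal QLoRA-style setup: 4-bit quantized base model plus trainable LoRA adapters.
# Requires: transformers, peft, bitsandbytes, accelerate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "Qwen/Qwen2.5-3B"  # assumed checkpoint; the study also covers Gemma 2B and LLaMA 3 1B/3B

# 4-bit NF4 quantization keeps the frozen base weights small in memory.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Only the low-rank adapter matrices are trained; the quantized base stays frozen.
lora_config = LoraConfig(
    r=16,                       # assumed rank
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total parameters

The resulting model can then be passed to a standard supervised fine-tuning loop (e.g. a Trainer over instruction-formatted CodeAlpaca-20k examples); the dataset preprocessing is omitted here.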
Enhancing Autonomous GIS with DeepSeek-Coder: an open-source large language model approach
Nguyen, Kim-Son; Nguyen, The-Vinh; Nguyen, Van-Viet; Thi, Minh-Hue Luong; Nguyen, Huu-Khanh; Nguyen, Duc-Binh
International Journal of Electrical and Computer Engineering (IJECE) Vol 16, No 1: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v16i1.pp423-436

Abstract

Large language models (LLMs) have paved the way for geographic information systems (GIS) that can solve spatial problems with minimal human intervention. However, current commercial LLM-based GIS solutions impose many limitations on researchers, such as proprietary APIs, high operational costs, and internet connectivity requirements, making them inaccessible in resource-constrained environments. To overcome this, this paper introduces the LLM-Geo framework with the DS-GeoAI platform, integrating the DeepSeek-Coder model (the open-source, lightweight deepseek-coder-1.3b-base version) running directly on Google Colab. This approach eliminates API dependence, thus reducing deployment costs, and ensures data independence and sovereignty. Despite having only 1.3 billion parameters, DeepSeek-Coder proved highly effective, generating accurate Python code for complex spatial analysis and achieving a success rate comparable to commercial solutions. After an automated debugging step, the system achieved 90% accuracy across three case studies. With its strong error-handling capabilities and intelligent sample data generation, DS-GeoAI proves highly adaptable to real-world challenges. Quantitative results showed a cost reduction of up to 99% compared to API-based solutions, while expanding access to advanced geo-AI technology for organizations with limited resources.
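
The local-inference setup implied by the abstract (running deepseek-coder-1.3b-base with Hugging Face transformers, e.g. in a Google Colab notebook, to generate geospatial Python code) can be sketched as follows. The prompt text, file name, and generation settings are illustrative assumptions and not the DS-GeoAI pipeline itself.

# Minimal sketch: local code generation with deepseek-coder-1.3b-base.
# Requires: transformers, accelerate, torch (all installable in Google Colab).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-coder-1.3b-base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# A code-completion style prompt for a spatial-analysis task.
# 'counties.shp' is a hypothetical input file used only for illustration.
prompt = (
    "# Python + geopandas\n"
    "# Task: load 'counties.shp', reproject to EPSG:3857, and compute each county's area in km^2.\n"
    "import geopandas as gpd\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

In a fuller workflow, the generated code would be executed and, on failure, fed back to the model together with the error message for an automated debugging pass, in the spirit of the debugging step the abstract mentions.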