Performance Evaluation of Cloud-Init as Deployment Automation, Virtual Machine, and LXC Container on Proxmox VE for AI LLM Deployment
Jody, Jody; Riandhito, Febry Aryo; Yusuf, Rika; Saputra, Anggi; Riwurohi, Jan Everhard
Jurnal Sisfokom (Sistem Informasi dan Komputer) Vol. 15 No. 01 (2026): JANUARY
Publisher : ISB Atma Luhur

DOI: 10.32736/sisfokom.v15i01.2562

Abstract

As Artificial Intelligence (AI) becomes increasingly embedded in digital certification systems, stable and efficient environments for running Large Language Models (LLMs) are essential. AI-based chatbots assist both candidates taking online examinations at professional certification institutions and the examiners administering them. However, the optimal environment for running AI inference workloads remains unclear, since virtualization approaches differ in resource consumption and cost. This study aims to identify the optimal deployment environment by assessing Cloud-Init, a Virtual Machine (VM), and a Linux Container (LXC) within the Proxmox Virtual Environment (VE). Each environment ran Ollama and FastAPI with the Phi3:3.8b model on identical hardware (4 vCPU, 16 GB RAM, 32 GB SSD, 80 Mbps). Key metrics were measured: CPU and memory usage, disk and network throughput, latency, and response time. The tests showed that LXC achieved the highest disk throughput (2.45 MB/s) and network throughput (3.33 MB/s), while the VM recorded the longest response time (15.64 s) and the highest latency (6.89 ms). Cloud-Init produced mixed results: it simplified automation but was less efficient at runtime. These findings indicate that hybrid orchestration combining Cloud-Init and LXC is the most effective approach for large-scale certification systems, balancing flexible AI deployment with inference speed. The methodology section describes the experimental process in detail, including the benchmark tools (Hey CLI, Sysbench, Prometheus), the number of test repetitions (three sessions per environment), and the comparative data analysis methods used to ensure result validity. The conclusion emphasizes the scientific implications by explaining how Cloud-Init’s automation capabilities can be combined with LXC’s performance efficiency to improve AI inference deployments in scalable, institutional environments.
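The comparative analysis the abstract describes (three benchmark sessions per environment, with per-metric averages compared across environments) could be sketched as below. This is a minimal illustration only: the environment labels follow the paper, but the sample response-time values are made up for demonstration and are not the study's actual measurements.

```python
from statistics import mean

# Illustrative benchmark results: three sessions per environment,
# each recording mean response time in seconds (lower is better).
# These numbers are placeholders, not the paper's data.
sessions = {
    "vm":         [15.7, 15.6, 15.6],
    "lxc":        [9.1, 9.3, 9.0],
    "cloud-init": [10.2, 10.0, 10.4],
}

# Average each environment's three sessions, then rank by response time.
averages = {env: mean(times) for env, times in sessions.items()}
ranking = sorted(averages, key=averages.get)

for env in ranking:
    print(f"{env}: {averages[env]:.2f} s")
```

The same averaging-then-ranking step would be repeated for each metric (CPU, memory, disk and network throughput, latency) to build the cross-environment comparison.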