The Impact of Parameter Scaling: Analysis of Specific Large Language Model Capabilities Putera, Ariya Uttama; Marcellino, Felix; Manalu, Sonya Rapinta; Muhamad, Keenan Ario
International Journal of Computer Science and Humanitarian AI Vol. 3 No. 1 (2026): IJCSHAI (In Press)
Publisher : Bina Nusantara University

DOI: 10.21512/ijcshai.v3i1.15119

Abstract

Large Language Models (LLMs) are currently very diverse; some of the largest include ChatGPT, Gemini, Microsoft Copilot, Claude Sonnet, Grok, and DeepSeek. This research aims to determine how efficient these AI models can be, based on their strengths in LLM training. In this study, we examine the impact of LLM scaling parameters on the results of each local model we test. The study also limits the number of parameters and classifies the questions to be asked. From these questions, we can identify which local LLM models perform better when asked the same questions, and then evaluate each of them objectively based on the results. The study thus aims to establish a correlation between scaling parameters and results. We also hope it will help users select AI that suits their needs and expand their knowledge of AI so they can work more efficiently and accurately. Based on the results obtained, we conclude that local LLMs with large parameter counts are not always better or more efficient: the 12B-parameter Gemma3 model, for example, did not produce better results than the 4B-parameter Gemma3 model. Alternatively, on hardware similar to ours, GPT-oss (openai/gpt-oss-20B) and Qwen3 (Qwen/Qwen3-4B & Qwen/Qwen3-8B) offer good results in terms of reasoning and inference speed.
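The abstract describes asking each local model the same set of classified questions and comparing answers and speed. A minimal sketch of such an evaluation loop is shown below; `query_model` is a hypothetical stand-in for a call to a locally hosted inference server, not the authors' actual harness.

```python
import time

def query_model(model_name, question):
    # Hypothetical stand-in: in practice this would call a local
    # inference server hosting e.g. Qwen/Qwen3-4B or a Gemma3 model.
    return f"{model_name} answer to: {question}"

def evaluate(models, questions_by_category):
    """Ask every model the same categorized questions, timing each reply."""
    results = []
    for model in models:
        for category, questions in questions_by_category.items():
            for q in questions:
                start = time.perf_counter()
                answer = query_model(model, q)
                latency = time.perf_counter() - start
                results.append({"model": model, "category": category,
                                "question": q, "answer": answer,
                                "latency_s": latency})
    return results

results = evaluate(
    ["Qwen/Qwen3-4B", "google/gemma-3-12b"],
    {"reasoning": ["If A > B and B > C, is A > C?"]},
)
```

Because every model sees the identical question set, per-category accuracy and latency can then be compared directly across parameter scales.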
PoseTracker: Accuracy Evaluation of AI-Based Mobile Application for Exercise Posture Feedback Collhins, Billy; Mitta, Kalyana; Gunawan, Christian; Manalu, Sonya Rapinta
International Journal of Computer Science and Humanitarian AI Vol. 3 No. 1 (2026): IJCSHAI (In Press)
Publisher : Bina Nusantara University

DOI: 10.21512/ijcshai.v3i1.15123

Abstract

In recent years, rising public health awareness has increased participation in fitness activities. However, improper exercise form remains a significant contributor to injuries, particularly in unsupervised environments. To address this, we evaluated the accuracy of PoseTracker, a native Android application that provides real-time feedback on exercise posture through a MediaPipe-based Human Pose Estimation (HPE) model. The system extracts 33 3D body landmarks, normalizes them to account for body scale, and employs cosine similarity to compare user movements against a reference dataset. Evaluations involving participants aged 17 to 50 and 240 repetitions across four exercises demonstrated high detection accuracy: 88.33% for jumping jacks, 85% for squats, 83.33% for push-ups, and 82% for sit-ups. While performance can be influenced by environmental factors such as inconsistent lighting, camera positioning, and incomplete body visibility, these results highlight the potential of lightweight, AI-driven tools to support safe, self-guided fitness routines. Overall, the evaluations indicate that PoseTracker reliably distinguishes correct from incorrect exercise posture across multiple movement types under realistic conditions. Although performance varies with environmental and system constraints, the observed accuracy levels demonstrate the feasibility of MediaPipe-based HPE for practical posture assessment in mobile fitness applications.
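The pipeline the abstract outlines (extract 3D landmarks, normalize for body scale, compare with cosine similarity) can be sketched as follows. This is an illustrative implementation under assumed conventions (landmarks as (x, y, z) tuples, normalization by centering and unit scaling), not PoseTracker's exact code.

```python
import math

def normalize(landmarks):
    """Center landmarks on their mean and scale them to unit size,
    so body size and position in the frame do not affect comparison."""
    n = len(landmarks)
    cx = sum(p[0] for p in landmarks) / n
    cy = sum(p[1] for p in landmarks) / n
    cz = sum(p[2] for p in landmarks) / n
    centered = [(x - cx, y - cy, z - cz) for x, y, z in landmarks]
    scale = math.sqrt(sum(x*x + y*y + z*z for x, y, z in centered)) or 1.0
    return [(x/scale, y/scale, z/scale) for x, y, z in centered]

def pose_similarity(a, b):
    """Cosine similarity between two normalized, flattened landmark sets
    (MediaPipe Pose produces 33 such landmarks per frame)."""
    va = [c for p in normalize(a) for c in p]
    vb = [c for p in normalize(b) for c in p]
    dot = sum(x * y for x, y in zip(va, vb))
    na = math.sqrt(sum(x * x for x in va))
    nb = math.sqrt(sum(x * x for x in vb))
    return dot / (na * nb)
```

With this normalization, a user whose pose matches the reference but who stands closer to the camera (so all coordinates are scaled up) still scores a similarity of 1.0, which is the point of accounting for body scale before comparing.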
An End-to-End Architecture for Stock Market Prediction Integrating Mobile Application, Backend Services, and ML/DL Models Wilham, Abraham Kefas; William, William; Manalu, Sonya Rapinta
International Journal of Computer Science and Humanitarian AI Vol. 3 No. 1 (2026): IJCSHAI (In Press)
Publisher : Bina Nusantara University

DOI: 10.21512/ijcshai.v3i1.15154

Abstract

Prior research on stock market prediction has predominantly focused on algorithmic accuracy, leaving a significant gap in the system-level realization required for real-world delivery. This paper addresses that gap by presenting an end-to-end stock prediction delivery system that operationalizes trained machine learning models within a mobile-centric architecture. Unlike model-centric studies limited to offline evaluation, this work addresses the scarcity of system-level implementations. Market data are periodically ingested into a managed relational database, where predictions are generated using a fixed historical window and persisted for downstream access. A cross-platform mobile application serves as the primary user interface, providing structured access to historical prices, predictions, and accuracy metrics via backend APIs without local model inference. A key novelty is an in-memory caching layer that optimizes responsiveness for repeated mobile access. Experimental results demonstrate that this architecture significantly improves efficiency, reducing average API response times by approximately 94%, from 817 ms to 48.7778 ms, compared with direct database queries. These findings underscore the critical role of mobile-oriented system design in bridging the gap between predictive modeling and practical deployment.
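The in-memory caching layer the abstract credits for the latency reduction is a classic cache-aside pattern: serve repeated reads from memory and fall back to the database only on a miss. A minimal sketch under assumed details (a TTL-based dictionary cache and a hypothetical `query_db` callback) looks like this:

```python
import time

class TTLCache:
    """Minimal in-memory cache with a time-to-live per entry,
    illustrating the cache-aside pattern for repeated mobile API reads."""

    def __init__(self, ttl_seconds=60.0):
        self.ttl = ttl_seconds
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if time.monotonic() > expires:
            del self._store[key]   # entry is stale; evict it
            return None
        return value

    def set(self, key, value):
        self._store[key] = (value, time.monotonic() + self.ttl)

def fetch_prediction(ticker, cache, query_db):
    """Serve from the cache when possible; fall back to the database."""
    cached = cache.get(ticker)
    if cached is not None:
        return cached              # fast path: in-memory hit
    value = query_db(ticker)       # slow path: relational database query
    cache.set(ticker, value)
    return value
```

Since predictions are regenerated only periodically, a short TTL keeps cached values fresh while letting the many repeated mobile reads between refreshes skip the database entirely, which is the mechanism behind the reported drop in average response time.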