Contact Name
-
Contact Email
magisterkomputer@unpam.ac.id
Phone
+6281316281847
Journal Mail Official
dosen02680@unpam.ac.id
Editorial Address
Universitas Pamulang Viktor, Lt. 3, Jl. Raya Puspitek, Buaran, Kec. Pamulang, Tangerang Selatan, Provinsi Banten
Location
Kota Tangerang Selatan,
Banten
INDONESIA
Jurnal Ilmu Komputer
Published by Universitas Pamulang
ISSN : -     EISSN : 3031-125X     DOI : -
Jurnal Ilmu Komputer is a scientific journal covering Computer Science, Informatics, IoT, Network Security, and Digital Forensics, published regularly by the Master's Program in Informatics Engineering (Teknik Informatika S-2), Graduate School, Universitas Pamulang, Indonesia. Its aim is to provide current, high-quality information to readers interested in developments in science and technology in these fields. Every article published in Jurnal Ilmu Komputer is the result of research, a literature review, or best practice. The journal is published twice a year, in June and December, with 10 articles per issue.
Articles 77 Documents
Analisis Stok Barang Menggunakan Algoritma K-Nearest Neighbor Dan Naïve Bayes Untuk Meningkatkan Efisiensi Persediaan Barang Retail Pada PT. XXX Restu Putra, Catur; Anggai, Sajarwo; Susanto, Agung Budi
Jurnal Ilmu Komputer Vol 4 No 1 (2026): Jurnal Ilmu Komputer (Edisi Januari 2026)
Publisher : Universitas Pamulang


Abstract

Efficient inventory management is a strategic necessity for retail industries that operate under highly fluctuating demand conditions. PT XXX continues to experience inaccuracies in determining stock requirements because the analysis is still carried out manually using a three-month average sales calculation. This approach is unable to capture actual warehouse variations, resulting in frequent overstock and understock conditions. This study develops a machine learning–based stock classification model using the K-Nearest Neighbor (K-NN) and Naïve Bayes algorithms by utilizing key operational warehouse variables, including average sales, ending stock, and Days of Inventory (DOI). The dataset consists of 4,324 records from November 2024 to October 2025 and was processed using Orange Data Mining. Performance evaluation was conducted using accuracy, precision, recall, and the confusion matrix. The results show that K-NN achieved the best performance, with 96.80% accuracy in the prediction model and 93.00% in the Test & Score evaluation, outperforming Naïve Bayes, which achieved approximately 90%. The study also produced a two-level classification mapping stock status (High/Low) and warehouse recommendations (Low/Enough/Excess), which revealed a significant imbalance between High and Low categories. These findings demonstrate that machine learning–based classification methods can enhance stock assessment accuracy and support more adaptive and efficient restocking decisions in retail inventory management.
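The two-model comparison described in this abstract can be sketched as follows with scikit-learn. The feature names (average sales, ending stock, DOI) follow the abstract; the synthetic data, the labeling rule, and the 80:20 split are illustrative assumptions, not the paper's actual dataset or Orange workflow.

```python
# Hedged sketch: K-NN vs. Naive Bayes stock classification on synthetic data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
n = 500
avg_sales = rng.uniform(1.0, 100.0, n)      # average daily sales per item
ending_stock = rng.uniform(0.0, 300.0, n)   # units remaining in the warehouse
doi = ending_stock / avg_sales              # Days of Inventory = stock / daily sales
X = np.column_stack([avg_sales, ending_stock, doi])
y = (doi > 14).astype(int)                  # illustrative rule: 1 = excess stock

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
accs = {}
for name, model in [("K-NN", KNeighborsClassifier(n_neighbors=5)),
                    ("Naive Bayes", GaussianNB())]:
    accs[name] = model.fit(X_tr, y_tr).score(X_te, y_te)
    print(f"{name}: accuracy = {accs[name]:.2f}")
```

Because the synthetic label is a deterministic function of DOI, both models score highly here; on real warehouse data the gap between the two, as the abstract reports, depends on how noisy the features are.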
Analisis BERT dan LDA Untuk Ekstraksi Kebijakan Ekonomi Presiden Prabowo Subianto Muhammad Najwah; Sajarwo Anggai; Sudarno
Jurnal Ilmu Komputer Vol 4 No 1 (2026): Jurnal Ilmu Komputer (Edisi Januari 2026)
Publisher : Universitas Pamulang


Abstract

Economic policies introduced at the beginning of President Prabowo Subianto's administration have generated diverse public discourses reflected in online media coverage. The large volume of textual data necessitates computational approaches to extract information systematically. This study aims to identify, label, and compare major economic policy topics using topic modeling techniques, namely Latent Dirichlet Allocation (LDA) and BERTopic. The dataset consists of 1,000 economic news articles collected through web scraping from an online news portal. Text preprocessing includes normalization, case folding, cleaning, tokenization, and lemmatization. LDA was implemented using a TF-IDF representation and evaluated with the Coherence Score (c_v). BERTopic employed IndoBERT embeddings, UMAP for dimensionality reduction, and HDBSCAN for hierarchical clustering, with evaluation based on topic coherence and semantic interpretability. The results show that LDA generated eight main topics with a Coherence Score (c_v) of 0.61, indicating moderate performance but limited semantic representation, leading to overlapping topics. In contrast, BERTopic produced nine main topics with a higher Coherence Score (c_v) of 0.72 and clearer, more contextual topic labels, including fiscal policy, energy, capital markets, and economic stimulus. Overall, BERTopic outperformed LDA in extracting and labeling economic policy topics due to its superior ability to capture semantic context and form stable topic clusters.
Analisis Sentimen Masyarakat terhadap Program Makan Bergizi Gratis (MBG) pada Media Sosial X Menggunakan Support Vector Machine dan Naïve Bayes Hanifah Puji Lestari
Jurnal Ilmu Komputer Vol 4 No 1 (2026): Jurnal Ilmu Komputer (Edisi Januari 2026)
Publisher : Universitas Pamulang


Abstract

The rapid growth of social media has transformed public communication patterns and positioned platform X as a digital space where citizens actively express their views on government policies, including the Free Nutritious Meal Program (Program Makan Bergizi Gratis/MBG). As a strategic national initiative aimed at improving students' nutritional quality, the implementation of the MBG Program has generated diverse public responses that require systematic analysis. This study aims to identify public sentiment tendencies toward the MBG Program and to compare the classification performance of the Support Vector Machine (SVM) and Naïve Bayes algorithms in sentiment analysis based on social media text. The research data consist of Indonesian-language tweets collected through a web scraping process using keywords related to the MBG Program. The collected data were processed through several text preprocessing stages to reduce noise and enhance data quality. Sentiment labeling was conducted automatically using a lexicon-based approach, classifying tweets into positive, neutral, and negative categories. Feature representation was performed using the Term Frequency–Inverse Document Frequency (TF-IDF) method, and the dataset was divided into training and testing sets with an 80:20 ratio. Sentiment classification was then carried out using the SVM and Naïve Bayes algorithms, with model performance evaluated based on accuracy metrics. The experimental results show that the SVM algorithm achieved an accuracy of 87.57%, outperforming the Naïve Bayes algorithm, which obtained an accuracy of 68.08%. These findings indicate that SVM is more effective in handling high-dimensional and unstructured social media text data.
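The TF-IDF + SVM/Naïve Bayes pipeline with an 80:20 split described in this abstract can be sketched as below. The six repeated example tweets and their labels are invented for illustration; the paper's scraped corpus, preprocessing, and lexicon labeling are not reproduced here.

```python
# Hedged sketch: TF-IDF features, 80:20 split, SVM vs. Naive Bayes.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.svm import LinearSVC

texts = ["program mbg sangat membantu siswa", "makan bergizi gratis bagus sekali",
         "anggaran mbg boros dan tidak tepat", "program ini gagal di lapangan",
         "mbg mendukung gizi anak sekolah", "distribusi makanan kacau dan lambat"] * 10
labels = ["pos", "pos", "neg", "neg", "pos", "neg"] * 10

X = TfidfVectorizer().fit_transform(texts)            # sparse TF-IDF matrix
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)

results = {name: model.fit(X_tr, y_tr).score(X_te, y_te)
           for name, model in [("SVM", LinearSVC()), ("Naive Bayes", MultinomialNB())]}
for name, acc in results.items():
    print(f"{name}: accuracy = {acc:.2f}")
```

On this duplicated toy corpus both classifiers separate the classes trivially; the accuracy gap the abstract reports only emerges on large, noisy real-world text.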
Analisis Sentimen Terhadap Chatgpt Dan Gemini Dengan Algoritma K-Nearest Neighbor, Decision Tree Dan Naïve Bayes Dika Prasetya
Jurnal Ilmu Komputer Vol 4 No 1 (2026): Jurnal Ilmu Komputer (Edisi Januari 2026)
Publisher : Universitas Pamulang


Abstract

The rapid development of artificial intelligence technology has increased the widespread use of AI-based chatbots such as ChatGPT and Gemini. The extensive adoption of these technologies has generated diverse public opinions, which are frequently expressed through social media platforms, particularly X (Twitter). This study aims to analyze public sentiment toward ChatGPT and Gemini and to compare the performance of three classification algorithms, namely Naïve Bayes, K-Nearest Neighbor (KNN), and Decision Tree, in sentiment classification tasks. This research employs a quantitative approach using text mining techniques. The dataset consists of tweets collected through a crawling process using the Python programming language, based on keywords related to ChatGPT and Gemini. Data preprocessing includes data cleansing, case folding, tokenization, stopword removal, and stemming. Sentiment labels, categorized into positive, neutral, and negative classes, are assigned using the VADER lexicon-based approach. Text data are then transformed into numerical features using the Term Frequency–Inverse Document Frequency (TF-IDF) method. The dataset is divided into training and testing sets for model development and evaluation. The experimental results indicate that the Naïve Bayes algorithm outperforms the other models, achieving an accuracy of 57.26%, followed by Decision Tree with 54.98%, and KNN with 41.59%. Further evaluation using precision, recall, and F1-score metrics confirms that Naïve Bayes provides more stable performance in handling high-dimensional text data. These findings suggest that Naïve Bayes is the most effective algorithm for sentiment analysis of short text data on social media platforms.
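The lexicon-based labeling step this abstract describes can be sketched in plain Python: score a tweet by summing word polarities from a lexicon, then bucket the score into positive, neutral, or negative. The tiny lexicon, its weights, and the threshold here are illustrative stand-ins, not VADER's actual lexicon or its compound-score normalization.

```python
# Hedged sketch: VADER-style lexicon labeling with an invented mini-lexicon.
LEXICON = {"good": 1.9, "great": 3.1, "helpful": 1.8,
           "bad": -2.5, "slow": -1.2, "broken": -2.1}

def label_sentiment(text: str, threshold: float = 0.5) -> str:
    """Sum word polarities; unknown words contribute 0."""
    score = sum(LEXICON.get(w, 0.0) for w in text.lower().split())
    if score >= threshold:
        return "positive"
    if score <= -threshold:
        return "negative"
    return "neutral"

print(label_sentiment("chatgpt is great and helpful"))  # positive
print(label_sentiment("gemini feels slow and broken"))  # negative
print(label_sentiment("just tried both chatbots"))      # neutral
```

Real use of VADER, as in the paper, would call `SentimentIntensityAnalyzer` from the `vaderSentiment` (or NLTK) package rather than this toy scorer.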
Analisis Topik Dan Sentimen Berbasis Algoritma Latent Dirichlet Allocation (LDA) Dan Bidirectional Encoder Representations From Transformers (BERT): Studi Kasus Ulasan Pelanggan Pada E-Commerce ruparupa.com Permana, Surya; Sajarwo Anggai; Taswanda Taryo
Jurnal Ilmu Komputer Vol 4 No 1 (2026): Jurnal Ilmu Komputer (Edisi Januari 2026)
Publisher : Universitas Pamulang


Abstract

The growth of e-commerce in Indonesia has led to an increasing volume of customer reviews containing vital information. These reviews are generally in the form of unstructured text, necessitating text analysis methods to extract meaningful insights. This study aims to analyze topics and sentiments in customer reviews of the e-commerce platform ruparupa.com by utilizing the Latent Dirichlet Allocation (LDA) and Bidirectional Encoder Representations from Transformers (BERT) algorithms. The LDA algorithm is used to identify the main topics frequently discussed by customers, while BERT is employed to classify review sentiments into positive, negative, and neutral categories. Sentiment labels are assigned automatically (auto-labeling) using a lexicon-based approach with VADER. The preprocessing stage includes cleaning, case folding, and stemming using the Sastrawi library to ensure the quality of the input data. The LDA algorithm is implemented to extract latent topic structures, which are then mapped into five main categories: Price, Application, Service, Product Quality, and Delivery. Furthermore, the DistilBERT model is trained through a fine-tuning process using the AdamW optimizer for 3 epochs. The sentiment analysis results indicate that the model demonstrates very strong performance, as reflected by high accuracy and consistently optimal precision, recall, and F1-score across all sentiment classes. This customer sentiment distribution reflects the level of user satisfaction with the services of ruparupa.com. The combination of the LDA and BERT methods is proven effective in providing an overview of key issues and customer perceptions.
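The preprocessing steps named in this abstract (cleaning, case folding, stemming) can be sketched as a small pipeline. Sastrawi stemming is replaced here by a crude, purely illustrative suffix-stripping rule; real use would call the Sastrawi library as the paper does, and the example review is invented.

```python
# Hedged sketch: cleaning, case folding, tokenization, and a toy "stemmer".
import re

def preprocess(review: str) -> list[str]:
    text = review.lower()                      # case folding
    text = re.sub(r"[^a-z\s]", " ", text)      # cleaning: drop digits/punctuation
    tokens = text.split()                      # tokenization on whitespace
    # Crude stand-in for Sastrawi stemming: strip the common suffix "-nya".
    return [t[:-3] if t.endswith("nya") else t for t in tokens]

print(preprocess("Pengirimannya cepat, harga OK! 5/5"))
```

The real Sastrawi stemmer handles Indonesian prefixes, infixes, and suffixes with proper morphological rules; this one-rule version only signals where that step sits in the pipeline.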
Perancangan dan Strategi Tata Kelola TI Menggunakan COBIT 2019 pada PT. XYZ Iqbal Habib Al Baqi; Winarni; Taswanda Taryo
Jurnal Ilmu Komputer Vol 4 No 1 (2026): Jurnal Ilmu Komputer (Edisi Januari 2026)
Publisher : Universitas Pamulang


Abstract

PT XYZ faces unstructured and reactive IT governance challenges, characterized by data silos and slow operational application response times. This gap hinders strategic alignment and operational efficiency within an institution that has significant responsibility for public services. This study uses the COBIT 2019 framework to design a systematic and contextual governance strategy. The novelty of this research lies in the holistic integration of the 10 Design Factors in the public sector with a high security risk profile, which has not been widely explored. A descriptive qualitative methodology was applied through in-depth interviews, observations, and document studies with key informants from the managerial to operational levels. The analysis follows the COBIT 2019 Design Guide workflow and is validated through triangulation techniques and member checking. The analysis results show a focus on service stability with high security risks (40%) and strict compliance (30%). Of the 40 objectives, 14 priority Governance and Management Objectives (GMOs) were identified, focusing on risk optimization and external compliance. The research produced a three-phase strategic roadmap aligned with the organization's objectives and the Personal Data Protection Law regulations. This strategy transforms IT governance at PT. XYZ into a more structured, accountable, and adaptive framework.
Penerapan Web-Based Homecare Management System Dengan Fitur Reservasi Dan Konsultasi Virtual Untuk Meningkatkan Efisiensi Layanan Kesehatan Homecare Aktavia, Widodo; Nilovar Asyiah
Jurnal Ilmu Komputer Vol 4 No 1 (2026): Jurnal Ilmu Komputer (Edisi Januari 2026)
Publisher : Universitas Pamulang


Abstract

Homecare services have become an important solution for providing health care at home, especially for patients who need long-term care or cannot access health facilities directly, such as the elderly, post-natal mothers, and babies. However, challenges in schedule management and service efficiency often become obstacles in homecare service operations. This research aims to develop and implement a web-based homecare information system with a reservation feature to increase the efficiency of health services. The case study involves Yunia Khomsati, A.Md.Keb., SKM., M.K.M., a public health master's graduate who, together with a fellow graduate and two nursing assistants, provides care at home for people in her community who need it. The system is designed to facilitate access to homecare services: managing schedules, recording the addresses of homecare providers, and supporting communication between patients and providers when patients want to be served at home. Implementing the system makes it easier for patients to learn that care can be provided at home, to make online reservations, and to see the rates for services provided by nurses. Two types of service are available: the patient visits the practitioner's home, or a nurse comes to the patient's house. This research is expected to yield effective and practical solutions for improving the efficiency of technology-based homecare services.