Articles

Found 37 Documents

Introduction of LoRa Communication System and Remote Control System in Agricultural Automation With Internet of Things Prabowo, Yani; Riwurohi, Jan Everhard; Windihastuti, Wiwin; Hasan, Fuad
Journal of Computer Science Advancements Vol. 3 No. 2 (2025)
Publisher : Yayasan Adra Karima Hubbi

DOI: 10.70177/jsca.v3i2.2230

Abstract

This research focuses on integrating a LoRa (Long Range) communication system and a remote control system into agricultural automation with the Internet of Things (IoT), using ESP32, Arduino Nano, and STM32 microcontrollers, with the aim of improving the efficiency of smart agricultural management. LoRa serves as a long-range wireless communication protocol for collecting data from sensors distributed widely across farmland, such as soil moisture and temperature sensors. The ESP32 microcontroller acts as the main controller, processing sensor data and sending it in real time to the control center over the LoRa network. Modbus is used as a standard serial communication protocol to connect sensors, actuators, and other devices, ensuring compatibility between them. In addition, Node-RED provides a graphical user interface (GUI) to manage data flow, control automation processes, and give users real-time data visualization. The result of this research is a stable integration between the sensor and communication systems. The novelty of this research lies in combining LoRa, ESP32, Modbus, and Node-RED to create a reliable and efficient agricultural automation system, enabling remote management of irrigation, fertilization, and environmental monitoring, thereby increasing agricultural productivity and optimizing resource use.
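The abstract names Modbus as the serial protocol tying sensors and actuators together. As a hedged illustration (not the authors' code), the sketch below implements the CRC-16/Modbus checksum that terminates every Modbus RTU frame, using the standard parameters (polynomial 0xA001 reflected, initial value 0xFFFF); the example request bytes are a generic "read one holding register" frame, not taken from the paper.

```python
def crc16_modbus(frame: bytes) -> int:
    """CRC-16/Modbus: reflected polynomial 0xA001, initial value 0xFFFF."""
    crc = 0xFFFF
    for byte in frame:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Example: read one holding register from slave 1 (function code 0x03).
pdu = bytes([0x01, 0x03, 0x00, 0x00, 0x00, 0x01])
crc = crc16_modbus(pdu)
frame = pdu + crc.to_bytes(2, "little")  # CRC is transmitted low byte first
print(frame.hex())  # 010300000001840a
```

A receiving node recomputes the CRC over the whole frame including the two CRC bytes; a result of zero means the frame arrived intact.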
Implementation of Large Language Models in Multi-Domain Psychology: A Systematic Literature Review Ansor, Mohamad Zakaria; Ari Kusuma, Dyah Topan; Riwurohi, Jan Everhard
Jurnal Pendidikan dan Teknologi Indonesia Vol 5 No 11 (2025): JPTI - November 2025
Publisher : CV Infinite Corporation

DOI: 10.52436/1.jpti.1105

Abstract

The implementation of large language models (LLMs) in psychology presents significant opportunities to improve diagnosis, clinical decision-making, and medical research. This study conducted a systematic literature review to explore recent research on LLM applications in psychology. Following the PRISMA guidelines, the literature search was performed on the ScienceDirect database. Inclusion and exclusion criteria were applied to identify relevant studies. The extracted data covered research objectives, methodology, application area, type of data used, key findings, and outcomes. A total of 20 studies were included after the selection process. This review provides a comprehensive overview of LLM applications in psychology, identifying opportunities, challenges, and future research directions useful for researchers, practitioners, and policymakers. The findings indicate that integrating LLMs into psychological practice has transformative potential to improve the quality and accessibility of mental health services, but requires the development of comprehensive ethical and regulatory frameworks to ensure safe and effective implementation.
Comparative Analysis of the Efficiency and Performance of Intel Xeon 6 and AMD EPYC 9004 Processors in a Virtualized Server Environment Oktora, Andre; K, Irvan; K, Johanes H; Ridwan, Mohamad; Riwurohi, Jan Everhard
Jurnal TIMES Vol 14 No 2 (2025): Jurnal TIMES
Publisher : STMIK TIME


Abstract

Rising power consumption in global data centers has made energy efficiency (performance per watt) a crucial metric when selecting modern server processors, especially in cloud computing and container-based virtualization environments. This study presents a comparative analysis of performance (relative throughput) and energy efficiency between the Intel Xeon 6 processor (hybrid architecture) and the AMD EPYC 9004 (Zen 4 architecture with 96 cores) under scenarios of increasing container workloads. The study uses a quantitative, simulation-based approach built on secondary data, implementing a mathematical model that replicates performance degradation and rising power consumption as the number of containers grows (from 10 to 100). The simulation results show that the AMD EPYC 9004 is significantly superior. Not only does it maintain higher absolute throughput across all workloads (up to 463.30 at 100 containers), it also scales better (minimal degradation). This performance advantage yields superior energy efficiency (reaching 2.47), demonstrating that a high-core-density architecture can compensate for a slightly higher TDP and deliver a more economical performance-per-watt ratio. It is concluded that the AMD EPYC 9004 is the more optimal choice for data center operators seeking stable, energy-efficient, high-performance solutions for intensive virtualization workloads.
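The study's simulation models throughput degradation and power growth as container count rises, and derives energy efficiency as performance per watt. A minimal sketch of that kind of model follows; every coefficient here (base throughput, degradation rate, idle power, per-container power) is hypothetical and chosen only for illustration, not taken from the paper.

```python
# Illustrative performance-per-watt model. All coefficients are hypothetical.
def throughput(n, base=500.0, degradation=0.004):
    """Relative throughput with n containers, degrading linearly with load."""
    return base * max(0.0, 1.0 - degradation * n)

def power(n, idle=150.0, per_container=0.5):
    """Power draw in watts as the container count n grows."""
    return idle + per_container * n

def energy_efficiency(n):
    """Performance per watt at n containers."""
    return throughput(n) / power(n)

for n in (10, 50, 100):
    print(n, round(throughput(n), 2), round(energy_efficiency(n), 2))
```

Sweeping `n` from 10 to 100 and comparing the two processors' curves, as the paper does, then reduces to fitting each processor's degradation and power coefficients and reading off which curve stays higher.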
Performance Evaluation of Cloud-Init as Deployment Automation, Virtual Machine, and LXC Container on Proxmox VE for AI LLM Deployment Jody, Jody; Riandhito, Febry Aryo; Yusuf, Rika; Saputra, Anggi; Riwurohi, Jan Everhard
Jurnal Sisfokom (Sistem Informasi dan Komputer) Vol. 15 No. 01 (2026): JANUARY
Publisher : ISB Atma Luhur

DOI: 10.32736/sisfokom.v15i01.2562

Abstract

As Artificial Intelligence (AI) becomes increasingly embedded in digital certification systems, it is important to create stable and efficient environments for serving Large Language Models (LLMs). AI-based chatbots assist both candidates taking online examinations at professional certification institutions and the staff administering those examinations. However, it remains unclear which environment is best suited to running AI inference workloads, because different virtualization approaches differ in resource consumption and cost. This study aims to identify the optimal deployment environment by assessing Cloud-Init, Virtual Machines (VM), and Linux Containers (LXC) within the Proxmox Virtual Environment (VE). Each environment ran Ollama and FastAPI on identical hardware (4 vCPU, 16 GB RAM, 32 GB SSD, 80 Mbps) with the Phi3:3.8b model. The study measured key metrics including CPU and memory usage, disk and network throughput, latency, and response time. The tests showed that LXC achieved the highest disk throughput (2.45 MB/s) and network throughput (3.33 MB/s), while the VM recorded the longest response time (15.64 s) and the highest latency (6.89 ms). Cloud-Init produced mixed results: it simplified automation but was less effective on its own. These results suggest that hybrid orchestration combining Cloud-Init and LXC offers the best balance between flexible and fast AI deployment for large-scale certification systems. The methodology section describes the experimental process, including the benchmark tools (Hey CLI, Sysbench, Prometheus), the number of test repetitions (three sessions per environment), and the comparative data analysis methods used to ensure result validity. The conclusion emphasizes the scientific implications by explaining how Cloud-Init's automation capabilities can be combined with LXC's performance efficiency to improve AI inference deployments in scalable, institutional environments.
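The abstract reports three measurement sessions per environment, with mean response times used to rank the environments. A hedged sketch of that aggregation step is below; the per-session numbers are invented for illustration (shaped only loosely like the reported ranking, with LXC fastest and the VM slowest) and are not the study's raw data.

```python
from statistics import mean

# Hypothetical per-session mean response times in seconds, three sessions
# per environment, as in the paper's test protocol.
sessions = {
    "VM":         [15.9, 15.5, 15.5],
    "LXC":        [9.8, 10.1, 10.0],
    "Cloud-Init": [12.2, 12.0, 12.4],
}

averages = {env: mean(times) for env, times in sessions.items()}
fastest = min(averages, key=averages.get)

for env, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{env}: {avg:.2f} s")
print("fastest:", fastest)
```

With real data, the same structure extends to the other metrics (CPU, memory, disk and network throughput, latency) by keeping one such table per metric and comparing environment-wise means.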
Integration of YOLOv8 and OCR as an E-KTP Data Extraction and Validation Solution for Digital Administration Automation Gumirang, Lalang; Riwurohi, Jan Everhard; Pramono, Agung
Eduvest - Journal of Universal Studies Vol. 5 No. 11 (2025): Eduvest - Journal of Universal Studies
Publisher : Green Publisher Indonesia

DOI: 10.59188/eduvest.v5i11.52365

Abstract

The exchange of personal data in Indonesia remains predominantly manual, involving form-filling and photocopying of electronic identity cards (e-KTP), despite the availability of embedded electronic chips designed for automated data processing. This study proposes an integrated data extraction and validation system combining YOLOv8 for precise region detection and Optical Character Recognition (OCR) with advanced preprocessing techniques for textual information extraction. Unlike previous approaches relying solely on OCR (e.g., Vision AI), this method employs YOLOv8 object detection to accurately localize key fields (NIK, Name, Address) before text extraction, followed by validation through the DUKCAPIL API. The system was evaluated using 20 e-KTP images captured under various conditions. Results demonstrate that the proposed approach achieves an average OCR accuracy of 98.7% with an Intersection over Union (IoU) of 0.975, significantly outperforming baseline Vision AI extraction by 15–20%. All extracted data successfully passed validation against the official DUKCAPIL database, confirming 100% authenticity verification. This system provides an economical and efficient solution for automating population data administration, particularly suitable for small non-governmental organizations with limited budgets. The integration of deep learning-based object detection and preprocessed OCR offers a robust framework for digital identity verification systems.
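The abstract evaluates detection quality with Intersection over Union (IoU), the standard overlap metric between a predicted box and a ground-truth box. A minimal sketch of the metric is below; the example coordinates are illustrative, not from the paper's e-KTP dataset.

```python
def iou(box_a, box_b):
    """Intersection over Union of two (x1, y1, x2, y2) boxes."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to zero when boxes are disjoint.
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

# A detected NIK field box vs. a hand-labeled ground-truth box.
print(round(iou((10, 20, 210, 60), (12, 22, 208, 58)), 3))  # 0.882
```

An average IoU of 0.975, as reported, means the YOLOv8 boxes almost exactly coincide with the labeled field regions before OCR is applied.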
Implementation and Analysis of Distributed Cache Architecture Between Virtual Machines in VMware to Reduce Memory Access Latency Riwurohi, Jan Everhard; Syahrir, Muh.; Muslich, Muhammad Farid; Nurman, Indra; Adriansyah, A.
Golden Ratio of Data in Summary Vol. 6 No. 1 (2026): November - January
Publisher : Manunggal Halim Jaya

DOI: 10.52970/grdis.v6i1.1838

Abstract

Virtualization technology allows multiple virtual machines (VMs) to run on a single physical machine, improving efficiency and flexibility. However, virtualized systems often face performance problems such as high memory access latency and repeated data requests between VMs. To address this issue, this study implements a distributed caching system using Redis as an in-memory cache shared between virtual machines. The experiment was conducted on the VMware vSphere platform using two virtual machines: one VM acted as a Redis cache server, and the other as a client for testing. Both VMs were connected using a host-only network to ensure stable communication. Testing was performed in two scenarios: without cache and with Redis cache, each executed 10 times. The main metric measured was response time in seconds. The results show a clear performance improvement after using Redis. The average response time without cache was 0.0113 seconds, while with Redis cache it decreased to 0.00046 seconds. This indicates that Redis reduced memory access latency by approximately 97.6%. The system also remained stable during testing without any connection issues. In conclusion, implementing a distributed caching architecture using Redis effectively improves response time, reduces memory access latency, and enhances system performance in a VMware virtualized environment. This study can serve as a reference for developing more efficient and responsive virtualization systems in modern computing environments.
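The mechanism behind the reported speedup is the cache-aside pattern: serve reads from the in-memory cache and fall back to the slow path only on a miss. The sketch below illustrates the pattern with a plain dict standing in for the Redis server and a deliberately slow function standing in for the uncached memory access; a real deployment would use a Redis client's GET/SET against the cache VM instead.

```python
import time

CALLS = {"backend": 0}  # counts how often the slow path is taken

def backend_read(key):
    """Stand-in for the uncached access path Redis would shield."""
    CALLS["backend"] += 1
    time.sleep(0.01)  # simulate a ~10 ms uncached read
    return f"value-of-{key}"

cache = {}  # plain dict standing in for the Redis cache server

def cached_read(key):
    """Cache-aside: serve from cache, populate from the backend on a miss."""
    if key not in cache:
        cache[key] = backend_read(key)
    return cache[key]

cached_read("sensor:42")   # miss: hits the backend once
cached_read("sensor:42")   # hit: served from memory
print(CALLS["backend"])    # 1
```

Every repeated request after the first is answered from memory, which is why the measured average response time drops by orders of magnitude once the cache is warm.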
A Systematic Literature Review and Bibliometric Analysis of Ethical and Governance Issues in Artificial Intelligence for Military Applications and Warfare Bambang Suharjo; Sunardi, Dendi; Jan Everhard
Jurnal Teknologi Sistem Informasi dan Aplikasi Vol. 9 No. 1 (2026): Jurnal Teknologi Sistem Informasi dan Aplikasi
Publisher : Program Studi Teknik Informatika Universitas Pamulang

DOI: 10.32493/jtsi.v9i1.58508

Abstract

The rapid advancement of Artificial Intelligence (AI) in military applications has raised a range of ethical and governance concerns, particularly regarding the use of Autonomous Weapon Systems (AWS) in making lethal decisions without direct human involvement. While these developments offer strategic advantages, they also introduce significant challenges in ensuring accountability, transparency, and compliance with international humanitarian law. This study aims to systematically examine and map the knowledge structure and global research trends related to ethical and governance issues of AI in the military domain. The research adopts a Systematic Literature Review (SLR) approach based on the PRISMA protocol, combined with bibliometric analysis of 469 articles published between 2020 and 2025. The analysis is conducted using VOSviewer to identify thematic clusters, relationships among research topics, and the overall density of scholarly discourse. The findings reveal seven major thematic clusters, including ethical foundations and human-centric approaches, operational systems and decision-making, robotics and autonomous systems, military applications and strategy, governance and regulatory frameworks, ethical principles and accountability, and technical foundations based on machine learning. Network visualization indicates that ethical issues are closely interconnected with governance as the central focus of the discourse, while density analysis shows that the terms “artificial intelligence,” “ethics,” and “application” dominate the research landscape. The study also highlights a gap between normative ethical frameworks and practical implementation in the development and deployment of AI in military contexts. Therefore, stronger governance frameworks are required to ensure accountability and compliance with international regulations. This research contributes by mapping current research directions and identifying future research opportunities, particularly in developing more adaptive and context-aware AI governance approaches.