Mohammad Idhom
Universitas Pembangunan Nasional "Veteran" Jawa Timur

Published: 5 Documents

Articles

Found 4 Documents
Journal: bit-Tech

Application of Multivariate Singular Spectrum Analysis for Weather Prediction
Abdul Mukti; Kartika Maulida Hindrayani; Mohammad Idhom
bit-Tech Vol. 8 No. 2 (2025): bit-Tech
Publisher : Komunitas Dosen Indonesia

DOI: 10.32877/bt.v8i2.3003

Abstract

Weather significantly influences various aspects of life, especially in urban areas like Surabaya, where unpredictable weather can disrupt transportation, public health, economic activities, and overall comfort. Among the key meteorological variables, air temperature and relative humidity are crucial for assessing human thermal comfort, as their interaction forms the heat index, a key indicator of health risks in tropical regions. This study introduces the use of the Multivariate Singular Spectrum Analysis (MSSA) method to forecast daily weather parameters, including minimum temperature (TN), maximum temperature (TX), average temperature (TAVG), and average relative humidity (RH_AVG). The research utilized weather data from the Perak 1 Meteorological Station in Surabaya, spanning from August 1 to December 31, 2024 (training data) and January 1 to January 14, 2025 (testing data). Unlike traditional methods, the MSSA model effectively analyzes the complex relationships between multiple weather variables, improving forecasting accuracy. The model demonstrated strong performance, with Mean Absolute Percentage Errors (MAPE) of 3.70% for TN, 5.99% for TX, 4.44% for TAVG, and 7.39% for RH_AVG. These results highlight MSSA's potential as an effective tool for short-term weather forecasting in urban tropical environments, supporting more accurate predictions that can inform early warning systems, disaster planning, and public health strategies. This work advances the state of the art by offering a robust method for handling multivariate weather data, which is essential for making informed decisions in rapidly changing climates.
Effectiveness of Extreme Learning Machine in Online Payment Transaction Fraud Detection
Radya Ardi; Mohammad Idhom; Kartika Maulida Hindrayani
bit-Tech Vol. 8 No. 2 (2025): bit-Tech
Publisher : Komunitas Dosen Indonesia

DOI: 10.32877/bt.v8i2.3005

Abstract

The rise of fintech and digital payment systems has increased efficiency but also escalated the risk of online transaction fraud, particularly under imbalanced data conditions where fraudulent cases are rare. This study addresses the limitations of traditional rule-based and machine learning models in such scenarios by proposing the use of Extreme Learning Machine (ELM) with hyperparameter tuning as a novel and efficient solution for fraud detection. Unlike most prior studies relying on default settings or data resampling, this research focuses on enhancing ELM performance purely through parameter optimization using the Optuna framework. A dataset of 20,000 real-world online transactions was used to evaluate model performance before and after tuning. In its default configuration, ELM yielded high overall accuracy (96.80%) but failed to detect fraudulent cases (0% recall and F1-score). After tuning key parameters such as the number of hidden neurons and activation function, the model achieved a significantly better balance between accuracy and fraud detection performance, with 99.53% accuracy, 98.20% precision, 86.51% recall, and a 91.98% F1-score. These results demonstrate that hyperparameter tuning alone, without resampling, can substantially improve ELM’s sensitivity to minority class detection. The findings suggest that optimized ELM offers a promising alternative for real-time fraud detection in imbalanced financial datasets, contributing to more adaptive and reliable security systems in the digital finance landscape.
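The ELM idea described above (a random hidden layer whose output weights are solved in closed form) can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's model: the synthetic data, hidden-layer sizes, and the small search loop standing in for the Optuna study are all assumptions for demonstration.

```python
import numpy as np

class ELMClassifier:
    """Minimal Extreme Learning Machine: fixed random hidden layer,
    output weights solved via the Moore-Penrose pseudoinverse."""
    def __init__(self, n_hidden=100, activation=np.tanh, seed=0):
        self.n_hidden, self.activation = n_hidden, activation
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = self.activation(X @ self.W + self.b)     # hidden activations
        Y = np.eye(int(y.max()) + 1)[y]              # one-hot targets
        self.beta = np.linalg.pinv(H) @ Y            # closed-form solve
        return self

    def predict(self, X):
        H = self.activation(X @ self.W + self.b)
        return np.argmax(H @ self.beta, axis=1)

# Two-class synthetic data standing in for the transaction features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 4)), rng.normal(3, 1, (200, 4))])
y = np.array([0] * 200 + [1] * 200)

# Stand-in for the Optuna study: search the two tuned hyperparameters
# (hidden-layer size and activation) and keep the best configuration.
def relu(x): return np.maximum(x, 0.0)
best = max(
    ((h, act) for h in (10, 50, 100) for act in (np.tanh, relu)),
    key=lambda p: (ELMClassifier(p[0], p[1]).fit(X, y).predict(X) == y).mean(),
)
```

Because only `beta` is learned, each trial is cheap, which is what makes hyperparameter search over the hidden-layer size and activation practical; on imbalanced data the selection metric would be recall or F1 on the minority class rather than plain accuracy.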
Indonesian Sign Language (SIBI) Recognition from Audio Mel-Spectrograms Using LSTM Architecture
Enryco Hidayat; Mohammad Idhom; Afina Lina Nurlaili
bit-Tech Vol. 8 No. 2 (2025): bit-Tech
Publisher : Komunitas Dosen Indonesia

DOI: 10.32877/bt.v8i2.3229

Abstract

Persistent communication barriers continue to challenge Deaf and Hard of Hearing (DHH) individuals in accessing spoken language, underscoring the need for effective and inclusive translation technologies. Existing audio-to-sign language systems typically employ multi-stage pipelines involving speech-to-text transcription, which may propagate recognition errors and fail to preserve acoustic nuances. Addressing these limitations, this study developed and evaluated a deep learning framework for translating spoken Indonesian audio directly into classifications of the Indonesian Sign Language System (SIBI), eliminating explicit text conversion. The dataset comprised 495 eight-second WAV recordings (22,050 Hz) representing five SIBI phrase classes, augmented through time stretching, pitch shifting, and noise addition to improve generalization. Mel-Spectrogram features were extracted and input to a stacked Long Short-Term Memory (LSTM) network implemented in TensorFlow/Keras, trained to learn temporal–spectral mappings between audio patterns and SIBI categories. Evaluation on a held-out test set demonstrated robust performance, achieving 98% accuracy with consistently high precision, recall, and F1-scores. The trained model was further integrated into a prototype web application built with Flask and React, confirming its feasibility for real-time assistive communication. While results highlight the viability of direct Mel-Spectrogram-to-LSTM translation for SIBI recognition, current findings are constrained by the limited dataset size and restricted speaker diversity. Future research should therefore expand the dataset to include more speakers, varied acoustic environments, and continuous-speech inputs to ensure broader applicability and real-world robustness.
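The Mel-Spectrogram front end described above can be sketched from first principles: frame the waveform, take the magnitude STFT, and project the power onto a triangular mel filterbank. This is a NumPy illustration of the feature-extraction step only; the FFT size, hop length, and mel-band count are assumed values (the abstract specifies only the 22,050 Hz sample rate), and the stacked TensorFlow/Keras LSTM that consumes these frames is not reproduced here.

```python
import numpy as np

def mel_filterbank(sr, n_fft, n_mels=40, fmin=0.0, fmax=None):
    """Triangular mel filterbank mapping FFT bins to mel bands."""
    fmax = fmax or sr / 2
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    hz = mel_to_hz(np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels + 2))
    bins = np.floor((n_fft + 1) * hz / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        for k in range(l, c):
            fb[m - 1, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fb[m - 1, k] = (r - k) / max(r - c, 1)   # falling slope
    return fb

def mel_spectrogram(signal, sr=22050, n_fft=512, hop=256, n_mels=40):
    """Windowed magnitude STFT projected onto the mel filterbank,
    log-compressed. Returns shape (n_frames, n_mels) for the LSTM."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2   # (frames, bins)
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T  # (frames, mels)
    return np.log(mel + 1e-10)                         # log compression
```

Feeding the log-mel frames directly to the LSTM is what lets the model skip the speech-to-text stage: the recurrent layers learn the temporal structure of each phrase from the spectral frames themselves.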
Design and Development of an IoT-Based Rain Intensity Prediction System Using LoRa
M. Arif; Mohammad Idhom; Henni Endah Wahanani
bit-Tech Vol. 8 No. 3 (2026): bit-Tech - IN PROGRESS
Publisher : Komunitas Dosen Indonesia

DOI: 10.32877/bt.v8i3.3704

Abstract

An Internet of Things (IoT)–based system for rain intensity monitoring and next-day prediction is presented by integrating low-power wide-area communication using LoRa with cloud-based processing for outdoor and rural environments. This study evaluates the feasibility of LoRa communication and the end-to-end operational reliability of an IoT–cloud pipeline, while positioning machine learning as a supporting decision-aid module. A low-cost sensing node equipped with temperature, humidity, and wind-speed sensors is connected to a LoRa-based gateway that forwards measurements to an Amazon EC2 cloud server via MQTT for centralized storage, processing, and notification delivery. The system is evaluated through a 10-day single-node real-world outdoor deployment, focusing on sensor data acquisition reliability, LoRa link quality, and end-to-end operation from data acquisition to user notifications. The classification module achieves an overall accuracy of 0.74 with a weighted F1-score of 0.71, while minority-class performance remains limited due to class imbalance. LoRa communication remains stable with RSSI values of −80.91 to −79.19 dBm, SNR values of 9.86–9.95 dB, and packet loss rates below 3%. By jointly evaluating LPWAN communication reliability and cloud-side predictive services within a single field deployment, the results demonstrate the practicality of LPWAN-based IoT sensing with cloud integration for rain intensity monitoring in resource-constrained environments, while highlighting the need for future improvements in minority-class prediction and multi-node scalability.
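The link-quality evaluation reported above (mean RSSI, mean SNR, packet-loss rate) can be computed server-side from the gateway's packet log by comparing received sequence numbers against the expected range. This is a hedged stdlib sketch; the record layout (`seq`, `rssi`, `snr` fields) is an assumed log format, not the system's actual schema.

```python
import statistics

def link_quality(packets):
    """Summarize LoRa link quality from gateway-side packet records.

    packets: list of dicts with 'seq' (monotonic counter set by the
    sensing node), 'rssi' (dBm), and 'snr' (dB) for each received packet.
    Loss is inferred from gaps in the sequence numbers.
    """
    seqs = sorted(p["seq"] for p in packets)
    expected = seqs[-1] - seqs[0] + 1              # packets the node sent
    loss_pct = 100.0 * (expected - len(set(seqs))) / expected
    return {
        "rssi_dbm": statistics.mean(p["rssi"] for p in packets),
        "snr_db": statistics.mean(p["snr"] for p in packets),
        "packet_loss_pct": loss_pct,
    }
```

Deriving loss from sequence-number gaps needs no acknowledgments from the node, which suits a unidirectional low-power LoRa uplink; the same aggregates could feed the system's notification module to flag link degradation.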