Contact Name
Jumanto
Contact Email
jumanto@mail.unnes.ac.id
Phone
+6281339762820
Journal Mail Official
shmpublisher@gmail.com
Editorial Address
Jl. Karanglo Raya No. 64, Pedurungan, Semarang, 50191, Indonesia
Location
Kota Semarang,
Jawa Tengah,
Indonesia
Journal of Soft Computing Exploration
Published by SHM Publisher
ISSN: 2746-7686     EISSN: 2746-0991     DOI: https://doi.org/10.52465/joscex
The journal focuses on publishing high-quality, original research and review articles in the fields of Soft Computing, Informatics, and Computer Science, emphasizing the development, application, and rigorous evaluation of Advanced Computational Methods, Artificial Intelligence (AI), Machine Learning (ML), and Data Science to address complex real-world challenges. The scope of the journal includes, but is not limited to, innovative research in the following areas:

1. Artificial Intelligence and Machine Learning
- Novel Algorithms and Architectures: development and comparison of ML/DL models for classification and prediction (including Logistic Regression, Ridge Classifier, SVM, k-NN, and Random Forest).
- Ensemble Learning: evaluation and optimization of ensemble methods such as Balanced Random Forest, SMOTE-RF, SMOTEBoost, and RUSBoost for robust prediction.
- Data Challenges and Preprocessing: techniques for mitigating issues such as class imbalance (using methods like SMOTE and GAN) and feature extraction/dimension reduction (including Principal Component Analysis (PCA) and Local Binary Pattern (LBP)).

2. Deep Learning and Computer Vision
- Convolutional Neural Networks (CNNs): research on CNN architectures (VGG16, ResNet50, DenseNet121, EfficientNet, and MobileNetV2) and the impact of optimization functions (Adam, SGD, NAdam) on model performance.
- Hybrid and Concatenated Architectures: proposing and evaluating hybrid models (e.g., MobileNetV2 combined with LBP) or concatenated architectures (e.g., MobileNetV2 and DenseNet201) to improve classification and feature representation.
- Image Analysis Tasks: advanced techniques for image classification (notably Diabetic Retinopathy), image similarity detection (using Siamese Networks and Test-Time Augmentation), and multi-object segmentation (using FCN with Squeeze-and-Excitation and Attention Mechanisms for palm oil images).

3. Data Science and Advanced Analytics
- Pattern Detection and Data Mining: performance evaluation of data mining algorithms, including Biclustering (Cheng & Church and Spectral Biclustering), specifically under challenging structural conditions such as collinearity and overlap.
- Time Series Analysis and Forecasting: application of advanced decomposition and clustering methods (Ensemble Empirical Mode Decomposition (EEMD) and Time Series Clustering with DTW/ARIMA) for accurate economic or temporal prediction.

4. Applied Informatics (Domain-Specific Applications)
- Health and Medical Informatics: classification models for disease diagnosis (including Heart Attack Disease and Diabetic Retinopathy).
- Agricultural Informatics: automated detection and classification of plant diseases from leaf/crop images (including Mango Leaf Disease and Chili Plant Disease) and Palm Oil Segmentation.
- Business and Economic Informatics: predictive modeling for crucial business metrics (Customer Churn Prediction in Telecommunications) and economic forecasting (Rice Price Forecasting).
Articles: 22 documents
AI-based career profiling for the creative industry: Data-driven classification of islamic high school students' potential Nove Kurniati Sari; Dias Aziz Pramudita; Syaddam Syaddam; Zainal Abidin Muhja
Journal of Soft Computing Exploration Vol. 7 No. 1 (2026): March 2026
Publisher : SHM Publisher

DOI: 10.52465/joscex.v7i1.1

Abstract

The creative industry is a major contributor to the global economy and is especially vital to ASEAN's growth. The sustainability of the sector depends on skilled human resources, which in turn influences cultural and educational policies. Islamic schools are uniquely positioned to develop students' character and competence. However, Islamic schools in border areas face challenges in accessing resources, particularly in matching students with their interests and talents in the creative field, which is crucial for meeting the creative industry's human resource needs. This research aims to classify students' knowledge and abilities in the creative field so that student competency mapping can be carried out. Using AI modelling tools, a Naïve Bayes classifier achieved 100% classification accuracy. The study used 17 data samples of Islamic school students, described by their characteristics, learning styles, and creativity levels. Seven students were identified as work-ready Creative Innovator profiles and 10 as further-education-bound Creative Innovator profiles. With this student classification model, the school is expected to shift from general career guidance to personalized, data-driven guidance, marking a significant step towards implementing smart school governance.
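
As an illustration of the classification step described above, here is a minimal Naïve Bayes sketch in Python. The feature encoding and the tiny synthetic dataset are hypothetical stand-ins for the paper's 17 student records, not the actual data.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Assumed (hypothetical) encoding: [learning_style (0-2), creativity_score, practicum_grade]
X = np.array([
    [0, 85, 90], [1, 78, 70], [2, 92, 88], [0, 60, 65],
    [1, 88, 91], [2, 55, 58], [0, 75, 80], [1, 95, 93],
])
# 1 = work-ready Creative Innovator, 0 = further-education-bound
y = np.array([1, 0, 1, 0, 1, 0, 0, 1])

model = GaussianNB().fit(X, y)
print(model.predict([[2, 90, 85]]))  # predicted profile for a new student
```
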
LSTM with temporal encoding for irregular time series forecasting in power consumption Eko Verianto; Muhammad Arif Alfian
Journal of Soft Computing Exploration Vol. 7 No. 1 (2026): March 2026
Publisher : SHM Publisher

DOI: 10.52465/joscex.v7i1.2

Abstract

Power consumption data obtained from sensors are often recorded at irregular time intervals due to network disruptions, device errors, or power outages, resulting in irregular time series that make forecasting difficult. This study aims to develop an electricity consumption forecasting model based on Long Short-Term Memory (LSTM) and temporal encoding. LSTM was chosen because it has an effective gating mechanism for capturing temporal dependencies in time series data, while temporal encoding explicitly represents time information to handle irregular time intervals without data imputation. The methods in this study include data collection via four electrical current sensors, followed by data aggregation every 10 minutes, and feature engineering using sinusoidal encoding and a time difference encoder. The features were normalized using min-max scaling, organized into a multivariate sequence using a sliding window, and divided using a holdout scheme. The model was trained using LSTM and evaluated using Mean Squared Error (MSE). The results show training MSE values of 9.892×10⁻⁴, 7.349×10⁻⁴, 9.535×10⁻⁴, and 1.906×10⁻³, while the testing MSE values are 4.566×10⁻³, 2.993×10⁻³, 1.094×10⁻², and 1.209×10⁻² for each node. These findings indicate that temporal encoding performs well on the training data, but the model's generalization ability remains limited.
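
A minimal sketch of the encoding-plus-LSTM idea, assuming a synthetic irregular series, sinusoidal time-of-day features, and a normalized time-gap feature; the window size and layer width are illustrative choices, not the paper's configuration.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
t = np.cumsum(rng.integers(5, 20, size=500))   # irregular minutes between readings
power = np.sin(t / 60.0) + 0.1 * rng.standard_normal(len(t))

# Temporal encoding: sin/cos of time of day plus the gap to the previous sample.
minutes_in_day = 24 * 60
sin_t = np.sin(2 * np.pi * (t % minutes_in_day) / minutes_in_day)
cos_t = np.cos(2 * np.pi * (t % minutes_in_day) / minutes_in_day)
dt = np.diff(t, prepend=t[0]).astype(float)
dt = (dt - dt.min()) / (dt.max() - dt.min() + 1e-9)   # min-max scaling, as described
features = np.stack([power, sin_t, cos_t, dt], axis=1)

window = 24                                     # sliding-window length (assumed)
X = np.stack([features[i:i + window] for i in range(len(features) - window)])
y = power[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, features.shape[1])),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")     # evaluated with MSE, as in the paper
model.fit(X, y, epochs=5, validation_split=0.2, verbose=0)
```
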
Enhancing diabetes classification performance using XGBoost integrated with SMOTE and bayesian hyperparameter optimization Muhammad Nurul Ihyaul Ulum; Jumanto Unjung
Journal of Soft Computing Exploration Vol. 7 No. 1 (2026): March 2026
Publisher : SHM Publisher

DOI: 10.52465/joscex.v7i1.3

Abstract

Diabetes mellitus is a long-term metabolic disorder that is becoming more common around the world. Finding people at risk early can help prevent serious health problems and improve patient outcomes. Machine learning is often used to predict diabetes, but imbalanced medical data can make it harder for models to spot positive cases. In this study, we created a diabetes classification model by combining the Extreme Gradient Boosting (XGBoost) algorithm with the Synthetic Minority Over-sampling Technique (SMOTE), and we used Bayesian Optimization to fine-tune the model’s settings. We worked with the Pima Indians Diabetes Dataset, which has 768 patient records and eight clinical features. Our steps included preprocessing the data, splitting it into training and testing sets, using SMOTE to balance the training data classes, training the XGBoost model, and performing hyperparameter tuning using Bayesian Optimization with Stratified 5-Fold Cross-Validation to determine the optimal parameter configuration. The final model reached an accuracy of 0.88, a precision of 0.79, a recall of 0.91, an F1-score of 0.84, and a ROC-AUC of 0.955. These results show that our approach can identify diabetes cases more effectively while keeping strong overall performance.
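
A hedged sketch of the described pipeline: SMOTE applied to the training split only, XGBoost, and Bayesian hyperparameter search (scikit-optimize's BayesSearchCV) with stratified 5-fold cross-validation. A synthetic dataset stands in for the Pima Indians data, and the search bounds are illustrative.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, StratifiedKFold
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier
from skopt import BayesSearchCV

# Stand-in for the Pima Indians Diabetes Dataset (768 records, 8 features).
X, y = make_classification(n_samples=768, n_features=8, weights=[0.65, 0.35],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, test_size=0.2,
                                          random_state=42)

X_bal, y_bal = SMOTE(random_state=42).fit_resample(X_tr, y_tr)  # balance training data only

search = BayesSearchCV(
    XGBClassifier(eval_metric="logloss"),
    {   # illustrative search space, not the paper's exact bounds
        "n_estimators": (100, 500),
        "max_depth": (2, 8),
        "learning_rate": (1e-3, 0.3, "log-uniform"),
        "subsample": (0.6, 1.0),
    },
    n_iter=25,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),
    scoring="roc_auc",
    random_state=42,
)
search.fit(X_bal, y_bal)
print(search.best_params_, search.score(X_te, y_te))
```
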
Efficient hierarchical summarization of long legal documents using a lightweight transformer and divide and conquer strategy Muhammad Zhafran Ammar; Ricky Eka Putra; Yuni Yamasari
Journal of Soft Computing Exploration Vol. 7 No. 2 (2026): June 2026
Publisher : SHM Publisher

DOI: 10.52465/joscex.v7i2.5

Abstract

This research addresses the challenges of summarizing long and complex legal documents, which often exceed the input length limitations of transformer-based models and contain intricate legal reasoning structures. The purpose of this study is to develop an efficient and scalable summarization framework that preserves semantic fidelity and structural coherence in judicial summaries. To achieve this objective, a hybrid summarization pipeline is proposed by integrating a Bidirectional Encoder Representations from Transformers (BERT)-based extractive model with a hierarchical abstractive model based on Distilled Bidirectional and Auto-Regressive Transformers (DistilBART), combined with a Divide-and-Conquer strategy. The proposed method partitions long legal documents into smaller segments, processes each segment independently, and reconstructs them into a coherent final summary. Experiments were conducted on the Indian Legal Case Summarization dataset and evaluated using Recall-Oriented Understudy for Gisting Evaluation (ROUGE), BERTScore, and Cosine Similarity to assess both lexical overlap and semantic similarity. The results show that the hierarchical DistilBART model outperforms the extractive baseline, achieving a ROUGE-1 score of 0.3802 and a Cosine Similarity of 0.6917. These findings demonstrate that the proposed framework provides an effective solution for long-document summarization in the legal domain.
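
A minimal divide-and-conquer sketch in the spirit of the abstractive stage: split the document, summarize each segment, then summarize the fused partial summaries. The public sshleifer/distilbart-cnn-12-6 checkpoint, the chunk size, and the input file are assumptions; the authors' BERT-based extractive stage is omitted here.

```python
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

def chunk_text(text: str, max_words: int = 600):
    """Divide: split the long document into word-bounded segments."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def hierarchical_summary(document: str) -> str:
    # Summarize each segment independently.
    partials = [summarizer(c, max_length=128, min_length=32, truncation=True)[0]["summary_text"]
                for c in chunk_text(document)]
    # Conquer: fuse the partial summaries into one coherent final summary.
    fused = " ".join(partials)
    return summarizer(fused, max_length=256, min_length=64, truncation=True)[0]["summary_text"]

print(hierarchical_summary(open("judgment.txt").read()))  # hypothetical input file
```
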
Application of multi-chaotic map cascade in video encryption to overcome statistical and differential attacks and performance evaluation Cahaya Jatmoko; Heru Lestiawan; Fauzi Adi Rafrastara; Lalang Erawan; Candra Irawan; Mohamed Doheir
Journal of Soft Computing Exploration Vol. 7 No. 1 (2026): March 2026
Publisher : SHM Publisher

DOI: 10.52465/joscex.v7i1.6

Abstract

The rapid proliferation of digital video across domains such as healthcare, surveillance, and communications has increased the demand for secure and efficient video encryption techniques. However, video data presents unique challenges, including large data volume and high spatial–temporal correlation, which limit the effectiveness and efficiency of conventional encryption approaches, particularly in real-time scenarios. In this context, the objective of this study is to evaluate the feasibility of a chaos-based video encryption scheme in achieving both strong cryptographic security and acceptable computational performance. To accomplish this, the proposed scheme is tested through two controlled experiments. The evaluation focuses on cryptographic strength using the Number of Pixel Change Rate (NPCR) to measure sensitivity to minor input changes, the Unified Average Changing Intensity (UACI) to quantify average pixel intensity variation, and Shannon entropy to assess the randomness of the encrypted frames. In parallel, computational performance is analyzed through encryption time and throughput. The procedure involves frame extraction from video, followed by preprocessing to reduce pixel correlation, and subsequent application of the chaos-based encryption algorithm on a per-frame basis. The results from both experiments show NPCR values exceeding 99.5% and encrypted frame entropy of approximately 7.74 bits/pixel, indicating strong resistance to differential attacks and near-optimal randomness. However, the observed throughput of 0.07–0.09 frames per second highlights a limitation in meeting real-time processing requirements. These findings suggest that the proposed scheme is cryptographically robust and well suited to offline or batch-processing applications, although it does not yet meet real-time processing requirements.
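
For reference, the three security metrics reported above have standard definitions, sketched here with NumPy for 8-bit grayscale frames (the random frames are stand-ins for actual cipher output):

```python
import numpy as np

def npcr(c1: np.ndarray, c2: np.ndarray) -> float:
    """Percentage of pixel positions that differ between two cipher frames."""
    return 100.0 * np.mean(c1 != c2)

def uaci(c1: np.ndarray, c2: np.ndarray) -> float:
    """Average intensity difference between two cipher frames, as a percentage."""
    return 100.0 * np.mean(np.abs(c1.astype(float) - c2.astype(float)) / 255.0)

def shannon_entropy(frame: np.ndarray) -> float:
    """Entropy in bits/pixel; 8.0 is the ideal for a uniform 8-bit cipher."""
    counts = np.bincount(frame.ravel(), minlength=256)
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

# Usage on two cipher frames produced from plaintexts differing by one pixel:
a = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # stand-in frames
b = np.random.randint(0, 256, (512, 512), dtype=np.uint8)
print(npcr(a, b), uaci(a, b), shannon_entropy(a))
```
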
Image encryption scheme based on fractional-order hyper-chaotic lorenz system with two-stage confusion-diffusion for enhanced pixel randomness Daurat Sinaga; Cahaya Jatmoko; Erna Zuni Astuti; Feri Agustina; Suprayogi Suprayogi
Journal of Soft Computing Exploration Vol. 7 No. 1 (2026): March 2026
Publisher : SHM Publisher

DOI: 10.52465/joscex.v7i1.7

Abstract

This study proposes a novel grayscale image encryption framework integrating a fractional-order 4D hyperchaotic Lorenz system with DNA encoding operations and SHA-256 plaintext-dependent key generation to address the security vulnerabilities in digital data transmission. The encryption pipeline employs a robust two-stage confusion-diffusion architecture designed to maximize pixel randomness and resistance against differential attacks. Stage 1 implements DNA-based confusion-diffusion with chaotic rule selection, while Stage 2 executes a four-round pixel-level permutation and XOR diffusion driven by fractional-order Grünwald-Letnikov sequences (α = 0.95, d = 5). This multi-layered approach ensures that any infinitesimal change in the plaintext or the secret key results in a completely different cipher image. Hyperchaos is verified through the Lyapunov exponent spectrum (λ1 = +0.973, λ2 = +0.531), confirming two positive exponents and complex dynamical behavior. Experiments on five standard 512 × 512 grayscale images yield near-maximum information entropy (7.9993–7.9994 bits) and negligible pixel correlation (below 0.023). Statistical evaluations show an average NPCR of 99.5992% and UACI of 33.4216%, closely matching theoretical ideals. Key sensitivity analysis demonstrates that a perturbation of only ±10⁻¹⁴ in the initial conditions renders decryption unsuccessful, ensuring high security. In conclusion, the proposed scheme achieves perfect lossless recovery (PSNR = ∞ dB) and successfully passes all NIST SP 800-22 tests, providing a highly secure and reliable solution for protecting sensitive medical or military digital imagery.
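
A hedged sketch of two building blocks named above: SHA-256 plaintext-dependent key derivation and DNA encoding of pixel bytes. The rule-1 base ordering (00→A, 01→C, 10→G, 11→T) and the hash-folding scheme are illustrative; the chaotic rule selection and fractional-order Lorenz stages are omitted.

```python
import hashlib
import numpy as np

def plaintext_key(image: np.ndarray) -> list[float]:
    """Derive four initial conditions in [0, 1] from the SHA-256 of the plaintext."""
    digest = hashlib.sha256(image.tobytes()).digest()
    # Fold the 32 hash bytes into four 8-byte groups, one per chaotic state variable.
    return [sum(digest[8 * i:8 * (i + 1)]) / (8 * 255.0) for i in range(4)]

BASES = "ACGT"  # DNA rule 1; the other rules permute this ordering

def dna_encode(byte: int) -> str:
    """Encode one 8-bit pixel as four DNA bases, two bits per base (MSB first)."""
    return "".join(BASES[(byte >> shift) & 0b11] for shift in (6, 4, 2, 0))

img = np.random.randint(0, 256, (512, 512), dtype=np.uint8)  # stand-in plaintext
print(plaintext_key(img))
print(dna_encode(0b10110001))  # -> "GTAC"
```
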
A deep learning-based leaf aphid detection approach using YOLOv8 Styawati Styawati; Heni Sulistiani; Ajeng Savitri Puspaningrum; Debby Alita; S. Samsugi; Vanisa Adellia Putri
Journal of Soft Computing Exploration Vol. 7 No. 1 (2026): March 2026
Publisher : SHM Publisher

DOI: 10.52465/joscex.v7i1.9

Abstract

Aphids pose a serious threat to agricultural productivity due to their rapid reproduction and their role as plant virus vectors. Early manual detection is difficult due to the pests' microscopic size and tendency to hide under leaves. This study aims to develop an accurate and real-time aphid monitoring system using the YOLOv8 algorithm. The model was trained using four epoch scenarios (30, 50, 100, and 200) to identify the best configuration to address the challenges of small, overlapping objects and varying leaf backgrounds. The results showed that increasing the number of epochs positively correlated with model performance, with the 200-epoch scenario yielding the best results: 91.5% accuracy, 0.87 recall, 0.89 F1-score, and 0.915 mAP50. The model was then integrated into a smart monitoring dashboard that synchronizes visual detection results with IoT sensor data (temperature, humidity, and nutrients) in real time. This system not only validates the reliability of YOLOv8 under field conditions, but also provides an effective early warning system to support rapid decision-making in crop protection management.
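
A minimal sketch of the training setup using the Ultralytics API; aphid.yaml is a hypothetical dataset config (image paths and class list), and the 200-epoch run mirrors the best-performing scenario reported above.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                      # pretrained nano backbone
model.train(data="aphid.yaml", epochs=200, imgsz=640)  # hypothetical dataset config
metrics = model.val()                           # reports mAP50, precision, recall
print(metrics.box.map50)
results = model.predict("leaf_photo.jpg", conf=0.25)   # hypothetical field image
```
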
A comparative analysis of five textual similarity methods for automatic short answer grading Imam Rangga Bakti; Handaru Jati; Nurkhamid Nurkhamid; Yola Permata Bunda
Journal of Soft Computing Exploration Vol. 7 No. 1 (2026): March 2026
Publisher : SHM Publisher

DOI: 10.52465/joscex.v7i1.11

Abstract

This study investigates the application of text mining techniques in Automatic Short Answer Grading (ASAG) by comparing five textual similarity methods: Cosine Similarity, Jaccard Similarity, Dice’s Coefficient, Overlap Coefficient, and Matching Coefficient. The dataset consists of five definition-based questions answered by 25 students in a Human–Computer Interaction course. The data were preprocessed using case folding, tokenization, stop word removal, and stemming. The results show that Cosine Similarity achieved the highest similarity score of 67.00%, followed by Overlap Coefficient (66.67%) and Dice’s Coefficient (63.16%), while Jaccard Similarity and Matching Coefficient both produced a lower score of 46.15%. These findings indicate that vector-based similarity methods are more effective in handling variations in sentence structure and keyword usage compared to set-based approaches, particularly for definition-based short answers. This study provides a comparative evaluation of multiple lexical similarity methods within a unified experimental setting, offering practical insights for selecting appropriate techniques in ASAG applications.
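
A self-contained sketch of the five measures compared above, applied to token lists assumed to be already preprocessed (case folding, stop-word removal, stemming). Definitions of the matching coefficient vary in the literature; the unnormalized shared-term count is one common reading and is used here.

```python
import math
from collections import Counter

def cosine(a: list[str], b: list[str]) -> float:
    va, vb = Counter(a), Counter(b)
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def jaccard(a: list[str], b: list[str]) -> float:
    A, B = set(a), set(b)
    return len(A & B) / len(A | B) if A | B else 0.0

def dice(a: list[str], b: list[str]) -> float:
    A, B = set(a), set(b)
    return 2 * len(A & B) / (len(A) + len(B)) if A or B else 0.0

def overlap(a: list[str], b: list[str]) -> float:
    A, B = set(a), set(b)
    return len(A & B) / min(len(A), len(B)) if A and B else 0.0

def matching(a: list[str], b: list[str]) -> int:
    return len(set(a) & set(b))   # raw shared-term count (definitions vary)

key = "a model of communication between a user and a computer".split()
ans = "communication model between user and computer system".split()
for f in (cosine, jaccard, dice, overlap, matching):
    print(f.__name__, f(key, ans))
```
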
Bio-inspired metaheuristic MPPT algorithms for PV battery systems: a comparative performance study Faiq Mananul Faqih; Rizky Ajie Aprilianto
Journal of Soft Computing Exploration Vol. 7 No. 1 (2026): March 2026
Publisher : SHM Publisher

DOI: 10.52465/joscex.v7i1.13

Abstract

Maximum Power Point Tracking (MPPT) has been proven to improve power extraction in photovoltaic (PV) systems. However, conventional MPPT methods such as Perturb and Observe (P&O) and Incremental Conductance (InC) have limitations, such as oscillations in steady-state conditions, slow response, and a tendency to get stuck at local maxima when irradiation changes. This study aims to evaluate bio-inspired metaheuristic algorithms to improve tracking accuracy, convergence speed, and MPPT stability in PV systems. These algorithms include Grey Wolf Optimization (GWO), Sand Cat Swarm Optimization (SCSO), Horse Herd Optimization (HHO), Chameleon Swarm Algorithm (CSA), and Flying Squirrel Search Optimization (FSSO). The algorithms were tested using the same general parameters to ensure a fair comparison. Testing was conducted on PV models, DC boost converters with resistive loads and batteries under static and dynamic irradiation conditions using MATLAB/Simulink. The results show that HHO provides the best performance with an efficiency of 99.96% at 1000 W/m² and 98.03% at 800 W/m², a tracking time of <0.05 seconds, and power fluctuations of <0.3% in resistive load testing. In battery testing, CSA and FSSO showed the best performance with voltage stability, high charging current, and lower ripple. Overall, the results of this study indicate that the proposed metaheuristic-based MPPT algorithms can improve the accuracy of maximum power point tracking, accelerate convergence time, and minimize power oscillations in PV systems.
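
As a flavor of how the compared algorithms operate, here is a hedged 1-D Grey Wolf Optimization sketch searching a converter duty cycle for maximum power; the unimodal power curve is a synthetic stand-in for the MATLAB/Simulink PV and boost-converter model.

```python
import numpy as np

def pv_power(duty):
    """Toy P(duty) curve peaking near duty = 0.62 (an assumption, not PV physics)."""
    return 200.0 * np.exp(-((duty - 0.62) ** 2) / 0.02)

rng = np.random.default_rng(1)
wolves = rng.uniform(0.1, 0.9, size=8)           # candidate duty cycles
for it in range(30):
    fitness = pv_power(wolves)
    alpha, beta, delta = wolves[np.argsort(fitness)[::-1][:3]]  # three leaders
    a = 2.0 * (1 - it / 30)                      # exploration factor decays 2 -> 0
    for i in range(len(wolves)):
        pos = 0.0
        for leader in (alpha, beta, delta):
            r1, r2 = rng.random(), rng.random()
            A, C = 2 * a * r1 - a, 2 * r2
            pos += leader - A * abs(C * leader - wolves[i])
        wolves[i] = np.clip(pos / 3.0, 0.0, 1.0)  # average of leader-guided moves

best = wolves[np.argmax(pv_power(wolves))]
print(f"duty ≈ {best:.3f}, power ≈ {pv_power(best):.1f} W")
```
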
Tuberculosis classification on chest x-ray images using DenseNet-169 and convolutional block attention module Muhammad Agil Izzulhaq; Endang Sugiharti
Journal of Soft Computing Exploration Vol. 7 No. 1 (2026): March 2026
Publisher : SHM Publisher

DOI: 10.52465/joscex.v7i1.14

Abstract

Tuberculosis remains a major global health challenge, and the manual interpretation of chest X-rays is often limited by the subjectivity and shortage of radiology experts. While deep learning approaches like DenseNet have shown promise in medical imaging, the integration of attention mechanisms such as the Convolutional Block Attention Module (CBAM) for tuberculosis detection has been less explored. This study aimed to develop a Convolutional Neural Network (CNN) model utilizing DenseNet-169 combined with CBAM to accurately classify chest X-ray images into normal and tuberculosis classes. A dataset of 7,000 chest X-ray images was preprocessed and partitioned into training, validation, and testing sets. DenseNet-169 served as the backbone architecture, while CBAM was applied to emphasize crucial spatial and channel features. Evaluated across standard metrics, the proposed model achieved an accuracy of 99.43%, a precision of 99.72%, a recall of 99.14%, and an F1-score of 99.43%, successfully outperforming the baseline DenseNet-169 model without CBAM. Ultimately, the integration of CBAM with DenseNet-169 demonstrates remarkable potential in improving tuberculosis detection, confirming that attention mechanisms can substantially enhance deep learning performance in medical imaging.
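
A hedged sketch of the described architecture in PyTorch: a DenseNet-169 backbone followed by a CBAM block (channel then spatial attention) and a two-class head. The reduction ratio of 16 and the 7×7 spatial kernel are common CBAM defaults, not necessarily the paper's exact settings.

```python
import torch
import torch.nn as nn
from torchvision.models import densenet169

class CBAM(nn.Module):
    """Convolutional Block Attention Module: channel attention, then spatial."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(               # shared MLP for channel attention
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels))
        self.spatial = nn.Conv2d(2, 1, kernel_size=7, padding=3)

    def forward(self, x):
        b, c, _, _ = x.shape
        chan = torch.sigmoid(self.mlp(x.mean(dim=(2, 3))) + self.mlp(x.amax(dim=(2, 3))))
        x = x * chan.view(b, c, 1, 1)                     # reweight channels
        s = torch.cat([x.mean(1, keepdim=True), x.amax(1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.spatial(s))         # reweight spatial positions

class DenseNetCBAM(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = densenet169(weights="DEFAULT").features  # pretrained backbone
        self.cbam = CBAM(1664)                  # DenseNet-169 emits 1664 feature maps
        self.head = nn.Sequential(nn.ReLU(inplace=True), nn.AdaptiveAvgPool2d(1),
                                  nn.Flatten(), nn.Linear(1664, num_classes))

    def forward(self, x):
        return self.head(self.cbam(self.features(x)))

print(DenseNetCBAM()(torch.randn(1, 3, 224, 224)).shape)  # torch.Size([1, 2])
```
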
