Contact Name
Jumanto
Contact Email
jumanto@mail.unnes.ac.id
Phone
+628164243462
Journal Mail Official
sji@mail.unnes.ac.id
Editorial Address
Ruang 114 Gedung D2 Lantai 1, Jurusan Ilmu Komputer Universitas Negeri Semarang, Indonesia
Location
Kota Semarang,
Jawa Tengah
INDONESIA
Scientific Journal of Informatics
ISSN: 2407-7658     EISSN: 2460-0040     DOI: https://doi.org/10.15294/sji.vxxix.xxxx
Scientific Journal of Informatics (p-ISSN 2407-7658 | e-ISSN 2460-0040), published by the Department of Computer Science, Universitas Negeri Semarang, is a scientific journal of Information Systems and Information Technology. It publishes scholarly writing on pure and applied research in information systems and information technology, as well as general reviews of developments in related theory, methods, and applied sciences. SJI publishes 4 issues in a calendar year (February, May, August, November).
Articles: 131 Documents
PSNR and SSIM Performance Analysis of Schur Decomposition for Imperceptible Steganography Susanto, Ajib; Sinaga, Daurat; Mulyono, Ibnu Utomo Wahyu
Scientific Journal of Informatics Vol. 11 No. 3: August 2024
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v11i3.9561

Abstract

Purpose: This research examines how well Schur decomposition-based steganography can hide data in digital images imperceptibly, while preserving image quality and keeping the hidden information secure. Methods: The study uses standard test images (Lena, Plane, Peppers, Cameraman, Baboon) as cover images for message hiding. Schur decomposition is applied to embed information within the images in a subtle way. To test the robustness of the method, Gaussian noise and Salt & Pepper noise are added after embedding. Image quality is assessed using the Peak Signal to Noise Ratio (PSNR) and Structural Similarity Index (SSIM) metrics. Result: The research shows that Schur decomposition yields very good SSIM values (greater than 0.92) and high PSNR scores (up to 90.27 dB) across image sizes of 64x64, 128x128, and 256x256. This means that image quality is not greatly reduced even after steganography is applied. Novelty: This research introduces a distinctive use of Schur decomposition for hiding data in images without affecting their quality. The study highlights how this method can securely hide information in digital media, which could be useful for improving steganography techniques in the future. Future studies should concentrate on improving Schur decomposition-based steganography, especially for larger images. One possibility is to create adaptive methods that change how data is embedded based on image content, which could make hidden information harder to detect and analyze.
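For illustration, here is a minimal sketch of the PSNR/SSIM evaluation the abstract refers to, assuming NumPy and scikit-image; the Schur-decomposition embedding itself is not reproduced, and the `evaluate_stego` helper and the random test arrays are hypothetical stand-ins rather than the authors' code.

```python
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_stego(cover: np.ndarray, stego: np.ndarray) -> tuple[float, float]:
    """Return (PSNR in dB, SSIM) between a cover image and its stego version."""
    psnr = peak_signal_noise_ratio(cover, stego, data_range=255)
    ssim = structural_similarity(cover, stego, data_range=255)
    return psnr, ssim

# Example: a stand-in cover image and a copy with a single LSB-style perturbation.
cover = np.random.default_rng(0).integers(0, 256, (256, 256), dtype=np.uint8)
stego = cover.copy()
stego[0, 0] ^= 1
print(evaluate_stego(cover, stego))
```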
Hybrid Quantum Representation and Hilbert Scrambling for Robust Image Watermarking Sari, Christy Atika; Abdussalam, Abdussalam; Rachmawanto, Eko Hari; Islam, Hussain Md Mehedul
Scientific Journal of Informatics Vol. 11 No. 4: November 2024
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v11i4.10140

Abstract

Purpose: This work aims to apply Quantum Hilbert Scrambling to enhance the security and integrity of image watermarking without degrading visual quality. The approach suggests that concepts from quantum computing can address problems of digital image security and integrity that conventional watermarking methods struggle with. Methods: The paper reviews Quantum Hilbert Scrambling, whose computational complexity is . The process involves encoding the image into a quantum state, permuting qubits along the Hilbert curve, and embedding a watermark using quantum gates. Result: Quantitative performance evaluation shows high Peak Signal to Noise Ratio (PSNR) values from 56.13 dB to 57.87 dB and Structural Similarity Index (SSIM) values from 0.9985 to 0.9990. This indicates that quality degradation is very slight and that fine structural details are well preserved. Novelty: The proposed method uniquely integrates quantum computing with traditional watermarking steps for a secure and effective approach to digital watermarking. Further development should focus on improving the computational efficiency of the quantum circuit, extending the method's applicability to a wider range of images and watermarking scenarios, and exploring hybrid approaches that combine quantum and classical techniques for better performance and scalability.
Comparative Performance of SVM and Multinomial Naïve Bayes in Sentiment Analysis of the Film 'Dirty Vote' Iedwan, Aisha Shakila; Mauliza, Nia; Pristyanto, Yoga; Hartanto, Anggit Dwi; Rohman, Arif Nur
Scientific Journal of Informatics Vol. 11 No. 3: August 2024
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v11i3.10290

Abstract

Purpose: The purpose of this research is to analyze and compare the performance of two machine learning models, Support Vector Machine (SVM) and Multinomial Naive Bayes, in conducting sentiment analysis on YouTube comments related to the film "Dirty Vote." Methods: The study involved collecting YouTube comments and preprocessing the data through cleaning, labeling, and feature extraction using TF-IDF. The dataset was then divided into training and testing sets in an 80:20 ratio. Both the SVM and Multinomial Naive Bayes models were trained and tested, with their performance evaluated using accuracy, precision, recall, and F1-score metrics. Result: The results revealed that both models performed well in classifying sentiments, with SVM slightly outperforming Multinomial Naive Bayes in terms of accuracy and precision. Particularly, SVM showed superior performance in detecting positive comments, making it a more reliable model for this specific sentiment analysis task. Novelty: This study contributes to the field of sentiment analysis by providing a detailed comparative analysis of SVM and Multinomial Naive Bayes models on YouTube comments in the context of an Indonesian film. The findings highlight the strengths and weaknesses of each model, offering insights into their applicability for sentiment analysis tasks, particularly in analyzing social media content. This research also suggests potential future directions, including the exploration of advanced NLP techniques and different models to enhance sentiment analysis performance.
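A hedged sketch of the comparison pipeline the abstract describes, assuming scikit-learn; the toy Indonesian comments, labels, and default model settings below are illustrative stand-ins, not the authors' data or configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Toy labeled comments standing in for the collected YouTube data.
comments = [
    "filmnya sangat bagus dan informatif", "dokumenter yang hebat",
    "luar biasa, wajib ditonton", "penjelasannya jelas dan menarik",
    "karya yang berani", "film ini buruk sekali", "sangat membosankan",
    "tidak suka sama sekali", "isinya menyesatkan", "kecewa dengan film ini",
]
labels = ["pos"] * 5 + ["neg"] * 5

# 80:20 split, as in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    comments, labels, test_size=0.2, stratify=labels, random_state=42
)

for clf in (SVC(), MultinomialNB()):
    model = make_pipeline(TfidfVectorizer(), clf)   # TF-IDF features + classifier
    model.fit(X_train, y_train)
    print(type(clf).__name__)
    print(classification_report(y_test, model.predict(X_test), zero_division=0))
```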
Performance Comparison of Random Forest (RF) and Classification and Regression Trees (CART) for Hotel Star Rating Prediction Utami, Annisaa; Permadi, Dimas Fanny Hebrasianto; Rosita, Yesy Diah; Unjung, Jumanto
Scientific Journal of Informatics Vol. 11 No. 3: August 2024
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v11i3.11068

Abstract

Purpose: This study evaluates the effectiveness of Random Forest (RF) compared to Classification and Regression Trees (CART) in predicting hotel star ratings. The objective is to identify the algorithm that provides the most reliable and accurate classification based on diverse hotel attributes, in accordance with the standard categorization of star hotel categories. This matters because accurate star ratings guide consumer choices and strengthen competitive positioning in the hospitality industry. Method: This study compiled a comprehensive dataset of hotels in Banyumas Regency, including location, facilities, room size, room type, room price, and customer reviews, which was used to train both the RF and CART algorithms. Both algorithms are evaluated using accuracy, precision, recall, and F1 score. Both algorithms also underwent the same preprocessing, and hyperparameter tuning was performed to improve the efficacy of each model. Result: The results showed that RF achieved better overall accuracy and robustness than CART across all tests conducted. RF also outperformed CART in classification effectiveness among classes, with higher precision and recall scores across multiple star rating categories, indicating better generalization and consistency in classification tasks. The RF classifier consistently surpassed the CART classifier in both accuracy and F1-score across all random states and test sizes, with the highest score of 0.9932 at a random state of 100 and a test size of 0.4. The most reliable results were obtained using RF with a random state of 42 and a test size of 0.2, yielding an accuracy of 0.9909, precision of 1.0, recall of 1.0, and F1 score of 1.0. Under the same settings, CART achieved 0.9818, 1.0, 1.0, and 1.0, respectively. This consistent performance across these variations illustrates the robustness and suitability of RF for classification tasks compared to CART. Novelty: This study offers new insights into applying machine learning to hotel star rating prediction using the RF and CART algorithms, and the newly collected hotel dataset is itself a contribution. A detailed comparative analysis is also provided, contributing to the existing literature by demonstrating the effectiveness of RF over CART for this specific application. Future studies could explore the integration of additional machine learning methods to further enhance prediction accuracy and operational efficiency in the hospitality industry.
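A minimal sketch of an RF-versus-CART comparison of the kind described above, assuming scikit-learn (whose DecisionTreeClassifier implements CART); the synthetic data stands in for the non-public hotel dataset, and the split ratio and default hyperparameters are illustrative, not the authors' settings.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic multi-class data standing in for hotel attributes and star ratings.
X, y = make_classification(n_samples=550, n_features=8, n_informative=5,
                           n_classes=3, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

for model in (RandomForestClassifier(random_state=42), DecisionTreeClassifier(random_state=42)):
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(type(model).__name__,
          "accuracy:", round(accuracy_score(y_test, pred), 4),
          "macro-F1:", round(f1_score(y_test, pred, average="macro"), 4))
```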
Embedding Quantum Random Phase Encoding Arnold Transform for Advanced Image Security Hermanto, Didik; Pratama, Zudha; Hidajat, Moch. Sjamsul
Scientific Journal of Informatics Vol. 11 No. 3: August 2024
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v11i3.12256

Abstract

Purpose: This research proposes an improved image encryption technique that incorporates Quantum Random Phase Encoding with the Arnold Transform to enhance the strength and unpredictability of the encryption process. In this work, ideas from quantum-based methods are combined with conventional image encryption approaches to enhance their security. Methods: The method applies the Arnold Transform to scramble the arrangement of image pixels and mask recognizable structures, and then applies Quantum Random Phase Encoding to introduce additional complexity through quantum-generated random phases. Result: The experimental results show a marked improvement in encryption effectiveness. For example, the MSE values for "Cameraman" and "Lena" rise from 98.134 and 104.76 to 832.01 and 888.78, respectively, while the corresponding dB quality measures drop from 21.17 dB and 23.98 dB to 13.41 dB and 13.33 dB; this translates into higher distortion and therefore higher security. Meanwhile, UACI and NPCR remain very steady, with mean values of about 0.3356 to 0.3358 and 99.60 to 99.61, respectively, showing that the method effectively changes pixel values and is sensitive to input changes. Novelty: This work is novel in introducing quantum technologies into the classical methodology of image encryption. While classical techniques rely on conventional scrambling transforms such as the Arnold Transform, this work embeds quantum randomness and intricacy into the encoding process, namely through Quantum Random Phase Encoding.
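As an illustration of the classical scrambling stage named above, the sketch below implements the Arnold cat map in NumPy; the quantum random phase encoding stage is not reproduced, and the function name and iteration count are illustrative assumptions.

```python
import numpy as np

def arnold_transform(img: np.ndarray, iterations: int = 1) -> np.ndarray:
    """Scramble pixel positions of a square N x N image with the Arnold cat map
    (x, y) -> ((x + y) mod N, (x + 2y) mod N). The map is periodic, so iterating
    further eventually returns the original image."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

# Example: scramble a stand-in 64x64 grayscale image three times.
image = np.arange(64 * 64, dtype=np.uint8).reshape(64, 64)
scrambled = arnold_transform(image, iterations=3)
```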
Optimizing Deep Learning Models with Custom ReLU for Breast Cancer Histopathology Image Classification Nugroho, Wahyu Adi; Supriyanto, Catur; Pujiono, Pujiono; Shidik, Guruh Fajar
Scientific Journal of Informatics Vol. 11 No. 3: August 2024
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v11i3.12722

Abstract

Purpose: The prompt identification of breast cancer is crucial in preventing the considerable damage inflicted by this dangerous form of cancer, which is widespread across the globe. This study seeks to refine the efficacy of a deep learning-driven approach for the precise diagnosis of breast cancer by employing diverse bespoke Rectified Linear Units (ReLU) to improve the model's performance and reduce inaccuracies within the system. Method: This study analyzes a deep learning approach on the BreakHis dataset of 7,909 images, incorporating changes to the ReLU activation function across different pre-trained CNN models. Performance is then evaluated through metrics such as accuracy, precision, recall, and F1-Score. Result: The experimental results show that the DenseNet201 model with a custom LeakyReLU surpasses the typical ReLU, achieving the highest accuracy, recall, and F1-Score at 99.21%, 99.21%, and 99.11%, respectively. Meanwhile, ResNet152 with LessNegativeReLU (α=0.05) achieved the highest precision at 99.11%. The VGG11 model exhibited the most notable performance enhancement, with improvements ranging from 1.39% to 1.59%. Novelty: The research is original in optimizing a model for accurate breast cancer diagnosis. The proposed model is superior to the model using the default activation function. This finding indicates that the study significantly enhances performance while effectively minimizing errors, and it warrants further exploration of the effectiveness of customized activation functions when applied to other medical imaging modalities.
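A sketch, assuming a PyTorch/torchvision setup, of swapping the default ReLU activations of DenseNet201 for a LeakyReLU variant in the spirit of the custom activations described above; the authors' exact LessNegativeReLU definition is not given here, so LeakyReLU with a 0.05 negative slope is only a stand-in, and the helper name is hypothetical.

```python
import torch.nn as nn
from torchvision import models

def replace_relu(module: nn.Module, negative_slope: float = 0.05) -> None:
    """Recursively replace every nn.ReLU in `module` with nn.LeakyReLU."""
    for name, child in module.named_children():
        if isinstance(child, nn.ReLU):
            setattr(module, name, nn.LeakyReLU(negative_slope=negative_slope, inplace=True))
        else:
            replace_relu(child, negative_slope)

# The study uses pre-trained CNNs; pass weights="IMAGENET1K_V1" to load ImageNet weights.
model = models.densenet201(weights=None)
replace_relu(model)
model.classifier = nn.Linear(model.classifier.in_features, 2)  # benign vs. malignant head
```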
Hyperparameter Tuning Decision Tree and Recursive Feature Elimination Technique for Improved Chronic Kidney Disease Classification Saputra, Aries Gilang; Purwanto, Purwanto; Pujiono, Pujiono
Scientific Journal of Informatics Vol. 11 No. 3: August 2024
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v11i3.12990

Abstract

Purpose: This study aims to classify patients with chronic kidney disease based on specific features and to improve the classification models by tuning hyperparameters, with the goal of detecting chronic kidney disease at an early stage. Methods: A Decision Tree machine learning classifier is used to classify chronic kidney disease on the Risk Factor Prediction of Chronic Kidney Disease dataset. The performance of the classifier is then improved using feature selection with Recursive Feature Elimination and hyperparameter tuning with GridSearchCV. Result: Three tests were conducted: the Decision Tree alone, the Decision Tree with Recursive Feature Elimination, and the proposed method adding GridSearchCV hyperparameter tuning; the results were then compared. The results show that the proposed method improves the Decision Tree classifier in classifying chronic kidney disease patients. Novelty: The dataset used in this study is the Risk Factor Prediction of Chronic Kidney Disease dataset from the UCI Machine Learning Repository, which has 202 instances and 28 features. After preprocessing and testing, Recursive Feature Elimination combined with GridSearchCV hyperparameter tuning was shown to improve the Decision Tree classifier in classifying chronic kidney disease.
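A minimal sketch of the proposed pipeline, assuming scikit-learn: a Decision Tree wrapped with Recursive Feature Elimination and tuned via GridSearchCV. The synthetic 202x28 data and the grid values are illustrative assumptions, not the authors' exact dataset or search space.

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.tree import DecisionTreeClassifier

# Synthetic data matching the reported dataset shape (202 instances, 28 features).
X, y = make_classification(n_samples=202, n_features=28, n_informative=10, random_state=0)

pipeline = Pipeline([
    ("rfe", RFE(estimator=DecisionTreeClassifier(random_state=0))),  # feature selection
    ("clf", DecisionTreeClassifier(random_state=0)),                 # final classifier
])
param_grid = {
    "rfe__n_features_to_select": [10, 15, 20],
    "clf__max_depth": [3, 5, None],
    "clf__criterion": ["gini", "entropy"],
}
search = GridSearchCV(pipeline, param_grid, cv=5, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 4))
```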
Principal Component Analysis for Prediabetes Prediction using Extreme Gradient Boosting (XGBoost) Wardhani, Kartina Diah Kesuma; Novayani, Wenda
Scientific Journal of Informatics Vol. 11 No. 3: August 2024
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v11i3.13416

Abstract

Purpose: The purpose of this study is to increase the accuracy of the model used for prediabetes prediction. This study integrates Principal Component Analysis (PCA) for reducing the dimension of data with Extreme Gradient Boosting (XGBoost). The study contributes to providing a new alternative for prediabetes prediction in patients by reducing the complexity of the dataset with the aim of increasing the accuracy of the obtained model. PCA and XGBoost identify the best features that have the highest correlation with prediabetes so that they are expected to produce a better predictive model. Methods: This study utilizes published data sourced from the UCI Machine Learning Repository consisting of 520 records, 16 attributes and 1 label class. The dataset is data collected through direct questionnaires from patients in Sylhet, Bangladesh at the Sylhet Diabetes Hospital. The research method in this study consists of several stages, namely: Data Collection, Data Preprocessing, Dimension Reduction using PCA to reduce the complexity of dimensions in the dataset, Modeling using XGBoost to identify patterns used to predict prediabetes, and Model evaluation used to measure the performance of the resulting model using evaluation metrics such as accuracy, recall, precision and F1-Score. Result: The current study utilizes XGBoost with Principal Component Analysis for feature selection, resulting in 12 features and a model accuracy of 97.44. Novelty: The study's originality lies in applying PCA as a preprocessing step to enhance the performance of machine learning models by reducing data dimensionality and focusing on the most critical features. By demonstrating how PCA can improve the efficiency and accuracy of prediabetes prediction models, this research provides valuable insights to inform future studies and contribute to the development of more effective diagnostic tools for early detection and prevention of prediabetes.
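A hedged sketch of the PCA-plus-XGBoost flow the abstract outlines, assuming the scikit-learn and xgboost packages; the synthetic 520x16 data stands in for the Sylhet questionnaire dataset, and the 12 retained components simply mirror the feature count reported above.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic data matching the reported shape (520 records, 16 attributes, 1 label).
X, y = make_classification(n_samples=520, n_features=16, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

pca = PCA(n_components=12).fit(X_train)                       # dimension reduction
model = XGBClassifier(eval_metric="logloss")                  # gradient-boosted trees
model.fit(pca.transform(X_train), y_train)
print("accuracy:", accuracy_score(y_test, model.predict(pca.transform(X_test))))
```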
Comparison of Digital Forensic Tools for Drug Trafficking Cases on Instagram Messenger using NIST Method Nahdli, Muhammad Fahmi Mubarok; Riadi, Imam; Biddinika, Muhammad Kunta
Scientific Journal of Informatics Vol. 11 No. 4: November 2024
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v11i4.13463

Abstract

Purpose: Cybercrime is an unlawful act that exploits computer technology and the development of the internet, and it can occur on any electronic device, including Android smartphones. Forensic handling, particularly mobile forensics, has become crucial in addressing drug trafficking cases conducted through Instagram. As the primary device for accessing Instagram, smartphones store digital data that can serve as evidence in investigations. This research aims to produce a more accurate comparison of results in analyzing Instagram Messenger data containing content related to drug trafficking. Methods: The digital evidence used in this research included five types of data: text chat, account, image, audio, and view-once image. The forensic tools for obtaining digital evidence were MOBILedit, Belkasoft, Mobile Forensic SPF, and Magnet Axiom. The method proposed in this research followed the NIST framework, which consists of four stages: collection, examination, analysis, and reporting. The NIST framework was chosen because it is widely recognized in digital forensics and provides a comprehensive guideline for handling digital evidence. Result: The results showed that Magnet Axiom had the best performance in digital forensic analysis, with a success rate of 74.1%. MOBILedit Forensic had a success rate of 62.5%, indicating lower performance, and Mobile Forensic SPF had a success rate of 44.6%. Belkasoft had the lowest success rate at 23.2%, showing that it was less effective than the other tools in detecting and analyzing digital data. Novelty: The analysis was conducted using four digital forensic tools, each showing variations in efficiency and effectiveness. Each tool has advantages and disadvantages regarding speed, accuracy, and its ability to extract and manage data.
Comparison of KNN and CNN Algorithms for Gender Classification Based on Eye Images Wicaksono, Rizky Dwi; Fajar Shidiq, Guruh
Scientific Journal of Informatics Vol. 11 No. 4: November 2024
Publisher : Universitas Negeri Semarang

DOI: 10.15294/sji.v11i4.13529

Abstract

Purpose: This study explores gender classification using iris images and compares two methods: k-nearest neighbors (KNN) and convolutional neural networks (CNN). Most prior research has focused on facial recognition, whereas iris-based classification is more distinctive and accurate. This research addresses a gap in gender classification using iris images and tests the effectiveness of CNN and KNN for this task. Methods: The study used 11,525 iris images from Kaggle, of which 6,323 were male and 5,202 were female. The authors split the data into training (75%) and testing (25%) sets. Preprocessing involved normalizing and augmenting images by rotating, scaling, shifting, and reflecting them, and pixel values were also adjusted. The study compared the KNN algorithm, using Euclidean distance and 16 neighbors, with a CNN model comprising convolution, pooling, and dense layers. Evaluation used accuracy, precision, recall, F1-score, and the confusion matrix. Result: The KNN model achieved 81% accuracy, identifying males with 87% precision but only 70% recall. The CNN model performed better, achieving 93% accuracy with 94% precision and 95% recall for males. The CNN model also outperformed KNN for females in precision, recall, and F1-score, indicating its superior ability to learn patterns and classify gender from iris images. Novelty: CNN outperforms KNN in classifying gender from iris images, effectively recognizing patterns and achieving high accuracy. The study shows CNN's superiority in biometric tasks and suggests that future research should balance datasets, test stronger models, and combine models for better performance.
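A compact, hypothetical sketch comparing a KNN baseline (Euclidean distance, 16 neighbors, as described above) with a small convolution-pooling-dense CNN in Keras; the random arrays stand in for the Kaggle iris images, so the printed accuracies are not meaningful, and the architecture is illustrative rather than the authors'.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from tensorflow import keras

rng = np.random.default_rng(0)
images = rng.random((200, 64, 64, 1), dtype=np.float32)   # stand-in grayscale iris images
labels = rng.integers(0, 2, 200)                          # 0 = female, 1 = male

# KNN on flattened pixels, Euclidean distance, 16 neighbors.
knn = KNeighborsClassifier(n_neighbors=16, metric="euclidean")
knn.fit(images[:150].reshape(150, -1), labels[:150])
print("KNN accuracy:", knn.score(images[150:].reshape(50, -1), labels[150:]))

# Small CNN: convolution -> pooling -> dense, as the abstract outlines.
cnn = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),
    keras.layers.Conv2D(16, 3, activation="relu"),
    keras.layers.MaxPooling2D(),
    keras.layers.Flatten(),
    keras.layers.Dense(1, activation="sigmoid"),
])
cnn.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
cnn.fit(images[:150], labels[:150], epochs=3, verbose=0)
print("CNN accuracy:", cnn.evaluate(images[150:], labels[150:], verbose=0)[1])
```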
