Contact Name
Andri Pranolo
Contact Email
andri@ascee.org
Phone
+6281392554050
Journal Mail Official
andri@ascee.org
Editorial Address
Association for Scientific Computing Electrical and Engineering (ASCEE) Jl. Janti, Karangjambe 130B, Banguntapan, Bantul, Yogyakarta, Indonesia
Location
Yogyakarta City,
Special Region of Yogyakarta
INDONESIA
Science in Information Technology Letters
ISSN: -     EISSN: 2722-4139     DOI: https://doi.org/10.31763/SiTech
Core Subject: Science
Science in Information Technology Letters (SITech) aims to keep abreast of current developments and innovation in the area of science in information technology, as well as to provide an engaging platform for scientists and engineers throughout the world to share research results in related disciplines. SITech is a peer-reviewed open-access journal covering four (4) major areas of research: 1) Artificial Intelligence, 2) Communication and Information Systems, 3) Software Engineering, and 4) Business Intelligence. Submitted papers must be written in English for the initial review stage by editors and the further review process by a minimum of two international reviewers. Accepted and published papers are freely accessible on this website.
Articles: 51 Documents
Betta fish classification using transfer learning and fine-tuning of CNN models Munif, Rihwan; Prahara, Adhi
Science in Information Technology Letters Vol 5, No 1 (2024): May 2024
Publisher : Association for Scientific Computing Electronics and Engineering (ASCEE)

DOI: 10.31763/sitech.v5i1.1378

Abstract

Betta fish, known as freshwater fighters, are in demand because of their beauty and distinctive characteristics. Varieties such as Crowntail, Halfmoon, Doubletail, Spadetail, Plakat, Veiltail, Paradise, and Rosetail are hard to recognize without prior knowledge of them. Therefore, transfer learning of Convolutional Neural Network models was proposed to classify betta fish from images. The transfer learning process used models pre-trained on ImageNet (VGG16, MobileNet, and InceptionV3) and fine-tuned them on the betta fish dataset. The models were trained on 461 images, validated with 154 images, and tested on 156 images. The results show that the InceptionV3 model performs best with an accuracy of 0.94, compared to VGG16 and MobileNet, which achieve accuracies of 0.93 and 0.92, respectively. With this level of accuracy, the trained model can be used in betta fish recognition applications to help people easily identify betta fish from images.
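As a rough illustration of the transfer-learning-and-fine-tuning setup described in the abstract, the sketch below freezes an ImageNet-pretrained InceptionV3 backbone, adds a new classification head for the eight betta varieties, and then unfreezes the backbone with a small learning rate. The head architecture, image size, and learning rates are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal transfer-learning sketch (assumed hyperparameters, not the paper's).
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 8  # Crowntail, Halfmoon, Doubletail, Spadetail, Plakat, Veiltail, Paradise, Rosetail

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # stage 1: freeze the pre-trained ImageNet weights

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy", metrics=["accuracy"])
# ... train the new head on the betta fish images here ...

# Stage 2: fine-tune the backbone with a small learning rate.
base.trainable = True
model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
              loss="categorical_crossentropy", metrics=["accuracy"])
```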
Classification Of Plants By Their Fruits And Leaves Using Convolutional Neural Networks Irhebhude, Martins E.; Kolawole, Adeola O.; Chinyio, Chat
Science in Information Technology Letters Vol 5, No 1 (2024): May 2024
Publisher : Association for Scientific Computing Electronics and Engineering (ASCEE)

DOI: 10.31763/sitech.v5i1.1364

Abstract

The world's population is growing exponentially, which makes an increase in food production imperative. In this light, farmers, industries, and researchers struggle with identifying and classifying food plants. Over the years, identifying fruits manually has been challenging: it is time-consuming, labour-intensive, and requires experts because of the similarity in leaves (e.g., within the citrus family), shapes, sizes, and colour. A computerized detection technique is therefore needed for fruit classification. Existing solutions are mostly based on either fruit or leaf images as input. A new model using a Convolutional Neural Network (CNN) is proposed for fruit classification. A dataset of five classes of fruit and leaf plants (Mango, African almond, Guava, Avocado, and Cashew), comprising 1,000 images each, was used. The proposed model's hyperparameters included Conv2D, activation, and dense layers, with a learning rate of 0.001 and a dropout rate of 0.5. Accuracies of 91%, 97%, 78%, and 97% were obtained for the proposed model on the local dataset, the proposed model on the benchmark dataset, the benchmark model on the local dataset, and the benchmark model on the benchmark dataset, respectively. The proposed model is robust on both the local and benchmark datasets and can be used for effective classification of plants.
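A minimal sketch of a CNN classifier of the kind the abstract describes is given below, using the five plant classes, the 0.001 learning rate, and the 0.5 dropout rate mentioned above; the input size, layer counts, and filter sizes are assumptions for illustration, not the authors' architecture.

```python
# Sketch of a small image classifier with Conv2D, dense, and dropout layers.
import tensorflow as tf
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),          # assumed input resolution
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),                         # dropout rate reported in the abstract
    layers.Dense(5, activation="softmax"),       # Mango, African almond, Guava, Avocado, Cashew
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),  # learning rate from the abstract
    loss="categorical_crossentropy",
    metrics=["accuracy"],
)
```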
Determination of living quarters clutter for caregiver support Karungaru, Stephen
Science in Information Technology Letters Vol 5, No 1 (2024): May 2024
Publisher : Association for Scientific Computing Electronics and Engineering (ASCEE)

DOI: 10.31763/sitech.v5i1.1459

Abstract

Providing enough health caregivers for an aging population has recently become challenging. To alleviate this problem, there is a growing demand to automate certain household monitoring tasks, especially for elderly persons living independently, to reduce the number of scheduled visits by caregivers. Moreover, gathering crucial data about functional, cognitive, and social health status using AI technology is essential for monitoring daily physical activities at home. This paper proposes a system that determines a room's cleanliness (degree of clutter) to decide whether a caregiver visit is required. A YOLOv5-based method is applied to recognize objects in the room, including clothes, utensils, and other items. However, due to background noise interference in the rooms and insufficient feature extraction in YOLOv5, an improvement scheme is proposed to increase detection accuracy. ECA (Efficient Channel Attention) is added to the network's backbone to focus on feature information, reducing the missed-detection rate. The initial anchor-box clustering is improved by replacing K-means with the K-means++ algorithm, enabling more effective adaptation to changing room views. The regression loss function EIoU (Enhanced Intersection over Union) is introduced to speed up convergence and improve accuracy. Room clutter is determined using set rules, comparing the detection results against prior information from the clean room using IoU. An evaluation with 9 subjects across 31 rooms was used to demonstrate the effectiveness of the proposed system. Compared to the original YOLOv5 algorithm, the method proposed in this paper achieved better performance.
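To illustrate the rule-based clutter check described above, the sketch below compares detected object boxes against prior "clean room" boxes using IoU and treats objects with no matching clean-room location as clutter. The box format, threshold, and scoring rule are assumptions for illustration, not the authors' exact rules.

```python
# IoU-based clutter scoring sketch (assumed rule, not the paper's exact logic).
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(a: Box, b: Box) -> float:
    """Intersection over Union of two axis-aligned boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def clutter_score(detections: List[Box], clean_prior: List[Box],
                  iou_thresh: float = 0.5) -> float:
    """Fraction of detected objects that match no clean-room location."""
    if not detections:
        return 0.0
    unmatched = sum(
        1 for d in detections
        if all(iou(d, p) < iou_thresh for p in clean_prior)
    )
    return unmatched / len(detections)
```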
Enhancing the performance of heart arrhythmia prediction model using Convolutional Neural Network based architectures Ismi, Dewi Pramudi; Khoirunnisa, Ninda
Science in Information Technology Letters Vol 5, No 2 (2024): November 2024
Publisher : Association for Scientific Computing Electronics and Engineering (ASCEE)

DOI: 10.31763/sitech.v5i2.1794

Abstract

Heart disease is one of the diseases with the highest mortality worldwide. Conventional ways of predicting heart disease are usually expensive, time-consuming, and prone to human error. Early detection of heart disease is important as it helps to prevent deaths caused by this disease. Machine learning, as a non-invasive means of predicting heart disease, is considered a fast and affordable way to reduce its fatality. This work aims at utilizing a Convolutional Neural Network (CNN) to enhance the performance of an arrhythmia prediction model. We built an arrhythmia prediction model using a neural network comprising multiple convolutional and max-pooling layers. Our proposed model is trained on the MIT-BIH Arrhythmia dataset. The model's performance has been evaluated, and it achieves an accuracy of 98.43%.
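A minimal sketch of a 1-D CNN with stacked convolutional and max-pooling layers, of the kind the abstract describes, is shown below. The segment length (187 samples, common in pre-processed MIT-BIH releases), the five beat classes, and the layer sizes are assumptions, not the authors' architecture.

```python
# Sketch of a 1-D CNN for heartbeat classification (assumed shapes and sizes).
import tensorflow as tf
from tensorflow.keras import layers, models

SEGMENT_LEN = 187   # assumed length of one single-lead ECG beat segment
NUM_CLASSES = 5     # assumed beat classes

model = models.Sequential([
    layers.Input(shape=(SEGMENT_LEN, 1)),
    layers.Conv1D(32, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Conv1D(64, 5, activation="relu"),
    layers.MaxPooling1D(2),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```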
Comparative analysis of decision tree and random forest classifiers for structured data classification in machine learning Kinasih, Agnes Nola Sekar; Handayani, Anik Nur; Ardiansah, Jevri Tri; Damanhuri, Nor Salwa
Science in Information Technology Letters Vol 5, No 2 (2024): November 2024
Publisher : Association for Scientific Computing Electronics and Engineering (ASCEE)

DOI: 10.31763/sitech.v5i2.1746

Abstract

This study explores the application of machine learning techniques, specifically classification, to improve data analysis outcomes. The primary objective is to evaluate and compare the performance of Decision Tree and Random Forest classifiers on a structured dataset. Using the Elbow Method for optimal clustering alongside Decision Tree and Random Forest classification algorithms, this research investigates the effectiveness of each method in accurately categorizing data. The study employs K-Means clustering to segment the data and Decision Trees and Random Forests for the classification tasks. The dataset used in this research was obtained from Kaggle and consists of 13 attributes and 1,048,575 rows, all of which are numeric. The key results show that Random Forest outperforms Decision Trees in classification accuracy, precision, recall, and F1 score, providing a more robust model for data classification. The performance improvement observed in Random Forest, particularly in handling complex datasets, demonstrates its superiority in generalizing across varied classes. The findings suggest that for applications requiring high accuracy and reliability, Random Forest is preferable to Decision Trees, especially when the dataset exhibits high variability. This research contributes to a deeper understanding of how different machine learning models can be applied to real-world classification problems, offering insights into the selection of the most appropriate model based on specific data characteristics.
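The comparison described above can be reproduced in outline with scikit-learn, as in the sketch below; synthetic data stands in for the Kaggle dataset, so the feature count is matched but the data, splits, and scores are illustrative only.

```python
# Sketch of a Decision Tree vs. Random Forest comparison on structured data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 13 numeric attributes, as in the abstract's dataset.
X, y = make_classification(n_samples=10_000, n_features=13, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

for name, clf in [("Decision Tree", DecisionTreeClassifier(random_state=42)),
                  ("Random Forest", RandomForestClassifier(
                      n_estimators=100, random_state=42))]:
    clf.fit(X_train, y_train)
    print(name)
    print(classification_report(y_test, clf.predict(X_test)))  # accuracy, precision, recall, F1
```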
Analyzing event relationships in Andersen's Fairy Tales with BERT and Graph Convolutional Network (GCN) Daniati, Erna; Wibawa, Aji Prasetya; Irianto, Wahyu Sakti Gunawan; Ghosh, Anusua; Hernandez, Leonel
Science in Information Technology Letters Vol 5, No 1 (2024): May 2024
Publisher : Association for Scientific Computing Electronics and Engineering (ASCEE)

DOI: 10.31763/sitech.v5i1.1810

Abstract

This study explores the narrative structures of Hans Christian Andersen's fairy tales by analyzing event relationships using a combination of BERT (Bidirectional Encoder Representations from Transformers) and Graph Convolutional Networks (GCN). The research begins with the extraction of key events from the tales using BERT, leveraging its advanced contextual understanding to accurately identify and classify events. These events are then modeled as nodes in a graph, with their relationships represented as edges, using GCNs to capture complex interactions and dependencies. The resulting event relationship graph provides a comprehensive visualization of the narrative structure, revealing causal chains, thematic connections, and non-linear relationships. Quantitative metrics, including event extraction accuracy (92.5%), relationship precision (89.3%), and F1 score (90.8%), demonstrate the effectiveness of the proposed methodology. The analysis uncovers recurring patterns in Andersen's storytelling, such as linear event progressions, thematic contrasts, and intricate character interactions. These findings not only enhance our understanding of Andersen's narrative techniques but also showcase the potential of combining BERT and GCN for literary analysis. This research bridges the gap between computational linguistics and literary studies, offering a data-driven approach to narrative analysis. The methodology developed here can be extended to other genres and domains, paving the way for further interdisciplinary research. By integrating state-of-the-art NLP models with graph-based machine learning techniques, this study advances our ability to analyze and interpret complex textual data, providing new insights into the art of storytelling.
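The pipeline described above can be sketched as follows: event sentences are embedded with a pre-trained BERT model, and the embeddings are then propagated over an event-relationship graph with a single graph convolution (H' = ReLU(Â H W)). The example events, edges, checkpoint name, and layer width are illustrative assumptions, not the authors' configuration.

```python
# Sketch of BERT event embeddings combined with one GCN layer.
import torch
from transformers import AutoModel, AutoTokenizer

events = ["The duckling was mocked by the other birds.",
          "The duckling fled the farmyard.",
          "The duckling grew into a swan."]
edges = [(0, 1), (1, 2)]  # assumed causal links between consecutive events

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")
with torch.no_grad():
    batch = tok(events, padding=True, return_tensors="pt")
    H = bert(**batch).last_hidden_state[:, 0]   # [CLS] embedding per event node

# Symmetrically normalized adjacency matrix with self-loops.
n = len(events)
A = torch.eye(n)
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg_inv_sqrt = A.sum(dim=1).pow(-0.5)
A_hat = deg_inv_sqrt.unsqueeze(1) * A * deg_inv_sqrt.unsqueeze(0)

# One GCN layer: mix each event's embedding with its neighbours'.
W = torch.nn.Linear(H.size(1), 128)
H_out = torch.relu(A_hat @ W(H))
print(H_out.shape)  # (3, 128) relation-aware event representations
```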
Retaining humorous content from marked stand-up comedy text Supriyono, Supriyono; Wibawa, Aji Prasetya; Suyono, Suyono; Kurniawan, Fachrul; Voliansky, Roman; Cengiz, Korhan
Science in Information Technology Letters Vol 5, No 2 (2024): November 2024
Publisher : Association for Scientific Computing Electronics and Engineering (ASCEE)

DOI: 10.31763/sitech.v5i2.1812

Abstract

Identifying humor in stand-up comedy texts poses distinct challenges due to humor's subjective and context-dependent nature. This study introduces a method for humor retention in stand-up comedy content by employing a pre-trained BERT model fine-tuned for humor classification. The process commences with the collection and annotation of a varied assortment of stand-up comedy texts, categorized as humorous or non-humorous, with essential comic elements such as punchlines and setups highlighted to augment the model's comprehension of humor. The texts undergo preprocessing and tokenization to prepare them as input for the BERT model. After refining the model on the annotated dataset, predictions regarding humor retention are generated for each text, yielding classifications and confidence scores that reflect the model's certainty in its predictions. A prediction-confidence threshold is set to categorize texts as "retaining humor." The results indicate that prediction confidence is a dependable metric for humor retention, with elevated confidence scores associated with more accurate comedy classification. Nonetheless, the analysis reveals that text length does not affect the model's confidence much, contradicting the presumption that lengthier texts are more likely to be humorous. The findings underscore the significance of contextual and linguistic elements in comedy detection, indicating opportunities for model enhancement. Future efforts will concentrate on augmenting the dataset to encompass a broader range of comic styles and on integrating more contextual variables to improve prediction accuracy, especially in intricate or ambiguous comedic situations.
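As a rough illustration of the classification-plus-confidence step described above, the sketch below runs a fine-tuned BERT sequence classifier on a text and thresholds the softmax confidence to decide whether it "retains humor." The checkpoint path, label convention, and 0.8 threshold are placeholders, not the paper's values.

```python
# Sketch of humor classification with a confidence threshold (assumed values).
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

CHECKPOINT = "path/to/fine-tuned-bert-humor"   # hypothetical fine-tuned model
CONFIDENCE_THRESHOLD = 0.8                     # assumed decision threshold

tok = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT)

def retains_humor(text: str) -> tuple[bool, float]:
    """Return (retains_humor, confidence) for one stand-up comedy snippet."""
    inputs = tok(text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    confidence, label = probs.max().item(), probs.argmax().item()
    # Assumed convention: label 1 = humorous.
    return (label == 1 and confidence >= CONFIDENCE_THRESHOLD), confidence
```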
Machine learning-based residential load demand forecasting: Evaluating ELM, XGBoost, RF, and SVM for enhanced energy system and sustainability Abdalla, Modawy Adam Ali; Ishaga, Ahmed Mohamed; Osman, Hassan Ahmed; Elhindi, Mohamed; Ibrahim, Nasreldin; Snani, Aissa; Hamid, Gomaa Haroun Ali; Hammad, Abdallah
Science in Information Technology Letters Vol 6, No 1 (2025): May 2025
Publisher : Association for Scientific Computing Electronics and Engineering (ASCEE)

DOI: 10.31763/sitech.v6i1.1866

Abstract

Accurate forecasting of electrical power load is essential for properly planning, operating, and integrating energy systems to accommodate renewables and achieve environmental sustainability. Therefore, this study applies different machine learning (ML) methods, including support vector machines (SVM), random forests (RF), extreme learning machines (ELM), and extreme gradient boosting (XGBoost), to predict hourly electricity demand, using electricity consumption and temperature data to train and test the ML models. The data are processed with the autocorrelation function (ACF) and cross-correlation function (CCF) to determine the appropriate lag times for the inputs. Furthermore, ML model accuracy is assessed using the coefficient of determination (R²), mean absolute error (MAE), and root mean square error (RMSE). Results show that the ELM model achieved the highest R² in both summer (0.971) and winter (0.868), outperforming the other models in accuracy (R²) and error reduction (MAE and RMSE). ELM also captured load fluctuations more effectively. The results of this research apply to load demand forecasting for the planning and operation of residential grids: they help estimate load demand and provide useful guidance for residential grid planning and management by identifying the best techniques for precisely estimating load demand and recognizing domestic energy consumption patterns.
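The workflow described above can be outlined as in the sketch below: lagged load and temperature features (lags chosen via ACF/CCF in the paper, fixed here for illustration) feed several regressors, which are scored with R², MAE, and RMSE. Synthetic data replaces the real consumption series, and ELM is omitted because scikit-learn has no built-in implementation.

```python
# Sketch of lag-feature load forecasting with RF, SVM, and XGBoost regressors.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.svm import SVR
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
n = 2000  # synthetic hourly stand-in for the consumption/temperature series
df = pd.DataFrame({"load": rng.normal(3.0, 0.5, n), "temp": rng.normal(25, 5, n)})
for lag in (1, 2, 24):                       # assumed ACF/CCF-selected lags
    df[f"load_lag{lag}"] = df["load"].shift(lag)
    df[f"temp_lag{lag}"] = df["temp"].shift(lag)
df = df.dropna()

X, y = df.drop(columns=["load", "temp"]), df["load"]
split = int(0.8 * len(df))                   # chronological train/test split
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

for name, model in [("RF", RandomForestRegressor(random_state=0)),
                    ("SVM", SVR()),
                    ("XGBoost", XGBRegressor(random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    rmse = mean_squared_error(y_te, pred) ** 0.5
    print(name, r2_score(y_te, pred), mean_absolute_error(y_te, pred), rmse)
```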
Optimizing breast cancer classification using SMOTE, Boruta, and XGBoost Hardiyanti P, Cicin
Science in Information Technology Letters Vol 6, No 1 (2025): May 2025
Publisher : Association for Scientific Computing Electronics and Engineering (ASCEE)

DOI: 10.31763/sitech.v6i1.2109

Abstract

Breast cancer remains one of the leading causes of death among women worldwide. This study aims to develop a clinical data-based breast cancer classification framework by integrating the Synthetic Minority Oversampling Technique (SMOTE), the Boruta feature selection algorithm, and the XGBoost classifier. The proposed approach is tested on the Wisconsin Breast Cancer Diagnostic (WBCD) dataset, consisting of 569 samples and 30 numerical features. SMOTE addresses class imbalance, Boruta selects the most relevant diagnostic features, and XGBoost serves as the main classification algorithm due to its robustness on tabular and imbalanced data. Model validation is conducted through repeated stratified K-fold cross-validation with 30 repetitions to ensure statistical stability. The resulting model achieves excellent classification performance, with an average accuracy of 0.9608 ± 0.0274, precision of 0.9465 ± 0.0481, recall of 0.9512 ± 0.0524, and F1-score of 0.9475 ± 0.0374. The ROC-AUC value reaches 0.9926 ± 0.0094, the PR-AUC is 0.9906 ± 0.0113, and the Matthews Correlation Coefficient (MCC) is 0.9179 ± 0.0575, indicating a well-balanced model. Clinically, this model can aid early diagnosis by effectively reducing irrelevant diagnostic attributes, retaining only 10 key features without compromising accuracy, thereby offering a lightweight yet reliable diagnostic tool. However, limitations include the relatively small dataset and the absence of hyperparameter tuning. Future research should explore larger datasets, advanced ensemble methods, and interpretability techniques such as SHAP or LIME to improve clinical transparency and adoption.
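A minimal sketch of the SMOTE + Boruta + XGBoost workflow described above is shown below, using the scikit-learn copy of the WBCD data and repeated stratified K-fold cross-validation. Boruta is applied once up front here for brevity, and the fold count, repetitions, and hyperparameters are illustrative, not the paper's settings.

```python
# Sketch: Boruta feature selection, then SMOTE + XGBoost evaluated with repeated CV.
from boruta import BorutaPy
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from xgboost import XGBClassifier

X, y = load_breast_cancer(return_X_y=True)   # 569 samples, 30 numeric features

# Boruta feature selection with a random forest as the base estimator.
selector = BorutaPy(RandomForestClassifier(n_jobs=-1, max_depth=5),
                    n_estimators="auto", random_state=42)
selector.fit(X, y)
X_sel = X[:, selector.support_]               # keep only confirmed features

pipe = Pipeline([
    ("smote", SMOTE(random_state=42)),        # oversample only inside each training fold
    ("clf", XGBClassifier(eval_metric="logloss", random_state=42)),
])
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=30, random_state=42)
scores = cross_val_score(pipe, X_sel, y, cv=cv, scoring="accuracy")
print(f"accuracy: {scores.mean():.4f} ± {scores.std():.4f}")
```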
Classification of coronary heart disease using the multi-layer perceptron neural networks Ikhwandoko, Fatih; Ismi, Dewi Pramudi
Science in Information Technology Letters Vol 6, No 1 (2025): May 2025
Publisher : Association for Scientific Computing Electronics and Engineering (ASCEE)

DOI: 10.31763/sitech.v6i1.2186

Abstract

Coronary heart disease (CHD) is one of the leading causes of death worldwide. The complexity of risk factors such as blood pressure, cholesterol, smoking history, and unhealthy lifestyles often makes the diagnosis process less effective. With the increasing need for fast and accurate heart disease prediction systems, artificial intelligence-based methods such as neural networks are a promising solution. This study aims to evaluate the ability of the Multi-Layer Perceptron (MLP) algorithm to classify CHD risk using the Framingham Heart Study dataset, while comparing it with other commonly used classification methods. The research used the Framingham heart disease dataset containing 15 medical features. The data were then processed through cleaning, normalization, and class balancing using the SMOTE method. An MLP model with two hidden layers of 200 and 128 neurons was designed and tested in three train/test split scenarios (70:30, 75:25, and 80:20). The model was trained for 100 epochs and evaluated using accuracy, precision, and recall metrics. The experimental results show that MLP achieves high performance with 86.20% accuracy, 84.40% precision, and 88.56% recall. Compared to other methods such as Decision Tree and SVM, MLP demonstrated superior classification accuracy. Thus, MLP has the potential to be an effective tool for supporting earlier and more efficient diagnosis of coronary heart disease.
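The setup described above can be sketched as follows: SMOTE class balancing, standardization, a (200, 128) hidden-layer MLP, a 70:30 split, and 100 training epochs. A synthetic stand-in replaces the Framingham data, and the preprocessing details and optimizer settings are assumptions.

```python
# Sketch of an MLP with two hidden layers (200, 128) and SMOTE balancing.
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, precision_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

# Imbalanced synthetic stand-in with 15 features, like the Framingham data.
X, y = make_classification(n_samples=4000, n_features=15,
                           weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, stratify=y, random_state=0)   # 70:30 scenario

scaler = StandardScaler().fit(X_train)
X_train_bal, y_train_bal = SMOTE(random_state=0).fit_resample(
    scaler.transform(X_train), y_train)                  # balance only the training set

mlp = MLPClassifier(hidden_layer_sizes=(200, 128), max_iter=100, random_state=0)
mlp.fit(X_train_bal, y_train_bal)

pred = mlp.predict(scaler.transform(X_test))
print(accuracy_score(y_test, pred),
      precision_score(y_test, pred),
      recall_score(y_test, pred))
```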