Location: Kota Yogyakarta, Daerah Istimewa Yogyakarta, Indonesia
Indonesian Journal of Electrical Engineering and Computer Science
ISSN: 2502-4752 | EISSN: 2502-4760
Articles: 9,174 documents
Fraud detection using TabNet classifier: a machine learning approach Mary, G. Anish; Sudha, S.
Indonesian Journal of Electrical Engineering and Computer Science Vol 41, No 2: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v41.i2.pp601-613

Abstract

Detecting fraudulent transactions is a major challenge in the digital financial world. Transaction volumes are growing quickly, and new attack methods often outstrip traditional detection systems. Current fraud-detection models usually lack transparency and do not perform reliably on imbalanced real-world datasets, highlighting the urgent need for explainable deep-learning methods for tabular financial data. This paper presents an interpretable deep-learning framework built on the TabNet classifier. It uses attention-driven feature selection, sparse representation learning, and sequential decision reasoning to model complex interactions among transactional, demographic, and geographical factors. The model was tested on a real-world credit card transaction dataset with 23 features, achieving 99.69% accuracy, a 0.975 F1-score, and a 0.956 ROC-AUC, outperforming benchmark models such as random forest, XGBoost, LightGBM, and logistic regression. Beyond these predictive results, interpretability is enhanced by TabNet's attention-based feature attribution, which makes model decisions transparent and supports deployment in regulated financial environments where precision and accountability are crucial.
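TabNet's attention-driven feature selection relies on a sparse projection (sparsemax) that, unlike softmax, can assign exactly zero weight to irrelevant features. A minimal NumPy sketch of this mechanism, with hypothetical attention logits (the feature names and values are illustrative, not from the paper):

```python
import numpy as np

def sparsemax(z):
    """Sparsemax projection (Martins & Astudillo, 2016): like softmax,
    the output is a probability distribution, but unimportant inputs
    can receive exactly zero weight -- the basis of TabNet's masks."""
    z_sorted = np.sort(z)[::-1]            # sort logits descending
    k = np.arange(1, len(z) + 1)
    cssv = np.cumsum(z_sorted)             # cumulative sums of sorted logits
    support = 1 + k * z_sorted > cssv      # indices inside the support
    k_z = k[support][-1]                   # size of the support set
    tau = (cssv[k_z - 1] - 1.0) / k_z      # threshold
    return np.maximum(z - tau, 0.0)

# Hypothetical attention logits over six transaction features:
logits = np.array([2.0, 1.5, 0.1, -0.5, -1.0, 0.05])
mask = sparsemax(logits)   # only the two strongest features survive
```

Here the mask sums to 1 but zeroes out four of the six features, which is what makes the resulting feature attributions easy to read off.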
An investigation of different low-power circuits and enhanced energy efficiency in medical applications R, Prabhu; Rajagopal, Sivakumar
Indonesian Journal of Electrical Engineering and Computer Science Vol 41, No 2: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v41.i2.pp478-493

Abstract

This research investigates the application of low-power circuits in medical devices and imaging systems. The primary goal is to address the growing demand for energy-efficient solutions in medical applications, driven by the development of medical technologies, particularly implanted and battery-operated devices. This paper explores the integration of adiabatic logic as a critical enabler of low power consumption in medical applications. The study examines several low-power circuit designs and technologies that optimize power usage without sacrificing performance; among these, adiabatic circuits offer a promising substitute for conventional circuitry in low-energy design. Adiabatic logic aims to create energy-efficient digital circuits that consume significantly less power than conventional complementary metal-oxide-semiconductor (CMOS) circuits. This is accomplished by recovering and recycling energy that would otherwise be lost as heat, and by carefully controlling energy flows during switching events. Adiabatic logic is particularly valuable in battery-operated and energy-constrained devices.
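The energy advantage of adiabatic switching can be illustrated with the standard first-order formulas: conventional CMOS dissipates E = CV²/2 per charging event, while adiabatically ramping the supply over a time T through channel resistance R dissipates approximately E = (RC/T)·CV², which shrinks as T grows relative to RC. A back-of-the-envelope comparison with assumed component values (not taken from the paper):

```python
# Illustrative switching-energy comparison; all component values
# below are assumptions chosen for a round-number example.
C = 10e-15   # load capacitance: 10 fF (assumed)
V = 1.0      # supply voltage: 1 V (assumed)
R = 1e3      # channel resistance: 1 kOhm (assumed)
T = 10e-9    # adiabatic ramp time: 10 ns (assumed)

# Conventional CMOS: half the stored energy is lost as heat per switch.
e_cmos = 0.5 * C * V**2

# Adiabatic charging: residual dissipation scales with RC/T.
e_adiabatic = (R * C / T) * C * V**2

savings = e_cmos / e_adiabatic   # 500x lower loss with these values
```

With these numbers the adiabatic ramp dissipates 500 times less energy per switching event, which is why slow, energy-recovering clocking suits implanted and battery-operated devices.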
RAC: a reusable adaptive convolution for CNN layer Hung, Nguyen Viet; Huynh, Phi Dinh; Thinh, Pham Hong; Nguyen, Phuc Hau; Hoang, Trong-Minh
Indonesian Journal of Electrical Engineering and Computer Science Vol 41, No 2: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v41.i2.pp753-763

Abstract

This paper proposes reusable adaptive convolution (RAC), an efficient alternative to standard 3×3 convolutions for convolutional neural networks (CNNs). The main advantage of RAC lies in its simplicity and parameter efficiency, achieved by sharing horizontal and vertical 1×k/k×1 filter banks across blocks within a stage and recombining them through a lightweight 1×1 mixing layer. By operating at the operator design level, RAC avoids post-training compression steps and preserves the conventional Conv–BN–activation structure, enabling seamless integration into existing CNN backbones. To evaluate the effectiveness of the proposed method, extensive experiments are conducted on CIFAR-10 using several architectures, including ResNet-18/50/101, DenseNet, WideResNet, and EfficientNet. Experimental results demonstrate that RAC significantly reduces parameters and memory usage while maintaining competitive accuracy. These results indicate that RAC offers a reasonable balance between accuracy and compression, and is suitable for deploying CNNs on resource-constrained platforms.
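The parameter savings described above can be sketched with a simple accounting exercise. This is an illustrative simplification under stated assumptions (one shared 1×k bank and one shared k×1 bank per stage, a per-block 1×1 mixing layer, equal input/output channels, biases and BN ignored); the paper's exact parameterization may differ:

```python
def conv3x3_params(c, blocks):
    """Baseline: every block owns its own C-in x C-out x 3 x 3 kernel."""
    return blocks * c * c * 3 * 3

def rac_params(c, blocks, k=3):
    """RAC-style accounting (assumed simplification): the 1xk and kx1
    banks are shared by all blocks in a stage, and each block adds
    only a lightweight 1x1 mixing layer."""
    shared = 2 * c * c * k    # one horizontal + one vertical bank, shared
    mixing = blocks * c * c   # per-block 1x1 recombination
    return shared + mixing

c, blocks = 64, 4                  # hypothetical stage: 64 channels, 4 blocks
std = conv3x3_params(c, blocks)    # 147,456 parameters
rac = rac_params(c, blocks)        # 40,960 parameters
ratio = std / rac                  # 3.6x reduction in this toy setting
```

The savings grow with the number of blocks sharing the banks, since the shared term is paid once per stage while the baseline pays the full 3×3 cost per block.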
Stable and accurate customer churn prediction: comparative analysis of eight classification algorithms Haris, Vincent Alexander; Arsyad, Muhammad Ilyas; Adi Nugraha, Nathanael Septhian; Dani, Yasi; Ginting, Maria Artanta
Indonesian Journal of Electrical Engineering and Computer Science Vol 41, No 2: February 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v41.i2.pp655-665

Abstract

Predicting customer churn is a challenging problem in many subscription-based industries, where retaining existing customers is considered more cost-effective than acquiring new ones. In this research, customer churn is predicted using a public dataset from an internet service provider with 72,274 instances and a 55% churn rate. The main contribution is a comprehensive comparison of the stability and performance of eight classification algorithms for customer churn prediction on a large-scale public dataset. The research process includes data collection, data preprocessing, feature engineering, and model evaluation. The evaluation reports test accuracy, accuracy gap, precision, recall, F1-score, and ROC-AUC under stratified K-fold cross-validation. Since the proportions of churn and non-churn in the dataset are relatively balanced, the F1-score is used as the primary evaluation metric, as it provides a balanced assessment of precision and recall for both classes. The results show that CatBoost and XGBoost are the most effective models, achieving high F1-scores of 94.97% and 94.92%, respectively.
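The metrics the comparison is built on are straightforward to compute from confusion-matrix counts. A minimal sketch with hypothetical counts (not the paper's actual numbers) showing how F1 balances precision and recall:

```python
def f1_score(tp, fp, fn):
    """F1 is the harmonic mean of precision and recall, so it only
    scores well when both are high -- useful when false alarms and
    missed churners are both costly."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical confusion counts for a churn classifier on a test fold:
tp, fp, fn, tn = 920, 50, 60, 970

f1 = f1_score(tp, fp, fn)                  # about 0.9436
accuracy = (tp + tn) / (tp + fp + fn + tn)  # 0.945
# The "accuracy gap" in the abstract is train accuracy minus test
# accuracy; a small gap signals a stable, non-overfitting model.
```

Note that F1 reduces to 2·TP / (2·TP + FP + FN), so it ignores true negatives entirely, which is why it complements plain accuracy on class-sensitive tasks like churn.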
