Location
Kota Yogyakarta, Daerah Istimewa Yogyakarta, Indonesia
Bulletin of Electrical Engineering and Informatics
ISSN: 2089-3191     e-ISSN: 2302-9285
Core Subject: Engineering
Bulletin of Electrical Engineering and Informatics (Buletin Teknik Elektro dan Informatika), ISSN: 2089-3191, e-ISSN: 2302-9285, is open to submissions from scholars and experts worldwide in the broad areas of electrical, electronics, instrumentation, control, telecommunication, and computer engineering. The journal publishes original papers in the fields of electrical, computer, and informatics engineering.
Articles: 75 documents in issue "Vol 14, No 6: December 2025"
Multi-attribute based optimal location and sizing of solar power plant in radial distribution system
Authors: Kumar, Ramesh; Singh, Digambar; Aljaidi, Mohammad; Singla, Manish Kumar; Tripathi, Shashank
Bulletin of Electrical Engineering and Informatics Vol 14, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v14i6.9627

Abstract

Advancements in renewable energy sources (RES) have significantly increased power generation and reduced emissions. Optimally integrating RES into distribution systems can minimize power losses and emissions while enhancing the voltage profile and stability. Determining the optimal location and size of RES is therefore crucial for their effective integration. This paper presents a novel approach for identifying the optimal location and size of a solar power plant (SPP) in a distribution system, considering system power losses, voltage profile, voltage stability, and emissions simultaneously. A simple yet effective methodology combining repeated load flow and fuzzy systems is proposed: repeated load flow calculates the relevant attributes, while fuzzy decision-making determines the optimal solution. The effectiveness of the proposed method is demonstrated on the IEEE 33-bus system. The results illustrate that integrating an SPP at the optimal location and size can significantly reduce power losses and emissions while improving the voltage profile and stability.
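The max-min fuzzy selection the abstract describes can be sketched in a few lines: each candidate bus gets a membership score per attribute (1.0 at the best observed value, 0.0 at the worst), the per-attribute scores are combined with the fuzzy minimum, and the bus whose weakest attribute is strongest wins. The bus numbers and attribute values below are illustrative, not results from the paper.

```python
def membership(value, best, worst):
    """Linear membership: 1.0 at the best attribute value, 0.0 at the worst."""
    if worst == best:
        return 1.0
    return max(0.0, min(1.0, (worst - value) / (worst - best)))

def select_bus(candidates):
    """candidates: {bus: {attribute: value}}, lower values assumed better.
    Returns (best bus, {bus: min-membership score}) via max-min aggregation."""
    keys = next(iter(candidates.values())).keys()
    bounds = {k: (min(c[k] for c in candidates.values()),
                  max(c[k] for c in candidates.values())) for k in keys}
    scores = {}
    for bus, attrs in candidates.items():
        mus = [membership(attrs[k], bounds[k][0], bounds[k][1]) for k in keys]
        scores[bus] = min(mus)  # fuzzy intersection across attributes
    return max(scores, key=scores.get), scores

# Illustrative (made-up) attribute values for three candidate buses
candidates = {
    6:  {"loss_kw": 110.0, "volt_dev": 0.04, "emissions": 820.0},
    18: {"loss_kw": 125.0, "volt_dev": 0.06, "emissions": 860.0},
    30: {"loss_kw": 102.0, "volt_dev": 0.05, "emissions": 800.0},
}
best_bus, scores = select_bus(candidates)
```

In this toy example bus 30 is best on losses and emissions, but bus 6 wins because its worst attribute (losses) still scores higher than bus 30's worst attribute (voltage deviation), which is exactly the compromise behavior max-min aggregation is chosen for.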
Artificial neural network maximum power point tracking for mitigation photovoltaic harmonic distortion
Authors: Bouledroua, Adel; Mesbah, Tarek; Kelaiaia, Samia
Bulletin of Electrical Engineering and Informatics Vol 14, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v14i6.10050

Abstract

This study introduces a novel methodology aimed at minimising total harmonic distortion (THD) in grid-connected photovoltaic (PV) systems (GCPVs) through the implementation of a maximum power point tracking (MPPT) approach based on artificial neural networks (ANN). High THD levels in PV systems can lead to inefficiencies, power quality issues, and potential damage to the grid infrastructure. Although traditional MPPT methods effectively optimise the power output, they often fail to address harmonics. The proposed ANN-based MPPT algorithm improves PV power harvesting while actively minimising the harmonic distortions. The ANN was trained using a comprehensive dataset that included various environmental conditions, ensuring robust performance in diverse operational scenarios. Simulation results demonstrate that the ANN-based MPPT approach significantly reduces THD to below 1% across various irradiance levels, in contrast to the 1.18% to 2.72% observed with conventional methods such as perturb and observe (PO), while simultaneously preserving optimal power output. Reducing harmonic distortion improves the power quality, system efficiency, and lifespan of grid-connected components. This study highlights ANN-based control strategies for addressing the challenge of maximising energy harvesting and maintaining power quality in modern PV systems, offering a solution for the sustainable integration of solar energy into the grid.
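The THD figures quoted (below 1% versus 1.18% to 2.72%) follow the standard definition: the RMS of the harmonic components divided by the fundamental, expressed as a percentage. A minimal sketch with an invented spectrum (not data from the paper):

```python
import math

def thd_percent(harmonic_rms):
    """harmonic_rms[0] is the fundamental; the rest are harmonic RMS values.
    THD% = 100 * sqrt(sum of squared harmonics) / fundamental."""
    fundamental, harmonics = harmonic_rms[0], harmonic_rms[1:]
    return 100.0 * math.sqrt(sum(h * h for h in harmonics)) / fundamental

# Illustrative inverter-current spectrum (A, RMS): fundamental + 5th/7th/11th
spectrum = [10.0, 0.06, 0.05, 0.03]
thd = thd_percent(spectrum)  # roughly 0.84% for this invented spectrum
```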
Analog artificial intelligence hardware for neural networks: design trends and considerations
Authors: S. Gorde, Kanchan; M. Sonavane, Sonali; Hutke, Sonal; Hutke, Ankush
Bulletin of Electrical Engineering and Informatics Vol 14, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v14i6.10842

Abstract

The increasing deployment of artificial intelligence (AI) in real-time and edge applications has intensified the demand for energy-efficient hardware capable of high-throughput processing. Conventional digital processors are constrained by sequential data processing, memory bandwidth limitations, and high power consumption, making them suboptimal for edge-based AI. This review presents a comprehensive analysis of analog very-large-scale integration (VLSI) design approaches for neural network (NN) implementation, focusing on circuit-level architectures including in-memory analog computing, current-mode circuits, switched-capacitor (SC) techniques, and operational transconductance amplifier (OTA)-based designs. Significant hardware design considerations, such as process variation, crossbar scalability, precision-linearity trade-offs, and mixed-signal interface challenges, are critically examined. Furthermore, training methodologies, spanning offline learning, circuit calibration, and programmability, are discussed in the context of analog AI hardware. The review incorporates case studies, recent developments in edge deployment, and a comparative analysis of advanced analog VLSI chips. Key performance evaluation metrics, such as accuracy, calibration overhead, noise robustness, and energy per inference, are also addressed, along with circuit-level design aspects that impact the performance, precision, and reliability of analog computing blocks. The paper concludes by identifying research gaps and future directions for the development of analog AI hardware suitable for real-world edge applications.
Evaluating the effectiveness of Havij for structured query language injection exploitation in web applications
Authors: Baklizi, Mahmoud; Alkhazaleh, Mohammad; Alzghoul, Musab Bassam Yousef; Maaita, Adi; Zraqou, Jamal; AlShaikh-Hasan, Mohammad
Bulletin of Electrical Engineering and Informatics Vol 14, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v14i6.10751

Abstract

Structured query language injection (SQLi) remains one of the most critical risks to web application security, allowing attackers to tamper with sensitive data and even compromise an entire database infrastructure. Although many automated tools are available, previous studies usually offer only descriptive overviews rather than empirical assessments that measure performance and usability. This research fills that gap with a systematic five-stage experimental analysis of the Havij automated SQLi tool under a controlled, ethical test setup. The methodology comprised vulnerability confirmation, automated exploitation, data extraction, and performance benchmarking, and the results were compared against the industry-standard SQLmap tool. Havij was able to locate the target database, enumerate its structure, and extract authentication credentials in under a minute, demonstrating both efficiency and ease of use. In contrast to prior literature, this work presents not only quantitative measures (time-to-exploit, request volume, and success rate) but also a qualitative evaluation (user accessibility and limitations), yielding a comprehensive assessment. The results highlight trade-offs between depth and accessibility, underscore the continued danger of SQLi in practice, and provide recommendations that developers and security experts can implement.
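The three quantitative measures named (time-to-exploit, request volume, success rate) are straightforward aggregates over repeated trials against a deliberately vulnerable test application. A sketch of that aggregation, using hypothetical trial logs rather than the paper's data:

```python
def summarize(trials):
    """trials: list of dicts with keys 'success' (bool), 'seconds', 'requests'.
    Returns the three per-tool benchmark measures; time and request averages
    are computed over successful trials only."""
    wins = [t for t in trials if t["success"]]
    return {
        "success_rate": len(wins) / len(trials),
        "mean_time_to_exploit_s":
            sum(t["seconds"] for t in wins) / len(wins) if wins else None,
        "mean_requests":
            sum(t["requests"] for t in wins) / len(wins) if wins else None,
    }

# Hypothetical logs for one tool; real values would come from lab experiments
trials = [
    {"success": True,  "seconds": 42.0,  "requests": 310},
    {"success": True,  "seconds": 55.0,  "requests": 290},
    {"success": False, "seconds": 120.0, "requests": 800},
]
stats = summarize(trials)
```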
Comparative analysis of Haar Cascade, OpenCV, and you only look once algorithms for vehicle detection
Authors: Kaur, Gagandeep; Pawar, Shital; Patil, Rutuja Rajendra; Patil, Amol Vijay; Yenkikar, Anuradha V.; Bhandari, Nikita; Kadam, Kalyani Dhananjay
Bulletin of Electrical Engineering and Informatics Vol 14, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v14i6.10554

Abstract

Object detection is one of the fundamental tasks in computer vision, with applications ranging from autonomous driving to monitoring systems. This study presents a comparative analysis of vehicle detection approaches, contrasting traditional methods (OpenCV contour analysis and Haar Cascade) with the modern deep-learning-based you only look once version 8 (YOLOv8) and its variants. Vehicles were identified and localized within video frames using bounding boxes, with performance assessed through accuracy, F1-score, mean average precision (mAP), and inference speed. YOLOv8 consistently achieved superior accuracy (up to 98% in specific scenarios) and real-time processing speeds (155 FPS), confirming its suitability for safety-critical applications such as intelligent transport systems and autonomous navigation. However, its higher computational and memory demands highlight deployment trade-offs, where lighter variants such as YOLOv8s remain feasible for embedded or low-power devices. In contrast, Haar Cascade and contour analysis offered faster execution and smaller memory footprints but lacked robustness under complex environmental conditions. The study also acknowledges limitations such as dataset bias, adverse weather effects, and scalability challenges, which may impact generalization in real-world deployments. By analyzing these trade-offs, the work provides essential insights to guide practitioners in selecting suitable vehicle detection solutions across diverse application environments.
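The F1-score used to compare detectors derives from per-frame true-positive, false-positive, and false-negative counts in the usual way. A sketch with made-up counts (not the paper's measurements) showing how a higher-precision, higher-recall detector wins on F1:

```python
def detection_metrics(tp, fp, fn):
    """Precision, recall, and F1 from detection counts against ground truth."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical counts for two detectors evaluated on the same video clip
yolo_p, yolo_r, yolo_f1 = detection_metrics(tp=196, fp=4, fn=8)
haar_p, haar_r, haar_f1 = detection_metrics(tp=150, fp=40, fn=54)
```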
Real-time vehicle detection and speed estimation system using Raspberry Pi and camera module
Authors: Jyothi, B; Pabbuleti, Bhavana; Sanjeev, Gadi; Rao, Kambhampati Venkata Govardhan; Srilakshmi, S. Sai; Jee, Atul; Kumar, Malligunta Kiran; Bikku, Thulasi; Reddy, Ch. Rami
Bulletin of Electrical Engineering and Informatics Vol 14, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v14i6.9931

Abstract

In the era of intelligent transportation systems, real-time vehicle detection and distance estimation play a crucial role in enhancing road safety and traffic efficiency. This study proposes a low-cost, real-time system that integrates you only look once–version 8 (YOLOv8)-based deep learning for vehicle detection with monocular vision techniques for distance estimation, implemented on a Raspberry Pi embedded platform. The objective is to provide a scalable, affordable solution for traffic monitoring and collision avoidance in resource-constrained environments. The methodology involves using a camera module connected to Raspberry Pi for live video capture, YOLOv8 for object detection, and a calibrated monocular distance estimation algorithm based on bounding box dimensions and known vehicle sizes. Experimental results show that the system achieves over 90% detection accuracy under standard lighting conditions and maintains a distance estimation error below 10% for vehicles within 15 meters. The model processes video frames in real time (~0.17 seconds per frame), proving its effectiveness for embedded deployment. In conclusion, the proposed system offers a robust, power-efficient alternative to high-cost light detection and ranging (LiDAR) or stereo vision systems. Its modular design supports future enhancements such as speed estimation or multi-camera integration, making it highly relevant for smart city applications and low-cost vehicular safety systems.
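Monocular distance estimation from a bounding box, as described, reduces to pinhole-camera similar triangles: distance = real object width × focal length (in pixels) / bounding-box width (in pixels). A sketch with an assumed calibration (700 px focal length, 1.8 m average car width); the paper's actual calibration values are not given in the abstract:

```python
def estimate_distance_m(real_width_m, focal_px, bbox_width_px):
    """Pinhole-camera similar-triangles estimate from one calibrated camera:
    an object of known width appears smaller in pixels the farther it is."""
    return real_width_m * focal_px / bbox_width_px

# A 120 px wide car box with the assumed calibration -> 10.5 m
d = estimate_distance_m(real_width_m=1.8, focal_px=700.0, bbox_width_px=120.0)
```

The quoted sub-10% error inside 15 m is consistent with this model's main weakness: the error grows as the box shrinks, since one pixel of box-width noise matters more at distance.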
Integrating RPA, BPM, and DT in the context of Industry 4.0 and 5.0: a strategic approach for modern enterprises
Authors: Bui Quang, Truong; Dang Quoc, Huu; Nguyen Thi Cam, Van; Nguyen-Duc, Anh
Bulletin of Electrical Engineering and Informatics Vol 14, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v14i6.10535

Abstract

This paper examines the interaction between robotic process automation (RPA), business process management (BPM), and digital transformation (DT): three critical components in improving operational efficiency and driving business modernization. RPA automates repetitive tasks, reduces errors, accelerates processing, and optimizes resource use. When combined with artificial intelligence (AI) and machine learning (ML), it further enhances data analysis and decision-making. BPM focuses on analyzing, designing, and optimizing business processes to ensure organizational agility. DT provides a technological foundation for broader innovation in processes and structures. The paper contributes a comprehensive and updated perspective on how RPA, BPM, and DT interrelate: they not only function independently but also reinforce one another to create greater business value. It emphasizes that their integration is a strategic approach to improving performance, responsiveness, and continuous innovation. Importantly, the research is relevant to both Industry 4.0 and Industry 5.0. While Industry 4.0 (I4.0) prioritizes automation and data-driven systems, Industry 5.0 (I5.0) highlights human-technology collaboration for more adaptive and human-centric organizations. This study enriches theoretical insights and offers practical guidance for building effective and sustainable DT strategies.
Alzheimer's disease detection based on MR images using the quad convolutional layers CNN approach
Authors: Pamungkas, Yuri; Syaifudin, Achmad; Yunanto, Wawan; Hashim, Uda
Bulletin of Electrical Engineering and Informatics Vol 14, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v14i6.10304

Abstract

Alzheimer’s disease is a progressive neurodegenerative disorder requiring early and accurate detection for effective intervention. Deep learning (DL) techniques, particularly convolutional neural networks (CNNs), have shown promise in medical image classification. However, conventional CNN models often suffer from high computational complexity and inefficiency in handling imbalanced datasets. This study proposes a quad convolutional layers CNN (QCL-CNN) for Alzheimer’s disease detection using magnetic resonance imaging (MRI) scans from the open access series of imaging studies (OASIS) dataset, which covers four dementia stages: non-dementia, very mild dementia, mild dementia, and moderate dementia. The QCL-CNN model employs four sequential convolutional layers for enhanced multi-level feature extraction, ensuring efficient classification while minimizing computational overhead. The experimental results demonstrate that QCL-CNN outperforms traditional CNN architectures, achieving an accuracy of 99.90%, recall of 99.89%, specificity of 99.93%, and an F1-score of 99.52%. The model surpasses VGG19, Xception, ResNet50, and DenseNet201 while maintaining a significantly lower parameter count (4.2M), making it computationally efficient. These findings confirm that network optimization is more crucial than model depth, ensuring robust performance even with fewer layers. Future research should explore multi-modal imaging, class balancing techniques, and real-world clinical validation to further improve the model’s diagnostic capabilities. The QCL-CNN model offers a promising artificial intelligence (AI)-powered approach for early Alzheimer’s detection, enabling precise and efficient medical diagnosis.
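The claim that four convolutional layers keep the parameter count low is easy to audit: a Conv2D layer with a k×k kernel contributes (k·k·in_channels + 1)·out_channels parameters. The channel plan below is a hypothetical illustration, not the paper's exact architecture, and counts only the convolutional backbone (no dense head):

```python
def conv2d_params(in_ch, out_ch, k):
    """Conv2D parameters: k*k*in_ch weights per filter, plus one bias each."""
    return (k * k * in_ch + 1) * out_ch

# Hypothetical channel plan for a four-convolutional-layer backbone
# (grayscale MRI input -> 32 -> 64 -> 128 -> 256 channels, 3x3 kernels)
plan = [(1, 32), (32, 64), (64, 128), (128, 256)]
total = sum(conv2d_params(i, o, k=3) for i, o in plan)
```

Even this widest-layer-256 plan stays under 0.4M convolutional parameters, which illustrates why a shallow, well-chosen stack can undercut VGG19-class models by an order of magnitude.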
Advances in artificial intelligence-driven 3D model generation: a review of GAN and VAE methodologies
Authors: Adilkhan, Shyngys; Alimanova, Madina; Shi, Lei; Soltiyeva, Aiganym
Bulletin of Electrical Engineering and Informatics Vol 14, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v14i6.10755

Abstract

This paper offers a comprehensive review of current developments in artificial intelligence (AI)-based 3D model creation, with an emphasis on techniques utilizing variational autoencoders (VAEs) and generative adversarial networks (GANs). The six main techniques studied are 3DGAN, paired 3D model generation with GAN, conditional GAN, FaceVAE, voxel-based 3D object reconstruction, and 3D-VAE-SDFRaGAN. Each method is discussed, highlighting its architectural framework, data representation, and specific approach to generating 3D models. First, the paper introduces basic terms and classical 3D modeling techniques and provides a comparative analysis of them based on their workflow, purpose, and field of application. Subsequent sections review methods for generating 3D models based on GANs and VAEs, describing their methodology, experimentation techniques, results, and comparisons with other methods. The review outlines the strengths and limitations of each approach and their applications in object reconstruction, shape generation, and maintaining model consistency. It concludes by emphasizing how AI-driven methods can advance 3D modeling, underscoring the need for further research to enhance quality, control, and training reliability. The findings show AI’s significant impact on automating complex modeling tasks and enabling new creative opportunities in 3D content development.
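A core mechanism shared by the VAE-based methods reviewed is the reparameterization trick: sampling is rewritten as z = mu + sigma · eps with eps drawn from N(0, 1), so gradients can flow through the encoder's mean and variance. A minimal, dependency-free sketch of that one step (not any specific reviewed architecture):

```python
import math
import random

def reparameterize(mu, log_var, rng=random):
    """z = mu + sigma * eps with eps ~ N(0, 1).
    Encoders typically output log-variance, so sigma = exp(0.5 * log_var)."""
    eps = rng.gauss(0.0, 1.0)
    return mu + math.exp(0.5 * log_var) * eps

random.seed(0)
z = reparameterize(mu=0.0, log_var=0.0)  # one latent sample, sigma = 1
```

The same expression is what makes the latent space smooth enough for shape interpolation, one of the 3D-generation uses the review highlights.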
IoT-based real-time monitoring of agricultural wastewater using Raspberry Pi, Node-RED, and Grafana
Authors: Faizu, Nur’in Batrisyia Mohd; Roslizar, Ahmad Muzammil; Zaini, Muhammad Aizat Zaim; Idris, Fakrulradzi; Berahim, Zulkarami; Latiff, Anas Abdul
Bulletin of Electrical Engineering and Informatics Vol 14, No 6: December 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v14i6.10170

Abstract

This study introduces an internet of things-based agricultural wastewater monitoring system (IoT-AWMS) designed to enhance water management through real-time monitoring and advanced sensor integration. The system employs a Raspberry Pi for centralized control, Node-RED for automation, InfluxDB for data storage, and Grafana for visualization. A key innovation is the integration of an alternative sensing approach for estimating electrical conductivity (EC), complementing conventional sensors for total dissolved solids (TDS), water temperature (DS18B20), and ambient conditions (DHT11). The system achieves over 85% accuracy in estimating EC across diverse water samples, including drinking water, agricultural runoff, and fertilizer-enriched solutions. Compared with conventional approaches, IoT-AWMS demonstrates superior accuracy, scalability, and cost-effectiveness. Its modular design supports applications in nutrient runoff detection, contamination monitoring, and optimized water resource utilization, with broader potential in precision farming and environmental monitoring. This work contributes a robust, adaptable IoT framework for sustainable agricultural water management.
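EC is commonly estimated from a TDS reading via the linear relation TDS ≈ ke × EC, where the conversion factor ke (roughly 0.5 to 0.7) depends on the ion mix of the water. The abstract does not state the paper's exact estimation method, so the sketch below assumes this common TDS-based relation with an illustrative ke:

```python
def estimate_ec_us_cm(tds_ppm, ke=0.64):
    """EC (µS/cm) estimate from a TDS reading (ppm) via TDS ~= ke * EC.
    ke = 0.64 is a commonly used default; field calibration per water
    type (runoff vs. fertilizer solution) would adjust it."""
    return tds_ppm / ke

# A 320 ppm TDS reading maps to a 500 µS/cm EC estimate at ke = 0.64
ec = estimate_ec_us_cm(tds_ppm=320.0)
```

Because ke varies with ion composition, calibrating it per sample class is one plausible way a system like this reaches the reported >85% estimation accuracy across dissimilar water sources.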
