Location: Kota Yogyakarta, Daerah Istimewa Yogyakarta, Indonesia
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika (JITEKI)
ISSN: 2338-3070     E-ISSN: 2338-3062     DOI: -
JITEKI (Jurnal Ilmiah Teknik Elektro Komputer dan Informatika) is a peer-reviewed scientific journal published by Universitas Ahmad Dahlan (UAD) in collaboration with the Institute of Advanced Engineering and Science (IAES). The journal's scope covers 1) Control and Automation, 2) Electrical Power, 3) Signal Processing, and 4) Computing and Informatics, both in general and on specific issues.
Articles: 505 documents
Techno-Economic Optimization and Sensitivity Analysis of a Hybrid Grid-Connected Microgrid System for Sustainable Energy Usman, Habib Muhammad; Sharma, Nirma Kumari; Joshi, Deepak Kumar; Kaushik, Aditya; Kumhar, Suraj; Saminu, Sani; Yero, Abdulbasid Bashir
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol. 10 No. 4 (2024): December
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v10i4.30221

Abstract

Chittorgarh, like many other villages in India, faces a dual challenge: unreliable electricity supply and heavy dependence on fossil fuels, which undermines economic development and environmental sustainability. Addressing this critical issue, this study explores the optimization of a hybrid grid-connected microgrid comprising wind turbines, solar photovoltaic (PV) systems, and grid integration, aimed at delivering reliable, sustainable, and cost-effective energy. To achieve this, real-world meteorological and energy-pricing data were analyzed, and HOMER software was employed for comprehensive system modeling. The proposed microgrid features a 165,024 kW wind turbine system and a 1,500 kW solar PV system, generating a combined annual energy production of 58,772,300 kWh. Wind energy dominates the energy mix, contributing 35,272,200 kWh/year at a capacity factor of 29%, while solar PV provides 23,500,100 kWh/year at a capacity factor of 22%. Both systems operate efficiently for 4,327 hours/year, supplying a primary AC load of 20,077,351 kWh/year and thereby ensuring reliable energy delivery. Economic analysis reveals that the system's total capital investment is $8.6 million, with replacement and operations and maintenance (O&M) costs amounting to $4.5 million and $3.5 million, respectively. The system demonstrates exceptional economic viability, achieving a Levelized Cost of Energy (LCOE) of $0.0413/kWh, a present worth of $16.6 million, and an annual worth of $1.99 million, delivering a 12% return on investment (ROI). Additionally, the microgrid operates as a net energy exporter, selling 46,979,478 kWh/year to the grid and generating a net annual profit of $53,748, with peak profitability recorded in May ($53,553) and June ($47,615).
Sensitivity analysis was conducted under various scenarios, including variations in solar irradiance, wind speed, fuel prices, energy production, and grid prices, to evaluate the robustness of the system's performance and economic metrics. The analysis highlights the resilience of the microgrid design, showcasing its adaptability to diverse operational conditions while maintaining economic and environmental viability. The findings provide compelling evidence for policymakers, investors, and energy stakeholders to adopt renewable energy systems that combine sustainability, reliability, and profitability. By leveraging these insights, similar energy-deficient regions can achieve significant strides toward energy independence and environmental preservation.
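The headline economics above hinge on the levelized cost of energy, which tools like HOMER compute by annualizing total lifecycle cost and dividing by annual energy production. A minimal sketch of that calculation follows, using the abstract's cost figures but an assumed 6% discount rate and 25-year project life (neither is stated in the abstract), so the result is illustrative rather than a reproduction of the reported $0.0413/kWh:

```python
def crf(i, n):
    """Capital recovery factor: converts a present-value cost to an equal annual payment
    over n years at discount rate i."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

def lcoe(capital, replacement, om, annual_energy_kwh, i=0.06, n=25):
    """Levelized cost of energy: annualized lifecycle cost per kWh produced."""
    total_present_cost = capital + replacement + om  # $16.6M, matching the quoted present worth
    return total_present_cost * crf(i, n) / annual_energy_kwh

# Abstract's figures: $8.6M capital, $4.5M replacement, $3.5M O&M, 58,772,300 kWh/year.
print(round(lcoe(8.6e6, 4.5e6, 3.5e6, 58_772_300), 4))
```

Different discount rates and lifetimes, as well as HOMER's treatment of grid-sales revenue, shift the result considerably, which is exactly why the sensitivity analysis matters.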
Optimizing 2.4GHz Wireless Networks in Shrimp Ponds with Particle Swarm Optimization Tahcfulloh, Syahfrizal; Maulianawati, Diana; Wiharyanto, Dhimas
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol. 10 No. 4 (2024): December
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v10i4.30236

Abstract

This paper focuses on enhancing wireless sensor networks (WSNs) for monitoring water quality in aquaculture, specifically shrimp ponds, by improving pathloss (PL) models. Radio wave propagation in such environments is challenging due to unpredictable signal attenuation caused by factors like distance, antenna height, terrain, vegetation, and weather conditions. Reliable PL modeling is essential for optimizing network performance. The research evaluates the performance of theoretical PL models, including ITU, Fitting-ITU (FITU), and Weissberger, by comparing their predictions with actual 2.4GHz radio frequency (RF) measurements. Statistical metrics such as root-mean-square error (RMSE) and the coefficient of determination (R²) were used to assess model accuracy. Initial results showed significant discrepancies, with an average RMSE of 28.7dB and an R² of only 5%. To address these issues, the study employed modification techniques (quadratic and cubic polynomial adjustments) and optimization methods, particularly particle swarm optimization (PSO). These approaches refined the theoretical models, aligning them more closely with real-world data. The optimized PSO model reduced the RMSE to 8.34dB and further to 1.89dB, while improving R² from 5% to 95.6%, demonstrating a near-perfect fit. This study highlights the critical role of PSO and similar techniques in bridging the gap between theoretical predictions and practical applications, ensuring more reliable WSN performance in aquaculture environments. The findings contribute to the development of robust, high-accuracy models tailored to the unique challenges of aquaculture settings.
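The core idea, calibrating a pathloss model against field measurements with particle swarm optimization minimizing RMSE, can be sketched on a generic log-distance model. The distances, noise level, and PSO hyperparameters below are illustrative stand-ins, not the paper's field data or its exact ITU/FITU/Weissberger model forms:

```python
import math
import random

random.seed(42)

# Synthetic "measured" 2.4 GHz pathloss samples (dB) at distances (m),
# generated from an assumed log-distance model plus Gaussian noise.
true_pl0, true_n = 40.0, 3.2
dists = [5, 10, 20, 40, 60, 80, 100]
measured = [true_pl0 + 10 * true_n * math.log10(d) + random.gauss(0, 1.0) for d in dists]

def rmse(params):
    """RMSE of the log-distance model PL(d) = PL0 + 10*n*log10(d) vs. measurements."""
    pl0, n = params
    return math.sqrt(sum((pl0 + 10 * n * math.log10(d) - m) ** 2
                         for d, m in zip(dists, measured)) / len(dists))

def pso(fitness, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer: particles track personal and global bests."""
    dim = len(bounds)
    pos = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [fitness(p) for p in pos]
    gbest = pbest[pbest_f.index(min(pbest_f))][:]
    gbest_f = min(pbest_f)
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                # Clamp each coordinate to its search bounds.
                pos[i][d] = min(max(pos[i][d] + vel[i][d], bounds[d][0]), bounds[d][1])
            f = fitness(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

best, err = pso(rmse, bounds=[(20.0, 80.0), (1.0, 6.0)])
```

The swarm drives the fitted exponent toward the value that generated the data, with the residual RMSE bounded by the injected noise; the paper's polynomial adjustments play an analogous role in reshaping the theoretical curves.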
Innovative Multimodal Approaches in Image-Based Analysis of Adipose Tissue Cells Syah Putra, Heru; Mukhtar, Husneni; Alia, Fenty; Adipurna Syamsunarno, Mas Rizky Anggun
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol. 10 No. 4 (2024): December
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v10i4.30241

Abstract

This study addresses the limitations of traditional single-modality imaging techniques, such as optical microscopy, in effectively analyzing adipose tissue cells. A novel multimodal approach is introduced to overcome these challenges, combining MRI, CT, and microscopy to provide a more comprehensive and precise dataset. The system automates image processing, utilizing advanced segmentation methods to detect adipose cells more accurately while calculating cell dimensions and total image area. The results indicate that the maximum observed cell diameter reaches 10,466.64 µm, with a minimum diameter of 0.40 µm and an average diameter of 2,398.31 µm across the sample images. All measurements achieved 0% mean square error (MSE), highlighting the precision of the method. Comparative analysis reveals significant improvements in accuracy for both cell detection and quantification, outperforming conventional methods. Graphical representations further validate the reliability of this multimodal approach, demonstrating its capacity to capture intricate details of cellular structures. This innovative method holds considerable promise for enhancing medical diagnostics, particularly in metabolic disorders like obesity and diabetes, where adipose tissue plays a pivotal role. Integrating multiple imaging modalities offers a powerful tool for more informed clinical decisions, potentially leading to improved patient outcomes.
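The pipeline's core measurement, going from a segmented binary mask to per-cell diameters, can be sketched on a toy image. The thresholding, connected-component labeling, and equivalent-circle diameter below are generic illustrations under an assumed 1 µm/pixel scale, not the paper's multimodal fusion or its actual segmentation method:

```python
import math

# Toy grayscale "micrograph": two bright cell-like blobs on a dark background.
img = [
    [0, 0, 0, 0, 0, 0],
    [0, 9, 9, 0, 0, 0],
    [0, 9, 9, 0, 8, 0],
    [0, 0, 0, 0, 8, 0],
    [0, 0, 0, 0, 0, 0],
]

def segment(image, thresh=5):
    """Global threshold -> binary mask (1 = cell pixel)."""
    return [[1 if v > thresh else 0 for v in row] for row in image]

def component_areas(mask):
    """4-connected flood fill; returns the pixel area of each blob found."""
    seen = [[False] * len(mask[0]) for _ in mask]
    areas = []
    for y in range(len(mask)):
        for x in range(len(mask[0])):
            if mask[y][x] and not seen[y][x]:
                stack, area = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    area += 1
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if (0 <= ny < len(mask) and 0 <= nx < len(mask[0])
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return areas

def equivalent_diameter(area_px, um_per_px=1.0):
    """Diameter of the circle with the same area as the blob, in micrometres."""
    return 2 * math.sqrt(area_px / math.pi) * um_per_px

areas = component_areas(segment(img))
diameters = [round(equivalent_diameter(a), 2) for a in areas]
```

Total image area is just the mask sum; summing per-blob areas and diameters over a real image stack would yield the kind of minimum/maximum/average statistics the abstract reports.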
Automatic Software Refactoring to Enhance Quality: A Review Khaleel, Shahbaa I.; Mahmood, Rasha Ahmed
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol. 10 No. 4 (2024): December
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v10i4.30277

Abstract

Refactoring aims to enhance the internal structure of code and improve maintainability without changing its functionality or external behavior. As technologies have evolved, automatic refactoring has become necessary to manage complexity and reduce technical debt. This review presents machine learning and deep learning techniques that identify refactoring opportunities and implement them by analyzing software code and discovering "code smells", focusing on the role of tools such as RefactoringMiner and CodeBERT in improving prediction accuracy. The review covers various methodologies, including metrics-based, search-based, and machine learning approaches, and discusses their impact on software quality. It surveys experimental studies that address refactoring challenges such as reducing the risks of unnecessary modifications and determining the appropriate timing. Notable empirical studies include one by Bavota et al., in which Ref-Finder was used to detect 15,008 refactorings in open-source software systems, finding that 85% of them improved code quality and reduced bugs. Additionally, a study by Khatchadourian et al. demonstrated the effectiveness of OPTIMIZE STREAMS in improving code performance in large Java projects, increasing efficiency by 55% on average. The study makes two research contributions. The first is a comprehensive analysis of automated refactoring techniques using machine learning algorithms, aimed at improving maintainability and reducing complexity. The second is a set of recommendations to support developers in using modern tools and choosing the right timing for refactoring, which enhances code productivity.
The results showed that machine learning techniques can significantly enhance the efficiency of refactoring and thus support developers in making accurate decisions in enhancing maintainability.
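Of the methodologies surveyed, the metrics-based approach is the easiest to make concrete: compute simple code metrics and flag functions that cross a threshold as smell candidates for refactoring. Below is a minimal sketch using Python's own `ast` module; the thresholds and smell names are illustrative choices, not those of any tool the review covers:

```python
import ast

SRC = '''
def ok(a, b):
    return a + b

def too_many_params(a, b, c, d, e, f, g):
    return a
'''

def find_smells(source, max_params=5, max_body=20):
    """Flag functions whose parameter count or body length exceeds simple metric thresholds."""
    smells = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            n_params = len(node.args.args)
            # Span from the `def` line to the last statement's end line.
            body_len = (node.body[-1].end_lineno or node.lineno) - node.lineno
            if n_params > max_params:
                smells.append((node.name, "long parameter list"))
            if body_len > max_body:
                smells.append((node.name, "long method"))
    return smells

print(find_smells(SRC))
```

Learning-based detectors replace these hand-set thresholds with models trained on labeled refactoring histories, which is where tools like RefactoringMiner come in as data sources.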
Using Artificial Intelligence Algorithms to Recognize Osteoporosis: A Review Razzaq, Saud M. Abdul; Khaleel, Baydaa I.
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol. 10 No. 4 (2024): December
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v10i4.30352

Abstract

Osteoporosis is a silent disease that usually results from low bone mineral density (BMD), which increases bone porosity, thereby weakening the bones and raising the risk of fractures in those affected. Bone porosity is defined as an increase in the internal spaces of the bone structure, which reduces its density and strength and makes it more susceptible to fractures. Many parts of the skeleton are exposed to osteoporosis, such as the hip, thigh, jaw, knee, forearm, and spine. The incidence of osteoporosis increases in the elderly, and women are more affected than men; other factors include genetic predisposition and lifestyle. Artificial-intelligence-based tools have received wide attention in the medical field for diagnosing and classifying medical images, such as images of cancerous tumors, arthritis, and osteoporosis, as artificial intelligence provides accurate and rapid tools for the early detection of osteoporosis through the analysis of medical images, outperforming traditional methods, improving treatment opportunities, and reducing diagnostic costs. However, these techniques face challenges such as algorithmic bias and the need for diverse databases to ensure a balanced assessment of different cases. In addition, despite advances in computer technologies for early detection, osteoporosis remains a challenge in healthcare due to the absence of clear symptoms until fractures occur, the difficulty of early detection, the variability in disease progression, and the need for personalized treatment plans, all of which increase mortality. This paper reviews studies that have addressed osteoporosis in skeletal parts such as the knee, spine, hip, and teeth. It also reviews the techniques and methods used in diagnosis, with a focus on the role of artificial intelligence in improving the accuracy and speed of detection.
The review shows how deep learning algorithms, especially convolutional neural networks (CNNs), have been used effectively to classify osteoporosis and achieve high accuracy rates across different studies.
Comparative Analysis of Daily and Weekly Heavy Rain Prediction Using LSTM and Cloud Data Monita, Vivi; Sevirda Raniprima; Nanang Cahyadi
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol. 10 No. 4 (2024): December
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v10i4.30374

Abstract

Indonesia's distinct geographic and climatic features make weather forecasting there difficult. Located on the equator and between two vast oceans, the nation endures erratic weather patterns. Despite technical advances, the Meteorology, Climatology, and Geophysics Agency (BMKG) still requires support for precise forecasting. This research seeks to increase prediction accuracy using the Long Short-Term Memory (LSTM) algorithm, a deep learning technique well suited to time-series data. The research focuses on cloud data sets to improve the prediction of heavy rain. Earlier research has demonstrated the potential of LSTM in weather forecasting, focusing on identifying rain at particular intervals. This research compares daily and weekly heavy-rain prediction models implemented in Python. Results reveal that the weekly model outperforms the daily model, achieving 85% accuracy compared to 80%. These findings highlight the effectiveness of LSTM in addressing the limitations of existing methods, offering a foundation for more reliable weather forecasting tailored to Indonesia's conditions.
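The LSTM's suitability for time-series rainfall data comes from its gated cell state, which carries information across time steps. A dependency-free sketch of a single LSTM step is shown below to illustrate the gate equations; the zero-initialized weights and toy two-step input are placeholders, since the paper's actual architecture, features, and training setup are not given in the abstract:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, W, b):
    """One LSTM cell step. W maps the concatenated [h, x] vector to the four
    gate pre-activations (input, forget, output, candidate), stacked row-wise."""
    hx = h + x  # list concatenation of previous hidden state and current input
    n = len(h)
    gates = [sum(W[r][k] * hx[k] for k in range(len(hx))) + b[r]
             for r in range(4 * n)]
    i = [sigmoid(g) for g in gates[0:n]]          # input gate
    f = [sigmoid(g) for g in gates[n:2 * n]]      # forget gate
    o = [sigmoid(g) for g in gates[2 * n:3 * n]]  # output gate
    cand = [math.tanh(g) for g in gates[3 * n:4 * n]]  # candidate state
    c_new = [f[j] * c[j] + i[j] * cand[j] for j in range(n)]
    h_new = [o[j] * math.tanh(c_new[j]) for j in range(n)]
    return h_new, c_new

# Tiny demo: hidden size 2, input size 3, all-zero (untrained) parameters.
n, m = 2, 3
W = [[0.0] * (n + m) for _ in range(4 * n)]
b = [0.0] * (4 * n)
h, c = [0.0] * n, [0.0] * n
for x in ([0.1, 0.2, 0.3], [0.0, 0.1, 0.0]):  # a made-up 2-step feature sequence
    h, c = lstm_step(x, h, c, W, b)
print(h, c)
```

In practice a framework such as TensorFlow or PyTorch would train these weights on sequences of cloud and rainfall features; the daily vs. weekly comparison then reduces to how the sequences are windowed and labeled.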
Enhancing Drug-Target Affinity Prediction with Multi-scale Graph Attention Network and Attention Mechanism Yusuf, Muhammad Rizky Yusfian; Kurniawan, Isman
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol. 10 No. 4 (2024): December
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v10i4.30425

Abstract

Drug-target affinity (DTA) prediction is critical to drug discovery, yet traditional experimental methods are expensive and time-consuming. Existing computational approaches often struggle to represent the structural and sequential complexities of drugs and proteins, resulting in suboptimal prediction accuracy. This study proposes a novel framework integrating Graph Attention Networks (GAT) for drug molecular and motif graphs with Bidirectional Long Short-Term Memory (BiLSTM) for protein sequences. A two-sided multi-head attention mechanism dynamically models drug-protein interactions, enhancing robustness and accuracy. The research contribution is a robust computational model that improves the accuracy of DTA predictions and reduces dependency on traditional laboratory methods. The integration of structural and sequential features provides a more comprehensive representation of drug-protein interactions. The study uses Davis and KIBA, two widely used binding-affinity datasets, on which the proposed model achieves the lowest Mean Squared Error (MSE) of 0.3209 and 0.1864, the highest Concordance Index (CI) of 0.8646 and 0.8616, and the highest of 0.5046 and 0.6672, respectively, outperforming baseline models. In conclusion, this study establishes the proposed approach as a reliable method for DTA prediction, offering a faster and more accurate alternative for drug-discovery research. Limitations remain, such as high computational complexity, and the GAT model still uses static attention. Future work will focus on addressing these issues, testing the model across broader datasets, and adding further drug and target representations for richer feature extraction.
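The attention mechanism at the heart of such models can be illustrated with plain scaled dot-product attention, where drug-atom embeddings (queries) attend over protein-residue embeddings (keys/values); a two-sided multi-head version runs this in both directions with several learned projection heads. The tiny embeddings below are made up for illustration and are not from the paper:

```python
import math

def softmax(xs):
    m = max(xs)
    e = [math.exp(v - m) for v in xs]
    s = sum(e)
    return [v / s for v in e]

def cross_attention(Q, K, V):
    """Scaled dot-product attention: each query (drug token) forms a
    softmax-weighted sum over the values (protein residue features)."""
    dk = len(K[0])
    out, weights = [], []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(dk) for k in K]
        w = softmax(scores)
        weights.append(w)
        out.append([sum(w[j] * V[j][d] for j in range(len(V))) for d in range(len(V[0]))])
    return out, weights

drug = [[1.0, 0.0], [0.0, 1.0]]                 # 2 drug-atom embeddings (dim 2)
protein = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]  # 3 residue embeddings (dim 2)
out, w = cross_attention(drug, protein, protein)
```

Each attention row sums to one, so the output for every drug token is a convex combination of residue features, which is what lets the model weight interaction-relevant residues dynamically.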
RETRACTED: Analyzing challenging aspects of IPv6 over IPv4 Ashraf, Shahzad; Muhammad, Durr; Aslam, Zeeshan
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol. 6 No. 1 (2020): June
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v16i1.17105

Abstract

The publisher has retracted this article at the request of The International Arab Journal of Information Technology (IAJIT), following a report of misconduct and plagiarism. The document and its content have been removed from Jurnal Ilmiah Teknik Elektro Komputer dan Informatika, and reasonable effort should be made to remove all references to this article.
Malware Detection in Portable Document Format (PDF) Files with Byte Frequency Distribution (BFD) and Support Vector Machine (SVM) Saputra, Heru; Stiawan, Deris; Satria, Hadipurnawan
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol. 9 No. 4 (2023): December
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v9i4.27559

Abstract

Portable Document Format (PDF) files, like files in several other formats such as .docx, .hwp, and .jpg, are often used to conduct cyber attacks. According to VirusTotal, PDF ranked fourth among document files frequently used to spread malware in 2020. Malware detection is challenging partly because malware can stay hidden and adapt its own code, requiring new, smarter detection methods; outdated detection and classification methods are therefore becoming less effective. One such method for detecting PDF files infected with malware is a machine learning approach. In this research, the Support Vector Machine (SVM) algorithm was used to detect PDF malware because of its ability to process non-linear data; in some studies, SVM produces the best accuracy. In the process, each file was converted into byte format and then represented as a Byte Frequency Distribution (BFD). To reduce the dimensionality of the features, the Sequential Forward Selection (SFS) method was used. After feature selection, the next stage trains the SVM model. The performance of the proposed method was good, as evidenced by the accuracy obtained in this study, 99.11%, with an F1 score of 99.65%. The contributions of this research are a new approach to PDF malware detection using the BFD representation and the SVM algorithm, and the use of SFS for feature selection to improve model performance. The proposed system can thus serve as an alternative for detecting PDF malware.
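The BFD feature itself is straightforward: a normalized 256-bin histogram of byte values, giving every file a fixed-length vector regardless of its size. A sketch with a made-up PDF fragment follows; the downstream SFS feature selection and SVM training (e.g. via scikit-learn) are omitted:

```python
def byte_frequency_distribution(data: bytes):
    """Normalized 256-bin histogram of byte values: the BFD feature vector."""
    counts = [0] * 256
    for b in data:  # iterating bytes yields ints 0..255
        counts[b] += 1
    total = len(data) or 1  # guard against empty input
    return [c / total for c in counts]

# Illustrative PDF header fragment, not a real malware sample.
sample = b"%PDF-1.7\n1 0 obj\n<< /Type /Catalog >>\nendobj\n"
bfd = byte_frequency_distribution(sample)
```

Because the vector is always 256-dimensional and sums to one, files of any size become directly comparable, which is what makes the representation convenient as SVM input before SFS prunes uninformative bins.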
Optimizing Cleaning Path for Coal Dust Removal Using Dual Stage Tracking Method Kumalasari, Ira; Ar Rosyid, Harits; Sendari, Siti; Mokhtar, Norrima Binti; Setumin, Samsul
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol. 10 No. 4 (2024): December
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v10i4.29806

Abstract

Manual disaster mitigation at the Java Bali power plant, particularly related to fire risks from coal dust during electricity production, often requires halting operations, leading to significant revenue loss and power outages. This study aims to address this issue by proposing an automated solution to clean coal dust without interrupting production, utilizing a dual-stage tracking method for robot-assisted coal dust cleaning. The research contributes by developing a dual-stage A* algorithm that optimizes robot movements for cleaning tasks in power plant environments, outperforming single-stage BFS and single-stage A* algorithms. The research is divided into two phases: object detection and robot motion path selection. The dual-stage A* algorithm is compared against single-stage BFS and single-stage A* methods through a series of experiments evaluating their efficiency and effectiveness. The dual-stage A* method demonstrates superior performance in terms of path optimization, reducing cleaning time, and improving operational safety. Specifically, the dual-stage A* algorithm reduces energy consumption by 169 units and grid traversal by 84 units compared to single-stage methods, ensuring thorough dust removal while minimizing fire hazards. The dual-stage A* algorithm proves to be the optimal solution for coal dust cleaning in power plants, allowing for safe, continuous operation without the need for production halts. Future work should focus on addressing implementation costs and technical constraints to enhance real-world applicability.
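The single-stage A* baseline the paper compares against can be sketched directly; a dual-stage variant would run a search like this in two phases (for example, coarse region selection followed by fine path planning), which the abstract does not detail. Below is a minimal grid A* with a Manhattan heuristic, on a made-up obstacle map standing in for the coal-dust environment:

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid: 0 = free cell, 1 = obstacle.
    Returns a shortest path as a list of (row, col) cells, or None if unreachable."""
    rows, cols = len(grid), len(grid[0])

    def h(p):  # Manhattan distance: admissible for unit-cost 4-connected moves
        return abs(p[0] - goal[0]) + abs(p[1] - goal[1])

    open_set = [(h(start), start)]  # priority queue of (f-score, cell)
    came = {}
    g_cost = {start: 0}
    while open_set:
        _, cur = heapq.heappop(open_set)
        if cur == goal:
            path = [cur]
            while cur in came:  # walk parent links back to the start
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                ng = g_cost[cur] + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt] = ng
                    came[nxt] = cur
                    heapq.heappush(open_set, (ng + h(nxt), nxt))
    return None

grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],  # obstacle row forces a detour through column 2
    [0, 0, 0, 0],
]
path = astar(grid, (0, 0), (2, 0))
```

Here the detour around the obstacle row yields a 6-move path instead of the 2-move straight line, illustrating how path cost (and hence cleaning time and energy) depends on the planner's handling of blocked cells.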