Contact Name: -
Contact Email: -
Phone: -
Journal Mail Official: -
Editorial Address: -
Location: Kota Yogyakarta, Daerah Istimewa Yogyakarta, INDONESIA
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika (JITEKI)
ISSN: 2338-3070     EISSN: 2338-3062     DOI: -
JITEKI (Jurnal Ilmiah Teknik Elektro Komputer dan Informatika) is a peer-reviewed scientific journal published by Universitas Ahmad Dahlan (UAD) in collaboration with the Institute of Advanced Engineering and Science (IAES). The journal's scope covers 1) Control and Automation, 2) Electrical Power, 3) Signal Processing, and 4) Computing and Informatics, either in general or on specific issues.
Arjuna Subject : -
Articles: 505 Documents
Application of the Machine Learning Method for Predicting International Tourists in West Java Indonesia Using the Average-Based Fuzzy Time Series Model Sri Nurhayati; Syahrul Syahrul; Riani Lubis; Mochamad Fajar Wicaksono
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol 9, No 1 (2023): March
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v9i1.25475

Abstract

The purpose of this study is to assess whether an average-based fuzzy time series model is appropriate for predicting the number of foreign tourists visiting West Java, Indonesia. Machine learning is a branch of artificial intelligence in which machines are designed to learn on their own without explicit human direction, and one of its common uses in data science is prediction, such as forecasting tourist arrivals. Tourism is an economic sector with a direct impact on the community's economy. According to data from Badan Pusat Statistik (BPS), the number of tourists visiting West Java, Indonesia, fluctuates, rising and falling from month to month and year to year, and these fluctuations pose a problem for tourism stakeholders. An appropriate model is therefore needed to predict the number of tourists visiting West Java. The contribution of this research is to help related parties forecast foreign tourist arrivals so that the forecasts can inform policies on tourism preparation and planning in West Java, Indonesia. The research follows a case study approach based on data on foreign tourists visiting West Java from 2017 to 2020. The prediction process uses the fuzzy time series method with an average-based algorithm to determine the interval length, since an effective interval length yields predictions with higher accuracy. The prediction test results give a Mean Absolute Percentage Error (MAPE) of 14.71%, indicating that the average-based fuzzy time series model performs well for this prediction task.
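
To illustrate the average-based interval step described in this abstract, the following is a minimal Python sketch (not the authors' code) of one common way to derive an average-based interval length and partition the universe of discourse; the arrival figures and the one-significant-digit rounding rule are assumptions for demonstration only.

```python
import numpy as np

def average_based_interval_length(series):
    """Half the mean absolute first difference, rounded to one significant digit."""
    diffs = np.abs(np.diff(series))
    half_mean = diffs.mean() / 2.0
    exponent = int(np.floor(np.log10(half_mean)))
    return round(half_mean, -exponent)

def build_intervals(series, length, margin=0.0):
    """Partition [min - margin, max + margin] into equal intervals of the given length."""
    lo, hi = series.min() - margin, series.max() + margin
    edges = np.arange(lo, hi + length, length)
    return list(zip(edges[:-1], edges[1:]))

# hypothetical monthly arrival counts (not the paper's data)
arrivals = np.array([12000, 13500, 12800, 15000, 14200, 16100, 15800], dtype=float)
length = average_based_interval_length(arrivals)
intervals = build_intervals(arrivals, length, margin=100)
print(length, len(intervals))
```

Fuzzy sets are then defined on these intervals and fuzzy logical relationships between consecutive observations drive the forecast.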
Deep Learning Approach For Sign Language Recognition Bambang Krismono Triwijoyo; Lalu Yuda Rahmani Karnaen; Ahmat Adil
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol 9, No 1 (2023): March
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v9i1.25051

Abstract

Sign language is a method of communication that uses hand movements among people with hearing loss. Problems arise in communication between hearing people and people with hearing disorders because not everyone understands sign language, so a model for sign language recognition is needed. This study aims to build a hand sign language recognition model using a deep learning approach. The model used is a Convolutional Neural Network (CNN). The model is tested on the ASL alphabet dataset consisting of 27 categories, each containing 3000 images, for a total of 87,000 hand-sign images of 200 x 200 pixels. The input images are first resized to 32 x 32 pixels, and the dataset is then split into 75% for training and 25% for validation. The test results indicate that the proposed model performs well, with an accuracy of 99%. The experiments also show that preprocessing the images with background correction can improve model performance.
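
As a rough illustration of the kind of CNN described (32 x 32 inputs, 27 output classes, 75/25 split), here is a minimal Keras sketch; the layer sizes, dropout rate, and training settings are illustrative assumptions, not the authors' architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 27           # categories stated in the abstract
INPUT_SHAPE = (32, 32, 3)  # images resized to 32 x 32

def build_cnn():
    model = models.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=INPUT_SHAPE),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.3),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_cnn()
# model.fit(x_train, y_train, validation_split=0.25, epochs=10)  # 75/25 split
```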
LSTM Network Hyperparameter Optimization for Stock Price Prediction Using the Optuna Framework Edi Ismanto; Vitriani Vitriani
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol 9, No 1 (2023): March
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v9i1.24944

Abstract

In recent years, the application of deep learning-based financial modeling tools has grown in popularity. Research on stock forecasting is crucial to understanding how a nation's economy is performing, and the study of intrinsic value and stock market forecasting has significant theoretical implications and a broad range of potential applications. One of the trickiest challenges in deep learning and machine learning projects is hyperparameter search. In this paper, we evaluate and analyze the optimal hyperparameter search for a long short-term memory (LSTM) model developed to forecast stock prices, using the Optuna framework. We examined a number of hyperparameters across several LSTM architectures, including the optimizer (SGD, Adagrad, RMSprop, Nadam, Adamax, and Adam), the number of LSTM hidden units, the dropout rate, the number of epochs, the batch size, and the learning rate. The experiments indicated that of the four LSTM models tested (model 1 single LSTM, model 2 single LSTM, model 1 stacked LSTM, and model 2 stacked LSTM), model 1 single LSTM was the most effective: it achieved the lowest loss among the models and the lowest root mean square error (RMSE), 7.21. Compared with manual hyperparameter tuning, automatic hyperparameter tuning produced lower losses and better results.
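
The Optuna search the abstract describes can be sketched as follows; this is a minimal example with an assumed search space and synthetic data, not the paper's exact models or ranges.

```python
import numpy as np
import optuna
import tensorflow as tf
from tensorflow.keras import layers, models

WINDOW = 30
# synthetic windows of a univariate price series (placeholder for real data)
x_train = np.random.rand(200, WINDOW, 1); y_train = np.random.rand(200, 1)
x_val = np.random.rand(50, WINDOW, 1);   y_val = np.random.rand(50, 1)

def objective(trial):
    # search space loosely mirroring the abstract; ranges are illustrative
    units = trial.suggest_int("lstm_units", 16, 128)
    dropout = trial.suggest_float("dropout", 0.0, 0.5)
    lr = trial.suggest_float("learning_rate", 1e-4, 1e-2, log=True)
    opt_name = trial.suggest_categorical(
        "optimizer", ["SGD", "Adagrad", "RMSprop", "Nadam", "Adamax", "Adam"])
    batch = trial.suggest_categorical("batch_size", [16, 32, 64])

    model = models.Sequential([
        layers.LSTM(units, input_shape=(WINDOW, 1)),
        layers.Dropout(dropout),
        layers.Dense(1),
    ])
    optimizer = getattr(tf.keras.optimizers, opt_name)(learning_rate=lr)
    model.compile(optimizer=optimizer, loss="mse")
    hist = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                     epochs=20, batch_size=batch, verbose=0)
    return min(hist.history["val_loss"])   # Optuna minimizes this value

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```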
Optimal Scheduling of Electric Vehicle Charging: A Study Case of Bantul Feeder 05 Distribution System Candra Febri Nugraha; Jimmy Trio Putra; Lukman Subekti; Suhono Suhono
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol 9, No 1 (2023): March
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v9i1.25287

Abstract

The growing popularity of electric vehicles (EVs) has the potential to complicate distribution network operations. When a large number of electric vehicles charge at the same time, the system load can increase significantly. This problem is exacerbated when charging is done concurrently in the evening, which coincides with peak load times. To prevent an increase in peak load and stress on distribution operations, EV charging must be coordinated to achieve financial and technical objectives. This study evaluates the impact of a financially driven EV charging scheduling algorithm. Its contribution is that the scheduling algorithm considers EV usage behavior based on real data as well as the state-of-charge (SoC) target set by EV owners. The proposed algorithm minimizes the total charging cost incurred by EV owners using mixed-integer linear programming (MILP). The impact of coordinated charging scheduling on the system demand profile and on real distribution system operation metrics is also evaluated. Simulation results on the Bantul Feeder 05 system demonstrate that coordinated charging can reduce charging costs by 57.3%. Furthermore, the peak load is reduced by 5.2% and the load factor improved by 3.5% compared to uncoordinated scheduling. Based on the power flow simulation, the proposed algorithm can reduce distribution transformer loading by 0.5% and improve voltage quality by 0.1% during peak load. This demonstrates that coordinated EV charging benefits not only EV users but also the distribution system operator by preventing system operation issues.
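
A toy sketch of the optimization structure (cost-minimizing binary charging decisions under availability windows, energy targets, and a site power cap) is given below using PuLP; the tariff, charger power, and fleet data are assumptions, and this is not the paper's MILP formulation.

```python
import pulp

T = 24                       # hourly slots
price = [0.10]*7 + [0.15]*10 + [0.20]*4 + [0.12]*3   # assumed tariff ($/kWh)
evs = {                      # vehicle: (arrival slot, departure slot, energy needed in kWh)
    "EV1": (18, 24, 12.0),
    "EV2": (19, 24, 8.0),
}
P = 3.3                      # charger power (kW), assumed
SITE_LIMIT = 7.0             # aggregate power cap (kW), assumed

prob = pulp.LpProblem("ev_charging", pulp.LpMinimize)
x = {(v, t): pulp.LpVariable(f"x_{v}_{t}", cat="Binary")
     for v in evs for t in range(T)}

# objective: total charging cost over all vehicles and slots
prob += pulp.lpSum(price[t] * P * x[v, t] for v in evs for t in range(T))

for v, (arr, dep, need) in evs.items():
    # meet the owner's energy/SoC target within the plug-in window
    prob += pulp.lpSum(P * x[v, t] for t in range(arr, dep)) >= need
    # no charging outside the availability window
    for t in list(range(0, arr)) + list(range(dep, T)):
        prob += x[v, t] == 0

# aggregate power cap in every slot
for t in range(T):
    prob += pulp.lpSum(P * x[v, t] for v in evs) <= SITE_LIMIT

prob.solve(pulp.PULP_CBC_CMD(msg=False))
schedule = {v: [t for t in range(T) if x[v, t].value() == 1] for v in evs}
print(pulp.value(prob.objective), schedule)
```

With a time-of-use tariff like this, the solver naturally shifts charging into the cheaper late-night slots, which is the mechanism behind the peak-load and cost reductions reported in the abstract.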
Development of Novel Machine Learning to Optimize the Solubility of Azathioprine as Anticancer Drug in Supercritical Carbon Dioxide Arya Adhyaksa Waskita; Stevry Yushady CH Bissa; Ika Atman Satya; Ratna Surya Alwi
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol 9, No 1 (2023): March
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v9i1.25608

Abstract

Supercritical carbon dioxide (Sc-CO2) has been proposed as an appropriate solvent for dissolving pharmaceuticals in particle-size engineering. The use of supercritical fluids (SCFs) in various industrial applications, such as extraction, chromatography, and particle engineering, has attracted considerable interest. Recognizing the solubility behavior of various drugs is an essential step in the pharmaceutical industry's pursuit of the most effective supercritical approach. In this work, four models were used to predict the solubility of azathioprine in supercritical carbon dioxide: Ridge regression (RR), Huber regression (HR), Random forest (RF), and Gaussian process regression (GPR). The R-squared scores of the four models are 0.974, 0.6518, 0.966, and 1.0 for the RR, HR, RF, and GPR models, respectively. The RMSE values are 2.843 × 10^-13, 7.036 × 10^-12, 5.673 × 10^-13, and 1.054 × 10^-30 for the RR, HR, RF, and GPR models, respectively, and the MAE values are 1.205 × 10^-6, 2.151 × 10^-6, 5.997 × 10^-7, and 9.419 × 10^-16, respectively. It was found that the Ridge regression (RR), Random forest (RF), and Gaussian process regression (GPR) models can be used to predict a compound's solubility in supercritical carbon dioxide.
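
A minimal scikit-learn sketch comparing the four regressor families named in the abstract is shown below; the synthetic temperature/pressure data and default hyperparameters are placeholders, not the study's dataset or tuning.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge, HuberRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.metrics import r2_score, mean_squared_error, mean_absolute_error

rng = np.random.default_rng(0)
X = rng.uniform([308, 12], [338, 40], size=(60, 2))    # T (K), P (MPa), assumed ranges
y = 1e-6 * (0.02 * X[:, 1] + 0.01 * (X[:, 0] - 308)) + rng.normal(0, 1e-8, 60)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = {
    "RR": Ridge(alpha=1.0),
    "HR": HuberRegressor(),
    "RF": RandomForestRegressor(n_estimators=200, random_state=0),
    "GPR": GaussianProcessRegressor(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(name,
          "R2=%.4f" % r2_score(y_te, pred),
          "RMSE=%.3e" % np.sqrt(mean_squared_error(y_te, pred)),
          "MAE=%.3e" % mean_absolute_error(y_te, pred))
```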
Visible Light Communication System Design Using Raspberry Pi4B, LED Array, and MQTT Synchronization Protocol Teuku Alif Rafi Akbar; Apriono Catur
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol 9, No 1 (2023): March
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v9i1.25431

Abstract

Visible light communication (VLC) has emerged as a solution to overcome the limitations of RF-based communication systems. Although much research has been done on VLC, there is still considerable room for improvement, especially in the design of the VLC system itself. This study discusses a simple visible light communication system design that transmits temperature and humidity information. The system uses a 2×2 LED array configuration to transmit data and a photodiode to receive the optical signal, with a Raspberry Pi as the signal processor. The experiments varied the LED color, the synchronization method, and the transmission data rate, with the BER value as the main parameter to be analyzed. The research contribution is a simple visible light communication design that transmits and receives room temperature and humidity information using a Raspberry Pi and a DHT-11 sensor, while implementing two synchronization methods to maximize synchronization in transmission and thus minimize the BER at higher bit rates. The LED used is blue, with an average voltage of 0.0423 V for bit '1' and 0.00448 V for bit '0'. The achievable throughput ranges from 1 bps to 10 kbps, with a BER of 0.5 as the threshold. The synchronization methods decrease the average BER by 0.0945 with transmission calibration synchronization and by 0.1221 with the MQTT communication protocol. In conclusion, the design is limited by the components used at the transmitting and receiving ends, and the BER values remain relatively high. Further system development could implement forward error correction to minimize transmission errors and collaborate with vendors in the same research field to obtain newer components for the VLC system design.
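
One way the MQTT-based synchronization could look in code is sketched below with paho-mqtt; the broker address, topic, and frame format are assumptions, and the LED/photodiode I/O is left as placeholders.

```python
import json
import time
import paho.mqtt.client as mqtt

BROKER = "localhost"     # assumed broker address
TOPIC = "vlc/sync"       # assumed synchronization topic

def transmitter(bits, bit_rate):
    client = mqtt.Client()   # paho-mqtt 1.x constructor; v2 also needs a CallbackAPIVersion
    client.connect(BROKER)
    # announce the upcoming frame so the receiver can start sampling in time
    client.publish(TOPIC, json.dumps({"bit_rate": bit_rate, "length": len(bits)}))
    client.disconnect()
    bit_period = 1.0 / bit_rate
    for bit in bits:
        led_on = (bit == 1)  # placeholder for GPIO output driving the LED array
        time.sleep(bit_period)

def receiver():
    def on_message(client, userdata, msg):
        frame = json.loads(msg.payload)
        # placeholder: sample the photodiode ADC at frame["bit_rate"] for
        # frame["length"] bit periods and threshold the samples into bits
        print("sync received:", frame)

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER)
    client.subscribe(TOPIC)
    client.loop_forever()
```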
Performance of Lexical Resource and Manual Labeling on Long Short-Term Memory Model for Text Classification Mardhiya Hayaty; Aqsal Harris Pratama
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol 9, No 1 (2023): March
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v9i1.25375

Abstract

Data labeling is a critical aspect of sentiment analysis that requires assigning labels to text data to reflect the sentiment expressed. Traditional data labeling involves manual annotation by human annotators, which can be both time-consuming and costly when handling large volumes of text data. The labeling process can be automated with lexicon resources, which are pre-labeled dictionaries or databases of words and phrases with sentiment information. The contribution of this study is an evaluation of the performance of lexicon resources in document labeling, aiming to provide insight into the accuracy of lexicon-based labeling and inform future research. A publicly available dataset labeled as negative, neutral, and positive was used. To generate new labels, the lexicon resources VADER, AFINN, SentiWordNet, and Liu & Hu were employed. An LSTM model was then trained on the newly generated labels, and its performance was evaluated on data that had been labeled manually. The study found that manual labeling led to the highest accuracy: 0.79, 0.80, and 0.80 for training, validation, and testing, respectively. This is likely because the test data labels were also created manually, enabling the model to learn and capture balanced patterns. Models trained on the VADER and AFINN lexicon labels had lower accuracies of 0.54 and 0.56, SentiWordNet gave the lowest accuracy among the lexicons at 0.49, and the Liu & Hu model had the lowest testing score of 0.26. Our results indicate that lexicon resources alone are not sufficient for sentiment data labeling, as they depend on pre-defined dictionaries and may not fully capture the context of words within a sentence; manual labeling is therefore necessary to complement lexicon-based methods and achieve better results.
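
As an example of lexicon-based auto-labeling with one of the resources named above (VADER via NLTK), the following sketch maps compound scores to negative/neutral/positive labels; the ±0.05 thresholds are the commonly used defaults, assumed here for illustration.

```python
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

def vader_label(text):
    # compound score in [-1, 1]; thresholded into three sentiment classes
    compound = sia.polarity_scores(text)["compound"]
    if compound >= 0.05:
        return "positive"
    if compound <= -0.05:
        return "negative"
    return "neutral"

texts = ["The service was excellent", "Delivery was late and the item broke"]
labels = [vader_label(t) for t in texts]   # these labels would then train the LSTM
print(list(zip(texts, labels)))
```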
Sentence-Level Granularity Oriented Sentiment Analysis of Social Media Using Long Short-Term Memory (LSTM) and IndoBERTweet Method Nisa Maulia Azahra; Erwin Budi Setiawan
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol 9, No 1 (2023): March
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v9i1.25765

Abstract

The dissemination of information through social media has become rampant, especially on the Twitter platform. This information invites various opinions from users as their points of view on the topic being discussed. These opinions can be collected and processed with sentiment analysis to assess public tendencies and provide a fundamental source for decision-making. However, conventional procedures are not optimal because they cannot recognize the word meaning in opinion sentences. With sentence-level granularity-oriented sentiment analysis, the system can explore the "sense of the word" in each sentence by assigning it a granularity weight that the system considers when recognizing word meaning. To construct this procedure, this research uses LSTM as the classification model, combined with TF-IDF and IndoBERTweet for feature extraction. This research also applies the Word2Vec feature expansion method, built from Twitter and IndoNews corpora, to produce a word-similarity corpus and find effective word semantics. To fully comply with the granularity requirements, both manual labeling and system labeling were performed, taking weight granularity into account, for model performance comparison. Combining these methods, this research achieved 88.97% accuracy on the manually labeled data and 97.80% on the system-labeled data. The experimental results show that the granularity-oriented sentiment analysis model can outperform a conventional sentiment analysis system, as seen from the high performance of the resulting system.
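
A minimal sketch of using a pretrained IndoBERTweet encoder to turn tweets into feature vectors for a downstream classifier is shown below; the HuggingFace checkpoint name and mean pooling are assumptions, not necessarily the authors' setup.

```python
import torch
from transformers import AutoTokenizer, AutoModel

CHECKPOINT = "indolem/indobertweet-base-uncased"   # assumed checkpoint name
tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
encoder = AutoModel.from_pretrained(CHECKPOINT)

def embed(texts):
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state    # (batch, tokens, 768)
    mask = batch["attention_mask"].unsqueeze(-1)
    # mean-pool over real tokens to get one 768-d vector per tweet
    return (hidden * mask).sum(1) / mask.sum(1)

features = embed(["layanan publik makin membaik", "macetnya parah banget hari ini"])
print(features.shape)   # torch.Size([2, 768]); feed these into the LSTM classifier
```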
Motorcycling-Net: A Segmentation Approach for Detecting Motorcycling Near Misses Rotimi-Williams Bello; Chinedu Uchechukwu Oluigbo; Oluwatomilola Motunrayo Moradeyo
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol 9, No 1 (2023): March
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v9i1.25614

Abstract

This article presents near misses as corrective and preventive measures for safety events. It focuses on the risk factors of commercial motorcycling near misses, which we address by proposing a near-miss detection framework based on a hybrid of YOLOv4-DeepSort and VGG16-BiLSTM models. We employed the YOLOv4-DeepSort model for detection and tracking, storing the tracked images and identity information. The image sequences were then fed into the VGG16-BiLSTM model for image feature extraction and near-miss recognition, respectively. Video streams of near-miss datasets containing motorcycling in different scenes were collected for the experiment. We evaluate the proposed methods by testing 444 sequential video frames of motorcycling near misses in an urban environment. The detection models achieved 96% accuracy for motorcycles, 89% for cars, and 81% for persons, with lower false-positive rates on the test datasets, while the tracking model achieved a MOTA of 34.3 and a MOTP of 0.77 on the test set. The results indicate that automatic detection of motorcycling near misses in urban environments is practical and could provide a useful technical reference for analyzing the risk factors of motorcycling near misses. The research contributions are: (1) a hybrid of the YOLOv4 and DeepSort models to enhance object detection and tracking in a complex environment, and (2) a hybrid of the VGG16 and BiLSTM models to optimize image feature extraction and near-miss recognition, respectively, for overall system performance.
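
The VGG16-to-BiLSTM stage can be sketched in Keras roughly as follows; the clip length, head sizes, and binary output are illustrative assumptions, not the authors' configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG16

SEQ_LEN = 16                 # frames per clip (assumed)
FRAME_SHAPE = (224, 224, 3)

backbone = VGG16(weights="imagenet", include_top=False, pooling="avg",
                 input_shape=FRAME_SHAPE)
backbone.trainable = False   # use VGG16 purely as a per-frame feature extractor

clip = layers.Input(shape=(SEQ_LEN, *FRAME_SHAPE))
feats = layers.TimeDistributed(backbone)(clip)     # (batch, SEQ_LEN, 512)
x = layers.Bidirectional(layers.LSTM(128))(feats)  # temporal model over the clip
x = layers.Dropout(0.3)(x)
out = layers.Dense(1, activation="sigmoid")(x)     # near miss vs. normal riding

model = models.Model(clip, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```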
The Combination of C4.5 with Particle Swarm Optimization in Classification of Class for Mental Retardation Students Sausan Hidayah Nova; Budi Warsito; Aris Puji Widodo
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol 9, No 1 (2023): March
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v9i1.25520

Abstract

Mental retardation, or intellectual disability, is a condition in which children experience mental disorders, and several characteristics indicate that a child has mental retardation. When such children enter a school, teachers are expected to determine the right class for students with mental retardation according to their category. Data mining is the process of finding patterns in selected data using artificial intelligence and machine learning, and the C4.5 algorithm is one of its classification techniques. C4.5 can be used to build decision trees and classify data with numeric, continuous, and categorical attributes, but it struggles with large amounts of data and cannot rank every alternative. PSO is an optimization algorithm that can be applied to feature selection to improve classification performance. Therefore, this study proposes an approach that overcomes the weaknesses of C4.5 by combining it with PSO. The study aims to classify the class of new students with mental retardation, using C4.5 for classification and PSO for feature selection to determine which attributes affect the level of accuracy. The contribution of this research is to make it easier for the school to place new students with mental retardation in classes that match their needs. The classification process uses the combination of C4.5 and PSO, with 10-fold cross-validation for model validation and a confusion matrix for evaluation. The accuracy of C4.5 before applying PSO is 91%, while the accuracy of C4.5 with PSO is 93%. Of the 20 attributes, 6 affect the level of accuracy. This study shows that PSO can be used for feature selection and increases the accuracy of C4.5 by 2%.
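
A compact sketch of PSO-driven feature selection wrapped around a decision tree with 10-fold cross-validation is given below; scikit-learn has no C4.5 implementation, so DecisionTreeClassifier with the entropy criterion stands in for it, and the synthetic data, swarm size, and PSO constants are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=200, n_features=20, n_informative=6,
                           random_state=0)   # placeholder for the student data

def fitness(mask):
    """10-fold CV accuracy of an entropy decision tree on the selected features."""
    if not mask.any():
        return 0.0
    clf = DecisionTreeClassifier(criterion="entropy", random_state=0)
    return cross_val_score(clf, X[:, mask], y, cv=10).mean()

n_particles, n_feat, w, c1, c2 = 12, X.shape[1], 0.7, 1.5, 1.5
pos = rng.random((n_particles, n_feat))       # per-feature keep probabilities
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_fit = np.array([fitness(p > 0.5) for p in pos])
gbest = pbest[pbest_fit.argmax()].copy()

for _ in range(20):                           # PSO iterations
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 1)
    fits = np.array([fitness(p > 0.5) for p in pos])
    improved = fits > pbest_fit
    pbest[improved], pbest_fit[improved] = pos[improved], fits[improved]
    gbest = pbest[pbest_fit.argmax()].copy()

selected = gbest > 0.5
print("selected features:", np.flatnonzero(selected))
print("10-fold accuracy with selection: %.3f" % fitness(selected))
```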