Jurnal Nasional Teknik Elektro dan Teknologi Informasi
Topics cover the fields of (but are not limited to):
1. Information Technology: Software Engineering, Knowledge and Data Mining, Multimedia Technologies, Mobile Computing, Parallel/Distributed Computing, Artificial Intelligence, Computer Graphics, Virtual Reality
2. Power Systems: Power Generation, Power Distribution, Power Conversion, Protection Systems, Electrical Material
3. Signals, Systems, and Electronics: Digital Signal Processing Algorithms, Robotic Systems and Image Processing, Biomedical Instrumentation, Microelectronics, Instrumentation and Control
4. Communication Systems: Network Management and Protocols, Telecommunication Systems, Wireless Communications, Optoelectronics, Fuzzy Sensors and Networks
Articles (10 Documents)
Issue: Vol 11 No 1: Februari 2022
Resource Allocation for Multicarrier Low-Density Sequence Multiple Access
Linda Meylani;
Nur Andini;
Desti Madya Saputri;
Iswahyudi Hidayat
Jurnal Nasional Teknik Elektro dan Teknologi Informasi Vol 11 No 1: Februari 2022
Publisher : Departemen Teknik Elektro dan Teknologi Informasi, Fakultas Teknik, Universitas Gadjah Mada
DOI: 10.22146/jnteti.v11i1.2303
Multicarrier low-density sequence multiple access (MC-LDSMA) is a code-domain type of non-orthogonal multiple access (NOMA) for multicarrier systems. In this scheme, each user is assigned a spreading code that is non-orthogonal to the other users' codes. Each user may access only dv of the N available resources, and at most dc of the total J users may access the same resource. This non-orthogonality gives MC-LDSMA a higher overloading factor than orthogonal multicarrier systems, which makes it a candidate multiple access technique for underlay cognitive radio systems, where secondary users (SUs) are permitted to access resources owned by primary users (PUs). This paper proposes a resource allocation algorithm for MC-LDSMA in an underlay cognitive radio system. The proposed algorithm aims to increase the number of SUs accessing PU resources while maintaining the SUs' quality of service. The system consisted of I PUs and J SUs. The PUs were assumed to be mutually orthogonal, so they did not interfere with each other, while the J SUs simultaneously accessed PU-owned resources using the MC-LDSMA scheme. The proposed algorithm considered several factors, including the parameters dc and dv, the SU target signal-to-noise ratio (SNR), and the interference tolerance limit set by the PU. Performance was measured by the outage probability (OP), the throughput of the PUs and SUs, and the fraction of SUs allocated fewer than dv resources. The simulation results indicate that all performance metrics are affected by the number of resources accessed per user (dv), the SU target SNR, and the interference limit set by the PU.
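The dv/dc structure described in the abstract can be pictured as a J×N binary indicator matrix: each row (user) contains exactly dv ones, and each column (resource) contains at most dc ones. The sketch below is only an illustration of that constraint structure, not the paper's allocation algorithm; the greedy least-loaded assignment is an assumption for the example.

```python
# Sketch (not the paper's algorithm): build a low-density spreading
# indicator matrix F (J users x N resources) where every user occupies
# exactly dv resources and every resource serves at most dc users.
def lds_indicator(J, N, dv, dc):
    load = [0] * N                      # current number of users per resource
    F = []
    for _ in range(J):
        # pick the dv least-loaded resources that still have room
        free = sorted((r for r in range(N) if load[r] < dc),
                      key=lambda r: load[r])[:dv]
        if len(free) < dv:
            raise ValueError("infeasible: J*dv must not exceed N*dc")
        row = [1 if r in free else 0 for r in range(N)]
        for r in free:
            load[r] += 1
        F.append(row)
    return F

F = lds_indicator(J=6, N=4, dv=2, dc=3)       # overloading factor J/N = 150%
assert all(sum(row) == 2 for row in F)        # each user accesses dv resources
assert all(sum(col) <= 3 for col in zip(*F))  # each resource serves <= dc users
```

With J·dv = N·dc (here 6·2 = 4·3), every resource ends up fully loaded, which is the regular-LDS case.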
Convolutional Coding Performance in an Amplify-and-Forward System with Predistortion and Relay Selection
Annisa Anggun Puspitasari;
Mareta Dwi Nor Habibah;
Ziyadatus Shofiyah;
Ida Anisah;
Yoedy Moegiharto
Jurnal Nasional Teknik Elektro dan Teknologi Informasi Vol 11 No 1: Februari 2022
Publisher : Departemen Teknik Elektro dan Teknologi Informasi, Fakultas Teknik, Universitas Gadjah Mada
DOI: 10.22146/jnteti.v11i1.2386
This paper evaluates, by simulation, the performance of convolutional coding in a cooperative communication system that uses the amplify-and-forward (AF) protocol with a relay selection strategy. At the transmitter (source), a peak-to-average power ratio (PAPR) reduction technique based on selective mapping (SLM) was combined with Hammerstein-model predistortion, while predistortion based on the inverse Rapp model was applied at the relay. The relays act as a virtual antenna array; relay-based cooperative communication can be applied to 4G or 5G networks in future work, even though it requires large bandwidth. A relay selection strategy improves bandwidth efficiency because only the best relay forwards information from source to destination. The conventional relay selection strategy was used to evaluate convolutional coding in a multi-relay scheme: the best relay is chosen by considering the signal-to-noise ratio (SNR) of the source-to-relay and relay-to-destination channels, and only that relay forwards the signal to the destination using the AF protocol. System performance is expressed as bit error rate (BER). The simulation results show that convolutional coding improved system performance by up to 16.59%, with or without predistortion. Applying predistortion at both the source and the relay gave the best performance, improving the system by up to 34%. In addition, under the conventional relay selection strategy, the scheme with the most relays (six) performed best because of the larger number of available paths.
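The conventional relay selection rule mentioned above picks the relay whose weaker hop is strongest, i.e. it maximizes the minimum of the source-to-relay and relay-to-destination SNRs. A minimal sketch of that rule follows; the SNR values are hypothetical.

```python
# Conventional max-min relay selection sketch: the best relay maximizes
# the bottleneck (minimum) of its source->relay and relay->destination SNRs.
def select_relay(snr_sr, snr_rd):
    """snr_sr[i], snr_rd[i]: SNRs (dB) of relay i's two hops."""
    return max(range(len(snr_sr)), key=lambda i: min(snr_sr[i], snr_rd[i]))

# Hypothetical example with three relays:
snr_sr = [12.0, 20.0, 15.0]
snr_rd = [18.0, 9.0, 14.0]
best = select_relay(snr_sr, snr_rd)   # bottlenecks: 12, 9, 14 -> relay 2
```

The bottleneck criterion reflects that an AF link is only as good as its weaker hop.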
Unmanned Aerial Vehicle Autopilot Using a Genetic Algorithm to Eliminate Blank Spots
Ronny Mardiyanto;
Muhammad Ichlasul Salik;
Djoko Purwanto
Jurnal Nasional Teknik Elektro dan Teknologi Informasi Vol 11 No 1: Februari 2022
Publisher : Departemen Teknik Elektro dan Teknologi Informasi, Fakultas Teknik, Universitas Gadjah Mada
DOI: 10.22146/jnteti.v11i1.2492
This paper presents an autopilot for unmanned aerial vehicles (UAVs) that minimizes blank spots in aerial mapping using a genetic algorithm. The purpose of the developed autopilot is to reduce the time required for aerial mapping and to save battery consumption; faster aerial mapping lowers operational costs, battery consumption, and UAV maintenance costs. The proposed autopilot can analyze blank spots in the aerial photographs and optimize the flight route for re-photography. The genetic algorithm was applied to find the shortest route, saving battery and flight time. In operation, the operator first sets the flight route manually, and the aircraft flies along it; unstable wind can shift the aircraft off this route, producing blank spots. After all flight routes had been traversed, the developed system analyzed the locations of the blank spots, and a new flight route covering them was computed with the genetic algorithm to obtain the shortest distance. The system consisted of a UAV equipped with the autopilot and a ground control station (GCS). During flight, the UAV sent the coordinates of the path traversed to the GCS for blank-spot analysis. After the flight mission was completed, the GCS created a new route and sent it to the UAV. The test was carried out with the aircraft flying at an altitude of 120 m with a 4S 4,200 mAh 25C LiPo battery and a throttle of 30% in level flight. The results show that the developed autopilot saves 46.4% of the mapping time and 41.18% of the battery capacity compared with a conventional autopilot.
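The shortest-route step can be framed as ordering the blank-spot coordinates so the total flight distance is minimized. The sketch below is a generic genetic algorithm for that ordering problem, not the paper's exact implementation; the coordinates, population size, and mutation scheme are illustrative assumptions.

```python
import math, random

# Sketch of the route-optimization idea (not the paper's exact GA):
# order a set of blank-spot coordinates so total flight distance is short.
random.seed(1)

def route_length(route, pts):
    return sum(math.dist(pts[route[i]], pts[route[i + 1]])
               for i in range(len(route) - 1))

def ga_route(pts, pop_size=30, generations=200):
    n = len(pts)
    pop = [random.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: route_length(r, pts))
        survivors = pop[:pop_size // 2]          # selection with elitism
        children = []
        while len(survivors) + len(children) < pop_size:
            p = random.choice(survivors)[:]
            i, j = sorted(random.sample(range(n), 2))
            p[i:j] = reversed(p[i:j])            # inversion (2-opt) mutation
            children.append(p)
        pop = survivors + children
    return min(pop, key=lambda r: route_length(r, pts))

# Hypothetical blank-spot coordinates (metres in a local frame):
spots = [(0, 0), (100, 0), (100, 100), (0, 100), (50, 50)]
best = ga_route(spots)
```

Elitism guarantees the best route never worsens across generations, which is why the GA reliably shortens the re-photography path.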
Performance of the Fuzzy C-Means Clustering Algorithm in Leukocyte Image Segmentation
Khakim Assidiqi Nur Hudaya;
Budi Sunarko;
Anan Nugroho
Jurnal Nasional Teknik Elektro dan Teknologi Informasi Vol 11 No 1: Februari 2022
Publisher : Departemen Teknik Elektro dan Teknologi Informasi, Fakultas Teknik, Universitas Gadjah Mada
DOI: 10.22146/jnteti.v11i1.2493
Image segmentation is one of the most critical steps in computer-aided diagnosis and can potentially accelerate leukemia diagnosis. Leukemia is a blood cancer known as a deadly disease. Generally, acute lymphoblastic leukemia (ALL) is detected manually by counting the leukocytes in stained peripheral blood smear images prepared with the immunohistochemical (IHC) method. Unfortunately, this manual process takes 3−24 hours and is prone to error due to operator fatigue. An image segmentation method proposed by Vogado achieves an accuracy of 98.5%; however, it uses a k-means clustering algorithm that is not optimal for input images containing substantial noise. In this research, fuzzy c-means (FCM) clustering was applied to address this problem. The dataset used was ALL-IDB2, consisting of 260 images of 257×257 pixels in tagged image file (TIF) format. In the initial stage, each ALL-IDB2 image was converted into the cyan, magenta, yellow, key (CMYK) and L*a*b* color spaces, and the b* component was subtracted from the M component. The subtraction result was then clustered with the FCM algorithm into nucleus and background regions. The output was evaluated using accuracy, specificity, sensitivity, kappa index, dice coefficient, and time complexity. The results show that changing the clustering algorithm did not significantly change the segmentation results: specificity and precision increased by an average of 0.1−0.4%, while execution time increased by an average of 23.10%. Accuracy decreased to 95.4238%, and the dice coefficient was 79.3682%. It can therefore be concluded that applying the FCM algorithm to this segmentation method does not provide optimal results.
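FCM differs from k-means in that every point holds a soft membership in every cluster, so a single noisy point pulls the centers less sharply. A minimal one-dimensional sketch of the FCM update (not the paper's full segmentation pipeline; the intensity values are hypothetical) is:

```python
# Minimal 1-D fuzzy c-means sketch (illustrative, not the paper's pipeline).
# u[k][i] is the soft membership of point k in cluster i (fuzzifier m).
def fcm(xs, c=2, m=2.0, iters=50):
    lo, hi = min(xs), max(xs)
    centers = [lo + i * (hi - lo) / (c - 1) for i in range(c)]
    for _ in range(iters):
        u = []
        for x in xs:
            d = [abs(x - v) + 1e-12 for v in centers]   # avoid divide-by-zero
            u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1))
                                for j in range(c)) for i in range(c)])
        # centers: membership-weighted means
        centers = [sum(u[k][i] ** m * xs[k] for k in range(len(xs)))
                   / sum(u[k][i] ** m for k in range(len(xs)))
                   for i in range(c)]
    return centers

# Two well-separated intensity groups (e.g. nucleus vs background):
centers = sorted(fcm([1.0, 1.5, 2.0, 8.0, 8.5, 9.0]))
```

In the segmentation setting, pixels are then assigned to the cluster (nucleus or background) with the highest membership.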
Internet of Things-Based Electric Bicycle Monitoring and Control System
Muhammad Ridwan Arif Cahyono;
Ita Mariza;
Wirawan
Jurnal Nasional Teknik Elektro dan Teknologi Informasi Vol 11 No 1: Februari 2022
Publisher : Departemen Teknik Elektro dan Teknologi Informasi, Fakultas Teknik, Universitas Gadjah Mada
DOI: 10.22146/jnteti.v11i1.3183
Electric bicycles are now widely available in the Indonesian market, but most have not been integrated with smartphones and therefore cannot be monitored or controlled remotely. In this study, an internet of things (IoT)-based monitoring and control system for electric bicycles was developed. An ESP32-based microcontroller was used as the IoT device to measure distance traveled from a GPS sensor using the Haversine method, measure bicycle speed, provide a bicycle security system, and measure calories burned while pedaling. The SIM800L module was used for communication, providing internet connectivity over a 2G network. The electric bicycle controller, driving a 36 V BLDC motor, was modified to integrate with the ESP32, and a Raspberry Pi served as the web server for data storage and processing. Calories burned were calculated with the metabolic equivalent of task (MET) method. Monitoring and control were performed through an Android application built with Kodular, with map services based on OpenStreetMap. The application can turn the electric bicycle on and off remotely, adjust the gear position and speed, activate the alarm, track the last location and location history, and measure calories. Control can be performed by button presses or by voice commands in Indonesian. The application passed black-box testing with a 100% success rate and a time delay of 8.82 s. Calorie measurement accuracy was 94.24% compared with commercial calorie-measuring equipment, and speed control was linear with an R2 of 0.9984.
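Two of the calculations above are standard formulas and can be sketched directly: the Haversine great-circle distance between GPS fixes, and MET-based calories (1 MET ≈ 1 kcal per kg of body weight per hour). The coordinates, MET value, and rider weight below are hypothetical placeholders, not values from the paper.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in kilometres."""
    R = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * R * math.asin(math.sqrt(a))

def met_calories(met, weight_kg, hours):
    """Calories burned = MET x body weight (kg) x duration (h)."""
    return met * weight_kg * hours

d = haversine_km(-7.2504, 112.7688, -7.2575, 112.7521)  # two nearby GPS fixes
kcal = met_calories(met=5.8, weight_kg=70, hours=0.5)   # ~203 kcal
```

In the device, the Haversine distance would be accumulated between successive GPS fixes to obtain total distance traveled.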
Performance Analysis of an Android-Based Smart Agriculture Monitoring and Control Application
Helmy;
Fenny Rahmasari;
Arif Nursyahid;
Thomas Agung Setyawan;
Ari Sriyanto Nugroho
Jurnal Nasional Teknik Elektro dan Teknologi Informasi Vol 11 No 1: Februari 2022
Publisher : Departemen Teknik Elektro dan Teknologi Informasi, Fakultas Teknik, Universitas Gadjah Mada
DOI: 10.22146/jnteti.v11i1.3379
The ever-evolving digital era is driving an industrial revolution in internet of things (IoT)-based smart agriculture and smart farming. One application is an Android-based app that monitors and controls parameters of the cultivation process. However, an unstable internet connection can interfere with monitoring. For this reason, the system needs to be integrated into a single app that also runs offline, so the user can monitor and control smart agriculture through the Android app in two modes: online and offline. A performance analysis is also necessary to establish the app's reliability in sending and receiving data. In online mode, the app communicates with the server over the internet using a representational state transfer application programming interface (REST API); in offline mode, it communicates directly with the system through a local access point. The app interacts with the system via the MQTT protocol, acting as an MQTT client. Performance was analyzed with a black-box test, a load-activity test, and an app performance test using the Android Profiler. The functionality (black-box) test showed that the user could monitor and control the smart agriculture system in both online and offline modes. The average load time across all activities was 3.507 seconds at a network bandwidth of 4.54 Mbps, and 1.4 seconds at 35.35 Mbps. The performance test indicated the app is relatively light, with 31% CPU usage and 453.8 MB of memory usage.
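The dual-mode design above boils down to a transport decision: use the REST API via the server when the internet is reachable, otherwise talk to the local access point over MQTT. A hypothetical sketch of that decision (the host names and address are placeholders, not from the paper):

```python
# Hypothetical sketch of the dual-mode idea: choose the REST API (online)
# or the local MQTT broker (offline) based on internet reachability.
def select_endpoint(internet_ok):
    if internet_ok:
        return {"mode": "online", "transport": "REST API",
                "host": "server.example.com"}   # hypothetical server host
    return {"mode": "offline", "transport": "MQTT",
            "host": "192.168.4.1"}              # typical local AP address

online = select_endpoint(True)     # -> REST API via the server
offline = select_endpoint(False)   # -> MQTT via the local access point
```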
Review: Analysis of Arrhythmia Detection Features and Deep Learning Methods for Wearable Devices
Ratna Lestari Budiani Buana;
Imroatul Hudati
Jurnal Nasional Teknik Elektro dan Teknologi Informasi Vol 11 No 1: Februari 2022
Publisher : Departemen Teknik Elektro dan Teknologi Informasi, Fakultas Teknik, Universitas Gadjah Mada
DOI: 10.22146/jnteti.v11i1.3381
Arrhythmia is a heart abnormality that may not be an immediate threat to life but can cause long-term disturbance of the heart's electrical activity. Even so, it should be detected early so that proper treatment and lifestyle changes can be recommended. Arrhythmia is usually diagnosed by recording a long-term ECG with a Holter monitor and then analyzing the rhythm. However, the observation takes time, and wearing a Holter monitor for several days may affect the patient's physiological condition. Previous research has built automatic arrhythmia detection using various datasets, features, and detection methods, but the biggest challenges have been the computation cost and the complexity of the features used as algorithm input. This study reviews the latest research on the data, features, and deep learning methods that can address the computation-time problem and be applied in wearable devices. The review began with a search of related papers and a study of the data they used; the second step reviewed the ECG features and the deep learning methods implemented to detect arrhythmia. The review shows that most researchers used the MIT-BIH database, even though it requires considerable pre-processing effort. The CNN is the most widely used deep learning method, but its computation time is a consideration. Time-domain ECG interval features are the best feature choice for rhythm-abnormality detection and have a low computation cost; these features can serve as the input of the deep learning process to reduce computation time, especially in wearable device applications.
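The time-domain interval features the review favours are cheap to compute: once R-peaks are detected, the RR intervals and simple statistics over them (mean, heart rate, successive-difference variability) follow directly. The sketch below illustrates this; the peak times are hypothetical and RMSSD is one common variability statistic, chosen here as an example.

```python
# Sketch of time-domain ECG interval features: RR intervals from detected
# R-peak times, plus simple rhythm statistics.
def rr_features(r_peaks_s):
    """r_peaks_s: R-peak timestamps in seconds (from any QRS detector)."""
    rr = [b - a for a, b in zip(r_peaks_s, r_peaks_s[1:])]   # RR intervals
    mean_rr = sum(rr) / len(rr)
    bpm = 60.0 / mean_rr                                     # heart rate
    # RMSSD: root mean square of successive RR differences (variability)
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    rmssd = (sum(d * d for d in diffs) / len(diffs)) ** 0.5
    return {"mean_rr": mean_rr, "bpm": bpm, "rmssd": rmssd}

# Hypothetical peak times: a steady 0.8 s rhythm (75 bpm, zero variability)
f = rr_features([0.0, 0.8, 1.6, 2.4, 3.2])
```

An irregular rhythm would show up as a large RMSSD relative to the mean RR interval, which is why such interval features are useful low-cost inputs for a detector.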
Conceptual Design of an Internet Accelerator Laboratory (IAL) for the DECY-13 Cyclotron
Frida Iswinning Diah;
Idrus Abdul Kudus;
Suharni;
Fajar Sidik Permana;
Taxwim
Jurnal Nasional Teknik Elektro dan Teknologi Informasi Vol 11 No 1: Februari 2022
Publisher : Departemen Teknik Elektro dan Teknologi Informasi, Fakultas Teknik, Universitas Gadjah Mada
DOI: 10.22146/jnteti.v11i1.3425
The Research and Technology Center for Accelerator, Research Organization for Nuclear Energy, National Research and Innovation Agency (PRTA-ORTN – BRIN) has become a Center for Excellence in Science and Technology (PUI), particularly in the particle accelerator field. One of its studies is the DECY-13 cyclotron for producing radioisotopes used in medical cancer diagnostics. The DECY-13 cyclotron R&D is currently in the final stage of the design process. An internet cyclotron laboratory facility will be implemented to expand the benefits of this R&D: it will make cyclotron training more widely accessible and support capacity building and human resource programs in the nuclear field. A preliminary study for a DECY-13 cyclotron Internet Accelerator Laboratory (IAL) is therefore needed, covering the long-term concept and the design of the IAL. The long-term concept follows the established cyclotron road map; in the conceptual design, the initial target users are students and cyclotron operators in hospitals. To determine the design requirements, a literature study was conducted on online laboratories and accelerator learning concepts applied in other countries. The IAL apparatus was identified from a review of the DECY-13 cyclotron laboratory's current condition and future research plans. The conceptual design developed consists of a research roadmap on the IAL for long-term research planning and a material syllabus for the two target user groups. The identified IAL components are CompactRIO, as the primary controller of the cyclotron operating system and the bridge to the network, together with supporting network elements including database systems, servers, LAN, internet, webcams, and a website or application for user access.
Implementation of a Cloud-Based Virtual Computer Laboratory – An Object-Oriented Programming Class
Dwi Susanto;
Ridi Ferdiana;
Selo Sulistyo
Jurnal Nasional Teknik Elektro dan Teknologi Informasi Vol 11 No 1: Februari 2022
Publisher : Departemen Teknik Elektro dan Teknologi Informasi, Fakultas Teknik, Universitas Gadjah Mada
DOI: 10.22146/jnteti.v11i1.3475
The COVID-19 pandemic, ongoing since March 2020, has forced learning activities to be carried out online. Online learning can generally be done with a learning management system (LMS) and video conferencing applications. However, some subjects require practicum activities, such as those conducted in a computer laboratory, so a computer laboratory that can be accessed online is needed during the pandemic. One solution is a virtual laboratory (Vlab): a virtual computer laboratory built on virtualization technology. A Vlab provides virtual machines (VMs) accessed online through a remote access application (Remote Desktop Protocol/RDP, Virtual Network Computing/VNC, or Secure Shell/SSH). Vlab infrastructure can use either on-premise or public cloud infrastructure; compared with the on-premise option, a public cloud-based Vlab requires no expensive initial investment and eliminates complex routine hardware maintenance. This study proposes a cloud-based Vlab using Azure Lab Services for an Object-Oriented Programming class. The Vlab was designed around the technical needs of the programming practicum, including VM specifications (CPU, RAM, and storage), the operating system, the software to be installed, and the number of VMs per class. Based on a total cost of ownership analysis, a cloud-based Vlab was potentially up to 26% cheaper than an on-premise one. A cloud-based Vlab installation performed with a PowerShell script required six interactions and an installation time of 132 minutes. The Vlab could be accessed from a standard computer or laptop with an internet connection and an RDP client. The bandwidth required ranged from 0.13 Mbps to 3.09 Mbps, which is within the average speed range of 4G networks available in Indonesia.
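The total cost of ownership comparison above rests on a simple structure: on-premise cost is upfront hardware plus recurring maintenance, while cloud cost is pay-per-use over the VM hours actually consumed. The sketch below shows that structure only; all numbers are hypothetical placeholders, not the paper's figures.

```python
# Hypothetical TCO sketch (illustrative numbers, not the paper's analysis):
# on-premise = upfront hardware + yearly maintenance; cloud = pay-per-use.
def tco_on_premise(hardware_cost, maintenance_per_year, years):
    return hardware_cost + maintenance_per_year * years

def tco_cloud(vm_hour_rate, hours_per_year, vms, years):
    return vm_hour_rate * hours_per_year * vms * years

# Placeholder figures in an arbitrary currency unit:
onprem = tco_on_premise(100_000_000, 10_000_000, years=3)
cloud = tco_cloud(vm_hour_rate=1500, hours_per_year=500, vms=20, years=3)
saving = 1 - cloud / onprem   # fraction saved by the cloud option
```

The cloud option wins whenever lab usage (hours x VMs) is low enough that pay-per-use stays below the fixed on-premise outlay, which is typical for a practicum that runs only a few hours per week.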
Using the Whittaker-Henderson Smoothing Method to Improve Neural Network Forecasting Accuracy
Hans Pratyaksa;
Adhistya Erna Permanasari;
Silmi Fauziati
Jurnal Nasional Teknik Elektro dan Teknologi Informasi Vol 11 No 1: Februari 2022
Publisher : Departemen Teknik Elektro dan Teknologi Informasi, Fakultas Teknik, Universitas Gadjah Mada
DOI: 10.22146/jnteti.v11i1.3489
Health institutions need to ensure the availability of drug stocks for patients. There are challenges related to the uncertainty of the amount of drug use for the next period. Uncertainty can be reduced by analysing historical drug data to predict future demand. Time series can contain spikes or fluctuation pattern which spikes can disguise the main information. Hence, it can affect the accuracy of the prediction model. One widely used forecasting method in the time series data is the artificial neural network (ANN) method. The ANN method requires the pre-processing stage of the data before the training process. The pre-processing stage is essential to obtain information or knowledge. This study focused on applying smoothing methods at the pre-processing stage of the ANN method. The application of the smoothing method was expected to improve the quality of ANN learning data that would lead to better predictive accuracy. This research focuses on implementing the smoothing method in data pre-processing step for ANN method. Smoothing methods used in this research were exponential smoothing (ES) and Whittaker-Henderson (WH) smoothing applied to two time series datasets. The refining method used in this study was the WH method, which was tested on two time series datasets of medicine. The results show that the mean square error (MSE) obtained by applying the WH method was lower than the non-smoothing ANN for both datasets. Evaluation results revealed that implementing WH smoothing method in data pre-processing step for ANN (WH+ANN) provided MSE significantly lower than ANN results with a confidence level of 94% for dataset 1 and 85% for the dataset 2.