Contact Name
Dayat Kurniawan
Contact Email
Dayat Kurniawan
Phone
-
Journal Mail Official
redaksi@jurnalet.com
Editorial Address
-
Location
Kota Adm. Jakarta Selatan,
DKI Jakarta,
Indonesia
Jurnal Elektronika dan Telekomunikasi
ISSN: 1411-8289     EISSN: 2527-9955     DOI: -
Core Subject: Engineering
Jurnal Elektronika dan Telekomunikasi (JET) is an open-access, peer-reviewed journal published by the Research Center for Electronics and Telecommunication - Indonesian Institute of Sciences. It publishes original research papers, review articles, and case studies on the latest research and developments in electronics, telecommunications, and microelectronics engineering. JET is published twice a year and uses double-blind peer review. It was first published in 2001.
Arjuna Subject : -
Articles 14 Documents
Search results for issue "Vol 24, No 1 (2024)": 14 Documents
Appendix Vol. 24 No. 1 Prini, Salita Ulitia
Jurnal Elektronika dan Telekomunikasi Vol 24, No 1 (2024)
Publisher : National Research and Innovation Agency

DOI: 10.55981/jet.670


The Effect of Window Size and Window Shape in STFT for Pre-Processing FMCW Radar Data in Human Activity Recognition Based on Bi-LSTM Fitrah, Figo Azzam De; Suratman, Fiky Y.; Istiqomah, Istiqomah
Jurnal Elektronika dan Telekomunikasi Vol 24, No 1 (2024)
Publisher : National Research and Innovation Agency

DOI: 10.55981/jet.601

Abstract

Many studies use radars for Human Activity Recognition (HAR), and numerous techniques for preprocessing FMCW radar data have been explored to improve HAR performance. Our approach employs a 1-D radar to classify four human activities, i.e., walking, standing, crouching, and sitting. We use the Fast Fourier Transform (FFT) and the Short-Time Fourier Transform (STFT) with a Kaiser window to generate range-time and Doppler-time data from the in-phase and quadrature radar signals. The choice of windowing parameters, i.e., the window size and the window shape represented by the beta parameter of the Kaiser window, is considered to have a significant impact on the performance of deep learning LSTM models, including the F1-score. However, our study in this paper, including statistical analysis using t-tests, shows otherwise. Our results consistently support the null hypothesis, which means that variations in window size and window shape do not significantly affect the F1-score. In essence, our findings underscore the robustness of our preprocessing methodology, emphasizing the stability and reliability of the selected configurations. This research provides valuable insights into preprocessing techniques for radar data in the context of human activity recognition, enhancing the consistency and credibility of deep learning models in this domain.
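The STFT pre-processing step the abstract describes can be sketched as follows; this is a minimal illustration, not the paper's actual pipeline. The synthetic I/Q signal, sample rate, and the specific (window size, beta) pairs below are hypothetical stand-ins, chosen only to show how the two parameters under study enter the computation:

```python
import numpy as np
from scipy.signal import stft

# Hypothetical stand-in for an in-phase/quadrature (I/Q) radar signal:
# a complex tone at 100 Hz sampled at an assumed 1 kHz rate.
fs = 1000
t = np.arange(0, 1, 1 / fs)
iq = np.exp(2j * np.pi * 100 * t)

# Window size (nperseg) and window shape (Kaiser beta) are the two
# parameters whose effect the study tests; these values are examples.
for nperseg, beta in [(64, 5), (128, 5), (128, 14)]:
    f, tau, Zxx = stft(iq, fs=fs, window=("kaiser", beta),
                       nperseg=nperseg, return_onesided=False)
    # The magnitude spectrogram would be the Doppler-time input to a Bi-LSTM.
    spectrogram = np.abs(Zxx)
    print(nperseg, beta, spectrogram.shape)
```

Each (nperseg, beta) pair yields a spectrogram of a different shape and frequency resolution; the paper's t-tests compare classifier F1-scores across such configurations.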
Comparison of YOLOv3-tiny and YOLOv4-tiny in the Implementation Handgun, Shotgun, and Rifle Detection Using Raspberry Pi 4B S. Hi. Rauf, Faris Zulkarnain; Handoko, Djati; Pradana, Ilham S; Alifta, Dimas
Jurnal Elektronika dan Telekomunikasi Vol 24, No 1 (2024)
Publisher : National Research and Innovation Agency

DOI: 10.55981/jet.602

Abstract

Criminal activities frequently involve carryable weapons such as handguns, shotguns, and rifles, and such weapons are often concealed from plain sight within a crowd. Deep learning can assist the detection process; in this case, we intend to identify the model of the firearm that is detected. This research applies one of the deep learning concepts, namely You Only Look Once (YOLO). The authors use YOLOv3-tiny and YOLOv4-tiny for the detection and classification of weapon types. YOLO is one of the fastest and most accurate methods of object detection, outperforming other detection algorithms, but the full models require heavy computer architectures. Therefore YOLOv3-tiny and YOLOv4-tiny, lighter versions of YOLOv3 and YOLOv4, can be solutions for smaller architectures: they have higher FPS, which should yield faster performance, while retaining competitive accuracy, since YOLOv3 already outperforms the Single Shot Detector (SSD) and the Faster Region-based Convolutional Neural Network (Faster R-CNN). The authors employ YOLOv3-tiny and YOLOv4-tiny because both offer superior Frames Per Second (FPS) and Mean Average Precision (mAP) performance in object detection. The study found that YOLOv3-tiny had high FPS and lower mAP performance: an average Intersection over Union (IoU) score of 71.54%, an accuracy of 90%, a recall of 78%, an F1 score of 84%, and an mAP of 86.7%. YOLOv4-tiny had lower FPS and higher mAP: an average IoU score of 73.19%, an accuracy of 90%, a recall of 84%, an F1 score of 87%, and an mAP of 90.7%.
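The average IoU scores reported above (71.54% and 73.19%) measure how well a predicted bounding box overlaps the ground-truth box. A minimal sketch of the standard IoU computation, with hypothetical box coordinates, not the study's data:

```python
# Intersection over Union (IoU) for axis-aligned boxes (x1, y1, x2, y2).
# Boxes and values here are illustrative, not from the study.
def iou(box_a, box_b):
    # Corners of the intersection rectangle
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.1429
```

An average IoU of 73.19% thus means predicted boxes overlapped ground truth by roughly three-quarters of the combined area on average.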
Front Cover Vol. 24 No. 1 Prini, Salita Ulitia
Jurnal Elektronika dan Telekomunikasi Vol 24, No 1 (2024)
Publisher : National Research and Innovation Agency

DOI: 10.55981/jet.671

