Articles

Found 34 Documents

LoRaWAN-Based Communication for Autonomous Vehicles: Performance and Development Saharuna, Saharuna; Adiprabowo, Tjahjo; Yassir, Muhammad; Nurdiana, Dian; Adi, Puput Dani Prasetyo; Kitagawa, Akio; Satyawan, Arief Suryadi
ILKOM Jurnal Ilmiah Vol 16, No 3 (2024)
Publisher : Prodi Teknik Informatika FIK Universitas Muslim Indonesia

DOI: 10.33096/ilkom.v16i3.2311.236-254

Abstract

Automotive technology continues to develop toward vehicles that can drive themselves, and this research extends previous developments in that direction. Intelligent vehicles are built from many subsystems, from the ability to find a parking position and an accurate navigation system to artificial senses such as LiDAR, smart cameras, artificial intelligence, and telecommunication components. The part addressed in this research is data communication. In a broader scope, the development of intelligent vehicles is one of the building blocks of a Smart City. This research analyzes the possibility of data collisions and how to avoid them, using a comprehensive LoRaWAN-based approach, so that a communication method built on LoRaWAN and LoRa modules can make an important contribution to intelligent or autonomous vehicles for the Smart City. In the proposed scheme, each car carries a GPS module whose readings are transmitted to the nearest LoRaWAN gateway; the car automatically selects the nearest gateway to optimize the link, reducing packet loss and signal attenuation caused by LoRa communication in NLOS areas. At this stage the data transmission is simulated in MATLAB, with direct deployment on smart vehicles planned. The contribution of this research is a new method for LoRaWAN-based multi-point data transmission that avoids data collisions between intelligent vehicles while they are moving, supporting future Smart City technology.
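
As a rough illustration of the nearest-gateway selection described in this abstract, the Python sketch below picks the closest LoRaWAN gateway from a vehicle's GPS fix using great-circle distance. The gateway list, coordinates, and function names are illustrative assumptions; the paper itself reports a MATLAB simulation.

```python
# Minimal sketch (not the paper's MATLAB simulation): choosing the nearest
# LoRaWAN gateway from a vehicle's GPS fix. Gateway coordinates are hypothetical.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two WGS-84 points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def nearest_gateway(vehicle_fix, gateways):
    """Return the gateway closest to the vehicle's current GPS position."""
    lat, lon = vehicle_fix
    return min(gateways, key=lambda g: haversine_m(lat, lon, g["lat"], g["lon"]))

gateways = [  # hypothetical gateway positions
    {"id": "gw-1", "lat": -6.9147, "lon": 107.6098},
    {"id": "gw-2", "lat": -6.9175, "lon": 107.6191},
]
print(nearest_gateway((-6.9160, 107.6150), gateways)["id"])
```
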
Enhancing Image Quality in Facial Recognition Systems with GAN-Based Reconstruction Techniques Wijaya, Beni; Satyawan, Arief Suryadi; Haqiqi, Mokh. Mirza Etnisa; Susilawati, Helfy; Artemysia, Khaulyca Arva; Sopian, Sani Moch.; Shamie, M. Ikbal; Firman
Teknika Vol. 14 No. 1 (2025): March 2025
Publisher : Center for Research and Community Service, Institut Informatika Indonesia (IKADO) Surabaya

DOI: 10.34148/teknika.v14i1.1180

Abstract

Facial recognition systems are pivotal in modern applications such as security, healthcare, and public services, where accurate identification is crucial. However, environmental factors, transmission errors, or deliberate obfuscations often degrade facial image quality, leading to misidentification and service disruptions. This study employs Generative Adversarial Networks (GANs) to address these challenges by reconstructing corrupted or occluded facial images with high fidelity. The proposed methodology integrates advanced GAN architectures, multi-scale feature extraction, and contextual loss functions to enhance reconstruction quality. Six experimental modifications to the GAN model were implemented, incorporating additional residual blocks, enhanced loss functions combining adversarial, perceptual, and reconstruction losses, and skip connections for improved spatial consistency. Extensive testing was conducted using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) to quantify reconstruction quality, alongside face detection validation using SFace. The final model achieved an average PSNR of 26.93 and an average SSIM of 0.90, with confidence levels exceeding 0.55 in face detection tests, demonstrating its ability to preserve identity and structural integrity under challenging conditions, including occlusion and noise.  The results highlight that advanced GAN-based methods effectively restore degraded facial images, ensuring accurate face detection and robust identity preservation. This research provides a significant contribution to facial image processing, offering practical solutions for applications requiring high-quality image reconstruction and reliable facial recognition.
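
The abstract reports reconstruction quality as PSNR and SSIM. The sketch below shows how those two metrics are typically computed with scikit-image; the images here are random stand-ins, not the paper's dataset, and the study's exact evaluation code is not reproduced.

```python
# Minimal sketch of the two quality metrics reported in the abstract (PSNR, SSIM),
# using scikit-image; the images are random stand-ins, not the paper's data.
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def reconstruction_quality(reference, reconstructed):
    """Return (PSNR in dB, SSIM) for two uint8 RGB images of equal shape."""
    psnr = peak_signal_noise_ratio(reference, reconstructed, data_range=255)
    ssim = structural_similarity(reference, reconstructed, channel_axis=-1, data_range=255)
    return psnr, ssim

ref = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)          # stand-in reference
rec = np.clip(ref + np.random.randint(-10, 10, ref.shape), 0, 255).astype(np.uint8)
print(reconstruction_quality(ref, rec))
```
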
Restorasi Citra Wajah Terdegradasi Menggunakan Model GAN dan Fungsi Loss Wijaya, Beni; Haqiqi, Mokh. Mirza Etnisa; Satyawan, Arief Suryadi; Susilawati, Helfy
Algoritme Jurnal Mahasiswa Teknik Informatika Vol 5 No 2 (2025): April 2025
Publisher : Program Studi Teknik Informatika Universitas Multi Data Palembang

DOI: 10.35957/algoritme.v5i2.11487

Abstract

This study develops a Generative Adversarial Network (GAN)-based model to restore partially degraded facial images by reconstructing missing regions while preserving the structural integrity of the face. The model adopts an encoder-decoder architecture enhanced with skip connections and residual blocks to improve restoration accuracy. The training process utilizes 1,000 paired images, comprising 500 original and 500 occluded facial images, with 200 images allocated for testing. The model was trained over 50 epochs, resulting in a consistent reduction of generator loss from 0.80 to 0.67 and stabilization of discriminator loss at 0.70. Qualitative evaluation indicates the model’s capability to reconstruct facial features such as eyes, nose, and mouth with high visual fidelity, although minor artifacts remain in areas with complex textures. These findings demonstrate the effectiveness of GAN-based approaches in facial image restoration and suggest potential improvements through the exploration of alternative network architectures and more diverse training datasets. The proposed model shows promise for applications in digital forensics and historical image recovery.
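
As a hedged illustration of the architectural ingredients named above (an encoder-decoder generator with a skip connection and residual blocks), the PyTorch sketch below wires a minimal generator; layer counts and channel widths are assumptions, not the paper's configuration.

```python
# Illustrative PyTorch sketch of the building blocks named in the abstract;
# sizes are assumed, not the paper's exact architecture.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )
    def forward(self, x):
        return x + self.body(x)          # residual connection

class TinyGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.res = ResidualBlock(64)
        self.dec = nn.Sequential(nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Tanh())
    def forward(self, x):
        e = self.enc(x)                  # downsample (encoder)
        d = self.dec(self.res(e))        # restore resolution (decoder)
        return d + x                     # skip connection from input to output

print(TinyGenerator()(torch.randn(1, 3, 64, 64)).shape)  # torch.Size([1, 3, 64, 64])
```
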
Thermal Image-Based Multi-Class Semantic Segmentation for Autonomous Vehicle Navigation in Restricted Environments Fazri, Nurul; Susilawati, Helfy; Haqiqi, Mokh. Mirza Etnisa; Satyawan, Arief Suryadi
Jurnal Sistem Cerdas Vol. 8 No. 1 (2025)
Publisher : APIC

DOI: 10.37396/jsc.v8i1.489

Abstract

Technological advancements have propelled the development of environmentally friendly transportation, with autonomous vehicles (AVs) and thermal imaging playing pivotal roles in achieving sustainable urban mobility. This study explores the application of the SegNet deep learning architecture for multi-class semantic segmentation of thermal images in constrained environments. The methodology encompasses data acquisition using a thermal camera in urban settings, annotation of 3,001 thermal images across 10 object classes, and rigorous model training with a high-performance system. SegNet demonstrated robust learning capabilities, achieving a training accuracy of 96.7% and a final loss of 0.096 after 120 epochs. Testing results revealed strong performance for distinct objects like motorcycles (F1 score: 0.63) and poles (F1 score: 0.84), but challenges in segmenting complex patterns such as buildings (F1 score: 0.34) and trees (F1 score: 0.42). Visual analysis corroborated these findings, highlighting strengths in segmenting well-defined objects while addressing difficulties in handling variability and elongated structures. Despite these limitations, the study establishes SegNet's potential for thermal image segmentation in AV systems. This research contributes to the advancement of computer vision in autonomous navigation, fostering sustainable and green transportation solutions while emphasizing areas for further refinement to enhance performance in complex environments.
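
The per-class F1 scores quoted above can be computed from predicted and ground-truth label maps as in the short sketch below; the label maps here are random stand-ins, and the class count is taken from the abstract, not from the paper's annotation files.

```python
# Quick sketch of the per-class F1 computation behind the scores quoted above;
# the label maps are random stand-ins, not the paper's annotated thermal data.
import numpy as np

def per_class_f1(pred, gt, num_classes):
    """F1 score for each class from integer-labelled segmentation maps."""
    scores = []
    for c in range(num_classes):
        tp = np.sum((pred == c) & (gt == c))
        fp = np.sum((pred == c) & (gt != c))
        fn = np.sum((pred != c) & (gt == c))
        denom = 2 * tp + fp + fn
        scores.append(2 * tp / denom if denom else 0.0)
    return scores

pred = np.random.randint(0, 10, (240, 320))
gt = np.random.randint(0, 10, (240, 320))
print([round(s, 2) for s in per_class_f1(pred, gt, 10)])
```
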
360-degree Image Processing on NVIDIA Jetson Nano Satyawan, Arief Suryadi; Utomo, Prio Adjie; Puspita, Heni; Wulandari, Ike Yuni
Internet of Things and Artificial Intelligence Journal Vol. 4 No. 2 (2024): Volume 4 Issue 2, 2024 [May]
Publisher : Association for Scientific Computing, Electronics, and Engineering (ASCEE)

DOI: 10.31763/iota.v4i2.722

Abstract

Autonomous electric vehicles require a wide field of view to operate their object-detection systems. By identifying objects, the car can be given driver-like intelligence, so that it recognizes items and makes decisions to prevent collisions with them. A 360-degree camera suits this purpose because it records the events surrounding the car in a single shot. However, 360-degree cameras inherently produce distorted images; to make the image appear normal while keeping the larger capture area, it must be normalized. In this study, software for 360-degree image normalization is developed in Python on an NVIDIA Jetson Nano. To process an image in real time, an image shape mapping is first chosen that preserves information about everything the camera captured, and that mapping is then applied. Using Python on the NVIDIA Jetson Nano, the authors successfully processed 360-degree images for local and real-time video as well as image geometry modifications.
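
One common "image shape mapping" for this task is unwrapping the circular 360-degree frame into a panoramic strip. The OpenCV sketch below shows that polar-to-rectangular remap as an assumption of what such a normalization step could look like; the input file, image centre, and radius are placeholders, not values from the paper.

```python
# Minimal sketch of one possible "image shape mapping": unwrapping a circular
# 360-degree frame into a panoramic strip with OpenCV. File name, centre, and
# radius are placeholders.
import cv2
import numpy as np

def unwrap_360(img, cx, cy, radius, out_w=1024, out_h=256):
    """Polar-to-rectangular remap of a circular 360-degree image."""
    xs = np.arange(out_w, dtype=np.float32)
    ys = np.arange(out_h, dtype=np.float32)
    theta = 2.0 * np.pi * xs / out_w                 # angle around the circle
    r = radius * (1.0 - ys / out_h)                  # top of the strip = outer edge
    map_x = (cx + np.outer(r, np.cos(theta))).astype(np.float32)
    map_y = (cy + np.outer(r, np.sin(theta))).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

frame = cv2.imread("frame_360.jpg")                  # placeholder input frame
panorama = unwrap_360(frame, cx=640, cy=360, radius=350)
cv2.imwrite("panorama.jpg", panorama)
```
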
Optimizing Autonomous Navigation: Advances in LiDAR-based Object Recognition with Modified Voxel-RCNN Firman; Satyawan, Arief Suryadi; Susilawati, Helfy; Haqiqi, Mokh. Mirza Etnisa; Artemysia, Khaulyca Arva; Sopian, Sani Moch; Wijaya, Beni; Samie, Muhammad Ikbal
Kinetik: Game Technology, Information System, Computer Network, Computing, Electronics, and Control Vol. 10, No. 2, May 2025
Publisher : Universitas Muhammadiyah Malang

DOI: 10.22219/kinetik.v10i2.2199

Abstract

This study aimed to enhance the object recognition capabilities of autonomous vehicles in constrained and dynamic environments. By integrating Light Detection and Ranging (LiDAR) technology with a modified Voxel-RCNN framework, the system detected and classified six object classes: human, wall, car, cyclist, tree, and cart. This integration improved the safety and reliability of autonomous navigation. The methodology included the preparation of a point cloud dataset, conversion into the KITTI format for compatibility with the Voxel-RCNN pipeline, and comprehensive model training. The framework was evaluated using metrics such as precision, recall, F1-score, and mean average precision (mAP). Modifications to the Voxel-RCNN framework were introduced to improve classification accuracy, addressing challenges encountered in complex navigation scenarios. Experimental results demonstrated the robustness of the proposed modifications. Modification 2 consistently outperformed the baseline, with 3D detection scores for the car class in hard scenarios increasing from 4.39 to 10.31. Modification 3 achieved the lowest training loss of 1.68 after 600 epochs, indicating significant improvements in model optimization. However, variability in the real-world performance of Modification 3 highlighted the need for balancing optimized training with practical applicability. Overall, the training loss decreased by up to 29.1%, and the system achieved substantial improvements in detection accuracy under challenging conditions. These findings underscored the potential of the proposed system to advance the safety and intelligence of autonomous vehicles, providing a solid foundation for future research in autonomous navigation and object recognition.
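
The dataset conversion mentioned above targets the KITTI annotation format. The sketch below formats one detected object as a KITTI label line (class, 2D box, 3D dimensions, location, yaw); the numeric values are made up for illustration and are not from the study's dataset.

```python
# Sketch of a KITTI-format label line, the target format named in the abstract;
# the example object values are invented for illustration only.
def kitti_label_line(cls, bbox2d, dims_hwl, loc_xyz, rot_y,
                     truncated=0.0, occluded=0, alpha=0.0):
    """Format one object as a KITTI annotation line (2D box, 3D size/position, yaw)."""
    fields = [cls, f"{truncated:.2f}", str(occluded), f"{alpha:.2f}",
              *(f"{v:.2f}" for v in bbox2d),    # left, top, right, bottom (pixels)
              *(f"{v:.2f}" for v in dims_hwl),  # height, width, length (metres)
              *(f"{v:.2f}" for v in loc_xyz),   # x, y, z in camera coordinates
              f"{rot_y:.2f}"]                   # rotation around the vertical axis
    return " ".join(fields)

print(kitti_label_line("Car", (100, 150, 300, 280), (1.5, 1.6, 3.9), (2.0, 1.5, 15.0), -1.2))
```
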
Kendali Kemudi Dengan Memindai Area Jalan Berbasis Kamera Termal Siburian, Sebastian Edward; Suratman, Fiky Y; Satyawan, Arief Suryadi
eProceedings of Engineering Vol. 11 No. 5 (2024): Oktober 2024
Publisher : eProceedings of Engineering


Abstract

Technology has advanced significantly, particularly in the field of artificial intelligence, including developments in autonomous electric vehicles for the efficient use of environmentally friendly energy sources. Enabling autonomous mobility requires technology that lets the vehicle detect objects around it, including object recognition using semantic segmentation. This research uses an object segmentation system for road recognition, built with a deep-learning-based segmentation method. Image information is obtained from a FLIR thermal camera. The segmentation methods used in this capstone design are residual network architectures (ResNet 18, ResNet 34, ResNet 50, ResNet 101, ResNet 152, and ResNeXt 50). The segmentation results are then used to develop a steering control method by analyzing the segmented road area. The analysis produces a steering-direction recommendation signal that is sent to the steering control system of a three-wheeled electric vehicle. Experimental results show that the ResNet 50 segmentation method is well suited to the steering control system because it segments well and has low latency, so steering control can be performed in real time. Keywords: Convolutional Neural Network (CNN), Deep Learning, Image Processing, Residual Network, Semantic Segmentation, FLIR Thermal
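
As a hedged sketch of turning a segmented road area into a steering recommendation, the Python snippet below compares the centroid of the road pixels with the image centre; the road class index, tolerance, and decision rule are assumptions, not the signal logic implemented in the paper.

```python
# Hedged sketch: derive a steering recommendation from a segmented road mask by
# comparing the road area's centroid with the image centre. Class index and
# tolerance are assumed values, not the paper's tuned parameters.
import numpy as np

ROAD_CLASS = 1          # assumed label index of the road class
CENTRE_TOLERANCE = 0.1  # fraction of image width treated as "straight"

def steering_recommendation(seg_map):
    """Return 'left', 'right', 'straight', or 'stop' from an integer-labelled map."""
    ys, xs = np.nonzero(seg_map == ROAD_CLASS)
    if xs.size == 0:
        return "stop"                       # no road pixels detected
    offset = (xs.mean() - seg_map.shape[1] / 2) / seg_map.shape[1]
    if offset < -CENTRE_TOLERANCE:
        return "left"
    if offset > CENTRE_TOLERANCE:
        return "right"
    return "straight"

demo = np.zeros((120, 160), dtype=np.uint8)
demo[60:, 90:150] = ROAD_CLASS              # road area shifted to the right
print(steering_recommendation(demo))        # -> "right"
```
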
Sistem Pengendali Steering Gear Otomatis Menggunakan Teknologi Kamera Thermal FLIR Arifyandy, Rachmat; Suratman, Fiky Y; Satyawan, Arief Suryadi
eProceedings of Engineering Vol. 11 No. 5 (2024): Oktober 2024
Publisher : eProceedings of Engineering


Abstract

Developments in information and communication technology are driving various innovations, one of which is the autonomous electric vehicle (KLO), which can reduce human negligence in driving. This research aims to develop and test an automatic steering system for a KLO using a FLIR (Forward-Looking Infrared) camera. The FLIR camera is used to sense the vehicle's surroundings under various lighting conditions, such as daytime and nighttime. Tests were carried out to evaluate the camera's ability to produce accurate thermal images and to ensure reliable object detection and identification. A Convolutional Neural Network (CNN) with the ResNet-50 architecture is used to improve the effectiveness of object detection. The results show that the FLIR camera detects objects well in both day and night conditions and improves the safety and navigation of the autonomous vehicle. The CNN method proved effective in increasing object-detection accuracy, making a significant contribution to the development of a safer and more efficient automatic steering system. Keywords: Autonomous Electric Vehicle (KLO), Automatic Steering System, FLIR Camera, Object Detection, Convolutional Neural Network (CNN), ResNet-50.
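
The abstract names a ResNet-50 CNN as the recognition backbone. The torchvision sketch below only shows how such a backbone can be loaded and run on a single (placeholder) thermal frame; the paper's detection head, training data, and FLIR capture pipeline are not reproduced here.

```python
# Minimal torchvision sketch of the ResNet-50 backbone the abstract mentions,
# applied to a placeholder thermal frame; the paper's detection pipeline is not shown.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

frame = Image.open("thermal_frame.png").convert("RGB")   # placeholder FLIR frame
with torch.no_grad():
    logits = model(preprocess(frame).unsqueeze(0))
print(logits.argmax(dim=1).item())                       # predicted class index
```
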
Pemanfaatan Intel RealSense Depth Camera D415 untuk Mendeteksi Manusia pada Kendaraan Otonom Roda Tiga Aurelia, Felicia Bunga; Suratman, Fiky Y; Satyawan, Arief Suryadi
eProceedings of Engineering Vol. 11 No. 5 (2024): Oktober 2024
Publisher : eProceedings of Engineering


Abstract

The use of the Intel RealSense Depth Camera D415 for human detection on a three-wheeled autonomous vehicle is an important innovation for improving the safety and efficiency of transportation systems. The research is motivated by the need to reduce accidents between autonomous vehicles and pedestrians or other road users. Its main objective is to develop and implement an accurate and reliable human-detection system using the Intel RealSense D415 depth camera. The method integrates the depth camera with the YOLOv8 image-processing algorithm to detect and track people in front of the vehicle. Trials were conducted on a three-wheeled autonomous vehicle prototype under various environmental conditions to test system performance. The results show that the developed system detects humans with high accuracy, even in poor lighting and complex environments. The main conclusion is that the Intel RealSense Depth Camera D415 has great potential to improve the safety of autonomous vehicles through more effective human detection, reducing the risk of accidents and increasing public trust in the use of autonomous vehicles. Keywords: Intel RealSense depth camera, human detection, YOLOv8
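
A hedged sketch of this pairing is given below: YOLOv8 person detection on the D415 colour stream, with the depth stream supplying a distance estimate at each detection's centre. The model weights, stream settings, and the omission of depth-to-colour alignment are simplifying assumptions, not the paper's configuration.

```python
# Hedged sketch: YOLOv8 person detection on the RealSense D415 colour stream,
# with the depth stream giving a rough distance per detection.
# Note: depth and colour are not aligned here; a real system would align them.
import numpy as np
import pyrealsense2 as rs
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                       # pretrained COCO weights; class 0 = person
pipeline = rs.pipeline()
cfg = rs.config()
cfg.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
cfg.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(cfg)

frames = pipeline.wait_for_frames()
color = np.asanyarray(frames.get_color_frame().get_data())
depth = frames.get_depth_frame()

for box in model(color)[0].boxes:
    if int(box.cls[0]) == 0:                     # person class in the COCO model
        x1, y1, x2, y2 = map(int, box.xyxy[0])
        cx, cy = (x1 + x2) // 2, (y1 + y2) // 2
        print(f"person at ~{depth.get_distance(cx, cy):.2f} m")

pipeline.stop()
```
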
Ilustrasi Pengereman Kendaraan Otonom Roda Tiga Menggunakan Aktuator Linear Elektrik Jody H, Amadeus Evan; Suratman, Fiky Y.; Satyawan, Arief Suryadi
eProceedings of Engineering Vol. 11 No. 5 (2024): Oktober 2024
Publisher : eProceedings of Engineering


Abstract

This research develops a braking system for a three-wheeled autonomous vehicle using an electric linear actuator, controlled by an ATmega 2560 Pro microcontroller and a BTS 7960 motor driver. An Intel RealSense D415 depth camera is used to measure the distance to objects. The system sets the braking speed in three PWM segments: 255 for distances of 2-4 meters, 100 for 4.01-6 meters, and 60 for 6.01-8 meters. Test results show fast response and high accuracy, with an average delay of less than 1 second, ensuring safe and efficient braking. Keywords: braking, autonomous vehicle, electric linear actuator, depth camera, ATmega 2560 Pro microcontroller, BTS 7960 motor driver.
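
The distance-to-PWM mapping quoted in this abstract can be written directly as a small lookup function, as in the sketch below; the zero value outside the stated ranges and the idea of sending the result over serial to the microcontroller are assumptions, not details from the paper.

```python
# Sketch of the distance-to-PWM mapping quoted in the abstract
# (255 for 2-4 m, 100 for 4.01-6 m, 60 for 6.01-8 m).
# The fallback value of 0 outside these ranges is an assumption.
def braking_pwm(distance_m):
    """Return the PWM duty value for the linear actuator at a given object distance."""
    if 2.0 <= distance_m <= 4.0:
        return 255          # hardest braking, object is close
    if 4.0 < distance_m <= 6.0:
        return 100
    if 6.0 < distance_m <= 8.0:
        return 60
    return 0                # no braking outside the reaction range (assumed)

for d in (3.0, 5.5, 7.2, 9.0):
    print(d, "m ->", braking_pwm(d))
```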