Agussalim Agussalim
Universitas Pembangunan Nasional “Veteran” Jawa Timur

Published: 1 Document
An Adaptive DTN Routing Protocol Using a Q-Learning Framework for Archipelagic Emergency Networks
Agussalim Agussalim; Henni Endah Wahanani; Andreas Nugroho Sihananto
CommIT (Communication and Information Technology) Journal Vol. 20 No. 1 (2026): CommIT Journal (in press)
Publisher : Bina Nusantara University


Abstract

Natural disasters in archipelagic regions often disrupt communication networks, particularly on geographically isolated islands where terrestrial infrastructure is limited and highly vulnerable. Hence, adaptive, infrastructure-independent solutions are required to maintain connectivity during emergencies. This research proposes an adaptive routing protocol for Delay-Tolerant Networks (DTN), named Q-learning-based Forwarding Routing (QFR), designed to enhance data delivery performance in disaster scenarios characterized by intermittent connectivity and constrained resources. QFR employs a lightweight, tabular Q-learning framework to make intelligent forwarding decisions based on real-time state information, including buffer occupancy, encounter history, and local node density. The protocol further integrates adaptive replica control and priority-based scheduling mechanisms to regulate congestion and optimize bandwidth and buffer utilization. Performance evaluation is conducted using the ONE Simulator with realistic maritime mobility traces derived from vessel movement patterns around Madura Island, Indonesia, representing inter-island emergency communication conditions. The results indicate that QFR consistently outperforms benchmark protocols such as Epidemic and PRoPHETv2, particularly in maintaining a high delivery ratio under heavy traffic loads while keeping routing overhead moderate and latency stable. Time-series analysis further demonstrates QFR's ability to improve its performance over time as the agent learns. The key finding is that a lightweight, adaptive algorithm based on a tabular Q-learning framework provides a practical and effective solution for reliable communication in resource-constrained emergency networks, avoiding the computational complexity of deep reinforcement learning approaches.
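The core mechanism described in the abstract, a tabular Q-learning agent that maps a discretized node state (buffer occupancy, encounter history, local density) to a forward/hold decision, can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the paper's actual implementation: the state binning, action set, reward shape, and all names (`QForwarder`, `discretize`, etc.) are assumptions made for illustration.

```python
import random

# Hypothetical sketch of a tabular Q-learning forwarding decision.
# Assumed state: (buffer occupancy, encounter count, local node density),
# each discretized into coarse bins; assumed actions: forward / hold.
# The real QFR state and reward definitions are not given here.

ACTIONS = ("forward", "hold")

class QForwarder:
    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = {}               # Q-table: (state, action) -> value
        self.alpha = alpha        # learning rate
        self.gamma = gamma        # discount factor
        self.epsilon = epsilon    # exploration probability

    @staticmethod
    def discretize(buffer_occupancy, encounters, density):
        # Coarse bins keep the Q-table tiny on resource-constrained nodes.
        return (min(int(buffer_occupancy * 4), 3),   # 4 buffer bins
                min(encounters // 5, 3),             # 4 encounter bins
                min(density // 3, 3))                # 4 density bins

    def choose(self, state):
        # Epsilon-greedy action selection over the two actions.
        if random.random() < self.epsilon:
            return random.choice(ACTIONS)
        return max(ACTIONS, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard tabular Q-learning update rule:
        # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
        best_next = max(self.q.get((next_state, a), 0.0) for a in ACTIONS)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (
            reward + self.gamma * best_next - old)

# Illustrative episode: reward a successful forward in a favorable state.
agent = QForwarder(epsilon=0.0)   # greedy, for a deterministic demo
s = agent.discretize(0.2, 12, 4)  # low buffer, frequent encounters
agent.update(s, "forward", reward=1.0, next_state=s)
print(agent.choose(s))            # -> forward (now has the higher Q-value)
```

The table-based update avoids the training cost and memory footprint of deep reinforcement learning, which matches the abstract's argument for lightweight on-node learning; any congestion-aware reward (e.g. penalizing forwards when buffers are full) would slot into the `reward` argument of `update`.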