Articles

Found 28 Documents

SURVEI MEKANISME CONGESTION KONTROL PADA TRANSMISSION CONTROL PROTOCOL DI SOFTWARE DEFINED NETWORK
Faishal Halim Saputra; Royyana Muslim Ijtihadie
JUTI: Jurnal Ilmiah Teknologi Informasi Vol. 16, No. 1, January 2018
Publisher : Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v16i1.a589

Abstract

Software Defined Network (SDN), a new networking paradigm, was developed to break up the vertical integration inside network devices, separating the control logic from the network infrastructure so that the state and condition of the network can be changed from a centralized, programmable controller. Most one-to-many communication in SDN is implemented as multiple unicast connections such as TCP, which is inefficient: it produces large amounts of replicated traffic, which can degrade application performance through problems such as congestion, redundancy, and collisions. Congestion occurs when a network carries a heavy load and performance drops because the volume of transmissions exceeds the capacity of the available routers. One way to handle congestion is to reduce the size of the TCP receive window. The main goal of this paper is to summarize several TCP-based congestion control mechanisms that researchers have proposed for handling congestion in networks.
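
To make the receive-window mechanism concrete, here is a minimal Python sketch (an illustration under assumed numbers, not code from the survey) of how shrinking the advertised receive window caps the data a TCP sender may keep in flight, throttling it even when its congestion window is large:

    # Minimal sketch (not from the paper): a smaller advertised TCP receive
    # window caps the amount of unacknowledged data a sender may keep in
    # flight. All names and numbers here are illustrative assumptions.

    def max_in_flight(cwnd_bytes: int, rwnd_bytes: int) -> int:
        # TCP sends at most min(congestion window, receive window) bytes
        # before it must wait for acknowledgements.
        return min(cwnd_bytes, rwnd_bytes)

    def segments_allowed(cwnd_bytes: int, rwnd_bytes: int, mss: int = 1460) -> int:
        # Number of full-size segments the sender may emit in one window.
        return max_in_flight(cwnd_bytes, rwnd_bytes) // mss

    if __name__ == "__main__":
        cwnd = 64 * 1024               # sender-side congestion window (bytes)
        for rwnd in (64 * 1024, 16 * 1024, 4 * 1024):
            print(f"rwnd={rwnd:>6} B -> "
                  f"{segments_allowed(cwnd, rwnd)} segments in flight")
        # Shrinking rwnd throttles the sender even when cwnd stays large,
        # which is the receive-window-based handling the survey cites.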
REPLIKASI DATA MENGGUNAKAN DETECTION CONTROLLER MODULE UNTUK MENCEGAH CONGESTION DI DATA CENTER
Erna Auparay; Royyana Muslim Ijtihadie
JUTI: Jurnal Ilmiah Teknologi Informasi Vol. 16, No. 1, January 2018
Publisher : Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v16i1.a590

Abstract

Distributed data replication is one of the important phases in centralized data distribution. Replication copies or duplicates data as part of the logical distribution process, so that the systems and applications sharing the data remain interconnected across the computer network. When data is distributed from one storage medium to another, synchronization between the source server and the destination servers keeps the data consistent. In the data center, Software Defined Network (SDN) controllers are used to control the distribution of the same data to multiple storage locations, which is useful when those locations require the same data or need a separate server for reporting applications. The SDN controllers also manage the distribution entities, which may behave differently on event triggers; however, dense traffic flows in the data center network can cause congestion and bottlenecks that require a mitigation strategy. This paper therefore discusses the data replication strategies used to cope with dense network traffic flows.
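
As a rough illustration of the fan-out replication and consistency guarantee the abstract describes (a sketch over assumed in-memory stores, not the paper's Detection Controller Module):

    # Illustrative sketch only: fan-out replication from a source server to
    # several destination servers, with a digest check so replicas stay
    # consistent. The in-memory "servers" here stand in for real storage.

    import hashlib

    def digest(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def replicate(source: dict, replicas: list) -> None:
        # Push every key from the source store to each replica (one-to-many).
        for key, value in source.items():
            for replica in replicas:
                replica[key] = value

    def consistent(source: dict, replicas: list) -> bool:
        # A replica is in sync when its content digests match the source's.
        src = {k: digest(v) for k, v in source.items()}
        return all({k: digest(v) for k, v in r.items()} == src for r in replicas)

    if __name__ == "__main__":
        source = {"report.csv": b"q1;q2;q3"}
        replicas = [{}, {}, {}]        # e.g. per-site reporting servers
        replicate(source, replicas)
        print("in sync:", consistent(source, replicas))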
DEVELOPMENT OF LOAD BALANCING MECHANISMS IN SDN DATA PLANE FAT TREE USING MODIFIED DIJKSTRA’S ALGORITHM
Muhammad Fattahilah Rangkuty; Royyana Muslim Ijtihadie; Tohari Ahmad
JUTI: Jurnal Ilmiah Teknologi Informasi Vol. 18, No. 2, July 2020
Publisher : Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v18i2.a1008

Abstract

SDN is a computer network approach that allows network administrators to manage network services through higher-level abstractions of functionality, by separating the system that decides where traffic is sent (the control plane) from the system that forwards traffic to the chosen destination (the data plane). SDN can suffer from network congestion, high latency, and reduced throughput when traffic is allocated unevenly across the available links, so a load-balancing method is needed. This technique divides the load evenly across the network components on the paths that connect the data plane and the source-destination (S-D) hosts. Our proposed Least Loaded Path (LLP) concept, a development of Dijkstra's algorithm, selects the best path by finding the route with both the shortest length and the smallest traffic load. The smallest traffic load (minimum cost) is obtained from the sum of the tx and rx counters on the data plane switch ports involved in the test, and the resulting route is then chosen as the best path in the load-balancing process.
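
One simplified reading of the LLP idea, sketched in Python (the topology and tx/rx counters are invented, and the paper's actual cost model may combine hop count with load):

    # Sketch of a Least-Loaded-Path variant of Dijkstra: each link is
    # weighted by its measured traffic load (tx + rx port counters), so the
    # cheapest path is the least loaded one. Topology below is made up.

    import heapq

    def least_loaded_path(graph, src, dst):
        # graph: {node: [(neighbor, tx_bytes, rx_bytes), ...]}
        dist, prev = {src: 0}, {}
        heap = [(0, src)]
        while heap:
            d, u = heapq.heappop(heap)
            if u == dst:
                break
            if d > dist.get(u, float("inf")):
                continue
            for v, tx, rx in graph[u]:
                cost = d + tx + rx          # link cost = current port load
                if cost < dist.get(v, float("inf")):
                    dist[v], prev[v] = cost, u
                    heapq.heappush(heap, (cost, v))
        path, node = [], dst
        while node != src:
            path.append(node)
            node = prev[node]
        return [src] + path[::-1], dist[dst]

    if __name__ == "__main__":
        g = {"h1": [("s1", 10, 5), ("s2", 1, 2)],
             "s1": [("h2", 3, 3)],
             "s2": [("h2", 2, 1)],
             "h2": []}
        print(least_loaded_path(g, "h1", "h2"))  # prefers the lighter s2 branch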
EFFICIENCY OF FLOODING BY DEVELOPING RELIABLE SUBNETWORK METHODS ON FIBBING ARCHITECTURE IN THE HYBRID ENVIRONMENT SDN
Dino Budi Prakoso; Royyana Muslim Ijtihadie; Tohari Ahmad
JUTI: Jurnal Ilmiah Teknologi Informasi Vol. 19, No. 1, January 2021
Publisher : Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v19i1.a1009

Abstract

In today's networks, connectivity between Autonomous Systems (AS) is indispensable, particularly through dynamic routing protocols, which are used far more often than static routing. Supporting such networks requires efficient and effective routing protocols that scale to a considerable size. Software Defined Network (SDN) is a networking innovation that separates the control plane from the data plane, making configuration on the control plane side easy. The control plane is also the focal point of bottlenecks in the SDN architecture: performance is a critical issue in large-scale deployments because the heavy demand load on the control plane results in low throughput. This research tests a hybrid SDN network that uses the OSPF routing protocol. The Fibbing architecture implemented on the hybrid SDN can help improve performance, but there are constraints when the flooding used to form fake nodes is sent: many nodes are not traversed as distribution paths during fake-node formation, which makes the throughput unstable and causes it to decrease. This can be overcome by using the Isolation Domain method to manage LSA Type-5 flooding efficiently.
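
A rough Python sketch of the Isolation Domain idea as described (the domain assignment and topology are assumptions, not the paper's implementation): flooding is forwarded only through nodes whose domain is allowed, which bounds how far an LSA Type-5 advertisement spreads:

    # Sketch: restricted flooding. Instead of flooding an LSA to every
    # neighbor, a node only forwards it to neighbors inside allowed domains,
    # keeping Type-5 flooding bounded. Everything here is illustrative.

    from collections import deque

    def flood(adj, domain_of, origin, allowed_domains):
        # adj: {node: [neighbors]}; domain_of: {node: domain id}.
        seen, queue = {origin}, deque([origin])
        while queue:
            node = queue.popleft()
            for nb in adj[node]:
                if nb not in seen and domain_of[nb] in allowed_domains:
                    seen.add(nb)          # nb receives and re-floods the LSA
                    queue.append(nb)
        return seen

    if __name__ == "__main__":
        adj = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}
        dom = {"a": 1, "b": 1, "c": 2, "d": 1}
        # Isolating domain 2 keeps node c out of the fake-node flooding.
        print(flood(adj, dom, "a", allowed_domains={1}))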
IMPLEMENTATION OF JOHNSON'S SHORTEST PATH ALGORITHM FOR ROUTE DISCOVERY MECHANISM ON SOFTWARE DEFINED NETWORK
Akbar Pandu Segara; Royyana Muslim Ijtihadie; Tohari Ahmad
JUTI: Jurnal Ilmiah Teknologi Informasi Vol. 19, No. 1, January 2021
Publisher : Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v19i1.a1011

Abstract

Software Defined Network is a network architecture with a new paradigm in which the control plane is placed separately from the data plane. All network behavior is controlled by the control plane, while the data plane, consisting of routers or switches, forwards packets. With a centralized control plane, SDN is very vulnerable to congestion because of its one-to-many communication model. There are several congestion control mechanisms for SDN; one of them modifies packets by reducing the size of the packets sent, but this is considered less effective because smaller packets mean a longer overall transfer time. Network administrators must therefore be able to configure the network with appropriate routing protocols and algorithms. Johnson's algorithm is used here to determine routes for packet forwarding: as an all-pairs shortest path algorithm, it can be applied to SDN to decide which route a packet will take by comparing all pairs of nodes in the network. In our experiments, Johnson's algorithm shows good latency and throughput against the comparison algorithm and remains superior in the trial results. Its response time when first performing a route search is also faster than that of the conventional OSPF algorithm, owing to the all-pairs nature of the algorithm, which determines the shortest routes by comparing every pair of nodes in the network.
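
For reference, a compact Python sketch of Johnson's all-pairs shortest path algorithm itself (the switch topology is illustrative, and negative-cycle detection is omitted for brevity):

    # Johnson's algorithm: Bellman-Ford from a virtual source reweights the
    # edges so they are non-negative, then Dijkstra runs from every node.

    import heapq

    def johnson(graph):
        # graph: {u: {v: weight}}; returns {u: {v: distance}}.
        nodes = list(graph)
        # 1) Bellman-Ford from a virtual source linked to all nodes by 0.
        h = {u: 0 for u in nodes}
        for _ in range(len(nodes) - 1):
            for u in nodes:
                for v, w in graph[u].items():
                    if h[u] + w < h[v]:
                        h[v] = h[u] + w
        # 2) Reweight every edge so all weights become non-negative.
        rew = {u: {v: w + h[u] - h[v] for v, w in graph[u].items()}
               for u in nodes}
        # 3) Dijkstra from each node, undoing the reweighting at the end.
        dist = {}
        for s in nodes:
            d, heap = {s: 0}, [(0, s)]
            while heap:
                du, u = heapq.heappop(heap)
                if du > d.get(u, float("inf")):
                    continue
                for v, w in rew[u].items():
                    if du + w < d.get(v, float("inf")):
                        d[v] = du + w
                        heapq.heappush(heap, (du + w, v))
            dist[s] = {v: dv - h[s] + h[v] for v, dv in d.items()}
        return dist

    if __name__ == "__main__":
        g = {"s1": {"s2": 2, "s3": 7}, "s2": {"s3": 1}, "s3": {"s1": 3}}
        print(johnson(g))   # e.g. dist["s1"]["s3"] == 3 via s2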
Survey on Risks Cyber Security in Edge Computing for The Internet of Things Understanding Cyber Attacks Threats and Mitigation
Tiara Rahmania Hadiningrum; Resky Ayu Dewi Talasari; Karina Fitriwulandari Ilham; Royyana Muslim Ijtihadie
JUTI: Jurnal Ilmiah Teknologi Informasi Vol. 23, No. 1, January 2025
Publisher : Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v23i1.a1210

Abstract

In an era of rapid technological development, the use of IoT continues to grow, especially in the context of edge computing. This survey paper carefully explores the security challenges that arise when implementing IoT on edge computing, focusing on the cyber attacks and threats that can affect system security. Through a literature survey, the paper identifies the cyber-security risks that may arise in IoT environments at the edge. A methodological approach is used to classify attacks by their impact on infrastructure, services, and communication. Four classification dimensions, namely Network Bandwidth Consumption Attacks, System Resources Consumption Attacks, Threats to Service Availability, and Threats to Communication, provide a basis for understanding and mitigating security risks. The paper is intended to provide a solid foundation for understanding IoT security in edge computing and to contribute to the development of effective security strategies. By focusing on a deep understanding of security risks, it encourages the development of adaptive security solutions that can address the security challenges growing alongside the rapid adoption of IoT technology at the edge.
A Comparative Study Evaluation of Kafka and RabbitMQ: Performance, Scalability and Stress Test in Distributed Messaging Systems
Muhammad Rias; Ach Muhyil Umam; Anani Asmani; Royyana Muslim Ijtihadie
JUTI: Jurnal Ilmiah Teknologi Informasi Vol. 24, No. 1, January 2026
Publisher : Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v24i1.a1345

Abstract

Apache Kafka and RabbitMQ are the two most widely used Message-Oriented Middleware (MOM) technologies, and they differ fundamentally in architecture and performance characteristics. Kafka is designed for high-throughput, highly scalable data stream processing, while RabbitMQ excels in message routing flexibility, delivery reliability, and complex queue management. This study presents a comprehensive comparative analysis of the two message brokers, evaluating performance, scalability, and behaviour under stress in order to guide the selection of the most suitable broker for modern distributed system architectures. The experiments cover four scenarios: message sizes of 1 KB, 10 KB, and 100 KB to measure performance as a function of payload size; message volumes of 10,000, 50,000, and 100,000 messages to find throughput limits and resource usage; 1, 5, and 10 consumers to measure consumer-side scalability; and a high-intensity stress test of 100,000 messages in 10 seconds to evaluate stability and latency under overload. Key performance metrics such as throughput, latency, CPU usage, and RAM consumption are carefully evaluated. Overall, RabbitMQ proved more suitable for workloads sensitive to message speed and volume, while Kafka was more appropriate for extreme workloads with high durability requirements. The experiments provide empirical data showing that RabbitMQ is highly effective for applications that send high volumes of individual, low-latency messages, while Kafka's strength lies in handling large data streams and maintaining stability under intense and sustained loads.
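
The measurement loop behind such scenarios can be sketched broker-agnostically. In the following Python sketch, publish is a placeholder for a real Kafka or RabbitMQ client call (e.g. via the kafka-python or pika libraries); the loop and sizes merely mirror the scenarios above and are not the paper's harness:

    # Hedged sketch: time N message publishes and compute throughput and
    # mean latency. `publish` stands in for a real broker client call.

    import time

    def run_scenario(publish, n_messages: int, payload: bytes):
        latencies = []
        start = time.perf_counter()
        for _ in range(n_messages):
            t0 = time.perf_counter()
            publish(payload)                      # one broker round trip
            latencies.append(time.perf_counter() - t0)
        elapsed = time.perf_counter() - start
        return {"throughput_msg_s": n_messages / elapsed,
                "mean_latency_ms": 1000 * sum(latencies) / len(latencies)}

    if __name__ == "__main__":
        noop = lambda payload: None               # stand-in for a real client
        for size in (1_024, 10_240, 102_400):     # 1 KB / 10 KB / 100 KB
            print(size, run_scenario(noop, 10_000, b"x" * size))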
A Survey On Causal Consistency Implementation In Geo-Replicated Cases
Muhamad Syukron; Chilyatun Nisa; Abdul Aziz; Royyana Muslim Ijtihadie
IJCONSIST JOURNALS Vol. 5, No. 2 (2024): March
Publisher : International Journal of Computer, Network Security and Information System

DOI: 10.33005/ijconsist.v5i2.145

Abstract

Distributed storage systems are a fundamental component of large-scale internet services. To meet users' increasing demands for availability and low latency, data storage design has evolved toward data replication techniques, one of which is geo-replication. Causal consistency is an attractive model for geo-replicated data because it sits at a crucial point between ease of programming and resulting performance, enabling high availability and low latency; however, when implemented in cloud storage, it faces limitations in throughput and cost. We surveyed several models that apply causal consistency to geo-replication, as designed by previous researchers; the models are drawn from papers on causal consistency in geo-replication published within the last five years. In this study we compared the performance of these previously designed models based on their reported results, and the outcome is a grouping of the models by their throughput and latency performance.
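
As background for the surveyed models, here is a minimal Python sketch of the vector-clock delivery rule that many causal-consistency designs build on (purely illustrative, not any surveyed system's code):

    # Sketch: a replica may apply a remote write only when all of that
    # write's causal dependencies have already been applied locally.

    def can_apply(msg_clock, sender, local_clock):
        # Deliverable iff it is the next event from the sender and every
        # dependency on other replicas is already reflected locally.
        for replica, count in msg_clock.items():
            if replica == sender:
                if count != local_clock.get(replica, 0) + 1:
                    return False
            elif count > local_clock.get(replica, 0):
                return False
        return True

    if __name__ == "__main__":
        local = {"A": 1, "B": 0}
        # B's write depends on having seen A's 2nd event -> must wait.
        print(can_apply({"A": 2, "B": 1}, "B", local))   # False
        print(can_apply({"A": 1, "B": 1}, "B", local))   # True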