Articles

Analisis Perbandingan Kinerja Algoritme Dijkstra, Bellman-Ford, dan Floyd-Warshall Untuk Penentuan Jalur Terpendek Pada Arsitektur Jaringan Software Defined Network Aprillia Arum Pratiwi; Widhi Yahya; Mahendra Data
Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer Vol 3 No 1 (2019): Januari 2019
Publisher : Fakultas Ilmu Komputer (FILKOM), Universitas Brawijaya


Abstract

Software defined network (SDN) is a centralized and more flexible network concept than the traditional networks in use today. SDN has been developed over the last few years and is now widely implemented, one application being network routing. Routing is the process of finding the communication path used to send packets from a sender to a recipient. Three routing algorithms for determining the shortest path are implemented: the Dijkstra, Bellman-Ford, and Floyd-Warshall algorithms. The three algorithms are implemented using the Mininet emulator and the Ryu controller. Path determination requires a link weight, or cost. Cost in this study is based on a reference bandwidth of 1000 Mbps and link bandwidths of three capacities: 10 Mbps, 100 Mbps, and 1000 Mbps. Testing is done with five parameters: validation, convergence time, throughput, recovery time, and packet loss. The validation results show that the system behaves according to the manual calculation of each algorithm. In convergence time testing, Dijkstra was superior with an average of 0.0087 seconds, compared to 0.0094 seconds for Bellman-Ford and 0.02025 seconds for Floyd-Warshall. In throughput testing, the three algorithms show no significant difference. Based on recovery time testing, the Floyd-Warshall algorithm recovers faster than the other algorithms. Based on packet loss testing, Dijkstra remains superior in handling packet loss during transmission.
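Of the three algorithms compared above, Dijkstra's is the most compact to sketch. The following is an illustrative Python sketch of Dijkstra's shortest-path search over a weighted graph; the topology and the costs (derived here as reference bandwidth / link bandwidth, following the study's cost scheme) are invented for the example and are not the paper's test topology.

```python
import heapq

def dijkstra(graph, source):
    """Return the shortest-path cost from source to every reachable node.
    graph: dict mapping node -> list of (neighbor, cost) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Illustrative topology: cost = reference bandwidth (1000 Mbps) / link bandwidth
graph = {
    "s1": [("s2", 1), ("s3", 10)],    # 1000 Mbps and 100 Mbps links
    "s2": [("s1", 1), ("s3", 100)],   # 10 Mbps link
    "s3": [("s1", 10), ("s2", 100)],
}
costs = dijkstra(graph, "s1")
```

With these weights the direct 100 Mbps link s1-s3 (cost 10) beats the detour through s2 (cost 1 + 100).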
Load Balancing Server Web Berdasarkan Jumlah Koneksi Klien Pada Docker Swarm Dimas Setiawan Afis; Mahendra Data; Widhi Yahya
Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer Vol 3 No 1 (2019): Januari 2019
Publisher : Fakultas Ilmu Komputer (FILKOM), Universitas Brawijaya


Abstract

Most web servers in use today still have a single backend server architecture. The problem that arises is whether a single server is able to handle large volumes of requests. Web server clustering should be considered to improve web server reliability. We built this cluster using virtualization technology, specifically containers. One of the container-based virtualization platforms currently available is Docker. However, managing multiple containers that form a single service is a challenging task. Docker introduces a distributed system development tool called Docker Swarm. We propose a load balancing mechanism on Docker Swarm to balance the internal load so that requests are distributed across the web servers. In addition, we compare the performance of the round-robin and least-connection algorithms for the load balancing mechanism used on Docker Swarm. The test results show that load balancing with the least-connection algorithm achieves a throughput of 15 Mbps at 1000 requests, 17 Mbps at 3000 requests, and 17 Mbps at 5000 requests, while the round-robin algorithm achieves 15 Mbps at 1000 requests, 14 Mbps at 3000 requests, and 15 Mbps at 5000 requests. The results show that the least-connection algorithm performs better than the round-robin algorithm. In addition, the distribution of requests is balanced across the available web servers.
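The least-connection policy evaluated above reduces to a simple rule: send each new request to the backend with the fewest active connections. The following is a toy Python sketch of that rule, not the balancer used in the study; the backend names are invented.

```python
class LeastConnBalancer:
    """Toy least-connection balancer (illustrative sketch only):
    pick the backend with the fewest currently active connections."""
    def __init__(self, backends):
        self.active = {b: 0 for b in backends}

    def acquire(self):
        # min() breaks ties by dict insertion order, so behavior is deterministic
        backend = min(self.active, key=self.active.get)
        self.active[backend] += 1
        return backend

    def release(self, backend):
        self.active[backend] -= 1

lb = LeastConnBalancer(["web1", "web2", "web3"])
first = lb.acquire()   # all backends idle, so the first one is chosen
second = lb.acquire()  # a different, still-idle backend is chosen next
lb.release(first)      # the request on the first backend completes
```

A round-robin balancer, by contrast, would cycle through the backends regardless of how many connections each is still holding.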
Implementasi Penyimpanan Data Persisten pada Docker Swarm Menggunakan Network File System (NFS) Andreas Frederius; Mahendra Data; Widhi Yahya
Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer Vol 3 No 2 (2019): Februari 2019
Publisher : Fakultas Ilmu Komputer (FILKOM), Universitas Brawijaya


Abstract

Docker Swarm is a distributed system technology for managing a group of Docker engines. With Docker Swarm, multiple containers can run at the same time across the Docker engine group. A distributed deployment using Docker Swarm needs persistent data storage, but Docker Swarm stores data inside the container, so if the container is deleted the data is deleted with it. Therefore, an alternative persistent storage is required. Network File System (NFS) is an open protocol that can be used to share files across many computer networks and operating systems. The NFS architecture on Docker Swarm in this study uses a client-server design: Docker Swarm acts as the client and NFS as the server. NFS provides persistent data storage for Docker Swarm, preserving the data even if the container or the machine is restarted, and data previously stored on the NFS server can be retrieved again by Docker Swarm. The average write speed on NFS is 30,168 KB/s, while the average read speed is 63,939 KB/s.
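The read and write figures above can be reproduced with a simple timing harness. Below is a hypothetical Python sketch that measures sequential write and read speed on a given path; in the study's setup the path would sit on the NFS-backed volume, while here, for a self-contained example, it is just a temporary local file.

```python
import os
import tempfile
import time

def measure_rw_speed(path, size_kb=1024):
    """Write then read size_kb of random data at path.
    Returns (write KB/s, read KB/s, data_intact)."""
    data = os.urandom(size_kb * 1024)

    start = time.perf_counter()
    with open(path, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())  # force the data to storage (or to the NFS server)
    write_speed = size_kb / (time.perf_counter() - start)

    start = time.perf_counter()
    with open(path, "rb") as f:
        read_back = f.read()
    read_speed = size_kb / (time.perf_counter() - start)

    return write_speed, read_speed, read_back == data

with tempfile.TemporaryDirectory() as d:
    w, r, ok = measure_rw_speed(os.path.join(d, "bench.bin"))
```

On an NFS mount the `os.fsync` call matters: without it, the client-side page cache can make write speeds look far higher than the server can actually sustain.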
Implementasi Algoritme Grain Untuk Pengamanan Data Rekam Medis Yoga Rizwan Priyatna; Ari Kusyanti; Mahendra Data
Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer Vol 3 No 4 (2019): April 2019
Publisher : Fakultas Ilmu Komputer (FILKOM), Universitas Brawijaya


Abstract

A medical record is a document that contains the personal data and treatment history of a patient in a hospital. Medical records are sensitive and confidential, so unauthorized persons or parties must not be able to read the information in a patient's medical record. When a hospital cannot provide the best care for a patient, the patient may be referred to another hospital with better resources, which involves the exchange of the patient's medical record. Medical record exchange needs a secure system that can guarantee the confidentiality and integrity of the data during the delivery process. This research implements the Grain v1 algorithm and the Secure Hash Algorithm 3 (SHA-3) for secure medical record exchange between hospitals. Grain v1 is a stream cipher used to ensure the confidentiality of the data during the exchange by encrypting and decrypting it. SHA-3 is a hashing algorithm used to ensure the integrity of the data. The Grain v1 and SHA-3 implementations were validated against test vectors to confirm that the algorithms are built correctly in the system. The results show that the Grain v1 implementation performs encryption in an average of 0.0077450 seconds, while decryption requires an average of 0.0145436 seconds.
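Grain v1 has no standard-library implementation, but the integrity half of the scheme above can be illustrated with Python's built-in SHA-3 support. The sketch below computes a SHA3-256 digest for a record before it is "sent" and verifies the digest on receipt; the record content is invented for illustration, and in the study's scheme the record itself would additionally be encrypted with Grain v1.

```python
import hashlib

def digest(record: bytes) -> str:
    """SHA3-256 digest of a record, as a hex string."""
    return hashlib.sha3_256(record).hexdigest()

def verify(record: bytes, expected: str) -> bool:
    """Check that a received record still matches its digest."""
    return digest(record) == expected

record = b"patient: X; diagnosis: ...; treatment: ..."
tag = digest(record)                    # sent alongside the (encrypted) record
intact = verify(record, tag)            # True: record unchanged in transit
tampered = verify(record + b"!", tag)   # False: any modification is detected
```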
Implementasi Load Balancing Server Web Berbasis Docker Swarm Berdasarkan Penggunaan Sumber Daya Memory Host Mohamad Rexa Mei Bella; Mahendra Data; Widhi Yahya
Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer Vol 3 No 4 (2019): April 2019
Publisher : Fakultas Ilmu Komputer (FILKOM), Universitas Brawijaya


Abstract

Container-based virtualization is very popular in software development because it is lightweight: the Linux kernel allows resources to be shared between containers so that containers do not interfere with each other's performance. One of the most widely used container platforms is Docker, open source software that can be modified as needed. Docker containers can be used to cluster web servers, a method that reduces the risk of a single point of failure (SPOF). However, managing many containers is complex; Docker provides an engine to solve this problem, called Docker Swarm. Docker Swarm has internal load balancing, but it only balances between containers and cannot be monitored, which can leave resources unequally distributed between hosts. Therefore, the purpose of this research is to distribute web server traffic inside a Docker Swarm using load balancing based on the resource utilization of the host machine. Several tests evaluate the system's functionality and performance: time-based failover and memory-based load balancing. From the test results, we found that time-based failover and the memory-based load balancer work on Docker Swarm and can solve the problem of unequal distribution between hosts.
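A memory-based balancing decision like the one evaluated above boils down to choosing the host with the most free memory. This Python sketch is illustrative only: the host names and memory figures are assumptions, and a real deployment would collect the readings from a monitoring agent on each Swarm node.

```python
def pick_host(memory_usage):
    """Pick the host with the lowest memory utilization.
    memory_usage: dict mapping host -> used-memory fraction (0.0-1.0)."""
    return min(memory_usage, key=memory_usage.get)

# Hypothetical readings gathered from a monitoring agent on each host
usage = {"host1": 0.82, "host2": 0.41, "host3": 0.67}
target = pick_host(usage)  # host2 has the most free memory
```

Routing the next request (or placing the next container) on `target` steers load away from hosts whose memory is already under pressure, which is exactly the gap the study identifies in Swarm's built-in balancer.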
Analisis Perbandingan Kinerja Protokol Routing OLSR dan DSDV Pada MANET Berdasarkan Pergerakan Node Brillian Taufan; Rakhmadhany Primananda; Mahendra Data
Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer Vol 3 No 4 (2019): April 2019
Publisher : Fakultas Ilmu Komputer (FILKOM), Universitas Brawijaya


Abstract

A mobile ad hoc network (MANET) is a wireless network consisting of nodes that tend to move freely, without any infrastructure such as routers. In an ad hoc network, each node works as a router itself, responsible for finding and maintaining paths to each destination node in the network (Wijayanto, 2009). The MANET topology does not require a dedicated router to manage the routing process, because each device functions as a router to determine the path to be taken. Among the proactive routing protocols used in MANETs are OLSR (Optimized Link State Routing) and DSDV (Destination-Sequenced Distance Vector). OLSR and DSDV are used in this study because they are proactive routing protocols that can adapt quickly to dynamic link conditions. This study analyzes the effect of node movement on proactive protocol performance in a MANET topology using Network Simulator 3.25. The movement models used are Random Waypoint and Random Direction, and network performance is measured with three test parameters: average end-to-end delay, packet delivery ratio (PDR), and routing overhead. Tests were carried out on both proactive protocols, OLSR and DSDV, with scenarios varying the number of nodes (20, 30, 40, and 50), the simulation area (200 m², 500 m², 800 m², and 1000 m²), and the mobility model (Random Waypoint and Random Direction). For average end-to-end delay, the DSDV protocol performs better than the OLSR protocol, as indicated by DSDV's average end-to-end delay of 0.00107591 ms. However, for packet delivery ratio (PDR) and routing overhead, OLSR is better than DSDV, because OLSR has a multipoint relay (MPR) mechanism; MPRs reduce the number of broadcast messages carrying the same information and thereby reduce routing overhead.
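The test parameters named above are simple ratios over simulation counters. As an illustration (the counter values below are invented), the following Python sketch computes packet delivery ratio, average end-to-end delay, and routing overhead as these metrics are commonly defined; the study's exact definitions may differ in detail.

```python
def packet_delivery_ratio(received, sent):
    """Fraction of sent data packets that reached their destination."""
    return received / sent

def average_end_to_end_delay(delays):
    """Mean of per-packet end-to-end delays for the delivered packets."""
    return sum(delays) / len(delays)

def routing_overhead(control_packets, data_packets_delivered):
    """Commonly defined as control/routing packets per delivered data packet."""
    return control_packets / data_packets_delivered

# Invented counters for a single simulation run
pdr = packet_delivery_ratio(received=950, sent=1000)
delay = average_end_to_end_delay([0.010, 0.012, 0.008])
overhead = routing_overhead(control_packets=400, data_packets_delivered=950)
```

OLSR's MPR mechanism shows up directly in the last metric: fewer duplicated broadcast messages means a smaller `control_packets` count for the same delivered traffic.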
Implementasi Algoritme BLAKE2b untuk Pengecekan Integritas File Pramasita Gustiarum; Ari Kusyanti; Mahendra Data
Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer Vol 3 No 4 (2019): April 2019
Publisher : Fakultas Ilmu Komputer (FILKOM), Universitas Brawijaya


Abstract

A file is an object on a computer that stores data and information that can be read by a program. Files are kept in storage such as hard drives, DVDs, and flash disks. Besides being stored, files can also be moved, shared, changed, and duplicated. Because these operations can affect the data or information in a file, protecting file integrity is very important. Possible threats include corrupted data, unwanted modification, or a file carrying hidden data for criminal purposes. Therefore, a mechanism is needed to check the integrity of the data using the cryptographic technique called hashing. Hashing generates a unique value from a string; this unique value is called the hash value or digest. In this study, the BLAKE2b hash function is used to validate the integrity of a file. The experiments include timing BLAKE2b while validating files in .txt format, which gives an average of 18.60 s for a 500 KB file. The BLAKE2b processing time is then compared with MD5; the averages obtained in this experiment are 15.85 s and 18.30 s for the two algorithms, from which it is concluded that BLAKE2b works slightly faster than MD5. Lastly, an avalanche test of BLAKE2b gives an average probability of 0.589.
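BLAKE2b is available in Python's standard `hashlib` module, so the file-integrity check described above can be sketched directly. The file content here is invented for illustration; the digest is computed once, recorded, and recomputed later to confirm the file is unchanged.

```python
import hashlib
import tempfile

def blake2b_file_digest(path, chunk_size=8192):
    """Stream a file through BLAKE2b and return its hex digest."""
    h = hashlib.blake2b()  # default 64-byte (512-bit) digest
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Create an illustrative .txt file and record its digest
tmp = tempfile.NamedTemporaryFile(suffix=".txt", delete=False)
tmp.write(b"hello, integrity")
tmp.close()
original = blake2b_file_digest(tmp.name)

# Later: recompute and compare; equal digests mean the file is intact
still_intact = blake2b_file_digest(tmp.name) == original
```

Reading in chunks keeps memory use constant regardless of file size, which matters for the larger files the study times.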
Analisis Kinerja Algoritme Speck Pada Keamanan File Teks Karmila Dewi Sulistyowati; Ari Kusyanti; Mahendra Data
Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer Vol 3 No 4 (2019): April 2019
Publisher : Fakultas Ilmu Komputer (FILKOM), Universitas Brawijaya


Abstract

SPECK was published by the NSA in 2013 as a new block cipher. The cipher has 10 variants based on block and key sizes, each with its own characteristics for the encryption and decryption process. This research analyzes the performance of SPECK in securing text files by performing encryption and decryption. Two experiments were conducted. The first analyzed the performance of three SPECK variants: SPECK128/128, SPECK128/192, and SPECK128/256. These variants were chosen because previous research used the AES algorithm to secure data, and AES likewise has three variants based on key length: AES-128, AES-192, and AES-256. The result of this experiment was that the longer the key used, the faster the encryption and decryption process in the system. The second experiment analyzed variations in file size: 50 KB, 100 KB, 150 KB, 200 KB, 250 KB, and 300 KB. The result was that the larger the file, the longer the processing time. The results were then analyzed with Kruskal-Wallis and post hoc tests, which concluded that each 50 KB change in file size produces a significant difference in processing time.
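SPECK's round function is small enough to sketch in full. Below is a self-contained Python sketch of SPECK128/128 (two 64-bit words per block, 32 rounds) following the publicly specified round function and key schedule; it is illustrative code, not the implementation benchmarked in the study, and the key and plaintext in the usage lines are arbitrary example values.

```python
MASK = 2**64 - 1
ROUNDS = 32  # SPECK128/128

def ror(x, r):
    return ((x >> r) | (x << (64 - r))) & MASK

def rol(x, r):
    return ((x << r) | (x >> (64 - r))) & MASK

def expand_key(k1, k0):
    """SPECK128/128 key schedule: two 64-bit key words -> 32 round keys."""
    keys, l, k = [k0], k1, k0
    for i in range(ROUNDS - 1):
        l = (((k + ror(l, 8)) & MASK)) ^ i
        k = rol(k, 3) ^ l
        keys.append(k)
    return keys

def encrypt(x, y, keys):
    """One SPECK round per key: x = (x >>> 8) + y ^ k, y = (y <<< 3) ^ x."""
    for k in keys:
        x = ((ror(x, 8) + y) & MASK) ^ k
        y = rol(y, 3) ^ x
    return x, y

def decrypt(x, y, keys):
    """Exact inverse of encrypt, applying the round keys in reverse."""
    for k in reversed(keys):
        y = ror(y ^ x, 3)
        x = rol(((x ^ k) - y) & MASK, 8)
    return x, y

keys = expand_key(0x0F0E0D0C0B0A0908, 0x0706050403020100)
pt = (0x6C61766975716520, 0x7469206564616D20)
ct = encrypt(*pt, keys)
recovered = decrypt(*ct, keys)
```

Because decryption is the exact algebraic inverse of encryption, a round trip must return the original block; checking this against the published SPECK test vectors is the validation step such an implementation would need before timing experiments.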
Implementasi Algoritme SPECK Block Cipher dan Shamir's Secret Sharing Pada File Teks Novita Krisma Diarti; Ari Kusyanti; Mahendra Data
Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer Vol 3 No 4 (2019): April 2019
Publisher : Fakultas Ilmu Komputer (FILKOM), Universitas Brawijaya


Abstract

A formal letter is one example of important data stored as a text file, and security is therefore needed to protect the file. File security aims to ensure that only certain people can access the contents of the file. Encryption is one cryptographic technique for securing files, and cryptography offers many algorithms for the encryption process. In this study, two cryptographic algorithms are used to secure files: the SPECK block cipher and Shamir's secret sharing. The SPECK block cipher is used for data encryption and decryption, while Shamir's secret sharing is used to share the confidential data in the file. Both algorithms are used to protect text files so that data security is maintained against irresponsible parties. Testing involves three 4 KB files containing 3, 8, and 13 lines respectively. The results show average processing times of 9.433 s, 91.938 s, and 619.036 s respectively for these files. The average CPU usage is 7.058%, 25.855%, and 31.095% respectively, and the average RAM usage is 377.277 MB, 414.283 MB, and 440.231 MB respectively.
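Shamir's secret sharing splits a secret into n shares such that any k of them reconstruct it by Lagrange interpolation over a finite field, while fewer than k reveal nothing. The sketch below is illustrative Python over a prime field; the field size, threshold, and secret are assumptions for the example, since the abstract does not state the study's parameters.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime, large enough for a small numeric secret

def split(secret, n, k):
    """Split secret into n shares; any k shares can reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(PRIME)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse (Fermat's little theorem)
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(123456789, n=5, k=3)
recovered = reconstruct(shares[:3])  # any 3 of the 5 shares suffice
```

In the study's construction, the quantity being shared would be the SPECK key (or ciphertext) rather than a bare integer, but the splitting and reconstruction work the same way.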
Implementasi Shared Session Dalam Klaster Server Web Menggunakan PHP dan MySQL R. Moch Makruf Puja Pradana; Mahendra Data; Dany Primanita Kartikasari
Jurnal Pengembangan Teknologi Informasi dan Ilmu Komputer Vol 3 No 5 (2019): Mei 2019
Publisher : Fakultas Ilmu Komputer (FILKOM), Universitas Brawijaya


Abstract

A session is a method used to store information on a server so that it can be used across several pages, including the page that created it. A web server cluster can deliver better performance than a single server handling a website, but some problems arise when combining a web server cluster with sessions. In a web server cluster, each node runs independently; if a session is created on one server node, the other server nodes cannot access the same session. One solution is to save the session data in a MySQL database so that access to the session data can be shared with the other web servers. Another problem is how each web server in the cluster communicates with the database when storing and requesting session data. To address these problems, this study develops a shared session method implemented in PHP. To prove that the method works as intended, two tests are applied: login/logout testing and black box testing. The login and logout tests confirm that session data can be stored and shared even when one web server is disabled. The black box tests measure the response time of each scenario: the first scenario of 100 session requests produces a response time of 18.7 to 26.2 seconds, the second scenario of 200 session requests produces a response time of 1 to 1.38 minutes, and the third scenario of 300 session requests produces a response time of 10.07 to 15.25 seconds.
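The shared-session idea above, keeping session data in a database instead of in per-server files, can be sketched independently of PHP. The Python sketch below uses an in-memory SQLite database as a stand-in for the shared MySQL server purely so the example is self-contained; the table layout and session format are assumptions, not the study's schema.

```python
import sqlite3
import uuid

# In-memory SQLite stands in for the shared MySQL server in this sketch
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sessions (id TEXT PRIMARY KEY, data TEXT)")

def save_session(session_id, data):
    """Store (or overwrite) a session row in the shared database."""
    db.execute("INSERT OR REPLACE INTO sessions VALUES (?, ?)",
               (session_id, data))
    db.commit()

def load_session(session_id):
    """Fetch a session by id; any web server in the cluster can call this."""
    row = db.execute("SELECT data FROM sessions WHERE id = ?",
                     (session_id,)).fetchone()
    return row[0] if row else None

def destroy_session(session_id):
    """Logout: remove the session so no server accepts it afterwards."""
    db.execute("DELETE FROM sessions WHERE id = ?", (session_id,))
    db.commit()

# Web server 1 logs a user in; web server 2 reads the same session
sid = str(uuid.uuid4())
save_session(sid, "user=alice;logged_in=1")  # done by web server 1
shared = load_session(sid)                   # done by web server 2
destroy_session(sid)                         # logout from either server
```

Because every node reads and writes the same table, a login performed on one server remains valid on the others, and a logout invalidates it everywhere, which is what the login/logout tests above verify.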