Contact Name
Yuliah Qotimah
Contact Email
yuliah@lppm.itb.ac.id
Phone
+622286010080
Journal Mail Official
jictra@lppm.itb.ac.id
Editorial Address
LPPM ITB, Center for Research and Community Services (CRCS) Building, 6th Floor, Jl. Ganesha No. 10, Bandung 40132, Indonesia. Phone: +62-22-86010080, Fax: +62-22-86010051
Location
Kota Bandung,
Jawa Barat
INDONESIA
Journal of ICT Research and Applications
ISSN: 2337-5787 | EISSN: 2338-5499 | DOI: https://doi.org/10.5614/itbj.ict.res.appl.
Core Subject: Science
Journal of ICT Research and Applications welcomes full research articles in the area of Information and Communication Technology from the following subject areas: Information Theory, Signal Processing, Electronics, Computer Network, Telecommunication, Wireless & Mobile Computing, Internet Technology, Multimedia, Software Engineering, Computer Science, Information System and Knowledge Management.
Articles: 302 Documents
Enhanced Relative Comparison of Traditional Sorting Approaches towards Optimization of New Hybrid Two-in-One (OHTO) Novel Sorting Technique Rajeshwari B S; C.B. Yogeesha; M. Vaishnavi; Yashita P. Jain; B.V. Ramyashree; Arpith Kumar
Journal of ICT Research and Applications Vol. 17 No. 2 (2023)
Publisher : DRPM - ITB

DOI: 10.5614/itbj.ict.res.appl.2023.17.2.2

Abstract

In the world of computer technology, sorting is an operation on a data set that involves ordering it in an increasing or decreasing fashion according to some linear relationship among the data items. With the rise in the generation of big data, the concept of big numbers has come into existence. When the number of records to be sorted is limited to thousands, traditional sorting approaches can be used; in such cases, complexities in their execution time can be ignored. However, in the case of big data, where processing times for billions or trillions of records are very long, time complexity is very significant. Therefore, an optimized sorting technique with efficient time complexity is very much required. Hence, in this paper an optimized sorting technique is proposed, named Optimized Hybrid Two-in-One Novel Sorting Technique (OHTO), a mixed approach of the Insertion Sort technique and the Bubble Sort technique. The proposed sorting technique uses the procedure of both Bubble Sort and Insertion Sort, resulting in fewer comparisons, fewer data movements, fewer data insertions, and lower time complexity for any given input data set compared to existing sorting techniques.
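The abstract does not give the OHTO procedure itself, but the general idea of mixing Bubble Sort passes with Insertion Sort placement can be sketched as follows. The `hybrid_sort` function and its switching rule are illustrative assumptions, not the authors' algorithm.

```python
def hybrid_sort(data):
    """Illustrative two-in-one sort: one bubble pass moves the largest
    element to the end, then insertion sort finishes the remaining prefix.
    NOT the authors' OHTO procedure, only a sketch of mixing the two."""
    a = list(data)
    n = len(a)
    # One bubble pass: after it, a[-1] holds the maximum.
    swapped = False
    for i in range(n - 1):
        if a[i] > a[i + 1]:
            a[i], a[i + 1] = a[i + 1], a[i]
            swapped = True
    if not swapped:          # input was already sorted, stop early
        return a
    # Insertion sort on the unsorted prefix a[0..n-2].
    for i in range(1, n - 1):
        key = a[i]
        j = i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

print(hybrid_sort([5, 2, 9, 1, 7]))   # [1, 2, 5, 7, 9]
```

The early exit after a swap-free bubble pass is where a hybrid of this kind can save comparisons on nearly sorted inputs.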
Improving Robustness Using MixUp and CutMix Augmentation for Corn Leaf Diseases Classification based on ConvMixer Architecture Li Hua Li; Radius Tanone
Journal of ICT Research and Applications Vol. 17 No. 2 (2023)
Publisher : DRPM - ITB

DOI: 10.5614/itbj.ict.res.appl.2023.17.2.3

Abstract

Corn leaf diseases such as blight spot, gray leaf spot, and common rust still lurk in corn fields. This problem must be solved to help corn farmers. The ConvMixer model, consisting of a patch embedding layer, is a new model with a simple structure. When training a ConvMixer model, further refinement of the training process is an important aspect that needs to be explored to achieve better accuracy. By using advanced data augmentation techniques such as MixUp and CutMix, the robustness of the ConvMixer model can be well achieved for corn leaf disease classification. We describe experimental evidence in this article using precision, recall, accuracy score, and F1 score as performance metrics. As a result, training the ConvMixer model on the data set without augmentation achieved an accuracy of 0.9812, but this could still be improved. Indeed, when we used MixUp and CutMix augmentation, accuracy increased significantly to 0.9925 and 0.9932, respectively.
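For reference, MixUp and CutMix are standard augmentations; a minimal NumPy sketch of both is shown below. The batch layout and hyperparameters are assumptions for illustration, not the authors' training code.

```python
import numpy as np

def mixup(x, y, alpha=0.2, rng=np.random.default_rng(0)):
    """MixUp: convex combination of two images and their labels.
    x: (N, H, W, C) float images, y: (N, num_classes) one-hot labels."""
    lam = rng.beta(alpha, alpha)
    idx = rng.permutation(len(x))
    return lam * x + (1 - lam) * x[idx], lam * y + (1 - lam) * y[idx]

def cutmix(x, y, alpha=1.0, rng=np.random.default_rng(0)):
    """CutMix: paste a random rectangle from a shuffled batch and mix
    the labels in proportion to the pasted area."""
    n, h, w, _ = x.shape
    lam = rng.beta(alpha, alpha)
    idx = rng.permutation(n)
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = rng.integers(h), rng.integers(w)
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    mixed = x.copy()
    mixed[:, y1:y2, x1:x2, :] = x[idx, y1:y2, x1:x2, :]
    lam_adj = 1 - ((y2 - y1) * (x2 - x1)) / (h * w)   # actual area kept
    return mixed, lam_adj * y + (1 - lam_adj) * y[idx]
```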
Generative Adversarial Networks Based Scene Generation on Indian Driving Dataset K. Aditya Shastry; B.A. Manjunatha; T.G. Mohan Kumar; D.U. Karthik
Journal of ICT Research and Applications Vol. 17 No. 2 (2023)
Publisher : DRPM - ITB

DOI: 10.5614/itbj.ict.res.appl.2023.17.2.4

Abstract

The rate of advancement in the field of artificial intelligence (AI) has drastically increased over the past twenty years or so. From AI models that can classify every object in an image to realistic chatbots, the signs of progress can be found in all fields. This work focused on tackling a relatively new problem in the current scenario: the generative capabilities of AI. While classification and prediction models have matured and entered the mass market across the globe, generation through AI is still in its initial stages. Generative tasks consist of an AI model learning the features of a given input and using these learned values to generate completely new output values that were not originally part of the input dataset. The most common input type given to generative models is images. The most popular architectures for generative models are autoencoders and generative adversarial networks (GANs). Our study aimed to use GANs to generate realistic images from a purely semantic representation of a scene. While our model can be used on any kind of scene, we used the Indian Driving Dataset to train it. Through this work, we could arrive at answers to the following questions: (1) the scope of GANs in interpreting and understanding textures and variables in complex scenes; (2) the application of such a model in the field of gaming and virtual reality; (3) the possible impact of generating realistic deep fakes on society.
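A conditional GAN of the kind described maps a semantic label map to an image while a discriminator judges (map, image) pairs. The PyTorch sketch below shows only the adversarial objective; the layer sizes and class count are placeholders, not the architecture used in the paper.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 20   # assumed number of semantic classes

# Toy generator: one-hot semantic map -> RGB image in [-1, 1].
generator = nn.Sequential(
    nn.Conv2d(NUM_CLASSES, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
)
# Toy discriminator: judges concatenated (semantic map, image) pairs.
discriminator = nn.Sequential(
    nn.Conv2d(NUM_CLASSES + 3, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 4, stride=2, padding=1),   # patch-level real/fake logits
)
bce = nn.BCEWithLogitsLoss()

def gan_step(semantic_map, real_image):
    """One adversarial step: D separates real from generated images,
    G tries to fool D, both conditioned on the semantic map."""
    fake = generator(semantic_map)
    d_real = discriminator(torch.cat([semantic_map, real_image], dim=1))
    d_fake = discriminator(torch.cat([semantic_map, fake.detach()], dim=1))
    d_loss = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    g_score = discriminator(torch.cat([semantic_map, fake], dim=1))
    g_loss = bce(g_score, torch.ones_like(g_score))
    return d_loss, g_loss

# Tiny smoke test with zero tensors.
sem = torch.zeros(1, NUM_CLASSES, 64, 64)
img = torch.zeros(1, 3, 64, 64)
d_loss, g_loss = gan_step(sem, img)
```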
Scene Segmentation for Interframe Forgery Identification Andriani; Rimba Whidiana Ciptasari; Hertog Nugroho
Journal of ICT Research and Applications Vol. 17 No. 2 (2023)
Publisher : DRPM - ITB

DOI: 10.5614/itbj.ict.res.appl.2023.17.2.5

Abstract

A common type of video forgery is inter-frame forgery, which occurs in the temporal domain and includes frame duplication, frame insertion, and frame deletion. Some existing methods are not effective at detecting forgeries in static scenes. This work proposes static and dynamic scene segmentation and performs forgery detection for each scene. Scene segmentation is performed for outlier detection based on changes in optical flow. Various similarity checks are performed to find the correlation for each frame. The experimental results showed that the proposed method is effective in identifying forgeries in various scenes, especially static scenes, compared with existing methods.
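A simple way to obtain the per-frame optical-flow signal on which such scene segmentation and outlier detection can operate is sketched below with OpenCV; the Farneback parameters and the 3-sigma outlier rule are illustrative assumptions, not the authors' exact method.

```python
import cv2
import numpy as np

def flow_magnitudes(video_path):
    """Mean dense optical-flow magnitude between consecutive frames
    (Farneback). Sudden jumps in this signal are candidate splice points."""
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    mags = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).mean())
        prev_gray = gray
    cap.release()
    return np.array(mags)

def outlier_transitions(mags, k=3.0):
    """Flag frame transitions whose flow magnitude deviates more than
    k standard deviations from the mean (simple outlier rule)."""
    return np.where(np.abs(mags - mags.mean()) > k * mags.std())[0]
```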
Smart Card-based Access Control System using Isolated Many-to-Many Authentication Scheme for Electric Vehicle Charging Stations Wervyan Shalannanda; Fajri Anugerah P. Kornel; Naufal Rafi Hibatullah; Fahmi Fahmi; Erwin Sutanto; Muhammad Yazid; Muhammad Aziz; Muhammad Imran Hamid
Journal of ICT Research and Applications Vol. 17 No. 2 (2023)
Publisher : DRPM - ITB

DOI: 10.5614/itbj.ict.res.appl.2023.17.2.8

Abstract

In recent years, the Internet of Things (IoT) trend has been adopted very quickly. The rapid growth of IoT has increased the need for physical access control systems (ACS) for IoT devices, especially for IoT devices containing confidential data or other potential security risks. This research focused on many-to-many ACS, a type of ACS in which many resource-owners and resource-users are involved in the same system. This type of system is advantageous in that a user can conveniently access resources from different resource-owners using the same system. However, the large number of parties involved also creates a risk that their data is leaked. Therefore, ‘isolation’ of the parties involved is needed. This research simulated the use of smart cards to access electric vehicle (EV) charging stations that implement an isolated many-to-many authentication scheme. Two ESP8266 MCUs, one RC522 RFID reader, and an LED represented an EV charging station. Each institute used a Raspberry Pi Zero W as the web and database server. This research also used VPN and HTTPS protocols to isolate each institute’s assets. Every component of the system was successfully implemented and tested functionally.
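The message flow of such a scheme, where a station forwards a scanned card UID to the issuing institute's server over HTTPS and fails closed on any error, might look like the Python sketch below. The endpoint URL and JSON field names are hypothetical, and the actual stations run ESP8266 firmware; this only illustrates the request/response logic.

```python
import requests

# Hypothetical authorization endpoint of one institute's server.
AUTH_SERVER = "https://institute-a.example.org/api/authorize"

def request_charging_access(card_uid: str, station_id: str) -> bool:
    """Send the scanned card UID to the card-issuing institute's server
    over HTTPS and allow charging only on an explicit 'authorized'."""
    try:
        resp = requests.post(
            AUTH_SERVER,
            json={"card_uid": card_uid, "station_id": station_id},
            timeout=5,
            verify=True,          # enforce TLS certificate checking
        )
        resp.raise_for_status()
        return resp.json().get("authorized", False)
    except requests.RequestException:
        return False              # fail closed on any network/server error

if request_charging_access("04A1B2C3D4", "station-01"):
    print("Relay on: start charging")
else:
    print("Access denied")
```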
The Evaluation of DyHATR Performance for Dynamic Heterogeneous Graphs Nasy`an Taufiq Al Ghifari; Gusti Ayu Putri Saptawati; Masayu Leylia Khodra; Benhard Sitohang
Journal of ICT Research and Applications Vol. 17 No. 2 (2023)
Publisher : DRPM - ITB

DOI: 10.5614/itbj.ict.res.appl.2023.17.2.7

Abstract

Dynamic heterogeneous graphs can represent real-world networks. Predicting links in these graphs is more complicated than in static graphs. Until now, research interest in link prediction has focused on static heterogeneous graphs or dynamic homogeneous graphs. A link prediction technique combining temporal RNN and hierarchical attention has recently emerged, called DyHATR. This method is claimed to work on dynamic heterogeneous graphs, having been tested on four publicly available data sets (Twitter, Math-Overflow, Ecomm, and Alibaba). However, after further analysis, it turned out that the four data sets did not meet the criteria of dynamic heterogeneous graphs. In the present work, we evaluated the performance of DyHATR on dynamic heterogeneous graphs. We conducted experiments with DyHATR based on the Yelp data set represented as a dynamic heterogeneous graph consisting of homogeneous subgraphs. The results show that DyHATR can be applied to link prediction on dynamic heterogeneous graphs by simultaneously capturing heterogeneous information and evolutionary patterns and then using them to carry out link prediction. Compared to the baseline method, the accuracy achieved by DyHATR is competitive, although the results can still be improved.
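For context, a dynamic heterogeneous graph can be thought of as a sequence of snapshots with typed edges, and link prediction on it is commonly scored with ROC-AUC over candidate edges in the next snapshot. The sketch below illustrates only this representation and evaluation; the node/edge types and scores are made-up stand-ins, not the Yelp experiments or DyHATR itself.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# A dynamic heterogeneous graph as a list of snapshots, each holding
# typed edges, e.g. ('user', 'review', 'business'). Types are assumed.
snapshots = [
    {("u1", "review", "b1"), ("u1", "friend", "u2")},          # t = 0
    {("u2", "review", "b1"), ("u1", "review", "b2")},          # t = 1
]

def evaluate_link_prediction(labels, scores):
    """Rank candidate edges of the next snapshot by predicted score and
    report ROC-AUC against the edges that actually appear (1) or not (0)."""
    return roc_auc_score(labels, scores)

# Hypothetical model outputs for four candidate edges at t = 2.
labels = np.array([1, 0, 1, 0])          # ground truth: edge appears or not
scores = np.array([0.9, 0.2, 0.7, 0.4])  # predicted probabilities
print("AUC:", evaluate_link_prediction(labels, scores))   # 1.0 here
```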
CNN Based Covid-19 Detection from Image Processing Mohammed Ashikur Rahman; Mohammad Rabiul Islam; Md. Anzir Hossain Rafath; Simron Mhejabin
Journal of ICT Research and Applications Vol. 17 No. 1 (2023)
Publisher : DRPM - ITB

DOI: 10.5614/itbj.ict.res.appl.2023.17.1.7

Abstract

Covid-19 is a respiratory condition that looks much like pneumonia. It is highly contagious and has many variants with different symptoms. Covid-19 poses the challenge of discovering new testing and detection methods in biomedical science. X-ray images and CT scans provide high-quality, information-rich images. These images can be processed with a convolutional neural network (CNN) to detect diseases such as Covid-19 in the pulmonary system with high accuracy. Deep learning applied to X-ray images can help to develop methods to identify Covid-19 infection. Based on the research problem, this study defined the outcome as reducing the energy costs and expenses of detecting Covid-19 in X-ray images. The results were analyzed by comparing a CNN model with a DenseNet model, where the former achieved more accurate performance than the latter.
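A minimal CNN of the kind compared in the paper can be sketched in Keras as follows; the input size, layer widths, and binary output are assumptions rather than the authors' exact model.

```python
from tensorflow.keras import layers, models

def build_cnn(input_shape=(224, 224, 1)):
    """Small binary X-ray classifier: Covid-19 vs. normal (assumed classes)."""
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(1, activation="sigmoid"),    # P(Covid-19)
    ])

model = build_cnn()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels), epochs=10)
```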
An Efficient Intrusion Detection System to Combat Cyber Threats using a Deep Neural Network Model Mangayarkarasi Ramaiah; C. Vanmathi; Mohammad Zubair Khan; M. Vanitha; M. Deepa
Journal of ICT Research and Applications Vol. 17 No. 3 (2023)
Publisher : DRPM - ITB

DOI: 10.5614/itbj.ict.res.appl.2023.17.3.2

Abstract

The proliferation of Internet of Things (IoT) solutions has led to a significant increase in cyber-attacks targeting IoT networks. Securing networks, and especially wireless IoT networks, against these attacks has become a crucial but challenging task for organizations. Therefore, ensuring the security of wireless IoT networks is of the utmost importance in today’s world. Among various solutions for detecting intruders, there is a growing demand for more effective techniques. This paper introduces a network intrusion detection system (NIDS) based on a deep neural network that utilizes network data features selected through bagging and boosting methods. The presented NIDS implements both binary and multiclass attack detection models and was evaluated using the KDDCUP 99 and CICDDoS datasets. The experimental results demonstrated that the presented NIDS achieved an impressive accuracy rate of 99.4% while using a minimal number of features. This high level of accuracy makes the presented NIDS a valuable tool.
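The two-stage idea, ranking features with bagging and boosting ensembles before training a neural classifier on the reduced set, can be sketched as below; the specific ensembles, the value of k, and the network size are illustrative choices, not the paper's exact configuration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.neural_network import MLPClassifier

def select_features(X, y, k=10):
    """Combine importances from a bagging ensemble (random forest) and a
    boosting ensemble, then keep the indices of the k highest-ranked features."""
    bag = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
    boost = GradientBoostingClassifier(random_state=0).fit(X, y)
    combined = bag.feature_importances_ + boost.feature_importances_
    return np.argsort(combined)[::-1][:k]

def train_nids(X, y, k=10):
    """Train a small neural classifier on the reduced feature set."""
    idx = select_features(X, y, k)
    clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
    clf.fit(X[:, idx], y)
    return clf, idx
```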
Prediction of On-time Student Graduation with Deep Learning Method Nathanael Victor Darenoh; Fitra Abdurrachman Bachtiar; Rizal Setya Perdana
Journal of ICT Research and Applications Vol. 18 No. 1 (2024)
Publisher : DRPM - ITB

DOI: 10.5614/itbj.ict.res.appl.2023.18.1.1

Abstract

Universities have an important role in providing quality education to their students so they can build a foundation for their future. However, a problem that often arises is that the study process differs for each individual. Therefore, it is necessary to predict on-time graduation from students' academic attributes, in the hope that educational institutions can better understand student conditions and maximize on-time student graduation. In this study, a deep learning method was implemented to help predict on-time graduation for students at the Faculty of Computer Science, University of Brawijaya. Based on the test results and hyperparameter tuning using Optuna, the best hyperparameter configuration for the deep learning method consisted of number of layers = 4; first-layer nodes = 118; first dropout = 0.3393; second-layer nodes = 83; second dropout = 0.0349; third-layer nodes = 88; third dropout = 0.0491; fourth-layer nodes = 65; fourth dropout = 0.4169; number of epochs = 244; learning rate = 0.0710; and optimizer = SGD. Thus, an accuracy rate of 86.61% was achieved for the two classes of the test data set, i.e., on-time graduation and not on-time graduation.
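An Optuna search over the hyperparameters listed above (layer count, nodes and dropout per layer, epochs, learning rate, optimizer) might be set up as in the sketch below; the synthetic data and search ranges are assumptions for illustration, not the study's data or code.

```python
import numpy as np
import optuna
from tensorflow.keras import layers, models, optimizers

# Synthetic stand-in data; the study used real academic attributes.
rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 20)), rng.integers(0, 2, 200)
X_val, y_val = rng.normal(size=(50, 20)), rng.integers(0, 2, 50)

def objective(trial):
    """Build a small feed-forward network from sampled hyperparameters
    and return its validation accuracy for Optuna to maximize."""
    n_layers = trial.suggest_int("n_layers", 2, 5)
    model = models.Sequential([layers.Input(shape=(X_train.shape[1],))])
    for i in range(n_layers):
        model.add(layers.Dense(trial.suggest_int(f"nodes_{i}", 32, 128), activation="relu"))
        model.add(layers.Dropout(trial.suggest_float(f"dropout_{i}", 0.0, 0.5)))
    model.add(layers.Dense(1, activation="sigmoid"))   # on time vs. not on time

    lr = trial.suggest_float("learning_rate", 1e-4, 1e-1, log=True)
    opt_name = trial.suggest_categorical("optimizer", ["SGD", "Adam"])
    opt = optimizers.SGD(learning_rate=lr) if opt_name == "SGD" else optimizers.Adam(learning_rate=lr)
    model.compile(optimizer=opt, loss="binary_crossentropy", metrics=["accuracy"])

    epochs = trial.suggest_int("epochs", 50, 250)
    model.fit(X_train, y_train, epochs=epochs, verbose=0)
    return model.evaluate(X_val, y_val, verbose=0)[1]   # validation accuracy

study = optuna.create_study(direction="maximize")
# study.optimize(objective, n_trials=100)
```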
WSN-IoT Forecast: Wireless Sensor Network Throughput Prediction Framework in Multimedia Internet of Things Rosa Eliviani; Yoanes Bandung
Journal of ICT Research and Applications Vol. 17 No. 3 (2023)
Publisher : DRPM - ITB

DOI: 10.5614/itbj.ict.res.appl.2023.17.3.4

Abstract

Accurate throughput predictions can significantly improve the quality of experience (QoE), where QoE denotes a network’s capacity to provide satisfactory service. With good throughput predictions, the best strategy can be planned for managing data transmission networks, enabling better and faster data transmission and thereby increasing QoE. Consequently, this paper investigates how to predict the throughput of wireless sensor networks carrying multimedia data. First, we conducted a comparative analysis of relevant prior research on throughput prediction in the Multimedia Internet of Things (Multimedia IoT). Based on what we learned from these studies, we developed a machine-learning-based throughput prediction framework for wireless sensor networks. The framework extracts features from historical throughput data and employs them to predict future throughput. In the final phase, multiple camera nodes and local servers were utilized to test the throughput prediction framework. Our analysis demonstrates that the WSN-IoT predictions are quite precise: for a 1-second time breakdown, the average absolute percentage error for all investigated scenarios ranges from 1 to 8 percent.
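The core step of such a framework, turning a history of throughput samples into features for a next-second prediction and scoring it with a percentage error, can be sketched as follows; the window size, regressor, and synthetic traffic trace are assumptions, not the paper's framework.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error

def make_windows(series, w=5):
    """Use the last w throughput samples as features to predict the next one."""
    X = np.array([series[i:i + w] for i in range(len(series) - w)])
    y = series[w:]
    return X, y

# Synthetic 1-second throughput trace (kbps) standing in for WSN measurements.
rng = np.random.default_rng(0)
throughput = 500 + 50 * np.sin(np.arange(300) / 10) + rng.normal(0, 10, 300)

X, y = make_windows(throughput)
split = int(0.8 * len(X))
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:split], y[:split])
pred = model.predict(X[split:])
print("MAPE: %.2f%%" % (100 * mean_absolute_percentage_error(y[split:], pred)))
```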