Contact Name
Tole Sutikno
Contact Email
-
Phone
-
Journal Mail Official
ij.aptikom@gmail.com
Editorial Address
9th Floor, 4th UAD Campus, Lembaga Penerbitan dan Publikasi Ilmiah (LPPI), Universitas Ahmad Dahlan
Location
Kota Yogyakarta,
Daerah Istimewa Yogyakarta
INDONESIA
Computer Science and Information Technologies
ISSN: 2722-323X     EISSN: 2722-3221     DOI: -
Computer Science and Information Technologies (ISSN 2722-323X, e-ISSN 2722-3221) is an open-access, peer-reviewed international journal that publishes original research articles, review papers, and short communications with an immediate impact on ongoing research in all areas of Computer Science/Informatics, Electronics, Communication, and Information Technologies. Papers are selected for publication through rigorous peer review to ensure originality, timeliness, relevance, and readability. The journal is published every four months (March, July, and November).
Articles: 13 Documents
Search results for issue "Vol 7, No 1: March 2026": 13 Documents
Implementation of face recognition using Python Christanto, Febrian Wahyu; Arifin, Husnul; Dewi, Christine; Prasandy, Teguh
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p1-9

Abstract

Artificial intelligence (AI)-based technology systems are developing rapidly. Along with this development, the number of criminal cases involving facial forgery is also growing. Theft and housebreaking aided by fake photographs are a common problem in Semarang: in 2022–2023, reported cases of theft and housebreaking reached 372,965, a crime risk level of 137 per 100,000 people. To address this problem, the door security system described here uses facial recognition based on digital image processing. The method imitates the way interconnected nerve cells communicate in humans, i.e., the working principle of artificial neural networks. Image capture and facial recognition are carried out with a webcam and the Python programming language using the TensorFlow library. Trained on 400 facial images, the system achieves an accuracy rate of 95%. Further development is nevertheless needed to improve the efficiency and accuracy of the system.
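A face-recognition door lock of this kind ultimately reduces to comparing a probe image's feature vector against enrolled ones. The sketch below shows that matching step in plain Python, with tiny hand-made embeddings standing in for the CNN outputs the paper obtains via TensorFlow; the gallery names, vectors, and threshold are all illustrative assumptions, not the authors' implementation.

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def recognize(probe, gallery, threshold=0.9):
    """Return the best-matching enrolled identity, or None (door stays locked)."""
    best_name, best_score = None, -1.0
    for name, embedding in gallery.items():
        score = cosine_similarity(probe, embedding)
        if score > best_score:
            best_name, best_score = name, score
    return best_name if best_score >= threshold else None

# Toy "enrolled" embeddings standing in for CNN outputs over the training images.
gallery = {"alice": [0.9, 0.1, 0.2], "bob": [0.1, 0.8, 0.5]}
match = recognize([0.88, 0.12, 0.21], gallery)   # very close to alice's vector
unknown = recognize([0.5, 0.5, 0.5], gallery)    # below threshold, rejected
```

In the paper's setup the embeddings would come from the trained network rather than being written by hand; the threshold trades false accepts against false rejects.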
An uneven cluster-based routing protocol for WSNs using a hybrid MCDM and max-min ant colony optimization Ri, Man Gun; Kim, Pyong Gwang; Kim, JinSim
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p74-82

Abstract

In energy-constrained wireless sensor networks (WSNs), whose sensor nodes (SNs) are characterized by multiple mutually contradictory criteria, it remains an open challenge to combine those criteria and to apply an intelligent optimization (IO) algorithm in an optimal cluster-based routing protocol. In this article, we propose a new uneven-cluster-based routing protocol using the hybrid FCNP-VWA-TOPSIS (FVT) method and an improved max-min ant colony optimization (ACO). The scheme uses the hybrid FVT to perform clustering and the improved max-min ACO to configure a routing tree for relay transmission of sensed data. Extensive simulation experiments show that the proposed scheme greatly prolongs the network lifetime (NL) by achieving an energy-consumption balance superior to previous schemes.
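One building block of the hybrid FVT is a TOPSIS-style ranking of candidate cluster heads by closeness to the ideal solution. The sketch below implements plain TOPSIS under illustrative criteria (residual energy as a benefit, distance to the sink as a cost); the FCNP and VWA weighting stages of the authors' hybrid, and the ACO routing tree, are omitted, and all numbers are made up.

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) by relative closeness to the ideal solution.
    benefit[j] is True if criterion j is better when larger (e.g. residual
    energy), False if better when smaller (e.g. distance to the sink)."""
    n_crit = len(weights)
    # Vector-normalize each criterion column, then apply the weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    v = [[weights[j] * row[j] / norms[j] for j in range(n_crit)] for row in matrix]
    ideal = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    anti = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - i) ** 2 for x, i in zip(row, ideal)))
        d_neg = math.sqrt(sum((x - a) ** 2 for x, a in zip(row, anti)))
        scores.append(d_neg / (d_pos + d_neg))  # 1 = ideal, 0 = anti-ideal
    return scores

# Three candidate cluster heads scored on (residual energy, distance to sink).
scores = topsis([[0.9, 40.0], [0.6, 20.0], [0.3, 60.0]],
                weights=[0.6, 0.4], benefit=[True, False])
best = scores.index(max(scores))  # node 0: high energy, moderate distance
```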
Optimizing interconnection call routing: a machine learning approach for cost and quality efficiency Mudari, Ivy Anesu; Mutandavari, Mainford; Chiworera, Kenneth
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p56-65

Abstract

This study presents the design and development of an automated least cost routing (LCR) model for telecommunications interconnection calls using machine learning. Leveraging a random forest regressor, the model predicts the most cost-effective call routing path based on pricing and network latency. Trained on real-world call detail records (CDRs) from TelOne Zimbabwe, the model achieved a high R² score of 0.851 with a mean absolute error (MAE) of $0.0482 per minute. Evaluation results demonstrate an average cost reduction of 46.75% compared to traditional routing methods, with prediction times under 0.1 seconds and latency remaining within acceptable thresholds. This work provides a practical, scalable, and efficient solution for telecom operators seeking to reduce interconnection costs and maintain service quality through intelligent routing automation, and the model's architecture and performance make it viable for integration into real-time telecom infrastructure.
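Once per-route costs are predicted, the routing decision itself is a constrained minimization: pick the cheapest route whose latency stays within budget. The sketch below shows that decision in plain Python; a fixed cost table stands in for the paper's random forest regressor, and the carrier names and latency budget are illustrative assumptions.

```python
def select_route(routes, max_latency_ms=150.0):
    """Pick the cheapest interconnection route with acceptable latency.
    `routes` maps carrier -> (predicted_cost_per_min, latency_ms); in the
    paper the cost figures would come from the trained random forest."""
    feasible = {k: v for k, v in routes.items() if v[1] <= max_latency_ms}
    if not feasible:
        return None  # no route meets the quality threshold
    return min(feasible, key=lambda k: feasible[k][0])

routes = {
    "carrier_a": (0.120, 90.0),
    "carrier_b": (0.064, 110.0),   # cheapest route within the latency budget
    "carrier_c": (0.050, 220.0),   # cheapest overall, but too slow
}
best = select_route(routes)
```

Keeping the quality constraint separate from the cost objective is what lets the model cut cost without degrading service.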
Raindrop and bit drop effects on millimeter wave network performance: a critical review Gordon, Victor Dela; Acakpovi, Amevi; Aggrey, George Kwamena; Dziwornu, Michael Gameli
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p83-92

Abstract

This PRISMA-guided review examines how rainfall degrades 5G millimeter wave (mmWave) network performance, with emphasis on rain-induced bit drop and its impact on end-to-end quality of service (QoS). From an initial corpus of 13,317 publications screened across IEEE Xplore, ACM Digital Library, ScienceDirect, Google Scholar, and ELICIT, 18 peer-reviewed studies published between 2018 and 2024 met the inclusion criteria. Findings show that rainfall significantly attenuates mmWave signals, with specific attenuation ranging from approximately 4 to 45 dB/km at 100 mm/h, particularly in tropical regions. Where QoS outcomes are reported, these losses manifest as increased bit error rates, rain-driven bit drop along the link, higher packet loss and delay, and reduced throughput. Key deficiencies identified include limited empirical validation of attenuation models against packet-level QoS, a lack of standardized propagation datasets for short-range links, and weak treatment of bit-level impairments within QoS analysis. To address these gaps, the review recommends enhancing ITU-R P.530 and Mie-scattering models with region-specific measurements, implementing rain-aware adaptive protocols, and adopting standardized benchmarking frameworks that link rain attenuation, bit drop, and QoS. This synthesis offers guidance for building climate-aware mmWave systems and positions bit drop as a practical metric for assessing precipitation resilience.
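The attenuation figures cited above come from the standard ITU-R power-law model, in which specific attenuation is γ = k·R^α (dB/km) for rain rate R (mm/h). A minimal sketch in Python; the coefficient values below are illustrative only, since the real k and α depend on frequency and polarization and are tabulated in ITU-R P.838.

```python
def specific_attenuation(rain_rate_mm_h, k, alpha):
    """ITU-R power-law model: specific attenuation (dB/km) = k * R**alpha."""
    return k * rain_rate_mm_h ** alpha

def rain_loss_db(rain_rate_mm_h, k, alpha, path_km):
    """Total rain loss over a short mmWave link, assuming uniform rain."""
    return specific_attenuation(rain_rate_mm_h, k, alpha) * path_km

# Illustrative coefficients chosen to land inside the 4-45 dB/km range the
# review reports at 100 mm/h; not taken from the ITU-R tables.
gamma = specific_attenuation(100.0, k=0.35, alpha=1.0)      # dB/km at 100 mm/h
loss = rain_loss_db(100.0, k=0.35, alpha=1.0, path_km=0.2)  # 200 m link budget hit
```

Even over a 200 m street-level link, a tropical downpour can consume several dB of link margin, which is why the review pushes for rain-aware adaptive protocols.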
AdaWeb: a stack-adaptive framework for automated web-vulnerability assessment Shah, Syed Aman; Kumar, Vaishali
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p10-19

Abstract

AdaWeb is a configuration-driven framework that automates web-vulnerability assessment through four stages: technology fingerprinting, crawler selection, exploit execution, and incremental reporting. A Wappalyzer probe identifies the application stack and triggers a matching crawler for hypertext preprocessor (PHP), ASP.NET, or NodeJS, or a general fallback, each capable of both unauthenticated and credential-based traversal. Discovered uniform resource locators (URLs) feed three exploit modules: a sqlmap-integrated structured query language injection (SQLi) tester, a custom reflective cross-site scripting (XSS) injector, and a Python-deserialization module that uses a Base64-encoded pickle payload to open an interactive reverse shell. Each module writes immediate JavaScript object notation (JSON) records containing the URL, parameter, payload, and evidence, allowing real-time analysis and preserving data for audit. Empirical evaluation on four deliberately vulnerable benchmarks shows that, by leveraging stack-aligned payloads and authenticated scanning, AdaWeb cuts manual triage time by 52% and eliminates false-negative cases that defeat generic scanners, making it a drop-in upgrade for DevSecOps pipelines.
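The fingerprint-then-dispatch step and the incremental JSON reporting can be sketched as below. The crawler names and record fields are illustrative stand-ins for AdaWeb's modules, not its actual API, and the example URL and payload are made up.

```python
import json

# Maps a fingerprinted stack to the crawler that understands its routing
# conventions; a generic crawler is the fallback for unrecognized stacks.
CRAWLERS = {"php": "php_crawler", "aspnet": "aspnet_crawler", "nodejs": "node_crawler"}

def pick_crawler(stack):
    """Select a stack-specific crawler, falling back to a generic one."""
    return CRAWLERS.get(stack, "generic_crawler")

def record_finding(url, parameter, payload, evidence):
    """Emit one incremental JSON finding, as the reporting stage does, so
    results are preserved for audit even if a later module crashes."""
    return json.dumps({"url": url, "parameter": parameter,
                       "payload": payload, "evidence": evidence})

crawler = pick_crawler("php")
line = record_finding("http://target.example/item.php", "id",
                      "' OR 1=1--", "SQL error in response")
```

Writing one self-contained JSON line per finding is what makes the reporting incremental: a consumer can tail the output stream in real time.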
Advances in dermatological imaging: enhancing skin melanoma classification for improved patient outcomes Sahoo, Debadutta; Mishra, Soumya
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p111-120

Abstract

The study presents an enhanced AlexNet-based deep learning system for binary classification of melanoma skin cancer as benign or malignant, using two paired dermatoscopic and clinical image datasets. It evaluates model resilience across image sets with common preprocessing and dataset-specific augmentation, using a melanoma dataset of 10,000 images and a benign-versus-malignant dataset of 3,600 images. The refined AlexNet outperformed several standard machine learning (ML) classifiers and other deep architectures on both datasets with practical training times, achieving balanced accuracies of 97.12% and 96.21%. Training used SGD as the optimiser and cross-entropy loss on 256×256 images. Benchmarking against support vector machine (SVM), k-nearest neighbour (KNN), and other convolutional neural network (CNN) designs shows that the selected architecture and hyperparameters achieved the highest performance at modest computational cost for routine melanoma triage. The report highlights the need for external validation, incorporation into dermatological workflows, and explainability to improve trust, diminish dataset bias, and support safe clinical deployment.
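Balanced accuracy, the headline metric above, averages sensitivity and specificity so the majority (benign) class cannot dominate the score. A minimal implementation; the toy label vectors are illustrative, not the study's data.

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of sensitivity (recall on malignant) and specificity (recall on
    benign) for a binary task; robust to class imbalance."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    pos = sum(1 for t in y_true if t == 1)
    neg = len(y_true) - pos
    return 0.5 * (tp / pos + tn / neg)

# 1 = malignant, 0 = benign; a benign-heavy toy split mimicking the imbalance
# typical of dermatoscopic datasets.
y_true = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0, 0, 1]
score = balanced_accuracy(y_true, y_pred)  # 0.5 * (3/4 + 5/6)
```

On imbalanced data, plain accuracy would reward a model that always predicts "benign"; balanced accuracy does not, which is why it is the right headline number here.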
Car selection in games using multi-objective optimization by ratio analysis based on player achievement Putra, Caesar Nafiansyah; Nugroho, Fresy; Imamudin, Mochamad; Pebrianti, Dwi; Hammad, Jehad Abdelhamid; Lestari, Tri Mukti; Maharani, Dian; Nurrahman, Alfina
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p30-45

Abstract

The selection menu in some racing games uses a random system for vehicle selection. However, this random feature generally randomizes the selection index without considering factors that match the player's abilities. This study therefore develops a racing game that suggests vehicles adjusted to the player's performance. Vehicle recommendations use the multi-objective optimization on the basis of ratio analysis (MOORA) method, which ranks vehicles on criteria such as mileage, fuel efficiency, speed, and agility collected from previous races. The results demonstrate the effectiveness of MOORA in recommending vehicles that match the player's skills, thereby improving the overall player experience. In addition, a usability test produced a system usability scale (SUS) score of 82.4, placing the game in the very good category.
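The MOORA ratio system can be sketched in a few lines: vector-normalize each criterion column, weight it, and score each vehicle as the sum of its benefit criteria minus the sum of its cost criteria. The criteria, weights, and car statistics below are illustrative, not the paper's.

```python
import math

def moora(matrix, weights, benefit):
    """MOORA ratio system: vector-normalize each criterion column, weight it,
    then score each alternative as (benefit criteria) - (cost criteria).
    Higher scores rank higher."""
    n_crit = len(weights)
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(n_crit)]
    scores = []
    for row in matrix:
        s = 0.0
        for j in range(n_crit):
            x = weights[j] * row[j] / norms[j]
            s += x if benefit[j] else -x  # subtract cost criteria
        scores.append(s)
    return scores

# Cars scored on (top speed, agility, fuel consumption); fuel is a cost
# criterion, so lower is better.
cars = [[220.0, 7.0, 9.0], [180.0, 9.0, 6.0], [240.0, 5.0, 12.0]]
scores = moora(cars, weights=[0.4, 0.3, 0.3], benefit=[True, True, False])
best = scores.index(max(scores))  # the agile, fuel-efficient car wins
```

In the game, the criterion values would come from the player's logged race statistics, so the recommendation tracks actual performance rather than a random index.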
Deep learning for sentiment analysis and topic extraction in health insurance Karomo, Muzondiwa; Mutandavari, Mainford; Muzava, Wilton
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p66-73

Abstract

Social media has transformed into a vital channel for real-time, unsolicited feedback in healthcare, yet health insurance providers often lack the tools to mine insights from such data. This study proposes a cloud-based system leveraging deep learning for sentiment analysis and topic modeling tailored to the Commercial and Industrial Medical Aid Society (CIMAS) health insurance in Zimbabwe. Using bidirectional encoder representations from transformers (BERT), a convolutional neural network (CNN), a random forest (RF), and autoencoders, the system processes multilingual data from platforms like Twitter and Facebook, identifying customer concerns in real time. Over 15,000 posts were analyzed, with CNN achieving 91.4% accuracy in sentiment classification and BERTopic extracting coherent themes. The system detected issues such as claim delays, app navigation problems, and unreported anomalies. Findings demonstrate that AI can improve service delivery, customer satisfaction, and responsiveness in African insurance contexts.
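The sentiment stage of such a pipeline can be sketched with a toy lexicon classifier standing in for the paper's CNN and BERT models; the word lists and example posts below are illustrative assumptions, meant only to show the classification interface, not the study's vocabulary or data.

```python
# Illustrative sentiment lexicons; a stand-in for the trained CNN classifier.
POSITIVE = {"fast", "helpful", "easy", "resolved"}
NEGATIVE = {"delay", "delayed", "rejected", "crash", "confusing"}

def classify_post(text):
    """Label a social-media post positive/negative/neutral by lexicon hits.
    The real system replaces this with a model trained on labeled posts."""
    words = set(text.lower().split())
    pos, neg = len(words & POSITIVE), len(words & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

posts = ["claim delayed again",
         "app crash when logging in",
         "agent was helpful and fast"]
labels = [classify_post(p) for p in posts]
```

The value of the deep models over a lexicon like this is robustness to spelling variation, code-switching, and multilingual text, which the abstract notes is common on the monitored platforms.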
Review on patch antenna for 5G Networks at Ka-Band Al Nasib, Md. Nurullah; Rana, Md. Sohel
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p102-110

Abstract

This research thoroughly examines microstrip antennas for Ka-band wireless applications. The design of microstrip patch antennas for 5G wireless applications has become an established research topic. Patch antennas are made in different shapes, such as rectangles, circles, triangles, and rings, and many substrate materials are used in their design. Wireless communication technologies, including television broadcasts, microwave ovens, mobile phones, wireless local area networks (LANs), Bluetooth, global positioning systems (GPS), and two-way radios, all rely on such antennas. This article examines the geometric configurations of antennas, the methods used to analyze their attributes, their dimensions, the materials from which they are constructed, the issues antenna designs face, and potential solutions to those challenges. It also reviews the return loss (S11), bandwidth, voltage standing wave ratio (VSWR), gain, directivity, and efficiency reported in prior studies. In the future, a novel patch antenna can be designed for 5G wireless applications.
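Rectangular-patch sizing in such designs typically starts from the standard transmission-line-model formulas for patch width, effective permittivity, and fringing-corrected length. A sketch using an illustrative 28 GHz (Ka-band) design point, with substrate values typical of 5G prototypes but not taken from the review:

```python
import math

C = 3.0e8  # speed of light, m/s

def patch_dimensions(f0_hz, eps_r, h_m):
    """Transmission-line-model sizing of a rectangular microstrip patch
    (standard textbook formulas); returns (width, length) in metres."""
    # Patch width for efficient radiation.
    w = C / (2 * f0_hz) * math.sqrt(2 / (eps_r + 1))
    # Effective permittivity seen by the quasi-TEM mode.
    eps_eff = (eps_r + 1) / 2 + (eps_r - 1) / 2 * (1 + 12 * h_m / w) ** -0.5
    # Length extension from fringing fields at each radiating edge.
    dl = 0.412 * h_m * ((eps_eff + 0.3) * (w / h_m + 0.264)) / \
         ((eps_eff - 0.258) * (w / h_m + 0.8))
    # Resonant length: half a guided wavelength minus both fringing extensions.
    l = C / (2 * f0_hz * math.sqrt(eps_eff)) - 2 * dl
    return w, l

# 28 GHz patch on a 0.254 mm substrate with eps_r = 2.2 (illustrative values).
w, l = patch_dimensions(28e9, 2.2, 0.254e-3)  # both a few millimetres
```

Millimetre-scale patch dimensions at Ka-band are exactly why fabrication tolerance and substrate choice dominate the challenges the review catalogues.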
Development and performance evaluation of a CNN model for seagrass species classification in Bintan, Indonesia Hayaty, Nurul; Kusuma, Hollanda Arief
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p20-29

Abstract

This study presents the development and evaluation of a convolutional neural network (CNN) model for automated seagrass species classification in Bintan, Indonesia. The objective of this research is to examine how different train-validation data split ratios affect model accuracy and generalization performance. The CNN was trained under four configurations (60:40, 70:30, 80:20, and 90:10) to analyze the influence of training data volume on learning convergence and predictive capability. The results indicate that all configurations achieved high validation accuracy, with the best performance reaching 98.53% when using the 90:10 split. Evaluation on unseen data demonstrated that the 60:40 configuration provided the most consistent and reliable generalization. Performance variations were also affected by the morphological similarity between the classified species, which increases the challenge in correctly distinguishing certain classes. Overall, the findings confirm the effectiveness of CNN-based classification for supporting marine biodiversity monitoring and underline the importance of dataset composition in achieving optimal performance. Future improvements will focus on expanding data variability to enhance robustness in real-world scenarios.
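The train-validation split configurations compared in the study can be reproduced with a simple shuffle-and-cut helper; the placeholder file names and fixed seed below are illustrative assumptions, not the study's data handling code.

```python
import random

def train_val_split(samples, train_ratio, seed=42):
    """Shuffle, then split a dataset into train/validation at the given ratio,
    as in the 60:40 through 90:10 configurations compared in the study."""
    rng = random.Random(seed)  # fixed seed so every run yields the same split
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

images = [f"seagrass_{i:03d}.jpg" for i in range(100)]  # placeholder names
splits = {ratio: train_val_split(images, ratio) for ratio in (0.6, 0.7, 0.8, 0.9)}
```

Shuffling before the cut matters: seagrass images collected site by site are often ordered by species, and an unshuffled split would put whole classes into only one partition.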

Page 1 of 2 | Total Records: 13