Contact Name
Tole Sutikno
Contact Email
-
Phone
-
Journal Mail Official
ij.aptikom@gmail.com
Editorial Address
9th Floor, 4th UAD Campus, Lembaga Penerbitan dan Publikasi Ilmiah (LPPI), Universitas Ahmad Dahlan
Location
Kota Yogyakarta,
Daerah Istimewa Yogyakarta
INDONESIA
Computer Science and Information Technologies
ISSN : 2722-323X     EISSN : 2722-3221     DOI : -
Computer Science and Information Technologies (ISSN 2722-323X, e-ISSN 2722-3221) is an open-access, peer-reviewed international journal that publishes original research articles, review papers, and short communications with an immediate impact on ongoing research in all areas of Computer Science/Informatics, Electronics, Communication, and Information Technologies. Papers are selected through rigorous peer review to ensure originality, timeliness, relevance, and readability. The journal is published every four months (March, July, and November).
Articles: 167 Documents
A machine learning approach for early prediction of mental health crises Chigagure, Hassan; Sakala, Lucy Charity
Computer Science and Information Technologies Vol 6, No 3: November 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v6i3.p335-345

Abstract

The global mental health crisis, intensified by the COVID-19 pandemic, placed unprecedented strain on healthcare systems and highlighted the urgent need for proactive crisis prevention strategies. This study investigated the effectiveness of various machine learning (ML) models in predicting mental health crises within 28 days post-hospitalization, leveraging an eight-year longitudinal dataset. Multiple data preprocessing techniques, including feature selection (EFSA, RFECV), imputation, and class imbalance handling (SMOTE, Tomek links), were systematically applied to enhance model performance. Six traditional classifiers—logistic regression, support vector machine, k-nearest neighbors, naive Bayes, XGBoost, and AdaBoost—were evaluated alongside ensemble learning (EL) methods (bagging, boosting, stacking). Performance metrics such as accuracy, precision, recall, F1 score, and AUC-ROC were used for comprehensive assessment. Results demonstrated that ensemble methods, particularly boosting and bagging, consistently achieved high predictive accuracy (up to 93%), with XGBoost and AdaBoost emerging as top performers. Feature selection and class imbalance techniques further improved model robustness and generalizability. The findings underscored the potential of ML-driven approaches for early identification of at-risk patients, enabling more effective resource allocation and timely interventions in mental health care. Recommendations for integrating these predictive tools into clinical workflows were discussed to support data-driven decision-making.
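The abstract above applies SMOTE together with Tomek links to handle class imbalance. As a minimal illustration of SMOTE's core idea (synthesizing minority samples by interpolating between a minority point and a nearby minority neighbour), here is a hedged pure-Python sketch; the function name and toy data are illustrative, not taken from the paper:

```python
import random

def synthetic_minority_samples(minority, n_new, seed=0):
    """Generate n_new synthetic points by interpolating between a random
    minority sample and its nearest minority neighbour (SMOTE's core idea)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n_new):
        a = rng.choice(minority)
        # nearest neighbour of a among the other minority points
        b = min((p for p in minority if p is not a),
                key=lambda p: sum((x - y) ** 2 for x, y in zip(a, p)))
        t = rng.random()  # interpolation factor in [0, 1)
        out.append(tuple(x + t * (y - x) for x, y in zip(a, b)))
    return out

# three toy minority-class points in a two-feature space
minority = [(1.0, 1.0), (1.2, 0.9), (0.9, 1.1)]
new_points = synthetic_minority_samples(minority, 5)
```

Because each synthetic point lies on the segment between two real minority points, the oversampled class stays inside its original feature-space region, which is what makes SMOTE less prone to noise than naive duplication.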
Characteristics ransomware stop/djvu remk and erqw variants with static-dynamic analysis Nugrahadi, Dodon Turianto; Abadi, Friska; Herteno, Rudy; Muliadi, Muliadi; Alkaff, Muhammad; Alfando, Muhammad Alvin
Computer Science and Information Technologies Vol 6, No 3: November 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v6i3.p283-293

Abstract

Ransomware develops into new variants every year. One family, STOP/DJVU, contains more than 240 variants. This research determines the differences in characteristics and impact between the STOP/DJVU remk variant, from 2020, and the erqw variant, from 2023, through a mixed-method approach combining observation and simulation with both static and dynamic malware analysis. Samples of both variants were obtained from the MalwareBazaar site. In dynamic analysis, the remk variant has 177 total characteristics and the erqw variant has 190, an increase of 1.8%; in static analysis, remk has 586 and erqw has 736, an increase of 5.7%. From remk to erqw, all characteristics increase in dynamic analysis except the number of payloads, which decreases by about 20%; in static analysis, all characteristics increase except the number of sections, which decreases by about 1.5%. CPU performance is also affected: the remk variant increases CPU load by 3.74%, while the erqw variant reduces CPU load by 1.18%, both relative to normal CPU usage. These changes affect the ransomware's destructive behavior and require corresponding changes in how it is handled.
Predictive model for high-risk healthcare clients and claims frequency Zhou, Lenias; Mutandavari, Mainford; Matondora, Lucia
Computer Science and Information Technologies Vol 6, No 3: November 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v6i3.p346-354

Abstract

Global healthcare spending surged to approximately USD 9.8 trillion in the aftermath of the COVID-19 pandemic, intensifying the need for effective risk-management strategies in healthcare insurance. This study proposes a predictive model that identifies high-risk clients for timely, targeted interventions and forecasts claims frequency for optimized resource allocation. A real-world claims dataset from a healthcare insurance provider was used, with Bayesian optimization employed to enhance data labelling. A deep learning (DL) model with sigmoid activation classified high-risk clients, while a regression model forecast claims frequency. The trained and validated model achieved an accuracy of 97%, a precision of 95.2%, a recall of 98.1%, and an F1-score of 96.6%, confirming its accuracy in identifying high-risk clients and its ability to reliably forecast future claims frequency. Importantly, the model also provides the reasoning behind each classification decision, enhancing transparency and trust. This research offers data-driven insights to both healthcare insurers and clients, helping them stay ahead in managing key risks and ultimately reducing the cost of healthcare insurance. The work contributes a scalable and interpretable solution for risk prediction in healthcare insurance.
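The classifier described above ends in a sigmoid activation that maps a score to a high-risk probability. As a toy stand-in for that final step only (not the paper's deep network or its data), here is a pure-Python logistic model trained on illustrative two-feature "risk" samples:

```python
import math

def sigmoid(z):
    """Logistic activation: maps any real score into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.5, epochs=200):
    """Per-sample gradient descent on the logistic loss (toy scale only)."""
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            g = p - y  # gradient of the log-loss w.r.t. the score
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# illustrative clients: two low-risk (0) and two high-risk (1) feature vectors
xs = [(0.1, 0.2), (0.2, 0.1), (0.9, 0.8), (0.8, 0.9)]
ys = [0, 0, 1, 1]
w, b = train_logistic(xs, ys)
pred = [1 if sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5 else 0
        for x in xs]
```

Thresholding the sigmoid output at 0.5 is the standard decision rule; in an insurance setting the threshold could be tuned to trade precision against recall.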
Cloud computing needs to explore into sky computing Ullah, Arif; Remmach, Hassnae; Aznaoui, Hanane; Şahin, Canan Batur; Mrhari, Amine
Computer Science and Information Technologies Vol 6, No 3: November 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v6i3.p294-306

Abstract

This paper evaluates key issues in cloud computing and introduces a novel model, known as sky computing, to address these challenges. Cloud computing, a transformative technology, has played a critical role in reshaping modern operations—especially following the COVID-19 pandemic, when many human activities shifted to technology-driven platforms. It offers multiple service models, including Software as a Service, Hardware as a Service, Desktop as a Service, Backup as a Service, and Network as a Service, each tailored to user requirements. However, the rapid expansion of cloud-based technologies and interconnected systems has intensified infrastructure and scalability challenges. Sky computing, or the “cloud of clouds,” emerges as an advanced layer above traditional cloud models, enabling dynamically provisioned, distributed domains built over multiple serial clouds. Its core capability lies in offering variable computing capacity and storage resources with dynamic, real-time support, providing a robust and unified platform by integrating diverse cloud resources. This paper reviews related technologies, summarizes prior research on sky computing, and discusses its structural design. Furthermore, it examines the limitations of current cloud computing frameworks and highlights how sky computing could overcome these barriers, positioning it as a pivotal architecture for the future of distributed computing.
Implementation of face recognition using Python Christanto, Febrian Wahyu; Arifin, Husnul; Dewi, Christine; Prasandy, Teguh
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p1-9

Abstract

Artificial intelligence (AI)-based technology systems are developing rapidly, and with them the number of criminal cases involving facial forgery is also growing. Cases of theft and housebreaking using fake photos are a common problem in Semarang; in 2022–2023 the number of such cases reached 372,965, a crime risk level of 137 per 100,000 people. To address this problem, the facial recognition system used in the door security system applies digital image processing. The method imitates how nerve cells communicate through interconnected neurons, that is, it is based on artificial neural networks modelled on the human brain. Image capture and facial recognition for the training data are carried out using a webcam and the Python programming language with the TensorFlow library. The image processing algorithm, trained on 400 facial images, achieves an accuracy rate of 95%. Further development is nevertheless needed to improve the efficiency and accuracy of the system.
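The 400 captured images mentioned above would need to be divided for training and evaluation before any accuracy figure can be reported. A hedged sketch of that data-handling step in plain Python; the filenames and the 80/20 split are assumptions for illustration, not details from the paper:

```python
import random

def split_dataset(samples, train_frac=0.8, seed=42):
    """Shuffle the captured frames and split them into train/test sets."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    cut = int(len(samples) * train_frac)
    return [samples[i] for i in idx[:cut]], [samples[i] for i in idx[cut:]]

# illustrative filenames for 400 webcam-captured face images
frames = [f"face_{i:03d}.png" for i in range(400)]
train, test = split_dataset(frames)
```

Holding out a test set that the model never sees during training is what makes the reported 95% accuracy a meaningful estimate rather than a memorization score.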
An uneven cluster-based routing protocol for WSNs using a hybrid MCDM and max-min ant colony optimization Ri, Man Gun; Kim, Pyong Gwang; Kim, JinSim
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p74-82

Abstract

In energy-constrained wireless sensor networks (WSNs), whose sensor nodes (SNs) must be evaluated against multiple, often conflicting criteria, it remains a challenge to combine those criteria and to apply an intelligent optimization (IO) algorithm in designing an optimal cluster-based routing protocol. In this article, we propose a new uneven-cluster routing protocol based on the hybrid FCNP-VWA-TOPSIS (FVT) method and an improved max-min ant colony optimization (ACO). The scheme uses the hybrid FVT to perform clustering and the improved max-min ACO to construct a routing tree for the relay transmission of sensed data. Extensive simulation experiments show that the proposed scheme greatly prolongs the network lifetime (NL) by achieving a better energy-consumption balance than previous schemes.
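The defining feature of max-min ACO referenced above is that pheromone values are clamped into a [tau_min, tau_max] band to avoid premature convergence on one route. A minimal sketch of a single pheromone-update step, with illustrative edge names and parameter values (not from the paper):

```python
def maxmin_update(tau, best_edges, rho=0.1, q=1.0, tau_min=0.01, tau_max=5.0):
    """One max-min ACO pheromone step: evaporate on every edge, deposit only
    on the best solution's edges, then clamp into [tau_min, tau_max]."""
    new = {}
    for edge, t in tau.items():
        t = (1 - rho) * t          # evaporation
        if edge in best_edges:
            t += q                 # deposit along the best routing tree
        new[edge] = min(tau_max, max(tau_min, t))
    return new

# toy 3-node topology; the best route uses edges a-b and b-c
tau = {("a", "b"): 1.0, ("b", "c"): 1.0, ("a", "c"): 1.0}
for _ in range(100):
    tau = maxmin_update(tau, best_edges={("a", "b"), ("b", "c")})
```

After many iterations the unused edge decays to the floor tau_min instead of zero, so ants can still occasionally explore it, while the reinforced edges saturate at tau_max rather than dominating completely.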
Optimizing interconnection call routing: a machine learning approach for cost and quality efficiency Mudari, Ivy Anesu; Mutandavari, Mainford; Chiworera, Kenneth
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p56-65

Abstract

This study presents the design and development of an automated least-cost routing (LCR) model for telecommunications interconnection calls using machine learning. Leveraging a random forest regressor, the model predicts the most cost-effective call routing path based on pricing and network latency. Trained on real-world call detail records (CDRs) from TelOne Zimbabwe, the model achieved a high R² score of 0.851, with a mean absolute error (MAE) of $0.0482 per minute. Evaluation results demonstrate an average cost reduction of 46.75% compared with traditional routing methods, with prediction times under 0.1 seconds and latency remaining within acceptable thresholds. This work provides a practical, scalable, and efficient solution for telecom operators seeking to reduce interconnection costs while maintaining service quality through intelligent routing automation. The model's architecture and performance make it viable for integration into real-time telecom infrastructure.
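The routing decision that such a model automates (pick the cheapest carrier whose latency stays within an acceptable budget) can be sketched as a simple selection rule. In the paper the per-minute costs would come from the random forest regressor's predictions; the carrier names, prices, and latency figures below are purely illustrative:

```python
def pick_route(candidates, predicted_cost, latency_ms, max_latency_ms=150):
    """Least-cost routing: among routes within the latency budget,
    choose the one with the lowest predicted per-minute cost."""
    ok = [r for r in candidates if latency_ms[r] <= max_latency_ms]
    return min(ok, key=lambda r: predicted_cost[r]) if ok else None

routes = ["carrier_a", "carrier_b", "carrier_c"]
cost = {"carrier_a": 0.12, "carrier_b": 0.08, "carrier_c": 0.05}  # $/minute
lat = {"carrier_a": 40, "carrier_b": 90, "carrier_c": 300}        # ms

best = pick_route(routes, cost, lat)
```

Here carrier_c is the cheapest but breaches the latency budget, so the rule selects carrier_b, illustrating the cost-versus-quality trade-off the abstract describes.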
Raindrop and bit drop effects on millimeter wave network performance: a critical review Gordon, Victor Dela; Acakpovi, Amevi; Aggrey, George Kwamena; Dziwornu, Michael Gameli
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p83-92

Abstract

This PRISMA-guided review examines how rain precipitation degrades 5G millimeter-wave (mmWave) network performance, with emphasis on rain-induced bit drop and its impact on end-to-end quality of service (QoS). From an initial corpus of 13,317 publications screened across IEEE Xplore, ACM Digital Library, ScienceDirect, Google Scholar, and ELICIT, 18 peer-reviewed studies published between 2018 and 2024 met the inclusion criteria. Findings show that rainfall significantly weakens mmWave signals, with specific attenuation ranging from approximately 4 to 45 dB/km at 100 mm/h, particularly in tropical regions. Where QoS outcomes are reported, these losses manifest as increased bit error rates, rain-driven bit drop along the link, higher packet loss and delay, and reduced throughput. Key deficiencies identified include limited empirical validation of attenuation models against packet-level QoS, a lack of standardized propagation datasets for short-range links, and weak treatment of bit-level impairments within QoS analysis. To address these gaps, the review recommends enhancing ITU-R P.530 and Mie-scattering models with region-specific measurements, implementing rain-aware adaptive protocols, and adopting standardized benchmarking frameworks that link rain attenuation, bit drop, and QoS. This synthesis offers guidance for building climate-aware mmWave systems and positions bit drop as a practical metric for assessing precipitation resilience.
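The specific-attenuation figures quoted above are conventionally derived from the ITU-R P.838 power-law model, gamma = k * R**alpha, where R is the rain rate in mm/h and k, alpha depend on frequency and polarization. A minimal sketch with assumed, illustrative coefficients (not values copied from the recommendation's tables):

```python
def specific_attenuation(rain_rate_mm_h, k, alpha):
    """ITU-R P.838 power law: specific attenuation gamma (dB/km) = k * R**alpha."""
    return k * rain_rate_mm_h ** alpha

# illustrative coefficients of the order seen at mmWave frequencies
# (assumed values for demonstration only)
k, alpha = 0.2, 1.0

atten_light = specific_attenuation(5, k, alpha)    # light rain
atten_heavy = specific_attenuation(100, k, alpha)  # 100 mm/h downpour
```

Because the model is a power law in R, attenuation grows steeply with rain rate, which is why tropical-region measurements at 100 mm/h dominate the worst-case figures reported in the review.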
AdaWeb: a stack-adaptive framework for automated web-vulnerability assessment Shah, Syed Aman; Kumar, Vaishali
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p10-19

Abstract

AdaWeb is a configuration-driven framework that automates web-vulnerability assessment through four stages: technology fingerprinting, crawler selection, exploit execution, and incremental reporting. A Wappalyzer probe identifies the application stack and triggers a matching crawler (hypertext preprocessor (PHP), ASP.NET, NodeJS, or a general fallback) capable of both unauthenticated and credential-based traversal. Discovered uniform resource locators (URLs) feed three exploit modules: a sqlmap-integrated structured query language injection (SQLi) tester, a custom reflective cross-site scripting (XSS) injector, and a Python-deserialization module that uses a Base64-encoded pickle payload to open an interactive reverse shell. Each module writes immediate JavaScript object notation (JSON) records containing the URL, parameter, payload, and evidence, allowing real-time analysis and preserving data for audit. Empirical evaluation on four deliberately vulnerable benchmarks shows that AdaWeb cuts manual triage time by 52% and, by leveraging stack-aligned payloads and authenticated scanning, eliminates false-negative cases that defeat generic scanners, making it a drop-in upgrade for DevSecOps pipelines.
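The incremental JSON records described above can be sketched as a small writer function; the field names mirror those listed in the text (URL, parameter, payload, evidence), but the exact schema, the Base64 encoding of the payload, and the sample values are illustrative assumptions:

```python
import base64
import json

def finding_record(url, parameter, payload, evidence):
    """Serialize one vulnerability finding as a self-contained JSON line,
    Base64-encoding the payload so binary or quote-heavy input stays safe."""
    return json.dumps({
        "url": url,
        "parameter": parameter,
        "payload": base64.b64encode(payload.encode()).decode(),
        "evidence": evidence,
    })

rec = finding_record("http://target.local/item.php", "id",
                     "' OR 1=1 --", "reflected error page")
parsed = json.loads(rec)
```

Emitting one complete JSON object per finding, rather than a single report at the end, is what enables the real-time analysis and audit trail the abstract describes: a consumer can tail the output stream and act on each record as it arrives.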
Advances in dermatological imaging: enhancing skin melanoma classification for improved patient outcomes Sahoo, Debadutta; Mishra, Soumya
Computer Science and Information Technologies Vol 7, No 1: March 2026
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/csit.v7i1.p111-120

Abstract

The study presents an enhanced AlexNet-based deep learning system for binary classification of melanoma skin cancer as benign or malignant, using two paired dermatoscopic and clinical image datasets. It evaluates model resilience across image sets with common preprocessing and targeted data augmentation, using a melanoma dataset of 10,000 images and a benign-versus-malignant dataset of 3,600 images. The refined AlexNet exceeded several standard machine learning (ML) classifiers and other deep architectures on both datasets with practical training times, achieving balanced accuracies of 97.12% and 96.21%. Training used SGD as the optimiser and cross-entropy loss on 256×256 images. Benchmarking against support vector machine (SVM), k-nearest neighbour (KNN), and other convolutional neural network (CNN) designs shows that the selected architecture and hyperparameters achieve the highest performance at a computational cost suitable for routine melanoma triage. The report highlights the need for external validation, integration into dermatological workflows, and explainability to improve trust, reduce dataset bias, and support safe clinical deployment in practice.
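The balanced-accuracy figures quoted above use the standard definition: the mean of per-class recalls, which, unlike plain accuracy, is not inflated by class imbalance between benign and malignant cases. A minimal sketch with toy labels (the label values are illustrative):

```python
def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; for binary labels this equals
    (sensitivity + specificity) / 2."""
    recalls = []
    for c in set(y_true):
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(1 for i in idx if y_pred[i] == c) / len(idx))
    return sum(recalls) / len(recalls)

# toy imbalanced case: 4 malignant (1) vs 2 benign (0)
y_true = [1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 0, 0, 1]
score = balanced_accuracy(y_true, y_pred)
```

In this toy example the malignant recall is 0.75 and the benign recall is 0.50, giving a balanced accuracy of 0.625, noticeably lower than the plain accuracy of 4/6, which is exactly the imbalance effect the metric corrects for.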