Journal Mail Official
juti.if@its.ac.id
Editorial Address
Gedung Teknik Informatika Lantai 2 Ruang IF-230, Jalan Teknik Kimia, Kampus ITS Sukolilo, Surabaya, 60111
Location
Kota Surabaya,
Jawa Timur
Indonesia
JUTI: Jurnal Ilmiah Teknologi Informasi
ISSN: 2406-8535 | EISSN: 1412-6389 | DOI: http://dx.doi.org/10.12962/j24068535
JUTI (Jurnal Ilmiah Teknologi Informasi) is a scientific journal managed by the Department of Informatics, ITS.
Articles in issue "Vol. 24, No. 1, January 2026": 10 documents
Optimized Closed Frequent High Utility Itemset Mining Using OSR, OWL, and MSU Pruning on Retail Transaction Data
Kinana Syah Sulanjari; Chastine Fatichah
JUTI: Jurnal Ilmiah Teknologi Informasi Vol. 24, No. 1, January 2026
Publisher: Department of Informatics, Institut Teknologi Sepuluh Nopember

DOI: 10.12962/j24068535.v24i1.a1311

Abstract

This research proposes the optimization of the Frequent Closed High-Utility Itemset Mining (FCHUIM) algorithm for retail transaction datasets using three heuristic-based pruning techniques: Observed Support Ratio (OSR), Observed Weighted Lift (OWL), and Modified Subtree Utility (MSU). The algorithm aims to efficiently extract high-value itemsets that are both frequent and economically significant while minimizing redundant patterns through closed itemset mining. A real-world retail dataset from a consumer cooperative, comprising 56,274 transactions and 4,265 unique items, was used in the experiments. The study evaluates the effectiveness of each pruning technique, individually and in combination, across multiple scenarios of minimum support and utility thresholds. Results show that the proposed optimizations reduce the search space by up to 92.5%, significantly lowering execution time and memory usage. Sensitivity analyses reveal that the minimum utility parameter has a stronger impact on computational efficiency than minimum support, while scalability tests confirm the algorithm's ability to handle increasing dataset sizes with linear performance degradation. These findings confirm that the optimized FCHUIM algorithm is suitable for large-scale retail data mining applications, especially in scenarios requiring fast and concise pattern extraction. Future work may explore real-time integration into recommendation systems and adaptive thresholding for dynamic retail environments.
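As a rough illustration of the pruning idea this abstract builds on, the sketch below shows classic transaction-weighted utility (TWU) pruning. The paper's OSR, OWL, and MSU heuristics refine this kind of upper bound, but their exact formulas are not given here, so the data, threshold, and function name are all illustrative assumptions.

```python
# Sketch of generic TWU pruning for high-utility itemset mining (NOT the
# paper's OSR/OWL/MSU heuristics): an item whose transaction-weighted utility
# falls below min_utility can never appear in a high-utility itemset, so it
# is discarded before the search begins.
from collections import defaultdict

def twu_prune(transactions, min_utility):
    """transactions: list of dicts {item: utility}. Returns surviving items."""
    twu = defaultdict(int)
    for t in transactions:
        tu = sum(t.values())      # total utility of this transaction
        for item in t:
            twu[item] += tu       # each item accumulates whole-transaction utility
    return {item for item, u in twu.items() if u >= min_utility}

transactions = [
    {"bread": 2, "milk": 5},      # transaction utility 7
    {"bread": 3, "gum": 1},       # transaction utility 4
    {"milk": 6},                  # transaction utility 6
]
survivors = twu_prune(transactions, min_utility=10)
print(sorted(survivors))          # 'gum' has TWU 4 and is pruned
```

Shrinking the candidate set this way is what produces the search-space reductions the abstract reports; the closed-itemset step then removes redundant supersets among the survivors.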
SentiBERT and Enhanced Bi-GRU for Weather-related Text Classification Using Lexical Features
Mohamad Anwar Syaefudin; Arijal Ibnu Jati; Hilya Tsaniya; Chastine Fatichah; Diana Purwitasari

DOI: 10.12962/j24068535.v24i1.a1320

Abstract

The growing volume of weather-related content on social media platforms, especially Twitter, has highlighted the need for robust classification models that can handle noisy, ambiguous, and emotionally subtle language. However, existing machine learning models such as Support Vector Machines (SVM) often fail to effectively capture implicit sentiment and sequential context in short, real-time texts. This study addresses the challenge of weather-related text classification by proposing a hybrid architecture that combines SentiBERT, a sentiment-aware transformer model, with an Enhanced BiGRU network equipped with Self-Attention and LeakyReLU activation. Experiments were conducted using a five-class (sunny, cloudy, rainy, extreme, other) dataset of weather-related tweets with stratified cross-validation across multiple deep learning models and tokenizers. Results show that the proposed SentiBERT + Enhanced BiGRU model outperformed all baselines, achieving 88.03% accuracy and an 88.25% macro F1 score, demonstrating its ability to better interpret contextual and emotional nuances. These findings imply that integrating sentiment-specific embeddings with sequential modeling and lexical features offers a promising direction for future real-time applications in climate monitoring and disaster alert systems.
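The headline metric reported here, macro F1, weights every weather class equally, so the rare "extreme" class counts as much as the common "sunny" class. A minimal pure-Python computation of the metric (with toy labels invented for illustration, not the paper's data) looks like this:

```python
# Macro-averaged F1: compute per-class precision/recall/F1, then take the
# unweighted mean across classes.
def macro_f1(y_true, y_pred, classes):
    scores = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores)

labels = ["sunny", "rainy", "extreme"]
y_true = ["sunny", "sunny", "rainy", "extreme"]
y_pred = ["sunny", "rainy", "rainy", "extreme"]
print(round(macro_f1(y_true, y_pred, labels), 3))  # -> 0.778
```

Because each class contributes one equal term to the average, a model cannot reach a high macro F1 by doing well only on the majority classes.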
Explainable BERT Embeddings for Veracity Assessment in Criminal Investigations
Thoha Haq; Chastine Fatichah; Anny Yuniarti

DOI: 10.12962/j24068535.v24i1.a1327

Abstract

The binary classification of truth and lies is often a limitation in criminal investigations, as statements are frequently neither entirely true nor entirely false. This ambiguity in the veracity of such claims demands more extensive methods, such as explainable models. Explainable models, particularly SHapley Additive exPlanations (SHAP), can help dissect statements and narrow down information for a more thorough investigation. Data from the Miami University Deception Database, comprising various statements and their veracity, was analyzed for its linguistic features. This research utilizes Bidirectional Encoder Representations from Transformers (BERT) embeddings to provide contextual understanding of statements and sentiment lexicons to provide domain-specific knowledge. Results show that the R² (coefficient of determination) of the 2-gram embedding performed the best at 0.39, as it captures more context than the 1-gram embedding while remaining more general than the 3-gram and 4-gram embeddings. Each variant of the BERT embedding proved much more effective than general word embeddings such as GloVe, Word2Vec, and FastText. SHAP values were able to capture key points of interest in a statement by narrowing down pivotal, decision-making points. These results highlight potential indicators of deceptive or truthful language, such as the words ‘something’ and ‘our’. These points of interest can help investigators focus on key points of investigation and intervention.
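The R² values compared here measure how much of the variance in veracity scores a regression model explains. The metric itself is a few lines of arithmetic; the toy scores below are invented for illustration and are not the paper's data.

```python
# Coefficient of determination: 1 - (residual sum of squares / total sum of
# squares). 1.0 means perfect prediction; 0.0 means no better than the mean.
def r_squared(y_true, y_pred):
    mean = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Toy veracity scores in [0, 1] and a hypothetical model's predictions.
y_true = [0.1, 0.4, 0.6, 0.9]
y_pred = [0.2, 0.35, 0.7, 0.8]
print(round(r_squared(y_true, y_pred), 3))
```

An R² of 0.39 on a deception task is far from perfect prediction, which is exactly why the abstract pairs it with SHAP: the goal is pointing investigators at influential words, not a definitive lie detector.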
Handling Ambiguity in App Review-Based Software Requirement Classification Using Multi-Label BERT Transfer Learning
Stefani Tasya Hallatu; Muhammad Jerino Gorter; Andrea Bemantoro J; Diana Purwitasari; Chastine Fatichah; Hilya Tsaniya

DOI: 10.12962/j24068535.v24i1.a1333

Abstract

User-generated reviews on mobile applications represent a valuable yet ambiguous resource for classifying software requirements, particularly when multiple aspects—such as bugs, feature requests, and user experiences—are embedded within a single review. Although prior studies have shown the potential of transformer-based and multi-label models in improving text classification accuracy and efficiency, explicit handling of semantic ambiguity in multi-aspect reviews has not been addressed. This study proposes a multi-label classification approach using BERT-based transfer learning to manage ambiguity in app reviews. Each review is manually annotated with one or more relevant requirement categories. Preprocessing involves text cleaning, normalization, and BERT tokenization to convert reviews into structured representations. The classification model categorizes reviews into four classes: bug reports, feature requests, user experiences, and ratings. Evaluation results demonstrate strong performance, with F1-scores of 0.96 for bug reports, 0.95 for feature requests, 0.97 for ratings, and 0.80 for user experiences, confirming the model’s capability in capturing overlapping labels in ambiguous reviews. This approach offers a scalable and automated solution for extracting software requirements, enabling developers to better identify, categorize, and prioritize user needs from unstructured review data.
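The multi-label decision step described above is simple once per-class probabilities exist. The sketch below assumes independent per-class sigmoid outputs and a 0.5 threshold, both common choices for multi-label BERT classifiers but not confirmed by the paper; class names follow the abstract.

```python
# Multi-label assignment: unlike softmax (one winner), independent sigmoid
# outputs let an ambiguous review carry several requirement labels at once.
CLASSES = ["bug report", "feature request", "user experience", "rating"]

def assign_labels(probs, threshold=0.5):
    """probs: per-class sigmoid outputs for one review, aligned with CLASSES."""
    return [c for c, p in zip(CLASSES, probs) if p >= threshold]

# A review mixing a crash complaint with a feature wish receives two labels.
labels = assign_labels([0.91, 0.78, 0.22, 0.10])
print(labels)
```

This per-class independence is what lets the model capture the overlapping labels the evaluation measures with per-class F1 scores.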
A Comparative Study Evaluation of Kafka and RabbitMQ: Performance, Scalability and Stress Test in Distributed Messaging Systems
Muhammad Rias; Ach Muhyil Umam; Anani Asmani; Royyana Muslim Ijtihadie

DOI: 10.12962/j24068535.v24i1.a1345

Abstract

The two most widely used Message-Oriented Middleware (MOM) technologies are Apache Kafka and RabbitMQ, which differ fundamentally in architecture and performance characteristics. Kafka is designed for high-throughput, scalable data stream processing, while RabbitMQ excels in message routing flexibility, delivery reliability, and complex queue management. This study presents a comprehensive comparative analysis of these two message brokers, evaluating their performance, scalability, and behaviour under stress to guide the selection of the most suitable broker for modern distributed system architectures. Experimental testing was carried out in four scenarios: message size variation (1 KB, 10 KB, and 100 KB) to measure performance by payload size; message volume variation (10,000, 50,000, and 100,000 messages) to probe throughput limits and resource usage; consumer count variation (1, 5, and 10) to measure consumer-side scalability; and a high-intensity stress test of 100,000 messages in 10 seconds to evaluate stability and latency under overload. Key performance metrics, such as throughput, latency, CPU usage, and RAM consumption, were carefully evaluated. Overall, the results showed that RabbitMQ was more suitable for systems sensitive to message speed and volume, while Kafka was more appropriate for extreme workloads with high durability requirements. The experiments provide empirical evidence that RabbitMQ is highly effective for applications that require sending high-volume, low-latency individual messages, whereas Kafka's strength lies in handling large data streams and maintaining stability under intense, sustained loads.
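The throughput and latency figures behind comparisons like this reduce to timestamp arithmetic over send/receive pairs. The sketch below is benchmark bookkeeping only, not Kafka or RabbitMQ client code, and the timestamps are invented for illustration.

```python
# Derive throughput (msg/s), mean latency, and p99 latency (ms) from
# per-message send and receive timestamps recorded during a benchmark run.
def broker_metrics(send_ts, recv_ts):
    """Timestamps in seconds; returns (throughput, mean latency, p99 latency)."""
    latencies = sorted((r - s) * 1000 for s, r in zip(send_ts, recv_ts))
    duration = max(recv_ts) - min(send_ts)           # wall-clock span of the run
    throughput = len(send_ts) / duration
    p99 = latencies[min(len(latencies) - 1, int(0.99 * len(latencies)))]
    return throughput, sum(latencies) / len(latencies), p99

send = [0.00, 0.01, 0.02, 0.03]
recv = [0.05, 0.05, 0.06, 0.08]
tput, avg_ms, p99_ms = broker_metrics(send, recv)
print(round(tput, 1), round(avg_ms, 1), round(p99_ms, 1))
```

Reporting a tail percentile alongside the mean matters most in the overload scenario, where a broker can keep a good average while its worst-case latency degrades sharply.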
The Role of Advanced Penetration Testing Techniques in Enhancing Cybersecurity: A Survey on Web Application Security
EMMANUEL BUGINGO; Voltaire ISHIMWE; Ghislaine UWASE SIMBI; Adeline DUSENGE; Jean Baptiste NIZEYIMANA

DOI: 10.12962/j24068535.v24i1.a1372

Abstract

The Internet continues to grow as an exciting platform for numerous web applications, providing crucial services to various industries and user communities. However, this expansion comes with an increase in cybersecurity threats, as web applications remain a primary target for malicious actors. Despite the creation of numerous security frameworks, yearly reports and the OWASP (Open Web Application Security Project) Top 10 consistently highlight the ongoing presence of severe vulnerabilities in contemporary web platforms. This paper examines the significance of advanced penetration testing methods in enhancing cybersecurity, particularly in the context of web application security. Using a combination of manual and automated testing approaches, incorporating tools such as Burp Suite and Metasploit, the study investigates how sophisticated penetration testing can uncover, exploit, and address vulnerabilities, including SQL injection, cross-site scripting (XSS), and weak authentication methods. The results highlight that even small vulnerabilities can have significant practical impact, emphasizing the importance of ongoing, intelligent testing approaches.
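The SQL injection class of flaw discussed here, and its standard remediation, can be demonstrated with the stdlib sqlite3 module. This is a generic illustration, not an example from the survey: a string-built query lets the classic `' OR '1'='1` payload bypass a login check, while a parameterized query treats the same payload as inert data.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE users (name TEXT, pw TEXT)")
db.execute("INSERT INTO users VALUES ('alice', 's3cret')")

payload = "anything' OR '1'='1"

# Vulnerable: attacker-controlled text is spliced into the SQL itself, so the
# OR clause rewrites the query's logic and the check matches every row.
unsafe = db.execute(
    f"SELECT count(*) FROM users WHERE name = 'alice' AND pw = '{payload}'"
).fetchone()[0]

# Safe: the driver binds the payload as a literal value via the ? placeholder,
# so it is compared as an (incorrect) password and matches nothing.
safe = db.execute(
    "SELECT count(*) FROM users WHERE name = 'alice' AND pw = ?", (payload,)
).fetchone()[0]

print(unsafe, safe)
```

Penetration testing tools automate the discovery of the first pattern; parameterized queries are the fix the resulting reports recommend.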
Enhancing the Quality of Merged Process Models by Addressing Invisible Task
Kelly Rossa Sungkono; Riyanarto Sarno; I Gusti Agung Chintya Prema Dewi; Muhammad Suzuri Hitam

DOI: 10.12962/j24068535.v24i1.a1381

Abstract

Model merging is a key approach for integrating multiple process model variants into a unified representation. Existing automated merging methods face challenges in handling invisible tasks, which are intentionally inserted in the process model to depict certain conditions, including stacked branching relationships. The inability to handle invisible tasks reduces the quality of the merged process models. A proposed graph-merging method explicitly addresses sequence, branching relationships, and invisible tasks. The proposed method first identifies common activities across model variants. Furthermore, the method applies the proposed graph rules grounded in behavioral and structural aspects to combine those common activities as well as their related relationships and generate the graph-based merged process model. Behavioral rules govern the integration of sequence and branching relationships, while structural rules handle branching and invisible tasks. An evaluation against two existing approaches by Derguech and Yohanes demonstrates that the proposed graph-merging method achieves higher precision. The graph-merging method substantially improves the quality of merged process models.
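The first step the method performs, identifying common activities across variants and combining their relations, can be sketched on toy edge lists. The paper's behavioral and structural rules and its invisible-task handling are not reproduced here; models are reduced to plain directed edge sets for illustration.

```python
# Minimal merge of two process-model variants: find the activities both
# variants share, then union their directed relations so every observed
# behavior survives in the merged graph.
def merge_variants(edges_a, edges_b):
    nodes_a = {n for e in edges_a for n in e}
    nodes_b = {n for e in edges_b for n in e}
    common = nodes_a & nodes_b            # activities present in both variants
    merged = set(edges_a) | set(edges_b)  # union keeps every observed relation
    return common, merged

v1 = [("register", "check"), ("check", "approve")]
v2 = [("register", "check"), ("check", "reject")]
common, merged = merge_variants(v1, v2)
print(sorted(common))   # shared activities
print(len(merged))      # 'approve' and 'reject' now branch after 'check'
```

A naive union like this is exactly where quality problems arise: the branch after "check" needs a correct gateway, and stacked branching may require invisible tasks, which is the gap the proposed graph rules address.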
Performance Evaluation of DNA-Based Cryptographic Algorithms on Constrained IoT Devices
Mircea Ţălu

DOI: 10.12962/j24068535.v24i1.a1386

Abstract

DNA-based cryptographic techniques have attracted significant interest due to their intrinsic parallelism, high algorithmic complexity, and bio-inspired randomness. However, their practical applicability in resource-constrained Internet of Things (IoT) environments remains insufficiently explored. This study presents a comprehensive performance evaluation of six representative DNA-based encryption schemes—DNA-XOR-Mutation, DNA-Substitution-Shift, Hybrid DNA-Logical Encoding, DNA-Crossover-Encode, DNA-Logical-Shift, and DNA-Hybrid-Crypt—implemented and experimentally measured on embedded platforms typical of IoT devices. These schemes were benchmarked against established lightweight cryptographic algorithms, including PRESENT-80, ASCON-128, SPECK-64, TWINE-80, HIGHT, SIMON-64/128, and LED-64, using an experimental measurement environment configured to reflect the specifications of widely deployed microcontrollers such as ATmega328P, STM32F0, ESP32, nRF52840, PIC24FJ64GA, and MSP430. Performance metrics encompassed execution time, ROM/RAM memory footprint, and energy consumption. The results indicate that while DNA-based algorithms generally demand greater memory resources and exhibit higher latency than hardware-optimized lightweight ciphers, they demonstrate superior diffusion properties and enhanced resistance against classical differential cryptanalysis. These findings highlight the promise of DNA-inspired cryptography as a complementary security mechanism for next-generation IoT systems, particularly in scenarios requiring polymorphic or non-deterministic encryption approaches. Finally, we discuss optimization strategies and hardware integration considerations, offering a performance-driven foundation for further research into DNA-based cryptographic primitives within IoT security frameworks.
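A common building block of DNA-XOR style schemes is a two-bits-per-nucleotide encoding combined with an XOR key stream. The sketch below uses one customary base assignment (A=00, C=01, G=10, T=11) and is not any of the six benchmarked ciphers, which layer further mutation and substitution steps on top of this primitive.

```python
# DNA encoding primitive: each byte maps to four nucleotides (2 bits each);
# encryption XORs the plaintext with a repeating key before encoding.
BASES = "ACGT"

def to_dna(data: bytes) -> str:
    return "".join(BASES[(b >> s) & 3] for b in data for s in (6, 4, 2, 0))

def from_dna(seq: str) -> bytes:
    vals = [BASES.index(c) for c in seq]
    return bytes(
        (vals[i] << 6) | (vals[i + 1] << 4) | (vals[i + 2] << 2) | vals[i + 3]
        for i in range(0, len(vals), 4)
    )

def xor_encrypt(data: bytes, key: bytes) -> str:
    return to_dna(bytes(d ^ key[i % len(key)] for i, d in enumerate(data)))

ct = xor_encrypt(b"IoT", b"\x2a")
pt = bytes(d ^ 0x2a for d in from_dna(ct))   # XOR is its own inverse
print(ct, pt)
```

The 4x expansion from bytes to bases hints at why the abstract finds these schemes costlier in memory and latency than bit-oriented lightweight ciphers on microcontrollers.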
Multi-task Temporal Deep Learning Model for Real Time Intrusion Detection System
Christian Budhi Sabdana; Noriandini Dewi Salyasari; Izra Noor Zahara Aliya; Ary Mazharuddin Shiddiqi

DOI: 10.12962/j24068535.v24i1.a1446

Abstract

The rapid expansion of Internet of Things (IoT) ecosystems has enabled large-scale interconnected smart environments while simultaneously exposing IoT devices to increasingly sophisticated cyber threats. To address these challenges, machine learning and deep learning–based intrusion detection systems (IDS) have been widely adopted; however, many existing approaches suffer from limited generalization, insufficient temporal modeling, and poor performance under extreme class imbalance. In this study, we investigate a multi-task stacked Long Short-Term Memory (LSTM) architecture for IoT intrusion detection, where binary anomaly detection and multi-class attack classification are jointly learned within a unified temporal framework. The proposed model examines different inter-path knowledge transfer mechanisms, including additive, gated, and attention-based aggregation, to enhance discriminative attack representation learning. A topology-constrained shuffling strategy is further introduced to preserve intra-flow temporal dependencies while reducing reliance on fixed traffic ordering. Experimental results on the Edge-IIoTset dataset show that all models achieve high binary detection performance (F1-score above 97%), while attention-based aggregation consistently outperforms static fusion strategies for multi-class classification, yielding superior macro F1-score and AUC-PR under severe class imbalance. These findings emphasize the importance of context-aware information sharing and temporal structure preservation for robust and adaptive IoT intrusion detection systems.
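The gated inter-path aggregation idea mentioned above can be shown elementwise without a deep-learning framework: a gate value g in (0, 1) decides, per feature, how much of the detection path's representation flows into the classification path. The gate logits below are hand-picked for illustration, whereas the actual model learns them from data.

```python
# Gated aggregation of two task paths: h = g * h_detect + (1 - g) * h_classify,
# computed per feature, with g = sigmoid(gate_logit).
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def gated_merge(h_detect, h_classify, gate_logits):
    return [
        sigmoid(z) * a + (1 - sigmoid(z)) * b
        for a, b, z in zip(h_detect, h_classify, gate_logits)
    ]

h_det = [1.0, 0.0]
h_cls = [0.0, 1.0]
# First gate logit 0.0 gives g = 0.5 (even blend); second, 100.0, gives g ~= 1
# (the detection path dominates that feature entirely).
merged = gated_merge(h_det, h_cls, gate_logits=[0.0, 100.0])
print([round(v, 3) for v in merged])
```

Attention-based aggregation, which the abstract reports as the strongest variant, generalizes this by computing the mixing weights from the representations themselves rather than from standalone gate parameters.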
Decision Support System for Determining Strategic Warehouse Locations Using a Combination of the WENSLO Weighting and RAWEC Method
Junhai Wang; Setiawansyah Setiawansyah; Temi Ardiansah; Faruk Ulum; Sumanto Sumanto

DOI: 10.12962/j24068535.v24i1.a1456

Abstract

Determining the location of a strategic warehouse is a crucial decision in supply chain management as it directly affects distribution efficiency, logistics costs, and service levels. This problem is multi-criteria and complex, requiring an approach that can accommodate differences in the importance of criteria as well as variations in performance among alternatives objectively. This study aims to develop a Decision Support System to determine a strategic warehouse location by combining the Weights by Envelope and Slope (WENSLO) weighting method and the Ranking of Alternatives with Weights of Criterion (RAWEC) ranking method. The WENSLO method is used to generate criteria weights based on the nonlinear strength of each criterion, while the RAWEC method is applied to calculate the final values and determine the ranking of warehouse location alternatives. A case study was conducted on eleven alternative locations with the main criteria including location cost, accessibility, safety, distribution travel time, and proximity to suppliers. The study results showed that Location TR obtained the highest final score of 0.9673 and was designated as the top priority warehouse location, followed by Location RD with a score of 0.6235 and Location HO with a score of 0.338, while Location QC had the lowest score of −0.975. These findings demonstrate that the combination of the WENSLO and RAWEC methods can produce rankings that are objective, consistent, and easy to interpret, making them a reliable decision-support tool for determining strategic warehouse locations and potentially applicable to other logistics and distribution problems.
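In the same spirit as the WENSLO+RAWEC pipeline (criteria weights applied to normalized alternative scores, with cost-type criteria inverted), a generic weighted multi-criteria ranking looks like the sketch below. The exact WENSLO and RAWEC formulas are not reproduced here, and the weights, criteria, and location data are illustrative assumptions, not the paper's case study.

```python
# Generic weighted MCDM ranking: min-max normalize each criterion, invert
# cost-type criteria (lower is better), apply weights, and sort alternatives.
def rank_locations(scores, weights, cost_criteria):
    """scores: {alternative: [criterion values]}; cost_criteria: set of indices."""
    cols = list(zip(*scores.values()))            # values grouped per criterion
    ranked = {}
    for alt, vals in scores.items():
        total = 0.0
        for j, w in enumerate(weights):
            lo, hi = min(cols[j]), max(cols[j])
            norm = (vals[j] - lo) / (hi - lo) if hi > lo else 1.0
            if j in cost_criteria:                # e.g. cost, travel time
                norm = 1.0 - norm
            total += w * norm
        ranked[alt] = round(total, 4)
    return sorted(ranked.items(), key=lambda kv: -kv[1])

# criteria: [location cost, accessibility, travel time]; cost and time are
# cost-type (indices 0 and 2).
scores = {"TR": [120, 9, 2.0], "RD": [150, 8, 2.5], "QC": [200, 5, 4.0]}
ranking = rank_locations(scores, weights=[0.4, 0.35, 0.25], cost_criteria={0, 2})
print(ranking)
```

As in the study's results, the alternative that dominates on the weighted criteria lands on top and the one dominated on every criterion lands last; WENSLO's contribution is deriving the weights from the data rather than fixing them by hand.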
