Contact Name
-
Contact Email
-
Phone
-
Journal Mail Official
-
Editorial Address
-
Location
Kota Surabaya,
Jawa Timur
INDONESIA
Journal of Information Systems Engineering and Business Intelligence
Published by Universitas Airlangga
ISSN : -     EISSN : -     DOI : -
Core Subject : Science
This journal accepts scientific papers focusing on Information Systems Engineering and Business Intelligence. Information Systems Engineering is a multidisciplinary approach to the activities involved in developing and managing information systems in pursuit of organizational goals. The scope of Information Systems Engineering papers includes (but is not limited to): development, management, and use of information systems; organizational governance; Enterprise Resource Planning; Enterprise Architecture Planning; and Knowledge Management. Business Intelligence examines techniques for transforming raw data into information useful for decision-making: identifying new opportunities and implementing business strategies based on information derived from data, thereby creating competitive advantage. The scope of Business Intelligence papers includes (but is not limited to): data mining; text mining; data warehousing; Online Analytical Processing; Artificial Intelligence; and Decision Support Systems.
Arjuna Subject : -
Articles 246 Documents
Towards Smart and Green Features of Cloud Computing in Healthcare Services: A Systematic Literature Review Aschalew Arega; Durga Prasad Sharma
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.161-180

Abstract

Background: The healthcare sector has been facing multilateral challenges regarding the quality of services and access to healthcare innovations. As the population grows, the sector requires faster and more reliable services, but the opposite is true in developing countries. As a robust technology, cloud computing has numerous features and benefits that are still to be explored. The intervention of the latest technologies in healthcare is crucial to shifting toward next-generation healthcare systems. In developing countries like Ethiopia, cloud features are still far from being systematically explored to design smart and green healthcare services. Objective: To uncover contextualized research gaps in existing studies on the smart and green features of cloud computing in healthcare information services. Methods: We conducted a systematic review of research publications indexed in Scopus, Web of Science, IEEE Xplore, PubMed, and ProQuest. A total of 52 research articles were screened against explicit selection criteria and systematically reviewed, with extensive effort made to cover recent, contemporary, and relevant work rigorously. Results: This study presents a summary of parameters, the solutions proposed in the reviewed articles, and the identified research gaps. These gaps relate to security and privacy concerns, data repository standardization, data shareability, self-health data access control, service collaboration, energy efficiency/greenness, consolidation of health data repositories, carbon footprint, and performance evaluation. Conclusion: The paper consolidates research gaps from multiple research investigations into a single paper, allowing researchers to develop innovative solutions for improving healthcare services. Based on a rigorous analysis of the literature, the existing systems overlooked green computing features and were highly vulnerable to security violations. Several studies reveal that security and privacy threats have seriously hampered the exponential growth of cloud computing; 54% of the reviewed articles focused on security and privacy concerns. Keywords: Cloud computing, Consolidation, Green computing, Green features, Healthcare services, Systematic literature review.
Transfer Learning based Low Shot Classifier for Software Defect Prediction Vikas Suhag; Sanjay Kumar Dubey; Bhupendra Kumar Sharma
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.228-238

Abstract

Background: The rapid growth and increasing complexity of software applications are causing challenges in maintaining software quality within constraints of time and resources. This challenge led to the emergence of a field of study known as Software Defect Prediction (SDP), which focuses on predicting future defects in advance, thereby reducing costs and improving productivity in the software industry. Objective: This study aimed to address data distribution disparities when applying transfer learning in multi-project scenarios, and to mitigate performance issues resulting from data scarcity in SDP. Methods: The proposed approach, namely the Transfer Learning based Low Shot Classifier (TLLSC), combined transfer learning and low shot learning approaches to create an SDP model. This model was designed for application in both new projects and those with minimal historical defect data. Results: Experiments were conducted using standard datasets from projects within the National Aeronautics and Space Administration (NASA) and Software Research Laboratory (SOFTLAB) repositories. TLLSC showed an average increase in F1-Measure of 31.22%, 27.66%, and 27.54% for projects AR3, AR4, and AR5, respectively. These results surpassed those from Transfer Component Analysis (TCA+), Canonical Correlation Analysis (CCA+), and Kernel Canonical Correlation Analysis plus (KCCA+). Conclusion: The comparison between TLLSC and the state-of-the-art algorithms TCA+, CCA+, and KCCA+ from the existing literature consistently showed that TLLSC performed better in terms of F1-Measure. Keywords: Just-in-time, Defect Prediction, Deep Learning, Transfer Learning, Low Shot Learning
Advancement in Bangla Sentiment Analysis: A Comparative Study of Transformer-Based and Transfer Learning Models for E-commerce Sentiment Classification Zishan Ahmed; Shakib Sadat Shanto; Akinul Islam Jony
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.181-194

Abstract

Background: As a direct result of the Internet's expansion, the quantity of information shared by Internet users across its numerous platforms has increased. Sentiment analysis functions at a higher level when there are more available perspectives and opinions. However, the lack of labeled data significantly complicates sentiment analysis using Bangla natural language processing (NLP). In recent years, nevertheless, due to the development of more effective deep learning models, Bangla sentiment analysis has improved significantly. Objective: This article presents a curated dataset for Bangla e-commerce sentiment analysis obtained solely from the "Daraz" platform. We aim to conduct sentiment analysis in Bangla for binary and understudied multiclass classification tasks. Methods: Transfer learning (LSTM, GRU) and Transformer (Bangla-BERT) approaches are compared for their effectiveness on our dataset, and the models were fine-tuned to enhance overall performance. Results: Bangla-BERT achieved the highest accuracy on both tasks: 94.5% for binary classification and 88.78% for multiclass sentiment classification. Conclusion: Our proposed method performs noticeably better at classifying multiclass sentiments in Bangla than previous deep learning techniques. Keywords: Bangla-BERT, Deep Learning, E-commerce, NLP, Sentiment Analysis
The Use of Machine Learning to Detect Financial Transaction Fraud: Multiple Benford Law Model for Auditors Doni Wiryadinata; Aris Sugiharto; Tarno Tarno
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.239-252

Abstract

Background: Fraud in financial transactions is at the root of corruption issues recorded in organizations. Detecting fraudulent practices has become increasingly complex and challenging, so auditors require precise analytical tools for fraud detection. Grouping financial transaction data with the K-Means Clustering algorithm can enhance the efficiency of applying Benford's Law for optimal fraud detection. Objective: This study aimed to introduce the Multiple Benford Law Model for analyzing data to reveal potentially concealed fraud in the audited organization's financial transactions. The data was categorized into low, medium, and high transaction values using the K-Means Clustering algorithm, then reanalyzed through the Multiple Benford Law Model in a specialized fraud analysis tool. Methods: The experimental procedures of the Multiple Benford Law Model, designed for public sector organizations, were applied. The suspected fraud flagged by the toolkit was compared with the actual conditions reported in the audit report. The financial transaction dataset was grouped into three distinct clusters using the Euclidean distance equation. Data in each cluster was analyzed using Benford's Law, comparing the observed frequency of first digits to the frequency expected under Benford's Law; significant deviations exceeding ±5% were considered potential areas for further scrutiny in the audit. The analysis was validated by cross-referencing the results with the findings presented in the authorized audit organization's report. Results: The Multiple Benford Law Model was incorporated into an audit toolkit to automate calculations based on Benford's Law, and the dataset was categorized by K-Means Clustering into three clusters representing low-, medium-, and high-value transactions. Applying Benford's Law alone showed a 40.00% potential for fraud detection. However, when using the Multiple Benford Law Model with the data divided into three clusters, fraud detection accuracy increased to 93.33%. Comparison with the audit report indicated 75.00% consistency with the actual events discovered. Conclusion: The use of the Multiple Benford Law Model in the audit toolkit substantially improved the accuracy of detecting potential fraud in financial transactions. Validation against the audit report confirmed the conformity between the identified fraud practices and the flagged transactions. Keywords: Fraud Detection, Benford's Law, K-Means Clustering, Audit Toolkit, Fraudulent Practices.
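The first-digit test at the core of this model can be sketched in plain Python. This is an illustrative outline only, not the authors' toolkit: the expected frequencies follow Benford's Law, the ±5% deviation threshold comes from the abstract, and the function names and sample data are hypothetical.

```python
import math
from collections import Counter

def benford_expected(d):
    # Benford's Law: P(first digit = d) = log10(1 + 1/d)
    return math.log10(1 + 1 / d)

def first_digit(amount):
    # Leading digit of a (non-zero) transaction amount
    return int(str(abs(int(amount)))[0])

def flag_digits(amounts, threshold=0.05):
    """Return the first digits whose observed frequency deviates from
    Benford's expectation by more than the threshold (+/- 5%)."""
    counts = Counter(first_digit(a) for a in amounts if int(a) != 0)
    n = sum(counts.values())
    flagged = []
    for d in range(1, 10):
        observed = counts.get(d, 0) / n
        if abs(observed - benford_expected(d)) > threshold:
            flagged.append(d)
    return flagged
```

In the Multiple Benford Law Model described above, such a test would be run separately on each of the three K-Means clusters (low-, medium-, and high-value transactions) rather than on the ledger as a whole.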
Enhancing Multi-Output Time Series Forecasting with Encoder-Decoder Networks Kristoko Dwi Hartomo; Joanito Agili Lopo; Hindriyanto Dwi Purnomo
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.195-213

Abstract

Background: Multi-output time series forecasting is a complex problem that requires handling interdependencies and interactions between variables. Traditional statistical approaches and machine learning techniques often struggle to predict such scenarios accurately, so advanced techniques and model reconstruction are necessary to improve forecasting accuracy. Objective: This study proposed an Encoder-Decoder network to address multi-output time series forecasting challenges by predicting every output simultaneously, and investigates the capabilities of the Encoder-Decoder architecture in handling such tasks. Methods: The proposed model uses a 1-Dimensional Convolutional Neural Network with Bidirectional Long Short-Term Memory (BiLSTM) in the encoder. The encoder extracts time series features and incorporates a residual connection to produce a context representation used by the decoder. The decoder employs multiple unidirectional LSTM modules and linear transformation layers to generate the outputs at each time step; each module is responsible for a specific output and shares information and context across outputs and steps. Results: The proposed model achieves lower error rates, as measured by MSE, RMSE, and MAE, for all outputs and forecasting horizons. Notably, the 6-hour horizon achieves the highest accuracy across all outputs. Furthermore, the model exhibits robustness in single-output forecasting and transfer learning, showing adaptability to different tasks and datasets. Conclusion: The experimental findings highlight the successful multi-output forecasting capabilities of the proposed model on time series data, with consistently low error rates (MSE, RMSE, MAE). The model also performs well in single-output forecasts, demonstrating its versatility. Therefore, the proposed model effectively handles various time series forecasting tasks, showing promise for practical applications. Keywords: Bidirectional Long Short-Term Memory, Convolutional Neural Network, Encoder-Decoder Networks, Multi-output forecasting, Multi-step forecasting, Time-series forecasting
Fine-Tuning IndoBERT for Indonesian Exam Question Classification Based on Bloom's Taxonomy Fikri Baharuddin; Mohammad Farid Naufal
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.253-263

Abstract

Background: The learning assessment of elementary schools has recently incorporated Bloom's Taxonomy, an educational framework that categorizes levels of cognitive learning and thinking skills, as a fundamental structure. This assessment now includes High Order Thinking Skill (HOTS) questions, with a specific focus on Indonesian topics. Under this system, teachers must manually categorize or classify questions, a process that typically requires considerable time and resources. To address this difficulty, automated categorization and classification are needed to streamline the process. However, despite various research efforts in question classification, there is still room for improvement in performance, particularly in precision and accuracy. Numerous investigations have explored deep learning Natural Language Processing models such as BERT for classification, and IndoBERT is one such pre-trained model for Indonesian text analysis. Objective: This research aims to build a classification system capable of classifying Indonesian multiple-choice exam questions according to Bloom's Taxonomy using the IndoBERT pre-trained model. Methods: The methodology includes hyperparameter fine-tuning, carried out to identify the optimal model configuration. Performance was evaluated based on accuracy, F1 Score, Precision, Recall, and the time required for training and validation. Results: The proposed fine-tuned IndoBERT model achieved 97% accuracy, a 97% F1 Score, 97% Recall, and 98% Precision, with an average training time of 1.55 seconds per epoch and an average validation time of 0.38 seconds per epoch. Conclusion: The fine-tuned IndoBERT model showed relatively high classification performance, and based on this observation, the system was considered capable of classifying Indonesian exam questions at the elementary school level.
Keywords: IndoBERT, Fine Tuning, Indonesian Exam Question, Model Classifier, Natural Language Processing, Bloom’s Taxonomy
A Fast and Reliable Approach for COVID-19 Detection from CT-Scan Images Md. Jawwad Bin Zahir; Muhammad Anwarul Azim; Abu Nowshed Chy; Mohammad Khairul Islam
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.288-304

Abstract

Background: COVID-19 is a highly contagious respiratory disease with multiple mutant variants, an asymptomatic nature in many patients, and the potential to stay undetected in common tests, which makes it deadlier, more transmissible, and harder to detect. Regardless of variant, COVID-19 infection shows several observable anomalies in computed tomography (CT) scans of the lungs, even in the early stages of infection. A quick and reliable way of detecting COVID-19 is essential to manage its growing transmission and save lives. Objective: This study focuses on developing a deep learning model that can be used as an auxiliary decision system to detect COVID-19 from chest CT-scan images quickly and effectively. Methods: We propose a MobileNet-based transfer learning model to detect COVID-19 in CT-scan images. To test the performance of the proposed model, we collect three publicly available COVID-19 CT-scan datasets and prepare a fourth by combining them. We also implement a mobile application using the model trained on the combined dataset, which can serve as an auxiliary decision system for COVID-19 screening in practice. Results: Our proposed model achieves a promising accuracy of 96.14% on the combined dataset and accuracies of 98.75%, 98.54%, and 97.84%, respectively, on the three collected datasets. It also outperforms other transfer learning models while consuming less memory, ensuring the best performance on both normal and low-powered, resource-constrained devices. Conclusion: We believe the promising performance of the proposed method will facilitate its use as an auxiliary decision system to detect COVID-19 patients quickly and reliably. This will allow authorities to take immediate measures to limit COVID-19 transmission and prevent further casualties, as well as accelerate COVID-19 screening while reducing the workload of medical personnel.
Keywords: Auxiliary Decision System, COVID-19, CT Scan, Deep Learning, MobileNet, Transfer Learning
Implementations of Artificial Intelligence in Various Domains of IT Governance: A Systematic Literature Review Eva Hariyanti; Made Balin Janeswari; Malvin Mikhael Moningka; Fikri Maulana Aziz; Annisa Rahma Putri; Oxy Setyo Hapsari; Nyoman Agus Arya Dwija Sutha; Yohannes Alexander Agusti Sinaga; Manik Prasanthi Bendesa
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.305-319

Abstract

Background: Artificial intelligence (AI) has become increasingly prevalent in various industries, including IT governance. By integrating AI into the governance environment, organizations can benefit from the consolidation of frameworks and best practices. However, the adoption of AI across different stages of the governance process is unevenly distributed. Objective: The primary objective of this study is to perform a systematic literature review on applying artificial intelligence (AI) in IT governance processes, explicitly focusing on the Deming cycle; the study does not examine the specific technical details of the AI methods used in the various stages of IT governance processes. Methods: The search approach acquired relevant papers from Elsevier, Emerald, Google Scholar, Springer, and IEEE Xplore. The obtained results were then filtered using predefined inclusion and exclusion criteria to ensure the selection of relevant studies. Results: The search yielded 359 papers. Following our inclusion and exclusion criteria, we pinpointed 42 primary studies that discuss how AI is implemented in every domain of IT governance related to the Deming cycle. Conclusion: We found that AI implementation is more dominant in the plan, do, and check stages of the Deming cycle, with a particular emphasis on domains such as risk management, strategy alignment, and performance measurement; most AI applications do not perform well across different contexts, and their remaining uses are driven by AI's unique capabilities. Keywords: Artificial Intelligence, Deming cycle, Governance, IT Governance domain, Systematic literature review
Medical Image Fusion for Brain Tumor Diagnosis Using Effective Discrete Wavelet Transform Methods Ramaraj, Vijayan; Venkatachalaappaswamy, Mareeswari; Sankar, Manoj Kumar
Journal of Information Systems Engineering and Business Intelligence Vol. 10 No. 1 (2024): February
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.10.1.70-80

Abstract

Background: The field of clinical or medical imaging has experienced significant advancements in recent years. Medical imaging methods such as computed tomography (CT), X-radiation (X-ray), and magnetic resonance imaging (MRI) produce images with distinct resolutions, goals, and noise levels, making it challenging for medical experts to diagnose diseases. Objective: The limitations of any single medical imaging modality have increased the necessity for medical image fusion. The proposed solution is a fusion method that merges two types of medical images, such as MRI and CT. This study therefore aimed to develop a software solution that swiftly identifies the precise region of a brain tumor, speeding up diagnosis and treatment planning. Methods: The proposed methodology combined clinical images using the discrete wavelet transform (DWT) and inverse discrete wavelet transform (IDWT). The strategy relied on a multi-resolution decomposition of the image information using DWT; the high-frequency sub-bands of the decomposed images were combined using a weighted averaging method, while the low-frequency sub-bands were replicated directly into the resulting image. The fused high-quality image was then reconstructed using the IDWT. This method can handle images of various modalities and resolutions without the need for prior data. Results: The outcomes of the proposed method were assessed with metrics such as accuracy, recall, F1-score, and visual quality. The method achieved a high accuracy of 98%, outperforming familiar neural network techniques. Conclusion: The proposed method was found to be computationally effective and produced high-quality medical images to assist professionals. Furthermore, the method can be extended to other image modalities, combined with hybrid wavelet-transform/neural-network techniques, and used for other clinical image analysis tasks.
Keywords: CT and MRI, Image Fusion, Brain Tumor, Wavelet Transform Methods, Medical Images, Machine Learning, CNN
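The sub-band fusion idea in this abstract can be illustrated with a minimal one-dimensional, single-level Haar sketch. This is an assumption-laden toy (real use would apply a 2D DWT, e.g. via PyWavelets, to registered CT and MRI slices); function names, the fusion weight, and the choice of which input supplies the low-frequency band are illustrative, since the abstract leaves them unspecified.

```python
import math

SQRT2 = math.sqrt(2.0)

def haar_dwt(x):
    """Single-level 1D Haar DWT: returns (approximation, detail) sub-bands."""
    a = [(x[i] + x[i + 1]) / SQRT2 for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / SQRT2 for i in range(0, len(x), 2)]
    return a, d

def haar_idwt(a, d):
    """Inverse single-level 1D Haar DWT."""
    x = []
    for ai, di in zip(a, d):
        x.append((ai + di) / SQRT2)
        x.append((ai - di) / SQRT2)
    return x

def fuse(x, y, w=0.5):
    """Fuse two signals as the abstract describes: the high-frequency
    (detail) sub-bands are combined by weighted averaging, and the
    low-frequency (approximation) sub-band is replicated directly
    (here from the first input, since the abstract does not specify)."""
    ax, dx = haar_dwt(x)
    ay, dy = haar_dwt(y)
    d = [w * p + (1 - w) * q for p, q in zip(dx, dy)]
    return haar_idwt(ax, d)
```

Because the Haar transform is perfectly invertible, fusing a signal with itself reproduces the original, which makes the sketch easy to sanity-check.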
Hybrid Architecture Model of Genetic Algorithm and Learning Vector Quantization Neural Network for Early Identification of Ear, Nose, and Throat Diseases Hayat, Cynthia; Soenandi, Iwan Aang
Journal of Information Systems Engineering and Business Intelligence Vol. 10 No. 1 (2024): February
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.10.1.1-12

Abstract

Background: In 2020, the World Health Organization (WHO) estimated that 466 million people worldwide are affected by hearing loss, 34 million of them children. Indonesia is identified as one of the four Asian countries with a high prevalence of hearing loss, specifically 4.6%. Previous research identified Ear, Nose, and Throat diseases using the certainty factor method, with a test accuracy of 46.54%. The novelty of this research lies in the combination of two methods: a genetic algorithm for optimization and learning vector quantization, to improve the accuracy of early identification of Ear, Nose, and Throat diseases. Objective: This research aims to produce a hybrid model of the genetic algorithm and the learning vector quantization neural network that can identify Ear, Nose, and Throat diseases with mild symptoms at improved accuracy. Methods: A 90:10 split was applied: 90% of the data (186 records) was assigned for training, while the remaining 10% (21 records) was allocated for testing. The procedural stages of the genetic algorithm-learning vector quantization are population initialization, crossover, mutation, evaluation, elitism selection, and learning vector quantization training. Results: The optimum hybrid genetic algorithm-learning vector quantization model for early identification of Ear, Nose, and Throat diseases achieved an accuracy of 82.12%, with parameter values of population size 10, crossover rate (cr) 0.9, mutation rate (mr) 0.1, maximum epoch 5000, error goal 0.01, and learning rate (alpha) 0.5. This accuracy was better than backpropagation (64%), certainty factor (46.54%), and radial basis function (72%). Conclusion: The experiments in this research succeeded in identifying a model combining the genetic algorithm and learning vector quantization for early identification of Ear, Nose, and Throat diseases. For further research, it remains challenging to develop a model that automatically adapts the bandwidth parameters of the weighting functions during training.
Keywords: Early Identification, Ear-Nose-Throat Diseases, Genetic Algorithm, Learning Vector Quantization
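The learning vector quantization half of such a hybrid can be sketched with the standard LVQ1 update rule; the genetic-algorithm half (population initialization, crossover, mutation, elitism) that searches for good starting codebook vectors and parameters is omitted here. This is an illustrative sketch, not the authors' implementation; function names and data are made up, and only the learning rate (alpha = 0.5) is taken from the reported parameters.

```python
import math

def euclidean(p, q):
    # Euclidean distance between two feature vectors
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def lvq1_step(prototypes, proto_labels, x, y, alpha=0.5):
    """One LVQ1 update (in place): the winning prototype moves toward
    the sample if their labels match, and away from it otherwise.
    Returns the index of the winning prototype."""
    win = min(range(len(prototypes)),
              key=lambda i: euclidean(prototypes[i], x))
    sign = 1.0 if proto_labels[win] == y else -1.0
    prototypes[win] = [w + sign * alpha * (xi - w)
                       for w, xi in zip(prototypes[win], x)]
    return win
```

In a hybrid setup, the genetic algorithm would evolve the initial `prototypes` (and possibly alpha) before repeated `lvq1_step` calls refine them over the training epochs.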