Location
Kota surabaya,
Jawa timur
INDONESIA
Journal of Information Systems Engineering and Business Intelligence
Published by Universitas Airlangga
ISSN: -     EISSN: -     DOI: -
Core Subject: Science
This journal accepts scientific papers focusing on Information Systems Engineering and Business Intelligence. Information Systems Engineering is a multidisciplinary approach to the activities involved in developing and managing information systems in pursuit of organizational goals. The scope of Information Systems Engineering papers includes (but is not limited to): development, management, and utilization of information systems; organizational governance; Enterprise Resource Planning; Enterprise Architecture Planning; and Knowledge Management. Business Intelligence examines techniques for transforming raw data into information useful for decision-making, identifying new opportunities, and implementing business strategies based on information derived from data to create competitive advantage. The scope of Business Intelligence papers includes (but is not limited to): data mining, text mining, data warehousing, Online Analytical Processing, artificial intelligence, and decision support systems.
Arjuna Subject : -
Articles: 14 Documents
Search results for issue "Vol. 9 No. 2 (2023): October": 14 Documents
Systematic Literature and Expert Review of Agile Methodology Usage in Business Intelligence Projects Hapsari Wulandari; Teguh Raharjo
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.214-227

Abstract

Background: Agile methodology is known for delivering effective projects with added value within a shorter timeframe, especially in Business Intelligence (BI) systems, which are valuable tools for informed decision-making. However, identifying impactful elements for successful BI implementation is complex due to the wide range of Agile attributes. Objective: This research aims to systematically review and analyze the integration of BI within Agile methodology, providing valuable guidance for future project implementation, enhancing the understanding of effective application, and identifying influential factors. Methods: Based on the Kitchenham method, 19 of 288 papers, sourced from databases such as Scopus, ACM, and IEEE and published in 2016-2022, were analyzed. The extracted key factors impacting Agile BI implementation were then validated by a qualified expert. Results: Agile was found to provide numerous benefits to BI projects by promoting flexibility, collaboration, and rapid iteration for enhanced adaptability, while effectively addressing challenges including those related to technology, management, and skills gaps. In addition, Agile methods, including tasks such as calculating cycle time, measuring defect backlogs, mapping code ownership, and engaging end users, offered practical solutions. The advantages included adaptability, success, value enhancement, cost reduction, shortened timelines, and improved precision. The research additionally considered other critical Agile elements such as BI tools, Agile Practices, the Manifesto, and Methods, thereby enhancing insights for successful implementation. Conclusion: The research organized Agile BI implementation into seven key factor groups, validated by a qualified expert, providing guidance for BI integration and practices, and establishing a fundamental baseline for future applications. Keywords: Agile Methodology, Business Intelligence (BI), Expert Judgement, Kitchenham, Systematic Literature Review (SLR)
Information Quality of Business Intelligence Systems: A Maturity-based Assessment Abdelhak Ait Touil; Siham Jabraoui
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.276-287

Abstract

Background: The primary role of a Business Intelligence (BI) system is to provide information to decision-makers within an organization, and the quality of this information is of the greatest significance. Several studies have extensively discussed the importance of information quality in information systems, including BI. However, there is relatively little discussion of the factors influencing information quality. Objective: This study aimed to address this literature gap by investigating the determinants of BI maturity that impact information quality. Methods: A maturity model comprising three dimensions was introduced, namely Data quality, BI infrastructure, and Data-driven culture. Data were collected from 84 companies and analyzed using the SEM-PLS approach. Results: The analysis showed that maturity had a highly positive influence on information quality, validating the relevance of the three proposed determinant factors. Conclusion: This study suggested and strongly supported the importance and relevance of Data quality, BI infrastructure, and Data-driven culture as key dimensions of BI maturity. The robust statistical relationship between maturity and information quality showed the effectiveness of approaching these systems from a maturity perspective. This investigation paves the way for exploring additional dimensions that impact information quality. Keywords: BI infrastructure, BI maturity, Data-driven culture, Data quality, Information quality.
Optimizing Cardiovascular Disease Prediction: A Synergistic Approach of Grey Wolf Levenberg Model and Neural Networks Sheikh Amir Fayaz; Majid Zaman; Sameer Kaul; Waseem Jeelani Bakshi
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.119-135

Abstract

Background: One of the latest issues in predicting cardiovascular disease is the limited performance of current risk prediction models. Although several models have been developed, they often fail to identify a significant proportion of individuals who go on to develop the disease. This highlights the need for more accurate and personalized prediction models. Objective: This study aims to investigate the effectiveness of the Grey Wolf Levenberg Model and Neural Networks in predicting cardiovascular diseases. The objective is to identify a synergistic approach that can improve the accuracy of predictions. Through this research, the authors seek to contribute to the development of better tools for early detection and prevention of cardiovascular diseases. Methods: The study used a quantitative approach to develop and validate the GWLM_NARX model for predicting cardiovascular disease risk. The approach involved collecting and analyzing a large dataset of clinical and demographic variables. The performance of the model was then evaluated using metrics such as accuracy, sensitivity, and specificity. Results: The study found that the GWLM_NARX model showed promising results in predicting cardiovascular disease. The model was found to outperform other conventional methods, with an accuracy of over 90%. The synergistic approach of the Grey Wolf Levenberg Model and Neural Networks proved effective in predicting cardiovascular disease with high accuracy. Conclusion: The use of the Grey Wolf Levenberg-Marquardt Neural Network Autoregressive model (GWLM-NARX) in conjunction with traditional learning algorithms, as well as advanced machine learning tools, resulted in a more accurate and effective prediction model for cardiovascular disease. The study demonstrates the potential of machine learning techniques to improve the diagnosis and treatment of heart disorders. However, further research is needed to improve the scalability and accuracy of these prediction systems, given the complexity of the data associated with cardiac illness. Keywords: Cardiovascular data, Clinical data, Decision tree, GWLM-NARX, Linear model functions
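The abstract reports accuracy, sensitivity, and specificity as the evaluation metrics. A minimal sketch of how these are computed from a binary confusion matrix is shown below; the labels are synthetic placeholders, and the GWLM-NARX model itself is not reproduced here.

```python
# Minimal sketch: accuracy, sensitivity and specificity from a binary
# confusion matrix, as used to evaluate the GWLM_NARX model.
# The labels below are synthetic placeholders, not the study's clinical data.
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]   # 1 = cardiovascular disease present
y_pred = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]   # model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # recall on the positive (diseased) class
specificity = tn / (tn + fp)   # recall on the negative (healthy) class
print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```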
Ensemble Learning Based Malicious Node Detection in SDN-Based VANETs Kunal Vermani; Amandeep Noliya; Sunil Kumar; Kamlesh Dutta
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.136-146

Abstract

Background: The architecture of Software Defined Networking (SDN) integrated with Vehicular Ad-hoc Networks (VANETs) is considered a practical method for handling large-scale, dynamic, heterogeneous vehicular networks, since it offers flexibility, programmability, scalability, and a global view. However, the integration with VANETs introduces additional security vulnerabilities due to the deployment of a logically centralized control mechanism. These security attacks are classified as internal and external based on the nature of the attacker. The method adopted in this work facilitated the detection of internal position falsification attacks. Objective: This study aimed to investigate the performance of k-NN, SVM, Naïve Bayes, Logistic Regression, and Random Forest machine learning (ML) algorithms in detecting position falsification attacks using the Vehicular Reference Misbehavior (VeReMi) dataset. It also aimed to conduct a comparative analysis of two ensemble classification models, namely voting and stacking, for final decision-making. These ensemble classification methods used the ML algorithms cooperatively to achieve improved classification. Methods: The simulations and evaluations were conducted using the Python programming language. The VeReMi dataset was selected since it is an application-specific dataset for the VANETs environment. Performance evaluation metrics, such as accuracy, precision, recall, F-measure, and prediction time, were used in the comparative studies. Results: This experimental study showed that the Random Forest ML algorithm provided the best attack detection performance among the ML algorithms. Voting and stacking were both used to enhance classification accuracy and reduce the time required to identify an attack through predictions generated by the k-NN, SVM, Naïve Bayes, Logistic Regression, and Random Forest classifiers. Conclusion: In terms of attack detection accuracy, both methods (voting and stacking) achieved the same level of accuracy as Random Forest. However, detection using stacking could be achieved in less than half the time required by the voting ensemble. Keywords: Machine learning methods, Majority voting ensemble, SDN-based VANETs, Security attacks, Stacking ensemble classifiers, VANETs
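Since the study describes voting and stacking ensembles built over k-NN, SVM, Naïve Bayes, Logistic Regression, and Random Forest in Python, a hedged scikit-learn sketch of that setup is given below; the synthetic features stand in for the VeReMi dataset, and the hyperparameters are assumptions rather than the authors' settings.

```python
# Hedged sketch of the two ensemble strategies described above, using the five
# base classifiers named in the abstract. The feature/label arrays are synthetic
# stand-ins for the VeReMi dataset, which must be loaded separately.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

X, y = make_classification(n_samples=1000, n_features=12, random_state=42)  # stand-in for VeReMi features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

base = [
    ("knn", KNeighborsClassifier()),
    ("svm", SVC(probability=True)),
    ("nb", GaussianNB()),
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=100, random_state=42)),
]

voting = VotingClassifier(estimators=base, voting="hard").fit(X_train, y_train)
stacking = StackingClassifier(estimators=base, final_estimator=LogisticRegression()).fit(X_train, y_train)

for name, model in [("voting", voting), ("stacking", stacking)]:
    print(name)
    print(classification_report(y_test, model.predict(X_test)))
```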
A Systematic Literature Review of Student Assessment Framework in Software Engineering Courses Reza Fauzan; Daniel Siahaan; Mirotus Solekhah; Vriza Wahyu Saputra; Aditya Eka Bagaskara; Muhammad Ihsan Karimi
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.264-275

Abstract

Background: Software engineering courses comprise various project types, including simple assignments completed in supervised settings and more complex tasks undertaken independently by students without constant oversight from a teacher or lab assistant. A comprehensive assessment framework is therefore needed to validate the fulfillment of learning objectives and facilitate the measurement of student outcomes, particularly in computer science and software engineering. This leads to the delineation of an appropriate assessment structure and pattern. Objective: This study aimed to acquire the expertise required for assessing student performance in computer science and software engineering courses. Methods: A comprehensive literature review spanning from 2012 to October 2021 was conducted, resulting in the identification of 20 papers addressing assessment frameworks in software engineering and computer science courses. Specific inclusion and exclusion criteria were applied in two rounds of assessment to identify the most pertinent studies for this investigation. Results: The results showed multiple methods for assessing software engineering and computer science courses, including the Assessment Matrix, Automatic Assessment, CDIO, Cooperative Thinking, formative and summative assessment, Game, Generative Learning Robot, NIMSAD, SECAT, Self-assessment and Peer-assessment, SonarQube Tools, WRENCH, and SEP-CyLE. Conclusion: The evaluation framework for software engineering and computer science courses requires further refinement, ultimately leading to the selection of the most suitable technique, known as the learning framework. Keywords: Computer science course, Software engineering course, Student assessment, Systematic literature review
Crypto-sentiment Detection in Malay Text Using Language Models with an Attention Mechanism Nur Azmina Mohamad Zamani; Norhaslinda Kamaruddin
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.147-160

Abstract

Background: Due to the increased interest in cryptocurrencies, opinions on cryptocurrency-related topics are shared in news and on social media. The enormous amount of sentiment data that is continually released makes data processing and analytics on such important issues more challenging. In addition, present sentiment models in the cryptocurrency domain focus primarily on English, with minimal work on the Malay language, further complicating the problem. Objective: This study examines the performance of a sentiment regression model in forecasting sentiment scores for Malay news and tweets. Methods: Malay news headlines and tweets on Bitcoin and Ethereum are used as the input. A hybrid Generalized Autoregressive Pretraining for Language Understanding (XLNet) language model in combination with a Bidirectional Gated Recurrent Unit (Bi-GRU) deep learning model is applied in the proposed sentiment regression implementation. The effectiveness of the proposed sentiment regression model is also investigated using the multi-head self-attention mechanism. A comparison analysis using Bidirectional Encoder Representations from Transformers (BERT) is then carried out. Results: The experimental results demonstrate that the number of attention heads is vital in improving the XLNet-GRU sentiment model performance. There are slight improvements of 0.03 in the adjusted R2 values, with an average MAE of 0.163 (Malay news) and 0.174 (Malay tweets). In addition, average RMSE values of 0.267 and 0.255 were obtained for Malay news and tweets, respectively, showing that the proposed XLNet-GRU sentiment model outperforms the BERT sentiment model with lower prediction errors. Conclusion: The proposed model contributes to predicting sentiment on cryptocurrency. Moreover, this study also introduced two carefully curated Malay corpora, CryptoSentiNews-Malay and CryptoSentiTweets-Malay, which are extracted from news and tweets, respectively. Further work to enhance the Malay news and tweet corpora on cryptocurrency-related issues will be carried out by implementing the proposed XLNet Bi-GRU deep learning model for greater financial insight. Keywords: Cryptocurrency, Deep learning model, Malay text, Sentiment analysis, Sentiment regression model
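The architecture described (XLNet embeddings feeding a Bi-GRU with multi-head self-attention and a regression head) can be sketched structurally as follows; the checkpoint name, layer sizes, and head count are assumptions rather than the authors' configuration, and the Malay corpora are not included.

```python
# Structural sketch of an XLNet + Bi-GRU sentiment regressor with multi-head
# self-attention, loosely following the architecture described above. The
# checkpoint name and layer sizes are assumptions, not the authors' settings.
import torch.nn as nn
from transformers import XLNetModel, XLNetTokenizer

class XLNetBiGRURegressor(nn.Module):
    def __init__(self, model_name="xlnet-base-cased", hidden=128, heads=4):
        super().__init__()
        self.encoder = XLNetModel.from_pretrained(model_name)
        self.gru = nn.GRU(self.encoder.config.d_model, hidden,
                          batch_first=True, bidirectional=True)
        self.attn = nn.MultiheadAttention(2 * hidden, num_heads=heads, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)   # single continuous sentiment score

    def forward(self, input_ids, attention_mask):
        states = self.encoder(input_ids=input_ids, attention_mask=attention_mask).last_hidden_state
        seq, _ = self.gru(states)
        attended, _ = self.attn(seq, seq, seq)          # multi-head self-attention over GRU outputs
        return self.head(attended.mean(dim=1)).squeeze(-1)

tokenizer = XLNetTokenizer.from_pretrained("xlnet-base-cased")
batch = tokenizer(["Harga Bitcoin naik hari ini"], return_tensors="pt", padding=True)
model = XLNetBiGRURegressor()
print(model(batch["input_ids"], batch["attention_mask"]))  # predicted sentiment score
```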
Towards Smart and Green Features of Cloud Computing in Healthcare Services: A Systematic Literature Review Aschalew Arega; Durga Prasad Sharma
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.161-180

Abstract

Background: The healthcare sector has been facing multilateral challenges regarding the quality of services and access to healthcare innovations. As the population grows, the sector requires faster and more reliable services, but the opposite is true in developing countries. As a robust technology, cloud computing has numerous features and benefits that are still to be explored. The intervention of the latest technologies in healthcare is crucial to shifting toward next-generation healthcare systems. In developing countries like Ethiopia, cloud features are still far from being systematically explored to design smart and green healthcare services. Objective: To uncover contextualized research gaps in existing studies on smart and green features of cloud computing in healthcare information services. Methods: We conducted a systematic review of research publications indexed in Scopus, Web of Science, IEEE Xplore, PubMed, and ProQuest. 52 research articles were screened based on significant selection criteria and systematically reviewed. Extensive efforts were made to rigorously review recent, contemporary, and relevant research articles. Results: This study presented a summary of parameters, proposed solutions from the reviewed articles, and identified research gaps. These research gaps relate to security and privacy concerns, data repository standardization, data shareability, self-health data access control, service collaboration, energy efficiency/greenness, consolidation of health data repositories, carbon footprint, and performance evaluation. Conclusion: This paper consolidated research gaps from multiple investigations into a single study, allowing researchers to develop innovative solutions for improving healthcare services. Based on a rigorous analysis of the literature, the existing systems overlooked green computing features and were highly vulnerable to security violations. Several studies reveal that security and privacy threats have been seriously hampering the exponential growth of cloud computing; 54 percent of the reviewed articles focused on security and privacy concerns. Keywords: Cloud computing, Consolidation, Green computing, Green features, Healthcare services, Systematic literature review.
Transfer Learning based Low Shot Classifier for Software Defect Prediction Vikas Suhag; Sanjay Kumar Dubey; Bhupendra Kumar Sharma
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.228-238

Abstract

Background: The rapid growth and increasing complexity of software applications are causing challenges in maintaining software quality within constraints of time and resources. This challenge led to the emergence of a new field of study known as Software Defect Prediction (SDP), which focuses on predicting future defects in advance, thereby reducing costs and improving productivity in the software industry. Objective: This study aimed to address data distribution disparities when applying transfer learning in multi-project scenarios, and to mitigate performance issues resulting from data scarcity in SDP. Methods: The proposed approach, namely the Transfer Learning based Low Shot Classifier (TLLSC), combined transfer learning and low shot learning approaches to create an SDP model. This model was designed for application in both new projects and those with minimal historical defect data. Results: Experiments were conducted using standard datasets from projects within the National Aeronautics and Space Administration (NASA) and Software Research Laboratory (SOFTLAB) repositories. TLLSC showed an average increase in F1-Measure of 31.22%, 27.66%, and 27.54% for projects AR3, AR4, and AR5, respectively. These results surpassed those from Transfer Component Analysis (TCA+), Canonical Correlation Analysis (CCA+), and Kernel Canonical Correlation Analysis plus (KCCA+). Conclusion: The comparison between TLLSC and state-of-the-art algorithms, namely TCA+, CCA+, and KCCA+ from the existing literature, consistently showed that TLLSC performed better in terms of F1-Measure. Keywords: Just-in-time, Defect Prediction, Deep Learning, Transfer Learning, Low Shot Learning
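As a rough illustration of the cross-project, low-shot setting the abstract describes (not the TLLSC model itself), the sketch below trains on a source project plus only a few labeled target instances and scores F1 on the remaining target data; all data are synthetic placeholders.

```python
# Illustrative cross-project "low shot" baseline (not the TLLSC model itself):
# train mostly on a source project, add only a handful of labeled target
# instances, and evaluate F1 on the remaining target data. Data are synthetic.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

X_src, y_src = make_classification(n_samples=500, n_features=20, random_state=1)  # source project
X_tgt, y_tgt = make_classification(n_samples=60, n_features=20, random_state=2)   # small target project

few = 10                                    # only a few labeled target instances are available
X_train = np.vstack([X_src, X_tgt[:few]])
y_train = np.concatenate([y_src, y_tgt[:few]])

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("Target F1:", round(f1_score(y_tgt[few:], clf.predict(X_tgt[few:])), 3))
```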
Advancement in Bangla Sentiment Analysis: A Comparative Study of Transformer-Based and Transfer Learning Models for E-commerce Sentiment Classification Zishan Ahmed; Shakib Sadat Shanto; Akinul Islam Jony
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.181-194

Abstract

Background: As a direct result of the Internet's expansion, the quantity of information shared by Internet users across its numerous platforms has increased. Sentiment analysis functions at a higher level when more perspectives and opinions are available. However, the lack of labeled data significantly complicates sentiment analysis using Bangla natural language processing (NLP). In recent years, nevertheless, Bangla sentiment analysis has improved significantly due to the development of more effective deep learning models. Objective: This article presents a curated dataset for Bangla e-commerce sentiment analysis obtained solely from the "Daraz" platform. We aim to conduct sentiment analysis in Bangla for binary and understudied multiclass classification tasks. Methods: Transfer learning (LSTM, GRU) and Transformer (Bangla-BERT) approaches are compared for their effectiveness on our dataset, and the models are fine-tuned to enhance their overall performance. Results: Bangla-BERT achieved the highest accuracy for both tasks, with 94.5% accuracy for binary classification and 88.78% accuracy for multiclass sentiment classification. Conclusion: Our proposed method performs noticeably better at classifying multiclass sentiments in Bangla than previous deep learning techniques. Keywords: Bangla-BERT, Deep Learning, E-commerce, NLP, Sentiment Analysis
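A hedged sketch of fine-tuning a Bangla BERT checkpoint for the three-class sentiment task with the Hugging Face Trainer is shown below; the checkpoint identifier, label count, and hyperparameters are assumptions, and the Daraz review dataset must be supplied separately.

```python
# Hedged sketch of fine-tuning a Bangla BERT checkpoint for 3-class sentiment
# classification with the Hugging Face Trainer. The checkpoint identifier and
# hyperparameters are assumptions; the labeled Daraz reviews must be supplied.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

checkpoint = "sagorsarker/bangla-bert-base"   # assumed Bangla BERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(checkpoint, num_labels=3)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

# train_ds / eval_ds are assumed to be datasets.Dataset objects holding the
# labeled reviews, e.g. built with datasets.Dataset.from_pandas(...):
# train_ds = train_ds.map(tokenize, batched=True)
# eval_ds = eval_ds.map(tokenize, batched=True)

args = TrainingArguments(output_dir="bangla-sentiment", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
# trainer = Trainer(model=model, args=args, train_dataset=train_ds, eval_dataset=eval_ds)
# trainer.train()
```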
The Use of Machine Learning to Detect Financial Transaction Fraud: Multiple Benford Law Model for Auditors Doni Wiryadinata; Aris Sugiharto; Tarno Tarno
Journal of Information Systems Engineering and Business Intelligence Vol. 9 No. 2 (2023): October
Publisher : Universitas Airlangga

DOI: 10.20473/jisebi.9.2.239-252

Abstract

Background: Fraud in financial transactions is at the root of corruption issues recorded in organizations. Detecting fraudulent practices has become increasingly complex and challenging, so auditors require precise analytical tools for fraud detection. Grouping financial transaction data with the K-Means Clustering algorithm can enhance the efficiency of applying Benford's Law for optimal fraud detection. Objective: This study aimed to introduce a Multiple Benford Law Model for data analysis that reveals potentially concealed fraud in the audited organization's financial transactions. The data were categorized into low, medium, and high transaction values using the K-Means Clustering algorithm and then reanalyzed through the Multiple Benford Law Model in a specialized fraud analysis tool. Methods: In this study, the experimental procedures of the Multiple Benford Law Model designed for public sector organizations were applied. The suspected fraud analysis generated by the toolkit was compared with the actual conditions reported in the audit report. The financial transaction dataset was prepared and grouped into three distinct clusters using the Euclidean distance equation. Data in these clusters were analyzed using Benford's Law, comparing the frequency of the first digit's occurrence to the expected frequency based on Benford's Law. Significant deviations exceeding ±5% were considered potential areas for further scrutiny in the audit. Furthermore, the analysis was validated by cross-referencing the results with the findings presented in the authorized audit organization's report. Results: The Multiple Benford Law Model developed was incorporated into an audit toolkit to automate calculations based on Benford's Law. The datasets were categorized using the K-Means Clustering algorithm into three clusters representing low, medium, and high-value transaction data. Applying Benford's Law alone showed a 40.00% potential for fraud detection; however, when the Multiple Benford Law Model was used with the data divided into three clusters, fraud detection accuracy increased to 93.33%. The comparative results in the audit report indicated a 75.00% consistency with the actual events or facts discovered. Conclusion: The use of the Multiple Benford Law Model in the audit toolkit substantially improved the accuracy of detecting potential fraud in financial transactions. Validation through the audit report showed conformity between the identified fraud practices and the detected financial transactions. Keywords: Fraud Detection, Benford's Law, K-Means Clustering, Audit Toolkit, Fraudulent Practices.
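The cluster-then-test workflow described above can be illustrated with a short sketch: K-Means splits transaction amounts into three value clusters, and within each cluster the observed first-digit frequencies are compared with Benford's expected frequencies, flagging deviations beyond ±5 percentage points. The amounts below are synthetic placeholders, not audit data.

```python
# Illustrative sketch of the clustering-then-Benford workflow: group transaction
# amounts into three value clusters with K-Means, then compare each cluster's
# observed first-digit frequencies with Benford's expected frequencies and flag
# deviations beyond ±5 percentage points. Transaction amounts are synthetic.
import numpy as np
from sklearn.cluster import KMeans

amounts = np.random.default_rng(7).lognormal(mean=10, sigma=2, size=3000)  # stand-in transactions
benford = {d: np.log10(1 + 1 / d) for d in range(1, 10)}                   # expected first-digit frequencies

labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(amounts.reshape(-1, 1))

for cluster in range(3):
    values = amounts[labels == cluster]
    first_digits = np.array([int(str(v).lstrip("0.")[0]) for v in values])
    for d in range(1, 10):
        observed = np.mean(first_digits == d)
        deviation = (observed - benford[d]) * 100
        flag = "  <-- review" if abs(deviation) > 5 else ""
        print(f"cluster {cluster} digit {d}: observed {observed:.3f} "
              f"expected {benford[d]:.3f} ({deviation:+.1f} pp){flag}")
```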

Page 1 of 2 | Total Records: 14