Contact Name
-
Contact Email
-
Phone
-
Journal Mail Official
-
Editorial Address
-
Location
Kota Yogyakarta,
Daerah Istimewa Yogyakarta
INDONESIA
International Journal of Informatics and Communication Technology (IJ-ICT)
ISSN : 2252-8776     EISSN : 2722-2616     DOI : -
Core Subject : Science
International Journal of Informatics and Communication Technology (IJ-ICT) is a common platform for publishing quality research papers as well as other intellectual outputs. The journal is published by the Institute of Advanced Engineering and Science (IAES), whose aim is to promote the dissemination of scientific knowledge and technology in the information and communication technology areas to an international audience of the scientific community, to encourage the progress and innovation of technology for human life, and to be the best platform for the proliferation of ideas and thought for all scientists, regardless of their locations or nationalities. The journal covers all areas of informatics and communication technology (ICT), focusing on the integration of hardware and software solutions for the storage, retrieval, sharing, manipulation, management, analysis, visualization, and interpretation of data and their applications to human services programs and practices, and it publishes refereed original research articles and technical notes. It is designed to serve researchers, developers, managers, strategic planners, graduate students, and others interested in state-of-the-art research activities in ICT.
Arjuna Subject : -
Articles 462 Documents
Factors affecting customers intention towards online pharmacies in Indonesian market Ferawaty, Ferawaty; Antonio, Wakky; Anggraeni, Adilla
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 1: April 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i1.pp91-100

Abstract

Online pharmacies are a promising business model for promoting online sales of medicines. The purpose of this study is to investigate how technology acceptance model (TAM) variables (perceived ease of use and perceived usefulness), perceived trust, perceived performance risk, and perceived physical risk influence customers' intention to use an online pharmacy. A questionnaire survey was used to collect data for the study. The results showed that perceived trust is a critical factor influencing customers' intention to use an online pharmacy. The reluctance of customers to buy medicines through online pharmacies, categorized as risk and originally thought to be a determining factor, has no impact once customer trust in the online pharmacy has been formed. This study has several implications for advancing online pharmacy promotion, including the importance of user-friendliness and of the benefits provided by online pharmacy providers. It is very important that online pharmacy providers increase customer trust in terms of legality, quality, and security of personal data.
Design of a model for multistage classification of diabetic retinopathy and glaucoma Mundada, Rupesh Goverdhan; Nawgaje, Devesh
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 2: August 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i2.pp214-222

Abstract

This study addresses the escalating prevalence of diabetic retinopathy (DR) and glaucoma, major global causes of vision impairment. We propose an innovative iterative Q-learning model that integrates with fuzzy C-means clustering to improve diagnostic accuracy and classification speed. Traditional diagnostic frameworks often struggle with accuracy and delay in disease stage classification, particularly in discerning complex features like exudates and veins. Our model overcomes these challenges by combining fuzzy C-means with Q-learning, enhancing precision in identifying key retinal components. The core of our approach is a custom-designed 45-layer 2D convolutional neural network (CNN) optimized for nuanced detection of DR and glaucoma stages. Compared to previous approaches, the performance on the IDRID and SMDG-19 datasets and associated samples shows a 10.9% rise in precision, an 8.5% improvement in overall accuracy, an 8.3% enhancement in recall, a 10.4% larger area under the curve (AUC), a 5.9% boost in specificity, and a 2.9% decrease in latency. This methodology has the potential to bring about significant changes in the field of DR and glaucoma diagnosis, leading to prompt medical interventions and possibly decreasing vision loss. The use of sophisticated machine learning techniques in medical imaging establishes a model for future investigations in ophthalmology and other clinical situations.
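For orientation, the sketch below implements plain fuzzy C-means clustering in NumPy on synthetic two-dimensional "pixel" features. It covers only the clustering ingredient named in the abstract, not the authors' Q-learning loop or 45-layer CNN, and all data and parameters in it are illustrative assumptions.

```python
# Illustrative fuzzy C-means clustering in NumPy (synthetic features; not the
# paper's Q-learning model or 45-layer CNN).
import numpy as np

def fuzzy_c_means(X, n_clusters=2, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Cluster rows of X into n_clusters fuzzy groups; returns (centers, memberships)."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random membership matrix, each row normalised to sum to 1.
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(max_iter):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Distance of every sample to every center (epsilon avoids zero division).
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-10
        # Standard FCM membership update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1)).
        U_new = 1.0 / (d ** (2.0 / (m - 1)) * np.sum(d ** (-2.0 / (m - 1)), axis=1, keepdims=True))
        if np.max(np.abs(U_new - U)) < tol:
            U = U_new
            break
        U = U_new
    return centers, U

# Toy usage: two synthetic "intensity" clusters standing in for retinal features.
X = np.vstack([np.random.normal(0.2, 0.05, (100, 2)),
               np.random.normal(0.8, 0.05, (100, 2))])
centers, memberships = fuzzy_c_means(X, n_clusters=2)
print(centers)
```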
Explainable zero-shot learning and transfer learning for real time Indian healthcare Saigaonkar, Swati; Narawade, Vaibhav
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 14, No 1: April 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v14i1.pp91-101

Abstract

Clinical note research is globally recognized, but work on real-time data, particularly from India, is still lagging. This study began by training models on medical information mart for intensive care (MIMIC) clinical notes, focusing on conditions such as chronic kidney disease (CKD), myocardial infarction (MI), and asthma, using the structured medical domain bidirectional encoder representations from transformers (SMDBERT) model. Subsequently, these models were applied to an Indian dataset obtained from two hospitals. The key difference between publicly available datasets and real-time data lies in the prevalence of certain diseases. For example, in a real-time setting, tuberculosis may exist, but the MIMIC dataset lacks corresponding clinical notes. Thus, an innovative approach was developed by combining a fine-tuned SMDBERT model with a customized zero-shot learning method to effectively analyze tuberculosis-related clinical notes. Another research gap is the lack of explainability, because deep learning (DL) models are inherently black-box. To further strengthen the reliability of the models, local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP) were produced alongside narrative explanations generated in a natural-language format. Thus, the research makes a significant contribution with an ensemble technique of zero-shot learning and the SMDBERT model, reaching an accuracy of 0.92 against specialized models such as scientific BERT (SciBERT), biomedical BERT (BioBERT), and clinical BioBERT.
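As a rough illustration of the zero-shot ingredient, the snippet below runs Hugging Face's off-the-shelf zero-shot classification pipeline on an invented clinical note. The NLI checkpoint and candidate disease labels are assumptions; the paper's fine-tuned SMDBERT ensemble is not reproduced here.

```python
# Minimal zero-shot classification of a clinical note with Hugging Face transformers.
# The model and labels are illustrative placeholders, not the paper's SMDBERT ensemble.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

note = ("Patient reports chronic cough, night sweats and weight loss over three "
        "months; sputum test ordered.")
labels = ["tuberculosis", "chronic kidney disease", "myocardial infarction", "asthma"]

result = classifier(note, candidate_labels=labels)
for label, score in zip(result["labels"], result["scores"]):
    print(f"{label}: {score:.3f}")
```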
Explainable artificial intelligence for traffic signal detection using LIME algorithm Santhiya, P.; Jebadurai, Immanuel Johnraja; Leelipushpam Paulraj, Getzi Jeba; Kirubakaran S, Stewart; Keren L., Rubee; Veemaraj, Ebenezer; Sharance J. S., Randlin Paul
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 3: December 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i3.pp527-536

Abstract

As technology progresses, the devices around us, such as televisions, mobile phones, and robots, grow smarter. Among these technologies, artificial intelligence (AI) is used to help computers make decisions comparable to those of humans, and this intelligence is supplied to the machine as a model. Because AI often operates as a black box, the model’s decisions are poorly understood by end users. Explainable AI (XAI) is the field in which humans can understand the judgments and decisions made by AI. Previously, the predictions made by AI were not easy to interpret, and there was confusion regarding how they were produced. The intention behind XAI is to improve the user experience of products and services by helping users trust the decisions made by AI. A white-box machine learning (ML) model produces results that can be understood by people in the relevant domain, whereas end users may still not understand the decisions. To further enhance traffic signal detection using XAI, the local interpretable model-agnostic explanations (LIME) algorithm is considered, and the detection performance is improved in this paper.
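The sketch below shows how LIME's image explainer is typically invoked. The toy colour-based predict_fn stands in for a trained traffic-signal classifier and is purely an assumption, not the model used in the paper.

```python
# Illustrative use of LIME's image explainer with a stand-in classifier.
# In the paper's setting, predict_fn would be a trained traffic-signal model.
import numpy as np
from lime import lime_image

def predict_fn(images):
    """Toy 2-class scorer: probability of 'red signal' grows with mean red intensity."""
    red = images[..., 0].mean(axis=(1, 2)) / 255.0
    return np.stack([1.0 - red, red], axis=1)

image = np.random.randint(0, 256, size=(64, 64, 3), dtype=np.uint8)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image, predict_fn, top_labels=2, hide_color=0, num_samples=200)

# Superpixels that most support the top predicted label.
img, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False)
print("Top label:", explanation.top_labels[0], "mask pixels:", int(mask.sum()))
```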
Data analysis and visualization on titanic and student’s performance datasets-an exploratory study Kim, Seong-Cheol; Salkuti, Surender Reddy; Suresh, Alka Manvayalar; Sankaran, Madhu Sree
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 14, No 1: April 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v14i1.pp68-76

Abstract

Exploratory data analysis (EDA) is about exploring data in order to identify underlying patterns before using it to build a predictive model. It also plays a major role in the data discovery process, as it is used to analyze data and summarize their different characteristics, which are displayed efficiently with the help of data visualization methods. This paper aims to identify errors in the dataset, to understand the existing hidden structure and to identify new ones, to detect points in a dataset that deviate substantially from the rest of the collected data (outliers), and to find relationships between the variables. Two datasets, ‘Titanic’ and ‘student’s performance’, are used for data analysis and data visualization, illustrating exploratory data analysis as an important set of tools for building a qualitative understanding. Exploring the datasets assisted in identifying patterns, outliers, and corrupt data, and in discovering relationships between the fields in the datasets.
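A minimal EDA pass in this spirit might look like the following, using seaborn's bundled copy of the Titanic data; column names and plots are illustrative and may not match the exact files analysed in the paper.

```python
# Short exploratory-data-analysis pass on seaborn's bundled Titanic dataset
# (an illustrative stand-in for the files used in the paper).
import seaborn as sns
import matplotlib.pyplot as plt

df = sns.load_dataset("titanic")

print(df.describe(include="all"))   # summary statistics per column
print(df.isnull().sum())            # missing values, e.g. in 'age' and 'deck'

# Flag fare outliers with the 1.5*IQR rule.
q1, q3 = df["fare"].quantile([0.25, 0.75])
iqr = q3 - q1
outliers = df[(df["fare"] < q1 - 1.5 * iqr) | (df["fare"] > q3 + 1.5 * iqr)]
print(f"{len(outliers)} fare outliers")

# Relationships between numeric fields and survival.
sns.heatmap(df[["survived", "age", "fare", "pclass"]].corr(), annot=True)
plt.show()
```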
Automated multi-document summarization using extractive-abstractive approaches Nasari, Maulin; Girsang, Abba Suganda
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 3: December 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i3.pp400-409

Abstract

This study presents a multi-document text summarization system that employs a hybrid approach, combining extractive and abstractive methods. The goal of document summarization is to create a coherent and comprehensive summary that captures the essential information contained in the documents. The difficulty in multi-document text summarization lies in the lengthy nature of the input material and the potential for redundant information. This study utilises a combination of methods to address this issue. It uses the TextRank algorithm as an extractor for each document to condense the input sequence. The extractor retrieves crucial sentences from each document, which are then aggregated and utilised as input for the abstractor. The study uses bidirectional and auto-regressive transformers (BART) as the abstractor, which condenses the primary sentences from each document into a more cohesive summary. The evaluation of the text summarization system was conducted using the ROUGE measure. The research yields ROUGE R1 and R2 scores of 41.95 and 14.81, respectively.
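A compact extract-then-abstract sketch in the same spirit is shown below: a TF-IDF/PageRank TextRank extractor feeding a pretrained BART summarizer. The sentence splitting, model checkpoint, and example documents are assumptions rather than the authors' configuration.

```python
# Extract-then-abstract sketch: a simple TextRank extractor per document,
# followed by a pretrained BART summarizer over the aggregated sentences.
import networkx as nx
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import pipeline

def textrank_extract(document, top_k=2):
    """Return the top_k highest-ranked sentences from one document."""
    sentences = [s.strip() for s in document.split(".") if s.strip()]
    tfidf = TfidfVectorizer().fit_transform(sentences)
    graph = nx.from_numpy_array(cosine_similarity(tfidf))
    scores = nx.pagerank(graph)
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    return ". ".join(sentences[i] for i in sorted(ranked[:top_k])) + "."

documents = [
    "Heavy rain flooded the city centre. Commuters faced long delays. Officials opened shelters.",
    "The storm damaged power lines. Crews restored electricity overnight. Schools reopened the next day.",
]

extracted = " ".join(textrank_extract(d) for d in documents)
abstractor = pipeline("summarization", model="facebook/bart-large-cnn")
summary = abstractor(extracted, max_length=40, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```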
A model for structuring NoSQL databases based on machine learning classifiers Benmakhlouf, Amine
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 14, No 1: April 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v14i1.pp229-239

Abstract

Today, the majority of data generated and processed in organizations is unstructured, and NoSQL database management systems handle the management of this data. The problem is that these unstructured databases cannot be analyzed by traditional online analytical processing (OLAP) treatments, which are mainly used on structured relational databases. In order to apply OLAP analyses to NoSQL data, structuring this data is essential. In this paper, we propose a model for structuring the data of a document-oriented NoSQL database using machine learning (ML). The method is broken down into three steps: first, the vectorization of documents; then, learning via different ML algorithms; and finally, the classification, which guarantees that documents with the same structure will belong to the same collection. The modeling of a data warehouse can then be carried out in order to create OLAP cubes. Since the models found by learning allow parallel computation of the classifier, our approach represents an advantage in terms of speed, as it avoids doubly iterative algorithms that rely on textual comparisons (TC). A comparative study of performance is carried out in this work to detect the most efficient methods for this type of classification.
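A minimal sketch of the structure-classification idea follows, assuming documents are represented only by their top-level field names and that collection labels are known for a few training examples; both are assumptions, and the paper's vectorization and classifiers may differ.

```python
# Sketch: represent each JSON document by the field names it contains, vectorize
# those names, and let a classifier assign the document to a collection.
# Field names and labels below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

docs = [
    {"title": "a", "author": "x", "year": 2020},        # article-like structure
    {"title": "b", "author": "y", "pages": 12},
    {"name": "p", "email": "p@x.com", "phone": "123"},  # contact-like structure
    {"name": "q", "email": "q@x.com"},
]
labels = ["articles", "articles", "contacts", "contacts"]

def structure_signature(doc):
    """Keep only the schema: the sorted set of top-level keys."""
    return " ".join(sorted(doc.keys()))

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit([structure_signature(d) for d in docs], labels)

new_doc = {"title": "c", "author": "z"}
print(model.predict([structure_signature(new_doc)]))   # expected: ['articles']
```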
Fault detection in single-hop and multi-hop wireless sensor networks using a deep learning algorithm Padmasree, Ramineni; Chaithanya, Aravalli Sainath
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 13, No 3: December 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v13i3.pp453-461

Abstract

The wireless sensor network (WSN) has received significant recognition for its positive impact on environmental monitoring, yet its reliability remains prone to faults. Common factors contributing to faults include connectivity loss from malfunctioning node interfaces, disruptions caused by obstacles, and increased packet loss due to noise or congestion. This research employs a variety of machine learning and deep learning techniques to identify and address these faults, aiming to enhance the overall lifespan and scalability of the WSN. Classification models such as support vector machine (SVM), gradient boosting classifier (GBC), K-nearest neighbours (KNN), random forest, and decision tree were employed in model training, with the decision tree emerging as the most accurate at 90.23%. Additionally, a deep learning approach, the recurrent neural network (RNN), effectively identified faults in sensor nodes, achieving an accuracy of 93.19%.
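As a simple stand-in for the classical models compared in the paper, the sketch below trains a decision tree on synthetic "sensor" features with invented fault labels; the features and data are assumptions, not the WSN traces used by the authors.

```python
# Fault-classification sketch with a decision tree on synthetic sensor features.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(42)
n = 1000
# Features: temperature reading, battery voltage, packet-loss rate.
healthy = np.column_stack([rng.normal(25, 2, n), rng.normal(3.6, 0.1, n), rng.uniform(0.0, 0.05, n)])
faulty  = np.column_stack([rng.normal(25, 8, n), rng.normal(3.0, 0.3, n), rng.uniform(0.2, 0.6, n)])
X = np.vstack([healthy, faulty])
y = np.array([0] * n + [1] * n)   # 0 = normal node, 1 = faulty node

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```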
Enhancing predictive modelling and interpretability in heart failure prediction: a SHAP-based analysis Khan, Niaz Ashraf; Bin Hafiz, Md. Ferdous; Pramanik, Md. Aktaruzzaman
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 14, No 1: April 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v14i1.pp11-19

Abstract

Predictive modelling plays a crucial role in healthcare, particularly in forecasting mortality due to heart failure. This study focuses on enhancing predictive modelling and interpretability in heart failure prediction through advanced boosting algorithms, ensemble methods, and SHapley Additive exPlanations (SHAP) analysis. Leveraging a dataset of patients diagnosed with cardiovascular diseases (CVD), we employed techniques such as the synthetic minority over-sampling technique (SMOTE) and bootstrapping to address class imbalance. Our results demonstrated exceptional predictive performance, with the gradient boosting (GBoost) model achieving the highest accuracy of 91.39%. Ensemble techniques further enhanced performance, with the voting classifier (VC), stacking classifier (SC), and blending each achieving accuracies of 91.00%. SHAP analysis uncovered key features such as time, Serum_creatinine, and Ejection_fraction significantly impacting mortality prediction. These findings highlight the importance of transparent and interpretable machine learning models in healthcare decision-making processes, facilitating informed interventions and personalized treatment strategies for heart failure patients.
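A hedged sketch of such an imbalance-aware pipeline is given below: SMOTE oversampling, a gradient-boosting classifier, and SHAP attributions. The synthetic data and feature names merely echo the clinical variables mentioned above and are not the study's dataset.

```python
# SMOTE + gradient boosting + SHAP attributions on synthetic, imbalanced data
# (placeholder feature names echo the clinical variables named in the abstract).
import numpy as np
import shap
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, weights=[0.85, 0.15], random_state=0)
feature_names = ["time", "serum_creatinine", "ejection_fraction", "age", "platelets", "sodium"]

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Rebalance only the training split, then fit the booster.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)
model = GradientBoostingClassifier(random_state=0).fit(X_res, y_res)
print("Test accuracy:", model.score(X_test, y_test))

# Mean absolute SHAP value per feature gives a global importance ranking.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
for name, imp in sorted(zip(feature_names, np.abs(shap_values).mean(axis=0)), key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```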
The integration of discrete contourlet transform in OFDM framework for future wireless communication Mohamed Nerma, Mohamed Hussien; Ahmed Abdo, Adam Mohamed
International Journal of Informatics and Communication Technology (IJ-ICT) Vol 14, No 1: April 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijict.v14i1.pp182-194

Abstract

In the upcoming era, the forthcoming sixth-generation (6G) wireless communication network will demand highly efficient technology to support extensive capacity, ultra-high speeds, low latency, scalability, and adaptability. While the current fifth-generation (5G) wireless communication system relies on orthogonal frequency-division multiplexing (OFDM) technology, the evolution towards a beyond-5G wireless communication system necessitates a new OFDM framework. This study introduces a novel OFDM system that integrates the discrete contourlet transform. A comparative analysis has been conducted among the proposed system, conventional OFDM, and curvelet-based OFDM systems. The results indicate that the proposed system offers improvements in bit error rate (BER), reduced computational complexity, decreased peak-to-average power ratio (PAPR), and enhanced power spectrum density (PSD) when contrasted with both the traditional and curvelet-based systems.
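For context, the snippet below simulates a conventional FFT-based OFDM link over an AWGN channel and measures its bit error rate; the contourlet-based transmitter proposed in the paper has no standard library implementation and is not reproduced here, and all parameters are illustrative.

```python
# Baseline FFT-based OFDM link over an AWGN channel (for orientation only;
# the paper's contourlet-based variant is not implemented here).
import numpy as np

rng = np.random.default_rng(0)
n_sc, n_sym, cp = 64, 200, 16          # subcarriers, OFDM symbols, cyclic-prefix length

# QPSK mapping of random bits onto subcarriers (bit 0 -> +1, bit 1 -> -1).
bits = rng.integers(0, 2, size=(n_sym, n_sc, 2))
symbols = (1 - 2 * bits[..., 0] + 1j * (1 - 2 * bits[..., 1])) / np.sqrt(2)

# OFDM modulation: IFFT per symbol, then prepend the cyclic prefix.
time_sig = np.fft.ifft(symbols, axis=1)
tx = np.concatenate([time_sig[:, -cp:], time_sig], axis=1)

# AWGN channel at roughly 10 dB per-subcarrier SNR (accounting for the 1/N IFFT scaling).
esn0_db = 10
noise_var = 10 ** (-esn0_db / 10) / n_sc
rx = tx + np.sqrt(noise_var / 2) * (rng.standard_normal(tx.shape) + 1j * rng.standard_normal(tx.shape))

# Receiver: drop the prefix, FFT, hard-decide the QPSK bits, and measure BER.
rx_freq = np.fft.fft(rx[:, cp:], axis=1)
bits_hat = np.stack([(rx_freq.real < 0).astype(int), (rx_freq.imag < 0).astype(int)], axis=-1)
print("BER:", np.mean(bits_hat != bits))
```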