Location
Kota Yogyakarta,
Daerah Istimewa Yogyakarta,
INDONESIA
International Journal of Advances in Intelligent Informatics
ISSN: 2442-6571     E-ISSN: 2548-3161     DOI: 10.26555
Core Subject: Science
International Journal of Advances in Intelligent Informatics (IJAIN), e-ISSN 2442-6571, is a peer-reviewed, open-access journal published three times a year in English. It provides scientists and engineers throughout the world with a forum for the exchange and dissemination of theoretical and practice-oriented papers dealing with advances in intelligent informatics. All papers are refereed by two international reviewers; accepted papers are available online (free access), and there is no publication fee for authors.
Articles 330 Documents
Type-2 Fuzzy ANP and TOPSIS methods based on trapezoid Fuzzy number with a new metric Kustiyahningsih, Yeni; Rahmanita, Eza; Khotimah, Bain Khusnul; Purnama, Jaka
International Journal of Advances in Intelligent Informatics Vol 10, No 2 (2024): May 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i2.1285

Abstract

Modeling and linguistic representation in the form of interval type-2 fuzzy sets offer better accuracy than type-1 fuzzy sets, because a type-2 fuzzy set captures more uncertainty than a type-1 fuzzy set. The degree of fuzzy membership is used to express uncertainty and ambiguity in the real world. This study presents a type-2 fuzzy Analytic Network Process (ANP) method to determine the weight of each attribute based on its level of importance, and an extension of type-2 fuzzy TOPSIS to handle problems expressed with type-2 fuzzy attribute values. Decision-making is based on the assessment of several experts, called Multi-Criteria Group Decision Making (MCGDM), aggregated with a type-2 fuzzy geometric mean. The membership functions in this research are trapezoid-based type-2 fuzzy sets. The contribution is a hybrid of group-based type-2 fuzzy ANP and type-2 fuzzy TOPSIS with a new metric on type-2 fuzzy intervals for decision making. The result is a hybrid type-2 FANP and FTOPSIS model that supports selecting the best decision. Comparing the accuracy of trapezoid models 1, 2, and 3, model 3 achieves the best accuracy at 84%. The research's benefit is a hybrid type-2 fuzzy TOPSIS and ANP method that improves decision-making accuracy and handles uncertainty and ambiguity better than type-1 fuzzy systems.
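The closeness-to-ideal ranking that FTOPSIS builds on can be illustrated with a minimal crisp TOPSIS sketch in Python. This is a simplification: the paper works with trapezoidal type-2 fuzzy numbers, ANP-derived weights, and a new distance metric, none of which are reproduced here; all names are illustrative.

```python
import math

def topsis(matrix, weights, benefit):
    """Crisp TOPSIS: score alternatives by relative closeness to the ideal solution."""
    ncols = len(weights)
    # Vector-normalize each criterion column, then apply the criterion weights.
    norms = [math.sqrt(sum(row[j] ** 2 for row in matrix)) for j in range(ncols)]
    v = [[weights[j] * row[j] / norms[j] for j in range(ncols)] for row in matrix]
    # Ideal best/worst per criterion: max for benefit criteria, min for cost criteria.
    best = [max(col) if benefit[j] else min(col) for j, col in enumerate(zip(*v))]
    worst = [min(col) if benefit[j] else max(col) for j, col in enumerate(zip(*v))]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_neg = math.sqrt(sum((x - w) ** 2 for x, w in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))  # 1 = ideal, 0 = anti-ideal
    return scores
```

In the type-2 fuzzy extension, each matrix entry becomes a trapezoidal interval type-2 fuzzy number and the distances are computed with the paper's metric rather than plain Euclidean distance.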
A comparison of machine learning methods for knowledge extraction model in A LoRa-Based waste bin monitoring system Abidin, Aa Zezen Zaenal; Othman, Mohd Fairuz Iskandar; Hassan, Aslinda; Murdianingsih, Yuli; Suryadi, Usep Tatang; Siallagan, Timbo Faritchan
International Journal of Advances in Intelligent Informatics Vol 10, No 1 (2024): February 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i1.1026

Abstract

The Knowledge Extraction Model (KEM) is a system that extracts knowledge through an IoT-based smart waste bin emptying scheduling classification. Classification is a difficult problem and requires an efficient classification method. This research contributes the KEM system for classifying waste bin emptying schedules using the best-performing machine learning method. The research compares the performance of five machine learning methods, namely Decision Tree, Naïve Bayes, K-Nearest Neighbor, Support Vector Machine, and Multi-Layer Perceptron, in order to recommend one for the KEM system. Performance was tested on accuracy, recall, precision, F-measure, and ROC curves using cross-validation with ten observations. The experimental results show that the Decision Tree performs best on accuracy, recall, precision, and the ROC curve, while the K-NN method obtains the highest F-measure. KEM can be implemented to extract knowledge from datasets created in various other IoT-based systems.
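The cross-validated comparison described above can be sketched in plain Python, with a toy 1-nearest-neighbour classifier standing in for the five candidate methods (illustrative only; the study's dataset and classifiers are not reproduced here):

```python
import statistics

def k_fold_indices(n, k):
    """Yield (train, test) index lists for k-fold cross-validation."""
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        tset = set(test)
        train = [j for j in range(n) if j not in tset]
        yield train, test

def one_nn_predict(train_X, train_y, x):
    """Predict the label of x from its single nearest training point."""
    dists = [(sum((a - b) ** 2 for a, b in zip(row, x)), y)
             for row, y in zip(train_X, train_y)]
    return min(dists)[1]

def cross_val_accuracy(X, y, k):
    """Mean test-fold accuracy of the 1-NN classifier over k folds."""
    accs = []
    for train, test in k_fold_indices(len(X), k):
        tX = [X[i] for i in train]
        ty = [y[i] for i in train]
        hits = sum(one_nn_predict(tX, ty, X[j]) == y[j] for j in test)
        accs.append(hits / len(test))
    return statistics.mean(accs)
```

In practice, each candidate method would be scored this way on the same folds and the per-fold metrics compared.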
TelsNet: temporal lesion network embedding in a transformer model to detect cervical cancer through colposcope images Mukku, Lalasa; Thomas, Jyothi
International Journal of Advances in Intelligent Informatics Vol 9, No 3 (2023): November 2023
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v9i3.1431

Abstract

Cervical cancer ranks as the fourth most prevalent malignancy among women globally. Timely identification and intervention in cases of cervical cancer hold the potential for achieving complete remission and cure. In this study, we built a deep learning model based on a self-attention mechanism, using a transformer architecture to classify cervix images and aid the diagnosis of cervical cancer. We used techniques such as an enhanced multivariate Gaussian mixture model optimized with the Mexican axolotl algorithm to segment the colposcope images before the Temporal Lesion Convolutional Neural Network (TelsNet) classifies them. TelsNet is a transformer-based neural network that uses temporal convolutional neural networks to identify cancerous regions in colposcope images. Our experiments show that TelsNet achieved an accuracy of 92.7%, with a sensitivity of 73.4% and a specificity of 82.1%. We compared the performance of our model with various state-of-the-art methods, and our results demonstrate that TelsNet outperformed them. The findings have the potential to significantly simplify the process of detecting and accurately classifying cervical cancers at an early stage, leading to improved rates of remission and better overall outcomes for patients globally.
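The self-attention at the heart of a transformer such as TelsNet reduces to a few lines of Python. This sketch uses identity query/key/value projections for brevity; it is not the authors' architecture, and all names are illustrative.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def self_attention(X):
    """Scaled dot-product self-attention over token vectors X (identity Q/K/V)."""
    d = len(X[0])
    # Pairwise similarity of every query token with every key token.
    scores = [[sum(qc * kc for qc, kc in zip(q, k)) / math.sqrt(d) for k in X]
              for q in X]
    weights = [softmax(row) for row in scores]
    # Each output token is an attention-weighted mix of all value tokens.
    return [[sum(w * X[j][c] for j, w in enumerate(row)) for c in range(d)]
            for row in weights]
```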
A novel multi-step prediction model for process monitoring Lee, Yi Shan; Ooi, Sai Kit; Chen, Junghui
International Journal of Advances in Intelligent Informatics Vol 10, No 2 (2024): May 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i2.1528

Abstract

In a competitive market, process monitoring can ensure product quality, but the complexities of a large-scale chemical plant are characterized by strong nonlinearities, slow dynamics, and uncertainties. When a fault occurs, it does not influence the process instantaneously but takes effect after a few time points; by the time all products affected by the fault have been inspected, it is too late to fix the process. Conventional approaches do not address early detection before a disturbance significantly affects the process. To estimate disturbances propagated through the process, a multi-step prediction model is essential: the purpose of early process monitoring is to detect any problem with the currently running process as early as possible. This paper proposes a multi-step prediction system, a dynamic model that captures the dynamic relationship between past process input variables and future process output variables. It provides a lower-dimensional, less noise-contaminated space for data analysis. In particular, past input and output process data are mapped from the observation space into a latent space to acquire their intrinsic properties. The latent variables preserve the dynamic information needed for future multi-step prediction, so that early warning can be achieved. An industrial example of the PVC drying process is presented to show the multi-step predictive ability of the proposed method.
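The core idea of recursive multi-step prediction can be illustrated with a scalar first-order autoregressive sketch, where each prediction is fed back in to reach further into the future. This does not reproduce the paper's latent-variable model; the helper names are hypothetical.

```python
def fit_ar1(series):
    """Least-squares fit of x[t+1] ~ a * x[t] + b (assumes a non-constant series)."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def predict_multi_step(series, horizon):
    """Feed each prediction back in to forecast `horizon` steps ahead."""
    a, b = fit_ar1(series)
    x, out = series[-1], []
    for _ in range(horizon):
        x = a * x + b  # each step builds on the previous prediction
        out.append(x)
    return out
```

The further the horizon, the more prediction error compounds, which is why the paper maps the data into a latent space that preserves the dynamics before predicting.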
A novel convolutional feature-based method for predicting limited mobility eye gaze direction Khaleel, Amal Hameed; Abbas, Thekra H; Ibrahim, Abdul-Wahab Sami
International Journal of Advances in Intelligent Informatics Vol 10, No 2 (2024): May 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i2.1370

Abstract

Eye gaze direction is a critical issue, since several computer vision applications rely on determining gaze direction when individuals with limited mobility move their eyes toward locations of sensory interest. Deep neural networks are among the most essential and accurate image classification methods, and several gaze-direction classification methods employ convolutional neural network models such as VGG, ResNet, and AlexNet. This research presents a new method for identifying human eye images and classifying eye gaze directions (left, right, up, down, straight) in addition to discriminating eye closure. The proposed method (Di-eyeNET) is distinguished by a developed method (Split-HSV) for enhancing image lighting. It also reduces implementation time by using only two blocks and employing dropout layers after each block to achieve fast response times and high accuracy. The design exploits the characteristics of human eye images: the image is small, so it cannot be greatly enlarged, and the iris lies in the middle of the image, so the edges are unimportant. The proposed method achieves excellent results compared to previous methods, classifying five eye gaze directions instead of four. Both a global dataset and a locally built dataset were used. Compared to previous methods, the suggested method demonstrates high accuracy (99%), minimal loss, and the lowest training time. The research's benefits include an efficient method for classifying eye gaze directions, with faster implementation and improved image lighting.
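The dropout-after-each-block idea mentioned above can be shown in isolation with an inverted-dropout sketch in pure Python. This is a hypothetical helper for illustration, not a layer from Di-eyeNET.

```python
import random

def dropout(activations, p, training=True, rng=random):
    """Inverted dropout: zero each unit with probability p, scale survivors by 1/(1-p)."""
    if not training or p == 0:
        return list(activations)  # dropout is disabled at inference time
    keep = 1.0 - p
    # Scaling by 1/keep at train time keeps the expected activation unchanged,
    # so no rescaling is needed at inference.
    return [a / keep if rng.random() < keep else 0.0 for a in activations]
```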
Abnormal behavior recognition using SRU with attention mechanism Tay, Nian Chi; Connie, Tee; Ong, Thian Song; Teoh, Andrew Beng Jin; Teh, Pin Shen
International Journal of Advances in Intelligent Informatics Vol 10, No 2 (2024): May 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i2.1385

Abstract

In response to the critical need for enhanced public safety measures, this study introduces an advanced intelligent surveillance system designed to autonomously detect abnormal behaviors within public spaces. Leveraging the computational efficiency and accuracy of a Simple Recurrent Unit (SRU) integrated with an attention mechanism, this research delivers a novel approach to understanding and interpreting human interactions in real-time video footage. Distinctively, the model specializes in identifying two primary categories of abnormal behavior: aggressive two-person interactions, such as physical confrontations, and collective crowd dynamics characterized by sudden dispersal patterns indicative of distress or danger. The incorporation of the attention mechanism precisely targets critical elements of behavior, thereby enhancing the model's focus and interpretative clarity. Empirical validation across five benchmark datasets reveals that our model not only outperforms traditional Long Short-Term Memory (LSTM) frameworks in speed by a factor of 1.5 but also demonstrates superior accuracy in abnormal behavior recognition. These findings underscore the model's potential in preempting safety threats and mark a significant advancement in the application of deep learning technologies for public security infrastructures. This research contributes to the broader discourse on public safety, offering actionable insights and robust technological solutions to enhance surveillance efficacy and response mechanisms in critical public domains.
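A simplified scalar version of the SRU recurrence can be sketched as follows. The published SRU operates on vectors with learned weight matrices, and the paper adds an attention layer on top; none of that appears here, and the parameters are illustrative. The speed advantage over LSTM comes from the cell-state update needing no matrix multiplication on the recurrent path, so it parallelizes across time.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sru_step(x, c_prev, w, wf, bf, wr, br):
    """One step of a scalar SRU cell (illustrative; real SRUs use learned matrices)."""
    f = sigmoid(wf * x + bf)              # forget gate: how much cell state to keep
    r = sigmoid(wr * x + br)              # reset gate: cell output vs. skip connection
    c = f * c_prev + (1.0 - f) * (w * x)  # light recurrence: no matrix multiply on c
    h = r * math.tanh(c) + (1.0 - r) * x  # highway mix of cell output and raw input
    return h, c
```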
Region-based convolutional neural networks for occluded person re-identification Islam, Atiqul; Tsun, Mark Tee Kit; Theng, Lau Bee; Chua, Caslon
International Journal of Advances in Intelligent Informatics Vol 10, No 1 (2024): February 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i1.1125

Abstract

In a variety of applications, including intelligent surveillance systems, targeted tracking, and assistive human-following robots, the ability to accurately identify individuals even when they are partially obscured is imperative. Such continuous person tracking is complicated by the close similarity in appearance between people and by target occlusions. This study addresses this significant challenge by proposing a two-step, detection-first approach that uses a region-based convolutional neural network (R-CNN) as the re-identification (re-ID) solution. The model is specifically trained to detect occluded persons at different levels of occlusion before forwarding the image to the re-ID process. Three occlusion-specific datasets are selected to evaluate the model's effectiveness in detecting occluded people: there are 379 distinct people in total, each with five images obstructed from different angles. A sample of the data is taken to simulate various environment settings, and new data points are generated with different degrees of occlusion to assess how well the model performs under varying levels of obstruction. The findings demonstrate that the proposed person re-ID model is reliable in most circumstances, correctly re-identifying at 74% (Rank-1) and 90% (Rank-5). Although accuracy decreases as the number of distinct people in the dataset increases, this does not significantly impact tracking performance in the intended applications, which are expected to recognize a single person or a small group of individuals. Future work will explore refining similarity-matching algorithms by delving into robust image comparison techniques, thereby addressing the challenges presented by occlusions. A critical next step is to assess the model under diverse lighting conditions and investigate scenarios with multiple individuals in a frame. It would also be beneficial to exploit high-resolution datasets, such as DukeMTMC-reID, and to integrate finer contextual details, like clothing or carried objects. These collective efforts are essential for optimizing the model's efficacy in practical applications and advancing person re-ID technologies.
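The Rank-1 and Rank-5 figures quoted above are standard re-ID metrics: the fraction of queries whose true identity appears among the top-k most similar gallery entries. A minimal sketch of how such a metric is computed, with hypothetical data rather than the paper's evaluation code:

```python
def rank_k_accuracy(similarities, query_ids, gallery_ids, k):
    """Fraction of queries whose true identity is among the top-k gallery matches."""
    hits = 0
    for sims, qid in zip(similarities, query_ids):
        # Gallery indices sorted by descending similarity to this query.
        ranked = sorted(range(len(sims)), key=lambda j: sims[j], reverse=True)
        if qid in (gallery_ids[j] for j in ranked[:k]):
            hits += 1
    return hits / len(query_ids)
```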
AI-Driven Analysis: Optimizing Tertiary Education Policy through Machine Learning Insights Sy, Christian Y; Maceda, Lany L; Abisado, Mideth B
International Journal of Advances in Intelligent Informatics Vol 10, No 2 (2024): May 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i2.1525

Abstract

Tertiary education is pivotal in equipping individuals with the necessary knowledge and skills for success, prompting global initiatives for equitable access to quality higher education. The Philippines' Universal Access to Quality Tertiary Education (UAQTE) Act exemplifies this commitment by providing free tertiary education to eligible Filipino students. This study evaluates the UAQTE program's implementation through the perspectives of student beneficiaries, employing a combined approach of qualitative analysis and machine learning techniques. The study utilizes supervised and unsupervised machine learning to analyze student responses, specifically multiclass text classification using BERT and topic modeling with BERTopic. The results reveal insights into students' experiences and perceptions of the UAQTE program. While BERT demonstrates effectiveness in certain categories, challenges such as overfitting and balancing sequence length versus model performance are identified. BERTopic highlights the importance of capturing two-word combinations for enhancing topic coherence. Key themes identified through both approaches include "Educational Opportunity," "Program Implementation," "Financial Support," and "Appreciation and Gratitude," emphasizing their significance within the UAQTE program. Alignment between machine learning analyses and domain experts' insights underscores the relevance and effectiveness of the methodologies employed. Recommendations for optimizing the UAQTE program include refining focus areas, strengthening support systems, incorporating two-word combinations in analysis, and fostering continuous monitoring and interdisciplinary collaboration. By leveraging insights from qualitative and machine learning analyses, administrators can make informed decisions to enhance program effectiveness and comprehensively address students' diverse needs.
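The "two-word combinations" the study found important for topic coherence correspond to bigram features. A minimal counting sketch in Python (the stop-word list and example responses are hypothetical, and BERTopic's actual n-gram handling is considerably more involved):

```python
from collections import Counter

# Hypothetical stop-word list for illustration only.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "for", "is", "in"}

def top_bigrams(responses, n):
    """Most frequent two-word combinations across responses, ignoring stop words."""
    counts = Counter()
    for text in responses:
        words = [w for w in text.lower().split() if w not in STOPWORDS]
        counts.update(zip(words, words[1:]))  # adjacent word pairs
    return counts.most_common(n)
```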
Emergency sign language recognition from variant of convolutional neural network (CNN) and long short term memory (LSTM) models As'ari, Muhammad Amir; Sufri, Nur Anis Jasmin; Qi, Guat Si
International Journal of Advances in Intelligent Informatics Vol 10, No 1 (2024): February 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i1.1170

Abstract

Sign language is the primary communication tool used by the deaf community and people with speaking difficulties, especially during emergencies. Numerous deep learning models have been proposed to solve the sign language recognition problem. Recently, bidirectional LSTM (BLSTM) has been proposed as a replacement for Long Short-Term Memory (LSTM), as it may improve the learning of long-term dependencies and increase model accuracy. However, there has been little comparison of the performance of LSTM and BLSTM within the LRCN model architecture for sign language interpretation. Therefore, this study focused on a detailed analysis of the LRCN model, including 1) training the CNN from scratch and 2) modeling with pre-trained CNNs, VGG-19 and ResNet50. In addition, the ConvLSTM model, a special variant of LSTM designed for video input, was modeled and compared with the LRCN for emergency sign language recognition. Within the LRCN variants, the performance of a small CNN network was compared with pre-trained VGG-19 and ResNet50V2. A dataset of emergency Indian Sign Language with eight classes was used to train the models. The best-performing model is VGG-19 + LSTM, with a testing accuracy of 96.39%. Small LRCN networks, namely 5 CNN subunits + LSTM and 4 CNN subunits + BLSTM, achieve 95.18% testing accuracy, on par with the best-proposed VGG-19 + LSTM model. Incorporating bidirectional LSTM (BLSTM) into deep learning models can improve the ability to capture long-term dependencies, enhancing accuracy in reading sign language and leading to more effective communication during emergencies.
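The difference between LSTM and BLSTM is the direction of the recurrence: a bidirectional layer runs the sequence both forward and backward and pairs the two states at each time step. The idea can be sketched with a toy scalar RNN (illustrative weights, not the paper's LRCN models):

```python
import math

def simple_rnn(seq, w_in=0.5, w_rec=0.5):
    """Toy scalar recurrence: h[t] = tanh(w_in * x[t] + w_rec * h[t-1])."""
    h, out = 0.0, []
    for x in seq:
        h = math.tanh(w_in * x + w_rec * h)
        out.append(h)
    return out

def bidirectional(seq):
    """Pair each step's forward state with the state of a reversed-time pass."""
    fwd = simple_rnn(seq)
    bwd = list(reversed(simple_rnn(list(reversed(seq)))))
    return list(zip(fwd, bwd))  # each step sees both past and future context
```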
Fastener and rail surface defects detection with deep learning techniques Yilmazer, Merve; Karakose, Mehmet
International Journal of Advances in Intelligent Informatics Vol 10, No 2 (2024): May 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i2.1237

Abstract

Railways, which countries use heavily for both passenger and freight transportation, should be inspected periodically. Inspections made with classical methods are slow, and faults are often overlooked. This work proposes a novel deep learning-based technique for identifying fastener and railway track surface defects. In the proposed method, the railroad track is first observed with an autonomous drone. Shaky images in the recorded video are removed with a video stabilization algorithm. Frames are extracted from the video and labeled, and rail and fastener regions are detected using the Faster R-CNN deep neural network. Fault detection is then performed through ResNet101v2-based classification, using separate datasets for identifying surface defects in rails and for detecting fastener defects. The proposed method was experimentally shown to achieve a 98% accuracy rate for detecting rail surface flaws and a 95% accuracy rate for detecting fastener flaws. A user interface was developed to display the identified faulty images on computers, tablets, and mobile phones via a mobile application. A system previously proposed in a different study was retrained with the added video stabilization step, improving the fault detection rate, and the method was also incorporated into the user interface module. This study contributes to processing ever-increasing video data with deep learning-based methods, and it is anticipated to benefit researchers working on non-contact railway fault detection.
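The 98% and 95% figures above are classification accuracies; a minimal sketch of computing accuracy alongside precision and recall from binary defect labels (hypothetical labels, not the study's data):

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, precision, and recall for binary labels (1 = defective)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "precision": tp / (tp + fp) if tp + fp else 0.0,  # of flagged, how many real
        "recall": tp / (tp + fn) if tp + fn else 0.0,     # of real, how many flagged
    }
```

For safety-critical rail inspection, recall (missed defects) typically matters at least as much as overall accuracy.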