IAES International Journal of Artificial Intelligence (IJ-AI)
IAES International Journal of Artificial Intelligence (IJ-AI) publishes articles in the field of artificial intelligence (AI). The scope covers all areas of artificial intelligence and their applications, including: neural networks; fuzzy logic; simulated biological evolution algorithms (such as genetic algorithms and ant colony optimization); reasoning and evolution; intelligence applications; computer vision and speech understanding; multimedia and cognitive informatics; data mining and machine learning tools; heuristic and AI planning strategies and tools; computational theories of learning; technology and computing (such as particle swarm optimization); intelligent system architectures; knowledge representation; bioinformatics; natural language processing; multiagent systems; and more.
Articles
Vol 13, No 2: June 2024 (123 Documents)
Evaluating the machine learning models based on natural language processing tasks
Meeradevi, Meeradevi;
B. J., Sowmya;
B. N., Swetha
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i2.pp1954-1968
In the realm of natural language processing (NLP), a diverse array of language models has emerged, catering to a wide spectrum of tasks ranging from speaker recognition and auto-correction to sentiment analysis and stock prediction. The significance of language models in enabling these NLP tasks cannot be overstated. This study proposes an approach to enhance accuracy by leveraging a hybrid language model that combines the strengths of long short-term memory (LSTM) and gated recurrent unit (GRU) networks: LSTM excels at preserving long-term dependencies in data, while GRU's simpler gating mechanism expedites training. The research evaluates four model variants: LSTM, GRU, bidirectional long short-term memory (Bi-LSTM), and the combined LSTM-GRU model. These models are subjected to rigorous testing on two distinct datasets: one focused on IBM stock price prediction, the other on Jigsaw toxic comment classification (sentiment analysis). This work represents a significant stride towards democratizing NLP capabilities, ensuring that even in resource-constrained settings NLP models can exhibit improved performance. The anticipated implications of these findings span a wide spectrum of real-world applications and hold the potential to stimulate further research in the field of NLP.
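The trade-off the abstract draws between LSTM and GRU comes down to gating: a GRU keeps only two gates (update and reset) instead of the LSTM's three, which is why it trains faster. A minimal NumPy sketch of one GRU update step; all weights and dimensions here are illustrative, not the paper's model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU update: two gates instead of the LSTM's three."""
    z = sigmoid(Wz @ x + Uz @ h)              # update gate
    r = sigmoid(Wr @ x + Ur @ h)              # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))  # candidate state
    return (1.0 - z) * h + z * h_tilde        # blend old and new state

rng = np.random.default_rng(0)
d_in, d_hid = 4, 8
# Alternate input-to-hidden and hidden-to-hidden weight matrices.
params = [rng.standard_normal((d_hid, d_in)) if i % 2 == 0 else
          rng.standard_normal((d_hid, d_hid)) for i in range(6)]
h = np.zeros(d_hid)
for x in rng.standard_normal((5, d_in)):      # run a short input sequence
    h = gru_step(x, h, *params)
print(h.shape)  # hidden state keeps a fixed size across time steps
```

Because each step is a convex blend of the previous state and a tanh candidate, the hidden state stays bounded; in the hybrid setup the abstract describes, such a GRU layer would typically be stacked with an LSTM layer.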
Ubiquitous-cloud-inspired deterministic and stochastic service provider models with mixed-integer-programming
Sumarlin, Sumarlin;
Zarlis, Muhammad;
Suherman, Suherman;
Efendi, Syahril
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i2.pp1304-1311
The ubiquitous computing system is a paradigm shift from personal computing to physical integration. This study focuses on deterministic and stochastic service provider models that assign sub-services to computing nodes so as to minimize rejection values. The deterministic service provider model aims to reduce the cost of sending data from one place to another by considering the processing capacity at each node and the demand for each sub-service, while the stochastic service provider model aims to optimize service provision in an environment where parameters such as demand and capacity may change randomly. The novelties of this research are the deterministic and stochastic service provider models and algorithms formulated with mixed integer programming (MIP). The test results show that the solution found satisfies all constraints and attains the smallest objective function value. The model presented captures the ability of wireless sensors to establish connections between distributed computing nodes, and the stochastic modeling minimizes denial-of-service problems during wireless sensor network (WSN) distribution.
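The deterministic model the abstract describes is an assignment problem: each sub-service goes to a node, subject to per-node processing capacity, at minimum cost. A tiny brute-force sketch of such an instance (all numbers are illustrative, not the paper's data; a real MIP solver would replace the enumeration):

```python
from itertools import product

# cost[s][n]: cost of serving sub-service s at node n (illustrative).
cost = [[4, 2, 7],
        [3, 5, 1],
        [6, 2, 3]]
demand = [2, 1, 2]      # processing units each sub-service needs
capacity = [3, 3, 2]    # processing units available at each node

best = None
for assign in product(range(3), repeat=3):  # enumerate integer assignments
    load = [0, 0, 0]
    for s, n in enumerate(assign):
        load[n] += demand[s]
    if any(load[n] > capacity[n] for n in range(3)):
        continue                            # violates a capacity constraint
    total = sum(cost[s][n] for s, n in enumerate(assign))
    if best is None or total < best[0]:
        best = (total, assign)

print(best)  # (minimum total cost, node chosen for each sub-service)
```

Note that the globally cheapest per-service choices here are infeasible (they overload a node), so the capacity constraints force a dearer but valid assignment, which is exactly what the MIP formulation encodes; the stochastic variant would additionally sample demand and capacity from distributions.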
An enhanced domain ontology model of database course in computing curricula
Rahayu, Nur W.;
Ferdiana, Ridi;
Kusumawardani, Sri S.
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i2.pp1339-1347
The ACM/IEEE Computing Curricula 2020 includes the study of relational databases in four of its six disciplines. However, a domain ontology model of a multidisciplinary database course does not yet exist. Therefore, the current study aims to build such a model. The research process comprises three phases: a review of database course contents based on the ACM/IEEE Computing Curricula 2020, a literature review of relevant domain ontology models, and a design research phase using the NeOn methodology framework. The ontology building involves the reuse and reengineering of existing models, along with the construction of some classes from a non-ontological resource. The approach to ontology reuse and reengineering demonstrates ontology reusability. The final domain ontology model is then evaluated using two ontology syntactic metrics, Relationship Richness and Information Richness, which reflect the diversity of relationships and the breadth of knowledge in the model, respectively. In conclusion, the current research contributes to the Computing Curricula by providing an ontology model for a multidisciplinary database course. The model, developed through ontology reuse and reengineering and the integration of non-ontological resources, exhibits more diverse relationships and represents a broader range of knowledge.
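The two metrics named above are simple ratios over counts of ontology elements. A sketch using the common OntoQA-style definition of Relationship Richness, with hypothetical counts; the Information Richness formulation shown is one of several variants in the literature, so treat both formulas as assumptions rather than the paper's exact definitions:

```python
# Hypothetical element counts for an ontology model.
subclass_relations = 12   # inheritance (is-a) links
other_relations = 18      # non-inheritance relationships
classes = 20              # number of classes
attributes = 25           # datatype properties attached to classes

# Relationship Richness: share of relationships that are not plain is-a
# links, i.e. how diverse the relationships are.
relationship_richness = other_relations / (subclass_relations + other_relations)

# Information Richness (one formulation, assumed here): average amount of
# attribute and relationship information carried per class.
information_richness = (attributes + other_relations) / classes

print(round(relationship_richness, 2), round(information_richness, 2))
```

A model built purely from subclass hierarchies would score near zero on Relationship Richness, which is why reusing and reengineering richer source ontologies raises both metrics.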
Object detection of Bornean orangutan nests using a drone and YOLOv5
Teguh, Rony;
Dwijaya Maleh, I Made;
Sagit Sahay, Abertun;
Porkab Pratama, Muhamad;
Simon, Okta
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i2.pp1640-1649
Object detection methods, when applied to ecology and conservation, can help identify and monitor endangered species and their habitats. Using drones for this purpose has become increasingly popular because they can cover large areas quickly and efficiently. In this study, we implement object detection with YOLOv5 to detect orangutan nests in forests, collecting drone imagery under different conditions and using the original YOLOv5 architecture for our model. The detection and monitoring of orangutan nests can help conservationists identify critical habitats, monitor populations, and design effective conservation strategies. Additionally, the use of drones can reduce the need for on-the-ground surveys, which are time-consuming, expensive, and logistically challenging. Trained on 1,970 images with 414 labeled orangutan nests, the model achieved a precision of 0.973, a recall of 0.949, a mean average precision mAP@0.5 of 0.969, and mAP@0.5:0.95 of 0.630. Training finished 217 epochs in 58 hours, and the model detected the number of orangutan nests with 99.9% accuracy.
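The mAP@0.5 and mAP@0.5:0.95 figures quoted above hinge on intersection-over-union (IoU): a predicted nest box counts as a true positive only when its IoU with a labeled box clears the threshold. A minimal sketch of that matching criterion (box coordinates are illustrative):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes, as used when
    matching predicted nest boxes to labels at a given mAP threshold."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# At mAP@0.5 a prediction is a true positive when IoU >= 0.5.
pred, label = (10, 10, 50, 50), (20, 20, 60, 60)
print(iou(pred, label))
```

mAP@0.5:0.95 simply averages the precision over IoU thresholds from 0.5 to 0.95, which is why it is the stricter of the two numbers reported.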
Improved performance of fake account classifiers with percentage overlap features selection
Tjahyanto, Aris;
Pratama, Rivanda Putra;
Shiddiqi, Ary Mazharuddin
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i2.pp1585-1595
Feature selection plays a crucial role in the development of high-performance classification models. We propose an innovative method for detecting fake accounts that leverages the percentage overlap technique to refine feature selection. We build our technique on earlier work that showcased the enhanced efficacy of the Naïve Bayesian classifier through dataset normalization. Our study employs a dataset of account profiles sourced from Twitter, which we normalize using the Min-Max method. We analyze the results through a series of comprehensive experiments involving diverse classification algorithms, including Naïve Bayes, decision tree, k-nearest neighbors (KNN), deep learning, and support vector machines (SVM). Our experimental results demonstrate 100% accuracy for the SVM and deep learning classifiers, attributable to the percentage overlap technique, which identifies four highly informative features. These findings outperform models with more extensive feature sets, underscoring the efficacy of our approach.
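The two ingredients above, Min-Max normalization and an overlap-based feature score, can be sketched as follows. The overlap function here is a simplified stand-in (the fraction of the combined value range shared by both classes), not necessarily the paper's exact formula, and the "followers count" feature values are hypothetical:

```python
def min_max(values):
    """Min-Max normalization: rescale a feature into [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def percentage_overlap(fake_vals, real_vals):
    """Simplified overlap score: how much of the combined value range is
    shared by both classes. A low score suggests the feature separates
    fake from real accounts well (illustrative stand-in formula)."""
    lo = max(min(fake_vals), min(real_vals))
    hi = min(max(fake_vals), max(real_vals))
    span = max(max(fake_vals), max(real_vals)) - min(min(fake_vals), min(real_vals))
    return max(0.0, hi - lo) / span if span else 1.0

# Hypothetical "followers count" feature for the two classes.
fake = [3, 8, 15, 20, 40]
real = [35, 120, 500, 900, 4000]
print(round(percentage_overlap(fake, real), 3))  # low overlap: keep the feature
print(min_max([2, 4, 6, 10]))
```

Ranking features by such a score and keeping only the least-overlapping ones is the kind of pruning that reduced the model to four informative features.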
A genetic algorithm-based feature selection approach for diabetes prediction
Kangra, Kirti;
Singh, Jaswinder
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i2.pp1489-1498
Genetic algorithms have emerged as a powerful optimization technique for feature selection because they search a vast feature space efficiently. This study discusses the importance of feature selection for prediction in healthcare, focusing on diabetes mellitus, a chronic metabolic disorder that poses significant health challenges worldwide. Feature selection is essential for improving the performance of prediction models: it retains significant features and removes unnecessary ones, and the study aims to identify the most informative subset of features. For the experiment, two diabetes-related datasets were downloaded from Kaggle, and results on both, with and without genetic-algorithm feature selection, were compared. Machine learning classifiers and genetic algorithms were combined to increase the precision of diabetes risk prediction, with preprocessing, feature selection, classification, and performance evaluation applied in turn. The results showed that the genetic algorithm with logistic regression (80% accuracy) worked best on the PIMA diabetes dataset, while the genetic algorithm with random forest and the genetic algorithm with k-nearest neighbors (98.5% accuracy) outperformed the other chosen classifiers on the Germany diabetes dataset. Through this study, researchers can better comprehend the importance of feature selection in healthcare.
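Genetic-algorithm feature selection encodes each candidate feature subset as a binary mask and evolves a population of masks by selection, crossover, and mutation. A self-contained toy sketch; the fitness function below is a hypothetical stand-in (a real one would be a classifier's cross-validated accuracy on the masked features):

```python
import random

random.seed(42)
N_FEATURES = 8

def fitness(mask):
    """Toy fitness: features 1, 3, 6 are 'informative', every extra
    selected feature costs a small penalty (stand-in for accuracy)."""
    informative = {1, 3, 6}
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in informative)
    extras = sum(mask) - hits
    return hits - 0.1 * extras

def crossover(a, b):
    """Single-point crossover of two feature masks."""
    cut = random.randrange(1, N_FEATURES)
    return a[:cut] + b[cut:]

def mutate(mask, rate=0.1):
    """Flip each bit with a small probability."""
    return [bit ^ (random.random() < rate) for bit in mask]

pop = [[random.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(20)]
for _ in range(30):                      # generations
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                   # truncation selection (elitist)
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(10)]

best = max(pop, key=fitness)
print(best)
```

Because the top masks are carried over unchanged each generation, the best fitness never decreases; the surviving mask is then the feature subset handed to the downstream classifier.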
Towards a disease prediction system: BioBERT-based medical profile representation
Hatoum, Rima;
Alkhazraji, Ali;
Ibrahim, Zein Al Abidin;
Dhayni, Houssein;
Sbeity, Ihab
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i2.pp2314-2322
Predicting diseases in advance is crucial in healthcare, allowing for early intervention and potentially saving lives. Machine learning plays a pivotal role in healthcare advancements today, and various studies aim to predict diseases based on prior knowledge. However, a significant challenge lies in representing medical information for machine learning: patient medical histories are often in an unreadable format, necessitating filtering and conversion into numerical data. Natural language processing (NLP) techniques have made this task more manageable. In this paper, we propose three medical information representations, two of which are based on bidirectional encoder representations from transformers for biomedical text mining (BioBERT), a state-of-the-art text representation technique in the biomedical field. We compare these representations to highlight the powerful advantages of BioBERT-based methods in disease prediction. We evaluate the efficiency of our approach using the medical information mart for intensive care III (MIMIC-III) database, which contains data from 46,520 patients, focusing on the prediction of coronary artery disease. The results demonstrate the effectiveness of our proposal. In summary, BioBERT, NLP techniques, and the MIMIC-III database are the key components of our work, which significantly enhances disease prediction in healthcare.
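Turning a free-text medical history into a fixed-size numerical profile typically means pooling the per-token vectors a model like BioBERT produces. A minimal sketch of that pooling step; the random 768-dimensional vectors below are placeholders for real BioBERT token embeddings (which in practice would come from a pretrained model, e.g. via the Hugging Face `transformers` library), and the pooling choice is an assumption:

```python
import numpy as np

rng = np.random.default_rng(7)

def profile_vector(token_embeddings):
    """Mean-pool per-token embeddings into one fixed-size patient
    representation, a common way to featurize free-text notes."""
    return token_embeddings.mean(axis=0)

def cosine(u, v):
    """Cosine similarity between two profile vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Placeholder "BioBERT" outputs: 768-d vectors for each token of two notes.
notes_a = rng.standard_normal((32, 768))   # 32 tokens in one medical note
notes_b = rng.standard_normal((45, 768))   # 45 tokens in another
vec_a, vec_b = profile_vector(notes_a), profile_vector(notes_b)
print(vec_a.shape, round(cosine(vec_a, vec_b), 3))
```

The key property is that notes of any length collapse to the same 768-dimensional shape, so a downstream classifier for coronary artery disease can consume them uniformly.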
Combination of gray level co-occurrence matrix and artificial neural networks for classification of COVID-19 based on chest X-ray images
Imran, Bahtiar;
Delsi Samsumar, Lalu;
Subki, Ahmad;
Zaeniah, Zaeniah;
Salman, Salman;
Rijal Alfian, Muhammad
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i2.pp1625-1631
This research uses the gray level co-occurrence matrix (GLCM) and artificial neural networks to classify COVID-19 from chest X-ray images. According to previous studies, no prior work has integrated GLCM with artificial neural networks for this task. Training runs of 10, 30, 50, 70, 100, and 120 epochs were used. The dataset comprised 600 images, divided into 300 normal chest images and 300 COVID-19 images. Classification accuracy was 91% at 10 epochs, 91% at 30 epochs, 92% at 50 epochs, 91% at 70 epochs, 92% at 100 epochs, and 90% at 120 epochs. As the classification tests indicate, combining GLCM and artificial neural networks can produce good results, yielding an effective classifier for COVID-19.
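A GLCM counts how often each pair of gray levels occurs at a given pixel offset; texture features derived from it (contrast, homogeneity, and so on) are what would feed the neural network. A minimal sketch on a tiny 4-level patch (the pixel values are illustrative, not X-ray data):

```python
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Gray level co-occurrence matrix for one offset: counts how often
    gray level i appears next to gray level j at displacement (dx, dy)."""
    mat = np.zeros((levels, levels), dtype=int)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            mat[image[y, x], image[y + dy, x + dx]] += 1
    return mat

# Tiny 4-level patch (illustrative values, not a real X-ray).
patch = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 2, 2, 2],
                  [2, 2, 3, 3]])
m = glcm(patch, levels=4)  # horizontal neighbor co-occurrences
print(m)
```

Smooth regions pile counts onto the diagonal (equal neighboring levels), while texture spreads them off-diagonal, which is what makes the matrix a compact texture descriptor for the classifier.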
Tuning the K value in K-nearest neighbors for malware detection
M. Abualhaj, Mosleh;
Abu-Shareha, Ahmad Adel;
Shambour, Qusai Y.;
Al-Khatib, Sumaya N.;
Hiari, Mohammad O.
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i2.pp2275-2282
Malicious software, also referred to as malware, poses a serious threat to computer networks, user privacy, and user systems. Effective cybersecurity depends on the correct detection and classification of malware. To improve its effectiveness, the K-nearest neighbors (KNN) method is applied systematically in this study to the task of malware detection, investigating the effect of the number-of-neighbors parameter (K) on KNN's performance. The MalMem-2022 malware dataset and relevant evaluation criteria, namely accuracy, precision, recall, and F1-score, are used to assess the efficacy of the suggested technique. The experiments compare the performance of various parameter setups to evaluate how parameter tuning affects the accuracy of malware detection. The findings show that careful parameter adjustment considerably boosts the KNN method's malware detection capability, and highlight the potential of tuned KNN as a useful tool for prompt and precise identification of malware in real-world circumstances.
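Why the value of K matters can be seen on a toy two-feature dataset: the same query point can flip class as K grows and more distant neighbors join the vote. A self-contained sketch (the points below are illustrative, not MalMem-2022 samples):

```python
from collections import Counter
import math

def knn_predict(train, query, k):
    """Classify `query` by majority vote among its k nearest training
    points, using Euclidean distance."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in nearest[:k])
    return votes.most_common(1)[0][0]

# Toy 2-feature "malware vs. benign" points (illustrative values).
train = [((1.0, 1.0), "benign"), ((1.2, 0.9), "benign"),
         ((0.8, 1.1), "benign"), ((1.4, 1.0), "benign"),
         ((3.0, 3.2), "malware"), ((3.1, 2.9), "malware"),
         ((2.9, 3.0), "malware")]
query = (2.8, 2.8)

# Sweep K, as the study does, and watch the decision change.
for k in (1, 3, 5, 7):
    print(k, knn_predict(train, query, k))
```

Here the query sits inside the malware cluster, so small K classifies it as malware; at K=7 the vote includes every point and the larger benign class wins, illustrating how an ill-chosen K degrades detection accuracy.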
A soft computing algorithmic technique for circuital analysis of a wireless mobile charger
Olukayode Ojo, Adedayo;
Oladipupo Alegbeleye, Oluwafemi;
Omowunmi Olomowewe, Rashida
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science
DOI: 10.11591/ijai.v13.i2.pp1443-1449
Wireless energy transfer is emerging as a promising technology for mobile devices because it enhances rapid charging without requiring conventional cables. In this paper, a wireless mobile charger circuit was designed and simulated, the data obtained thereof was used to train an artificial neural network (ANN) using Levenberg-Marquardt (LM) algorithm. The result obtained was validated against that obtained when trained with regular scaled conjugate algorithm. Analysis of the results showed that the proposed technique remains a viable technique for rapidly analyzing several parts of the wireless mobile charger circuit for design and educational purposes, without always executing computationally intensive and time-consuming simulations.