Contact Name
Imam Much Ibnu Subroto
Contact Email
imam@unissula.ac.id
Phone
-
Journal Mail Official
ijai@iaesjournal.com
Editorial Address
-
Location
Kota Yogyakarta,
Daerah Istimewa Yogyakarta
INDONESIA
IAES International Journal of Artificial Intelligence (IJ-AI)
ISSN: 2089-4872     EISSN: 2252-8938     DOI: -
IAES International Journal of Artificial Intelligence (IJ-AI) publishes articles in the field of artificial intelligence (AI). The scope covers all artificial intelligence areas and their applications in the following topics: neural networks; fuzzy logic; simulated biological evolution algorithms (such as genetic algorithms and ant colony optimization); reasoning and evolution; intelligence applications; computer vision and speech understanding; multimedia and cognitive informatics; data mining and machine learning tools; heuristic and AI planning strategies and tools; computational theories of learning; technology and computing (such as particle swarm optimization); intelligent system architectures; knowledge representation; bioinformatics; natural language processing; multiagent systems; etc.
Arjuna Subject : -
Articles 1,808 Documents
Optimal economic environmental power dispatch by using artificial bee colony algorithm Hassan, Elia Erwani; Noor, Hanan Izzati Mohd; Bin Hashim, Mohd Ruzaini; Sulaima, Mohamad Fani; Bahaman, Nazrulazhar
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 2: June 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i2.pp1469-1478

Abstract

Today, most power plants worldwide use fossil fuels such as natural gas, coal, and oil as the primary resource for energy production. The economic environmental power dispatch (EEPD) problem seeks the minimum total generator cost together with minimum fossil-fuel emissions to address atmospheric pollution. Thus, the significant objective functions are identified as minimizing the generation cost, the emission of pollutants, and the system losses individually. As an alternative, an artificial bee colony (ABC) swarm algorithm is applied to solve the EEPD problem separately on both the standard IEEE 26-bus system and the IEEE 57-bus system using a MATLAB programming environment. The performance of the introduced algorithm is measured with simple mathematical analysis, such as the deviation and its percentage computed from the obtained results. By this measurement, the ABC algorithm showed an improvement on each identified single objective function compared with the gradient approach using the Newton-Raphson method, in a short computational time.
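The bee-colony mechanics behind the abstract above can be sketched in a few lines. The three-generator quadratic fuel-cost curve, the bounds, and all parameter values below are illustrative assumptions, not the paper's IEEE 26/57-bus setup.

```python
import random

def abc_minimize(cost, dim, bounds, n_bees=20, limit=30, iters=200, seed=1):
    """Minimal artificial bee colony: employed, onlooker, and scout phases."""
    rng = random.Random(seed)
    lo, hi = bounds
    foods = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_bees)]
    fits = [cost(x) for x in foods]
    trials = [0] * n_bees

    def try_move(i):
        # perturb one coordinate toward/away from another food source
        k = rng.randrange(n_bees)
        while k == i:
            k = rng.randrange(n_bees)
        j = rng.randrange(dim)
        x = foods[i][:]
        x[j] += rng.uniform(-1, 1) * (foods[i][j] - foods[k][j])
        x[j] = min(max(x[j], lo), hi)
        f = cost(x)
        if f < fits[i]:                      # greedy acceptance
            foods[i], fits[i], trials[i] = x, f, 0
        else:
            trials[i] += 1

    for _ in range(iters):
        for i in range(n_bees):              # employed bees
            try_move(i)
        for _ in range(n_bees):              # onlookers pick sources by fitness
            weights = [1 / (1 + f) for f in fits]
            r, acc = rng.uniform(0, sum(weights)), 0.0
            for i, w in enumerate(weights):
                acc += w
                if acc >= r:
                    try_move(i)
                    break
        for i in range(n_bees):              # scouts replace exhausted sources
            if trials[i] > limit:
                foods[i] = [rng.uniform(lo, hi) for _ in range(dim)]
                fits[i] = cost(foods[i])
                trials[i] = 0
    best = min(range(n_bees), key=fits.__getitem__)
    return foods[best], fits[best]

# Toy stand-in for a fuel-cost curve: quadratic cost per generator (illustrative only).
toy_cost = lambda p: sum(0.01 * x * x + 2 * x + 10 for x in p)
sol, f = abc_minimize(toy_cost, dim=3, bounds=(0.0, 100.0))
```

In a real dispatch problem the cost function would also include emission terms and power-balance constraints, typically handled with penalty terms.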
Computer vision that can ‘see’ in the dark Goh, Shi Yong; Wong, Yan Chiew; Ahmad Radzi, Syafeeza; Sarban Singh, Ranjit Singh
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 3: September 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i3.pp2883-2892

Abstract

An insufficient lighting environment raises challenges for the safety monitoring of night-shift workers. Thus, we have developed a computer-vision algorithm recognizing 11 actions based on the action recognition in the dark (ARID) dataset. A hybrid model integrating a convolutional neural network (CNN) into YOLOv7 is proposed. YOLOv7 is an algorithm designed for real-time object detection in images or video, providing fast and accurate detection for applications such as autonomous vehicles and surveillance systems. In this work, video captured in a dark environment is first enhanced using the CNN algorithm before being fed into the YOLOv7 network for activity recognition. Adaptive gamma intensity correction (GIC) is integrated to further improve the overall result. The proposed model has been evaluated over different enhancement modes. It handles dark video frames with 74.95% Top-1 accuracy at a processing speed of 93.99 ms/frame on a 4 GB RTX 3050 graphics processing unit (GPU) and 17.59 ms/frame on a 16 GB Tesla T4 GPU. The base size of the proposed model is small, only 74.8 MB, yet it has 36.54 M total parameters, indicating that it can learn meaningful information with limited hardware resources.
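The adaptive gamma step can be illustrated on a toy frame. The mean-intensity heuristic for choosing gamma below is one common formulation and an assumption here, not necessarily the GIC variant the paper uses.

```python
import numpy as np

def adaptive_gamma_correction(img):
    """Brighten dark frames: pick gamma from mean intensity (a common heuristic)."""
    x = img.astype(np.float64) / 255.0
    mean = x.mean()
    gamma = np.log(0.5) / np.log(max(mean, 1e-6))  # maps the mean toward 0.5
    return (np.clip(x ** gamma, 0.0, 1.0) * 255).astype(np.uint8)

dark = np.full((4, 4), 40, dtype=np.uint8)   # uniformly dark toy frame
bright = adaptive_gamma_correction(dark)
```

Because the chosen gamma maps the mean intensity to mid-gray, a frame with mean 40 is lifted to roughly 127, while an already well-lit frame (mean near 128) is left almost unchanged.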
1-dimensional convolutional neural networks for predicting sudden cardiac arrest Reddy Karna, Viswavardhan; Vishnu Vardhana Reddy, Karna
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 1: March 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i1.pp984-993

Abstract

Sudden cardiac arrest (SCA) is a serious heart problem that occurs without symptoms or warning and causes high mortality. Therefore, it is important to estimate the incidence of SCA. Current methods for predicting ventricular fibrillation (VF) episodes require monitoring patients over time, which is not without complications. New technologies, especially machine learning, are gaining popularity due to the benefits they provide. However, most existing systems rely on manual processes, which can lead to inefficiencies in handling patient information. On the other hand, existing deep learning methods rely on large datasets that are not publicly available. In this study, we propose a deep learning method based on one-dimensional convolutional neural networks that learns from discrete Fourier transform (DFT) features of raw electrocardiogram (ECG) signals. The results showed that our method was able to predict the onset of SCA with an accuracy of 96% approximately 90 minutes before it occurred. Such predictions can save many lives; that is, optimized deep learning models can outperform manual approaches in analyzing long-term signals.
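The feature step the abstract describes, DFT features of a raw ECG window, might look as follows. The sampling rate, window length, and the synthetic sine standing in for an ECG rhythm are assumptions for illustration; the paper's 1-D CNN would consume such vectors.

```python
import numpy as np

def dft_features(ecg_window, n_bins=8):
    """Magnitude spectrum of one ECG window, pooled into coarse frequency bins."""
    spectrum = np.abs(np.fft.rfft(ecg_window - ecg_window.mean()))
    return np.array([b.mean() for b in np.array_split(spectrum, n_bins)])

fs = 250                                   # assumed sampling rate, Hz
t = np.arange(fs * 4) / fs                 # a 4-second window
sig = np.sin(2 * np.pi * 1.2 * t)          # toy 1.2 Hz rhythm standing in for ECG
feats = dft_features(sig)
```

For a low-frequency rhythm like this, virtually all spectral energy lands in the lowest bin, which is exactly the kind of compact frequency summary a small 1-D CNN can learn from.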
Apple fruits categorizing based on deep convolutional neural network techniques Hussain, Nashaat; Zaki, Gihan; Hassan, Mohamed
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 3: September 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i3.pp3695-3702

Abstract

For a variety of reasons, including the high degree of similarity between varieties of the same type of fruit, the requirement to train the technique on a large amount of data, and the type and number of features suitable for the application, the use of computer vision techniques in the classification of fruits still faces many challenges. Additionally, the technique's effectiveness and speed both need to be improved. For all of these reasons, deep convolutional neural network (DCNN) approaches were required. A proposed CNN model is described in this work. The suggested methodology is intended to quickly and accurately categorize thirteen groups of apple fruits. The proposed technique was based on training and testing the model on a maximum number of images of apple fruits, increasing the number of database images tenfold through augmentation. The technique also relied on careful tuning of the hyperparameters. To further ensure the efficiency of training, validation was performed on 20% of the database. All results demonstrating the high efficiency of the proposed model were reviewed, and the results of the proposal were compared with those of four related techniques. The results showed the clear advantage of the proposed technique at all levels.
Optimizer algorithms and convolutional neural networks for text classification Qorich, Mohammed; Ouazzani, Rajae El
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 1: March 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i1.pp451-458

Abstract

Lately, deep learning has improved the algorithms and architectures of several natural language processing (NLP) tasks. Nevertheless, the performance of any deep learning model is strongly affected by the optimizer algorithm used, which updates the model parameters, finds the optimal weights, and minimizes the value of the loss function. Thus, this paper proposes a new convolutional neural network (CNN) architecture for text classification (TC) and sentiment analysis and pairs it with various optimizer algorithms from the literature. In NLP, and particularly for sentiment classification, the need for more empirical experiments increases the probability of selecting the pertinent optimizer. Hence, we have evaluated various optimizers on three sizes of text review datasets: small, medium, and large. We examined the optimizers with respect to the data amount and implemented our CNN model on three different sentiment analysis datasets to binary-label text reviews. The experimental results illustrate that the adaptive optimization algorithms Adam and root mean square propagation (RMSprop) surpassed the other optimizers. Moreover, our best CNN model, which employed the RMSprop optimizer, achieved 90.48% accuracy and surpassed state-of-the-art CNN models for binary sentiment classification problems.
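The RMSprop update rule that the experiments favour can be written out directly. The toy quadratic loss and the hyperparameter values below are illustrative assumptions, not the paper's CNN setting.

```python
import numpy as np

def rmsprop(grad, x0, lr=0.01, beta=0.9, eps=1e-8, steps=500):
    """Plain RMSprop: scale each step by a running average of squared gradients."""
    x = np.array(x0, dtype=float)
    v = np.zeros_like(x)                     # running mean of squared gradients
    for _ in range(steps):
        g = grad(x)
        v = beta * v + (1 - beta) * g * g
        x -= lr * g / (np.sqrt(v) + eps)     # per-parameter adaptive step
    return x

# Toy loss 0.5*(x-3)^2 whose gradient is (x-3); minimum at x = 3.
x_star = rmsprop(lambda x: x - 3.0, x0=[0.0])
```

The per-parameter scaling is what makes RMSprop (and Adam, which adds momentum) robust across the small, medium, and large datasets compared in the paper: the effective step size stays near `lr` regardless of raw gradient magnitude.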
Optically processed Kannada script realization with Siamese neural network model Parathra Sreedharanpillai, Ambili; Abraham, Biku; Kotapuzakal Varghese, Arun
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 1: March 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i1.pp1112-1118

Abstract

Optical character recognition (OCR) is a technology that allows computers to recognize and extract text from images or scanned documents. It is commonly used to convert printed or handwritten text into a machine-readable format. This study presents an OCR system for Kannada characters based on a Siamese neural network (SNN). Here the SNN, a deep neural network comprising two identical convolutional neural networks (CNNs), compares the scripts and ranks them by dissimilarity. When a lower dissimilarity score is identified, a character match is predicted. In this work the authors use 5 classes of Kannada characters, which were preprocessed using grey scaling and converted to PGM format. These images are input directly into the deep convolutional network, which learns from matching and non-matching image pairs between the CNNs with a contrastive loss function in the Siamese architecture. The proposed OCR system takes much less time and gives more accurate results than a regular CNN. The model can become a powerful tool for identification, particularly in situations with a high degree of variation in writing styles or limited training data.
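The contrastive loss driving the Siamese ranking can be sketched as follows. The embeddings and margin below are made-up values; in the real system `e1` and `e2` would come from the twin CNNs.

```python
import numpy as np

def contrastive_loss(e1, e2, same, margin=1.0):
    """Pull matching pairs together; push non-matching pairs past the margin."""
    d = np.linalg.norm(e1 - e2)              # Euclidean dissimilarity score
    if same:
        return 0.5 * d ** 2                  # matching pair: penalise any distance
    return 0.5 * max(margin - d, 0.0) ** 2   # non-matching: penalise being too close

a = np.array([0.1, 0.2])                     # embedding of a query character
b = np.array([0.1, 0.25])                    # near-identical embedding (same class)
c = np.array([0.6, 0.6])                     # embedding of a different character
```

At inference time the system only needs the distance `d`: the pair with the lowest dissimilarity score is predicted as the character match, exactly as the abstract describes.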
DualFaceNet: augmentation consistency for optimal facial landmark detection and face mask classification Songsri-in, Kritaphat; Rattaphun, Munlika; Kaewchada, Sopee; Ruang-on, Somporn
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 3: September 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i3.pp3228-3239

Abstract

In an era where face masks are commonplace, facial recognition faces new challenges and opportunities. This study introduces DualFaceNet (DFN), a cutting-edge neural network that efficiently combines facial landmark detection with mask classification. Benefiting from multi-task learning (MTL) and enhanced with a unique consistency loss, DFN outperforms traditional single-task models. Tests using the reputable 300W dataset and a face mask dataset showcase DFN’s strengths: a significant reduction in landmark error to 5.42 and an increase in mask classification accuracy to 92.59%. These results highlight the potential of integrating MTL and custom loss functions in facial recognition. As face masks continue to be globally essential, DFN’s integrated approach offers a fresh perspective in facial recognition studies. Furthermore, DFN paves the way for adaptive facial recognition systems, emphasizing the adaptability needed in our current era.
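The augmentation-consistency idea can be illustrated for a horizontal flip: landmarks predicted on the flipped image, mapped back, should agree with those predicted on the original. The loss form and coordinates below are assumptions for illustration, not DFN's exact formulation.

```python
import numpy as np

def consistency_loss(pred_orig, pred_flip, width=1.0):
    """Penalise disagreement between landmarks from the original image and
    landmarks from its horizontally flipped copy, mapped back."""
    mapped = pred_flip.copy()
    mapped[:, 0] = width - mapped[:, 0]      # undo the horizontal flip on x
    return float(np.mean((pred_orig - mapped) ** 2))

p = np.array([[0.3, 0.4], [0.7, 0.6]])       # two (x, y) landmarks in [0, 1]
p_flip = np.array([[0.7, 0.4], [0.3, 0.6]])  # the same landmarks after a clean flip
```

A perfectly consistent model incurs (near-)zero loss, so adding this term to the landmark and mask-classification losses pushes the shared backbone toward augmentation-invariant features.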
Word embedding for detecting cyberbullying based on recurrent neural networks Shaker, Noor Haydar; Dhannoon, Ban N.
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 1: March 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i1.pp500-508

Abstract

The phenomenon of cyberbullying has spread to become one of the biggest problems facing users of social media sites, generating significant adverse effects on society and on victims in particular. Finding appropriate solutions to detect and reduce cyberbullying has become necessary to mitigate these negative impacts. Twitter comments from two datasets are used to detect cyberbullying: the first is an Arabic cyberbullying dataset and the second an English cyberbullying dataset. Three pre-trained global vectors (GloVe) corpora with different dimensions were used on the original and preprocessed datasets to represent the words. Recurrent neural network (RNN), long short-term memory (LSTM), bidirectional LSTM (BiLSTM), gated recurrent unit (GRU), and bidirectional GRU (BiGRU) classifiers were utilized, evaluated, and compared. The GRU outperformed the other classifiers on both datasets; its accuracy on the Arabic cyberbullying dataset using the Arabic GloVe corpus of dimension 256 is 87.83%, while the accuracy on the English dataset using the 100-dimension pre-trained GloVe corpus is 93.38%.
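A single GRU step, the recurrence underlying the best-performing classifier above, can be written in a few lines. The tiny dimensions and random weights below are illustrative (a trained model would learn them, and biases are omitted for brevity).

```python
import numpy as np

def gru_step(x, h, Wz, Uz, Wr, Ur, Wh, Uh):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    z = sig(Wz @ x + Uz @ h)                 # how much of the state to replace
    r = sig(Wr @ x + Ur @ h)                 # how much history feeds the candidate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h))
    return (1 - z) * h + z * h_tilde

rng = np.random.default_rng(0)
d_in, d_h = 4, 3                             # toy embedding and hidden sizes
W = [rng.standard_normal((d_h, d_in)) * 0.1 for _ in range(3)]
U = [rng.standard_normal((d_h, d_h)) * 0.1 for _ in range(3)]
h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):     # run over a 5-token "sentence"
    h = gru_step(x, h, W[0], U[0], W[1], U[1], W[2], U[2])
```

In the paper's setting each `x` would be a GloVe word vector (100- or 256-dimensional), and the final hidden state `h` would feed a sigmoid output that labels the tweet as bullying or not.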
Morphology for hexagonal image processing: a comprehensive simulation analysis Cevik, Taner; Nematzadeh, Sajjad; Rasheed, Jawad; Alshammari, Abdulaziz
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 3: September 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i3.pp2574-2590

Abstract

Morphological operators for binary and grayscale images are commonly used to eliminate noise, recognize contours or specific structures, and arrange shapes in image processing for physiological modeling and biomechanics applications. Even though morphology has been substantially developed in square-pixel-based image processing (SIP), no effort has yet been made to construct morphological operators in hexagonal-pixel-based image processing (HIP). In this paper, we transform basic SIP-domain morphological operators such as dilation, erosion, closing, and opening into the HIP domain and compare their performance with their SIP counterparts. This is the first presentation of the fundamental morphological operators in the HIP domain. The operators developed in this paper initiate research on morphology in the HIP domain, filling a significant gap by supplying HIP's missing basic operators and thereby producing enhanced images for better analysis in anatomical models for biology and medicine research.
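Hexagonal dilation and erosion reduce to set operations over a pixel's six neighbours, conveniently expressed in axial coordinates. This is an assumed minimal binary formulation for illustration, not the paper's grayscale operators.

```python
# The six neighbours of a hexagonal pixel in axial (q, r) coordinates.
HEX_DIRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def hex_dilate(cells):
    """Binary dilation with the 6-neighbour hexagonal structuring element."""
    out = set(cells)
    for q, r in cells:
        for dq, dr in HEX_DIRS:
            out.add((q + dq, r + dr))
    return out

def hex_erode(cells):
    """Keep only cells whose full hexagonal neighbourhood is foreground."""
    return {(q, r) for q, r in cells
            if all((q + dq, r + dr) in cells for dq, dr in HEX_DIRS)}

blob = hex_dilate({(0, 0)})          # a single pixel grown to a 7-cell hexagon
```

Closing and opening then follow as `hex_erode(hex_dilate(cells))` and `hex_dilate(hex_erode(cells))`, mirroring their SIP definitions with the square structuring element replaced by the hexagon.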
Transfer learning scenarios on deep learning for ultrasound-based image segmentation Bani Unggul, Didik; Iriawan, Nur; Kuswanto, Heri
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 13, No 3: September 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v13.i3.pp3273-3282

Abstract

Deep learning coupled with transfer learning, which involves reusing a pretrained model's network structure and parameter values, offers a rapid and accurate solution for image segmentation. Approaches differ in how transferred parameters are updated during training. In some studies, parameters remain frozen or untrainable (referred to as TL-S1), while in others they act as trainable initial values updated from the first iteration (TL-S2). We introduce a new transfer learning scenario (TL-S3), where parameters initially remain unchanged and are updated only after a specified cutoff time. Our research compares the performance of these scenarios, a dimension yet unexplored in the literature. We run simulations on three architectures (Dense-UNet-121, Dense-UNet-169, and Dense-UNet-201) using an ultrasound-based dataset with the left ventricular wall as the region of interest. The results reveal that TL-S3 consistently outperforms the previous state-of-the-art scenarios, i.e., TL-S1 and TL-S2, achieving correct classification ratios (CCR) above 0.99 during training, with noticeable performance spikes after the cutoff. Notably, two of the three top-performing models on the validation data also originate from TL-S3. The best model overall is Dense-UNet-121 with TL-S3 and a 20% cutoff, achieving the highest CCR on the training (0.9950), validation (0.9699), and testing (0.9695) data.
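The TL-S3 freeze-then-unfreeze schedule can be sketched on a toy linear model. Only the cutoff idea is taken from the abstract; the task, learning rate, and parameter values below are illustrative assumptions.

```python
def train_tl_s3(w0, total_iters=100, cutoff=0.2, lr=0.05):
    """TL-S3 sketch: transferred weight w stays frozen until the cutoff, then trains."""
    # Toy task: fit y = 2x + 1 with a transferred weight w0 and a fresh bias b.
    data = [(x, 2 * x + 1) for x in (-1.0, 0.0, 1.0, 2.0)]
    w, b = w0, 0.0
    unfreeze_at = int(total_iters * cutoff)          # e.g. 20% cutoff
    for it in range(total_iters):
        gw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
        gb = sum(2 * (w * x + b - y) for x, y in data) / len(data)
        if it >= unfreeze_at:        # before the cutoff, w keeps its pretrained value
            w -= lr * gw
        b -= lr * gb                 # the new head trains from the start
    return w, b

w, b = train_tl_s3(w0=1.5)           # 1.5 plays the role of a pretrained weight
```

The same pattern applies to a Dense-UNet: the transferred encoder layers are marked untrainable until the cutoff iteration, while the newly added segmentation head trains from the first step; TL-S1 and TL-S2 correspond to a cutoff of 100% and 0%, respectively.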
