Pg Emeroylariffion Abas
Universiti Brunei Darussalam

Published: 3 Documents

Articles: 3 documents found

Transfer learning for cancer diagnosis in histopathological images
Sandhya Aneja; Nagender Aneja; Pg Emeroylariffion Abas; Abdul Ghani Naim
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 11, No 1: March 2022
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v11.i1.pp129-136

Abstract

Transfer learning allows us to exploit knowledge gained from one task to assist in solving another, related task. In modern computer vision research, a central question is which architecture performs best for a given dataset. In this paper, we compare the performance of 14 pre-trained ImageNet models on the histopathologic cancer detection dataset, with each model configured as a naive model, a feature extractor, or a fine-tuned model. DenseNet161 has been shown to have high precision, whilst ResNet101 has high recall. A high-precision model is suitable when the cost of a follow-up examination is high, whilst a lower-precision but high-recall (high-sensitivity) model can be used when the cost of a follow-up examination is low. Results also show that transfer learning helps models converge faster.
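The precision/recall trade-off described in this abstract can be made concrete with a small helper that computes both metrics from confusion-matrix counts. This is a minimal sketch; the counts below are illustrative and are not results from the paper.

```python
def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Return (precision, recall) from confusion-matrix counts."""
    precision = tp / (tp + fp)  # of flagged cases, how many were truly positive
    recall = tp / (tp + fn)     # of true positives, how many were flagged
    return precision, recall

# A conservative model: few false positives, more false negatives.
p1, r1 = precision_recall(tp=90, fp=5, fn=20)   # precision ~ 0.947, recall ~ 0.818
# A sensitive model: more false positives, few false negatives.
p2, r2 = precision_recall(tp=105, fp=25, fn=5)  # precision ~ 0.808, recall ~ 0.955
```

The first profile suits expensive follow-up examinations (few unnecessary referrals); the second suits cheap follow-ups, where missing a positive case costs more than an extra examination.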
Defense against adversarial attacks on deep convolutional neural networks through nonlocal denoising
Sandhya Aneja; Nagender Aneja; Pg Emeroylariffion Abas; Abdul Ghani Naim
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 11, No 3: September 2022
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v11.i3.pp961-968

Abstract

Despite substantial advances in network architecture performance, susceptibility to adversarial attacks makes deep learning challenging to deploy in safety-critical applications. This paper proposes a data-centric approach to addressing this problem. A nonlocal denoising method with different luminance values has been used to generate adversarial examples from the Modified National Institute of Standards and Technology database (MNIST) and Canadian Institute for Advanced Research (CIFAR-10) datasets. Under perturbation, the method provided absolute accuracy improvements of up to 9.3% on the MNIST dataset and 13% on the CIFAR-10 dataset. Training on transformed images with higher luminance values increases the robustness of the classifier. We have also shown that transfer learning is disadvantageous for adversarial machine learning. The results indicate that simple adversarial examples can improve resilience and make deep learning easier to apply in various applications.
Dialect classification using acoustic and linguistic features in Arabic speech
Mohammad Ali Humayun; Hayati Yassin; Pg Emeroylariffion Abas
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 12, No 2: June 2023
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v12.i2.pp739-746

Abstract

Speech dialects refer to linguistic and pronunciation variations in the speech of the same language. Automatic dialect classification requires considerable acoustic and linguistic differences between the dialect categories of speech. This paper proposes a classification model for Arabic dialects composed of a combination of classifiers that utilizes both the acoustic and linguistic features of spontaneous speech. The acoustic classification comprises an ensemble of classifiers focusing on different frequency ranges within the short-term spectral features, together with a classifier utilizing the 'i-vector', whilst the linguistic classifiers use features extracted by transformer models pre-trained on large Arabic text datasets. The proposed fusion of multiple classifiers achieves a classification accuracy of 82.44% on the identification task of five Arabic dialects. This is the highest accuracy reported on the dataset, despite the relative simplicity of the proposed model, demonstrating its applicability and relevance for dialect identification tasks.
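A common way to combine acoustic and linguistic classifiers like those described above is late fusion: each classifier emits posterior probabilities over the dialect classes, and a weighted average decides the final label. The sketch below shows this under stated assumptions; the five-dialect label set and the weights are illustrative, not taken from the paper.

```python
import numpy as np

# Example five-way label set (assumed, for illustration only).
DIALECTS = ["EGY", "GLF", "LAV", "MSA", "NOR"]

def fuse(posteriors: list[np.ndarray], weights: list[float]) -> int:
    """Weighted average of per-classifier posteriors; returns the argmax class.

    posteriors: one probability vector per classifier, each summing to 1.
    weights: non-negative per-classifier weights (normalized internally).
    """
    stacked = np.stack(posteriors)                  # (n_classifiers, n_classes)
    w = np.asarray(weights)[:, None] / sum(weights)  # normalize, broadcast
    return int(np.argmax((stacked * w).sum(axis=0)))
```

For example, fusing an acoustic posterior that favours the first dialect with a linguistic posterior that favours the second yields whichever class wins the weighted vote; tuning the weights on held-out data is a typical way to balance the two feature streams.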