Ali Abd Almisreb
Universiti Tenaga Nasional

Published: 2 Documents

Acoustical Comparison between /u/ and /u:/ Arabic Vowels for Non-Native Speakers
Ali Abd Almisreb; Nooritawati Md Tahir; Ahmad Farid Abidin; Norashidah Md Din
Indonesian Journal of Electrical Engineering and Computer Science, Vol 11, No 1: July 2018
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v11.i1.pp1-8

Abstract

The correct articulation of Arabic phonemes is essential for the Malay community, since the Arabic language is mandatory for performing worship. Hence, this paper presents an acoustical analysis of the Arabic vowels /u/ and /u:/ based on tokens pronounced by Malay speakers. The experimental results showed that Malay speakers tend to utter these Arabic phonemes similarly to native speakers, and the analysis found that both /u/ and /u:/ were articulated as high back vowels. However, /u/ was located lower than /u:/ in the vowel space, and both /u/ and /u:/ were higher than the other vowels, specifically /a/ and /a:/. In addition, statistical analysis of the short and long dammah showed that formant frequencies F1, F2, and F3 vary more for /u/ than for /u:/, whereas F4 and F5 vary more for /u:/.
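Below is a minimal Python sketch of the kind of formant analysis the abstract describes: estimating F1-F5 from a recorded vowel token via LPC root-finding. The file names, sampling rate, and LPC order are illustrative assumptions, not the authors' actual pipeline.

    # Hedged sketch: LPC-based formant estimation for a sustained vowel token.
    # Assumes a short mono recording of the vowel; not the authors' exact method.
    import numpy as np
    import librosa

    def estimate_formants(path, n_formants=5):
        y, fs = librosa.load(path, sr=16000)          # hypothetical vowel recording
        y = np.append(y[0], y[1:] - 0.63 * y[:-1])    # pre-emphasis filter
        y = y * np.hamming(len(y))                    # taper the analysis window

        a = librosa.lpc(y, order=int(2 + fs / 1000))  # rule-of-thumb LPC order
        roots = [r for r in np.roots(a) if np.imag(r) >= 0]

        # Pole angles map to frequencies; keep plausible formant candidates.
        freqs = sorted(np.angle(roots) * fs / (2 * np.pi))
        return [f for f in freqs if f > 90][:n_formants]  # approx. F1..Fn in Hz

    # Example: compare a short and a long token (hypothetical file names)
    # print(estimate_formants("u_short.wav"), estimate_formants("u_long.wav"))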
Can Convolution Neural Network (CNN) Triumph in Ear Recognition of Uniform Illumination Invariant?
Nursuriati Jamil; Ali Abd Almisreb; Syed Mohd Zahid Syed Zainal Ariffin; N. Md Din; Raseeda Hamzah
Indonesian Journal of Electrical Engineering and Computer Science, Vol 11, No 2: August 2018
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v11.i2.pp558-566

Abstract

Deep convolutional neural networks (CNNs) have been shown to achieve superior performance on a number of computer vision tasks such as image recognition, classification, and object detection. Deep networks have also been tested for view invariance, robustness, and illumination invariance. However, CNN architectures have thus far only been tested under non-uniform illumination. Can a CNN perform equally well on very underexposed or overexposed images, i.e., under uniform illumination variation? This is the gap addressed in this paper. In our work, we collected ear images under different uniform illumination conditions, with illuminance values ranging from 2 lux to 10,700 lux. A total of 1,100 left and right ear images from 55 subjects were captured under natural illumination conditions. As a CNN requires a considerably large amount of data, the ear images were further rotated in 5° increments to generate 25,300 images. For each subject, 50 images were used as the validation/testing dataset, while the remaining images were used for training. Our proposed CNN model was then trained from scratch, and the validation and testing results showed a recognition accuracy of 97%. The results also showed that 100% accuracy was achieved for images above 30 lux, whereas images below 10 lux remained problematic.
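As a companion to the augmentation step described above, here is a minimal Python sketch of rotating each ear image in 5° increments. The angle range of -55° to +55° (23 variants per image, consistent with 1,100 × 23 = 25,300) and the directory layout are assumptions for illustration only.

    # Hedged sketch: rotation augmentation in 5-degree steps.
    # The -55..+55 range and file layout are assumed, not taken from the paper.
    from pathlib import Path
    from PIL import Image

    ANGLES = range(-55, 60, 5)  # 23 angles in 5-degree increments

    def augment_dir(src_dir, dst_dir):
        dst = Path(dst_dir)
        dst.mkdir(parents=True, exist_ok=True)
        for img_path in Path(src_dir).glob("*.png"):  # hypothetical naming
            img = Image.open(img_path)
            for angle in ANGLES:
                out = img.rotate(angle, expand=False, fillcolor=0)
                out.save(dst / f"{img_path.stem}_rot{angle:+03d}.png")

    # augment_dir("ears/raw", "ears/augmented")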