Abul Kalam Al Azad
University of Liberal Arts Bangladesh

Published: 2 Documents

Articles

Hybrid deep neural network for Bangla automated image descriptor
Md Asifuzzaman Jishan; Khan Raqib Mahmud; Abul Kalam Al Azad; Md Shahabub Alam; Anif Minhaz Khan
International Journal of Advances in Intelligent Informatics, Vol 6, No 2 (2020): July 2020
Publisher: Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v6i2.499

Abstract

Automated image-to-text generation is a computationally challenging computer vision task that requires sufficient comprehension of both the syntactic and semantic content of an image to produce a meaningful description. Until recently it had been studied only to a limited extent, owing to the lack of visual-descriptor datasets and of models able to capture the intrinsic complexities of image features. In this study, a novel dataset called Bangla Natural Language Image to Text (BNLIT) was constructed by generating Bangla textual descriptors from visual input, covering 100 annotated classes. A deep neural network-based image captioning model is proposed to generate image descriptions: a Convolutional Neural Network (CNN) classifies the images, while a Recurrent Neural Network (RNN) with Long Short-Term Memory (LSTM) units captures the sequential semantic representation of the text and generates a pertinent description conditioned on the visual content of an image. Tested on the new dataset, the hybrid captioning model achieves a marked improvement on the image-description task, which had not previously been addressed for Bangla. In brief, the model provides benchmark precision in reconstructing characteristic Bangla syntax, and a comprehensive numerical analysis of the model's performance on the dataset is reported.
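As an illustration of the hybrid CNN-RNN/LSTM captioning pipeline described above, a minimal encoder-decoder sketch in Keras is given below. It is an assumption-based sketch, not the published architecture: precomputed CNN image features are merged with an LSTM language model that predicts the next caption token, and the vocabulary size, feature dimension, layer widths, and maximum caption length are placeholder values.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Placeholder hyperparameters -- assumptions, not values from the paper.
VOCAB_SIZE = 5000      # assumed Bangla vocabulary size
MAX_CAPTION_LEN = 20   # assumed maximum caption length in tokens
EMBED_DIM = 256        # assumed embedding/hidden width
FEATURE_DIM = 2048     # assumed size of the precomputed CNN feature vector

# Image branch: a dense projection of precomputed CNN features.
image_input = layers.Input(shape=(FEATURE_DIM,), name="image_features")
img_proj = layers.Dense(EMBED_DIM, activation="relu")(image_input)

# Text branch: embed the partial caption and summarize it with an LSTM.
caption_input = layers.Input(shape=(MAX_CAPTION_LEN,), name="caption_tokens")
embedded = layers.Embedding(VOCAB_SIZE, EMBED_DIM, mask_zero=True)(caption_input)
lstm_state = layers.LSTM(EMBED_DIM)(embedded)

# Merge both modalities and predict the next word of the caption.
merged = layers.add([img_proj, lstm_state])
hidden = layers.Dense(EMBED_DIM, activation="relu")(merged)
next_word = layers.Dense(VOCAB_SIZE, activation="softmax")(hidden)

model = Model(inputs=[image_input, caption_input], outputs=next_word)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

At inference time, a caption would be generated word by word by feeding the tokens predicted so far back into the caption input, the standard greedy or beam-search decoding loop for merge-style captioning models.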
A deep learning approach for brain tumor detection using magnetic resonance imaging
Al-Akhir Nayan; Ahamad Nokib Mozumder; Md. Rakibul Haque; Fahim Hossain Sifat; Khan Raqib Mahmud; Abul Kalam Al Azad; Muhammad Golam Kibria
International Journal of Electrical and Computer Engineering (IJECE), Vol 13, No 1: February 2023
Publisher: Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v13i1.pp1039-1047

Abstract

Brain tumors are caused by the growth of abnormal cells in brain tissue and are considered among the most dangerous disorders in both children and adults. A tumor develops quickly, and the patient's survival prospects are slim if it is not treated appropriately, so proper treatment planning and a precise diagnosis are essential to improving life expectancy. Brain tumors are mainly diagnosed using magnetic resonance imaging (MRI). A convolutional neural network (CNN) architecture containing five convolution layers, five max-pooling layers, a Flatten layer, and two dense layers is proposed for detecting brain tumors in MRI images. The proposed model includes an automatic feature extractor, a modified hidden-layer architecture, and a modified activation function. Across several test cases, the proposed model achieved 98.6% accuracy and a 97.8% precision score with a low cross-entropy rate. Compared with other approaches such as the adjacent feature propagation network (AFPNet), mask region-based CNN (Mask R-CNN), YOLOv5, and Fourier CNN (FCNN), the proposed model performed better at detecting brain tumors.
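For illustration only, the topology described above (five convolution layers, five max-pooling layers, a Flatten layer, and two dense layers) could be sketched in Keras as follows. The filter counts, kernel sizes, input resolution, number of classes, and output activation are assumptions; the paper's modified hidden layers and activation function are not reproduced here.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_tumor_classifier(input_shape=(224, 224, 1), num_classes=2):
    """Sketch of a 5-conv / 5-pool CNN classifier; hyperparameters are assumed."""
    model = models.Sequential()
    model.add(layers.Input(shape=input_shape))
    # Five convolution + max-pooling blocks act as the automatic feature extractor.
    for filters in (32, 64, 128, 128, 256):
        model.add(layers.Conv2D(filters, (3, 3), padding="same", activation="relu"))
        model.add(layers.MaxPooling2D((2, 2)))
    model.add(layers.Flatten())
    # Two dense layers form the classification head (tumor vs. no tumor assumed).
    model.add(layers.Dense(256, activation="relu"))
    model.add(layers.Dense(num_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",  # cross-entropy objective
                  metrics=["accuracy"])
    return model

model = build_tumor_classifier()
model.summary()
```

Training such a model on labeled MRI slices, with accuracy, precision, and cross-entropy tracked on a held-out split, would mirror the evaluation protocol summarized in the abstract.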