Hussein, Aziza I.
Unknown Affiliation

Published: 3 Documents

Articles

Design and implementation of a low-cost circuit for medium-speed flash analog to digital conversions Hussain Hassan, Nashaat M.; Esmaeel Salama, Mohamed Adel; Hussein, Aziza I.; Mabrook, Mohamed Mourad
International Journal of Electrical and Computer Engineering (IJECE) Vol 14, No 2: April 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v14i2.pp2361-2368

Abstract

Despite considerable advances in analog-to-digital converter (ADC) circuits, many published designs neglect several crucial considerations. First, they do not verify that the ADC performs well in both software simulation and hardware. Second, they do not ensure a wide input-amplitude range, which is needed in many applications, especially electronics, communications, computer vision, and CubeSat circuits and subsystems. Finally, many of these designs do not examine the suitability of the proposed circuit over the widest possible range of frequencies. This paper proposes a low-cost circuit design for medium-speed flash ADCs. The proposed circuit is simulated with electronic components whose values are chosen to achieve highly stable operation over a wide range of frequencies and voltages, in both software and hardware. The circuit is then implemented and tested experimentally. The design targets high sampling efficiency over input amplitudes from 10 mV to 10 V, and the circuit operates over a frequency band from 0 Hz to above 10 kHz in both simulation and hardware implementation.
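The comparator-ladder principle behind a flash ADC can be sketched as follows. This is an illustrative, idealized model of a generic flash converter, not the authors' circuit; the reference voltage, resolution, and tap spacing are assumptions for the example.

```python
def flash_adc(vin, vref=10.0, bits=3):
    """Idealized flash ADC: 2**bits - 1 comparators compare the input
    against equally spaced resistor-ladder taps; the thermometer code
    (the count of comparators that fire) is the output code."""
    n_comparators = 2 ** bits - 1
    # Ladder taps divide the 0..vref span into 2**bits equal steps.
    taps = [vref * (i + 1) / 2 ** bits for i in range(n_comparators)]
    # Each comparator outputs 1 when vin exceeds its tap voltage.
    return sum(vin > t for t in taps)

# Inputs spanning the amplitude range quoted in the abstract (10 mV to 10 V):
print(flash_adc(0.010))  # small input: no comparator fires, code 0
print(flash_adc(9.9))    # near full scale: all 7 comparators fire, code 7
```

Unlike successive-approximation converters, all comparators evaluate in parallel, which is what gives flash architectures their speed at the cost of comparator count.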
Braille code classifications tool based on computer vision for visual impaired Sadak, Hany M.; Khalaf, Ashraf A. M.; Hussein, Aziza I.; Salama, Gerges Mansour
International Journal of Electrical and Computer Engineering (IJECE) Vol 14, No 6: December 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijece.v14i6.pp6992-7000

Abstract

Blind and visually impaired people (VIP) face many challenges in writing, as they usually rely on traditional tools such as the slate and stylus or expensive typewriters such as the Perkins Brailler, which raises accessibility and affordability issues. This article introduces a novel portable, cost-effective device that helps VIP learn to write by using a deep-learning model to detect a Braille cell. Using deep learning instead of electrical circuits reduces cost and enables a mobile app to act as a virtual teacher for blind users: the app can suggest sentences for the user to write and check their work, providing an independent learning platform. This feature is difficult to implement with electronic circuits alone. The portable device renders Braille character cells using light-emitting diode (LED) arrays instead of embossed Braille dots. A smartphone camera captures the image, which is then processed by a deep-learning model that detects the Braille cell and converts it to English text. The article also contributes a new dataset of custom Braille character cells. Moreover, applying transfer learning to the MobileNet version 2 (MobileNetV2) model provides a basis for developing a comprehensive mobile application. The model reached an accuracy of 97%.
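The ground truth the model must learn is the standard six-dot Braille cell mapping, with dots numbered 1 to 3 down the left column and 4 to 6 down the right. A minimal sketch of that mapping for a few Grade-1 letters (an illustration, not the authors' dataset or model):

```python
# Each letter is the set of raised dots in its cell; the device in the
# paper lights LEDs at these positions instead of embossing paper.
BRAILLE_TO_CHAR = {
    frozenset({1}): "a",
    frozenset({1, 2}): "b",
    frozenset({1, 4}): "c",
    frozenset({1, 4, 5}): "d",
    frozenset({1, 5}): "e",
}

def decode_cell(raised_dots):
    """Map a set of raised-dot numbers to its letter, or '?' if unknown."""
    return BRAILLE_TO_CHAR.get(frozenset(raised_dots), "?")

print(decode_cell({1, 2}))  # -> b
```

In the paper's pipeline this lookup is replaced by a MobileNetV2 classifier operating on camera images of the LED cell, so the mapping is learned from the captured images rather than read off directly.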
A systematic evaluation of pre-trained encoder architectures for multimodal brain tumor segmentation using U-Net-based architectures Abbas, Marwa; Khalaf, Ashraf A. M.; Mogahed, Hussein; Hussein, Aziza I.; Gaber, Lamya; Mabrook, M. Mourad
Indonesian Journal of Electrical Engineering and Computer Science Vol 40, No 2: November 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v40.i2.pp850-859

Abstract

Accurate brain tumor segmentation from medical imaging is critical for early diagnosis and effective treatment planning. Deep learning methods, particularly U-Net-based architectures, have demonstrated strong performance in this domain. However, prior studies have primarily focused on a limited set of encoder backbones, overlooking the potential advantages of alternative pretrained models. This study presents a systematic evaluation of twelve pretrained convolutional neural networks—ResNet34, ResNet50, ResNet101, VGG16, VGG19, DenseNet121, InceptionResNetV2, InceptionV3, MobileNetV2, EfficientNetB1, SE-ResNet34, and SE-ResNet18—used as encoder backbones in the U-Net framework for identifying and extracting tumor-affected brain regions from the BraTS 2019 multimodal MRI dataset. Model performance was assessed through cross-validation, incorporating fault detection to enhance reliability. The MobileNetV2-based U-Net configuration outperformed all other architectures, achieving 99% cross-validation accuracy and 99.3% test accuracy. It also achieved a Jaccard coefficient of 83.45% and Dice coefficients of 90.3% (Whole Tumor), 86.07% (Tumor Core), and 81.93% (Enhancing Tumor), with a low test loss of 0.0282. These results demonstrate that MobileNetV2 is a highly effective encoder backbone for U-Net when extracting tumor-affected brain regions from multimodal medical imaging data.
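The Dice and Jaccard scores reported above are standard set-overlap measures between a predicted segmentation mask and the ground truth. A minimal sketch on binary masks (an illustration of the metrics only, not the paper's evaluation code):

```python
def dice(pred, target):
    """Dice coefficient for two equal-length binary masks (0/1 lists)."""
    inter = sum(p and t for p, t in zip(pred, target))
    return 2 * inter / (sum(pred) + sum(target))

def jaccard(pred, target):
    """Jaccard (IoU) coefficient; related to Dice by D = 2J / (1 + J)."""
    inter = sum(p and t for p, t in zip(pred, target))
    union = sum(p or t for p, t in zip(pred, target))
    return inter / union

pred  = [1, 1, 0, 0]  # predicted tumor mask (flattened)
truth = [1, 0, 1, 0]  # ground-truth mask
print(dice(pred, truth))     # 0.5
print(jaccard(pred, truth))  # 0.333...
```

Because Dice weights the intersection twice, it is always at least as large as Jaccard on the same masks, which is consistent with the paper's 90.3% Dice versus 83.45% Jaccard figures.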