Rachid El Ayachi
Sultan Moulay Slimane University

Published: 11 Documents

Articles

Found 4 Documents
Journal: Indonesian Journal of Electrical Engineering and Computer Science

Amazigh part-of-speech tagging with machine learning and deep learning Otman Maarouf; Rachid El Ayachi; Mohamed Biniz
Indonesian Journal of Electrical Engineering and Computer Science Vol 24, No 3: December 2021
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v24.i3.pp1814-1822

Abstract

Natural language processing (NLP) is a branch of artificial intelligence in which computers analyze, understand, and generate natural languages in both written and spoken settings. Part-of-speech (POS) tagging marks each word according to its grammatical category. In the literature, POS tagging has been applied to several languages, in particular French and English. This paper investigates attention-based long short-term memory (LSTM) networks and a simple recurrent neural network (RNN) for Tifinagh POS tagging, compared against conditional random fields (CRF) and a decision tree. The attractiveness of LSTM networks is their strength in modeling long-distance dependencies. The experimental results show that LSTM networks perform better than the RNN, CRF, and decision tree, which have similar performance to one another.
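The LSTM recurrence that gives the tagger its long-distance memory can be sketched as a single gated step followed by a per-word tag readout. This is a minimal illustration in numpy, not the paper's trained model: the embedding size, hidden size, random weights, and four-tag toy tag set are all assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One LSTM time step; gates are stacked as [input, forget, output, candidate]."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b              # stacked pre-activations, shape (4H,)
    i = sigmoid(z[0:H])                     # input gate
    f = sigmoid(z[H:2 * H])                 # forget gate
    o = sigmoid(z[2 * H:3 * H])             # output gate
    g = np.tanh(z[3 * H:4 * H])             # candidate cell update
    c = f * c_prev + i * g                  # additive cell path carries long-range info
    h = o * np.tanh(c)                      # hidden state emitted at this step
    return h, c

# Tag a toy sequence of "word embeddings" with a linear readout over hidden states.
rng = np.random.default_rng(0)
D, H, n_tags = 8, 16, 4                     # embedding dim, hidden dim, tag-set size
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
W_out = rng.normal(scale=0.1, size=(n_tags, H))

h, c = np.zeros(H), np.zeros(H)
tags = []
for x in rng.normal(size=(5, D)):           # a 5-word sentence
    h, c = lstm_step(x, h, c, W, U, b)
    tags.append(int(np.argmax(W_out @ h)))  # one POS tag index per word
```

Because the cell state `c` is updated additively (gated by `f` and `i`) rather than squashed through a nonlinearity at every step, gradients can flow across many time steps, which is the property the abstract credits for LSTM outperforming the simple RNN.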
Recognition of a Face in a Mixed Document Lhoussaine Bouhou; Rachid El Ayachi; Mohamed Fakir; Mohamed Oukessou
Indonesian Journal of Electrical Engineering and Computer Science Vol 15, No 2: August 2015
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v15.i2.pp301-312

Abstract

Face recognition is a field of great research interest, with applications such as biometric identification, surveillance, and human-machine interaction. This paper presents a face recognition system that operates on a text document image embedding a color image of a human face. First, in its extraction phase, the system exploits the horizontal and vertical histograms of the document to detect the region containing the human face. Second, it detects the face within that region and determines its characteristics using invariant moments. Third, it computes, via the same invariant moments, the characteristics of each face stored in a database and compares them, by means of a classification tool (neural networks and K-nearest neighbors), with those determined in the second step, in order to identify in the database the face most similar to the one detected in the input image.
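The invariant-moment features the classifier compares can be sketched from central image moments; a key property is that they do not change when the face shifts inside the frame. This is a minimal sketch of the first two Hu invariants on a synthetic binary "face"; the array sizes and rectangle stand in for a real face image.

```python
import numpy as np

def central_moment(img, p, q):
    """Central image moment mu_pq of a grayscale image (origin at the centroid)."""
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    m00 = img.sum()
    xbar = (xs * img).sum() / m00
    ybar = (ys * img).sum() / m00
    return (((xs - xbar) ** p) * ((ys - ybar) ** q) * img).sum()

def invariants(img):
    """First two Hu moment invariants, built from normalized central moments."""
    mu00 = img.sum()
    def eta(p, q):                          # scale-normalized central moment
        return central_moment(img, p, q) / mu00 ** (1 + (p + q) / 2.0)
    hu1 = eta(2, 0) + eta(0, 2)
    hu2 = (eta(2, 0) - eta(0, 2)) ** 2 + 4 * eta(1, 1) ** 2
    return np.array([hu1, hu2])

# Translation invariance: the same shape shifted in the frame yields the same features.
face = np.zeros((32, 32)); face[8:20, 10:22] = 1.0
shifted = np.zeros((32, 32)); shifted[12:24, 4:16] = 1.0
```

In a full system, a feature vector like this would be computed for the detected face and for every database face, and a K-nearest-neighbors classifier would pick the closest match.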
Automatic translation from English to Amazigh using transformer learning Otman Maarouf; Abdelfatah Maarouf; Rachid El Ayachi; Mohamed Biniz
Indonesian Journal of Electrical Engineering and Computer Science Vol 34, No 3: June 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v34.i3.pp1924-1934

Abstract

Despite the numerous machine translation studies completed between major European language pairs, to our knowledge no study has been conducted on the Amazigh-English pair, due to the lack of parallel data. We decided to apply the neural machine translation (NMT) method to a parallel corpus of 137,322 sentences. We construct statistical machine translation (SMT) models based on Moses, as well as NMT models with the attention-based encoder-decoder architecture, using long short-term memory (LSTM), gated recurrent units (GRU), and transformers. After several simulations, various outcomes were obtained for each strategy: 80.7% accuracy was achieved with the statistical approach, 85.2% with the GRU model, 87.9% with the LSTM model, and 91.37% with the transformer.
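The core operation shared by the attention-based models above, and the defining one for the transformer, is scaled dot-product attention: each decoder query forms a probability distribution over source tokens. A minimal numpy sketch, with toy sequence lengths and random states standing in for real encoder/decoder representations:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)   # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(QK^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)           # similarity of each query to each key
    weights = softmax(scores, axis=-1)        # one distribution over source tokens per query
    return weights @ V, weights               # context vectors + attention weights

rng = np.random.default_rng(1)
src_len, tgt_len, d = 6, 4, 8                 # toy source/target lengths, model dim
K = rng.normal(size=(src_len, d))             # encoder states acting as keys
V = rng.normal(size=(src_len, d))             # encoder states acting as values
Q = rng.normal(size=(tgt_len, d))             # decoder states acting as queries
ctx, w = attention(Q, K, V)
```

The GRU and LSTM systems use this mechanism on top of a recurrent encoder, whereas the transformer replaces recurrence entirely with stacked (multi-head) attention layers, which is consistent with it scoring highest here.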
One level deep convolutional neural network for facial key points detection Abdelaali Benaiss; Rachid El Ayachi; Mohamed Biniz; Mustapha Oujaoura
Indonesian Journal of Electrical Engineering and Computer Science Vol 33, No 3: March 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v33.i3.pp1694-1704

Abstract

Facial landmark detection has many applications in face recognition, face alignment, facial expression recognition, video surveillance, and security systems. In the existing literature, multiple methods using convolutional neural networks (CNNs) address this problem in various ways; in many cases, the models use a tree-like structure of CNNs to achieve better results. This paper proposes a combination of three parallel deep convolutional neural networks (DCNNs) to estimate the accurate localization of each keypoint. The first focuses on the whole face to predict five points: the eyes, the nose, and the mouth corners. The second focuses on the eyes-nose region to predict three points, specifically the eyes and nose. The last focuses on the nose-mouth region to predict three points, namely the nose and mouth corners. We then combine the outputs of the three DCNNs and take the average value of each detected keypoint as the final output. In a first step, we improve the parameter efficiency and accuracy of each DCNN through a set of experiments using the labeled face parts in-the-wild database (LFPW) and the Helen facial feature dataset. Then, we demonstrate that our approach yields more accurate estimations of facial keypoints than two state-of-the-art methods and commercial software.
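The fusion step described above, averaging each keypoint over whichever branches predict it, can be sketched in a few lines of plain Python. The keypoint names and coordinates below are illustrative assumptions, not outputs of the paper's networks:

```python
# Each branch predicts (x, y) for the subset of keypoints it covers; points covered
# by more than one branch (e.g. the nose) are averaged to form the final estimate.
def fuse_keypoints(*branch_preds):
    """Average per-keypoint predictions from several (possibly overlapping) branches."""
    collected = {}
    for preds in branch_preds:
        for name, (x, y) in preds.items():
            collected.setdefault(name, []).append((x, y))
    return {name: (sum(p[0] for p in pts) / len(pts),
                   sum(p[1] for p in pts) / len(pts))
            for name, pts in collected.items()}

# Hypothetical predictions from the three branches on one face.
whole_face = {"left_eye": (30, 40), "right_eye": (70, 40), "nose": (50, 60),
              "mouth_left": (35, 80), "mouth_right": (65, 80)}
eyes_nose  = {"left_eye": (31, 41), "right_eye": (69, 39), "nose": (52, 62)}
nose_mouth = {"nose": (48, 61), "mouth_left": (36, 79), "mouth_right": (64, 81)}
final = fuse_keypoints(whole_face, eyes_nose, nose_mouth)
```

Averaging over branches trained on different crops acts as a small ensemble: each shared keypoint benefits from two or three independent estimates, which is why the combined output can beat any single branch.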