Otman Maarouf
Sultan Moulay Slimane University

Published: 3 Documents
Found 2 Documents
Journal: Indonesian Journal of Electrical Engineering and Computer Science

Amazigh part-of-speech tagging with machine learning and deep learning Otman Maarouf; Rachid El Ayachi; Mohamed Biniz
Indonesian Journal of Electrical Engineering and Computer Science Vol 24, No 3: December 2021
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v24.i3.pp1814-1822

Abstract

Natural language processing (NLP) is a branch of artificial intelligence that enables computers to analyze, understand, and generate natural language in both written and spoken form. Part-of-speech (POS) tagging labels each word according to its grammatical category. The literature shows POS tagging applied to several languages, notably French and English. This paper investigates attention-based long short-term memory (LSTM) networks and a simple recurrent neural network (RNN) for Tifinagh POS tagging, compared against conditional random fields (CRF) and a decision tree. The appeal of LSTM networks is their strength in modeling long-distance dependencies. The experimental results show that LSTM networks outperform the RNN, the CRF, and the decision tree, which perform similarly to one another.
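The paper's own tagger implementation is not shown here. As a rough illustration of why an LSTM can carry long-distance context across a sentence, the sketch below runs a single LSTM cell step in NumPy; all dimensions and weight values are illustrative, not taken from the paper:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM cell step: gates decide what to forget, what to write,
    and what to expose, letting the cell state carry long-range context."""
    z = W @ np.concatenate([x, h_prev]) + b        # all four gate pre-activations
    H = h_prev.size
    f = 1 / (1 + np.exp(-z[:H]))                   # forget gate
    i = 1 / (1 + np.exp(-z[H:2*H]))                # input gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))              # output gate
    g = np.tanh(z[3*H:])                           # candidate cell update
    c = f * c_prev + i * g                         # new cell state
    h = o * np.tanh(c)                             # new hidden state (tag features)
    return h, c

# Toy dimensions: 4-dim word embeddings, 3-dim hidden state.
rng = np.random.default_rng(0)
X, H = 4, 3
W = rng.normal(size=(4 * H, X + H)) * 0.1
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for _ in range(5):                                 # run over a toy 5-word "sentence"
    h, c = lstm_step(rng.normal(size=X), h, c, W, b)
print(h.shape)  # (3,)
```

In a real tagger, each per-word hidden state `h` would feed a softmax layer over the POS tag set; the cell state `c` is what preserves information across distant words.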
Automatic translation from English to Amazigh using transformer learning Otman Maarouf; Abdelfatah Maarouf; Rachid El Ayachi; Mohamed Biniz
Indonesian Journal of Electrical Engineering and Computer Science Vol 34, No 3: June 2024
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v34.i3.pp1924-1934

Abstract

Despite the many machine translation studies on major European language pairs, to our knowledge no study has been conducted on the Amazigh-English language pair, owing to the lack of parallel data. We therefore applied neural machine translation (NMT) to a parallel corpus of 137,322 sentences, building statistical machine translation (SMT) models with Moses alongside attention-based encoder-decoder NMT models using long short-term memory (LSTM), gated recurrent units (GRU), and transformers. After several runs, each approach produced the following results: 80.7% accuracy for the statistical approach, 85.2% for the GRU model, 87.9% for the LSTM model, and 91.37% for the transformer.
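The transformer models compared above are built on scaled dot-product attention, in which each target position computes a weighted mix of source representations. A minimal NumPy sketch of that core operation, with toy shapes chosen purely for illustration (not the paper's configuration):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: each query row attends over all
    key/value rows and returns a weighted average of the values."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)       # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)     # softmax over source positions
    return weights @ V, weights

# Toy example: 2 target positions attending over 4 source positions,
# with 8-dimensional representations.
rng = np.random.default_rng(0)
Q = rng.normal(size=(2, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
out, w = attention(Q, K, V)
print(out.shape)  # (2, 8); each row of w sums to 1
```

In a full transformer this operation is repeated with multiple heads and stacked with feed-forward layers in both encoder and decoder; the sketch shows only the attention step itself.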