Articles

Journal: JOIV : International Journal on Informatics Visualization

An Experimental Study on Deep Learning Technique Implemented on Low Specification OpenMV Cam H7 Device
Asmara, Rosa Andrie; Rosiani, Ulla Delfana; Mentari, Mustika; Syulistyo, Arie Rachmad; Shoumi, Milyun Ni'ma; Astiningrum, Mungki
JOIV : International Journal on Informatics Visualization Vol 8, No 2 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.2.2299

Abstract

This research evaluates deep learning techniques for identification and recognition on the low-specification OpenMV Cam H7. All tests used deep learning and covered several functions, including face recognition, facial expression recognition, object detection and counting, and object depth estimation. Facial expression recognition used a convolutional neural network to recognize five expressions: angry, happy, neutral, sad, and surprised, with a primary dataset captured using a 48 MP camera. Several scenarios were prepared to cover environmental variability, including indoor and outdoor environments with different lighting conditions and distances. Most of the pre-trained models used for identification and recognition were based on MobileNetV2, since this model has a low computational cost and matches low hardware specifications. The object detection and counting module compared two methods: the conventional Haar Cascade and a deep learning MobileNetV2 model. Training and validation are not recommended on OpenMV devices and were instead carried out on a high-specification computer, using selected primary and secondary data totaling 1,500 images; the computation took around 5 minutes for ten epochs. On the OpenMV device, recognition takes around 0.3 to 2 seconds per frame on average. Accuracy varies with the pre-trained model and the dataset used, but overall the accuracy levels achieved are very high, exceeding 96.6%.
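The abstract does not include the authors' deployment code, but a minimal on-device inference loop in OpenMV MicroPython illustrates the kind of setup it describes. This is a sketch, not the paper's implementation: it assumes the older OpenMV `tf` module (recent firmware replaces it with `ml`), the model filename `expression_mobilenetv2.tflite` is a hypothetical name for a MobileNetV2 network converted to TensorFlow Lite, and only the five expression labels come from the abstract.

```python
import sensor, time, tf

sensor.reset()
sensor.set_pixformat(sensor.RGB565)
sensor.set_framesize(sensor.QVGA)
sensor.skip_frames(time=2000)  # let the sensor settle

# Hypothetical model file; the study's trained network is not distributed here.
net = tf.load("expression_mobilenetv2.tflite", load_to_fb=True)
labels = ["angry", "happy", "neutral", "sad", "surprised"]  # classes from the abstract

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    # classify() slides the network over the frame and returns scored regions
    for obj in net.classify(img, min_scale=1.0, scale_mul=0.8,
                            x_overlap=0.5, y_overlap=0.5):
        label, score = max(zip(labels, obj.output()), key=lambda p: p[1])
        print("%s: %.2f (%.2f fps)" % (label, score, clock.fps()))
```

For the detection-and-counting comparison, the conventional baseline can be run with OpenMV's built-in Haar Cascade support. The sketch below uses the `frontalface` cascade bundled with the firmware as a stand-in for whichever object class the study counted.

```python
import sensor, time, image

sensor.reset()
sensor.set_pixformat(sensor.GRAYSCALE)  # Haar features operate on grayscale
sensor.set_framesize(sensor.HQVGA)
sensor.skip_frames(time=2000)

# Frontal-face cascade shipped with the OpenMV firmware
face_cascade = image.HaarCascade("frontalface", stages=25)

clock = time.clock()
while True:
    clock.tick()
    img = sensor.snapshot()
    objects = img.find_features(face_cascade, threshold=0.75, scale_factor=1.25)
    for r in objects:
        img.draw_rectangle(r)  # mark each detection
    print("count: %d (%.2f fps)" % (len(objects), clock.fps()))
```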
Co-Authors: Adi Atmoko; Agung Nugroho Pramudhita; Agung, Muhammad Helmi Permana; Al Hazmi, Moch. Fariz; Andjani, Bella Sita; Ardiansyah, Muhammad Rizqi; Arie Rachmad Syulistyo; Aryo Bagus Kusumadewa Tutuko; Astiningrum, Mungki; Astuti, Ely Setyo; Atika Prasetyawati; Aulia Zahra Musthafawi; Baqi, Rijalul; Batubulan, Kadek Suarjuna; Bella Sita Andjani; Cadea Mikha Pasma; Choirina, Priska; Chrisnandari, Rosita Dwi; Dyah Ayu Irawati; Eka Larasati Amalia; Elisiana, Malia; Fadlilah, Afi; Fadlullah, Faqih; Faisal Rahutomo; Faqih Fadlullah; Fitri Maharany; Fitriani, Indah Martha; Frangky Tupamahu; Gunawan Budi Prasetyo; Hidayatinnisa, Nurul; Ibnu Tsalis Assalam; Irfin, Zakijah; Khosyi Nasywa Imanda; Krista Bella Dwi Rahayu Nur Widyasari; Luthfansa, Zaky Maula; Malia Elisiana; Marcelina Alifia Rahmawati; Maula, Ahmad Zaky; Maulana Syarief Hidayatullah; Moch. Fariz Al Hazmi; Mustika Mentari; Nadhifatul Laeily; Nor Wahid Hidayad Ulloh; Nugraha W, Raphael; Nur Afifi, Yunis Fiatin; Nurhayati, Rafika; Nurudin Santoso; P., Mauridhy Hery; Pasma, Cadea Mikha; Permatasari, Twisty Henras; Prasetyawati, Atika; Putra Prima Arhandi; Putra, Rahardhiyan Wahyu; Qonitatul Hasanah; Rahardhiyan Wahyu Putra; Rahmad, Cahya; Raphael Nugraha W; Rosa Andrie Asmara; Rudy Ariyanto; Santoso, Nurudin; Septiar Enggar Sukmana; Shoumi, Milyun Ni’ma; Siti Romlah; Sri Rulianah; Surya Sumpeno; Twisty Henras Permatasari; Vandry Eko Haris Setiyanto; Wilda Imama Sabilla; Yessy Nindi Pratiwi; Yoppy Yunhasnawa; Yunhasnawa, Yoppy; Yushintia Pramitarini; Yusron, Rizqi Darma Rusdiyan; Zaky Maula Luthfansa