Articles

Found 3 Documents
Comparison of FairMOT-VGG16 and MCMOT Implementation for Multi-Object Tracking and Gender Detection on Mall CCTV
Pray Somaldo; Dina Chahyati
Jurnal Ilmu Komputer dan Informasi Vol 14, No 1 (2021): Jurnal Ilmu Komputer dan Informasi (Journal of Computer Science and Information)
Publisher : Faculty of Computer Science - Universitas Indonesia

DOI: 10.21609/jiki.v14i1.958

Abstract

Crowd detection systems on CCTV have proven useful for owners in the retail and shopping sector in mall areas. The data can guide shopping center owners in determining the number of visitors who enter at a certain time. However, such information alone is insufficient. The need for richer data has led to the development of more specific person detection that involves gender. Gender detection can provide specific information on the number of men and women visiting a particular location. However, gender detection alone does not provide an identity label for each detection, so it needs to be combined with a multi-person tracking system. This study compares two tracking methods with gender detection: FairMOT with gender classification, and MCMOT. The first method produces MOTA, MOTP, IDS, and FPS of 78.56, 79.57, 19, and 24.4, while the second produces 69.84, 81.94, 147, and 30.5. In addition, gender was also evaluated: the first method achieved a gender accuracy of 65%, while the second achieved 62.35%.
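The MOTA figures quoted above come from the standard CLEAR MOT evaluation; a minimal sketch of that arithmetic is below. The formula is the conventional CLEAR MOT definition, and the counts fed into it are hypothetical, not the paper's actual evaluation data.

```python
# Hedged sketch of the CLEAR MOT accuracy (MOTA) computation, assumed
# to follow the conventional definition:
#   MOTA = 1 - (FN + FP + IDS) / GT
# where FN/FP are missed and false detections, IDS is the number of
# identity switches, and GT is the total count of ground-truth objects.

def mota(false_negatives: int, false_positives: int,
         id_switches: int, gt_objects: int) -> float:
    """CLEAR MOT accuracy, expressed as a percentage."""
    errors = false_negatives + false_positives + id_switches
    return 100.0 * (1.0 - errors / gt_objects)

# Illustrative (hypothetical) counts over a tracking sequence:
score = mota(false_negatives=150, false_positives=60,
             id_switches=19, gt_objects=1000)
print(round(score, 2))  # 77.1
```

Note that IDS enters MOTA directly, which is why the first method's far lower switch count (19 vs. 147) helps its MOTA despite the second method's higher MOTP.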
Visual Emotion Recognition Using ResNet
Azmi Najid; Dina Chahyati
Proceeding of the Electrical Engineering Computer Science and Informatics Vol 5: EECSI 2018
Publisher : IAES Indonesia Section

DOI: 10.11591/eecsi.v5.1700

Abstract

Given an image, humans have emotional reactions to it, such as happiness, fear, or disgust. The purpose of this research is to classify images based on humans' reactions to them using the ResNet deep architecture. The problem is that emotional reactions from humans are subjective; therefore, a confidently labelled dataset is difficult to obtain. This research tries to overcome this problem by implementing and analyzing transfer learning from a big dataset such as ImageNet to a relatively small visual emotion dataset. In addition, because emotion is determined by both low-level and high-level features, we modify a pretrained residual network to better utilize low-level and high-level features for visual emotion recognition. Results show that general (low-level) features and specific (high-level) features obtained from ImageNet object recognition can be well utilized for visual emotion recognition.
Designing the CORI score for COVID-19 diagnosis in parallel with deep learning-based imaging models
Kamelia, Telly; Zulkarnaien, Benny; Septiyanti, Wita; Afifi, Rahmi; Krisnadhi, Adila; Rumende, Cleopas M.; Wibisono, Ari; Guarddin, Gladhi; Chahyati, Dina; Yunus, Reyhan E.; Pratama, Dhita P.; Rahmawati, Irda N.; Nareswari, Dewi; Falerisya, Maharani; Salsabila, Raissa; Baruna, Bagus DI.; Iriani, Anggraini; Nandipinto, Finny; Wicaksono, Ceva; Sini, Ivan R.
Narra J Vol. 5 No. 2 (2025): August 2025
Publisher : Narra Sains Indonesia

DOI: 10.52225/narra.v5i2.1606

Abstract

The coronavirus disease 2019 (COVID-19) pandemic has triggered a global health crisis and placed unprecedented strain on healthcare systems, particularly in resource-limited settings where access to RT-PCR testing is often restricted. Alternative diagnostic strategies are therefore critical. Chest X-rays, when integrated with artificial intelligence (AI), offer a promising approach for COVID-19 detection. The aim of this study was to develop an AI-assisted diagnostic model that combines chest X-ray images and clinical data to generate a COVID-19 Risk Index (CORI) Score and to implement a deep learning model based on the ResNet architecture. Between April 2020 and July 2021, a multicenter cohort study was conducted across three hospitals in Jakarta, Indonesia, involving 367 participants categorized into three groups: 100 COVID-19 positive, 100 with non-COVID-19 pneumonia, and 100 healthy individuals. Clinical parameters (e.g., fever, cough, oxygen saturation) and laboratory findings (e.g., D-dimer and C-reactive protein levels) were collected alongside chest X-ray images. Both the CORI Score and the ResNet model were trained using this integrated dataset. During internal validation, the ResNet model achieved 91% accuracy, 94% sensitivity, and 92% specificity. In external validation, it correctly identified 82 of 100 COVID-19 cases. The combined use of imaging, clinical, and laboratory data yielded an area under the ROC curve of 0.98 and a sensitivity exceeding 95%. The CORI Score demonstrated strong diagnostic performance, with 96.6% accuracy, 98% sensitivity, 95.4% specificity, a 99.5% negative predictive value, and a 91.1% positive predictive value. Despite limitations, including retrospective data collection, inter-hospital variability, and limited external validation, the ResNet-based AI model and the CORI Score show substantial promise as diagnostic tools for COVID-19, with performance comparable to that of experienced thoracic radiologists in Indonesia.
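The accuracy, sensitivity, specificity, NPV, and PPV figures reported for the CORI Score all derive from a 2x2 confusion matrix; a minimal sketch of that derivation is below. The formulas are the standard screening-test definitions, and the counts used are hypothetical illustrations, not the study's data.

```python
# Hedged sketch: standard diagnostic-test metrics computed from
# confusion-matrix counts (true/false positives and negatives).
# The example counts are hypothetical, chosen only to show the arithmetic.

def diagnostic_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Screening-test metrics from 2x2 confusion-matrix counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),  # true positive rate (recall)
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv":         tp / (tp + fp),  # positive predictive value
        "npv":         tn / (tn + fn),  # negative predictive value
    }

# Hypothetical counts for a cohort of 300:
m = diagnostic_metrics(tp=98, fp=10, tn=190, fn=2)
print({k: round(v, 3) for k, v in m.items()})
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the cohort, which is one reason external validation on a differently composed population can shift them.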