
Found 4 Documents

Comparison of FairMOT-VGG16 and MCMOT Implementation for Multi-Object Tracking and Gender Detection on Mall CCTV
Pray Somaldo; Dina Chahyati
Jurnal Ilmu Komputer dan Informasi Vol 14, No 1 (2021): Jurnal Ilmu Komputer dan Informasi (Journal of Computer Science and Information)
Publisher : Faculty of Computer Science - Universitas Indonesia

DOI: 10.21609/jiki.v14i1.958

Abstract

The crowd detection system on CCTV has proven to be useful for retail and shopping sector owners in mall areas. The data can be used as a guide by shopping center owners to find out the number of visitors who enter at a certain time. However, such information alone is insufficient. The need for richer data has led to the development of more specific person detection that also involves gender. Gender detection can provide specific information on the number of men and women visiting a particular location. However, gender detection alone does not provide an identity label for every detection that occurs, so it needs to be combined with a multi-person tracking system. This study compares two tracking methods with gender detection, namely FairMOT with gender classification and MCMOT. The first method produces MOTA, MOTP, IDS, and FPS of 78.56, 79.57, 19, and 24.4, while the second method produces 69.84, 81.94, 147, and 30.5. In addition, gender classification was also evaluated: the first method achieved a gender accuracy of 65% while the second achieved 62.35%.
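The MOTA and MOTP figures quoted in this abstract are the standard CLEAR MOT tracking metrics. A minimal sketch of how they are computed is below; the per-sequence counts used here are hypothetical illustrations, not data from the paper.

```python
# Hedged sketch of the CLEAR MOT metrics reported in the abstract.
# The counts passed in at the bottom are made up for illustration only.

def mota(false_negatives, false_positives, id_switches, ground_truth_objects):
    """Multi-Object Tracking Accuracy: 1 - (FN + FP + IDS) / GT."""
    return 1.0 - (false_negatives + false_positives + id_switches) / ground_truth_objects

def motp(total_match_overlap, num_matches):
    """Multi-Object Tracking Precision: mean localisation score over matched pairs."""
    return total_match_overlap / num_matches

# Hypothetical counts: 120 misses, 80 false alarms, 19 identity switches,
# 1000 ground-truth object instances across the sequence.
print(round(mota(120, 80, 19, 1000) * 100, 2))  # prints 78.1 (MOTA as a percentage)
```

Note how the 19 identity switches reported for FairMOT enter MOTA directly as an error term, which is why the method with fewer IDS scores higher on MOTA even at similar detection quality.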
Visual Emotion Recognition Using ResNet
Azmi Najid; Dina Chahyati
Proceeding of the Electrical Engineering Computer Science and Informatics Vol 5: EECSI 2018
Publisher : IAES Indonesia Section

DOI: 10.11591/eecsi.v5.1700

Abstract

Given an image, humans have emotional reactions to it, such as happiness, fear, or disgust. The purpose of this research is to classify images based on humans' reactions to them using the ResNet deep architecture. The problem is that emotional reactions are subjective, so a confidently labelled dataset is difficult to obtain. This research tries to overcome this problem by implementing and analyzing transfer learning from a large dataset such as ImageNet to a relatively small visual emotion dataset. In addition, because emotion is determined by both low-level and high-level features, we modify a pretrained residual network to better utilize low-level and high-level features for visual emotion recognition. Results show that general (low-level) features and specific (high-level) features obtained from ImageNet object recognition can be well utilized for visual emotion recognition.
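The core idea in this abstract, jointly using shallow (low-level) and deep (high-level) features from a pretrained network, can be sketched as feature concatenation ahead of the final classifier. The toy vectors and tiny linear scorer below are hypothetical stand-ins for real ResNet activations and the fine-tuned head, not the paper's actual architecture.

```python
# Hedged sketch: fuse a shallow-layer descriptor (texture/colour cues)
# with a deep-layer descriptor (semantic cues) before classification.
# All values below are made up for illustration.

def concat_features(low_level, high_level):
    """Concatenate shallow and deep descriptors into one feature vector."""
    return list(low_level) + list(high_level)

def linear_score(features, weights, bias):
    """Score = w . x + b; a placeholder for a fine-tuned final layer."""
    return sum(w * x for w, x in zip(weights, features)) + bias

fused = concat_features([0.2, 0.7], [0.9, 0.1, 0.4])   # hypothetical activations
score = linear_score(fused, [1.0, -0.5, 0.3, 0.2, 0.1], 0.05)
```

The design point is that the classifier sees both feature scales at once, so ImageNet-pretrained low-level filters are not discarded when adapting to the small emotion dataset.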
Designing the CORI score for COVID-19 diagnosis in parallel with deep learning-based imaging models
Kamelia, Telly; Zulkarnaien, Benny; Septiyanti, Wita; Afifi, Rahmi; Krisnadhi, Adila; Rumende, Cleopas M.; Wibisono, Ari; Guarddin, Gladhi; Chahyati, Dina; Yunus, Reyhan E.; Pratama, Dhita P.; Rahmawati, Irda N.; Nareswari, Dewi; Falerisya, Maharani; Salsabila, Raissa; Baruna, Bagus DI.; Iriani, Anggraini; Nandipinto, Finny; Wicaksono, Ceva; Sini, Ivan R.
Narra J Vol. 5 No. 2 (2025): August 2025
Publisher : Narra Sains Indonesia

DOI: 10.52225/narra.v5i2.1606

Abstract

The coronavirus disease 2019 (COVID-19) pandemic has triggered a global health crisis and placed unprecedented strain on healthcare systems, particularly in resource-limited settings where access to RT-PCR testing is often restricted. Alternative diagnostic strategies are therefore critical. Chest X-rays, when integrated with artificial intelligence (AI), offer a promising approach for COVID-19 detection. The aim of this study was to develop an AI-assisted diagnostic model that combines chest X-ray images and clinical data to generate a COVID-19 Risk Index (CORI) Score and to implement a deep learning model based on the ResNet architecture. Between April 2020 and July 2021, a multicenter cohort study was conducted across three hospitals in Jakarta, Indonesia, involving 367 participants categorized into three groups: 100 COVID-19 positive, 100 with non-COVID-19 pneumonia, and 100 healthy individuals. Clinical parameters (e.g., fever, cough, oxygen saturation) and laboratory findings (e.g., D-dimer and C-reactive protein levels) were collected alongside chest X-ray images. Both the CORI Score and the ResNet model were trained using this integrated dataset. During internal validation, the ResNet model achieved 91% accuracy, 94% sensitivity, and 92% specificity. In external validation, it correctly identified 82 of 100 COVID-19 cases. The combined use of imaging, clinical, and laboratory data yielded an area under the ROC curve of 0.98 and a sensitivity exceeding 95%. The CORI Score demonstrated strong diagnostic performance, with 96.6% accuracy, 98% sensitivity, 95.4% specificity, a 99.5% negative predictive value, and a 91.1% positive predictive value. Despite limitations (retrospective data collection, inter-hospital variability, and limited external validation), the ResNet-based AI model and the CORI Score show substantial promise as diagnostic tools for COVID-19, with performance comparable to that of experienced thoracic radiologists in Indonesia.
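The sensitivity, specificity, NPV, and PPV figures quoted above follow the standard confusion-matrix definitions. A minimal sketch is below; the cell counts are illustrative, except that the external validation's 82 of 100 detected cases does fix the sensitivity cells at TP = 82, FN = 18.

```python
# Hedged sketch of the standard diagnostic metrics reported in the abstract.
# Only tp=82 / fn=18 follow from the text; fp and tn here are made up.

def diagnostic_metrics(tp, fp, tn, fn):
    """Compute accuracy, sensitivity, specificity, PPV, and NPV."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,
        "sensitivity": tp / (tp + fn),  # true positive rate
        "specificity": tn / (tn + fp),  # true negative rate
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
    }

# Sensitivity = 82 / (82 + 18) = 0.82 regardless of the other cells.
m = diagnostic_metrics(tp=82, fp=10, tn=190, fn=18)
```

This also shows why NPV can sit far above PPV (as in the reported 99.5% vs 91.1%): NPV rises with the number of true negatives, which dominates when most screened individuals are disease-free.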
Peningkatan Kualitas Citra Bawah Air Menggunakan GAN dengan Mekanisme Residual dan Attention (Underwater Image Quality Enhancement Using a GAN with Residual and Attention Mechanisms)
Abdurrachman, Nursanti; Chahyati, Dina
Jurnal Teknologi Informasi dan Ilmu Komputer Vol 12 No 6: December 2025
Publisher : Fakultas Ilmu Komputer, Universitas Brawijaya

DOI: 10.25126/jtiik.2025126

Abstract

Underwater images inevitably suffer from quality degradation caused by wavelength- and distance-dependent attenuation and scattering of light. Light interference and complex underwater backgrounds frequently cause blurriness, color distortion, and other degradation problems. Enhancing underwater image quality aims not only to improve visual perception but also to provide higher-quality inputs for downstream image processing techniques. The uniqueness and complexity of underwater images make enhancement methods designed for conditions such as low light and fog unsuitable for underwater enhancement tasks. To address this, this research employs Generative Adversarial Networks (GANs) for underwater image enhancement, incorporating attention and residual mechanisms into the generator; these mechanisms allow the network to focus on important parts of the image and aid in recovering information lost during the enhancement process. The study uses the EUVP dataset with 3330 training images, 1110 test images, and 1110 validation images. The resulting images achieve a good balance between natural image quality and structural similarity with the ground truth, and the proposed method balances color recovery and texture preservation, producing more natural and realistic images without excessive color artifacts or loss of texture detail. Evaluation results show that the proposed method achieves PSNR 23.7966, SSIM 0.7219, UIQM 1.4485, and UCIQE 0.2389.
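PSNR, the first metric this abstract reports, is a straightforward function of the mean squared error between the enhanced image and its ground-truth reference. A minimal sketch is below; the two tiny pixel lists are hypothetical toy data, not EUVP samples.

```python
import math

# Hedged sketch of PSNR, one of the full-reference metrics in the abstract.
# The 6-pixel "images" below are made up for illustration.

def mse(img_a, img_b):
    """Mean squared error over flat pixel lists of equal length."""
    return sum((a - b) ** 2 for a, b in zip(img_a, img_b)) / len(img_a)

def psnr(img_a, img_b, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    error = mse(img_a, img_b)
    if error == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(max_val ** 2 / error)

reference = [52, 55, 61, 59, 79, 61]   # hypothetical ground-truth pixels
enhanced = [54, 55, 60, 58, 79, 64]    # hypothetical enhanced output
print(round(psnr(reference, enhanced), 2))  # prints 44.15
```

Unlike SSIM, UIQM, or UCIQE, PSNR penalizes only pixel-wise error, which is why the paper pairs it with structural and no-reference underwater quality metrics.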