Articles

Found 10 Documents

3D Information from Scattering Media Images Laksmita Rahadianti
Jurnal Ilmu Komputer dan Informasi Vol 14, No 1 (2021): Jurnal Ilmu Komputer dan Informasi (Journal of Computer Science and Information)
Publisher : Faculty of Computer Science - Universitas Indonesia

DOI: 10.21609/jiki.v14i1.963

Abstract

Scattering media environments are real-world conditions that occur often in daily life. Some examples of scattering media are haze, fog, and other bad weather conditions. In these environments, micro-particles in the surrounding medium interfere with light propagation and image formation. Thus, images captured in scattering media suffer from low contrast and loss of intensity. This becomes an issue for computer vision methods that rely on features found in the scene. To address it, many approaches first estimate the corresponding clear scene before further processing. However, the image formation model in scattering media shows that 3D distance information about the scene is encoded implicitly in the image intensities. In this paper, we investigate the information that can be extracted directly from scattering media images. We demonstrate the possibility of extracting relative depth in the form of transmission, as well as explicit depth maps, from single images.
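For illustration, the link between transmission and depth comes from the standard scattering model, where transmission decays exponentially with distance, t(x) = exp(-beta * d(x)). The sketch below is a minimal conversion from a transmission map to relative depth, not the paper's implementation; the scattering coefficient beta is generally unknown, so the output is only relative.

```python
import numpy as np

def transmission_to_relative_depth(t, beta=1.0, eps=1e-6):
    """Convert a transmission map t(x) = exp(-beta * d(x)) into relative depth.

    t    : 2D array of per-pixel transmission values in (0, 1].
    beta : scattering coefficient of the medium; unknown in practice, so
           beta=1.0 is an arbitrary placeholder and the result is relative.
    """
    t = np.clip(t, eps, 1.0)      # avoid log(0)
    return -np.log(t) / beta      # d(x) = -ln(t(x)) / beta

# Lower transmission (denser haze along the line of sight) maps to larger depth.
t_map = np.array([[0.9, 0.5],
                  [0.2, 0.05]])
print(transmission_to_relative_depth(t_map))
```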
Driver Drowsiness Detection Based on Drivers’ Physical Behaviours: A Systematic Literature Review Femilia Hardina Caryn; Laksmita Rahadianti
Computer Engineering and Applications Journal Vol 10 No 3 (2021)
Publisher : Universitas Sriwijaya

DOI: 10.18495/comengapp.v10i3.381

Abstract

One of the most common causes of traffic accidents is human error. One such factor involves the drowsy drivers that do not focus on the road before them. Driver drowsiness often occurs due to fatigue in long distances or long durations of driving. The signs of a drowsy driver may be detected based on one out of three types of tests; i.e., performance test, physiological test, and behavioural test. Since the physiological and performance tests are quite difficult and expensive to implement, the behavioural test is a good choice to use for detecting early drowsiness. Behaviour-based driver drowsiness detection has been one of the hot research topics in recent years and is still increasingly developing. There are many approaches for behavioural driver drowsiness detection, such as Neural Networks, Multi Layer Perceptron, Support Vector Machine, Vander Lugt Correlator, Haar Cascade, and Eye Aspect Ratio. Therefore, this study aims to conduct a systematic literature review to elaborate on the development and research trends regarding driver drowsiness detection. We hope to provide a good overview of the current state of research and offer the research potential in the future.
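Of the behavioural cues listed above, the eye aspect ratio (EAR) has a simple closed form over six eye landmarks, commonly attributed to Soukupová and Čech. The sketch below is a generic illustration of that formula, not code from any study in this list; the 6-point landmark ordering is the usual convention.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: array of shape (6, 2) holding landmarks p1..p6 around one eye.

    EAR = (||p2 - p6|| + ||p3 - p5||) / (2 * ||p1 - p4||)
    The value drops towards 0 as the eye closes, so counting consecutive
    frames with EAR below a threshold (often around 0.2) is a common
    drowsiness cue.
    """
    a = np.linalg.norm(eye[1] - eye[5])   # vertical distance p2-p6
    b = np.linalg.norm(eye[2] - eye[4])   # vertical distance p3-p5
    c = np.linalg.norm(eye[0] - eye[3])   # horizontal distance p1-p4
    return (a + b) / (2.0 * c)
```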
Determining subject headings of documents using information retrieval models Evi Yulianti; Laksmita Rahadianti
Indonesian Journal of Electrical Engineering and Computer Science Vol 23, No 2: August 2021
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijeecs.v23.i2.pp1049-1058

Abstract

A subject heading is a controlled vocabulary term that describes the topic of a document, which is important for finding and organizing library resources. Assigning appropriate subject headings to a document, however, is a time-consuming process. We therefore conduct a novel study on the effectiveness of information retrieval models, i.e., the language model (LM) and the vector space model (VSM), to automatically generate a ranked list of relevant subject headings, with the aim of giving librarians recommendations to determine subject headings effectively and efficiently. Our results show that a high proportion of our queries (up to 61%) have relevant subject headings among the ten top-ranked recommendations, and on average the first relevant subject heading appears at an early position (3rd rank). This indicates that document retrieval methods can help the subject heading assignment process. LM and VSM are shown to have comparable performance, except when the search unit is the title, where VSM is superior to LM by 8-22%. Our further analysis identifies three faculty pairs with potential for research collaboration, as their students' theses often have overlapping subject headings: i) economy and business-social and political sciences, ii) nursing-public health, and iii) medicine-public health.
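As an illustration of the vector space model side of this setup, the sketch below ranks a short, hypothetical list of subject headings against a document title with TF-IDF and cosine similarity via scikit-learn. It is not the authors' pipeline, vocabulary, or data.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical controlled vocabulary; the real one would be the library's list.
subject_headings = [
    "information retrieval",
    "machine learning",
    "library cataloguing",
    "computer vision",
]

# The "query" would be a document field such as its title or abstract.
query = "automatic subject heading recommendation using retrieval models"

vectorizer = TfidfVectorizer()
heading_vectors = vectorizer.fit_transform(subject_headings)
query_vector = vectorizer.transform([query])

# Rank headings by cosine similarity to the query.
scores = cosine_similarity(query_vector, heading_vectors).ravel()
ranked = sorted(zip(subject_headings, scores), key=lambda p: p[1], reverse=True)
for heading, score in ranked:
    print(f"{score:.3f}  {heading}")
```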
Comparing ASM and Learning-Based Methods for Satellite Image Dehazing Steven Christ Pinantyo Arwidarasto; Rahadianti, Laksmita
Jurnal Ilmu Komputer dan Informasi Vol. 18 No. 2 (2025): Jurnal Ilmu Komputer dan Informasi (Journal of Computer Science and Information)
Publisher : Faculty of Computer Science - Universitas Indonesia

DOI: 10.21609/jiki.v18i2.1521

Abstract

Recent advancements in optical satellite technologies have significantly improved image resolution, providing more detailed information about Earth's surface. However, atmospheric interference such as haze remains a major factor during image capture. This interference degrades the visibility of the acquired images, hindering computer vision tasks. Numerous studies have proposed methods to recover haze-affected regions in satellite images, highlighting the need for more effective solutions. Motivated by this, this paper compares different atmospheric dehazing methods, including Atmospheric Scattering Model (ASM)-based and deep learning-based approaches. The results show that SRD is the best ASM-based method, with a PSNR of 19.09 dB and an SSIM of 0.908. Among deep learning models, DW-GAN achieves the best restoration results, with a PSNR of 26.22 dB and an SSIM of 0.959. SRD offers faster inference times, but still suffers from residual haze and noticeable color degradation compared to DW-GAN. In contrast, DW-GAN provides more complete haze removal at the cost of higher computational demands than ASM-based methods.
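For context, ASM-based methods of the kind compared here recover the clear scene by inverting the atmospheric scattering model I(x) = J(x)t(x) + A(1 - t(x)). The sketch below shows that generic inversion step only; it is not the SRD method itself, and the transmission map and atmospheric light are assumed to have been estimated beforehand.

```python
import numpy as np

def dehaze_asm(hazy, transmission, atmospheric_light, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t).

    hazy              : HxWx3 float image in [0, 1].
    transmission      : HxW estimated transmission map t(x).
    atmospheric_light : length-3 estimated airlight A.
    t_min             : lower bound on t to avoid amplifying noise in dense haze.
    """
    t = np.clip(transmission, t_min, 1.0)[..., None]   # HxWx1 for broadcasting
    A = np.asarray(atmospheric_light).reshape(1, 1, 3)
    clear = (hazy - A) / t + A                          # J = (I - A)/t + A
    return np.clip(clear, 0.0, 1.0)
```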
CycleGAN for day-to-night image translation: a comparative study Raihan Taufiq, Muhammad Feriansyah; Rahadianti, Laksmita
IAES International Journal of Artificial Intelligence (IJ-AI) Vol 14, No 3: June 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijai.v14.i3.pp2347-2357

Abstract

Computer vision tasks often fail when applied to night images because the models are usually trained on clear daytime images only. This creates the need to augment the training data with more nighttime images to increase robustness. In this study, we consider day-to-night image translation using both traditional image processing approaches and deep learning models. We employ a hybrid framework of traditional image processing followed by a CycleGAN-based deep learning model for day-to-night image translation, and we conduct a comparative study of various generator architectures within our CycleGAN model. This research compares four different CycleGAN models, i.e., the original CycleGAN, a feature pyramid network (FPN)-based CycleGAN, the original U-Net vision transformer based UVCGAN, and a modified UVCGAN with an additional edge loss. The experimental results show that the original UVCGAN obtains a Fréchet inception distance (FID) score of 16.68 and a structural similarity index measure (SSIM) of 0.42, leading in terms of FID. Meanwhile, FPN-CycleGAN obtains an FID score of 104.46 and an SSIM score of 0.44, leading in terms of SSIM. Considering FPN-CycleGAN's poor FID score and our visual observations, we conclude that UVCGAN is more effective for generating synthetic nighttime images.
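The component shared by all of the CycleGAN variants compared here is the cycle-consistency term that ties the two generators together. The PyTorch sketch below shows that term in isolation, with placeholder generators and the commonly used weight of 10; it is not the UVCGAN or FPN-CycleGAN code, and the adversarial losses are omitted.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(G_day2night, G_night2day, day_batch, night_batch,
                           lambda_cyc=10.0):
    """Cycle loss used by CycleGAN-style models (added to the adversarial losses).

    G_day2night, G_night2day : the two generators (any nn.Module).
    day_batch, night_batch   : unpaired image batches from each domain.
    """
    rec_day = G_night2day(G_day2night(day_batch))      # day -> fake night -> day
    rec_night = G_day2night(G_night2day(night_batch))  # night -> fake day -> night
    return lambda_cyc * (l1(rec_day, day_batch) + l1(rec_night, night_batch))

# Shape-only example with identity mappings standing in for real generators:
# loss = cycle_consistency_loss(nn.Identity(), nn.Identity(),
#                               torch.rand(2, 3, 256, 256), torch.rand(2, 3, 256, 256))
```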
Indonesian Food Classification Using Deep Feature Extraction and Ensemble Learning for Dietary Assessment Kardawi, Muhammad Yusuf; Saragih, Frederic Morado; Rahadianti, Laksmita; Arymurthy, Aniati Murni
Journal of Applied Informatics and Computing Vol. 9 No. 5 (2025): October 2025
Publisher : Politeknik Negeri Batam

DOI: 10.30871/jaic.v9i5.10643

Abstract

Food is a cornerstone of culture, shaping traditions and reflecting regional identities. However, understanding the nutritional content of diverse cuisines can be challenging due to the vast array of ingredients and the similarities in appearance across different dishes. While food provides essential nutrients for the body, excessive and unbalanced consumption can harm health. Overeating, particularly of high-calorie and fatty foods, can lead to an accumulation of excess calories and fat, increasing the risk of obesity and related health issues such as diabetes and heart disease. This paper introduces a novel ensemble learning approach, paired with a dictionary of food nutrition content, to address this challenge, specifically for Padang cuisine, a rich culinary tradition from West Sumatra, Indonesia. Using a dataset of nine Padang dishes, the system employs image enhancement techniques and combines deep feature extraction with machine learning algorithms to classify food items accurately. Based on the classification results, the system then evaluates the nutritional content and creates a dietary assessment report that includes the amounts of protein, fat, calories, and carbohydrates. The model is evaluated using several metrics and achieves a state-of-the-art accuracy of 85.56%, significantly outperforming standard baseline models. Based on these findings, the proposed approach can efficiently classify different Padang dishes and produce dietary assessments, enabling personalised nutritional recommendations that provide clear information on a balanced diet to enhance physical and overall wellness.
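A rough sketch of the general pattern described here: a pretrained CNN used as a fixed feature extractor, a soft-voting ensemble of classical classifiers on those features, and a lookup into a nutrition dictionary for the report. The backbone, classifier choices, dish name, and nutrition numbers below are illustrative placeholders, not the paper's models or data.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.ensemble import VotingClassifier, RandomForestClassifier
from sklearn.svm import SVC

# 1) Pretrained CNN as a fixed feature extractor (backbone choice is illustrative).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()   # expose the 2048-d pooled features
backbone.eval()

preprocess = T.Compose([T.Resize((224, 224)), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

def extract_features(pil_images):
    """Return a (N, 2048) feature matrix for a list of PIL images."""
    with torch.no_grad():
        batch = torch.stack([preprocess(img) for img in pil_images])
        return backbone(batch).numpy()

# 2) Soft-voting ensemble trained on the extracted features.
ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("rf", RandomForestClassifier(n_estimators=200))],
    voting="soft")
# ensemble.fit(extract_features(train_images), train_labels)

# 3) Hypothetical nutrition dictionary (placeholder values, not real data)
#    used to turn a predicted dish label into a dietary report entry.
nutrition = {"example_dish": {"calories": 0, "protein_g": 0, "fat_g": 0, "carbs_g": 0}}

def dietary_report(dish_name):
    return nutrition.get(dish_name, "nutrition data not available")
```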
CP_SDUNet: road extraction using SDUNet and centerline preserving dice loss Persada, Bayu Satria; Susanto, Muhammad Rifqi Priyo; Rahadianti, Laksmita; Arymurthy, Aniati Murni
IAES International Journal of Robotics and Automation (IJRA) Vol 14, No 2: June 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/ijra.v14i2.pp260-272

Abstract

Existing automatic road map extraction approaches for remote sensing images often fail because they cannot capture the spatial context of an image; they learn only its structure or texture. These approaches also focus on regional accuracy instead of connectivity, so most produce discontinuous outputs caused by buildings, shadows, and similarity to rivers. This study addresses the challenge of automatic road extraction, focusing on enhancing road connectivity and segmentation accuracy, by proposing CP_SDUNet, a network-based road extraction method that combines a spatial intensifier module (DULR) and a densely connected U-Net architecture (SDUNet) with a connectivity-preserving loss function (CP_clDice). The study analyzes the CP_clDice loss function for the road extraction task against the BCE loss function for training the SDUNet model. The results show that CP_SDUNet performs best using an image size of 128×128 pixels, trained on the whole dataset with a combination of 20% clDice and 80% dice loss. The proposed method obtains a clDice score of 0.85 and an Intersection over Union (IoU) score of 0.65 on the testing data. These findings demonstrate the potential of CP_SDUNet for reliable road extraction.
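The loss combination reported here (20% clDice, 80% dice) can be written as a weighted sum of a soft dice term and a soft-clDice term built on soft skeletonization (Shit et al.). The PyTorch sketch below is a simplified version of that formulation under the assumption that predictions and targets are (N, 1, H, W) probability maps in [0, 1]; it is not the authors' exact code.

```python
import torch
import torch.nn.functional as F

def soft_dice_loss(pred, target, eps=1.0):
    inter = (pred * target).sum()
    return 1.0 - (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

def soft_skeletonize(x, iterations=10):
    """Morphological soft skeleton via iterated min/max pooling (after Shit et al.)."""
    def erode(img):
        return -F.max_pool2d(-img, 3, stride=1, padding=1)
    def dilate(img):
        return F.max_pool2d(img, 3, stride=1, padding=1)
    skel = F.relu(x - dilate(erode(x)))          # residue of a soft opening
    for _ in range(iterations):
        x = erode(x)
        skel = skel + F.relu(x - dilate(erode(x))) * (1.0 - skel)
    return skel

def soft_cldice_loss(pred, target, eps=1.0):
    skel_pred, skel_true = soft_skeletonize(pred), soft_skeletonize(target)
    tprec = ((skel_pred * target).sum() + eps) / (skel_pred.sum() + eps)
    tsens = ((skel_true * pred).sum() + eps) / (skel_true.sum() + eps)
    return 1.0 - 2.0 * tprec * tsens / (tprec + tsens)

def combined_loss(pred, target, alpha=0.2):
    """alpha * clDice + (1 - alpha) * dice, with alpha = 0.2 as reported here."""
    return alpha * soft_cldice_loss(pred, target) + (1 - alpha) * soft_dice_loss(pred, target)
```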
Cloud Removal Using Sparse Dark Channel Region Detection: A Systemic Literature Review Hamidiyati, Nazifa; Rahadianti, Laksmita
Syntax Literate: Jurnal Ilmiah Indonesia
Publisher : Syntax Corporation

DOI: 10.36418/syntax-literate.v9i12.55565

Abstract

Remote sensing satellite technology has revolutionized the way we gather information about our planet. Through advanced imaging capabilities, satellite images have become invaluable in various aspects of daily life. These images are extensively utilized in environmental protection, agricultural engineering, and other fields, for tasks such as geological mapping, monitoring urban heat islands, environmental surveillance, and detecting forest fires. However, clouds present a significant hindrance when utilizing satellite imagery for ground observations, as they obstruct the view and can limit the accuracy of the analysis. While numerous advanced state-of-the-art approaches are available, they often require a substantial amount of data for training. If a more general approach is desired without the need for extensive training data, pixel-based methods provide a viable option. One of the widely used pixel-based methods for cloud removal in satellite images is the Dark Channel Prior (DCP), which is often combined with other methods to improve image quality. This systematic literature review demonstrates the development of the DCP method for cloud removal from satellite images.
Keywords: cloud removal, dark channel prior (DCP), satellite imagery, remote sensing
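For reference, the dark channel that these methods build on is the per-pixel minimum over the colour channels followed by a minimum filter over a local patch (He et al.), from which a coarse transmission estimate follows. The sketch below is a minimal, generic version of those two steps, not any reviewed paper's variant; the patch size and omega follow commonly used defaults.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(image, patch_size=15):
    """image: HxWx3 float array in [0, 1]. Returns the HxW dark channel."""
    min_over_channels = image.min(axis=2)              # darkest colour channel per pixel
    return minimum_filter(min_over_channels, size=patch_size)  # darkest pixel per patch

def estimate_transmission(image, atmospheric_light, omega=0.95, patch_size=15):
    """Coarse transmission estimate t(x) = 1 - omega * dark_channel(I / A)."""
    normalized = image / np.asarray(atmospheric_light).reshape(1, 1, 3)
    return 1.0 - omega * dark_channel(normalized, patch_size)
```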
Deep Image Deblurring for Non-Uniform Blur: a Comparative Study of Restormer and BANet Nugraha, Made Prastha; Rahadianti, Laksmita
Jurnal Ilmu Komputer dan Informasi Vol. 17 No. 2 (2024): Jurnal Ilmu Komputer dan Informasi (Journal of Computer Science and Information)
Publisher : Faculty of Computer Science - Universitas Indonesia

DOI: 10.21609/jiki.v17i2.1274

Abstract

Image blur is one of the most common degradations in an image. The blur in captured images is sometimes non-uniform, with different levels of blur in different areas of the image. In recent years, most deblurring methods have been deep learning-based. These methods model deblurring as an image-to-image translation problem, treating images globally, which may result in poor performance when handling non-uniform blur. Therefore, in this paper, we compare two state-of-the-art supervised deep learning methods for deblurring and restoration, i.e., BANet and Restormer, with a special focus on non-uniform blur. The GOPRO training dataset, which is also used as a benchmark in various studies, was used to train the models. The trained models were then tested on the GOPRO testing set, on the HIDE testing set for cross-dataset testing, and on GOPRO-NU, a set of specifically selected non-uniformly blurred images from the GOPRO testing set, for non-uniform deblurring testing. On the GOPRO testing set, Restormer achieved an SSIM of 0.891 and a PSNR of 27.66, while BANet obtained an SSIM of 0.926 and a PSNR of 34.90. On the HIDE dataset, Restormer achieved an SSIM of 0.907 and a PSNR of 27.93, while BANet obtained an SSIM of 0.908 and a PSNR of 34.52. Finally, on the non-uniform blur GOPRO dataset, Restormer achieved an SSIM of 0.911 and a PSNR of 29.48, while BANet obtained an SSIM of 0.935 and a PSNR of 35.47. Overall, BANet shows the best results in handling non-uniform blur, with a significant improvement over Restormer.
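The PSNR and SSIM figures quoted here can be computed for any restored/ground-truth image pair with standard library routines. A minimal sketch using scikit-image; the file paths are placeholders, and this is not the evaluation script used in the study.

```python
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_pair(restored_path, ground_truth_path):
    """Return (PSNR, SSIM) for one restored image against its sharp reference."""
    restored = io.imread(restored_path)
    reference = io.imread(ground_truth_path)
    psnr = peak_signal_noise_ratio(reference, restored, data_range=255)
    # channel_axis=2 tells SSIM the images are RGB arrays of shape (H, W, 3).
    ssim = structural_similarity(reference, restored, channel_axis=2, data_range=255)
    return psnr, ssim

# Example usage with placeholder paths:
# psnr, ssim = evaluate_pair("deblurred/0001.png", "sharp/0001.png")
```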
Single Image Dehazing Using Deep Learning Hartanto, Cahyo Adhi; Rahadianti, Laksmita
JOIV : International Journal on Informatics Visualization Vol 5, No 1 (2021)
Publisher : Society of Visual Informatics

DOI: 10.30630/joiv.5.1.431

Abstract

Many real-world situations, such as bad weather, result in hazy environments. Images captured in these hazy conditions have low image quality due to microparticles in the air. The microparticles cause light to scatter and be absorbed, resulting in hazy images with various effects. In recent years, image dehazing has been researched in depth to handle images captured in these conditions. Various methods have been developed, from traditional methods to deep learning methods. Traditional methods focus more on the use of statistical priors, which have weaknesses in certain conditions. This paper proposes a novel architecture based on PDR-Net, using pyramid dilated convolution together with pre-processing, processing, and post-processing modules and attention mechanisms. The proposed network is trained to minimize L1 loss and perceptual loss on the O-Haze dataset. To evaluate our architecture's results, we used the structural similarity index measure (SSIM), peak signal-to-noise ratio (PSNR), and color difference as objective assessments, and a psychovisual experiment as a subjective assessment. Our architecture obtained better results than the previous method on the O-Haze dataset, with an SSIM of 0.798 and a PSNR of 25.39, but not in terms of color difference. The SSIM and PSNR results were supported by the subjective assessment with 65 respondents, most of whom preferred the images restored by our architecture.
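The training objective described here, L1 plus a perceptual loss, is commonly implemented by comparing VGG feature activations of the network output and the ground truth. The PyTorch sketch below shows that combination under an arbitrary VGG-16 layer cut and perceptual weight; it is not the exact PDR-Net-based training code.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class L1PerceptualLoss(nn.Module):
    """total = L1(output, target) + weight * L1(vgg_feat(output), vgg_feat(target))"""

    def __init__(self, perceptual_weight=0.04, feature_layer=16):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features
        # Frozen VGG-16 slice used only to compare mid-level features;
        # the cut point and weight here are illustrative choices.
        self.feature_extractor = vgg[:feature_layer].eval()
        for p in self.feature_extractor.parameters():
            p.requires_grad = False
        self.l1 = nn.L1Loss()
        self.weight = perceptual_weight

    def forward(self, output, target):
        pixel_loss = self.l1(output, target)
        perceptual_loss = self.l1(self.feature_extractor(output),
                                  self.feature_extractor(target))
        return pixel_loss + self.weight * perceptual_loss
```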