Articles

Found 5 Documents

Analisis Faktor-Faktor Yang Mempengaruhi Ketepatan Waktu Pelaporan Keuangan Pada Perusahaan Manufaktur Yang Terdaftar Di Bursa Efek Indonesia
Arniman Zebua; Selfie Gultom; Yohannes
Jurnal Akuntansi Bisnis Eka Prasetya : Penelitian Ilmu Akuntansi Vol 6 No 1 (2020): Edisi Maret
Publisher : lppm.eka-prasetya.ac.id


Abstract

This study aims to find empirical evidence on the factors that affect the timeliness of financial reporting by manufacturing companies listed on the Indonesia Stock Exchange. The factors tested in this study are the debt to equity ratio and profitability. The population of this study consists of 168 manufacturing companies consistently listed on the Indonesia Stock Exchange over the 2015-2017 period, selected using purposive sampling. These factors were then tested using logistic regression at a 5 percent significance level. The results indicate that neither the debt to equity ratio nor profitability affects the timeliness of financial reporting by manufacturing companies listed on the Indonesia Stock Exchange.
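The abstract's method, logistic regression with debt to equity ratio and profitability as predictors of reporting timeliness, can be sketched in plain Python. The toy data, learning rate, and helper names below are illustrative, not the study's actual model or sample.

```python
import math

def fit_logit(X, y, lr=0.1, epochs=2000):
    """Fit a two-predictor logistic regression by stochastic gradient descent.

    X: list of [debt_to_equity, profitability] rows; y: 1 = timely, 0 = late.
    Returns [intercept, coef_der, coef_profitability].
    """
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = w[0] + w[1] * xi[0] + w[2] * xi[1]
            p = 1.0 / (1.0 + math.exp(-z))   # predicted P(timely)
            err = yi - p                     # gradient of the log-likelihood
            w[0] += lr * err
            w[1] += lr * err * xi[0]
            w[2] += lr * err * xi[1]
    return w

def predict(w, xi):
    """Predicted probability that a firm reports on time."""
    return 1.0 / (1.0 + math.exp(-(w[0] + w[1] * xi[0] + w[2] * xi[1])))

# Hypothetical firms: [DER, profitability] with timely (1) / late (0) labels.
X = [[0.5, 0.30], [0.4, 0.25], [2.0, 0.01], [1.8, 0.02]]
y = [1, 1, 0, 0]
w = fit_logit(X, y)
```

In the actual study the fitted coefficients would be tested at the 5 percent level (e.g. via Wald statistics); this sketch only shows the estimation step.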
Residual-Gated Attention U-Net with Channel Recalibration for Polyp Segmentation in Colonoscopy Images
Tanuwijaya, William; Yohannes
INOVTEK Polbeng - Seri Informatika Vol. 10 No. 3 (2025): November
Publisher : P3M Politeknik Negeri Bengkalis

DOI: 10.35314/4qmfa987

Abstract

This study proposes a modification to the Attention U-Net architecture that integrates a Residual-Gated mechanism and Squeeze-and-Excitation (SE) block-based channel recalibration within the Attention Gate to enhance feature selectivity in polyp segmentation. This integration reinforces both spatial and channel attention, enabling the model to better highlight polyp regions while suppressing irrelevant background features. Experiments were conducted on three colonoscopy datasets, CVC-ClinicDB, CVC-ColonDB, and CVC-300, using the IoU and DSC metrics. Compared to the Attention U-Net baseline, the proposed model achieves noticeable improvements: gains of 0.0043 mIoU and 0.0094 mDSC on CVC-ClinicDB, 0.0012 mIoU on CVC-ColonDB, and a larger margin of 0.0224 mIoU and 0.0127 mDSC on CVC-300. The best results were obtained on CVC-ClinicDB (mIoU 0.8889, mDSC 0.9412). Although the absolute scores on CVC-ClinicDB and CVC-ColonDB are lower than those reported in several recent studies, these datasets exhibit greater variability in polyp size, boundary ambiguity, and illumination, which makes segmentation more challenging. Visual evaluation further shows smoother and more coherent boundaries, especially on small or low-contrast polyps. Overall, integrating the residual-gated mechanism and SE block within the attention gate effectively improves model accuracy and generalization, particularly in challenging scenarios.
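The SE-block channel recalibration the abstract describes (squeeze by global average pooling, excitation through a bottleneck, sigmoid gating of the channels) can be sketched in plain Python. The tiny feature map and hand-picked weights are illustrative, not the paper's trained parameters.

```python
import math

def se_recalibrate(fmap, w1, w2):
    """Squeeze-and-Excitation channel recalibration (minimal sketch).

    fmap: list of C channels, each an HxW list of lists.
    w1:   (C//r x C) reduction weights; w2: (C x C//r) expansion weights.
    Returns the rescaled feature map and the per-channel gates.
    """
    # Squeeze: global average pool each channel to one descriptor.
    z = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in fmap]
    # Excitation: bottleneck FC -> ReLU -> FC -> sigmoid.
    hidden = [max(0.0, sum(w * zc for w, zc in zip(row, z))) for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
             for row in w2]
    # Scale: reweight every spatial value of a channel by its learned gate.
    out = [[[v * g for v in row] for row in ch] for ch, g in zip(fmap, gates)]
    return out, gates

# Two 2x2 channels; the excitation weights favour channel 0 over channel 1.
fmap = [[[1.0, 1.0], [1.0, 1.0]], [[2.0, 2.0], [2.0, 2.0]]]
out, gates = se_recalibrate(fmap, w1=[[0.5, 0.5]], w2=[[1.0], [-1.0]])
```

In the paper this recalibration sits inside the attention gate, so the gates modulate the skip-connection features before they are fused with the decoder path.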
Evaluation of MobileNet-Based Deep Features for Yogyakarta Traditional Batik Motif Classification
Muhdhor, Umar; Yohannes
Sinkron : jurnal dan penelitian teknik informatika Vol. 10 No. 1 (2026): Article Research January 2026
Publisher : Politeknik Ganesha Medan

DOI: 10.33395/sinkron.v10i1.15668

Abstract

Batik is an Indonesian intangible cultural heritage that embodies profound philosophical, aesthetic, and cultural values. Yogyakarta batik motifs, such as Parang, Kawung, and Truntum, reflect Javanese wisdom and identity through distinctive geometric and floral patterns. In the digital era, artificial-intelligence-based image processing offers a promising approach to the preservation and automatic recognition of traditional batik motifs. The objective of this study is to evaluate the effectiveness of MobileNet-based feature extraction combined with Support Vector Machine (SVM) classification for Yogyakarta batik motif recognition. The proposed method employs MobileNet as a convolutional feature extractor and an SVM as the decision model that separates motif classes in the feature space. Experiments were conducted on 685 batik images spanning three motif classes, with class imbalance handled using the Synthetic Minority Over-sampling Technique (SMOTE). Model performance was evaluated using weighted accuracy, precision, recall, and F1-score under five-fold cross-validation. The results show that MobileNetV3Large achieved the best performance with a weighted accuracy of 98.36%, followed by MobileNetV3Small and MobileNetV4Small. Statistical significance tests using the Friedman test and Wilcoxon signed-rank analysis confirm that the performance differences among the evaluated models are statistically significant. These findings indicate that MobileNetV3 architectures provide robust and discriminative feature representations for batik motif classification on limited yet structured datasets. This study contributes a validated MobileNet–SVM framework for batik recognition and supports ongoing efforts in the digital preservation of Indonesia's cultural heritage. Future work will explore larger motif sets and cross-dataset evaluation to further improve generalization performance.
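The SMOTE step the abstract mentions synthesizes new minority-class points by interpolating between a minority sample and one of its nearest neighbours. A minimal sketch in plain Python; the sample data, `k`, and function names are illustrative, not the study's implementation (which would operate on MobileNet feature vectors).

```python
import math
import random

def smote(minority, n_new, k=2, seed=0):
    """Minimal SMOTE sketch: create n_new synthetic minority samples.

    minority: list of feature vectors (lists of floats) from the minority class.
    Each synthetic point lies on the segment between a random minority sample
    and one of its k nearest neighbours.
    """
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        # k nearest neighbours of base within the minority class (excluding itself).
        neighbours = sorted(
            (p for p in minority if p is not base),
            key=lambda p: math.dist(base, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([b + gap * (n - b) for b, n in zip(base, nb)])
    return synthetic

# Four hypothetical minority-class feature vectors, balanced up by five points.
minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
extra = smote(minority, n_new=5, seed=1)
```

Because every synthetic point is a convex combination of two minority samples, it stays inside the minority class's bounding box, which is what keeps oversampling from drifting into other classes.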
Feature-Level Fusion of DenseNet121 and EfficientNetV2 with XGBoost for Multi-Class Retinal Classification
Laksana, Jovansa Putra; Yohannes
Sinkron : jurnal dan penelitian teknik informatika Vol. 10 No. 1 (2026): Article Research January 2026
Publisher : Politeknik Ganesha Medan

DOI: 10.33395/sinkron.v10i1.15670

Abstract

Accurate and efficient classification of retinal fundus images plays a critical role in supporting the early diagnosis of ocular diseases. However, models relying on a single deep learning backbone often struggle to capture the multi-scale and heterogeneous characteristics of retinal lesions, leading to unstable performance across visually similar disease classes. To address this limitation, this study proposes a novel feature-level fusion framework that integrates complementary representations from DenseNet121 and EfficientNetV2-s, followed by classification with XGBoost. The fusion pipeline extracts 1024-dimensional features from DenseNet121 and 1280-dimensional features from EfficientNetV2-s, which are concatenated into a unified 2304-dimensional feature vector. Experiments were conducted on a dataset of 10,247 retinal fundus images spanning six categories: Central Serous Chorioretinopathy, Diabetic Retinopathy, Macular Scar, Retinitis Pigmentosa, Retinal Detachment, and Healthy. The proposed fusion model achieved an accuracy of 91.60%, outperforming DenseNet121 with XGBoost (91.31%) and EfficientNetV2-s with XGBoost (89.70%). Moreover, the fusion strategy demonstrated improved class-level stability, particularly for visually similar retinal disorders where single-backbone models exhibited higher misclassification rates. This study contributes a lightweight yet effective multi-backbone feature-level fusion approach that enhances discriminative representation and classification stability without increasing model complexity. In addition, XGBoost introduces a tree-based decision mechanism that is inherently more interpretable than conventional fully connected layers, offering potential advantages for clinical analysis. Overall, the results highlight the effectiveness of multi-backbone feature fusion as a reliable strategy for automated retinal disease classification.
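At its core, the feature-level fusion the abstract describes is concatenation of the two backbone embeddings before classification. A minimal sketch, assuming plain lists stand in for the 1024-dimensional DenseNet121 and 1280-dimensional EfficientNetV2-s feature vectors; the XGBoost stage is omitted and the function names are illustrative.

```python
def fuse_features(dense_feats, effnet_feats):
    """Feature-level fusion: concatenate per-image backbone embeddings
    into one vector for the downstream classifier (here, XGBoost)."""
    return list(dense_feats) + list(effnet_feats)

def fuse_batch(dense_batch, effnet_batch):
    """Fuse aligned batches of embeddings image by image."""
    assert len(dense_batch) == len(effnet_batch), "batches must be aligned"
    return [fuse_features(d, e) for d, e in zip(dense_batch, effnet_batch)]

# Two hypothetical images: 1024-d DenseNet121 and 1280-d EfficientNetV2-s vectors.
dense_batch = [[0.0] * 1024, [0.0] * 1024]
effnet_batch = [[0.0] * 1280, [0.0] * 1280]
fused = fuse_batch(dense_batch, effnet_batch)  # each row is 2304-dimensional
```

The fused 2304-dimensional rows would then be the training matrix for the XGBoost classifier; concatenation adds no trainable parameters, which is why the abstract can claim fusion without increased model complexity.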
Performance Analysis of YOLOv11 Integrated with Lightweight Backbones (MobileNetV2, GhostNet, ShuffleNet V2) for Cigarette Detection
Andreas, Kevin; Yohannes; Meiriyama
INOVTEK Polbeng - Seri Informatika Vol. 11 No. 1 (2026): February
Publisher : P3M Politeknik Negeri Bengkalis

DOI: 10.35314/0gjq1j10

Abstract

Cigarette object detection in indoor environments plays a vital role in enforcing smoke-free zone regulations and protecting public health from secondhand smoke exposure. This study investigates the performance of the YOLOv11n architecture integrated with three lightweight backbone modifications (MobileNetV2, GhostNet, and ShuffleNet V2) for real-time cigarette detection, aiming for efficiency suitable for deployment on resource-constrained edge devices. Comprehensive experiments were conducted using the Cigar Detection Dataset comprising 5,333 images, augmented to 8,890 samples through horizontal flipping and brightness adjustment. All models were trained for 100 epochs with the SGD optimizer on an NVIDIA Tesla T4 GPU. The evaluation metrics covered detection accuracy (mAP@0.5, mAP@0.5:0.95, precision, recall, and F1-score) and computational efficiency (parameters, model size, GFLOPs, and FPS). Experimental results demonstrate that the pretrained YOLOv11n baseline achieves the highest detection accuracy, with an mAP@0.5 of 0.8072 and precision of 0.8688. Among the lightweight backbone variants, ShuffleNet V2 (0.5x) provides the most compact solution with only 2.28M parameters and a 4.73 MB model size, while ShuffleNet V2 (0.75x) offers an optimal balance between accuracy (mAP@0.5: 0.7430) and efficiency, with only 0.95% accuracy degradation compared to the 1.0x variant. These findings provide practical guidance for selecting model configurations based on deployment constraints in smoke-free area monitoring systems.
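The mAP@0.5 criterion used throughout these results rests on box IoU: a detection counts as a true positive when its overlap with a ground-truth box reaches 0.5. A minimal sketch, assuming corner-format boxes; this is illustrative, not the YOLOv11 evaluation code.

```python
def box_iou(a, b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])  # intersection corners
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def is_true_positive(pred, gt, thr=0.5):
    """The mAP@0.5 matching rule: correct when IoU >= 0.5."""
    return box_iou(pred, gt) >= thr
```

mAP@0.5:0.95 applies the same matching rule while sweeping the threshold from 0.5 to 0.95 in steps of 0.05 and averaging, which is why it is consistently the stricter of the two metrics.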