Found 2 Documents
Pemanfaatan Blockchain sebagai Instrumen Transparansi Pemilu dalam Hukum Tata Negara Indonesia Zakiri, Hasani
Journal Evidence Of Law Vol. 4 No. 2 (2025): Journal Evidence Of Law (Agustus)
Publisher : CV. Era Digital Nusantara

DOI: 10.59066/jel.v4i2.1514

Abstract

This study analyzes the potential of blockchain technology as an instrument for improving electoral transparency within the framework of Indonesian constitutional law through a conceptual-normative approach. The aim of the research is to evaluate the compatibility of blockchain with the constitutional LUBER JURDIL principles and to design an implementation model suited to the Indonesian context. The research method combines a conceptual approach with normative-juridical analysis through a comprehensive literature review of constitutional law, electoral regulations, and comparative studies of international blockchain voting implementations. The novelty of the study lies in providing the first comprehensive legal-technical framework for blockchain implementation in the specific context of Indonesian constitutional law, with a quantitative compatibility analysis against the LUBER JURDIL principles and a proposed pragmatic hybrid implementation model. The results show that blockchain technology is highly compatible with Indonesia's constitutional LUBER JURDIL principles: langsung (direct, 95%), umum (universal, 85%), bebas (free, 88%), jujur (honest, 97%), and adil (fair, 93%); however, the rahasia (secrecy) principle faces significant challenges (52%) that require advanced cryptographic solutions. A three-layer hybrid implementation model (physical interface, digital processing, public verification) is the optimal approach, requiring comprehensive adaptation of the legal framework at the constitutional, legislative, and regulatory levels. The study concludes that blockchain can become an effective instrument of electoral transparency in Indonesia through a phased ten-year implementation (2025-2035) focused on developing digital sovereignty and mitigating cybersecurity risks.
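The per-principle compatibility scores reported in the abstract can be tabulated as data. A minimal sketch follows; the equal-weight aggregate is an illustrative assumption, since the abstract does not state how (or whether) the individual scores are combined.

```python
# Per-principle compatibility scores (%) with the LUBER JURDIL principles,
# as reported in the abstract.
luber_jurdil = {
    "langsung (direct)": 95,
    "umum (universal)": 85,
    "bebas (free)": 88,
    "rahasia (secret)": 52,   # the main open challenge per the study
    "jujur (honest)": 97,
    "adil (fair)": 93,
}

# Equal-weight aggregate: an assumption for illustration, not the paper's metric.
overall = sum(luber_jurdil.values()) / len(luber_jurdil)

# The lowest-scoring principle marks where advanced cryptography is needed.
weakest = min(luber_jurdil, key=luber_jurdil.get)
```

The aggregate lands at 85%, while the minimum immediately surfaces the secrecy principle as the bottleneck the study flags.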
Efektivitas Pelatihan Awal Berbasis Domain Spesifik Legal-BERT Untuk Natural Language Processing Hukum: Replikasi Dan Perluasan Studi Casehold Zakiri, Hasani; Alva Hendi Muhammad; Asro Nasiri
Journal of Informatics, Electrical and Electronics Engineering Vol. 5 No. 1 (2025): September 2025
Publisher : Forum Kerjasama Pendidikan Tinggi (FKPT)

DOI: 10.47065/jieee.v5i1.2610

Abstract

The emergence of domain-specific language models has demonstrated significant potential across various specialized fields. However, their effectiveness in legal natural language processing (NLP) remains underexplored, particularly given the unique challenges posed by legal text complexity and specialized terminology. Legal NLP has practical applications such as automated legal precedent search and court decision analysis that can accelerate legal research from weeks to hours. This study evaluates the CaseHOLD dataset to provide comprehensive empirical validation of domain-specific pretraining benefits for legal NLP tasks, with a focus on data efficiency and context complexity analysis. We conducted systematic experiments using the CaseHOLD dataset containing 53,000 legal multiple-choice questions. We compared four models: BiLSTM, BERT-base, Legal-BERT, and RoBERTa across varying data volumes (1%, 10%, 50%, 100%) and context complexity levels. Paired t-tests with 10-fold cross-validation and Bonferroni correction ensure a robust methodology and reliable findings. Legal-BERT achieved the highest macro-F1 score of 69.5% (95% CI: [68.0, 71.0]), demonstrating a statistically significant improvement of 7.2 percentage points over BERT-base (62.3%, p < 0.001, Cohen's d = 1.23). RoBERTa showed competitive performance at 68.9%, nearly matching Legal-BERT. The most substantial improvements occurred under limited data conditions, with a 16.6% improvement at 1% training data. Context complexity analysis revealed an inverted-U pattern with optimal performance on 41-60 word texts. The introduced Domain Specificity Score (DS-score) showed strong positive correlation (r = 0.73, p < 0.001) with pretraining effectiveness, explaining 53.3% of performance improvement variance.
These findings provide empirical evidence that domain-specific pretraining offers significant advantages for legal NLP tasks, particularly under data-constrained conditions and moderate-high context complexity. The key distinction of this research is the development of a predictive DS-score framework enabling benefit estimation before implementation, unlike previous studies that only evaluated post-hoc performance. The results have practical implications for developing legal NLP systems in resource-limited environments and provide optimal implementation guidance for Legal-BERT.
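The statistical machinery the abstract describes (paired comparisons over shared cross-validation folds, a Bonferroni-adjusted p-value, Cohen's d for effect size, and r² for variance explained) can be sketched as below. The fold scores are illustrative placeholders, not the paper's data; only the reported means (69.5% vs 62.3%) and the r = 0.73 relationship come from the abstract.

```python
from math import sqrt
from statistics import mean, stdev

# Illustrative per-fold macro-F1 scores (NOT the paper's data) for a paired
# comparison of two models evaluated on the same 10 cross-validation folds.
legal_bert = [0.700, 0.688, 0.695, 0.702, 0.691, 0.699, 0.693, 0.697, 0.690, 0.695]
bert_base  = [0.628, 0.615, 0.622, 0.630, 0.618, 0.626, 0.620, 0.624, 0.617, 0.650]

diffs = [a - b for a, b in zip(legal_bert, bert_base)]

# Paired t statistic: mean difference divided by its standard error.
t_stat = mean(diffs) / (stdev(diffs) / sqrt(len(diffs)))

# Cohen's d for paired samples: mean difference over the SD of the differences.
cohens_d = mean(diffs) / stdev(diffs)

def bonferroni(p_value: float, n_comparisons: int) -> float:
    """Bonferroni correction: scale the raw p-value by the number of tests."""
    return min(1.0, p_value * n_comparisons)

# With 4 models there are 6 pairwise comparisons to correct for.
adjusted_p = bonferroni(0.005, 6)

# The DS-score correlation r = 0.73 explains r^2 of the variance (~53.3%).
variance_explained = 0.73 ** 2
```

The r² step shows where the abstract's "53.3% of performance improvement variance" figure comes from: 0.73² ≈ 0.533.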