Contact Name
Furizal
Contact Email
sjer.editor@gmail.com
Phone
+6282386092684
Journal Mail Official
sjer.editor@gmail.com
Editorial Address
Jl. Poros Seroja, Kesra, Kepenuhan Barat Sei Rokan Jaya, Kec. Kepenuhan, Kab. Rokan Hulu, Riau
Location
Kab. Rokan Hulu,
Riau
INDONESIA
Scientific Journal of Engineering Research
ISSN : -     EISSN : 3109-1725     DOI : https://doi.org/10.64539/sjer
Core Subject : Engineering
The Scientific Journal of Engineering Research (SJER) is a peer-reviewed, open-access scientific journal managed and published by PT. Teknologi Futuristik Indonesia in collaboration with Universitas Qamarul Huda Badaruddin Bagu and Peneliti Teknologi Teknik Indonesia. The journal is committed to publishing high-quality articles in all fundamental and interdisciplinary areas of engineering, with particular emphasis on advances in Information Technology. It encourages submissions exploring emerging fields such as Machine Learning, the Internet of Things (IoT), Deep Learning, Artificial Intelligence (AI), Blockchain, and Big Data, which are at the forefront of innovation and engineering transformation. SJER welcomes original research articles, review papers, and studies involving simulation and practical applications that contribute to advancements in engineering, and encourages research that integrates these technologies across engineering disciplines. The scope of the journal includes, but is not limited to: Mechanical Engineering; Electrical Engineering; Electronic Engineering; Civil Engineering; Architectural Engineering; Chemical Engineering; Mechatronics and Robotics; Computer Engineering; Industrial Engineering; Environmental Engineering; Materials Engineering; Energy Engineering; and all fields related to engineering. By fostering innovation and bridging knowledge gaps, SJER aims to contribute to the development of sustainable and intelligent engineering systems for the modern era.
Articles: 24 Documents
Nano-modified Bitumen Enhancing Properties with Nanomaterials
Alam, Rafi Shahriar; Maynul, Md Omar Farkuq; Hossain, Sazib
Scientific Journal of Engineering Research Vol. 1 No. 2 (2025): April
Publisher : PT. Teknologi Futuristik Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.64539/sjer.v1i2.2025.26

Abstract

In the modern era, pollution from plastic waste has become a growing concern, particularly due to the widespread use of plastic products such as bottles. This research explores a novel approach for recycling plastic waste by incorporating plastic bottles into bitumen for the enhancement of Hot Mix Asphalt (HMA). Alongside this, nano-materials such as carbon nanotubes (CNTs), graphene, nanosilica, and nanoclays were used to further improve the mechanical, rheological, and durability properties of the modified bitumen. The plastic waste, in the form of plastic bottles, was added in varying proportions (3%, 5%, 8%, 10%, and 12% by weight of total mix) to investigate its effect on bitumen’s performance. The study conducted a series of tests, including Dynamic Shear Rheometer (DSR), Rotational Viscosity, Penetration Test, Softening Point Test, and Scanning Electron Microscopy (SEM), to evaluate the rheological and mechanical properties. The results revealed that the incorporation of plastic waste significantly improved the bitumen’s resistance to rutting, cracking, and fatigue, while nano-additives further enhanced high-temperature stability and elastic recovery. As the percentage of plastic waste in the bitumen increased, improvements in resistance to aging and moisture susceptibility were observed. Additionally, the plastic-modified bitumen exhibited better stability, improved resilience to temperature fluctuations, and enhanced mechanical strength. These findings suggest that combining plastic waste and nano-materials in bitumen can contribute to more sustainable road infrastructure, reducing plastic pollution while improving the performance and longevity of asphalt pavements.
Effectiveness of Fourier, Wiener, Bilateral, and CLAHE Denoising Methods for CT Scan Image Noise Reduction
Kobra, Mst Jannatul; Nakib, Arman Mohammad; Mweetwa, Peter; Rahman, Md Owahedur
Scientific Journal of Engineering Research Vol. 1 No. 3 (2025): July
Publisher : PT. Teknologi Futuristik Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.64539/sjer.v1i3.2025.27

Abstract

Effective noise reduction in CT scan images remains crucial for accurate diagnosis and sound clinical decisions. This research quantitatively evaluates four popular noise reduction methods: Fourier-based denoising, Wiener filtering, bilateral filtering, and Contrast Limited Adaptive Histogram Equalization (CLAHE), applied to more than 500 CT scan images. The methods were assessed using Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM), with Mean Squared Error (MSE) as an additional metric. Bilateral filtering emerged as the best technique, with a PSNR of 50.37 dB, an SSIM of 0.9940, and an MSE of 0.5967. Fourier-based denoising succeeded in removing high-frequency noise, but its PSNR of 25.89 dB, SSIM of 0.8138, and MSE of 167.4976 indicate the loss of crucial image information. Wiener filtering struck a balance, with a PSNR of 40.87 dB, an SSIM of 0.9809, and an MSE of 5.3270, outperforming Fourier denoising but trailing bilateral filtering. CLAHE produced the poorest denoising outcomes, with the lowest PSNR (21.51 dB), an SSIM of 0.5707, and the highest MSE (459.1894), while introducing undesirable artifacts. This study stands out for its full evaluation of four denoising techniques on a large dataset, enabling a more precise analysis than prior research. The results show bilateral filtering to be the most reliable technique for CT scan noise reduction while preserving image quality, making it a suitable choice for clinical use. This work contributes new insights to medical image quality enhancement that directly benefit clinical diagnostics and therapeutic planning.
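The PSNR and MSE figures quoted above follow directly from their standard definitions. As a minimal illustration (not the paper's code, which also computes SSIM, typically via a library such as scikit-image), a NumPy sketch on a synthetic image:

```python
import numpy as np

def mse(ref, test):
    """Mean squared error between two images (as float arrays)."""
    return float(np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2))

def psnr(ref, test, max_val=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    e = mse(ref, test)
    if e == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / e)

# Toy check: a noisy copy of an image scores a finite PSNR,
# while an identical copy has zero MSE and infinite PSNR.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
noisy = np.clip(img + rng.normal(0, 5, img.shape), 0, 255)
print(round(psnr(img, noisy), 2))  # roughly 34 dB for sigma = 5 Gaussian noise
```

Higher PSNR and lower MSE indicate less distortion, which is why bilateral filtering's 50.37 dB / 0.5967 MSE dominates the comparison above.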
A Thirdweb-Based Smart Contract Framework for Secure Sharing of Human Genetic Data on the Ethereum Blockchain
Famuji, Tri Stiyo; Grancho, Bernadine; Fanani, Galih Pramuja Inggam; Talirongan, Hidear; Sumantri, Raden Bagus Bambang
Scientific Journal of Engineering Research Vol. 1 No. 3 (2025): July
Publisher : PT. Teknologi Futuristik Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.64539/sjer.v1i3.2025.30

Abstract

Human genetic data, crucial for advancing personalized medicine, requires secure and privacy-preserving management solutions. Traditional approaches face challenges in scalability, security, and decentralized access control. This study proposes a blockchain-based framework leveraging Thirdweb and Ethereum smart contracts to address these issues. The framework integrates decentralized storage via IPFS for cost-efficient off-chain genetic data storage, while on-chain smart contracts manage access control, encryption, and audit trails. Utilizing Solidity for smart contract development, the system ensures role-based permissions, wallet-based authentication, and immutable transaction logging. Genetic data in FASTA format, sourced from NCBI, is encrypted and linked to IPFS hashes stored on the blockchain. The architecture supports dual interfaces—command-line for developers and a Thirdweb dashboard for end-users—enabling secure data upload, access, and monitoring. Testing demonstrated functional efficacy in data integrity, access verification, and audit capabilities. Results highlight the system’s ability to enhance privacy, eliminate intermediaries, and provide transparent data governance. The integration of Thirdweb further decentralizes operations, aligning with Web 3.0 principles. Key contributions include a scalable model for genetic data sharing, a customizable smart contract template, and a user-centric design. Future work should explore advanced encryption, real-world healthcare integration, and performance optimization under high-throughput conditions. This research bridges biotechnology and blockchain, offering a robust foundation for secure genomic data ecosystems.
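The access-control flow described above (role-based permissions, hash-linked off-chain storage, and an append-only audit trail) can be sketched in plain Python. This is only an illustrative model with hypothetical names; the actual system is implemented as Solidity smart contracts and stores real IPFS CIDs rather than SHA-256 digests:

```python
import hashlib
import time

class GeneticDataRegistry:
    """Plain-Python sketch of the on-chain logic described above:
    role-based access to content hashes plus an append-only audit trail."""

    def __init__(self, owner):
        self.owner = owner
        self.records = {}      # record_id -> content hash (stand-in for an IPFS CID)
        self.permissions = {}  # record_id -> set of allowed wallet addresses
        self.audit_log = []    # append-only event list (immutable on a real chain)

    def upload(self, wallet, record_id, data: bytes):
        assert wallet == self.owner, "only the owner may upload"
        content_hash = hashlib.sha256(data).hexdigest()
        self.records[record_id] = content_hash
        self.permissions[record_id] = {self.owner}
        self.audit_log.append(("upload", wallet, record_id, time.time()))
        return content_hash

    def grant(self, wallet, record_id, grantee):
        assert wallet == self.owner, "only the owner may grant access"
        self.permissions[record_id].add(grantee)
        self.audit_log.append(("grant", grantee, record_id, time.time()))

    def fetch(self, wallet, record_id):
        if wallet not in self.permissions.get(record_id, set()):
            raise PermissionError("access denied")
        self.audit_log.append(("fetch", wallet, record_id, time.time()))
        return self.records[record_id]

reg = GeneticDataRegistry(owner="0xOwner")
h = reg.upload("0xOwner", "brca1", b">seq1\nATGCATGC")
reg.grant("0xOwner", "brca1", "0xLab")
print(reg.fetch("0xLab", "brca1") == h)  # True
```

On Ethereum the same checks run in contract modifiers, the audit trail is the transaction log, and the FASTA payload stays encrypted off-chain on IPFS.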
Post-Quantum Cryptography Review in Future Cybersecurity Strengthening Efforts
Mu'min, Muhammad Amirul; Safitri, Yana; Saputra, Sabarudin; Sulistianingsih, Nani; Ragimova, Nazila; Abdullayev, Vugar
Scientific Journal of Engineering Research Vol. 1 No. 3 (2025): July
Publisher : PT. Teknologi Futuristik Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.64539/sjer.v1i3.2025.35

Abstract

The development of quantum computing technology poses significant challenges to the conventional cryptographic systems currently in wide use for digital data security. Attacks made possible by quantum computers have the potential to weaken classical algorithms such as RSA and ECC, so a new approach is needed that can guarantee long-term security. This study systematically reviews the effectiveness and implementation readiness of post-quantum cryptography (PQC) algorithms, especially those recommended by NIST, in order to strengthen the resilience of future cybersecurity systems. The method used was a structured literature study with comparative analysis of lattice-based (Kyber and Dilithium), code-based (BIKE), and hash-based (SPHINCS+) PQC algorithms. Data were obtained from official documents of standards institutions as well as recent scientific publications. The analysis shows that lattice-based algorithms offer an optimal combination of security and efficiency and demonstrate high readiness for implementation on constrained devices. Compared to the other algorithms, Kyber and Dilithium have advantages in performance and scalability. This research thus contributes a mapping of the practical readiness of PQC algorithms that has not been widely covered in previous studies, and can serve as a basis for formulating future cryptographic adoption policies. These findings are expected to support the transition toward cryptographic systems that are resilient to quantum threats.
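SPHINCS+ itself is intricate, but the hash-based principle it builds on can be shown with a Lamport one-time signature, its simplest ancestor: security rests only on hash preimage resistance, which quantum computers do not break the way Shor's algorithm breaks RSA and ECC. A minimal stdlib sketch (illustrative only, and strictly one-time: a key pair must never sign two messages):

```python
import hashlib
import secrets

def keygen():
    # 256 pairs of random secrets; the public key is their hashes.
    sk = [[secrets.token_bytes(32) for _ in range(2)] for _ in range(256)]
    pk = [[hashlib.sha256(s).digest() for s in pair] for pair in sk]
    return sk, pk

def _bits(msg: bytes):
    digest = hashlib.sha256(msg).digest()
    return [(digest[i // 8] >> (7 - i % 8)) & 1 for i in range(256)]

def sign(msg: bytes, sk):
    # Reveal one secret per message bit (half of the secret key).
    return [sk[i][b] for i, b in enumerate(_bits(msg))]

def verify(msg: bytes, sig, pk):
    # Hash each revealed secret and compare against the public key.
    return all(hashlib.sha256(s).digest() == pk[i][b]
               for i, (s, b) in enumerate(zip(sig, _bits(msg))))

sk, pk = keygen()
sig = sign(b"quantum-safe hello", sk)
print(verify(b"quantum-safe hello", sig, pk))  # True
print(verify(b"tampered message", sig, pk))    # False
```

Each signature reveals half of the secret key, which is why production schemes such as SPHINCS+ layer many one-time keys under hash trees to allow many signatures per public key.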
Robust Positive-Unlabeled Learning via Bounded Loss Functions under Label Noise
Awasthi, Lalit; Danso, Eric
Scientific Journal of Engineering Research Vol. 1 No. 3 (2025): July
Publisher : PT. Teknologi Futuristik Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.64539/sjer.v1i3.2025.314

Abstract

Positive-Unlabeled (PU) learning has become a pivotal tool in scenarios where only positive samples are labeled, and negative labels are unavailable. However, in practical applications, the labeled positive data often contains noise such as mislabeled or outlier instances that can severely degrade model performance. This issue is exacerbated by the use of traditional surrogate loss functions, many of which are unbounded and overly sensitive to mislabeled examples. To address this limitation, we propose a robust PU learning framework that integrates bounded loss functions, including ramp loss and truncated logistic loss, into the non-negative risk estimation paradigm. Unlike conventional loss formulations that allow noisy samples to disproportionately influence training, our approach caps each instance’s contribution, thereby reducing the sensitivity to label noise. We mathematically reformulate the PU risk estimator using bounded surrogates and demonstrate that this formulation maintains risk consistency while offering improved noise tolerance. A detailed framework diagram and algorithmic description are provided, along with theoretical analysis that bounds the influence of corrupted labels. Extensive experiments are conducted on both synthetic and real-world datasets under varying noise levels. Our method consistently outperforms baseline models such as unbiased PU (uPU) and non-negative PU (nnPU) in terms of classification accuracy, area under the receiver operating characteristic curve (ROC AUC), and precision-recall area under the curve (PR AUC). The ramp loss variant exhibits particularly strong robustness without sacrificing optimization efficiency. These results demonstrate that incorporating bounded losses is a principled and effective strategy for enhancing the reliability of PU learning in noisy environments.
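The bounded-loss risk estimator described above can be sketched as follows. This is a hedged illustration of an nnPU-style objective with one common form of ramp loss applied to raw classifier scores, not the authors' implementation, and the score distributions are synthetic:

```python
import numpy as np

def ramp_loss(z):
    """Bounded surrogate: 0 when the margin z >= 1, 1 when z <= 0, linear between."""
    return np.clip(1.0 - z, 0.0, 1.0)

def nn_pu_risk(scores_p, scores_u, prior):
    """Non-negative PU risk estimate with a bounded loss.
    scores_p: classifier scores g(x) on labeled positives
    scores_u: scores on unlabeled samples
    prior:    assumed class prior pi = P(y = +1)
    """
    r_p_pos = np.mean(ramp_loss(scores_p))    # positives treated as label +1
    r_p_neg = np.mean(ramp_loss(-scores_p))   # positives treated as label -1
    r_u_neg = np.mean(ramp_loss(-scores_u))   # unlabeled treated as label -1
    # The max(0, .) guard keeps the estimated negative-class risk non-negative.
    return prior * r_p_pos + max(0.0, r_u_neg - prior * r_p_neg)

rng = np.random.default_rng(1)
scores_p = rng.normal(1.0, 0.5, 100)    # positives should score high
scores_u = rng.normal(-0.2, 1.0, 400)   # unlabeled: a mixture of both classes
risk = nn_pu_risk(scores_p, scores_u, prior=0.3)
print(round(float(risk), 4))
```

Because the loss is capped at 1, a single mislabeled positive shifts the estimated risk by only a bounded amount, which is the noise-robustness property the paper exploits.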
Hybrid K-means, Random Forest, and Simulated Annealing for Optimizing Underwater Image Segmentation
Kobra, Mst Jannatul; Rahman, Md Owahedur; Nakib, Arman Mohammad
Scientific Journal of Engineering Research Vol. 1 No. 4 (2025): October Article in Process
Publisher : PT. Teknologi Futuristik Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.64539/sjer.v1i4.2025.46

Abstract

Underwater image segmentation is especially difficult because the data collected by underwater sensors and cameras is highly complex and voluminous, with poor visibility, distorted color, and overlapping features. Current solutions, including K-means clustering and Random Forest classification, cannot partition complex underwater images with high accuracy or scale to large datasets, and the possibility of dynamically optimizing the number of clusters has not been fully explored. To fill these gaps, this paper proposes a hybrid solution that combines K-means clustering, Random Forest classification, and Simulated Annealing optimization into a complete end-to-end system that maximizes segmentation efficiency and accuracy. K-means clustering first partitions images by pixel intensity, Random Forest refines the segmentation using features such as texture, color, and shape, and Simulated Annealing dynamically determines the number of clusters that minimizes segmentation error. The proposed method achieved 95% segmentation accuracy, 30 percentage points higher than the 65% baseline K-means accuracy, with an optimal cluster number of 10 and a mean error of 7839.22. This hybrid system offers a robust, scalable approach to underwater image processing with applications in marine biology, environmental research, and autonomous underwater exploration.
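As a toy illustration of the third component, the sketch below runs simulated annealing over the cluster count for a 1-D k-means. The penalized objective, cooling schedule, and all constants are hypothetical choices for this sketch, not the paper's (which clusters real image pixels on intensity, texture, color, and shape):

```python
import numpy as np

rng = np.random.default_rng(2)

def kmeans_inertia(x, k, iters=20):
    """Plain Lloyd's k-means on 1-D data; returns within-cluster squared error."""
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() if np.any(labels == j)
                            else centers[j] for j in range(k)])
    labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
    return float(np.sum((x - centers[labels]) ** 2))

def anneal_k(x, k_min=2, k_max=12, steps=60, t0=1.0, penalty=10.0):
    """Simulated annealing over the cluster count k.
    Objective = inertia + penalty * k, so extra clusters must pay for
    themselves (the penalty weight is a hypothetical choice)."""
    k = int(rng.integers(k_min, k_max + 1))
    cost = kmeans_inertia(x, k) + penalty * k
    best_k, best_cost = k, cost
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9              # linear cooling
        k_new = int(np.clip(k + rng.choice([-1, 1]), k_min, k_max))
        c_new = kmeans_inertia(x, k_new) + penalty * k_new
        # Accept improvements always; accept worse moves with a probability
        # that shrinks as the temperature cools.
        if c_new < cost or rng.random() < np.exp((cost - c_new) / t):
            k, cost = k_new, c_new
            if cost < best_cost:
                best_k, best_cost = k, cost
    return best_k

# Toy data: three well-separated 1-D "intensity" clusters.
x = np.concatenate([rng.normal(m, 0.3, 100) for m in (0.0, 5.0, 10.0)])
print(anneal_k(x))
```

A penalty on k is needed because raw within-cluster error always decreases as clusters are added; the annealing temperature lets early moves escape poor local choices.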
Brine Treatment Plant using Hybrid Forward Osmosis–Membrane Distillation (FO–MD) System
Al-Rashidi, Aryam Qalit
Scientific Journal of Engineering Research Vol. 1 No. 4 (2025): October Article in Process
Publisher : PT. Teknologi Futuristik Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.64539/sjer.v1i4.2025.313

Abstract

Brine discharge from seawater reverse osmosis (SWRO) plants poses critical environmental and operational challenges, particularly in regions reliant on large-scale desalination. This study proposes a hybrid brine treatment system integrating Forward Osmosis (FO) and Membrane Distillation (MD) to enhance water recovery and minimize ecological impact. The FO stage utilizes a concentrated magnesium chloride (MgCl₂) draw solution to extract water from high-salinity brine without the need for hydraulic pressure, while the MD stage regenerates the draw solution using low-grade solar thermal energy, simultaneously producing high-purity distillate. Mass and energy balance calculations were performed to evaluate recovery rates, specific energy consumption, and thermal input requirements. The results indicate that the FO–MD configuration can achieve recovery rates exceeding 80% with significantly reduced brine discharge, while maintaining low energy demand compared to conventional methods. The integration of solar energy further enhances system sustainability, making it suitable for deployment in off-grid or arid regions. This hybrid approach demonstrates strong potential for advancing sustainable desalination practices, aligning with circular water strategies and zero liquid discharge (ZLD) objectives.
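The recovery-rate arithmetic behind such a mass balance is simple enough to sketch. The feed flow and salinity below are hypothetical, chosen only to show how an 80% recovery concentrates the residual brine under an idealized full-salt-rejection assumption:

```python
def recovery(feed_m3h: float, permeate_m3h: float) -> float:
    """Water recovery ratio: permeate extracted over feed brine."""
    return permeate_m3h / feed_m3h

def brine_concentration(feed_m3h: float, feed_salinity_gL: float,
                        permeate_m3h: float) -> float:
    """Salt mass balance: salt stays in the reject stream, so its
    concentration rises as water is removed (assumes full salt
    rejection, an idealization)."""
    reject_m3h = feed_m3h - permeate_m3h
    return feed_salinity_gL * feed_m3h / reject_m3h

# Hypothetical SWRO brine: 100 m3/h at 70 g/L, treated at 80% recovery.
perm = 80.0
print(recovery(100.0, perm))                   # 0.8
print(brine_concentration(100.0, 70.0, perm))  # 350.0 g/L in the residual stream
```

At 80% recovery the residual stream carries all the salt in one fifth of the water, so its concentration rises fivefold; that concentrated stream is exactly what a zero liquid discharge strategy must then handle.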
GIS-Based Flood Risk Assessment Using the Analytical Hierarchy Process
Rakuasa, Heinrich; Rifai, Ahmat
Scientific Journal of Engineering Research Vol. 1 No. 4 (2025): October Article in Process
Publisher : PT. Teknologi Futuristik Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.64539/sjer.v1i4.2025.43

Abstract

Floods are a common hydrometeorological disaster in Teluk Ambon Sub-district; therefore, modeling is necessary as a mitigation measure. To address this challenge, Geographic Information System (GIS) and Remote Sensing technologies have proven to be powerful tools in flood disaster analysis and modeling. This study uses 10 variables: elevation, slope, TWI, NDVI, precipitation, land cover, soil type, drainage density, distance from roads, and distance from rivers, weighted using the Analytical Hierarchy Process (AHP) method. The results show that distance from rivers has the greatest contribution (14.08%) to flooding in Teluk Ambon Sub-district. The level of flood vulnerability in Teluk Ambon Sub-district is divided into three classes: low risk, covering an area of 8,642.26 ha or 64.71%; medium risk, covering 4,066.79 ha or 30.45%; and high risk, covering 646.44 ha or 4.84%. Settlements predicted to be affected by flooding cover 130.36 ha (11.59%) in the low class, 649.29 ha (57.73%) in the medium class, and 345.07 ha (30.68%) in the high class. These results are important for providing a more precise flood risk map to support spatial planning and disaster mitigation in the affected areas.
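The AHP weighting step can be sketched with a small pairwise comparison matrix. The three factors and judgments below are hypothetical (the study uses ten variables), shown only to illustrate how priority weights such as the 14.08% for distance from rivers are derived and checked for consistency:

```python
import numpy as np

def ahp_weights(pairwise):
    """Priority weights from a pairwise comparison matrix via the
    principal eigenvector, plus Saaty's consistency ratio (CR)."""
    a = np.asarray(pairwise, dtype=float)
    n = a.shape[0]
    eigvals, eigvecs = np.linalg.eig(a)
    i = int(np.argmax(eigvals.real))          # principal (Perron) eigenvalue
    w = np.abs(eigvecs[:, i].real)
    w /= w.sum()                              # normalize to sum to 1
    lam_max = eigvals[i].real
    ci = (lam_max - n) / (n - 1)              # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32}.get(n, 1.45)  # Saaty's RI
    return w, ci / ri

# Hypothetical 3-factor comparison (distance-to-river vs slope vs land cover):
m = [[1,     3,     5],
     [1 / 3, 1,     3],
     [1 / 5, 1 / 3, 1]]
w, cr = ahp_weights(m)
print(np.round(w, 3), round(float(cr), 3))
```

A consistency ratio below 0.10 is the conventional threshold for accepting the pairwise judgments; above it, the comparisons should be revised.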
High-RAP Asphalt Mixtures (>40%): Mechanical Performance, Durability, Sustainability, and Emerging Technologies
Abbas, Saifal
Scientific Journal of Engineering Research Vol. 1 No. 4 (2025): October Article in Process
Publisher : PT. Teknologi Futuristik Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.64539/sjer.v1i4.2025.321

Abstract

Asphalt mixtures that utilize Reclaimed Asphalt Pavement (RAP), particularly at high RAP levels above 40%, are gaining in popularity due to the emphasis on sustainable pavement solutions. This review paper comprehensively evaluates the performance of high RAP asphalt mixtures, focusing on their mechanical characteristics and durability when compared to standard asphalt mixtures. High RAP mixtures perform well in high-traffic scenarios because they have excellent stiffness and resistance to rutting. Still, they have performance limitations that could be remedied through rejuvenators, anti-stripping agents, and premium additives. Durability issues such as moisture susceptibility and long-term aging are investigated, along with the importance of binder blending and rejuvenation on the impacts of aging. The review highlights significant research areas like optimizing rejuvenator formulations, bio-based additives evaluation, and complete life cycle assessments (LCA) to examine the overall sustainability of high-RAP mixtures. The comparison with conventional mixtures highlights high-RAP mixtures' environmental and economic advantages, such as reduced greenhouse gas emissions, decreased energy use, and substantial cost savings. Despite these advantages, variability of RAP content and lack of standard testing are significant challenges. While still in its infancy, new technologies, such as warm-mix asphalt (WMA), and new characterization technologies, such as X-ray computed tomography (CT) and AI, promise to optimize mix design and forecast long-term performance. High-RAP mixes can transform sustainable pavement construction by alleviating these challenges and employing innovative technologies. This article will benefit researchers, engineers, and policymakers looking to facilitate the use of high-RAP mixes in new construction.
Comparative Analysis and Modeling of Single and Three Phase Inverters for Efficient Renewable Energy Integration
Emon, Asif Eakball; Molla, Sohan; Shawon, Md; Tabassum, Anika
Scientific Journal of Engineering Research Vol. 1 No. 4 (2025): October Article in Process
Publisher : PT. Teknologi Futuristik Indonesia

Show Abstract | Download Original | Original Source | Check in Google Scholar | DOI: 10.64539/sjer.v1i4.2025.325

Abstract

This work details the hands-on design, simulation, and direct performance comparison of single-phase and three-phase grid-connected photovoltaic (PV) inverters, fully implemented and tested within the MATLAB/Simulink environment. Moving beyond theoretical descriptions, we constructed detailed models incorporating practical elements: a PV array, a DC-DC boost converter with Perturb and Observe (P&O) Maximum Power Point Tracking (MPPT) for real-world energy harvesting, and both single-phase H-bridge and three-phase two-level voltage source inverters (VSIs) feeding the grid through carefully designed LCL filters. We subjected both systems to identical, realistic solar irradiance profiles and rigorously analyzed critical performance metrics side-by-side, including output waveform quality (Total Harmonic Distortion - THD), power conversion efficiency, DC-link voltage stability, and MPPT effectiveness. Our simulation results clearly demonstrate distinct operational characteristics: the three-phase inverter consistently delivered superior efficiency (approximately 97.8% vs. 96.5%), significantly lower output current THD (below 2.0% vs. approximately 3.8%), and reduced DC-link voltage ripple. Conversely, the single-phase topology offers inherent simplicity and lower cost for lower-power applications. This comparative analysis provides concrete, simulation-backed insights into the fundamental trade-offs between complexity, cost, efficiency, and power quality, directly informing the optimal selection of inverter technology—single-phase for standard residential use or three-phase for commercial/industrial systems demanding higher performance.
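The P&O MPPT loop mentioned above is a simple hill-climber on the P-V curve. The sketch below uses a hypothetical quadratic power curve rather than a physical PV-cell model, but shows the perturb/observe decision rule the Simulink models implement:

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One P&O step: keep perturbing the operating voltage in the
    direction that increased power; reverse otherwise."""
    if p > p_prev:
        direction = 1 if v > v_prev else -1
    else:
        direction = -1 if v > v_prev else 1
    return v + direction * step

def pv_power(v, v_mpp=30.0, p_max=200.0):
    """Toy concave P-V curve with a single maximum at v_mpp (hypothetical)."""
    return max(0.0, p_max - (v - v_mpp) ** 2)

# Track from a cold start at 20 V toward the 30 V maximum power point.
v_prev, p_prev = 20.0, pv_power(20.0)
v = 20.5
for _ in range(60):
    p = pv_power(v)
    v_next = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_next
print(round(v, 1))  # settles into a small oscillation near 30 V
```

The fixed step size is the classic P&O trade-off: larger steps converge faster but oscillate more widely around the maximum power point, degrading THD and DC-link stability.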

Page 2 of 3 | Total Records : 24