Articles

Found 2 Documents

Determining the Optimal Chemical Concentration with the Regula Falsi Method
Bandiyah, Salza Nur; Angelia; Hidayat, Rafi
International Journal of Technology and Modeling Vol. 3 No. 3 (2024)
Publisher : Etunas Sukses Sistem

DOI: 10.63876/ijtm.v3i3.100

Abstract

Determining the optimal chemical concentration is an important aspect of industrial research and applications, especially in chemical reaction processes. This article discusses the use of the Regula Falsi method as a numerical approach to determining the optimal concentration based on a non-linear mathematical model. The Regula Falsi method was chosen for its simplicity and its ability to converge iteratively to solutions with high accuracy. The target function is defined from the relationship between the concentration variable and the efficiency of the chemical reaction. In this study, simulations were carried out using several reaction-parameter scenarios to evaluate the performance of the method. The results show that the Regula Falsi method consistently provides accurate results in determining the root of the target function that represents the optimal concentration. The error rate is calculated to ensure that the resulting solution is within an absolute error tolerance of 0.01. The advantage of this method lies in its speed of convergence compared to other numerical methods, such as the Bisection method. In addition, a sensitivity analysis was carried out to assess the effect of parameter changes on the calculation results. The article concludes with a discussion of the potential applications of the Regula Falsi method in other chemical fields, including the optimization of reaction processes on an industrial scale. With this approach, it is hoped that the Regula Falsi method can be an effective tool to support data-based decision-making in chemical research and process technology.
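
For illustration, the following is a minimal Python sketch of the Regula Falsi (false position) iteration the abstract describes, using the 0.01 tolerance it mentions. The target function f below is a hypothetical placeholder; the paper's actual concentration-efficiency model is not given in the abstract, and the stopping rule (here, |f(c)| below the tolerance) is one common convention rather than necessarily the authors' own.

def regula_falsi(f, a, b, tol=0.01, max_iter=100):
    """Approximate a root of f on [a, b] with the Regula Falsi method."""
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        # Intersect the secant line through (a, fa) and (b, fb) with the x-axis;
        # this replaces the midpoint used by the Bisection method.
        c = b - fb * (b - a) / (fb - fa)
        fc = f(c)
        if abs(fc) < tol:
            return c
        # Keep the subinterval that still brackets the root.
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

# Hypothetical target function standing in for the concentration-efficiency model.
f = lambda x: x**3 - 2 * x - 5
root = regula_falsi(f, 2.0, 3.0, tol=0.01)
print(f"approximate optimal concentration: {root:.4f}")

Because each iterate uses the secant line rather than the interval midpoint, the bracket typically shrinks toward the root faster than with bisection, which matches the convergence advantage the abstract claims.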
Implementing LU Decomposition to Improve Computer Network Performance
Angelia; Bandiyah, Salza Nur; Marine, Yoni
International Journal of Technology and Modeling Vol. 4 No. 2 (2025)
Publisher : Etunas Sukses Sistem

DOI: 10.63876/ijtm.v4i2.101

Abstract

The application of LU decomposition in computer networks has great potential to improve system performance, especially in processing and analyzing complex, large-scale data. LU decomposition is a linear-algebra technique that factors a matrix into two triangular matrices, a lower (L) and an upper (U) matrix, which simplifies the solution of systems of linear equations. In the context of computer networks, this technique can be applied to accelerate the analysis and processing of network traffic data, resource management, and traffic scheduling. Large matrices are often used to model networks in applications such as route mapping, bandwidth allocation, and network performance monitoring. Using LU decomposition allows such large data sets to be handled efficiently, speeding up calculations and reducing latency in network information processing. This study proposes applying LU decomposition to optimize several aspects of computer networks, such as dynamic routing, network fault detection, and more effective resource allocation. With LU decomposition, load analysis and problem identification can be carried out more quickly, increasing the throughput and stability of the system. The results of the experiments show that applying LU decomposition can reduce the computational load and accelerate the system's response to changes in network conditions. Overall, the application of this method can contribute to improving the efficiency and performance of modern computer networks, especially in the face of increasingly heavy and complex data traffic.
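
For illustration, the following is a minimal Python sketch of solving a linear system via LU factorization, the core operation the abstract relies on. The matrix A and vector b are hypothetical stand-ins for a small network model (e.g. link weights and offered load); they are not taken from the paper, and the SciPy routines shown are a standard implementation rather than the authors' own code.

import numpy as np
from scipy.linalg import lu, lu_factor, lu_solve

# Hypothetical 3-node network model: A could encode routing or capacity
# relations, b the offered load. Values are illustrative only.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
b = np.array([6.0, 9.0, 5.0])

# Factor A into a permutation P, lower-triangular L, and upper-triangular U.
P, L, U = lu(A)
print("L =\n", L)
print("U =\n", U)

# Reuse the factorization to solve A x = b; once the factors are computed,
# each new right-hand side (e.g. a changed traffic load) needs only two
# cheap triangular solves, which is where the computational savings arise.
lu_piv = lu_factor(A)
x = lu_solve(lu_piv, b)
print("x =", x, "residual =", np.linalg.norm(A @ x - b))

Reusing the factors across repeated solves, instead of refactoring the full matrix each time, is the mechanism behind the reduced computational load and faster response to changing network conditions that the abstract reports.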