Found 27 Documents

Optimization of use case point through the use of metaheuristic algorithm in estimating software effort Ardiansyah, Ardiansyah; Zulfa, Mulki Indana; Tarmuji, Ali; Jabbar, Farisna Hamid
International Journal of Advances in Intelligent Informatics Vol 10, No 1 (2024): February 2024
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/ijain.v10i1.1298

Abstract

The Use Case Points (UCP) estimation framework relies on complexity weight parameters to estimate software development effort. However, because these parameters are discontinuous, they lead to abrupt weight classification and inaccurate estimates. Several studies have addressed this weakness with approaches including fuzzy logic, regression analysis, and optimization techniques. Nevertheless, the use of optimization techniques to determine use case weight parameter values has not yet been extensively explored and has the potential to further improve accuracy. Motivated by this, the present research examines several metaheuristic search-based algorithms: genetic algorithms, the Firefly algorithm, the Reptile search algorithm, particle swarm optimization, and the Grey Wolf optimizer. The experiments were carried out on the publicly available Silhavy UCP estimation dataset, which contains 71 project records from three software houses, and the metaheuristic-based models were compared against one another. The findings indicate that the Firefly algorithm outperforms the others on five accuracy metrics: mean absolute error, mean balanced relative error, mean inverted balanced relative error, standardized accuracy, and effect size. Software project managers can leverage the practical implications of this study by using the UCP estimation method optimized with the Firefly algorithm.
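The idea the abstract describes — tuning the UCP complexity weights with a metaheuristic search — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the standard weights (simple=5, average=10, complex=15), the toy project data, the 20 person-hours/UCP productivity factor, and the plain random search standing in for the Firefly algorithm are all assumptions.

```python
# Sketch: standard Use Case Points effort estimation, with a toy random
# search over the complexity weights as a stand-in for a metaheuristic
# (Firefly/GA/PSO). All numbers below are illustrative assumptions.
import random

def ucp(counts, weights, uaw=6.0, tcf=1.0, ecf=1.0):
    """Use Case Points from per-complexity-class counts, adjusted by TCF/ECF."""
    uucw = sum(counts[c] * weights[c] for c in counts)
    return (uaw + uucw) * tcf * ecf

def estimate_effort(counts, weights, hours_per_ucp=20.0):
    return ucp(counts, weights) * hours_per_ucp

# Toy "training" set: (use-case counts, actual effort in person-hours).
projects = [
    ({"simple": 4, "average": 6, "complex": 2}, 2300.0),
    ({"simple": 2, "average": 8, "complex": 3}, 2900.0),
]

def mae(weights):
    """Mean absolute error of the estimates over the toy project set."""
    return sum(abs(estimate_effort(c, weights) - actual)
               for c, actual in projects) / len(projects)

# Random search over the weight parameters, standing in for the
# metaheuristic optimization the paper performs.
random.seed(0)
best = {"simple": 5.0, "average": 10.0, "complex": 15.0}
best_err = mae(best)
for _ in range(2000):
    cand = {k: max(1.0, v + random.uniform(-1, 1)) for k, v in best.items()}
    if mae(cand) < best_err:
        best, best_err = cand, mae(cand)
```

The search only ever accepts candidate weights that lower the error, so the optimized weights are at least as accurate on the fitting data as the standard ones.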
Addressing Overfitting in Dermatological Image Analysis with Bayesian Convolutional Neural Network Zulfa, Mulki Indana; Aryanto, Andreas Sahir; Wijonarko, Bintang Abelian; Ahmed, Waleed Ali
Jurnal Ilmiah Teknik Elektro Komputer dan Informatika Vol. 10 No. 2 (2024): June
Publisher : Universitas Ahmad Dahlan

DOI: 10.26555/jiteki.v10i2.29177

Abstract

VGG, ResNet, and DenseNet are popular convolutional neural network (CNN) architectures for transfer learning (TL), and they aid dermatological image processing, particularly skin cancer classification. These TL-CNN models stack many neural network layers for effective image classification, but their depth can cause overfitting and demands substantial computational resources. The Bayesian CNN (BCNN) technique addresses TL-CNN overfitting by introducing uncertainty into the model weights and predictions. The contributions of this research are (i) comparing BCNN with three TL-CNN architectures on dermatological images and (ii) examining BCNN's ability to mitigate overfitting through weight perturbation and uncertainty during training. BCNN uses flipout layers to perturb weights during training, guided by a loss function combining the KL divergence and binary cross-entropy (BCE). The dataset is the ISIC Challenge 2017, with images categorized as malignant or benign skin tumors. The simulation results show that the three TL-CNN architectures, VGG-19, ResNet-101, and DenseNet-201, obtained training accuracies of 96.65%, 100%, and 97.70%, respectively, yet all three reached a maximum validation accuracy of only around 78%. In contrast, BCNN achieves training and validation accuracies of 81.30% and 80%, respectively, a gap of only 1.3%, whereas the three TL-CNN architectures are trapped in overfitting with a gap of around 20% between training and validation accuracy. Therefore, BCNN is more reliable for dermatological image processing, especially for skin cancer images.
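The Bayesian mechanism the abstract relies on — sampling perturbed weights each forward pass and penalizing the loss with a KL term — can be sketched with a single stochastic layer. This is a NumPy toy under stated assumptions, not the paper's BCNN: a one-layer "network", a standard-normal prior, a softplus parameterization of the weight standard deviations, and an arbitrary KL weight.

```python
# Sketch of the Bayesian CNN training signal: each weight has a learned
# mean (mu) and spread (via rho), a fresh sample is drawn per forward
# pass (weight perturbation), and the loss is binary cross-entropy plus
# KL( q(w)=N(mu, sigma^2) || p(w)=N(0, 1) ). Shapes, labels, and the
# 1e-3 KL weight are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sample_weights(mu, rho):
    sigma = np.log1p(np.exp(rho))          # softplus keeps sigma positive
    return mu + sigma * rng.standard_normal(mu.shape), sigma

def kl_gaussian(mu, sigma):
    # KL( N(mu, sigma^2) || N(0, 1) ), summed over all weights
    return 0.5 * np.sum(sigma**2 + mu**2 - 1.0 - 2.0 * np.log(sigma))

def bce(y_true, y_prob, eps=1e-7):
    p = np.clip(y_prob, eps, 1 - eps)
    return -np.mean(y_true * np.log(p) + (1 - y_true) * np.log(1 - p))

# One stochastic forward pass through a tiny "Bayesian" dense layer.
x = rng.standard_normal((8, 4))            # 8 samples, 4 features
y = (x[:, 0] > 0).astype(float)            # toy binary labels
mu, rho = np.zeros(4), -3.0 * np.ones(4)   # variational parameters
w, sigma = sample_weights(mu, rho)
logits = x @ w
probs = 1.0 / (1.0 + np.exp(-logits))
loss = bce(y, probs) + 1e-3 * kl_gaussian(mu, sigma)
```

Because the weights are resampled every pass, the network cannot memorize the training set through any single weight configuration, which is the regularization effect credited with closing the train/validation gap.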
Cache Data Replacement Policy Based on Recently Used Access Data and Euclidean Distance Zulfa, Mulki Indana; Muhammad Syaiful Aliim; Ari Fadli; Waleed Ali
Jurnal Teknik Informatika (Jutif) Vol. 4 No. 4 (2023): JUTIF Volume 4, Number 4, August 2023
Publisher : Informatika, Universitas Jenderal Soedirman

DOI: 10.52436/1.jutif.2023.4.4.1244

Abstract

Data access management in web-based applications that use relational databases must be carefully designed because the data grows every day. A relational database management system (RDBMS) has relatively slow access speeds because its data is stored on disk, which degrades database server performance and slows response times. One strategy to overcome this is caching at the application level. This paper proposes the SIMGD framework, which models Application-Level Caching (ALC) to speed up relational data access in web applications. The ALC strategy maps each controller and model that accesses the database to a node-data entry in an in-memory database (IMDB). Because of the IMDB's limited capacity, not all node-data can be kept there; the SIMGD framework therefore uses the Euclidean distance between each node-data entry and its top access data as the cache replacement policy. Node-data whose Euclidean distance to the top access data is smaller has higher priority to remain in the caching server. Simulation results show that at the 25 KB cache configuration, the SIMGD framework achieves a higher hit ratio than the LRU algorithm, 6.46% versus 6.01%.
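The eviction rule described above — keep the node-data closest (by Euclidean distance) to the top access data, evict the farthest — can be sketched briefly. The per-entry access vectors, their (recency, frequency) interpretation, and the page names are illustrative assumptions, not the paper's feature set.

```python
# Sketch of the SIMGD eviction idea: each node-data entry carries an
# access-statistics vector, and the entry whose vector lies farthest
# (Euclidean distance) from the top-accessed data's vector is evicted
# first. The (recency, frequency) vectors below are assumptions.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def pick_victim(cache, top_vector):
    """Return the key of the node-data farthest from the top access data."""
    return max(cache, key=lambda k: euclidean(cache[k], top_vector))

# Toy cache: page -> (recency score, access frequency).
cache = {
    "/products": (0.9, 120.0),
    "/cart":     (0.7, 40.0),
    "/about":    (0.1, 3.0),
}
top = (1.0, 150.0)                # vector of the most-accessed node-data
victim = pick_victim(cache, top)  # "/about": farthest from the hot data
```

Entries resembling the hottest data survive, so the policy approximates keeping whatever the workload currently favors.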
Design and Development of PureCompute, a Website-Based E-Commerce Platform for Laptop Shopping Musyaffa, Ahmad Irfan; Mulki Indana Zulfa; Muhammad Syaiful Alim
Jurnal SINTA: Sistem Informasi dan Teknologi Komputasi Vol. 1 No. 1 (2024): SINTA - JANUARI
Publisher : Berkah Tematik Mandiri

DOI: 10.61124/sinta.v1i1.9

Abstract

The PureCompute project explores the design and implementation of an e-commerce platform focused specifically on laptop sales. Its main goal is to provide an optimal online shopping experience for users looking for technology products such as laptops. This report covers the entire development process, from requirements analysis through system design and technology selection to the implementation of key features such as login, registration, product pages, a shopping cart, and authentication and payment systems. The development methodology follows a software development life cycle with an iterative approach, ensuring the flexibility to accommodate changing user requirements; PureCompute is thus designed to remain responsive to market dynamics and technological advances. PureCompute is expected to contribute an efficient and convenient shopping experience that meets the needs of consumers shopping for laptops online, and the project's success should support and advance the trend of online shopping in the current digital era through affordable and innovative solutions for technology users.
Application-Level Caching Approach Based on Enhanced Aging Factor and Pearson Correlation Coefficient Zulfa, Mulki Indana; Maryani, Sri; Ardiansyah, -; Widiyaningtyas, Triyanna; Ali, Waleed
JOIV : International Journal on Informatics Visualization Vol 8, No 1 (2024)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.8.1.2143

Abstract

Relational database management systems (RDBMS) have long served as the fundamental infrastructure for web applications, but they are characterized by relatively slow access speeds because their data is stored on disk. This weakness can be mitigated with an in-memory database (IMDB): each query result can be stored in the IMDB to accelerate future access. However, because the server cache in the IMDB has limited capacity, an appropriate data-priority mechanism is needed. This paper presents similarCache, a framework that considers four data vectors for each web page, namely the data size, timestamp, aging factor, and controller access statistics, as the basis for the replacement policy whenever the content of the server cache changes. similarCache uses the Pearson correlation coefficient to quantify the similarity among the data in the server cache; cached data with the lowest Pearson correlation coefficients are evicted from memory first. similarCache was evaluated empirically through simulations on four IRCache datasets. The results show that the data access patterns and the configured memory cache allocation significantly influence the hit ratio. In particular, on the SV dataset with the smallest memory configuration, similarCache outperformed the SIZE and FIFO algorithms by 2.33% and 1%, respectively. Future work includes building a cache that adapts to data access patterns by using the standard deviation, and, in exceptional cases, raising the Pearson coefficient of frequently available data to the level of the most accessed data.
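The Pearson-based eviction rule can be sketched over the four-element vectors the abstract names. The reference "hot" vector each entry is compared against, and all the numbers, are illustrative assumptions; the paper's actual similarity computation among cached entries may differ.

```python
# Sketch of similarCache's policy: each cached page is described by a
# four-element vector (size, timestamp, aging factor, access count); the
# entry with the lowest Pearson correlation against a reference "hot"
# vector is evicted first. Reference vector and values are assumptions.
import math

def pearson(a, b):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def evict_lowest(cache, reference):
    """Key of the cached entry least correlated with the reference vector."""
    return min(cache, key=lambda k: pearson(cache[k], reference))

# Toy cache: page -> (size in KB, timestamp, aging factor, access count).
cache = {
    "/home":   (4.0, 90.0, 0.8, 450.0),
    "/search": (6.0, 80.0, 0.7, 300.0),
    "/legacy": (300.0, 5.0, 0.1, 2.0),
}
reference = (5.0, 100.0, 0.9, 500.0)     # hottest data's vector (assumed)
victim = evict_lowest(cache, reference)  # "/legacy" correlates least
```

A large, old, rarely accessed page correlates poorly with the hot data's profile and so is the first candidate for eviction.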
A Time-Cycle Model for Smart Traffic Lights Using Mamdani Fuzzy Inference Zulfa, Mulki Indana; Aryanto, Andreas Sahir; Fadli, Ari
JURNAL INFOTEL Vol 16 No 2 (2024): May 2024
Publisher : LPPM INSTITUT TEKNOLOGI TELKOM PURWOKERTO

DOI: 10.20895/infotel.v16i2.1106

Abstract

The number of motorized vehicles in Indonesia has grown significantly; according to the Central Bureau of Statistics, it has increased by around 10% per year over the last five years. One negative impact of this growth is traffic congestion, which has become a serious problem in several Indonesian cities. A major cause is the increasing number of vehicles at road intersections, which worsens both congestion and the safety of road users, so a more comprehensive strategy is required to reduce congestion and accidents at intersections. An Intelligent Transportation System, particularly an intelligent time-cycle configuration for traffic lights, is therefore essential. This research models the traffic-light time-cycle using the Mamdani Fuzzy Inference System to simulate the green-light duration and thereby reduce the waiting time of road users at highway intersections. The simulation results show that the time-cycle configuration and green-light durations computed by Mamdani fuzzy inference vary with the number of vehicles, and the resulting durations are 6 to 54 seconds shorter than the configuration set by the local Department of Transportation. This represents a time efficiency of up to 27% for road users, meaning that they can complete trips 6 to 13 seconds faster.
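A Mamdani inference step of the kind described — fuzzify the queue length, fire rules, aggregate clipped output sets, and defuzzify by centroid — can be sketched in a few lines. The membership breakpoints, the three rules, and the 6–54 second output axis are illustrative assumptions (the range echoes the abstract, not the paper's actual rule base).

```python
# Sketch of one Mamdani fuzzy inference step for green-light duration:
# triangular memberships over the vehicle count, three IF-THEN rules,
# max-min aggregation, and centroid defuzzification over a discretized
# 6..54 second output axis. All breakpoints are assumptions.

def tri(x, a, b, c):
    """Triangular membership function with peak at b and support (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def green_time(vehicles):
    # Fuzzify: IF queue is low/medium/high THEN green is short/medium/long.
    low  = tri(vehicles, -1, 0, 15)
    med  = tri(vehicles, 5, 20, 35)
    high = tri(vehicles, 25, 40, 61)
    num = den = 0.0
    for t in range(6, 55):                       # output axis: 6..54 seconds
        mu = max(min(low,  tri(t, 5, 6, 20)),    # short green
                 min(med,  tri(t, 15, 30, 45)),  # medium green
                 min(high, tri(t, 40, 54, 55)))  # long green
        num += t * mu
        den += mu
    return num / den if den else 6.0             # centroid defuzzification
```

A light queue then yields a short green phase and a heavy queue a long one, with the centroid smoothing the transition between the rules rather than switching abruptly.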
Application of Yogurt and Cheese Dairy Biotechnology to Increase the Added Value and Competitiveness of the Suprah Farmer Group in Silembu Hamlet, Banjarnegara Uletika, Niko Siameva; Arkan, Naofal Dhia; Putera, Radita Dwi; Afuan, Lasmedi; Zulfa, Mulki Indana
Jurnal Abdi Insani Vol 12 No 12 (2025): Jurnal Abdi Insani
Publisher : Universitas Mataram

DOI: 10.29303/abdiinsani.v12i12.3309

Abstract

This community service program aims to analyze the implementation of digital transformation in Nagari Bukit Buai Tapan through the integration of regional profiles, organic compost management, and the use of family medicinal plants (Toga) in support of sustainable development. The methodology includes socializing the program to local stakeholders, training nagari officials and the community in using a digital information system, mapping the region with simple software, and hands-on practice in cultivating Toga and making household compost. The program was implemented participatorily, involving nagari officials, farmer groups, health cadres, youth, and beneficiary households. The results indicate that the digital information system was successfully operated by nagari officials to manage population data and local potential. The regional mapping produced administrative maps that were printed and installed in the nagari office and are also available in digital format. More than 20 households planted Toga, including ginger, turmeric, and lemongrass, while three household composter units began active use with kitchen waste and EM4. The analysis shows a significant increase in public knowledge of herbal-based health and environmentally friendly waste management. The discussion confirms that integrating digitalization, organic compost, and Toga not only strengthens data-based governance but also fosters food independence, health, and the preservation of local wisdom. These findings imply that a holistic digital nagari transformation model can be an effective strategy to strengthen governance, improve welfare, and support sustainable development, and the program contributes conceptually and practically to the development of digital villages grounded in local wisdom that can be replicated in other nagari.