Contact Name
Rizqi Putri Nourma Budiarti
Contact Email
rizqi.putri.nb@unusa.ac.id
Phone
-
Journal Mail Official
atcsj2018@unusa.ac.id
Editorial Address
-
Location
Kota Surabaya,
Jawa Timur
INDONESIA
Applied Technology and Computing Science Journal
ISSN : 2621-4458     EISSN : 2621-4474     DOI : https://doi.org/10.33086/atcsj
Core Subject : Social, Engineering
Applied Technology and Computing Science Journal (ISSN 2621-4458, E-ISSN 2621-4474) is a journal covering all aspects of applied technology and natural science, published online by the Faculty of Engineering, University of Nahdlatul Ulama Surabaya. The journal is published twice a year (in June and December) to accommodate researchers from all over the world who wish to publish their research results and contributions on topics related to Engineering, Applied Computer Modelling and Simulation, Information Systems, Computer Science, Forecasting, Computer Applications, Expert Systems, E-Government, E-Business, E-Commerce, Information Security, Big Data, Intelligent Systems, Data Analysis, Data Mining, and Smart Cities.
Arjuna Subject : -
Articles 122 Documents
Performance Comparison of Automated Website GUI Testing Tools: A Study of Selenium IDE, Katalon Studio, UI.Vision, and BugBug Rozi, Muhammad Javier Dafa; Sulistiyani, Endang; Budiarti, Rizqi Putri Nourma
Applied Technology and Computing Science Journal Vol 8 No 1 (2025): June
Publisher : Universitas Nahdlatul Ulama Surabaya


Abstract

Software testing consumes approximately 50% of development time and cost. One of the most common testing activities is GUI testing, which is still frequently carried out manually. In manual testing, testers often repeat the same actions multiple times, increasing the risk of human error, whereas automated testing uses specialized tools to minimize these errors. Selecting the right tool and understanding its performance are therefore crucial. Previous studies on automated testing generally focused only on implementing test objects without evaluating the actual performance of the tools themselves. Many automated testing tools offer capture-and-replay features, such as Selenium, Katalon Studio, UI.Vision, and BugBug. This study compares the performance of these tools for website GUI testing using two parameters: execution time and encountered issues. A total of 21 test cases were designed and executed by the researcher. The results show that Katalon Studio, UI.Vision, and BugBug completed all test cases as expected and encountered no issues during execution, while Selenium failed to execute 5 test cases due to its inability to perform hover dropdown actions. In terms of execution time, Katalon Studio recorded the fastest average at 6.15 seconds, followed by Selenium; UI.Vision and BugBug required longer average times of 20.85 seconds and 22.8 seconds, respectively.
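The two evaluation parameters described in the abstract (issues encountered and average execution time across the 21 test cases) could be tallied with a small sketch like the following. The function name and the `(passed, seconds)` record layout are illustrative assumptions, not taken from the paper.

```python
def summarize_runs(runs):
    """Summarize a tool's test run.

    runs: list of (passed: bool, seconds: float), one entry per test case.
    Returns (pass_count, fail_count, average_seconds rounded to 2 decimals).
    """
    passed = sum(1 for ok, _ in runs if ok)
    failed = len(runs) - passed
    avg = sum(t for _, t in runs) / len(runs)
    return passed, failed, round(avg, 2)
```

With 21 such records per tool, the same computation would reproduce the kind of per-tool averages reported in the study (e.g. 6.15 s for Katalon Studio).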
Integration of ITIL V3, COBIT 5, and Service Desk Standards in IT Service Desk Design Alfadina, Afni Virda; Sulistiyani, Endang; Budiarti, Rizqi Putri Nourma
Applied Technology and Computing Science Journal Vol 8 No 1 (2025): June
Publisher : Universitas Nahdlatul Ulama Surabaya


Abstract

Software testing is a critical phase in the software development life cycle, consuming more than 50% of total time and resources. One important activity within this phase is the creation of test cases, which can be problematic when done manually because of the significant time and resources required. To address this, test case generation can be automated using two main approaches: from program code or from design models. Generating test cases at the design level using UML has been shown to be more time- and cost-efficient than generating them from code. Previous studies focused on creating test cases from a single UML diagram or a combination of two UML diagrams, producing satisfactory results but with limitations in covering all possible execution paths of the software. To overcome these limitations, this study proposes a combination of three UML diagrams, namely activity diagrams, sequence diagrams, and state diagrams, using the Depth-First Search (DFS) algorithm. The proposed approach involves three major stages: preparation, execution, and evaluation. By combining these three UML diagrams, the method is expected to generate the maximum number of test cases, ensuring comprehensive coverage of all software paths with minimal issues.
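The core of the DFS-based approach described above is enumerating execution paths through a graph built from the combined UML diagrams; each complete path then becomes a candidate test case. A minimal sketch of that path enumeration, assuming the merged diagrams are represented as a simple adjacency-list graph (the graph shape and names here are hypothetical, not from the paper):

```python
def dfs_paths(graph, start, end, path=None):
    """Enumerate all simple paths from start to end via depth-first search.

    graph: dict mapping a node to the list of its successor nodes.
    Each returned path is a list of nodes; in UML-based test generation,
    one path corresponds to one candidate test case.
    """
    path = (path or []) + [start]
    if start == end:
        return [path]
    paths = []
    for nxt in graph.get(start, []):
        if nxt not in path:  # skip revisits so cycles terminate
            paths.extend(dfs_paths(graph, nxt, end, path))
    return paths
```

For example, a small flow graph with a branch and a join yields one candidate test case per distinct start-to-end path.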
