Articles

Towards Human-Level Safe Reinforcement Learning in Atari Library
Afriyadi, Afriyadi; Herry Utomo, Wiranto
Jurnal Sisfokom (Sistem Informasi dan Komputer) Vol 12, No 3 (2023): NOVEMBER
Publisher : ISB Atma Luhur

DOI: 10.32736/sisfokom.v12i3.1739

Abstract

Reinforcement learning (RL) is a powerful tool for training agents to perform complex tasks. However, RL agents often learn to behave in unsafe or unintended ways, especially during the exploration phase, when the agent is still learning about its environment. This research adapts safe exploration methods from the field of robotics and evaluates their effectiveness against algorithms that are commonly used in complex videogame environments without safe exploration. We also propose a method for hand-crafting catastrophic states, that is, states known to be unsafe for the agent to visit. Our results show that our method with its hand-crafted safety constraints outperforms state-of-the-art algorithms at certain iterations, meaning that it learns to behave safely while still achieving good performance. These results have implications for the future development of human-level safe learning in combination with model-based RL in complex videogame environments. By developing safe exploration methods, we can help ensure that RL agents can be deployed in a variety of real-world applications, such as self-driving cars and robotics.
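
The paper does not include code, but as a rough sketch of the hand-crafted catastrophic-state idea described in this abstract, the wrapper below penalizes and terminates an episode whenever a user-supplied predicate flags a state as unsafe. The gymnasium dependency, the CartPole environment, the is_catastrophic predicate, and the penalty value are illustrative assumptions, not the authors' implementation (which targets Atari games).

```python
# Illustrative sketch only: enforce a hand-crafted safety constraint by
# penalizing and ending episodes that reach "catastrophic" states.
import gymnasium as gym


class CatastrophicStateWrapper(gym.Wrapper):
    """Penalize and terminate on hand-crafted catastrophic states."""

    def __init__(self, env, is_catastrophic, penalty=-100.0):
        super().__init__(env)
        self.is_catastrophic = is_catastrophic  # user-supplied predicate over observations
        self.penalty = penalty                  # assumed penalty value, for illustration

    def step(self, action):
        obs, reward, terminated, truncated, info = self.env.step(action)
        if self.is_catastrophic(obs):
            reward += self.penalty    # discourage revisiting the unsafe state
            terminated = True         # end the episode at the catastrophe
            info["catastrophe"] = True
        return obs, reward, terminated, truncated, info


if __name__ == "__main__":
    # Hypothetical safety constraint: treat near-edge cart positions as unsafe.
    env = CatastrophicStateWrapper(gym.make("CartPole-v1"),
                                   is_catastrophic=lambda obs: abs(obs[0]) > 2.0)
    obs, _ = env.reset(seed=0)
    done = False
    while not done:
        obs, reward, terminated, truncated, info = env.step(env.action_space.sample())
        done = terminated or truncated
```

Any standard RL algorithm can then be trained on the wrapped environment unchanged; in an Atari setting the predicate would instead encode game-specific catastrophes such as losing a life.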
Analysis of Factors Contributing to Employee Attrition Based on Self-Organizing Map Clustering
Arifiandy, Rony; Herry Utomo, Wiranto
Jurnal Teknik Informatika dan Sistem Informasi Vol 11 No 2 (2025): JuTISI
Publisher : Maranatha University Press

DOI: 10.28932/jutisi.v11i2.11224

Abstract

Employee turnover can disrupt an organization's operations and cause losses to the business, so it is important to understand the contributing factors in order for organizations to take anticipatory action. Identifying the reasons employees leave their jobs is crucial for both employers and policy makers, especially when the goal is to prevent it from happening. Data on the causes of employee turnover is complex and can have many dimensions, so a suitable method is needed to analyze it. In this research, data on the causes of employee turnover with 10 dimensions is analyzed using the Self-Organizing Map (SOM) method. The SOM is a technique for clustering and visualizing high-dimensional data by mapping it to a two-dimensional space while preserving the data's topological structure; this neural-network-based method ensures that similar data points remain close to each other in the resulting 2D representation. The SOM clusters the data into several uniform groups, and the quality of the grouping is assessed with the Silhouette score, the Dunn index, and the Connectivity value. The expectation is that the resulting clusters are well formed and the data is clearly grouped, so that the groups can be analyzed with more accurate results.
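
As a point of reference for the workflow described in this abstract (not the authors' code), the sketch below clusters synthetic 10-dimensional data with a small Self-Organizing Map and scores the grouping with the Silhouette coefficient. The MiniSom package, the 3x3 map size, and the synthetic data are assumptions for illustration; the Dunn index and Connectivity measures mentioned above would require additional packages.

```python
# Illustrative sketch: SOM clustering of synthetic 10-dimensional data,
# evaluated with the Silhouette score.
import numpy as np
from minisom import MiniSom                    # pip install minisom
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)
X = rng.normal(size=(300, 10))                 # stand-in for the 10-dimensional turnover data

# Train a small 3x3 SOM; each map node acts as one cluster.
som = MiniSom(3, 3, input_len=10, sigma=1.0, learning_rate=0.5, random_seed=42)
som.random_weights_init(X)
som.train_random(X, num_iteration=1000)

# Label each sample with the flattened index of its best-matching unit.
labels = np.array([i * 3 + j for i, j in (som.winner(x) for x in X)])

# Silhouette score: values closer to 1 indicate tighter, better-separated clusters.
print("clusters used:", len(set(labels)))
print("silhouette score:", silhouette_score(X, labels))
```

Because each sample is assigned to its best-matching map node, the number of clusters is bounded by the map size (here at most 9).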
Deblurring Photos With Lucy-Richardson And Wiener Filter Algorithm In RGBA Color
Rustam, Michiavelly; Fahmi, Hasanul; Herry Utomo, Wiranto
Journal of Comprehensive Science Vol. 3 No. 3 (2024): Journal of Comprehensive Science (JCS)
Publisher : Green Publisher Indonesia

DOI: 10.59188/jcs.v3i3.655

Abstract

Photographers and social media influencers create posts every day to captivate their audience with engaging content. Central to their success is the need for high-quality images that allow the viewer to clearly perceive and engage with the information being conveyed. However, a persistent challenge in photography is that hand tremors during image capture can result in accidentally blurred photos. In response, we propose a solution that leverages the Lucy-Richardson (L-R) and Wiener filter algorithms. This approach is tailored to reduce the effects of blur caused by unstable handling, allowing for sharper, less noisy images. By incorporating these algorithms into their workflows, creators can not only reduce the frustration of blurry photos, but also increase the overall visual impact of their posts, foster deeper connections with their viewers, and set a new standard of visual quality.
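
For illustration only (not the authors' implementation), the sketch below simulates blur on a test image with a Gaussian point-spread function and restores it channel by channel with scikit-image's Richardson-Lucy and Wiener deconvolution routines. The PSF, the iteration count, and the Wiener balance parameter are assumed demo values; a real RGBA photo would be handled the same way, with the alpha channel passed through unchanged.

```python
# Illustrative sketch: per-channel Richardson-Lucy and Wiener deconvolution
# on a synthetically blurred test image.
import numpy as np
from scipy.signal import convolve2d
from skimage import data, img_as_float
from skimage.restoration import richardson_lucy, wiener


def gaussian_psf(size=9, sigma=2.0):
    """Gaussian point-spread function used as a stand-in for tremor blur."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()


image = img_as_float(data.astronaut())   # RGB test image; an alpha channel would pass through unchanged
psf = gaussian_psf()

restored_lr = np.zeros_like(image)
restored_wf = np.zeros_like(image)
for c in range(image.shape[2]):          # deconvolve each color channel independently
    blurred = convolve2d(image[..., c], psf, mode="same", boundary="symm")
    restored_lr[..., c] = richardson_lucy(blurred, psf, 30)    # 30 L-R iterations (assumed)
    restored_wf[..., c] = wiener(blurred, psf, balance=0.1)    # balance = regularization weight

print("Richardson-Lucy output range:", restored_lr.min(), restored_lr.max())
```

Both methods assume the blur kernel (PSF) is known or estimated; for hand-tremor blur, an estimated motion-blur PSF would replace the Gaussian used here.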