Found 27 Documents
Perancangan User Interface dan User Experience (UI/UX) pada Aplikasi Konek untuk PT. Agro Lestari Merbabu Berbasis Mobile dengan Menggunakan Metode Design Thinking
Pratama, Andro Adhita; Prasetijo, Agung Budi; Eridani, Dania
JUSTIN (Jurnal Sistem dan Teknologi Informasi) Vol 12, No 1 (2024)
Publisher : Jurusan Informatika Universitas Tanjungpura

DOI: 10.26418/justin.v12i1.72750

Abstract

Agriculture is one of the main pillars sustaining human life. Even so, accessing agricultural information over the internet is still new to farmers, as is evident from their limited access to information about farming. This motivated PT Agro Lestari Merbabu to help its partners handle problems with the crops they cultivate, whether by obtaining solutions from the information presented in the application or by consulting agricultural experts. This study aims to design a user interface (UI) and user experience (UX) that match user needs and to deliver an interface design that satisfies users. The application was designed using the Design Thinking method, focusing on an interface that is easy to use. The interface design was tested with the cognitive walkthrough usability-testing method, focusing on learnability, effectiveness, and satisfaction. The satisfaction assessment used the System Usability Scale (SUS) as the parameter for evaluating user satisfaction with the interface. By the end of the study, the prototype had been successfully designed in Figma. Users rated the application with an average score of 77 ("good") for users or partners, 83 ("good") for agricultural experts, and 87.5 ("excellent") for admins. The Konek application interface therefore falls into the "acceptable" category, indicating that the interface design is good and accepted by users.
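The SUS scores reported above follow the standard ten-item scoring formula; a minimal sketch (the respondent answers below are hypothetical, not data from the paper):

```python
def sus_score(responses):
    """Compute a System Usability Scale score from ten 1-5 Likert answers.

    Odd-numbered items (index 0, 2, ...) contribute (answer - 1);
    even-numbered items (index 1, 3, ...) contribute (5 - answer).
    The sum is scaled by 2.5 to yield a 0-100 score.
    """
    if len(responses) != 10:
        raise ValueError("SUS uses exactly 10 items")
    total = sum((r - 1) if i % 2 == 0 else (5 - r)
                for i, r in enumerate(responses))
    return total * 2.5

# Hypothetical respondent: positive items rated 4, negative items rated 2
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

A per-user average of such scores is what yields the 77/83/87.5 figures the abstract reports.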
Analisis Purwarupa Sistem Otomatisasi Penerang Jalan Untuk Menghemat Daya Listrik
Eridani, Dania; Asyauqqi, M. Azka; Prasetijo, Agung Budi
J-SAKTI (Jurnal Sains Komputer dan Informatika) Vol 4, No 1 (2020): EDISI MARET
Publisher : STIKOM Tunas Bangsa Pematangsiantar

DOI: 10.30645/j-sakti.v4i1.194

Abstract

Public street lighting automation is one part of the development design of an area or a city. The design of public street lighting needs to pay attention to several aspects, one of which is how the design can save electricity consumption. This research covers several combinations of public street lighting automation designs. It uses an Arduino Mega 2560 board as the main control board, a KY-018 sensor to detect ambient light intensity, an HC-SR501 sensor to detect object movement, an AC light dimmer to regulate the lamp, and an AC power meter to measure the power consumption of the lamp used. Testing consists of testing the input and output components of the system and measuring the electricity consumption of each circuit design. The results show a comparison of the electricity consumption of each combination, which can be used as a reference for road lighting development planning.
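The control logic implied above (ambient-light sensor plus motion sensor driving a dimmer) can be sketched as a simple decision rule; the threshold and dimmer levels below are illustrative placeholders, not values from the paper:

```python
def dimmer_level(lux: float, motion: bool,
                 dark_threshold: float = 50.0) -> int:
    """Return a dimmer duty cycle (0-100%) for one street lamp.

    Daytime (bright): lamp off. Night with no movement: dimmed to save
    power. Night with detected movement: full brightness.
    Threshold and levels are illustrative, not the paper's values.
    """
    if lux >= dark_threshold:      # bright enough: lamp off
        return 0
    return 100 if motion else 30   # full power on movement, else dimmed

print(dimmer_level(200.0, False))  # 0   (daytime)
print(dimmer_level(10.0, False))   # 30  (night, idle)
print(dimmer_level(10.0, True))    # 100 (night, movement)
```

On the actual hardware this rule would read the KY-018 and HC-SR501 inputs and drive the AC dimmer output.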
YoloV8, EfficientNetv2, and CSP Darknet Comparison as Recognition Model’s Backbone for Drone-Captured Images
Kridalukmana, Rinta; Eridani, Dania; Septiana, Risma; Windasari, Ike Pertiwi
JOIV : International Journal on Informatics Visualization Vol 9, No 2 (2025)
Publisher : Society of Visual Informatics

DOI: 10.62527/joiv.9.2.2880

Abstract

Artificial intelligence (AI) has recently empowered drones to support smart city applications and to recognize on-the-ground objects or events. Various pre-trained backbones are available for developing object recognition models, and some of them can boost a model’s accuracy. Consequently, it is difficult for practitioners to select a suitable backbone as a feature extractor during recognition model development. Hence, this research provides a benchmark examining the performance of three popular backbones in supporting recognition models, using images captured by drones as the dataset. This research used the UAV-AUAIR dataset and compared three deep learning backbone architectures as the feature extractor, namely YoloV8_s, EfficientNetv2_s, and CSP_DarkNet_l. The head part of each selected backbone was replaced with the YoloV8Detector architecture, provided by Keras-CV, to perform the inference tasks. The models generated during training were evaluated against four measurement methods: loss function, intersection over union (IOU), across-scale mean average precision (mAP), and computational performance. The results showed that the model generated with the EfficientNetv2_s backbone outperformed the others on most criteria, except computational performance and small-object detection, where YoloV8_s and CSP_DarkNet_l led, respectively. Thus, EfficientNetv2_s and CSP_DarkNet_l can be considered when accuracy is the main concern, while YoloV8_s is far better when computational performance is essential, as its prediction time reached 0.8 seconds per image. This study serves as a reference for practitioners, particularly those who want to develop an object-recognition model based on a pre-trained backbone.
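Intersection over union, one of the evaluation metrics listed above, has a standard definition; a minimal sketch for axis-aligned boxes in (x1, y1, x2, y2) form:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (zero-sized if the boxes do not intersect)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = ((ax2 - ax1) * (ay2 - ay1)
             + (bx2 - bx1) * (by2 - by1) - inter)
    return inter / union if union > 0 else 0.0

print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # ≈ 0.1429 (1/7)
```

Detection benchmarks such as mAP are built on top of this measure by thresholding IoU between predicted and ground-truth boxes.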
Pembuatan Aplikasi Virtual Reality Sebagai Media Edukasi Pemadaman Api Menggunakan Alat Pemadam Api Ringan
Muhammad Risqullah Naufal Yudanar; Kurniawan Teguh Martono; Dania Eridani
IJAI (Indonesian Journal of Applied Informatics) Vol 6, No 2 (2022)
Publisher : Universitas Sebelas Maret

DOI: 10.20961/ijai.v6i2.59364

Abstract

Virtual Reality technology can be used as a medium for delivering educational material. One important topic that can serve as such material is the method of using a portable fire extinguisher. This study aims to develop a Virtual Reality application as an educational tool for the PASS method of using a fire extinguisher. The application was developed with the Multimedia Development Life Cycle (MDLC), which has six stages: concept, design, material collection, assembly, testing, and distribution. The resulting application, APARVR, runs on the Oculus Quest 2 device and was built with the Unreal Engine 4 game engine. It was tested using the System Usability Scale with 15 respondents, yielding an average final SUS score of 67, which falls into the "OK" category. Material testing was also carried out with the same respondents; the average post-test score increased by 42.66% over the average pre-test score, indicating that the educational material in the APARVR application is appropriate.
Sistem Isyarat Bahasa Indonesia (SIBI) Metode Convolutional Neural Network Sequential secara Real Time
Nurhayati, Oky Dwi; Eridani, Dania; Tsalavin, Muhammad Hafiz
Jurnal Teknologi Informasi dan Ilmu Komputer Vol 9 No 4: Agustus 2022
Publisher : Fakultas Ilmu Komputer, Universitas Brawijaya

DOI: 10.25126/jtiik.2022944787

Abstract

Sign language using hand gestures is commonly used by deaf and speech-impaired persons. The sign language used in Indonesia is SIBI (Sistem Isyarat Bahasa Indonesia). Hand signs are not always understood by hearing people, so an additional tool is needed to make sign translation easier. The tool developed in this study uses deep learning computer vision technology to translate hand signs. Hand-sign images are captured with a webcam and pre-processed by converting them to HSV, then cropping and thresholding the hand sign inside a region-of-interest (ROI) box to ease recognition. The dataset consists of 29 classes of sign images with 1,000 images per class: the 26 letters of the SIBI alphabet plus 3 additional signs, namely space, delete, and unclassified. A convolutional neural network (CNN) is used for feature learning and to classify the hand sign in an object. Testing was performed under lighting conditions ranging from 10-200 lux with hand-to-webcam distances of 50-200 cm. Experimental results show that the CNN method on hand-sign images achieves an accuracy of 97.2%, precision of 91.96%, sensitivity of 91.9%, specificity of 91.96%, and an F1 score of 91.9%.
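The ROI-crop, HSV-conversion, and thresholding pre-processing steps described above can be sketched with the standard library alone (the paper uses a webcam pipeline; here the image, ROI, and V-channel threshold are illustrative stand-ins):

```python
import colorsys

def preprocess(pixel_rows, roi, v_threshold=0.5):
    """Crop an RGB image (nested lists of 0-255 tuples) to a region of
    interest, convert each pixel to HSV, and threshold on the V channel
    to produce a binary hand mask. A stdlib-only stand-in for the
    pipeline the paper describes; parameters are illustrative.
    """
    x1, y1, x2, y2 = roi
    mask = []
    for row in pixel_rows[y1:y2]:          # crop rows to the ROI
        mask_row = []
        for r, g, b in row[x1:x2]:         # crop columns to the ROI
            _, _, v = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
            mask_row.append(1 if v >= v_threshold else 0)
        mask.append(mask_row)
    return mask

# 2x2 toy image: bright skin-like pixels vs. dark background
img = [[(220, 180, 150), (10, 10, 10)],
       [(200, 160, 140), (5, 5, 5)]]
print(preprocess(img, roi=(0, 0, 2, 2)))  # [[1, 0], [1, 0]]
```

The resulting binary mask is the kind of input a CNN classifier can learn letter shapes from.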
A Dynamic-Bayesian-Network-Based Approach to Predict Immediate Future Action of an Intelligent Agent
Kridalukmana, Rinta; Eridani, Dania; Septiana, Risma
Jurnal Ilmu Komputer dan Informasi Vol. 17 No. 1 (2024): Jurnal Ilmu Komputer dan Informasi (Journal of Computer Science and Information)
Publisher : Faculty of Computer Science - Universitas Indonesia

DOI: 10.21609/jiki.v17i1.1199

Abstract

Predicting the immediate future actions taken by an intelligent agent is considered an essential problem in human-autonomy teaming (HAT) in many fields, such as industry and transportation, particularly to improve human comprehension of the agent as their non-human counterpart. Moreover, the results of such predictions can shorten the human response time to regain control from the non-human counterpart when required. An example HAT case that can benefit from the action predictor is partially automated driving, with the autopilot agent as the intelligent agent. Hence, this research aims to develop an approach to predict the immediate future actions of an intelligent agent, with partially automated driving as the experimental case. The proposed approach relies on a machine learning method called naive Bayes to develop an action classifier, and on a Dynamic Bayesian Network (DBN) as the action predictor. The autonomous driving simulation software Carla is used for the simulation. The results show that the proposed approach can predict an intelligent agent's immediate future action within a three-second time window.
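The paper's DBN predictor is not reproduced here; the sketch below is a first-order stand-in that predicts the most likely next action from a learned transition table, which is the simplest approximation of the temporal structure a DBN models. The driving-action traces are hypothetical, not from the paper's Carla experiments:

```python
from collections import Counter, defaultdict

def fit_transitions(action_sequences):
    """Count action-to-action transitions from observed sequences."""
    counts = defaultdict(Counter)
    for seq in action_sequences:
        for current, nxt in zip(seq, seq[1:]):
            counts[current][nxt] += 1
    return counts

def predict_next(counts, current):
    """Most frequent successor of the current action, or None if unseen."""
    if current not in counts:
        return None
    return counts[current].most_common(1)[0][0]

# Hypothetical driving-action traces (illustrative labels)
traces = [["keep_lane", "keep_lane", "brake", "keep_lane"],
          ["keep_lane", "brake", "stop"],
          ["keep_lane", "brake", "stop"]]
model = fit_transitions(traces)
print(predict_next(model, "brake"))  # stop
```

A DBN generalizes this by conditioning the transition distribution on additional observed variables at each time slice, rather than on the previous action alone.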