Found 26 Documents
Journal: eProceedings of Engineering

Perancangan Dan Analisis Sistem Speech Processing Untuk Tunarungu Menggunakan Metode Hidden Markov Model Dan Mel-frequency Cepstral Coefficients Bagus Robbiyanto; Raditiana Patmasari; Rita Magdalena
eProceedings of Engineering Vol 6, No 1 (2019): April 2019
Publisher : eProceedings of Engineering


Abstract

Hearing is one of the ways humans communicate and is essential for understanding one another's intentions. This creates a barrier between hearing people and the deaf, because not everyone understands sign language. In this final project, a tool is built to help hearing people communicate with deaf people. The tool converts an input speech signal into text: the Mel-Frequency Cepstral Coefficients (MFCC) method extracts features from the input signal, and a Hidden Markov Model (HMM) classifies them by measuring the similarity between the extracted features and those stored in a database. When a match is found, the resulting text is used as a new input that plays the corresponding Indonesian Sign Language video. Test results show that the combination of MFCC and HMM recognizes spoken words with a highest accuracy of 87%. Keywords: Indonesian Sign Language, deaf, speech processing, MFCC, HMM.
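The MFCC pipeline the abstract describes (power spectrum, mel filterbank, log energies, DCT) can be sketched as follows. This is a minimal illustrative implementation, not the thesis's code; the frame length, sample rate, and filter counts are arbitrary assumptions.

```python
import numpy as np

def hz_to_mel(f):
    # Convert frequency in Hz to the mel scale.
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(frame, sample_rate=16000, n_filters=26, n_coeffs=13):
    """Compute MFCCs for a single windowed frame (sketch, not production code)."""
    # Power spectrum of the frame.
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    n_bins = spectrum.size
    # Triangular mel filterbank between 0 Hz and the Nyquist frequency.
    mel_points = np.linspace(hz_to_mel(0), hz_to_mel(sample_rate / 2), n_filters + 2)
    bin_idx = np.floor((n_bins - 1) * mel_to_hz(mel_points) / (sample_rate / 2)).astype(int)
    fbank = np.zeros((n_filters, n_bins))
    for i in range(n_filters):
        l, c, r = bin_idx[i], bin_idx[i + 1], bin_idx[i + 2]
        fbank[i, l:c] = np.linspace(0, 1, c - l, endpoint=False)  # rising edge
        fbank[i, c:r] = np.linspace(1, 0, r - c, endpoint=False)  # falling edge
    # Log filterbank energies, then a DCT-II to decorrelate -> cepstral coefficients.
    energies = np.log(fbank @ spectrum + 1e-10)
    n = np.arange(n_filters)
    dct = np.cos(np.pi * np.outer(np.arange(n_coeffs), 2 * n + 1) / (2 * n_filters))
    return dct @ energies

# A 400-sample frame of a 440 Hz tone as a stand-in for recorded speech.
t = np.arange(400) / 16000
coeffs = mfcc(np.hamming(400) * np.sin(2 * np.pi * 440 * t))
```

In the full system each utterance is cut into many such frames, and the sequence of coefficient vectors is what the HMM scores against the word models in the database.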
Evaluation Of Dlx Microprocessor Instructions Efficiency For Image Compression Nimas Fatihah; Nyoman Karna; Raditiana Patmasari
eProceedings of Engineering Vol 6, No 1 (2019): April 2019
Publisher : eProceedings of Engineering


Abstract

The Internet of Things (IoT) today mostly uses generic microprocessors, which serve general purposes and execute many machine instructions. Alternatively, IoT devices can be built on an ASIC (Application-Specific Integrated Circuit) customized for a single task. An ASIC is hardcoded, meaning its program cannot be modified, so it tends to consume less power than a generic microprocessor. This thesis considers compressing CCTV images on a microprocessor that combines both approaches: general-purpose hardware restricted to a specific instruction set. Compression is required to reduce the size of the original image. The thesis uses the high-performance DLX (Deluxe) microprocessor to design an image compressor, with the machine instructions determined by a specific algorithm. The compression targets the lossy Joint Photographic Experts Group (JPEG) format, the most commonly used for multimedia data. The proposed compression method is Huffman coding, written in the DLX assembly language. DCT and quantization are first simulated in an image-processing tool, and their output is then fed into the Huffman coding stage. The result is that Huffman coding on the DLX microprocessor requires a total of 11657 cycles executed by 8622 instructions. With such specific machine instructions, the DLX microprocessor can execute Huffman coding efficiently. Keywords: IoT, DLX microprocessor, Huffman Coding, image compression, JPEG.
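The Huffman coding step can be sketched in a few lines: merge the two lightest symbol groups until one tree remains, so frequent symbols (in JPEG, mostly zero-valued quantized coefficients) get the shortest codes. This is a generic illustrative sketch in Python, not the thesis's DLX assembly, and the sample data is invented.

```python
import heapq
from collections import Counter

def huffman_codes(symbols):
    """Build a Huffman code table for a symbol sequence (illustrative sketch)."""
    freq = Counter(symbols)
    # Each heap entry: (weight, tie-breaker id, {symbol: code-so-far}).
    heap = [(w, i, {s: ""}) for i, (s, w) in enumerate(freq.items())]
    heapq.heapify(heap)
    next_id = len(heap)
    if len(heap) == 1:  # degenerate case: only one distinct symbol
        return {s: "0" for s in heap[0][2]}
    while len(heap) > 1:
        # Merge the two lightest subtrees, prefixing their codes with 0 and 1.
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (w1 + w2, next_id, merged))
        next_id += 1
    return heap[0][2]

# Quantized DCT coefficients are dominated by zeros, so zero gets the shortest code.
data = [0, 0, 0, 0, 0, 0, 3, 3, 7, -2]
codes = huffman_codes(data)
encoded = "".join(codes[s] for s in data)
```

Here the ten input symbols compress to 16 bits instead of the 10 x 8 bits a fixed-width encoding would need, which is the size reduction the DLX program performs instruction by instruction.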
Sistem Deteksi Nada Pada Alat Musik Angklung Menggunakan Metode Harmonic Product Spektrum Rachmat Hidayat Ashary; Raditiana Patmasari; Sofia Saidah
eProceedings of Engineering Vol 6, No 1 (2019): April 2019
Publisher : eProceedings of Engineering


Abstract

The angklung is one of the traditional musical instruments commonly found in West Java. It is made of bamboo tubes; sound is produced by the tubes striking each other when the instrument is shaken, yielding notes such as do, re, mi, fa, sol, la, si, and high do. It is easy to play, but beginners typically only hear the sound without knowing which note it is. This final project therefore builds a system that helps beginners, and can serve as an aid in music schools, by identifying angklung notes. The system uses the Harmonic Product Spectrum method to find the fundamental frequency of the input signal. It works in two stages: recording and note recognition. In the recording stage, angklung notes are recorded and saved as *.wav files to form the reference samples against which played notes are recognized. In the recognition stage, live input passes through preprocessing, the Harmonic Product Spectrum, and KNN classification to detect and recognize the note being played: the input signal from the angklung is transformed to the frequency domain and processed to find a recognized fundamental frequency. The tests show the best accuracy with the combination of two first-order statistical features, variance and skewness, and a Euclidean KNN with K=1, reaching 88.78%. In other words, angklung note detection using the Harmonic Product Spectrum method achieves optimal results. Keywords: angklung, preprocessing, Harmonic Product Spectrum.
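The Harmonic Product Spectrum idea is simple: multiply the magnitude spectrum with downsampled copies of itself so that the harmonics all line up on the fundamental bin. A minimal sketch, using a synthetic tone rather than a real angklung recording (the note, sample rate, and harmonic count are assumptions):

```python
import numpy as np

def fundamental_hps(signal, sample_rate, n_harmonics=3):
    """Estimate the fundamental frequency via Harmonic Product Spectrum (sketch)."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    # Multiply the spectrum with its decimated copies so harmonics reinforce f0.
    hps = spectrum.copy()
    for h in range(2, n_harmonics + 1):
        decimated = spectrum[::h]
        hps[: len(decimated)] *= decimated
    # Ignore the DC region, then convert the peak bin back to Hz.
    hps[:2] = 0
    peak = int(np.argmax(hps[: len(spectrum) // n_harmonics]))
    return peak * sample_rate / len(signal)

# A synthetic angklung-like tone: 392 Hz (G4) fundamental plus two harmonics.
sr = 8000
t = np.arange(4000) / sr
tone = (np.sin(2 * np.pi * 392 * t)
        + 0.5 * np.sin(2 * np.pi * 784 * t)
        + 0.3 * np.sin(2 * np.pi * 1176 * t))
f0 = fundamental_hps(tone, sr)
```

The estimated fundamental would then be compared against the recorded reference notes by the KNN classifier to name the note.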
Analisis Performansi Sistem Pendeteksi Katarak Menggunakan Dct (discrete Cosine Transform) Dan Jaringan Saraf Tiruan Backpropagation (jst Backpropagation) Herdian Anantya Risma; Raditiana Patmasari; Rita Magdalena
eProceedings of Engineering Vol 6, No 1 (2019): April 2019
Publisher : eProceedings of Engineering


Abstract

With today's technology, digital image processing can be used to detect cataracts. In digital image processing, an object is recognized by applying a particular algorithm. This final project uses digital image processing to speed up the identification of cataract disease, based on the DCT (Discrete Cosine Transform) method. DCT is used in image file compression to transform an image matrix into another representation, and it is also useful in digital processing for pattern recognition. A Backpropagation Artificial Neural Network (ANN) then classifies the test images. The result is a matrix-operation software simulation that detects and classifies cataract eyes with an accuracy of 86.67% and a best computation time of 3.666 seconds, using 45 training and 45 test images, the first-order parameters standard deviation and entropy, a DCT block size of 5, 1000 epochs, a learning rate of 1, and 5 hidden-layer neurons. Keywords: DCT, backpropagation ANN, cataract.
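The block DCT the abstract relies on is a separable matrix transform. A minimal sketch of an orthonormal type-II 2D DCT on a 5x5 block (matching the block size above; the flat test block is an invented example, not data from the thesis):

```python
import numpy as np

def dct_basis(n):
    """Orthonormal 1D DCT-II basis matrix: row k is the k-th cosine basis vector."""
    k = np.arange(n)
    basis = np.sqrt(2.0 / n) * np.cos(np.pi * np.outer(k, 2 * k + 1) / (2 * n))
    basis[0, :] = np.sqrt(1.0 / n)  # DC row gets the flat-normalization factor
    return basis

def dct2(block):
    """2D DCT via the separable property: rows first, then columns."""
    b = dct_basis(block.shape[0])
    return b @ block @ b.T

def idct2(coeffs):
    b = dct_basis(coeffs.shape[0])
    return b.T @ coeffs @ b

# A flat block concentrates all its energy in the single DC coefficient,
# which is why DCT coefficients make compact features for classification.
block = np.full((5, 5), 10.0)
coeffs = dct2(block)
recovered = idct2(coeffs)
```

In the cataract system the DCT coefficients of each block, summarized by first-order statistics, become the feature vector fed to the backpropagation network.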
Perancangan Dan Implementasi Sistem Monitoring Suhu Pada Kolam Penangkaran Muhammad Rahmat Hidayat; Raditiana Patmasari; Arif Indra Irawan
eProceedings of Engineering Vol 6, No 2 (2019): Agustus 2019
Publisher : eProceedings of Engineering


Abstract

Metabolism and all biological activity are strongly affected by temperature; rising or falling temperatures make it difficult for aquatic organisms to breathe. Pond temperature must therefore be checked regularly. Checking the temperature manually, by going down to the pond, costs time and money for owners who are not always nearby. This system uses a DS18B20 temperature sensor to measure pond temperature. A Raspberry Pi processes the temperature data and sends it to a database, and a mobile application serves as the interface for viewing it; the value shown is the most recent entry in the database. Testing gives an average accuracy of 99%. QoS analysis shows an average delay of 0.3 ms, which falls in the very good category. The highest morning throughput is at pond D, 7159.780525 bits/s, and the lowest at pond A, 6399.194438 bits/s; in the afternoon the highest throughput is at pond F, 6627.64872 bits/s, and the lowest again at pond A, 6399.194438 bits/s. Finally, packet loss across all ponds at all times is 0%, also in the very good category. Keywords: temperature monitoring, cloud computing, QoS, real time.
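On a Raspberry Pi the DS18B20 is typically read through the Linux 1-Wire sysfs interface (a file such as /sys/bus/w1/devices/28-*/w1_slave). A sketch of the parsing step, assuming the standard w1-therm file format; the sample payload is invented and the abstract does not state that the thesis read the sensor this way:

```python
def parse_ds18b20(raw):
    """Parse raw w1-therm sysfs output from a DS18B20 into degrees Celsius (sketch)."""
    lines = raw.strip().splitlines()
    # The first line ends in "YES" when the sensor's CRC check passed.
    if not lines[0].endswith("YES"):
        raise ValueError("CRC check failed, re-read the sensor")
    # The second line carries the temperature in millidegrees after "t=".
    millideg = int(lines[1].split("t=")[1])
    return millideg / 1000.0

# Example payload in the w1-therm format (hypothetical reading).
sample = ("6e 01 4b 46 7f ff 02 10 71 : crc=71 YES\n"
          "6e 01 4b 46 7f ff 02 10 71 t=22875")
celsius = parse_ds18b20(sample)
```

The parsed value is what the Pi would timestamp and push to the database for the mobile application to display.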
Desain Sistem Pengenalan Wajah Menggunakan Raspberry Pi 3 Muhamad Ihsan S; Nyoman Bogi Aditya Karna; Raditiana Patmasari
eProceedings of Engineering Vol 6, No 2 (2019): Agustus 2019
Publisher : eProceedings of Engineering


Abstract

This final project analyzes the design of a face recognition system with a Raspberry Pi 3 B+ at its center, using the Raspberry Pi Camera Module V2.1. The face recognition system is the pyimagesearch system intended for the Raspberry Pi 3. An encode_face.py program trains on images of five research subjects, and a pi_face_recognition.py program is run and tested against four people whose faces are in the previously trained database and one person whose face is not. The recognition method is Deep Metric Learning with a triplet training step, based on pi_face_recognition from pyimagesearch by Adrian; it uses David King's dlib network architecture and Adam Geitgey's face_recognition module, while face detection uses the default frontal-face Haar cascade XML file. The dataset consists of 5 people with 30 face photos each, 150 photos in total, trained with encode_face.py to produce the file TUGASAKHIR-5subjek.pickle. The system is tested at four distances, 1.5, 2, 2.5, and 3 meters, with three test parameters: size (20×20, 25×25, 30×30, and 35×35), scale factor (1.1, 1.2, 1.3, and 1.4), and neighbourhood (3, 4, 5, and 6). The results show a highest accuracy of 80% and a True Positive Rate of 100%, with the best parameters being a size of 20×20, a scale factor of 1.1, and a neighbourhood of 3. Keywords: Raspberry Pi 3 B+, face recognition, Deep Metric Learning.
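The matching step in this style of system reduces to nearest-neighbor search over face embeddings: a probe embedding is compared with each stored embedding by Euclidean distance, and a distance under a threshold (around 0.6 for dlib's 128-d embeddings) counts as the same person. A toy sketch with invented 3-d vectors standing in for real embeddings:

```python
import math

def match_face(probe, database, threshold=0.6):
    """Match a face embedding against named reference embeddings (sketch).

    The vectors here are tiny toy embeddings, not real 128-d dlib outputs;
    the 0.6 threshold mirrors the convention of the face_recognition module.
    """
    best_name, best_dist = "Unknown", float("inf")
    for name, ref in database.items():
        dist = math.dist(probe, ref)  # Euclidean distance between embeddings
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else "Unknown"

db = {"alice": [0.1, 0.9, 0.3], "budi": [0.8, 0.2, 0.5]}
known = match_face([0.12, 0.88, 0.31], db)   # very close to alice's embedding
stranger = match_face([0.5, 0.5, 1.2], db)   # far from every stored embedding
```

The pickle file produced by encode_face.py plays the role of `db` here: a mapping from names to the embeddings computed during training.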
Analysis Of Discrete Wavelet Transform For Optimum Machine Instruction Of Dlx Microprocessor Shafitri Nurhanifa; Nyoman Bogi Aditya Karna; Raditiana Patmasari
eProceedings of Engineering Vol 6, No 2 (2019): Agustus 2019
Publisher : eProceedings of Engineering


Abstract

Monitoring applications over Wireless Sensor Networks (WSN) are in high demand in the Internet of Things (IoT). A problem in IoT is that general-purpose microprocessors are still widely used, consuming more energy than needed. An Application-Specific Integrated Circuit (ASIC) can make an application more energy efficient, but it is more expensive and permanent: it cannot be changed or reconfigured. This thesis presents a method to design a specific-purpose yet reconfigurable microprocessor by compressing an image on the DLX microprocessor and optimizing the machine instructions it needs. Before the DWT step, an image goes through a pre-processing stage in Matlab that converts an RGB image to grayscale and extracts its matrix; that matrix is the input to the Haar DWT machine instructions. The machine instructions are simulated in WinDLX, a DLX microprocessor simulator, and the simulation statistics are analyzed to judge whether the instruction set is optimal. The Haar DWT machine instructions produce the same result as Matlab, confirming that they perform the image compression correctly. Out of 92 kinds of instructions, the Haar program needs only 20, so the program wastes no energy on unused instructions. The statistics show the pipelined DLX microprocessor executes the program in 1239 cycles, where a non-pipelined microprocessor would need 2755 cycles. The program is thus a more efficient way to run Haar DWT compression. Keywords: optimum machine instruction, DLX microprocessor, DWT image compression, internet of things (IoT), wireless sensor multimedia networks.

1. Introduction

As technology grows ever more present in human life, the phrase "Internet of Things" is no stranger to that growth. Kevin Ashton was the first to use the term "Internet of Things", in 1999: he and his team were developing an extension of the internet to accommodate things, which inspired the term [1]. The idea of IoT developed in parallel with WSNs [2]. A Wireless Sensor Network (WSN) is a network of a large number of nodes that cooperatively sense the environment. WSNs have been deployed since the 1980s but became common in 2001 for industrial and research purposes [2], and are now widely applied in environmental monitoring, industrial infrastructure, and military surveillance. Although the WSN brings great convenience to society, the technology also comes with issues [3], such as the minimum exposure path [4] and the energy sink-hole [5], [6]. The main problem discussed in this thesis is the energy lifetime of the WSN itself, which has drawn wide attention. Sensor nodes are usually powered by batteries with a limited lifetime, and changing batteries frequently is inefficient for long-term use of a WSN. Proposed remedies include wireless-powered sensor networks [7] and harvesting solar energy to charge the WSN wirelessly [8]; however, even if additional power can be harvested, the resource remains limited under frequent use. Image compression is a more targeted strategy for reducing excessive energy consumption in a WSN, and many compression methods have been proposed for this purpose [9], [10]. This thesis applies the image compression strategy by creating DWT machine instructions to be inserted into the DLX microprocessor, so the processor runs only those specific instructions. This strategy yields the most suitable microprocessor to embed in the WSN, which then wastes no energy on instructions that would be left unused. The purpose of this thesis is to analyze the WSN energy efficiency achieved by reconstructing the machine instructions; the benefit is that the method can be implemented in multimedia WSNs. The problem is formulated as how effectively a DWT with optimal machine instructions affects WSN energy efficiency in a multimedia monitoring system. The thesis uses the DLX microprocessor and the Haar DWT algorithm in DLX assembly language. The evaluation parameters are the compression result, the power consumption, and the simulation speed. The work uses several methodologies: literature study, system design, and simulation.
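One level of the Haar DWT is just pairwise averages (the approximation band) and pairwise differences (the detail band), which is why it maps to so few machine instructions. A sketch in Python rather than DLX assembly, on an invented grayscale row:

```python
def haar_dwt_1level(row):
    """One level of the 1D Haar DWT: pairwise averages (approximation) and
    pairwise differences (detail). On an image, a row pass followed by a
    column pass gives the 2D transform; here we sketch a single row."""
    approx = [(row[i] + row[i + 1]) / 2 for i in range(0, len(row), 2)]
    detail = [(row[i] - row[i + 1]) / 2 for i in range(0, len(row), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Reconstruct the original row exactly from the two bands."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

pixels = [88, 88, 89, 90, 200, 202, 60, 56]  # one grayscale row (even length)
approx, detail = haar_dwt_1level(pixels)
restored = haar_inverse(approx, detail)
```

Most detail coefficients of smooth image regions are near zero, so they compress well, while the transform remains exactly invertible, which is the property the WinDLX output is checked against Matlab for.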
Klasifikasi Jenis Zat Narkotika Dengan Menggunakan Metode Gray Level Co-occurrence Matrix (glcm) Dan Jaringan Saraf Tiruan Backpropagation (jst-bp) Irdin Arjulian; Raditiana Patmasari; R. Yunendah Nur Fu'adah
eProceedings of Engineering Vol 6, No 2 (2019): Agustus 2019
Publisher : eProceedings of Engineering


Abstract

Drugs are addictive substances that enter the body and affect the nervous system or the brain, with harmful effects. Current drug-testing equipment is expensive, so the available devices are limited and not every officer authorized to investigate narcotics carries one while working. Given the severity of drug cases in Indonesia, adequate supporting tools are needed to help officers in the field, so a system that can classify types of narcotic substances is needed as an alternative test. In this final project, the classification system uses the Gray Level Co-occurrence Matrix (GLCM) method for feature extraction and a Backpropagation Artificial Neural Network (ANN-BP) to match the image database against the input to be identified. The study used 150 training images and 125 test images of narcotic substances, captured with a Dino-Lite AM3111T digital microscope camera, and achieved a highest accuracy of 96.80% with a computation time of 0.0897 seconds. These results were obtained using the pixel distance, angle direction, and quantization level parameters with 7 GLCM features, and, for the ANN-BP classifier, the hidden layer and iteration (epoch) parameters. Keywords: narcotics, Gray Level Co-occurrence Matrix, backpropagation artificial neural network.
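A GLCM counts how often gray level i sits at a fixed offset from gray level j, and texture features such as contrast are derived from those counts. A minimal sketch for one offset on a tiny invented 4-level image (the thesis varies distance, angle, and quantization level; here one combination is hard-coded):

```python
def glcm(image, dx=1, dy=0, levels=4):
    """Gray Level Co-occurrence Matrix for a single (dx, dy) offset (sketch)."""
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1  # co-occurrence of the pair
    return m

def contrast(m):
    """One GLCM texture feature: (i-j)^2 weighted by the normalized counts."""
    total = sum(sum(row) for row in m) or 1
    return sum((i - j) ** 2 * m[i][j] / total
               for i in range(len(m)) for j in range(len(m)))

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
g = glcm(img, dx=1, dy=0)   # horizontal neighbors, distance 1, angle 0 degrees
c = contrast(g)
```

A feature vector of several such statistics (contrast, energy, homogeneity, and so on) per offset is what the backpropagation network classifies.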
Prototype Smart Traffic Light Berbasis Pengolahan Citra Digital Menggunakan Mikrokontroller Rifky Abdul Khafid; Raditiana Patmasari; R. Yunendah Nur Fu'adah
eProceedings of Engineering Vol 6, No 2 (2019): Agustus 2019
Publisher : eProceedings of Engineering


Abstract

Roads connect one place to another and make travel easier, but the year-on-year increase in the number of vehicles strongly affects congestion on many road sections. Traffic lights are one solution for reducing congestion, yet current traffic light systems are not effective because they use fixed-time control with predetermined durations. This study therefore builds a traffic light control system that detects vehicle density at an intersection using digital image processing: the approach with the longest queue gets the green light first. The system records each approach and takes frames from the recording at set times as input data. The input is processed with digital image processing, and the output is implemented with LEDs that light up for the approach with the most vehicles. The system determines which approach has the most vehicles through labelling and edge detection to extract the required objects. Testing shows that light intensity strongly affects the system's performance and accuracy: in clear morning conditions the accuracy is 92.50%; at midday in clear conditions it is 80.00% because of the high light intensity; and in cloudy afternoon conditions the system reaches its best accuracy of 95.00%. The average system accuracy obtained is 89.16%. Keywords: traffic light, digital image processing, microcontroller, LED.
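The density comparison can be sketched as counting, per approach, how many pixels differ from an empty-road background frame; the approach with the most changed pixels wins. This simplification stands in for the labelling and edge detection the study actually uses, and the tiny 4x8 "frames" are invented:

```python
def densest_segment(background, frame, segments, threshold=30):
    """Pick the road segment with the most changed pixels (simplified sketch).

    `segments` maps a segment name to its column range in the frame; the real
    system segments the image per approach and uses labelling/edge detection.
    """
    counts = {}
    for name, (c0, c1) in segments.items():
        changed = 0
        for row_bg, row_fr in zip(background, frame):
            for bg, fr in zip(row_bg[c0:c1], row_fr[c0:c1]):
                if abs(fr - bg) > threshold:
                    changed += 1  # pixel differs enough to count as vehicle area
        counts[name] = changed
    # The segment with the most vehicle pixels gets the green light first.
    return max(counts, key=counts.get), counts

bg = [[10] * 8 for _ in range(4)]          # empty road, uniform gray level
fr = [row[:] for row in bg]
for r in range(4):
    fr[r][5] = fr[r][6] = 200              # bright "vehicles" in the right half
winner, counts = densest_segment(bg, fr, {"north": (0, 4), "east": (4, 8)})
```

The sensitivity of this comparison to illumination is exactly why the reported accuracy varies between morning, midday, and afternoon conditions.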
Klasifikasi Retinopati Diabetik Non-proliferatif Dan Proliferatif Berdasarkan Citra Fundus Menggunakan Metode Gabor Wavelet Dan Klasifikasi Jaringan Saraf Tiruan Backpropagation Donny Janu Sundoro; Raditiana Patmasari; Rita Magdalena
eProceedings of Engineering Vol 6, No 2 (2019): Agustus 2019
Publisher : eProceedings of Engineering


Abstract

Diabetic retinopathy is a microvascular complication in the retina of people with diabetes mellitus; if untreated, it can lead to blindness. It is graded by severity into three types: normal, non-proliferative (NPDR), and proliferative (PDR). Detecting and grading diabetic retinopathy is currently still done manually by trained medical personnel, but advances in technology make it possible to develop a system that classifies the severity automatically. In this final project, a severity classification system is designed that works on fundus images with digital image processing. The classification has five severity classes: normal, non-proliferative (mild, moderate, and severe), and proliferative, each with 60 training images and 40 test images. Gabor Wavelet is used for feature extraction and a Backpropagation Artificial Neural Network (ANN) as the classification algorithm. The tests give a best accuracy of 85% with 60 training images per class, using images resized to 512×512, the blue channel, the first-order features variance and entropy, downsampling d1=16 and d2=16, and a classifier with 200 hidden-layer neurons, a learning rate of 0.005, and 1000 epochs. Keywords: Diabetic Retinopathy, NPDR, PDR, Gabor Wavelet, backpropagation ANN.
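A Gabor filter is a Gaussian-windowed sinusoid tuned to one orientation and spatial frequency; filtering a fundus image with a bank of such kernels highlights vessel-like structures at different angles. A sketch of the real part of one kernel, with invented size, wavelength, and sigma values:

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real part of a 2D Gabor kernel (sketch): a Gaussian window times a
    cosine wave, selective for orientation `theta` and the given wavelength."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates into the filter's orientation.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
            row.append(g * math.cos(2 * math.pi * xr / wavelength))
        kernel.append(row)
    return kernel

# A 7x7 kernel tuned to horizontal structures (theta = 0).
k = gabor_kernel(size=7, wavelength=4.0, theta=0.0, sigma=2.0)
center = k[3][3]
```

In the full system the filter responses, summarized by first-order statistics such as variance and entropy and then downsampled, form the feature vector the backpropagation network classifies into the five severity classes.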