Communication is a necessity for every group and individual, since everyone must interact with their surroundings. Communication also allows us to gather information that we can use to adapt. Spoken verbal language is the most common way individuals communicate, but not everyone can use it, particularly individuals with hearing impairments. For them, an alternative is sign language. Sign language is the language commonly used by people with hearing or speech disabilities, and it has a widely known standard, American Sign Language (ASL). Unlike spoken languages, however, sign language attracts relatively little public interest, so most people are unable to understand it. Sign language takes many forms, one of which uses hand postures to represent letters and numbers. To address this problem, this study proposes a system that recognizes sign language using machine learning. Specifically, we propose an ASL classification approach that combines data preprocessing with a convolutional neural network (CNN) model. The proposed model classifies images of ASL hand postures and translates them into letters of the alphabet. The resulting model achieves an accuracy of 99.8%, obtained by combining the data preprocessing stage with the CNN model.
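The abstract describes a pipeline of data preprocessing followed by CNN classification, without detailing the exact preprocessing steps. The sketch below is a minimal, hypothetical illustration of such a preprocessing stage, assuming grayscale hand-posture images (in the style of common ASL alphabet datasets) whose pixel intensities are scaled to [0, 1] and whose letter labels are one-hot encoded for CNN training; the function name, image size, and class count are illustrative assumptions, not the authors' actual implementation.

```python
import numpy as np

def preprocess(images, labels, num_classes=26):
    """Hypothetical preprocessing for ASL hand-posture images.

    - scale 8-bit pixel intensities to the [0, 1] range
    - one-hot encode the letter labels (A=0 ... Z=25) for CNN training
    """
    x = images.astype(np.float32) / 255.0
    y = np.eye(num_classes, dtype=np.float32)[labels]
    return x, y

# Tiny synthetic batch: two 28x28 grayscale "images" labeled A (0) and C (2).
images = np.random.randint(0, 256, size=(2, 28, 28), dtype=np.uint8)
labels = np.array([0, 2])
x, y = preprocess(images, labels)
print(x.shape, y.shape)  # (2, 28, 28) (2, 26)
```

The normalized array `x` and one-hot matrix `y` would then be fed to a CNN classifier (e.g. stacked convolution and pooling layers followed by a softmax over the 26 letters), which is the model family the study reports reaching 99.8% accuracy with.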
                                Copyrights © 2022