Deception detection aims to determine whether a person is lying. One common lie detector is the polygraph, which measures physiological signals such as pulse and blood pressure. However, polygraphs cannot capture psychological cues such as speech and intonation. Audio-based deception detection is therefore needed, as it can exploit these psychological cues. This research extracts audio features, namely the Mel Frequency Cepstral Coefficient (MFCC), jitter, fundamental frequency (F0), and Perceptual Linear Prediction (PLP), from the Real-Life Trial dataset, which comprises 121 audio recordings. The extraction yields 6387 numerical features, to which several feature-selection methods are applied: Feature Importance (FI), Principal Component Analysis (PCA), Information Gain, Chi-Square, and Recursive Feature Elimination (RFE). The selected features are then fed into machine learning models, namely random forest and support vector machine (SVM). The models are assessed using accuracy, precision, recall, and F1 score, together with statistical evaluation. The results show that deception detection improves after feature selection removes irrelevant features. Comparing accuracies, Chi-Square achieves a significantly higher result, reaching up to 92%, an improvement of 24.32 percentage points over the SVM model's pre-selection accuracy of 67.57%. In contrast, the RFE technique yields a best accuracy of 86%, an increase of 13.52 percentage points over its baseline accuracy of 72.97%.
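To make the pipeline concrete, the following is a minimal sketch in Python using librosa and scikit-learn, covering MFCC extraction, Chi-Square feature selection, and SVM classification. The file name, synthetic feature matrix, label encoding, and the number of selected features (k) are hypothetical placeholders for illustration and are not values taken from this study.

```python
import numpy as np
import librosa
from sklearn.preprocessing import MinMaxScaler
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report


def extract_features(path: str) -> np.ndarray:
    """Return a per-clip feature vector (MFCC statistics only, for brevity;
    the study also extracts jitter, F0, and PLP features)."""
    signal, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=signal, sr=sr, n_mfcc=13)
    # Summarize frame-level MFCCs as per-clip means and standard deviations.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])


# Synthetic stand-in for the extracted feature matrix, so the selection and
# classification steps below run without the original audio files.
rng = np.random.default_rng(0)
X = rng.random((121, 26))           # 121 clips, 26 illustrative features
y = rng.integers(0, 2, size=121)    # 1 = deceptive, 0 = truthful (assumed)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42)

# Chi-square selection requires non-negative inputs, so scale to [0, 1] first.
scaler = MinMaxScaler().fit(X_train)
selector = SelectKBest(chi2, k=10).fit(scaler.transform(X_train), y_train)

clf = SVC(kernel="rbf")
clf.fit(selector.transform(scaler.transform(X_train)), y_train)
pred = clf.predict(selector.transform(scaler.transform(X_test)))

# Accuracy, precision, recall, and F1 score in a single report.
print(classification_report(y_test, pred))
```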