Shunmugalingam Parvathi, Sakthidevi
Unknown Affiliation

Published: 1 document

Feature separation of music across diverse dataset: a comparative perspective
Shunmugalingam Parvathi, Sakthidevi; Chandrasekar, Divya
Bulletin of Electrical Engineering and Informatics, Vol. 14, No. 5, October 2025
Publisher : Institute of Advanced Engineering and Science

DOI: 10.11591/eei.v14i5.9962

Abstract

In music, feature separation is the process of isolating distinguishable auditory characteristics, such as pitch, timbre, rhythm, and harmonic content, from a complex mixed signal. The topic has attracted considerable attention in areas such as virtual reality (VR), gaming, music transcription, karaoke systems, audio restoration, music information retrieval (MIR), music education, and audio forensics. Feature extraction is crucial in music separation: it identifies and isolates sound elements, improving accuracy and reducing noise, and it distills raw audio into meaningful data for efficient processing and effective model learning. Without it, clean separation of audio components is very difficult. In this research, extracting features from mixed audio sources enables clean and accurate isolation of musical elements, enhancing quality, supporting precise evaluation, and boosting neural network performance across varied datasets, including DSD100, MUSDB, and MUSDB18-HQ, which collectively provide rich musical content for evaluation and benchmarking. Evaluation metrics such as F1-score, precision, and recall are used to quantify the performance of the extracted features. The MUSDB18-HQ dataset yielded an overall increase of 17.86% in F1-score, with significant gains for drums (+25.05%) and vocals (+20.04%), showing that the dataset is highly effective for feature separation.
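The precision, recall, and F1-score metrics named in the abstract can be sketched in a few lines. This is a generic illustration, not the paper's evaluation pipeline; the frame-level "vocals present" labels below are hypothetical example data:

```python
def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 from binary ground-truth and
    predicted labels (e.g. per-frame 'stem active' decisions)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical frame-level labels: 1 = vocals present, 0 = absent.
truth = [1, 1, 0, 1, 0, 0, 1, 1]
pred  = [1, 0, 0, 1, 1, 0, 1, 1]
p, r, f = precision_recall_f1(truth, pred)
print(p, r, f)  # → 0.8 0.8 0.8
```

F1 is the harmonic mean of precision and recall, so a reported gain such as the paper's +17.86% reflects a balanced improvement in both detecting stem activity (recall) and avoiding false detections (precision).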