The limited availability of Indonesian voice emotion datasets poses a challenge for the development of Speech Emotion Recognition (SER) systems, even as demand for such systems continues to grow in sectors such as customer service, education, and human-computer interaction. To address this challenge, this study developed the Maleo Emotion Audio Dataset, a collection of three-second audio clips labeled with seven emotion categories: angry, neutral, disgusted, sad, happy, afraid, and surprised. The data was collected from YouTube and is available at https://huggingface.co/datasets/maleo-ai/maleo-emotion; it was processed through preprocessing, feature extraction, and augmentation stages. Five main features were extracted: Zero Crossing Rate, energy, Mel-Frequency Cepstral Coefficients (MFCC), spectral roll-off, and spectral flux. To improve generalization, augmentation techniques including pitch shifting, noise injection, and time stretching were applied. The classification model was built on a Convolutional Neural Network (CNN) architecture implemented in TensorFlow. Evaluation showed that the model achieved 94.48% accuracy on the test data, with balanced performance across all emotion categories. These results demonstrate that the dataset and model architecture can effectively recognize emotions from Indonesian speech in a locally relevant context.
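The abstract does not include the study's extraction code, so the following is only an illustrative NumPy sketch of four of the five named frame-level features (Zero Crossing Rate, energy, spectral roll-off, spectral flux) plus noise-injection augmentation. The frame size, hop length, 85% roll-off threshold, and SNR parameter are assumptions, not values from the paper; MFCCs and pitch/time-stretch augmentation would typically come from an audio library such as librosa rather than hand-rolled code.

```python
import numpy as np

def frame_signal(y, frame_len=1024, hop=512):
    """Split a 1-D signal into overlapping frames (assumed sizes)."""
    n = 1 + max(0, (len(y) - frame_len) // hop)
    return np.stack([y[i * hop : i * hop + frame_len] for i in range(n)])

def extract_features(y, sr):
    """Return per-frame ZCR, energy, spectral roll-off (Hz), and spectral flux."""
    frames = frame_signal(y)
    window = np.hanning(frames.shape[1])
    mag = np.abs(np.fft.rfft(frames * window, axis=1))  # magnitude spectrogram
    # Zero Crossing Rate: fraction of sample pairs whose sign changes
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    # Short-time energy: sum of squared samples per frame
    energy = np.sum(frames ** 2, axis=1)
    # Spectral roll-off: frequency below which 85% (assumed) of magnitude lies
    cum = np.cumsum(mag, axis=1)
    rolloff_bin = np.argmax(cum >= 0.85 * cum[:, -1:], axis=1)
    rolloff = rolloff_bin * sr / frames.shape[1]
    # Spectral flux: Euclidean distance between consecutive spectra
    flux = np.sqrt(np.sum(np.diff(mag, axis=0) ** 2, axis=1))
    return zcr, energy, rolloff, flux

def augment_noise(y, snr_db=20, seed=None):
    """Noise injection: add white noise at a target SNR (dB, assumed value)."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(y))
    scale = np.sqrt(np.mean(y ** 2) / (10 ** (snr_db / 10) * np.mean(noise ** 2)))
    return y + scale * noise

# Example on a synthetic 3-second clip (matching the dataset's clip length)
sr = 16000
t = np.linspace(0, 3, 3 * sr, endpoint=False)
y = np.sin(2 * np.pi * 440 * t)
zcr, energy, rolloff, flux = extract_features(y, sr)
y_noisy = augment_noise(y, snr_db=20, seed=0)
```

In a pipeline like the one described, such frame-level features (stacked with MFCCs) would form the 2-D input maps fed to the CNN classifier.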