Chatbots are increasingly prevalent in many fields, including academia. Universities often rely on lecturers and staff for information access, which can lead to delays, limited availability outside working hours, and the risk of questions going unanswered. This study aims to develop a chatbot model that answers curriculum-related questions through intent classification, reducing reliance on manual responses and providing quick, accurate information retrieval. The research focuses on optimizing the IndoBERT model for intent classification and on addressing class imbalance in the data, which could otherwise degrade model performance. Data were collected through an open poll on common curriculum-related questions asked by students. To handle the imbalance, we evaluated oversampling techniques such as SMOTE, Borderline-SMOTE (B-SMOTE), and ADASYN, as well as text data augmentation. Data augmentation was selected because it resolved the imbalance while preserving the semantics of the data. The best model was obtained with a batch size of 8, a learning rate of 0.00001, 15 epochs, and 64 neurons in the hidden layer, achieving 98.7% accuracy on the test data. Evaluation metrics further demonstrate the model's robustness across multiple intents. These results demonstrate the advantages of IndoBERT for intent classification in academic chatbots.
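
As a rough illustration of the training setup summarized above, the sketch below shows how IndoBERT could be fine-tuned for intent classification with the reported hyperparameters (batch size 8, learning rate 0.00001, 15 epochs, a 64-neuron hidden layer). The checkpoint name, the number of intent classes, and the sample question are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch (not the authors' exact code): fine-tuning IndoBERT for intent
# classification with the hyperparameters reported in the abstract.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "indobenchmark/indobert-base-p1"  # assumed IndoBERT checkpoint
NUM_INTENTS = 10                               # hypothetical number of intent classes

class IntentClassifier(nn.Module):
    def __init__(self, num_intents: int, hidden_units: int = 64):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(MODEL_NAME)
        # 64-neuron hidden layer on top of the [CLS] representation, as in the abstract
        self.head = nn.Sequential(
            nn.Linear(self.encoder.config.hidden_size, hidden_units),
            nn.ReLU(),
            nn.Linear(hidden_units, num_intents),
        )

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]      # [CLS] token embedding
        return self.head(cls)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
texts = ["Berapa SKS minimal untuk skripsi?"]  # placeholder curriculum question
labels = torch.tensor([0])                     # placeholder intent label

enc = tokenizer(texts, padding=True, truncation=True, max_length=64, return_tensors="pt")
loader = DataLoader(TensorDataset(enc["input_ids"], enc["attention_mask"], labels),
                    batch_size=8, shuffle=True)  # batch size 8

model = IntentClassifier(NUM_INTENTS)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # learning rate 0.00001
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(15):                        # 15 epochs
    for input_ids, attention_mask, y in loader:
        optimizer.zero_grad()
        logits = model(input_ids, attention_mask)
        loss = criterion(logits, y)
        loss.backward()
        optimizer.step()
```

In practice the placeholder list of texts and labels would be replaced by the full poll dataset after augmentation, and a held-out test split would be used to compute the accuracy and per-intent metrics reported in the paper.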