With the increasing volume of multilingual user-generated content on social media platforms, effective sentiment analysis (SA) has become crucial, especially for low-resource languages. However, traditional models that rely on context-independent embeddings such as Word2Vec, GloVe, and fastText struggle with the complexity of multilingual sentiment classification. To address this, we propose an Automatic Multilingual Sentiment Detection (AMSD) framework that leverages the contextual capabilities of BERT for feature extraction and a Bidirectional Long Short-Term Memory (Bi-LSTM) network for classification. Our method, termed Elite Opposition Cross-Entropy Weighted Bi-LSTM (EOCEWBi-LSTM), integrates elite opposition-based learning to optimize hyperparameters and enhance classification accuracy. A weighted cross-entropy loss function further sharpens the model's sensitivity to class imbalance, improving its performance. The model is trained and evaluated on the NEP_EDUSET corpus, comprising 45,434 tweets in English, Hindi, and Tamil. Experimental results demonstrate notable improvements in precision, recall, F1-score, and accuracy across both high-resource and low-resource languages: the proposed EOCEWBi-LSTM achieves an F1-score of 93.83% and an accuracy of 93.83%, outperforming existing methods. EOCEWBi-LSTM thus offers an effective solution for multilingual sentiment analysis, particularly for languages with limited resources.
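To make the class-imbalance mechanism concrete, the sketch below shows one common form of weighted cross-entropy, where each example's loss is scaled by a per-class weight (e.g. inverse class frequency) and the batch loss is normalized by the sum of the applied weights. This is a minimal illustration of the general technique, not the paper's exact loss; the function name, the NumPy formulation, and the example weights are all assumptions for demonstration.

```python
import numpy as np

def weighted_cross_entropy(probs, labels, class_weights):
    """Weighted cross-entropy over a batch (illustrative sketch).

    probs         : (N, C) predicted class probabilities, rows sum to 1
    labels        : (N,) integer gold class ids
    class_weights : (C,) per-class weights, e.g. inverse class frequency
    """
    probs = np.asarray(probs, dtype=float)
    labels = np.asarray(labels)
    class_weights = np.asarray(class_weights, dtype=float)

    n = probs.shape[0]
    # Probability assigned to the true class of each example.
    p_true = probs[np.arange(n), labels]
    # Weight each example by its class weight.
    w = class_weights[labels]
    # Negative log-likelihood, normalized by the total applied weight.
    return float(-(w * np.log(p_true)).sum() / w.sum())
```

Raising the weight of a minority class makes errors on that class dominate the loss, which is the intended effect when the sentiment labels are imbalanced.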