This study addresses anomaly detection in high-dimensional time series data in the context of Artificial Intelligence (AI)-driven software development, where modern systems generate large temporal data streams and reliable monitoring remains difficult due to noise, complexity, and the scarcity of labeled anomalies. The objective of this research is to develop an effective and scalable anomaly detection framework based on self-supervised transformer models that learn meaningful temporal representations without heavy reliance on manual annotation. The proposed method applies self-supervised pretraining, through masked sequence reconstruction and contrastive temporal learning, on large-scale unlabeled multivariate time series, and then uses transformer attention mechanisms to capture long-range dependencies and compute anomaly scores. Experiments are conducted on benchmark datasets and real-world system log data, implemented with Python-based deep learning tools and transformer architectures, to evaluate detection performance. The results indicate that the proposed approach improves detection accuracy and reduces false-positive rates compared with traditional statistical techniques and supervised deep learning models, particularly in high-dimensional, low-label settings. In conclusion, integrating self-supervised learning with transformer architectures provides a robust and generalizable solution for time series anomaly detection, contributing to software analytics and monitoring systems by lowering labeling costs and improving adaptability across application domains.
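To make the masked-reconstruction pretraining and attention-based scoring pipeline concrete, the following is a minimal sketch assuming PyTorch; the module and function names (MaskedSeriesTransformer, pretrain_step, anomaly_scores) and all hyperparameters are illustrative assumptions, not the study's actual implementation.

```python
# Minimal sketch of masked-sequence-reconstruction pretraining and
# reconstruction-error anomaly scoring for multivariate time series.
# Assumes PyTorch; names and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class MaskedSeriesTransformer(nn.Module):
    def __init__(self, n_features, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_features)  # reconstruct the inputs

    def forward(self, x, mask):
        # x: (batch, time, features); mask: (batch, time) bool, True = masked
        x_in = x.masked_fill(mask.unsqueeze(-1), 0.0)  # hide masked steps
        h = self.encoder(self.embed(x_in))             # long-range attention
        return self.head(h)

def pretrain_step(model, x, optimizer, mask_ratio=0.15):
    # Self-supervised objective: randomly mask a fraction of time steps
    # and penalize reconstruction error only on the masked positions.
    mask = torch.rand(x.shape[:2], device=x.device) < mask_ratio
    recon = model(x, mask)
    loss = ((recon - x) ** 2)[mask].mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

@torch.no_grad()
def anomaly_scores(model, x):
    # Score each time step by per-step reconstruction error; higher
    # error suggests the step deviates from learned normal patterns.
    no_mask = torch.zeros(x.shape[:2], dtype=torch.bool, device=x.device)
    recon = model(x, no_mask)
    return ((recon - x) ** 2).mean(dim=-1)  # shape: (batch, time)

# Hypothetical usage on synthetic windows of an 8-feature series:
model = MaskedSeriesTransformer(n_features=8)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
batch = torch.randn(32, 100, 8)
loss = pretrain_step(model, batch, opt)
scores = anomaly_scores(model, batch)  # threshold these to flag anomalies
```

In this sketch the anomaly score is plain reconstruction error; the contrastive temporal learning objective mentioned above would add a second pretraining loss over augmented sequence views, which is omitted here for brevity.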