Breast cancer has one of the highest mortality rates among cancers affecting women worldwide. Early detection plays a crucial role in improving the chances of successful treatment and reducing the risk of death. Numerous efforts have been made by both the general public and healthcare professionals to promote awareness, early screening, and timely medical intervention. In line with technological advances, computer-based systems, particularly for medical image analysis, have become increasingly important. One such application is the analysis of histopathology images to support the diagnostic process in breast cancer cases. Histopathological image classification has attracted considerable attention from researchers in recent years, and a variety of machine learning and deep learning techniques have been applied to improve its accuracy. Convolutional Neural Networks (CNNs), a core deep learning architecture, have shown promising results in identifying tissue patterns in histopathology images. However, despite their high accuracy, CNNs often lack interpretability, making it difficult to understand the reasoning behind their decisions, especially when diagnostically subtle features such as small spots, dots, or fine lines go undetected. This study addresses these limitations by proposing a method that not only classifies histopathology images with high accuracy but also improves interpretability through localization techniques. The goal is to make the classification process more transparent and clinically useful. Evaluated on widely recognized benchmark data such as the BreakHIS dataset, the proposed method achieved a classification accuracy of up to 97.50%, demonstrating its potential as a reliable tool for medical diagnostics and breast cancer research.
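To make the combination of classification and localization concrete, the listing below gives a minimal sketch of a CNN classifier paired with a Grad-CAM-style class activation map that highlights the image regions driving a prediction. The excerpt above does not specify the actual architecture or localization technique used in this study, so the ResNet-18 backbone, the binary benign/malignant label setup, and all function and parameter names here are illustrative assumptions, not the proposed method.

# Hypothetical sketch: CNN classification of a histopathology patch with a
# Grad-CAM-style localization heatmap. Backbone, class count, and all names
# are illustrative; the paper's actual method is not specified in the excerpt.
import torch
import torch.nn.functional as F
from torchvision import models

class CamClassifier(torch.nn.Module):
    def __init__(self, num_classes: int = 2):  # assumed benign/malignant labels
        super().__init__()
        backbone = models.resnet18(weights=None)
        # Keep the convolutional stages as the feature extractor; drop pool/fc.
        self.features = torch.nn.Sequential(*list(backbone.children())[:-2])
        self.pool = torch.nn.AdaptiveAvgPool2d(1)
        self.fc = torch.nn.Linear(backbone.fc.in_features, num_classes)

    def forward(self, x):
        fmap = self.features(x)                          # (B, 512, H', W') feature maps
        logits = self.fc(self.pool(fmap).flatten(1))     # class scores
        return logits, fmap

def grad_cam(model, image, target_class):
    # Coarse localization map for target_class, upsampled to the input size.
    logits, fmap = model(image)
    fmap.retain_grad()                                   # keep grads w.r.t. feature maps
    logits[0, target_class].backward()
    weights = fmap.grad.mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = F.relu((weights * fmap).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=image.shape[-2:], mode="bilinear",
                        align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze().detach()

# Usage: classify a 224x224 patch and recover a heatmap for the predicted class.
model = CamClassifier().eval()
patch = torch.randn(1, 3, 224, 224)                      # placeholder for a real patch
logits, _ = model(patch)
heatmap = grad_cam(model, patch, target_class=logits.argmax(dim=1).item())

In a sketch like this, the heatmap can be overlaid on the original patch so that the regions supporting the predicted class, including small spots or fine structures, are visible to the reader, which is the kind of transparency the localization step is intended to provide.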