Decision-making in complex environments often requires evaluating multiple alternatives against several criteria, and different decision support system (DSS) methods can yield inconsistent outcomes for the same problem. Such inconsistencies pose a significant challenge for decision-makers trying to determine the most reliable methodology. To address this gap, the present study examines whether three widely adopted DSS methods, Simple Additive Weighting (SAW), the Simple Multi-Attribute Rating Technique (SMART), and the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), produce consistent results when applied to identical input values, criteria, and alternatives. The primary aim is to explicitly assess the consistency of decision outcomes across these methods under controlled conditions. The evaluation was conducted on a common set of alternatives, with A1 consistently emerging as the top choice. For alternative A5, the SAW method produced a final score of 0.8998, the SMART method assigned a value of 0, and the TOPSIS method yielded a closeness coefficient of 0.826. The unique contribution of this study lies in its systematic, side-by-side comparison of SAW, SMART, and TOPSIS on precisely the same dataset, an approach seldom taken in prior research. By empirically demonstrating that these methods generate identical rankings under strictly controlled scenarios, this research provides new evidence for the methodological robustness and practical interchangeability of these widely used decision support techniques. The findings underscore the reliability of these methods in facilitating objective decision-making and offer practical guidance for researchers and practitioners in selecting a suitable DSS method without concern for inconsistent results.
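The agreement described above can be illustrated with a minimal sketch of the three methods. The decision matrix and weights below are hypothetical, not the study's dataset; each function follows the standard textbook formulation (max-normalization for SAW, min-max utilities for SMART, vector normalization and closeness to the ideal solution for TOPSIS), assuming all criteria are benefit criteria.

```python
import math

# Hypothetical decision matrix (rows = alternatives A1..A3,
# columns = benefit criteria) and weights; illustrative values only.
matrix = [[9, 8, 7],   # A1
          [7, 6, 5],   # A2
          [5, 4, 6]]   # A3
weights = [0.5, 0.3, 0.2]

def saw(m, w):
    """Simple Additive Weighting: divide each benefit column by its
    maximum, then take the weighted sum per alternative."""
    col_max = [max(col) for col in zip(*m)]
    return [sum(wj * x / mx for wj, x, mx in zip(w, row, col_max))
            for row in m]

def smart(m, w):
    """SMART: rescale each criterion to a 0-1 utility via min-max,
    then take the weighted sum per alternative."""
    cols = list(zip(*m))
    lo, hi = [min(c) for c in cols], [max(c) for c in cols]
    return [sum(wj * (x - l) / (h - l) for wj, x, l, h in zip(w, row, lo, hi))
            for row in m]

def topsis(m, w):
    """TOPSIS: vector-normalize, weight, then score each alternative by
    its relative closeness to the positive ideal solution."""
    norms = [math.sqrt(sum(x * x for x in col)) for col in zip(*m)]
    v = [[wj * x / n for wj, x, n in zip(w, row, norms)] for row in m]
    best = [max(col) for col in zip(*v)]
    worst = [min(col) for col in zip(*v)]
    scores = []
    for row in v:
        d_pos = math.sqrt(sum((x - b) ** 2 for x, b in zip(row, best)))
        d_neg = math.sqrt(sum((x - s) ** 2 for x, s in zip(row, worst)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores

for name, fn in [("SAW", saw), ("SMART", smart), ("TOPSIS", topsis)]:
    scores = fn(matrix, weights)
    print(name, "top alternative: A%d" % (scores.index(max(scores)) + 1))
```

Because A1 dominates the other alternatives on every criterion in this toy matrix, all three methods agree on the top-ranked alternative, mirroring the consistency the study reports for its controlled dataset.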