This study explores the implementation of the Rasch model for modern test evaluation using a custom Python prototype, validated against the Winsteps software. Focusing on dichotomous exam data from a sizeable sample, the research estimates participant ability and item difficulty with high precision, achieving standard errors below 0.30. The model identifies misfitting items, such as Item I5 with an outfit mean square of 1.45, which supports more reliable test design. Item Characteristic Curves (ICC) and Item Information Functions (IIF) support the efficacy of Computer Adaptive Testing (CAT) across varying ability levels. Results demonstrate the prototype's consistency with Winsteps (correlation = 0.98), affirming its potential as a flexible tool for educational assessment. Limitations include the command-line interface and the need for larger datasets, suggesting future improvements in scalability and usability. This work advances modern testing practices, offering a foundation for adaptive and fair assessment systems.
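The quantities named in the abstract, the Item Characteristic Curve (ICC) and Item Information Function (IIF) of the dichotomous Rasch model, can be sketched in a few lines of Python. This is a minimal illustration of the standard model equations, not the paper's prototype; the function names and the example difficulty value are illustrative assumptions.

```python
import math

def rasch_prob(theta: float, b: float) -> float:
    """ICC of the dichotomous Rasch model:
    P(correct | theta, b) = exp(theta - b) / (1 + exp(theta - b)),
    where theta is person ability and b is item difficulty (both in logits)."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def item_information(theta: float, b: float) -> float:
    """IIF of the Rasch model: I(theta) = P * (1 - P).
    Information peaks (at 0.25) where ability matches item difficulty."""
    p = rasch_prob(theta, b)
    return p * (1.0 - p)

# Illustrative item with difficulty b = 0.5 logits, evaluated
# across a range of ability levels.
for theta in (-2.0, 0.0, 0.5, 2.0):
    p = rasch_prob(theta, 0.5)
    info = item_information(theta, 0.5)
    print(f"theta={theta:+.1f}  P={p:.3f}  I={info:.3f}")
```

Because the IIF peaks where ability equals difficulty, a CAT engine can select the next item whose difficulty is closest to the current ability estimate, which is the mechanism the abstract's ICC/IIF analysis supports.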
Copyright © 2025