Identifying humor in stand-up comedy texts poses distinct challenges because humor is subjective and context-dependent. This study introduces a method for assessing humor retention in stand-up comedy content by fine-tuning a pre-trained BERT model for humor classification. The process begins with the collection and annotation of a varied assortment of stand-up comedy texts, labeled as humorous or non-humorous, with essential comic elements such as punchlines and setups highlighted to strengthen the model's comprehension of humor. The texts are then preprocessed and tokenized for input into the BERT model. After fine-tuning on the annotated dataset, the model generates a humor-retention prediction for each text, yielding a classification and a confidence score that reflects the model's certainty. A confidence threshold is then applied to decide whether a text is categorized as "retaining humor." The results indicate that prediction confidence is a dependable signal of humor retention, with higher confidence scores associated with greater classification accuracy. However, the analysis also shows that text length has little effect on the model's confidence, contradicting the presumption that longer texts are more likely to contain humor. These findings underscore the importance of contextual and linguistic cues in humor detection and point to opportunities for model improvement. Future work will focus on expanding the dataset to cover a broader range of comedic styles and on integrating additional contextual variables to improve prediction accuracy, especially in intricate or ambiguous comedic situations.
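The confidence-thresholding step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the threshold value (0.75), the label order (index 0 = non-humorous, index 1 = humorous), and the function names are all assumptions for demonstration. A fine-tuned BERT classifier would supply the two raw logits per text; here the softmax and thresholding logic is shown in isolation.

```python
import math

# Illustrative confidence threshold; the study does not report the exact
# value used, so 0.75 here is a hypothetical choice.
CONFIDENCE_THRESHOLD = 0.75

def softmax(logits):
    """Convert raw classifier logits into a probability distribution."""
    shifted = [x - max(logits) for x in logits]  # for numerical stability
    exps = [math.exp(x) for x in shifted]
    total = sum(exps)
    return [e / total for e in exps]

def classify_humor(logits, threshold=CONFIDENCE_THRESHOLD):
    """Map a binary humor classifier's logits (assumed order:
    index 0 = non-humorous, index 1 = humorous) to a label and a
    confidence score, applying the confidence threshold to decide
    whether the text is marked as "retaining humor"."""
    probs = softmax(logits)
    confidence = max(probs)  # model's certainty in its top prediction
    label = "humorous" if probs[1] > probs[0] else "non-humorous"
    if label == "humorous" and confidence >= threshold:
        return "retaining humor", confidence
    return label, confidence

# A strongly humorous prediction clears the threshold:
label, conf = classify_humor([-1.2, 2.3])
# A weakly humorous one stays below it and keeps the plain label:
weak_label, weak_conf = classify_humor([0.1, 0.3])
```

In this sketch, a text is only flagged as "retaining humor" when the humorous class wins *and* the model's confidence exceeds the threshold, which mirrors the abstract's observation that higher confidence scores correlate with more accurate classifications.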