This study presents a comparative analysis of four prompt engineering frameworks: KARANGTURI, RTF (Role-Task-Format), CoT (Chain-of-Thought), and ReAct. These frameworks help users design effective instructions for Large Language Models (LLMs). A descriptive-comparative approach is employed to examine each framework in terms of structure, focus, complexity, strengths, limitations, and practical application. KARANGTURI, a locally developed framework, consists of four key elements: Character, Summary, Goal, and Constraint. RTF offers a simple three-part structure of Role, Task, and Format, making it suitable for straightforward tasks. CoT emphasizes step-by-step reasoning and is effective for complex, logic-intensive tasks. ReAct integrates reasoning with actions and supports interaction with external tools for advanced tasks. The analysis indicates that the choice of framework depends on task type, level of complexity, and the need for reasoning or access to external information. KARANGTURI emerges as a comprehensive and flexible approach with promising potential, though it still requires empirical validation. These findings are intended to help AI practitioners select the most appropriate prompting strategy for their specific needs.
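To make the four structures concrete, the sketch below shows one possible way to template a prompt under each framework. It is an illustrative sketch only: the field names, wording, and helper functions are assumptions made for this example and are not taken from the paper's specification of the frameworks.

```python
# Hypothetical prompt templates approximating the four frameworks discussed above.
# All wording and field names are illustrative assumptions, not official definitions.

def karangturi_prompt(character: str, summary: str, goal: str, constraint: str) -> str:
    """KARANGTURI: Character, Summary, Goal, Constraint."""
    return (
        f"Character: {character}\n"
        f"Summary: {summary}\n"
        f"Goal: {goal}\n"
        f"Constraint: {constraint}"
    )

def rtf_prompt(role: str, task: str, fmt: str) -> str:
    """RTF: Role, Task, Format."""
    return f"Act as {role}. {task} Respond in the following format: {fmt}"

def cot_prompt(question: str) -> str:
    """CoT: append an instruction to reason step by step."""
    return f"{question}\nLet's think step by step."

def react_prompt(question: str, tools: list[str]) -> str:
    """ReAct: interleave Thought / Action / Observation with external tools."""
    return (
        f"You may use these tools: {', '.join(tools)}.\n"
        "Follow the cycle Thought -> Action -> Observation, then give a Final Answer.\n"
        f"Question: {question}"
    )

if __name__ == "__main__":
    # Example usage of the KARANGTURI template with hypothetical content.
    print(karangturi_prompt(
        character="an experienced science teacher",
        summary="students are struggling to understand photosynthesis",
        goal="explain photosynthesis in simple terms",
        constraint="use no more than 150 words",
    ))
```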