The integration of Generative Artificial Intelligence (GenAI) into language education demands a fundamental redefinition of the construct of writing assessment. A central challenge is the inability of conventional rubrics, which emphasize mechanical accuracy and surface structure, to distinguish students' cognitive competence from algorithmic sophistication. This article develops a conceptual framework and a prototype design for a new analytical rubric for assessing argumentative texts that explicitly integrates an AI literacy dimension. Using a conceptual study method, the research synthesizes key strands of scholarship, including language assessment theory, the Toulmin argumentation model, and the concept of distributed agency. It proposes a paradigm shift from product-oriented to process-oriented assessment through a "co-creation" model. The main result is the design of an analytical rubric with four dimensions and their proposed weights: (1) Originality and Idea Synthesis (35%), (2) Critical Evaluation of AI Sources and Outputs (30%), (3) Argumentation Logic (25%), and (4) Mechanics (10%), coupled with Ethical Transparency as a prerequisite. The framework not only addresses plagiarism but is also designed to foster self-regulated learning strategies. These findings are expected to serve as a reference for educators and policymakers in designing an assessment ecosystem that is valid, ethical, and capable of cultivating critical thinking among students in an era of technological disruption.
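Assuming the proposed weights combine linearly into a composite score, a minimal sketch of the aggregation, with $O$, $E$, $A$, and $M$ as hypothetical normalized scores for the four dimensions, is:

$$S = 0.35\,O + 0.30\,E + 0.25\,A + 0.10\,M, \qquad O, E, A, M \in [0, 1].$$

Under this assumed scheme, the weights sum to 1, so the composite $S$ remains on the same scale as the individual dimension scores; Ethical Transparency functions as a pass/fail prerequisite rather than a weighted term.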