Automated unit testing is essential for ensuring the security and reliability of smart contracts, particularly because their immutable nature prevents post-deployment modifications. However, manually creating test scenarios remains time-consuming, costly, and highly dependent on expert knowledge. A potential solution is to leverage AI technology, particularly Large Language Models (LLMs), to generate test scenarios automatically. This study addresses the research gap in applying LLMs to software testing by proposing a workflow for automatically generating unit test scenarios for blockchain smart contract code. The proposed workflow consists of two stages: converting Solidity smart contracts into structured Gherkin scenarios and translating those scenarios into executable Hardhat unit test scripts. Using the Gemini 2.5 Pro model, the study evaluates three prompting techniques, namely Chain-of-Thought, Few-Shot, and Role-Based Prompting, through quantitative analysis based on code coverage metrics, including Statements, Branches, Functions, and Lines. The experimental results show that Role-Based Prompting achieves the highest average coverage (92.02%), followed by Few-Shot Prompting (89.52%), while Chain-of-Thought produces the lowest coverage (78.79%). Role-Based Prompting also attains the highest Branch coverage, demonstrating superior capability in capturing conditional logic within smart contracts.
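To illustrate the two-stage output format described above, the following is a minimal sketch, not taken from the study's dataset: a hypothetical Gherkin scenario (Stage 1) shown as comments, followed by the corresponding executable Hardhat unit test (Stage 2). The `Vault` contract, its `deposit`/`withdraw` functions, and the scenario wording are illustrative assumptions.

```typescript
// Hypothetical Stage 1 output (Gherkin scenario) for an assumed Vault contract:
//   Scenario: Owner withdraws deposited funds
//     Given the contract holds 1 ether deposited by the owner
//     When the owner calls withdraw
//     Then the contract balance becomes 0
//     And a withdraw call from a non-owner is reverted

import { expect } from "chai";
import { ethers } from "hardhat";

describe("Vault", function () {
  it("lets the owner withdraw and reverts for non-owners", async function () {
    const [owner, attacker] = await ethers.getSigners();
    const Vault = await ethers.getContractFactory("Vault");
    const vault = await Vault.deploy();
    await vault.waitForDeployment();

    // Given: the owner deposits 1 ether into the contract
    await vault.connect(owner).deposit({ value: ethers.parseEther("1") });

    // And: a non-owner withdrawal hits the access-control branch and reverts
    await expect(vault.connect(attacker).withdraw()).to.be.reverted;

    // When/Then: the owner withdrawal succeeds and empties the contract balance
    await vault.connect(owner).withdraw();
    expect(await ethers.provider.getBalance(vault.target)).to.equal(0n);
  });
});
```

A test of this shape exercises both the success path and the reverting branch, which is the kind of conditional logic reflected in the Branch coverage metric reported above.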