Software architecture is essential for developing maintainable and scalable systems, yet limited documentation and design complexity often hinder effective architectural assessment. This study examines the role of large language models (LLMs) in analyzing software architecture documentation against the ISO/IEC/IEEE 42010 standard, combining a literature review with automated evaluation experiments on multi-layer systems in which a GPT model serves as the evaluator and Gemini serves as the validator. The results show that LLMs can assess architectural conformance to the standard, identify potential issues, and suggest optimizations grounded in best practices. Although manual validation remains necessary to ensure the accuracy of LLM evaluations, integrating LLMs offers a significant opportunity to accelerate data-driven architectural analysis.
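
As a rough illustration of the evaluator/validator setup described above, the sketch below pairs a GPT call (assessing documentation against ISO/IEC/IEEE 42010 concepts such as stakeholders, concerns, viewpoints, and rationale) with a Gemini call that cross-checks the evaluator's output. It assumes the OpenAI and Google Generative AI Python SDKs; the model names, prompt wording, function names, and file path are hypothetical and not drawn from the study itself.

```python
"""Minimal sketch of a GPT-evaluator / Gemini-validator pipeline.

Assumes the OpenAI and Google Generative AI Python SDKs are installed;
model names, prompts, and identifiers are illustrative, not the paper's.
"""
import os

from openai import OpenAI
import google.generativeai as genai

EVAL_PROMPT = (
    "You are a software architecture reviewer. Assess the following "
    "architecture documentation for conformance to ISO/IEC/IEEE 42010 "
    "(stakeholders, concerns, viewpoints, views, rationale). List gaps "
    "and suggest optimizations based on best practices.\n\n{doc}"
)

VALIDATION_PROMPT = (
    "Independently review this architecture assessment for factual and "
    "methodological soundness. Flag any claims unsupported by the "
    "documentation.\n\nDocumentation:\n{doc}\n\nAssessment:\n{assessment}"
)


def evaluate_with_gpt(doc: str) -> str:
    """GPT acts as the evaluator: checks the documentation against the standard."""
    client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; the study does not name a version
        messages=[{"role": "user", "content": EVAL_PROMPT.format(doc=doc)}],
    )
    return response.choices[0].message.content


def validate_with_gemini(doc: str, assessment: str) -> str:
    """Gemini acts as the validator: cross-checks the evaluator's output."""
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    model = genai.GenerativeModel("gemini-1.5-pro")  # hypothetical choice
    response = model.generate_content(
        VALIDATION_PROMPT.format(doc=doc, assessment=assessment)
    )
    return response.text


if __name__ == "__main__":
    # "architecture_doc.md" is a placeholder input file.
    with open("architecture_doc.md", encoding="utf-8") as f:
        doc = f.read()
    assessment = evaluate_with_gpt(doc)
    validation = validate_with_gemini(doc, assessment)
    print(assessment, "\n--- validator cross-check ---\n", validation)
```

Keeping the evaluator and validator as separate calls to two different providers mirrors the division of roles in the study and leaves room for the manual validation step the abstract notes is still required.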