This research uses two advanced language models, Generative Pre-trained Transformer 4 (GPT-4) and Bidirectional Encoder Representations from Transformers (BERT), to generate embeddings for analyzing semantic relationships in open-ended concept maps. The problem addressed is the challenge of accurately capturing complex relationships between concepts in concept maps, which are widely used in educational settings, especially in relational database learning. These student-created maps involve numerous interconnected concepts, making them difficult for traditional models to analyze effectively. In this study, we compare the two models' ability to generate semantic embeddings for a dataset of 1,206 student-generated concepts and 616 link nodes (mean concepts per map = 4, standard deviation = 4.73). The student maps are compared against a reference map, created by a teacher, containing 50 concepts and 25 link nodes, with the goal of assessing each model's performance in capturing relationships between concepts in an open-ended learning environment. The results demonstrate that GPT-4 outperforms BERT in generating more accurate semantic embeddings, achieving 92% accuracy, 96% precision, 96% recall, and a 96% F1-score. This highlights GPT-4's ability to handle the complexity of large, student-generated concept maps while avoiding the overfitting observed with the BERT models. The key contribution of this research is the ability to capture complex, multi-faceted relationships among concepts with high precision. This makes the approach particularly valuable in educational environments, where precise semantic analysis of open-ended data is crucial, and offers potential for enhancing concept map-based learning with scalable and accurate solutions.
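The evaluation pipeline summarized above can be sketched as follows. This is a minimal illustration, not the study's implementation: the toy two-dimensional vectors, the `match_concepts` helper, and the 0.8 threshold are all hypothetical stand-ins for the GPT-4 or BERT embeddings of the actual student and teacher concept maps.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_concepts(student_embs, reference_embs, threshold=0.8):
    """Match each student concept embedding to its nearest reference
    concept; a best match below `threshold` is treated as unmatched (-1)."""
    matches = []
    for s in student_embs:
        sims = [cosine_similarity(s, r) for r in reference_embs]
        best = int(np.argmax(sims))
        matches.append(best if sims[best] >= threshold else -1)
    return matches

# Toy embeddings: in the study these would come from GPT-4 or BERT,
# with ~1,206 student concepts scored against a 50-concept teacher map.
reference = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
students = [np.array([0.9, 0.1]), np.array([0.1, 0.9]), np.array([0.5, 0.5])]

print(match_concepts(students, reference))  # → [0, 1, -1]
```

Once each student concept is matched (or left unmatched) against the reference map, standard classification metrics such as accuracy, precision, recall, and F1-score can be computed from the resulting match labels.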
Copyright © 2025