Maryland Assessment Research Center (MARC)

Automated Alignment of Test Items to Standards

Automated Alignment of SAT Math and Reading Items to SAT Specifications (ongoing)

 

Automated Alignment of Math Items to Content Standards in Large-Scale Assessments Using Language Models

Abstract: Accurate alignment of items to content standards is critical for valid score interpretation in large-scale assessments. This study evaluates three automated paradigms for aligning items with four domain labels and nineteen skill labels. First, we extracted item embeddings and trained multiple classical supervised machine learning models, and further investigated the impact of dimensionality reduction on model performance. Second, we fine-tuned eight BERT-family models (BERT and its variants) for both domain and skill alignment. Third, we explored ensemble learning with majority voting and with stacking using multiple meta-models. DeBERTa-v3-base achieved the highest weighted-average F1 score for domain alignment (0.950), while RoBERTa-large yielded the highest F1 score for skill alignment (0.869). Ensemble models did not surpass the best-performing language models. Dimensionality reduction improved the embedding-based linear classifiers, but they still did not outperform the language models. This study demonstrates different methods for automated item alignment to content standards.
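
A minimal sketch of the first paradigm (item embeddings, dimensionality reduction, and a classical classifier evaluated by weighted F1) is shown below. The embedding model, toy items, labels, and hyperparameters are illustrative assumptions, not the study's actual configuration; the abstract does not specify which encoder or classifier settings were used.

```python
# Illustrative sketch of the embedding-based paradigm: encode item text,
# reduce dimensionality with PCA, train a linear classifier, and report
# the weighted-average F1 score. All data and settings here are toy
# placeholders standing in for the real (proprietary) item bank.
from sentence_transformers import SentenceTransformer
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.metrics import f1_score

# Toy items and domain labels (0-3); the real study used operational items.
item_texts_train = [
    "Solve 2x + 3 = 11 for x.",
    "Find the slope of the line through (1, 2) and (3, 8).",
    "Interpret the median of the data set shown in the table.",
    "Find the area of a circle with radius 5.",
]
y_train = [0, 0, 1, 2]
item_texts_test = ["Solve 3x - 4 = 8 for x."]
y_test = [0]

# Any sentence encoder could be used; this one is shown only as an example.
encoder = SentenceTransformer("intfloat/multilingual-e5-large-instruct")
X_train = encoder.encode(item_texts_train, normalize_embeddings=True)
X_test = encoder.encode(item_texts_test, normalize_embeddings=True)

# PCA retaining 95% of variance, followed by a linear classifier.
clf = make_pipeline(PCA(n_components=0.95), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)

preds = clf.predict(X_test)
print("Weighted F1 (domain alignment):", f1_score(y_test, preds, average="weighted"))
```

The same pipeline pattern extends to the other classical models mentioned in the abstract by swapping the final estimator.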

 

Text-Based Approaches to Item Alignment to Content Standards in Large-Scale Reading and Writing Tests

Abstract: Aligning test items to content standards is a critical step in test development for collecting content-based validity evidence. Item alignment has typically been conducted by human experts, a judgmental process that can be subjective and time-consuming. This study investigated the performance of fine-tuned small language models (SLMs), including BERT-base, BERT-large, ALBERT-base, DistilBERT-base, All-DistilRoBERTa, ELECTRA-small, ELECTRA-base, RoBERTa-base, RoBERTa-large, DeBERTa-base, DeBERTa-large, and ConvBERT, for automated item alignment using data from a large-scale standardized reading and writing test for college admissions. Separate SLMs were trained for alignment at the domain and skill levels, with 10 skills mapped to 4 content domains. Model performance was evaluated on two testing datasets using multiple criteria, including precision, recall, accuracy, weighted F1, and Cohen's kappa. The impact of the type and size of the training data was also investigated. Results showed that including more item text led to substantially better model performance, surpassing the improvements induced by sample size increases alone. For comparison, supervised machine learning models, including logistic regression, SVM, Naive Bayes, Random Forest, Gradient Boosting, XGBoost, LightGBM, MLP, and KNN, were trained using embeddings from the multilingual-E5-large-instruct model. The results showed that the fine-tuned SLMs consistently outperformed the embedding-based supervised machine learning models, particularly for the more fine-grained skill alignment. To better understand model misclassifications, multiple semantic similarity analyses were conducted, including pairwise cosine similarity, Kullback-Leibler divergence of embedding distributions, and two-dimensional projections of item embeddings. These analyses consistently showed that certain skills in the SAT and PSAT are semantically too close, providing evidence for the observed misclassifications.
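
A minimal sketch of one such semantic similarity analysis is shown below: comparing the centroids of each skill's item embeddings with pairwise cosine similarity and projecting items to two dimensions for inspection. The random embeddings, skill names, similarity threshold, and the choice of PCA for the projection are assumptions for illustration; the study's actual analyses used real item embeddings and also examined Kullback-Leibler divergence.

```python
# Illustrative sketch: flag semantically close skills by comparing the
# centroids of their item embeddings, and project items to 2D for inspection.
import numpy as np
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.decomposition import PCA

def skill_centroid_similarity(embeddings, skills):
    """Return skill names and the pairwise cosine similarity of their centroids."""
    names = sorted(set(skills))
    skills = np.array(skills)
    centroids = np.vstack([embeddings[skills == s].mean(axis=0) for s in names])
    return names, cosine_similarity(centroids)

# Random vectors standing in for real item embeddings (60 items, 10 skills).
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(60, 384))
skills = [f"skill_{i % 10}" for i in range(60)]

names, sim = skill_centroid_similarity(embeddings, skills)
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        if sim[i, j] > 0.8:  # arbitrary threshold; tune for the real data
            print(f"Potentially confusable: {names[i]} vs {names[j]} (cos = {sim[i, j]:.2f})")

# Two-dimensional projection of item embeddings for visual inspection
# (with random toy embeddings no skill pairs will be flagged; real item
# embeddings from closely related skills would cluster together here).
coords = PCA(n_components=2).fit_transform(embeddings)
```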