<sec> <title>BACKGROUND</title> <p>Computerized adaptive testing (CAT) has been shown to deliver short, accurate, and personalized versions of the CLEFT-Q patient-reported outcome measure (PROM) for children and young adults born with a cleft lip and/or palate. Decision trees may be able to integrate clinician-reported data (e.g., age, gender, cleft type, and planned treatments) to make these assessments even shorter and/or more accurate.</p> </sec> <sec> <title>OBJECTIVE</title> <p>We aimed to create decision tree models that incorporate clinician-reported data into adaptive CLEFT-Q assessments and to compare their accuracy with that of traditional CAT models.</p> </sec> <sec> <title>METHODS</title> <p>We used relevant clinician-reported data and patient-reported item responses from the CLEFT-Q field test to train and test decision tree models using recursive partitioning. We compared the prediction accuracy of decision trees with that of CAT assessments of similar length. Participant scores from the full-length questionnaire were used as the ground truth. Accuracy was assessed through the Pearson correlation coefficient between predicted and ground-truth scores, the mean absolute error, the root-mean-square error, and a two-tailed Wilcoxon signed-rank test comparing absolute errors.</p> </sec> <sec> <title>RESULTS</title> <p>Decision trees demonstrated poorer accuracy than the CAT comparators and generally split the data on item responses rather than on clinician-reported data.</p> </sec> <sec> <title>CONCLUSIONS</title> <p>When predicting CLEFT-Q scores, individual item responses are generally more informative than clinician-reported data. Decision trees that make binary splits risk underfitting polytomous PROM scale data and demonstrated poorer performance than CATs in this study.</p> </sec>
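The accuracy comparison described in the METHODS section can be sketched as follows. This is an illustrative example only, not the study's analysis code: the scores and prediction errors are simulated, and the variable names (`tree_pred`, `cat_pred`) are hypothetical stand-ins for the decision tree and CAT score predictions evaluated against ground-truth full-length CLEFT-Q scores.

```python
# Illustrative sketch of the accuracy metrics used in the study:
# Pearson correlation, MAE, RMSE, and a two-tailed Wilcoxon
# signed-rank test on paired absolute errors. Data are simulated.
import numpy as np
from scipy.stats import pearsonr, wilcoxon

rng = np.random.default_rng(0)
truth = rng.uniform(0, 100, size=200)            # simulated ground-truth scores
tree_pred = truth + rng.normal(0, 8, size=200)   # noisier decision-tree predictions
cat_pred = truth + rng.normal(0, 5, size=200)    # more accurate CAT predictions

def accuracy(pred, truth):
    """Return the three agreement metrics for one predictor."""
    err = pred - truth
    return {
        "r": pearsonr(pred, truth)[0],         # Pearson correlation coefficient
        "mae": float(np.mean(np.abs(err))),    # mean absolute error
        "rmse": float(np.sqrt(np.mean(err ** 2))),  # root-mean-square error
    }

tree_metrics = accuracy(tree_pred, truth)
cat_metrics = accuracy(cat_pred, truth)

# Paired two-tailed test: do the two models differ in absolute error?
stat, p = wilcoxon(np.abs(tree_pred - truth), np.abs(cat_pred - truth))
```

The Wilcoxon signed-rank test is appropriate here because the two predictors are evaluated on the same participants, giving naturally paired absolute errors without assuming they are normally distributed.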
JMIR Publications Inc.