
<sec> <title>BACKGROUND</title> <p>Computerized adaptive testing (CAT) has been shown to deliver short, accurate and personalized versions of the CLEFT-Q patient-reported outcome measure (PROM) for children and young adults born with a cleft lip and/or palate. Decision trees may be able to integrate clinician-reported data (e.g. age, gender, cleft type and planned treatments) to make these assessments even shorter and/or more accurate.</p> </sec> <sec> <title>OBJECTIVE</title> <p>We aimed to create decision tree models that incorporated clinician-reported data into adaptive CLEFT-Q assessments, and to compare their accuracy with that of traditional CAT models.</p> </sec> <sec> <title>METHODS</title> <p>We used relevant clinician-reported data and patient-reported item responses from the CLEFT-Q field test to train and test decision tree models using recursive partitioning. We compared the prediction accuracy of decision trees with that of CAT assessments of similar length. Participant scores from the full-length questionnaire were used as ground truth. Accuracy was assessed through Pearson’s correlation coefficient of predicted and ground truth scores, mean absolute error, root mean squared error, and a two-tailed Wilcoxon signed-rank test comparing absolute error.</p> </sec> <sec> <title>RESULTS</title> <p>Decision trees demonstrated poorer accuracy than CAT comparators, and generally made data splits based on item responses rather than clinician-reported data.</p> </sec> <sec> <title>CONCLUSIONS</title> <p>When predicting CLEFT-Q scores, individual item responses are generally more informative than clinician-reported data. Decision trees that make binary splits are at risk of underfitting polytomous PROM scale data and demonstrated poorer performance than CATs in this study.</p> </sec>
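The accuracy metrics named in the Methods section (Pearson's correlation, mean absolute error, root mean squared error, and a paired two-tailed Wilcoxon signed-rank test on absolute errors) can be sketched as follows. This is a minimal illustration using SciPy and NumPy with synthetic, hypothetical scores; the score values, sample size, and variable names are assumptions for demonstration only and are not taken from the study.

```python
import numpy as np
from scipy.stats import pearsonr, wilcoxon

def accuracy_metrics(predicted, ground_truth):
    """Compute Pearson's r, MAE, and RMSE of predictions vs. ground truth."""
    predicted = np.asarray(predicted, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    err = predicted - ground_truth
    return {
        "pearson_r": pearsonr(predicted, ground_truth)[0],
        "mae": float(np.mean(np.abs(err))),
        "rmse": float(np.sqrt(np.mean(err ** 2))),
    }

# Hypothetical scores on a 0-100 scale (synthetic, for illustration only).
truth     = np.array([55, 62, 70, 48, 81, 66, 59, 73])  # full-length PROM scores
tree_pred = np.array([50, 60, 75, 45, 78, 70, 55, 70])  # decision tree predictions
cat_pred  = np.array([54, 63, 69, 49, 80, 67, 58, 74])  # CAT predictions

m_tree = accuracy_metrics(tree_pred, truth)
m_cat = accuracy_metrics(cat_pred, truth)

# Paired two-tailed Wilcoxon signed-rank test comparing absolute errors
# of the two methods on the same participants.
abs_err_tree = np.abs(tree_pred - truth)
abs_err_cat = np.abs(cat_pred - truth)
stat, p_value = wilcoxon(abs_err_tree, abs_err_cat, alternative="two-sided")
```

In this sketch the test is paired because both methods are evaluated on the same participants, matching the study's within-subject comparison of decision trees against CAT assessments of similar length.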

Original publication

DOI: 10.2196/preprints.26412
Type: Journal article
Publisher: JMIR Publications Inc.
Publication Date: 14/12/2020