Plenary - 03: (EN)TRUST ME: VALIDATING AN ASSESSMENT RUBRIC FOR DOCUMENTING CLINICAL ENCOUNTERS DURING A SURGERY CLERKSHIP CLINICAL SKILLS EXAM.
Tess H Aulet, MD1, Jesse Moore, MD, FACS1, Peter Callas, PhD1, Cate Nicholas, MS, PA, EdD2, Michael Hulme, PhD3; 1University of Vermont Medical Center, 2Larner College of Medicine at the University of Vermont, 3Wake Forest School of Medicine
Introduction: EPA 5, one of the Core Entrustable Professional Activities (EPAs) put forth by the AAMC for undergraduate medical education, is "Document a clinical encounter in the patient record." Our goal was to develop an assessment rubric and gather evidence to support its validity in measuring progress towards entrustability for this EPA.
Methods: We developed a rubric, derived from the literature, for specific EPA 5 functions assessing students' documentation of history, physical exam, differential diagnosis, diagnostic justification, and workup. During the 2017 third-year surgery clerkship, students completed two standardized patient (SP) clinical skills exams, after each of which they wrote a note. Notes from 57 students were prospectively collected. Each note was assessed by two blinded physician raters (n=114 ratings); a subset of notes (n=20) was assessed by four raters. Raters first assigned a global note score (1-5 scale) and then completed the rubric; items within each domain were summed to generate an overall note score. Validity evidence was gathered using Messick's framework. Relationship to other variables was evaluated by correlating note scores with SP checklists and relevant clinical evaluations. Interrater reliability (IRR) was examined with intraclass correlation coefficients (ICC), and internal structure with Cronbach's alpha.
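The two reliability statistics named above (ICC for interrater reliability, Cronbach's alpha for internal structure) can be sketched in Python. This is an illustrative implementation only, not the authors' analysis code; in particular, the ICC form shown, ICC(2,1) (two-way random effects, absolute agreement, single rater), is an assumption, since the abstract does not state which ICC model was used.

```python
import numpy as np

def cronbach_alpha(items):
    """Internal consistency of a set of rubric items.
    items: array-like, shape (n_subjects, n_items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of summed scores
    return k / (k - 1) * (1 - item_vars / total_var)

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    ratings: array-like, shape (n_subjects, n_raters)."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    # Two-way ANOVA sums of squares
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # raters
    ss_total = ((Y - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

With perfectly agreeing raters (e.g. two identical columns of scores), both statistics equal 1; disagreement between raters lowers the ICC toward 0.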
Results: The average overall note score was 43.4. IRR with two raters (n=114) was excellent for the overall note score, with ICC=0.86 (95% confidence interval [CI] 0.80-0.90). Across the individual note domains, ICCs ranged from 0.55 to 0.93. IRR with four raters was slightly lower. Internal consistency within rubric domains yielded Cronbach's alphas of 0.07-0.70. Correlation between the global note score and the overall note score was high (r=0.80, p<0.001). Correlations between note items and SP checklists ranged from 0.39 to 0.46 (p<0.05), and between note items and clinical evaluations from 0.28 to 0.39 (p<0.05).
Conclusions: The development of this rubric demonstrates initial evidence to support content validity. Correlations with other variables were moderate for both the clinical evaluation components and the SP checklist. IRR was moderate to excellent within individual domains and overall, decreasing slightly as the number of raters increased. This initial validity evidence supports the use of our rubric to assess progress towards entrustability for EPA 5.