Two years later: A Review of Assessment Strategies for a Pilot Critical Care Medicine Entrustable Professional Activities (EPA) - Based Curriculum
CCCF ePoster library. Bridge S. 11/11/19; 285166; EP4
Dr. Suzanne Bridge

Abstract
ePoster
Topic: Education



Background: Competency by Design (CBD) is a curriculum framework recently launched in Canada with the goals of improving trainee assessment and addressing the 'failure to fail' dilemma frequently described in medical education. To do this, CBD relies on individual focused assessments centered on achieving competence at the entrustable professional activities (EPAs) of a medical discipline. Assessment tools typically contain an entrustment rating as well as qualitative feedback. Almost two years into our launch of CBD in the Critical Care Medicine (CCM) Program at Queen's University, we sought to review the quantity and quality of the data acquired during our pilot 33-EPA CCM curriculum.

Objective: The goal of this analysis was to review the data obtained to date in our pilot EPA-based curriculum. We examined the quantity of acquired assessments and evaluated the two entrustment scales in use to ascertain whether they could appropriately discriminate trainee ability.

Methods: Using our online data acquisition platform, Medtech, all data from assessments performed for trainees enrolled in the CBD-based CCM curriculum from July 1, 2017 to May 30, 2019 were extracted. To evaluate our assessment strategy, we reviewed the quantity of individual assessments and examined the distribution of responses on each entrustment scale against the accompanying qualitative feedback. Two entrustment scales were the primary focus: the O-score, used to assess procedure-based EPAs, and a 3-point entrustment scale used for non-procedure-based EPAs.
 
Results: A total of 11 trainees generated 1056 independent assessments. Of the 353 O-score assessments, the distribution of final entrustment decisions was: 4 (1.1%) 'I had to do', 23 (6.5%) 'I had to talk them through', 48 (13.6%) 'I had to prompt them from time to time', 117 (33.1%) 'I needed to be there just in case' and 353 (49.9%) 'I did not need to be there'. Of the 782 3-point entrustment scale assessments, the distribution of final entrustment decisions was: 4 (0.5%) 'not yet', 81 (10.4%) 'almost' and 703 (89.9%) 'yes'. When compared with the corresponding qualitative data, the 3-point entrustment scale failed to provide an accurate reflection of the 'competence' of trainees at a given EPA.
 
Discussion: Overall, trainees were able to accumulate a substantial number of assessments (approximately 100 per person) and thus were able to meet the requirements of our curriculum. For assessments of procedure-based EPAs, the O-score entrustment scale yielded a wide distribution across the entire range of entrustment categories. Further, when compared with the qualitative data, there was good correlation between the O-score and written feedback. This suggests that in our assessment of procedural skills, we are observing individuals at all stages of their progression towards competency. In the case of non-procedural EPAs, trainees were deemed entrustable in approximately 90% of cases. However, the qualitative data often raised concerns that seemed incongruent with entrustment. This would suggest persistence of the 'failure to fail' dilemma.

Conclusion: In our current model, we were able to obtain enough assessments to satisfy our pilot curriculum. However, in this inception cohort, the Queen's 3-point entrustment scale failed to discriminate competence in non-procedural EPAs. Whether this is because of the design or application of this scale in this setting remains to be elucidated.

