1. A Novel Approach for Developing Rating Scales for a Practice Ready OSCE
Bruce Holmes
Division of Medical Education, Dalhousie University
Saad Chahine
Mount Saint Vincent University
2. Clinician Assessment for Practice Program (CAPP)
• A program of the College of Physicians and Surgeons of Nova Scotia (CPSNS)
• A 3-part program assessing international medical graduate (IMG) physicians (residents in Canada, without formal Canadian training) for practice readiness.
3. The CAPP Program
Part A: Initial assessment
• Assessment of competence via OSCE & therapeutics exam
Part B: 1-year mentorship with a family physician
• Defined license for 1 year with performance assessment
Part C: Additional 3 years of defined license until certified by the College of Family Physicians of Canada (CCFP designation)
4. Purpose of the CAPP OSCE
1. To assess the clinical competence of IMG candidates for readiness to practice.
2. To provide feedback on candidates' performance for their continuing professional development.
• The OSCE is not a “pass or fail” examination, but rather one factor in the decision to grant a defined license to practice in Nova Scotia.
5. Examining the Constructs of the Rating Form
• Re-design rating form
• Provide rater guidelines on how to be an examiner
• Two online practice videos for raters
– Results submitted
– Group data reported to compare with individual rating
– Personal feedback available
6. OSCE Competencies
• History Taking
• Physical Exam
• Communication Skills
• Quality of Spoken English
• Counselling
• Professional Behaviour
• Problem Definition & Diagnosis*
• Investigation & Management*
• Overall Global*
* Rated after physician examiners ask questions at the 10-minute mark (see notes)
8. Purpose of Elements
“Guided Conceptualization”
• The elements act as a guide to forming a conceptualization of the competency
• In theory, the elements support the examiner's self-regulatory process, building “awareness and management of one’s own thought” (Kuhn & Dean, 2004)
• Provide “standardized” comments as feedback for candidates and the Credentials Committee of the College
9. [Image slide: sample rating form showing bubbled Elements and the competency scale (see notes)]
10. [Image slide: sample candidate feedback report (see notes)]
11. How are Elements Related to Competencies?
• Reviewed the ratings in the 2012 OSCE
– 36 candidates
– Total of 432 observations
• Each set of elements was correlated with its respective competency
– e.g., History; Investigation & Management
• Spearman correlations between the element ratings and the respective competency rating were conducted one at a time (a code sketch follows this slide)
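A minimal sketch of this element-by-element analysis in Python with pandas and SciPy. The toy ratings and column names below are hypothetical illustrations, not CAPP data:

```python
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical long-format ratings: one row per candidate-station observation,
# with element ratings and the examiner's overall competency rating.
ratings = pd.DataFrame({
    "breadth":    [3, 4, 2, 5, 4, 3],
    "relevance":  [3, 5, 2, 4, 4, 3],
    "competency": [3, 4, 2, 5, 4, 2],  # e.g., overall History Taking rating
})

# Correlate each element with the competency rating, one at a time,
# mirroring the element-by-element Spearman analysis on the slide.
for element in ["breadth", "relevance"]:
    rho, p = spearmanr(ratings[element], ratings["competency"])
    print(f"{element}: rho = {rho:.2f} (p = {p:.3f})")
```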
12. Results
• History Taking; Counselling; Investigation & Management:
– High positive correlation for all element ratings with the respective competency rating (r-values 0.74-0.82)
• Problem Definition and Diagnosis:
– High positive correlation (r-values 0.70-0.81) except for alternate diagnosis (r=0.61)
• Physical Exam:
– High positive correlation (r-values 0.70-0.75) except for sensitivity (r=0.61)
13. Future Analysis for Competencies – CFA
[Path diagram: History Taking as a latent competency measured by the element indicators Breadth, Relevance, Focus, Sequencing, and Chronology]
• Include data from 2013 & 2014
• Confirmatory Factor Analysis conducted through Structural Equation Modeling (see the sketch below)
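A hedged sketch of what the planned CFA might look like in code, assuming the Python semopy package for structural equation modeling. Only the History Taking factor from the slide's diagram is specified here, and the data file and column names are hypothetical; the actual analysis would add the other competencies and let the latent factors covary:

```python
import pandas as pd
import semopy

# Lavaan-style model description: History Taking as a latent competency
# measured by its five element ratings, per the slide's path diagram.
model_desc = """
HistoryTaking =~ breadth + relevance + focus + sequencing + chronology
"""

# Hypothetical file: one column per element rating, one row per observation.
ratings = pd.read_csv("osce_element_ratings.csv")

model = semopy.Model(model_desc)
model.fit(ratings)
print(model.inspect())           # factor loadings and variances
print(semopy.calc_stats(model))  # fit indices (CFI, RMSEA, etc.)
```

Because CFA fits all indicators simultaneously, this is what lets the analysis check whether the whole element-competency structure “hangs together,” as the notes below describe.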
14. Take Home Messages
• Strong positive associations between elements and competency ratings
– Appear helpful in guiding examiner ratings
– Provide helpful feedback on competencies
• Follow up with refinement of elements using examiner feedback
• Follow up with data from the June 2013 & 2014 OSCEs
Problem Definition & Diagnosis (PDD), Investigation & Management (IM), and Overall Global (OG) are rated after the physician examiners (PEs) ask questions at the 10-minute mark. Candidates have 4 minutes to answer questions related to PDD and IM, and there are 3 minutes between candidates.
Note: the Elements were developed from themes based on “free text” narrative comments from physician examiners in previous CAPP OSCEs
PEs are asked to consider the “pattern” of bubbled Elements when rating the competency. In this example, a PE would likely rate the competency as either Borderline or Satisfactory; the Poor and Very Good options would not be consistent with the Elements.
The Credentials Committee receives the same report as candidates. The more green, the better: green is good, yellow is borderline, and red is poor/inferior.
Note: Spearman rank correlations
Confirmatory Factor Analysis allows us to mathematically model all the competencies and elements at the same time and look at how the whole structure “hangs together.” Additionally, it confirms the conceptual structure of the form. CFA is important in telling us that the conceptual design of the forms allows us to be representative in the decisions we make about candidates. Reliability tells us whether the measures are consistent; this analysis tells us whether the conceptual meaning of the test is operationalized in the ways we intended.