Author unknown - MedEdWorld and AMEE 2013 Conference Connect - page 91


Martin Valcke (University Ghent, Department of Educational Studies, Ghent, Belgium)

Cees P.M. van der Vleuten (Maastricht University, Department of Educational Development and Research, Faculty of Health, Medicine and Life Sciences, Maastricht, Netherlands)

Introduction: Health professionals need to be able to engage in continuous competency development throughout their career. Competency development relies on a continuous reflective process, built on two cognitive processes that differ in timing and focus: immediate reflection on performance and delayed reflection on competency development. We aimed to compare students' perceptions of the learning value and the perceived effect of the two reflection activities. Our main research questions were:

1. What is the perceived learning value of reflective writing immediately after performance versus delayed reflective writing on progress in competency development, and which approach is most valued by learners and recent graduates?

2. What is the perceived effect of the two reflective writing activities on learning?

Methods: 142 respondents (students and recent graduates) completed a questionnaire with closed-ended and open-ended questions about their perceptions of the two reflective activities. Quantitative and qualitative data were triangulated to identify core findings.

Results: Immediate reflection on performance was valued above delayed reflection on competency development. A positive effect of delayed reflection on learning was perceived only (retrospectively) by the graduates; the other year groups were much less appreciative of delayed reflective writing. Immediate reflection on performance was perceived by all groups to promote learning, because it facilitated moment-by-moment improvement and a two-way feedback process. Delayed reflection seemed more helpful for facilitating an overall self-assessment, self-confidence and continuous practice improvement. The following suggestions were made to enhance the learning effect of both reflective writing activities: limiting immediate reflection to challenging learning experiences, limiting delayed reflection to longer time intervals, limiting the number of competencies, and allowing more time for observation, reflection, feedback and a progress dialogue.

Discussion and Conclusion: Although all respondents prefer reflection on performance, adding a reflective writing activity focusing on progress might facilitate immediate and optimal improvement during the current internship as well as promote longitudinal competency development across internships.

References: 1. Driessen, E., van Tartwijk, J., Overeem, K., Vermunt, J. D. & van der Vleuten, C. P. M. (2005). Conditions for successful reflective use of portfolios in undergraduate medical education. Medical Education, 39, 1230-1235.

2. Sagasser, M. H., Kramer, A. W. M., van der Vleuten, C. P. M. (2012). How do postgraduate GP trainees regulate their learning and what helps and hinders them? A qualitative study. BMC Medical Education, 12, 67.


How Theory and Causal Assumptions can Guide Data Analysis and Inference in Medical Education Research

Benjamin Boerebach (Academic Medical Center, Professional Performance Research Group, Center for Evidence-Based Education, Meibergdreef 9, Amsterdam 1105 AZ, Netherlands)

Kiki Lombarts (Academic Medical Center, Professional Performance Research Group, Center for Evidence-Based Education, Amsterdam, Netherlands)

Albert Scherpbier (Maastricht University, Faculty of Health, Medicine and Life Sciences, Maastricht, Netherlands)

Onyebuchi Arah (University of California, Los Angeles (UCLA), UCLA Field Center for Health Policy Research; Department of Epidemiology, Los Angeles, United States)

Introduction: Researchers often have to make causal assumptions in the process of analyzing data, interpreting results and reaching conclusions. In some areas of medical education research, the evidence supporting these causal assumptions is scarce because of the small number of relevant empirical studies conducted in the specific area. Therefore, it often remains unclear why certain assumptions are made and what effect these assumptions have had on the research findings (Groenwold et al., 2008). This study explored the implications of causal assumptions for medical education research, illustrated by a case study about the influence of faculty's teaching performance on their role modeling behavior (Boerebach et al., 2012).

Methods: We used the formal language and diagrams of the modern Structural Causal Model (Pearl, 2009) to guide and interpret a re-analysis of data from a previously published study on the influence of faculty's teaching performance on their role modeling behaviors as teacher-supervisor, physician and person (Boerebach et al., 2012). To illustrate the hypothetical causal relationships between faculty's teaching performance and their role modeling behaviors, all plausible causal diagrams were drawn (Greenland et al., 1999). Subsequently, these causal diagrams were translated into corresponding statistical models, and multilevel analyses were performed to estimate the different effects (expressed as odds ratios) for each relationship between faculty's teaching performance and their role modeling behaviors.

Results: Overall, four different statistical models emerged for each outcome variable (role modeling behaviors as teacher-supervisor, physician and person). The results of these different statistical models showed major differences in the magnitude of the relationship between faculty's teaching performance and their role modeling behaviors. Across the different statistical models, the odds ratios relating teaching performance to the three role model typologies ranged from 31.1 to 73.6 for the teacher-supervisor role, from 3.7 to 15.5 for the physician role, and from 2.8 to 13.8 for the person role.

Discussion and Conclusion: As we found in this nuanced re-analysis, moving from associations to inferring effects using non-experimental data requires (some untestable) causal assumptions about the interrelationships between key study variables. The causal assumptions guided choice of variable adjustment, model choice, and interpretation of the possibly different effect estimates. Since different causal or relational assumptions can lead to different analytical models, results interpretation, and practice implications in non-experimental medical education research, it is important that authors be transparent to their readership about their causal assumptions and subsequent results interpretation given those assumptions.
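How the chosen adjustment set changes an effect estimate can be illustrated with a small simulation: two analyses of the same data, one ignoring and one adjusting for a confounder, as two different causal diagrams would dictate. This is a hedged sketch on synthetic data, not the study's analysis; the confounder, prevalences and effect sizes are invented, and a simple Mantel-Haenszel estimator stands in for the multilevel models used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical data-generating process (invented for illustration):
# C confounds the "teaching performance" -> "role model" relationship.
c = rng.binomial(1, 0.5, n)                   # confounder
t = rng.binomial(1, 0.3 + 0.4 * c)            # exposure: high teaching performance
y = rng.binomial(1, 0.2 + 0.3 * t + 0.3 * c)  # outcome: seen as role model

def odds_ratio(t, y):
    """Crude odds ratio from a 2x2 table."""
    a = np.sum((t == 1) & (y == 1)); b = np.sum((t == 1) & (y == 0))
    c_ = np.sum((t == 0) & (y == 1)); d = np.sum((t == 0) & (y == 0))
    return (a * d) / (b * c_)

crude = odds_ratio(t, y)  # analysis under a diagram with no confounding

# Mantel-Haenszel OR, adjusting for C: the adjustment a diagram
# that names C as a common cause would require.
num = den = 0.0
for level in (0, 1):
    m = c == level
    a = np.sum(m & (t == 1) & (y == 1)); b = np.sum(m & (t == 1) & (y == 0))
    c2 = np.sum(m & (t == 0) & (y == 1)); d = np.sum(m & (t == 0) & (y == 0))
    num += a * d / m.sum()
    den += b * c2 / m.sum()
adjusted = num / den

print(f"crude OR:    {crude:.2f}")
print(f"adjusted OR: {adjusted:.2f}")
```

Under this data-generating process the crude odds ratio is inflated relative to the adjusted one, mirroring how the re-analysis found effect estimates that varied substantially with the causal assumptions behind each model.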

References: (1) Groenwold RH, Van Deursen AM, Hoes AW, Hak E. Poor quality of reporting confounding bias in observational intervention studies: a systematic review. Ann Epidemiol 2008; 18(10):746-751.

(2) Boerebach BC, Lombarts KM, Keijzer C, Heineman MJ, Arah OA. The teacher, the physician and the person: how faculty's teaching performance influences their role modelling. PLoS One 2012; 7(3):e32089.

(3) Pearl J. Causal inference in statistics: an overview. Statistics Surveys 2009; 3: 96-146.

(4) Greenland S, Pearl J, Robins JM. Causal diagrams for epidemiologic research. Epidemiology 1999; 10(1):37-48.



Score Gains for Repeat International Medical Graduates on a Performance-Based United States Medical Licensing Examination

Kimberly Swygert (National Board of Medical Examiners, Scoring Services, 3750 Market Street, Philadelphia 19104, United States)

Alex Chavez (National Board of Medical Examiners, Test Development, Philadelphia, United States)

Steven Peitzman (Educational Commission for Foreign Medical Graduates, Clinical Skills Evaluation Collaboration, Philadelphia, United States)


Mark Raymond (National Board of Medical Examiners, Test Development, Philadelphia, United States)

Introduction: The literature on repeater performance on performance-based standardized patient exams has reported score gains both across initial and repeat testing sessions, indicating a remediation or learning effect, and over multiple encounters within a single exam session, indicating a warm-up or practice effect.1-3 One recent study analyzed the communication and interpersonal skills (CIS), data gathering (DG), and patient note (PN) scores of United States (US) medical students who failed and repeated the United States Medical Licensing Examination® (USMLE®) Step 2 Clinical Skills (CS). It found that within-session score gains were present on each component and attributable to a pattern of score increases over the first few encounters on both first and second attempts, indicating a practice effect within session on both takes.4 A significant between-session score gain, not attributable to the practice effect, was also found for each component, indicating a true improvement in performance between takes. The current paper extends the previous analyses to international medical graduates (IMGs) repeating the exam during the same time period. The specific research question was: do IMGs show the same pattern of between-session and within-session score gains as USMGs, and if not, what do the patterns indicate for IMGs in terms of both practice and remediation effects?

Methods: The data included encounter-level scores for 12,394 international (non-US or Canadian) medical students and graduates (IMGs) who took Step 2 Clinical Skills twice between April 1, 2005 and December 31, 2010. This group includes examinees who report a language other than English as their first language. To test specific hypotheses about the within-session score gains, we modeled score patterns using smoothing and regression and applied statistical tests to determine whether the patterns were the same or different across attempts.
In addition, we tested whether any between-session score increase could be explained by the first attempt within-session score trajectory. These within-session and between-session results were compared to the previous study that used USMG subjects for the CIS, DG, and PN components.

Results: Within-session score gains were observed on the CIS, DG, and PN components; these were attributable to a pattern of score increases over the first 3-6 encounters, followed by a subsequent leveling off, for both the first and second attempts. The gains were similar to those observed for USMGs in previous research.4 Hypothesis tests based on model predictions revealed that the between-session score gains, while significant, were small compared to the gains seen for USMGs in the previous study.

Discussion and Conclusion: Within-session score patterns reflect a temporary "warm-up" effect that disappears within 3-6 encounters but "resets" between testing attempts. Between-session gains are significant but not meaningful in size, perhaps indicating a lack of effective remediation between exam attempts for IMGs in general. Further implications of the findings, especially with respect to the validity of inferences made on repeat administrations of Step 2 CS for IMGs, will be discussed in the full presentation.

References: 1. Boulet JR, McKinley DW, Whelan GP, Hambleton RK. The effect of task exposure on repeat candidate scores in high-stakes standardized patient assessments. Teach Learn Med. 2003;15(4):227-232.

2. Swygert KA, Balog KP, Jobe A. The impact of repeat information on examinee performance for a large-scale standardized-patient examination. Acad Med.


3. Ramineni C, Harik P, Margolis MJ, Clauser BE, Swanson DB, Dillon GF. Sequence effects in the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) examination. Acad Med. 2007;82(suppl 10):S101-S104.

4. Chavez AK, Swygert KA, Peitzman SJ, Raymond MR (2013). Within-session score gains for repeat examinees on a standardized patient examination. Acad Med (in press).
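The modelling strategy described in this abstract, a warm-up effect over the early encounters plus a between-session gain, can be sketched as a simple regression on simulated encounter-level scores. All numbers below (cohort size, plateau point, effect sizes) are invented for illustration; the actual study used smoothing and formal hypothesis tests on USMLE data.

```python
import numpy as np

rng = np.random.default_rng(1)
n_examinees, n_enc = 500, 12

# Hypothetical warm-up curve: scores rise over the first 4 encounters,
# then level off; the second attempt adds a small between-session gain.
def simulate(attempt_gain):
    enc = np.tile(np.arange(1, n_enc + 1), n_examinees)
    warm = np.minimum(enc, 4)          # plateau after encounter 4
    score = 60 + 1.5 * warm + attempt_gain + rng.normal(0, 5, enc.size)
    return enc, score

enc1, s1 = simulate(attempt_gain=0.0)  # first attempt
enc2, s2 = simulate(attempt_gain=2.0)  # repeat attempt: modest true gain

enc = np.concatenate([enc1, enc2])
score = np.concatenate([s1, s2])
attempt2 = np.concatenate([np.zeros_like(s1), np.ones_like(s2)])

# Design matrix: intercept, warm-up term, repeat-attempt indicator.
# The indicator separates any true between-session gain from the
# within-session practice effect, as in the study's hypothesis tests.
X = np.column_stack([np.ones_like(score), np.minimum(enc, 4), attempt2])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
print(f"warm-up slope:        {beta[1]:.2f}")
print(f"between-session gain: {beta[2]:.2f}")
```

With the warm-up term in the model, the attempt indicator recovers only the gain that is not explained by the first-attempt trajectory, which is the distinction the abstract draws between practice and remediation effects.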


Thinking like an Expert: Implications of a Theoretical Model of Intraoperative Decision-Making for Surgical Education

Sayra Cristancho (Western University, Surgery, Medical Biophysics and Centre for Education Research & Innovation, Health Sciences Addition, Room H110, London N6A 5C1, Canada)

Introduction: Currently, expertise research is grappling with the question of how experts adapt to novel challenges. Worldwide, researchers who study the practices of high-stakes professionals are learning how to better train for flexibility and innovation in the face of uncertainty. The present study seeks to further this understanding in the context of surgical education by exploring the challenges surgical experts encounter and the processes by which they assess and respond to challenges. Our purpose was to produce a theoretical model that can support further research and curriculum development for fostering surgeons' adaptive expertise. Methods: The study used an ethnographic methodology consisting of approximately 150 hours of non-participant observation and 32 semi-structured interviews immediately following 32 surgical cases. The cases, drawn from seven staff surgeons from a variety of surgical specialties, were purposively sampled after being pre-identified as "likely to include challenges" by the operating surgeon. We used constructivist grounded theory methodology with a two-stage analytical process. From the first analytical phase, a grounded theory of intraoperative decision-making emerged and the various elements of the model were identified. The second phase aimed to refine the description of the cycle and to consider how existing theoretical frameworks might further inform the interpretation of the data, as suggested by the tenets of grounded theory.

Results: The grounded theory developed during the first analytic phase consisted of three elements, Assessing the Situation, the Reconciliation Cycle, and Implementing the Planned Course of Action, plus two points of transition during which the surgeons continue to act although they may change the course of their action. The Reconciliation Cycle was identified as the main element in the model. During the second analytical phase, the Reconciliation Cycle was further elucidated as a continuous, iterative process of gaining information and transforming the information encountered during the course of the case. It was found that experts transform information by comparing it against what is expected or typical and/or against the planned course of action to obtain 'new meaning' that is useful for resolving the situation.

Discussion and Conclusion: The theoretical model developed in this study is the first step toward developing a language that captures recurring features of situation awareness and decision-making strategies in the surgical context. The Reconciliation Cycle is characterized as a dynamic and intertwined cognitive process in which reflection plays an important role in the way information is interpreted with a 'new meaning' by the surgeons. This characterization may serve as an overarching framework to further investigate the difference between how expert and non-expert surgeons create and implement strategies to cope with difficult and unexpected events. This study has produced a theoretical description of experts' cognitive strategies as they decode emergent challenges. While further research is required to elaborate and test the explanatory power of the language provided by this theoretical model, these insights will support the development of curricula to train for adaptive expertise in surgery.

8F Short Communications: Assessment


Location: Chamber Hall, PCC


A tale of two cities: a comparison of the Mini-CEX in primary care in two universities

Martina Kelly (University of Calgary, Family Medicine, 3330 Hospital Drive, Calgary T2N 2N1, Canada)

Deirdre Bennett (University College Cork, Medical Education, Cork, Ireland)

Caroline Sprake (University of Newcastle, Primary Care, Newcastle, United Kingdom)

Background: Workplace-based assessment is increasingly common, but little is known about its implementation in primary care. Two universities, University College Cork, Ireland, and the University of Newcastle, United Kingdom, use the mini-clinical evaluation exercise (Mini-CEX) in primary care to assess and give feedback to undergraduate medical students.

Summary of work: To compare and contrast our experience of using the Mini-CEX. Both universities used the same assessment form (derived from Foundation year training), and similar information was given to the participating family physicians. Primary care Mini-CEX assessments in both settings for the academic year 2010-2011 (UCC n=108, Newcastle n=178) were analysed to compare the types of cases used for assessment, the duration of the Mini-CEX, and the satisfaction of students and family physicians with the assessment.

Summary of results: A wide variety of case histories and examinations were used in both settings; the respiratory system was the system most commonly examined. The duration of the assessment (mean 20 minutes) was acceptable to busy primary care physicians and students. Detailed feedback (mean time 12 minutes) was given to students. Both students and assessors reported satisfaction with this type of assessment in both contexts. However, a number of differences exist between the two contexts in students' and assessors' expectations of the function of the assessment.

Conclusions: International collaboration facilitated scrutiny of local application procedures to enhance the reliability of this assessment format, e.g. through examiner training. This information will be used to help inform benchmarking and standardisation processes for this type of assessment in primary care.

Take-home messages: Use of the Mini-CEX is feasible within primary care.


Collaborating for success: International assessment and benchmarking of students' workplace performance

Sue McAllister (Flinders University, Speech Pathology, GPO Box 2100, Adelaide 5001, Australia)


Background: Collaborative development of assessment of workplace performance is important to ensure relevance, utility and engagement by all stakeholders. Summary of work: Speech pathology educators in Australia and New Zealand have continuously collaborated since 2001 to: 1. Develop a valid competency based assessment of students' performance in the workplace. 2. Embed the assessment tool into educational programs to support the unique learning and assessment structure of each program. 3. Develop a non-competitive strategy for cross-institutional benchmarking of student performance as an outcome measure to inform curriculum development.

Summary of results: 1. COMPASS® Online performance assessment validated and embedded into all speech pathology programs in Australia and New Zealand, and trialling in Malaysia, Hong Kong and Singapore. 2. Strategies for secure and collaborative cross-institutional benchmarking for curriculum improvement established. 3. Ongoing annual Asia-Pacific forums for sharing of and collaboration on curriculum evaluation and innovation.

Conclusions: The COMPASS® projects represent a highly collaborative and effective international process of performance assessment and curriculum development across a health profession. Consequently a shared understanding and language regarding the nature and process of developing competency by students, clinical and university educators and accreditors now exists across the profession.

Take-home messages: An international non-competitive approach to valid assessment and benchmarking of student performance in the workplace was achieved and yielded greater advantages than initially anticipated.


Anaesthesia training - trainees in the driving seat

Olly Jones (Australian and New Zealand College of Anaesthetists, Education, 630 St Kilda Road, Melbourne 3004, Australia)

Jodie Atkin (Independent Medical Education and Training Consultant, Sydney, Australia)

Background: The Australian and New Zealand College of Anaesthetists (ANZCA) launched a revised curriculum in 2013. The seven newly introduced ANZCA Clinical Fundamentals define fundamental anaesthesia knowledge and skills, while the professional attributes required of anaesthetists in contemporary practice are nurtured through the ANZCA Roles in Practice. Workplace-based assessments guide trainees through the curriculum.

Summary of work: The implementation of the curriculum affected 531 trainees. Workplace-based assessment (WBA) tools provide an improved structure for teaching, critical thinking and rich feedback on the Clinical Fundamentals and the ANZCA Roles in Practice. The College developed an online, mobile-compatible Training Portfolio System (TPS) which drives trainee learning through key milestones.

Summary of results: 77 WBA workshops were delivered before the curriculum launch, training 648 assessors and 67% of supervisors of training. 66% of trainees had interacted with the TPS in the first 2 months. Trainee focus groups confirmed a higher-quality experience. WBAs conducted in the first 2 months are providing far more standardized, structured feedback and guidance, and the introduction of the tools has been received positively.

Conclusions: WBAs and the TPS have been instrumental in the introduction of the revised curriculum. Trainees can explore the curriculum, consider unique opportunities for their learning and highlight their learning needs. Trainees are in the driver's seat, and the TPS provides supervisors with a vehicle to proactively monitor trainee progression.

Take-home messages: A revised curriculum, supported by a range of tightly aligned formative assessments and an online training portfolio drives teaching, regular feedback and learning.


What is the best way to use clinical supervisors' assessment?

Mark McLean (University of Western Sydney, School of Medicine, Locked Bag 1797, Penrith New South Wales, Sydney 2751, Australia)

Vicki Langendyk (University of Western Sydney, School of Medicine, Sydney, Australia)

Background: Clinical attachment supervisors are essential members of the teaching faculty, but they vary widely in their approach to assessing the students attached to their clinical teams. It is difficult to standardize their marking of student performance in clinical attachments.

Summary of work: We reviewed the clinical attachment assessment (CAA) marks awarded by supervisors in 1156 episodes of hospital-based attachments for 256 students in their first clinical year of an undergraduate medical program. We compared these marks with the students' performance in written and OSCE examinations in the same year.

Summary of results: Clinical supervisors were very generous with marks and discriminated poorly between students who scored high and low in written assessments. The median CAA score was 80%, with a very narrow range of scores (sd=6), and no student received a failing CAA grade. The written examination median score was 63% (sd=9, minimum 41%, with 13 scores below 50%). The correlation between CAA scores and the written examination results was very weak (r=0.31). However, clinical supervisors were readily able to recommend failure based on criteria of professional conduct or attendance, and students find their informal feedback useful.

Conclusions: Clinical supervisors are good at recognizing poor professionalism or attendance, but are otherwise uniformly generous with assessment marks. CAA marks are poor discriminators of overall student performance. Supervisors' assessments are better suited to formative feedback, plus hurdle requirements on professionalism and attendance.
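The weak CAA-written correlation is consistent with range restriction: generous, compressed marking leaves little variance for ability to explain. The toy simulation below reproduces the pattern; the latent-ability model and every parameter are assumptions chosen only to roughly match the reported summary statistics (CAA median ~80%, sd ~6; written median ~63%, sd ~9; r ~0.31), not the study's data.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 256  # roughly the cohort size reported in the study

ability = rng.normal(0, 1, n)  # latent clinical ability (assumed)

# Hypothetical marks: the written exam spreads students out, while
# supervisors' CAA marks are generous and compressed around 80%.
written = np.clip(63 + 9 * ability + rng.normal(0, 4, n), 0, 100)
caa = np.clip(80 + 2 * ability + rng.normal(0, 5.5, n), 0, 100)

r = np.corrcoef(caa, written)[0, 1]
print(f"CAA sd:      {caa.std():.1f}")
print(f"correlation: {r:.2f}")
```

Widening the CAA spread (e.g. 6 * ability instead of 2 * ability) raises the simulated correlation sharply, which is one way to see that a low r can reflect compressed marking as much as poor supervisor judgement.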

Take-home messages: Clinical supervisors' assessments should not be used as summative assessments. However, they are useful for formative feedback and for detecting poor professional standards.

8G Short Communications: Curriculum Evaluation

Location: Conference Hall, PCC


Beyond course evaluation: Concept development of an ongoing theory-based competency and curriculum evaluation

Evelyn Bergsmann (University of Vienna, Faculty of Psychology - Educational Psychology and Evaluation, Universitaetsstrasse 7, Vienna 1010, Austria)

Petra Winter (University of Veterinary Medicine Vienna, Vice-rectorship for Study Affairs, Vienna, Austria)

Barbara Schober (University of Vienna, Faculty of Psychology - Educational Psychology and Evaluation, Vienna, Austria)

Christiane Spiel (University of Vienna, Faculty of Psychology - Educational Psychology and Evaluation, Vienna, Austria)

Background: The systematic evaluation of student competencies and of curricula is rarely implemented, yet it would sustainably enhance teaching quality and, consequently, student competencies. Hence, the veterinary medicine universities of the German-speaking countries decided to conduct a pilot study to develop, implement and evaluate a theory-based concept for ongoing competency and curriculum evaluation.

Summary of work: A two-step procedure was applied: (1) defining the theoretical framework for the concept based on the respective literature, and (2) identifying the evaluation goals of the University of Veterinary Medicine Vienna.

Summary of results: The procedure resulted in an evaluation concept for competencies and the curriculum which includes (a) an ideal and a real perspective, i.e. what and how students should learn versus what and how they do learn, (b) the student perspective and the lecturer/instructor perspective, and (c) annual data collection at a crucial phase in the middle as well as at the end of the curriculum.

Conclusions: Evaluation results should inform the rectorship/senate in making evidence-based decisions, help lecturers/instructors enhance their teaching quality, and inform students about their individual competence profiles. To realize the concept at the university and fulfil the criteria for empowerment and utilization-focused evaluation, four teams have been established and are being trained by evaluators in a four-semester program.

Take-home messages: Competency and curriculum evaluation should be theory-based, involve the stakeholders from the beginning, and include different perspectives. To conduct an ongoing evaluation, evaluators have to build up evaluation capacity and an evaluation culture at the university.



Study diaries as sensitive detection instrument and basis for current interventions in the process of curriculum implementation

Tanja Hitzblech (Charité Universitätsmedizin Berlin, Dieter Scheffner Fachzentrum, Invalidenstr. 80-83, Berlin 10117, Germany)

Asja Maaz (Charité Universitätsmedizin Berlin, Dieter Scheffner Fachzentrum, Berlin, Germany)

Sabine Schmidt (Charité Universitätsmedizin Berlin, Dieter Scheffner Fachzentrum, Berlin, Germany)

Harm Peters (Charité Universitätsmedizin Berlin, Dieter Scheffner Fachzentrum, Berlin, Germany)

