MedEdWorld and AMEE 2013 Conference Connect
Discussion and Conclusion: In this investigation we demonstrated that all three System 2 feedback methods produced significant, large effect-size improvements in DDX performance. However, System 2 feedback consisting of a full compare-and-contrast strategy, identifying both similar and discriminating features, provided no additional benefit over a much simpler form of System 2 feedback: a listing of only the discriminating features, ordered by their relative importance.
References:
1. Gentner D, Loewenstein J, Thompson L (2003). Learning and transfer: A general role for analogical encoding. Journal of Educational Psychology.
2. Ark TK, Brooks LR, Eva KW (2007). The benefits of flexibility: The pedagogical value of instructions to adopt multifaceted diagnostic reasoning strategies. Medical Education, 41, 281-287.
10F Short Communications: Assessment
Location: Chamber Hall, PCC
Assessment practices are on the move
David Rosenthal (Flinders University, Rural Clinical School, PO Box 852, Renmark 5341, Australia) Lambert Schuwirth (Flinders University, Medical Education, Adelaide, Australia)
Background: In medical education there is increasing interest in flexible and personalised assessment. From the perspective of the ethics of equity it is becoming clear that standardisation is not the sole route to fairness and equity, especially in educational settings where the profiles of students in the medical program are diverse. There, standardised testing may lead to inequity rather than equity.
Summary of work: In the context of a rural placement programme built on the longitudinal integrated clerkship model, we have employed a flexible and personalised assessment system. The programme is politically sensitive, so our approach to equity is under close scrutiny. In our situation a deficiency model would have implied lowering standards to cater to the diversity, whereas a difference model has led to identifying and managing the strengths and weaknesses of each student.
Summary of results: These differences in strengths and weaknesses are not so much apparent as differences in learning styles or other stable characteristics; they are mainly located in intra-individual differences and in the interactions between student, teacher and the subject matter to be mastered. This is why standardised assessment would probably have created more inequity.
Conclusions: Optimising student-tailored teaching and assessment is important and seems intuitively right, but there are practicalities, and pros and cons, which we will discuss based on our experiences.
Take-home messages: Flexible assessment is often more equitable than standardised assessment in situations where the student body has diverse backgrounds.
An International Consortium for Assessment Networks (ICAN): facing the challenges of competency-based assessment
Achim Hochlehnert (University of Heidelberg, Center of Competence for Medical Assessment, Im Neuenheimer Feld 410, Heidelberg D-69120, Germany) Konstantin Brass (University of Heidelberg, Center of Competence for Medical Assessment, Heidelberg, Germany)
Andreas Möltner (University of Heidelberg, Center of Competence for Medical Assessment, Heidelberg, Germany)
Jobst-Hendrik Schultz (University of Heidelberg, Center of Competence for Medical Assessment, Heidelberg, Germany)
ABSTRACT BOOK: SESSION 10 WEDNESDAY 28 AUGUST: 0830-1015
Jana Jünger (University of Heidelberg, Center of Competence for Medical Assessment, Heidelberg, Germany)
Background: Good examinations require considerable resources. Besides medical faculties, several institutions that conduct assessment in postgraduate education or in other health care professions face the same challenge. There is a need for 1) the ability to exchange, standardize and compare appraisals of achievement, 2) efficient quality assurance, and 3) a path towards competency-based forms of assessment.
Summary of work: Therefore, in 2006, the Medical Assessment Alliance was founded to facilitate all relevant assessment processes in medical faculties. After several international partners had joined, the International Consortium for Assessment Networks (ICAN) was established as a non-profit umbrella organization for assessment alliances with different interests and focuses.
Summary of results: At present, ICAN covers 31 medical faculties in 6 countries across Europe. More than 3,500 users in 1,100 working groups are collaborating. To support cooperation within this network, the web-based ItemManagementSystem (IMS) was developed as an all-in-one working platform. Currently 122,000 questions are stored in the IMS, and since 2007 more than 6,100 examinations have been successfully conducted. Beyond the medical faculties, additional institutions and foundations, such as the European Board of Medical Assessors (EBMA) and several physician chambers, have decided to use the IMS for assessment at different steps of postgraduate education (assessment of clinical competence, board certification, etc.), some also for the assessment of examinees in other health care professions.
Conclusions: In total, 48 partners are united under the non-profit organization ICAN.
Take-home messages: ICAN partners can take advantage of all features of the IMS platform, which can be customized for their specific needs.
Variation in achievement patterns of medical students in final examinations in MBBS course and its reasons
Rehan Ahmed Khan (Islamic International Medical College, Riphah University, Surgery, Villa 13, Circular Avenue, Safari Villas 1, Rawalpindi 46000, Pakistan) Madiha Sajjad (Islamic International Medical College, Riphah University, Pathology, Rawalpindi, Pakistan) Masood Anwar (Islamic International Medical College, Riphah University, Pathology, Rawalpindi, Pakistan)
Background: The performance of medical students is not static across the MBBS course. Identifying the reasons students give for success and failure in their examinations helps explain the variations in their achievement patterns across the course.
Summary of work: Four years of final assessment data for first- to fourth-year MBBS students were collected. Thirty students (10 each with high, middle and low achievement scores) in the first year MBBS were selected from the data to track the variations in their ranks and grades over the next 3 years. These students were interviewed about the reasons they attributed to their success and failure, and Weiner's attribution theory was used to explain those reasons.
Summary of results: Only a few students maintained their ranks and grades consistently across all four years; a very high variation in ranks/grades was found. Effort and interest were the main reasons cited for success, whereas failure was mainly attributed to bad luck and task difficulty.
Conclusions: Weiner's attribution theory explains students' reasons for success and failure, and this helps identify the causes of variation in student achievement patterns.
Take-home messages: Identifying the right reasons at the right time for failure and success in examinations can help students to improve their grades.
How well do medical school assessments predict post-graduation performance?
Ming Lee (School of Medicine at UCLA, Center for Educational Development and Research, PO Box 951722, 60-051 CHS, Los Angeles, CA 90095-1722, United States) Michelle Vermillion (David Geffen School of Medicine at UCLA, Center for Educational Development and Research, Los Angeles, United States) LuAnn Wilkerson (David Geffen School of Medicine at UCLA, Center for Educational Development and Research, Los Angeles, United States)
Background: Although studies have shown the individual value of undergraduate assessments in predicting graduate performance, little is known about their relative contributions when examined simultaneously.
Summary of work: Two hundred and fifty-six (66%) graduates in the 2009 and 2011 residency program director surveys were rated by their supervisors. A multiple regression analysis used the ratings to examine the predictive values of several undergraduate assessments, including the United States Medical Licensing Examination (USMLE) Step 1, Step 2 Clinical Knowledge (CK), National Board of Medical Examiners (NBME) medicine exam, inpatient clerkship ratings, and an 8-station Clinical Performance Examination (CPX). A multivariate analysis of covariance (MANCOVA) was conducted to examine the differences among the three internship performance groups (Low, Medium, and High) in these measures, using Medical College Admission Test (MCAT) scores as a covariate.
Summary of results: Only the inpatient clerkship ratings and CPX scores contributed significantly (p < .01) to the prediction of internship performance. After controlling for differences in MCAT scores, we found significant group variations in the undergraduate measures (F(10, 488) = 2.91, p = .001). Follow-up analyses revealed significant (p < .01) differences in the inpatient clerkship and CPX assessments.
Conclusions: Performances in clinical settings were stronger predictors of internship performance than knowledge test scores. The undergraduate assessments demonstrated a collective relationship with the internship measurement, even when students' pre-medical school differences were held constant. Medical school assessments, especially those measuring clinical competencies, positively predicted post-graduation performance.
Take-home messages: Clinical assessments by undergraduate and graduate faculty members are comparable.
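The kind of analysis described above, regressing supervisor ratings on several undergraduate assessment scores and comparing their contributions, can be sketched as follows. All data here are synthetic and purely illustrative (the variable names and effect sizes are assumptions, not the study's data), and the fit uses plain least squares rather than the authors' statistical package:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 256  # cohort size, as in the study

# Synthetic (illustrative) undergraduate assessment scores -- not real data.
step1 = rng.normal(220, 15, n)      # USMLE Step 1
clerkship = rng.normal(7, 1, n)     # inpatient clerkship rating
cpx = rng.normal(70, 8, n)          # 8-station CPX total

# Simulate supervisor ratings driven mainly by the clinical measures,
# mirroring the abstract's finding that clinical assessments predict best.
rating = 0.5 * clerkship + 0.03 * cpx + rng.normal(0, 0.5, n)

# Ordinary least squares with an intercept column.
X = np.column_stack([np.ones(n), step1, clerkship, cpx])
beta, *_ = np.linalg.lstsq(X, rating, rcond=None)

# Proportion of rating variance explained by the model.
pred = X @ beta
ss_res = np.sum((rating - pred) ** 2)
ss_tot = np.sum((rating - rating.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")
```

In this toy setup the Step 1 coefficient comes out near zero while the clerkship and CPX coefficients carry the prediction, which is the qualitative pattern the abstract reports.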
Can preclinical standardized tests predict medical student clinical performance? A multi-specialty longitudinal analysis
Petra Casey (Mayo Medical School, Obstetrics and Gynecology, 200 First Street SW, Rochester, Minnesota 55905, United States)
Joseph Grande (Mayo Medical School, Laboratory Medicine and Pathology, Rochester, Minnesota, United States)
Torrey Laack (Mayo Medical School, Emergency Medicine, Rochester, Minnesota, United States)
Geoffrey Thompson (Mayo Medical School, Surgery, Rochester, Minnesota, United States)
Robert Ficalora (Mayo Medical School, Internal Medicine, Rochester, Minnesota, United States)
Background: Standardized examinations are designed to measure performance objectively. Whereas the literature supports correlations between premedical (MCAT), preclinical (USMLE1) and clinical standardized scores (NBME subject & USMLE2) in individual specialties, our study presents correlations across all clinical clerkships using NBME subject examinations, and incorporates a clinical performance parameter (ISES) to determine whether clinical performance can be predicted.
Summary of work: Gender, GPA, MCAT, USMLE1, USMLE2, NBME subject and ISES scores were obtained for 310 students matriculating at MMS in 2003-2009. The outcomes of interest were NBME subject and ISES scores. Multivariable linear regression models using stepwise and backward variable selection identified independent predictors; the strength of each model was summarized by its R2 value.
Summary of results: The strongest predictor of USMLE1 was the MCAT biological science score (R2 = 23%). The strongest predictor of USMLE2 was USMLE1 (R2 = 61%). All NBME subject scores correlated with USMLE 1 & 2 (R2: Internal Medicine 59%, Pediatrics 64%, Surgery 54%, ObGyn 60%, Neurology 61%, Psychiatry 52%). The independent predictors of ISES scores were USMLE2 and GPA; however, these measures explained only 16% of the ISES variation.
Conclusions: Among students at Mayo Medical School, premedical standardized scores (MCAT) correlated with preclinical scores (USMLE1), and the latter correlated with later clinical standardized scores (USMLE2 and NBME subject). Clinical performance scores (ISES) correlated with both clinical standardized scores (USMLE2) and undergraduate performance.
Take-home messages: Standardized scores may serve as predictors of future examination and clinical clerkship performance in medical school. Identifying students at risk for underperforming in medical school based on standardized scores will facilitate earlier intervention and remediation.
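Stepwise variable selection of the kind used in this study can be illustrated with a minimal greedy forward-selection sketch. The data, the function names, and the stopping rule (a minimum R² gain) are all my own illustrative assumptions, not the study's actual procedure:

```python
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit of y on X (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

def forward_stepwise(X, y, names, min_gain=0.01):
    """Greedily add the predictor that most improves R^2; stop when the gain is small."""
    chosen, best = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        r2, j = max((r_squared(X[:, chosen + [j]], y), j) for j in remaining)
        if r2 - best < min_gain:
            break
        chosen.append(j)
        remaining.remove(j)
        best = r2
    return [names[k] for k in chosen], best

# Illustrative data for 310 students: USMLE2 built to depend mainly on USMLE1.
rng = np.random.default_rng(1)
n = 310
mcat = rng.normal(30, 3, n)
gpa = rng.normal(3.6, 0.3, n)
usmle1 = rng.normal(220, 15, n)
usmle2 = 0.8 * usmle1 + rng.normal(0, 8, n)

selected, r2 = forward_stepwise(
    np.column_stack([mcat, gpa, usmle1]), usmle2, ["MCAT", "GPA", "USMLE1"])
print(selected, round(r2, 2))
```

With data generated this way, USMLE1 is selected first and dominates the explained variance, echoing the abstract's R² = 61% finding in spirit.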
Clinical assessment in Australian and New Zealand medical schools: Providing an overview and the development of a national assessment resource
Monique Hourn (Medical Deans Australia and New Zealand, c/o Medical Deans Secretariat, Level 6, 173-175 Phillip Street, Sydney 2000, Australia) Richard Hays (Bond University, Faculty of Health Sciences and Medicine, Gold Coast, Australia)
Background: The increase in medical student numbers across Australia has put pressure on educational bodies to provide quality clinical training and to undertake assessment that measures the work readiness of graduates. Over the last three years, Medical Deans Australia and New Zealand has developed clinical training resources for medical schools. The first was a framework of clinical competencies based on national accreditation standards; the second identified the common diagnostic and procedural competencies for the medical graduate and specified the level of achievement of these skills. A third body of work is now underway which builds on the first two stages to develop a comprehensive overview of how Australian and New Zealand medical schools assess the clinical competencies of their graduates before they enter the workforce.
Summary of work: An extensive consultation process with all Australian and New Zealand medical schools was undertaken to collect data on clinical assessment, assessment blueprints, the use of Workplace Based Assessments in medical schools and standard setting for clinical assessments.
Summary of results: The project has provided a summary of clinical assessment in Australian and New Zealand medical schools whilst examining the role of Workplace Based Assessments in medical schools.
Conclusions: This information has been collated to develop an assessment blueprint for the clinical competencies of the medical graduate, which medical schools can use to compare and evaluate their clinical assessment programs.
Take-home messages: Best practice scenarios for clinical assessment have been identified, providing useful information for medical schools, accreditation agencies and health services about clinical training and how graduates are assessed as ready for internship.
What do postgraduate examiners know about, and think of, standard setting in the College of Physicians of South Africa?
Scarpa Schoeman (University of the Free State, Dept of Internal Medicine, PO Box 339 (G73), Bloemfontein 9300, South Africa)
Vanessa Burch (University of Cape Town, Dept of Medicine, Cape Town, South Africa)
Marietjie Nel (University of the Free State, Division of Health Professions Education, Bloemfontein, South Africa)
Background: Since its inception in 1954, the Colleges of Medicine of South Africa (CMSA) has used a fixed pass mark (cut-score) of 50% for all fellowship examinations in its 29 constituent colleges. In 2011 the College of Physicians (CoP) introduced standard setting (Cohen method) for components of its fellowship examinations. Despite an earlier workshop, it seemed that CoP examiners had limited knowledge of, and diverse opinions about, standard setting. A situational analysis was done to verify knowledge gaps and explore attitudes towards standard setting, to guide the design of a focused workshop for CoP examiners.
Summary of work: An anonymous online survey was sent to current (2010-2013) CoP examiners (n=51) to investigate their knowledge of, and opinions about, standard setting.
Summary of results: Seventy-five percent of examiners completed the survey. Some examiners did not know that standard setting had been introduced: 21% for the Part I MCQ examination and 45% for the Part II Objective Test. Altogether 21% were knowledgeable about standard setting, and a further 55% were familiar with, but not knowledgeable about, it. Some examiners (29%) had "no problem" with using a fixed 50% pass mark, 32% were concerned about it and 39% rejected the practice. Most (63%) endorsed the changes made and 74% supported further implementation of standard setting in other CoP examinations.
Conclusions: Although many CoP examiners endorsed standard setting, and some rejected the ongoing use of a fixed pass mark, they had very limited knowledge about standard setting.
Take-home messages: Although broadly positive and supportive, CoP examiners need more information about, and a better understanding of, standard setting.
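For readers unfamiliar with it, the Cohen method mentioned above sets the pass mark relative to the performance of the best candidates in the cohort, commonly as a fixed fraction (often 60%) of the score of the 95th-percentile examinee. A minimal sketch follows; the percentile and fraction are common defaults from the literature, not necessarily the values the CoP uses, and the scores are invented:

```python
import numpy as np

def cohen_cut_score(scores, top_percentile=95, fraction=0.60):
    """Pass mark as a fraction of the score of a near-top candidate.

    The 95th-percentile score serves as a proxy for the 'best' candidate,
    making the cut-score track the cohort's overall performance.
    """
    benchmark = np.percentile(scores, top_percentile)
    return fraction * benchmark

# Example: percentage scores for a small hypothetical cohort.
scores = [42, 48, 51, 55, 58, 60, 63, 66, 70, 81]
print(cohen_cut_score(scores))
```

The appeal of the method is its low cost: unlike Angoff-style judgements it needs no panel of item raters, only the score distribution itself, which made it attractive for introduction into existing fellowship examinations.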
10G Short Communications: Curriculum: Competency Based Education/Outcome Based Education 2 - Undergraduate
Location: Conference Hall, PCC
The planner's plan - Reflections on the underlying conceptions and the theoretical basis of a new integrated, competency-based medical curriculum at the Charité Berlin
Asja Maaz (Charité, Dieter-Scheffner-Fachzentrum, Berlin, Germany)
Tanja Hitzblech (Charité, Dieter-Scheffner-Fachzentrum, Berlin, Germany)
Markus Langenstrass (Charité, Dieter-Scheffner-Fachzentrum, Charitéplatz 1, Berlin 10117, Germany)
Harm Peters (Charité, Dieter-Scheffner-Fachzentrum, Berlin, Germany)
Background: Establishing a reformed medical curriculum is a challenging endeavour for any medical faculty. Yet there is a lack of thorough research on the reform processes and success factors, even though such research could guide and facilitate the introduction of curriculum reform elsewhere.
Summary of work: The Charité - Universitätsmedizin Berlin introduced a new medical curriculum, the Modular Curriculum of Medicine (MCM), in 2010. The MCM attempted to incorporate a large number of elements currently regarded as characteristic of good teaching and learning, for instance an outcome- and problem-based design, early patient contact, interdisciplinary modules, and small-group and team-based learning. This research focuses on the educational theory underlying the MCM, using a mixed-status focus group analysis with former key players in the MCM planning and decision-making process, including students.
Summary of results: The data were analysed via qualitative content analysis (Mayring 2007) and provided systematic insights into the multi-layered negotiation process of planning a medical curriculum. The analysis formulated, in retrospect, the implicit and explicit objectives and ideas of the planners and connected them to their educational theory backgrounds.
Conclusions: The focus group analysis reveals the highly creative and dynamic process of building the basis of a new curriculum at the very beginning, when constraining organizational factors are still absent. It points out the vision of the new curriculum as well as the educational understanding of the planners.
Take-home messages: Analysing and communicating curriculum planners' plans may serve as a tool to guide and foster the reform of medical curricula.
Implementation of a competency-based DVM program without changing the existing program structure at the Université de Montréal
Michèle Y Doucet (Université de Montréal, Faculté de médecine vétérinaire, CP 5000, Saint-Hyacinthe J2S 7C6, Canada)
Marilou Bélisle (Université de Sherbrooke, Faculté d'éducation, Longueuil, Quebec, Canada)
Background: A competency-based approach was implemented within the existing DVM program structure thus avoiding a costly and complex curriculum overhaul.
Summary of work: Course contents were reviewed, aligned and integrated using concept mapping. A competency framework was developed in three steps: definition of the essential learning elements (SKAs) for each of the seven competencies, determination of expectations for each competency at each level of the program, and creation of developmental rubrics for assessing competencies. Reflective practice was identified as the backbone of the educational concept.
Summary of results: Concept maps of course contents were shared amongst teachers to align learning objectives with the essential learning elements identified for each competency and to integrate course contents throughout the program. A competency development and evaluation trajectory (CDET) was designed to include complex, authentic tasks called "learning-evaluation situations" (LES) within existing courses of the program. These LES were created by faculty to allow students to practice each competency and receive formative feedback several times throughout the program before undergoing the certifying evaluations corresponding to each level of the program (novice, advanced and day-one). An electronic portfolio was designed to allow students to reflect on their progress within the CDET.
Conclusions: Alignment of course contents and learning objectives with a competency framework along with a carefully designed CDET are essential to the successful inclusion of a competency-based approach in an existing program structure.
Take-home messages: Several key elements are essential to designing and implementing a competency-based approach without changing the existing structure of a professional program.
On the way towards a National Competency-based Catalogue of Learning Goals for Medicine (NKLM) in Germany: The role of the "Gesellschaft für Medizinische Ausbildung" (GMA)
Martin R Fischer (Klinikum der Universität München und Gesellschaft für Medizinische Ausbildung (GMA), Institut für Didaktik und Ausbildungsforschung in der Medizin, Ziemssenstr. 1, Munich 80336, Germany)
Karin Mohn (NKLM-Geschäftsstelle der Gesellschaft für Medizinische Ausbildung (GMA), Witten, Germany)
Background: Outcomes of undergraduate medical education in Germany are measured after six years by a national written single-best-answer multiple choice examination and a two-day clinical case write-up and oral examination under the faculties' responsibility. However, no outcome- or competency-based national catalogue of learning goals exists. The GMA, together with the German Association of Medical Faculties, has initiated a structured process of creating such a catalogue together with all relevant institutional stakeholders in medical education, taking into account international references as well as catalogues from faculties and national medical associations.
Summary of work: We describe the development of the NKLM from 2009 until now with respect to its structure, process and preliminary results, from the perspective of the GMA, the association for medical education in the German-speaking community. Twenty-one interdisciplinary workgroups are involved in the development process.
Summary of results: Intermediate results are currently being reviewed by more than 150 German medical associations. The goal is a broadly accepted competency-based core curriculum to be used by the 37 German medical faculties as a joint basis, to be enriched by faculty-specific profiles. The NKLM should provide recommendations for assessment and serve as a foundation for postgraduate training, so that competencies are seamlessly developed further after graduation. The development process is critically reviewed, and perceived strengths and shortcomings are described.
Conclusions: The multi-institutional development of the National Competency-based Catalogue of Learning Goals for Medicine (NKLM) is a complex process.
Take-home messages: The NKLM has potential to improve medical education in Germany. Evaluation studies to support this assumption are needed.
Self-assessment as a driving force in competencies development
Jean-François Montreuil (Université Laval, Vice-décanat aux études de premier cycle, Faculté de médecine, Pavillon Ferdinand-Vandry, bureau 4770, 1050 avenue de la Médecine, Quebec G1V 0A6, Canada)
Lucie Rochefort (Université Laval, Vice-décanat aux études de premier cycle, Quebec, Canada)
Daniel Turpin (Université Laval, Vice-décanat à la pédagogie et au développement professionnel continu, Quebec, Canada)
Background: The medical curriculum at Université Laval is a competency-based program combining knowledge acquisition with development of the seven CanMEDS competencies. Progressive development of these competencies is monitored by an innovative longitudinal approach established to assure that all students achieve