Conclusions: This study pointed to the need for faculty development to provide adequate feedback. Take-home messages: Feedback is critical for formative assessment and, when properly applied, is well accepted and contributes to the training of students.
Comparison of the performance of post-graduate year-one residents from different departments by global rating and the mini-CEX in the emergency medicine department at a medical center in Taiwan
Chip-Jin Ng (Chang Gung Memorial Hospital, Emergency Medicine, 5, Fushin street, Kweishan County, TaoYuan 333, Taiwan)
Yu-Che Chang (Chang Gung Memorial Hospital, Emergency Medicine, Taoyuan, Taiwan) Chien-Kuang Chen (Chang Gung Memorial Hospital, Emergency Medicine, Taoyuan, Taiwan) Ping Liu (Chang Gung Memorial Hospital, Emergency Medicine, Taoyuan, Taiwan)
ABSTRACT BOOK: SESSION 9 TUESDAY 27 AUGUST: 1600-1730
Jih-Chang Chen (Chang Gung Memorial Hospital, Emergency Medicine, Taoyuan, Taiwan)
Background: This study aimed to evaluate differences in clinical performance among PGY1 residents from various specialty backgrounds in the ED, and to evaluate the correlation between the results of different evaluation systems for these residents. Summary of work: A total of 179 PGY1 residents received 1 month of ED training and were divided into three groups according to their specialty background. Group A consisted of Radiology, Pathology and Nuclear Medicine PGY1s, specialties that are not clinically orientated; Group B consisted of Internal Medicine, Surgery, OB/GYN, Pediatrics and ED PGY1s, highly clinically orientated specialties; Group C consisted of Ophthalmology, ENT, Dermatology and Psychiatry PGY1s. We used the mini-clinical evaluation exercise (mini-CEX) and a global rating method to evaluate their clinical performance and analysed whether these two scoring methods correlated with each other. Summary of results: By global rating, Group A had the highest score and Group C the lowest; the global rating scores for Groups A to C were 87.4±6.9, 86.5±5.4 and 86.0±4.7 respectively. By the mini-CEX, however, Group A scored lowest. Compared with Group B, Group A was significantly lower on the physical examination, clinical skill and clinical judgement parts of the mini-CEX; compared with Group C, Group A was significantly lower on the clinical skill part. Conclusions: We found the mini-CEX superior to the global rating method: it has better discriminating ability in evaluating trainees' performance. Take-home messages: The mini-CEX has better discriminating ability than global rating in evaluating trainees' performance and is more suitable for clinical evaluation.
A Self-Assessment Tool To Evaluate The Medical Student's Development And Personal Growth Throughout his/her Career
Ileana Petra (National Autonomous University of Mexico, Psychiatry and Mental Health, Rio Mixocac 66402, Col. del Valee, Mexico City 03100, Mexico) Teresa Cortes (National Autonomous University of Mexico, Public Health, Mexico City, Mexico) Patricia Herrera (National Autonomous University of Mexico, Anatomy, Mexico City, Mexico) Monica Aburto (National Autonomous University of Mexico, Embryology, Mexico City, Mexico) Aurora Farfan (National Autonomous University of Mexico, Public Health, Mexico City, Mexico)
Background: Students traditionally play a passive role in their training and engage in non-reflective practice. An educational program is required that encourages interest in personal growth and development. This competency tends to be left out because it involves a kind of assessment not considered within the grade awarded to the student, and is therefore left to the student to handle. Studies on personal growth and development are generally not used to give students feedback so that they can become aware of their strengths and weaknesses. Summary of work: Based on the ideal key points associated with development and personal growth, a Likert-type instrument was formulated to evaluate the following areas: self-esteem, self-awareness and emotional expression, commitment, creativity, resilience, self-criticism, positive outlook on life, security, confidence and assertiveness. Summary of results: Once the questionnaire was tested, the instrument was structured as follows: a) the self-administered questionnaire; b) instructions for scoring each section and interpreting the results; c) graphs of each area so that students can chart their annual evolution throughout their career.
Conclusions: Applying the instrument at the beginning of the career, with annual follow-up, allows students to track their progress and areas of weakness and encourages them to seek guidance to help them improve. Take-home messages: Students can count on a self-assessment tool in the area of personal growth and development. It promotes their academic performance throughout their studies and perhaps even beyond.
The Correlation of the Acceptability Index based on medical teachers and borderline examinees among fourth-year medical students
Kanyarat Katanyoo (Faculty of Medicine, Vajira Hospital, Navamindradhiraj, Radiology, Bangkok, Thailand) Atchima Cholpaisal (Faculty of Medicine, Vajira Hospital, Navamindradhiraj, Radiology, Bangkok, Thailand) Phensri Sirikunakorn (Faculty of Medicine, Vajira Hospital, Navamindradhiraj, Radiology, 681 Samsan Road Dusit District, 20 Soi Vongsavang 4, Vongsavang Road, Bansue District, Bangkok 10800, Thailand) Chiroj Soorapanth (Faculty of Medicine, Vajira Hospital, Navamindradhiraj , Orthopaedics, Bangkok, Thailand)
Background: The Acceptability Index (AI) is calculated from an estimate of the cut-off score for a borderline examinee. The value obtained may vary, which raises the issue of reliability.
Summary of work: In academic year 2012, borderline examinees in each of 3 groups were invited to calculate the AI from 100 MCQs during the radiology rotation. We assessed the correlation of these values with the AI based on medical teachers' estimates. Additionally, difficulty indices (DI) from item analysis were also considered. Minimal passing levels (MPL) were compared among the three methods: medical teachers, borderline examinees and DI.
Summary of results: There were 19 borderline examinees among 79 fourth-year medical students. The correlation coefficients (r) of the AI from medical teachers
with borderline examinees and with DI in groups 1, 2 and 3 were 0.31 and 0.32, 0.01 and 0.07, and 0.19 and 0.31, respectively, while the r of the AI from borderline students with DI in the same groups was 0.60, 0.77 and 0.67 (p-value<0.001). The MPLs for group 1 from medical teachers, borderline examinees and DI were 45.1, 58.7 and 70.8; the corresponding values were 45.4, 71.1, 81.8 for group 2 and 49.2, 80.6, 83.1 for group 3.
Conclusions: The correlations of the AI from medical teachers with borderline examinees and with DI were poor, while the AI from borderline students showed fairly good associations with DI. These findings were consistent with the MPLs, which were lowest when derived from medical teachers and nearly identical between borderline students and DI. Take-home messages: AI estimations from medical teachers tend to be lower than those from real borderline medical students.
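The abstract does not give the exact AI formula, but the borderline cut-off estimation it builds on can be illustrated with an Angoff-style sketch: each judge estimates the probability that a borderline examinee answers each item correctly, each judge's cut-off is the sum over items, and the minimal passing level is the mean across judges. The function name and all numbers below are hypothetical, for illustration only.

```python
# Angoff-style minimal passing level (MPL) sketch.
# Each judge estimates, per item, the probability that a *borderline*
# examinee answers correctly; a judge's cut-off is the sum over items,
# and the MPL is the mean of those cut-offs across judges.
# All names and numbers are hypothetical, not the study's data.

def mpl(judge_estimates):
    """judge_estimates: list of per-judge lists of per-item probabilities."""
    cutoffs = [sum(items) for items in judge_estimates]
    return sum(cutoffs) / len(cutoffs)

# Three hypothetical judges rating a 5-item test:
judges = [
    [0.6, 0.7, 0.5, 0.8, 0.4],  # judge 1 -> cut-off 3.0 items
    [0.5, 0.6, 0.5, 0.7, 0.5],  # judge 2 -> cut-off 2.8 items
    [0.7, 0.8, 0.6, 0.9, 0.5],  # judge 3 -> cut-off 3.5 items
]
print(round(mpl(judges), 2))
```

The study's observation that teacher-derived cut-offs ran lower than those from real borderline students corresponds, in this sketch, to teachers systematically entering lower per-item probabilities.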
Using a Relative Ranking Scale to Enhance Feedback during Resident Assessments
Andrew Sparrow (University of Toronto - Toronto Western Hospital, Family Medicine, 399 Bathurst St, 2W428, Toronto M5T2S8, Canada) Milena Forte (University of Toronto - Mount Sinai Hospital, Family Medicine, Toronto, Canada) June Carroll (University of Toronto - Mount Sinai Hospital, Family Medicine, Toronto, Canada) Perle Feldman (University of Toronto - North York General Hospital, Family Medicine, Toronto, Canada)
Background: The Relative Ranking Scale (RRS) asks learners to rank a defined set of skills relative to each other and cross-check this ranking with expert opinion. Learners are not asked to gauge their overall level of competency, but to provide a rank order of their strengths and weaknesses. We studied whether the RRS 1) impacts the quality and dynamic of feedback as compared to traditional evaluation forms and 2) impacts the creation and implementation of educational action plans. Summary of work: Family practice residents and teachers at academic and community sites completed the RRS at regular evaluations in addition to their usual feedback forms. Focus groups were conducted to explore the experience of using the RRS, then transcribed and analysed using the constant comparative method and thematic analysis. Summary of results: The RRS changed the dynamic of the feedback interaction for both teachers and residents. The feedback encounter became a feedback conversation with much more bidirectionality than traditional evaluations. Residents felt their opinion was more welcomed, and teachers felt they could deliver critical feedback more easily.
Conclusions: The focus of the feedback changed to 1) emphasize the identification of strengths and weaknesses and 2) to define learning priorities and develop common goals (considering both the residents' and teachers' agenda). The form did not seem to impact on development of an action plan to achieve these goals.
Take-home messages: A new type of feedback form increased the amount and quality of feedback to residents. This feedback was learner-centred but teacher-driven.
Receiving Feedback in Near-Peer Teaching
Nilanka N Mannakkara (Basildon Hospital, Medicine and Surgery, Nethermayne, Basildon, Essex SS16 5NL, United Kingdom)
Shaine D Mehta (Basildon Hospital, Medicine and Surgery, Essex, United Kingdom) Aparna Mark (Basildon Hospital, Medicine and Surgery, Essex, United Kingdom)
Background: Feedback plays an important role in developing the clinical teacher, and its usefulness for this purpose is well documented. There is increasing emphasis on junior doctors developing teaching skills; however, there is little work on how feedback is best utilised in near-peer environments. We assessed which components of feedback FY1s (Foundation Year 1 doctors) found most useful when teaching medical students (near-peers) in our FY1-led programme. We also introduced peer feedback from FY1 observers, who received no additional training, to determine whether this enhanced the use of feedback in teachers' development. Summary of work: We conducted focus groups with FY1s and analysed questionnaires to obtain qualitative data. FY1s were asked about their experiences and perceptions of the feedback received and how it influenced their training and development. Summary of results: Junior doctors derived great value and encouragement from near-peer feedback. Participants gained new insights that aided reflection and development. However, students infrequently suggested criticisms or improvements. Peers identified more potential improvements, which were considered the most valuable components of feedback. Positive feedback was more valued when it came from students. FY1s found that giving feedback increased awareness of their own teaching methods.
Conclusions: Feedback from near-peers and peers complement each other to provide a comprehensive assessment of teaching. Dedicated sessions on giving feedback may enhance the quality of feedback. Take-home messages: Near-peer feedback is highly valued and appreciated by junior doctors. Incorporating peers into the feedback process is an easy way to increase the usefulness of feedback to the developing teacher.
Medical students in the feedback process
Michal Kendra (Jessenius Faculty of Medicine in Martin, Comenius University, Dean's office, SDaJ Novomeskeho 2, Martin 03601, Slovakia)
Petronela Lalikova (Jessenius Faculty of Medicine in Martin, Comenius University, Martin, Slovakia) Ivan Majling (Jessenius Faculty of Medicine in Martin, Comenius University, Martin, Slovakia)
Dasa Gocova (Jessenius Faculty of Medicine in Martin, Comenius University, Martin, Slovakia) Juraj Sokol (Jessenius Faculty of Medicine in Martin, Comenius University, Department of hematology and transfusiology, Martin)
Background: The last two decades have brought a seismic shift in the provision of feedback. Feedback has been described as an essential, even "crucial," feature of medical education.
Summary of work: The students of the Jessenius Faculty of Medicine designed their own evaluation questionnaire, giving each student the opportunity to evaluate subjects and teachers in the academic year 2011/2012. Course questions focused on: content, interest, satisfaction/performance and recommendation. Questions about teachers focused on: professionalism, ability to explain, time efficiency and accessibility to students. Students responded to each question using a 5-point scale (subjects: 1=strongly agree to 5=strongly disagree; teachers: 1=very good to 5=poor).
Summary of results: Evaluation of the results remained in the hands of students (elected representatives) from beginning to end. 458 students participated in the evaluation (1st year n=96, 2nd n=93, 3rd n=94, 4th n=79 and 5th n=96). The Dean commended the best-rated professor, associate professor and assistant at the beginning of the new academic year 2012/2013. The evaluation report is publicly available on our faculty website.
Conclusions: Course assessment is an efficient method of recognizing the strengths and weaknesses of teaching at the end of the current academic year. However, the question remains how quickly gaps in the teaching process can be identified and addressed. Take-home messages: Feedback in medical education is specific information given with the intent to improve the student's performance.
Students' perception on the experience of learning portfolios in medical education
Sujin Chae (Ajou University School of Medicine, Department of Medical Humanities & Social Medicine, Woncheon dong Yeongtong gu Suwon 443-721, Korea, Republic of (South Korea))
Seungsoo Sheen (Ajou University School of Medicine, Department of Pulmonary and Critical Care Medicine, Suwon, Korea, Republic of (South Korea)) Ki Young Lim (Ajou University School of Medicine, Department of Medical Humanities & Social Medicine, Suwon, Korea, Republic of (South Korea))
Background: A portfolio in medical education is a collection of documents providing evidence of learning and of self-reflection on the documented events. This study explored medical school students' perceptions of the experience of learning portfolios for the year
Summary of work: The portfolios were designed to enable students to demonstrate their personal development and to stimulate self-reflection. Portfolio binders were given to 167 students at the beginning of the semester, and outstanding students received scholarships at the end of the semester. 40 students of Ajou University School of Medicine who had submitted portfolios completed the questionnaire. Summary of results: 80% of students were satisfied with the portfolio experience, and 95% said they would participate the next year. The main reasons for participation were the scholarship (53%) and the preservation of learning experiences (33%). The advantages of portfolios were the collection of students' work (39%) and the provision of teachers' feedback (25%). Meanwhile, 49% of students found it difficult to configure the contents of the portfolio. Conclusions: The results of the study suggest that portfolios have helped to collect evidence of learning, but are less effective for self-reflection. A guide is needed on what a portfolio is and how it can be used. Take-home messages: Portfolios are increasingly used and highly valued in medical education, but to date few studies have examined learning portfolios in South Korea. Further research on the effects of portfolios on self-reflection in medical education is required.
Does Summative Assessment Performance Relate to Portfolio Performance in Undergraduate Year Surgical Training?
Kun-Ming Chan (Chang Gung Memorial Hospital at Linkou, General Surgery, No. 5, Fu-Hsing Street, Kwei-Shan Township, Taoyuan County 33305, Taiwan) Ming-Ju Hsieh (Chang Gung Memorial Hospital at Linkou, Thoracic and Cardiovascular Surgery, Taoyuan County, Taiwan)
Tzu-Chieh Chao (Chang Gung Memorial Hospital at Linkou, General Surgery, Taoyuan County, Taiwan) Lun-Jou Lo (Chang Gung Memorial Hospital at Linkou, Plastic Surgery, Taoyuan County, Taiwan) San-Jou Yeh (Chang Gung Memorial Hospital at Linkou, Internal Medicine, Taoyuan County, Taiwan) Wen-Neng Ueng (Chang Gung Memorial Hospital at Linkou, Orthopedic, Taoyuan County, Taiwan)
Background: This study aims to explore the relationship between performance on a summative assessment and portfolio performance for undergraduate-year (UGY) students trained in a surgical department.
Summary of work: Thirty-six undergraduate students (interns) who received surgical training within a 3-month period at Chang Gung Memorial Hospital at Linkou, Taiwan, were included. We evaluated their learning and performance using portfolios and a summative assessment, comprising an MCQ test, a 3-station OSCE and a 3-station DOPS, at the end of the program. Summary of results: The MCQ test consisted of 50 questions, and the 3-station OSCE consisted of one physical examination station, one history-taking station and one communication station. The DOPS included
operation scrub technique, operating room preparation and suturing technique. The portfolio performance scores were compared with the scores of these summative assessments. The results were: DOPS (p=0.087), OSCE (p=0.884) and MCQ (p=0.753), demonstrating no significant association for any of these three assessments.
Conclusions: There was at most a minimal relationship between DOPS scores and portfolio performance (p=0.087). Portfolio performance cannot predict performance on the MCQ or the objective structured clinical examination (OSCE). We therefore find these evaluation methods still necessary and important for evaluating the clinical competences of undergraduate-year (UGY) students trained in a surgical department.
Take-home messages: The portfolios cannot predict the performance on an objective structured clinical examination (OSCE) and MCQ for undergraduate-year (UGY) students when they are trained in a surgical department.
Assessing shared decision-making skills of 3rd year medical students.
L.M.L. Ong, D. van Woerden (Department of Medical Psychology, Academic Medical Centre, Amsterdam)
Background: 70% of patients want to be involved in their care. Shared decision-making (SDM) meets this need, having a positive effect on satisfaction, quality of life and the doctor-patient relationship. Summary of work: We teach 3rd year medical students a 6-phase SDM consultation model:
1. Start (goal, equipoise). 2. Informing (treatment options, pros/cons). 3. Deliberation (weighing considerations, concerns). 4. Preference. 5. Preferred role in decision-making. 6. Decision.
Video recordings of 364 students conducting SDM consultations with simulation patients were made, uploaded to students' digital portfolios, shared with two peers and assessed by teachers. Summative assessments were made using a semi-structured rating list. Assessments were categorized as: below expectations (4-5), meets expectations (6-7-8), and above expectations (9-10). Furthermore, students provided written reflections on self-selected events in their consultation. They both received and provided peer feedback. By fulfilling this assignment, students received a positive assessment of 'professional behaviour'. Summary of results: A semi-structured rating list was developed to assess the SDM skills of 364 medical students. The average assessment was 7.2. 16 students (4.4%) failed, whereas 24 students (6.6%) performed above expectations. The majority of students (89%) performed at the 'meets expectations' level. All students fulfilled their reflective assignment.
Conclusion: Our 6-phase consultation model can be used to teach SDM skills. These skills can be assessed using our semi-structured rating list.
Take home message: SDM skills can be taught and assessed.
Validating force-based metrics for computerized assessment of technical skills in laparoscopic surgery
Matthew Dawson (Lawson Health Research Institute, Canadian Surgical Technologies and Advanced Robotics, London, Canada)
Ana Luisa Trejos (Lawson Health Research Institute, Canadian Surgical Technologies and Advanced Robotics, London, Canada)
Rajni Patel (Lawson Health Research Institute, Canadian Surgical Technologies and Advanced Robotics, London, Canada)
Christopher Schlachta (Lawson Health Research Institute, Canadian Surgical Technologies and Advanced Robotics, London, Canada)
Richard Malthaner (Lawson Health Research Institute, Canadian Surgical Technologies and Advanced Robotics, London, Canada)
Michael Naish (Lawson Health Research Institute, Canadian Surgical Technologies and Advanced Robotics, London, Canada)
(Presenter: Sayra Cristancho, Schulich School of Medicine and Dentistry, Centre for Education Research and Innovation, 339 Windermere Road, London N6A 5C1, Canada)
Background: Learning the required motor skills to perform minimally invasive surgery is especially difficult. Automated performance metrics are needed to provide trainees with feedback, allowing for more efficient learning.
Summary of work: The SIMIS system, which uses instruments that measure position and force during training, was used to compute metrics related to safety and consistency. Thirty subjects with varying experience performed a laparoscopic knot-tying task. A Pearson correlation was used to compare the SIMIS metrics to those simultaneously obtained with the ICSAD system, which is considered a validated method. Spearman's Rho correlations were used to compare all metrics with experience level.
Summary of results: Results show that the SIMIS metrics have slightly stronger correlations with experience level than the ICSAD metrics (-0.781 for safety and -0.796 for consistency, vs. -0.736 for path length, -0.629 for number of movements and -0.792 for time, p < 0.001). There are also significant correlations between the SIMIS and ICSAD metrics (e.g., safety correlates with path length (0.535) and time (0.528)).
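The rank correlations reported above (Spearman's rho between a performance metric and experience level) can be sketched as follows. This is a minimal pure-Python illustration with hypothetical data; the metric values are invented and do not come from the SIMIS or ICSAD systems.

```python
# Spearman's rho: the Pearson correlation computed on ranks.
# Hypothetical data only -- not the study's actual measurements.

def ranks(xs):
    """Assign average 1-based ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of the 1-based positions i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_rho(x, y):
    return pearson(ranks(x), ranks(y))

# Example: experience level vs. a force-error "safety" metric that
# decreases with experience, giving a negative rank correlation
# (as with the negative coefficients reported in the abstract).
experience = [1, 2, 3, 4, 5, 6]              # e.g., years of training
force_error = [9.1, 8.4, 7.9, 5.2, 4.8, 3.0]  # hypothetical values
print(round(spearman_rho(experience, force_error), 3))
```

Because rho depends only on ranks, it is robust to the differing scales of force-based and motion-based metrics, which is one reason a rank correlation suits this kind of cross-system comparison.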
Conclusions: Current computer-based feedback systems do not provide trainees with information that can be readily related to patient safety. The force data collected with SIMIS is able to provide trainees with information that is related to consistency and overall safety. The SIMIS/ICSAD comparisons demonstrated concurrent validity for the proposed performance metrics. The metrics obtained with the SIMIS system reflect