
Author(s): 

Alice E Lee1, Maddalena Ardissino2,3, Nadja F Bednarczuk4, Maria Tennyson5, Ankur Khajuria6

Author Affiliations: 

1,2Academic Foundation Doctor, Department of Surgery and Cancer, Imperial College London, UK; 3Academic Foundation Doctor, Magill Department, Chelsea and Westminster Hospital, London, UK; 4Academic Foundation Doctor, King’s College Hospital, Denmark Hill, London, UK; 5Academic Foundation Doctor, Cambridge University Hospitals NHS Foundation Trust, UK; 6Honorary Clinical Research Fellow, Department of Surgery and Cancer, Imperial College London, UK

Correspondence to: 

Ankur Khajuria, Kellogg College, University of Oxford, Oxford, UK

Journal Issue: 
Volume 50: Issue 1: 2020
Cite paper as: 
J R Coll Physicians Edinb 2020; 50: 60–6


Abstract

Background Previous research has demonstrated that medical students have insufficient knowledge of critical appraisal, a fundamental aspect of evidence-based medicine. We aimed to enhance medical students’ critical appraisal skills using an innovative mixed-methods programme.

Methods We designed a 2-day, mixed-methods, national teaching programme, including an interactive lecture and workshop, quiz and viva-style examination. Course efficacy was assessed using pre- and post-course confidence questionnaires and a quiz adapted from the validated Berlin Questionnaire. Data were analysed primarily using the Wilcoxon signed-rank test.

Results Fifty-nine participants from 17 medical schools completed the programme. Pre- and post-course scores demonstrated significant improvement in confidence (median score 5 vs 8; p < 0.001) and quiz performance (median score 9 vs 13; p < 0.001).

Conclusion Our study demonstrates the efficacy of a novel mixed-methods programme in teaching medical students about critical appraisal. Implementation of our approach within the undergraduate curriculum should enhance the uptake of these fundamental skills.


Introduction

Critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness, value and relevance in a particular context.1 Huge growth in the published clinical literature since the 1970s poses challenges, with a risk of information overload.2–4 The sheer volume of literature is a potential barrier to its effective use, highlighting the need for healthcare professionals to be proficient in critical appraisal skills.2,5–7 A systematic review showed that access to professional library services improves general patient care, diagnosis, choice of tests and choice of therapy.8 This supports the need to educate medical students and doctors in critical appraisal so that they can independently and appropriately use information resources, leading to improved health outcomes.9

All practitioners require an evidence base to assess, apply and integrate new knowledge,10 whether or not they undertake primary research activities.11 Despite this, formal teaching of critical appraisal in an already busy medical school curriculum can fall by the wayside, resulting in poor knowledge of, and confidence in, critical appraisal skills among medical students.12,13 Furthermore, research has demonstrated that a lack of formal instruction in critical appraisal compromises the ability of junior doctors to adequately interpret clinical research.14,15

A variety of teaching methods have demonstrated improvement in learners' critical appraisal skills.16–18 Traditionally, didactic lectures and journal clubs have been used to teach critical appraisal; however, these can be passive experiences for learners who are not actively involved in the preparation or delivery of teaching materials, and many trainees do not regularly contribute to the discussions that take place.19,20

Our overall aim was to enhance medical students' critical appraisal skills, in terms of both learner confidence and competence. We integrated a 2-day critical appraisal teaching programme into an established course (running annually since 2015) designed to prepare senior medical students for the Academic Foundation Programme (AFP).21 The AFP is a national, integrated clinical academic pathway in the UK, developed to encourage junior doctors to pursue academic medicine; the ability to critically appraise research forms part of its selection process. Our course previously included a short (30-minute) didactic lecture on critical appraisal. We sought to improve and expand this teaching using a novel, mixed-methods educational approach delivered over 2 days. We targeted students interested in pursuing an AFP to increase engagement in the course, whilst retaining applicability to all senior medical undergraduates. The teaching was delivered by academic trainees, motivated by findings that near-peer tutors are as effective as (and more readily accepted than) staff tutors in teaching critical appraisal skills, and by a desire to support the broader implementation of peer teaching in other areas of medical education.16

Our specific objectives were to:

  1. Provide students with a framework for how to critically appraise a paper.
  2. Provide information on common parametric and non-parametric statistical analyses.
  3. Encourage students to think about study design and potential sources of bias.

Box 1 A sample of three questions utilised in the critical appraisal quiz. Candidates were instructed to select only one answer per question

Quiz introduction

Intensive Care Unit (ICU) doctors frequently give proton pump inhibitors (PPIs) to prevent gastric stress ulceration and death in their patients; however, the benefits are unclear. Some researchers have decided to assess whether this practice has any benefits.41

What is the most appropriate study design, on the hierarchy of evidence-based medicine, to use to answer this question?

  • Prospective cohort study following up patients receiving PPIs compared to those not receiving PPIs
  • A randomised-controlled trial, randomising patients to placebo or PPI
  • A case-control study, looking at whether patients with poor outcome received a PPI
  • A retrospective cohort study comparing outcomes in patients who did and did not receive a PPI
  • Cannot tell from this information alone

Which of the following factors would reduce the external validity of this study?

  • Only one type of antacid medication (PPI) was used
  • The study only looked at ICU patients
  • The study only recruited patients from Europe
  • The study did not include patient-centred outcomes
  • The study did not calculate a number needed to treat (NNT) statistic

During the study, some patients who received the PPIs were unconscious. Which of the following ethical principles does this contravene?

  • Beneficence
  • Non-maleficence
  • Autonomy
  • Justice
  • Patient-centred medicine

Methods

A 2-day, national teaching programme was organised and delivered on 7–8 September 2019 for medical students considering applying for the AFP. The overarching aim of the course was to encourage students to apply for the AFP and to prepare them for AFP interviews. In this study, we specifically assessed the novel teaching methods used to deliver the critical appraisal content; the general usefulness of previous iterations of the course has been assessed and reported elsewhere.21

Critical appraisal teaching was delivered using a mixed-methods educational approach over 2 days. On day 1, teaching consisted of a 60-minute, interactive lecture on the principles of critical appraisal, followed by a 30-minute workshop during which students were provided a scientific abstract and asked to appraise it in small groups. On day 2, students were allocated a time slot to participate in a viva-style examination. Students were instructed to read and critically appraise an abstract in exam conditions; approximately 15 minutes of preparation time was allowed. The students then presented their appraisal and were asked relevant questions by two examiners (academic trainees). This part of the examination lasted approximately 10 minutes (including feedback).

A cross-sectional study design was employed. Critical appraisal skills were assessed both a) subjectively (self-efficacy or ‘confidence’) and b) objectively. For the subjective assessment, pre- and post-course electronic questionnaires were sent to participants asking them to rate their confidence in various aspects of critical appraisal (Supplementary Information, Part 1). This questionnaire was adapted from the validated evidence-based practice confidence scale for relevance to senior medical students and the course objectives.22 For each topic, participants rated their level of confidence on an 11-point scale ranging from 0% to 100% confident, as in the original study. For the objective assessment, students were asked to complete a pre- and post-lecture critical appraisal quiz, which was also administered electronically (Supplementary Information, Part 2). The quiz was modelled on the validated Berlin questionnaire23 and consisted of 15 multiple-choice questions based on a hypothetical clinical scenario and linked to a published article. As in the original Berlin questionnaire, the questions covered formulating a research question, appraising evidence, and the clinical and ethical implications of research. The quiz followed a similar format to the Berlin questionnaire but was substantially modified for our purposes, including the use of a different clinical scenario, less information-dense question stems and no requirement for in-depth interpretation of statistical analyses or findings. This was to ensure relevance to situations in which the reader must appraise limited information under time constraints, including ‘real-life’ clinical practice and AFP interviews. A sample of three questions from the quiz is presented in Box 1.
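For illustration only, the minimal sketch below shows one way responses to the two instruments could be scored, assuming a hypothetical answer key and hypothetical function names; it is not taken from the Supplementary Information or the authors' materials.

```python
# Hypothetical scoring sketch; the answer key and example responses are
# invented for illustration and do not correspond to the actual quiz.

ANSWER_KEY = ["B", "B", "C", "A", "D", "B", "E", "A", "C", "B",
              "D", "A", "E", "B", "C"]  # 15 single-best-answer questions (hypothetical key)


def quiz_score(responses):
    """Count of correct answers out of the 15 multiple-choice questions."""
    return sum(given == correct for given, correct in zip(responses, ANSWER_KEY))


def confidence_score(ratings):
    """Mean self-rated confidence across the questionnaire items.

    Each item is rated on an 11-point scale (0-10, i.e. 0% to 100% confident),
    mirroring the adapted evidence-based practice confidence scale.
    """
    return sum(ratings) / len(ratings)


# Example usage with made-up responses from a single participant
print(quiz_score(["B", "C", "C", "A", "D", "B", "A", "A", "C", "B",
                  "D", "B", "E", "B", "C"]))                                   # 12 of 15 correct
print(round(confidence_score([5, 6, 4, 5, 7, 5, 6, 5, 6, 5, 5, 6, 5, 5]), 2))  # 5.36
```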

Demographic and educational characteristics of participants were collected, including age, gender, medical school year group, medical school attended and whether the participants held any further degree(s). Confidence questionnaire scores were collected before and after the course and recorded as continuous values on a scale of 1–10. Critical appraisal quiz scores were collected from each subject before and after the intervention; the cumulative percentage of correct answers for each of the 15 questions was recorded at both time points. Baseline demographic and educational characteristics of study subjects were summarised using simple descriptive statistics and cumulative percentages. Normality of distribution of all continuous data was tested using the Shapiro-Wilk test, with p > 0.05 considered to indicate a normal distribution. The Wilcoxon signed-rank test was used to compare confidence and critical appraisal quiz scores before and after the course. One-way interaction-effects ANOVA with post-hoc Tukey's honestly significant difference testing was used to assess the interaction between educational variables and quiz score improvement over the course. For all statistical analyses, a p-value threshold of <0.05 was considered to indicate statistical significance.
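As an illustration of the analysis described above, the sketch below runs the normality check and the paired pre-/post-course comparison on made-up scores; the data and variable names are hypothetical and this is not the authors' analysis code.

```python
# Minimal sketch of the paired pre-/post-course comparison; the scores below
# are invented for illustration and do not reproduce the study data.
from scipy import stats

pre_scores = [9, 7, 12, 10, 8, 11, 6, 9]        # hypothetical pre-course quiz scores
post_scores = [13, 11, 14, 12, 12, 13, 10, 12]  # hypothetical post-course quiz scores

# Shapiro-Wilk normality test; p > 0.05 taken to indicate a normal distribution
_, p_pre = stats.shapiro(pre_scores)
_, p_post = stats.shapiro(post_scores)
print(f"Shapiro-Wilk p-values: pre = {p_pre:.3f}, post = {p_post:.3f}")

# Wilcoxon signed-rank test comparing paired pre- and post-course scores
w_stat, p_value = stats.wilcoxon(pre_scores, post_scores)
print(f"Wilcoxon signed-rank test: W = {w_stat:.1f}, p = {p_value:.4f}")
```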

The project was considered an evaluation of teaching methods and did not require ethical approval. All students provided written consent for their anonymised quiz scores to be included in this study.

Table 1 Baseline demographic, medical education background and participation characteristics of the study participants

Baseline characteristics                        Participants (n = 59)

Demographic characteristics
Age, years (median, IQR)                        23 (23–25)
Gender (n, %)
  Female                                        29 (49.2%)
  Male                                          30 (50.8%)

Medical education
Medical school year (n, %)
  Year 4                                        2 (3.4%)
  Year 5                                        16 (27.1%)
  Year 6                                        41 (69.5%)
Medical school (n, %)
  Birmingham                                    1 (1.7%)
  Brighton and Sussex                           2 (3.4%)
  Bristol                                       2 (3.4%)
  Florida Atlantic University                   1 (1.7%)
  Hull York Medical School                      3 (5.1%)
  Imperial College London                       16 (27.1%)
  Keele                                         2 (3.4%)
  King’s College London                         4 (6.8%)
  Norwich                                       1 (1.7%)
  Nottingham                                    3 (5.1%)
  Sheffield                                     1 (1.7%)
  University College London                     4 (6.8%)
  Aberdeen                                      1 (1.7%)
  Buckingham                                    1 (1.7%)
  Cambridge                                     14 (23.7%)
  Leeds                                         1 (1.7%)
  Oxford                                        2 (3.4%)
Further degree (n, %)
  BA                                            2 (3.4%)
  BMedSci                                       1 (1.7%)
  BSc                                           40 (67.8%)
  Masters                                       8 (13.6%)
  PhD                                           4 (6.8%)

IQR: interquartile range

Table 2 Critical appraisal confidence questionnaire scores before and after educational intervention

Question theme                                           Pre-course score (mean, SD)    Post-course score (mean, SD)    p-value

Summarise using PICO                                     4.64 (2.39)                    7.78 (1.78)
Summarise abstract results                               5.12 (1.83)                    7.64 (1.69)
Identify strengths and weaknesses                        4.82 (1.91)                    7.46 (1.71)
Interpret interventions                                  5.16 (1.95)                    7.40 (1.67)
Appraise outcomes                                        4.32 (2.06)                    7.28 (1.76)
Determine clinical applications                          5.14 (1.87)                    7.48 (1.56)
Evaluate ethics                                          5.46 (1.76)                    7.30 (1.59)
Interpret role of funding                                5.20 (1.87)                    7.68 (1.76)
Understand design and hierarchy of evidence              6.16 (2.18)                    8.14 (1.63)
Understand inclusion and exclusion criteria              5.54 (2.05)                    7.58 (1.70)
Recognise per protocol vs intention to treat analyses    5.12 (2.13)                    7.94 (1.75)
Interpret CI and p-values                                5.86 (2.33)                    8.28 (1.77)
Understand meaning of power                              5.24 (2.22)                    7.52 (1.74)
Understand and evaluate external validity                4.88 (2.32)                    7.82 (1.74)
Mean score                                               5.23 (1.74)                    7.70 (1.51)                     <0.001

CI: confidence interval; PICO: population, intervention, control, outcome(s); SD: standard deviation

Results

Population characteristics

Fifty-nine participants from 17 different medical schools attended the programme. Of the 59 attendees, 49 completed both the pre- and post-course questionnaire and quiz (response rate: 83.1%); ten students completed only one of the assessments. The median age of participants was 23 years (interquartile range [IQR] 23–25) and participants were evenly balanced across the sexes, with 49.2% being female and 50.8% male. The majority were in their sixth year at medical school (69.5%); 27.1% were in their fifth year and 3.4% were in their fourth year.

The medical school most represented among study participants was Imperial College London (16 students; 27.1%), followed by the University of Cambridge (14 students; 23.7%); the next most represented institutions were King’s College London and University College London (four students each; 6.8%). The full list of universities attended by study participants, and of further degrees held, can be found in Table 1.

Among study participants, 40 (67.8%) held a BSc degree, eight (13.6%) a Masters degree and four (6.8%) a PhD. Only five participants (8.4%) reported no prior critical appraisal experience before attending the course: 42 (71%) had previously attended lectures on critical appraisal and 28 (47.5%) had previously attended a workshop. Similarly, 28 participants (47.5%) had prior research experience in the form of a published paper and eight (13.6%) had previously read a book on critical appraisal.

Figure 1 Confidence questionnaire scores before and after critical appraisal course; p < 0.001

Table 3 Critical appraisal quiz scores before and after educational intervention

Question number    Pre-course correct answers (%)    Post-course correct answers (%)
1                  62.7                              93.3
2                  76.3                              78.3
3                  37.3                              73.3
4                  59.3                              88.3
5                  37.3                              88.3
6                  74.6                              91.7
7                  72.9                              86.7
8                  73.6                              93.3
9                  66.1                              85.0
10                 84.7                              91.7
11                 64.4                              95.0
12                 6.8                               16.7
13                 69.5                              68.3
14                 42.4                              56.7
15                 94.9                              93.3

p-value (overall pre- vs post-course comparison): <0.001

Confidence and performance measures

The results of the confidence questionnaire scores before and after the course are outlined in Table 2 and depicted in Figure 1. Study participants reported overall higher confidence after the course [post-course median score 8.05 (IQR 7.5–8.5); pre-course median score 5.18 (IQR 4.0–6.6); p < 0.001].

Critical appraisal quiz scores before and after the course are displayed in Table 3 and Figure 2. Overall quiz score significantly improved after the course [pre-score median 9 (IQR 7–12); post-score median 13 (IQR 11–14); p < 0.001].

We measured interaction effects between a number of educational factors and critical appraisal skills improvement, as measured by the change in quiz score before and after the course. Neither the presence of a further degree (p-interaction = 0.886), nor its type (p-interaction = 0.608 for Masters/PhD only), nor medical school year (p = 0.448) was found to significantly modify the efficacy of the intervention.
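As a rough sketch of the kind of interaction analysis reported above, the example below fits a two-way model with a time-by-degree interaction term and a Tukey post-hoc comparison on invented data; the column names, groupings and values are hypothetical and do not reflect the study dataset or the authors' code.

```python
# Illustrative interaction analysis on invented data (not the study dataset).
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Long-format data: one row per participant per time point (hypothetical)
df = pd.DataFrame({
    "participant": list(range(8)) * 2,
    "time": ["pre"] * 8 + ["post"] * 8,
    "degree": (["BSc"] * 4 + ["Masters"] * 4) * 2,
    "score": [9, 8, 10, 7, 11, 9, 8, 10,        # pre-course quiz scores
              13, 12, 13, 11, 14, 12, 13, 14],  # post-course quiz scores
})

# Two-way ANOVA with a time x degree interaction; the interaction p-value
# indicates whether score improvement differs by educational background
model = ols("score ~ C(time) * C(degree)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Post-hoc Tukey HSD on per-participant score change, grouped by further degree
wide = df.pivot(index="participant", columns="time", values="score")
change = wide["post"] - wide["pre"]
degree = df[df["time"] == "pre"].set_index("participant")["degree"]
print(pairwise_tukeyhsd(change.values, degree.loc[change.index].values))
```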

Discussion

Our study demonstrates the success of using a multimodal workshop to teach undergraduate medical students critical appraisal skills. Following completion of our course, confidence scores rose significantly from the pre-course average. In addition, students scored significantly higher in the quiz after the course, with the average score increasing from 61.5% pre-course to 79.9% post-course (p < 0.001). Our workshop therefore effectively addressed medical students' lack of both knowledge of, and confidence in, critical appraisal.

We believe the efficacy of our course derived in particular from its novel mixed-methods design, which took a staged approach to learning: passive knowledge transfer in a lecture format, followed by an interactive critical appraisal workshop, and culminating in a viva-style examination that demanded higher-level engagement from learners. Similar to our study, previous research has highlighted the importance and success of implementing curricula that combine lectures with case-based discussion.24 Further, a study assessing the optimal way of teaching statistics, a fundamental part of evidence-based medicine, showed greater success with a variety of media, including videos and workbooks, than with traditional teaching methods.25 These results, as well as the findings of our study, may be owing to the accommodation of a variety of learning styles, in keeping with experiential learning theory,26 allowing the information to be delivered and practised in a range of settings.25,27–29 Interestingly, multimodal learning has been shown to be the most preferred learning style amongst medical students.30 Awareness of students' learning styles, and addressing these appropriately, may lead to an overall improvement in their performance.31,32 As such, we believe a greater focus should be placed on using multimodal approaches, similar to our programme, to develop students' critical appraisal skills.

The significantly lower pre-course quiz and confidence scores also highlight a systemic problem with the methods currently used to teach critical appraisal skills throughout medical school.33,34 Similar results have been shown in studies of both medical students and junior doctors.25,35,36 For instance, a cross-sectional study highlighted that 64.6% of medical students did not fully understand basic research methodology.33 Importantly, junior doctors' knowledge of critical appraisal appeared to decrease as their clinical training progressed, likely due to a lack of reinforcement.14,37 Not only doctors but all allied healthcare professionals should be trained in understanding research in order to improve patient care.38 This could be achieved using our proposed approach, particularly as our results showed improved scores when controlling for previous scientific degrees as well as year of study. Further, our approach could be expanded to teach critical appraisal to other healthcare professionals, which should be assessed in further studies.38 Without improvement in the methods used to teach critical appraisal in any curriculum, students will lack both the knowledge and confidence to appraise research articles, and ultimately their future evidence-based practice will suffer.

Figure 2 Critical appraisal knowledge quiz scores before and after critical appraisal course; p < 0.001

It is important to consider the limitations of our study. Although our study included senior medical students from 17 different medical schools across the UK, the majority of attendees were enrolled at the University of Cambridge or Imperial College London. Both of these universities offer 6-year undergraduate courses that include an integrated science degree. As such, our study population is likely to have had a better baseline understanding of critical appraisal than students who do not complete an intercalated course. Despite this, both confidence in and knowledge of critical appraisal were significantly improved by the interactive course, reflecting scope for developing these skills regardless of whether students hold additional scientific degrees. Although most quiz scores improved post-course, for four of the quiz questions the pre-course scores were equal to or slightly higher than the post-course scores. We attribute this to already high pre-course scores (Questions 2 and 15), differing response rates (Question 13) and question difficulty (Question 12). Additionally, the post-course evaluations were performed immediately after the workshop in order to minimise attrition, but consequently did not allow us to assess the long-term effects of our teaching. Finally, our course was aimed particularly at helping final-year medical students with their application to the AFP in the UK, in which critical appraisal forms an important part of the selection process. As such, the population in our study is likely to have had a greater interest in, and motivation to understand, evidence-based medicine. However, we believe that this approach would benefit medical students in any year of study and regardless of their future ambitions, as evidence-based medicine is a fundamental component of clinical practice.

With millions of papers published each year, critical appraisal is an increasingly important skill for healthcare professionals to master, as it allows them to judge the quality of research and its relevance to their patients.39,40 Yet the development of these skills appears to be neglected in current undergraduate medical education. Our study highlights the benefit of a mixed-modality approach, which can not only benefit medical education but also reduce barriers to evidence-based clinical practice.

This study demonstrates the efficacy of an innovative 2-day programme delivering critical appraisal teaching to senior UK medical students. Before the course, self-assessed confidence and objective critical appraisal skills were poor, despite most students having received prior formal training in evidence-based medicine. Post-course assessment demonstrated significant improvements in both parameters. We therefore believe that implementation of a validated, multimodal approach, such as our programme, within the undergraduate curriculum would rectify the demonstrated gap in critical appraisal training and greatly enhance student performance in this often neglected but fundamental skill.

 

References

1 Burls A. What is Critical Appraisal? Evidence-Based Medicine. 2nd ed. London: Hayward Medical Communications; 2009.

2 Druss BG, Marcus SC. Growth and decentralization of the medical literature: implications for evidence-based medicine. J Med Libr Assoc 2005; 93: 499–501.

3 Landhuis E. Scientific literature: Information overload. Nature 2016; 535: 457–8.

4 Information overload. Nature 2009; 460: 551.

5 Clancy C, Collins FS. Patient-Centered Outcomes Research Institute: the intersection of science and health care. Sci Transl Med 2010; 2: 37cm18.

6 Gabriel SE, Normand S-LT. Getting the methods right — the foundation of patient-centered outcomes research. N Engl J Med 2012; 367: 787–90.

7 Engineering a Learning Healthcare System. Washington, DC: National Academies Press; 2011.

8 Weightman AL, Williamson J. The value and impact of information provided through library services for patient care: a systematic review. Health Info Libr J 2005; 22: 4–25.

9 Moore M. Teaching physicians to make informed decisions in the face of uncertainty: librarians and informaticians on the health care team. Acad Med 2011; 86: 1345.

10 Dawes M, Summerskill W, Glasziou P et al. Sicily statement on evidence-based practice. BMC Med Educ 2005; 5: 1.

11 Elessi K, Albarqouni L, Glasziou P et al. Promoting critical appraisal skills. Lancet 2019; 393: 2589–90.

12 Burrows SC, Tylman V. Evaluating medical student searches of MEDLINE for evidence-based information: process and application of results. Bull Med Libr Assoc 1999; 87: 471–6.

13 Aldugieman TZ, Alanezi RS, Alshammari WMG et al. Knowledge, attitude and perception toward evidence-based medicine among medical students in Saudi Arabia: analytic cross-sectional study. J Fam Med Prim Care 2018; 7: 1026–31.

14 Windish DM, Huot SJ, Green ML. Medicine residents’ understanding of the biostatistics and results in the medical literature. JAMA 2007; 298: 1010.

15 Hryciw N, Knox A, Arneja JS. How well are we doing at teaching critical appraisal skills to our residents? A needs assessment of plastic surgery journal club. Plast Surg 2017; 25: 261–7.

16 Widyahening IS, Findyartini A, Ranakusuma RW et al. Evaluation of the role of near-peer teaching in critical appraisal skills learning: a randomized crossover trial. Int J Med Educ 2019; 10: 9–15.

17 Weberschock TB, Ginn TC, Reinhold J et al. Change in knowledge and skills of Year 3 undergraduates in evidence-based medicine seminars. Med Educ 2005; 39: 665–71.

18 Sánchez-Mendiola M, Kieffer-Escobar LF, Marín-Beltrán S et al. Teaching of evidence-based medicine to medical students in Mexico: a randomized controlled trial. BMC Med Educ 2012; 12: 107.

19 Milinkovic D, Field N, Agustin CB. Evaluation of a journal club designed to enhance the professional development of radiation therapists. Radiography 2008; 14: 127–7.

20 Cramer JS, Mahoney MC. Introducing evidence based medicine to the journal club, using a structured pre and post test: a cohort study. BMC Med Educ 2001; 1: 6.

21 Khajuria A, Cheng K, Levy J. Effect of a national focused course on academic medicine for UK candidates applying for a clinical academic programme. J R Coll Physicians Edinb 2017; 1: 65–9.

22 Salbach NM, Jaglal SB. Creation and validation of the evidence-based practice confidence scale for health care professionals. J Eval Clin Pract 2011; 17: 764–800.

23 Fritsche L, Greenhalgh T, Falck-Ytter Y et al. Do short courses in evidence based medicine improve knowledge and skills? Validation of Berlin questionnaire and before and after study of courses in evidence based medicine. Br Med J 2002; 325: 1338.

24 Marantz PR, Burton W, Steiner-Grossman P. Using the case-discussion method to teach epidemiology and biostatistics. Acad Med 2003; 78: 365–71.

25 Freeman J V, Collier S, Staniforth D et al. Innovations in curriculum design: a multi-disciplinary approach to teaching statistics to undergraduate medical students. BMC Med Educ 2008; 8: 28.

26 Kolb DA. Experiential Learning: Experience as the Source of Learning and Development. USA: Prentice Hall Inc.; 1984.

27 Coffield F, Moseley D, Hall E et al. Learning styles and pedagogy in post-16 learning: a systematic and critical review. https://www.voced.edu.au/content/ngv:13692 [accessed 20 October 2019].

28 Hernández-Torrano D, Ali S, Chan CK. First year medical students’ learning style preferences and their correlation with performance in different subjects within the medical course. BMC Med Educ 2017; 17.

29 Engels PT, De Gara C. Learning styles of medical students, general surgery residents, and general surgeons: Implications for surgical education. BMC Med Educ 2010; 10.

30 Samarakoon L, Fernando T, Rodrigo C. Learning styles and approaches to learning among medical undergraduates and postgraduates. BMC Med Educ 2013; 13.

31 Feeley AM, Biggerstaff DL. Exam success at undergraduate and graduate-entry medical schools: is learning style or learning approach more important? A critical review exploring links between academic success, learning styles, and learning approaches among school-leaver and graduate-entry medical students. Teach Learn Med 2015; 27: 237–44.

32 Alghasham AA. Effect of students’ learning styles on classroom performance in problem-based learning. Med Teach 2012; 34.

33 Hamdan A. Medical students still lack skills needed to practise evidence-based medicine. J R Soc Med 2012; 105: 324.

34 Ologunde R, Di Salvo I, Khajuria A. The CanMEDS scholar: the neglected competency in tomorrow’s doctors. Adv Med Educ Pract 2014: 383.

35 Astin J, Jenkins T, Moore L. Medical students’ perspective on the teaching of medical statistics in the undergraduate medical curriculum. Stat Med 2002; 21: 1003–6.

36 Windish DM, Huot SJ, Green ML. Medicine residents’ understanding of the biostatistics and results in the medical literature. JAMA 2007; 298: 1010.

37 Windish DM. Brief curriculum to teach residents study design and biostatistics. BMJ Evidence Based Med 2011; 16: 100–4.

38 Odierna DH, White J, Forsyth S et al. Critical appraisal training increases understanding and confidence and enhances the use of evidence in diverse categories of learners. Heal Expect 2015; 18: 273–87.

39 Ioannidis JPA, Stuart ME, Brownlee S et al. How to survive the medical misinformation mess. Eur J Clin Invest 2017; 47: 795–802.

40 Farrell PM, Grossman J. Using science to improve health care delivery and patient care Dean’s Corner. Wisconsin Med J 2004; 103: 87–8.

41 Krag M, Marker S, Perner A et al. Pantoprazole in patients at risk for gastrointestinal bleeding in the ICU. N Engl J Med 2018; 379: 2199–208.

Financial and Competing Interests: 
No conflicts of interest declared