Academy of Medical Royal Colleges
Monday, 13 June 2011

1. Introduction

1.1 This paper sets out quality indicators for the commissioning of medical education and training in England, developed by a task group established by the Medical Programme Board (MPB). It covers undergraduate medical students in clinical placements, and foundation and specialist trainees in clinical posts, across hospital, primary care and non-NHS training settings.

1.2 The proposed indicators are circulated here for consideration by interested parties. Comments are requested by 30 June 2011 with a view to the indicators being tested during the next twelve months. The indicators will be incorporated into the educational contract between the commissioner and education provider as well as the individual educational contract for the student/trainee.

2. Background

2.1 At the request of the Department of Health, the MPB set up a task and finish group early in 2010 (membership at Appendix 1) to develop a set of quality indicators to be used in the commissioning of medical (postgraduate and undergraduate) education and training in England. The aim of this work is to drive up the quality of medical education and training by ensuring that those commissioning this £2.5bn element of the multi-professional education and training (MPET) budget have the right levers and indicators to secure the best possible results.

2.2 Since the work was initiated there have been a number of significant changes including:

  • a new coalition government;
  • publication of a White Paper setting out the government's reforms to the future NHS;
  • identification of £20bn of financial savings over four years to be released into frontline services for patients; and
  • the move of the medical profession to a single regulatory body.

2.3 In discussion with the Department of Health (DH), however, it is clear that whatever new workforce and education structures and systems operate in the future, there will still be a need for good quality indicators to measure the effectiveness of medical education delivery whilst ensuring better outcomes for patients and value for money.

2.4 The White Paper Equity and excellence: Liberating the NHS signalled a new approach to workforce planning, education and training that should give employers greater autonomy and accountability for planning and developing the workforce, alongside greater professional ownership of the quality of education and training. It highlighted the importance of patient-centred clinicians. Work by Stewart et al2 has demonstrated that learner-centred trainers promote patient-centred doctors.

2.5 A public consultation paper on the future education and training system, Liberating the NHS: Developing the Healthcare Workforce, was launched and closed on 31 March 2011. It proposes giving employers greater responsibility for planning and developing the healthcare workforce, to be discharged through “local” provider networks, while the quality of education and training will remain under the stewardship of the healthcare professions. A new Special Health Authority, Health Education England (HEE), will be created to support healthcare providers in these tasks and to provide a strategic and national overview. The consultation also highlights the importance of the quality metrics work and the need to provide much better information on the quality of education and training and the outcomes being achieved.

2.6 The Task and Finish Group (the Group) has therefore framed the quality metrics to reflect these proposed changes and the need to place patient safety, experience and outcomes at the centre of what is measured. Further development may be needed as greater clarity about future education commissioning and the outcomes framework for the new system emerges.

2.7 From the outset the Group was clear that the work on quality indicators for medical education needed to build on the standards set, and the data required, by the General Medical Council (GMC).

The Group, in identifying the priority areas, has taken note of the recommendations from the recent Temple and Collins4 reviews, as well as of the Medical Programme Board’s own report5 on the quality of training following the work led by Dr Ian Wilson in 2009.

2.8 A draft set of quality indicators was considered with stakeholder representatives at a meeting on 17 November 2010 and the range of indicators was supported.

Stakeholders' plea was to establish a small set of measurable indicators without increasing the burden of regulation. The Group has taken this on board by identifying a small number of national mandatory measures whilst offering a range of indicators that can be adopted locally. It has recognised the need to balance the tensions that exist between focusing monitoring on key outcomes and the critical factors for success in achieving those outcomes, selecting metrics that are valid, reliable and cost-effective, and avoiding the creation of perverse incentives. The full range of possible metrics identified in Appendices 2 and 3 establishes a sound starting point.

2.9 The development of the medical indicators will link with the piloting and testing of the new funding arrangements being developed as part of the MPET Review, which is planned for implementation from April 2012. They will also be tested in the pilots alongside the current non-medical educational indicators with a view to developing a single educational quality assurance framework across the whole of MPET responsibilities.

COMMENTS FOR UK ACADEMY OF MEDICAL ROYAL COLLEGES ON
HEALTH EDUCATION ENGLAND
MEDICAL INDICATORS - EDUCATION COMMISSIONING FOR QUALITY

The Royal College of Physicians of Edinburgh is pleased to respond to the Academy of Medical Royal Colleges’ request for comments on the Health Education England consultation, Medical Indicators – Education Commissioning for Quality.

Background

This paper proposes that three quality indicators be set as national priorities for medical education:

  • Board level engagement in education and training.
  • Clinical leadership/clinical and trainee engagement.
  • Safe (trainee/student) supervision.

The College offers the following comments for inclusion in the UK Academy response to the proposed list of national indicators for commissioning medical education. “National” should reflect medical education across the UK, and the Colleges should make this point forcefully in the Academy response.

The Task and Finish Group is aware of the potential for increased duplication of roles between deaneries/deanery functions and local education provider (LEP) responsibility for education, and this merits greater attention.  Also, implementation strategies are not laid out, and it is difficult to comment on their feasibility at this stage.  The oversight function of HEE will only be effective if the quality indicators can be implemented and HEE is in a position to respond rapidly to emerging issues.

A key message is that the shortlist of national quality indicators is not sufficiently robust and should include others, lest the LEP partners in the skill networks (or their successor organisations according to the outcome of the “listening exercise”) take a minimalist approach, given their service priorities.

Within this main message the following are important points to raise:

  • One or more outcome-based indicators should be included in the national priorities list – for postgraduate education, ARCP/RITA ‘success’ rates would be the best choice.  This would have the advantage of also covering national examination performance – for physicians, MRCP(UK) and the SCEs.  The Task and Finish Group argues against this on the basis that trainees’ attachments may not all have been within the same LEP.  This might be
  • The other outcome-based indicator that should be considered is workplace-based assessment (WPBA).  The number of WPBAs performed in any one attachment would clearly indicate that training is being delivered.  However, it is important that “sign-off”, based on WPBAs, includes a clear element of assessment of competence in addition to experiential learning.
  • Time for Trainers to train should also be added to the national priority list.  It is clearly a national issue that needs to be addressed, and this would be an excellent lever for achieving a more realistic approach to job planning by employers.
  • Safe supervision of trainees will be difficult to quantify accurately.  The potential sources of evidence do not appear to measure this directly, and much reliance will have to be placed on the GMC training surveys.  No LEP will collect data on calls made to senior clinicians and their response.  Induction and handover processes, rotas, and consent and prescribing audits can be measured, but these do not provide direct information about supervision.
  • Undergraduate education requires a similar national focus on quantifiable outcome measures rooted in national education standards to demonstrate a national level of competence of medical graduates.
  • ‘Clinical leadership/Clinical and trainee engagement’ (which should surely read ‘Clinician and trainee engagement’).  This indicator is too subjective to be measured in any meaningful way.  The only sources of evidence suggested are the Annual Report demonstrating CEO commitment to medical engagement and the GMC Trainee Survey, and it seems unlikely that these would provide an objective measure.  The possibility of ‘other measures’ of medical engagement is mentioned, but none are specifically suggested.
  • There is no mention of patient feedback within the suggested metrics, which is surprising given the exposure of patients to students and trainees throughout their training.