
Health professional education and practice in preventing and controlling infections in New Zealand: a review to inform strategies for enhancing practitioner competencies and patient safety
Linda Gulliver,1 Heather Brooks,2 Linda Kinniburgh,3 Rebecca Aburn,4 Jo Stodart4 and Joy Rudland1

  1. Otago Medical School, University of Otago, Dunedin, New Zealand
  2. Microbiology Department, University of Otago, Dunedin, New Zealand
  3. School of Nursing, Otago Polytechnic, Dunedin, New Zealand
  4. Infection Prevention and Control, Southern District Health Board, Dunedin, New Zealand

Correspondence to Dr Linda Gulliver; linda.gulliver{at}otago.ac.nz

Abstract

Objective Quality assurance for reducing infections is a key objective of the WHO’s global action plan targeting antimicrobial resistance, yet no studies have employed a multifaceted approach to review health professional education and practice in infection prevention and control (IPC). This study completed such a review.

Methods and analysis New Zealand medical and nursing curricula were analysed for IPC-related teaching and assessment. Clinicians (undergraduate to senior) received peer-expert evaluation while performing procedures demonstrating IPC competencies. Patient and clinician self-evaluation followed. Hospital IPC practice monitoring was also reviewed.

Results Medical curricula had approximately twice the total IPC-related theory compared with nursing (79.71 vs 41.66 hours), emphasising microbiology. IPC theory in nursing curricula was applied, emphasising health and safety. Junior nursing students were rigorously taught (16.17 hours) and assessed (2.91 hours) in practical IPC competencies, whereas little practical instruction (2.62 hours) and no formal assessment existed for junior medical students. IPC teaching chiefly occurred during medical students’ senior clinical years and was opportunistic, rotation-specific or delivered in introductory sessions. Senior medical and nursing students were expected to be IPC-proficient but no formal assessment occurred. Peer review generally revealed satisfactory practice; however, both professions had lapses in hand hygiene and asepsis, and incorrect donning, removal and use of personal protective equipment. Clinician confidence in providing, and being peer-reviewed for, best IPC practice, and patients’ confidence in receiving best IPC care, were positively associated with clinician experience. Trainee interns, whose confidence in IPC practice was not matched by the same desire for monitoring/feedback as their senior colleagues, were the exception.

Conclusion Multifaceted approaches to IPC quality assurance have utility in identifying gaps, reducing infection transmission and reassuring staff and patients.

  • continuous quality improvement
  • infection control
  • safety culture
  • health professions education
  • standards of care

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.


Key messages

What is already known about this subject?

  • Antimicrobial resistance presents a real and imminent threat to mankind. To our knowledge, despite quality assurance for reducing infections being a key objective of the WHO action plan targeting this threat, there have been no studies representing a holistic approach to assessing both the education and the clinical infection prevention and control (IPC) competencies of front-line health professionals. There have, instead, been several isolated studies targeting predominantly the hand hygiene practices of doctors and nurses, with little on other important IPC practices.

What does this study add?

  • The present study takes a multifaceted approach to a stocktake of health professional preparedness for a postantibiotic era or other microbial challenges. Importantly, such a stocktake includes the patient voice, giving a novel angle to quality assurance in the clinical environment.

How might this impact on clinical practice or future developments?

  • The piloting of expert peer IPC practice review, clinician self-reflection and particularly patient feedback to guide quality assurance in IPC practices will inform future larger studies and encourage public dissemination of results, conceivably influencing practice guidelines and improving patient outcomes.

Introduction

Increasingly, ‘last resort’ antibiotics are required to treat common multidrug-resistant bacteria that cause severe infections and death.1–4 With a dearth of novel antibiotic candidates in the clinical pipeline, the WHO warns of an impending postantibiotic era.5 Drug-resistant diseases are currently responsible for an estimated 700 000 deaths globally each year and, without action, this number could rise to 10 million by 2050.6 Urgent alternative approaches are needed to curb infection rates and antimicrobial resistance (AMR), including the re-examination of some basic but important infection prevention and control (IPC) measures taught to health professionals.3 6 Studies on health professional hand hygiene (HH), central to infection control, reveal poor practice, especially among doctors, that is linked to hospital-acquired/healthcare-associated infections (HAI).7–9 Additionally, health professionals lack knowledge of what constitutes ‘safe practice’ in protecting themselves and their patients from transmissible infections.9 10 This is despite evidence that medical and nursing curricula explicitly teach subjects critical to infection awareness, such as prevalence and transmission.

Research additionally suggests that education around HAI as a health and safety issue receives little attention internationally in medical curricula.11 Assessment of this important component of students’ learning is restricted mainly to multiple-choice questions and a small part of the Objective Structured Clinical Examination (OSCE),11 with no time given over to reflective analysis of practice. In the limited nursing studies into the teaching and assessment of infection control with respect to HH, gaps have also been identified;12 13 however, nurses consistently perform better than doctors in practice.14 15

HH is only part of the equation, although a critical part. Patients are also vulnerable to the consequences of health practitioner ignorance of good aseptic technique (AT) and effective patient isolation technique (PIT). In 2014, published results of a large HAI prevalence survey estimated the burden of HAI in US acute-care hospitals to be 722 000 patients in 2011 alone, 75 000 of whom died during their hospitalisation.16 In New Zealand, an estimated 10% of hospitalised patients will develop an HAI at some stage during their stay,17–19 resulting in deleterious outcomes for patients and adding to the cost burden of healthcare. Sound procedural knowledge of IPC practices is therefore essential for healthcare practitioners in every healthcare setting.

The present study involved one New Zealand tertiary education provider of nursing and one of medical undergraduate education (two separate institutions). Approximately 110 Bachelor of Nursing (BN) students and 300 Bachelor of Medicine and Bachelor of Surgery (MB ChB) students enter these health professional programmes. Medical students complete the first 2 years of their degree at a central university campus with most students entering medicine following a health sciences first year. The need to provide adequate clinical training sees the medical class divided into three at the end of the third year, with students completing their more senior years (4–6) in three geographically distributed clinical schools/campuses and their associated tertiary care hospitals. Similarly, third year nursing students become dispersed following 2 years on a central campus.

Our study aimed to enquire into medical and nursing education and practice in three key areas of IPC: patient isolation (PI), AT and HH. It was hypothesised that gaps existed in curricula and in clinical practice that could be targeted to improve outcomes involving infection transmission. In addition to tracking IPC-related curricula in BN and MB ChB degrees, this study piloted a novel approach to quality control for IPC practice, combining expert peer observation, clinician self-evaluation and patient feedback.

Materials and methods

A survey approach was used to analyse curricula for the teaching and assessment of IPC-related content. Researchers contacted the course coordinators and module convenors responsible for topics incorporating IPC in the BN degree (n=5) and years 2 and 3 of the MB ChB (n=5), as well as module convenors for all disciplines in years 4–6 of the MB ChB across the three campuses (n=91). For the BN and MB ChB year 2/3 programmes, researchers interviewed convenors to complete the survey, while surveys were emailed to geographically distributed convenors to complete and return. Specifically, surveys enquired into teaching and assessment of theory and practice for asepsis and AT (A/AT), PI/PIT and HH. Course and module convenors for the BN and MB ChB degrees hold integral roles in determining what is taught and by whom, and the learning outcomes expected for their part of the curriculum. They also teach into their own modules and liaise with convenors in other disciplines, thereby providing the most reliable source for data gathering.

Surveys reviewing IPC-related undergraduate teaching and assessment

Surveys captured:

  1. Student demographics.

  2. Total number of teaching moments in HAI topics covering A/AT, PI/PIT and HH.

  3. Topic title, focus and undergraduate year covered.

  4. Teaching modality, for example, whole class lecture, ward-based tutorial.

  5. Whether theoretical or practical, formative/summative.

  6. Whether practical instruction contributed to skills portfolios.

  7. Title, qualification, expertise and experience of the tutor/lecturer.

  8. Total number of assessments in curricula with A/AT, PI/PIT and/or HH-related material, stating whether theoretical or practical, formative/summative.

  9. Assessment topic, undergraduate year assessed.

  10. Assessment type, for example, reflective essay, OSCE.

  11. Time allocated to assess component(s), nature of component(s) assessed.

  12. Title, qualification and experience of persons setting, marking and moderating assessment(s).

Review of IPC practice monitoring in the clinical setting

To investigate methods used to mentor and monitor practice, a separate questionnaire was completed by the hospital IPC charge nurse enquiring into:

  1. Elements of A/AT, PI/PIT and HH included at new staff induction, which components, to what degree.

  2. Numbers of staff learning opportunities offered by IPC staff annually.

  3. Nature/duration of teaching (topic, setting, time allocated).

  4. Staff attending IPC-related teaching.

  5. Whether sufficient IPC-expert staff were available for mentoring and monitoring of evidence-based best practice.

  6. If hospital staff had protected time to attend IPC sessions/uptake of sessions offered.

Expert peer observation and evaluation of IPC practice

To pilot expert peer observation for IPC quality assurance, IPC nurse specialists sought to consent, brief and observe:

Year 3 (final year) nursing students; year 6 (final year) medical students (trainee interns (TIs)); early career doctors (up to 5 years postgraduate); early career registered nurses (up to 5 years postgraduate); experienced/senior (>10 years) postgraduate doctors; and experienced/senior (>10 years) postgraduate registered nurses. Excluded were early career and experienced doctors and nurses who were not New Zealand trained. Recruitment was face to face (f2f), via medical school Facebook, email, flyers and Moodle/Blackboard. The aim was to recruit at least 10 participants into each cohort.

Clinicians were observed performing a 15–20 min minor procedure, demonstrating HH with AT or PIT, on a preconsented patient. The procedure was one with which the clinician was considered to be familiar and for which the patient was normally scheduled. A de-identifying code matched clinicians to patients. Clinicians chose from the following: performing venipuncture, doing a surgical wound dressing, inserting an intravenous cannula (IV), central venous catheter (CVC) or peripherally inserted central catheter (PICC) line; or donning and removing personal protective equipment (PPE). Observers used iPads to complete marking schedules prepopulated with procedure-specific checkpoints (ChPs). The following 4-point pass/fail scale was then applied (the example provided is for AT/surgical wound dressing):

CP—Clear pass: Maintained sterile flow with correct HH moments.

P—Pass: Maintained sterile flow but missed HH moment.

BF—Bare fail: Attempted sterile flow but did not attend to HH, that is, did not allow Chlorex hand sanitiser to dry or did not wear gloves.

T—Terminate: No sterile flow evident/no HH/no patient explanation/multiple attempts.

Data were transferred to a central server using REDCap.
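To make the structure of these observation records concrete, the sketch below models a marking schedule with its ChPs and the 4-point scale described above. This is an illustrative data model only, not the study’s actual REDCap instrument; the class names, field names and example checkpoints are hypothetical.

```python
from __future__ import annotations
from dataclasses import dataclass, field
from enum import Enum


class OverallGrade(Enum):
    """The 4-point pass/fail scale applied by the expert peer observer."""
    CLEAR_PASS = "CP"  # maintained sterile flow with correct HH moments
    PASS = "P"         # maintained sterile flow but missed an HH moment
    BARE_FAIL = "BF"   # attempted sterile flow but did not attend to HH
    TERMINATE = "T"    # no sterile flow/no HH/no patient explanation/multiple attempts


@dataclass
class Checkpoint:
    """One procedure-specific checkpoint (ChP) ticked off by the observer."""
    description: str
    achieved: bool | None = None  # None until the observer records a result


@dataclass
class MarkingSchedule:
    """One observed procedure, keyed by the de-identifying clinician/patient code."""
    clinician_code: str
    procedure: str  # eg 'surgical wound dressing' (AT) or 'donning and removing PPE' (PIT)
    checkpoints: list[Checkpoint] = field(default_factory=list)
    overall: OverallGrade | None = None


# Hypothetical example record for an AT/surgical wound dressing observation
schedule = MarkingSchedule(
    clinician_code="C-017",
    procedure="surgical wound dressing",
    checkpoints=[
        Checkpoint("Performed HH before touching the patient"),
        Checkpoint("Allowed hand sanitiser to dry before donning gloves"),
    ],
)
```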

Clinician self-evaluation of IPC practice

Following procedures, clinicians completed a self-evaluation of their performance, which also captured information relating to their acquisition of IPC skills, personal attitudes towards IPC and perceived barriers to best practice. A debrief followed and feedback on their performance was given on request.

Patient perspective/patient involvement statement

Patient involvement occurred solely in the clinical phase of the research, in consideration of the 10% HAI rate previously alluded to and the importance research staff placed on the patient voice informing patient safety. Patient surveys included questions relating to the burden of the interventions, but patients were not overtly questioned on the time required of them as this was minimal. Survey questions aligned with the clinician self-evaluation questions, capturing the patient perspective. Additionally, they enquired into the extent to which patients would feel empowered to request a higher level of practitioner IPC compliance where they considered it to be inadequate (eg, to request that the doctor wash their hands). The piloting of patient feedback to guide quality assurance in IPC practices will inform future larger studies and public dissemination of results, conceivably influencing practice guidelines.

Patient and clinician surveys were paper-based and used a 5-point Likert Scale, with 1 signalling strong agreement and 5 signalling strong disagreement. Demographic data were captured and a free-text option was available. Online supplementary appendices 1–3 provide examples of (A) the expert peer observer marking schedule for a PIT procedure (donning and removing PPE), (B) the clinician self-evaluation questionnaire and (C) the patient evaluation questionnaire.


Data analysis

Differences between cohorts and conditions were analysed using one-way analysis of variance with post hoc Mann-Whitney U tests or t-tests (GraphPad Prism V8.4.3; GraphPad Software, San Diego, California, USA). Confidence intervals were 95%. P values <0.05 were considered statistically significant.
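For readers wanting to reproduce this style of analysis outside GraphPad Prism, the sketch below shows the equivalent tests using SciPy. The cohort scores are hypothetical 5-point Likert responses invented for illustration; they are not study data.

```python
# Sketch of the reported analysis pipeline using SciPy instead of GraphPad Prism.
# Scores below are hypothetical Likert responses (1 = strongly agree ... 5 = strongly disagree).
from scipy import stats

trainee_interns = [1, 1, 2, 1, 2]
ec_doctors = [2, 2, 3, 2]
snr_nurses = [1, 1, 2, 1, 1]

# One-way analysis of variance across the cohorts
f_stat, p_anova = stats.f_oneway(trainee_interns, ec_doctors, snr_nurses)

# Post hoc pairwise comparisons: Mann-Whitney U for the ordinal Likert data,
# or Welch's unpaired two-tailed t-test (as used for the curriculum comparisons)
u_stat, p_mw = stats.mannwhitneyu(trainee_interns, ec_doctors, alternative="two-sided")
t_stat, p_welch = stats.ttest_ind(trainee_interns, ec_doctors, equal_var=False)

ALPHA = 0.05  # p values below 0.05 treated as statistically significant
print(f"ANOVA p={p_anova:.3f}, Mann-Whitney p={p_mw:.3f}, Welch t-test p={p_welch:.3f}")
```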

Results

This study commenced in 2016 and ended in 2018. Curriculum tracking of IPC-related material required retrospective analysis of the 2015 student cohorts (MB ChB students from entry in year 2 to exit in year 6 and BN students from entry in year 1 to exit in year 3). Two hundred and ninety-eight students and 111 students entered their first year of medicine and nursing, respectively. One hundred and sixty-one (54%) of the medical cohort and 102 (92%) of the nursing cohort were female.

Teaching of IPC theory and practice

Because senior students disperse to different geographical locations to complete their clinical experience, data sets were collated separately for early and advanced learning in medicine and nursing. Figure 1 shows total teaching hours in curricula dedicated to HAI topics in the first 2 years of medicine and nursing. Medical students received approximately twice the total amount of IPC-related theory compared with nursing students (79.71 vs 41.66 hours); 69% of this was via microbiology/infection and immunity whole class lectures and labs, delivered by experienced microbiologists, a clinical micro/immunologist and epidemiologists, and 31% was delivered in small group tutorials (case-based, with 10–11 students).

Figure 1

Total hours of IPC-related teaching in the first 2 years of medicine and nursing degrees. IPC, infection prevention and control.

IPC-related theory in nursing was delivered through an even distribution of lectures, online learning and labs, with emphasis on applied microbiology/infection and immunity and quality and safety. Lectures and labs were delivered by experienced nurse clinicians with postgraduate science degrees (Masters and PhD). Notably, nursing students received approximately six times the total amount of IPC-related practical instruction compared with their medical student colleagues (16.17 vs 2.62 hours). Practical instruction was delivered alongside theoretical underpinnings relating to A/AT, PI/PIT and HH, with competence in handwashing reinforced weekly throughout year 1. By comparison, medical students in years 2 and 3 received a total of less than 3 hours of practical instruction, involving HH and standard precautions only. There were no significant differences in the average time per session that nursing and medical students received for instruction in IPC theory or practice (p=0.60 and p=0.58, respectively; Mann-Whitney test).

Assessment of IPC theory and practice

Figure 2 shows total time allowed for assessment of IPC-related topics in the first 2 years of the MB ChB and BN degrees. Assessment was minimal in medicine, totalling just 59 min of summative assessment of theory. Notably, medical students received no practical IPC assessment in their first 2 years of training (HH was presumed completed outside the examination room before commencing OSCEs, so was neither observed nor assessed by examiners). By contrast, nursing students received nearly 3 hours of assessment in IPC-related theory and practice. IPC-related assessment, both formative and summative, contributed to skills portfolios for nursing students only. In contrast to IPC-related instruction, nursing students received a greater average amount of time per individual assessment of IPC-related theory than medical students in the first 2 years of their degrees (p=0.0002; Welch’s unpaired two-tailed t-test). Assessment of IPC practice was not compared, as junior medical students received none.

Figure 2

Total hours of IPC-related assessment in the first 2 years of medicine and nursing degrees. IPC, infection prevention and control.

Exact hours of teaching and assessment could not be derived for years 4–6 of medicine and year 3 of nursing because students were in a variety of clinical settings encompassing primary, secondary and tertiary care in geographically spread locations, including locations abroad during sixth year TI electives. Module convenors, responsible for clinical rotations and disciplines, were therefore surveyed by email.

The average response rate across clinical educators in medicine was just 43%, but responses revealed that most IPC-related teaching was opportunistic and rotation-specific, especially around A/AT and PI/PIT. Some excellent formal instruction did occur (eg, as part of clinical orientation and introductions to the operating theatre). Log books used by medical students, however, lacked details relating specifically to the IPC components of teaching, instead listing only the procedural skill mastered. IPC-related assessment was also limited in years 4–6 of medicine, reported as ‘sometimes part of an OSCE’, with no high stakes assessment. Rather, IPC competence was ‘expected’ in the advanced years of medical training. Enquiry into nursing yielded a similar response for final year nurses, where IPC competency was not formally assessed because it was ‘expected’.

Monitoring of IPC practice in the clinical setting

Table 1 details mentoring and monitoring of IPC in a tertiary care teaching hospital serving the medical and nursing schools. In addition to a 1 hour clinical orientation lecture provided to fourth year medical students, medical staff received orientation in all aspects of IPC. Antimicrobial stewardship teaching was offered to doctors annually by experienced IPC clinical nurse specialists as a 30 min one-on-one session. Medical and nursing staff were given a 1 hour IPC orientation and offered five 45 min face-to-face sessions per year covering HH/gloves, multidrug-resistant organisms and PI, with practical scenarios and HH auditing.

Table 1

Mentoring and monitoring of HH, asepsis, aseptic technique, patient isolation and patient isolation technique in the clinical setting

Other regularly occurring learning opportunities included disease-specific and location-specific instruction, sessions on serious systemic infections, a traffic light system for triaging infections, and initiatives to raise staff and public awareness. Several IPC teaching staff were involved but clinician uptake depended heavily on workload/time availability.

Expert peer review of IPC practice in the clinical setting

Reported understaffing and resourcing issues led to a total of only 34 clinicians (57% of desired uptake) consenting to have their clinical practice observed. Participants were, however, representative of all cohorts except final year student nurses and were matched to consented patients. All four types of procedures were observed being undertaken by a mix of clinicians. For analysis, marking schedules for each procedure were scrutinised for ChPs relevant to demonstrating competence in HH, asepsis and/or PI. Each ChP achieved (an affirmative ‘yes’ tick) was coded 2, whereas a ChP not achieved was coded 1. Cohort data were combined to derive a mean and SD for each ChP. ChPs with a mean ≤1.5 were deemed poorly performed, since this equated to a failure rate of 50% or above for clinicians performing that part of the procedure. Table 2 shows procedural ChPs with a mean of ≤1.5 for all four procedures observed by peer experts. The number of IPC ChPs assessed as a proportion of all ChPs assessed per procedure is listed under the participant n value.

Table 2

IPC ChPs scoring a mean value of ≤1.5
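As a worked illustration of the coding rule above (achieved = 2, not achieved = 1, mean ≤1.5 flagged as poorly performed), the short sketch below computes the per-ChP mean and SD and applies the threshold. The checkpoint names and tick data are hypothetical, not values from Table 2.

```python
import pandas as pd

# Hypothetical observer ticks: one row per clinician, one column per checkpoint (ChP);
# an achieved ChP is coded 2 and a not-achieved ChP is coded 1.
scores = pd.DataFrame(
    {
        "HH before patient contact": [2, 2, 1, 2, 1],
        "Sanitiser allowed to dry": [1, 2, 1, 1, 2],
        "Sterile field maintained": [2, 2, 2, 2, 2],
    }
)

summary = scores.agg(["mean", "std"]).T
# A mean of 1.5 or below corresponds to a failure rate of 50% or more for that ChP
summary["poorly_performed"] = summary["mean"] <= 1.5
print(summary)
```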

Although peer experts considered venipuncture, wound dressings and insertion of IV/CVC/PICC lines all to be performed satisfactorily from both an IPC and a skills perspective, areas of concern were identified. These included clinicians not observing HH moments, contaminating sterile packaging and sterile fields, not allowing Chlorex hand sanitiser or chlorhexidine skin prep to dry and/or not wearing gloves and, most notably, poor PPE practices. Table 3 details cohort composition and overall IPC performance as judged by peer experts.

Table 3

Results of expert peer review of IPC practice

No procedures were terminated by staff (which would have indicated a cardinal issue compromising the patient or the clinician); however, 4 of the 10 clinicians (40%) who chose donning and removing PPE failed to demonstrate safe IPC practice and only 2 of the 10 were able to gain a clear pass. PPE procedural concerns included not securing or checking the fit of PPE (eg, N95 mask), not wearing the appropriate level of PPE, and incorrect and unsafe removal of PPE including face shields, goggles, gloves and gowns.

Clinician self-evaluation of IPC practice

Doctor participants included TIs (n=5), early career doctors (n=4) and experienced (senior [snr]) doctors (n=5). Nurse cohorts included early career nurses (n=5) and experienced (senior [snr]) nurses (n=15). While experienced nurses were over-represented, no year 3 nursing students could be recruited, with the timing of the peer review cited as the reason.

Analysis revealed statistically significant differences in the answering of 3 of the 14 self-evaluation questions employing Likert Scales (1: strongly agree to 5: strongly disagree); these are detailed below.

Question 3. ‘I think I took all the appropriate steps to ensure the patient’s safety and compliance as it relates to infection prevention and control’.

Both TIs (mean 1.4, SD 0.54) and senior nurses (mean 1.3, SD 0.49) were significantly more likely to feel confident in their IPC practice than their early career doctor colleagues (mean 2.25, SD 0.5); p=0.047, TIs vs EC doctors; p=0.004, senior nurses vs EC doctors.

Question 12. ‘I would feel happy to have my clinical practice in techniques like this one, peer observed from time to time’.

Senior nurses (mean 1.2, SD 0.46) and senior doctors (mean 1.2, SD 0.44) were significantly happier to have their clinical practice peer observed than either their TI (mean 2.2, SD 0.44) or early career doctor colleagues (mean 2.0, SD 0.0); p=0.001, Snr nurses vs TIs; p=0.008, Snr nurses vs EC doctors; p=0.007, Snr doctors vs TIs; p=0.009, Snr doctors vs EC doctors.

Question 14. ‘I asked the peer evaluator today to provide me with his/her feedback on the procedure I performed’.

Senior nurses (mean 1.2, SD 0.41) and senior doctors (mean 1.4, SD 0.54) were significantly more inclined to seek expert peer feedback on their clinical practice than TIs (mean 1.8, SD 0.4), early career doctors (mean 2.5, SD 0.57) and early career nurses (mean 1.75, SD 0.5); p=0.01, Snr nurses vs TIs; p=0.0001, Snr nurses vs EC doctors; p=0.03, Snr nurses vs EC nurses; p=0.02, Snr doctors vs EC doctors.

Although responses to these three questions significantly differed between cohorts, all clinicians were generally confident performing their chosen procedure and agreed that providing regular expert peer observation and feedback to clinicians was a good initiative. However, there was indecision/disagreement over whether the IPC procedure they performed had been formally taught to them from an IPC as well as a mastery perspective (ie, with emphasis on IPC proficiency alongside mastery of the skill). Three of the five senior doctors either disagreed (2) or were undecided (1) on this point, and similar mixed responses were given by TIs and by both early career and experienced nurses. Experienced doctors (and one early career doctor) also disagreed that they had been taught the procedure by someone who demonstrated proficiency in IPC and that the instruction they had received was unhurried (early career nurses expressed similar views).

Eighteen clinicians (53% of all participants) either agreed or strongly agreed that they had adjusted techniques to ‘save time’ in the busy clinical environment. The group least likely to affirm this was the senior experienced nurses (just 5/15 nurses). Free-text comments from senior doctors revealed that the level of IPC practice among clinicians could vary with specialty (eg, surgeons receive more) and that IPC in medicine was generally taught in an ad hoc fashion, on a need-to-know basis, by other clinicians. Real-time assessment and feedback was considered much more useful by this group, as evidenced by the statistical analysis.

Notably, while most clinicians self-reported they had followed recommended guidelines for AT (96%) and HH (100%), indecision existed regarding aspects of PIT, with nearly 20% of participants (nurses and doctors) reporting they had not followed recommended guidelines or were unsure if they had. Doctors and nurses also reported not having received or being unsure of having received formal PPE training.

Free-text comments included: ‘I personally think it would be good to have more supervision/scheduled peer review because many different nurses/universities teach different skills and techniques. Technique can get lost in practice’ (early career RN); ‘I have never been taught how to remove gloves and gown in the appropriate way’ (RN with 10 years’ experience); ‘I have had no formal training in PPE’ (early career RN); ‘Time constraints often have a detrimental effect on appropriate handwashing protocols’ (experienced doctor); ‘Would be helpful to have formal training demonstration on donning and removing PPE’ (early career doctor); and ‘No formal medical teaching with regards to dressing changes and procedures’ (early career doctor).

Patient feedback on clinician IPC practice

Patient feedback (n=34) was generally very positive. Patient evaluations (online supplementary appendix 3) were analysed in the same manner as clinician self-evaluations. Of the nine questions using Likert Scales, there were again three where patients’ responses differed significantly between the practitioner cohorts performing procedures.

Question 2. ‘I think the nurse/doctor felt confident performing this procedure on me’.

Patients rated senior nurses (mean 1.06, SD 0.25) as being significantly more confident in performing the procedure than early career nurses and TIs (means 1.75 and SDs 0.5 for both cohorts); p=0.001, Snr nurses vs TIs; p=0.001, Snr nurses vs EC nurses.

Question 3. ‘I think he/she took all the appropriate steps to ensure my safety and my compliance so as to minimise my chances of getting an infection’.

Senior nurses (mean 1.13, SD 0.35) were judged by patients to be superior to both TIs and early career nurses (both with means of 1.5 and SDs of 0.5) in ensuring procedures were followed that protected the patient’s safety and minimised their chances of getting an infection; p=0.01, Snr nurses vs TIs; p=0.01, Snr nurses vs EC nurses.

Question 7. ‘I think that regularly scheduled patient feedback is a good idea for encouraging clinician best practice and best patient outcomes’.

Patients matched to senior nurses held significantly more positive views on this statement (mean 1.4, SD 0.5) compared with those matched to early career nurses (mean 3.25, SD 2.06) p=0.003.

Generally, patients reported feeling safe with clinicians performing procedures on them and comfortable with clinicians’ attention to HH. Nearly all patients either agreed or strongly agreed that regular scheduled peer observation could encourage clinician best practice and best patient outcomes (96.7%), as would patient feedback (90.3%). One patient felt strongly there was ‘no need for patient feedback on clinician practice’ (80+ years age group). In response to the statement ‘When nurses and doctors perform poorly in my estimation, I feel confident with asserting my needs as a patient’, ten of the 33 patients answering this question (30.3%) either disagreed or were undecided about their ability to ‘speak up’. There was no statistical difference across cohorts regarding this statement.

Discussion

To the best of our knowledge, this is the first study to exclusively target IPC teaching and assessment in New Zealand undergraduate nursing and medical education. It has shown significant differences exist between the professions in IPC-related curricula in the first 2 years. The medical curriculum was content-heavy in IPC-related theory, with an emphasis on didactic teaching centred around microbiology, whereas nursing IPC-related theory was less dense and more applied, with an emphasis on health and safety. Of particular note was the very small amount of IPC-related practical instruction that junior medical students received compared with their nursing peers and a total lack of IPC-related assessment. Similar deficiencies in medical curricula regarding safety as it pertains to IPC practices have recently been highlighted in a systematic review involving higher education institutions.20

Junior nurses, by contrast, received rigorous practical training involving AT, PI and, in particular, HH, and were accordingly well assessed in these areas. Although barriers existed to the comprehensive tracking of IPC-related teaching in the later years of both medicine and nursing training, it was clear there was a need for more formal, scheduled IPC instruction in the clinical settings where students complete their undergraduate education. Assessment of IPC-related material, both theoretical and practical, could also be enhanced in the final years of training and perhaps routinely incorporated into high stakes/summative assessments. These suggestions echo those of Abdelaziz et al21 in their 2018 study of IPC in undergraduate nursing curricula, where they identified similar gaps. Mastery of a procedural skill should not assume mastery of the associated IPC techniques used when performing the skill. Furthermore, those teaching the skill should ideally be highly IPC proficient, yet our research and that of others22–24 suggests many may not be.

It was apparent that, for the tertiary hospital facility we examined, mentoring and monitoring of IPC practice was a high priority, with adequate specialist staff involved and many and varied learning opportunities offered. However, our study encountered great difficulty recruiting clinician participants, almost exclusively because clinicians were stressed, feeling overworked and time poor. IPC staff relayed that attendance at education sessions varied greatly and was similarly dependent on time availability for busy clinicians. This likely diluted some of the gains that could be made if clinicians had protected time, which most did not. Interestingly, clinician self-evaluations and patient evaluations of clinician performance did not always align with those of IPC experts, and this was particularly true for procedures involving PPE, which were poorly performed. Poor knowledge pertaining to correct use of the N95 respirator mask and other PPE was also a finding in the study of nurses by Abdelaziz et al.21

The present study additionally revealed that clinician confidence in providing, and being audited for, best IPC practice, and patients’ confidence in receiving best IPC care, were positively associated with clinician experience. TIs, whose confidence in IPC practice was not matched by the same desire for monitoring or feedback as their senior colleagues, were the exception, perhaps reflecting some underlying uncertainty regarding their true level of competence. Previous studies have proposed linkages between the patient experience, clinical effectiveness and patient safety.25 26

Safe IPC practice represents a simple, cheap and highly effective weapon in combating rates of infection transmission and AMR. This study has shown that clinicians are generally ready to undergo regular review of their IPC competencies in the interests of future best practice, something also supported by patients. One patient commented ‘This is an important study for the patient’. This statement may reflect the fact that nearly a third of surveyed patients expressed that they would not feel confident in asserting their rights to doctors and nurses if they perceived poor IPC practice. These results are in agreement with the findings of a recent large cross-sectional study reporting that as many as 30.5% of hospitalised patients do not feel comfortable ‘speaking up’.27 Patient feedback may not always be accurate, as the present study has shown, but its routine incorporation into clinical practice could play an important part in patient empowerment, leading to better patient-clinician communication and therefore patient safety.

Strength and limitations of the study

A strength of this New Zealand study is that it has provided empirical evidence of the need for better and more consistent IPC teaching and assessment in undergraduate nursing and, particularly, medical curricula. It has additionally highlighted the need for staff to have protected time to receive ongoing IPC training, and a general desire among health professionals to receive regular expert peer auditing of their IPC practices beyond that of HH. These practices are constantly at risk of becoming suboptimal, for example, as a consequence of ‘adjusting techniques to save time’ in challenging clinical environments. This is something that may not necessarily be commented upon or even recognised by the patient and underpins the importance of this research in incorporating the patient voice.

One limitation of this study is that greater numbers of nursing and medical curricula and larger cohorts are needed to provide fuller comparisons of IPC-related curricula and clinicians’ IPC practices. Second, difficulties mapping the later years of medicine and nursing training could mean some data may not have been captured, though any loss was likely minimal, since some modules/disciplines overlapped or were out of IPC scope (eg, ethics). Finally, senior doctors and nurses in this study may have experienced slightly different IPC-related curricula, given the time since graduation.

We conclude that a multifaceted approach such as the one employed in the present study for IPC quality assurance, when used iteratively, could promote and maintain safe IPC practice, lessening chances of infection transmission and reassuring staff and patients. To support the utility of this and similar approaches, larger studies are needed.



Footnotes

  • Contributors LG conceived the research design, obtained CALT Grant funding and University of Otago ethics, contributed to year 2/3 medical curriculum mapping, performed data analysis, wrote, revised and edited the manuscript. RA was responsible for patient and clinician recruitment, expert peer observation and clinical data acquisition. HB was responsible for year 2/3 medical IPC-related curriculum mapping. LK obtained all data mapping IPC-related material in the BN nursing curriculum. JS was jointly responsible for patient and clinician recruitment, expert peer observation and clinical data acquisition as well as clinical ethics. JR retrieved all data relating to IPC teaching and assessment in years 4–6 of the MB ChB degree. HB, LK, JS and JR all assisted with manuscript revision.

  • Funding This study was funded by the University of Otago (Committee for the Advancement of Learning and Teaching (CALT) Grant 2016).

  • Competing interests None declared.

  • Patient consent for publication Not required.

  • Ethics approval Ethics approval was obtained from the University of Otago Committee for Human Ethics (Health) H16/023. Clinical ethics approval was obtained from Health Research South, New Zealand (ID 01208).

  • Provenance and peer review Not commissioned; externally peer reviewed.

  • Data availability statement All raw data stored on Excel and REDCap files contain sensitive patient and clinician details. Ethics approval does not permit release of such data. Collated, anonymised and grouped data do exist, however, and could be provided upon reasonable request.

  • Supplemental material This content has been supplied by the author(s). It has not been vetted by BMJ Publishing Group Limited (BMJ) and may not have been peer-reviewed. Any opinions or recommendations discussed are solely those of the author(s) and are not endorsed by BMJ. BMJ disclaims all liability and responsibility arising from any reliance placed on the content. Where the content includes any translated material, BMJ does not warrant the accuracy and reliability of the translations (including but not limited to local regulations, clinical guidelines, terminology, drug names and drug dosages), and is not responsible for any error and/or omissions arising from translation and adaptation or otherwise.
