
Is general inpatient obstetrics and gynaecology evidence-based? A survey of practice with critical review of methodological issues

  • Aamir T Khan1,
  • M Nauman Mehr1,
  • Anne-Marie Gaynor2,
  • Malcolm Bowcock3 and
  • Khalid S Khan1
BMC Women's Health 2006, 6:5

DOI: 10.1186/1472-6874-6-5

Received: 13 June 2005

Accepted: 10 March 2006

Published: 10 March 2006

Abstract

Background

To examine the rates of evidence-supported care provided in an obstetrics-gynaecology unit.

Methods

The main diagnosis-intervention set was established for a sample of 325 consecutive inpatient admissions in 1998–99 in a prospective study in a UK tertiary care centre. A comprehensive literature search was conducted to obtain the evidence supporting each intervention, categorised according to the following hierarchy: Grade A, care supported by evidence from randomised controlled trials; Grade B, care supported by evidence from controlled observational studies and convincing non-randomised evidence; and Grade C, care without substantial research evidence.

Results

Of the 325 admissions, in 135 (42%) the quality of care was based on Grade A evidence, in 157 (48%) it was based on Grade B evidence, and in 33 (10%) it was based on Grade C evidence. The patterns of care were not different amongst patients sampled in 1998 and 1999.

Conclusion

A substantial majority (90%) of obstetric and gynaecological care was found to be supported by substantial research evidence.

Background

The practice of medicine is underpinned by clinical experience, intuitive clinical reasoning and applied medical research. In recent times there has been a shift towards evidence-based medicine (EBM), defined as the conscientious, explicit and judicious use of contemporary best research evidence in making decisions about the care of individual patients [1]. To what extent is this concept being applied in clinical practice? The approach gained significant momentum in the 1990s, and care is purported to be evidence-based in 78% of patients in thoracic surgery [2], 82% in general medicine [3], 95% in general surgery [4], 75% in paediatric medicine [5], 77% in ophthalmology [6], and 97% in anaesthesia [7]. No comparable estimate exists for obstetric and gynaecological care. Such evaluations are fraught with methodological difficulties, which to our knowledge have not been explored in the literature. We therefore assessed the extent of evidence-based care in an inpatient obstetrics and gynaecology unit and generated examples of potential pitfalls in research on evidence-based practice.

Methods

All inpatient admissions at a tertiary care centre during the first week of October in 1998 and in 1999 were reviewed. Our approach replicated previous study designs [2–7]. We selected cases for analysis where an active diagnosis was accompanied by an active intervention; chronic background conditions requiring ambulatory treatment that were not the reason for inpatient admission were therefore not considered. The main diagnosis-intervention sets were coded using the International Statistical Classification of Diseases and Related Health Problems (ICD-10) [8]. There were 298 and 290 consecutive cases during the two sampling periods, of which 85 and 67 were excluded as admissions without interventions (admissions requiring monitoring or observation only, e.g. threatened preterm labour that subsides spontaneously) and 45 and 66 were excluded because the ICD-10 generated confusing sets (see discussion – Table 2), in 1998 and 1999 respectively. This left 168 and 157 admissions for analysis in 1998 and 1999 respectively. To determine the codes we examined case notes, as our previous research had shown a high degree of inaccuracy in electronic data [9]. Any secondary diagnoses or interventions were excluded from analysis. The data extraction was performed by three of us (AMG, TB and PC).
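The inclusion arithmetic above can be checked with a short sketch (ours, not the authors' code; all variable names are illustrative):

```python
# Reproducing the cohort arithmetic from the Methods: screened admissions
# minus the two exclusion categories gives the analysed sample per year.
screened = {"1998": 298, "1999": 290}
no_intervention = {"1998": 85, "1999": 67}    # monitoring/observation only
confusing_icd10 = {"1998": 45, "1999": 66}    # unusable diagnosis-intervention sets

analysed = {
    year: screened[year] - no_intervention[year] - confusing_icd10[year]
    for year in screened
}
print(analysed)                # {'1998': 168, '1999': 157}
print(sum(analysed.values()))  # 325
```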
Table 1

Rates of evidence-based inpatient management in obstetrics and gynaecology

                          Total        1998+        1999+
Grades of evidence*       n     %      n     %      n     %
Grade A                   135   42     69    41     66    42
Grade B                   157   48     79    47     78    50
Grade C                   33    10     20    12     13    8
Total no. of admissions   325   100    168   100    157   100

n = number of patients

* See methods for details

+ No difference between grades for the two sampling periods (p = 0.9)

Table 2

Potential pitfalls in research on rates of evidence-based practice

Potential pitfalls and explanation of methodological issues

Grading of interventions

The interventions themselves do not carry a grade; it is their application in the appropriate clinical circumstances that earns them the relevant evidence grade. For example, a hysterectomy for menorrhagia may be appropriately graded A only when less invasive options have been exhausted. This problem cannot be addressed simply by pairing interventions with diagnoses without regard for the previous history of the problem.

Selection of main diagnosis-intervention set

Narrowing down to one main set may exclude other important aspects of care. An alternative approach would be to develop evidence-based care pathways and study compliance with the pathway as a measure of evidence-based practice.

Multi-faceted interventions

Some interventions are a composite of several aspects of care, e.g. management of labour consists of amniotomy, augmentation, support, etc. Each aspect of care may be evidence-based, but it may be difficult to assign a single grade to the composite intervention.

Coding of diagnosis-intervention sets

Coding using the International Statistical Classification of Diseases and Related Health Problems (ICD-10) (WHO, 1992) can produce confusing sets, e.g. vertex presentation and normal delivery. Here the delivery is an outcome not an intervention. The intervention is care according to labour ward guideline. Sometimes such sets could be so confusing that they cannot qualify for an evidence search.

Unit of analysis

Using the admission rather than the patient as the unit of analysis may bias results: the same patient may be counted more than once if admitted on several occasions over the study period. Using short sampling periods may mitigate this problem.

Self evident interventions

These are interventions where there are no controlled trials in support of the treatment modality but there is convincing biological or basic research evidence such that a trial would be unnecessary or unethical, e.g. caesarean section for placenta praevia. These should not be graded as Grade C.

Cost-effectiveness of care

Cost-effectiveness rather than effectiveness alone may determine provision of care. Some cases may be graded lower on the grounds that the marginal benefit of an intervention graded higher is not considered worthy of the additional expense involved.

Medical literature was searched and examined by two of us (NM and ATK) to determine whether or not the main diagnosis-intervention set for each admission was supported by research evidence, using a systematic hierarchical approach [10]. Any disagreements between the reviewers were resolved by consensus between them or by arbitration by a third reviewer (KSK). Our search interrogated the guidelines of the Royal College of Obstetricians and Gynaecologists [11], the guidelines of the Scottish Intercollegiate Guidelines Network [12], the National Electronic Library for Health [13], the Cochrane Database [14], the Reproductive Health Library [15], Clinical Evidence [16], PubMed Medline [17], and other bibliographic databases. Our search was guided by the principle of "theoretic saturation", i.e. we stopped searching as soon as relevant graded evidence was identified. This approach has wide acceptance in the research community [18].
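The "theoretic saturation" stopping rule can be sketched as a short loop over the evidence sources in their hierarchical order. This is an illustrative sketch only; the source list mirrors the paper, but the lookup function and all names are our assumptions, not the authors' actual tooling:

```python
# Hypothetical sketch of hierarchical searching with theoretic saturation:
# consult sources in order and stop at the first one that yields graded evidence.
SOURCES = [
    "RCOG guidelines", "SIGN guidelines", "NELH",
    "Cochrane Database", "Reproductive Health Library",
    "Clinical Evidence", "PubMed Medline",
]

def find_grade(diagnosis_intervention, lookup):
    """Return (source, grade) from the first source with graded evidence."""
    for source in SOURCES:
        grade = lookup(source, diagnosis_intervention)
        if grade is not None:      # saturation reached: stop searching
            return source, grade
    return None, "C"               # no substantial evidence found

# Toy lookup: pretend only the Cochrane Database holds evidence for this set.
toy = lambda src, _set: "A" if src == "Cochrane Database" else None
print(find_grade("placenta praevia / caesarean", toy))  # ('Cochrane Database', 'A')
```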

Key references from the search were used to assign each diagnosis-intervention set to one of three categories of evidence: Grade A, care supported by evidence from randomised controlled trials (RCTs); Grade B, care supported by evidence from controlled observational studies, as well as conditions for which there were no controlled data in support of the intervention but there was convincing biological or basic research evidence such that controlled trials would be unnecessary or unethical; and Grade C, care without substantial research evidence. Grade A evidence was presumed to be of the highest quality. Grade B evidence, including non-randomised prospective cohort studies and large retrospective comparative studies, was second best. Where Grade B was based on the inherent validity of an intervention we sought consensus from two to three consultants. Both Grade A and Grade B were considered substantial supportive evidence. We computed rates of evidence-based practice separately for the two sampling periods and sought examples of methodological pitfalls, which we discuss while examining the validity of our findings.

Results

Of the complete series of 325 consecutive inpatient admissions (Table 1), the care provided in 135 (42%) was supported by Grade A evidence, in 157 (48%) by Grade B evidence, and in 33 (10%) by Grade C evidence. Of the 157 cases graded B, 23 (15%) were based on convincing biological or basic research evidence for which trials would be unnecessary or unethical. There were no significant differences in the pattern of care between the patients sampled in 1998 and 1999.
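The year-on-year comparison in Table 1 can be reproduced with a standard chi-square test of independence on the 3 × 2 grade-by-year counts. This is our own illustrative check, not the authors' analysis code; it uses only the Python standard library:

```python
# Chi-square test of independence on the Table 1 counts (grade x year).
obs = {"A": (69, 66), "B": (79, 78), "C": (20, 13)}  # counts for 1998, 1999

col_totals = [sum(v[i] for v in obs.values()) for i in (0, 1)]  # 168, 157
n = sum(col_totals)                                             # 325

chi2 = 0.0
for counts in obs.values():
    row_total = sum(counts)
    for observed, col_total in zip(counts, col_totals):
        expected = row_total * col_total / n
        chi2 += (observed - expected) ** 2 / expected

# The critical value for df = 2 at alpha = 0.05 is 5.99; the statistic falls
# well below it, consistent with the paper's finding of no significant
# difference between the two sampling periods.
print(round(chi2, 2), chi2 < 5.99)  # 1.19 True
```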

Discussion

Medical practice has been criticised as not being based on solid evidence [19]. No published studies have examined the extent of EBM in general obstetric and gynaecological practice. Our study showed that the majority (90%) of the care among general obstetric and gynaecological inpatient admissions was well supported by research evidence.

The validity of our findings depends on the methodological robustness of research designs used to evaluate evidence-based practice [2–7]. The strength of our study is that we evaluated a consecutive series and examined actual case notes, which reduces the risk of bias due to misclassification of diagnosis-intervention sets. Our searches were extensive, and the grades of evidence were extracted largely from the guidelines of bodies responsible for practice in the UK. The methodological deficiencies that must be understood in order to realistically interpret our findings are summarised in Table 2, where we have identified possible pitfalls in research on rates of evidence-based practice. The generalisability of our findings may also be limited because they are based on a study performed in one hospital in 1998–99. Our study may be considered 'not particularly up to date' by some critics. However, profound changes seldom take place quickly in healthcare, as practitioners have difficulty finding, assessing, interpreting and applying evidence; another contributory factor is the slow accumulation of strong evidence over the last half decade. Hence, for the majority of the gynaecological and obstetric conditions included in our study, practice has remained unchanged. On balance, we are confident that the rates and grades summarised in our study merit consideration.

EBM requires the application of medical informatics (efficiently searching the medical literature) and clinical epidemiology (critically appraising the literature), along with intuition and experience, to improve decision-making for individual patients [20]. Provision of up-to-date medical information may promote the application of research evidence and lead to improvements in healthcare [21]. A Cochrane systematic review has concluded that audit and feedback can bring about worthwhile improvements in the performance of healthcare provision [22]. The identification of potential difficulties in undertaking research on evidence-based practice may help to clarify how future assessments of EBM should be developed. Our findings may serve as a baseline for comparison, and our careful dissection of the approach used provides insight into how rates of evidence-based practice may be more adequately assessed.

Declarations

Acknowledgements

We thank Dr. Bolarinde Ola for his help in the initiation of this project, Tracy Bingham and Pauline Claridge for data extraction from case notes, Mr. Harold Gee, Mr. Peter Thompson, and Mr. Bill Martin for their valuable feedback, and the Evidence-supported Medicine Union, UK, for funding this project.

Authors’ Affiliations

(1)
Department of Obstetrics and Gynaecology, Birmingham Women's NHS Trust
(2)
Department of Practice Development, Birmingham Women's NHS Trust
(3)
Clinical Governance and Audit Department, Birmingham Women's NHS Trust

References

  1. Sackett DL, Rosenberg WM, Gray JA, Haynes RB, Richardson WS: Evidence based medicine: what it is and what it isn't. British Medical Journal. 1996, 312: 71-72.View ArticleGoogle Scholar
  2. Lee James S, Urschel Dorothy M, Urschel John D: Is general thoracic surgical practice evidence based?. The Annals of Thoracic Surgery. 2000, 70: 429-431. 10.1016/S0003-4975(00)01483-1.View ArticleGoogle Scholar
  3. Ellis J, Mulligan I, Rowe J, Sackett DL: Inpatient general medicine is evidence based. Lancet. 1995, 346: 407-410. 10.1016/S0140-6736(95)92781-6.View ArticlePubMedGoogle Scholar
  4. Howes N, Chagla L, Thorpe M, McCulloch P: Surgical practice is evidence based. British Journal of Surgery. 1997, 84: 1220-1223. 10.1046/j.1365-2168.1997.00513.x.View ArticleGoogle Scholar
  5. Moyer VA, Gist AK, Elliott EJ: Is the practice of paediatric inpatient medicine evidence-based?. Journal of Paediatrics and Child Health. 2002, 38 (4): 347-351. 10.1046/j.1440-1754.2002.00006.x.View ArticlePubMedGoogle Scholar
  6. Lai TY, Wong VW, Leung GM: Is ophthalmology evidence based? A clinical audit of the emergency unit of a regional eye hospital. British Journal of Ophthalmology. 2003, 87 (4): 385-390. 10.1136/bjo.87.4.385.View ArticlePubMedPubMed CentralGoogle Scholar
  7. Myles PS, Bain DL, Johnson F, McMahon R: Is anaesthesia evidence-based? A survey of anaesthetic practice. British Journal of Anaesthesia. 1999, 82 (4): 591-595.View ArticlePubMedGoogle Scholar
  8. WHO: International Statistical Classification of Diseases and Related Health Problems (ICD-10). 10th revision. 1992, Geneva, World Health Organisation, 679-764.Google Scholar
  9. Ola B, Khan KS, Gaynor A, Bowcock ME: Information derived from the hospital coded data is inaccurate: the Birmingham Women's Hospital experience. Journal of Obstetrics and Gynaecology. 2001, 21: 112-113. 10.1080/01443610020025958.View ArticlePubMedGoogle Scholar
  10. Khan KS, Coomarasamy A: Searching for evidence to inform clinical practice. Current Obstetrics and Gynaecology. 2004, 14: 142-146. 10.1016/j.curobgyn.2003.12.006.View ArticleGoogle Scholar
  11. Royal College of Obstetricians and Gynaecologists, Guidelines (Good Practice). [http://www.rcog.org.uk/guidelines.asp?PageID=105]
  12. Scottish Intercollegiate Guidelines Network (SIGN): Clinical Guidelines. [http://www.sign.ac.uk/guidelines/index.html]
  13. National Health Service, National Electronic Library for Health (NELH). 2004, [http://www.nelh.nhs.uk]
  14. The Cochrane Library (Cochrane Database of Systematic Reviews). 2004, Chichester, UK, John Wiley and Sons LtdGoogle Scholar
  15. WHO: Reproductive Health Library. Version 7. 2004, Geneva, World Health OrganisationGoogle Scholar
  16. BMJ, Clinical Evidence. 2001, London, BMJ Publishing Group, 5: 960-1010. 1075-1127, 1196-1356.Google Scholar
  17. National Library of Medicine, PubMed (Medline). 2004, [http://www.ncbi.nlm.nih.gov/entrez/query.fcgi]
  18. Lilford RJ, Richardson A, Stevens A, Fitzpatrick R, Edwards S, Rock F: Issues in methodological research: perspectives from researchers and commissioners. Health Technology Assessment. 2001, 5: 1-57.View ArticlePubMedGoogle Scholar
  19. Smith R: Where is the wisdom.....?. British Medical Journal. 1991, 303: 798-799.View ArticlePubMedPubMed CentralGoogle Scholar
  20. Gray GE, Pinson LA: Evidence-based medicine and psychiatric practice. Psychiatric Quarterly. 74: 387-389. 10.1023/A:1026091611425.Google Scholar
  21. Geyoushi BE, Matthews Z, Stones RW: Pathways to evidence-based reproductive healthcare in developing countries. British Journal of Obstetrics and Gynaecology. 110: 500-507.Google Scholar
  22. Jamtvedt G, Young JM, Kristoffersen DT, Thomson O'Brien MA, Oxman AD: Audit and feedback: effects on professional practice and health care outcomes (Cochrane Review). The Cochrane Library. 2004, Chichester, UK, John Wiley and Sons Ltd, 3.Google Scholar
Pre-publication history

The pre-publication history for this paper can be accessed here: http://www.biomedcentral.com/1472-6874/6/5/prepub

Copyright

© Khan et al; licensee BioMed Central Ltd. 2006

This article is published under license to BioMed Central Ltd. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
