A concern expressed repeatedly in audits of medical care is the gap between what is known and what is actually done: available evidence is often ignored by physicians during the provision of care. Over the last several decades, specialty societies and health agencies have attempted to close this gap by publishing clinical practice guidelines and practice guidance statements. Broadly speaking, guidelines can be grouped into two distinct categories. The first type typically describes the customary and expected care to be offered to patients; commonly, these guidelines do not arise from systematic reviews of the literature and are often not evidence-based. Examples of this type are the International Standards for Safe Practice of Anesthesia 2010 and the Guidelines to the Practice of Anesthesia Revised Edition 2012, both published in the Journal.1,2 The second broad category of clinical practice guidelines has been defined by the Institute of Medicine as “statements that include recommendations intended to optimize patient care and that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options.”3 This narrative review considers the issues related to the latter category and the impact of the application of evidence and guidelines on patient outcomes and safety. It also identifies the issues and factors that have limited the ability of these guidance documents to achieve their stated purpose.

Clinical practice guidelines purport to integrate evidence and clinical judgement so as to aid practitioners in clinical decision-making. However, their uptake and ultimate impact on decision-making, resource consumption, and safety have often been less than intended or envisioned. In support of this review, a MEDLINE® search was carried out using various combinations of the following terms: clinical practice guidelines; patient safety; patient outcomes; autonomy; quality of care; barriers to use; quality; evidence support; physician compliance. The reference lists of the retrieved publications were also searched for other relevant publications.

Principal findings - literature search

Issues related to the quality and variability of care

One difficulty that arises at the outset of any evaluation of the impact of new practices on the outcomes of care is the paucity of accurate, large-scale reports of outcomes in patient care against which to measure improvement or change. Outcomes data are often gathered on a limited scale and reflect patterns of practice at a local or regional level, or they are sampled on a wider scale and then extrapolated to generate “data” on a system-wide scale. Obviously, in order to obtain an accurate measurement of the impact of practice and the application of evidence on patient safety and outcomes, we must begin to measure these variables routinely and document their occurrence on a much larger scale than is currently the case.

A compelling finding and one that has an obvious impact on outcomes, safety, and cost is the tremendous variability in care that physicians provide to their patients. An early report (1982) documenting this phenomenon revealed that the amount, type, and cost of hospital treatment provided to a community had more to do with the number of physicians, their medical specialties, and their preferred procedures than with the health of the residents.4 Among 193 small areas in the six states of New England, the overall rate of surgery varied more than twofold and correlated most strongly with the number of surgeons and the number of hospital beds per capita. Not surprisingly, communities with higher surgery rates also had higher mortality among operated residents compared with matched controls in communities with lower rates of surgery.

A report by Wijeysundera et al. recently highlighted the role of local customs and practices in influencing the provision of care in anesthesiology. The authors ascertained the predictors of preoperative medical consultation in patients undergoing major elective surgery and concluded that the individual hospital was the major determinant.5 There were large discrepancies in rates across hospitals that were not explained by surgical procedure volume, hospital teaching status, or patient- and surgery-level factors. The median odds of obtaining a consultation were 3.51 times higher if a similar patient had surgery at one randomly selected hospital rather than at another. Patient- and surgery-level factors explained only a very small proportion of the variation in consultation rates.

The variability in care provided to patients may be due to a deficiency in evidence necessary to inform care or because available evidence is not applied to patient care decision-making. This inconsistency does raise the potential for some patients to receive care that is neither necessary nor beneficial, while others do not receive care that is both necessary and beneficial. If an adverse outcome should occur in the former scenario, and it will happen in a proportion of cases, harm will be perpetrated, resources will be consumed (both to provide the care and to manage the complication), and no benefit will likely ever ensue for the patient. In the latter scenario, failure to provide care can result in progression of disease and an outcome that could have been mitigated or prevented. The failure to align care with need has obvious implications regarding both patient safety and the inappropriate consumption of system resources.

Physician knowledge and compliance of practice with evidence

Both Shin et al. and Sackett et al. examined physicians’ current knowledge and found that awareness of best current practice was negatively correlated with the number of years since medical school graduation; physicians’ knowledge of the science supporting practice declined over time in practice.6,7 As knowledge declines with time in practice, there is also evidence that physicians do not seek new and relevant knowledge in order to support patient care decisions. Ely et al. assessed how frequently physicians seek answers to patient-care questions and the nature of the most frequent obstacles preventing them from answering their questions.8 Physicians pursued answers to only about half of their patient-care questions. The most commonly reported obstacle to the pursuit of an answer was the physician’s doubt that an answer actually existed, and the most common obstacle among pursued questions was the failure of the resource that the physician selected to actually provide an answer.

Even when they seek answers, physicians may experience difficulty in readily extracting, from a review of the literature, the current status of an evidence base as it applies to a given clinical scenario or patient issue. Richard Smith, a former editor of the British Medical Journal, suggested that medical journals may not be optimally engineered to deliver useful and accurate information to physicians. He also recognized that many doctors have little experience with the critical appraisal of scientific articles.9 Physicians are overwhelmed with data of limited relevance to them and their patients, yet they often cannot easily find specific information or answers to questions that arise in interactions with their patients.10 Well-constructed clinical practice guidelines may help provide guidance and answers by translating abundant, complex, and confusing science into clear recommendations for patient care.

Perhaps because physicians are often unaware of current evidence, medical practices may occur that are discordant with what might be considered best practice. An example is the use of ultrasound for placement of central venous catheters. Complications during central venous catheterization (CVC) in adults are not rare, and related injury can be severe; the routine use of ultrasound during CVC has been recommended by a number of authorities for some years to improve patient safety.11-13 However, when Bailey et al. surveyed members of the Society of Cardiovascular Anesthesiologists regarding their use of ultrasound for CVC, two-thirds of the respondents (1,494/4,235 responded) stated that they never or almost never used ultrasound, whereas only 15% always or almost always used ultrasound.14 The most common reason cited for not using ultrasound was “no apparent need for the use of ultrasound” (46%), despite strong evidence in the literature to the contrary. Subsequently, following a meta-analysis of randomized controlled trials, the American Society of Anesthesiologists’ Task Force on Central Venous Access concluded that real-time ultrasound-guided CVC (internal jugular vein) was more effective (higher first insertion attempt success rate, reduced access time, higher overall successful cannulation rate) and safer (decreased rates of arterial puncture) than the landmark technique, and its use was recommended on that basis.15 This guidance is echoed by the American Society of Echocardiography and the Society of Cardiovascular Anesthesiologists, who also recommended the use of ultrasound for CVC (internal jugular vein) to improve cannulation success and reduce the incidence of complications.16 Despite the consistency and strength of both the evidence and expert guidance, members of the American Society of Anesthesiologists declared themselves to be equivocal regarding the use of ultrasound for CVC, suggesting not only that they remain unconvinced by the evidence but also that they are likely to be non-compliant with the recommendations.15

In summary, these data suggest that physicians become more distant from current evidence the longer they are in practice; they often do not seek answers to patient care questions that arise in practice, and when they do seek information, they often are not successful or persistent in their search. There is abundant literature of dubious relevance, and physician practices may be discordant with current evidence, either because they are unaware of it or they choose to ignore it.

Issues with the quality of the medical literature in general

It is tempting to think that all would be well if only we could convince our colleagues to turn regularly to the literature for their answers. In fact, the literature on which we base our practices has some important deficiencies. Ioannidis identified 49 original clinical research studies that were eventually cited more than 1,000 times and examined how frequently those studies were contradicted by subsequent literature.17 Sixteen percent of these landmark publications were contradicted by subsequent studies; 16% reported stronger effects than those found in subsequent studies; 44% were replicated; and 24% remained unchallenged. Controversies were most common with highly cited nonrandomized studies, but even some of the most highly cited randomized trials were refuted over time.

Medical reversal is the term used for the phenomenon in which a new trial, usually superior to its predecessors in design or power, contradicts current clinical practice. Ideally, good medical practices are replaced over time by better ones, and the new practices dominate because they outperform older ones in robust comparative trials. Often, however, established practices must be abandoned because evidence emerges to show that practices once thought to be beneficial likely never were. Prasad reviewed all articles published in the New England Journal of Medicine in 2009 that made a claim about a medical practice.18 The articles were divided equally among those that constituted a reversal, those that upheld an existing practice as beneficial, and those that were inconclusive regarding a current practice. Once incorrect findings from earlier studies are refuted by better-quality data, it is not a given that they fade into oblivion along with the practices they supported. Tatsioni et al. reported that research findings that had subsequently been contradicted often continued to be cited in the scientific literature, most typically by proponents of a particular practice.19

Although the findings of individual clinical studies may be vulnerable to being eventually overturned, surely the gold standard of evidence-based medicine, the meta-analysis, would prove more durable over time? In fact, Shojania et al. reviewed 100 high-quality quantitative systematic reviews, published from 1995 to 2005 and indexed in the ACP Journal Club®, in order to estimate the average time until changes in evidence became sufficiently important to warrant updating the reviews.20 The median duration of survival free of a signal for updating was 5.5 yr (confidence interval [CI], 4.6 to 7.6). However, a signal occurred within two years for 23% of the reviews and within one year for 15%. In 7% of reviews, a signal had already occurred by the time of publication.

Finally, fraud and, in particular, fabricated data are increasing concerns in the medical literature; these issues may affect the quality of the evidence in some clinical domains. Although fraud likely affects a relatively small proportion of the literature, it may have a major impact in areas of enquiry dominated by a small number of researchers. In the last decade, the world anesthesia community witnessed several instances of deception, as much of the work of three notable anesthesiologists with considerable publication impact was categorized as likely or clearly fraudulent. The published works of Scott Reuben (United States), Joachim Boldt (Germany), and Yoshitaka Fujii (Japan) were both extensive and influential regarding the management of perioperative analgesia, fluid therapy, and the treatment of nausea and vomiting, respectively. All three anesthesiologists have been discredited, and much of their work has been called into question on the basis of fraud and fabrication.21-23 As much as we would like to think that these were isolated occurrences, the number of papers retracted yearly for fraud has increased sharply over the past decade.24 It is unclear whether this trend reflects a real increase in the incidence of fraud or enhanced efforts by journal editorial boards to police their submissions. Yentis reports that all submissions to the journal Anaesthesia are now routinely scanned using an anti-plagiarism service and that 4% of manuscripts are rejected before review.25

From 2000 to 2010, more than 700 English language research papers were retracted from the PubMed database for either fraud or error.26 More than half of the fraudulent papers were written by a first author who had written other retracted papers. Fraudulent papers were likely to have several authors, to be published in journals with higher impact factors, and to be retracted more slowly than erroneous papers. Miller suggests that the long-term effects of a fraudulent publication can be substantial, with the potential for error propagation resulting from the citation of an article in review articles and practice guidelines.27 Furthermore, the inclusion of fraudulent data in the data sets of systematic reviews may lead to incorrect conclusions. For example, Marret et al. examined the impact of the Reuben reports on the conclusions of systematic reviews that included data from his retracted studies.28 In one of six quantitative reviews, different conclusions would have been reached had Reuben’s reports been excluded. All authors of eight qualitative reviews agreed that different conclusions would certainly have been reached without Reuben’s data; the authors’ judgements were not unanimous for another four reviews. Qualitative systematic reviews thus seemed to be at greater risk than quantitative reviews of having their conclusions altered by the removal of retracted data.

We might like to think that physicians select both accurate and enduring evidence to support medical practice, but there is evidence suggesting that this validation can be both erroneous and temporary. Perhaps the situation is not as grim as suggested by Ioannidis who, in an analysis of the common methodological flaws afflicting many research studies, stated, perhaps somewhat tongue-in-cheek, that “most published research findings are false”.29

Evidence-based medical practice and patient safety

Recognizing that the overwhelming and steadily escalating volume of medical literature is often irrelevant and, for various reasons (e.g., reversal, fraud), faces a short shelf life, is it possible to identify practices and generate guidelines and pathways that optimize care and enhance safety? In fact, Shojania et al. identified 83 practice interventions to enhance safety that were supported by systematic reviews and investigations at the time of their review.30 Ten of these practices were identified under the heading “Practices targeting complications of anesthesia, surgery, or other invasive procedures”. Although the majority of the listed practices have uncontested literature support, two would be subject to challenge based on current literature: tight perioperative glucose control to reduce the risk of surgical site infection and the use of perioperative beta-blockers to reduce the risk of cardiac complications. As evidence has evolved, support for both of these practices has waxed and waned over the last decade, and it cannot be concluded in either case that their value is certain. Shojania et al. also argued that some interventions are so obviously beneficial to patients that insisting on hypothesis testing is unnecessary and overly rigid.30 However, Leape et al. expressed caution regarding the implementation of safety practices solely on the basis of common sense or intuitive reasoning, particularly when there is evidence that practices whose benefit seemed obvious and intuitive in the past were subsequently revealed to be harmful.31

Does the application of evidence-based medicine improve patient safety and outcomes?

In an early evaluation of the effectiveness of guidelines to improve the mechanisms or outcomes of care, Grimshaw and Russell identified 59 evaluations of clinical guidelines.32 All but four of the evaluations detected improvements in the process of care with the introduction of guidelines. Nine of the 11 studies assessing the outcomes of care also reported improvements, and the magnitude of the improvements in performance varied considerably. More recently, Lugtenberg et al. conducted a systematic review of studies evaluating the effects of Dutch evidence-based guidelines on both the process and structure of care and patient outcomes.33 Seventeen of 19 studies that measured the effects of the guidelines on the process or structure of care reported improvements in those areas, and six of nine studies that measured patient health outcomes showed improvements, albeit small in magnitude.

Structured care pathways and interventions have also been shown to have a positive effect on both patterns of care and patient outcomes in specific clinical areas. Clark et al. described a comprehensive redesign of patient safety processes in the Hospital Corporation of America® (HCA®) obstetrical units, which featured the development of explicit and unambiguous guidance documents describing uniform processes and procedures for specific obstetrical scenarios.34 The HCA is the largest private health care delivery system in the United States, providing approximately 220,000 deliveries or 5% of births in the USA. Since the inception of this program, the HCA has experienced improvements in patient outcomes, a dramatic decline in litigation claims, and a reduction in the primary Cesarean delivery rate. Less expansive institutional care pathways have been described for the management of anticipated and unanticipated difficult airways. These approaches result in improved care and better outcomes than those achieved with historical patterns of management.35

There are also many individual examples where the application of evidence to clinical practice has been shown to enhance safety and improve outcomes. A recent high-profile example concerns the prevention of central venous catheter-related bloodstream infections in the intensive care unit (ICU). Pronovost et al. showed that an evidence-based intervention could be used to reduce the incidence of such infections.36 Across 1,981 ICU-months of data and 375,757 catheter-days in 108 ICUs, the median rate of infection per 1,000 catheter-days decreased from 2.7 infections at baseline to 0 at three months after implementation of the intervention, and the mean rate per 1,000 catheter-days decreased from 7.7 at baseline to 1.4 at 16 to 18 months of follow-up.

Not surprisingly, there are data suggesting that using evidence to inform patient management decisions and interventions is associated with improved clinical outcomes and enhanced patient safety. However, the nature, quality, and consistency of that evidence are important, and evidence evolves over time. Finally, decision-making supported only by intuition (e.g., expert opinion) is not without risk and has been associated with adverse patient outcomes.

Barriers to the implementation of guidelines and the application of evidence

Surveys of clinicians suggest that a major barrier to using current research evidence is the time, effort, and skills required to access the right information among the massive volumes of research.37 MEDLINE® indexes about 1,500 new articles and 55 new trials per day, and clinicians need highly efficient strategies to analyze new evidence and guidance likely to benefit their patients. However, physicians also seem reluctant to adopt changes in patterns of practice to become compliant with new evidence or guidelines. A number of authors have explored the reasons underlying this reluctance as well as the factors which enhance and impair implementation of guidelines. Davis and Taylor-Vaisey reviewed studies of clinical practice guideline implementation strategies and concluded that many guideline implementation processes yield poor results.38 Variables that affect the adoption of guidelines included the quality of the guidelines, characteristics of the health care professional, characteristics of the practice setting, incentives, as well as regulation and patient factors. Cabana et al. identified 76 published studies describing at least one barrier to adherence to clinical practice guidelines.39 Salient factors identified in the failure to follow guidelines included a lack of awareness of guidelines, lack of familiarity with their content, lack of agreement with the recommendations, lack of self-efficacy (i.e., the belief in one’s ability to perform a behaviour), low expectancy of favourable outcomes, inertia or a lack of motivation to apply the guidelines, and a perception of external barriers beyond the control of individual physicians (e.g., cost, requirement for system resources, patient acceptance).

With the identified barriers in mind, a number of authors have advanced models for guideline acceptance and adherence which might promote physician compliance with the guideline recommendations.40-42 All of the models are similar in a number of respects. At the outset, a practicing physician should be cognisant that uncertainty regarding a patient care issue exists, resulting in a search for clinical guidance. An awareness of new and relevant evidence and guidance may develop as a result of that enquiry, and the physician must then be persuaded to change management regimes based on the evidence presented. There is a relative advantage favouring adoption if the new practice is demonstrably superior to the traditional approach in at least one important attribute. As well, adoption is also likely favoured if the new practice pattern is not radically different from the traditional approach. There are greater barriers to changing practice for more complex interventions as clinicians may require additional training before these can be performed as competently as in the trials that originally documented their benefits. Despite the best of intentions, patterns of behaviour do not change readily, and persistent efforts must be made to replace the traditional pattern of practice with the new approach.

The models illustrate the magnitude of the undertaking to engage clinicians, to convince them that a change in practice will be beneficial for their patients, to ensure they understand the content, targets, and application of the guidelines, and to ensure they have the skills and motivation to apply a new practice in a timely and appropriate fashion.

Methodological issues affecting guidelines

Guidelines are based on literature that can be both inaccurate and evanescent; in addition, the methodological structure supporting the guidelines may be deficient. Shaneyfelt et al. evaluated 279 guidelines to determine whether they were developed according to established methodological standards.43 Measured on a 25-point scale, the mean overall adherence of each guideline to these standards was 43.1%, and there was little improvement in adherence over time. Grilli et al. also assessed the methodological quality of the practice guidelines developed by specialty societies.44 Sixty-seven percent of the 431 guidelines identified did not report a description of the type of stakeholders, 88% gave no information on searches for published studies, and 82% did not give an explicit grading of the strength of the recommendations. All three criteria for quality were met in only 22 (5%) guidelines.

The perceived strength and quality of the evidence underlying the actual recommendations contained in many guidelines has been subject to criticism. The American College of Cardiology and the American Heart Association have a long history of generating guidelines typically perceived to be evidence-based, credible, and high quality. However, when Tricoci et al. reviewed the data supporting 53 American College of Cardiology/American Heart Association (ACC/AHA) practice guidelines on 22 topics, they concluded that most of the recommendations issued in current ACC/AHA clinical practice guidelines were supported by lower levels of evidence or expert opinion alone.45 As well, the proportion of recommendations for which there is no conclusive supporting evidence has actually increased over time.

As evidence evolves and is updated and clinical priorities change, guidelines supported by that evidence and those priorities must also be modified, and mechanisms must be in place to periodically review, revise, or withdraw guidelines. Shekelle et al. surveyed authors who had contributed to original guidelines on behalf of the Agency for Healthcare Research and Quality.46 They concluded that 90% of the guidelines remained valid for up to 3.6 yr, but about half of the guidelines were outdated by about 5.8 yr.

These findings highlight the need to improve the process of producing guidelines. The obligation to review and revise guidelines should rest with the agency responsible for initially generating the guidelines, and the plan for future management of the guidelines, including the intent to review and revise, ideally should be specified in the guidelines themselves.

Physician autonomy, evidence-based medicine, and accountability

While physician autonomy is frequently invoked as a defining value in medical practice, there have been few attempts to specify its meaning. To some, autonomy means that physicians should have the freedom to provide treatments for patients according to their own best judgement.47 The attachment to this notion of autonomy likely underlies to some degree the variability in care offered to patients and the reluctance of physicians to adhere to guidelines. However, physician autonomy should be based on promoting the patients’ interests by providing the best identifiable care. Is there room for autonomy in an evidence-based practice? Absolutely. To paraphrase Sackett, the practice of evidence-based medicine involves integrating the best available evidence with the knowledge and expertise of the physician in the process of care. Variation in care based on physician preferences is an inappropriate manifestation of autonomy; variation tailored to the different sets of individual patient scenarios is an appropriate expression of autonomy.

As evidence accumulates to support the superiority of a certain pattern of practice over conventional past practice, there is a reasonable expectation that the new pattern of practice will be adopted by physicians. Once evidence supporting a reasonable safety rule becomes uncontested, failure to adhere enters the domain of accountability.48 Why do we fail to implement knowledge reliably? The answer lies, at least in part, in our fierce attachment to individual clinical autonomy and our reluctance to be “told what to do”.

Future directions

To manage their patients optimally, physicians clearly have an enormous amount of evidence to integrate into their patient care decisions. Specialty societies can aid their members in this process by developing, or facilitating the development of, evidence-based guidance statements. Guidelines developed through a rigorous and transparent process that combines quality scientific evidence, clinician experience, and patient values have the potential to limit practice variation, enhance quality, and improve patient outcomes. It is important that these guidance statements be supported by the most accurate evidence base possible, and it is necessary that they be routinely evaluated for their effectiveness and be reviewed and revised to ensure consistency with a continuously evolving evidence base.

To be trustworthy, guidelines should be based on a systematic review of the existing evidence and on an explicit and transparent process that minimizes distortions, biases, and conflicts of interest. They should be developed by a knowledgeable multidisciplinary panel of experts and representatives from appropriate stakeholder groups, and they should clearly define the targeted patient groups and consider important patient subgroups. They should provide a clear explanation of the logical relationships between alternative care options and health outcomes and provide ratings of both the quality of evidence and the strength of the recommendations. The guidelines should be reconsidered and revised, as appropriate, when important new evidence warrants modifications of recommendations.3 Medical journals are becoming increasingly forthright in their expectations regarding how authors should prepare their guidelines for publication. For example, in its Guide for Authors, the Editorial Board of the Canadian Journal of Anesthesia recommends that the guidelines stipulate the clinical issue being addressed and the methodology employed to construct the guidelines. The Board also recommends that the guidelines include a review of the evidence and a commentary comparing the proposed guidelines with existing guidelines.49 (Table).

Table: 2011 Canadian Journal of Anesthesia Guide for Authors

It is abundantly obvious from past experience that simply summarizing the evidence and generating guidance statements is not sufficient to ensure that evidence is translated into practice. Guidelines must also be combined with a strategy for transcending the barriers to their application.50 As the quality of the guidelines improves, there should be efforts to ensure that the potential positive impact of the guidelines on outcomes and safety is realized. A process of dissemination should be implemented that will encourage physicians and help them to make use of sound guidelines for relevant decision-making in support of patient care.

Summary

There are increasing demands being placed on physicians to improve patient safety, apply evidence to medical practice more consistently, limit variation in practice, and reduce unnecessary consumption of resources. There are a number of drivers to improve clinical quality and patient safety, and there is a professional and ethical responsibility to provide accountable and effective care. There is a strong economic case for avoiding complications and creating predictable outcomes, and finally, system stakeholders are demanding evidence to prove that they are obtaining optimum value for their ever-increasing health care expenditures.

Physicians are perceived to be resistant to the application of evidence-based care. Standardization threatens physician autonomy, and autonomy is a transcendent value within the physician culture. Nevertheless, even with evidence-based practice, autonomy remains an integral element in the provision of care. It supports physician decision-making and facilitates the formation of optimal physician-patient relationships, and it needs to be preserved in medical practice. In the future, however, autonomy will be defensible only on the basis of its accountability. It must be evidence-based, outcomes-oriented, safety-driven, and cost-conscious.

Key points

  • It is difficult for physicians to track the enormous output of new medical literature.

  • There is a high degree of variability in the care offered to patients, and this inconsistency impacts safety.

  • Guidelines can synthesize literature at a point in time; they can improve care and enhance safety.

  • Even guidelines that are methodologically strong are underutilized by physicians.

  • There are complex reasons behind the lack of uptake of guidelines by physicians.