Healthcare providers want to give their patients the best evidence-based care possible. And that can be challenging, given that the practice of medicine changes quickly. In the area of oncology, for example, genetic and genomic testing is personalizing treatment for patients, and new developments seem to occur almost daily. It can be tough for physicians to stay on top of the latest recommendations.
That’s where clinical practice guidelines come into play. Clinical practice guidelines are designed to help providers apply evidence-based medicine to patient care and to reduce variability in the care patients receive. These tools are intended to serve as road maps so that practitioners have up-to-date, evidence-based information.
Adherence to practice guidelines, however, is variable, and some clinicians complain that guidelines represent an attempt to boil down rigorous medical practice into “cookbook medicine.” We think clinical practice guidelines are a natural way to communicate evidence-based practices, as long as the individuals and entities that created them followed a well-defined, rigorous, and systematic review methodology. In a perfect world, we would expect these guidelines to be developed without bias, and the experts who wrote them to have few, if any, relationships with commercial groups that may have a stake in the recommendations.
The reality is that lots of groups have jumped on the bandwagon to develop clinical practice guidelines. To get an idea of how many, go to the National Guideline Clearinghouse, type in a condition, and see how many results you get. A recent search using the terms “aortic stenosis” yielded 57 results. Granted, not all of those results hit the target, but a quick review shows that a number of different organizations have guidelines for this condition.
Some guideline developers are credible organizations with proven track records; they follow a standard process to review and grade the evidence. Other groups are less experienced at the literature-review process; they may fail to grade the evidence, or they may fail to disclose financial or other ties to manufacturers or other entities that create the potential for a conflict of interest. When such conflicts of interest are present, they should be disclosed. Unfortunately, that’s not always the case.
The results of a systematic analysis of interventional medicine clinical practice guidelines developed by the American Society of Diagnostic and Interventional Nephrology (ASDIN), American Society for Gastrointestinal Endoscopy (ASGE), and Society for Cardiovascular Angiography and Interventions (SCAI) showed that a whopping 62% of the guidelines examined (92 of 149) failed to report potential conflicts of interest. In total, 45% of the authors (317 of 697) reported 1827 conflicts of interest, averaging 5.8 conflicts of interest per author. Even more concerning was the finding that only 46% of the guidelines (69 of 149) graded the quality of evidence. When the evidence was graded, 7 different methods were used. You can read more about this analysis in the January 2014 issue of the Mayo Clinic Proceedings.
Considering these results, is it any wonder some practitioners hesitate to follow clinical practice guidelines?
Given the proliferation of guidelines, we would advise users to ask 5 key questions when reviewing any clinical practice guideline:
- Who developed it?
- Does it use a standard grading system to evaluate the evidence?
- Does it describe the quality of the evidence?
- Does it disclose potential biases and conflicts of interest of the experts involved in its creation?
- When was it developed and last updated?