Abstracts Are Not Evidence

We get it. Time is valuable, particularly your own. Whether you are a payer, a healthcare provider, or a supply chain or value analysis professional, you have to make the most of the time you have to get your job done. When it comes to searching for evidence to determine whether a medical technology is safe, efficacious, and the best option, maybe you read only the abstract of an original research article for a quick overview of the results. Perhaps you couldn’t access the full text of the article without incurring a charge. The truth is, we have all done it at one point or another, primarily because we believe that the abstract provides an accurate synopsis of the highlights of the study.

In many cases, however, it doesn’t.

In 1999, The Journal of the American Medical Association (JAMA) published a seminal paper on the inaccuracies of abstracts. It detailed the results of an analysis of 88 articles and their accompanying abstracts that appeared in 6 major medical journals during a 1-year time frame. The investigators looked for two types of discrepancies:

  • data reported differently in the abstract and the body of the manuscript
  • data reported in the body of the manuscript but not in the abstract

If either type of discrepancy was identified, the abstract was considered deficient.

What they found was that the proportion of deficient abstracts ranged from 18% to 68%, depending on the journal. The most common discrepancy was inconsistency between the data presented in the abstract and the data in the body of the manuscript. A total of 24% of deficient abstracts contained both kinds of discrepancies.

That was 18 years ago.

Have abstracts become more accurate? Not really. More recent research shows that abstracts in biomedical publications continue to be less than optimal. One assessment of 418 abstracts of original research published in 4 major otolaryngology journals (McCoul et al. Do abstracts in otolaryngology journals report study findings accurately? Otolaryngol Head Neck Surg. 2010;142:225-230) showed that, when compared with the complete article, abstracts commonly omitted study limitations (left out of 91% of abstracts), geographic location (79%), confidence intervals (75%), dropouts or losses (62%), and harms and adverse events (44%).

A similar analysis of 243 abstracts in pharmacy journals found that nearly 25% of abstracts contained omissions and 33% contained either an omission or an inaccuracy. Finally, an investigation of 227 abstracts published in the New England Journal of Medicine, JAMA, Lancet, and British Medical Journal showed that, with regard to the reporting of results:

  • 28% of abstracts omitted primary outcomes
  • 38% failed to include effect size and confidence intervals
  • 49% did not report harms and side effects

One likely contributing factor is the word limits journals place on abstracts, which make it difficult for authors to describe the research comprehensively, especially the methods and results. Nevertheless, these data highlight the problems that can arise from relying on abstracts to judge the quality and outcomes of clinical research. Sole reliance on abstracts can cause readers to draw inappropriate conclusions about the efficacy, safety, and comparative effectiveness of the technology under investigation, especially when study limitations, adverse events, and subject dropouts or losses are omitted from the abstract.

We have the time, the resources, and the expertise to look beyond the abstracts. The unbiased evidence assessments provided by Hayes are a “deep dive” into the published, peer-reviewed literature, performed with scientific rigor by clinical experts and PhD-trained scientists. We don’t rely on “evidence-lite”; the thousands of reports in our Knowledge Center represent the gold standard in health technology assessments. Whether you are looking for information on the clinical utility of a genetic test or deciding on a technology acquisition for your hospital, let us save you the time required to uncover the true evidence. See a sample of one of our reports by clicking below, then schedule a demo today.

View Sample Report
