Don’t Rely on Abstracts When Answering Questions About Efficacy and Safety

Posted by Winifred S. Hayes, RN, PhD, ANP, Founder and CEO on December 13, 2011

Okay, admit it. In your search for evidence to determine whether a medical technology is safe and efficacious, you read just the abstract of an original research article for a quick overview of the results. Maybe time was at a premium (and when isn’t it?), or perhaps you couldn’t access the full text of the article without incurring a charge.

We’ve all done it at one point or another because we believe the abstract provides an accurate synopsis of the highlights of the study. Guess what? In many cases, it doesn’t.

A seminal paper on the inaccuracies of abstracts was published in 1999 in the Journal of the American Medical Association (Pitkin et al. Accuracy of data in abstracts of published research articles. JAMA. 1999;281:1110-1111). That paper reported an analysis of 88 articles and their accompanying abstracts that appeared in 6 major medical journals over a 1-year period. The investigators looked for two types of discrepancies: data reported differently in the abstract than in the body of the manuscript, and data reported in the abstract but absent from the body of the manuscript. If either discrepancy was identified, the abstract was considered deficient. The proportion of deficient abstracts ranged from 18% to 68%, depending on the journal. The most common discrepancy was inconsistency between what was presented in the abstract and what appeared in the body of the manuscript, and 24% of deficient abstracts contained both kinds of discrepancies.

That was 12 years ago. Have abstracts become more accurate? Not really. More recent research shows that abstracts in biomedical publications continue to be suboptimal. One assessment of 418 abstracts of original research published in 4 major otolaryngology journals (McCoul et al. Do abstracts in otolaryngology journals report study findings accurately? Otolaryngol Head Neck Surg. 2010;142:225-230) found that, compared with the complete article, abstracts commonly omitted study limitations (missing from 91% of abstracts), geographic location (79%), confidence intervals (75%), dropouts or losses to follow-up (62%), and harms and adverse events (44%). A similar analysis of 243 abstracts in pharmacy journals (Ward et al. Accuracy of abstracts of original research articles in pharmacy journals. Ann Pharmacother. 2004;38:1173-1177) found that nearly 25% of abstracts contained omissions and 33% contained either an omission or an inaccuracy. Finally, an investigation of 227 abstracts published in the New England Journal of Medicine, JAMA, Lancet, and British Medical Journal showed that, with regard to the reporting of results, 28% of abstracts omitted primary outcomes, 38% failed to include effect sizes and confidence intervals, and about half (49%) did not report harms and side effects (Berwanger et al. The quality of reporting of trial abstracts is suboptimal: survey of major medical journals. J Clin Epidemiol. 2009;62:387-392).

One contributing factor is probably the word limits journals place on abstracts, which make it difficult for authors to describe the research comprehensively, especially the methods and results. Nevertheless, these data highlight the problems that can arise from relying on abstracts to judge the quality and outcomes of clinical research. Sole reliance on abstracts can lead readers to draw inappropriate conclusions about the efficacy and safety of the technology under investigation, especially when study limitations, adverse events, and subject dropouts or losses are omitted from the abstract.

The next time you’re tempted to read only the abstract, think twice. The safety and efficacy conclusions you draw from it may not be as accurate as you think.
