Results of studies in top medical journals may be misleading to readers

Friday, August 26, 2011

Studies published in the most influential medical journals are frequently designed in a way that yields misleading or confusing results, new research suggests.



Investigators from the medical schools at UCLA and Harvard analyzed all the randomized medication trials published in the six highest-impact general medicine journals between June 1, 2008, and Sept. 30, 2010, to determine the prevalence of three types of outcome measures that make data interpretation difficult.



In addition, they reviewed each study's abstract to determine the percentage that reported results using relative rather than absolute numbers, which can also be misleading.



The findings are published online in the Journal of General Internal Medicine.



The six journals examined by the investigators — the New England Journal of Medicine, the Journal of the American Medical Association, The Lancet, the Annals of Internal Medicine, the British Medical Journal and the Archives of Internal Medicine — included studies that used the following types of outcome measures, which have received increasing criticism from scientific experts:



Surrogate outcomes (37 percent of studies), which refer to intermediate markers, such as a heart medication's ability to lower blood pressure, but which may not be a good indicator of the medication's impact on more important clinical outcomes, like heart attacks.



Composite outcomes (34 percent), which consist of multiple individual outcomes of unequal importance lumped together — such as hospitalizations and mortality — making it difficult to understand the effects on each outcome individually.



Disease-specific mortality (27 percent), which measures deaths from a specific cause rather than from any cause; this may be a misleading measure because, even if a given treatment reduces one type of death, it could increase the risk of dying from another cause to an equal or greater extent.



"Patients and doctors care less about whether a medication lowers blood pressure than they do about whether it prevents heart attacks and strokes or decreases the risk of premature death," said the study's lead author, Dr. Michael Hochman, a fellow in the Robert Wood Johnson Foundation Clinical Scholars Program at the David Geffen School of Medicine at UCLA's division of general internal medicine and health services research, and at the U.S. Department of Veterans Affairs' Los Angeles Medical Center.



"Knowing the effects of a medication on blood pressure does not always tell you what the effect will be on the things that are really important, like heart attacks or strokes," Hochman said. "Similarly, patients don't care if a medication prevents deaths from heart disease if it leads to an equivalent increase in deaths from cancer."



Dr. Danny McCormick, the study's senior author and a physician at the Cambridge Health Alliance and Harvard Medical School, added: "Patients also want to know, in as much detail as possible, what the effects of a treatment are, and this can be difficult when multiple outcomes of unequal importance are lumped together."



The authors also found that trials that used surrogate outcomes and disease-specific mortality were more likely to be exclusively commercially funded — for instance, by a pharmaceutical company.



While 45 percent of exclusively commercially funded trials used surrogate endpoints, only 29 percent of trials receiving non-commercial funding did. And while 39 percent of exclusively commercially funded trials used disease-specific mortality, only 16 percent of trials receiving non-commercial funding did.



Commercial sponsors of research may promote the use of outcomes that are most likely to indicate favorable results for their products, Hochman said.



"For example, it may be easier to show that a commercial product has a beneficial effect on a surrogate marker like blood pressure than on a hard outcome like heart attacks," he said. "In fact, studies in our analysis using surrogate outcomes were more likely to report positive results than those using hard outcomes like heart attacks."



The new study also shows that 44 percent of study abstracts reported results exclusively in relative, rather than absolute, numbers, which can be misleading.



"The way in which study results are presented is critical," McCormick said. "It's one thing to say a medication lowers your risk of heart attacks from two-in-a-million to one-in-a-million, and something completely different to say a medication lowers your risk of heart attacks by 50 percent. Both ways of presenting the data are technically correct, but the second way, using relative numbers, could be misleading."



Still, the authors acknowledge that the use of surrogate and composite outcomes and disease-specific mortality is appropriate in some cases. For example, these outcomes may be preferable in early-phase studies in which researchers hope to quickly determine whether a new treatment has the potential to help patients.



To remedy the problems identified by their analysis, Hochman and McCormick believe that studies should report results in absolute numbers, either instead of or in addition to relative numbers, and that committees overseeing research studies should closely scrutinize study outcomes to ensure that lower-quality outcomes, like surrogate markers, are only used in appropriate circumstances.



"Finally, medical journals should ensure that authors clearly indicate the limitations of lower-quality endpoints when they are used — something that does not always occur," McCormick said.
