Event Rates, Effect Sizes Often Overestimated in Major CV Trials

More robust data are needed to improve estimates and ensure adequate statistical power for trials, a researcher says.

Across contemporary cardiovascular clinical trials, the estimates of expected event rates and effect sizes that investigators use to calculate sample sizes—and funding requirements—often come in higher than what’s ultimately observed in the studies, according to a systematic review of published results in three leading journals.

Event rates were overestimated in 61.1% of trials and effect sizes in 82.1%, researchers led by Christoph Olivier, MD (University Heart Center Freiburg-Bad Krozingen, Germany), report in a study published online recently in JAMA Network Open.

The reasons underlying the overestimations aren’t entirely clear, Olivier et al say, but possible contributors include a lack of high-quality data on which to base estimates, the influence of restrictive inclusion/exclusion criteria for trials, general improvements in medical care over time, the challenges of extrapolating preclinical results, and limitations related to funding or feasibility that require smaller-than-ideal sample sizes.

Regardless of why, when these estimates aren’t accurate, the statistical power of the trial is compromised, as is its ability to provide a definitive answer to the research question at hand.

“I was surprised in many ways,” Olivier said regarding the review’s results. But, he added, “There is always some level of uncertainty about the tested intervention. If we were completely certain about an intervention, we would need to question the need for a trial. Under this assumption, overestimation in a fraction of trials seems inevitable.”

Still, referring to the finding that effect sizes were overestimated in four out of five trials, he told TCTMD: “That’s more than I expected.”

If we were completely certain about an intervention, we would need to question the need for a trial. Christoph Olivier

For the analysis, the researchers looked at the accuracy of event rate and effect size estimation—comparing observed versus estimated—in 344 major multicenter CV trials published in the New England Journal of Medicine, JAMA, and the Lancet between 2010 and 2019. The most common areas of focus for the trials were cardiovascular risk factors (43.3%), antithrombotic therapy (23.8%), heart failure (7.8%), and arrhythmias (6.9%), with more than half (54.1%) testing a drug intervention. The median estimated sample size was 2,386.

Overestimation of event rates and effect sizes was common. The median observed event rate was 9.0%, lower than the median estimated rate of 11.0% (mean relative deviation -12.3%). And the median observed effect size in superiority trials was 0.91, a weaker effect than the median estimated effect size of 0.72 (mean relative overestimation of 23.1%), since effect sizes closer to 1.0 reflect smaller differences between treatment arms.
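As a rough illustration of the deviation metric (the exact formula the authors used isn’t specified here, so this is an assumption), relative deviation can be expressed as observed minus estimated, divided by estimated:

```python
def relative_deviation(observed: float, estimated: float) -> float:
    """Relative deviation of an observed event rate from its estimate.

    Negative values mean the estimate was too high (overestimation).
    """
    return (observed - estimated) / estimated

# Using the review's median values; note the paper reports the MEAN
# relative deviation across trials (-12.3%), so plugging in the medians
# will not reproduce that figure exactly.
print(f"{relative_deviation(0.09, 0.11):+.1%}")  # -18.2%
```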

How to Improve the Situation

Olivier underscored that overestimating event rates in particular when designing a study “might contribute to an underpowered trial and to inconclusive results,” a problem when considering the substantial resources that go into these efforts. “This can happen despite careful evaluation,” he added, “but that’s an unfortunate outcome clinical trialists want to avoid.”

A potential way to improve on the estimation of event rates, which is often based on older observational or randomized data, is to anticipate the positive impact of continuing medical advances, Olivier suggested. “Maybe we need to adjust our sample size calculation and anticipate an improvement of medical care over time, if possible,” he said, acknowledging, “these predictions can be challenging.”

A more careful assessment of the data on which these estimates are based would be helpful, too, he indicated. “If you’re estimating event rates and you rely on observational data that include older patients and patients who are sicker who cannot participate in your trial because of restrictive inclusion or exclusion criteria, one should consider that to calculate the sample size,” Olivier said. “For the event rates, I think more robust data might help to estimate more accurately.”

[This paper] underscores why it’s so important that study designs be event-driven in nature as opposed to relying purely on what you expect the event rate to be. Michelle O’Donoghue

Commenting for TCTMD, Michelle O’Donoghue, MD (Brigham and Women’s Hospital, Boston, MA), said it can be challenging to predict event rates when designing a clinical trial, noting that this is at least partly because trial participants tend to be healthier on average than the population at large.

Because of the difficulty in getting the estimates right, O’Donoghue said, most trials will employ a strategy of ending the study once a certain number of primary endpoint events have occurred. “I think it is a critical part of study design to help protect you against inaccurate estimates of event rates, because the event rate in and of itself shouldn’t stop you from being able to answer the question that you put forward, but rather an insufficient number of primary endpoint events would be the real danger,” she explained.

The ‘Dirty Secret’ of Trial Design

The major reason to aim for accurate event-rate estimates is to ensure that there are sufficient funds to bring the trial to completion, O’Donoghue said. “If you were way off in your predictions at study outset, ultimately you may not have enough money to conclude the trial.”

That highlights the “dirty secret” of clinical trial design and sample size estimates, she said. “Typically you really start with the amount of money that you have in your coffer to conduct the clinical trial and then are in a position to have to work backwards from there and fit the sample size estimates accordingly,” she explained. Clinical trials run into trouble when events accrue so slowly that there aren’t sufficient resources to keep the study going long enough to get a definitive answer, she said, adding that this is more of a concern for studies with tight budgets, such as those funded by the National Institutes of Health.

Estimating treatment effects also poses a challenge because investigational therapies have not yet been tested in large outcomes trials, O’Donoghue said. Complicating matters, the estimated treatment effect is often not what investigators actually expect to see but rather the smallest benefit that would be clinically impactful. “If you want to power your trial for being able to detect at least a 20% relative risk reduction, you essentially can work backwards from there, again to figure out exactly how many patients you would need to enroll and, even more critically, how many primary endpoint events you would need to accrue.”
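Working backwards from a target relative risk reduction to a required number of primary endpoint events, as O’Donoghue describes, can be sketched with Schoenfeld’s approximation for time-to-event trials. The formula is a standard one but is not taken from the paper, and the power, alpha, and event-rate figures below are illustrative assumptions:

```python
import math
from statistics import NormalDist

def required_events(hazard_ratio: float, alpha: float = 0.05,
                    power: float = 0.9) -> float:
    """Schoenfeld approximation: primary endpoint events needed for a
    two-sided test of a hazard ratio with 1:1 allocation."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # 1.28 for 90% power
    return 4 * (z_alpha + z_beta) ** 2 / math.log(hazard_ratio) ** 2

# 20% relative risk reduction (HR 0.80), 90% power, two-sided alpha 0.05
events = math.ceil(required_events(0.80))
print(events)  # 845

# From events to enrollment: if roughly 10% of participants are expected
# to have an event over follow-up (a hypothetical rate), about
# events / 0.10 patients are needed.
print(math.ceil(events / 0.10))  # 8450
```

Note that the event target, not the patient count, drives the calculation, which is why an overestimated event rate translates directly into an underpowered or over-budget trial.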

This paper “underscores why it’s so important that study designs be event-driven in nature as opposed to relying purely on what you expect the event rate to be,” O’Donoghue said. “And I think it’s also important for investigators to do a reality check at the study outset. Basically, [they should] try to project a different range of anticipated event rates to have a more realistic view of the potential study duration if the event rate ultimately proves to be lower than what had originally been expected, to make sure that you’re in a position to be able to still bring the study through to completion.”
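That kind of reality check could be sketched as a crude sensitivity analysis. Every figure below (the event target, enrollment pace, candidate event rates, and the no-dropout accrual model) is a hypothetical assumption, not something from the review:

```python
import math

def years_to_target(annual_event_rate: float, target_events: int = 845,
                    annual_enrollment: int = 3000,
                    max_years: int = 30) -> float:
    """Project years until a trial accrues its target number of primary
    endpoint events, assuming constant enrollment and a constant
    per-patient annual event rate (no dropout, no rate decline)."""
    events = 0.0
    patients = 0
    for year in range(1, max_years + 1):
        patients += annual_enrollment                # this year's cohort joins
        yearly_events = patients * annual_event_rate
        if events + yearly_events >= target_events:
            # interpolate within the year for a fractional answer
            return year - 1 + (target_events - events) / yearly_events
        events += yearly_events
    return math.inf

for rate in (0.04, 0.03, 0.02):  # planned rate vs. two lower scenarios
    print(f"annual event rate {rate:.0%}: about {years_to_target(rate):.1f} years")
```

Under these toy assumptions, halving the anticipated event rate from 4% to 2% per year stretches the projected time to the event target by roughly a year and a half.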

Undersizing clinical trials often is about what one can afford to do rather than what one ideally would like to do. David Cohen

David Cohen, MD (Cardiovascular Research Foundation, New York, NY, and St. Francis Hospital and Heart Center, Roslyn, NY), told TCTMD it’s useful to have these aspects of trial design quantified. He agreed with Olivier and O’Donoghue that ways to deal with the overestimation observed here include either discounting estimated event rates to account for improvements in medical care over time or performing adaptive or event-driven trials. But he, too, raised the issue of limitations imposed by funding.

“Somewhere along the way, it always comes back to money, and undersizing clinical trials often is about what one can afford to do rather than what one ideally would like to do,” Cohen said.

Still, it’s important to try to get accurate estimates of event rates, he said, because “if you dramatically overestimate the event rate, you end up with a very underpowered trial with a negative result that is very unsatisfying. The message there may be that we need to just be more realistic.”

Regarding the estimation of effect sizes, on the other hand, Cohen said the inaccuracy identified in the review “is a little less problematic” because trials are conducted to establish what the impact of an intervention will be. “That is just the nature of research: that we don’t know the answer when we’re starting.”

Todd Neale is the Associate News Editor for TCTMD and a Senior Medical Journalist.
Disclosures
  • The study was supported by a research grant from Deutsche Herzstiftung.
  • Olivier reports receiving research support from Deutsche Forschungsgemeinschaft, Deutsche Herzstiftung, Freiburg University, Else Kröner-Fresenius Stiftung, and Haemonetics, as well as honoraria from Bayer Vital GmbH, Bristol Myers Squibb, Boehringer Ingelheim, Daiichi Sankyo, Ferrer, Idorsia, and Janssen.
