The Price of Knowledge: Industry-Sponsored Studies in the Era of Evidence-Based Medicine
The biggest advances in medicine come from companies sponsoring trials in the hopes of turning a profit. What’s the right balance of help and harm?
The vast majority of clinical trials conducted to support regulatory approval of a drug, device, or vaccine—or track its safety after approval—are sponsored by manufacturers. But just how much influence do companies have over the design, conduct, and reporting of trials?
This question came up last month when the COAPT and MITRA-FR trials of the MitraClip produced starkly different results. COAPT, which was overwhelmingly positive for device therapy over best medical care, was 100% industry sponsored, with Abbott participating in site selection, site management, and data analysis. MITRA-FR, in contrast, was decidedly negative and conducted and coordinated by an academic research organization.
Asked about these differences, COAPT co-principal investigator (PI) Michael Mack, MD (The Heart Hospital Baylor, Plano, TX), said he understood the optics, but was adamant that investigators had full control, with zero influence by the sponsor. Also contacted by TCTMD, MITRA-FR PI Jean-François Obadia, MD (Hôpital Cardiovasculaire Louis Pradel, Bron, France), said: “I still think that it is probably the two different populations which explains the results.”
Industry-funded trials have, for years, prompted polarized opinions as to whether the sponsor’s involvement is healthy and necessary or heavy-handed and damaging. The topic has increasingly been the focus of research, with multiple studies documenting the mounting proportion of industry-funded trials, matched by a decline in government-funded research.
In the United States today, approximately 70% of all clinical trials are industry-funded, former Food and Drug Administration (FDA) commissioner Robert Califf, MD (Duke University, Durham, NC), estimated to TCTMD. “I don't think the system we have is balanced the way it should be,” said Califf, “but it's been that way forever.”
The pros and cons of commercial funding in clinical research are myriad. A recent analysis in the BMJ showed that 68% of industry-funded trials are likely to be published within the recommended 12 months after completion as compared with just 11% of academic-run trials. Other research has shown, however, that trials sponsored by industry are more likely to yield positive results than government-sponsored trials, a finding also recently seen for cardiovascular device trials that have an industry employee listed among the authors. Moreover, studies headed by investigators who have financial ties to the sponsor are also more likely to wind up positive for the investigational intervention.
But industry funding also deserves the credit for generating the bulk of the evidence in clinical research, said Mitchell Krucoff, MD (Duke Clinical Research Institute [DCRI], Durham, NC).
“In a world where we progressively do emphasize the importance of evidence-based medicine, it’s worth realizing that the vast majority of randomized trial evidence that we're ever going to see is driven by the ability to generate a profitable new therapy,” Krucoff told TCTMD. “The amount of evidence we're ever going to have for yoga or for meditation is going to be very disproportionate to the amount of evidence we're going to have for new PCSK9 inhibitors, or antithrombotic therapies, or blockbuster drugs that make company stocks go up. That's worth remembering.”
Industry Relations
In new research published this month in the BMJ, Kristine Rasmussen, PhD (Nordic Cochrane Centre, Rigshospitalet, Copenhagen, Denmark), and colleagues point out that collaboration between academics and industry is “mutually beneficial: academics provide access to trial participants as well as clinical and methodological experience, while industry provides funding and experience.”
But the degree of industry influence appears to vary markedly across trials. And, as Rasmussen et al note, other studies of industry-academia relationships haven’t directly asked researchers about the nature of the collaboration and the extent to which this is reflected in statements that accompany study publications.
According to Rasmussen, the idea for the study was “inspired” after hearing stories from colleagues, including cardiologists, who had participated in industry-funded trials. Some felt sponsors were too involved in the study, participating in every research meeting, while others said they didn’t pay much attention to the funder beyond the fact that their institution benefited from the research dollars. Many said they presumed that steering committees and ethical review boards would “sort out the more ethical aspects of the trial,” she commented.
Some stories struck a deeper chord, however. “One colleague told me a story about how they were offered a grant to conduct an industry-funded trial and when they started demanding a bit more academic freedom and access to data, the grant went to a competing hospital,” Rasmussen said. “So it was interesting hearing all these anecdotes. We wanted to see if this was random . . . or if this was a more systemic thing.”
Rasmussen et al’s study focused on the 200 most recent phase III and IV drug, device, or vaccine trials (starting in April 2017 and working backwards) that were fully sponsored by industry, across all areas of medicine. To be included in the analysis, studies had to have at least one academic author and be published in one of the top seven medical journals (New England Journal of Medicine, Lancet, JAMA, BMJ, Annals of Internal Medicine, JAMA Internal Medicine, and PLOS Medicine). In a second step for the study, Rasmussen and colleagues surveyed the authors listed on the published studies to get additional details on what types of interactions with industry took place.
They found that employees of trial sponsors were co-authors of the studies for 87% of publications. Funding companies were involved in study design for 87% of the trials, whereas academic author involvement was somewhat less, at 84%. The trial sponsor was involved in data analysis for the studies in 73% of trials, while academic authors were involved just 40% of the time. Trial reporting involved the authors in 99% of trials and the company in 87% of trials. Third-party contract research organizations (CROs), which are for-profit businesses offering a range of research services, were also involved in the reporting of results for a full 62% of trials.
Only eight of the 200 trials were conducted completely independently of input from the industry sponsor.
For the survey portion of the study, just 80 of the 200 lead academic authors contacted (40%) responded to provide further details to the BMJ investigators. Of these, 29 said that the academic authors had final say on the study design and the majority said that collaboration with industry had been beneficial. Nine survey respondents said they had experienced disagreements with the industry funder, typically due to trial design and reporting.
In one notable finding, 10 respondents said that an unnamed funder or CRO employee had participated in the data analysis and reporting, while an additional seven mentioned that an employee of the company or CRO had been involved in the trial design, analysis, or reporting but was not named in the subsequent publication.
“That means seventeen trials had evidence of ghost authorship or contributorship, where someone had participated in the design or statistical reporting of the trial . . . and wasn't named in the publication,” Rasmussen told TCTMD. “I found that quite surprising, that this still takes place. I think it's a bit worrying that one would downplay the role of the industry funder or, in these 17 cases, omit their involvement in the trial.”
Another striking finding, said Rasmussen, was the nuts and bolts of data analysis for these trials. “Even though the paper said ‘all authors had full access to the data’ that didn't necessarily mean that they had access to the raw data,” she explained. “When we asked the authors, that might have meant: we had access to any analysis we wanted, we just got that delivered from the industry sponsor.”
“Full access” in many cases just meant being able to request an analysis, but not actually having the full data set to query firsthand. “It's interesting how few of these studies actually had independent authors involved in the statistical analysis,” Rasmussen said. “Very few studies had statistical analyses done independently of the funder.”
Cardiologist Rita Redberg, MD (University of California, San Francisco), editor of JAMA Internal Medicine, has long been an outspoken advocate for drug and device safety. She’s a co-author on Rasmussen’s study.
“I agree that we need to have industry funding of trials, but I think even with industry funding these can be wholly independent,” Redberg told TCTMD. “Not setting the questions, not participating in the study, not being a part of the writing, and not participating in the data analysis, and allowing the academic authors to own the data. I'm fine with industry funding, but I don't think it's fine to have someone with a lot of money riding on the results participating in how the trial’s going to come out.”
Who Deals With the Data?
While nonacademic physicians and members of the public may find Rasmussen et al’s findings surprising, most of the experts who spoke with TCTMD did not.
According to Califf, most industry-run clinical trials conducted outside of cardiology are managed and run entirely by the sponsor, as are many of the biggest trials in cardiology with academic lead investigators. The exceptions are trials done at universities with academic research organizations, he clarified, such as the DCRI, the Cleveland Clinic, the TIMI group at Harvard University/Brigham and Women’s Hospital, Columbia University/Cardiovascular Research Foundation (CRF), and Stanford University. While that may be a shock to nonacademic physicians and the general public, said Califf, the reality is that many of the best biostatisticians working in clinical research are company employees.
“If you said, this has to be done at academic centers, you'd have nobody to do them,” Califf said. “There's DCRI, CRF, schools of public health, but the capacity to analyze complex clinical trial data and do all the work that's involved, it would probably overwhelm academia.”
Robert Harrington, MD, chair of medicine at Stanford University in California, agreed, noting, “Not every center has the expertise in biostatistics, because there is a countrywide shortage of biostatisticians. Some clinical investigators are challenged by having no access to collaboration with biostatistical colleagues.”
Krucoff took this one step further, pointing out that “it takes a village” to run a major clinical trial. The most important ingredient for a successful clinical trial, he argued, is having the best people doing the work they excel at, taking advantage of all the expertise available. In some cases, he continued, that may be someone on staff at the company or a clinician and clinical trialist working at a private CRO who won’t be participating in the study.
“On the other hand, if you take a high-volume practitioner who is an excellent writer, very academic but who knows nothing about clinical trial design and let him take over designing the trial, you may have a big problem of a different sort,” Krucoff said. The reality is, the large drug and device companies have “an army of extremely experienced, frequently ex-academic people who know this program better” as well as a deeper understanding of the regulatory environment. “I wouldn't by definition assume that any one of the stakeholders in this it-takes-a-village process is a good guy or a bad guy. The worst . . . is when you put someone in a position where they don't have the right skills to manage the appropriate needs of a human-subjects trial,” he said.
According to Ron Waksman, MD (MedStar Washington Hospital Center, Washington, DC), “the authors of the survey are biased in their approach that industry-funded trials are [the] villain and academic freedom is limited when the trials are funded by industry. This is not the case most of the time, as industry [is] subjected to external audit and penalties [for] submission of inaccurate or falsified data.”
Waksman further pointed out that a 40% response rate calls the findings into question, potentially biasing them toward the opinions of those who chose to respond.
TCTMD contacted renowned biostatistician Frank E Harrell Jr, PhD (Vanderbilt University, Nashville, TN), a longtime consultant to the FDA, and asked whether industry involvement or control of clinical trial data analyses was problematic. Responding by email, Harrell said: “Analyses done by industry are more rigorous and reproducible than those done by academic researchers, but academic researchers should have the ability to also analyze the data themselves. In an ideal world the two sets would have a mutually agreed-upon statistical analysis plan before anybody does anything.”
Waksman echoed that stance, pointing out that investigator-initiated studies may lack the resources to do the kind of robust analyses made possible by an industry analytics team.
Without a doubt, there is more openness to industry partnerships among academic cardiologists and, equally, an appreciation that it’s not easy to explain to the public the difference between good relationships and flawed ones. Califf, who was grilled by the US Senate about his involvement in industry-funded research prior to his confirmation as FDA commissioner, said he spent 8 hours explaining to Senator Elizabeth Warren (D-MA) and her staff how such relationships could be established so that trials conducted out of DCRI under his tenure had full academic independence. In the end, he said, “she really got it. I was actually really impressed.”
Harlan Krumholz, MD (Yale Center for Outcomes Research & Evaluation, New Haven, CT), is a longtime cardiovascular clinical trialist whose work has often focused on improving research quality and transparency. “I think there are all sorts of arrangements with industry that can work and work in different ways,” he told TCTMD. “What’s most important is that there is full transparency about what the arrangement is.”
Cardiology has seen its fair share of celebrated scandals related to data “mismanagement” or suppression by companies as well as to the ghostwriting of key study results submitted to major medical journals.
“I think the first authors need to take responsibility for the quality of the data, [and] if they've never touched the data and the data has, for example, been managed elsewhere, then it’s important that that be crystal clear,” Krumholz continued.
The “scenarios” common in the design, conduct, and reporting of clinical trials today are wide-ranging, he added, and include everything from PIs who insist on working with data directly to those who don’t have the skills, or the team, to do so and end up serving a different role.
“The key is not to lump them all together, not to suggest that these are all the same,” said Krumholz. “That’s not to indicate that there's a hierarchy or that we know there's a problem, but you can't begin to interpret how it was done unless you can understand what’s happened and who has been responsible for what.”
Clinical Trials in Cardiology
In their paper, Rasmussen and colleagues did not break down their analysis according to type of trial—drug, device, or vaccine—nor were the numbers big enough to look at whether there were differences according to medical subspecialty. But many of the people who spoke with TCTMD said they believe cardiology clinical trials are typically more rigorous, and have more safeguards, than those in other areas of medicine.
“There's no question about it,” said Califf. “Most people acknowledge that, and you can find out pretty easily just by asking how many trials in other areas of medicine even have a PI that has a significant role.”
Harrington said he has a broader perspective now, as chair of a medical department, than he did in the past. “Sometimes working in areas like cardiology or interventional cardiology, in particular, we have a sense that the world is a certain way,” he observed. “And when you look more broadly at medicine, you realize that's not necessarily the case, and that includes interactions with industry.”
His sense is that “most of the big trials in cardiology, and all of the big trials in cardiology that are led by one of the big academic coordinating centers, not only have access to the data but typically have unfettered access, meaning that they actually have a copy of the data and they all do their own original analyses.”
Krucoff was slightly more reticent, commenting, “I think that the cardiovascular community does the most human therapeutics research of any specialty, perhaps outside of oncology, and so we have more success and more horror stories than any other area.”
The Horror Stories
Experts interviewed by TCTMD provided many of the same examples of cardiology drugs and trials that became poster children for industry misdeeds: Vytorin (Merck/Schering-Plough), Vioxx (Merck), and Avandia (GlaxoSmithKline), all of which triggered congressional or Department of Justice (DoJ) investigations in the United States. Rasmussen further pointed to the PLATO trial of ticagrelor (Brilinta; AstraZeneca), after which the US DoJ looked into concerns that results were skewed according to whether the sponsor or independent investigators had analyzed the data.
But the same experts who cited these examples for TCTMD also cautioned that industry is not the only source of undue influence in clinical trials. “I think everybody should be concerned about clinical trials and concerned about how they get done, how they get disseminated, and all of that, but I don't see it as an academic versus nonacademic issue. It's really a clinical practice issue,” Califf said. “I wouldn't say that industry would be the only side that would be capable of nefarious tampering.”
Califf believes “nefariousness” by industry is, for the most part, “not actively doing wrong things.” Rather, “it's not looking at the whole issue or it’s in the way the questions are asked,” he said. “Nefariousness by academics is usually based on academic ego and beliefs and may also be influenced by money transfer, because it's well known that if you have a positive study and you're the PI, you're more likely to get the next one, especially if you do a great job in promoting it.”
Krucoff, who is a special government employee of the FDA and served for many years on the agency’s Circulatory System Devices Panel, had a similar position. “I could tell you stories of sites participating in clinical trials completely independently of a manufacturer’s influence who [demonstrated] poor judgment, poor protocol compliance, or outright fraud. Now why they would do that is a whole other set of questions,” he said.
Krumholz, likewise, pointed out that seemingly independent investigators have their own sets of biases, ranging from financial to intellectual. “I think there are high-integrity investigators who do the job, I don't want to suggest otherwise, [but] there’s just a little more fuel when someone has a strong vested interest and they are also the ones overseeing the data,” he observed.
That said, Krumholz continued, there’s a difference between the leaders of trials that end up biased and “people who wake up in the morning and say, ‘I'm going to manipulate data.’ I'm sure that has and will continue to happen, but I think it's a very small percentage. We get bedeviled by our own cognitive biases and the way we set up [studies] when we think we know what the answers should be, or the answers that we hope for.”
The horror stories have long shadows, Krucoff acknowledged. They’ve left a legacy of suspicion that sponsors are “going to do dirty things because they are going to make money if they can just cut corners, or slip by, or somehow fool everybody. And having worked with the medical device industry for a little more than three decades I can say that’s not a conclusion to leap to, probably in 87% of cases. . . . There's no question that the incentive for a manufacturer ultimately starts with a business plan, but the success of that business plan is also where we get newer and better medical devices.”
Speaking with TCTMD, Redberg acknowledged that intellectual biases on the part of academics can’t be discounted but argued it’s a question of scale. “I do think there are other kinds of bias, [but] financial conflicts are probably one of the most powerful,” she said. “We're not going to be able to take away human nature and intellectual conflicts—none of us are perfect. But that said, a big and powerful source of bias comes from lots of money riding on the results of the trial, and funding of the trial and conduct of the trial being done by investigators or funders who have a financial interest in the results.”
Oversight and Solutions
Harrington said he believes there are sufficient checks and balances in place to help preserve the quality and objectivity of published data, although they may not be immediately clear to the public. “I know from reviewing papers that I am frequently asking for what I call the data provenance: where did the data go from the patient to the analysis and who had control of it along the way? I think those are important questions,” he explained.
Both Califf and Krucoff argued that it’s not unreasonable for sponsors who are investing millions of dollars to have some say in the trial design. But what can keep this in check are the trial steering committees, in which the majority of members are not from the company. Trial design and oversight should also, in Califf’s opinion, include patients, whom he described as “the people who need the questions answered.”
Califf had served as the PI for a full 64 clinical trials by the time he was questioned by the US Senate. “A human experiment done on a large scale is probably one of the most complicated things that human beings do,” particularly given all the sites, roles, and often countries and cultures involved, he told TCTMD. “The design of a clinical trial is like writing a law: you have a lot of smart people and some biased people and they all would do it differently and they have to come to an agreement to have one protocol, which means everybody has to compromise. That means that at the end of the trial, every single person will say that they would have done it differently.”
What few people realize, continued Califf, is the role already being played by the FDA. “It's the only regulatory agency that gets a raw copy of the database and does its own independent analysis of the data, and that's really important,” he stressed.
Moreover, he noted, “you can't start your study until the FDA approves it, if you're on the regulatory path, so you have public servants who have no financial conflict telling you if the design is okay. And then the adverse events get reported to the FDA, so they track the studies. Then if you decide to submit for marketing claims, the FDA gets the raw data.”
The agency also conducts site inspections, he added, although data analyses are typically much more revealing. “It's hard to really cheat these days, because for major trials they actually look at the databases,” Califf added.
Other changes of the past decade have also helped to improve transparency, a point made in different ways by Krumholz, Krucoff, and Waksman. These include mandatory clinical trial registration on ClinicalTrials.gov as well as efforts by the International Committee of Medical Journal Editors (ICMJE) to mandate open access to clinical trial data as a prerequisite for publishing in a member journal.
Rasmussen and Redberg, however, argue that while important steps have been taken, much more could be done. FDA reanalysis of raw data, for example, does not typically have an impact on study publications, which often precede the FDA review. In the past, Rasmussen pointed out, JAMA had mandatory criteria for publication, including the requirement that an independent statistical analysis be conducted by an academic biostatistician. That requirement, however, was dropped in 2013 following complaints that it created a barrier to publication and because, after 2010, the journal was no longer seeing meaningful differences between study results analyzed by industry sponsors and those analyzed by independent statisticians.
Rasmussen, though, asserted that journals could still be part of the solution, by requiring more autonomy for academic investigators. She pointed to the eight trials in their study that had no industry involvement beyond funding. “It does go to show that you can have a fully industry-funded trial where everything from design, conduct, statistical analysis, and reporting [is done] by academics alone,” she said.
The idea that journals could play a bigger role gets support from an editorial accompanying Rasmussen et al’s study. In it, Paula A Rochon, MD, MPH (Women’s College Research Institute, Toronto, Canada), and colleagues argue that journal editors could require authors to independently disclose their role in the study; stipulate that authors submit full trial protocols as set out in the SPIRIT statement (which includes details on roles and responsibilities of study sponsors); and do more to enforce the ICMJE’s recommendations for research conduct, reporting, editing, and publication.
Redberg, who has been critical of aspects of the FDA’s regulatory pathway in the past, argued that the agency’s track record of requiring rigorous design elements or adequate follow-up in industry-funded trials remains imperfect. “My understanding is that for SYMPLICITY HTN-3, it was the FDA that asked for a sham-control arm in that study, but I can't think of a lot of other examples where the FDA has asked for a sham-control arm in a device study,” she said. “They certainly haven't done so for the 30 years of stenting and angioplasty and many other implanted devices.”
She would like to see more of the considerable industry dollars spent on clinical trials be earmarked for clinicians and scientists working in academia. A shortage of university biostatisticians, for example, is likely explained by the fact that private companies are offering higher salaries, Redberg suggested.
“The solution? Give more money to academics to run these trials,” she proposed. “I think we have to agree that it is important to us as a society and as a profession to have objective data that is not biased by conflicts of interest. We're talking about drugs and devices that cost hundreds of millions and billions that we're all paying for, and it's really important to know that the risks and benefits are really what we think they are. And I think our study [in the BMJ] calls that into question.”
Califf, for his part, said industry can’t be blamed for hiring some of the brightest minds in the business, noting, “I would put it differently. I would say that academic medical centers have been asleep at the wheel and haven't stood up to their end of their responsibility. And as a result, by default, industry has to do [the analyses].”
Rasmussen acknowledged the importance of not unduly burdening the research process when human health is at stake. “Obviously you don't want to turn this into a bureaucratic process, but the onus [is on physicians] to be able to prescribe drugs that you know are effective and [for which] you are aware of what harms there are,” she said. “Patient safety reasons would be a pretty good argument to allow academics to [have control].”
She added: “I think you could regulate this quite easily and I'm a bit surprised—well, I guess I don't know if ‘surprised’ is the right word because I think that's how the world works. But when I talk to lay people about this they say, ‘What? This is how it's done?’ So it just seems a bit strange that we allow that to happen.”
Shelley Wood is the Editor-in-Chief of TCTMD and the Editorial Director at CRF.

Sources
Rasmussen K, Bero L, Redberg R, et al. Collaboration between academics and industry in clinical trials: cross sectional study of publications and survey of lead academic authors. BMJ. 2018;363:k3654.
Rochon PA, Stall NM, Savage RD, Chan A-W. Transparency in clinical trial reporting. BMJ. 2018;363:k4224.
Disclosures
- The BMJ authors report having no support from any organization for the submitted work.
- Redberg reports serving as the editor of JAMA Internal Medicine, which is included in the sample of journals studied by Rasmussen et al, but had no role in data extraction or analysis of the results; no financial relationships with any organizations that might have an interest in the submitted work in the previous 3 years; and no other relationships or activities that could appear to have influenced the submitted work.
- Rochon reports no relevant conflicts of interest.
- Other experts interviewed for this story have all been PIs for large clinical trials, including industry-sponsored trials, or have received industry funding for other collaborative projects.