05-06-2014, 09:54 PM   #1
gdpawel (Senior Member)
Personalized Cancer Care: N-of-1

Robert A. Nagourney, M.D.

New York Yankees catcher Yogi Berra’s famous quote, “Déjà vu all over again,” reminds me of the growing focus on the concept of “N-of-1.” For those of you unfamiliar with the catchphrase, it refers to a clinical trial of a single subject.

In clinical research, studies are deemed reportable when they achieve statistical significance. The so-called power analysis is the purview of the biostatistician, who examines the desired outcome and explores the number of patients (subjects) required to achieve significance. The term “N” is this number. The most famous clinical trials are those large, cooperative group studies that, when successful, are considered practice-changing. That is, a new paradigm for a disease is described. To achieve this level of significance it is generally necessary to accrue hundreds, even thousands, of patients. This is the “N” that satisfies the power analysis and fulfills the investigators’ expectations.
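To make the power analysis concrete, here is a minimal sketch in Python using the statsmodels library; the effect size, significance level, and power below are illustrative assumptions, not figures from any particular trial.

[CODE]
# Sketch: solving a power analysis for "N" with statsmodels.
# Assumed inputs: a small effect (Cohen's d = 0.2), the conventional
# 5% significance level, and 80% power.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_arm = analysis.solve_power(effect_size=0.2, alpha=0.05, power=0.8)
print(f"Patients required per arm: {n_per_arm:.0f}")  # roughly 394 per arm
[/CODE]

A small assumed effect is exactly what drives the “hundreds, even thousands” of patients: since N scales with the inverse square of the effect size, halving the effect roughly quadruples the required N.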

So what about an N-of-1? This disrupts every tenet of cancer research, upends every power analysis, and completely rewrites the book of developmental therapeutics. Each patient serves as his or her own control; the patient’s outcome reflects the success or failure of “the trial.” There is no power analysis. It is an “N” of 1.

This “breakthrough” concept, however, has been the underpinning of the work of investigators like Drs. Larry Weisenthal, Andrew Bosanquet, Ian Cree, myself and all the other dedicated researchers who pioneered the concept of advancing cancer outcomes one patient at a time. These intrepid scientists described the use of each patient’s tissue to guide therapy selection. They wrote papers, conducted trials and reported their successful results in the peer-reviewed literature. These results, I might add, have provided statistically significant improvements in clinical responses, times to progression, even survival. By incorporating the contribution of the cellular milieu into clinical response prediction, these functional platforms have consistently outperformed their genomic counterparts in therapy selection. So why, one might ask, have the efforts of these dedicated investigators fallen on deaf ears?

I think that the explanation lies in the fact that we live in a technocracy. In this environment, science has replaced religion and medical doctors have abdicated control of clinical development to the basic scientists and basic scientists love genomics. It is no longer enough to have good results; you have to get the results the right way. And so, meaningful advances in therapeutics based on functional platforms have been passed over in favor of marginal advances based on genomic platforms.

There is nothing new about N-of-1. It has been the subject of these investigators’ compelling observations for more than two decades. Though functional platforms (such as our EVA-PCD®) are not perfect, they provide a 2.04-fold improvement (1.62 to 2.57, P < 0.001) in clinical response for virtually all forms of cancer, as we will be reporting (Apfel C, et al. Proc ASCO, 2013).
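For readers curious how a fold-improvement figure like 2.04 (1.62 to 2.57) relates to its interval, here is a hedged sketch: ratio estimates are conventionally analyzed on the log scale, and the standard error below is back-calculated from the quoted interval purely for illustration, assuming it is a 95% confidence interval.

[CODE]
# Sketch: reconstructing a ratio's 95% CI on the log scale.
# Assumption: the quoted 1.62-2.57 range is a 95% CI, symmetric in log space.
import math

ratio, lower, upper = 2.04, 1.62, 2.57

# Back-derive SE(log ratio) from the log CI half-width / 1.96
se = (math.log(upper) - math.log(lower)) / (2 * 1.96)

recon_lower = math.exp(math.log(ratio) - 1.96 * se)
recon_upper = math.exp(math.log(ratio) + 1.96 * se)
print(f"SE(log ratio) ~ {se:.3f}")                                  # ~0.118
print(f"Reconstructed CI: {recon_lower:.2f} to {recon_upper:.2f}")  # 1.62 to 2.57
[/CODE]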

It seems that in the field of cancer therapeutics, “perfect is the enemy of good.” By this reasoning, good tests should not be used until perfect tests are available. Unfortunately, for the thousands of Americans who confront cancer each day, there are no perfect tests. Perhaps we should be more willing to use good ones while we await the arrival of perfect ones. After all, it was Yogi Berra who said, “If the world were perfect, it wouldn’t be.”
05-06-2014, 09:56 PM   #2
gdpawel (Senior Member)
Published Studies Often Conflict With Results Reported to ClinicalTrials.gov

Joseph S. Ross, MD, MHS
Yale University School of Medicine

Study results published in major medical journals often conflict with the data their authors have submitted to ClinicalTrials.gov, according to an analysis published in JAMA on March 11, 2014.

The ClinicalTrials.gov registry, maintained by the National Library of Medicine, was created to help improve transparency in the medical literature by ensuring that all results of clinical trials, whether published or not, are archived in a single repository. A 2007 law mandated that researchers post results of studies on all products regulated by the US Food and Drug Administration (FDA) within 12 months. Many journals have also pledged to require their authors to report their findings in the registry. But numerous problems with the registry have been documented since its creation, including a failure of many researchers to report their results and sloppy data entry by investigators.

A new analysis by Joseph S. Ross, MD, MHS, an assistant professor of medicine at Yale University School of Medicine, and his colleagues raises questions about the accuracy of what is reported in the registry and in the medical literature. The team compared the results of 96 trials published in top-tier medical journals, including JAMA, the New England Journal of Medicine, and the Lancet, with the results of those trials reported in ClinicalTrials.gov. They found at least 1 discrepancy in the results reported for 93 of the trials. Results matched in both the registry and the journal article in only about half the cases.

Ross discussed the findings with news@JAMA.

news@JAMA: Why did you choose to do this study?

Dr Ross: Our research group is interested in thinking of ways to improve the quality of clinical research. When the Food and Drug Administration amendments were passed requiring results reporting [to the ClinicalTrials.gov registry], we were interested in how that would play out. There have been studies about how compliant researchers are with this requirement. We wanted to look at how accurate the reported findings are. By comparing the reported results to published trials, we wanted to see how well it was working. What we found was a surprise.

news@JAMA: Why were the results surprising?

Dr Ross: We found important discrepancies between the results reported in ClinicalTrials.gov and the published results. We don’t know which is right. There were lots of end points reported in 1 source that weren’t reported in the other.

news@JAMA: Can you give an example?

Dr Ross: We started by looking at the primary end points published in high-impact journals and what end points were reported in ClinicalTrials.gov. Of 90-some-odd trials, there were 150 to 160 primary end points; 85% were described in both sources, 9% only in ClinicalTrials.gov and 6% only in the publications.

For the more than 2000 secondary end points, 20% were reported only in ClinicalTrials.gov and 50% only in publications. Only 30% were described in both sources.

You see that only part of the information is available in 1 source. We need to make the sources as complete as possible. The publications need to link back to ClinicalTrials.gov because they often don’t include all the end points.
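As a toy illustration of the comparison Ross describes, the sketch below classifies end points as appearing in both sources, only in the registry, or only in the publication; the end point names are hypothetical, and the actual study made this comparison across 96 trials rather than one.

[CODE]
# Sketch: classifying end points by source using set operations.
# End point names are made up for illustration.
registry = {"overall survival", "progression-free survival",
            "grade 3+ adverse events"}
paper = {"overall survival", "progression-free survival",
         "objective response rate"}

total = len(registry | paper)
print(f"Reported in both sources:   {len(registry & paper)} of {total}")
print(f"Only in ClinicalTrials.gov: {len(registry - paper)} of {total}")
print(f"Only in the publication:    {len(paper - registry)} of {total}")
[/CODE]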

news@JAMA: Why might there be such a difference?

Dr Ross: There are a lot of potential explanations.

More end points were reported in the published papers than in ClinicalTrials.gov. This suggests authors are reporting end points in the paper that weren’t predetermined and that make the results look better. That can skew the literature.

news@JAMA: Could edits made by the journals, such as requests for more information or new analyses, or typographical errors account for some discrepancies?

Dr Ross: It could be editing. An authorship team submits the results and these are publications that have strong editorial staffs. There could be slightly different approaches in analysis submitted to the 2 sources.

Some are typographical errors. For example, 1 study reported a hazard ratio of 4 in ClinicalTrials.gov instead of the hazard ratio of 2 in the study [the hazard ratio and standard deviation were transposed]. That perverts the study result.

news@JAMA: What can be done to improve the accuracy of results reporting?

Dr Ross: These results are increasingly being used by researchers and in meta-analyses; we want them to be accurate. The journals pay a large staff of full-time editors to make sure these studies don’t have errors, but ClinicalTrials.gov has a relatively small staff. We may need a larger endeavor than what the National Library of Medicine originally envisioned.

A third of the discordant results led to a different interpretation of the trial. This is a problem we need to be attending to. We studied the highest-tier journals, so this is likely the best-case scenario. These are likely the highest-achieving researchers. Who knows what’s happening with lower-tier journals?

http://newsatjama.jama.com/2014/03/1...caltrials-gov/

Note: Different results from the same study reported in different publications. This is sort of mind-boggling. It shows that a whole lot of the time, medical research authors are massaging and/or cherry-picking their own data, and they can’t even keep their own stories straight!