01-11-2013, 02:26 AM   #1
News
Senior Member
Join Date: Oct 2007
Posts: 18,946
Spin And Bias In Published Studies Of Breast Cancer Trials

Spin and bias exist in a high proportion of published studies of the outcomes and adverse side-effects of phase III clinical trials of breast cancer treatments, according to new research published in the cancer journal Annals of Oncology [1] today (Thursday)...

02-09-2013, 02:15 PM   #2
gdpawel
Senior Member
Join Date: Aug 2006
Location: Pennsylvania
Posts: 1,080
Spin and Bias Sometimes Used to Put Best Face on Breast Cancer Study Findings

(news@JAMA) - Use of bias and spin when reporting negative findings from clinical trials may tip the balance in creating a more positive perception of a treatment that has little or no demonstrated benefit or may downplay serious adverse effects.

Most researchers conducting clinical trials hope their work pays off in positive results demonstrating that an experimental intervention benefits patients. But when a trial produces negative findings, showing that a treatment is not helpful or that it has adverse effects, some investigators mask the disappointing results through selective and biased reporting, say researchers whose study appears today in the Annals of Oncology.

The researchers, from the Princess Margaret Cancer Centre and the University of Toronto, looked at 164 randomized, controlled, phase 3 clinical trials of breast cancer treatments whose results were published between 1995 and 2011. They found 92 to be negative trials, in which the data did not demonstrate that the treatment under study had an effect on the primary end point, a specific event (such as survival or halting disease progression) that is measured at the end of a trial to see whether or not a given treatment works. But in 59% of the negative trials, the researchers used positive findings from secondary end points, additional events of interest that the studies were not specifically designed to address, to cast the treatment under study in a positive light.

The Canadian researchers also found bias in the way adverse effects of the treatment were discussed, with poor reporting of serious adverse effects (such as omitting mention of these problems in the abstract or conclusion) in about two-thirds of the publications with both positive and negative findings. They also found that publications of clinical trials with positive findings were twice as likely to underplay serious adverse effects compared with publications of negative studies.

Ian Tannock, MD, PhD, of Princess Margaret Cancer Centre, who guided the research, said better vigilance is needed to detect and eliminate bias and spin in clinical research. “Better and more accurate reporting is urgently needed,” Tannock said in a release. “Journal editors and reviewers, who give their expertise on the topic, are very important in ensuring this happens. However, readers also need to critically appraise reports in order to detect potential bias.”

http://annonc.oxfordjournals.org/con...36.short?rss=1
02-09-2013, 02:17 PM   #3
gdpawel
Senior Member
Join Date: Aug 2006
Location: Pennsylvania
Posts: 1,080
What about peer-review Journal bias?

Peer review lacks consistent standards. A peer reviewer often spends about four hours reviewing research that may have taken months or years to complete, but the amount of time spent on a review and the expertise of the reviewer can differ greatly.

Recent disclosures of fraudulent or flawed studies in professional medical journals have called into question the merits of their peer-review system. Passing peer review is not the scientific equivalent of the Good Housekeeping seal of approval, and journals do not control the world's information flow.

The power of the internet is amazing. All papers can be posted on internet websites, not just those selectively handled by so-called peer-reviewed journals. A paper gets sent to a so-called first-rate journal to be peer-reviewed. If it is accepted, great. If not, up it goes on the internet, and the information gets out there even more quickly and effectively than it would have if the journal had done the right thing and published what are very good and important papers.

Release of news about medical findings is among the most tightly managed in the country. Journals control when the public learns about findings by setting dates when the research can be published (if they allow it to be published at all). They impose severe restrictions on what authors can say publicly, even before they submit a manuscript, and they have penalized authors for infractions by refusing to publish their papers.

Journal editors are the "gatekeepers" of information, letting through only what they allow. What's that saying: "if peer review were a drug, it would never be marketed." Peer review is nothing but a form of vetting, and a reviewer's motives (anger, jealousy, or whatever) can color it. Reviewers are in fact often competitors of the authors of the papers they scrutinize, raising potential conflicts of interest.

Such problems are all the more embarrassing for journals because of their claims for the superiority of their system of editing. Journal editors do not routinely examine authors' scientific notebooks; they rely on peer reviewers' criticisms.

Then there is the problem with respected cancer journals publishing articles that identify safer and more effective treatment regimens, yet few oncologists are incorporating these synergistic methods into their clinical practice. Because of this, cancer patients often suffer through chemotherapy sessions that do not integrate all possibilities.

These are the major flaws in the system of peer-reviewed science. All the more reason why journalists should avoid relying on the latest studies for medical news coverage.

Retractions in the scientific literature: is the incidence of research fraud increasing?

R Grant Steen

Background:

Scientific papers are retracted for many reasons including fraud (data fabrication or falsification) or error (plagiarism, scientific mistake, ethical problems). Growing attention to fraud in the lay press suggests that the incidence of fraud is increasing.

Methods:

The reasons for retracting 742 English language research papers retracted from the PubMed database between 2000 and 2010 were evaluated. Reasons for retraction were initially dichotomised as fraud or error and then analysed to determine specific reasons for retraction.

Results:

Error was more common than fraud (73.5% of papers were retracted for error (or an undisclosed reason) vs 26.6% retracted for fraud). Eight reasons for retraction were identified; the most common reason was scientific mistake in 234 papers (31.5%), but 134 papers (18.1%) were retracted for ambiguous reasons. Fabrication (including data plagiarism) was more common than text plagiarism. Total papers retracted per year have increased sharply over the decade (r=0.96; p<0.001), as have retractions specifically for fraud (r=0.89; p<0.001). Journals now reach farther back in time to retract, both for fraud (r=0.87; p<0.001) and for scientific mistakes (r=0.95; p<0.001). Journals often fail to alert the naïve reader; 31.8% of retracted papers were not noted as retracted in any way.

Conclusions:

Levels of misconduct appear to be higher than in the past. This may reflect either a real increase in the incidence of fraud or a greater effort on the part of journals to police the literature. However, research bias is rarely cited as a reason for retraction.

J Med Ethics 2011;37:249-253 doi:10.1136/jme.2010.040923

http://jme.bmj.com/content/37/4/249....4-5f0509a6b599
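
For readers unfamiliar with the statistics quoted in the Results above: trend values like r = 0.96 are Pearson correlations between calendar year and retraction count. A minimal sketch of that calculation in Python, using hypothetical yearly counts rather than the paper's actual data:

[code]
# Minimal sketch of the trend statistic reported in the Results above:
# a Pearson correlation between calendar year and retraction count.
# The counts below are hypothetical, NOT the paper's data.
from scipy.stats import pearsonr

years = list(range(2000, 2011))  # the study window, 2000-2010
retractions = [20, 24, 31, 38, 45, 55, 68, 80, 97, 115, 130]  # hypothetical

r, p = pearsonr(years, retractions)
print(f"r = {r:.2f}, p = {p:.3g}")  # a steadily rising count gives r near 1
[/code]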
02-09-2013, 02:19 PM   #4
gdpawel
Senior Member
Join Date: Aug 2006
Location: Pennsylvania
Posts: 1,080
From Disclosure to Transparency

Pharmalot's Ed Silverman asks: what is the conclusion? Is there more fraud, or more policing? Ivan Oransky, the executive editor of Reuters Health and a co-founder of the Retraction Watch blog, which began recently in response to the spate of retractions, writes us that the simple use of eyeballs, plus software that can detect plagiarism, has made it possible to root out bad papers.

He also notes, however, that there are more journals, which explains why more papers, in general, are being published. “So the question is whether there have been more retractions per paper published,” Oransky writes, and then points to a chart to note that there were, indeed, many more.

“That’s really no surprise, given the increasing numbers of eyeballs on studies, and the introduction of plagiarism detection software. It’s unclear whether the actual amount of misconduct and legitimate error has grown; it may just be that we’re picking up on more of it,” he continues. “What makes it difficult to tell is a problem we often see at Retraction Watch: Opaque and unhelpful retraction notices saying only ‘this study was withdrawn by the authors.’ How does that make for transparent science? We think journals can do a lot better, by demanding that authors and institutions come clean about what went wrong.”

And why is there more fraud? As the Wall Street Journal notes, there is a lot to be gained - by both researchers and journal editors - from publishing influential papers. “The stakes are so high,” The Lancet editor Richard Horton tells the Journal. “A single paper in Lancet and you get your chair and you get your money. It’s your passport to success.”

A few notable retractions include an episode at the Mayo Clinic, where a decade of cancer research - which was partly taxpayer-funded - was undermined after the clinic realized that data about harnessing the immune system to fight cancer had been fabricated. A total of 17 papers published in nine research journals were retracted and one researcher, who maintained innocence, was fired.

Recently, 18 journals indicated plans to retract a total of 89 published studies by a German anesthesiologist, many concerning a drug used for maintaining blood pressure during surgery. Meanwhile, authorities in the UK are reviewing usage guidelines and a prosecutor in Germany is conducting a criminal probe, because data may have been fabricated.

And the Journal goes into detail on one instance: a 2003 paper in The Lancet that compared two high blood pressure meds and found them to be much better in combination than either alone. Patients given the combination experienced a 76 percent drop in protein loss, compared with 42 percent with one drug by itself and 44 percent with the other one alone.

The dramatic findings prompted suspicion, which led to a lengthy investigation. Even after noting “serious concerns,” The Lancet did not issue a retraction for more than six years. By then, more than 100,000 patients had been prescribed the combo, and thousands of people are probably still taking the drugs.

An investigation by a Japanese hospital where the lead author had worked found the researchers never obtained proper patient consent or approval for the study from the ethics committee of the hospital where they said the research was done. And the involvement of a statistician in the clinical trial could not be verified.

One of the doctors who suspected the article was dubious criticized The Lancet and its peer reviewers for not being more skeptical about the dramatic results. “Journals all want to have spectacular results,” Regina Kunz tells the paper. “Increasingly, they’re willing to publish more risky papers.”

The Lancet’s Horton pooh-poohs her criticism, and insists journals are becoming more conservative about publishing “provocative” research. But he concedes journals lack adequate systems to investigate misconduct. The apparent rise in scientific fraud, he tells the Journal, “is a scar on the moral body of science.”

The Use of Company Payment Data

Susan Chimonas, PhD; Zachary Frosch, BA; David J. Rothman, PhD

Arch Intern Med. Published online September 13, 2010. doi:10.1001/archinternmed.2010.341

ABSTRACT

Background: It has become standard practice in medical journals to require authors to disclose their relationships with industry. However, these requirements vary among journals and often lack specificity. As a result, disclosures may not consistently reveal author-industry ties.

Methods: We examined the 2007 physician payment information from 5 orthopedic device companies to evaluate the current journal disclosure system. We compared company payment information for recipients of $1 million or more with disclosures in the recipients' journal articles. Payment data were obtained from Biomet, DePuy, Smith & Nephew, Stryker, and Zimmer. Disclosures were obtained in the acknowledgments section, conflict of interest statements, and financial disclosures of recipients' published articles. We also assessed variations in disclosure by authorship position, payment-article relatedness, and journal disclosure policies.

Results: Of the 41 individuals who received $1 million or more in 2007, 32 had published articles relating to orthopedics between January 1, 2008, and January 15, 2009. Disclosures of company payments varied considerably. Prominent authorship position and article-payment relatedness were associated with greater disclosure, although nondisclosure rates remained high (46% among first-, sole-, and senior-authored articles and 50% among articles directly or indirectly related to payments). The accuracy of disclosures did not vary with the strength of journals' disclosure policies.

Conclusions: Current journal disclosure practices do not yield complete or consistent information regarding authors' industry ties. Medical journals, along with other medical institutions, should consider new strategies to facilitate accurate and complete transparency.

http://archinte.ama-assn.org/cgi/con...rnmed.2010.341
02-09-2013, 02:22 PM   #5
gdpawel
Senior Member
Join Date: Aug 2006
Location: Pennsylvania
Posts: 1,080
How are journal articles peer-reviewed?

[A surgeon-blogger known as Skeptical Scalpel tries to educate readers about the medical journal peer review process. He has been a surgeon for 40 years and was surgical department chairman and residency program director for over 23 of those years. He is board-certified in general surgery and a surgical sub-specialty. He has over 90 publications including peer-reviewed papers, case reports, editorials, letters and book chapters. He is an associate editor of a journal. He has been blogging for a year and a half.]

There is possibly some misunderstanding among science journalists regarding the process that the term “peer-review” encompasses.

I am an associate editor (AE) of a medical journal with a respectable impact factor. I also am or have been a manuscript reviewer for five different journals. I feel qualified to describe how manuscripts are reviewed and published.

In 2012, authors submit a manuscript electronically to the journal. It is assigned to an AE who screens it for appropriateness, format and, on occasion, readability in the English language. Manuscripts are not blinded. AEs and reviewers are aware of the authors’ names and their institutions.

The AE emails prospective peer-reviewers asking if they are willing to review the submission. Reviewers are chosen based on their self-reported areas of interest. They become listed as peer-reviewers by demonstrating expertise, usually by having submitted papers of their own. They may also be well-known experts through society memberships or familiarity with the journal’s editorial board members. I once became a peer-reviewer for a journal after writing a letter to the editor pointing out a statistical flaw in a published paper. Although seen by many as career-enhancing, the jobs of AE and peer-reviewer are not compensated.

If all goes well, the peer-reviewers return their recommendations in a timely way. Unfortunately, being a peer, an expert or an author in a field related to the manuscript’s topic does not necessarily mean that one can review a research paper competently. Most journals have guidelines for reviewers but no way to tell if the reviewer has read them. We often receive “two-sentence” reviews of 25-30 page (double-spaced) manuscripts.

A manuscript would have to be quite extraordinary to elicit only a two-sentence review. The reviewer may have been too busy, uninterested, incapable or not motivated to do a thorough job. But then why would he have accepted the assignment? That’s one of life’s great mysteries. The AE may have to become a peer-reviewer at times.

Assuming the AE receives two or three adequate reviews, he recommends accepting the paper, accepting it with revisions or rejecting it, and forwards that recommendation to the editor for a final decision.

Here is what we cannot do. We cannot verify that

1. the data are not fabricated;
2. all authors deserve to have their names listed on the paper;
3. no plagiarism has occurred;
4. the paper is not an attempted duplicate publication.

Journals have no resources to investigate any of these issues. We must accept the word of those submitting. Among other causes, pressure on faculty to publish and/or greed may promote scientific misconduct.

Is this a good system? No. What are the alternatives? I don’t know. I pointed out in a previous post that many more people have read my blog than ever read my research publications. One day, every paper may be posted and critiqued by the scientific public, a movement that has already begun on websites such as Faculty of 1000.

Meanwhile, expect to see more publications retracted as internet users discover and expose fraud and duplicate publications. For more on retractions, follow the blog Retraction Watch for interesting insights into the process.

One of the drivers of the proliferation of journals, both online and print, is the requirement of most Residency Review Committees that faculty of residency training programs must engage in research. This rule is not “evidence-based,” as there is no proof that a surgeon has to do research in order to be a good teacher or role model. Sometimes the opposite is true; the researcher can’t teach at all.

Some residents choose to train at community hospitals because they do not want to participate in research. [Anecdotally, I think many residents at university hospitals would rather not do research either.] As is the case with faculty, there is no proof that forcing a resident to do research will result in important discoveries or make her a better surgeon.

Look at this language from the RRC for Surgery.

Some members of the faculty should also demonstrate scholarship by one or more of the following:

II.B.5.b).(1) peer-reviewed funding;
II.B.5.b).(2) publication of original research or review articles in peer-reviewed journals, or chapters in textbooks;
II.B.5.b).(3) publication or presentation of case reports or clinical series at local, regional, or national professional and scientific society meetings

Since I dropped out of the business of training residents, I have been actively blogging and not cranking out mindless publishable research. Here is an interesting fact. I have no doubt that far more people have read what I have written in my blog for a year and a half than ever read all of my 95 published works combined.

For example, I wrote a blog entitled “Statistical vs. Clinical Significance: They Are Not the Same” in August of 2011. To date, it has been viewed 4466 times. I would guess that one post alone has been read by more people than ever have read my combined published papers. I have 1070 followers on Twitter. Again, it is likely that more people have read what I tweet than ever read my scholarly works.
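
To illustrate the point of that post with a quick sketch (my own invented numbers, not anything from the blog): with a huge sample, a clinically trivial difference can still come out "statistically significant".

[code]
# Hypothetical two-arm comparison: a 0.3 mmHg difference in systolic
# blood pressure is far too small to matter clinically, but with
# 50,000 patients per arm it will usually clear p < 0.05 anyway.
# All numbers are invented for illustration.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
control = rng.normal(loc=120.0, scale=15.0, size=50_000)
treated = rng.normal(loc=119.7, scale=15.0, size=50_000)

t, p = ttest_ind(treated, control)
print(f"t = {t:.2f}, p = {p:.3g}")  # p is usually well under 0.05 here,
                                    # yet the effect is clinically meaningless
[/code]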

So what’s the point? Although I have written that individuals who participate actively in social media like Twitter have very little influence when one looks at the big picture, the same can be said of publishing a journal article. Who really reads the 25 or so critical care journals that are currently being published online and in print?

Did I have more influence with my published writings or do I have more influence now with my blogging and tweeting? What do you think?

PS: Just like a journal article, I have cited myself three times.

http://skepticalscalpel.blogspot.com/
02-09-2013, 02:25 PM   #6
gdpawel
Senior Member
Join Date: Aug 2006
Location: Pennsylvania
Posts: 1,080
Bias in reporting of end points of efficacy & toxicity in randomized clinical trials

Of 164 included trials, 33% showed bias in reporting of the primary endpoint and 67% in the reporting of toxicity.

Bias in reporting of outcome is common for studies with negative primary endpoints. Reporting of toxicity is poor, especially for studies with positive primary endpoints.

The spin thing is agonizing. Include in that category the unbridled promotion of scientific papers in the media, and the fawning of reporters over meaningless results.

Pharmalot's Ed Silverman says "File this under ‘sweep it under the rug.’" It may be human nature to downplay unwanted or negative developments, but it is not considered good science. Nonetheless, some investigators have masked disappointing results of breast cancer treatment trials, such as missed primary endpoints and serious toxicity, with selective and biased reporting, according to a study in the Annals of Oncology.

Specifically, in one-third of all trials that failed to show a statistically significant benefit for the breast cancer medication being tested, the published studies emphasized less important outcomes in hopes of giving the results a positive spin. However, there was no association between industry or for-profit trial sponsorship and biased reporting of either efficacy or toxicity.

The researchers examined the results of 164 randomized, controlled Phase III trials of breast cancer treatments that were published between 1995 and 2011. They found 92 that were negative trials in which data did not demonstrate the medication had an effect on the primary endpoint. But in 59 percent of the negative trials, the authors used positive findings from secondary endpoints to cast the treatment in a positive light.

The researchers also found bias in the way adverse effects were discussed. Poor reporting of serious side effects, such as omitting mention of them in the abstract or conclusion, was found in 67 percent of the publications with both positive and negative findings. Moreover, trials with positive findings were twice as likely to underplay serious adverse effects compared with publications of negative studies.

However, some of the studies that were analyzed began before registration of clinical trials in such registries as ClinicalTrials.gov or clinicaltrialsregister.eu became mandatory. For those that were registered, the researchers found that some trials had primary endpoints changed between registration and when the findings were published.

“Among these trials, there was a trend towards change of the primary endpoint being associated with positive results, suggesting that it may be a strategy to make a negative trial appear positive,” the study authors write. “Trial registration does not necessarily remove bias in reporting outcomes, although it does make it easier to detect.”
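
The detection the authors describe is mechanical enough to sketch: compare each trial's registered primary endpoint against the one reported in the publication. A hypothetical example follows; the record layout and field names are illustrative only, not the actual ClinicalTrials.gov schema.

[code]
# Hypothetical endpoint-switch screen: flag trials whose published
# primary endpoint differs from the one they registered. Field names
# and records are made up, not the real ClinicalTrials.gov schema.

def normalize(endpoint):
    # Crude normalization so "Overall Survival" matches "overall survival".
    return " ".join(endpoint.lower().split())

def switched(trials):
    return [t["nct_id"] for t in trials
            if normalize(t["registered_endpoint"]) != normalize(t["published_endpoint"])]

trials = [
    {"nct_id": "NCT-EXAMPLE-1",
     "registered_endpoint": "Overall survival",
     "published_endpoint": "Progression-free survival"},   # switched
    {"nct_id": "NCT-EXAMPLE-2",
     "registered_endpoint": "Progression-free survival",
     "published_endpoint": "progression-free survival"},   # consistent
]

print(switched(trials))  # -> ['NCT-EXAMPLE-1']
[/code]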

They evaluated associations of bias with the Journal Impact Factor, which measures the frequency with which the average article in a journal has been cited in a given period of time; with changes in the primary endpoint compared with information in ClinicalTrials.gov; and with funding sources. For recent trials, they determined whether the primary endpoints listed in ClinicalTrials.gov were the same as those reported in abstracts or papers.
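
For reference, the two-year Journal Impact Factor is just a ratio: citations received in a given year to items the journal published in the two prior years, divided by the number of citable items it published in those years. A worked sketch with invented figures:

[code]
# Two-year Journal Impact Factor for year Y:
#   citations in Y to items published in Y-1 and Y-2
#   ------------------------------------------------
#   citable items published in Y-1 and Y-2
# The figures below are invented for illustration.

def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    return citations_to_prior_two_years / citable_items_prior_two_years

# e.g. 12,000 citations in 2012 to a journal's 2010-2011 output of 400
# citable items gives an impact factor of 30.0:
print(impact_factor(12_000, 400))
[/code]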

Why focus on trials testing breast cancer treatments? Because breast cancer is “the most common malignancy in women, has substantial mortality and is a cancer site with a large number of trials,” the authors write. They noted that neither disease-free survival nor progression-free survival has been shown to be an adequate surrogate for overall survival in women with breast cancer, yet 83.5 percent of the trials used these endpoints.