05-12-2011, 04:16 PM   #1
gdpawel
Genomic Medicine -- A Welcome Dose of Humility

In the May 27, 2010, issue of the New England Journal of Medicine, the leading lights of genomics offer an expectations-lowering retrospective on the genomics revolution's impact on health care. It is the first in a series of articles on Genomic Medicine in NEJM, occasioned by the tenth anniversary of the sequencing of the human genome.

The scientist in charge of that effort, Francis Collins, now heads the National Institutes of Health. He is one of three co-authors of a new review that notes:

Most SNPs (single-nucleotide polymorphisms, or variations at a single DNA base) associated with common diseases explain a small proportion of the observed contribution of heredity to the risk of disease - in many cases less than 5 to 10% - substantially limiting the use of these markers to predict risk. It thus comes as no surprise that as yet there are no evidence-based guidelines that recommend the use of SNP markers in assessing the risk of common diseases in clinical care.

http://www.nejm.org/doi/full/10.1056...jkey=d1a2b3572
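To make the point concrete, here is a back-of-the-envelope sketch in Python of how little a single small-effect SNP moves an individual's absolute risk. The 10% baseline risk and the odds ratio of 1.2 are illustrative assumptions, not figures from the NEJM review.

[code]
# Illustrative only: how much does one small-effect SNP move an individual's risk?
# Assumed numbers (not from the NEJM review): 10% baseline risk and a
# per-allele odds ratio of 1.2, typical of SNPs found in genome-wide studies.

def risk_with_odds_ratio(baseline_risk, odds_ratio):
    """Convert a baseline risk plus an odds ratio into an absolute risk."""
    baseline_odds = baseline_risk / (1.0 - baseline_risk)
    new_odds = baseline_odds * odds_ratio
    return new_odds / (1.0 + new_odds)

baseline = 0.10                                   # assumed 10% baseline risk
carrier = risk_with_odds_ratio(baseline, 1.2)     # carries one risk allele
print(f"non-carrier risk: {baseline:.1%}")        # 10.0%
print(f"carrier risk:     {carrier:.1%}")         # about 11.8%
# The marker may be statistically real, yet it barely changes what a clinician
# would tell the patient -- which is why such SNPs have not produced
# evidence-based guidelines for individual risk prediction.
[/code]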

In an accompanying commentary, Harold Varmus, a former NIH director who is slated to become the new head of the National Cancer Institute, also seeks to lower expectations about the impact of genomics on health care. He specifically takes aim at mechanistic interpretations of "personalized" medicine, a term often used to refer to the use of an individual's genomic analysis to drive medication strategies.

The term "personalized medicine" has become nearly ubiquitous as a means of conveying how molecular tests can subdivide diagnostic categories and refine therapeutic choices. This phrase may also prove to be strategically successful - by preemptively warding off claims that an overreliance on genotypes in medical practice is deterministic and thus "impersonal," or that genetic approaches undermine more traditional approaches to "personalized" care that are based on knowledge of a patient's behavior, diet, social circumstances, and environment. Of course, both genetic and nongenetic information is important; the more we know about a patient - genes and physiology, character and context - the better we will be as physicians. By measuring the distance to a fuller integration of genomic knowledge into patient care, this new series of articles may encourage a more nuanced calibration of what it means to "personalize" medicine.

Most of the first article and commentary in the series is devoted to outlining the promise of genomics, of course. We'd expect nothing less from the scientists-turned-government-officials who are in charge of awarding billions of dollars annually to researchers pursuing population-based gene-disease correlation studies from their desktop computers. But it is an important milestone in its admission that genes, in the vast majority of cases, are not destiny and, with the exception of a few well-studied cancers (like breast cancer), provide limited guidance for care.

http://www.nejm.org/doi/full/10.1056...1933?query=TOC

Source: Gooznews

05-12-2011, 04:17 PM   #2
gdpawel
The Microarray (Gene Chip)

About a decade ago, scientists figured out how to transform genetic instructions into an electronic format. Gene profiling using a "microarray" - a chip of glass arrayed with thousands of gene fragments - was expected to revolutionize medicine by decoding the basis of disease.

"All human illness can be studied by microarray analysis, and the ultimate goal of this work is to develop effective treatments or cures for every human disease by 2050," wrote Mark Schena, an inventor of the technology.

But skepticism has set in. In an article in the Lancet, researchers reanalyzed the seven largest microarray studies on cancer prognosis. In five of the seven, the technology performed no better than flipping a coin. The two other studies barely beat horoscopes, according to John P. Ioannidis, a clinical epidemiologist at Tufts University School of Medicine, writing in an accompanying editorial.

To understand why, consider the fable about six blind men and an elephant. Each man feels a different part of the animal. One man argues that the creature is a snake, another a spear, another a wall, and so on. A little girl who can see the elephant says, "Each of you is right, but you are all wrong."

Depending on how researchers "feel" their molecular data - using computer analysis to massage, stroke and ignore certain parts - they may discover right answers that are all wrong.

David Ransohoff, a University of North Carolina epidemiologist, says results cannot be trusted unless they can be produced again and again: "Figuring out whether a result is real and not simply caused by chance is determined in part by validation - by reproducing the result in an independent set of samples." In other words, go feel another elephant.

But even that is not enough, Ransohoff and other experts say. The ultimate validation requires clinical studies in actual patients. A molecular diagnostic method must be as reliable as traditional tools such as imaging tests and surgical biopsy.
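Here is a minimal simulation of the point Ransohoff is making. The data are pure noise (random "expression" values, random outcomes), and numpy and scikit-learn are used only as convenient illustrative tools; nothing here comes from the Lancet reanalysis itself.

[code]
# Sketch only (not the Lancet reanalysis): build a "gene signature" from pure
# noise, then test it on an independent set of samples.
# Requires numpy and scikit-learn.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_train, n_val, n_genes = 40, 40, 5000

# Random "expression" values and random outcomes: there is nothing real to find.
X_train = rng.normal(size=(n_train, n_genes))
y_train = rng.integers(0, 2, size=n_train)
X_val = rng.normal(size=(n_val, n_genes))
y_val = rng.integers(0, 2, size=n_val)

# Keep the 20 genes that best separate the training outcomes, then fit a model.
selector = SelectKBest(f_classif, k=20).fit(X_train, y_train)
model = LogisticRegression(max_iter=1000).fit(selector.transform(X_train), y_train)

print("accuracy on the samples used to build the signature:",
      model.score(selector.transform(X_train), y_train))  # near-perfect
print("accuracy on an independent set of samples:",
      model.score(selector.transform(X_val), y_val))      # roughly a coin flip
[/code]

On the samples used to build the signature the model looks convincing; on the independent samples it falls back to a coin flip. That is why validation in a separate set of patients is the minimum bar.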

The related hunt for protein biomarkers by mass spectrometry is tremendously manipulative, indirect, and often ambiguous. One problem is that trace proteins - the potential biomarkers - may be swamped by more abundant proteins, despite techniques to concentrate the scarcest ones on the special chip that goes into the mass spectrometer.

Another problem is that the spectrometer's measurements - made after vaporizing the proteins and giving them a positive charge - are least reliable in the low range where biomarkers are presumed to exist.

Finally, the spectrometry results can be thrown off by countless variables, including machine miscalibration and handling of blood samples. All of which makes results difficult to reproduce, even in the same lab using the same blood samples.
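A toy simulation of that reproducibility problem, with invented numbers: a genuine but tiny biomarker difference, measured in batches on an instrument whose calibration drifts between runs, produces apparent differences that swing from run to run.

[code]
# Toy numbers, not real proteomics data: a true biomarker difference of 2 units
# sits on top of run-to-run calibration drift and per-sample noise.
import random

random.seed(1)

def measure(true_level, calibration_shift, noise_sd=5.0):
    """One mass-spec style reading: true level + this run's drift + noise."""
    return true_level + calibration_shift + random.gauss(0.0, noise_sd)

true_cancer, true_healthy = 102.0, 100.0   # the real difference is small

for run in range(1, 4):
    cancer_drift = random.gauss(0.0, 3.0)   # this run's miscalibration
    healthy_drift = random.gauss(0.0, 3.0)  # healthy samples handled in another batch
    cancer_readings = [measure(true_cancer, cancer_drift) for _ in range(10)]
    healthy_readings = [measure(true_healthy, healthy_drift) for _ in range(10)]
    diff = sum(cancer_readings) / 10 - sum(healthy_readings) / 10
    print(f"run {run}: apparent cancer-vs-healthy difference = {diff:+.1f}")
# The apparent difference swings widely from run to run -- and can even change
# sign -- even though the underlying samples never changed.
[/code]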

Source: Lancet, February 2005

05-12-2011, 04:19 PM   #3
gdpawel
The Cancer Letter July 23, 2010, Vol. 36 No. 28

By Defending Potti, Duke Officials Become Target Of Charges Of Institutional Failure. Duke Officials Decline To Provide Details Of Probe. Biostatisticians Write To Varmus Asking NCI To Investigate. The Lancet Oncology Issues “Expression Of Concern” Over Paper. Duke Insiders Allege Intimidation By Administration. Also in this issue: ODAC Votes To Strip Breast Cancer Indication From Avastin.

http://www.cancerletter.com/downloads/20100803_10

Ninety percent of biomarker studies are total crap. And this is so even if the logistical, study-conduct issues are carried out flawlessly. Sloppiness a la Potti/Nevins leads to 100 percent crap. But it’s not just Potti/Nevins. The whole concept of using molecular signatures of any kind to do anything beyond the most straightforward of cases is so flawed that everyone should have seen the problems at the beginning. A beautiful biological technology is no different from a beautiful computer technology: it’s not worth much without some very good apps, and personalized molecular medicine is still waiting for its first killer app.

“100 Percent Crap”

Donald Berry, chairman of the Department of Biostatistics and head of the Division of Quantitative Sciences at MD Anderson, said the Duke scandal [i.e. Potti] puts the entire field of genomics at risk.

Berry continued:

“About 10 years ago, I read in Newsweek that the high-paying, glamorous job of the new millennium was bioinformatics,” Berry, one of the statisticians who signed the letter to Varmus, said in an email. “We were going to cure diseases in the near time frame. (Francis Collins was at the forefront of pushing this attitude.) My reaction was that we didn’t know how to handle one gene (and we still don’t), never mind 20,000 genes.

“It was clear then, and it is clear now, that false-positive leads pop up all over the place and we have to keep banging them back down, as in ‘Whack-a-Mole.’ I say ‘we.’ Unfortunately, few people understand this, although the plethora of unconfirmable observations gets people asking, ‘Why?’ I’ve been saying for years that 90 percent of biomarker studies are crap. And this is so even if the logistical, study conduct issues are carried out flawlessly. Sloppiness a la Potti/Nevins leads to 100 percent crap.”
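Berry's "Whack-a-Mole" point is easy to reproduce on paper. The sketch below uses made-up data and a conventional 0.05 significance threshold to screen 20,000 "genes" that have no real association with outcome; numpy and scipy are assumed merely as convenient tools.

[code]
# Illustrative only: screen 20,000 "genes" with no real association to outcome
# and count how many clear p < 0.05 anyway. Requires numpy and scipy.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
n_genes, n_per_group = 20_000, 25

# Pure noise: simulated "expression" for 25 patients with the disease and 25 without.
group_a = rng.normal(size=(n_genes, n_per_group))
group_b = rng.normal(size=(n_genes, n_per_group))

# Gene-by-gene t-tests: the naive screen that generates the false-positive leads.
_, p_values = ttest_ind(group_a, group_b, axis=1)
print("'significant' genes found in pure noise:", int((p_values < 0.05).sum()))
[/code]

Roughly 5 percent of the 20,000 tests - about a thousand "genes" - clear the threshold by chance alone, a steady supply of false-positive leads for later studies to bang back down.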