10-13-2012, 11:07 AM   #3
gdpawel, Senior Member
Join Date: Aug 2006 | Location: Pennsylvania | Posts: 1,080
Type I Error

Robert A. Nagourney, M.D., Ph.D.

Scientific proof is rarely proof; it is usually our best approximation. Beyond death and taxes, there are few certainties in life. That is why investigators rely so heavily on statistics.

Statistical analyses enable researchers to establish “levels” of certainty. Reported as “p-values,” these metrics tell the reader how likely it is that a given finding is simply the result of chance. A p-value of 0.1 (1 in 10) means there is a 10 percent probability that a result at least as extreme would arise by chance alone if no real effect existed. A p-value of 0.05 (1 in 20) puts that probability at 5 percent, and a p-value of 0.01 (1 in 100) puts it at 1 percent. For an example in real time, we are just reporting a paper in the lung cancer literature that doubled the response rate for metastatic disease compared with the national standard. The results achieved statistical significance at p = 0.00015. That is to say, there are only 15 chances in 100,000 that chance alone would produce a result this extreme.
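
To make the arithmetic concrete, here is a minimal Python sketch of what a p-value measures. The trial in it (60 patients, 22 responses, a 25 percent historical response rate) is invented purely for illustration; these are not the figures from the lung cancer paper:

    # Monte Carlo estimate of a one-sided p-value: how often does
    # chance alone (the null hypothesis) produce a result at least
    # as extreme as the one observed?
    # NOTE: all trial numbers below are hypothetical illustrations.
    import random

    def simulated_p_value(n_patients=60, observed_responses=22,
                          null_rate=0.25, trials=100_000):
        at_least_as_extreme = 0
        for _ in range(trials):
            # Simulate one trial in which no real improvement exists.
            responses = sum(random.random() < null_rate
                            for _ in range(n_patients))
            if responses >= observed_responses:
                at_least_as_extreme += 1
        return at_least_as_extreme / trials

    print(simulated_p_value())  # roughly 0.03 for these numbers

A small value from such a simulation says only that chance is an unlikely explanation; it does not, by itself, prove the finding is true.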

Today, many laboratories offer tests that claim to select candidates for treatment. Almost all of these laboratories conduct gene-based analyses. While there are no good prospective studies proving that these genomic analyses accurately predict response, this has not prevented the companies from marketing their tests aggressively. Indeed, many insurers cover these services despite the lack of proof.

So let’s examine why these tests may encounter difficulties now and in the future. The answer, to put it succinctly, is Type I errors. In the statistical literature, a Type I error occurs when the “null” hypothesis (the premise that there is no real effect) is rejected even though it is true: a false positive. A Type II error occurs when a false null hypothesis is not rejected: a false negative.
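
In code, the two error types look like this. The sketch below uses a coin-fairness test at roughly the 5 percent significance level; the coin, the cutoff, and the bias are all assumptions chosen for illustration:

    # Type I error: rejecting a TRUE null hypothesis (false positive).
    # Type II error: failing to reject a FALSE null hypothesis
    # (false negative). Test: declare a coin biased if it shows 59 or
    # more heads in 100 flips (about a 5% one-sided cutoff).
    import random

    def rejects_null(true_heads_rate, flips=100, cutoff=59):
        heads = sum(random.random() < true_heads_rate
                    for _ in range(flips))
        return heads >= cutoff

    trials = 10_000
    # Null is true (fair coin): every rejection is a Type I error.
    type1_rate = sum(rejects_null(0.50) for _ in range(trials)) / trials
    # Null is false (coin biased to 65% heads): every failure to
    # reject is a Type II error.
    type2_rate = sum(not rejects_null(0.65) for _ in range(trials)) / trials
    print(f"Type I rate:  {type1_rate:.3f}")   # roughly 0.05
    print(f"Type II rate: {type2_rate:.3f}")   # roughly 0.09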

Example: The scientific community is asked to test the hypothesis that Up is Down. Dedicated investigators conduct exhaustive analyses of this provocative hypothesis, and their data appear to refute the null hypothesis that Up is not Down. They are left with no alternative but to report that, according to their carefully conducted studies, Up is Down.

The unsuspecting recipient of this report takes it to his physician and demands to be treated based on the finding. The physician explains that, to the best of his recollection, Up is not Down. Unfazed, the patient, armed with this august laboratory’s result, demands to be treated accordingly. What is wrong with this scenario? A Type I error: a false positive reported as fact.

The human genome comprises more than 23,000 genes. Splice variants, duplications, mutations, SNPs, non-coding DNA, small interfering RNAs, and a wealth of downstream events make the interpretation of genomic data highly problematic. The fact that a laboratory can identify a gene does not confer certainty that the gene, mutation, or splice variant will determine an outcome. To put it simply, the sheer number of possibilities overwhelms the capacity of the test to rule the answer in or out.
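
A short sketch shows the scale of the problem. Assume, purely for illustration, a screen of 23,000 genes in which no gene truly predicts response:

    # Under the null hypothesis, p-values are uniformly distributed
    # on [0, 1], so a screen at p < 0.05 still flags about 5% of
    # genes even when NONE of them is truly predictive.
    import random

    n_genes = 23_000   # size of the hypothetical screen
    false_hits = sum(random.random() < 0.05 for _ in range(n_genes))
    print(f"{false_hits} 'significant' genes out of {n_genes:,}")
    # about 1,150 expected, every one of them a Type I error

Roughly 1,150 spurious “hits” from a screen with no true signal at all: that is the Type I error problem in miniature.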

Yes, we can measure the gene finding, and yes, we have found some interesting mutations. But, no, we cannot reject the null hypothesis. Thus, apart from a small number of discrete events for which the performance characteristics of these genomic analyses have been established and rigorously tested, Type I errors undermine and corrupt the predictions of even the best laboratories. You would think that, with all of the brainpower dedicated to contemporary genomic analyses, these smart guys would remember some basic statistics.