Posted by gdpawel, 10-08-2013, 12:27 PM
Why Oncologists Don’t Like In Vitro Chemosensitivity Tests

Robert Nagourney, M.D.

In human experience, the level of disappointment is directly proportional to the level of expectation. When, for example, the world was apprised of the purported development of cold fusion, a breakthrough of historic proportions, the expectations could not have been greater. Cold fusion, the capacity to harness the sun's power without the heat and radiation, was so appealing that people rushed into a field about which they understood little. Those who remember that episode from 1989 and the early 1990s will recall the shock and dismay of the scientists and investors who rushed to sponsor and support the venture, only to be left out in the cold when the data came in.

Since the earliest introduction of chemotherapy, the ability to select active treatments before having to administer them to patients has been the holy grail of oncologic investigation. During the 1950s and 60s, chemotherapy treatments were punishing. Drugs like nitrogen mustard were administered without the benefit of modern anti-emetics, and cancer patients suffered every minute. The nausea was extreme, the bone marrow suppression dramatic, and the benefits marginal at best. With the introduction of cisplatin in the pre-Zofran/Kytril era, patients experienced a heretofore unimaginable level of nausea and vomiting. With each passing day, medical oncologists wondered why they couldn't use the same techniques that had proven so useful in microbiology (bacterial culture and sensitivity) to select chemotherapy.

And then it happened. In June of 1978, the New England Journal of Medicine (NEJM) published a study involving a small series of patients whose tumors responded to drugs selected by in vitro (laboratory) chemosensitivity. Eureka! Everyone, everywhere wanted to do clonogenic (human tumor stem cell) assays. Scientists traveled to Tucson to learn the methodology. Commercial laboratories were established to offer the service. It was a new era of cancer medicine. Finally, cancer patients could benefit from effective drugs and avoid ineffective ones. At least, it appeared that way in 1978.

Five years later, the NEJM published an update of more than 8,000 patients who had been studied by clonogenic assay. It seemed that with all the hype and hoopla, this teeny, tiny little detail had been overlooked: the clonogenic assay didn’t work. Like air rushing out of a punctured tire, the field collapsed on itself. No one ever wanted to hear about using human tumor cancer cells to predict response to chemotherapy – not ever!

As it happened, a seminal paper published in the British Journal of Cancer in 1972 had already described the phenomenon of apoptosis, a form of programmed cell death, and it made evident exactly why the clonogenic assay didn't work. By re-examining the basic tenets of cancer chemosensitivity testing, a new generation of assays was developed that measured drug-induced programmed cell death, not growth inhibition. Cancer didn't grow too much; it died too little. And these tests proved it.

Immediately, the predictive validity improved. Every time the assays were put to the test, they met the challenge. From leukemia and lymphoma to lung, breast, ovarian, and even melanoma, cancer patients who received drugs found active in the test tube did better than cancer patients who received drugs that looked inactive. Eureka! A new era of cancer therapy was born. Or so it seemed.

I was one of those naive investigators who believed that because these tests worked, they would be embraced by the oncology community. I presented my first observations in the 1980s, using the test to develop a curative therapy for a rare form of leukemia. Then we used this laboratory platform to pioneer drug combinations that, today, are used all over the world. We brought the work to the national cooperative groups, conducted studies and published the observations. It didn't matter. Because the clonogenic assay had failed, however evident the reasons for that failure had become, no one wanted to talk about the field ever again.

In 1600, Giordano Bruno was burned at the stake for suggesting that the universe contained other planetary systems. In 1633, Galileo Galilei was condemned by the Inquisition and confined to house arrest for promoting the heliocentric model of the solar system. Centuries later, Ignaz Semmelweis, MD, was committed to an insane asylum after he (correctly) suggested that puerperal sepsis was caused by bacterial contamination. A century later, the discoverers of Helicobacter pylori (the cause of peptic ulcer disease) were forced to suffer the slings and arrows of ignoble academic fortune until they were vindicated through the efforts of a small coterie of enlightened colleagues.

Innovations are not suffered lightly by those who prosper under established norms. To disrupt the standard of care is to invite the wrath of academia. The 2004 Technology Assessment published by Blue Cross/Blue Shield and ASCO in the Journal of Clinical Oncology, and ASCO's update seven years later, reflect little more than an established paradigm attempting to escape irrelevance.

Cancer chemosensitivity tests work exactly according to their well-established performance characteristics of sensitivity and specificity. They consistently provide superior response rates and, in many cases, longer time to progression and even improved survival. They can improve outcomes, reduce costs, accelerate research and eliminate futile care. If the academic community is so intent on putting these assays to the test, why has it repeatedly failed to support the innumerable efforts our colleagues have made over the past two decades to evaluate them fairly in prospective randomized trials? It is time for patients to ask exactly why their physicians do not use these tests, and to demand that those physicians provide data, not hearsay, to support their arguments.
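To make concrete what "performance characteristics of sensitivity and specificity" imply for an individual patient, here is a minimal sketch using purely hypothetical numbers (not drawn from this post or any cited study) of how those two figures combine with an assumed underlying response rate to yield the predictive value of an assay call:

```python
# Illustrative only: how assay sensitivity and specificity translate into
# predictive values for a given underlying response rate.
# All numbers are hypothetical, not taken from the post or any study.

def predictive_values(sensitivity: float, specificity: float, response_rate: float):
    """Return (PPV, NPV) for a binary 'assay-sensitive' call via Bayes' rule."""
    true_pos = sensitivity * response_rate                  # responders called sensitive
    false_pos = (1.0 - specificity) * (1.0 - response_rate) # non-responders called sensitive
    false_neg = (1.0 - sensitivity) * response_rate         # responders called resistant
    true_neg = specificity * (1.0 - response_rate)          # non-responders called resistant

    ppv = true_pos / (true_pos + false_pos)  # P(responds | assay says sensitive)
    npv = true_neg / (true_neg + false_neg)  # P(no response | assay says resistant)
    return ppv, npv

if __name__ == "__main__":
    # Hypothetical assay: 85% sensitivity, 80% specificity,
    # in a setting where 30% of patients respond to empirically chosen therapy.
    ppv, npv = predictive_values(0.85, 0.80, 0.30)
    print(f"PPV: {ppv:.2f}  NPV: {npv:.2f}")
```

Under these assumed figures, a patient whose tumor tests sensitive would have roughly a 65 percent chance of responding, versus 30 percent unselected, and a patient whose tumor tests resistant would be spared a drug with only about a 7 percent chance of working; that sort of gap is what proponents of the assays argue should be tested prospectively.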