07-07-2013, 05:08 PM   #1   gdpawel (Senior Member)
There is the issue of cell-lines vs. fresh cells. Cell-lines have always played, and continue to play, an important role in drug screening and drug development.

The problem is that cell-lines do not predict for disease- or patient-specific drug effects. If you can kill cancer cell-lines with a given drug, it doesn't tell you much about how the drug will work in real-world, clinical cancer. But you can learn certain things about general drug biology through the study of cell-lines.

As a general rule, studies from established cell-lines (tumor cells that are cultured and manipulated so that they continue to divide) have proved worthless as models to predict the activity of drugs in cancer. They are more misleading than helpful. An established cell-line is not reflective of the behavior of fresh tumor samples (live samples derived from tumors) in primary culture, much less in the patient.

Established cell-lines have been a huge disappointment over the decades with respect to their ability to correctly model the disease-specific activity of new drugs. What works in cell-lines does not often translate into human beings. You get different results when you test passaged cells compared with primary, fresh tumor.

Research on cell-lines is cheap compared to clinical trials on humans. But one gets more accurate information when using intact RNA isolated from "fresh" tissue than when using the degraded RNA present in paraffin-embedded tissue.

My question would be, do you want to utilize your tissue specimen for “drug selection” against “your” individual cancer cells or for mutation identification, to see if you are “potentially” susceptible to a certain mechanism of attack?

Cell Lines vs Fresh Cells

http://cancerfocus.org/forum/showthread.php?t=3702
08-06-2013, 01:37 PM   #2   gdpawel (Senior Member)
Is Genomic Sequencing Ready for Prime Time in Drug Selection?

Next-generation sequencing (NGS) technologies have come a long way since 1977 when Frederick Sanger developed chain-termination sequencing, but are they ready for prime time in drug selection?

Researchers have realized that cancer biology is driven by signaling pathways. Cells speak to each other, and the messages they send are interpreted via intracellular pathways known as signal transduction. Many of these pathways are activated or deactivated by phosphorylation of select cellular proteins.

Sequencing the genome of cancer cells is explicitly based upon the assumption that the pathways (the networks of genes) of tumor cells can be known in sufficient detail to control cancer. Each cancer cell can be different, and the cancer cells that are present change and evolve with time.

Although the theory behind inhibitor-targeted therapy is appealing, the reality is more complex. Cancer cells often have many mutations in many different pathways, so even if one route is shut down by a targeted treatment, the cancer cell may be able to use other routes.

In other words, cancer cells have "backup systems" that allow them to survive. The result is that the drug does not affect the tumor as expected. The cancer state is typically characterized by a signaling process that is unregulated and in a continuous state of activation.

In chemotherapy selection, genotype analysis (genomic profiling) examines a single process within the cell or a relatively small number of processes. All a gene mutation study can tell is whether or not the cells are potentially susceptible to a mechanism of attack. The aim is to tell if there is a theoretical predisposition to drug response.

It doesn't tell you how effective a given drug (or combination), or any other drug aimed at the same target, will be in the individual. There are many pathways to altered cellular function. Phenotype analysis (functional profiling) measures the end result of pathway activation or deactivation to predict whether patients will actually respond (clinical responders).

It measures what happens at the end, rather than the status of any individual pathway, by assessing the activity of a drug (or combination) on the combined effect of all cellular processes, using metabolic and morphologic endpoints at the cell-population level, in effect measuring the interaction of the entire genome.
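
To make the distinction concrete, here is a purely illustrative sketch (hypothetical mutation names, an invented kill threshold, not any actual assay's logic): a genotype call only checks whether a potentially actionable mutation is present, while a phenotype call reports whether the drug actually affected the cells.

[code]
# Purely illustrative; mutation names, the drug pairing, and the 0.5
# threshold are invented for the example, not taken from any real assay.

# Genotype (genomic profiling): is a mutation the drug is aimed at present?
def genotype_call(tumor_mutations, drug_target_mutations):
    """True if the cells are *potentially* susceptible to the drug's mechanism."""
    return bool(set(tumor_mutations) & set(drug_target_mutations))

# Phenotype (functional profiling): did the drug actually kill the cells?
def phenotype_call(cell_death_fraction, threshold=0.5):
    """True if the measured end result (cell kill) crosses a response threshold."""
    return cell_death_fraction >= threshold

tumor = ["PIK3CA_H1047R", "TP53_R175H"]        # hypothetical mutation list
targets_of_drug_x = ["PIK3CA_H1047R"]          # mutations drug X is aimed at

print(genotype_call(tumor, targets_of_drug_x))    # True: theoretical predisposition
print(phenotype_call(cell_death_fraction=0.2))    # False: cells did not actually respond
[/code]

The only point of the sketch is that the first function can return True while the second returns False: a matching mutation is not the same thing as a measured response.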

Should oncologists begin using deep genome sequencing in their clinical practice? At the annual meeting of the European Society for Medical Oncology, two key opinion leaders battled it out over this topic in a debate.
08-06-2013, 01:38 PM   #3   gdpawel (Senior Member)
Debating Next-Generation Deep Sequencing

At ESMO, experts assess the clinical use of genome sequencing

Vienna—Should oncologists begin using deep genome sequencing in their clinical practice? Next-generation sequencing (NGS) technologies have come a long way since 1977 when Frederick Sanger developed chain-termination sequencing, but are they ready for prime time? At the annual meeting of the European Society for Medical Oncology, two key opinion leaders battled it out over this topic in a debate.

The Argument for Deep Genome Sequencing

Arguing the pro position, Fabrice Andre, MD, PhD, of the Institut Gustave Roussy in Villejuif, France, said that embracing deep sequencing in daily clinical practice is not only the right thing to do, it is a necessity. The number of genetic biomarkers known to influence patient outcomes and care has risen dramatically in recent years and is only expected to grow, he said.

“The current system is not sustainable for hospitals and academic centers,” said Dr. Andre. “It’s not possible for [them] to run more than 10 bioassays per patient. We need to move to multiplex technology.”

For breast cancer, he said, clinicians can run tests for ER/HER2, TOP2A, FGFR1, IGF1R, EGFR, PAK1, BRCA1, CYP2D6, PTEN and PIK3CA, among others. With whole genome sequencing, “you can assess all the genes that you want,” said Dr. Andre. “When you do one test for each biomarker, each biomarker has a cost. Keep in mind that three FISH [fluorescence in situ hybridization] tests cost about the same as one whole genome CGH [comparative genomic hybridization] array.”

Whole genome sequencing also offers a number of other potential advantages. High throughput approaches can identify a large number of rare targetable gene alterations. This is increasingly important as researchers find genetic alterations that exist in 1% or 2% of patients. The technology also can capture minority clones that may be hard to identify when there is a low percentage of tumor cells in a sample. The next-generation sequencers have been proven to be accurate and they do not need large samples of tissue. Dr. Andre pointed out that some protein-based assays, which are used because they are less expensive than FISH, are not reliable. One study found that the immunohistochemistry test for the HER2 protein was accurate only 81.6% of the time (J Clin Oncol 2006;24:3032-3038, PMID: 16809727).
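
A rough way to see why depth matters for those minority clones is a simple binomial back-of-the-envelope calculation (the allele fraction, read depths, and detection rule below are assumptions for illustration, and sequencing error is ignored):

[code]
# Probability of seeing a low-frequency variant at least `min_reads` times,
# under a simple binomial model. Numbers are assumptions for illustration only.
from math import comb

def p_detect(depth, allele_fraction, min_reads=3):
    p_missed = sum(comb(depth, k) * allele_fraction**k * (1 - allele_fraction)**(depth - k)
                   for k in range(min_reads))
    return 1 - p_missed

# A clone at 5% allele fraction (e.g. few tumor cells in the sample):
print(round(p_detect(depth=50,  allele_fraction=0.05), 2))   # ~0.46 at shallow coverage
print(round(p_detect(depth=500, allele_fraction=0.05), 2))   # ~1.0 at deep coverage
[/code]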

The “robust” deep sequencing technology is already being used for patient care at academic centers. One such example is the MOSCATO trial, which began in fall 2011. This trial enrolled 120 patients with difficult-to-treat cancers and is using whole genome sequencing to identify potential therapeutic targets. Once a target has been identified, patients receive targeted therapy in a clinical trial if one is available. The turnaround time for sequencing is 15 days and the total cost is 1,500 euros, or roughly $2,000 per patient.

The cost of the technology is expected to decrease dramatically in the next few years. By the end of 2012, Oxford Nanopore Technologies is expected to launch a sequencer the size of a USB drive that will offer whole genome sequencing in 15 minutes for less than $1,000. Dr. Andre argued that deep sequencing will be less expensive than a multiplicity of tests.

He pointed to a case study recently described in the Journal of Thoracic Oncology as an example of a success story (2012;7:e14-e16). In the case report, a 43-year-old never smoker with lung cancer had tested negative for EML4-ALK on the approved companion genetic test for crizotinib (Xalkori, Pfizer). Sensing that an oncogenic genetic driver was spurring the patient’s cancer, clinicians ordered deep sequencing and identified a novel ALK fusion. The patient was treated with crizotinib and was recently reported to have had a complete response.

“In the context of prospective cohorts, but not clinical trials, I think we need to deliver NGS in order to detect a high number of rare, relevant genomic alterations and then treatment can be done in the context of Phase I trials or drug access programs,” said Dr. Andre.

The Argument Against Deep Genome Sequencing

According to Kenneth O’Byrne, MD, a consultant medical oncologist at St. James Hospital and Trinity College Dublin, Ireland, Dr. Andre is jumping the gun. “He makes the fundamental error that all people who are enthusiastic about new technologies always make and that is the non-application of evidence-based medicine,” Dr. O’Byrne said. “Deep sequencing is a fantastic tool, but it is a research toy and an expensive toy at the moment. For day-to-day practical medicine, we have to go by evidence base.”

Dr. O’Byrne cast doubt on Dr. Andre’s success story example. “They treated the patient with crizotinib and made the false conclusion that the ALK rearrangement they detected was responsible for the response. Do we know if that patient expressed MET? Is there any other reason [he] may have responded to crizotinib?” Dr. O’Byrne said.

He agreed that the cost of the sequencing technology was decreasing, but argued that analysis would remain expensive. He argued that the clinical benefit of identifying genetic drivers is still uncertain.

“I would argue that in lung cancer, and indeed in almost every other tumor, there are only a few proven genetic alterations that can be identified that actually affect the way we treat our patients in clinic,” Dr. O’Byrne said. “EGFR [epidermal growth factor receptor] mutations and ALK rearrangements are the only validated predictive biomarkers in NSCLC [non-small cell lung cancer].” He pointed out that these affect only 15% of lung cancer patients, and although there are targeted agents available, the jury is still out on whether the drugs that target these mutations improve survival.

As an example of this, he pointed out that an interim analysis of the PROFILE 1007 trial presented at the ESMO meeting (abstract LBA1) showed that although crizotinib increased progression-free survival by 4.7 months compared with chemotherapy, there was no difference in overall survival. “If you look at all of the EGFR TKI [tyrosine kinase inhibitor] randomized controlled trials versus cytotoxic chemotherapy in EGFR mutation–positive disease, there has yet to be a proven [overall] survival benefit, despite obvious clinical benefits,” Dr. O’Byrne said. Researchers say the lack of overall survival advantage in many of these trials can be blamed on the large numbers of patients who cross over to the experimental therapy. “The argument is crossover, but we don’t know that yet,” he said.

Dr. O’Byrne urged caution, as several years ago, it was thought that tumor angiogenesis inhibitors would be the salvation of lung cancer patients and that did not happen. There was clear evidence that new tumor blood vessels were associated with poor outcome, but when researchers tested a slew of antiangiogenic TKIs in patients with lung cancer, none of them worked. These included apatinib, axitinib, cediranib, motesanib, pazopanib, sorafenib, sunitinib and vandetanib. “There is still some promise that some of these might break through,” Dr. O’Byrne said, pointing to Boehringer Ingelheim’s BIBF1120. “But to date, we’ve spent billions of euros proving that many of these are of no value.

“In my view, and I feel this quite strongly, predictive biomarker tests must undergo validation and quality assurance before they are used routinely in clinical practice,” Dr. O’Byrne said. “Deep DNA sequencing holds huge promise … but it is a research tool, and I do genuinely believe that a lot of clinically irrelevant data is generated that actually confuses the clinician and the patient.”

Clinical Oncology News Issue: December 2012 | Volume: 07:12
03-27-2014, 11:45 PM   #4   gdpawel (Senior Member)
Genome-wide sequencing in cancer: not ready for prime time

(Reuters Health) - Routine genome-wide screening of cancers is likely a long way off, a new paper says.

The technology, known as next-generation sequencing, promises to revolutionize doctors' understanding of cancer and underpins perhaps the biggest paradigm shift taking place in cancer research today: the growing emphasis on a cancer's genetic makeup, rather than its location within the body.

Understanding the genetic makeup of an individual patient's tumor may allow physicians to pick the drug that best targets that specific tumor, as well as recognize when a tumor has developed resistance to the drug through new genetic mutations.

"Next-generation sequencing is especially promising in cancer because in a single test, one can interrogate all clinically relevant cancer genes for all types of genomic alterations, including sequence mutations and chromosomal rearrangements," Dr. Michael Berger, a geneticist at Memorial Sloan-Kettering Cancer Center in New York City, told Reuters Health in an email.

There are several different screening technologies considered "next-generation," but all share the ability to sequence an entire human genome in a matter of days. When applied to cancer, the technology is used to screen the entire genome of cancer cells.

By some measures, this promise is already being realized. For example, last year The Cancer Genome Atlas Research Network used genome-wide screening of breast cancer tumors to demonstrate that there are four main breast cancer types defined by differing genomic and epigenetic mutations. The study showed that individual breast cancers have many genetic differences from each other but that one subgroup of breast cancers, basal-like breast cancer, was similar genetically to serous ovarian cancer.

Cancer cells present unique and complex challenges, the paper's authors note. Because they are genetically so different from normal human tissue, there is not always a 'reference sequence' against which to compare the tumor DNA. There are also frequent chromosome-scale as well as epigenetic changes, and even significant genetic differences among cells within the same tumor, an issue specific to cancer cells known as tumor heterogeneity.

The authors of the new paper, writing online July 25 in the British Journal of Cancer, pointed out that this complexity creates a number of problems that must be solved before next-generation sequencing is a common part of cancer care.

One of the first issues is developing the algorithms that are used to map the genome.

"The computational challenges involved in analyzing and storing clinical (next-generation sequencing) data cannot be overstated," said Dr. Berger, who wasn't involved in the new study. "Better algorithms must be developed to reliably and accurately detect mutations in heterogeneous tumors."

In genome-wide sequencing, a seemingly minuscule misstep in the analysis could have massive consequences. For example, say the authors of the new paper, led by Dr. Danny Ulahannan of the Wellcome Trust Centre for Human Genetics in Oxford, UK, "the sheer quantity of data means that getting 0.01% of the human genome wrong would correspond to 300,000 errors scattered along the three billion base pairs."
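
The arithmetic behind that figure is simple, with the genome rounded to three billion base pairs:

[code]
genome_bases = 3_000_000_000      # roughly 3 billion base pairs in the human genome
error_rate = 0.0001               # 0.01% expressed as a fraction
print(int(genome_bases * error_rate))   # 300000 erroneous positions
[/code]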

Dr. Lynda Chin, the chair of Genomic Medicine and scientific director of the Institute for Applied Cancer Science at MD Anderson Cancer Center, told Reuters Health this is often an overlooked problem.

"One barrier that is often overlooked or underestimated from the clinical side is the technical challenge of generating high-quality (next-generation sequencing) data," Dr. Chin said. "There is a sense that generating (the data) is easy, and it is the analysis that is hard. I would disagree, as I believe that the technology is still unstable, for lack of a better word, not yet turn-key, and no matter how good the analytics-interpretation become, if the data is poor quality, the result will be poor."

And mapping the genome is really only the first step. The next step is figuring out which mutations are relevant to the development of cancer and whether they can be targeted with a drug.

"I agree with the obvious barriers of interpretation. Not just analytically that we need improved algorithms, (but) more importantly, more knowledge and understanding of what each alteration means and how each event impact on clinical decision," Dr. Chin said.

The advances will require a "cultural change" in cancer research, Dr. Chin said, that makes "patient-oriented genomic research a standard, rather than a heroic effort by a researcher."
03-27-2014, 11:47 PM   #5   gdpawel (Senior Member)
First FDA Authorization for Next-Generation Sequencer

Francis S. Collins, M.D., Ph.D., and Margaret A. Hamburg, M.D.
N Engl J Med 2013;369:2369-2371. December 19, 2013. DOI: 10.1056/NEJMp1314561


This year marks 60 years since James Watson and Francis Crick described the structure of DNA and 10 years since the complete sequencing of the human genome. Fittingly, today the Food and Drug Administration (FDA) has granted marketing authorization for the first high-throughput (next-generation) genomic sequencer, Illumina's MiSeqDx, which will allow the development and use of innumerable new genome-based tests.

When a global team of researchers sequenced that first human genome, it took more than a decade and cost hundreds of millions of dollars. Today, because of federal and private investment, sequencing technologies have advanced dramatically, and a human genome can be sequenced in about 24 hours for what is now less than $5,000. This is a rare example of technology development in which faster, cheaper, and better have coincided: as costs have plummeted and capacity has increased, the accuracy of sequencing has substantially improved.

With the FDA's announcement, a platform that took nearly a decade to develop from an initial research project funded by the National Institutes of Health will be brought into use for clinical care. Clinicians can selectively look for an almost unlimited number of genetic changes that may be of medical significance. Access to these data opens the door for the transformation of research, clinical care, and patient engagement.

To see how this technology could be used, consider cancer. Comprehensive analysis of the genome sequence of individual cancers has helped uncover the specific mutations that contribute to the malignant phenotype, identify new targets for therapy, and increase the opportunities for choosing the optimal treatment for each patient. For instance, lung adenocarcinoma can now be divided into subtypes with unique genomic fingerprints associated with different outcomes and different responses to particular therapies. More broadly, recent work from the Cancer Genome Atlas demonstrates that the tissue of origin of a particular cancer may be much less relevant to prognosis and response to therapy than the array of causative mutations.1 As a result, patients diagnosed with a cancer for which there are few therapeutic options may increasingly benefit from drug therapies originally aimed at other cancers that share common driver mutations. The new technology allows us to go from our current approach of targeted searches for specific mutations in individual cancers to widespread use of approaches that survey the entire genome.

A major area of opportunity that has yet to be fully exploited is pharmacogenomics — the use of genomic information to identify the right drug at the right dose for each patient. More than 120 FDA-approved drugs have pharmacogenomics information in their labeling, providing important details about differences in response to the drug and, in some cases, recommending genetic testing before prescribing.2

But the full potential of pharmacogenomics is largely unrealized, because of the logistic challenges in obtaining suitable genomic information in a timely enough fashion to guide prescribing. Placing genomic information in the electronic medical record would facilitate this kind of personalized medicine. If the patient's entire genome were part of his or her medical record, then the complexities of acquiring a DNA sample, shipping it, and performing laboratory work would be replaced by a quick electronic query.
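
As a sketch of what such a quick electronic query might look like, assuming the genome had already been stored as structured variant calls in the chart (the record layout, field names, and gene list here are hypothetical, not any real EMR's interface):

[code]
# Hypothetical sketch of a prescribing-time pharmacogenomic lookup.
# The record format, field names, and gene list are assumptions for illustration.
patient_record = {
    "variants": {"CYP2C9": "*1/*3", "VKORC1": "-1639G>A"},   # stored once at sequencing time
}

PGX_GENES_BY_DRUG = {
    "warfarin": ["CYP2C9", "VKORC1"],   # genes with dosing-relevant variants (illustrative)
}

def pgx_lookup(record, drug):
    """Return any stored genotypes relevant to the drug being prescribed."""
    return {gene: record["variants"][gene]
            for gene in PGX_GENES_BY_DRUG.get(drug, [])
            if gene in record["variants"]}

print(pgx_lookup(patient_record, "warfarin"))
# {'CYP2C9': '*1/*3', 'VKORC1': '-1639G>A'} -> flag for dose-guidance review
[/code]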

Although this scenario holds great promise, the utility of genomic information for drug prescribing must be documented with rigorous evidence. For example, three recently published clinical trials raise questions about the clinical utility of using pharmacogenetic information in the initial dosing of vitamin K antagonists.3

The FDA based its decision to grant marketing authorization for the Illumina instrument platform and reagents on their demonstrated accuracy across numerous genomic segments, spanning 19 human chromosomes. Precision and reproducibility across instruments, users, days, and reagent lots were also demonstrated.

The marketing authorization of a sequencing platform for clinical use will probably expand the incorporation of genetic information into health care. But even the most promising technologies cannot fully realize their potential if the relevant policy, legal, and regulatory issues are not adequately addressed. Already, key policy advances have helped smooth the way and address many of the public's concerns about the potential misuse of genetic information.4 For example, the Health Insurance Portability and Accountability Act of 1996 (HIPAA) and the Genetic Information Nondiscrimination Act (GINA) prohibit health insurers from considering genetic information as a preexisting condition, as material to underwriting, or as the basis for denying coverage. GINA also protects against use of genetic information by employers. These protections do not, however, extend to the disease manifestations of genetic risks. Although genomic information showing a predisposition to cancer would be protected under GINA, other clinical signs or symptoms indicative of cancer are not protected. Provisions of the Affordable Care Act set to go into effect in 2014 go a step further and will preclude consideration of all preexisting conditions, whether genomic or not, in establishing insurance premiums. Current federal laws, however, do not restrict the use of genomic information in life insurance, long-term care insurance, or disability insurance.

The legal landscape for the use of genomics in personalized medicine grew brighter in June of this year when the Supreme Court ruled (in Association for Molecular Pathology v. Myriad Genetics) that isolated naturally occurring DNA cannot be patented. This decision was a breakthrough for access to individual genetic tests but also, even more important, for the integration of genome sequencing into clinical care. Before the Myriad decision, there were substantial concerns that in order to offer whole genome sequencing, clinical laboratories would have to pay royalties to a long list of gene patent holders. The decision has opened the creative doors to an as yet unimaginable set of products that may benefit the public health.

The FDA has also been active in addressing other regulatory issues surrounding personalized medicine.5 Along with authorizing the Illumina technology for marketing, the FDA recognized the need for reference materials and methods that would permit performance assessment. As a result, the FDA collaborated with the National Institute for Standards and Technology (NIST) to develop reference materials consisting of whole human genome DNA, together with the best possible sequence interpretation of such genomes. The first human genome reference materials are expected to be available for public use in the next 12 months.

This marketing authorization of a non–disease-specific platform will allow any lab to test any sequence for any purpose. Thus, putting in place an appropriate risk-based regulatory framework is now critical to ensure the validation and quality of tests (called laboratory-developed tests, or LDTs) developed in-house by clinical laboratories.

The marketing authorization for the first next-generation genome sequencer represents a significant step forward in the ability to generate genomic information that will ultimately improve patient care. Yet it is only one step. There are many challenges ahead before personalized medicine can be considered truly embedded in health care. We need to continue to uncover variants within the genome that can be used to predict disease onset, affect progression, and modulate drug response. New genomic findings need to be validated before they can be integrated into medical decision making. Doctors and other health care professionals will need support in interpreting genomic data and their meaning for individual patients. Patients will want to be able to talk about their genetic information with their doctor. With the right information and support, patients will be able to participate alongside their doctors in making more informed decisions. Reimbursement issues need to be resolved to assure that patients have access to the best tests and that manufacturers have incentives to develop them.

The arrival of next-generation sequencing at this regulatory landmark is only the beginning. We need to work together to ensure that research progresses, that regulatory policies are developed, that patients' rights and needs are addressed, and that clinical use of genomic information is based on rigorous evidence.

References:

1. Kandoth C, McLellan MD, Vandin F, et al. Mutational landscape and significance across 12 major cancer types. Nature 2013;502:333-339.

2. Table of pharmacogenomic biomarkers in drug labeling. Silver Spring, MD: Food and Drug Administration, 2013. http://www.fda.gov/drugs/sciencer.../ucm083378.htm

3. Furie B. Do pharmacogenetics have a role in the dosing of vitamin K antagonists? N Engl J Med 2013;369:2345-2346.

4. Hudson KL. Genomics, health care, and society. N Engl J Med 2011;365:1033-1041.

5. Paving the way for personalized medicine: FDA's role in a new era of medical product development. Silver Spring, MD: Food and Drug Administration, October 2013. http://www.fda.gov/downloads/Science.../UCM372421.pdf

http://www.nejm.org/doi/full/10.1056...561?query=TOC&
03-27-2014, 11:48 PM   #6   gdpawel (Senior Member)
Pharmacogenomics can be defined as the study of how a person’s genetic makeup determines response to a drug. Although any number of labs and techniques can detect mutant genes, this area of pharmacogenomics was ripe for proprietary tests, invented alongside the drug and owned by the drug developer and/or a partner in the diagnostics field.

This business opportunity evolved as more drugs were approved with companion diagnostics. Unfortunately, the introduction of these new drugs has not been accompanied by specific predictive tests allowing for a rational and economical use of the drugs.

Companion diagnostics and their companion therapies are what's being pushed as "personalized medicine" as they enable the identification of likely responders to therapies that work in patients with a specific molecular profile. However, companion diagnostics tend to only answer a targeted drug-specific question and may not address other important clinical decision needs.

These companion diagnostics are being used to predict responsiveness and determine candidacy for a particular therapy, often included in drug labels as either required or recommended testing prior to therapy initiation. I certainly would not want to be "denied" treatment because of gene testing. Gene testing is not a clear predictor of a lack of benefit from particular targeted therapies.

Anyone familiar with cellular biology knows that having the genetic sequence of a known gene (genotype) does not equate to having the disease state (phenotype) represented by that gene. It requires specific cellular triggers and specialized cellular mechanisms to literally translate the code into the workhorses of the cellular world: proteins.
03-27-2014, 11:49 PM   #7   gdpawel (Senior Member)
Scientists challenge the genetic interpretation of biology

A proposal for reformulating the foundations of biology, based on the 2nd law of thermodynamics and in sharp contrast to the prevailing genetic view, is published in the Journal of the Royal Society Interface under the title "Genes without prominence: a reappraisal of the foundations of biology". The authors, Arto Annila, Professor of physics at Helsinki University, and Keith Baverstock, Docent and former professor at the University of Eastern Finland, assert that the prominent emphasis currently given to the gene in biology is based on a flawed interpretation of experimental genetics and should be replaced by more fundamental considerations of how the cell utilises energy. There are far-reaching implications, both in research and for the current strategy in many countries to develop personalised medicine based on genome-wide sequencing.

Is it in your genes?

By "it" we mean intelligence, sexual orientation, increased risk of cancer, stroke or heart attack, criminal behaviour, political preference, religious beliefs, etcetera. Researchers have implicated genes in influencing, wholly or partly, all these aspects of our lives. Yet genes cannot cause any of these features, although geneticists have found associations between specific genes and all of them; many of those associations are entirely spurious and a few are fortuitous.

How can we be so sure?

When a gene, a string of bases on the DNA molecule, is deployed, it is first transcribed and then translated into a peptide - a string of amino acids. To give rise to biological properties it needs to "fold" into a protein.

This process consumes energy and is therefore governed by the 2nd law, but also by the environment in which the folding takes place. These two factors mean that there is no causal relationship between the original gene coding sequence and the biological activity of the protein.

Is there any empirical evidence to support this?

Yes. A Nordic study of twins conducted in 2000 found no evidence that cancer is a "genetic" disease; that is, it found no sign that genes play a role in the causation of cancer. A wider international study involving 50,000 identical twin pairs, published in 2012, showed that this conclusion applies to other common diseases as well. Since the sequencing of the human genome was completed in 2001, it has not proved possible to relate abnormal gene sequences to common diseases, giving rise to the problem of the "missing heritability".

What is the essence of the reformulation?

At the most fundamental level, organisms are energy-consuming systems, and the appropriate foundation in physics is that of complex dissipative systems. As energy flows into, out of, and within the complex molecular system called the cell, fundamental physical considerations, dictated by the 2nd law of thermodynamics, demand that these flows, called actions, be maximally efficient (follow the path of least resistance) in space and time. Energy exchanges can give rise to new emergent properties that modify the actions and give rise to further emergent properties, and so on. The result is evolution from simpler to more complex and diverse organisms in both form and function, without the need to invoke genes. This model is supported by earlier computer simulations of a virtual ecosystem by Mauno Rönkkö of the University of Eastern Finland.

What implications does this have in practice?

There are many, but two are urgent.

1. To assume that genes are unavoidable influences on our health and behaviour will distract attention from the real causes of disease, many of which arise from our environment;

2. The current strategy of basing healthcare on genome-wide sequencing, so-called "personalised healthcare", will prove costly and ineffective.

What is personalised health care?

This is the idea that it will be possible to predict at birth, by determining the total DNA sequence (genome-wide sequence), health outcomes in the future and take preventive measures. Most European countries have research programmes in this and in the UK a pilot study with 100,000 participants is underway.

Reference: University of Eastern Finland

Citation: "Scientists challenge the genetic interpretation of biology." Medical News Today. MediLexicon, Intl., 21 Feb. 2014.