By Diane Allingham-Hawkins, PhD, FCCMG, FACMG, Senior Director, Genetic Test Evaluation Program
It is an exciting time to be working in genetics. The Human Genome Project took 15 years and $3 billion to generate the first sequence of the human genome (a genome is the complete set of genetic information for an organism), but rapid advances in technology have reduced the time required to a few days and the cost to less than $1,000. This means that having your genome sequenced is now a very real option. The problem? Our knowledge and understanding of the medical implications of the vast majority of sequence variants (any change from the "normal" or "expected" DNA sequence at any given place in the genome) have not kept pace with our technical ability to detect them. Consequently, tested individuals will often be faced with information of uncertain clinical significance after having their genomes sequenced.
Take the case of Dr. Robert Green, recently reported in The Boston Globe. Dr. Green is a physician-scientist at Brigham and Women's Hospital and Harvard Medical School and a well-known proponent of genetic testing. When Dr. Green had his genome sequenced, he was found to carry a rare variant in a gene associated with Treacher Collins syndrome, a disorder characterized by facial deformities. Dr. Green does not have Treacher Collins syndrome, so the result was surprising and its significance unclear. Although Dr. Green is not affected himself, could a son or daughter who inherits the variant be affected? Are other members of his family at risk? Or is this a meaningless variant, as Dr. Green himself believes, according to the article? The truth is we simply don't know the answer.
Historically, gene discovery has been done by taking a group of patients with common symptoms and looking for shared genetic variants that might explain their disease. If something interesting was found, a group of unaffected people would be tested to see whether the association was real (i.e., the variant was found in affected people but not unaffected people) or due to chance (i.e., found in both affected and unaffected people and therefore not likely related). Now, with more and more "normal" individuals having their genomes sequenced, we are discovering that there is far more natural variation in human genes than we ever imagined, and figuring out which variants are important and which are not is tricky at best. I heard a speaker at the recent American College of Medical Genetics and Genomics conference state that he expects every base (or letter) of human DNA, all 3 billion of them, will eventually be found to be variable. The fact is there is likely no one "normal" human DNA sequence, and no standard reference against which to compare, and therefore no foolproof way to decide which variants are important and which are normal variation.
So, how do we separate the wheat from the chaff? Through evidence. The association between a DNA variant and disease needs to be thoroughly researched and proven before its clinical significance can be predicted accurately. Geneticists are working toward this goal by developing curated databases that will serve as reliable resources for interpreting DNA variants. Although promising, this is an enormous task that must be done systematically and collaboratively with researchers around the world, and it will likely be many years before we truly understand the implications of all, or even most, DNA variants. On top of that, our knowledge is likely to change over time, so maintaining and updating these resources once they are created will be critical.
In the meantime, we must proceed with caution. The benefits, risks, and potential harms of widespread DNA sequencing, whether large gene panels or whole-genome or exome sequencing, must be weighed before these tests are used in clinical care. The reality is that they are still largely research tools: the evidence is still accumulating, and it may be many years before any true clinical benefit can be proven.