How much can you help yourself by getting your genome sequenced? A lot, a little, not at all?

Scenario 1 (big help): You discover you have a greatly elevated risk of Disease X. You do various things to reduce that risk, and they actually reduce it.

Scenario 2 (a little help): You discover you have a greatly elevated risk of Rare Disease X. You do various things to reduce that risk, but they don’t help. At least, when Disease X starts, you will be less upset.

Scenario 3 (no help): You discover you have a greatly elevated risk of a common, easily-noticed disease (such as obesity). You already watched your weight; this changes nothing.

Scenario 4 (harm): You discover you have a greatly elevated risk of Scary Disease X (e.g., bipolar disorder). It is depressing news. Later studies show that the gene/disease association was a mistake. (Many gene/disease associations have failed to replicate.)
A recent Wired article tries to answer this question for one person: Raymond McCauley, a bioinformatics scientist who had his genome sequenced four years ago and learned he was “four or five times more likely than most people to develop age-related macular degeneration (AMD)”. The article says “of all the ailments described in the 23andme profile, AMD has one of the strongest genetic associations”.

If I found this in my genetic profile, I would want to know the confidence interval of the increased risk. Is it a factor of 4.5 plus or minus 1? Or 4.5 plus or minus 8? This isn’t easy to figure out. In addition to the question of variability, there can easily be bias (= the estimate is too high). Let’s say I do 100 gene/disease association studies and then scan them to pick the one with the strongest gene/disease association. It should be obvious that this particular association is likely to be overstated and, depending on the details, could plausibly be pure chance (i.e., the true association is zero).

I have been unable to find out how replicable the gene/AMD association is. According to Wikipedia, “the lifetime risk of developing late-stage macular degeneration is 50% for people that have a relative with macular degeneration, versus 12% for people that do not have relatives with macular degeneration.” (Until it was eliminated via better diet, pellagra also ran in families.) The Wired article does not say whether any of McCauley’s relatives have/had AMD — a huge omission, given the uncertainty of gene/disease associations.
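The pick-the-strongest-of-100-studies effect is easy to see in a small simulation. This is an illustrative sketch only, not a model of any actual AMD study: the study count, the true effect (here zero), and the noise level are all assumed for the example.

```python
import math
import random

random.seed(1)  # fixed seed so the example is reproducible

def simulate_max_estimate(n_studies=100, true_log_rr=0.0, se=0.5):
    """Run n_studies simulated association studies, each estimating a
    log relative risk whose true value is true_log_rr with Gaussian
    noise se, and return the largest relative risk observed."""
    estimates = [random.gauss(true_log_rr, se) for _ in range(n_studies)]
    return math.exp(max(estimates))

# Every true association here is zero (relative risk exactly 1.0),
# yet the strongest of 100 noisy estimates looks like a real effect.
best = simulate_max_estimate()
print(f"strongest observed relative risk: {best:.1f}")
```

Run repeatedly with different seeds, the “strongest” association lands around 3 to 4 — purely from noise. The same selection effect inflates a reported factor-of-4.5 risk if it was singled out for being the strongest.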
It wasn’t obvious what McCauley should do, according to the article:
McCauley read that there were a few preventative measures he could take to reduce the chances of AMD one day rendering him blind: don’t smoke and avoid ultraviolet light, for instance. Also, it seemed, he could try taking a special combination of vitamins, including B12 and lutein. But when he consulted the research, he could find little evidence to support the effectiveness of the regime, based on his genotype.
The article says nothing about quitting smoking, but he does wear glasses that reduce ultraviolet light and takes certain vitamins. It is very hard for him to determine whether these measures help.
I have documented in other posts that brain function is sensitive to omega-3 intake and that (probably) most people don’t get enough. Of course, just as it is foolish to smoke (a lot) regardless of whether you have a genetic risk of AMD, it is foolish not to optimize one’s omega-3 intake, whether or not you have a genetic risk of AMD. In other words: everyone should optimize their omega-3 intake. If the 23andme results cause McCauley to do something wise like this that he would not otherwise have done, they have helped him.
The omega-3 study appeared after the Wired article, so I don’t know how McCauley reacted to it. A puzzle about the story is that it isn’t even clear that the gene/AMD associations are true. Consider McCauley’s older relatives: parents, grandparents. Did/do any of them have AMD? If not, it is more plausible that all of them were at 12% risk of the disease than at 50% risk. Yet at least some of them must carry the risk-bearing genes, so 23andme would presumably assign them the same increased risk as McCauley. In that case it becomes more plausible that something is wrong with the 23andme risk estimate. If some of McCauley’s older relatives do have AMD, it is not clear why the 23andme results would make much difference: he should already have known he was at increased risk of AMD.
The upshot is that in this particular case, I cannot even rule out Scenario 4 (does harm). All four scenarios strike me as plausible. Based on this article, we are a long way from learning the value of personal genomics.
Previously I used the example of Aaron Blaisdell to make the possibly counter-intuitive point that if you have a genetic disease, something is wrong with your environment. Well, I do not have any obvious genetic disease. But I discovered, via self-experimentation, that my environment was terrible — meaning it could be improved in all sorts of ways: stop eating breakfast, drink flaxseed oil, eat butter, look at faces in the morning, take Vitamin D in the morning, and so on, not to mention eat fermented foods (which I figured out via psychology, not self-experimentation). My findings about what is optimal are so different from the way anyone now lives (except people who read this blog) that I believe everyone’s environment can be vastly improved. If so, the value of discovering you have a genetically elevated risk of this or that is not obvious — you should already be trying to improve your environment. At least that is what my data has taught me. On the other hand, maybe genetic info (even wrong genetic info!) will give you a kick in the pants. Maybe that has happened with McCauley.