Brain tracking — frequent measurement of how well your brain is working — will become common, I believe, because brain function is important and because the brain is more sensitive to the environment (especially food) than the rest of the body. You will find it easier to decide what to eat if you measure your brain than if you measure other parts of your body. For example, I have used it to decide how much flaxseed and butter to eat. I have used R and the methodological wisdom of cognitive psychologists to make brain-tracking tests. Alex Chernavsky, who lives in upstate New York, recently tried the most recent version:
In August, Seth solicited readers to help him test a new brain-tracking program. I said I was interested. I had a number of reasons for volunteering:
- My job involves working a lot with computers, so I thought I had a decent shot at ferreting out any bugs or usability issues.
- I have been tracking my weight daily for over eleven years, so I was confident that I would have enough motivation to do the test on a regular basis.
- I have a long-standing interest in neuroscience, so I was eager to help advance the field, even if in a very small way.
- I’m in my late 40s, and I’ve noticed a distinct increase in my forgetfulness. There are probably other, less noticeable declines in my cognitive function as well. Thus I have an interest in finding ways to boost the performance of my brain. Hacking brain function is obviously much easier if you can assay it via a quick, reliable proxy (such as reaction time).
The program itself was relatively easy to set up. The code is written in R, a free, open-source scripting language, so you have to install R on your Windows computer in order to run the program. After downloading the script (which is contained within an R workspace), you edit a function to specify the Windows folder that contains the workspace file. After that, you’re ready to go.
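Seth's actual setup function isn't shown here, but the one edit described above might look something like this (the path and names are hypothetical, for illustration only):

```r
# Hypothetical sketch of the setup step: edit a function so it returns
# the Windows folder holding the workspace file, then load the workspace.
workspace.folder <- function() "C:/Users/alex/BrainTest"  # edit to your folder

setwd(workspace.folder())
load("brain-test.RData")   # the downloaded R workspace with Seth's functions
```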
The three-month pilot study did not involve testing any hypotheses with regard to the effectiveness of interventions (for example, measuring reaction times before and after flaxseed oil). My task was simply to perform the test once or twice a day.
Taking the test involves hitting a number key (2 through 8, inclusive) to match a random target number that is displayed on the screen. The program measures the latency of your response. If you hit the wrong key, the program forces you to repeat the same trial until you press the right key. Reaction times from these “correction trials” are not used in any subsequent data analysis. A session consists of 32 individual trials and takes about four minutes to complete.
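The trial loop described above might be sketched in R like this. This is not Seth's actual code: keypresses and their timing are simulated (with invented names) so the sketch is self-contained, whereas a real session would read and time actual keystrokes.

```r
# Simulated keypress: correct ~90% of the time, otherwise a wrong digit.
simulate_keypress <- function(target) {
  if (runif(1) < 0.9) target else sample(setdiff(2:8, target), 1)
}

run_session <- function(n_trials = 32) {
  results <- data.frame(target = integer(0), rt_ms = numeric(0),
                        correction = logical(0))
  for (i in seq_len(n_trials)) {
    target <- sample(2:8, 1)   # random target digit, 2 through 8 inclusive
    first  <- TRUE
    repeat {
      key <- simulate_keypress(target)
      rt  <- rlnorm(1, meanlog = log(600), sdlog = 0.2)  # simulated latency, ms
      # responses given after a wrong key are flagged as "correction trials"
      results <- rbind(results,
                       data.frame(target = target, rt_ms = rt,
                                  correction = !first))
      first <- FALSE
      if (key == target) break   # repeat the trial until the right key
    }
  }
  results
}

# Correction trials are excluded from any subsequent analysis:
session_mean <- function(results) mean(results$rt_ms[!results$correction])
```

Each of the 32 trials contributes exactly one scored response; any extra rows are flagged corrections and dropped before averaging.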
I performed the test daily for three months, although I did miss two days. The test stopped short of being fun, but it was certainly not onerous. The biggest hassle was having to wait for my laptop to boot into Windows. If I had to do the pilot study over again, I would install R on both my home and my work desktop computers, so I could perform the test more easily (perhaps as a way to take a short break from whatever other task I happened to be working on).
The original plan was for me to email the R workspace to Seth once a week or so. However, I suggested to Seth that we could improve efficiency by using a shared Dropbox folder. He agreed, and that is the method we adopted. Using this system, Seth had ongoing access to the latest data, and he could also easily make any bug fixes or other edits that would take effect the next time I ran the script.
I did identify one bug in the script. After each trial, the script briefly displays some feedback: your reaction time (in milliseconds) for that trial, your cumulative average for that session, and a percentile figure that compares your latest speed with past trials for that same target key. I noticed that the percentile scores didn’t seem to make sense for some of the keys. Seth examined his code and agreed that this was indeed a bug. He made some adjustments and the bug was fixed.
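The per-key percentile feedback might be computed along these lines (a sketch with invented names, not Seth's actual code). Note that an indexing slip here — for example, looking up history under the wrong key — would produce exactly the symptom Alex describes: percentiles that make no sense for some keys.

```r
# Percentile of the latest response among past non-correction trials
# for the same target key: the share of past trials that were slower.
percentile_for_key <- function(history, key, latest_rt) {
  past <- history$rt_ms[history$target == key & !history$correction]
  if (length(past) == 0) return(NA)
  round(100 * mean(past > latest_rt))  # % of past trials slower than this one
}

# Toy history: three past trials for key 3, one for key 5.
history <- data.frame(target = c(3, 3, 3, 5),
                      rt_ms  = c(500, 600, 700, 400),
                      correction = FALSE)
percentile_for_key(history, 3, 550)  # beats 2 of 3 past trials: 67
```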
I found that over time, as expected, my scores improved substantially. They seemed to plateau after six weeks. However, my accuracy suffered. During the third month of the pilot study, I made a conscious effort to reduce my error rate. I had some success, but I also found myself frustrated by my inability to reduce the errors as much as I would have liked. Making errors, despite my best efforts, was the only vexing part of taking the test.