How neuroscientists learned to stop worrying and love the bomb

The definitive version of this post was originally published on February 4, 2015 on the PLOS Neuroscience Community website, where I serve as an editor.


Contemporary research in neuroscience is constantly adding to and refining our knowledge of how the brain works. One of the tenets of that knowledge during most of the 20th century, namely that our brains are supplied from birth with a finite number of neurons that only dwindles with age and disease, was finally refuted in the 1990s, thanks to the paradigm-shifting work of Fred Gage and colleagues. However, quantifying how many neurons are born throughout life in the different regions of the human brain remained impossible. That is, until Jonas Frisén, of the Karolinska Institute in Stockholm, Sweden, and his colleagues had a wildly brilliant idea: using the spike in the concentration of radioactive carbon in the Earth's atmosphere caused by above-ground nuclear testing between 1945 and 1963. In an essay recently published in PLOS Biology, Frisén and his colleague Aurélie Ernst review what this highly original approach has taught us.

Dating the birth of neurons in the human brain

As a consequence of nuclear explosions in the mid-20th century, the atmospheric concentration of the radioactive carbon isotope C-14 increased massively, before decreasing rapidly after most above-ground nuclear testing was banned in 1963. Proliferating cells (including neuronal precursors) incorporate carbon atoms into their DNA, and as this carbon ultimately comes from our environment, the amount of C-14 incorporated into a new neuron depends on the atmospheric concentration of C-14 at the time of its birth. The rapid changes in that concentration caused by nuclear testing thus provide a time scale of sorts that allows the birth of a new neuron to be dated quite precisely (the principle has been beautifully illustrated in a Perspective published by Science).
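To make the principle concrete, here is a minimal sketch of the lookup involved: given the C-14 level measured in a cell population's genomic DNA, find the year(s) when the atmosphere matched it. The curve values below are rough, hypothetical placeholders, not actual atmospheric measurements.

```python
# Minimal sketch of bomb-pulse birth dating (illustrative values only).
# Real analyses use measured atmospheric Delta-14C records; the few
# points below are coarse placeholders, not actual data.
import numpy as np

# (year, atmospheric Delta-14C in per mil) -- hypothetical bomb curve
BOMB_CURVE = [
    (1950, 0), (1955, 20), (1960, 250), (1964, 800),   # rapid bomb-test rise
    (1970, 550), (1980, 300), (1990, 150), (2000, 90), (2010, 40),
]

def birth_years(measured_d14c, tolerance=25.0):
    """Return candidate birth years whose atmospheric Delta-14C matches
    the value measured in a cell's genomic DNA.

    Because the curve rises then falls, a single measurement can match
    years on both flanks of the pulse."""
    years = np.arange(1950, 2011)
    curve = np.interp(years, [y for y, _ in BOMB_CURVE],
                      [v for _, v in BOMB_CURVE])
    return [int(y) for y, v in zip(years, curve)
            if abs(v - measured_d14c) <= tolerance]

print(birth_years(400))  # matches on the rising and on the falling flank
```

Note the inherent ambiguity: because the bomb pulse rises and then falls, a single measurement can match a year on each flank, one reason real analyses combine the measurement with other information such as the subject's date of birth.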

New neurons are known to appear throughout adult life in several brain structures, such as the olfactory bulb (OB) of rodents; the dentate gyrus (DG), which is part of the hippocampus; and the striatum, where adult neurogenesis is most prominent in humans. From Ernst and Frisén, PLOS Biol 2015.

In their essay, which is well worth reading, Ernst and Frisén focus for the most part on what this new technique, along with others, has added to our understanding of how the human brain renews some of its neuronal populations throughout life. They highlight in particular the commonalities and differences in the dynamics of neuronal renewal between humans and other mammals.

An interview with Jonas Frisen

Dr. Jonas Frisen

Dr. Frisen kindly agreed to answer a few questions, starting with how that brilliant idea came to him.

How did the idea come to you that above-ground nuclear testing during the Cold War would create an “atomic clock” of sorts that would allow dating the birth of cells in the human brain?

The idea came out of the frustration of not being able to study cell turnover in humans. In archeology, specimens are retrospectively birth dated by carbon dating, which builds on the radioactive decay of C-14. I started reading up about this, thinking that maybe we could carbon date cells in the same way. This proved to be a very naive thought, as the radioactive half-life of C-14 is almost 6,000 years, which provides a miserable resolution for the life span of cells. When I read a little more about C-14, I came across the huge increase created by the nuclear bomb tests, followed by a steep drop as C-14 diffused from the atmosphere. When I saw that, I knew that we had to try the strategy. So, it is a pure coincidence that we use the same isotope, C-14, for birth dating cells as is used in archeology. Whereas archeology takes advantage of the radioactive decay, we take advantage of the varying concentrations created by the Cold War.
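As a quick editorial aside, it is easy to see why decay-based carbon dating fails at this time scale: with a half-life of roughly 5,730 years, essentially no C-14 decays over a human lifetime. A back-of-the-envelope calculation:

```python
# Why the radioactive decay of C-14 gives "miserable resolution" for
# cells: over a human lifetime, almost none of it decays.
HALF_LIFE_YEARS = 5730  # conventional C-14 half-life

def fraction_remaining(years):
    return 0.5 ** (years / HALF_LIFE_YEARS)

for t in (10, 50, 100):
    print(f"after {t:>3} years: {fraction_remaining(t):.4f} of C-14 remains")
# after  10 years: 0.9988
# after  50 years: 0.9940
# after 100 years: 0.9880
```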

The Cold War has created a “time window of opportunity” for the retrospective birth dating of neurons. How long is this window and when will it close? Could other events (e.g. the nuclear catastrophe in Chernobyl, or natural accidents such as volcanic eruptions) give rise to similar opportunities?

The window is closing gradually, and it is not possible to say with any precision when it will be closed. However, tissue collected in biobanks now or earlier will be available for analysis for a long time to come. I am afraid that we are not aware of any other source of a similar pulse-chase-like situation for a marker that integrates into DNA.

Birth dating of neurons is currently only possible retrospectively, i.e. after death. Do you foresee any technical developments that would allow measuring neuronal birth in vivo?

That would be extremely valuable. I do not see how to do it today, but I wouldn’t be surprised if it comes.

Adult neurogenesis in humans is now a given. Will we one day be able to influence this process for therapeutic purposes, or even to “improve” the functions of the healthy brain?

I am optimistic that there will be therapeutic strategies in the future that lead to some replacement of neurons lost in disease.

References

Ernst, A., & Frisén, J. (2015). Adult neurogenesis in humans: Common and unique traits in mammals. PLOS Biology, 13(1): e1002045. DOI: 10.1371/journal.pbio.1002045

Brain games not quite ready for prime time

The definitive version of this post was originally published on December 23, 2014 on the PLOS Neuroscience Community website, where I serve as an editor.


Ever since video games became widely available, they have marked a strong generational divide: most of today's grandparents probably never played video games, whereas most of their grandchildren play them daily. Now that recent scientific discoveries have suggested that video games might influence brain function for the better, many companies have started selling "brain games", or computerized cognitive training programs, creating a market worth close to $1 billion per year.

What's more, some of these companies are seeking the Food and Drug Administration's approval to use these computer programs in healthy older adults to compensate for the effects of aging on cognition. But there may be quite a long way to go before the sight of an elderly person bashing their handheld console in the clinic waiting room becomes routine: the neuroscience of video games and their cognitive impact is still in its infancy, and academic researchers in the field warn that the promises made by some companies amount to quackery more than solid science. A new meta-analysis, recently published in PLOS Medicine, reviews the field and points out which types of brain games might work, and which might not.

A meta-analysis is a type of medical research article in which scientists aggregate the results of individual studies to assess whether a particular intervention has consistent effects across studies, and to determine how large those effects are. This meta-analysis focused on the effects of computerized cognitive training in healthy older adults (roughly 60 years and older).
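For readers unfamiliar with the mechanics, here is a minimal fixed-effect pooling sketch of the kind of aggregation a meta-analysis performs. The numbers are toy values for illustration, not data from the paper, and the actual study uses more sophisticated random-effects models:

```python
# Minimal fixed-effect meta-analysis: combine per-study effect sizes
# (e.g. standardized mean differences) by inverse-variance weighting.
# Toy numbers below are illustrative, not taken from the actual paper.
import math

studies = [  # (effect size g, standard error)
    (0.30, 0.15), (0.10, 0.10), (0.25, 0.20), (-0.05, 0.12),
]

weights = [1 / se**2 for _, se in studies]          # precise studies weigh more
pooled = sum(w * g for (g, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```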

Better at What?

Studies were included if the participants were tested on cognitive tests both before and after the training. Importantly, those tests needed to be different from the ones trained in the brain games: we know that playing Sudoku makes you better at playing Sudoku, but the real question is whether it makes you better at something else, too. The type of computerized cognitive training varied widely across studies, from simply having participants play video games (Tetris, Rise of Nations and Medal of Honor were on the list) to custom-developed programs specifically designed to train one or several capacities such as working memory, attention, processing speed, verbal memory, visuospatial skills or executive functions.

Altogether, the authors identified 52 studies of sufficient quality to be included in the meta-analysis. Overall, they found that computerized cognitive training was associated with a significant but very small improvement in cognitive performance. Most importantly, the authors offer a few pointers for future studies.

  • First, because the improvement in performance brought on by computerized cognitive training is expected to be small, studies should be sufficiently powered, i.e. have enough participants; about 90 people would be the minimum (a rough power-calculation sketch follows this list).
  • Also, group-based training had a positive effect on performance, whereas at-home training did not.
  • Perhaps surprisingly, training 1 to 3 times per week proved effective, whereas more intensive training schedules did not, suggesting that the negative consequences of fatigue might offset the larger amount of time dedicated to practice.
  • On the other hand, studies in which each practice session was shorter than 30 minutes found no effect.
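As promised above, here is one way such a sample-size floor can fall out of a standard power calculation. The assumption of a pre-post (paired) design detecting a small effect of d = 0.3 is mine, made for illustration; the paper does not spell out this computation:

```python
# One way a "~90 participants" floor can arise: powering a pre-post
# (paired) comparison to detect a small effect. The d = 0.3 assumption
# is mine for illustration, not taken from the paper.
from statsmodels.stats.power import TTestPower

n = TTestPower().solve_power(effect_size=0.3, power=0.8, alpha=0.05,
                             alternative='two-sided')
print(f"needed sample size: ~{n:.0f}")  # roughly 90 participants
```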
Overall efficacy of computerized cognitive training on cognition. Each line represents one study, with the line's position on the horizontal axis indicating whether the study found an effect (lines to the right of the 0 vertical axis) or not (lines to the left of the 0 vertical axis). The red line at the bottom summarizes the overall effect of all studies combined. As indicated, there is a significant but small overall beneficial effect of computerized cognitive training on cognition.

Cognitive Improvement Not Tied to Working Memory

The researchers also found that the details of what the computer programs had the participants do were important. Take working memory, often thought of as our mental notepad: the limited quantity of information that we can keep in mind from one moment to the next. It is typically assessed by having participants hold series of digits in memory for a few seconds; how good are you at remembering a phone number you just read? For most of us, 7 digits is the upper limit. Working memory is thus implicated in multiple aspects of cognition. Nevertheless, the meta-analysis revealed that training that specifically targeted working memory did not improve other cognitive functions.
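For the curious, here is a bare-bones sketch of the kind of digit-span task just described; a toy illustration of the procedure, not a validated psychometric instrument:

```python
# Toy digit-span task, the classic probe of working memory capacity
# (a bare-bones sketch, not a standardized test).
import random
import time

def digit_span_trial(length):
    digits = [random.randint(0, 9) for _ in range(length)]
    print("Memorize:", " ".join(map(str, digits)))
    time.sleep(2)
    print("\033[2J")                      # crude terminal screen clear
    answer = input("Type the digits back, no spaces: ")
    return answer == "".join(map(str, digits))

span = 3
while digit_span_trial(span):             # lengthen until the first failure
    span += 1
print(f"Your span: about {span - 1} digits")
```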

Like any scientific research project, meta-analyses have limitations, mostly related to the heterogeneity of the individual studies they attempt to combine. Here, a major shortcoming is that most studies did not assess whether the effects of computerized cognitive training lasted beyond the moments immediately following the practice session. Thus, the meta-analysis cannot answer the crucial question of whether "brain games" have any lasting positive impact on cognition, let alone fend off the adverse effects of aging. Also, the potential benefits of computerized cognitive training were generally assessed only with psychology laboratory tests, leaving aside the burning question of whether any gain on those tests translates into progress in real-life situations such as remembering appointments or resisting distractions while driving a car.

Importantly, about half of the studies used "wait-lists" or other types of passive control groups (in a wait-list control group, participants assigned to the control group first take the baseline cognitive tests and are then put on a waiting list to receive the cognitive training at the end of the study). As pointed out in the comments to the article, passive control groups might have created artificially large differences with the intervention groups, as opposed to active control groups, in which participants are trained on something other than the computer programs. Active control groups are generally considered better from a methodological standpoint, but are more time- and resource-consuming.

Consensus Paper Warns of ‘Unwarranted Enthusiasm of Brain Training Industry’

The meta-analysis is not alone in tempering the enthusiasm for "brain games": a few weeks earlier, a large group of cognitive psychologists and neuroscientists, led by the Stanford Center on Longevity and the Berlin Max Planck Institute for Human Development, released a consensus paper on the evidence (or lack thereof) for the benefits of brain training. The consensus paper did not conduct a rigorous review of the existing literature, but because its authors are prominent scientists who know the state of the art of the research inside out, its conclusions overlap with those of the meta-analysis to a very large extent.

Importantly, the authors of the consensus paper caution against the unwarranted enthusiasm of the “brain training industry” that massively overstates its products’ benefits. In their words, “the small, narrow, and fleeting advances [due to computerized cognitive training] are often billed as general and lasting improvements of mind and brain.” The consensus paper laments the exploitation by that industry of the understandable anxiety that older adults might have regarding the decline of their cognitive function.

All of this is not to say that computerized cognitive training has no effect whatsoever. Indeed, the meta-analysis does point to significant albeit small benefits. The authors of both the meta-analysis and the consensus paper suggest key ways to bring the quality of future research up to the highest scientific standards. To conclude, you don't need to stop playing that Game Boy right now, but don't forget to pause every once in a while and also make time for hiking, gardening, socializing, and so on, all of which will benefit your brain and mind just as much!

References

Lampit, A., Hallock, H., & Valenzuela, M. (2014). Computerized cognitive training in cognitively healthy older adults: A systematic review and meta-analysis of effect modifiers. PLoS Medicine, 11(11): e1001756. DOI: 10.1371/journal.pmed.1001756

#AESmtg14 highlights: Frontal lobe epilepsy: semiology and cognitive aspects

Here is my live-tweeting from this Special Interest Group session from the 2014 Annual Meeting of the American Epilepsy Society on Dec. 7 in Seattle, WA, collected on Storify.

This session in fact turned into one long presentation by Prof. Patrick Chauvel, from the CHU of Marseille. And it was a truly masterful lesson, with many fascinating video-intracranial EEG presentations of patients with epilepsy involving the prefrontal lobe. Dr. Chauvel drew anatomical-electrical-clinical correlates from each patient to build a systematic approach to these poorly understood epilepsies.

#AESmtg14 highlights: What parts of the brain are active during seizures?

Here is my live-tweeting from this Investigators’ Workshop from the 2014 Annual Meeting of the American Epilepsy Society on Dec. 7 in Seattle, WA, collected in Storify.

Very interesting, thought-provoking session focusing in part on high-density micro-electrode array recordings of seizures in human patients (“Utah array”). The speakers were:

#AESmtg14 highlights: do focal seizure networks matter?

Here is my live-tweeting from this Investigators’ Workshop from the 2014 Annual Meeting of the American Epilepsy Society on Dec. 7 in Seattle, WA, collected in Storify.

This was a great session with many provocative ideas (starting with the title!), chaired by Dr. Jean Gotman from the Montreal Neurological Institute. The general discussion at the end of the presentations yielded several outstanding questions and exchanges between the panelists and the audience. The speakers were:

#AESmtg14 highlights: ictal semiology helps to localize the seizure onset zone

Here is my live-tweeting from this Special Interest Group session from the 2014 Annual Meeting of the American Epilepsy Society on Dec. 6 in Seattle, WA, collected in Storify.

These sessions, in which very experienced clinicians and neurophysiologists discuss the relationships between the clinical features of seizures (semiology) and the brain areas involved in the seizure as revealed by intracranial EEG, are extremely valuable to young clinicians and researchers interested in epilepsy and its neurophysiological underpinnings. The speakers for this session were:

#AESmtg14 highlights: dense array EEG and source localization in clinical practice

Here is my live-tweeting from this Special Interest Group session from the 2014 Annual Meeting of the American Epilepsy Society on Dec. 5 in Seattle, WA, collected in Storify.

EEG source imaging is a particular forte of the group where I did my PhD, Christoph Michel's Functional Brain Mapping Laboratory at the University of Geneva Medical Faculty and Geneva University Hospitals, and of Margitta Seeck's Epilepsy and EEG unit at Geneva University Hospitals, where I trained in clinical neurophysiology and epileptology. It was hugely rewarding to see some of the work I contributed to presented at the session (a first for me)!

The speakers at the session were:

#AESmtg14 highlights: Epilepsy as a spectrum disorder (or, another conference already?!)

I have been extremely lucky this year to be able to attend both the Society for Neuroscience's (#SfN14) and the American Epilepsy Society's (#AESmtg14) annual meetings in close succession. As I did for SfN, I will be live-tweeting and blogging about the AES meeting over the next few days.


To get started, here is a Storify collection of my live-tweeting of the opening lecture, the Judith Hoyer lecture, given by Dr. Frances Jensen from the University of Pennsylvania. Dr. Jensen made the case for envisioning epilepsy as a spectrum disorder. In my opinion, it was the ideal way of opening the meeting, with insightful and thought-provoking ideas on how to broaden our vision of epilepsy research.

Stay tuned for more exciting research on epilepsy, its neural underpinnings, its consequences and its therapies from #AESmtg14!

#SfN14 highlights: the best burger at the SfN

Let's get straight to business: the best burger at the Society for Neuroscience's annual meeting, when it is held in Washington, DC, as it was this year, is served meters away from the Convention Center, in a dark, narrow bar recalling the ones you can find all over Brooklyn: The Passenger. It's just the right size, with two thin patties cooked medium-rare, American cheese sandwiched between them (I think), grilled onions, lettuce, and a special sauce. The result reminded me of the burger at Five Guys, but much more hand-crafted, while still keeping that satisfying fast-food taste. It comes with a pickle spear and potato chips (get the salt-and-vinegar flavor!). At $14, it's quite expensive, but that burger really delivers.

Materials and Methods: The Passenger is in fact the only place I’ve eaten a burger in Washington, but if I can get an even better one anywhere near the Convention center, I’ll happily retract the present publication.

Now that I know where to satisfy my burger cravings in both DC and San Diego (that most welcoming city is home to the best burger I've ever had, at Hodad's), all I need is a place to go in Chicago and I'll be covered for many SfNs to come. Tips, anyone?

#SfN14 highlights: The Neuroscience of Gaming

The definitive version of this post was originally published on November 17, 2014 on the PLOS Neuroscience Community's Collaborative coverage of SfN2014, where I serve as an editor.


ME08. The Neuroscience of Gaming. Social Issues Roundtable. Sun Nov 16 2014, 1–3 PM.

The Social Issues Roundtable on the Neuroscience of Gaming brought together four panelists with varied backgrounds, most of whom had an intimate knowledge both of video games and of the recent neuroscience studies that have focused on them. The roundtable format meant that, after each panelist had given a brief presentation on their work and ideas, a long question-and-answer session with the audience took place, generating an interesting discussion.

Here are brief overviews of some of the speakers’ talks. My apologies to the speakers that I did not cover. Of course, all inaccuracies or outright misunderstandings in what follows are mine alone.


ME08. An Inside Look on Gaming Design — Daniel Greenberg.

Daniel Greenberg has a legit pedigree as a designer of video games, as he worked for several big-name game companies (yay, Atari!) on several big-name games (does The Lord of the Rings Online ring a bell, anyone?).

Bad Games: Video games are now where comics were in the 1950s: the focus of mostly negative scrutiny by scientists, physicians, psychologists and public health specialists. The problem with such negative scrutiny is that it might cause society to overlook the positive effects of video games, e.g. practice by doing and experiential learning.

Good Games: Furthermore, game play itself mimics the scientific method: you are first confronted with an unknown system, with which you interact, forming and testing hypotheses and validating or rejecting them depending on the observed results. Games don't explain; they encourage exploration and building on one's errors through tolerance of the consequences of those errors (infinite lives!). Video games train a number of sensorimotor and cognitive skills, especially so-called "shooters". "Play-fighting" is also an important part of a child's social development, and video games provide such a form of social play.

Healthy Games: Games have been developed to improve adherence to treatment in cancer (Re-Mission), to deliver cognitive-behavioral therapy in depression (SPARX), and for pain management (SnowWorld). Games have also been used as tools for science (e.g. Foldit). Several of these games have been shown to be effective in well-controlled studies, so they clearly represent an interesting lead to explore further.


ME08. Advances in Education, Training, and Therapeutic Outcomes Using Games — Adam Gazzaley.

Why is it interesting to use video games in neuroscience research? Dr. Gazzaley's lab is all about improving cognition in both healthy and impaired people. Current diagnostic and therapeutic approaches to cognitive impairment, and the educational system as well, are essentially open-loop approaches that, according to Dr. Gazzaley, are just not good enough. Video games, by contrast, have become ubiquitous, and they are examples of closed-loop systems, in which an agent acts on a system that in turn feeds information back to the agent, allowing the agent to adjust its actions.
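Here is a minimal sketch of what "closed-loop" means in this context: a game that continuously adapts its difficulty based on the player's ongoing performance. This is a generic weighted up/down staircase of my own, for illustration, not the actual algorithm used in any of the games discussed:

```python
# Minimal closed-loop sketch: the game adapts its difficulty from the
# player's ongoing performance, keeping them near a target success rate.
# A generic weighted up/down staircase, not NeuroRacer's actual algorithm.
import random

def run_closed_loop(n_trials=20, target=0.8, step=0.05):
    difficulty = 0.5            # 0 = trivial, 1 = impossible
    for trial in range(n_trials):
        # Agent acts on the system: success is likelier when difficulty is low.
        success = random.random() > difficulty
        # System feeds back: nudge difficulty toward the target success rate.
        difficulty += step if success else -step * target / (1 - target)
        difficulty = min(max(difficulty, 0.0), 1.0)
        print(f"trial {trial:2d}: {'hit ' if success else 'miss'} "
              f"-> difficulty {difficulty:.2f}")

run_closed_loop()
```

With these step sizes, the difficulty settles where the player succeeds on roughly 80% of trials; that convergence is the feedback loop doing its job.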

Dr. Gazzaley credits Daphne Bavelier, then at the University of Rochester and now at the University of Geneva, Switzerland, with essentially creating a whole field: the study of the positive effects of video games on cognition. Dr. Gazzaley views video games as a tool to harness brain plasticity, and he asked this question: can we create a custom-designed video game to enhance cognition in older adults?

Dr. Gazzaley then developed just such a game, NeuroRacer, together with people from LucasArts (a famous video game company). NeuroRacer features multitasking, combining a driving task with a perceptual discrimination task. Performance on NeuroRacer declined progressively with the players' age. However, intense training on the game allowed older adults (aged 60 and over) to become even better than naive 20-year-olds. Crucially, this learning effect was maintained over a 6-month period, and the researchers also found transfer of the improvement to other cognitive tasks (a crucial point, since getting better at a video game per se would not bring much improvement to the lives of seniors). The behavioral changes were paralleled by changes in brain rhythms, as measured by EEG.

The principles of NeuroRacer are now being tested and developed by an R&D company, Akili, composed of LucasArts alumni. FDA approval is being sought.


ME08. When Gaming Goes Too Far: The Negative Implications of Problematic Gaming — Mark Griffiths.

Dr. Griffiths asks the following questions in his work: What do we mean when we talk of video game addiction? Does gaming addiction actually exist? If it does, what are people actually addicted to?

According to him, any behavior is addictive if it fulfills six criteria: salience (the total preoccupation with the behavior, such that it becomes the single most important thing in one's life); mood modification; tolerance (more of the behavior is needed to achieve the same mood-modifying effect); withdrawal; conflict (the most important criterion according to Dr. Griffiths: the compromise to your life, including education, work and relationships, caused by the behavior); and relapse. (Dr. Griffiths notes that the newly introduced criteria for internet gaming disorder in the DSM-5 mostly overlap with these.) Generic risk factors that may facilitate online addictions include access, affordability, anonymity, convenience, disinhibition, escape, and social acceptability.

Dr. Griffiths mentioned that, according to those strict criteria, the proportion of people addicted to video games is likely very low. However, according to the approximately 100 studies published on video game abuse so far, excessive or problematic engagement seems to concern 8–12% of young persons, whereas addiction would affect 2–5% of children, teenagers and students. Dr. Griffiths thinks that these numbers are way too high: if that were the case, the problem would be much more visible, and most US cities would have a video game addiction clinic. Part of the problem may lie in the varying and inconsistent definitions of addiction, problematic use, etc., across studies. Also, and this is a very important point, almost all those studies were performed on self-selected samples, as opposed to epidemiologically representative samples. Consensus is therefore required to improve the quality of research in the field and make studies easier to compare with one another.


Open discussion

The open discussion gave rise to some great exchanges between the audience and the speakers. Here are a few of the questions.

“Could video games, especially ‘sandbox games’ (where the player can interact with the game environment in a non-restrictive fashion, as opposed to games with a very linear progression), be used more prominently in education?”

To sum up the speakers' answer: they might, but it is really important that both the content and the game design be optimal. Games with a lot of educational content were developed in the 1990s, but they were not engaging and were therefore mostly ignored by children.

“When we play video games, we ‘become’ and identify with the protagonist of the game to some extent. Could video games therefore be used to improve attitudes towards people of different races or sexual orientations?”

Yes, studies with avatars have already been performed and have shown that they can indeed improve identification with people of different characteristics.

“What is your favorite game, video or otherwise?” (a gem of a question!)

To Gazzaley, Portal 2 was the best. Mark Griffiths answered Tetris (“because I’m red-hot at it!”). Farah admitted to never having played a video game, and Greenberg did not get to answer the question.


What I took home from the session

I was impressed by the large attendance and by the fact that most now agree that video games have a unique potential, both for improving our understanding of cerebral functions and for improving brain functions themselves! I also liked that the potentially negative effects of video games (addiction was the most discussed aspect in this roundtable, but violent behavior and social isolation were also mentioned in passing) are being studied with strong investigational and scientific principles, far from fear-mongering and propaganda, but without blinding ourselves to the fact that these negative effects are real.