Category Archives: Neuroscience

The stuff of screams

The definitive version of this post was originally published on September 6, 2015 on the PLOS Neuroscience Community website, where I serve as an editor.

The tone of this post and the interview are a little different from my usual pieces. Heartfelt thanks to Luc Arnal, who went out of his way to answer my questions, serious and otherwise!


Luc Arnal, a post-doctoral scientist in David Poeppel’s laboratory at New York University, was having his brain hijacked. No, this isn’t a report of futuristic brain implants and neurohacking: rather, Arnal was experiencing the joys of fatherhood, together with the unavoidable alarm at hearing his newborn baby scream. Ever the scientist, he decided to explore what made those screams such an irresistible alarm signal. The answer fits in one word: roughness. Arnal applied an innovative approach to unpack the acoustic properties of screams along the two dimensions that characterize all sounds: time (how a sound evolves through time) and frequency (which, among other things, determines a sound’s pitch). He found that alarm signals exploit a portion of the acoustic space that other sounds, such as normal voices, do not use: this part of the spectrum corresponds to our perception of roughness (think of how a heavily distorted guitar sounds, for instance, as opposed to a pure piano note). He then went on to characterize how our brains respond to alarm signals, and discovered that rough sounds selectively activate the amygdala, one of the brain structures that process emotions. Arnal and colleagues’ findings were recently published in Current Biology. Here, he went out of his way to answer questions, serious and otherwise, about his research.

Where did you find that idea of studying screams? Who among the authors is the horror movie buff? Did you create a “scream scale” to rate the most famous screams in Hollywood history by roughness?

I started being interested in screams because my newborn’s screams were literally hijacking my brain and I wondered what makes screams so efficient as an alarm signal. Interestingly, screams are highly relevant biologically: screaming is innate in humans and it constitutes a primordial vocalization that is possibly shared by various animals.

So I decided to characterize the acoustics of human screams. I wanted to analyze good scream recordings. Before recording volunteer screamers on a roller-coaster or in the lab, I needed some preliminary evidence supporting my theory. I started working on excerpts from horror movies that were available on YouTube. But to be honest I’m not really a horror movie buff, and it’s been really brutal and depressing for me to listen to and edit so many screams overnight. But in the end, a few nightmares were totally worth the findings and the potential applications. A scream scale? We haven’t thought about that, but it’s a great idea for casting more credible victims in movies!

More seriously, you show that roughness is the defining characteristic of “alarm” signals, and that adding roughness to normal words makes them more fearsome, whereas filtering out the roughness makes screams more benign. Could you imagine a way that this filter would work in real time, in order to remove roughness in cases where it is unwanted (e.g. in economy class aboard planes)?

Well, filtering out rough modulations from the acoustic spectrum is a rather tricky manipulation especially in real time; we’re not there yet but I agree that ‘roughLess’ earplugs would be pretty amazing (although you’d probably miss the alarm signals if there is a real problem during your flight).

Also, you tested a series of sounds from musical instruments that were all considered “non-alarm” sounds. But did you try a very distorted guitar, like in hard rock or heavy metal (Black Sabbath comes to mind)?

We compared alarm sounds with sounds from musical instruments played using GarageBand, but effects like distortion were not tested. On the other hand, we found that dissonant intervals sound rough (as do distorted guitars). It may sound counterintuitive that modern music (such as jazz and metal) uses these rough sounds, which presumably trigger unpleasant, fearful responses in the brain. One possible explanation is that, in the same way that people enjoy being frightened and stimulating their amygdala when watching a horror movie, roughness in music may induce slightly unpleasant and fearful responses in the listener’s brain; maybe people who like hard rock like it precisely because it is slightly aggressive and stimulating.

Even more seriously, your neuroimaging results suggest some specialization for the processing of screams and alarm sounds. Do we know of neurological disorders where the capacity to recognize danger through sounds is altered or abolished? Do the anatomical substrates of these disorders coincide with your fMRI findings?

Well yes, there are well-known cases of patients with bilateral lesions of the amygdala who have impaired perception of vocal affect, in particular of the expression of fear and anger. To our knowledge, however, no other work had found any acoustic specificity of the amygdalar response.

You show that alarm sounds activate the amygdala as well as the auditory cortex to a greater degree than non-danger sounds. Could you speculate on the neuronal pathways involved, especially with respect to the timing of their activation? In other words, would you think that the amygdala gets activated by alarm signals directly from subcortical structures, and then influences the amplitude of activity in the auditory cortex? Or, on the contrary, is the auditory cortex passing on information to the amygdala that then feeds the alarm detection signal back to auditory cortex?

This is a great question but we can only speculate about that since our fMRI data do not really allow investigating the timing of brain responses. Whether the recruitment of the amygdala by rough aversive sounds results from a direct routing from subcortical auditory nuclei to the amygdala or an indirect routing through the auditory cortex remains an open question. However, we think that our finding might support the view that fast temporally modulated (rough) sounds would be directly routed from subcortical auditory nuclei to the amygdala. The fact that the amygdala is activated by roughness regardless of context (vocal, musical) is consistent with this view. The fast recruitment of the amygdala might in turn cause sensory unpleasantness, increased attention or arousal, and speed up the reaction to the signaled danger. Importantly, this hypothesis does not rule out subsequent interactions with other cortical areas involved in the processing of more complex information (pertaining to the context or valence of the stimulus).

Reference

Arnal, L., Flinker, A., Kleinschmidt, A., Giraud, A., & Poeppel, D. (2015). Human Screams Occupy a Privileged Niche in the Communication Soundscape. Current Biology, 25(15), 2051–2056. DOI: 10.1016/j.cub.2015.06.043

The brain’s ebb and flow cares not for distance

The definitive version of this post was originally published on July 29, 2015 on the PLOS Neuroscience Community website, where I serve as an editor.


 

Over the past decade, functional neuroimaging has revealed that our brains go through ever-changing patterns of activity, whether we are active or at rest, healthy or sick, under legal medication or high on illegal drugs. Yet this dynamic activity takes place over the comparatively fixed anatomical grid of neuronal connections; the functional weights of those connections must therefore be changing over time. Two competing hypotheses have been put forth regarding the strength and malleability of neuronal connections: on the one hand, local neuronal connections could be more stable than long-distance ones, because neighboring regions of the cerebral cortex tend to take part in the same functions. On the other hand, the flexibility of connections might not depend on their length, thus promoting equilibrium between local specialization and widespread integration. Bratislav Misic, Marc G. Berman and their colleagues, from the Rotman Research Institute in Toronto and the University of Chicago, among others, set out to find evidence for either of these hypotheses by analyzing an extremely diverse collection of data, lumping together different functional neuroimaging modalities (what the participants were tested with), clinical populations (who the participants were) and task parameters (what they were asked to do). Notwithstanding this immense heterogeneity, they were able to show that the spatial distance between regions does not affect the stability of their functional connections. Their results, published in PLOS ONE, support the notion that the brain’s functional connectivity transits seamlessly between local specialized processing and global integration.

A cocktail of studies

The authors re-analyzed the data from six studies: four used functional MRI, one magnetoencephalography and one positron emission tomography. The participants were either healthy volunteers or patients suffering from depression or breast cancer. Finally, the experimental conditions consisted of either simply resting in the scanner on two separate occasions, ruminating on autobiographical memories versus rest, or performing various tasks of sensory perception or learning and memory. The authors selected such a diverse group of studies precisely so that they would be able to assess connectivity changes across a wide spectrum of situations, regardless of the methodological details of each study.

In each study and for each participant, the authors first grouped measurements of brain activity into regions of interest and then correlated cerebral activity in each region of interest with that of all the others, yielding matrices of functional connectivity within each experimental condition. They then measured the distance between all the regions of interest and computed the correlation between distance and functional connectivity. Others had previously found that regions of the brain that were closer to each other tended to have higher functional connectivity, and that is also what the authors observed here. This probably reflects the fact that neighboring brain regions tend to carry out the same functions.
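
To make this pipeline concrete, here is a minimal sketch in Python of these first steps (toy data and made-up region coordinates, not the authors' actual code or parameters):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

# Hypothetical inputs: one time series per region of interest (ROI)
# and one 3-D coordinate (e.g., a centroid in standard space) per ROI.
timeseries = np.random.randn(200, 90)   # 200 time points x 90 ROIs (toy data)
coords = np.random.rand(90, 3) * 100    # 90 ROI centroids, in mm (toy data)

# Functional connectivity: pairwise correlation between ROI time series.
fc_matrix = np.corrcoef(timeseries.T)   # 90 x 90 connectivity matrix

# Euclidean distance between every pair of ROIs.
dist = pdist(coords)                    # condensed vector of pairwise distances

# Keep only the upper triangle of the connectivity matrix (no self-connections).
iu = np.triu_indices(90, k=1)
fc = fc_matrix[iu]

# Correlation between anatomical distance and functional connectivity:
# expected to be negative (closer regions tend to be more strongly connected).
r, p = pearsonr(dist, fc)
print(f"distance vs. connectivity: r = {r:.2f}, p = {p:.3g}")
```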

No correlation between anatomical distance and changes in functional connectivity

The authors then compared how the distance between brain regions correlated with changes in functional connectivity across experimental conditions. They computed two indices of connectivity change: salience values derived from a partial least-squares analysis, and simple subtractions of the connectivity values between experimental conditions. Overall, they found no correlation between the distance separating two brain regions and the changes in their connectivity across experimental conditions.

This held true regardless of whether the connectivity increased or decreased as a result of the experimental condition, whether only those brain regions that displayed strong connectivity changes were taken into account, whether the regions were part of the same functional brain networks, or even whether they were in the same or the opposite cerebral hemisphere. These results thus argue against the notion that there is a relationship between anatomical distance and changes in functional connectivity. Importantly, it made no difference which neuroimaging technique was used to look at brain function, since the magnetoencephalography and positron emission studies yielded essentially the same observations as the ones that used functional MRI.
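
The simpler of the two change indices, the plain subtraction, leads to the paper's core test, which can be sketched as follows (again with toy data, not the authors' code; the partial least-squares salience analysis is not shown):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import pearsonr

n_rois = 90
coords = np.random.rand(n_rois, 3) * 100                   # toy ROI centroids (mm)
fc_cond_a = np.corrcoef(np.random.randn(200, n_rois).T)    # connectivity, condition A
fc_cond_b = np.corrcoef(np.random.randn(200, n_rois).T)    # connectivity, condition B

iu = np.triu_indices(n_rois, k=1)
delta_fc = np.abs(fc_cond_b - fc_cond_a)[iu]   # magnitude of connectivity change
dist = pdist(coords)                            # pairwise anatomical distance

# The key question: does the size of the change depend on distance?
r, p = pearsonr(dist, delta_fc)
print(f"distance vs. connectivity change: r = {r:.2f}")
```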

Intriguingly, homotopic regions (i.e. the same region on both cerebral hemispheres) had the lowest tendency to see their functional connectivity change across experimental conditions, suggesting that interhemispheric connections between homotopic regions are among the most stable.

When negative evidence yields positive results

In this study, the authors provide mostly negative evidence: they looked for a systematic relationship between changes in functional connectivity and anatomical distance, and failed to find one. Because absence of evidence is not the same as evidence of absence, does that mean that the conclusions of the article are unwarranted? Most likely not: the fact that there was no correlation across such a diverse group of participant populations, task parameters, and even neuroimaging modalities argues strongly for the hypothesis that, indeed, anatomical distance plays no role in determining the stability or flexibility of functional connections.

What is the functional consequence of this? According to the authors, the fact that short- and long-distance connections have an equal propensity to change might favor a subtle balance between local, presumably specialized processing of information by the brain and the integration of this processing with that of distant modules within distributed networks. The authors suggest that the stable interhemispheric connections between homotopic regions might serve as anchors within this dynamic connectivity landscape.

References

Mišić, B., Fatima, Z., Askren, M., Buschkuehl, M., Churchill, N., Cimprich, B., Deldin, P., Jaeggi, S., Jung, M., Korostil, M., Kross, E., Krpan, K., Peltier, S., Reuter-Lorenz, P., Strother, S., Jonides, J., McIntosh, A., & Berman, M. (2014). The Functional Connectivity Landscape of the Human Brain. PLoS ONE, 9(10). DOI: 10.1371/journal.pone.0111007

Here be values (in the brain): how the ventral striatum participates in decision-making

The definitive version of this post was originally published on June 30, 2015 on the PLOS Neuroscience Community website, where I serve as an editor.


How do we make decisions? The question is of huge importance for every aspect of our lives. Research into the neuronal mechanisms underlying decision-making has made considerable progress in recent years, from basic research in animal models through functional neuroimaging in humans to clinical studies in patient populations. As a result, we are starting to have a reasonably complete picture of the brain areas involved in this process, which include among others the ventral striatum, a subset of the basal ganglia in the depth of each cerebral hemisphere, and the ventromedial prefrontal cortex. The former is generally linked to learning, especially habit learning, while the latter is thought to be important for the online regulation of behavior. One thing that remains unclear, however, is what the role of these individual brain areas is exactly. In a recent study published in PLOS Biology, Caleb Strait and colleagues at the University of Rochester, NY, investigated the role of the ventral striatum in making choices based on the expectation of a reward, and compared it with that of the ventromedial prefrontal cortex. Their intriguing results show that both areas reflect mostly overlapping aspects of decision-making, and suggest that choices are the result of distributed neuronal computations occurring in multiple brain regions.

Decision-making research uses a number of specific terms, some of which are defined here (from Stott and Redish, PLOS Biol 2015).

Gambling monkeys seek risk

In the study, monkeys performed a gambling task, with drinking water as a reward. On each trial, they had to choose one of two options, each of which varied in the magnitude of the reward (the number of water drops) and its likelihood (say, 50% vs. 80%). The two options were presented to the monkeys one after the other, with the probability of the reward and its magnitude represented as rectangles of varying size and color, respectively. Neuronal activity was recorded in the ventral striatum throughout the task. The monkeys’ choices indicated that they understood the mechanics of the gamble well: on 83% of trials, they chose the option with the larger expected value (that is, the option that would bring the larger cumulative reward if it were chosen a large number of times). Further, the monkeys were risk-seekers: when presented with two options of equal expected value, they more often chose the riskier one (that is, the option with the smaller probability but the larger reward amount).
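
For readers less familiar with the terminology, expected value is simply reward probability multiplied by reward magnitude; here is a toy illustration of the two behavioral observations (the offers below are invented, not taken from the study):

```python
# Toy illustration of the behavioral measures (not the study's actual offers).
def expected_value(probability, magnitude):
    """Expected value of a gamble = reward probability x reward magnitude."""
    return probability * magnitude

# Example trial: offer 1 is safer, offer 2 is riskier but larger.
offer1 = {"p": 0.80, "magnitude": 2.0}   # 80% chance of 2 drops of water
offer2 = {"p": 0.50, "magnitude": 3.2}   # 50% chance of 3.2 drops

ev1 = expected_value(offer1["p"], offer1["magnitude"])   # 1.6
ev2 = expected_value(offer2["p"], offer2["magnitude"])   # 1.6

# On about 83% of trials the monkeys chose the offer with the larger expected
# value. When expected values are equal (as here), risk-seeking monkeys tend to
# pick the riskier option: the one with the lower probability but larger reward.
if ev1 == ev2:
    riskier = min((offer1, offer2), key=lambda offer: offer["p"])
    print("equal expected value -> risk-seeking choice:", riskier)
```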

Neural correlates of value in the ventral striatum

More than half the neurons recorded in the ventral striatum responded to some aspect or another of the gamble. About 15% of neurons changed their activity as a function of the reward amplitude in the first offer (that is, they fired either more or less than baseline, but consistently so for a given reward amplitude), and the same proportion responded to the probability of that first offer. Collectively, the neurons encoded both the reward amplitude and its probability within one abstract representation (that is, they tended to react in a similar fashion both when the reward probability of the first offer increased and when its magnitude increased).

When the second offer appeared on the screen, neurons in the ventral striatum responded to the magnitude and probability of the reward in a similar fashion as they did during the presentation of the first offer. Interestingly, however, a proportion of neurons also encoded the expected value of the first offer during the presentation of the second one: for instance, for a given expected value of offer 2, neurons would fire more when it was larger than that of offer 1 than when it was smaller. Thus, these neurons are encoding the comparison between the two offers by weighing their relative expected values; the authors name this antagonistic coding.

Ventral striatal neurons track eventual choice

The activity of ventral striatal neurons also tended to represent the offer that the monkey eventually chose (whether the first or the second one) more and more as each trial progressed; that effect became significant before the monkeys gave their response. Looking now at the period during which the monkeys received their reward (or not!), the authors found that about 35% of ventral striatal neurons encoded the outcome of the gamble. Importantly, neurons used the same (or at least a similar) code to represent the expected value of an offer and the actual outcome of the trial. Finally, the authors showed that neurons in the ventral striatum encode the actual reward size of a gamble, rather than the reward prediction error (the comparison between the expected reward and the actual one).
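
The distinction drawn in that last sentence is easy to state numerically (toy numbers, for illustration only):

```python
# Toy numbers, not values from the study.
expected_value = 1.6   # probability-weighted reward of the chosen gamble
outcome = 3.0          # reward actually delivered on this trial

reward_prediction_error = outcome - expected_value   # +1.4: better than expected

# Strait and colleagues report that ventral striatal firing tracked the outcome
# itself (3.0 here) rather than the reward prediction error (+1.4).
```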

Strait and colleagues’ study is summarized in the top row (a-c) and compared with results from a similar study conducted in rats (bottom row, d-f). In both species, value representation appeared in the ventral striatum (VS) before it did in the prefrontal cortex (ventromedial prefrontal cortex, vmPFC, in monkeys; orbitofrontal cortex, OFC, in rats), pointing toward a common or similar mechanism for value assessment in both species (from Stott and Redish, PLOS Biol 2015).

“An embarrassment of riches?”

Overall, neurons in the ventral striatum encoded not only the expected value of each offer, but also the difference between the expected values of the two offers. Further, they came to represent the selected offer rather than the ignored one; finally, they showed strong outcome-related responses. This behavior is strikingly similar to that of neurons in the ventromedial prefrontal cortex, another cerebral area involved in decision-making that the authors had previously explored in monkeys. Importantly, however, neurons started to encode the selected choice earlier in the ventral striatum than in the ventromedial prefrontal cortex, leading to the intriguing suggestion that value-based decision-making could actually take place in the basal ganglia rather than in the cortex. Alternatively, as Jeffrey Stott and David Redish explain in a Primer accompanying the research article, these multiple representations of value in the brain could be part of parallel decision-making systems, each one affecting a different aspect of behavior. Whatever the final word on these questions might be, Strait and colleagues’ research has enhanced our understanding of how our brains assess value and make choices based on this assessment.

References

Strait, C., Sleezer, B., & Hayden, B. (2015). Signatures of Value Comparison in Ventral Striatum Neurons. PLOS Biology, 13(6). DOI: 10.1371/journal.pbio.1002173

From Broca’s area to Broca’s aphasia: a tale of two eponyms

The definitive version of this post was originally published on May 27, 2015 on the PLOS Neuroscience Community website, where I serve as an editor.


In 1861, the French scientific journal Bulletin de la Societe Anatomique published an article that would prove immeasurably important to the study of language and of the human brain [1]. The article described M. Leborgne, a middle-aged patient who had suffered for the past 20 years from a striking inability to speak; so much so that he had become known as “Tan”, after the only syllable that he could utter. Leborgne had the misfortune to die soon afterwards, and the physician who had taken care of him in his last days performed an autopsy and collected the brain. What he found almost definitively established the notion of the cerebral localization of cognitive functions: Leborgne’s brain bore a single lesion in the inferior part of the left frontal lobe. Thus, focal and circumscribed brain damage was responsible for Leborgne’s loss of the ability to speak. The features of Leborgne’s speech impairment and the damaged area of his brain both came to bear the name of the physician who reported on his plight: Paul Broca.

The area and the syndrome do not match

Today, Broca’s area refers to the posterior portion of the inferior frontal gyrus on the left cerebral hemisphere, and Broca’s aphasia to an acquired alteration of spoken and written language that includes problems with speech fluency, word finding, repetition, and the ability to construct and understand grammatically complex sentences: patients suffering from Broca’s aphasia have a hard time getting words out and speak in short and hesitant sentences, sometimes called telegraphic speech. But the careful observation of numerous patients led many physicians and scientists to question the existence of an exclusive and absolute link between the two eponyms: there are patients whose Broca’s area is damaged, yet whose speech does not resemble that of Leborgne, and can even be almost normal; conversely, some patients speak like Leborgne after brain damage that does not involve the inferior frontal gyrus. How can one determine the precise role of Broca’s area given this discrepancy?

The rise of functional neuroimaging and neurophysiology affords another approach: rather than study patients whose brain is damaged or whose speech is abnormal, functional studies would measure the brain activity of healthy people while they speak. The question then becomes: what aspects of speech production—from conceptualizing a word to selecting its correct grammatical form to translating it into syllables to preparing the motor commands that would produce those syllables to finally executing the motor commands and speaking—are under the control of Broca’s area? Unfortunately, technicalities got in the way: speech production unfolds rather quickly, over a few hundred milliseconds, far beyond the temporal resolution of functional magnetic resonance imaging, which generally produces only about one “brain map” per second. On the other hand, several brain areas involved in speech production sit next to each other in the brain, which makes them impossible to resolve using electroencephalography and magnetoencephalography, despite the millisecond temporal resolution of those methods.

Probing the human brain’s function from within

If functional MRI is too slow and EEG is too blurry, would it mean that studying brain function during the production of normal speech is altogether impossible? Not quite: there are situations when medical conditions such as brain tumors or epilepsy dictate the placing of electrodes directly in contact with the human brain. These electrodes have the same millisecond temporal resolution as EEG, but with a spatial resolution and specificity that rivals that of functional MRI. In other words, they provide a uniquely detailed window onto the human brain’s functions. Neuroscientific research using intracranial electrodes is made possible thanks to the extraordinary generosity of the patients who agree to participate in extra tests and experiments despite the fact that they have just undergone a significant neurosurgical procedure.

A grid of intracranial electrodes is placed over the surface of the cerebral cortex (source: Electrocorticography. Wikipedia. Retrieved April 12, 2015).

In a study recently published in the Proceedings of the National Academy of Sciences, Dr. Adeen Flinker and colleagues, from the University of California, Berkeley, and Johns Hopkins University in Baltimore, used intracranial electrodes to reexamine the role of Broca’s area in speech production [2]. They studied seven patients whose epilepsy could not be controlled by drugs and who were candidates for surgical removal of the epileptic focus in their brain. In these patients, the intracranial electrodes were necessary both to determine the exact origin of the seizures and to map cortical functions in order to spare areas essential for speech. Dr. Flinker asked the patients to repeat out loud words that they had just heard or read, while he measured neural activity with millisecond precision in Broca’s area, in the motor cortex that ultimately controls the movements of the tongue and mouth, further back on the surface of the frontal lobe, and in parts of the temporal lobe important for hearing and the comprehension of language.

Resolving the role of Broca’s area with millisecond precision

When patients were repeating words that they had just heard or read, Dr. Flinker found a characteristic pattern of activation: first the auditory cortex, then Broca’s area, and finally the motor cortex. Importantly, activity in Broca’s area closely followed that in the auditory cortex, and by the time the patients started to speak themselves, neural activity in Broca’s area had returned to its resting level. This suggests that Broca’s area cannot be responsible for actually coordinating speech movements. Flinker and colleagues then used Granger causality analysis, a statistical method originally developed for economic forecasting, to estimate the direction of information flow from one brain area to another. That analysis confirmed that the auditory cortex first influenced Broca’s area, which in turn influenced the motor cortex. Importantly, the influence of Broca’s area over the motor cortex had ended before the patients started speaking. These results confirm that Broca’s area could not be responsible for coordinating articulation itself.
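
For readers curious about the method, Granger causality asks whether the recent past of one signal improves the prediction of another signal beyond that signal's own past. A generic illustration with the statsmodels package is sketched below, using simulated signals rather than the intracranial recordings analyzed in the study:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 1000
auditory = rng.standard_normal(n)

# Build a toy "Broca" signal that lags the "auditory" signal by 2 samples,
# so the auditory signal should Granger-cause it.
broca = np.roll(auditory, 2) + 0.5 * rng.standard_normal(n)

# grangercausalitytests checks whether the SECOND column Granger-causes the FIRST.
data = np.column_stack([broca, auditory])
results = grangercausalitytests(data, maxlag=5, verbose=False)
# Each lag's entry contains F-test p-values; small p-values support an
# auditory -> Broca direction of information flow.
```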

The graphs on the left side of the figure represent the amount of activity in the auditory cortex (superior temporal gyrus, STG, top), Broca’s area (middle) and the motor cortex (bottom). Yellows and reds indicate larger amounts of activity, whereas green indicates baseline activity. Notice how activity in Broca’s area has returned to baseline by the time the patient is speaking (later than the vertical dashed line). (Source: Flinker A, et al. Redefining the role of Broca’s area in speech. PNAS 2015;112:2871-2875)

In a clever twist built into the word repeating task, Flinker and colleagues included pseudo-words such as “yode” in addition to existing words such as “book”. The patients were able to speak these pseudo-words just as well as the standard ones, although it took them a little more time to do so. Crucially, Broca’s area was more intensely at work before the patients repeated the pseudo-words, suggesting that the role of that area was to prepare the novel articulatory combinations that were then executed in the motor cortex.

Flinker and colleagues’ findings nicely align with those of another study that directly assessed Broca’s area in different conditions: Dr. Matthew Tate and colleagues from the University Hospital of Montpellier, France, applied bursts of electrical stimulation directly to the surface of the cerebral cortex in patients who were undergoing neurosurgery [3] (I reported on that study here). Such a procedure, known as intraoperative mapping, sounds more painful and uncomfortable than it really is: after the patient’s brain has been exposed under general anesthesia, the patient is allowed to wake up on the operating table, with local anesthetics taking care of the pain caused by the surgery. She is then asked to repeat words, just like Dr. Flinker’s patients, while the neurosurgeon transiently and reversibly disrupts cortical function with electricity. Not your typical walk in the park, but it is worth the effort: direct stimulation mapping yields the most precise functional maps of the human brain, and therefore ensures that the surgery won’t affect the patient’s language or cause any other disability. Dr. Tate and colleagues found that briefly tampering with Broca’s area while patients were speaking rarely prevented them from getting the words out altogether. Instead, it caused them to have “slips of the tongue”, paraphasias in technical parlance: incorrect speech sounds would be inserted into words, but articulation would then proceed normally.

To scan a dead brain

If Broca’s area is not active during articulation itself, and if transiently impairing its function leaves patients able to articulate, why did Leborgne, and why do patients with Broca’s aphasia, have such massive difficulty getting any word out at all? Here, neuroimaging did make a critical contribution: Broca had the good idea of preserving Leborgne’s brain for posterity, which meant that it could be examined with a modern MRI scanner. That is just what Dr. Dronkers and colleagues, from the University of California, Davis and the Université Pierre et Marie Curie, Paris, did, and they found that the damage extended far beyond Broca’s area per se, also involving the neighboring parietal and temporal lobes, but especially reaching into the depth of the cerebral hemisphere and destroying most of the insula and part of the basal ganglia [4]. In fairness to Broca, he did mention in his original report that the damage seemed more extensive than what he could see from the surface of the brain; but he chose not to dissect the brain precisely because he wanted to preserve it, and could therefore not assess the full extent of the lesion. Thus, the apparent discrepancy between Broca’s area and Broca’s aphasia stems from the fact that the damage to Leborgne’s brain extended far beyond the confines of Broca’s area!

The story of Broca’s foundational discovery, and of how modern neuroscience carefully refined and improved our understanding of the functional organization of speech production in the brain, is a vibrant example of cognitive neuroscience at work. There is no overstating the absolutely crucial role of serendipitous clinical observations of patients with brain damage, the unfortunate victims of “Nature’s experiments”. Armed with modern neuroimaging and neurophysiological techniques, we can now functionally dissect the brain’s activity in health as well as in disease. The resulting, ever more detailed picture of the human brain at work changes the way we conceive of the relationship between our brains and our minds.

References

  1. Broca P. Remarques sur le siége de la faculté du langage articulé, suivies d’une observation d’aphémie (perte de la parole). Bull Soc Anat 1861;6:330–357.
  2. Flinker A, Korzeniewska A, Shestyuk AY, Franaszczuk PJ, Dronkers NF, Knight RT, & Crone NE (2015). Redefining the role of Broca’s area in speech. Proceedings of the National Academy of Sciences of the United States of America, 112 (9), 2871-5 PMID: 25730850
  3. Tate MC, Herbet G, Moritz-Gasser S, Tate JE, & Duffau H (2014). Probabilistic map of critical functional regions of the human cerebral cortex: Broca’s area revisited. Brain : a journal of neurology, 137 (Pt 10), 2773-82 PMID: 24970097
  4. Dronkers NF, Plaisant O, Iba-Zizen MT, & Cabanis EA (2007). Paul Broca’s historic cases: high resolution MR imaging of the brains of Leborgne and Lelong. Brain : a journal of neurology, 130 (Pt 5), 1432-41 PMID: 17405763

Book review: Tales from both sides of the brain, by Michael Gazzaniga

I’m introducing a new category of posts: short reviews of neuro-related books I recently read and liked (or didn’t!). I am starting this series with Michael Gazzaniga’s scientific autobiography, which was published earlier this year.


Michael S. Gazzaniga. Tales from both sides of the brain: a life in neuroscience. Ecco, 2015.

In this very enjoyable book, legendary neuroscientist Michael Gazzaniga tells of his life in science–and a bit about his life outside science as well.

Why enjoyable? Because Dr. Gazzaniga is a great story-teller; in the book, his stories pleasantly weave together the essential (the inventiveness and drive that led Dr. Gazzaniga to build a neuropsychological testing laboratory, complete with an elaborate stimulus-presentation apparatus called a tachistoscope, inside a trailer van in order to go test the patients at their homes) and the anecdotal (the joys of martinis at lunchtime in 1970s Manhattan, or how to moonlight as a political meeting organizer while a grad student at Caltech).

Why legendary? Dr. Gazzaniga was instrumental in starting the field of cognitive neuroscience (he came up with the term itself) and founded both its Journal and its Society. His numerous studies brought essential insight into the human brain’s higher functions, perhaps most famously regarding the consequences of sectioning the interhemispheric commissures of the brain. Indeed, he was one of the major players in the ground-breaking work on split-brain patients that earned Dr. Roger W. Sperry the Nobel Prize in 1981.

What this book is not is a scientifically complete and concise, third-person summary of the research on split-brain patients; in that sense, I found the title slightly misleading. The subtitle is much more to the point: this is Dr. Gazzaniga’s “professional autobiography”, and we stand right behind him as he describes how his breath is taken away by the results of the first experiments that began to reveal hemispheric specialization in the human brain.

We literally stand right behind Dr. Gazzaniga, since he pioneered the archiving of neuropsychological tests on film. The book includes links to about two dozen movies that illustrate both the ingenuity of the experimental designs and the endlessly fascinating data yielded by split-brain patients.

Throughout the book, I was immensely impressed by Dr. Gazzaniga’s passion, his relentless energy in solving problems and coming up with solutions to help crack the secrets of cognitive function in the two halves of the brain. Despite not having learned as much about split-brain research as I had hoped to, I left the book reinvigorated with respect to my own scientific pursuits. I warmly recommend this book: some of Dr. Gazzaniga’s enthusiasm for research is bound to rub off on his readers!


Multisensory integration and causal inference in the brain

The definitive version of this post was originally published on April 3, 2015 on the PLOS Neuroscience Community website, where I serve as an editor.


Amidst the incessant onslaught of signals that reach our senses every second, how does the brain determine which auditory and visual signals originate from a common source and should be integrated, and which ones should be kept separate because they reflect distinct objects? An influential idea is that the brain solves this problem by performing optimal probabilistic inference, known as Bayesian causal inference. In a recent study published in PLOS Biology, Drs. Tim Rohe and Uta Noppeney, from the Max Planck Institute in Tübingen, Germany, and the University of Birmingham, UK, combined behavior and functional MRI with intricate analysis models to shed light on the neural underpinnings of Bayesian causal inference in the cerebral cortex.

An everyday life analogy

Picture yourself as you are about to cross a busy street. Both the visual shape and the engine noise coming from your right surely signal the same car to you; but what about that sudden honking horn? If the sound is coming from the left, chances are it is another vehicle, potentially closer and more dangerous to you. If, on the other hand, your ears tell you that it is coming from the right, then it most likely originated from the same car you had already seen. How and where in the brain does such processing of the sensory scene take place?

A model for multisensory processing

Bayesian causal inference is a framework for analyzing multisensory integration. It postulates that the attributes of a sensory object (e.g. the location of a honking horn in space) are represented in the brain in a probabilistic fashion, with the probability distribution reflecting the reliability of the corresponding sensory modality. Unisensory stimuli are then combined into a single percept if those distributions overlap enough. Behavioral experiments have established that this model most accurately reflects the way we handle multisensory stimuli, but its neural basis remained unexplored.
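
For the technically inclined, the core computation can be sketched as follows. This is a standard, simplified formulation of Bayesian causal inference with Gaussian likelihoods and model averaging (in the spirit of Körding et al., 2007), not the exact model fitted by Rohe and Noppeney, and all numbers are made up:

```python
import numpy as np

def gauss(x, mu, var):
    """Gaussian probability density."""
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def bci_auditory_estimate(x_vis, x_aud, sig_vis, sig_aud,
                          sig_prior=20.0, p_common=0.5):
    """One-trial sketch of Bayesian causal inference (model averaging).

    x_vis, x_aud   : noisy internal visual and auditory location signals (degrees)
    sig_vis, sig_aud: sensory noise; the study manipulated visual reliability
    sig_prior      : width of a spatial prior centred on straight ahead (0 deg)
    p_common       : prior probability that sound and flash share one cause
    """
    v_vis, v_aud, v_pri = sig_vis**2, sig_aud**2, sig_prior**2

    # Likelihood of the two signals if they come from ONE source
    # (the unknown source location is integrated out analytically).
    var_c = v_vis * v_aud + v_vis * v_pri + v_aud * v_pri
    like_common = np.exp(-0.5 * ((x_vis - x_aud)**2 * v_pri
                                 + x_vis**2 * v_aud
                                 + x_aud**2 * v_vis) / var_c) / (2 * np.pi * np.sqrt(var_c))

    # Likelihood if they come from TWO independent sources.
    like_indep = gauss(x_vis, 0.0, v_vis + v_pri) * gauss(x_aud, 0.0, v_aud + v_pri)

    # Posterior probability of a common cause.
    post_common = (like_common * p_common) / (like_common * p_common
                                              + like_indep * (1 - p_common))

    # Location estimates under each causal structure: full fusion is a
    # reliability-weighted average of both cues and the prior; full segregation
    # combines the auditory cue with the prior only.
    fused = (x_vis / v_vis + x_aud / v_aud) / (1 / v_vis + 1 / v_aud + 1 / v_pri)
    segregated = (x_aud / v_aud) / (1 / v_aud + 1 / v_pri)

    # Model averaging: weight the two estimates by the causal posterior.
    return post_common * fused + (1 - post_common) * segregated

# Example: a reliable flash 10 deg to the right and a sound at 0 deg; the
# sound's perceived location is pulled toward the flash (the ventriloquist effect).
print(bci_auditory_estimate(x_vis=10.0, x_aud=0.0, sig_vis=2.0, sig_aud=8.0))
```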

The ventriloquist illusion

In the case of spatial localization, vision is more accurate than audition, so that our perception of the source of a sound in space can be biased towards that of a neighboring visual stimulus occurring at the same time. Rohe and Noppeney used such an experimental paradigm, the “ventriloquist illusion”, in extensive functional MRI sessions (participants were scanned for 18 hours each!). Importantly, they manipulated the reliability of visual inputs in addition to just their spatial localization to probe how well the brain conforms to the predictions of Bayesian causal inference.

Panels A-C illustrate the principle of the experiment used by Rohe and Noppeney; panel D summarizes the main findings. From: Kayser C, Shams L, Multisensory Causal Inference in the Brain, PLOS Biol 2015.

A hierarchy of cortical multisensory processing

Their results, fascinatingly, point toward an organizational hierarchy in the multisensory processing of spatial information: whereas primary visual and auditory cortices mostly processed their respective inputs separately from any concurrent input in the other modality (forced segregation), cortical areas further up in the processing hierarchy (posterior intraparietal sulcus) systematically integrated sensory inputs, regardless of their spatial provenance (forced fusion). Only at the highest stage of sensory processing, in the anterior intraparietal sulcus, did neural activity reflect the uncertainty about where the sound and flash were coming from. Drs. Rohe and Noppeney’s complex study is put into perspective in a very informative and beautifully illustrated Primer by Drs. Christoph Kayser and Ladan Shams, also published in PLOS Biology.


A few questions to the authors

These findings in turn raise a series of questions: how and when does the brain “learn” about the natural statistics of the outside world and the reliability of its own perceptual abilities? Is the situation in the time dimension similar to that in space? I asked Drs. Rohe and Noppeney a few questions about the perspectives opened by their exciting work.

The brain uses a Bayesian framework, incorporating prior knowledge about the world when processing sensory information. In your view, how does the brain acquire that knowledge?

The brain may acquire prior knowledge about the statistical structure of the world at multiple timescales. Some priors may have evolved through evolutionary selection and be innately specified. Other priors may slowly evolve during neurodevelopment, when children are exposed to the statistical structure of sensory signals. Yet numerous studies have demonstrated that even low-level sensory priors can be modified across experimental sessions, suggesting that in many cases the brain constantly adapts prior expectations to the current environmental statistics (Sotiropoulos et al., 2011).

What would be the neuronal underpinnings of that knowledge?

It is largely unknown how the brain implements prior knowledge and expectations. Various mechanisms have been proposed such as spontaneous activity (Berkes et al., 2011), the fraction of neurons encoding a particular feature (Girshick et al., 2011), their response gain and tuning curves or connectivity and top-down projections from higher order areas (Rao and Ballard, 1999). Potentially, the brain may use different mechanisms depending on the particular prior and the timescale of learning. In the multisensory context, it is still unknown whether the brain encodes modality-specific or supramodal priors. Again this may depend on the particular type of prior (e.g. spatial vs. temporal).

The ventriloquist effect is based on the notion that the visual system provides the brain with a more accurate readout of the space around us than the auditory system. This is in contrast to the time dimension, where audition is generally thought to be more precise than vision. Could there be some sort of “temporal ventriloquy” effect where timing judgments made on visual inputs get biased by conflicting auditory inputs?

In the temporal domain, the sound is more likely to bias the visual percept. This is illustrated in the classical flutter driving flicker phenomenon where participants are biased in their judgment of the flicker rate by a concurrent fluttering sound (e.g. Gebhard and Mowbray, 1959). More recent studies have demonstrated that even a single sound that is presented with a temporal offset to the flash can attract the perceived timing of the flash (Vroomen and de Gelder, 2004).

Would you predict that the same set of brain areas that you investigated here would show similar activations?

For spatial ventriloquism we focused on the dorsal visual processing stream, which is known to be involved in spatial processing. A temporal ventriloquist effect may emerge along a temporal processing stream, which is thought to culminate in the right temporoparietal junction (Battelli et al., 2007). However, the temporal ventriloquist effect may not be reflected in a functionally specialized and segregated temporal processing system, but rather emerge in multiple regions, affecting temporal features of the neural response. MEG and EEG studies, with their greater temporal resolution, may thus provide better insights into the neural mechanisms of temporal ventriloquism.

I was wondering if you also looked at reaction times in your study. I would expect that multisensory integration affects reaction times as well as accuracy.

In this particular study, we did not look at response times. However, multisensory integration and Bayesian Causal Inference will indeed also affect response times. Most research to date has either focused on response choices / accuracy or response times. Future research will need to develop models of multisensory integration and segregation that make predictions jointly for both response choices and times (e.g. see: Drugowitsch et al., 2014; Noppeney et al., 2010).


 

References

Rohe, T., & Noppeney, U. (2015). Cortical Hierarchies Perform Bayesian Causal Inference in Multisensory Perception. PLOS Biology, 13(2). DOI: 10.1371/journal.pbio.1002073

Kayser, C., & Shams, L. (2015). Multisensory Causal Inference in the Brain. PLOS Biology, 13(2). DOI: 10.1371/journal.pbio.1002075

Multimodal measures of brain connectivity: how much should they agree?

The definitive version of this post was originally published on March 3, 2015 on the PLOS Neuroscience Community website, where I serve as an editor.


A study recently published in Frontiers in Neurology started an interesting discussion on Twitter. The paper, by Stephen Jones and colleagues from the Cleveland Clinic, tackled a seemingly simple question: do measures of cerebral connectivity derived from different modalities (functional MRI, intracranial EEG, or diffusion tensor imaging) give similar results? To make a long story short, the answer is: not much, as the authors report in Frontiers in Neurology and as Ged Ridgway pointed out on Twitter. Squared correlation coefficients (r-squared) between the connectivity metrics derived from pairs of modalities ranged from 0.001 to 0.20, which is admittedly not very high. The question is: are those observations surprising?


Whether these results are surprising, and whether they make sense at all, requires looking in more detail at what the authors did. In their study, Jones and colleagues used four distinct ways of measuring brain connectivity. Two are based on magnetic resonance imaging (MRI), and are relatively well known, while the other two center on intracranial electrodes placed into the brain of patients with severely disabling seizures during the work-up for epilepsy surgery.

  1. functional connectivity using resting-state functional MRI
  2. structural connectivity using high angular resolution diffusion-weighted MRI and probabilistic tractography
  3. cortico-cortical evoked potentials (CCEPs) evoked by direct, single-pulse electrical stimulation of the brain
  4. simultaneous functional MRI and direct electrical stimulation of the brain through the intracranial electrodes

Intracranial electrodes in human brains

I will spend some time on the last two modalities, with which most researchers are probably not familiar. As I briefly mentioned above, those modalities rely on intracranial electrodes, which are inserted surgically into an epileptic patient’s brain in order to localize precisely the site of origin of seizures. Once in place, the electrodes record the brain’s electrical activity for a few days to a few weeks, until the patient has had a few seizures and the physicians have determined where in the brain the seizures start.

In addition to recording the local EEG, intracranial electrodes can also be used to deliver electrical stimulation to the brain tissue surrounding them. At high frequencies (around 50 Hz) and amplitudes (several milliamperes), and depending on where the electrode is located, direct electrical stimulation can elicit clinically observable phenomena such as muscle contractions, or subjective percepts reported by the patient, such as visual hallucinations.

Cortico-cortical evoked potentials

Stimulation at lower frequencies (around 1 Hz) and amplitudes, on the other hand, is very rarely felt by the patient. The other intracranial electrodes, meanwhile, continue to record the local EEG. By stimulating one electrode with single pulses multiple times, and averaging the responses of the other electrodes, one can obtain an evoked potential in the same fashion as sensory-evoked potentials: the cortico-cortical evoked potential (CCEP). For much more information about CCEPs, check out Mapping human brain networks with cortico-cortical evoked potentials.
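
In practice, obtaining a CCEP is the same averaging procedure used for sensory evoked potentials; here is a generic sketch with toy data, not the authors' processing pipeline:

```python
import numpy as np

fs = 1000                       # sampling rate (Hz), toy value
n_pulses, n_channels = 50, 64   # 50 single-pulse stimulations, 64 recording contacts

# Toy continuous intracranial EEG and the sample index of each stimulation pulse.
ieeg = np.random.randn(n_channels, 10 * 60 * fs)            # 10 minutes of data
pulse_samples = np.sort(np.random.randint(fs, ieeg.shape[1] - fs, n_pulses))

# Cut one epoch per pulse: 100 ms before to 500 ms after the stimulus.
pre, post = int(0.1 * fs), int(0.5 * fs)
epochs = np.stack([ieeg[:, s - pre:s + post] for s in pulse_samples])  # (pulses, ch, time)

# Baseline-correct each epoch, then average across pulses: the CCEP.
baseline = epochs[:, :, :pre].mean(axis=2, keepdims=True)
ccep = (epochs - baseline).mean(axis=0)   # (channels, time) evoked response
```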

Metrics of connectivity between the stimulated and recording electrodes can then be derived from CCEPs. One potential limitation of such an approach is that there are many ways to record and measure CCEPs. The amplitude of the evoked response, for instance, is known to vary as a function of the stimulation amplitude; yet the authors here used a range of amplitudes across stimulation sites and patients. It is even harder to decide what to measure on the CCEP responses themselves, since the neural mechanisms that generate them remain incompletely understood. Here, Jones et al. actually extracted two different metrics from the CCEPs, depending on what they were going to compare CCEPs against: an early-latency amplitude metric (presumably reflecting direct cortico-cortical projections) to compare against diffusion-weighted imaging, and another metric integrating the CCEP trace over a later and longer time period to compare against resting-state fMRI. Stricto sensu, they are therefore not comparing the same thing anymore. Clearly, the potential parameter space, i.e. the total number of metrics that could be extracted from CCEPs (and resting-state fMRI and every other modality, for that matter), is huge; the choice of parameters in a given study could drastically affect connectivity measures.
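
Roughly, the two metrics could be extracted from an averaged CCEP trace as sketched below (the window boundaries are illustrative guesses, not the values used by Jones et al.):

```python
import numpy as np

fs = 1000                       # sampling rate (Hz), toy value
ccep = np.random.randn(600)     # one averaged CCEP trace: 100 ms pre + 500 ms post
stim = 100                      # sample index of the stimulation pulse

# Metric 1: early-latency amplitude (e.g., an N1-like deflection ~10-50 ms),
# meant to capture direct cortico-cortical projections (compared against DWI).
early = ccep[stim + int(0.010 * fs): stim + int(0.050 * fs)]
early_amplitude = np.abs(early).max()

# Metric 2: the response integrated over a later, longer window (e.g., 50-500 ms),
# compared against resting-state fMRI connectivity.
late = ccep[stim + int(0.050 * fs): stim + int(0.500 * fs)]
late_integral = np.abs(late).sum() / fs   # crude area under the rectified curve
```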

Intracranial electrical stimulation during functional MRI

This second electrode-based modality is even less widespread than CCEPs, since the authors are basically the only ones to use it. It amounts to an fMRI measurement of the BOLD signal while electrical stimulation is delivered through the intracranial electrodes. Here, however, the electrical stimulation needs to be delivered at higher intensities and repeated over time in order to cause any measurable BOLD change, and the authors used 20-Hz stimulation. That of course makes comparison with “standard” CCEPs difficult, because we do not know whether the same neuronal mechanisms underlie the responses to 1-Hz and 20-Hz stimulation.

Measures of brain connectivity do not line up… should we expect them to?


The authors report rather low pairwise correlations between connectivity measures, with r-squared values ranging from 0.001 (comparing resting-state fMRI to CCEP-evoked fMRI or to diffusion tensor imaging) to 0.20 (between CCEPs and CCEP-evoked fMRI). Why would these values be so low? The authors acknowledge the above-mentioned parameter space problem. They suggest at length that the low correlations could reflect our inability to precisely localize and co-register the different measurements in brain space. They also thoughtfully write that the optimal measure of connectivity could integrate information from more than one modality and go beyond linear correlations between point-to-point, scalar metrics. In their own words: “This development would represent the next step in the evolution of neuroimaging, in which the imaging biomarker moves from being the images themselves, to a mathematical brain model that is informed by images”.
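
Concretely, each pairwise comparison boils down to correlating two vectors of connectivity values, one per modality, across all electrode pairs; a generic sketch with toy matrices:

```python
import numpy as np
from scipy.stats import pearsonr

n = 60                                    # number of electrode sites (toy)
iu = np.triu_indices(n, k=1)              # unique site pairs, no self-connections

# Toy connectivity matrices from two modalities (e.g., CCEP amplitude and
# resting-state fMRI correlation), made partially related on purpose.
shared = np.random.rand(n, n)
shared = (shared + shared.T) / 2
ccep_conn = shared + 0.8 * np.random.rand(n, n)
rsfmri_conn = shared + 0.8 * np.random.rand(n, n)

r, _ = pearsonr(ccep_conn[iu], rsfmri_conn[iu])
print(f"r = {r:.2f}, r-squared = {r**2:.2f}")   # the kind of value reported in the paper
```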

Indeed, as PractiCal fMRI’s tweet stated, the different modalities explored here measure very different facets of the brain’s anatomy and physiology, and how well their results should line up is undetermined. For instance, diffusion tensor imaging reveals major fiber tracts in the white matter, whereas functional MRI highlights statistical dependencies between the very slow fluctuations of blood supply to brain regions. To take but one example, functional MRI generally shows very strong connectivity between the left and right hippocampi, whereas the (direct) anatomical connections between these structures are sparse. And that’s only taking into account the temporally sluggish-to-static MRI-based approaches to connectivity; factoring in neurophysiology-based methods, with their millisecond temporal resolution, basically adds a whole other dimension to the dataset. Also, the data from the different modalities were collected at different times, thus precluding any analysis of how those measures evolve together over time.

To conclude, the different modalities used in this study likely reflect different aspects of the brain’s structure and function, and we should probably not expect the connectivity metrics to line up perfectly.

Critical omissions

A couple of things are missing from Jones et al.’s study. First, they could have added resting-state intracranial EEG to the list of modalities that they investigated, since they already had the data. The authors also fail to refer to previous work that investigated the very same questions—and shed an interesting light on those correlations. Specifically, Conner et al. reported on the correlation between CCEPs and MRI tractography in the language system and found an average r-squared value of 0.41. Keller et al. explored the correlations between CCEPs and resting-state fMRI connectivity and obtained overall r-squared values between 0.04 and 0.1. Importantly, when they focused on the language system and only considered CCEPs whose amplitude exceeded a significance criterion derived from the baseline intracranial EEG, Keller et al. found much higher r-squared values, ranging from about 0.25 to 0.50. Thus, pairwise correlations of brain connectivity across modalities were much higher than reported by Jones et al. in cases where prior knowledge strongly suggests the existence of connections.

Despite those omissions, the study by Jones et al. is valuable thanks to the richness of the dataset that they generated. With this in mind, the neuroscientific community would benefit tremendously if the authors would make this vast dataset publicly available, so that others can design new ways to extract and combine connectivity metrics and thus shine further light on the structural and functional organization of the human brain.

References

Jones, S., Beall, E., Najm, I., Sakaie, K., Phillips, M., Zhang, M., & Gonzalez-Martinez, J. (2014). Low Consistency of Four Brain Connectivity Measures Derived from Intracranial Electrode Measurements. Frontiers in Neurology, 5. DOI: 10.3389/fneur.2014.00272

NATUS XLTEK EMU128FS breakout box pin-out

This very short and technical post is about a piece of equipment for recording intracranial EEG, made by XLTEK, a company now part of Natus. The piece of equipment, called “breakout box”, is part of the EMU128FS system that allows recording up to 128 channels of EEG. That breakout box is a passive relay that takes “touchproof” connectors as inputs on the front side (carrying EEG signals from the patient) and relays them to four DB37 connectors on the back side towards the recording computer (figure 1). The pin-out refers to the way each touchproof connector is relayed to each pin of the DB37 connectors (figure 2).

Figure 1. Top: the front side of the XLTEK EMU128FS breakout box shows the touchproof connector inputs. Bottom: the back side shows the four DB37 connectors.

Figure 2. The pin-out of the XLTEK EMU128FS breakout box. The pin numbers refer to the female input connectors on the headbox (further downstream than the breakout box).

The pin-out information is not available in the system’s documentation. I am sharing it here because I believe it could be useful for advancing neuroscientific research. This information has not been verified or endorsed by XLTEK or Natus in any fashion. If you own an EMU128FS system, it would take you about 10 minutes and a multimeter to obtain this information yourself (and if you do, it would be great if you could share it so that we can compare).
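
If you do map your own box, a plain machine-readable table is probably the easiest thing to share and compare; here is a minimal sketch of such a format (the entries are placeholders, not the actual pin-out):

```python
import csv

# Placeholder mapping: touchproof input label -> (DB37 connector, pin number).
# These values are NOT the real pin-out; substitute what your multimeter tells you.
pinout = {
    "C1": ("DB37-A", 1),
    "C2": ("DB37-A", 2),
    # ... one entry per touchproof input, up to 128 channels
}

with open("emu128fs_pinout.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["touchproof_input", "db37_connector", "db37_pin"])
    for channel, (connector, pin) in sorted(pinout.items()):
        writer.writerow([channel, connector, pin])
```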

Touchscreen virtuosos: how smartphone use influences the brain maps of our fingertips

The definitive version of this post was originally published on February 4, 2015 on the PLOS Neuroscience Community website, where I serve as an editor.


Twenty years ago, very few among us had ever used touchscreens; now, there is one in every pocket. Their ease of use provides a seamless interface with mobile phones, tablets and computers, and we spend literally hours every day interacting with those devices through touchscreens. How does all that tapping and swiping influence the brain? Quite a bit, according to scientists at the University of Fribourg and the Swiss Federal Institute of Technology of Zurich, both in Switzerland, who recently reported their findings in Current Biology. Using EEG and somatosensory evoked potentials (an EEG measure of the brain’s response to tactile stimulation), they found that users of smartphones had increased responses to stimulation of their thumb. This increase was directly proportional to both the average intensity of smartphone use and its day-to-day fluctuations. The researchers conclude that sensory processing in our brains is continuously updated by our use of modern technology.

Touchscreen virtuosos

Previous research had shown that musicians have increased cerebral responses to finger touch, as do blind people who read Braille. Here, Anne-Dominique Gindrat and her colleagues recruited 38 university students, 27 of whom owned a smartphone, while the remaining 11 still used old-technology mobile phones (those with numbered buttons on them, if you remember). They first confirmed that the owners of touchscreen smartphones used their devices for a much longer time each day, and that they predominantly manipulated them with their right thumb. Then, while recording their EEG, the researchers delivered tactile stimulation to the tip of each of the first three fingers. They found a considerable increase in the magnitude of the brain responses, most markedly for the thumb, but also for the index and middle fingers. Based on the location and timing of the responses, they could ascertain that the changes involved the representation of the fingers in the primary somatosensory cortex.

Mobile phones old and new

Updating cortical maps

Strikingly, the changes in the brain's response to touch correlated with each participant's recent history of smartphone use. To estimate that use over a 10-day period, Gindrat and colleagues cleverly turned to battery logs: they had the participants install an app that recorded battery usage every ten minutes when the phone was in use. Hourly phone usage, as well as the time elapsed since the period of most intense use during the last 10 days, were extracted from those logs and used as regressors on the brain responses. The researchers found that the more the volunteers had used their smartphone in the days before the EEG recording session, the more intense their brain responses to tactile stimulation of the thumb. Similarly, the closer the period of most intense touchscreen use was to the recording session, the more pronounced the changes in brain responses. Results for the index finger were along the same lines, although less pronounced. By contrast, the researchers found that the total time of smartphone ownership (a measure that they termed "age of inception") did not meaningfully affect tactile brain responses. These findings strongly suggest that the brain continuously updates its sensory representations of the environment to reflect day-to-day variations in sensory inputs.
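
To make the regression concrete, here is a minimal sketch (with synthetic numbers, not the study's data or pipeline) of how one might regress the thumb's evoked-response amplitude on the two battery-log regressors.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 27  # number of smartphone users in the study

# Synthetic stand-ins for the battery-log regressors and the evoked response.
hourly_usage = rng.uniform(0.5, 4.0, n)       # mean daily hours of phone use
time_since_peak = rng.uniform(0, 240, n)      # hours since most intense use (last 10 days)
thumb_response = (2.0 + 0.8 * hourly_usage
                  - 0.005 * time_since_peak
                  + rng.normal(0, 0.3, n))    # evoked-response amplitude (arbitrary units)

# Ordinary least squares with both regressors plus an intercept.
X = sm.add_constant(np.column_stack([hourly_usage, time_since_peak]))
print(sm.OLS(thumb_response, X).fit().summary())
```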

No loss of lateral inhibition

Turning to the potential mechanisms for this striking plasticity, Gindrat and colleagues explored whether it could be due to a loss of lateral inhibition. Simply put, brain responses to simultaneous stimulation of the thumb and index fingers are normally smaller than what would be expected by summing the brain responses to either finger in isolation. This phenomenon is presumably explained by lateral inhibitory interactions between the cortical representations of neighboring fingers. Here, the researchers found that responses to combined thumb and index finger stimulation were indeed smaller than expected in smartphone users; in fact, that reduction was even more pronounced than in non-smartphone users.
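
As a concrete illustration of the logic (my own sketch with made-up amplitudes, not necessarily the exact metric used in the paper), one can quantify this suppression as the fraction by which the combined response falls short of the sum of the single-finger responses.

```python
def suppression_index(resp_thumb, resp_index, resp_both):
    """Fraction by which the response to simultaneous stimulation falls short
    of the sum of the single-finger responses (0 = no suppression)."""
    expected = resp_thumb + resp_index
    return (expected - resp_both) / expected

# Made-up response amplitudes (arbitrary units), purely for illustration.
print(suppression_index(resp_thumb=3.0, resp_index=2.5, resp_both=4.0))  # ~0.27
```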

Overall, the results of this study suggest that, in smartphone users, the representations of the thumb and index fingers in the somatosensory cortex are both enhanced and better individualized. This likely reflects the importance of somatosensory inputs and feedback for the fine motor behaviors that are required to efficiently control smartphones and other computers using a touchscreen.

A caveat regarding waveform-based EEG analysis

No scientific study is perfect, and the major weakness of the present work, in my opinion, is that all the analyses were based on the EEG waveforms recorded at individual electrodes; some of the statistical tests were even performed directly on those waveforms, which are not entirely adequate indexes of the underlying neural activity. Modern EEG analysis incorporates information from every electrode into whole-head maps, which better reflect cerebral activity (much more on this technical but important subject in this review article). These maps can then be used to estimate the active sources in the brain that generated them (EEG source imaging). Such an approach could have better revealed the cerebral sites where plasticity took place, as well as the neural correlates of the intensity of recent smartphone use.
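
For readers unfamiliar with map-based measures, the simplest one is the global field power, a reference-free summary of a whole-head map computed as the standard deviation across all electrodes at each time point. A minimal sketch, unrelated to this study's actual data:

```python
import numpy as np

def global_field_power(eeg):
    """Global field power: the spatial standard deviation across electrodes
    at each time point, for data of shape (n_electrodes, n_timepoints)."""
    return eeg.std(axis=0)

# Illustrative data: 64 electrodes, 500 time points of simulated EEG.
rng = np.random.default_rng(2)
eeg = rng.normal(size=(64, 500))
print(global_field_power(eeg).shape)  # (500,)
```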

Nevertheless, this study is a beautiful experimental confirmation of the notion that the brain dynamically and continuously adapts to changes in the environment and in sensorimotor experience.

References

Gindrat, A., Chytiris, M., Balerna, M., Rouiller, E., & Ghosh, A. (2015). Use-Dependent Cortical Processing from Fingertips in Touchscreen Phone Users. Current Biology, 25 (1), 109-116. DOI: 10.1016/j.cub.2014.11.026

How neuroscientists learned to stop worrying and love the bomb

The definitive version of this post was originally published on February 4, 2015 on the PLOS Neuroscience Community website, where I serve as an editor.


Contemporary research in neuroscience is constantly adding to and refining our knowledge of how the brain works. One of the tenets of that knowledge during most of the 20th century (that our brains are supplied from birth with a finite number of neurons that only dwindles with age and disease) was finally refuted in the 1990s, thanks to the paradigm-shifting work of Fred Gage and colleagues. However, quantifying how many neurons were born throughout life in the different regions of the human brain remained impossible. That is, until Jonas Frisen, of the Karolinska Institute in Stockholm, Sweden, and his colleagues had a wildly brilliant idea: using the spike in the concentration of radioactive carbon in the Earth's atmosphere caused by above-ground nuclear testing between 1945 and 1963. In an essay recently published in PLOS Biology, Frisen and his colleague Aurelie Ernst review what this highly original approach has taught us.

Dating the birth of neurons in the human brain

As a consequence of nuclear explosions in the mid-20th century, the atmospheric concentration of the radioactive carbon isotope C-14 increased massively, before decreasing rapidly after most above-ground nuclear testing was banned in 1963. Proliferating cells (including neuronal precursors) incorporate carbon atoms into their DNA, and since this carbon ultimately comes from our environment, the amount of C-14 incorporated into a new neuron depends on the atmospheric concentration of C-14 at the time of its birth. The rapid changes in that concentration caused by nuclear testing thus provide a time scale of sorts that allows the birth of a new neuron to be dated quite precisely (the principle has been beautifully illustrated in a Perspective published by Science).
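
The dating step itself amounts to reading the "bomb curve" backwards: the C-14 level measured in a neuron's genomic DNA is mapped to the year when the atmosphere had that same level. Here is a minimal sketch with made-up curve values, purely for illustration and restricted to cells born on the post-1963 declining limb of the curve:

```python
import numpy as np

# Made-up atmospheric Delta-14C values (per mil) for the declining limb of the
# bomb curve; real analyses use measured atmospheric calibration data.
years = np.array([1965, 1975, 1985, 1995, 2005, 2015])
delta14c = np.array([700, 350, 200, 110, 60, 20])

def birth_year_from_dna(measured_delta14c):
    """Interpolate the year at which the atmosphere matched the Delta-14C level
    measured in genomic DNA (np.interp needs an increasing x axis, hence the flip)."""
    return np.interp(measured_delta14c, delta14c[::-1], years[::-1])

print(birth_year_from_dna(150))  # an illustrative DNA measurement (prints ~1990.6)
```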

New neurons are known to appear throughout adult life in several brain structures, such as the olfactory bulb (OB) of rodents; the dentate gyrus (DG), which is part of the hippocampus; and the striatum, where adult neurogenesis is most prominent in humans. From Ernst and Frisen, PLOS Biol 2015.

In their essay, which is well worth reading, Ernst and Frisen focus for the most part on what this new technique, along with others, has added to our understanding of how the human brain renews some of its neuronal populations throughout life. They highlight in particular the commonalities and differences in the dynamics of neuronal renewal between humans and other mammals.

An interview with Jonas Frisen

Dr. Jonas Frisen

Dr. Frisen kindly agreed to answer a few questions, starting with how that brilliant idea came to him.

How did the idea come to you that above-ground nuclear testing during the Cold War would create an “atomic clock” of sorts that would allow dating the birth of cells in the human brain?

The idea came out of the frustration of not being able to study cell turnover in humans. In archeology, they retrospectively birth date specimens by carbon dating. This builds on the radioactive decay of C-14. I started reading up about this, thinking that maybe we could carbon date cells in the same way. This proved to be a very naive thought, as the radioactive half-life of C-14 is almost 6000 years, which provides a miserable resolution for the life span of cells. When I read a little more about C-14, I came across the huge increase created by the nuclear bomb tests, followed by a steep drop as C-14 diffused from the atmosphere. When I saw that, I knew that we had to try the strategy. So, it is a pure coincidence that we use the same isotope, C-14, for birth dating cells as they use in archeology. Whereas in archeology they take advantage of the radioactive decay, we take advantage of the varying concentrations created by the Cold War.

The Cold War has created a “time window of opportunity” for the retrospective birth dating of neurons. How long is this window and when will it close? Could other events (e.g. the nuclear catastrophe in Chernobyl, or natural accidents such as volcanic eruptions) give rise to similar opportunities?

The window is closing gradually, and it is not possible to say with any precision when it will close. However, tissue collected in biobanks now or earlier will be available for analysis for a long time to come. I am afraid that we are not aware of any other event that would create a similar pulse-chase-like situation, with a marker that integrates into DNA.

Birth dating of neurons is currently only possible retrospectively, i.e. after death. Do you foresee any technical developments that would allow measuring neuronal birth in vivo?

That would be extremely valuable. I do not see how to do it today, but I wouldn’t be surprised if it comes.

Adult neurogenesis in humans is now a given. Will we one day be able to influence this process for therapeutic purposes, or even to “improve” the functions of the healthy brain?

I am optimistic that there will be therapeutic strategies in the future that lead to some replacement of neurons lost in disease.

References

Ernst, A., & Frisén, J. (2015). Adult Neurogenesis in Humans: Common and Unique Traits in Mammals. PLOS Biology, 13 (1). DOI: 10.1371/journal.pbio.1002045