
Posts Tagged ‘Development’

Just a pointer to onetime University of Edinburgh Professor C.H. Waddington’s 1972 Gifford Lecture on framing the genes vs. environment debate over human behavior.  Although Waddington is famous for his work on population genetics and evolutionary change over time, several of his concepts are experiencing a resurgence in the neuroimaging and psychological development literatures these days.

One term, CHREOD, combines the Greek words for “determined” or “necessary” and for “pathway.”  It describes a system that returns to a steady trajectory, in contrast to homeostasis, which describes a system that returns to a steady state.  Recent reviews on the development of brain structure have suggested that the “trajectory” (the actual term “chreod” hasn’t survived) – as opposed to any specific time point – is the essential phenotype for understanding how genes relate to psychological development.

Another term, CANALIZATION, refers to the ability of a population to produce the same phenotype regardless of variability in its environment or genotype.  A recent neonatal twin study found that the heritability of grey matter in neonatal humans was rather low.  However, it seems to rise until young adulthood – as genetic programs presumably kick in – and then decline again.  Articles by neurobiologist Jay N. Giedd and colleagues have suggested that this may reflect Waddington’s idea of canalization: the relative influence of genes vs. environment may change over time in ways that buffer against mutations and/or environmental insults, ensuring the stability and robustness of functions and processes that are both appropriate for survival and necessary for future development.

A third Waddington term, EPIGENETIC LANDSCAPE, refers to the limitations on how much influence genes and environment can have on the development of a given cell or structure.  The environment can alter the differentiation, migration, connectivity, etc. of neurons by only so much; likewise, most genetic mutations have effects that are constrained or compensated for by the larger system.

It’s amazing to me how well these evolutionary genetic concepts capture the issues at the nexus of genetics and cognitive development.  From his lecture, it is clear that Waddington was well aware of this.  Amazing to see a conceptual roadmap laid out so long ago.  Digging the book cover art as well!


Read Full Post »


Coping with fear and anxiety is difficult.  At times when one’s life, livelihood or loved ones are threatened, we naturally heighten our senses and allocate our emotional and physical resources for conflict.  At times when all is well, and resources, relationships and relaxation time are plentiful, we should unwind and enjoy the moment.  But most of us don’t.  Our prized cognitive abilities to remember, relive and ruminate on the bad stuff out there are just too well developed – and we suffer, some more than others (see Robert Sapolsky’s book “Why Zebras Don’t Get Ulcers” and related video lecture – hint: they don’t get ulcers because they don’t have the cognitive ability to ruminate on past events).  Such may be the flip side to our (Homo sapiens) super-duper cognitive abilities.

Nevertheless, we try to understand our fears and anxieties and their bio-social-psychological bases. A recent paper entitled “A Genetically Informed Study of the Association Between Childhood Separation Anxiety, Sensitivity to CO2, Panic Disorder, and the Effect of Childhood Parental Loss” by Battaglia et al. [Arch Gen Psychiatry. 2009;66(1):64-71] brought to mind many of the complexities in beginning to understand how some individuals come to suffer more emotional anguish than others.  The research team addressed a set of emotional difficulties that psychiatrists categorize as “panic disorder”, which involves sudden attacks of fear, sweating, racing heart, shortness of breath, etc., and can begin to occur in early adulthood.

Right off the bat, it seems that one of the difficulties in understanding such emotional states is the set of conventions (important for $$ billing purposes) used to describe the relationship between “healthy” and “illness” or “disorder”.  I mean, honestly, who hasn’t experienced what could be described as a mild panic attack once or twice?  I have, but perhaps that doesn’t amount to a disorder.  A good read on the conflation of normal stress responses and disordered mental states is “Transforming Normality into Pathology: The DSM and the Outcomes of Stressful Social Arrangements” by Allan V. Horwitz.

Another difficulty in understanding how and why someone might experience such a condition has to do with the complexities of childhood experience (not to mention genes). Child development and mental health are inextricably related, yet the relationship is hard to understand.  Certainly, the function of the adult brain is the product of countless developmental unfoldings that build upon one another, and there is ample evidence that when healthy development is disrupted in a social or physical way, the consequences can be very unfortunate and long-lasting. Yet our ability to make sense of how and why an individual is having mental and/or emotional difficulty is limited.  It’s a complex, interactive and emergent set of processes.

What I liked about the Battaglia et al. article was the way in which they acknowledged all of these complexities and – using a multivariate twin study design – tried to objectively measure the effects of genes and environment (early and late) as well as candidate biological pathways (sensitivity to carbon dioxide).  The team gathered 346 twin pairs (an equal mix of MZ and DZ) and assessed aspects of early and late emotional life as well as sensitivity to the inhalation of 35% CO2 (which feels rather like suffocating and is known to activate fear circuitry, perhaps via the ASIC1a gene).   The basic notion was to tease apart the correlations between early emotional distress and adult emotional distress, as well as with a very specific physiological response (fear elicited by breathing CO2).  If there were no correlation or covariation between early and late distress (or the physiological response), then perhaps these processes are not underlain by any common mechanism.
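To make that twin-based logic concrete, here is a minimal sketch – simulated data only, not the authors’ actual analysis – of the cross-twin, cross-trait comparison that multivariate twin designs rest on: if trait 1 in one twin correlates with trait 2 in the co-twin more strongly in MZ than in DZ pairs, additive genetic factors likely contribute to the covariation.

```python
# Toy illustration of a multivariate twin comparison (simulated data).
# All numbers are invented; this is not the Battaglia et al. analysis,
# just the inferential logic behind it.
import numpy as np

rng = np.random.default_rng(0)

def simulate_pairs(n_pairs, r):
    """Draw (trait1 in twin A, trait2 in twin B) with correlation r."""
    cov = [[1.0, r], [r, 1.0]]
    data = rng.multivariate_normal([0.0, 0.0], cov, size=n_pairs)
    return data[:, 0], data[:, 1]

# Under an additive genetic model, the DZ cross-correlation should be
# about half the MZ cross-correlation (DZ twins share ~50% of genes).
mz_anx, mz_panic = simulate_pairs(173, r=0.40)
dz_anx, dz_panic = simulate_pairs(173, r=0.20)

print("MZ cross-twin/cross-trait r:", round(np.corrcoef(mz_anx, mz_panic)[0, 1], 2))
print("DZ cross-twin/cross-trait r:", round(np.corrcoef(dz_anx, dz_panic)[0, 1], 2))
```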

However, the team found that there was covariation between early-life emotion (criteria for separation anxiety disorder) and adult emotion (panic disorder), as well as the physiological fear response elicited by CO2.  Indeed, there seems to be a common, or continuous, set of processes whose disruption early in development can manifest as emotional difficulty later in development.  Furthermore, the team suggests that the underlying unifying or core process is heavily regulated by a set of additive genetic factors.  Lastly, the team finds that the experience of parental loss in childhood increased the strength of the covariation between early emotion, late emotion and CO2 reactivity (though not via an interaction with genetic variation).  The authors note several limitations and caution against over-interpreting these data – which come from the largest study of its kind to date.

For individuals who are tangled in persistent ruminations and emotional difficulties, I don’t know if these findings help.  They seem to bear out some of the cold, cruel logic of life and evolution – that our fear systems are great at keeping us alive when we’ve had adverse experience in childhood, but not necessarily happy.  On the other hand, the covariation is weak, so there is no such destiny in life, even when dealt unfortunate early experience AND genetic risk.  I hope that learning about the science might help folks cope with such cases of emotional distress.


Read Full Post »

DON’T tell the grant funding agencies, but, in at least one way, the effort to relate genetic variation to individual differences in cognitive function is a totally intractable waste of money.

Let’s say we ask a population of folks to perform a task – perhaps a word memory task – and then use neuroimaging to identify the areas of the brain that (i) were associated with performance of the task, and (ii) were also associated with genetic variation in the population.  Indeed, there are already examples of just this type of “imaging-genetic” study in the literature.  Such studies form a crucial translational link in understanding how genes (whose biochemical functions are most often studied in animal models) relate to human brain function (usually studied with cognitive psychology). However, do these genes relate to just this task? What if subjects were recalling objects? Or feelings?  What if subjects were recalling objects, experiences or feelings from their childhoods?  Of course, there are thousands of common cognitive operations one’s brain routinely performs and, hence, thousands of experimental paradigms that could be used in such “imaging-genetic” gene association studies.  At more than $500/hour in imaging costs (and some paradigms last up to 2 hours), the translational genes-to-cognition endeavor could get expensive!
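How expensive? A quick back-of-the-envelope calculation makes the point; only the $500/hr and 2-hr figures come from the text, the cohort and paradigm counts are hypothetical.

```python
# Rough cost arithmetic for exhaustive task-based imaging-genetics.
cost_per_hour = 500        # USD, figure quoted above
hours_per_paradigm = 2     # upper bound quoted above
subjects = 100             # a modest imaging-genetics cohort (hypothetical)
paradigms = 1000           # "thousands" of candidate cognitive tasks

total_usd = cost_per_hour * hours_per_paradigm * subjects * paradigms
print(f"${total_usd:,}")   # -> $100,000,000
```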

DO tell the grant funding agencies that this may not be a problem any longer.

The recent paper by Liu and colleagues, “Prefrontal-Related Functional Connectivities within the Default Network Are Modulated by COMT val158met in Healthy Young Adults” [doi: 10.1523/jneurosci.3941-09.2010], suggests an approach that may simplify matters.  Their approach still involves genotyping (in this case for rs4680) and neuroimaging.  However, instead of administering a specific cognitive task, the team asks subjects to lie in the scanner – and do nothing.  That’s right – nothing – just lie still with eyes closed, let the mind wander and not think about anything in particular – for a mere 10 minutes.  Hunh?  What the heck can you learn from that?

It turns out that one can learn a lot.  This is because the neural pathways that the brain uses when you are actively doing something (a word recall task) are largely intact even when you are doing nothing.  Your brain does not “turn off” when you are lying still with your eyes closed and drifting in thought.  Rather, it slips into a kind of default pattern, described in studies of “default networks” or “resting-state networks”, where wide-ranging brain circuits remain dynamically coupled and actively exchange neural information.  One really great paper that describes these networks is a free-and-open article by Hagmann et al., “Mapping the Structural Core of Human Cerebral Cortex” [doi: 10.1371/journal.pbio.0060159], from which I’ve lifted their Figure 1 above.  The work by Hagmann et al. and others shows that the brain has a sort of “connectome” with thousands of “connector hubs” or nodes that remain actively coupled (meaning that if one node fires, the other node will fire in a synchronized way) when the brain is at rest and when it is actively performing cognitive operations.  In a few studies, it seems that the strength of functional coupling in certain brain areas at rest is correlated (positively and negatively) with the activation of these areas when subjects are performing a specific task.
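For the curious, the “functional coupling” between two nodes is often quantified simply as the correlation of their slow BOLD fluctuations over the scan. A minimal sketch with synthetic signals (not real fMRI data):

```python
# Sketch: "functional connectivity" as the correlation between two regions'
# resting BOLD time series. Signals below are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(300)                         # ~300 volumes = 10 min at a 2 s TR
shared = np.sin(2 * np.pi * 0.01 * t)      # slow (<0.1 Hz) shared fluctuation
node_a = shared + 0.5 * rng.normal(size=t.size)   # region A = shared + noise
node_b = shared + 0.5 * rng.normal(size=t.size)   # region B = shared + noise

fc = np.corrcoef(node_a, node_b)[0, 1]
print(f"resting coupling r = {fc:.2f}")    # high r -> the nodes are "coupled"
```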

In the genetic study reported by Liu and colleagues, genotype (N=57) at the dopaminergic COMT gene correlated with differences in the functional connectivity (synchronization of firing) of nodes in the prefrontal cortex.  This result is eerily similar to results found for a number of specific tasks (N-back, Wisconsin Card Sorting, gambling, etc.) where COMT genotype was correlated with differential activation of the frontal cortex during the task.  So it seems that one imaging paradigm (lie still and rest for 10 minutes) provided insights comparable to several lengthy (and diverse) activation tasks.  Perhaps this is the case. If so, might it provide a more direct route to linking genetic variation with cognitive function?

Liu and colleagues do not comment on this proposition directly, nor do they seem to over-interpret their results in the way I have editorialized things here.  They very thoughtfully point out the ways in which the networks they’ve identified are similar to and different from the published findings of others.  Certainly, this study and the others like it are the first in what might be a promising new direction!


Read Full Post »

Last year I dug a bit into the area of epigenetics (indexed here) and learned that the methylation (CH3) and acetylation (COCH3) of genomic DNA & histones, respectively, can have dramatic effects on the structure of DNA and its accessibility to transcription factors – and hence – gene expression.  Many of the papers I covered suggested that the environment can influence the degree to which these so-called “epigenetic marks” are covalently bonded onto the genome during early development.  Thus, the thinking goes, the early environment can modulate gene expression in ways that are long-lasting – even transgenerational.  The idea is a powerful one to be sure.  And a scary one as well, as parents who read this literature may fret that their children (and grandchildren) can be epigenetically scarred by early nutritional, physical and/or psycho-social stress.  I must admit that, as a parent of young children myself, I began to wonder if I might be negatively influencing the epigenome of my children.

I’m wondering how much physical and/or social stress is enough to cause changes in the epigenome.  Does the concern about epigenetics apply only to severe stress, or to run-of-the-mill forms of stress as well?  How much do we know about this?

This year, I hope to explore this line of inquiry further.  For starters, I came across a fantastic paper by Fraga et al. entitled “Epigenetic differences arise during the lifetime of monozygotic twins” [doi:10.1073/pnas.0500398102].   The group carries out a remarkably straightforward and time-honored approach – a twin study – to ask how much identical twins differ at the epigenetic level.  Since identical twins have the same genome sequence, any differences in their physiology, behavior, etc. are, strictly speaking, due to the way in which the environment (from the uterus to adulthood) shapes their development.  Hence, the team of Fraga et al. can compare the amount and location of methyl (CH3) and acetyl (COCH3) groups to see whether the environment has differentially shaped the epigenome.

An analysis of some 40 identical twin pairs from ages 3 to 74 showed that – YES – the environment, over time, does seem to shape the epigenome (in this case, of lymphocytes).  The most compelling evidence for me was in Figure 4, where the team used a method known as Restriction Landmark Genomic Scanning (RLGS) to compare patterns of methylation in a genome-wide manner.  Using this analysis, the team found that the epigenomes of the oldest twin pairs showed about 2.5 times as many differences as those of the youngest pairs.  These methylation differences also correlated with gene expression differences (older pairs had more gene expression differences), and the individuals who showed the lowest levels of methylation also had the highest levels of gene expression.  Furthermore, the team found that twin pairs who lived apart and had more differences in life history were more likely to have epigenetic differences.  Finally, measures of histone acetylation were consistent with this gradient of epigenetic change over time and life-history distance.
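As a cartoon of that Figure 4 comparison, one can imagine counting the genome-wide sites at which co-twins’ methylation marks disagree, with a per-site “drift” probability that grows with age and time apart. A sketch with invented numbers, not the RLGS data:

```python
# Toy model of epigenetic drift between co-twins (invented numbers):
# count the sites where the twins' methylation marks differ.
import numpy as np

rng = np.random.default_rng(2)

def pairwise_differences(n_sites, drift):
    """drift = chance a site's mark has flipped in one twin but not the other."""
    twin1 = rng.random(n_sites) < 0.5          # methylated (True) or not
    flipped = rng.random(n_sites) < drift
    twin2 = np.where(flipped, ~twin1, twin1)
    return int(np.sum(twin1 != twin2))

print("young pair differences:", pairwise_differences(10_000, drift=0.02))
print("old pair differences:  ", pairwise_differences(10_000, drift=0.05))  # ~2.5x more
```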

Thus it seems that, as everyday life progresses, the epigenome changes too.  So, perhaps, one does not need extreme forms of stress to leave long-lasting epigenetic marks on the genome?  Is this true during early life (where the team did not see many differences between pairs)?  And in the brain (the team focused mainly on lymphocytes)?  Are the differences between twins due to the creation of new environmentally-mediated marks, or to the faulty passage of existing marks from dividing cell to cell over time?  It will be fun to seek out information on this.


Read Full Post »

Some quick sketches that might help put the fast-growing epigenetics and cognitive development literature into context.  Visit the University of Utah’s Epigenetics training site for more background!

The genome is just the A, G, T, C bases that encode proteins and other mRNA molecules.  The “epi”genome comprises various modifications to the DNA – such as methylation (at C residues) – and the acetylation of histone proteins.   These changes help the DNA form various secondary and tertiary structures that can facilitate or block the interaction of DNA with the transcriptional machinery.

When DNA is highly methylated, it generally is less accessible for transcription and hence gene expression is reduced.  When histone proteins (the purple blobs that help DNA coil into a compact shape) are acetylated, the DNA is much more accessible and gene expression goes up.
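Here is that rule of thumb reduced to a toy formula of my own devising – a simplification for intuition, not a quantitative model from the literature: expression tracks accessibility, which falls with DNA methylation and rises with histone acetylation.

```python
# Toy accessibility model (my own simplification, not from the literature):
# methylation closes chromatin, acetylation opens it.
def relative_expression(methylation, acetylation):
    """Inputs in [0, 1]; returns a unitless 0-1 expression score."""
    accessibility = (1.0 - methylation) * (0.5 + 0.5 * acetylation)
    return accessibility

print(relative_expression(methylation=0.9, acetylation=0.1))  # ~0.06, silenced
print(relative_expression(methylation=0.1, acetylation=0.9))  # ~0.86, active
```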

We know that proper epigenetic regulation is critical for cognitive development because mutations in MeCP2 – a protein that binds to methylated C residues – lead to Rett syndrome.  MeCP2 is normally responsible for binding to methylated DNA and recruiting histone de-acetylases (HDACs) to help the DNA coil and condense into a closed form that is inaccessible for gene expression (related post here).

When DNA is accessible for gene expression, it appears that – during brain development – relatively more synaptic spines are produced (related post here).  Is this a good thing? Rett syndrome would suggest that – NO – too many synaptic spines and too much excitatory activity during brain development may not be optimal.  Neither is too little excitatory (too much inhibitory) activity with too few synaptic spines.  It is likely that you need just the right balance (related post here). Some have argued (here) that autism & schizophrenia are consequences of too many & too few synapses during development, respectively.

The sketch above illustrates a theoretical conjecture – not a scenario that has been verified by extensive scientific study. It tries to explain why epigenetic effects can, in practice, be difficult to disentangle from true genetic effects (changes in the A, G, T, C sequence).  For one thing, a mother’s experience (extreme stress, malnutrition, chemical toxins) can – based on some evidence – exert an effect on the methylation of her child’s genome.  Keep in mind that methylation is normal and widespread throughout the genome during development.  However, in this scenario, if the daughter’s behavior or physiology were influenced by such methylation, then she could, in theory, upon reaching reproductive age, expose her developing child to an environment that leads to altered methylation (shown here on the granddaughter’s genome).  Thus, an epigenetic change would look much as if a genetic variant were being passed from one generation to the next, but such a genetic variant need not exist (related posts here, here) – it’s an epigenetic phenomenon.  Genes such as BDNF have been the focus of many genetic/epigenetic studies (here, here) – however, much, much more work remains to determine just how much stress/malnutrition/toxin exposure is enough to cause such multi-generational effects.  Disentangling the interaction of genetics with the environment (and its influence on the epigenome) is a complex task, and it is very difficult to prove the conjecture/model above, so be sure to read the literature and popular press on these topics carefully.


Read Full Post »

Image: Darwin’s finches of the Galapagos (via Wikipedia)

In his book The Beak of the Finch, Jonathan Weiner describes the great diversity of finches on the Galapagos Islands – so much diversity that Darwin himself initially took the finch variants to be completely different birds (wrens, mockingbirds, blackbirds and “gross-bills”).  It turns out that one of the pivotal events in Charles Darwin‘s life was his work in 1837 with the great ornithologist John Gould, who advised that the birds were actually closely related finches, each specific to a separate island!

Fast-forward to 2009, and we are well on our way to understanding how closely related species can, via natural selection of genetic variation, diverge across space and time. The BMP4 and CaM genes, for example, have been associated with beak morphology in what are now known as Darwin’s Finches.  Wonderful indeed, but now consider, for a moment, the variability – not of finch beaks – but of human cognition.

If you’ve ever been part of a team or group project at work or school, you know that very few people THINK just like you.  Indeed, variability in human cognition can be the source of a lot of frustration.  Let’s face it: people have different experiences stored away (in a highly distributed fashion) in their memory banks, and each person’s brain is extensively wired with trillions of synapses.  Of course nobody thinks like you!  How could such a complex organ function exactly the same way in 2 separate individuals?

Perhaps then, if you were an alien visitor (as Darwin was to the Galapagos Islands) and you watched 5 separate individuals devise a plan to – oh, let’s just say, improve healthcare accessibility and affordability – and you measured individuals based solely on their “thinking patterns”, you might conclude (as Darwin did) that you were dealing with 5 separate “species”.  Just flip the TV between FOX, CNN, CNBC, CSPAN and MSNBC if you’re not convinced!

However, if you were to take a more in-depth approach and crack open a current issue of a neuroimaging journal, you might come to the exact opposite conclusion.  That’s right: if you looked at patterns of brain activity and other indirect measures of neural network dynamics (what I casually meant by “thinking patterns”), you would mostly see conclusions drawn from studies where many individuals are pooled into large groups and then probed for forms of brain activity that are common rather than different.  Most studies today show that humans use a common set of neural systems to perform mental operations (e.g., recalling events and information).  Brain structures including the hippocampus, frontal cortex, thalamus and parietal cortex are all known to be involved in deciding whether or not you have seen something before.  Thus, if you perform an fMRI brain scanning study on individuals and ask them to complete an episodic memory recall task (show them a list of words before scanning and then – when they are in the scanner – ask them to respond to words they remember seeing), you will likely observe that all or most individuals show some BOLD response activity in these structures.

OK great! But can you imagine where we would be if Charles Darwin returned home from his voyage and said, “Oh, just a bunch of birds out there … you know, the usual common stuff … beaks, wings, etc.”  I’d rather not imagine.

Enter Professor Michael Miller and colleagues and their recent paper, “Unique and persistent individual patterns of brain activity across different memory retrieval tasks” [doi:10.1016/j.neuroimage.2009.06.033].  This paper looks – not just at the common stuff – but at the individual differences in BOLD responses among individuals who perform a number of different memory tasks.  The team reports that there are dramatic differences in the patterns of brain activity between individuals.  This can be seen very clearly in Figure 1, which shows left-hemisphere activity associated with memory recall.  The group data (N=14) show nice, clean frontal-parietal activations – but when the data are broken down on an individual-by-individual basis, you might – without knowing that all subjects were performing the same recall tasks – suspect that each person was doing or “thinking” something quite different.  The research team then re-scanned each subject several months later and asked whether the individual differences were consistent from person to person. Indeed, the team shows that the 2nd brain scan is much more similar to the first (correlations were about 0.5), and that the scan-rescan data for an individual were more similar than the correlation between any single person and the rest of the group (about 0.25).  Hence, as the authors state, “unique patterns of brain activity persist across different tasks”.
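A simulated-data sketch of that key comparison may help (none of this is the study’s data; the noise level is tuned so the numbers land near the reported ~0.5 and ~0.25): each scan mixes a task network shared by everyone, a stable individual pattern, and session noise.

```python
# Sketch of within- vs. between-subject pattern similarity (simulated).
import numpy as np

rng = np.random.default_rng(3)
n_vox = 5000
shared_network = rng.normal(size=n_vox)     # task circuitry common to everyone

def scan(individual_pattern):
    """One session = shared network + stable personal pattern + session noise."""
    noise = np.sqrt(2) * rng.normal(size=n_vox)   # tuned to mimic ~0.5 / ~0.25
    return shared_network + individual_pattern + noise

alice = rng.normal(size=n_vox)              # Alice's stable idiosyncratic map
bob = rng.normal(size=n_vox)                # Bob's stable idiosyncratic map

a1, a2, b1 = scan(alice), scan(alice), scan(bob)
print("Alice scan vs. rescan r:", round(np.corrcoef(a1, a2)[0, 1], 2))  # ~0.5
print("Alice vs. Bob r:       ", round(np.corrcoef(a1, b1)[0, 1], 2))  # ~0.25
```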

Vive la différence!  Yes, the variability is – if you’re interested in using genetics to understand human history and cognitive development – the really exciting part!  Of course, genetics is not the main reason for the stable individual-to-individual differences in brain activity.  There are likely many factors that can alter the neural dynamics of the broadly distributed neural networks used for memory recall; environment, experience and gender are just a few that are known to influence the function of these networks.  The authors also reveal that individuals may differ in the strategies and criteria they use to decide whether they recall or detect a previously viewed item.  Some people will respond only when they are very certain (a high criterion), while others will respond even if they feel only slightly sure they’ve seen an item before (a low criterion).  The authors show in Figure 5 that folks with similar decision criteria are more likely to show similar patterns of brain activity.

Perhaps, then, the genetic differences that (partially) underlie individual differences in brain activity might relate to personality or other aspects of decision making?  I don’t have a clue, but I do know that this approach – looking carefully at individual differences – is a step toward doing what Darwin (and don’t forget John Gould!) is so well known for.  Understand where the variation comes from, and you will understand where you come from!

I will follow this literature more closely in the months to come.


Read Full Post »

Image: John Keats, by William Hilton (via Wikipedia)

If you slam your hand in the car door and experience physical pain, medical science can offer you a “pain killer!”  Certainly morphine (via its activation of the mu opioid receptor (OPRM1)) will make you feel a whole lot better.  However, if your boyfriend or girlfriend breaks up with you and you experience emotional pain, it’s not so clear whether medical science has, or should offer, such a treatment.  Most parents and doctors would not offer a pain killer.  Rather, it’s off to sulk in private, perhaps finding relief in the writings of countless poets who’ve attested to the acute pain that ensues when emotional bonds are broken.

Love hurts! But why should this be? Why does the loss of love hurt so much?

From a purely biological point of view, it seems obvious that during certain periods of life – childhood for instance – social bonds are important for survival.  Perhaps anything that helped make the breaking of such bonds feel bad might be selected for?  It’s a very complex evolutionary genetic problem to be sure.  One way to begin to solve this question might be to study genes like OPRM1 and ask how and why they might be important for survival.

Such is the case for Christina Barr and colleagues, who, in their paper “Variation at the mu-opioid receptor gene (OPRM1) influences attachment behavior in infant primates” [doi:10.1073/pnas.0710225105], examine relationships between emotional bonds and genetics in rhesus macaques.  The team examines an amino acid substitution polymorphism in the N-terminus of the OPRM1 protein (C77G, which leads to an arginine-to-proline change at position 26).  This polymorphism is similar to the human polymorphism (covered here) A118G (which leads to an asparagine-to-aspartate change at position 40).  Binding studies showed that both the 77G and 118G alleles have a higher affinity for beta-endorphin peptides.

Interestingly, Barr and colleagues find that macaques carrying the G allele of the classical “pain gene” OPRM1 display higher levels of attachment to their mothers during a critical developmental phase (18-24 months of age).  These G-allele carriers were also more prone to distress vocalizations when temporarily separated from their mothers, and they spent more time (than did CC controls) with their mothers when reunited.  Hence, there may be some preliminary credence to the notion that a gene involved in pleasant/unpleasant feelings was used during evolution to reinforce social interactions between mother and child.  The authors place their results into the larger context of the work of John Bowlby, who is known for developing a theory of attachment and the consequences of attachment style on later phases of emotional life.

Click here for a previous interview with Dr. Barr and a post on another related project of hers.


Read Full Post »

We are all familiar with the notion that genes are NOT destiny and that the development of an individual’s mind and body occurs in a manner that is sensitive to the environment (e.g., children who eat lots of healthy food grow bigger and stronger than those who have little or no access to food).  In the case of the brain, one of the ways the environment gets factored into development is via so-called “sensitive periods”, during which certain parts of the brain transiently rely on sensory experience in order to develop.  Children born with cataracts, for example, will have much better vision if the cataracts are removed in the first few weeks of life rather than later on.  This is because the human visual system has a “sensitive period” early in development when it is extra-sensitive to visual input, after which the function and connectivity of various parts of the system are – somewhat permanently – established for the rest of the person’s life.  Hence, if there is little visual input (cataracts) during the sensitive period, the visual system remains somewhat permanently unable to process visual information – even if the cataracts are subsequently removed.  (To learn more about this topic, visit Pawan Sinha’s lab at M.I.T. and his Project Prakash intervention study on childhood blindness.)

What the heck is an “in”sensitive period then?   Well, whereas visual input is clearly a “good thing” during the sensitive period of visual development, perhaps some environmental inputs are “bad”, and it may be useful to shield or protect the developing brain from them – lest it conclude, “OK, this (bad stuff) is normal.”  As a parent, I am constantly telling my children that the traffic-filled street is a “bad place” and, like all parents, I would not want my children to think that it was OK to wander into the street.  Clearly, I want my child to recognize the car-filled street as a “bad thing”.

In the developing brain, it turns out that there are some “bad things” that one would NOT like the brain to get accustomed to.  Long-term exposure to glucocorticoids is one example – well known to cause a type of neuronal remodelling in the hippocampus that is associated with poor cognitive performance (visit Bruce McEwen’s lab at Rockefeller University to learn more about this).  Perhaps an “in”sensitive period – where the brain is insensitive to glucocorticoids – is one way to teach the brain that glucocorticoids are “bad” and NOT to get too familiar with them (such a period does actually occur during early post-natal mammalian development).  Of course, we do need our brains to mount an acute stress response, if and when we are being threatened, but it is also very important that the brain learn to TURN OFF the acute stress response when the threat has passed – an extensive literature on the deleterious effects of chronic stress bears this out.  Hence, the brain needs to learn to recognize the flow of glucocorticoids as something that needs to be shut down.

OK, so our developing brain needs to learn what/who is “good vs. bad”.  Perhaps sensitive and insensitive periods help to reinforce this learning – and also to cement it into the system in a sort of permanent way (I’m really not sure if this is the consensus view, but I’ll try to podcast-interview some of the experts here asap).  In any case, for the visual system, it is clear that the lack of visual input during the sensitive period has long-lasting consequences.  For the stress response, it is also clear that untoward stress early in development can leave one (somewhat) destined to endure a lifetime of emotional difficulty.  Previous posts here, here, here cover research on behavioral/genomic correlates of early-life stress.

Genes meet environment in the epigenome during sensitive and insensitive periods?

As stated at the outset – genes are not destiny.  The DNA cannot encode a system that knows in advance who/what is good vs. bad; it can only encode a system of molecular parts that assemble to learn these contingencies on the fly.  During sensitive periods, cells in the visual system are more active and fire more profusely. This extra firing leads to changes in gene expression that (somewhat) permanently set the connectivity, strength and sensitivity of visual synapses.  The expression of neuroligins, neurexins, integrins and all manner of extracellular proteins that stabilize synaptic connections are well-known targets of activity-induced gene expression.  Hence the environment “interacts” with the genome via neuronal firing, which induces gene expression, which – in turn – feeds back and modulates neuronal firing.  Environment –> neuronal firing –> gene expression –> modified neuronal firing.  OK.

Similarly, in the stress response system, the environment induces changes in the firing of cells in the hypothalamus, which leads (through a series of intermediates) to the release of glucocorticoids.  Genes induced during the firing of hypothalamic cells and by the release of glucocorticoids can modify the organism’s subsequent response to stressful events.  Environment –> neuronal firing –> gene expression –> modified neuronal firing.  OK.
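That loop can be caricatured as a tiny dynamical system (parameters entirely arbitrary, purely for intuition, not neurobiology): input drives firing, firing slowly induces gene expression, and expression feeds back to raise the firing set point.

```python
# Toy caricature of environment -> firing -> gene expression -> modified
# firing. Arbitrary parameters; for intuition only.
def develop(env_input, steps=200):
    firing, expression = 0.0, 0.0
    for _ in range(steps):
        firing = env_input + 0.5 * expression          # genes modulate firing
        expression = 0.9 * expression + 0.1 * firing   # firing slowly induces genes
    return round(firing, 2), round(expression, 2)

print(develop(env_input=1.0))   # rich input  -> high activity/expression set point
print(develop(env_input=0.2))   # poor input  -> low set point
```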

Digging deeper into the mechanism by which neuronal firing induces gene expression, we find an interesting twist.   There is a mechanism – well studied for many decades – wherein neuronal firing causes Ca++ influx, which activates the expression of neuroligins, neurexins, integrins and all manner of extracellular proteins that stabilize synaptic connections.  But there is another mechanism that can permanently mark certain genes and alter their levels of expression in a long-lasting manner: so-called epigenetic mechanisms such as DNA methylation and histone acetylation.  As covered here and here, for instance, Michael Meaney’s lab has shown that DNA CpG methylation of various genes can vary in response to early-life stress and/or maternal care. In some cases, females who were poorly cared for may, in turn, be rather lousy mothers themselves as a consequence of these epigenetic markings.

A new research article, “Dynamic DNA methylation programs persistent adverse effects of early-life stress” by Chris Murgatroyd and colleagues [doi:10.1038/nn.2436], explores these mechanisms in great detail.  The team examined the expression of arginine vasopressin (AVP), a peptide important for healthy social interaction and social-stress responsivity.  Among many other interesting results, the team reports that early-life stress (in a mouse model) leads to lower levels of methylation in the 3rd CpG island, which is located downstream in a distal gene-expression-enhancer region.  In short, more early-life stress was correlated with less methylation and more AVP expression, which is known to potentiate the release of glucocorticoids (a bad thing).   The team reports that the methyl-binding MeCP2 protein, encoded by the gene that underlies Rett syndrome, acts as a repressor of AVP expression – which would normally be a good thing, since it would keep AVP (and hence glucocorticoid) levels down.  But, unfortunately, early-life stress removes the very methyl groups to which MeCP2 binds, and the team also reports that parvocellular neuronal depolarization leads to phosphorylation (on serine residue #438) of MeCP2 – a form of MeCP2 that is less accessible to its targets.  So, in a manner similar to other examples, early-life stress can have long-lasting effects on gene expression via an epigenetic mechanism – here disabling an otherwise protective mechanism that would shield the organism from the effects of stress.  Much as in Rett syndrome (as covered here), it seems that when MeCP2 is bound, it silences gene expression – which would seem to be a good thing in the case of AVP.

So who puts these epigenetic marks on chromosomes and why?

I’ll try to explore this further in the weeks ahead.  One intriguing idea about why methylation has been co-opted among mammals has to do with parent-offspring conflict.  According to David Haig, one of the experts on this topic, males have various incentives to cause their offspring to be large and fast-growing, while females have incentives to combat the genomic tricks that males use and to keep their offspring smaller and more manageable in size.  The literature clearly shows that genes that are marked or methylated by fathers (paternally imprinted genes) tend to be growth promoters and that maternally imprinted genes tend to be growth inhibitors.  One might imagine that maternally methylated genes might have an impact on maternal care as well.

Lastly, the growth-promoting/inhibiting effects of paternal/maternal genes and gene markings are now starting to be discussed in the context of autism/schizophrenia, which have been associated with synaptic over- and under-growth, respectively.

Building a brain is already tough enough – but to have to do it amidst an eons-old battle between maternal and paternal genomes.  Sheesh!  More on this to come.


Read Full Post »

The human brain is renowned for its complexity.  Indeed, while we often marvel at the mature brain in its splendid form and capability, it’s even more staggering to consider how to build such a powerful computing machine.  Admittedly, mother nature has been working on this for a long time – perhaps since the first neuronal cells and cell networks appeared on the scene hundreds of millions of years ago.  In that case, shouldn’t things be pretty well figured out by now?  Consider the example of Down syndrome, a developmental disability that affects about 1 in 800 children.  In this disability, a mere 50% increase in the dosage of a relative handful of genes is enough to alter the development of the human brain.  To me, it’s somehow surprising that the development of such a complex organ can be so sensitive to minor disruptions – but perhaps that’s the main attribute of the design – to factor in aspects of the early environment whilst building.  Perhaps?

So what are these genes that, in the case of Down syndrome, can alter the course of brain development?  Well, it is widely known that individuals with Down syndrome have an extra copy of chromosome 21.  However, the disorder does not necessarily depend on having an extra copy of each and every gene on chromosome 21.   Rare partial trisomies of only 5.4 million base pairs on 21q22 can produce the same developmental outcomes as the full chromosome trisomy.  Also, it turns out that mice have a large chunk of mouse chromosome 16 that carries the very same linear array of genes (synteny) found on human chromosome 21 (see the figure here).  In mice that have an extra copy of about 104 of these genes (the Ts65Dn segment above), many of the developmental traits related to brain structure and physiology are observed.  The same is true in mice that have an extra copy of about 81 genes (the Ts1Cje segment).

To focus this line of research even further, the recent paper by Belichenko et al., “The ‘Down Syndrome Critical Region’ Is Sufficient in the Mouse Model to Confer Behavioral, Neurophysiological, and Synaptic Phenotypes Characteristic of Down Syndrome” [DOI:10.1523/JNEUROSCI.1547-09.2009], examines brain structure, physiology and behavior in a line of mice that carry an extra copy of just 33 genes (the Ts1Rhr segment seen in the figure above).  Interestingly, these mice display many of the various traits (admittedly, mouse versions) associated with Down syndrome – thus greatly narrowing the search from a whole chromosome to a small number of genes.  20 out of 48 Down syndrome-related traits – such as enlargement of dendritic spines, reductions in dendritic spine density, altered brain morphology and various behaviors – were observed.  The authors suggest that 2 genes in this Ts1Rhr segment look like particularly intriguing candidates: DYRK1A, a gene that, when over-expressed, can lead to hippocampal-dependent learning deficits, and KCNJ6, a potassium channel that could readily drive neurons to hyperpolarize if over-expressed.
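The narrowing logic here is essentially a set-containment argument, sketched below with placeholder gene names (the real segment contents are in the paper’s figure, not reproduced here): if the smallest trisomic segment still confers the phenotypes, the culprits must lie within it.

```python
# Sketch of the mapping logic with placeholder names (not real gene lists):
# each nested trisomic segment still shows DS-like traits, so the candidate
# set shrinks to the smallest sufficient segment.
ts65dn = {f"g{i}" for i in range(104)}        # ~104 genes, traits observed
ts1cje = {f"g{i}" for i in range(23, 104)}    # ~81 genes,  traits observed
ts1rhr = {f"g{i}" for i in range(71, 104)}    # ~33 genes,  traits observed

candidates = ts65dn & ts1cje & ts1rhr
print(len(candidates))    # 33 -> the culprits lie among these genes
```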


Read Full Post »


One of the difficult aspects of understanding mental illness is separating the real causes of the illness from what might be secondary or tertiary consequences of having the illness.  If you think about a car whose engine is not running normally, there may be many observable things going wrong (pinging sound, stalling, smoke, vibration, overheating, loss of power, etc.) – but what is the real cause of the problem?  What should be done to fix the car – a faulty sparkplug or timing belt perhaps?  Such is often the problem in medicine, where a fundamental problem can lead to a complex, hard-to-disentangle etiology of symptoms.  Ideally, you would fix the core problem and then expect the secondary and tertiary consequences to normalize.

This inherent difficulty, particularly in mental illness, is one of the reasons that genetic research is of such interest.  Presumably, genetic risk factors are deeper and more fundamentally involved in the root causes of the illness – and hence are preferable targets for treatment.  The recent paper “Widespread Reductions of Cortical Thickness in Schizophrenia and Spectrum Disorders and Evidence of Heritability” [Arch Gen Psychiatry. 2009;66(5):467-477] seeks to ascertain whether one aspect of schizophrenia – a widespread and well-documented thinning of the neocortex – reflects genetic risk (hence something closer to a primary cause) or, rather, is a secondary consequence of things that go wrong earlier in the development of the illness.

To explore this idea, the team of Goldman et al. did something novel.  Rather than simply comparing cortical thickness between patients and control subjects, the team evaluated the cortical thickness of 59 patients, 72 unaffected siblings, and 196 unrelated, matched control participants.  If the cortical thickness of the siblings (who share, on average, 50% of their genetic variation with the patients) was more similar to that of the patients, it would suggest that the cortical thinning is under genetic control and hence – perhaps – a biological trait closer to a primary cause.  On the other hand, if the cortical thickness of the siblings was more similar to that of the healthy control participants (who share no familial genetic variation with the patients), it would suggest that cortical thinning is perhaps more of a secondary consequence of some earlier deficit.
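In pseudo-numbers (invented means; only the group sizes and the 2-4 mm range come from the text), the design boils down to asking which group the siblings’ mean thickness tracks:

```python
# Simulated illustration of the patient/sibling/control logic.
import numpy as np

rng = np.random.default_rng(4)
patients = rng.normal(2.45, 0.15, size=59)    # thinner cortex, in mm (invented)
siblings = rng.normal(2.60, 0.15, size=72)
controls = rng.normal(2.60, 0.15, size=196)

for name, grp in (("patients", patients), ("siblings", siblings), ("controls", controls)):
    print(f"{name:9s} mean thickness = {grp.mean():.2f} mm")
# Siblings tracking controls (as the study reports) suggests thinning is a
# consequence of illness rather than an inherited risk marker.
```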

High-resolution structural neuroimaging allowed the team to carefully assess cortical thickness – normally a mere 2 to 4 millimeters – across different areas of the cortex.  The team reports that, for the most part, the cortical thickness measures of the siblings were more similar to those of the unrelated controls – suggesting that cortical thickness may not be a direct component of the genetic risk architecture for schizophrenia.  Still, the paper discusses several candidate mechanisms that could lead to cortical thinning in the illness – some of which might be assessed in the future using other imaging modalities within their patient/sibling/control experimental design.


Read Full Post »

Image: Gravestone of Samuel Coleridge-Taylor, Wallington (via Flickr)

Few events are as hard to understand as the loss of a loved one to suicide – a fatal confluence of factors that are oft scrutinized, but whose analysis can provide little comfort to family and friends.  To me, one frightening and vexing aspect of what is known about the biological roots of depression, anxiety, impulsivity and other mental traits and states associated with suicide is the way in which early-life (even prenatal) experience can influence events in later life.  As covered in this blog here and here, there appear to be very early interactions between emotional experience and the methylation of specific points in the genome.  Such methylation marks – often referred to as epigenetic marks – can regulate the expression of genes that are important for synaptic plasticity and cognitive development.

The recent paper “Alternative Splicing, Methylation State, and Expression Profile of Tropomyosin-Related Kinase B in the Frontal Cortex of Suicide Completers” is a recent example of a link between epigenetic marks and suicide.  The team of Ernst et al. examined gene expression profiles from the frontal cortex and cerebellum of 28 males lost to suicide and 11 ethnically matched control participants.  Using a subject-by-subject comparison method described as “extreme value analysis”, the team identified 2 Affymetrix probes – 221794_at and 221796_at, both specific to the NTRK2 (TRKB) gene – that showed significantly lower expression in several areas of the frontal cortex.  The team also found that these probes were specific to exon 16, which is found only in the TRKB.T1 isoform – an isoform expressed only in astrocytes.

Further analysis showed that there were no genetic differences in the promoter region of this gene that would explain the expression differences, but that there were 2 methylation sites (epigenetic differences) whose methylation status correlated with expression levels (P=0.01 and 0.004).  As a control, DNA methylation at these sites was not correlated with TRKB.T1 expression when the DNA and RNA were taken from the cerebellum (a useful control, since the cerebellum is not thought to be directly involved in the regulation of mood).
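The statistical heart of that result is a simple correlation across subjects between methylation at a site and expression of the isoform. A sketch with simulated values (not the study’s data):

```python
# Sketch of a methylation-expression correlation test (simulated values).
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(5)
n = 39                                          # 28 cases + 11 controls
methylation = rng.uniform(0.0, 1.0, size=n)     # fraction methylated at the site
expression = 5.0 - 2.0 * methylation + rng.normal(0.0, 0.5, size=n)  # more CH3 -> less TRKB.T1

r, p = pearsonr(methylation, expression)
print(f"r = {r:.2f}, p = {p:.3g}")              # expect a strong negative correlation
```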

In the case of TRKB.T1 expression, the team reports that more methylation at these 2 sites in the promoter region is associated with less TRKB.T1 expression in the frontal cortex.  Where and when are these marks laid down?  Are they reversible?  How can we know or suspect what is happening to our epigenome (you can’t measure this by spitting into a cup, as with current genome sequencing methods)? To me, the team has identified an important clue from which such follow-up questions can be addressed.  Now that they have a biomarker, they can help us begin to better understand our complex and often difficult emotional lives within a broader biological context.


Read Full Post »


The neuregulin-1 (NRG1) gene is widely known as one of the most well-replicated genetic risk factors for schizophrenia.  Converging evidence shows that it is associated with schizophrenia at the gene expression and mouse model levels, consistent with its molecular functions in neural development.   However, in several recent genome-wide association studies (GWAS), there appeared nary a blip of association at the 8p12 locus where NRG1 resides.  What gives?

While there are many possibilities for this phenomenon (some discussed here), the recent paper “Support for NRG1 as a Susceptibility Factor for Schizophrenia in a Northern Swedish Isolated Population” by Maaike Alaerts and colleagues suggests that the typical GWAS may not probe genetic variation at a fine enough scale – or, if you will, may use a net whose holes are too large.  By holes, I mean both the physical distance between genetic markers and the frequency with which they occur in populations.  While GWAS may use upwards of 500,000 markers – a pretty fine-scale net for a 3,000,000,000-bp genome (markers about 6,000bp apart) – Alaerts and colleagues set forth with slightly finer-scale netting.  They focus on a 157kb region beginning about 60kb upstream of the start of the NRG1 gene and construct a net consisting of 37 variants between the markers rs4268087 and rs17601950 (average spacing about 5kb).  They used the Tagger program to select markers that account for all haplotypes whose frequency is higher than 1.5%.  Thus, even though there are still more than 500 possible SNPs in the region they are exploring, Alaerts and colleagues are using a slightly finer netting than a typical GWAS.

The results of their analysis (using GENEPOP) of 486 patients and 514 ethnically matched control participants from northern Sweden did reveal significant associations in an area slightly downstream (about 50kb closer to the start of the NRG1 gene) of the “previously often replicated variants”, suggesting that the region does confer some risk for schizophrenia, but that diagnostic markers for such risk will differ between populations.  More telling, however, are the very weak effects of the haplotypes that show significant association.  The haplotypes with the most significance show meager differences in how often they are observed in patients vs. controls.  For example, one haplotype was observed in 5% of patients vs. 3% of controls. Other examples were 11% vs. 9%, 25% vs. 22% and 40% vs. 35% – revealing the very modest (krill-sized) effects that single genetic variants can have in conferring risk toward mental illness.
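To see just how krill-sized a 5% vs. 3% difference is, here is a standard 2x2 test on those reported frequencies – a generic sketch, not the authors’ GENEPOP analysis; haplotype counts are taken per chromosome (2N):

```python
# 2x2 test of a 5% (patients) vs. 3% (controls) haplotype frequency.
from scipy.stats import chi2_contingency

case_chroms, ctrl_chroms = 2 * 486, 2 * 514       # haplotypes counted per chromosome
case_hits = round(0.05 * case_chroms)             # 49 carrier chromosomes
ctrl_hits = round(0.03 * ctrl_chroms)             # 31 carrier chromosomes
table = [[case_hits, case_chroms - case_hits],
         [ctrl_hits, ctrl_chroms - ctrl_hits]]

chi2, p, dof, expected = chi2_contingency(table)
odds_ratio = (table[0][0] * table[1][1]) / (table[0][1] * table[1][0])
print(f"OR = {odds_ratio:.2f}, p = {p:.3f}")      # significant, but a modest effect
```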

However, there are potentially lots of krill in the genomic sea!


Read Full Post »

If you compare the left panel to the right panel, you’ll see a dendrite (grey) with dendritic spines (green) on the left side and then, on the right side, these spines enveloped by the membrane of an astrocyte (white).  These images were obtained from synapse-web.org, which uses a method known as 3D reconstruction from serial-section electron microscopy – or something like that – to better understand what types of structural factors underlie normal and abnormal synaptic function.  What is so amazing to me are the delicate ruffles of the astrocyte membrane that seem to want to ensheath each spine.  Was any cellular structure ever so gently and well cared for?  Perhaps not.  These are dendritic spines, after all – the very structures that form synaptic contacts and process the neural signals that allow us to think and function.

It turns out that astrocytes not only seem to care for dendritic spines, but also provide the essential signal that initiates the sprouting of neuronal spines in the first place.  In their recent paper, “Gabapentin Receptor α2δ-1 Is a Neuronal Thrombospondin Receptor Responsible for Excitatory CNS Synaptogenesis” [doi:10.1016/j.cell.2009.09.025], Eroglu and colleagues report the discovery – in mice – that CACNA2D1, which encodes the alpha-2/delta-1 subunit of the voltage-dependent calcium channel complex, is a receptor for thrombospondins (humans have THBS1 and THBS2) – adhesive glycoproteins that mediate cell-to-cell and cell-to-matrix interactions and are required for the formation of new dendritic spines.  When neurons are cultured in the absence of thrombospondins, they fail to produce new spines, and mice that do not make thrombospondins do not make very many excitatory synaptic spines.

The interesting twist, to me, is that thrombospondins are secreted solely by astrocytes! The newly identified CACNA2D1 receptor – as revealed by Eroglu et al. – binds to the EGF-repeats of thrombospondin and initiates a signalling cascade that results in the sprouting of new – silent – dendritic spines.  Gabapentin, a drug prescribed for seizures, pain, methamphetamine addiction and many other mental health conditions, appears to bind CACNA2D1 and interfere with the binding of thrombospondin; it also inhibits the formation of new spines, both in vitro and during the development of somatotopic maps in the mouse whisker barrel cortex.

This seems to be an important discovery in the understanding of how cognitive development unfolds since much of the expression of thrombospondin and its effects on synaptogenesis occur in the early postnatal stages of development.  I will follow this thread in the months to come.


Read Full Post »


“A devil, a born devil, on whose nature
Nurture can never stick; on whom my pains,
Humanely taken, all, all lost, quite lost
And as with age his body uglier grows,
So his mind cankers.”

So says the wizard Prospero of the wretched Caliban in Shakespeare’s The Tempest (Act IV, Scene I, lines 188-192).  Although Shakespeare was not a neuroscientist (more to his credit!), his poignant phrase “on whose nature / Nurture can never stick” strikes at the very core of the modern debates on the role of genes and personal genomes, and perhaps reminds us that our human experience is delicately balanced amidst the interaction of genes and environment.

Among the some 20,500 genes in the human genome (yes, this is the latest estimate from Eric Lander this past weekend), one particularly amazing gene stands out.   CACNA2D1 encodes the alpha-2/delta-1 subunit of the voltage-dependent calcium channel complex (which also binds the widely prescribed drug gabapentin) – a protein that, in conjunction with other related subunits, forms a calcium channel to mediate the influx of calcium ions into neurons when membrane depolarization occurs.  In the recent article “Gabapentin Receptor α2δ-1 Is a Neuronal Thrombospondin Receptor Responsible for Excitatory CNS Synaptogenesis” [doi:10.1016/j.cell.2009.09.025], Eroglu and colleagues reveal that this single gene initiates the development of synapses – the dynamic structures whose ever-changing interconnections make us who we are – and so allows “nurture to stick”, as it were.

More on the biology of CACNA2D1 and its interactions with its ligand – Thrombospondins – to come.


Read Full Post »


File this story under “the more you know, the more you don’t know” or simply under “WTF!”  The new paper “Microduplications of 16p11.2 are associated with schizophrenia” [doi:10.1038/ng.474] reveals that a short stretch of DNA on chromosome 16p11.2 is – very rarely – duplicated and – more rarely – deleted.  In an analysis of 8,590 individuals with schizophrenia, 2,172 with developmental delay or autism, 4,822 with bipolar disorder and 30,492 controls, the microduplication of 16p11.2 was strongly associated with schizophrenia, bipolar disorder and autism, while the reciprocal microdeletion was strongly associated with developmental delay or autism – but not with schizophrenia or bipolar disorder.

OK, so the title of my post is misleading (hey, it’s a blog), since there are clearly many additional factors that contribute to the developmental outcome of autism vs. schizophrenia, but this stretch of DNA seems to hold clues about the early development of brain systems that go awry in both disorders.  Here is a list of the brain-expressed genes in this 600-kbp region (in order from the telomere side to the centromere side): SPN, QPRT, C16orf54, MAZ, PRRT2, C16orf53, MVP, CDIPT, SEZ6L2, ASPHD1, KCTD13, TMEM219, TAOK2, HIRIP3, INO80E, DOC2A, FLJ25404, FAM57B, ALDOA, PPP4C, TBX6, YPEL3, GDPD3, MAPK3, CORO1A.

Any guess as to which one(s) are the culprits?  I’ll go with HIRIP3, given its role in chromatin structure regulation – and the consequent regulation of under- (schizophrenia?) / over- (autism) growth of synapses. What an amazing mystery to pursue.

Reblog this post [with Zemanta]

Read Full Post »

ipod
Image by Oliver Lavery via Flickr

Daniel R. Weinberger, M.D., Chief of the Clinical Brain Disorders Branch and Director of the Genes, Cognition and Psychosis Program at the National Institute of Mental Health, discusses the background, findings and general issues of genes and mental illness in this brief interview on his paper, “A primate-specific, brain isoform of KCNH2 affects cortical physiology, cognition, neuronal repolarization and risk of schizophrenia”.  Click HERE for the podcast and HERE for the original post.

Thanks again to Dr. Weinberger for his generous participation!

Reblog this post [with Zemanta]

Read Full Post »

Phrenological thinking, a popular pseudoscientific practice in the 1800s, suggested that the structure of the head and underlying brain held the clues to understanding human behavior.  Today, amidst the ongoing convergence of developmental science, molecular & biochemical science and systems-dynamical science (to name just a few), there is – of course – no single or agreed-upon level of analysis that can provide all the answers.  Circuit dynamics are wonderfully correlated with behavior, but they can be regulated by synaptic weights.  Also, while developmental studies reveal the far-reaching beauty of neuronal circuitry, such elegant wiring is of little benefit without healthy and properly regulated synaptic connections.  Genes, too, can be associated with circuit dynamics and behavior, but what do these genes do?  Perchance encode proteins that help to form and regulate synapses?  Synapses, synapses, synapses.  Perhaps there is a level of analysis – a nexus – where all levels of analysis intersect?  What do we know about synapses and how these essential aspects of brain function are formed and regulated?

With this in mind I’ve been exploring the nanosymposium, “Molecular Dynamics and Regulation at Synapses” to learn more about the latest findings in this important crossroads of neurobiology.  If you’re like me, you sort of take synapses for granted and think of them as being very tiny and sort of generic.  Delve a while into the material presented at this symposium and you may come to view the lowly synapse – a single synapse – as a much larger, more complex, ever-changing biochemical world unto itself.  The number of molecular players under scrutiny by the groups presenting in this one session is staggering.  GTPase activating proteins, kinases, molecular motors, receptors, proteases, cell adhesive proteins, ion channels and many others must interact according to standard biochemical and thermodynamic laws.  At this molecular-soup level, it seems rather miraculous that the core process of vesicle-to-cell membrane fusion can happen at all – let alone in the precise way needed to maintain the proper oscillatory timing needed for Hebbian plasticity and higher-level circuit properties associated with attention and memory.
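To put a toy number on that molecular miracle, here is a minimal Python sketch of the classic Katz-style quantal release model: each of N docked vesicles fuses independently with probability p on each action potential.  The parameter values are arbitrary placeholders, chosen only to show how much trial-to-trial noise falls out of even this idealized scheme.

```python
import random

def quantal_release(n_sites=10, p_release=0.3, quantal_size=1.0):
    """Toy Katz-style quantal model: each of n_sites docked vesicles
    fuses independently with probability p_release on an action
    potential; the response is (number of fusions) x quantal_size.
    All parameter values are arbitrary placeholders."""
    fusions = sum(random.random() < p_release for _ in range(n_sites))
    return fusions * quantal_size

random.seed(42)
responses = [quantal_release() for _ in range(1000)]
mean = sum(responses) / len(responses)
variance = sum((r - mean) ** 2 for r in responses) / len(responses)
print(f"mean response: {mean:.2f}, variance: {variance:.2f}")
# Binomial expectation: mean ~ n*p = 3.0, variance ~ n*p*(1-p) = 2.1.
# Identical stimuli, noticeably different responses -- noise is baked in.
```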

For sure, this is one reason why the brain and behavior are hard to understand.  Synapses are very complex!

Reblog this post [with Zemanta]

Read Full Post »

DCDC2 (gene)
Image via Wikipedia

A recent analysis of brain structure in healthy individuals who carry a common 2,445-bp deletion in intron 2 of the doublecortin domain containing 2 (DCDC2) gene found that heterozygotes for the deletion showed higher grey matter volumes in several brain areas known to be involved in the processing of written and spoken language (superior, medial and inferior temporal cortex, fusiform, hippocampal / parahippocampal, inferior occipito-parietal, inferior and middle frontal gyri, especially in the left hemisphere) [doi:10.1007/s11682-007-9012-1].  The DCDC2 gene sits within a well-known locus frequently found to be associated with developmental dyslexia, and associations of reading disability with DCDC2 have been confirmed in population-based studies.  Further work on DCDC2 (open access) shows that the DNA removed by the 2,445-bp deletion in intron 2 carries a number of repeating sequences to which developmental transcription factors bind, and that inhibition of DCDC2 results in altered radial migration of neurons.  Perhaps the greater grey matter volumes are related to this type of neuronal migration finding?  It will be interesting to follow this story further!
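As a methods aside, the core of an imaging-genetics comparison like this one reduces to testing a brain measure across genotype groups (real studies do this voxel-wise, with careful corrections).  A minimal sketch, using made-up grey matter numbers that stand in for data I do not have:

```python
from scipy import stats

# Made-up grey matter volumes (arbitrary units), illustration only --
# these are NOT the study's data.
deletion_carriers = [812, 798, 825, 840, 805, 819]
non_carriers = [780, 775, 792, 788, 770, 783]

t_stat, p_value = stats.ttest_ind(deletion_carriers, non_carriers)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# Voxel-based morphometry runs this kind of test at every voxel and
# then corrects for the enormous number of comparisons.
```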

Reblog this post [with Zemanta]

Read Full Post »

Logo of the United States National Institute o...
Image via Wikipedia

Many thanks to Dr. Christina S. Barr from the National Institutes of Health/National Institute on Alcohol Abuse and Alcoholism-Laboratory of Clinical and Translational Studies, National Institutes of Health Animal Center for taking the time to comment on her team’s recent publication, “Functional CRH variation increases stress-induced alcohol consumption in primates” [doi:10.1073/pnas.0902863106] which was covered here.  On behalf of students and interested readers, I am so grateful to her for doing this!  Thank you Dr. Barr!

For readers who are unfamiliar with the extensive literature on this topic, can you give them some basic background context for the study?

“In rodents, increased CRH system functioning in parts of the brain that drive anxious responding (ie, amygdala) occurs following extended access to alcohol and causes animals to transition to the addicted state.  In rodent lines in which genetic factors drive increased CRH system functioning, those animals are essentially phenocopies of those in the post-dependent state.  We had a variant in the macaque that we expected would drive increased CRH expression in response to stress, and similar variants may exist in humans.  We, therefore, hypothesized that this type of genetic variation may interact with prior stress exposure to increase alcohol drinking.”

Can you tell us more about the experimental design strategy and methods?

“This was a study that relied on use of archived NIAAA datasets. The behavioral and endocrine data had been collected years ago, but we took a gene of interest, and determined whether there was variation. We then put a considerable amount of effort into assessing the functional effects of this variant, in order to have a better understanding of how it might relate to individual variation. We then genotyped archived DNA samples in the colony for this polymorphism.”

“I am actually a veterinarian in addition to being a neuroscientist- we have the “3 R’s”. Reduce, refine, and replace…..meaning that animal studies should involve reduced numbers, should be refined to minimize pain/distress and should be replaced with molecular studies if possible.  This is an example of how you can marry use of archived data and sophisticated molecular biology techniques/data analysis to come up with a testable hypothesis without the use of animal subjects. (of course, it means you need to have access to the datasets….;)”

How do the results relate to broader questions and your field at large?

“I became interested in this system because it is one that appears to be under intense selection.  In a wide variety of animal species, individuals or strains that are particularly stress-reactive may be more likely to survive and reproduce successfully in highly variable or stressful environments. Over the course of human evolution, however, selective pressures have shifted, as have the nature and chronicity of stress exposures.  In fact, in modern society, highly stress-reactive individuals, who are no less likely to be eaten by a predator (predation not being a major cause of mortality in modern humans), may instead be more likely to fall susceptible to various-stress related disorders, including chronic infections, diabetes, heart disease, accelerated brain aging, stress-related psychiatric disorders, and even drug and alcohol problems. Therefore, these genetic variants that are persistent in modern humans may make individuals more vulnerable to “modern problems.”

I do hope this helps. Let me know if it doesn’t, and I will try to better answer your questions.”

THANK YOU AGAIN VERY MUCH DR. BARR!!

Reblog this post [with Zemanta]

Read Full Post »

According to Joseph LeDoux, “One of the most important contributions of modern neuroscience has been to show that the nature/nurture debate operates around a false dichotomy: the assumption that biology, on one hand, and lived experience, on the other, affect us in fundamentally different ways” (ref).  Indeed.  While I know not where the current debate stands, I’d like to point to a fantastic example of just how inextricably linked the genome is to the environment.  In their recent paper, “A Biological Function for the Neuronal Activity-Dependent Component of Bdnf Transcription in the Development of Cortical Inhibition” [doi:10.1016/j.neuron.2008.09.024], Hong et al. ask what happens when you take away the ability of a given gene to respond to the environment.  This is not a traditional “knockout” experiment – where the gene is inactivated from the moment of conception onwards – but rather a much more subtle type of experimental manipulation.  What happens when you prevent nurture from exerting an effect on gene expression?

The team focused on the BDNF gene, whose transcription can be initiated from any one of eight promoter sites (I-VIII).  These sites vary in activity depending on the phase of development and/or the tissue or type of cell – all of which makes for a complex set of instructions able to turn the BDNF gene on and off in precise developmental and/or tissue-specific ways.  Promoter IV, in particular, appears to be triggered in the cortex by the Ca++ influx that occurs when neurons fire – a phenomenon called “neuronal activity-dependent transcription” – a top example of how the environment can influence gene expression.  Seeing as how BDNF promoter IV is important for this type of environment-induced gene expression, the team asked what happens when you disrupt this particular promoter.

To do this, the team constructed knock-in mice carrying mutations in several of the calcium (Ca++) response elements in the promoter IV region.  The point mutations were designed so that the Ca++-sensitive transcription factor CREB could not bind to the promoter and activate gene expression.
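For a sequence-level picture of what such a point mutation means, here is a toy Python sketch that finds the canonical CRE consensus (TGACGTCA) in a made-up promoter fragment and swaps two core bases.  Both the fragment and the exact substitution are illustrative stand-ins, not the actual Bdnf promoter IV sequence or the paper’s precise edits.

```python
import re

CRE_CONSENSUS = "TGACGTCA"  # canonical cAMP/Ca++ response element

# Made-up promoter fragment, illustration only -- NOT the real
# Bdnf promoter IV sequence.
promoter = "GGCTATGACGTCATTCCAGG"

match = re.search(CRE_CONSENSUS, promoter)
if match:
    start, end = match.span()
    # Swap two core bases so CREB can no longer recognize the site
    # (an illustrative substitution, not the paper's exact mutation).
    mutant = promoter[:start] + "TGAAATCA" + promoter[end:]
    print("wild-type:", promoter)
    print("mutant:   ", mutant)
```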

OK, so what happens?  Firstly, the team reports that the mutant mice are more or less indistinguishable from controls in appearance, gait, growth rate and brain size, and can also reproduce and transmit the mutations.  WOW! Is that one strike AGAINST nurture? The team then shows that BDNF levels are more than 50% reduced in cultured neurons, but that levels of other immediate early genes are NOT affected (as expected).  In living animals, the effects were similar when they asked how much gene expression occurs in the visual cortex when animals are exposed to light (after an extended period of darkness).  OK, so there are few effects so far, other than lower levels of nurture-induced BDNF expression – hmmm. Looking more closely, however, the team found that the mutant mice generated lower levels of inhibitory neuron activity – as measured by the frequency of miniature inhibitory postsynaptic currents (mIPSCs).  Follow-on results showed that the total number of inhibitory neurons (parvalbumin- and NPY-expressing GABAergic cells) was no different from controls, and so it would seem that the activity dependence of BDNF matters for the maintenance of inhibitory synapses rather than for the survival of inhibitory neurons themselves.

Hence, the team has found that what “nurture” does (via the BDNF promoter IV in this case) is to exert an effect on the connectivity of inhibitory neurons.  Wow, thanks mother nurture!  Although it may seem like an obscure role for something as important as THE environment, the team points out that the relative balance of excitation-to-inhibition (yin-yang as covered here for Rett syndrome) is crucial for proper cognitive development.

To explore the notion of inhibitory/excitatory balance further, check out this (TED link) video lecture, in which Michael Merzenich describes the imbalance as a “signal-to-noise” problem wherein some children’s brains are rather noisy (due to any number of genetic/environmental reasons – such as, perhaps, poorly maintained inhibitory connections).  This can make it harder to develop and function in life.  Perhaps someday, gene/environment research like that of Hong and colleagues will inform the rehabilitative strategies developed by Merzenich.
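For a back-of-the-envelope feel for that “signal-to-noise” framing, here is a toy Python simulation in which inhibition cancels a fraction of the background noise riding on a fixed signal; weaken the inhibition (as poorly maintained inhibitory synapses might) and the effective signal-to-noise ratio falls.  All numbers are arbitrary and purely illustrative.

```python
import random

def estimated_snr(inhibition, n_trials=5000, signal=1.0, noise_sd=1.0):
    """Toy model: response = signal + noise - inhibition * noise,
    i.e. inhibitory synapses cancel a fraction of the background
    noise. Returns signal divided by the residual noise std."""
    residuals = []
    for _ in range(n_trials):
        noise = random.gauss(0.0, noise_sd)
        response = signal + noise - inhibition * noise
        residuals.append(response - signal)
    mean = sum(residuals) / n_trials
    var = sum((r - mean) ** 2 for r in residuals) / n_trials
    return signal / (var ** 0.5 + 1e-12)

random.seed(0)
for inhibition in (0.9, 0.5, 0.1):  # healthy -> weakened inhibition
    print(f"inhibition {inhibition:.1f} -> SNR ~ {estimated_snr(inhibition):.1f}")
```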

Reblog this post [with Zemanta]

Read Full Post »
