
Posts Tagged ‘Gene’

Twin studies have long suggested that genetic variation is a part of healthy and disordered mental life.  The problem, however – some 10 years now since the full genome sequence era began – has been finding the actual genes that account for this heritability.

It sounds simple on paper – just collect lots of folks with disorder X and look at their genomes in reference to a demographically matched healthy control population.  Voila!  Whatever is different is a candidate for genetic risk.  Apparently, not so.

The missing heritability problem that clouds the birth of the personal genomes era refers to the baffling inability to find enough common genetic variants that can account for the genetic risk of an illness or disorder.

There are any number of reasons for this … (i) even though any given MZ or DZ twin pair shares genetic variants that predispose them toward similar brains and mental states, different twin pairs may carry different types of rare genetic variation – thus diluting out any shared patterns of variation when large pools of cases and controls are compared … (ii) also, the way the environment interacts with common risk-promoting genetic variation may differ from person to person – making it hard to find variation that is similarly risk-promoting across large pools of cases and controls … and many others, I’m sure.

One research group recently asked whether the type of common genetic variation (SNP vs. CNV) might inform the search for the missing heritability.  The authors of the recent paper, “Genome-wide association study of CNVs in 16,000 cases of eight common diseases and 3,000 shared controls” [doi:10.1038/nature08979] looked at an alternative to the usual SNP markers – so-called common copy number variants (CNVs) – and asked whether these markers might provide a stronger accounting of genetic risk.  While a number of previous papers in the mental health field have indeed shown associations with CNVs, this massive study (some 3,432 CNV probes in roughly 2,000 cases and 3,000 controls) did not reveal an association with bipolar disorder.  Furthermore, the team reports that common CNVs are already in fairly strong linkage disequilibrium with common SNPs, and so may not reach any farther into the abyss of rare genetic variation than previous GWAS efforts.
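To make the linkage-disequilibrium point concrete, here is a toy Python sketch (the 0/1 two-marker coding and all data are invented for illustration – nothing here comes from the paper): when a common CNV is strongly tagged by a nearby SNP, the standard r² statistic approaches 1, and genotyping the CNV adds little beyond a SNP-based GWAS.

```python
# Toy sketch: linkage disequilibrium (r^2) between a biallelic CNV
# (coded 0/1 for deletion carrier) and a nearby SNP.  High r^2 means the
# SNP already "tags" the CNV, so a CNV scan adds little new information.

def r_squared(haplotypes):
    """haplotypes: list of (cnv_allele, snp_allele) pairs, each 0 or 1."""
    n = len(haplotypes)
    p_cnv = sum(c for c, s in haplotypes) / n           # freq of CNV allele
    p_snp = sum(s for c, s in haplotypes) / n           # freq of SNP allele
    p_both = sum(1 for c, s in haplotypes if c and s) / n
    d = p_both - p_cnv * p_snp                          # disequilibrium D
    denom = p_cnv * (1 - p_cnv) * p_snp * (1 - p_snp)
    return d * d / denom if denom else 0.0

# Perfectly correlated markers -> r^2 = 1 (the SNP fully tags the CNV)
perfect = [(1, 1)] * 30 + [(0, 0)] * 70
print(round(r_squared(perfect), 6))  # 1.0
```

In the strong-LD regime sketched here, the CNV genotype is redundant with the SNP genotype – which is one way to read the paper's conclusion about why the CNV scan did not reach deeper into the missing heritability.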

Disappointing perhaps, but a big step forward nonetheless!  What will the personal genomes era look like if we all have different forms of rare genetic variation?


Read Full Post »

One of the complexities in beginning to understand how genetic variation relates to cognitive function and behavior is that – unfortunately – there is no gene for “personality”, “anxiety”, “memory” or any other type of “this” or “that” trait.  Most genes are expressed rather broadly across the entire brain’s cortical layers and subcortical systems.  So, just as there is no single brain region for “personality”, “anxiety”, “memory” or any other type of “this” or “that” trait, there can be no such gene.  In order for us to begin to understand how to interpret our genetic make-up, we must learn how to interpret genetic variation via its effects on cells and synapses – that go on to function in circuits and networks.  Easier said than done?  Yes, but perhaps not so intractable.

Here’s an example.  One of the most well-studied circuits/networks/systems in the field of cognitive science is the so-called basal-ganglia-thalamocortical loop.  These loops have been implicated in a great many forms of cognitive function, involving the regulation of everything from movement, emotion and memory to reasoning ability.  Not surprisingly, neuroimaging studies of cognitive function almost always find activations in this circuitry.  In many cases, the data from neuroimaging and other methodologies suggest that one portion of this circuitry – the frontal cortex – plays a role in the representation of such aspects as task rules, relationships between task variables and associations between possible choices and outcomes.  This would be sort of like the “thinking” part of our mental life, where we ruminate on all the possible choices we have and the ins and outs of what each choice has to offer.  Have you ever gone into a Burger King and – even though you’ve known for 20 years what’s on the menu – frozen up and become lost in thought just as it’s your turn to place your order?  Your frontal cortex is at work!

The other aspect of this circuitry is the subcortical basal ganglia, which seem to play the downstream role of processing all that ruminating activity going on in the frontal cortex and filtering it down into a single action.  This is a simple fact of life – we can be thinking about dozens of things at a time, but we can only DO 1 thing at a time.  Alas, we must choose something at Burger King and place our order.  Indeed, one of the hallmarks of mental illness seems to be that this circuitry functions poorly – which may be why individuals have difficulty keeping their thoughts and actions straight – the thinking clearly and acting clearly aspects of healthy mental life.  Certainly, in neurological disorders such as Parkinson’s Disease and Huntington’s Disease, where this circuitry is damaged, the ability to think and move one’s body in a coordinated fashion is disrupted.

Thus, there are at least 2 main components to this complex system of circuits/networks that is involved in many aspects of learning and decision making in everyday life.  Therefore, if we wanted to understand how a gene – one that is expressed in both portions of this circuitry – influenced our mental life, we would have to interpret its function in relation to each specific portion of the circuitry.  In other words, the gene might affect the prefrontal (thinking) circuitry in one way and the basal-ganglia (action-selection) circuitry in a different way.  Since we’re all familiar with the experience of walking into a Burger King and seeing folks perplexed and frozen as they stare at the menu, perhaps it’s not too difficult to imagine that a gene might differentially influence the ruminating process (hmm, what shall I have today?) and the action-selection (I’ll take the #3 combo) aspect of this everyday occurrence (for me, usually 2 times per week).

Nice idea, you say, but does it flow from solid science?  Well, check out the recent paper from Cindy M. de Frias and colleagues, “Influence of COMT Gene Polymorphism on fMRI-assessed Sustained and Transient Activity during a Working Memory Task” [PMID: 19642882].  In this paper, the authors probed the function of a single genetic variant (rs4680, the methionine/valine variant of the dopamine-metabolizing COMT gene) on cognitive functions that preferentially rely on the prefrontal cortex as well as mental operations that rely heavily on the basal ganglia.  As an added bonus, the team also probed the function of the hippocampus – yet a different set of circuits/networks that are important for healthy mental function.  OK, so here is 1 gene functioning within 3 separable (yet connected) neural networks!

The team focused on a well-studied methionine/valine variant of the dopamine-metabolizing COMT gene, which is broadly expressed across the prefrontal (thinking) part of the circuitry, the basal-ganglia part of the circuitry (action-selection) and the hippocampus.  The team performed a neuroimaging study wherein participants (11 Met/Met and 11 Val/Val subjects) had to view a series of words presented one at a time and respond if they recalled that a word matched the word presented 2 trials beforehand (a so-called “n-back task”).  In this task, each of the 3 networks/circuits (frontal cortex, basal ganglia and hippocampus) is doing somewhat different computations – and has different needs for dopamine (hence COMT may be doing different things in each network).  In the prefrontal cortex, according to a theory proposed by Robert Bilder and colleagues [doi:10.1038/sj.npp.1300542], the need is for long temporal windows of sustained neuronal firing – known as tonic firing (the neuronal correlate of trying to “keep in mind” all the different words that you are seeing).  The authors predicted that under conditions of tonic activity in the frontal cortex, dopamine release promotes extended tonic firing and that Met/Met individuals should produce enhanced tonic activity.  Indeed, when the authors looked at their data and asked, “where in the brain do we see COMT gene associations with extended firing?” they found such associations in the frontal cortex (frontal gyrus and cingulate cortex)!

Down below, in the subcortical networks, a different type of cognitive operation is taking place.  Here the cells/circuits are involved in the action selection (press a button) of whether the word is a match and in the working-memory updating of each new word.  Instead of prolonged, sustained “tonic” neuronal firing, the cells rely on fast, transient “phasic” bursts of activity.  Here, the modulatory role of dopamine is expected to be different, and the Bilder et al. theory predicts that COMT Val/Val individuals would be more efficient at modulating the fast, transient form of cell firing required.  Similarly, when the research team explored their genotype and brain activity data and asked, “where in the brain do we see COMT gene associations with transient firing?” they found such associations in the right hippocampus.
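The tonic/phasic logic can be caricatured in a few lines of Python.  To be clear, this is not the fMRI analysis in the de Frias paper, and the clearance rates below are invented illustrative numbers: the sketch simply treats COMT genotype as the speed with which dopamine is cleared after a release event – slow clearance (Met/Met) sustains the signal, friendly to tonic coding, while fast clearance (Val/Val) leaves only a sharp transient, friendly to phasic coding.

```python
import math

# Cartoon of the Bilder et al. tonic/phasic idea: model dopamine after a
# single release event as an exponential decay whose rate depends on COMT
# activity.  All parameter values are made up for illustration.

def dopamine_trace(clearance_rate, steps=5):
    """Dopamine level over time after a single release event at t=0."""
    return [math.exp(-clearance_rate * t) for t in range(steps)]

met_met = dopamine_trace(clearance_rate=0.2)  # slow clearance -> sustained signal
val_val = dopamine_trace(clearance_rate=1.0)  # fast clearance -> brief transient

print(round(met_met[4], 2))  # 0.45 -- much of the pulse still present
print(round(val_val[4], 2))  # 0.02 -- the signal is essentially gone
```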

Thus, what can someone who carries the Met/Met genotype at rs4680 say to their Val/Val lunch-mate next time they visit a Burger King?  “I have the gene for obesity, or impulsivity, or ‘this’ or ‘that’”?  Perhaps not.  The gene influences different parts of each person’s neural networks in different ways.  The Met/Met carrier has the advantage in pondering (perhaps more prone to annoyingly gaze at the menu forever) whilst the Val/Val carrier has the advantage in action selection (perhaps ordering promptly but not getting the best burger and fries combo).


Read Full Post »

Last year I dug a bit into the area of epigenetics (indexed here) and learned that the methylation (CH3) and acetylation (COCH3) of genomic DNA & histones, respectively, can have dramatic effects on the structure of DNA and its accessibility to transcription factors – and hence – gene expression.  Many of the papers I covered suggested that the environment can influence the degree to which these so-called “epigenetic marks” are covalently bonded onto the genome during early development.  Thus, the thinking goes, the early environment can modulate gene expression in ways that are long-lasting – even transgenerational.  The idea is a powerful one to be sure.  And a scary one as well, as parents who read this literature may fret that their children (and grandchildren) can be epigenetically scarred by early nutritional, physical and/or psycho-social stress.  I must admit that, as a parent of young children myself, I began to wonder if I might be negatively influencing the epigenome of my children.

I’m wondering how much physical and/or social stress is enough to cause changes in the epigenome?  Does the concern about epigenetics only apply to exposure to severe stress?  or run of the mill forms of stress?  How much do we know about this?

This year, I hope to explore this line of inquiry further.  For starters, I came across a fantastic paper by Fraga et al., entitled, “Epigenetic differences arise during the lifetime of monozygotic twins” [doi:10.1073/pnas.0500398102].  The group carries out a remarkably straightforward and time-honored approach – a twin study – to ask how much identical twins differ at the epigenetic level.  Since identical twins have the same genome sequence, any differences in their physiology, behavior etc. are, strictly speaking, due to the way in which the environment (from the uterus to adulthood) shapes their development.  Hence, the team of Fraga et al. can compare the amount and location of methyl (CH3) and acetyl (COCH3) groups to see whether the environment has differentially shaped the epigenome.

An analysis of some 40 identical twin pairs from ages 3-74 years old showed that – YES – the environment, over time, does seem to shape the epigenome (in this case of lymphocytes).  The most compelling evidence for me was seen in Figure 4 where the team used a method known as Restriction Landmark Genomic Scanning (RLGS) to compare patterns of methylation in a genome-wide manner.  Using this analysis, the team found that older twin pairs had about 2.5 times as many differences as did the epigenomes of the youngest twin pairs.  These methylation differences also correlated with gene expression differences (older pairs also had more gene expression differences) and they found that the individual who showed the lowest levels of methylation also had the highest levels of gene expression.  Furthermore, the team finds that twin pairs who lived apart and had more differences in life history were more likely to have epigenetic differences.  Finally, measures of histone acetylation seemed consistent with the gradient of epigenetic change over time and life-history distance.
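A minimal sketch of the twin comparison (with invented loci, not Fraga et al.'s data): treat each twin's epigenome as the set of methylated loci and count the discordant sites.  The paper's RLGS finding amounts to this count being roughly 2.5-fold larger in older twin pairs than in the youngest pairs.

```python
# Toy twin comparison: an "epigenome" is a set of methylated loci; the
# discordance between twins is the symmetric difference of the two sets.

def methylation_discordance(twin_a, twin_b):
    """Number of loci methylated in one twin but not the other."""
    return len(twin_a ^ twin_b)  # ^ is set symmetric difference

# Invented example data: a young pair drifts little, an old pair a lot.
young_pair = ({"L1", "L2", "L3"}, {"L1", "L2", "L4"})
old_pair = ({"L1", "L2", "L3"}, {"L1", "L5", "L6", "L7"})

print(methylation_discordance(*young_pair))  # 2
print(methylation_discordance(*old_pair))    # 5
```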

Thus it seems that, as everyday life progresses, the epigenome changes too.  So, perhaps, one does not need extreme forms of stress to leave long-lasting epigenetic marks on the genome?  Is this true during early life (where the team did not see many differences between pairs)?  and in the brain (the team focused mainly on lymphocytes)?  Are the differences between twins due to the creation of new environmentally-mediated marks or the faulty passage of existing marks from dividing cell-to-cell over time?  Will be fun to seek out information on this.


Read Full Post »

Some quick sketches that might help put the fast-growing epigenetics and cognitive development literature into context.  Visit the University of Utah’s Epigenetics training site for more background!

The genome is just the A,G,T,C bases that encode proteins and various RNA molecules.  The “epi”genome comprises various modifications to the DNA – such as methylation (at C residues) – and the acetylation of histone proteins.  These changes help the DNA form various secondary and tertiary structures that can facilitate or block the interaction of DNA with the transcriptional machinery.

When DNA is highly methylated, it generally is less accessible for transcription and hence gene expression is reduced.  When histone proteins (purple blobs that help DNA coil into a compact shape) are acetylated, the DNA is much more accessible and gene expression goes up.
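This push-pull can be caricatured in a few lines of Python (the linear scoring is my own toy assumption, not a published model): methylation lowers a rough "openness" score and acetylation raises it.

```python
# Toy chromatin-accessibility score: methylation closes, acetylation opens.
# The 0.5 weights are arbitrary illustrative choices.

def accessibility(methylation, acetylation):
    """Both inputs in [0, 1]; returns a rough 0-1 'openness' score."""
    assert 0 <= methylation <= 1 and 0 <= acetylation <= 1
    return max(0.0, min(1.0, 0.5 - 0.5 * methylation + 0.5 * acetylation))

print(round(accessibility(methylation=0.9, acetylation=0.1), 2))  # 0.1 -- mostly closed
print(round(accessibility(methylation=0.1, acetylation=0.9), 2))  # 0.9 -- mostly open
```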

We know that proper epigenetic regulation is critical for cognitive development because mutations in MeCP2 – a protein that binds to methylated C residues – lead to Rett syndrome.  MeCP2 is normally responsible for binding to methylated DNA and recruiting histone de-acetylases (HDACs) to help DNA coil and condense into a closed form that is inaccessible for gene expression (related post here).

When DNA is accessible for gene expression, then it appears that – during brain development – relatively more synaptic spines are produced (related post here).  Is this a good thing?  Rett syndrome would suggest that – NO – too many synaptic spines and too much excitatory activity during brain development may not be optimal.  Neither is too little excitatory (too much inhibitory) activity and too few synaptic spines.  It is likely that you need just the right balance (related post here).  Some have argued (here) that autism & schizophrenia are consequences of too many & too few synapses, respectively, during development.

The sketch above illustrates a theoretical conjecture – not a scenario that has been verified by extensive scientific study.  It tries to explain why epigenetic effects can, in practice, be difficult to disentangle from true genetic effects (changes in the A,G,T,C sequence).  This is because – for one reason – a mother’s experience (extreme stress, malnutrition, chemical toxins) can, based on some evidence, exert an effect on the methylation of her child’s genome.  Keep in mind that methylation is normal and widespread throughout the genome during development.  However, in this scenario, if the daughter’s behavior or physiology were influenced by such methylation, then she could, in theory, upon reaching reproductive age, expose her developing child to an environment that leads to altered methylation (shown here of the granddaughter’s genome).  Thus, an epigenetic change would look much as if a genetic variant were being passed from one generation to the next, but such a genetic variant need not exist (related post here, here) – it’s an epigenetic phenomenon.

Genes such as BDNF have been the focus of many genetic/epigenetic studies (here, here) – however, much, much more work remains to determine just how much stress/malnutrition/toxin exposure is enough to cause such multi-generational effects.  Disentangling the interaction of genetics with the environment (and its influence on the epigenome) is a complex task, and it is very difficult to prove the conjecture/model above, so be sure to read the literature and popular press on these topics carefully.
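The conjecture sketched above can be expressed as a tiny simulation (entirely hypothetical – it encodes the idea, not any measured biology): the methylation mark is never copied directly from mother to child, yet it recurs generation after generation as long as the mark-driven environment recreates it – which is exactly what makes it look like an inherited genetic variant.

```python
# Hypothetical transgenerational loop: a mother's methylation mark shapes
# her child's environment, and that environment re-creates the mark.

def next_generation(stressful_env):
    # The mark is NOT copied through the germ line here; it is re-created
    # whenever the child's early environment is stressful.
    return stressful_env

def lineage(generations, env_from_mark):
    """Track whether the mark is present across successive generations."""
    methylated, history = True, []
    for _ in range(generations):
        history.append(methylated)
        methylated = next_generation(env_from_mark(methylated))
    return history

# If a methylated mother creates a stressful environment, the mark persists
# and looks "inherited":
print(lineage(3, env_from_mark=lambda mark: mark))    # [True, True, True]
# Break the environmental link and the apparent inheritance disappears:
print(lineage(3, env_from_mark=lambda mark: False))   # [True, False, False]
```

The second call is the interesting one: unlike a true genetic variant, the "inheritance" vanishes the moment the environmental link is interrupted.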


Read Full Post »


In previous posts, we have explored some of the basic molecular (de-repression of chromatin structure) and cellular (excess synaptogenesis) consequences of mutations in the MeCP2 gene – a.k.a. the gene whose loss of function gives rise to Rett syndrome.  One of the more difficult aspects of understanding how a mutation in a lowly gene can give rise to changes in cognitive function is bridging the conceptual gap between the biochemical functions of a gene product and its effects on neural network structure and dynamics.  Sure, we can readily acknowledge that neural computations underlie our mental life and that these neurons are simply cells that link up in special ways – but just what is it about the “connecting-up part” that goes wrong during developmental disorders?

In a recent paper entitled, “Intact Long-Term Potentiation but Reduced Connectivity between Neocortical Layer 5 Pyramidal Neurons in a Mouse Model of Rett Syndrome” [doi: 10.1523/jneurosci.1019-09.2009] Vardhan Dani and Sacha Nelson explore this question in great detail.  They address it by directly measuring the strength of neural connections between pyramidal cells in the somatosensory cortex of healthy and MeCP2 mutant mice.  In earlier reports, MeCP2 neurons showed weaker neurotransmission and weaker plasticity (an ability to change the strength of interconnection – often estimated by a property known as “long term potentiation” (LTP – see video)).  In this paper, the authors examined the connectivity of cortical cells using an electrophysiological method known as patch clamp recording and found that early in development, LTP induction was comparable in healthy and MeCP2 mutant animals – even once the animals were old enough to show cognitive symptoms.  During these early stages of development, there were also no differences in baseline neurotransmission between cortical cells in normal and MeCP2 mice.  Hmmm – no differences?  Not at first – however, once the team examined later stages of development (4 weeks of age), it was apparent that the MeCP2 animals had weaker amplitudes of cortical-cortical excitatory neurotransmission.  Closer comparison of when the baseline and LTP deficits occurred suggested that the LTP deficits are secondary to the baseline strength of neurotransmission and connectivity in the developing cortex of MeCP2 animals.

So it seems that MeCP2 can alter the excitatory connection strength of cortical cells.  In the discussion of the paper, the authors point out the importance of a proper balance of inhibition and excitation (yin and yang, if you will) in the construction or “connecting-up part” of neural networks.  Just as Rett syndrome may arise from such a problem in the proper linking-up of cells – which use their excitatory and inhibitory connections to establish balanced feedback loops – so too may other developmental disorders such as autism, Down syndrome and fragile X mental retardation arise from an improper balance of inhibition and excitation.


Read Full Post »


The cognitive and emotional impairments in the autism spectrum disorders can be difficult for parents and siblings to understand and cope with.  Here are some graphics and videos that might assist in understanding how genetic mutations and epigenetic modifications can lead to the various forms of social withdrawal commonly observed in children with autism spectrum disorders.

In this post, the focus is just on the MecP2 gene – where mutations are known to give rise to Rett Syndrome, one of the autism spectrum disorders.  I’ll try to lay out some of the key steps in the typical bare-bones-link-infested-blogger fashion – starting with mutations in the MecP2 gene.  Disclaimer: there are several fuzzy areas and leaps of faith in the points and mouse-model evidence below, and there are many other genes associated with various aspects of autism spectrum disorders that may or may not work in this fashion.  Nevertheless, it seems one can begin to pull a mechanistic thread from gene to social behavior.  Stay tuned for more on this topic.

1. The MecP2 gene encodes a protein that binds to 5-methylcytosine – very simply, a regular cytosine residue with an extra methyl group added at position 5.  Look at the extra -CH3 group on the cytosine residue in the picture at right.  See?  That’s a 5-methylcytosine residue – and it pairs in the DNA double helix with guanosine (G) in the same fashion as does the regular cytosine residue (C).  Now, mutations in the MecP2 gene – such as those found at arginine residue 133 and serine residue 134 – impair the ability of the protein to bind to these 5-methylcytosine residues.  The figure at left illustrates this, and shows how the MecP2 protein lines up with the bulky yellow 5-methylcytosine residues in the blue DNA double helix during binding.

2. When the MecP2 protein is bound to the methylated DNA, it serves as a binding site for another type of protein – an HDAC, or histone deacetylase (for the other proteins in this complex, see p172, section 5.3, of the online book “Chromatin Structure and Gene Expression”).  The binding of the eponymously named HDACs leads to the “de-acetylation” of proteins known as histones.  The movie below illustrates how histone “de-acetylation” leads to the condensation of DNA structure and the repression – or shutting down – of gene expression (when the DNA is tightly coiled, it is inaccessible to transcription factors).  Hence: DNA methylation leads (via MecP2 and HDAC binding) to repression of gene expression.


3. When mutated forms of MecP2 cannot bind, the net result is MORE acetylation and MORE gene expression.  As covered previously here, this may not be a good thing during brain development, since more gene expression can induce the formation of more synapses and – possibly – lead to neural networks that fail to grow and mature in the “normal” fashion.  The figure at right suggests that neural networks with too many synapses may not be appropriately connected and may be locked in to sub-optimal architectures.  Evidence for excessive synaptogenesis is abundant within the autism spectrum disorders.  Neuroligins – a class of genes that have been implicated in autism – are known to function in cell & synaptic adhesion (open access review here), and can alter the balance of excitation/inhibition when mutated – which seems consistent with this heuristic model of neural networks that can be too adhesive or sticky.

4. Cognitive and social impairment can result from poorly functioning neural networks including, but not limited to, the amygdala.  The normal development of neural networks containing the frontal cortex and amygdala is important for proper social and emotional function.  The last piece of the puzzle, then, would be to find evidence for developmental abnormalities in these networks and to show that such abnormalities mediate social and/or emotional function.  Such evidence is abundant.

Regarding the effects of MecP2, however, we can consider the work of Adachi et al., who were able to delete the MecP2 gene – just in the amygdala – of an (albeit adult) mouse.  Doing so led to the disruption of various emotional behaviors – BUT NOT – of the social interaction deficits that are observed when MecP2 is deleted in the entire forebrain.  This was also the case when the team infused HDAC inhibitors into the amygdala, suggesting that loss of transcriptional repression in the adult amygdala may underlie the emotional impairments seen in some autism spectrum disorders.  Hence, such emotional impairments (anxiety etc.) might be treatable in adults (more on this result and its implications for gene therapy later).

Whew!  Admittedly, the more you know, the more you don’t know.  True here, but it is still amazing to see the literature starting to interlink across human-genetic, mouse-genetic and human-functional-imaging levels of analysis.  Hoping this rambling was helpful.


Read Full Post »


The homunculus (argument) is a pesky problem in cognitive science – a little guy who might suddenly appear when you propose a mechanism for decision making, spontaneous action, forethought etc. – and would take credit for the origination of the neural impulse.  While there are many mechanistic models of decision making that have slain the little bugger – by invoking competition between past experience and memory as the source of new thoughts and ideas – one must always tread lightly, I suppose, to be sure that cognitive mechanisms are based completely in neural properties, devoid of a homuncular source.

Still, the human mind must begin somewhere.  After all, it’s just a ball of cells initially, and then a tube, and then some more folds, layers, neurogenesis and neural migration etc. before maturing – miraculously – into a child that one day looks at you and says, “momma” or “dada”.  How do these neural networks come into being?  Who or what guides their development toward that unforgettable “momma (dada)” moment?  A somewhat homuncular “genetic program” – whose instructions we can attribute to millions of years of natural selection?  Did early hominid babies say “momma (dada)”?  Hmmm.  Seems like we might be placing a lot of faith in the so-called “instructions” provided by the genome, but who am I to quibble.

On the other hand, the recent paper by Akhtar et al., “Histone Deacetylases 1 and 2 Form a Developmental Switch That Controls Excitatory Synapse Maturation and Function” [doi:10.1523/jneurosci.0097-09.2009] may change the way you think about cognitive development.  The team explores the function of two very important epigenetic regulators of gene expression – histone deacetylases 1 and 2 (HDAC1, HDAC2) – on the functionality of synapses in early developing mice and mature animals.  By epigenetic, I refer to the role of these genes in regulating chromatin structure, not in direct, site-specific DNA binding.  The way the HDAC genes work is by de-acetylating – removing acetyl groups from histone lysine residues – which restores the lysines’ positive charge and strengthens their grip on the negatively charged phosphate backbone of DNA.  When the histone proteins carry such acetyl groups, that positive charge is neutralized, the histones do NOT bind DNA as tightly, and the DNA molecule is more open and exposed to binding of transcription factors that activate gene expression.  Thus if one (as Akhtar et al. do) turns off a de-acetylating HDAC gene, the resulting animal has a genome that is more open and exposed to transcription factor binding and gene expression.  Less HDAC = more gene expression!
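The "less HDAC = more gene expression" logic can be caricatured as a steady-state balance between acetyl-adding HATs (histone acetyltransferases) and acetyl-removing HDACs.  The formula is a toy assumption of mine, not anything from the paper – it just captures the direction of the effect.

```python
# Toy steady-state model: histone acetylation reflects the balance of
# HAT activity (adding acetyl groups) vs HDAC activity (removing them),
# and more acetylation means more open chromatin and more expression.

def histone_acetylation(hdac_activity, hat_activity=1.0):
    """Fraction of histones acetylated at steady state (toy formula)."""
    return hat_activity / (hat_activity + hdac_activity)

def relative_expression(hdac_activity):
    # More open chromatin -> more expression (simplified 1:1 mapping).
    return histone_acetylation(hdac_activity)

print(relative_expression(hdac_activity=1.0))           # 0.5 -- baseline balance
print(round(relative_expression(hdac_activity=0.1), 2))  # 0.91 -- HDAC knocked down
```

Knocking the HDAC term down by 10-fold pushes the balance toward acetylation and expression, which is the direction of the Akhtar et al. manipulation.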

What were the effects on synaptic function?  To summarize: in early development (neonatal mouse hippocampal cells), cells where the HDAC1 or HDAC2 genes were turned off (either through pharmacologic blockers or via partial deletion of the gene(s) using lentiviral introduction of Cre recombinase) had more synapses and more synaptic electrical activity than did hippocampal cells from control animals.  Keep in mind that the HDACs are located in the nucleus of the neuron and the synapses are far, far away.  Amazingly, they are under the control of an epigenetic regulator of gene expression; hence, ahem, “epigenetic puppetmasters”.  In adult cells, the knockdown of HDACs did not show the same effects on synaptic formation and activity.  Rather, the cells where HDAC2 was shut down showed less synaptic formation and activity (HDAC1 knockdown had no effect).  Again, it is amazing to see effects on synaptic function regulated at such a distance.  Neat!

The authors suggest that the epigenetic regulatory system of HDAC1 & 2 can serve to regulate the overall level of synaptic formation during early cognitive development.  If I understand their comments in the discussion, this may be because you don’t necessarily want too many active synapses during the formation of a neural network.  Might such networks be prone to excitotoxic damage, or perhaps to being locked in to inefficient circuits?  The authors note that HDACs interact with MecP2, a gene associated with Rett Syndrome – a developmental disorder (in many ways similar to autism) where the neural networks underlying cognitive development fail to progress to support higher, more flexible forms of cognition.  Surely the results of Akhtar et al. must be a key to understanding and treating these disorders.

Interestingly, here the controller of these developmental phenotypes is not a “genetic program” but rather an epigenetic one, whose effects are widespread across the genome and heavily influenced by the environment.  So no need for a homunculus here.


Read Full Post »
