Pity the poor brain. What a job it has! Did you know that just reaching into a refrigerator to grab a glass of milk involves at least 50 or so key muscles in the hand, arm, and shoulder, which can, in principle, produce over 1,000,000,000,000,000 possible combinations of muscle contractions? That’s 1,000 times MORE contraction possibilities than there are neurons in the brain (a mere 1,000,000,000,000 neurons). I’m sorry, brain, I’ll keep my hands out of the fridge, I promise!
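(A quick back-of-envelope sketch of my own, not the authors’ actual calculation: if you treat each of those ~50 muscles as a simple on/off unit, the count of possible contraction patterns is already 2^50, which lands right in the 10^15 ballpark.)

```python
# Back-of-envelope arithmetic (my own sketch, not the authors' calculation):
# treat each of ~50 key muscles as a simple on/off unit.
n_muscles = 50
combinations = 2 ** n_muscles        # ~1.13e15 possible contraction patterns
neurons = 1_000_000_000_000          # ~1e12 neurons, the figure used above

print(f"{combinations:.2e} contraction patterns")            # ~1.13e+15
print(f"{combinations / neurons:,.0f}x more than neurons")   # ~1,126x
```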
To accomplish this computational feat, Rodolfo R. Llinas and Sisir Roy, in their paper entitled “The ‘prediction imperative’ as the basis for self-awareness” [doi:10.1098/rstb.2008.0309], suggest that the brain has evolved a number of strategies.
For starters, the authors point out that the brain can lower the computational workload of controlling motor output by sending motor control signals in a non-continuous and pulsatile fashion.
“We see that the underlying nature of movement is not smooth and continuous as our voluntary movements overtly appear; rather, the execution of movement is a discontinuous series of muscle twitches, the periodicity of which is highly regular.”
This computational strategy has the added benefit of making it easier to bind and synchronize motor-movement signals with a constant flow of sensory input:
“a periodic control system may allow for input and output to be bound in time; in other words, this type of control system might enhance the ability of sensory inputs and descending motor command/controls to be integrated within the functioning motor apparatus as a whole.”
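To make the pulsatile idea concrete, here’s a toy sketch (mine, not the paper’s model, and assuming a roughly 10 Hz update period purely for illustration): a controller that only issues a new motor command every 100 ms does far less recomputation over a one-second reach than one that updates every millisecond, and sensory samples taken on that same clock pair up one-to-one with the commands, which is the “bound in time” idea from the quote above.

```python
# Toy illustration (my sketch, not the paper's model): a controller that
# updates its motor command only at a fixed period, rather than continuously.
# Sensory samples taken on the same clock line up one-to-one with commands.

movement_duration_ms = 1000          # a 1-second reach
continuous_rate_ms = 1               # hypothetical "continuous" control: update every ms
pulsatile_period_ms = 100            # assumed ~10 Hz pulsatile update, for illustration

continuous_updates = movement_duration_ms // continuous_rate_ms   # 1000 command updates
pulsatile_updates = movement_duration_ms // pulsatile_period_ms   # 10 command updates

# On the shared 10 Hz clock, each sensory sample pairs directly with one command.
ticks = range(0, movement_duration_ms, pulsatile_period_ms)
bound_pairs = [(f"sense@{t}ms", f"command@{t}ms") for t in ticks]

print(f"continuous-style updates: {continuous_updates}")
print(f"pulsatile updates:        {pulsatile_updates}")
print(bound_pairs[:3])
```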
Another strategy is the use of memory for the purpose of prediction (their paper is actually part of a special theme issue of the Philosophical Transactions of the Royal Society B entitled Predictions in the brain: using our past to prepare for the future). The authors describe how neural circuits in the body and brain are inherently good at learning and storing information, which makes them well suited to using that information to make predictions and pre-prepared plans for expected incoming sensory input. These neural mechanisms may also help reduce the computational load associated with moving and coordinating the body. Interestingly, the authors note,
“while prediction is localized in the CNS, it is a distributed function and does not have a single location within the brain. What is the repository of predictive function? The answer lies in what we call the self, i.e. the self is the centralization of the predictive imperative. The self is not born out of the realm of consciousness—only the noticing of it is (i.e. self-awareness).” Here’s a link to Llinas’ book on where the “self” resides.
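Setting the question of the “self” aside, the prediction-from-memory strategy itself is easy to caricature in code. Here’s a minimal sketch (my own, and definitely not the authors’ model): store past (context, action) → outcome experiences, then treat the most frequently stored outcome as the expected sensory input, so a response can be prepared before the input actually arrives.

```python
# Minimal sketch (my own, not the authors' model) of prediction from memory:
# store past (context, action) -> outcome pairs, then use them to anticipate
# the sensory input an action is about to produce and pre-select a plan.

from collections import defaultdict, Counter

memory = defaultdict(Counter)   # (context, action) -> counts of observed outcomes

def remember(context, action, outcome):
    """Store one experienced outcome for a (context, action) pair."""
    memory[(context, action)][outcome] += 1

def predict(context, action):
    """Return the most frequently experienced outcome, i.e. the expectation."""
    outcomes = memory[(context, action)]
    return outcomes.most_common(1)[0][0] if outcomes else None

# Learn from a few experiences, then predict before acting.
remember("fridge open", "reach for milk", "cold, smooth carton in hand")
remember("fridge open", "reach for milk", "cold, smooth carton in hand")
remember("fridge open", "reach for milk", "fingers bump the shelf")

print(predict("fridge open", "reach for milk"))
# -> 'cold, smooth carton in hand' (the expected input, prepared for in advance)
```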
Lastly, the authors suggest that the genome might encode certain structural and functional aspects of neural development that create a bias for certain types of computation and prime neural networks with a Bayesian type of prior knowledge. Their idea is akin to an organism being “experience expectant” rather than a pure blank slate that has to learn every stimulus-response contingency by trial-and-error. To support their notion of the role of the genome, the authors cite a 2003 study from the Yonas Lab on the development of depth perception. Another related study is covered here.
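A toy way to see what a genome-supplied Bayesian prior buys you (again my own illustration, not anything from the paper or the Yonas Lab study): a learner whose prior already leans toward the right answer needs far less trial-and-error data to land near it than a blank-slate learner starting from a flat prior.

```python
# Toy beta-binomial comparison (my illustration, not from the paper): a learner
# whose prior is already biased toward the right answer (standing in for a
# genome-supplied "experience-expectant" bias) gets closer with less data than
# a blank-slate learner starting from a flat prior.

def posterior_mean(prior_a, prior_b, successes, failures):
    """Mean of the Beta posterior after observing the given outcomes."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

successes, failures = 3, 1   # only four observations of some contingency

blank_slate = posterior_mean(1, 1, successes, failures)    # flat Beta(1, 1) prior
biased_prior = posterior_mean(8, 2, successes, failures)   # informative Beta(8, 2) prior

print(f"blank slate estimate:  {blank_slate:.2f}")    # 0.67
print(f"biased prior estimate: {biased_prior:.2f}")   # 0.79
```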
Methinks that genetic variants might someday be understood in terms of how they bias computational processes. Something to shoot for in the decades to come!
Bit of a layman here, but am I correct in thinking the genome basically ‘loads’ into the knowledge base [in AI terms]?
hmmm … at conception the genome unpacks in the maternal egg soup of proteins and various gene expression/regulation machinery … whose design has been crafted via evolution to initiate a developmental “program” of gene expression … not sure if that would suffice as a knowledge base … yeah, this question is over my head i think! … my talents were less in the theory and more in the lab work area 🙂