Auditory Neocortex

Although the various cytoarchitectural functional regions are not well demarcated by Brodmann's maps, the superior temporal lobe and the anterior inferior and anterior middle temporal lobes can be loosely defined as auditory cortex.

**Connections**
These areas are linked together by local circuit (inter)neurons and via a rich belt of projection fibers, including the arcuate and inferior fasciculi. These fasciculi innervate, inferiorly, the amygdala and entorhinal cortex and, posteriorly, the inferior parietal lobule. That is, the auditory and language areas of the brain are linked: from the amygdala to the auditory neocortex, to the inferior parietal lobule, to Broca's speech area in the frontal lobe, and back again.

Via the arcuate and inferior fasciculi, the inferior temporal lobe, entorhinal cortex, and amygdala receive complex auditory information from, and transfer it to, the primary and secondary auditory cortex, which simultaneously receives auditory input from the medial geniculate nucleus of the thalamus, the pulvinar, and (sparingly) the midbrain.

It is noteworthy that immediately beneath the insula, approaching the auditory neocortex, lies the claustrum, which maintains rich interconnections with the amygdala, the insula, and, to some extent, the auditory cortex.

**CORTICAL ORGANIZATION**

The functional neural architecture of the auditory cortex is quite similar to that of the somesthetic and visual cortex in that neurons in the same vertical column have the same functional properties and are activated by the same type or frequency of auditory stimulus. The auditory cortex, therefore, is basically subdivided into discrete columns which extend from the white matter (layer VI/6b) to the pial surface (layer I). Although most neurons in a single column receive excitatory input from the contralateral ear, some receive input from the ipsilateral ear, which may exert excitatory or inhibitory influences so as to suppress, within the same column, input from either ear. These interactions have been referred to as summation interactions (excitatory/excitatory) and suppression interactions (inhibitory/inhibitory, inhibitory/excitatory). Moreover, some columns tend to engage in excitatory summation, whereas others tend to engage in inhibitory suppression--a process that would contribute to the focusing of auditory attention and the elimination of neural noise, thereby promoting the ability to selectively attend to motivationally significant environmental and animal sounds, including human vocalizations.
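The summation/suppression distinction can be caricatured with a toy calculation. This is an illustrative sketch only, not a physiological model; the function name and numeric values are invented for the example:

```python
# Toy sketch of excitatory/inhibitory interactions within a cortical column.
# ipsi_sign = +1 models a summation interaction (excitatory/excitatory);
# ipsi_sign = -1 models a suppression interaction (excitatory/inhibitory).
def column_response(contra_input, ipsi_input, ipsi_sign):
    """Combine contralateral (excitatory) and ipsilateral input.

    The response is floored at zero, since a neuron's firing rate
    cannot go negative.
    """
    return max(0.0, contra_input + ipsi_sign * ipsi_input)

# Summation column: input from both ears adds, amplifying the signal.
summation = column_response(1.0, 0.6, +1)    # ~1.6

# Suppression column: ipsilateral input damps the response,
# reducing neural noise.
suppression = column_response(1.0, 0.6, -1)  # ~0.4
```

The contrast between the two columns is the point: the same two inputs either reinforce one another or cancel, which is one way selective attention to a signal could be sharpened.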

In 80-90% of right handers and in 50-80% of left handers, the left hemisphere is dominant for expressive and receptive speech. The auditory cortex, including Wernicke's area (i.e., the planum temporale), is generally larger in the left temporal lobe. Of brains studied, 65% had a larger left planum temporale, 25% had a larger right, and 10% showed no difference. It is argued that the larger left planum temporale is a significant factor in the establishment of left hemisphere dominance for language.

**AUDITORY TRANSMISSION FROM THE COCHLEA TO THE TEMPORAL LOBE**
Within the cochlea of the inner ear are tiny hair cells which serve as sensory receptors. These cells give rise to axons which form the cochlear division of the 8th cranial nerve, i.e., the auditory nerve. This rope of fibers exits the inner ear and terminates in the cochlear nucleus, which overlaps and is located immediately adjacent to the vestibular nucleus, from which it evolved within the brainstem. Once auditory stimuli are received in the cochlear nucleus, there follows a series of transformations as this information is relayed to various nuclei:

- the superior olivary complex
- the nucleus of the lateral lemniscus of the brainstem
- the inferior colliculus of the midbrain
- the medial geniculate nucleus of the thalamus
- the amygdala (which extracts features that are emotionally or motivationally significant)
- the cingulate gyrus

Auditory information is then relayed from the medial geniculate nucleus of the thalamus, as well as via the amygdala (through the inferior fasciculus), to Heschl's gyrus.

**FILTERING, FEEDBACK & TEMPORAL-SEQUENTIAL REORGANIZATION**

The old cortical centers located in the midbrain and brainstem evolved long before the appearance of neocortex and became specialized for performing a considerable degree of auditory analysis long before mammals evolved. Auditory information is projected to the brainstem from the vestibulocochlear (8th cranial) nerve and from the medial geniculate nucleus of the thalamus. This information is then sent to the inferior colliculus of the midbrain, and in mammals it is transmitted onward to the auditory neocortex. Moreover, many of these old cortical nuclei also project back to each other, such that each subcortical structure may hear and analyze the same sound repeatedly. In this manner the brain is able to heighten or diminish the amplitude of various sounds via feedback adjustment. This same process continues in the neocortex, which has the advantage of receiving signals that have already been highly processed and analyzed. It is because primary auditory neurons are especially responsive to the temporal sequencing of acoustic stimuli, coupled with their capacity to extract non-random sounds from noise, that language-related sounds begin to be organized and recognized.

Neurons in the primary auditory cortex can determine and recognize differences and similarities between harmonic complex tones, and they demonstrate auditory response patterns that vary with lower and higher frequencies and with specific tones. Some display "tuning bandwidths" for pure tones, whereas others are able to identify up to seven components of harmonic complex tones. In this manner, pitch can also be discerned.
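What it means to "identify up to seven components of a harmonic complex tone" can be illustrated with a Fourier transform, which is only loosely analogous to what these neurons do. The sample rate, fundamental frequency, and peak threshold below are arbitrary choices for the example:

```python
import numpy as np

fs = 8000                      # sample rate in Hz (chosen for the example)
t = np.arange(0, 1.0, 1 / fs)  # one second of signal
f0 = 200                       # fundamental frequency in Hz

# Harmonic complex tone: fundamental plus six harmonics (seven components).
signal = sum(np.sin(2 * np.pi * f0 * k * t) for k in range(1, 8))

# Magnitude spectrum; each harmonic appears as a sharp spectral peak.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), 1 / fs)

# Pick out the peaks that stand well above the noise floor.
components = freqs[spectrum > 0.5 * spectrum.max()]
# components -> [200. 400. 600. 800. 1000. 1200. 1400.]
```

All seven components are recovered, and the lowest one (200 Hz) corresponds to the perceived pitch of the complex tone.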

**Sustained Auditory Activity**

One of the main functions of the primary auditory receptive area appears to be the retention of sounds for brief time periods (up to a second) so that temporal and sequential features may be extracted and discrepancies in spatial location identified; i.e., so that we can determine where a sound may have originated.
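One standard signal-processing analogue of using spatial discrepancies to localize a sound is estimating the interaural time difference by cross-correlating the two ear signals. This is a sketch under assumed values (sample rate, delay), not a claim about the neural implementation:

```python
import numpy as np

fs = 44100                        # sample rate in Hz (assumed)
rng = np.random.default_rng(0)
sound = rng.standard_normal(1024)  # a noise burst standing in for a sound

delay = 20                         # right ear lags by 20 samples (~0.45 ms)
left = sound
right = np.concatenate([np.zeros(delay), sound[:-delay]])

# Cross-correlate the two ear signals; the lag at the peak is the
# interaural time difference, which indicates the sound's direction.
corr = np.correlate(right, left, mode="full")
lags = np.arange(-len(left) + 1, len(left))
estimated_delay = lags[np.argmax(corr)]
# estimated_delay -> 20  (sound reached the left ear first)
```

A positive lag means the left ear led, placing the source on the listener's left; the brainstem's superior olivary complex is classically credited with this kind of timing comparison.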

Accordingly, the left temporal lobe becomes increasingly active as word length increases, presumably due to the increased processing necessary. Moreover, via their sustained activity, these neurons are able to prolong (perhaps via a perseverating feedback loop with the thalamus) the duration of certain sounds so that they are more amenable to analysis--which may also explain why activity increases in response to unfamiliar words. In this manner, even complex sounds can be broken down into components which are then separately analyzed. Hence, sounds can be perceived as sustained temporal sequences. Although it is apparent that the auditory regions of both cerebral hemispheres are capable of discerning and extracting temporal-sequential rhythmic acoustics, the left temporal lobe is clearly superior in this capacity.



**Right ear (left temporal lobe) has been shown to be dominant for:**

- perception of real words
- word lists
- numbers
- backwards speech
- Morse code
- consonants
- consonant-vowel syllables
- nonsense syllables
- transitional elements of speech
- single phonemes
- rhymes

In addition, activity significantly increases in the left hemisphere during language tasks, including reading.

In part, the association of the left hemisphere and left temporal lobe with complex temporal-sequential and linguistic analysis is due to its interconnections with the inferior parietal lobule.

The language capacities of the left temporal lobe are also made possible via feedback from "subcortical" auditory neurons and via sustained (vs. diminished) activity and analysis. That is, because of these feedback loops, the importance and even the order of the sounds perceived can be changed, filtered, or heightened; an extremely important development in regard to the acquisition of human language. In this manner, sound elements such as consonants, vowels, phonemes, and morphemes can be more readily identified, particularly within the auditory neocortex of the left half of the brain.

Via these interactions and feedback loops, sounds can be repeated, their order can be rearranged, and the amplitude of specific auditory signals can be enhanced whereas others are filtered out. It is in this manner, coupled with experience and learning (Edeline et al. 1990; Diamond & Weinberger 1989), that fine tuning of the nervous system occurs so that specific signals are attended to, perceived, processed, committed to memory, and so on. A significant degree of plasticity in response to experience, as well as such auditory filtering, occurs throughout the brain, not only in regard to sound but to visual and tactile information as well.


**Right temporal lobe (left ear) has been shown to be dominant for:**

- acoustically related sounds
- non-verbal environmental acoustics (e.g. wind, rain, animal noises)
- prosodic-melodic nuances
- sounds which convey emotional meaning
- most aspects of music, including tempo and meter

**HEARING SOUNDS & LANGUAGE**

Human speech consists of about 12-60 units of sound, depending on whether one is speaking Hawaiian or English. English vocabulary consists of several hundred thousand words which are based on combinations of just 45 different sounds. Monkeys and apes use between 20 and 25 units of sound, whereas the fox uses 36. However, these animals cannot string these sounds together so as to create a spoken language. Most animals tend to use only a few units of sound at one time, depending on their situation, e.g., lost, afraid, playing. Humans, by contrast, combine these sounds to make a huge number of words. In fact, employing only 13 sound units, humans are able to combine them to form five million word sounds.
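The five-million figure is simple combinatorics: with u distinct sound units there are u^k possible sequences of length k. The length cutoff of six used below is my assumption, chosen because it is the shortest cutoff that reproduces the figure in the text:

```python
# Number of distinct sequences of length 1..max_len that can be built
# from `units` distinct sound units: units + units**2 + ... + units**max_len.
def possible_word_sounds(units, max_len):
    return sum(units ** k for k in range(1, max_len + 1))

# With 13 sound units, sequences up to six units long already exceed
# five million, consistent with the figure in the text.
total = possible_word_sounds(13, 6)
# total -> 5229042
```

The same arithmetic shows why an animal that never strings units together is limited to its raw inventory: 13 units used singly yield only 13 distinguishable signals.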


**PHONEMES**

**CONSONANTS AND VOWELS**

**GRAMMAR & AUDITORY CLOSURE**

**UNIVERSAL GRAMMARS**

**LANGUAGE ACQUISITION: FINE TUNING THE AUDITORY SYSTEM**