Accordingly, the model was implemented at the systems level (i.e., each layer of units is expected to reflect the functioning of a specific brain region) rather than at the micro level of neuronal assemblies or spiking neurons. We assume that a layer represents a cortical region, which computes representations and passes information through its outgoing connections (O'Reilly, 2006). Connections primarily represent white matter pathways, and language processing is underpinned by both cortical regions and their connectivity (Mesulam, 1990). Like real cortical areas, the layers have both afferent and efferent connections. Other than the representations applied at the input and output layers, the model's internal function was left unspecified. In this sense, the internal representations are not present at onset but are formed across the intermediate units and connections in order to maximize performance across the various tasks. Following development or recovery, the nature of the resultant representations has to be probed by the modeler. Three layers of the model were assumed to be the starting (input) and end (output) points of the simulated language activities, and so the representations for these regions were prespecified.
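To make this division of labor concrete, the sketch below illustrates the idea in schematic Python: only the input and output patterns are prespecified, while the intermediate representations emerge through learning. The layer sizes, names, and the standard backpropagation rule are illustrative assumptions, not the published implementation.

```python
import numpy as np

# Minimal sketch: input and output patterns are prespecified; the
# intermediate ("hidden") representations are absent at onset and
# emerge through training. All names and sizes are hypothetical.
rng = np.random.default_rng(0)

n_input, n_hidden, n_output = 30, 20, 30                    # assumed layer sizes
inputs  = rng.integers(0, 2, (5, n_input)).astype(float)    # prespecified input codes
targets = rng.integers(0, 2, (5, n_output)).astype(float)   # prespecified output codes

# Connections stand in for white-matter pathways; random at onset.
W_in  = rng.normal(0.0, 0.1, (n_input, n_hidden))
W_out = rng.normal(0.0, 0.1, (n_hidden, n_output))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for _ in range(2000):
    hidden = sigmoid(inputs @ W_in)        # emergent intermediate representation
    output = sigmoid(hidden @ W_out)
    err = targets - output
    # Standard backpropagation: adjust connectivity to maximize task performance.
    d_out = err * output * (1.0 - output)
    d_hid = (d_out @ W_out.T) * hidden * (1.0 - hidden)
    W_out += lr * hidden.T @ d_out
    W_in  += lr * inputs.T @ d_hid

# After training, the learned internal representations must be probed by
# the modeler, e.g., by inspecting the hidden activations for each input.
print(np.round(sigmoid(inputs @ W_in), 2))
```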
The primary auditory area and surrounding region, including pSTG, process complex acoustic stimuli, including phonetic contrasts (Chang et al., 2010; Griffiths, 2002). Accordingly, the corresponding input layer of the model coded phonetic-based auditory inputs for all the words in the training set and for novel forms (used to test generalization).
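As a purely hypothetical illustration of such a phonetic-based coding, each phoneme slot might be represented by a small binary feature vector, with a word's input pattern formed by concatenating its slots. The feature inventory below is invented for illustration and is not the scheme used in the model.

```python
# Hypothetical phonetic coding: each phoneme maps to a small binary
# feature vector; a word's input pattern is the concatenation of its
# phoneme slots. Feature values here are invented for illustration.
PHONETIC_FEATURES = {
    "k": (1, 0, 0, 1), "a": (0, 1, 1, 0), "t": (1, 0, 1, 1),
    "p": (1, 1, 0, 0), "i": (0, 1, 0, 0),
    "_": (0, 0, 0, 0),  # padding for words shorter than the slot template
}

def encode_word(word: str, slots: int = 3) -> list[int]:
    """Concatenate per-phoneme feature vectors into one input pattern."""
    padded = word.ljust(slots, "_")[:slots]
    pattern: list[int] = []
    for phoneme in padded:
        pattern.extend(PHONETIC_FEATURES[phoneme])
    return pattern

print(encode_word("kat"))  # a trained word
print(encode_word("pit"))  # a novel form, used to test generalization
```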
Anterior insular cortex has been demonstrated to play a key role in speech output (Baldo et al., 2011; Dronkers, 1996; Wise et al., 1999). Although classically implicated in speech, the role of pars opercularis is more controversial (Dhanjal et al., 2008; Wise et al., 1999). We therefore assumed that this general insular-motor area plays a key role in speech output, and so the corresponding layer in the model was set to generate the speech output. Finally, inferolateral (ventral) anterior temporal cortex (vATL) is known to be a key locus for transmodal semantic representations and thus crucial both for multimodal comprehension and for the semantic input to speech production/naming (Lambon Ralph et al., 2001; Rogers et al., 2004; Visser and Lambon Ralph, 2011). This is not to say that vATL is the only key region for semantic cognition: other regions provide modality-specific sources of information or executive mechanisms for controlled semantic processing (Jefferies and Lambon Ralph, 2006). These additional components of semantic cognition, however, are crucial for more complex tasks and nonverbal semantic processing rather than for the single-word comprehension and speaking/naming tasks included in the model's training regime.
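To summarize the mapping just described, the sketch below lists which model layers correspond to which regions and whether their representations were prespecified or left to emerge. The layer names and the listing itself are hypothetical conveniences, not the authors' code.

```python
from dataclasses import dataclass

# Summary of the layer-to-region mapping described above; names are assumed.
@dataclass
class Layer:
    name: str           # model-internal label (hypothetical)
    region: str         # cortical region the layer is taken to reflect
    prespecified: bool  # True if its representations were fixed in advance

MODEL_LAYERS = [
    Layer("auditory_input", "primary auditory cortex and pSTG", True),
    Layer("speech_output", "insular-motor cortex", True),
    Layer("semantic", "ventral anterior temporal lobe (vATL)", True),
    Layer("intermediate", "intervening association regions", False),
]

for layer in MODEL_LAYERS:
    status = "prespecified" if layer.prespecified else "emergent"
    print(f"{layer.name:>15}: {layer.region} ({status})")
```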