A novel connectionist architecture of artificial neural networks is presented that models the assignment of meaning to test sentences on the basis of learning from relevantly sparse input. Training and testing sentences are generated from simple recursive grammars, and once trained, the architecture successfully processes thousands of sentences containing deeply embedded clauses, thereby experimentally demonstrating that it exhibits partial semantic systematicity and strong systematicity, two properties that humans also exhibit. The architecture’s novelty derives, in part, from analyzing linguistic meaning on the basis of Cognitive semantics (Langacker, 2008) and the concept of affirmative stimulus meaning (Quine, 1960). The architecture demonstrates one possible way of providing a connectionist processing model of Cognitive semantics. It is further argued that the architecture is oriented towards increased neurobiological and psychological plausibility, and that it is capable of explaining the aforementioned systematicity properties in humans.
Copyright is held by the author.
The author granted permission for the file to be printed and for the text to be copied and pasted.
Thesis advisor: Hadley, Robert F.