Two ways of learning associations
journal contribution
posted on 2023-06-07, 18:42 authored by Luke Boucher, Zoltan Dienes

How people learn chunks or associations between adjacent items in sequences was modelled. Two previously successful models of how people learn artificial grammars were contrasted: the CCN, a network version of the competitive chunker of Servan-Schreiber and Anderson [J. Exp. Psychol.: Learn. Mem. Cogn. 16 (1990) 592], which produces local, compositionally structured chunk representations acquired incrementally; and the simple recurrent network (SRN) of Elman [Cogn. Sci. 14 (1990) 179], which acquires distributed representations through error correction. The models' susceptibility to two types of interference was determined: prediction conflicts, in which a given letter predicts two other letters that appear next with unequal frequency; and retroactive interference, in which the prediction made by a letter changes in the second half of training. The predictions of the models were determined by exploring parameter space and seeing how densely different regions of the space of possible experimental outcomes were populated by model outcomes. For both types of interference, the human data fell squarely in regions characteristic of CCN performance but not of SRN performance.
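For readers unfamiliar with the second architecture, the sketch below is a minimal Elman-style SRN in plain numpy: the previous hidden state is copied back as a context input, and the network learns to predict the next letter by error correction (one-step backpropagation, without unrolling through time, as in Elman's original formulation). The layer sizes, learning rate, and toy letter sequence, which builds in a prediction conflict of the kind described in the abstract (B followed by C three times as often as by D), are illustrative assumptions, not the paper's actual simulation settings.

```python
# Minimal sketch of an Elman (1990) simple recurrent network (SRN).
# All hyperparameters and the training sequence are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

letters = ['A', 'B', 'C', 'D']
n = len(letters)
idx = {c: i for i, c in enumerate(letters)}

hidden_size = 8
lr = 0.1

# Weights: input->hidden, context (previous hidden)->hidden, hidden->output.
W_ih = rng.normal(0, 0.5, (hidden_size, n))
W_hh = rng.normal(0, 0.5, (hidden_size, hidden_size))
W_ho = rng.normal(0, 0.5, (n, hidden_size))

def one_hot(c):
    v = np.zeros(n)
    v[idx[c]] = 1.0
    return v

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy sequence with a prediction conflict: within each block, B is followed
# by C three times and by D once, so the two continuations have unequal
# frequency.
sequence = list('ABCABCABCABD' * 50)

for epoch in range(20):
    context = np.zeros(hidden_size)           # previous hidden state
    for t in range(len(sequence) - 1):
        x, target = one_hot(sequence[t]), one_hot(sequence[t + 1])
        hidden = sigmoid(W_ih @ x + W_hh @ context)
        output = sigmoid(W_ho @ hidden)

        # Error correction: propagate the prediction error back one step.
        err_out = (output - target) * output * (1 - output)
        err_hid = (W_ho.T @ err_out) * hidden * (1 - hidden)
        W_ho -= lr * np.outer(err_out, hidden)
        W_ih -= lr * np.outer(err_hid, x)
        W_hh -= lr * np.outer(err_hid, context)

        context = hidden                       # copy hidden state to context

# After training, the prediction for input B (here probed with an empty
# context, a simplification) should favour C over D.
pred = sigmoid(W_ho @ sigmoid(W_ih @ one_hot('B') + W_hh @ np.zeros(hidden_size)))
print({c: round(float(pred[i]), 2) for c, i in idx.items()})
```

The copy-back context layer is what distinguishes the SRN from a feedforward predictor: the distributed hidden representation at each step blends the current letter with a trace of the preceding sequence, which is why the abstract contrasts it with the CCN's local, compositionally structured chunks.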
History
Publication status
- Published
Journal
- Cognitive Science
ISSN
- 0364-0213
External DOI
Issue
- 6
Volume
- 27
Page range
- 807-842
Pages
- 36
Department affiliated with
- Psychology Publications
Full text available
- No
Peer reviewed?
- Yes