We modelled how people learn chunks, or associations between adjacent items in sequences. Two previously successful models of how people learn artificial grammars were contrasted: the CCN, a network version of the competitive chunker of Servan-Schreiber and Anderson [J. Exp. Psychol.: Learn. Mem. Cogn. 16 (1990) 592], which incrementally acquires local, compositionally structured chunk representations; and the simple recurrent network (SRN) of Elman [Cogn. Sci. 14 (1990) 179], which acquires distributed representations through error correction. We determined the models' susceptibility to two types of interference: prediction conflicts, in which a given letter predicts two other letters that follow it with unequal frequency; and retroactive interference, in which the prediction made by a letter changes in the second half of training. Each model's predictions were derived by exploring its parameter space and measuring how densely model outcomes populated different regions of the space of possible experimental outcomes. For both types of interference, the human data fell squarely in regions characteristic of CCN performance but not of SRN performance.
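To make the chunking side of the contrast concrete, the toy sketch below (Python) captures only the flavour of competitive chunking, not the CCN itself: adjacent units gain chunk strength each time they are scanned together, pairs whose strength crosses a threshold are perceived as single chunks, and larger chunks are then built compositionally out of smaller ones. The threshold, the strength increment, and the greedy parsing scheme are illustrative assumptions; the CCN's competitive retrieval and its network implementation are omitted.

```python
from collections import defaultdict

def chunk_parse(seq, strength, threshold=2.0):
    """Greedy bottom-up chunking pass over a sequence.

    Every adjacent pair that is scanned gains strength (incremental
    learning); a pair whose strength reaches the threshold is merged
    into a single chunk, and parsing repeats so that chunks can
    themselves be chunked (compositional structure).
    """
    units = list(seq)
    merged = True
    while merged:
        merged = False
        out, i = [], 0
        while i < len(units):
            if i + 1 < len(units):
                pair = (units[i], units[i + 1])
                strength[pair] += 1.0            # incremental strengthening
                if strength[pair] >= threshold:
                    out.append(pair)             # perceive the pair as one chunk
                    i += 2
                    merged = True
                    continue
            out.append(units[i])
            i += 1
        units = out
    return units

strength = defaultdict(float)
for epoch in range(5):
    # With repeated exposure, 'AB' chunks first, then larger composites form.
    print(chunk_parse("ABCABC", strength))
```

Running this shows the representation becoming progressively more chunked across exposures: the parse shrinks from six letters to nested chunks such as (('A','B'),'C'), which is the local, compositional structure the abstract attributes to the CCN.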
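The SRN can be sketched just as briefly. The NumPy code below is a minimal Elman-style network trained by error correction on next-item prediction, with the hidden layer copied back as context on the following step; it also sets up a toy prediction conflict in which item 0 is followed by item 1 on 75% of trials and by item 2 on 25%. Layer sizes, the learning rate, and the 75/25 frequencies are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def one_hot(i, n):
    v = np.zeros(n)
    v[i] = 1.0
    return v

class SRN:
    """Minimal Elman-style simple recurrent network: the hidden layer is
    copied back as context input on the next time step, and weights are
    adjusted by error correction on next-item prediction."""

    def __init__(self, n_items, n_hidden, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.n = n_items
        self.lr = lr
        self.W_in = rng.normal(0.0, 0.5, (n_hidden, n_items + n_hidden))
        self.W_out = rng.normal(0.0, 0.5, (n_items, n_hidden))
        self.context = np.zeros(n_hidden)

    def step(self, item, target=None):
        """Present one item; optionally learn to predict the next one."""
        x = np.concatenate([one_hot(item, self.n), self.context])
        h = 1.0 / (1.0 + np.exp(-self.W_in @ x))    # hidden layer
        y = 1.0 / (1.0 + np.exp(-self.W_out @ h))   # predicted next item
        if target is not None:
            t = one_hot(target, self.n)
            d_out = (y - t) * y * (1.0 - y)          # squared-error deltas
            d_hid = (self.W_out.T @ d_out) * h * (1.0 - h)
            self.W_out -= self.lr * np.outer(d_out, h)
            self.W_in -= self.lr * np.outer(d_hid, x)
        self.context = h                             # Elman copy-back
        return y

# Toy "prediction conflict": item 0 is followed by item 1 on 75% of
# trials and by item 2 on 25%, so the two continuations compete.
rng = np.random.default_rng(42)
seq = []
for _ in range(2000):
    seq += [0, 1] if rng.random() < 0.75 else [0, 2]

net = SRN(n_items=3, n_hidden=8, lr=0.2)
for t in range(len(seq) - 1):
    net.step(seq[t], target=seq[t + 1])

print(net.step(0))  # output units 1 and 2 should reflect the 75/25 split
```

Note that, as in Elman's original copy-back scheme, the error signal is not propagated back through the context connections; the recurrence enters only through the copied hidden state.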
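Finally, the parameter-space method itself reduces to: sample many parameter settings, run the model at each, record an outcome statistic per run, and see which regions of outcome space are densely populated. The sketch below reuses the SRN class from the previous block; the parameter ranges and the outcome measure (the relative strength of the frequent continuation after item 0) are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def run_once(lr, n_hidden, seed):
    """Train one SRN on the 75/25 conflict and return an outcome statistic."""
    local = np.random.default_rng(seed)
    seq = []
    for _ in range(1000):
        seq += [0, 1] if local.random() < 0.75 else [0, 2]
    net = SRN(n_items=3, n_hidden=n_hidden, lr=lr, seed=seed)
    for t in range(len(seq) - 1):
        net.step(seq[t], target=seq[t + 1])
    y = net.step(0)
    return y[1] / (y[1] + y[2])   # dominance of the frequent continuation

# Sample the parameter space and see where the model's outcomes pile up.
outcomes = [run_once(lr=rng.uniform(0.01, 0.5),
                     n_hidden=int(rng.integers(2, 20)),
                     seed=int(rng.integers(1_000_000)))
            for _ in range(100)]
hist, _ = np.histogram(outcomes, bins=10, range=(0.0, 1.0))
print(hist)  # densely populated bins mark the model's characteristic region
```

Under this scheme, a model "predicts" whatever region of outcome space it populates densely across plausible parameter settings, which is what licenses the comparison with where the human data fall.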