
The role of capacity constraints in Convolutional Neural Networks for learning random versus natural data

journal contribution
posted on 2024-10-01, 10:54, authored by C Tsvetkov, G Malhotra, Benjamin Evans, JS Bowers
Convolutional neural networks (CNNs) are often described as promising models of human vision, yet they show many differences from human abilities. We focus on a superhuman capacity of top-performing CNNs, namely their ability to learn very large datasets of random patterns. We verify that human learning on such tasks is extremely limited, even with few stimuli. We argue that this performance difference is due to CNNs’ overcapacity, and we introduce biologically inspired mechanisms to constrain it while retaining the good test-set generalisation to structured images that is characteristic of CNNs. We investigate the efficacy of adding noise to hidden units’ activations, restricting early convolutional layers with a bottleneck, and using a bounded activation function. Internal noise was the most potent intervention and the only one which, by itself, could reduce random-data performance in the tested models to chance levels. We also investigated whether networks with biologically inspired capacity constraints show improved generalisation to out-of-distribution stimuli, but little benefit was observed. Our results suggest that constraining networks with biologically motivated mechanisms paves the way for closer correspondence between network and human performance, although the few manipulations we have tested are only a small step towards that goal.
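To make the three capacity constraints named in the abstract concrete, here is a minimal PyTorch sketch. It is not the authors’ implementation: the noise scale, bottleneck width, layer sizes, and choice of hardtanh as the bounded activation are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class NoisyReLU(nn.Module):
    """ReLU followed by additive Gaussian noise on the hidden activations,
    applied only during training. The scale sigma is an assumption, not a
    value from the paper."""
    def __init__(self, sigma=0.5):
        super().__init__()
        self.sigma = sigma

    def forward(self, x):
        x = torch.relu(x)
        if self.training:
            x = x + self.sigma * torch.randn_like(x)
        return x

class ConstrainedCNN(nn.Module):
    """Toy CNN combining the three constraints the abstract describes:
    internal activation noise, a narrow bottleneck in the early
    convolutional layers, and a bounded activation in later layers.
    Channel widths are arbitrary choices for illustration."""
    def __init__(self, num_classes=10, bottleneck_channels=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1),
            NoisyReLU(sigma=0.5),                    # internal noise
            nn.Conv2d(32, bottleneck_channels, 1),   # early bottleneck
            NoisyReLU(sigma=0.5),
            nn.Conv2d(bottleneck_channels, 64, 3, padding=1),
            nn.Hardtanh(),                           # bounded activation
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Usage: under these constraints, fitting randomly labelled patterns
# should be much harder than fitting structured images.
model = ConstrainedCNN()
logits = model(torch.randn(8, 3, 32, 32))  # -> shape (8, 10)
```

The design point the abstract makes is that the noise injection acts directly on hidden-unit activations during learning, which is why it appears here as a module in the forward pass rather than as input augmentation.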

History

Publication status

  • Published

File Version

  • Published version

Journal

Neural Networks

ISSN

0893-6080

Publisher

Elsevier BV

Volume

161

Page range

515-524

Department affiliated with

  • Informatics Publications

Institution

University of Sussex

Full text available

  • Yes

Peer reviewed?

  • Yes