
Supervised structure learning

journal contribution
posted on 2024-11-29, 10:24 authored by Karl J Friston, Lancelot Da Costa, Alexander Tschantz, Alex Kiefer, Tommaso Salvatori, Victorita Neacsu, Magnus Koudahl, Conor Heins, Noor Sajid, Dimitrije Markovic, Thomas Parr, Tim Verbelen, Christopher Buckley
This paper concerns structure learning or discovery of discrete generative models. It focuses on Bayesian model selection and the assimilation of training data or content, with a special emphasis on the order in which data are ingested. A key move—in the ensuing schemes—is to place priors on the selection of models, based upon expected free energy. In this setting, expected free energy reduces to a constrained mutual information, where the constraints inherit from priors over outcomes (i.e., preferred outcomes). The resulting scheme is first used to perform image classification on the MNIST dataset to illustrate the basic idea, and then tested on a more challenging problem of discovering models with dynamics, using a simple sprite-based visual disentanglement paradigm and the Tower of Hanoi (cf., blocks world) problem. In these examples, generative models are constructed autodidactically to recover (i.e., disentangle) the factorial structure of latent states—and their characteristic paths or dynamics.
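As a rough, hypothetical sketch of the quantity the abstract refers to (not the paper's implementation), the snippet below scores a one-step discrete generative model by an expected free energy written as the negative mutual information between latent states and outcomes, minus the expected log-preference over outcomes. The names A (likelihood matrix), D (state prior) and log_C (log-preferences) are illustrative assumptions introduced here, not taken from the paper.

```python
import numpy as np

def expected_free_energy(A, D, log_C):
    """Illustrative expected free energy for a one-step discrete model.

    A      : (num_outcomes, num_states) likelihood matrix P(o | s)
    D      : (num_states,) prior over latent states
    log_C  : (num_outcomes,) log prior preferences over outcomes

    Returns G = -I(O; S) - E_{Q(o)}[log C(o)], i.e. the negative of a
    mutual information "constrained" by prior preferences over outcomes.
    """
    Q_o = A @ D                          # predictive distribution over outcomes
    joint = A * D[None, :]               # joint Q(o, s) = P(o | s) D(s)
    outer = Q_o[:, None] * D[None, :]    # product of marginals Q(o) D(s)
    mask = joint > 0
    mutual_info = np.sum(joint[mask] * np.log(joint[mask] / outer[mask]))
    expected_preference = Q_o @ log_C    # E_{Q(o)}[log C(o)]
    return -mutual_info - expected_preference

# Example: a two-state, two-outcome model with a mildly preferred outcome
A = np.array([[0.9, 0.1],
              [0.1, 0.9]])
D = np.array([0.5, 0.5])
log_C = np.log(np.array([0.7, 0.3]))
print(expected_free_energy(A, D, log_C))
```

This sketch is only meant to show why priors over model selection based on expected free energy favour models whose outcomes are both informative about latent states and consistent with preferred outcomes; the ordering of training data and the structure-learning scheme itself are described in the paper.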

History

Publication status

  • Published

File Version

  • Published version

Journal

Biological Psychology

ISSN

0301-0511

Publisher

Elsevier BV

Volume

193

Article number

108891

Department affiliated with

  • Informatics Publications

Institution

University of Sussex

Full text available

  • Yes

Peer reviewed?

  • Yes