Research data for paper "Loss shaping enhances exact gradient learning with Eventprop in Spiking Neural Networks"

Dataset posted on 2025-01-14, authored by Thomas Nowotny, James P Turner and James Knight

The data in this repository was generated in the context of training spiking neural networks for keyword recognition using the Eventprop algorithm. It accompanies the paper 'Loss shaping enhances exact gradient learning with Eventprop in Spiking Neural Networks', Neuromorphic Computing and Engineering (9 January 2025).

The data relates to two benchmarks:

  1. Spiking Heidelberg Digits (SHD) (Cramer et al. 2022)
  2. Spiking Speech Commands (SSC), derived from Google Speech Commands (Warden et al. 2018).

The data was generated and analysed with the code available on GitHub at https://github.com/tnowotny/genn_eventprop

The data is organised into six zip volumes, each corresponding to a parameter scan: four scans of networks trained on the SHD data set and two of networks trained on the SSC data set.

scan_SHD_base_xval.zip

Results from leave-one-speaker-out cross-validation runs on the "base SHD models", i.e. networks trained with Eventprop, with regularisation but without augmentations, and with only one hidden layer. There were 160 parameter combinations:

  1. Four different loss types - LOSS_TYPE: sum, sum_weigh_exp, first_spike_exp, max; each with its individually best settings for HIDDEN_OUTPUT_MEAN, HIDDEN_OUTPUT_STD, LBD_UPPER, ETA
  2. Scaling of LBD_UPPER from its base value by factors 0.1, 0.5, 1.0, 5.0, 10.0
  3. RECURRENT: False, True
  4. TAU_MEM: 20, 40
  5. TAU_SYN: 5, 10

For each of the combinations, there are two files:

SHD_xval_xxxx.json:

A JSON file containing the parameter settings used.

SHD_xval_xxxx_results.txt:

An ASCII file with one row per training epoch, containing the following space-separated metrics (a loading sketch follows the list):

  1. Epoch
  2. training accuracy
  3. training loss
  4. validation accuracy
  5. validation loss
  6. mean number of spikes in the hidden layer
  7. standard deviation of number of spikes in the hidden layer
  8. minimum of number of spikes in the hidden layer
  9. maximum of number of spikes in the hidden layer
  10. mean number of spikes per neuron per trial across a mini-batch
  11. standard deviation of number of spikes per neuron per trial across a mini-batch
  12. minimum number of spikes per neuron per trial across a mini-batch
  13. maximum number of spikes per neuron per trial across a mini-batch
  14. number of silent neurons
  15. time (s) since training start
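
For convenience, here is a minimal Python sketch of how such a results file can be loaded and inspected (numpy assumed; the file names are placeholders for one concrete run, and the short column names are merely shorthand for the list above):

    import json
    import numpy as np

    # Parameter settings of one run (file names are placeholders).
    with open("SHD_xval_0000.json") as f:
        params = json.load(f)

    # One row per training epoch, space-separated columns as listed above.
    results = np.loadtxt("SHD_xval_0000_results.txt")

    columns = ["epoch", "train_acc", "train_loss", "valid_acc", "valid_loss",
               "hid_spikes_mean", "hid_spikes_std", "hid_spikes_min",
               "hid_spikes_max", "neuron_spikes_mean", "neuron_spikes_std",
               "neuron_spikes_min", "neuron_spikes_max", "n_silent", "time_s"]

    # Validation accuracy of the best epoch
    best_epoch = int(np.argmax(results[:, columns.index("valid_acc")]))
    print("best validation accuracy:", results[best_epoch, columns.index("valid_acc")])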

scan_SHD_base_traintest.zip

Results from training the base models on the SHD training set, interleaved with testing on the test set. This uses the 8 parameter combinations from scan_SHD_base_xval.zip given by the four LOSS_TYPE choices crossed with RECURRENT False or True. For each of these 8 cases, the remaining parameters were taken from the scan_SHD_base_xval.zip run with the best mean cross-validation score. Each of the 8 runs was repeated 8 times with different random seeds.

The files included are:

SHD_tt_xxxx.json

Parameter settings as above.

SHD_tt_xxxx_results.txt

Results file with columns as above, except that columns 4 and 5 now contain test accuracy and test loss, respectively.

SHD_tt_xxxx_best.txt

The best result across epochs (same data format as SHD_tt_xxxx_results.txt)

SHD_tt_xxxx_w_input_hidden_best.npy

The weight matrix of input-to-hidden connections at the epoch where the best training accuracy was achieved (early stopping based on training accuracy). The weights are arranged in "pre-major" order, i.e. the first n_hidden entries are the weights from input neuron 0 to all hidden neurons, followed by the n_hidden weights from input neuron 1, and so on. All weight matrices are stored in this way.
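
A minimal sketch of loading a weight matrix in this layout (the file name is a placeholder; SHD inputs have 700 channels, and reading the hidden layer size from the JSON via a NUM_HIDDEN key is an assumption based on the parameter names used in the scans below):

    import json
    import numpy as np

    with open("SHD_tt_0000.json") as f:
        params = json.load(f)

    n_input = 700                    # number of SHD input channels
    n_hidden = params["NUM_HIDDEN"]  # assumed parameter name for the hidden layer size

    w = np.load("SHD_tt_0000_w_input_hidden_best.npy")

    # "Pre-major" order: the weights of input neuron i occupy the contiguous
    # slice w[i*n_hidden:(i+1)*n_hidden], so reshaping with the presynaptic
    # index first gives w2[i, j] = weight from input neuron i to hidden neuron j.
    w2 = w.reshape(n_input, n_hidden)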

SHD_tt_xxxx_w_hidden_output_best.npy

The weight matrix of hidden to output connections at the best epoch.

If the network is recurrent, there is also

SHD_tt_xxxx_w_hidden0_hidden0_best.npy

The recurrent weight matrix from the hidden layer to itself.

scan_SHD_final_xval.zip

Results from the ablation experiments on the full SHD models. Leave-one-speaker-out cross-validation runs were performed to determine the best regularisation strength LBD_UPPER for each of the following 512 parameter combinations:

  1. DT_MS: 1, 2, 5, 10, 20
  2. NUM_HIDDEN: 64, 128, 256, 512, 1024 (for DT_MS = 1 or 2); 256, 1024 (for the other DT_MS values)
  3. N_INPUT_DELAY: 0, 10
  4. AUGMENTATION: None; blend: [0.5, 0.5]; random_shift: 40.0; blend & shift combined
  5. HIDDEN_NEURON_TYPE: LIF, hetLIF
  6. TRAIN_TAU: False, True

Five different LBD_UPPER values were tested, with two repeats per combination using different random seeds (5120 runs in total).

The files included are:

SHD_xval_xxxx.json

Parameter settings as above.

SHD_xval_xxxx_results.txt

Results file with columns as described for scan_SHD_base_xval above.

scan_SHD_final_traintest.zip

Results from training on the SHD training set, interleaved with testing on the test set. This was done for 320 different parameter settings, corresponding to DT_MS = 1 and 2 only, choosing for each setting the best LBD_UPPER as determined in scan_SHD_final_xval: the value whose run achieved the lowest validation error, averaged across folds at each fold's epoch of best training error (see the sketch below). For each of the 320 combinations, 8 independent runs with different random seeds were executed (2560 runs in total).
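
To make the selection criterion concrete, here is a sketch of how the best LBD_UPPER could be recomputed from the cross-validation results (the mapping from LBD_UPPER values to result files is hypothetical and must in practice be reconstructed from the JSON parameter files of scan_SHD_final_xval):

    import numpy as np

    def xval_score(fold_result_files):
        # Mean validation error across folds, each fold evaluated at its own
        # epoch of best training accuracy (early stopping on training error).
        errors = []
        for fname in fold_result_files:
            r = np.loadtxt(fname)                  # columns as listed above
            best_epoch = np.argmax(r[:, 1])        # column 2: training accuracy
            errors.append(1.0 - r[best_epoch, 3])  # column 4: validation accuracy
        return float(np.mean(errors))

    # Hypothetical bookkeeping: LBD_UPPER candidates -> their folds' result files
    candidates = {
        0.1: ["SHD_xval_0000_results.txt", "SHD_xval_0001_results.txt"],
        0.5: ["SHD_xval_0002_results.txt", "SHD_xval_0003_results.txt"],
    }
    best_lbd = min(candidates, key=lambda lbd: xval_score(candidates[lbd]))
    print("best LBD_UPPER:", best_lbd)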

For each of the runs, there are 3 files:

SHD_tt_xxxx.json

A JSON file with the used parameter settings.

SHD_tt_xxxx_results.txt

The results file with columns as described before; columns 4 and 5 relate to the accuracy and loss on the test set.

SHD_tt_xxxx_best.txt

The values from the epoch when the test accuracy was best. Same columns as SHD_tt_xxxx_results.txt.

In addition, for the runs that had the best test results (within the 8 repeats), we also include

SHD_tt_0004_w_input_hidden_best.npy

The weights from input to hidden layer.

SHD_tt_0004_w_hidden_output_best.npy

The weights from hidden to output layer.

SHD_tt_0004_w_hidden0_hidden0_best.npy

The recurrent weights from the hidden layer to itself (in this scan all networks are recurrent).

scan_SSC_final.zip

Results from the ablation experiments on SSC. We ran the same parameter combinations as for scan_SHD_final_xval but, as SSC has a dedicated validation set, the runs were performed as training epochs interleaved with evaluation on the validation set (5120 runs). The provided files are:

SSC_xxxx.json

The parameter values used.

SSC_xxxx_results.txt

The results of the training/validation run.

SSC_xxxx_best.txt

The row from SSC_xxxx_results.txt with the best validation error.

We subsequently ran testing on the trained network from the epoch where the validation error was best. From these we have

SSC_xxxx_test.json

The parameter settings of the test run.

SSC_xxxx_test_results.txt

The results of the test run. This has the same columns as the training runs, except that columns 2 and 3 are devoid of meaning (no training occurs during a test run).

For the runs of a given parameter setting that were best across LBD_UPPER and random seed values, we also provide

SSC_xxxx_w_input_hidden_best.npy

The weights from input to hidden layer for the epoch where the validation error was best. These are the connection weights used for the testing run.

SSC_xxxx_w_hidden_output_best.npy

The corresponding weights from hidden to output layer.

SSC_xxxx_w_hidden0_hidden0_best.npy

The corresponding recurrent weights from the hidden layer to itself (in this scan all networks are recurrent).

scan_SSC_final_repeats.zip

In this scan we performed six additional repeat runs for all parameter combinations from scan_SSC_final, using the best-performing LBD_UPPER values (1920 runs). The files provided are exactly as for scan_SSC_final.

Relationship to the publication

Figure 2 of the publication is based on scan_SHD_base_xval and scan_SHD_base_traintest; its panels can be generated with the scripts plot_SHD_base_curves.py and plot_SHD_base_summary.py.

Figure 3 of the publication is based on the data in scan_SHD_final_traintest; its panels can be generated with the script plot_final_ablation.py with the argument "SHD".

Figure 4 of the publication is based on the data in scan_SSC_final and scan_SSC_final_repeats; its panels can be generated with the script plot_final_ablation.py with the argument "SSC".

Funding

Brains on Board: Neuromorphic Control of Flying Robots

Engineering and Physical Sciences Research Council


ActiveAI - active learning and selective attention for robust, transparent and efficient AI

Engineering and Physical Sciences Research Council


Unlocking spiking neural networks for machine learning research

Engineering and Physical Sciences Research Council


Human Brain Project Specific Grant Agreement 3

European Commission
