HGF Toolbox v3.0

Version 3.0 of the HGF Toolbox has been released.

The HGF Toolbox implements many variants of the hierarchical Gaussian filter (HGF). The HGF is a generic hierarchical Bayesian model for inference about a changing environment based on sequential input, which makes it a general model of learning in discrete time. The HGF was introduced in

Mathys C, Daunizeau J, Friston KJ, Stephan KE (2011). A Bayesian foundation for individual learning under uncertainty. Front. Hum. Neurosci. 5:39. doi:10.3389/fnhum.2011.00039
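To give a flavor of what such sequential inference looks like, here is a minimal one-level Gaussian (Kalman-style) filter in Python. This is not the HGF's actual update scheme (the HGF has multiple coupled levels and learns the environment's volatility), only an illustration of the shared core idea: each new input updates a Gaussian belief via an uncertainty-weighted prediction error. All function and parameter names here are hypothetical.

```python
def filter_random_walk(inputs, mu0=0.0, sigma2_0=1.0, process_var=0.1, obs_var=1.0):
    """Sequentially update a Gaussian belief about a drifting hidden quantity."""
    mu, sigma2 = mu0, sigma2_0
    trajectory = []
    for u in inputs:
        # Prediction step: the hidden state drifts, so uncertainty grows.
        sigma2_pred = sigma2 + process_var
        # Update step: the belief moves toward the input by an
        # uncertainty-weighted fraction of the prediction error.
        gain = sigma2_pred / (sigma2_pred + obs_var)
        mu = mu + gain * (u - mu)
        sigma2 = (1.0 - gain) * sigma2_pred
        trajectory.append((mu, sigma2))
    return trajectory
```

In the full HGF, the analogue of `gain` is itself inferred online from higher levels of the hierarchy rather than fixed by constant noise parameters.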

In addition to learning models based on the HGF, the toolbox contains implementations of many other learning and response models, such as Rescorla-Wagner learning and softmax decision rules.
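For orientation, here is a short Python sketch of these two standard components: a Rescorla-Wagner delta-rule learner and a softmax response model. This is not the toolbox's MATLAB code; the function names and defaults are illustrative only.

```python
import math

def rescorla_wagner(outcomes, alpha=0.1, v0=0.5):
    # Delta rule: the value estimate moves toward each observed outcome
    # by a fixed fraction alpha (the learning rate).
    v = v0
    values = []
    for o in outcomes:
        v = v + alpha * (o - v)
        values.append(v)
    return values

def softmax(values, beta=1.0):
    # Map action values to choice probabilities;
    # beta is the inverse temperature (higher = more deterministic).
    exps = [math.exp(beta * v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]
```

In the HGF framework, a perceptual model (e.g., the delta-rule learner above) is paired with a response model (e.g., softmax) to map inferred beliefs onto observed behavior.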

After downloading, unzip and read the introduction in the README file.

The most visible change in this release is that all function names have been prefixed with “tapas_”. While this makes usage somewhat more cumbersome, bringing the toolbox into the TAPAS namespace will have benefits in the long run, such as seamless access to advanced optimization algorithms from other parts of TAPAS.

The second main change is improved error handling: the tapas_fitModel() function now gives much more informative messages when something goes wrong.

Apart from that, many more models have been added, including:

- An HGF variant for inference in multi-armed bandit situations such as in Daw et al. (2006), Nature, 441(7095). This is implemented in tapas_hgf_ar1_mab.
- All the models from Vossel et al. (2013), Cerebral Cortex, doi:10.1093/cercor/bhs418.
- Sutton’s (1992) K1 model, which features gain adaptation (i.e., a variable learning rate).
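The gain-adaptation idea in the last item can be illustrated with a simplified scalar delta rule whose learning rate increases when successive prediction errors agree in sign and decreases when they alternate. The Python sketch below is only in the spirit of K1, not Sutton's exact update, and all names and parameter values are hypothetical.

```python
import math

def adaptive_delta_rule(outcomes, meta_rate=0.05, v0=0.5, beta0=-2.0):
    """Delta rule whose learning rate adapts to correlated prediction errors.

    A simplified scalar sketch of gain adaptation; not Sutton's exact K1 update.
    """
    v, beta, h = v0, beta0, 0.0
    trace = []
    for o in outcomes:
        delta = o - v
        # Raise the log learning rate when the current error agrees with the
        # previous one, lower it when they alternate; clamp so alpha <= 1.
        beta = min(beta + meta_rate * delta * h, 0.0)
        alpha = math.exp(beta)   # log parameterisation keeps alpha positive
        v += alpha * delta
        h = delta                # remember the last prediction error
        trace.append((v, alpha))
    return trace
```

When the environment keeps surprising the learner in the same direction, the gain rises and tracking speeds up; in a stable environment the gain settles, which is the behavior a variable learning rate is meant to provide.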


Release Notes
- Improved error handling in tapas_fitModel()
- Prefixed all function names with “tapas_”
- Added rs_precision
- Added rs_belief
- Added rs_surprise
- Added sutton_k1
- Added hgf_ar1_mab
- Added softmax for continuous responses
- Improved checking of trajectory validity in HGF models
- Fixed input handling in softmax_binary