Krishn Bera

Cognitive Science PhD Student, Brown University

Fast and robust Bayesian inference for modular combinations of dynamic learning and decision models


Conference paper


Krishn Bera, Alexander Fengler, Michael J. Frank
6th Multidisciplinary Conference on Reinforcement Learning and Decision Making, RLDM, 2025

Cite

APA
Bera, K., Fengler, A., & Frank, M. J. (2025). Fast and robust Bayesian inference for modular combinations of dynamic learning and decision models. In 6th Multidisciplinary Conference on Reinforcement Learning and Decision Making. RLDM.


Chicago/Turabian
Bera, Krishn, Alexander Fengler, and Michael J. Frank. “Fast and Robust Bayesian Inference for Modular Combinations of Dynamic Learning and Decision Models.” In 6th Multidisciplinary Conference on Reinforcement Learning and Decision Making. RLDM, 2025.


MLA
Bera, Krishn, et al. “Fast and Robust Bayesian Inference for Modular Combinations of Dynamic Learning and Decision Models.” 6th Multidisciplinary Conference on Reinforcement Learning and Decision Making, RLDM, 2025.


BibTeX

@inproceedings{krishn2025a,
  title = {Fast and robust Bayesian inference for modular combinations of dynamic learning and decision models},
  year = {2025},
  publisher = {RLDM},
  author = {Bera, Krishn and Fengler, Alexander and Frank, Michael J.},
  booktitle = {6th Multidisciplinary Conference on Reinforcement Learning and Decision Making}
}

Efficient hierarchical Bayesian inference via differentiable Reinforcement Learning (RL) likelihoods and Likelihood Approximation Networks (LANs).

Abstract

In cognitive neuroscience, there has been growing interest in adopting sequential sampling models (SSMs) as the generative choice function for reinforcement learning (RLSSM), opening up new avenues for exploring generative processes that can jointly account for dynamics within and across trials. To date, such approaches have been limited by computational tractability, for example due to the lack of closed-form likelihoods for the decision process and the expensive trial-by-trial evaluation of complex reinforcement learning (RL) processes.

To enable hierarchical Bayesian estimation for a broad class of RLSSM models, we use Likelihood Approximation Networks (LANs) in conjunction with differentiable RL likelihoods, leveraging fast gradient-based inference methods, including NUTS MCMC and Variational Inference (VI), to approximate the posterior over model parameters. The LAN approach trains neural networks that serve as surrogate likelihoods for arbitrary decision processes, allowing fast likelihood evaluations with only a one-off cost for model simulations that is amortized across future inference. Differentiable RL likelihoods improve scalability and enable faster convergence with gradient-based optimizers or MCMC samplers for complex RL processes.
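To make this concrete, below is a minimal sketch (not the authors' implementation) of how a differentiable RL likelihood can feed a LAN-style surrogate in JAX. The delta-rule update, the Q-value-to-drift mapping, and the stand-in network are all illustrative assumptions; the point is only that gradients flow end to end through the trial loop and the surrogate, which is what NUTS or VI requires.

```python
import jax
import jax.numpy as jnp

def rl_trial_loop(alpha, rewards, choices, n_actions=2):
    """Delta-rule Q-learning over trials via lax.scan, so the whole
    loop stays differentiable with respect to the learning rate alpha."""
    def step(q, inputs):
        choice, reward = inputs
        pe = reward - q[choice]               # reward prediction error
        q_new = q.at[choice].add(alpha * pe)  # update the chosen action
        return q_new, q                       # emit pre-update Q-values
    q0 = jnp.zeros(n_actions)
    _, q_history = jax.lax.scan(step, q0, (choices, rewards))
    return q_history                          # shape: (n_trials, n_actions)

def dummy_lan(x):
    """Stand-in for a trained LAN: a fixed random MLP mapping per-trial
    (SSM params, rt, choice) inputs to a log-likelihood value."""
    k1, k2 = jax.random.split(jax.random.PRNGKey(0))
    w1 = 0.1 * jax.random.normal(k1, (x.shape[-1], 16))
    w2 = 0.1 * jax.random.normal(k2, (16, 1))
    return (jnp.tanh(x @ w1) @ w2).squeeze(-1)

def rlssm_loglik(params, lan_apply, rewards, choices, rts):
    """Chain the RL loop into the surrogate SSM likelihood: Q-value
    differences become trial-wise drift rates (an assumption)."""
    q = rl_trial_loop(params["alpha"], rewards, choices)
    v = params["scale"] * (q[:, 1] - q[:, 0])
    net_in = jnp.column_stack([
        v,
        jnp.full_like(v, params["a"]),  # boundary separation
        jnp.full_like(v, params["t"]),  # non-decision time
        rts,
        choices.astype(jnp.float32),
    ])
    return lan_apply(net_in).sum()      # summed trial log-likelihoods

# Gradients with respect to all RL and SSM parameters in one call.
choices = jnp.array([0, 1, 1, 0])
rewards = jnp.array([1.0, 0.0, 1.0, 1.0])
rts = jnp.array([0.6, 0.8, 0.7, 0.5])
params = {"alpha": 0.3, "scale": 1.0, "a": 1.5, "t": 0.3}
grads = jax.grad(lambda p: -rlssm_loglik(p, dummy_lan, rewards, choices, rts))(params)
```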

To showcase this approach, we consider the RLWM task and model, in which multiple generative learning processes interact. We use differentiable likelihoods for the RLWM model in combination with LANs, which can serve as surrogates for any arbitrary SSM. We show that this approach can be combined with hierarchical variational inference to accurately recover posterior parameter distributions in arbitrarily complex RLSSM paradigms and, moreover, that traditional RL models with only softmax choice processes can, in comparison, be strongly biased estimators of the true generative process.
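As a hypothetical illustration of the hierarchical variational inference step (not the paper's specification), the NumPyro sketch below places a group-level prior over subject learning rates and adds the surrogate log-likelihood from the sketch above to the joint density via numpyro.factor. The priors, the fixed SSM parameters, and the data layout are assumptions made for brevity.

```python
import jax
import numpyro
import numpyro.distributions as dist
from numpyro.infer import SVI, Trace_ELBO
from numpyro.infer.autoguide import AutoNormal

def hierarchical_model(data):
    # Group-level prior over learning rates on the logit scale,
    # so subject-level alphas stay in (0, 1).
    mu = numpyro.sample("mu", dist.Normal(0.0, 1.0))
    sigma = numpyro.sample("sigma", dist.HalfNormal(1.0))
    with numpyro.plate("subjects", len(data)):
        z = numpyro.sample("z", dist.Normal(mu, sigma))
    for s, (choices, rewards, rts) in enumerate(data):
        params = {"alpha": jax.nn.sigmoid(z[s]),
                  "scale": 1.0, "a": 1.5, "t": 0.3}  # SSM params fixed for brevity
        ll = rlssm_loglik(params, dummy_lan, rewards, choices, rts)
        numpyro.factor(f"loglik_{s}", ll)  # surrogate log-likelihood term

guide = AutoNormal(hierarchical_model)  # mean-field variational family
svi = SVI(hierarchical_model, guide, numpyro.optim.Adam(1e-2), Trace_ELBO())
# svi.run(jax.random.PRNGKey(0), 2000, data)  # data: list of (choices, rewards, rts)
```

The same model composes directly with NUTS in place of SVI, since the surrogate likelihood is differentiable throughout.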
