Krishn Bera

Cognitive Science PhD Student, Brown University

Fast and robust Bayesian inference for modular combinations of dynamic learning and decision models


Conference paper


Krishn Bera, Alexander Fengler, Michael J. Frank
47th Annual Meeting of the Cognitive Science Society, CogSci, 2025

Cite

APA
Bera, K., Fengler, A., & Frank, M. J. (2025). Fast and robust Bayesian inference for modular combinations of dynamic learning and decision models. In 47th Annual Meeting of the Cognitive Science Society. CogSci.


Chicago/Turabian
Bera, Krishn, Alexander Fengler, and Michael J. Frank. “Fast and Robust Bayesian Inference for Modular Combinations of Dynamic Learning and Decision Models.” In 47th Annual Meeting of the Cognitive Science Society. CogSci, 2025.


MLA
Bera, Krishn, et al. “Fast and Robust Bayesian Inference for Modular Combinations of Dynamic Learning and Decision Models.” 47th Annual Meeting of the Cognitive Science Society, CogSci, 2025.


BibTeX

@inproceedings{krishn2025a,
  title = {Fast and robust Bayesian inference for modular combinations of dynamic learning and decision models},
  year = {2025},
  publisher = {CogSci},
  author = {Bera, Krishn and Fengler, Alexander and Frank, Michael J.},
  booktitle = {47th Annual Meeting of the Cognitive Science Society}
}

Efficient hierarchical Bayesian inference via differentiable Reinforcement Learning (RL) likelihoods and Likelihood Approximation Networks (LANs).

Abstract

In cognitive neuroscience, there has been growing interest in adopting sequential sampling models (SSMs) as the generative choice function for reinforcement learning (RLSSM) to jointly account for decision dynamics within and across trials. However, such approaches have been limited by computational intractability, owing to the lack of closed-form likelihoods for the decision process or the expensive trial-by-trial evaluation of complex reinforcement learning (RL) processes. We enable hierarchical Bayesian estimation for a broad class of RLSSM models by using Likelihood Approximation Networks (LANs) in conjunction with differentiable RL likelihoods, allowing us to leverage fast gradient-based inference methods such as Hamiltonian Monte Carlo (HMC) and Variational Inference (VI). To showcase the scalability and faster convergence of our approach, we consider the Reinforcement Learning - Working Memory (RLWM) task and model, which involves multiple interacting generative learning processes. We show that our method enables accurate recovery of the posterior parameter distributions in arbitrarily complex RLSSM paradigms, and that, in comparison, fitting the data with an equivalent choice-only model yields a biased estimate of the true generative process. Moreover, leveraging the SSM with efficient inference allows us to uncover a heretofore undescribed cognitive process within the RLWM task, whereby participants proactively adjust their decision threshold as a function of working memory (WM) load.
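To make the core mechanism concrete, below is a minimal, hypothetical JAX sketch, not the authors' implementation: a tiny randomly initialized MLP stands in for a trained LAN, a delta-rule Q-learner supplies trial-wise drift rates, and, because the whole trial loop is written in differentiable operations, jax.grad yields the likelihood gradients that HMC or VI consume. All function and parameter names here (lan_loglik, rlssm_loglik, alpha, scale, a) are assumptions for illustration.

import jax
import jax.numpy as jnp

def lan_loglik(net, v, a, rt, choice):
    # Stand-in for a trained LAN: a tiny MLP mapping SSM parameters
    # (drift v, threshold a) and one observation (rt, choice) to an
    # approximate log-likelihood. A real LAN is trained offline on
    # large banks of SSM simulations.
    x = jnp.array([v, a, rt, choice])
    h = jnp.tanh(net["W1"] @ x + net["b1"])
    return (net["w2"] @ h + net["b2"])[0]

def rlssm_loglik(theta, net, rewards, choices, rts):
    # Differentiable trial loop: a delta-rule Q update sets the
    # trial-wise drift rate, and the LAN surrogate scores each trial.
    alpha, scale, a = theta["alpha"], theta["scale"], theta["a"]

    def trial(q, obs):
        r, c, rt = obs
        i = c.astype(jnp.int32)
        v = scale * (q[1] - q[0])              # drift from Q-value contrast
        ll = lan_loglik(net, v, a, rt, c)
        q = q.at[i].add(alpha * (r - q[i]))    # delta-rule update
        return q, ll

    _, lls = jax.lax.scan(trial, jnp.zeros(2), (rewards, choices, rts))
    return jnp.sum(lls)

# Gradients flow through both the RL updates and the LAN surrogate,
# which is exactly what gradient-based samplers (HMC) and VI require.
key = jax.random.PRNGKey(0)
net = {"W1": 0.1 * jax.random.normal(key, (8, 4)), "b1": jnp.zeros(8),
       "w2": 0.1 * jax.random.normal(key, (1, 8)), "b2": jnp.zeros(1)}
theta = {"alpha": 0.3, "scale": 2.0, "a": 1.5}
rewards = jnp.array([1.0, 0.0, 1.0])
choices = jnp.array([1.0, 0.0, 1.0])
rts = jnp.array([0.61, 0.84, 0.52])
grads = jax.grad(rlssm_loglik)(theta, net, rewards, choices, rts)

In practice, the differentiable log-likelihood would be embedded in a hierarchical model inside a probabilistic programming framework, with group-level priors over the subject-level cognitive parameters.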
