Uncertainty in population receptive field estimates revealed by variational qPRF

Sebastian Waz, Yalin Wang, Zhong-Lin Lu


Abstract

Background: Population receptive field (pRF) modeling is a cornerstone of retinotopic mapping in visual neuroscience, enabling precise mapping of visual stimulus processing in the human brain. However, pRF estimates are influenced by multiple sources of variability, including scanner properties, neurovascular coupling, physiological noise, and task-related factors. Traditionally, these estimates are treated as definitive because quantifying variance has been computationally infeasible.

New method: We introduce qPRF-v, an extension of the qPRF software, which computes pRF point estimates over 1,000 times faster than existing packages while preserving goodness of fit (Waz et al., 2025a).

Results: Leveraging variational inference, qPRF-v efficiently approximates the posterior variance of pRF model parameters, revealing substantially greater uncertainty in parameters governing neural dynamics (e.g., the compressive exponent) than in those defining receptive field centers.

Comparison with existing methods: To evaluate the impact of variance quantification, we compared population-level retinotopic maps generated via simple averaging (the prior standard) and R²-weighted averaging with maps generated via inverse-variance weighted averaging. Inverse-variance weighting significantly enhanced map quality, yielding naturally denoised representations with improved eccentricity detail.
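The inverse-variance weighting described above can be sketched as follows. This is a minimal, generic illustration of the statistical technique for a single pRF parameter pooled across subjects; the function name and data are hypothetical and do not reflect the qPRF-v API or the paper's actual pipeline.

```python
import numpy as np

def inverse_variance_weighted_average(estimates, variances):
    """Pool per-subject parameter estimates into one population-level
    estimate, weighting each subject inversely by its posterior variance.

    estimates : array of per-subject point estimates (e.g., eccentricity)
    variances : array of matching posterior variances (all > 0)
    Returns (pooled_mean, pooled_variance).
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances                      # precision weights
    pooled_mean = np.sum(weights * estimates) / np.sum(weights)
    pooled_variance = 1.0 / np.sum(weights)        # variance of the pooled mean
    return pooled_mean, pooled_variance

# Illustrative values: a high-variance subject contributes little.
mean, var = inverse_variance_weighted_average([2.0, 2.4, 3.6],
                                              [0.1, 0.1, 0.8])
```

Unlike simple averaging, the noisy third estimate here barely moves the pooled mean, which is the denoising effect the abstract describes.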

Conclusions: By directly addressing variability, qPRF-v not only increases the accuracy of retinotopic mapping but also enables more robust, individualized analyses of visual processing. This advancement has profound implications for studying visual function in health and disease, paving the way for personalized neuroscience and improved brain atlases for research and clinical applications.

Keywords: Population receptive field; Retinotopic map; Retinotopy; Variational inference; qPRF.
