ReceptiveField#
- class mvpy.estimators.ReceptiveField(t_min: float, t_max: float, fs: int, alpha: int | float | ndarray | Tensor | List = 1.0, reg_type: str | List = 'ridge', reg_cv: Any = 5, patterns: bool = False, fit_intercept: bool = True, edge_correction: bool = True)[source]#
Implements receptive field estimation (for multivariate temporal response functions or stimulus reconstruction).
Generally, mTRF models are described by:
\[r(t,n) = \sum_{\tau} w(\tau, n) s(t - \tau) + \varepsilon\]
where \(r(t,n)\) is the predicted neural response at timepoint \(t\) for channel \(n\), \(s(t)\) is the stimulus at time \(t\), \(w(\tau, n)\) is the weight at time delay \(\tau\) for channel \(n\), and \(\varepsilon\) is the error.
SR models are estimated as:
\[s(t) = \sum_{n}\sum_{\tau} r(t + \tau, n) g(\tau, n)\]
where \(s(t)\) is the reconstructed stimulus at time \(t\), \(r(t + \tau, n)\) is the neural response at time \(t\) lagged by \(\tau\) for channel \(n\), and \(g(\tau, n)\) is the weight at time delay \(\tau\) for channel \(n\).
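As a quick illustration, the forward (mTRF) model above is simply a convolution of the stimulus with the weights. A minimal numpy sketch for a single channel (variable names are ours, not part of the mvpy API):

```python
import numpy as np

# Forward mTRF model for one channel: r(t) = sum_tau w(tau) * s(t - tau).
rng = np.random.default_rng(0)
s = rng.standard_normal(200)        # stimulus s(t)
w = np.array([1., 2., 3., 2., 1.])  # weights w(tau) for tau = 0..4

# explicit sum over lags
r = np.zeros_like(s)
for tau, w_tau in enumerate(w):
    r[tau:] += w_tau * s[:s.size - tau]

# identical to a convolution truncated to len(s)
r_conv = np.convolve(s, w)[:s.size]
```

Here \(\varepsilon = 0\); adding noise to `r` recovers the full model in the equation above.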
For more information on mTRF or SR models, see [1].
Consequently, this class fundamentally solves the same problem as TimeDelayed. However, unlike TimeDelayed, this approach avoids creating and solving the full time-delayed design and outcome matrices. Instead, it exploits the fact that we are fundamentally interested in (de-)convolution, which can be solved efficiently by estimating auto- and cross-correlations in the Fourier domain. For more information on this approach, see [2] [3] [4].
Solving this in the Fourier domain can be extremely beneficial when the number of predictors n_features is small, but scales poorly for higher numbers of n_features unless edge_correction is explicitly disabled. Generally, we recommend testing both ReceptiveField and TimeDelayed on a realistic subset of the data before deciding between the two approaches.
Like TimeDelayed, this class will automatically perform inner cross-validation if multiple values of alpha are supplied.
- Parameters:
- t_min : float
Minimum time point to fit (unlike TimeDelayed, this is relative to y).
- t_max : float
Maximum time point to fit (unlike TimeDelayed, this is relative to y). Must be greater than t_min.
- fs : int
Sampling frequency.
- alpha : int | float | np.ndarray | torch.Tensor | List, default=1.0
Alpha penalties as float or of shape (n_penalties,). If not float, cross-validation will be employed (see reg_cv).
- reg_type : {'ridge', 'laplacian', List}, default='ridge'
Type of regularisation to employ (either 'ridge' or 'laplacian', or a tuple describing (time, features)).
- reg_cv : {int, 'LOO', mvpy.crossvalidation.KFold}, default=5
If alpha is a list or array, which cross-validation scheme should we use? Integers are interpreted as n_splits for KFold. String input 'LOO' will use RidgeCV to solve LOO-CV over alphas, but is available only for reg_type 'ridge'. Alternatively, a cross-validator that exposes a split() method can be supplied.
- patterns : bool, default=False
Should we estimate the patterns from coefficients and data (useful only for stimulus reconstruction, not mTRF)?
- fit_intercept : bool, default=True
Should we fit an intercept for this model?
- edge_correction : bool, default=True
Should we apply edge corrections to auto-correlations?
- Attributes:
- t_min : float
Minimum time point to fit (unlike TimeDelayed, this is relative to y).
- t_max : float
Maximum time point to fit (unlike TimeDelayed, this is relative to y). Must be greater than t_min.
- fs : int
Sampling frequency.
- alpha : int | float | np.ndarray | torch.Tensor | List, default=1.0
Alpha penalties as float or of shape (n_penalties,). If not float, cross-validation will be employed (see reg_cv).
- reg_type : {'ridge', 'laplacian', List}, default='ridge'
Type of regularisation to employ (either 'ridge' or 'laplacian', or a tuple describing (time, features)).
- reg_cv : {int, 'LOO', mvpy.crossvalidation.KFold}, default=5
If alpha is a list or array, which cross-validation scheme should we use? Integers are interpreted as n_splits for KFold. String input 'LOO' will use RidgeCV to solve LOO-CV over alphas, but is available only for reg_type 'ridge'. Alternatively, a cross-validator that exposes a split() method can be supplied.
- patterns : bool, default=False
Should we estimate the patterns from coefficients and data (useful only for stimulus reconstruction, not mTRF)?
- fit_intercept : bool, default=True
Should we fit an intercept for this model?
- edge_correction : bool, default=True
Should we apply edge corrections to auto-correlations?
- s_min : int
t_min converted to samples.
- s_max : int
t_max converted to samples.
- window : np.ndarray | torch.Tensor
The TRF window ranging from s_min to s_max, of shape (n_trf,).
- n_features_ : int
Number of features in \(X\).
- n_channels_ : int
Number of channels in \(y\).
- n_trf_ : int
Number of timepoints in the estimated response functions.
- cov_ : np.ndarray | torch.Tensor
Covariance from auto-correlations of shape (n_samples, n_features * n_trf, n_features * n_trf).
- coef_ : np.ndarray | torch.Tensor
Estimated coefficients of shape (n_channels, n_features, n_trf).
- pattern_ : np.ndarray | torch.Tensor
If computed, estimated patterns of shape (n_channels, n_features, n_trf).
- intercept_ : float | np.ndarray | torch.Tensor
Estimated intercepts of shape (n_channels,) or float.
- metric_ : mvpy.metrics.r2
The default metric to use.
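Under these attribute semantics, s_min, s_max and the window follow from t_min, t_max and fs by simple arithmetic. A sketch (assuming rounding to the nearest sample and a window inclusive of s_max, which are our guesses about the implementation, not documented behaviour):

```python
import numpy as np

# Converting t_min/t_max (in seconds) to s_min/s_max (in samples) and
# building the lag window; inclusivity of s_max is an assumption here.
t_min, t_max, fs = -0.1, 0.4, 100
s_min = int(np.round(t_min * fs))     # samples, may be negative
s_max = int(np.round(t_max * fs))
window = np.arange(s_min, s_max + 1)  # lags in samples
n_trf = window.size
```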
See also
mvpy.estimators.TimeDelayed
An alternative mTRF/SR estimator that solves the time-expanded design matrix.
mvpy.crossvalidation.KFold, mvpy.crossvalidation.RepeatedKFold, mvpy.crossvalidation.StratifiedKFold, mvpy.crossvalidation.RepeatedStratifiedKFold
Cross-validation classes for automatically testing multiple values of alpha.
Notes
For SR models, it is recommended to also set patterns to True, to estimate not only the coefficients but also the patterns that were actually used for reconstructing stimuli. For more information, see [5].
Warning
Unlike TimeDelayed, this class expects t_min and t_max to be causal in \(y\). Consequently, positive values mean that \(X(t)\) exerts influence over \(y(t + \tau)\). This is in line with MNE's behaviour.
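The sign convention can be checked with a plain cross-correlation, independent of mvpy: if \(X\) leads \(y\) by two samples, the peak should sit at a positive lag.

```python
import numpy as np

# X(t) influences y(t + 2), so under the causal convention the
# cross-correlation between X and y should peak at lag +2.
rng = np.random.default_rng(0)
x = rng.standard_normal(500)
y = np.roll(x, 2)  # y(t) = x(t - 2): x leads y by two samples

lags = np.arange(-5, 6)
xcorr = np.array([np.dot(np.roll(x, lag), y) for lag in lags])
peak_lag = lags[np.argmax(xcorr)]
```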
[1]Crosse, M.J., Di Liberto, G.M., Bednar, A., & Lalor, E.C. (2016). The multivariate temporal response function (mTRF) toolbox: A MATLAB toolbox for relating neural signals to continuous stimuli. Frontiers in Human Neuroscience, 10, 604. 10.3389/fnhum.2016.00604
[2]Willmore, B., & Smyth, D. (2009). Methods for first-order kernel estimation: Simple-cell receptive fields from responses to natural scenes. Network: Computation in Neural Systems, 14, 553-577. 10.1088/0954-898X_14_3_309
[3]Theunissen, F.E., David, S.V., Singh, N.C., Hsu, A., Vinje, W.E., & Gallant, J.L. (2001). Estimating spatio-temporal receptive fields of auditory and visual neurons from their responses to natural stimuli. Network: Computation in Neural Systems, 12, 289-316. 10.1080/net.12.3.289.316
[5]Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.D., Blankertz, B., & Bießmann, F. (2014). On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 87, 96-110. 10.1016/j.neuroimage.2013.10.067
Examples
For mTRF estimation, we can do:
>>> import torch
>>> from mvpy.estimators import ReceptiveField
>>> ß = torch.tensor([1., 2., 3., 2., 1.])
>>> X = torch.normal(0, 1, (100, 1, 50))
>>> y = torch.nn.functional.conv1d(X, ß[None,None,:], padding = 'same')
>>> y = y + torch.normal(0, 1, y.shape)
>>> trf = ReceptiveField(-2, 2, 1, alpha = 1e-5)
>>> trf.fit(X, y).coef_
tensor([[[0.9912, 2.0055, 2.9974, 1.9930, 0.9842]]])
For stimulus reconstruction, we can do:
>>> import torch
>>> from mvpy.estimators import ReceptiveField
>>> ß = torch.tensor([1., 2., 3., 2., 1.])
>>> X = torch.arange(50)[None,None,:] * torch.ones((100, 1, 50))
>>> y = torch.nn.functional.conv1d(X, ß[None,None,:], padding = 'same')
>>> y = y + torch.normal(0, 1, y.shape)
>>> X, y = y, X
>>> sr = ReceptiveField(-2, 2, 1, alpha = 1e-3, patterns = True).fit(X, y)
>>> sr.predict(X).mean(0)[0,:]
tensor([ 0.2148,  0.7017,  1.4021,  2.3925,  3.5046,  4.4022,  5.4741,  6.4759,
         7.5530,  8.4915,  9.6014, 10.5186, 11.5872, 12.6197, 13.5862, 14.6769,
        15.6523, 16.6765, 17.6622, 18.7172, 19.7117, 20.7994, 21.7023, 22.7885,
        23.8434, 24.7849, 25.8697, 26.8705, 27.8523, 28.9028, 29.9428, 30.9342,
        31.9401, 32.9729, 33.9704, 34.9847, 36.0325, 37.0251, 38.0297, 39.0678,
        40.0847, 41.0827, 42.1410, 43.0924, 44.2115, 45.1548, 41.9511, 45.9482,
        32.2861, 76.4690])
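Beyond the doctests above, the Fourier-domain estimation described in the introduction can be sketched in plain numpy for a single feature and channel (circular boundary, no edge correction; this illustrates the idea, not mvpy's actual implementation):

```python
import numpy as np

# Ridge TRF from auto-/cross-correlations computed via the FFT
# (one feature, one channel, circular boundary; illustrative only).
rng = np.random.default_rng(0)
n, n_trf, alpha = 512, 5, 1.0
w_true = np.array([1., 2., 3., 2., 1.])
s = rng.standard_normal(n)                # stimulus
r = np.convolve(s, w_true)[:n]            # response
r += 0.1 * rng.standard_normal(n)         # plus noise

# auto- and cross-correlations via the Fourier domain
S, R = np.fft.rfft(s, n), np.fft.rfft(r, n)
acorr = np.fft.irfft(S * np.conj(S), n)   # stimulus autocorrelation
xcorr = np.fft.irfft(R * np.conj(S), n)   # stimulus-response correlation

# Toeplitz normal equations: (C + alpha * I) w = x
C = acorr[np.abs(np.subtract.outer(np.arange(n_trf), np.arange(n_trf)))]
w_hat = np.linalg.solve(C + alpha * np.eye(n_trf), xcorr[:n_trf])
```

The Toeplitz matrix C plays the role of the time-expanded X.T @ X that TimeDelayed would build explicitly, which is where the efficiency gain comes from.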
- clone() ReceptiveField[source]#
Clone this class.
- Returns:
- rfReceptiveField
The cloned object.
- fit(X: ndarray | Tensor, y: ndarray | Tensor) ReceptiveField[source]#
Fit the estimator, optionally with cross-validation over penalties.
- Parameters:
- X : np.ndarray | torch.Tensor
Input data of shape (n_samples, n_features, n_timepoints).
- y : np.ndarray | torch.Tensor
Output data of shape (n_samples, n_channels, n_timepoints).
- Returns:
- rf : mvpy.estimators._ReceptiveField_numpy | mvpy.estimators._ReceptiveField_torch
The fitted ReceptiveField estimator.
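The inner cross-validation over penalties that fit performs when multiple alphas are supplied can be pictured with a generic K-fold ridge sweep (plain numpy sketch; mvpy's actual scheme is configured via reg_cv):

```python
import numpy as np

# Generic K-fold selection of a ridge penalty: fit on each training
# split, score on the held-out split, keep the best-scoring alpha.
rng = np.random.default_rng(0)
n, p = 120, 4
w = np.array([1., -2., 1.5, 0.5])   # ground-truth weights for the demo
X = rng.standard_normal((n, p))
y = X @ w + rng.standard_normal(n)

alphas = np.logspace(-3, 3, 7)
folds = np.array_split(rng.permutation(n), 5)

def ridge(X_tr, y_tr, a):
    # closed-form ridge solution: (X'X + a*I)^-1 X'y
    return np.linalg.solve(X_tr.T @ X_tr + a * np.eye(p), X_tr.T @ y_tr)

mse = np.zeros(alphas.size)
for i, a in enumerate(alphas):
    for test in folds:
        train = np.setdiff1d(np.arange(n), test)
        w_hat = ridge(X[train], y[train], a)
        mse[i] += np.mean((y[test] - X[test] @ w_hat) ** 2) / len(folds)

best_alpha = alphas[np.argmin(mse)]
```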
- predict(X: ndarray | Tensor) ndarray | Tensor[source]#
Make predictions from model.
- Parameters:
- X : np.ndarray | torch.Tensor
Input data of shape (n_samples, n_features, n_timepoints).
- Returns:
- y_h : np.ndarray | torch.Tensor
Predicted responses of shape (n_samples, n_channels, n_timepoints).
- score(X: ndarray | Tensor, y: ndarray | Tensor, metric: Metric | Tuple[Metric] | None = None) ndarray | Tensor | Dict[str, ndarray] | Dict[str, Tensor][source]#
Make predictions from \(X\) and score against \(y\).
- Parameters:
- X : np.ndarray | torch.Tensor
Input data of shape (n_samples, n_features, n_timepoints).
- y : np.ndarray | torch.Tensor
Output data of shape (n_samples, n_channels, n_timepoints).
- metric : Optional[Metric | Tuple[Metric]], default=None
Metric or tuple of metrics to compute. If None, defaults to metric_.
- Returns:
- score : np.ndarray | torch.Tensor | Dict[str, np.ndarray] | Dict[str, torch.Tensor]
Scores of shape (n_channels, n_timepoints) or, for multiple metrics, a dictionary of metric names and scores of shape (n_channels, n_timepoints).
Warning
If multiple values are supplied for metric, this function will output a dictionary of {Metric.name: score, ...} rather than a stacked array. This is to provide consistency across cases where metrics may or may not differ in their output shapes.
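To make the (n_channels, n_timepoints) score shape concrete, here is a generic R-squared computed per channel and timepoint across trials (plain numpy sketch, not mvpy.metrics.r2 itself):

```python
import numpy as np

# R^2 per channel and timepoint, aggregated over the sample dimension,
# matching the (n_channels, n_timepoints) score shape described above.
rng = np.random.default_rng(0)
y = rng.standard_normal((20, 3, 50))             # (n_samples, n_channels, n_timepoints)
y_hat = y + 0.3 * rng.standard_normal(y.shape)   # imperfect predictions

ss_res = ((y - y_hat) ** 2).sum(axis=0)
ss_tot = ((y - y.mean(axis=0, keepdims=True)) ** 2).sum(axis=0)
r2 = 1.0 - ss_res / ss_tot                       # shape (n_channels, n_timepoints)
```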