TimeDelayed#
- class mvpy.estimators.TimeDelayed(t_min: float, t_max: float, fs: int, alphas: ndarray | Tensor = tensor([1]), patterns: bool = False, **kwargs)[source]#
Implements time-delayed ridge regression, for multivariate temporal response function (mTRF) or stimulus reconstruction (SR) models.
Generally, mTRF models are described by:
\[r(t,n) = \sum_{\tau} w(\tau, n) s(t - \tau) + \varepsilon\]
where \(r(t,n)\) is the neural response at timepoint \(t\) in channel \(n\), \(s(t)\) is the stimulus at time \(t\), \(w(\tau, n)\) is the weight at time delay \(\tau\) for channel \(n\), and \(\varepsilon\) is the error.
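To make the forward (mTRF) mapping concrete, here is a minimal illustrative sketch in plain torch with toy data (not this estimator's internal code), accumulating the delayed, weighted stimulus for a single channel:

>>> import torch
>>> s = torch.randn(100)                    # stimulus s(t), 100 timepoints
>>> w = torch.tensor([1., 2., 3., 2., 1.])  # weights w(tau) for delays tau = 0..4
>>> r = torch.zeros_like(s)
>>> for tau in range(len(w)):
...     r[tau:] += w[tau] * s[:len(s) - tau]  # r(t) += w(tau) * s(t - tau)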
SR models are estimated as:
\[s(t) = \sum_{n}\sum_{\tau} r(t + \tau, n) g(\tau, n)\]
where \(s(t)\) is the reconstructed stimulus at time \(t\), \(r(t + \tau, n)\) is the neural response in channel \(n\) lagged by \(\tau\) relative to \(t\), and \(g(\tau, n)\) is the weight at time delay \(\tau\) for channel \(n\).
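Analogously, a minimal sketch of the backward (SR) mapping with toy data (again illustrative only), assuming two response channels and delays 0 through 4:

>>> import torch
>>> r = torch.randn(2, 100)  # neural response r(t, n): 2 channels x 100 timepoints
>>> g = torch.randn(2, 5)    # decoder weights g(tau, n), stored here as [channel, delay]
>>> s_hat = torch.zeros(100)
>>> for tau in range(g.shape[1]):
...     s_hat[:100 - tau] += (g[:, tau:tau + 1] * r[:, tau:]).sum(dim=0)  # sum over channels n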
For more information on mTRF or SR models, see [1].
In both cases, models are estimated by temporally expanding the design and outcome matrices and then solving the regression problem:
\[y = \beta X + \varepsilon\]
Consequently, we solve for the coefficients through:
\[\arg\min_{\beta} \sum_{i} (y_i - \beta^T X_i)^2 + \alpha_\beta \lvert\lvert\beta\rvert\rvert^2\]
where \(\alpha_\beta\) are the penalties evaluated in leave-one-out cross-validation (LOO-CV). Therefore, this class is functionally equivalent to
ReceptiveField, but solves the problem through ridge regression rather than auto- and cross-correlations in the Fourier domain. For more information on this, see ReceptiveField.
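Conceptually, the temporal expansion stacks delayed copies of the predictors into one design matrix, after which an ordinary closed-form ridge solution applies. A rough sketch with toy data (a real implementation also needs to handle the edge samples, which simply wrap around here, as well as multiple channels and the LOO-CV over alphas):

>>> import torch
>>> X = torch.randn(100, 1)   # one predictor over 100 timepoints
>>> y = torch.randn(100, 1)   # one outcome channel
>>> lags = range(-2, 3)       # delays tau = -2..2
>>> Xd = torch.cat([torch.roll(X, tau, dims=0) for tau in lags], dim=1)  # (100, n_lags) lagged design matrix
>>> alpha = 1.0
>>> beta = torch.linalg.solve(Xd.T @ Xd + alpha * torch.eye(Xd.shape[1]), Xd.T @ y)  # ridge estimate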
- Parameters:
- t_min : float
The minimum time delay. Note that positive values indicate X is delayed relative to y. This is unlike MNE’s behaviour.
- t_max : float
The maximum time delay. Note that positive values indicate X is delayed relative to y. This is unlike MNE’s behaviour.
- fs : int
The sampling frequency.
- alphas : np.ndarray | torch.Tensor, default=torch.tensor([1])
The penalties to use for estimation.
- patterns : bool, default=False
Should patterns be estimated?
- kwargs : Any
Additional arguments for the estimator.
- Attributes:
- alphas : np.ndarray | torch.Tensor
The penalties to use for estimation.
- kwargs : Any
Additional arguments.
- patterns : bool
Should patterns be estimated?
- t_min : float
The minimum time delay. Note that positive values indicate X is delayed relative to y. This is unlike MNE’s behaviour.
- t_max : float
The maximum time delay. Note that positive values indicate X is delayed relative to y. This is unlike MNE’s behaviour.
- fs : int
The sampling frequency.
- window : np.ndarray | torch.Tensor
The window to use for estimation.
- estimator : mvpy.estimators.RidgeCV
The estimator to use.
- f_ : int
The number of output features.
- c_ : int
The number of input features.
- w_ : int
The number of time delays.
- intercept_ : np.ndarray | torch.Tensor
The intercepts of the estimator.
- coef_ : np.ndarray | torch.Tensor
The coefficients of the estimator.
- pattern_ : np.ndarray | torch.Tensor
The patterns of the estimator.
- metric_ : mvpy.metrics.r2
The default metric to use.
See also
mvpy.estimators.ReceptiveField : An alternative mTRF/SR estimator that solves through auto- and cross-correlations in the Fourier domain.
Notes
For SR models, it is recommended to also pass patterns=True to estimate not only the coefficients but also the patterns that were actually used for reconstructing stimuli. For more information, see [2].
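For instance, a minimal usage sketch with arbitrary random data (shapes and hyperparameters chosen purely for illustration):

>>> import torch
>>> from mvpy.estimators import TimeDelayed
>>> X = torch.randn(100, 64, 200)  # neural responses: trials x channels x timepoints
>>> y = torch.randn(100, 1, 200)   # stimulus feature to reconstruct
>>> sr = TimeDelayed(-0.1, 0.1, 100, alphas = torch.logspace(-3, 3, 7), patterns = True)
>>> sr = sr.fit(X, y)
>>> patterns = sr.pattern_         # Haufe-transformed patterns, available alongside sr.coef_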
References
[1] Crosse, M.J., Di Liberto, G.M., Bednar, A., & Lalor, E.C. (2016). The multivariate temporal response function (mTRF) toolbox: A MATLAB toolbox for relating neural signals to continuous stimuli. Frontiers in Human Neuroscience, 10, 604. doi:10.3389/fnhum.2016.00604
[2] Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.D., Blankertz, B., & Bießmann, F. (2014). On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 87, 96-110. doi:10.1016/j.neuroimage.2013.10.067
Examples
For mTRF estimation, we can do:
>>> import torch
>>> from mvpy.estimators import TimeDelayed
>>> ß = torch.tensor([1., 2., 3., 2., 1.])
>>> X = torch.normal(0, 1, (100, 1, 50))
>>> y = torch.nn.functional.conv1d(X, ß[None,None,:], padding = 'same')
>>> y = y + torch.normal(0, 1, y.shape)
>>> trf = TimeDelayed(-2, 2, 1, alphas = 1e-5)
>>> trf.fit(X, y).coef_
tensor([[[0.9290, 1.9101, 2.8802, 1.9790, 0.9453]]])
For stimulus reconstruction, we can do:
>>> import torch
>>> from mvpy.estimators import TimeDelayed
>>> ß = torch.tensor([1., 2., 3., 2., 1.])
>>> X = torch.arange(50)[None,None,:] * torch.ones((100, 1, 50))
>>> y = torch.nn.functional.conv1d(X, ß[None,None,:], padding = 'same')
>>> y = y + torch.normal(0, 1, y.shape)
>>> X, y = y, X
>>> sr = TimeDelayed(-2, 2, 1, alphas = 1e-3, patterns = True).fit(X, y)
>>> sr.predict(X).mean(0)[0,:]
tensor([ 1.3591,  1.2549,  1.5662,  2.3544,  3.3440,  4.3683,  5.4097,  6.4418,
         7.4454,  8.4978,  9.5206, 10.5374, 11.5841, 12.6102, 13.6254, 14.6939,
        15.6932, 16.7168, 17.7619, 18.8130, 19.8182, 20.8687, 21.8854, 22.9310,
        23.9270, 24.9808, 26.0085, 27.0347, 28.0728, 29.0828, 30.1400, 31.1452,
        32.1793, 33.2047, 34.2332, 35.2717, 36.2945, 37.3491, 38.3800, 39.3817,
        40.3962, 41.4489, 42.4854, 43.4965, 44.5346, 45.5716, 46.7301, 47.2251,
        48.4449, 48.8793])
- clone() → TimeDelayed[source]#
Clone this class.
- Returns:
- td : TimeDelayed
The cloned object.
- fit(X: ndarray | Tensor, y: ndarray | Tensor)[source]#
Fit the estimator.
- Parameters:
- X : np.ndarray | torch.Tensor
Input data of shape (n_samples, n_features, n_timepoints).
- y : np.ndarray | torch.Tensor
Output data of shape (n_samples, n_channels, n_timepoints).
- Returns:
- td : mvpy.estimators._TimeDelayed_numpy | mvpy.estimators._TimeDelayed_torch
The fitted TimeDelayed estimator.
- predict(X: ndarray | Tensor) → ndarray | Tensor[source]#
Make predictions from model.
- Parameters:
- X : np.ndarray | torch.Tensor
Input data of shape (n_samples, n_features, n_timepoints).
- Returns:
- y_h : np.ndarray | torch.Tensor
Predicted responses of shape (n_samples, n_channels, n_timepoints).
- score(X: ndarray | Tensor, y: ndarray | Tensor, metric: Metric | Tuple[Metric] | None = None) → ndarray | Tensor | Dict[str, ndarray] | Dict[str, Tensor][source]#
Make predictions from \(X\) and score against \(y\).
- Parameters:
- X : np.ndarray | torch.Tensor
Input data of shape (n_samples, n_features, n_timepoints).
- y : np.ndarray | torch.Tensor
Output data of shape (n_samples, n_channels, n_timepoints).
- metric : Optional[Metric | Tuple[Metric]], default=None
Metric or tuple of metrics to compute. If None, defaults to metric_.
- Returns:
- score : np.ndarray | torch.Tensor | Dict[str, np.ndarray] | Dict[str, torch.Tensor]
Scores of shape (n_channels, n_timepoints) or, for multiple metrics, a dictionary of metric names and scores of shape (n_channels, n_timepoints).
Warning
If multiple values are supplied for metric, this function will output a dictionary of {Metric.name: score, ...} rather than a stacked array. This is to provide consistency across cases where metrics may or may not differ in their output shapes.
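For instance, continuing from the mTRF example above (a usage sketch; second_metric is a placeholder for whichever additional Metric from mvpy.metrics you wish to evaluate, since only r2 is referenced in this page):

>>> from mvpy import metrics
>>> trf.score(X, y)                                        # uses metric_ (r2); shape (n_channels, n_timepoints)
>>> trf.score(X, y, metric = metrics.r2)                   # same metric, passed explicitly
>>> trf.score(X, y, metric = (metrics.r2, second_metric))  # returns {Metric.name: score, ...}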