RidgeDecoder#

class mvpy.estimators.RidgeDecoder(alphas: Tensor | ndarray | float | int = 1, **kwargs)[source]#

Implements a linear ridge decoder.

This decoder maps from neural data \(X\) to features \(y\) through spatial filters \(\beta\):

\[y = \beta X + \varepsilon\]

Consequently, we estimate the spatial filters by solving:

\[\arg\min_{\beta} \sum_{i} (y_i - \beta^T X_i)^2 + \alpha_\beta \lVert\beta\rVert^2\]

where \(\alpha_\beta\) are the candidate penalties evaluated by leave-one-out cross-validation (LOO-CV).
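
For a single fixed penalty, this objective has the familiar closed-form solution. As an illustration only (not mvpy's implementation, which additionally selects the penalty via LOO-CV and can normalise the data and fit an intercept), a minimal torch sketch:

>>> import torch
>>> X = torch.normal(0, 1, (100, 60))                        # neural data (n_trials, n_channels)
>>> y = torch.normal(0, 1, (100, 5))                         # features (n_trials, n_features)
>>> alpha = 1.0                                              # one fixed penalty
>>> beta = torch.linalg.solve(X.T @ X + alpha * torch.eye(60), X.T @ y)
>>> beta.shape                                               # spatial filters (n_channels, n_features)
torch.Size([60, 5])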

In addition to what RidgeCV provides, this class also computes the patterns underlying the decoder, following [1].
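
Concretely, [1] converts the decoder weights (filters) into activation patterns via \(A = \Sigma_X W \Sigma_{\hat{y}}^{-1}\), where \(\Sigma_X\) is the covariance of the neural data and \(\Sigma_{\hat{y}}\) the covariance of the decoded features. Continuing the sketch above (an illustration only; RidgeDecoder's internal computation may differ in details such as centering, normalisation, and intercept handling):

>>> Xc = X - X.mean(0, keepdim=True)                         # center the neural data
>>> y_hat = Xc @ beta                                        # decoded features (n_trials, n_features)
>>> cov_X = (Xc.T @ Xc) / (len(Xc) - 1)                      # channel covariance
>>> cov_y = (y_hat.T @ y_hat) / (len(y_hat) - 1)             # covariance of decoded features
>>> pattern = cov_X @ beta @ torch.linalg.pinv(cov_y)        # activation patterns
>>> pattern.shape                                            # (n_channels, n_features)
torch.Size([60, 5])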

Parameters:
alphas : np.ndarray | torch.Tensor | float | int, default=1

The penalties to use for estimation.

fit_intercept : bool, default=True

Whether to fit an intercept.

normalise : bool, default=True

Whether to normalise the data.

alpha_per_target : bool, default=False

Whether to use a different penalty for each target.

Attributes:
estimator_ : mvpy.estimators.RidgeCV

The ridge estimator.

pattern_ : np.ndarray | torch.Tensor

The decoding patterns of shape (n_channels, n_features), computed following [1].

coef_ : np.ndarray | torch.Tensor

The coefficients of the decoder of shape (n_features, n_channels).

intercept_ : np.ndarray | torch.Tensor

The intercepts of the decoder of shape (n_features,).

alpha_ : np.ndarray | torch.Tensor

The penalties used for estimation.

metric_ : mvpy.metrics.r2

The default metric to use.

See also

mvpy.estimators.RidgeCV

The estimator used for ridge decoding.

mvpy.estimators.B2B

An alternative decoding estimator that explicitly disentangles correlated features.

Notes

While this class supports decoding an arbitrary number of features at once, all features will be treated as individual regressions. Consequently, this class cannot control for correlations among predictors. If this is desired, refer to B2B instead.

References

[1]

Haufe, S., Meinecke, F., Görgen, K., Dähne, S., Haynes, J.-D., Blankertz, B., & Bießmann, F. (2014). On the interpretation of weight vectors of linear models in multivariate neuroimaging. NeuroImage, 87, 96-110. https://doi.org/10.1016/j.neuroimage.2013.10.067

Examples

>>> import torch
>>> from mvpy.estimators import RidgeDecoder
>>> X = torch.normal(0, 1, (100, 5))
>>> ß = torch.normal(0, 1, (5, 60))
>>> y = X @ ß + torch.normal(0, 1, (100, 60))
>>> decoder = RidgeDecoder(alphas = torch.logspace(-5, 10, 20)).fit(y, X)
>>> decoder.pattern_.shape
torch.Size([60, 5])
>>> decoder.predict(y).shape
torch.Size([100, 5])
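
Continuing the example, the fitted coefficients and the default scores follow the shapes documented above (a sketch based on those documented shapes):

>>> decoder.coef_.shape
torch.Size([5, 60])
>>> decoder.score(y, X).shape
torch.Size([5])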
clone() → RidgeDecoder[source]#

Clone this class.

Returns:
decoder : mvpy.estimators.RidgeDecoder

The cloned object.

fit(X: ndarray | Tensor, y: ndarray | Tensor)[source]#

Fit the estimator.

Parameters:
X : np.ndarray | torch.Tensor

The neural data of shape (n_trials, n_channels).

y : np.ndarray | torch.Tensor

The features of shape (n_trials, n_features).

predict(X: ndarray | Tensor) → ndarray | Tensor[source]#

Predict from the estimator.

Parameters:
X : np.ndarray | torch.Tensor

The neural data of shape (n_trials, n_channels).

Returns:
y_h : np.ndarray | torch.Tensor

The predictions of shape (n_trials, n_features).

score(X: ndarray | Tensor, y: ndarray | Tensor, metric: Metric | Tuple[Metric] | None = None) → ndarray | Tensor | Dict[str, ndarray] | Dict[str, Tensor][source]#

Make predictions from \(X\) and score against \(y\).

Parameters:
X : np.ndarray | torch.Tensor

The neural data of shape (n_trials, n_channels).

y : np.ndarray | torch.Tensor

The features of shape (n_trials, n_features).

metric : Metric | Tuple[Metric] | None, default=None

Metric or tuple of metrics to compute. If None, defaults to metric_.

Returns:
score : np.ndarray | torch.Tensor | Dict[str, np.ndarray] | Dict[str, torch.Tensor]

Scores of shape (n_features,) or, for multiple metrics, a dictionary of metric names and scores of shape (n_features,).

Warning

If multiple values are supplied for metric, this function returns a dictionary of {Metric.name: score, ...} rather than a stacked array. This keeps the output consistent across metrics that may differ in their output shapes.
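
As a usage sketch, continuing the example above (passing the documented default metric explicitly via metric_; supplying a tuple of several metrics instead yields the dictionary output described above):

>>> decoder.score(y, X, metric = decoder.metric_).shape
torch.Size([5])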