RidgeEncoder#
- class mvpy.estimators.RidgeEncoder(alphas: Tensor | ndarray | float | int = 1, **kwargs)[source]#
Implements a linear ridge encoder.
This encoder maps features \(X\) to neural data \(y\) through the forward model \(\beta\):
\[y = \beta X + \varepsilon\]
Consequently, we solve for the forward model through:
\[\arg\min_{\beta} \sum_i (y_i - \beta^T X_i)^2 + \alpha_\beta \lVert\beta\rVert^2\]
where \(\alpha_\beta\) are the penalties to test in LOO-CV.
Unlike a standard RidgeCV, this class also supports solving for the full encoding model (including all time points) at once, using a single alpha. This can be useful when trying to avoid different alphas at different time steps, as would be the case when using Sliding to slide over the temporal dimension while encoding.
- Parameters:
- alphas : np.ndarray | torch.Tensor | float | int, default=1
The penalties to use for estimation.
- kwargs : Any
Additional arguments.
- Attributes:
- alphas : np.ndarray | torch.Tensor
The penalties to use for estimation.
- kwargs : Any
Additional arguments for the estimator.
- estimator : mvpy.estimators.RidgeCV
The estimator to use.
- intercept_ : np.ndarray | torch.Tensor
The intercepts of the encoder of shape (1, n_channels).
- coef_ : np.ndarray | torch.Tensor
The coefficients of the encoder of shape (n_features, n_channels[, n_timepoints]).
- metric_ : mvpy.metrics.r2
The default metric to use.
See also
mvpy.estimators.RidgeCV
The estimator used for encoding.
mvpy.estimators.TimeDelayed, mvpy.estimators.ReceptiveField
Alternative estimators for explicitly modeling temporal response functions.
Notes
This estimator assumes a one-to-one mapping between feature time and neural time. This is, of course, wrong in principle, but it may be good enough when we have a simple set of features and want to find out at which points in time they correspond to the neural data, for example when regressing semantic embeddings onto neural data. For more explicit modeling of temporal response functions, see TimeDelayed or ReceptiveField.
Examples
Let’s say we want to do a very simple encoding:
>>> import torch
>>> from mvpy.estimators import RidgeEncoder
>>> ß = torch.normal(0, 1, (50,))
>>> X = torch.normal(0, 1, (100, 50))
>>> y = X @ ß
>>> y = y[:, None] + torch.normal(0, 1, (100, 1))
>>> encoder = RidgeEncoder().fit(X, y)
>>> encoder.coef_.shape
torch.Size([1, 50])
Next, let’s assume we want to do a temporally expanded encoding instead:
>>> import torch
>>> from mvpy.estimators import RidgeEncoder
>>> X = torch.normal(0, 1, (240, 5, 100))
>>> ß = torch.normal(0, 1, (60, 5, 100))
>>> y = torch.stack([torch.stack([X[:, :, i] @ ß[j, :, i] for i in range(X.shape[2])], 0) for j in range(ß.shape[0])], 0).swapaxes(0, 2).swapaxes(1, 2)
>>> y = y + torch.normal(0, 1, y.shape)
>>> encoder = RidgeEncoder().fit(X, y)
>>> encoder.coef_.shape
torch.Size([60, 5, 100])
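Finally, we can also supply the candidate penalties explicitly. The following is a minimal sketch along the lines of the first example; the logarithmic grid is just one possible choice of alphas:
>>> import torch
>>> from mvpy.estimators import RidgeEncoder
>>> ß = torch.normal(0, 1, (50,))
>>> X = torch.normal(0, 1, (100, 50))
>>> y = (X @ ß)[:, None] + torch.normal(0, 1, (100, 1))
>>> encoder = RidgeEncoder(alphas=torch.logspace(-5, 5, 20)).fit(X, y)
>>> encoder.coef_.shape
torch.Size([1, 50])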
- clone() → RidgeEncoder[source]#
Clone this estimator.
- Returns:
- encoder : mvpy.estimators.RidgeEncoder
The cloned object.
- fit(X: ndarray | Tensor, y: ndarray | Tensor) → RidgeEncoder[source]#
Fit the estimator.
- Parameters:
- X : np.ndarray | torch.Tensor
The features of shape (n_trials, n_features[, n_timepoints]).
- y : np.ndarray | torch.Tensor
The neural data of shape (n_trials, n_channels[, n_timepoints]).
- Returns:
- encoder : mvpy.estimators.RidgeEncoder
The fitted encoder.
- predict(X: ndarray | Tensor) → ndarray | Tensor[source]#
Predict from the estimator.
- Parameters:
- X : np.ndarray | torch.Tensor
The features of shape (n_trials, n_features[, n_timepoints]).
- Returns:
- y_h : np.ndarray | torch.Tensor
The predictions of shape (n_trials, n_channels[, n_timepoints]).
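For example (a minimal sketch with purely synthetic data; shapes follow the conventions documented above), predictions mirror the channel layout of the training targets:
>>> import torch
>>> from mvpy.estimators import RidgeEncoder
>>> X = torch.normal(0, 1, (100, 50))  # (n_trials, n_features)
>>> y = torch.normal(0, 1, (100, 3))   # (n_trials, n_channels)
>>> encoder = RidgeEncoder().fit(X, y)
>>> y_h = encoder.predict(X)
>>> y_h.shape
torch.Size([100, 3])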
- score(X: ndarray | Tensor, y: ndarray | Tensor, metric: Metric | Tuple[Metric] | None = None) → ndarray | Tensor | Dict[str, ndarray] | Dict[str, Tensor][source]#
Make predictions from \(X\) and score against \(y\).
- Parameters:
- X : np.ndarray | torch.Tensor
Input data of shape (n_trials, n_features[, n_timepoints]).
- y : np.ndarray | torch.Tensor
Output data of shape (n_trials, n_channels[, n_timepoints]).
- metric : Metric | Tuple[Metric] | None, default=None
Metric or tuple of metrics to compute. If None, defaults to metric_.
- Returns:
- score : np.ndarray | torch.Tensor | Dict[str, np.ndarray] | Dict[str, torch.Tensor]
Scores of shape (n_features[, n_timepoints]) or, for multiple metrics, a dictionary of metric names and scores of shape (n_features[, n_timepoints]).
Warning
If multiple values are supplied for metric, this function will output a dictionary of {Metric.name: score, ...} rather than a stacked array. This is to provide consistency across cases where metrics may or may not differ in their output shapes.
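As a rough sketch of usage (assuming mvpy.metrics.pearsonr is available alongside the documented mvpy.metrics.r2, and that metrics are passed as the objects exposed by mvpy.metrics; check your installed version):
>>> import torch
>>> from mvpy.estimators import RidgeEncoder
>>> from mvpy import metrics
>>> X = torch.normal(0, 1, (100, 50))
>>> y = torch.normal(0, 1, (100, 3))
>>> encoder = RidgeEncoder().fit(X, y)
>>> r2 = encoder.score(X, y)  # defaults to metric_, i.e. r2
>>> both = encoder.score(X, y, metric=(metrics.r2, metrics.pearsonr))  # dict of {Metric.name: score}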