Sliding#

class mvpy.estimators.Sliding(estimator: Callable | BaseEstimator, dims: int | tuple | list | ndarray | Tensor = -1, n_jobs: int | None = None, top: bool = True, verbose: bool = False)[source]#

Implements a sliding estimator that allows you to fit estimators iteratively over a set of dimensions.

This is particularly useful when the data have a temporal dimension: for example, given neural data \(X\) (n_trials, n_channels, n_timepoints) and class labels \(y\) (n_trials, n_features, n_timepoints), we may want to fit a separate classifier at each time step. In this case, we can wrap our classifier object in Sliding with dims=(-1,) to automatically fit classifiers across all timepoints.

Parameters:
estimator : Callable | sklearn.base.BaseEstimator

Estimator to use. Note that this must expose a clone() method.

dims : int | Tuple[int] | List[int] | np.ndarray | torch.Tensor, default=-1

Dimensions to slide over. Note that the backend (torch or numpy) is inferred from the type of dims, defaulting to torch. If you are fitting a numpy estimator, specify dims as an np.ndarray.

n_jobs : Optional[int], default=None

Number of jobs to run in parallel.

top : bool, default=True

Whether this is a top-level estimator. When multiple dims are specified, nested Sliding objects are created recursively with top=False.

verbose : bool, default=False

Should progress be reported verbosely?

Attributes:
estimator : Callable | sklearn.base.BaseEstimator

Estimator to use. Note that this must expose a clone() method.

dims : int | Tuple[int] | List[int] | np.ndarray | torch.Tensor, default=-1

Dimensions to slide over. Note that the backend (torch or numpy) is inferred from the type of dims, defaulting to torch. If you are fitting a numpy estimator, specify dims as an np.ndarray.

n_jobs : Optional[int], default=None

Number of jobs to run in parallel.

top : bool, default=True

Whether this is a top-level estimator. When multiple dims are specified, nested Sliding objects are created recursively with top=False.

verbose : bool, default=False

Should progress be reported verbosely?

estimators_ : List[Callable | sklearn.base.BaseEstimator]

List of fitted estimators.

Notes

When fitting estimators using fit(), X and y must have the same number of dimensions. If this is not the case, please pad or expand your data appropriately.
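For instance, if \(y\) lacks a feature dimension that \(X\) has, a singleton axis can be inserted so both arrays agree in their number of dimensions (a minimal numpy sketch; shapes are illustrative):

```python
import numpy as np

X = np.zeros((240, 64, 100))  # (n_trials, n_channels, n_timepoints)
y = np.zeros((240, 100))      # (n_trials, n_timepoints) -- one dimension short

# Insert a singleton feature axis so X and y agree in ndim.
y = y[:, None, :]             # equivalently np.expand_dims(y, 1)
print(X.ndim, y.ndim)         # 3 3
print(y.shape)                # (240, 1, 100)
```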

Examples

If, for example, we have \(X\) (n_trials, n_frequencies, n_channels, n_timepoints) and \(y\) (n_trials, n_frequencies, n_features, n_timepoints) and we want to slide a RidgeDecoder over (n_frequencies, n_timepoints), we can do:

>>> import torch
>>> from mvpy.estimators import Sliding, RidgeDecoder
>>> X = torch.normal(0, 1, (240, 5, 64, 100))
>>> y = torch.normal(0, 1, (240, 1, 5, 100))
>>> decoder = RidgeDecoder(
...     alphas = torch.logspace(-5, 10, 20)
... )
>>> sliding = Sliding(
...     estimator = decoder,
...     dims = (1, 3),
...     n_jobs = 4
... ).fit(X, y)
>>> patterns = sliding.collect('pattern_')
>>> patterns.shape
torch.Size([5, 100, 64, 5])
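Conceptually, what Sliding automates over a single dimension can be sketched in plain numpy. The snippet below is a toy stand-in (ordinary least squares rather than a real mvpy estimator), not mvpy's implementation:

```python
import numpy as np

# Toy data: 240 trials, 64 channels, 100 timepoints; 3 target features.
rng = np.random.default_rng(0)
X = rng.normal(size=(240, 64, 100))
y = rng.normal(size=(240, 3, 100))

# What Sliding(dims=(-1,)) automates: fit one estimator per timepoint.
coefs = []
for t in range(X.shape[-1]):
    # Least-squares fit at this timepoint (stand-in for any estimator).
    beta, *_ = np.linalg.lstsq(X[..., t], y[..., t], rcond=None)
    coefs.append(beta)

coefs = np.stack(coefs)  # (n_timepoints, n_channels, n_features)
print(coefs.shape)       # (100, 64, 3)
```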
clone()[source]#

Clone this estimator.

Returns:
sliding : mvpy.estimators.Sliding

The cloned estimator.

collect(attr: str) ndarray | Tensor[source]#

Collect an attribute from all estimators.

Parameters:
attr : str

Attribute to collect from all fitted estimators.

Returns:
attr : np.ndarray | torch.Tensor

Collected attribute of shape (*dims[, ...]).
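The stacking behaviour can be illustrated in plain numpy. Continuing the class example above, suppose we slid over dims of sizes (5, 100) and each fitted estimator exposes a pattern_ attribute of shape (64, 5); collect('pattern_') conceptually stacks those into shape (*dims, *attr.shape). A sketch, not mvpy's implementation:

```python
import numpy as np

# One (64, 5) attribute per fitted estimator on a 5 x 100 grid of slices.
attrs = [np.zeros((64, 5)) for _ in range(5 * 100)]

# Stack and restore the slide-grid dimensions in front.
stacked = np.stack(attrs).reshape(5, 100, 64, 5)
print(stacked.shape)  # (5, 100, 64, 5)
```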

fit(X: ndarray | Tensor, y: ndarray | Tensor, *args) Sliding[source]#

Fit the sliding estimators.

Parameters:
X : np.ndarray | torch.Tensor

Input data of arbitrary shape.

y : np.ndarray | torch.Tensor

Target data of arbitrary shape.

*args

Additional arguments to pass to estimators.

Returns:
sliding : mvpy.estimators.Sliding

The fitted sliding estimator.

fit_transform(X: ndarray | Tensor, y: ndarray | Tensor, *args) ndarray | Tensor[source]#

Fit and transform the data.

Parameters:
X : np.ndarray | torch.Tensor

Input data of arbitrary shape.

y : np.ndarray | torch.Tensor

Target data of arbitrary shape.

*args : Any

Additional arguments.

Returns:
Z : np.ndarray | torch.Tensor

Transformed data of arbitrary shape.

predict(X: ndarray | Tensor, y: ndarray | Tensor | None = None, *args) ndarray | Tensor[source]#

Predict the targets.

Parameters:
X : np.ndarray | torch.Tensor

Input data of arbitrary shape.

y : Optional[np.ndarray | torch.Tensor], default=None

Target data of arbitrary shape.

*args : Any

Additional arguments.

Returns:
y_h : np.ndarray | torch.Tensor

Predicted data of arbitrary shape.

predict_proba(X: ndarray | Tensor, y: ndarray | Tensor | None = None, *args) ndarray | Tensor[source]#

Predict the probabilities.

Parameters:
X : np.ndarray | torch.Tensor

Input data of arbitrary shape.

y : Optional[np.ndarray | torch.Tensor], default=None

Target data of arbitrary shape.

*args : Any

Additional arguments.

Returns:
p : np.ndarray | torch.Tensor

Probabilities of arbitrary shape.

score(X: ndarray | Tensor, y: ndarray | Tensor, metric: Metric | Tuple[Metric] | None = None) ndarray | Tensor | Dict[str, ndarray] | Dict[str, Tensor][source]#

Make predictions from \(X\) and score against \(y\).

Parameters:
X : np.ndarray | torch.Tensor

Input data of arbitrary shape.

y : np.ndarray | torch.Tensor

Output data of arbitrary shape.

metric : Optional[Metric | Tuple[Metric]], default=None

Metric or tuple of metrics to compute. If None, defaults to the metric specified for the underlying estimator.

Returns:
score : np.ndarray | torch.Tensor | Dict[str, np.ndarray] | Dict[str, torch.Tensor]

Scores of arbitrary shape.

Warning

If multiple values are supplied for metric, this function will output a dictionary of {Metric.name: score, ...} rather than a stacked array. This ensures consistent behaviour across cases where metrics differ in their output shapes.

transform(X: ndarray | Tensor, y: ndarray | Tensor | None = None, *args) ndarray | Tensor[source]#

Transform the data.

Parameters:
X : np.ndarray | torch.Tensor

Input data of arbitrary shape.

y : Optional[np.ndarray | torch.Tensor], default=None

Target data of arbitrary shape.

*args : Any

Additional arguments.

Returns:
Z : np.ndarray | torch.Tensor

Transformed data of arbitrary shape.