mvpy.metrics package#
Submodules#
mvpy.metrics.accuracy module#
- class mvpy.metrics.accuracy.Accuracy(name: str = 'accuracy', request: str | ~typing.Tuple[str] = ('y', 'predict'), reduce: int | ~typing.Tuple[int] = (0, ), f: ~typing.Callable = <function accuracy>)[source]#
Bases: Metric
Implements mvpy.math.accuracy() as a Metric.
Warning
This class extends Metric. If you would like to apply this metric, please use the instance exposed under mvpy.metrics.accuracy.
For more information on this, please consult the documentation of Metric and score().
- Parameters:
- name : str, default='accuracy'
The name of this metric.
- request : str | tuple[str], default=('y', 'predict')
The values to request for scoring.
- reduce : int | tuple[int], default=(0,)
The dimension(s) to reduce over.
- f : Callable, default=mvpy.math.accuracy
The function to call.
Examples
>>> import torch
>>> from mvpy.dataset import make_meeg_categorical
>>> from mvpy.estimators import RidgeClassifier
>>> from mvpy.crossvalidation import cross_val_score
>>> from mvpy.metrics import accuracy
>>> X, y = make_meeg_categorical()
>>> clf = RidgeClassifier()
>>> cross_val_score(clf, X, y, metric = accuracy)
- f(y: ndarray | Tensor) → ndarray | Tensor[source]#
Compute accuracy between x and y. Note that accuracy is always computed over the final dimension.
- Parameters:
- x : Union[np.ndarray, torch.Tensor]
Vector/Matrix/Tensor
- y : Union[np.ndarray, torch.Tensor]
Vector/Matrix/Tensor
- Returns:
- Union[np.ndarray, torch.Tensor]
Accuracy
Notes
Accuracy is defined as:
\[\text{accuracy}(x, y) = \frac{1}{N}\sum_{i=1}^{N}{1(x_i = y_i)}\]
Examples
>>> import torch
>>> from mvpy.math import accuracy
>>> x = torch.tensor([1, 0])
>>> y = torch.tensor([-1, 0])
>>> accuracy(x, y)
tensor([0.5])
- name: str = 'accuracy'#
- reduce: int | Tuple[int] = (0,)#
- request: str | Tuple[str] = ('y', 'predict')#
Implements mvpy.math.accuracy() as a Metric.
mvpy.metrics.metric module#
Base metric class.
- class mvpy.metrics.metric.Metric(name: str = 'metric', request: ~typing.Tuple[str] = ('y', 'predict'), reduce: int | ~typing.Tuple[int] = (0, ), f: ~typing.Callable = <function Metric.<lambda>>)[source]#
Bases: object
Implements an interface class for mutable metrics of model evaluation.
Generally, when we fit models, we want to be able to evaluate how well they explain some facet of our data. Many individual functions for doing so are implemented in math. However, depending on the exact dimensionality of our data and the exact outcome we want to measure, it is convenient to have automatic control over what kinds of data are required for a given metric, what dimensions should be considered features, and what exact function should be called.
For example, metric functions in math generally expect features to live in the final dimension of a tensor. However, if we have neural data \(X\) of shape (n_trials, n_channels, n_timepoints) and a semantic embedding \(y\) of shape (n_trials, n_features, n_timepoints) that we decode using a pipeline wrapping RidgeDecoder in Sliding, the results from calling predict() will be of shape (n_trials, n_features, n_timepoints). In practice, we often want to compute a metric over the first dimension that is returned, i.e., trials. Consequently, we can specify a metric where request is ('y', 'predict'), reduce is (0,), and f is mvpy.math.r2. In fact, this metric is already implemented as R2, which will conveniently handle all of these steps for us.
At the same time, we may want to use this existing R2 metric, but may also be interested in how well the embedding geometry was decoded. For this, we may want to compute our metric over the feature dimension instead. For such cases, Metric exposes mutate(), which allows us to obtain a new Metric with, for example, reduce mutated to (1,).
- Parameters:
- name : str, default='metric'
The name of the metric. If models are scored using multiple metrics, the name of any metric will be a key in the resulting output dictionary. See score() for more information.
- request : Tuple[{'X', 'y', 'decision_function', 'predict', 'transform', 'predict_proba', str}], default=('y', 'predict')
A tuple of strings defining what measures are required for computing this metric. Generally, this will first try to find a corresponding method or, alternatively, a corresponding attribute in the class. All requested attributes will be supplied to the metric function in order of request.
- reduce : int | Tuple[int], default=(0,)
Dimensions that should be reduced when computing this metric. In practice, this means that the specified dimensions will be flattened and moved to the final dimension where math expects features to live. See score() for more information.
- f : Callable, default=lambda x: x
The math function that should be used for computing this particular metric.
- Attributes:
- name : str, default='metric'
The name of the metric. If models are scored using multiple metrics, the name of any metric will be a key in the resulting output dictionary. See score() for more information.
- request : Tuple[{'X', 'y', 'decision_function', 'predict', 'transform', 'predict_proba', str}], default=('y', 'predict')
A tuple of strings defining what measures are required for computing this metric. Generally, this will first try to find a corresponding method or, alternatively, a corresponding attribute in the class. All requested attributes will be supplied to the metric function in order of request.
- reduce : int | Tuple[int], default=(0,)
Dimensions that should be reduced when computing this metric. In practice, this means that the specified dimensions will be flattened and moved to the final dimension where math expects features to live. See score() for more information.
- f : Callable, default=lambda x: x
The math function that should be used for computing this particular metric.
See also
mvpy.metrics.score
The function handling the logic before Metric is called.
Notes
All metrics implemented in this submodule are implemented as dataclasses. Instances of these classes are automatically instantiated and are available as snake-cased variables. For example, for the metric class Roc_auc, we can directly access roc_auc.
Examples
>>> import torch
>>> from mvpy.dataset import make_meeg_continuous
>>> from mvpy.preprocessing import Scaler
>>> from mvpy.estimators import TimeDelayed
>>> from mvpy import metrics
>>> from sklearn.pipeline import make_pipeline
>>> fs = 200
>>> X, y = make_meeg_continuous(fs = fs)
>>> trf = make_pipeline(
>>>     Scaler().to_torch(),
>>>     TimeDelayed(
>>>         -1.0, 0.0, fs,
>>>         alphas = torch.logspace(-5, 5, 10)
>>>     )
>>> ).fit(X, y)
>>> scores = metrics.score(
>>>     trf,
>>>     (
>>>         metrics.r2,
>>>         metrics.r2.mutate(
>>>             name = 'r2_time',
>>>             reduce = (2,)
>>>         )
>>>     ),
>>>     X, y
>>> )
>>> scores['r2'].shape, scores['r2_time'].shape
(torch.Size([64, 400]), torch.Size([120, 64]))
- mutate(name: str | None = None, request: str | None = None, reduce: int | Tuple[int] | None = None, f: Callable | None = None) → Metric[source]#
Mutate an existing metric.
- Parameters:
- name : Optional[str], default=None
If not None, the name of the mutated metric.
- request : Optional[str], default=None
If not None, the features to request for the mutated metric.
- reduce : Optional[int | Tuple[int]], default=None
If not None, the dimensions to reduce for the mutated metric.
- f : Optional[Callable], default=None
If not None, the underlying math function to use for the mutated metric.
- Returns:
- metric : Metric
The mutated metric.
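A brief usage sketch, mirroring the Examples for Metric above (it assumes the metrics.r2 instance described in the Notes; only the overridden fields change):
>>> from mvpy import metrics
>>> r2_time = metrics.r2.mutate(name = 'r2_time', reduce = (2,))
>>> r2_time.name, r2_time.reduce
('r2_time', (2,))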
- name: str = 'metric'#
- reduce: int | Tuple[int] = (0,)#
- request: Tuple[str] = ('y', 'predict')#
mvpy.metrics.pearsonr module#
- class mvpy.metrics.pearsonr.Pearsonr(name: str = 'pearsonr', request: str | ~typing.Tuple[str] = ('y', 'predict'), reduce: int | ~typing.Tuple[int] = (0, ), f: ~typing.Callable = <function pearsonr>)[source]#
Bases: Metric
Implements mvpy.math.pearsonr() as a Metric.
Warning
This class extends Metric. If you would like to apply this metric, please use the instance exposed under mvpy.metrics.pearsonr.
For more information on this, please consult the documentation of Metric and score().
- Parameters:
- name : str, default='pearsonr'
The name of this metric.
- request : str | tuple[str], default=('y', 'predict')
The values to request for scoring.
- reduce : int | tuple[int], default=(0,)
The dimension(s) to reduce over.
- f : Callable, default=mvpy.math.pearsonr
The function to call.
Examples
>>> import torch
>>> from mvpy.dataset import make_meeg_continuous
>>> from mvpy.estimators import RidgeDecoder
>>> from mvpy.crossvalidation import cross_val_score
>>> from mvpy.metrics import pearsonr
>>> X, y = make_meeg_continuous()
>>> clf = RidgeDecoder()
>>> cross_val_score(clf, X, y, metric = pearsonr)
- f(y: ndarray | Tensor, *args: Any) → ndarray | Tensor[source]#
Compute Pearson correlations between x and y. Note that correlations are always computed over the final dimension.
- Parameters:
- x : Union[np.ndarray, torch.Tensor]
Matrix ([samples …] x features)
- y : Union[np.ndarray, torch.Tensor]
Matrix ([samples …] x features)
- Returns:
- Union[np.ndarray, torch.Tensor]
Vector or matrix of Pearson correlations.
Notes
Pearson correlations are defined as:
\[r = \frac{\sum{(x_i - \bar{x})(y_i - \bar{y})}}{\sqrt{\sum{(x_i - \bar{x})^2} \sum{(y_i - \bar{y})^2}}}\]
where \(x_i\) and \(y_i\) are the \(i\)-th elements of \(x\) and \(y\), respectively.
Examples
>>> import torch
>>> from mvpy.math import pearsonr
>>> x = torch.tensor([1, 2, 3])
>>> y = torch.tensor([4, 5, 6])
>>> pearsonr(x, y)
tensor(1., dtype=torch.float64)
- name: str = 'pearsonr'#
- reduce: int | Tuple[int] = (0,)#
- request: str | Tuple[str] = ('y', 'predict')#
Implements mvpy.math.pearsonr() as a Metric.
mvpy.metrics.r2 module#
- class mvpy.metrics.r2.R2(name: str = 'r2', request: str | ~typing.Tuple[str] = ('y', 'predict'), reduce: int | ~typing.Tuple[int] = (0, ), f: ~typing.Callable = <function r2>)[source]#
Bases: Metric
Implements mvpy.math.r2() as a Metric.
Warning
This class extends Metric. If you would like to apply this metric, please use the instance exposed under mvpy.metrics.r2.
For more information on this, please consult the documentation of Metric and score().
- Parameters:
- name : str, default='r2'
The name of this metric.
- request : str | tuple[str], default=('y', 'predict')
The values to request for scoring.
- reduce : int | tuple[int], default=(0,)
The dimension(s) to reduce over.
- f : Callable, default=mvpy.math.r2
The function to call.
Examples
>>> import torch
>>> from mvpy.dataset import make_meeg_categorical
>>> from mvpy.estimators import RidgeClassifier
>>> from mvpy.crossvalidation import cross_val_score
>>> from mvpy.metrics import r2
>>> X, y = make_meeg_categorical()
>>> clf = RidgeClassifier()
>>> cross_val_score(clf, X, y, metric = r2)
- f(y_h: ndarray | Tensor) → ndarray | Tensor[source]#
Compute the coefficient of determination (R2) between true outcomes y and predicted outcomes y_h. Note that R2 is always computed over the final dimension.
- Parameters:
- y : Union[np.ndarray, torch.Tensor]
True outcomes of shape ([n_samples, ...,] n_features).
- y_h : Union[np.ndarray, torch.Tensor]
Predicted outcomes of shape ([n_samples, ...,] n_features).
- Returns:
- r : Union[np.ndarray, torch.Tensor]
R2 scores of shape ([n_samples, ...]).
Examples
>>> import torch
>>> from mvpy.math import r2
>>> y = torch.tensor([1.0, 2.0, 3.0])
>>> y_h = torch.tensor([1.0, 2.0, 3.0])
>>> r2(y, y_h)
tensor([1.0])
- name: str = 'r2'#
- reduce: int | Tuple[int] = (0,)#
- request: str | Tuple[str] = ('y', 'predict')#
Implements mvpy.math.r2() as a Metric.
mvpy.metrics.roc_auc module#
- class mvpy.metrics.roc_auc.Roc_auc(name: str = 'roc_auc', request: str | ~typing.Tuple[str] = ('y', 'decision_function'), reduce: int | ~typing.Tuple[int] = (0, ), f: ~typing.Callable = <function roc_auc>)[source]#
Bases: Metric
Implements mvpy.math.roc_auc() as a Metric.
Warning
This class extends Metric. If you would like to apply this metric, please use the instance exposed under mvpy.metrics.roc_auc.
For more information on this, please consult the documentation of Metric and score().
- Parameters:
- name : str, default='roc_auc'
The name of this metric.
- request : str | tuple[str], default=('y', 'decision_function')
The values to request for scoring.
- reduce : int | tuple[int], default=(0,)
The dimension(s) to reduce over.
- f : Callable, default=mvpy.math.roc_auc
The function to call.
Examples
>>> import torch
>>> from mvpy.dataset import make_meeg_categorical
>>> from mvpy.estimators import RidgeClassifier
>>> from mvpy.crossvalidation import cross_val_score
>>> from mvpy.metrics import roc_auc
>>> X, y = make_meeg_categorical()
>>> clf = RidgeClassifier()
>>> cross_val_score(clf, X, y, metric = roc_auc)
- f(y_score: ndarray | Tensor) → ndarray | Tensor[source]#
Compute ROC AUC score between y_true and y_score. Note that ROC AUC is always computed over the final dimension.
- Parameters:
- y_true : Union[np.ndarray, torch.Tensor]
Vector/Matrix/Tensor
- y_score : Union[np.ndarray, torch.Tensor]
Vector/Matrix/Tensor
- Returns:
- Union[np.ndarray, torch.Tensor]
ROC AUC score
Notes
ROC AUC is computed using the Mann-Whitney U formula:
\[\text{ROCAUC}(y, \hat{y}) = \frac{R_{+} - \frac{P \cdot (P + 1)}{2}}{P \cdot N}\]
where \(R_{+}\) is the sum of ranks for positive classes, \(P\) is the number of positive samples, and \(N\) is the number of negative samples.
In the case that labels are not binary, we create unique binary labels by one-hot encoding the labels and then take the macro-average over classes.
Examples
>>> import torch
>>> from mvpy.math import roc_auc
>>> y_true = torch.tensor([1., 0.])
>>> y_score = torch.tensor([-1., 1.])
>>> roc_auc(y_true, y_score)
tensor(0.)
>>> import torch
>>> from mvpy.math import roc_auc
>>> y_true = torch.tensor([0., 1., 2.])
>>> y_score = torch.tensor([[-1., 1., 1.], [1., -1., 1.], [1., 1., -1.]])
>>> roc_auc(y_true, y_score)
tensor(0.)
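For illustration, the rank-sum formula from the Notes can be spelled out directly. This is a minimal sketch for the binary case only (it does not average tied ranks and does not macro-average over classes) and is not mvpy's actual implementation:

import torch

def roc_auc_ranksum(y_true: torch.Tensor, y_score: torch.Tensor) -> torch.Tensor:
    # 1-based ranks of the scores (ties are not averaged in this sketch)
    ranks = torch.argsort(torch.argsort(y_score)).double() + 1.0
    P = (y_true == 1).sum().double()   # number of positive samples
    N = (y_true == 0).sum().double()   # number of negative samples
    R_pos = ranks[y_true == 1].sum()   # sum of ranks of the positive samples
    return (R_pos - P * (P + 1.0) / 2.0) / (P * N)

roc_auc_ranksum(torch.tensor([1., 0.]), torch.tensor([-1., 1.]))  # tensor(0., dtype=torch.float64)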
- name: str = 'roc_auc'#
- reduce: int | Tuple[int] = (0,)#
- request: str | Tuple[str] = ('y', 'decision_function')#
Implements mvpy.math.roc_auc() as a Metric.
mvpy.metrics.score module#
Base metric class.
- mvpy.metrics.score.score(model: Pipeline | BaseEstimator, metric: Metric | Tuple[Metric], X: ndarray | Tensor, y: ndarray | Tensor | None = None) → List[ndarray | Tensor][source]#
Score a fitted model or pipeline using one or more metrics. For each metric, the values named in its request are collected from the model (or from X and y), the dimensions named in reduce are flattened into the final dimension, and the metric's function f is applied. The resulting scores are keyed by each metric's name.
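A brief usage sketch, reusing the fitted pipeline trf and the data X, y from the Metric example above (those names are assumed from that example):
>>> from mvpy import metrics
>>> scores = metrics.score(trf, (metrics.r2, metrics.pearsonr), X, y)
>>> scores['r2'].shape
torch.Size([64, 400])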
mvpy.metrics.spearmanr module#
- class mvpy.metrics.spearmanr.Spearmanr(name: str = 'spearmanr', request: str | ~typing.Tuple[str] = ('y', 'predict'), reduce: int | ~typing.Tuple[int] = (0, ), f: ~typing.Callable = <function spearmanr>)[source]#
Bases: Metric
Implements mvpy.math.spearmanr() as a Metric.
Warning
This class extends Metric. If you would like to apply this metric, please use the instance exposed under mvpy.metrics.spearmanr.
For more information on this, please consult the documentation of Metric and score().
- Parameters:
- name : str, default='spearmanr'
The name of this metric.
- request : str | tuple[str], default=('y', 'predict')
The values to request for scoring.
- reduce : int | tuple[int], default=(0,)
The dimension(s) to reduce over.
- f : Callable, default=mvpy.math.spearmanr
The function to call.
Examples
>>> import torch
>>> from mvpy.dataset import make_meeg_continuous
>>> from mvpy.estimators import RidgeDecoder
>>> from mvpy.crossvalidation import cross_val_score
>>> from mvpy.metrics import spearmanr
>>> X, y = make_meeg_continuous()
>>> clf = RidgeDecoder()
>>> cross_val_score(clf, X, y, metric = spearmanr)
- f(y: ndarray | Tensor, *args: Any) → ndarray | Tensor[source]#
Compute Spearman correlation between x and y. Note that correlations are always computed over the final dimension in your inputs.
- Parameters:
- x : Union[np.ndarray, torch.Tensor]
Matrix to compute correlation with.
- y : Union[np.ndarray, torch.Tensor]
Matrix to compute correlation with.
- Returns:
- Union[np.ndarray, torch.Tensor]
Spearman correlations.
Notes
Spearman correlations are defined as Pearson correlations between the ranks of x and y.
Examples
>>> import torch
>>> from mvpy.math import spearmanr
>>> x = torch.tensor([1, 5, 9])
>>> y = torch.tensor([1, 50, 60])
>>> spearmanr(x, y)
tensor(1., dtype=torch.float64)
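Since the Spearman correlation is the Pearson correlation of ranks, the equivalence in the Notes can also be illustrated directly. This is a minimal sketch; it assumes mvpy.math.rank (referenced elsewhere in this package) returns ranks suitable as input to pearsonr:
>>> import torch
>>> from mvpy.math import pearsonr, rank, spearmanr
>>> x = torch.tensor([1, 5, 9])
>>> y = torch.tensor([1, 50, 60])
>>> spearmanr(x, y)
tensor(1., dtype=torch.float64)
>>> pearsonr(rank(x), rank(y))
tensor(1., dtype=torch.float64)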
- name: str = 'spearmanr'#
- reduce: int | Tuple[int] = (0,)#
- request: str | Tuple[str] = ('y', 'predict')#
Implements mvpy.math.spearmanr() as a Metric.
Module contents#
Base metric class.