mvpy.metrics package#
Submodules#
mvpy.metrics.accuracy module#
- class mvpy.metrics.accuracy.Accuracy(name: str = 'accuracy', request: str | ~typing.Tuple[str] = ('y', 'predict'), reduce: int | ~typing.Tuple[int] = (0, ), f: ~typing.Callable = <function accuracy>)[source]#
Bases: Metric

Implements mvpy.math.accuracy() as a Metric.

Warning

This class extends Metric. If you would like to apply this metric, please use the instance exposed under mvpy.metrics.accuracy. For more information on this, please consult the documentation of Metric and score().

- Parameters:
- name : str, default='accuracy'
The name of this metric.
- request : str | tuple[str], default=('y', 'predict')
The values to request for scoring.
- reduce : int | tuple[int], default=(0,)
The dimension(s) to reduce over.
- f : Callable, default=mvpy.math.accuracy
The function to call.
Examples
>>> import torch
>>> from mvpy.dataset import make_meeg_categorical
>>> from mvpy.estimators import RidgeClassifier
>>> from mvpy.crossvalidation import cross_val_score
>>> from mvpy.metrics import accuracy
>>> X, y = make_meeg_categorical()
>>> clf = RidgeClassifier()
>>> cross_val_score(clf, X, y, metric=accuracy)
- f(x: ndarray | Tensor, y: ndarray | Tensor) → ndarray | Tensor[source]#
Compute accuracy between x and y. Note that accuracy is always computed over the final dimension.
- Parameters:
- x : Union[np.ndarray, torch.Tensor]
Vector/Matrix/Tensor
- y : Union[np.ndarray, torch.Tensor]
Vector/Matrix/Tensor
- Returns:
- Union[np.ndarray, torch.Tensor]
Accuracy
Notes
Accuracy is defined as:
\[\text{accuracy}(x, y) = \frac{1}{N}\sum_{i=1}^{N}{\mathbb{1}(x_i = y_i)}\]

Examples
>>> import torch
>>> from mvpy.math import accuracy
>>> x = torch.tensor([1, 0])
>>> y = torch.tensor([-1, 0])
>>> accuracy(x, y)
tensor([0.5])
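As a cross-check, the formula above maps directly onto a few tensor operations. The following is a minimal sketch (accuracy_manual is a hypothetical name for illustration, not part of mvpy, and may differ from mvpy.math.accuracy() in details such as output shape):

import torch

def accuracy_manual(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Mean of the indicator 1(x_i == y_i) over the final dimension.
    return (x == y).float().mean(dim=-1)

accuracy_manual(torch.tensor([1, 0]), torch.tensor([-1, 0]))  # tensor(0.5000)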
- name: str = 'accuracy'#
- reduce: int | Tuple[int] = (0,)#
- request: str | Tuple[str] = ('y', 'predict')#
mvpy.metrics.metric module#
Base metric class.
- class mvpy.metrics.metric.Metric(name: str = 'metric', request: ~typing.Tuple[str] = ('y', 'predict'), reduce: int | ~typing.Tuple[int] = (0, ), f: ~typing.Callable = <function Metric.<lambda>>)[source]#
Bases: object

- mutate(name: str | None = None, request: str | None = None, reduce: int | Tuple[int] | None = None, f: Callable | None = None) → Metric[source]#
Return a new Metric with any of name, request, reduce, or f replaced.
- name: str = 'metric'#
- reduce: int | Tuple[int] = (0,)#
- request: Tuple[str] = ('y', 'predict')#
mvpy.metrics.pearsonr module#
- class mvpy.metrics.pearsonr.Pearsonr(name: str = 'pearsonr', request: str | ~typing.Tuple[str] = ('y', 'predict'), reduce: int | ~typing.Tuple[int] = (0, ), f: ~typing.Callable = <function pearsonr>)[source]#
Bases: Metric

Implements mvpy.math.pearsonr() as a Metric.

Warning

This class extends Metric. If you would like to apply this metric, please use the instance exposed under mvpy.metrics.pearsonr. For more information on this, please consult the documentation of Metric and score().

- Parameters:
- name : str, default='pearsonr'
The name of this metric.
- request : str | tuple[str], default=('y', 'predict')
The values to request for scoring.
- reduce : int | tuple[int], default=(0,)
The dimension(s) to reduce over.
- f : Callable, default=mvpy.math.pearsonr
The function to call.
Examples
>>> import torch
>>> from mvpy.dataset import make_meeg_continuous
>>> from mvpy.estimators import RidgeDecoder
>>> from mvpy.crossvalidation import cross_val_score
>>> from mvpy.metrics import pearsonr
>>> X, y = make_meeg_continuous()
>>> clf = RidgeDecoder()
>>> cross_val_score(clf, X, y, metric=pearsonr)
- f(x: ndarray | Tensor, y: ndarray | Tensor, *args: Any) → ndarray | Tensor[source]#
Compute Pearson correlations between x and y. Note that correlations are always computed over the final dimension.
- Parameters:
- x : Union[np.ndarray, torch.Tensor]
Matrix ([samples, ...] x features)
- y : Union[np.ndarray, torch.Tensor]
Matrix ([samples, ...] x features)
- Returns:
- Union[np.ndarray, torch.Tensor]
Vector or matrix of Pearson correlations
Notes
Pearson correlations are defined as:
\[r = \frac{\sum{(x_i - \bar{x})(y_i - \bar{y})}}{\sqrt{\sum{(x_i - \bar{x})^2} \sum{(y_i - \bar{y})^2}}}\]where \(x_i\) and \(y_i\) are the \(i\)-th elements of \(x\) and \(y\), respectively.
Examples
>>> import torch
>>> from mvpy.math import pearsonr
>>> x = torch.tensor([1, 2, 3])
>>> y = torch.tensor([4, 5, 6])
>>> pearsonr(x, y)
tensor(1., dtype=torch.float64)
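The formula above is likewise straightforward to reproduce directly. A minimal sketch over the final dimension (pearsonr_manual is a hypothetical helper, not the mvpy implementation):

import torch

def pearsonr_manual(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Centre both inputs over the final dimension.
    xc = x - x.mean(dim=-1, keepdim=True)
    yc = y - y.mean(dim=-1, keepdim=True)
    # Numerator and denominator of the formula above.
    num = (xc * yc).sum(dim=-1)
    den = torch.sqrt((xc ** 2).sum(dim=-1) * (yc ** 2).sum(dim=-1))
    return num / den

pearsonr_manual(torch.tensor([1., 2., 3.]), torch.tensor([4., 5., 6.]))  # tensor(1.)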
- name: str = 'pearsonr'#
- reduce: int | Tuple[int] = (0,)#
- request: str | Tuple[str] = ('y', 'predict')#
mvpy.metrics.r2 module#
- class mvpy.metrics.r2.R2(name: str = 'r2', request: str | ~typing.Tuple[str] = ('y', 'predict'), reduce: int | ~typing.Tuple[int] = (0, ), f: ~typing.Callable = <function r2>)[source]#
Bases: Metric

Implements mvpy.math.r2() as a Metric.

Warning

This class extends Metric. If you would like to apply this metric, please use the instance exposed under mvpy.metrics.r2. For more information on this, please consult the documentation of Metric and score().

- Parameters:
- name : str, default='r2'
The name of this metric.
- request : str | tuple[str], default=('y', 'predict')
The values to request for scoring.
- reduce : int | tuple[int], default=(0,)
The dimension(s) to reduce over.
- f : Callable, default=mvpy.math.r2
The function to call.
Examples
>>> import torch
>>> from mvpy.dataset import make_meeg_continuous
>>> from mvpy.estimators import RidgeDecoder
>>> from mvpy.crossvalidation import cross_val_score
>>> from mvpy.metrics import r2
>>> X, y = make_meeg_continuous()
>>> clf = RidgeDecoder()
>>> cross_val_score(clf, X, y, metric=r2)
- f(y: ndarray | Tensor, y_h: ndarray | Tensor) → ndarray | Tensor[source]#
Compute the R² score between y and y_h. Note that R² is always computed over the final dimension.
- Parameters:
- y : Union[np.ndarray, torch.Tensor]
True outcomes of shape ([n_samples, ...,] n_features).
- y_h : Union[np.ndarray, torch.Tensor]
Predicted outcomes of shape ([n_samples, ...,] n_features).
- Returns:
- r : Union[np.ndarray, torch.Tensor]
R² scores of shape ([n_samples, ...]).
Examples
>>> import torch
>>> from mvpy.math import r2
>>> y = torch.tensor([1.0, 2.0, 3.0])
>>> y_h = torch.tensor([1.0, 2.0, 3.0])
>>> r2(y, y_h)
tensor([1.0])
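The docstring does not spell out the formula; assuming mvpy.math.r2() implements the standard coefficient of determination, R² = 1 − SS_res / SS_tot, a minimal sketch over the final dimension looks like this (r2_manual is a hypothetical name):

import torch

def r2_manual(y: torch.Tensor, y_h: torch.Tensor) -> torch.Tensor:
    # Residual and total sums of squares over the final dimension.
    ss_res = ((y - y_h) ** 2).sum(dim=-1)
    ss_tot = ((y - y.mean(dim=-1, keepdim=True)) ** 2).sum(dim=-1)
    return 1.0 - ss_res / ss_tot

r2_manual(torch.tensor([1., 2., 3.]), torch.tensor([1., 2., 3.]))  # tensor(1.)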
- name: str = 'r2'#
- reduce: int | Tuple[int] = (0,)#
- request: str | Tuple[str] = ('y', 'predict')#
mvpy.metrics.roc_auc module#
- class mvpy.metrics.roc_auc.Roc_auc(name: str = 'roc_auc', request: str | ~typing.Tuple[str] = ('y', 'decision_function'), reduce: int | ~typing.Tuple[int] = (0, ), f: ~typing.Callable = <function roc_auc>)[source]#
Bases: Metric

Implements mvpy.math.roc_auc() as a Metric.

Warning

This class extends Metric. If you would like to apply this metric, please use the instance exposed under mvpy.metrics.roc_auc. For more information on this, please consult the documentation of Metric and score().

- Parameters:
- name : str, default='roc_auc'
The name of this metric.
- request : str | tuple[str], default=('y', 'decision_function')
The values to request for scoring.
- reduce : int | tuple[int], default=(0,)
The dimension(s) to reduce over.
- f : Callable, default=mvpy.math.roc_auc
The function to call.
Examples
>>> import torch
>>> from mvpy.dataset import make_meeg_categorical
>>> from mvpy.estimators import RidgeClassifier
>>> from mvpy.crossvalidation import cross_val_score
>>> from mvpy.metrics import roc_auc
>>> X, y = make_meeg_categorical()
>>> clf = RidgeClassifier()
>>> cross_val_score(clf, X, y, metric=roc_auc)
- f(y_true: ndarray | Tensor, y_score: ndarray | Tensor) → ndarray | Tensor[source]#
Compute the ROC AUC score between y_true and y_score. Note that ROC AUC is always computed over the final dimension.
- Parameters:
- y_true : Union[np.ndarray, torch.Tensor]
Vector/Matrix/Tensor
- y_score : Union[np.ndarray, torch.Tensor]
Vector/Matrix/Tensor
- Returns:
- Union[np.ndarray, torch.Tensor]
ROC AUC score
Notes
ROC AUC is computed using the Mann-Whitney U formula:
\[\text{ROC AUC}(y, \hat{y}) = \frac{R_{+} - \frac{P(P + 1)}{2}}{P \cdot N}\]

where \(R_{+}\) is the sum of ranks for positive classes, \(P\) is the number of positive samples, and \(N\) is the number of negative samples.
In the case that labels are not binary, we create unique binary labels by one-hot encoding the labels and then take the macro-average over classes.
Examples
>>> import torch
>>> from mvpy.math import roc_auc
>>> y_true = torch.tensor([1., 0.])
>>> y_score = torch.tensor([-1., 1.])
>>> roc_auc(y_true, y_score)
tensor(0.)

>>> import torch
>>> from mvpy.math import roc_auc
>>> y_true = torch.tensor([0., 1., 2.])
>>> y_score = torch.tensor([[-1., 1., 1.], [1., -1., 1.], [1., 1., -1.]])
>>> roc_auc(y_true, y_score)
tensor(0.)
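For reference, the binary Mann-Whitney computation from the Notes can be sketched as follows (roc_auc_manual is a hypothetical helper; unlike mvpy.math.roc_auc(), this sketch does not average tied scores and omits the multiclass macro-averaging described above):

import torch

def roc_auc_manual(y_true: torch.Tensor, y_score: torch.Tensor) -> torch.Tensor:
    # 1-based ranks of the scores (ties not averaged in this sketch).
    ranks = y_score.argsort().argsort().float() + 1.0
    pos = y_true == 1
    p = pos.sum().float()     # number of positive samples
    n = (~pos).sum().float()  # number of negative samples
    r_pos = ranks[pos].sum()  # sum of ranks of the positive class
    return (r_pos - p * (p + 1.0) / 2.0) / (p * n)

roc_auc_manual(torch.tensor([1., 0.]), torch.tensor([-1., 1.]))  # tensor(0.)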
- name: str = 'roc_auc'#
- reduce: int | Tuple[int] = (0,)#
- request: str | Tuple[str] = ('y', 'decision_function')#
mvpy.metrics.score module#
Scoring utilities for applying metrics to fitted models.
- mvpy.metrics.score.score(model: Pipeline | BaseEstimator, metric: Metric | Tuple[Metric], X: ndarray | Tensor, y: ndarray | Tensor | None = None) List[ndarray | Tensor][source]#
Score a fitted model or pipeline on X (and, optionally, y) using one or more metrics, returning one result per metric.
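Given the signature above, usage presumably looks like the sketch below; the dataset helper and estimator are borrowed from the other examples on this page, and the sklearn-style fit() call is an assumption:

from mvpy.dataset import make_meeg_categorical
from mvpy.estimators import RidgeClassifier
from mvpy.metrics import accuracy, roc_auc
from mvpy.metrics.score import score

# Fit a model, then evaluate it with several metrics at once;
# score() should return one result per metric passed.
X, y = make_meeg_categorical()
clf = RidgeClassifier().fit(X, y)
acc, auc = score(clf, (accuracy, roc_auc), X, y)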
mvpy.metrics.spearmanr module#
- class mvpy.metrics.spearmanr.Spearmanr(name: str = 'spearmanr', request: str | ~typing.Tuple[str] = ('y', 'predict'), reduce: int | ~typing.Tuple[int] = (0, ), f: ~typing.Callable = <function spearmanr>)[source]#
Bases: Metric

Implements mvpy.math.spearmanr() as a Metric.

Warning

This class extends Metric. If you would like to apply this metric, please use the instance exposed under mvpy.metrics.spearmanr. For more information on this, please consult the documentation of Metric and score().

- Parameters:
- name : str, default='spearmanr'
The name of this metric.
- request : str | tuple[str], default=('y', 'predict')
The values to request for scoring.
- reduce : int | tuple[int], default=(0,)
The dimension(s) to reduce over.
- f : Callable, default=mvpy.math.spearmanr
The function to call.
Examples
>>> import torch
>>> from mvpy.dataset import make_meeg_continuous
>>> from mvpy.estimators import RidgeDecoder
>>> from mvpy.crossvalidation import cross_val_score
>>> from mvpy.metrics import spearmanr
>>> X, y = make_meeg_continuous()
>>> clf = RidgeDecoder()
>>> cross_val_score(clf, X, y, metric=spearmanr)
- f(x: ndarray | Tensor, y: ndarray | Tensor, *args: Any) → ndarray | Tensor[source]#
Compute Spearman correlations between x and y. Note that correlations are always computed over the final dimension of your inputs.
- Parameters:
- x : Union[np.ndarray, torch.Tensor]
Matrix to compute correlation with.
- y : Union[np.ndarray, torch.Tensor]
Matrix to compute correlation with.
- Returns:
- Union[np.ndarray, torch.Tensor]
Spearman correlations.
Notes
Spearman correlations are defined as Pearson correlations between the ranks of x and y.
Examples
>>> import torch
>>> from mvpy.math import spearmanr
>>> x = torch.tensor([1, 5, 9])
>>> y = torch.tensor([1, 50, 60])
>>> spearmanr(x, y)
tensor(1., dtype=torch.float64)
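The Notes reduce Spearman to two steps, rank then Pearson, which the following sketch mirrors (spearmanr_manual and rank_simple are hypothetical helpers; mvpy's rank additionally averages ties):

import torch

def rank_simple(x: torch.Tensor) -> torch.Tensor:
    # 1-based ranks over the final dimension (no tie averaging here).
    return x.argsort(dim=-1).argsort(dim=-1).float() + 1.0

def spearmanr_manual(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    # Spearman's rho is the Pearson correlation of the ranks.
    rx, ry = rank_simple(x), rank_simple(y)
    rxc = rx - rx.mean(dim=-1, keepdim=True)
    ryc = ry - ry.mean(dim=-1, keepdim=True)
    return (rxc * ryc).sum(-1) / torch.sqrt((rxc ** 2).sum(-1) * (ryc ** 2).sum(-1))

spearmanr_manual(torch.tensor([1., 5., 9.]), torch.tensor([1., 50., 60.]))  # tensor(1.)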
- name: str = 'spearmanr'#
- reduce: int | Tuple[int] = (0,)#
- request: str | Tuple[str] = ('y', 'predict')#
Module contents#
Base metric class.