roc_auc
- mvpy.math.roc_auc(y_true: ndarray | Tensor, y_score: ndarray | Tensor) → ndarray | Tensor
Compute ROC AUC score between y_true and y_score. Note that ROC AUC is always computed over the final dimension.
- Parameters:
- y_true : Union[np.ndarray, torch.Tensor]
True labels (vector, matrix, or tensor).
- y_score : Union[np.ndarray, torch.Tensor]
Predicted scores (vector, matrix, or tensor).
- Returns:
- Union[np.ndarray, torch.Tensor]
ROC AUC score
Notes
ROC AUC is computed using the Mann-Whitney U formula:
\[\text{ROCAUC}(y, \hat{y}) = \frac{R_{+} - \frac{P (P + 1)}{2}}{P N}\]
where \(R_{+}\) is the sum of ranks of the positive samples, \(P\) is the number of positive samples, and \(N\) is the number of negative samples.
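As a minimal sketch of this rank-sum formula (illustrative only, not the library's implementation; binary labels and untied scores assumed):

import torch

def rank_sum_roc_auc(y_true: torch.Tensor, y_score: torch.Tensor) -> torch.Tensor:
    # 1-based ranks of the scores in ascending order (ties are not averaged here).
    order = torch.argsort(y_score)
    ranks = torch.empty_like(y_score)
    ranks[order] = torch.arange(1, y_score.numel() + 1, dtype=y_score.dtype)

    pos = y_true == 1
    P, N = pos.sum(), (~pos).sum()

    # Mann-Whitney U: rank sum of the positives minus its smallest possible value.
    r_pos = ranks[pos].sum()
    return (r_pos - P * (P + 1) / 2) / (P * N)

rank_sum_roc_auc(torch.tensor([1., 0.]), torch.tensor([-1., 1.]))  # tensor(0.), matching the first example below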
If the labels are not binary, we one-hot encode them into per-class binary labels, compute ROC AUC for each class, and take the macro average over classes.
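A hedged sketch of this multiclass strategy, binarising the labels one class at a time and macro-averaging per-class AUCs; the samples × classes orientation of y_score is an assumption, and the pairwise-comparison AUC used here is equivalent to the rank-sum formula:

import torch

def pairwise_auc(y: torch.Tensor, s: torch.Tensor) -> torch.Tensor:
    # Fraction of (positive, negative) pairs ranked correctly; ties count half.
    pos, neg = s[y == 1], s[y == 0]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).float() + 0.5 * (diff == 0).float()).mean()

def macro_roc_auc(y_true: torch.Tensor, y_score: torch.Tensor) -> torch.Tensor:
    # y_true: integer class labels; y_score: per-class scores (samples x classes assumed).
    per_class = [pairwise_auc((y_true == c).float(), y_score[:, c])
                 for c in range(y_score.shape[-1])]
    # Macro average: unweighted mean of the one-vs-rest AUCs.
    return torch.stack(per_class).mean()

macro_roc_auc(torch.tensor([0., 1., 2.]),
              torch.tensor([[-1., 1., 1.], [1., -1., 1.], [1., 1., -1.]]))  # tensor(0.), matching the second example below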
Examples
>>> import torch
>>> from mvpy.math import roc_auc
>>> y_true = torch.tensor([1., 0.])
>>> y_score = torch.tensor([-1., 1.])
>>> roc_auc(y_true, y_score)
tensor(0.)
>>> import torch
>>> from mvpy.math import roc_auc
>>> y_true = torch.tensor([0., 1., 2.])
>>> y_score = torch.tensor([[-1., 1., 1.], [1., -1., 1.], [1., 1., -1.]])
>>> roc_auc(y_true, y_score)
tensor(0.)
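A hypothetical batched call illustrating that ROC AUC is computed over the final dimension; the output shape is inferred from that description rather than verified:

import torch
from mvpy.math import roc_auc

y_true = torch.tensor([[1., 0., 1., 0.],
                       [0., 1., 0., 1.]])
y_score = torch.randn(2, 4)
auc = roc_auc(y_true, y_score)  # expected shape: (2,), one AUC per row (assumed from the note above)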