CohenKappa#
- class ignite.contrib.metrics.CohenKappa(output_transform=<function CohenKappa.<lambda>>, weights=None, check_compute_fn=False, device=device(type='cpu'))[source]#
Compute different types of Cohen’s Kappa: Non-Weighted, Linear, Quadratic. Predictions and ground-truth labels are accumulated over an epoch and sklearn.metrics.cohen_kappa_score is applied to compute the metric.
- Parameters
output_transform (Callable) – a callable that is used to transform the Engine’s process_function’s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs (see the sketch after this parameter list).
weights (Optional[str]) – a string used to select the type of Cohen’s Kappa: None (Non-Weighted), "linear" or "quadratic". Default, None.
check_compute_fn (bool) – if True, cohen_kappa_score is run on the first batch of data to ensure there are no issues; the user is warned if any issues arise while computing the function. Default, False.
device (Union[str, torch.device]) – optional device specification for internal storage.
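For instance, if a custom process_function returns a dictionary with keys other than 'y_pred' and 'y', a small output_transform can pull out the prediction and target tensors before they reach the metric. The sketch below is illustrative only; the keys 'preds' and 'labels' are assumptions about such a custom output, not part of the ignite API.

from ignite.contrib.metrics import CohenKappa

# assumed engine output: {'preds': y_pred, 'labels': y, ...} (hypothetical keys)
ck = CohenKappa(
    output_transform=lambda out: (out['preds'], out['labels']),
    weights="linear",  # or None (default) / "quadratic"
)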
Examples
To use with Engine and process_function, simply attach the metric instance to the engine. The output of the engine’s process_function needs to be in the format of (y_pred, y) or {'y_pred': y_pred, 'y': y, ...}. If not, output_transform can be added to the metric to transform the output into the form expected by the metric.

from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.utils import *
from ignite.contrib.metrics.regression import *
from ignite.contrib.metrics import *

# create default evaluator for doctests
def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests
param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`
def get_default_trainer():

    def train_step(engine, batch):
        return batch

    return Engine(train_step)

# create default model for doctests
default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
metric = CohenKappa()
metric.attach(default_evaluator, 'ck')
y_true = torch.tensor([2, 0, 2, 2, 0, 1])
y_pred = torch.tensor([0, 0, 2, 2, 0, 2])
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics['ck'])
0.4285...
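The weights argument selects the weighting scheme passed to sklearn.metrics.cohen_kappa_score. The variant below reuses the doctest setup above with quadratic weighting; it is a sketch rather than a doctest, so no expected output is claimed.

metric = CohenKappa(weights="quadratic")
metric.attach(default_evaluator, 'qwk')
y_true = torch.tensor([2, 0, 2, 2, 0, 1])
y_pred = torch.tensor([0, 0, 2, 2, 0, 2])
state = default_evaluator.run([[y_pred, y_true]])
print(state.metrics['qwk'])  # quadratically weighted kappa for the same data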
Methods
get_cohen_kappa_fn() – Return a function computing Cohen Kappa from scikit-learn.
- get_cohen_kappa_fn()[source]#
Return a function computing Cohen Kappa from scikit-learn.
- Return type
Callable[[torch.Tensor, torch.Tensor], float]
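The returned callable is what the metric applies to the accumulated prediction and target tensors at the end of an epoch; per the docstrings above it computes Cohen’s Kappa via scikit-learn, presumably with the configured weights. A minimal sketch of calling it directly (Cohen’s kappa is symmetric in the two label sequences, so the argument order does not change the result):

import torch
from ignite.contrib.metrics import CohenKappa

ck_fn = CohenKappa(weights="linear").get_cohen_kappa_fn()
y_true = torch.tensor([2, 0, 2, 2, 0, 1])
y_pred = torch.tensor([0, 0, 2, 2, 0, 2])
score = ck_fn(y_pred, y_true)  # a plain float, matching the return type above
print(score)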