TopKCategoricalAccuracy#
- class ignite.metrics.TopKCategoricalAccuracy(k=5, output_transform=<function TopKCategoricalAccuracy.<lambda>>, device=device(type='cpu'))[source]#
Calculates the top-k categorical accuracy.
`update` must receive output of the form `(y_pred, y)` or `{'y_pred': y_pred, 'y': y}`.
- Parameters
k (int) – the k in “top-k”.
output_transform (Callable) – a callable that is used to transform the Engine's `process_function`'s output into the form expected by the metric. This can be useful if, for example, you have a multi-output model and you want to compute the metric with respect to one of the outputs. By default, metrics require the output as `(y_pred, y)` or `{'y_pred': y_pred, 'y': y}`.
device (Union[str, torch.device]) – specifies which device updates are accumulated on. Setting the metric's device to be the same as your `update` arguments ensures the `update` method is non-blocking. By default, CPU.
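As a quick illustration of the `device` parameter, the sketch below (a hedged example, assuming a CUDA-capable machine and falling back to CPU otherwise) accumulates updates on the same device as the update tensors:

```python
import torch
from ignite.metrics import TopKCategoricalAccuracy

# Pick the device the update tensors will live on, so that
# `update` can be non-blocking (falls back to CPU if no GPU).
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
metric = TopKCategoricalAccuracy(k=5, device=device)
```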
Examples
To use with `Engine` and `process_function`, simply attach the metric instance to the engine. The output of the engine's `process_function` needs to be in the format of `(y_pred, y)` or `{'y_pred': y_pred, 'y': y, ...}`. If not, `output_transform` can be added to the metric to transform the output into the form expected by the metric. For more information on how metrics work with `Engine`, visit Attach Engine API.

```python
from collections import OrderedDict

import torch
from torch import nn, optim

from ignite.engine import *
from ignite.handlers import *
from ignite.metrics import *
from ignite.utils import *
from ignite.contrib.metrics.regression import *
from ignite.contrib.metrics import *

# create default evaluator for doctests
def eval_step(engine, batch):
    return batch

default_evaluator = Engine(eval_step)

# create default optimizer for doctests
param_tensor = torch.zeros([1], requires_grad=True)
default_optimizer = torch.optim.SGD([param_tensor], lr=0.1)

# create default trainer for doctests
# as handlers could be attached to the trainer,
# each test must define its own trainer using `.. testsetup:`
def get_default_trainer():
    def train_step(engine, batch):
        return batch
    return Engine(train_step)

# create default model for doctests
default_model = nn.Sequential(OrderedDict([
    ('base', nn.Linear(4, 2)),
    ('fc', nn.Linear(2, 1))
]))

manual_seed(666)
```
```python
def process_function(engine, batch):
    y_pred, y = batch
    return y_pred, y

def one_hot_to_binary_output_transform(output):
    y_pred, y = output
    y = torch.argmax(y, dim=1)  # one-hot vector to label index vector
    return y_pred, y

engine = Engine(process_function)
metric = TopKCategoricalAccuracy(
    k=2, output_transform=one_hot_to_binary_output_transform)
metric.attach(engine, 'top_k_accuracy')

preds = torch.tensor([
    [0.7, 0.2, 0.05, 0.05],  # 1 is in the top 2
    [0.2, 0.3, 0.4, 0.1],    # 0 is not in the top 2
    [0.4, 0.4, 0.1, 0.1],    # 0 is in the top 2
    [0.7, 0.05, 0.2, 0.05]   # 2 is in the top 2
])
target = torch.tensor([  # targets as one-hot vectors
    [0, 1, 0, 0],
    [1, 0, 0, 0],
    [1, 0, 0, 0],
    [0, 0, 1, 0]
])

state = engine.run([[preds, target]])
print(state.metrics['top_k_accuracy'])
```

```
0.75
```
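The metric can also be driven by hand, without an `Engine`, via the `reset`/`update`/`compute` methods documented below. A minimal sketch reusing the predictions above, with targets already given as label indices rather than one-hot vectors:

```python
import torch
from ignite.metrics import TopKCategoricalAccuracy

metric = TopKCategoricalAccuracy(k=2)
metric.reset()

y_pred = torch.tensor([
    [0.7, 0.2, 0.05, 0.05],
    [0.2, 0.3, 0.4, 0.1],
    [0.4, 0.4, 0.1, 0.1],
    [0.7, 0.05, 0.2, 0.05]
])
y = torch.tensor([1, 0, 0, 2])  # label indices, not one-hot

metric.update((y_pred, y))
print(metric.compute())  # 0.75
```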
Methods
- compute – Computes the metric based on its accumulated state.
- reset – Resets the metric to its initial state.
- update – Updates the metric's state using the passed batch output.
- compute()[source]#
Computes the metric based on its accumulated state.
By default, this is called at the end of each epoch.
- Returns
- the actual quantity of interest. However, if a `Mapping` is returned, it will be (shallow) flattened into `engine.state.metrics` when `completed()` is called.
- Return type
Any
- Raises
NotComputableError – raised when the metric cannot be computed.
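For instance, calling `compute()` before any call to `update()` raises this error; a small hedged sketch:

```python
from ignite.exceptions import NotComputableError
from ignite.metrics import TopKCategoricalAccuracy

metric = TopKCategoricalAccuracy(k=5)
try:
    metric.compute()  # no updates have been applied yet
except NotComputableError as e:
    print(e)
```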
- reset()[source]#
Resets the metric to its initial state.
By default, this is called at the start of each epoch.
- Return type
None
- update(output)[source]#
Updates the metric’s state using the passed batch output.
By default, this is called once for each batch.
- Parameters
output (Sequence[torch.Tensor]) – the output from the engine's process function.
- Return type
None
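Conceptually, each update checks whether the target's label index appears among the k highest-scoring predictions. The sketch below is a rough standalone equivalent using `torch.topk`, for illustration only, not the library's actual implementation:

```python
import torch

def topk_accuracy(y_pred: torch.Tensor, y: torch.Tensor, k: int) -> float:
    """Fraction of rows whose target index is among the k largest scores."""
    # indices of the k largest scores per row, shape (N, k)
    topk_indices = torch.topk(y_pred, k, dim=1).indices
    # a row counts as correct if its target index appears among them
    correct = (topk_indices == y.unsqueeze(1)).any(dim=1)
    return correct.float().mean().item()

# matches the engine example above: 3 of 4 targets are in the top 2
print(topk_accuracy(
    torch.tensor([[0.7, 0.2, 0.05, 0.05],
                  [0.2, 0.3, 0.4, 0.1],
                  [0.4, 0.4, 0.1, 0.1],
                  [0.7, 0.05, 0.2, 0.05]]),
    torch.tensor([1, 0, 0, 2]),
    k=2,
))  # 0.75
```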