
ignite.contrib.handlers#

Contributed handlers module

class ignite.contrib.handlers.CosineAnnealingScheduler(optimizer, param_name, start_value, end_value, cycle_size, cycle_mult=1, save_history=False)[source]#

Anneals ‘start_value’ to ‘end_value’ over each cycle.

The annealing takes the form of the first half of a cosine wave (as suggested in [Smith17]).

Parameters
  • optimizer (torch.optim.Optimizer) – the optimizer to use

  • param_name (str) – name of optimizer’s parameter to update

  • start_value (float) – value at start of cycle

  • end_value (float) – value at the end of the cycle

  • cycle_size (int) – length of cycle.

  • cycle_mult (float, optional) – ratio by which to change the cycle_size at the end of each cycle (default: 1)

  • save_history (bool, optional) – whether to log the parameter values (default: False)

Note

If the scheduler is bound to an ‘ITERATION_*’ event, ‘cycle_size’ should usually be the number of batches in an epoch.

Examples:

from ignite.engine import Events
from ignite.contrib.handlers.param_scheduler import CosineAnnealingScheduler

scheduler = CosineAnnealingScheduler(optimizer, 'lr', 1e-1, 1e-3, len(train_loader))
trainer.add_event_handler(Events.ITERATION_COMPLETED, scheduler)
# Anneals the learning rate from 1e-1 to 1e-3 over the course of 1 epoch.
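
For reference, the annealed value follows the first half of a cosine wave. A minimal, self-contained sketch of the computation (the helper below is purely illustrative and not part of ignite; event_index stands for the number of handler calls so far, as in the built-in schedulers):

import math

def cosine_annealing_value(start_value, end_value, event_index, cycle_size):
    # Fraction of the current cycle that has elapsed, in [0, 1].
    cycle_progress = event_index / cycle_size
    # First half of a cosine wave: starts at start_value, ends at end_value.
    return start_value + (end_value - start_value) / 2 * (1 - math.cos(math.pi * cycle_progress))

# Midway through a cycle, the value is the midpoint of start and end:
assert abs(cosine_annealing_value(1e-1, 1e-3, 50, 100) - 0.0505) < 1e-6
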
Smith17

Smith, Leslie N. “Cyclical learning rates for training neural networks.” 2017 IEEE Winter Conference on Applications of Computer Vision (WACV). IEEE, 2017.

get_param()[source]#

Returns the current value of the optimizer’s parameter being scheduled.

class ignite.contrib.handlers.CyclicalScheduler(optimizer, param_name, start_value, end_value, cycle_size, cycle_mult=1, save_history=False)[source]#

An abstract class for updating an optimizer’s parameter value over a cycle of some size.

Parameters
  • optimizer (torch.optim.Optimizer) – the optimizer to use

  • param_name (str) – name of optimizer’s parameter to update

  • start_value (float) – value at start of cycle

  • end_value (float) – value at the middle of the cycle

  • cycle_size (int) – length of cycle.

  • cycle_mult (float, optional) – ratio by which to change the cycle_size at the end of each cycle (default: 1)

  • save_history (bool, optional) – whether to log the parameter values (default: False)

Note

If the scheduler is bound to an ‘ITERATION_*’ event, ‘cycle_size’ should usually be the number of batches in an epoch.
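
Concrete subclasses such as LinearCyclicalScheduler and CosineAnnealingScheduler only need to implement get_param. A minimal sketch of a custom subclass, assuming (as in the built-in subclasses) that the base class exposes the constructor arguments as attributes and counts handler calls in event_index:

from ignite.contrib.handlers.param_scheduler import CyclicalScheduler

class SawtoothScheduler(CyclicalScheduler):
    """Hypothetical subclass: ramps linearly from start_value to end_value,
    then jumps back at each cycle boundary."""

    def get_param(self):
        # event_index and cycle_size are assumed base-class attributes
        # tracking progress through the current cycle.
        cycle_progress = (self.event_index % self.cycle_size) / self.cycle_size
        return self.start_value + (self.end_value - self.start_value) * cycle_progress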

class ignite.contrib.handlers.LinearCyclicalScheduler(optimizer, param_name, start_value, end_value, cycle_size, cycle_mult=1, save_history=False)[source]#

Linearly adjusts param value to ‘end_value’ for a half-cycle, then linearly adjusts it back to ‘start_value’ for a half-cycle.

Parameters
  • optimizer (torch.optim.Optimizer) – the optimizer to use

  • param_name (str) – name of optimizer’s parameter to update

  • start_value (float) – value at start of cycle

  • end_value (float) – value at the middle of the cycle

  • cycle_size (int) – length of cycle.

  • cycle_mult (float, optional) – ratio by which to change the cycle_size at the end of each cycle (default: 1)

  • save_history (bool, optional) – whether to log the parameter values (default: False)

Note

If the scheduler is bound to an ‘ITERATION_*’ event, ‘cycle_size’ should usually be the number of batches in an epoch.

Examples:

from ignite.engine import Events
from ignite.contrib.handlers.param_scheduler import LinearCyclicalScheduler

scheduler = LinearCyclicalScheduler(optimizer, 'lr', 1e-3, 1e-1, len(train_loader))
trainer.add_event_handler(Events.ITERATION_COMPLETED, scheduler)
# Linearly increases the learning rate from 1e-3 to 1e-1 and back to 1e-3
# over the course of 1 epoch.
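
When save_history=True, each applied value is recorded on the engine’s state as training runs. A short sketch (the param_history attribute name reflects the scheduler’s logging behaviour and is an implementation detail, so treat it as an assumption):

scheduler = LinearCyclicalScheduler(optimizer, 'lr', 1e-3, 1e-1,
                                    len(train_loader), save_history=True)
trainer.add_event_handler(Events.ITERATION_COMPLETED, scheduler)

trainer.run(train_loader, max_epochs=1)

# Every value applied during the run, appended under the parameter's name.
lr_values = trainer.state.param_history['lr']
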
get_param()[source]#

Returns the current value of the optimizer’s parameter being scheduled.

class ignite.contrib.handlers.ParamScheduler(optimizer, param_name, save_history=False)[source]#

An abstract class for updating an optimizer’s parameter value during training.

Parameters
  • optimizer (torch.optim.Optimizer) – the optimizer to use

  • param_name (str) – name of optimizer’s parameter to update

  • save_history (bool, optional) – whether to log the parameter values (default: False)

get_param()[source]#

Returns the current value of the optimizer’s parameter being scheduled.
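
To schedule a parameter with a custom rule, subclass ParamScheduler and implement get_param. A minimal sketch, assuming (as in the cyclical schedulers above) that the base class counts handler invocations in an event_index attribute:

from ignite.contrib.handlers.param_scheduler import ParamScheduler

class StepDecayScheduler(ParamScheduler):
    """Hypothetical scheduler: halves the parameter every step_size events."""

    def __init__(self, optimizer, param_name, initial_value, step_size,
                 save_history=False):
        super(StepDecayScheduler, self).__init__(optimizer, param_name,
                                                 save_history=save_history)
        self.initial_value = initial_value
        self.step_size = step_size

    def get_param(self):
        # event_index is assumed to be maintained by the base class.
        return self.initial_value * 0.5 ** (self.event_index // self.step_size)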

class ignite.contrib.handlers.ProgressBar(persist=False)[source]#

TQDM progress bar handler to log training progress and computed metrics.

Parameters

persist (bool, optional) – set to True to persist the progress bar after completion (default: False)

Examples

Simple progress bar

from ignite.engine import create_supervised_trainer
from ignite.contrib.handlers import ProgressBar

trainer = create_supervised_trainer(model, optimizer, loss)

pbar = ProgressBar()
pbar.attach(trainer)

Attach metrics that have already been computed at ITERATION_COMPLETED (such as RunningAverage)

from ignite.metrics import RunningAverage

trainer = create_supervised_trainer(model, optimizer, loss)

RunningAverage(output_transform=lambda x: x).attach(trainer, 'loss')

pbar = ProgressBar()
pbar.attach(trainer, ['loss'])

Directly attach the engine’s output

trainer = create_supervised_trainer(model, optimizer, loss)

pbar = ProgressBar()
pbar.attach(trainer, output_transform=lambda x: {'loss': x})

Note

When attaching the progress bar to an engine, it is recommended that you replace every print operation in handlers triggered every iteration with pbar.log_message, to guarantee the correct format of stdout.
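
For example, a minimal sketch of logging from an epoch-level handler without breaking the bar (reusing the trainer from the examples above):

from ignite.engine import Events

pbar = ProgressBar()
pbar.attach(trainer)

@trainer.on(Events.EPOCH_COMPLETED)
def log_epoch_completed(engine):
    # Use log_message instead of print so the bar's output is not garbled.
    pbar.log_message("Epoch {} completed".format(engine.state.epoch))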

attach(engine, metric_names=None, output_transform=None)[source]#

Attaches the progress bar to an engine object

Parameters
  • engine (Engine) – engine object

  • metric_names (list, optional) – list of the metrics names to log as the bar progresses

  • output_transform (Callable, optional) – a function to select what you want to print from the engine’s output. The function may return either a dictionary of entries in the form {name: value}, or a single scalar, which will be displayed under the default name ‘output’.

static log_message(message)[source]#

Logs a message, preserving the progress bar’s correct output format.

Parameters

message (str) – string you wish to log