ignite.contrib.handlers#
Contribution module with additional handlers.
- class ignite.contrib.handlers.CosineAnnealingScheduler(optimizer, param_name, start_value, end_value, cycle_size, cycle_mult=1, save_history=False)[source]#
Anneals ‘start_value’ to ‘end_value’ over each cycle.
The annealing takes the form of the first half of a cosine wave (as suggested in [Smith17]).
- Parameters
optimizer (torch.optim.Optimizer or dict) – the optimizer or parameters group to use
param_name (str) – name of optimizer’s parameter to update
start_value (float) – value at start of cycle
end_value (float) – value at the end of the cycle
cycle_size (int) – length of cycle.
cycle_mult (float, optional) – ratio by which to change the cycle_size at the end of each cycle (default=1)
save_history (bool, optional) – whether to log the parameter values (default: False)
Note
If the scheduler is bound to an ‘ITERATION_*’ event, ‘cycle_size’ should usually be the number of batches in an epoch.
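Concretely, with p the fraction of the current cycle elapsed, the annealed value follows the first half of a cosine wave. A minimal sketch of the rule (an illustration, not the library's exact code):

from math import cos, pi

def cosine_annealing(start_value, end_value, p):
    # p in [0, 1]: p=0 gives start_value, p=1 gives end_value.
    return start_value + (end_value - start_value) / 2 * (1 - cos(pi * p))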
Examples:
from ignite.contrib.handlers.param_scheduler import CosineAnnealingScheduler

scheduler = CosineAnnealingScheduler(optimizer, 'lr', 1e-1, 1e-3, len(train_loader))
trainer.add_event_handler(Events.ITERATION_COMPLETED, scheduler)
# Anneals the learning rate from 1e-1 to 1e-3 over the course of 1 epoch.
from ignite.contrib.handlers.param_scheduler import CosineAnnealingScheduler
from ignite.contrib.handlers.param_scheduler import LinearCyclicalScheduler

optimizer = SGD(
    [
        {"params": model.base.parameters(), "lr": 0.001},
        {"params": model.fc.parameters(), "lr": 0.01},
    ]
)

scheduler1 = LinearCyclicalScheduler(optimizer.param_groups[0], 'lr', 1e-7, 1e-5, len(train_loader))
trainer.add_event_handler(Events.ITERATION_COMPLETED, scheduler1, "lr (base)")

scheduler2 = CosineAnnealingScheduler(optimizer.param_groups[1], 'lr', 1e-5, 1e-3, len(train_loader))
trainer.add_event_handler(Events.ITERATION_COMPLETED, scheduler2, "lr (fc)")
- Smith17
Smith, Leslie N. “Cyclical learning rates for training neural networks.” Applications of Computer Vision (WACV), 2017 IEEE Winter Conference on. IEEE, 2017.
- class ignite.contrib.handlers.CyclicalScheduler(optimizer, param_name, start_value, end_value, cycle_size, cycle_mult=1, save_history=False)[source]#
An abstract class for updating an optimizer’s parameter value over a cycle of some size.
- Parameters
optimizer (torch.optim.Optimizer or dict) – the optimizer or parameters group to use
param_name (str) – name of optimizer’s parameter to update
start_value (float) – value at start of cycle
end_value (float) – value at the middle of the cycle
cycle_size (int) – length of cycle.
cycle_mult (float, optional) – ratio by which to change the cycle_size at the end of each cycle (default=1)
save_history (bool, optional) – whether to log the parameter values (default: False)
Note
If the scheduler is bound to an ‘ITERATION_*’ event, ‘cycle_size’ should usually be the number of batches in an epoch.
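Concrete schedulers subclass this and supply only the rule mapping cycle progress to a parameter value; the real subclasses are CosineAnnealingScheduler and LinearCyclicalScheduler. A hypothetical sketch of such a subclass (get_param, event_index, and cycle_size are assumptions about the internal API, shown for illustration only):

from ignite.contrib.handlers.param_scheduler import CyclicalScheduler

class ExponentialCyclicalScheduler(CyclicalScheduler):
    # Hypothetical subclass: decays geometrically from start_value to
    # end_value over each cycle. The get_param hook and the
    # event_index/cycle_size attributes are assumed internals.
    def get_param(self):
        cycle_progress = self.event_index / self.cycle_size
        return self.start_value * (self.end_value / self.start_value) ** cycle_progress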
- class ignite.contrib.handlers.LinearCyclicalScheduler(optimizer, param_name, start_value, end_value, cycle_size, cycle_mult=1, save_history=False)[source]#
Linearly adjusts param value to ‘end_value’ for a half-cycle, then linearly adjusts it back to ‘start_value’ for a half-cycle.
- Parameters
optimizer (torch.optim.Optimizer or dict) – the optimizer or parameters group to use
param_name (str) – name of optimizer’s parameter to update
start_value (float) – value at start of cycle
end_value (float) – value at the middle of the cycle
cycle_size (int) – length of cycle.
cycle_mult (float, optional) – ratio by which to change the cycle_size at the end of each cycle (default=1)
save_history (bool, optional) – whether to log the parameter values (default: False)
Note
If the scheduler is bound to an ‘ITERATION_*’ event, ‘cycle_size’ should usually be the number of batches in an epoch.
Examples:
from ignite.contrib.handlers.param_scheduler import LinearCyclicalScheduler

scheduler = LinearCyclicalScheduler(optimizer, 'lr', 1e-3, 1e-1, len(train_loader))
trainer.add_event_handler(Events.ITERATION_COMPLETED, scheduler)
# Linearly increases the learning rate from 1e-3 to 1e-1 and back to 1e-3
# over the course of 1 epoch.
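The resulting schedule is a triangular wave. As a sketch of the rule (illustrative, not the library's exact code), with p the fraction of the cycle elapsed:

def linear_cyclical(start_value, end_value, p):
    # p in [0, 1]: the value moves linearly start -> end over the first
    # half of the cycle, then back end -> start over the second half.
    return end_value + (start_value - end_value) * abs(p - 0.5) * 2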
- class ignite.contrib.handlers.ParamScheduler(optimizer, param_name, save_history=False)[source]#
An abstract class for updating an optimizer’s parameter value during training.
- Parameters
optimizer (torch.optim.Optimizer or dict) – the optimizer or parameters group to use
param_name (str) – name of optimizer’s parameter to update
save_history (bool, optional) – whether to log the parameter values (default: False)
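Subclasses implement only the value computation; applying the value to the optimizer and optional history logging are handled by the base class. A hypothetical minimal subclass (the get_param hook mirrors the concrete schedulers in this module, but treat the exact internal API as an assumption):

from ignite.contrib.handlers.param_scheduler import ParamScheduler

class StepDecayScheduler(ParamScheduler):
    # Hypothetical example: halve the parameter every `step_size` events.
    # get_param and event_index are assumptions about the internal API.
    def __init__(self, optimizer, param_name, initial_value, step_size, save_history=False):
        super(StepDecayScheduler, self).__init__(optimizer, param_name, save_history=save_history)
        self.initial_value = initial_value
        self.step_size = step_size

    def get_param(self):
        return self.initial_value * 0.5 ** (self.event_index // self.step_size)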
- class ignite.contrib.handlers.ProgressBar(persist=False, bar_format='{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]', **tqdm_kwargs)[source]#
TQDM progress bar handler to log training progress and computed metrics.
- Parameters
persist (bool, optional) – set to True to persist the progress bar after completion (default: False)
bar_format (str, optional) – specify a custom bar string formatting. May impact performance. [default: '{desc}[{n_fmt}/{total_fmt}] {percentage:3.0f}%|{bar}{postfix} [{elapsed}<{remaining}]']. Set to None to use the tqdm default bar formatting: '{l_bar}{bar}{r_bar}', where l_bar='{desc}: {percentage:3.0f}%|' and r_bar='| {n_fmt}/{total_fmt} [{elapsed}<{remaining}, {rate_fmt}{postfix}]'. Possible vars: l_bar, bar, r_bar, n, n_fmt, total, total_fmt, percentage, rate, rate_fmt, rate_noinv, rate_noinv_fmt, rate_inv, rate_inv_fmt, elapsed, remaining, desc, postfix. Note that a trailing “: ” is automatically removed after {desc} if the latter is empty.
**tqdm_kwargs – kwargs passed to the tqdm progress bar
Examples
Simple progress bar
trainer = create_supervised_trainer(model, optimizer, loss)

pbar = ProgressBar()
pbar.attach(trainer)
Attach metrics that have already been computed at ITERATION_COMPLETED (such as RunningAverage)
trainer = create_supervised_trainer(model, optimizer, loss)

RunningAverage(output_transform=lambda x: x).attach(trainer, 'loss')

pbar = ProgressBar()
pbar.attach(trainer, ['loss'])
Directly attach the engine’s output
trainer = create_supervised_trainer(model, optimizer, loss)

pbar = ProgressBar()
pbar.attach(trainer, output_transform=lambda x: {'loss': x})
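Customize the bar via **tqdm_kwargs, which are forwarded to the underlying tqdm bar (a sketch, assuming the kwargs are passed through unchanged; ncols is a standard tqdm keyword)

trainer = create_supervised_trainer(model, optimizer, loss)

pbar = ProgressBar(persist=True, ncols=80)  # fixed-width, persistent bar
pbar.attach(trainer)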
Note
When attaching the progress bar to an engine, it is recommended that you replace every print operation in handlers that are triggered every iteration with pbar.log_message to guarantee correctly formatted stdout.
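For example (the message text and handler are illustrative):

@trainer.on(Events.EPOCH_COMPLETED)
def log_epoch(engine):
    # Printed above the bar without breaking its rendering.
    pbar.log_message("Epoch {} done.".format(engine.state.epoch))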
- attach(engine, metric_names=None, output_transform=None)[source]#
Attaches the progress bar to an engine object
- Parameters
engine (Engine) – engine object
metric_names (list, optional) – list of the metrics names to log as the bar progresses
output_transform (Callable, optional) – a function to select what you want to print from the engine’s output. This function may return either a dictionary with entries in the format of {name: value}, or a single scalar, which will be displayed with the default name output.
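For instance, returning a single scalar displays it under the default name output (a usage sketch):

pbar = ProgressBar()
# The scalar returned here appears in the bar as "output=...".
pbar.attach(trainer, output_transform=lambda x: x)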