visdom_logger
Visdom logger and its helper handlers.
Classes

| Class | Description |
| --- | --- |
| GradsScalarHandler | Helper handler to log model's gradients as scalars. |
| OptimizerParamsHandler | Helper handler to log optimizer parameters. |
| OutputHandler | Helper handler to log engine's output and/or metrics. |
| VisdomLogger | VisdomLogger handler to log metrics, model/optimizer parameters, gradients during the training and validation. |
| WeightsScalarHandler | Helper handler to log model's weights as scalars. |
- class ignite.contrib.handlers.visdom_logger.GradsScalarHandler(model, reduction=<function norm>, tag=None, show_legend=False)
- Helper handler to log model's gradients as scalars. Handler iterates over the gradients of named parameters of the model, applies a reduction function to each gradient to produce a scalar, and then logs the scalar.

  Parameters

- model (nn.Module) – model whose gradients are to be logged
- reduction (Callable) – function to reduce a gradient tensor into a scalar. Default, torch.norm.
- tag (Optional[str]) – common title for all produced plots. For example, "generator"
- show_legend (bool) – flag to show legend in the window
- Examples

```python
from ignite.contrib.handlers.visdom_logger import *

# Create a logger
vd_logger = VisdomLogger()

# Attach the logger to the trainer to log model's gradients norm after each iteration
vd_logger.attach(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    log_handler=GradsScalarHandler(model, reduction=torch.norm)
)
```
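reduction can be any callable that maps a tensor to a scalar. A minimal sketch, assuming the same trainer and model as above; the mean_abs helper is hypothetical, not part of ignite:

```python
import torch

# Hypothetical reduction: mean absolute value of each gradient tensor
def mean_abs(grad: torch.Tensor) -> torch.Tensor:
    return grad.abs().mean()

vd_logger.attach(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    log_handler=GradsScalarHandler(model, reduction=mean_abs, tag="grads")
)
```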
- class ignite.contrib.handlers.visdom_logger.OptimizerParamsHandler(optimizer, param_name='lr', tag=None, show_legend=False)
- Helper handler to log optimizer parameters.

  Parameters

- optimizer (Optimizer) – torch optimizer or any object with attribute param_groups as a sequence
- param_name (str) – parameter name
- tag (Optional[str]) – common title for all produced plots. For example, "generator"
- show_legend (bool) – flag to show legend in the window
- Examples

```python
from ignite.contrib.handlers.visdom_logger import *

# Create a logger
vd_logger = VisdomLogger()

# Attach the logger to the trainer to log optimizer's parameters, e.g. learning rate at each iteration
vd_logger.attach(
    trainer,
    log_handler=OptimizerParamsHandler(optimizer),
    event_name=Events.ITERATION_STARTED
)

# or equivalently
vd_logger.attach_opt_params_handler(
    trainer,
    event_name=Events.ITERATION_STARTED,
    optimizer=optimizer
)
```
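param_name can be any key present in the optimizer's param_groups. A minimal sketch, assuming an optimizer such as torch.optim.SGD whose param_groups carry a "momentum" key:

```python
# Sketch: log the optimizer's momentum instead of the learning rate
vd_logger.attach_opt_params_handler(
    trainer,
    event_name=Events.ITERATION_STARTED,
    optimizer=optimizer,
    param_name="momentum"
)
```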
- class ignite.contrib.handlers.visdom_logger.OutputHandler(tag, metric_names=None, output_transform=None, global_step_transform=None, show_legend=False, state_attributes=None)
- Helper handler to log engine's output and/or metrics.

  Parameters
- tag (str) – common title for all produced plots. For example, “training” 
- metric_names (Optional[List[str]]) – list of metric names to plot, or a string "all" to plot all available metrics.
- output_transform (Optional[Callable]) – output transform function to prepare engine.state.output as a number. For example, output_transform = lambda output: output. This function can also return a dictionary, e.g. {"loss": loss1, "another_loss": loss2}, to label the plot with corresponding keys.
- global_step_transform (Optional[Callable]) – global step transform function to output a desired global step. Input of the function is (engine, event_name). Output of the function should be an integer. Default is None, in which case the global step is based on the attached engine. If provided, the function's output is used as the global step. To setup the global step from another engine, please use global_step_from_engine().
- show_legend (bool) – flag to show legend in the window 
- state_attributes (Optional[List[str]]) – list of attributes of the trainer.state to plot.
 
- Examples

```python
from ignite.contrib.handlers.visdom_logger import *

# Create a logger
vd_logger = VisdomLogger()

# Attach the logger to the evaluator on the validation dataset and log NLL, Accuracy metrics after
# each epoch. We setup `global_step_transform=global_step_from_engine(trainer)` to take the epoch
# of the `trainer`:
vd_logger.attach(
    evaluator,
    log_handler=OutputHandler(
        tag="validation",
        metric_names=["nll", "accuracy"],
        global_step_transform=global_step_from_engine(trainer)
    ),
    event_name=Events.EPOCH_COMPLETED
)

# or equivalently
vd_logger.attach_output_handler(
    evaluator,
    event_name=Events.EPOCH_COMPLETED,
    tag="validation",
    metric_names=["nll", "accuracy"],
    global_step_transform=global_step_from_engine(trainer)
)
```

Another example, where the model is evaluated every 500 iterations:

```python
from ignite.contrib.handlers.visdom_logger import *

@trainer.on(Events.ITERATION_COMPLETED(every=500))
def evaluate(engine):
    evaluator.run(validation_set, max_epochs=1)

vd_logger = VisdomLogger()

def global_step_transform(*args, **kwargs):
    return trainer.state.iteration

# Attach the logger to the evaluator on the validation dataset and log NLL, Accuracy metrics after
# every 500 iterations. Since the evaluator engine does not have access to the training iteration, we
# provide a global_step_transform to return the trainer.state.iteration for the global_step, each time
# evaluator metrics are plotted on Visdom.
vd_logger.attach_output_handler(
    evaluator,
    event_name=Events.EPOCH_COMPLETED,
    tag="validation",
    metric_names=["nll", "accuracy"],
    global_step_transform=global_step_transform
)
```

Another example, where the state attributes trainer.state.alpha and trainer.state.beta are also logged along with the NLL and Accuracy after each iteration:

```python
vd_logger.attach(
    trainer,
    log_handler=OutputHandler(
        tag="training",
        metric_names=["nll", "accuracy"],
        state_attributes=["alpha", "beta"],
    ),
    event_name=Events.ITERATION_COMPLETED
)
```

Example of global_step_transform:

```python
def global_step_transform(engine, event_name):
    return engine.state.get_event_attrib_value(event_name)
```
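When the engine's raw output is to be logged, output_transform selects and labels it. A minimal sketch, assuming a trainer whose update function returns a dict such as {"loss": loss_value}; the "batch_loss" label is an assumption for illustration:

```python
# Sketch: log the trainer's raw output at each iteration
vd_logger.attach(
    trainer,
    log_handler=OutputHandler(
        tag="training",
        output_transform=lambda output: {"batch_loss": output["loss"]}
    ),
    event_name=Events.ITERATION_COMPLETED
)
```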
- class ignite.contrib.handlers.visdom_logger.VisdomLogger(server=None, port=None, num_workers=1, raise_exceptions=True, **kwargs)
- VisdomLogger handler to log metrics, model/optimizer parameters, gradients during the training and validation.

  This class requires the visdom package to be installed:

```
pip install git+https://github.com/fossasia/visdom.git
```

  Parameters
- server (Optional[str]) – visdom server URL. It can also be specified by the environment variable VISDOM_SERVER_URL.
- port (Optional[int]) – visdom server's port. It can also be specified by the environment variable VISDOM_PORT.
- num_workers (int) – number of workers to use in concurrent.futures.ThreadPoolExecutor to post data to the visdom server. Default, num_workers=1. If num_workers=0, the logger uses the main thread. If using Python 2.7 and num_workers>0, the package futures should be installed: pip install futures.
- raise_exceptions (bool) – if True, raises an exception if the connection to the visdom server fails. Default, True.
- kwargs (Any) – kwargs to pass into visdom.Visdom.
 
- Note

  We can also specify username/password using environment variables: VISDOM_USERNAME, VISDOM_PASSWORD.

- Warning

  Frequent logging, e.g. when the logger is attached to Events.ITERATION_COMPLETED, can slow down the run if the main thread is used to send the data to the visdom server (num_workers=0). To avoid this situation we can either log less frequently or set num_workers=1.

- Examples

```python
from ignite.contrib.handlers.visdom_logger import *

# Create a logger
vd_logger = VisdomLogger()

# Attach the logger to the trainer to log training loss at each iteration
vd_logger.attach_output_handler(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    tag="training",
    output_transform=lambda loss: {"loss": loss}
)

# Attach the logger to the evaluator on the training dataset and log NLL, Accuracy metrics after each epoch
# We setup `global_step_transform=global_step_from_engine(trainer)` to take the epoch
# of the `trainer` instead of `train_evaluator`.
vd_logger.attach_output_handler(
    train_evaluator,
    event_name=Events.EPOCH_COMPLETED,
    tag="training",
    metric_names=["nll", "accuracy"],
    global_step_transform=global_step_from_engine(trainer),
)

# Attach the logger to the evaluator on the validation dataset and log NLL, Accuracy metrics after
# each epoch. We setup `global_step_transform=global_step_from_engine(trainer)` to take the epoch of the
# `trainer` instead of `evaluator`.
vd_logger.attach_output_handler(
    evaluator,
    event_name=Events.EPOCH_COMPLETED,
    tag="validation",
    metric_names=["nll", "accuracy"],
    global_step_transform=global_step_from_engine(trainer),
)

# Attach the logger to the trainer to log optimizer's parameters, e.g. learning rate at each iteration
vd_logger.attach_opt_params_handler(
    trainer,
    event_name=Events.ITERATION_STARTED,
    optimizer=optimizer,
    param_name='lr'  # optional
)

# Attach the logger to the trainer to log model's weights norm after each iteration
vd_logger.attach(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    log_handler=WeightsScalarHandler(model)
)

# Attach the logger to the trainer to log model's gradients norm after each iteration
vd_logger.attach(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    log_handler=GradsScalarHandler(model)
)

# We need to close the logger when we are done
vd_logger.close()
```

It is also possible to use the logger as a context manager:

```python
from ignite.contrib.handlers.visdom_logger import *

with VisdomLogger() as vd_logger:
    trainer = Engine(update_fn)
    # Attach the logger to the trainer to log training loss at each iteration
    vd_logger.attach_output_handler(
        trainer,
        event_name=Events.ITERATION_COMPLETED,
        tag="training",
        output_transform=lambda loss: {"loss": loss}
    )
```

Changed in version 0.5.0: accepts an optional list of state_attributes.
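The server and port arguments mirror the environment variables above. A minimal sketch, assuming a visdom server running at its defaults (http://localhost on port 8097):

```python
# Sketch: connect to an explicit visdom server; num_workers=1 posts data
# from a background thread so logging does not block the training loop
vd_logger = VisdomLogger(server="http://localhost", port=8097, num_workers=1)
```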
- class ignite.contrib.handlers.visdom_logger.WeightsScalarHandler(model, reduction=<function norm>, tag=None, show_legend=False)
- Helper handler to log model's weights as scalars. Handler iterates over named parameters of the model, applies a reduction function to each parameter to produce a scalar, and then logs the scalar.

  Parameters

- model (nn.Module) – model whose weights are to be logged
- reduction (Callable) – function to reduce a parameter tensor into a scalar. Default, torch.norm.
- tag (Optional[str]) – common title for all produced plots. For example, "generator"
- show_legend (bool) – flag to show legend in the window
- Examples

```python
from ignite.contrib.handlers.visdom_logger import *

# Create a logger
vd_logger = VisdomLogger()

# Attach the logger to the trainer to log model's weights norm after each iteration
vd_logger.attach(
    trainer,
    event_name=Events.ITERATION_COMPLETED,
    log_handler=WeightsScalarHandler(model, reduction=torch.norm)
)
```
- ignite.contrib.handlers.visdom_logger.global_step_from_engine(engine, custom_event_name=None)
- Helper method to setup a global_step_transform function using another engine. This can be helpful for logging the trainer epoch/iteration while the output handler is attached to an evaluator.

  Parameters
- engine (ignite.engine.engine.Engine) – engine whose state is used to provide the global step
- custom_event_name (Optional[ignite.engine.events.Events]) – registered event name to use in place of the event name passed to the transform at call time. Optional argument.
 
- Returns
- global step based on provided engine 
- Return type
- Callable
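A short sketch of the intended usage, assuming existing trainer and evaluator engines: the returned callable is handed to an output handler as global_step_transform, and custom_event_name can pin the step to a specific attribute, e.g. the trainer's epoch:

```python
from ignite.contrib.handlers.visdom_logger import *

# Report the trainer's epoch as the global step, regardless of which
# event triggers the attached handler
step_transform = global_step_from_engine(trainer, custom_event_name=Events.EPOCH_COMPLETED)

vd_logger = VisdomLogger()
vd_logger.attach_output_handler(
    evaluator,
    event_name=Events.EPOCH_COMPLETED,
    tag="validation",
    metric_names=["nll", "accuracy"],
    global_step_transform=step_transform
)
```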