callback_handler

class aitoolbox.torchtrain.train_loop.components.callback_handler.CallbacksHandler(train_loop_obj)[source]

Bases: object

Callback handler used for the callback orchestration inside the TrainLoop

This handler calls the registered callbacks' methods inside the TrainLoop at different stages of the training process, so that each callback's functionality is executed at the desired point of training.

At each TrainLoop stage, the CallbacksHandler executes only those callback methods which actually implement functionality for that stage. Callbacks whose respective methods are left as pass and are not overridden with any logic are skipped, so the CallbacksHandler doesn't unnecessarily execute callbacks at stages where they are not implemented.

Parameters:

train_loop_obj (aitoolbox.torchtrain.train_loop.train_loop.TrainLoop) – reference to the encapsulating TrainLoop
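The stage-dispatch behaviour described above can be sketched in simplified form. This is an illustrative example, not the actual aitoolbox implementation: the class and attribute names below (`SketchCallbacksHandler`, the override check) are assumptions made for the sketch, while the pattern of skipping non-overridden `pass` hooks follows the description above.

```python
class AbstractCallback:
    """Base callback: stage hooks default to no-op ``pass`` methods."""
    def on_epoch_begin(self):
        pass

    def on_epoch_end(self):
        pass


class SketchCallbacksHandler:
    """Simplified handler that only calls hooks a callback has overridden."""
    def __init__(self, train_loop_obj):
        self.train_loop_obj = train_loop_obj
        self.callbacks = []

    def execute_epoch_begin(self):
        for cb in self.callbacks:
            # Skip callbacks whose on_epoch_begin is still the base-class pass
            if type(cb).on_epoch_begin is not AbstractCallback.on_epoch_begin:
                cb.on_epoch_begin()
```

The override check means a callback pays no dispatch cost at stages it does not implement, which is the optimization the description above refers to.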

register_callbacks(callbacks, cache_callbacks=False, print_callbacks=False)[source]

Register the TrainLoop object reference inside the listed callbacks when the TrainLoop is created

Normally, this is called from inside the train loop by the TrainLoop itself; in effect, the train loop "registers" itself with each of the provided callbacks.

Newly provided callbacks are appended to the existing ones.

Parameters:
  • callbacks (list or None) – list of new callbacks to be added (appended)

  • cache_callbacks (bool) – should the provided callbacks be cached rather than registered immediately. The first subsequent time this method is called with cache_callbacks disabled, all previously cached callbacks are added to the current list of callbacks and registered.

  • print_callbacks (bool) – after registering the provided callbacks, also print the list of registered callbacks which will be executed during the run of the train loop

Returns:

None
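The caching behaviour of cache_callbacks can be sketched as follows. This is a simplified illustration under assumed attribute names (`callbacks`, `callbacks_cache`), not the library's actual code:

```python
class SketchHandler:
    """Simplified handler illustrating the cache_callbacks flow."""
    def __init__(self, train_loop_obj):
        self.train_loop_obj = train_loop_obj
        self.callbacks = []
        self.callbacks_cache = []

    def register_callbacks(self, callbacks, cache_callbacks=False):
        if cache_callbacks:
            # Hold the callbacks back; don't register them yet
            self.callbacks_cache += callbacks if callbacks is not None else []
            return
        # Flush the cache together with any newly provided callbacks
        callbacks = self.callbacks_cache + (callbacks if callbacks is not None else [])
        self.callbacks_cache = []
        for cb in callbacks:
            # "Register" the train loop reference inside each callback
            cb.train_loop_obj = self.train_loop_obj
        self.callbacks += callbacks
```

Caching is useful when callbacks are supplied before the TrainLoop is fully constructed; they are held back and only wired up on the first non-cached call.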

should_enable_callback(callback)[source]

Determine if the callback should be enabled and executed in accordance with the GPU device setting

Always True when training on a single device (CPU or one GPU).

In multi-(GPU)-device training such as DDP, this function checks whether a callback should be executed on the particular GPU device. If the callback doesn't have device_idx_execution set, then it is executed on all the GPUs. If the parameter is set in the callback, then this function returns True only when the callback's device_idx_execution matches the train loop's GPU device index. In other words, the callback is executed only in the DDP process which sits on the matching GPU.

Parameters:

callback (AbstractCallback) – callback which will be checked if it should be enabled during the particular train loop run

Returns:

True if the provided callback should be enabled, False if it should be disabled, based on (GPU) device index matching.

Return type:

bool
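The device-matching rule described above reduces to a short predicate. The sketch below is a hedged illustration: device_idx_execution comes from the description above, while the `gpu_device_idx` and `in_ddp` parameter names are assumptions made for this standalone example.

```python
def should_enable_callback(callback, gpu_device_idx, in_ddp):
    """Return True if the callback should run on this (GPU) device."""
    if not in_ddp:
        return True  # single CPU/GPU training: always enabled
    if callback.device_idx_execution is None:
        return True  # no restriction set: run in every DDP process
    # Restricted callback: run only in the DDP process on the matching GPU
    return callback.device_idx_execution == gpu_device_idx
```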

execute_epoch_begin()[source]
execute_epoch_end()[source]
execute_train_begin()[source]
execute_train_end()[source]
execute_batch_begin()[source]
execute_batch_end()[source]
execute_gradient_update(optimizer_idx=0)[source]
execute_optimizer_step()[source]
execute_multiprocess_start()[source]
execute_after_batch_prediction(y_pred_batch, y_test_batch, metadata_batch, dataset_info)[source]
split_on_execution_position(callbacks, register_train_loop=False)[source]
mp_filter_callbacks()[source]
enforce_callbacks_quality(callbacks)[source]
static print_callback_info(callback_list)[source]
print_registered_callback_names()[source]
__add__(other)[source]
Parameters:

other (list) – callbacks list

Return type:

CallbacksHandler

__iadd__(other)[source]
Parameters:

other (list) – callbacks list

Return type:

CallbacksHandler

__contains__(item)[source]
Parameters:

item

Return type:

bool
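The operator methods above can be illustrated with a simplified handler. This sketch assumes that + and += both append the given callbacks list and return the handler, and that membership checks match either a callback instance or its class; these are assumptions for illustration, not the library's confirmed semantics:

```python
class SketchHandler:
    """Simplified handler illustrating __add__, __iadd__ and __contains__."""
    def __init__(self):
        self.callbacks = []

    def __add__(self, other):
        # Assumed behaviour: appending `other` callbacks, returning the handler
        self.callbacks += other
        return self

    def __iadd__(self, other):
        self.callbacks += other
        return self

    def __contains__(self, item):
        # Assumed behaviour: match a callback instance or a callback class
        return any(cb is item or type(cb) is item for cb in self.callbacks)
```

Usage would then look like `handler += [MyCallback()]` followed by `MyCallback in handler`.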