local_model_load

class aitoolbox.experiment.local_load.local_model_load.AbstractLocalModelLoader[source]

Bases: abc.ABC

abstract load_model(project_name, experiment_name, experiment_timestamp, model_save_dir, epoch_num=None, **kwargs)[source]

Model loading method that all the model loaders need to implement

Parameters
  • project_name (str) – root name of the project

  • experiment_name (str) – name of the particular experiment

  • experiment_timestamp (str) – time stamp at the start of training

  • model_save_dir (str) – name of the folder inside the experiment folder where the model is saved

  • epoch_num (int or None) – epoch number of the model checkpoint, or None if loading the final model

  • **kwargs – additional parameters for the specific framework's model loader

Returns

model
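
A minimal sketch of a custom loader implementing this interface. The checkpoint format and the file/folder naming used below are illustrative assumptions, not the exact AIToolbox conventions:

    import os
    import pickle

    from aitoolbox.experiment.local_load.local_model_load import AbstractLocalModelLoader


    class PickleLocalModelLoader(AbstractLocalModelLoader):
        """Hypothetical loader for models serialized with pickle"""

        def __init__(self, local_model_result_folder_path):
            self.local_model_result_folder_path = os.path.expanduser(local_model_result_folder_path)

        def load_model(self, project_name, experiment_name, experiment_timestamp,
                       model_save_dir, epoch_num=None, **kwargs):
            # Assumed naming scheme; adjust it to how the models were actually saved
            model_name = 'model_final.p' if epoch_num is None else f'model_E{epoch_num}.p'
            model_path = os.path.join(self.local_model_result_folder_path,
                                      project_name,
                                      f'{experiment_name}_{experiment_timestamp}',
                                      model_save_dir,
                                      model_name)

            with open(model_path, 'rb') as f:
                return pickle.load(f)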

class aitoolbox.experiment.local_load.local_model_load.PyTorchLocalModelLoader(local_model_result_folder_path)[source]

Bases: aitoolbox.experiment.local_load.local_model_load.AbstractLocalModelLoader

PyTorch saved model loader and initializer

Parameters

local_model_result_folder_path (str) – root local path where the project folder will be created

load_model(project_name, experiment_name, experiment_timestamp, model_save_dir='checkpoint_model', epoch_num=None, map_location=None)[source]

Model loading interface compatible with the experiment folder structure maintained by the AIToolbox TrainLoop

Parameters
  • project_name (str) – root name of the project

  • experiment_name (str) – name of the particular experiment

  • experiment_timestamp (str) – time stamp at the start of training

  • model_save_dir (str) – name of the folder inside the experiment folder where the model is saved

  • epoch_num (int or None) – epoch number of the model checkpoint, or None if loading the final model

  • map_location (str or None) – a function, torch.device, string or a dict specifying how to remap storage locations

Returns

model
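
A short usage sketch, assuming an experiment folder structure produced by the AIToolbox TrainLoop (the path, project name, experiment name and timestamp below are hypothetical):

    from aitoolbox.experiment.local_load.local_model_load import PyTorchLocalModelLoader

    model_loader = PyTorchLocalModelLoader(local_model_result_folder_path='~/training/model_results')

    # Load the checkpoint saved after epoch 5; omit epoch_num to load the final model instead
    model_representation = model_loader.load_model(
        project_name='my_project',                   # hypothetical project name
        experiment_name='my_experiment',              # hypothetical experiment name
        experiment_timestamp='2021-01-01_12-00-00',   # hypothetical timestamp recorded at training start
        model_save_dir='checkpoint_model',
        epoch_num=5,
        map_location='cpu'                            # remap storages to CPU when no GPU is available
    )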

load_model_from_path(model_path, map_location=None)[source]

General model loading when the AIToolbox TrainLoop experiment folder structure is not used

Parameters
  • model_path (str) – full path to the model

  • map_location (str or None) – a function, torch.device, string or a dict specifying how to remap storage locations

Returns

model
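
A sketch for the case where the model was saved outside the TrainLoop experiment folder structure (the checkpoint path is hypothetical):

    model_loader = PyTorchLocalModelLoader(local_model_result_folder_path='/tmp/model_results')

    model_representation = model_loader.load_model_from_path(
        model_path='/data/saved_models/model_checkpoint.pth',  # hypothetical full path to the saved model
        map_location='cpu'
    )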

check_if_model_loaded()[source]

init_model(model, used_data_parallel=False)[source]

Initialize provided PyTorch model with the loaded model weights

For this function to work, load_model() must first be called to read the model representation into memory.

Parameters
  • model (TTModel or nn.Module) – PyTorch model

  • used_data_parallel (bool) – if the saved model was wrapped in nn.DataParallel or was a normal model

Returns

PyTorch model
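
A sketch of re-initializing a freshly constructed model with the loaded weights, assuming load_model() or load_model_from_path() has already been called on the loader (MyModel is a hypothetical model class):

    model = MyModel()  # hypothetical TTModel / nn.Module definition

    model = model_loader.init_model(model, used_data_parallel=False)

    # If the model was trained and saved wrapped in nn.DataParallel, indicate it via the flag:
    # model = model_loader.init_model(model, used_data_parallel=True)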

init_optimizer(optimizer, device='cuda')[source]

Initialize the optimizer based on the saved model/optimizer checkpoint

Parameters
  • optimizer – PyTorch optimizer

  • device (str) – device id

Returns

PyTorch optimizer
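
A sketch of restoring the optimizer state from the same loaded checkpoint, again assuming load_model() has already been called:

    import torch

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    optimizer = model_loader.init_optimizer(optimizer, device='cuda')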

init_scheduler(scheduler_callbacks_list, ignore_saved=False, ignore_missing_saved=False)[source]

Initialize the list of schedulers based on the saved model/optimizer/scheduler checkpoint

Parameters
  • scheduler_callbacks_list (list) – list of scheduler callbacks

  • ignore_saved (bool) – if an exception should be raised when scheduler snapshots are found in the checkpoint but no schedulers are provided to this method

  • ignore_missing_saved (bool) – if an exception should be raised when schedulers are provided to this method but no saved scheduler snapshots can be found in the checkpoint

Returns

list of initialized scheduler callbacks

Return type

list
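
A sketch of restoring scheduler state into the scheduler callbacks used during training; the contents of the callback list are hypothetical and depend on the callbacks registered in the TrainLoop:

    # scheduler_cbs: hypothetical list of AIToolbox scheduler callbacks used during training
    scheduler_cbs = model_loader.init_scheduler(scheduler_cbs,
                                                ignore_saved=False,
                                                ignore_missing_saved=False)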

init_amp(amp_scaler)[source]

Initialize AMP GradScaler

Parameters

amp_scaler (torch.cuda.amp.GradScaler) – AMP GradScaler

Returns

initialized AMP GradScaler

Return type

torch.cuda.amp.GradScaler
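
A sketch of restoring the AMP GradScaler state from the loaded checkpoint, assuming mixed-precision training was used when the checkpoint was produced:

    import torch

    amp_scaler = torch.cuda.amp.GradScaler()
    amp_scaler = model_loader.init_amp(amp_scaler)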