model_load

class aitoolbox.cloud.AWS.model_load.BaseModelLoader(local_model_loader, local_model_result_folder_path='~/project/model_result', bucket_name='model-result', cloud_dir_prefix='')[source]

Bases: BaseDataLoader

Base class for loading saved models from S3 storage

Parameters:
  • local_model_loader (AbstractLocalModelLoader) – model loader implementing the loading of the saved model for the selected deep learning framework

  • local_model_result_folder_path (str) – root local path where project folder will be created

  • bucket_name (str) – name of the bucket in the cloud storage from which the model will be downloaded

  • cloud_dir_prefix (str) – path to the folder inside the bucket where the experiments are going to be saved

load_model(project_name, experiment_name, experiment_timestamp, model_save_dir='checkpoint_model', epoch_num=None, **kwargs)[source]

Download the saved model from cloud storage and read/load it into memory

Parameters:
  • project_name (str) – root name of the project

  • experiment_name (str) – name of the particular experiment

  • experiment_timestamp (str) – timestamp at the start of training

  • model_save_dir (str) – name of the folder inside experiment folder where the model is saved

  • epoch_num (int or None) – epoch number of the model checkpoint, or None when loading the final model

  • **kwargs – additional local_model_loader parameters

Returns:

model representation (currently always a dict, as only PyTorch model loading is supported)

Return type:

dict
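
A minimal usage sketch of load_model(): it assumes a finished training run was already saved to the configured S3 bucket and uses the PyTorchS3ModelLoader subclass documented below as the concrete loader. The bucket prefix, project name, experiment name and timestamp are illustrative placeholders, not values defined by the library:

    from aitoolbox.cloud.AWS.model_load import PyTorchS3ModelLoader

    # Concrete S3 loader (documented below); bucket and prefix are illustrative
    loader = PyTorchS3ModelLoader(
        local_model_result_folder_path='~/project/model_result',
        bucket_name='model-result',
        cloud_dir_prefix='experiments'
    )

    # Project/experiment names and the timestamp are hypothetical and must match
    # an experiment that was actually saved to the bucket
    model_representation = loader.load_model(
        project_name='qa_project',
        experiment_name='baseline_run',
        experiment_timestamp='2021_05_01_10_15_30',
        epoch_num=None  # None -> load the final saved model instead of an epoch checkpoint
    )

    # With PyTorch model loading this is a dict holding the saved state
    print(type(model_representation))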

class aitoolbox.cloud.AWS.model_load.PyTorchS3ModelLoader(local_model_result_folder_path='~/project/model_result', bucket_name='model-result', cloud_dir_prefix='')[source]

Bases: BaseModelLoader

PyTorch S3 model downloader & loader

Parameters:
  • local_model_result_folder_path (str) – root local path where project folder will be created

  • bucket_name (str) – name of the bucket in the cloud storage from which the model will be downloaded

  • cloud_dir_prefix (str) – path to the folder inside the bucket where the experiments are going to be saved

init_model(model, used_data_parallel=False)[source]

Initialize the provided PyTorch model with the loaded model weights

For this method to work, load_model() must be called first to read the model representation into memory.

Parameters:
  • model (torch.nn.Module) – PyTorch model to initialize with the loaded weights

  • used_data_parallel (bool) – whether the saved model was wrapped in nn.DataParallel

Returns:

initialized model
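
A sketch of init_model(), continuing from the load_model() call above. The model architecture shown is purely illustrative and is assumed to match the architecture that was originally saved:

    import torch.nn as nn

    # The architecture must match the saved model; this small network is illustrative
    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 2))

    # load_model() must already have been called so the representation is in memory
    model = loader.init_model(model, used_data_parallel=False)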

init_optimizer(optimizer, device='cuda')[source]

Initialize the provided PyTorch optimizer from the loaded checkpoint

Parameters:
  • optimizer – PyTorch optimizer to be initialized with the optimizer state from the loaded checkpoint

  • device (str) – device onto which the loaded optimizer state should be mapped

Returns:

initialized optimizer
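
A sketch of init_optimizer(), continuing from the example above. The optimizer type and hyperparameters are assumed to match those used in the original training run:

    import torch.optim as optim

    # Build the optimizer over the already initialized model's parameters;
    # Adam and the learning rate are illustrative choices
    optimizer = optim.Adam(model.parameters(), lr=1e-4)

    # Restore the optimizer state from the loaded checkpoint
    optimizer = loader.init_optimizer(optimizer, device='cuda')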

init_scheduler(scheduler_callbacks_list, ignore_saved=False, ignore_missing_saved=False)[source]

Initialize the list of schedulers based on the saved model/optimizer/scheduler checkpoint

Parameters:
  • scheduler_callbacks_list (list) – list of scheduler callbacks to be initialized

  • ignore_saved (bool) – if an exception should be raised when scheduler snapshots are found in the checkpoint but no schedulers are provided to this method

  • ignore_missing_saved (bool) – if an exception should be raised when schedulers are provided to this method but no saved scheduler snapshots can be found in the checkpoint

Returns:

list of initialized scheduler callbacks

Return type:

list
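
A sketch of init_scheduler(), continuing from the example above. The concrete scheduler callback classes depend on the original training setup, so only a placeholder list is shown:

    # Should hold the same scheduler callbacks that were used during the original
    # training run; left empty here as a placeholder
    scheduler_callbacks = []

    scheduler_callbacks = loader.init_scheduler(scheduler_callbacks,
                                                ignore_saved=False,
                                                ignore_missing_saved=False)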

init_amp(amp_scaler)[source]

Initialize AMP GradScaler

Parameters:

  • amp_scaler (torch.cuda.amp.GradScaler) – AMP GradScaler

Returns:

initialized AMP GradScaler

Return type:

torch.cuda.amp.GradScaler
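
A sketch of init_amp(), continuing from the example above and assuming the original run was trained with AMP so that the checkpoint contains a GradScaler state:

    import torch

    # Fresh GradScaler to be initialized with the state from the loaded checkpoint
    amp_scaler = torch.cuda.amp.GradScaler()
    amp_scaler = loader.init_amp(amp_scaler)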