model_predict

class aitoolbox.torchtrain.model_predict.PyTorchModelPredictor(model, data_loader, callbacks=None)[source]

Bases: object

PyTorch model predictions based on the provided dataloader

Parameters:
  • model (TTModel) – neural network model used to make the predictions

  • data_loader (torch.utils.data.DataLoader) – dataloader from which the prediction data is taken

  • callbacks (list or None) – callbacks executed during the prediction loop
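
A minimal construction sketch follows. The ToyClassifier model below, together with the assumed TTModel hook signatures get_loss(batch_data, criterion, device) and get_predictions(batch_data, device), is illustrative only; verify the interface against the aitoolbox version in use:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    from aitoolbox.torchtrain.model import TTModel
    from aitoolbox.torchtrain.model_predict import PyTorchModelPredictor

    class ToyClassifier(TTModel):
        def __init__(self):
            super().__init__()
            self.fc = nn.Linear(10, 2)

        def forward(self, x):
            return self.fc(x)

        def get_loss(self, batch_data, criterion, device):
            # compute the batch loss; the signature is an assumed TTModel hook
            x, y = [t.to(device) for t in batch_data]
            return criterion(self(x), y)

        def get_predictions(self, batch_data, device):
            # return (y_pred, y_true, metadata); the signature is an assumed TTModel hook
            x, y = batch_data
            y_pred = self(x.to(device)).argmax(dim=1).cpu()
            return y_pred, y, {}

    data_loader = DataLoader(
        TensorDataset(torch.randn(64, 10), torch.randint(0, 2, (64,))),
        batch_size=16
    )
    predictor = PyTorchModelPredictor(ToyClassifier(), data_loader)
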
model_predict()[source]

Calculate model output predictions

Returns:

y_pred, y_true, metadata

Return type:

(torch.Tensor, torch.Tensor, dict)
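
A usage sketch, assuming the predictor constructed in the example above:

    # run the full prediction loop over the dataloader
    y_pred, y_true, metadata = predictor.model_predict()
    print(y_pred.shape, y_true.shape, metadata)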

model_get_loss(loss_criterion)[source]

Calculate the model’s loss on the given dataloader using the provided loss function

Parameters:

loss_criterion (torch.nn.Module) – criterion used during the training procedure

Returns:

loss

Return type:

torch.Tensor or MultiLoss
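
A usage sketch, again assuming the predictor from the construction example; nn.CrossEntropyLoss is only an illustrative criterion:

    import torch.nn as nn

    # compute the loss over the whole dataloader
    loss = predictor.model_get_loss(nn.CrossEntropyLoss())
    print(float(loss))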

evaluate_model(result_package, project_name, experiment_name, local_model_result_folder_path, cloud_save_mode='s3', bucket_name='model-result', cloud_dir_prefix='', save_true_pred_labels=False)[source]

Evaluate model’s performance with full experiment tracking

Parameters:
  • result_package (aitoolbox.experiment.result_package.abstract_result_packages.AbstractResultPackage) – result package defining the evaluation metrics on which the model is evaluated when predicting the values from the provided dataloader

  • project_name (str) – root name of the project

  • experiment_name (str) – name of the particular experiment

  • local_model_result_folder_path (str) – root local path where project folder will be created

  • cloud_save_mode (str or None) – storage destination selector. For AWS S3 use ‘s3’, ‘aws_s3’ or ‘aws’; for Google Cloud Storage use ‘gcs’, ‘google_storage’ or ‘google storage’. Any other value results in local storage to disk only.

  • bucket_name (str) – name of the bucket in the cloud storage

  • cloud_dir_prefix (str) – path to the folder inside the bucket where the experiments are going to be saved

  • save_true_pred_labels (bool) – whether the ground truth labels should also be saved

Returns:

None
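
A hedged usage sketch: ClassificationResultPackage and its import path are assumptions, so substitute the result package matching your task. Passing cloud_save_mode=None keeps the results on the local disk only:

    # hypothetical result package; any AbstractResultPackage subclass applies
    from aitoolbox.experiment.result_package.basic_packages import ClassificationResultPackage

    predictor.evaluate_model(
        result_package=ClassificationResultPackage(),
        project_name='demo_project',
        experiment_name='baseline_run',
        local_model_result_folder_path='~/model_results',
        cloud_save_mode=None  # skip cloud upload, save locally only
    )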

evaluate_result_package(result_package, return_result_package=True)[source]

Evaluate model’s performance based on provided Result Package

Parameters:
  • result_package (aitoolbox.experiment.result_package.abstract_result_packages.AbstractResultPackage) – result package defining the evaluation metrics on which the model is evaluated

  • return_result_package (bool) – if True, the full result package is returned, otherwise only the results dict is returned

Returns:

calculated result package or results dict

Return type:

aitoolbox.experiment.result_package.abstract_result_packages.AbstractResultPackage or dict
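
A usage sketch, reusing the hypothetical ClassificationResultPackage from above and assuming result packages expose their computed metrics through get_results():

    result_package = predictor.evaluate_result_package(ClassificationResultPackage())
    print(result_package.get_results())  # assumed accessor returning the results dict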

execute_batch_end_callbacks()[source]

Execute the provided callbacks which are triggered at the end of a batch in the train loop

Returns:

None

execute_epoch_end_callbacks()[source]

Execute the provided callbacks which are triggered at the end of an epoch in the train loop

Returns:

None
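
A hedged sketch of wiring callbacks into the predictor and triggering them manually, reusing the toy model and dataloader from the construction example. AbstractCallback's import path, constructor signature and hook names (on_batch_end, on_epoch_end) are assumptions to verify against the aitoolbox version in use:

    from aitoolbox.torchtrain.callbacks.abstract import AbstractCallback

    class PrintingCallback(AbstractCallback):
        def __init__(self):
            # the callback name positional argument is an assumed AbstractCallback API
            super().__init__('printing callback')

        def on_batch_end(self):
            print('batch finished')

        def on_epoch_end(self):
            print('epoch finished')

    predictor = PyTorchModelPredictor(ToyClassifier(), data_loader,
                                      callbacks=[PrintingCallback()])
    predictor.execute_batch_end_callbacks()  # runs each callback's on_batch_end()
    predictor.execute_epoch_end_callbacks()  # runs each callback's on_epoch_end()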

evaluate_metric(metric_class, return_metric=True)[source]

Evaluate a model with a single performance metric

Only for simple cases where the network output can be used directly for metric calculation. For more advanced cases where the network output needs to be preprocessed before metric evaluation, using a result package is preferred.

Parameters:
  • metric_class (aitoolbox.experiment.core_metrics.abstract_metric.AbstractBaseMetric) – metric class (not an instantiated object) used for the model evaluation

  • return_metric (bool) – if True, the full performance metric object is returned, otherwise only the metric results dict is returned

Returns:

calculated performance metric or result dict

Return type:

aitoolbox.experiment.core_metrics.abstract_metric.AbstractBaseMetric or dict
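
A usage sketch; AccuracyMetric and its import path are assumptions, and note that the metric class itself is passed in, not an instance:

    # hypothetical metric; any AbstractBaseMetric subclass applies
    from aitoolbox.experiment.core_metrics.classification import AccuracyMetric

    accuracy_metric = predictor.evaluate_metric(AccuracyMetric)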

evaluate_metric_list(metrics_class_list, return_metric_list=True)[source]

Evaluate a model with a list of performance metrics

Parameters:
  • metrics_class_list (list) – list of metric classes (not instantiated objects)

  • return_metric_list (bool) – if True, the full performance metric objects are returned, otherwise only the metric results dict is returned

Returns:

list of calculated performance metrics or results dict

Return type:

list or dict
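
A usage sketch reusing the hypothetical AccuracyMetric from above; with return_metric_list=False only the combined results dict is returned:

    results_dict = predictor.evaluate_metric_list([AccuracyMetric],
                                                  return_metric_list=False)
    print(results_dict)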