model_predict
- class aitoolbox.torchtrain.model_predict.PyTorchModelPredictor(model, data_loader, callbacks=None)[source]
Bases: object
PyTorch model predictions based on the provided dataloader
- Parameters:
model (aitoolbox.torchtrain.model.TTModel or aitoolbox.torchtrain.model.ModelWrap) – neural network model
data_loader (torch.utils.data.DataLoader) – dataloader based on which the model output predictions are made
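A minimal construction sketch. The MyModel class below is hypothetical; its get_loss() / get_predictions() signatures follow the usual TTModel contract and should be checked against the TTModel documentation:

    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader, TensorDataset

    from aitoolbox.torchtrain.model import TTModel
    from aitoolbox.torchtrain.model_predict import PyTorchModelPredictor


    class MyModel(TTModel):
        # Hypothetical example model; the get_loss()/get_predictions()
        # signatures are assumed from the TTModel abstract API
        def __init__(self):
            super().__init__()
            self.linear = nn.Linear(10, 2)

        def forward(self, x):
            return self.linear(x)

        def get_loss(self, batch_data, criterion, device):
            x, y = batch_data
            return criterion(self(x.to(device)), y.to(device))

        def get_predictions(self, batch_data, device):
            x, y = batch_data
            return self(x.to(device)).cpu(), y, {}  # y_pred, y_true, metadata


    test_loader = DataLoader(
        TensorDataset(torch.randn(100, 10), torch.randint(0, 2, (100,))),
        batch_size=32
    )
    predictor = PyTorchModelPredictor(MyModel(), test_loader)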
- model_predict()[source]
Calculate model output predictions
- Returns:
y_pred, y_true, metadata
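Continuing the sketch above, the call is simply:

    y_pred, y_true, metadata = predictor.model_predict()
    print(y_pred.shape, y_true.shape)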
- model_get_loss(loss_criterion)[source]
Calculate the model's loss on the given dataloader using the provided loss function
- Parameters:
loss_criterion (torch.nn.Module) – loss criterion used during the training procedure
- Returns:
loss
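Continuing the sketch above, assuming a classification setup:

    criterion = nn.CrossEntropyLoss()
    loss = predictor.model_get_loss(criterion)
    print(loss)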
- evaluate_model(result_package, project_name, experiment_name, local_model_result_folder_path, cloud_save_mode='s3', bucket_name='model-result', cloud_dir_prefix='', save_true_pred_labels=False)[source]
Evaluate the model's performance with full experiment tracking
- Parameters:
result_package (aitoolbox.experiment.result_package.abstract_result_packages.AbstractResultPackage) – result package defining the evaluation metrics used to evaluate the model's predictions on the provided dataloader
project_name (str) – root name of the project
experiment_name (str) – name of the particular experiment
local_model_result_folder_path (str) – root local path where project folder will be created
cloud_save_mode (str or None) – storage destination selector. For AWS S3 use 's3', 'aws_s3' or 'aws'; for Google Cloud Storage use 'gcs', 'google_storage' or 'google storage'. Any other value results in local storage to disk only
bucket_name (str) – name of the bucket in the cloud storage
cloud_dir_prefix (str) – path to the folder inside the bucket where the experiments are going to be saved
save_true_pred_labels (bool) – whether the ground truth labels should also be saved
- Returns:
None
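A usage sketch, continuing from above. ClassificationResultPackage is assumed here as an example package from aitoolbox.experiment.result_package.basic_packages; substitute whichever result package fits the task:

    from aitoolbox.experiment.result_package.basic_packages import ClassificationResultPackage

    predictor.evaluate_model(
        result_package=ClassificationResultPackage(),
        project_name='my_project',         # hypothetical project name
        experiment_name='baseline_eval',   # hypothetical experiment name
        local_model_result_folder_path='~/model_results',
        cloud_save_mode=None  # anything other than the S3/GCS selectors keeps results on local disk
    )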
- evaluate_result_package(result_package, return_result_package=True)[source]
Evaluate the model's performance based on the provided result package
- Parameters:
result_package (aitoolbox.experiment.result_package.abstract_result_packages.AbstractResultPackage) – result package defining the evaluation metrics used to evaluate the model's predictions
return_result_package (bool) – if True, the full calculated result package is returned, otherwise only the results dict is returned
- Returns:
calculated result package or results dict
- Return type:
aitoolbox.experiment.result_package.abstract_result_packages.AbstractResultPackage or dict
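A sketch of evaluating just the result package without experiment tracking, reusing the assumed ClassificationResultPackage from the previous example:

    results_dict = predictor.evaluate_result_package(
        ClassificationResultPackage(), return_result_package=False
    )
    print(results_dict)  # the exact metric keys depend on the chosen package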
- execute_batch_end_callbacks()[source]
Execute the provided callbacks that are triggered at the end of a batch in the train loop
- Returns:
None
- execute_epoch_end_callbacks()[source]
Execute the provided callbacks that are triggered at the end of an epoch in the train loop
- Returns:
None
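A sketch of how callbacks passed to the constructor get triggered, reusing MyModel and test_loader from the first sketch. The AbstractCallback import path and constructor signature are assumptions based on aitoolbox's callback API:

    from aitoolbox.torchtrain.callbacks.abstract import AbstractCallback

    class EpochEndLogger(AbstractCallback):
        # Hypothetical callback; base class signature assumed
        def __init__(self):
            super().__init__('Epoch end logger')

        def on_epoch_end(self):
            print('Epoch-end callback executed')

    predictor = PyTorchModelPredictor(MyModel(), test_loader,
                                      callbacks=[EpochEndLogger()])
    predictor.execute_epoch_end_callbacks()  # runs on_epoch_end() of each callback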
- evaluate_metric(metric_class, return_metric=True)[source]
Evaluate the model with a single performance metric
This is intended only for simple cases where the network output can be used directly for the metric calculation. For more advanced cases where the network output needs to be preprocessed before the metric evaluation, using a result package is preferred.
- Parameters:
metric_class (aitoolbox.experiment.core_metrics.abstract_metric.AbstractBaseMetric) – the metric class itself, not an instantiated object
return_metric (bool) – if True, the full performance metric object is returned, otherwise only the metric result dict is returned
- Returns:
calculated performance metric or result dict
- Return type:
aitoolbox.experiment.core_metrics.abstract_metric.AbstractBaseMetric or dict
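A sketch, assuming AccuracyMetric from aitoolbox.experiment.core_metrics.classification as the example metric; note that the class itself is passed in, not an instance:

    from aitoolbox.experiment.core_metrics.classification import AccuracyMetric

    # Import path assumed; the metric class (not an instance) is passed in
    accuracy_dict = predictor.evaluate_metric(AccuracyMetric, return_metric=False)
    print(accuracy_dict)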