basic_packages

class aitoolbox.experiment.result_package.basic_packages.GeneralResultPackage(metrics_list, strict_content_check=False, **kwargs)[source]

Bases: AbstractResultPackage

Result package executing the given list of metrics

Parameters:
  • metrics_list (list) – List of objects which are inherited from aitoolbox.experiment.core_metrics.BaseMetric.AbstractBaseMetric

  • strict_content_check (bool) – if True, raise an error and crash on content problems; if False, only print a warning

  • **kwargs (dict) – additional package_metadata for the result package

prepare_results_dict()[source]

Perform result package building

This mostly consists of calculating the selected performance metrics and returning their result dicts. When using multiple performance metrics, combine them into the single self.results_dict at the end:

return {**metric_dict_1, **metric_dict_2}
Returns:

calculated result dict

Return type:

dict

qa_check_metrics_list()[source]
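The dict-merging contract of prepare_results_dict() can be sketched in plain Python. The metric stand-ins below are hypothetical placeholders, not the real aitoolbox metric classes (the actual AbstractBaseMetric interface may differ); they only illustrate how per-metric result dicts are executed and merged:

```python
# Hypothetical stand-ins for metric objects; real metrics inherit from
# aitoolbox.experiment.core_metrics.BaseMetric.AbstractBaseMetric.
class AccuracyStub:
    def get_metric_dict(self):
        return {"Accuracy": 0.92}

class F1Stub:
    def get_metric_dict(self):
        return {"F1": 0.88}

def prepare_results_dict(metrics_list):
    """Execute each metric and merge the per-metric result dicts into one."""
    results_dict = {}
    for metric in metrics_list:
        results_dict = {**results_dict, **metric.get_metric_dict()}
    return results_dict

print(prepare_results_dict([AccuracyStub(), F1Stub()]))
# -> {'Accuracy': 0.92, 'F1': 0.88}
```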
class aitoolbox.experiment.result_package.basic_packages.BinaryClassificationResultPackage(positive_class_thresh=0.5, strict_content_check=False, **kwargs)[source]

Bases: AbstractResultPackage

Binary classification task result package

Evaluates the following metrics: accuracy, ROC-AUC, PR-AUC, and F1 score

Parameters:
  • positive_class_thresh (float or None) – probability threshold above which a prediction is assigned to the positive class

  • strict_content_check (bool) – if True, raise an error and crash on content problems; if False, only print a warning

  • **kwargs (dict) – additional package_metadata for the result package

prepare_results_dict()[source]

Perform result package building

This mostly consists of calculating the selected performance metrics and returning their result dicts. When using multiple performance metrics, combine them into the single self.results_dict at the end:

return {**metric_dict_1, **metric_dict_2}
Returns:

calculated result dict

Return type:

dict
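The role of positive_class_thresh can be sketched as follows. This is a minimal illustration, not the package's implementation: whether the real code uses >= or > at the threshold, and which of the listed metrics it computes with which library, are assumptions here.

```python
def threshold_predictions(y_pred_prob, positive_class_thresh=0.5):
    """Convert predicted positive-class probabilities into hard 0/1 labels.

    Assumes probabilities at or above the threshold map to the positive class.
    """
    return [1 if p >= positive_class_thresh else 0 for p in y_pred_prob]

def accuracy(y_true, y_pred):
    """Fraction of predictions matching the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

probs = [0.9, 0.3, 0.6, 0.2]
labels = [1, 0, 1, 1]
hard = threshold_predictions(probs, positive_class_thresh=0.5)
print(hard)                     # -> [1, 0, 1, 0]
print(accuracy(labels, hard))   # -> 0.75
```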

class aitoolbox.experiment.result_package.basic_packages.ClassificationResultPackage(strict_content_check=False, **kwargs)[source]

Bases: AbstractResultPackage

Multi-class classification result package

Evaluates the accuracy of the predictions. The Precision-Recall metric is omitted, as it is available only for binary classification problems.

Parameters:
  • strict_content_check (bool) – if True, raise an error and crash on content problems; if False, only print a warning

  • **kwargs (dict) – additional package_metadata for the result package

prepare_results_dict()[source]

Perform result package building

This mostly consists of calculating the selected performance metrics and returning their result dicts. When using multiple performance metrics, combine them into the single self.results_dict at the end:

return {**metric_dict_1, **metric_dict_2}
Returns:

calculated result dict

Return type:

dict
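Multi-class accuracy as evaluated here can be sketched in plain Python. This is an assumption-laden illustration (the real package's expected prediction format, e.g. class indices vs. per-class scores, is not specified in this section):

```python
def argmax(row):
    """Index of the largest value in a list of per-class scores."""
    return max(range(len(row)), key=row.__getitem__)

def multiclass_accuracy(y_true, y_pred_scores):
    """Accuracy over per-class score rows: take argmax, compare to true labels."""
    preds = [argmax(row) for row in y_pred_scores]
    return sum(t == p for t, p in zip(y_true, preds)) / len(y_true)

scores = [[0.1, 0.7, 0.2],
          [0.8, 0.1, 0.1],
          [0.2, 0.2, 0.6]]
print(multiclass_accuracy([1, 0, 2], scores))  # -> 1.0
```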

class aitoolbox.experiment.result_package.basic_packages.RegressionResultPackage(strict_content_check=False, **kwargs)[source]

Bases: AbstractResultPackage

Regression task result package

Evaluates MSE and MAE metrics.

Parameters:
  • strict_content_check (bool) – if True, raise an error and crash on content problems; if False, only print a warning

  • **kwargs (dict) – additional package_metadata for the result package

prepare_results_dict()[source]

Perform result package building

This mostly consists of calculating the selected performance metrics and returning their result dicts. When using multiple performance metrics, combine them into the single self.results_dict at the end:

return {**metric_dict_1, **metric_dict_2}
Returns:

calculated result dict

Return type:

dict
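The two metrics this package evaluates have standard definitions, sketched below in plain Python (the result-dict key names are illustrative assumptions, not necessarily the keys the package emits):

```python
def mse(y_true, y_pred):
    """Mean squared error: average of squared residuals."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    """Mean absolute error: average of absolute residuals."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

y_true = [3.0, 5.0, 2.0]
y_pred = [2.5, 5.5, 2.0]
# Merge per-metric dicts into a single results dict, as prepare_results_dict() does.
print({"MSE": mse(y_true, y_pred), "MAE": mae(y_true, y_pred)})
```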