basic_packages
- class aitoolbox.experiment.result_package.basic_packages.GeneralResultPackage(metrics_list, strict_content_check=False, **kwargs)[source]
Bases:
AbstractResultPackage
Result package executing a given list of metrics
- Parameters:
  - metrics_list (list) – list of performance metrics to be calculated
  - strict_content_check (bool) – whether to perform a strict content check
  - **kwargs – additional result package arguments
- prepare_results_dict()[source]
Perform result package building
Mostly this consists of executing the calculation of the selected performance metrics and returning their result dicts. If you want to use multiple performance metrics, you have to combine them into a single self.results_dict at the end:
return {**metric_dict_1, **metric_dict_2}
- Returns:
calculated result dict
- Return type:
dict
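The dict-merging pattern described above can be sketched without aitoolbox itself; the metric names and values below are made up purely for illustration:

```python
def prepare_results_dict():
    # Hypothetical result dicts, as each metric's calculation would
    # return them (metric names and values are illustrative only).
    metric_dict_1 = {"Accuracy": 0.91}
    metric_dict_2 = {"F1_score": 0.87}
    # Combine the individual metric dicts into the single results dict
    # the result package returns.
    return {**metric_dict_1, **metric_dict_2}

results_dict = prepare_results_dict()
```

If two metrics share a key, the later dict in the unpacking wins, so metric names should be unique.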
- class aitoolbox.experiment.result_package.basic_packages.BinaryClassificationResultPackage(positive_class_thresh=0.5, strict_content_check=False, **kwargs)[source]
Bases:
AbstractResultPackage
Binary classification task result package
Evaluates the following metrics: accuracy, ROC-AUC, PR-AUC and F1 score
- Parameters:
  - positive_class_thresh (float) – predicted probability threshold at or above which an example is assigned to the positive class
  - strict_content_check (bool) – whether to perform a strict content check
  - **kwargs – additional result package arguments
- prepare_results_dict()[source]
Perform result package building
Mostly this consists of executing the calculation of the selected performance metrics and returning their result dicts. If you want to use multiple performance metrics, you have to combine them into a single self.results_dict at the end:
return {**metric_dict_1, **metric_dict_2}
- Returns:
calculated result dict
- Return type:
dict
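A minimal sketch of thresholded binary classification metrics, showing how a positive-class probability threshold feeds into accuracy and F1. This is an assumed illustration of the concept, not the aitoolbox implementation, and the function name is hypothetical:

```python
def binary_metrics(y_true, y_prob, positive_class_thresh=0.5):
    """Sketch: threshold probabilities, then compute accuracy and F1."""
    # Assign the positive class when the predicted probability reaches
    # the threshold.
    y_pred = [1 if p >= positive_class_thresh else 0 for p in y_prob]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return {"Accuracy": accuracy, "F1_score": f1}

metrics = binary_metrics([1, 0, 1, 1], [0.9, 0.2, 0.4, 0.8])
```

ROC-AUC and PR-AUC are threshold-free and operate on the raw probabilities instead, which is why the package accepts probabilities rather than hard labels.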
- class aitoolbox.experiment.result_package.basic_packages.ClassificationResultPackage(strict_content_check=False, **kwargs)[source]
Bases:
AbstractResultPackage
Multi-class classification result package
Evaluates the accuracy of the predictions, without the Precision-Recall metric, which is available only for binary classification problems.
- Parameters:
  - strict_content_check (bool) – whether to perform a strict content check
  - **kwargs – additional result package arguments
- prepare_results_dict()[source]
Perform result package building
Mostly this consists of executing the calculation of the selected performance metrics and returning their result dicts. If you want to use multiple performance metrics, you have to combine them into a single self.results_dict at the end:
return {**metric_dict_1, **metric_dict_2}
- Returns:
calculated result dict
- Return type:
dict
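Multi-class accuracy reduces to the fraction of exact label matches. A minimal sketch of that idea (the function name and metric key are assumptions, not the aitoolbox API):

```python
def multiclass_accuracy(y_true, y_pred):
    # Fraction of positions where the predicted label exactly matches
    # the true label; works for any number of classes.
    return sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)

# 3 of the 4 predictions match the true labels.
acc = multiclass_accuracy([0, 1, 2, 2], [0, 2, 2, 2])
results_dict = {"Accuracy": acc}
```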
- class aitoolbox.experiment.result_package.basic_packages.RegressionResultPackage(strict_content_check=False, **kwargs)[source]
Bases:
AbstractResultPackage
Regression task result package
Evaluates MSE and MAE metrics.
- Parameters:
  - strict_content_check (bool) – whether to perform a strict content check
  - **kwargs – additional result package arguments
- prepare_results_dict()[source]
Perform result package building
Mostly this consists of executing the calculation of the selected performance metrics and returning their result dicts. If you want to use multiple performance metrics, you have to combine them into a single self.results_dict at the end:
return {**metric_dict_1, **metric_dict_2}
- Returns:
calculated result dict
- Return type:
dict
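A sketch of the two regression metrics the package evaluates, MSE and MAE, combined into one results dict as described above. The function name and metric keys are illustrative assumptions, not the aitoolbox implementation:

```python
def regression_metrics(y_true, y_pred):
    # Per-example errors between true targets and predictions.
    errors = [t - p for t, p in zip(y_true, y_pred)]
    # Mean squared error penalizes large errors quadratically;
    # mean absolute error weights all errors linearly.
    mse = sum(e ** 2 for e in errors) / len(errors)
    mae = sum(abs(e) for e in errors) / len(errors)
    return {"MSE": mse, "MAE": mae}

metrics = regression_metrics([3.0, 5.0, 2.0], [2.0, 5.0, 4.0])
```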