classification

class aitoolbox.experiment.core_metrics.classification.AccuracyMetric(y_true, y_predicted, positive_class_thresh=0.5)[source]

Bases: aitoolbox.experiment.core_metrics.abstract_metric.AbstractBaseMetric

Model prediction accuracy

Parameters
  • y_true (numpy.array or list) – ground truth targets

  • y_predicted (numpy.array or list) – predicted targets

  • positive_class_thresh (float or None) – predicted probability positive class threshold. Set it to None when dealing with multi-class labels.

calculate_metric()[source]

Perform the metric calculation and return the result.

Returns

metric_result – the computed accuracy

Return type

float or dict
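To illustrate what a thresholded accuracy metric computes, here is a minimal pure-Python sketch. The function name `thresholded_accuracy` is hypothetical and not part of aitoolbox; it only mirrors the documented behavior of binarizing predicted probabilities at `positive_class_thresh` before scoring:

```python
def thresholded_accuracy(y_true, y_predicted, positive_class_thresh=0.5):
    """Binarize predicted probabilities at the threshold, then compute accuracy.

    When positive_class_thresh is None (e.g. multi-class labels), predictions
    are compared against the targets as-is.
    """
    if positive_class_thresh is not None:
        y_predicted = [1 if p >= positive_class_thresh else 0 for p in y_predicted]
    correct = sum(1 for t, p in zip(y_true, y_predicted) if t == p)
    return correct / len(y_true)

print(thresholded_accuracy([1, 0, 1, 0], [0.9, 0.2, 0.4, 0.1]))  # 0.75
```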

class aitoolbox.experiment.core_metrics.classification.ROCAUCMetric(y_true, y_predicted)[source]

Bases: aitoolbox.experiment.core_metrics.abstract_metric.AbstractBaseMetric

Model prediction ROC-AUC

Parameters
  • y_true (numpy.array or list) – ground truth targets

  • y_predicted (numpy.array or list) – predicted targets

calculate_metric()[source]

Perform the metric calculation and return the result.

Returns

metric_result – the computed ROC-AUC score

Return type

float or dict
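ROC-AUC can be read as the probability that a randomly chosen positive example receives a higher predicted score than a randomly chosen negative one. The sketch below implements that rank-based formulation directly; the function name is hypothetical and this is not the aitoolbox implementation (which presumably delegates to an established metrics library):

```python
def roc_auc(y_true, y_scores):
    """ROC-AUC as the fraction of positive/negative pairs ranked correctly.

    Ties between a positive and a negative score count as half a correct pair.
    """
    pos = [s for t, s in zip(y_true, y_scores) if t == 1]
    neg = [s for t, s in zip(y_true, y_scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(roc_auc([1, 0, 1, 0], [0.9, 0.2, 0.4, 0.1]))  # 1.0 (perfect ranking)
```

Note that this O(n²) pairwise form is only for illustration; production implementations sort the scores once instead.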

class aitoolbox.experiment.core_metrics.classification.PrecisionRecallCurveAUCMetric(y_true, y_predicted)[source]

Bases: aitoolbox.experiment.core_metrics.abstract_metric.AbstractBaseMetric

Model prediction PR-AUC

Parameters
  • y_true (numpy.array or list) – ground truth targets

  • y_predicted (numpy.array or list) – predicted targets

calculate_metric()[source]

Perform the metric calculation and return the result.

Returns

metric_result – the computed PR-AUC score

Return type

float or dict
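One common way to approximate the area under the precision-recall curve is average precision: a step-function sum of precision values at each recall increment. The sketch below is illustrative only; the function name is hypothetical and the exact interpolation aitoolbox uses may differ:

```python
def average_precision(y_true, y_scores):
    """Average precision: step-function approximation of the PR-curve area.

    Walk the examples in descending score order; each true positive advances
    recall, and its precision at that rank is weighted by the recall step.
    """
    order = sorted(range(len(y_scores)), key=lambda i: y_scores[i], reverse=True)
    total_pos = sum(y_true)
    tp, ap, prev_recall = 0, 0.0, 0.0
    for rank, i in enumerate(order, start=1):
        if y_true[i] == 1:
            tp += 1
            recall = tp / total_pos
            ap += (recall - prev_recall) * (tp / rank)  # precision at this rank
            prev_recall = recall
    return ap

print(average_precision([1, 0, 1, 0], [0.9, 0.2, 0.4, 0.1]))  # 1.0
```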

class aitoolbox.experiment.core_metrics.classification.F1ScoreMetric(y_true, y_predicted, positive_class_thresh=0.5)[source]

Bases: aitoolbox.experiment.core_metrics.abstract_metric.AbstractBaseMetric

Model prediction F1 score

Parameters
  • y_true (numpy.array or list) – ground truth targets

  • y_predicted (numpy.array or list) – predicted targets

  • positive_class_thresh (float) – predicted probability positive class threshold

calculate_metric()[source]

Perform the metric calculation and return the result.

Returns

metric_result – the computed F1 score

Return type

float or dict

class aitoolbox.experiment.core_metrics.classification.PrecisionMetric(y_true, y_predicted, positive_class_thresh=0.5)[source]

Bases: aitoolbox.experiment.core_metrics.abstract_metric.AbstractBaseMetric

Model prediction precision

Parameters
  • y_true (numpy.array or list) – ground truth targets

  • y_predicted (numpy.array or list) – predicted targets

  • positive_class_thresh (float) – predicted probability positive class threshold

calculate_metric()[source]

Perform the metric calculation and return the result.

Returns

metric_result – the computed precision score

Return type

float or dict

class aitoolbox.experiment.core_metrics.classification.RecallMetric(y_true, y_predicted, positive_class_thresh=0.5)[source]

Bases: aitoolbox.experiment.core_metrics.abstract_metric.AbstractBaseMetric

Model prediction recall score

Parameters
  • y_true (numpy.array or list) – ground truth targets

  • y_predicted (numpy.array or list) – predicted targets

  • positive_class_thresh (float) – predicted probability positive class threshold

calculate_metric()[source]

Perform the metric calculation and return the result.

Returns

metric_result – the computed recall score

Return type

float or dict
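F1ScoreMetric, PrecisionMetric and RecallMetric all follow the same pattern: binarize the predicted probabilities at `positive_class_thresh`, then score against the confusion-matrix counts. The combined sketch below shows the three computations side by side; the function name `binary_prf` is hypothetical and this is not the aitoolbox implementation:

```python
def binary_prf(y_true, y_predicted, positive_class_thresh=0.5):
    """Binarize at the threshold, then compute (precision, recall, F1).

    Each degenerate denominator (no predicted or no actual positives)
    falls back to 0.0.
    """
    y_hat = [1 if p >= positive_class_thresh else 0 for p in y_predicted]
    tp = sum(1 for t, p in zip(y_true, y_hat) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_hat) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_hat) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

print(binary_prf([1, 0, 1, 0], [0.9, 0.6, 0.4, 0.1]))  # (0.5, 0.5, 0.5)
```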