abstract_metric

class aitoolbox.experiment.core_metrics.abstract_metric.AbstractBaseMetric(y_true, y_predicted, metric_name, np_array=True)[source]

Bases: ABC

Base metric class providing the core functionality needed by all derived performance metrics; a minimal subclassing sketch follows the parameter list.

Parameters:
  • y_true (numpy.array or list or str) – ground truth targets

  • y_predicted (numpy.array or list or str) – predicted targets

  • metric_name (str) – name of the calculated metric

  • np_array (bool) – whether the provided targets should be converted to numpy arrays or left as they are

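Derived metrics only need to implement calculate_metric(); the base constructor is assumed to store the targets (converted to numpy arrays when np_array=True) and to compute the result immediately. A minimal sketch of a hypothetical subclass (AccuracyMetric and the self.y_true / self.y_predicted attribute names are illustrative assumptions, not part of this documented API):

    import numpy as np

    from aitoolbox.experiment.core_metrics.abstract_metric import AbstractBaseMetric

    class AccuracyMetric(AbstractBaseMetric):
        """Hypothetical accuracy metric illustrating the subclassing pattern"""

        def __init__(self, y_true, y_predicted):
            # Assumption: the base __init__ stores the (converted) targets and
            # triggers calculate_metric() to fill in the metric result
            super().__init__(y_true, y_predicted, metric_name='Accuracy', np_array=True)

        def calculate_metric(self):
            # Fraction of positions where the prediction matches the ground truth
            return float(np.mean(self.y_true == self.y_predicted))
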
abstract calculate_metric()[source]

Perform the metric calculation and return the result; every derived metric must implement this method

Returns:

metric result

Return type:

float or dict

get_metric()[source]

Return the calculated metric result

Returns:

metric result

Return type:

float or dict

get_metric_dict()[source]

Create and return the metric result as a key-value dict

Returns:

metric dict

Return type:

dict
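
Assuming the hypothetical AccuracyMetric sketch above, get_metric() would return the raw result while get_metric_dict() wraps it under the metric's name:

    metric = AccuracyMetric(y_true=[0, 1, 1, 0], y_predicted=[0, 1, 0, 0])

    metric.get_metric()       # 0.75
    metric.get_metric_dict()  # {'Accuracy': 0.75}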

_get_metric_self_other_val(other)[source]

Comparison preparation utility: extract the comparable value from the other operand when comparing metrics

Parameters:

other (AbstractBaseMetric or float or int) – the other metric or raw numeric value being compared

Returns:

metric value

Return type:

float or int
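
Judging only from the documented parameter and return types, this helper presumably normalizes the other comparison operand to a plain number. A hedged sketch of that behavior (not the library's actual code; the standalone name is hypothetical):

    def _other_comparison_val(other):
        # Another metric: compare against its calculated result
        if isinstance(other, AbstractBaseMetric):
            return other.get_metric()
        # Raw numbers pass through unchanged
        if isinstance(other, (int, float)):
            return other
        raise TypeError(f'Cannot compare a metric with {type(other)}')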

__add__(other)[source]

Concatenate this metric with another metric or metric dict

Parameters:

other (AbstractBaseMetric or dict) – new metric to be added

Returns:

combined metric dict

Return type:

dict

__radd__(other)[source]

Append this metric to another metric or metric dict (right-hand side of the + operator)

Parameters:

other (AbstractBaseMetric or dict) – new metric to be added

Returns:

combined metric dict

Return type:

dict

concat_metric(other)[source]

Concatenate another metric (or metric dict) with this one

Parameters:

other (AbstractBaseMetric or dict) – new metric to be added

Returns:

combined metric dict

Return type:

dict
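
Since __add__, __radd__ and concat_metric all produce a plain dict, several metrics can be merged into a single report. A usage sketch reusing the hypothetical AccuracyMetric from above, plus a second hypothetical subclass:

    class ErrorRateMetric(AbstractBaseMetric):
        """Hypothetical second metric used to demonstrate concatenation"""

        def __init__(self, y_true, y_predicted):
            super().__init__(y_true, y_predicted, metric_name='ErrorRate')

        def calculate_metric(self):
            return float(np.mean(self.y_true != self.y_predicted))

    acc = AccuracyMetric([0, 1, 1, 0], [0, 1, 0, 0])
    err = ErrorRateMetric([0, 1, 1, 0], [0, 1, 0, 0])

    acc + err               # __add__:  {'Accuracy': 0.75, 'ErrorRate': 0.25}
    {'Loss': 0.31} + acc    # dict on the left resolves via __radd__
    acc.concat_metric(err)  # explicit equivalent of the + operator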