NLP_result_package

class aitoolbox.nlp.experiment_evaluation.NLP_result_package.QuestionAnswerResultPackage(paragraph_text_tokens, target_actual_text=None, output_text_dir=None, use_perl_rouge=False, flatten_result_dict=False, strict_content_check=False, **kwargs)[source]

Bases: AbstractResultPackage

Question Answering task performance evaluation result package

Parameters:
  • paragraph_text_tokens (list) – tokenized text of the context paragraphs

  • target_actual_text (list or None) – ground truth (actual) answer texts

  • output_text_dir (str or None) – name of the folder where the text predictions for each example should be stored

  • use_perl_rouge (bool) – if True, use the Perl script based ROUGE implementation for evaluation

  • flatten_result_dict (bool) – if True, flatten the nested results dict to a single level

  • strict_content_check (bool) – if True, raise an error on suspicious result content instead of only printing a warning

  • **kwargs (dict) – additional result package arguments
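A minimal construction sketch under the signature above (assuming aitoolbox is installed); the token lists and folder name below are hypothetical placeholders:

from aitoolbox.nlp.experiment_evaluation.NLP_result_package import QuestionAnswerResultPackage

# Hypothetical tokenized context paragraphs, one token list per example
paragraph_tokens = [['the', 'cat', 'sat', 'on', 'the', 'mat'],
                    ['dogs', 'bark', 'at', 'night']]

result_package = QuestionAnswerResultPackage(
    paragraph_text_tokens=paragraph_tokens,
    output_text_dir='qa_text_predictions',  # bare folder name; expanded into a full path
                                            # later by set_experiment_dir_path_for_additional_results()
    use_perl_rouge=False
)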

prepare_results_dict()[source]

Return type:

dict

set_experiment_dir_path_for_additional_results(project_name, experiment_name, experiment_timestamp, local_model_result_folder_path)[source]

Set experiment folder path after potential timestamps have already been generated.

Experiment folder setting for additional metadata results output is needed only in certain result packages, for example in QuestionAnswerResultPackage, where self.output_text_dir initially holds only the name of the folder in which the text predictions for each example should be stored. When implemented, this function reforms that folder name into a full path, placing the folder inside the experiment folder (which requires the timestamp generated at the start of the train loop).

Another use of this function is in MachineTranslationResultPackage where the attention heatmap pictures are stored as additional metadata results.

Because it relies on the train loop mechanism, this method’s functionality is primarily used in PyTorch experiments.

Parameters:
  • project_name (str) – root name of the project

  • experiment_name (str) – name of the particular experiment

  • experiment_timestamp (str) – time stamp at the start of training

  • local_model_result_folder_path (str) – root local path where project folder will be created

Returns:

None
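A hedged sketch of the effect, continuing the construction example above; the folder layout shown in the comments is an assumption for illustration, not the library’s confirmed behavior:

result_package.set_experiment_dir_path_for_additional_results(
    project_name='QA_project',
    experiment_name='bert_squad_run',
    experiment_timestamp='2023-01-15_10-30-00',
    local_model_result_folder_path='~/model_results'
)

# Conceptually, self.output_text_dir is reformed from a bare folder name, e.g.
#   'qa_text_predictions'
# into a full path placed inside the experiment folder, e.g. (assumed layout)
#   '~/model_results/QA_project/bert_squad_run_2023-01-15_10-30-00/qa_text_predictions'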

list_additional_results_dump_paths()[source]

Specify the list of metadata files you also want to save & upload to S3 during the experiment saving procedure.

By default, no additional files are saved, as the return value is None. If you want to save specific additional files produced during the training procedure, override this method and specify their file paths.

If you want to save a whole folder of files, use the zip_additional_results_dump() function to zip it into a single file and save this zip instead.

The specified files are any additional data you want to include in the experiment folder beyond the model save files and performance evaluation report files, for example a zip of attention heatmap pictures in machine translation projects.

Returns:

list of lists of string paths, if not None. Each element of the list should be a [results_file_name, results_file_local_path] pair: [[results_file_name, results_file_local_path], …]

Return type:

list or None

class aitoolbox.nlp.experiment_evaluation.NLP_result_package.QuestionAnswerSpanClassificationResultPackage(strict_content_check=False, **kwargs)[source]

Bases: AbstractResultPackage

Extractive Question Answering task performance evaluation result package

Evaluates the classification of the correct answer start and end points.

Parameters:
  • strict_content_check (bool) – if True, raise an error on suspicious result content instead of only printing a warning

  • **kwargs (dict) – additional result package arguments

prepare_results_dict()[source]

Available general data:

  • y_span_start_true (numpy.array or list) – ground truth answer span start indices

  • y_span_start_predicted (numpy.array or list) – predicted answer span start indices

  • y_span_end_true (numpy.array or list) – ground truth answer span end indices

  • y_span_end_predicted (numpy.array or list) – predicted answer span end indices

  • strict_content_check (bool) – if True, raise an error on suspicious result content instead of only printing a warning

  • **kwargs (dict) – additional result package data

Return type:

dict
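An illustrative sketch of the general data listed above; all index values are made up for demonstration:

# Token indices marking where each answer span starts and ends,
# one entry per evaluated example (hypothetical values)
y_span_start_true = [3, 0, 12]
y_span_start_predicted = [3, 1, 12]
y_span_end_true = [5, 4, 15]
y_span_end_predicted = [5, 4, 14]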

class aitoolbox.nlp.experiment_evaluation.NLP_result_package.TextSummarizationResultPackage(strict_content_check=False, **kwargs)[source]

Bases: AbstractResultPackage

Text summarization task performance evaluation package

Parameters:
  • strict_content_check (bool) – if True, raise an error on suspicious result content instead of only printing a warning

  • **kwargs (dict) – additional result package arguments

prepare_results_dict()[source]

Return type:

dict

class aitoolbox.nlp.experiment_evaluation.NLP_result_package.MachineTranslationResultPackage(target_vocab, source_vocab=None, source_sents=None, output_text_dir=None, output_attn_heatmap_dir=None, strict_content_check=False, **kwargs)[source]

Bases: AbstractResultPackage

Machine Translation task performance evaluation package

Parameters:
  • target_vocab – vocabulary of the target language

  • source_vocab – vocabulary of the source language, if available

  • source_sents – source language sentences

  • output_text_dir (str or None) – name of the folder where the text predictions for each example should be stored

  • output_attn_heatmap_dir (str or None) – name of the folder where the attention heatmap plots should be stored

  • strict_content_check (bool) – if True, raise an error on suspicious result content instead of only printing a warning

  • **kwargs (dict) – additional result package arguments
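A minimal construction sketch under the signature above; the vocabulary objects and source sentences are hypothetical stand-ins assumed to be built elsewhere in the project:

from aitoolbox.nlp.experiment_evaluation.NLP_result_package import MachineTranslationResultPackage

# target_vocab, source_vocab and source_sentences are placeholders here,
# assumed to have been prepared earlier in the project
mt_package = MachineTranslationResultPackage(
    target_vocab=target_vocab,
    source_vocab=source_vocab,
    source_sents=source_sentences,
    output_attn_heatmap_dir='attn_heatmaps'  # bare folder name, expanded later into a full path
)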
prepare_results_dict()[source]

Returns:

result dict which is a combination of different BLEU metric calculations and, possibly, saved attention heatmap plot files and perplexity

Return type:

dict

set_experiment_dir_path_for_additional_results(project_name, experiment_name, experiment_timestamp, local_model_result_folder_path)[source]

Set experiment folder path after potential timestamps have already been generated.

Experiment folder setting for additional metadata results output is needed only in certain result packages, for example in QuestionAnswerResultPackage, where self.output_text_dir initially holds only the name of the folder in which the text predictions for each example should be stored. When implemented, this function reforms that folder name into a full path, placing the folder inside the experiment folder (which requires the timestamp generated at the start of the train loop).

Another use of this function is in MachineTranslationResultPackage where the attention heatmap pictures are stored as additional metadata results.

Because it relies on the train loop mechanism, this method’s functionality is primarily used in PyTorch experiments.

Parameters:
  • project_name (str) – root name of the project

  • experiment_name (str) – name of the particular experiment

  • experiment_timestamp (str) – time stamp at the start of training

  • local_model_result_folder_path (str) – root local path where project folder will be created

Returns:

None

list_additional_results_dump_paths()[source]

Specify the list of metadata files you also want to save & upload to S3 during the experiment saving procedure.

By default, no additional files are saved, as the return value is None. If you want to save specific additional files produced during the training procedure, override this method and specify their file paths.

If you want to save a whole folder of files, use the zip_additional_results_dump() function to zip it into a single file and save this zip instead.

The specified files are any additional data you want to include in the experiment folder beyond the model save files and performance evaluation report files, for example a zip of attention heatmap pictures in machine translation projects.

Returns:

list of lists of string paths, if not None. Each element of the list should be a [results_file_name, results_file_local_path] pair: [[results_file_name, results_file_local_path], …]

Return type:

list or None
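A minimal override sketch following the [[results_file_name, results_file_local_path], …] format described above; the zip file name and path are hypothetical:

class MyTranslationResultPackage(MachineTranslationResultPackage):

    def list_additional_results_dump_paths(self):
        # e.g. a zipped folder of attention heatmap pictures produced during evaluation
        return [['attention_heatmaps.zip', '/tmp/experiments/attention_heatmaps.zip']]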

class aitoolbox.nlp.experiment_evaluation.NLP_result_package.GLUEResultPackage(task_name)[source]

Bases: AbstractResultPackage

GLUE task result package

Wrapper around HF Transformers glue_compute_metrics()

Parameters:

task_name (str) – name of the GLUE task

prepare_results_dict()[source]

Perform result package building

Mostly this consists of executing the calculation of the selected performance metrics and returning their result dicts. If you want to use multiple performance metrics, you have to combine them into a single self.results_dict at the end, like this:

return {**metric_dict_1, **metric_dict_2}

Returns:

calculated result dict

Return type:

dict
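The metric dict merging pattern from above, shown standalone with hypothetical metric dicts:

metric_dict_1 = {'Accuracy': 0.91}
metric_dict_2 = {'F1': 0.88}

results_dict = {**metric_dict_1, **metric_dict_2}  # {'Accuracy': 0.91, 'F1': 0.88}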

class aitoolbox.nlp.experiment_evaluation.NLP_result_package.XNLIResultPackage[source]

Bases: AbstractResultPackage

XNLI task result package

Wrapper around HF Transformers xnli_compute_metrics()

prepare_results_dict()[source]

Perform result package building

Mostly this consists of executing the calculation of the selected performance metrics and returning their result dicts. If you want to use multiple performance metrics, you have to combine them into a single self.results_dict at the end, like this:

return {**metric_dict_1, **metric_dict_2}

Returns:

calculated result dict

Return type:

dict