Dumping robustness check results
We provide two standard ways to dump robustness check results:
- robustcheck.utils.save_robustness_stats_artifacts(robustness_check, run_output_folder)
Saves robustness check artifacts containing metrics and histograms of queries and perturbation distances on the local file system.
- Parameters:
robustness_check – The RobustnessCheck containing the model and dataset to be benchmarked. Its run_robustness_check() method must already have been executed so that there are metrics to extract from it.
run_output_folder – A string representing the folder where the resulting artifacts are saved.
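A minimal usage sketch follows. The import path of RobustnessCheck and its constructor arguments are not shown in this documentation and are assumptions for illustration; only run_robustness_check() and save_robustness_stats_artifacts appear in the reference above, and the output folder name is hypothetical.

```python
from robustcheck import RobustnessCheck  # import path assumed for illustration
from robustcheck.utils import save_robustness_stats_artifacts

# Hypothetical setup: construct a RobustnessCheck for your model and
# dataset; the constructor arguments are not documented here, so they
# are left as a placeholder.
robustness_check = RobustnessCheck(...)

# save_robustness_stats_artifacts extracts metrics from a completed run,
# so the check must be executed first.
robustness_check.run_robustness_check()

# Save metrics and histograms of queries and perturbation distances
# under the given folder on the local file system.
save_robustness_stats_artifacts(robustness_check, "output/robustness_run")
```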
- robustcheck.utils.generate_mlflow_logs(robustness_check, run_name, experiment_name='default', tracking_uri='mlruns')
Logs robustness check metrics and artifacts to MLflow.
- Parameters:
robustness_check – The RobustnessCheck containing the model and dataset to be benchmarked. Its run_robustness_check() method must already have been executed so that there are metrics to extract from it.
run_name – A string representing the run name under which the MLflow artifacts and metrics will be logged.
experiment_name – A string representing the experiment name under which the MLflow artifacts and metrics will be logged.
tracking_uri – A string representing the path where the MLflow artifacts and metrics will be stored.
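A sketch of the MLflow variant, continuing from the previous example. The run and experiment names are illustrative; only the generate_mlflow_logs signature above is taken from this documentation.

```python
from robustcheck.utils import generate_mlflow_logs

# Assumes `robustness_check` is the completed check from the sketch above,
# i.e. run_robustness_check() has already been executed.
generate_mlflow_logs(
    robustness_check,
    run_name="robustness-run-01",             # illustrative run name
    experiment_name="robustness-benchmarks",  # illustrative; defaults to 'default'
    tracking_uri="mlruns",                    # default local MLflow store
)
```

With the default tracking_uri, the logged run can then be browsed locally with mlflow ui --backend-store-uri mlruns.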