UntargetedAttack
- class robustcheck.types.UntargetedAttack.UntargetedAttack(model, img, label)[source]
Abstract class for untargeted adversarial attacks.
This class provides the template for standard untargeted adversarial attacks: concrete attacks subclass it and implement the abstract methods below.
- model
The target model to attack. It must expose a predict method that, given a batch of images, returns the corresponding output probability distributions (see the sketch after this attribute list).
- img
An array of shape (H, W, C) representing the target image to be perturbed.
- label
An integer representing the correct class index of the image.
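For illustration, here is a minimal sketch of an object satisfying the predict contract described above. RandomModel is a hypothetical stand-in, not part of robustcheck; any classifier exposing the same interface works.

```python
import numpy as np

class RandomModel:
    """Hypothetical stand-in satisfying the predict contract."""

    def __init__(self, num_classes=10):
        self.num_classes = num_classes

    def predict(self, images):
        # images: batch of shape (N, H, W, C).
        # Returns an (N, num_classes) array of class probabilities.
        scores = np.random.rand(len(images), self.num_classes)
        return scores / scores.sum(axis=1, keepdims=True)
```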
- run_adversarial_attack(self)[source]
Abstract method; a concrete implementation runs the adversarial attack against the model.
- is_perturbed(self)[source]
Abstract method; a concrete implementation returns a boolean indicating whether a successful untargeted adversarial perturbation was found.
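The following is a minimal sketch of a concrete subclass, assuming the base class stores model, img, and label as the attributes documented above and that pixel values lie in [0, 1]. RandomNoiseAttack, epsilon, max_steps, and perturbed_img are illustrative names, not part of robustcheck.

```python
import numpy as np
from robustcheck.types.UntargetedAttack import UntargetedAttack

class RandomNoiseAttack(UntargetedAttack):
    """Illustrative subclass: add uniform noise until the prediction flips."""

    def __init__(self, model, img, label, epsilon=0.1, max_steps=100):
        super().__init__(model, img, label)
        self.epsilon = epsilon        # noise magnitude (assumes [0, 1] pixel range)
        self.max_steps = max_steps    # attack budget
        self.perturbed_img = img      # current candidate image

    def run_adversarial_attack(self):
        for _ in range(self.max_steps):
            noise = np.random.uniform(-self.epsilon, self.epsilon, self.img.shape)
            candidate = np.clip(self.img + noise, 0.0, 1.0)
            probs = self.model.predict(candidate[np.newaxis, ...])
            if int(np.argmax(probs[0])) != self.label:
                self.perturbed_img = candidate  # success: prediction changed
                break
        return self.perturbed_img

    def is_perturbed(self):
        probs = self.model.predict(self.perturbed_img[np.newaxis, ...])
        return int(np.argmax(probs[0])) != self.label
```

A typical run, using the RandomModel stand-in from above:

```python
img = np.random.rand(32, 32, 3)
attack = RandomNoiseAttack(RandomModel(), img, label=0)
attack.run_adversarial_attack()
print(attack.is_perturbed())
```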