# The Approach

Attacking a classifier model essentially boils down to adding precisely calculated noise to the input image, thereby tricking the classifier into selecting an incorrect class. The goal of this project is to evaluate how effective an array of denoising algorithms is as a defense against such attacks.
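One common way to compute such noise is the Fast Gradient Sign Method (FGSM), which nudges each pixel in the direction that increases the classifier's loss. The sketch below is a minimal illustration, not this project's implementation; it assumes a PyTorch model that returns logits and batched images normalized to [0, 1], and the function name and epsilon value are illustrative.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Minimal FGSM sketch: shift each pixel by +/- epsilon along the
    sign of the loss gradient. Assumes `model` returns logits and
    `image` is a batched tensor with values in [0, 1]."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```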

## Requirements

For a given filter to be beneficial as a defense, it must meet the following requirements:

  1. The filter must remove enough of the adversarial perturbation that the classifier recovers the correct class (see the sketch after this list).
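
Below is a minimal sketch of the denoise-then-classify pipeline this kind of evaluation uses, assuming a PyTorch classifier; the channel-wise median filter stands in for whichever denoising algorithm is under test, and the function name and filter size are illustrative.

```python
import numpy as np
import torch
from scipy.ndimage import median_filter

def denoise_then_classify(model, adv_image, size=3):
    """Apply a channel-wise median filter to a (C, H, W) image in [0, 1],
    then re-classify it. The median filter is one example denoiser."""
    arr = adv_image.detach().cpu().numpy()
    # Filter each channel independently so values do not bleed across channels.
    denoised = torch.from_numpy(
        np.stack([median_filter(arr[c], size=size) for c in range(arr.shape[0])])
    )
    with torch.no_grad():
        logits = model(denoised.unsqueeze(0))  # add a batch dimension
    return logits.argmax(dim=1)
```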