# The Approach
Attacking a classifier model essentially boils down to adding precisely calculated noise to the input image so that the classifier selects an incorrect class. The goal of this project is to evaluate the efficacy of an array of denoising algorithms as defenses against such adversarial attacks.
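The original text does not name a specific attack, but the Fast Gradient Sign Method (FGSM) is one widely used way of computing such "precisely calculated noise". Below is a minimal sketch in PyTorch; the model interface, the epsilon value, and the tensor shapes are illustrative assumptions, not part of this project's code.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Craft an adversarial example with FGSM (sketch, not project code).

    Assumes `model` returns raw logits, `image` is a (1, C, H, W)
    tensor in [0, 1], and `label` is a (1,) tensor of class indices.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the direction that maximally increases the loss,
    # bounded per pixel by epsilon.
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```

The perturbation is imperceptibly small for typical epsilon values, yet it is often enough to flip the classifier's prediction, which is exactly the noise a denoising defense would try to remove.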
## Requirements
For a given filter to be beneficial to the defense, it should satisfy the following requirements:
1. The filter must remove enough of the adversarial perturbation to restore the classifier's correct prediction, without destroying the image features the classifier relies on (see the sketch after this list).
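As an illustration of what such a filter might look like in practice, here is a hedged sketch of one simple candidate denoiser, a per-channel median filter applied to the image before classification. The function name, the SciPy dependency, and the filter size are assumptions for illustration, not taken from the original text.

```python
import numpy as np
from scipy.ndimage import median_filter

def defend(image, size=3):
    """Denoise an (H, W, C) image with a per-channel median filter (sketch).

    A median filter can smooth away small adversarial perturbations
    while, ideally, preserving the edges and structures the classifier
    depends on.
    """
    # size=(size, size, 1) filters spatially within each channel
    # but does not mix values across channels.
    return median_filter(image, size=(size, size, 1))
```

In an evaluation pipeline, the filtered image would be fed to the classifier in place of the raw adversarial input, and the filter's benefit measured by how often the correct prediction is restored.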