Directory structure overhaul, poster almost done
wiki/Approach.md (new file, +7)
@@ -0,0 +1,7 @@
# The Approach
Attacking a classifier model essentially boils down to adding precisely calculated noise to the input image, thereby tricking the classifier into selecting an incorrect class. The goal is to understand the efficacy of an array of denoising algorithms as adversarial machine learning defenses.
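
To make the attack/defense pipeline concrete, the sketch below pairs one common attack, the Fast Gradient Sign Method (FGSM), with a simple median filter standing in for one of the denoising algorithms under study. This is an illustrative sketch, not code from this repository: `net`, `image`, and `label` are placeholders for any PyTorch classifier, an input batch scaled to `[0, 1]`, and its labels, and the function names are hypothetical.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(net, image, label, epsilon=0.03):
    """Craft adversarial noise with the Fast Gradient Sign Method."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(net(image), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by epsilon,
    # and keep pixel values in the valid [0, 1] range.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

def median_denoise(image, k=3):
    """Per-channel 2D median filter used as a denoising defense."""
    pad = k // 2
    padded = F.pad(image, [pad] * 4, mode="reflect")
    patches = F.unfold(padded, k)                      # (N, C*k*k, H*W)
    n, c = image.shape[:2]
    patches = patches.view(n, c, k * k, -1)
    return patches.median(dim=2).values.reshape(image.shape)

# Hypothetical evaluation step: attack, denoise, then re-classify.
# adv = fgsm_attack(net, image, label)
# restored = median_denoise(adv)
# recovered = (net(restored).argmax(dim=1) == label)
```

The same evaluation loop applies to any other denoiser: swap `median_denoise` for the filter under test and compare the classifier's accuracy on attacked versus denoised inputs.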
## Requirements
For a given filter to be beneficial to the defense, it must meet the following requirements:
1. The filter