Directory structure overhaul, poster almost done

Aidan Sharpe
2024-05-01 01:26:25 -04:00
parent bf2440ef86
commit 008ef906d7
106 changed files with 58 additions and 20 deletions

wiki/Approach.md Normal file

@@ -0,0 +1,7 @@
# The Approach
Attacking a classifier model essentially boils down to adding precisely calculated noise to the input image, thereby tricking the classifier into selecting an incorrect class. The goal of this project is to evaluate the efficacy of an array of denoising algorithms as defenses against such adversarial attacks.
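
As a concrete illustration, below is a minimal attack-then-defend sketch in PyTorch. FGSM is used here as a stand-in for whatever attack the project actually employs, and the names `fgsm_attack`, `defended_predict`, the `denoise` callable, and the `epsilon` value are all hypothetical choices for this example, not part of the project's code.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """One-step FGSM: perturb the image along the sign of the loss gradient."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # The "precisely calculated noise": an epsilon-scaled gradient sign
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()  # assumes pixels in [0, 1]

def defended_predict(model, denoise, image):
    """Defense under test: run the denoising filter before classification."""
    return model(denoise(image)).argmax(dim=1)
```

Comparing `model(adversarial).argmax(dim=1)` against `defended_predict(model, denoise, adversarial)` on a labeled test set gives a direct measure of how much classification accuracy a given filter recovers.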
## Requirements
For a given filter to be beneficial to the defense, it must satisfy the following requirements:
1. The filter