Moved to separate project on Gitea
Filter_Analysis/wiki/Approach.wiki
Normal file
@@ -0,0 +1,4 @@
= The Approach =

The goal is to use a filtering algorithm such as the [[https://en.wikipedia.org/wiki/Kuwahara_filter#|Kuwahara Filter]] to remove adversarial perturbations from an attacked image before it reaches the classifier.
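As a concrete illustration, a minimal Kuwahara filter might look like the sketch below. This is a NumPy-only reconstruction for this wiki, not the project's code; the function name and the `radius` parameter are assumptions.

{{{python
# Minimal Kuwahara filter sketch (illustrative names and defaults).
import numpy as np

def kuwahara(image, radius=2):
    """Kuwahara-filter a 2-D grayscale image.

    Each pixel is replaced by the mean of whichever of the four
    (radius+1) x (radius+1) windows meeting at that pixel has the
    lowest variance, smoothing noise while preserving edges.
    """
    img = np.asarray(image, dtype=np.float64)
    padded = np.pad(img, radius, mode="reflect")
    out = np.empty_like(img)
    for y in range(img.shape[0]):
        for x in range(img.shape[1]):
            py, px = y + radius, x + radius  # position in padded image
            best_var, best_mean = np.inf, 0.0
            # The four overlapping quadrant windows around the pixel.
            for y0, y1 in ((py - radius, py + 1), (py, py + radius + 1)):
                for x0, x1 in ((px - radius, px + 1), (px, px + radius + 1)):
                    window = padded[y0:y1, x0:x1]
                    var = window.var()
                    if var < best_var:
                        best_var, best_mean = var, window.mean()
            out[y, x] = best_mean
    return out
}}}

Because each output pixel is a window mean chosen by a variance comparison, the filter is not smoothly differentiable, which is what makes it a candidate non-gradient defense.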
Filter_Analysis/wiki/FilterAnalysis.wiki
Normal file
@@ -0,0 +1,7 @@
= Halting Gradient Attacks with Non-Gradient Defenses =

== Contents ==
- [[Tests]]
- [[Approach]]
- [[Rationale]]
- [[Notes]]
Filter_Analysis/wiki/Notes.wiki
Normal file
@@ -0,0 +1,18 @@
= Notes on Filter-Based Defenses =

== Engineering Design Principles ==
1. Clearly defined problem
   a) Defending against gradient-based attacks by using denoising filters as a buffer between an attacked image and a classifier (see the sketch after this list)
2. Requirements
3. Constraints
4. Engineering standards
5. Cite applicable references
6. Considered alternatives
   a) Iterate on the design
      i) Advantages
      ii) Disadvantages
      iii) Risks
7. Evaluation process
   a) Validation
8. Deliverables and timeline
9.
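A hedged sketch of the buffer idea in item 1a follows: the filter sits between the (possibly attacked) image and the classifier, so the model never sees raw pixels. `model` stands for an assumed PyTorch classifier and `kuwahara` for the filter sketch in [[Approach]]; neither name comes from the project.

{{{python
# Filter-as-buffer sketch (assumed names; `kuwahara` is the Approach sketch).
import torch

def defended_predict(model, images):
    # images: (N, 1, H, W) float tensor in [0, 1].
    # Denoise every image before the classifier sees it, so an attack
    # crafted against raw pixels must survive the filter first.
    filtered = torch.stack([
        torch.from_numpy(kuwahara(img[0].numpy())).float().unsqueeze(0)
        for img in images
    ])
    with torch.no_grad():
        return model(filtered).argmax(dim=1)
}}}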
Filter_Analysis/wiki/Tests.wiki
Normal file
@@ -0,0 +1,42 @@
= Test Process for Non-Gradient Filter Pipeline =

For each attack, the following training/test combinations are evaluated, with performance measured by $k$-fold cross-validation using $k=5$.

| Training | Test                    |
|----------|-------------------------|
| Clean    | Clean                   |
| Clean    | Attacked                |
| Clean    | Filtered (Not Attacked) |
| Clean    | Filtered (Attacked)     |
| Filtered | Filtered (Not Attacked) |
| Filtered | Filtered (Attacked)     |
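One way to drive these pairings is a small harness like the sketch below; `get_fold`, `train_model`, and `evaluate` are assumed callables supplied by the experiment, and only the pairings themselves come from the table above.

{{{python
# Sketch of the evaluation matrix with k-fold cross-validation (k=5).
CONDITIONS = [
    ("clean", "clean"),
    ("clean", "attacked"),
    ("clean", "filtered_clean"),
    ("clean", "filtered_attacked"),
    ("filtered", "filtered_clean"),
    ("filtered", "filtered_attacked"),
]

def run_tests(get_fold, train_model, evaluate, k=5):
    # Average accuracy over the k folds for each training/test pairing.
    results = {}
    for train_key, test_key in CONDITIONS:
        scores = []
        for fold in range(k):
            train_split, test_split = get_fold(train_key, test_key, fold)
            model = train_model(train_split)
            scores.append(evaluate(model, test_split))
        results[(train_key, test_key)] = sum(scores) / k
    return results
}}}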
Accuracy on the 10000-image test set at each attack strength:

| Epsilon | Original Accuracy | Attacked Accuracy | Filtered Accuracy |
|---------|-------------------|-------------------|-------------------|
| 0.05    | 0.9912            | 0.9605            | 0.9522            |
| 0.10    | 0.9912            | 0.8743            | 0.9031            |
| 0.15    | 0.9912            | 0.7107            | 0.8138            |
| 0.20    | 0.9912            | 0.4876            | 0.6921            |
| 0.25    | 0.9912            | 0.2714            | 0.5350            |
| 0.30    | 0.9912            | 0.1418            | 0.3605            |
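These notes do not record which attack produced the numbers above. Purely as an illustration, a one-step FGSM sweep over the same epsilon values could be structured like this; `model`, `loader`, and `defense` (the filter applied as a tensor-to-tensor callable) are assumptions, and FGSM is a stand-in for whatever gradient attack was used.

{{{python
# Illustrative epsilon sweep with FGSM as a stand-in gradient attack.
import torch
import torch.nn.functional as F

EPSILONS = [0.05, 0.10, 0.15, 0.20, 0.25, 0.30]

def fgsm(model, x, y, eps):
    # One-step fast gradient sign method: nudge each pixel by +/- eps
    # in the direction that increases the classification loss.
    x = x.clone().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def sweep(model, loader, defense):
    # Accuracy on attacked images, with and without the filter buffer.
    for eps in EPSILONS:
        hits_raw = hits_def = total = 0
        for x, y in loader:
            x_adv = fgsm(model, x, y, eps)
            with torch.no_grad():
                hits_raw += (model(x_adv).argmax(1) == y).sum().item()
                hits_def += (model(defense(x_adv)).argmax(1) == y).sum().item()
            total += y.numel()
        print(f"eps={eps:.2f}  attacked={hits_raw/total:.4f}  "
              f"filtered={hits_def/total:.4f}")
}}}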