# Engineering Design Principles

1. Clearly defined problem
   - Assess the efficacy of various denoising filters in preserving the accuracy of image classifier models under a noise-based attack.
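
A minimal sketch of the pipeline under study, assuming a PyTorch image classifier: a denoising filter is applied to a (possibly attacked) input before it reaches the model. The median filter, the 3x3 kernel size, and the helper names are illustrative stand-ins for the filters to be evaluated, not the chosen design.

```python
# Sketch of the defense pipeline: denoise the input, then classify.
import torch
import torch.nn.functional as F

def median_filter(images: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Apply a k x k median filter to a batch of images shaped (N, C, H, W)."""
    pad = k // 2
    padded = F.pad(images, (pad, pad, pad, pad), mode="reflect")
    # Unfold into k*k neighborhoods, then take the median per pixel.
    patches = padded.unfold(2, k, 1).unfold(3, k, 1)  # (N, C, H, W, k, k)
    return patches.reshape(*images.shape, k * k).median(dim=-1).values

def defended_predict(model: torch.nn.Module, images: torch.Tensor) -> torch.Tensor:
    """Denoise first, then classify; returns predicted class indices."""
    cleaned = median_filter(images)
    with torch.no_grad():
        return model(cleaned).argmax(dim=1)
```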

2. Requirements
   - The defense must use only an algorithmic (non-ML) approach
   - Must run faster than an auto-encoder-based denoiser
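
The speed requirement could be checked with a rough timing harness like the one below, which reuses the `median_filter` sketch above. The toy auto-encoder, batch shape, and repeat count are placeholder assumptions, not part of the design.

```python
# Rough timing harness for the "faster than an auto-encoder" requirement.
import time
import torch
import torch.nn as nn

batch = torch.rand(64, 1, 28, 28)            # MNIST-sized batch (illustrative)

baseline = nn.Sequential(                    # toy denoising auto-encoder baseline
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

def time_it(fn, *args, repeats: int = 20) -> float:
    """Average wall-clock seconds per call over `repeats` runs."""
    start = time.perf_counter()
    for _ in range(repeats):
        with torch.no_grad():
            fn(*args)
    return (time.perf_counter() - start) / repeats

print("filter      :", time_it(median_filter, batch))
print("auto-encoder:", time_it(baseline, batch))
```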

3. Constraints
   - Computing power
   - Memory usage
   - Impossible to know in advance who will attack a model or how

4. Engineering standards
   - [[https://peps.python.org/pep-0008/|PEP 8]]
   -

5. Cite applicable references
   - [[https://pytorch.org/tutorials/beginner/fgsm_tutorial.html|FGSM Attack]] (sketched below)
   - [[https://github.com/pytorch/examples/blob/main/mnist/main.py|MNIST Model]]
   - [[https://www.cs.toronto.edu/~kriz/cifar.html|CIFAR-10]]
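
For context, the referenced FGSM attack perturbs each input in the direction of the sign of the loss gradient. The helper below condenses the cited PyTorch tutorial into one function and is only a sketch.

```python
# Fast Gradient Sign Method (FGSM), condensed from the cited PyTorch tutorial:
# x_adv = clamp(x + epsilon * sign(dL/dx), 0, 1)
import torch
import torch.nn.functional as F

def fgsm_attack(model: torch.nn.Module, images: torch.Tensor,
                labels: torch.Tensor, epsilon: float) -> torch.Tensor:
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss, then keep pixels valid.
    return (images + epsilon * images.grad.sign()).clamp(0.0, 1.0).detach()
```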

6. Considered alternatives
   a) Iterate on the design
      i) Advantages
         - Potentially more computationally efficient than an ML approach
         - Will likely use less memory than a model used to clean inputs (see the sketch after this list)
         - No training stage (training is very computationally intensive)
      ii) Disadvantages
         - Potentially less effective than an ML approach
      iii) Risks
         - A conventional algorithm may be more vulnerable to reverse engineering
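
The memory advantage noted above can be made concrete by counting learned parameters: a conventional filter stores essentially nothing, while even a small denoising auto-encoder carries weights that must be kept in memory. The architecture below is a hypothetical example, not a proposed model.

```python
# Illustrating the memory argument with a hypothetical small auto-encoder.
import torch.nn as nn

autoencoder = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)

n_params = sum(p.numel() for p in autoencoder.parameters())
print(f"auto-encoder parameters: {n_params}")
print(f"approx. weight size (float32): {4 * n_params / 1024:.1f} KiB")
# A median or Gaussian filter, by contrast, stores only its kernel settings.
```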

7. Evaluation process
   - Cross-validation
   - Effectiveness will be measured as the percentage of correct classifications
   - Testing clean vs. filtered training data
   - Ablation variables:
     - Different models
     - Different datasets
     - Different filters
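
A sketch of how this evaluation could be run, assuming a PyTorch `DataLoader` and reusing the illustrative `fgsm_attack` and `median_filter` helpers sketched earlier; the metric is plain classification accuracy.

```python
# Sketch of the evaluation loop: accuracy is measured per (model, dataset,
# filter) combination, on clean and on FGSM-attacked inputs.
import torch

def accuracy(model, loader, attack_eps=None, denoise=None) -> float:
    """Fraction of correctly classified samples under the given conditions."""
    correct, total = 0, 0
    model.eval()
    for images, labels in loader:
        if attack_eps is not None:
            images = fgsm_attack(model, images, labels, attack_eps)  # sketched above
        if denoise is not None:
            images = denoise(images)
        with torch.no_grad():
            preds = model(images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Example ablation over filters at one attack strength (values illustrative):
# for name, filt in {"none": None, "median": median_filter}.items():
#     print(name, accuracy(model, test_loader, attack_eps=0.1, denoise=filt))
```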

8. Deliverables and timeline