This paper provides an overview of backdoor attacks
and defenses in machine learning across multiple domains,
including computer vision, natural language processing, and
federated learning. Backdoor attacks typically inject poisoned
samples into the training data so that the trained model produces
attacker-chosen outputs when specific triggers appear in the test data.
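To make the poisoning mechanism concrete, here is a minimal sketch of trigger-based data poisoning on image data. The function name, trigger shape, and poisoning rate are illustrative assumptions, not details from the paper:

```python
import numpy as np

def poison_dataset(images, labels, target_label=0, rate=0.1, seed=42):
    """Illustrative backdoor poisoning: stamp a small white patch
    (the trigger) on a fraction of training images and relabel them
    to the attacker-chosen target class."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * rate)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i, -3:, -3:] = 1.0   # 3x3 trigger patch in the corner
        labels[i] = target_label    # flip label to the target class
    return images, labels, idx

# Tiny demo: 100 grayscale 8x8 "images", all originally labeled 1.
X = np.zeros((100, 8, 8))
y = np.ones(100, dtype=int)
X_p, y_p, idx = poison_dataset(X, y, target_label=0, rate=0.1)
```

A model trained on `(X_p, y_p)` behaves normally on clean inputs but, having learned the spurious patch-to-label correlation, predicts the target class whenever the trigger patch is present at test time.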
While many defense techniques have been proposed, they are often of
limited effectiveness and difficult to apply across different
domains. This paper proposes a general defense method that combines
multiple techniques and adapts to dynamic environments.
This work is licensed under a Creative Commons Attribution 4.0 International License.