Document information
Title:
2N labeling defense method against adversarial attacks by filtering and extended class label set
Author(s):
Szűcs, Gábor; Kiss, Richárd
Year of publication:
2022
Publisher:
Springer
Abstract:
The rapid improvement of deep learning methods has resulted in breakthroughs in image classification; however, these models are sensitive to adversarial perturbations, which can cause serious problems. Adversarial attacks try to change the model output by adding noise to the input, and in our research we propose a combined defense method against them. Two defense approaches have evolved in the literature: one robustifies the attacked model for higher accuracy, and the other detects adversarial examples. Only very few papers discuss both approaches, so our aim was to combine them to obtain a more robust model and to examine the combination, in particular the filtering capability of the detector. Our contribution is that filtering based on the decision of the detector is able to enhance the accuracy, which we proved theoretically.
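The abstract outlines a detect-then-filter pipeline: a detector flags suspected adversarial inputs, and only inputs that pass the filter reach the classifier, so accuracy on the accepted inputs can improve. The sketch below illustrates that general idea only; `detect_adversarial` and `classify` are hypothetical placeholders standing in for the paper's detector (built on the extended 2N label set) and its trained model, and the threshold heuristic is an assumption, not the authors' method.

```python
import numpy as np

# Minimal sketch of a detect-then-filter defense, assuming placeholder
# components; neither function reproduces the paper's 2N-labeling detector
# or its trained classifier.

def detect_adversarial(x: np.ndarray) -> bool:
    """Return True if the input is judged to be an adversarial example."""
    # Placeholder heuristic (assumption): flag inputs with high mean intensity.
    return float(np.abs(x).mean()) > 0.6

def classify(x: np.ndarray) -> int:
    """Placeholder classifier returning a class index in [0, 9]."""
    return int(np.argmax(x.reshape(-1)[:10]))

def filtered_predict(batch):
    """Filter inputs flagged by the detector, then classify the rest.

    Accuracy is evaluated only on the accepted inputs, which is the
    quantity the filtering step is meant to improve.
    """
    accepted, rejected = [], []
    for x in batch:
        if detect_adversarial(x):
            rejected.append(x)                 # never reaches the classifier
        else:
            accepted.append((x, classify(x)))  # prediction on accepted input
    return accepted, rejected

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = [rng.uniform(0.0, 0.5, (3, 32, 32)) for _ in range(4)]
    noisy = [rng.uniform(0.5, 1.0, (3, 32, 32)) for _ in range(4)]  # stand-ins for attacked inputs
    accepted, rejected = filtered_predict(clean + noisy)
    print(f"accepted {len(accepted)} inputs, filtered out {len(rejected)}")
```

Swapping the placeholders for a real detector and classifier leaves the filtering logic unchanged; only the two callables need to be replaced.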
Description:
CC BY
URI:
https://link.springer.com/article/10.1007/s11042-022-14021-5
https://dlib.phenikaa-uni.edu.vn/handle/PNK/8333
Collection
OER - Information Technology