Document information
Record metadata
| DC Field | Value | Language |
|---|---|---|
| dc.contributor.author | Hoedt, Katharina | - |
| dc.contributor.author | Praher, Verena | - |
| dc.contributor.author | Flexer, Arthur | - |
| dc.date.accessioned | 2023-04-26T03:57:39Z | - |
| dc.date.available | 2023-04-26T03:57:39Z | - |
| dc.date.issued | 2022 | - |
| dc.identifier.uri | https://link.springer.com/article/10.1007/s00521-022-07918-7 | - |
| dc.identifier.uri | https://dlib.phenikaa-uni.edu.vn/handle/PNK/8325 | - |
| dc.description | CC BY | vi |
| dc.description.abstract | Given the rise of deep learning and its inherent black-box nature, the desire to interpret these systems and explain their behaviour became increasingly more prominent. The main idea of so-called explainers is to identify which features of particular samples have the most influence on a classifier’s prediction, and present them as explanations. Evaluating explainers, however, is difficult, due to reasons such as a lack of ground truth. In this work, we construct adversarial examples to check the plausibility of explanations, perturbing input deliberately to change a classifier’s prediction. This allows us to investigate whether explainers are able to detect these perturbed regions as the parts of an input that strongly influence a particular classification. Our results from the audio and image domain suggest that the investigated explainers often fail to identify the input regions most relevant for a prediction; hence, it remains questionable whether explanations are useful or potentially misleading. | vi |
| dc.language.iso | en | vi |
| dc.publisher | Springer | vi |
| dc.subject | black-box nature | vi |
| dc.subject | deep audio and image classifiers | vi |
| dc.title | Constructing adversarial examples to investigate the plausibility of explanations in deep audio and image classifiers | vi |
| dc.type | Book | vi |
| Bộ sưu tập | ||
| OER - Công nghệ thông tin | ||
List of attached files:
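
The abstract above describes checking the plausibility of explanations by adversarially perturbing an input and then testing whether an explainer highlights the perturbed regions as the most relevant ones. The sketch below illustrates that general idea in Python; the PyTorch setup, the FGSM-style perturbation, the epsilon value, and the simple occlusion explainer are illustrative assumptions, not the authors' actual method or code.

```python
# Minimal sketch: perturb an input, explain the resulting prediction, and
# measure how much of the explanation's top-relevance region was actually
# perturbed. Model, epsilon, and the occlusion explainer are assumptions.
import torch
import torch.nn.functional as F


def fgsm_perturb(model, x, y, eps=0.03):
    """One-step FGSM: nudge x along the loss gradient to change the prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()


def occlusion_saliency(model, x, target, patch=8):
    """Toy occlusion explainer: relevance = drop in target score when a patch is zeroed."""
    base = model(x).softmax(dim=1)[0, target].item()
    _, _, H, W = x.shape
    sal = torch.zeros(H, W)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            x_occ = x.clone()
            x_occ[:, :, i:i + patch, j:j + patch] = 0.0
            score = model(x_occ).softmax(dim=1)[0, target].item()
            sal[i:i + patch, j:j + patch] = base - score
    return sal


def perturbation_overlap(x, x_adv, sal, top_k=0.1):
    """Fraction of the top-k most relevant pixels that were actually perturbed."""
    perturbed = (x_adv - x).abs().sum(dim=1)[0] > 1e-6  # H x W boolean mask
    k = max(1, int(top_k * sal.numel()))
    top_idx = sal.flatten().topk(k).indices
    return perturbed.flatten()[top_idx].float().mean().item()


# Example use (model: a trained image classifier in eval mode,
# x: tensor of shape (1, C, H, W) scaled to [0, 1], y: label tensor of shape (1,)):
# x_adv = fgsm_perturb(model, x, y)
# adv_class = model(x_adv).argmax(dim=1).item()
# sal = occlusion_saliency(model, x_adv, adv_class)
# print("overlap with perturbed region:", perturbation_overlap(x, x_adv, sal))
```

A low overlap score under a setup like this would mirror the abstract's finding that explainers often fail to identify the input regions most responsible for the changed prediction.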

