
Search Results

Results 1-1 of 1.
  • Authors: Katharina Hoedt; Verena Praher; Arthur Flexer (2022)

    Given the rise of deep learning and its inherent black-box nature, the desire to interpret these systems and explain their behaviour has become increasingly prominent. The main idea of so-called explainers is to identify which features of particular samples have the most influence on a classifier’s prediction, and present them as explanations. Evaluating explainers, however, is difficult, due to reasons such as a lack of ground truth. In this work, we construct adversarial examples to check the plausibility of explanations, perturbing input deliberately to change a classifier’s prediction. This allows us to investigate whether explainers are able to detect these perturbed regions as the parts of an input that strongly influence a particular classification. Our results from the audi...
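
    The sketch below is not the authors' implementation; it only illustrates the idea described in the abstract, assuming a toy PyTorch classifier, a hand-picked perturbation patch, and a plain input-gradient explainer. It perturbs a masked region until the prediction flips, then measures how much of the explainer's saliency falls on that region.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Tiny stand-in for an audio/image classifier (purely illustrative).
    model = nn.Sequential(
        nn.Conv2d(1, 4, kernel_size=3, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(4 * 8 * 8, 2),
    )
    model.eval()

    x = torch.randn(1, 1, 8, 8)            # toy "input"
    orig_pred = model(x).argmax(dim=1)     # original prediction

    # Restrict the perturbation to one patch so the target region is localized.
    mask = torch.zeros_like(x)
    mask[..., :4, :4] = 1.0

    # PGD-style signed-gradient steps inside the patch until the prediction flips.
    x_adv = x.clone()
    for _ in range(50):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x_adv), orig_pred)
        loss.backward()
        x_adv = x_adv + 0.1 * mask * x_adv.grad.sign()
        if model(x_adv).argmax(dim=1).item() != orig_pred.item():
            break
    adv_pred = model(x_adv).argmax(dim=1)

    # Very simple explainer: saliency = |d(logit of predicted class) / d(input)|.
    x_adv = x_adv.detach().requires_grad_(True)
    model(x_adv)[0, adv_pred].backward()
    saliency = x_adv.grad.abs()

    # Plausibility check: how much of the saliency mass falls on the perturbed patch?
    inside = (saliency * mask).sum() / (saliency.sum() + 1e-8)
    print(f"prediction flipped: {orig_pred.item() != adv_pred.item()}")
    print(f"fraction of saliency on the perturbed patch: {inside.item():.2f}")

    A plausible explainer should concentrate saliency on the perturbed patch once the prediction has flipped; the patch location, step size, and overlap score above are illustrative assumptions, not quantities taken from the paper.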