Browsing by Subject: deep neural network

  • Authors: Zhiquan, He; Xujia, Lan; Jianhe, Yuan (2023)

    An adversarial attack aims to fool a deep neural network by adding a small perturbation to the input image, maximizing the attack success rate and the resulting image quality under an l_p norm perturbation constraint. However, the l_p norm does not correlate well with human perception of image quality, and attack methods based on an l_0 norm constraint usually suffer from high computational cost due to the iterative search for candidate pixels to modify.
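
    For context on l_p-constrained attacks, below is a minimal sketch of a standard one-step l_inf-bounded attack (FGSM), not the method of the work listed above. It assumes a trained PyTorch classifier `model`, an input batch `x` in [0, 1], true labels `y`, and an illustrative perturbation budget `eps`.

        import torch.nn.functional as F

        def fgsm_attack(model, x, y, eps=8 / 255):
            # One-step FGSM: perturb x inside an l_inf ball of radius eps,
            # i.e. each pixel may change by at most eps.
            x_adv = x.clone().detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            loss.backward()
            # Step in the sign of the gradient to increase the loss while
            # keeping ||x_adv - x||_inf <= eps.
            x_adv = x_adv.detach() + eps * x_adv.grad.sign()
            return x_adv.clamp(0, 1)

    Under an l_0 constraint, by contrast, the attacker limits how many pixels change rather than how much each pixel changes, which typically requires an iterative search over candidate pixels, the cost the abstract refers to.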