Mimic Octopus Attack: Dynamic Camouflage Adversarial Examples using Mimetic Feature for 3D Humans

Inscrypt 2022

1State Key Laboratory of Information Security, Institute of Information Engineering, CAS
2School of Cyber Security, University of Chinese Academy of Sciences, Beijing 100049, China

Mimic Octopus Attack (MOA) architecture for generating robust and mimetic adversarial camouflage textures. The areas enclosed by the blue and red dashed lines indicate mimetic feature extraction and adversarial texture generation, respectively. The gray dotted line represents the back-propagation direction of the gradient descent.

Abstract


Physical adversarial attacks on object detection have become an attractive research topic. Many works have proposed adversarial patches or camouflage that perform successful attacks in the real world, but these methods have drawbacks, especially for 3D humans. First, camouflage-based methods are neither dynamic nor mimetic enough: the adversarial texture is not rendered in conjunction with the background features of the target, which to some extent violates the definition of adversarial examples. Second, non-rigid physical surfaces are not modeled in detail, so the rendered textures are not robust and appear very rough in 3D scenarios. In this paper, we propose the Mimic Octopus Attack (MOA) to close this gap: a novel method for generating a mimetic and robust physical adversarial texture that camouflages target objects against detectors under multiple views and scenes. To achieve joint optimization, it combines iterative training with a mimetic style loss, an adversarial loss, and human-eye intuition. Experiments in specific scenarios of CARLA, which is widely recognized as a substitute for the physical domain, demonstrate its advanced performance: a 67.62% decrease in mAP@0.5 for the YOLO-V5 detector compared to the normal setting, an average improvement of 4.14% over state-of-the-art attacks, and an average ASR of up to 85.28%. Moreover, MOA's robustness when attacking diverse populations and detectors demonstrates its outstanding transferability.

Concept demo comparing attack effects on 3D humans. (a) Normal, without adversarial texture. (b) Classic AdvPatch. (c) Robust AdvTexture. (d) Naturalistic. (e) FCA. (f) Our Mimic Octopus Attack (MOA).


MOA Architecture


As shown in the architecture figure above, MOA consists of two phases: (1) extracting multi-view features and mesh fold faces, and (2) rendering the final attack texture.
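As a rough orientation only, here is a minimal PyTorch-style sketch of how the two phases could fit together in one optimization loop. The function names (optimize_texture, render_fn, mimetic_loss, adversarial_loss) and the loss weighting are illustrative assumptions, not the authors' released code.

import torch

def optimize_texture(texture, views, render_fn, mimetic_loss, adversarial_loss,
                     steps=500, lr=0.01, style_weight=1.0, adv_weight=1.0):
    """Hedged sketch of MOA's outer loop (names and weights are assumptions).

    texture   -- trainable texture tensor applied to the 3D human mesh
    views     -- multi-view / multi-scene background descriptions
    render_fn(texture, view) -> image of the textured human in that scene
    """
    texture = texture.detach().clone().requires_grad_(True)
    optimizer = torch.optim.Adam([texture], lr=lr)
    for _ in range(steps):
        loss = texture.new_zeros(())
        for view in views:
            # Phase 2: render the candidate texture into this view.
            rendered = render_fn(texture, view)
            # Phase 1 features enter through mimetic_loss (scene style),
            # while adversarial_loss attacks the detector.
            loss = loss + style_weight * mimetic_loss(rendered, view) \
                        + adv_weight * adversarial_loss(rendered)
        optimizer.zero_grad()
        loss.backward()   # gradient flows back to the texture (gray dotted arrow)
        optimizer.step()
    return texture.detach()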

Extract Multi-View feature and Mesh fold face


Render final attack texture


Loss Function


Mimetic Camouflage Loss
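As a hedged illustration only: one common way to realize a mimetic style term is a Gram-matrix loss in the spirit of neural style transfer, matching the feature statistics of the rendered person to the surrounding scene. The sketch below assumes a feature extractor feat_fn (e.g., an intermediate CNN layer) and is not claimed to be the paper's exact formulation.

import torch
import torch.nn.functional as F

def gram_matrix(feat):
    # feat: (C, H, W) feature map -> normalized (C, C) Gram matrix (style summary).
    c, h, w = feat.shape
    f = feat.reshape(c, h * w)
    return f @ f.t() / (c * h * w)

def mimetic_loss_example(rendered, background, feat_fn):
    """Illustrative mimetic camouflage loss (assumption, not the paper's exact term):
    match second-order feature statistics of the rendered person to the scene."""
    return F.mse_loss(gram_matrix(feat_fn(rendered)),
                      gram_matrix(feat_fn(background)))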


Adversarial Loss
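Likewise a hedged sketch: adversarial losses in detector-camouflage works typically suppress the detector's person confidence over all candidate boxes. The YOLO-style output layout assumed below ([x, y, w, h, objectness, class scores]) is a simplifying assumption, not necessarily how MOA consumes YOLO-V5's predictions.

import torch

def adversarial_loss_example(detections, person_class=0):
    """Illustrative adversarial loss (assumption): suppress the strongest
    person detection among the detector's predictions.

    detections -- (N, 5 + num_classes) tensor in a common single-image
                  YOLO-style layout: [x, y, w, h, objectness, class scores...].
    """
    obj = detections[:, 4]
    cls = detections[:, 5 + person_class]
    person_scores = obj * cls            # confidence that a box contains a person
    return person_scores.max()           # minimizing this hides the person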


Evaluation


We compare the proposed attack with several classic and recent advanced adversarial camouflage attacks, including AdvPatch, AdvTexture [14], Naturalistic [13], and FCA [39]. The comparison results are listed in Tab. 1. Note that the target model is YOLO-V5 [17] in all cases, and the experiments are conducted in the CARLA simulator. Our MOA significantly outperforms the other methods in both the mAP and Conf metrics. In addition, the subjective HEI metric shows that MOA is more imperceptible than the other methods. We obtain an average improvement of 4.14% over the state-of-the-art attack FCA, and an average increase of 30.38% in attack strength over earlier methods. Moreover, our approach is the first to be rated imperceptible under our HEI metric.
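For readers reproducing the numbers above, the small helpers below show the quantities involved: the relative mAP@0.5 drop is what is reported as a "67.62% decrease", and the attack success rate (ASR) is shown under a common definition (person not detected above a confidence threshold). Both definitions are our assumptions, not necessarily the paper's exact protocol.

def relative_map_drop(map_normal, map_attacked):
    """Relative decrease in mAP@0.5 caused by the attack."""
    return (map_normal - map_attacked) / map_normal

def attack_success_rate(per_frame_max_conf, threshold=0.5):
    """ASR under a common definition (assumption): a frame counts as a success
    when the detector's best person confidence falls below the threshold."""
    hits = sum(1 for c in per_frame_max_conf if c < threshold)
    return hits / max(len(per_frame_max_conf), 1)

# Example: best person confidences from five frames -> ASR = 0.8
print(attack_success_rate([0.10, 0.30, 0.70, 0.20, 0.05]))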

Multi-Scene and Multi-View Adversarial Attack Evaluation


Transferability Evaluation


Diversity Adversarial Attack Evaluation


Citation



@InProceedings{MOA_2022_Inscrypt,
    author    = {Li, Jing and Zhang, Sisi and Wang, Xingbin and Hou, Rui},
    title     = {Mimic Octopus Attack: Dynamic Camouflage Adversarial Examples using Mimetic Feature for 3D Humans},
    booktitle = {Inscrypt},
    year      = {2022}
}
            

Paper


Mimic Octopus Attack: Dynamic Camouflage Adversarial Examples using Mimetic Feature for 3D Humans

Jing Li, Sisi Zhang, Xingbin Wang and Rui Hou

Paper
Slides
BibTeX
Code