PPDTSA: Privacy-preserving Deep Transformation Self-attention Framework For Object Detection

2021 IEEE Global Communications Conference (GLOBECOM), 2021

Abstract

To perform competitive object detection with privacy guarantees, we propose an end-to-end model called Privacy-preserving Deep Transformation Self-attention (PPDTSA). This model ensures the privacy of the inference results and has a low-complexity hierarchical structure with relatively few hyper-parameters. Consistency of prediction is achieved through the encoding and decoding blocks of the self-attention mechanism, which locate points of interest. The focal loss is estimated from the foreground-background imbalance. The remaining dense blocks retain image detail and expand the Region of Interest. At the same time, the objects detected in the image are protected by a user-specified amount of privacy noise. Experimental results demonstrate that PPDTSA outperforms three other state-of-the-art object detection models on the MOT20 dataset.
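The abstract only states that the loss is estimated from the foreground-background imbalance; the sketch below assumes the standard focal loss of Lin et al. (RetinaNet), the usual choice for this imbalance. The parameter names `alpha` and `gamma` are the conventional ones, not taken from the paper.

```python
import math

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Focal loss for one binary prediction (assumed formulation, Lin et al.).

    p: predicted probability of the foreground class; y: label in {0, 1}.
    The (1 - p_t)^gamma factor down-weights easy examples, so the rare
    foreground objects dominate the loss despite the abundant background.
    """
    p_t = p if y == 1 else 1.0 - p            # probability of the true class
    alpha_t = alpha if y == 1 else 1.0 - alpha
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A well-classified background pixel contributes far less than a
# misclassified foreground object at the same raw score:
easy_background = focal_loss(0.1, 0)   # p_t = 0.9, heavily down-weighted
hard_foreground = focal_loss(0.1, 1)   # p_t = 0.1, dominates the loss
```

With `gamma = 0` and `alpha = 0.5` this reduces (up to a constant factor) to ordinary cross-entropy; increasing `gamma` sharpens the focus on hard examples.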
