Fooling Automated Surveillance Cameras - Adversarial Patches to Attack Person Detection

Danush Shekar
Published: at 07:00 PM

Machine learning algorithms have gained widespread attention across many industries, and surveillance and security is one area where they are increasingly applied. Recent research has focused on crafting adversarial image patches that cause such models, say a classifier, to ignore the object they are placed on. The arXiv paper I will be presenting is one such example: the authors describe an approach for generating adversarial patches that a person can wear or hold in order to evade a person-detection model.
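To give a feel for the core idea, here is a minimal sketch of patch optimization: pixels of a small patch are updated by gradient descent to drive down a detector's "person" score. This is a toy illustration only; the `person_score` function below is a hypothetical linear surrogate, whereas the paper attacks the objectness output of a real detector (YOLOv2) with additional loss terms.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a detector's "person" score: a fixed
# linear model over a 16x16 RGB image. (The real attack would
# backpropagate through a neural network detector instead.)
w = rng.normal(size=(16, 16, 3))

def person_score(image):
    return float((w * image).sum())

# Paste a 6x6 patch onto the image at a fixed location and optimize
# its pixels to minimize the score. For this linear surrogate, the
# gradient of the score w.r.t. the patch is just the matching slice of w.
image = rng.uniform(size=(16, 16, 3))
patch = rng.uniform(size=(6, 6, 3))
y, x = 5, 5  # top-left corner of the patch

def apply_patch(image, patch):
    out = image.copy()
    out[y:y + 6, x:x + 6] = patch
    return out

before = person_score(apply_patch(image, patch))

lr = 0.5
for step in range(100):
    grad = w[y:y + 6, x:x + 6]                     # d(score)/d(patch)
    patch = np.clip(patch - lr * grad, 0.0, 1.0)   # keep pixels in [0, 1]

after = person_score(apply_patch(image, patch))
print(before, after)  # the optimized patch lowers the "person" score
```

In the actual paper the same loop runs over many training images with the patch randomly transformed (scaled, rotated, placed on detected persons), so that the optimized patch transfers to the physical world.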

Additional resources: