Publications by authors named "Battista Biggio"

Article Synopsis
  • The paper examines adversarial attacks and defenses in multi-label classification, highlighting how domain knowledge can help identify incoherent predictions caused by these attacks.
  • By integrating first-order logic constraints into a semi-supervised learning framework, the authors demonstrate that classifiers can reject samples that don't align with the established domain knowledge.
  • Their findings reveal that even without prior knowledge of specific attacks, domain constraints can effectively detect adversarial examples, suggesting a path toward more resilient multi-label classifiers.
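The rejection mechanism the synopsis describes lends itself to a compact illustration. The sketch below is not the authors' implementation: the labels, the two constraints, the product t-norm relaxation, and the threshold are all invented for the example. It only shows the core idea, that predictions violating domain knowledge (e.g., "cat" without "animal") can be flagged as likely adversarial.

```python
import numpy as np

# Hypothetical label indices for a toy multi-label animal classifier.
CAT, DOG, ANIMAL = 0, 1, 2

def constraint_loss(p):
    """Degree to which a prediction vector p (per-label probabilities)
    violates two illustrative first-order logic constraints, relaxed
    to [0, 1] with a product t-norm:
      1) cat => animal       penalize p[CAT] * (1 - p[ANIMAL])
      2) not (cat and dog)   penalize p[CAT] * p[DOG]
    """
    return p[CAT] * (1.0 - p[ANIMAL]) + p[CAT] * p[DOG]

def reject_if_incoherent(p, threshold=0.3):
    """Reject a sample whose predictions are incoherent with domain
    knowledge, as adversarially manipulated samples often are."""
    return constraint_loss(p) > threshold

# Clean prediction: "cat" and "animal" both on -> coherent, accepted.
print(reject_if_incoherent(np.array([0.9, 0.1, 0.95])))   # False
# Attacked prediction: "cat" on but "animal" off -> rejected.
print(reject_if_incoherent(np.array([0.9, 0.1, 0.05])))   # True
```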

Prior work has shown that multibiometric systems are vulnerable to presentation attacks under the assumption that an attack's matching score distribution is identical to that of genuine users, i.e., without fabricating any fake trait. We have recently shown that this assumption is not representative of current fingerprint and face presentation attacks, leading one to overestimate the vulnerability of multibiometric systems and to design less effective fusion rules. In this paper, we overcome these limitations by proposing a statistical meta-model of face and fingerprint presentation attacks that characterizes a wider family of fake score distributions, including distributions of known and, potentially, unknown attacks.
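To make the idea of a parametric family of fake score distributions concrete, here is a minimal sketch. The Beta-distributed score model, the attack-strength parameter, the sum-rule fusion, and the threshold are illustrative assumptions, not the paper's actual meta-model.

```python
import numpy as np

# Assumed meta-model for this sketch: presentation-attack match scores in
# [0, 1] follow a Beta distribution whose mean slides with an attack-strength
# parameter alpha in [0, 1] (alpha=0: impostor-like, alpha=1: genuine-like).

def fake_score_params(alpha, concentration=10.0):
    mean = 0.2 + 0.6 * alpha
    return mean * concentration, (1.0 - mean) * concentration

def far_under_attack(fusion_threshold, alphas, n=100_000, seed=0):
    """Empirical false-accept rate of a sum-rule fusion of two matchers
    (fingerprint presented as a fake, face from a zero-effort impostor)
    across the whole family of attack strengths."""
    rng = np.random.default_rng(seed)
    rates = []
    for alpha in alphas:
        a, b = fake_score_params(alpha)
        fused = rng.beta(a, b, n) + rng.beta(2.0, 8.0, n)  # fake + impostor
        rates.append((fused > fusion_threshold).mean())
    return rates

# FAR grows smoothly with attack strength, letting one stress-test a fusion
# rule against known and hypothetical (unseen) attacks alike.
print(far_under_attack(fusion_threshold=1.2, alphas=[0.0, 0.5, 1.0]))
```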


In spam and malware detection, attackers exploit randomization to obfuscate malicious data and increase their chances of evading detection at test time; for example, malware code is typically obfuscated with random strings or byte sequences to hide known exploits.
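As a toy illustration of that evasion strategy (not taken from the paper), the sketch below shows how appending a random string defeats an exact hash-based signature while leaving the payload untouched; names such as EXPLOIT_PAYLOAD are placeholders.

```python
import hashlib
import random
import string

# Known-bad signature database: exact SHA-256 hashes of known samples.
KNOWN_BAD_HASHES = {hashlib.sha256(b"EXPLOIT_PAYLOAD").hexdigest()}

def detect(sample: bytes) -> bool:
    """Toy signature detector: flags only exact hash matches."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD_HASHES

def obfuscate(sample: bytes, rng: random.Random) -> bytes:
    """Append a random suffix. A real obfuscator would preserve program
    semantics; here the random string merely changes the byte signature."""
    junk = "".join(rng.choices(string.ascii_letters, k=16)).encode()
    return sample + b"#" + junk

rng = random.Random(42)
payload = b"EXPLOIT_PAYLOAD"
print(detect(payload))                   # True: hash matches the signature
print(detect(obfuscate(payload, rng)))   # False: randomization evades it
```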


Pattern recognition and machine learning techniques have been increasingly adopted in adversarial settings such as spam, intrusion, and malware detection, although their security against well-crafted attacks that aim to evade detection by manipulating data at test time has not yet been thoroughly assessed. While previous work has mainly focused on devising adversary-aware classification algorithms to counter evasion attempts, only a few authors have considered the impact of using reduced feature sets on classifier security against the same attacks. An interesting preliminary result is that classifier security to evasion may even be worsened by the application of feature selection.
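The following sketch, with invented weights, illustrates one way this can happen for a linear classifier: the minimal L2 perturbation needed to evade f(x) = w·x + b from a point x is |f(x)| / ||w||, and discarding features can concentrate the weight vector and shrink that distance.

```python
import numpy as np

def min_evasion_distance(w, b, x):
    """Minimal L2 perturbation that moves x across the boundary of a
    linear classifier f(x) = w.x + b, i.e., |f(x)| / ||w||_2."""
    w, x = np.asarray(w, float), np.asarray(x, float)
    return abs(w @ x + b) / np.linalg.norm(w)

b = -1.0
x_full = np.array([1.0, 1.0, 1.0, 1.0])
w_full = np.array([0.5, 0.5, 0.5, 0.5])  # weight spread over four features

# Feature selection keeps two features; the retrained weights (assumed
# here for illustration) concentrate the same decision score on them.
x_red = x_full[:2]
w_red = np.array([1.0, 1.0])

print(min_evasion_distance(w_full, b, x_full))  # 1.00: harder to evade
print(min_evasion_distance(w_red, b, x_red))    # ~0.71: evasion got cheaper
```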
