Secure ML Demo - Deep Learning security

Free demo for the security evaluation of Deep Learning algorithms


Secure ML Demo has been partially developed with the support of the European Union's ALOHA project (Horizon 2020 Research and Innovation programme, grant agreement No. 780788).

This web demo allows the user to evaluate the security of a neural network against worst-case input perturbations [1]. Attackers add such carefully crafted perturbations to inputs to create adversarial examples, which they feed to the network to perform evasion attacks, causing it to misclassify [2]. To defend a system, we first need to evaluate the effectiveness of such attacks. During the security evaluation process, the network is tested against increasing levels of perturbation, and its accuracy is tracked to build a security evaluation curve. This curve, which shows how accuracy drops as the maximum allowed input perturbation grows, lets the model designer directly compare different networks and countermeasures. The user can also generate and inspect adversarial examples to see how the perturbation affects the network's outputs.
For further details on attack algorithms and defense methods, we refer the reader to Biggio and Roli [3].
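The security evaluation procedure described above can be sketched in a few lines of code. The snippet below is a minimal, hypothetical illustration (not the demo's actual implementation): it uses a toy linear classifier and an FGSM-style worst-case L-infinity perturbation, then measures accuracy over an increasing grid of perturbation budgets to produce the points of a security evaluation curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary test set: two Gaussian blobs (a stand-in for real data).
n = 200
X = np.vstack([rng.normal(-1.0, 0.7, (n, 2)), rng.normal(1.0, 0.7, (n, 2))])
y = np.hstack([-np.ones(n), np.ones(n)])

# A fixed linear "network" f(x) = sign(w.x + b); any differentiable
# model could take its place in a real evaluation.
w, b = np.array([1.0, 1.0]), 0.0

def predict(X):
    return np.sign(X @ w + b)

def fgsm(X, y, eps):
    # Worst-case L-inf perturbation for a linear model: move each point
    # against its true label, along the sign of the score gradient.
    return X - eps * y[:, None] * np.sign(w)[None, :]

# Security evaluation: accuracy under increasing perturbation budgets.
eps_grid = [0.0, 0.2, 0.4, 0.8, 1.6]
curve = []
for eps in eps_grid:
    X_adv = fgsm(X, y, eps)
    acc = float(np.mean(predict(X_adv) == y))
    curve.append(acc)

# (eps, accuracy) pairs: the points of the security evaluation curve.
print(list(zip(eps_grid, curve)))
```

Plotting `curve` against `eps_grid` yields the drop-in-accuracy curve discussed above; a more robust model or an effective countermeasure shifts the curve upward, which is exactly what makes the curve useful for comparison.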

[1] Szegedy et al., Intriguing Properties of Neural Networks, ICLR 2014
[2] Biggio et al., Evasion Attacks against Machine Learning at Test Time, ECML PKDD 2013
[3] Biggio and Roli, Wild Patterns: Ten Years after the Rise of Adversarial Machine Learning, Pattern Recognition, 2018


Pluribus One S.r.l.

Via Bellini 9, 09128, Cagliari (CA)


PEC: pluribus-one[at]


Legal entity

Share capital: € 10008

VAT no.: 03621820921

R.E.A.: Cagliari 285352


University of Cagliari

Pluribus One is a spin-off of the Department of Electrical and Electronic Engineering, University of Cagliari, Italy