Vulnerability evaluation of Android malware detectors against adversarial examples
Mercaldo F.; Santone A.
2021-01-01
Abstract
In this paper, we evaluate the performance of machine learning classifiers (Logistic Regression, CART, Random Forest) by fabricating adversarial examples (malware samples) that are statistically identical to goodware. To this end, we demonstrate three scenarios for creating tainted malware samples that mislead classification models and reduce their accuracy: (a) random attribute injection, (b) insertion of prominent attributes from legitimate apps, and (c) poisoning of class labels. Experiments were conducted on a dataset of 15,649 Android applications, comprising 5,373 malicious and 10,276 legitimate apps. The investigation demonstrates a significant drop in accuracy, in the range of 12-50%. By contrast, in the absence of adversarial examples in the test set, the accuracy of the classifiers ranged from 94.8% to 97.9%.
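As a rough illustration of scenarios (b) and (c), the sketch below trains a Random Forest on synthetic binary attribute vectors (standing in for Android app attributes), injects the attributes most prominent in goodware into the malware test samples, and separately flips a fraction of training labels. The synthetic data, the 30% flip rate, and all other parameters are assumptions made for illustration only, not the authors' dataset or implementation.

```python
# A minimal sketch of attack scenarios (b) and (c) from the abstract,
# assuming binary attribute vectors (e.g., Android permissions).
# All data, class ratios, and parameters are illustrative stand-ins,
# not the paper's actual dataset or implementation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_features = 50

# Synthetic stand-in data: legitimate apps (label 0) and malware (label 1)
# drawn from different Bernoulli attribute distributions.
goodware = (rng.random((10276, n_features)) < 0.3).astype(int)
malware = (rng.random((5373, n_features)) < 0.6).astype(int)
X = np.vstack([goodware, malware])
y = np.array([0] * len(goodware) + [1] * len(malware))
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("clean accuracy:", accuracy_score(y_te, clf.predict(X_te)))

# Scenario (b): insert the attributes most prominent in goodware into every
# malware test sample, so malware looks statistically closer to goodware.
prominent = np.argsort(goodware.mean(axis=0))[-10:]
malware_rows = np.where(y_te == 1)[0]
X_adv = X_te.copy()
X_adv[np.ix_(malware_rows, prominent)] = 1
print("after attribute injection:", accuracy_score(y_te, clf.predict(X_adv)))

# Scenario (c): poison the class labels by flipping a fraction of the
# training labels before fitting, then score on the clean test set.
flip = rng.choice(len(y_tr), size=int(0.3 * len(y_tr)), replace=False)
y_poisoned = y_tr.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]
clf_poisoned = RandomForestClassifier(n_estimators=100, random_state=0)
clf_poisoned.fit(X_tr, y_poisoned)
print("after label poisoning:", accuracy_score(y_te, clf_poisoned.predict(X_te)))
```

Scenario (a), random attribute injection, would follow the same pattern as (b) but with randomly chosen attribute indices instead of the goodware-prominent ones.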