A Semi-Automated Explainability-Driven Approach for Malware Analysis through Deep Learning

Casolare R.;Mercaldo F.;Peluso C.;Santone A.
2021-01-01

Abstract

Cybercriminals continually develop increasingly aggressive malicious code to steal sensitive and private information from mobile devices. Antimalware tools are not always able to detect every threat, especially when they have no prior knowledge of the malware signature. Moreover, malware code analysis remains a time-consuming process for security analysts. In this regard, we propose a method that detects the family a malware sample belongs to and automatically points out a subset of potentially malicious classes. The rationale behind this work is (i) to save the security analyst valuable time by reducing the amount of code to analyse, and (ii) to improve the interpretability of image-based deep learning models for malware family detection. We represent an application as an image and classify it with a deep learning model that predicts its family; then, exploiting activation maps, the approach points out potentially malicious classes to help security analysts recognise malicious behaviour. The proposed method obtains an overall accuracy of 0.944 on a dataset of 8430 real-world Android malware samples, also showing that activation maps can provide explainability for the deep learning model's decisions.
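
The abstract condenses two concrete steps: encoding an application's raw bytes as a grayscale image, and using activation maps to localise the image regions (and hence code regions) that drove the family prediction. The following is a minimal, hypothetical PyTorch sketch of both steps; the byte-to-image width, the ResNet-18 backbone, the Grad-CAM-style map, the ten-family output head, and the file name sample.apk are illustrative assumptions, not details taken from the paper.

    # Hypothetical sketch: (1) render an application's bytes as a grayscale
    # image, (2) compute a Grad-CAM-style activation map over the predicted
    # family's score. Architecture and parameters are assumptions.
    import numpy as np
    import torch
    import torch.nn.functional as F
    from torchvision.models import resnet18

    def bytes_to_image(path: str, width: int = 256) -> np.ndarray:
        """Read a binary file and reshape its bytes into a 2-D grayscale
        image, zero-padding the final row."""
        data = np.fromfile(path, dtype=np.uint8)
        height = int(np.ceil(len(data) / width))
        padded = np.zeros(height * width, dtype=np.uint8)
        padded[: len(data)] = data
        return padded.reshape(height, width)

    def grad_cam(model, x, target_layer, class_idx=None):
        """Minimal Grad-CAM: weight the target layer's feature maps by the
        spatially averaged gradients of the target class score."""
        activations, gradients = [], []
        fwd = target_layer.register_forward_hook(
            lambda m, i, o: activations.append(o))
        bwd = target_layer.register_full_backward_hook(
            lambda m, gi, go: gradients.append(go[0]))
        try:
            logits = model(x)
            if class_idx is None:
                class_idx = logits.argmax(dim=1).item()
            model.zero_grad()
            logits[0, class_idx].backward()
        finally:
            fwd.remove()
            bwd.remove()
        acts, grads = activations[0], gradients[0]
        weights = grads.mean(dim=(2, 3), keepdim=True)  # pooled gradients
        cam = F.relu((weights * acts).sum(dim=1))       # weighted channel sum
        cam = cam / (cam.max() + 1e-8)                  # normalise to [0, 1]
        return cam.squeeze(0).detach().numpy(), class_idx

    if __name__ == "__main__":
        img = bytes_to_image("sample.apk")              # hypothetical input
        x = torch.from_numpy(img).float().div(255.0)
        x = x.unsqueeze(0).unsqueeze(0).repeat(1, 3, 1, 1)  # 1x3xHxW
        x = F.interpolate(x, size=(224, 224), mode="bilinear",
                          align_corners=False)
        model = resnet18(num_classes=10).eval()         # untrained stand-in
        cam, family = grad_cam(model, x, model.layer4)
        print(f"predicted family index: {family}, CAM shape: {cam.shape}")

In the paper's workflow, the highlighted image regions would then be mapped back to the application classes occupying those byte offsets; that final mapping step is what this sketch leaves out.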
Year: 2021
ISBN: 978-1-6654-3900-8
Files in this record:
There are no files associated with this record.

Documents in IRIS are protected by copyright and all rights are reserved, unless otherwise indicated.

Use this identifier to cite or link to this document: https://hdl.handle.net/11695/107215
Citations
  • PMC: N/A
  • Scopus: 5
  • Web of Science: 2