An Explainable Radiomics-Based Classification Model for Sarcoma Diagnosis
Simona Correra, Francesco Mercaldo, Vittoria Nardone, Antonella Santone
2025-01-01
Abstract
Objective: This study introduces an explainable, radiomics-based machine learning framework for the automated classification of sarcoma tumors using MRI. The approach aims to empower clinicians by reducing dependence on subjective image interpretation. Methods: A total of 186 MRI scans from 86 patients diagnosed with bone and soft tissue sarcoma were manually segmented to isolate tumor regions and corresponding healthy tissue. From these segmentations, 851 handcrafted radiomic features were extracted, including wavelet-transformed descriptors. A Random Forest classifier was trained to distinguish between tumor and healthy tissue, with hyperparameter tuning performed through nested cross-validation. To ensure transparency and interpretability, model behavior was explored through feature importance analysis and Local Interpretable Model-agnostic Explanations (LIME). Results: The model achieved an F1-score of 0.742 and an accuracy of 0.724 on the test set. LIME analysis revealed that texture and wavelet-based features were the most influential in driving the model's predictions. Conclusions: By enabling accurate and interpretable classification of sarcomas in MRI, the proposed method provides a non-invasive approach to tumor classification, supporting earlier, more personalized, precision-driven diagnosis. This study highlights the potential of explainable AI to support safer clinical decision-making.
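The abstract's evaluation protocol — nested cross-validation, where an outer loop estimates generalization performance and an inner loop selects hyperparameters — can be sketched as follows. This is a minimal stdlib-only illustration of the protocol's fold structure, not the study's implementation: the threshold classifier, the `fit_score` callback, and the toy data are hypothetical stand-ins for the Random Forest trained on 851 radiomic features.

```python
# Sketch of nested cross-validation: the inner loop picks hyperparameters,
# the outer loop scores the refit model on data never used for selection.
# (Illustrative only; the study uses a Random Forest on radiomic features.)
import random


def k_fold_indices(n, k, seed=0):
    """Split indices 0..n-1 into k shuffled, disjoint folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]


def nested_cv(X, y, param_grid, fit_score, outer_k=5, inner_k=3):
    """Return the mean outer-fold score; hyperparameters are chosen
    per outer fold using only that fold's training split."""
    outer_scores = []
    for outer_fold in k_fold_indices(len(X), outer_k):
        test = set(outer_fold)
        train = [i for i in range(len(X)) if i not in test]
        # Inner loop: pick the hyperparameters with the best mean score
        # across inner validation folds drawn from the outer train split.
        best_params, best = None, float("-inf")
        for params in param_grid:
            inner = []
            for inner_fold in k_fold_indices(len(train), inner_k, seed=1):
                val = [train[i] for i in inner_fold]
                fit = [i for i in train if i not in set(val)]
                inner.append(fit_score(params, X, y, fit, val))
            mean = sum(inner) / len(inner)
            if mean > best:
                best, best_params = mean, params
        # Refit on the full outer train split with the chosen parameters,
        # then score once on the held-out outer fold.
        outer_scores.append(fit_score(best_params, X, y, train, sorted(test)))
    return sum(outer_scores) / len(outer_scores)


def threshold_score(t, X, y, fit_idx, val_idx):
    """Toy stand-in model: predict class 1 when x > t; return accuracy."""
    preds = [(X[i] > t, y[i]) for i in val_idx]
    return sum(p == truth for p, truth in preds) / len(preds)


# Toy data, perfectly separable at threshold 0.5.
X = [i / 10 for i in range(10)]
y = [x > 0.5 for x in X]
score = nested_cv(X, y, [0.5, 0.3, 0.7], threshold_score)
print(score)  # → 1.0
```

The key property illustrated here is that each outer test fold is untouched by hyperparameter selection, so the reported score (the study's F1 of 0.742 / accuracy of 0.724) is not optimistically biased by the tuning step.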


