Brain age prediction using neuroimaging data is a promising tool for identifying accelerated aging and potential neurological disorders. However, the integration of explainable artificial intelligence (XAI) in this domain remains an open challenge, particularly in ensuring the robustness of model interpretations when data augmentation techniques are applied.
This project aims to investigate the impact of different augmentation methods on the explanations generated by XAI frameworks such as SHAP (SHapley Additive exPlanations) and Grad-CAM (Gradient-weighted Class Activation Mapping). We will apply geometric transformations, intensity variations, and generative augmentation (e.g., Variational Autoencoders and Diffusion Models) to neuroimaging data and assess how these modifications influence feature attributions and interpretability.
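As a concrete illustration, the geometric and intensity augmentations mentioned above can be sketched on a 3D volume. This is a minimal numpy sketch under stated assumptions: the array shape, flip axis, and noise parameters are illustrative placeholders, not the project's actual preprocessing pipeline.

```python
import numpy as np

def augment_volume(vol, rng):
    """Apply a simple geometric and intensity augmentation to a 3D volume."""
    # Geometric transformation: random flip along one spatial axis
    if rng.random() < 0.5:
        vol = np.flip(vol, axis=0)
    # Intensity variation: global scaling plus additive Gaussian noise
    scale = rng.uniform(0.9, 1.1)
    noise = rng.normal(0.0, 0.01, size=vol.shape)
    return vol * scale + noise

rng = np.random.default_rng(seed=0)
volume = rng.random((64, 64, 64))   # placeholder volume, not real MRI data
augmented = augment_volume(volume, rng)
print(augmented.shape)              # spatial shape is preserved
```

Generative augmentation (VAEs, diffusion models) would replace this function with samples drawn from a trained generative model, but the downstream analysis of attributions is the same.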
By leveraging UK Biobank’s extensive MRI dataset, we will:
1. Train multiple deep learning models for brain age prediction with and without augmentation.
2. Compare the stability and consistency of explanation outputs across different augmentation strategies.
3. Evaluate the reliability of XAI methods in detecting biologically meaningful features under augmented conditions.
4. Assess potential biases introduced by augmentation in XAI explanations.
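The stability comparison in step 2 could, for instance, be quantified as the cosine similarity between attribution maps computed on original and augmented inputs. This is a hedged sketch: `attr_orig` and `attr_aug` stand in for SHAP or Grad-CAM outputs that the project would obtain from its trained models, and the toy arrays below are not real attributions.

```python
import numpy as np

def attribution_similarity(attr_orig, attr_aug):
    """Cosine similarity between two flattened attribution maps (1.0 = identical)."""
    a = attr_orig.ravel().astype(float)
    b = attr_aug.ravel().astype(float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy attribution maps standing in for SHAP / Grad-CAM outputs
rng = np.random.default_rng(seed=1)
attr_orig = rng.random((32, 32, 32))
attr_aug = attr_orig + rng.normal(0.0, 0.05, size=attr_orig.shape)  # mildly perturbed
score = attribution_similarity(attr_orig, attr_aug)
print(round(score, 3))  # close to 1.0 when explanations are stable
```

A similarity close to 1.0 across many subjects and augmentation strategies would indicate robust explanations; systematic drops for a particular augmentation would flag the kind of interpretation bias assessed in step 4.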