This project aims to investigate how large-scale multimodal data, including fMRI, DTI, T1w/T2w MRI, fundus photography, OCT, and genetic profiles, can be integrated to reveal common biomarkers underlying both neurological and ocular diseases. It further explores whether a foundation model pre-trained on these data modalities can reliably detect early pathological changes in diseases such as Alzheimer’s disease, Parkinson’s disease, stroke, cognitive decline, and retinal degeneration, and it will develop interpretability strategies to elucidate the pathogenic mechanisms shared between the brain and the eye, potentially informing new therapeutic targets.
The primary objective is to develop a unified deep learning foundation model capable of processing multimodal brain and eye data, leveraging large-scale UK Biobank (UKB) data for self-supervised pretraining. The model will be evaluated on early-stage detection and diagnostic accuracy for neurological and ocular diseases, including Alzheimer’s disease, Parkinson’s disease, dementia, stroke, schizophrenia, glaucoma, and hereditary retinal disorders, against single-modality baselines. Additionally, the research will incorporate explainable AI tools to identify and interpret the critical imaging and genomic features driving disease risk and progression, and will investigate gene-environment interactions by integrating genetic data with imaging for biomarker and therapeutic target discovery.
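One common way to realize self-supervised multimodal pretraining of this kind is contrastive alignment of modality-specific encoders, where paired brain and eye scans from the same subject are pulled together in a shared embedding space. The following is a minimal PyTorch sketch of that idea under stated assumptions: the encoders, feature dimensions, and batch contents are all hypothetical stand-ins, not the proposal's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

class ModalityEncoder(nn.Module):
    """Toy MLP standing in for a 3D-MRI or fundus/OCT backbone (hypothetical)."""
    def __init__(self, in_dim: int, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, embed_dim)
        )

    def forward(self, x):
        # Unit-normalize so similarity reduces to a dot product.
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(z_brain, z_eye, temperature: float = 0.07):
    """Symmetric InfoNCE: same-subject brain/eye pairs are positives,
    all other pairings in the batch are negatives."""
    logits = z_brain @ z_eye.t() / temperature
    targets = torch.arange(z_brain.size(0))
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Hypothetical flattened feature dimensions for the two modalities.
brain_enc = ModalityEncoder(in_dim=256)
eye_enc = ModalityEncoder(in_dim=128)
opt = torch.optim.Adam(
    list(brain_enc.parameters()) + list(eye_enc.parameters()), lr=1e-3
)

brain_batch = torch.randn(8, 256)  # stand-in for paired MRI features
eye_batch = torch.randn(8, 128)    # stand-in for same subjects' retinal features
loss = contrastive_loss(brain_enc(brain_batch), eye_enc(eye_batch))
loss.backward()
opt.step()
```

After pretraining on unlabeled subject-paired data, the encoders would be fine-tuned or probed on the downstream disease labels described above.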
The scientific rationale lies in the shared developmental origin of retinal and cerebral tissue, which positions ocular imaging as a non-invasive proxy for brain health. Recent studies have demonstrated strong correlations between retinal changes and structural or functional brain abnormalities, yet current approaches rarely exploit the full potential of multimodal data integration. Interpretable AI will further enable model predictions to be linked to underlying pathophysiology, guiding early intervention and advancing precision medicine.
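As one concrete instance of linking model predictions to underlying features, gradient-based attribution ranks the input variables that most influence a predicted risk score. The sketch below uses a simple input-times-gradient attribution on a toy risk model; the model, the 20-dimensional concatenated imaging-plus-genetic feature vector, and the feature indices are purely illustrative assumptions, not the project's chosen interpretability method.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy risk model over a concatenated imaging + genetic feature vector
# (the 20-dimensional input is a hypothetical placeholder).
model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 1))

# One subject's combined feature vector, tracked for gradients.
x = torch.randn(1, 20, requires_grad=True)
risk = model(x).sum()   # scalar risk score for this subject
risk.backward()         # populates x.grad with d(risk)/d(x)

# Input-times-gradient attribution: larger magnitude means the feature
# moved the risk score more for this subject.
attribution = (x * x.grad).detach().squeeze()
top_features = attribution.abs().topk(5).indices
```

In practice, richer attribution methods (e.g., integrated gradients or attention-based maps) would be applied per modality, so that implicated retinal regions, brain structures, and genetic loci can be inspected jointly.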