Published papers
Featured Publications

Genome-wide meta-analysis of macronutrient intake of 91,114 European ancestry participants from the Cohorts for Heart and Aging Research in Genomic Epidemiology (CHARGE) consortium
Type: article, Author: J Merino et al., Date: 2019-12-24

The role of haematological traits in risk of ischaemic stroke and its subtypes
Type: article, Author: E L Harshfield, Date: 2019-11-22

Assessment of MTNR1B Type 2 Diabetes Genetic Risk Modification by Shift Work and Morningness-Eveningness Preference in the UK Biobank
Type: article, Author: H Dashti, Date: 2019-11-22
Last updated Jan 15, 2019
2019
B Ruijsink; E Puyol-Antón; I Oksuz; M Sinclair; W Bai; JA Schnabel; R Razavi; AP King
Fully Automated, Quality-Controlled Cardiac Analysis From CMR: Validation and Large-Scale Application to Characterize Cardiac Function
Journal Article In: JACC: Cardiovascular Imaging, 2019. Date: 2019-07-11
Link: https://www.ncbi.nlm.nih.gov/pubmed/31326477
Tags: 17806, cardiac analysis, CMR, imaging
Abstract: OBJECTIVES: This study sought to develop a fully automated framework for cardiac function analysis from cardiac magnetic resonance (CMR), including comprehensive quality control (QC) algorithms to detect erroneous output. BACKGROUND: Analysis of cine CMR imaging using deep learning (DL) algorithms could automate ventricular function assessment. However, variable image quality, variability in phenotypes of disease, and unavoidable weaknesses in training of DL algorithms currently prevent their use in clinical practice. METHODS: The framework consists of a pre-analysis DL image QC, followed by a DL algorithm for biventricular segmentation in long-axis and short-axis views, myocardial feature-tracking (FT), and a post-analysis QC to detect erroneous results. The study validated the framework in healthy subjects and cardiac patients by comparison against manual analysis (n = 100) and evaluation of the QC steps' ability to detect erroneous results (n = 700). Next, this method was used to obtain reference values for cardiac function metrics from the UK Biobank. RESULTS: Automated analysis correlated highly with manual analysis for left and right ventricular volumes (all r > 0.95), strain (circumferential r = 0.89, longitudinal r > 0.89), and filling and ejection rates (all r ≥ 0.93). There was no significant bias for cardiac volumes and filling and ejection rates, except for right ventricular end-systolic volume (bias +1.80 ml; p = 0.01). The bias for FT strain was <1.3%. The sensitivity of detection of erroneous output was 95% for volume-derived parameters and 93% for FT strain. Finally, reference values were automatically derived from 2,029 CMR exams in healthy subjects. CONCLUSIONS: The study demonstrates a DL-based framework for automated, quality-controlled characterization of cardiac function from cine CMR, without the need for direct clinician oversight.
Ilkay Oksuz; Bram Ruijsink; Esther Puyol-Antón; James R. Clough; Gastao Cruz; Aurelien Bustin; Claudia Prieto; Rene Botnar; Daniel Rueckert; Julia A. Schnabel; Andrew P. King
Automatic CNN-based detection of cardiac MR motion artefacts using k-space data augmentation and curriculum learning
Journal Article In: Medical Image Analysis, 2019. Date: 2019-07-01
Link: https://www.sciencedirect.com/science/article/pii/S1361841518306765?via%3Dihub
Tags: 17806, cnn based detection, imaging
Abstract: Good quality of medical images is a prerequisite for the success of subsequent image analysis pipelines. Quality assessment of medical images is therefore an essential activity and for large population studies such as the UK Biobank (UKBB), manual identification of artefacts such as those caused by unanticipated motion is tedious and time-consuming. Therefore, there is an urgent need for automatic image quality assessment techniques. In this paper, we propose a method to automatically detect the presence of motion-related artefacts in cardiac magnetic resonance (CMR) cine images. We compare two deep learning architectures to classify poor quality CMR images: 1) 3D spatio-temporal Convolutional Neural Networks (3D-CNN), 2) Long-term Recurrent Convolutional Network (LRCN). Though in real clinical setup motion artefacts are common, high-quality imaging of UKBB, which comprises cross-sectional population data of volunteers who do not necessarily have health problems creates a highly imbalanced classification problem. Due to the high number of good quality images compared to the relatively low number of images with motion artefacts, we propose a novel data augmentation scheme based on synthetic artefact creation in k-space. We also investigate a learning approach using a predetermined curriculum based on synthetic artefact severity. We evaluate our pipeline on a subset of the UK Biobank data set consisting of 3510 CMR images. The LRCN architecture outperformed the 3D-CNN architecture and was able to detect 2D+time short axis images with motion artefacts in less than 1ms with high recall. We compare our approach to a range of state-of-the-art quality assessment methods. The novel data augmentation and curriculum learning approaches both improved classification performance achieving overall area under the ROC curve of 0.89.
2017
Esther Puyol-Antón; Matthew Sinclair; Bernhard Gerber; Mihaela Silvia Amzulescu; Hélène Langet; Mathieu De Craene; Paul Aljabar; Paolo Piro; Andrew P. King
A multimodal spatiotemporal cardiac motion atlas from MR and ultrasound data
Journal Article In: Medical Image Analysis, 2017. Date: 2017-08-01
Link: http://www.sciencedirect.com/science/article/pii/S1361841517300890
Tags: 17806, cardiac, featured, imaging, methodology, MR, ultrasound
Abstract: Cardiac motion atlases provide a space of reference in which the motions of a cohort of subjects can be directly compared. Motion atlases can be used to learn descriptors that are linked to different pathologies and which can subsequently be used for diagnosis. To date, all such atlases have been formed and applied using data from the same modality. In this work we propose a framework to build a multimodal cardiac motion atlas from 3D magnetic resonance (MR) and 3D ultrasound (US) data. Such an atlas will benefit from the complementary motion features derived from the two modalities, and furthermore, it could be applied in clinics to detect cardiovascular disease using US data alone. The processing pipeline for the formation of the multimodal motion atlas initially involves spatial and temporal normalisation of subjects' cardiac geometry and motion. This step was accomplished following a similar pipeline to that proposed for single modality atlas formation. The main novelty of this paper lies in the use of a multi-view algorithm to simultaneously reduce the dimensionality of both the MR and US derived motion data in order to find a common space between both modalities to model their variability. Three different dimensionality reduction algorithms were investigated: principal component analysis, canonical correlation analysis and partial least squares regression (PLS). A leave-one-out cross validation on a multimodal data set of 50 volunteers was employed to quantify the accuracy of the three algorithms. Results show that PLS resulted in the lowest errors, with a reconstruction error of less than 2.3 mm for MR-derived motion data, and less than 2.5 mm for US-derived motion data. In addition, 1000 subjects from the UK Biobank database were used to build a large scale monomodal data set for a systematic validation of the proposed algorithms. Our results demonstrate the feasibility of using US data alone to analyse cardiac function based on a multimodal motion atlas.