CISTIB

Centre for Computational Imaging & Simulation Technologies in Biomedicine

The Centre for Computational Imaging & Simulation Technologies in Biomedicine (CISTIB) has a strong focus on data science and artificial intelligence. During 2021-2022, we published two landmark papers that reflect our broader interests. In addition, we undertook several activities to promote our science and engage with the general public.

Predicting myocardial infarction through retinal scans and minimal personal information

Diaz-Pinto A, Ravikumar N, Attar R, Suinesiaputra A, Zhao Y, Levelt E, Dall’Armellina E, Lorenzi M, Chen Q, Keenan TDL, Agrón E, Chew EY, Lu Z, Gale CP, Gale RP, Plein S, Frangi AF. Predicting myocardial infarction through retinal scans and minimal personal information. Nat Mach Intell 4, 55–61 (2022). https://doi.org/10.1038/s42256-021-00427-7.

Abstract – In ophthalmologic practice, retinal images are routinely obtained to diagnose and monitor primary eye diseases and systemic conditions affecting the eye, such as diabetic retinopathy. Recent studies have shown that biomarkers on retinal images, for example, retinal blood vessel density or tortuosity, are associated with cardiac function and may identify patients at risk of coronary artery disease. In this work, we investigate the use of retinal images, alongside relevant patient metadata, to estimate left ventricular mass and left ventricular end-diastolic volume, and, subsequently, predict incident myocardial infarction. We trained a multichannel variational autoencoder and a deep regressor model to estimate left ventricular mass (4.4 (–32.30, 41.1) g) and left ventricular end-diastolic volume (3.02 (–53.45, 59.49) ml) and predict risk of myocardial infarction (AUC = 0.80 ± 0.02, sensitivity = 0.74 ± 0.02, specificity = 0.71 ± 0.03) using just the retinal images and demographic data. Our results indicate that one could identify patients at high risk of future myocardial infarction from retinal imaging available in every optician and eye clinic.
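The two-stage pipeline the abstract describes could be sketched as follows. This is a minimal illustration, not the published model: pooled image statistics stand in for the multichannel variational autoencoder, and a logistic-linear score stands in for the deep regressor; all function names, shapes, and values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_retina(image):
    # Stand-in for the multichannel variational autoencoder: map a retinal
    # image to a low-dimensional latent code (here, simple pooled statistics).
    return np.array([image.mean(), image.std()])

def predict_mi_risk(latent, demographics, w, b):
    # Stand-in for the deep regressor: a linear model on the concatenated
    # latent code and demographic data, squashed to a [0, 1] risk score.
    x = np.concatenate([latent, demographics])
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

image = rng.random((64, 64))          # toy retinal image
demographics = np.array([0.55, 1.0])  # e.g. normalised age and sex
w = rng.standard_normal(4)            # 2 latent dims + 2 demographic dims
risk = predict_mi_risk(encode_retina(image), demographics, w, b=0.0)
```

The key design point carried over from the paper is the fusion step: image-derived features and demographic metadata are concatenated before the final risk prediction.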

 

In-silico trial of intracranial flow diverters replicates and expands insights from conventional clinical trials

Sarrami-Foroushani A, Lassila T, MacRaild M, Asquith J, Roes KCB, Byrne JV, Frangi AF. In-silico trial of intracranial flow diverters replicates and expands insights from conventional clinical trials. Nat Commun. 2021 Jun 23;12(1):3861.

Abstract – The cost of clinical trials is ever-increasing. In-silico trials rely on virtual populations and interventions simulated using patient-specific models and may offer a solution to lower these costs. We present the flow diverter performance assessment (FD-PASS) in-silico trial, which models the treatment of intracranial aneurysms in 164 virtual patients with 82 distinct anatomies with a flow-diverting stent, using computational fluid dynamics to quantify post-treatment flow reduction. The predicted FD-PASS flow-diversion success rates replicate the values previously reported in three clinical trials. The in-silico approach allows a broader investigation of factors associated with insufficient flow reduction than feasible in a conventional trial. Our findings demonstrate that in-silico trials of endovascular medical devices can: (i) replicate findings of conventional clinical trials, and (ii) perform virtual experiments and sub-group analyses that are difficult or impossible in conventional trials to discover new insights on treatment failure, e.g., in the presence of side-branches or hypertension.
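The trial's core outcome measure can be illustrated with a small sketch: computing per-patient flow reduction from pre- and post-treatment flows and aggregating a cohort success rate. The 35% threshold and the toy flow values here are assumptions for illustration, not the actual FD-PASS criteria.

```python
def flow_reduction(pre, post):
    # Percentage reduction in aneurysm inflow after flow-diverter placement,
    # as would be computed from pre- and post-treatment CFD simulations.
    return 100.0 * (pre - post) / pre

def trial_success_rate(cases, threshold=35.0):
    # Fraction of virtual patients whose flow reduction meets an assumed
    # success threshold (the real FD-PASS criterion may differ).
    hits = sum(1 for pre, post in cases if flow_reduction(pre, post) >= threshold)
    return hits / len(cases)

# Toy (pre, post) inflow values for four virtual patients.
virtual_cohort = [(10.0, 5.0), (8.0, 7.0), (12.0, 3.0), (9.0, 8.5)]
rate = trial_success_rate(virtual_cohort)
```

Because every virtual patient's full flow field is available, such per-case metrics can also be stratified by side-branch anatomy or hypertension status, which is the kind of sub-group analysis the abstract highlights.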

Outreach is integral to modern science: communicating what we do, and why we do it, to the community and the wider world. We are committed to this both here in Leeds and through links with institutes and groups worldwide. We are proud to support student engagement that raises the profile of science, technology, and mathematics. During 2021-2022, CISTIB undertook significant outreach efforts to promote its activities to the general public, in addition to targeted events for more specialised audiences. CISTIB keeps track of all its outreach activities in the following repository: https://www.cistib.org/outreach

 

Intelligent Image-driven Motion Modelling for Adaptive Radiotherapy (iDAPT)

Research problem and aims

Patient motion (e.g., respiratory) during treatment is one of the most difficult challenges for precise delivery of radiotherapy. Precision is critical to ensure prescribed radiation doses are delivered to the target (tumour) while surrounding healthy tissues are spared. If the motion itself can be accurately estimated, the treatment plan and/or delivery can be adapted to compensate.

Current methods for motion estimation rely either on invasive implanted fiducial markers, imperfect surrogate models based, for example, on external optical measurements or breathing traces, or expensive and rare systems like in-treatment MRI. The iDAPT project, in contrast, aims to achieve accurate motion prediction using only relatively low-quality, but almost universally available planar x-ray imaging. This is challenging since such images have poor soft tissue contrast and provide only 2D projections through the anatomy. We hypothesise nonetheless that when combined with strong priors in the form of learnt models of anatomical motion and image appearance, they provide sufficient information to reconstruct the 3D motion accurately. A high-level overview of the approach is shown in Fig 1.

Figure 1: High-level overview of the work


 

Methods

The developed model seeks a mapping between deformations of the target anatomy and the corresponding appearance of that anatomy in x-ray images acquired at any arbitrary projection angle. The target organ geometry is first represented as a 3D mesh, derived from a reference CT volume. A 2D-CNN encoder extracts x-ray image features, and four feature pooling networks then fuse these features onto the organ mesh. Finally, a ResNet-based graph attention network deforms the feature-encoded mesh to match the predicted current configuration of the anatomy.
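Under strong simplifying assumptions, this mapping could be sketched as follows: orthographic projection stands in for the real x-ray geometry, nearest-neighbour lookup stands in for the four feature pooling networks, and a single linear map stands in for the ResNet-based graph attention network. All shapes and names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

def project(vertices, angle_deg):
    # Orthographic projection of 3D mesh vertices onto the detector plane
    # for a given gantry angle (a simplification of the real x-ray geometry).
    t = np.deg2rad(angle_deg)
    R = np.array([[np.cos(t), 0.0, np.sin(t)],
                  [0.0, 1.0, 0.0]])
    return vertices @ R.T  # (V, 2) detector coordinates

def pool_features(feature_map, uv):
    # Nearest-neighbour feature pooling: each vertex picks up the CNN
    # feature at its projected pixel (stand-in for the pooling networks).
    h, w, _ = feature_map.shape
    ij = np.clip(((uv + 1.0) / 2.0 * [h - 1, w - 1]).astype(int),
                 0, [h - 1, w - 1])
    return feature_map[ij[:, 0], ij[:, 1]]  # (V, C) per-vertex features

def deform(vertices, vert_feats, W):
    # Stand-in for the graph attention network: a linear map from pooled
    # features to per-vertex 3D displacements.
    return vertices + vert_feats @ W

verts = rng.uniform(-1, 1, (100, 3))  # toy organ mesh vertices
fmap = rng.random((16, 16, 8))        # toy 2D-CNN feature map
W = 0.01 * rng.standard_normal((8, 3))
new_verts = deform(verts, pool_features(fmap, project(verts, 252.0)), W)
```

The essential structure matches the text: image features are lifted onto the mesh via the projection geometry, and the mesh is then deformed from those per-vertex features.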

Training the model presents its own challenge, since ground-truth volumetric motion information and matching in-treatment x-ray images are impossible to acquire. We therefore created synthetic data: motion patterns over a breathing cycle were first estimated from each patient's 4D-CT data; a plausible set of synthetic motion states was then created by perturbing the original motions within reasonable bounds; for each deformation instance, the reference CT volume was correspondingly deformed and digitally reconstructed radiographs were generated for all projection angles. The result was a matched set of known anatomical deformations and corresponding synthetic x-ray images. The full dataset was divided into training, validation, and test sets. The process was performed on lung images from the XCAT phantom and on images from three liver cancer patients.
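The synthetic data pipeline could be sketched with toy stand-ins as follows: integer translations replace the real displacement fields, and a sum projection replaces proper digitally reconstructed radiograph rendering. Names, sizes, and the perturbation bound are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def perturb_shift(base_shift, max_delta=2):
    # Perturb a 4D-CT-derived motion state within bounds to create a
    # plausible synthetic variant (integer shifts are a toy stand-in for
    # the real displacement fields).
    return tuple(s + int(rng.integers(-max_delta, max_delta + 1))
                 for s in base_shift)

def warp(volume, shift):
    # Toy warp: translate the volume (a stand-in for resampling the CT
    # through a displacement field).
    return np.roll(volume, shift, axis=(0, 1, 2))

def drr(volume, axis=0):
    # Toy digitally reconstructed radiograph: a parallel-ray line
    # integral, i.e. a sum projection along one axis.
    return volume.sum(axis=axis)

ct = rng.random((32, 32, 32))  # reference CT volume
base_shift = (3, 0, -2)        # motion state estimated from 4D-CT
dataset = []
for _ in range(10):
    shift = perturb_shift(base_shift)
    dataset.append((shift, drr(warp(ct, shift))))  # (deformation, image) pair

train_set, val_set, test_set = dataset[:6], dataset[6:8], dataset[8:]
```

Each element pairs a known deformation with its synthetic image, which is exactly the supervision signal the text says cannot be acquired in-treatment.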

Preliminary findings

Qualitative results for XCAT data are depicted in Figs 2 & 3 and for one liver patient in Figs 4 & 5. High prediction accuracy was achieved in almost all regions of the organ models.

Figure 2: Recovered motion from projection angle 252°, viewed in coronal plane – XCAT. Rendered lung mesh overlaid on the ground truth CT data.


 

Figure 3: Recovered motion from projection angle 252°, viewed in sagittal plane – XCAT. Rendered lung mesh overlaid on the ground truth CT data.


 

Figure 4: Recovered motion from projection angle 0°, viewed in coronal plane – a liver patient. Rendered liver mesh overlaid on the ground truth CT data.

 

Figure 5: Recovered motion from projection angle 0°, viewed in sagittal plane – a liver patient. Rendered liver mesh overlaid on the ground truth CT data.

 

People 

The project is a collaboration between the Centre for Computational Imaging & Simulation Technologies in Biomedicine (CISTIB) at the University of Leeds and the Leeds Cancer Centre (LCC) within the Leeds Teaching Hospitals NHS Trust.

Key partners:

  • CISTIB: Mr Isuru Wijesinghe, Dr Arezoo Zakeri, Dr Zeike Taylor
  • LCC: Dr Michael Nix, Dr Bashar Al-Qaisieh

Funding

The project is supported by CRUK RadNet Leeds Centre of Excellence (C19942/A28832) and a Royal Academy of Engineering Leverhulme Trust Research Fellowship (DIADEM-ART LTRF2021-17115).

 
