Imperial PWP | dblp | Google Scholar | CV | Publications | LinkedIn | ResearchGate | Twitter | ORCID: orcid.org/0000-0002-7813-5023


I am a Reader (= Associate Professor++) in the Department of Computing at Imperial College London. I head the human-in-the-loop computing group and I am one of four academics leading the Biomedical Image Analysis (BioMedIA) Collaboratory. Human-in-the-loop computing research aims at complementing human intelligence with machine capabilities and machine intelligence with human flexibility.

I co-create intensively with King’s College London (Division of Imaging Sciences and Biomedical Engineering, St Thomas’ Hospital, London) and the Department of Bioengineering at Imperial. I am an Associate Editor for IEEE Transactions on Medical Imaging and a scientific adviser for ThinkSono Ltd and Ultromics Ltd. I am the Affordable Imaging stream lead for the EPSRC Centre for Doctoral Training in Smart Medical Imaging and involved in the UKRI Centre for Doctoral Training in Artificial Intelligence for Healthcare.

My research is about intelligent algorithms in healthcare, especially Medical Imaging. I am working on self-driving medical image acquisition that can guide human operators in real time during diagnostics. Artificial Intelligence is currently used as a blanket term to describe research in these areas.

Current research questions:
Can we democratize rare healthcare expertise through Machine Learning, providing guidance in real-time applications and second reader expertise in retrospective analysis?
Can we develop normative learning from large populations, integrating imaging, patient records and omics, leading to data analysis that mimics human decision making?
Can we provide human interpretability of machine decision making to support the ‘right to explanation’ in healthcare?

My teaching is focused on real-time computing, Machine Learning, Image Analysis, Computer Graphics and Visualisation.

For my research I am using Nvidia and Intel hardware (thank you for the donations!).


PhD Opportunities

I am currently only looking for outstanding PhD students who are interested in working on human-interpretable machine learning and efficient medical image processing topics for deep learning applications.

My definition of ‘outstanding’: You did amazing things in the past that had real impact.

If you are interested in applying for a PhD position in my group, you can apply through the Imperial College PhD online application system.

We have a number of PhD projects on offer within the EPSRC Centre for Doctoral Training in Smart Medical Imaging and the UKRI Centre for Doctoral Training in AI for Healthcare.

Interested? See ‘how to apply’ for the Smart Medical Imaging CDT and the corresponding page for the UKRI AI for Healthcare centre.


In digital pathology, demand is much higher than supply, so there is a real risk that disease is missed.
KidneyCaliper is a computational pathology project that applies deep learning image analysis to microscope slides of kidney biopsies. We believe that high-throughput, browser-based content analysis of digitised pathology slides will allow for early prediction of kidney transplant rejection. A prototype has been developed at http://kidneycaliper.lucidifai.com/, but for larger-scale clinical in-house deployment we require local GPU acceleration that can be operated behind clinic firewalls (patient data protection!) to test clinical hypotheses about kidney transplant rejection on multi-modal patient data. Quantitative manual analysis of a single slide used to take over 5 hours; with our tools it can be done in less than 2 hours on Intel CPUs and in less than 60 seconds on Nvidia GPUs. Two A6000 GPUs would be required to allow 8 pathologists to work in parallel on multi-modal data (image + omics).
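
To give a flavour of what GPU-accelerated slide analysis involves, below is a minimal tile-based inference sketch in Python/PyTorch. It is not the KidneyCaliper pipeline: the stand-in one-layer model, the tile size and the random test slide are assumptions for illustration only; a real deployment would load the trained segmentation network and read whole-slide images from the scanner format.

# Minimal tile-based slide inference sketch (hypothetical; not the KidneyCaliper
# pipeline). Assumes a PyTorch model that maps an RGB tile to per-pixel classes.
import numpy as np
import torch
import torch.nn as nn

TILE = 512  # tile edge length in pixels

# Stand-in model: a single 1x1 convolution predicting 3 tissue classes per pixel.
model = nn.Conv2d(3, 3, kernel_size=1)
device = "cuda" if torch.cuda.is_available() else "cpu"  # local GPU if available
model = model.to(device).eval()

def segment_slide(slide: np.ndarray) -> np.ndarray:
    """Run tile-wise inference over an (H, W, 3) uint8 slide and stitch a label map."""
    h, w, _ = slide.shape
    labels = np.zeros((h, w), dtype=np.uint8)
    with torch.no_grad():
        for y in range(0, h, TILE):
            for x in range(0, w, TILE):
                tile = slide[y:y + TILE, x:x + TILE]
                # to float NCHW tensor in [0, 1]
                t = torch.from_numpy(tile).permute(2, 0, 1).float().unsqueeze(0) / 255.0
                logits = model(t.to(device))
                pred = logits.argmax(dim=1).squeeze(0).to("cpu").numpy().astype(np.uint8)
                labels[y:y + TILE, x:x + TILE] = pred
    return labels

# Toy example: a random 2048 x 2048 "slide" stands in for a digitised biopsy region.
print(segment_slide(np.random.randint(0, 255, (2048, 2048, 3), dtype=np.uint8)).shape)

The same loop runs unchanged on CPU; the device selection line is what makes local GPU acceleration behind the clinic firewall a drop-in speed-up.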


Chest radiographs are one of the most common diagnostic modalities in clinical routine. They can be acquired cheaply, require minimal equipment, and can be read by any radiologist. However, the number of chest radiographs obtained on a daily basis can easily overwhelm the available clinical capacities. We propose RATCHET: RAdiological Text Captioning for Human Examined Thoraces. RATCHET is a CNN-RNN-based medical transformer that is trained end-to-end. It is capable of extracting image features from chest radiographs and generates medically accurate text reports that fit seamlessly into clinical workflows. The model is evaluated for its natural language generation ability using common metrics from the NLP literature, as well as for its medical accuracy through a surrogate report classification task. The model is available for download at: http://www.github.com/farrell236/RATCHET.
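
The released code is in the repository above; the following is only a schematic Python/PyTorch sketch of the general idea of a CNN image encoder feeding a transformer text decoder. The DenseNet backbone, the toy vocabulary size and the dimensions are assumptions for illustration and do not reproduce RATCHET’s actual architecture or training.

# Schematic sketch of a CNN encoder feeding a transformer text decoder.
# Not the RATCHET implementation (see the repository above); the DenseNet
# backbone, dimensions and toy vocabulary are assumptions for illustration.
import torch
import torch.nn as nn
from torchvision.models import densenet121

class ImageToReport(nn.Module):
    def __init__(self, vocab_size=1000, d_model=256, n_layers=2, n_heads=4):
        super().__init__()
        self.backbone = densenet121().features               # radiograph -> (B, 1024, h, w)
        self.proj = nn.Conv2d(1024, d_model, kernel_size=1)  # project features to d_model
        self.embed = nn.Embedding(vocab_size, d_model)       # report token embeddings
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)        # next-token logits

    def forward(self, image, tokens):
        # image: (B, 3, H, W) chest radiograph; tokens: (B, T) report tokens so far
        feats = self.proj(self.backbone(image))              # (B, d_model, h, w)
        memory = feats.flatten(2).transpose(1, 2)            # (B, h*w, d_model)
        tgt = self.embed(tokens)                             # (B, T, d_model); positional
                                                             # encodings omitted for brevity
        t = tokens.size(1)
        causal = torch.triu(torch.full((t, t), float("-inf")), diagonal=1)
        out = self.decoder(tgt, memory, tgt_mask=causal)     # attend to the image features
        return self.lm_head(out)                             # (B, T, vocab_size)

model = ImageToReport()
logits = model(torch.randn(1, 3, 224, 224), torch.zeros(1, 12, dtype=torch.long))
print(logits.shape)  # torch.Size([1, 12, 1000])

At inference time such a decoder is run autoregressively: the highest-scoring token is appended to the report and fed back in until an end-of-report token is produced.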


Ever wondered how the human brain works? Check out the Cortical Explorer by Sam Budd (Imperial MEng final-year project).

The Cortical Explorer is an early prototype of a service-oriented architecture for cortical parcellation evaluation with a multi-device, distributed front end, i.e., the browser.
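
As a rough illustration of what such a service-oriented setup implies, here is a minimal, hypothetical Python/Flask endpoint of the kind a browser front end could query for parcellation metadata. The route, the in-memory table and the region entries are invented for the sketch and are not the actual Cortical Explorer API.

# Minimal sketch of a browser-facing parcellation service endpoint (hypothetical;
# not the actual Cortical Explorer API). Assumes Flask and an in-memory label table.
from flask import Flask, jsonify

app = Flask(__name__)

# Hypothetical parcellation table: region id -> name and per-region vertex count.
PARCELLATION = {
    1: {"name": "area_V1", "vertices": 5120},
    2: {"name": "area_MT", "vertices": 2048},
}

@app.route("/parcellation/<int:region_id>")
def get_region(region_id):
    """Return one region's metadata as JSON for any browser-based front end."""
    region = PARCELLATION.get(region_id)
    if region is None:
        return jsonify({"error": "unknown region"}), 404
    return jsonify({"id": region_id, **region})

if __name__ == "__main__":
    app.run(port=8080)  # any device with a browser can then query the service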

The various ways to navigate through our human brain map attracted a large crowd at the Imperial ‘Intelligence redesigned’ Fringe on 18 January 2018. The Cortical Explorer currently also features on the touch screen at the front of the Data Science Institute.



Projects

  • 1) MAVEHA: Automated Fetal and Neonatal Movement Assessment for Very Early Health Assessment

    Fetal movements are an important indicator of a developing baby’s health and particularly of brain development. However, fetal movements are not commonly assessed clinically and no automated tracking or analysis of movements is performed. Ongoing research in our group has developed algorithms to track fetal leg movements, and pilot data indicates that aspects of fetal movements may correlate with healthy or unhealthy brain development. This project will build upon previous projects to develop enhanced automated tracking methods for fetal movements from fetal cine MRI data, and correlate the movements with normal and abnormal brain development. It offers a valuable opportunity to work on a highly interdisciplinary topic in close collaboration with clinicians, and is researched together with Dr Niamh Nowlan from the Department of Bioengineering at Imperial.

  • 2) iFIND: intelligent Fetal Imaging and Diagnosis

    Ultrasound, which passes sound waves into the body to create pictures from their reflections, is commonly used to check that babies in the womb (or fetuses) are healthy. Although every pregnant mother in the country has a scan at around 20 weeks, not all of the babies who have problems are picked up on these ultrasound scans. The iFIND project is about:
    • new technologies that allow scanning to be carried out with multiple ultrasound probes (the devices that take the ultrasound pictures) at the same time, which have better imaging capabilities and move automatically to the right place to get the best pictures of the whole baby;
    • improved fetal ultrasound imaging through automated image processing;
    • combining conventional ultrasound imaging from routine scans with more detailed MRI to build a map of fetal anatomy for computer-assisted diagnosis of fetal anomalies.
    These advances should mean a high-quality scanning service across the country that is not dependent on local expertise, so that fewer babies who have major problems will be missed.

  • 3) Quantitative fetal imaging in utero – novel methods for measuring T1, T2 and perfusion in moving subjects

    The aim is to develop and apply methods for quantitative Magnetic Resonance relaxometry and perfusion assessment in the fetal brain, to create a comprehensive capability to measure key parameters in the presence of fetal motion. Conventional Magnetic Resonance Imaging (MRI) results in subject- and scanner-dependent signals rather than absolute measures. Relaxometry allows scanner- and sequence-independent, unbiased signal maps to be obtained, which have the potential to provide a more objective readout (a minimal sketch of such a relaxometry fit is given after this project list). A complementary type of quantitative imaging focuses on functional measures rather than static tissue relaxation properties. One such measure is tissue perfusion, which is the rate of blood flow through a unit mass of tissue within the micro-vasculature. MRI can be used to quantify perfusion, providing critical information that can be used both for scientific studies and for clinical care. Quantitative imaging requires the combination of multiple measurements at each tissue location, which is a major challenge in the presence of fetal motion.

  • 4) F.A.U.S.T. — Flexible Application of Uncertainty for Scanning and Tracking (2013–2015)

    F.A.U.S.T. dealt with the reconstruction of fetal Magnetic Resonance Imaging (MRI) data acquired directly in the womb. MRI is a harmless imaging procedure which does not use any radiation or other invasive procedures to get the images. However, the fetus is awake and will move around while the images are taken. This is where my colleagues and I come into play: we try to reconstruct the fetus in 3D as if it had not moved. Furthermore, I evaluated these steps for their reliability and researched methods that allow a doctor to draw conclusions about the reliability of each part of the resulting images.

  • 5) ClinicIMPPACT — Clinical Intervention Modelling, Planning, and Proof for Ablation Cancer Treatment

    ClinicIMPPACT is a European FP7 ICT project that started on 01.02.2014 and ended on 31.01.2017. The main objective of the project is to bring the existing radio frequency ablation (RFA) model for liver cancer treatment (project IMPPACT, Grant No. 223877, completed in February 2012) into clinical practice. For this, the project pursues the following objectives: i) to prove and refine the RFA model in a small clinical study; ii) to develop the model into a real-time, patient-specific RFA planning and support system for Interventional Radiologists (IRs) under special consideration of their clinical workflow needs; iii) to establish a corresponding training procedure for IRs; iv) to evaluate the clinical practicality and benefit of the model for use in the routine workflow in a user survey and expert forum.

  • 6) GoSmart — generic open-ended simulation environment for minimally invasive cancer treatment

    GoSmart is a European FP7 ICT project that started on 01.04.2013 and ended on 31.03.2016. It aims to build a generic open-source software simulation environment for the planning of image-guided percutaneous Minimally Invasive Cancer Treatment (MICT). The environment will allow the Interventional Radiologist (IR) to select the optimal type of MICT by simulating the personalized result of the different treatments and medical protocols under patient-specific conditions.

  • Normative Learning

    TBA

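As referenced in project 3, the following minimal Python sketch shows the core of a relaxometry fit: a mono-exponential T2 decay S(TE) = S0 · exp(-TE / T2) fitted per voxel with least squares. The echo times, the noise level and the SciPy-based fit are illustrative assumptions only; the actual fetal pipeline additionally has to deal with motion between the individual measurements.

# Minimal T2 relaxometry sketch: fit S(TE) = S0 * exp(-TE / T2) to multi-echo data.
# Illustrative only; echo times and noise level are assumptions, and the real
# fetal pipeline must additionally correct for motion between measurements.
import numpy as np
from scipy.optimize import curve_fit

def t2_signal(te, s0, t2):
    """Mono-exponential spin-echo decay model."""
    return s0 * np.exp(-te / t2)

# Simulated acquisition: one voxel measured at several echo times (ms).
echo_times = np.array([20.0, 40.0, 60.0, 80.0, 120.0, 160.0])
true_s0, true_t2 = 1000.0, 85.0
rng = np.random.default_rng(0)
signal = t2_signal(echo_times, true_s0, true_t2) + rng.normal(0, 10, echo_times.size)

# Least-squares fit recovers scanner- and sequence-independent parameters S0 and T2.
params, _ = curve_fit(t2_signal, echo_times, signal, p0=(signal[0], 50.0))
print(f"fitted S0 = {params[0]:.1f}, fitted T2 = {params[1]:.1f} ms")

Repeating such a fit at every tissue location yields a quantitative T2 map; the same idea carries over to T1 and perfusion models with different signal equations.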