MICCAI 2024: advancements in aortic valve and liver tumor segmentation

MICCAI 2024 (the Medical Image Computing and Computer Assisted Intervention conference) is coming soon. This prestigious annual event brings together scientists, engineers, and medical professionals working in computational medicine, computer vision, and medical imaging. It offers an international forum for presenting and discussing the most recent developments in medical image processing.

MICCAI 2024: meet us there

That is why attending the conference is essential for us. Graylight Imaging is a company that specializes in developing algorithms and building comprehensive systems for medical imaging, with a primary focus on machine learning and deep learning.

Not surprisingly, presentations related to our company will be featured:

  • during the Poster Sessions:

“Seeing the Invisible: On Aortic Valve Reconstruction in Non-Contrast CT” is the title of the poster that Mariusz Bujny, a research specialist at Graylight Imaging, will present as part of the poster session on Image Registration, Computer-Aided Diagnosis, and Transparency, Fairness, and Uncertainty.

You will have the opportunity to meet Mariusz, ask questions, and engage in discussion with him on Wednesday, October 9, from 10:30 to 11:30 a.m. [1]

  • during the Caption Workshops:

“Automated Hepatocellular Carcinoma Analysis in Multi-Phase CT with Deep Learning” is the title of the paper by Krzysztof Kotowski (KP Labs), Bartosz Machura (Graylight Imaging), Damian Kucharski (Silesian University of Technology), Benjamin Gutierrez Becker (Roche), Agata Krason (Roche), Jean Tessier (Roche), and Jakub Nalepa (Silesian University of Technology).

During the Caption Workshops on Sunday, October 6, from 8:00 a.m. to 12:30 p.m., you will have the opportunity to hear about the topic of the publication, the goal of the project, and the results we achieved. [2]

On aortic valve reconstruction in non-contrast CT: the challenge

Aortic valve segmentation in CT scans is crucial for assessing disease severity and informing treatment decisions. In particular, accurate delineation of the valve in non-contrast CT is essential for aortic valve calcium scoring. Automated detection methods are also vital for coronary artery calcium scoring based on anatomical information, or in cases where contrast agents are contraindicated. However, the low visibility of the aortic valve in non-contrast CT presents a significant challenge. Why? Because the surrounding tissues have a similar radiological density.

Identifying the aorta in non-contrast CT scans: an innovative method

In our work, we propose a scalable, semi-automatic method for generating Ground Truth (GT) data for training ML segmentation models in non-contrast CT. We use this approach to train a deep neural network capable of segmenting aortic roots based exclusively on CT scans without contrast enhancement. Surprisingly, by effectively learning an atlas model, the network is also able to reconstruct the complex geometry of the aortic valve, even when it is not visible in the scan.

However, how can we evaluate the accuracy of aortic valve segmentation in non-contrast CT scans when manual delineation is impossible due to the extremely low visibility of the valve?

We created a method that aligns the aortic root from contrast images to the non-contrast coordinate system using the Iterative Closest Point (ICP) algorithm. This allows us to measure the distance between the reconstructed aortic valve from the non-contrast data and the aligned shape obtained from the accurate contrast images.
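To illustrate the evaluation idea, here is a minimal sketch of ICP-based alignment followed by a surface-distance measurement. It assumes Open3D as the tooling, and the file names, point counts, and correspondence threshold are hypothetical; it is not the exact pipeline used in the poster.

```python
# Minimal sketch (assumed tooling: Open3D; file names and parameters are hypothetical).
# Aligns the aortic-root surface from the contrast scan to the non-contrast coordinate
# system with ICP, then measures the distance to the reconstructed valve surface.
import numpy as np
import open3d as o3d

# Load the two surface meshes (hypothetical paths).
contrast_mesh = o3d.io.read_triangle_mesh("aortic_root_contrast.ply")
noncontrast_mesh = o3d.io.read_triangle_mesh("aortic_root_noncontrast.ply")

# Sample point clouds from the surfaces for registration.
source = contrast_mesh.sample_points_uniformly(number_of_points=20000)
target = noncontrast_mesh.sample_points_uniformly(number_of_points=20000)

# Point-to-point ICP; the identity initialization assumes the scans are already
# roughly aligned (e.g., via DICOM patient coordinates).
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=5.0,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())

# Apply the estimated transform and compute the mean surface distance
# (in millimetres, assuming the meshes are expressed in mm).
source.transform(result.transformation)
distances = np.asarray(target.compute_point_cloud_distance(source))
print(f"Mean surface distance: {distances.mean():.2f} mm")
```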

Our model can accurately segment the aorta in non-contrast cardiac CT scans. We evaluated the model’s performance on a dataset of 70 pairs of contrast and non-contrast CT scans. Segmentations were obtained for both contrast and non-contrast images and converted to surface meshes. This allowed us to quantify the distance between the reconstructed aortic valve from the non-contrast data and the aligned shape obtained from the accurate contrast images. The final model was capable of extrapolating details of the aortic valve with a mean distance error of less than 1 mm, which is the detection limit for non-contrast CT.
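For readers curious how such a mesh-based comparison can be set up, the sketch below shows one possible way to turn binary segmentation masks into surface meshes with marching cubes and compute a symmetric mean surface distance. The libraries (scikit-image, SciPy), voxel spacing, and variable names are illustrative assumptions, not the evaluation code from the study.

```python
# Minimal sketch (assumed tooling: scikit-image + SciPy; names are illustrative).
# Converts binary segmentation masks to surface vertices via marching cubes and
# computes a symmetric mean surface distance between two aligned surfaces.
import numpy as np
from skimage import measure
from scipy.spatial import cKDTree

def mask_to_surface(mask: np.ndarray, spacing: tuple) -> np.ndarray:
    """Return surface vertices (in mm) of a binary mask via marching cubes."""
    verts, _, _, _ = measure.marching_cubes(mask.astype(np.uint8), level=0.5,
                                            spacing=spacing)
    return verts

def mean_surface_distance(verts_a: np.ndarray, verts_b: np.ndarray) -> float:
    """Symmetric mean of nearest-neighbour distances between two vertex sets."""
    d_ab = cKDTree(verts_b).query(verts_a)[0]
    d_ba = cKDTree(verts_a).query(verts_b)[0]
    return float((d_ab.mean() + d_ba.mean()) / 2)

# Hypothetical usage with two already-aligned masks and 0.5 mm isotropic voxels:
# msd = mean_surface_distance(mask_to_surface(pred_mask, (0.5, 0.5, 0.5)),
#                             mask_to_surface(ref_mask, (0.5, 0.5, 0.5)))
```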

We’ve created a reliable machine-learning model

In summary, the model we developed is useful for identifying the aorta in non-contrast CT scans. This model is based on a semi-automated process that uses image registration to generate accurate training data.

This method could also be extended to segment the aortic valve in high-resolution non-contrast series for patients who cannot receive contrast agents.

Automated Hepatocellular Carcinoma Analysis in Multi-Phase CT with Deep Learning – the paper

Hepatocellular carcinoma (HCC) is one of the most common types of liver cancer. To effectively diagnose it and track its progression, it is necessary to examine CT scans acquired at multiple time points after a contrast agent is injected into the bloodstream.

In this regard, Roche created a project titled “Automated analysis of hepatocellular carcinoma in multi-phase CT using deep learning”. At MICCAI 2024, Benjamin Gutierrez Becker from Roche will present a study introducing the first fully automated HCC assessment from multi-phase CT scans.

At this year’s MICCAI conference, Benjamin Gutierrez Becker will speak about the project described in the publication Automated Hepatocellular Carcinoma Analysis in Multi-Phase CT with Deep Learning. On behalf of Graylight Imaging, the co-authors of the publication are Krzysztof Kotowski and Bartosz Machura; on behalf of the Silesian University of Technology, Jakub Nalepa and Damian Kucharski; and from Roche, Jean Tessier, Benjamin Gutierrez Becker, and Agata Krason.

MICCAI 2024 and two interesting topics from the world of medicine

As you can see, Graylight Imaging will participate in MICCAI 2024. Firstly, Mariusz Bujny from Graylight will present a poster on aortic valve reconstruction in non-contrast CT scans. Secondly, Benjamin Gutierrez Becker from Roche will present a paper that was co-authored by experts from Graylight Imaging. Join us!

References:
