IEEE 2021 NSS MIC

Please note: all times in the online program are given in New York, America (GMT -04:00) time.


Denoising and Segmentation Using Deep Learning Approaches

Session chairs: Gong, Kuang (Massachusetts General Hospital and Harvard Medical School, Department of Radiology, Boston, USA); Jaouen, Vincent (LaTIM, INSERM, Brest, France)
 
Shortcut: M-08
Date: Thursday, 21 October, 2021, 9:15 AM - 11:15 AM
Room: MIC - 2
Session type: MIC Session

Contents


9:15 AM M-08-01

Unsupervised PET Image Denoising Using Attention-Guided Anatomical Information (#246)

Y. Onishi1, F. Hashimoto1, K. Ote1, H. Ohba1, R. Ota1, E. Yoshikawa1, Y. Ouchi2

1 Hamamatsu Photonics K. K., Central Research Laboratory, Hamamatsu, Japan
2 Hamamatsu University School of Medicine, Department of Biofunctional Imaging, Preeminent Medical Photonics Education & Research Center, Hamamatsu, Japan

Abstract

Although supervised convolutional neural networks (CNNs) often outperform conventional alternatives for denoising positron emission tomography (PET) images, they require many pairs of low- and high-quality reference PET images. Herein, we propose an unsupervised 3D PET image denoising method based on attention-guided anatomical information. Our proposed magnetic resonance-guided deep decoder (MR-GDD) utilizes the spatial details and semantic features of the MR image more effectively by introducing encoder-decoder and deep decoder subnetworks. Moreover, the specific shapes and patterns of the guidance image do not affect the denoised PET image, because the guidance image is input to the network through an attention gate. Monte Carlo simulations using [18F]fluoro-2-deoxy-D-glucose (FDG) show that the proposed method outperforms other denoising algorithms, achieving the highest peak signal-to-noise ratio and structural similarity. For preclinical (using [18F]FDG and [11C]raclopride) and clinical (using [18F]florbetapir) studies, the proposed method demonstrates state-of-the-art denoising performance while retaining spatial resolution and quantitative accuracy, despite using only a single architecture for various noisy PET images with 1/10th of the full counts. These results suggest that the proposed MR-GDD can considerably reduce PET scan times and PET tracer doses without impacting patients.

Keywords: Positron emission tomography, Image denoising, Unsupervised learning, Attention, Anatomical information
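
As a reading aid for the attention-gated guidance described in this abstract, the following is a minimal PyTorch sketch of a 3-D attention gate with hypothetical channel sizes; it illustrates the general idea only and is not the authors' MR-GDD code.

# Minimal sketch of an attention gate fusing MR guidance with PET features.
# Hypothetical layer sizes; not the authors' MR-GDD implementation.
import torch
import torch.nn as nn

class AttentionGate3D(nn.Module):
    def __init__(self, pet_ch, mr_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv3d(pet_ch, inter_ch, kernel_size=1)  # PET feature branch
        self.phi = nn.Conv3d(mr_ch, inter_ch, kernel_size=1)     # MR guidance branch
        self.psi = nn.Conv3d(inter_ch, 1, kernel_size=1)         # attention coefficients
        self.relu = nn.ReLU(inplace=True)
        self.sigmoid = nn.Sigmoid()

    def forward(self, pet_feat, mr_feat):
        # Combine both branches, squash to a [0, 1] spatial attention map,
        # and use it to gate the MR features before they guide the decoder.
        attn = self.sigmoid(self.psi(self.relu(self.theta(pet_feat) + self.phi(mr_feat))))
        return mr_feat * attn  # gated anatomical guidance

# Example: gate 16-channel MR features with 16-channel PET features on a 32^3 patch.
gate = AttentionGate3D(pet_ch=16, mr_ch=16, inter_ch=8)
pet = torch.randn(1, 16, 32, 32, 32)
mr = torch.randn(1, 16, 32, 32, 32)
print(gate(pet, mr).shape)  # torch.Size([1, 16, 32, 32, 32])

Because the MR image only enters through this gate, the gated features carry anatomical structure without imposing the guidance image's specific shapes on the denoised PET output.
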
9:30 AM M-08-02

Uncertainty Prediction for Deep Learning-based Image Denoising in Low-dose CT Imaging (#304)

D. Wu1, Y. Xie2, Q. Li1

1 Massachusetts General Hospital, Radiology, Boston, Massachusetts, United States of America
2 Peking University, Academy for Advanced Interdisciplinary Studies, Beijing, China

Abstract

Deep learning-based low-dose CT image denoising demonstrates good performance, but it remains unclear how certain the results of a trained denoising network are. Existing methods to quantify the uncertainty of denoising networks either assume simple noise models or require Monte Carlo sampling during testing. In this work, we propose a simple but effective method to directly predict the uncertainty for a given denoising network. An uncertainty prediction network with a basic UNet structure is trained on the training dataset to predict, under an L2-norm loss, the squared error between the denoising results and the label images. Given a single noisy image and its denoising result, the network then outputs the expected error between the denoising result and possible clean images. The proposed method places no constraints on the structure of the denoising network and is very easy to implement. We validated the efficacy of the proposed method with both simulated and real data.

Keywords: Computed Tomography, Image denoising, Artificial neural networks, Uncertainty
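
The training objective described above can be illustrated with a minimal PyTorch sketch, assuming placeholder convolutional stand-ins for the denoiser and the UNet-style uncertainty predictor and random tensors in place of the CT data; it is not the authors' implementation.

# Sketch of training an uncertainty-prediction network to regress the squared
# error of a frozen denoiser (placeholder models and data throughout).
import torch
import torch.nn as nn

denoiser = nn.Conv2d(1, 1, 3, padding=1)    # stands in for the trained denoising network
uncert_net = nn.Conv2d(2, 1, 3, padding=1)  # stands in for the basic UNet predictor
denoiser.eval()                             # the denoiser is fixed during this stage

opt = torch.optim.Adam(uncert_net.parameters(), lr=1e-4)
mse = nn.MSELoss()                          # the L2-norm objective

for _ in range(10):                         # toy loop; real training iterates over the dataset
    noisy = torch.randn(4, 1, 64, 64)       # low-dose inputs (placeholder)
    label = torch.randn(4, 1, 64, 64)       # normal-dose labels (placeholder)
    with torch.no_grad():
        denoised = denoiser(noisy)          # denoiser output, no gradients needed
    target = (denoised - label) ** 2        # voxel-wise squared error to be regressed
    pred = uncert_net(torch.cat([noisy, denoised], dim=1))  # noisy image + denoising result
    loss = mse(pred, target)
    opt.zero_grad(); loss.backward(); opt.step()

At test time no label is needed: the uncertainty network maps the noisy image and its denoising result directly to an expected-error map.
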
9:45 AM M-08-03

PET denoising and uncertainty estimation based on NVAE model (#725)

J. Cui1, 2, Y. Xie3, K. Gong2, 4, K. Kim2, 4, J. Yang5, P. Larson5, T. Hope5, S. Behr5, Y. Seo5, H. Liu1, Q. Li2, 4

1 Zhejiang University, State Key Laboratory of Modern Optical Instrumentation, College of Optical Science and Engineering, Hangzhou, China
2 Massachusetts General Hospital/Harvard Medical School, the Center for Advanced Medical Computing and Analysis, Boston, Massachusetts, United States of America
3 Beijing University, Beijing, China
4 Massachusetts General Hospital/Harvard Medical School, the Gordon Center for Medical Imaging, Boston, Massachusetts, United States of America
5 University of California, Department of Radiology and Biomedical Imaging, San Francisco, California, United States of America

Abstract

Deep neural network architectures are constantly evolving, and their performance keeps improving. Recently, a new network, the Nouveau variational auto-encoder (NVAE), was proposed and has gained great attention. Beyond its ability to generate high-quality images, a more important property of NVAE is that it produces a distribution, which makes it possible to measure uncertainty. In this work, we propose to use NVAE for PET image denoising and to estimate the uncertainty from both the training data and the model at the same time. 2.5D training was performed on 28 patients, and quantification based on 7 real patient datasets showed that NVAE performs well for PET denoising, outperforming the Unet. The variance of 50 sampled outputs was calculated to produce the uncertainty map.

Keywords: PET, denoising, Nouveau variational auto-encoder, uncertainty
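
The uncertainty map mentioned at the end of the abstract amounts to the voxel-wise variance over repeated stochastic samples. A minimal sketch, assuming a generic model object with a hypothetical sample() method standing in for one NVAE draw (not a real library API):

# Sketch: voxel-wise uncertainty as the variance of repeated stochastic samples.
import torch

def uncertainty_map(model, noisy, n_samples=50):
    samples = []
    with torch.no_grad():
        for _ in range(n_samples):
            samples.append(model.sample(noisy))     # one stochastic denoised output
    stack = torch.stack(samples, dim=0)             # (n_samples, ...) tensor
    mean_image = stack.mean(dim=0)                  # denoised estimate
    variance_map = stack.var(dim=0, unbiased=True)  # voxel-wise uncertainty
    return mean_image, variance_map
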
10:00 AM M-08-04

A Spatiotemporal Unpaired Deep Learning Method for Low-Dose Cardiac CT Image Denoising (#1019)

J. Yang1, S. Zhou2, C. Li1, L. Yu3, J. Huang1, M. Jin2

1 University of Texas, Arlington, Computer Science and Engineering, Arlington, Texas, United States of America
2 University of Texas, Arlington, Physics, Arlington, Texas, United States of America
3 Mayo Clinic, Radiology, Rochester, Minnesota, United States of America

Abstract

Multi-phase computed tomographic angiography (MP-CTA) may offer a much improved risk-benefit ratio for the diagnosis of coronary artery disease if its radiation dose can be significantly reduced without compromising diagnostic accuracy. To suppress the elevated noise in low-dose CT, deep learning-based denoising methods have been actively investigated, but they have been limited to the spatial domain. In this study, we propose to use RecycleGAN, which evolved from the GAN with a cycle-consistency loss (CycleGAN), for denoising of low-dose MP-CTA (LDMP-CTA). Since the image series of LDMP-CTA is temporally ordered, RecycleGAN deploys a recurrent loss for temporal prediction from one image frame to another and replaces the cycle-consistency loss in CycleGAN (designed for static images) with a recycle loss that enforces the consistency of predicted temporal frames between the low-dose and full-dose image domains. Thus, RecycleGAN can be trained utilizing the temporal order of MP-CTA image series. CycleGAN and RecycleGAN were trained and tested on simulated full-dose (FDMP-CTA) and LDMP-CTA images generated using the XCAT program with data from 18 patients. Although CycleGAN showed substantial improvement in image quality and quantitative metrics for LDMP-CTA, RecycleGAN further significantly improved these evaluation metrics over CycleGAN. This study lays a strong foundation for a further comprehensive investigation of RecycleGAN using real patient data.

Acknowledgment: This work was supported in part by the U.S. National Institutes of Health under Grant No. NIH/NIHLB 1R15HL150708-01A1.
Keywords: Multi-phase computed tomographic angiography (MP-CTA), low-dose MP-CTA, image denoising, CycleGAN, RecycleGAN
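
The recycle loss described above can be sketched as follows, assuming placeholder single-layer generators and a one-frame temporal predictor (the original RecycleGAN predictor conditions on more than one previous frame); this follows the RecycleGAN idea rather than the authors' exact implementation.

# Sketch of the recycle loss that replaces CycleGAN's cycle-consistency loss.
# G_LF: low-dose -> full-dose generator, G_FL: full-dose -> low-dose generator,
# P_F: temporal predictor in the full-dose domain. All are placeholders here.
import torch
import torch.nn as nn

G_LF = nn.Conv2d(1, 1, 3, padding=1)
G_FL = nn.Conv2d(1, 1, 3, padding=1)
P_F = nn.Conv2d(1, 1, 3, padding=1)
l1 = nn.L1Loss()

def recycle_loss(x_t, x_next):
    """x_t, x_next: consecutive low-dose MP-CTA frames."""
    y_t = G_LF(x_t)                 # translate frame t to the full-dose domain
    y_next_pred = P_F(y_t)          # predict the next full-dose frame
    x_next_rec = G_FL(y_next_pred)  # translate the prediction back to low dose
    return l1(x_next_rec, x_next)   # enforce temporal + cross-domain consistency

x_t = torch.randn(2, 1, 64, 64)
x_next = torch.randn(2, 1, 64, 64)
print(recycle_loss(x_t, x_next).item())

In this way the temporal order of the MP-CTA series itself supervises the translation, which is what distinguishes the recycle loss from the static cycle-consistency loss.
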
10:15 AM M-08-05

Efficient domain adaptation few-shot learning for PET image denoising (#1194)

K. Kim1, P. Xu1, J. Koh1, D. Wu1, K. Gong1, Y. Han1, J. Yang2, P. Larson2, T. Hope2, S. Behr2, Y. D. Son3, J. H. Kim3, Y. Seo2, Q. Li1

1 Massachusetts General Hospital & Harvard Medical School, Radiology Department, Boston, Massachusetts, United States of America
2 University of California, San Francisco, Radiology and Biomedical Imaging, San Francisco, California, United States of America
3 Gachon University, Incheon, Republic of Korea

Abstract

Deep learning has been successfully used for PET image enhancement; however, it is almost impossible to train all kinds of deep learning models with sufficient data because there are various target regions, doses, and radiotracers. Domain adaptation has been increasingly investigated to improve the performance of a model on a new task and to achieve generalization, where the source and target domains share a common feature space but have different distributions. Although the source domain can have sufficient data, the target domain is likely to suffer from insufficient (referred to as few-shot) data. In this paper, we propose a novel domain adaptation few-shot learning method for PET image denoising, where the feature maps from the trained model are directly utilized without the source data for computational efficiency. The distributions of the feature maps of the two domains are compared in a feature loss based on the KL divergence. The optimization reduces both the feature loss and the root mean square error (RMSE) loss. We demonstrate that the proposed method can improve PET images quantitatively and qualitatively in the target domain with small training data, which shows the feasibility of generalization for clinical use.

Keywords: Domain adaptation, PET image denoising
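
One way to read the combined objective described above is the sketch below, which assumes the feature-map distributions are summarized as Gaussians and compared with a closed-form KL divergence, weighted against an RMSE data loss by a hypothetical factor lam; the authors' actual loss formulation may differ.

# Sketch of a combined objective: a KL-divergence feature loss between the
# source- and target-domain feature-map statistics plus an RMSE data loss.
import torch

def gaussian_kl(feat_src, feat_tgt, eps=1e-6):
    # Summarize each feature map as a Gaussian and use the closed-form KL
    # divergence KL(N(mu1, var1) || N(mu2, var2)).
    mu1, var1 = feat_src.mean(), feat_src.var() + eps
    mu2, var2 = feat_tgt.mean(), feat_tgt.var() + eps
    return 0.5 * torch.log(var2 / var1) + (var1 + (mu1 - mu2) ** 2) / (2 * var2) - 0.5

def total_loss(pred, target, feat_src, feat_tgt, lam=0.1):
    rmse = torch.sqrt(torch.mean((pred - target) ** 2))  # data fidelity term
    return rmse + lam * gaussian_kl(feat_src, feat_tgt)  # plus the feature loss

feat_src = torch.randn(1, 32, 16, 16)  # e.g. feature maps from the source-trained model
feat_tgt = torch.randn(1, 32, 16, 16)  # feature maps produced in the target domain
pred = torch.randn(1, 1, 64, 64)
target = torch.randn(1, 1, 64, 64)
print(total_loss(pred, target, feat_src, feat_tgt).item())
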
10:30 AM M-08-06

A novel unsupervised COVID-19 lesion segmentation from CT images based on the lung tissue detection (#525)

F. Gholamiankhah1, S. Mostafapour2, S. Shojaerazavi3, N. Abdi Goushbolagh1, H. Arabi4, H. Zaidi4

1 Shahid Sadoughi University of Medical Sciences, Department of Medical Physics, Yazd, Iran (Islamic Republic of)
2 Mashhad University of Medical Sciences, Department of Radiology Technology, Mashhad, Iran (Islamic Republic of)
3 Mashhad University of Medical Sciences, Department of Cardiology, Mashhad, Iran (Islamic Republic of)
4 Geneva University Hospital, Division of Nuclear Medicine & Molecular Imaging, Geneva, Switzerland

Abstract

Image segmentation plays a significant role in quantitative image analysis. Lung segmentation of CT images has gained importance in the fight against COVID-19. In this work, a novel unsupervised framework was developed for COVID-19 infectious lesion segmentation from CT images without using annotated data. A residual network was trained in a supervised manner on 450 normal cases and 450 COVID-19 patients separately for the lung segmentation task (DL-Norm and DL-Covid, respectively). The outcomes of both models were voxel-wise probability maps. For COVID-19 lesion prediction, the DL-Covid and DL-Norm models were applied to COVID-19 CT images. The DL-Covid model (trained only with COVID-19 CT images) is familiar with COVID-19 infections as well as healthy lung tissue, whereas DL-Norm is familiar only with healthy lung tissue and assigns low probabilities to COVID-19 infections. Thus, lung lesion probability maps can be obtained by subtracting the two lung probability maps predicted by DL-Covid and DL-Norm. The performance of the infection segmentation framework was assessed on 50 COVID-19 CT images, with manual lesion segmentation as the reference. Different parameters such as the Dice coefficient, Jaccard index (JC), false-positive ratio, and false-negative ratio were calculated. Dice coefficients of 0.985 ± 0.003 and 0.978 ± 0.010 were achieved for lung segmentation from normal and COVID-19 CT images, respectively. Quantitative analysis of COVID-19 lesion segmentation revealed a Dice coefficient and JC of 0.67 ± 0.033 and 0.60 ± 0.06, respectively. Furthermore, a false-positive ratio of 0.072 ± 0.049 and a false-negative ratio of 0.062 ± 0.042 were obtained for the COVID-19 lesion segmentation. The proposed unsupervised approach for COVID-19 infection segmentation showed satisfactory performance. Its outcome could be employed in supervised deep learning algorithms with noisy labels or weakly annotated data to achieve higher lung lesion segmentation accuracy.

Keywords: Unsupervised Learning, COVID-19, label-free lesion segmentation, deep learning
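
The subtraction step and the Dice evaluation described above can be sketched as follows, assuming a hypothetical 0.5 threshold on the difference of the two probability maps; this is an illustration, not the authors' exact post-processing.

# Sketch: lesion map from the difference of the DL-Covid and DL-Norm
# probability maps, plus the Dice coefficient used for evaluation.
import numpy as np

def lesion_mask(p_covid, p_norm, threshold=0.5):
    # DL-Covid assigns high probability to infected and healthy lung voxels,
    # DL-Norm only to healthy lung, so their difference highlights lesions.
    diff = np.clip(p_covid - p_norm, 0.0, 1.0)
    return diff > threshold

def dice(pred, ref, eps=1e-8):
    pred, ref = pred.astype(bool), ref.astype(bool)
    return 2.0 * np.logical_and(pred, ref).sum() / (pred.sum() + ref.sum() + eps)

p_covid = np.random.rand(64, 64, 64)  # placeholder probability maps
p_norm = np.random.rand(64, 64, 64)
reference = np.random.rand(64, 64, 64) > 0.5
print(dice(lesion_mask(p_covid, p_norm), reference))
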
10:45 AM M-08-07

Synthetic tumor insertion using one-shot generative learning for cross-modal image segmentation (#817)

G. Sallé1, P.-H. Conze1, N. Boussion1, 2, J. Bert1, 2, D. Visvikis1, V. Jaouen1

1 UMR 1101 Inserm LaTIM, Université de Bretagne Occidentale, IMT Atlantique, Brest, France
2 CHRU de Brest, University Hospital, Brest, France

Abstract

Unpaired cross-modal translation with a cyclic loss is increasingly used for a large variety of medical imaging applications, such as segmentation. However, finer-scale details like tumors may be lost during translation, which is a critical limitation in oncological imaging. In this paper, we address the problem of vanishing tumors in cross-modal segmentation. First, we propose a new method to insert realistic tumors into 3-D images using a deep generative model trained on a single 2-D image. Second, we leverage the proposed model in a new unpaired-then-paired two-stage image-to-image architecture to better penalize the suppression of tumors in cross-modal segmentation. In our experiments, we validate our model on the ongoing MICCAI crossMoDA tumor segmentation challenge, where we demonstrate superior performance over CycleGAN-based models.

Keywords: Image segmentation, Domain adaptation, Multi-modal imaging, Tumor imaging
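
The tumor-insertion step can be pictured as compositing a generated tumor patch into a 3-D volume with a soft mask. The sketch below assumes the patch and mask come from the generative model and uses simple alpha blending, which only illustrates the idea and is not the authors' method.

# Sketch: blend a synthetic tumor patch into a 3-D volume using a [0, 1] mask.
import numpy as np

def insert_tumor(volume, tumor_patch, mask, corner):
    """Composite tumor_patch into volume at `corner`; mask and patch share a shape."""
    out = volume.copy()
    z, y, x = corner
    dz, dy, dx = tumor_patch.shape
    region = out[z:z + dz, y:y + dy, x:x + dx]
    out[z:z + dz, y:y + dy, x:x + dx] = mask * tumor_patch + (1.0 - mask) * region
    return out

volume = np.zeros((64, 64, 64))
tumor_patch = np.random.rand(16, 16, 16)   # placeholder generated tumor
mask = np.random.rand(16, 16, 16)          # placeholder soft insertion mask
augmented = insert_tumor(volume, tumor_patch, mask, corner=(20, 20, 20))

Volumes augmented this way provide known tumor locations, which is what allows the paired second stage to penalize tumor suppression explicitly.
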
11:00 AM M-08-08

Pathological Prostate Gleason Score Prediction Using MRI Radiomics and Machine Learning Algorithms (#905)

S. Bagheri2, G. Hajianfar3, A. Saberi1, M. Oveisi4, I. Shiri1, H. Zaidi1

1 Geneva University Hospital, Division of Nuclear Medicine and Molecular Imaging, Geneva, Genève, Switzerland
2 Kashan University of Medical Sciences, Department of Medical physics, Kashan, Iran (Islamic Republic of)
3 Iran University of Medical Science, Rajaie Cardiovascular Medical and Research Center, Tehran, Iran (Islamic Republic of)
4 Kings College London, London, United Kingdom

Abstract

Prostate cancer (PCa) is one of the most critical diseases among men. Clinical diagnosis and follow-up are commonly performed using MRI, which has proven to have good specificity and sensitivity for identifying prostate cancer. The pathologic grade, or PCa aggressiveness, is expressed on the Gleason score scale. Gleason patterns are numbered from 1 to 5 in order of increasing cellular disorder and loss of normal glandular architecture. Lately, in addition to conventional parameters extracted from MRI, there has been noticeable progress in the extraction of high-throughput quantitative features from medical images, called radiomics. In this work, we enrolled 140 patients with prostate cancer. The whole prostate was segmented manually to extract 55 radiomic features, including shape-based, histogram-based, and texture-based features. Data were randomly split into 70% training and 30% test datasets. Z-score normalization was applied to the features, which were then fed to different feature selection methods and classifiers. We demonstrated the feasibility of differentiating Gleason scores (3+4 from 4+3) using MRI radiomics and machine learning algorithms.

Acknowledgment: This work was supported by the Swiss National Science Foundation under grant SNRF 320030_176052 and by the Swiss Cancer Research Foundation under grant KFS-3855-02-2016.
Keywords: prostate cancer, Gleason score, Machine Learning, Radiomics.
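
The described workflow (70/30 split, z-score normalization, feature selection, classification) maps naturally onto a scikit-learn pipeline. The sketch below uses placeholder data and one possible selector/classifier pair; the abstract states that several feature selectors and classifiers were evaluated.

# Sketch of the radiomics workflow: split, z-score normalization, feature
# selection, and classification. X (140 x 55 features) and y are placeholders.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.metrics import roc_auc_score

X = np.random.rand(140, 55)          # placeholder radiomic feature matrix
y = np.random.randint(0, 2, 140)     # placeholder labels (3+4 vs 4+3)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

model = Pipeline([
    ("scale", StandardScaler()),                  # z-score normalization
    ("select", SelectKBest(f_classif, k=10)),     # one possible feature selector
    ("clf", LogisticRegression(max_iter=1000)),   # one possible classifier
])
model.fit(X_train, y_train)
print(roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
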
