IEEE 2017 NSS/MIC/RTSD

Online Program Overview Session: M-02

Image Reconstruction I

Session chair: Johan Nuyts; Arman Rahmim
Shortcut: M-02
Date: Wednesday, October 25, 2017, 10:20
Room: Centennial IV
Session type: MIC Session

Regularization in iterative image reconstruction


10:20 am M-02-1

Multi-modal weighted quadratic priors for robust intensity independent synergistic PET-MR reconstruction (#3037)

A. Mehranian1, M. A. Belzunce1, C. J. McGinnity1, C. Prieto1, A. Hammers1, A. J. Reader1

1 King's College London, Division of Imaging Sciences & Biomedical Engineering, London, United Kingdom of Great Britain and Northern Ireland


We propose a simple and robust synergistic PET-MR reconstruction algorithm using mutually-weighted quadratic priors. Maximum a posteriori (MAP) objective functions were used for the PET and MR reconstructions: MAP expectation maximization (MAPEM) for PET and MAP sensitivity encoding (SENSE) for MR. For both reconstructions, mutually-weighted quadratic priors were used to reduce noise and artifacts while preserving common PET-MR boundaries. The weighting coefficients are updated from the current PET and MR estimates using normalized multi-modal Gaussian similarity kernels, which are in turn derived as the product of modality-specific kernels. Hence, the resulting kernels are independent of both signal intensities and contrast orientations. The performance of the proposed method was evaluated using realistic 3D simulations and a clinical FDG PET/T1-MPRAGE/FLAIR MR dataset. For the simulations, undersampled MR reconstructions with undersampling factors of 4, 6 and 8 were considered, while for the clinical dataset an MR undersampling factor of 4 was used. For PET, the proposed method was compared with maximum likelihood expectation maximization (MLEM) and fully-sampled MR-guided MAPEM (as a PET benchmark). For MR, the proposed method was compared with fully-sampled reconstruction (as an MR benchmark), total variation (TV) regularized undersampled SENSE, and PET/FLAIR-guided undersampled SENSE. The results showed that the proposed method can outperform conventional reconstructions, especially for highly undersampled MR data, while preserving modality-unique features. For the clinical dataset, the proposed method showed promising results, especially for PET reconstruction, in spite of the substantial PET-MR intensity and contrast differences. In summary, the proposed algorithm and priors offer a robust framework for multi-modal synergistic image reconstruction.
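The mutually-weighted prior described above can be sketched in a few lines. The following is an illustrative 1-D NumPy version under stated assumptions (two-neighbour cliques, hypothetical function and parameter names; the authors' 3D implementation is not reproduced here): each voxel's weight to a neighbour is the product of modality-specific Gaussian similarity kernels, normalised per voxel so the weights are independent of the absolute signal intensities.

```python
import numpy as np

def multimodal_weights(pet, mr_img, sigma_pet, sigma_mr):
    """Illustrative 1-D weights for a mutually-weighted quadratic prior."""
    def kernels(img, sigma):
        # Gaussian similarity to the left and right neighbour of each voxel
        d_right = np.diff(img, append=img[-1])
        d_left = np.diff(img, prepend=img[0])
        return (np.exp(-d_left**2 / (2 * sigma**2)),
                np.exp(-d_right**2 / (2 * sigma**2)))

    pet_l, pet_r = kernels(pet, sigma_pet)
    mr_l, mr_r = kernels(mr_img, sigma_mr)
    # Multi-modal kernel = product of the modality-specific kernels
    w_l, w_r = pet_l * mr_l, pet_r * mr_r
    # Per-voxel normalisation makes the weights intensity-independent
    norm = w_l + w_r
    return w_l / norm, w_r / norm
```

Because the kernels multiply, an edge present in either modality suppresses smoothing across it, which is what preserves the common PET-MR boundaries.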

Keywords: Multi-modal imaging, synergistic reconstruction, PET, MRI

10:38 am M-02-2

Guided Image Reconstruction for Multi-Tracer PET (#1674)

S. Ellis1, A. Mallia1, 2, C. J. McGinnity1, 2, G. J. R. Cook1, 2, A. J. Reader1

1 King's College London, Division of Imaging Sciences and Biomedical Engineering, London, United Kingdom of Great Britain and Northern Ireland
2 King's College London and Guy's and St Thomas' NHS Foundation Trust, PET Centre, London, United Kingdom of Great Britain and Northern Ireland


Scanning a patient with two or more PET radiotracers provides complementary imaging information from the different tracers. Invariably, one of the datasets will yield a higher quality image than the others for a number of possible reasons, such as i) the positron energy of the radioisotope used, ii) the contrast of the tracer uptake, iii) the specificity of the tracer uptake, or iv) the quantity of injected activity and the total detected counts. This work proposes using a higher quality PET image (the prior) to guide the reconstruction of another, lower quality PET dataset to help compensate for the reduced image quality. The prior image is used to calculate the a priori similarities between each voxel and its neighbours. Similarity is quantified using a patch-based Gaussian kernel modulated by the spatial distances between voxels. A patch-based sparsification step is also included to reduce the number of non-zero similarities. The second dataset is then reconstructed with a weighted quadratic prior using these similarities as spatially-variant weights. This method penalises intensity differences between voxel pairs according to their similarity in the prior image. The proposed methodology has been tested on [18F]fluorodeoxyglucose (FDG)/[11C]methionine (MET) paired datasets. In a 3D simulation study, the FDG-guided MET reconstruction produced images with lower whole-brain error levels than both unregularised maximum likelihood expectation-maximisation and an unguided quadratically penalised method. These improvements were also observed for real data from a patient who had undergone scans with both tracers, where noise reduction and greater anatomical detail were attained using the FDG-guided MET reconstruction. These results suggest that using one PET tracer to guide the reconstruction of another is both feasible and potentially beneficial. Future work will require hyperparameter optimisation and further application-specific validation.
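The similarity computation described above (patch-based Gaussian kernel, spatial-distance modulation, sparsification) can be sketched as follows. This is a minimal 1-D illustration with assumed function and parameter names, not the authors' implementation:

```python
import numpy as np

def guided_similarities(prior, sigma_patch=1.0, sigma_dist=2.0, keep=3):
    """Patch-based similarities from a guide (prior) image, 1-D sketch."""
    n = len(prior)
    padded = np.pad(prior, 1, mode='edge')
    # 3-voxel patch around each voxel
    patches = np.array([padded[i:i + 3] for i in range(n)])
    w = np.zeros((n, n))
    for j in range(n):
        for k in range(n):
            if j == k:
                continue
            patch_d2 = np.sum((patches[j] - patches[k])**2)
            spatial_d2 = (j - k)**2
            # Gaussian on patch distance, modulated by spatial distance
            w[j, k] = (np.exp(-patch_d2 / (2 * sigma_patch**2))
                       * np.exp(-spatial_d2 / (2 * sigma_dist**2)))
        # Sparsification: zero out all but the `keep` largest similarities
        w[j, np.argsort(w[j])[:-keep]] = 0.0
    return w
```

The resulting matrix would then supply the spatially-variant weights of the quadratic penalty, so that intensity differences are penalised only between voxel pairs that look similar in the prior.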

Keywords: Positron emission tomography, image reconstruction, guided reconstruction, regularised reconstruction, multi-tracer, oncology

10:56 am M-02-3

A New PET Reconstruction Formulation that Enforces Non-negativity in Projection Space for Bias Reduction in Y-90 Imaging (#1039)

H. Lim1, Y. K. Dewaraja2, J. A. Fessler1

1 University of Michigan, Electrical Engineering and Computer Science, Ann Arbor, Michigan, United States of America
2 University of Michigan, Radiology, Ann Arbor, Michigan, United States of America


Most existing PET image reconstruction methods impose a nonnegativity constraint in the image domain that is physically natural but can lead to biased reconstructions. This bias is particularly problematic for Y-90 PET because of the low probability of positron production and the high random fractions. We propose a new PET reconstruction formulation that enforces nonnegativity of the projections instead of the voxel values. This formulation allows some negative voxel values, thereby potentially reducing bias. To relax the non-negativity constraint embedded in PET reconstruction, we used the Alternating Direction Method of Multipliers (ADMM). As the choice of ADMM parameters can greatly influence the convergence rate, we applied an automatic parameter selection method to improve the convergence speed. Initial testing of several variants, differentiated by the base model and the constraint condition, was performed using lung-to-liver slices of the XCAT phantom. We simulated low true-coincidence count rates with high random fractions corresponding to typical values from Y-90 radioembolization patient data. We compared our new methods with standard reconstruction algorithms. As the proposed algorithm iterates, the new method reduces the bias in the cold spot while yielding lower noise than the standard method. The new model reduces the error in total activity in the field of view by 11.1-43.7% when the methods achieve a similar level of noise in the liver. The improvements with the new method are especially notable when simulating conditions corresponding to patients with lower administered activity (i.e., higher random fractions).
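The variable splitting behind this formulation can be sketched with a toy ADMM loop. As a simplifying assumption, a quadratic data term replaces the Poisson log-likelihood used in the paper, and the function name and parameters are illustrative; the key point is that nonnegativity is imposed on the split projection variable v = Ax, while the voxel values x remain free to go negative:

```python
import numpy as np

def admm_projection_nonneg(A, y, rho=1.0, n_iter=1000):
    """ADMM sketch enforcing v = A x >= 0 in projection space."""
    m, n = A.shape
    x = np.zeros(n)
    v = np.zeros(m)
    u = np.zeros(m)  # scaled dual variable
    AtA = A.T @ A + 1e-8 * np.eye(n)
    for _ in range(n_iter):
        # x-update: unconstrained least squares (voxels may be negative)
        x = np.linalg.solve(AtA, A.T @ (v - u))
        # v-update: prox of the data term, then projection onto v >= 0
        v = np.maximum(0.0, (y + rho * (A @ x + u)) / (1.0 + rho))
        # dual update
        u += A @ x - v
    return x, v
```

With randoms-corrected data y that can dip negative, the cold regions of x are no longer clipped at zero, which is the mechanism for the bias reduction described above.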

Keywords: PET image reconstruction, Bias reduction, Yttrium-90, Quantification, Radioembolization

11:14 am M-02-4

Hybrid PET-MR list-mode kernelized expectation maximization reconstruction for quantitative PET images of the carotid arteries (#1791)

D. Deidda1, 2, N. Karakatsanis3, P. M. Robson3, N. Efthimiou4, Z. A. Fayad3, R. Aykroyd2, C. Tsoumpas1, 3

1 University of Leeds, Division of Biomedical Imaging, Leeds, United Kingdom of Great Britain and Northern Ireland
2 University of Leeds, Department of Statistics, Leeds, United Kingdom of Great Britain and Northern Ireland
3 Icahn School of Medicine at Mount Sinai, Translational and Molecular Imaging Institute, New York, New York, United States of America
4 University of Hull, School of Life Sciences, Faculty of Health Sciences, Hull, United Kingdom of Great Britain and Northern Ireland


Ordered subsets expectation maximization (OSEM) has been widely used in PET imaging. Although Bayesian algorithms have been shown to perform better, they are still not used in clinical practice due to the difficulty of choosing appropriate and robust regularization parameters. The recently introduced kernelized expectation maximization (KEM) has shown promise across different applications. We therefore propose a list-mode hybrid KEM (LM-HKEM) for static reconstructions, which we implemented in the open-source Software for Tomographic Image Reconstruction (STIR) library. The proposed algorithm uses both the MR image and the PET update images to create a feature vector for each voxel, containing information about the local neighborhood. To avoid over-smoothing the reconstructed images, a 3x3x3-voxel kernel was used. Three real FDG datasets were acquired with the Siemens mMR: a phantom to validate the algorithm and two patient carotid-artery studies to show possible applications of the method. The reconstructed images are assessed and compared across different algorithms: OSEM, OSEM with a median root prior (MRP), KEM and LM-HKEM. The results show better contrast for the proposed algorithm without affecting the convergence rate. LM-HKEM shows promising quantification performance for the low-count phantom images, with around 4% bias compared to 7% for KEM and over 11% for OSEM and OSEM with MRP. Our results show that the proposed technique can improve quantification at different noise levels, and it shows promising stability: for different subsets with comparable numbers of events, the same parameter values were used. Emphasis is placed on the reconstruction of the carotid artery, as it has two important applications: the use of the carotid artery as an input function for dynamic studies, and the identification and characterization of atherosclerosis.
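The kernel method underlying KEM can be sketched compactly: the image is represented as x = K a, where K is a kernel matrix built from per-voxel feature vectors (here assumed given), and the ordinary MLEM update is applied to the coefficients a. The following is an illustrative dense-matrix version; a real list-mode implementation such as the one in STIR processes events rather than a stored system matrix, and the names are assumptions:

```python
import numpy as np

def kem(A, y, K, n_iter=100):
    """Kernelized EM sketch: MLEM update on coefficients a, image x = K a."""
    n = K.shape[1]
    a = np.ones(n)
    # Sensitivity in coefficient space: K^T A^T 1
    sens = K.T @ (A.T @ np.ones(A.shape[0]))
    for _ in range(n_iter):
        proj = A @ (K @ a)
        ratio = y / np.maximum(proj, 1e-12)
        # Standard EM multiplicative update, lifted through the kernel
        a *= (K.T @ (A.T @ ratio)) / np.maximum(sens, 1e-12)
    return K @ a
```

When K is the identity this reduces exactly to MLEM; the regularisation enters only through the kernel, which is why no explicit penalty weight has to be tuned.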

Keywords: PET-MR, Kernel, reconstruction, anatomical prior, list-mode, carotid artery, low-count, PET

11:32 am M-02-5

Penalized PET reconstruction using CNN prior (#3995)

K. Kim1, D. Wu1, Y. D. Son2, H. K. Kim2, G. El Fakhri1, Q. Li1

1 Massachusetts General Hospital & Harvard Medical School, Radiology, Boston, Massachusetts, United States of America
2 Gachon University of Medicine and Science, Neuroscience Research Institute, Incheon, Republic of Korea


Recently, convolutional neural networks (CNNs) have been increasingly used for denoising in medical imaging; in particular, low-dose CT images have been improved using CNNs. However, CNNs are still not commonly used in PET image denoising or in iterative reconstruction. In PET reconstruction, one issue is constructing the training data: the intensities of the noisy (subsampled) and ground-truth (fully sampled) images differ, and the regional intensity variation in PET images is much higher than in CT images, which can produce bias in the denoised PET image. Another issue is that if the CNN is used as a penalty function in iterative reconstruction, the noise level of the input image changes at each iteration, which can make the image diverge or become very blurred. In this paper, to address these issues, we propose a new type of CNN prior for use in iterative PET reconstruction. Specifically, the network is trained using both the noisy image and the denoised image as inputs. By keeping the noisy image, we prevent divergence and over-smoothing in the iterative reconstruction. The proposed method alternates between two steps: a penalized reconstruction step using separable quadratic surrogates (SQS), the CNN prior and guided filtering; and a step that evaluates the CNN on the current iteration's image and the initial noisy image. Here, guided filtering is used to reduce bias. We performed experiments using patient data to demonstrate that the proposed method can significantly improve image quality compared to conventional methods.
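The alternation described above can be sketched as follows. In this minimal version a caller-supplied `denoise(current, noisy)` function stands in for the trained CNN and guided filtering, and a quadratic data term replaces the PET log-likelihood; all names and the diagonal SQS majorizer are illustrative assumptions:

```python
import numpy as np

def reconstruct_with_denoiser_prior(A, y, denoise, beta=0.5, n_iter=200):
    """Alternate a penalized SQS-style update with a denoiser-refresh step."""
    n = A.shape[1]
    x = np.ones(n)
    x_noisy = x.copy()   # stands in for the initial noisy reconstruction
    target = x.copy()
    # SQS-style diagonal majorizer of A^T A (valid for nonnegative A),
    # plus the curvature of the quadratic penalty
    d = A.T @ (A @ np.ones(n)) + beta
    for _ in range(n_iter):
        # Step 1: penalized quadratic update toward the denoised target
        grad = A.T @ (A @ x - y) + beta * (x - target)
        x = x - grad / d
        # Step 2: refresh the prior target with the denoiser, keeping the
        # initial noisy image as a second input to stabilise the iterations
        target = denoise(x, x_noisy)
    return x
```

Feeding the fixed noisy image into the denoiser at every refresh is the ingredient the abstract highlights for preventing divergence or over-smoothing as the iterate's noise level changes.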

Keywords: convolutional neural networks, PET reconstruction

11:50 am M-02-6

PET Image Denoising Using Deep Neural Network (#4162)

K. Gong1, J. Guan2, C. - C. Liu1, J. Qi1

1 University of California Davis, Biomedical Engineering, Davis, California, United States of America
2 University of California Davis, Statistics, Davis, California, United States of America


Deep neural networks have been widely and successfully used in computer vision and have attracted growing interest in medical imaging. In this work, we trained a deep residual convolutional neural network to improve the quality of PET images. The network is a combination of the U-net and the residual network, consisting of a total of 18 layers that form an encoder followed by a decoder. To train the deep neural network, we augmented real patient data with computer-simulated phantom data. Specifically, we first trained the network using simulation data and then fine-tuned it using real data. Results based on both simulated and real data show that the proposed method is more effective at removing noise than traditional Gaussian filtering.
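The residual idea central to this architecture can be illustrated without a deep-learning framework: the network predicts a correction that is added back to its input through a skip connection, so it only has to learn the noise component. Below, a single 3x3 convolution (a loud simplification of the 18-layer U-net/ResNet hybrid) plays the role of the network; everything here is an illustrative assumption:

```python
import numpy as np

def residual_denoise(img, conv_weights, bias=0.0):
    """Toy residual denoiser: output = input + 'network'(input)."""
    pad = np.pad(img, 1, mode='edge')
    h, w = img.shape
    correction = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            # Single 3x3 convolution standing in for the trained network
            correction[i, j] = np.sum(pad[i:i + 3, j:j + 3] * conv_weights) + bias
    return img + correction  # residual (skip) connection
```

With all-zero weights the output equals the input, which is why residual networks are easy to initialise near the identity; training then only needs to shape the correction term.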

This work was supported by the National Institutes of Health under grant R01 EB000194.

Keywords: PET, Convolutional neural network, Denoising