Please note: all times in the online program are given in New York - America (GMT -04:00) time.


Tomographic Reconstruction

Session chair: Reader, Andrew J. (King's College London, School of Biomedical Engineering and Imaging Sciences, London, UK); Qi, Jinyi (University of California, Davis, Department of Biomedical Engineering, Davis, USA)
Shortcut: M-04
Date: Wednesday, 20 October, 2021, 9:15 AM - 11:15 AM
Room: MIC - 2
Session type: MIC Session



9:15 AM M-04-01

Feasibility of image reconstruction from triple modality data of Yttrium-90 (#1368)

D. Deidda1, 2, A. Denis-Bacelar1, A. Fenwick1, K. Ferreira1, W. Heetun1, B. Hutton2, J. Scuffham3, 1, A. Robinson1, K. Thielemans2

1 National Physical Laboratory, Nuclear Medicine, Medical Radiation Physics, London, United Kingdom
2 University College London, Institute of Nuclear Medicine, London, United Kingdom
3 Royal Surrey NHS Foundation Trust, Guildford, United Kingdom


The recent implementation of the first clinical triple modality scanner in STIR enables investigation of triple modality image reconstruction. Such a tool represents an important step toward improved dosimetry for theranostics, where the exploitation of multi-modality imaging can have an impact on treatment planning and follow-up. To demonstrate triple modality image reconstruction, we used data from a NEMA phantom filled with Yttrium-90 (Y-90), which emits Bremsstrahlung photons detectable with SPECT as well as gamma rays that can undergo pair production, creating positrons and thereby making PET acquisition possible. The data were acquired with the Mediso AnyScan SPECT/PET/CT. Different ways of including multiple sets of side information using the kernelised expectation maximisation (KEM) and hybrid KEM (HKEM) algorithms were investigated in terms of ROI activity and noise suppression. This work presents an example application with Y-90, but the approach can be extended to any other radionuclide combination used in theranostic applications.
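
As a concrete illustration of the kernel method underlying KEM, the sketch below runs a toy kernelised EM update, where the activity image is represented as x = K·alpha for a kernel matrix K built from side information. The system matrix, kernel, and data here are small illustrative stand-ins, not the paper's STIR implementation.

```python
import numpy as np

# Toy kernelised EM (KEM) sketch: activity x = K @ alpha, with a standard
# multiplicative EM update applied to the kernel coefficients alpha.
rng = np.random.default_rng(0)
n_pix, n_bins = 4, 6
A = rng.uniform(0.1, 1.0, size=(n_bins, n_pix))   # illustrative system matrix
K = np.eye(n_pix) + 0.1                            # simple kernel matrix
x_true = np.array([1.0, 2.0, 0.5, 1.5])
y = A @ x_true                                     # noiseless "measured" data

alpha = np.ones(n_pix)                             # kernel coefficients
sens = K.T @ A.T @ np.ones(n_bins)                 # sensitivity image
for _ in range(200):
    ybar = A @ (K @ alpha)                         # forward projection
    alpha *= (K.T @ A.T @ (y / ybar)) / sens       # KEM multiplicative update
x = K @ alpha                                      # reconstructed activity
```

Because the side information enters only through K, the same update structure accommodates multiple kernels (as in HKEM) by changing how K is built.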

Acknowledgment: This work is supported by the UK National Physical Laboratory through the National Measurement System.
Keywords: Image reconstruction, triple modality, PET-SPECT-CT, kernel
9:30 AM M-04-02

Neural KEM for PET Image Reconstruction (#1170)

S. Li1, K. Gong2, J. Qi3, G. Wang1

1 University of California Davis Medical Center, Department of Radiology, Sacramento, California, United States of America
2 Massachusetts General Hospital, Department of Radiology, Boston, Massachusetts, United States of America
3 University of California Davis, Department of Biomedical Engineering, Davis, California, United States of America


Image reconstruction of low-count positron emission tomography (PET) data is challenging. Kernel methods address the challenge by incorporating image prior information in the forward model of iterative PET image reconstruction. The kernelized expectation-maximization (KEM) algorithm has been developed and demonstrated to be effective and easy to implement. A common approach to further improve the kernel method is to add an explicit regularization, which, however, leads to a complex optimization problem. In this paper, we propose an implicit regularization for the kernel method by using convolutional neural networks to represent the kernel coefficient image in the PET forward model. To solve the maximum-likelihood neural-network-based reconstruction problem, we apply the principle of optimization transfer to derive a neural KEM algorithm. Each iteration of the algorithm consists of two separate steps: a KEM step for the image update from the projection data and a deep-learning step in the image domain for approximating the kernel coefficient image using neural networks. The results from computer simulations and a real patient scan demonstrate that the neural KEM can outperform the existing KEM and the deep image prior method.
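
The alternating two-step iteration described above can be sketched as follows. Since the actual convolutional network is beyond a short example, a simple neighbour-averaging smoother stands in for the deep-learning step; all names, sizes, and data are illustrative assumptions.

```python
import numpy as np

# Skeleton of the two-step neural-KEM-style iteration: a data-fit EM step
# followed by an image-domain representation step (here a toy smoother in
# place of the CNN fitting described in the abstract).
rng = np.random.default_rng(1)
n_pix, n_bins = 5, 8
A = rng.uniform(0.1, 1.0, size=(n_bins, n_pix))   # illustrative system matrix
y = A @ np.array([1.0, 1.2, 2.0, 0.8, 1.0])       # noiseless data
sens = A.T @ np.ones(n_bins)                      # sensitivity image

def em_step(x):
    """EM image update from the projection data (the 'KEM step')."""
    return x * (A.T @ (y / (A @ x))) / sens

def represent(x):
    """Image-domain fitting step; a moving average stands in for the CNN."""
    pad = np.pad(x, 1, mode="edge")
    return (pad[:-2] + pad[1:-1] + pad[2:]) / 3.0

x = np.ones(n_pix)
for _ in range(50):
    x = represent(em_step(x))     # alternate data fit and representation
```

The point of the skeleton is the structure: the data term is handled by a closed-form EM update, so the network-fitting step never has to touch the projection data.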

Keywords: PET image reconstruction, Kernel methods, convolutional neural network, image prior
9:45 AM M-04-03

Dense-Syn-Net: Inter-Modal and Self-Guided Deep Learned PET-MR Reconstruction (#1338)

G. Corda-D'Incan1, J. A. Schnabel1, A. J. Reader1

1 King's College London, Biomedical Engineering and Imaging Sciences, London, United Kingdom


We present a new framework for PET-MR reconstruction using deep learning. The proposed network, named Dense-Syn-Net, is an improved version of the synergistic network Syn-Net. The method is built on two model-based image reconstruction algorithms, the maximum a posteriori expectation-maximization algorithm for PET and the Landweber algorithm for MR, which we fully connect to each other. To avoid the use of handcrafted regularisations, the gradients of the priors for PET and MR are learned from training data along with the regularisation strengths for both modalities. Two major modifications of the original Syn-Net have been introduced: i) iteration-dependent targets are used to ensure that the output of every module matches the corresponding iteration of the reconstruction of high-quality data; ii) all the previous PET and MR estimates are used to guide the regularisation in a given module, allowing both self- and inter-modality guidance. Results on 2D simulated data show that Dense-Syn-Net outperforms conventional independent PET and MR reconstruction algorithms. For MR, our method outperforms deep-learned independent methods and synergistic PET-MR reconstruction with mutually weighted quadratic priors. For PET reconstruction, our network shows greater robustness towards mismatches than MR-guided and synergistic methods by better preserving modality-unique features. Not only are MR-unique lesions less visible in the reconstructed PET, but PET-unique and MR-unique lesions also have a higher contrast-to-noise ratio. Dense-Syn-Net also achieves a lower reconstruction error locally in regions of mismatch. Future work will focus on assessing the performance of the network on 3D real data.
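
For reference, the two unregularised model-based updates that the unrolled network builds on can be sketched as below; these are toy 1D versions in which the MR encoding is simplified to a real-valued matrix, and all operators and data are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(2)
A_pet = rng.uniform(0.1, 1.0, size=(8, 5))        # toy PET system matrix
A_mr = rng.normal(size=(8, 5))                    # toy (real-valued) MR encoding
y_pet = A_pet @ np.array([1.0, 2.0, 1.5, 0.5, 1.0])
y_mr = A_mr @ np.array([0.2, 0.8, 1.0, 0.6, 0.3])
step = 1.0 / np.linalg.norm(A_mr, 2) ** 2         # safe Landweber step size

def em_update(x):
    """EM activity update for PET (MAP-EM with the prior term omitted)."""
    return x * (A_pet.T @ (y_pet / (A_pet @ x))) / (A_pet.T @ np.ones(8))

def landweber_update(x):
    """Landweber step for MR: gradient descent on the data fidelity."""
    return x + step * A_mr.T @ (y_mr - A_mr @ x)

x_pet, x_mr = np.ones(5), np.zeros(5)
for _ in range(200):
    x_pet, x_mr = em_update(x_pet), landweber_update(x_mr)
```

In Dense-Syn-Net these two updates are unrolled into modules whose regularisation terms are learned and cross-connected; the sketch only shows the underlying independent iterations.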

Keywords: Deep Learning, Synergistic PET-MR reconstruction, MR reconstruction, Unrolled networks, PET reconstruction
10:00 AM M-04-04

PET Physics-enabled transfer learning for improved 68Ga-DOTATATE imaging (#1189)

K. Gong1, Q. Li1, T. Pan2

1 Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts, United States of America
2 MD Anderson Cancer Center, Houston, Texas, United States of America


Though 68Ga-DOTATATE PET is promising for neuroendocrine tumor (NET) management, its shorter half-life, much larger positron range, and lower injected activity (limited by the generator capacity) result in higher image noise and lower image resolution compared with 18F-FDG PET, which compromises lesion detectability. In this work, we developed a PET physics-enabled transfer learning framework to improve 68Ga-DOTATATE imaging based on existing and widely available PET datasets of other tracers. Specifically, a 3D U-net was pre-trained on low-dose and full-dose pairs from 18F-FDG and 18F-ACBC datasets. The decoder path of the pre-trained network was further optimized when applied to the normal-dose 68Ga-DOTATATE dataset, with the training objective based on the Poisson assumption of the PET raw data. In addition, the image denoised by the pre-trained network was used to build an additional kernel layer that performs further feature-based denoising. Results based on clinical datasets demonstrate the feasibility of the proposed framework.
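
The Poisson-based training objective mentioned above can be written, up to a data-dependent constant, as the negative Poisson log-likelihood; a minimal sketch (the function name and arguments are illustrative, not from the paper):

```python
import numpy as np

def poisson_nll(y_measured, y_expected, eps=1e-8):
    """Negative Poisson log-likelihood, up to a data-dependent constant."""
    y_expected = np.clip(y_expected, eps, None)    # guard against log(0)
    return float(np.sum(y_expected - y_measured * np.log(y_expected)))
```

The objective is minimised when the expected data match the measured data, which is what fine-tuning the decoder path against the raw 68Ga-DOTATATE data exploits.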


Acknowledgment: This work was supported by the National Institutes of Health under grants R21AG067422 and R03EB030280.

Keywords: PET, Deep learning, Transfer learning, Image reconstruction
10:15 AM M-04-05

Investigation of Joint Image Reconstructions for a Dual-Panel Breast TOF PET Scanner (#896)

Y. Li1, S. Surti1, J. S. Karp1, S. Matej1

1 University of Pennsylvania, Department of Radiology, Philadelphia, Pennsylvania, United States of America


Dual-panel PET scanners have many advantages in dedicated breast and on-board imaging applications, since the compact scanners can be easily combined with other imaging and treatment modalities. We previously showed that time-of-flight (TOF) provides new information compared to the non-TOF case when a PET scanner has incomplete data sampling, and that this new information reduces the limited-angle artifacts caused by incomplete data sampling and data truncation. In this work, we investigate joint image reconstructions for a dedicated dual-panel TOF PET scanner, known as B-PET, for breast imaging applications. It is combined with breast tomosynthesis for attenuation correction but can also work in a stand-alone mode, where joint image reconstruction can provide quantitative breast imaging. We implement both MLAA (maximum-likelihood reconstruction of attenuation and activity) and MLACF (maximum-likelihood estimation of attenuation correction factors) for B-PET. An anthropomorphic breast phantom with mild compression is used in our 2D and 3D simulated reconstructions. We show that the attenuation image reconstructed with MLAA has both limited-angle artifacts and crosstalk near lesion locations; however, the activity image reconstructed with MLAA can be comparable to the image reconstructed with OSEM using reference attenuation correction. The MLACF reconstructions also show crosstalk between the activity images and the attenuation correction factors. MLACF provides an alternative, effective approach for joint reconstruction when the attenuation image is not needed. We compare the noise performance of MLAA and MLACF with that of OSEM using 60 noise realizations with 1M counts each. In summary, quantitative breast imaging with a dual-panel TOF PET scanner is possible using joint image reconstruction with good TOF timing resolution.


Acknowledgment: We thank Srilalan Krishnamoorthy for providing 3D GATE simulated data. This work is supported in part by the NIH under award numbers R01CA113941, R01EB023274, and R01CA196528.

Keywords: Joint image reconstruction, MLAA and MLACF, limited-angle sampling, time-of-flight (TOF), positron emission tomography (PET)
10:30 AM M-04-06

Deep Learning-based Fast TOF-PET Image Reconstruction Using Direction Information (#883)

K. Ote1, F. Hashimoto1

1 Hamamatsu Photonics K.K., Central Research Laboratory, Hamamatsu, Japan


We propose incorporating direction information into deep learning-based fast time-of-flight positron emission tomography (TOF-PET) image reconstruction. A previous study of deep learning-based fast TOF-PET reconstruction combined all coincidence events into a single histo-image and input that histo-image to a 3D convolutional neural network (CNN) to output the final image. However, using a single histo-image impairs the signal-to-noise ratio of the final image because it discards the direction information of the coincidence events. Therefore, in this study, we incorporated the direction information into the deep learning-based reconstruction. The proposed method divides the events into N groups depending on the coincidence angle and accumulates each group of events separately into N histo-images. The 3D CNN receives the N histo-images and one attenuation map as an (N + 1)-channel image. Using Monte Carlo simulation data of a digital brain phantom of twenty subjects, we compared the peak signal-to-noise ratio (PSNR) and computation time of the proposed method with those of the previous method. The PSNR of the proposed method was about 1.5 dB higher than that of the previous deep learning-based method. The reconstruction time of the proposed method for a volume image with 70 × 128 × 128 voxels was 0.24 seconds. These results indicate that the proposed method can efficiently use the direction information to improve the PSNR of deep learning-based subsecond TOF-PET image reconstruction.
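
The angle-grouping step described above can be sketched as follows, using 2D histo-images for brevity where the paper uses 3D volumes; all names and sizes are illustrative.

```python
import numpy as np

def bin_events_by_angle(xs, ys, angles, n_groups, shape):
    """Accumulate list-mode events into n_groups histo-images by LOR angle."""
    group = ((angles % np.pi) / np.pi * n_groups).astype(int)
    group = np.minimum(group, n_groups - 1)        # guard the upper edge
    histo = np.zeros((n_groups,) + shape)
    for g, x, y in zip(group, xs, ys):
        histo[g, y, x] += 1.0                      # deposit event in its group
    return histo

# Illustrative events: pixel coordinates and coincidence angles in radians.
xs = np.array([0, 1, 2, 1])
ys = np.array([0, 1, 0, 2])
angles = np.array([0.1, 1.0, 2.0, 3.0])
histo = bin_events_by_angle(xs, ys, angles, n_groups=4, shape=(3, 3))
```

Summing the N histo-images over the group axis recovers the single histo-image of the previous method, so no counts are lost; the grouping only preserves direction information that the single image discards.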

Keywords: Positron Emission Tomography, Image Reconstruction, Convolutional Neural Network, Deep Learning
10:45 AM M-04-07

Fast list-mode reconstruction of sparse TOF PET data with non-smooth priors (#859)

G. Schramm1, M. Holler2

1 KU Leuven, Department of Imaging and Pathology, Division of Nuclear Medicine, Leuven, Belgium
2 University of Graz, Institute of Mathematics and Scientific Computing, Graz, Austria


In this work, we propose and analyze a list-mode version of the stochastic primal-dual hybrid gradient (SPDHG) algorithm for image reconstruction from TOF PET data using subsets and non-smooth priors. By strongly exploiting sparsity in the TOF PET data, our proposed algorithm substantially reduces both memory requirements and computation time. To study the behavior of the proposed algorithm in detail, its performance is investigated based on simulated 2D TOF data using a brain-like software phantom.
We find that our list-mode version of SPDHG converges essentially as fast as the original version, which will lead to a substantial reduction in the time needed for a reconstruction of real 3D TOF PET data. However, as with the original algorithm, a careful choice of the ratio of the primal and dual step sizes, depending on the magnitude of the image to be reconstructed, is crucial to obtain fast convergence.
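
One common way to parametrise that primal/dual step-size trade-off in SPDHG is via a single scalar gamma, as sketched below; this follows the usual condition sigma_i * tau * ||A_i||^2 <= 1 per subset and is an illustration of the idea, not necessarily the paper's exact rule.

```python
import numpy as np

def spdhg_step_sizes(op_norms, gamma=1.0, rho=0.999):
    """Primal/dual step sizes per subset from the subset operator norms."""
    sigmas = [gamma * rho / L for L in op_norms]   # dual step per subset
    tau = rho / (gamma * max(op_norms))            # single primal step
    return sigmas, tau
```

Increasing gamma enlarges the dual steps and shrinks the primal step while keeping their product, and hence the convergence condition, unchanged; tuning gamma to the magnitude of the image being reconstructed is the choice the abstract flags as crucial.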

Acknowledgment: GS has been supported by the NIH under grant 5P41EB017183-07. MH is a member of NAWI Graz and BioTechMed Graz.
Keywords: Positron emission tomography, Reconstruction algorithms
11:00 AM M-04-08

Fast regularized material decomposition for spectral X-ray systems using an empirical model (#1144)

F. Jolivet1, J. Nuyts1

1 KU/UZ Leuven, Department of Imaging and Pathology, Division of Nuclear Medicine, Leuven, Belgium


Recently, X-ray systems using dual-energy detection or photon counting have received increased attention. One of the applications is to use the spectral information to obtain a material decomposition (water/iodine, water/bone, ...). The material decomposition step greatly amplifies noise due to the ill-conditioning of the inversion step in the basis change, and it is very sensitive to the modelling of the source and detector response. To reduce this noise amplification, one-step methods have been proposed, which combine the material decomposition and the image reconstruction in a single optimization step. However, these methods can be very time-consuming, and a good initialization is useful to reduce the computation time. In this work we propose a regularized material decomposition method that includes a non-negativity constraint. The key of this method, which is based on an alternating direction method of multipliers (ADMM) optimization strategy, is to use only proximal operators with closed-form solutions. The method is therefore able to efficiently decompose tens of millions of pixels in a few seconds. We show results on experimental data from an angiography with a CBCT system using a dual-energy detector.
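
Two examples of closed-form proximal operators of the kind the method relies on (illustrative; the paper's actual regulariser may differ): the projection onto the non-negative orthant, and the prox of a quadratic penalty.

```python
import numpy as np

def prox_nonneg(v):
    """Prox of the non-negativity indicator: project onto v >= 0."""
    return np.maximum(v, 0.0)

def prox_quadratic(v, lam, t):
    """Prox of f(x) = (lam/2)||x||^2 with step t.

    Solves argmin_x (lam/2)||x||^2 + ||x - v||^2 / (2t) in closed form.
    """
    return v / (1.0 + lam * t)
```

Because both operators are element-wise and closed-form, each ADMM sub-step costs a single pass over the pixels, which is what makes decomposing tens of millions of pixels in a few seconds feasible.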

Acknowledgment: This work was done under the NEXIS project, which has received funding from the European Union's Horizon 2020 Research and Innovation Programme (Grant Agreement no. 780026).
Keywords: Computed tomography, Image reconstruction, Inverse problems, Reconstruction algorithms
