Please note! All times in the online program are given in New York - America (GMT -04:00) times!
Data-driven, Energy-based Scatter Estimation for PET (#318)
N. Efthimiou1, J. S. Karp1, S. Surti1
1 University of Pennsylvania, Radiology, Perelman School of Medicine, Philadelphia, Pennsylvania, United States of America
Scattered photons in PET datasets lead to a bias in the reconstructed images and poor quantification. In this paper, we present a practical data-driven, energy-based (EB) scatter estimation method that leverages the marked difference between the energy distribution of non-scattered and scattered events. Similar approaches have been presented in the past, but have not found their way into clinical practice primarily due to their need for an accurate estimate of the scatter energy shape.
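The classical energy-based decomposition that such methods build on can be illustrated with a short sketch: given normalized spectral shapes for unscattered and scattered events, the measured energy histogram of a sinogram bin is decomposed by least squares. All names and shapes below are illustrative, and the dependence on an assumed scatter spectral shape is precisely the limitation this work addresses.

```python
import numpy as np

def fit_scatter_fraction(measured_hist, trues_shape, scatter_shape):
    """Least-squares decomposition of a measured energy histogram into
    trues and scatter spectral-shape components; returns the two weights."""
    A = np.column_stack([trues_shape, scatter_shape])
    w, *_ = np.linalg.lstsq(A, measured_hist, rcond=None)
    return w  # (weight_trues, weight_scatter)

# Illustrative shapes: a photopeak Gaussian at 511 keV and a broad scatter tail.
energies = np.linspace(300, 650, 36)
trues_shape = np.exp(-0.5 * ((energies - 511) / 30) ** 2)
scatter_shape = np.exp(-(energies - 300) / 150)
```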
AcknowledgmentThis work was supported in part by NIH grants R21-CA239177, R01-EB028764, R01-CA196528, R01-CA113941 and the Siemens Research contract with the University of Pennsylvania.
Keywords: PennPET, data driven, energy based, scatter estimation
PET scatter correction using energy based trues estimation (#152)
H. Bal1, V. Panin1, M. Conti1
1 Siemens Medical Solutions USA, Inc, Molecular Imaging, Knoxville, Tennessee, United States of America
Most commercial PET scanners employ a scatter simulation model or Monte Carlo simulation for scatter estimation. Both methods require an emission image and an attenuation map to compute scatter. In this work, we present a novel method to estimate scatter using only the energy information in the PET list-mode data. The method is based on the premise that, for a given isotope, the energy response of the true coincidences has a specific global baseline shape. The PET list-mode data were binned into a series of 3D TOF sinograms with coarse energy sampling, and scatter was estimated for each sinogram bin. A high-energy window was used to obtain the weighted trues estimate, and the expected trues estimate for each energy bin was dictated by the normalized baseline trues response. Additionally, a scatter estimate from a high-energy bin was included to compensate for low-angle scatter. The total scatter estimate was a linear combination of the low-energy scatter, the high-energy scatter, and the weighted trues estimate; a Gaussian filter was applied to this total to obtain the scatter from energy-based trues (EBT). PET/CT scans of a large scatter phantom, the NEMA IQ phantom, and a high-contrast oval phantom with an insert were performed. Clinical datasets included whole-body F-18 FDG patient scans. Each dataset was processed with scatter correction using single scatter simulation with relative scatter scaling (SSS-rel) and using the EBT approach. The EBT approach matched the expected scatter shape better than SSS-rel for the scatter phantom. For the high-contrast oval phantom, EBT gave a more uniform background activity distribution than SSS-rel. The proposed energy-based scatter correction was found to provide image quality comparable to or better than the model-based approach, with the potential to provide improved scatter correction for large patients and in cases of PET-CT mismatch.
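A minimal sketch of the final combination step described above, assuming placeholder coefficients and filter width (the abstract does not give the actual weights):

```python
import numpy as np

def gaussian_smooth(x, sigma=2.0):
    """Smooth a 1-D profile with a normalized Gaussian kernel (edge-padded)."""
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()
    return np.convolve(np.pad(x, radius, mode="edge"), k, mode="valid")

def ebt_scatter(low_e_scatter, high_e_scatter, weighted_trues,
                a=1.0, b=1.0, c=-1.0, sigma=2.0):
    """EBT total scatter: linear combination of the three components followed
    by Gaussian smoothing. Coefficients a, b, c are placeholders, not the
    values used in the actual method."""
    total = a * low_e_scatter + b * high_e_scatter + c * weighted_trues
    return gaussian_smooth(total, sigma)
```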
Keywords: scatter correction, PETCT
Fully 3D scatter estimation in axially long FOV PETCT scanners: Residual estimation approach (#1197)
H. Bal1, V. Panin1, J. Schaefferkoetter1, J. Cabello1, M. Conti1
1 Siemens Medical Solutions USA, Inc, Knoxville, Tennessee, United States of America
Long axial field-of-view PET/CT scanners provide very high sensitivity owing to large LOR acceptance angles, while also posing challenges for robust scatter correction due to the different contributions of single and multiple scatter in oblique segments. In this work we present a novel approach to estimate fully 3D scatter by estimating the residuals between the measured and modelled data. The 2D SSS model-based scatter, together with the 2D measured data, was used to obtain an unscattered trues image estimate. The scatter for the oblique segments was then computed as the residual between the measured net trues and the unscattered trues estimate. A Gaussian filter was used to smooth the noisy scatter estimate, and the resulting 3D TOF scatter sinogram was used for reconstruction. A preliminary assessment was performed using experimental phantoms and clinical data acquired on the Biograph Vision Quadra (Siemens Healthineers). The experimental phantoms were a long cylindrical phantom with uniform activity and an oval phantom with a cylindrical insert having a 200:1 contrast between the insert and the background. The clinical datasets included two F-18 FDG patient studies. All data were reconstructed with TOF OP-OSEM (4 iterations, 5 subsets) using a maximum ring difference (MRD) of 85 with 2D scatter and of 322 with the proposed 3D scatter. Scatter profiles from the cylinder phantom with the 3D scatter approach matched the tails of the corresponding net trues for the different oblique segments. Image quantification with the 3D scatter approach at MRD = 322 was similar to that obtained with 2D scatter at MRD = 85. For the high-contrast phantom, image quality was improved with the 3D scatter approach compared to the 2D approach. These preliminary results suggest that the proposed 3D scatter approach can provide robust and accurate scatter estimates without increasing the computational cost for long axial field-of-view scanners.
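The residual computation can be sketched as follows; the per-segment TOF sinogram handling is reduced to a 2-D array, and the non-negativity clipping and filter width are assumptions rather than details given in the abstract:

```python
import numpy as np

def residual_scatter(measured_net_trues, unscattered_trues_est, sigma=3.0):
    """Oblique-segment scatter as the residual between measured net trues and
    the modelled unscattered trues, Gaussian-smoothed to suppress noise.
    Arrays are (segment, radial-bin); a simplified stand-in for TOF sinograms."""
    residual = measured_net_trues - unscattered_trues_est
    residual = np.clip(residual, 0.0, None)  # scatter cannot be negative
    # separable Gaussian smoothing along the radial axis
    radius = int(3 * sigma)
    t = np.arange(-radius, radius + 1)
    k = np.exp(-0.5 * (t / sigma) ** 2)
    k /= k.sum()
    return np.apply_along_axis(
        lambda r: np.convolve(np.pad(r, radius, mode="edge"), k, mode="valid"),
        -1, residual)
```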
Keywords: scatter correction, long axial FOV scanner, wholebody PET
Evaluation of down-scatter contamination in multi-pinhole 123I-IMP brain perfusion SPECT imaging (#1065)
B. Auer1, J. De Beenhouwer2, K. S. Kalluri1, C. Lindsay1, R. G. Richards3, M. May3, M. A. Kupinski3, P. H. Kuo3, L. R. Furenlid3, M. A. King1
1 University of Massachusetts Medical School, Dept. of Radiology, Worcester, Massachusetts, United States of America
Brain imaging with the 123I radionuclide remains essential for assessing dopamine transporter activity and cerebral blood flow in various cerebral disorders. However, imaging with 123I-labeled tracers suffers from down-scatter contamination from a series of high-energy (>183 keV, ~3% abundance) gamma emissions in addition to the primary photons (159 keV, 83% abundance). In this work, we investigated through simulation studies the effect of down-scatter contamination on image quality for multiple pinhole configurations and aperture sizes of AdaptiSPECT-C, a next-generation multi-pinhole system currently under construction. We simulated a brain phantom with a source distribution for the perfusion imaging agent 123I-IMP as imaged 1 h post injection. To enable comparison with imaging free of down-scatter interactions, reconstructions were compared qualitatively and quantitatively to those obtained from acquisitions of the same activity distribution simulated with only the 159-keV principal emission of 123I. In this initial study, we demonstrated through quantification and visual inspection of cerebral perfusion reconstructions incorporating down-scatter correction that the inclusion of down-scatter counts does not hamper the imaging performance of AdaptiSPECT-C, even for the pinhole combination most contaminated by such interactions. We have also initiated a comparison of these findings against those obtained from a dual-head system employing parallel-hole collimators, for which acquisition is considerably more affected by down-scatter interactions.
Research reported in this publication was supported by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under award number R01 EB022521. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Keywords: 123I-IMP SPECT imaging, cerebral blood-flow perfusion, AdaptiSPECT-C, GATE Monte-Carlo simulation
A CT-less PET Reconstruction Framework for Long-axial FOV Scanners with Lutetium-based Scintillators (#876)
M. Teimoorisichani1, H. Sari2, 3, V. Panin1, C. Mingels3, I. Alberts3, H. Rothfuss1, A. Rominger3, M. Conti1
1 Siemens Medical Solutions, Inc., Knoxville, Tennessee, United States of America
Lutetium-based scintillators, such as LSO, are commonly used in modern PET scanners. These scintillators emit background radiation due to the presence of the radioisotope 176Lu in naturally occurring lutetium. This background radiation is of particular interest in long axial FOV scanners because of their increased geometrical sensitivity. In this study, we propose a CT-less PET reconstruction framework that uses the 202 keV and 307 keV photons of the background radiation. In the proposed framework, background radiation is separated from the 511 keV emission data through its distinct energy and TOF properties. Initial attenuation maps at 202 and 307 keV are then reconstructed, mapped to 511 keV, merged, and denoised to create a single set of attenuation maps at 511 keV. The obtained attenuation maps are then used in a maximum likelihood estimation of activity and attenuation correction factors (MLACF) reconstruction algorithm in which the PET image and attenuation sinograms are updated sequentially. The proposed algorithm was evaluated using data from two 18F-FDG scans on a Biograph Vision Quadra (Siemens Healthineers) scanner. The relative uptake values (UVs) for various organs in the PET images reconstructed with MLACF were compared with those obtained from PET images reconstructed with CT-based attenuation and scatter correction using TOF OS-EM. Quantitative comparison of the reconstructed images showed a discrepancy of less than 15% in the relative UVs between various organs for the proposed CT-less reconstruction scheme vs. CT-based TOF OS-EM.
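The mu-map mapping and merging step can be sketched as below. The water attenuation coefficients are approximate illustrative values, and a real implementation would use tissue-dependent (piecewise) conversion rather than a single global scale factor:

```python
import numpy as np

# Approximate linear attenuation coefficients of water (cm^-1); illustrative only.
MU_WATER = {202: 0.136, 307: 0.118, 511: 0.096}

def map_mu_to_511(mu_map, energy_kev):
    """Single-tissue scaling of a mu-map from `energy_kev` to 511 keV.
    A crude stand-in for the tissue-dependent mapping used in practice."""
    return mu_map * (MU_WATER[511] / MU_WATER[energy_kev])

def merge_mu_maps(mu_202, mu_307, w_202=0.5):
    """Merge the two scaled maps with a weighted average (weight is a placeholder)."""
    return (w_202 * map_mu_to_511(mu_202, 202)
            + (1.0 - w_202) * map_mu_to_511(mu_307, 307))
```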
Keywords: LSO background radiation, MLTR, MLACF, CT-less PET
Investigation of Direct and Indirect Approaches of Deep-Learning-Based Attenuation Correction for General Purpose and Dedicated Cardiac SPECT Scanners (#68)
X. Chen1, B. Zhou1, H. Xie1, L. Shi1, H. Liu1, 2, C. Liu1, 2
1 Yale University, Department of Biomedical Engineering, NEW HAVEN, Connecticut, United States of America
Attenuation correction using CT transmission scanning enables accurate quantitative assessment in cardiac SPECT. Deep-learning-based indirect approaches have been established to predict attenuation maps from emission data for rotational SPECT-only scanners with parallel-hole collimators and NaI crystals. Direct transformation approaches, which generate attenuation-corrected images from non-attenuation-corrected images, may be easier to implement because they skip the intermediate attenuation-map generation step; this is particularly useful for the small field of view of dedicated cardiac SPECT scanners with CZT detectors. In this work, we first implemented and compared the direct and indirect approaches for both conventional parallel-hole SPECT, using 200 anonymized datasets, and dedicated cardiac pinhole SPECT, using 176 anonymized datasets. To avoid the inaccuracy caused by truncated reconstruction on the dedicated SPECT system, we proposed novel methods to predict truncated attenuation maps from truncated emission images and full attenuation maps from full, though inaccurate, emission images. The predicted truncated and full attenuation maps were then zero-padded and incorporated into the iterative reconstruction to generate attenuation-corrected images. For parallel-hole SPECT, the average error of the attenuation-corrected images using the direct approach was 2.57 ± 1.06%, compared to 1.37 ± 1.16% using the indirect approach. For dedicated pinhole cardiac SPECT, the average error using our proposed indirect approaches was 1.14 ± 0.74%, compared to 2.20 ± 1.11% using the direct approach. In addition, we designed and implemented a novel neural network that better extracts information from a multi-channel input and that showed superior performance to a conventional U-Net in both the indirect and direct approaches.
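The abstract does not define its error metric exactly; one plausible form is a per-study normalized absolute error with a cohort mean ± standard deviation, sketched here purely for illustration:

```python
import numpy as np

def percent_error(ac_pred, ac_ref):
    """Normalized absolute error (%) of a predicted AC image against the
    CT-based reference, one value per study. An assumed metric, not
    necessarily the one used in the work."""
    return 100.0 * np.abs(ac_pred - ac_ref).sum() / np.abs(ac_ref).sum()

def cohort_error(preds, refs):
    """Mean +/- standard deviation of the per-study errors over a cohort."""
    errs = np.array([percent_error(p, r) for p, r in zip(preds, refs)])
    return errs.mean(), errs.std()
```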
This work is supported by internal funding from the Department of Radiology and Biomedical Imaging at Yale University, and the NIH grants R01HL154345 and R01HL123949.
Keywords: Attenuation correction, Cardiac SPECT/CT, deep learning, myocardial perfusion imaging
Comparison of deep learning-based attenuation corrections for myocardial perfusion SPECT (#389)
Y. Du1, 2, J. Sun1, G. S. Mok1, 2
1 University of Macau, Department of Electrical and Computer Engineering, Taipa, China, Macao Special Administrative Region
Deep learning (DL)-based attenuation correction (AC) for dedicated myocardial perfusion (MP) SPECT has recently been proposed, based either on attenuation map (µ-map) generation (Shi et al. 2020) or on direct AC (Yang et al. 2021). This study aims to provide a direct comparison of the effectiveness of these two AC methods using simulation. We used a population of 100 XCAT phantoms modelling various body and organ sizes, 99mTc-sestamibi distributions, and defect sizes and locations. An analytical projector for a LEHR collimator, with attenuation, scatter, and collimator-detector response modeling, was used to simulate 64 noisy projections over 180° at a standard clinical count level. Projections were then reconstructed by the OS-EM method with and without AC (NAC) using 12 iterations and 6 subsets. A 3D conditional generative adversarial network was implemented and optimized for each of the two DL-based AC methods, with training based on: (i) NAC SPECT paired with the corresponding µ-map, after which the projections were reconstructed with the DL-generated µ-map for AC (DL-ACµ); (ii) NAC SPECT paired with the corresponding AC SPECT to perform direct AC (DL-AC). We randomly assigned 70, 10, and 20 phantoms for training, validation, and testing, respectively. The relative defect size difference (RSD) on polar maps, the normalized mean square error (NMSE), the structural similarity index measure (SSIM), and the joint correlation histogram on a 3D cardiac VOI (cVOI, 36×36×36) and on the whole reconstructed volume (wVOI, 128×128×114) were compared for DL-ACµ and DL-AC, using AC as the gold standard. For the cVOI, the NMSE and SSIM were significantly better for DL-ACµ than for DL-AC (p<0.0001). Results were similar for the wVOI. The RSD was also significantly lower for DL-ACµ (p<0.05). The cross-correlation analysis was consistent (R² = 0.9996 for DL-ACµ vs. 0.9979 for DL-AC on the cVOI). We conclude that DL-ACµ is superior to DL-AC for MP SPECT.
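The NMSE and a simplified SSIM over a VOI can be computed as below. This SSIM uses a single global window, whereas the standard SSIM averages local windows, so it is only a sketch:

```python
import numpy as np

def nmse(x, ref):
    """Normalized mean square error against the gold-standard volume."""
    return ((x - ref) ** 2).sum() / (ref ** 2).sum()

def global_ssim(x, ref, L=None):
    """Single-window SSIM over the whole VOI (simplified: no local windowing)."""
    if L is None:
        L = ref.max() - ref.min()  # dynamic range
    c1, c2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    mx, mr = x.mean(), ref.mean()
    vx, vr = x.var(), ref.var()
    cov = ((x - mx) * (ref - mr)).mean()
    return ((2 * mx * mr + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + mr ** 2 + c1) * (vx + vr + c2))
```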
This work is supported by NSFC Excellent Young Scientists Fund (81922080).
Keywords: SPECT, myocardial perfusion, attenuation correction, deep learning, generative adversarial network
Development of a strategy against performance variability in direct attenuation correction via deep learning for SPECT myocardial perfusion imaging (#425)
M. Torkaman1, J. Yang1, L. Shi2, R. Wang3, 4, E. J. Miller3, 5, A. J. Sinusas2, 5, C. Liu2, 3, G. T. Gullberg1, 6, Y. Seo7, 8
1 University of California San Francisco, Department of Radiology and Biomedical Imaging, San Francisco, California, United States of America
Attenuation correction (AC) is important for accurate interpretation and quantification of SPECT myocardial perfusion imaging (MPI). However, AC is challenging in stand-alone systems that are not combined with a CT providing patient-specific attenuation maps. We previously demonstrated the feasibility of generating attenuation-corrected SPECT images with a deep learning technique (SPECTDL) directly from non-corrected images (SPECTNC), without requiring attenuation map generation as an intermediate step. However, we observed performance variability of the technique across patients. This study aims to develop a data management strategy to investigate the feasibility of overcoming this performance variability with limited data for direct AC in SPECT MPI. Our current dataset includes only 100 patches from 100 99mTc-tetrofosmin SPECT scans acquired on a GE Discovery NM/CT 570c scanner at Yale New Haven Hospital. We hypothesized that a training data management strategy based on the similarity of the data enables a network to learn features efficiently and to be robust to new data. To investigate this, we applied hierarchical clustering to polar plots of the non-corrected data in t-SNE space and divided the data into three groups (G1, G2, G3). The t-SNE space was created by transforming the non-corrected data into a lower-dimensional space via t-distributed stochastic neighbor embedding (t-SNE), and the training data were categorized according to their proximity in this space. Our initial results demonstrate that composing the training set of data with more similar distributions can aid the learning process and produce results with less variability.
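The grouping step can be sketched with a naive average-linkage agglomerative clustering on precomputed 2-D embeddings standing in for the t-SNE coordinates of the polar plots; in practice one would use library implementations of both t-SNE and the clustering:

```python
import numpy as np

def agglomerative_groups(features, n_groups=3):
    """Naive average-linkage agglomerative clustering on 2-D embeddings.
    Returns a list of `n_groups` clusters, each a list of sample indices.
    O(n^3); for illustration only."""
    X = np.asarray(features, dtype=float)
    clusters = [[i] for i in range(len(X))]

    def avg_dist(a, b):
        # average pairwise Euclidean distance between two clusters
        return np.mean([np.linalg.norm(X[i] - X[j]) for i in a for j in b])

    while len(clusters) > n_groups:
        i, j = min(((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
                   key=lambda ij: avg_dist(clusters[ij[0]], clusters[ij[1]]))
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters
```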
The study was supported by the National Institutes of Health under Grants R01HL135490 and R01EB026331, R01HL123949, and American Heart Association award 18PRE33990138.
Keywords: attenuation correction, deep learning, myocardial perfusion imaging (MPI), performance variability, t-SNE