IEEE 2021 NSS MIC

Please note: all times in the online program are given in New York, America (GMT −04:00).

Parametric Imaging and Motion Correction

Session chairs: Visvikis, Dimitris (LaTIM, Brest, France); Bowen, Spencer L. (UT Southwestern Medical Center, Department of Radiology, Dallas, USA)
 
Shortcut: M-10
Date: Thursday, 21 October, 2021, 11:45 AM - 1:30 PM
Room: MIC - 2
Session type: MIC Session

Contents

11:45 AM M-10-01

Direct Ki Patlak generation without using the input function guided by deep-learning methods (#289)

N. Zaker1, 5, K. Haddad1, R. Faghihi1, H. Arabi2, H. Zaidi3, 4

1 Shiraz University, Department of Nuclear Engineering, Shiraz, Iran (Islamic Republic of)
2 Geneva University, Division of Nuclear Medicine, Geneva, Genève, Switzerland
3 Geneva University, Division of Nuclear Medicine, Geneva, Genève, Switzerland
4 University of Groningen, Medical Physics, Groningen, Netherlands
5 Geneva University, PET Instrumentation & Neuroimaging Laboratory (PINLab), Geneva, Genève, Switzerland

Abstract

Whole-body dynamic PET imaging was proposed for thorough estimation of clinically relevant physiological parameters through tracer kinetic modelling. The adopted approach consists of an initial blood pool (cardiac) scan followed by a number of whole-body passes (13) for estimation of Ki parametric maps through Patlak graphical analysis. Two major difficulties facing the clinical adoption of this technique are the need for an image-derived input function and the long acquisition time. To tackle these issues, we use deep convolutional neural networks (DCNNs) to produce Ki maps from standardized uptake value (SUV) images along with dynamic whole-body passes. The method does not require an input function and optimizes the acquisition procedure to a smaller number of passes for dynamic whole-body PET acquisition. The DCNN used here is a high-resolution residual network architecture with 20 convolutional layers. PET/CT images from 19 adult patients who underwent 18F-FDG imaging for staging or restaging of lung or abdominal lesions were used in the training phase to generate reference Ki-Patlak images. A nine-fold cross-validation scheme was used for training/testing of the proposed algorithm. Input data were categorized into two groups: with and without SUV images. Each time, one pass was added to the input dataset, starting from pass 13. A number of metrics were used for model evaluation, and lesion detectability in reference and predicted images was compared. The relative error when using SUV plus dynamic passes from 13 to 9 as input data is 7.45±0.94%. Visual analysis revealed that in cases where Ki-Patlak could detect lesions that were not visible on SUV images, the lesions were also detectable on the predicted images. Our results demonstrate that using the last three passes as input to the deep learning model yields qualitatively and quantitatively acceptable images.
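
For readers unfamiliar with the reference standard used here, the sketch below illustrates conventional Patlak graphical analysis, where Ki is the slope of a linear fit of normalized tissue activity against normalized integrated input function. This is a minimal sketch assuming the input function and frame timing are available; all variable names are illustrative, not taken from the paper.

```python
# Minimal sketch of conventional Patlak analysis (as used to generate
# the reference Ki maps); hypothetical variable names throughout.
import numpy as np

def patlak_ki(tissue_tac, cp, t_mid, t_star=20.0):
    """Estimate Ki (slope) and intercept V by Patlak graphical analysis.

    tissue_tac : tissue activity per frame (kBq/mL)
    cp         : plasma input function sampled at the same mid-times
    t_mid      : frame mid-times (min)
    t_star     : time (min) after which the Patlak plot is linear
    """
    # Running integral of the input function (trapezoidal rule)
    int_cp = np.concatenate(([0.0], np.cumsum(
        0.5 * (cp[1:] + cp[:-1]) * np.diff(t_mid))))
    late = t_mid >= t_star            # use only the linear portion
    x = int_cp[late] / cp[late]       # "Patlak time"
    y = tissue_tac[late] / cp[late]   # normalized tissue activity
    ki, v = np.polyfit(x, y, 1)       # slope = Ki, intercept = V
    return ki, v
```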

Acknowledgment: Swiss National Science Foundation under grant SNSF 320030_176052.
Keywords: PET/CT, dynamic PET, deep convolutional neural network, lesion detectability, Patlak image analysis
12:00 PM M-10-02

Image-Domain Bootstrapping of PET Time-Course Data for Assessment of Uncertainty in Complex Regional Summaries of Mapped Kinetics (#627)

F. Gu1, Q. Wu1, F. O'Sullivan1

1 University College Cork, Cork, Ireland

Abstract

Imaging biomarkers extracted from diagnostic PET imaging scans are increasingly used in clinical decisions about the treatment of individual cancer patients. In this setting, measures of uncertainty in the biomarker information presented for an individual patient may have an important role. In theory, the non-parametric bootstrap, based on resampling list-mode data, provides a solution to this problem. However, the computations have limited its use in practical settings, particularly for dynamic studies and scanners using iterative reconstruction. In recent work [Gu et al. 2019 IEEE NSS/MIC], our group developed an image-domain bootstrapping technique that can efficiently process 4-D dynamic PET data; it has been used to map uncertainties in images of tracer kinetics. The work here explores the utility of this approach for evaluating the sampling characteristics of more complex imaging biomarkers of mapped kinetics: the regional maximum and the regional coefficient of variation. A series of numerical simulations matched to two dynamic PET imaging studies with FDG and FLT in brain and breast cancer patients is carried out. A large (>650) collection of ROIs with varying size and location is considered. True target values of uncertainty are evaluated by study replication. Both the non-parametric projection-domain bootstrap and the novel image-domain bootstrap are evaluated, with comparisons across a range of mapped kinetic variables. The results show that the accuracy of the image-domain assessment of uncertainty is very acceptable: within 10% of the accuracy of the non-parametric bootstrap approach. The image-domain bootstrap thus offers the potential to practically evaluate the uncertainties of complex biomarkers recovered from the analysis of an individual patient PET study.
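
The following is a schematic illustration (not the authors' image-domain bootstrap itself) of how bootstrap replicates of a parametric map yield uncertainty estimates for the regional summaries studied here, the regional maximum and the regional coefficient of variation. The `replicates` array stands in for the output of the bootstrap procedure.

```python
# Schematic sketch: given B bootstrap replicate Ki maps, estimate the
# standard error of two complex regional summaries. Names are
# illustrative; the replicates would come from the image-domain bootstrap.
import numpy as np

def regional_summary_uncertainty(replicates, roi_mask):
    """replicates : (B, nx, ny, nz) bootstrap parametric (Ki) maps
       roi_mask   : boolean (nx, ny, nz) region-of-interest mask"""
    vals = replicates[:, roi_mask]                   # (B, n_voxels)
    reg_max = vals.max(axis=1)                       # regional max, B samples
    reg_cov = vals.std(axis=1) / vals.mean(axis=1)   # regional CoV, B samples
    return {"max_se": reg_max.std(ddof=1),           # bootstrap SE of max
            "cov_se": reg_cov.std(ddof=1)}           # bootstrap SE of CoV
```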

Keywords: Biomarkers, Uncertainty, Image-Domain Bootstrapping
12:15 PM M-10-03

Motion Correction for Direct Whole Body Parametric PET with Symmetric and Inverse-consistent Deformable Image Registration (#1441)

J. Hu1, L. Sibille1, D. Pigg1, B. Spottiswoode1

1 Siemens Medical Solutions USA, Molecular Imaging, Knoxville, Tennessee, United States of America

Abstract

Whole body parametric PET imaging uses multiple dynamic frames to calculate activity change over time in each image voxel. To evaluate tracer kinetics accurately, each voxel should ideally contain the same tissue for the duration of the dynamic scan. However, the patient may move voluntarily or involuntarily during dynamic data acquisition, resulting in motion artifacts in reconstructed parametric images. Inter-frame motion correction is thus an important step in ensuring good image quality for parametric PET. In this paper, we propose using symmetric and inverse-consistent deformable image registration for motion correction in direct whole body parametric PET. From the dynamic datasets, we select a reference frame, then calculate forward and inverse motion fields for each of the other frames relative to this reference frame. The calculated motion fields are incorporated into the direct Patlak reconstruction to ensure that the same voxel in a series of dynamic frames contains the same tissue during the nested parametric fitting. The proposed method was validated with patient data, where motion artifacts in parametric images were shown to be significantly reduced while the low noise level achieved by the nested parametric fitting was retained.
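
A minimal sketch (not the Siemens implementation) of one ingredient of this approach: applying a precomputed inverse motion field to warp a dynamic frame onto the reference frame grid, so the same voxel tracks the same tissue during the nested Patlak fit. It assumes displacement fields stored in voxel units; names are illustrative.

```python
# Warp one dynamic frame to the reference grid using an inverse motion
# field (illustrative sketch; displacements assumed in voxel units).
import numpy as np
from scipy.ndimage import map_coordinates

def warp_to_reference(frame, inv_field):
    """frame     : (nx, ny, nz) activity image of one dynamic frame
       inv_field : (3, nx, ny, nz) displacement mapping each reference
                   voxel to its location in this frame"""
    grid = np.indices(frame.shape).astype(np.float64)
    coords = grid + inv_field          # sample positions in frame space
    return map_coordinates(frame, coords, order=1, mode="nearest")
```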

Keywords: Image Reconstruction, Parametric PET, Motion Correction
12:30 PM M-10-04

Event-by-event non-rigid respiratory motion correction for multi-pass continuous-bed-motion whole-body parametric PET imaging (#1000)

Y. - J. Tsai1, X. Guo2, J. Onofrey1, Y. Lu1, K. Fontaine3, C. Liu1

1 Yale University, Radiology and Biomedical Imaging, New Haven, Connecticut, United States of America
2 Yale University, Biomedical Engineering, New Haven, Connecticut, United States of America
3 Yale University, PET Center, New Haven, Connecticut, United States of America

Abstract

Positron emission tomography (PET) scanners with a continuous-bed-motion (CBM) feature allow efficient whole-body parametric imaging through multi-pass acquisition. However, motion blurring within each pass due to patient movement and respiration remains a major challenge. In this study, we aim to reduce the overall motion effects in parametric images by incorporating a non-rigid voxel-wise respiratory motion model, based on the previously proposed internal-external correlation (INTEX) technique, into each CBM PET reconstruction. The incorporation is achieved using an event-by-event list-mode ordered-subsets expectation maximization (OSEM) reconstruction framework that was developed initially for single-bed PET data and recently extended to CBM datasets. Data from two healthy volunteers and two lung cancer patients are included. Parametric images derived from reconstructions with and without the pass-by-pass respiratory motion compensation are compared qualitatively and quantitatively. Preliminary results demonstrate that both the image sharpness and the full width at half maximum (FWHM) of lung lesions are improved when the motion compensation is applied.
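
As a conceptual sketch of the event-by-event idea (hypothetical data layout, not the authors' framework): in an INTEX-style model, each list-mode event's LOR endpoints are displaced by a voxel-wise motion field scaled by the external respiratory amplitude at the event time, before the event enters the OSEM update.

```python
# Conceptual sketch of event-by-event respiratory LOR correction.
# All names are illustrative assumptions, not the paper's code.
import numpy as np
from scipy.ndimage import map_coordinates

def sample_field(field, point_vox):
    """Trilinear sample of a (3, nx, ny, nz) field at one voxel-space point."""
    pt = np.asarray(point_vox, dtype=np.float64).reshape(3, 1)
    return np.array([map_coordinates(field[i], pt, order=1, mode="nearest")[0]
                     for i in range(3)])

def correct_event_lor(p1_vox, p2_vox, amp, motion_field):
    """Shift one event's LOR endpoints (voxel coords) back to the
    reference respiratory position. `amp` is the external respiratory
    amplitude at the event time; `motion_field` is the unit-amplitude
    voxel-wise displacement of an INTEX-style correlation model."""
    d1 = amp * sample_field(motion_field, p1_vox)
    d2 = amp * sample_field(motion_field, p2_vox)
    return p1_vox - d1, p2_vox - d2
```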

Keywords: positron emission tomography, parametric imaging, respiratory motion reduction
12:45 PM M-10-05

Utilization of Lutetium Background Radiation for Motion Correction in Total-Body PET (#411)

N. Omidvari1, E. Berg1, L. Cheng2, T. Ma2, J. Qi1, S. R. Cherry1, 3

1 University of California, Davis, Department of Biomedical Engineering, Davis, California, United States of America
2 Tsinghua University, Department of Engineering Physics, Beijing, China
3 University of California, Davis, Department of Radiology, Sacramento, California, United States of America

Abstract

The presence of 176Lu in PET scanners that use lutetium-based scintillation detectors creates a background radiation. In total-body PET scanners with extended axial length, the increased volume of crystal material produces a higher flux of this background radiation and significantly higher sensitivity for detecting it. This enables effective use of the background radiation in applications that were not practically feasible with conventional PET scanners. One of these applications is the detection and correction of patient motion, a factor contributing largely to quantification error in all PET scans, causing image blurring and mismatched attenuation correction. Motion can induce larger errors in total-body PET, as motion in one part of the body contributes to attenuation and scatter correction errors in other parts. Data-driven motion correction (MC) approaches have shown promising results with standard-dose PET data. However, their accuracy can be affected by the tracer distribution in early dynamic frames and is prone to error in low-count regions. Utilizing the lutetium background radiation in the MC framework provides a tracer-independent method that can also be used in the ultralow-dose PET scans made possible for the first time by total-body PET scanners. In this work, the feasibility of utilizing the background radiation for motion correction is studied in Monte Carlo simulations of the uEXPLORER total-body PET scanner with a 3D XCAT phantom, specifically for respiratory, body-extremity, and head motion, with a particular focus on ultralow-dose scans. In each case, the accuracy of motion detection was evaluated for line-of-response (LOR)-based and image-based motion detection methods, using either the lutetium background data or the PET emission data. The simulation results suggest that sub-second frames of the lutetium background in total-body PET can be used to detect bulk motion in the body extremities, head, and chest.
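
A simplified illustration of the LOR-based detection idea (the paper also evaluates image-based detection): midpoints of background LORs are pooled into sub-second frames, and a shift in their centroid between consecutive frames flags bulk motion. Frame length and threshold below are illustrative assumptions.

```python
# Simplified LOR-based bulk-motion detection from 176Lu background
# events; assumes every sub-second frame contains at least one event.
import numpy as np

def detect_bulk_motion(midpoints, times, frame_s=0.5, thresh_mm=2.0):
    """midpoints : (N, 3) LOR midpoints of background events (mm)
       times     : (N,) event times (s)"""
    n_frames = int(np.ceil(times.max() / frame_s))
    idx = np.minimum((times / frame_s).astype(int), n_frames - 1)
    centroids = np.array([midpoints[idx == k].mean(axis=0)
                          for k in range(n_frames)])
    # Centroid displacement between consecutive sub-second frames
    jumps = np.linalg.norm(np.diff(centroids, axis=0), axis=1)
    return np.flatnonzero(jumps > thresh_mm) + 1   # frames where motion occurs
```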

Acknowledgment: This work was funded by NIH grant R01 CA 206187.
Keywords: Total-body PET, Motion Correction, GATE Simulations
1:00 PM M-10-06

Markerless Head Motion Tracking and Event-by-event Correction in Brain PET (#915)

Y. Lu1, K. Fontaine1, T. Mulnix1, J. Zhang1, T. Toyonaga1, J. Zheng2, Y. Jiang2, Q. Wan2, Z. Yang2, X. Zhang2, T. Cao2, L. Hu2, R. E. Carson1

1 Yale University, Department of Radiology and Biomedical Imaging, New Haven, Connecticut, United States of America
2 United Imaging Healthcare, Houston, Texas, United States of America

On behalf of the NeuroeXplorer (NX) consortium.

Abstract

Background: Head motion is an important issue in brain PET imaging. At the Yale PET Center, the Polaris Vicra (referred to as “Vicra”), an optical hardware-based motion tracking (HMT) device, has been used in over 4,300 research PET studies. However, the Vicra is not routinely used clinically, since it requires attaching a light-reflecting marker to the patient. Compared to marker-based HMT, markerless HMT methods are more convenient for clinical translation. Several markerless methods have been proposed in the past; however, despite encouraging results from those studies, there has been no commercial application in brain PET. In this study, we propose to leverage a commercial prototype markerless HMT system, developed by United Imaging Healthcare (UIH) and Percipio.XYZ, to perform real-time head motion tracking in brain PET studies.

Methods: The UIH HMT uses a stereovision camera with infrared structured light to capture the real-time 3D patient facial surface in the form of a point cloud. Each point cloud (30 fps), computed on an FPGA board, is matched to the initial reference point cloud in real time using a GPU-powered rigid-body iterative closest point registration algorithm to estimate rigid head motion. The device is designed with a spatial resolution of <0.2 mm. We validated the UIH HMT against the Vicra using phantom and human volunteer studies with attached radioactive point sources. The UIH HMT was then tested on three 2-hour human 18F-FPEB PET studies, and brain ROI analysis was performed, with Vicra-based correction used as the reference. Event-by-event motion-compensated OSEM reconstruction was performed.
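
The sketch below shows the rigid-body ICP step described above using the open-source Open3D library. This is an illustrative CPU analogue of the product's FPGA/GPU pipeline, not the UIH implementation; point arrays and the correspondence distance are assumptions.

```python
# Illustrative ICP head-pose estimation between two facial point clouds
# (CPU sketch with Open3D; the commercial device uses its own pipeline).
import numpy as np
import open3d as o3d

def estimate_head_pose(current_pts, reference_pts, max_dist=5.0):
    """current_pts, reference_pts : (N, 3) facial surface points (mm).
    Returns a 4x4 rigid transform mapping current -> reference."""
    src = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(current_pts))
    ref = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(reference_pts))
    result = o3d.pipelines.registration.registration_icp(
        src, ref, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    return result.transformation     # rigid head motion since the reference
```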

Results and conclusions: The proposed camera outperformed the Vicra in the phantom study and achieved motion correction results comparable to the Vicra for all the human studies. Future studies include testing on a broader range of subjects with different facial features and special motions, e.g., coughing and sneezing.

Acknowledgment: NIH grants U01EB029811, R21EB028954, and R03EB027209.
Keywords: Markerless, rigid-motion correction, PET, motion correction, head motion
1:15 PM M-10-07

Motion correction using multi-resolution scheme (MOTIONLESS) for 18F-FDG total-body PET/CT systems (#491)

L. K. Shiyam Sundar1, Y. Wang2, B. Spencer2, G. Wang2, E. Li2, S. Cherry2, T. Beyer1, R. Badawi2

1 Medical University of Vienna, Quantitative Imaging and Medical Physics, Vienna, Wien, Austria
2 University of California, Davis, California, United States of America

Abstract

Extended axial field-of-view PET systems offer great promise for total-body parametric imaging. However, the estimation of parametric maps may be critically limited by subject motion, which implies a need for accurate total-body motion compensation. The computational neuroanatomy community has routinely used large deformation diffeomorphic metric mapping (LDDMM) for aligning inter-subject brains to generate population-based brain atlases. In this work, we have adapted the LDDMM approach from neuroinformatics to total-body PET. We propose a diffeomorphic multi-resolution scheme-based motion correction (MOTIONLESS) for performing total-body voxel-wise motion compensation. Our data indicate that MOTIONLESS is capable of generating dense 3D voxel-wise deformation fields. As expected, the deformations were large around abdominal regions (deformable organs) and small in the cranial areas (rigid structures). To evaluate the impact of our methodology in clinical scenarios, we assessed the parametric values of a tumor in a genitourinary cancer subject. Visual assessment of the tumor parametric maps indicates improved sharpness and contrast following motion correction. Of note, the Ki values of the motion-compensated tumor were higher (0.0675 min⁻¹) than those of the manually motion-compensated tumor (0.0436 min⁻¹) and the non-motion-compensated tumor (0.0362 min⁻¹), warranting further investigation. Our data indicate that the proposed MOTIONLESS scheme has the potential to perform total-body PET motion correction with the aid of dense 3D deformation fields.
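
For orientation, the sketch below shows a coarse-to-fine multi-resolution deformable registration between two whole-body PET frames, using SimpleITK's diffeomorphic demons as an accessible stand-in for the LDDMM machinery described above. It is not the authors' MOTIONLESS code; iteration counts, smoothing, and pyramid levels are assumptions, and the images are assumed to be float-valued on the same physical grid.

```python
# Hedged sketch: multi-resolution diffeomorphic registration yielding a
# dense voxel-wise deformation field (SimpleITK demons as a stand-in).
import SimpleITK as sitk

def multires_diffeo(fixed, moving, shrink_factors=(4, 2, 1)):
    """fixed, moving : sitk.Image (two dynamic whole-body frames, float).
    Returns a dense displacement field mapping fixed -> moving."""
    demons = sitk.DiffeomorphicDemonsRegistrationFilter()
    demons.SetNumberOfIterations(50)
    demons.SetStandardDeviations(1.5)      # Gaussian regularization of field
    field = None
    for s in shrink_factors:               # coarse-to-fine pyramid
        f = sitk.Shrink(fixed, [s] * fixed.GetDimension())
        m = sitk.Shrink(moving, [s] * moving.GetDimension())
        if field is None:
            field = demons.Execute(f, m)
        else:
            # Upsample the coarser field to this level's grid, then refine
            field = sitk.Resample(field, f)
            field = demons.Execute(f, m, field)
    return field                           # dense 3D deformation field
```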

Acknowledgment: We would like to thank Yasser Gaber Abdelhafez and Abhijit J. Chaudhari for their constructive feedback and comments.
Keywords: Total-body PET, Motion-correction, Diffeomorphic registration
