Please note: all times in the online program are given in New York, America (GMT-04:00) time.
Jan 26, 2022, 3:10:36 PM
Jan 27, 2022, 5:10:36 AM
Direct Reconstruction of Parametric Images with Non-Rigid Motion Correction (#993)
J. - D. Gallezot1, K. Fontaine1, T. Mulnix1, R. Carson1, Y. Lu1
1 Yale University, Radiology and Biomedical Imaging, New Haven, Connecticut, United States of America
Background: Subject motion is an important issue in PET imaging. Motion can blur the images or introduce artifacts due to mismatch between attenuation and emission data or, in kinetic modeling studies, mismatch between different time points of the scan. In previous studies, we implemented rigid motion correction for all types of studies, including direct parametric image reconstruction, and non-rigid motion correction for standard uptake value (SUV) image reconstruction and for indirect parametric image reconstruction. In this study, we implemented non-rigid motion correction for direct parametric image reconstruction and provide an initial evaluation.
Methods: We first implemented non-rigid motion correction for the direct Nested-EM algorithm, owing to the similarity of its equations to those of the standard 3D-EM algorithm, and applied it to real and simulated 18F-FDG data. In this initial investigation, special emphasis was placed on data with high-contrast structures, such as lung tumors and the myocardium, since motion artifacts are especially visible in these cases.
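The abstract notes that Nested-EM shares its update equations with the standard 3D-EM (MLEM) algorithm, which is what makes warping the forward model straightforward. As an illustration only (not the authors' implementation), the sketch below shows an MLEM update with a per-gate motion warp W_g folded into the projector; the matrices, sizes, and permutation-based warps are synthetic stand-ins.

```python
import numpy as np

# Hypothetical motion-compensated MLEM sketch: forward model A @ W_g per
# motion state g, where W_g warps the reference-frame image x into state g.
rng = np.random.default_rng(0)
n_vox, n_bins, n_gates = 16, 24, 3
A = rng.random((n_bins, n_vox))                         # toy forward projector
x_true = rng.random(n_vox) + 0.5                        # reference-frame image
warps = [np.eye(n_vox)[rng.permutation(n_vox)] for _ in range(n_gates)]
y = [A @ (W @ x_true) for W in warps]                   # noiseless gated data

x = np.ones(n_vox)                                      # initial estimate
sens = sum((A @ W).sum(axis=0) for W in warps)          # sensitivity image
for _ in range(500):
    back = np.zeros(n_vox)
    for W, y_g in zip(warps, y):
        AW = A @ W                                      # motion-included projector
        back += AW.T @ (y_g / np.maximum(AW @ x, 1e-12))
    x *= back / sens                                    # multiplicative EM update
```

The multiplicative form preserves positivity at every iteration, which is the implicit positivity constraint the Results paragraph credits with suppressing some artifacts.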
Results and conclusions: The non-rigid motion correction was successfully applied to both single-bed and whole-body scans with continuous bed motion. Motion induced similar artifacts in both direct and indirect images, though the positivity constraints implicitly enforced by the Nested-EM algorithm for 18F-FDG data can reduce the magnitude of artifacts in uncorrected direct images. Conversely, residual uncorrected motion was more visible in some direct images, due to their lower noise level. Future studies will include testing on a broader range of subjects with different tracer distributions due to tumors and with different motion patterns, including large motion affecting different organs.
Keywords: Motion-Correction, direct parametric reconstruction, non-rigid, positron emission tomography, FDG
Accelerated Convergent Motion Compensated Image Reconstruction (#1122)
C. Delplancke1, K. Thielemans3, 4, M. J. Ehrhardt2
1 University of Bath, Department of Mathematical Sciences, Bath, United Kingdom
Motion correction aims to prevent motion artefacts caused, for example, by respiration, heartbeat, or head movement. In a preliminary step, the measured data are divided into gates corresponding to motion states, and displacement maps from a reference state to each motion state are estimated. One common technique is the motion-compensated image reconstruction framework, in which the displacement maps are integrated into the forward model for the gated data. For standard algorithms, the computational cost per iteration increases linearly with the number of gates. To accelerate the reconstruction, we propose a randomized and convergent algorithm whose per-iteration computational cost is constant in the number of gates. We show improved theoretical rates of convergence and observe the predicted speed-up on two synthetic datasets corresponding to rigid and non-rigid motion.
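The key cost argument is that a full-gradient step must touch every gate, while a randomized step samples one gate per iteration. The toy sketch below (not the authors' algorithm; a plain stochastic gradient step on a gated least-squares model, with permutation matrices standing in for displacement maps) illustrates the constant-per-iteration-cost idea.

```python
import numpy as np

# Gated least-squares model: y_g = A @ W_g @ x.  A full gradient sums over
# all gates; here each iteration samples a single random gate, so the cost
# per iteration does not grow with the number of gates.
rng = np.random.default_rng(1)
n_vox, n_bins, n_gates = 20, 30, 5
A = rng.standard_normal((n_bins, n_vox)) / np.sqrt(n_bins)
warps = [np.eye(n_vox)[rng.permutation(n_vox)] for _ in range(n_gates)]
x_true = rng.random(n_vox)
y = [A @ (W @ x_true) for W in warps]                  # noiseless gated data

x = np.zeros(n_vox)
step = 0.2
for _ in range(5000):
    g = rng.integers(n_gates)                          # pick one gate at random
    AW = A @ warps[g]                                  # motion-included model
    x -= step * AW.T @ (AW @ x - y[g])                 # step on that gate only
```

With consistent noiseless data every gate shares the solution, so this simple randomized scheme converges; the abstract's contribution is a convergent randomized method with proven rates for the realistic setting.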
Keywords: Motion Compensated Image Reconstruction, Randomized algorithm, Motion correction, Acceleration
Development of a Robust Head Tracking System Through Virtual and Physical Optimization (#673)
K. S. Kalluri1, C. Lindsay1, R. G. Richards2, M. May2, B. Auer1, P. H. Kuo3, L. R. Furenlid2, 3, M. A. King1
1 University of Massachusetts Medical School, Department of Radiology, Worcester, Massachusetts, United States of America
Reconstructed image quality can be degraded by patient head motion, which requires precise motion measurement and compensation during reconstruction. Head motion can be estimated using optical motion tracking systems (MTS). Unfortunately, optimizing MTS performance can be challenging due to limited repeatability and susceptibility to a variety of confounding errors. Herein, we developed a novel MTS in which the head tracking system is optimized through both physical and virtual (simulated) configuration and experimentation, with the goal of making the physical MTS more robust. The virtual system helps identify optimal operating conditions such as phantom placement and robot-generated motion paths, provides a reliable estimate of the recorded head motion, and alleviates sources of confounding errors. The physical system can then be used to quantify the measured motion in a controlled environment and to investigate further improvements.
Acknowledgment: Research reported in this publication was supported by the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health under Award Number R01 EB022521. P. H. Kuo has a financial interest in and partial employment at the Invicro division of Konica Minolta.
Keywords: SPECT, Motion Tracking, Brain Imaging, MPH
Deep learning-aided data-driven quasi-continuous non-rigid motion correction in PET (#923)
J. Zhang1, K. Fontaine1, R. E. Carson1, 2, J. A. Onofrey1, 3, Y. Lu1
1 Yale University, Department of Radiology and Biomedical Imaging, New Haven, Connecticut, United States of America
Background: Patient movement is one of the major degrading factors in PET/CT imaging. In the past, we proposed a data-driven algorithm, centroid of distribution (COD), to detect and perform body motion correction (BMC). For respiratory motion, we proposed the internal-external (INTEX) technique, which builds a continuous respiratory motion model by relating each voxel's movement to a 1-D respiratory surrogate signal. To further account for varying respiratory patterns, we proposed Dynamic-INTEX (D-INTEX). Recently, we combined COD-based BMC and D-INTEX to perform simultaneous body and respiratory motion correction (BRMC). So far, there is no universal method that accounts for all types of motion without sophisticated models. In addition, all the previously proposed methods rely on assumptions, and violations of those assumptions may lead to sub-optimal correction performance.
Methods: We propose a new motion correction framework that establishes a subject-specific motion model with the aid of deep learning (DL)-based image synthesis. The proposed DL-aided data-driven motion correction (DLMC) framework estimates non-rigid motion in a quasi-continuous fashion, without establishing models for specific motion sub-types. Motion is estimated by non-rigid image registration between synthetic reconstructions, which are predicted by a subject-specific neural network that takes point clouds as input. A point cloud is a simple 3-D back-projection image computed from 500 ms of PET raw data. DLMC is compared with the INTEX, D-INTEX, and BRMC methods using data from three 18F-FPDTBZ PET studies.
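For background on the COD idea the framework builds on: a short-frame centroid of the back-projected event distribution yields a motion trace whose jumps flag body motion. The sketch below is a hypothetical illustration (frame length, event format, and the simulated 5 mm shift are all assumptions, not the authors' data).

```python
import numpy as np

# Centroid-of-distribution (COD) toy example: average back-projected event
# positions per 500 ms frame, then look for jumps in the centroid trace.
rng = np.random.default_rng(2)
n_frames = 40
events = []
for f in range(n_frames):
    # simulate a body shift of 5 units in z starting at frame 20
    shift = np.array([0.0, 0.0, 5.0]) if f >= 20 else np.zeros(3)
    events.append(rng.normal(loc=shift, scale=1.0, size=(1000, 3)))

cod = np.array([frame.mean(axis=0) for frame in events])   # one centroid per frame
jump = np.linalg.norm(np.diff(cod, axis=0), axis=1)        # frame-to-frame change
motion_frame = int(np.argmax(jump)) + 1                    # frame where motion starts
```

The DLMC framework goes further by feeding such short-frame back-projections (point clouds) to a subject-specific network, but the COD trace above is the data-driven detection step it generalizes.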
Results and conclusions: For all the studies, DLMC outperformed the current state-of-the-art methods and demonstrated superior resolution recovery. The high computational cost, due to the large number of non-rigid image registrations and image reconstructions, is the major limitation of DLMC: approximately 4 times higher than that of the other methods.
Acknowledgment: NIH grants R21EB028954, R03EB027209.
Keywords: Motion correction, Deep learning, PET, Non-rigid, Data-driven
Post-Reconstruction PET Resolution Modelling by Synthesised Image Reconstruction (#999)
L. D. Vass1, A. J. Reader2
1 King's College London, Department of Imaging Chemistry and Biology, London, United Kingdom
Resolution recovery techniques in PET aim to improve spatial resolution, signal-to-noise ratio, and quantitative accuracy. Several factors are responsible for the degradation of spatial resolution; techniques have been developed to compensate for this loss through knowledge of the point spread function (PSF). In particular, the widely used Richardson-Lucy (RL) technique can use a model of the PSF to improve spatial resolution. However, one recognised limitation is that the RL algorithm, when applied for post-reconstruction resolution recovery, can rapidly produce noisy images. Incorporating the PSF into iterative reconstruction algorithms has shown potential benefits (depending on the task) compared to conventional reconstruction with no PSF modelling. Yet PSF-based reconstruction requires raw projection data and knowledge of the forward/back-projectors relating to the scanner's geometry, and if these are unavailable the technique becomes infeasible. In this proof-of-concept work, we propose a novel post-reconstruction resolution recovery technique based on synthesising an image reconstruction problem with our own chosen geometry; we hypothesise that it will mitigate the limitations of existing post-reconstruction techniques. The method can be understood as a means of decelerating the resolution recovery by embedding it into a new, synthesised inverse problem. The aim of this study is to evaluate the proposed method in 2D using three simulated digital phantoms at various levels of noise. We compare the performance of the proposed method with the RL algorithm and with PSF-based maximum likelihood expectation maximisation (MLEM) reconstruction. Promisingly, in conditions which match typical clinical scans (e.g. low iteration numbers and counts), the proposed method achieves a substantially lower root mean square error than the RL algorithm. Interestingly, its performance is comparable to PSF modelling within iterative reconstruction.
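For reference, the baseline the abstract compares against is the classic Richardson-Lucy deconvolution. The sketch below is a minimal RL implementation under an assumed shift-invariant Gaussian PSF (an illustrative choice; real PET PSFs are spatially variant).

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def richardson_lucy(blurred, sigma, n_iter=50, eps=1e-12):
    """Minimal RL deconvolution with a symmetric Gaussian PSF (H^T = H)."""
    est = blurred.copy()                              # common initialisation
    for _ in range(n_iter):
        conv = gaussian_filter(est, sigma)            # H x
        ratio = blurred / np.maximum(conv, eps)       # y / (H x)
        est *= gaussian_filter(ratio, sigma)          # x . H^T (y / H x)
        # more iterations sharpen the image but amplify noise on real data,
        # which is the RL limitation the abstract describes
    return est

truth = np.zeros((64, 64)); truth[28:36, 28:36] = 10.0  # toy hot square
blurred = gaussian_filter(truth, 2.0)
recovered = richardson_lucy(blurred, 2.0, n_iter=100)
```

On this noiseless phantom RL steadily restores the edges; the proposed synthesised-reconstruction method is motivated precisely by RL's noise amplification when the data are not clean.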
Keywords: Positron emission tomography, resolution recovery, image reconstruction
Texture Transformer Super-Resolution for Computed Tomography (#1218)
S. Zhou1, L. Yu2, M. Jin1
1 University of Texas at Arlington, Physics, Arlington, Texas, United States of America
To improve the spatial resolution and suppress the noise of computed tomography (CT) images, we propose a texture transformer network for image super-resolution (TTSR). TTSR is a reference-based image super-resolution method: the noisy low-resolution CT (LRCT) image and the clean high-resolution CT (HRCT) image serve as the query and key in a transformer, respectively. Image translation is optimized through deep neural network (DNN) texture extraction, correlation embedding, and attention-based texture transfer and synthesis to achieve joint feature learning between LRCT and HRCT images. The 4D XCAT phantom program, based on data from 18 patients, is used to generate baseline HRCT images and LRCT images (20% dose and 16 times the pixel size of HRCT). The reference images in TTSR can be randomly selected from the training set and are different from the test images. Peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) are used as quantitative indicators. For comparison, we use cubic spline interpolation and a generative adversarial network (GAN) with cycle consistency, namely GAN-CIRCLE. Compared with the other two methods, TTSR restores more details in SRCT images, although some noise artifacts remain visible. The quantitative results also show that TTSR achieves better PSNR, while GAN-CIRCLE has better SSIM. In general, TTSR, based on a texture transformer and attention mechanism, can effectively improve the spatial resolution and suppress the noise of LRCT images. Further work is needed to optimize the network structure for better noise reduction.
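One of the quantitative indicators used above, PSNR, is simple enough to state exactly; the helper below assumes images normalized to a [0, 1] data range (an assumption, since the abstract does not specify units).

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB for an assumed data range."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

# e.g., a uniform 0.1 offset on a [0, 1]-range image gives MSE = 0.01
ref = np.zeros((8, 8))
noisy = ref + 0.1
value = psnr(ref, noisy)   # 10 * log10(1 / 0.01) = 20 dB
```

SSIM, the other metric, additionally compares local luminance, contrast, and structure, which is why the two metrics can disagree (here TTSR wins on PSNR while GAN-CIRCLE wins on SSIM).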
Acknowledgment: This work was supported in part by the U.S. National Institutes of Health under Grant No. NIH/NHLBI 1R15HL150708-01A1.
Keywords: Super resolution CT (SRCT), High resolution CT (HRCT), Low-resolution CT (LRCT), GAN-CIRCLE, Texture transformer super resolution (TTSR)
Autonomous Timing Calibration for Time-of-Flight PET (#1263)
1 University of Pennsylvania, Department of Radiology, Philadelphia, Pennsylvania, United States of America
Timing calibration is critical to achieving the best possible timing resolution. In this work, we present an autonomous timing calibration for 3D TOF PET that explicitly uses intrinsic TOF data consistency. First, we derive generalized consistency equations in native coordinates for 3D TOF PET scanners with arbitrary transverse geometry, including polygonal PET scanners with modular detectors as a special case. In native coordinates, the two degrees of entangled redundancy and the rich structure of 3D TOF data are explicitly elicited and exploited by the two TOF consistency equations. We then develop an autonomous timing calibration as an application of the TOF data consistency equations. Timing offsets on a per-crystal basis can be computed by solving the two linear timing-offset equations involving two TOF moments: the zeroth and first TOF moments. Currently, timing calibration is usually obtained from a specialized data acquisition with a known tracer distribution, e.g., a cylinder phantom or an annulus phantom. The proposed autonomous timing calibration can be applied to data acquired with an arbitrary tracer distribution, which eliminates the need for a specialized acquisition. To evaluate the method, we performed GATE simulations of a generic 3D TOF PET scanner with a NEMA phantom, and timing offsets were embedded into the list-format data event-by-event. We then deposited the list-format events into the two TOF moments, and the timing offsets were accurately computed using a Landweber algorithm. Next-generation TOF PET scanners achieve significantly improved timing resolution using silicon photomultiplier (SiPM)-based detectors, which may require more frequent timing calibration and closer performance monitoring than photomultiplier-tube-based detectors. The proposed autonomous timing calibration allows residual timing offsets to be corrected automatically using clinical data sets whenever computing resources are available.
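The final solve the abstract describes is a Landweber iteration on linear timing-offset equations. The sketch below shows that iteration on a synthetic stand-in system A t = b (the real A and b are built from the zeroth and first TOF moments; sizes and values here are illustrative).

```python
import numpy as np

# Landweber iteration for per-crystal timing offsets t solving A t = b:
#   t_{k+1} = t_k + tau * A^T (b - A t_k),  with tau < 2 / ||A||^2.
rng = np.random.default_rng(3)
n_eq, n_crystals = 120, 40
A = rng.standard_normal((n_eq, n_crystals))          # toy moment equations
t_true = rng.normal(scale=0.1, size=n_crystals)      # embedded offsets (ns scale)
b = A @ t_true

t = np.zeros(n_crystals)
tau = 1.0 / np.linalg.norm(A, 2) ** 2                # safe step size
for _ in range(2000):
    t += tau * A.T @ (b - A @ t)                     # Landweber update
```

Because the iteration needs only matrix-vector products with A and its transpose, it scales to per-crystal unknowns on real scanners without forming or inverting a normal matrix.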
Keywords: Consistency equations, native coordinates, autonomous timing calibration, time-of-flight (TOF), positron emission tomography (PET).