IEEE 2017 NSS/MIC/RTSD

Online Program Overview: Session M-12


Image and Signal Processing

Session chairs: Andrew J. Reader; Emil Y. Sidky
Shortcut: M-12
Date: Friday, October 27, 2017, 08:00
Room: Centennial IV
Session type: MIC Session


8:00 am M-12-1

A Deep-learning Method for Detruncation of Attenuation Maps (#3646)

A. Thejaswi1, A. Nivarthi1, D. J. Beckwith1, K. L. Johnson2, P. H. Pretorius2, E. O. Agu1, M. A. King2, C. Lindsay2

1 Worcester Polytechnic Institute, Computer Science, Worcester, Massachusetts, United States of America
2 University of Massachusetts Medical School, Department of Radiology, Worcester, Massachusetts, United States of America


In hybrid imaging, such as SPECT/CT, the use of CT-derived attenuation maps has the potential to improve image quality. However, the benefits of attenuation correction can be reduced when the patient's CT is truncated (e.g. for obese patients). We investigate the use of deep learning to complete truncated regions within cone-beam CT-derived attenuation maps for attenuation correction in cardiac perfusion SPECT. Our technique is based on inpainting, which attempts to reconstruct missing parts of an image; it uses a special type of convolutional neural network, called a context encoder, to learn the size and shape of the patient's body. For training, we used 1,169 non-truncated low-dose cone-beam CTs acquired with a clinical SPECT/CT imaging system from an existing cardiac perfusion study under an IRB-approved protocol. Using our method, we were able to construct contours for the truncated images and fill them in with appropriate voxel values. Our method can be advantageous over other de-truncation methods because it is image-based and does not require specialized reconstruction methods. We also show that utilizing the de-truncated CTs for attenuation correction is beneficial in improving the photon counts in cardiac perfusion studies.
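The training data described above pairs truncated attenuation maps with complete ones. As a rough illustration of how such input/target pairs can be simulated (the phantom, array sizes, and field-of-view radius below are illustrative assumptions, not the authors' actual pipeline), a context encoder would be trained to map the truncated volume back to the complete one:

```python
import numpy as np

def make_truncated_pair(mu_map, fov_radius):
    """Zero voxels outside a circular transaxial field of view,
    emulating lateral truncation of an attenuation map."""
    h, w = mu_map.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= fov_radius ** 2
    return np.where(inside, mu_map, 0.0), mu_map  # network input, target

# Toy elliptical "body" standing in for a CT-derived attenuation slice.
yy, xx = np.mgrid[0:64, 0:64]
phantom = np.where(((yy - 31.5) / 20) ** 2 + ((xx - 31.5) / 30) ** 2 <= 1.0,
                   0.15, 0.0)
x_in, target = make_truncated_pair(phantom, fov_radius=24)
```

Because the ellipse is wider than the simulated field of view, the lateral body outline is missing from the input, which is exactly the contour the network must learn to restore.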

Keywords: truncation, CT, SPECT, Deep-learning
8:18 am M-12-2

A Novel Spine Matching Scheme Across Diffusion Weighted and MRAC-WATER Volumes for Whole Body Imaging Lesion Detection (#1336)

R. Prasad1, J. Dholakia1, R. Venkatesan1

1 GE Healthcare, MRI, Bengaluru, Karnataka, India


Whole-body (WB) magnetic resonance imaging (MRI) with diffusion-weighted imaging (DWI) is a recent advancement widely used for the evaluation of multifocal metastases in oncology. DWI provides functional information and can be used for the detection and characterization of pathologic processes, not only acute cerebral infarction but also malignant tumors. In whole-body imaging, axial MR images acquired station by station need to be stitched together, and the combined series so generated is expected to have continuity and alignment of anatomy and uniformity of intensities. Clinicians tend to combine multiple sources of information to diagnose an abnormality, and hence overlay DWI images (which provide information on lesion diffusivity) on MR attenuation-corrected images. When overlaid or fused, misalignments are observed in the lumbar and cervical spine regions due to incorrect station-to-station volume registration, geometric distortions, and pixel sizes varying between DWI volumes. This paper aims to minimize this misalignment using geodesic-based spine detection and Hausdorff-distance-based registration. A geodesic/curvature approach is employed to detect the spine in the DWI images. A one-to-one correspondence is established between the DWI and LAVA spines using the scale-invariant feature transform and the gradient of the spine. Further, slopes and intercepts of the DWI and LAVA spines are computed and stored as individual sets. The Hausdorff distance is used to compute the maximum distance, and the DWI spine is translated by the computed Hausdorff offset. Any further mismatch during overlay is corrected by computing the squared difference between the centroids of the sets and reshifting the spine accordingly. The accuracy of spine tracking was high (p < 0.001). This method is less sensitive to noise, eliminates outliers, and has the potential to assist clinicians in the diagnosis of bone marrow metastases.
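The core registration step, translating the DWI spine point set onto the reference spine and checking the residual mismatch, can be sketched on toy point sets. This is a simplified numpy sketch: the centroid-based offset and the synthetic centrelines below are illustrative assumptions, and the full method additionally uses SIFT correspondence and slope/intercept sets.

```python
import numpy as np

def directed_hausdorff(a, b):
    """Max over points of a of the distance to the nearest point of b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).max()

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))

def align_spines(dwi_pts, ref_pts):
    """Translate the DWI spine point set by the centroid offset to the
    reference (LAVA) spine and report the residual Hausdorff distance."""
    offset = ref_pts.mean(axis=0) - dwi_pts.mean(axis=0)
    shifted = dwi_pts + offset
    return shifted, hausdorff(shifted, ref_pts)

# Toy spine centrelines: the DWI set is the reference shifted by (3, -2) mm.
s = np.linspace(0.0, 3.0, 20)
ref = np.stack([np.linspace(0.0, 50.0, 20), 2.0 * np.sin(s)], axis=1)
dwi = ref + np.array([3.0, -2.0])
aligned, residual = align_spines(dwi, ref)
```

For a pure translation the residual Hausdorff distance collapses to zero after the shift, which is the success criterion a station-to-station registration step would check.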

Keywords: Volume Registration, Image Registration, Whole Body Imaging, Geodesics, Hausdorff Distance, Diffusion Weighted Imaging, MRI
8:36 am M-12-3

Data-Driven Cardiac Gating in PET with VOI Optimization and Frequency Tracking (#3048)

T. Feng1, J. Wang1, W. Zhu1, Y. Dong2, H. Li1

1 UIH America, Inc, Houston, Texas, United States of America
2 United Imaging Healthcare, Shanghai, China


Current data-driven cardiac gating is less reliable than ECG. In this paper, we developed a new data-driven cardiac gating approach that achieves a significantly higher success rate by modeling the typical motion state of the heart.

First, the central location of the heart was obtained from the corresponding CT or PET image. A cylinder-shaped mask centered at the heart was used to confine the cardiac signal calculation; the size of the cylinder was chosen to enclose a typical human heart. A volume of interest (VOI) of the heart was initialized to be the same as the mask. The cardiac signal, measuring the expansion/contraction motion of the heart, was calculated as the second-order moment of the tracer distribution in the VOI in the projection domain. A moving window with a width of ~90 seconds and an interval of ~20 seconds was applied to the extracted signal to monitor the cardiac frequencies over time. The shift of the cardiac frequency was modeled with a modified Fourier transform of the signal. The signal-to-noise ratio (SNR) of the cardiac motion signal was defined as the energy of the spectrum around the cardiac frequency divided by the energy of the spectrum at the remaining non-cardiac frequencies. The optimal cardiac signal was determined by iteratively updating the VOI and cardiac frequencies to maximize the SNR. Nineteen patients with high myocardium uptake were included in the study. The average scan time was 6 minutes with an injection of ~10 mCi FDG. Both ECG signals from external devices and a conventional data-driven method were used for comparison with our new method. Cardiac-gated image reconstructions were used to validate our method.
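The spectral SNR criterion described above, energy near the cardiac frequency over energy elsewhere, can be sketched on a synthetic gating trace. This is a minimal numpy sketch: the 1.1 Hz frequency, band half-width, sampling rate, and noise level are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def cardiac_snr(signal, dt, f_cardiac, half_band=0.1):
    """Spectral SNR of a gating trace: spectrum energy within
    +/- half_band Hz of the assumed cardiac frequency, divided by the
    energy at all other (non-DC) frequencies."""
    spec = np.abs(np.fft.rfft(signal - signal.mean())) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    in_band = np.abs(freqs - f_cardiac) <= half_band
    return spec[in_band].sum() / spec[~in_band].sum()

# Synthetic 90 s second-order-moment trace: 1.1 Hz cardiac beat + noise.
rng = np.random.default_rng(0)
dt = 0.05
t = np.arange(0.0, 90.0, dt)
trace = np.sin(2 * np.pi * 1.1 * t) + 0.3 * rng.standard_normal(t.size)
snr_true = cardiac_snr(trace, dt, f_cardiac=1.1)   # correct frequency
snr_wrong = cardiac_snr(trace, dt, f_cardiac=2.0)  # off-frequency guess
```

Maximizing this quantity over the assumed frequency (and over the VOI) is what drives the iterative update: the correct cardiac frequency yields a far larger SNR than an off-frequency guess.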

A visible cardiac peak was detected in all patients using our new method, and triggers from the data-driven cardiac signal matched well with those from the ECG. Only 12 patients showed a visible cardiac peak with the conventional data-driven method.

We have shown that our new data-driven method provides a robust alternative for cardiac gating in PET imaging, with a high success rate.

Keywords: data-driven gating, cardiac gating, PET
8:54 am M-12-4

HOSVD-Based Multigraph Cuts for Joint Segmentation of Multi-Channel Images (#1382)

S. Roy Chowdhury1, Q. Li2, G. El Fakhri2, J. Dutta1, 2

1 UMass Lowell, Electrical and Computer Engineering, Lowell, Massachusetts, United States of America
2 Harvard/MGH, Gordon Center for Medical Imaging, Boston, Massachusetts, United States of America


Techniques for multi-channel image segmentation are important in a wide range of medical imaging contexts, including multimodal imaging, multi-parametric imaging, and multi-time-point imaging. We describe here a joint segmentation approach based on multigraph cuts. Our method relies on the computation of a composite graph Laplacian based on pairwise voxel similarities in the multi-channel images. We apply higher-order singular value decomposition (HOSVD) to this Laplacian tensor. The resultant singular vectors are then used in conjunction with the k-means algorithm to find the clusters inherent in the data, leading to effective segmentation. We successfully demonstrate the application of this method to lesion segmentation for multiple disease types and imaging contexts, namely multimodal (PET/MR) imaging of sarcomas, dynamic PET imaging of hepatocellular carcinomas, and MR imaging of glioblastomas.
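The pipeline, a composite Laplacian tensor, an HOSVD factor matrix, then k-means, can be sketched on a toy two-channel 1D image. This is a simplified numpy sketch under illustrative assumptions: in particular, it uses the trailing singular vectors of the mode-1 unfolding as the embedding (in the spirit of spectral clustering, where the near-null space of a Laplacian encodes cluster membership), all sizes are toy-scale, and the k-means seeding is a toy two-cluster shortcut.

```python
import numpy as np

def laplacian(x, sigma=0.5):
    """Unnormalised graph Laplacian from pairwise Gaussian affinities."""
    w = np.exp(-((x[:, None] - x[None, :]) ** 2) / sigma ** 2)
    return np.diag(w.sum(axis=1)) - w

def laplacian_embedding(tensor, k):
    """Mode-1 HOSVD factor of the Laplacian tensor: the trailing singular
    vectors (near-null space of the composite Laplacian) act as cluster
    indicators."""
    n = tensor.shape[0]
    u, _, _ = np.linalg.svd(tensor.reshape(n, -1), full_matrices=False)
    return u[:, -k:]

def kmeans(feats, k, iters=50):
    centers = feats[[0, -1]]  # deterministic seeding for this k=2 sketch
    for _ in range(iters):
        labels = ((feats[:, None] - centers[None]) ** 2).sum(-1).argmin(1)
        centers = np.stack([feats[labels == j].mean(axis=0)
                            for j in range(k)])
    return labels

# Two-channel 1D "image": 10 background voxels then 10 lesion voxels.
rng = np.random.default_rng(1)
ch1 = np.r_[np.zeros(10), np.ones(10)] + 0.05 * rng.standard_normal(20)
ch2 = np.r_[np.zeros(10), 2.0 * np.ones(10)] + 0.05 * rng.standard_normal(20)
tensor = np.stack([laplacian(ch1), laplacian(ch2)], axis=2)  # n x n x chans
labels = kmeans(laplacian_embedding(tensor, k=2), k=2)
```

The key point is that the per-channel Laplacians are never segmented separately: the SVD of the joint unfolding fuses the similarity structure of both channels before clustering.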

Keywords: Image segmentation, multimodal, tensor, graph Laplacian, higher order singular value decomposition
9:12 am M-12-5

Design and Development of LSI for new photon-counting computed tomography with multi-pixel photon counters (#2827)

M. Arimoto1, H. Morita1, J. Kataoka1, K. Fujieda1, T. Maruhashi1, H. Nitta2, H. Ikeda3

1 Waseda University, Research Institute for Science and Engineering, Shinjuku, Tokyo, Japan
2 Hitachi Metals, Ltd., Osaka, Japan
3 Institute of Space and Astronautical Science, Kanagawa, Japan


X-ray imaging using computed tomography (CT) is widely used for nondestructive imaging of the interior of the human body. In the next decade, photon-counting X-ray CT is expected to reduce the required dose and enable multicolor imaging. Recently, we proposed a novel photon-counting method using a multi-pixel photon counter (MPPC), which has a significantly high signal gain (~10⁶) and fast temporal response (a few ns), combined with a high-speed scintillator. To realize photon-counting CT imaging over a wide area under irradiation by an extremely high X-ray flux (10⁵–10⁷ Hz/mm²), a multi-channel MPPC is required. Thus, we have developed a large-scale integration (LSI) chip with ultrafast signal-processing capability for a 16-channel MPPC. The developed LSI can extract a pulse charge from an MPPC with a large detector capacitance (~200 pF) by utilizing an electrical circuit with low input impedance. The obtained pulse signal is discriminated using different energy thresholds in the LSI, thereby enabling ultrafast multicolor CT imaging. In this report, we give a brief overview of the electrical circuit design and report the performance of the LSI integrated with MPPCs.

Keywords: X-ray CT, MPPC, LSI, low-dose, multicolor
9:30 am M-12-6

Deep learning for suppression of resolution-recovery artefacts in MLEM PET image reconstruction (#3564)

C. O. da Costa-Luis1, A. J. Reader1

1 King's College London, Imaging Sciences & Biomedical Engineering, London, United Kingdom of Great Britain and Northern Ireland


Resolution modelling in maximum likelihood expectation maximisation (MLEM) image reconstruction recovers resolution, but at the cost of introducing ringing artefacts. Under-modelling, post-smoothing (PS), and regularisation methods that aim to suppress these artefacts nearly all result in a loss of resolution. This work proposes the use of deep convolutional neural networks (DCNNs) as a post-reconstruction image-processing step to reduce reconstruction artefacts without compromising the resolution recovery. The DCNN results successfully suppress ringing artefacts and furthermore achieve a 19.8% lower root mean squared error (RMSE) relative to MLEM, compared to a best-case decrease of only 0.2% when an optimal level of PS is applied to MLEM. The resultant images from the DCNN have sharper edges and, in fact, also demonstrate improved spatial resolution compared to resolution-recovery MLEM. Future work will consider the introduction of varying noise levels, and the use of deep learning to compensate for ringing at different iteration numbers of MLEM. Importantly, the DCNN could also be extended to assist in MR-guided reduced-dose PET imaging.
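The ringing behaviour that motivates this work can be reproduced in a minimal 1D MLEM deconvolution. This is an illustrative numpy sketch, not the authors' reconstruction code: the phantom, PSF width, and iteration count are assumptions chosen to make the overshoot visible.

```python
import numpy as np

def gaussian_psf(n, sigma):
    """Normalised Gaussian point spread function of length n."""
    x = np.arange(n) - n // 2
    k = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    return k / k.sum()

def mlem(data, psf, n_iter):
    """1D MLEM with resolution modelling: the forward model convolves
    the estimate with the system PSF; the update backprojects the
    data/model ratio with the flipped PSF."""
    est = np.ones_like(data)
    for _ in range(n_iter):
        fwd = np.convolve(est, psf, mode="same")
        ratio = data / np.maximum(fwd, 1e-12)
        est = est * np.convolve(ratio, psf[::-1], mode="same")
    return est

# Sharp-edged phantom blurred by the scanner PSF (noise-free for clarity).
truth = np.zeros(128)
truth[40:88] = 10.0
psf = gaussian_psf(33, sigma=3.0)
data = np.convolve(truth, psf, mode="same")
recon = mlem(data, psf, n_iter=200)
overshoot = recon.max() - truth.max()  # Gibbs-type ringing at the edges
```

Even without noise, the iterations sharpen the edges while overshooting the true plateau near them, which is precisely the artefact a post-reconstruction network would be trained to remove.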

Keywords: Machine Learning, Deep Learning, Deep Convolutional Neural Networks, Resolution Modelling, Positron Emission Tomography, Image Processing, Image Reconstruction, Resolution Recovery