IEEE 2017 NSS/MIC/RTSD

Online Program Overview Session: N-37


Experimental software

Session chairs: Elisabetta Ronchieri; Maria Grazia Pia
 
Shortcut: N-37
Date: Thursday, October 26, 2017, 13:40
Room: Centennial II
Session type: NSS Session

Software applications and results

Contents

1:40 pm N-37-1

Offline Reconstruction Algorithms for the CMS High Granularity Calorimeter for HL-LHC (#2471)

E. Meschi1

1 CERN, GENEVE 23, Switzerland

Content

The upgraded High Luminosity LHC, after the third Long Shutdown (LS3), will provide an instantaneous luminosity of 7.5 × 10^34 cm−2s−1 (levelled), at the price of extreme pileup of up to 200 interactions per crossing. Such extreme pileup poses significant challenges, in particular for forward calorimetry. As part of its HL-LHC upgrade program, the CMS collaboration is designing a High Granularity Calorimeter to replace the existing endcap calorimeters. It features unprecedented transverse and longitudinal segmentation for both electromagnetic and hadronic compartments. The electromagnetic and a large fraction of the hadronic portions will be based on hexagonal silicon sensors of 0.5 - 1 cm^2 cell size, with the remainder of the hadronic portion based on highly-segmented scintillators with SiPM readout. Offline clustering algorithms that make use of this extreme granularity require novel approaches to preserve the fine structure of showers and to be stable against pileup, while supporting the particle flow approach by enhancing pileup rejection and particle identification. We discuss the principle and performance of a set of clustering algorithms for the HGCAL based on techniques borrowed from machine learning and computer vision. These algorithms lend themselves particularly well to deployment on GPUs. The features of the algorithms, as well as an analysis of the CPU requirements in the presence of large pileup, are discussed in some detail in view of the physics requirements of the upgraded CMS detector.
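
As a rough illustration of the family of techniques referred to here, the sketch below applies density-peak clustering (a method from the computer-vision literature) to a toy set of 2D hits. The parameter values and the energy weighting are assumptions chosen for illustration only; this is not the CMS HGCAL algorithm or its GPU implementation.

```python
# Minimal sketch of density-peak clustering on calorimeter-like hits (illustrative only;
# parameters and energy weighting are assumptions, not the CMS algorithm).
import numpy as np

def density_peak_clusters(xy, energy, d_c=1.5, rho_min=2.0, delta_min=2.0):
    """Cluster 2D hit positions `xy` (N, 2) with energies `energy` (N,)."""
    n = len(xy)
    dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)

    # Local density: energy collected within a distance d_c of each hit.
    rho = (energy[None, :] * (dist < d_c)).sum(axis=1)

    # delta: distance to the nearest hit with higher density (seeds have large delta).
    delta = np.full(n, np.inf)
    nearest_higher = np.full(n, -1)
    for i in range(n):
        higher = np.where(rho > rho[i])[0]
        if higher.size:
            j = higher[np.argmin(dist[i, higher])]
            delta[i], nearest_higher[i] = dist[i, j], j

    # Seeds: high density and well separated from any denser hit.
    labels = np.full(n, -1)
    seeds = np.where((rho > rho_min) & (delta > delta_min))[0]
    labels[seeds] = np.arange(seeds.size)

    # Follow each hit to denser neighbours until a seed (or nothing) is reached.
    for i in np.argsort(-rho):
        if labels[i] < 0 and nearest_higher[i] >= 0:
            labels[i] = labels[nearest_higher[i]]
    return labels  # -1 marks unassigned (outlier) hits

rng = np.random.default_rng(0)
hits = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
print(density_peak_clusters(hits, np.ones(len(hits))))  # two clusters expected
```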

Keywords: Calorimetry, Reconstruction Software, LHC, CMS
1:58 pm N-37-2

The design and performance of the ATLAS Inner Detector trigger in high pileup collisions at 13 TeV at the Large Hadron Collider (#2148)

C.-L. Sotiropoulou1

1 University of Pisa and INFN Sezione di Pisa, Department of Physics, Pisa, Italy

Content

The design and performance of the ATLAS Inner Detector (ID) trigger algorithms running online on the high level trigger (HLT) processor farm for 13 TeV LHC collision data with high pileup are discussed. The HLT ID tracking is a vital component of all physics signatures in the ATLAS trigger, enabling the precise selection of the rare or interesting events needed for physics analysis without overwhelming the online data storage in terms of both size and rate. To cope with the high interaction rates expected in 13 TeV LHC collisions, the ID trigger was redesigned during the 2013-15 long shutdown. The performance of the ID trigger in the 2016 data from 13 TeV LHC collisions has been excellent and exceeded expectations as the interaction multiplicity increased throughout the year. The detailed efficiencies and resolutions of the trigger in a wide range of physics signatures are presented, demonstrating how well the trigger responded under the extreme pileup conditions. The performance of the ID trigger algorithms in the first data from the even higher interaction-multiplicity collisions of 2017 is also presented, illustrating how the ID tracking continues to enable the ATLAS physics program and will continue to do so in the future.
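
As background to the efficiency figures mentioned above, the sketch below shows how a trigger-tracking efficiency is commonly quantified: offline reference tracks are matched to online (HLT) tracks within a small ΔR cone. The track container format, the cone size, and the pT cut are illustrative assumptions, not the ATLAS trigger software.

```python
# Illustrative trigger-tracking efficiency: offline reference tracks matched to online
# (HLT) tracks within a Delta-R cone. Cone size and pT cut are assumed values.
import math

def delta_r(t1, t2):
    """Angular distance between two tracks given as dicts with 'eta' and 'phi'."""
    deta = t1["eta"] - t2["eta"]
    dphi = (t1["phi"] - t2["phi"] + math.pi) % (2 * math.pi) - math.pi
    return math.hypot(deta, dphi)

def tracking_efficiency(offline_tracks, online_tracks, max_dr=0.05, min_pt=1.0):
    """Fraction of offline reference tracks matched by an online track within max_dr."""
    refs = [t for t in offline_tracks if t["pt"] > min_pt]
    matched = sum(
        1 for ref in refs
        if any(delta_r(ref, hlt) < max_dr for hlt in online_tracks)
    )
    return matched / len(refs) if refs else float("nan")

offline = [{"pt": 5.0, "eta": 0.1, "phi": 1.0}, {"pt": 2.0, "eta": -1.2, "phi": -2.5}]
online = [{"pt": 4.8, "eta": 0.11, "phi": 1.01}]
print(f"efficiency = {tracking_efficiency(offline, online):.2f}")  # 0.50
```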

Keywords: Trigger, tracking, algorithms
2:16 pm N-37-3

Verification and Validation of Monte Carlo N-Particle Code 6 (MCNP6) Neutron Spectra Estimates Within Nested Steel Cubes at the White Sands Missile Range (WSMR) Fast Burst Reactor (FBR) (#3381)

A. W. Decker1, S. A. Heider1, S. R. McHale2, M. Millett2, J. A. Clinton3, J. W. McClory3

1 Nuclear Science and Engineering Research Center (NSERC), Defense Threat Reduction Agency (DTRA), West Point, New York, United States of America
2 United States Naval Academy (USNA), Department of Mechanical Engineering, Annapolis, Maryland, United States of America
3 Air Force Institute of Technology (AFIT), Graduate School of Engineering and Management, Wright-Patterson AFB, Ohio, United States of America

Content

A pair of nested steel cubes with wall thicknesses totaling 2 in is modeled in MCNP6.1.1 (MCNP6) and exposed to simulated 235U neutron fission spectra at a distance of 12 ft; the perturbed energy spectrum is recorded within the innermost of the two cubes. The simulation is repeated without the cubes to compute the unshielded neutron flux, and the MCNP6-derived flux spectra are converted into values of ambient dose equivalent (H*(10)) to determine the estimated ratio of neutron protection provided by the steel cubes. This computational design is replicated as a physical experiment using the White Sands Missile Range (WSMR) Fast Burst Reactor (FBR) as the 235U fission source. A Bonner Sphere Spectrometer (BSS) measures neutron count rates, and neutron flux spectra are then unfolded from these data with the Maximum Entropy Deconvolution (MAXED) program, which incorporates the MCNP6 flux spectra as a priori information. Data analysis yields χ²/df values < 1.0 for both the unshielded and shielded configurations, indicating statistically significant agreement between the MAXED spectra and the experimental BSS count rates. Experimental H*(10) values for the unshielded and shielded configurations are calculated, and the ratio of neutron protection is compared against the MCNP6 estimates, with differences < 10%. This experiment supports the further verification and validation of MCNP6 for shielded neutron dose and spectroscopy estimations using simple geometries and materials.
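
The flux-to-dose folding and protection ratio described above can be sketched as follows. All energy bins, fluxes, and fluence-to-H*(10) conversion coefficients in the snippet are placeholder values for illustration (real coefficients would come from standard tables such as ICRP 74); they are not results from this experiment.

```python
# Sketch of the H*(10) folding and protection-ratio calculation described above.
# Energy bins, fluxes and conversion coefficients are placeholder numbers only.
import numpy as np

# Placeholder energy-binned neutron fluxes (n / cm^2 / s) from the MCNP6 tallies.
flux_unshielded = np.array([1.0e5, 4.0e5, 2.5e5, 8.0e4])
flux_shielded   = np.array([6.0e4, 2.2e5, 1.1e5, 2.0e4])

# Placeholder fluence-to-ambient-dose-equivalent coefficients (pSv cm^2) per energy bin.
h10_per_fluence = np.array([10.0, 150.0, 350.0, 420.0])

def ambient_dose_rate(flux, coeff):
    """Fold a binned flux with H*(10) conversion coefficients; returns pSv/s."""
    return float(np.dot(flux, coeff))

h10_free_field = ambient_dose_rate(flux_unshielded, h10_per_fluence)
h10_in_cubes   = ambient_dose_rate(flux_shielded, h10_per_fluence)

# Ratio of neutron protection provided by the nested steel cubes.
print(f"protection ratio = {h10_free_field / h10_in_cubes:.2f}")
```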

Keywords: MCNP, neutron, spectroscopy, modeling, protection
2:34 pm N-37-4

Reconstruction of Irradiation Field from Prompt gamma-rays in Geant4 Based Proton Therapy Simulation (#2416)

T. Aso1, K. Matsushita2, T. Nishio3, S. Kabuki4, T. Sasaki5

1 National Institute of Technology, Toyama College, Electronics and Computer Engineering, Imizu, Toyama, Japan
2 Kyoto Prefectural University of Medicine, Radiology, Kamigyo-ku, Kyoto, Japan
3 Tokyo Women's Medical University, Radiation Oncology, Shinjuku-ku, Tokyo, Japan
4 Tokai University, Radiation Oncology, Isehara, Kanagawa, Japan
5 High Energy Accelerator Research Organization (KEK), Computing Research Center, Tsukuba, Ibaraki, Japan

Content

Reconstruction of the irradiation field from prompt gamma-rays and annihilation gamma-rays is of interest in particle therapy for verifying the range of the therapeutic beam and confirming patient safety. For the purpose of precision medicine in proton therapy, an irradiation field monitoring system is under development in the project “Tumor Response Observation System for Dose-volume delivers Guided Particle Therapy, TROS-DGPT”, which is developing a hybrid beam-online PET and Compton camera system (H-BOLP/CCs). In the TROS-DGPT project, a simulation tool plays an important role in verifying the mutual relationships among the dose and radionuclide distributions inside the patient and the images reconstructed from the signals in the imaging devices. At a previous conference, we reported on the development of imaging-device functions in the Geant4 based particle therapy system simulation framework (PTSIM). In this paper, we report results from a study of irradiation field reconstruction in PTSIM using prompt gamma-rays.

The simulation was performed for a proton beam incident on a target. A simple tubular detector was placed around the target, centered on the isocenter, and scorers were attached to the target and to the detector. The irradiation field was reconstructed with both the filtered back projection (FBP) method and the maximum likelihood expectation maximization (MLEM) method. The reconstructed irradiation fields were compared with the gamma-ray origin distribution and with the dose distribution inside the target.
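
For readers unfamiliar with MLEM, the following minimal one-dimensional sketch shows the iteration used for emission-image reconstruction. The Gaussian toy system matrix and the iteration count are illustrative assumptions; this is not the PTSIM implementation.

```python
# Minimal 1D MLEM sketch for emission-image reconstruction (illustrative only).
import numpy as np

def mlem(counts, system_matrix, n_iter=50):
    """MLEM: image_{k+1} = image_k / sensitivity * A^T (counts / (A image_k))."""
    n_pix = system_matrix.shape[1]
    image = np.ones(n_pix)
    sensitivity = system_matrix.sum(axis=0)          # A^T 1
    for _ in range(n_iter):
        expected = system_matrix @ image             # forward projection
        ratio = np.divide(counts, expected, out=np.zeros_like(counts), where=expected > 0)
        image *= (system_matrix.T @ ratio) / sensitivity
    return image

# Toy setup: a point-like gamma-ray origin blurred by a Gaussian detector response.
n = 64
x = np.arange(n)
true_emission = np.zeros(n)
true_emission[40] = 1000.0
A = np.exp(-0.5 * ((x[:, None] - x[None, :]) / 3.0) ** 2)   # Gaussian system matrix
A /= A.sum(axis=0, keepdims=True)
counts = np.random.default_rng(1).poisson(A @ true_emission).astype(float)
print(np.argmax(mlem(counts, A)))   # should recover a peak near bin 40
```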

The results show that the range in the reconstructed irradiation field is consistent with that of the gamma-ray origin distribution, while it deviates somewhat from the depth-dose distribution, depending on the material of the target.

Keywords: Irradiation Field, Geant4, Prompt gamma, Proton therapy, Range verification
2:52 pm N-37-5

Study of systematic and statistical uncertainties in offset, noise, and gain calibration of the DSSC detector for the European XFEL (#3080)

G. Weidenspointner1, S. Schlee1, A. Castoldi2, 3, C. Guazzoni2, 3, S. Maffessanti2, 3, M. Porro1

1 European X-Ray Free-Electron Laser Facility GmbH, Schenefeld, Schleswig-Holstein, Germany
2 Politecnico di Milano, Milano, Italy
3 INFN, Sezione di Milano, Milano, Italy

Content

The DSSC (DEPFET Sensor with Signal Compression) is a new instrument with non-linear compression of the input signal and with parallel signal processing (filtering, linear amplification, and 8-bit digitization) for all pixels. The DSSC will serve as an ultra-fast, megapixel-sized imaging detector at the European XFEL (X-ray Free-Electron Laser) in Hamburg, Germany, which will begin science operation in September 2017.

The DSSC detector needs to be calibrated for each foreseen operation mode before being employed in scientific experiments. A crucial step in the calibration of the response of each individual detector pixel is the calibration of offset, noise, and gain. Calibration of these three quantities is rendered difficult by the limited 8-bit resolution of the ADCs. In addition, it is necessary to take into account the non-ideal DNL (differential non-linearity) of the ADCs, and the limited accuracy to which the ADC binning in each pixel can be determined. To estimate the detector performance, and to compare with scientific requirements, both systematic and statistical uncertainties in the calibration of offset, noise, and gain must be studied. To do so, we first generated a large collection of realistic dark frame data as well as realistic 55Fe and 109Cd spectra for a range of offset and gain settings, ADC binnings, and number of frames, using our validated simulation tools. In addition, the simulation tools provided the actual values of offset, noise, and gain for each simulated data set. The simulated data were then analyzed using our calibration tools, in which various calibration methods are implemented. Systematic and statistical uncertainties in offset, noise, and gain calibration were quantified by comparing analysis results with the actual values used in the simulation. A review of all results identified the most suitable calibration methods.
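
The simulate-then-calibrate comparison described above can be illustrated, in very reduced form, for the offset and noise of a single pixel read out through an ideal 8-bit ADC. The truth values, frame counts, and the omission of DNL effects are assumptions made purely for illustration and do not reflect the DSSC simulation or calibration tools.

```python
# Sketch of the simulate-then-calibrate comparison for offset and noise of one pixel
# with an ideal 8-bit ADC. Truth values and frame counts are placeholder assumptions.
import numpy as np

rng = np.random.default_rng(42)
TRUE_OFFSET, TRUE_NOISE = 37.4, 1.8      # assumed truth, in ADU
N_FRAMES, N_TRIALS = 2000, 200

def calibrate_dark(n_frames):
    """Estimate offset and noise from 8-bit digitized dark frames of one pixel."""
    signal = rng.normal(TRUE_OFFSET, TRUE_NOISE, size=n_frames)
    adc = np.clip(np.round(signal), 0, 255)          # ideal 8-bit ADC (no DNL modelled)
    return adc.mean(), adc.std(ddof=1)

offsets, noises = np.array([calibrate_dark(N_FRAMES) for _ in range(N_TRIALS)]).T

# Systematic uncertainty: bias of the estimate relative to the simulated truth.
# Statistical uncertainty: spread of the estimate over repeated simulated data sets.
print(f"offset bias = {offsets.mean() - TRUE_OFFSET:+.3f} ADU, scatter = {offsets.std():.3f} ADU")
print(f"noise  bias = {noises.mean()  - TRUE_NOISE:+.3f} ADU, scatter = {noises.std():.3f} ADU")
```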

Keywords: European XFEL, DSSC detector, calibration, system simulation, systematic and statistical uncertainty
3:10 pm N-37-6

A container-based solution to generate HTCondor Batch Systems on demand exploiting heterogeneous Clouds for data analysis (#1195)

D. Spiga1, T. Boccali3, G. Donvito2, D. Salomoni4, A. Ceccanti4, M. Antonacci2, C. Duma4

1 Istituto Nazionale di Fisica Nucleare, Perugia, Italy
2 Istituto Nazionale di Fisica Nucleare, Bari, Italy
3 Istituto Nazionale di Fisica Nucleare, Pisa, Italy
4 Istituto Nazionale di Fisica Nucleare, CNAF, Bologna, Italy

Content

This paper describes the Dynamic On Demand Analysis Service (DODAS), an automated system that simplifies the process of provisioning, creating, managing and accessing a pool of heterogeneous computing and storage resources by generating clusters to run batch systems, thereby implementing the “Batch System as a Service” paradigm.

HTCondor is adopted as the workload manager; a generated cluster can either seamlessly join an already existing HTCondor pool or be deployed as a standalone, auto-scaling batch farm, possibly spanning different geographically distributed computing centers. Apache Mesos manages the CPU, RAM and storage offered by the cloud providers, and Marathon serves as the application framework.

Using the INDIGO-DataCloud technological solutions and recipes, the overall cluster topology and all of the services, such as HTCondor, Mesos, Marathon and the Squid proxy, including any software dependencies, are orchestrated with TOSCA templates and Ansible roles. The key outcome is that a single YAML file can describe any complex setup, leaving the end user with only a trivial configuration file to manage. A high level of automation is obtained, spanning from service deployment and configuration to self-healing and auto-scaling.
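
To illustrate only the "single small configuration file" idea, a minimal Python sketch expanding a user-supplied description into a fuller deployment description might look as follows. All field names are hypothetical; the real DODAS setup relies on TOSCA templates and Ansible roles, not this code.

```python
# Illustrative sketch: a few user-supplied parameters are expanded into a fuller
# (hypothetical) cluster description. Field names are invented for illustration.
import yaml  # PyYAML

user_config = yaml.safe_load("""
cluster_name: analysis-farm
cloud_provider: my-openstack-site
worker_nodes: 20
condor_pool: pool.example.org
""")

def expand_deployment(cfg):
    """Expand the user's minimal configuration into a full cluster description."""
    return {
        "cluster": cfg["cluster_name"],
        "provider": cfg["cloud_provider"],
        "services": ["mesos-master", "marathon", "squid-proxy",
                     {"htcondor-workers": {"count": cfg["worker_nodes"],
                                           "join_pool": cfg["condor_pool"],
                                           "autoscale": True}}],
    }

print(yaml.safe_dump(expand_deployment(user_config), sort_keys=False))
```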

DODAS is built on several INDIGO-DataCloud services, of which the PaaS Orchestrator, the Infrastructure Manager, and the Identity and Access Manager are the most important. The first two represent the resource abstraction layer, a key to open interoperation across cloud solutions; the latter is the pillar of the authentication, authorization and delegation mechanisms adopted to securely tie modern federations to well established computing models.

The paper also describes the successful integration of DODAS with the computing infrastructures of two High Energy Physics experiments: the Compact Muon Solenoid (CMS) installed at the LHC and the Alpha Magnetic Spectrometer (AMS) mounted on the ISS.

Keywords: PaaS Automated Solutions, Dynamic Clusters, Batch System as a service, Data Analysis