IEEE 2017 NSS/MIC/RTSD ControlCenter

Online Program Overview Session: BDW-04

Big Data Workshop

Session chair: Ren-Yuan Zhu, California Institute of Technology
 
Shortcut: BDW-04
Date: Friday, October 27, 2017, 16:00
Room: Hannover A&B
Session type: Workshop

Contents

4:00 pm BDW-04-1

Prediction Machines based on Deep Sparse Generative Autoencoders (#4310)

B. Yoon1, P. F. Schultz2, G. T. Kenyon1, 2

1 Los Alamos National Laboratory, CCS, Los Alamos, New Mexico, United States of America
2 New Mexico Consortium, Los Alamos, New Mexico, United States of America

Content

Although the vast majority of synapses in the cerebral cortex convey lateral or top-down feedback, most convolutional neural networks (CNNs) are based on a feed-forward architecture. In a similar vein, although cortical receptive fields are richly dynamic and combine temporal with other types of information, most CNNs employ purely static representations in which temporal information is ignored. Finally, whereas CNNs trained with backprop require large amounts of labeled training data, biological systems learn primarily from raw, unlabeled sensory inputs. We hypothesize that fundamental improvements in the performance of neurally-inspired computer algorithms can be achieved by incorporating lateral and top-down feedback, along with spatiotemporal dynamics, into recurrent convolutional neural networks (RCNNs) that construct their internal representations so as to model the environment in which they are embedded.

Deep Sparse Generative Autoencoders (DSGAs) provide a neurally-plausible means for constructing artificial neural networks that incorporate lateral and top-down feedback in an essential way. DSGAs utilize attractor-driven dynamics to encode both spatial and temporal context, and learn their internal representations in an unsupervised manner from raw environmental inputs. In addition to unsupervised learning, the internal representations in DSGAs can be influenced by limited amounts of labeled training data and/or reward signals, thereby biasing feature selection toward behaviorally relevant stimuli. The sparse representations produced by DSGAs can serve as highly pre-processed input to conventional CNN/RCNN classifiers, allowing the latter to be trained with much less labeled training data than is typically the case.
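One standard formulation of such attractor-driven sparse coding is the Locally Competitive Algorithm (LCA), in which lateral inhibition drives the network toward a sparse fixed point. The sketch below is a minimal single-layer LCA encoder in NumPy; the layer sizes, threshold, and time constant are illustrative assumptions, not parameters of the DSGAs described here.

```python
import numpy as np

def soft_threshold(u, lam):
    """Soft threshold turning membrane potentials into sparse activations."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca_encode(x, Phi, lam=0.1, tau=10.0, n_steps=200):
    """Sparse-code input x with dictionary Phi via LCA attractor dynamics.

    x   : (n_input,) input vector (e.g. a whitened image patch)
    Phi : (n_input, n_neurons) dictionary with unit-norm columns
    """
    drive = Phi.T @ x                       # feed-forward drive, b = Phi^T x
    G = Phi.T @ Phi - np.eye(Phi.shape[1])  # lateral inhibition weights
    u = np.zeros(Phi.shape[1])              # membrane potentials
    for _ in range(n_steps):
        a = soft_threshold(u, lam)          # sparse firing rates
        u += (drive - u - G @ a) / tau      # leaky integration toward attractor
    return soft_threshold(u, lam)

# Toy usage: random unit-norm dictionary, random input
rng = np.random.default_rng(0)
Phi = rng.normal(size=(64, 256))
Phi /= np.linalg.norm(Phi, axis=0)
a = lca_encode(rng.normal(size=64), Phi)
print("active neurons:", np.count_nonzero(a), "of", a.size)
```

The fixed point of these dynamics is a code in which only a small fraction of units are active, each representing the input in the context of its laterally connected neighbors.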

Here, we describe a Sparse Prediction Machine (SPM) based on a new class of DSGAs for processing video feeds from mobile platforms such as drones, autonomous vehicles, and cell phones, as well as for learning “dark knowledge” from dynamic experimental data. The goal of an SPM is to predict future states of a system from a sequence of previous states, or in the case of video, to predict a subsequent frame from previous frames. We used PetaVision [https://github.com/PetaVision/OpenPV], an open source high-performance neural simulation toolbox, to implement a 4-layer SPM applied to ImageNet video.
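To make the prediction objective concrete, the toy sketch below sparse-codes each frame of a synthetic video and regresses the next frame on the current code. The one-step thresholding encoder and ridge-regression readout are illustrative stand-ins, not the 4-layer PetaVision architecture described above.

```python
import numpy as np

rng = np.random.default_rng(1)

def sparse_code(frame, Phi, lam=0.1):
    """Stand-in one-step encoder: feed-forward drive plus a sparsifying
    threshold. In the full model this would be the attractor-driven DSGA code."""
    u = Phi.T @ frame.ravel()
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

# Toy "video": T frames of 8x8 pixels containing a drifting pattern plus noise
T, H, W, n_code = 200, 8, 8, 128
t = np.arange(T)[:, None, None]
frames = np.sin(0.3 * t + np.arange(W)) * np.ones((T, H, W))
frames += 0.05 * rng.normal(size=frames.shape)

Phi = rng.normal(size=(H * W, n_code))
Phi /= np.linalg.norm(Phi, axis=0)

codes = np.stack([sparse_code(f, Phi) for f in frames])     # (T, n_code)

# Learn to predict frame t+1 from the sparse code of frame t (ridge regression)
X, Y = codes[:-1], frames[1:].reshape(T - 1, -1)
W_pred = np.linalg.solve(X.T @ X + 1e-2 * np.eye(n_code), X.T @ Y)

pred = codes[-2] @ W_pred      # predict the final frame (in-sample, for brevity)
err = np.mean((pred - frames[-1].ravel()) ** 2)
print(f"next-frame MSE: {err:.4f}")
```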

Keywords: sparse coding, video
4:36 pm BDW-04-2

The HEP.TrkX Project (#1201)

P. Calafiura2, S. Farrell2, M. Mudigonda2, M. Prabhat2, D. Anderson1, J. Bendavid1, M. Spiropulu1, J.-R. Vlimant1, S. Zheng1, G. Cerati3, L. Gray3, J. Kowalkowski3, P. Spentzouris3, A. Tsaris3

1 California Institute of Technology, Pasadena, California, United States of America
2 Lawrence Berkeley National Laboratory (LBNL), Berkeley, California, United States of America
3 Fermi National Accelerator Laboratory, Batavia, Illinois, United States of America

Content

Charged particle track reconstruction in dense environments such as the detectors of the HL-LHC is a challenging pattern recognition problem. Traditional tracking algorithms such as the combinatorial Kalman Filter have been used with great success in HEP experiments for years. However, these state-of-the-art techniques are inherently sequential and scale quadratically or worse with increased detector occupancy. To process 60M charged particle tracks per second at the HL-LHC, tracking algorithms will need to be one order of magnitude faster and run in parallel on one order of magnitude more processing units (cores/threads). 
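For reference, the core of a (non-combinatorial) Kalman filter track fit is a short sequential loop over detector layers, which is what makes the classical approach hard to parallelize. The sketch below fits a single straight-line track in one measurement coordinate; the motion model, noise levels, and layer geometry are illustrative assumptions, and the combinatorial hit-to-track assignment that dominates the real cost is omitted.

```python
import numpy as np

def kalman_track_fit(z_layers, y_hits, sigma_meas=0.1):
    """Minimal Kalman filter for a straight-line track y = y0 + slope * z.

    z_layers : positions of detector layers along the beam axis
    y_hits   : measured hit coordinate at each layer
    Real trackers add material effects, a magnetic field, and a
    combinatorial search over candidate hits at each layer.
    """
    x = np.array([y_hits[0], 0.0])         # state: (y, dy/dz)
    P = np.diag([sigma_meas**2, 1.0])      # state covariance (loose slope prior)
    H = np.array([[1.0, 0.0]])             # we measure y only
    R = np.array([[sigma_meas**2]])
    for z_prev, z_next, y in zip(z_layers[:-1], z_layers[1:], y_hits[1:]):
        F = np.array([[1.0, z_next - z_prev], [0.0, 1.0]])  # propagate to next layer
        x = F @ x
        P = F @ P @ F.T
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)        # Kalman gain
        x = x + (K @ (y - H @ x)).ravel()                   # update with the hit
        P = (np.eye(2) - K @ H) @ P
    return x, P

# Toy usage: 10 layers, true track y = 0.5 + 0.2 z, Gaussian hit smearing
rng = np.random.default_rng(2)
z = np.linspace(0.0, 9.0, 10)
hits = 0.5 + 0.2 * z + rng.normal(scale=0.1, size=z.size)
state, _ = kalman_track_fit(z, hits)
y0 = state[0] - state[1] * z[-1]           # extrapolate fitted state back to z = 0
print(f"fitted intercept={y0:.3f}, slope={state[1]:.3f}  (truth: 0.5, 0.2)")
```

Because each update depends on the previous one, and each track candidate must be filtered against many hit combinations, the work grows rapidly with occupancy; this sequential dependency is the bottleneck the project targets.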

The HEP.TrkX project is a pilot project that aims to identify and develop cross-experiment solutions based on machine learning algorithms for track reconstruction. Machine learning algorithms offer considerable potential for this problem thanks to their ability to model complex non-linear data dependencies, to learn effective representations of high-dimensional data through training, and to parallelize easily on high-throughput architectures such as FPGAs or GPUs.

This contribution will describe our initial explorations into this new idea space. We will discuss the use of recurrent (LSTM) and convolutional neural networks to find and fit charged particle tracks in simulated data from a realistic HL-LHC tracking detector, and their applicability to online and offline data processing.
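As a structural illustration of the recurrent approach, the sketch below runs an LSTM cell over the hits of a track candidate, layer by layer, with a linear readout predicting the next hit position. The cell size, hit parameterization, and weights (left untrained here, purely to show the data flow) are illustrative assumptions; in practice the weights would be learned by backpropagation on simulated tracks.

```python
import numpy as np

rng = np.random.default_rng(3)

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell step; gates stacked as [input, forget, cell, output]."""
    n = h.size
    z = W @ x + U @ h + b
    i = 1 / (1 + np.exp(-z[:n]))           # input gate
    f = 1 / (1 + np.exp(-z[n:2 * n]))      # forget gate
    g = np.tanh(z[2 * n:3 * n])            # candidate cell state
    o = 1 / (1 + np.exp(-z[3 * n:]))       # output gate
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Untrained weights, for illustration only
n_in, n_hid = 2, 16                        # hit = (y, z) per detector layer
W = rng.normal(scale=0.1, size=(4 * n_hid, n_in))
U = rng.normal(scale=0.1, size=(4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
W_out = rng.normal(scale=0.1, size=(n_in, n_hid))  # readout: predicted next hit

# Feed a candidate track (one hit per layer) through the recurrent cell
hits = np.column_stack([0.5 + 0.2 * np.arange(8), np.arange(8.0)])
h, c = np.zeros(n_hid), np.zeros(n_hid)
for hit in hits:
    h, c = lstm_step(hit, h, c, W, U, b)
print("predicted next-hit (y, z):", W_out @ h)
```

Unlike the sequential Kalman loop above, many such track candidates can be evaluated in parallel as batched matrix operations, which is what makes this formulation attractive for GPU and FPGA deployment.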

Keywords: Tracking, HL-LHC, Deep Learning
5:12 pm BDW-04-3

Group discussions