Deep learning algorithms for muon momentum estimation in the CMS Trigger System

Overview

The Compact Muon Solenoid (CMS) is a general-purpose detector at the Large Hadron Collider (LHC). During a run, it generates about 40 TB of data per second. Since it is not feasible to read out and store such a vast amount of data, a trigger system selects and stores only interesting events, i.e. events likely to reveal new physics phenomena. The goal of this project is to benchmark the muon momentum estimation performance of Fully Connected Neural Networks (FCNNs), Convolutional Neural Networks (CNNs), and Graph Neural Networks (GNNs) on the prompt and displaced muon samples detected by the CSC stations at CMS, in order to aid the trigger system's estimation of muon transverse momentum (pT).

About

In this project, FCNNs, CNNs, and GNNs are trained and evaluated on prompt muon samples (two versions of the same samples with different sampling approaches) and on displaced muon samples generated by Monte Carlo simulation. Further details:

  • Target Variables: Three types of predictions are benchmarked with each type of algorithm (a hedged sketch of the target construction and losses appears after this list).

    Target                                                                Loss
    1/Transverse momentum (1/pT)                                          Mean Square Error (MSE)
    Transverse momentum (pT)                                              Mean Square Error (MSE)
    4-class classification (0-10 GeV, 10-30 GeV, 30-100 GeV, >100 GeV)    Focal Loss
  • Validation Scheme: 10-fold out-of-fold predictions (the dataset is split into 10 folds; in each round, 8 folds are used for training, 1 as the validation set, and 1 as the holdout. The holdout fold is rotated through all 10 positions, and the final scores are computed on the resulting out-of-fold predictions). A minimal sketch of this scheme, together with the tracked metrics, follows this list.

  • Metrics Tracked:

    • MAE - Mean Absolute Error at a given transverse momentum (pT).
    • MAE/pT - ratio of the Mean Absolute Error to the transverse momentum, at a given pT.
    • Accuracy - at a given pT, the muon samples can be split into two classes: muons with pT above this value and muons with pT below it. Accuracy at a given pT is the classification accuracy over these two classes.
    • F1-score (of class pT > x GeV) - at a given pT, the F1-score of the class of muons with pT above this value.
    • F1-score (of class pT < x GeV) - at a given pT, the F1-score of the class of muons with pT below this value.
    • ROC-AUC score of each class - only in the case of four-class classification.
  • Preprocessing: Standard scaling of input coordinates
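
The targets and losses above can be illustrated with the following minimal sketch. It is not code from this repository: the target-building helper, the binning edges' handling, and the focal-loss variant (a standard multi-class form with gamma = 2 and no class weights) are assumptions.

import numpy as np
import torch
import torch.nn.functional as F

def make_targets(pt_gev: np.ndarray):
    """Build the three benchmark targets from generator-level pT in GeV (hypothetical helper)."""
    inv_pt = 1.0 / pt_gev                                   # target for 1/pT regression (MSE)
    pt = pt_gev                                             # target for direct pT regression
    # 4-class label: 0-10 GeV, 10-30 GeV, 30-100 GeV, >100 GeV
    pt_class = np.digitize(pt_gev, bins=[10.0, 30.0, 100.0])
    return inv_pt, pt, pt_class

def focal_loss(logits: torch.Tensor, labels: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """Multi-class focal loss: down-weights well-classified examples relative to cross-entropy."""
    log_p = F.log_softmax(logits, dim=1)
    log_pt = log_p.gather(1, labels.unsqueeze(1)).squeeze(1)   # log-probability of the true class
    pt = log_pt.exp()
    return (-((1.0 - pt) ** gamma) * log_pt).mean()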

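The 10-fold out-of-fold scheme, the standard scaling of the input coordinates, and the threshold-based metrics could be reproduced roughly as in the sketch below. It is an assumption-laden outline: the build_and_train callback, the model's predict method, and the exact fold rotation are illustrative, not taken from main.py.

import numpy as np
from sklearn.model_selection import KFold
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_absolute_error, accuracy_score, f1_score

def out_of_fold_evaluation(X, pt_true, build_and_train, threshold_gev=20.0):
    """For each of 10 rounds: 8 folds train, 1 fold validates, 1 fold is the holdout.
    The holdout predictions from all rounds are pooled before scoring."""
    kf = KFold(n_splits=10, shuffle=True, random_state=0)
    folds = list(kf.split(X))
    oof_pred = np.zeros(len(X), dtype=float)
    for i, (_, holdout_idx) in enumerate(folds):
        val_idx = folds[(i + 1) % 10][1]                    # next fold serves as validation set
        train_idx = np.setdiff1d(np.arange(len(X)),
                                 np.concatenate([holdout_idx, val_idx]))
        scaler = StandardScaler().fit(X[train_idx])         # standard scaling of input coordinates
        model = build_and_train(scaler.transform(X[train_idx]), pt_true[train_idx],
                                scaler.transform(X[val_idx]), pt_true[val_idx])
        oof_pred[holdout_idx] = model.predict(scaler.transform(X[holdout_idx])).ravel()

    # Metrics at a chosen pT threshold (the "given pT" in the list above)
    y_true = pt_true > threshold_gev
    y_pred = oof_pred > threshold_gev
    return {
        "MAE": mean_absolute_error(pt_true, oof_pred),
        "MAE/pT": float(np.mean(np.abs(pt_true - oof_pred) / pt_true)),
        "Accuracy": accuracy_score(y_true, y_pred),
        "F1 (pT > x)": f1_score(y_true, y_pred),
        "F1 (pT < x)": f1_score(~y_true, ~y_pred),
    }
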
How to use

  1. Make sure that all the libraries listed in requirements.txt are installed.
  2. Clone the repo:
git clone https://github.com/lastnameis-borah/CMS_moun_transverse_momentum_estimation.git
  3. Change the current directory to the cloned repository and execute main.py with the required arguments:
python main.py --path='/kaggle/input/cmsnewsamples/new-smaples.csv' \
               --dataset='prompt_new' \
               --predict='pT' \
               --model='FCNN' \
               --epochs=50 \
               --batch_size=512 \
               --folds="0,1,2,3,4,5,6,7,8,9" \
               --results='/kaggle/working/results'

Note: Give absolute paths as arguments.

Arguments

  1. path - path to the CSV file containing the coordinates of the generated muon samples
  2. dataset - the samples being used (i.e. prompt_new, prompt_old, or displaced)
  3. predict - target variable (i.e. pT, 1/pT, or pT_classes)
  4. model - architecture to use (i.e. FCNN, CNN, or GNN)
  5. epochs - maximum number of epochs to train; with early stopping, training may end sooner if the score converges
  6. batch_size - number of samples in a batch
  7. folds - a comma-separated string specifying the folds for which results are produced (e.g. "0,1,2,3,4,5,6,7,8,9")
  8. results - path of the directory in which to save the results
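
For reference, a command-line interface with these arguments could be declared as in the following sketch. This is a plausible outline of how main.py might parse them, not the repository's actual parser; the defaults and choices are assumptions based on the list above.

import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="Muon pT estimation with FCNN / CNN / GNN")
    parser.add_argument("--path", required=True,
                        help="absolute path to the CSV of generated muon sample coordinates")
    parser.add_argument("--dataset", required=True, choices=["prompt_new", "prompt_old", "displaced"])
    parser.add_argument("--predict", required=True, choices=["pT", "1/pT", "pT_classes"])
    parser.add_argument("--model", required=True, choices=["FCNN", "CNN", "GNN"])
    parser.add_argument("--epochs", type=int, default=50,
                        help="maximum epochs; early stopping may end training sooner")
    parser.add_argument("--batch_size", type=int, default=512)
    parser.add_argument("--folds", default="0,1,2,3,4,5,6,7,8,9",
                        help="comma-separated fold indices to evaluate")
    parser.add_argument("--results", required=True, help="absolute path of the output directory")
    return parser.parse_args()

if __name__ == "__main__":
    args = parse_args()
    fold_ids = [int(f) for f in args.folds.split(",")]   # e.g. [0, 1, ..., 9]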

Results

Regressing 1/pT

[Plots/table comparing MAE/pT, MAE, Accuracy, F1-score (pT > x), and F1-score (pT < x) across Prompt Muons Samples-1, Prompt Muons Samples-2, and Displaced Muons Samples.]

Regressing pT

[Plots/table comparing MAE/pT, MAE, Accuracy, F1-score (pT > x), and F1-score (pT < x) across Prompt Muons Samples-1, Prompt Muons Samples-2, and Displaced Muons Samples.]

Four class classification

  • Prompt Muons Samples-1

    Model   0-10 GeV   10-30 GeV   30-100 GeV   >100 GeV
    FCNN    0.990      0.970       0.977        0.969
    CNN     0.991      0.973       0.980        0.983

  • Prompt Muons Samples-2

    Model   0-10 GeV   10-30 GeV   30-100 GeV   >100 GeV
    FCNN    0.990      0.975       0.981        0.958
    CNN     0.991      0.976       0.983        0.983

  • Displaced Muons Samples

    Model   0-10 GeV   10-30 GeV   30-100 GeV   >100 GeV
    FCNN    0.944      0.898       0.910        0.839
    CNN     0.958      0.907       0.932        0.910
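
The metric tracked for the four-class task is the ROC-AUC score of each class (see Metrics Tracked above). A one-vs-rest ROC-AUC per class can be computed as in this short sketch; the function, variable names, and probability format are illustrative, not taken from this repository.

import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

def per_class_roc_auc(y_true, y_prob,
                      class_names=("0-10 GeV", "10-30 GeV", "30-100 GeV", ">100 GeV")):
    """y_true: integer labels 0-3; y_prob: (N, 4) array of predicted class probabilities."""
    y_onehot = label_binarize(y_true, classes=list(range(len(class_names))))
    return {name: roc_auc_score(y_onehot[:, i], y_prob[:, i])
            for i, name in enumerate(class_names)}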