Deep learning algorithms for muon momentum estimation in the CMS Trigger System

Overview

The Compact Muon Solenoid (CMS) is a general-purpose detector at the Large Hadron Collider (LHC). During a run, it generates about 40 TB of data per second. Since it is not feasible to read out and store such a vast amount of data, a trigger system selects and stores only interesting events, i.e. events likely to reveal new physics phenomena. The goal of this project is to benchmark the muon momentum estimation performance of Fully Connected Neural Networks (FCNN), Convolutional Neural Networks (CNN), and Graph Neural Networks (GNN) on the prompt and displaced muon samples detected by the CSC (Cathode Strip Chamber) stations at CMS, to aid the trigger system's muon transverse momentum (pT) estimation.

About

In this project, FCNNs, CNNs, and GNNs are trained and evaluated on prompt muon samples (two versions of the same samples with different sampling approaches) and on displaced muon samples, all generated by Monte Carlo simulation. Further details:

  • Target Variables: Three types of predictions are benchmarked with each type of algorithm.
    • 1/Transverse momentum (1/pT), trained with Mean Square Error (MSE) loss
    • Transverse momentum (pT)
    • 4-class classification (0-10 GeV, 10-30 GeV, 30-100 GeV, >100 GeV), trained with Focal Loss
  • Validation Scheme: 10-fold out-of-fold predictions, i.e. the dataset is split into 10 folds; 8 are used for training, 1 as the validation set, and 1 as the holdout, and the holdout fold is rotated 10 times to produce the final scores (a code sketch of this scheme is given after this list).

  • Metrics Tracked:

    • MAE - Mean Absolute Error at a given transverse momentum (pT).
    • MAE/pT - Ratio of Mean Absolute Error to transverse momentum at a given transverse momentum.
    • Accuracy - At a given pT, the muon samples can be divided into two classes: muons with pT above this value and muons with pT below it. Accuracy at a given pT is the binary accuracy for these two classes.
    • F1-score (of class pT>x GeV) - At a given pT, the F1-score of the class of muons with pT above this value.
    • F1-score (of class pT<x GeV) - At a given pT, the F1-score of the class of muons with pT below this value.
    • ROC-AUC score of each class - only for the four-class classification.
  • Preprocessing: Standard scaling of input coordinates
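
The sketch below illustrates the out-of-fold scheme and the threshold-based metrics described above. It is a minimal sketch, not the repository's actual code: build_model, X, and y_pt are illustrative placeholder names, and the Keras-style fit() signature is an assumption.

# Sketch (not repository code): 10-fold out-of-fold predictions and the
# threshold-based metrics described above.
import numpy as np
from sklearn.model_selection import KFold
from sklearn.metrics import accuracy_score, f1_score

def out_of_fold_predictions(X, y_pt, build_model, n_folds=10):
    oof = np.zeros(len(y_pt), dtype=float)
    kf = KFold(n_splits=n_folds, shuffle=True, random_state=0)
    for remaining_idx, holdout_idx in kf.split(X):
        # Of the 9 remaining folds, use roughly 1 fold as a validation set
        # (for early stopping) and the other 8 for training.
        n_val = len(holdout_idx)
        val_idx, train_idx = remaining_idx[:n_val], remaining_idx[n_val:]
        model = build_model()                                   # fresh model per fold
        model.fit(X[train_idx], y_pt[train_idx],
                  validation_data=(X[val_idx], y_pt[val_idx]))  # Keras-style, illustrative
        oof[holdout_idx] = model.predict(X[holdout_idx]).ravel()
    return oof

def metrics_at_threshold(y_true_pt, y_pred_pt, threshold_gev):
    # Binary accuracy and F1-scores obtained by splitting muons at pT = threshold_gev,
    # plus MAE and MAE/pT over the samples given (restrict the inputs to a pT bin
    # to obtain the metrics "at a given pT").
    abs_err = np.abs(y_true_pt - y_pred_pt)
    true_cls = (y_true_pt > threshold_gev).astype(int)   # 1: pT > x GeV, 0: pT < x GeV
    pred_cls = (y_pred_pt > threshold_gev).astype(int)
    return {
        "MAE": abs_err.mean(),
        "MAE/pT": (abs_err / y_true_pt).mean(),
        "Accuracy": accuracy_score(true_cls, pred_cls),
        "F1-score (pT>x)": f1_score(true_cls, pred_cls, pos_label=1),
        "F1-score (pT<x)": f1_score(true_cls, pred_cls, pos_label=0),
    }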

How to use

  1. Make sure that all the libraries mentioned in requirements.txt are installed
  2. Clone the repo
https://github.com/lastnameis-borah/CMS_moun_transverse_momentum_estimation.git
  3. Change the current directory to the cloned directory and execute main.py with the required arguments
python main.py --path='/kaggle/input/cmsnewsamples/new-smaples.csv' \
               --dataset='prompt_new' \
               --predict='pT' \
               --model='FCNN' \
               --epochs=50 \
               --batch_size=512 \
               --folds="0,1,2,3,4,5,6,7,8,9" \
               --results='/kaggle/working/results'

Note: Give absolute paths as arguments.
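
For instance, a run of the four-class classification target with a CNN on the displaced samples would presumably look like the command below; the CSV and results paths here are placeholders, to be replaced with the actual absolute paths.

python main.py --path='/absolute/path/to/displaced_samples.csv' \
               --dataset='displaced' \
               --predict='pT_classes' \
               --model='CNN' \
               --epochs=50 \
               --batch_size=512 \
               --folds="0,1,2,3,4,5,6,7,8,9" \
               --results='/absolute/path/to/results'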

Arguments

  1. path - absolute path of the CSV containing the coordinates of the generated muon samples
  2. dataset - which samples are being used (i.e. prompt_new, prompt_old, or displaced)
  3. predict - target variable (i.e. pT, 1/pT, or pT_classes)
  4. model - architecture to use (i.e. FCNN, CNN, or GNN)
  5. epochs - maximum number of epochs to train; if the score converges, training may stop earlier due to early stopping
  6. batch_size - number of samples in a batch
  7. folds - a comma-separated string of the fold indices on which results are wanted (e.g. "0,1,2,3,4,5,6,7,8,9")
  8. results - path of the directory in which to save the results
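
For reference, below is a minimal argparse sketch consistent with the options above; the actual parser in main.py may differ, and the defaults shown are assumptions taken from the example command.

# Illustrative parser matching the documented command-line options
# (not necessarily main.py's actual argument handling).
import argparse

def parse_args():
    parser = argparse.ArgumentParser(description="Muon pT estimation with FCNN/CNN/GNN")
    parser.add_argument("--path", required=True, help="absolute path of the samples CSV")
    parser.add_argument("--dataset", choices=["prompt_new", "prompt_old", "displaced"], required=True)
    parser.add_argument("--predict", choices=["pT", "1/pT", "pT_classes"], required=True)
    parser.add_argument("--model", choices=["FCNN", "CNN", "GNN"], required=True)
    parser.add_argument("--epochs", type=int, default=50)
    parser.add_argument("--batch_size", type=int, default=512)
    parser.add_argument("--folds", default="0,1,2,3,4,5,6,7,8,9",
                        help="comma-separated fold indices")
    parser.add_argument("--results", required=True, help="directory in which to save the results")
    args = parser.parse_args()
    args.fold_list = [int(f) for f in args.folds.split(",")]  # "0,1,2" -> [0, 1, 2]
    return args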

Results

Regressing 1/pT

[Plots of MAE/pT, MAE, Accuracy, F1-score (pT>x), and F1-score (pT<x) for Prompt Muons Samples-1, Prompt Muons Samples-2, and Displaced Muons Samples.]

Regressing pT

[Plots of MAE/pT, MAE, Accuracy, F1-score (pT>x), and F1-score (pT<x) for Prompt Muons Samples-1, Prompt Muons Samples-2, and Displaced Muons Samples.]

Four-class classification

Per-class ROC-AUC scores:

  • Prompt Muons Samples-1

    Model   0-10 GeV   10-30 GeV   30-100 GeV   >100 GeV
    FCNN    0.990      0.970       0.977        0.969
    CNN     0.991      0.973       0.980        0.983

  • Prompt Muons Samples-2

    Model   0-10 GeV   10-30 GeV   30-100 GeV   >100 GeV
    FCNN    0.990      0.975       0.981        0.958
    CNN     0.991      0.976       0.983        0.983

  • Displaced Muons Samples

    Model   0-10 GeV   10-30 GeV   30-100 GeV   >100 GeV
    FCNN    0.944      0.898       0.910        0.839
    CNN     0.958      0.907       0.932        0.910
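
For reference, per-class ROC-AUC of this kind can be computed with scikit-learn in a one-vs-rest fashion; the sketch below assumes softmax probabilities of shape (n_samples, 4) and illustrative names, and is not taken from the repository.

# Sketch (not repository code): per-class ROC-AUC for the four pT classes.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

def per_class_roc_auc(y_true_cls, y_prob, n_classes=4):
    # One-vs-rest ROC-AUC for each class (0-10, 10-30, 30-100, >100 GeV).
    y_true_onehot = label_binarize(y_true_cls, classes=list(range(n_classes)))
    return {c: roc_auc_score(y_true_onehot[:, c], y_prob[:, c]) for c in range(n_classes)}
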
Owner
anuragB
Petroleum Engineering Undergrad. IITM Data Science Undergrad.