WlWl Polarization

Overview

Measure the WWjj polarization fractions. Provided samples: sm, sm_lltt, sm_lttl.

Paper: arXiv:2109.09924
Notice: This code can only be used for inference. If you want to train your own model, please contact [email protected].

Requirements

  • Both Linux and Windows are supported.
  • 64-bit Python 3.6 or higher (3.8 recommended).
  • TensorFlow 2.x (2.6 recommended), NumPy (1.19.5 recommended), Matplotlib (3.4.2 recommended).
  • One or more high-end NVIDIA GPUs with at least 4 GB of memory, NVIDIA drivers, the CUDA toolkit (11.4 recommended), and cuDNN (8.2.x recommended).
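
A quick way to verify the environment before running anything (a minimal sketch; it only prints version strings and lists visible GPUs, and nothing in it is specific to this repository):

   # check_env.py -- sanity check for the versions listed above (illustrative only)
   import sys

   import numpy as np
   import matplotlib
   import tensorflow as tf

   print("Python     :", sys.version.split()[0])   # expect 3.6+ (3.8 recommended)
   print("TensorFlow :", tf.__version__)           # expect 2.x (2.6 recommended)
   print("NumPy      :", np.__version__)           # 1.19.5 recommended
   print("Matplotlib :", matplotlib.__version__)   # 3.4.2 recommended
   print("GPUs       :", tf.config.list_physical_devices("GPU"))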

Preparing dataset

The raw dataset needs to be transformed before it can be imported into the model.

  • You need to create a raw dataset (a test dataset is provided in ./raw/); the data structure is as follows, and a minimal parsing sketch is given after this list:
The file has N events:
   Event 1
   Event 2
   ...
   Event N
Each event occupies six lines, in this order:
   1. first lepton 
   2. second lepton 
   3. first FB jet 
   4. second FB jet 
   5. MET 
   6. remaining jet 
Each line has the following five columns:
   1.ParticleID  2.Px  3.Py  4.Pz  5.E
The format of an event in the dataset is as follows:
   ...
   -1.0  166.023   5.35817   10.784    166.459
   1.0   -36.1648  -64.1513  -28.9064  79.113
   7.0   -11.3233  -39.6316  -318.178  320.85
   7.0   -34.2795  22.0472   622.79    624.128
   0.0   -22.6711  52.8976   -422.567  426.468
   6.0   -49.9758  29.3283   274.517   294.098
   ...

ParticleID: 1 for electron, 2 for muon, 3 for tau, 4 for b-jet, 5 for normal jet, 0 for MET, 6 for remaining jets, 7 for forward/backward jet; the sign encodes the electric charge.

  • Run python create_dataset.py YOUR_RAWDATA_PATH; it will create a file with the same name as YOUR_RAWDATA_PATH in ./dataset/.
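
For reference, here is a minimal sketch of reading the raw format described above into a NumPy array. It only illustrates the 6-lines-per-event, 5-column layout; it is not the actual create_dataset.py:

   # read_raw.py -- illustrative reader for the 6-line-per-event raw format
   import numpy as np

   def load_raw_events(path):
       """Return an array of shape (N, 6, 5): 6 particles per event
       (lepton 1, lepton 2, FB jet 1, FB jet 2, MET, remaining jet),
       5 columns per particle (ParticleID, Px, Py, Pz, E)."""
       rows = np.loadtxt(path)                       # shape (6 * N, 5)
       assert rows.shape[0] % 6 == 0, "file length must be a multiple of 6 lines"
       return rows.reshape(-1, 6, 5)

   events = load_raw_events("./raw/sm.dat")
   print(events.shape)       # (N, 6, 5)
   print(events[0])          # first event, laid out as in the example above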

Using pre-trained models

Once the dataset has been prepared, the pre-trained model can be used to predict the polarization fractions.

  • Pre-trained weights are placed in ./weights/.
  • Run python inference.py --dataset YOUR_TRADATA_NAME --model_name <MODEL_NAME> --energy_level <ENERGY_LEVEL>; it will output the polarization fractions.

Notice: <ENERGY_LEVEL> should match the collision energy of the events (e.g. 13 in the example below).
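
For orientation only, this is roughly what loading a saved TensorFlow model and running it over a prepared dataset looks like; the weight path, dataset file name, and output handling below are assumptions for illustration, and the real entry point is inference.py:

   # sketch only: generic Keras-style inference (paths and file formats are assumed)
   import numpy as np
   import tensorflow as tf

   model = tf.keras.models.load_model("./weights/TRANS")   # hypothetical weight path
   data = np.load("./dataset/sm.npy")                      # hypothetical prepared dataset
   pred = model.predict(data)                              # per-event model output
   print("mean output per class:", pred.mean(axis=0))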

Example

Run the following commands to get the polarization fractions for the standard model sample:

python create_dataset.py ./raw/sm.dat
python inference.py --dataset sm --model_name TRANS --energy_level 13

Citation

@misc{li2021polarization,
    title={Polarization measurement for the dileptonic channel of $W^+ W^-$ scattering using generative adversarial network},
    author={Jinmian Li and Cong Zhang and Rao Zhang},
    year={2021},
    eprint={2109.09924},
    archivePrefix={arXiv},
    primaryClass={hep-ph}
}