HODEmu

HODEmu is both an executable and a Python library, based on Ragagnin 2021 (in prep.), that emulates satellite abundance as a function of the cosmological parameters Omega_m, Omega_b, sigma_8, h_0 and of redshift.

The emulator is trained on the satellite abundance of Magneticum Box1a/mr simulations spanning 15 cosmologies (see Table 1 of the paper), using all satellites above a stellar mass cut of M* > 2·10^11 M_⊙. Use Eq. 3 of the paper to rescale the result to a stellar mass cut of 10^10 M_⊙.

The emulator has been trained with scikit-learn GPR (Gaussian process regression); however, the class implemented in hod_emu.py is a stand-alone port and does not need sklearn to be installed.
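For context, evaluating a trained GPR reduces to linear algebra on stored training data, which is why only scipy is needed at prediction time. As an illustration (this is the textbook GPR posterior mean, not a formula taken from the paper), the prediction at a test point $x_*$ is

$$\mu(x_*) = k(x_*, X)\left[K(X,X) + \sigma_n^2 I\right]^{-1}\mathbf{y},$$

where $X$ and $\mathbf{y}$ are the training inputs and targets, $K$ the trained kernel matrix and $\sigma_n^2$ the noise variance; presumably _hod_emu_sklearn_gpr_serialized.py stores exactly the quantities needed to evaluate this expression.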

Figure: average satellite abundance for two Magneticum Box1a/mr simulations, from Ragagnin et al. 2021.

TOC:

- Install
- Example 1: Obtain normalisation, log-slope and Gaussian scatter of the Ns-M relation
- Example 2: Produce a mock catalog of galaxies

Install

You can either (1) download the files hod_emu.py and _hod_emu_sklearn_gpr_serialized.py, or (2) install it with python -m pip install git+https://github.com/aragagnin/HODEmu. The package depends only on scipy. The file hod_emu.py can be executed from your command-line interface by running ./hod_emu.py in the installation folder.

See this Jupyter notebook for a guided usage of the Python interface: https://github.com/aragagnin/HODEmu/blob/main/examples.ipynb

Example 1: Obtain normalisation, log-slope and Gaussian scatter of the Ns-M relation

The following command will output, respectively, the normalisation A, the log-slope \beta, the log-scatter \sigma, and their respective standard deviations from the emulator. Since the emulator has been trained on the residuals of the power-law dependency in Eq. 6, the errors are, respectively, the standard deviations of log-A, log-\beta, and log-\sigma. Note that --delta can only be 200c or vir, as the paper emulates only these two overdensities.

 ./hod_emu.py  200c  .27  .04   0.8  0.7   0.0 #overdensity omega_m omega_b sigma8 h0 redshift
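For instance, to query the Delta_vir emulator at redshift 0.5 (assuming the same positional-argument order, with the overdensity switched to vir):

 ./hod_emu.py  vir  .27  .04   0.8  0.7   0.5 #overdensity omega_m omega_b sigma8 h0 redshift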

Below we use hod_emu as a Python library to plot the Ns-M relation. First we call hod_emu.get_emulator_m200c() to obtain an instance of the Emulator class trained on Delta_200c, then the function emu.predict_A_beta_sigma(input) to retrieve A, \beta and \sigma.

Note that input can contain a number N of data points (in this example only one), so it is an N x 5 numpy array and the return value is an N x 3 numpy array. The parameter emulator_std=True will additionally return an N x 3 numpy array with the corresponding emulator standard deviations. A batched evaluation over several data points is sketched at the end of this example.
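For reference, A, \beta and \sigma parametrise a power law in halo mass with log-normal scatter; the form below is inferred from the example code (pivot mass $5\cdot10^{14} M_\odot$), see Eq. 6 of the paper for the exact definition:

$$N_s(M) = A \left(\frac{M}{5\cdot10^{14} M_\odot}\right)^{\beta}, \qquad \text{with log-scatter } \sigma .$$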

import numpy as np
import matplotlib.pyplot as plt
import hod_emu
Om0, Ob0, s8, h0, z = 0.3, 0.04, 0.8, 0.7, 0.9

input = [[Om0, Ob0, s8, h0, 1./(1.+z)]] # 2d array, one row per data point; note the last entry is the scale factor a = 1/(1+z)

emu = hod_emu.get_emulator_m200c() # use get_emulator_mvir to obtain the emulator within Delta_vir

A, beta, sigma  =  emu.predict_A_beta_sigma(input).T #the function outputs a 1x3 matrix 

masses = np.logspace(14.5,15.5,20)
Ns = A*(masses/5e14)**beta 

plt.plot(masses,Ns)
plt.fill_between(masses, Ns*(1.-sigma), Ns*(1.+sigma),alpha=0.2)
plt.xlabel(r'$M_{\rm{halo}}$')
plt.ylabel(r'$N_s$')
plt.title(r'$M_\bigstar>2\cdot10^{11}M_\odot \ \ \ \mathtt{and} \ \ \ \Delta_{200c}$')
plt.xscale('log')
plt.yscale('log')

params_tuple, stds_tuple  =  emu.predict_A_beta_sigma(input, emulator_std=True) # here we also ask for the emulator standard deviations

A, beta, sigma = params_tuple.T
error_logA, error_logbeta, error_logsigma = stds_tuple.T

print('A: %.3e, log-std A: %.3e'%(A[0], error_logA[0]))
print('B: %.3e, log-std beta: %.3e'%(beta[0], error_logbeta[0]))
print('sigma: %.3e, log-std sigma: %.3e'%(sigma[0], error_logsigma[0]))

This will show the following figure:

Figure: Ns-M relation produced by HODEmu.

And print the following output:

A: 1.933e+00, log-std A: 1.242e-01
B: 1.002e+00, log-std beta: 8.275e-02
sigma: 6.723e-02, log-std sigma: 2.128e-01
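Since input can be an N x 5 array, the emulator can also be evaluated on a whole grid of parameters in a single call. Below is a minimal sketch of such a batched evaluation; it assumes the emulator standard deviations are on the natural logarithm when converting them into a multiplicative 1-sigma band on A:

import numpy as np
import hod_emu

emu = hod_emu.get_emulator_m200c()

# one row per data point: [Omega_m, Omega_b, sigma_8, h_0, scale factor]
redshifts = np.array([0.0, 0.5, 1.0])
grid = np.array([[0.27, 0.04, 0.8, 0.7, 1./(1.+z)] for z in redshifts])

params, stds = emu.predict_A_beta_sigma(grid, emulator_std=True) # both are N x 3
A, beta, sigma = params.T
error_logA = stds.T[0]

# multiplicative 1-sigma band on A (assumption: the quoted std is on ln A)
A_lo, A_hi = A*np.exp(-error_logA), A*np.exp(error_logA)
for z, a, lo, hi in zip(redshifts, A, A_lo, A_hi):
    print('z=%.1f: A=%.3f (1-sigma band %.3f - %.3f)'%(z, a, lo, hi))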

Example 2: Produce a mock catalog of galaxies

In this example we use the package hmf to produce a mock catalog of halo masses. Note that the mock number of satellites is drawn from a Gaussian distribution truncated at zero (see Eq. 5 of the paper), hence the helper function non_neg_normal_sample.

import hmf.helpers.sample
import scipy.stats
# this example reuses np, plt and the emu, Om0, Ob0, s8, h0, z variables defined in Example 1

masses = hmf.helpers.sample.sample_mf(400,14.0,hmf_model="PS",Mmax=17,sort=True)[0]    
    
def non_neg_normal_sample(loc, scale, max_iters=1000):
    "Given numpy arrays loc and scale, sample from a normal distribution truncated at zero."
    vals = scipy.stats.norm.rvs(loc=loc, scale=scale)
    mask_negative = vals < 0.
    if np.any(mask_negative) and max_iters > 0:
        # redraw only the negative entries and store the recursion result back
        vals[mask_negative] = non_neg_normal_sample(loc[mask_negative], scale[mask_negative], max_iters=max_iters-1)
    # after the recursion, all values should be non-negative
    if np.any(vals < 0.):
        raise Exception("non_neg_normal_sample failed to provide positive-normal samples")
    return vals

A, beta, logscatter = emu.predict_A_beta_sigma([[Om0, Ob0, s8, h0, 1./(1.+z)]])[0] # the input must be 2d; the first output row holds A, beta, sigma

Ns = A*(masses/5e14)**beta

modelmu = non_neg_normal_sample(loc = Ns, scale=logscatter*Ns)
modelpois = scipy.stats.poisson.rvs(modelmu)
modelmock = modelpois

plt.fill_between(masses, Ns*(1.-logscatter), Ns*(1.+logscatter), label='Ns +/- log scatter from Emu', color='black', alpha=0.5)
plt.scatter(masses, modelmock, label='Ns mock', color='orange')
plt.plot(masses, Ns, label=r'$N_s$ from Emu', color='black')
plt.ylim([0.1,100.])
plt.xscale('log')
plt.yscale('log')
plt.xlabel(r'$M_{\rm {halo}} [M_\odot]$')
plt.ylabel(r'$N_s$')
plt.title(r'$M_\bigstar>2\cdot10^{11}M_\odot \ \ \ \mathtt{and} \ \ \ \Delta_{200c}$')

plt.legend();

This will show the following figure:

Figure: mock catalog of halos and satellite abundance produced by HODEmu.
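As an aside, redrawing only the negative samples (as non_neg_normal_sample does) is equivalent to sampling a normal distribution truncated at zero, so a vectorized, non-recursive alternative (not part of HODEmu) can be written with scipy.stats.truncnorm:

import numpy as np
import scipy.stats

def non_neg_normal_sample_vectorized(loc, scale):
    "Sample a normal distribution truncated at zero in a single vectorized call."
    loc, scale = np.asarray(loc, dtype=float), np.asarray(scale, dtype=float)
    a = (0. - loc)/scale # lower truncation bound in standardized units
    return scipy.stats.truncnorm.rvs(a, np.inf, loc=loc, scale=scale)

# drop-in replacement for the sampler in the example above:
# modelmu = non_neg_normal_sample_vectorized(loc=Ns, scale=logscatter*Ns)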
