Compartmental epidemic model to assess undocumented infections: applications to SARS-CoV-2 epidemics in Brazil - Datasets and Codes

Overview

The codes for the simulations were written in Fortran and compiled with the Intel Fortran Compiler. Data analysis and figures were done in Python 3.10 with the following open-source libraries: pandas, matplotlib, and seaborn.

This repository contains the codes for the simulations and data processing, as well as the datasets used.

The preprint is available at https://arxiv.org/abs/2201.03476. The following BibTeX code can be used to cite it:

@misc{costa2022compartmental,
      title={Compartmental epidemic model to assess undocumented infections: applications to SARS-CoV-2 epidemics in Brazil}, 
      author={Guilherme S. Costa and Wesley Cota and Silvio C. Ferreira},
      year={2022},
      eprint={2201.03476},
      archivePrefix={arXiv},
      primaryClass={q-bio.PE}
}

See also "Effects of infection fatality ratio and social contact matrices on vaccine prioritization strategies" and "Outbreak diversity in epidemic waves propagating through distinct geographical scales".

Dictionaries

Municipalities: The files (a) dictES.csv and (b) dictPR.csv provide information about the municipalities of the (a) ES and (b) PR states. These files have six columns (a loading sketch follows the list):

  1. ID: numeric key regarding calibration of confirmed cases time series
  2. ibgeID: official code to identify the city
  3. name: name of the city
  4. intermID: official code of intermediate region to which the city belongs
  5. imedID: official code of immediate region to which the city belongs
  6. totPop2019: population of the city estimated in 2019
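
A minimal sketch of loading these dictionaries with pandas, assuming comma-separated files with a header row (paths are illustrative):

    import pandas as pd

    # Load the municipality dictionaries for ES and PR (paths are illustrative)
    dict_es = pd.read_csv("dictES.csv")
    dict_pr = pd.read_csv("dictPR.csv")

    # Index municipalities by their official IBGE code and inspect the columns
    dict_es = dict_es.set_index("ibgeID")
    print(dict_es.columns.tolist())     # expected: ID, name, intermID, imedID, totPop2019
    print(dict_es["totPop2019"].sum())  # estimated 2019 population of the ES state
    print(len(dict_pr))                 # number of municipalities in PR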

Immediate and intermediate regions: The files (a) dictImed.csv and (b) dictInterm.csv provide information about the (a) immediate and (b) intermediate regions of PR and ES. These files have five columns:

  1. ID: numeric key regarding calibration of confirmed cases time series
  2. imedID or intermID: official code to identify the region
  3. name: name of the region
  4. state: state to which the region belongs
  5. totPop2019: population of the region estimated in 2019

States: The file dictUF.csv provides information about the PR and ES states. It has five columns; the different levels can be joined through the imedID and intermID codes, as in the sketch after the list:

  1. ID: numeric key regarding calibration of confirmed cases time series
  2. ibgeID: official code to identify the state
  3. name: name of the state
  4. uf: abbreviation of the state's name
  5. totPop2019: population of the state estimated in 2019
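
As an illustration of how the levels relate, the sketch below attaches the immediate-region information to each municipality via imedID (column names as listed above; separators and paths are assumptions):

    import pandas as pd

    # Municipality and immediate-region dictionaries (paths are illustrative)
    cities = pd.read_csv("dictES.csv")
    imed = pd.read_csv("dictImed.csv")

    # Attach the immediate-region name and population to each municipality
    merged = cities.merge(
        imed[["imedID", "name", "totPop2019"]].rename(
            columns={"name": "imedName", "totPop2019": "imedPop2019"}
        ),
        on="imedID",
        how="left",
    )

    # Municipal populations summed per immediate region, next to the region total
    check = merged.groupby(["imedName", "imedPop2019"])["totPop2019"].sum().reset_index()
    print(check.head())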

Time series

Cases and deaths: The files (a) PR.csv, (b) ES.csv, (c) saopaulo.csv and (d) manaus.csv yield the time series of confirmed cases and deaths since April 1, 2020 for (a) all cities of the PR state, (b) all cities of the ES state, (c) the city of São Paulo and (d) the city of Manaus. These files have seven columns (a loading sketch follows the list):

  1. date: date
  2. ibgeID: official code to identify the city
  3. newCases: new confirmed cases on that day
  4. newDeaths: new confirmed deaths on that day
  5. city: name of the city
  6. totalCases: accumulated cases
  7. totalDeaths: accumulated deaths
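
A minimal sketch of loading one of these series, aggregating it to the state level, and checking that the daily and accumulated columns are consistent (the date format and the consistency check are assumptions):

    import pandas as pd

    # Load the PR time series (path is illustrative)
    ts = pd.read_csv("PR.csv", parse_dates=["date"])

    # Aggregate all cities of the state into a single daily series
    state = ts.groupby("date")[["newCases", "newDeaths"]].sum().sort_index()
    print(state.tail())

    # For one city, daily new cases should match the day-to-day increase of totalCases
    city = ts[ts["city"] == "Curitiba"].sort_values("date")  # city name is illustrative
    print((city["totalCases"].diff().iloc[1:] == city["newCases"].iloc[1:]).all())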

Calibration: The files (a) imed.zip and (b) state.zip contain the time series of accumulated cases and fatalities used for calibration, aggregated at different geographical levels. Each archive holds two types of files: casesXX.dat (where XX refers to the calibration IDs mentioned above) with the accumulated cases, and lethXX.dat with the daily fatalities.
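
A sketch of reading one of the calibration series directly from the archive; the internal file name and the whitespace-separated layout of the .dat files are assumptions, since they are not specified here:

    import zipfile

    import numpy as np

    # Read one accumulated-cases series from the archive
    # ("cases01.dat" is a hypothetical calibration ID)
    with zipfile.ZipFile("imed.zip") as zf:
        with zf.open("cases01.dat") as fh:
            cases = np.loadtxt(fh)

    print(cases.shape)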

Calibration Code

The file calibra.f90 is a Fortran program that runs the calibration algorithm described in the Methods section of the main paper 1000 times with different epidemiological parameters. The program takes four inputs: the time series of accumulated cases, the time series of fatalities, the initial date for calibration, and the population of the region (state, city, etc.). It produces two output files, epiQuantities.dat and hiddenCompart.dat (a reading sketch follows below). The first has seven columns:

  1. Days from the initial time
  2. Calibrated confirmed cases
  3. Reference cases
  4. Effective reproductive number
  5. Fraction of susceptible population
  6. Underreporting coefficient
  7. Sample

In hiddenCompart.dat, we have the time series of the model compartments: from left to right, S, E, A, I, CA + CI, R + RI + RA + D, and the sample number.
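
A sketch of loading the two output files in Python, using the column descriptions above (whitespace-separated columns without a header are assumed):

    import pandas as pd

    # Columns of epiQuantities.dat as described above
    epi_cols = [
        "day", "calibratedCases", "referenceCases",
        "Reff", "fracSusceptible", "underreporting", "sample",
    ]
    epi = pd.read_csv("epiQuantities.dat", sep=r"\s+", header=None, names=epi_cols)

    # Columns of hiddenCompart.dat: S, E, A, I, CA + CI, R + RI + RA + D, sample
    hidden_cols = ["S", "E", "A", "I", "C", "R", "sample"]
    hidden = pd.read_csv("hiddenCompart.dat", sep=r"\s+", header=None, names=hidden_cols)

    # Median underreporting coefficient over the 1000 calibration samples, per day
    print(epi.groupby("day")["underreporting"].median().head())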

Python scripts and figures

Calculation of the underreporting coefficient: the file underreporting.ipynb is a Jupyter (IPython) notebook that calculates the underreporting coefficient starting from a time series of confirmed cases and deaths. At the end, it plots the evolution of this coefficient over time.
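
The notebook computes the coefficient from the calibrated model described in the paper; purely as a standalone illustration, the sketch below estimates an underreporting factor from the case and death series using an assumed infection fatality ratio, a common heuristic that is not necessarily the notebook's method:

    import pandas as pd

    IFR = 0.006      # assumed infection fatality ratio (illustrative value)
    DELAY_DAYS = 14  # assumed delay between case report and death (illustrative)

    ts = pd.read_csv("saopaulo.csv", parse_dates=["date"]).sort_values("date")

    # Infections implied by deaths, shifted back by the assumed delay
    implied_infections = (ts["newDeaths"].rolling(7).mean() / IFR).shift(-DELAY_DAYS)
    reported = ts["newCases"].rolling(7).mean()

    # Heuristic underreporting factor: implied infections per reported case
    under = (implied_infections / reported).rename("underreporting")
    print(pd.concat([ts["date"], under], axis=1).dropna().tail())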

Template for figures: Most figures in this work were generated with the matplotlib and seaborn packages (Python 3.7). The file format_covid19br.mplstyle contains the template (font family and sizes) used to generate those figures, as in the sketch below.
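
A minimal sketch of applying the template when generating a figure:

    import matplotlib.pyplot as plt
    import seaborn as sns

    # Apply the repository's figure template (font family and sizes)
    plt.style.use("format_covid19br.mplstyle")

    fig, ax = plt.subplots()
    sns.lineplot(x=[0, 1, 2, 3], y=[1, 3, 2, 5], ax=ax)  # illustrative data
    ax.set_xlabel("day")
    ax.set_ylabel("cases")
    fig.savefig("example.png", dpi=300)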
