FedGS: A Federated Group Synchronization Framework Implemented on LEAF-MX

Overview

This repository implements FedGS, proposed in the paper "Data Heterogeneity-Robust Federated Learning via Group Client Selection in Industrial IoT" (see Reference below).

Preparation

  • For instructions on generating data, please go to the folder of the corresponding dataset. For FEMNIST, please refer to the femnist folder (see the example command after this list).

  • NVIDIA-Docker is required.

  • NVIDIA CUDA 10.1 or higher is required.
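As a quick reference, below is a minimal sketch of a LEAF-style FEMNIST preprocessing command. The folder layout, script name, and flags are assumptions carried over from the upstream LEAF repository and may differ in this project, so please follow the instructions in the femnist folder:

cd femnist && ./preprocess.sh -s niid --sf 0.05 -k 0 -t sample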

How to run FedGS

Build a docker image

Enter the scripts folder and build a docker image named fedgs.

sudo docker build -f build-env.dockerfile -t fedgs .

Modify /home/lizh/fedgs to your actual project path in scripts/run.sh. Then run scripts/run.sh, which will create a container named fedgs.0 (if CONTAINER_RANK is set to 0) and start the task.

chmod a+x run.sh && ./run.sh

The output logs and models will be stored in a logs folder that is created automatically. For example, outputs of the FEMNIST task with container rank 0 will be stored in logs/femnist/0/.
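To follow training progress while the container is running, you can tail the output log; the path below assumes the FEMNIST task with container rank 0, matching the layout described above:

tail -f logs/femnist/0/output.0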

Hyperparameters

We categorize the hyperparameters into default settings and custom settings, and introduce them separately below.

Default Hyperparameters

These hyperparameters are defined in utils/args.py. We list them in the table below (excluding the custom hyperparameters); in general, you do not need to change them.

| Variable Name | Default Value | Optional Values | Description |
| --- | --- | --- | --- |
| --seed | 0 | integer | Seed for client selection and batch splitting. |
| --metrics-name | "metrics" | string | Name for the metrics file. |
| --metrics-dir | "metrics" | string | Folder name for metrics files. |
| --log-dir | "logs" | string | Folder name for log files. |
| --use-val-set | None | None | Set this option to use the validation set; otherwise the test set is used. (NOT TESTED) |

Custom Hyperparameters

These hyperparameters are included in scripts/run.sh. We list them below.

| Environment Variable | Default Value | Description |
| --- | --- | --- |
| CONTAINER_RANK | 0 | Identifies the container (e.g., fedgs.0) and the log files (e.g., logs/femnist/0/output.0). |
| BATCH_SIZE | 32 | Number of training samples in each batch. |
| LEARNING_RATE | 0.01 | Learning rate for local optimizers. |
| NUM_GROUPS | 10 | Number of groups. |
| CLIENTS_PER_GROUP | 10 | Number of clients selected in each group. |
| SAMPLER | gbp-cs | Sampler to be used; can be random, brute, bayesian, probability, ga, or gbp-cs. |
| NUM_SYNCS | 50 | Number of internal synchronizations in each round. |
| NUM_ROUNDS | 500 | Total number of external synchronization rounds. |
| DATASET | femnist | Dataset to be used; currently only FEMNIST is supported. |
| MODEL | cnn | Neural network model to be used. |
| EVAL_EVERY | 1 | Interval (in rounds) between model evaluations. |
| NUM_GPU_AVAILABLE | 2 | Number of GPUs available. |
| NUM_GPU_BEGIN | 0 | Index of the first available GPU. |
| IMAGE_NAME | fedgs | Docker image used for the experiment. |

NOTE: If you wish to use a specific GPU device (e.g., GPU0), please set NUM_GPU_AVAILABLE=1 and NUM_GPU_BEGIN=0.
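For example, assuming scripts/run.sh takes these values from the environment (if they are hardcoded, edit the corresponding lines in the script instead), a run pinned to GPU0 with the random sampler could be launched as:

CONTAINER_RANK=0 SAMPLER=random NUM_GPU_AVAILABLE=1 NUM_GPU_BEGIN=0 ./run.sh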

NOTE: This script mounts the project directory /home/lizh/fedgs on the host to /root inside the container, so please check carefully that your file path is correct.
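For reference, here is a minimal sketch of the kind of container launch scripts/run.sh performs, inferred only from the container name, image name, and mount described above; the actual script may pass additional environment variables and GPU options:

sudo docker run -d --runtime=nvidia --name fedgs.$CONTAINER_RANK -v /home/lizh/fedgs:/root fedgs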

Visualization

The visualizer metrics/visualize.py reads the metrics logs (e.g., metrics/metrics_stat_0.csv and metrics/metrics_sys_0.csv) and plots curves of accuracy, loss, and other metrics.
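For example, after a run with container rank 0 completes, the curves can typically be drawn with a command like the one below; any command-line arguments it accepts are defined in metrics/visualize.py:

python metrics/visualize.py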

Reference

  • This demo is implemented on LEAF-MX, an MXNet implementation of the well-known federated learning framework LEAF.

  • Li, Zonghang, Yihong He, Hongfang Yu, et al. "Data Heterogeneity-Robust Federated Learning via Group Client Selection in Industrial IoT." Submitted to IEEE Internet of Things Journal, (2021).

  • If you have trouble using this repository, please feel free to contact us at [email protected].
