FedGS: A Federated Group Synchronization Framework Implemented on LEAF-MX

Overview

FedGS: Data Heterogeneity-Robust Federated Learning via Group Client Selection in Industrial IoT

Preparation

  • For instructions on generating data, please go to the folder of the corresponding dataset; for FEMNIST, please refer to the femnist folder (a rough example invocation is sketched after this list).

  • NVIDIA-Docker is required.

  • NVIDIA CUDA 10.1 or higher is required.
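
For the data-generation step mentioned above, LEAF-style datasets are usually prepared with the preprocess.sh script shipped in the dataset folder. The invocation below is only a rough sketch based on the upstream LEAF FEMNIST scripts; the flags and sampling fraction are assumptions, so please follow the README inside the femnist folder for the exact command.

cd femnist
# Sample a non-IID FEMNIST subset (LEAF conventions; may differ in this repository):
#   -s niid    non-IID sampling
#   --sf 0.05  fraction of the full dataset to sample
#   -k 0       minimum number of samples per user
#   -t sample  split each user's data into train/test sets
./preprocess.sh -s niid --sf 0.05 -k 0 -t sample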

How to run FedGS

Build a docker image

Enter the scripts folder and build a docker image named fedgs.

sudo docker build -f build-env.dockerfile -t fedgs .

Modify /home/lizh/fedgs in scripts/run.sh to your actual project path, then run scripts/run.sh, which creates a container named fedgs.0 (when CONTAINER_RANK is set to 0) and starts the task.

chmod a+x run.sh && ./run.sh

The output logs and models will be stored in a logs folder created automatically. For example, outputs of the FEMNIST task with container rank 0 will be stored in logs/femnist/0/.
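
As a rough illustration (file names other than output.0 are assumptions and depend on the training configuration), the log directory for a FEMNIST run with container rank 0 is expected to look like this:

logs/femnist/0/
├── output.0     # training log for CONTAINER_RANK=0
└── ...          # saved model files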

Hyperparameters

We categorize hyperparameters into default settings and custom settings, and we will introduce them separately.

Default Hyperparameters

These hyperparameters are defined in utils/args.py. We list them in the table below (custom hyperparameters are excluded); in general, you do not need to change them.

Variable Name    Default Value   Optional Values   Description
--seed           0               integer           Seed for client selection and batch splitting.
--metrics-name   "metrics"       string            Name for the metrics file.
--metrics-dir    "metrics"       string            Folder name for metrics files.
--log-dir        "logs"          string            Folder name for log files.
--use-val-set    None            None              Set this option to use the validation set; otherwise the test set is used. (NOT TESTED)

Custom Hyperparameters

These hyperparameters are included in scripts/run.sh. We list them below.

Environment Variable   Default Value   Description
CONTAINER_RANK         0               Identifies the container (e.g., fedgs.0) and its log files (e.g., logs/femnist/0/output.0).
BATCH_SIZE             32              Number of training samples in each batch.
LEARNING_RATE          0.01            Learning rate for local optimizers.
NUM_GROUPS             10              Number of groups.
CLIENTS_PER_GROUP      10              Number of clients selected in each group.
SAMPLER                gbp-cs          Sampler to use; one of random, brute, bayesian, probability, ga, or gbp-cs.
NUM_SYNCS              50              Number of internal synchronizations in each round.
NUM_ROUNDS             500             Total number of external synchronization rounds.
DATASET                femnist         Dataset to use; only FEMNIST is currently supported.
MODEL                  cnn             Neural network model to use.
EVAL_EVERY             1               Number of rounds between model evaluations.
NUM_GPU_AVAILABLE      2               Number of GPUs available.
NUM_GPU_BEGIN          0               Index of the first available GPU.
IMAGE_NAME             fedgs           Docker image to use for the experiment.

NOTE: If you wish to specify a GPU device (e.g., GPU0), please set NUM_GPU_AVAILABLE=1 and NUM_GPU_BEGIN=0.

NOTE: This script mounts the project directory /home/lizh/fedgs on the host to /root in the container, so please check carefully that your project path is correct.
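
To make the two notes above concrete, the following is a simplified sketch of what scripts/run.sh is expected to do with these environment variables. It is not the actual script: the training entry point is a placeholder, the docker options may differ, and the real script in this repository should be treated as authoritative.

# Hypothetical sketch of scripts/run.sh (not the real script).
PROJECT_DIR=/home/lizh/fedgs        # change this to your actual project path
CONTAINER_RANK=0
IMAGE_NAME=fedgs

# Start a GPU-enabled container named fedgs.<rank> (requires NVIDIA-Docker /
# the NVIDIA container toolkit), mount the project into /root, and launch the
# training task inside it.
docker run -d --gpus all \
    --name ${IMAGE_NAME}.${CONTAINER_RANK} \
    -v ${PROJECT_DIR}:/root \
    ${IMAGE_NAME} \
    bash -c "cd /root && <training command with the hyperparameters above>"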

Visualization

The visualizer metrics/visualize.py reads the metrics logs (e.g., metrics/metrics_stat_0.csv and metrics/metrics_sys_0.csv) and plots curves of accuracy, loss, and other metrics.
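
A minimal usage sketch, assuming the visualizer can be run directly from the project root and picks up the CSV files from the metrics folder (any command-line options it accepts are defined in metrics/visualize.py itself):

# Plot accuracy/loss curves from metrics/metrics_stat_0.csv and metrics/metrics_sys_0.csv.
python metrics/visualize.py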

Reference

  • This demo is implemented on LEAF-MX, an MXNet implementation of the well-known federated learning framework LEAF.

  • Zonghang Li, Yihong He, Hongfang Yu, et al. "Data Heterogeneity-Robust Federated Learning via Group Client Selection in Industrial IoT." Submitted to IEEE Internet of Things Journal, 2021.

  • If you run into trouble using this repository, please feel free to contact us at [email protected].
