FLSim is a flexible, standalone library written in PyTorch that simulates FL settings with a minimal, easy-to-use API

Overview

Federated Learning Simulator (FLSim)

Federated Learning Simulator (FLSim) is a flexible, standalone library written in PyTorch that simulates FL settings with a minimal, easy-to-use API. FLSim is domain-agnostic and accommodates many use cases such as computer vision and natural language text. Currently, FLSim supports cross-device FL, where millions of client devices (e.g., phones) train a model collaboratively.

FLSim is scalable and fast. It supports differential privacy (DP), secure aggregation (secAgg), and a variety of compression techniques.

In FL, a model is trained collaboratively by multiple clients that each have their own local data, and a central server moderates training, e.g. by aggregating model updates from multiple clients.

In FLSim, developers only need to define a dataset, model, and metrics reporter. All other aspects of FL training are handled internally by the FLSim core library.

FLSim

Library Structure

FLSim's core components follow the same semantics as FedAvg. At a high level, the server comprises three main components: a selector, an aggregator, and an optimizer. The selector selects clients for training, and the aggregator aggregates client updates until a round is complete. Then, the optimizer updates the server model based on the aggregated gradients. The server communicates with the clients via a channel, which compresses the messages between the server and the clients. Locally, each client consists of a dataset and a local optimizer. This local optimizer can be SGD, FedProx, or a custom PyTorch optimizer.
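
As a rough illustration of these semantics, here is a minimal, self-contained sketch of one FedAvg-style round. Every name in it is hypothetical and chosen for exposition; it is not FLSim's actual API:

import copy
import torch

def run_round(server_model, clients, users_per_round, server_lr=1.0):
    # Selector: pick the clients participating in this round.
    selected = clients[:users_per_round]
    server_state = server_model.state_dict()
    sum_delta = {k: torch.zeros_like(v, dtype=torch.float) for k, v in server_state.items()}
    for client in selected:
        # Channel: server -> client (a real channel could compress the message here).
        local_model = copy.deepcopy(server_model)
        # Client: train on its local dataset with its local optimizer (e.g., SGD).
        client.train_locally(local_model)
        # Aggregator: accumulate client updates until the round is complete.
        for k, v in local_model.state_dict().items():
            sum_delta[k] += (server_state[k] - v).float()
    # Server optimizer: apply the averaged update as a step on the server model
    # (plain SGD shown here; a FedAdam-style server would transform the update instead).
    new_state = {k: server_state[k] - server_lr * sum_delta[k] / len(selected)
                 for k in server_state}
    server_model.load_state_dict(new_state)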

Installation

The latest release of FLSim can be installed via pip:

pip install flsim

You can also install directly from the source for the latest features (along with its quirks and potentially ocassional bugs):

git clone https://github.com/facebookresearch/FLSim.git
cd FLSim
pip install -e .

Getting started

To implement a central training loop in the FL setting using FLSim, a developer simply performs the following steps (see the sketch under Usage Example below):

  1. Build their own data pipeline to assign individual rows of training data to client devices (to simulate how data is distributed across client devices)
  2. Create a corresponding nn.Module model and wrap it in an FL model.
  3. Define a custom metrics reporter that computes and collects metrics of interest (e.g., accuracy) throughout training.
  4. Set the desired hyperparameters in a config.

Usage Example
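
A minimal sketch of how the pieces fit together around a config. The two config helpers below (flsim.configs and fl_config_from_json) are real and are discussed in the issues further down; the empty "trainer" block is a placeholder rather than a complete working configuration:

import flsim.configs  # must be imported first: it registers FLSim's config schemas
from flsim.utils.config_utils import fl_config_from_json

# Placeholder config: a real run sets the trainer's hyperparameters
# (client/server optimizers, users per round, epochs, ...) under "trainer".
json_config = {
    "trainer": {
    }
}
cfg = fl_config_from_json(json_config)
# cfg now carries the hyperparameters consumed by an FLSim trainer (step 4 above).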

Tutorials

For details, please refer to the tutorials we have prepared.

Examples

We have prepared runnable examples for two of the tutorials above:

Contributing

See CONTRIBUTING for how to contribute to this library.

License

This code is released under Apache 2.0, as found in the LICENSE file.

Comments
  • Bug Fix#36: fix imports in tests.

    Types of changes

    • [x] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Docs change / refactoring / dependency upgrade

    Motivation and Context / Related issue

    Bug Fix#36: fix imports in tests.

    How Has This Been Tested (if it applies)

    pytest -ra is able to discover all tests now.

    Checklist

    • [x] The documentation is up-to-date with the changes I made.
    • [x] I have read the CONTRIBUTING document and completed the CLA (see CONTRIBUTING).
    • [x] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by ghaccount 8
  • Vr

    Types of changes

    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [ ] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)
    • [ ] Docs change / refactoring / dependency upgrade

    Motivation and Context / Related issue

    How Has This Been Tested (if it applies)

    Checklist

    • [ ] The documentation is up-to-date with the changes I made.
    • [ ] I have read the CONTRIBUTING document and completed the CLA (see CONTRIBUTING).
    • [ ] All tests passed, and additional code has been covered with new tests.
    CLA Signed 
    opened by JohnlNguyen 6
  • Move optimizer_test_utils to optimizers directory

    Summary: it is currently located in the top-level tests directory. However, the top-level tests directory does not really make sense, as each component is organized into its own dedicated directory; in that sense, optimizer_test_utils.py belongs in the optimizer directory. In this diff, we move the file to the optimizer directory and fix the reference.

    Differential Revision: D32241821

    CLA Signed fb-exported Merged 
    opened by jessemin 3
  • Does the backend handle Federated learning asynchronously?

    I found this repo from this blog: https://ai.facebook.com/blog/asynchronous-federated-learning/. However, I do not find any mention of it in this repo, and I cannot tell from the code examples whether this is the synchronous or the asynchronous version of federated learning. Can you please clarify this for me? Also, if this is the asynchronous version, how can I dive deeper into the library and look at the implementation of the async handling mechanism?

    Thank you

    opened by 111Kaushal 2
  • Remove test_pytorch_local_dataset_factory

    Summary: This test became very flaky over a year ago and has never been revived since. Deleting it from the codebase.

    Differential Revision: D32415979

    CLA Signed fb-exported Merged 
    opened by jessemin 2
  • FedSGD with virtual batching

    🚀 Feature

    Motivation

    Create a memory-efficient client to run FedSGD. If a client has many examples, running FedSGD (taking the gradient of the model based on all of the client's data at once) can lead to OOM. In this PR, we fix this problem by accumulating gradients over mini-batches while still calling optimizer.step once at the end of local training to simulate the effect of FedSGD.
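
    For illustration, a minimal sketch of this virtual-batching idea in plain PyTorch (the function and its signature are hypothetical, not FLSim's API): gradients accumulate batch by batch and optimizer.step() runs once, so the update matches a full-batch FedSGD step without materializing all the data at once.

    def fedsgd_local_update(model, loss_fn, dataloader, optimizer):
        # Accumulate gradients over mini-batches; step once at the end.
        optimizer.zero_grad()
        n_total = sum(x.size(0) for x, _ in dataloader)  # total local examples
        for x, y in dataloader:
            # Weight each batch so the accumulated gradient equals the
            # full-batch mean gradient (loss_fn is assumed to mean-reduce).
            loss = loss_fn(model(x), y) * (x.size(0) / n_total)
            loss.backward()
        optimizer.step()  # a single step simulates full-batch FedSGD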

    opened by JohnlNguyen 0
  • Add FedNova as a benchmark

    Summary:

    What?

    Adding FedNova as a benchmark

    Why?

    FedNova is a well-known paper that addresses the objective inconsistency problem

    Differential Revision: D34668291

    CLA Signed fb-exported 
    opened by JohnlNguyen 1
  • Having to `import flsim.configs` before creating config from json is unintuitive

    🚀 Feature

    This code works

    import flsim.configs <-- 
    from flsim.utils.config_utils import fl_config_from_json
    
    json_config = {
        "trainer": {
        }
    }
    cfg = fl_config_from_json(json_config)
    

    This code doesn't work

    from flsim.utils.config_utils import fl_config_from_json
    
    json_config = {
        "trainer": {
        }
    }
    cfg = fl_config_from_json(json_config)
    

    Motivation

    Having to import flsim.configs first is unintuitive and not clear from the user's perspective

    Pitch

    Alternatives

    Additional context

    opened by JohnlNguyen 0
  • Fix sent140 example

    Summary:

    What?

    Fix the tutorial's word embedding to resolve the poor accuracy problem

    Why?

    https://github.com/facebookresearch/FLSim/issues/34

    Differential Revision: D34149392

    CLA Signed fb-exported 
    opened by JohnlNguyen 1
  • low test accuracy in Sentiment classification with LEAF's Sent140 tutorial?

    ❓ Questions and Help

    Until we move the questions to another medium, feel free to use this as your question:

    Question

    I tried this tutorial: https://github.com/facebookresearch/FLSim/blob/main/tutorials/sent140_tutorial.ipynb and the accuracy is less than a random guess (50%)!

    Any suggestions or approaches to improve accuracy for this tutorial?

    From the tutorial:

    Running (epoch = 1, round = 1, global round = 1) for Test
    (epoch = 1, round = 1, global round = 1), Loss/Test: 0.8683878255035598
    (epoch = 1, round = 1, global round = 1), Accuracy/Test: 49.61439588688946
    {'Accuracy': 49.61439588688946}

    opened by ghaccount 0
Releases (v0.1.0)
  • v0.0.1 (Dec 9, 2021)

    We are excited to announce the release of FLSim 0.0.1.

    Introduction

    How does one train a machine learning model without access to user data? Federated Learning (FL) is the technology that answers this question. In a nutshell, FL is a way for many users to collaboratively learn a machine learning model without sharing their data. There are two scenarios for FL: cross-silo and cross-device. Cross-silo FL enables collaborative learning among a few large organizations with massive silo datasets, while cross-device FL enables collaborative learning among many small user devices with small local datasets. Cross-device FL, where millions or even billions of users cooperate on learning a model, is a much more complex problem and has attracted less attention from the research community. We designed FLSim to address the cross-device FL use case.

    Federated Learning at Scale

    Large-scale cross-device Federated Learning (FL) is a federated learning paradigm with several challenges that differentiate it from cross-silo FL: millions of clients coordinating with a central server, and training instability due to the large-cohort problem. With these challenges in mind, we built FLSim to be scalable and easy to use; FLSim can scale to thousands of clients per round using only 1 GPU. We hope FLSim will equip researchers to tackle problems in federated learning at scale.

    FLSim

    Library Structure

    FLSim's core components follow the same semantics as FedAvg. At a high level, the server comprises three main components: a selector, an aggregator, and an optimizer. The selector selects clients for training, and the aggregator aggregates client updates until a round is complete. Then, the optimizer updates the server model based on the aggregated gradients. The server communicates with the clients via a channel, which compresses the messages between the server and the clients. Locally, each client consists of a dataset and a local optimizer. This local optimizer can be SGD, FedProx, or a custom PyTorch optimizer.

    Included Datasets

    Currently, FLSim supports all datasets from LEAF including FEMNIST, Shakespeare, Sent140, CelebA, Synthetic and Reddit. Additionally, we support MNIST and CIFAR-10.

    Included Algorithms

    FLSim supports standard FedAvg and other federated learning methods such as FedAdam, FedProx, FedAvgM, FedBuff, FedLARS, and FedLAMB.

    What’s next?

    We hope FLSim will foster large-scale cross-device FL research. We plan to add support for personalization in early 2022. Throughout 2022, we plan to gather feedback, improve usability, and continue to grow our collection of algorithms, datasets, and models.

    Source code (tar.gz)
    Source code (zip)
Owner
Meta Research