A Python library for self-supervised learning on images.

Overview

Lightly is a computer vision framework for self-supervised learning.

We at Lightly are passionate engineers who want to make deep learning more efficient. We want to help popularize the use of self-supervised methods to understand and filter raw image data. Our solution can be applied before any data annotation step, and the learned representations can be used to analyze and visualize datasets as well as to select a core set of samples.

Tutorials

Want to jump to the tutorials and see lightly in action?

Benchmarks

Currently implemented models and their accuracy on CIFAR-10. All models have been evaluated with a kNN classifier. We report the maximum test accuracy over the training epochs as well as the peak GPU memory consumption. All models in this benchmark use the same augmentations and the same ResNet-18 backbone. Training precision is set to FP32, and SGD with a cosine learning-rate schedule is used as the optimizer.

Model Epochs Batch Size Test Accuracy Peak GPU usage
MoCo 200 128 0.83 2.1 GBytes
SimCLR 200 128 0.78 2.0 GBytes
SimSiam 200 128 0.73 3.0 GBytes
MoCo 200 512 0.85 7.4 GBytes
SimCLR 200 512 0.83 7.8 GBytes
SimSiam 200 512 0.81 7.0 GBytes
MoCo 800 512 0.90 7.2 GBytes
SimCLR 800 512 0.89 7.7 GBytes
SimSiam 800 512 0.91 6.9 GBytes
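
For reference, the kNN evaluation protocol used above can be sketched in a few lines on top of pre-computed embeddings. This is a simplified, hedged sketch using scikit-learn rather than the actual benchmark script; the function and variable names are placeholders:

# Simplified sketch of kNN evaluation on frozen embeddings (not the benchmark script).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_accuracy(train_emb, train_labels, test_emb, test_labels, k=200):
    # L2-normalize so that Euclidean kNN behaves like cosine-similarity kNN.
    train_emb = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    test_emb = test_emb / np.linalg.norm(test_emb, axis=1, keepdims=True)
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(train_emb, train_labels)
    return knn.score(test_emb, test_labels)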

Terminology

Below you can see a schematic overview of the different concepts present in the lightly Python package. The terms in bold are explained in more detail in our documentation.

Overview of the lightly pip package

Quick Start

Lightly requires Python 3.6+. We recommend installing Lightly in a Linux or macOS environment.

Requirements

  • hydra-core>=1.0.0
  • numpy>=1.18.1
  • pytorch_lightning>=0.10.0
  • requests>=2.23.0
  • torchvision
  • tqdm

Installation

You can install Lightly and its dependencies from PyPI with:

pip3 install lightly

We strongly recommend that you install Lightly in a dedicated virtualenv, to avoid conflicting with your system packages.

Command-Line Interface

Lightly is also accessible through a command-line interface (CLI). To train a SimCLR model on a folder of images, you can simply run the following command:

lightly-train input_dir=/mydataset
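
For illustration, the command above roughly corresponds to a standard SimCLR training loop. Below is a minimal, hand-rolled sketch using CIFAR-10 as a stand-in dataset: only NTXentLoss is taken from lightly, everything else is plain PyTorch/torchvision, and the hyperparameters are placeholders rather than the CLI defaults.

# Hedged sketch of SimCLR-style training; not the exact code run by lightly-train.
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T
from lightly.loss import NTXentLoss

class TwoViews:
    """Return two independently augmented views of the same image."""
    def __init__(self, base_transform):
        self.base_transform = base_transform
    def __call__(self, image):
        return self.base_transform(image), self.base_transform(image)

augment = T.Compose([
    T.RandomResizedCrop(32),
    T.RandomHorizontalFlip(),
    T.ColorJitter(0.4, 0.4, 0.4, 0.1),
    T.RandomGrayscale(p=0.2),
    T.ToTensor(),
])

dataset = torchvision.datasets.CIFAR10("data", download=True, transform=TwoViews(augment))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=128, shuffle=True, num_workers=8)

resnet = torchvision.models.resnet18()
backbone = nn.Sequential(*list(resnet.children())[:-1])  # drop the classification layer
projection_head = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))
model = nn.Sequential(backbone, nn.Flatten(), projection_head)

criterion = NTXentLoss(temperature=0.5)
optimizer = torch.optim.SGD(model.parameters(), lr=0.06, momentum=0.9, weight_decay=5e-4)

# One pass over the data for illustration; real training runs for many epochs.
for (view0, view1), _ in dataloader:
    z0 = model(view0)
    z1 = model(view1)
    loss = criterion(z0, z1)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()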

To create an embedding of a dataset you can use:

lightly-embed input_dir=/mydataset checkpoint=/mycheckpoint

The embeddings, together with the corresponding filenames, are stored in a human-readable .csv file.
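
As a quick sanity check, the file can be inspected with pandas. The column layout used below (a filename column plus one column per embedding dimension) is an assumption; adjust it to whatever your CSV actually contains.

# Hedged sketch: loading the embeddings CSV produced by lightly-embed.
import pandas as pd

df = pd.read_csv("embeddings.csv")
print(df.head())

embedding_columns = [c for c in df.columns if c.startswith("embedding")]
embeddings = df[embedding_columns].to_numpy()
print(embeddings.shape)  # (number of images, embedding dimension)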

Next Steps

Head to the documentation and see the things you can achieve with Lightly!

Development

To install the dev dependencies (for example, to contribute to the framework), you can use the following command:

pip3 install -e ".[dev]"

For more information about how to contribute, have a look here.

Running Tests

Unit tests are in the tests folder, and we recommend running them with pytest. There are two test configurations available. By default, only a subset of the tests is run; this is faster and should take less than a minute. You can run it using:

python -m pytest -s -v

To run all tests (including the slow ones), use the following command:

python -m pytest -s -v --runslow

Code Linting

We provide a Pylint config following the Google Python Style Guide.

You can run the linter from your terminal either on a folder

pylint lightly/

or on a specific file

pylint lightly/core.py

Further Reading

Self-supervised Learning:

FAQ

  • Why should I care about self-supervised learning? Aren't pre-trained models from ImageNet much better for transfer learning?

    • Self-supervised learning has become increasingly popular among scientists over the last year because the learned representations perform extraordinarily well on downstream tasks. This means that they capture the important information in an image better than other types of pre-trained models. By training a self-supervised model on your dataset, you can make sure that the representations have all the necessary information about your images.
  • How can I contribute?

    • Create an issue if you encounter bugs or have ideas for features we should implement. You can also add your own code by forking this repository and creating a PR. More details about how to contribute code are in our contribution guide.
  • Is this framework free?

    • Yes, this framework is completely free to use, and we provide the source code. We believe that we need to make training deep learning models more data-efficient to achieve widespread adoption. One step towards this goal is leveraging self-supervised learning. The company behind Lightly is committed to keeping this framework open-source.
  • If this framework is free, how is the company behind lightly making money?

    • Training self-supervised models is only part of the solution. The company behind Lightly focuses on processing and analyzing the embeddings created by self-supervised models. By building what we call a self-supervised active learning loop, we help companies understand and work with their data more efficiently. This framework acts as an interface to our platform for easily uploading and downloading datasets, embeddings, and models. While the platform will charge for additional features, this framework will always remain free of charge (even for commercial use).
Comments
  • Add Masked Autoencoder implementation

    The paper Masked Autoencoders Are Scalable Vision Learners (https://arxiv.org/abs/2111.06377) suggests that a masked autoencoder (similar to masked pre-training in NLP) works very well as a pretext task for self-supervised learning. Let's add it to Lightly.
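
    As a starting point, the core of the pretext task is just random patch masking. A rough, hedged sketch in plain PyTorch (tokens is a (B, N, D) tensor of patch embeddings; this is not a proposal for lightly's final API):

    import torch

    def random_masking(tokens, mask_ratio=0.75):
        # Keep a random (1 - mask_ratio) fraction of the patch tokens and remember
        # which ones were masked so the decoder can reinsert mask tokens later.
        b, n, d = tokens.shape
        n_keep = int(n * (1 - mask_ratio))
        noise = torch.rand(b, n)                 # one random score per token
        ids_shuffle = noise.argsort(dim=1)       # ascending: lowest scores are kept
        ids_keep = ids_shuffle[:, :n_keep]
        kept = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
        mask = torch.ones(b, n)
        mask.scatter_(1, ids_keep, 0.0)          # 0 = kept, 1 = masked
        return kept, mask, ids_shuffle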

    type: idea 
    opened by IgorSusmelj 19
  • Feature proposals

    Hi; first of all thanks for making this library; it looks very useful.

    There are two papers that I'd like to try and implement in your repo:

    Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere

    This paper introduces a contrastive loss function with some quite different properties from the more widely cited ones. Out of all the loss functions I've tried for my projects, this is the one I had the most success with, and also the one that a priori seems the most sensible to me of what's currently been published. The paper provides source code, and it's really a trivially implemented additional loss function. I've got experience implementing it, plus some generalizations I came up with that I could provide. It should be an easy addition to this package. My motivation for contributing is a selfish one, as having it included here in the benchmarks would help me better understand its relative strengths and weaknesses on different types of datasets.
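
    For reference, the two losses from that paper really are tiny. A hedged sketch, roughly following the pseudocode in the paper (x and y are batches of L2-normalized embeddings of positive pairs):

    import torch

    def align_loss(x, y, alpha=2):
        # Mean distance between embeddings of positive pairs.
        return (x - y).norm(p=2, dim=1).pow(alpha).mean()

    def uniform_loss(x, t=2):
        # Log of the mean pairwise Gaussian potential; small when the embeddings
        # are spread uniformly over the hypersphere.
        return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()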

    Scaling Deep Contrastive Learning Batch Size with Almost Constant Peak Memory Usage

    I also recently came across this paper. It also provides (rather ugly, IMO) torch code. Implementing it in this package first would be easier for me than implementing it in my private repos, since it'd be easier to properly test the implementation using the infrastructure provided here. The title is quite descriptive; given the importance of batch size for within-batch mining techniques, and the fact that many of us are working with single GPUs, being able to scale batch sizes arbitrarily is super useful, and I think this paper has the right idea of how to go about it.

    The contribution guide mentions discussing such additions first, so tell me what you think!

    type: idea 
    opened by EelcoHoogendoorn 14
  • Adding DINO to lightly

    First of all, kudos on creating this amazing library ❤️.

    I think all of us in the self-supervised learning community have heard of DINO. For the past couple of weeks, I have been trying to port Facebook's DINO implementation to PyTorch Lightning. I have implemented it, at least for my use case, which is a Kaggle competition. I initially looked at lightly for an implementation but did not see one, so I borrowed and adapted code from the original implementation and converted it to Lightning.

    Honestly it was a tedious task. I was wondering if you guys would be interested in adding DINO to lightly.

    Here's how I think we should structure the implementation:

    1. Augmentations: DINO relies heavily on a multi-crop strategy. Since lightly already has a collate_fn for implementing augmentations, the DINO augmentations can be implemented in the same way.
    2. Model forward pass and model heads: The forward pass is a bit unusual since we have to deal with local (multi-crop) and global crops, so this needs to be implemented as an nn.Module like the other heads in lightly.
    3. Loss functions: I am not sure how this should be implemented for lightly, although FB has a custom class for it (a rough sketch of such a loss follows right after this list).
    4. Utility functions: FB uses some tricks for stable training and results, so these need to be included as well.
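
    A rough, hedged sketch of what such a loss could look like (cross-entropy between centered, sharpened teacher outputs and the student outputs, with an EMA-updated center); this is not meant as lightly's final implementation, and the temperatures are placeholders:

    import torch
    import torch.nn.functional as F

    class DINOLossSketch(torch.nn.Module):
        def __init__(self, out_dim, teacher_temp=0.04, student_temp=0.1, center_momentum=0.9):
            super().__init__()
            self.teacher_temp = teacher_temp
            self.student_temp = student_temp
            self.center_momentum = center_momentum
            self.register_buffer("center", torch.zeros(1, out_dim))

        def forward(self, teacher_outputs, student_outputs):
            # teacher_outputs: list of tensors for the global crops
            # student_outputs: list of tensors for all crops (global + local)
            total_loss, n_terms = 0.0, 0
            for t_idx, t_out in enumerate(teacher_outputs):
                t_probs = F.softmax((t_out - self.center) / self.teacher_temp, dim=-1).detach()
                for s_idx, s_out in enumerate(student_outputs):
                    if s_idx == t_idx:
                        continue  # skip the crop the teacher already saw
                    s_logprobs = F.log_softmax(s_out / self.student_temp, dim=-1)
                    total_loss += -(t_probs * s_logprobs).sum(dim=-1).mean()
                    n_terms += 1
            self.update_center(torch.cat(teacher_outputs))
            return total_loss / n_terms

        @torch.no_grad()
        def update_center(self, teacher_output):
            # Exponential moving average of the teacher output, used for centering.
            batch_center = teacher_output.mean(dim=0, keepdim=True)
            self.center = self.center * self.center_momentum + batch_center * (1 - self.center_momentum)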

    I have used Lightning to implement all of this and, so far, at least for my use case, I was able to train vit-base-16 due to my hardware constraints.

    It goes without saying that I would personally like to work on the PR :heart:

    Please let me know.

    opened by Atharva-Phatak 11
  • Queue for SwaV

    Aims to solve https://github.com/lightly-ai/lightly/issues/1006

    Implements the queue for SwAV with minimal code changes in the training loop, following the details from the paper.

    Rationale

    A trick for small batch sizes - start using queue prototypes at a later epoch
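
    For discussion, a minimal hedged sketch of such a queue, including the delayed-start trick mentioned above (this is not the code in this PR; sizes and the start epoch are placeholders):

    import torch

    class FeatureQueueSketch:
        def __init__(self, size, dim, start_epoch=15):
            self.bank = torch.zeros(size, dim)
            self.ptr = 0
            self.seen = 0
            self.start_epoch = start_epoch

        @torch.no_grad()
        def update(self, features):
            # FIFO update: overwrite the oldest entries with the current batch.
            n = features.shape[0]
            idx = (self.ptr + torch.arange(n)) % self.bank.shape[0]
            self.bank[idx] = features.detach().cpu()
            self.ptr = int((self.ptr + n) % self.bank.shape[0])
            self.seen += n

        def get(self, epoch):
            # Only expose the queue once it is full and training has passed
            # start_epoch; before that the prototypes are too unstable.
            if epoch < self.start_epoch or self.seen < self.bank.shape[0]:
                return None
            return self.bank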

    opened by ibro45 10
  • Malte lig 1262 advanced selection external docs

    Description

    • adds docs for selection using the new config
    • adapts existing docs (getting started, docker active learning)
    • already included: DIVERSIFY->DIVERSITY

    Tabs vs. Dropdowns

    Here, tabs are used for the SelectionStrategy and dropdowns for the configuration examples.

    opened by MalteEbner 8
  • MSN: Masked Siamese Networks for Label-Efficient Learning

    Hello, I would like to share this new interesting SSL method proposed by Meta that achieves SOTA results and shows interesting generalization properties.

    It is based on ViT, and I think it could be worth integrating it into this codebase. Also, the official implementation is available :)

    MSN paper, MSN GitHub code

    opened by masc-it 8
  • recipe for using the Lightly Docker as API worker

    opened by MalteEbner 8
  • Guarin lig 364 add download urls function to lightlyapi

    Draft for downloading samples via their read URLs. I included an iterable dataset for completeness.

    The following code takes 20s to load 1k samples:

    import torch
    from tqdm import tqdm
    
    from lightly.data.iterable_dataset import ImageIterableDataset
    from lightly.api import ApiWorkflowClient
    
    client = ApiWorkflowClient(
        token="token",
        dataset_id="dataset id"
    )
    
    samples = client.download_raw_samples(client.dataset_id)
    dataset = ImageIterableDataset(samples)
    dataloader = torch.utils.data.DataLoader(
        dataset,
        num_workers=8,
        batch_size=8
    )
    
    for image, filename, frame_idx in tqdm(dataloader):
        pass
    

    The same code for videos runs in 10s for 5 videos with 300 frames each using only 2 workers.

    opened by guarin 8
  • enhance the upload speed

    Using lightly-upload and lightly-magic is not very fast and does not utilise all the cores. There seems to be a bottleneck somewhere.

    Tasks

    • [ ] figure out where most of the time goes when processing
      • [ ] split the images/s number into separate numbers for images/s processing and images/s uploading
    • [ ] ensure that when num_workers is set, we really process with that number of workers
    type: enhancement 
    opened by japrescott 8
  • [Active Learning] Compute Active learning scores for object detection

    Create a scorer for object detection that can compute scores like prediction entropy, ...

    Input of predictions: For each of the N samples, there are B bounding boxes giving the probability of one of the C classes.
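
    As one concrete example of such a score, a per-image prediction entropy could be sketched as follows (hedged; the actual scorer interface and the aggregation over boxes are open design questions):

    import numpy as np

    def entropy_scores(predictions, eps=1e-12):
        # predictions: list of N arrays, each of shape (B_i, C) with per-box class probabilities.
        scores = []
        for boxes in predictions:
            if len(boxes) == 0:
                scores.append(0.0)  # no detections; one possible convention
                continue
            box_entropy = -(boxes * np.log(boxes + eps)).sum(axis=1)
            scores.append(float(box_entropy.max()))  # the most uncertain box decides the image score
        return np.array(scores)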

    • [x] Find a good way to represent the model predictions of an object detection model
    • [x] Decide which kind of models to support (Yolo, SSD, ...), see also https://heartbeat.fritz.ai/a-2019-guide-to-object-detection-9509987954c3
    • [x] Decide which kinds of scores to compute and find a good way to compute them, e.g. see https://heartbeat.fritz.ai/a-2019-guide-to-object-detection-9509987954c3 or https://arxiv.org/pdf/2004.04699.pdf
    • [x] Implement it
    type: enhancement 
    opened by MalteEbner 8
  • NNMemoryBank not working with DataParallel

    I have been using the NNMemoryBank as a component in my module and noticed that at each forward pass NNMemoryBank.bank is equal to None. This only occurs when my module is wrapped in DataParallel. As a result, throughout training my NN pairs are always random noise (surprisingly, this only hurt the contrastive learning performance by a few percentage points on linear probing??).

    Here is a simple test case that highlights the issue:

    import torch
    from lightly.models.modules import NNMemoryBankModule
    memory_bank = NNMemoryBankModule(size=1000)
    print(memory_bank.bank)
    memory_bank(torch.randn((100, 512)))
    print(memory_bank.bank)
    
    memory_bank = NNMemoryBankModule(size=1000)
    memory_bank = torch.nn.DataParallel(memory_bank, device_ids=[0,1])
    print(memory_bank.module.bank)
    memory_bank(torch.randn((100, 512)))
    print(memory_bank.module.bank)
    

    The output of the first snippet is None and then a torch.Tensor, as expected. The output of the second is None both times.
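
    One possible workaround, sketched below under the assumption of two GPUs as in the repro above: wrap only the encoder in DataParallel and keep the memory bank outside it, so its lazily created bank buffer lives on the one persistent module instead of on throwaway replicas.

    import torch
    from lightly.models.modules import NNMemoryBankModule

    # A toy linear "encoder" stands in for the real model; only it is wrapped in DataParallel.
    encoder = torch.nn.DataParallel(torch.nn.Linear(512, 512).to("cuda:0"), device_ids=[0, 1])
    memory_bank = NNMemoryBankModule(size=1000)  # deliberately NOT wrapped in DataParallel

    embeddings = encoder(torch.randn(100, 512).to("cuda:0"))
    _ = memory_bank(embeddings.cpu())  # same call pattern as in the snippet above
    print(memory_bank.bank)            # populated after the first forward pass, not None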

    opened by kfallah 7
  • Malte lig 2288 download corrupted files as artifact pip

    Description

    • Generated new openapi code
    • Adds the download_compute_worker_run_corruptness_check_information method to download the new corruptness check artifacts

    How was it tested?

    Manually running the example code.

    Why no unittests?

    The other artifact download functions are not tested either, because the code is really tiny. Thus I kept it consistent.

    ONLY MERGE AFTER https://github.com/lightly-ai/lightly-core/pull/2212

    opened by MalteEbner 1
  • MemoryBankModule - no distributed all gather prior to queueing a batch

    If I'm not mistaken, the batch is not all_gathered before the queue is updated. I was just comparing the code against the one it was based on (MoCo's memory bank) and noticed this: https://github.com/facebookresearch/moco/blob/78b69cafae80bc74cd1a89ac3fb365dc20d157d3/moco/builder.py#L55

    Was this done on purpose?

    Is it because different processes would use the same queued batches, unlike the DistributedDataLoader batches, which are indexed in a DDP-aware way?
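
    For reference, the usual pattern (e.g. in MoCo) is to all_gather the batch across processes before enqueueing it. A hedged sketch of such a helper (note that all_gather does not propagate gradients, which is fine for a queue that is only read):

    import torch
    import torch.distributed as dist

    @torch.no_grad()
    def concat_all_gather(tensor):
        # Gather the batch from every process before pushing it into the memory bank;
        # falls back to a no-op when not running in distributed mode.
        if not (dist.is_available() and dist.is_initialized()):
            return tensor
        gathered = [torch.zeros_like(tensor) for _ in range(dist.get_world_size())]
        dist.all_gather(gathered, tensor)
        return torch.cat(gathered, dim=0)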

    opened by ibro45 0
  • how can I reconstruct images (MAE)

    Hi and thank you for writing this great library.

    I'm trying to figure out whether my MAE training worked, beyond looking at MSE numbers. I input an image of shape 256x256 (RGB) and get predictions of shape b x 38 x 3072. I'm using the default torchvision.models.vit_b_32.

    I am not sure how to reshape this back into an image. It also just doesn't make sense based on the numbers - shouldn't there be 64 patches of 3072 values each?
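
    For reference, the reverse "un-patchify" step can be sketched as below, under the assumption that the model outputs one token per image patch, ordered row-major, and that each patch was flattened in (patch_row, patch_col, channel) order; for 256x256 images with patch size 32 that is 8*8 = 64 patches of 32*32*3 = 3072 values. If the 38 tokens correspond only to the masked patches, they first have to be scattered back to their original positions.

    import torch

    def unpatchify(patches, patch_size=32, channels=3):
        # patches: (B, num_patches, patch_size * patch_size * channels)
        b, n, d = patches.shape
        assert d == patch_size * patch_size * channels
        h = w = int(n ** 0.5)  # patches per side (requires a square grid)
        x = patches.reshape(b, h, w, patch_size, patch_size, channels)
        x = x.permute(0, 5, 1, 3, 2, 4)  # (B, C, h, patch_rows, w, patch_cols)
        return x.reshape(b, channels, h * patch_size, w * patch_size)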

    opened by DanTaranis 7
  • SSL Online Evaluator Callback with Custom Data (i.e., different than pre-training data)

    Hi,

    Is it possible to run the online evaluation of the SSL encoder with a linear classifier on data different from the one used for pre-training?

    opened by aqibsaeed 2