Modeling Category-Selective Cortical Regions with Topographic Variational Autoencoders

Overview

This repository contains code to model category-selective cortical regions with Topographic Variational Autoencoders (TVAEs), and to compare their category selectivity (for objects, faces, bodies, and places) against a non-topographic pretrained AlexNet baseline and the TDANN.

Getting Started

Install requirements with Anaconda:

conda env create -f environment.yml

Activate the conda environment:

conda activate tvae

Install the tvae package

Install the tvae package inside of your conda environment. This allows you to run experiments with the tvae command. At the root of the project directory, run (using your environment's pip):

pip3 install -e .

If you need help finding your environment's pip, run which python; it should point you to a directory such as .../anaconda3/envs/tvae/bin/, where pip is also located.

(Optional) Setup Weights & Biases:

This repository uses Weights & Biases for experiment tracking. By default this is turned off. However, if you would like to use this (highly recommended!) functionality, all you have to do is set 'wandb_on': True in the experiment config, and set your account's project and entity names in the tvae/utils/logging.py file.

For more information on creating a Weights & Biases account, see the Weights & Biases documentation and the associated quickstart guide.
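
For reference, a minimal sketch of the two changes described above. The exact contents of tvae/utils/logging.py are an assumption here and may be organized differently in the actual file:

    # 1) In your experiment config (documented option):
    config = {
        # ... other experiment options ...
        'wandb_on': True,   # turn on Weights & Biases logging
    }

    # 2) A call like the following is assumed to live in tvae/utils/logging.py;
    # replace the placeholders with your own W&B account details:
    import wandb
    wandb.init(project='your-project-name',   # placeholder project name
               entity='your-entity-name')     # placeholder entity (account/team)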

Running an experiment

To evaluate the selectivity of the pretrained AlexNet (the non-topographic baseline), you can run:

  • tvae --name 'ffa_modeling_pretrained_alexnet'

To train and evaluate the selectivity of the TVAE for objects, faces, bodies, and places, you can run:

  • tvae --name 'ffa_modeling_fc6'

To train and evaluate the selectivity of the TDANN for objects, faces, bodies, and places, you can run:

  • tvae --name 'ffa_modeling_tdann'

To evaluate the selectivity of the TVAE on abstract categories (animacy vs. inanimacy):

  • tvae --name 'ffa_modeling_fc6_functional'

To evaluate the selectivity of the TDANN on abstract categories (animacy vs. inanimacy):

  • tvae --name 'ffa_modeling_tdann_functional'

These 'functional' experiment files can also easily be modified to test selectivity for big vs. small objects by simply changing the directories of the input images.
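
As a purely illustrative sketch, switching a 'functional' experiment to test object size could look like the edit below; the key name 'image_dirs' and the paths are hypothetical placeholders, not the repository's actual config fields:

    # Hypothetical edit to a 'functional' experiment config; the key name and
    # paths below are placeholders -- check the experiment file for the real ones.
    config = {
        # ... model and training options ...
        'image_dirs': {
            'big_objects':   './data/stimuli/big_objects',
            'small_objects': './data/stimuli/small_objects',
        },
    }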

Basics of the framework

  • All experiments can be found in tvae/experiments/, and begin with the model specification, followed by the experiment config.

Model Architecture Options

  • 'mu_init': int, Initialization value for the mu parameter
  • 's_dim': int, Dimensionality of the latent space
  • 'k': int, size of the summation kernel used to define the local topographic structure
  • 'group_kernel': tuple of int, defines the size of the kernel used by the grouper; the exact definition and relationship to W varies for each experiment.
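
Taken together, these options might appear in an experiment config as in the sketch below; the values are illustrative placeholders, not the defaults used in the experiments:

    # Illustrative model-architecture options (placeholder values, not defaults)
    model_config = {
        'mu_init': 30,              # initialization value for the mu parameter
        's_dim': 512,               # dimensionality of the latent space
        'k': 5,                     # size of the local topographic summation kernel
        'group_kernel': (5, 5, 1),  # grouper kernel size (experiment-dependent)
    }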

Training Options

  • 'wandb_on': bool, if True, use weights & biases logging
  • 'lr': float, learning rate
  • 'momentum': float, standard momentum used in SGD
  • 'max_epochs': int, total training epochs
  • 'eval_epochs': int, epochs between evaluations on the test set (for MNIST)
  • 'batch_size': int, number of samples per batch
  • 'n_is_samples': int, number of importance samples when computing the log-likelihood on MNIST.
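
Likewise, the training options might be collected in the experiment config as in the sketch below (placeholder values only, not the repository defaults):

    # Illustrative training options (placeholder values, not the repo defaults)
    train_config = {
        'wandb_on': False,    # enable Weights & Biases logging
        'lr': 1e-3,           # learning rate
        'momentum': 0.9,      # SGD momentum
        'max_epochs': 100,    # total training epochs
        'eval_epochs': 10,    # epochs between evaluations on the test set (MNIST)
        'batch_size': 128,    # samples per batch
        'n_is_samples': 100,  # importance samples for the MNIST log-likelihood
    }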