Lucid library adapted for PyTorch


Lucent


PyTorch + Lucid = Lucent

The wonderful Lucid library adapted for the wonderful PyTorch!

Lucent is not affiliated with Lucid or OpenAI's Clarity team, although we would love to be! Credit is due to the original Lucid authors; we merely adapted the code for PyTorch, and we take the blame for any issues and bugs found here.

Usage

Lucent is still in pre-alpha and can be installed with the following command:

pip install torch-lucent

In the spirit of Lucid, get up and running with Lucent immediately, thanks to Google's Colab!

You can also clone this repository and run the notebooks locally with Jupyter.

Quickstart

import torch

from lucent.optvis import render
from lucent.modelzoo import inceptionv1

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = inceptionv1(pretrained=True)
model.to(device).eval()

render.render_vis(model, "mixed4a:476")
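
render_vis also accepts an explicit objective and image parameterization instead of the "layer:channel" shorthand. A hedged sketch of the same mixed4a:476 target, spelled out (param_f and show_inline are existing render_vis arguments; the exact values here are illustrative):

from lucent.optvis import render, param, objectives

obj = objectives.channel("mixed4a", 476)   # same target as the string "mixed4a:476"
param_f = lambda: param.image(224)         # 224x224 image parameterization
imgs = render.render_vis(model, obj, param_f=param_f, show_inline=True)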

Tutorials

Other Notebooks

Here, we have tried to recreate some of the Lucid notebooks! You can also check out the lucent-notebooks repo to clone all the notebooks.

Recommended Readings

Related Talks

Slack

Check out #proj-lucid and #circuits on the Distill slack!

Additional Information

License and Disclaimer

You may use this software under the Apache 2.0 License. See LICENSE.

Comments
  • use custom model?

    Hi, I see it's possible to use models from the modelzoo; is it possible to use a custom trained model? Any documentation or direction would be appreciated.
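
    A minimal sketch of how this might look (an assumption, not official docs — render_vis accepts any torch.nn.Module; MyModel, the weights path, and the layer string are hypothetical):

    import torch
    from lucent.optvis import render
    from lucent.modelzoo.util import get_model_layers

    model = MyModel()  # hypothetical custom architecture
    model.load_state_dict(torch.load("weights.pth", map_location="cpu"))
    model.to(torch.device("cuda:0" if torch.cuda.is_available() else "cpu")).eval()

    print(get_model_layers(model))            # list the layer names lucent can target
    render.render_vis(model, "layer_name:0")  # hypothetical layer:channel pair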

    opened by dvschultz 5
  • Add activation grids notebook

    Issue #3, reproducing activation grids (https://github.com/tensorflow/lucid/blob/master/notebooks/building-blocks/ActivationGrid.ipynb)

    It's possible to try it here: https://colab.research.google.com/drive/1pEe-KmXeDJcWQYLOHwcMubS69wVVCHLe#scrollTo=xidm-QrXvL2X

    Here are the results so far with inceptionv1 and layer mixed4d:

    • reproduced: https://imgur.com/twmizR4
    • original: https://imgur.com/eaYwEWR

    Some remarks and a question:

    • I added channel_reducer as is from the original repo
    • default transforms in transform.py produce a different size (due to random scaling) each time they are called; resampling to 224 is then done to get a fixed size. Is that the same in lucid? I need to debug a little in the original repo to be sure, but please let me know if you have the answer. The reason I ask is that in this specific notebook, the cells in the grid are much smaller than 224. I added an argument "fixed_image_size" to handle this specific case, where we want a fixed image size (after resampling) which is not 224.
    • Since all layers are computed, and with this commit we can accept smaller images, it's possible to get an exception on higher layers because the image is not big enough. But it should be fine as long as the layer we are interested in is computed; I handled this exception.
    opened by mehdidc 5
  • ValueError in Render

    Hi there,

    I am trying to run the tutorial and am running into the following error:

    >>> import torch
    >>> from lucent.optvis import render, param, transform, objectives
    >>> from lucent.modelzoo import inceptionv1
    >>> device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
    >>> model = inceptionv1(pretrained=True)
    >>> _ = model.to(device).eval()
    >>> _ = render.render_vis(model, "mixed4a:476", show_inline=True)
    
      0%|                                                                                       | 0/512 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/render.py", line 113, in render_vis
        optimizer.step(closure)
      File "/Users/tatekeller/.local/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 26, in decorate_context
        return func(*args, **kwargs)
      File "/Users/tatekeller/.local/lib/python3.8/site-packages/torch/optim/adam.py", line 66, in step
        loss = closure()
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/render.py", line 97, in closure
        model(transform_f(image_f()))
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/transform.py", line 85, in inner
        x = transform(x)
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/lucent/optvis/transform.py", line 75, in inner
        M = kornia.get_rotation_matrix2d(center, angle, scale).to(device)
      File "/Users/tatekeller/opt/anaconda3/envs/spec/lib/python3.8/site-packages/kornia/geometry/transform/imgwarp.py", line 347, in get_rotation_matrix2d
        raise ValueError("Input scale must be a B tensor. Got {}"
    ValueError: Input scale must be a B tensor. Got torch.Size([1, 2])
    

    I am using a conda environment with Python 3.8.5 and pytorch=1.7.0.

    Any help regarding this error would be much appreciated!

    opened by tatkeller 4
  • Utils to show modulename with its repr(); Add Linear weighted activations as objective; Add pretrained GAN as parametrization

    Dear author,

    Thanks so much for implementing lucid in PyTorch! I have really enjoyed using it in my projects, which leverage deep neural networks as a way to understand real neurons in visual cortices. In my usage, I want to activate multiple channels together to match the selectivity of a biological neuron or of units in other networks. We can achieve this by adding up the original channel or neuron objectives, but that becomes very inefficient in backprop.

    So here are my 2 cents; in this commit I:

    • Add linearly weighted activations of a channel, neuron, or neuron group as objectives, using tensor operations.
    • Add a function in util.py that outputs the module names, to ease usage with custom networks.
    opened by Animadversio 4
  • Generating a batch of optimal stimuli, one for each unit in a layer

    Hi, I was trying to use Lucent to generate optimal stimuli for several units/neurons of a layer in parallel, so I figured I would leverage the batch processing. As illustrated in the neuron interaction tutorial notebook, I was passing a sum of objectives to the render.render_vis() function. Here is a toy example of what I want and my approach (units to be visualized: [10, 20, 30]; layer: 'readout_fc'):

    tot_objective = objectives.channel("readout_fc", 10, batch=0) \
        + objectives.channel("readout_fc", 20, batch=1) \
        + objectives.channel("readout_fc", 30, batch=2)
    param_f = lambda: param.image(135, batch=3)
    imgs = render.render_vis(model, tot_objective, param_f=param_f, preprocess=False, fixed_input_image_size=135)

    The parameter settings work beautifully when I try one unit. 😄 However, I wasn't sure if this is the correct way to approach this for multiple units in parallel (this gives me separate images for each unit). Also, when the number of units is larger, I was hoping to avoid writing the objectives out individually or running an explicit for loop. I tried using reduce as below:

    neurons = [10, 20, 30]
    tot_objective = reduce(
        lambda x, y: x + objectives.channel("readout_fc", y[0], batch=y[1]),
        list(zip(neurons, np.arange(len(neurons)))),
        0)

    Doing so gives me the same image 3 times. So, I was wondering if there is something wrong in how I am using the objective function to generate optimal stimuli for multiple units in parallel. Thanks in advance.
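    For reference, a hedged sketch of one way to build the same summed objective without reduce (assuming the toy names above; the pairwise + avoids relying on sum()'s integer start value):

    from lucent.optvis import objectives, param, render

    units = [10, 20, 30]
    objs = [objectives.channel("readout_fc", u, batch=i) for i, u in enumerate(units)]
    tot_objective = objs[0]
    for obj in objs[1:]:
        tot_objective = tot_objective + obj  # one channel objective per batch index
    param_f = lambda: param.image(135, batch=len(units))
    imgs = render.render_vis(model, tot_objective, param_f=param_f, preprocess=False)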

    opened by arnaghosh 3
  • Q: Do you use the same architecture and weights as Clarity does?

    Hi,

    I am looking for a trainable InceptionV1 model which shares the same weights with the ones Clarity team uses. Reading your code, I've found these lines:

    model_urls = {
        # InceptionV1 model used in Lucid examples, converted by ProGamerGov
        'inceptionv1': 'https://github.com/ProGamerGov/pytorch-old-tensorflow-models/raw/master/inception5h.pth',
    }
    

    Does it mean you're using exactly the same architecture and weights, so your render_vis function can reproduce the same pictures that Clarity has published?

    Thanks!

    opened by gergopool 3
  • Add direction and direction_neuron objectives

    Hi @greentfrapp,

    Thanks so much for making this repo! It has been a great help for me. For my use-case I needed the direction and direction_neuron objectives so I added them into lucent. I also included two demo files but let me know if they should be rolled into a docstring instead. This PR also lays the groundwork to reproduce the activation atlas notebooks from lucid. Would love to hear your thoughts :)

    opened by ndey96 3
  • Activation Grid Notebook

    Reproduce Lucid's Activation Grid Notebook with PyTorch and Lucent.

    The only new function required seems to be ChannelReducer, which doesn't rely on Tensorflow so it should be relatively simple to port over.

    Help wanted for this!
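
    A sketch of what such a port might look like (an assumption, not the merged code — lucid's ChannelReducer wraps sklearn.decomposition, which is framework-agnostic):

    import sklearn.decomposition

    class ChannelReducer:
        def __init__(self, n_components=3, reduction_alg="NMF", **kwargs):
            # e.g. "NMF" or "PCA"; NMF assumes non-negative activations
            self._reducer = getattr(sklearn.decomposition, reduction_alg)(
                n_components=n_components, **kwargs)

        def fit_transform(self, acts):
            # flatten spatial dims, reduce the channel dim, restore the shape
            orig_shape = acts.shape
            flat = acts.reshape([-1, orig_shape[-1]])
            reduced = self._reducer.fit_transform(flat)
            return reduced.reshape(list(orig_shape[:-1]) + [-1])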

    help wanted good first issue 
    opened by greentfrapp 3
  • get raw_activations

    Hi, thanks for this great library! I'm trying to reproduce the Activation Atlas notebook using lucent, creating grid cells in the end.

    In the notebook, the "raw activation" is available as a numpy.ndarray via "model.layers[7].activations", to be used in the next dimensionality-reduction section. How can I get this raw activation using lucent?

    I did create visualised images using lucent render_vis first, and then flattened them to use UMAP fit method, but I'm not sure this is correct. Any suggestion would be appreciated.
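
    One possible approach (a sketch using a plain PyTorch forward hook, not an official lucent API — the module attribute and variable names are assumptions):

    import torch

    activations = {}

    def save_activation(name):
        def hook(module, inputs, output):
            activations[name] = output.detach().cpu().numpy()
        return hook

    # hypothetical: hook the mixed4d module of the inceptionv1 model
    model.mixed4d.register_forward_hook(save_activation("mixed4d"))
    with torch.no_grad():
        model(image_batch)  # image_batch: a preprocessed input tensor
    raw = activations["mixed4d"]  # ndarray for the dimensionality-reduction step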

    opened by 2nayk 2
  • Temporarily freeze kornia at 0.4.0 to prevent breaking change

    kornia 0.4.1 was released recently, and it includes a breaking change to get_rotation_matrix2d.

    I've opened an issue to hopefully address this soon: https://github.com/kornia/kornia/issues/742, but in the meantime I suggest freezing kornia at 0.4.0 so that the random_rotate transform continues working as before.
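
    In the meantime, the pin could look like this (a sketch; the actual change may phrase it differently):

    pip install kornia==0.4.0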

    opened by ivanzvonkov 2
  • Suggestion for `lucent.optvis.render.hook_model`

    First, thanks for making this. Lifesaver. Two thoughts (Fwiw, the nested functions, higher-order functions and decorators make things a biiiiit hard to follow when debugging):

    1. I initially dun goofed and didn't eval the model (even though the very example notebook I'm using from lucent does lol). Maybe the hook_model function could check for nonetypes and tell the user to eval, if no saved feature maps are found?
    2. PyTorch module names usually use dot notation. Maybe use dots instead of underscores? Or just tell the user which feature map names are available and the user'll figure it out quickly enough

    Suggested replacement for this function: https://github.com/greentfrapp/lucent/blob/a2b015ce95f29460a329f750428077bcde5e4e94/lucent/optvis/render.py#L194

    def hook(layer):
        if layer == "input":
            out = image_f()
        elif layer == "labels":
            out = list(features.values())[-1].features
        else:
            assert layer in features, f"Invalid layer {layer}. Pick from one of {features.keys()}"  # suggestion 2 ish
            out = features[layer].features
        assert out is not None, "There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See Lucent notebook for example."  # suggestion 1, tell user to eval
        return out
    

    *I ran it on resnet18. Gorgeous and worked out of the box btw.


    opened by alvinwan 2
  • Code Breaks as GPU Index > 0

    When using GPU, this codebase only works for torch.device('cuda:0') -- the GPU index has to be 0.

    For example, if you choose torch.device('cuda:1'), then when you run the demo code

    import torch
    
    from lucent.optvis import render
    from lucent.modelzoo import inceptionv1
    
    # Let's use cuda:1
    device = torch.device("cuda:1")
    model = inceptionv1(pretrained=True)
    model.to(device).eval()
    
    render.render_vis(model, "mixed4a:476")
    

    you will see an error like

    ..........
    File .....lucent/optvis/render.py:206, in hook_model.<locals>.hook(layer)
        204     assert layer in features, f"Invalid layer {layer}. Retrieve the list of layers with `lucent.modelzoo.util.get_model_layers(model)`."
        205     out = features[layer].features
    --> 206 assert out is not None, "There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See README for example."
        207 return out
    
    AssertionError: There are no saved feature maps. Make sure to put the model in eval mode, like so: `model.to(device).eval()`. See README for example.
    
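    A possible workaround until this is fixed (an assumption on my part, not an official fix): expose only the desired GPU to the process, so that it becomes cuda:0 from PyTorch's point of view.

    import os
    os.environ["CUDA_VISIBLE_DEVICES"] = "1"  # set before torch initializes CUDA

    import torch
    device = torch.device("cuda:0")  # now refers to physical GPU 1
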
    opened by Haoxiang-Wang 0
  • Lucent handles greyscale images in function view incorrectly

    When rendering and visualizing greyscale images not inline, i.e., with show_inline=False, PIL throws the following error: TypeError: Cannot handle this data type: (1, 1, 1), |u1. The problem is that Lucent passes a tensor of shape [H, W, C] with C=1 and values in the range 0-255 to PIL, but PIL can only handle two-dimensional arrays of integer values. This Stackoverflow answer provides more information.

    Solution: Lucent should check whether the shape is [H, W, C=1] and reduce it to [H, W]. Alternatively, introduce a param, e.g., greyscale=True, in the view function.
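
    A sketch of the first option (an assumed fix, not merged code):

    import numpy as np
    from PIL import Image

    def to_pil(image):
        array = (image * 255).astype(np.uint8)
        if array.ndim == 3 and array.shape[-1] == 1:
            array = array[:, :, 0]  # [H, W, 1] -> [H, W] so PIL renders greyscale
        return Image.fromarray(array)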

    opened by neoglez 0
  • activation grid for hierarchical custom model

    Hi, is there a way to visualize the activation grid for a custom model with nested modules that are not explicitly named as attributes of the model? E.g., when I call get_model_layers(), you can see the following output for this custom model: [screenshot of get_model_layers() output]

    I followed your notebook on the activation grid (https://colab.research.google.com/github/greentfrapp/lucent-notebooks/blob/master/notebooks/activation_grids.ipynb#scrollTo=BDH9cXnSuu5Q). For example, I chose layer = "net_down1_maxpool_conv" (is there some kind of syntax for specifying the layers?). I also rewrote the get_layer() helper function to parse the network's layer from the string, because that layer is not a direct attribute of the network class. But when I then try to use the rendering function, there is an error in the first line of the objective function: in lines 203-206 of render.py, one of the assertions is thrown, depending on how I choose the layer string. Can you help me with this problem? Many thanks!
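
    For reference, a hedged pointer (inferred from the layer strings used elsewhere in this README, not confirmed documentation): lucent appears to address nested modules by joining the named-module path with underscores, so model.net.down1.maxpool_conv would become "net_down1_maxpool_conv"; get_model_layers prints the exact strings to use.

    from lucent.modelzoo.util import get_model_layers
    print(get_model_layers(model))  # copy the layer string exactly as printed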

    opened by An-nay-marks 1
  • Support batches for CPPN image representation

    When representing the optimized image using a CPPN network, the current implementation allows optimizing only a single image per run. This limitation prevents using, e.g., "diversity" objectives during optimization. This PR adds support for batching in the CPPN image representation by creating a batch of networks.

    Here's an example of generating a diverse batch of 2 images for the objective "mixed4d_3x3_bottleneck_pre_relu_conv:139" of the inception network.

    [example images]

    opened by shaibagon 0
  • Low GPU utilization

    I am trying to use Lucent to visualize deep neurons, but whatever I do the GPU seems under-utilized: examining utilization via nvidia-smi, I see low utilization (~10%) with occasional peaks at ~50%, but never above that. This happens both for the cppn prior and for the fourier image representation.

    Any suggestions?

    opened by shaibagon 0