Lucid Sonic Dreams

Overview

Lucid Sonic Dreams syncs GAN-generated visuals to music. By default, it uses NVLabs StyleGAN2, with pre-trained models lifted from Justin Pinkney's consolidated repository. Custom weights and other GAN architectures can be used as well.

Sample output can be found on YouTube and Instagram.

Installation

This implementation has been tested on Python 3.6 and 3.7. As per NVLabs' TensorFlow implementation of StyleGAN2, TensorFlow 1.15 is required. TensorFlow 2.x is not supported.

To install, simply run:

pip install lucidsonicdreams
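
Because the package targets TensorFlow 1.15, it helps to pin that version before installing. The lines below are a minimal setup sketch (assuming Python 3.6/3.7 and pip; the choice between tensorflow-gpu and tensorflow depends on whether a GPU is available), not an official install script:

pip install tensorflow-gpu==1.15   # or tensorflow==1.15 on CPU-only machines
pip install lucidsonicdreams
python -c "import lucidsonicdreams"   # quick check that the package is importable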

Usage

You may refer to the Lucid Sonic Dreams Tutorial Notebook for full parameter descriptions and sample code templates. A basic visualization snippet is also found below.

Basic Visualization

from lucidsonicdreams import LucidSonicDream


L = LucidSonicDream(song = 'song.mp3',
                    style = 'abstract photos')

L.hallucinate(file_name = 'song.mp4') 
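
To see which default style names are valid, the package exposes show_styles(). The snippet below also sketches pointing style at a local StyleGAN2 .pkl checkpoint; the overview notes that custom weights are supported, but the exact path-based call shown here is an assumption, not verified documentation:

from lucidsonicdreams import show_styles, LucidSonicDream

# Print the list of valid default style names
show_styles()

# Hedged sketch: pass a path to your own StyleGAN2 weights instead of a named style
L = LucidSonicDream(song = 'song.mp3',
                    style = 'my_custom_weights.pkl')   # hypothetical local .pkl file

L.hallucinate(file_name = 'song.mp4')
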
Comments
  • where to place .pkl model files?

    Hi,

    Thanks for the fantastic repo.

    I really want to drop a custom .pkl model file into LucidSonicDreams, but it isn't obvious to me where I should put it. I'm working in Colab for the time being.

    Thanks,

    Mark

    opened by markhanslip 1
  • Installing in Ubuntu 20.04 ?

    Hi, I've tried for hours now to install Lucid Sonic Dreams on Ubuntu 20.04. How do I install it correctly so that it works? I tried it with Anaconda but had no luck... a little desperate now! Update: I installed everything without errors, using setup.py to install the dependencies. But now I am stuck. Where and how do I execute this:

    from lucidsonicdreams import show_styles

    # Show valid default style names
    show_styles()

    or this?

    from lucidsonicdreams import LucidSonicDream

    L = LucidSonicDream(song = 'song.mp3', style = 'abstract photos')

    L.hallucinate(file_name = 'song.mp4')

    Can someone enlighten me, please?

    opened by Colliwomple 0
  • Fix for broken Deps possibly? Please advise! LSD colab is BROKEN! Thanks!

    See https://github.com/mikaelalafriz/lucid-sonic-dreams/compare/main...pollinations:lucid-sonic-dreams:main for a suggestion that the pollinations fork reference itself, so that the fixes they made can actually be used; otherwise it refers back to the same broken code here that is breaking the Colabs for LSD.

    [Mostly fused-bias errors, due to incompatibilities from breaking changes in several dependencies and potential Python 2/3 issues with TensorFlow 1.x/2.x.]

    It's fixable, but from what I can see we need to pin the old working dependencies, not the new breaking ones. All this began after the default code attempted to integrate ADA, from what I could see. Correct me if I am wrong, thanks!

    opened by cleancoindev 1
  • Real time support

    Hi,

    First, thank you for your great work - it's incredible!

    I was wondering if, in your opinion, it would be possible to extend your work to generate the visuals in real time. This would mean streaming audio data (or possibly MIDI) rather than using pre-rendered files. I guess the frame rate might be a little low at a resolution of 1024, but it would still be great to have this option for someone who has a lot of GPUs. Do you think it would be at all realistic?

    Keep up the amazing work!

    opened by lowlypalace 0
  • ModuleNotFoundError: No module named 'lucidsonicdreams'

    I'm trying to run a test and this is how I have the Python file written. Any help would be appreciated.

    Command I input: python proud.py (to run the Python code below)

    from lucidsonicdreams import LucidSonicDream

    L = LucidSonicDream(song = 'proud.mp3', style = 'abstract photos')

    L.hallucinate(file_name = 'proud.mp4', resolution = 360, start = 30, duration = 45)

    files.download("proud of you.mp4")

    Error I'm getting:

    Traceback (most recent call last):
      File "proud.py", line 1, in <module>
        from lucidsonicdreams import LucidSonicDream
    ModuleNotFoundError: No module named 'lucidsonicdreams'


    opened by Texagon 3
  • index out of bounds

    Hi! I am trying out the script in order to sync some images I have generated using VQGAN+CLIP to my audio. Here's the code:

    def load_imgs(noise_batch, class_batch):
        # just loads N images randomly
        return images
    
    L = LucidSonicDream('audio_5.mp3',
                        style = load_imgs, 
                        input_shape = 592,
                        num_possible_classes = 1000)
    
    L.hallucinate('video_sync.mp4',
                  output_audio = 'audio_sync.mp3',
                  speed_fpm = 3,
                  classes = [13, 14, 22, 24, 301, 84, 99, 100, 134, 143, 393, 394],
                  class_shuffle_seconds = 10, 
                  class_shuffle_strength = 0.1,
                  class_complexity = 0.5,
                  class_smooth_seconds = 4,
                  motion_react = 0.35,
                  flash_strength = 1)
                  #contrast_strength = 0.5)
    

    The error appears just at the end of the process:

    IndexError                                Traceback (most recent call last)
    <ipython-input-15-aeedb41e1387> in <module>()
         15               class_smooth_seconds = 4,
         16               motion_react = 0.35,
    ---> 17               flash_strength = 1)
         18               #contrast_strength = 0.5)
    
    2 frames
    /usr/local/lib/python3.7/dist-packages/lucidsonicdreams/main.py in apply_effect(self, array, index)
        742     '''Apply effect to image (array)'''
        743 
    --> 744     amplitude = self.spec[index]
        745     return self.func(array=array, strength = self.strength, amplitude=amplitude)
    
    IndexError: index 207 is out of bounds for axis 0 with size 207
    

    Any idea on how to avoid it? Thanks in advance!

    opened by shoegazerstella 0
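
Regarding the IndexError in the comment above: the traceback shows the per-frame amplitude lookup in apply_effect running one index past the end of the spectrogram array. A possible local workaround, purely a hedged sketch of the clamping idea rather than a verified fix to lucidsonicdreams itself, is to cap the frame index at the last valid position:

import numpy as np

spec = np.zeros(207)    # hypothetical amplitude envelope with 207 frames, as in the traceback
index = 207             # one past the last valid index, which is what triggers the error

# Clamp the index so the final frame reuses the last available amplitude value
amplitude = spec[min(index, len(spec) - 1)]
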
Releases

v_04