The official project of SimSwap (ACM MM 2020)

Overview

SimSwap: An Efficient Framework For High Fidelity Face Swapping

Proceedings of the 28th ACM International Conference on Multimedia

The official repository, implemented in PyTorch

Currently, only the test code is available; training scripts are coming soon.

[arXiv paper]

[ACM DOI paper]

[Google Drive Paper link]

[Baidu Drive Paper link] Password: ummt

Results


Video

High-quality videos can be found at the links below:

[Google Drive link for video 1]

[Google Drive link for video 2]

[Baidu Drive link for video] Password: b26n

[Online Video]

Dependencies

  • python 3.6+
  • pytorch 1.5+
  • torchvision
  • opencv
  • pillow
  • numpy

Usage

To test the pretrained model

python test_one_image.py --isTrain false  --name people --Arc_path arcface_model/arcface_checkpoint.tar --pic_a_path crop_224/6.jpg --pic_b_path crop_224/ds.jpg --output_path output/

--name refers to the name of the SimSwap training run (the corresponding checkpoint directory).

Pretrained model

Usage

There are two archive files in the drive: checkpoints.zip and arcface_checkpoint.tar

  • Copy the arcface_checkpoint.tar into ./arcface_model
  • Unzip checkpoints.zip and place the extracted folder in the root directory ./

[Google Drive]

[Baidu Drive] Password: jd2v
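
For a quick sanity check that the files ended up where the test command above expects them, the following minimal sketch can be used (not part of the official instructions; it only verifies the two paths named in the steps above):

```python
import os

# Locations described in the preparation steps above.
expected = [
    "./arcface_model/arcface_checkpoint.tar",  # ArcFace identity checkpoint
    "./checkpoints",                           # extracted from checkpoints.zip
]

for path in expected:
    print(f"{path}: {'found' if os.path.exists(path) else 'MISSING'}")
```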

To cite our paper

@inproceedings{DBLP:conf/mm/ChenCNG20,
  author    = {Renwang Chen and
               Xuanhong Chen and
               Bingbing Ni and
               Yanhao Ge},
  title     = {SimSwap: An Efficient Framework For High Fidelity Face Swapping},
  booktitle = {{MM} '20: The 28th {ACM} International Conference on Multimedia},
  pages     = {2003--2011},
  publisher = {{ACM}},
  year      = {2020},
  url       = {https://doi.org/10.1145/3394171.3413630},
  doi       = {10.1145/3394171.3413630},
  timestamp = {Thu, 15 Oct 2020 16:32:08 +0200},
  biburl    = {https://dblp.org/rec/conf/mm/ChenCNG20.bib},
  bibsource = {dblp computer science bibliography, https://dblp.org}
}

Related Projects

Please also visit our other ACM MM 2020 high-quality style transfer project.


Learn about our other projects: [RainNet], [Sketch Generation], [CooGAN], [Knowledge Style Transfer], [SimSwap], [ASMA-GAN], [SNGAN-Projection-pytorch], and [Pretrained_VGG19].

Comments
  • How to RESUME training

    My training run on Colab was interrupted, and I have a backup of the .pth file generated at that time. How can I resume training from where it left off?

    opened by osushilover 30
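
    A possible (unofficial) way to resume: reload the backed-up .pth into the generator before continuing training. The path and module below are placeholders; adapt them to whatever your training script actually builds. If only the weights were saved, the optimizer restarts from scratch and the losses may spike briefly before recovering.

    ```python
    import torch
    import torch.nn as nn

    # Placeholder network standing in for the SimSwap generator your script builds.
    netG = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1))

    checkpoint_path = "backup/latest_net_G.pth"           # hypothetical backup location
    state = torch.load(checkpoint_path, map_location="cpu")

    # If the .pth holds a plain state_dict, restoring it is enough to continue training.
    netG.load_state_dict(state)

    # If optimizer state and the epoch counter were saved as well, restore them too:
    # optimizer_G.load_state_dict(state["optimizer_G"])
    # start_epoch = state["epoch"] + 1
    ```
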
  • Watermark - Question ❓ (not an issue)

    Sorry if my question sounds too rude: is it possible to remove the watermark with an additional parameter?

    In case the authors are not interested in allowing the watermark to be removed, I will of course respect that. Just to be clear, I didn't mean to be rude; I was only wondering if it is possible.

    Thanks ahead, and please keep up the good work! :)

    opened by AlonDan 16
  • Limitation in processing number of video frames according to GPU memory?

    I got it working on my GeForce GTX 1050 / 2 GB, at least for videos no longer than ~16 frames before the GPU runs out of memory, so I wonder whether there is also a limit when using an 8 GB GPU.

    I had the same problem with Wav2Lip, but there it could be solved by setting the chunk size to 1.

    Would it (theoretically) be possible to process videos in SimSwap in smaller parts or chunks by releasing GPU memory every 15 frames?

    opened by instant-high 13
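
    Chunked processing along these lines is possible in principle. A minimal sketch (not the project's code), assuming a hypothetical swap_frame(model, frame) that returns the swapped frame as a GPU tensor:

    ```python
    import torch

    def swap_video_in_chunks(model, frames, swap_frame, chunk_size=15):
        """Process frames in small chunks so peak GPU memory stays bounded."""
        results = []
        with torch.no_grad():                       # no gradients needed at test time
            for start in range(0, len(frames), chunk_size):
                for frame in frames[start:start + chunk_size]:
                    out = swap_frame(model, frame)  # hypothetical per-frame swap
                    results.append(out.cpu())       # move the result off the GPU
                torch.cuda.empty_cache()            # release cached blocks after each chunk
        return results
    ```
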
  • Simswap 512 (beta version)  rendering

    With SimSwap 512 (ffhq mode), face swapping runs without error, but the face is not swapped and appears as in the attached image.

    Using --use_mask does not change the result; it generates a similar image.

    opened by birolkuyumcu 12
  • Training model

    Hi, thanks for releasing this amazing work!

    I'm trying to reproduce the training part, and I borrowed a lot of code from fs_model.py.

    However, in another issue, #27, you recommend using "opt.model == 'pix2pixHD'" to reproduce the base performance. I'm wondering how these two models are used?

    By the way, when I try to train the model using fs_model.py, the gradient penalty loss gets very high, even when it is weighted by λ = 1e-5. Do you have any idea about this?

    opened by JosephPai 11
  • How to use GPU ?

    Hello,

    First, thanks a lot for your work, it's amazing :)

    Maybe this was already answered, but I can't find the solution, so I'm asking for help.

    SimSwap is on my SSD and I have installed onnxruntime-gpu.

    I can't get higher than 4 it/s, and when I look at component usage my CPU is at almost 80% while my graphics card is at 5%, so I deduce that something is wrong.

    Can you help me, please? Thank you very much in advance.

    opened by DrBlou 10
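
    One way to check whether the GPU is used at all (a generic diagnostic, not an official one): both PyTorch and onnxruntime must see CUDA, because the insightface detection/recognition models run through onnxruntime and silently fall back to the CPU otherwise.

    ```python
    import torch
    import onnxruntime as ort

    print("torch sees CUDA:     ", torch.cuda.is_available())
    print("onnxruntime device:  ", ort.get_device())                # "GPU" with onnxruntime-gpu
    print("available providers: ", ort.get_available_providers())   # should include CUDAExecutionProvider
    ```
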
  • ERROR: test_video_swapspecific.py

    test_video_swapsingle works perfectly, but my first attempt to swap a specific face gives the following error (after processing a few frames):

    Traceback (most recent call last):
      File "test_video_swapspecific.py", line 83, in <module>
        model, app, opt.output_path, temp_results_dir=opt.temp_path, no_simswaplogo=opt.no_simswaplogo, use_mask=opt.use_mask)
      File "C:\Tutorial\SimSwap\util\videoswap_specific.py", line 98, in video_swap
        os.path.join(temp_results_dir, 'frame_{:0>7d}.jpg'.format(frame_index)), no_simswaplogo, pasring_model=net, use_mask=use_mask, norm=spNorm)
      File "C:\Tutorial\SimSwap\util\reverse2original.py", line 88, in reverse2wholeimage
        for swaped_img, mat, source_img in zip(swaped_imgs, mats, b_align_crop_tenor_list):
    TypeError: zip argument #1 must support iteration

    opened by instant-high 10
  • ❓ Can we change Hair Color and such? - (not issue)

    I've just updated to the newest version. While following the Preparation guide I noticed something ✨NEW✨: SimSwap now uses face parsing, so I downloaded the model from the link and put it inside the correct directory as explained.

    I've tried --use_mask for no bounding box and it's BEAUTIFUL! So far so good.

    Since SimSwap now uses Face-Parsing.PyTorch... does it mean we can go further and tweak some other masks, such as hair color (and maybe other parts as well)?

    If so, what commands would I need to add? It would be a nice thing to test, as long as it's not complicated, since I'm not a programmer.

    ❓ Can we change the colors of SPECIFIC MASK PARTS? ❓ Can we decide which SPECIFIC MASK PARTS are included (selected parts of the face)?

    enhancement 
    opened by AlonDan 10
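
    In principle this is possible: the face-parsing network assigns every pixel a class index (skin, hair, nose, ...), so a mask can be built from whichever classes you want and then recolored or excluded. A minimal sketch, not a SimSwap command; the class indices below are placeholders and depend on the parsing model you use:

    ```python
    import numpy as np

    def mask_from_parsing(parsing, keep_classes):
        """Binary mask from a per-pixel class map produced by a face parser.

        parsing:      H x W array of integer class indices
        keep_classes: class indices to keep in the mask (model-dependent)
        """
        return np.isin(parsing, keep_classes).astype(np.float32)

    # Hypothetical indices; look up the real ones for your parsing model.
    SKIN, HAIR = 1, 17
    parsing = np.random.randint(0, 19, size=(512, 512))   # stand-in for a real parsing map
    hair_mask = mask_from_parsing(parsing, [HAIR])
    face_mask = mask_from_parsing(parsing, [SKIN])
    ```
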
  • AttributeError: 'EfficientNet' object has no attribute 'act1' - same error every time, does anyone know how to solve it?

    When loading checkpoints, torch\serialization.py:786 prints a long series of SourceChangeWarning messages ("source code of class ... has changed. you can retrieve the original source code by accessing the object's source attribute or set torch.nn.Module.dump_patches = True and use the patch tool to revert the changes.") for DataParallel, Conv2d, BatchNorm2d, PReLU, MaxPool2d, Sequential, AdaptiveAvgPool2d, Linear, Sigmoid, Dropout and BatchNorm1d, followed by:

    Traceback (most recent call last):
      File "C:\Users\spn\Fake\SimSwap\train.py", line 139, in <module>
        model.initialize(opt)
      File "C:\Users\spn\Fake\SimSwap\models\projected_model.py", line 57, in initialize
        self.netD = ProjectedDiscriminator(diffaug=False, interp224=False, **{})
      File "C:\Users\spn\Fake\SimSwap\pg_modules\projected_discriminator.py", line 161, in __init__
        self.feature_network = F_RandomProj(**backbone_kwargs)
      File "C:\Users\spn\Fake\SimSwap\pg_modules\projector.py", line 108, in __init__
        self.pretrained, self.scratch = _make_projector(im_res=im_res, cout=self.cout, proj_type=self.proj_type, expand=self.expand)
      File "C:\Users\spn\Fake\SimSwap\pg_modules\projector.py", line 64, in _make_projector
        pretrained = _make_efficientnet(model)
      File "C:\Users\spn\Fake\SimSwap\pg_modules\projector.py", line 35, in _make_efficientnet
        pretrained.layer0 = nn.Sequential(model.conv_stem, model.bn1, model.act1, *model.blocks[0:2])
      File "L:\spn\Anaconda3\lib\site-packages\torch\nn\modules\module.py", line 1207, in __getattr__
        raise AttributeError("'{}' object has no attribute '{}'".format(
    AttributeError: 'EfficientNet' object has no attribute 'act1'

    opened by 1YasserAmmar1 8
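
    This usually points to a timm version mismatch: newer timm releases fuse the EfficientNet stem activation into the norm layer, so model.act1 no longer exists. One hedged workaround (not the authors' fix) is to install an older timm release; another is to guard the attribute access in pg_modules/projector.py, as in this sketch (the model name is only an example):

    ```python
    import timm
    import torch.nn as nn

    model = timm.create_model("tf_efficientnet_lite0", pretrained=False)

    # Fall back to Identity when act1 has been fused into bn1 (newer timm versions).
    act1 = getattr(model, "act1", nn.Identity())
    layer0 = nn.Sequential(model.conv_stem, model.bn1, act1, *model.blocks[0:2])
    ```
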
  • Memory problem

    Hello everyone. I can't find the solution; it fails at 95% every time. I tried two different videos of ≈ 8-10 min.

    Thanks for your help.

    (simswap) C:\Users\Max\Desktop\SimSwap\SimSwap-main>python test_video_swap_multispecific.py --no_simswaplogo --crop_size 512 --gpu_ids 0 --use_mask --name people --Arc_path arcface_model/arcface_checkpoint.tar --video_path ./video/video.mp4 --output_path ./output/video.mp4 --temp_path ./temp_results --multisepcific_dir ./video/multispecific

    Arc_path: arcface_model/arcface_checkpoint.tar aspect_ratio: 1.0 batchSize: 8 checkpoints_dir: ./checkpoints cluster_path: features_clustered_010.npy crop_size: 512 data_type: 32 dataroot: ./datasets/cityscapes/ display_winsize: 512 engine: None export_onnx: None feat_num: 3 fineSize: 512 fp16: False gpu_ids: [0] how_many: 50 id_thres: 0.03 image_size: 224 input_nc: 3 instance_feat: False isTrain: False label_feat: False label_nc: 0 latent_size: 512 loadSize: 1024 load_features: False local_rank: 0 max_dataset_size: inf model: pix2pixHD multisepcific_dir: ./video/multispecific nThreads: 2 n_blocks_global: 6 n_blocks_local: 3 n_clusters: 10 n_downsample_E: 4 n_downsample_global: 3 n_local_enhancers: 1 name: people nef: 16 netG: global ngf: 64 niter_fix_global: 0 no_flip: False no_instance: False no_simswaplogo: True norm: batch norm_G: spectralspadesyncbatch3x3 ntest: inf onnx: None output_nc: 3 output_path: ./output/video.mp4 phase: test pic_a_path: ./crop_224/gdg.jpg pic_b_path: ./crop_224/zrf.jpg pic_specific_path: ./crop_224/zrf.jpg resize_or_crop: scale_width results_dir: ./results/ semantic_nc: 3 serial_batches: False temp_path: ./temp_results tf_log: False use_dropout: False use_encoded_image: False use_mask: True verbose: False video_path: ./video/video.mp4 which_epoch: latest -------------- End ----------------

    input mean and std: 127.5 127.5
    find model: ./insightface_func/models\antelope\glintr100.onnx recognition
    find model: ./insightface_func/models\antelope\scrfd_10g_bnkps.onnx detection
    set det-size: (640, 640)
    (142, 366, 4)
    95%|██████████████████████████████████████████████████████████████████████▍ | 10906/11457 [1:47:21<05:25, 1.69it/s]

    Traceback (most recent call last):
      File "test_video_swap_multispecific.py", line 98, in <module>
        model, app, opt.output_path, temp_results_dir=opt.temp_path, no_simswaplogo=opt.no_simswaplogo, use_mask=opt.use_mask, crop_size=crop_size)
      File "C:\Users\Max\Desktop\SimSwap\SimSwap-main\util\videoswap_multispecific.py", line 114, in video_swap
        os.path.join(temp_results_dir, 'frame_{:0>7d}.jpg'.format(frame_index)), no_simswaplogo, pasring_model=net, use_mask=use_mask, norm=spNorm)
      File "C:\Users\Max\Desktop\SimSwap\SimSwap-main\util\reverse2original.py", line 170, in reverse2wholeimage
        img = img_mask * target_image + (1-img_mask) * img
    numpy.core._exceptions.MemoryError: Unable to allocate 21.1 MiB for an array with shape (720, 1280, 3) and data type float64

    opened by machinbidule62 7
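
    The failing line blends the mask and the frame in float64 (numpy's default once one operand is float64), which doubles the memory needed per frame. A hedged workaround (not the official fix) is to keep the blend in float32; a standalone sketch of the same operation:

    ```python
    import numpy as np

    # Stand-ins for one 720x1280 frame; in reverse2original.py these come from
    # the parsing mask and the target/original frames.
    img_mask = np.random.rand(720, 1280, 1).astype(np.float32)
    target_image = np.random.rand(720, 1280, 3).astype(np.float32)
    img = np.random.rand(720, 1280, 3).astype(np.float32)

    # Same blend as the failing line, but in float32 (half the memory of float64).
    img = img_mask * target_image + (1.0 - img_mask) * img
    ```
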
  • Links to lite and lite-enhanced versions 224 and 512 VGGFace-2 dataset

    I made lite versions of the 224 and 512 VGGFace-2 datasets. There are four archives in the folder: lite versions of VGGFace-2 cropped and aligned to 224 and 512, and lite HQ versions (224 and 512) enhanced with GPEN. All archives contain 15,199 images. I know this is very small for proper model training, but maybe they will be useful to someone. I'm running out of space on my Google Drive, so I'll keep these archives there for about a week; if someone wants, you can make a mirror and post links!

    Actually, I have a suggestion: it seems that the life cycle of this repository is coming to an end and we are all waiting for the new SimSwap++, so let's make something like a model zoo and share our models! Perhaps someone has a powerful GPU or a Colab Pro subscription; each of us can contribute to SimSwap support, whether by making scripts, sharing experience in training models, or sharing trained models.

    It would be really awesome if @neuralchen decided to share their latest 512 model!

    Link to Gdrive folder with datasets


    opened by netrunner-exe 7
  • CPU 100% GPU only 5%

    I know this issue has been discussed before. My GPU is an RTX 5000 on a cloud GPU service similar to Colab. I think this issue can be solved as in https://github.com/mike9251/simswap-inference-pytorch, but I can't find model_zoo.py (...Anaconda3\envs\myenv\Lib\site-packages\insightface\model_zoo\model_zoo.py) because I'm using a cloud GPU. Can anyone help me find this file so I can edit it on a cloud GPU / JupyterLab? Thanks to this community.

    opened by poepoeng 1
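
    On any machine, cloud or local, Python can report where the installed insightface package lives instead of guessing the path; a short sketch:

    ```python
    import insightface.model_zoo.model_zoo as model_zoo

    # Prints the absolute path of the installed model_zoo.py,
    # e.g. .../site-packages/insightface/model_zoo/model_zoo.py
    print(model_zoo.__file__)
    ```
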
  • ERROR: No matching distribution found for protobuf<4,>=3.20.2 (from onnx->insightface)

    Thank you for your work. When I followed the preparation guide and tried to install insightface, I ran into a problem: I can't install insightface successfully. I typed pip install insightface==0.2.1 and it always ends with this error:

    ERROR: Could not find a version that satisfies the requirement protobuf<4,>=3.20.2 (from onnx) (from versions: 2.0.0b0, 2.0.3, 2.3.0, 2.4.1, 2.5.0, 2.6.0, 2.6.1, 3.0.0a2, 3.0.0a3, 3.0.0b1, 3.0.0b1.post1, 3.0.0b1.post2, 3.0.0b2, 3.0.0b2.post1, 3.0.0b2.post2, 3.0.0b3, 3.0.0b4, 3.0.0, 3.1.0, 3.1.0.post1, 3.2.0rc1, 3.2.0rc1.post1, 3.2.0rc2, 3.2.0, 3.3.0, 3.4.0, 3.5.0.post1, 3.5.1, 3.5.2, 3.5.2.post1, 3.6.0, 3.6.1, 3.7.0rc2, 3.7.0rc3, 3.7.0, 3.7.1, 3.8.0rc1, 3.8.0, 3.9.0rc1, 3.9.0, 3.9.1, 3.9.2, 3.10.0rc1, 3.10.0, 3.11.0rc1, 3.11.0rc2, 3.11.0, 3.11.1, 3.11.2, 3.11.3, 3.12.0rc1, 3.12.0rc2, 3.12.0, 3.12.1, 3.12.2, 3.12.4, 3.13.0rc3, 3.13.0, 3.14.0rc1, 3.14.0rc2, 3.14.0rc3, 3.14.0, 3.15.0rc1, 3.15.0rc2, 3.15.0, 3.15.1, 3.15.2, 3.15.3, 3.15.4, 3.15.5, 3.15.6, 3.15.7, 3.15.8, 3.16.0rc1, 3.16.0rc2, 3.16.0, 3.17.0rc1, 3.17.0rc2, 3.17.0, 3.17.1, 3.17.2, 3.17.3, 3.18.0rc1, 3.18.0rc2, 3.18.0, 3.18.1, 3.18.3, 3.19.0rc1, 3.19.0rc2, 3.19.0, 3.19.1, 3.19.2, 3.19.3, 3.19.4, 3.19.5, 3.19.6, 4.0.0rc1, 4.0.0rc2, 4.21.0rc1, 4.21.0rc2, 4.21.0)
    ERROR: No matching distribution found for protobuf<4,>=3.20.2 (from onnx)

    Is there any solution for this error?

    opened by hahachocolate 0
  • Feat: argument to skip frames existing in directory defined by `temp_path`

    I had a few occasions when the video processing was interrupted (standby, etc.), so I want to propose a flag to skip frames that already exist and continue the process from the last frame. This option is disabled by default.

    Additionally, I moved the (in my opinion) unnecessary statement that checks after each frame whether the temp folder exists. If someone deletes the folder in the meantime the overall result is corrupt anyway, so the program can also simply exit with an error.

    opened by StevenCyb 0
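
    A minimal sketch of the idea (not the actual pull request): before swapping a frame, check whether its output file already exists in the temp_path directory and skip it if so. The filename pattern is the one used by the util/videoswap scripts.

    ```python
    import os

    def frame_already_done(temp_results_dir, frame_index):
        """True if this frame was already written by a previous, interrupted run."""
        frame_path = os.path.join(temp_results_dir, 'frame_{:0>7d}.jpg'.format(frame_index))
        return os.path.exists(frame_path)

    # Inside the per-frame loop (surrounding logic sketched as comments):
    # if skip_existing and frame_already_done(temp_results_dir, frame_index):
    #     continue
    ```
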
  • Same error messages every time for videos (doesn't happen when making a cropped image)

    Traceback (most recent call last):
      File "D:\SimSwap\SimSwap\test_video_swapsingle.py", line 58, in <module>
        app = Face_detect_crop(name='antelope', root='./insightface_func/models')
      File "D:\SimSwap\SimSwap\insightface_func\face_detect_crop_single.py", line 40, in __init__
        model = model_zoo.get_model(onnx_file)
      File "C:\Users\chick\AppData\Local\Programs\Python\Python310\lib\site-packages\insightface\model_zoo\model_zoo.py", line 56, in get_model
        model = router.get_model()
      File "C:\Users\chick\AppData\Local\Programs\Python\Python310\lib\site-packages\insightface\model_zoo\model_zoo.py", line 23, in get_model
        session = onnxruntime.InferenceSession(self.onnx_file, None)
      File "C:\Users\chick\AppData\Local\Programs\Python\Python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 347, in __init__
        self._create_inference_session(providers, provider_options, disabled_optimizers)
      File "C:\Users\chick\AppData\Local\Programs\Python\Python310\lib\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 375, in _create_inference_session
        raise ValueError(
    ValueError: This ORT build has ['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'] enabled. Since ORT 1.9, you are required to explicitly set the providers parameter when instantiating InferenceSession. For example, onnxruntime.InferenceSession(..., providers=['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'CPUExecutionProvider'], ...)

    opened by Nugget2920 2
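
    The error message itself contains the fix: since onnxruntime 1.9 the providers argument must be passed explicitly when an InferenceSession is created. A hedged sketch of the change (in insightface's model_zoo.py or wherever the session is built; adapt the provider list to your build):

    ```python
    import onnxruntime

    # Old call, which fails on onnxruntime >= 1.9:
    #   session = onnxruntime.InferenceSession(onnx_file, None)

    onnx_file = "./insightface_func/models/antelope/scrfd_10g_bnkps.onnx"  # example path from the logs above
    session = onnxruntime.InferenceSession(
        onnx_file,
        providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
    )
    ```
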
Releases: 512_beta

Owner

Six_God, a PhD student at Shanghai Jiao Tong University.