Old Photo Restoration (Official PyTorch Implementation)

Overview

Project Page | Paper (CVPR version) | Paper (Journal version) | Pretrained Model | Colab Demo 🔥

Bringing Old Photos Back to Life, CVPR2020 (Oral)

Old Photo Restoration via Deep Latent Space Translation, PAMI Under Review

Ziyu Wan¹, Bo Zhang², Dongdong Chen³, Pan Zhang⁴, Dong Chen², Jing Liao¹, Fang Wen²
¹City University of Hong Kong, ²Microsoft Research Asia, ³Microsoft Cloud AI, ⁴USTC

Notes on this project

The code originates from our research project, and its aim is to demonstrate the research idea, so it has not been optimized from a product perspective. We will spend time addressing some common issues, such as the out-of-memory problem and limited resolution, but will not go deeply into engineering concerns such as inference speedup or FastAPI deployment. We welcome volunteers to contribute to this project to make it more usable for practical applications.
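
A common workaround for the out-of-memory issue is to downscale very large scans before running the pipeline. The sketch below shows one way to do that; it is not part of the official pipeline, and the size threshold and folder names are arbitrary assumptions.

# Minimal sketch (assumption): downscale very large scans before restoration to reduce GPU memory use.
# The 1500-pixel threshold and the folder names are arbitrary examples, not project defaults.
from pathlib import Path
from PIL import Image

def shrink_large_images(src_dir, dst_dir, max_side=1500):
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in Path(src_dir).iterdir():
        if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
            continue
        img = Image.open(path).convert("RGB")
        scale = max_side / max(img.size)
        if scale < 1.0:  # only shrink, never upscale
            img = img.resize((round(img.width * scale), round(img.height * scale)), Image.LANCZOS)
        img.save(dst / path.name)

shrink_large_images("my_old_photos", "my_old_photos_resized")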

New

You can now play with our Colab and try it on your photos.

Requirements

The code is tested on Ubuntu with Nvidia GPUs and CUDA installed. Python>=3.6 is required to run the code.
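
Before installing, you can quickly confirm the environment with a short check like the one below (a minimal sketch; it only verifies the Python version and whether PyTorch can see a CUDA device):

# Quick environment sanity check (sketch): Python >= 3.6 and a visible CUDA device.
import sys

assert sys.version_info >= (3, 6), "Python >= 3.6 is required"

try:
    import torch
    print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
except ImportError:
    print("PyTorch is not installed yet; install the dependencies listed below first.")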

Installation

Clone the Synchronized-BatchNorm-PyTorch repository into both Face_Enhancement/models/networks/ and Global/detection_models/:

cd Face_Enhancement/models/networks/
git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
cd ../../../
cd Global/detection_models
git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
cd ../../

Download the landmark detection pretrained model

cd Face_Detection/
wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
bzip2 -d shape_predictor_68_face_landmarks.dat.bz2
cd ../

Download the pretrained models from Azure Blob Storage: put Face_Enhancement/checkpoints.zip under ./Face_Enhancement and Global/checkpoints.zip under ./Global, then unzip each archive in place.

cd Face_Enhancement/
wget https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Face_Enhancement/checkpoints.zip
unzip checkpoints.zip
cd ../
cd Global/
wget https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Global/checkpoints.zip
unzip checkpoints.zip
cd ../

Install dependencies:

pip install -r requirements.txt

How to use?

Note: --GPU can be set to a single GPU (0), to multiple GPUs (0,1,2 or 0,2), or to -1 to run on the CPU.

1) Full Pipeline

After installation and downloading the pretrained models, you can restore old photos with a single command.

For images without scratches:

python run.py --input_folder [test_image_folder_path] \
              --output_folder [output_path] \
              --GPU 0

For scratched images:

python run.py --input_folder [test_image_folder_path] \
              --output_folder [output_path] \
              --GPU 0 \
              --with_scratch

Note: Please use absolute paths. The final results are saved in [output_path]/final_output/; the intermediate results of each stage are also kept under [output_path].
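
As a quick way to see what each stage produced, a sketch like the one below lists the per-stage subfolders. The stage folder names used here should be checked against run.py (they match the folder names reported in the issues further down); the output path is a hypothetical example.

# Sketch: list what each pipeline stage wrote under the output folder.
# ASSUMPTION: the stage folder names below match those created by run.py; verify on your machine.
import os

output_path = "/absolute/path/to/output_path"  # hypothetical example path
for stage in ["stage_1_restore_output", "stage_2_detection_output",
              "stage_3_face_output", "final_output"]:
    stage_dir = os.path.join(output_path, stage)
    if os.path.isdir(stage_dir):
        print(stage, "->", len(os.listdir(stage_dir)), "entries")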

2) Scratch Detection

Currently we do not plan to release the labeled dataset of scratched old photos directly. If you want paired data, you can run our pretrained model on your own collected images to obtain the labels.

cd Global/
python detection.py --test_path [test_image_folder_path] \
                    --output_dir [output_path] \
                    --input_size [resize_256|full_size|scale_256]
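
To assemble paired data from the detection output, something like the sketch below can match each input photo with its predicted scratch mask. How masks are laid out under --output_dir (and whether they keep the input file names) is an assumption here; check detection.py before relying on it.

# Sketch: pair input photos with the scratch masks produced by detection.py.
# ASSUMPTION: masks are written under output_dir as PNGs named after the inputs;
# check detection.py for the actual layout.
from pathlib import Path

def collect_pairs(test_path, output_dir):
    masks = {p.stem: p for p in Path(output_dir).rglob("*.png")}
    pairs = []
    for img in sorted(Path(test_path).iterdir()):
        mask = masks.get(img.stem)
        if mask is not None:
            pairs.append((img, mask))
    return pairs

for img, mask in collect_pairs("my_scratched_photos", "detection_output"):
    print(img.name, "<->", mask.name)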

3) Global Restoration

A triplet domain translation network is proposed to solve both structured degradation and unstructured degradation of old photos.

cd Global/
python test.py --Scratch_and_Quality_restore \
               --test_input [test_image_folder_path] \
               --test_mask [corresponding mask] \
               --outputs_dir [output_path]

python test.py --Quality_restore \
               --test_input [test_image_folder_path] \
               --outputs_dir [output_path]
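
Note that run.py already chains these stages for you. If you need to drive them manually from Python (for example in a notebook), a rough sketch could look like the following; it assumes the mask folder written by detection.py can be passed directly as --test_mask, which should be double-checked, and the paths are placeholders.

# Sketch: run scratch detection, then scratch-aware restoration, from Python.
# run.py already automates this; the --test_mask folder assumption should be verified.
import subprocess

input_dir = "/absolute/path/to/old_photos"   # placeholder paths
mask_dir = "/absolute/path/to/masks"
output_dir = "/absolute/path/to/restored"

subprocess.run(["python", "detection.py",
                "--test_path", input_dir,
                "--output_dir", mask_dir,
                "--input_size", "full_size"], cwd="Global", check=True)

subprocess.run(["python", "test.py", "--Scratch_and_Quality_restore",
                "--test_input", input_dir,
                "--test_mask", mask_dir,
                "--outputs_dir", output_dir], cwd="Global", check=True)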

4) Face Enhancement

We use a progressive generator to refine the face regions of old photos. More details can be found in our journal submission and in the ./Face_Enhancement folder.

NOTE: This repo is mainly for research purposes, and we have not yet optimized the running performance.

Since the model is pretrained on 256×256 images, it may not work ideally at arbitrary resolutions.
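
If you feed images to the face enhancement stage yourself, resizing them to the pretraining resolution may help. The sketch below does exactly that, under the assumption that the inputs are already cropped and aligned face regions; it is a workaround, not part of the official pipeline.

# Sketch: resize aligned face crops to the 256x256 pretraining resolution.
# ASSUMPTION: the inputs are already cropped/aligned faces; folder names are placeholders.
from pathlib import Path
from PIL import Image

def resize_to_256(src_dir, dst_dir):
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    for path in sorted(Path(src_dir).glob("*.png")):
        Image.open(path).convert("RGB").resize((256, 256), Image.BICUBIC).save(dst / path.name)

resize_to_256("aligned_faces", "aligned_faces_256")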

To Do

  • Clean testing code
  • Release pretrained model
  • Colab demo
  • Replace face detection module (dlib) with RetinaFace
  • Release training code

Citation

If you find our work useful for your research, please consider citing the following papers :)

@inproceedings{wan2020bringing,
title={Bringing Old Photos Back to Life},
author={Wan, Ziyu and Zhang, Bo and Chen, Dongdong and Zhang, Pan and Chen, Dong and Liao, Jing and Wen, Fang},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={2747--2757},
year={2020}
}
@misc{2009.07047,
Author = {Ziyu Wan and Bo Zhang and Dongdong Chen and Pan Zhang and Dong Chen and Jing Liao and Fang Wen},
Title = {Old Photo Restoration via Deep Latent Space Translation},
Year = {2020},
Eprint = {arXiv:2009.07047},
}

If you are also interested in legacy photo/video colorization, please refer to this work.

Maintenance

This project is currently maintained by Ziyu Wan and is for academic research use only. If you have any questions, feel free to contact [email protected].

License

The code and the pretrained models in this repository are under the MIT license, as specified by the LICENSE file. We use our own labeled dataset to train the scratch detection model.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact [email protected] with any additional questions or comments.

Comments
  • RuntimeError: CUDA out of memory.

    I get the following error: RuntimeError: CUDA out of memory. Tried to allocate 88.00 MiB (GPU 0; 5.80 GiB total capacity; 4.14 GiB already allocated; 154.56 MiB free; 4.24 GiB reserved in total by PyTorch)

    Is there a way to allocate more memory? I do not get why 4.14Gb are already allocated.

    opened by grenaud 21
  • Software installation/use tutorial

    Could someone post an installation and usage tutorial for Windows 10? I have installed Python, Anaconda, PyTorch, and Nvidia CUDA, but I didn't understand how to install and use this artificial intelligence software for restoring old photos (sorry if my English is bad, but I'm Italian and I'm using Google Translate).

    opened by m2asp 20
  • My reimplemention of training

    Following the journal edition of the paper, I wrote PyTorch pseudocode for the training:

        # build 2 vae network, 3 discriminators but NO transfer network for now
        # and their optimizer
        vae1, xr_recon_d, z_xr_d, \
            vae2, y_recon_d, \
            optimizer_vae1, optimizer_d1, \
            optimizer_vae2, optimizer_d2 = build_model(opt)
        start_iter = 0
        if opt.load_checkpoint_iter>0:
            checkpoint_path = checkpoint_root + f'/global_checkpoint_{opt.load_checkpoint_iter}.pth'
            if not Path(checkpoint_path).exists():
                print(f"ERROR! checkpoint_path {checkpoint_path} is None")
                exit(-1)
            state_dict = torch.load(checkpoint_path)
            start_iter = state_dict['iter']
            assert state_dict['batch_size'] == opt.batch_size, f"ERROR - batch size changed! load: {state_dict['batch_size']}, but now {opt.batch_size}"
            vae1.load_state_dict(state_dict['vae1'])
            xr_recon_d.load_state_dict(state_dict['xr_recon_d'])
            z_xr_d.load_state_dict(state_dict['z_xr_d'])
            vae2.load_state_dict(state_dict['vae2'])
            y_recon_d.load_state_dict(state_dict['y_recon_d'])
            optimizer_vae1.load_state_dict(state_dict['optimizer_vae1'])
            optimizer_d1.load_state_dict(state_dict['optimizer_d1'])
            optimizer_vae2.load_state_dict(state_dict['optimizer_vae2']) 
            optimizer_d2.load_state_dict(state_dict['optimizer_d2']) 
            print("checkpoint load successfully!")
        # create dataloader
        dataLoaderR, dataLoaderXY = get_dataloader(opt)
        dataLoaderXY_iter = iter(dataLoaderXY)
        dataLoaderR_iter = iter(dataLoaderR)
        start = time.perf_counter()
        print("train start!")
        for ii in range(opt.total_iter - start_iter):
            current_iter = ii + start_iter
            try:
                x, y, path_y = next(dataLoaderXY_iter)
            except StopIteration:
                dataLoaderXY_iter = iter(dataLoaderXY)
                x, y, path_y = next(dataLoaderXY_iter)
            try:
                r, path_r = next(dataLoaderR_iter)
            except StopIteration:
                dataLoaderR_iter = iter(dataLoaderR)
                r, path_r = next(dataLoaderR_iter)
            ### following the practice in U-GAT-IT:
            ### train D and G iteratively, but not training D multiple times than training G
            r = r.to(opt.device)
            x = x.to(opt.device)
            y = y.to(opt.device)
            if opt.debug and current_iter%500==0:
                torchvision.utils.save_image(y, 'train_vae_y.png', normalize=True)
                torchvision.utils.save_image(x, 'train_vae_x.png', normalize=True)
                torchvision.utils.save_image(r, 'train_vae_r.png', normalize=True)
    
            ### vae1 train d
            # save gpu memory since no need calc grad for net G when train net D
            with torch.no_grad():
                z_x, mean_x, var_x, recon_x = vae1(x)
                z_r, mean_r, var_r, recon_r = vae1(r)
                batch_requires_grad(z_x, mean_x, var_x, recon_x,
                                    z_r, mean_r, var_r, recon_r)
            loss_1 = 0
            adv_loss_d_x = lsgan_d(xr_recon_d(x), xr_recon_d(recon_x))
            adv_loss_d_r = lsgan_d(xr_recon_d(r), xr_recon_d(recon_r))
            # z_x is real and z_r is fake here because let z_r close to z_x
            adv_loss_d_xr = lsgan_d(z_xr_d(z_x), z_xr_d(z_r))
            loss_1_d = adv_loss_d_x + adv_loss_d_r + adv_loss_d_xr
            loss_1_d.backward()
            optimizer_d1.step()
            optimizer_d1.zero_grad()
            ### vae1 train g
            # since we need update weights of G, the result should be re-calculate with grad
            z_x, mean_x, var_x, recon_x = vae1(x)
            z_r, mean_r, var_r, recon_r = vae1(r)
            adv_loss_g_x = lsgan_g(xr_recon_d(recon_x))
            adv_loss_g_r = lsgan_g(xr_recon_d(recon_r))
            # z_x is real and z_r is fake here because let z_r close to z_x
            adv_loss_g_xr = lsgan_g(z_xr_d(z_r))
            KLDloss_1_x = -0.5 * torch.sum(1 + var_x - mean_x.pow(2) - var_x.exp())  # KLD
            L1loss_1_x  = opt.weight_alpha * F.l1_loss(x, recon_x)
            KLDloss_1_r = -0.5 * torch.sum(1 + var_r - mean_r.pow(2) - var_r.exp())  # KLD
            L1loss_1_r  = opt.weight_alpha * F.l1_loss(r, recon_r)
            loss_1_g = adv_loss_g_x + KLDloss_1_x + L1loss_1_x \
                     + adv_loss_g_r + KLDloss_1_r + L1loss_1_r \
                     + adv_loss_g_xr
            loss_1_g.backward()
            optimizer_vae1.step()
            optimizer_vae1.zero_grad()
    
            ### vae2 train d
            # save gpu memory since no need calc grad for net G when train net D
            with torch.no_grad():
                z_y, mean_y, var_y, recon_y = vae2(y)
                batch_requires_grad(z_y, mean_y, var_y, recon_y)
            adv_loss_d_y = lsgan_d(y_recon_d(y), y_recon_d(recon_y))
            loss_2_d = adv_loss_d_y
            loss_2_d.backward()
            optimizer_d2.step()
            optimizer_d2.zero_grad()
            ### vae2 train g
            # since we need update weights of G, the result should be re-calculate with grad
            z_y, mean_y, var_y, recon_y = vae2(y)
            adv_loss_g_y = lsgan_g(y_recon_d(recon_y))
            KLDloss_1_y = -0.5 * torch.sum(1 + var_y - mean_y.pow(2) - var_y.exp())  # KLD
            L1loss_1_y  = opt.weight_alpha * F.l1_loss(y, recon_y)
            loss_2_g = adv_loss_g_y + KLDloss_1_y + L1loss_1_y
            loss_2_g.backward()
            optimizer_vae2.step()
            optimizer_vae2.zero_grad()
            # debug
            if opt.debug and current_iter%500==0:
                # [print(k, 'channel 0:\n', v[0][0]) for k,v in list(model.named_parameters()) if k in ["netG_A.encoder.13.conv_block.5.weight", "netG_A.decoder.4.conv_block.5.weight"]]
                torchvision.utils.save_image(recon_x, 'train_vae_recon_x.png', normalize=True)
                torchvision.utils.save_image(recon_r, 'train_vae_recon_r.png', normalize=True)
                torchvision.utils.save_image(recon_y, 'train_vae_recon_y.png', normalize=True)
           
            if current_iter%500==0:
                print(f"""STEP {current_iter:06d} {time.perf_counter() - start:.1f} s
                loss_1_d = adv_loss_d_x + adv_loss_d_r + adv_loss_d_xr
                {loss_1_d:.3f} = {adv_loss_d_x:.3f} + {adv_loss_d_r:.3f} + {adv_loss_d_xr:.3f}
                loss_1_g = adv_loss_g_x + KLDloss_1_x + L1loss_1_x \
                     + adv_loss_g_r + KLDloss_1_r + L1loss_1_r \
                     + adv_loss_g_xr
                {loss_1_g:.3f} = {adv_loss_g_x:.3f} + {KLDloss_1_x:.3f} + {L1loss_1_x:.3f} \
                     + {adv_loss_g_r:.3f} + {KLDloss_1_r:.3f} + {L1loss_1_r:.3f} \
                     + {adv_loss_g_xr:.3f}
                """)
            if (current_iter+1)%2000==0:
                # finish the current_iter-th step, e.g. finish iter0, save as 1, resume train from iter 1
                state = {
                    'iter': current_iter,
                    'batch_size': opt.batch_size,
                    #
                    'vae1': vae1.state_dict(),
                    'xr_recon_d': xr_recon_d.state_dict(),
                    'z_xr_d': z_xr_d.state_dict(),
                    #
                    'vae2': vae2.state_dict(),
                    'y_recon_d': y_recon_d.state_dict(),
                    #
                    'optimizer_vae1': optimizer_vae1.state_dict(),
                    'optimizer_d1': optimizer_d1.state_dict(),
                    'optimizer_vae2': optimizer_vae2.state_dict(),
                    'optimizer_d2': optimizer_d2.state_dict(),
                    }
                torch.save(state, checkpoint_root + f'/global_checkpoint_{current_iter}.pth')
        print("global", time.perf_counter() - start, ' s')
    

    where lsgan_d and lsgan_g are defined as follows:

    import torch
    import torch.nn.functional as F
    ### lsgan: a=0, b=c=1
    def lsgan_d(d_logit_real, d_logit_fake):
        return F.mse_loss(d_logit_real, torch.ones_like(d_logit_real)) + d_logit_fake.pow(2).mean()
    
    def lsgan_g(d_logit_fake):
        return F.mse_loss(d_logit_fake, torch.ones_like(d_logit_fake))
    
    opened by vegetable09 13
  • cuda requirement

    Is it possible to run this on a (recent) Mac, which does not support CUDA? I would have guessed setting --GPU 0 would not attempt to call CUDA, but it fails.

    File "/Users/../Desktop/bopbtl/venv/lib/python3.7/site-packages/torch/cuda/__init__.py", line 61, in _check_driver
        raise AssertionError("Torch not compiled with CUDA enabled")
    AssertionError: Torch not compiled with CUDA enabled
    
    good first issue 
    opened by bpops 13
  • Make frontend for the project

    Make an interactive frontend for the Bringing Old Photos Back to Life project, where the user can upload a photo or capture one with the camera, then download and share the resulting image.

    opened by Aayush-hub 11
  • [Feature] Added image filter script

    Fixes #125. Added a script to apply filters to the output image; currently it converts the output to black and white. More scripts will be added so that images can be beautified according to the user's choice. This feature can help make restored old photos look better.

    Sample Video:

    https://user-images.githubusercontent.com/65889104/112759421-875fa180-9010-11eb-87bd-66cd79222127.mp4

    opened by Aayush-hub 11
  • cuDNN error: CUDNN_STATUS_INTERNAL_ERROR

    Running Stage 1: Overall restoration
    Now you are processing 5d6bfe49ge1b23ec32641&690.jpg
    Skip 5d6bfe49ge1b23ec32641&690.jpg due to an error:
    cuDNN error: CUDNN_STATUS_INTERNAL_ERROR
    You can try to repro this exception using the following code snippet. If that doesn't trigger the error, please include your original repro script when reporting this issue.

    import torch
    torch.backends.cuda.matmul.allow_tf32 = True
    torch.backends.cudnn.benchmark = True
    torch.backends.cudnn.deterministic = False
    torch.backends.cudnn.allow_tf32 = True
    data = torch.randn([1, 3, 470, 446], dtype=torch.float, device='cuda', requires_grad=True)
    net = torch.nn.Conv2d(3, 64, kernel_size=[7, 7], padding=[0, 0], stride=[1, 1], dilation=[1, 1], groups=1)
    net = net.cuda().float()
    out = net(data)
    out.backward(torch.randn_like(out))
    torch.cuda.synchronize()

    ConvolutionParams
        data_type = CUDNN_DATA_FLOAT
        padding = [0, 0, 0]
        stride = [1, 1, 0]
        dilation = [1, 1, 0]
        groups = 1
        deterministic = false
        allow_tf32 = true
    input: TensorDescriptor 0x54bd8730
        type = CUDNN_DATA_FLOAT
        nbDims = 4
        dimA = 1, 3, 470, 446,
        strideA = 628860, 209620, 446, 1,
    output: TensorDescriptor 0x54bda2f0
        type = CUDNN_DATA_FLOAT
        nbDims = 4
        dimA = 1, 64, 464, 440,
        strideA = 13066240, 204160, 440, 1,
    weight: FilterDescriptor 0x3b058e0
        type = CUDNN_DATA_FLOAT
        tensor_format = CUDNN_TENSOR_NCHW
        nbDims = 4
        dimA = 64, 3, 7, 7,
    Pointer addresses:
        input: 0x60b80c400
        output: 0x60be60000
        weight: 0x601660000

    Skipping non-file final_output
    Skipping non-file stage_1_restore_output
    Skipping non-file stage_2_detection_output
    Skipping non-file stage_3_face_output
    Now you are processing timg.jpg
    Skip timg.jpg due to an error: CUDA error: an illegal memory access was encountered
    Finish Stage 1 ...

    Is this because of insufficient video memory?

    opened by wang7143 11
  • No such file or directory: output/stage_1_restore_output/restored_image

    Need to manually create the restored_image dir otherwise the following failure occurs:

    $ python run.py --input_folder ~/Documents/revive-old-photos/input --output ~/Documents/revive-old-photos/output --GPU 0
    Running Stage 1: Overall restoration
    Traceback (most recent call last):
      File "test.py", line 6, in <module>
        from torch.autograd import Variable
    ImportError: No module named torch.autograd
    Traceback (most recent call last):
      File "run.py", line 79, in <module>
        for x in os.listdir(stage_1_results):
    FileNotFoundError: [Errno 2] No such file or directory: '~/Documents/revive-old-photos/output/stage_1_restore_output/restored_image'
    
    opened by drupanchal 11
  • Should the L1 factor 'alpha' be larger?

    I reimplemented the training code; however, it is hard to raise SSIM to 0.69 when training VAE1 and VAE2 unless I change the L1 factor 'alpha'. How long did it take you to train VAE1 and VAE2? Could you also give a rough timeline for releasing the training code?

    opened by HaloKester 9
  • CUDA out of memory

    Some images throw this exception: Skip a.png due to an error: CUDA out of memory. Tried to allocate 3.10 GiB (GPU 0; 8.00 GiB total capacity; 3.83 GiB already allocated; 2.17 GiB free; 4.40 GiB reserved in total by PyTorch).

    Thanks in advance

    opened by Davydhh 9
  • Couldn't install libraries in requirements.txt

    When I try to install the required libraries, I get the following error:

    Running setup.py install for torch ... error ERROR: Command errored out with exit status 1: command: 'c:\users\mahmed.ssmain\appdata\local\programs\python\python38\python.exe' -u -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'C:\Users\mahmed .SSMAIN\AppData\Local\Temp\pip-install-5a62zurd\torch\setup.py'"'"'; file='"'"'C:\Users\mahmed.SSMAIN\AppData\Local\Temp\pip-install-5a62zurd\torch\ setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec '"'"'))' install --record 'C:\Users\mahmed.SSMAIN\AppData\Local\Temp\pip-record-gm8ndtxb\install-record.txt' --single-version-externally-managed --compile --install-h eaders 'C:\Users\mahmed.SSMAIN\AppData\Local\Programs\Python\Python38\Include\torch' cwd: C:\Users\mahmed.SSMAIN\AppData\Local\Temp\pip-install-5a62zurd\torch
    Complete output (23 lines): running install running build_deps Traceback (most recent call last): File "", line 1, in File "C:\Users\mahmed.SSMAIN\AppData\Local\Temp\pip-install-5a62zurd\torch\setup.py", line 225, in setup(name="torch", version="0.1.2.post2", File "C:\Users\mahmed.SSMAIN\AppData\Local\Programs\Python\Python38\lib\site-packages\setuptools_init_.py", line 145, in setup return distutils.core.setup(**attrs) File "C:\Users\mahmed.SSMAIN\AppData\Local\Programs\Python\Python38\lib\distutils\core.py", line 148, in setup dist.run_commands() File "C:\Users\mahmed.SSMAIN\AppData\Local\Programs\Python\Python38\lib\distutils\dist.py", line 966, in run_commands self.run_command(cmd) File "C:\Users\mahmed.SSMAIN\AppData\Local\Programs\Python\Python38\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Users\mahmed.SSMAIN\AppData\Local\Temp\pip-install-5a62zurd\torch\setup.py", line 99, in run self.run_command('build_deps') File "C:\Users\mahmed.SSMAIN\AppData\Local\Programs\Python\Python38\lib\distutils\cmd.py", line 313, in run_command self.distribution.run_command(command) File "C:\Users\mahmed.SSMAIN\AppData\Local\Programs\Python\Python38\lib\distutils\dist.py", line 985, in run_command cmd_obj.run() File "C:\Users\mahmed.SSMAIN\AppData\Local\Temp\pip-install-5a62zurd\torch\setup.py", line 51, in run from tools.nnwrap import generate_wrappers as generate_nn_wrappers ModuleNotFoundError: No module named 'tools.nnwrap'

    opened by Mahmoud-Tawfeek 8
  • Error in Downloading Checkpoint for Face Enhancement

    --2022-12-28 03:36:54--  https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Face_Enhancement/checkpoints.zip
    Resolving facevc.blob.core.windows.net (facevc.blob.core.windows.net)... 20.60.228.1
    Connecting to facevc.blob.core.windows.net (facevc.blob.core.windows.net)|20.60.228.1|:443... connected.
    HTTP request sent, awaiting response... 404 The specified resource does not exist.
    2022-12-28 03:36:54 ERROR 404: The specified resource does not exist..

    unzip: cannot find or open checkpoints.zip, checkpoints.zip.zip or checkpoints.zip.ZIP.
    /content/photo_restoration
    /content/photo_restoration/Global
    --2022-12-28 03:36:54--  https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Global/checkpoints.zip
    Resolving facevc.blob.core.windows.net (facevc.blob.core.windows.net)... 20.60.228.1
    Connecting to facevc.blob.core.windows.net (facevc.blob.core.windows.net)|20.60.228.1|:443... connected.
    HTTP request sent, awaiting response... 404 The specified resource does not exist.
    2022-12-28 03:36:55 ERROR 404: The specified resource does not exist..

    Please help in this regards

    opened by m-aliabbas 1
  • ModuleNotFoundError: No module named 'torch'

    ┌──(root㉿kali)-[/home/kirisawa/Bringing-Old-Photos-Back-to-Life]
    └─# python3 run.py
    Running Stage 1: Overall restoration
    Traceback (most recent call last):
      File "/home/kirisawa/Bringing-Old-Photos-Back-to-Life/Global/test.py", line 6, in <module>
        from torch.autograd import Variable
    ModuleNotFoundError: No module named 'torch'
    Traceback (most recent call last):
      File "/home/kirisawa/Bringing-Old-Photos-Back-to-Life/run.py", line 102, in <module>
        for x in os.listdir(stage_1_results):
    FileNotFoundError: [Errno 2] No such file or directory: '/home/kirisawa/Bringing-Old-Photos-Back-to-Life/output/stage_1_restore_output/restored_image'

    I've installed the packages from requirements.txt but I'm still having problems. 😭😭

    opened by whitenight1985 2
  • How does this project "generate scratches"

    Hello, author. First of all, thank you very much for the code you posted. I have a question for you, that is, I can't find the code that adds scratches. Can you tell me where it is?

    opened by xiaoyanghuha 0
  • Missing additional download

    # pull the syncBN repo
    %cd photo_restoration/Face_Enhancement/models/networks
    !git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
    !cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
    %cd ../../../
    
    %cd Global/detection_models
    !git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
    !cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
    %cd ../../
    
    # download the landmark detection model
    %cd Face_Detection/
    !wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
    !bzip2 -d shape_predictor_68_face_landmarks.dat.bz2
    %cd ../
    
    # download the pretrained model
    %cd Face_Enhancement/
    !wget https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Face_Enhancement/checkpoints.zip
    !unzip checkpoints.zip
    %cd ../
    
    %cd Global/
    !wget https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Global/checkpoints.zip
    !unzip checkpoints.zip
    %cd ../
    
    /content/photo_restoration
    /content/photo_restoration/Face_Enhancement
    --2022-11-09 21:01:48--  https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Face_Enhancement/checkpoints.zip
    Resolving facevc.blob.core.windows.net (facevc.blob.core.windows.net)... 20.60.228.1
    Connecting to facevc.blob.core.windows.net (facevc.blob.core.windows.net)|20.60.228.1|:443... connected.
    HTTP request sent, awaiting response... 404 The specified resource does not exist.
    2022-11-09 21:01:49 ERROR 404: The specified resource does not exist..
    
    unzip:  cannot find or open checkpoints.zip, checkpoints.zip.zip or checkpoints.zip.ZIP.
    /content/photo_restoration
    /content/photo_restoration/Global
    --2022-11-09 21:01:49--  https://facevc.blob.core.windows.net/zhanbo/old_photo/pretrain/Global/checkpoints.zip
    Resolving facevc.blob.core.windows.net (facevc.blob.core.windows.net)... 20.60.228.1
    Connecting to facevc.blob.core.windows.net (facevc.blob.core.windows.net)|20.60.228.1|:443... connected.
    HTTP request sent, awaiting response... 404 The specified resource does not exist.
    2022-11-09 21:01:50 ERROR 404: The specified resource does not exist..
    
    unzip:  cannot find or open checkpoints.zip, checkpoints.zip.zip or checkpoints.zip.ZIP.
    /content/photo_restoration
    
    opened by adamsatyr 5
  • New GUI, fixed argparse error in run.py and More.

    • Added a new GUI with options, settings, multilanguage support, and a futuristic design.
    • run.py can now be imported as a library and accepts parameters; it is used by the GUI.
    • Fixed an "unknown argument" argparse error in run.py that occurred when the project was executed from a path containing spaces, or when the input/output paths contained spaces, by adding quotes around the input and output paths in all stages.
    opened by Erickesau 5