🔥 TensorFlow Code for technical report: "YOLOv3: An Incremental Improvement"

Overview

🆕 Are you looking for a new YOLOv3 implemented in TF 2.0?

If you are fed up with TensorFlow 1.x, no worries! I have implemented a new YOLOv3 repo with TF 2.0, and have also written a Chinese blog post on how to implement a YOLOv3 object detector from scratch.
code | blog | issue

part 1. Quick start

  1. Clone this repository
$ git clone https://github.com/YunYang1994/tensorflow-yolov3.git
  2. Install the required dependencies before getting your hands on the code
$ cd tensorflow-yolov3
$ pip install -r ./docs/requirements.txt
  3. Export the loaded COCO weights as a TF checkpoint (yolov3_coco.ckpt) [also available via BaiduCloud]
$ cd checkpoint
$ wget https://github.com/YunYang1994/tensorflow-yolov3/releases/download/v1.0/yolov3_coco.tar.gz
$ tar -xvf yolov3_coco.tar.gz
$ cd ..
$ python convert_weight.py
$ python freeze_graph.py
  4. You will then find the exported .pb files in the root path; run the demo scripts (a minimal sketch for loading the frozen graph yourself follows these commands)
$ python image_demo.py
$ python video_demo.py # if use camera, set video_path = 0
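If you want to run inference without the demo scripts, here is a minimal sketch of loading a frozen .pb and running it directly in TensorFlow 1.x. The tensor names and the test image path are assumptions (they mirror what this repo's freeze_graph.py typically exports); verify them against your own graph before relying on them.

import cv2
import numpy as np
import tensorflow as tf

pb_file = "./yolov3_coco.pb"
# Assumed tensor names -- check them against your own frozen graph.
return_elements = ["input/input_data:0", "pred_sbbox/concat_2:0",
                   "pred_mbbox/concat_2:0", "pred_lbbox/concat_2:0"]

graph = tf.Graph()
with tf.gfile.GFile(pb_file, "rb") as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
with graph.as_default():
    tensors = tf.import_graph_def(graph_def, return_elements=return_elements)

image = cv2.cvtColor(cv2.imread("test.jpg"), cv2.COLOR_BGR2RGB)  # any test image
image = cv2.resize(image, (416, 416)) / 255.0    # plain resize; the repo's demos letterbox instead
image = image[np.newaxis, ...].astype(np.float32)

with tf.Session(graph=graph) as sess:
    sbbox, mbbox, lbbox = sess.run(tensors[1:], feed_dict={tensors[0]: image})
    print(sbbox.shape, mbbox.shape, lbbox.shape)  # raw predictions at the three scales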

part 2. Train your own dataset

Two files are required, a dataset annotation file and a class names file, in the following formats (a small parsing sketch follows the example):

xxx/xxx.jpg 18.19,6.32,424.13,421.83,20 323.86,2.65,640.0,421.94,20 
xxx/xxx.jpg 48,240,195,371,11 8,12,352,498,14
# image_path x_min,y_min,x_max,y_max,class_id x_min,y_min,...,class_id
# make sure that x_max < width and y_max < height
person
bicycle
car
...
toothbrush
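To make the annotation format concrete, here is a small self-contained sketch (plain Python/NumPy, not code from this repo) that parses one annotation line into an image path and an (N, 5) box array:

import numpy as np

def parse_annotation_line(line):
    # Format: image_path x_min,y_min,x_max,y_max,class_id x_min,y_min,...,class_id
    parts = line.strip().split()
    image_path = parts[0]
    boxes = np.array([list(map(float, box.split(","))) for box in parts[1:]])
    return image_path, boxes

path, boxes = parse_annotation_line(
    "xxx/xxx.jpg 18.19,6.32,424.13,421.83,20 323.86,2.65,640.0,421.94,20")
print(path, boxes.shape)   # xxx/xxx.jpg (2, 5)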

2.1 Train on VOC dataset

Download the PASCAL VOC trainval and test data

$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtrainval_06-Nov-2007.tar
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
$ wget http://host.robots.ox.ac.uk/pascal/VOC/voc2007/VOCtest_06-Nov-2007.tar

Extract all of these tars into one directory and rename the subdirectories so that the directory has the following basic structure.


VOC           # path:  /home/yang/dataset/VOC
├── test
|    └──VOCdevkit
|        └──VOC2007 (from VOCtest_06-Nov-2007.tar)
└── train
     └──VOCdevkit
         └──VOC2007 (from VOCtrainval_06-Nov-2007.tar)
         └──VOC2012 (from VOCtrainval_11-May-2012.tar)
                     
$ python scripts/voc_annotation.py --data_path /home/yang/dataset/VOC
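Roughly speaking, scripts/voc_annotation.py walks the VOC XML annotations and emits one line per image in the format described in part 2. The sketch below illustrates that conversion for a single image; it is an approximation under stated assumptions, not the script itself, and the class order must match your ./data/classes/voc.names.

import os
import xml.etree.ElementTree as ET

# Standard VOC class order; adjust if your voc.names differs.
VOC_CLASSES = ["aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car",
               "cat", "chair", "cow", "diningtable", "dog", "horse", "motorbike",
               "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]

def voc_to_annotation_line(voc_root, image_id):
    # Convert one VOC sample into "image_path xmin,ymin,xmax,ymax,class_id ..."
    xml_path = os.path.join(voc_root, "Annotations", image_id + ".xml")
    img_path = os.path.join(voc_root, "JPEGImages", image_id + ".jpg")
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        name = obj.find("name").text.strip()
        if name not in VOC_CLASSES:
            continue
        bndbox = obj.find("bndbox")
        coords = [bndbox.find(tag).text for tag in ("xmin", "ymin", "xmax", "ymax")]
        boxes.append(",".join(coords + [str(VOC_CLASSES.index(name))]))
    return " ".join([img_path] + boxes)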

Then edit ./core/config.py to make the necessary configuration changes

__C.YOLO.CLASSES                = "./data/classes/voc.names"
__C.TRAIN.ANNOT_PATH            = "./data/dataset/voc_train.txt"
__C.TEST.ANNOT_PATH             = "./data/dataset/voc_test.txt"
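__C.YOLO.CLASSES should point at a plain-text file with one class name per line (like the person/bicycle/.../toothbrush example in part 2). The repo ships a similar helper; the standalone sketch below simply illustrates how such a file maps to class ids:

def read_class_names(class_file_name):
    # One class name per line; the line number (starting at 0) is the class id.
    names = {}
    with open(class_file_name, "r") as f:
        for class_id, name in enumerate(f):
            names[class_id] = name.strip("\n")
    return names

print(read_class_names("./data/classes/voc.names"))  # prints {class_id: class_name, ...}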

There are two ways to train the model:

(1) Train from scratch:
$ python train.py
$ tensorboard --logdir ./data
(2) Train from COCO weights (recommended), then optionally inspect the resulting checkpoint with the sketch below:
$ cd checkpoint
$ wget https://github.com/YunYang1994/tensorflow-yolov3/releases/download/v1.0/yolov3_coco.tar.gz
$ tar -xvf yolov3_coco.tar.gz
$ cd ..
$ python convert_weight.py --train_from_coco
$ python train.py
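A quick way to sanity-check the converted checkpoint before training (or to debug later "Key ... not found in checkpoint" errors) is to list the variables it contains. This is a hedged sketch using the standard TF 1.x API; point ckpt_path at whichever checkpoint prefix you want to inspect.

import tensorflow as tf

# Lists every variable name and shape stored in a checkpoint.
ckpt_path = "./checkpoint/yolov3_coco.ckpt"
for name, shape in tf.train.list_variables(ckpt_path):
    print(name, shape)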

2.2 Evaluate on VOC dataset

$ python evaluate.py
$ cd mAP
$ python main.py -na
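For context, the mAP script ranks detections by confidence, matches them to ground truth via IoU (typically at a 0.5 threshold), and integrates the precision-recall curve per class. Below is a small self-contained sketch of the two core pieces (all-point interpolation, as in VOC-style mAP tools); it is illustrative, not the exact code in ./mAP.

import numpy as np

def iou(box_a, box_b):
    # IoU of two boxes given as [x_min, y_min, x_max, y_max].
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-10)

def average_precision(recall, precision):
    # All-point interpolation of the precision-recall curve.
    mrec = np.concatenate(([0.0], recall, [1.0]))
    mpre = np.concatenate(([0.0], precision, [0.0]))
    for i in range(len(mpre) - 2, -1, -1):     # make precision monotonically decreasing
        mpre[i] = max(mpre[i], mpre[i + 1])
    idx = np.where(mrec[1:] != mrec[:-1])[0]   # points where recall changes
    return float(np.sum((mrec[idx + 1] - mrec[idx]) * mpre[idx + 1]))

print(iou([0, 0, 10, 10], [5, 5, 15, 15]))                             # ~0.143
print(average_precision(np.array([0.5, 1.0]), np.array([1.0, 0.5])))   # 0.75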

The mAP on the VOC2012 dataset:

part 3. Other Implementations

- TensorFlow implementation of YOLOv3 object detection, trainable on your own data

- Stronger-yolo

- Implementing YOLO v3 in Tensorflow (TF-Slim)

- YOLOv3_TensorFlow

- Object Detection using YOLOv2 on Pascal VOC2012

- Understanding YOLO

Comments
  • After training for a few epochs: IndexError: index 77 is out of bounds for axis 0 with size 76

    When I run training, the first few epochs go fine, but the next epoch fails with: File "/usr/local/lib/python3.5/dist-packages/tqdm/_tqdm.py", line 930, in iter for obj in iterable: File "/workspace/tensorflow-yolov3-master/core/dataset.py", line 76, in next label_sbbox, label_mbbox, label_lbbox, sbboxes, mbboxes, lbboxes = self.preprocess_true_boxes(bboxes) File "/workspace/tensorflow-yolov3-master/core/dataset.py", line 234, in preprocess_true_boxes label[i][yind, xind, iou_mask, :] = 0 IndexError: index 77 is out of bounds for axis 0 with size 76. What is causing this? @YunYang1994

    opened by CNUyue 11
  • After interrupting model training, how do I resume from where it was interrupted?

    Hello everyone! When running train.py, suppose it has been running for a while and I need to pause training at 10:00 and want to continue at 14:00. How should I do this? I found that after stopping train.py and re-running it, training does not continue from where it was interrupted. What should I do so that training resumes from the interruption point? I tried editing config.py, changing __C.YOLO.ORIGINAL_WEIGHT = "./checkpoint/yolov3_coco.ckpt" to my own checkpoint, e.g. __C.YOLO.ORIGINAL_WEIGHT = "./checkpoint/yolov3_test_loss=16.4555.ckpt", but when I run train.py again the output is unchanged, still:

    => Restoring weights from: ./checkpoint/yolov3_coco_demo.ckpt ... 
      0%|          | 0/4014 [00:00<?, ?it/s]2020-01-11 15:11:44.567005: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 1.73GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
    2020-01-11 15:11:45.392378: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.14GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
    2020-01-11 15:11:45.420473: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
    train loss: 4195.52:   0%|          | 1/4014 [00:07<8:15:17,  7.41s/it]2020-01-11 15:11:46.625862: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
    2020-01-11 15:11:46.704219: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
    train loss: 1974.33:   0%|          | 2/4014 [00:08<6:09:40,  5.53s/it]2020-01-11 15:11:47.816559: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.14GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
    2020-01-11 15:11:47.841658: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
    train loss: 3769.86:   0%|          | 3/4014 [00:09<4:44:16,  4.25s/it]2020-01-11 15:11:49.013124: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
    2020-01-11 15:11:49.089715: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
    train loss: 1902.35:   0%|          | 4/4014 [00:10<3:41:04,  3.31s/it]2020-01-11 15:11:50.022054: W tensorflow/core/common_runtime/bfc_allocator.cc:211] Allocator (GPU_0_bfc) ran out of memory trying to allocate 2.13GiB. The caller indicates that this is not a failure, but may mean that there could be performance gains if more memory were available.
    train loss: 13.42: 100%|██████████| 4014/4014 [11:29<00:00,  5.82it/s]
    => Epoch:  1 Time: 2020-01-11 15:23:59 Train loss: 546.57 Test loss: 30.60 Saving ./checkpoint/yolov3_test_loss=30.6046.ckpt ...
    

    which shows it is still training from scratch. How can I continue training instead of starting over? Looking at the checkpoint folder, the files generated by the first run are:

    yolov3_test_loss=30.1690.ckpt-1.data-00000-of-00001
    yolov3_test_loss=30.1690.ckpt-1.index
    yolov3_test_loss=30.1690.ckpt-1.meta
    

    while the files generated by the second run are:

    yolov3_test_loss=30.6046.ckpt-1.data-00000-of-00001
    yolov3_test_loss=30.6046.ckpt-1.index
    yolov3_test_loss=30.6046.ckpt-1.meta
    

    The ckpt-1 suffix appears twice: the first set of files is from the first run and the second set from the second run; only the loss values and the timestamps differ. So, how can I resume training from the checkpoint?

    opened by dapsjj 10
  • Has anyone trained darknet53 on COCO 2017?

    It's nice work, but my darknet53 trained on COCO trainval2017 never performs well (about 0.33 AP50). Can anyone give me some advice? Thanks very much.

    opened by tabsun 9
  • tuple index out of range

    File "train.py", line 44, in loss = model.compute_loss(y_pred, y_true) File ".../tensorflow-yolov3/core/yolov3.py", line 269, in compute_loss loss_class += result[3] IndexError: tuple index out of range

    opened by jiaweichen168 7
  • data_aug resulting in indices outside the label array

    data_aug in parse_annotation (dataset.py) produces index values outside the label array in preprocess_true_boxes (dataset.py), so training fails with IndexError: index xx is out of bounds for axis 0 with size yy (xx can even be negative?). Please advise how to handle this problem.

    opened by ramdhan1989 5
  • A problem with multi-GPU training

    I modified train.py for multi-GPU training and it trains normally and produces ckpt files, but when I run freeze_graph.py to convert the saved ckpt into a pb file, I get the error NotFoundError (see above for traceback): Key conv52/batch_normalization/beta not found in checkpoint. What could be the reason?

    opened by zengqi0730 5
  • The path is correct, but yolov3_coco_demo.ckpt can never be loaded (solved: my own mistake, the code is correct)

    In PyCharm I started from section 3.1 "Train VOC dataset" of the README (nothing else changed). In step (2) "train from COCO weights (recommended)", when I reach python train.py, yolov3_coco_demo.ckpt is never loaded. It always says: => ./checkpoint/yolov3_coco_demo.ckpt does not exist !!! => Now it starts to train YOLOV3 from scratch ...

    The setting in config.py is also correct: __C.TRAIN.INITIAL_WEIGHT = "./checkpoint/yolov3_coco_demo.ckpt"

    Has anyone run into this problem?

    opened by FangliangBai 5
  • GPU memory runs out when training with BATCH_SIZE=8 in the second stage

    I used BATCH_SIZE=8 during FISRT_STAGE_EPOCHS and training was fine, but when I used BATCH_SIZE=8 during SECOND_STAGE_EPOCHS, training ran out of GPU memory. My GPU is an RTX 2070; what is wrong?

    opened by dapsjj 4
  • Test loss drops to around 4, but mAP is 0.00

    I trained only one class, cow, by filtering the VOC dataset linked in the README so that only images containing just cows remained, ending up with 165 training and 39 test images. With train_from_coco and batch_size=4, I trained for 27 epochs (20 first-stage and 7 second-stage). The test loss eventually dropped to around 4, but when I run evaluate.py the detections differ a lot from the ground truth, and the mAP is always 0.00. Is this because my sample size is too small, or does the test loss need to drop below 1?

    opened by myrzx 4
  • nan at all epochs

    Hi guys. I get nan at all epochs; what is the problem? I added my class to coco.names, and my train.txt looks like: /media/spectra/GMS/PDRSV2/nomeroff-net/datasets/train/X119BM199.jpg,364,422,136,451,81

    opened by Spectra456 4
  • Bump pillow from 5.3.0 to 9.3.0 in /docs

    Bumps pillow from 5.3.0 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump tensorflow-gpu from 1.11.0 to 2.9.3 in /docs

    Bumps tensorflow-gpu from 1.11.0 to 2.9.3.

    Release notes

    Sourced from tensorflow-gpu's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This release introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow-gpu's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump numpy from 1.15.1 to 1.22.0 in /docs

    Bumps numpy from 1.15.1 to 1.22.0.

    Release notes

    Sourced from numpy's releases.

    v1.22.0

    NumPy 1.22.0 Release Notes

    NumPy 1.22.0 is a big release featuring the work of 153 contributors spread over 609 pull requests. There have been many improvements, highlights are:

    • Annotations of the main namespace are essentially complete. Upstream is a moving target, so there will likely be further improvements, but the major work is done. This is probably the most user visible enhancement in this release.
    • A preliminary version of the proposed Array-API is provided. This is a step in creating a standard collection of functions that can be used across applications such as CuPy and JAX.
    • NumPy now has a DLPack backend. DLPack provides a common interchange format for array (tensor) data.
    • New methods for quantile, percentile, and related functions. The new methods provide a complete set of the methods commonly found in the literature.
    • A new configurable allocator for use by downstream projects.

    These are in addition to the ongoing work to provide SIMD support for commonly used functions, improvements to F2PY, and better documentation.

    The Python versions supported in this release are 3.8-3.10, Python 3.7 has been dropped. Note that 32 bit wheels are only provided for Python 3.8 and 3.9 on Windows, all other wheels are 64 bits on account of Ubuntu, Fedora, and other Linux distributions dropping 32 bit support. All 64 bit wheels are also linked with 64 bit integer OpenBLAS, which should fix the occasional problems encountered by folks using truly huge arrays.

    Expired deprecations

    Deprecated numeric style dtype strings have been removed

    Using the strings "Bytes0", "Datetime64", "Str0", "Uint32", and "Uint64" as a dtype will now raise a TypeError.

    (gh-19539)

    Expired deprecations for loads, ndfromtxt, and mafromtxt in npyio

    numpy.loads was deprecated in v1.15, with the recommendation that users use pickle.loads instead. ndfromtxt and mafromtxt were both deprecated in v1.17 - users should use numpy.genfromtxt instead with the appropriate value for the usemask parameter.

    (gh-19615)

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
Releases (v1.0)