Keras implementation of Deeplab v3+ with pretrained weights

Overview

Keras implementation of Deeplabv3+

This repo is no longer maintained. I won't respond to issues, but I will merge PRs.
DeepLab is a state-of-the-art deep learning model for semantic image segmentation.

The model is based on the original TF frozen graph. It is possible to load pretrained weights into this model; the weights are imported directly from the original TF checkpoint.

Segmentation results of the original TF model, Output Stride = 8




Segmentation results of this repo's model with loaded weights and OS = 8.
The results are identical to the TF model.




Segmentation results of this repo's model with loaded weights and OS = 16.
The results are still good.




How to get labels

The model returns a tensor of shape (batch_size, height, width, num_classes). To obtain labels, apply argmax to the logits at the exit layer. Example of predicting on image1.jpg:

import numpy as np
from PIL import Image
from matplotlib import pyplot as plt

from model import Deeplabv3

# Generates labels using most basic setup.  Supports various image sizes.  Returns image labels in same format
# as original image.  Normalization matches MobileNetV2

trained_image_width=512 
mean_subtraction_value=127.5
image = np.array(Image.open('imgs/image1.jpg'))

# resize to max dimension of images from training dataset
w, h, _ = image.shape
ratio = float(trained_image_width) / np.max([w, h])
resized_image = np.array(Image.fromarray(image.astype('uint8')).resize((int(ratio * h), int(ratio * w))))

# apply normalization for trained dataset images
resized_image = (resized_image / mean_subtraction_value) - 1.

# pad array to square image to match training images
pad_x = int(trained_image_width - resized_image.shape[0])
pad_y = int(trained_image_width - resized_image.shape[1])
resized_image = np.pad(resized_image, ((0, pad_x), (0, pad_y), (0, 0)), mode='constant')

# make prediction
deeplab_model = Deeplabv3()
res = deeplab_model.predict(np.expand_dims(resized_image, 0))
labels = np.argmax(res.squeeze(), -1)

# remove padding and resize back to original image
if pad_x > 0:
    labels = labels[:-pad_x]
if pad_y > 0:
    labels = labels[:, :-pad_y]
labels = np.array(Image.fromarray(labels.astype('uint8')).resize((h, w)))

plt.imshow(labels)
plt.waitforbuttonpress()

How to use this model with custom input shape and custom number of classes

from model import Deeplabv3
deeplab_model = Deeplabv3(input_shape=(384, 384, 3), classes=4)
# or you can use None as the shape
deeplab_model = Deeplabv3(input_shape=(None, None, 3), classes=4)

After that you will get a regular Keras model which you can train using the .fit and .fit_generator methods.
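A minimal training sketch (the optimizer, loss, and dummy data below are illustrative assumptions, not settings prescribed by this repo); images follow the [-1, 1] normalization shown above, and masks are integer class ids per pixel:

import numpy as np
import tensorflow as tf
from model import Deeplabv3

deeplab_model = Deeplabv3(input_shape=(512, 512, 3), classes=4)

# dummy data standing in for a real dataset
images = np.random.uniform(-1., 1., (8, 512, 512, 3)).astype('float32')
masks = np.random.randint(0, 4, (8, 512, 512)).astype('int32')

# the model outputs logits by default, hence from_logits=True
deeplab_model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                      metrics=['accuracy'])
deeplab_model.fit(images, masks, batch_size=2, epochs=1)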

How to train this model

Useful parameters can be found in the original repository.

Important notes:

  1. This model doesn't provide default weight decay; the user needs to add it themselves.
  2. Due to huge memory use with OS=8, the Xception backbone should be trained with OS=16 and only run at OS=8 for inference.
  3. The user can freeze the feature extractor for the Xception backbone (the first 356 layers) and fine-tune only the decoder; a sketch covering notes 1 and 3 follows this list. Right now (March 2019), there is a problem with fine-tuning Keras models with BN. You can read more about it here.
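A minimal sketch of notes 1 and 3 (illustrative only; using AdamW from the tensorflow-addons package is just one way to get weight decay, and the loss and learning rate are assumptions):

import tensorflow as tf
import tensorflow_addons as tfa  # assumed to be installed; provides AdamW
from model import Deeplabv3

deeplab_model = Deeplabv3(backbone='xception', OS=16, input_shape=(512, 512, 3), classes=4)

# note 3: freeze the Xception feature extractor (first 356 layers), fine-tune only the decoder
for layer in deeplab_model.layers[:356]:
    layer.trainable = False

# note 1: the model has no built-in weight decay, so use a decoupled-weight-decay optimizer
optimizer = tfa.optimizers.AdamW(weight_decay=1e-4, learning_rate=1e-4)
deeplab_model.compile(optimizer=optimizer,
                      loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True))
# deeplab_model.fit(train_images, train_masks, ...)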

Known issues

This model can be retrained; check this notebook. Fine-tuning is tricky and difficult because of the confusion between training and trainable in Keras. See this issue for a discussion and possible alternatives.

How to load model

In order to load the model after using model.save(), use this code:

from model import relu6  # custom object defined in model.py
from tensorflow.keras.models import load_model

deeplab_model = load_model('example.h5', custom_objects={'relu6': relu6})

Xception vs MobileNetv2

There are two available backbones. The Xception backbone is more accurate, but it has 25 times more parameters than MobileNetV2.

For MobileNetV2 there are pretrained weights only for alpha=1. However, you can initialize the model with different values of alpha.
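For illustration, both backbones can be built like this (passing weights=None for the non-default alpha is an assumption, since pretrained weights only exist for alpha=1):

from model import Deeplabv3

# Xception backbone: more accurate, but 25x more parameters
xception_model = Deeplabv3(backbone='xception', input_shape=(512, 512, 3), classes=21)

# MobileNetV2 backbone with a non-default width multiplier, trained from scratch
mobilenet_model = Deeplabv3(backbone='mobilenetv2', alpha=0.5, weights=None,
                            input_shape=(512, 512, 3), classes=21)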

Requirements

The latest version of this repo uses tf.keras, so you only need TF 2.0+ installed:
tensorflow-gpu==2.0.0a0
CUDA==9.0


If you want to use an older version, use the following commands:

git clone https://github.com/bonlime/keras-deeplab-v3-plus/
cd keras-deeplab-v3-plus/
git checkout 714a6b7d1a069a07547c5c08282f1a706db92e20

tensorflow-gpu==1.13
Keras==2.2.4

Comments
  • Performance degradation

    Performance degradation

    Hi,

    I'm using the converted pretrained weights to measure mIoU and observed a 10% drop. Could you share your measurement results? Did you still get 84% mIoU on PASCAL using the transferred model and weights?

    Thanks

    opened by baoruxiao 13
  • Is dilation_rate useful in the Keras DepthwiseConv2D layer?

    Is dilation_rate useful in the Keras DepthwiseConv2D layer?

    Hi, I read the documentation and found that dilation_rate does not seem to be a hyperparameter of the DepthwiseConv2D layer. But you used it in your model: x = DepthwiseConv2D((kernel_size, kernel_size), strides=(stride, stride), dilation_rate=(rate, rate)

    I made a toy example to check this:

    from keras.models import Sequential
    from keras.layers import DepthwiseConv2D

    model = Sequential()
    model.add(DepthwiseConv2D(3, strides=1, padding='valid', depth_multiplier=2,
                              dilation_rate=(2, 2), input_shape=(9, 9, 3)))
    model.summary()
    

    Output:

    _________________________________________________________________
    Layer (type)                 Output Shape              Param #   
    =================================================================
    depthwise_conv2d_6 (Depthwis (None, 7, 7, 6)           60        
    =================================================================
    Total params: 60
    Trainable params: 60
    Non-trainable params: 0
    _________________________________________________________________
    

    If dilation_rate were applied in DepthwiseConv2D, I think the output shape should be (5, 5, 6), not (7, 7, 6).
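    For comparison (an editor's check, not part of the original report), a plain Conv2D, where dilation_rate is definitely applied, produces the (5, 5, ...) shape expected above: with kernel 3 and dilation 2 the effective kernel size is 5, so a 'valid' convolution over a 9x9 input gives 9 - 5 + 1 = 5.

    from keras.models import Sequential
    from keras.layers import Conv2D

    model = Sequential()
    model.add(Conv2D(6, 3, dilation_rate=(2, 2), padding='valid', input_shape=(9, 9, 3)))
    model.summary()  # output shape: (None, 5, 5, 6)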

    My English is pretty limited, plz don't mind

    opened by Pofatoezil 10
  • About the softmax layer & label process?

    About the softmax layer & label process?

    Hi, bonlime, Thanks for your work! It helps me a lot! But I have two questions in my training process:

    1. I found there is no 'softmax' on the last layer of the Deeplabv3() model. Is it right if I use model.fit() directly after model=Deeplabv3() without any other process? Or could you please tell me where to use the 'softmax' layer?

    2. In the training image input process, how to correctly deal with labels? In my previous segmentation projects, the training and prediction steps usually operate on single-channel pngs labels, where the value of each pixel corresponds to the class label (for example, 0-21 for the Pascal dataset). Is this project the same as that?

    I'm looking forward to your reply. Thanks!
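    For illustration only (an editor's sketch, not the maintainer's reply), one consistent setup for both questions: the exit layer is linear by default, so either pass from_logits=True to the loss or build the model with activation='softmax' and use from_logits=False; single-channel integer masks (pixel value = class id) then work directly with sparse categorical crossentropy.

    import tensorflow as tf
    from model import Deeplabv3

    model = Deeplabv3(input_shape=(512, 512, 3), classes=21)  # linear output (logits)
    model.compile(optimizer='adam',
                  loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                  metrics=['accuracy'])
    # masks: array of shape (N, 512, 512) with integer class ids
    # model.fit(images, masks, ...)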

    opened by YanZhiyuan0918 10
  • The right way of pre-processing for input?

    The right way of pre-processing for input?

    We use the first image as input, resize it from (427, 640, 3) to (512, 512, 3), and normalize it from [0, 255] to [0, 1]. As shown in the attached screenshots, the output with OS=16 is normal,

    but the output with OS=8 is worse.

    I cannot figure out why, so I think my pre-processing (such as the normalization) is wrong.

    opened by munanning 9
  • compatibility with tensorflow < 2.0

    compatibility with tensorflow < 2.0

    Hi, as I have CUDA 9.0 on Ubuntu 16.04, tensorflow 2.0 is not an option (tensorflow 1.12 is installed instead). When I run the code:

    from model import Deeplabv3
    model = Deeplabv3(input_shape=(None, None, 3), backbone='xception', OS=8)

    I got an error: AttributeError: module 'tensorflow._api.v1.image' has no attribute 'resize'

    How can I resolve this so that I can run the code with tensorflow 1.12? Many thanks.

    opened by tsing90 7
  • image-level pooling

    image-level pooling

    Hello,

    I checked your code, and I think the image-level pooling is not correctly implemented, because it is not computing a global average pooling.

    https://github.com/bonlime/keras-deeplab-v3-plus/blob/master/model.py#L437

    I checked this code: https://github.com/tensorflow/models/blob/4a0ee4a29dd7e4b6e0135ccf6857f4dc58d71a90/research/deeplab/model.py#L397 and the reference is ref 52 from the original paper (https://arxiv.org/pdf/1506.04579.pdf):

    Exploiting the FCN architecture, ParseNet can directly use global average pooling from the final (or any) feature map, resulting in the feature of the whole image, and use it as context.

    In implementation, this is accomplished by unpooling the context vector and appending the resulting feature map with the standard feature map.

    Specifically, we use global average pooling and pool the context features from the last layer or any layer if that is desired.
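    For reference, a minimal sketch of the image-level pooling described in the quote above (an editor's illustration in tf.keras with assumed layer choices, not the code under discussion in model.py); it assumes static spatial dimensions:

    import tensorflow as tf
    from tensorflow.keras import layers

    def image_level_pooling(features):
        # features: (batch, H, W, C) with static H and W
        h, w = features.shape[1], features.shape[2]
        # global average pooling gives one context vector per image ...
        pooled = layers.GlobalAveragePooling2D()(features)      # (batch, C)
        pooled = layers.Reshape((1, 1, -1))(pooled)             # (batch, 1, 1, C)
        # ... which is "unpooled" back to the spatial size and appended to the feature map
        unpooled = layers.UpSampling2D(size=(h, w), interpolation='bilinear')(pooled)
        return layers.Concatenate(axis=-1)([features, unpooled])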

    opened by emedinac 7
  • Does the model have errors?

    Does the model have errors?

    When I use fit, it says: Error when checking target: expected bilinear_upsampling_2 to have shape (512, 512, 2) but got array with shape (512, 512, 3). PS: my input image.shape is (512, 512, 3).

    opened by Taylor-Rose 6
  • How can I run the video module to achieve real-time detection?

    How can I run the video module to achieve real-time detection?

    Hello, I want to know the speed of deeplabv3+, and I tried to run this:

    from keras.preprocessing.image import ImageDataGenerator, array_to_img, img_to_array, load_img
    from matplotlib import pyplot as plt
    import cv2  # used for resize. if you dont have it, use anything else
    import numpy as np
    from model import Deeplabv3

    deeplab_model = Deeplabv3()

    def detect_video(deeplab_model):
        import cv2
        vid = cv2.VideoCapture(0)
        if not vid.isOpened():
            raise IOError("Couldn't open webcam or video")
        accum_time = 10
        curr_fps = 10
        fps = "20"
        prev_time = timer()
        while True:
            return_value, frame = vid.read()
            res = deeplab_model.predict(frame)
            result = array_to_img(res)
            curr_time = timer()
            exec_time = curr_time - prev_time
            prev_time = curr_time
            accum_time = accum_time + exec_time
            curr_fps = curr_fps + 1
            if accum_time > 1:
                accum_time = accum_time - 1
                fps = "FPS: " + str(curr_fps)
                curr_fps = 0
            cv2.putText(result, text=fps, org=(3, 15), fontFace=cv2.FONT_HERSHEY_SIMPLEX,
                        fontScale=0.50, color=(255, 0, 0), thickness=2)
            cv2.namedWindow("result", cv2.WINDOW_NORMAL)
            cv2.imshow("result", result)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
        deeplab_model.close_session()

    detect_video(deeplab_model)

    but it doesn't work! I think I need your help, thanks.
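    For reference (an editor's sketch reusing the preprocessing from the README above, not a fix confirmed in this thread): deeplab_model.predict expects a normalized, padded batch rather than a raw webcam frame, so each frame needs something like:

    import cv2
    import numpy as np

    def preprocess_frame(frame, size=512, mean=127.5):
        # resize so the longest side is `size`, normalize to [-1, 1], pad to a square batch
        h, w = frame.shape[:2]
        ratio = float(size) / max(h, w)
        resized = cv2.resize(frame, (int(w * ratio), int(h * ratio)))
        resized = resized / mean - 1.
        pad_h = size - resized.shape[0]
        pad_w = size - resized.shape[1]
        padded = np.pad(resized, ((0, pad_h), (0, pad_w), (0, 0)), mode='constant')
        return np.expand_dims(padded, 0)

    # inside the loop:
    # res = deeplab_model.predict(preprocess_frame(frame))
    # labels = np.argmax(res.squeeze(), -1)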

    opened by ll1214 6
  • How can i get each label's coordinates on segmentation map?

    How can i get each label's coordinates on segmentation map?

    import numpy as np
    from PIL import Image
    from matplotlib import pyplot as plt
    
    from model import Deeplabv3
    
    # Generates labels using most basic setup.  Supports various image sizes.  Returns image labels in same format
    # as original image.  Normalization matches MobileNetV2
    
    trained_image_width=512 
    mean_subtraction_value=127.5
    image = np.array(Image.open('imgs/image1.jpg'))
    
    # resize to max dimension of images from training dataset
    w, h, _ = image.shape
    ratio = float(trained_image_width) / np.max([w, h])
    resized_image = np.array(Image.fromarray(image.astype('uint8')).resize((int(ratio * h), int(ratio * w))))
    
    # apply normalization for trained dataset images
    resized_image = (resized_image / mean_subtraction_value) - 1.
    
    # pad array to square image to match training images
    pad_x = int(trained_image_width - resized_image.shape[0])
    pad_y = int(trained_image_width - resized_image.shape[1])
    resized_image = np.pad(resized_image, ((0, pad_x), (0, pad_y), (0, 0)), mode='constant')
    
    # make prediction
    deeplab_model = Deeplabv3()
    res = deeplab_model.predict(np.expand_dims(resized_image, 0))
    labels = np.argmax(res.squeeze(), -1)
    
    # remove padding and resize back to original image
    if pad_x > 0:
        labels = labels[:-pad_x]
    if pad_y > 0:
        labels = labels[:, :-pad_y]
    labels = np.array(Image.fromarray(labels.astype('uint8')).resize((h, w)))
    
    plt.imshow(labels)
    plt.show()
    

    I ran this code and got the segmentation map. But I want to get a result like https://github.com/bonlime/keras-deeplab-v3-plus/blob/master/imgs/seg_results2.png and each label's coordinates on the image. How can I do that?
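    Not from the thread, but for illustration: once the labels array from the code above is available, the pixel coordinates of each class can be collected with np.argwhere, e.g.:

    import numpy as np

    # labels: the (H, W) array of class ids produced by the code above
    coords_per_class = {
        int(class_id): np.argwhere(labels == class_id)  # (n_pixels, 2) array of (row, col)
        for class_id in np.unique(labels)
    }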

    opened by Baek2back 5
  • Make model.py compatible with Python 3 and switch to tf.image.resize()

    Make model.py compatible with Python 3 and switch to tf.image.resize()

    • make model.py compatible with Python 3 by changing [tensor].shape to [tensor].shape.as_list()
    • move away from tf.compat.v1.image.resize() due to "resize method is not implemented" error.
    • use tf.image.resize() instead because it now supports align_corners=True.
    opened by yanfengliu 5
  • Custom Dataset: Number of Classes

    Custom Dataset: Number of Classes

    Should num_classes take the background class into account? For example, if I have 3 foreground classes that I have masks for and a background class that I don't care about and don't have a mask for, should my num_classes be 3?

    opened by ad12 5
  • Bump pillow from 6.0.0 to 9.3.0

    Bump pillow from 6.0.0 to 9.3.0

    Bumps pillow from 6.0.0 to 9.3.0.

    Release notes

    Sourced from pillow's releases.

    9.3.0

    https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

    Changes

    ... (truncated)

    Changelog

    Sourced from pillow's changelog.

    9.3.0 (2022-10-29)

    • Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]

    • Initialize libtiff buffer when saving #6699 [radarhere]

    • Inline fname2char to fix memory leak #6329 [nulano]

    • Fix memory leaks related to text features #6330 [nulano]

    • Use double quotes for version check on old CPython on Windows #6695 [hugovk]

    • Remove backup implementation of Round for Windows platforms #6693 [cgohlke]

    • Fixed set_variation_by_name offset #6445 [radarhere]

    • Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]

    • Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]

    • Added ExifTags enums #6630 [radarhere]

    • Do not modify previous frame when calculating delta in PNG #6683 [radarhere]

    • Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]

    • Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]

    • Added GPS TIFF tag info #6661 [radarhere]

    • Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]

    • Do not attempt normalization if mode is already normal #6644 [radarhere]

    ... (truncated)

    Commits

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Bump tensorflow from 2.5.0rc0 to 2.9.3

    Bump tensorflow from 2.5.0rc0 to 2.9.3

    Bumps tensorflow from 2.5.0rc0 to 2.9.3.

    Release notes

    Sourced from tensorflow's releases.

    TensorFlow 2.9.3

    Release 2.9.3

    This release introduces several vulnerability fixes:

    TensorFlow 2.9.2

    Release 2.9.2

    This release introduces several vulnerability fixes:

    ... (truncated)

    Changelog

    Sourced from tensorflow's changelog.

    Release 2.9.3

    This release introduces several vulnerability fixes:

    Release 2.8.4

    This release introduces several vulnerability fixes:

    ... (truncated)

    Commits
    • a5ed5f3 Merge pull request #58584 from tensorflow/vinila21-patch-2
    • 258f9a1 Update py_func.cc
    • cd27cfb Merge pull request #58580 from tensorflow-jenkins/version-numbers-2.9.3-24474
    • 3e75385 Update version numbers to 2.9.3
    • bc72c39 Merge pull request #58482 from tensorflow-jenkins/relnotes-2.9.3-25695
    • 3506c90 Update RELEASE.md
    • 8dcb48e Update RELEASE.md
    • 4f34ec8 Merge pull request #58576 from pak-laura/c2.99f03a9d3bafe902c1e6beb105b2f2417...
    • 6fc67e4 Replace CHECK with returning an InternalError on failing to create python tuple
    • 5dbe90a Merge pull request #58570 from tensorflow/r2.9-7b174a0f2e4
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies 
    opened by dependabot[bot] 0
  • Citation in Journal

    Citation in Journal

    I would like to cite this repository in a journal paper I'm currently writing as I have used most of the code here. I have already included the reference to the original DeepLabV3+ paper, but I also want to include this repository. What is the best way to cite it? Thanks!

    opened by pedrogalher 1
  • Model not learning anything

    Model not learning anything

    Hi, I'm training DeepLabV3+ with the MobileNet backbone on my custom dataset. My dataset has 1 class. My model architecture looks like:

    deeplab_model = Deeplabv3(input_shape=(512, 512, 3), classes=2, activation='softmax')
    deeplab_model.compile(optimizer='adam',
                          loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
                          metrics=['accuracy'])
    deeplab_model.fit(...)

    My loss is not decreasing and the output is all black. My mask is of shape (512, 512). Can someone please guide me on what to do? Any help would be great.
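    One detail worth checking in the snippet above (an editor's observation, not a confirmed fix from this thread): with activation='softmax' the model already outputs probabilities, so the loss should use from_logits=False (or the activation should be dropped and from_logits=True kept). A consistent sketch:

    import tensorflow as tf
    from model import Deeplabv3

    deeplab_model = Deeplabv3(input_shape=(512, 512, 3), classes=2, activation='softmax')
    deeplab_model.compile(optimizer='adam',
                          # softmax output means probabilities, so from_logits=False
                          loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=False),
                          metrics=['accuracy'])
    # masks: shape (N, 512, 512) with values 0 (background) and 1 (foreground)
    # deeplab_model.fit(images, masks, ...)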

    opened by anmol4210 1
  • AttributeError: 'int' object has no attribute 'value'

    AttributeError: 'int' object has no attribute 'value'

    I copied and ran this code (along with the images and model.py):

    from matplotlib import pyplot as plt
    import cv2  # used for resize. if you dont have it, use anything else
    import numpy as np
    from model import Deeplabv3

    img = plt.imread("imgs/image1.jpg")
    print(img.shape)
    w, h, _ = img.shape
    ratio = 512. / np.max([w, h])
    resized = cv2.resize(img, (int(ratio * h), int(ratio * w)))
    resized = resized / 127.5 - 1.
    new_deeplab_model = Deeplabv3(input_shape=(512, 512, 3), OS=16)

    pad_x = int(512 - resized.shape[0])
    resized2 = np.pad(resized, ((0, pad_x), (0, 0), (0, 0)), mode='constant')
    res = new_deeplab_model.predict(np.expand_dims(resized2, 0))
    res_old = old_deeplab_model.predict(np.expand_dims(resized2, 0))
    labels = np.argmax(res.squeeze(), -1)
    labels_old = np.argmax(res_old.squeeze(), -1)
    plt.imshow(labels[:-pad_x])
    plt.show()
    plt.imshow(labels_old[:-pad_x])
    plt.show()

    but I get this error =>

    ERROR:root:An unexpected error occurred while tokenizing input
    The following traceback may be corrupted or invalid
    The error message is: ('EOF in multi-line string', (1, 2))


    TypeError                                 Traceback (most recent call last)
    in ()
         26 # make prediction
         27 deeplab_model = Deeplabv3()
    ---> 28 res = deeplab_model.predict(np.expand_dims(resized_image, 0))
         29 labels = np.argmax(res.squeeze(), -1)
         30

    10 frames
    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/func_graph.py in wrapper(*args, **kwargs)
        966       func_graph: A FuncGraph object to destroy. func_graph is unusable
        967       after this function.
    --> 968       """
        969   # TODO(b/115366440): Delete this method when a custom OrderedDict is added.
        970   # Clearing captures using clear() leaves some cycles around.

    TypeError: in user code:

    TypeError: tf__predict_function() missing 8 required positional arguments: 'x', 'batch_size', 'verbose', 'steps', 'callbacks', 'max_queue_size', 'workers', and 'use_multiprocessing'
    
    opened by alikarimi120 1
Releases(1.2)
Owner
Emil Zakirov. MIPT & Skoltech. Computer Vision Engineer.