Precise Iris Segmentation

Overview

[Figure: PI-DECODER overview]

Introduction

PI-DECODER is a decoder structure designed for precise iris segmentation and localization. The decoder structure is shown below:

[Figure: PI-DECODER decoder structure]

Please check technical paper.pdf in the "reference" subfolder for more details.

How to use?

For the African dataset, run the following command in your terminal:

python main.py --mode test --model_path ./models/african_best.pth --test_mode 1 --train_dataset african

This produces the iris mask, pupil mask, and outer iris mask predicted from the input images; the corresponding evaluation metrics are printed to the terminal at the same time:

(ijcb) PS F:\workspace\code\pytorch\PI-DECODER> python main.py --mode test --model_path ./models/african_best.pth --test_mode 1 --train_dataset african
Namespace(batch_size=1, beta1=0.9, beta2=0.999, img_size=(640, 640), lr=0.0002, mode='test', model_path='./models/african_best.pth', num_epochs=100, num_workers=2, result_path='./result/', test_mode=1, test_path='./dataset/test/', train_dataset='african', train_path='./dataset/train/', valid_path='./dataset/valid/')
image count in train path :5
image count in valid path :5
image count in test path :40
Using Model: PI-DECODER
0.0688 seconds per image

----------------------------------------------------------------------------------------------------------------
|evaluation     |e1(%)          |e2(%)          |miou(%)        |f1(%)          |miou_back      |f1_back        |
----------------------------------------------------------------------------------------------------------------
|iris seg       |0.384026       |0.192013       |91.175200      |95.350625      |95.386805      |97.574698      |
|iris mask      |0.569627       |0.284813       |93.159855      |96.430411      |96.270919      |98.060105      |
|pupil mask     |0.078793       |0.039396       |93.138878      |96.409347      |96.529547      |98.184718      |
----------------------------------------------------------------------------------------------------------------
|average        |0.344149       |0.172074       |92.491311      |96.063461      |96.062424      |97.939840      |
----------------------------------------------------------------------------------------------------------------
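
The columns above appear to follow common iris-segmentation conventions (miou_back and f1_back are also percentages, averaged with the background class). Below is a minimal sketch of how such per-mask scores are typically computed from a binary prediction and its ground truth; the exact formulas in main.py may differ, so treat this as an assumption:

import numpy as np

def seg_metrics(pred, gt):
    """Sketch of per-mask scores for binary segmentation masks.

    pred, gt: arrays of the same shape, nonzero = foreground. These
    definitions follow common iris-segmentation conventions (note that
    e2 = e1 / 2 in the table above) and may differ from main.py.
    """
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    n = pred.size

    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()

    e1 = (fp + fn) / n                # fraction of disagreeing pixels
    e2 = 0.5 * (fp + fn) / n          # equal-weighted FP/FN variant
    iou = tp / (tp + fp + fn)         # IoU of the foreground class
    f1 = 2 * tp / (2 * tp + fp + fn)  # Dice / F1 of the foreground
    return e1 * 100, e2 * 100, iou * 100, f1 * 100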

If you do not have ground-truth files, or you just want to save the results, use test mode 2:

python main.py --mode test --model_path ./models/african_best.pth --test_mode 2 --train_dataset african
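
Predicted masks are written under result_path (./result/ by default, per the Namespace printed above). As a rough illustration, a saved mask can be overlaid on its input image for a quick visual check; the file names below are hypothetical:

import cv2

# Hypothetical file names; adjust to the layout main.py actually writes
# under result_path (./result/ by default).
image = cv2.imread("./dataset/test/sample.png")
mask = cv2.imread("./result/sample_iris_mask.png", cv2.IMREAD_GRAYSCALE)

# Masks are predicted at img_size (640x640); resize to the image if needed.
mask = cv2.resize(mask, (image.shape[1], image.shape[0]),
                  interpolation=cv2.INTER_NEAREST)

# Tint predicted iris pixels red and blend with the original image.
overlay = image.copy()
overlay[mask > 127] = (0, 0, 255)
blended = cv2.addWeighted(image, 0.6, overlay, 0.4, 0)
cv2.imwrite("overlay.png", blended)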

Requirements

All experiments were run on an NVIDIA RTX 3060. The following environment configuration is recommended:

matplotlib        3.3.4
numpy             1.19.5
opencv-python     4.5.1.48
pandas            1.1.5
Pillow            8.1.2
pip               21.0.1
pyparsing         2.4.7
python-dateutil   2.8.1
pytz              2021.1
scipy             1.5.4
setuptools        52.0.0.post20210125
six               1.15.0
thop              0.0.31.post2005241907
torch             1.7.0+cu110
torchstat         0.0.7
torchsummary      1.5.1
torchvision       0.8.1+cu110
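
Note that the +cu110 builds of torch and torchvision are not on PyPI; assuming the standard PyTorch wheel index, they can be installed with:

pip install torch==1.7.0+cu110 torchvision==0.8.1+cu110 -f https://download.pytorch.org/whl/torch_stable.html

The remaining packages install normally with pip.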