Generic framework for historical document processing

Overview

dhSegment

dhSegment is a tool for Historical Document Processing. Its generic approach allows segmenting regions and extracting content from different types of documents. See some examples here.

The complete description of the system can be found in the corresponding paper.

It was created by Benoit Seguin and Sofia Ares Oliveira at DHLAB, EPFL.

Installation and usage

The installation procedure and examples of usage can be found in the documentation (see section below).

Demo

Try the demo to (optionally) train and apply dhSegment for page extraction using the demo.py script.

Documentation

The documentation is available on readthedocs.

If you are using this code for your research, you can cite the corresponding paper as:

@inproceedings{oliveiraseguinkaplan2018dhsegment,
  title={dhSegment: A generic deep-learning approach for document segmentation},
  author={Ares Oliveira, Sofia and Seguin, Benoit and Kaplan, Frederic},
  booktitle={Frontiers in Handwriting Recognition (ICFHR), 2018 16th International Conference on},
  pages={7--12},
  year={2018},
  organization={IEEE}
}
Comments
  • How to use multilabel prediction type?

    When I change prediction_type from 'CLASSIFICATION' to 'MULTILABEL', I get the following assertion:

    result.shape[1] > 3, "The number of columns should be greater in multi-label framework"
    

    So how do I use multi-label?

    Thanks!
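
    A minimal sketch of why that assertion fires, assuming the documented classes-file convention (an RGB colour per row followed by one 0/1 attribution column per class; check the exact layout against the docs). With only three columns the file describes mutually exclusive classes, so MULTILABEL needs the extra columns:

    # Hypothetical multi-label classes.txt: RGB colour + one 0/1 column per class,
    # which is what the "result.shape[1] > 3" check expects to find.
    rows = [
        # R    G    B    class1  class2
        (0,    0,   0,   0,      0),   # background
        (255,  0,   0,   1,      0),   # class 1 only
        (0,    255, 0,   0,      1),   # class 2 only
        (255,  255, 0,   1,      1),   # pixel belonging to class 1 AND class 2
    ]
    with open("classes.txt", "w") as f:
        for row in rows:
            f.write(" ".join(str(v) for v in row) + "\n")
    # The training config then selects the multi-label head, e.g. "prediction_type": "MULTILABEL".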

    opened by duchengyao 32
  • Can't be installed under Windows

    dhSegment is AWESOME and EXACTLY what my wife and I need for our post-cancer #PayItForward Bonus Round activity doing grassroots #CitizenScience #digitalhumanities research in support of eResearch and machine-learning in the domain of digitization of serial publications, primarily modern commercial magazines. We are working on the development of the #MAGAZINEgts ground-truth storage format providing standards-based (#cidocCRM/FRBRoo/PRESSoo) integrated complex document structure and content depiction models.

    When a tweet about dhSegment surfaced through my feed, I could barely contain myself... we have detailed, multi-valued metadata -- based on a metamodel of fine-grained use of PRESSoo's Issuing Rules -- that describes the location, bounding box, size, shape, number of colors, products featured, etc. for 7,157 advertisements appearing in the 48 issues of Softalk magazine (https://archive.org/details/softalkapple). It will be trivial for me to generate the annotated label images for all these ads, as we have already programmatically extracted the ad sub-images from the full pages once we used our "Ad Ferret" to discover and curate the specification for every ad.

    Once we have a dhSegment instance trained on the Softalk ads, there are over 1.5M pages just within the "collection of collections" of computer magazines at the Internet Archive, and many millions more pages of content in magazines of all types over considerable time periods of their serial publication. The #MAGAZINEgts format, together with brilliant technical achievements like dhSegment, can open new levels of scholarship and machine access to digital collections. We believe dhSegment will be a valuable component for our research platform/framework.

    With great excitement I chased down and have installed and tested the prerequisite CUDA and cuDNN frameworks/platforms under Windows. I have these features now working at the 9.1 version. (This alone was tricky, but I got it working.)

    Unfortunately, the current implementation of the incredibly important dhSegment environment cannot be installed under Windows 10. After the stock Anaconda environment yml file died somewhat dramatically, I then took that file and attempted to search for and install each package individually. (NOTE: I am not a Python expert, so what I report here is subject to refinement by someone who knows better...) Here is what is NOT available under Windows:

    # Python packages for dh_segment not available under Windows
    - dbus=1.12.2
    - fontconfig
    - glib=2.53.6
    - gmp=6.1.2
    - graphite2=1.3.10
    - gst-plugins-base
    - gstreamer=1.12.4
    - harfbuzz=1.7.4
    - jasper=1.900.1
    - libedit=3.1
    - libffi=3.2.1
    - libgcc-ng=7.2.0
    - libgfortran-ng=7.2.0
    - libopus=1.2.1
    - libstdcxx-ng=7.2.0
    - libvpx=1.6.1
    - ncurses=6.0
    - ptyprocess=0.5.2
    - readline=7.0
    - pip:
      - tensorflow-gpu==1.4.1 (I did find and install 1.8.0 instead)
    

    Anything not on this list made it into my Windows-based Anaconda environment, the yml for which I have included here as a file attachment.

    win10_dh_segment.yml.txt

    I am so disappointed to not be able to install and use dhSegment under Windows. While a docker image would likely be possible to create, I am skeptical that it would work at the level needed for interfacing with the NVIDIA hardware and its CUDA/cuDNN frameworks, etc. Alternatively, perhaps a cloud-based dev platform would work for us (that is affordable as we are independent and unfunded #CitizenScientists). Your workaround/alternative suggestions are welcome.

    At any rate, sorry for the overly long initial issue posting. But I wanted to explain my and my wife's great interest in this important technology as well as provide what I hope is useful feedback with regard to its potential use under Windows. Looking forward, I am very interested in evolving a collaborative relationship with you good folks of DHLAB.

    In the meantime, I am going to generate the labeled training images. :-)

    Happy-Healthy Vibes, FactMiner Jim

    P.S. Here is our #DATeCH2017 poster that will further explain the focus of our research. salmonsbabitsky_factminerssoftalk_poster

    P.P.S. And here is a screenshot showing a typical metadata "spec" for an ad. The simple integer value for the AdLocation is used in concert with an embedded DSL in the fine-grained Issuing Rules of the Advertising Model. This DSL provides a resolution-independent means to describe and compute the upper-left corner and bounding box of an ad. For example, the four locations of a 1/4-page-sized ad on a page with a 2-column page grid are numbered 1-4, left-to-right, top-to-bottom. The proportions of these page segments are based on simple geometric proportional computations. magazinegts_documentstructure_advertisements

    And finally, the evolving #MAGAZINEgts for the Softalk magazine collection at the Internet Archive is available here: https://archive.org/download/softalkapple/softalkapple_publication.xml

    opened by Jim-Salmons 9
  • detecting multiple instances of same object

    This page shows multiple ornament extractions on the same page, but my model never detects more than one instance of a similar object.

    I am using the same demo.py as in the master branch.

    Can someone help me?
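
    One way to get several detections instead of a single one, sketched here as an assumption rather than as dhSegment's official recipe: threshold the class probability map yourself and keep one box per connected blob, instead of keeping only the largest quadrilateral as the page-extraction demo does.

    import cv2
    import numpy as np

    def extract_instances(prob_map, threshold=0.5, min_area=500):
        """Return one bounding box per blob in a single-class probability map (H x W, values in [0, 1])."""
        binary = (prob_map > threshold).astype(np.uint8)
        # [-2] keeps this working with both OpenCV 3 (3 return values) and OpenCV 4 (2 values)
        contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
        boxes = []
        for contour in contours:
            if cv2.contourArea(contour) < min_area:
                continue  # drop tiny spurious blobs
            boxes.append(cv2.boundingRect(contour))  # (x, y, w, h)
        return boxes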

    opened by ankur7721 4
  • HOW TO USE IT ON TF SERVING BATCH PREDICTION

    I have retrained the model using my own dataset, but when I try to get predictions from TF Serving via a gRPC API call, I am not able to pass the images as a batch; it gives a dimensions error. When I pass a single image I am able to get predictions. Can someone help me with using this model for batch prediction when served?
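
    For what it's worth, a dimension error here usually means the images in one request do not all share the same shape. A rough sketch of a batched gRPC call is below, assuming the exported signature accepts a variable batch dimension; the model name, input tensor name and dtype ("dhsegment", "images", float32) are placeholders to adapt to your export.

    import grpc
    import numpy as np
    import tensorflow as tf
    from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

    def predict_batch(images, host="localhost:8500", model_name="dhsegment", input_name="images"):
        # Every image in a single request must have the same H x W x C shape,
        # so resize or pad them to a common size before stacking.
        batch = np.stack(images).astype(np.float32)  # [N, H, W, C]
        stub = prediction_service_pb2_grpc.PredictionServiceStub(grpc.insecure_channel(host))
        request = predict_pb2.PredictRequest()
        request.model_spec.name = model_name
        request.inputs[input_name].CopyFrom(tf.make_tensor_proto(batch))
        return stub.Predict(request, 30.0)  # 30 s timeout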

    opened by anish9 4
  • Original Training image with XML labels to extract data from documents

    Hi,

    I'm working on a page layout analysis and information extractor, and I found that dhSegment might work well for this task. However, I don't know whether dhSegment can be trained with XML-based annotations (TextRegion, SeparatorRegion, TableRegion, ImageRegion, points defining the bounds of each region...) in addition to the RGB-style region definitions. I see on the main page of the project that there is a Layout Analysis example under the Use Cases section; that is the case that most closely resembles the one I want to implement. Also, I want to extract text from the detected regions.

    How can I do that? Can I still use dhSegment, or do I have to implement my own detector?

    Thanks.

    Regards.
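
    dhSegment trains on image/label-mask pairs, so one approach (sketched here as an assumption, not a built-in feature) is to rasterise the PAGE-XML region polygons into RGB masks first; extracting text from the detected regions would then need a separate OCR/HTR engine. The namespace and colour mapping below are placeholders and must match classes.txt:

    import xml.etree.ElementTree as ET
    import cv2
    import numpy as np

    # Hypothetical colour per region type; must match the rows of classes.txt.
    REGION_COLORS = {"TextRegion": (255, 0, 0),
                     "SeparatorRegion": (0, 255, 0),
                     "TableRegion": (0, 0, 255)}
    NS = "http://schema.primaresearch.org/PAGE/gts/pagecontent/2013-07-15"

    def page_xml_to_mask(xml_path, height, width):
        """Rasterise PAGE-XML region polygons into an RGB label mask."""
        mask = np.zeros((height, width, 3), np.uint8)  # black background
        root = ET.parse(xml_path).getroot()
        for tag, color in REGION_COLORS.items():
            for region in root.iter("{%s}%s" % (NS, tag)):
                points = region.find("{%s}Coords" % NS).get("points")
                pts = np.array([[int(v) for v in p.split(",")] for p in points.split()], np.int32)
                cv2.fillPoly(mask, [pts], color)
        return mask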

    opened by Omua 4
  • PredictionType.CLASSIFICATION and extracting rectangles

    I am attempting CLASSIFICATION now, not MULTILABEL (issue https://github.com/dhlab-epfl/dhSegment/issues/29 was helpful in mentioning that mutually-exclusive areas mean classification, not multilabel. This is clear in retrospect ;^)

    Now I need to extract rectangles and I have hit a big gap in dhSegment. The demo.py code shows how to generate the rectangle corresponding to a skewed page, but there is only one class. I modified demo.py to identify rectangles for each label. When there are multiple classes, there can be spurious, overlapping rectangles.

    How can I:

    1. Identify the highest confidence class instances
    2. That are not overlapping

    The end result I want is one or more jpegs associated with a particular class label plus the coordinates within the input image.

    Perhaps the labels plane in the prediction result offers some help here? demo.py does not use the labels plane.
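
    A rough sketch of one way to do this, assuming the network output is an [H, W, n_classes] probability map (this is not demo.py code): score each candidate box by its mean class probability, keep the highest-scoring boxes first, and drop any box that overlaps an already-kept one (simple IoU-based suppression).

    import cv2
    import numpy as np

    def iou(a, b):
        """Intersection-over-union of two (x, y, w, h) boxes."""
        ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
        iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
        inter = ix * iy
        union = a[2] * a[3] + b[2] * b[3] - inter
        return inter / union if union else 0.0

    def class_rectangles(probs, threshold=0.5, iou_max=0.3):
        """probs: [H, W, n_classes] -> list of (class_index, (x, y, w, h), score)."""
        candidates = []
        for c in range(1, probs.shape[-1]):  # skip background class 0
            binary = (probs[..., c] > threshold).astype(np.uint8)
            contours = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[-2]
            for cnt in contours:
                x, y, w, h = cv2.boundingRect(cnt)
                score = float(probs[y:y + h, x:x + w, c].mean())
                candidates.append((c, (x, y, w, h), score))
        candidates.sort(key=lambda t: t[2], reverse=True)  # highest confidence first
        kept = []
        for cand in candidates:
            if all(iou(cand[1], k[1]) < iou_max for k in kept):
                kept.append(cand)
        return kept

    Each kept box can then be cropped from the input image and saved as a JPEG together with its class label and coordinates.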

    opened by tralfamadude 3
  • Feature/table cells

    The PAGE-XML functionality has been extended so that TableCell elements can be created. Also, an error was fixed which occurred when trying to transform a list to points.

    opened by CrazyCrud 3
  • Need a short guide of layout detection and line detection

    Hello, I have a large collection of scans of written text in table form with a complex layout structure and only the vertical borders printed. My plan is to segment the table rows cell by cell, detect the lines inside each cell, and then attempt recognition. I went through the dhSegment demo; it's OK, but I ran into problems with the operations. Could you please provide any examples of the use cases described in the overview https://dhlab-epfl.github.io/dhSegment/ ? I'm ready to label a training dataset from my collection but cannot get started. Is there any notebook or video guide? One more question is about the READ-BAD dataset that was suggested in a couple of issue discussions. I see the article PDF on arxiv.org but didn't find a link to download the image collection. What did I miss?

    opened by longwall 3
  • A more efficient neural architecture

    @solivr @SeguinBe Thank you for your hard work,

    Could you merge MobileNet v2 into master, along with adding a demo for using it? Thank you

    Waiting for your reply

    opened by mrocr 3
  • Convert generated VIA binary masks (black and white) into RGB expected format

    First, thanks for your work !

    I tried to create masks from a VIA project file (doc here). It works, but how do I convert the generated black-and-white masks into RGB masks (with classes.txt)?

    I may have missed something but I did not find the code to do it.

    Thanks for your help !
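
    In case it helps, a minimal sketch of the conversion, assuming you end up with one black-and-white mask per class out of VIA; the file names and colours below are placeholders and must match the order of classes.txt (note that OpenCV reads and writes channels in BGR order):

    import cv2
    import numpy as np

    CLASS_COLORS = [(255, 0, 0), (0, 255, 0)]            # one colour per class (match classes.txt)
    MASK_FILES = ["mask_class0.png", "mask_class1.png"]  # one binary mask per class

    def merge_binary_masks(mask_files=MASK_FILES, colors=CLASS_COLORS):
        """Combine per-class black/white masks into one RGB label image."""
        first = cv2.imread(mask_files[0], cv2.IMREAD_GRAYSCALE)
        label = np.zeros(first.shape + (3,), np.uint8)   # black = background
        for path, color in zip(mask_files, colors):
            binary = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
            label[binary > 127] = color                  # paint foreground pixels
        return label

    cv2.imwrite("label_rgb.png", merge_binary_masks())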

    opened by loic001 3
  • Model loading/training error

    When executing the command python train.py with demo/demo_config.json, I get the error below. FYI, I've followed the installation instructions with conda.

    InternalError (see above for traceback): cuDNN launch failure : input shape([1,3,1095,538]) filter shape([7,7,3,64]) [[{{node resnet_v1_50/conv1/Conv2D}} = Conv2D[T=DT_FLOAT, data_format="NCHW", dilations=[1, 1, 1, 1], padding="VALID", strides=[1, 1, 2, 2], use_cudnn_on_gpu=true, _device="/job:localhost/replica:0/task:0/device:GPU:0"](gradients/resnet_v1_50/conv1/Conv2D_grad/Conv2DBackpropFilter-0-TransposeNHWCToNCHW-LayoutOptimizer, resnet_v1_50/conv1/weights/read)]]
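
    Not a confirmed diagnosis for this exact trace, but cuDNN launch failures are often GPU out-of-memory errors or a CUDA/cuDNN version mismatch in disguise. One common mitigation to try is letting TensorFlow grow GPU memory on demand instead of pre-allocating it, which for a TF 1.x Estimator means passing a session config (how to wire this into train.py is an assumption left to the reader):

    import tensorflow as tf

    # Grow GPU memory on demand instead of grabbing it all up front.
    session_config = tf.ConfigProto()
    session_config.gpu_options.allow_growth = True

    run_config = tf.estimator.RunConfig(session_config=session_config)
    # estimator = tf.estimator.Estimator(model_fn=..., config=run_config)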

    opened by lvaleriu 3
  • Suggest to loosen the dependency on sacred

    Hi, your project dhSegment (commit id: cca94e94aec52baa9350eaaa60c006d7fde103b7) pins "sacred==0.7.4" in its dependencies. After analyzing the source code, we found that another version of sacred is also suitable, namely sacred 0.7.3, since all functions that you use directly (1 API: sacred.experiment.Experiment.init) or indirectly (propagating to 19 of sacred's internal APIs and 17 outside APIs) have not changed in these versions, so your usage is not affected.

    Therefore, we believe it is quite safe to loosen your dependency on sacred from "sacred==0.7.4" to "sacred>=0.7.3,<=0.7.4". This would improve the applicability of dhSegment and reduce the possibility of dependency conflicts with other projects.

    May I open a pull request to loosen the dependency on sacred?

    By the way, could you please tell us whether such an automatic dependency-analysis tool could be helpful for maintaining dependencies during your development?
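
    For reference, the proposed relaxation itself is a one-line change to the dependency pin, e.g. (the exact file and variable holding the pin in this repo are an assumption):

    # setup.py (sketch)
    install_requires = [
        "sacred>=0.7.3,<=0.7.4",   # was: "sacred==0.7.4"
        # ... other dependencies unchanged ...
    ]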

    opened by Agnes-U 0
  • Performance issue in the definition of model_fn, dh_segment/estimator_fn.py(P1)

    Hello, I found a performance issue in the definition of model_fn in dh_segment/estimator_fn.py: tf.cast(tf.shape(network_output)[1:3] is calculated repeatedly during program execution, resulting in reduced efficiency. I think it should be computed once, before the loop.

    Looking forward to your reply. By the way, I would be glad to create a PR to fix it if you are too busy.
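
    An illustrative sketch of the proposed hoisting (not the repository's actual model_fn code): compute the cast output size once before the loop and reuse the resulting tensor.

    import tensorflow as tf

    def resize_scales(network_output, feature_maps):
        """Illustration only: the cast output size is computed once, outside the loop."""
        output_size = tf.cast(tf.shape(network_output)[1:3], tf.float32)  # hoisted
        scales = []
        for fmap in feature_maps:
            fmap_size = tf.cast(tf.shape(fmap)[1:3], tf.float32)
            scales.append(output_size / fmap_size)  # reuses the hoisted tensor
        return scales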

    opened by DLPerf 0
  • Tensorflow 2.4 (request for permission to upgrade this repo to this)

    Hi!

    I have locally upgraded this repo to TensorFlow 2.4.1. I thought it might be helpful if I shared this code with you. If you would like, I can create a pull request with this update for the repo; I would just need permissions to do so. Let me know!

    Toby

    opened by tobyDickinson 3
  • Reproducing baseline detection results

    Hello,

    I'm trying to reproduce the baseline detection results in your paper. What was the training/validation split used? Also, is it the case that demo/demo_cbad_config.json is the same configuration used to achieve your results? Thank you!

    opened by jason-vega 0
  • Speed of Inference on GeForce GTX 1080

    My testing, based on a variation of demo.py for classification of 7 labels/classes, shows choppy performance on a GPU. Excluding Python post-processing and ignoring the first two inferences, I see processing durations like 0.09, 0.089, 0.56, 0.56, 0.079, 0.39, 0.09, ...; the average over 19 images is 0.19 sec per image.

    I'm surprised by the variance.

    At 5/sec it is workable, but it could be better. Would TensorFlow Serving help by getting Python out of the loop? I need to process 1M images per day.

    (The GPU is a GeForce GTX 1080 using 10.8 GB of 11 GB RAM; only one TF session is used for multiple inferences.)
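
    If it helps narrow this down, a small sketch for separating model latency from Python pre/post-processing is below; predict_fn stands for whatever single-image inference call is used (e.g. a loaded model's predict method), which is an assumption here. Note also that if the input images vary in size, some per-image variance in compute time is expected.

    import time
    import numpy as np

    def benchmark(predict_fn, images, warmup=2):
        """Time the inference call only, skipping the first warm-up runs."""
        durations = []
        for i, image in enumerate(images):
            start = time.perf_counter()
            predict_fn(image)
            if i >= warmup:
                durations.append(time.perf_counter() - start)
        durations = np.array(durations)
        print("mean %.3fs  std %.3fs  p95 %.3fs" % (
            durations.mean(), durations.std(), np.percentile(durations, 95)))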

    opened by tralfamadude 1
  • Multilabel limitation should be documented

    Only 7 labels are supported, and this is not documented. Since significant effort goes into preparing training data, finding out about this limitation only when running train.py is wasteful.

    opened by tralfamadude 1