🐍💯 pySBD (Python Sentence Boundary Disambiguation) is a rule-based sentence boundary detection module that works out-of-the-box.

Overview

PySBD logo

pySBD: Python Sentence Boundary Disambiguation (SBD)

pySBD - python Sentence Boundary Disambiguation (SBD) - is a rule-based sentence boundary detection module that works out-of-the-box.

This project is a direct port of the Ruby gem Pragmatic Segmenter, which provides rule-based sentence boundary detection.

Highlights

'PySBD: Pragmatic Sentence Boundary Disambiguation', a short research paper, was accepted at the 2nd Workshop for Natural Language Processing Open Source Software (NLP-OSS) at EMNLP 2020.

Research Paper:

https://arxiv.org/abs/2010.09657

Recorded Talk:

(recorded talk)

Poster:

(poster image)

Install

Python

pip install pysbd

Usage

  • Currently pySBD supports 22 languages.

Segment text into sentences:

import pysbd
text = "My name is Jonas E. Smith. Please turn to p. 55."
seg = pysbd.Segmenter(language="en", clean=False)
print(seg.segment(text))
# ['My name is Jonas E. Smith.', 'Please turn to p. 55.']

Use pySBD as a spaCy pipeline component:

import spacy
from pysbd.utils import PySBDFactory

nlp = spacy.blank('en')

# explicitly adding component to pipeline
# (recommended - makes it more readable to tell what's going on)
# note: this PySBDFactory pattern targets spaCy v2; spaCy v3 requires a component
# registered via @Language.component (see the spaCy-related comments and pull requests below)
nlp.add_pipe(PySBDFactory(nlp))

# or you can use it implicitly with keyword
# pysbd = nlp.create_pipe('pysbd')
# nlp.add_pipe(pysbd)

doc = nlp('My name is Jonas E. Smith. Please turn to p. 55.')
print(list(doc.sents))
# [My name is Jonas E. Smith., Please turn to p. 55.]
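
To also get character offsets for each sentence, pass char_span=True; the segmenter then returns TextSpan objects with sent, start and end fields (as the issue reports further below also show). A small sketch:

import pysbd

text = "My name is Jonas E. Smith. Please turn to p. 55."
seg = pysbd.Segmenter(language="en", clean=False, char_span=True)
for span in seg.segment(text):
    print(span.sent, span.start, span.end)
# e.g. "My name is Jonas E. Smith." 0 26 (exact offsets depend on whitespace handling)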

Contributing

If you want to contribute a new feature or language support, or have found text that pySBD segments incorrectly, please head to CONTRIBUTING.md to learn more, and follow these steps:

  1. Fork it ( https://github.com/nipunsadvilkar/pySBD/fork )
  2. Create your feature branch (git checkout -b my-new-feature)
  3. Commit your changes (git commit -am 'Add some feature')
  4. Push to the branch (git push origin my-new-feature)
  5. Create a new Pull Request

Citation

If you use the pysbd package in your projects or research, please cite PySBD: Pragmatic Sentence Boundary Disambiguation.

@inproceedings{sadvilkar-neumann-2020-pysbd,
    title = "{P}y{SBD}: Pragmatic Sentence Boundary Disambiguation",
    author = "Sadvilkar, Nipun  and
      Neumann, Mark",
    booktitle = "Proceedings of Second Workshop for NLP Open Source Software (NLP-OSS)",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.nlposs-1.15",
    pages = "110--114",
    abstract = "We present a rule-based sentence boundary disambiguation Python package that works out-of-the-box for 22 languages. We aim to provide a realistic segmenter which can provide logical sentences even when the format and domain of the input text is unknown. In our work, we adapt the Golden Rules Set (a language specific set of sentence boundary exemplars) originally implemented as a ruby gem pragmatic segmenter which we ported to Python with additional improvements and functionality. PySBD passes 97.92{\%} of the Golden Rule Set examplars for English, an improvement of 25{\%} over the next best open source Python tool.",
}

Credit

This project wouldn't be possible without the great work done by the Pragmatic Segmenter team.

Comments
  • Question marks at the end swallowed

    Looks like the example with just question marks is good now:

    >>> segmenter.segment("??")
    ['??']
    

    but the example with double question marks as a token at the end of a sentence still loses the question marks:

    >>> segmenter.segment("T stands for the vector transposition. As shown in Fig. ??")
    ['T stands for the vector transposition.', 'As shown in Fig.']
    

    looks like this is the minimal repro:

    >>> segmenter.segment("Fig. ??")
    ['Fig.']
    
    bug edge-cases 
    opened by dakinggg 11
  • Pysbd just hangs🐛

    Describe the bug: The process hangs.

    To Reproduce: steps to reproduce the behavior with input text "f.302205302116302416302500302513915bd":

    import pysbd

    flat = "f.302205302116302416302500302513915bd"
    print(flat)
    x = segClean = pysbd.Segmenter(language="en", clean=True, char_span=False)
    for z in x.segment(flat):
        print(z)

    Example: Input text - "My name is Jonas E. Smith. Please turn to p. 55."

    Expected behavior: The text is returned as a single segment.

    Example: ['f.302205302116302416302500302513915bd']

    help wanted 
    opened by kariato 8
  • Incorrect text span start and end returned

    Looks like something weird is happening in this case; note that the indices of the second text span are incorrect:

    >>> seg = pysbd.Segmenter(language='en', clean=False, char_span=True)
    >>> seg.segment("1) The first item. 2) The second item.")                                                                                
    [TextSpan(sent='1) The first item.', start=0, end=18), TextSpan(sent='2) The second item.', start=0, end=19)] 
    
    bug 
    opened by dakinggg 7
  • Performance improvement?

    I am not certain of this, but I suspect there might be room for performance improvement by using re.compile to precompile all of the needed regexes. Otherwise they will have to be compiled repeatedly (once the re cache of 100 has been exceeded).

    question 
    opened by dakinggg 7
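
    A minimal sketch of the precompilation idea in plain Python (illustrative only, not pySBD's actual internals; the pattern and text below are made up for the comparison):

    import re
    import timeit

    TEXT = "Dr. Smith arrived at 5 p.m. and left at 6 p.m. sharp. " * 100

    def sub_inline():
        # a string pattern is looked up in re's internal pattern cache on every call
        return re.sub(r"p\.m\.", "pm", TEXT)

    PM_RE = re.compile(r"p\.m\.")

    def sub_precompiled():
        # the compiled pattern object is reused directly
        return PM_RE.sub("pm", TEXT)

    print("inline:     ", timeit.timeit(sub_inline, number=5000))
    print("precompiled:", timeit.timeit(sub_precompiled, number=5000))
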
  • Slovak lang support

    We've added support for SBD in Slovak language text.

    Language-specific improvements:

    • list of common Slovak abbreviations
    • list of prepositive abbreviations
    • list of number abbreviations
    • handling of Roman numerals
    • handling of „text“ quotes, which are common in Slovak
    • handling of ordinal numerals in dates, such as 17. Apríl 2020
    • modified the replacement of periods in abbreviations, so it can consistently handle common Slovak abbreviations such as Company Name s. r. o.
    • disabled processing of alphabetical lists, because of conflicts with some common abbreviations

    The code has been tested for stability on a very large corpus of web text. There has been no rigorous testing of segmentation quality, but the subjective feeling in the team is very positive.

    language 
    opened by misotrnka 6
  • Different segmentation with Spacy and when using pySBD directly

    Firstly, thank you for this project - I was lucky to find it and it is really useful.

    I seem to have found a case where the segmentation behaves differently when run within the spaCy pipeline versus when run using pySBD directly. I stumbled on it with my own text, where a sentence following a quoted sentence was being lumped together with it. I looked through the Golden Rules and found this wasn't expected, and then noticed that even with the text from one of your tests it acts differently in spaCy.

    To reproduce, run these two bits of code:

    from pysbd.utils import PySBDFactory
    nlp = spacy.blank('en')
    nlp.add_pipe(PySBDFactory(nlp))
    doc = nlp("She turned to him, \"This is great.\" She held the book out to show him.")
    for sent in doc.sents:
        print(str(sent).strip() + '\n')
    

    She turned to him, "This is great." She held the book out to show him.

    import pysbd
    text = "She turned to him, \"This is great.\" She held the book out to show him."
    seg = pysbd.Segmenter(language="en", clean=False)
    #print(seg.segment(text))
    for sent in seg.segment(text):
        print(str(sent).strip() + '\n')
    

    She turned to him, "This is great."

    She held the book out to show him.

    The second way gives the desired output (based on the rules, at least).

    bug help wanted 
    opened by nmstoker 6
  • destructive behaviour in edge-cases

    As of v0.3.3, pySBD shows destructive behavior in some edge-cases even when setting the option clean to False. When dealing with OCR text, pySBD removes whitespace after multiple periods.

    To reproduce

    import pysbd
    
    splitter = pysbd.Segmenter(language="fr", clean=False)
    
    text = "Maissen se chargea du reste .. Logiquement,"
    print(splitter.segment(text))
    
    text = "Maissen se chargea du reste ... Logiquement,"
    print(splitter.segment(text))
    
    text = "Maissen se chargea du reste .... Logiquement,"
    print(splitter.segment(text))
    

    Actual output (please note the missing whitespace after the final period in the examples with .. and ....):

    ['Maissen se chargea du reste .', '.', 'Logiquement,']
    ['Maissen se chargea du reste ... ', 'Logiquement,']
    ['Maissen se chargea du reste .', '...', 'Logiquement,']
    

    Expected output

    ['Maissen se chargea du reste .', '. ', 'Logiquement,']
    ['Maissen se chargea du reste ... ', 'Logiquement,']
    ['Maissen se chargea du reste .', '... ', 'Logiquement,']
    

    In general, pySBD works well. Many thanks @nipunsadvilkar. I can also look into this as soon as I find some time and open a pull request.

    bug edge-cases 
    opened by aflueckiger 5
  • 🏎 ⚡️ 💯 [Rough] Benchmark across Segmentation Tools, Libraries and Algorithms

    Segmentation Tools, Libraries and Algorithms:

    • [x] Stanza
    • [x] syntok
    • [x] NLTK
    • [x] spaCy
    • [x] blingfire

    | Tool      | Accuracy | Speed (ms) |
    |-----------|----------|------------|
    | blingfire | 75.00%   | 49.91      |
    | pySBD     | 97.92%   | 2449.18    |
    | syntok    | 68.75%   | 783.73     |
    | spaCy     | 52.08%   | 473.96     |
    | stanza    | 72.92%   | 120803.37  |
    | NLTK      | 56.25%   | 342.98     |

    opened by nipunsadvilkar 5
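
    A rough sketch of how per-tool timing like the table above could be measured for pySBD (illustrative only; the sentences and repeat count are arbitrary, and this is not the actual benchmark script):

    import time

    import pysbd

    # two example inputs taken from this page; repeated to get a measurable duration
    texts = [
        "My name is Jonas E. Smith. Please turn to p. 55.",
        'She turned to him, "This is great." She held the book out to show him.',
    ] * 100

    seg = pysbd.Segmenter(language="en", clean=False)

    start = time.perf_counter()
    for text in texts:
        seg.segment(text)
    elapsed_ms = (time.perf_counter() - start) * 1000
    print(f"segmented {len(texts)} texts in {elapsed_ms:.2f} ms")
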
  • ✨ 💫  Support Multiple languages

    Languages to be supported:

    • [x] English
    • [x] Bulgarian
    • [x] Spanish
    • [x] Russian
    • [x] Arabic
    • [x] Amharic
    • [x] Marathi
    • [x] Hindi
    • [x] Armenian
    • [x] Persian
    • [x] Urdu
    • [x] Polish
    • [x] Chinese
    • [x] Dutch
    • [x] Danish
    • [x] French
    • [x] Italian
    • [x] Greek
    • [x] Burmese
    • [x] Japanese
    • [x] German (Deutsch)
    • [x] Kazakh
    enhancement 
    opened by nipunsadvilkar 4
  • Regexp issues

    I'm getting errors because the regexp engine interprets parentheses: "unterminated subpattern" and "unbalanced parenthesis".

    I'm analysing very large amounts of text, so I'm not sure how these were triggered.

    opened by mollerhoj 4
  • Reduce some calls to re.sub

    So calls to re.compile are not a problem. The main thing slowing it down is lots of calls to re.sub in abbreviation_replacer.py. I reduced some of these calls, which speeds it up by a factor of ~3-3.5x on my machine for the specific (longish) document that I tested with. I also included the script I used to test timing. Given that you are much more familiar with the codebase, see if my changes look reasonable; all the tests do still pass. There are probably some more ways to speed up the calls in that file.

    enhancement 
    opened by dakinggg 4
  • How is accuracy on OPUS-100 computed?

    Hi! Thanks for this library.

    Since there is no notion of documents in the OPUS-100 dataset, it is not clear to me how accuracy is computed. I tried a naive approach using pairwise joining of sentences:

    from datasets import load_dataset
    import pysbd
    
    if __name__ == "__main__":
        sentences = [
            sample["de"].strip()
            for sample in load_dataset("opus100", "de-en", split="test")["translation"]
        ]
    
        correct = 0
        total = 0
    
        segmenter = pysbd.Segmenter(language="de")
    
        for sent1, sent2 in zip(sentences, sentences[1:]):
            out = tuple(
                s.strip() for s in segmenter.segment(sent1 + " " + sent2)
            )
    
            total += 1
    
            if out == (sent1, sent2):
                correct += 1
    
        print(f"{correct}/{total} = {correct / total}")
    

    But I get 1011/1999 = 50.6% accuracy, which is not close to the 80.95% accuracy reported in the paper.

    Thanks for any help!

    opened by bminixhofer 1
  • Added decorator as required by latest SpaCy

    Hello!

    While using pySBD, I've noticed that the current example script no longer works with the latest version of spaCy (3.3.0). This is the traceback I get:

    Traceback (most recent call last):
      File "/Users/lucas/Code/significant-statements-extraction/scripts/test_pysbd.py", line 27, in <module>
        nlp.add_pipe(pysbd_sentence_boundaries)
      File "/Users/lucas/miniforge3/envs/pytorch_p39/lib/python3.9/site-packages/spacy/language.py", line 773, in add_pipe
        raise ValueError(err)
    ValueError: [E966] `nlp.add_pipe` now takes the string name of the registered component factory, not a callable component. Expected string, but got <function pysbd_sentence_boundaries at 0x11ffa9160> (name: 'None').
    
    - If you created your component with `nlp.create_pipe('name')`: remove nlp.create_pipe and call `nlp.add_pipe('name')` instead.
    
    - If you passed in a component like `TextCategorizer()`: call `nlp.add_pipe` with the string name instead, e.g. `nlp.add_pipe('textcat')`.
    
    - If you're using a custom component: Add the decorator `@Language.component` (for function components) or `@Language.factory` (for class components / factories) to your custom component and assign it a name, e.g. `@Language.component('your_name')`. You can then run `nlp.add_pipe('your_name')` to add it to the pipeline.
    

    This pull request adds a @Language.component decorator to make pySBD available in spaCy again.

    opened by soldni 0
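
    For reference, a minimal sketch of what that decorator-based registration can look like under spaCy v3 (the component name pysbd_sentences, the use of char_span offsets, and alignment_mode="expand" are illustrative assumptions, not the exact code from this pull request):

    import pysbd
    import spacy
    from spacy.language import Language

    seg = pysbd.Segmenter(language="en", clean=False, char_span=True)

    @Language.component("pysbd_sentences")  # hypothetical component name
    def pysbd_sentences(doc):
        # mark the first token of every pySBD sentence as a sentence start
        start_token_ids = set()
        for sent in seg.segment(doc.text):
            span = doc.char_span(sent.start, sent.end, alignment_mode="expand")
            if span is not None:
                start_token_ids.add(span[0].i)
        for token in doc:
            token.is_sent_start = token.i in start_token_ids
        return doc

    nlp = spacy.blank("en")
    nlp.add_pipe("pysbd_sentences", first=True)

    doc = nlp("My name is Jonas E. Smith. Please turn to p. 55.")
    print(list(doc.sents))
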
  • Arabic sentence split on the Arabic comma

    Describe the bug: Arabic sentences are split on the Arabic comma.

    To Reproduce:

    import pysbd
    text = "هذه تجربة، للغة العربية"
    seg = pysbd.Segmenter(language="ar", clean=True)
    print(seg.segment(text))
    

    Output: ['هذه تجربة،', 'للغة العربية']

    Expected behavior: The text should not be split on the Arabic comma. Expected output: ['هذه تجربة، للغة العربية']

    Additional context: I fixed it locally by modifying pysbd/lang/arabic.py and deleting ، from SENTENCE_BOUNDARY_REGEX.

    opened by ymoslem 0
  • Does pySBD delete sentences after detection?

    Hey there. I've been using pySBD to detect boundaries in Hindi and Marathi text and then save the same data rearranged from paragraphs to one sentence per sample. Unfortunately the storage size has gone down from 22 GB to 14.5 GB after just detecting boundaries and saving them per sentence, and yes, I did turn off the clean arg.

    opened by StephennFernandes 0
  • Update pysbd_as_spacy_component.py

    Thanks for a great sentence-splitting package. A small contribution after troubleshooting why the code was not working out of the box: spaCy v3 requires a string in the add_pipe() call, and the component needs to be declared using the @Language decorator. See also https://spacy.io/usage/processing-pipelines#custom-components. Hope it helps other users.

    opened by guebeln0 0
Releases(v0.3.4)
  • v0.3.4(Feb 11, 2021)

  • v0.3.3(Oct 8, 2020)

  • v0.3.2(Sep 11, 2020)

  • v0.3.1(Aug 11, 2020)

  • v0.3.0(Aug 11, 2020)

    • ✨ 💫 Support Multiple languages - #2
    • 🏎⚡️💯 Benchmark across Segmentation Tools, Libraries and Algorithms
    • 🎨 ♻️ Update sentence char_span logic
    • ⚡️ Performance improvements - #41
    • ♻️🐛 Refactor AbbreviationReplacer
  • v0.3.0rc(Jun 9, 2020)

    • ✨ 💫 sent char_span through with spaCy & regex approach - #63
    • ♻️ Refactoring to support multiple languages
    • ✨ 💫 Initial language support for Hindi, Marathi, Chinese, Spanish
    • ✅ Updated tests - more coverage & regression tests for issues
    • 👷👷🏻‍♀️ GitHub actions for CI-CD
    • 💚☂️ Add code coverage with coverage.py and Codecov
    • 🐛 Fix incorrect text span & vanilla pysbd vs spacy output discrepancy - #49, #53, #55 , #59
    • 🐛 Fix NUMBERED_REFERENCE_REGEX for zero or one time - #58
    • 🔐 Fix security vulnerability in bleach - #62
  • v0.2.3(Nov 13, 2019)

  • v0.2.2(Nov 1, 2019)

  • v0.2.1(Oct 30, 2019)

  • v0.2.0(Oct 25, 2019)

    • ✨Add char_span parameter (optional) to get sentence & its (start, end) char offsets from original text
    • ✨pySBD as a spaCy component example
    • 🐛 Fix double question mark swallow bug - #39
  • v0.1.5(Oct 24, 2019)

  • v0.1.4(Oct 20, 2019)

    • ✨ ✅ Handle intermittent punctuation: added special case r"[。..!!?].*" to handle intermittent dots, exclamation marks, etc. The special cases group can be updated as per developer needs - #34
  • v0.1.3(Oct 19, 2019)

    • 🐛 Fix lists_item_replacer - #29
    • 🐛 Fix & ♻️ refactor replace_multi_period_abbreviations - #30
    • 🐛 Fix abbreviation_replacer - #31
    • ✅ Add regression tests for issues
  • v0.1.2(Oct 18, 2019)

  • v0.1.1(Oct 9, 2019)

Owner
Nipun Sadvilkar
I like to explore Jungle of Data with Python as my swiss knife with pandas, numpy, matplotlib and scikit-learn as its multi-tools😅