A Domain Specific Language (DSL) for building language patterns. These can later be compiled into spaCy patterns, pure regex, or any other format.

Overview

Rita Logo

RITA DSL

This is a language, loosely based on Apache UIMA RUTA, focused on writing manual language rules, which compile into either spaCy-compatible patterns or pure regex. These patterns can be used for manual NER as well as in other processes, like retokenizing and pure matching.

An Introduction Video

Intro

Links

Support

reddit Gitter

Install

pip install rita-dsl

Simple Rules example

rules = """
cuts = {"fitted", "wide-cut"}
lengths = {"short", "long", "calf-length", "knee-length"}
fabric_types = {"soft", "airy", "crinkled"}
fabrics = {"velour", "chiffon", "knit", "woven", "stretch"}

{IN_LIST(cuts)?, IN_LIST(lengths), WORD("dress")}->MARK("DRESS_TYPE")
{IN_LIST(lengths), IN_LIST(cuts), WORD("dress")}->MARK("DRESS_TYPE")
{IN_LIST(fabric_types)?, IN_LIST(fabrics)}->MARK("DRESS_FABRIC")
"""

Loading in spaCy

import spacy
from rita.shortcuts import setup_spacy


nlp = spacy.load("en")  # on spaCy v3+, use a full model name instead, e.g. spacy.load("en_core_web_sm")
setup_spacy(nlp, rules_string=rules)

And using it:

>>> r = nlp("She was wearing a short wide-cut dress")
>>> [{"label": e.label_, "text": e.text} for e in r.ents]
[{'label': 'DRESS_TYPE', 'text': 'short wide-cut dress'}]

Loading using Regex (standalone)

import rita

patterns = rita.compile_string(rules, use_engine="standalone")

And using it:

>>> list(patterns.execute("She was wearing a short wide-cut dress"))
[{'end': 38, 'label': 'DRESS_TYPE', 'start': 18, 'text': 'short wide-cut dress'}]
Comments
  • Jetbrains RITA Plugin not compatible with PyCharm 2020.2.1

    Plugin Version: 1.2 https://plugins.jetbrains.com/plugin/15011-rita-language/versions/

    Tested Version: PyCharm 2020.2.1 (Professional Edition)

    Error when trying to install from disk (screenshot attached in the original issue).

    On the plugin site https://plugins.jetbrains.com/plugin/15011-rita-language/versions/ it says that the plugin is incompatible with all IntelliJ-based IDEs in the 2020.2 version:

    The list of supported products was determined by dependencies defined in the plugin.xml:

      • Android Studio: build 201.7223 to 201.*
      • DataGrip: 2020.1.3 to 2020.1.5
      • IntelliJ IDEA Ultimate: 2020.1.1 to 2020.1.4
      • Rider: 2020.1.3
      • PyCharm Professional: 2020.1.1 to 2020.1.4
      • PyCharm Community: 2020.1.1 to 2020.1.4
      • PhpStorm: 2020.1.1 to 2020.1.4
      • IntelliJ IDEA Educational: 2020.1.1 to 2020.1.2
      • CLion: 2020.1.1 to 2020.1.3
      • PyCharm Educational: 2020.1.1 to 2020.1.2
      • GoLand: 2020.1.1 to 2020.1.4
      • AppCode: 2020.1.2 to 2020.1.6
      • RubyMine: 2020.1.1 to 2020.1.4
      • MPS: 2020.1.1 to 2020.1.4
      • IntelliJ IDEA Community: 2020.1.1 to 2020.1.4
      • WebStorm: 2020.1.1 to 2020.1.4

    opened by rolandmueller 3
  • IN_LIST ignores OP quantifier

    Somehow I get this unexpected behaviour when using OP quantifiers (?, *, +, etc) with the IN_LIST element:

    rules = """
    list_elements = {"one", "two"}
    {IN_LIST(list_elements)?}->MARK("LABEL")
    """
    rules = rita.compile_string(rules)
    expected_result = "[{'label': 'LABEL', 'pattern': [{'LOWER': {'REGEX': '^(one|two)$'}, 'OP': '?'}]}]"
    print("expected_result:", expected_result)
    print("result:", rules)
    assert str(rules) == expected_result
    

    Version: 0.5.0

    bug 
    opened by rolandmueller 3
  • Add module regex

    This feature would introduce the REGEX element as a module.

    Matches words based on a regex pattern, e.g. all words that start with an 'a' would be REGEX("^a").

    !IMPORT("rita.modules.regex")
    
    {REGEX("^a")}->MARK("TAGGED_MATCH")
    
    opened by rolandmueller 2
  • Feature/pluralize

    Add a new module for a PLURALIZE tag. For a noun or a list of nouns, it will match any singular or plural form of the word. Usage for a single word, e.g.:

    PLURALIZE("car")
    

    Usage for lists, e.g.:

    vehicles = {"car", "bicycle", "ship"}
    PLURALIZE(vehicles)
    

    Will work even for regex matches or if spaCy's lemmatizer makes an error. Has a dependency on the Python inflect package: https://pypi.org/project/inflect/
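
    A complete rule using the new tag might look like the sketch below; the module path rita.modules.pluralize and the label are assumptions, following the naming of the other modules (rita.modules.tag, rita.modules.orth). It would then match "car" as well as "cars", "ship" as well as "ships", and so on:

    !IMPORT("rita.modules.pluralize")

    vehicles = {"car", "bicycle", "ship"}
    {PLURALIZE(vehicles)}->MARK("VEHICLE")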

    opened by rolandmueller 2
  • Feature/regex tag

    This feature would introduce the TAG element as a module. It needs a new parser rule for the spaCy translator. It would allow more flexible matching on detailed part-of-speech tags, like all adjectives or nouns: TAG("^NN|^JJ").
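
    A sketch of intended usage (the !IMPORT path is taken from the TAG_WORD issue below; the concrete rule and label are only illustrative):

    !IMPORT("rita.modules.tag")

    {TAG("^JJ"), WORD("dress")}->MARK("TAGGED_MATCH")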

    opened by rolandmueller 2
  • Feature/improve robustness

    In general: measure how long it takes to compile and avoid situations where a pattern creates an infinite loop (it is possible to get into this situation using regex).

    Closes: https://github.com/zaibacu/rita-dsl/issues/78

    opened by zaibacu 1
  • Add TAG_WORD macro to Tag module

    This feature would introduce the TAG_WORD element to the Tag module

    TAG_WORD is for generating TAG patterns with a word or a list.

    e.g. match "proposed" only when it is used in the sentence as a verb (and not an adjective):

    !IMPORT("rita.modules.tag")
    
    TAG_WORD("^VB", "proposed")
    

    or, e.g., match a list of words only when they are verbs:

    !IMPORT("rita.modules.tag")
    
    words = {"percived", "proposed"}
    {TAG_WORD("^VB", words)}->MARK("LABEL")
    
    opened by rolandmueller 1
  • Add Orth module

    This feature would introduce the ORTH element as a module.

    Ignores the case-insensitive configuration and checks words exactly as written, i.e. case-sensitively, even if the configuration is case-insensitive. Especially useful for acronyms and proper names.

    Works only with the spaCy engine.

    Usage:

    !IMPORT("rita.modules.orth")
    
    {ORTH("IEEE")}->MARK("TAGGED_MATCH")
    
    opened by rolandmueller 1
  • Add configuration for implicit hyphen characters between words

    Add a new configuration implicit_hyphon (default false) for automatically adding hyphen characters (-) to the rules. Enabling implicit_hyphon disables implicit_punct. Rationale: implicit_punct is often too inclusive. It does include the hyphen token, but it also adds (at least in my use case) unwanted tokens (like parentheses) to the matches, especially for more complex rules. So implicit_hyphon is a little stricter than implicit_punct.
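
    If this lands as a regular configuration flag, it could presumably be toggled through the existing !CONFIG macro, similar to the deaccent setting mentioned in the 0.4.0 release notes (the exact value string "Y" used here is an assumption):

    !CONFIG("implicit_hyphon", "Y")

    lengths = {"calf length", "knee length"}
    {IN_LIST(lengths), WORD("dress")}->MARK("DRESS_TYPE")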

    opened by rolandmueller 1
  • Fix sequential optional

    Closes https://github.com/zaibacu/rita-dsl/issues/69

    Turns out it is a bug related to the - character, which in most cases is used as a splitter, but in this case as a standalone word.

    opened by zaibacu 1
  • Method to validate syntax

    Currently it can be partially done:

    from rita.parser import RitaParser
    from rita.config import SessionConfig
    config = SessionConfig()
    p = RitaParser(config)
    p.build()
    result = p.parse(rules)
    if result is None:
        raise RuntimeError("... Something is wrong with syntax")
    

    But it would be nice to have a single method for that and to get actual error info.
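
    A minimal sketch of what such a single method could look like, simply wrapping the snippet above (validate_syntax is a hypothetical helper, not an existing rita API, and it still would not surface detailed error info):

    from rita.config import SessionConfig
    from rita.parser import RitaParser


    def validate_syntax(rules: str) -> None:
        # Build a fresh parser bound to a per-session config
        config = SessionConfig()
        parser = RitaParser(config)
        parser.build()
        # RitaParser.parse returns None when the rules cannot be parsed
        if parser.parse(rules) is None:
            raise RuntimeError("Invalid RITA syntax")


    validate_syntax('colors = {"red", "blue"}\n{IN_LIST(colors)}->MARK("COLOR")')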

    enhancement 
    opened by zaibacu 0
  • Dynamic case sensitivity for Standalone Engine

    We want to be able to make a specified word inside a pattern case-sensitive, while the rest of the pattern is case-insensitive.

    It looks like this can be achieved using the inline modifier groups regex feature; it requires Python 3.6+.
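
    A minimal sketch of the regex feature in question (plain Python re, not rita itself; the pattern and words are illustrative). Scoped inline flags, available since Python 3.6, let a single group opt out of the pattern-wide IGNORECASE flag:

    import re

    # The whole pattern is case-insensitive, except the (?-i:...) group,
    # which stays case-sensitive (here: the acronym "IEEE").
    pattern = re.compile(r"member of (?-i:IEEE)", re.IGNORECASE)

    print(bool(pattern.search("Member of IEEE")))  # True
    print(bool(pattern.search("member of ieee")))  # False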

    enhancement 
    opened by zaibacu 0
  • JS rule engine

    It should work similarly to the standalone engine, maybe even inherit most of it, but it should produce valid JavaScript code, preferably a single function to which you give raw text and get back multiple parsed entities.

    enhancement help wanted 
    opened by zaibacu 0
  • Allow LOAD macro to load from external locations

    Currently the LOAD(file_name) macro searches for the text file in the current path.

    Usually reading from a local file is best, but it would be nice to be able to just give a GitHub Gist URL and load everything we need from there. This would be very useful for the demo page case.

    good first issue 
    opened by zaibacu 0
Releases(0.7.0)
  • 0.7.0(Feb 2, 2021)

    0.7.0 (2021-02-02)


    Features

    • The standalone engine now returns a submatches list containing start and end for each part of a match #93

    • Partially covered https://github.com/zaibacu/rita-dsl/issues/70

      Allow nested patterns, like:

          num_with_fractions = {NUM, WORD("-")?, IN_LIST(fractions)}
          complex_number = {NUM|PATTERN(num_with_fractions)}
    
          {PATTERN(complex_number)}->MARK("NUMBER")
    

    #95

    • Submatches for rita-rust engine #96

    • Regex module which allows specifying a word pattern, e.g. REGEX("^a") means the word must start with the letter "a"

      Implemented by: Roland M. Mueller (https://github.com/rolandmueller) #101

    • ORTH module which allows you to specify a case-sensitive entry while the rest of the rules ignore case. Used for acronyms and proper names

      Implemented by: Roland M. Mueller (https://github.com/rolandmueller) #102

    • Additional macro for the tag module, allowing tagging of a specific word or list of words

      Implemented by: Roland M. Mueller (https://github.com/rolandmueller) #103

    • Added names module which allows generating person name variations #105

    • spaCy v3 Support #109

    Fix

    • Optimizations for Rust Engine

      • No need to pass text forward and backward; we can calculate it from text[start:end]

      • Grouping and sorting logic can be done in binary code #88

    • Fix NUM parsing bug #90

    • Switch from (^\s) to \b when doing IN_LIST. Should solve several corner cases #91

    • Fix floating point number matching #92

    • Revert #91 changes. Keep the old way for word boundaries #94

  • 0.6.0(Aug 29, 2020)

    0.6.0 (2020-08-29)


    Features

    • Implemented ability to alias macros, e.g.:
          numbers = {"one", "two", "three"}
          @alias IN_LIST IL
    
          IL(numbers) -> MARK("NUMBER")
    

    Now using "IL" will actually call the "IN_LIST" macro. #66

    • Introduce the TAG element as a module. Needs a new parser rule for the spaCy translator. Would allow more flexible matching of detailed part-of-speech tags, like all adjectives or nouns: TAG("^NN|^JJ").

      Implemented by: Roland M. Mueller (https://github.com/rolandmueller) #81

    • Add a new module for a PLURALIZE tag. For a noun or a list of nouns, it will match any singular or plural form of the word.

      Implemented by: Roland M. Mueller (https://github.com/rolandmueller) #82

    • Add a new configuration implicit_hyphon (default false) for automatically adding hyphen characters (-) to the rules.

      Implemented by: Roland M. Mueller (https://github.com/rolandmueller) #84

    • Allow providing a custom regex implementation. By default re is used #86

    • An interface to be able to use the Rust engine.

      In general it is identical to the standalone engine, but differs in one crucial part: all of the rules are compiled into actual binary code, which provides a large performance boost. It is proprietary because there are various caveats; the engine itself is a bit more fragile and needs to be tinkered with to be optimized for a very specific case (e.g. a few long texts with many matches vs. a lot of short texts with few matches). #87

    Fix

    • Fix bug with the - character when it is used as a standalone word #71
    • Fix regex matching when the shortest word is selected from IN_LIST #72
    • Fix IN_LIST regex so that it doesn't match only part of a word #75
    • Fix IN_LIST operator bug: operators were being ignored #77
    • Use list branching only when using spaCy Engine #80
  • 0.5.0(Jun 18, 2020)

    Features

    • Added PREFIX macro which allows attaching a word in front of list items or words #47

    • Allow passing variables directly when calling compile and compile_string #51

    • Allow compiling (and later loading) rules using the rita CLI while using the standalone engine (spaCy is already supported) #53

    • Added ability to import rule files into a rule file. Recursive import is supported as well. #55

    • Added the possibility to define a pattern as a variable and reuse it in other patterns:

      Example:

    ComplexNumber = {NUM+, WORD("/")?, NUM?}
    
    {PATTERN(ComplexNumber), WORD("inches"), WORD("Height")}->MARK("HEIGHT")
    {PATTERN(ComplexNumber), WORD("inches"), WORD("Width")}->MARK("WIDTH")
    

    #64

    Fix

    • Fix issue with multiple wildcard words using standalone engine #46
    • Don't crash when no rules are provided #50
    • Fix Number and ANY-OF parsing #59
    • Allow escape characters inside LITERAL #62
  • 0.4.0(Jan 25, 2020)

    0.4.0 (2020-01-25)


    Features

    • Support for deaccenting. In general, if an accented version of a word is given, both deaccented and accented forms will be used to match. To turn it off: !CONFIG("deaccent", "N") #38
    • Added shortcuts module to simplify injecting into spaCy #42

    Fix

    • Fix issue regarding spaCy rules with IN_LIST and case-sensitive mode. It was creating a regex pattern which is not a valid spaCy pattern #40
  • 0.3.2(Dec 19, 2019)

    Features

      • Introduced towncrier to track changes
      • Added linter flake8
      • Refactored code to match pep8 #32

    Fix

      • Fix WORD split by -

      • Split by (empty space) as well

      • Coverage score increase #35

  • 0.3.0(Dec 14, 2019)

    Now there is one global config and a child config created per session (one session = one rule file compilation). Imports and variables are stored in this config as well.

    Removed the context argument from macros, making the code cleaner and easier to read.

  • 0.2.2(Dec 8, 2019)

    Features up to this point:

    • Standalone parser - can use internal regex rather than spaCy if you need to
    • Ability to do a logical OR in a rule, e.g.: {WORD(w1)|WORD(w2),WORD(w3)} would result in two rules: {WORD(w1),WORD(w3)} and {WORD(w2),WORD(w3)}
    • Exclude operator: {WORD(w1), WORD(w2)!} would match w1 and anything but w2
Owner
Šarūnas Navickas
Data Engineer @ TokenMill. Doing BJJ @ Voras-Bjj. Dad @ Home.