ChirpText is a collection of text processing tools for Python 3.

Overview

ChirpText is not meant to be a powerful tank like the popular NLTK, but a small package that you can pip-install anywhere and use to process textual data with just a few lines of code.

Main features

  • Simple file data manipulation using an enhanced open() function (txt, gz, binary, etc.)
  • CSV helper functions
  • Parse Japanese text with the mecab library (does not require the mecab-python3 package, even on Windows; only a binary release, i.e. mecab.exe, is required)
  • Built-in "lite" text annotation formats (texttaglib TTL/CSV and TTL/JSON)
  • Helper functions and useful data for processing English, Japanese, Chinese and Vietnamese.
  • Application configuration file management that can make educated guesses about config files' whereabouts
  • Quick text-based report generation
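
A minimal sketch of how several of these pieces fit together, using the APIs demonstrated in the sections below (the file names are illustrative, one of the Japanese parsers listed further down must be installed, and TextReport is assumed to be importable from the top-level package and usable as a context manager for file output):

from chirptext import TextReport, chio, deko

# read raw text; chio.read_file also handles gzipped files transparently
raw = chio.read_file('notes.txt')

# split the text into sentences and tokens
doc = deko.parse_doc(raw)

# write a simple token report to a file
with TextReport('notes-report.txt') as rp:
    rp.header("Token report", level="h1")
    for sent in doc:
        rp.print(sent)
        rp.print(sent.tokens.values())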

Installation

chirptext is available on PyPI and can be installed using pip

pip install chirptext
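
To verify the installation, the modules used throughout this README should be importable from a Python shell:

>>> from chirptext import chio        # IO helpers
>>> from chirptext import deko        # Japanese text processing
>>> from chirptext import TextReport  # text-based reports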

Parsing Japanese text

chirptext supports parsing Japanese text using different parsers (mecab, Janome, and igo-python)

>>> from chirptext import deko
>>> sent = deko.parse('猫が好きです。')
>>> sent.tokens
['`猫`<0:1>', '`が`<1:2>', '`好き`<2:4>', '`です`<4:6>', '`。`<6:7>']
>>> sent.tokens.values()
['猫', 'が', '好き', 'です', '。']
>>> sent[0]
`猫`<0:1>
>>> sent[0].pos
'名詞'
>>> sent[1].lemma
'が'
>>> sent[2].reading
'スキ'

# tokenize
>>> deko.tokenize('猫が好きです。')
['猫', 'が', '好き', 'です', '。']

# split sentences
>>> deko.tokenize_sent("猫が好きです。\n犬も好きです。")
['猫が好きです。', '犬も好きです。']

# parse a document (i.e. multiple sentences)
>>> doc = deko.parse_doc("猫が好きです。\n犬も好きです。")
>>> for sent in doc:
...     print(sent, sent.tokens.values())
... 
猫が好きです。 ['猫', 'が', '好き', 'です', '。']
犬も好きです。 ['犬', 'も', '好き', 'です', '。']

Note: at least one of the following tools must be installed to use chirptext's Japanese parsing:

  1. mecab: http://taku910.github.io/mecab/#download
  2. Janome: available on PyPI, install with pip install Janome
  3. igo-python: available on PyPI, install with pip install igo-python
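
The document-level API can be combined with a plain collections.Counter from the standard library to count token frequencies across sentences; a small sketch using only the deko calls shown above (one of the parsers listed in the note must be installed):

>>> from collections import Counter
>>> from chirptext import deko
>>> freq = Counter()
>>> for sent in deko.parse_doc("猫が好きです。\n犬も好きです。"):
...     freq.update(sent.tokens.values())  # count surface forms
... 
>>> freq['好き']
2
>>> freq['猫']
1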

Convenient IO APIs

>>> from chirptext import chio
>>> chio.write_tsv('data/test.tsv', [['a', 'b'], ['c', 'd']])
>>> chio.read_tsv('data/test.tsv')
[['a', 'b'], ['c', 'd']]

>>> chio.write_file('data/content.tar.gz', 'Support writing to .tar.gz file')
>>> chio.read_file('data/content.tar.gz')
'Support writing to .tar.gz file'

>>> for row in chio.read_tsv_iter('data/test.tsv'):
...     print(row)
... 
['a', 'b']
['c', 'd']
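
read_tsv_iter is handy for filtering larger files row by row; for example, rows can be filtered and written back out using only the functions shown above (the output file name is illustrative):

>>> from chirptext import chio
>>> rows = [row for row in chio.read_tsv_iter('data/test.tsv') if row and row[0] != 'c']
>>> chio.write_tsv('data/filtered.tsv', rows)
>>> chio.read_tsv('data/filtered.tsv')
[['a', 'b']]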

Sample TextReport

from chirptext import TextReport

# a string report
rp = TextReport()  # by default, TextReport will write to standard output, i.e. terminal
rp = TextReport(TextReport.STDOUT)  # same as above
rp = TextReport('~/tmp/my-report.txt')  # output to a file
rp = TextReport.null()  # output to /dev/null, i.e. nowhere
rp = TextReport.string()  # output to a string. Call rp.content() to get the string
rp = TextReport(TextReport.STRINGIO)  # same as above

# TextReport closes the output stream automatically when used in a with statement.
# LOREM_IPSUM is a sample text and ct is assumed to be a counter object that has
# already been populated with the letter frequencies of LOREM_IPSUM.
with TextReport.string() as rp:
    rp.header("Lorem Ipsum Analysis", level="h0")
    rp.header("Raw", level="h1")
    rp.print(LOREM_IPSUM)
    rp.header("Top 5 most common letters")
    ct.summarise(report=rp, limit=5)
    print(rp.content())

Output

+---------------------------------------------------------------------------------- 
| Lorem Ipsum Analysis 
+---------------------------------------------------------------------------------- 
 
Raw 
------------------------------------------------------------ 
Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat. Duis aute irure dolor in reprehenderit in voluptate velit esse cillum dolore eu fugiat nulla pariatur. Excepteur sint occaecat cupidatat non proident, sunt in culpa qui officia deserunt mollit anim id est laborum. 
 
Top 5 most common letters
------------------------------------------------------------ 
i: 42 
e: 37 
t: 32 
o: 29 
a: 29 
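
The same report can be written to a file instead of an in-memory string by passing a path to the constructor; a brief sketch (the path is illustrative, and it assumes a file-backed TextReport works as a context manager just like TextReport.string() above):

from chirptext import TextReport

with TextReport('~/tmp/letters.txt') as rp:
    rp.header("Top 5 most common letters")
    for letter, count in [('i', 42), ('e', 37), ('t', 32), ('o', 29), ('a', 29)]:
        rp.print("{}: {}".format(letter, count))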

Comments
  • Asking for a new release on PyPi

    Hi,

    Version 0.1a18 is a bit outdated; could you publish a newer version to PyPI?

    I only need this commit, but since a long time has passed I think most of the changes on master are stable.

    opened by matteofumagalli1275 1
  • Add CodeQL workflow for GitHub code scanning

    Hi letuananh/chirptext!

    This is a one-off automatically generated pull request from LGTM.com :robot:. You might have heard that we’ve integrated LGTM’s underlying CodeQL analysis engine natively into GitHub. The result is GitHub code scanning!

    With LGTM fully integrated into code scanning, we are focused on improving CodeQL within the native GitHub code scanning experience. In order to take advantage of current and future improvements to our analysis capabilities, we suggest you enable code scanning on your repository. Please take a look at our blog post for more information.

    This pull request enables code scanning by adding an auto-generated codeql.yml workflow file for GitHub Actions to your repository — take a look! We tested it before opening this pull request, so all should be working :heavy_check_mark:. In fact, you might already have seen some alerts appear on this pull request!

    Where needed and if possible, we’ve adjusted the configuration to the needs of your particular repository. But of course, you should feel free to tweak it further! Check this page for detailed documentation.

    Questions? Check out the FAQ below!

    FAQ

    How often will the code scanning analysis run?

    By default, code scanning will trigger a scan with the CodeQL engine on the following events:

    • On every pull request — to flag up potential security problems for you to investigate before merging a PR.
    • On every push to your default branch and other protected branches — this keeps the analysis results on your repository’s Security tab up to date.
    • Once a week at a fixed time — to make sure you benefit from the latest updated security analysis even when no code was committed or PRs were opened.

    What will this cost?

    Nothing! The CodeQL engine will run inside GitHub Actions, making use of your unlimited free compute minutes for public repositories.

    What types of problems does CodeQL find?

    The CodeQL engine that powers GitHub code scanning is the exact same engine that powers LGTM.com. The exact set of rules has been tweaked slightly, but you should see almost exactly the same types of alerts as you were used to on LGTM.com: we’ve enabled the security-and-quality query suite for you.

    How do I upgrade my CodeQL engine?

    No need! New versions of the CodeQL analysis are constantly deployed on GitHub.com; your repository will automatically benefit from the most recently released version.

    The analysis doesn’t seem to be working

    If you get an error in GitHub Actions that indicates that CodeQL wasn’t able to analyze your code, please follow the instructions here to debug the analysis.

    How do I disable LGTM.com?

    If you have LGTM’s automatic pull request analysis enabled, then you can follow these steps to disable the LGTM pull request analysis. You don’t actually need to remove your repository from LGTM.com; it will automatically be removed in the next few months as part of the deprecation of LGTM.com (more info here).

    Which source code hosting platforms does code scanning support?

    GitHub code scanning is deeply integrated within GitHub itself. If you’d like to scan source code that is hosted elsewhere, we suggest that you create a mirror of that code on GitHub.

    How do I know this PR is legitimate?

    This PR is filed by the official LGTM.com GitHub App, in line with the deprecation timeline that was announced on the official GitHub Blog. The proposed GitHub Action workflow uses the official open source GitHub CodeQL Action. If you have any other questions or concerns, please join the discussion here in the official GitHub community!

    I have another question / how do I get in touch?

    Please join the discussion here to ask further questions and send us suggestions!

    opened by lgtm-com[bot] 0
  • Revamp TTL APIs for more complex use cases

    • Simplify multi-tag handling (e.g. sense candidates, chunk languages, annotators)
    • Built-in support for CoNLL
    • Use the first tag slot for scalar tags (e.g. POS, lemma, surface, languages)
    • Re-design TTL JSON
    opened by letuananh 3
  • Add support for Leipzig and Penn Treebank tagset

    • Leipzig
      • Reference: https://www.eva.mpg.de/lingua/resources/glossing-rules.php
    • Penn Treebank tag set
      • Version 1: https://www.sketchengine.eu/penn-treebank-tagset/
      • Version 2: https://www.sketchengine.eu/english-treetagger-pipeline-2
    enhancement 
    opened by letuananh 0
Releases(0.2a2)
  • 0.2a2(May 20, 2021)

    Changes

    • Added missing keyword arguments newline and encoding to chio.write_csv and chio.write_tsv
    • Updated test cases

    PyPI link: https://pypi.org/project/chirptext/0.2.a2/
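
    The new keyword arguments presumably mirror those of the built-in open(); a small, hypothetical usage sketch:

    from chirptext import chio
    chio.write_tsv('utf8.tsv', [['a', 'b'], ['c', 'd']], encoding='utf-8', newline='')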

    Source code(tar.gz)
    Source code(zip)
  • chirptext-0.1.2(May 20, 2021)

    chirptext 0.1.2 is a stable maintenance release that keeps supporting the legacy texttaglib APIs

    Changes:

    • [v0.1.2] Added missing keyword arguments newline and encoding to chio.write_csv and chio.write_tsv
    • [v0.1.2] Updated test cases

    PyPI link: https://pypi.org/project/chirptext/0.1.2/

    To use Japanese parsing with chirptext, see chirptext 0.1.1 stable release

    Source code(tar.gz)
    Source code(zip)
  • 0.2a1(May 17, 2021)

  • chirptext-0.1.1(May 17, 2021)

  • chirptext-0.1(May 13, 2021)

  • 0.1rc1(May 2, 2021)

  • 0.1a21(Apr 23, 2021)

  • 0.1a19(Jun 1, 2020)

    • Improved texttaglib (lite) module
      • Better TTL-JSON support
      • Standardized TTL access methods (renamed find() and find_all() to get_tag() and get_tags())
    • Improved chirptext.sino module (Kangxi radical information)
    • Renamed TextReport.file to TextReport.stream (more intuitive)
    • Show fewer mecab-related warnings
    • Use Markdown for PyPI project README file
    Source code(tar.gz)
    Source code(zip)
  • 0.1a18(Jul 18, 2018)

  • 0.1a14(Apr 11, 2018)

    Deko can be used without mecab-python3 with this release.

    from chirptext import deko
    deko.set_mecab_bin("C:\\mecab\\bin\\mecab.exe")
    # Now we can use deko as usual
    sent = deko.txt2mecab("雨が降る。")
    print(sent.words)
    print(sent[0].pos)
    
    Source code(tar.gz)
    Source code(zip)
  • 0.1a11(Apr 2, 2018)

    • Added TxtWriter and TxtReader to texttaglib module for faster reading
    • Added DataObject to anhxa
    • Deko documents and sentences can be exported to TTL format
    • etc.
    Source code(tar.gz)
    Source code(zip)
  • 0.1a4(Feb 5, 2018)

    • Made WebHelper accept a string as the path to its cache DB
    • Added WebHelper.fetch_json() method
    • Fixed several bugs
    • Added a README file with some code samples
    Source code(tar.gz)
    Source code(zip)
  • 0.1a2(Jan 24, 2018)

Owner
Le Tuan Anh
computational linguist, semanticist, deeply interested in well-being, languages, and free software