Fixes mojibake and other glitches in Unicode text, after the fact.

Overview

ftfy: fixes text for you


>>> print(fix_encoding("(à¸‡'⌣')à¸‡"))
(ง'⌣')ง

Full documentation: https://ftfy.readthedocs.org

Testimonials

  • “My life is livable again!” — @planarrowspace
  • “A handy piece of magic” — @simonw
  • “Saved me a large amount of frustrating dev work” — @iancal
  • “ftfy did the right thing right away, with no faffing about. Excellent work, solving a very tricky real-world (whole-world!) problem.” — Brennan Young
  • “Helped me out the other day. Incidentally, I think we shouldn't build complex machines with computers as long as we can't even process umlauts reliably. :D” — Bruno Ranieri
  • “I have no idea when I’m gonna need this, but I’m definitely bookmarking it.” — /u/ocrow
  • “9.2/10” — pylint

Developed at Luminoso

Luminoso makes groundbreaking software for text analytics that really understands what words mean, in many languages. Our software is used by enterprise customers such as Sony, Intel, Mars, and Scotts, and it's built on Python and open-source technologies.

We use ftfy every day at Luminoso, because the first step in understanding text is making sure it has the correct characters in it!

Luminoso is growing fast and hiring. If you're interested in joining us, take a look at our careers page.

What it does

ftfy fixes Unicode that's broken in various ways.

The goal of ftfy is to take in bad Unicode and output good Unicode, for use in your Unicode-aware code. This is different from taking in non-Unicode and outputting Unicode, which is not a goal of ftfy. It also isn't designed to protect you from having to write Unicode-aware code. ftfy helps those who help themselves.
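
A minimal sketch of that division of labor (the byte string and the 'latin-1' guess here are illustrative): decoding the bytes is your job, and ftfy repairs whatever the decoding got wrong:

>>> import ftfy
>>> ftfy.fix_text(b'sch\xc3\xb6n'.decode('latin-1'))
'schön'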

Of course you're better off if your input is decoded properly and has no glitches. But you often don't have any control over your input; it's someone else's mistake, but it's your problem now.

ftfy will do everything it can to fix the problem.

Mojibake

The most interesting kind of brokenness that ftfy will fix is when someone has encoded Unicode with one standard and decoded it with a different one. This often shows up as characters that turn into nonsense sequences (called "mojibake"):

  • The word schön might appear as schÃ¶n.
  • An em dash (—) might appear as â€”.
  • Text that was meant to be enclosed in quotation marks might end up instead enclosed in â€œ and â€<9d>, where <9d> represents an unprintable character.

ftfy uses heuristics to detect and undo this kind of mojibake, with a very low rate of false positives.
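
Here is a minimal sketch of how that mix-up happens and gets undone, using fix_encoding on the schön example above:

>>> mojibake = 'schön'.encode('utf-8').decode('windows-1252')
>>> print(mojibake)
schÃ¶n
>>> print(fix_encoding(mojibake))
schön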

This part of ftfy now has an unofficial Web implementation by simonw: https://ftfy.now.sh/

Examples

fix_text is the main function of ftfy. This section is meant to give you a taste of the things it can do. fix_encoding is the more specific function that only fixes mojibake.

Please read the documentation for more information on what ftfy does, and how to configure it for your needs.

>>> print(fix_text('This text should be in â€œquotesâ€\x9d.'))
This text should be in "quotes".

>>> print(fix_text('Ã¼nicode'))
ünicode

>>> print(fix_text('Broken text&hellip; it&#x2019;s flubberific!',
...                normalization='NFKC'))
Broken text... it's flubberific!

>>> print(fix_text('HTML entities &lt;3'))
HTML entities <3

>>> print(fix_text('<em>HTML entities in HTML &lt;3</em>'))
<em>HTML entities in HTML &lt;3</em>

>>> print(fix_text('\001\033[36;44mI&#x92;m blue, da ba dee da ba '
...               'doo&#133;\033[0m', normalization='NFKC'))
I'm blue, da ba dee da ba doo...

>>> print(fix_text('ＬＯＵＤ　ＮＯＩＳＥＳ'))
LOUD NOISES

>>> print(fix_text('ＬＯＵＤ　ＮＯＩＳＥＳ', fix_character_width=False))
ＬＯＵＤ　ＮＯＩＳＥＳ
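
For comparison, here is a quick sketch of fix_encoding, which should repair only the mojibake and leave things like HTML entities alone:

>>> print(fix_encoding('schÃ¶n &amp; co.'))
schön &amp; co.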

Installing

ftfy is a Python 3 package that can be installed using pip:

pip install ftfy

(Or use pip3 install ftfy on systems where Python 2 and 3 are both globally installed and pip refers to Python 2.)

If you're on Python 2.7, you can install an older version:

pip install 'ftfy<5'

You can also clone this Git repository and install it with python setup.py install.

Who maintains ftfy?

I'm Robyn Speer ([email protected]). I develop this tool as part of my text-understanding company, Luminoso, where it has proven essential.

Luminoso provides ftfy as free, open source software under the extremely permissive MIT license.

You can report bugs regarding ftfy on GitHub and we'll handle them.

Citing ftfy

ftfy has been used as a crucial data processing step in major NLP research.

It's important to give credit appropriately to everyone whose work you build on in research. This includes software, not just high-status contributions such as mathematical models. All I ask when you use ftfy for research is that you cite it.

ftfy has a citable record on Zenodo. A citation of ftfy may look like this:

Robyn Speer. (2019). ftfy (Version 5.5). Zenodo.
http://doi.org/10.5281/zenodo.2591652

In BibTeX format, the citation is:

@misc{speer-2019-ftfy,
  author       = {Robyn Speer},
  title        = {ftfy},
  note         = {Version 5.5},
  year         = 2019,
  howpublished = {Zenodo},
  doi          = {10.5281/zenodo.2591652},
  url          = {https://doi.org/10.5281/zenodo.2591652}
}
Comments
  • Bump certifi from 2021.10.8 to 2022.12.7

    Bumps certifi from 2021.10.8 to 2022.12.7.

    dependencies 
    opened by dependabot[bot] 0
  • Performance improvements using google-re2. 2 times faster to run fix_text()

    Hi, thanks for the great lib!

    In our real-time inference server, we use ftfy to clean inputs coming from users. We noticed that processing time can be huge with a lot of text, so I ran a little experiment using google-re2, a regex engine optimized for performance. On my test file of 10,000 lines, I was able to clean the text 2 times faster: over 10 runs, I get 16.15 seconds with vanilla ftfy and 8.71 seconds with the optimizations made in this PR.

    As is, this PR is not mergeable; it implies a big change for the lib. I think it would be better to have a way of choosing the regex engine. If you are interested in merging it, I can make the necessary changes. I'm publishing it just so you and the community know it's possible and what the expected outcomes can be. Of course, I made sure that all the tests are green.

    Anyone can test it by installing this branch: pip install git+https://github.com/ablanchard/[email protected]

    Notes on the PR :

    • re.VERBOSE is not supported by google-re2. To keep comments and line returns, I process them by "hand" using a regex. Bit of a hack, but it works.
    • Lookahead and lookbehind are not supported by google-re2, so I split the UTF-8 detector and the a-grave regex into 2 separate regexes each in order to keep the same behavior. This means UTF8_DETECTOR_RE.search() doesn't return the same results as before, so you have to call the method utf8_detector() instead (see the sketch after this list). The same idea goes for the sub method.
    • By default google-re2 uses UTF-8 for encoding regexes, so to use binary strings you have to pass options=LATIN_OPTIONS.
    • I didn't migrate the surrogates for UTF-16. In my understanding, they're not supported by google-re2, so I left that as it was.
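
    To illustrate the lookbehind workaround, here's a hedged sketch (not the PR's actual code; `re` stands in for the re2 module, whose Python wrapper mimics this API) of replacing a pattern like (?<=a)b with a plain match plus a wrapper function:

    import re  # stand-in for the re2 module

    # Instead of r"(?<=a)b", match the context and the target together...
    PAIR_RE = re.compile(r"ab")

    def find_b_after_a(text):
        # ...and shift the match positions, so callers use this function
        # rather than PAIR_RE.search() directly.
        return [m.start() + 1 for m in PAIR_RE.finditer(text)]

    print(find_b_after_a("ab b ab"))  # [1, 6]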

    PS: Code used for the benchmark:

    import sys
    import time

    import ftfy
    import pandas as pd

    # Time fix_text over one text column of a CSV file
    df = pd.read_csv(sys.argv[1])
    texts = df['input_text'].tolist()
    start_time = time.time()
    res = [ftfy.fix_text(text) for text in texts]
    print(time.time() - start_time)
    
    opened by ablanchard 0
  • Restore Python 36 support

    Restore Python 36 support

    Hi! There is not much that prevents still supporting Python 3.6, which is still widely supported on Linux distros. This PR re-enables Python 3.6 support. I also removed some upper bounds on deps to avoid some issues, as highlighted in https://iscinumpy.dev/post/bound-version-constraints/. Thanks for your kind consideration!

    opened by pombredanne 0
  • Ä° and Ä« not detected as mojibake

    Hi @rspeer. Many thanks for creating and maintaining FTFY! We're using it at Sectigo to help prevent mojibake from finding its way into string fields in the digital certificates that we issue. We've noticed a couple of mojibake sequences that FTFY doesn't currently detect and fix:

    Desired behaviour:

    $ echo "İstanbul" | iconv -t WINDOWS-1252
    İstanbul
    $ echo "Rīga" | iconv -t WINDOWS-1252
    Rīga
    

    Current FTFY behaviour:

    $ echo "İstanbul" | ftfy
    İstanbul
    $ echo "Rīga" | ftfy
    Rīga
    

    Would it be possible to make FTFY handle these cases?

    opened by robstradling 0
  • On the wish list: "Pyreneeu00ebn" being explained as "Pyreneeën 71"

    A while ago I blogged about "Pyreneeën 71" on a website being incorrectly represented as "Pyreneeu00ebn".

    Basically the Unicode code point U+00EB : LATIN SMALL LETTER E WITH DIAERESIS is being represented as u00eb.

    Is this something that ftfy could potentially recognise?

    Right now it does not:

    >>> ftfy.fix_and_explain("Pyreneeu00ebn")
    ExplainedText(text='Pyreneeu00ebn', explanation=[])
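
    For what it's worth, a naive sketch (not ftfy code, and prone to false positives on ordinary words that happen to contain "u" plus four hex digits) of recognising this kind of residue:

    import re

    # "u" followed by four hex digits, as left behind when a "\u00eb"
    # escape loses its backslash
    LITERAL_ESCAPE_RE = re.compile(r"u([0-9a-fA-F]{4})")

    def decode_literal_escapes(text):
        return LITERAL_ESCAPE_RE.sub(lambda m: chr(int(m.group(1), 16)), text)

    print(decode_literal_escapes("Pyreneeu00ebn"))  # Pyreneeën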
    
    opened by jpluimers 2
  • Any idea which encoding failure could cause "beëindiging" to be printed in a letter as "beᅵindiging"?

    opened by jpluimers 0
Releases (v6.0.3)
  • v6.0.3 (Aug 23, 2021)

    Updates in 6.0.x:

    • New function: ftfy.fix_and_explain() can describe all the transformations that happen when fixing a string (a short sketch follows these notes). This is similar to what ftfy.fixes.fix_encoding_and_explain() did in previous versions, but it can fix more than the encoding.
    • fix_and_explain() and fix_encoding_and_explain() are now in the top-level ftfy module.
    • Changed the heuristic entirely. ftfy no longer needs to categorize every Unicode character, but only characters that are expected to appear in mojibake.
    • Because of the new heuristic, ftfy will no longer have to release a new version for every new version of Unicode. It should also run faster and use less RAM when imported.
    • The heuristic ftfy.badness.is_bad(text) can be used to determine whether there appears to be mojibake in a string. Some users were already using the old function sequence_weirdness() for that, but this one is actually designed for that purpose.
    • Instead of a pile of named keyword arguments, ftfy functions now take in a TextFixerConfig object. The keyword arguments still work, and become settings that override the defaults in TextFixerConfig.
    • Added support for UTF-8 mixups with Windows-1253 and Windows-1254.
    • Overhauled the documentation: https://ftfy.readthedocs.org
    • Requires Python 3.6 or later.
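
    A short sketch of the API surface described in these notes (assuming the calls behave as documented above):

    >>> import ftfy
    >>> from ftfy.badness import is_bad
    >>> ftfy.fix_and_explain('schÃ¶n').text
    'schön'
    >>> is_bad('schÃ¶n')
    True
    >>> ftfy.fix_text('HTML entities &lt;3', ftfy.TextFixerConfig(unescape_html=False))
    'HTML entities &lt;3'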
  • v5.5.1 (Mar 12, 2019)

Owner
Luminoso Technologies, Inc.