Tools, wrappers, and utilities for data science, with a concentration on text processing

Overview

Rosetta

Tools for data science with a focus on text processing.

  • Focuses on "medium data", i.e. data too big to fit into memory but too small to necessitate the use of a cluster.
  • Integrates with the existing scientific Python stack as well as select outside tools.

Examples

  • See the examples/ directory.
  • The docs contain plots of example output.

Packages

cmdutils

  • Unix-like command-line utilities: filters (read from stdin, write to stdout) for files.
  • Focus on stream processing and CSV files; a minimal sketch of the filter pattern appears below.
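
A minimal sketch of the stdin-to-stdout filter pattern these utilities follow (illustrative only, not an actual rosetta script):

import csv
import sys


def main():
    """Read CSV from stdin, upper-case the first column, write CSV to stdout."""
    reader = csv.DictReader(sys.stdin)
    writer = csv.DictWriter(sys.stdout, fieldnames=reader.fieldnames)
    writer.writeheader()
    for row in reader:
        # A real filter would take the column name and operation from flags.
        first_col = reader.fieldnames[0]
        row[first_col] = row[first_col].upper()
        writer.writerow(row)


if __name__ == '__main__':
    main()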

parallel

  • Wrappers for Python multiprocessing that add ease of use
  • Memory-friendly multiprocessing (see the sketch below)
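
A hedged sketch of how these helpers are used, based on the rosetta.parallel.parallel_easy import that appears in an issue later on this page; the exact imap_easy signature (n_jobs, chunksize) is an assumption:

from rosetta.parallel.parallel_easy import imap_easy


def tokenize(line):
    return line.lower().split()


# Lazily map tokenize over a large file using 4 processes, yielding results
# in chunks so the whole corpus never sits in memory at once.
with open('corpus.txt') as lines:
    for tokens in imap_easy(tokenize, lines, n_jobs=4, chunksize=1000):
        pass  # consume results one at a time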

text

  • Stream text from disk into formats used in common ML processes
  • Write processed text to sparse formats
  • Helpers for ML tools (e.g. Vowpal Wabbit, Gensim)
  • Other general utilities; a condensed usage sketch follows below
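
A condensed sketch of the Vowpal Wabbit LDA flow, pieced together from the SFileFilter/VWFormatter/LDAResults usage shown in the issues further down this page (file names and parameter values are placeholders):

from rosetta.text.text_processors import SFileFilter, VWFormatter
from rosetta.text.vw_helpers import LDAResults

# Build a token filter over a VW-formatted sparse file of tokenized documents
sff = SFileFilter(VWFormatter())
sff.load_sfile('doc_tokens.vw')
sff.filter_extremes(doc_freq_min=5, doc_fraction_max=0.8)
sff.compactify()
sff.save('sff_file.pkl')

# After running vw --lda on the filtered file, read the results back
lda = LDAResults('topics.dat', 'prediction.dat', 'sff_file.pkl',
                 num_topics=5)
lda.print_topics()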

workflow

  • High-level wrappers that have helped with our workflow and provide additional examples of code use

modeling

  • General ML modeling utilities

Install

Check out the master branch from the rosetta repo. Then, assuming you have pip:

cd rosetta
make
make test

If you update the source, you can do

make reinstall
make test

The above make targets use pip, so you can of course do pip uninstall at any time.

Getting the source (above) is the preferred method since the code changes often, but if you don't use Git you can download a tagged release (tarball) here, then run:

pip install rosetta-X.X.X.tar.gz

Development

Code

You can get the latest sources with

git clone git://github.com/columbia-applied-data-science/rosetta

Contributing

Feel free to contribute a bug report or a feature request by opening an issue.

The preferred method to contribute is to fork and send a pull request. Before doing this, read CONTRIBUTING.md.

Dependencies

  • Major dependencies on pandas and numpy (see the requirements listing below).
  • Minor dependencies on Gensim and statsmodels.
  • Some examples require scikit-learn.
  • Minor dependency on docx.
  • Minor dependencies on the Unix utilities pdftotext and catdoc.
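
For reference, the pip log in the ImportErrors issue near the end of this page shows requirements.txt pulling in the following packages (the file in the repo is authoritative):

pandas
scikit-learn
statsmodels
gensim
docx
pyth
pymongo
MySQL-python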

Testing

From the base repo directory, rosetta/, you can run all tests with

make test

Documentation

Documentation for releases is hosted at pypi. This does NOT auto-update.

History

Rosetta refers to the Rosetta Stone, the ancient Egyptian tablet discovered just over 200 years ago. The tablet contained fragmented text in three different languages, and the uncovering of its meaning is considered an essential key to our understanding of Ancient Egyptian civilization. We would like this project to provide the tools needed to process and unearth insight in today's ever-growing volumes of textual data.

Comments
  • Fix broken test suite, use protected imports, limit dependencies, or start using requirements.txt

    The use of from rosetta.text.api import * inside tests has created dependencies that break tests. This import * statement makes it so every test depends on every import statement in the rosetta api. Since MySQLdb doesn't import for me (after 10 minutes of setting it up it still doesn't), and docx has issues that prevent it from working for many people, I can no longer run tests for anything.

    It would be safer to import only what is needed. Also, since things like docx and mysql are problematic and/or difficult to fully install, it might make sense to protect these imports like in this pull request.
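
    A minimal sketch of the protected-import pattern being suggested (illustrative only; not the actual patch from the referenced pull request, and the class name is hypothetical):

    try:
        import MySQLdb
        import MySQLdb.cursors
        HAS_MYSQL = True
    except ImportError:
        MySQLdb = None
        HAS_MYSQL = False

    class SomeDBStreamer(object):  # hypothetical name
        def __init__(self, db_setup):
            if not HAS_MYSQL:
                raise ImportError(
                    "MySQLdb is required for this streamer; "
                    "install MySQL-python to use it")
            self.db_setup = db_setup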

    bug 
    opened by langmore 16
  • Error in LDAResults

    Following the example in https://github.com/columbia-applied-data-science/rosetta/blob/master/examples/vw_helpers.md

    When I run LDAResults(), the following error prints:

    ImportError                               Traceback (most recent call last)
    <ipython-input> in <module>()
          3 lda = LDAResults('C:\Users\Desktop\DATA\LDA\topics.dat',
          4     'C:\Users\Desktop\DATA\LDA\predictions.dat', 'C:/Users/Desktop/DATA/LDA' + '/sff_basic.pkl',
    ----> 5     num_topics=num_topics)
          6 lda.print_topics()

    C:\Anaconda\lib\site-packages\rosetta\text\vw_helpers.pyc in __init__(self, topics_file, predictions_file, sfile_filter, num_topics, alpha, verbose)
        230
        231         if not isinstance(sfile_filter, text_processors.SFileFilter):
    --> 232             sfile_filter = text_processors.SFileFilter.load(sfile_filter)
        233
        234         self.sfile_frame = sfile_filter.to_frame()

    C:\Anaconda\lib\site-packages\rosetta\common_abc.pyc in load(cls, loadfile)
         40         """
         41         with smart_open(loadfile, 'rb') as f:
    ---> 42             return cPickle.load(f)

    ImportError: No module named text_processors

    opened by BrianMiner 12
  • Generic filters2

    I think I hit on the major points and recommendations I've gotten (but yell at me if I forgot any!). I changed it so that the filtering is done by updating the original dict as much as possible, and I made it clear in the documentation that that's the idea. I added back in the _done_check() method's functionality that I previously removed. I also wrote a couple of tests.

    So this is probably (at least from my perspective) pretty close to being mergeable. Let me know what you guys think!

    opened by ApproximateIdentity 10
  • small improvement on nlp.word_tokenize?

    Hi guys,

    I'm working with word_tokenize, and it doesn't handle acronyms with dots very nicely. For example, in the sentence "The U.S. official said", we get 'U' and 'S' as separate tokens. I could imagine we'd replace the line:

    text = re.sub(r'(?:\s|\[|\]|\(|\)|\{|\}|\.|;|,|:|\n|\r|\?|\!)', r'  ', text)
    

    by:

    text = re.sub(r'(?:\s|\[|\]|\(|\)|\{|\}|;|,|:|\n|\r|\?|\!)', r'  ', text)
    text = text.replace('.', '')
    

    That is, omit the period from the first replacement, and put it in the second line. Any comments? Thank you! David
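
    A self-contained sketch of the proposed change applied to the example sentence (only the substitution step of word_tokenize is shown here; the real function does more):

    import re

    def tokenize(text):
        # Proposed order: strip other punctuation first, then drop periods
        # so that "U.S." collapses to "US" instead of splitting into U / S.
        text = re.sub(r'(?:\s|\[|\]|\(|\)|\{|\}|;|,|:|\n|\r|\?|\!)', r'  ', text)
        text = text.replace('.', '')
        return text.split()

    print(tokenize("The U.S. official said"))
    # ['The', 'US', 'official', 'said']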

    opened by davaco 7
  • LDAResults.predict speedup and cmd module rename

    cleaned up changes to vw_helpers.py to hide tokenset per Ian's comments

    sped up LDAResults.predict by ~5x. Renamed the 'cmd' module to 'cmdutils' to avoid a conflict with the native Python 'cmd' module. I don't know how you guys feel about pull requests, but I think these changes would be useful for others. Thanks for letting me use your code. -Louis

    opened by zigeuner 6
  • Question: Interpretation of prob_token_topic

    Hey guys,

    I was curious about how to interpret output from the prob_token_topic function. I noticed that the probability outcome changes depending on the number of topics being conditioned on.

    The probability of 'kennedy' in topic 0 will be different under the following:

    lda.prob_token_topic(token='kennedy', c_topic=['topic_0'])
    lda.prob_token_topic(token='kennedy', c_topic=['topic_0', 'topic_3'])

    Is this as expected? How should these outcomes be interpreted?

    opened by AllardJM 5
  • Add SqliteDBStreamer, converters, and tests

    This is in reference to the following issue: https://github.com/columbia-applied-data-science/rosetta/issues/21

    I wrote a class called SqliteDBStreamer which is intended to mirror the usage of TextFileStreamer except instead of having a folder of text files as the main source of data, the files are kept in a sqlite3 database which makes many standard file operations much faster.

    I personally think this is NOT READY to merge. Each time I look at it, I find little weird things left over from how I was using the code a month ago (when I basically just hacked it together). But I wanted to throw it up here in case you guys see something really bizarre. I tried to make it as much like TextFileStreamer as possible. In the info_stream, for example, I do not currently have fields like "modification_date". I could add that as a trigger to the sqlitedb file, but I wasn't sure how necessary it was. I also have a few tweaks I need to add that speed this up, but those are minor details aside from the overall setup.

    At this point, I'm basically only treating the sqlitedb as a never-changing object (i.e. add files once and then leave them there). Now this probably doesn't make sense (though I've personally not yet had any other use case), but I'm still thinking about the best way to deal with those problems. In its current state basically all sqlite details are hidden, which is pretty nice at the moment.

    Anyway I imagine myself making many changes to this, but I figured I might as well throw this up here so you guys can give me some feedback if I'm doing something really stupid. Also I have an analysis that I'm doing for declass which could be used as a guide for using these classes, but I need to adapt it to this newer code (though on the surface it looks basically identical to how it's done with TextFileStreamer). Once I do that it could be good documentation.
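
    A rough sketch of the idea described above, assuming a table with doc_id and text columns; the actual SqliteDBStreamer interface in this PR may differ:

    import sqlite3

    class SqliteStreamerSketch(object):
        """Hypothetical minimal analogue of SqliteDBStreamer: stream document
        text out of a sqlite3 database instead of a folder of text files."""

        def __init__(self, db_path, tokenizer=None):
            self.db_path = db_path
            self.tokenizer = tokenizer

        def info_stream(self):
            conn = sqlite3.connect(self.db_path)
            try:
                # Assumed schema: one row per document
                for doc_id, text in conn.execute("SELECT doc_id, text FROM docs"):
                    yield {'doc_id': doc_id, 'text': text}
            finally:
                conn.close()

        def token_stream(self):
            for info in self.info_stream():
                yield self.tokenizer(info['text'])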

    opened by ApproximateIdentity 5
  • Separate streaming and database streaming.  Python 3-ify

    This would be a breaking change since it separates text.streamers from text.database_streamers. The difference is that people can use rosetta.text without having to have database dependencies like pymongo and MySQL-python (the latter of which requires a mysql client dependency, which is kind of annoying to carry around if you aren't using mysql). This would partially address people's install issues (e.g. https://github.com/columbia-applied-data-science/rosetta/issues/48)

    The other changes are so that rosetta installs into Python 3 environments. There are a couple of slightly dirty solutions here, implemented by catching ImportError, but for the most part rosetta is Python 3 compatible, so it might as well work there.
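
    A minimal illustration of the catch-ImportError approach mentioned above (this pattern is implied by the cPickle failure shown in a later issue; it is not necessarily the exact code in this PR):

    try:
        import cPickle as pickle  # Python 2
    except ImportError:
        import pickle  # Python 3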

    opened by mdeland 4
  • Lda sums

    Both parse_lda_topics() and parse_lda_predictions() were painfully slow. Most (~90%) of this was due to unnecessary formatting and re-casting on every iteration.

    Speedups on this branch, using

    time python lda_sums_test.py
    

    in terminal are the following:

    For a 1-million-row predictions.dat file (200k unique doc ids from a vw lda run with 5 passes and 10 topics), the current branch running

    import rosetta.text.vw_helpers as ros_vw_h
    
    predictions_file = '/tmp/prediction_large.dat'
    
    num_topics=10
    
    start_line = ros_vw_h.find_start_line_lda_predictions(predictions_file, num_topics)
    pred_iter = ros_vw_h.parse_lda_predictions(predictions_file, num_topics,
                                               start_line, normalize=False,
                                               get_iter=False)
    

    gets

    real    0m5.092s
    user    0m4.628s
    sys 0m0.465s
    

    vs

    real    10m49.159s
    user    10m46.165s
    sys 0m2.182s
    

    on the master branch. (Note: find_start_line_lda_predictions() wasn't altered between branches and is relatively fast compared to the parser, i.e. ~1.5s.)

    For a 30k-row topics.dat file (same lda run as above), the current branch running

    import rosetta.text.vw_helpers as ros_vw_h
    
    topics_file = '/tmp/topics_large.dat'
    
    topics_iter = ros_vw_h.parse_lda_topics(topics_file, num_topics,
                                            normalize=False, get_iter=False)
    
    

    gets

    real    0m1.461s
    user    0m1.219s
    sys 0m0.248s
    

    vs

    real    1m48.375s
    user    1m47.796s
    sys 0m0.505s
    

    on the master branch.

    You can run kernprof to see line-by-line profile comparisons. The differences overall are quite significant in both time and CPU load.

    Aside: due to a name/indexing bug in pandas 0.16.2+ that is not going to be fixed until 0.17, some tests started failing after the latest update. Probably the best fix for now is to simply ignore the name check in assert_series_equal in the tests; this doesn't alter the validity of the tests in question.
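
    One hedged way to do that with the pandas testing helper (assuming the tests call assert_series_equal directly) is to disable the name comparison explicitly:

    import pandas as pd
    import pandas.util.testing as pdt

    result = pd.Series([1, 2, 3], name='topic_0')
    expected = pd.Series([1, 2, 3], name=None)

    # check_names=False ignores the Series .name attribute, which is what the
    # 0.16.2+ bug affects; values and index are still compared.
    pdt.assert_series_equal(result, expected, check_names=False)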

    opened by dkrasner 4
  • Ldaresults

    Some cleanup:

    • removed redundant probability data frame in LDAResults
    • cleaned up and made more uniform
    • removed catdoc from the list of unix file converter utils since it's no longer supported and fails on OSX; replaced it with the antiword utility
    • test cleanup to reflect the above
    opened by dkrasner 4
  • Vwresults

    Parsing the lda topics file, i.e. the --readable_model output of a vw lda run, read the entire file into memory, ignoring the fact that possibly many of the tokens are "garbage", i.e. not included in the set of user-provided tokens (hashes). This forced classes like [LDAResults](https://github.com/columbia-applied-data-science/rosetta/blob/vwresults/rosetta/text/vw_helpers.py#L205) to load more data than necessary into memory. The following PR adds

        * a max token hash number argument to [parse_lda_topics](https://github.com/columbia-applied-data-science/rosetta/blob/vwresults/rosetta/text/vw_helpers.py#L205).
        * a check, first, for a max token hash number coming from [s_file_filter](https://github.com/columbia-applied-data-science/rosetta/blob/vwresults/rosetta/text/vw_helpers.py#L239) in LDAResults
    
    opened by dkrasner 4
  • Document Dependency on NLTK

    The README file lists out some dependencies, but excludes NLTK. Without NLTK, I cannot import Rosetta, see below. Is there any way to load Rosetta without installing NLTK (as I really just wanted to look at the parallel API)? If not, it should be documented.

    Thanks!

    ---------------------------------------------------------------------------
    ImportError                               Traceback (most recent call last)
    <ipython-input-4-03c43361a895> in <module>()
    ----> 1 import rosetta.parallel
    
    /usr/local/lib/python3.5/dist-packages/rosetta/__init__.py in <module>()
    ----> 1 from rosetta.text.api import *
    
    /usr/local/lib/python3.5/dist-packages/rosetta/text/api.py in <module>()
    ----> 1 from rosetta.text.streamers import TextFileStreamer
          2 
          3 from rosetta.text.text_processors import \
          4     TokenizerBasic, MakeTokenizer, SFileFilter, VWFormatter
          5 
    
    /usr/local/lib/python3.5/dist-packages/rosetta/text/streamers.py in <module>()
         15 from .. import common
         16 from ..common import lazyprop, smart_open, DocIDError
    ---> 17 from . import filefilter, text_processors
         18 
         19 
    
    /usr/local/lib/python3.5/dist-packages/rosetta/text/text_processors.py in <module>()
         22 import math
         23 
    ---> 24 import nltk
         25 import numpy as np
         26 import pandas as pd
    
    ImportError: No module named 'nltk'
    
    opened by jquacinella 0
  • "Killed" error on Step3 - LDA in VW using Rosetta

    I am trying to run LDA in VW using Rosetta. It seems to work fine for a smaller number of topics, but as soon as I go to 50 or 100, step 3 (reading the results with LDAResults) fails and I get a "Killed" error. I don't think this is a memory problem because I am running my code on a robust machine with 50GB of RAM. What's going on? Is this a VW or Rosetta issue? How can I solve it? Thanks!

    Once I have doc_tokens.vw, this is what I am running, in order:

    Step 1:

    from rosetta.text.text_processors import SFileFilter, VWFormatter

    sff = SFileFilter(VWFormatter())
    sff.load_sfile('doc_tokens.vw')

    df = sff.to_frame()
    df.head()
    df.describe()

    sff.filter_extremes(doc_freq_min=500, doc_fraction_max=0.8)
    sff.compactify()
    sff.save('sff_file.pkl')

    Step 2:

    rm -f *cache
    vw --lda 100 --lda_alpha 0.1 --lda_rho 0.1 --cache_file ddrs.cache --passes 10 -p prediction.dat --readable_model topics.dat --bit_precision 16 doc_tokens_filtered.vw

    Step 3:

    from rosetta.text.vw_helpers import LDAResults

    num_topics = 5
    lda = LDAResults('topics.dat', 'prediction.dat', 'sff_file.pkl', num_topics=num_topics)
    lda.print_topics()

    opened by bhaskar2khaneja 0
  • Cannot generate sff_file unlabelled data set file

    My vw data is of this format

    | this is great
    | I try to learn English everyday
    [...]
    

    saved as data.vw. I try to run this code:

    from rosetta.text.vw_helpers import LDAResults
    from rosetta.text.text_processors import SFileFilter, VWFormatter
    
    def generate_filefilter():
        sff = SFileFilter(VWFormatter())
        sff.load_sfile('data.lda.vw')
    
        df = sff.to_frame()
        df.head()
        df.describe()
    
        sff.filter_extremes(doc_freq_min=5, doc_fraction_max=0.8)
        sff.compactify()
        sff.save('sff_file.pkl')
    
    if __name__ == '__main__':
        generate_filefilter()
    

    And the error is:

    Traceback (most recent call last):
      File "/<home>/.venv/lib/python2.7/site-packages/rosetta/text/text_processors.py", line 380, in _parse_preamble
        if preamble[-1] != ' ':
    IndexError: string index out of range
    
    opened by binhngoc17 1
  • ImportErrors

    I installed rosetta and tried to run examples/plot_classifiers.py - got:

    /usr/local/lib/python3.4/site-packages/rosetta/text/streamers.py in <module>()
         10 import os
         11 from scipy import sparse
    ---> 12 import MySQLdb
         13 import MySQLdb.cursors
         14 import pymongo
    
    ImportError: No module named 'MySQLdb'
    

    ^ This isn't listed as a dependency in the readme. Should it be? Furthermore, it wasn't installed when I used pip to install rosetta.

    >>> import rosetta
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "rosetta/__init__.py", line 1, in <module>
        from rosetta.text.api import *
      File "rosetta/text/api.py", line 1, in <module>
        from rosetta.text.streamers import TextFileStreamer
      File "rosetta/text/streamers.py", line 20, in <module>
        import pymongo
    ImportError: No module named pymongo
    

    Pymongo too.

    >>> import rosetta
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/Users/aljohnson/code/rosetta/rosetta/__init__.py", line 1, in <module>
        from rosetta.text.api import *
      File "/Users/aljohnson/code/rosetta/rosetta/text/api.py", line 1, in <module>
        from rosetta.text.streamers import TextFileStreamer
      File "/Users/aljohnson/code/rosetta/rosetta/text/streamers.py", line 22, in <module>
        from rosetta.parallel.parallel_easy import imap_easy, parallel_apply
      File "/Users/aljohnson/code/rosetta/rosetta/parallel/parallel_easy.py", line 13, in <module>
        import cPickle
    ImportError: No module named 'cPickle'
    

    cPickle too...

    The weird thing... is that I'm seeing all these listed in the requirements.txt. So at this point I'm like wtf I'm just going to use virtual env.

    [email protected] :
    ~/code/rosetta
    $ virtualenv -p `which python` rosetta_env/ 
    Running virtualenv with interpreter /usr/bin/python
    New python executable in rosetta_env/bin/python
    Installing setuptools, pip, wheel...done.
    [email protected] :
    ~/code/rosetta
    $ source rosetta_env/bin/activate
    (rosetta_env)
    [email protected] :
    ~/code/rosetta
    $ ls
    CONTRIBUTING.md  MANIFEST.in      README_data.md   examples         notebooks        requirements.txt rosetta_env      setup.py
    LICENSE.txt      README.md        docs             makefile         notes            rosetta          scripts
    (rosetta_env)
    [email protected] :
    ~/code/rosetta
    $ pip install -r requirements.txt 
    /Users/aljohnson/code/rosetta/rosetta_env/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
      InsecurePlatformWarning
    Collecting pandas (from -r requirements.txt (line 1))
    /Users/aljohnson/code/rosetta/rosetta_env/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
      InsecurePlatformWarning
      Downloading pandas-0.16.2-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (7.3MB)
        100% |████████████████████████████████| 7.3MB 77kB/s 
    Collecting scikit-learn (from -r requirements.txt (line 2))
      Downloading scikit_learn-0.16.1-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (5.4MB)
        100% |████████████████████████████████| 5.4MB 106kB/s 
    Collecting statsmodels (from -r requirements.txt (line 3))
      Downloading statsmodels-0.6.1-cp27-none-macosx_10_6_intel.macosx_10_9_intel.macosx_10_9_x86_64.macosx_10_10_intel.macosx_10_10_x86_64.whl (4.0MB)
        100% |████████████████████████████████| 4.0MB 73kB/s 
    Collecting gensim (from -r requirements.txt (line 4))
      Using cached gensim-0.12.1.tar.gz
    Collecting docx (from -r requirements.txt (line 5))
      Using cached docx-0.2.4.tar.gz
    Collecting pyth (from -r requirements.txt (line 6))
      Using cached pyth-0.6.0.tar.gz
    Collecting pymongo (from -r requirements.txt (line 7))
    /Users/aljohnson/code/rosetta/rosetta_env/lib/python2.7/site-packages/pip/_vendor/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
      InsecurePlatformWarning
      Downloading pymongo-3.0.3-cp27-none-macosx_10_8_intel.whl (239kB)
        100% |████████████████████████████████| 241kB 2.0MB/s 
    Collecting MySQL-python (from -r requirements.txt (line 8))
      Using cached MySQL-python-1.2.5.zip
        Complete output from command python setup.py egg_info:
        sh: mysql_config: command not found
        Traceback (most recent call last):
          File "<string>", line 20, in <module>
          File "/private/var/folders/vj/_mcyrpkn30d2tph7c56yvzxxf5_jlv/T/pip-build-ylIxmf/MySQL-python/setup.py", line 17, in <module>
            metadata, options = get_config()
          File "setup_posix.py", line 43, in get_config
            libs = mysql_config("libs_r")
          File "setup_posix.py", line 25, in mysql_config
            raise EnvironmentError("%s not found" % (mysql_config.path,))
        EnvironmentError: mysql_config not found
    
        ----------------------------------------
    Command "python setup.py egg_info" failed with error code 1 in /private/var/folders/vj/_mcyrpkn30d2tph7c56yvzxxf5_jlv/T/pip-build-ylIxmf/MySQL-python
    (rosetta_env)
    

    At this point I'm not sure what the hell is going on. It's probably my fault in the end, but essentially this is seeming a lot harder than it should be, just to import rosetta.

    opened by metasyn 1
  • Add token scores to BaseStreamer.to_scipysparse()

    When the token_col dictionary is updated it would be good to also update an overall count for each token, to later use for feature selection/filtering etc.

    Perhaps the self.token_col_map should be self.token_col_count_map or there should be two separate attributes self.token_col_map and self.token_count_map - thoughts?
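
    A hedged sketch of the two-attribute option mentioned above, showing how a per-token count could be accumulated alongside the column map while building the sparse matrix (names and structure are assumptions, not the actual BaseStreamer code):

    from collections import defaultdict

    from scipy import sparse

    def to_scipysparse_sketch(token_stream):
        """Build a CSR doc-term matrix while tracking per-token totals."""
        token_col_map = {}                  # token -> column index
        token_count_map = defaultdict(int)  # token -> total count over all docs
        rows, cols, data = [], [], []

        row_idx = -1
        for row_idx, tokens in enumerate(token_stream):
            for token in tokens:
                col = token_col_map.setdefault(token, len(token_col_map))
                rows.append(row_idx)
                cols.append(col)
                data.append(1)
                token_count_map[token] += 1

        # coo_matrix sums duplicate (row, col) entries, giving term counts
        matrix = sparse.coo_matrix(
            (data, (rows, cols)),
            shape=(row_idx + 1, len(token_col_map))).tocsr()
        return matrix, token_col_map, token_count_map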

    opened by dkrasner 0
Releases: v0.3.0