CodeFlare - Scale complex AI/ML pipelines anywhere

Overview


CodeFlare is a framework to simplify the integration, scaling and acceleration of complex multi-step analytics and machine learning pipelines on the cloud.

Its main features are:

  • Pipeline execution and scaling: CodeFlare Pipelines facilitates the definition and parallel execution of pipelines. It unifies pipeline workflows across multiple frameworks while providing nearly optimal scale-out parallelism on pipelined computations.
  • Deploy and integrate anywhere: CodeFlare simplifies deployment and integration by enabling a serverless user experience through integration with Red Hat OpenShift and IBM Cloud Code Engine, and by providing adapters and connectors that make it simple to load data and connect to data services.

Release status

This project is under active development. See the Documentation for design descriptions and the latest version of the APIs.

Quick start

Run on your laptop

Installing locally

CodeFlare can be installed from PyPI.

Prerequisites:

We recommend installing Python 3.8.6 using pyenv. Recommended steps for setting up the Python environment can be found here.
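
If you use pyenv, the setup typically looks like this (a sketch, assuming pyenv is already installed; the version pin follows the recommendation above):

pyenv install 3.8.6
pyenv local 3.8.6    # use Python 3.8.6 in the current directory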

Install from PyPI:

pip3 install --upgrade pip          # CodeFlare requires pip >21.0
pip3 install --upgrade codeflare

Alternatively, you can build and install from source:

git clone https://github.com/project-codeflare/codeflare.git
cd codeflare
pip3 install --upgrade pip
pip3 install .

Using Docker

You can try CodeFlare by running the docker image from Docker Hub:

  • projectcodeflare/codeflare:latest has the latest released version installed.

The command below starts the latest release in a clean environment:

docker run --rm -it -p 8888:8888 projectcodeflare/codeflare:latest

It should produce output similar to the example below, where you can find the URL for running CodeFlare from a Jupyter notebook in your local browser.

[I ... ServerApp] Jupyter Server ... is running at:
...
[I ... ServerApp]     http://127.0.0.1:8888/lab

Using Binder service

You can try out some of CodeFlare's features using the My Binder service.

Click on the link below to try CodeFlare in a sandbox environment, without having to install anything.

Binder

Pipeline execution and scaling

CodeFlare Pipelines reimagines pipelines, providing a more intuitive API for data scientists to create AI/ML pipelines, data workflows, pre-processing and post-processing tasks, and more, all of which can scale seamlessly from a laptop to a cluster.

See the API documentation here, and reference use case documentation in the Examples section.

A set of reference examples are provided as executable notebooks.

To run examples, if you haven't done so yet, clone the CodeFlare project with:

git clone https://github.com/project-codeflare/codeflare.git

Example notebooks require JupyterLab, which can be installed with:

pip3 install --upgrade jupyterlab

Use the command below to run locally:

jupyter-lab codeflare/notebooks/<example_notebook>

The step above should automatically open a browser window and connect to a running Jupyter server.

If you are using any of the recommended cloud-based deployments (see below), the examples are in the codeflare/notebooks directory of the container image and can be executed directly from the Jupyter environment.

As a first example of the API usage, see the sample pipeline.
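
For a flavor of the API, here is a minimal sketch that builds and fits a two-node pipeline on Ray. It is assembled from the calls used in the examples and issues quoted later on this page (dm.Pipeline, dm.EstimatorNode, rt.execute_pipeline, rt.select_pipeline), so treat it as illustrative; exact import paths and signatures may vary by version:

import ray
from sklearn.datasets import make_classification
from sklearn.preprocessing import MinMaxScaler
from sklearn.linear_model import LogisticRegression
import codeflare.pipelines.Datamodel as dm
import codeflare.pipelines.Runtime as rt
from codeflare.pipelines.Runtime import ExecutionType

ray.init()                                  # start or connect to a Ray cluster

X, y = make_classification(n_samples=1000)  # toy data for illustration

# Build a two-node pipeline: scaler -> classifier.
pipeline = dm.Pipeline()
node_scaler = dm.EstimatorNode('scaler', MinMaxScaler())
node_clf = dm.EstimatorNode('clf', LogisticRegression())
pipeline.add_edge(node_scaler, node_clf)

# Feed the data to the input node and fit the pipeline in parallel on Ray.
pipeline_input = dm.PipelineInput()
pipeline_input.add_xy_arg(node_scaler, dm.Xy(X, y))
pipeline_fitted = rt.execute_pipeline(pipeline, ExecutionType.FIT, pipeline_input)

# Select the fitted pipeline that ends at the classifier node.
clf_pipeline = rt.select_pipeline(pipeline_fitted, pipeline_fitted.get_xyrefs(node_clf)[0])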

For an example of how CodeFlare Pipelines can be used to scale out common machine learning problems, see the grid search example. It shows how hyperparameter optimization for a reference pipeline can be scaled and accelerated with both task and data parallelism.

Deploy and integrate anywhere

Unleash the power of pipelines by seamlessly scaling on the cloud. CodeFlare can be deployed on any Kubernetes-based platform, including IBM Cloud Code Engine and Red Hat OpenShift Container Platform.

  • IBM Cloud Code Engine for detailed instructions on how to run CodeFlare on a serverless platform.
  • Red Hat OpenShift for detailed instructions on how to run CodeFlare on OpenShift Container Platform.

Contributing

Join us in making CodeFlare better! We encourage you to take a look at our Contributing page.

Blog

CodeFlare-related blogs are published on our Medium publication.

License

CodeFlare is an open-source project with an Apache 2.0 license.

Comments
  • Error running notebook

    Error running notebook "RaySystemError: System error: buffer source array is read-only"

    Describe the bug I'm trying to run the example notebooks (in codeflare/notebooks) and came across this error. The error persisted through attempts to restart my kernel, restart my entire machine, and re-clone the repo. Any help, or an explanation of the root cause, is much appreciated!

    To Reproduce Steps to reproduce the behavior:

    1. Go to notebooks/plot_nca_classification.ipynb
    2. Run the 2nd code block. It uses Ray and CodeFlare.
    3. This line produces the error: knn_pipeline = rt.select_pipeline(pipeline_fitted, pipeline_fitted.get_xyrefs(node_knn)[0])
    4. See error: RaySystemError: System error: buffer source array is read-only

    Full stack trace:

    RaySystemError: System error: buffer source array is read-only
    traceback: Traceback (most recent call last):
      File "/home/kastan/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/serialization.py", line 268, in deserialize_objects
        obj = self._deserialize_object(data, metadata, object_ref)
      File "/home/kastan/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/serialization.py", line 191, in _deserialize_object
        return self._deserialize_msgpack_data(data, metadata_fields)
      File "/home/kastan/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/serialization.py", line 169, in _deserialize_msgpack_data
        python_objects = self._deserialize_pickle5_data(pickle5_data)
      File "/home/kastan/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/serialization.py", line 157, in _deserialize_pickle5_data
        obj = pickle.loads(in_band, buffers=buffers)
      File "sklearn/neighbors/_dist_metrics.pyx", line 223, in sklearn.neighbors._dist_metrics.DistanceMetric.__setstate__
      File "stringsource", line 658, in View.MemoryView.memoryview_cwrapper
      File "stringsource", line 349, in View.MemoryView.memoryview.__cinit__
    ValueError: buffer source array is read-only
    
    
    ---------------------------------------------------------------------------
    RaySystemError                            Traceback (most recent call last)
    /tmp/ipykernel_1251/3313313255.py in <module>
          9 test_input.add_xy_arg(node_scalar, dm.Xy(X_test, y_test))
         10 
    ---> 11 knn_pipeline = rt.select_pipeline(pipeline_fitted, pipeline_fitted.get_xyrefs(node_knn)[0])
         12 knn_score = ray.get(rt.execute_pipeline(knn_pipeline, ExecutionType.SCORE, test_input)
         13                     .get_xyrefs(node_knn)[0].get_yref())
    
    ~/.pyenv/versions/3.8.6/lib/python3.8/site-packages/codeflare/pipelines/Runtime.py in select_pipeline(pipeline_output, chosen_xyref)
        381         curr_xyref = xyref_queue.get()
        382         curr_node_state_ptr = curr_xyref.get_curr_node_state_ref()
    --> 383         curr_node = ray.get(curr_node_state_ptr)
        384         prev_xyrefs = curr_xyref.get_prev_xyrefs()
        385 
    
    ~/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/_private/client_mode_hook.py in wrapper(*args, **kwargs)
         87             if func.__name__ != "init" or is_client_mode_enabled_by_default:
         88                 return getattr(ray, func.__name__)(*args, **kwargs)
    ---> 89         return func(*args, **kwargs)
         90 
         91     return wrapper
    
    ~/.pyenv/versions/3.8.6/lib/python3.8/site-packages/ray/worker.py in get(object_refs, timeout)
       1621                     raise value.as_instanceof_cause()
       1622                 else:
    -> 1623                     raise value
       1624 
       1625         if is_individual_id:
    
    

    Expected behavior Selecting the pipeline and evaluating its score via a SCORE pipeline.

    Desktop

    • OS: Ubuntu 20.04 via WSL2 on Windows.
    • Python 3.8.6

    Thank you for any help! I am a University of Illinois at Urbana-Champaign grad student trying to make the most of your work!

    opened by KastanDay 6
  • Replace SimpleQueue

    Replace SimpleQueue

    Overview

    Currently, lineage uses SimpleQueue to realize pipelines, but SimpleQueue is only available in Python >=3.8. This reduces adoption; moving to Queue will give us broader Python version coverage.
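
    For illustration, the swap should be close to a drop-in change, since Queue exposes the same put/get calls (a sketch only; the actual call sites live in the pipeline lineage code):

    from queue import Queue   # available on more Python versions than SimpleQueue

    xyref_queue = Queue()     # previously: SimpleQueue()
    xyref_queue.put("xyref")  # same producer call
    item = xyref_queue.get()  # same consumer call; Queue additionally supports
                              # maxsize, task_done() and join(), at slight locking overhead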

    Acceptance Criteria

    • [x] Replace SimpleQueue with Queue
    • [x] Ensure tests pass

    Questions

    • What are the drawbacks of using Queue vs SimpleQueue?

    Assumptions

    Reference

    • https://towardsdatascience.com/dive-into-queue-module-in-python-its-more-than-fifo-ce86c40944ef
    enhancement 
    opened by raghukiran1224 5
  • CodeFlare resiliency tool: initial commit

    CodeFlare resiliency tool: initial commit

    What does this PR do? This is a first step towards improving resiliency and performance in Ray without modifying its source code. This PR includes a new tool that helps configure a Ray cluster conveniently. The tool helps in fetching and parsing Ray configurations and generating resiliency profiles (e.g., strict, relaxed, recommended). Currently, we are deciding the configuration options for each resiliency profile manually by evaluating them on various Ray workloads. We'll update this PR accordingly.

    Description of Changes The changes in this PR are currently independent of the main CodeFlare code. We intend to put this tool in a new folder called utils in the CodeFlare root directory.

    opened by JainTwinkle 2
  • Predicted output is not properly assigned to get_yref(), instead is assigned to get_Xref()

    Predicted output is not properly assigned to get_yref(), instead is assigned to get_Xref()

    Describe the bug After running a PREDICT, y_pred cannot be obtained via get_yref(); instead it can be obtained via get_Xref(). Semantically, this seems weird.

    To Reproduce Steps to reproduce the behavior:

    1. Go to https://github.ibm.com/codeflare/ray-pipeline/blob/complex-example-1/notebooks/plot_feature_selection_pipeline.ipynb
    2. Scroll down to y_pred = ray.get(predict_clf_output[0].get_yref())
    3. If you change that statement to y_pred = ray.get(predict_clf_output[0].get_Xref()), the output matches the original sklearn pipeline at the top.

    Expected behavior The predicted output should be obtained from calling get_yref().


    bug good first issue cfp-runtime cfp-datamodel 
    opened by raghukiran1224 2
  • sample pipeline jupyter notebook on binder errors-out

    sample pipeline jupyter notebook on binder errors-out

    Describe the bug The sample pipeline Jupyter notebook errors out due to an undefined variable.

    To Reproduce Steps to reproduce the behavior:

    1. Go to binder
    2. Click on sample pipeline jupyter notebook
    3. Run

    Expected behavior Jupyter notebook on binder should run without exception

    Additional context Error occurs while executing this cell:

    pipeline_output = rt.execute_pipeline(pipeline, ExecutionType.FIT, pipeline_input)
    node_0_output = pipeline_output.get_xyrefs(node_0)

    In [74]: outputs[0]

    ---------------------------------------------------------------------------
    NameError                                 Traceback (most recent call last)
    <ipython-input-74-a45df8d4a457> in <module>
    ----> 1 outputs[0]

    NameError: name 'outputs' is not defined
    opened by asm582 2
  • Jupyter notebook plot_scalable_poly_kernels dies when run on binder

    Jupyter notebook plot_scalable_poly_kernels dies when run on binder

    Describe the bug Jupyter notebook kernel dies

    To Reproduce Steps to reproduce the behavior:

    1. Go to Binder
    2. Click on plot_scalable_poly_kernels
    3. Run the notebook

    Expected behavior The jupyter notebook should run without error

    opened by asm582 2
  • Grid search jupyter notebook on binder missing graphviz library

    Grid search jupyter notebook on binder missing graphviz library

    Graphviz is missing from the Binder service.

    To Reproduce Steps to reproduce the behavior:

    1. Go to the Binder service
    2. Run the Grid search notebook

    Additional context

    The error below is caused by execution of this cell:

    non_param_graph = cf_utils.pipeline_to_graph(pipeline)
    non_param_graph
    

    ExecutableNotFound: failed to execute ['dot', '-Kdot', '-Tsvg'], make sure the Graphviz executables are on your systems' PATH

    bug wontfix 
    opened by asm582 2
  • Refactor configuration utility tool; added support for latest Ray version

    Refactor configuration utility tool; added support for latest Ray version

    Related PRs Extending #37

    What does this PR do?

    This PR extends the Ray resiliency config tool. The PR does the following:

    1. The Ray config utility script now supports configurations from Ray v1.6, 1.7, and 1.8.

    2. The tool now saves config files into their respective version directories. This is more organized than saving files from all Ray versions into a single folder. For example, the tool now saves output config files in the following layout by default:

    ├── configs
    │   ├── 1.0.0
    │   │   ├── <Ray 1.0.0 related config files>
    │   ├── 1.1.0
    │   │   └── <Ray 1.1.0 related config files>

    3. The configuration parsing code is more generalized than before. Removed some hard-coded conditions and added functions to make the code less cluttered.

    4. Added a new field called config_string in the output config file. This field stores the original string from which we parsed the configuration's default value. config_string holds the string whenever the default value is not a simple value but a conditional statement; it helps explain how the associated environment variable's value determines the default value. For example, for the raylet_start_wait_time_s configuration, the signature/input is the following:

    RAY_CONFIG(uint32_t, raylet_start_wait_time_s,
               std::getenv("RAY_preallocate_plasma_memory") != nullptr &&
                       std::getenv("RAY_preallocate_plasma_memory") == std::string("1")
                   ? 120
                   : 10)
    

    And the script dumps the following YAML entry in the .conf file:

    raylet_start_wait_time_s:
      config_string: 'std::getenv("RAY_preallocate_plasma_memory") != nullptr && std::getenv("RAY_preallocate_plasma_memory") == std::string("1") ? 120 : 10'
      default: '10'
      env: RAY_preallocate_plasma_memory
      type: uint32_t
      value_for_this_mode: '10'
    

    The new field, config_string, is informational and gives an idea of how the associated environment variable will be processed.

    5. The config tool now uses a YAML format variable instead of a hardcoded string for the system-config map YAML (system_cm.yaml)
    opened by JainTwinkle 1
  • Fix corner case with a singleton node

    Fix corner case with a singleton node

    Related Issue

    Supports #27

    Related PRs

    Reopened PR 31 after PR 27 was merged into develop.

    What does this PR do?

    Description of Changes

    • Checked that the node exists in the pipeline post_graph
    • Added ExecutionType.TRANSFORM
    • Added a unit test

    bug 
    opened by yuanchi2807 1
  • Fix yref assignment for pipeline PREDICT and SCORE

    Fix yref assignment for pipeline PREDICT and SCORE

    Related Issue

    Supports #22

    Related PRs

    This PR is not dependent on any other PR

    What does this PR do?

    Description of Changes

    Assign PREDICT and SCORE results to yref as appropriate in Runtime.py. Updated unit tests and notebook examples.


    bug 
    opened by yuanchi2807 1
  • Pipeline with a single dangling estimator node triggers an exception

    Pipeline with a single dangling estimator node triggers an exception

    Describe the bug Possibly a corner case?

    ray-pipeline/codeflare/pipelines/Datamodel.py in get_pre_edges(self, node)
        640         """
        641         pre_edges = []
    --> 642         pre_nodes = self.pre_graph[node]
        643         # Empty pre
        644         if not pre_nodes:

    KeyError: <codeflare.pipelines.Datamodel.EstimatorNode object at 0x7fa2d8920f10>

    To Reproduce

    # imports needed to make this repro self-contained (X and y are any feature matrix and labels)
    import codeflare.pipelines.Datamodel as dm
    import codeflare.pipelines.Runtime as rt
    from codeflare.pipelines.Runtime import ExecutionType
    from sklearn.preprocessing import MinMaxScaler, StandardScaler, MaxAbsScaler, RobustScaler
    from sklearn.pipeline import FeatureUnion

    ## initialize codeflare pipeline by first creating the nodes
    pipeline = dm.Pipeline()
    node_a = dm.EstimatorNode('a', MinMaxScaler())
    node_b = dm.EstimatorNode('b', StandardScaler())
    node_c = dm.EstimatorNode('c', MaxAbsScaler())
    node_d = dm.EstimatorNode('d', RobustScaler())
    
    node_e = dm.AndNode('e', FeatureUnion())
    node_f = dm.AndNode('f', FeatureUnion())
    node_g = dm.AndNode('g', FeatureUnion())
    
    ## codeflare nodes are then connected by edges
    pipeline.add_edge(node_a, node_e)
    pipeline.add_edge(node_b, node_e)
    pipeline.add_edge(node_c, node_f)
    ## node_d does not have a downstream node
    # pipeline.add_edge(node_d, node_f)
    pipeline.add_edge(node_e, node_g)
    pipeline.add_edge(node_f, node_g)
    
    pipeline_input = dm.PipelineInput()
    xy = dm.Xy(X,y)
    pipeline_input.add_xy_arg(node_a, xy)
    pipeline_input.add_xy_arg(node_b, xy)
    pipeline_input.add_xy_arg(node_c, xy)
    pipeline_input.add_xy_arg(node_d, xy)
    
    ## execute the codeflare pipeline
    pipeline_output = rt.execute_pipeline(pipeline, ExecutionType.FIT, pipeline_input)
    
    


    bug cfp-runtime 
    opened by raghukiran1224 1
  • Ray cluster on OpenShift fails due to missing file

    Ray cluster on OpenShift fails due to missing file

    Describe the bug Cannot bring up Ray cluster as defined in the OCP tutorial

    To Reproduce Steps to reproduce the behavior:

    1. Go to https://codeflare.readthedocs.io/en/latest/getting_started/starting.html#Openshift-Ray-Cluster-Operator
    2. Run pip3 install --upgrade codeflare
    3. Create namespace oc create namespace codeflare
    4. Run ray up ray/python/ray/autoscaler/kubernetes/example-full.yaml, which fails:
    $ ray up ray/python/ray/autoscaler/kubernetes/example-full.yaml
    Provided cluster configuration file (ray/python/ray/autoscaler/kubernetes/example-full.yaml) does not exist
    

    Expected behavior Bring up Ray cluster on OCP

    Desktop (please complete the following information):

    • OS: MacOS

    Additional context OCP Cluster running on IBM Cloud.

    $ oc cluster-info
    Kubernetes master is running at https://c100-e.jp-tok.containers.cloud.ibm.com:31129
    
    To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
    

    CodeFlare commit hash: a2b290a115b0cc1317270cef6059d5281215842e

    opened by cmisale 0
  • Data splitter

    Data splitter

    Overview

    As a CFP user, I would like to split a dataset (e.g., an np array or pandas dataframe) into smaller objects that can then be fed into other nodes/pipelines. This is especially useful when we have compute-intensive tasks and would like to parallelize them easily.
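
    For illustration only, a hypothetical splitter could chunk the input along these lines (np.array_split is standard NumPy and also accepts DataFrames; how this attaches to the Node construct is the open design question in the criteria below):

    import numpy as np
    import pandas as pd

    def split_dataset(data, num_chunks):
        """Split an np array or pandas dataframe into roughly equal chunks."""
        return np.array_split(data, num_chunks)

    chunks = split_dataset(pd.DataFrame({"x": range(10)}), num_chunks=3)  # 3 smaller frames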

    Acceptance Criteria

    • [x] Design for splitter, should be simple and intuitive
    • [ ] Implementation as an extension to the Node construct
    • [x] Tests

    Questions

    • What type of semantics does the splitter node define?

    Assumptions

    Reference

    good first issue help wanted cfp-datamodel user-story Prio1 
    opened by raghukiran1224 1
  • Support better integration between Ray and Spark in passing ObjectRef without actually moving data

    Support better integration between Ray and Spark in passing ObjectRef without actually moving data

    Overview

    As a CodeFlare user, I want to use Ray and Spark alternately to execute my end-to-end ML jobs. Some steps might be executed more efficiently using Ray, while others using Spark. The plasma store in Ray seems to provide an efficient way to share ObjectRef between Ray and Spark. Currently, the RayDP project supports going from Spark to Ray in a limited way, by running Spark as a Ray actor. However, ObjectRef cannot be shared easily in both directions, Spark-to-Ray and Ray-to-Spark.

    Acceptance Criteria

    • A Pandas dataframe created by remote tasks in local Ray plasma stores can be passed by ObjectRef to the Spark driver to create a Spark dataframe containing a list of ObjectRef.
    • Once that is done, on the Spark side, the executors of Spark can then access the original Pandas dataframe locally.
    • From Spark to Ray: Spark preserves groupby() partition semantics and writes these partitions to the plasma store, instead of using hashPartition().

    Questions

    • In RayDP, only the driver node knows about and can access Ray. The PySpark executors don't have access to Ray, which prevents them from accessing the Ray plasma store. As a result, it is not possible to seamlessly pass ObjectRef between Ray workers and Spark executors.

    Assumptions

    • Ray and Spark can share data seamlessly by exchanging ObjectRef among Ray workers and Spark executors.

    Reference

    [Reference] I have opened an issue on the RayDP repo: https://github.com/oap-project/raydp/issues/164

    ray-related 
    opened by klwuibm 3
  • Nested pipelines

    Nested pipelines

    Overview

    As a CF pipelines user, I would like support for nested pipelines, where a node of a pipeline can itself be a pipeline.

    Acceptance Criteria

    • [ ] Nested pipeline API
    • [ ] Nested pipeline implementation
    • [ ] ADR for supporting nested pipelines
    • [ ] Tests

    Questions

    • Given that pipelines are not estimators by themselves, how can we support nesting easily?

    Assumptions

    Reference

    cfp-runtime cfp-datamodel user-story Prio1 
    opened by raghukiran1224 0
  • Investigate and measure zero copy for pipelines

    Investigate and measure zero copy for pipelines

    Overview

    As a CF pipelines user, I would like to understand memory consumption when pipelines are executed. Given that pipelines accept np arrays, will Ray's zero-copy sharing help?
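
    As background, Ray deserializes numpy arrays from its object store as zero-copy, read-only views. A small sketch of the behavior in question (this read-only property is also what surfaces in the "buffer source array is read-only" error reported above):

    import numpy as np
    import ray

    ray.init()

    arr = np.zeros(10_000_000)
    ref = ray.put(arr)           # stored once in the shared-memory object store
    view = ray.get(ref)          # zero-copy view for workers on the same node
    print(view.flags.writeable)  # False: the returned array is read-only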

    Acceptance Criteria

    • [ ] Memory growth as pipelines are executed
    • [ ] Clear documentation on this
    • [ ] A potential story explaining this in more detail

    Questions

    Assumptions

    Reference

    help wanted cfp-runtime Prio1 benchmark 
    opened by raghukiran1224 0
  • Select best/k-best pipelines

    Select best/k-best pipelines

    Overview

    As a CF pipelines user, I would like the ability to select the best or k-best pipelines from a parameter grid search output.
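
    For illustration, a hypothetical selection helper over (pipeline, score) pairs; the real version would need to plug into the grid search output format and stay compatible with sklearn, per the criteria below:

    def k_best_pipelines(scored_pipelines, k=1):
        """Return the k highest-scoring pipelines from [(pipeline, score), ...] pairs."""
        ranked = sorted(scored_pipelines, key=lambda ps: ps[1], reverse=True)
        return [pipeline for pipeline, _ in ranked[:k]]

    best_two = k_best_pipelines([("p1", 0.91), ("p2", 0.88), ("p3", 0.95)], k=2)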

    Acceptance Criteria

    • [ ] Best pipeline selection
    • [ ] K-best pipeline selection
    • [ ] Tests and compatibility with sklearn outputs

    Questions

    Assumptions

    Reference

    enhancement good first issue help wanted cfp-runtime 
    opened by raghukiran1224 0
Releases(0.1.2.dev0)
  • 0.1.2.dev0(Jul 9, 2021)

    To address the Python version needs of IBM Cloud Watson Studio, we removed the dependency on SimpleQueue and used Queue instead. This relaxes the CodeFlare Pipelines requirement from Python >=3.8 to >=3.7.

    Shout out to @aviolante for helping with this fix!

    Installation can now be done from PyPI using pip3 install codeflare; the default version has been updated to 0.1.2.

    Source code(tar.gz)
    Source code(zip)