
Overview


Couler

What is Couler?

Couler aims to provide a unified interface for constructing and managing workflows on different workflow engines, such as Argo Workflows, Tekton Pipelines, and Apache Airflow.

Couler is included in CNCF Cloud Native Landscape and LF AI Landscape.

Who uses Couler?

You can find a list of organizations that use Couler in ADOPTERS.md. If you'd like to add your organization to the list, please send us a pull request.

Why use Couler?

Many workflow engines exist nowadays, e.g. Argo Workflows, Tekton Pipelines, and Apache Airflow. However, their programming experiences vary, and they expose different levels of abstraction that are often obscure and complex. The code snippets below are examples of constructing workflows with Apache Airflow and Kubeflow Pipelines.

Apache Airflow:

from datetime import datetime

from airflow import DAG
from airflow.operators.python_operator import PythonOperator


def create_dag(dag_id,
               schedule,
               dag_number,
               default_args):
    def hello_world_py(*args):
        print('Hello World')

    dag = DAG(dag_id,
              schedule_interval=schedule,
              default_args=default_args)
    with dag:
        t1 = PythonOperator(
            task_id='hello_world',
            python_callable=hello_world_py,
            dag_number=dag_number)
    return dag

for n in range(1, 10):
    dag_id = 'hello_world_{}'.format(n)
    default_args = {'owner': 'airflow',
                    'start_date': datetime(2018, 1, 1)}
    globals()[dag_id] = create_dag(dag_id,
                                   '@daily',
                                   n,
                                   default_args)

Kubeflow Pipelines:

import kfp.dsl as dsl
from kfp.dsl import graph_component


class FlipCoinOp(dsl.ContainerOp):
    """Flip a coin and output heads or tails randomly."""
    def __init__(self):
        super(FlipCoinOp, self).__init__(
            name='Flip',
            image='python:alpine3.6',
            command=['sh', '-c'],
            arguments=['python -c "import random; result = \'heads\' if random.randint(0,1) == 0 '
                       'else \'tails\'; print(result)" | tee /tmp/output'],
            file_outputs={'output': '/tmp/output'})

class PrintOp(dsl.ContainerOp):
    """Print a message."""
    def __init__(self, msg):
        super(PrintOp, self).__init__(
            name='Print',
            image='alpine:3.6',
            command=['echo', msg],
        )

# define the recursive operation
@graph_component
def flip_component(flip_result):
    print_flip = PrintOp(flip_result)
    flipA = FlipCoinOp().after(print_flip)
    with dsl.Condition(flipA.output == 'heads'):
        flip_component(flipA.output)

@dsl.pipeline(
    name='pipeline flip coin',
    description='shows how to use graph_component.'
)
def recursive():
    flipA = FlipCoinOp()
    flipB = FlipCoinOp()
    flip_loop = flip_component(flipA.output)
    flip_loop.after(flipB)
    PrintOp('cool, it is over. %s' % flipA.output).after(flip_loop)

Couler provides a unified interface for constructing and managing workflows, and offers the following:

  • Simplicity: Unified interface and imperative programming style for defining workflows with automatic construction of directed acyclic graph (DAG).
  • Extensibility: Extensible to support various workflow engines.
  • Reusability: Reusable steps for tasks such as distributed training of machine learning models.
  • Efficiency: Automatic workflow and resource optimizations under the hood.
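
To make the "imperative programming style with automatic DAG construction" concrete, here is a minimal, hypothetical sketch of the idea in plain Python (this is illustration only, not Couler's actual implementation): each step call records a node, and declared dependencies become edges of the graph.

```python
# Hypothetical sketch: an imperative API that records steps into a DAG.
# Illustration only -- not Couler's internal implementation.
class MiniWorkflow:
    def __init__(self):
        self.steps = []   # step names, in the order they were declared
        self.edges = []   # (upstream, downstream) dependency pairs

    def run_step(self, name, dependencies=None):
        """Declare a step; each declared dependency becomes a DAG edge."""
        self.steps.append(name)
        for dep in dependencies or []:
            self.edges.append((dep, name))
        return name


wf = MiniWorkflow()
wf.run_step("A")
wf.run_step("B", dependencies=["A"])
wf.run_step("C", dependencies=["A"])
print(wf.edges)  # -> [('A', 'B'), ('A', 'C')]
```

Because steps are declared imperatively, the DAG falls out of ordinary function calls rather than a separate graph specification.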

Please see the following sections for installation guide and examples.

Installation

  • Couler currently only supports Argo Workflows. Please see instructions here to install Argo Workflows on your Kubernetes cluster.
  • Install Python 3.6+
  • Install Couler Python SDK via the following pip command:
pip install git+https://github.com/couler-proj/couler

Alternatively, you can clone this repository and then run the following to install:

python setup.py install

Examples

Coin Flip

This example combines the use of a Python function's result with conditionals to take a dynamic path in the workflow. Depending on the result of the first step, defined in flip_coin(), the template will run either the heads() step or the tails() step.

Steps can be defined via either couler.run_script() for Python functions or couler.run_container() for containers. In addition, the conditional logic that decides which branch to run based on the coin flip result is defined via the combined use of couler.when() and couler.equal().

import couler.argo as couler
from couler.argo_submitter import ArgoSubmitter


def random_code():
    import random

    res = "heads" if random.randint(0, 1) == 0 else "tails"
    print(res)


def flip_coin():
    return couler.run_script(image="python:alpine3.6", source=random_code)


def heads():
    return couler.run_container(
        image="alpine:3.6", command=["sh", "-c", 'echo "it was heads"']
    )


def tails():
    return couler.run_container(
        image="alpine:3.6", command=["sh", "-c", 'echo "it was tails"']
    )


result = flip_coin()
couler.when(couler.equal(result, "heads"), lambda: heads())
couler.when(couler.equal(result, "tails"), lambda: tails())

submitter = ArgoSubmitter()
couler.run(submitter=submitter)
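
For readers new to couler.when() and couler.equal(), the conditional branching above corresponds to the following plain Python control flow (shown only to clarify the semantics; in the real workflow each step runs as a container on the cluster):

```python
import random


def flip_coin():
    # Plain-Python stand-in for the flip_coin() step above.
    return "heads" if random.randint(0, 1) == 0 else "tails"


result = flip_coin()
if result == "heads":    # couler.when(couler.equal(result, "heads"), ...)
    message = "it was heads"
else:                    # couler.when(couler.equal(result, "tails"), ...)
    message = "it was tails"
print(message)
```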

DAG

This example demonstrates different ways to define the workflow as a directed-acyclic graph (DAG) by specifying the dependencies of each task via couler.set_dependencies() and couler.dag(). Please see the code comments for the specific shape of DAG that we've defined in linear() and diamond().

import couler.argo as couler
from couler.argo_submitter import ArgoSubmitter


def job_a(message):
    couler.run_container(
        image="docker/whalesay:latest",
        command=["cowsay"],
        args=[message],
        step_name="A",
    )


def job_b(message):
    couler.run_container(
        image="docker/whalesay:latest",
        command=["cowsay"],
        args=[message],
        step_name="B",
    )


def job_c(message):
    couler.run_container(
        image="docker/whalesay:latest",
        command=["cowsay"],
        args=[message],
        step_name="C",
    )


def job_d(message):
    couler.run_container(
        image="docker/whalesay:latest",
        command=["cowsay"],
        args=[message],
        step_name="D",
    )

#     A
#    / \
#   B   C
#  /
# D
def linear():
    couler.set_dependencies(lambda: job_a(message="A"), dependencies=None)
    couler.set_dependencies(lambda: job_b(message="B"), dependencies=["A"])
    couler.set_dependencies(lambda: job_c(message="C"), dependencies=["A"])
    couler.set_dependencies(lambda: job_d(message="D"), dependencies=["B"])


#   A
#  / \
# B   C
#  \ /
#   D
def diamond():
    couler.dag(
        [
            [lambda: job_a(message="A")],
            [lambda: job_a(message="A"), lambda: job_b(message="B")],  # A -> B
            [lambda: job_a(message="A"), lambda: job_c(message="C")],  # A -> C
            [lambda: job_b(message="B"), lambda: job_d(message="D")],  # B -> D
            [lambda: job_c(message="C"), lambda: job_d(message="D")],  # C -> D
        ]
    )


linear()
submitter = ArgoSubmitter()
couler.run(submitter=submitter)
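
As a plain-Python illustration (not part of the Couler API) of the dependency graph that linear() declares, the standard library's graphlib can compute a valid execution order for the same edges:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# The same dependencies that linear() declares above:
# A has none; B and C depend on A; D depends on B.
dependencies = {
    "A": set(),
    "B": {"A"},
    "C": {"A"},
    "D": {"B"},
}

# static_order() yields the nodes in an order that respects every edge.
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # e.g. ['A', 'B', 'C', 'D'] -- A before B/C, B before D
```

This is essentially the scheduling constraint a workflow engine enforces when it executes the DAG.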

Note that the current version only works with Argo Workflows, but we are actively working on the design of a unified interface that is extensible to additional workflow engines. Please stay tuned for more updates; we welcome any feedback and contributions from the community.

Community Blogs and Presentations

Comments
  • feat: Support all script template fields

    What changes were proposed in this pull request?

    This PR makes it so that script templates inherit from the container template and support all the fields that Argo supports.

    Why are the changes needed?

    We were not able to use the script template as is since a lot of fields were missing.

    Does this PR introduce any user-facing change?

    Yes, the script template signature has additional arguments.

    How was this patch tested?

    We made sure the current tests pass and this use case for script templates is supported.

    opened by rushtehrani 12
  • fix: Fix issue with volume mount

    What changes were proposed in this pull request?

    fix Issue #193

    Why are the changes needed?

    fix Issue #193

    Does this PR introduce any user-facing change?

    No

    How was this patch tested?

    volume_test.py

    opened by peiniliu 8
  • feat: removing stray yaml dump

    What changes were proposed in this pull request?

    This fixes https://github.com/couler-proj/couler/issues/248

    In that issue, @merlintang mentions this section can be removed.

    Why are the changes needed?

    Since this code is not contained in a method, it ends up being called outside of the context of couler/argo.

    Does this PR introduce any user-facing change?

    Only the reduction of output.

    How was this patch tested?

    Removed the code, raised an exception, and confirmed the YAML dump no longer appeared.

    opened by dmerrick 7
  • feat: Add image pull secret support for python src

    What changes were proposed in this pull request?

    Add image pull secret configuration for workflows.

    Why are the changes needed?

    Our project needs to pull images from a private hub.

    Does this PR introduce any user-facing change?

    I think not; it is a new feature.

    How was this patch tested?

    Added one unit test; our project also uses this code to pull images.

    opened by lcgash 7
  • Install instructions don't work in Pycharm `venv`

    Summary

    I'm working with a PyCharm-generated venv (3.8.5), and running the install scripts supplied here doesn't work: the folder in site-packages has no real code in it.

    When I run it from a normal terminal, that is, Python 3.8.5 sans venv, it installs an egg. I don't think this is expected behaviour; I was expecting a wheel...

    Diagnostics

    macOS Big Sur 11.3.1; latest repository version 0.1.1rc8.

    bug 
    opened by moshewe 7
  • [RFC] Add initial design doc for core Couler API

    Hi community,

    We are opening this PR to share our thoughts on supporting multiple workflow engines. We'd appreciate any feedback and suggestions. In addition, if you are interested in contributing either a new backend or functionalities of the existing backend, please let us know in this pull request.

    opened by terrytangyuan 7
  • fix: Relax dependency version pinning.

    What changes were proposed in this pull request?

    To avoid dependency version conflicts with other libraries, pin dependencies to version ranges rather than exact versions.

    Why are the changes needed?

    Let me know if you disagree, but in general I think library code should try to avoid pinning to exact versions. Otherwise it can get very tricky to resolve conflicts for non-trivial projects.

    Does this PR introduce any user-facing change?

    It's possible that there are breaking changes in dependencies that I don't know about! I'll wait for tests to run and see if anything breaks.

    How was this patch tested?

    See above: this change shouldn't require new tests.

    opened by jmcarp 6
  • Container volumeMounts cannot mount an existing PVC.

    Summary

    What happened/what you expected to happen?

    We expect the container to use the existing volume defined inside the workflow, such as in volumes-existing.yaml.

    example for testing:

    import os
    
    import couler.argo as couler
    from couler.argo_submitter import ArgoSubmitter
    from couler.core.templates.volume import VolumeMount, Volume
    
    couler.add_volume(Volume("apppath", "mnist"))
    
    mount = VolumeMount("apppath", "/data/")
    command = ["ls", mount.mount_path]
    
    couler.run_container(
            image="alpine:3.12.0", command=command, volume_mounts=[mount]
        )
    
    submitter = ArgoSubmitter(namespace="testagent")
    couler.run(submitter=submitter)
    

    OrderedDict([('apiVersion', 'argoproj.io/v1alpha1'), ('kind', 'Workflow'), ('metadata', {'generateName': 'runpy-'}), ('spec', {'entrypoint': 'runpy', 'volumes': [OrderedDict([('name', 'apppath'), ('persistentVolumeClaim', {'claimName': 'mnist'})])], 'templates': [{'name': 'runpy', 'steps': [[OrderedDict([('name', 'module-3418'), ('template', 'module')])]]}, OrderedDict([('name', 'module'), ('container', OrderedDict([('image', 'alpine:3.12.0'), ('command', ['ls', '/data/']), ('volumeMounts', [OrderedDict([('name', 'apppath'), ('mountPath', '/data/')])])])), ('volumes', [{'name': 'apppath', 'emptyDir': {}}])])]})])

    This raises the problem: the volumeMounts inside the container resolve to the volume defined inside the container template (an emptyDir) rather than the volume defined at the workflow level (the PVC).

    The root cause is the following code, which automatically generates volumes for the container's volumeMounts:

    https://github.com/couler-proj/couler/blob/d20f874882e55c5e3aa53ffaf78670f6b4d314a0/couler/core/templates/container.py#L146-L153

    After removing the automatically generated emptyDir: {}, the volumeMount points to the right volume definition inside the workflow.

    OrderedDict([('apiVersion', 'argoproj.io/v1alpha1'), ('kind', 'Workflow'), ('metadata', {'generateName': 'runpy-'}), ('spec', {'entrypoint': 'runpy', 'volumes': [OrderedDict([('name', 'apppath'), ('persistentVolumeClaim', {'claimName': 'mnist'})])], 'templates': [{'name': 'runpy', 'steps': [[OrderedDict([('name', 'module-3418'), ('template', 'module')])]]}, OrderedDict([('name', 'module'), ('container', OrderedDict([('image', 'alpine:3.12.0'), ('command', ['ls', '/data/']), ('volumeMounts', [OrderedDict([('name', 'apppath'), ('mountPath', '/data/')])])])), ('volumes', [])])]})])

    Diagnostics

    What is the version of Couler you are using?

    latest v0.1.1rc8

    What is the version of the workflow engine you are using?

    argo: v3.0.0-rc3

    Any logs or other information that could help debugging?


    Message from the maintainers:

    Impacted by this bug? Give it a 👍. We prioritize the issues with the most 👍.

    bug good first issue 
    opened by peiniliu 6
  • fix: Support multiple function arguments in couler.map()

    What changes were proposed in this pull request?

    Added changes to accept multiple function arguments in couler.map() (#169), continuing that fix by adding a test function for the modifications.

    I had to start again from scratch. Reason: the previous idea pretty much just looped the arguments in map():

    return map(map(function, input_list), *other)

    However, the return value is a Step instance, not a function, so the first map() succeeds but the second (or third) does not, because map() checks its function argument first.

    inner_step = Step(name=inner_dict["id"], template=template_name)

    return inner_step #32 issue

    Why are the changes needed?

    #32 issue

    Does this PR introduce any user-facing change?

    No.

    How was this patch tested?

    A test was created for it, similar to the one-argument test.

    opened by nooraangelva 5
  • fix: Volume_claim to dynamic

    What changes were proposed in this pull request?

    This proposal makes the VolumeClaimTemplate's size and accessModes dynamic by letting the user input them, so they are not hardcoded.

    Why are the changes needed?

    #210 While working on a workflow, I noticed that it requires the accessModes to be ReadWriteMany, not ReadWriteOnce. ReadWriteOnce was hardcoded; I thought it would be better if the workflow creator could set it as needed. Same for the size.

    Does this PR introduce any user-facing change?

    Users would have to pass three arguments to the VolumeClaimTemplate instead of one.

    Previous version: volume = VolumeClaimTemplate("workdir")

    Updated version: volume = VolumeClaimTemplate("workdir", ['ReadWriteMany'], '1Gi')

    How was this patch tested?

    Testing was done by following Couler's instructions.

    I added a new scripts/integration_tests.Unix.sh because I encountered the following problem; the solution is in the link as well.

    opened by nooraangelva 5
  • How to specify securityContext

    The k8s cluster I deploy to has a pod security policy and requires that Argo Workflows have the following top-level securityContext:

    apiVersion: argoproj.io/v1alpha1
    kind: Workflow
    metadata:
      generateName: main-
    spec:
      securityContext:
         fsGroup: 2000
         runAsNonRoot: true
         runAsUser: 1000
    ...
    

    How can I specify that via couler? I couldn't find anything in the docs.

    opened by kodeninja 5
  • Kubernetes API exception 404

    Hi, I am trying out one of the examples using Argo Workflows and I am getting a Kubernetes API exception. Please find the logs:

    (venv) (base) [email protected] pythonProject1 % python main.py
    INFO:root:Argo submitter namespace: argo
    INFO:root:Found local kubernetes config. Initialized with kube_config.
    INFO:root:Checking workflow name/generatedName main-
    INFO:root:Submitting workflow to Argo
    ERROR:root:Failed to submit workflow
    Traceback (most recent call last):
      File "main.py", line 9, in <module>
        result = couler.run(submitter=submitter)
      File "/Users/saurav/PycharmProjects/pythonProject1/venv/lib/python3.8/site-packages/couler/argo.py", line 73, in run
        res = submitter.submit(wf, secrets=secrets)
      File "/Users/saurav/PycharmProjects/pythonProject1/venv/lib/python3.8/site-packages/couler/argo_submitter.py", line 151, in submit
        return self._create_workflow(workflow_yaml)
      File "/Users/saurav/PycharmProjects/pythonProject1/venv/lib/python3.8/site-packages/couler/argo_submitter.py", line 174, in _create_workflow
        raise e
      File "/Users/saurav/PycharmProjects/pythonProject1/venv/lib/python3.8/site-packages/couler/argo_submitter.py", line 158, in _create_workflow
        response = self._custom_object_api_client.create_namespaced_custom_object(  # noqa: E501
      File "/Users/saurav/PycharmProjects/pythonProject1/venv/lib/python3.8/site-packages/kubernetes/client/api/custom_objects_api.py", line 225, in create_namespaced_custom_object
        return self.create_namespaced_custom_object_with_http_info(group, version, namespace, plural, body, **kwargs)  # noqa: E501
      File "/Users/saurav/PycharmProjects/pythonProject1/venv/lib/python3.8/site-packages/kubernetes/client/api/custom_objects_api.py", line 344, in create_namespaced_custom_object_with_http_info
        return self.api_client.call_api(
      File "/Users/saurav/PycharmProjects/pythonProject1/venv/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 348, in call_api
        return self.__call_api(resource_path, method,
      File "/Users/saurav/PycharmProjects/pythonProject1/venv/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 180, in __call_api
        response_data = self.request(
      File "/Users/saurav/PycharmProjects/pythonProject1/venv/lib/python3.8/site-packages/kubernetes/client/api_client.py", line 391, in request
        return self.rest_client.POST(url,
      File "/Users/saurav/PycharmProjects/pythonProject1/venv/lib/python3.8/site-packages/kubernetes/client/rest.py", line 275, in POST
        return self.request("POST", url,
      File "/Users/saurav/PycharmProjects/pythonProject1/venv/lib/python3.8/site-packages/kubernetes/client/rest.py", line 234, in request
        raise ApiException(http_resp=r)
    kubernetes.client.exceptions.ApiException: (404)
    Reason: Not Found
    HTTP response headers: HTTPHeaderDict({'Audit-Id': '47ae3075-3530-4e82-b495-fc359d544f51', 'Cache-Control': 'no-cache, private', 'Content-Type': 'text/plain; charset=utf-8', 'X-Content-Type-Options': 'nosniff', 'X-Kubernetes-Pf-Flowschema-Uid': '52efdd2c-fc48-4b6a-a014-0f20cb376b7f', 'X-Kubernetes-Pf-Prioritylevel-Uid': '4dbba312-e8f6-4b16-97b2-90fe3b9d7dc0', 'Date': 'Tue, 26 Jul 2022 06:20:02 GMT', 'Content-Length': '19'})
    HTTP response body: 404 page not found

    bug 
    opened by smetal1 0
  • Documentation: Difference to other meta workflow engines

    Couler is a meta workflow engine, but I know of at least ZenML and Kedro, which are meta workflow engines as well. While these other two present themselves as machine-learning tools, it seems to me they are generally usable for any type of work. What is Couler aiming to do differently from ZenML and/or Kedro?

    opened by Make42 3
  • Need additional clarification/examples around using set_dependencies+map

    Summary

    I'm confused on how to properly use dependencies. Let's say I have a workflow with 4 groups of steps (A, B, C, D) and each has multiple subtasks that can happen in parallel (A1, A2, ..., B1, B2, ...). Currently, I'm adding all the A steps using couler.map, then adding all the B steps with couler.map, etc. This correctly parallelizes across A1, A2, ..., but none of the B steps start until all the A steps have completed, despite the fact that I never explicitly set dependencies.

    In this case, I want A and B to run in parallel, then C then D. Having this run sequentially as A, B, C, D is technically correct, but not ideally performant. However, given that I'm not setting dependencies, and they're still running sequentially, I feel like using the set_dependencies function wouldn't help. Also, when I tried to use the set_dependencies function, the couler code errored on parsing its own generated yaml due to duplicate anchor definitions. Would definitely like to see a more in-depth example than those currently present in the README which shows how to properly use set_dependencies in combination with functions like map.

    Use Cases

    Mostly explained above.


    Message from the maintainers:

    Impacted by this bug? Give it a 👍. We prioritize the issues with the most 👍.

    enhancement 
    opened by varunm22 2
  • Revert "home brew" code to official Python client

    Summary

    Change custom code to use the official Python API

    Use Cases

    It's difficult developing new features and onboarding new developers to the codebase, as the underlying structures don't follow the (well-documented) official Python API.


    Message from the maintainers:

    Impacted by this bug? Give it a 👍. We prioritize the issues with the most 👍.

    enhancement 
    opened by moshewe 6
  • Explicit parameter passing between steps

    Summary

    Example:

    out1 = couler.create_parameter_artifact(path="/mnt/test.txt")
    out2 = couler.create_parameter_artifact(path="/mnt/test2.txt")
    
    
    def producer(name):
        return couler.run_container(
            image="alpine:3.6", command=["sh", "-c", 'echo "test" > /mnt/test.txt']
            , step_name=name, output=[out1, out2]
        )
    
    
    def consumer(name):
        inputs = couler.get_step_output(step_name="1")
        return couler.run_container(
            image="alpine:3.6", command=["sh", "-c", 'cat /mnt/test.txt']
            , step_name=name, args=[inputs[0]],
        )
    
    
    couler.set_dependencies(lambda: producer("1"), dependencies=None)
    couler.set_dependencies(lambda: consumer("2"), dependencies=["1"])
    

    Now the arguments in the consumer template look like this:

    arguments:
                  parameters:
                    - name: para-2-0
                      value: "{{tasks.1.outputs.parameters.output-id-15}}"
                    - name: para-2-1
                      value: "{{tasks.1.outputs.parameters.output-id-15}}"
                    - name: para-2-2
                      value: "{{tasks.1.outputs.parameters.output-id-16}}"
    

    It would be useful if there were a way to set a dependency between steps without implicit parameter passing.

    For myself, I just added a flag to run_container that turns off this behavior.

    Use Cases

    I have one parent step that generates a couple of outputs, and I have multiple child steps, each of which only needs a proper subset of the parent's outputs; the rest of the information would be redundant.


    Message from the maintainers:

    Impacted by this bug? Give it a 👍. We prioritize the issues with the most 👍.

    enhancement 
    opened by hcnt 1
  • Example for usage of secret

    Summary

    What change needs making? An example script showing how to use a created secret successfully. I have tried it myself using the following code from the tests; it creates the secret successfully but fails to echo the secret's value.

    Use Cases

    When would you use this? When I need to use a secret, for example for authentication.

    Message from the maintainers:

    Impacted by this bug? Give it a 👍. We prioritize the issues with the most 👍.

    enhancement good first issue help wanted 
    opened by nooraangelva 2
Releases
  • v0.1.1rc8-stable(Apr 12, 2021)

    This release includes compatibility fixes for different protobuf versions, as well as a fix for an unnecessarily raised exception when using /tmp as a mount path. This release also introduces support for workflow memoization caches.

    List of changes since the last release can be found here.

  • v0.1.1rc8(Mar 23, 2021)

  • v0.1.1rc7(Dec 2, 2020)

  • v0.1.1rc6(Sep 25, 2020)

    This release includes several bug fixes and enhancements. Below are some of the notable changes:

    • Bump the dependency of Argo Python client to v3.5.1 and re-enable Argo Workflow spec validation.
    • Fix incorrect ApiException import path for Kubernetes Python client with version 11.0.0 and above.
    • Support callables for Couler core APIs instead of only types.FunctionType as before.
    • Switch to use Argo Workflows v2.10.2 for integration tests.
  • v0.1.1rc5(Sep 15, 2020)

Owner
Couler Project
Unified Interface for Constructing and Managing Workflows