Poetry PEP 517 Build Backend & Core Utilities

Overview

Poetry Core


A PEP 517 build backend implementation developed for Poetry. This project is intended to be a lightweight, fully compliant, self-contained package that allows PEP 517 compatible build frontends to build Poetry-managed projects.

Usage

In most cases, the usage of this package is transparent to the end user, as it is used either by Poetry itself or by a PEP 517 frontend (e.g., pip).

To enable poetry-core as your build backend, the following snippet must be present in your project's pyproject.toml file:

[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"

Once this is present, a PEP 517 frontend like pip can build and install your project from source without the need for Poetry or any of its dependencies.

# install to current environment
pip install /path/to/poetry/managed/project

# build a wheel package
pip wheel /path/to/poetry/managed/project

Why is this required?

Prior to the release of version 1.1.0, Poetry was built as a project management tool that included a PEP 517 build backend. This was inefficient and time-consuming in the majority of cases where only a PEP 517 build was required. For example, both pip and tox (with isolated builds) would install Poetry along with all of the dependencies it required, and most of those dependencies are not needed when the objective is simply to build either a source or binary distribution of your project.

In order to improve this situation, poetry-core was created. Shared functionality pertaining to PEP 517 build backends, including reading lock files and pyproject.toml and building wheel/sdist distributions, was implemented in this package. This makes PEP 517 builds extremely fast for Poetry-managed packages.

Comments
  • Introduce support for relative include of non-package modules ("workspaces")

    The purpose of the changes here is to enable Workspace support. A workspace is a place for code and projects. Within the workspace, code can be shared. A workspace is usually at the root of your repository.

    To identify a workspace in a Python repo, an empty workspace.toml file is put at the top of the workspace. Future plugins that extend workspaces could use that file to store configuration and settings.

    The feature in this pull request will make this plugin redundant 😄 (I am the author of that plugin)

    Why workspaces? A workspace can contain more than one project. Different projects will likely use the same code. A very simplistic example would be a logger. To avoid code duplication, code could be moved out from the project into packages, and each project can reference the package from the project specific pyproject.toml file.

    This requires that Poetry allows package includes (note the difference from dependencies) that are "outside" of the project path, but within a workspace. That's what this pull request will do.

    An example, simplified workspace tree structure (note the namespacing for shared package includes):

    projects/
      my_app/
        pyproject.toml (including a shared package)
    
      my_service/
        pyproject.toml (including other shared packages)
    
    shared/
      my_namespace/
        my_package/
          __init__.py
          code.py
    
        my_other_package/
          __init__.py
          code.py
    
    workspace.toml (a file that tells the plugin where to find the workspace root)
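Under this proposal, a project in the tree above could reference a shared package with a relative include in its own pyproject.toml. A hypothetical sketch (the include/from syntax mirrors Poetry's existing packages table; the relative path reaching outside the project directory is what this PR enables):

```toml
# projects/my_app/pyproject.toml (sketch)
[tool.poetry]
name = "my_app"
version = "0.1.0"

# Relative include of a shared, namespaced package from the workspace root
packages = [
    { include = "my_namespace/my_package", from = "../../shared" }
]
```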
    

    I think this feature resolves the issues raised in: https://github.com/python-poetry/poetry/issues/936 and probably also https://github.com/python-poetry/poetry/issues/2270

    • [ ] Added tests for changed code.
    • [ ] Updated documentation for changed code.
    opened by DavidVujic 36
  • Added script file feature

    Resolves: python-poetry#241

    • [x] Added tests for changed code.
    • [x] Updated documentation for changed code.

    Feature: ability to add script files to distribution packages

    This one is a long overdue, see: https://github.com/sdispater/poetry/issues/241 (June 2018)

    The implementation is based on the official specification by @abn https://github.com/python-poetry/poetry/issues/2310

    This is a very basic feature of setup.py; see the setuptools docs.

    Note: this is not the same as entry_point! This setuptools feature basically just copies the given files over to the virtualenv's bin folder so that they are available on the $PATH and you can simply invoke them.

    Example

    [tool.poetry]
    package = "my_package"

    [tool.poetry.scripts]
    migration = { source = "bin/run_db_migrations.sh", type = "file" }
    setup_script = { source = "bin/run_setup_script.sh" } # type can be omitted!
    

    Then:

    poetry install
    poetry run sh run_db_migrations.sh
    poetry run sh run_setup_script.sh
    

    Testing

    • I added a couple of automated tests
    • I tested this manually with one of our repositories and it worked
    opened by peterdeme 36
  • Fix bug/crash when source dir is outside project dir

    Hello poetry team,

    here is a draft fix for issue python-poetry#6521. It does not currently include a test, as I am not sure how to write a test for it. I tried perusing the unit tests in tests/masonry/builders/test_builder.py and got a bit lost :sweat_smile: Any help or guidance on unit tests would be much appreciated

    Documentation... I'm not sure if that's necessary? The purpose of this change is to make poetry do the right thing, instead of failing unexpectedly... I will defer to your judgement on that.

    replace usage of pathlib.Path.relative_to with os.path.relpath, as per https://github.com/python-poetry/poetry/issues/5621
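The difference between the two calls can be sketched in a few lines (the paths here are made up for illustration):

```python
from pathlib import Path
import os.path

project = Path("/home/user/project")
src = Path("/home/user/shared/src")  # source dir outside the project dir

# Path.relative_to() raises ValueError when one path is not inside the other:
try:
    src.relative_to(project)
    raise AssertionError("expected ValueError")
except ValueError:
    pass

# os.path.relpath() walks upward with ".." instead, so it always succeeds:
rel = os.path.relpath(src, project)
print(rel)  # ../shared/src on POSIX systems
```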

    Resolves: python-poetry#6521

    • [ ] Added tests for changed code.
    • [ ] Updated documentation for changed code.
    opened by alextremblay 16
  • Fixes unstable next breaking version when major is 0

    Resolves: https://github.com/python-poetry/poetry/issues/6519

    • [x] Added tests for changed code.
    • [ ] Updated documentation for changed code.

    What this PR contains:

    • moves Version.stable to PEP440Version.stable, as Version.stable was breaking the precision half of the time
    • adds a bunch of tests
    • fixes Version.next_breaking to allow for versions with major == 0
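The major == 0 special case can be sketched with caret-constraint semantics (this is a hypothetical illustration of the rule, not poetry-core's actual code):

```python
# "Next breaking" version for a caret constraint, with the 0.x special cases:
# ^1.2.3 means >=1.2.3 <2.0.0, but ^0.2.3 means >=0.2.3 <0.3.0,
# and ^0.0.3 allows only 0.0.3 itself.
def next_breaking(major: int, minor: int, patch: int) -> tuple:
    if major == 0:
        if minor == 0:
            return (0, 0, patch + 1)  # ^0.0.3 -> <0.0.4
        return (0, minor + 1, 0)      # ^0.2.3 -> <0.3.0
    return (major + 1, 0, 0)          # ^1.2.3 -> <2.0.0

print(next_breaking(0, 2, 3))  # (0, 3, 0)
```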

    Extra info

    It would be great to have a poetry build step that checks the requires_dist output against poetry to make sure the package is actually usable.

    opened by mazinesy 14
  • Allow appending git username / password to dependency

    Resolves: https://github.com/python-poetry/poetry/issues/2062, https://github.com/python-poetry/poetry/issues/2348. Builds on top of #96.

    Changes in #96 (copied from here):

    This PR allows appending deployment keys to usernames as used by gitlab.

    Furthermore, a new property is_unsafe is introduced for ParsedUrl, which can be used by CLI commands like poetry add to easily report whether the git dependency contains a password.

    According to GitLab's docs, a + is allowed in usernames. This is fixed as well.

    Fixes: python-poetry/poetry#2062

    (This was initially submitted to the poetry repository: python-poetry/poetry#2169)

    Changes I made on top of @finswimmer's work in #96 (rebased to abe9fe5):

    • Add tests for when x-oauth-basic GitHub URL is used in package link (https://github.com/python-poetry/poetry/issues/2348).

    • Add warning and log for when a dependency stores a password in plain text by using the is_unsafe property (based on discussion in https://github.com/python-poetry/poetry/pull/2169#issuecomment-705587436).

    • [x] Added tests for changed code.

    • [x] ~Updated documentation for changed code.~ NA

    Hoping this is at a good enough point to review and consider merging into main, to be included for a soon-ish release.

    opened by setu4993 14
  • Feature: Declare ext_modules and libraries in pyproject.toml

    PR summary

    A declarative approach to extensions using pyproject.toml. Eliminates the need for build.py in many cases.

    How to use

    Extension modules can now be declared in pyproject.toml:

    [tool.poetry.ext_modules.mymodule]
    sources = ["mymodule/mymodule.c"]

    The new logic will validate this config and use it to construct a distutils.extension.Extension object to be passed to distutils.setup().

    Config properties

    The new logic adds support for all options normally passed to distutils.extension.Extension(), and also to the libraries argument of distutils.setup(). The full list of supported config properties is:

    Extension modules [tool.poetry.ext_modules]

    • sources (required) - A list of source filenames or globs
    • include_dirs - A list of directories to search for C/C++ header files
    • define_macros - A list of macros to define; each macro is defined using a 2-tuple (name, value)
    • undef_macros - A list of macros to undefine explicitly
    • library_dirs - A list of directories to search for C/C++ libraries at link time
    • libraries - A list of library names (not filenames or paths) to link against
    • runtime_library_dirs - A list of directories to search for C/C++ libraries at run time
    • extra_objects - A list of paths or globs of extra files to link with
    • extra_compile_args - Any extra platform- and compiler-specific information to use when compiling the source files
    • extra_link_args - Any extra platform- and compiler-specific information to use when linking object files together
    • export_symbols - A list of symbols to be exported from a shared extension
    • depends - A list of paths or globs of files that the extension depends on
    • language - The extension language (i.e. 'c', 'c++', 'objc')
    • optional - Boolean, specifies that a build failure in the extension should not abort the build process

    C libraries [tool.poetry.libraries]

    • sources (required) - A list of source filenames or globs
    • include_dirs - A list of directories to search for C/C++ header files
    • macros - A list of macros to define; each macro is defined using a 2-tuple (name, value)
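A validated table like the one above would ultimately become an Extension object. A rough sketch using the setuptools re-export of the same class (the mapping code is illustrative, not the PR's implementation):

```python
# Turn a declarative ext_modules table into an Extension object.
from setuptools import Extension  # modern home of distutils.extension.Extension

# Mirrors the [tool.poetry.ext_modules.mymodule] example above:
config = {
    "sources": ["mymodule/mymodule.c"],
    "language": "c",
}
ext = Extension(name="mymodule", **config)
print(ext.name, ext.sources, ext.language)
```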

    Why?

    I wanted a tool that would limit everything to a single configuration file to do: dependency management, packaging and publishing.

    By eliminating the need for build.py in many cases, this PR better accomplishes the stated intent of Poetry (reducing project complexity).

    What if I still want to use build.py?

    This doesn't stop you, it just gives you another option. If your build logic is too complex (e.g. moving files around), the declarative approach may not be sufficient.

    To promote clarity and to reduce the complexity of build logic, this PR doesn't allow you to mix both approaches.

    opened by bostonrwalker 12
  • Type checking

    Resolves: python-poetry#

    • [ ] Added tests for changed code.
    • [ ] Updated documentation for changed code.

    More typechecking.

    Things that I am not entirely comfortable with:

    • returning empty strings from VCSDependency.reference() and VCSDependency.pretty_constraint()
    • https://github.com/python-poetry/poetry-core/blob/26e978d3b72ff58039ac921524b5ed47cb8b7353/src/poetry/core/packages/constraints/any_constraint.py#L25 looks like nonsense, what we surely want to return here is some sort of negation of other?
    • I've thrown in a few exceptions for places in constraints where cases simply weren't handled. The code would have failed anyway, I've just made it explicit: but still WIBNI these cases were handled
    opened by dimbleby 12
  • Added trusted repository option

    opened by maayanbar13 12
  • Explicitly print gitignored

    Resolves: python-poetry/poetry#6443

    • [ ] Added tests for changed code.
    • [ ] Updated documentation for changed code.

    Didn't add tests as it's just a matter of calling the logger. I also fixed a typo, changing explicitely_<excluded | included> to explicitly_<excluded | included>.

    PR for https://github.com/python-poetry/poetry/issues/6443

    opened by evanrittenhouse 11
  • set up mypy hook for incremental adoption

    as per the comment here - https://github.com/python-poetry/poetry/pull/4510

    For some reason I had to use a different config to successfully exclude the _vendor directory.

    opened by danieleades 11
  • Allow non-existent path dependencies

    Currently, poetry lock --no-update fails when you remove any path dependency: it first loads all dependencies from the lock file (which still references the dependency you just deleted) and fails at that point, because DirectoryDependency immediately throws an exception when instantiated with a non-existent path.

    This change allows the process to continue, which is the same behavior VCS dependencies have.

    If the pyproject.toml has a path dependency that doesn't exist we will want to error for poetry install and poetry lock (poetry build with path dependencies is already kinda a no-go). poetry install already fails gracefully (if you lock the project then delete the path dependency and try to install; i.e. when no locking happens before we start installing), I opened https://github.com/python-poetry/poetry/pull/6844 to make poetry lock fail gracefully.
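The lenient behaviour described above can be sketched with a hypothetical class (this is not poetry-core's actual API): record the missing path at construction time and validate only when the path is actually needed, the way VCS dependencies already behave.

```python
from pathlib import Path

class DirectoryDependency:
    """Hypothetical sketch: defer the existence check instead of raising."""

    def __init__(self, name: str, path: Path) -> None:
        self.name = name
        self.path = path  # may not exist yet; checked only when needed

    def validate(self) -> None:
        # Called by operations that really need the files (e.g. install):
        if not self.path.exists():
            raise ValueError(
                f"Directory dependency {self.name} points to missing path {self.path}"
            )

# Constructing with a missing path no longer fails immediately:
dep = DirectoryDependency("my-lib", Path("/definitely/not/here"))
```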

    opened by adriangb 10
  • performance fix for simplified marker simplifications

    While working on full support for overlapping markers, I encountered a combinatorial explosion when building a union of markers (see test case).

    The issue is that we introduced cnf without introducing a counterpart of union_simplify, which leads to unnecessarily big markers. Therefore, I added intersect_simplify, analogous to union_simplify.

    Further, the simplifications in #530 went a bit too far in removing the common_markers/unique_markers simplification in union_simplify. I reverted this removal and added analogous functionality to intersect_simplify.

    Comparing the number of items in itertools.product in dnf of union (after cnf) shows the benefit of the simplifications:

    | union (see test case) | number and length of markers in conjunction after cnf | number of items in itertools.product in dnf |
    |---|---|---|
    | without PR | 31 (1-4) | > 2**44 |
    | with simple intersect_simplify | 9 (1-2) | 256 |
    | with revival of common_markers/unique_markers simplification | 3 (1-2) | 4 |
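The explosion itself is easy to reproduce in miniature: distributing a conjunction of n disjunctions with k terms each yields k**n conjunctions via itertools.product, unless shared terms are factored out first, which is what the simplifications above do (the marker names here are placeholders):

```python
from itertools import product

# (a or b) and (a or c) and (a or d), as a CNF list of clauses:
cnf = [["a", "b"], ["a", "c"], ["a", "d"]]

# Naive distribution into DNF enumerates every combination of clause terms:
dnf = list(product(*cnf))
print(len(dnf))  # 2**3 = 8 raw conjunctions before any simplification

# Factoring out the shared term gives the equivalent "a or (b and c and d)":
common = set(cnf[0]).intersection(*cnf[1:])
print(common)  # {'a'}
```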

    opened by radoering 1
  • markers: fix `get_python_constraint_by_marker()` for multi markers and marker unions without `python_version`

    Actually the same as #307 which only covered single markers.

    The actual fix is in only() of MultiMarker and MarkerUnion. The test shows that get_python_constraint_by_marker() is fixed.

    There were some test cases testing the faulty behaviour of only(). I changed not only the expectation (which would have been "" in most cases) but also the input in some cases, to cover more interesting cases.

    • [x] Added tests for changed code.
    • [ ] Updated documentation for changed code.
    opened by radoering 1
  • builders/wheel: Ensure dist-info is written deterministically

    glob() returns values in "on disk" order. To make the RECORD file deterministic and consistent between builds, we need to sort the data before adding it to the records list.
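A minimal sketch of the fix (the file names are made up; the point is that sorting makes the result independent of on-disk order):

```python
from pathlib import Path
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    # Create files in a non-alphabetical order:
    for name in ("b.py", "a.py", "c.py"):
        Path(tmp, name).touch()

    # Sorting the glob() results makes the listing deterministic,
    # regardless of the order the filesystem happens to return them in:
    files = sorted(Path(tmp).glob("*.py"))
    names = [f.name for f in files]
    print(names)  # ['a.py', 'b.py', 'c.py']
```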

    Signed-off-by: Richard Purdie [email protected]

    Resolves: python-poetry#

    • [ ] Added tests for changed code.
    • [ ] Updated documentation for changed code.
    opened by rpurdie 3
  • PEP 440 compliance: do not implicitly allow pre-releases

    Extracted from #402. Besides being a precondition of #402 it's a fix on its own. And it requires some coordination with downstream.

    Requires: python-poetry/poetry#7225 and python-poetry/poetry#7236

    PEP 440 says:

    Pre-releases of any kind, including developmental releases, are implicitly excluded from all version specifiers, unless they are already present on the system, explicitly requested by the user, or if the only available version that satisfies the version specifier is a pre-release.

    Nothing special about "> pre-release" or ">= pre-release"
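The PEP 440 rule quoted above can be demonstrated with the `packaging` library, which implements the same specification (assuming `packaging` is installed):

```python
from packaging.specifiers import SpecifierSet

spec = SpecifierSet(">=1.0")

# Pre-releases are implicitly excluded from all version specifiers:
print(spec.contains("2.0a1"))                    # False by default

# ...unless explicitly requested:
print(spec.contains("2.0a1", prereleases=True))  # True
```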

    opened by radoering 1
  • validate extras against dependencies and in schema

    Resolves: python-poetry/poetry/issues/7226

    • [x] Added tests for changed code.
    • [ ] Updated documentation for changed code.

    With this change, extras are validated to contain only valid characters, and extras that reference dependencies that can't be found in the main dependency group raise a warning in poetry check (though not in poetry lock/poetry install).
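The character validation can be sketched against the normalized extra-name form from PEP 685 (the regex is from the spec; the dependency cross-check in this PR is a separate concern):

```python
import re

# PEP 685 normalized extra names: lowercase alphanumerics separated by hyphens.
NORMALIZED_EXTRA = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")

print(bool(NORMALIZED_EXTRA.match("docs")))        # True
print(bool(NORMALIZED_EXTRA.match("test-cov")))    # True
print(bool(NORMALIZED_EXTRA.match("Docs_Extra")))  # False: needs normalizing
```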

    opened by Panaetius 1