Schema validation just got Pythonic

Overview

schema is a library for validating Python data structures, such as those obtained from config files, forms, external services, or command-line parsing, and converted from JSON/YAML (or something else) to Python data types.

Example

Here is a quick example to get a feeling of schema, validating a list of entries with personal information:

>>> from schema import Schema, And, Use, Optional, SchemaError

>>> schema = Schema([{'name': And(str, len),
...                   'age':  And(Use(int), lambda n: 18 <= n <= 99),
...                   Optional('gender'): And(str, Use(str.lower),
...                                           lambda s: s in ('squid', 'kid'))}])

>>> data = [{'name': 'Sue', 'age': '28', 'gender': 'Squid'},
...         {'name': 'Sam', 'age': '42'},
...         {'name': 'Sacha', 'age': '20', 'gender': 'KID'}]

>>> validated = schema.validate(data)

>>> assert validated == [{'name': 'Sue', 'age': 28, 'gender': 'squid'},
...                      {'name': 'Sam', 'age': 42},
...                      {'name': 'Sacha', 'age' : 20, 'gender': 'kid'}]

If the data is valid, Schema.validate will return the validated data (optionally converted with Use calls; see below).

If the data is invalid, Schema will raise a SchemaError exception. If you just want to check whether the data is valid, schema.is_valid(data) will return True or False.
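
A minimal sketch of both outcomes:

>>> from schema import Schema
>>> Schema(int).is_valid(123)
True
>>> Schema(int).is_valid('123')
False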

Installation

Use pip or easy_install:

pip install schema

Alternatively, you can just drop the schema.py file into your project; it is self-contained.

  • schema is tested with Python 2.6, 2.7, 3.2, 3.3, 3.4, 3.5, 3.6, 3.7 and PyPy.
  • schema follows semantic versioning.

How Schema validates data

Types

If Schema(...) encounters a type (such as int, str, object, etc.), it will check whether the corresponding piece of data is an instance of that type; otherwise it will raise SchemaError.

>>> from schema import Schema

>>> Schema(int).validate(123)
123

>>> Schema(int).validate('123')
Traceback (most recent call last):
...
schema.SchemaUnexpectedTypeError: '123' should be instance of 'int'

>>> Schema(object).validate('hai')
'hai'

Callables

If Schema(...) encounters a callable (a function, a class, or an object with a __call__ method), it will call it; if the return value evaluates to True, it will continue validating, otherwise it will raise SchemaError.

>>> import os

>>> Schema(os.path.exists).validate('./')
'./'

>>> Schema(os.path.exists).validate('./non-existent/')
Traceback (most recent call last):
...
schema.SchemaError: exists('./non-existent/') should evaluate to True

>>> Schema(lambda n: n > 0).validate(123)
123

>>> Schema(lambda n: n > 0).validate(-12)
Traceback (most recent call last):
...
schema.SchemaError: <lambda>(-12) should evaluate to True

"Validatables"

If Schema(...) encounters an object with a validate method, it will run this method on the corresponding data as data = obj.validate(data). This method may raise a SchemaError exception, which tells Schema that this piece of data is invalid; otherwise, validation continues.

An example of a "validatable" is Regex, which tries to match a string or a buffer against the given regular expression (itself given as a string, a buffer, or a compiled regex SRE_Pattern):

>>> from schema import Regex
>>> import re

>>> Regex(r'^foo').validate('foobar')
'foobar'

>>> Regex(r'^[A-Z]+$', flags=re.I).validate('those-dashes-dont-match')
Traceback (most recent call last):
...
schema.SchemaError: Regex('^[A-Z]+$', flags=re.IGNORECASE) does not match 'those-dashes-dont-match'

For the more general case, you can use Use to create such objects. Use lets you apply a function or type to convert a value while validating it:

>>> from schema import Use

>>> Schema(Use(int)).validate('123')
123

>>> Schema(Use(lambda f: open(f, 'a'))).validate('LICENSE-MIT')
<_io.TextIOWrapper name='LICENSE-MIT' mode='a' encoding='UTF-8'>

Dropping the details, Use is basically:

class Use(object):

    def __init__(self, callable_):
        self._callable = callable_

    def validate(self, data):
        try:
            return self._callable(data)
        except Exception as e:
            raise SchemaError('%r raised %r' % (self._callable.__name__, e))
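
In practice, a failing conversion simply surfaces as a validation failure; a quick sketch (the exact error wording may differ from the simplified class above):

>>> Schema(Use(int)).is_valid('not a number')
False
>>> Schema(Use(float)).is_valid('3.14')
True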

Sometimes you need to transform and validate part of the data, but keep the original data unchanged. Const helps to keep your data safe:

>>> from schema import Use, Const, And, Schema

>>> from datetime import datetime

>>> is_future = lambda date: datetime.now() > date

>>> to_json = lambda v: {"timestamp": v}

>>> Schema(And(Const(And(Use(datetime.fromtimestamp), is_future)), Use(to_json))).validate(1234567890)
{'timestamp': 1234567890}

Now you can write your own validation-aware classes and data types.
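
For example, here is a hypothetical validatable written from scratch; the class name and error text are made up for illustration:

>>> from schema import Schema, SchemaError

>>> class Even(object):
...     """Accept only even integers."""
...     def validate(self, data):
...         if isinstance(data, int) and data % 2 == 0:
...             return data
...         raise SchemaError('%r is not an even integer' % (data,))

>>> Schema(Even()).validate(10)
10
>>> Schema(Even()).is_valid(7)
False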

Lists, similar containers

If Schema(...) encounters an instance of list, tuple, set or frozenset, it will validate contents of corresponding data container against all schemas listed inside that container and aggregate all errors:

>>> Schema([1, 0]).validate([1, 1, 0, 1])
[1, 1, 0, 1]

>>> Schema((int, float)).validate((5, 7, 8, 'not int or float here'))
Traceback (most recent call last):
...
schema.SchemaError: Or(<class 'int'>, <class 'float'>) did not validate 'not int or float here'
'not int or float here' should be instance of 'int'
'not int or float here' should be instance of 'float'

Dictionaries

If Schema(...) encounters an instance of dict, it will validate the data's key-value pairs:

>>> d = Schema({'name': str,
...             'age': lambda n: 18 <= n <= 99}).validate({'name': 'Sue', 'age': 28})

>>> assert d == {'name': 'Sue', 'age': 28}

You can specify keys as schemas too:

>>> schema = Schema({str: int,  # string keys should have integer values
...                  int: None})  # int keys should always be None

>>> data = schema.validate({'key1': 1, 'key2': 2,
...                         10: None, 20: None})

>>> schema.validate({'key1': 1,
...                   10: 'not None here'})
Traceback (most recent call last):
...
schema.SchemaError: Key '10' error:
None does not match 'not None here'

This is useful if you want to check certain key-value pairs but don't care about the others:

>>> schema = Schema({'<id>': int,
...                  '<file>': Use(open),
...                  str: object})  # don't care about other str keys

>>> data = schema.validate({'<id>': 10,
...                         '<file>': 'README.rst',
...                         '--verbose': True})

You can mark a key as optional as follows:

>>> from schema import Optional
>>> Schema({'name': str,
...         Optional('occupation'): str}).validate({'name': 'Sam'})
{'name': 'Sam'}

Optional keys can also carry a default, to be used when no key in the data matches:

>>> from schema import Optional
>>> Schema({Optional('color', default='blue'): str,
...         str: str}).validate({'texture': 'furry'}
...       ) == {'color': 'blue', 'texture': 'furry'}
True

Defaults are used verbatim, not passed through any validators specified in the value.
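
A small sketch of that behaviour (the key name is arbitrary): the default string is returned as-is, while a supplied value still goes through Use:

>>> from schema import Schema, Optional, Use
>>> s = Schema({Optional('retries', default='3'): Use(int)})
>>> s.validate({})
{'retries': '3'}
>>> s.validate({'retries': '5'})
{'retries': 5}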

default can also be a callable:

>>> from schema import Schema, Optional
>>> Schema({Optional('data', default=dict): {}}).validate({}) == {'data': {}}
True

Also, a caveat: if you only specify type keys, schema won't validate an empty dict:

>>> Schema({int:int}).is_valid({})
False

To do that, you need Schema(Or({int:int}, {})). This is unlike what happens with lists, where Schema([int]).is_valid([]) will return True.
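
A quick sketch of that workaround:

>>> from schema import Or
>>> Schema(Or({int: int}, {})).is_valid({})
True
>>> Schema(Or({int: int}, {})).is_valid({1: 2})
True
>>> Schema([int]).is_valid([])
True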

schema has classes And and Or that help to validate several schemas against the same data:

>>> from schema import And, Or

>>> Schema({'age': And(int, lambda n: 0 < n < 99)}).validate({'age': 7})
{'age': 7}

>>> Schema({'password': And(str, lambda s: len(s) > 6)}).validate({'password': 'hai'})
Traceback (most recent call last):
...
schema.SchemaError: Key 'password' error:
<lambda>('hai') should evaluate to True

>>> Schema(And(Or(int, float), lambda x: x > 0)).validate(3.1415)
3.1415

In a dictionary, you can also combine two keys in a "one or the other" manner. To do so, use the Or class as a key:

>>> from schema import Or, Schema
>>> schema = Schema({
...    Or("key1", "key2", only_one=True): str
... })

>>> schema.validate({"key1": "test"}) # Ok
{'key1': 'test'}

>>> schema.validate({"key1": "test", "key2": "test"}) # SchemaError
Traceback (most recent call last):
...
schema.SchemaOnlyOneAllowedError: There are multiple keys present from the Or('key1', 'key2') condition

Hooks

You can define hooks, which are functions that are executed whenever a valid key-value pair is found. The Forbidden class is an example of this.

You can mark a key as forbidden as follows:

>>> from schema import Forbidden
>>> Schema({Forbidden('age'): object}).validate({'age': 50})
Traceback (most recent call last):
...
schema.SchemaForbiddenKeyError: Forbidden key encountered: 'age' in {'age': 50}

A few things are worth noting. First, the value paired with the forbidden key determines whether it will be rejected:

>>> Schema({Forbidden('age'): str, 'age': int}).validate({'age': 50})
{'age': 50}

Note: if we hadn't supplied the 'age' key here, the call would have failed too, but with SchemaWrongKeyError, not SchemaForbiddenKeyError.

Second, Forbidden has a higher priority than standard keys, and consequently than Optional. This means we can do the following:

>>> Schema({Forbidden('age'): object, Optional(str): object}).validate({'age': 50})
Traceback (most recent call last):
...
schema.SchemaForbiddenKeyError: Forbidden key encountered: 'age' in {'age': 50}

You can also define your own hooks. The following hook will call _my_function if the key is encountered.

from schema import Hook
def _my_function(key, scope, error):
    print(key, scope, error)

Hook("key", handler=_my_function)

Here's an example where a Deprecated class is added to log warnings whenever a key is encountered:

import logging

from schema import Hook, Schema

class Deprecated(Hook):
    def __init__(self, *args, **kwargs):
        kwargs["handler"] = lambda key, *args: logging.warning(f"`{key}` is deprecated. " + (self._error or ""))
        super(Deprecated, self).__init__(*args, **kwargs)

Schema({Deprecated("test", "custom error message."): object}, ignore_extra_keys=True).validate({"test": "value"})
...
WARNING: `test` is deprecated. custom error message.

Extra Keys

The Schema(...) parameter ignore_extra_keys causes validation to ignore extra keys in a dictionary, and also to not return them after validating.

>>> schema = Schema({'name': str}, ignore_extra_keys=True)
>>> schema.validate({'name': 'Sam', 'age': '42'})
{'name': 'Sam'}

If you would like any extra keys returned, use object: object as one of the key/value pairs, which will match any key and any value. Otherwise, extra keys will raise a SchemaError.
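
A minimal sketch of the object: object approach (the extra key here is arbitrary):

>>> schema = Schema({'name': str, object: object})
>>> schema.validate({'name': 'Sam', 'age': '42'}) == {'name': 'Sam', 'age': '42'}
True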

User-friendly error reporting

You can pass a keyword argument error to any of the validatable classes (such as Schema, And, Or, Regex, Use) to report this error instead of the built-in one.

>>> Schema(Use(int, error='Invalid year')).validate('XVII')
Traceback (most recent call last):
...
schema.SchemaError: Invalid year

You can see all the errors that occurred by accessing the exception's exc.autos for auto-generated error messages, and exc.errors for errors which had error text passed to them.

You can exit with sys.exit(exc.code) if you want to show the messages to the user without a traceback. error messages are given precedence in that case.
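
For example, a sketch of inspecting a caught exception (assuming, as above, that the custom message takes precedence in exc.code):

>>> from schema import Schema, Use, SchemaError
>>> try:
...     Schema(Use(int, error='Invalid year')).validate('XVII')
... except SchemaError as exc:
...     auto_messages = exc.autos     # auto-generated messages
...     custom_messages = exc.errors  # messages passed via error=
...     print(exc.code)
Invalid year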

A JSON API example

Here is a quick example: validating a "create a gist" request from the GitHub API.

>>> gist = '''{"description": "the description for this gist",
...            "public": true,
...            "files": {
...                "file1.txt": {"content": "String file contents"},
...                "other.txt": {"content": "Another file contents"}}}'''

>>> from schema import Schema, And, Use, Optional

>>> import json

>>> gist_schema = Schema(And(Use(json.loads),  # first convert from JSON
...                          # use str since json returns unicode
...                          {Optional('description'): str,
...                           'public': bool,
...                           'files': {str: {'content': str}}}))

>>> gist = gist_schema.validate(gist)

# gist:
{u'description': u'the description for this gist',
 u'files': {u'file1.txt': {u'content': u'String file contents'},
            u'other.txt': {u'content': u'Another file contents'}},
 u'public': True}

Using schema with docopt

Assume you are using docopt with the following usage-pattern:

Usage: my_program.py [--count=N] <path> <files>...

and you would like to validate that <files> are readable, that <path> exists, and that --count is either an integer from 0 to 5 or None.

Assuming docopt returns the following dict:

>>> args = {'<files>': ['LICENSE-MIT', 'setup.py'],
...         '<path>': '../',
...         '--count': '3'}

this is how you validate it using schema:

>>> from schema import Schema, And, Or, Use
>>> import os

>>> s = Schema({'<files>': [Use(open)],
...             '<path>': os.path.exists,
...             '--count': Or(None, And(Use(int), lambda n: 0 < n < 5))})

>>> args = s.validate(args)

>>> args['<files>']
[<_io.TextIOWrapper name='LICENSE-MIT' ...>, <_io.TextIOWrapper name='setup.py' ...>]

>>> args['<path>']
'../'

>>> args['--count']
3

As you can see, schema validated the data successfully, opened the files, and converted '3' to an int.

JSON schema

You can also generate a standard draft-07 JSON schema from a dict-based Schema. This can be used to add word completion, validation, and documentation directly in code editors. The output schema can also be used with JSON-schema-compatible libraries.

JSON: Generating

Just define your schema normally and call .json_schema() on it. The output is a Python dict; you need to dump it to JSON yourself.

>>> from schema import Optional, Schema
>>> import json
>>> s = Schema({"test": str,
...             "nested": {Optional("other"): str}
...             })
>>> json_schema = json.dumps(s.json_schema("https://example.com/my-schema.json"))

# json_schema
{
    "type":"object",
    "properties": {
        "test": {"type": "string"},
        "nested": {
            "type":"object",
            "properties": {
                "other": {"type": "string"}
            },
            "required": [],
            "additionalProperties": false
        }
    },
    "required":[
        "test",
        "nested"
    ],
    "additionalProperties":false,
    "$id":"https://example.com/my-schema.json",
    "$schema":"http://json-schema.org/draft-07/schema#"
}

You can add descriptions for schema elements by using a Literal object instead of a plain string. The main schema can also have a description.

These will appear in IDEs to help your users write a configuration.

>>> from schema import Literal, Schema
>>> import json
>>> s = Schema({Literal("project_name", description="Names must be unique"): str}, description="Project schema")
>>> json_schema = json.dumps(s.json_schema("https://example.com/my-schema.json"), indent=4)

# json_schema
{
    "type": "object",
    "properties": {
        "project_name": {
            "description": "Names must be unique",
            "type": "string"
        }
    },
    "required": [
        "project_name"
    ],
    "additionalProperties": false,
    "$id": "https://example.com/my-schema.json",
    "$schema": "http://json-schema.org/draft-07/schema#",
    "description": "Project schema"
}

JSON: Supported validations

The resulting JSON schema is not guaranteed to accept the same objects as the library would accept, since some validations are not implemented or have no JSON schema equivalent. This is the case for the Use and Hook objects, for example.
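
For example, a sketch of what that looks like in practice (based on the And/Use issue quoted in the comments below, so treat the exact output as indicative):

from schema import And, Schema, Use

Schema({'name': And(str, Use(str.lower))}).json_schema("example_schema")
# the 'name' property is rendered as {}: neither Use nor the str inside And
# contributes a "type" constraint to the generated JSON schema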

Implemented

Object properties

Use a dict literal. The dict keys are the JSON schema properties.

Example:

Schema({"test": str})

becomes

{'type': 'object', 'properties': {'test': {'type': 'string'}}, 'required': ['test'], 'additionalProperties': False}.

Please note that attributes are required by default. To create optional attributes use Optional, like so:

Schema({Optional("test"): str})

becomes

{'type': 'object', 'properties': {'test': {'type': 'string'}}, 'required': [], 'additionalProperties': False}

additionalProperties is set to true when at least one of the conditions is met:
  • ignore_extra_keys is True
  • at least one key is str or object

For example:

Schema({str: str}) and Schema({}, ignore_extra_keys=True)

both become

{'type': 'object', 'properties' : {}, 'required': [], 'additionalProperties': True}

and

Schema({})

becomes

{'type': 'object', 'properties' : {}, 'required': [], 'additionalProperties': False}

Types

Use the Python type name directly. It will be converted to the JSON name:

Example:

Schema(float)

becomes

{"type": "number"}

Array items

Surround a schema with [].

Example:

Schema([str]) means an array of strings and becomes:

{'type': 'array', 'items': {'type': 'string'}}

Enumerated values

Use Or.

Example:

Schema(Or(1, 2, 3)) becomes

{"enum": [1, 2, 3]}

Constant values

Use the value itself.

Example:

Schema("name") becomes

{"const": "name"}

Regular expressions

Use Regex.

Example:

Schema(Regex(r"^v\d+")) becomes

{'type': 'string', 'pattern': '^v\\d+'}

Annotations (title and description)

You can use the name and description parameters of the Schema object's init method.

To add a description to a key, replace the plain str with a Literal object.

Example:

Schema({Literal("test", description="A description"): str})

is equivalent to

Schema({"test": str})

with the description added to the resulting JSON schema.
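
A sketch combining both, under the assumption (from the heading above) that name supplies the title annotation and description the description:

from schema import Literal, Schema

Schema(
    {Literal("test", description="Per-key description"): str},
    name="Test schema",                      # schema-level title annotation
    description="Schema-level description",  # appears as "description" in the output
)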

Combining schemas with allOf

Use And.

Example:

Schema(And(str, "value"))

becomes

{"allOf": [{"type": "string"}, {"const": "value"}]}

Note that this example is not really useful in the real world, since const already implies the type.

Combining schemas with anyOf

Use Or.

Example:

Schema(Or(str, int))

becomes

{"anyOf": [{"type": "string"}, {"type": "integer"}]}

Not implemented

The following JSON schema validations cannot be generated from this library.

JSON: Minimizing output size

Explicit Reuse

If your JSON schema is big and has a lot of repetition, it can be made simpler and smaller by defining Schema objects as references. These references are placed in a "definitions" section of the main schema.

You can look at the JSON schema documentation for more information.

>>> from schema import Optional, Schema
>>> import json
>>> s = Schema({"test": str,
...             "nested": Schema({Optional("other"): str}, name="nested", as_reference=True)
...             })
>>> json_schema = json.dumps(s.json_schema("https://example.com/my-schema.json"), indent=4)

# json_schema
{
    "type": "object",
    "properties": {
        "test": {
            "type": "string"
        },
        "nested": {
            "$ref": "#/definitions/nested"
        }
    },
    "required": [
        "test",
        "nested"
    ],
    "additionalProperties": false,
    "$id": "https://example.com/my-schema.json",
    "$schema": "http://json-schema.org/draft-07/schema#",
    "definitions": {
        "nested": {
            "type": "object",
            "properties": {
                "other": {
                    "type": "string"
                }
            },
            "required": [],
            "additionalProperties": false
        }
    }
}

This becomes really useful when the same object is used several times:

>>> from schema import Optional, Or, Schema
>>> import json
>>> language_configuration = Schema({"autocomplete": bool, "stop_words": [str]}, name="language", as_reference=True)
>>> s = Schema({Or("ar", "cs", "de", "el", "eu", "en", "es", "fr"): language_configuration})
>>> json_schema = json.dumps(s.json_schema("https://example.com/my-schema.json"), indent=4)

# json_schema
{
    "type": "object",
    "properties": {
        "ar": {
            "$ref": "#/definitions/language"
        },
        "cs": {
            "$ref": "#/definitions/language"
        },
        "de": {
            "$ref": "#/definitions/language"
        },
        "el": {
            "$ref": "#/definitions/language"
        },
        "eu": {
            "$ref": "#/definitions/language"
        },
        "en": {
            "$ref": "#/definitions/language"
        },
        "es": {
            "$ref": "#/definitions/language"
        },
        "fr": {
            "$ref": "#/definitions/language"
        }
    },
    "required": [],
    "additionalProperties": false,
    "$id": "https://example.com/my-schema.json",
    "$schema": "http://json-schema.org/draft-07/schema#",
    "definitions": {
        "language": {
            "type": "object",
            "properties": {
                "autocomplete": {
                    "type": "boolean"
                },
                "stop_words": {
                    "type": "array",
                    "items": {
                        "type": "string"
                    }
                }
            },
            "required": [
                "autocomplete",
                "stop_words"
            ],
            "additionalProperties": false
        }
    }
}

Automatic reuse

If you want to minimize the output size without using names explicitly, you can have the library generate hashes of parts of the output JSON schema and use them as references throughout.

Enable this behaviour by providing the parameter use_refs to the json_schema method.

Be aware that this approach is less compatible with IDEs and JSON schema libraries, and the resulting schema is harder for humans to read.

>>> from schema import Optional, Or, Schema
>>> import json
>>> language_configuration = Schema({"autocomplete": bool, "stop_words": [str]})
>>> s = Schema({Or("ar", "cs", "de", "el", "eu", "en", "es", "fr"): language_configuration})
>>> json_schema = json.dumps(s.json_schema("https://example.com/my-schema.json", use_refs=True), indent=4)

# json_schema
{
    "type": "object",
    "properties": {
        "ar": {
            "type": "object",
            "properties": {
                "autocomplete": {
                    "type": "boolean",
                    "$id": "#6456104181059880193"
                },
                "stop_words": {
                    "type": "array",
                    "items": {
                        "type": "string",
                        "$id": "#1856069563381977338"
                    }
                }
            },
            "required": [
                "autocomplete",
                "stop_words"
            ],
            "additionalProperties": false
        },
        "cs": {
            "type": "object",
            "properties": {
                "autocomplete": {
                    "$ref": "#6456104181059880193"
                },
                "stop_words": {
                    "type": "array",
                    "items": {
                        "$ref": "#1856069563381977338"
                    },
                    "$id": "#-5377945144312515805"
                }
            },
            "required": [
                "autocomplete",
                "stop_words"
            ],
            "additionalProperties": false
        },
        "de": {
            "type": "object",
            "properties": {
                "autocomplete": {
                    "$ref": "#6456104181059880193"
                },
                "stop_words": {
                    "$ref": "#-5377945144312515805"
                }
            },
            "required": [
                "autocomplete",
                "stop_words"
            ],
            "additionalProperties": false,
            "$id": "#-8142886105174600858"
        },
        "el": {
            "$ref": "#-8142886105174600858"
        },
        "eu": {
            "$ref": "#-8142886105174600858"
        },
        "en": {
            "$ref": "#-8142886105174600858"
        },
        "es": {
            "$ref": "#-8142886105174600858"
        },
        "fr": {
            "$ref": "#-8142886105174600858"
        }
    },
    "required": [],
    "additionalProperties": false,
    "$id": "https://example.com/my-schema.json",
    "$schema": "http://json-schema.org/draft-07/schema#"
}
Comments
  • Fix error formatting for validation with callable

    There is a feature (although not documented and not tested) that allows passing format strings as error messages to the validators, which format them with the validated data if a SchemaError is thrown.

    I find this feature very useful, but it does not work when validating using a callable. For example, with the current behavior, schema.Schema(lambda d: False, error='{}').validate('This should be the error message') raises SchemaError: {}. After the fix, the error will be SchemaError: This should be the error message.

    opened by kmaork 24
  • Custom Schema Names

    In Schema.py

    Added parameter "name" to the Schema class that defaults to an empty string.

    Added the function set_schema_name() to the Schema class that formats and returns the Schema name if it isn't empty. set_schema_name() also takes in one string argument so the way it formats the name can be altered based on the schema error type.

    The formatted name it returns is then substituted into the error message that gets printed when a schema error is raised.

    opened by SnapperGee 21
  • Adding regular expression (regex) support

    Hi,

    Here is a proposition to add regex support to the library, with the use of re.pattern.search method from the Python core regex library (support was tested on python 2.6.9, 2.7.12, 3.3.0, 3.4.3, 3.5.2, pypy-5.3 and pypy3-2.4.0).

    Four simple test cases were added as well.

    Note that support for python 3.2 was not tested because of an issue with tox and virtualenv: https://github.com/travis-ci/travis-ci/issues/5517

    opened by gusmonod 20
  • When ignoring extra keys, Or's only_one should still be handled

    Sorry for another PR about this. I noticed that Or's only_one condition didn't work when mixed with ignore_extra_keys, as it was relying on the WrongKey exception. I instead implemented a type of error that stops the execution immediately.

    Let me know if you see anything I might have missed. Thanks

    opened by julienduchesne 18
  • Do not drop previous errors within an Or criterion.

    When raising a SchemaError with a user-readable error message, this message would be dropped if there was more than one validator in an Or() clause.

    BTW: Thanks for the library, we use it extensively to keep our model correct.

    opened by blaa 17
  • add strict flag in Schema class to skip wrong key validation without wildcard

    To avoid a SchemaError for validations where the data has more keys than the schema defines:

    >>> Schema({'key': 'value'}, strict=False).validate({'key': 'value', 'foo': 'bar'})
    {'key': 'value'}
    
    opened by drgarcia1986 15
  • Handle wrong keys better

    fixes #3 and #15.

    On top of the original pull request https://github.com/halst/schema/pull/18 to improve messages for values, I made the error messages clearer when the input contains unexpected keys, e.g.:

    >>> Schema({'a':int}).validate({'a': 1, 'bad': 5, 'bad2':None})
    
    Traceback (most recent call last):
    ...
    SchemaError: wrong keys 'bad', 'bad2' in {'a': 1, 'bad': 5, 'bad2': None}
    

    P.S. I tried merging with halst:master, but the master currently seems partially broken, as some tests are failing there even before my merge (probably connected to the Optional() fix), so I'll leave the merging out for now...

    opened by vidma 15
  • Made schema more extendable with simple trick. Issue #63 #64

    I made feature request #64, then read issue #63, and after some thought and a look at the code I found out that it's ultra easy to achieve. I don't think #64 and #63 need to be implemented, but it's nice to have that possibility without modifying the base schema code. The Doc class from #63 is implemented as a test, test_validate_kwargs_doc_example, to show how easy it is.

    opened by kosz85 14
  • added coverage and pep8 checks

    • integration with coveralls
    • check pep8 with flake8
    • fixed pep8 to pass, for now we use max-line 90 & two issues are ignored until decided otherwise:
      • disabled E701 (multiple statements on one line) and E126 (continuation line over-indented for hanging indent)
    • as now pep8 passes (and is quite loose), it can be activated to impact travis build status

    Note: based on https://github.com/petrblaho/python-tuskarclient/blob/master/

    opened by vidma 13
  • Types inside `And` are ignored when creating JSON schema

    Currently, it is as follows:

    >>> Schema({'name': str}).json_schema("example_schema")
    {
        'type': 'object',
        'properties': {'name': {'type': 'string'}},
        'required': ['name'],
        'additionalProperties': False,
        'id': 'example_schema',
        '$schema': 'http://json-schema.org/draft-07/schema#'
    }
    
    >>> Schema({'name': And(str, Use(str.lower))}).json_schema("example_schema")
    {
        'type': 'object',
        'properties': {'name': {}},
        'required': ['name'],
        'additionalProperties': False,
        'id': 'example_schema',
        '$schema': 'http://json-schema.org/draft-07/schema#'
    }
    

    Converting str to And(str, Use(str.lower)) results in dropping the {'type': 'string'} from the JSON schema. However, if there is a single type in And, it should be preserved in the JSON schema.

    opened by berkanteber 9
  • Feature/json schema descriptions

    Another feature for #180

    • Added the Literal type for adding a description to JSON schemas
    • Fixed new bugs introduced by the Literal type
    • Added tests to ensure the Literal type was working as expected

    This is a direct continuation to PR #206

    opened by jcbedard 9
  • Update "User-friendly error reporting" example

    Updating the "User-friendly error reporting" example to reflect the feature implemented in the following PR https://github.com/keleshev/schema/pull/107/files.

    opened by garrettprimm 0
  • Unexpected behavior: Can not double validate keys

    from schema import Schema, And, Or, Use, Optional, SchemaError, Forbidden
    
    
    x = Schema({
        Or('request', 'requests', only_one=True) : dict,
        Optional('requests'): {Use(int): dict}
    })
    
    x.validate({
        'requests': {1:{}}
    })
    

    The expected result here is that a user can supply a dict using the key 'request' or 'requests'. If 'requests' is used, then the nested dict keys should be numbered. I assumed that the dict would validate Or('request', 'requests', only_one=True): dict, and then see that the optional key 'requests' was supplied, so it should also validate Optional('requests'): {Use(int): dict}. But the output seems to only allow one validation per key.

    See output:

    ---------------------------------------------------------------------------
    SchemaMissingKeyError                     Traceback (most recent call last)
    Input In [298], in <cell line: 1>()
    ----> 1 x.validate({
          2     'requests': {1:{}}
          3 })
    
    File ~\AppData\Local\Programs\Python\Python39\lib\site-packages\schema.py:420, in Schema.validate(self, data, **kwargs)
        418     message = "Missing key%s: %s" % (_plural_s(missing_keys), s_missing_keys)
        419     message = self._prepend_schema_name(message)
    --> 420     raise SchemaMissingKeyError(message, e.format(data) if e else None)
        421 if not self._ignore_extra_keys and (len(new) != len(data)):
        422     wrong_keys = set(data.keys()) - set(new.keys())
    
    SchemaMissingKeyError: Missing key: Or('request', 'requests')
    
    opened by MrChadMWood 3
  • Potential Bug; Unexpected Behavior (Can't find reason)

    I am attempting to validate filters passed via JSON like so:

    schema_numericValue = Schema(Or({'int64_value':int}, {'double_value':float}))
    
    schema_stringFilter = Schema({
        'match_type' : Or(
            'EXACT',
            'BEGINS_WITH',
            'ENDS_WITH',
            'CONTAINS',
            'FULL_REGEXP',
            'PARTIAL_REGEXP',
            only_one=True
        ),
        'value' : str,
        Optional('case_sensitive') : bool
    })
    
    schema_inListFilter = Schema({
        'values' : [str],
        Optional('case_sensitive') : bool
    })
    
    schema_betweenFilter = Schema({
        'from_value' : schema_numericValue,
        'to_value' : schema_numericValue
    })
    
    schema_numericFilter = Schema({
        'operation' : Or(
            'EQUAL',
            'GREATER_THAN',
            'GREATER_THAN_OR_EQUAL',
            'LESS_THAN',
            'LESS_THAN_OR_EQUAL',
            only_one=True
        ),
        'value' : schema_numericValue
    })
    
    schema_filter = Schema({
        'field_name' : str,
        Or(
            'string_filter', 
            'in_list_filter', 
            'numeric_filter', 
            'between_filter',
            only_one=True
        ) : Or(schema_inListFilter,
               schema_stringFilter,
               schema_numericFilter,
               schema_betweenFilter,
               only_one=True
              )
    })
    
    schema_basicFilterExpression = Schema({
        Or(
            'filter', 
            'not_expression', 
            only_one=True
        ) : schema_filter
    })
    
    schema_basicFilterExpressionList = Schema({
        'expressions' : [schema_basicFilterExpression]
    })
    
    schema_intermediateFilterExpression = Schema({
        Or(
            'and_group',
            'or_group',
            'not_expression',
            'filter',
            only_one=True
        ) : Or(schema_basicFilterExpressionList,
               schema_basicFilterExpression,
               schema_filter,
               only_one=True)
    })
    
    schema_intermediateFilterExpressionList = Schema({
        'expressions' : [schema_intermediateFilterExpression]
    })
    
    schema_FilterExpression = Schema({
        Or(
            'and_group',
            'or_group',
            'not_expression',
            'filter',
            only_one=True
        ) : Or(schema_intermediateFilterExpressionList,
               schema_intermediateFilterExpression,
               only_one=True)
    })
    

    Attempting to validate at a sub-level like so

    schema_filter.validate({'field_name': 'p', 'in_list_filter': {'values': ['p', 'p']}})
    

    This produces the following error:

    Key 'in_list_filter' error:
    Or(Schema({'values': [<class 'str'>], Optional('case_sensitive'): <class 'bool'>}), Schema({'match_type': Or('EXACT', 'BEGINS_WITH', 'ENDS_WITH', 'CONTAINS', 'FULL_REGEXP', 'PARTIAL_REGEXP'), 'value': <class 'str'>, Optional('case_sensitive'): <class 'bool'>}), Schema({'operation': Or('EQUAL', 'GREATER_THAN', 'GREATER_THAN_OR_EQUAL', 'LESS_THAN', 'LESS_THAN_OR_EQUAL'), 'value': Schema(Or({'int64_value': <class 'int'>}, {'double_value': <class 'float'>}))}), Schema({'from_value': Schema(Or({'int64_value': <class 'int'>}, {'double_value': <class 'float'>})), 'to_value': Schema(Or({'int64_value': <class 'int'>}, {'double_value': <class 'float'>}))})) did not validate {'values': ['p', 'p']}
    

    I attempted removing the Or() statement from the value section, only checking for the schema I'm passing at the moment. When I do this, it works.

    opened by MrChadMWood 1
  • Question - How do I define conditional rules?

    Say I have a dictionary where the keys are conditional. How do I write a schema to check it?

    For example, the following 2 are ok.

    {
        'name': 'x',
        'x_val': 1
    }
    
    {
        'name': 'y',
        'y_val': 1
    }
    

    But the following 2 are not

    {
        'name': 'x',
        'y_val': 1
    }
    
    {
        'name': 'y',
        'x_val': 1
    }
    

    Which keys need to be in the dict is conditional on the value of name. I can't just say this is a dict with these keys: name 'x' has a certain set of keys (in this case, x_val), and name 'y' has a different set of keys (y_val).

    Of course, I can write my own lambda function that takes in such a dictionary and performs the checks. But I was wondering if there's some kind of out-of-the-box solution for this type of validation.

    thanks

    opened by fopguy41 1
  • ignore_extra_keys is ignored if flavor == VALIDATOR

    For a case when a validator like Or, And, etc. is used, ignore_extra_keys is taken as false and any extra fields cause a validation error. For example: the object {"str": "str", "extra": 125} is valid against the schema {"str": str} with ignore_extra_keys=true, but it is not valid against the schema Or({"str": str}, None) with ignore_extra_keys=true; however, it should be valid as well.

    Finally, I understood I needed to set it as Or({"str": str}, None, ignore_extra_keys=True); however, this is not evident enough, so it would be more convenient if Or used the ignore_extra_keys parameter from the Schema object.

    opened by ukrsms 1