Python document object mapper (load Python objects from JSON and vice versa)

Overview

lupin is a Python JSON object mapper

lupin is meant to help in serializing Python objects to JSON and deserializing JSON data to Python objects.

Installation

pip install lupin

Usage

lupin uses schemas to create a representation of a Python object.

A schema is composed of fields, each of which describes how to load and dump one attribute of an object.

Define schemas

from datetime import datetime
from lupin import Mapper, Schema, fields as f


# 1) Define your models
class Thief(object):
    def __init__(self, name, stolen_items):
        self.name = name
        self.stolen_items = stolen_items


class Painting(object):
    def __init__(self, name, author):
        self.name = name
        self.author = author


class Artist(object):
    def __init__(self, name, birth_date):
        self.name = name
        self.birth_date = birth_date


# 2) Create schemas
artist_schema = Schema({
    "name": f.String(),
    "birthDate": f.DateTime(binding="birth_date", format="%Y-%m-%d")
}, name="artist")

painting_schema = Schema({
    "name": f.String(),
    "author": f.Object(artist_schema)
}, name="painting")

thief_schema = Schema({
    "name": f.String(),
    "stolenItems": f.List(painting_schema, binding="stolen_items")
}, name="thief")

# 3) Create a mapper and register a schema for each of the models you want to map to JSON
mapper = Mapper()

mapper.register(Artist, artist_schema)
mapper.register(Painting, painting_schema)
mapper.register(Thief, thief_schema)


# 4) Create some sample data
leonardo = Artist(name="Leonardo da Vinci", birth_date=datetime(1452, 4, 15))
mona_lisa = Painting(name="Mona Lisa", author=leonardo)
arsene = Thief(name="Arsène Lupin", stolen_items=[mona_lisa])

Dump objects

# use mapper to dump python objects
assert mapper.dump(leonardo) == {
    "name": "Leonardo da Vinci",
    "birthDate": "1452-04-15"
}

assert mapper.dump(mona_lisa) == {
    "name": "Mona Lisa",
    "author": {
        "name": "Leonardo da Vinci",
        "birthDate": "1452-04-15"
    }
}

assert mapper.dump(arsene) == {
    "name": "Arsène Lupin",
    "stolenItems": [
        {
            "name": "Mona Lisa",
            "author": {
                "name": "Leonardo da Vinci",
                "birthDate": "1452-04-15"
            }
        }
    ]
}
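
mapper.dump returns plain dicts and lists, so producing an actual JSON string is just a call to the standard library's json module (a minimal sketch built only on the calls shown above):

import json

json_payload = json.dumps(mapper.dump(arsene), ensure_ascii=False)
print(json_payload)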

Load objects

# use mapper to load JSON data
data = {
    "name": "Mona Lisa",
    "author": {
        "name": "Leonardo da Vinci",
        "birthDate": "1452-04-15"
    }
}
painting = mapper.load(data, "painting")  # "painting" is the name of the schema you want to use
artist = painting.author

assert isinstance(painting, Painting)
assert painting.name == "Mona Lisa"

assert isinstance(artist, Artist)
assert artist.name == "Leonardo da Vinci"
assert artist.birth_date == datetime(1452, 4, 15)

Polymorphic lists

Sometimes a list can contain multiple types of objects. In such cases you will have to use a PolymorphicList, and you will also need to add a key to the items' schemas to store the type of each object (you can use a Constant field).

Say that our thief has leveled up and has stolen a diamond.

class Diamond(object):
    def __init__(self, carat):
        self.carat = carat


mapper = Mapper()

# Register a schema for diamonds
diamond_schema = Schema({
    "carat": f.Field(),
    "type": f.Constant("diamond")  # this will be used to know which schema to used while loading JSON
}, name="diamond")
mapper.register(Diamond, diamond_schema)

# Change our painting schema in order to include a `type` field
painting_schema = Schema({
    "name": f.String(),
    "type": f.Constant("painting"),
    "author": f.Object(artist_schema)
}, name="painting")
mapper.register(Painting, painting_schema)

# Use `PolymorphicList` for `stolen_items`
thief_schema = Schema({
    "name": f.String(),
    "stolenItems": f.PolymorphicList(on="type",  # JSON key to lookup for the polymorphic type
                                     binding="stolen_items",
                                     schemas={
                                         "painting": painting_schema,  # if `type == "painting"` then use painting_schema
                                         "diamond": diamond_schema  # if `type == "diamond"` then use diamond_schema
                                     })
}, name="thief")
mapper.register(Thief, thief_schema)


diamond = Diamond(carat=20)
arsene.stolen_items.append(diamond)

# Dump object
data = mapper.dump(arsene)
assert data == {
    "name": "Arsène Lupin",
    "stolenItems": [
        {
            "name": "Mona Lisa",
            "type": "painting",
            "author": {
                "name": "Leonardo da Vinci",
                "birthDate": "1452-04-15"
            }
        },
        {
            "carat": 20,
            "type": "diamond"
        }
    ]
}

# Load data
thief = mapper.load(data, "thief")
assert isinstance(thief.stolen_items[0], Painting)
assert isinstance(thief.stolen_items[1], Diamond)
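
The loaded attributes follow the bindings defined in the schemas, and dumping the freshly loaded objects reproduces the original document (a short round-trip check, assuming the default binding maps the "carat" key to the carat attribute):

assert thief.stolen_items[1].carat == 20

# round trip: dumping the loaded objects gives back the same data
assert mapper.dump(thief) == data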

Validation

lupin provides a set of built-in validators, which you can find in the lupin/validators folder.

While creating your schemas you can assign validators to the fields. Before loading a document, lupin validates its format. If any field is invalid, an InvalidDocument is raised containing all the errors detected in the data.

Example:

from lupin import Mapper, Schema, fields as f, validators as v
from lupin.errors import InvalidDocument, InvalidLength
from models import Artist

mapper = Mapper()

artist_schema = Schema({
    "name": f.String(validators=v.Length(max=10)),
}, name="artist")
mapper.register(Artist, artist_schema)

data = {
    "name": "Leonardo da Vinci"
}

try:
    mapper.load(data, "artist", allow_partial=True)
except InvalidDocument as errors:
    error = errors[0]
    assert isinstance(error, InvalidLength)
    assert error.path == ["name"]

Current validators are:

  • DateTimeFormat (validate that value is a valid datetime format)
  • Equal (validate that value is equal to a predefined one)
  • In (validate that a value is contained in a set of values)
  • Length (validate the length of a value)
  • Match (validate the format of a value with a regex)
  • Type (validate the type of a value, this validator is already included in all fields to match the field type)
  • URL (validate a URL string format)
  • IsNone (validate that value is None)
  • Between (validate that value belongs to a range)

Combination

You can build validator combinations using the & and | operators.

Example:

from lupin import validators as v
from lupin.errors import ValidationError

validators = v.Equal("Lupin") | v.Equal("Andrésy")
# validators passes only if value is "Lupin" or "Andrésy"

validators("Lupin", [])

try:
    validators("Holmes", [])
except ValidationError:
    print("Validation error")