A toy compiler that can convert Python scripts to pickle bytecode 🥒

Overview

Pickora 🐰

A small compiler that can convert Python scripts to pickle bytecode.

Requirements

  • Python 3.8+

No third-party modules are required.

Usage

usage: pickora.py [-h] [-d] [-r] [-l {none,python,pickle}] [-o OUTPUT] file

A toy compiler that can convert Python scripts to pickle bytecode.

positional arguments:
  file                  the Python script to compile

optional arguments:
  -h, --help            show this help message and exit
  -d, --dis             disassemble compiled pickle bytecode
  -r, --eval, --run     run the pickle bytecode
  -l {none,python,pickle}, --lambda {none,python,pickle}
                        choose lambda compiling mode
  -o OUTPUT, --output OUTPUT
                        write compiled pickle to file

Lambda syntax is disabled (--lambda=none) by default.

For example, you can run:

python3 pickora.py -d samples/hello.py -o output.pkl

to compile samples/hello.py to output.pkl and show the disassembly of the compiled pickle bytecode.
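
For reference, a script Pickora can compile boils down to name lookups, assignments, and function calls. A minimal stand-in for samples/hello.py might look like this (hypothetical; the actual sample file may differ):

# hello.py (hypothetical contents)
print("Hello, world!")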

Note that this won't run the pickle for you. To run it, add the -r option, or execute the following command after compiling:

python3 -m pickle output.pkl

Special Syntax

RETURN

RETURN is a keyword reserved for specifying the result of pickle.load(s). It may only appear alone as the final statement, and you can assign any value or expression to it.

For example, after you compile the following code and load the compiled pickle with pickle.loads, it returns the string 'INT_MAX=2147483647'.

# source.py
n = pow(2, 31) - 1
RETURN = "INT_MAX=%d" % n

It might look like this:

$ python3 pickora.py source.py -o output.pkl
Saving pickle to output.pkl

$ python3 -m pickle output.pkl
'INT_MAX=2147483647'
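
Equivalently, you can load the compiled pickle from Python yourself (a minimal sketch using only the standard library):

import pickle

# Loading the compiled pickle returns the value assigned to RETURN.
with open("output.pkl", "rb") as f:
    result = pickle.load(f)

print(result)  # INT_MAX=2147483647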

Todos

  • Operators (compare, unary, binary, subscript)
  • Unpacking assignment
  • Augmented assignment
  • Macros (directly using GLOBAL, OBJECT bytecodes)
  • Lambda (I don't want to support normal functions, because they don't seem "picklic" to me)
    • Python bytecode mode
    • Pickle bytecode mode

Impracticable

  • Function call with kwargs
    • NEWOBJ_EX, the only opcode that takes keyword arguments, only supports type objects (it calls cls.__new__(cls, *args, **kwargs))

FAQ

What is pickle?

RTFM.

Why?

It's cool.

Is it useful?

No, not at all, it's definitely useless.

So, is this garbage?

Yep, it's cool garbage.

Will it support syntax like if / while / for?

No. All pickle can do is define variables and call functions, so this kind of syntax can't exist.

But if you want to do things like:

ans = input("Yes/No: ")
if ans == 'Yes':
  print("Great!")
elif ans == 'No':
  exit()

It's still achievable! You can rewrite your code like this:

from functools import partial
# Map each answer to a zero-argument callable. str is a do-nothing
# fallback for any other input (repr would raise a TypeError when
# called with no arguments).
condition = {'Yes': partial(print, 'Great!'), 'No': exit}
ans = input("Yes/No: ")
condition.get(ans, str)()

Ta-da! The dict acts as a dispatch table, which is exactly the kind of name lookup and function call that pickle can express.

For loop syntax, you can try using map / reduce, and so on.
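
For instance, here is a minimal sketch of loops rewritten functionally (plain Python; operator functions are used instead of lambdas, since lambda syntax is disabled by default):

from functools import reduce
import operator

# "for x in items: print(x)" becomes a map; list() forces the lazy map object.
list(map(print, [1, 2, 3]))

# "total = 0; for x in items: total += x" becomes a reduce.
total = reduce(operator.add, [1, 2, 3, 4], 0)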

And yes, you are right, it's functional programming time!
