tlspuffin

Logo with Penguin

TLS Protocol Under FuzzINg
A symbolic-model-guided fuzzer for TLS

Disclaimer: The term "symbolic-model-guided" should not be confused with symbolic execution or concolic fuzzing.

Description

Fuzzing implementations of cryptographic protocols is challenging. In contrast to traditional fuzzing of file formats, cryptographic protocols require a specific flow of cryptographic and mutually dependent messages to reach deep protocol states. The specification of the TLS protocol describes sound flows of messages and cryptographic operations.

Although the specification has been formally verified multiple times with significant results, a gap has emerged from the fact that implementations of the same protocol have not undergone the same logical analysis. Because the development of cryptographic protocols is error-prone, multiple security vulnerabilities have already been discovered in implementations of TLS which are not present in its specification.

Inspired by symbolic protocol verification, we present a reference implementation of a fuzzer named tlspuffin which employs concrete semantics to execute TLS 1.2 and 1.3 symbolic traces. In fact, attacks which mix TLS versions are in scope of this implementation. This method allows us to use a genetic fuzzing algorithm to fuzz protocol flows, which proceeds in the following three stages:

  • By mutating traces, we can deviate from the specification to test for logical flaws.
  • Selection of interesting protocol flows advances the fuzzing procedure.
  • A security violation oracle supervises executions and flags the vulnerabilities it detects.

The novel approach allows rediscovering known vulnerabilities that are out of scope for classical bit-level fuzzers, which shows that it is capable of reaching critical protocol states. Despite the promising methodology, tlspuffin has not yet found new vulnerabilities. This can be explained by the fact that implementing TLS protocol primitives and extensions requires significant effort, and not all features of the specification have been implemented yet. Nonetheless, the approach is promising in terms of quickly reaching high edge coverage, the expressiveness of executable protocol traces, and a stable and extensible implementation.

Features

  • Uses the LibAFL fuzzing framework
  • Fuzzer which is inspired by the Dolev-Yao symbolic model used in protocol verification
  • Domain-specific mutators for protocol fuzzing!
  • Supported Libraries Under Test: OpenSSL 1.0.1f, 1.0.2u, 1.1.1k and LibreSSL 3.3.3
  • Reproducible for each LUT. We use Git submodules to link to forks that live in the tlspuffin organisation
  • 70% Test Coverage
  • Written in Rust!

Building

Now, to build the project:

git clone https://github.com/tlspuffin/tlspuffin.git
cd tlspuffin
git submodule update --init --recursive
cargo build

Running

Fuzz using multiple clients on cores 0 to 3:

cargo run --bin tlspuffin -- --cores 0-3

Note: After switching the Library Under Test or its version, do a clean rebuild (cargo clean), for example when switching from OpenSSL 1.0.1 to 1.1.1 (see the commands below).
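
A clean rebuild then amounts to the following two commands; if your setup selects the LUT through additional Cargo feature flags, append those to the build command (omitted here):

cargo clean
cargo build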

Testing

cargo test

Command-line Interface

The syntax for the command-line of tlspuffin is:

      tlspuffin [⟨options⟩] [⟨sub-command⟩]

Global Options

Before we explain each sub-command, we first go over the global options; an example invocation combining them follows the list.

  • -c, --cores ⟨spec⟩

    This option specifies the cores to which the fuzzer should assign its worker processes. The cores can either be specified as a list using commas, e.g. "0,1,2,7", or as a range, e.g. "0-7". By default, the fuzzer runs just on core 0.

  • -i, --max-iters ⟨i⟩

    This option bounds the number of iterations the fuzzer performs. If omitted, the fuzzer runs indefinitely.

  • -p, --port ⟨n⟩

    The initial communication between the fuzzer broker and its workers happens over TCP/IP. Therefore, the broker requires a port allocation. The default port is 1337.

  • -s, --seed ⟨n⟩

    Defines an initial seed for the PRNG used for mutations. Note that this does not make the fuzzing deterministic, because of the randomness introduced by multiprocessing.
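
Putting these options together, an example invocation could look like the sketch below; the concrete values for cores, iteration bound, port, and seed are arbitrary illustrations:

cargo run --bin tlspuffin -- --cores 0,1,2,7 --max-iters 100000 --port 1338 --seed 42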

Sub-commands

Now we will go over the sub-commands execute, plot, experiment, and seed.

  • execute ⟨input⟩

    This sub-command executes a single trace persisted in a file. The path to the file is provided by the ⟨input⟩ argument.

  • plot ⟨input⟩ ⟨format⟩ ⟨output_prefix⟩

    This sub-command plots the trace stored at ⟨input⟩ in the format specified by ⟨format⟩. The created graphics are stored at a path derived from ⟨output_prefix⟩. The option --multiple can be provided to create a separate file for each step in the trace. If the option --tree is given, then only a single graphic which contains all steps is produced.

  • experiment

    This sub-command initiates an experiment (see the example after this list). Experiments are stored in a directory named experiments/ in the current working directory, one sub-directory per experiment. The title and description of the experiment can be specified with --title ⟨t⟩ and --description ⟨d⟩ respectively. Both strings are persisted in the metadata of the experiment, together with the current commit hash of tlspuffin, the version, and the current date and time.

  • seed

    This sub-command serializes the default seed corpus into a directory named corpus/ in the current working directory. The default corpus is defined in the source code of tlspuffin using the trace DSL.
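
For instance, an experiment could be started as follows; the title and description strings are arbitrary placeholders, and the global --cores option is placed before the sub-command as in the syntax above:

cargo run --bin tlspuffin -- --cores 0-3 experiment --title "baseline" --description "Baseline fuzzing run against OpenSSL 1.1.1k"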

Rust Setup

Install rustup.

The toolchain will be automatically downloaded when building this project. See ./rust-toolchain.toml for more details about the toolchain.

Make sure that you have the clang compiler installed. Optionally, also install llvm to have additional tools like sancov available. Also make sure that you have the usual build tools like make, gcc, etc. installed, as they may be needed to build OpenSSL.

Advanced Features

Running with ASAN

ASAN_OPTIONS=abort_on_error=1 \
    cargo run --bin tlspuffin --features asan -- --cores 0-3

It is important to enable abort_on_error; otherwise the fuzzer workers fail to restart after crashes.

Generate Corpus Seeds

cargo run --bin tlspuffin -- seed

Plot Symbolic Traces

To plot SVGs do the following:

cargo run --bin tlspuffin -- plot corpus/seed_client_attacker12.trace svg ./plots/seed_client_attacker12

Note: This requires that the dot binary is on your PATH. Note: The utility tools/plot-corpus.sh plots a whole directory of traces.
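
The --multiple and --tree options described for the plot sub-command can also be used here; this sketch assumes the flag may simply be appended after the positional arguments:

cargo run --bin tlspuffin -- plot corpus/seed_client_attacker12.trace svg ./plots/seed_client_attacker12 --multiple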

Execute a Symbolic Trace (with ASAN)

To analyze crashes, you can execute a single trace which crashes the testing harness:

cargo run --bin tlspuffin -- execute test.trace

To do the same with ASAN enabled:

ASAN_OPTIONS=detect_leaks=0 \
      cargo run --bin tlspuffin --features asan -- execute test.trace

Crash Deduplication

The following script creates a log file for each crash and parses the ASAN output to group similar crashes together:

tools/analyze-crashes.sh

Benchmarking

There is a benchmark in benchmark.rs which compares the execution of the dynamic functions with calling them directly. You can run it using:

cargo bench
xdg-open target/criterion/report/index.html

Documentation

This generates the documentation for this crate and opens it in the browser. It also includes the documentation of every dependency like LibAFL or rustls.

cargo doc --open

You can also view the up-to-date documentation here.

Comments
  • Support for WolfSSL through a new rust-wolfssl crate

    I tried another approach as I struggled with #136 to debug some SEGFAULTs.

    Here the approach is to clone rust-openssl, minimize the exposed interface while exposing what we need for wolfssl_binding.rs, and then plug in wolfssl-sys instead of openssl-sys.

    opened by LCBH
  • Rediscover wolfSSL vulnerabilities

    • CVE-2020-12457 in < 4.5.0 (https://nvd.nist.gov/vuln/detail/CVE-2020-12457): DoS against the server, needs change_cipher_spec (CCS) message mutations
    • CVE-2020-24613 in < 4.5.0 (https://nvd.nist.gov/vuln/detail/CVE-2020-24613): TLS 1.3 server authentication bypass
    • CVE-2021-3336 in < 4.7.0 (https://nvd.nist.gov/vuln/detail/CVE-2021-3336): TLS 1.3 server authentication bypass
    • CVE-2022-25638 in < 5.2.0 (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25638): TLS 1.3 server authentication bypass
    • CVE-2022-25640 in < 5.2.0 (https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2022-25640): attack on client authentication, needs client authentication through certificates

    opened by maxammann