Security evaluation module with ONNX, PyTorch, and SecML.

Overview

🚀 🐼 🔥 PandaVision

Integrate and automate security evaluations with ONNX, PyTorch, and SecML!

Installation

Starting the server without Docker

If you want to run the server with Docker, skip to the next section.

This project uses RQ (Redis Queue) for handling the queue of requested jobs. Please install Redis if you plan to run this Flask server without using Docker.

Then, install the Python requirements, running the following command in your shell:

pip install -r requirements.txt

Make sure your Redis server is running on your local machine. Test the Redis connection with the following command:

redis-cli ping

The response PONG should appear in the shell.

If the database server is down, check the linked docs to find out how to restart it on your system.

Note: the code expects to connect to the database through its default port (6379 for Redis).
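
If you prefer checking the connection from Python, a minimal sketch with the redis-py package (assuming the default localhost:6379 instance) is:

# Minimal connectivity check with redis-py; assumes the default
# localhost:6379 instance used by this project.
import redis

r = redis.Redis(host="localhost", port=6379)
print(r.ping())  # True if the Redis server is reachable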

Now we are ready to start the server. Don't forget that this system uses external workers to process the long-running tasks, so we need to start the workers along with the server. Run the following commands from the app folder:

python app/worker.py

Now open another shell and run the server:

python app/runserver.py
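
The repository ships its own worker script; for reference only, a minimal RQ worker consuming jobs from a queue would look roughly like this (the queue name and connection details are assumptions, not the project's exact code):

# Hypothetical minimal RQ worker; the real app/worker.py may differ.
from redis import Redis
from rq import Queue, Worker

redis_conn = Redis(host="localhost", port=6379)
queue = Queue("default", connection=redis_conn)  # queue name is an assumption

if __name__ == "__main__":
    # Blocks and consumes jobs from the queue in FIFO order.
    Worker([queue], connection=redis_conn).work()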

Starting the server with Docker

If you already started the server locally, you can skip to the next section.

If you already started the server locally, but you want to start it with Docker instead, you should stop the running services. On Linux, press CTRL+C to stop the server and the worker, then stop the Redis service on the machine.

sudo service redis stop

In order to use the docker-compose file provided, install Docker and start the Docker service.

Since this project uses different interconnected containers, it is recommended to install and use Docker Compose.

Once installed, Docker Compose will take care of the setup automatically. Just type the following command in your shell, from the app path:

docker build . -t pandavision && docker-compose build && docker-compose up

If you want to use more workers, run the following command (replace the number 2 with the number of workers you want to set up):

docker-compose up --scale worker=2

Usage

Quick start

For a quick demo, you can download a sample containing a few images from the ImageNet dataset and a pretrained ResNet-50 model from the ONNX Model Zoo.

Download the files and place them in a known directory.

Supported models

You can export your own pretrained model to ONNX from the library of your choice and pass it to the module. This project uses onnx2pytorch as a dependency to load ONNX models. Check out the supported operations if you encounter problems when importing models. A list of pretrained models is also available on the main page.
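
For reference, exporting a PyTorch model to ONNX and loading it back through onnx2pytorch can be sketched as follows (the model, file name, and input shape here are placeholders, not values required by the module):

# Sketch: export a torchvision ResNet-50 to ONNX and reload it with
# onnx2pytorch. File name and input shape are placeholders.
import torch
import torchvision
import onnx
from onnx2pytorch import ConvertModel

model = torchvision.models.resnet50(pretrained=True).eval()
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet50.onnx")

# Load the ONNX graph and convert it to a PyTorch module.
onnx_model = onnx.load("resnet50.onnx")
pytorch_model = ConvertModel(onnx_model)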

Data preparation

The module accepts HDF5 files as data sources. The file should contain the samples in NCHW format (batch, channels, height, width).

Note that, while standardization can be performed through the APIs themselves (preferred), preprocessing such as resizing, reshaping, rotation, and normalization should be applied at this step.

An example that creates a subset of the ImageNet dataset can be found in this gist.
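
As a rough illustration, an h5py snippet writing preprocessed samples and labels in NCHW format could look like the following; the dataset key names here are assumptions, so refer to the linked gist for the exact layout expected by the module:

# Sketch: save preprocessed images and labels to an HDF5 file in NCHW format.
import h5py
import numpy as np

images = np.random.rand(100, 3, 224, 224).astype(np.float32)  # N, C, H, W
labels = np.random.randint(0, 1000, size=100)

with h5py.File("imagenet_subset.hdf5", "w") as f:
    f.create_dataset("data", data=images)    # key name is an assumption
    f.create_dataset("labels", data=labels)  # key name is an assumption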

How to start a security evaluation job

The easy way

You can access the APIs through the web interface by connecting to http://localhost:8080. You will land on the home page of the service. Click the "Try it out!" button, and you will see a form to configure the security evaluation. Upload the model and dataset of your choice, then select the parameters. Finally, click "Submit" and wait for the evaluation to finish. As soon as the worker finishes processing the data, you will see the security evaluation curve on the interface.

You can follow this video tutorial (click for YouTube video) for configuring the security evaluation:

Demo PandaVision

Coming soon ➡️ download data in CSV format.

The nerdy way

A security evaluation job can be enqueued with a POST request to /security_evaluations. The API returns the job unique ID that can be used to access job status and results. Running workers will wait for new jobs in the queue and consume them with a FIFO rule.

The request should specify the following parameters in its body:

  • dataset (string): the path of the dataset to load (the validation dataset should be used; otherwise, check out the "indexes" input parameter).
  • trained-model (string): the path of the trained ONNX model.
  • performance-metric (string): the performance metric used to evaluate the adversarial robustness of the system. Currently, only the classification-accuracy metric is implemented.
  • evaluation-mode (string): one of 'fast', 'complete'. A fast evaluation will perform the experiment with a subset of the whole dataset (100 samples). For more info on the fast evaluation, see this paper.
  • task (string): type of task that the model is supposed to perform. This determines the attack scenario. (Available: "classification" - support for more use cases will be provided in the future.)
  • perturbation-type (string): type of perturbation to apply (available: "max-norm" or "random").
  • perturbation-values (Array of floats): array of values to use for crafting the adversarial examples. These are specified as a fraction of the input range, in [0, 1] (e.g., a value of 0.05 will apply a perturbation of at most 5% of the input scale).
  • indexes (Array of ints): if specified, this list of indexes will be used to create a specific subset of the dataset.
  • preprocessing (dict): dictionary with keys "mean" and "std" for defining custom preprocessing. The values should be expressed as lists. If not set, standard ImageNet preprocessing will be applied; specify an empty dict for no preprocessing.
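
For example, a request body for a fast, max-norm evaluation might look like this: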
{
  "dataset": "<dataset-path>.hdf5",
  "trained-model": "<model_path>.onnx",
  "performance-metric": "classification-accuracy",
  "evaluation-mode": "fast",
  "task": "classification",
  "perturbation-type": "max-norm",
  "perturbation-values": [
    0, 0.01, 0.02, 0.03, 0.04, 0.05
  ]
}
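
As a sketch, the job can be enqueued from Python with the requests library; the host and port (localhost:8080) and the response handling below are assumptions based on the description above:

# Sketch: enqueue a security evaluation job via the REST API.
import requests

payload = {
    "dataset": "<dataset-path>.hdf5",
    "trained-model": "<model_path>.onnx",
    "performance-metric": "classification-accuracy",
    "evaluation-mode": "fast",
    "task": "classification",
    "perturbation-type": "max-norm",
    "perturbation-values": [0, 0.01, 0.02, 0.03, 0.04, 0.05],
}

# The API returns the unique job ID; the exact response schema may differ.
response = requests.post("http://localhost:8080/security_evaluations", json=payload)
print(response.text)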

The API can also be tested with Postman (it is configured already to get the ID and use it for fetching results):

Run in Postman

Job status API

Job status can be retrieved by sending a GET request to /security_evaluations/{id}, where {id} should be replaced with the job ID returned at the previous step. A GET to /security_evaluations will return the status of all jobs found in the queues and in the finished-job registries.
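
A minimal polling sketch, reusing the job ID returned when the job was enqueued (the host, port, and status handling are assumptions):

# Sketch: poll the job status until the job reports a finished state.
import time
import requests

BASE = "http://localhost:8080/security_evaluations"  # assumed host and port

def wait_for_job(job_id, poll_seconds=5):
    while True:
        status = requests.get(f"{BASE}/{job_id}").json()
        print(status)
        if "finished" in str(status).lower():  # assumed status wording
            return status
        time.sleep(poll_seconds)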

Job results API

Job results can be retrieved, once the job has entered the finished state, with a GET request to /security_evaluations/{id}/output. A request to this path with a job ID that is not yet in the finished status will redirect to the job status API.
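
Once the job is finished, the results can be fetched in the same way (again a sketch; host and port are assumptions):

# Sketch: fetch the security evaluation results for a finished job.
import requests

job_id = "<job-id>"  # ID returned when the job was enqueued
results = requests.get(
    f"http://localhost:8080/security_evaluations/{job_id}/output"
).json()
print(results)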

Contributing

Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

If you have a suggestion that would make this better, please fork the repo and create a pull request. You can also simply open an issue with the tag "enhancement". Don't forget to give the project a star! Thanks again!

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

If you don't have time to contribute yourself, feel free to open an issue with your suggestions.

License

This project is licensed under the terms of the MIT license. See LICENSE for more information.

Credits

Based on the Security evaluation module - ALOHA.eu project

Comments
  • Adv examples api (PGD support)

    Adv examples api (PGD support)

    Changelog

    • [x] Add caching for PGD attack

    • [x] Add curve visualization for PGD attack

    • [x] Add adversarial example visualization for PGD attack

    • [x] Extend to other attacks

    • [x] Fix min-distance attacks and PGD caching

    • [x] Document the changes

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Updates - attack logging, adversarial example inspection, debugging.

    • Does this PR introduce a breaking change? (What changes might users need to make in their application due to this PR?) Major changes.

    • Other information: (does the PR fix some issues? Tag them with #)

      Fixes #6 .

    opened by maurapintor 0
  • Fix ram problems

    Fix ram problems

    Changelog

    • Fixed CW attack memory problem
    • Efficient computation of adversarial examples in maximum-norm case

    What kind of change does this PR introduce?

    • Clear cache for CW attack (temporary fix until secml is updated to support optional caching).

    • The PGD attack is run, for each perturbation value, only on the samples that were not already found adversarial at smaller norms.

    • Other information:

      Fixes #21

    opened by maurapintor 0
  • Memory problems when running complete evaluation

    Memory problems when running complete evaluation

    Evaluation fails with some particular configurations of parameters. The reason seems to be related to cached adversarial examples.

    Expected Behavior

    The attack should not exhaust the RAM.

    Current Behavior

    The RAM fills up, then the swap memory, and then everything freezes.

    Possible Solution

    Possibly free unused data, such as the attack paths.

    Steps to Reproduce

    The evaluation fails with the following set of parameters:

    • resnet 50 net
    • imagenet data from the demo data
    • L2 CW attack

    Context (Environment)

    • OS: Ubuntu 20.04 LTS
    • Python Version: 3.8
    • Pandavision Version: 0.3
    • Browser: Mozilla Firefox
    bug enhancement 
    opened by maurapintor 0
  • fixed conflict for picker

    fixed conflict for picker

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Bug fix

    • What is the new behavior? The GUI now updates the attack selection and the perturbation size choices simultaneously.

    • Other information: Fixes #19

    opened by maurapintor 0
  • Attack selector bug

    Attack selector bug

    Attack choices not shown.

    Expected Behavior

    On the GUI, when a perturbation type is picked, the attack selector should display the attack choices for the specified perturbation model.

    Current Behavior

    The attack choices are not updated.

    Possible Solution

    Possible conflict with the jquery call that updates the perturbation values.

    bug 
    opened by maurapintor 0
  • Fix docker compose version

    Fix docker compose version

    • What kind of change does this PR introduce? (Bug fix, feature, docs update, ...) Bug fix for the Docker container. Feature: picker for perturbation size.

    • What is the new behavior?

    • docker-compose should now be at least v1.16, as it supports the YAML file format used in this repo for building the pandavision architecture.
    • The GUI now allows picking the perturbation sizes for the evaluation.
    • Other information: Fixes #14 Fixes #17
    opened by maurapintor 0
  • Docker compose problem with services key

    Docker compose problem with services key

    Docker compose file format is incompatible with old versions.

    Expected Behavior

    The command:

    docker build . -t pandavision && docker-compose build && docker-compose up
    

    should build the container and run smoothly.

    Current Behavior

    The command produces, with some Docker-compose versions, the following output:

    Successfully tagged pandavision:latest
    ERROR: The Compose file './docker-compose.yml' is invalid because: Unsupported config option for services: 'worker'

    Possible Solution

    The problem seems related to docker-compose versions that have incompatible specifications for the expected YAML: https://docs.docker.com/compose/compose-file/compose-versioning/#versioning

    A suggested solution, from this StackOverflow question, is to upgrade the docker-compose version and to specify the version number at the top of the YAML file.

    Possible Implementation

    1. Add a line stating version: "3" at the top of the YAML file.
    2. Suggest the minimum version required for docker-compose, i.e. at least 1.6, in the README file.
    opened by maurapintor 0
  • Chart x-axis based on eps values rather than order

    Chart x-axis based on eps values rather than order

    The sec-eval curve currently presents results as evenly spaced ("linspace") points. Support for scatter values should be added, so that the list of eps values can be dynamically adjusted to arbitrary ranges.

    bug enhancement 
    opened by maurapintor 0
  • GUI for security evaluations

    GUI for security evaluations

    Add a visual interface for testing the APIs. It should display at least the model and data selection, plus the results of the security evaluation when completed.

    enhancement 
    opened by maurapintor 0
  • Sequential attacks

    Sequential attacks

    I'm submitting a ...

    • feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    A multi-attack interface should be added. The interface should allow specifying a sequence of attacks to use for testing the robustness of a model. The sequence will run the first attack on the whole dataset, then run the next attack in the sequence only on the points for which the previous attacks failed under the given perturbation model.

    enhancement 
    opened by maurapintor 0
  • RobustBench models

    RobustBench models

    I'm submitting a ...

    • feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    Models from RobustBench should be available through the interface. The choice should be available next to the upload model button, where a dropdown menu should be displayed.

    enhancement 
    opened by maurapintor 0
  • Dataset samples

    Dataset samples

    I'm submitting a ...

    [x] feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    The interface should allow for selecting subsamples of commonly-used datasets without uploading them to the server. At least a sample from the following datasets should be included:

    • [ ] MNIST
    • [ ] CIFAR10
    • [ ] CIFAR100
    • [ ] ImageNet
    enhancement 
    opened by maurapintor 0
  • Feature request: other tasks

    Feature request: other tasks

    I'm submitting a ...

    • Feature request

    Other information (e.g. detailed explanation, stacktraces, related issues, suggestions how to fix, links for us to have context, eg. stackoverflow, gitter, etc)

    More use cases could be supported, as in https://gitlab.com/aloha.eu/security_evaluation. Possible use cases are:

    • detection
    • segmentation
    enhancement 
    opened by maurapintor 0
  • GPU support for container

    GPU support for container

    The GPU can currently be used by running the server and worker locally. Using a container that also works with the GPU might be beneficial for speedups and would ease installation.

    enhancement help wanted 
    opened by maurapintor 0
Releases (v0.5)
Owner
Maura Pintor
🐼 Fighting evil adversarial pandas.