PostQF

Copyright © 2022 Ralph Seichter

Overview

PostQF is a user-friendly Postfix queue data filter which operates on data produced by postqueue -j. See the subsection titled "JSON object format" in the postqueue manual page for details. PostQF offers convenient features for analysis and cleanup of Postfix mail queues.

I have used the all-purpose JSON manipulation utility "jq" before, but found it inconvenient for everyday Postfix administration tasks. "jq" offers great flexibility and handles all sorts of JSON input, but it comes at the cost of complexity. PostQF is an alternative specifically tailored for easier access to Postfix queues.

To facilitate the use of Unix-like pipelines, PostQF usually reads from stdin and writes to stdout. Using command line arguments, you can override this behaviour and define one or more input files and/or an output file. Depending on the context, a horizontal dash - represents either stdin or stdout. See the command line usage description below.
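For example, assuming queue data was saved to a file earlier (the file name below is purely illustrative), PostQF can read that file in place of stdin and still write matching records to stdout:

postqf -q deferred /tmp/queue.json -o -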

Example usage

Find all messages in the deferred queue where the delay reason contains the string connection timed out.

postqueue -j | postqf -q deferred -d 'connection timed out'

Find all messages in the active or hold queues which have at least one recipient in the example.com or example.org domains, and write the matching JSON records into the file /tmp/output.

postqueue -j | postqf -q 'active|hold' -r '@example\.(com|org)' -o /tmp/output

Find all messages in all queues for which the sender address is alice@gmail.com or bob@gmail.com, and pipe the queue IDs to postsuper in order to place the matching messages on hold.

postqueue -j | postqf -s '^(alice|bob)@gmail\.com$' -i | postsuper -h -

Print the number of messages which arrived during the last 30 minutes.

postqueue -j | postqf -a 30m | wc -l

The final example assumes a directory /tmp/data with several files, each containing JSON output produced at some previous time. The command writes all queue IDs which have ever been in the hold queue to the file idlist, relying on BASH wildcard expansion to generate the list of input files.

postqf -i -q hold /tmp/data/*.json > idlist

Filters

Queue entries can be easily filtered by

  • Arrival time
  • Delay reason
  • Queue name
  • Recipient address
  • Sender address

and combinations thereof, using regular expressions. Anchoring is optional, meaning that plain text is treated as a substring pattern.
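Filters can be freely combined. As an illustration (the domain and reason text below are placeholders), the following command matches deferred messages from example.com senders whose delay reason mentions "refused":

postqueue -j | postqf -q deferred -s '@example\.com$' -d 'refused'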

The arrival time filters do not use regular expressions, but a human-readable representation of a time difference instead. The format is a whole number W (i.e. an integer ≥ 0) followed immediately by a single unit letter, without spaces. The unit is one of s, m, h, d (seconds, minutes, hours, days).

-b 3d and -a 90m are both examples of valid command line arguments. Note that arrival filters are interpreted relative to the time PostQF is run. The two examples signify "message arrived more than 3 days ago" (before timestamp) and "message arrived less than 90 minutes ago" (after timestamp), respectively.
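Combined with ID output, arrival filters enable cleanup one-liners. The following sketch would delete every message that has been queued for more than 3 days; deletion is irreversible, so inspect the matching records without -i first:

postqueue -j | postqf -b 3d -i | postsuper -d -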

Command line usage

postqf [-h] [-i] [-d [REGEX]] [-q [REGEX]] [-r [REGEX]] [-s [REGEX]]
       [-a [TS] | -b [TS]] [-o [PATH]] [PATH [PATH ...]]

positional arguments:
  PATH        Input file. Use a dash "-" for standard input.

optional arguments:
  -h, --help  show this help message and exit
  -i          ID output only
  -o [PATH]   Output file. Use a dash "-" for standard output.

Regular expression filters:
  -d [REGEX]  Delay reason filter
  -q [REGEX]  Queue name filter
  -r [REGEX]  Recipient address filter
  -s [REGEX]  Sender address filter

Arrival time filters (mutually exclusive):
  -a [TS]     Message arrived after TS
  -b [TS]     Message arrived before TS

Installation

The only installation requirement is Python 3.7 or newer. PostQF is distributed via PyPI.org and can usually be installed using pip. If this fails, or if both Python 2.x and 3.x are installed on your machine, use pip3 instead.

If possible, use the recommended installation with a Python virtual environment. Site-wide installation usually requires root privileges.

# Recommended: Installation using a Python virtual environment.
mkdir ~/postqf
cd ~/postqf
python3 -m venv .venv
source .venv/bin/activate
pip install -U pip postqf
# Alternative: Site-wide installation, requires root access.
sudo pip install postqf

The pip installation process adds a launcher executable postqf, either site-wide or in the Python virtual environment. In the latter case, the launcher is placed into the directory .venv/bin, which is added to your PATH variable when you activate the venv environment as shown above.
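A quick way to verify the installation (inside the activated virtual environment, if you used one) is to display the built-in help:

postqf --help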

Contact

The project is hosted on GitHub in the rseichter/postqf repository. If you have suggestions or run into any problems, please check the discussions section first. There is also an issue tracker available, and the build configuration file contains a contact email address.


Comments
  • Permit using "before" and "after" time filters at the same time

    The command line arguments -a and -b are mutually exclusive as of release 0.1. If using both at the same time were permitted, users could express an interval, allowing searches for "message arrived between timestamps X and Y".

    enhancement 
    opened by rseichter 1
  • Support absolute time for before/after filter arguments

    Command line arguments -a and -b currently allow only passing a time difference like 45m or 3d. It would be helpful to also support strings representing absolute points in time. Here is an example of how it might look when using the ISO 8601 format:

    $ date --iso-8601=s
    2022-01-23T22:10:56+01:00

    $ postqf -b '2022-01-23T22:10:56+01:00'

    It would also be useful to allow passing epoch time arguments, because postqueue -j returns message arrival times as seconds since the Epoch.

    enhancement 
    opened by rseichter 1
Releases
  • 0.5 (Feb 6, 2022)

    In addition to filtering JSON input and producing JSON output in the process, PostQF can now also generate a number of simple reports to answer some frequently asked questions about message queue content. The following data can be shown in reports:

    • Delay reason
    • Recipient address
    • Recipient domain
    • Sender address
    • Sender domain
  • 0.4 (Feb 2, 2022)

  • 0.3 (Jan 28, 2022)

    • Output is now correctly rendered as JSON instead of a Python dict.
    • Simplified installation process. In addition to pip-based setup, an installation BASH script is now provided.
  • 0.2 (Jan 24, 2022)

    • Release 0.2 introduces the ability to use both -a and -b time filters simultaneously, in order to specify time intervals.
    • Time filter strings can now use ISO 8601 strings and Unix time in addition to relative time differences expressed in the form 42m or 2d. This allows users to also specify absolute points in time as arrival thresholds.
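
    Assuming the syntax proposed in the issue above was adopted, an absolute arrival threshold might look like this (the timestamp is illustrative):

    postqueue -j | postqf -b '2022-01-23T22:10:56+01:00' -i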
  • 0.1 (Jan 23, 2022)
