osqueryIR

osqueryIR is an artifact collection tool for Linux systems. It provides the following capabilities:

  • Execute osquery SQL queries
  • Collect files and folders
  • Execute system commands
  • Parse log files (e.g. nginx, auth, syslog) using regex

Try it

  1. Clone this repo

    git clone https://github.com/abdulrhmanalfaifi/osqueryIR
  2. Install the Python dependencies

    python3 -m pip install -r requirments.txt
  3. Try it using this command

    python3 osqueryIR.py -h

Usage

The following is the help message for osqueryIR:

usage: osqueryIR.py [-h] [--osquery-binary OSQUERY_BINARY] [-c CONFIG]
                    [-o OUTPUT] [-q] [--log-file-name LOG_FILE_NAME]
                    [--log-level {CRITICAL,ERROR,WARNING,INFO,DEBUG}]
                    [--output-format {jsonl,kjson}] [--disable-collect]

A Linux artifact collection tool

optional arguments:
  -h, --help            show this help message and exit
  --osquery-binary OSQUERY_BINARY
                        osqueryd binary path (Default=./osqueryd)
  -c CONFIG, --config CONFIG
                        Path to the configuration file (Default=./config.yaml)
  -o OUTPUT, --output OUTPUT
                        Change the output folder name (Defaults to the machine
                        hostname)
  -q, --quiet           Do not print log messages
  --log-file-name LOG_FILE_NAME
                        Name of the log file (Default=osqueryIR_log)
  --log-level {CRITICAL,ERROR,WARNING,INFO,DEBUG}
                        Set logging level (Default=INFO)
  --output-format {jsonl,kjson}
                        Change the output format (Default=jsonl)
  --disable-collect     Disable collection artifacts

  • --osquery-binary: path to the osqueryd binary. By default it uses the binary included in this repo.

  • -c or --config: path to the osqueryIR configuration file. By default it is config.yaml in this repo.

  • -o or --output: name of the output file (a zip archive). By default it is the machine hostname.

  • -q or --quiet: do not print log messages to stdout. osqueryIR always writes the log to the output file.

  • --log-file-name: change the default name of the log file (osqueryIR_log).

  • --log-level: set the log level (default: INFO).

  • --output-format: osqueryIR supports writing the results in two different formats (see the sketch after this list):

    • jsonl: newline-separated JSON objects. Each object represents one record.
    • kjson: the format understood by Kuiper. If you are planning to use Kuiper for analysis, use this format.
  • --disable-collect: disable artifact collection. Only parsing and osquery artifacts will be acquired.
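
As an illustration of the jsonl format, the following minimal Python sketch parses two made-up records (the field names and values are assumptions for illustration, not actual osqueryIR output):

import json

# Two example records in jsonl form (illustrative field names and values,
# not actual osqueryIR output)
jsonl_output = (
    '{"user": "root", "tty": "pts/0", "@timestamp": "2022-01-10T12:34:56"}\n'
    '{"user": "ubuntu", "tty": "pts/1", "@timestamp": "2022-01-10T12:40:02"}\n'
)

# jsonl: one JSON object per line, each line is one independent record
for line in jsonl_output.splitlines():
    record = json.loads(line)
    print(record["user"], record["@timestamp"])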

Configuration

osqueryIR accepts a configuration file that contains the artifact specifications. The following is an example configuration along with comments:

artifacts:  
    # Name of the artifact. The results will be saved to a file with this name
  - logged_in_users:
      # artifact type. queries run osquery SQL queries and return the results as json
      queries:
        - 'select * from logged_in_users'
      # Optional: map the field `time` to `@timestamp` and run the modifier `epoch_to_iso` on its value. The `modifier` field is not required
      maps:
        - name: time
          map_to: '@timestamp'
          modifier: epoch_to_iso
      # Optional: description of the artifact
      description: 'Collect and parse the currently logged-in users'
  - logs:
      # artifact type. collect the specified files and directories without parsing
      collect:
        - '/var/log/**'
        - '/home/*/.vnc/*.log'
      description: 'Collect logs from well-known paths'
  - auth_log:
      # artifact type. parse the specified files using regex and return the results as json.
      parse:
        # files to parse
        path: '/var/log/auth.log*'
        # regex used for parsing
        regex: '([A-Z][a-z]{2}[ ]{1,}[0-9]{1,2}[ ]{1,2}[0-9]{1,2}:[0-9]{2}:[0-9]{2}) ([a-zA-Z0-9_\-]+) ([a-zA-Z0-9_\-\]\(\)=\./]+)\[?([0-9]+)?\]?: (.*)'
        # the name of the extracted fields
        fields:
          - 'time'
          - 'hostname'
          - 'service'
          - 'pid'
          - 'msg'
      maps:
        - name: time
          map_to: '@timestamp'
          modifier: time_without_year_to_iso
      description: 'Parse auth logs from the path /var/log/, and return the results as jsonl/kjson'
  - bad_logins:
      # artifact type. Execute a system command and return stdout & stderr
      command:
        - 'lastb'

Example

To collect the artifacts defined in the provided configuration, execute the following command:

python3 osqueryIR.py

A file named {HOSTNAME}.zip will be created containing all collected artifacts.
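
To inspect the archive, a small Python sketch such as the following can list its contents; the assumption that parsed and query results are stored as .jsonl members inside the zip is for illustration only:

import json
import socket
import zipfile

# By default the archive is named after the machine hostname. The .jsonl
# member layout assumed below is illustrative, not a documented guarantee.
archive = f"{socket.gethostname()}.zip"

with zipfile.ZipFile(archive) as zf:
    for name in zf.namelist():
        if name.endswith(".jsonl"):
            with zf.open(name) as member:
                records = [json.loads(line) for line in member if line.strip()]
            print(f"{name}: {len(records)} records")
        else:
            print(name)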

Using osqueryIR with Kuiper

osqueryIR can generate results in the kjson format, which can be ingested by Kuiper. To collect artifacts in kjson format, execute the following command:

python3 osqueryIR.py --output-format kjson --disable-collect

Upload the generated file to Kuiper and run the kjson parser.

(Screenshot: osqueryIR results in Kuiper)

Owner

AbdulRhman Alfaifi