Modern robots.txt Parser for Python

Robots Exclusion Protocol Parser for Python

Robots.txt parsing in Python.

Goals

  • Fetching -- helper utilities for fetching and parsing robots.txts, including checking cache-control and expires headers
  • Support for newer features -- like Crawl-Delay and Sitemaps
  • Wildcard matching -- without using regexes, no less
  • Performance -- with >100k parses per second, >1M URL checks per second once parsed
  • Caching -- utilities to help with the caching of robots.txt responses

Installation

reppy is available on PyPI:

pip install reppy

When installing from source, there are submodule dependencies that must also be fetched:

git submodule update --init --recursive
make install

Usage

Checking when pages are allowed

Two classes answer questions about whether a URL is allowed: Robots and Agent:

from reppy.robots import Robots

# This utility uses `requests` to fetch the content
robots = Robots.fetch('http://example.com/robots.txt')
robots.allowed('http://example.com/some/path/', 'my-user-agent')

# Get the rules for a specific agent
agent = robots.agent('my-user-agent')
agent.allowed('http://example.com/some/path/')

The Robots class also exposes expired and ttl properties that describe how long the response should be considered valid. A reppy.ttl policy is used to determine what that lifetime should be:

from reppy.ttl import HeaderWithDefaultPolicy

# Use the `cache-control` or `expires` headers, defaulting to 30 minutes and
# ensuring it's at least 10 minutes
policy = HeaderWithDefaultPolicy(default=1800, minimum=600)

robots = Robots.fetch('http://example.com/robots.txt', ttl_policy=policy)
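
As a rough sketch, assuming the policy above, these properties can be consulted before reusing a previously fetched response:

# `expired` reports whether the response has outlived its ttl
if robots.expired:
    # Refetch once the cached copy is no longer considered valid
    robots = Robots.fetch('http://example.com/robots.txt', ttl_policy=policy)

# `ttl` describes how long the response should be considered valid
print(robots.ttl)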

Customizing fetch

The fetch method accepts *args and **kwargs that are passed on to requests.get, allowing you to customize the way the fetch is executed:

robots = Robots.fetch('http://example.com/robots.txt', headers={...})
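
Any keyword argument requests.get understands can be forwarded this way; for example, a timeout (the values below are only illustrative):

# Forward a custom header and a request timeout to `requests.get`
robots = Robots.fetch(
    'http://example.com/robots.txt',
    headers={'User-Agent': 'my-user-agent'},
    timeout=5)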

Matching Rules and Wildcards

Both * and $ are supported for wildcard matching.

This library follows the matching described in the 1996 RFC. In the case where multiple rules match a query, the longest rule wins, as it is presumed to be the most specific.
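
A minimal sketch of the longest-match behavior, assuming the Robots.parse helper for parsing robots.txt content from a string (the rules below are illustrative):

from reppy.robots import Robots

# Two overlapping rules; the longer Allow rule is more specific
content = '''
User-agent: *
Disallow: /only-on-*
Allow: /only-on-tuesdays
'''
robots = Robots.parse('http://example.com/robots.txt', content)
robots.allowed('http://example.com/only-on-tuesdays', 'my-user-agent')  # True
robots.allowed('http://example.com/only-on-mondays', 'my-user-agent')   # False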

Checking sitemaps

The Robots class also exposes the sitemaps listed in a robots.txt:

# This property holds a list of URL strings of all the sitemaps listed
robots.sitemaps

Delay

The Crawl-Delay directive is per agent and can be accessed through that class. If none was specified, it's None:

# What's the delay my-user-agent should use
robots.agent('my-user-agent').delay

Determining the robots.txt URL

Given a URL, there's a utility to determine the URL of the corresponding robots.txt. It preserves the scheme and hostname and the port (if it's not the default port for the scheme).

# Get robots.txt URL for http://[email protected]:8080/path;params?query#fragment
# It's http://example.com:8080/robots.txt
Robots.robots_url('http://[email protected]:8080/path;params?query#fragment')

Caching

There are two cache classes provided -- RobotsCache, which caches entire reppy.Robots objects, and AgentCache, which only caches the reppy.Agent relevant to a client. These caches duck-type the class that they cache for the purposes of checking if a URL is allowed:

from reppy.cache import RobotsCache
cache = RobotsCache(capacity=100)
cache.allowed('http://example.com/foo/bar', 'my-user-agent')

from reppy.cache import AgentCache
cache = AgentCache(agent='my-user-agent', capacity=100)
cache.allowed('http://example.com/foo/bar')

Like reppy.Robots.fetch, the cache constructor accepts a ttl_policy to inform the expiration of the fetched Robots objects, as well as *args and **kwargs to be passed to reppy.Robots.fetch.
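
For instance, a sketch combining a TTL policy with extra fetch arguments (keyword names as described above; the header value is illustrative):

from reppy.cache import RobotsCache
from reppy.ttl import HeaderWithDefaultPolicy

# Default to 30 minutes, with a 10 minute floor
policy = HeaderWithDefaultPolicy(default=1800, minimum=600)

# Remaining keyword arguments are passed through to `reppy.Robots.fetch`
cache = RobotsCache(
    capacity=100,
    ttl_policy=policy,
    headers={'User-Agent': 'my-user-agent'})
cache.allowed('http://example.com/foo/bar', 'my-user-agent')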

Caching Failures

There's a piece of classic caching advice: "don't cache failures." However, it is not always appropriate. For example, if the failure is a timeout, clients may want to cache this result so that every check doesn't take a very long time.

To this end, the cache module provides a notion of a cache policy. It determines what to do in the case of an exception. The default is to cache a form of a disallowed response for 10 minutes, but you can configure it as you see fit:

from reppy.cache.policy import DefaultObjectPolicy, ReraiseExceptionPolicy
from reppy.robots import Agent

# Do not cache failures (note the `ttl=0`):
cache = AgentCache('my-user-agent', cache_policy=ReraiseExceptionPolicy(ttl=0))

# Cache and reraise failures for 10 minutes (note the `ttl=600`):
cache = AgentCache('my-user-agent', cache_policy=ReraiseExceptionPolicy(ttl=600))

# Treat failures as being disallowed for 10 minutes (the factory builds the
# object to cache in place of the failed fetch)
cache = AgentCache(
    'my-user-agent',
    cache_policy=DefaultObjectPolicy(ttl=600, factory=lambda _: Agent().disallow('/')))

Development

A Vagrantfile is provided to bootstrap a development environment (you may have to provide a --provider flag to vagrant up):

vagrant up

Alternatively, development can be conducted using a virtualenv:

virtualenv venv
source venv/bin/activate
pip install -r requirements.txt

Tests

Tests may be run in the vagrant instance:

vagrant ssh
make test

Alternatively, with the virtualenv active, tests are run with the top-level Makefile:

make test

PRs

These are not all hard-and-fast rules, but in general PRs have the following expectations:

  • pass Travis -- or more generally, whatever CI is used for the particular project
  • be a complete unit -- whether a bug fix or feature, it should appear as a complete unit before consideration.
  • maintain code coverage -- some projects may include code coverage requirements as part of the build as well
  • maintain the established style -- this means the existing style of established projects, the established conventions of the team for a given language on new projects, and the guidelines of the community of the relevant languages and frameworks.
  • include failing tests -- in the case of bugs, failing tests demonstrating the bug should be included as one commit, followed by a commit making the test succeed. This allows us to jump to a world with a bug included, and prove that our test in fact exercises the bug.
  • be reviewed by one or more developers -- not all feedback has to be accepted, but it should all be considered.
  • avoid 'addressed PR feedback' commits -- in general, PR feedback should be rebased back into the appropriate commits that introduced the change. In cases where this is burdensome, PR feedback commits may be used but should still describe the change contained therein.

PR reviews consider the design, organization, and functionality of the submitted code.

Commits

Certain types of changes should be made in their own commits to improve readability. When too many different types of changes happen simultaneously in a single commit, the purpose of each change is muddled. By giving each commit a single logical purpose, it is implicitly clear why the changes in that commit took place.

  • updating / upgrading dependencies -- this is especially true for invocations like bundle update or berks update.
  • introducing a new dependency -- often preceded by a commit updating existing dependencies, this should only include the changes for the new dependency.
  • refactoring -- these commits should preserve all the existing functionality and merely update how it's done.
  • utility components to be used by a new feature -- if introducing an auxiliary class in support of a subsequent commit, add this new class (and its tests) in its own commit.
  • config changes -- when adjusting configuration in isolation.
  • formatting / whitespace commits -- when adjusting code only for stylistic purposes.

New Features

Small new features (where small refers to the size and complexity of the change, not the impact) are often introduced in a single commit. Larger features or components might be built up piecewise, with each commit containing a single part of it (and its corresponding tests).

Bug Fixes

In general, bug fixes should come in two-commit pairs: a commit adding a failing test demonstrating the bug, and a commit making that failing test pass.

Tagging and Versioning

Whenever the version included in setup.py is changed (and it should be changed when appropriate using http://semver.org/), a corresponding tag should be created with the same version number (formatted as v followed by the version, e.g. v0.1.0).

git tag -a v0.1.0 -m 'Version 0.1.0

This release contains an initial working version of the `crawl` and `parse`
utilities.'

git push --tags origin