Parsel lets you extract data from XML/HTML documents using XPath or CSS selectors.

Overview

Parsel

Parsel is a BSD-licensed Python library to extract and remove data from HTML and XML using XPath and CSS selectors, optionally combined with regular expressions.

Find the Parsel online documentation at https://parsel.readthedocs.org.

Example (open online demo):

>>> from parsel import Selector
>>> selector = Selector(text=u"""<html>
        <body>
            <h1>Hello, Parsel!</h1>
            <ul>
                <li><a href="http://example.com">Link 1</a></li>
                <li><a href="http://scrapy.org">Link 2</a></li>
            </ul>
        </body>
        </html>""")
>>> selector.css('h1::text').get()
'Hello, Parsel!'
>>> selector.xpath('//h1/text()').re(r'\w+')
['Hello', 'Parsel']
>>> for li in selector.css('ul > li'):
...     print(li.xpath('.//@href').get())
http://example.com
http://scrapy.org
Comments
  • Add attributes dict accessor

    It's often useful to get the attributes of the underlying elements, and Parsel currently doesn't make it obvious how to do that.

    Currently, the first instinct is to use XPath, which is a bit awkward because you need a trick like the name(@*[i]) one described in this blog post.

    This PR proposes adding two methods .attrs() and .attrs_all() (mirroring get and getall) for getting attributes in a way that more or less made sense to me. What do you think?
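
    For context, here is what attribute access looks like with plain XPath today, plus a rough illustration of what the proposed accessors would return (the .attrs()/.attrs_all() names are this PR's proposal, not an existing Parsel API):

    from parsel import Selector

    sel = Selector(text='<a href="http://example.com" rel="nofollow">link</a>')

    # Plain XPath today: attribute values are easy, name/value pairs are not.
    print(sel.xpath('//a/@href').get())   # 'http://example.com'
    print(sel.xpath('//a/@*').getall())   # ['http://example.com', 'nofollow'] -- values only

    # Roughly what the proposed accessors would return (hypothetical API):
    # sel.css('a').attrs()      -> {'href': 'http://example.com', 'rel': 'nofollow'}
    # sel.css('a').attrs_all()  -> one such dict per matched element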

    opened by eliasdorneles 18
  • [MRG+1] Add has-class xpath extension function

    This is a POC implementation of what's been discussed in #13

    The benchmark, extended to include this implementation of has-class, is available here. To summarize its results: throughout this morning I have seen custom XPath function implementations (except the old has-class) taking 150-200% of the css->xpath approach's time in the first test, while the last case, sel.xpath('//*[has-class("story")]//a'), consistently took only 60% of the time of sel.css('.story').xpath('.//a').

    But right now I'm seeing a different situation:

    $ python bench.py 
    sel.css(".story")                                                       0.654  1.000
    sel.xpath("//*[has-class-old('story')]")                               12.256 18.737
    sel.xpath("//*[has-class-set('story')]")                                1.907  2.915
    sel.xpath("//*[has-one-class('story')]")                                1.715  2.623
    sel.xpath("//*[has-class-plain('story')]")                              1.770  2.706
    
    
    sel.css("article.story")                                                0.201  1.000
    sel.xpath("//article[has-class-old('story')]")                          1.219  6.072
    sel.xpath("//article[has-class-set('story')]")                          0.314  1.566
    sel.xpath("//article[has-one-class('story')]")                          0.292  1.454
    sel.xpath("//article[has-class-plain('story')]")                        0.299  1.490
    
    
    sel.css("article.theme-summary.story")                                  0.192  1.000
    sel.xpath("//article[has-class-old('theme-summary', 'story')]")         1.288  6.699
    sel.xpath("//article[has-class-set('theme-summary', 'story')]")         0.266  1.384
    sel.xpath("//article[has-class-plain('theme-summary', 'story')]")       0.247  1.284
    
    
    sel.css(".story").xpath(".//a")                                         1.798  1.000
    sel.xpath("//*[has-class-old('story')]//a")                            11.995  6.671
    sel.xpath("//*[has-class-set('story')]//a")                             1.944  1.081
    sel.xpath("//*[has-one-class('story')]//a")                             1.747  0.972
    sel.xpath("//*[has-class-plain('story')]//a")                           1.778  0.989
    
    opened by immerrr 16
  • [MRG+1] Correct build/test fail with no module names 'tests'

    When working on the Debian package of parsel, I got the following error due to the 'tests' module not being found. This commit corrects it.

    Traceback (most recent call last):
      File "setup.py", line 54, in <module>
        test_suite='tests',
      File "/usr/lib/python2.7/distutils/core.py", line 151, in setup
        dist.run_commands()
      File "/usr/lib/python2.7/distutils/dist.py", line 953, in run_commands
        self.run_command(cmd)
      File "/usr/lib/python2.7/distutils/dist.py", line 972, in run_command
        cmd_obj.run()
      File "/usr/lib/python2.7/dist-packages/setuptools/command/test.py", line 159, in run
        self.with_project_on_sys_path(self.run_tests)
      File "/usr/lib/python2.7/dist-packages/setuptools/command/test.py", line 140, in with_project_on_sys_path
        func()
      File "/usr/lib/python2.7/dist-packages/setuptools/command/test.py", line 180, in run_tests
        testRunner=self._resolve_as_ep(self.test_runner),
      File "/usr/lib/python2.7/unittest/main.py", line 94, in __init__
        self.parseArgs(argv)
      File "/usr/lib/python2.7/unittest/main.py", line 149, in parseArgs
        self.createTests()
      File "/usr/lib/python2.7/unittest/main.py", line 158, in createTests
        self.module)
      File "/usr/lib/python2.7/unittest/loader.py", line 130, in loadTestsFromNames
        suites = [self.loadTestsFromName(name, module) for name in names]
      File "/usr/lib/python2.7/unittest/loader.py", line 91, in loadTestsFromName
        module = __import__('.'.join(parts_copy))
    ImportError: No module named tests
    E: pybuild pybuild:274: test: plugin distutils failed with: exit code=1: python2.7 setup.py test 
    dh_auto_test: pybuild --test -i python{version} -p 2.7 returned exit code 13
    

    Cheers

    opened by ghantoos 15
  • Added text_content() method to selectors.

    I recently needed to extract the text contents of HTML nodes as plain old strings, ignoring nested tags and extra spaces.

    While that wasn't hard, it is a common operation that should be built into Scrapy.
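
    For reference, here is one way to get a similar plain-text result with the existing API; this is only a sketch of the use case, not the method added by this PR:

    from parsel import Selector

    sel = Selector(text='<p>Hello   <b>nested</b>\n world</p>')

    # string(.) concatenates the text of all descendants; normalize-space()
    # collapses runs of whitespace, giving a plain-old-string result.
    print(sel.xpath('normalize-space(string(//p))').get())   # 'Hello nested world'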

    opened by paulo-raca 13
  • updating classifiers in setup.py

    Hey fellows,

    This fixes the pre-alpha classifier (see #18) and also adds Topic :: Text Processing :: Markup.

    We can consider it stable since v1.0; Scrapy already depends on the current API.

    Also, maybe we could add the more specific Topic :: Text Processing :: Markup :: HTML and Topic :: Text Processing :: Markup :: XML too (see list of classifiers here).

    What do you think?

    Thanks!

    opened by eliasdorneles 13
  • Docstrings and autodocs for API reference

    Hey, fellows!

    Here is a change to use docstrings + autodocs for the API reference, making it easier to keep it in sync with the code. This fixes #16

    Does this look good?

    opened by eliasdorneles 12
  • [MRG+1] exception handling was hiding original exception

    Any xpath error was caught and reraised as a ValueError complaining about an Invalid XPath, quoting the original xpath for debugging purposes.

    First, "Invalid XPath" is misleading because the same exception is also raised for xpath evaluation errors. However it also hides the original exception message which ends up making xpath debugging harder. I made it quote the original exception message too which can be "Unregisted function", "Invalid Expression", "Undefined namespace prefix" etc.

    Now, before merging: any exception can occur during xpath evaluation, because any Python function can be registered and called in an xpath. I doubt anyone wraps xpath method calls in user code in other try/except blocks for "ValueError". Even if somebody actually does this, I bet it's for logging some custom error message, and that can't justify keeping the current try/except block. That's why I'm leaning more towards dropping the try/except altogether. However, I opened this PR instead because I doubt you'd accept dropping it.
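
    To make the point concrete, here is a small standalone sketch (using lxml directly, not Parsel's internals) of quoting the original message when an XPath error occurs:

    from lxml import etree

    tree = etree.fromstring('<root/>')
    query = '//*[bad-func()]'

    try:
        tree.xpath(query)
    except etree.XPathError as exc:
        # Quoting the original lxml message keeps the real cause visible,
        # unlike a bare "Invalid XPath: ..." that hides it.
        print(f'ValueError: XPath error: {exc} in {query!r}')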

    opened by Digenis 11
  • SelectorList re_first and regex optimizations

    I implemented https://github.com/scrapy/parsel/issues/52 and also did some optimization on the regex functions. I added pytest-benchmark to the project and created benchmarking tests to cover the regex stuff.

    It probably needs to be cleaned up a bit before merging. Also, I'm not sure whether you want to include benchmarks in the project; if not, I can create a branch without the benchmarks.

    Running the benchmarks

    To run the benchmarks with tox:

    tox -e benchmark
    

    You can also run the benchmarks with py.test directly in order to e.g. compare results.

    py.test --benchmark-only --benchmark-max-time=5
    

    Speedup

    I ran my tests by comparing the following branches. https://github.com/Tethik/parsel/tree/re_benchmark_tests https://github.com/Tethik/parsel/tree/re_opt_with_benchmarks (source of the pull request)

    git checkout re_benchmark_tests
    py.test --benchmark-only --benchmark-max-time=20 --benchmark-save=without
    

    Then compared with the opt branch.

    git checkout re_opt_with_benchmarks
    py.test --benchmark-only --benchmark-max-time=20 --benchmark-compare=<num_without>
    

    Sample benchmark results found in the following gist. "NOW" is the optimized version and "0014" is the current version with a naïve re_first implementation. https://gist.github.com/Tethik/8885c5c349c8922467b31a22078baf48

    Loosely interpreting the results, I get up to a 2.5× speedup on the re_first function, and also a smaller improvement on the re function.
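
    Purely to illustrate the kind of optimization involved (this is a sketch, not the actual patch): re_first can stop at the first match instead of building the full list of matches and taking its first element.

    import re

    def re_first_sketch(texts, pattern, default=None):
        # Short-circuit: search each extracted string and return on the first hit,
        # instead of running the regex over everything and slicing the result.
        compiled = re.compile(pattern)
        for text in texts:
            match = compiled.search(text)
            if match:
                return match.group(1) if match.groups() else match.group()
        return default

    print(re_first_sketch(['---', 'Hello, Parsel!'], r'(\w+)'))   # 'Hello'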

    opened by Tethik 10
  • [MRG+1] Fix has-class to deal with newlines in class names

    The has-class() XPath function fails to select elements by class when there's a \n character right after the class name we're looking for. For example:

    >>> import parsel
    >>> html = '''<p class="foo
    bar">Hello</p>
    '''
    >>> parsel.Selector(text=html).xpath('//p[has-class("foo")]/text()').get() is None
    True
    

    Such (broken?) elements are not that uncommon around the web, and they break has-class's expected behavior (at least from my point of view).
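
    A minimal sketch of the whitespace issue at the string level (not the exact patch): splitting on any whitespace handles the newline-separated classes, while a single-space split does not.

    class_attr = 'foo\nbar'

    print(class_attr.split(' '))   # ['foo\nbar']   -- the newline keeps the two classes glued
    print(class_attr.split())      # ['foo', 'bar'] -- any-whitespace split sees both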

    Any thoughts on it?

    opened by stummjr 9
  • Caching css_to_xpath()'s recently used patterns to improve efficiency

    I profiled the scrapy-bench spider which uses response.css() for extracting information.

    The profiling results are here. The function css_to_xpath() takes 5% of the total time.

    When response.xpath() was used (profiling result), the number of items extracted per second (benchmark result) was higher.

    Hence, I'm proposing caching for recently used patterns, so that the function takes less time. I'm working on a prototype and will add its results too.
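
    A minimal sketch of the kind of caching being proposed, using functools.lru_cache around cssselect's translator (the names and cache size are illustrative, not Parsel's implementation):

    from functools import lru_cache

    from cssselect import HTMLTranslator

    @lru_cache(maxsize=256)
    def cached_css_to_xpath(query):
        # Repeated response.css(query) calls reuse the translated expression
        # instead of re-running the CSS-to-XPath translation every time.
        return HTMLTranslator().css_to_xpath(query)

    print(cached_css_to_xpath('ul > li a'))
    print(cached_css_to_xpath.cache_info())   # a second identical call would be a cache hit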

    opened by Parth-Vader 9
  • Add .get() and .getall() aliases

    In quite a few projects, I've been using a .get() alias for .extract_first(), since most of the time getting the first match is what I want. To me, .extract_first() feels a bit long to write (I'm probably getting lazy with age...)

    For cases where I do need to loop on results, I added a .getall() alias for .extract() on .xpath() and .css() calls results.

    I know there's already been quite a bit of discussion about having .extract_first() in the first place, but I'm submitting my preference again.

    opened by redapple 9
  • Improve typing in parsel.selector._ctgroup

    parsel.selector._ctgroup, used to switch between mode implementations, is an untyped dict of dicts; as it's a private variable, it makes sense to change it into something cleaner.

    enhancement 
    opened by wRAR 1
  • Modernize SelectorList-related code

    There were multiple issues identified with the code of SelectorList itself, Selector.selectorlist_cls and their typing. Ideally:

    • SelectorList should only be able to contain Selector objects
    • SelectorList subclasses made to work with Selector subclasses should only be able to contain those
    • Selector subclasses shouldn't need to set selectorlist_cls to a respective SelectorList subclass manually
    • all of this should be properly typed without need for casts and other overrides

    This may require changing the Selector and/or SelectorList base classes, but I think we will need to keep API compatibility? It's also non-trivial because the API for subclassing them doesn't seem to be documented; the only reference is SelectorTestCase.test_extending_selector() (the related code was also changed when adding typing, and I'm not sure if that changed the interface).
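
    One possible direction, sketched with generics (a hypothetical illustration, not a decision on the final design): make the list type generic over the Selector type it contains, so subclasses get the containment constraint and the typing without manual wiring.

    from typing import List, TypeVar

    class BaseSelector:
        def get(self) -> str:
            return ''

    SelectorT = TypeVar('SelectorT', bound=BaseSelector)

    class TypedSelectorList(List[SelectorT]):
        # Only holds SelectorT instances; a Selector subclass pairs with
        # TypedSelectorList[ThatSubclass] without setting selectorlist_cls by hand.
        def getall(self) -> List[str]:
            return [sel.get() for sel in self]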

    enhancement 
    opened by wRAR 0
  • Adding a `strip` kwarg to `get()` and `getall()`

    Hi,

    Thank you very much for this excellent library ❤️

    I've been using Parsel for a while and I constantly find myself calling .strip() after .get() or .getall(). I think it would be very helpful if Parsel provided a built-in mechanism for that.

    I suggest adding a strip kwarg to get() and getall(). It would be a boolean value, and when it's true, Parsel would call strip() on every match.

    Example with get():

    # Before
    author = selector.css("[itemprop=author] [itemprop=name]::text").get()
    if author:
       author = author.strip()
    
    # After
    author = selector.css("[itemprop=author] [itemprop=name]::text").get(strip=True)
    

    Example with getall():

    # Before
    authors = [author.strip() for author in selector.css("[itemprop=author] [itemprop=name]::text").getall()]
    
    # After
    authors = selector.css("[itemprop=author] [itemprop=name]::text").getall(strip=True)
    

    Alternatively, we could change the ::text pseudo-element to support an argument, like ::text(strip=1). That would be extremely handy too and probably more flexible than my original suggestion, but also more difficult to implement.

    I know I could strip whitespaces with re() and re_first() but it's overkill and hides the intent.

    Best regards, Benoit

    enhancement 
    opened by bblanchon 0
  • Discussion on implementing selectolax support

    Here are some of the changes I thought of implementing

    High level changes -

    1. The Selector class takes a new argument, "parser", which indicates which parser backend to use (lxml or selectolax).
    2. Selectolax itself provides two backends, Lexbor and Modest; by default it uses Modest. Should additional support for Lexbor be added? We could use Modest by default and have users pass an argument if they want Lexbor.
    3. If the "parser" argument is not provided, lxml will be used by default, since this preserves the current behavior and keeps backward compatibility. It also allows the existing test suite to be used without changes to all the existing methods.
    4. If the xpath method is called on a selector instantiated with selectolax as the parser, raise NotImplementedError.

    Low level changes -

    1. Add selectolax to the list of parsers in _ctgroup and modify create_root_node to instantiate the selected parser with the provided data.
    2. Modify the behavior of the xpath and css methods to use both selectolax and lxml, or write separate methods or classes to handle them.
    3. Utilize the HTMLParser class in selectolax and its css method to apply the specified CSS expression and return the collected data.
    4. Create a SelectorList with Selector objects created with the specified type and parser.

    This is still a work in progress and I will make a lot of changes. Please suggest any changes that need to be made to the current list.
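
    For readers unfamiliar with selectolax, here is a standalone sketch of the HTMLParser/css API the low-level changes would delegate to (the "parser" argument itself is only a proposal at this point):

    from selectolax.parser import HTMLParser   # Modest backend by default

    html = '<ul><li class="item">a</li><li class="item">b</li></ul>'

    # Roughly what a selectolax-backed css() call could delegate to.
    tree = HTMLParser(html)
    print([node.text() for node in tree.css('li.item')])   # ['a', 'b']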

    opened by deepakdinesh1123 7
  • Support JSONPath

    Support for JSONPath has been added with the jsonpath-ng library. Most of the implementation is based on #181, which adds support for JSON using the JMESPath library.

    closes #204
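
    For context, a standalone sketch of the jsonpath-ng library this PR builds on (how the result is surfaced through Selector is up to the PR itself):

    from jsonpath_ng import parse

    data = {'links': [{'href': 'http://example.com'}, {'href': 'http://scrapy.org'}]}

    # jsonpath-ng on its own: compile an expression, then collect matched values.
    expression = parse('links[*].href')
    print([match.value for match in expression.find(data)])
    # ['http://example.com', 'http://scrapy.org']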

    opened by deepakdinesh1123 5
Releases(v1.7.0)
  • v1.7.0(Nov 1, 2022)

    • Add PEP 561-style type information
    • Support for Python 2.7, 3.5 and 3.6 is removed
    • Support for Python 3.9-3.11 is added
    • Very large documents (with deep nesting or long tag content) can now be parsed, and Selector now takes a new argument huge_tree to disable this
    • Support for new features of cssselect 1.2.0 is added
    • The Selector.remove() and SelectorList.remove() methods are deprecated and replaced with the new Selector.drop() and SelectorList.drop() methods, which don't delete text after the dropped elements when used in HTML mode.
    Source code(tar.gz)
    Source code(zip)
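
    A short sketch of the remove()-to-drop() change noted above, assuming Parsel 1.7+ (exact output whitespace may differ slightly):

    from parsel import Selector

    sel = Selector(text='<div><p>keep</p><span>drop me</span> trailing text</div>')

    # drop() removes the matched elements but, unlike the deprecated remove(),
    # keeps the text that follows them in HTML mode.
    sel.css('span').drop()
    print(sel.css('div').get())   # roughly '<div><p>keep</p> trailing text</div>'
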
  • v1.6.0(May 7, 2020)

    • Python 3.4 is no longer supported
    • New Selector.remove() and SelectorList.remove() methods to remove selected elements from the parsed document tree
    • Improvements to error reporting, test coverage and documentation, and code cleanup
    Source code(tar.gz)
    Source code(zip)
  • v1.3.1(Dec 28, 2017)

    • has-class XPath extension function;
    • parsel.xpathfuncs.set_xpathfunc is a simplified way to register XPath extensions;
    • Selector.remove_namespaces now removes namespace declarations;
    • Python 3.3 support is dropped;
    • make htmlview command for easier Parsel docs development.
    • CI: PyPy installation is fixed; parsel now runs tests for PyPy3 as well.

    1.3.1 was released shortly after 1.3.0 to fix a PyPI upload issue.

    Source code(tar.gz)
    Source code(zip)
  • v1.2.0(May 17, 2017)

    • Add get() and getall() methods as aliases for extract_first and extract respectively
    • Add default value parameter to SelectorList.re_first method
    • Add Selector.re_first method
    • Bug fix: detect None result from lxml parsing and fallback with an empty document
    • Rearrange XML/HTML examples in the selectors usage docs
    • Travis CI:
      • Test against Python 3.6
      • Test against PyPy using "Portable PyPy for Linux" distribution
    Source code(tar.gz)
    Source code(zip)
  • v1.1.0(Nov 22, 2016)

    • Change the default HTML parser to lxml.html.HTMLParser, which makes it easier to use some HTML-specific features
    • Add css2xpath function to translate CSS to XPath
    • Add support for ad-hoc namespaces declarations
    • Add support for XPath variables
    • Documentation improvements and updates
    Source code(tar.gz)
    Source code(zip)
  • v1.0.3(Jul 29, 2016)

    • Add BSD-3-Clause license file
    • Re-enable PyPy tests
    • Integrate py.test runs with setuptools (needed for Debian packaging)
    • Changelog is now called NEWS
    Source code(tar.gz)
    Source code(zip)
  • v1.0.2(Apr 26, 2016)

  • v1.0.1(Sep 3, 2015)

  • v1.0.0(Sep 3, 2015)

  • v0.9.3(Aug 7, 2015)

  • v0.9.2(Aug 7, 2015)

  • v0.9.1(Aug 7, 2015)

  • v0.9.0(Jul 30, 2015)

    Released the first version of Parsel -- a library extracted from the Scrapy project that lets you extract text from XML/HTML documents using XPath or CSS selectors.

    Source code(tar.gz)
    Source code(zip)
Owner
Scrapy project
An open source and collaborative framework for extracting the data you need from websites. In a fast, simple, yet extensible way.