Snowflake Stage Exporter

Overview

Snowflake database loading utility with Scrapy integration.
Meant for streaming ingestion of JSON-serializable objects into Snowflake stages and tables.

Installation

> pip install git+https://github.com/hermit-crab/snowflake-stage-exporter.git

Basic example

from snowflake_stage_exporter import SnowflakeStageExporter

with SnowflakeStageExporter(
    user='...',
    password='...',
    account='...',
    table_path='MY_DATABASE.PUBLIC.{item_type_name}',
) as exporter:
    exporter.export_item({'name': 'Jack', 'salary': 100}, item_type_name='employee')
    exporter.export_item({'name': 'Sal', 'salary': 90, 'extra_info': {'age': 20}}, item_type_name='employee')
    exporter.export_item({'title': 'Steel Grill', 'price': 5.5}, item_type_name='product')
    exporter.finish_export()  # flushes all stage buffers, creates the tables and populates them from the staged files

After you call finish_export(), two tables will be created: EMPLOYEE (2 rows, 3 columns) and PRODUCT (1 row, 2 columns), located inside the database MY_DATABASE and the schema PUBLIC (Snowflake's default schema).
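
To sanity-check the result you can query the created tables directly with the plain Snowflake connector. A minimal sketch, assuming the same credentials and table names as the basic example above (this is just an illustrative check, not part of the exporter API):

import snowflake.connector

conn = snowflake.connector.connect(user='...', password='...', account='...')
try:
    cur = conn.cursor()
    cur.execute('SELECT COUNT(*) FROM MY_DATABASE.PUBLIC.EMPLOYEE')
    print(cur.fetchone())  # expected: (2,)
    cur.execute('SELECT COUNT(*) FROM MY_DATABASE.PUBLIC.PRODUCT')
    print(cur.fetchone())  # expected: (1,)
finally:
    conn.close()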

The same result can be achieved via the Scrapy integration (see the Configuration (Scrapy) section below).
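
Below is a minimal sketch of the Scrapy side, assuming SNOWFLAKE_TABLE_PATH is set to 'MY_DATABASE.PUBLIC.{item_type_name}' and that the exporter has been enabled for the project; the exact extension / pipeline entry is not shown in this README, so check the project source for it. The item class names drive {item_type_name} via type(item).__name__:

import scrapy

class Employee(scrapy.Item):
    name = scrapy.Field()
    salary = scrapy.Field()
    extra_info = scrapy.Field()

class Product(scrapy.Item):
    title = scrapy.Field()
    price = scrapy.Field()

class ShopSpider(scrapy.Spider):
    name = 'shop'
    start_urls = ['https://example.com/']  # placeholder URL

    def parse(self, response):
        # type(item).__name__ resolves to "Employee" / "Product", producing the
        # same EMPLOYEE and PRODUCT tables as the basic example.
        yield Employee(name='Jack', salary=100)
        yield Employee(name='Sal', salary=90, extra_info={'age': 20})
        yield Product(title='Steel Grill', price=5.5)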

How this works

For each object that you feed into the exporter, it will write it into a local buffer (a temporary JSON file). Once a configurable maximum buffer size is reached, the file is uploaded to a Snowflake internal stage via a PUT statement. At the end of the execution the exporter will create all specified tables and then instruct Snowflake to populate each table from every staged JSON file via COPY INTO statements (the sketch after the list below shows the equivalent manual steps).

  • If you output to multiple tables then a buffer is maintained for each.
  • Alternatively you can create / populate tables as soon as the buffers are flushed via *_on parameters described below.
  • *_on parameters also allow you to disable any table creation / population and just deal with the stages yourself.
  • For table creation the exporter will try to infer column types dynamically during execution; alternatively you can pass them explicitly via the predefined_column_types parameter.
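
As referenced above, here is a minimal sketch of the equivalent manual steps using the plain Snowflake connector. The file name, column types and COPY options are illustrative assumptions; the exporter's actual statements and stage file naming may differ:

import json
import snowflake.connector

conn = snowflake.connector.connect(user='...', password='...', account='...')
cur = conn.cursor()

# 1. Buffer rows locally as newline-delimited JSON (the exporter does this in temporary files).
with open('/tmp/employee_0.jl', 'w') as f:
    f.write(json.dumps({'name': 'Jack', 'salary': 100}) + '\n')
    f.write(json.dumps({'name': 'Sal', 'salary': 90, 'extra_info': {'age': 20}}) + '\n')

# 2. Upload the buffer file to the user stage ("@~") via a PUT statement.
cur.execute("PUT file:///tmp/employee_0.jl @~/MY_DATABASE.PUBLIC.EMPLOYEE/")

# 3. Create the target table (the exporter infers the column types for you).
cur.execute("""
    CREATE TABLE IF NOT EXISTS MY_DATABASE.PUBLIC.EMPLOYEE
    (name STRING, salary NUMBER, extra_info VARIANT)
""")

# 4. Populate the table from the staged JSON files via COPY INTO.
cur.execute("""
    COPY INTO MY_DATABASE.PUBLIC.EMPLOYEE
    FROM @~/MY_DATABASE.PUBLIC.EMPLOYEE/
    FILE_FORMAT = (TYPE = JSON)
    MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
""")

conn.close()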

Why "Stages"?

The use of local buffers and stages opposed to typical SQL INSERT statements is motivated largely by Snowflake performance implications and their billing model (see https://community.snowflake.com/s/question/0D50Z00008JpBymSAF/implications-of-multiple-insert-statements-vs-copy-into).

An illustrative example is a long-running Scrapy / ScrapyCloud job that constantly outputs data. If the job were to keep a connection constantly executing INSERTs, Snowflake would keep the warehouse running and consuming credits for the entire duration of the job.

A secondary consideration was allowing the user to work with just the stages, much like one would work with S3 or similar blob file storage. This covers cases where the user needs their own sophisticated table management approach and simply wants a convenient place to store raw data.

IMPORTANT NOTE: it won't make much sense to use this library if you're already working with S3 or a similar storage (including just your local machine) where your data is already in one of the Snowflake-supported file formats. Snowflake has built-in support for ingesting from several 3rd-party blob storages, and local files can be uploaded via PUT statements.

Configuration

All of the configuration is done via arguments to the main exporter class SnowflakeStageExporter; a combined example follows the list below.

  • user/password/account - Snowflake account credentials, passed as is to snowflake.connector.connect.
  • connection_kwargs - any additional parameters to snowflake.connector.connect.
  • table_path - table path to use.
    • If you specify database / database schema in connection_kwargs you won't need to specify them in the table path.
    • The path can include template variables which are expanded when you feed an item to the exporter. By default only the item variable is passed (e.g. 'MY_DB.PUBLIC.TABLE_{item[entity_type]}'; here it's assumed all of your items have an "entity_type" field).
    • You can pass any additional variables yourself as keyword arguments when calling exporter.export_item().
    • Additionally in Scrapy integration the following fields are passed:
      • spider - spider instance.
      • item_type_name - type(item).__name__. In the basic example above you passed this explicitly yourself.
  • stage - which internal stage to use. By default user stage ("@~") is used.
  • stage_path - naming for the files being uploaded to the stage.
    • By default it's "{table_path}/{instance_ms}_{batch_n}.jl", where table_path is the table path with all variables resolved, instance_ms is the epoch milliseconds at which the exporter was instantiated, and batch_n is the sequential number of the buffer.
    • In the Scrapy integration this defaults to "{table_path}/{job}/{instance_ms}_{batch_n}.jl", where job is the key of the ScrapyCloud job, or "local" if the spider ran locally.
  • max_file_size - maximum buffer size in bytes. 1GiB by default.
  • predefined_column_types - dictionary of table_path to Snowflake columns types for table creation.
    • e.g. {"MY_DB.PUBLIC.PRODUCT": {"title": "STRING", "price": "NUMBER"}, "MY_DB.PUBLIC.EMPLOYEE": {"name": "STRING", "salary": "NUMBER", "extra_info": "OBJECT"}}.
  • ignore_unexpected_fields - ignore fields not passed in predefined_column_types during table creation / population.
    • True by default, but only takes effect when the table does have predefined column types.
    • The data is still exported in full to the staged files.
  • allow_varying_value_types - if False, during table creation / population skip columns that had multiple value types.
    • True by default; the VARIANT type is assigned to such columns.
    • An error is logged when False and such a column is encountered.
    • Takes effect only when the exporter needs to figure out the column type itself.
    • The data is still exported in full to the staged files.
  • create_tables_on - one of "finish/flush/never". "finish" by default. "flush" is for each time a file is staged.
  • populate_tables_on - ditto.
  • clear_stage_on - same as above but "never" is default. Each file is removed from stage individually when enabled.
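
A combined sketch using several of the parameters above. The argument names are the ones listed in this section; the values (and the assumption that predefined_column_types keys refer to the resolved table path) are purely illustrative:

from snowflake_stage_exporter import SnowflakeStageExporter

with SnowflakeStageExporter(
    user='...',
    password='...',
    account='...',
    connection_kwargs={'warehouse': 'MY_WH'},  # extra args passed to snowflake.connector.connect
    table_path='MY_DB.PUBLIC.TABLE_{item[entity_type]}',
    stage='@~',                                 # user stage (the default)
    stage_path='{table_path}/{instance_ms}_{batch_n}.jl',
    max_file_size=512 * 1024 ** 2,              # flush buffers at 512 MiB instead of 1 GiB
    predefined_column_types={
        'MY_DB.PUBLIC.TABLE_PRODUCT': {'title': 'STRING', 'price': 'NUMBER'},
    },
    ignore_unexpected_fields=True,
    create_tables_on='flush',      # create tables as soon as a file is staged
    populate_tables_on='finish',   # but only COPY INTO them at the very end
    clear_stage_on='never',        # keep the staged files around
) as exporter:
    exporter.export_item({'entity_type': 'PRODUCT', 'title': 'Steel Grill', 'price': 5.5})
    exporter.finish_export()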

Configuration (Scrapy)

All of the exporter instance parameters are exposed as Scrapy settings prefixed with SNOWFLAKE_ (e.g. SNOWFLAKE_MAX_FILE_SIZE), as sketched below.
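
A minimal settings.py sketch, assuming the SNOWFLAKE_<PARAMETER> naming convention above (how the exporter component itself is enabled, e.g. via an extension or item pipeline entry, is not covered here):

# settings.py
SNOWFLAKE_USER = '...'
SNOWFLAKE_PASSWORD = '...'
SNOWFLAKE_ACCOUNT = '...'
SNOWFLAKE_TABLE_PATH = 'MY_DATABASE.PUBLIC.{item_type_name}'
SNOWFLAKE_MAX_FILE_SIZE = 256 * 1024 ** 2   # 256 MiB buffers instead of the 1 GiB default
SNOWFLAKE_CREATE_TABLES_ON = 'finish'
SNOWFLAKE_POPULATE_TABLES_ON = 'finish'
SNOWFLAKE_CLEAR_STAGE_ON = 'never'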

Once a Scrapy job ends, all remaining buffers are flushed. If the job outcome is not "finished" (i.e. something went wrong), then no table creation / table population / stage clearing takes place.

TODO

  • Unit tests >_>.
  • Test on Windows?
Owner
Oleg T.
Python based Web Scraper which can discover javascript files and parse them for juicy information (API keys, IP's, Hidden Paths etc).

Amit 6 Aug 26, 2022
A Web Scraper built with beautiful soup, that fetches udemy course information. Get udemy course information and convert it to json, csv or xml file

Udemy Scraper A Web Scraper built with beautiful soup, that fetches udemy course information. Installation Virtual Environment Firstly, it is recommen

Aditya Gupta 15 May 17, 2022
An automated, headless YouTube Watcher and Scraper

Searches YouTube, queries recommended videos and watches them. All fully automated and anonymised through the Tor network. The project consists of two independently usable components, the YouTube aut

44 Oct 18, 2022
Web crawling framework based on asyncio.

Web crawling framework for everyone. Written with asyncio, uvloop and aiohttp. Requirements Python3.5+ Installation pip install gain pip install uvloo

Jiuli Gao 2k Jan 05, 2023
Html Content / Article Extractor, web scrapping lib in Python

Python-Goose - Article Extractor Intro Goose was originally an article extractor written in Java that has most recently (Aug2011) been converted to a

Xavier Grangier 3.8k Jan 02, 2023
Web-Scrapper using Python and Flask

Web-Scrapper "[초급]Python으로 웹 스크래퍼 만들기" 코스 -NomadCoders 기초적인 Python 문법강의부터 시작하여 웹사이트의 html파일에서 원하는 내용을 Scrapping해서 출력, csv 파일로 저장, flask를 이용한 간단한 웹페이지

윤성도 1 Nov 10, 2021
🥫 The simple, fast, and modern web scraping library

About gazpacho is a simple, fast, and modern web scraping library. The library is stable, actively maintained, and installed with zero dependencies. I

Max Humber 692 Dec 22, 2022
A Happy and lightweight Python Package that searches Google News RSS Feed and returns a usable JSON response and scrap complete article - No need to write scrappers for articles fetching anymore

GNews 🚩 A Happy and lightweight Python Package that searches Google News RSS Feed and returns a usable JSON response 🚩 As well as you can fetch full

Muhammad Abdullah 273 Dec 31, 2022
A simple flask application to scrape gogoanime website.

gogoanime-api-flask A simple flask application to scrape gogoanime website. Used for demo and learning purposes only. How to use the API The base api

1 Oct 29, 2021
A high-level distributed crawling framework.

Cola: high-level distributed crawling framework Overview Cola is a high-level distributed crawling framework, used to crawl pages and extract structur

Xuye (Chris) Qin 1.5k Jan 04, 2023
Displays market info for the LUNI token on the Terra Blockchain

LuniBot for Discord Displays market info for the LUNI/LUNA token on the Terra Blockchain (Webscrape method currently scraping CoinMarketCap). Will evo

0 Jan 22, 2022
WebScraper - A script that prints out a list of all EXTERNAL references in the HTML response to an HTTP/S request

Project A: WebScraper A script that prints out a list of all EXTERNAL references

2 Apr 26, 2022
Scrapes Every Email Address of Every Society in Every University

society-email-scrape Site Live at https://kcsoc.github.io/society-email-scrape/ How to automatically generate new data Go to unis.yml Add your uni Cre

Krishna Consciousness Society 18 Dec 14, 2022
This program scrapes information and images for movies and TV shows.

Media-WebScraper This program scrapes information and images for movies and TV shows. Summary For more information on the program, read the WebScrape_

1 Dec 05, 2021
Scrapy uses Request and Response objects for crawling web sites.

Requests and Responses¶ Scrapy uses Request and Response objects for crawling web sites. Typically, Request objects are generated in the spiders and p

Md Rashidul Islam 1 Nov 03, 2021
🕷 Phone Crawler with multi-thread functionality

Phone Crawler: Phone Crawler with multi-thread functionality Disclaimer: I'm not responsible for any illegal/misuse actions, this program was made for

Kmuv1t 3 Feb 10, 2022
Transistor, a Python web scraping framework for intelligent use cases.

Web data collection and storage for intelligent use cases. transistor About The web is full of data. Transistor is a web scraping framework for collec

BOM Quote Manufacturing 212 Nov 05, 2022
Google Developer Profile Badge Scraper

Google Developer Profile Badge Scraper GDev Profile Badge Scraper is a Google Developer Profile Web Scraper which scrapes for specific badges in a use

Siddhant Lad 7 Jan 10, 2022
Daily check-ins for iQIYI VIP, Tencent Video, Bilibili, Baidu, and more

My-Actions: a personal collection of assorted check-in scripts adapted for GitHub Actions. Don't fork it, just ⭐️ star it. Usage: create a new repository and sync the code, go to Settings - Secrets - click the green button (if there is no green button it is already activated; go straight to the next step), add a new secret and set the Secr

280 Dec 30, 2022
Parsel lets you extract data from XML/HTML documents using XPath or CSS selectors

Parsel Parsel is a BSD-licensed Python library to extract and remove data from HTML and XML using XPath and CSS selectors, optionally combined with re

Scrapy project 859 Dec 29, 2022