The scope of this project is to build a data warehouse on Google Cloud Platform that will help answer common business questions as well as power dashboards.

Overview

Project Scope

The scope of this project is to build a data warehouse on Google Cloud Platform that will help answer common business questions as well as power dashboards. To do that, a conceptual data model and a data pipeline will be defined.

Architecture

Data is uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data is then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates the data mart tables. An Airflow instance can be deployed on a Google Compute Engine instance or locally to orchestrate the pipeline. A sketch of a single load task is shown after the technology list below.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake; scalable, low-cost object storage for the raw files.
  • Google BigQuery: acts as the database engine for the data warehouse, the data marts and the ETL processes. BigQuery is a serverless solution that can easily and efficiently process petabyte-scale datasets.
  • Apache Airflow: orchestrates the workflow by issuing commands to load data into BigQuery and SQL queries for the ETL process. Airflow does not process any data itself, which allows the architecture to scale.
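
To make the load step concrete, here is a minimal sketch of a single load task, assuming Airflow 2.x with the apache-airflow-providers-google package installed (the project itself may use older contrib operators). The bucket and connection names mirror the setup described later in this README; the staging table name is illustrative.

from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

# Copy the airport codes CSV from the GCS data lake into a BigQuery staging table.
with DAG('load_example', start_date=datetime(2021, 1, 1), schedule_interval=None) as dag:
    load_airports = GCSToBigQueryOperator(
        task_id='load_airports',
        bucket='cloud-data-lake-gcp',
        source_objects=['airports/airport-codes_csv.csv'],
        destination_project_dataset_table='cloud-data-lake.IMMIGRATION_DWH_STAGING.AIRPORTS',
        source_format='CSV',
        skip_leading_rows=1,
        write_disposition='WRITE_TRUNCATE',
        gcp_conn_id='google_cloud_default',
    )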

Data Model

The database is designed following the star-schema principle, with one fact table and seven dimension tables. An example query against this schema is shown after the table list below.

[Image: star-schema data model diagram]

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc.
  • D_TIME: contains dimension attributes for the date columns
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
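
To show how the star schema answers a business question, here is a hedged example that counts arrivals by country of origin using the BigQuery Python client. The join keys and column names (country_id, country_name) are assumptions for illustration and may differ from the actual table definitions.

from google.cloud import bigquery

# Business question: which countries of origin account for the most arrivals?
client = bigquery.Client(project='cloud-data-lake')
query = '''
    SELECT c.country_name, COUNT(*) AS arrivals
    FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA` f
    JOIN `cloud-data-lake.IMMIGRATION_DWH.D_COUNTRY` c
      ON f.country_id = c.country_id
    GROUP BY c.country_name
    ORDER BY arrivals DESC
    LIMIT 10
'''
for row in client.query(query).result():
    print(row.country_name, row.arrivals)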

Data Pipeline

This project uses Airflow for orchestration.

[Image: Airflow DAG graph view of the pipeline]

A DummyOperator named start_pipeline kicks off the pipeline, followed by four load operations. These operations load data from the GCS bucket into BigQuery staging tables. The immigration_data is loaded from parquet files, while the others are CSV formatted. Row-count checks run after each load to BigQuery.
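
The row checks could, for example, be implemented with BigQueryCheckOperator, which fails the task when the query returns a falsy first row. A minimal sketch, with an illustrative staging table name; the task would sit inside the project DAG:

from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator

# Fail the pipeline if the staging table ended up empty after the load.
check_airports = BigQueryCheckOperator(
    task_id='check_airports',
    sql='SELECT COUNT(*) FROM `cloud-data-lake.IMMIGRATION_DWH_STAGING.AIRPORTS`',
    use_legacy_sql=False,
    gcp_conn_id='google_cloud_default',
)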

Next, the pipeline loads three master data objects from the I94 data dictionary. The F_IMMIGRATION_DATA table is then created and checked to make sure there are no duplicates. The other dimension tables are also created, and the pipeline finishes.
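
A hedged sketch of what the fact-table step might look like: a query job builds F_IMMIGRATION_DATA from staging and a check verifies uniqueness of the record id. The column list and the cicid key are assumptions for illustration; in the DAG the check runs downstream of the create task.

from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryCheckOperator,
    BigQueryInsertJobOperator,
)

# Build the fact table from the staging data (column list is illustrative).
create_f_immigration_data = BigQueryInsertJobOperator(
    task_id='create_f_immigration_data',
    configuration={
        'query': {
            'query': '''
                CREATE OR REPLACE TABLE `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA` AS
                SELECT cicid, arrival_date, departure_date, visa_type, gender, country_id
                FROM `cloud-data-lake.IMMIGRATION_DWH_STAGING.IMMIGRATION_DATA`
            ''',
            'useLegacySql': False,
        }
    },
    gcp_conn_id='google_cloud_default',
)

# Passes only when every cicid appears exactly once, i.e. no duplicates.
check_no_duplicates = BigQueryCheckOperator(
    task_id='check_f_immigration_data_duplicates',
    sql='SELECT COUNT(*) = COUNT(DISTINCT cicid) FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA`',
    use_legacy_sql=False,
    gcp_conn_id='google_cloud_default',
)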

Scenarios

Data increase by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery can handle petabyte-scale data. Airflow is not a bottleneck, since it only issues commands to other services.

The pipeline runs at 7 AM daily. How would the dashboards be updated? Would it still work?

Schedule the DAG to run daily at 7 AM. Set up DAG retries and email/Slack notifications on failures.
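
A minimal sketch of how that schedule and the failure handling might be expressed in the DAG definition; the DAG id, email address and retry counts are placeholders, and Slack alerts could be wired in through an on_failure_callback:

from datetime import datetime, timedelta
from airflow import DAG

default_args = {
    'owner': 'airflow',
    'retries': 3,                          # retry a failed task a few times
    'retry_delay': timedelta(minutes=5),
    'email': ['data-team@example.com'],    # placeholder address
    'email_on_failure': True,
}

with DAG(
    'cloud_data_lake_pipeline',            # placeholder DAG id
    default_args=default_args,
    start_date=datetime(2021, 1, 1),
    schedule_interval='0 7 * * *',         # every day at 7 AM
    catchup=False,
) as dag:
    pass  # load, ETL and check tasks go here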

Make it available to 100+ people

BigQuery is auto-scaling, so it can easily handle 100+ people accessing it. If more people or services need access to the database, we can add steps to write to a horizontally scalable NoSQL database such as Datastore, Bigtable, or Cassandra, or to a horizontally scalable SQL database such as Cloud Spanner.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (you have free credits worth $300)
  • Navigate to IAM and create a service account
  • Grant the account the Project Owner role and download a JSON key. This is convenient for this project, but not recommended for a production system. Keep your key somewhere safe (a quick verification snippet follows this list).
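
Once the key is downloaded (for example to /path/to/service-account.json, a placeholder path), a quick way to confirm it works is a short Python check against BigQuery:

import os
from google.cloud import bigquery

# Point the Google client libraries at the downloaded service account key.
os.environ['GOOGLE_APPLICATION_CREDENTIALS'] = '/path/to/service-account.json'

# Listing datasets should succeed without an authentication error.
client = bigquery.Client(project='cloud-data-lake')  # replace with your project id
print([d.dataset_id for d in client.list_datasets()])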

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}

Local setup

Clone the project, then set up the local environment and install the required tooling as follows:

Install Docker if it's not already installed. You can find the resources to do that here.

Install the Astronomer CLI following the instructions here.

Run the following commands to bring up the Airflow instance:

astro dev start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by entering the following address in a browser:

localhost:8080/admin/

If everything is set up correctly, you will see the following screen:

[Image: Airflow UI home page showing the pipeline DAG]

Navigate to Admin -> Connections and paste in the credentials for the following two connections: bigquery_default and google_cloud_default.

[Image: Airflow connection configuration screen]

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and change the following parameters to match your own setup:

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'

You can then trigger the DAG and the pipeline will run.

The data warehouse

The final data warehouse looks like this:

[Image: tables of the final data warehouse in BigQuery]
