
Overview

Project Scope

The scope of this project is to build a data warehouse on Google Cloud Platform that will help answer common business questions as well as power dashboards. To do that, a conceptual data model and a data pipeline will be defined.

Architecture

Data are uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data are then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates the data mart tables. An Airflow instance can be deployed on Google Compute Engine or locally to orchestrate the pipeline.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake; highly scalable object storage for all raw files.
  • Google BigQuery: acts as the database engine for the data warehouse, the data marts, and the ETL processes. BigQuery is a serverless solution that can easily and effectively process petabyte-scale datasets.
  • Apache Airflow: orchestrates the workflow by issuing command-line calls to load data into BigQuery and SQL queries for the ETL process. Airflow does not process any data itself, which allows the architecture to scale (see the ETL sketch after this list).
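
To make that last point concrete, here is a minimal sketch of one BigQuery-side ETL step using the google-cloud-bigquery Python client. The dataset, table, and column names are assumptions based on the data model below, not the project's actual code.

# Minimal ETL sketch: build a dimension table from staging, entirely inside
# BigQuery. Dataset, table, and column names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client(project="cloud-data-lake")

etl_query = """
CREATE OR REPLACE TABLE `IMMIGRATION_DWH.D_STATE` AS
SELECT DISTINCT state_id, state_name
FROM `IMMIGRATION_DWH_STAGING.i94_addr`
"""

# The query runs entirely server-side; the caller only issues the command,
# which is why the orchestrator never becomes a data-processing bottleneck.
client.query(etl_query).result()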

Data Model

The database is designed following the star-schema principle, with 1 fact table and 7 dimension tables.

[Image: star-schema data model diagram]

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc. (see the declaration sketch after this list)
  • D_TIME: contains dimensions for the date column
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
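
For illustration, the fact table could be declared programmatically as below. This is a hedged sketch: the exact column list is an assumption based on the description above.

# Hedged sketch: declare the F_IMMIGRATION_DATA fact table with the
# google-cloud-bigquery client. The column list is an assumption.
from google.cloud import bigquery

client = bigquery.Client(project="cloud-data-lake")

schema = [
    bigquery.SchemaField("cicid", "FLOAT"),        # record id (assumed)
    bigquery.SchemaField("arrival_date", "DATE"),
    bigquery.SchemaField("departure_date", "DATE"),
    bigquery.SchemaField("visa_type", "STRING"),
    bigquery.SchemaField("gender", "STRING"),
    bigquery.SchemaField("country_id", "STRING"),  # joins to D_COUNTRY
    bigquery.SchemaField("port_id", "STRING"),     # joins to D_PORT
    bigquery.SchemaField("state_id", "STRING"),    # joins to D_STATE
]

table = bigquery.Table("cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA", schema=schema)
client.create_table(table, exists_ok=True)  # no-op if the table already exists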

Data pipeline

This project uses Airflow for orchestration.

[Image: Airflow DAG graph view of the pipeline]

A DummyOperator start_pipeline kicks off the pipeline, followed by 4 load operations. Those operations load data from the GCS bucket into BigQuery staging tables. The immigration_data is loaded as Parquet files while the others are CSV formatted. Row-count checks run after each load to verify that the data arrived in BigQuery.

Next, the pipeline loads 3 master data objects from the I94 data dictionary. Then the F_IMMIGRATION_DATA table is created and checked to make sure that it contains no duplicates. The other dimension tables are also created, and the pipeline finishes. A condensed sketch of this DAG is shown below.
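
The sketch below shows how such a DAG could be wired together with the standard Airflow Google providers. The task ids, start date, and table names are assumptions rather than the project's exact code.

# Condensed pipeline sketch; task ids, dates, and table names are assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    "cloud-data-lake-pipeline",
    start_date=datetime(2022, 1, 1),
    schedule_interval=None,
) as dag:
    start_pipeline = DummyOperator(task_id="start_pipeline")

    # One of the 4 load operations: GCS Parquet files into a staging table.
    load_immigration_data = GCSToBigQueryOperator(
        task_id="load_immigration_data",
        bucket="cloud-data-lake-gcp",
        source_objects=["immigration_data/*.parquet"],
        destination_project_dataset_table="IMMIGRATION_DWH_STAGING.immigration_data",
        source_format="PARQUET",
        write_disposition="WRITE_TRUNCATE",
    )

    # Row check after loading, mirroring the checks described above.
    check_immigration_data = BigQueryCheckOperator(
        task_id="check_immigration_data",
        sql="SELECT COUNT(*) FROM `IMMIGRATION_DWH_STAGING.immigration_data`",
        use_legacy_sql=False,
    )

    start_pipeline >> load_immigration_data >> check_immigration_data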

Scenarios

Data increase by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery handle petabyte-scale data. Airflow is not a bottleneck, since it only issues commands to other services.

Pipelines run daily at 7 AM. How would the dashboard update, and would it still work?

Schedule the DAG to run daily at 7 AM, and set up DAG retries plus email/Slack notifications on failures, as sketched below. Because the dashboard reads from the warehouse tables, it keeps working unchanged and simply shows the refreshed data after each morning run.
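
A minimal sketch of that scheduling setup; the retry counts and alert address are placeholders.

# Minimal scheduling sketch; retries and the alert email are placeholders.
from datetime import datetime, timedelta

from airflow import DAG

default_args = {
    "retries": 3,
    "retry_delay": timedelta(minutes=5),
    "email": ["data-team@example.com"],  # placeholder address
    "email_on_failure": True,
    # A Slack alert could be wired in via an "on_failure_callback" function.
}

dag = DAG(
    "cloud-data-lake-pipeline",
    default_args=default_args,
    schedule_interval="0 7 * * *",  # every day at 7 AM
    start_date=datetime(2022, 1, 1),
    catchup=False,
)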

Make it available to 100+ people

BigQuery auto-scales, so it can easily handle 100+ people accessing the warehouse. If even more people or services need access to the data, we can add steps to write to a NoSQL database such as Datastore, Cassandra, or Bigtable, or to a horizontally scalable SQL database such as Cloud Spanner.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (you have free credits worth $300)
  • Navigate to IAM and create a service account
  • Grant the account the Project Owner role. This is convenient for this project, but it is not recommended for a production system. Create a key for the account and keep it somewhere safe (see the sanity check below).
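
After downloading the key, you can sanity-check that it works with a short snippet like the one below; the key path and project id are placeholders for your own values.

# Quick sanity check for the service-account key; the path and project id
# are placeholders for your own values.
import os

from google.cloud import bigquery

os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/your-key.json"

client = bigquery.Client(project="your-project-id")
print([d.dataset_id for d in client.list_datasets()])  # should not raise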

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}
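
If you prefer to stay in Python, the sketch below is a rough equivalent using the google-cloud-storage client; the destination bucket name is a placeholder.

# Rough Python equivalent of the gsutil copy above; the destination bucket
# name is a placeholder.
from google.cloud import storage

client = storage.Client()
src = client.bucket("cloud-data-lake-gcp")
dst = client.bucket("your_bucket_name")

for blob in client.list_blobs(src):
    src.copy_blob(blob, dst, new_name=blob.name)  # keep the same object paths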

Local setup

Clone the project, then set up the local environment as follows:

Install Docker if it's not already installed. You can find the resources to do that here.

Install the Astronomer CLI following the instructions here.

Run the following command to bring up the Airflow instance:

astro dev start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by typing the following address into a browser:

localhost:8080/admin/

If everything is set up correctly, you will see the following screen:

[Image: Airflow UI home screen]

Navigate to Admin -> Connections and paste in the credentials for the following two connections: bigquery_default and google_cloud_default

[Image: Airflow connection configuration screen]

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and change the following parameters to match your own setup:

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'
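
For context, these variables are typically interpolated into the load and ETL tasks inside the DAG. The task below is an illustrative sketch, not the project's exact code; the file and table names follow the bucket layout shown earlier.

# Illustrative use of the parameters above inside the DAG definition.
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

load_airports = GCSToBigQueryOperator(
    task_id="load_airports",
    bucket=gs_bucket,
    source_objects=["airports/airport-codes_csv.csv"],
    destination_project_dataset_table=f"{project_id}.{staging_dataset}.airports",
    source_format="CSV",
    skip_leading_rows=1,  # skip the CSV header row
)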

You can then trigger the DAG and the pipeline will run.

The data warehouse

The final data warehouse looks like this:

[Image: final data warehouse tables in BigQuery]
