
Overview

Project Scope

The scope of this project is to build a data warehouse on Google Cloud Platform that helps answer common business questions and powers dashboards. To do that, a conceptual data model and a data pipeline will be defined.

Architecture

Data is uploaded to a Google Cloud Storage bucket. GCS acts as the data lake where all raw files are stored. Data is then loaded into staging tables on BigQuery. The ETL process takes data from those staging tables and creates the data mart tables. An Airflow instance can be deployed on a Google Compute Engine instance or locally to orchestrate the pipeline.

Here are the justifications for the technologies used:

  • Google Cloud Storage: acts as the data lake and scales to store virtually any volume of raw data.
  • Google BigQuery: acts as the database engine for the data warehouse, data marts, and ETL processes. BigQuery is a serverless solution that can easily and effectively process petabyte-scale datasets (see the staging-load sketch after this list).
  • Apache Airflow: orchestrates the workflow by issuing commands to load data into BigQuery and SQL queries for the ETL process. Airflow does not process any data itself, which allows the architecture to scale.
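In the pipeline this load is issued by Airflow, but the same staging load can be sketched directly with the google-cloud-bigquery Python client. The snippet below is only an illustration: the project, dataset, and staging table names are placeholders based on the parameters shown later in this README.

from google.cloud import bigquery

# Assumes GOOGLE_APPLICATION_CREDENTIALS points to the service account key.
client = bigquery.Client(project="cloud-data-lake")

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.PARQUET,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

# Load all immigration Parquet files from the data lake into a staging table.
load_job = client.load_table_from_uri(
    "gs://cloud-data-lake-gcp/immigration_data/*.parquet",
    "cloud-data-lake.IMMIGRATION_DWH_STAGING.immigration_data",
    job_config=job_config,
)
load_job.result()  # wait for the load job to finish

table = client.get_table("cloud-data-lake.IMMIGRATION_DWH_STAGING.immigration_data")
print(f"Loaded {table.num_rows} rows into staging")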

Data Model

The database is designed following the star-schema principle, with 1 fact table and 7 dimension tables; an example query over this schema is sketched after the table list below.

[Image: star-schema data model]

  • F_IMMIGRATION_DATA: contains immigration information such as arrival date, departure date, visa type, gender, country of origin, etc.
  • D_TIME: contains dimensions for the date columns
  • D_PORT: contains port_id and port_name
  • D_AIRPORT: contains airports within a state
  • D_STATE: contains state_id and state_name
  • D_COUNTRY: contains country_id and country_name
  • D_WEATHER: contains average weather for a state
  • D_CITY_DEMO: contains demographic information for a city
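To illustrate how the star schema answers a common business question, here is a hedged sketch that counts arrivals by visa type and country of origin. The join key and column names are assumptions for illustration only; the real names are defined in the project's ETL queries.

from google.cloud import bigquery

client = bigquery.Client(project="cloud-data-lake")

# Hypothetical column names -- adjust to the actual schema.
sql = """
SELECT c.country_name, f.visa_type, COUNT(*) AS arrivals
FROM `cloud-data-lake.IMMIGRATION_DWH.F_IMMIGRATION_DATA` f
JOIN `cloud-data-lake.IMMIGRATION_DWH.D_COUNTRY` c
  ON f.country_id = c.country_id
GROUP BY c.country_name, f.visa_type
ORDER BY arrivals DESC
LIMIT 10
"""

for row in client.query(sql).result():
    print(row.country_name, row.visa_type, row.arrivals)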

Data pipeline

This project uses Airflow for orchestration.

[Image: Airflow DAG graph view]

A DummyOperator, start_pipeline, kicks off the pipeline, followed by 4 load operations. Those operations load data from the GCS bucket into BigQuery staging tables. The immigration_data is loaded as Parquet files while the others are CSV formatted. Row-count check operations run after each load to BigQuery.

Next, the pipeline loads 3 master data objects from the I94 data dictionary. The F_IMMIGRATION_DATA table is then created and checked to make sure there are no duplicates. The other dimension tables are also created, and the pipeline finishes.
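For illustration, here is a minimal sketch of one load-and-check pair using the apache-airflow-providers-google package. This is not the project's actual DAG: task ids, table names, and operator arguments are placeholders and may differ between provider versions.

from datetime import datetime

from airflow import DAG
from airflow.operators.dummy import DummyOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryCheckOperator
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator

with DAG(
    "cloud_data_lake_sketch",
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    catchup=False,
) as dag:
    start_pipeline = DummyOperator(task_id="start_pipeline")

    # Load the immigration Parquet files from the GCS data lake into staging.
    load_immigration_data = GCSToBigQueryOperator(
        task_id="load_immigration_data",
        bucket="cloud-data-lake-gcp",
        source_objects=["immigration_data/*.parquet"],
        destination_project_dataset_table="cloud-data-lake.IMMIGRATION_DWH_STAGING.immigration_data",
        source_format="PARQUET",
        write_disposition="WRITE_TRUNCATE",
        autodetect=True,
    )

    # Fail the pipeline if the staging table ended up empty.
    check_immigration_data = BigQueryCheckOperator(
        task_id="check_immigration_data",
        sql="SELECT COUNT(*) FROM `cloud-data-lake.IMMIGRATION_DWH_STAGING.immigration_data`",
        use_legacy_sql=False,
    )

    start_pipeline >> load_immigration_data >> check_immigration_data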

Scenarios

Data increases by 100x

The current infrastructure can easily support a 100x increase in data size. GCS and BigQuery can handle petabyte-scale data. Airflow is not a bottleneck since it only issues commands to other services.

Pipelines are run at 7 AM daily. How would the dashboard be updated? Would it still work?

Schedule the DAG to run daily at 7 AM. Set up DAG retries and email/Slack notifications on failures.
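A minimal sketch of that configuration, assuming Airflow's email support is configured (Slack alerts would typically use a failure callback or the Slack provider instead); the owner and email address below are placeholders.

from datetime import datetime, timedelta

from airflow import DAG

default_args = {
    "owner": "airflow",
    "retries": 3,                        # retry failed tasks before alerting
    "retry_delay": timedelta(minutes=5),
    "email": ["data-team@example.com"],  # placeholder address
    "email_on_failure": True,
}

dag = DAG(
    "cloud_data_lake_daily",
    default_args=default_args,
    start_date=datetime(2021, 1, 1),
    schedule_interval="0 7 * * *",  # every day at 7 AM
    catchup=False,
)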

Make it available to 100+ people

BigQuery auto-scales, so it can easily handle 100+ people accessing the warehouse. If more people or services need access to the data, we can add steps to write the results to a NoSQL database such as Datastore or Cassandra, or to a horizontally scalable wide-column store such as Bigtable.

Project Instructions

GCP setup

Follow these steps:

  • Create a project on GCP
  • Enable billing by adding a credit card (you have free credits worth $300)
  • Navigate to IAM and create a service account
  • Grant the account the Project Owner role, then create and download a JSON key for it. This is convenient for this project but not recommended for a production system. Keep your key somewhere safe; you can verify that it works with the sketch after this list.
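A quick, hedged way to verify the key from Python, assuming the google-cloud-bigquery package is installed; the key path is a placeholder.

import os

from google.cloud import bigquery

# Point the client at the downloaded service account key (placeholder path).
os.environ["GOOGLE_APPLICATION_CREDENTIALS"] = "/path/to/service-account-key.json"

client = bigquery.Client()
print("Authenticated against project:", client.project)

# List datasets to confirm access (the list is empty on a fresh project).
for dataset in client.list_datasets():
    print(dataset.dataset_id)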

Create a bucket on your project and upload the data with the following structure:

gs://cloud-data-lake-gcp/airports/:
gs://cloud-data-lake-gcp/airports/airport-codes_csv.csv
gs://cloud-data-lake-gcp/airports/airport_codes.json

gs://cloud-data-lake-gcp/cities/:
gs://cloud-data-lake-gcp/cities/us-cities-demographics.csv
gs://cloud-data-lake-gcp/cities/us_cities_demo.json

gs://cloud-data-lake-gcp/immigration_data/:
gs://cloud-data-lake-gcp/immigration_data/part-00000-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00001-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00002-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00003-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00004-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00005-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00006-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00007-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00008-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00009-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00010-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00011-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00012-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet
gs://cloud-data-lake-gcp/immigration_data/part-00013-b9542815-7a8d-45fc-9c67-c9c5007ad0d4-c000.snappy.parquet

gs://cloud-data-lake-gcp/master_data/:
gs://cloud-data-lake-gcp/master_data/I94ADDR.csv
gs://cloud-data-lake-gcp/master_data/I94CIT_I94RES.csv
gs://cloud-data-lake-gcp/master_data/I94PORT.csv

gs://cloud-data-lake-gcp/weather/:
gs://cloud-data-lake-gcp/weather/GlobalLandTemperaturesByCity.csv
gs://cloud-data-lake-gcp/weather/temperature_by_city.json

You can copy the data to your own bucket by running the following:

gsutil cp -r gs://cloud-data-lake-gcp/ gs://{your_bucket_name}

Local setup

Clone the project, then set up the local environment as follows:

Install Docker if it's not already installed. You can find the resources to do that here.

Install the Astronomer CLI following the instructions here.

Run the following command to bring up the Airflow instance:

astro dev start

You can look at the logs by running make logs if you need to debug something. You can access and manage the pipeline by entering the following address in a browser:

localhost:8080/admin/

If everything is set up correctly, you will see the following screen:

[Image: Airflow UI showing the pipeline]

Navigate to Admin -> Connections and paste in the credentials for the following two connections: bigquery_default and google_cloud_default

[Image: Airflow connection configuration]

Navigate to the main DAG at dags/cloud-data-lake-pipeline.py and change the following parameters to match your own setup:

project_id = 'cloud-data-lake'
staging_dataset = 'IMMIGRATION_DWH_STAGING'
dwh_dataset = 'IMMIGRATION_DWH'
gs_bucket = 'cloud-data-lake-gcp'

You can then trigger the DAG and the pipeline will run.

The data warehouse

The final data warehouse looks like this:

[Image: data warehouse tables in BigQuery]
