
Overview

Project 3: Carolina Arriaga, Daniel Ryu, Jonathan Moges, Ziwei Zhao

Understanding user behavior

The game company we work for has two events that we want to track: buy an item and join a guild. Each of them has metadata characteristic of such events.

Our Business Questions

Given the influence the business question has on the design of the pipeline, we chose to begin our presentation with a focus on our business questions. Our questions focus on trends in user behavior and internet host providers. To keep the pipeline basic and affordable, we chose daily batch processing of our events; the rationale is that we would not expect strong day-to-day deviations in host providers or user events.
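Because the batch runs daily, the filtering and landing step described later (filtered_writes.py) could be scheduled with a simple cron entry. The sketch below is only illustrative; the compose-file directory is a hypothetical path, not part of the repository:

# hypothetical crontab entry: land the day's events in HDFS every night at 01:00
0 1 * * * cd /home/science/w205 && docker-compose exec -T spark spark-submit /w205/Project_3/carolina_daniel_jonathan_ziwei/filtered_writes.py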

Pipeline description

Before describing our pipeline, note that anyone who wishes to replicate it should run the steps described below in a terminal, not in a Jupyter notebook.

  1. Sourcing: Events are sourced to our system through two Python files, "game_api_with_extended_json_events" and "event_generator", run through Flask. The game_api file produces the two main logs sent to Kafka: purchasing an item or joining a guild. These are the two main actions a user can take in the game. The event_generator takes the universe of use cases and serves as a proxy for daily user events; it gives us the flexibility to create as many events as we desire. The events are kept as flat as possible to reduce the number of data munging steps.

  2. Ingesting: Events are ingested through Kafka, where we have created a topic called "events." Given the event structure and the fact that the "event_generator" file represents the universe of events, notifications were not needed and we only used one partition. The topic did not require additional brokers.

  3. Storing: Once all events have been created, we read the associated Kafka topic (events) and use the "filtered_writes" Python file to unroll the JSON records and write the data to Parquet files. These Parquet files are the starting point for the analysis needed to answer the business questions.

  4. Analysis: Analysis begins with PySpark; we also use NumPy, Pandas, and Spark SQL to prepare the data to answer the business questions.

Creating a pipeline

Docker Compose will manage the containers for all of the services required in each step of the process, with Zookeeper acting as the coordination service.

To capture the information we want to analyze later, we propose the following data ingestion:

YML requirement: Zookeeper, which is used as the coordination service for our distributed applications. We expose certain ports to enable connections with other nodes in the cluster and to let other clients connect via the exposed client ports (2181 and 32181). We also add an extra_hosts entry ("moby:127.0.0.1") that maps the hostname moby to the local loopback address for processes running as services.

version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 32181
      ZOOKEEPER_TICK_TIME: 2000
    expose:
      - "2181"
      - "2888"
      - "32181"
      - "3888"
    extra_hosts:
      - "moby:127.0.0.1"

Cloudera will be used to run Hadoop, our distributed data store. This is where we will store the Parquet files before analysis. We set the ports to the defaults for the NameNode and Apache Hue services. The latter is a web interface that supports Hadoop-related data analysis.

  cloudera:
    image: midsw205/cdh-minimal:latest
    expose:
      - "8020" # nn
      - "50070" # nn http
      - "8888" # hue
    #ports:
    #- "8888:8888"
    extra_hosts:
      - "moby:127.0.0.1"

Spark is our data processing tool. Its purpose is to take our code and turn it into multiple tasks that are executed by worker nodes.

  spark:
    image: midsw205/spark-python:0.0.5
    stdin_open: true
    tty: true
    volumes:
      - ~/w205:/w205
    expose:
      - "8888"
    ports:
      - "8888:8888"
    depends_on:
      - cloudera
    environment:
      HADOOP_NAMENODE: cloudera
    extra_hosts:
      - "moby:127.0.0.1"
    command: bash

The mids container connects the local files (mounted as volumes) to the services used, exposes port 5000 for the Flask app, and includes the same extra_hosts entry as the other services.

  mids:
    image: midsw205/base:0.1.9
    stdin_open: true
    tty: true
    volumes:
      - ~/w205:/w205
    expose:
      - "5000"
    ports:
      - "5000:5000"
    extra_hosts:
      - "moby:127.0.0.1"

Instrument the API server to log events to Kafka

In this first step, the user interacts with our mobile app. The mobile app makes calls to the web service, and the API server handles the requests: buy an item or join a guild. Each request is then logged to Kafka.

YML Requirement: kafka

  • depends on Zookeeper
  • gathers all of the event logs produced by the API server
  • exposes ports so other services can connect to the live data stream

  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:32181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    expose:
      - "9092"
      - "29092"
    extra_hosts:
      - "moby:127.0.0.1"

Create a topic in Kafka

In Docker we can create a topic in Kafka where all the events will be registered.

Using Zookeeper and Kafka we can test our API server logging events to Kafka.

Spin up the cluster:

docker-compose up -d

Create a topic called events:

docker-compose exec kafka \
   kafka-topics \
     --create \
     --topic events \
     --partitions 1 \
     --replication-factor 1 \
     --if-not-exists \
     --zookeeper zookeeper:32181

After running this in the command line, it should report that the topic events has been created.
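We can also confirm the topic exists by describing it (standard Kafka CLI; the exact output format varies by version):

docker-compose exec kafka \
   kafka-topics \
     --describe \
     --topic events \
     --zookeeper zookeeper:32181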

API POST requests

The game company must route the events to specific API endpoints, for example:

  • POST /purchase/item_name
  • POST /join_a_guild

Examples:

| Items | Event | API endpoint |
| --- | --- | --- |
| sword | Purchase | POST /purchase/sword |
| shield | Purchase | POST /purchase/shield |
| helmet | Purchase | POST /purchase/helmet |
| knife | Purchase | POST /purchase/knife |
| gauntlet | Purchase | POST /purchase/gauntlet |
| NA | Join a guild | POST /join_a_guild |

Similarly, we can use GET to retrieve a specific call, for example:

GET /join_a_guild

For our project, we will use the suggested endpoints.
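Once Flask is running (see "Run Flask" below), the endpoints can be sanity-checked from the mids container with curl, assuming curl is available in that image and Flask is listening on its default port 5000:

docker-compose exec mids curl http://localhost:5000/purchase/sword
docker-compose exec mids curl http://localhost:5000/join_a_guild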

Logging events

We propose using Flask to route the event calls and log them to Kafka.

") def purchase_a_sword(item_name): purchase_event = {'event_type': '{} purchased'.format(item_name)} log_to_kafka('events', purchase_event) return "{} Purchased!\n".format(item_name) @app.route("/join_a_guild") def join_guild(): join_guild_event = {'event_type': 'join_guild'} log_to_kafka('events', join_guild_event) return "Joined guild.\n"">
#!/usr/bin/env python
import json
from kafka import KafkaProducer
from flask import Flask, request

app = Flask(__name__)
producer = KafkaProducer(bootstrap_servers='kafka:29092')


def log_to_kafka(topic, event):
    event.update(request.headers)
    producer.send(topic, json.dumps(event).encode())
    
@app.route("/purchase/
     
      "
     )
def purchase_a_sword(item_name):
    purchase_event = {'event_type': '{} purchased'.format(item_name)}
    log_to_kafka('events', purchase_event)
    return "{} Purchased!\n".format(item_name)

@app.route("/join_a_guild")
def join_guild():
    join_guild_event = {'event_type': 'join_guild'}
    log_to_kafka('events', join_guild_event)
    return "Joined guild.\n"

This code is saved in the project folder as a Python file named game_api_with_extended_json_events.py.

Run Flask

Running Flask requires a separate terminal:

docker-compose exec mids env FLASK_APP=/w205/Project_3/carolina_daniel_jonathan_ziwei/game_api_with_extended_json_events.py flask run
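If everything is wired correctly, the Flask development server should report that it is running, with output roughly like the following (exact wording varies by Flask version):

 * Serving Flask app "game_api_with_extended_json_events"
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)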

Testing the Pipeline

We've created a script (event_generator.py) that hits the Flask app with various quantities and combinations of event types, using randomly generated host names. This generated data then flows through the rest of the pipeline and the analysis.

pip install numpy
python event_generator.py n   # runs n separate ab commands, each with an inverse-logarithmic number of iterations
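The actual event_generator.py lives in the repository and is not reproduced here. The sketch below only illustrates the idea of firing n batches of ab requests with randomly generated host names; the endpoint list, provider list, and iteration formula are assumptions, not the real script:

#!/usr/bin/env python
"""Illustrative sketch of an event generator (not the repository's event_generator.py)."""
import random
import subprocess
import sys

import numpy as np

# assumed endpoint and provider universes, for illustration only
ENDPOINTS = ['purchase/sword', 'purchase/shield', 'purchase/helmet',
             'purchase/knife', 'purchase/gauntlet', 'join_a_guild']
PROVIDERS = ['comcast', 'att', 'google', 'frontier', 'cox', 'earthlink']


def main(n):
    for run in range(1, n + 1):
        endpoint = random.choice(ENDPOINTS)
        host = 'user{}.{}.com'.format(run, random.choice(PROVIDERS))
        # roughly inverse-logarithmic request count per run (assumed formula)
        requests = max(1, int(100 / np.log(run + 2)))
        # fire the batch of requests at the Flask app via Apache Bench in the mids container
        subprocess.call([
            'docker-compose', 'exec', 'mids',
            'ab', '-n', str(requests),
            '-H', 'Host: {}'.format(host),
            'http://localhost:5000/{}'.format(endpoint)])


if __name__ == '__main__':
    main(int(sys.argv[1]))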

Reading from Kafka

We can read all the logged events from the "events" topic:

docker-compose exec mids bash -c "kafkacat -C -b kafka:29092 -t events -o beginning -e"

Example of output

After capturing the API requests and processing them with Flask, the log should look something like this:
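The exact fields depend on the request headers sent by ab or curl, but each line should be a flat JSON event along these lines (illustrative values only, not actual output):

{"Host": "user1.comcast.com", "User-Agent": "ApacheBench/2.3", "Accept": "*/*", "event_type": "sword purchased"}
{"Host": "user2.earthlink.com", "User-Agent": "ApacheBench/2.3", "Accept": "*/*", "event_type": "join_guild"}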

Separating events and landing them in HDFS

filtered_writes.py parses and separates purchase events and join guild events into two separate files in HDFS, ready to be queried and analyzed.

Python file that filters and writes events into files by type:

#!/usr/bin/env python
"""Extract events from kafka and write them to hdfs
"""
import json
from pyspark.sql import SparkSession, Row
from pyspark.sql.functions import udf


@udf('boolean')
def is_purchase(event_as_json):
    event = json.loads(event_as_json)
    if 'purchase' in event['event_type']:
        return True
    return False

@udf('boolean')
def is_join_guild(event_as_json):
    event = json.loads(event_as_json)
    if event['event_type']=='join_guild':
        return True
    return False


def main():
    """main
    """
    spark = SparkSession \
        .builder \
        .appName("ExtractEventsJob") \
        .getOrCreate()

    raw_events = spark \
        .read \
        .format("kafka") \
        .option("kafka.bootstrap.servers", "kafka:29092") \
        .option("subscribe", "events") \
        .option("startingOffsets", "earliest") \
        .option("endingOffsets", "latest") \
        .load()

    purchase_events = raw_events \
        .select(raw_events.value.cast('string').alias('raw'),
                raw_events.timestamp.cast('string')) \
        .filter(is_purchase('raw'))

    extracted_purchase_events = purchase_events \
        .rdd \
        .map(lambda r: Row(timestamp=r.timestamp, **json.loads(r.raw))) \
        .toDF()
    extracted_purchase_events.printSchema()
    extracted_purchase_events.show()

    extracted_purchase_events \
        .write \
        .mode('overwrite') \
        .parquet('/tmp/purchases')

    join_events = raw_events \
        .select(raw_events.value.cast('string').alias('raw'),
                raw_events.timestamp.cast('string')) \
        .filter(is_join_guild('raw'))

    extracted_join_events = join_events \
        .rdd \
        .map(lambda r: Row(timestamp=r.timestamp, **json.loads(r.raw))) \
        .toDF()
    extracted_join_events.printSchema()
    extracted_join_events.show()

    extracted_join_events \
        .write \
        .mode('overwrite') \
        .parquet('/tmp/join_guild')


if __name__ == "__main__":
    main()

Run it

docker-compose exec spark spark-submit /w205/Project_3/carolina_daniel_jonathan_ziwei/filtered_writes.py

Check output

docker-compose exec cloudera hadoop fs -ls /tmp/

The output should look like this; the directories join_guild and purchases were created:

drwxrwxrwt   - mapred mapred              0 2018-02-06 18:27 /tmp/hadoop-yarn
drwx-wx-wx   - root   supergroup          0 2021-04-03 22:24 /tmp/hive
drwxr-xr-x   - root   supergroup          0 2021-04-05 00:09 /tmp/join_guild
drwxr-xr-x   - root   supergroup          0 2021-04-05 00:09 /tmp/purchases
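Optionally, we can also look inside one of these directories to confirm that Spark wrote its Parquet part files (standard Hadoop CLI):

docker-compose exec cloudera hadoop fs -ls /tmp/purchases/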

These files are ready for query in the next step.

Using PySpark to analyze the data

First, start a PySpark shell:

docker-compose exec spark pyspark 

Import the necessary tools:

import numpy as np
import pandas as pd
from pyspark.sql import SparkSession, Row
from pyspark.sql.functions import udf
from pyspark.sql.functions import col

Read the Parquet files into Spark DataFrames:

purchases = spark.read.parquet('/tmp/purchases')
join = spark.read.parquet('/tmp/join_guild')
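As an optional sanity check, we can print the schema that filtered_writes.py produced for the purchases DataFrame:

purchases.printSchema()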

Define the host provider list used to answer the second business question; the host provider list represents the universe of internet service providers:

host_provider_list = ['comcast','att','google','frontier','cox','earthlink']

Our first business question is: what are the most popular events across our user base?

purchases.groupBy('event_type').count().sort(col("count").desc()).show()

We see that purchasing a knife is by far the most popular event, with 49 instances. The second is purchasing a helmet with 30 instances, followed by purchasing a shield with 29 instances.

This insight allows us to improve gameplay by creating more knife varieties, and future data analysis can inform strategies to encourage more activity around less popular event types (e.g., purchasing gauntlets).
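Guild joins can be tallied the same way if we want to compare them against purchases; since join events carry a single event_type, a simple count is enough (a small optional check, not part of the original analysis):

join.groupBy('event_type').count().show()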

To answer the next business question, we will need to convert the PySpark DataFrame to a Pandas DataFrame:

purchases_2 = purchases.toPandas()

Our next business question builds on the initial question and asks: which internet host providers do our users use, and which events are most popular on those hosts? The internet host provider information needs to be parsed first, since the raw host value is unique to each user (a one-to-one relationship between users and hosts) and cannot be aggregated directly.

We define a matcher function that checks each host string against "host_provider_list":

def matcher(x):
    for i in host_provider_list:
        if i in x:
            return i
    return np.nan

We then apply the matcher function to the "Host" column of our Pandas DataFrame so that it holds the parsed provider name:

purchases_2['Host'] = purchases_2['Host'].apply(matcher)

We then group the DataFrame by host name and event type and display the counts:

purchases_3 = purchases_2.groupby(['Host', 'event_type']).count()[['Accept']]

purchases_3.rename(columns={"Accept":"Count"})
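Note that rename returns a new DataFrame rather than modifying purchases_3 in place; one optional way (not in the original write-up) to keep the renamed column and surface the largest host/event combinations is:

purchases_3 = purchases_3.rename(columns={"Accept": "Count"})
purchases_3.sort_values("Count", ascending=False).head(10)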

We can see that our users mainly use cox and earthlink as their internet hosts, with each provider hosting 38 users. Additionally, most of our knife purchases come from users hosted on earthlink, and most of our helmet purchases come from users on cox.

The importance of this discovery is that we can focus on developing relationships with local providers to understand any potential service issues that might impact gameplay. We can explore prioritizing traffic from earthlink and cox, since they represent a significant share of our users. Further, we can observe host trends as a proxy for user behavior. For example, if most of our users move from earthlink to att because the service is more affordable, and att has reliability issues, we may see a decrease in user purchases. Recognizing this external cause would save us the time and effort of investigating whether the issue was platform-driven.
