Call of Duty World League: Search & Destroy Outcome Predictions

Used Logistic Regression, Random Forest, and XGBoost to predict the outcome of Search & Destroy games from the Call of Duty World League for the 2018 and 2019 seasons.

Overview

CWL Image

Growing up as an avid Call of Duty player, I was always curious about what factors led to a team winning or losing a match. Was it strictly based on the number of kills each player obtained? Was it who played the objective more? Or was it something different? Finally, after years of waiting, I decided that it was time to find my answers. Coupling my love for Call of Duty and my passion for data science, I began to investigate predicting the outcome of Search & Destroy games from the Call of Duty World League's 2018 and 2019 seasons.

Utilizing Python, I created a Logistic Regression binary classification model that provided insight into the significant factors that led teams to win Search and Destroy matches. Did you know that every time a player gets exactly two kills in a round, the team's odds of winning increase by 59%? Or that every time a team defuses the bomb, their odds of winning the match increase by 54%? What about when someone on the team commits suicide? The team's odds of winning the match decrease by a whopping 43%!

I also built an XGBoost and a Random Forest model to see how accurately I could predict a Search & Destroy match outcome. The XGBoost model was ~89% accurate when predicting Search & Destroy match outcomes on test data! This model found that one of the least important variables for predicting a team's win or loss is whether the team had a sneak defuse at any point during the match. Although sneak defuses are beneficial to a team's success, it is more impactful to eliminate every enemy in the round before defusing the bomb.

Project Goals

  1. Learn about essential factors that play into a team's outcome for Search & Destroy matches
  2. See how well I can predict a team's wins and losses for Search & Destroy matches

What did I do?

I used data from 17 different CWL tournaments spanning two years. If you are curious, you can find each dataset within this Activision repository hosted here. I excluded the data from the 2017 CWL Championships tournament because that set does not include all the Search & Destroy variables the other datasets have. The final dataset had 3,128 observations with 30 variables, covering 1,564 Search & Destroy matches in total. All variables are continuous; the only categorical variable in the final modeling data is the binary indicator for the match's outcome.

To reach the first goal of this project, I created a Logistic Regression model to learn about the crucial factors that can either help a team win or pull a team toward a loss. To reach the second goal, I used both Random Forest and XGBoost classification models to find the model that best predicts match outcomes.

How did I do it?

Logistic Regression

After joining the data, I first grouped the observations by match and team, then filtered for Search & Destroy games only. That way, the data contains one observation per team per match, covering both the win and the loss for every Search & Destroy match. I used a set of 14 variables for the model development process: Deaths, Assists, Headshots, Suicides, Hits, Bomb Plants, Bomb Defuses, Bomb Sneak Defuses, Snd Firstbloods, Snd 2-kill round, Snd 3-kill round, Snd 4-kill round, 2-piece, & 3-piece. If you are curious, you can find an explanation of each variable in the Activision repository linked above.
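To make that preparation step concrete, here is a minimal pandas sketch of the grouping and filtering. The file paths, column names ("mode", "match id", "win?", and so on), and aggregation choices are assumptions about the Activision data layout, not a copy of the project's actual code.

```python
import glob
import pandas as pd

# Stack the player-level rows from every tournament file.
# (File naming is illustrative; the Activision repo stores one CSV per event.)
frames = [pd.read_csv(path) for path in glob.glob("data/data-*.csv")]
players = pd.concat(frames, ignore_index=True)

# Keep only Search & Destroy games.
snd = players[players["mode"] == "Search & Destroy"]

# Roll player rows up to one observation per team per match, summing the
# counting stats and recording the win/loss label for that team.
team_match = (
    snd.groupby(["match id", "team"], as_index=False)
       .agg(win=("win?", lambda s: int(s.iloc[0] == "W")),
            assists=("assists", "sum"),
            headshots=("headshots", "sum"),
            suicides=("suicides", "sum"),
            defuses=("bomb defuses", "sum"),
            snd_2kill_round=("snd 2-kill round", "sum"),
            snd_3kill_round=("snd 3-kill round", "sum"),
            snd_4kill_round=("snd 4-kill round", "sum"))
)
```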

Since these models are used to classify wins and losses, I elected to use the Area Under the Receiver Operating Characteristic (AUROC) curve as the metric for determining the best model, because it balances the True Positive Rate against the False Positive Rate across classification thresholds. The Logistic Regression model with the highest AUROC value on training data used the following variables: Assists, Headshots, Suicides, Defuses, Snd 2-kill round, Snd 3-kill round, & Snd 4-kill round. This model was then used to predict the test data and produced the following AUROC curve:

Logistic AUROC Graph
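As a rough illustration of this step, the sketch below fits a logistic regression on the retained variables and scores it with AUROC on a held-out split. It assumes the `team_match` frame and column names from the earlier sketch, so it is an illustration rather than the project's exact code.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score, RocCurveDisplay
from sklearn.model_selection import train_test_split

# Variables retained by the best-scoring model (column names are illustrative).
features = ["assists", "headshots", "suicides", "defuses",
            "snd_2kill_round", "snd_3kill_round", "snd_4kill_round"]

X_train, X_test, y_train, y_test = train_test_split(
    team_match[features], team_match["win"],
    test_size=0.25, stratify=team_match["win"], random_state=42)

# Fit the model, then check its discrimination on the held-out data.
logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)
test_probs = logit.predict_proba(X_test)[:, 1]
print("Test AUROC:", roc_auc_score(y_test, test_probs))
RocCurveDisplay.from_estimator(logit, X_test, y_test)  # plots the ROC curve
```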

It is worth noting that this model was 75% accurate when predicting wins and losses on test data. Given the small number of variables used, I expected it to perform worse; evidently these few variables do an excellent job of separating wins from losses in Search & Destroy matches. You can find the actual values in the confusion matrix built by this model here.
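The accuracy figure and the confusion matrix come straight from comparing the model's class predictions with the true labels. Continuing the sketch above, the check could look like this (the 75% figure is the project's reported number, not something this snippet reproduces):

```python
from sklearn.metrics import accuracy_score, confusion_matrix

preds = logit.predict(X_test)           # default 0.5 probability threshold
print(confusion_matrix(y_test, preds))  # rows = actual, columns = predicted
print("Accuracy:", accuracy_score(y_test, preds))
```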

Random Forest & XGBoost

For the second goal of this project, I used both Random Forest and XGBoost classification models to see just how well the outcome of a match could be predicted. Neither of these algorithms shares Logistic Regression's assumptions, so I used the complete set of 14 variables for each technique. I first built both models without any hyperparameter tuning to establish a baseline for each algorithm. I then ran a grid search over each model's hyperparameters to find the best possible tune for the data.
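A minimal sketch of that workflow is below: default-parameter baselines followed by an AUROC-scored grid search for each algorithm. The column names, parameter grids, and cross-validation settings are assumptions for illustration (and they presume `team_match` has been aggregated to include all 14 columns), not the tuning ranges actually used in the project.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from xgboost import XGBClassifier

# Rebuild the split with the complete 14-variable set (illustrative names).
all_features = ["deaths", "assists", "headshots", "suicides", "hits",
                "bomb plants", "defuses", "sneak defuses", "snd firstbloods",
                "snd_2kill_round", "snd_3kill_round", "snd_4kill_round",
                "2-piece", "3-piece"]
X_train, X_test, y_train, y_test = train_test_split(
    team_match[all_features], team_match["win"],
    test_size=0.25, stratify=team_match["win"], random_state=42)

# Untuned baselines for both algorithms.
rf_base = RandomForestClassifier(random_state=42).fit(X_train, y_train)
xgb_base = XGBClassifier(eval_metric="logloss", random_state=42).fit(X_train, y_train)

# Grid search each algorithm, scoring candidate tunes by cross-validated AUROC.
rf_grid = GridSearchCV(
    RandomForestClassifier(random_state=42),
    {"n_estimators": [250, 500], "max_depth": [None, 5, 10],
     "min_samples_leaf": [1, 5]},
    scoring="roc_auc", cv=5).fit(X_train, y_train)

xgb_grid = GridSearchCV(
    XGBClassifier(eval_metric="logloss", random_state=42),
    {"n_estimators": [250, 500], "max_depth": [3, 5],
     "learning_rate": [0.05, 0.1]},
    scoring="roc_auc", cv=5).fit(X_train, y_train)

print("Random Forest best CV AUROC:", rf_grid.best_score_)
print("XGBoost best CV AUROC:", xgb_grid.best_score_)
```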

I found that the optimized XGBoost model had a higher AUROC value than the optimized Random Forest model on training data, so I used the XGBoost model to predict the test data. This model produced the following AUROC curve:

XGBoost AUROC Graph
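Scoring the tuned XGBoost model on the held-out set could then look like the following, again assuming the objects from the previous sketches:

```python
from sklearn.metrics import roc_auc_score, RocCurveDisplay

# Pull the best tune out of the grid search and score it on the test set.
best_xgb = xgb_grid.best_estimator_
test_probs = best_xgb.predict_proba(X_test)[:, 1]
print("Test AUROC:", roc_auc_score(y_test, test_probs))
RocCurveDisplay.from_estimator(best_xgb, X_test, y_test)
```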

As expected, this model did much better than the Logistic Regression for predicting match outcomes! This model is ~89% accurate when predicting wins and losses on test data. You can find the confusion matrix for this model here.

What did I find?

From the Logistic Regression model, I found that a team's odds of winning the entire match increase by ~5% every time someone gets a kill with a headshot and ~54% every time the bomb gets defused. A team's odds of winning also increase by 59% every time a player has exactly two kills in a round, ~115% every time a player has precisely three kills in a round, and ~121% every time a player has precisely four kills in a round. I also found that a team's odds of winning the entire match decrease by 43% every time a player commits suicide and (oddly enough) 0.34% every time a player receives an assist.
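These percentages follow directly from the logistic coefficients: a one-unit increase in a predictor multiplies the odds of winning by exp(beta), so the percent change in odds is exp(beta) - 1. Assuming the fitted `logit` model and `features` list from the earlier sketch, the conversion looks like this:

```python
import numpy as np
import pandas as pd

# Each logistic coefficient translates into a percent change in the odds of
# winning per one-unit increase in that predictor: 100 * (exp(beta) - 1).
odds_change = pd.Series(100 * (np.exp(logit.coef_[0]) - 1), index=features)
print(odds_change.round(1).sort_values(ascending=False))
```

For example, a defuse coefficient of roughly 0.43 gives exp(0.43) - 1 ≈ 0.54, which is the reported 54% boost in the odds of winning per defuse.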

I recommend that professional COD teams looking to raise their Search & Destroy win percentage recruit players who record a high number of bomb defuses and headshots in Search & Destroy games. If I were a coach, I would be looking to grab Arcitys, Zer0, Clayster, Rated, & Silly, five players with high counts of headshots and defuses in Search & Destroy matches.

If you are curious to learn about the essential variables in the XGBoost model, head over here!
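One way to inspect those importances yourself, assuming the objects from the sketches above, is to read them straight off the fitted booster (this is a generic importance ranking, not necessarily the exact importance measure used in the project):

```python
# Rank the 14 predictors by the tuned XGBoost model's feature importances.
importances = pd.Series(best_xgb.feature_importances_, index=X_test.columns)
print(importances.sort_values(ascending=False))
```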

Owner
Brett Vogelsang
M.S. Candidate at the Institute for Advanced Analytics at NC State University.